- Network Traffic Analysis and Packet Capture Fundamentals
- Understanding Packets and Packet Capture
- Levels of Network Visibility
- Visibility Across Packets
- Visibility Across Flows
- Visibility Across Logs
- Network Security Monitoring Use Cases
- Intrusion Detection and Post-Compromise Reconstruction
- Data Exfiltration Validation
- Command and Control Communication Analysis
- Malware Staging and Payload Reconstruction
- Insider Misuse and Policy Violations
- Data Collection Methods
- Malicious Traffic Patterns
- Beaconing Behavior
- Domain Flux Techniques
- Protocol Mimicry
- Exfiltration Patterns
- Network Monitoring Infrastructure
- Sensors and Data Collection
- Acquisition Methods
- Network Analysis Tools
- Wireshark for Packet Analysis
- Zeek for Network Security Monitoring
- Brim Security Analysis Platform
- Intrusion Detection with Suricata
- TLS Fingerprinting with JA3 and JA4
- Fileless Malware and Code Injection Techniques
- Understanding Fileless Malware
- Code Injection Methodology
- Process Hollowing
- Code Injection Implementation Example
- Code Injection Process Flow
- Windows API Foundation
- Memory Forensics with Volatility
- Memory Acquisition
- Volatility Framework Overview
- Understanding Modules and Handles
- System Profiling
- Process Enumeration
- Memory Structure Inspection
- Network Activity Analysis
- Malicious Behavior Detection
- Advanced Memory Analysis Tools
- Understanding Code Compilation
- MemProcFS for Virtual Filesystem Analysis
Network Traffic Analysis and Packet Capture Fundamentals
Understanding Packets and Packet Capture
A packet represents a small unit of data that travels across networks. Each packet consists of two main components: the payload, which contains the actual information being transmitted, and control data that includes source and destination addresses along with error-checking information. This structure allows large messages to be broken down into smaller, manageable pieces that can be transmitted efficiently across network infrastructure.
PCAP, which stands for packet capture, is a file format specifically designed to store data packets collected from network traffic. This format enables security analysts and network engineers to perform detailed analysis of network activity at a later time, making it invaluable for troubleshooting network issues and conducting cybersecurity investigations. The PCAP file acts as a comprehensive log of network communication, preserving raw data such as source and destination addresses, protocol information, and the actual content of packets. Tools like Wireshark and tcpdump are commonly used to create these packet capture files, which then serve as the foundation for monitoring network health, detecting security threats, and investigating incidents.
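As a quick illustration, tcpdump can both record a capture and read it back later for inspection; the interface name, file name, and filter here are illustrative:
tcpdump -i eth0 -w capture.pcap        # capture traffic seen on eth0 into a PCAP file
tcpdump -r capture.pcap -nn 'port 53'  # read the same file back, showing only DNS traffic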
Levels of Network Visibility
Visibility Across Packets
Packet capture provides the most comprehensive and granular view of network traffic available. This approach captures every single byte transmitted between hosts on the network, giving analysts the ability to inspect payloads in their entirety, examine protocol details at the deepest level, and observe attacker actions with precision. While this level of visibility is the most thorough, it comes with significant storage and processing requirements that can be challenging to manage at scale.
Visibility Across Flows
Flow data takes a different approach by summarizing network traffic based on key identifiers such as source and destination addresses, ports, and volume metrics, all without storing complete packet payloads. This method enables analysts to quickly identify patterns that may indicate malicious activity, such as network scanning behavior, data exfiltration attempts, or unusual communication patterns between hosts. Flow data is considerably lighter in terms of storage requirements compared to full packet capture, making it ideal for large-scale monitoring across extensive network infrastructures with many devices.
Visibility Across Logs
Log-based visibility provides event-focused insights generated by security devices and network services throughout the infrastructure. These logs capture information about blocked connections, visited domains, DNS lookup requests, triggered security signatures, and policy enforcement actions. This type of visibility delivers contextual security insights that form the backbone of detection systems, correlation engines, and threat hunting operations. Logs complement other visibility methods by providing semantic context about security events and user behavior.
Network Security Monitoring Use Cases
Intrusion Detection and Post-Compromise Reconstruction
Intrusion detection and post-compromise reconstruction efforts focus on identifying indicators that suggest a system or network has been breached, followed by comprehensive analysis to understand the full scope of the attack. Security analysts trace the movements of attackers through the environment, catalog the tools and techniques they employed, and document any modifications made to systems after gaining initial access. This detailed reconstruction allows the organization to build a complete timeline of the intrusion and determine the attacker's ultimate objectives. By methodically piecing together these activities, organizations can identify security weaknesses that were exploited, enhance their detection capabilities, and implement measures to prevent similar incidents in the future. The primary goal remains detecting compromises as early as possible while maintaining the ability to accurately reconstruct what occurred to strengthen the overall security posture.
Data Exfiltration Validation
Data exfiltration validation involves examining network traffic and system logs to determine whether sensitive information has been removed from the network without authorization. This analysis identifies the destinations where data was sent, measures the volumes of information transferred, and determines the methods attackers used for the transfer. Analysts leverage multiple data sources including traffic flows, DNS logs, proxy records, and other monitoring tools to identify suspicious data transfers and confirm the extent of any data loss. Understanding the specific details of how data was exfiltrated, where it was sent, and how much information was compromised helps quantify the impact of the incident and provides critical guidance for remediation efforts. This validation process ensures that the organization develops a clear and accurate picture of potential breaches and can implement appropriate protective measures to prevent future occurrences.
Command and Control Communication Analysis
Command and control, commonly abbreviated as C2, refers to the process through which attackers establish and maintain communication channels between their infrastructure and compromised systems within a target network. These channels allow attackers to issue commands and control malicious activities remotely.
C2 mapping focuses specifically on identifying how compromised systems communicate with attacker-controlled infrastructure, often through periodic beaconing or the misuse of legitimate protocols to avoid detection. Security analysts observe traffic patterns looking for unusual port usage, repeated connections to external hosts, or other anomalies that might indicate hidden communication channels and remote control activity. Successfully mapping these C2 communications helps reveal the attacker's infrastructure, provides insight into their intentions, and exposes ongoing malicious operations within the network. Understanding these communication patterns enables security teams to disrupt the malicious channels and effectively mitigate the risk posed by active threats in the environment.
Malware Staging and Payload Reconstruction
Malware staging and payload reconstruction involve detailed analysis of how attackers prepare and deploy malicious software within a compromised environment. Security analysts collect various artifacts including log files, network packet captures, and file system evidence to piece together the complete picture of staged files and scripts. This reconstruction process reveals the malware's full execution chain and exposes its capabilities. By reconstructing payloads in this manner, organizations gain critical understanding of infection pathways, attacker objectives, and the potential impacts on affected systems. This knowledge proves essential for developing effective countermeasures, improving detection rules to catch similar threats, and implementing preventive controls to stop future infections.
Insider Misuse and Policy Violations
Monitoring for insider misuse, policy violations, and gathering compliance evidence focuses on detecting improper or unauthorized actions taken by individuals within the organization. Security teams analyze system logs, access patterns, and data transfer records to identify attempts to bypass security controls, copy sensitive information, or violate organizational policies. These investigations provide crucial evidence for internal audits, regulatory compliance requirements, and disciplinary actions when necessary, while simultaneously highlighting areas where security controls need improvement. By correlating these activities across multiple data sources, organizations can effectively enforce security policies, mitigate insider threats, and maintain accountability throughout their operations.
Data Collection Methods
Full packet capture involves collecting every bit of data transmitted over the network, capturing both payload content and packet headers in their entirety. This approach provides the richest level of detail available for analysis, enabling security teams to investigate exactly what information was sent and received during any communication session. However, this comprehensive visibility comes with substantial storage requirements due to the immense volume of information that must be retained.
Flow records take a different approach by summarizing network traffic using key identifiers such as source and destination IP addresses, port numbers, and protocol types, along with counters and timing information. These records are significantly more lightweight and scalable compared to full packet capture, making them ideal for continuous monitoring without consuming excessive storage or processing resources.
Application logs generated by systems such as proxies, DNS servers, mail systems, and web application firewalls offer semantic insight into how users and applications interact with network resources. These logs help analysts detect suspicious behavior, identify misconfigurations, and understand the context of network activity.
Intrusion Detection Systems and Network Security Monitoring platforms generate summarized traffic records along with intrusion alerts. These systems provide real-time detection capabilities for potential threats and help blue teams prioritize their investigations efficiently based on the severity and context of detected events.
Malicious Traffic Patterns
Beaconing Behavior
Beaconing refers to a technique where malware makes periodic, low-and-slow callbacks to its command-and-control server. These callbacks typically occur at regular intervals specifically designed to blend into normal background traffic patterns, making them difficult to detect. These subtle check-ins allow attackers to maintain persistent access to compromised systems without triggering obvious alerts in security monitoring systems.
Domain Flux Techniques
Domain flux is a sophisticated evasion technique where malware rapidly cycles through large numbers of domain names, often using very short Time-To-Live values in DNS records. This constant cycling makes it extremely difficult for defenders to block malicious infrastructure or successfully take down command-and-control servers, as the domains are constantly changing.
Protocol Mimicry
Protocol mimicry occurs when malware disguises its communications to appear like legitimate protocol traffic. Attackers may make their malicious traffic look like normal HTTPS connections, DNS queries, or cloud service communications, which makes it significantly less likely that security tools will flag the traffic as suspicious. This technique allows malware to operate under the cover of legitimate network activity.
Exfiltration Patterns
Exfiltration patterns typically involve specific behaviors that can indicate data is being quietly removed from the environment. These patterns include unusually long-lived network sessions, data transfers of unexpected sizes, or file transfers occurring during off-hours when legitimate activity would be minimal. Recognizing these patterns helps analysts identify when sensitive data is being stolen from the organization.
Network Monitoring Infrastructure
Sensors and Data Collection
Sensors in network devices are specialized components or software modules embedded within infrastructure such as routers, switches, firewalls, or dedicated monitoring appliances. These sensors continuously capture and analyze network traffic as it passes through the device. They collect various types of information including packet headers, session metadata, and statistical summaries, often processing this data in real time to provide immediate visibility into network activity.
Acquisition Methods
Acquisition in network security refers to the systematic process of collecting data for subsequent analysis and investigation. This process often begins with packet captures using tools like tcpdump or dumpcap, or through dedicated sensors and network TAPs that provide granular visibility into all network traffic. Flow records such as NetFlow, IPFIX, or sFlow offer a lighter-weight alternative by summarizing traffic patterns without the need to store complete packet contents. Network logs from various sources including proxies, firewalls, DNS servers, and VPN concentrators add another critical layer of context by revealing user activity and application behavior across the environment. In cloud environments, telemetry sources such as AWS VPC Flow Logs, Azure NSG logs, and GCP Packet Mirroring provide comparable visibility into cloud network traffic. Additionally, IDS and NSM sensors produce session summaries and alerts that help analysts efficiently identify anomalies and potential threats.
Network Analysis Tools
Wireshark for Packet Analysis
Wireshark serves as a comprehensive packet analyzer that captures and displays every packet traversing the network in real time. This tool allows analysts to inspect packet payloads, examine headers in detail, and understand protocol behavior at the most granular level. Wireshark excels in situations requiring deep, detailed investigations where understanding the exact content and sequence of network communications is critical to solving the problem at hand.
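For example, a display filter can narrow the view to HTTP traffic involving a single host, and the companion command-line tool tshark can extract the same fields from a saved capture; the address and file name below are illustrative:
ip.addr == 10.0.0.15 && http
tshark -r capture.pcap -Y "http.request" -T fields -e ip.src -e http.host -e http.request.uri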
Zeek for Network Security Monitoring
Zeek, in contrast, functions as a network security monitoring platform that passively observes traffic and produces high-level summaries, connection logs, and behavioral records. While Zeek does not store every packet by default like Wireshark does, it provides semantic insights and anomaly detection capabilities that make it significantly easier for blue teams to identify suspicious patterns and potential threats over extended periods of time.
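For example, running Zeek directly against a capture file produces its familiar set of logs in the working directory (file name illustrative):
zeek -r capture.pcap   # generates conn.log, dns.log, http.log, ssl.log, and others, depending on the traffic present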
Brim Security Analysis Platform
Brim Security, now known as Zui, is a desktop application that enables analysts to load PCAP files and immediately begin exploring network traffic with fast search and pivoting capabilities. The tool automatically converts raw packets into Zeek-style structured logs, which dramatically accelerates the analysis process. Analysts can also apply Zeek rules and scripts to enrich the data or detect suspicious activity directly from captured traffic.
Common queries in Brim include showing all Zeek streams, all DNS queries, and all HTTP requests, each counting the relevant field and sorting the results in reverse order:
Show all Zeek streams → count() by _path | sort -r
Show all DNS queries → _path=="dns" | count() by query | sort -r
Show all HTTP requests → _path=="http" | count() by uri | sort -r
For practical examples, you can reference detailed CTF writeups such as the CovertByte challenge from YUCTF 2025, available at https://sameerfakhoury.com/ctf-writeups/ctf-categories/yuctf-2025-writeups/covertbyte
Intrusion Detection with Suricata
Suricata is an open-source Intrusion Detection System, Intrusion Prevention System, and Network Security Monitoring engine designed for analyzing network traffic to detect and stop threats in real time. Developed by the non-profit Open Information Security Foundation, Suricata is a high-performance tool capable of identifying, blocking, and assessing sophisticated attacks by using rules, signatures, and scripts.
A signature pattern represents a component inside a rule that matches specific strings or patterns within network traffic. For example, a simple content match for the string "malware" in HTTP traffic would be written as:
content:"malware";A complete rule that incorporates this signature along with additional logic and context might look like:
alert http any any -> any any (
    msg:"Detected malware keyword";
    content:"malware";
    sid:100001;
    rev:1;
)
A practical Suricata rule example demonstrates detection of a potential SSH scan:
alert tcp $CTI_MAL_DOMAIN any -> $HOME_NET 22 (msg:"Potential SSH Scan"; flags:S; classtype:attempted-recon; sid:1000001; rev:1;)
This rule alerts on SYN packets directed to port 22 of the home network, which can indicate SSH scanning activity. The rule consists of an action component, in this case "alert"; the header section, which defines the protocol along with source and destination IP addresses and ports; and the options section enclosed in parentheses, which includes elements such as the message and the TCP flags to match.
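For instance, such a rule can be tested offline by replaying a capture file against a local rule file; the file and directory names are illustrative:
suricata -r capture.pcap -S local.rules -l ./logs   # alerts typically appear in fast.log and eve.json under ./logs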
TLS Fingerprinting with JA3 and JA4
The fundamental concept behind JA3 and JA4 is to uniquely identify TLS clients and servers based on the specific characteristics of their handshake process, rather than attempting to analyze the encrypted content itself. JA3 creates a fingerprint derived from client-side handshake parameters, including details such as supported cipher suites, extensions, and elliptic curves used in the negotiation, while its companion JA3S fingerprints the server-side handshake response. JA4 is a newer family of fingerprints that refines and extends this approach for both client and server traffic. These fingerprints enable security teams to detect unusual or malicious software communicating over encrypted channels, even when the traffic superficially appears to be normal HTTPS communication. By comparing these fingerprints against databases of known malicious or suspicious fingerprints, analysts can quickly flag suspicious activity or identify unauthorized applications operating within the network. Overall, JA3 and JA4 provide a powerful mechanism to monitor and classify encrypted traffic without requiring decryption.
In practical terms, JA3 and JA4 extract metadata from the TLS handshake process, including elements like cipher suites, extensions, and other negotiation parameters, and transform them into hash values. These hash values are then compared against databases of known fingerprints, which often include signatures of malware families, suspicious tools, or legitimate applications. If the computed hash matches a known malicious fingerprint in the database, analysts can detect malicious activity without ever needing to decrypt the traffic. Essentially, you are hashing the structural characteristics of the request and response, not the actual content, and using these signatures to identify unusual or dangerous traffic patterns.
An example of JA3 fingerprint data in JSON format might look like:
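The field names and hash values below are illustrative placeholders rather than output from any specific tool:
{
  "timestamp": "2025-01-15T10:23:41Z",
  "src_ip": "10.0.0.15",
  "dest_ip": "103.21.244.15",
  "dest_port": 443,
  "ja3": "e7d705a3286e19ea42f587b344ee6865",
  "ja3_string": "771,4865-4866-4867-49195-49196,0-23-65281-10-11,29-23-24,0"
}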
For reference databases of known malicious JA3 fingerprints, analysts can consult resources such as the SSL Blacklist database at https://sslbl.abuse.ch/ja3-fingerprints/
Fileless Malware and Code Injection Techniques
Understanding Fileless Malware
Fileless malware represents malicious code that operates directly within a computer's memory (RAM) rather than being stored on the hard drive as traditional malware would be. This technique makes detection significantly more difficult for traditional antivirus software, which primarily scans files stored on disk.
Code Injection Methodology
Code injection is an advanced technique where an attacker forces malicious code into another running process, causing it to execute within that process's security context. This approach is often used to bypass security controls or blend malicious activity in with legitimate process behavior. The technique involves writing malicious payloads directly into a process's memory space and then redirecting execution flow to that injected code.
Process Hollowing
Process hollowing represents a more advanced variation of code injection. In this technique, the attacker creates a legitimate process in a suspended state, removes the original code from the process's memory space, and replaces it entirely with malicious code before resuming execution. From the perspective of the operating system and monitoring tools, the process still appears to be legitimate, even though it is now running the attacker's payload. Both code injection and process hollowing are commonly employed by sophisticated malware to evade detection, maintain persistence on compromised systems, and execute harmful actions while masquerading as trusted process names.
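A simplified pseudocode sketch of the hollowing workflow might look like the following; the choice of svchost.exe as the host process is purely illustrative:
# PSEUDO: Process hollowing workflow
pi  = CreateProcess("C:\Windows\System32\svchost.exe", CREATE_SUSPENDED)
NtUnmapViewOfSection(pi.hProcess, original_image_base)
mem = VirtualAllocEx(pi.hProcess, size_of(malicious_image))
WriteProcessMemory(pi.hProcess, mem, malicious_image)
SetThreadContext(pi.hThread, entry_point = mem)
ResumeThread(pi.hThread)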
Code Injection Implementation Example
The following C code demonstrates the technical implementation of code injection:
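What follows is a minimal, illustrative sketch; the one-byte placeholder payload and the command-line PID argument are demonstration assumptions rather than a real payload:
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder payload: a single INT3 breakpoint byte stands in for real shellcode. */
unsigned char payload[] = { 0xCC };

int main(int argc, char *argv[])
{
    if (argc < 2) {
        printf("Usage: %s <target PID>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)atoi(argv[1]);

    /* 1. Open the target process with full access rights. */
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (hProc == NULL) return 1;

    /* 2. Allocate executable memory inside the target process. */
    LPVOID mem = VirtualAllocEx(hProc, NULL, sizeof(payload),
                                MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (mem == NULL) { CloseHandle(hProc); return 1; }

    /* 3. Copy the payload into the allocated region. */
    WriteProcessMemory(hProc, mem, payload, sizeof(payload), NULL);

    /* 4. Start a remote thread whose entry point is the injected payload. */
    HANDLE hThread = CreateRemoteThread(hProc, NULL, 0,
                                        (LPTHREAD_START_ROUTINE)mem, NULL, 0, NULL);
    if (hThread != NULL) {
        WaitForSingleObject(hThread, INFINITE);
        CloseHandle(hThread);
    }
    CloseHandle(hProc);
    return 0;
}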
Code Injection Process Flow
The attacker begins by opening a target process with PROCESS_ALL_ACCESS permissions, which grants the ability to manipulate its memory and execution. Next, they allocate memory inside that process using the VirtualAllocEx function, creating space where the malicious payload can be stored. The attacker then writes their malicious code into the allocated memory using WriteProcessMemory, effectively injecting their payload into the target process. Finally, they create a new thread within the target process using CreateRemoteThread, which begins executing the injected code while appearing to be part of the legitimate process.
A simplified pseudocode representation of this workflow would be:
# PSEUDO: Code injection workflow
hProc = OpenProcess(TARGET_PID)
mem = VirtualAllocEx(hProc, size_of(payload))
WriteProcessMemory(hProc, mem, payload)
CreateRemoteThread(hProc, mem)
Windows API Foundation
The Windows API is a collection of C functions built into Windows Dynamic Link Libraries such as kernel32.dll. These functions provide direct access to operating system features and represent the lowest-level interface available for interacting with Windows. The API is written in C and uses C-style function calls, pointers, and manual memory management, giving developers and attackers alike powerful control over system behavior.
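As a trivial illustration of this C calling style, the following program obtains a pseudo-handle to its own process and prints its process ID using functions exported by kernel32.dll:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* GetCurrentProcess() returns a pseudo-handle to the calling process;
       GetCurrentProcessId() returns its PID. Both are exported by kernel32.dll. */
    HANDLE hSelf = GetCurrentProcess();
    DWORD  pid   = GetCurrentProcessId();
    printf("Pseudo-handle: %p, PID: %lu\n", (void *)hSelf, pid);
    return 0;
}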
Memory Forensics with Volatility
Memory Acquisition
Memory acquisition tools like WinPmem enable investigators to capture the entire contents of a system's RAM for forensic analysis. The WinPmem tool can be downloaded from https://github.com/Velocidex/WinPmem/releases/tag/v4.0.rc1. The memory capture process is straightforward, running the executable with an output file name:
.\winpmem_mini_x64_rc2.exe mem.raw
Volatility Framework Overview
Volatility is a powerful open-source framework specifically designed for memory forensics. This tool enables analysts to extract and examine data from a computer's RAM, revealing critical information about system state at the time of capture. It helps blue teams investigate security incidents by exposing running processes, loaded DLLs, active network connections, open files, and even hidden or injected code that might not appear anywhere on disk. By analyzing memory snapshots, Volatility can uncover artifacts left by malware, rootkits, or suspicious activity that traditional disk-based forensics would completely miss. The framework supports multiple operating systems and provides an extensive plugin ecosystem for different types of investigations, making it an extremely versatile tool for deep system analysis.
For comprehensive reference materials, analysts can consult resources such as the Volatility cheatsheet available at https://blog.onfvp.com/post/volatility-cheatsheet/ and https://hacktivity.fr/volatility-2-windows-cheatsheet/
Understanding Modules and Handles
Modules are the executable files or Dynamic Link Libraries that a process loads into its memory space. These modules define the code that the process can execute as part of its normal functionality. Handles, on the other hand, are references that a process uses to interact with various system resources including files, registry keys, or even other processes. By reviewing handles during an investigation, analysts can uncover unexpected file access patterns, persistence mechanisms, or inter-process interactions that may indicate malicious behavior is occurring on the system.
System Profiling
Profiling the system involves identifying the operating system version and build information from the memory dump. This is accomplished using commands like:
vol3 -f memdump.raw windows.info
Variable Value
------------ -------------------------------------------
Kernel DTB 0x1aa000
KDBG 0xf802ac120
Machine Type x64
Operating Sys Windows 10 Pro (10.0.19045)
ASLR Enabled True
PAE Enabled True
Process Enumeration
Enumerating active components including processes, process trees, and loaded modules provides a comprehensive view of what was running in memory. The process list can be obtained with:
vol3 -f memdump.raw windows.pslist
PID PPID ImageFileName Offset(V)
---- ---- --------------- ----------
4 0 System 0x8f123a00
412 4 smss.exe 0x8fa21000
624 412 csrss.exe 0x8fc90000
980 624 explorer.exe 0x9012a000
2332 980 powershell.exe 0x91d41000
The process tree view shows parent-child relationships:
vol3 -f memdump.raw windows.pstree
System (PID 4)
└── smss.exe (412)
└── csrss.exe (624)
└── explorer.exe (980)
└── powershell.exe (2332)
Memory Structure Inspection
Inspecting memory structures including DLLs, handles, and threads reveals detailed information about process behavior. The DLL list for a specific process can be examined:
vol3 -f memdump.raw windows.dlllist --pid 2332
powershell.exe (PID 2332)
0x7ff9d2100000 C:\Windows\System32\kernel32.dll
0x7ff9d3100000 C:\Windows\System32\kernelbase.dll
0x7ffa00230000 C:\Users\User\AppData\Local\Temp\evil.dll <== Suspicious
Handles for a process can be examined to understand what resources it was accessing:
vol3 -f memdump.raw windows.handles --pid 2332
Type Handle Details
------- ------ ------------------------------------
File 0x50 \Device\HarddiskVolume2\Users\User\script.ps1
Process 0x84 PID 980 (explorer.exe)
Key 0xa0 HKCU\Software\Microsoft\Windows\CurrentVersion\Run
Network Activity Analysis
Analyzing network activity from memory reveals active connections and listening sockets at the time of capture:
vol3 -f memdump.raw windows.netscan
Proto Local Address Foreign Address PID Owner
----- ------------------ ------------------- --- -----------
TCP 10.0.0.15:49722 103.21.244.15:443 2332 powershell.exe <== Suspicious outbound TLS
UDP 10.0.0.15:5353 224.0.0.251:5353 980 explorer.exe
Malicious Behavior Detection
Detecting malicious behavior involves identifying hidden code, injected memory, or suspicious memory regions with read-write-execute permissions:
vol3 -f memdump.raw windows.malfind
Process: powershell.exe (2332)
Address: 0x0000024f20a00000
Protection: PAGE_EXECUTE_READWRITE
Data:
MZ 90 00 03 00 00 00 04 00 00 00 … <== PE header in memory
Possible injected code detected.
Advanced Memory Analysis Tools
Understanding Code Compilation
Code compilation is the process of translating human-readable source code written in a high-level programming language such as C++ or Java into a lower-level language, typically machine code, that a computer's processor can directly execute. This transformation is essential for converting programmer-friendly code into instructions that hardware can actually process.
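For instance, a single C source file can be compiled into a native executable with a compiler such as GCC; the file names are illustrative:
gcc -o hello hello.c   # translate hello.c into the machine-code executable "hello"
./hello                # run the compiled binary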
MemProcFS for Virtual Filesystem Analysis
MemProcFS is a powerful tool that mounts a memory dump as a virtual filesystem, enabling analysts to navigate the contents of RAM as if it were a normal directory structure on disk. This approach makes it significantly easier to explore processes, threads, handles, and memory regions without requiring deep command-line forensics knowledge or extensive experience with complex memory analysis commands. YARA rules complement MemProcFS by scanning memory for known malware signatures, specific patterns, or behavioral traits using custom rule sets, allowing analysts to quickly identify known threats within memory dumps.
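As an illustrative workflow (the paths, drive letter, process ID, and rule contents are examples only), a dump might be mounted with MemProcFS and then swept with a simple YARA rule:
MemProcFS.exe -device C:\dumps\memdump.raw     # mounts the dump as a virtual filesystem (M:\ by default on Windows)
yara64.exe -r injected_pe.yar M:\pid\2332\     # recursively scan one process's memory view with a rule file

// injected_pe.yar -- flag scanned regions that begin with a PE (MZ) header
rule Injected_PE_Header
{
    strings:
        $mz = { 4D 5A 90 00 }
    condition:
        $mz at 0
}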