- DevOps
- DevSecOps
- Understanding SAST and DAST
- DevSecOps Lifecycle Stages
- Planning
- Coding
- Building
- Testing
- Deployment
- Operations
- Monitoring & Feedback
- Why Automate SCM
- Core Automation Concepts
- Infrastructure as Code
- Declarative Model
- Idempotency
- Change Control
- Putting It All Together
- Terraform
- SCM Tools, Capabilities & Architecture
- Understanding CI/CD Pipelines
- Lab One - Git and GitHub Fundamentals
- Basic Git Commands
- GitHub Integration
- Professional CI/CD Pipeline Implementation
- Ansible
- Puppet
- Automation Architecture
- Lifecycle of Automated SCM
- Policy as Code & Compliance Automation
- Understanding Cloud Storage Types
- Policy as Code Example
- Drift Detection and Remediation
- Patch Management Overview & Phases
- Patch Management Lifecycle
- Risk-Based Approach and Automation
DevOps
DevOps is a cultural philosophy, a set of practices, and a collection of tools that integrate and automate software development (Dev) and IT operations (Ops). It aims to deliver applications faster and more reliably by breaking down silos, fostering collaboration, and automating workflows from code to deployment.
DevSecOps
DevSecOps is the practice of integrating security into every stage of the software development and operations lifecycle. It combines development (Dev), security (Sec), and operations (Ops) to ensure applications are secure by design. By automating security checks and fostering collaboration, DevSecOps reduces vulnerabilities and accelerates safe software delivery. Instead of adding security at the end, DevSecOps embeds it throughout the planning, coding, building, testing, deployment, and operations stages. Teams use tools like SAST, DAST, and CI/CD pipelines to catch issues early. This approach enhances compliance, minimizes risks, and maintains continuous delivery. Organizations adopting DevSecOps achieve faster release cycles without compromising security. Continuous monitoring, feedback, and automated remediation ensure applications stay secure in dynamic environments. It bridges gaps between developers, security teams, and operations for a proactive security culture.
Understanding SAST and DAST
SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) are two different methods for finding security vulnerabilities in software. SAST analyzes an application's source code without running it, identifying issues like coding errors and potential flaws early in the development process. DAST, conversely, tests the running application from the outside, simulating attacks to find runtime vulnerabilities and configuration errors.
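For example, in a CI pipeline the two checks can run as separate steps. The sketch below is only an illustration: it assumes Semgrep as the SAST tool and the OWASP ZAP baseline script (usually run from the ZAP Docker image) as the DAST tool, and the staging URL is a placeholder.
# Sketch of two pipeline steps: SAST scans the code itself, DAST probes the running app.
- name: SAST - analyze source code with Semgrep
  run: semgrep --config auto .                           # finds flaws without executing the code

- name: DAST - baseline scan of the running application
  run: zap-baseline.py -t https://staging.example.com    # placeholder URL; simulates attacks at runtime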
DevSecOps Lifecycle Stages
Planning
In this stage, teams define project requirements, security policies, and compliance goals. For example, creating a threat model for a web application or defining GDPR compliance requirements. Planning ensures security is integrated from the very beginning, not retrofitted later.
Coding
Developers write application code following secure coding standards. For instance, using OWASP guidelines and input validation to prevent SQL injection. Peer reviews and automated scans with tools like SonarQube help prevent insecure code from entering the pipeline.
Building
Code is compiled and packaged into deployable artifacts. Build tools and CI services like Maven, Jenkins, or GitHub Actions can scan dependencies with Snyk for known vulnerabilities. Continuous integration ensures that any security issues are caught early in the build process.
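As a hedged sketch, a build job might include a dependency scan step like this; the Maven command, the Snyk CLI being present on the runner, and the SNYK_TOKEN secret name are assumptions about the setup.
# Sketch of build-stage steps: package the artifact, then scan its dependencies.
- name: Build the application package
  run: mvn -B package                      # example build command for a Maven project

- name: Check dependencies for known vulnerabilities
  run: snyk test                           # assumes the Snyk CLI is installed on the runner
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}  # assumed secret holding the Snyk API token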
Testing
Automated and manual security tests, including dynamic application testing and penetration tests, are conducted. For example, running OWASP ZAP scans or Burp Suite for vulnerabilities. This stage ensures that only secure, reliable code moves forward.
Deployment
Applications are deployed to staging or production environments using automated pipelines. Security controls such as Docker container hardening or Kubernetes RBAC are applied. Continuous monitoring ensures deployments do not introduce new risks.
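For instance, Kubernetes RBAC can restrict what the deployment pipeline's service account is allowed to do. The sketch below grants read-only access to pods in a single namespace; the namespace, role name, and service account are made-up examples.
# Minimal RBAC sketch: a Role allowing only read access to pods in the "web" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to an assumed service account used by the CI/CD pipeline.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: web
  name: pipeline-pod-reader
subjects:
  - kind: ServiceAccount
    name: deploy-bot          # hypothetical service account
    namespace: web
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io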
Operations
The running application is continuously monitored for performance and security threats. For example, using Splunk or Prometheus + Grafana to track logs and anomalies. Incident response procedures like automated AWS GuardDuty alerts maintain rapid threat detection and mitigation.
Monitoring & Feedback
Feedback from operations is used to improve future development and security practices. Security metrics and alerts from tools like Elastic Security or Azure Sentinel inform teams about vulnerabilities. Continuous learning ensures the DevSecOps cycle evolves to counter emerging threats.
Why Automate SCM
Configuration drift is the divergence of IT system settings (servers, networks, apps) from their original, intended baseline, caused by undocumented manual changes, updates, or errors. It creates security gaps, performance issues, and compliance failures, and is typically addressed with Infrastructure as Code (IaC), automation, and regular audits to maintain consistency.
Automating SCM is important today because modern IT environments often have hundreds or even thousands of servers, user devices, and cloud virtual machines, which makes manual management extremely difficult and risky. When administrators apply changes manually, systems slowly become different from each other over time, which is known as configuration drift, and this can lead to weak security and unstable performance.
Automation solves this by making sure every change follows the same rules every time, is fully tracked, and can be quickly rolled back if something goes wrong. For example, if a company pushes a security update automatically to all servers and one update causes an issue, the system can instantly revert to the previous safe version. Human mistakes are also a major reason for security breaches, such as forgetting to close a port or using weak permissions, and this is a well-known risk highlighted by security research. By automating these tasks, organizations reduce operational cost, lower staff burnout, and replace repetitive manual work with scheduled, rule-based actions that run consistently without fatigue.
Core Automation Concepts
Infrastructure as Code
Infrastructure as Code means managing servers, networks, and system settings using code instead of manual setup. These configurations are stored in version control systems like Git, making changes trackable and reversible. For example, you can deploy a full secure server environment with one automated script instead of configuring it by hand.
Declarative Model
The declarative model focuses on describing the final desired state of a system rather than the steps to reach it. You simply state what you want, such as "this service must always be running," and the tool ensures that state is enforced. This reduces errors and makes security configurations more consistent across all systems.
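A minimal Ansible-style sketch of that idea, using Nginx as an assumed example service: the tasks state only the desired end state, not the commands needed to reach it.
# Declarative sketch: describe the desired state and let the tool enforce it.
- hosts: webservers
  become: yes
  tasks:
    - name: Nginx must be installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Nginx must always be running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: yes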
Idempotency
Idempotency means that running the same automation task multiple times always produces the same secure result. It prevents duplicate users, repeated firewall rules, or broken configurations caused by re-execution. For example, enforcing password policies repeatedly will never weaken or corrupt the system.
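As a hedged illustration, re-running the tasks below never creates a duplicate account or a second copy of the firewall rule; on a second run Ansible simply reports "ok" instead of "changed" because the system is already in the desired state (the account name and rule are examples only).
# Idempotent sketch: safe to run any number of times.
- name: Ensure the audit user exists (created once, never duplicated)
  ansible.builtin.user:
    name: auditor            # hypothetical account name
    state: present

- name: Ensure SSH is allowed through the firewall (rule added only if missing)
  community.general.ufw:
    rule: allow
    port: "22"
    proto: tcp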
Change Control
Change control ensures that all configuration updates go through an approval and review process before being applied. This prevents unauthorized or risky changes from reaching production systems. For example, a security rule must be reviewed by a senior engineer before it is merged and deployed.
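One hedged way to support this process is to run automated validation on every pull request, so the reviewer can see whether a proposed change is well formed before approving the merge; the workflow below assumes the configuration is written in Terraform and that the Terraform CLI is available on the runner.
# Sketch: configuration changes are validated on every pull request before a reviewer approves the merge.
name: Review configuration changes
on:
  pull_request:
    branches:
      - main
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the proposed change
        uses: actions/checkout@v3
      - name: Check formatting and validate the configuration
        run: |
          terraform init -backend=false
          terraform fmt -check
          terraform validate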
Putting It All Together
Core automation concepts in SCM are built around treating system configurations the same way developers treat software code, which is known as Infrastructure as Code, where everything is stored in version control so changes can be tracked and rolled back easily. Instead of writing step-by-step instructions on how to configure a system, automation uses a declarative model where you simply define what the final secure state should look like, such as saying a firewall port must be closed rather than describing every command to close it. Idempotency ensures that if you run the same automation task multiple times, the system always ends up in the same secure state without causing new issues, for example repeatedly enforcing correct user permissions without duplicating users. Change control adds a strong security layer by requiring all configuration updates to go through reviews and approvals before being applied, which prevents risky or unauthorized changes and ensures accountability across the organization.
Terraform
Terraform is an Infrastructure as Code tool used to define, provision, and manage servers and cloud resources using code instead of manual setup. It allows you to create, change, version, and destroy infrastructure safely and automatically across multiple cloud providers.
provider "aws" {
region = "us-east-1"
}
resource "aws_instance" "secure_server" {
ami = "ami-0abcdef1234567890"
instance_type = "t2.micro"
tags = {
Name = "Secure-Server"
}
}
This code means that your infrastructure is now defined as a file instead of manual steps. When you run Terraform, it automatically creates the virtual machine exactly as defined, and if you delete it, Terraform can recreate it the same way every time. This ensures consistency, prevents human error, and allows full tracking and rollback just like software code.
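In practice, a definition like this is applied with the standard Terraform workflow: terraform init downloads the provider, terraform plan previews the change, terraform apply creates the instance, and terraform destroy removes it again, with every step recorded and repeatable.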
SCM Tools, Capabilities & Architecture
The SCM ecosystem includes many different tools that vary in the language they are written in and configured with, whether they rely on agents installed on systems or work agentless, and how they are deployed across cloud, on-prem, or hybrid environments. Despite these technical differences, they all share the same main goal: continuously maintaining the desired, secure configuration state across the entire infrastructure so systems do not drift into insecure setups. Integration with CI/CD pipelines and SIEM platforms is also essential because it allows configuration changes to be automatically tested, deployed, monitored, and correlated with security events; for example, when a new hardened server configuration is pushed through the pipeline, any suspicious change is instantly logged and monitored in the SIEM for security visibility.
Understanding CI/CD Pipelines
A CI/CD pipeline is an automated process that helps teams continuously build, test, and deploy code or configurations without manual intervention. Continuous Integration means that every change is automatically tested and verified when it is added, while Continuous Delivery or Deployment means the approved changes are automatically released to systems. For example, when a security engineer updates a firewall rule in a configuration file, the pipeline automatically checks the syntax, tests it in a staging environment, and then deploys it to production if it passes. This reduces human error, speeds up updates, and ensures consistent and secure changes across all systems. In modern SCM and DevOps environments, CI/CD pipelines are the backbone that connects code changes directly to safe and controlled deployments.
Lab One - Git and GitHub Fundamentals
Git and GitHub are essential for code versioning, collaboration, and maintaining code integrity.
Git is a tool (CLI or GUI) that acts like a "memory card" for your code. It lets you save versions of your project, track changes, and revert to earlier stages if needed. GitHub is an online platform where you can push your local Git repositories, share your code, and collaborate with others.
Basic Git Commands
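At a minimum, working with Git locally involves git init to create a repository, git status to see what has changed, git add to stage files, git commit -m "message" to record a version, git log to view the history, and git restore (or git checkout) to return files to an earlier state.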
GitHub Integration
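Connecting a local repository to GitHub typically involves git remote add origin <repository URL> to link the remote, git push -u origin main to upload the local history, git pull to fetch and merge changes made by others, and git clone <repository URL> to copy an existing GitHub repository onto a new machine.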
Professional CI/CD Pipeline Implementation
In a professional environment, CI/CD pipelines are configured using tools like GitHub Actions, Jenkins, GitLab CI, or Azure DevOps, typically set up by DevOps engineers in collaboration with developers. Developers push their code, such as SIEM modules or detection rules, to a Git repository, which triggers the pipeline automatically. The pipeline is defined in a configuration file (YAML or Jenkinsfile) specifying stages like build, test, deploy, and verification. Automation ensures that the latest code is pulled, unit tests are run, code is packaged, deployed to a test environment, and verified for correctness. Security checks, including static code analysis and dependency scanning, are often included to prevent vulnerabilities. If any stage fails, the pipeline can stop and rollback to the last stable version, ensuring stability and reliability. This setup allows developers to focus on coding while the pipeline handles testing, deployment, and validation automatically.
name: SIEM CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Run unit tests
        run: ./tests/run_tests.sh

      - name: Deploy to test environment
        run: ./deploy/deploy_siem.sh

      - name: Verify deployment
        run: ./deploy/verify.sh
In this example, the GitHub Actions workflow runs automatically every time someone pushes to the main branch. The jobs perform the same stages we simulated with pipeline.sh, but in a scalable, automated, and professional environment.
Ansible
Ansible is an agentless automation tool that uses SSH to manage remote hosts. It relies on simple, human-readable YAML playbooks to define tasks and configurations. Ansible is ideal for hybrid environments, supporting both on-premises and cloud systems. It is widely used for enforcing security benchmarks, deploying applications, and orchestrating repetitive tasks. Its simplicity, readability, and agentless architecture make it a popular choice for IT operations and cybersecurity teams.
- hosts: all
  become: yes
  tasks:
    - blockinfile:
        path: /etc/ssh/sshd_config
        block: |
          PermitRootLogin no
          PasswordAuthentication no

    - service:
        name: sshd
        state: reloaded
This playbook runs on all hosts and escalates privileges using become: yes. It uses blockinfile to update the SSH configuration file, disabling root login and password authentication. After modifying the file, it reloads the SSH service to apply the changes. Essentially, it automates a security hardening task for SSH across multiple machines.
Puppet
Puppet is a tool for automated configuration management that lets you describe how your systems should be set up using declarative manifests, which are files that state the desired state of your servers, such as which software should be installed or which services should be running. Once these manifests are defined, Puppet agents installed on each system regularly check in with the Puppet server to see whether the system matches the desired state. If anything has changed or drifted from what is defined, the agents automatically correct it to match the manifests, ensuring consistency across all machines without manual intervention. Puppet also provides detailed reporting, so you can see what changes were made, and role-based access control, which lets administrators define who can make changes, adding an extra layer of security. For example, if you want all your web servers to have Nginx installed and running, Puppet ensures that even if someone accidentally stops Nginx on one server, it will automatically be restarted to stay compliant.
Automation Architecture
In automated SCM, the Automation Architecture is made up of several key components. The Control Node or Orchestrator is responsible for executing the configuration playbooks or recipes and coordinating all changes. The Managed Nodes are the target systems that receive and apply these configurations, ensuring they match the desired state. A Version Control system stores all approved configuration files, making it easy to track changes over time. Finally, a Dashboard provides visibility into compliance status and alerts when any configuration drift occurs, helping administrators maintain consistent environments.
Lifecycle of Automated SCM
The Lifecycle of Automated SCM begins with defining a Baseline, which sets the desired state for systems and configurations. Next, configurations are Deployed across all managed nodes according to the defined baseline. Continuous Monitoring ensures systems remain compliant, and any Drift from the desired state is detected. Finally, the system can Remediate issues automatically and generate Reports detailing changes and compliance status, keeping environments secure and consistent.
Policy as Code & Compliance Automation
Policy as Code brings a modern approach to security and compliance by expressing policies in code rather than documents. This removes subjectivity in security reviews, ensuring that rules are applied consistently across all teams and environments. It also reduces the time and effort needed to prepare for audits and allows for continuous compliance monitoring, so any violations are detected and addressed in real time, keeping systems secure and standardized.
Understanding Cloud Storage Types
Amazon S3 stores data as objects within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region.
Block storage is a raw virtual disk attached to a machine, for example the virtual hard disk an operating system is installed on in a VM. File storage is a shared folder on a network that many users can open and edit files in. Object storage stores data such as photos and videos as objects in services like AWS S3 or Azure Blob Storage.
Policy as Code Example
An example of Policy as Code could be a rule that ensures all cloud storage buckets are not publicly accessible. Instead of manually checking each bucket, you write a policy in code (using a tool like Terraform Sentinel, Open Policy Agent, or AWS Config rules) that automatically verifies the configuration. For instance, the code might state: "All S3 buckets must have public access blocked." When this policy is applied, the system continuously scans buckets and flags or even remediates any bucket that violates the rule, ensuring consistent security without manual checks. This way, whether you have 10 or 1,000 buckets, the policy is enforced automatically and uniformly.
package s3

# Policy: S3 buckets must not be publicly accessible
deny[message] {
  bucket := input.buckets[_]
  bucket.acl == "public-read"
  message := sprintf("Bucket %v is publicly accessible", [bucket.name])
}
This is Policy as Code written in Open Policy Agent (OPA) Rego syntax to ensure AWS S3 buckets are not public. In this example, input.buckets represents the list of S3 buckets in your environment. The policy scans each bucket, and if any bucket has its ACL set to public-read, it generates a message denying it.
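To see the rule in action, the policy can be evaluated locally with the OPA CLI, for example opa eval --input buckets.json --data policy.rego "data.s3.deny", where buckets.json is an assumed inventory file whose buckets list contains entries such as {"name": "public-assets", "acl": "public-read"}; only the public bucket would produce a deny message.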
Drift Detection and Remediation
Drift Detection and Remediation is a core part of automated compliance. Agents continuously scan managed systems, comparing their current state against the defined baseline to identify any drift. Minor drifts can be automatically corrected, while major deviations trigger alerts for human review. This process can also integrate with SOAR platforms to streamline incident workflows, ensuring rapid response and maintaining compliance without manual intervention.
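A common hedged pattern is a scheduled pipeline that runs the configuration tool in check mode and reports, but does not apply, any differences; the schedule, inventory, and playbook names below are assumptions, and the runner is assumed to have Ansible and SSH access to the managed hosts.
# Sketch: nightly drift check using Ansible's check mode (reports drift without changing anything).
name: Nightly drift detection
on:
  schedule:
    - cron: "0 2 * * *"        # every night at 02:00 UTC
jobs:
  drift-check:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout approved baseline configuration
        uses: actions/checkout@v3

      - name: Compare live systems against the baseline
        run: ansible-playbook -i inventory.ini baseline.yml --check --diff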
Patch Management Overview & Phases
Patch management is a critical process in cybersecurity because a large portion of breaches, roughly 60 to 70%, happen through the exploitation of known vulnerabilities. By regularly applying patches, organizations reduce the time systems are exposed to these risks and maintain a secure baseline, complementing automated SCM practices.
Patch Management Lifecycle
The patch lifecycle begins with asset discovery, where all systems in the environment are identified, followed by assessment, which uses scanners to find missing patches. Once vulnerabilities are known, they are prioritized based on severity (like CVSS scores) and business impact, ensuring critical systems are fixed first. Before wide deployment, patches are tested in staging environments to confirm they don't break functionality, then deployed either automatically or in phases, and finally verified to ensure success and documented through reporting.
Risk-Based Approach and Automation
A risk-based approach means organizations focus on critical and high-severity vulnerabilities first, using threat intelligence to prioritize vulnerabilities that are actively exploited. Any exceptions or temporary deferrals are carefully documented. Automation helps make this process efficient: updates can be scheduled regularly, vulnerability feeds like NVD or MSRC can inform which patches are needed, and compliance reports can be automatically generated for audits. For example, a company might automatically patch all Windows servers every Tuesday while prioritizing a newly discovered, actively exploited CVE on their internet-facing web servers immediately, ensuring high-risk systems are protected without waiting for manual intervention.
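As a hedged sketch of that scheduled-patching idea applied to Linux hosts, the playbook below installs only security-related updates on a group of RHEL-family servers and could be run on a weekly schedule; the inventory group name and the reboot handling are illustrative assumptions.
# Sketch: apply only security-related updates to a group of RHEL/CentOS servers.
- hosts: rhel_servers            # hypothetical inventory group
  become: yes
  tasks:
    - name: Install outstanding security updates
      ansible.builtin.yum:
        name: "*"
        security: yes
        state: latest
      register: patch_result

    - name: Reboot only if updates were actually applied
      ansible.builtin.reboot:
      when: patch_result.changed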