Acknowledgment: This project benefited greatly from the insights and tutorials provided by the YouTube channel DFIR. Their comprehensive videos were invaluable in understanding and implementing the various components of the SOC Automation Lab.
The SOC Automation Project is designed to develop an automated Security Operations Center (SOC) workflow that optimizes event monitoring, alerting, and incident response. By utilizing robust open-source tools like Wazuh, Shuffle, and TheHive, this project enhances the efficiency and effectiveness of SOC operations. The implementation includes configuring a Windows 10 client with Sysmon for detailed event logging, Wazuh for comprehensive event management and alerting, Shuffle for workflow automation, and TheHive for case management and coordinated incident response.
- Automate Event Collection and Analysis: Enable real-time security event collection and analysis with minimal manual effort, facilitating proactive threat detection and response.
- Streamline the Alerting Process: Automate alert generation and forwarding to relevant systems and personnel, minimizing response times and ensuring critical incidents are not overlooked.
- Enhance Incident Response Capabilities: Implement automated responses to security incidents, improving reaction speed, consistency, and overall threat mitigation effectiveness.
- Increase SOC Efficiency: Reduce the workload of SOC analysts by automating routine tasks, allowing them to focus on high-priority threats and strategic initiatives.
- A host machine capable of running multiple virtual machines simultaneously.
- Sufficient CPU, RAM, and disk space to support the VMs and their expected workloads.
- VMware Workstation or VirtualBox: Industry-standard virtualization platform for creating and managing virtual machines.
- Windows 10: The client machine for generating realistic security events and testing the SOC automation workflow.
- Ubuntu 22.04 LTS: The stable and feature-rich Linux distribution for deploying Wazuh and TheHive.
- Sysmon: A powerful Windows system monitoring tool that provides detailed event logging and telemetry.
- Wazuh: An open-source, enterprise-grade security monitoring platform that serves as the central point for event collection, analysis, and alerting.
- Shuffle: A flexible, open-source security automation platform that handles workflow automation for alert processing and response actions.
- TheHive: A scalable, open-source Security Incident Response Platform designed for SOCs to efficiently manage and resolve incidents.
- VirusTotal: An online service that analyzes files and URLs to detect various types of malicious content using multiple antivirus engines and scanners.
- Cloud Services or Additional VMs: Wazuh and TheHive can be deployed either on cloud infrastructure or additional virtual machines, depending on your resource availability and preferences.
3.1.3 Download the Sysmon configuration file (sysmonconfig.xml) from the Sysmon Modular Config repository.
3.1.4 Extract the Sysmon zip file and place the Sysmon configuration file in the extracted Sysmon directory. Open PowerShell as an administrator and navigate to that directory.
Command:
.\Sysmon64.exe -i .\sysmonconfig.xml
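To confirm the installation, you can check that the Sysmon service is running and that its event log channel exists (a quick sanity check; the service name Sysmon64 corresponds to the 64-bit binary used above):
Get-Service Sysmon64
Get-WinEvent -ListLog "Microsoft-Windows-Sysmon/Operational"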
To set up the Wazuh server, we will be using DigitalOcean, a popular cloud service provider. However, you can use any other cloud platform or virtual machines as well. We start by creating a new Droplet from the DigitalOcean menu:
OS: Ubuntu 22.04
Use a root password for authentication
Change the Droplet name to "Wazuh" and create the Droplet:
We need to set up a firewall to prevent unauthorized access and block external scanning noise. From the DigitalOcean menu, go to Networking > Firewall > Create Firewall:
We modify the inbound rules to allow access only from our own IP address
After setting up the firewall rules, we apply the firewall to our Wazuh Droplet
From the DigitalOcean left-side menu, go to Droplets > Wazuh > Access > Launch Droplet Console. This allows us to connect to the Wazuh server using SSH
sudo apt-get update && sudo apt-get upgrade
We start the Wazuh installation using the official Wazuh installer script:
curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh && sudo bash ./wazuh-install.sh -a
Take note of the generated password for the "admin" user:
User: admin
Password: ***************************
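If the password scrolls past in the installer output, it can usually be recovered from the wazuh-install-files.tar archive that the installer generates (assuming the archive is still in the directory where the installer ran):
sudo tar -O -xvf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt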
To log in to the Wazuh web interface, we open a web browser and enter the Wazuh server's public IP address with https:// prefix
Use the generated password with the username "admin" to log in to the Wazuh web interface
We create another Droplet on DigitalOcean with Ubuntu 22.04 for hosting TheHive and enable the firewall that we set up earlier for the TheHive Droplet.
apt install wget gnupg apt-transport-https git ca-certificates ca-certificates-java curl software-properties-common python3-pip lsb-release
wget -qO- https://apt.corretto.aws/corretto.key | sudo gpg --dearmor -o /usr/share/keyrings/corretto.gpg
echo "deb [signed-by=/usr/share/keyrings/corretto.gpg] https://apt.corretto.aws stable main" | sudo tee -a /etc/apt/sources.list.d/corretto.sources.list
sudo apt update
sudo apt install java-common java-11-amazon-corretto-jdk
echo JAVA_HOME="/usr/lib/jvm/java-11-amazon-corretto" | sudo tee -a /etc/environment
export JAVA_HOME="/usr/lib/jvm/java-11-amazon-corretto"
Cassandra is the database used by TheHive for storing data.
wget -qO - https://downloads.apache.org/cassandra/KEYS | sudo gpg --dearmor -o /usr/share/keyrings/cassandra-archive.gpg
echo "deb [signed-by=/usr/share/keyrings/cassandra-archive.gpg] https://debian.cassandra.apache.org 40x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
sudo apt update
sudo apt install cassandra
Elasticsearch is used by TheHive for indexing and searching data.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
sudo apt-get install apt-transport-https
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update
sudo apt install elasticsearch
Create a jvm.options file under /etc/elasticsearch/jvm.options.d and add the following configurations to optimize Elasticsearch performance
-Dlog4j2.formatMsgNoLookups=true
-Xms2g
-Xmx2g
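One way to create this file in a single step (a sketch; any editor works just as well):
sudo tee /etc/elasticsearch/jvm.options.d/jvm.options > /dev/null <<'EOF'
-Dlog4j2.formatMsgNoLookups=true
-Xms2g
-Xmx2g
EOF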
wget -O- https://archives.strangebee.com/keys/strangebee.gpg | sudo gpg --dearmor -o /usr/share/keyrings/strangebee-archive-keyring.gpg
echo 'deb [signed-by=/usr/share/keyrings/strangebee-archive-keyring.gpg] https://deb.strangebee.com thehive-5.2 main' | sudo tee -a /etc/apt/sources.list.d/strangebee.list
sudo apt-get update
sudo apt-get install -y thehive
Default credentials for accessing TheHive on port 9000:
Username: admin@thehive.local
Password: secret
Next, configure Cassandra by modifying the cassandra.yaml file:
nano /etc/cassandra/cassandra.yaml
This is where we customize the listen address, ports, and cluster name.
Set the listen_address to TheHive's public IP:
Next, configure the rpc_address by entering TheHive's public IP:
Lastly, change the seed address under the seed_provider section; enter TheHive's public IP in the seeds field:
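For reference, after these edits the relevant cassandra.yaml lines look roughly like the sketch below, using the example public IP from this walkthrough and the example cluster name "Test Cluster" referenced later; the storage port 7000 in the seeds entry is the Cassandra default:
cluster_name: 'Test Cluster'
listen_address: 139.59.112.192
rpc_address: 139.59.112.192
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "139.59.112.192:7000"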
Stop the Cassandra service:
systemctl stop cassandra.service
Remove the old Cassandra data files, since the package installation created default data that would conflict with the new cluster configuration:
rm -rf /var/lib/cassandra/*
Start the Cassandra service again:
systemctl start cassandra.service
Check the Cassandra service status to ensure it's running:
systemctl status cassandra.service
Next, configure Elasticsearch by modifying the elasticsearch.yml file:
nano /etc/elasticsearch/elasticsearch.yml
Optionally, change the cluster name. Uncomment the node.name field. Uncomment the network.host field and set the IP to TheHive's public IP. Optionally, uncomment the http.port field (the default port is 9200) and the cluster.initial_master_nodes field, removing node-2 if not applicable.
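A rough sketch of the resulting elasticsearch.yml, again using the example public IP (the cluster and node names here are placeholders; keep whatever you chose):
cluster.name: thehive
node.name: node-1
network.host: 139.59.112.192
http.port: 9200
cluster.initial_master_nodes: ["node-1"]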
Start and enable the Elasticsearch service:
systemctl start elasticsearch
systemctl enable elasticsearch
Check the Elasticsearch service status:
systemctl status elasticsearch
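Optionally, confirm Elasticsearch responds on its HTTP port (a quick check using the example public IP and the default port 9200):
curl http://139.59.112.192:9200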
Before configuring TheHive, ensure the thehive user and group have access to the necessary file paths:
ls -la /opt/thp
If root has access to the thehive directory, change the ownership:
chown -R thehive:thehive /opt/thp
This command changes the owner to the thehive user and group for the specified directories.
Now, configure TheHive's configuration file:
nano /etc/thehive/application.conf
Modify the database and index config sections: change the hostname IP to TheHive's public IP, set the cluster name to the same value as the Cassandra cluster name ("Test Cluster" in this example), change the index.search hostname to TheHive's public IP, and at the bottom change the application.baseUrl to use TheHive's public IP.
By default, TheHive has both Cortex (data enrichment and response) and MISP (threat intelligence platform) enabled.
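A rough sketch of the values being changed, based on the stock application.conf shipped with the package (key names may differ slightly between TheHive releases, so treat this as a guide rather than a drop-in config):
db.janusgraph {
  storage {
    backend = cql
    hostname = ["139.59.112.192"]
    cql {
      cluster-name = "Test Cluster"
    }
  }
  index.search {
    backend = elasticsearch
    hostname = ["139.59.112.192"]
  }
}
application.baseUrl = "http://139.59.112.192:9000"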
Save the file, start, and enable the TheHive service:
systemctl start thehive
systemctl enable thehive
Important note: If you cannot access TheHive, ensure all three services (Cassandra, Elasticsearch, and TheHive) are running. If any of them are not running, TheHive won't start.
If all services are running, access TheHive from a web browser using TheHive's public IP and port 9000:
http://139.59.112.192:9000/login
Log in to TheHive using the default credentials: Username: admin@thehive.local
Password: secret
Log in to the Wazuh web interface. Click on "Add agent" and select "Windows" as the agent's operating system. Set the server address to the Wazuh server's public IP.
Copy the installation command provided and execute it in PowerShell on the Windows client machine. The Wazuh agent installation will start.
After the installation, start the Wazuh agent service using the net start wazuhsvc command or through Windows Services.
Check the Wazuh web interface to confirm the Windows agent is successfully connected.
The Windows agent should be listed with an "Active" status.
On the Windows client machine, navigate to C:\Program Files (x86)\ossec-agent and open the ossec.conf file with a text editor (e.g., Notepad).
In the ossec.conf file, add a new <localfile> section to configure Sysmon event forwarding to Wazuh. Check the full name of the Sysmon event log in the Windows Event Viewer; it is used in the snippet below.
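The added section typically looks like this (eventchannel is the Wazuh log format for Windows event channels, and the location is the full Sysmon channel name shown in Event Viewer):
<localfile>
  <location>Microsoft-Windows-Sysmon/Operational</location>
  <log_format>eventchannel</log_format>
</localfile>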
Since modifying the ossec.conf file requires administrator privileges, open a new Notepad instance with administrator rights and save the changes to the file.
Restart the Wazuh agent service to apply the configuration changes.
In the Wazuh web interface, go to the "Events" section and search for Sysmon events to confirm they are being received.
4.2.1 Download Mimikatz
On the Windows client machine, download Mimikatz, a tool commonly used by attackers and red teamers to extract credentials from memory. To download Mimikatz, you may need to temporarily disable Windows Defender or exclude the download directory from scanning.
Open PowerShell, navigate to the directory where Mimikatz is downloaded, and execute it.
By default, Wazuh only logs events that trigger a rule or alert. To log all events, modify the Wazuh manager's ossec.conf file: connect to the Wazuh server via SSH and open /var/ossec/etc/ossec.conf.
Create a backup of the original configuration file:
cp /var/ossec/etc/ossec.conf ~/ossec-backup.conf
Change the <logall> and <logall_json> options under the <ossec_config> section from "no" to "yes". Restart the Wazuh manager service:
systemctl restart wazuh-manager.service
This configuration forces Wazuh to archive all logs in the /var/ossec/logs/archives/ directory.
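For reference, the two options live inside the <global> block of the manager's ossec.conf; a sketch of just the lines being changed:
<ossec_config>
  <global>
    <!-- ... other global options ... -->
    <logall>yes</logall>
    <logall_json>yes</logall_json>
  </global>
</ossec_config>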
To enable Wazuh to ingest the archived logs, modify the Filebeat configuration:
nano /etc/filebeat/filebeat.yml
Change enabled: false to enabled: true for the "archives" input and restart the Filebeat service.
After updating the Filebeat and ossec.conf configurations, create a new index pattern in the Wazuh web interface so the archived logs can be searched. From the left-side menu, go to "Stack Management" > "Index Patterns".
Create a new index pattern named wazuh-archives-* to cover all archived logs.
On the next page, select "timestamp" as the time field and create the index pattern.
Go to the "Discover" section from the left-side menu and select the newly created index.
To check whether Mimikatz logs are being archived, use cat and grep on the archive log from the Wazuh manager CLI:
cat /var/ossec/logs/archives/archives.log | grep -i mimikatz
If no Mimikatz entries are found in the archives, no Mimikatz events were logged, and you won't see any related events in the Wazuh web interface either.
Relaunch Mimikatz on the Windows client machine and check the Event Viewer to ensure Sysmon is capturing Mimikatz events.
Check the archive file again for Mimikatz logs to confirm they are being generated.
Examine the Mimikatz logs and identify a suitable field for crafting an alert. In this example, we will use the originalFileName field.
Using the originalFileName field ensures the alert will trigger even if an attacker renames the Mimikatz executable.
You can create a custom rule either from the CLI or the Wazuh web interface.
In the web interface, click on the "Manage rule files" button. Filter the rules by name (e.g., "sysmon") and view the rule details by clicking the eye icon.
These are Sysmon-specific rules built into Wazuh for event ID 1. Copy one of these rules as a reference and modify it to create a custom Mimikatz detection rule.
Example custom rule:
<rule id="100002" level="15">
<if_group>sysmon_event1</if_group>
<field name="win.eventdata.originalFileName" type="pcre2">(?i)mimikatz\.exe</field>
<description>Mimikatz Usage Detected</description>
<mitre>
<id>T1003</id>
</mitre>
</rule>
Go to the "Custom rules" button and edit the "local_rules.xml" file. Add the custom Mimikatz detection rule.
Save the file and restart the Wazuh manager service.
To test the custom rule, rename the Mimikatz executable on the Windows client machine to something different.
Execute the renamed Mimikatz.
Verify that the custom rule triggers an alert in Wazuh, even with the renamed Mimikatz executable.
Go to the Shuffle website (shuffler.io) and create an account.
Click on "New Workflow" and create a workflow. You can select any random use case for demonstration purposes.
On the workflow page, click on "Triggers" at the bottom left. Drag a "Webhook" trigger and connect it to the "Change Me" node. Set a name for the webhook and copy the Webhook URI from the right side. This URI will be added to the Ossec configuration on the Wazuh manager.
Click on the "Change Me" node and set it to "Repeat back to me" mode. For call options, select "Execution argument". Save the workflow.
On the Wazuh manager CLI, modify the ossec.conf file to add an integration for Shuffle:
nano /var/ossec/etc/ossec.conf
Add the following integration configuration:
<integration>
  <name>shuffle</name>
  <hook_url>[Your Shuffle webhook URL]</hook_url>
  <level>3</level>
  <alert_format>json</alert_format>
</integration>
Replace the <level> tag with <rule_id>100002</rule_id> so that alerts are sent based on the custom Mimikatz rule ID.
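The resulting integration block then looks like this:
<integration>
  <name>shuffle</name>
  <hook_url>[Your Shuffle webhook URL]</hook_url>
  <rule_id>100002</rule_id>
  <alert_format>json</alert_format>
</integration>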
Restart the Wazuh manager service:
systemctl restart wazuh-manager.service
Regenerate the Mimikatz telemetry on the Windows client machine. In Shuffle, click on the webhook trigger ("Wazuh-Alerts") and click "Start".
Verify that the alert is received in Shuffle.
Workflow Steps:
- Mimikatz alert sent to Shuffle
- Shuffle receives the Mimikatz alert and extracts the SHA256 hash from the file
- Check reputation score with VirusTotal
- Send details to TheHive to create an alert
- Send an email to the SOC analyst to begin the investigation
Observe that the returned hash values are prefixed with their hash type (e.g., SHA1=hashvalue). To automate the workflow, parse out the hash value itself; sending the entire value, including the hash-type prefix, to VirusTotal will result in an invalid query.
Click on the "Change Me" node and select "Regex capture group" instead of "Repeat back to me". In the "Input data", select the "hashes" option. In the "Regex" tab, enter the regex pattern to parse the SHA256 hash value: SHA256=([0-9A-Fa-f]{64})
. Save the workflow.
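To sanity-check the pattern outside Shuffle, you can run it against a sample value with GNU grep's PCRE mode (the hash below is a dummy 64-character value, not a real Mimikatz hash):
echo 'SHA256=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' | grep -oP 'SHA256=\K[0-9A-Fa-f]{64}'
The command prints only the 64-character digest, which is the value that should be passed to VirusTotal.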
Click on the "Show execution" button (running man icon) to verify that the hash value is extracted correctly.
Create a VirusTotal account to access the API.
Copy the API key and return to Shuffle. In Shuffle, click on the "Apps" tab and search for "VirusTotal". Drag the "VirusTotal" app to the workflow, and it will automatically connect.
Enter the API key on the right side or click "Authenticate VirusTotal v3" to authenticate.
Change the "ID" field to the "SHA256Regex" value created earlier.
Save the workflow and rerun it.
Expand the results to view the VirusTotal scan details, including the number of detections.
In Shuffle, search for "TheHive" in the "Apps" and drag it into the workflow. TheHive can be connected using the IP address and port number (9000) of the TheHive instance created on DigitalOcean.
Log in to TheHive using the default credentials: Username: admin@thehive.local
Password: secret
Create a new organization and user for the organization in TheHive.
Add new users with different profiles as needed.
Set new passwords for the users. For the SOAR user created for Shuffle integration, generate an API key.
Create an API key and store it securely. This key will be used to authenticate Shuffle. Log out from the admin account and log in with one of the user accounts.
In Shuffle, click on the orange "Authenticate TheHive" button and enter the API key created earlier. For the URL, enter the public IP address of TheHive along with the port number.
Under "Find actions", click on "TheHive" and select "Create alerts". Set the JSON payload for TheHive to receive the alerts. Here's an example payload for the Mimikatz scenario:
{
  "description": "Mimikatz Detected on host: [Your host computer client's name]",
  "externallink": "",
  "flag": false,
  "pap": 2,
  "severity": "2",
  "source": "Wazuh",
  "sourceRef": "Rule:100002",
  "status": "New",
  "summary": "Details about the Mimikatz detection",
  "tags": [
    "T1003"
  ],
  "title": "Mimikatz Detection Alert",
  "tlp": 2,
  "type": "Internal"
}
Expand the "Body" section to set the payload.
Save the workflow and rerun it. An alert should appear in the TheHive dashboard.
Note: If the alert doesn't appear, ensure that the firewall for TheHive in your cloud provider allows inbound traffic on port 9000 from any source.
Click on the alert to view the details.
In Shuffle, find "Email" in the "Apps" and connect VirusTotal to the email node.
Configure the email settings, including the recipient, subject, and body, to send the alert with relevant event information.
Save the workflow and rerun it.
Verify that the email is received with the expected alert details.
We have successfully set up and configured the SOC Automation Lab, integrating Wazuh, TheHive, and Shuffle for automated event monitoring, alerting, and incident response. This foundation provides a solid starting point for further customization and expansion of automation workflows to meet our specific SOC requirements. The key steps and achievements of this lab include:
- Installing and configuring a Windows 10 client with Sysmon for detailed event generation.
- Setting up Wazuh as the central event management and alerting platform.
- Installing and configuring TheHive for case management and coordinated response actions.
- Generating Mimikatz telemetry and creating custom alerts in Wazuh.
- Integrating Shuffle as the SOAR platform for workflow automation.
- Building an automated workflow to extract file hashes, check reputation scores with VirusTotal, create alerts in TheHive, and notify SOC analysts via email.
With this lab, we have gained hands-on experience in implementing an automated SOC workflow using powerful open-source tools. We can now leverage this knowledge to enhance an organization's security operations, improve incident response times, and streamline SOC processes.