Security Orchestration, Automation, & Response (SOAR) Deployment
"An ounce of prevention is worth a pound of cure'
-Benjamin Franklin
"An ounce of prevention is worth a pound of cure'
-Benjamin Franklin
"An ounce of prevention is worth a pound of cure'
-Benjamin Franklin
Project Introduction: Deploying SOAR platform tools
This project implements a Security Orchestration, Automation, and Response (SOAR) solution using TheHive, Fleet Server, Elasticsearch, and Kibana within a homelab environment.
By centralizing alert processing, automating security workflows, and integrating real-time threat intelligence, this deployment follows NIST 800-53 and ISO 27001 security best practices to create an efficient incident response framework.
This project showcases practical security automation techniques aligned with enterprise SOAR best practices for scalable and compliant security operations.
New Subnet Addition
To provision the required SOAR infrastructure, we will add a second subnet, 10.0.1.0/28, alongside the existing 10.0.0.0/28, ensuring sufficient host availability. This includes:
Creating an additional internal vSwitch to segment domain services from SOAR components
Assigning 10.0.0.0/28 for Windows Server Active Directory Infrastructure and 10.0.1.0/28 for SOAR components
We'll begin by accessing the Virtual Switch Manager on the VMHOST machine and renaming the existing vSwitch from Internal Switch to Internal Switch 10.0.0.0
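As a quick sanity check on the /28 sizing before carving out scopes, the host math can be done directly in the shell:

```shell
# A /28 prefix leaves 32-28 = 4 host bits: 2^4 = 16 addresses,
# minus the network and broadcast addresses = 14 usable hosts.
prefix=28
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts usable hosts per /$prefix"
```

Fourteen usable addresses per subnet leaves room for the gateway interface plus the handful of SOAR VMs planned here.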




Next, we'll create another internal vSwitch to use in addition to the previous one. The vSwitch bridged at 10.0.0.0/28 will host the Windows Server domain infrastructure devices, and the internal vSwitch at 10.0.1.0/28 will host the devices involved in the SOAR deployment.


We will now access the settings of the Gateway01 vGW and add a network adapter under the "Add Hardware" section. This will facilitate connection to all three switches, that is, the three switches bridged at 10.0.0.0/28, 10.0.1.0/28, and 192.168.1.0/24.






We will now launch the Gateway01 machine, connect, and log on. After logon, we will navigate to the Network Connections section of Control Panel using the ncpa.cpl command and, to avoid confusion, rename the newly available Ethernet connection to Internal Switch 10.0.1.0




We will then access RAS and see that the internal switch bridged at the 10.0.1.0/28 subnet has already been identified.
We'll need to configure the virtual gateway (vGW) to bridge the subnets:
10.0.0.0/28 (Windows Infrastructure)
10.0.1.0/28 (SOAR Infrastructure)
192.168.1.0/24 (External Network)


Our previously created static route, which forwards all traffic without a corresponding IP address on the local subnet to the external switch and then out to the next-hop address of the external gateway, should allow traffic from the 10.0.1.0/28 subnet to be routed automatically. Thus we can skip configuring an additional static route and just configure routing on the appropriate switch from RAS.


As shown, NAT on the private subnet 10.0.1.0/28 has now been configured, and we can continue with setting the network configuration on the machines involved in the SOAR deployment.
In this step, we're just ensuring NAT is applied on 10.0.1.0/28 to route traffic through the vGW at 10.0.1.1. This should allow traffic forwarding between 10.0.0.0/28 ↔ 10.0.1.0/28




We'll now move back to ncpa.cpl and configure IPv4 settings on the new subnet's (10.0.1.0/28) interface with a static IP address of 10.0.1.1, with all other details the same as configured for the other subnet's internal network adapter.



Now that the interface is set to a static address, we'll need to configure our DHCP server to account for the new scope. We'll begin by opening DHCP Manager and right-clicking IPv4 in the provided menu, opening the New Scope Configuration Wizard.
This scope will cover 10.0.1.2 to 10.0.1.14, and we will continue on to the default gateway specification, where we will indicate the interface to our vGateway at 10.0.1.1.






We will then specify 10.0.0.2 as the primary and 8.8.8.8 as an alternate DNS server for the scope to utilize, then conclude setup with default options.




Finally, once we confirm the DHCP settings within Netplan on the Ubuntu machines, they will acquire an APIPA address until they're able to connect to a DHCP server and obtain a lease. To facilitate that communication, we'll need to configure a DHCP relay agent from RAS. Configuring a relay to allow communication between both subnets supports NIST SP 800-53 SC-7 (boundary protection) by ensuring DHCP forwarding adheres to subnet segmentation practices. First, we'll open RAS, expand our GATEWAY01 listing, expand IPv4, right-click "General", then pick New Routing Protocol.



From this menu, we'll select DHCP Relay Agent and press OK.
We will then right-click the DHCP Relay Agent and select New Interface to specify which interface the relay agent should listen on to forward requests. From here, we will select Internal Switch 10.0.1.0 so the agent listens on the same subnet as our SOAR Ubuntu machines.



Once the individual machines are deployed, we'll set them to use DHCP and create DHCP reservations. To finalize the new subnet's creation, we'll right-click the new relay agent listed under IPv4 and select Properties. We'll then add our DHCP server's IP address to the Server Address field.





II. Elasticsearch Deployment
Now that our new subnet is deployed with a working DHCP relay agent assigning IP addresses between 10.0.0.0/28 and 10.0.1.0/28, we are ready to begin deploying the Elastic Stack, starting with an Elasticsearch cluster.
To start, we'll navigate back to VMHOST and deploy a new Gen 2 VM with the following configuration:
8 GB of RAM
127 GB of disk space
Internal switch bridged at 10.0.1.0/28



After creating the VM in Hyper-V Manager, we'll begin configuring the VM by disabling Secure Boot and allocating 2 vCPUs.





After logging on, we'll configure a new DHCP reservation for the Elasticsearch VM on the DHCP server located at 10.0.0.3/28




Switching back to the Elasticsearch VM, we'll begin by modifying the Netplan configuration to utilize the DHCP reservation that we just configured.
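A minimal Netplan sketch for this step might look like the following, assuming the interface name is eth0 and the config file is /etc/netplan/00-installer-config.yaml (both vary by system):

```yaml
# /etc/netplan/00-installer-config.yaml -- filename and interface name are assumptions
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true   # the lease comes from the MAC-based reservation on the DHCP server
```

After editing, `sudo netplan apply` puts the change into effect.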




Next, we'll run:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
This will allow us to import the Elastic PGP key for adding Elastic's repository to our package manager's source list.



Now we'll make sure that the proper dependency, apt-transport-https, is installed by running:
sudo apt-get install apt-transport-https



With the PGP key in place, and the proper dependencies installed, we can begin to install Elasticsearch onto our VM. To do so, we'll add the Elastic repository definition into our /etc/apt/sources.list.d directory by running the command:
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.listdeb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.listdeb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main



Finally, we'll update our package manager and install Elasticsearch with
sudo apt update && sudo apt install elasticsearch
After installation, we will see the default generated values from Elasticsearch, including the generated password for the built-in elastic superuser, as well as instructions for resetting the password and generating enrollment tokens for other ES nodes and Kibana.



After installation, we'll want to enable Elasticsearch and start it
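The enable-and-start step is the standard systemd pair (a sketch; the service name comes from the Debian package):

```shell
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
```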



The first thing that we'll want to do now that Elasticsearch is online is reset the password for the superuser. This needs to be performed first in order to adhere to NIST 800-53 AC-2 on secure user management. To do so, we will use the /usr/share/elasticsearch/bin/elasticsearch-reset-password utility. When changing this, we'll need to specify the elastic user with the -u switch, and interactive mode with the -i switch. Interactive mode allows us to specify the password we actually want for the elastic user, rather than having Elasticsearch generate another new password.
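Concretely, the reset command looks like this:

```shell
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i
```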




We should now be able to cURL the address where Elasticsearch is listening to verify that the service is running as expected.
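A typical verification call, assuming the auto-generated HTTP CA sits in its default location:

```shell
# prompts for the elastic password, then returns the cluster's JSON banner
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://10.0.1.2:9200
```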



Looks good!
Looks good!
Next, we'll want to make configuration changes to Elasticsearch itself, and to do that, we'll access the /etc/elasticsearch/elasticsearch.yml file.
From here, we'll make a few changes: first, the node.name value, which we'll uncomment and set to app1; then the network.host value, where Elasticsearch will be hosted, which we'll set to the IP address of our VM (10.0.1.2); and we'll uncomment the line with http.port for good measure. We'll then save our changes and close the configuration file.
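After those edits, the relevant lines in /etc/elasticsearch/elasticsearch.yml end up looking roughly like:

```yaml
node.name: app1
network.host: 10.0.1.2
http.port: 9200
```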









In order for Elasticsearch to listen on port 9200, we'll need to open the port in the local firewall, after which we'll reload it to apply the new rule.
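With ufw, the firewall used on these Ubuntu VMs, that amounts to:

```shell
sudo ufw allow 9200/tcp
sudo ufw reload
```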



III. Kibana Deployment
Now that the Elasticsearch cluster has been deployed, we can begin deploying Kibana to enable actionable integration and alerting.
To start, we'll deploy a new Gen 2 VM with the following configuration:
8 GB of RAM
127 GB of disk space
Internal vSwitch bridged at 10.0.1.0/28



As with the previous Ubuntu-based VMs, we'll disable Secure Boot and allocate 4 vCPUs.



Next, we'll log onto our DHCP server and set a new DHCP MAC-based reservation for the Kibana machine.






We'll then adjust the Netplan configuration on the Kibana machine to utilize the DHCP reservation set at the server




Now that we are networked, we can begin the installation process by first adding the Elastic PGP key to our keyring directory, as we did before on the Elasticsearch machine:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg


Next, as before, we'll install the apt-transport-https dependency by running:
sudo apt-get install apt-transport-https



After this, as before, we can add the Elastic repository definition to our /etc/apt/sources.list.d directory, which will include the package to install Kibana.
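This is the same command used earlier on the Elasticsearch VM:

```shell
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
```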



Next, we'll run sudo apt update && sudo apt upgrade && sudo apt-get install kibana
After installing, we'll enable and start the service.



Once installed, just as with Elasticsearch, we have some additional configurations to make in the /etc/kibana/kibana.yml file.
These will include:
Specifying a server.port value of 5601
Setting server.host to our Kibana machine's address (10.0.1.3)
Setting server.publicBaseUrl to the URL that we will use to access Kibana from the web browser
Setting elasticsearch.password to 'password' (but leaving elasticsearch.username at its default value)
Setting elasticsearch.hosts to the IP address and port number of our Elasticsearch machine
Specifying the Elasticsearch CA by setting elasticsearch.ssl.certificateAuthorities to the path of the root CA file generated during the initial security configuration of our ES installation, /etc/kibana/certs/http_ca.crt
(We will SCP this file to this directory after configurations have been made)
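Put together, those kibana.yml changes look roughly like this (the publicBaseUrl value is an example and should match however you will reach Kibana):

```yaml
server.port: 5601
server.host: "10.0.1.3"
server.publicBaseUrl: "http://10.0.1.3:5601"   # example URL
elasticsearch.hosts: ["https://10.0.1.2:9200"]
elasticsearch.username: "kibana_system"        # default value
elasticsearch.password: "password"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/http_ca.crt"]
```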












Now we will need to create a directory to house the root CA file so that we can establish trust between our Elasticsearch cluster and Kibana.
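A minimal sketch, assuming Kibana runs under the kibana group created by the package:

```shell
sudo mkdir -p /etc/kibana/certs
sudo chown root:kibana /etc/kibana/certs
```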



Next, we'll just need to place the http_ca.crt Root CA file into the /etc/kibana/certs directory so that Kibana can find the file as specified in its kibana.yml configuration.
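One way to move the file, assuming SSH access between the VMs; the user@host values are placeholders:

```shell
# run on the Elasticsearch VM
scp /etc/elasticsearch/certs/http_ca.crt user@10.0.1.3:/tmp/
# then on the Kibana VM
sudo mv /tmp/http_ca.crt /etc/kibana/certs/
```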






Initial certificate and hosting configurations should now be finalized, and we can now set the kibana_system account's password to match the one in kibana.yml. Kibana will use this account to connect and communicate with Elasticsearch.
We can begin by connecting back to the Elasticsearch machine and utilizing the /usr/share/elasticsearch/bin/elasticsearch-reset-password utility once again, this time specifying kibana_system as the user.
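The command mirrors the earlier elastic reset, swapping the target user:

```shell
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system -i
```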



Next, we'll want to allow communication over tcp/5601 within ufw.
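As with port 9200 on the Elasticsearch VM:

```shell
sudo ufw allow 5601/tcp
sudo ufw reload
```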



Finally, we can restart Kibana and access it in the web browser at the specified address and port once it starts up.
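The restart is the usual systemd one-liner; checking status afterward confirms the service came up cleanly:

```shell
sudo systemctl restart kibana.service
sudo systemctl status kibana.service
```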



We can then log on using the elastic superuser account to see the Kibana dashboard.



IV: Securing the ElasticStack
In this section we'll secure inter-node communication by enabling TLS in our configuration. To begin, we'll need to create a certificate authority to distribute certificates throughout the stack. We can do this by running the /usr/share/elasticsearch/bin/elasticsearch-certutil utility in ca mode and setting a password for the new CA file.
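In ca mode the invocation is simply:

```shell
# prompts for an output filename (default elastic-stack-ca.p12) and a password
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
```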



Next, we'll copy our newly-generated certificate file into our certs folder, inside our Elasticsearch certificates directory.



To make sure that Elasticsearch can access the certificate file correctly, we'll change the owner to the elasticsearch group and set permissions on the elastic-certificates.p12 file to match the default certificate files created during Elasticsearch's setup.
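A sketch of the ownership and permission changes, assuming the file was copied into /etc/elasticsearch/certs and that the package defaults (root:elasticsearch, mode 660) apply:

```shell
sudo chown root:elasticsearch /etc/elasticsearch/certs/elastic-certificates.p12
sudo chmod 660 /etc/elasticsearch/certs/elastic-certificates.p12
```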



We'll next need to configure the paths used for Elasticsearch's keystore and truststore in the elasticsearch.yml file. We'll change these to access the elastic-certificates.p12 file, then save and close the file.
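The transport-TLS section of elasticsearch.yml then looks roughly like this (paths are relative to the config directory):

```yaml
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
```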



Next, since we've set a password for our certificate, Elasticsearch will need that password to access the .p12 file. To provide it securely without placing it directly into the .yml file in plaintext, we can add the certificate password as entries in Elasticsearch's keystore.
To begin, we'll access the keystore utility located at /usr/share/elasticsearch/bin/elasticsearch-keystore, where we will add:
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password
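The two entries can be added with the elasticsearch-keystore utility, which prompts for each value. Note that the live setting names use underscores (secure_password). A guarded sketch, assuming the default Debian/Ubuntu utility path:

```shell
# Add the .p12 passwords as secure settings (each add prompts for the value).
BIN=/usr/share/elasticsearch/bin/elasticsearch-keystore
settings="xpack.security.transport.ssl.keystore.secure_password xpack.security.transport.ssl.truststore.secure_password"
for s in $settings; do
    if [ -x "$BIN" ]; then
        sudo "$BIN" add "$s"       # prompts for the certificate password
    else
        echo "would add: $s"       # not on the Elasticsearch node
    fi
done
```

Storing the passwords this way keeps them out of elasticsearch.yml while still letting the service read them at startup.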



The secure configuration used thus far should be sufficient to secure traffic between nodes, and we'll now shift our focus to securing HTTPS traffic. To start, we'll need to stop both the Kibana and Elasticsearch services.
After both services are stopped, we'll use the elasticsearch-certutil utility once again, this time in http mode. During certificate creation, we'll use the following options for the requested input fields:
Generate a CSR? N
Use an existing CA? Y
For how long should the certificate be valid? 5y
Do you wish to generate one certificate per node? Y
Node #1 Name: app1
Which hostnames will be used to connect to app1? Elasticsearch-virtual-machine
Which IP address will be used to connect to app1? 10.0.1.2
Specify a password for the http.p12 file



Now that the cert bundle is created, we'll cd into our /usr/share/elasticsearch directory and unzip the newly created elasticsearch-ssl-http.zip file.
After unzipping, we can list the contents of the unzipped elasticsearch directory to find the http.p12 certificate, a backup configuration file, and a readme.txt file. Before moving this into our certs folder, we'll need to rename the pre-existing http.p12 file already present inside that folder to http.p12.old, then move the new http.p12 file into the directory to take its place.
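The backup-and-swap step can be sketched as a small helper; the paths in the commented invocation are assumptions, and on the real node it would run as root:

```shell
# Back up the existing http.p12 and move the new certificate into place.
swap_http_cert() {
    cert_dir=$1; new_cert=$2
    if [ -f "$cert_dir/http.p12" ] && [ -f "$new_cert" ]; then
        mv "$cert_dir/http.p12" "$cert_dir/http.p12.old"   # keep the old cert as a fallback
        mv "$new_cert" "$cert_dir/http.p12"
    fi
}
# On the Elasticsearch node (paths are assumptions), as root:
# swap_http_cert /etc/elasticsearch/certs /usr/share/elasticsearch/elasticsearch/http.p12
```

Keeping http.p12.old around makes it easy to roll back if Elasticsearch refuses to start with the new certificate.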






Now that the new certificate file is in place, we'll set the permissions and ownership of the new http.p12 file to match those of the certificate files created during the security configuration performed at install.



Now, just as we did for the transport security password, we'll add our http.ssl.keystore password to the Elasticsearch keystore using the elasticsearch-keystore utility.
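As with the transport settings, the live setting name uses underscores. A guarded sketch, assuming the default utility path:

```shell
# Add the http.p12 password as a secure setting (prompts for the value).
BIN=/usr/share/elasticsearch/bin/elasticsearch-keystore
setting=xpack.security.http.ssl.keystore.secure_password
if [ -x "$BIN" ]; then
    sudo "$BIN" add "$setting"     # prompts for the http.p12 password
else
    echo "would add: $setting"     # not on the Elasticsearch node
fi
```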



This should be all the security configuration needed for the Elasticsearch node, and we can now begin securing Kibana to ensure that we're able to access the web interface over HTTPS. First, we'll return to our /usr/share/elasticsearch directory to find the unzipped kibana folder, which contains an elasticsearch-ca.pem file. We'll cp this file to Kibana's /tmp/ directory, and then into Kibana's configuration directory.






Now we'll access the kibana.yml file and specify the new Elasticsearch certificate authority.
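A sketch of the relevant kibana.yml lines, assuming the CA was copied to /etc/kibana and the Elasticsearch node answers at 10.0.1.2:

```yaml
elasticsearch.hosts: ["https://10.0.1.2:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/elasticsearch-ca.pem"]
```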



Finally, we can begin the process of encrypting traffic between the browser and Kibana. We'll first need to generate a Certificate Signing Request (CSR). To begin this process, we'll log back on to our Elasticsearch VM and run the elasticsearch-certutil utility in CSR mode. When prompted, we'll accept the default name of csr-bundle.zip.



Now that the .zip file has been created, we'll unzip it and use the CSR to generate a certificate. Again, we'll use the elasticsearch-certutil utility, outputting the certificate in .pem format (usable with Kibana) and signing it with our elastic-stack-ca.p12 CA.
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem --ca /usr/share/elasticsearch/elastic-stack-ca.p12 --name kibana-https-server






The previous command generates the certificate bundle, which includes our private key. We'll unzip its contents; within is a kibana-https-server.crt file that we'll move to our Kibana machine's /tmp directory, then into our configuration directory.









Now that both files are present in Kibana's configuration directory, we'll access the kibana.yml file and make the following changes:
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/kibana-https-server.crt
server.ssl.key: /etc/kibana/kibana-https-server.key



Finally, we'll change the permissions on both the .key and .crt files to allow read access by Kibana.
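One way to grant Kibana read access while keeping the key private; the group name and modes are assumptions (a common choice is owner root, group kibana, mode 640):

```shell
# Restrict a file to owner read/write plus group read.
restrict_to_group_read() {
    f=$1; grp=$2
    chown "root:$grp" "$f" 2>/dev/null || true   # needs root on the real VM
    chmod 640 "$f"                               # rw for owner, r for group
}
# On the Kibana VM, as root (group name assumed to be "kibana"):
# restrict_to_group_read /etc/kibana/kibana-https-server.crt kibana
# restrict_to_group_read /etc/kibana/kibana-https-server.key kibana
```

Mode 640 keeps the private key unreadable by other local users while the kibana service account can still load it.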



After launching Kibana and logging in again as the elastic superuser, we can see that the certificate is recognized in the browser and communication is encrypted.



V: Fleet & Elastic Agent
Now that Kibana and Elasticsearch are both connected and trusted, we'll begin deploying our Fleet server. This will be used to centrally manage our Elastic Agent services that will be deployed later.
To start, we'll navigate to Management > Fleet > Add Fleet Server



Next, we'll tab over to "Advanced" and create "Fleet Server policy 1"



In the Advanced section, we'll select Production to provide the certificate we created earlier, which Fleet will use to establish trust with deployed agents.
And we'll finalize the deployment.






Next, we'll navigate to Fleet > Settings and specify the correct host URL for our Elasticsearch cluster.



Next, we'll need to supply the Elasticsearch CA in the Outputs section. To do so, we'll create a new certs directory within /etc/kibana for Fleet to use to find the root CA for Elasticsearch, and copy the renamed ca.crt root certificate file into the newly created directory.
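The directory creation and copy can be sketched as a local dry-run; the real commands appear in the comments, and the ca.crt source location is an assumption:

```shell
# Local dry-run of the real steps, staged under a temp directory:
stage=$(mktemp -d)
mkdir -p "$stage/etc/kibana/certs"                    # sudo mkdir -p /etc/kibana/certs
printf 'demo-ca' > "$stage/ca.crt"                    # the renamed root certificate
cp "$stage/ca.crt" "$stage/etc/kibana/certs/ca.crt"   # sudo cp ca.crt /etc/kibana/certs/
ls "$stage/etc/kibana/certs"
```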



Before continuing, we'll allow traffic on tcp/8220 through ufw for Fleet traffic.



Next, we'll navigate back to our Edit Output window and supply the path to the root CA when specifying the SSL certificate authorities value. This allows the Fleet server to trust the self-signed certificate of our Elasticsearch instance, since it has the full SSL chain to verify its identity.



Now that the policy is configured correctly, we can generate a Fleet service token, which will automatically be placed into our installation script. The installation script will need editing for our configuration, but before making those edits, we'll need to create a new certificate for our Fleet server to use when communicating with Elasticsearch.
Back on the Elasticsearch VM, we'll generate a new certificate in .pem format based on the root CA we created earlier, using the elasticsearch-certutil utility. This will create a fleet-server-cert.zip file within our /tmp/ directory, which we'll move to our Kibana/Fleet VM and unzip.









Now that our certificate and key are in place for our Fleet server, we can take our installation script and run it on our Kibana machine, which is also hosting our Fleet server. First, we'll make the following changes to specify our Elasticsearch and Fleet CA, as well as the certificate and key that Fleet will use:
--certificate-authorities=/etc/certs/ca.crt
--fleet-server-es-ca=/etc/certs/ca.crt
--fleet-server-cert=/etc/certs/fleet/fleet.crt
--fleet-server-cert-key=/etc/certs/fleet/fleet.key
Then, after our changes, we can run the command.
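Putting the flags together, the edited enrollment command might look like the sketch below; the service token and policy ID are placeholders generated by Kibana, and the cert paths follow the listing above:

```shell
sudo ./elastic-agent install \
  --fleet-server-es=https://10.0.1.2:9200 \
  --fleet-server-service-token=<SERVICE-TOKEN> \
  --fleet-server-policy=<POLICY-ID> \
  --certificate-authorities=/etc/certs/ca.crt \
  --fleet-server-es-ca=/etc/certs/ca.crt \
  --fleet-server-cert=/etc/certs/fleet/fleet.crt \
  --fleet-server-cert-key=/etc/certs/fleet/fleet.key
```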









After running the installation script, we can see that enrollment has succeeded, and we can verify the Fleet Server instance on the Kibana web interface.
VI: Ubuntu User & Elastic Agent Deployment
Now that we've finished deploying the Fleet server on our Kibana VM, we can install the individual elastic-agent service on our Ubuntu user VM.
From the Fleet Server agent page, we can select the "Add Agent" option to begin.



Within the right-pane window that opens, we'll create the default agent policy that will be pushed to the deployed Elastic Agent.






In the second drop-down, we'll select "Enroll in Fleet".



We'll then be provided with an installation script, which we'll edit to ensure proper functionality with the self-signed certificates used in our Elastic Stack deployment. Before editing the installation script, however, we'll deploy a new VM to function as an Ubuntu-based user workstation. This will be the machine used for the Elastic Agent's enrollment with our Fleet server. The creation of a secure enrollment token satisfies NIST SP 800-53 SC-13 guidance on cryptographic protection.
To begin, we'll navigate back to VMHOST and create a new VM named Ubuntu-User with the following configuration:
OS: Ubuntu 24.04
Memory: 8 GB
Disk space: 127 GB
Networking: Internal Switch Bridged @ 10.0.1.0/28






Next, we'll need to move our Elasticsearch ca.crt file from our Kibana machine to our ubuntu-user machine, since the Fleet server uses this same root CA file in its policy.



After installing Ubuntu, we'll log on, install updates, and begin editing our Elastic Agent installation and enrollment script, which will include:
--fleet-server-es-ca=/opt/certs/ca.crt
--certificate-authorities=/opt/certs/ca.crt
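With those edits, the agent enrollment command might look like the following sketch; the Fleet host and enrollment token are placeholders from the Kibana-generated script:

```shell
sudo ./elastic-agent install \
  --url=https://<FLEET-SERVER-HOST>:8220 \
  --enrollment-token=<ENROLLMENT-TOKEN> \
  --certificate-authorities=/opt/certs/ca.crt \
  --fleet-server-es-ca=/opt/certs/ca.crt
```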
After making our changes, we'll move the ca.crt scp'd from our Kibana VM into the /opt/certs directory and run the enrollment script.






We'll now confirm that Elastic Agent enrollment was successful by logging back into our Kibana VM and checking the Fleet > Agents section, where our ubuntu-user-Virtual-Machine is listed.






To add EDR functionality to our SOAR implementation, we'll use the Elastic Defend integration. From the Kibana search bar, we can navigate to "Integrations", find Elastic Defend, and simply select "Add".



From the next menu, we can supply a name and select "Complete EDR" under the configuration dialog.



Finally, we can push this integration into our existing policy, "Agent Policy 1", and verify that it was pushed to the agent currently running on our Ubuntu user machine.






VII: TheHive Deployment
With the Elastic Defend integration added, we'll finally deploy a triage ticketing system for our security incidents. To fulfill this need, we'll go with the security incident case management software "TheHive" from StrangeBee.
TheHive is an open-source SOAR platform that can help orchestrate some of our workflows using the other tools deployed in this project.
To begin, we'll navigate back to our Hyper-V Manager and deploy a new VM under the name "TheHive" with the following configuration:
OS: Ubuntu Server 24.04
Memory: 8 GB
Disk space: 127 GB
Networking: Internal Switch Bridged @ 10.0.1.0/28



As with our previous Ubuntu-based VMs, we'll disable Secure Boot and increase the number of vCPUs assigned to the system.



After logging on, we'll run updates, domain-join the machine using realmd, and begin configuring our TheHive instance.






To start, we'll follow the StrangeBee docs for configuring our instance, first ensuring that the proper software dependencies are installed. For this, we'll run the command below to install the Java and Python dependencies:
sudo apt install wget gnupg apt-transport-https git ca-certificates ca-certificates-java curl software-properties-common python3-pip lsb-release



Now that the initial dependencies are installed, we'll install Amazon Corretto as our Java virtual machine. We'll then verify our JDK and Corretto version by running java --version.
wget -qO- https://apt.corretto.aws/corretto.key | sudo gpg --dearmor -o /usr/share/keyrings/corretto.gpg
echo "deb [signed-by=/usr/share/keyrings/corretto.gpg] https://apt.corretto.aws stable main" | sudo tee -a /etc/apt/sources.list.d/corretto.sources.list
sudo apt update
sudo apt install java-common java-11-amazon-corretto-jdk
echo JAVA_HOME="/usr/lib/jvm/java-11-amazon-corretto" | sudo tee -a /etc/environment
export JAVA_HOME="/usr/lib/jvm/java-11-amazon-corretto"



TheHive requires a database system, as it potentially needs to handle a large amount of structured and unstructured data. For this, we'll install Apache Cassandra: we'll first import the repository key and add the repository to our package manager's sources list using the commands below, then perform the installation with apt.
wget -qO - https://downloads.apache.org/cassandra/KEYS | sudo gpg --dearmor -o /usr/share/keyrings/cassandra-archive.gpg
echo "deb [signed-by=/usr/share/keyrings/cassandra-archive.gpg] https://debian.cassandra.apache.org 40x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list



After installation, we'll access the cassandra.yaml file and make the following configuration changes:
listen_address: 10.0.1.7
rpc_address: 10.0.1.7
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.1.7"









Next, we'll install Elasticsearch on this machine, since TheHive relies on Elasticsearch to index and search data. As before, we'll import the Elastic PGP key and add the repository definition to our apt sources list using the two commands below, then begin the installation.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list



After installing Elasticsearch, we'll need to make a few changes to its configuration (/etc/elasticsearch/elasticsearch.yml) to properly integrate it with TheHive. We'll make the following changes:
cluster.name: thp
discovery.type: single-node
network.host: 10.0.1.7






We will then start the Elasticsearch service and verify that it is listening on tcp/9200.
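A sketch of that start-and-verify step, assuming systemd and this lab's IP:

```shell
# Start Elasticsearch and confirm a listener on tcp/9200
sudo systemctl enable --now elasticsearch
ss -tln | grep ':9200'
# The HTTP endpoint should answer with cluster JSON naming the "thp" cluster
curl -s http://10.0.1.7:9200
```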



We will store TheHive's log and configuration files on our local machine, and therefore need to create the directories required to use TheHive with the local file system. We can do that by running the following commands, which create a new home directory for TheHive and apply the appropriate permissions.
sudo mkdir -p /opt/thp/thehive/files
sudo chown -R thehive:thehive /opt/thp/thehive/files



Next, we can proceed with installation. We'll add TheHive's repository to our sources list, update our package manager, and install TheHive. Note that StrangeBee's signing key must already have been imported to /usr/share/keyrings/strangebee-archive-keyring.gpg for the signed-by option below to work.
echo 'deb [arch=all signed-by=/usr/share/keyrings/strangebee-archive-keyring.gpg] https://deb.strangebee.com thehive-5.4 main' | sudo tee -a /etc/apt/sources.list.d/strangebee.list
sudo apt-get update
sudo apt-get install -y thehive






After installing, we can begin configuring TheHive from /etc/thehive/application.conf, where we will make a few changes:
Database configuration
Removing the authentication values for Cassandra, as authentication has not yet been configured:
username: thehive
password: password
Setting hostname: [10.0.1.7] within the Cassandra section of TheHive's application.conf
Index configuration
Setting hostname: [10.0.1.7] within the Elasticsearch index section



With these configurations in place, and both Cassandra and Elasticsearch now running, we should be able to start TheHive and visit the interface hosted on tcp/9000 in our web browser.
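Assuming the packaged systemd unit, starting TheHive and checking the interface looks like:

```shell
# Start TheHive and check that the web UI answers on tcp/9000
sudo systemctl enable --now thehive
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.1.7:9000
```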



After logging in with the default credentials, we'll first want to change them for the built-in admin account.



After this is completed, we can begin creating our organization. TheHive uses organizations as a key concept for structuring and managing access to cases, alerts, and other resources. Each organization acts as its own tenant with its own cases, alerts, observables, etc.
We'll create our "Demo Org" organization now by selecting Organizations > Add and setting its name to Demo. Then we'll create the first user within that organization, which we can use to log in with org-admin permissions.
This prevents us from having to log on as the admin user, and allows us to begin triaging cases.



To finalize the initial configuration, we'll log in as the info@smmdevice.local user account to ensure the credentials work.



And it looks good! The login works, and we're able to create a test case.
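Case creation can also be scripted against TheHive 5's REST API, which is useful once we start automating. A minimal sketch, assuming an org-admin API key generated in the UI (the key placeholder and field values are illustrative):

```shell
# Create a test case via TheHive 5 API (POST /api/v1/case)
curl -s -XPOST 'http://10.0.1.7:9000/api/v1/case' \
  -H 'Authorization: Bearer <org-admin-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{"title": "API test case", "description": "Created via the REST API", "severity": 2}'
```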



Conclusion
This concludes our deployment of SOAR security infrastructure and logging capabilities aligned with NIST SP 800-53. In addition to deploying the tools, we've encrypted data in transit within the Elastic Stack, validated via TLS (SC-12); enforced access control policies per ISO 27001; and tested incident response automation within TheHive platform. From here, we can begin building a variety of security automations to deploy test cases within our environment.