Dec 19 2020 11:42 AM - edited Dec 22 2020 02:48 PM
I've seen various posts across the internet from people trying to get pfSense working with Azure Sentinel, and I wanted to share a project I have been working on myself.
First, I would like to say thank you to some of the great open-source projects out there on GitHub which made this entire process possible. In particular, I would like to call out a3ilson for his awesome work with PFELK; I have used his grok patterns, which parse the pfSense data and add additional context to the messages, such as GeoIP data, rule types, friendly names and more.
Further information about this project and some KQL functions can be found on my GitHub page
Configuration
Ubuntu 18.04-20.04 Server (on-premises)
1. Install Ubuntu Server 20.04 on a Virtual Machine or Computer and update the OS
sudo apt update; sudo apt upgrade -y
2. Disable Swap (Optional) - swap can be disabled for performance and stability.
sudo swapoff -a
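Note that swapoff -a only disables swap until the next reboot. To make the change persistent you would also comment out any swap entries in /etc/fstab; a minimal sketch, assuming a standard fstab layout (a backup is kept as /etc/fstab.bak, review the file afterwards):
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab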
3. Configure the Date/Time Zone
sudo timedatectl set-timezone Europe/London
4. Download and install the public GPG signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
5. Download and install apt-transport-https package
sudo apt install apt-transport-https
6. Add Elasticsearch|Logstash Repositories (version 7+)
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
7. Install Java (OpenJDK 14)
sudo apt install openjdk-14-jre-headless
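You can confirm the Java runtime installed correctly with:
java -version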
Install and Configure MaxMind (Optional)
1. Add MaxMind Repository
This step is optional; if you skip it, the configuration will default to the built-in GeoIP lookups for Elastic.
sudo add-apt-repository ppa:maxmind/ppa
2. Install MaxMind
sudo apt install geoipupdate
3. Configure MaxMind
Create a MaxMind Account
4. Add your Account ID and license key to the configuration file
sudo nano /etc/GeoIP.conf
5. Modify lines 7 & 8 as follows (without < >):
AccountID <Input Your Account ID>
LicenseKey <Input Your LicenseKey>
6. Modify line 13 as follows:
EditionIDs GeoLite2-City GeoLite2-Country GeoLite2-ASN
7. Modify line 18 as follows:
DatabaseDirectory /usr/share/GeoIP/
8. Download the MaxMind Databases
sudo geoipupdate
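To check that the databases downloaded successfully into the directory configured above:
ls -l /usr/share/GeoIP/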
9. Add a cron entry (automatically updates MaxMind every week on Sunday at 1700hrs). Files in /etc/cron.weekly are executed as scripts by run-parts rather than read as crontab entries, so edit the root crontab instead:
sudo crontab -e
10. Add the following line and save/exit
00 17 * * 0 geoipupdate
Logstash Configuration
1. Install Logstash
sudo apt update && sudo apt install logstash
2. Create Required Directories
sudo mkdir /etc/logstash/conf.d/{databases,patterns,templates}
3. Download the following configuration files (Required)
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/01-inputs.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/02-types.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/03-filter.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/05-firewall.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/07-interfaces.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/10-apps.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/30-geoip.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/45-cleanup.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/50-outputs.conf -P /etc/logstash/conf.d/
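To confirm the required files all downloaded correctly, list the configuration directory:
ls -l /etc/logstash/conf.d/*.conf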
4. Download the following configuration files (Optional)
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/pfSense/35-rules-desc.conf -P /etc/logstash/conf.d/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/36-ports-desc.conf -P /etc/logstash/conf.d/
5. Download the grok pattern (Required)
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/patterns/pfelk.grok -P /etc/logstash/conf.d/patterns/
6. Download the Database(s) (Optional)
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/databases/rule-names.csv -P /etc/logstash/conf.d/databases/
sudo wget https://raw.githubusercontent.com/noodlemctwoodle/pfsense-azure-sentinel/main/Logstash-Configuration/etc/logstash/conf.d/databases/service-names-port-numbers.csv -P /etc/logstash/conf.d/databases/
These databases are required if you carried out the optional step 4 above.
7. Configure Firewall Rule Database (Optional)
In pfSense, go to Diagnostics -> Command Prompt
Enter the following command in the Execute Shell Command box and click the Execute button
pfctl -vv -sr | grep label | sed -r 's/@([[:digit:]]+).*(label "|label "USER_RULE: )(.*)".*/"\1","\3"/g' | sort -V -u | awk 'NR==1{$0="\"Rule\",\"Label\""RS$0}7'
The results will look something like this:
"55","NAT Redirect DNS"
"56","NAT Redirect DNS"
"57","NAT Redirect DNS TLS"
"58","NAT Redirect DNS TLS"
"60","BypassVPN"
Copy the entire output to your clipboard and paste it into rule-names.csv as follows:
"Rule","Label"
"55","NAT Redirect DNS"
"56","NAT Redirect DNS"
"57","NAT Redirect DNS TLS"
"58","NAT Redirect DNS TLS"
"60","BypassVPN"
8. Update the Logstash configuration
Go back to the server where you installed Logstash.
sudo nano /etc/logstash/conf.d/databases/rule-names.csv
9. Paste the results from pfSense into the first blank line after "0","null"
Example:
"0","null"
"1","Input Firewall Description Here
You must repeat steps 7-9 whenever you add new rules in pfSense, and then restart Logstash.
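Logstash can be restarted with the same systemctl command used later in this guide:
sudo systemctl restart logstash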
10. Update firewall interfaces
Amend the 05-firewall.conf file
sudo nano /etc/logstash/conf.d/05-firewall.conf
Adjust the interface name(s) to correspond with your hardware. In the example below, the interface igb0 is given the alias "WAN" and the friendly name "ISP Provider". Add or remove sections depending on the number of interfaces; a sketch of a second section follows the example below.
### Change interface as desired ###
if [interface][name] =~ /^igb0$/ {
  mutate {
    add_field => { "[interface][alias]" => "WAN" }
    add_field => { "[network][name]" => "ISP Provider" }
  }
}
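If you have more than one interface, repeat the block with the relevant interface name. A sketch for a hypothetical second interface (igb1, aliased as "LAN" with a friendly name of "Internal Network") might look like this:
### Example: hypothetical second interface ###
if [interface][name] =~ /^igb1$/ {
  mutate {
    add_field => { "[interface][alias]" => "LAN" }
    add_field => { "[network][name]" => "Internal Network" }
  }
}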
Forwarding pfSense Logs to Logstash
1. In pfSense, navigate to Status -> System Logs -> Settings
2. Review the General Logging Options
3. General Logging Options > Log firewall default blocks (optional)
4. Under Remote Logging Options, enable remote logging and set the remote log server to the IP address of the Logstash host, using the syslog port defined in 01-inputs.conf
Install Log Analytics Plugin
1. Run the command to install the Azure Log Analytics plugin
sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-azure_loganalytics
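You can verify the plugin was installed by listing the installed plugins (a standard logstash-plugin subcommand):
sudo /usr/share/logstash/bin/logstash-plugin list | grep azure_loganalytics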
2. Configure the output
sudo nano /etc/logstash/conf.d/50-outputs.conf
Amend the output to match your Sentinel workspace
output {
  azure_loganalytics {
    customer_id => "<OMS WORKSPACE ID>"
    shared_key => "<CLIENT AUTH KEY>"
    log_type => "<LOG TYPE NAME>"
  }
}
Example:
output {
  azure_loganalytics {
    customer_id => "1234567-7654321-345678-12334445"
    shared_key => "kflsdjkgfslfjsdf0ife0f0efe0-09f0we9f-ef-w00e-0w-f0w-0fwe-f0d0-w=="
    log_type => "pfsense_logstash"
  }
}
3. Restart Logstash
sudo systemctl restart logstash
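If you want Logstash to start automatically at boot, enable the service as well:
sudo systemctl enable logstash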
4. If you run into any problems, use the Logstash plain log to troubleshoot
cat /var/log/logstash/logstash-plain.log
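It can also help to validate the pipeline configuration; the standard --config.test_and_exit flag checks the configuration referenced by /etc/logstash without starting the pipeline:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit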
5. Wait for data to show in Azure Sentinel
Query the data
1. Using KQL we can now query the data
// pfSense GeoIP Traffic
pfsense_logstash_CL
| where TimeGenerated > ago(1m)
| where tags_s contains "GeoIP"
| project TimeGenerated, interface_alias_s, network_name_s, interface_name_s, source_ip_s, source_port_s, source_geo_region_name_s, source_geo_country_iso_code_s,
source_geo_country_name_s, destination_ip_s, destination_port_s, destination_geo_region_name_s, destination_geo_country_code3_s,
network_direction_s, event_action_s, event_reason_s, ruleName, destination_service_s, network_transport_s
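As a further sketch, the same custom log table can be summarised, for example to count blocked events by source country over the last 24 hours. This assumes the pf action is logged as "block" in event_action_s; adjust the filter to the values you see in your own data.
// Blocked traffic by source country (last 24 hours)
pfsense_logstash_CL
| where TimeGenerated > ago(24h)
| where event_action_s == "block"
| summarize Events = count() by source_geo_country_name_s, destination_port_s, network_transport_s
| order by Events desc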
Mar 18 2021 03:51 PM
@TS-noodlemctwoodle the links in section "3. Download the following configuration files (Required)" are no longer working.
Mar 18 2021 03:52 PM - edited Mar 18 2021 03:54 PM
@George__Wilburn There is an updated guide on my github 🙂
Mar 29 2023 08:56 AM
Nice Guide @TS-noodlemctwoodle !
Does anyone know if there will be an adapter or service for Sentinel that will allow native integration for platforms that cannot natively send anything other than syslog? To have such an awesome product as Sentinel not be able to ingest one of the oldest (if not the oldest) standardized logging capabilities seems a bit unusual. And yes, I know I can stand up a VM to bridge the gap, but that really seems to run counter to cloud patterns. Paying to run an OS or even a container as a log adapter just seems very 90's.