Microsoft Sentinel Blog

Ingesting Alien Vault OTX Threat Indicators into Azure Sentinel

Matt Egen
Microsoft
Jan 03, 2020

**UPDATE** 11/4/2022: Please note that to enable this capability in Sentinel, you will need to ensure that you've enabled the "Threat Intelligence Platforms" data connector.

One of the key capabilities of Azure Sentinel has always been its ability to work with data from multiple sources, including threat indicator providers that can feed their data directly into the environment via the Microsoft Security Graph. But what if you have a source of indicators or other enrichment data that you want to use in Azure Sentinel, but no connector to ingest it with? Ofer Shezaf has written a great blog post about creating custom connectors, and Ian Hellen wrote an outstanding blog about using OTX data in Jupyter Notebooks in Sentinel; this post expands on their work by walking through building a custom Sentinel Playbook (Azure Logic App) that connects to AlienVault's Open Threat Exchange (OTX) REST API to ingest threat indicators for use in hunting and alerts. While this blog is specifically about AlienVault OTX, you could use the same methodology with almost any API-based data source.

What is OTX?

OTX is an open community sharing various indicators of compromise (IOCs) such as IP addresses, domains, hostnames, URLs, file hashes (SHAs), etc. For this example, we're going to limit our ingestion to just IPs, URLs, and hostnames, but many of the IOCs in OTX can be imported into Azure Sentinel and Microsoft Defender ATP as indicators.

Establish an OTX account

To utilize the OTX API feed, you’ll want to head over to https://otx.alienvault.com/ to establish an account. Once you’ve signed up you will be able to access detailed documentation as well as your API key via the dashboard. On the dashboard, select the “API Integration” link to get to your API key.

This section of the panel is also where you’ll be able to confirm from the OTX side that your connection is functional.

Create a new playbook in Sentinel

Now that we have a key for the OTX API, we’re going to need to create a new Playbook in Sentinel. To start, navigate to the Playbooks tab in Sentinel and select “Add Playbook”.

Give your playbook a descriptive name and select the correct Azure Subscription to attach it to. For the Resource Group field, you can either create a new Resource Group or attach it to an existing one. The best practice would be to attach it to the same Resource Group you're using for Sentinel (you can determine the Resource Group for your Sentinel instance by going to Settings, then Workspace Settings, and then selecting "Properties"). Finally, choose the geographic location you wish your Playbook to run in. After clicking "Create", your new Playbook will be added to the Playbooks tab and you will be taken to the Logic Apps Designer workspace.

Configure your Playbook

Since we're going to be creating a custom connector, we're going to be manually defining the values for our Playbook. To do this, select "Blank Logic App".

Select a trigger [manual or scheduled]

As you can see, there are multiple options available for us to choose from. In this case, we’re going to choose a “Scheduled” trigger. Scheduled triggers come in two flavors:

  • “Recurrence” where the trigger will fire on a regular basis, and
  • “Sliding Window” where the triggers are a series of fixed-sized, non-overlapping, and contiguous time intervals from a specified start time.

For this example, we're going to use a simple Recurrence trigger and set the frequency to 1 day.
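If you peek at the Logic App's code view, the Recurrence trigger above corresponds to a "triggers" entry roughly like the following. This is just a sketch; property names follow the workflow definition language that the designer generates for you:

```json
{
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": {
        "frequency": "Day",
        "interval": 1
      }
    }
  }
}
```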

 

Get the data

Great! Now that we've defined when we want to retrieve our data, it's time to go get it. Click the "New Step" button below the Trigger. This will present us with the "Choose an action" window to choose our next step. We're going to be seeing more of this page later, so I'll only include it this once. For retrieving the OTX data, we're going to choose the "HTTP Built-in" connector and then the "HTTP" action.

This will then open the parameters page for the HTTP action.

We’re going to use the following settings for this connection:

  • Method: GET
  • URI: https://otx.alienvault.com/api/v1/indicators/export
  • Headers: The headers field is broken out into name / value fields. In the first field enter “X-OTX-API-KEY” (minus the quotes) and in the second field enter your API key from the OTX Dashboard.
  • Queries: This is where we’re going to add in the parameters of the actual query itself. These are defined in the “Docs” page for OTX and we’re using the “indicators/export” call. This has a number of parameters available, but we’re only going to be using two of them.
    • modified_since: This is an ISO format datetime string. For this scenario, we’re taking advantage of an Expression in LogicApps. Expressions allow us to create programmatic values in lieu of fixed values. We can use the GUI to click each of the expression variables or just enter it directly. In this case we’re using the expression “addDays(utcNow(),-1)”. Since we’re running our Trigger to fire once a day, we’re going to look at new indicators since the last time we ran.
    • types: These are the indicator types that we want to retrieve from the OTX feed. OTX has a lot of different types, but for this example we’re going to use the domain, hostname, and IPv4 types. You can get a JSON file of all of the supported types by navigating to https://otx.alienvault.com/api/v1/pulses/indicators/types.
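To make the request concrete, here is a small Python sketch of the same call the HTTP action makes. The helper function is hypothetical; the endpoint, header name, and query parameters come from the OTX API documentation referenced above:

```python
import datetime
import urllib.parse
import urllib.request

def build_otx_export_request(api_key, days_back=1, types="IPv4,domain,hostname"):
    """Build the same GET request the Playbook's HTTP action makes (sketch)."""
    # Equivalent of the addDays(utcNow(),-1) expression in the Logic App
    modified_since = (
        datetime.datetime.utcnow() - datetime.timedelta(days=days_back)
    ).isoformat()
    query = urllib.parse.urlencode(
        {"modified_since": modified_since, "types": types}
    )
    return urllib.request.Request(
        "https://otx.alienvault.com/api/v1/indicators/export?" + query,
        headers={"X-OTX-API-KEY": api_key},  # your key from the OTX dashboard
    )

req = build_otx_export_request("YOUR-OTX-API-KEY")  # placeholder key
# To actually fetch: json.load(urllib.request.urlopen(req))
```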

At this point it’s probably a good call to save our work as well as test our connection to see if we’re getting back the data we want. This is also a good time to collect an example of the output of the call as we’re going to use that to build a schema for the next step. To save our work, just click the “Save” button. This will now enable the “Run” button which we can click to have our connection fire. Assuming everything went as expected, we should see a page like the following:


You’ll want to copy the “Body” section (highlighted above) to use in the next step.

(Note: If the “Body” section is empty, it may just be that there haven’t been any new indicators added in the last day. You might want to consider removing or changing the “modified_since” parameter to get a list of indicators)

Parse the data

Now that we have our HTTP data connection, it’s time to parse the JSON that’s returned. If you did a test run of the Playbook so far, click the “Designer” button to go back to the designer page. Now select the “New step” button that is below the HTTP section. From the page that opens, choose “Data Operations” and then “Parse JSON”. In the panel that opens, you’ll see two fields: Content and Schema. Clicking in the “Content” field will open the Dynamic Content flyout panel from which we’re going to select the pre-built “Body” option. This is telling the Parse JSON connector to parse the body content from the HTTP connector we defined earlier.

For the Schema field, we're going to use the body data that we copied earlier. Below the Schema box, there is a link to "Use sample payload to generate schema". Click that link, paste the body data into the "Enter or paste a sample JSON payload" box, and then click "Done".

This will create the schema for the data:

One small change I had to make to the generated schema was to replace the "next" value with an empty set of braces, instead of the values the schema generator created, to account for scenarios where the "next" link isn't populated.
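For reference, a schema built this way ends up looking roughly like the following abridged sketch (your generated schema will include whatever fields appear in your sample payload; note the empty braces for "next" and "previous"):

```json
{
  "type": "object",
  "properties": {
    "count": { "type": "integer" },
    "next": {},
    "previous": {},
    "results": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "id": { "type": "integer" },
          "indicator": { "type": "string" },
          "type": { "type": "string" }
        }
      }
    }
  }
}
```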

Send the data to the Graph

Now that we've connected to the OTX API, retrieved our data, and parsed it, we need to send it to the Microsoft Security Graph API. The Microsoft Graph supports the ingestion of Threat Intelligence Indicators (tiIndicators), which can be shared with both Azure Sentinel and Microsoft Defender ATP. Let's add this functionality to the ingestion playbook we just created. Because we're writing to the Graph, we first need to create an application registration in Azure AD that has the "ThreatIndicators.ReadWrite.OwnedBy" permission. The process of creating a new application is very well documented, so I am not going to reproduce it in detail here; instead, see the docs.microsoft.com page Walkthrough: Register an app with Azure Active Directory. Just make sure that when you grant the application permissions, you give it the "ThreatIndicators.ReadWrite.OwnedBy" permission. Once you've registered the application, we're going to need three pieces of information: the Tenant ID, the Application (client) ID, and the Client Secret.

The first thing we’re going to do is add a “Switch” step after the Parse JSON step. Click the “+” symbol after the Parse JSON step. From the options presented choose “Control” and then select “Switch”. A Switch statement allows us to make a branching action based on the value of a field.

For this Switch block, we’re going to evaluate the “type” field from our parsed JSON data, so click in the “Choose a value” field and select the “type” value from the JSON dynamic data set:

When you select the “type” field from the Parse JSON step, the Logic App page is going to embed the Switch block into a “For-Each” control flow block. This is because we’re going to be iterating over each of the records returned from the OTX API and Logic Apps is smart enough to realize this and automatically take care of this for us.

Now that we know what we're going to switch our actions on, we need to choose actions for each type of data we want to act on. I'm only going to walk through creating the switch case for the URL response, but the other data types use the same pattern, so you can reproduce it for them.
[added 1/26/2022]: Please note that for this code to function properly you will want to create a switch case for each of the data types you are requesting from AlienVault OTX. For example, if you requested "domain" indicators, then you will need a switch case for domain; if you requested "IPv4", then you'll need a case for IPv4, and so on. (thanks emilec)

In the Case window, click on the "Equals" field and enter "URL" (without the quotes). The Switch comparison is case sensitive, so we need to make sure we're using the exact case that's returned, and for URLs from OTX it's uppercase.

Now we just need to add an action. We’re already familiar with the HTTP API call to get data from OTX, and we’re going to use it again here to put data into the Microsoft Graph.
Select the HTTP action from the actions list.

Just like we did when we connected to OTX, we’re going to need to supply some values to the HTTP connector as well as the Body of the request:

  • Method: POST
  • URI: https://graph.microsoft.com/beta/security/tiIndicators
  • Headers: The headers field is broken out into name / value fields. In the first field enter “content-type” (minus the quotes) and in the second field enter “application/json” (again, without the quotes)
  • Body: Here is where we make the actual API call into the Graph. By way of example, here is what I am using for the Sentinel connection (you can see what it looks like in the playbook in the image above).  These are the minimum required fields for the tiIndicators data type.  You can add more fields as appropriate for your use case:

{
  "action": "alert",
  "activityGroupNames": [],
  "confidence": 0,
  "description": "OTX Threat Indicator - @{items('For_each')?['type']}",
  "expirationDateTime": "@{addDays(utcNow(),7)}",
  "externalId": "@{items('For_each')?['id']}",
  "killChain": [],
  "malwareFamilyNames": [],
  "severity": 0,
  "tags": [],
  "targetProduct": "Azure Sentinel",
  "threatType": "WatchList",
  "tlpLevel": "white",
  "url": "@{items('For_each')?['indicator']}"
}

The "expirationDateTime" value uses an expression to expire the custom TI indicator seven days after ingestion.

"tlpLevel" refers to the Traffic Light Protocol (https://www.us-cert.gov/tlp), which defines the shareability of the information / indicator. Since this is all public information, I went ahead and hardcoded my entries to "white", which means unrestricted distribution.

There are other values that we could be supplying in the Threat Intelligence Indicators (tiIndicators) call, like the “Diamond Model” or “Kill Chain” values, however for this example I am just using the required minimums.
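To show how the body above maps from an OTX record, here is a hypothetical Python equivalent of the Playbook's expressions for a URL indicator. The field names are the tiIndicators properties shown above; the sample record is made up for illustration:

```python
import datetime
import json

def build_ti_indicator(item, days_valid=7):
    """Mirror of the Playbook's HTTP body for a URL indicator (sketch)."""
    # Equivalent of the addDays(utcNow(),7) expiration expression
    expires = datetime.datetime.utcnow() + datetime.timedelta(days=days_valid)
    return {
        "action": "alert",
        "activityGroupNames": [],
        "confidence": 0,
        "description": f"OTX Threat Indicator - {item['type']}",
        "expirationDateTime": expires.isoformat() + "Z",
        "externalId": str(item["id"]),
        "killChain": [],
        "malwareFamilyNames": [],
        "severity": 0,
        "tags": [],
        "targetProduct": "Azure Sentinel",
        "threatType": "WatchList",
        "tlpLevel": "white",
        "url": item["indicator"],
    }

# Made-up OTX record, shaped like one entry from the "results" array
sample = {"id": 12345, "type": "URL", "indicator": "http://malicious.example/path"}
payload = build_ti_indicator(sample)
print(json.dumps(payload, indent=2))
```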

Since we’re writing to the graph though, we also need to provide our authentication information. We get this information from the Azure AD application we registered earlier.

  • Authentication: Active Directory OAuth
  • Tenant: Your Azure tenant ID
  • Audience: https://graph.microsoft.com
  • Client ID: The client id from the Azure AD application registration you did earlier.
  • Credential Type: Secret
  • Secret: The secret from the Azure AD application registration you did earlier.
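Behind the scenes, the "Active Directory OAuth" setting performs a client-credentials token request with those three values. As a hypothetical sketch of what that exchange looks like (this uses the v2.0 token endpoint, where the Graph audience is expressed as a scope):

```python
import urllib.parse
import urllib.request

def build_token_request(tenant_id, client_id, client_secret):
    """Client-credentials token request the Logic App makes for you (sketch)."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Audience https://graph.microsoft.com, expressed as a v2.0 scope
        "scope": "https://graph.microsoft.com/.default",
    }).encode()
    return urllib.request.Request(
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        data=body,
        method="POST",
    )

# Placeholder values; use your app registration's actual IDs and secret
req = build_token_request("TENANT-ID", "CLIENT-ID", "CLIENT-SECRET")
# urllib.request.urlopen(req) would return JSON containing an access_token
```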

Once you’ve created this step we’re done and the connector is ready to test.

This first connector makes the TI indicators available only to Sentinel; however, you could create another HTTP action to supply the indicators to Microsoft Defender ATP. You must separate these calls because the API requires the "targetProduct" value to be set to either "Microsoft Defender ATP" or "Azure Sentinel". A really cool benefit of sending the data to MDATP as well as to Azure Sentinel is that if you change the "action" parameter from "alert" to "block" and you've enabled Network Protection, your Windows 10 clients (v1709 and higher) enrolled in MDATP will automatically be blocked from accessing those URLs! Making the MDATP connector is the same as making the Azure Sentinel connector, except for a minor tweak for IP addresses: Microsoft Defender ATP supports destination IPv4/IPv6 only, so for IPv4/IPv6 indicators you need to set the "networkDestinationIPv4" or "networkDestinationIPv6" properties. To add a second REST API call to the Graph, just click the "+" sign after the Sentinel API call, and make sure to change the "targetProduct" field value to "Microsoft Defender ATP".
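As a sketch, the IPv4 case for MDATP might use a body like the following. It follows the same pattern as the Sentinel body shown earlier, with the "url" field swapped for "networkDestinationIPv4" and the product and action changed; adjust it to your own requirements:

```json
{
  "action": "block",
  "activityGroupNames": [],
  "confidence": 0,
  "description": "OTX Threat Indicator - @{items('For_each')?['type']}",
  "expirationDateTime": "@{addDays(utcNow(),7)}",
  "externalId": "@{items('For_each')?['id']}",
  "killChain": [],
  "malwareFamilyNames": [],
  "networkDestinationIPv4": "@{items('For_each')?['indicator']}",
  "severity": 0,
  "tags": [],
  "targetProduct": "Microsoft Defender ATP",
  "threatType": "WatchList",
  "tlpLevel": "white"
}
```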

Run queries against the data

Now that we have our enrichment data, how can we use it? Of course, that depends on what your other data sources are as well as what you're looking for. We have a GitHub repository of really great queries that utilize the threat intelligence indicators. Here's one query I wrote that takes the malicious IP addresses and checks whether any of them show up in my Azure AD SigninLogs. This can be done pretty simply with the following query:

 

let ipIndicators = ThreatIntelligenceIndicator
| where NetworkIP != ""
| project IPAddress = NetworkIP;
ipIndicators
| join (SigninLogs) on IPAddress

 

What this query is doing is creating a temporary table ("ipIndicators") that is composed of just the IPv4 addresses from the ThreatIntelligenceIndicator table. This is then joined to the SigninLogs table using IPAddress as the key for the join (i.e., where the field values match in both tables).

 

Happy hunting!

 

Matt Egen @FlyingBlueMonki
Cybersecurity Solutions Group

References

Traffic Light Protocol: https://www.us-cert.gov/tlp

Diamond Model of Intrusion Analysis: https://apps.dtic.mil/docs/citations/ADA586960

Lockheed Martin Cyber Kill Chain: https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html

Azure Sentinel: Creating Custom Connectors: https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-creating-custom-connectors/ba-p/864060

Using Threat Intelligence in your Jupyter Notebooks: https://techcommunity.microsoft.com/t5/azure-sentinel/using-threat-intelligence-in-your-jupyter-notebooks/ba-p/860239

 

Updated Nov 04, 2022
Version 4.0