Using NXLog to enhance Azure Sentinel’s ingestion capabilities
Published Feb 02 2021 12:42 PM

In this post, the technology we will be examining is the Azure Monitor HTTP Data Collector API, which enables clients, such as the NXLog Enterprise Edition agent, to send events to a Log Analytics workspace, making them directly accessible using Azure Sentinel queries.


We will present two examples of sending logs to Azure Sentinel: in the first one, we send Windows DNS Server logs and in the second one, Linux kernel audit logs. Both of these log sources are of interest from a security perspective.


Proactive monitoring of DNS activity can help network administrators quickly detect and respond to attempted security breaches in DNS implementations that might otherwise lead to data theft, denial-of-service, or other service disruptions related to malicious activity.


In comparison, Linux Audit has a much wider scope and could arguably be called the most comprehensive tool for monitoring and reporting security events on Linux distributions.


About NXLog Enterprise Edition

If you aren’t familiar with the NXLog Enterprise Edition, it is a full-featured log processing agent with a small footprint. It can read and write all standard log formats and integrates with over 70 third-party products. It offers many additional features not found in the free Community Edition. To evaluate the configurations presented in this post, download the appropriate trial edition for your platform. For more information on supported platforms and how to install an agent, see the NXLog Deployment chapter of the NXLog EE User Guide.


Collecting DNS Server logs via Windows Event Tracing

Event Tracing for Windows (ETW) provides not only efficient logging of both kernel and user-mode applications, but also access to the Debug and Analytical channels that are not available through standard Windows Event Log channels (which also carry some DNS Server logs).



The pivotal part of sending secure HTTPS requests to Azure is the authentication process. Azure validates the values of two custom HTTP headers, Authorization and x-ms-date along with the length of the data payload to determine if the request is authentic. The value assigned to the Authorization header is dynamically generated using a cryptographic hash. For details, see the Azure Monitor Authorization section in the Microsoft documentation.
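To illustrate this signing scheme, the following Python sketch builds the Authorization header value from a workspace ID and shared key. The helper name is our own, not part of NXLog or Azure, and the credentials and lengths here are throwaway example values:

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, content_length, date_rfc1123,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    # The string to sign combines the HTTP method, payload length,
    # content type, x-ms-date header, and resource path.
    string_to_sign = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{date_rfc1123}\n{resource}")
    decoded_key = base64.b64decode(shared_key)  # the shared key is Base64-encoded
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"

# Throwaway example values, not real credentials:
demo_key = base64.b64encode(b"not-a-real-key").decode("utf-8")
auth = build_signature("18fb21ab-d8d4-4448-bdf6-3748c9c03135", demo_key,
                       64746, "Thu, 01 Oct 2020 03:06:15 GMT")
print(auth)
```

The NXLog Perl script discussed below performs the equivalent computation, regenerating this single-use string for each batch of events.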


To allow easy integration with the NXLog HTTP(s) (om_http) module that sends events to REST API endpoints, NXLog provides a Perl script that regenerates the single-use authorization string for each new batch of events to be sent.


Capturing ETW events - The input side

NXLog can natively collect ETW logs without the need to capture the trace into an .etl file. Configuring an NXLog agent to capture Windows DNS Server events using the Event Tracing for Windows (im_etw) input module is fairly straightforward as illustrated here:

nxlog.conf (Section: DNS_Logs input instance)



<Input DNS_Logs>
    Module              im_etw
    Provider            Microsoft-Windows-DNSServer
    Exec                to_json();
</Input>




The default location for the NXLog configuration file on Windows is C:\Program Files\nxlog\conf\nxlog.conf. This file is used to configure as many inputs, outputs, and routes as needed for a host. For more information on configuring NXLog in general, see the Configuration Overview in the NXLog User Guide.


Please note that the first (opening) line of the Input block defines the name of this instance as DNS_Logs. The output module for sending events to Azure uses this name for creating the Azure Sentinel table that will collect these events.

The Exec statement on line 4 of the DNS_Logs input instance above invokes the to_json() procedure, which converts the Windows events to JSON records, as required by Azure’s HTTP Data Collector API.


Sending ETW events - The output side
The output module is the part that connects directly to Azure. The first step in configuring the output instance is retrieving the Workspace ID and either the Primary key or the Secondary key (also referred to as the shared key). These keys can be found by navigating in the Azure portal to Log Analytics workspace > Settings > Agents management. The same set of keys can be viewed under either the Windows servers or Linux servers tab.




The next step is to add this information to the nxlog.conf file as constants (see the following code example) making them accessible to the output instance.


The SUBDOMAIN, RESOURCE, and APIVER constants are used to construct the complete URL. The value of SIZELIMIT, the maximum size in bytes of the data payload for each batch of events, can be tuned to your needs; 65000 is the upper limit. Higher values yield better network efficiency, while lower values let events arrive sooner because they do not wait for a large buffer to fill before being sent.
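To make the batching tradeoff concrete, here is a small Python sketch (our own illustration, not NXLog code) of the flush decision: an event is appended to the current batch unless doing so would exceed the limit, in which case the batch is framed as a JSON array and sent, and the pending event starts the next batch:

```python
SIZELIMIT = 65000  # maximum payload size in bytes accepted per request

def should_flush(batch: str, event: str, limit: int = SIZELIMIT) -> bool:
    # Flush when adding the next event (plus a few bytes of
    # delimiter/bracket overhead) would push the payload past the limit.
    return len(batch) + len(event) + 3 > limit

batch = ""
for ev in ['{"EventID": 515}', '{"EventID": 561}']:
    if should_flush(batch, ev):
        payload = "[" + batch + "]"  # frame the batch as a JSON array and send it
        batch = ev                   # the pending event starts the next batch
    else:
        batch = batch + (",\n" if batch else "") + ev
print(batch)
```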


nxlog.conf (Section: Defining Constants)



define WORKSPACE        18fb21ab-d8d4-4448-bdf6-3748c9c03135
define SHAREDKEY        VfIQqBoz6fxmnI/E4PKVPza2clH/YAdJ20RnCDwzHCqCMnobYdM1/dD1+KJ6cI6AkR4xPJlTIWI/jfwPU6QHmw==
define SUBDOMAIN        ods.opinsights.azure.com
define RESOURCE         api/logs
define APIVER           api-version=2016-04-01
define SIZELIMIT        65000




When looking at the entire output instance that uses the HTTP(s) (om_http) module, you can see how batches of events are buffered and then flushed:


nxlog.conf (Section: DNS_Logs output instance)



<Extension plxm>
    Module              xm_perl
    PerlCode            %INSTALLDIR%\modules\extension\perl\
</Extension>

<Output AzureHTTP>
    Module              om_http
    URL                 https://%WORKSPACE%.%SUBDOMAIN%/%RESOURCE%?%APIVER%
    ContentType         application/json
    HTTPSAllowUntrusted TRUE
    HTTPSCAFile         %INSTALLDIR%\cert\ca-certificates.crt
    <Exec>
        create_stat('ec', 'COUNT');
        create_stat('bc', 'COUNT');

        #---BEGIN--- the enrichment of this event with any new fields:
        # The following can be used for debugging batch mode if needed:
        # $BatchNumber = get_stat('bc');
        # $EventNumber = get_stat('ec');
        # to_json();
        #---END--- the enrichment of this event

        if (size(get_var('batch')) + size($raw_event) + 3) > %SIZELIMIT%
        {
            # Flush this batch of events
            set_var('nextbatch', $raw_event);
            $raw_event = '[' + get_var('batch') + ']';
            $Workspace = "%WORKSPACE%";
            $SharedKey = "%SHAREDKEY%";
            $ContentLength = string(size($raw_event));
            $dts = strftime(now(), 'YYYY-MM-DDThh:mm:ssUTC');
            $dts_no_tz = replace($dts, 'Z', '');
            $parsedate_utc_false = parsedate($dts_no_tz, FALSE);
            $x_ms_date = strftime($parsedate_utc_false, '%a, %d %b %Y %T GMT');
        }
        else
        {
            $delimiter = get_stat('ec') == 1 ? '' : ",\n";
            set_var('batch', get_var('batch') + $delimiter + $raw_event);
        }
    </Exec>
</Output>




The values for the three HTTP headers Authorization, Log-Type, and x-ms-date are set in the output instance using the add_http_header() procedure. Log-Type is dynamically set to $SourceModuleName, the name of the input instance we chose at the beginning. Since all REST API events are categorized by Azure Monitor as Custom Logs, Azure appends _CL to the value of Log-Type to prevent naming conflicts with other Azure tables; thus the name we originally chose, DNS_Logs, appears in Azure Sentinel as DNS_Logs_CL.


By leveraging $SourceModuleName for defining Log-Type, we have created a completely generic output instance that can be used with any other log sources.
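As a tiny illustration (the helper function is ours, not an Azure API), the mapping from input instance name to Sentinel table name is simply:

```python
def sentinel_table_name(log_type: str) -> str:
    # Azure Monitor appends _CL ("custom log") to the Log-Type header
    # value when creating the table, so the input instance name chosen
    # in nxlog.conf determines the table name queried in Azure Sentinel.
    return log_type + "_CL"

# The two input instances used in this post map to:
for instance in ("DNS_Logs", "LinuxAudit"):
    print(sentinel_table_name(instance))
```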


Configuration checklist
To prepare for testing, let’s run through the steps needed to ensure success:

  1. Use the output instance from this example in your current NXLog configuration file, C:\Program Files\nxlog\conf\nxlog.conf.
  2. Ensure that you have changed the values of WORKSPACE and SHAREDKEY to match those of your Log Analytics workspace.
  3. Download the Perl script. Copy it to the location defined by the PerlCode directive in the xm_perl instance (plxm, lines 1-4 above) and rename it to
  4. Read about the Windows requirements for Perl in the Perl (xm_perl) in the NXLog Reference Manual.
  5. Once the Perl requirements for Windows have been met, restart the nxlog service via Windows Services.

To test DNS Server logging of audit events, we added an A record for and reloaded the zone. This logs an event with EventID 515 (Record Create) and another one with EventID 561 (Zone Reload).


Now it’s time to log into the Azure Log Analytics workspace that was defined in the DNS_Logs output instance and open Logs. After expanding Custom Logs the DNS_Logs_CL table should be visible. With a simple query, the newly ingested events are visible.
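Such a query can be as simple as selecting the newest rows from the custom table; a minimal example (assuming the default TimeGenerated ingestion column) might be:

```kusto
DNS_Logs_CL
| sort by TimeGenerated desc
| take 10
```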




Expanding the first event’s details shows the complete set of fields and their values:





Temporary output instance

For testing purposes, you may want to add a temporary output instance for validating the integrity of your configuration. This lets you compare the events and their fields with what Azure Sentinel is ingesting. Adding a new output instance named TempFile as an additional destination in the route lets you view, in JSON format, the events that will be stored in the file defined by the File directive.


nxlog.conf (Section: file output instance and modified route)



<Output TempFile>
    Module  om_file
    File    'C:\Program Files\nxlog\data\dnsetw.json'
</Output>

<Route DnsRoute1>
    Path  DNS_Logs => AzureHTTP, TempFile
</Route>




Pretty-printed JSON of the captured DNS Server audit event record



  "SourceName": "Microsoft-Windows-DNSServer",
  "ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
  "EventID": 515,
  "Version": 0,
  "ChannelID": 17,
  "OpcodeValue": 0,
  "TaskValue": 5,
  "Keywords": "4611686018428436480",
  "EventTime": "2020-10-06T10:59:00.795199-05:00",
  "ExecutionProcessID": 1728,
  "ExecutionThreadID": 5012,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "WIN-FFMCPAJ76HP",
  "Domain": "WIN-FFMCPAJ76HP",
  "AccountName": "Administrator",
  "UserID": "S-1-5-21-1830054504-3820897498-340727717-500",
  "AccountType": "User",
  "Type": "1",
  "NAME": "",
  "TTL": "604800",
  "BufferSize": "4",
  "RDATA": "0xC0A8015D",
  "Zone": "",
  "ZoneScope": "Default",
  "VirtualizationID": ".",
  "EventReceivedTime": "2020-10-06T10:59:03.295804-05:00",
  "SourceModuleName": "DNS_Logs",
  "SourceModuleType": "im_etw",
  "DNS_LogType": "Audit"




If you are unable to see any events arriving in your Azure Sentinel table, try these troubleshooting steps:

  • Look at the NXLog internal log file for clues; on Windows it is located at C:\Program Files\nxlog\data\nxlog.log. Success should look like this:



2020-09-30 22:06:15 INFO [om_http|DNS_Logs] Successfully connected to (using URL:
2020-09-30 22:06:15 INFO [om_http|DNS_Logs] Generated from Shared Key and hashed signing string based on:; ContentLength: 64746; x-ms-date: Thu, 01 Oct 2020 03:06:15 GMT; Authorization: SharedKey 18fb21ab-d8d4-4448-bdf6-3748c2c03135:2I2iSNqGZeJZh8QdTPl7Ate2xRLvJbEL6dpa6UL4WKo=
2020-09-30 22:08:19 INFO [om_http|DNS_Logs] Reconnect...



  • The following error message in C:\Program Files\nxlog\data\nxlog.log usually indicates one or more of these three conditions:
    1. First line of the Perl script doesn’t contain use lib 'c:\Program Files\nxlog\data';
    2. Wrong version of Strawberry Perl (only will work)
    3. The presence of a conflicting copy of perl528.dll located in C:\Program Files\nxlog\ that will need to be deleted
      Can't locate in @INC (you may need to install the lib module) (@INC contains:) at C:\Program Files\nxlog\modules\extension\perl\ line 1.
      BEGIN failed--compilation aborted at C:\Program Files\nxlog\modules\extension\perl\ line 1.
      2020-07-30 10:25:39 ERROR [xm_perl|plxm] the perl interpreter failed to parse C:\Program Files\nxlog\modules\extension\perl\
  • Make sure the input instance is correctly configured and that events are actually being captured by adding an additional output instance for logging them to a local temporary file as demonstrated in the Temporary output instance section above.

Including DNS Server analytical logs captured with ETW
If analytical event logging is enabled, you can capture and view DNS Server analytical events with EventIDs ranging from 256 to 286. Technically, no further changes are needed for logging and viewing both audit and analytical events in Azure Sentinel. However, there is one enhancement you might want to implement:


Enrich the schema with a new attribute: DNS_LogType. If you frequently need to differentiate between audit and analytical DNS Server events, querying for a range of EventID values on a regular basis is not only tedious and makes queries less readable, but can also be slower on large data sets. The enhancement is as simple as replacing the original Exec to_json(); with an Exec block that sets the new $DNS_LogType field to either Audit or Analytical, depending on the value of $EventID, before calling to_json(), which then enriches the schema with this new field.


nxlog.conf (DNS_Logs input instance)



<Input DNS_Logs>
    Module              im_etw
    Provider            Microsoft-Windows-DNSServer
    <Exec>
        if $EventID >= 256 and $EventID <= 286 $DNS_LogType = 'Analytical';
        if $EventID >= 512 and $EventID <= 596 $DNS_LogType = 'Audit';
        to_json();
    </Exec>
</Input>
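The same classification can be sketched in Python (an illustration of the rule, not NXLog code):

```python
def dns_log_type(event_id):
    # Mirror the input instance's classification: analytical DNS Server
    # events use EventIDs 256-286, audit events 512-596.
    if 256 <= event_id <= 286:
        return "Analytical"
    if 512 <= event_id <= 596:
        return "Audit"
    return None  # outside both DNS Server ranges

print(dns_log_type(515), dns_log_type(261))
```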






Collecting Linux Audit logs
In this section we examine Linux Audit logs and how they can be sent to Azure Sentinel. Since the prerequisites of data format (JSON), transport (HTTPS REST API with some special headers), and authentication (single-use cryptographic hash) are the same for sending Linux log sources to Azure Sentinel, we are now free to focus on the log source itself and the minor differences between a Windows deployment and a Linux deployment.


The Linux Audit system provides fine-grained logging of security-related events. These logs can provide a wealth of security information: changes to DNS zone files, system shutdowns, attempts to access unauthorized files, and other suspicious activity. The NXLog Enterprise Edition includes the im_linuxaudit module for directly accessing the kernel component of the Audit system. With this module, NXLog can be configured to build audit rules and collect logs without requiring auditd or any other user-space software.


Capturing Linux Audit events - The input side
Let’s take a look at the configuration file to see how the input module is configured and how the rules are defined.


nxlog.conf (Section: LinuxAudit input instance)



<Extension _resolver>
    Module              xm_resolver
</Extension>

<Input LinuxAudit>
    Module              im_linuxaudit
    FlowControl         FALSE
    LoadRule            %INSTALLDIR%/etc/im_linuxaudit.rules
    ResolveValues       TRUE
    Exec                to_json();
</Input>




The default location for the NXLog configuration file on Linux is /opt/nxlog/etc/nxlog.conf.


Instead of defining a small set of audit rules within a Rules block directly in the LinuxAudit input instance, we use the LoadRule directive to load a more comprehensive collection of rules in an audit rule file which is based on the ruleset maintained by the Best Practice Auditd Configuration project.
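For illustration, entries in such a rule file use auditctl-style syntax; a minimal, hypothetical fragment (the paths and key names here are examples only, not taken from the actual ruleset) might look like:

```
# Watch BIND zone and configuration files for writes and attribute changes
-w /etc/bind/ -p wa -k bind-config

# Record changes to the user and group databases
-w /etc/passwd -p wa -k passwd
-w /etc/group -p wa -k group
```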

The xm_resolver module is needed for the ResolveValues directive in the audit input instance, where it is used for resolving some of the numeric values to more human-readable string values.


Sending Linux Audit events - The output side
It should be noted that there are some configuration differences between Linux and Windows, since the NXLog directory structure differs slightly; the PerlCode path is as follows:


nxlog.conf (Section: xm_perl instance)



<Extension plxm>
    Module              xm_perl
    PerlCode            %INSTALLDIR%/lib/nxlog/modules/extension/perl/
</Extension>




Also, the first line of Perl scripts on Linux needs to point to the location of the perl binary.



#!/usr/bin/perl
use strict;
use warnings;
use Log::Nxlog;
use MIME::Base64;




Since the Linux configuration files exhibit only minor differences when compared to their Windows counterparts displayed in the ETW section, we won’t display them here. Instead, you can download them using these links:


Download/view the Linux Perl script.


Once these changes have been implemented and the NXLog service has been restarted, events should be sent to the LinuxAudit_CL Azure Sentinel table, based on the name given to the input instance, LinuxAudit. The following JSON event was triggered and captured according to the very last line in the im_linuxaudit.rules file.


Pretty-printed JSON of the captured Linux Audit event record



  "type": "PATH",
  "time": "2020-10-06T16:58:58.518000+00:00",
  "seq": 72170,
  "item": 1,
  "name": "/etc/bind/zones/",
  "inode": 527881,
  "dev": "fc:02",
  "mode": "file,644",
  "ouid": "root",
  "ogid": "bind",
  "rdev": "00:00",
  "nametype": "CREATE",
  "cap_fp": "0",
  "cap_fi": "0",
  "cap_fe": "0",
  "cap_fver": "0",
  "cap_frootid": "0",
  "EventReceivedTime": "2020-10-06T16:58:58.530798+00:00",
  "SourceModuleName": "LinuxAudit",
  "SourceModuleType": "im_linuxaudit"




Upon successful receipt in the Log Analytics workspace by Azure Monitor, events are further processed and finally ingested by Azure Sentinel where they can be viewed via user-defined queries.




Expanding the following event to reveal its columns and their values lets you verify it against the JSON-formatted event shown above that was sent via the REST API.






Given the configuration samples and use cases presented here, you should now possess the basic information needed to benefit from these additional security monitoring opportunities in your own enterprise. To recap, the main advantages are:

  • Event Tracing for Windows (ETW) offers better performance because it doesn’t need to capture the trace into an .etl file and provides access to Debug and Analytical channels
  • The native NXLog Linux Audit input module works out of the box without the need to install auditd and, when coupled with the NXLog Resolver extension module, can resolve IP addresses as well as group/user IDs to their respective names, making Linux audit logs more intelligible to security analysts
  • A general-purpose output configuration enabling Azure Sentinel to ingest events from multiple, diverse log sources simultaneously, from any host in your enterprise having outbound access to Azure

With thanks to @Ofer_Shezaf for his assistance in understanding Azure Sentinel’s integration capabilities, as well as my colleagues at NXLog, Botond Botyanszki and Tamás Burtics, for their comments, feedback, and encouragement to write this article.


Version history
Last update:
‎Nov 02 2021 06:33 PM