<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Data Explorer Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/bg-p/AzureDataExplorer</link>
    <description>Azure Data Explorer Blog articles</description>
    <pubDate>Sat, 18 Apr 2026 16:16:17 GMT</pubDate>
    <dc:creator>AzureDataExplorer</dc:creator>
    <dc:date>2026-04-18T16:16:17Z</dc:date>
    <item>
      <title>Introducing Data Series Colors: tell clearer stories with ADX Dashboards</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/introducing-data-series-colors-tell-clearer-stories-with-adx/ba-p/4495831</link>
      <description>&lt;P&gt;A frequent request we receive from dashboard editors is the ability to have control over color settings.&lt;/P&gt;
&lt;P&gt;Until now, color assignments in ADX dashboards were largely automatic. While this worked for basic scenarios, it often fell short in operational and reporting use cases where color isn’t decoration - it’s meaning. Today, we’re introducing&amp;nbsp;&lt;STRONG&gt;Data Series Colors&lt;/STRONG&gt;, a new capability that gives editors direct control over how colors are applied to their visuals.&lt;/P&gt;
&lt;P&gt;With Data Series Colors, dashboard editors can now:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Assign colors directly to each data series&lt;/LI&gt;
&lt;LI&gt;Override system defaults with intentional choices&lt;/LI&gt;
&lt;LI&gt;Maintain consistency across visuals and dashboards&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This feature is supported across pie charts, time charts, line charts, area charts, bar charts, column charts, anomaly and scatter charts, covering the most used visualization types.&lt;/P&gt;
&lt;H2&gt;Using color to convey meaning - and tell a story&lt;/H2&gt;
&lt;P&gt;When colors are assigned intentionally, users no longer need to read legends or labels to understand what is happening. A spike, a drop, or a comparison immediately carries context. Over time, viewers learn the language of the dashboard - what each color represents, what’s normal, and what needs attention.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 55.8333%; height: 395px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 39px;"&gt;&lt;td style="height: 39px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Before&lt;/STRONG&gt;:&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 39px;"&gt;
&lt;P&gt;&lt;STRONG&gt;After&lt;/STRONG&gt;:&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 356px;"&gt;&lt;td style="height: 356px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 356px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.4737%" /&gt;&lt;col style="width: 49.4727%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This makes dashboards more than monitoring tools. They become a way to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Highlight what matters most.&lt;/LI&gt;
&lt;LI&gt;Reinforce shared understanding across teams.&lt;/LI&gt;
&lt;LI&gt;Present a clear narrative with each visual.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more by exploring the documentation: &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/dashboard-customize-visuals#data-series-colors" target="_blank"&gt;Customize Azure Data Explorer Dashboard Visuals - Azure Data Explorer | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 10:55:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/introducing-data-series-colors-tell-clearer-stories-with-adx/ba-p/4495831</guid>
      <dc:creator>Michal_Bar</dc:creator>
      <dc:date>2026-02-23T10:55:16Z</dc:date>
    </item>
    <item>
      <title>Billing Announcement: Azure Data Explorer Storage Adjustment (November 2025)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/billing-announcement-azure-data-explorer-storage-adjustment/ba-p/4467432</link>
      <description>&lt;P&gt;It was recently discovered that some Azure Data Explorer clusters were being undercharged for the amount of data stored. This was corrected on November 3&lt;SUP&gt;rd&lt;/SUP&gt;, which may result in an increase in Azure Data Explorer cluster storage costs for some customers. Below are the two meters affected by this change.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Meter Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Cluster Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Hot LRS Data Stored&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single Availability Zone&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Hot ZRS Data Stored&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Multiple Availability Zones&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This does not affect any other meter categories, but it’s possible that you’ll see an increase in storage costs around November 3&lt;SUP&gt;rd&lt;/SUP&gt;. We apologize for any inconvenience this might cause.&lt;/P&gt;
</description>
      <pubDate>Thu, 06 Nov 2025 17:52:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/billing-announcement-azure-data-explorer-storage-adjustment/ba-p/4467432</guid>
      <dc:creator>bwatts670</dc:creator>
      <dc:date>2025-11-06T17:52:15Z</dc:date>
    </item>
    <item>
      <title>Labeling Kusto Data in Azure Managed Grafana for Machine Learning Workflows</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/labeling-kusto-data-in-azure-managed-grafana-for-machine/ba-p/4419073</link>
      <description>
&lt;P data-sourcepos="3:1-3:333"&gt;In today's data-driven world, the quality of machine learning models heavily depends on the quality of labeled training data. Whether you're detecting anomalies in manufacturing processes, identifying potential health risks, or analyzing weather patterns, having properly labeled datasets is crucial for building reliable AI systems.&lt;/P&gt;
&lt;P data-sourcepos="5:1-5:241"&gt;This blog post demonstrates how to create an interactive labeling system for Kusto (Azure Data Explorer) data using Azure Managed Grafana, enabling subject matter experts to efficiently label large datasets for machine learning applications.&lt;/P&gt;
&lt;H2 data-sourcepos="7:1-7:28"&gt;Why Data Labeling Matters&lt;/H2&gt;
&lt;P data-sourcepos="9:1-9:114"&gt;Labeled data serves as the foundation for supervised machine learning models. Consider these real-world scenarios:&lt;/P&gt;
&lt;UL data-sourcepos="11:1-17:0"&gt;
&lt;LI data-sourcepos="11:1-11:176"&gt;&lt;STRONG&gt;Healthcare&lt;/STRONG&gt;: Medical professionals labeling diagnostic test results to train models that can detect false positives in virus tests or identify anomalies in medical imaging&lt;/LI&gt;
&lt;LI data-sourcepos="12:1-12:134"&gt;&lt;STRONG&gt;Manufacturing&lt;/STRONG&gt;: Quality engineers marking defective products in chip production data to build automated quality control systems&lt;/LI&gt;
&lt;LI data-sourcepos="13:1-13:94"&gt;&lt;STRONG&gt;Finance&lt;/STRONG&gt;: Fraud analysts labeling suspicious transactions to train fraud detection models&lt;/LI&gt;
&lt;LI data-sourcepos="14:1-14:103"&gt;&lt;STRONG&gt;Weather Monitoring&lt;/STRONG&gt;: Meteorologists categorizing storm events to improve weather prediction models&lt;/LI&gt;
&lt;LI data-sourcepos="15:1-15:122"&gt;&lt;STRONG&gt;Automotive&lt;/STRONG&gt;: Engineers labeling sensor data from vehicle testing to identify component failures or performance issues&lt;/LI&gt;
&lt;LI data-sourcepos="16:1-17:0"&gt;&lt;STRONG&gt;Retail&lt;/STRONG&gt;: Analysts categorizing customer behavior patterns to improve recommendation engines&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-sourcepos="18:1-18:96"&gt;Without accurate labels, even the most sophisticated algorithms will produce unreliable results.&lt;/P&gt;
&lt;H2 data-sourcepos="20:1-20:24"&gt;Architecture Overview&lt;/H2&gt;
&lt;P&gt;In this solution, large time series datasets stored in an Azure Data Explorer (Kusto) database are enriched with labels. The labeling metadata is maintained in an Azure SQL Database and updated through the&amp;nbsp;&lt;STRONG&gt;Volkov Labs Business Table&lt;/STRONG&gt;&amp;nbsp;plugin in Grafana. To provide a unified view, we use&amp;nbsp;&lt;STRONG&gt;external tables&lt;/STRONG&gt;&amp;nbsp;in Kusto to seamlessly combine the raw time series data with the labeling information stored in SQL.&lt;/P&gt;
&lt;P&gt;In summary, the architecture integrates three key Azure services:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Data Explorer (Kusto):&lt;/STRONG&gt;&amp;nbsp;Stores the large-scale time series data that needs to be labeled.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure SQL Database:&lt;/STRONG&gt;&amp;nbsp;Holds the labeling metadata, which can be updated interactively.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Managed Grafana:&lt;/STRONG&gt; Serves as the user interface for labeling, enabling users to view, assign, and update labels directly from the dashboard.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;[Image: High-level architecture]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-sourcepos="30:1-30:16"&gt;Prerequisites&lt;/H2&gt;
&lt;P data-sourcepos="32:1-32:33"&gt;Before starting, ensure you have:&lt;/P&gt;
&lt;UL data-sourcepos="33:1-37:0"&gt;
&lt;LI data-sourcepos="33:1-33:23"&gt;An Azure SQL Database&lt;/LI&gt;
&lt;LI data-sourcepos="34:1-34:57"&gt;A Kusto database (Running on Azure or Microsoft Fabric)&lt;/LI&gt;
&lt;LI data-sourcepos="35:1-35:36"&gt;An Azure Managed Grafana workspace&lt;/LI&gt;
&lt;LI data-sourcepos="36:1-37:0"&gt;Appropriate permissions on all services&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-sourcepos="38:1-38:28"&gt;Demo Dataset: StormEvents&lt;/H2&gt;
&lt;P data-sourcepos="40:1-40:328"&gt;For this demonstration, we'll use the&amp;nbsp;StormEvents&amp;nbsp;table from the&amp;nbsp;&lt;A href="https://help.kusto.windows.net/" target="_blank" rel="noopener"&gt;Azure Data Explorer help cluster&lt;/A&gt;. This publicly available dataset contains detailed information about storm events in the United States from the National Weather Service, making it perfect for demonstrating labeling workflows.&lt;/P&gt;
&lt;P data-sourcepos="42:1-42:84"&gt;You can explore this dataset by connecting to&amp;nbsp;help.kusto.windows.net&amp;nbsp;and querying:&lt;/P&gt;
&lt;LI-CODE lang="rust"&gt;StormEvents 
| take 10&lt;/LI-CODE&gt;
&lt;P data-sourcepos="49:1-49:133"&gt;The dataset includes fields like EventType, State, EventNarrative, and DamageProperty, providing rich context for labeling exercises.&lt;/P&gt;
&lt;H2 data-sourcepos="51:1-51:37"&gt;Setting Up the SQL Database Schema&lt;/H2&gt;
&lt;P data-sourcepos="53:1-53:189"&gt;The labeling system uses a well-structured SQL schema that separates label types from actual event labels. Below are the key components (you can find the full SQL code in the linked gist):&lt;/P&gt;
&lt;H3 data-sourcepos="55:1-55:23"&gt;1. Label Type Table&lt;/H3&gt;
&lt;P data-sourcepos="57:1-57:90"&gt;The&amp;nbsp;LabelType&amp;nbsp;table defines the available label categories with ordering and validation:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE TABLE LabelType (
    LabelId INT IDENTITY(1,1) PRIMARY KEY,
    LabelShortDesc NVARCHAR(50) NOT NULL UNIQUE,
    LabelLongDesc NVARCHAR(500) NOT NULL,
    LabelOrder INT NOT NULL DEFAULT 999,
    LabelCreatedTimestamp DATETIME2(3) NOT NULL DEFAULT GETUTCDATE(),
    LabelUpdateTimestamp DATETIME2(3) NOT NULL DEFAULT GETUTCDATE()
);&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
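&lt;P&gt;The actual label categories are defined in the linked gist. As an illustration only, seeding a few label types could look like this:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Illustrative seed values; the real categories live in the linked gist
INSERT INTO LabelType (LabelShortDesc, LabelLongDesc, LabelOrder)
VALUES
    ('Unlabeled', 'Event has not been reviewed yet', 1),
    ('Confirmed', 'Event verified as a real occurrence', 2),
    ('FalsePositive', 'Event was incorrectly flagged', 3);&lt;/LI-CODE&gt;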
&lt;H3 data-sourcepos="70:1-70:27"&gt;2. Event Labeling Table&lt;/H3&gt;
&lt;P data-sourcepos="72:1-72:62"&gt;The&amp;nbsp;EventLabel&amp;nbsp;table stores actual labels applied to events:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE TABLE EventLabel (
    EventLabelId BIGINT IDENTITY(1,1) PRIMARY KEY,
    EventId INT NOT NULL,
    LabelId INT NOT NULL DEFAULT 1,
    EventLabelDesc NVARCHAR(1000) NULL,
    EventLabelUserName NVARCHAR(256) NOT NULL DEFAULT SUSER_SNAME(),
    EventLabelCreatedTimestamp DATETIME2(3) NOT NULL DEFAULT GETUTCDATE(),
    EventLabelUpdateTimestamp DATETIME2(3) NOT NULL DEFAULT GETUTCDATE(),
    CONSTRAINT UQ_EventLabel_EventId UNIQUE (EventId)
);&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
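&lt;P&gt;If you want referential integrity and indexing on top of this table, a minimal sketch could look like this (object names are illustrative; the full schema in the gist may differ):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Illustrative supporting objects; see the linked gist for the complete schema
ALTER TABLE EventLabel ADD CONSTRAINT FK_EventLabel_LabelType
    FOREIGN KEY (LabelId) REFERENCES LabelType (LabelId);
CREATE NONCLUSTERED INDEX IX_EventLabel_LabelId ON EventLabel (LabelId);&lt;/LI-CODE&gt;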
&lt;H3 data-sourcepos="87:1-87:30"&gt;3. Upsert Stored Procedure&lt;/H3&gt;
&lt;P data-sourcepos="89:1-89:354"&gt;The most critical component is the&amp;nbsp;sp_AddEventLabel&amp;nbsp;stored procedure that handles label insertion and updates safely. It performs an upsert operation using the SQL MERGE statement, ensuring data consistency without duplicate entries. This approach significantly simplifies the dashboard implementation by abstracting the complexity of label management.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE PROCEDURE sp_AddEventLabel
    @EventId BIGINT,
    @LabelShortDesc NVARCHAR(50),
    @EventLabelDesc NVARCHAR(1000) = NULL,
    @UserName NVARCHAR(256) = NULL
AS
BEGIN
    -- Get LabelId from short description with validation
    DECLARE @LabelId INT;
    SELECT @LabelId = LabelId FROM LabelType WHERE LabelShortDesc = @LabelShortDesc;
    
    IF @LabelId IS NULL
    BEGIN
        THROW 50001, 'Invalid label short description provided', 1;
        RETURN;
    END
    
    -- Use MERGE for true upsert functionality
    MERGE EventLabel AS target
    USING (SELECT @EventId AS EventId, @LabelId AS LabelId, 
                  @EventLabelDesc AS EventLabelDesc, 
                  ISNULL(@UserName, SUSER_SNAME()) AS EventLabelUserName) AS source
    ON target.EventId = source.EventId
    WHEN MATCHED THEN
        UPDATE SET LabelId = source.LabelId, 
                   EventLabelDesc = source.EventLabelDesc,
                   EventLabelUserName = source.EventLabelUserName,
                   EventLabelUpdateTimestamp = GETUTCDATE()
    WHEN NOT MATCHED THEN
        INSERT (EventId, LabelId, EventLabelDesc, EventLabelUserName)
        VALUES (source.EventId, source.LabelId, source.EventLabelDesc, source.EventLabelUserName);
END;&lt;/LI-CODE&gt;
&lt;P data-sourcepos="126:1-126:93"&gt;This procedure ensures data integrity and provides automatic timestamping for audit purposes.&lt;/P&gt;
&lt;H3 data-sourcepos="128:1-128:24"&gt;4. Consolidated View&lt;/H3&gt;
&lt;P data-sourcepos="130:1-130:229"&gt;The&amp;nbsp;&lt;EM&gt;vw_EventLabeling&lt;/EM&gt;&amp;nbsp;view brings together both tables and introduces calculated fields to streamline querying and analysis. This unified view simplifies access to labeling data, making it easier to build reports and dashboards.&lt;/P&gt;
&lt;H2 data-sourcepos="132:1-132:45"&gt;Configuring Authentication and Permissions&lt;/H2&gt;
&lt;H3 data-sourcepos="134:1-134:34"&gt;Grafana Managed Identity Setup&lt;/H3&gt;
&lt;P data-sourcepos="136:1-137:61"&gt;Azure Managed Grafana uses its workspace name as the name for the name of the managed identity. We use the Grafana Workspace managed identity, accessing the Azure SQL database. Configure the SQL database users and permissions accordingly:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE USER [your-grafana-workspace-name] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [your-grafana-workspace-name];
ALTER ROLE db_datawriter ADD MEMBER [your-grafana-workspace-name];
GRANT EXECUTE ON SCHEMA::dbo TO [your-grafana-workspace-name];&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 data-sourcepos="146:1-146:21"&gt;User Group Access&lt;/H3&gt;
&lt;P data-sourcepos="148:1-148:287"&gt;To enable seamless integration, make sure your user group—already configured for read access in the Kusto database—is also granted read permissions on the SQL tables via external tables. This ensures consistent access across both data sources and simplifies cross-platform querying.:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE USER [your-entra-id-user-group] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [your-entra-id-user-group];&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-sourcepos="155:1-155:27"&gt;Creating External Tables in Kusto&lt;/H2&gt;
&lt;P data-sourcepos="157:1-157:142"&gt;Since Grafana can only connect to one data source per panel, we need external tables in Kusto to access the SQL Database data. This enables us to:&lt;/P&gt;
&lt;UL data-sourcepos="159:1-161:0"&gt;
&lt;LI data-sourcepos="159:1-159:48"&gt;Make SQL data available in the Kusto database&lt;/LI&gt;
&lt;LI data-sourcepos="160:1-161:0"&gt;Work around Grafana's single data source limitation per panel&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="rust"&gt;// Create external table for the event labeling view
.create external table EventLabeling (
    // EventLabel fields
    EventLabelId: long,
    EventId: long,
    EventLabelDesc: string,
    // ... other fields
    LabelStatus: string
)
kind=sql
table=vw_EventLabeling
(
    h@'Server=&amp;lt;your-sql-server&amp;gt;.database.windows.net;Database=&amp;lt;your-database&amp;gt;;Authentication=Active Directory Integrated;'
)&lt;/LI-CODE&gt;
&lt;P data-sourcepos="162:1-162:342"&gt;The complete source code definition is available in the linked gist. We use Entra ID integrated authentication to securely connect to the SQL database. To ensure proper access, verify that the user group with read permissions in Kusto also has read access to the SQL database. This alignment is essential for seamless querying through external tables.&lt;/P&gt;
&lt;H2 data-sourcepos="164:1-164:65"&gt;Building the Grafana Dashboard with Volkov Labs Business Table&lt;/H2&gt;
&lt;P data-sourcepos="166:1-166:168"&gt;The magic happens in Grafana using the&amp;nbsp;&lt;STRONG&gt;Business Table plugin from Volkov Labs&lt;/STRONG&gt;. This plugin provides an editable table interface perfect for data labeling workflows.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: The final dashboard]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 data-sourcepos="171:1-171:30"&gt;Key Configuration Elements&lt;/H3&gt;
&lt;H4 data-sourcepos="175:1-175:44"&gt;1. Workspace Data Sources Configuration&lt;/H4&gt;
&lt;P data-sourcepos="177:1-177:285"&gt;Ensure that both the Kusto and SQL database data sources are properly configured in your Grafana workspace. For SQL authentication, the Grafana workspace uses its managed identity, allowing secure and seamless access to the external SQL tables without the need for storing credentials. For Azure Data Explorer we are using the &lt;EM&gt;current user&lt;/EM&gt; authentication method here, passing through the current user in Azure Managed Grafana to the Kusto database (see also my previous blog post &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azuredataexplorer/visualizing-narrow-kusto-tables-with-azure-managed-grafana/4025681" target="_blank" rel="noopener" data-lia-auto-title="here" data-lia-auto-title-active="0"&gt;here&lt;/A&gt;).&lt;/P&gt;
&lt;H4 data-sourcepos="179:1-179:32"&gt;2. Panel Configuration&lt;/H4&gt;
&lt;P data-sourcepos="180:1-180:50"&gt;We configure two datasources for the business table&lt;/P&gt;
&lt;P&gt;1. &lt;EM&gt;events:&lt;/EM&gt; Querying the StormEvents and joining the labeling data from the SQL database:&lt;/P&gt;
&lt;LI-CODE lang="rust"&gt;let selectedEvents=toscalar(
cluster("help.kusto.windows.net").database("Samples").StormEvents
| where $__timeFilter(StartTime) and EventType in ($v_EventType) and State in ($v_State)
|summarize make_list (EventId));
let EventLabel=external_table('EventLabeling')
| where EventId in (selectedEvents);
cluster("help.kusto.windows.net").database("Samples").StormEvents
| where $__timeFilter(StartTime) and EventType in ($v_EventType) and State in ($v_State)
| lookup EventLabel on EventId
| order by StartTime, EndTime, State asc , EventType asc&lt;/LI-CODE&gt;
&lt;P data-sourcepos="194:1-195:416"&gt;In our example, the join to the external SQL table is performed using the EventId column. However, in real-world scenarios, events often lack a single identity column. Instead, a composite natural key—such as StartTime, EndTime, and EventType—may be used to uniquely identify records. To ensure efficient query performance, it's important to verify that filters are pushed down to the SQL database. You can confirm this by reviewing the audit logs. In our case, we explicitly filter on EventIds within the query to achieve pushdown. Note that performing a join (used here as a lookup) does not automatically push the EventIds as filters to the SQL database, which can lead to less efficient execution.&lt;/P&gt;
&lt;P&gt;2. &lt;EM&gt;LabelType:&lt;/EM&gt; Used to populate the drop-down list of label types.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: Queries used in the business table]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4 data-sourcepos="204:1-204:38"&gt;3. Editable Columns Configuration&lt;/H4&gt;
&lt;P data-sourcepos="206:1-207:130"&gt;The Business Table plugin allows specific columns to be editable with different editor types.&lt;/P&gt;
&lt;P data-sourcepos="209:1-209:98"&gt;With the editor type &lt;EM&gt;select&lt;/EM&gt;, we are confguring the drop down list, based on the&amp;nbsp;&lt;EM&gt;LableType&lt;/EM&gt;-Query:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: Example configuration with editor type Select]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4 data-sourcepos="215:1-215:33"&gt;4.Update Query Configuration&lt;/H4&gt;
&lt;P data-sourcepos="217:1-217:53"&gt;Users can add or update labels through the interface. For this we have to configure the &lt;EM&gt;Update Request&lt;/EM&gt;:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: Update Request configuration]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-sourcepos="221:1-221:150"&gt;The upsert operation is executed via the&amp;nbsp;&lt;EM&gt;sp_AddEventLabel&lt;/EM&gt; stored procedure, which receives input from the business table using predefined variables. This is the SQL code:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;EXEC sp_AddEventLabel @EventId = '${payload.EventId:int}', @LabelShortDesc = '${payload.LabelShortDesc}', 
     @EventLabelDesc = '${payload.EventLabelDesc:sqlsting}', ='${__user.login}';&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-sourcepos="228:1-228:32"&gt;Interactive Labeling Workflow&lt;/H2&gt;
&lt;P data-sourcepos="230:1-230:38"&gt;With everything configured, users can:&lt;/P&gt;
&lt;OL data-sourcepos="232:1-239:0"&gt;
&lt;LI data-sourcepos="232:1-232:91"&gt;&lt;STRONG&gt;Browse Events&lt;/STRONG&gt;: View unlabeled events from the Kusto dataset with context information&lt;/LI&gt;
&lt;LI data-sourcepos="233:1-233:109"&gt;&lt;STRONG&gt;Apply Labels&lt;/STRONG&gt;: Use dropdown menus to categorize events (e.g., "False Positive", "Investigation Needed")&lt;/LI&gt;
&lt;LI data-sourcepos="234:1-234:67"&gt;&lt;STRONG&gt;Add Context&lt;/STRONG&gt;: Include detailed descriptions for complex cases&lt;/LI&gt;
&lt;LI data-sourcepos="235:1-235:83"&gt;&lt;STRONG&gt;Track Changes&lt;/STRONG&gt;: Automatic timestamping and user tracking for full audit trail&lt;/LI&gt;
&lt;LI data-sourcepos="236:1-236:70"&gt;&lt;STRONG&gt;Monitor Progress&lt;/STRONG&gt;: View labeling statistics and completion rates&lt;/LI&gt;
&lt;LI data-sourcepos="237:1-239:0"&gt;&lt;STRONG&gt;Collaborate&lt;/STRONG&gt;: Multiple users can label different events simultaneously&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 data-sourcepos="240:1-240:17"&gt;Best Practices&lt;/H2&gt;
&lt;OL data-sourcepos="242:1-249:0"&gt;
&lt;LI data-sourcepos="242:1-242:75"&gt;&lt;STRONG&gt;Start Simple&lt;/STRONG&gt;: Begin with basic label categories and expand as needed&lt;/LI&gt;
&lt;LI data-sourcepos="243:1-243:76"&gt;&lt;STRONG&gt;User Training&lt;/STRONG&gt;: Ensure labelers understand the categories and criteria&lt;/LI&gt;
&lt;LI data-sourcepos="244:1-244:70"&gt;&lt;STRONG&gt;Quality Control&lt;/STRONG&gt;: Implement review processes for critical labels&lt;/LI&gt;
&lt;LI data-sourcepos="245:1-245:95"&gt;&lt;STRONG&gt;Performance&lt;/STRONG&gt;: Index your tables appropriately for large datasets (as shown in our schema)&lt;/LI&gt;
&lt;LI data-sourcepos="246:1-246:66"&gt;&lt;STRONG&gt;Backup&lt;/STRONG&gt;: Regular backups of your labeling data are essential&lt;/LI&gt;
&lt;LI data-sourcepos="247:1-247:66"&gt;&lt;STRONG&gt;Validation&lt;/STRONG&gt;: Use database constraints to ensure data quality&lt;/LI&gt;
&lt;LI data-sourcepos="248:1-249:0"&gt;&lt;STRONG&gt;Audit Trail&lt;/STRONG&gt;: Maintain full history of who labeled what and when&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 data-sourcepos="259:1-259:13"&gt;Conclusion&lt;/H2&gt;
&lt;P data-sourcepos="261:1-261:294"&gt;This integrated approach combining Azure SQL Database, Kusto, and Grafana provides a powerful, scalable solution for data labeling workflows. The interactive interface empowers domain experts to efficiently label large datasets, while the robust backend ensures data integrity and traceability.&lt;/P&gt;
&lt;P data-sourcepos="263:1-263:21"&gt;Key benefits include:&lt;/P&gt;
&lt;UL data-sourcepos="264:1-269:0"&gt;
&lt;LI data-sourcepos="264:1-264:79"&gt;&lt;STRONG&gt;Unified Interface&lt;/STRONG&gt;: Single pane of glass for data exploration and labeling&lt;/LI&gt;
&lt;LI data-sourcepos="265:1-265:69"&gt;&lt;STRONG&gt;Real-time Collaboration&lt;/STRONG&gt;: Multiple users can work simultaneously&lt;/LI&gt;
&lt;LI data-sourcepos="266:1-266:63"&gt;&lt;STRONG&gt;Full Audit Trail&lt;/STRONG&gt;: Complete history of labeling activities&lt;/LI&gt;
&lt;LI data-sourcepos="267:1-267:69"&gt;&lt;STRONG&gt;Flexible Schema&lt;/STRONG&gt;: Easy to adapt for different labeling scenarios&lt;/LI&gt;
&lt;LI data-sourcepos="268:1-269:0"&gt;&lt;STRONG&gt;Enterprise Ready&lt;/STRONG&gt;: Built on Azure services with proper authentication&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-sourcepos="270:1-270:247"&gt;Whether you're building the next generation of medical diagnostic tools, optimizing manufacturing processes, or improving weather prediction models, this labeling system provides the infrastructure needed to turn raw data into actionable insights.&lt;/P&gt;
&lt;P data-sourcepos="272:1-272:179"&gt;The result is high-quality labeled data that forms a solid foundation for building reliable machine learning models—ultimately driving better business outcomes across a wide range of industries.&lt;/P&gt;
&lt;H2 data-sourcepos="273:1-273:23"&gt;Additional Resources&lt;/H2&gt;
&lt;UL data-sourcepos="275:1-280:176"&gt;
&lt;LI data-sourcepos="275:1-275:99"&gt;&lt;A href="https://gist.github.com/hau-mal/f2d8c404535cfcb69dbdc1c8cbc25715" target="_blank" rel="noopener"&gt;The gist with the source code&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="276:1-276:134"&gt;&lt;A href="https://learn.microsoft.com/kusto/management/external-sql-tables?view=microsoft-fabric" target="_blank" rel="noopener"&gt;Create and alter Azure SQL external tables&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="277:1-277:90"&gt;&lt;A href="https://grafana.com/docs/grafana/latest/datasources/" target="_blank" rel="noopener"&gt;Grafana Datasource Configuration&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="278:1-278:63"&gt;&lt;A href="https://volkovlabs.io/plugins/" target="_blank" rel="noopener"&gt;Volkov Labs Grafana Plugins&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="279:1-279:133"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/" target="_blank" rel="noopener"&gt;Azure Managed Identity Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="280:1-280:176"&gt;&lt;A href="https://docs.microsoft.com/en-us/sql/relational-databases/security/security-center-for-sql-server-database-engine-and-azure-sql-database" target="_blank" rel="noopener"&gt;SQL Server Security Best Practices&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 14 Jul 2025 13:01:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/labeling-kusto-data-in-azure-managed-grafana-for-machine/ba-p/4419073</guid>
      <dc:creator>Hauke</dc:creator>
      <dc:date>2025-07-14T13:01:37Z</dc:date>
    </item>
    <item>
      <title>Azure Data Explorer's Advanced Geospatial Analysis: Breaking New Ground</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-s-advanced-geospatial-analysis-breaking-new/ba-p/4403300</link>
      <description>&lt;P&gt;In the rapidly evolving landscape of data analytics, Azure Data Explorer (ADX) continues to push boundaries with its latest enhancement to geospatial capabilities. These new features, meticulously developed by Michael Brichko and the Kusto team, represent a significant advancement in how organizations can analyze and derive insights from location-based data directly within the Kusto Query Language (KQL).&lt;/P&gt;
&lt;H1&gt;The Evolution of Geospatial Analysis in ADX&lt;/H1&gt;
&lt;P&gt;Azure Data Explorer has long supported basic geospatial operations, but this latest release dramatically expands its capabilities with functions that address specific analytical challenges faced by data engineers and analysts working with spatial data.&lt;/P&gt;
&lt;H2&gt;New Powerful Lookup Plugins and Joins&lt;/H2&gt;
&lt;P&gt;At the core of this update are powerful additions that solve complex spatial relationship problems: the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-polygon-lookup-plugin" target="_blank" rel="noopener"&gt;geo_polygon_lookup&lt;/A&gt; and &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-line-lookup-plugin" target="_blank" rel="noopener"&gt;geo_line_lookup&lt;/A&gt; plugins, alongside comprehensive geospatial join capabilities.&lt;/P&gt;
&lt;H3&gt;geo_polygon_lookup&lt;/H3&gt;
&lt;P&gt;This plugin efficiently determines relationships between points and polygons, answering questions like "which sales territory contains this customer?" or "which service zone covers this location?":&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let territories = datatable(region_name:string, polygon:dynamic) [ "Northeast", dynamic({"type":"Polygon", "coordinates":[[[-73.97375,40.74300],[-73.98653,40.75486],[-73.99910,40.74112],[-73.97375,40.74300]]]}), "Southwest", dynamic({"type":"Polygon","coordinates":[[[2.57564,48.76956],[2.42009,49.05163],[2.10167,48.80113],[2.57564,48.76956]]]}), ]; let customer_locations = datatable(customer_id:string, longitude:real, latitude:real) [ "Customer1", -73.98000, 40.74800, "Customer2", 2.50000, 48.90000, "Customer3", 10.00000, 50.00000 ]; customer_locations | evaluate geo_polygon_lookup(territories, polygon, longitude, latitude) | project customer_id, region_name&lt;/LI-CODE&gt;
&lt;P&gt;The performance benefits here are substantial - instead of complex self-joins or multi-step operations, this plugin handles the spatial relationship calculations in a single, optimized operation.&lt;/P&gt;
&lt;H3&gt;geo_line_lookup&lt;/H3&gt;
&lt;P&gt;Similarly, the geo_line_lookup plugin identifies lines (like highways, pipelines, or power lines) within a specified distance of points:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let infrastructure = datatable(line_id:string, line:dynamic) [ "Highway 101", dynamic({"type":"LineString","coordinates":[[-122.40297,37.79329],[-122.38855,37.77867]]}), "Main Pipeline", dynamic({"type":"LineString","coordinates":[[-118.35645,34.17247],[-118.32962,34.09873]]}), ]; let maintenance_reports = datatable(report_id:string, longitude:real, latitude:real, issue:string) [ "R001", -122.39500, 37.78500, "Debris", "R002", -118.34000, 34.15000, "Leak", "R003", -120.00000, 36.00000, "Damage" ]; maintenance_reports | evaluate geo_line_lookup(infrastructure, line, longitude, latitude, 500) // within 500 meters | project report_id, line_id, issue&lt;/LI-CODE&gt;
&lt;P&gt;This capability is invaluable for infrastructure management, transportation analysis, and network optimization scenarios.&lt;/P&gt;
&lt;H3&gt;Advanced Geospatial Joins&lt;/H3&gt;
&lt;P&gt;Beyond the lookup plugins, ADX now provides comprehensive support for various geospatial join strategies. These capabilities allow for sophisticated spatial analysis using different geo-hashing approaches:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Join locations using S2 cells let locations1 = datatable(name: string, longitude: real, latitude: real) [ "Store A", -0.12433080766874127, 51.51115841361647, "Store B", -0.12432651341458723, 51.511160848670585, "Store C", -0.12432466939637266, 51.51115959669167, "Store D", 1, 1, ]; let customer_visits = datatable(customer_id: string, longitude: real, latitude: real) [ "Customer1", -0.12432668105284961, 51.51115938802832 ]; let s2_join_level = 22; // Higher level = more precision locations1 | extend hash = geo_point_to_s2cell(longitude, latitude, s2_join_level) | join kind = inner ( customer_visits | extend hash = geo_point_to_s2cell(longitude, latitude, s2_join_level) ) on hash | project name, customer_id&lt;/LI-CODE&gt;
&lt;P&gt;For more complex proximity requirements, the H3 geo-hashing system with neighbor awareness provides an elegant solution:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Join locations using H3 cells with neighbor awareness let retail_locations = datatable(store_name: string, longitude: real, latitude: real) [ "Downtown", -0.12433080766874127, 51.51115841361647, "Westside", -0.12432651341458723, 51.511160848670585, "Eastside", -0.12432466939637266, 51.51115959669167, "Remote", 1, 1, ]; let customer_events = datatable(event_id: string, longitude: real, latitude: real) [ "Purchase123", -0.12432668105284961, 51.51115938802832 ]; let to_hash = (lng: real, lat: real) { let h3_hash_level = 14; // Precision level let h3_hash = geo_point_to_h3cell(lng, lat, h3_hash_level); array_concat(pack_array(h3_hash), geo_h3cell_neighbors(h3_hash)) }; retail_locations | extend hash = to_hash(longitude, latitude) | mv-expand hash to typeof(string) | join kind = inner ( customer_events | extend hash = to_hash(longitude, latitude) | mv-expand hash to typeof(string) ) on hash | distinct store_name, event_id&lt;/LI-CODE&gt;
&lt;P&gt;For proximity-based joins that require precise distance calculations, the buffer-based approach provides exceptional flexibility:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Join locations based on precise distance buffers let venues = datatable(venue_name: string, longitude: real, latitude: real) [ "O2 Entrance", 0.005889454501716321, 51.50238626916584, "North Gate", 0.0009625704125020596, 51.50385432770013, "Greenwich Park", 0.0009395106042404677, 51.47700456557013, ]; let points_of_interest = datatable(poi_id: string, longitude: real, latitude: real) [ "O2 Arena", 0.003159306017352037, 51.502929224128394 ] | extend buffer = geo_point_buffer(longitude, latitude, 300, 0.1); // 300-meter radius venues | evaluate geo_polygon_lookup(points_of_interest, buffer, longitude, latitude) | project venue_name, poi_id&lt;/LI-CODE&gt;
&lt;P&gt;Beyond joins and lookups, ADX now offers specialized functions for precise geospatial calculations:&lt;/P&gt;
&lt;H3&gt;geo_from_wkt()&lt;/H3&gt;
&lt;P&gt;The new &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-from-wkt-function" target="_blank" rel="noopener"&gt;geo_from_wkt()&lt;/A&gt; function bridges the gap between different geospatial systems by converting Well-Known Text (WKT) format - a standard in GIS systems - into GeoJSON objects that ADX can process:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Convert WKT to GeoJSON for further analysis let wkt_shapes = datatable(shape_id:string, wkt_representation:string) [ "City Boundary", "POLYGON((-122.406417 37.785834, -122.403984 37.787343, -122.401826 37.785069, -122.404681 37.782928, -122.406417 37.785834))", "Highway", "LINESTRING(-122.33707 47.60924, -122.32553 47.61803)" ]; wkt_shapes | extend geojson_shape = geo_from_wkt(wkt_representation)&lt;/LI-CODE&gt;
&lt;P&gt;Result:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;shape_id&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;wkt_representation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;geojson_shape&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;City Boundary&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;POLYGON((-122.406417 37.785834, -122.403984 37.787343, -122.401826 37.785069, -122.404681 37.782928, -122.406417 37.785834))&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "Polygon",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;[&lt;BR /&gt;[&lt;BR /&gt;-122.406417,&lt;BR /&gt;37.785834&lt;BR /&gt;],&lt;BR /&gt;[&lt;BR /&gt;-122.403984,&lt;BR /&gt;37.787343&lt;BR /&gt;],&lt;BR /&gt;[&lt;BR /&gt;-122.401826,&lt;BR /&gt;37.785069&lt;BR /&gt;],&lt;BR /&gt;[&lt;BR /&gt;-122.404681,&lt;BR /&gt;37.782928&lt;BR /&gt;],&lt;BR /&gt;[&lt;BR /&gt;-122.406417,&lt;BR /&gt;37.785834&lt;BR /&gt;]&lt;BR /&gt;]&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Highway&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;LINESTRING(-122.33707 47.60924, -122.32553 47.61803)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "LineString",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;[&lt;BR /&gt;-122.33707,&lt;BR /&gt;47.60924&lt;BR /&gt;],&lt;BR /&gt;[&lt;BR /&gt;-122.32553,&lt;BR /&gt;47.61803&lt;BR /&gt;]&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This function supports all standard geometry types including Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection.&lt;/P&gt;
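&lt;P&gt;For instance, point and multi-point geometries convert the same way:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Other WKT geometry types are handled identically
print point = geo_from_wkt("POINT(-122.33707 47.60924)"),
      multipoint = geo_from_wkt("MULTIPOINT((-122.33707 47.60924), (-122.32553 47.61803))")&lt;/LI-CODE&gt;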
&lt;H3&gt;Route Analysis Functions&lt;/H3&gt;
&lt;P&gt;Two complementary functions provide sophisticated route analysis capabilities:&lt;/P&gt;
&lt;H4&gt;geo_line_interpolate_point&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-line-interpolate-point-function" target="_blank" rel="noopener"&gt;geo_line_interpolate_point()&lt;/A&gt; calculates a point at a specified fraction along a line:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Find points along a delivery route at 25%, 50%, and 75% of the journey let delivery_routes = datatable(route_id:string, route:dynamic) [ "Route A", dynamic({"type":"LineString","coordinates":[[-122.33707, 47.60924], [-122.32553, 47.61803]]}) ]; delivery_routes | extend start_point = geo_line_interpolate_point(route, 0), quarter_point = geo_line_interpolate_point(route, 0.25), midpoint = geo_line_interpolate_point(route, 0.5), three_quarters = geo_line_interpolate_point(route, 0.75), end_point = geo_line_interpolate_point(route, 1)&lt;/LI-CODE&gt;
&lt;P&gt;Result:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;route_id&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;route&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;start_point&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;quarter_point&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;midpoint&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;three_quarters&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;end_point&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Route A&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "LineString",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;[&lt;BR /&gt;-122.33707,&lt;BR /&gt;47.60924&lt;BR /&gt;],&lt;BR /&gt;[&lt;BR /&gt;-122.32553,&lt;BR /&gt;47.61803&lt;BR /&gt;]&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "Point",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;-122.33707,&lt;BR /&gt;47.609240000000007&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "Point",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;-122.33418536369042,&lt;BR /&gt;47.611437608491833&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "Point",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;-122.33130048494128,&lt;BR /&gt;47.613635144663533&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "Point",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;-122.3284153637215,&lt;BR /&gt;47.615832608503482&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;{ &lt;BR /&gt;"type": "Point",&lt;BR /&gt;"coordinates": [&lt;BR /&gt;-122.32553000000002,&lt;BR /&gt;47.61803&lt;BR /&gt;]&lt;BR /&gt;}&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H4&gt;geo_line_locate_point&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-line-locate-point-function" target="_blank" rel="noopener"&gt;geo_line_locate_point()&lt;/A&gt; performs the inverse operation, determining how far along a route a specific point is located:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Calculate what percentage of the route has been completed let active_routes = datatable(vehicle_id:string, route:dynamic, current_long:real, current_lat:real) [ "Truck1", dynamic({"type":"LineString","coordinates":[[-122.33707, 47.60924], [-122.32553, 47.61803]]}), -122.33000, 47.61500 ]; active_routes | extend completion_percentage = geo_line_locate_point(route, current_long, current_lat) * 100 | project vehicle_id, completion_percentage&lt;/LI-CODE&gt;
&lt;P&gt;Result:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;vehicle_id&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;completion_percentage&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Truck1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;63.657018697669&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Together, these functions enable precise tracking and analysis of movement along routes, critical for logistics, fleet management, and transportation applications.&lt;/P&gt;
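&lt;P&gt;As a rough sketch of how the two combine, the completed fraction together with geo_line_length() yields an estimate of the distance remaining along a route:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Estimate remaining distance: (1 - completed fraction) * total route length in meters
let route = dynamic({"type":"LineString","coordinates":[[-122.33707, 47.60924], [-122.32553, 47.61803]]});
print completed_fraction = geo_line_locate_point(route, -122.33000, 47.61500)
| extend remaining_meters = (1 - completed_fraction) * geo_line_length(route)&lt;/LI-CODE&gt;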
&lt;H3&gt;Closest Point Calculations&lt;/H3&gt;
&lt;P&gt;Two new functions address the need to find exact points on geometric features that are closest to reference points:&lt;/P&gt;
&lt;H4&gt;geo_closest_point_on_line&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-closest-point-on-line-function" target="_blank" rel="noopener"&gt;geo_closest_point_on_line()&lt;/A&gt; identifies the exact point on a line nearest to a reference point:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;print point_on_line = geo_closest_point_on_line(-115.199625, 36.210419, dynamic({ "type":"LineString","coordinates":[[-115.115385,36.229195],[-115.136995,36.200366],[-115.140252,36.192470],[-115.143558,36.188523],[-115.144076,36.181954],[-115.154662,36.174483],[-115.166431,36.176388],[-115.183289,36.175007],[-115.192612,36.176736],[-115.202485,36.173439],[-115.225355,36.174365]]}))&lt;/LI-CODE&gt;
&lt;H4&gt;geo_closest_point_on_polygon&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/geo-closest-point-on-polygon-function" target="_blank" rel="noopener"&gt;geo_closest_point_on_polygon()&lt;/A&gt; calculates the closest point on a polygon boundary to a given location:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let central_park = dynamic({"type":"Polygon","coordinates":[[[-73.9495,40.7969],[-73.95807266235352,40.80068603561921],[-73.98201942443848,40.76825672305777],[-73.97317886352539,40.76455136505513],[-73.9495,40.7969]]]});
print geo_closest_point_on_polygon(-73.9839, 40.7705, central_park)&lt;/LI-CODE&gt;
&lt;P&gt;These functions enable precise proximity analysis for applications ranging from emergency response to facility planning.&lt;/P&gt;
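&lt;P&gt;Pairing them with geo_distance_2points() turns the closest point into an actual distance; a short sketch reusing the Central Park example above:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Distance from the reference point to the nearest point on the polygon boundary
let central_park = dynamic({"type":"Polygon","coordinates":[[[-73.9495,40.7969],[-73.95807266235352,40.80068603561921],[-73.98201942443848,40.76825672305777],[-73.97317886352539,40.76455136505513],[-73.9495,40.7969]]]});
print closest = geo_closest_point_on_polygon(-73.9839, 40.7705, central_park)
| extend distance_meters = geo_distance_2points(-73.9839, 40.7705, todouble(closest.coordinates[0]), todouble(closest.coordinates[1]))&lt;/LI-CODE&gt;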
&lt;H2&gt;Technical Implementation and Performance Considerations&lt;/H2&gt;
&lt;P&gt;What makes these new geospatial features particularly impressive is their integration with ADX's query engine. The functions leverage ADX's columnar storage and parallel processing capabilities to perform complex spatial operations efficiently at scale.&lt;/P&gt;
&lt;P&gt;For large datasets, the lookup plugins use optimized spatial indexing techniques to avoid the performance pitfalls that typically plague geospatial joins. This means that operations that might take minutes or hours in traditional GIS systems can execute in seconds on properly optimized ADX clusters.&lt;/P&gt;
&lt;H2&gt;Real-World Applications&lt;/H2&gt;
&lt;P&gt;The new geospatial capabilities in ADX enable sophisticated solutions across industries:&lt;/P&gt;
&lt;H3&gt;Telecommunications&lt;/H3&gt;
&lt;P&gt;Network operators can analyze signal coverage against population density polygons to identify optimization opportunities:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let cell_towers = datatable(tower_id:string, signal_longitude:real, signal_latitude:real, signal_strength:int)
[
    "Tower1", -73.98000, 40.74800, 85,
    "Tower2", -73.97500, 40.75200, 72,
    "Tower3", -73.99000, 40.74100, 90
];
let population_zones = datatable(zone_name:string, population_density:int, zone_polygon:dynamic)
[
    "Midtown", 25000, dynamic({"type":"Polygon", "coordinates":[[[-73.97375,40.74300],[-73.98653,40.75486],[-73.99910,40.74112],[-73.97375,40.74300]]]})
];
let signal_threshold = 80;
let density_threshold = 20000;
cell_towers
| evaluate geo_polygon_lookup(population_zones, zone_polygon, signal_longitude, signal_latitude)
| summarize avg_signal_strength=avg(signal_strength) by zone_name, population_density
| where avg_signal_strength &amp;lt; signal_threshold and population_density &amp;gt; density_threshold&lt;/LI-CODE&gt;
&lt;H3&gt;Energy&lt;/H3&gt;
&lt;P&gt;Pipeline operators can identify sensors that might be affected by maintenance on nearby infrastructure:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let planned_maintenance = datatable(maintenance_id:string, maintenance_longitude:real, maintenance_latitude:real, start_time:datetime)
[
    "M001", -118.34500, 34.16000, datetime(2025-04-15),
    "M002", -118.33000, 34.10000, datetime(2025-04-20)
];
let pipelines = datatable(pipeline_id:string, pipeline_path:dynamic)
[
    "Main", dynamic({"type":"LineString","coordinates":[[-118.35645,34.17247],[-118.32962,34.09873]]}),
    "Secondary", dynamic({"type":"LineString","coordinates":[[-118.36000,34.15000],[-118.34000,34.08000]]})
];
let sensors = datatable(sensor_id:string, pipeline_id:string, sensor_type:string, next_scheduled_reading:datetime)
[
    "S001", "Main", "Pressure", datetime(2025-04-16),
    "S002", "Main", "Flow", datetime(2025-04-18),
    "S003", "Secondary", "Temperature", datetime(2025-04-22)
];
planned_maintenance
| evaluate geo_line_lookup(pipelines, pipeline_path, maintenance_longitude, maintenance_latitude, 500)
| join kind =inner sensors on pipeline_id
| project maintenance_id, sensor_id, sensor_type, next_scheduled_reading&lt;/LI-CODE&gt;
&lt;H3&gt;Transportation &amp;amp; Logistics&lt;/H3&gt;
&lt;P&gt;Fleet operators can optimize routing by analyzing historical trip data against road networks:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let completed_trips = datatable(trip_id:string, trip_success:bool, incident_longitude:real, incident_latitude:real, incident_type:string)
[
    "T001", false, -122.40000, 47.61000, "Delay",
    "T002", false, -122.39500, 47.60800, "Traffic",
    "T003", false, -122.41000, 47.62000, "Weather"
];
let road_networks = datatable(road_name:string, road_type:string, road_path:dynamic)
[
    "5th Avenue", "Urban", dynamic({"type":"LineString","coordinates":[[-122.40297,47.59329],[-122.40297,47.62000]]}),
    "Highway 99", "Highway", dynamic({"type":"LineString","coordinates":[[-122.39000,47.60000],[-122.41000,47.63000]]})
];
completed_trips
| where trip_success == false
| evaluate geo_line_lookup(road_networks, road_path, incident_longitude, incident_latitude, 100)
| summarize incidents_count=count() by road_name, road_type, incident_type
| order by incidents_count desc&lt;/LI-CODE&gt;
&lt;H3&gt;Environmental Monitoring&lt;/H3&gt;
&lt;P&gt;Researchers can correlate sensor readings with geographic zones to track pollution dispersion:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;let sensor_readings = datatable(sensor_id:string, reading_type:string, reading_value:real, timestamp:datetime, sensor_longitude:real, sensor_latitude:real)
[
    "ENV001", "PM2.5", 35.2, datetime(2025-04-01), -122.33000, 47.61500,
    "ENV002", "PM2.5", 22.8, datetime(2025-04-01), -122.34000, 47.62000,
    "ENV003", "PM2.5", 41.3, datetime(2025-04-01), -122.32000, 47.60500
];
let air_quality_zones = datatable(zone_name:string, zone_boundary:dynamic)
[
    "Downtown", dynamic({"type":"Polygon","coordinates":[[[-122.34000,47.60000],[-122.34000,47.62000],[-122.32000,47.62000],[-122.32000,47.60000],[-122.34000,47.60000]]]})
];
sensor_readings
| where reading_type == "PM2.5" and timestamp &amp;gt; ago(7d)
| evaluate geo_polygon_lookup(air_quality_zones, zone_boundary, sensor_longitude, sensor_latitude)
| summarize avg_reading=avg(reading_value) by zone_name, bin(timestamp, 1h)&lt;/LI-CODE&gt;
&lt;H2&gt;Getting Started with ADX Geospatial Analysis&lt;/H2&gt;
&lt;P&gt;The geospatial functions are available in all ADX clusters without requiring any special configuration.&lt;/P&gt;
&lt;P&gt;For optimal performance with large spatial datasets:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Consider partitioning strategies that account for spatial locality&lt;/LI&gt;
&lt;LI&gt;Pre-compute and cache complex geometries when dealing with static boundaries (see the sketch after this list)&lt;/LI&gt;
&lt;LI&gt;Monitor query performance to identify opportunities for optimization&lt;/LI&gt;
&lt;/OL&gt;
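&lt;P&gt;As a sketch of the second point, buffered geometries for static boundaries can be computed once and stored for reuse instead of being recomputed in every query (table and column names here are illustrative):&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Illustrative names; cache buffered boundaries once and reuse them across queries
.set-or-append CachedZoneBuffers &amp;lt;|
    population_zones_source
    | extend zone_buffer = geo_polygon_buffer(zone_polygon, 500, 0.1)&lt;/LI-CODE&gt;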
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;Azure Data Explorer's enhanced geospatial capabilities represent a significant advancement in making spatial analysis accessible and performant within a cloud analytics platform. By eliminating the need for specialized GIS tools and complex data movement, these features enable organizations to derive deeper insights from location-based data more efficiently than ever before.&lt;/P&gt;
&lt;P&gt;Whether you're analyzing telecommunications networks, optimizing logistics operations, managing energy infrastructure, or monitoring environmental patterns, ADX now provides the tools to incorporate sophisticated geospatial analysis directly into your data workflows.&lt;/P&gt;
&lt;P&gt;#AzureDataExplorer #ADX #Kusto #KQL #GeospatialAnalysis #DataAnalytics #CloudComputing #BigData #SpatialIntelligence #BusinessIntelligence #DataEngineering #MicrosoftAzure #DataScience #GIS #Analytics&lt;/P&gt;</description>
      <pubDate>Sun, 13 Apr 2025 05:33:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-s-advanced-geospatial-analysis-breaking-new/ba-p/4403300</guid>
      <dc:creator>cosh23</dc:creator>
      <dc:date>2025-04-13T05:33:15Z</dc:date>
    </item>
    <item>
      <title>Deprecation of variable length edge dot notation in graph-match</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/deprecation-of-variable-length-edge-dot-notation-in-graph-match/ba-p/4399470</link>
      <description>&lt;P&gt;In our ongoing efforts to improve the efficiency and clarity of graph queries in KQL, we are introducing changes to the way variable length edge properties are used in &lt;A href="https://learn.microsoft.com/en-us/kusto/query/graph-match-operator" target="_blank"&gt;graph-match&lt;/A&gt; constraints and projections. Specifically, we are deprecating the use of variable edge properties together with binary operators or scalar functions.&lt;/P&gt;
&lt;P&gt;As a reminder, a graph-match query has the following general structure:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;G
| graph-match &amp;lt;Path Pattern&amp;gt;
  where &amp;lt;Constraints&amp;gt;
  project &amp;lt;Projections&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;&lt;EM&gt;Constraints:&lt;/EM&gt; A Boolean expression composed of properties of named variables in the&amp;nbsp;Pattern&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Projections:&lt;/EM&gt; The project clause converts each pattern to a row in a tabular result&lt;/P&gt;
&lt;H1&gt;Constraints&lt;/H1&gt;
&lt;P&gt;For graph-match constraints, we will no longer support the use of variable edge properties with binary operators or scalar functions. Instead, we encourage the use of the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/kusto/query/all-graph-function?view=microsoft-fabric" target="_blank"&gt;all()&lt;/A&gt;&amp;nbsp;graph function.&lt;/P&gt;
&lt;P&gt;For example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The pattern&amp;nbsp;(n1)-[*1..3]-&amp;gt;(n2)&amp;nbsp;with the constraint&amp;nbsp;e.prop has "abc"&amp;nbsp;should be updated to&amp;nbsp;&lt;STRONG&gt;all(e, prop has "abc")&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Similarly,&amp;nbsp;isnotempty(e.prop)&amp;nbsp;should be updated to&amp;nbsp;&lt;STRONG&gt;all(e, isnotempty(prop))&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
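&lt;P&gt;A minimal end-to-end illustration of the new constraint form on toy data (these names and values are not from the original examples):&lt;/P&gt;
&lt;LI-CODE lang="rust"&gt;let nodes = datatable(id: string) ["A", "B", "C"];
let edges = datatable(source: string, destination: string, prop: string)
[
    "A", "B", "abc def",
    "B", "C", "abc xyz"
];
edges
| make-graph source --&amp;gt; destination with nodes on id
| graph-match (n1)-[e*1..3]-&amp;gt;(n2)
  where all(e, prop has "abc")
  project source_id = n1.id, target_id = n2.id&lt;/LI-CODE&gt;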
&lt;H1&gt;Graph Projections&lt;/H1&gt;
&lt;P&gt;For graph-match projections, we are also deprecating the use of variable edge properties with binary operators or scalar functions. We recommend using the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/kusto/query/map-graph-function" target="_blank"&gt;map()&lt;/A&gt;&amp;nbsp;graph function instead.&lt;/P&gt;
&lt;P&gt;For example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The projection&amp;nbsp;project strcat(e.prop, "suffix")&amp;nbsp;should be updated &lt;STRONG&gt;to&amp;nbsp;map(e, strcat(prop, "suffix"))&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;The projection&amp;nbsp;bag_pack("key1", e.prop1, "key2", e.prop2)&amp;nbsp;should be updated to&amp;nbsp;&lt;STRONG&gt;map(e, bag_pack("key1", prop1, "key2", prop2))&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
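&lt;P&gt;And the corresponding projection form with map(), on the same toy graph:&lt;/P&gt;
&lt;LI-CODE lang="rust"&gt;let nodes = datatable(id: string) ["A", "B", "C"];
let edges = datatable(source: string, destination: string, prop: string)
[
    "A", "B", "abc def",
    "B", "C", "abc xyz"
];
edges
| make-graph source --&amp;gt; destination with nodes on id
| graph-match (n1)-[e*1..3]-&amp;gt;(n2)
  project source_id = n1.id, edge_props = map(e, strcat(prop, "-suffix"))&lt;/LI-CODE&gt;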
&lt;P&gt;These changes are aimed at simplifying and standardizing the way graph queries are written, making them more readable and maintainable. We encourage you to update your existing queries to align with these new guidelines.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Starting in May 2025, the system will no longer support dot notation for variable length edges and will generate errors for non-compliant queries. Queries using deprecated variable edge properties with binary operators or scalar functions will not execute.&lt;/P&gt;
&lt;P&gt;Users should review and update their graph-match constraints and projections to adhere to the new guidelines. Using the recommended functions will prevent disruptions and ensure consistent query practices.&lt;/P&gt;
&lt;P&gt;Thank you for your support during this transition.&lt;/P&gt;</description>
      <pubDate>Tue, 01 Apr 2025 07:27:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/deprecation-of-variable-length-edge-dot-notation-in-graph-match/ba-p/4399470</guid>
      <dc:creator>cosh23</dc:creator>
      <dc:date>2025-04-01T07:27:58Z</dc:date>
    </item>
    <item>
      <title>New ADX Dashboards Customization Features: More Control, Better Usability, and Improved Performance</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/new-adx-dashboards-customization-features-more-control-better/ba-p/4378204</link>
      <description>&lt;P&gt;We’re introducing new dashboard customization features to enhance control, usability, and performance. From managing data series visibility to improving navigation and map behavior, these updates help create a clearer, more efficient dashboard experience.&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-start="284" data-end="475"&gt;&lt;A href="#community--1-Legend" target="_self"&gt;Legend Number Configuration&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-start="476" data-end="627"&gt;&lt;A href="#community--1-pane" target="_self"&gt;Adjustable Panel Width&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-crosshair" target="_self"&gt;Crosshair Tooltip Number Configuration&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-start="476" data-end="627"&gt;&lt;A href="#community--1-map" target="_self"&gt;Map Centering Configuration&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-linked-item" style="color: var(--lia-bs-headings-color); font-family: var(--lia-bs-headings-font-family); font-size: var(--lia-bs-h3-font-size); font-style: var(--lia-headings-font-style); font-weight: var(--lia-h3-font-weight); letter-spacing: var(--lia-h3-letter-spacing); background-color: var(--lia-rte-bg-color);"&gt;&lt;a id="community--1-Legend" class="lia-anchor"&gt;&lt;/a&gt;Legend Number Configuration for Dashboards&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;To enhance readability and performance, dashboard authors can now configure the number of data series displayed on load when multiple series are expected in a chart.&lt;/P&gt;
&lt;P&gt;Additional series remain accessible via the legend and can be rendered as needed.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;For example, imagine a chart designed to display energy consumption over time for a fleet of cars. The dashboard author expects a large number of data series—one for each vehicle. To make the chart easier to interpret and improve dashboard performance, they can now set a limit on how many series are rendered initially. Users can still explore the full dataset by selecting additional series from the legend.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;&lt;a id="community--1-crosshair" class="lia-anchor"&gt;&lt;/a&gt;Crosshair Tooltip Number Configuration&lt;/H3&gt;
&lt;P&gt;We’re introducing a new setting under &lt;STRONG&gt;Display options&lt;/STRONG&gt; that allows dashboard authors to control the number of data points displayed in a chart’s crosshair tooltip.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Depending on the expected number of data series in a chart and the specific use case, dashboard owners can now set a limit on how many data points appear in the tooltip. This helps improve readability and prevents overcrowding when dealing with a large number of series.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this update, users can tailor the tooltip experience to focus on the most relevant insights while keeping charts clear and easy to interpret.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;BR /&gt;This tile-level setting may be overridden by the general ADX web setting, &lt;STRONG&gt;"Show all series in chart tooltip."&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-linked-item"&gt;&lt;a id="community--1-pane" class="lia-anchor"&gt;&lt;/a&gt;&lt;A class="lia-anchor" name="_Toc190342333" target="_blank"&gt;&lt;/A&gt;Adjustable Panel Width for Editors and Viewers&lt;/H3&gt;
&lt;P&gt;We’re introducing a highly requested improvement: the ability to manually adjust the width of the pages pane in both edit and view modes.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For dashboards with multiple pages—especially those with long names—users can now resize the panel by dragging to expand or collapse it, making navigation easier and improving usability. This flexibility ensures a more comfortable viewing experience, allowing users to see more of their page names at a glance without truncation.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-linked-item"&gt;&lt;a id="community--1-map" class="lia-anchor"&gt;&lt;/a&gt;&lt;A class="lia-anchor" name="_Toc190342334" target="_blank"&gt;&lt;/A&gt;Map Centering Configuration for Dashboard Tiles&lt;/H3&gt;
&lt;P&gt;We’re introducing a new setting for Map visualizations in Dashboards, giving users more control over how maps behave during data refreshes.&lt;/P&gt;
&lt;P&gt;With the new &lt;STRONG&gt;auto center&lt;/STRONG&gt; setting, displayed on top of the map visualization, users can choose whether the map resets its zoom and center position upon refresh or maintains their manually adjusted view:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Auto center OFF&lt;/STRONG&gt;: The zoom level and position set by the user will persist across data refreshes, preventing unwanted zoom-in/out changes. Users can still manually reset the view using the &lt;STRONG&gt;Center&lt;/STRONG&gt; button.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Auto center ON&lt;/STRONG&gt;: The map will automatically adjust its zoom and center position with each data refresh, ensuring the view is always recalibrated based on the latest data.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This feature helps prevent disruptions in analysis, particularly for users who prefer a fixed view while monitoring live data updates.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Data Explorer Web UI team looks forward to your feedback at&amp;nbsp;&lt;A href="mailto:KustoWebExpFeedback@service.microsoft.com" target="_blank"&gt;KustoWebExpFeedback@service.microsoft.com&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;You’re also welcome to add more ideas and vote for them here:&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/adx.ideas" target="_blank"&gt;Ideas&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 13 Feb 2025 12:22:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/new-adx-dashboards-customization-features-more-control-better/ba-p/4378204</guid>
      <dc:creator>Michal_Bar</dc:creator>
      <dc:date>2025-02-13T12:22:58Z</dc:date>
    </item>
    <item>
      <title>Retirement of Virtual Network Injection for Azure Data Explorer Extended to May 1st 2025</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/retirement-of-virtual-network-injection-for-azure-data-explorer/ba-p/4372670</link>
      <description>&lt;P&gt;We have some important news for Azure Data Explorer users! The retirement date for Virtual Network Injection (VNet Injection) has been extended to May 1st, 2025. This extension provides users with additional time to transition their workloads and adapt their configurations to the new network capabilities of Private Endpoints.&lt;/P&gt;
&lt;P&gt;This decision was made to ensure a smooth and seamless transition for all Azure Data Explorer users, allowing ample time to integrate and align with the updated network architecture. The Azure team remains committed to delivering top-notch service and support during this period.&lt;/P&gt;
&lt;P&gt;Documentation: &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/security-network-migrate-vnet-to-private-endpoint?tabs=arg%2Cportal" target="_blank"&gt;How to migrate from virtual network injection&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Please provide &lt;A href="https://forms.office.com/r/Vadqc7T9Vp" target="_blank"&gt;feedback&lt;/A&gt; on your migration.&lt;/P&gt;</description>
      <pubDate>Fri, 31 Jan 2025 15:14:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/retirement-of-virtual-network-injection-for-azure-data-explorer/ba-p/4372670</guid>
      <dc:creator>cosh23</dc:creator>
      <dc:date>2025-01-31T15:14:21Z</dc:date>
    </item>
    <item>
      <title>Synapse Data Explorer (SDX) to Eventhouse Migration Capability (Preview)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/synapse-data-explorer-sdx-to-eventhouse-migration-capability/ba-p/4357150</link>
      <description>&lt;P&gt;&lt;A href="https://learn.microsoft.com/azure/synapse-analytics/data-explorer/data-explorer-overview" target="_blank" rel="noopener"&gt;Synapse Data Explorer (SDX)&lt;/A&gt;, part of Azure Synapse Analytics, is an enterprise analytics service that enables you to explore, analyze, and visualize large volumes of data using the familiar Kusto Query Language (KQL). SDX has been in public preview since 2019.&lt;/P&gt;
&lt;H1&gt;The evolution of Synapse Data Explorer&lt;/H1&gt;
&lt;P&gt;The next generation of SDX offering is evolving to become &lt;A href="https://learn.microsoft.com/fabric/real-time-intelligence/eventhouse" target="_blank" rel="noopener"&gt;Eventhouse&lt;/A&gt;, part of &lt;A href="https://learn.microsoft.com/fabric/real-time-intelligence/overview" target="_blank" rel="noopener"&gt;Real-Time Intelligence&lt;/A&gt; in Microsoft Fabric. Eventhouse offers the same powerful features and capabilities as SDX, but with enhanced scalability, performance, and security. Eventhouse is built on the same technology as SDX, and is compatible with all the applications, SDKs, integrations, and tools that work with SDX.&lt;/P&gt;
&lt;P&gt;For existing customers considering a move to Fabric, we are excited to offer a seamless migration capability. You can now&lt;STRONG&gt; migrate your Data Explorer pools from your Synapse workspace to Eventhouse effortlessly.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To initiate the migration of your SDX cluster to Eventhouse, simply follow the instructions at &lt;A class="lia-external-url" href="http://aka.ms/sdx.migrate" target="_blank"&gt;http://aka.ms/sdx.migrate&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 28 Apr 2025 17:39:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/synapse-data-explorer-sdx-to-eventhouse-migration-capability/ba-p/4357150</guid>
      <dc:creator>Anshul_Sharma</dc:creator>
      <dc:date>2025-04-28T17:39:31Z</dc:date>
    </item>
    <item>
      <title>Query Acceleration for Delta External Tables (Preview)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/query-acceleration-for-delta-external-tables-preview/ba-p/4292377</link>
      <description>&lt;P&gt;An &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/query/schema-entities/external-tables?view=microsoft-fabric" target="_blank" rel="noopener"&gt;external table&lt;/A&gt; is a schema entity that references data stored external to a Kusto database. Queries run over external tables can be less performant than on data that is ingested due to various factors such as network calls to fetch data from storage, the absence of indexes, and more. Query acceleration allows specifying a policy on top of external delta tables. This policy defines a number of days to cache data for high-performance queries.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://aka.ms/alter-query-acceleration" target="_blank" rel="noopener"&gt;Query Acceleration policy &lt;/A&gt;&amp;nbsp;allows customers to set a policy on top of external delta tables to define the number of days to cache. Behind the scenes, Kusto continuously indexes and caches the data for that period, allowing customers to run performant queries on top.&lt;/P&gt;
&lt;P&gt;The query acceleration policy (QAP) is supported by Azure Data Explorer (ADX) over ADLSgen2/blob storage, and by Eventhouse over OneLake/ADLSgen2/blob storage.&lt;/P&gt;
&lt;H2&gt;Query Acceleration policy&lt;/H2&gt;
&lt;P&gt;We are introducing a &lt;A class="lia-external-url" href="https://aka.ms/alter-query-acceleration" target="_blank" rel="noopener"&gt;new policy&lt;/A&gt; to enable acceleration for delta external tables:&lt;/P&gt;
&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc17173160"&gt;&lt;/A&gt;Syntax&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;.alter external table &amp;lt;TableName&amp;gt; policy query_acceleration 'Policy'&lt;/LI-CODE&gt;
&lt;P&gt;Where:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&amp;lt;TableName&amp;gt;&lt;/STRONG&gt; is the name of a Delta Parquet external table.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&amp;lt;Policy&amp;gt;&lt;/STRONG&gt; is a string literal holding a JSON property bag with the following properties:
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;IsEnabled&lt;/STRONG&gt;: Boolean, required. If &lt;STRONG&gt;true&lt;/STRONG&gt;, query acceleration is enabled.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hot&lt;/STRONG&gt;: TimeSpan, last 'N' days of data to cache.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Steps to enable Query Acceleration&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Create a delta external table as described in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/external-tables-delta-lake" target="_blank" rel="noopener"&gt;this document&lt;/A&gt;:&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang=""&gt;.create-or-alter external table &amp;lt;TableName&amp;gt; kind=delta ( h@'https://storageaccount.blob.core.windows.net/container;&amp;lt;credentials&amp;gt; )&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Set a query acceleration policy&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang=""&gt;.alter external table &amp;lt;TableName&amp;gt; policy query_acceleration ```{ "IsEnabled": true, "Hot": "36500d" }```&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Query the table.&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang=""&gt;external_table('TableName')&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: Indexing and caching might take some time depending on the volume of data and cluster size. For monitoring the progress, see the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/kusto/management/show-external-table-operations-query-acceleration-statistics?view=microsoft-fabric" target="_blank"&gt;monitoring command&lt;/A&gt;.&lt;/P&gt;
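&lt;P&gt;Per the linked documentation, the monitoring command takes roughly this shape (the table name is a placeholder):&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// Returns query acceleration statistics for the external table
.show external table &amp;lt;TableName&amp;gt; operations query_acceleration statistics&lt;/LI-CODE&gt;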
&lt;H1&gt;Costs/Billing&lt;/H1&gt;
&lt;P&gt;Enabling query acceleration does come with some additional costs. The accelerated data is ingested into Kusto and counts toward SSD storage, similar to native Kusto tables. You can control the amount of data to accelerate by configuring the number of days to cache.&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;Query acceleration is a powerful feature designed to enhance your querying capabilities over petabytes of data. By understanding when and how to use this feature, you can significantly improve the efficiency and speed of your data operations. Whether you are dealing with large datasets, complex queries, or real-time analytics, query acceleration provides the performance boost you need to stay ahead.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Get started with &lt;A class="lia-external-url" href="https://aka.ms/alter-query-acceleration" target="_blank" rel="noopener"&gt;Azure Data Explorer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Get started with &lt;A class="lia-external-url" href="https://go.microsoft.com/fwlink/?linkid=2286363" target="_blank" rel="noopener"&gt;Eventhouse in Microsoft Fabric&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Nov 2024 11:11:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/query-acceleration-for-delta-external-tables-preview/ba-p/4292377</guid>
      <dc:creator>Anshul_Sharma</dc:creator>
      <dc:date>2024-11-26T11:11:48Z</dc:date>
    </item>
    <item>
      <title>Exploring the New Kraph Features: Unlocking Powerful Patterns and Operations</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/exploring-the-new-kraph-features-unlocking-powerful-patterns-and/ba-p/4291651</link>
      <description>&lt;P&gt;Kusto Graph Semantics have always been a powerful tool for representing and analyzing complex data structures. With the recent release, we are pleased to introduce a suite of enhancements designed to simplify and enrich your data analysis experience. In this blog post, we delve into the new features including the &lt;EM&gt;star pattern&lt;/EM&gt;, &lt;EM&gt;default node id&lt;/EM&gt;, &lt;EM&gt;graph-shortest-path&lt;/EM&gt;, and &lt;EM&gt;graph-mark-components&lt;/EM&gt;.&lt;/P&gt;
&lt;H1&gt;The Star Pattern&lt;/H1&gt;
&lt;P&gt;One of the most exciting additions is the "star pattern," which allows users to express nonlinear patterns using multiple comma-delimited sequences. This feature is particularly useful for describing connections where different sequences share one or more variable names of a node. For instance, consider a scenario where a node 'n' is at the center of a star, connected to nodes a, b, c, and d. The following pattern can be used:&lt;/P&gt;
&lt;P&gt;(a)--(n)--(b),(c)--(n)--(d).&lt;/P&gt;
&lt;P&gt;This feature is now &lt;STRONG&gt;generally available&lt;/STRONG&gt; (&lt;A href="https://learn.microsoft.com/kusto/query/graph-match-operator?view=microsoft-fabric#star-pattern" target="_blank"&gt;Learn more&lt;/A&gt;).&lt;/P&gt;
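&lt;P&gt;A minimal hedged sketch of the star pattern (the edge list and identifiers are made up; &lt;EM&gt;with_node_id&lt;/EM&gt; is the documented default node identifier syntax):&lt;/P&gt;
&lt;LI-CODE lang=""&gt;let Edges = datatable(source:string, destination:string)
["A","N", "N","B", "C","N", "N","D"];
Edges
| make-graph source --&amp;gt; destination with_node_id=NodeId
// 'n' is shared by both sequences, forming the center of the star
| graph-match (a)--(n)--(b), (c)--(n)--(d)
  project center = n.NodeId&lt;/LI-CODE&gt;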
&lt;H1&gt;Defining the Default Node ID&lt;/H1&gt;
&lt;P&gt;Creating a graph from a tabular expression of edges has never been easier with the new approach for defining the default node id. This feature ensures that the node identifier is readily available for the constraints section of the subsequent graph-match operator. By setting a default node id, you streamline the process of graph creation and enhance the precision of your data queries.&lt;/P&gt;
&lt;P&gt;This feature is now &lt;STRONG&gt;generally available&lt;/STRONG&gt; (&lt;A href="https://learn.microsoft.com/kusto/query/make-graph-operator?view=microsoft-fabric#default-node-identifier" target="_blank"&gt;Learn more&lt;/A&gt;).&lt;/P&gt;
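&lt;P&gt;As a brief hedged sketch (the edge list and names are illustrative), the default node id makes node references available in graph-match without a separate nodes table:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;let Edges = datatable(source:string, destination:string)
["A","B", "B","C"];
Edges
// with_node_id exposes a default identifier on every node
| make-graph source --&amp;gt; destination with_node_id=NodeId
| graph-match (n1)--&amp;gt;(n2)
  where n1.NodeId == "A"
  project target = n2.NodeId&lt;/LI-CODE&gt;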
&lt;H1&gt;Graph-Shortest-Paths&lt;/H1&gt;
&lt;P&gt;Finding the shortest path between nodes is a fundamental operation in graph analysis, and the graph-shortest-paths operator makes this task more efficient than ever. This operator identifies the shortest paths between a set of source nodes and a set of target nodes within a graph, returning a table with the results. Whether you're navigating social networks, optimizing logistical routes, or exploring intricate data relationships, this feature is indispensable for uncovering the most direct connections.&lt;/P&gt;
&lt;P&gt;This feature is now &lt;STRONG&gt;in public preview&lt;/STRONG&gt; (&lt;A href="https://learn.microsoft.com/kusto/query/graph-shortest-paths-operator?view=microsoft-fabric" target="_blank"&gt;Learn more&lt;/A&gt;).&lt;/P&gt;
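&lt;P&gt;A hedged sketch of the operator (the edge list and node ids are hypothetical):&lt;/P&gt;
&lt;LI-CODE lang=""&gt;let Edges = datatable(source:string, destination:string)
["A","B", "B","C", "A","C"];
Edges
| make-graph source --&amp;gt; destination with_node_id=NodeId
// shortest directed path of up to 5 hops from A to C
| graph-shortest-paths (src)-[e*1..5]-&amp;gt;(dst)
  where src.NodeId == "A" and dst.NodeId == "C"
  project path = map(e, destination)&lt;/LI-CODE&gt;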
&lt;H1&gt;Graph-Mark-Components&lt;/H1&gt;
&lt;P&gt;The graph-mark-components operator is designed to find all connected components of a graph and mark each node with a unique component identifier. This feature is crucial for identifying and distinguishing different clusters within your data. By marking each node with a component identifier, you can easily analyze the structure and connectivity of your graph, leading to deeper insights and more informed decisions.&lt;/P&gt;
&lt;P&gt;This feature is now &lt;STRONG&gt;in public preview&lt;/STRONG&gt; (&lt;A href="https://learn.microsoft.com/kusto/query/graph-mark-components-operator?view=microsoft-fabric" target="_blank"&gt;Learn more&lt;/A&gt;).&lt;/P&gt;
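&lt;P&gt;A minimal hedged sketch (two disconnected pairs yield two components; names are made up):&lt;/P&gt;
&lt;LI-CODE lang=""&gt;let Edges = datatable(source:string, destination:string)
["A","B", "C","D"];
Edges
| make-graph source --&amp;gt; destination with_node_id=NodeId
// label every node with the id of its connected component
| graph-mark-components with component_id = ComponentId
| graph-to-table nodes&lt;/LI-CODE&gt;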
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;The latest graph features offer robust tools for enhancing your data analysis capabilities. From the intuitive star pattern to the precise definition of default node ids, and from the efficiency of graph-shortest-path to the clarity of graph-mark-components, these enhancements empower you to delve deeper into your graphs and extract meaningful insights. Embrace these new features and unlock the full potential of your data with ease and precision.&lt;/P&gt;
&lt;P&gt;Stay tuned for more updates and tutorials on how to leverage these powerful graph features to their fullest extent.&lt;/P&gt;</description>
      <pubDate>Mon, 11 Nov 2024 12:47:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/exploring-the-new-kraph-features-unlocking-powerful-patterns-and/ba-p/4291651</guid>
      <dc:creator>cosh23</dc:creator>
      <dc:date>2024-11-11T12:47:28Z</dc:date>
    </item>
    <item>
      <title>Country and Region Information in current_principal_details</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/country-and-region-information-in-current-principal-details/ba-p/4275454</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Kusto has introduced a new feature that allows users to access information about the country of a user and their tenant region or country as provided by &lt;A href="https://learn.microsoft.com/entra/fundamentals/whatis" target="_blank"&gt;Microsoft Entra ID&lt;/A&gt; through the &lt;A href="https://learn.microsoft.com/kusto/query/current-principal-details-function" target="_blank"&gt;current_principal_details()&lt;/A&gt; function. This addition provides enhanced granularity and control in data security and accessibility.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the function to provide this information, it is essential to understand the authentication (AuthN) and authorization (AuthZ) flow for a query in Kusto.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;It begins with the client application requesting access to the Kusto service. The client uses the &lt;A href="https://learn.microsoft.com/entra/identity-platform/msal-overview" target="_blank"&gt;Microsoft Authentication Library (MSAL)&lt;/A&gt; to acquire an access token from Microsoft Entra ID, which serves as proof of the client’s identity. This access token is included in the authorization header of the request. Upon receiving the request, Kusto validates the access token to ensure it is issued by a trusted authority and is still valid. Next, Kusto checks the roles assigned to the authenticated principal to determine if they have the necessary permissions to execute the query. If the principal is authorized, the query is executed; otherwise, access is denied. In the case of &lt;A href="https://learn.microsoft.com/kusto/query/current-principal-details-function" target="_blank"&gt;current_principal_details()&lt;/A&gt;, the function extracts information from &lt;A href="https://learn.microsoft.com/en-us/entra/identity-platform/optional-claims-reference#v10-and-v20-optional-claims-set" target="_blank"&gt;optional claims in the token&lt;/A&gt; to enrich the result about the identity. The newly added properties are:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Country – based on the optional claim “ctry” (standard two-letter country/region code)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;TenantCountry – based on the optional claim “tenant_ctry” (standard two-letter country/region code configured by a tenant admin)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;TenantRegion – based on the optional claim “tenant_region_scope” (standard two-letter region code of the resource tenant)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The following Kusto Query Language (KQL) statement prints the information of the Entra ID user Alice:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;print details=current_principal_details()&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;The result of the function provides detailed information about the authenticated user, Alice.&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="javascript"&gt;{ 
  "Country": "DE",
  "TenantCountry": "US",
  "TenantRegion": "WW",
  "UserPrincipalName": "alice@contoso.com",
  "Type": "aaduser",
  "IdentityProvider": "https://sts.windows.net",
  "DisplayName": "Alice (upn: alice@contoso.com)",
  "Authority": "&amp;lt;tenantId&amp;gt;",
  "ObjectId": "&amp;lt;objectId&amp;gt;",
  "Mfa": "True",
  "FQN": "aaduser=&amp;lt;objectId;tenantId "
}&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;With the integration of location information, users are now able to formulate advanced &lt;A href="https://learn.microsoft.com/kusto/management/row-level-security-policy?view=microsoft-fabric" target="_blank"&gt;Row Level Security (RLS)&lt;/A&gt; policies. These policies can control access to specific rows based on the data provided by Entra ID tokens. This capability is particularly advantageous for organizations operating across multiple countries or regions, as it ensures that sensitive data is accessible only to authorized individuals within specified locations.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The &lt;EM&gt;ContosoSales&lt;/EM&gt; table provides a straightforward yet illustrative dataset that includes sales information segmented by country. The table comprises two columns: &lt;EM&gt;Country&lt;/EM&gt; and &lt;EM&gt;Product&lt;/EM&gt;, with corresponding &lt;EM&gt;Amount&lt;/EM&gt; of sales. For instance, it shows that 10 units of Espresso were sold in Germany (DE) and 5 units in the United States (US). This data can be used to implement and test Row Level Security policies based on geographical location, ensuring that access to sales data is restricted according to the specified country codes.&lt;/SPAN&gt;&lt;/P&gt;
&lt;TABLE width="214px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="71.9688px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Country&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="71.9688px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Product&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="72.2812px" height="30px"&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Amount&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="71.9688px" height="30px"&gt;
&lt;P&gt;DE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="71.9688px" height="30px"&gt;
&lt;P&gt;Espresso&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="72.2812px" height="30px"&gt;
&lt;P&gt;10&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="71.9688px" height="30px"&gt;
&lt;P&gt;US&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="71.9688px" height="30px"&gt;
&lt;P&gt;Espresso&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="72.2812px" height="30px"&gt;
&lt;P&gt;5&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The following function can be used as a predicate in Row Level Security policy:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;.create-or-alter function RLSForContoso(TableName: string) {
    table(TableName)
    | where Country == current_principal_details()["Country"]
}&lt;/LI-CODE&gt;
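&lt;P&gt;To put the predicate into effect, the Row Level Security policy is then enabled on the table. A sketch of the management command, assuming the &lt;EM&gt;ContosoSales&lt;/EM&gt; table from above:&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// Enable RLS on ContosoSales using the predicate function defined above
.alter table ContosoSales policy row_level_security enable "RLSForContoso('ContosoSales')"&lt;/LI-CODE&gt;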
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;A user with the &lt;EM&gt;Country&lt;/EM&gt; property set to "DE" in Entra ID will get the following result when querying the &lt;EM&gt;ContosoSales&lt;/EM&gt; table:&lt;/SPAN&gt;&lt;/P&gt;
&lt;TABLE width="206px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="67.7656px"&gt;
&lt;P&gt;&lt;STRONG&gt;Country&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="71.9688px"&gt;
&lt;P&gt;&lt;STRONG&gt;Product&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="68.0156px"&gt;
&lt;P&gt;&lt;STRONG&gt;Amount&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="67.7656px"&gt;
&lt;P&gt;DE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="71.9688px"&gt;
&lt;P&gt;Espresso&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="68.0156px"&gt;
&lt;P&gt;10&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please note that the information provided by Entra ID is based on static properties configured in the user's profile. Therefore, it does not necessarily represent the user's actual location at the time the query is executed. For example, a user with the &lt;EM&gt;Country&lt;/EM&gt; attribute set to "DE" might not be physically located in Germany when the query runs.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This new capability not only bolsters data security but also enhances compliance with regional data protection regulations. By leveraging the properties from Microsoft Entra ID, enterprises can enforce their data governance policies more effectively and with greater precision.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The introduction of Country/Region-based filtering in Kusto RLS policies underscores Microsoft's commitment to providing robust, secure, and versatile data management solutions. As organizations navigate the complexities of data privacy and security, this feature offers a critical tool for maintaining control over their data landscape.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Stay tuned for more updates and detailed guides on how to implement and make the most out of this exciting new feature!&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 21 Oct 2024 12:19:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/country-and-region-information-in-current-principal-details/ba-p/4275454</guid>
      <dc:creator>cosh23</dc:creator>
      <dc:date>2024-10-21T12:19:08Z</dc:date>
    </item>
    <item>
      <title>Performing ETL in Real-Time Intelligence with Microsoft Fabric</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/performing-etl-in-real-time-intelligence-with-microsoft-fabric/ba-p/4267752</link>
      <description>&lt;H1&gt;Introduction&lt;/H1&gt;
&lt;P&gt;In today’s data-driven world, the ability to act upon data as soon as it’s generated is crucial for businesses to make informed decisions quickly. Organizations seek to harness the power of up-to-the-minute data to drive their operations, marketing strategies, and customer interactions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This becomes challenging in the world of real-time data where it is not always possible to do all the transformations while the data is being streamed. Therefore, you must come up with a flow that does not impact the data stream and is also quick.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is where Microsoft Fabric comes into play. Fabric offers a comprehensive suite of services including Data Engineering, Data Factory, Data Science, Real-Time Intelligence, Data Warehouse, and Databases. But today, we are going to focus on Real-Time Intelligence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Use-Cases&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;This setup can be used in scenarios where transformed data is needed by a downstream processing or analytical workload. An example would be to enable &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-house-onelake-availability" target="_self"&gt;OneLake availability&lt;/A&gt; on a KQL table and let that data be accessed by other Fabric engines like Notebooks, Lakehouse etc. for training ML models or analytics.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As another example, say you have a timestamp column in your streaming data and you would like to change its format to match your standard. You can use an update policy to transform the timestamp format and store the result, as sketched below.&lt;/P&gt;
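&lt;P&gt;A minimal hedged sketch of such a transformation function (the table and column names here are hypothetical):&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;// Function a table update policy could call to reformat a timestamp column
.create-or-alter function TransformTimestamp() {
    SourceTable
    | extend ts_formatted = format_datetime(ts, 'yyyy-MM-dd HH:mm:ss')
    | project-away ts
}&lt;/LI-CODE&gt;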
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Overview&lt;/H1&gt;
&lt;P&gt;Fabric Real-Time Intelligence supports KQL database as its datastore which is designed to handle real-time data streams efficiently. After ingestion, you can use &lt;A href="https://learn.microsoft.com/en-us/kusto/query/?view=microsoft-fabric" target="_blank" rel="noopener"&gt;Kusto Query Language (KQL)&lt;/A&gt; to query the data in the database.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;KQL Table is a Fabric item which is part of the KQL Database. Both these entities are housed within an &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/eventhouse" target="_blank" rel="noopener"&gt;Eventhouse&lt;/A&gt;. An Eventhouse is a workspace of databases, which might be shared across a certain project. It allows you to manage multiple databases at once, sharing capacity and resources to optimize performance and cost. Eventhouses provide unified monitoring and management across all databases and per database.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Figure 1: Hierarchy of Fabric items in an Eventhouse&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/kusto/management/update-policy?view=microsoft-fabric" target="_blank" rel="noopener"&gt;Update policies&lt;/A&gt; are automated processes activated when new data is added to a table. They automatically transform the incoming data with a query and save the result in a destination table, removing the need for manual orchestration. A single table can have multiple update policies for various transformations, saving data to different tables simultaneously. These target tables can have distinct schemas, retention policies, and other configurations from the source table.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Scope&lt;/H1&gt;
&lt;P&gt;In this blog, we have a scenario where we will be doing data enrichment on the data that lands in the KQL table. In this case, we will be dropping the columns we don’t need but you can also do other transformations supported in KQL on the data.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Here we have a real-time stream pushing data to a KQL table. Once loaded into the source table, we will use an update policy which will drop columns not needed and push the data of interest to the destination table from the source table.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;BR /&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Prerequisites&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;A Microsoft account or a Microsoft Entra user identity. An Azure subscription isn't required.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;Fabric Capacity. If you don't have one, you can sign-up for &lt;A href="https://learn.microsoft.com/en-us/fabric/get-started/fabric-trial" target="_self"&gt;Fabric Trial Capacity&lt;/A&gt;.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;A&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/create-database" target="_blank" rel="noopener"&gt;KQL database in Real-Time Intelligence in Microsoft Fabric.&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Creating sample data stream&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;In the Real-Time Intelligence experience, &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-streams/create-manage-an-eventstream?pivots=standard-capabilities" target="_self"&gt;create a new event stream&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Under source, add new source and select sample data.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Continue configuring the stream. I am using the Bicycles sample data stream in this blog.&lt;/LI&gt;
&lt;LI&gt;Select Direct ingestion as the Data Ingestion Mode for destination.&lt;/LI&gt;
&lt;LI&gt;Select your workspace and KQL database you have created as a prerequisite to this exercise for the destination.&lt;/LI&gt;
&lt;LI&gt;A pop-up should appear to configure the database details; continue by configuring the table where the data from the stream should land.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Configuring KQL Table with Update Policy&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Open the Eventhouse page in Fabric. There you should now be able to preview the data that is being ingested from the sample data stream.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Create a new destination table. I used the following KQL to create the new table (destination):&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;.create table RTITableNew (
    BikepointID: string,Street: string, Neighbourhood: string, No_Bikes: int, No_Empty_Docks: int )
&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Under the Database tab, click on New and select Table Update Policy.
&lt;DIV id="tinyMceEditorgurkamal_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;BR /&gt;&lt;img /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;You can edit the existing policy format or paste the one below that I used:&lt;/P&gt;
&lt;EM&gt;NOTE: RTITable is source and RTITableNew is the destination table.&lt;/EM&gt;&lt;BR /&gt;&lt;LI-CODE lang="applescript"&gt;.alter table RTITable policy update ```[
  {
    "IsEnabled": true,
    "Source": "RTITable",
    "Query": "RTITable | project BikepointID=BikepointID, Street=Street, Neighbourhood=Neighbourhood, No_Bikes=No_Bikes, No_Empty_Docks=No_Empty_Docks ",
    "IsTransactional": true,
    "PropagateIngestionProperties": false,
    "ManagedIdentity": null
  }
]```​&lt;/LI-CODE&gt;&lt;BR /&gt;The above policy drops the Longitude and Latitude columns and stores the rest of the columns in the destination table. You can do more transformations as per your requirements, but the workflow remains the same.&lt;/LI&gt;
&lt;LI&gt;After running the above command, your destination table will start populating with the new data as soon as the source table gets data. To review the policy on the destination table, you can run the following&amp;nbsp;command:&lt;BR /&gt;&lt;LI-CODE lang="applescript"&gt; .show table &amp;lt;table-name&amp;gt; policy update​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;To summarize, we took a real-time data stream, stored the data in a KQL database, performed data enrichment on it, and stored the result in a destination table. This flow caters to scenarios where you want to process data once it is ingested from the stream.&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;Further Reading and Resources&lt;/H1&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/kusto/management/update-policy-common-scenarios?view=azure-data-explorer" target="_blank" rel="noopener"&gt;Common scenarios for using table update policies - Kusto | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/table-update-policy" target="_blank" rel="noopener"&gt;Create a table update policy in Real-Time Intelligence - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2024 16:14:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/performing-etl-in-real-time-intelligence-with-microsoft-fabric/ba-p/4267752</guid>
      <dc:creator>gurkamal</dc:creator>
      <dc:date>2024-10-15T16:14:02Z</dc:date>
    </item>
    <item>
      <title>Time Weighted Average and Value in Azure Data Explorer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/time-weighted-average-and-value-in-azure-data-explorer/ba-p/4257933</link>
      <description>&lt;P&gt;Azure Data Explorer (ADX) supports time series aggregation at scale, either by the &lt;A href="https://learn.microsoft.com/en-us/kusto/query/summarize-operator?view=azure-data-explorer" target="_blank"&gt;summarize operator&lt;/A&gt; that keeps the aggregated data in tabular format or by the &lt;A href="https://learn.microsoft.com/en-us/kusto/query/make-series-operator?view=azure-data-explorer" target="_blank"&gt;make-series operator&lt;/A&gt; that transforms it to a set of dynamic arrays. There are multiple aggregation functions, out of them &lt;A href="https://learn.microsoft.com/en-us/kusto/query/avg-aggregation-function?view=azure-data-explorer" target="_blank"&gt;avg()&lt;/A&gt; is one of the most popular. ADX calculates it by grouping the samples into fixed time bins and applying simple average of all samples inside each time bin, regardless of their specific location inside the bin. This is the standard time bin aggregation as done by SQL and other databases. However, there are scenarios where simple average doesn’t accurately represent the time bin value. For example, IoT devices sending data commonly emits metric values in an asynchronous way, only upon change, to conserve bandwidth. In that case we need to calculate Time Weighted Average (TWA), taking into consideration the exact timestamp and duration of each value inside the time bin. ADX doesn’t have native aggregation functions to calculate time weighted average, still we have just added few &lt;A href="https://learn.microsoft.com/en-us/kusto/query/functions/?view=azure-data-explorer" target="_blank"&gt;User Defined Functions&lt;/A&gt;, part of the &lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/functions-library?view=azure-data-explorer" target="_blank"&gt;Functions Library&lt;/A&gt;, supporting it:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/time-weighted-val-fl?view=azure-data-explorer&amp;amp;tabs=query-defined" target="_blank"&gt;time_weighted_val_fl()&lt;/A&gt; - Calculates the time weighted value of a metric using linear interpolation.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/time-weighted-avg-fl?view=azure-data-explorer&amp;amp;tabs=query-defined" target="_blank"&gt;time_weighted_avg_fl()&lt;/A&gt; - Calculates the time weighted average of a metric using fill forward interpolation.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/time-weighted-avg2-fl?view=azure-data-explorer&amp;amp;tabs=query-defined" target="_blank"&gt;time_weighted_avg2_fl()&lt;/A&gt; - Calculates the time weighted average of a metric using linear interpolation.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Here is a query comparing the original &amp;amp; interpolated values, standard average by the summarize operator, twa using fill forward and twa using linear interpolation:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;let tbl = datatable(ts:datetime,  val:real, key:string) [
    datetime(2021-04-26 00:00), 100, 'D1',
    datetime(2021-04-26 00:45), 300, 'D1',
    datetime(2021-04-26 01:15), 200, 'D1',
];
let stime=datetime(2021-04-26 00:00);
let etime=datetime(2021-04-26 01:15);
let dt = 1h;
//
tbl
| where ts between (stime..etime)
| summarize val=avg(val) by bin(ts, dt), key
| project-rename _ts=ts, _key=key
| extend orig_val=0
| extend _key = strcat(_key, '-SUMMARIZE'), orig_val=0
| union (tbl
| invoke time_weighted_val_fl('ts', 'val', 'key', stime, etime, dt)
| project-rename val = _twa_val
| extend _key = strcat(_key, '-SAMPLES'))
| union (tbl
| invoke time_weighted_avg_fl('ts', 'val', 'key', stime, etime, dt)
| project-rename val = tw_avg
| extend _key = strcat(_key, '-TWA-FF'), orig_val=0)
| union (tbl
| invoke time_weighted_avg2_fl('ts', 'val', 'key', stime, etime, dt)
| project-rename val = tw_avg
| extend _key = strcat(_key, '-TWA-LI'), orig_val=0)
| order by _key asc, _ts asc
// use anomalychart just to show original data points as bold dots
| render anomalychart with (anomalycolumns=orig_val, title='Time Weighted Average, Fill Forward &amp;amp; Linear interpolation')
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Explaining the results:&lt;/P&gt;
&lt;TABLE width="946px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="202.863px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.65px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;2021-04-26 00:00&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="356.688px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;2021-04-26 01:00&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="202.863px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Interpolated value&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.65px" height="30px"&gt;
&lt;P&gt;100&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="356.688px" height="30px"&gt;
&lt;P&gt;(300+200)/2=250&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="202.863px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Average by summarize&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.65px" height="30px"&gt;
&lt;P&gt;(100+300)/2=200&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="356.688px" height="30px"&gt;
&lt;P&gt;200&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="202.863px" height="57px"&gt;
&lt;P&gt;&lt;STRONG&gt;Fill forward TWA&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.65px" height="57px"&gt;
&lt;P&gt;(45m*100 + 15m*300)/60m = 150&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="356.688px" height="57px"&gt;
&lt;P&gt;(15m*300 + 45m*200)/60m = 225&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="202.863px" height="57px"&gt;
&lt;P&gt;&lt;STRONG&gt;Linear interpolation TWA&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.65px" height="57px"&gt;
&lt;P&gt;(45m*(100+300)/2 + 15m*(300+250)/2)/60m = 218.75&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="356.688px" height="57px"&gt;
&lt;P&gt;(15m*(250+200)/2 + 45m*200)/60m = 206.25&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that all functions work on multiple time series, partitioned by the supplied key.&lt;/P&gt;
&lt;P&gt;You are welcome to try these functions and share your feedback!&lt;/P&gt;</description>
      <pubDate>Sun, 29 Sep 2024 16:02:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/time-weighted-average-and-value-in-azure-data-explorer/ba-p/4257933</guid>
      <dc:creator>adieldar</dc:creator>
      <dc:date>2024-09-29T16:02:00Z</dc:date>
    </item>
    <item>
      <title>New Custom and Managed Python images in Azure Data Explorer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/new-custom-and-managed-python-images-in-azure-data-explorer/ba-p/4249908</link>
      <description>&lt;P&gt;Azure Data Explorer (ADX) supports running Python code embedded in KQL query using the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/kusto/query/pythonplugin?pivots=azuredataexplorer" target="_blank" rel="noopener"&gt;python() plugin&lt;/A&gt;&amp;nbsp;. The plugin runtime is hosted in a sandbox, an isolated and secured environment hosted on ADX existing compute nodes. This sandbox contains the language engine as well as common mathematical and scientific packages. The plugin extends KQL native functionalities with a huge archive of OSS packages, enabling ADX users to run advanced algorithms, such as &lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/functions-library?view=azure-data-explorer#machine-learning-functions" target="_blank" rel="noopener"&gt;machine learning&lt;/A&gt;, &lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/functions-library?view=azure-data-explorer#statistical-and-probability-functions" target="_blank" rel="noopener"&gt;statistical tests&lt;/A&gt;, &lt;A href="https://learn.microsoft.com/en-us/kusto/functions-library/functions-library?view=azure-data-explorer#series-processing-functions" target="_blank" rel="noopener"&gt;time series analysis&lt;/A&gt; and many more, as part of the KQL query.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are constantly working to improve the Python plugin capabilities, and today we introduce new managed Python images as well as the option to fully customize the Python image to include your required Python packages:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Managed images&lt;/STRONG&gt; are Python environments that are built and maintained by the Kusto team, each containing a specific Python engine and a set of packages. Today we are adding &lt;A href="https://learn.microsoft.com/en-us/kusto/query/python-package-reference?view=azure-data-explorer&amp;amp;tabs=python3-11-7" target="_blank" rel="noopener"&gt;Python 3.11.7&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/en-us/kusto/query/python-package-reference?view=azure-data-explorer&amp;amp;tabs=python3-11-7-DL" target="_blank" rel="noopener"&gt;Python 3.11.7 DL&lt;/A&gt; (containing torch &amp;amp; tensorflow); both images contain an up-to-date Python 3.11.7 engine and packages (you can review the full contents of these images by clicking the respective links).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Custom images&lt;/STRONG&gt; let you create your own Python images in case you need additional packages or different versions of the Python engine and/or packages. You can create a custom image from scratch by specifying the Python engine and supplying a requirements text file containing the full list of packages, or by selecting an existing base image and supplying a minimal requirements file containing only the additional packages to install on top of the base image.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Defining a custom Python image from ADX porta&lt;/EM&gt;&lt;EM&gt;l&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For further information see &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/language-extensions#create-a-custom-image" target="_blank" rel="noopener"&gt;Create a custom image - Azure Data Explorer | Microsoft Learn&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/en-us/kusto/query/python-package-reference?view=azure-data-explorer&amp;amp;tabs=python3-11-7" target="_blank" rel="noopener"&gt;Python plugin packages - Kusto | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You are welcome to try it and share your feedback!&lt;/P&gt;</description>
      <pubDate>Thu, 19 Sep 2024 12:45:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/new-custom-and-managed-python-images-in-azure-data-explorer/ba-p/4249908</guid>
      <dc:creator>adieldar</dc:creator>
      <dc:date>2024-09-19T12:45:42Z</dc:date>
    </item>
    <item>
      <title>Advanced Time Series Anomaly Detector in Fabric</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/advanced-time-series-anomaly-detector-in-fabric/ba-p/4226195</link>
      <description>&lt;H1&gt;Introduction&lt;/H1&gt;
&lt;P&gt;&lt;U&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fai-services%2FAnomaly-Detector%2Foverview&amp;amp;data=05%7C02%7Cadieldar%40microsoft.com%7C0b36aec77b2f4d747a7b08dcc76d586f%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638604519754467753%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;amp;sdata=ZykmO4dMt1%2FTbULRnC9HA28SQ6f96PtJ1kceKs0ZNHQ%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Anomaly Detector&lt;/A&gt;&lt;/U&gt;, one of Azure AI services, enables you to monitor and detect anomalies in your time series data. This service is based on advanced algorithms, &lt;U&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fmicrosoft-developer-community%2Foverview-of-sr-cnn-algorithm-in-azure-anomaly-detector%2Fba-p%2F982798&amp;amp;data=05%7C02%7Cadieldar%40microsoft.com%7C0b36aec77b2f4d747a7b08dcc76d586f%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638604519754476958%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;amp;sdata=0FlP1%2BRzKD%2FeBpFtm0SCODwbyN7j9rktBjAcGYxlF6M%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;SR-CNN&lt;/A&gt;&lt;/U&gt;&amp;nbsp;for univariate analysis and &lt;U&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fai-azure-ai-services-blog%2Fintroducing-multivariate-anomaly-detection%2Fba-p%2F2260679&amp;amp;data=05%7C02%7Cadieldar%40microsoft.com%7C0b36aec77b2f4d747a7b08dcc76d586f%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638604519754481595%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;amp;sdata=ckEsLvnFIduR7fDMcjs0c3ABpiwX0Jt8DjHy3eCORPU%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;MTAD-GAT&lt;/A&gt;&lt;/U&gt;&amp;nbsp;for multivariate analysis and is being retired by October 2026. In this blog post we will lay out a migration strategy to Microsoft Fabric, allowing you to detect identical anomalies, using the same algorithms as the old service, and even more. Here are a few of the benefits of the strategy that we are about to lay out for you:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Easier management of the trained models’ lifecycle using Fabric ML.&lt;/LI&gt;
&lt;LI&gt;No need to upload your data to an external storage account; just stream your data to Fabric Eventhouse with OneLake availability and you can use it for training and scoring.&lt;/LI&gt;
&lt;LI&gt;You can use your data by any Fabric engine (KQL DB, Fabric ML Notebook, PBI and more)&lt;/LI&gt;
&lt;LI&gt;The algorithms are open sourced and published via the new &lt;A href="https://pypi.org/project/time-series-anomaly-detector/" target="_blank" rel="noopener"&gt;time-series-anomaly-detector · PyPI&lt;/A&gt; package, thus you can review and tweak them as needed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Time Series Anomaly Detection in Fabric RTI&lt;/H1&gt;
&lt;P&gt;There are a few options for time series anomaly detection in Fabric RTI (Real-Time Intelligence):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;For &lt;A href="https://en.wikipedia.org/wiki/Univariate_(statistics)#Analysis" target="_blank" rel="noopener"&gt;univariate analysis&lt;/A&gt;, KQL contains the native function &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction" target="_blank" rel="noopener"&gt;series_decompose_anomalies()&lt;/A&gt; that can perform anomaly detection on thousands of time series in seconds. For further info on using this function take a look at &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/anomaly-detection" target="_blank" rel="noopener"&gt;Time series anomaly detection &amp;amp; forecasting in Azure Data Explorer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;For &lt;A href="https://en.wikipedia.org/wiki/Multivariate_statistics#Multivariate_analysis" target="_blank" rel="noopener"&gt;multivariate analysis&lt;/A&gt;, there are few KQL library functions leveraging known multivariate analysis algorithms in &lt;A href="https://scikit-learn.org/stable/index.html" target="_blank" rel="noopener"&gt;scikit-learn&lt;/A&gt; , taking advantage of&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/pythonplugin?pivots=azuredataexplorer" target="_blank" rel="noopener"&gt;ADX capability to run inline Python as part of the KQL query&lt;/A&gt;. For further info see &lt;A href="https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/multivariate-anomaly-detection-in-azure-data-explorer/ba-p/3689616" target="_blank" rel="noopener"&gt;Multivariate Anomaly Detection in Azure Data Explorer - Microsoft Community Hub&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;For both univariate and multivariate analysis you can now use the new workflow, which is based on the time-series-anomaly-detector package, as described below.&lt;/LI&gt;
&lt;/UL&gt;
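&lt;P&gt;For reference, here is a minimal univariate sketch using series_decompose_anomalies(). It assumes the demo_make_series1 table from the ADX samples database; any table with a timestamp column works the same way:&lt;/P&gt;
&lt;LI-CODE lang="basic"&gt;// build an hourly series per OS version, flag anomalous points and chart them
demo_make_series1
| make-series num=count() on TimeStamp step 1h by OsVer
| extend (anomalies, score, baseline) = series_decompose_anomalies(num, 1.5, -1, 'linefit')
| render anomalychart with(anomalycolumns=anomalies)&lt;/LI-CODE&gt;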
&lt;H1&gt;Using time-series-anomaly-detector in Fabric&lt;/H1&gt;
&lt;P&gt;In the following example we will:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Upload a stocks change table to Fabric&lt;/LI&gt;
&lt;LI&gt;Train the multivariate anomaly detection model in a Python notebook using Spark engine&lt;/LI&gt;
&lt;LI&gt;Predict anomalies by applying the trained model to new data using Eventhouse (Kusto) engine&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Note that for the univariate model there is no need to train the model in a separate step (as training is fast and done internally), so we can just predict.&lt;/P&gt;
&lt;P&gt;Below we briefly present the steps, see &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/multivariate-anomaly-detection" target="_blank" rel="noopener"&gt;Multivariate anomaly detection - Microsoft Fabric | Microsoft Learn&lt;/A&gt; for the detailed tutorial.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Creating the environments&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Create a Workspace&lt;/LI&gt;
&lt;LI&gt;Create an Eventhouse – to store the incoming streaming data
&lt;UL&gt;
&lt;LI&gt;Enable OneLake availability – so the older data that was ingested to the Eventhouse can be seamlessly accessed by the Spark Notebook for training the anomaly detection model&lt;/LI&gt;
&lt;LI&gt;Enable the KQL Python plugin – to be used for real time predictions of anomalies on the new streaming data. Select the 3.11.7 DL image that contains the time-series-anomaly-detector package&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Create a Spark environment that includes the time-series-anomaly-detector package&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Training &amp;amp; storing the Anomaly Detection model&lt;/H2&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Upload the stocks data to the Eventhouse&lt;/LI&gt;
&lt;LI&gt;Create a notebook to train the model&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI&gt;Load the data from the Eventhouse using the OneLake path:&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;onelake_uri = "OneLakeTableURI" # Replace with your OneLake table URI 
abfss_uri = convert_onelake_to_abfss(onelake_uri)
df = spark.read.format('delta').load(abfss_uri)
df = df.toPandas().set_index('Date')&lt;/LI-CODE&gt;
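&lt;P&gt;The convert_onelake_to_abfss() helper used above is defined in the detailed tutorial; as a rough sketch (an illustration of the idea, not the tutorial's exact code), it rewrites the OneLake https URI into the abfss form that Spark reads:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def convert_onelake_to_abfss(onelake_uri: str) -&amp;gt; str:
    # a OneLake table URI looks like https://&amp;lt;host&amp;gt;/&amp;lt;workspace&amp;gt;/&amp;lt;item&amp;gt;/...;
    # Spark reads the same path as abfss://&amp;lt;workspace&amp;gt;@&amp;lt;host&amp;gt;/&amp;lt;item&amp;gt;/...
    if not onelake_uri.startswith("https://"):
        raise ValueError("Expected an https:// OneLake URI")
    host, _, path = onelake_uri[len("https://"):].partition("/")
    workspace, _, rest = path.partition("/")
    return f"abfss://{workspace}@{host}/{rest}"&lt;/LI-CODE&gt;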
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;View the data:&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import plotly.graph_objects as go

fig = go.Figure()
fig.add_trace(go.Scatter(x=df.index, y=df['AAPL'], mode='lines', name='AAPL'))
fig.add_trace(go.Scatter(x=df.index, y=df['AMZN'], mode='lines', name='AMZN'))
fig.add_trace(go.Scatter(x=df.index, y=df['GOOG'], mode='lines', name='GOOG'))
fig.add_trace(go.Scatter(x=df.index, y=df['MSFT'], mode='lines', name='MSFT'))
fig.add_trace(go.Scatter(x=df.index, y=df['SPY'], mode='lines', name='SPY'))
fig.update_layout(
    title='Stock Price Change %',
    xaxis_title='Date',
    yaxis_title='Change %',
    legend_title='Tickers'
)

fig.show()&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Prepare the data for training:&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;features_cols = ['AAPL', 'AMZN', 'GOOG', 'MSFT', 'SPY']
cutoff_date = pd.to_datetime('2023-01-01')
train_df = df[df.Date &amp;lt; cutoff_date]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Train the model:&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mlflow
from anomaly_detector import MultivariateAnomalyDetector
model = MultivariateAnomalyDetector()
sliding_window = 200
param   s = {"sliding_window": sliding_window}
model.fit(train_df, params=params)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Save the model in Fabric ML model registry&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;with mlflow.start_run():
    mlflow.log_params(params)
    mlflow.set_tag("Training Info", "MVAD on 5 Stocks Dataset")

    model_info = mlflow.pyfunc.log_model(
        python_model=model,
        artifact_path="mvad_artifacts",
        registered_model_name="mvad_5_stocks_model",
    )&lt;/LI-CODE&gt;
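&lt;P&gt;Optionally, you can sanity-check the registered model right away. This is a sketch that assumes the pyfunc wrapper accepts the same frame layout that was used for fit(), which is how the Eventhouse scoring function below calls it:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# load the model back through the generic pyfunc interface and score a recent slice
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
preds = loaded.predict(train_df.tail(2 * sliding_window))  # needs at least sliding_window rows
print(preds)&lt;/LI-CODE&gt;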
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Extract the model path (to be used by the Eventhouse for the prediction):&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;mi = mlflow.search_registered_models(filter_string="name='mvad_5_stocks_model'")[0]
model_abfss = mi.latest_versions[0].source
print(model_abfss)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Create a KQL queryset and attach the Eventhouse to it
&lt;UL&gt;
&lt;LI&gt;Run the ‘.create-or-alter function’ query to define the predict_fabric_mvad_fl() stored function:&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;.create-or-alter function with (folder = "Packages\\ML", docstring = "Predict MVAD model in Microsoft Fabric")
predict_fabric_mvad_fl(samples:(*), features_cols:dynamic, artifacts_uri:string, trim_result:bool=false)
{
    let s = artifacts_uri;
    let artifacts = bag_pack('MLmodel', strcat(s, '/MLmodel;impersonate'), 'conda.yaml', strcat(s, '/conda.yaml;impersonate'),
                             'requirements.txt', strcat(s, '/requirements.txt;impersonate'), 'python_env.yaml', strcat(s, '/python_env.yaml;impersonate'),
                             'python_model.pkl', strcat(s, '/python_model.pkl;impersonate'));
    let kwargs = bag_pack('features_cols', features_cols, 'trim_result', trim_result);
    let code = ```if 1:
        import os
        import shutil
        import mlflow
        model_dir = 'C:/Temp/mvad_model'
        model_data_dir = model_dir + '/data'
        os.mkdir(model_dir)
        shutil.move('C:/Temp/MLmodel', model_dir)
        shutil.move('C:/Temp/conda.yaml', model_dir)
        shutil.move('C:/Temp/requirements.txt', model_dir)
        shutil.move('C:/Temp/python_env.yaml', model_dir)
        shutil.move('C:/Temp/python_model.pkl', model_dir)
        features_cols = kargs["features_cols"]  # properties passed via the 'kwargs' bag surface as 'kargs' in the Python sandbox
        trim_result = kargs["trim_result"]
        test_data = df[features_cols]
        model = mlflow.pyfunc.load_model(model_dir)
        predictions = model.predict(test_data)
        predict_result = pd.DataFrame(predictions)
        samples_offset = len(df) - len(predict_result)        # this model doesn't output predictions for the first sliding_window-1 samples
        if trim_result:                                       # trim the prefix samples
            result = df[samples_offset:]
            result.iloc[:,-4:] = predict_result.iloc[:, 1:]   # no need to copy 1st column which is the timestamp index
        else:
            result = df                                       # output all samples
            result.iloc[samples_offset:,-4:] = predict_result.iloc[:, 1:]
        ```;
    samples
    | evaluate python(typeof(*), code, kwargs, external_artifacts=artifacts)
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Run the prediction query that will detect multivariate anomalies on the 5 stocks, based on the trained model, and render the result as an &lt;A href="https://learn.microsoft.com/en-us/kusto/query/visualization-anomalychart?view=microsoft-fabric" target="_self"&gt;anomalychart&lt;/A&gt;. Note that the anomalous points are rendered on the first stock (AAPL), though they represent multivariate anomalies, i.e. anomalies of the vector of the 5 stocks on the specific date.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;let cutoff_date=datetime(2023-01-01);
let num_predictions=toscalar(demo_stocks_change | where Date &amp;gt;= cutoff_date | count);   //  number of latest points to predict
let sliding_window=200;                                                                 //  should match the window that was set for model training
let prefix_score_len = sliding_window/2+min_of(sliding_window/2, 200)-1;
let num_samples = prefix_score_len + num_predictions;
demo_stocks_change
| top num_samples by Date desc 
| order by Date asc
| extend is_anomaly=bool(false), score=real(null), severity=real(null), interpretation=dynamic(null)
| invoke predict_fabric_mvad_fl(pack_array('AAPL', 'AMZN', 'GOOG', 'MSFT', 'SPY'),
            // NOTE: Update artifacts_uri to model path
            artifacts_uri='enter your model URI here',
            trim_result=true)
| summarize Date=make_list(Date), AAPL=make_list(AAPL), AMZN=make_list(AMZN), GOOG=make_list(GOOG), MSFT=make_list(MSFT), SPY=make_list(SPY), anomaly=make_list(toint(is_anomaly))
| render anomalychart with(anomalycolumns=anomaly, title='Stock Price Changes in % with Anomalies')&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Summary&lt;/H1&gt;
&lt;P&gt;The addition of the time-series-anomaly-detector package to Fabric makes it the top platform for univariate &amp;amp; multivariate time series anomaly detection. Choose the anomaly detection method that best fits your scenario – from the native KQL function for univariate analysis at scale, through standard multivariate analysis techniques, and up to the best-of-breed time series anomaly detection algorithms implemented in the time-series-anomaly-detector package. For more information see the &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/multivariate-anomaly-overview" target="_blank" rel="noopener"&gt;overview&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/multivariate-anomaly-detection" target="_blank" rel="noopener"&gt;tutorial&lt;/A&gt;.&lt;/P&gt;
      <pubDate>Thu, 29 Aug 2024 10:31:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/advanced-time-series-anomaly-detector-in-fabric/ba-p/4226195</guid>
      <dc:creator>adieldar</dc:creator>
      <dc:date>2024-08-29T10:31:03Z</dc:date>
    </item>
    <item>
      <title>Visualizing Data as Graphs with Fabric and KQL</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/visualizing-data-as-graphs-with-fabric-and-kql/ba-p/4223689</link>
      <description>&lt;H1&gt;Introduction&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For quite a while, I have been extremely interested in data visualization. Over the last few years, I have been focused on ways to visualize graph databases (regardless of where the data comes from). Using force directed graphs to highlight the similarities or “connected communities” in data is incredibly powerful. The purpose of this post is to highlight the recent work that the &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/tools/kusto-explorer" target="_blank" rel="noopener"&gt;Kusto.Explorer&lt;/A&gt; team has done to visualize graphs in an Azure Data Explorer database, with data coming from a Fabric KQL Database.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Note: The &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/tools/kusto-explorer" target="_blank" rel="noopener"&gt;Kusto.Explorer&lt;/A&gt; application used to visualize the graph is currently only supported on Windows.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;STRONG&gt;Background&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/products/data-explorer/" target="_blank" rel="noopener"&gt;Azure Data Explorer&lt;/A&gt; (ADX) is Microsoft’s fully managed, high-performance analytics engine specializing in near real time queries on high volumes of data. It is extremely useful for log analytics, time-series and Internet of Things type scenarios. ADX is like traditional relational database models in that it organizes the data into tables with strongly typed schemas.&lt;/P&gt;
&lt;P&gt;In September 2023, the ADX team introduced &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/graph-overview" target="_blank" rel="noopener"&gt;extensions to the query language&lt;/A&gt; (KQL) that enabled graph semantics on top of the tabular data. These extensions enabled users to contextualize their data and its relationships as a graph structure of nodes and edges. Graphs are often an easier way to present and query complex or networked relationships. These are normally difficult to query because they require recursive joins on standard tables. Examples of common graphs include social networks (friends of friends), product recommendations (similar users also bought product x), connected assets (assembly line) or a knowledge graph.&lt;/P&gt;
&lt;P&gt;Fast forward to February 2024, Microsoft Fabric introduced &lt;A href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/eventhouse" target="_blank" rel="noopener"&gt;Eventhouse&lt;/A&gt; as a workload in a Fabric workspace. This brings forward the power of KQL and Real-Time analytics to the Fabric ecosystem.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So now, I have a large amount of data in Fabric Eventhouse that I want to visualize with a force directed graph…&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s get started!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Pre-Requisites&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to follow along, you will need a Microsoft Fabric account (&lt;A href="https://www.microsoft.com/en-us/microsoft-fabric/getting-started" target="_blank" rel="noopener"&gt;Get started with Fabric for Free&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, for this post, I used an open dataset from the Bureau of Transportation Statistics. The following files were used:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://www.transtats.bts.gov/Tables.asp?QO_VQ=IMI&amp;amp;QO_anzr=N8vn6v10%FDf722146%FDgnoyr5&amp;amp;QO_fu146_anzr=N8vn6v10%FDf722146%FDgnoyr5" target="_blank" rel="noopener"&gt;Aviation Support Tables – Master Coordinate data&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;When you download this file, you can choose the fields to be included in it. For this example, I only used AirportID, Airport, AirportName, AirportCityName and AirportStateCode.&lt;/LI&gt;
&lt;LI&gt;This Airport data will be loaded directly to a table in KQL.&lt;/LI&gt;
&lt;LI&gt;This file does not necessarily need to be unzipped.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.bts.gov/browse-statistical-products-and-data/bts-publications/airline-service-quality-performance-234-time" target="_blank" rel="noopener"&gt;Airline Service Quality Performance 234 (On-Time performance data)&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;For this blog, I only used the “April 2024” file from this link.&lt;/LI&gt;
&lt;LI&gt;This data will be accessed using a Lakehouse shortcut.&lt;/LI&gt;
&lt;LI&gt;Unzip this file to a local folder and change the extension from “.asc” to “.psv” because this is a pipe-separated file.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In order to use these downloaded files, I uploaded them to the “Files” section of the Lakehouse in my Fabric Workspace. If you do not have a Lakehouse in your workspace, first, navigate to your workspace and select “New” -&amp;gt; “More Options” and choose “Lakehouse” from the Data Engineering workloads. Give your new Lakehouse a name and click “Create”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you have a Lakehouse, you can upload the files by clicking on the Lakehouse to bring up the Lakehouse Explorer. First, in the Lakehouse Explorer, click the three dots next to “Files” and select “New subfolder” and create a folder for “Flights”. Next, click the three dots next to the “Flights” sub-folder and select “Upload” from the drop-down menu and choose the on-time performance file. Confirm that the file is uploaded to files by refreshing the page.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, an Eventhouse will be used to host the KQL Cluster where you will ingest the data for analysis. If you do not have an Eventhouse in your workspace, select “New” -&amp;gt; “More Options” and choose “Eventhouse” from “Real-Time Intelligence” workloads. Give your new Eventhouse a name and click “Create”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, we will use the &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/tools/kusto-explorer" target="_blank" rel="noopener"&gt;Kusto.Explorer&lt;/A&gt; application (available only for Windows) to visualize the graph. This is a one-click deployment application, so it is possible that it will run an application update when you start it up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Ingest Data to KQL Database&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When the Eventhouse was created, a default KQL database with the same name was created. To get data into the database, click the three dots next to the database name, select “Get Data” -&amp;gt; “Local File”. In the dialog box that pops up, in the “Select or create a destination table”, click the “New table” and give the table a name, in this case it will be “airports”. Once you have a valid table name, the dialog will update to drag or browse for the file to load.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Note: You can upload files in a compressed file format if it is smaller than 1GB.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click “Next” to inspect the data for import. For the airports data, you will need to change the “Format” to CSV and enable the option for “First row is column header”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click “Finish” to load the file to the KQL table.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The airport data should now be loaded into the table, and you can query the table to view the results.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a sample query to verify that the data was loaded:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="basic"&gt;airports
| take 100;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the On-Time performance data, we will not ingest it into KQL. Instead, we will create a shortcut to the files in the Lakehouse storage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Back in the KQL Database explorer, at the top, click on the “+ New -&amp;gt; OneLake shortcut” menu item.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the dialog that comes up, choose “Microsoft OneLake” and in the “Select a data source type”, choose the Lakehouse where the data was uploaded earlier, and click “Next”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the tree view of the OneLake populates the Tables and Files, expand “Files”, select the subfolder that was created when uploading the On-Time data, and click “Create” to complete the shortcut creation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the shortcut is created, you can view that data by clicking the “Explore your data” and running the following query to validate your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="basic"&gt;external_table(‘flights’)
| count;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;Note: When accessing the shortcut data, use the “external_table” and the name of the shortcut that was created. You cannot change the shortcut name.&lt;/EM&gt;&lt;/P&gt;
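&lt;P&gt;Because the pipe-separated file has no header row, the shortcut exposes generic column names (Column1, Column2, and so on). Before building the graph query it can help to peek at a few rows; a quick sketch, assuming the shortcut is named “flights”:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="basic"&gt;external_table('flights')
| take 5&lt;/LI-CODE&gt;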
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Query and Visualize with Kusto.Explorer&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that the data is connected to an Eventhouse database, we want to start to do analytics on this data. Fabric does have a way to run KQL Queries directly, but the expectation is that the results of the query will be a table. The only way to show the graph visualization is to use the &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/tools/kusto-explorer" target="_blank" rel="noopener"&gt;Kusto.Explorer&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To connect to the KQL database, you need to get the URI of the cluster from Fabric. Navigating to the KQL Database in Fabric, there is a panel that includes the “Database details”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the “Copy URI” to the right of the Query URI will copy the cluster URI to the clipboard.&lt;/P&gt;
&lt;P&gt;In the Kusto.Explorer application, right click the “Connections” and select “Add Connection”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the popup, paste the Query URI into the “Cluster connection” textbox, replacing the text that is there. You can optionally give the connection an alias rather than using the URI. Finally, I chose to use AAD for security. You can choose whatever is appropriate for your client access.&lt;/P&gt;
&lt;P&gt;At this point, we can open a “New Tab” (Home menu) and type in a query like the one we used above.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="basic"&gt;let nodes = airports;
let edges = external_table('flights')
| project origin = Column7, dest = Column8, flight = strcat(Column1, Column2), carrier = Column1;
edges
| make-graph origin --&amp;gt; dest with nodes on AIRPORT&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Note: You may need to modify the table names (airports, flights) depending on the shortcut or table name you used when loading the data. These values are case-sensitive.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The points of interest in our graph will be the airports (nodes) and the connections (edges) will be the individual flights that were delayed. I am using the “make-graph” extension in KQL to make a graph of edges from origin to destination, using the three-character airport code as the link.&lt;/P&gt;
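&lt;P&gt;Beyond visualizing, the same graph can be queried directly with the “graph-match” operator. Here is a small sketch, assuming the same column names as above, that lists a few flights out of Atlanta:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="basic"&gt;let nodes = airports;
let edges = external_table('flights')
| project origin = Column7, dest = Column8, flight = strcat(Column1, Column2), carrier = Column1;
edges
| make-graph origin --&amp;gt; dest with nodes on AIRPORT
| graph-match (a)-[f]-&amp;gt;(b)
    where a.AIRPORT == 'ATL'
    project from_airport = a.AIRPORT, to_airport = b.AIRPORT, carrier = f.carrier
| take 10&lt;/LI-CODE&gt;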
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Visualize with “make-graph”&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When this query is run, if the last line of the query is “make-graph”, &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/tools/kusto-explorer" target="_blank" rel="noopener"&gt;Kusto.Explorer&lt;/A&gt; will automatically pop up a new window titled “Chart” to view the data. In the image below, I chose to change the visualization to a dark theme and then colored the edges based on the “carrier” column of the flight data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Note: I have zoomed in on the cluster of interest.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If I drag a few of the nodes around, I can start to see there are some nodes (airports) with a lot of orange connections. If I click on an orange link, I quickly learn the orange lines are Delta Flights and the three nodes I pulled out in the image below are Atlanta, Minneapolis, and Detroit.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I started with tables of text-based data and ended with a nice “network” visualization of my flights data. The power of graph visualization, seeing the relationships in my data rather than just reading tables, is invaluable.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, I am excited to start to explore visualizations of the data for supply chains and product recommendations.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Aug 2024 19:15:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/visualizing-data-as-graphs-with-fabric-and-kql/ba-p/4223689</guid>
      <dc:creator>BrianSherwin</dc:creator>
      <dc:date>2024-08-20T19:15:48Z</dc:date>
    </item>
    <item>
      <title>ADX Web UI updates - July 2024</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/adx-web-ui-updates-july-2023/ba-p/4218628</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Welcome to the July 2024 update. We are excited to announce new features and improvements in ADX web UI.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Continue reading to learn more about:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Copy query with syntax coloring and KQL IntelliSense improvements&lt;/LI&gt;
&lt;LI&gt;Ad-hoc visual data exploration&lt;/LI&gt;
&lt;LI&gt;Dashboards real time refresh rate&lt;/LI&gt;
&lt;LI&gt;Enhanced data interaction for dashboard tiles&lt;/LI&gt;
&lt;LI&gt;Resize and move dashboard tiles using keyboard only&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Copy query with syntax coloring and KQL IntelliSense improvements &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’ve enhanced the experience of sharing queries with your team by fixing a long-standing bug that impacted syntax colorization when copying and pasting.&lt;/P&gt;
&lt;P&gt;Whether you use Ctrl+C/Ctrl+V or copy the query directly from the UI, the syntax colorization now persists when pasting into Outlook.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&amp;nbsp;In Outlook, make sure to select the “Keep source formatting” option when pasting&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This improvement makes the query easier to read and understand, ensuring smoother collaboration and communication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additionally, we’re pleased to introduce several incremental improvements for KQL writers, focusing on &lt;STRONG&gt;enhanced Intellisense support&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;The Intellisense list now provides relevant suggestions without the need to add a space, and it filters results based on user input, excluding function parameter names for more accurate matches. Additionally, quotes and brackets are automatically closed, streamlining the writing process.&lt;/P&gt;
&lt;P&gt;Completion items now prioritize displaying columns at the top, and the Intellisense list is sorted alphabetically for easier navigation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Introducing ad-hoc visual Data Exploration feature&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Our new "Data Exploration" feature allows you to dive deeper into the data on any dashboard, extending your exploration beyond the displayed tiles to uncover new insights. This user-friendly, form-like interface lets you add filters, create aggregations, and switch visualization types without writing queries. Now, you can explore data ad-hoc, leveraging existing tiles to start your journey and expand your data view.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Read more: &lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Announcement:&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ad-hoc-visual-data-exploration-feature/ba-p/4204266" target="_blank" rel="noopener"&gt;Ad-Hoc Visual Data Exploration Feature - Microsoft Community Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Documentation: &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/dashboard-explore-data" target="_blank" rel="noopener"&gt;Explore data in dashboard tiles (preview) - Azure Data Explorer | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Dashboards real time refresh rate&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We are excited to announce an enhancement to our dashboard auto refresh feature, now supporting &lt;STRONG&gt;continuous&lt;/STRONG&gt; and &lt;STRONG&gt;10 seconds&lt;/STRONG&gt; refresh rates, in addition to the existing options.&lt;/P&gt;
&lt;P&gt;This upgrade, addressing a popular customer request, allows both editors and viewers to set near real-time and real-time data updates, ensuring your dashboards display the most current information with minimal delay. Experience faster data refresh and make more timely decisions with our improved dashboard capabilities.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As the dashboard author you can enable the Auto refresh setting and set a minimum time interval, to prevent users from setting an auto refresh interval smaller than the provided value. &lt;BR /&gt;Note that the Continuous option should be used with caution. The data is refreshed every second or after the previous refresh completes if it takes more than 1 second.&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enhanced Data Interaction for Dashboard Tiles&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We are excited to announce new capabilities that enhance your &lt;STRONG&gt;interaction with data&lt;/STRONG&gt; presented visually in dashboard tiles, particularly when multiple data series are involved.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can interact with the data by selecting specific items from the legend using the &lt;STRONG&gt;mouse&lt;/STRONG&gt;, using &lt;STRONG&gt;Ctrl&lt;/STRONG&gt; to add or remove selections, or &lt;STRONG&gt;holding Shift&lt;/STRONG&gt; to select a range.&lt;/P&gt;
&lt;P&gt;The &lt;STRONG&gt;Search&lt;/STRONG&gt; button helps you quickly filter items, while the &lt;STRONG&gt;Invert&lt;/STRONG&gt; button allows you to reverse your selections.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Navigate&lt;/STRONG&gt; through your selections with ease using the Up and Down arrows to refine your data view.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&amp;nbsp;Note that users with edit rights on a dashboard can customize the legend location in their tiles, improving readability and data interpretation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Resize dashboard tiles using keyboard only&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited to introduce a new accessibility feature that allows users to resize dashboard tiles using only the keyboard.&lt;/P&gt;
&lt;P&gt;By pressing the Tab key, you can focus on a tile, and then use the arrow keys to move it.&lt;/P&gt;
&lt;P&gt;To resize, hold the Shift key and use the arrow keys: right to increase width, left to decrease width, down to increase height, and up to decrease height.&lt;/P&gt;
&lt;P&gt;This functionality mirrors the ease of moving and resizing with a mouse, enhancing the accessibility and usability of ADX web UI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Data Explorer Web UI team is looking forward to your feedback at &lt;A href="mailto:KustoWebExpFeedback@service.microsoft.com" target="_blank" rel="noopener"&gt;KustoWebExpFeedback@service.microsoft.com&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;You’re also welcome to add more ideas and vote for them here - &lt;A href="https://aka.ms/adx.ideas" target="_blank" rel="noopener"&gt;https://aka.ms/adx.ideas&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Read more:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ADX Web May updates – &lt;A href="https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/adx-web-updates-may-2024/ba-p/4163410" target="_blank" rel="noopener"&gt;ADX Web updates – May 2024 - Microsoft Community Hub&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 14 Aug 2024 13:51:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/adx-web-ui-updates-july-2023/ba-p/4218628</guid>
      <dc:creator>Michal_Bar</dc:creator>
      <dc:date>2024-08-14T13:51:32Z</dc:date>
    </item>
    <item>
      <title>Store images in Kusto and visualize them with Power BI or Azure Data Explorer Dashboards</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/store-images-in-kusto-and-visualize-them-with-power-bi-or-azure/ba-p/4205340</link>
      <description>&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H1 class="heading-element" dir="auto" data-sourcepos="1:1-1:41"&gt;How to Visualize Images Stored in Kusto&lt;/H1&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="3:1-3:15"&gt;Introduction&lt;/H2&gt;
&lt;A id="user-content-introduction" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#introduction" target="_blank" rel="noopener" aria-label="Permalink: Introduction"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="4:1-6:180"&gt;Kusto is a fast and scalable database designed to ingest, store, and analyze large volumes of structured and semi-structured data. For non-structured data like images, Azure Storage is typically the best choice. Databases can reference image data on storage via a URL, meaning images are not directly stored in Kusto. However, there are scenarios where storing image data in Kusto is beneficial. In this blog post, we will explore when it makes sense to store images in Kusto, how to store them, and how to visualize this data using Azure Data Explorer dashboards or Power BI.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="8:1-8:29"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="8:1-8:29"&gt;Why store images in Kusto?&lt;/H2&gt;
&lt;A id="user-content-why-store-images-in-kusto" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#why-store-images-in-kusto" target="_blank" rel="noopener" aria-label="Permalink: Why store images in Kusto?"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="9:1-12:96"&gt;Although Kusto doesn’t support binary data types, there are still compelling reasons to store images in Azure Data Explorer. For dashboards and reports that require images, visualization tools might not support secure access to external storage. By leveraging identities and network segregation via managed private endpoints, storing all data in one location simplifies both access and security. However, it’s important to note that Kusto is not the best technology for storing large-scale images.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="14:1-14:31"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="14:1-14:31"&gt;How to store images in Kusto&lt;/H2&gt;
&lt;A id="user-content-how-to-store-images-in-kusto" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#how-to-store-images-in-kusto" target="_blank" rel="noopener" aria-label="Permalink: How to store images in Kusto"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="15:1-17:154"&gt;Kusto does not support binary data types, so images must be encoded in base64. This encoding converts the data into a non-human-readable string of 64 English characters. When storing an image in Kusto using base64, it is saved as a string.The default size limit for a string in Kusto is 1 MB (see Kusto Documentation for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/azure/data-explorer/kusto/query/scalar-data-types/string" target="_blank" rel="nofollow noopener"&gt;string datatype&lt;/A&gt;.) By default, all columns in Kusto are indexed. For columns storing images, you should disable indexing and may need to increase the default size limit. Below is an example of creating an image table, disabling indexing, and increasing the string size limit to 2 MB using the the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;BigObject&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;encoding type:&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;// create image table
.create table image (file_name:string, img_original_base64 : string )

// This policy disables the index of the image column and overrides MaxValueSize property in the encoding Policy to 2 MB:
.alter column image.img_original_base64 policy encoding type='BigObject'
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="27:1-27:236"&gt;The maximum size for a string in Kusto is 32 MB. For more details, refer to the documentation on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/alter-encoding-policy#encoding-policy-types" target="_blank" rel="nofollow noopener"&gt;encoding policy&lt;/A&gt;.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="29:1-29:25"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="29:1-29:25"&gt;Ingest images to Kusto&lt;/H2&gt;
&lt;A id="user-content-ingest-images-to-kusto" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#ingest-images-to-kusto" target="_blank" rel="noopener" aria-label="Permalink: Ingest images to Kusto"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="30:1-30:312"&gt;You can use all available ingestion methods for the Kusto database, depending on the deployment (PaaS or SaaS). Ensure that the image data is converted to a binary string and encoded to base64, as described in the previous section. You can find a Python example in the Gist referenced at the end of this article.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="32:1-32:51"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="32:1-32:51"&gt;Display images in Azure Data Explorer Dashboards&lt;/H2&gt;
&lt;A id="user-content-display-images-in-azure-data-explorer-dashboards" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#display-images-in-azure-data-explorer-dashboards" target="_blank" rel="noopener" aria-label="Permalink: Display images in Azure Data Explorer Dashboards"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="33:1-33:271"&gt;Once you've ingested image data into a Kusto table, you might want to visualize it using Azure Data Explorer Dashboards. Markdown visuals are an effective way to display images. Typically, images are displayed from a storage location using the following markdown pattern:&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;![alt text](path/to/image.png)
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="37:1-37:188"&gt;For images stored in Kusto, the process is similar. Instead of linking to a storage location, you use the field containing the base64-encoded string of the image. Here's how you can do it:&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;| extend image=strcat("![image](data&amp;amp;colon;image/png;base64,", img_original_base64, ")" )
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="42:1-43:286"&gt;This method embeds the image directly into the dashboard using the base64-encoded string from your data. If you have multiple images to display you can make use of a function generating a markdown from a Kusto query. The function logic has been shared on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://stackoverflow.com/questions/73783992/how-to-generate-a-markdown-from-a-kusto-adx-query-result" target="_blank" rel="nofollow noopener"&gt;stackoverflow&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;by Daniel Dror:&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;.create-or-alter function with (folder = "Gold layer", docstring = "function converts a table to markdown", skipvalidation = "true") table_to_markdown(t:(*)) {
let schema = t | getschema;
let headers = schema | project ColumnName | summarize make_list(ColumnName) | extend String = strcat('| ', strcat_array(list_ColumnName, ' | '), ' |') | project String, Order=1;
let upper_divider = schema | project ColumnName, Sep = '---' | summarize Cols=make_list(Sep) | extend String = strcat('| ', strcat_array(Cols, ' | '), ' |') | project String, Order=2;
let data = t | extend Cols=pack_array(*) | extend String = strcat('| ', strcat_array(Cols, ' | '), ' |') | project String, Order=3;
headers 
| union upper_divider
| union data
| order by Order asc 
| summarize Rows=make_list(String) 
| project array_strcat(Rows, '\r\n')
} 
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="59:1-59:171"&gt;With invoking this function, you can easily display a table of images in Azure Data Explorer dashboards. The following query is used in combination with a markdown visual:&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;image
| project file_name, img_original_base64
| extend ingestion_time=ingestion_time() 
| summarize arg_max(ingestion_time, *) by file_name // remove duplicates
| extend image=strcat("![image](data&amp;amp;colon;image/png;base64,", img_original_base64, ")" )
| project file_name, image 
| order by file_name desc 
| invoke table_to_markdown() 
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="70:1-70:84"&gt;This is an example how your data can be visualized using the markdown visualization:&lt;/P&gt;
&lt;DIV id="tinyMceEditorHauke_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="75:1-75:29"&gt;Display images in Power BI&lt;/H2&gt;
&lt;A id="user-content-display-images-in-power-bi" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#display-images-in-power-bi" target="_blank" rel="noopener" aria-label="Permalink: Display images in Power BI"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="76:1-76:518"&gt;Power BI allows the integration of images from a database, a process that is well-documented in a Guy in a Cube YouTube video, which is referenced at the end of this article. By default, Power BI supports image URLs, but what if you want to display images stored as strings? Given Power BI's limitation of a 32k string size, a creative workaround is necessary. This involves splitting the strings and then reconstructing them using DAX logic, a technique thoroughly explained in the aforementioned Guy in a Cube video.&lt;/P&gt;
&lt;P data-sourcepos="78:1-78:295"&gt;To handle large image strings that exceed Power BI's capacity, a split string function in Kusto can be employed. This function divides the image string representation into multiple rows, which is essential for visualization tools that have string size restrictions. Here's how the function looks&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;//helper function that splits large image string representation into several rows
//this is needed for visualization tools with limitation on string sizes
.create-or-alter function with (folder = "Gold layer", docstring="split image string representation to several rows if PowerBI string size limitation is hit", skipvalidation = "true") image_report ()
{
let max_length=32766; //maximum PowerBI string length
image
| project file_name, img_original_base64
| extend ingestion_time=ingestion_time() 
| summarize arg_max(ingestion_time, *) by file_name // remove duplicates
| extend parts = range(0, strlen(img_original_base64) - 1, max_length)
| mv-expand parts // rows needed for each substring (1 if length &amp;lt; max_length)
| extend img_original_base64_part = substring(img_original_base64, toint(parts), max_length), order=toint(parts)/max_length
| project file_name, img_original_base64_part, order
}
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="97:1-97:95"&gt;Following the split, DAX logic is used to concatenate the substrings back into the final image:&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;image = IF (HASONEVALUE(Images[file_name]), "data&amp;amp;colon;image/png;base64, " &amp;amp;CONCATENATEX(Images, 'Images'[img_original_base64_part],,Images[order],ASC) )
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container position-absolute right-0 top-0"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P data-sourcepos="102:1-102:190"&gt;This approach ensures that even with Power BI's string size limitations, images can be effectively displayed by leveraging Kusto's split string function and DAX's concatenation capabilities.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="105:1-105:13"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="105:1-105:13"&gt;Conclusion&lt;/H2&gt;
&lt;A id="user-content-conclusion" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#conclusion" target="_blank" rel="noopener" aria-label="Permalink: Conclusion"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P data-sourcepos="106:1-106:526"&gt;The integration of images into Kusto and their visualization through Power BI or Azure Data Explorer Dashboards offers a unique approach to managing and displaying non-structured data. While Kusto is primarily designed for structured and semi-structured data, it can accommodate images through base64 encoding, albeit with some limitations due to the absence of binary data types. This method is particularly useful for dashboards and reports that require secure access to images without relying on external storage solutions.&lt;/P&gt;
&lt;P data-sourcepos="108:1-108:330"&gt;The process involves encoding images into a base64 string, ingesting them into Kusto, and then utilizing visualization tools like Power BI to display the images. This approach ensures that all data, including images, can be securely accessed and managed in one centralized location, simplifying both access and security protocols.&lt;/P&gt;
&lt;P data-sourcepos="110:1-110:413"&gt;However, it's crucial to recognize that Kusto is not optimized for storing large-scale images, and this method should be reserved for scenarios where the benefits outweigh the limitations. By following the guidelines and techniques outlined in this blog post, users can effectively store and visualize images within Kusto, enhancing their data analysis and reporting capabilities in a secure and efficient manner.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="112:1-112:13"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="heading-element" dir="auto" data-sourcepos="112:1-112:13"&gt;References&lt;/H2&gt;
&lt;A id="user-content-references" class="anchor" href="https://github.com/hau-mal/timeseriesAnalytics/edit/main/kusto-images.md#references" target="_blank" rel="noopener" aria-label="Permalink: References"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;UL dir="auto" data-sourcepos="113:1-120:1"&gt;
&lt;LI data-sourcepos="113:1-113:142"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/alter-encoding-policy#encoding-policy-types" target="_blank" rel="nofollow noopener"&gt;Column encoding policy&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="114:1-114:167"&gt;&lt;A href="https://stackoverflow.com/questions/73783992/how-to-generate-a-markdown-from-a-kusto-adx-query-result" target="_blank" rel="nofollow noopener"&gt;Generate a markdown from a kusto query result, stackoverflow&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="115:1-115:145"&gt;&lt;A href="https://learn.microsoft.com/power-bi/create-reports/power-bi-images-tables" target="_blank" rel="nofollow noopener"&gt;Power BI Display images in a table, matrix, or slicer in a report&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="116:1-116:100"&gt;&lt;A href="https://www.youtube.com/watch?v=Q82yzcfkqAc" target="_blank" rel="nofollow noopener"&gt;Using Images from a Data Base in Power BI, You Tube&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-sourcepos="117:1-120:1"&gt;&lt;A href="https://gist.github.com/hau-mal/fb1ba9c59b666f2fcc1f2631a021e85c" target="_blank" rel="noopener"&gt;Gist with code examples&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 12 Aug 2024 06:45:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/store-images-in-kusto-and-visualize-them-with-power-bi-or-azure/ba-p/4205340</guid>
      <dc:creator>Hauke</dc:creator>
      <dc:date>2024-08-12T06:45:28Z</dc:date>
    </item>
    <item>
      <title>Ad-Hoc Visual Data Exploration Feature</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ad-hoc-visual-data-exploration-feature/ba-p/4204266</link>
      <description>&lt;H1&gt;Ad-Hoc Data Exploration Feature&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited to introduce the new Data Exploration feature, designed to enhance your ability to delve deeper into the data presented on any Dashboard.&lt;/P&gt;
&lt;P&gt;If the information you're seeking isn't readily available on the dashboard, this feature allows you to extend your exploration beyond the data displayed in the tiles, potentially &lt;STRONG&gt;uncovering new insights&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Directly from a dashboard, you can refine your exploration using a user-friendly, form-like interface. This intuitive and dynamic experience is tailored for explorers seeking insights from high volumes of data in near real time.&lt;/P&gt;
&lt;P&gt;You can add filters, create aggregations, and switch visualization types without writing queries to easily uncover insights.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this new feature, you are no longer bound by the limitations of pre-defined dashboards, nor are you required to master KQL (Kusto Query Language). As independent explorers, you have the freedom for ad-hoc exploration, leveraging existing tiles to kickstart your journey.&lt;/P&gt;
&lt;P&gt;Learn more about this feature&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/dashboard-explore-data" target="_blank" rel="noopener"&gt;Explore data in dashboard tiles (preview) - Azure Data Explorer | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Data Explorer Web UI team is looking forward to your feedback in &lt;A href="mailto:KustoWebExpFeedback@service.microsoft.com" target="_blank" rel="noopener"&gt;KustoWebExpFeedback@service.microsoft.com&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;You’re also welcome to add more ideas and vote for them here - &lt;A href="https://aka.ms/adx.ideas" target="_blank" rel="noopener"&gt;https://aka.ms/adx.ideas&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 12 Aug 2024 18:25:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ad-hoc-visual-data-exploration-feature/ba-p/4204266</guid>
      <dc:creator>Michal_Bar</dc:creator>
      <dc:date>2024-08-12T18:25:16Z</dc:date>
    </item>
    <item>
      <title>Deprecation of Virtual Network Injection for Azure Data Explorer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/deprecation-of-virtual-network-injection-for-azure-data-explorer/ba-p/4198192</link>
      <description>&lt;P&gt;&lt;SPAN&gt;We are announcing the deprecation of the feature of &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/security-network-overview#virtual-network-injection" target="_blank"&gt;Virtual Network Injection&lt;/A&gt; for Azure Data Explorer. This feature allows customers to inject their Azure Data Explorer cluster into their own virtual network and control the inbound and outbound network traffic. However, this feature has&lt;/SPAN&gt; &lt;SPAN&gt;limitations and challenges, such as:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Customers face significant maintenance work, such as updating firewall lists of FQDNs or using public IP addresses in a restricted and secured environment.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Customers are responsible for ensuring that the intra-cluster communication is working.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;It requires a dedicated subnet for each cluster, which can lead to subnet exhaustion and increased management overhead.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;It does not support cross-region or cross-subscription scenarios, which can limit the scalability and flexibility of the data platform.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Required actions&lt;/H1&gt;
&lt;P&gt;&lt;SPAN&gt;As a result, we are deprecating this feature and recommending that customers move as soon as possible to a &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/security-network-overview#private-endpoint" target="_blank"&gt;private endpoint&lt;/A&gt; based network security architecture. A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. A private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. With private endpoints, you can:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Connect securely to Azure Data Explorer from your virtual network or from on-premises networks via VPN or ExpressRoute.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Access Azure Data Explorer from different regions or subscriptions without any public internet exposure.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Use all the Azure Data Explorer features without any limitations or trade-offs.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Reduce the network complexity and management overhead by using a single subnet for multiple clusters and services.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;To help you with the migration, we have created a &lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/security-network-migrate-vnet-to-private-endpoint?tabs=portal" target="_blank"&gt;migration process&lt;/A&gt; with close to zero downtime.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;We understand that this migration may require effort and coordination from your side, and we are here to support you along the way. We have scheduled office hours to answer your questions and provide guidance on the migration process. You can register for the office hours using this &lt;A href="https://aka.ms/adx.security.vnet.deprecation.form" target="_blank"&gt;form&lt;/A&gt;. You can also reach out to us via email at &lt;A href="mailto:ADXVnetDeprecation@microsoft.com" target="_blank"&gt;ADXVnetDeprecation@microsoft.com&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H1&gt;Timelines&lt;/H1&gt;
&lt;P&gt;&lt;SPAN&gt;Please note the following important dates and actions regarding the migration:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Effective immediately, no new customers will be able to create virtual network injected clusters. Existing customers can continue to use their clusters until the migration deadline.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Starting from February 1, 2025, all running virtual network injected clusters will be stopped. Customers who have not migrated by then will not be able to start them until they complete the migration process.&lt;/LI&gt;
&lt;LI&gt;To avoid any disruption, we strongly recommend that you migrate your clusters as soon as possible. You can follow the migration steps outlined in this &lt;A style="background-color: #ffffff;" href="https://learn.microsoft.com/en-us/azure/data-explorer/security-network-migrate-vnet-to-private-endpoint?tabs=portal" target="_blank"&gt;document&lt;/A&gt;. If you encounter any issues or need assistance, please contact us at ADXVnetMigration@microsoft.com or join our office hours.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;We appreciate your understanding and cooperation as we deprecate the Virtual Network Injection feature and move to a more secure, scalable, and feature-rich network security architecture for Azure Data Explorer.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jul 2024 18:36:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/deprecation-of-virtual-network-injection-for-azure-data-explorer/ba-p/4198192</guid>
      <dc:creator>cosh23</dc:creator>
      <dc:date>2024-07-22T18:36:11Z</dc:date>
    </item>
  </channel>
</rss>

