<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Data Explorer topics</title>
    <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/bd-p/Kusto</link>
    <description>Azure Data Explorer topics</description>
    <pubDate>Sun, 05 Apr 2026 00:56:39 GMT</pubDate>
    <dc:creator>Kusto</dc:creator>
    <dc:date>2026-04-05T00:56:39Z</dc:date>
    <item>
      <title>Kusto: summarize consecutive segments of the same value per batch</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/kusto-summarize-consecutive-segments-of-the-same-value-per-batch/m-p/4490182#M714</link>
      <description>&lt;P&gt;Problem statement (Azure Data Explorer / Kusto)&lt;/P&gt;&lt;P&gt;I have a table in Azure Data Explorer (Kusto) with the following columns:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;batch_number (string)&lt;/LI&gt;&lt;LI&gt;team (string)&lt;/LI&gt;&lt;LI&gt;datetime (datetime)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Each row represents work done by a team on a batch at a given time.&lt;/P&gt;&lt;P&gt;The data is chronological per batch, but teams can change over time and may come back later.&lt;/P&gt;&lt;P&gt;Example data (ordered by batch_number, then datetime):&lt;/P&gt;&lt;P&gt;batch_number | datetime | team&lt;BR /&gt;001 | 2024-01-01 08:00:00 | A&lt;BR /&gt;001 | 2024-01-01 08:01:00 | A&lt;BR /&gt;001 | 2024-01-01 08:02:00 | A&lt;BR /&gt;001 | 2024-01-01 08:03:00 | B&lt;BR /&gt;001 | 2024-01-01 08:04:00 | B&lt;BR /&gt;001 | 2024-01-01 08:05:00 | A&lt;BR /&gt;001 | 2024-01-01 08:06:00 | A&lt;BR /&gt;002 | 2024-01-01 08:00:00 | A&lt;BR /&gt;002 | 2024-01-01 08:01:00 | A&lt;BR /&gt;002 | 2024-01-01 08:02:00 | B&lt;BR /&gt;002 | 2024-01-01 08:03:00 | C&lt;/P&gt;&lt;P&gt;Goal&lt;/P&gt;&lt;P&gt;For each batch, I want to group consecutive rows where the team stays the same and compute:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;start_time = min(datetime)&lt;/LI&gt;&lt;LI&gt;end_time = max(datetime)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The result must preserve team sequences, not just distinct teams.&lt;/P&gt;&lt;P&gt;Expected result:&lt;/P&gt;&lt;P&gt;batch_number | team | start_time | end_time&lt;BR /&gt;001 | A | 2024-01-01 08:00:00 | 2024-01-01 08:02:00&lt;BR /&gt;001 | B | 2024-01-01 08:03:00 | 2024-01-01 08:04:00&lt;BR /&gt;001 | A | 2024-01-01 08:05:00 | 2024-01-01 08:06:00&lt;BR /&gt;002 | A | 2024-01-01 08:00:00 | 2024-01-01 08:01:00&lt;BR /&gt;002 | B | 2024-01-01 08:02:00 | 2024-01-01 08:02:00&lt;BR /&gt;002 | C | 2024-01-01 08:03:00 | 2024-01-01 08:03:00&lt;/P&gt;&lt;P&gt;Notes&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Team A appears twice for batch 001 because it appears in two 
separate consecutive segments.&lt;/LI&gt;&lt;LI&gt;The grouping must be done per batch, not across batches.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Important constraints&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;A simple summarize by batch_number and team is not correct because it merges non-consecutive segments.&lt;/LI&gt;&lt;LI&gt;I can easily implement this logic in pandas using a cumulative sequence identifier, but I have not found a reliable equivalent in Kusto.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Question&lt;/P&gt;&lt;P&gt;What is the correct and reliable Kusto query to compute these consecutive team segments per batch?&lt;/P&gt;</description>
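One reliable Kusto pattern, sketched here as a self-contained query over the example rows from the question, is to mark every row where the batch or team changes and turn that marker into a segment id with row_cumsum(); this plays the same role as the pandas cumulative sequence identifier mentioned above:

```kusto
datatable(batch_number: string, ['datetime']: datetime, team: string)
[
    "001", datetime(2024-01-01 08:00:00), "A",
    "001", datetime(2024-01-01 08:01:00), "A",
    "001", datetime(2024-01-01 08:02:00), "A",
    "001", datetime(2024-01-01 08:03:00), "B",
    "001", datetime(2024-01-01 08:04:00), "B",
    "001", datetime(2024-01-01 08:05:00), "A",
    "001", datetime(2024-01-01 08:06:00), "A",
    "002", datetime(2024-01-01 08:00:00), "A",
    "002", datetime(2024-01-01 08:01:00), "A",
    "002", datetime(2024-01-01 08:02:00), "B",
    "002", datetime(2024-01-01 08:03:00), "C"
]
// sort serializes the rows so prev() and row_cumsum() are allowed
| sort by batch_number asc, ['datetime'] asc
// a row starts a new segment when the batch or the team changes
| extend new_seg = iff(prev(batch_number) != batch_number or prev(team) != team, 1, 0)
// running sum of segment starts gives every consecutive run a unique id
| extend seg_id = row_cumsum(new_seg)
| summarize start_time = min(['datetime']), end_time = max(['datetime']) by batch_number, team, seg_id
| sort by batch_number asc, start_time asc
| project-away seg_id
```

Because each consecutive run carries a distinct seg_id, the summarize cannot merge the two separate A segments of batch 001, which is exactly the behavior the question asks for.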
      <pubDate>Wed, 28 Jan 2026 19:12:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/kusto-summarize-consecutive-segments-of-the-same-value-per-batch/m-p/4490182#M714</guid>
      <dc:creator>Deleted</dc:creator>
      <dc:date>2026-01-28T19:12:49Z</dc:date>
    </item>
    <item>
      <title>How to use existing cache for external table when acceleration in progress</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/how-to-use-existing-cache-for-external-table-when-acceleration/m-p/4438027#M707</link>
      <description>&lt;P&gt;I enabled query acceleration for an external table that binds to a 1 TB Delta table on ADLS, but the acceleration process needs about 1.5 hours to complete.&lt;/P&gt;
&lt;P&gt;I found that while acceleration is in progress, querying the table is considerably slower than after acceleration has completed.&lt;/P&gt;
&lt;P&gt;How can I keep using the existing acceleration cache/index while a new one is being built, and have Kusto switch to the new index once acceleration completes?&lt;/P&gt;</description>
      <pubDate>Wed, 30 Jul 2025 03:23:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/how-to-use-existing-cache-for-external-table-when-acceleration/m-p/4438027#M707</guid>
      <dc:creator>kunpengfan</dc:creator>
      <dc:date>2025-07-30T03:23:49Z</dc:date>
    </item>
    <item>
      <title>timechart legend in Azure Data Explorer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/timechart-legend-in-azure-data-explorer/m-p/4437981#M706</link>
      <description>&lt;P&gt;Hi, I'm creating a timechart dashboard in Azure Data Explorer and facing an issue with the legend labels. The legend entries have extra prefixes and suffixes, such as "Endpoint" or "Count". How can I remove these and show only the actual value in the legend?&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Tue, 29 Jul 2025 22:43:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/timechart-legend-in-azure-data-explorer/m-p/4437981#M706</guid>
      <dc:creator>panug</dc:creator>
      <dc:date>2025-07-29T22:43:37Z</dc:date>
    </item>
    <item>
      <title>Parameter controls are not showing Display text</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/parameter-controls-are-not-showing-display-text/m-p/4426312#M700</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;After a recent update to the Azure Data Explorer Web UI, the Parameter controls are not displaying correctly. The Display Text for parameters is not shown by default; instead, the raw Value is displayed until the control is clicked, at which point the correct Display Text appears.&lt;/P&gt;&lt;P&gt;Could you please investigate this issue and provide guidance on a resolution?&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;</description>
      <pubDate>Mon, 23 Jun 2025 10:01:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/parameter-controls-are-not-showing-display-text/m-p/4426312#M700</guid>
      <dc:creator>SonPham</dc:creator>
      <dc:date>2025-06-23T10:01:42Z</dc:date>
    </item>
    <item>
      <title>Export to Excel is not working</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/export-to-excel-is-not-working/m-p/4426308#M699</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;After the recent Azure Data Explorer Web UI update, the "Export to Excel" feature is no longer functioning as expected. It still works for simple tables, although it takes longer than before, but it fails for tables containing complex outputs such as Empty, Null, Array [], or JSON data. Clicking the "Export to Excel" option does not produce the expected results.&lt;/P&gt;&lt;P&gt;Could you please investigate this issue and provide guidance on a resolution?&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;</description>
      <pubDate>Mon, 23 Jun 2025 09:49:38 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/export-to-excel-is-not-working/m-p/4426308#M699</guid>
      <dc:creator>SonPham</dc:creator>
      <dc:date>2025-06-23T09:49:38Z</dc:date>
    </item>
    <item>
      <title>KQL Query output limit of 5 lakh rows</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/kql-query-output-limit-of-5-lakh-rows/m-p/4417860#M698</link>
      <description>&lt;P&gt;Hi, I have a Kusto table with more than 500,000 (5 lakh) rows that I want to pull into Power BI. When I run the KQL query it fails with the 500,000-row limit error. If I put "set notruncation" before the query, the error goes away in Power BI Desktop, but it comes back in the Power BI service after applying incremental refresh on that table.&lt;/P&gt;&lt;P&gt;My questions: Will "set notruncation" always work, so that I won't face further errors even for millions of rows? Is this the only limit, or are there other ADX limits that could cause errors with huge data volumes? Or should I instead export the data from the Kusto table to Azure Blob Storage and pull it from Blob Storage into Power BI? Which is the best way to do it?&lt;/P&gt;</description>
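For reference, lifting the truncation limit looks like the sketch below (MyKustoTable and the filter are placeholder names). Note that set notruncation only disables result truncation; other query limits, such as per-query memory, still apply, so for many millions of rows exporting the table to Blob Storage and loading Power BI from there is often the more robust path:

```kusto
// disable the default truncation of query results (500,000 rows)
set notruncation;
MyKustoTable                        // placeholder table name
| where IngestionTime > ago(90d)    // placeholder filter to bound the volume
```

With incremental refresh, each partition query still runs under the service's own settings, which is why the Desktop and service behavior can differ.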
      <pubDate>Tue, 27 May 2025 10:55:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/kql-query-output-limit-of-5-lakh-rows/m-p/4417860#M698</guid>
      <dc:creator>jatinkolhe</dc:creator>
      <dc:date>2025-05-27T10:55:19Z</dc:date>
    </item>
    <item>
      <title>Union and then distinct values of a column</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/union-and-then-distinct-values-of-a-column/m-p/4404295#M695</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;Here is a description of my problem.&lt;/P&gt;&lt;P&gt;I have 3 tables, but I will take the example of 2. In each table, I have a column "common_col". I want to combine the values of these 2 tables into one column and then return the distinct values: that way I can be sure I have the values from each table.&lt;/P&gt;&lt;P&gt;********************Script 1**********************&lt;/P&gt;&lt;P&gt;((table1&lt;BR /&gt;| project common_col&lt;BR /&gt;| distinct common_col)&lt;/P&gt;&lt;P&gt;| union&lt;/P&gt;&lt;P&gt;(table2&lt;BR /&gt;| project common_col&lt;BR /&gt;| distinct common_col))&lt;/P&gt;&lt;P&gt;| distinct common_col&lt;/P&gt;&lt;P&gt;*******************************************************&lt;/P&gt;&lt;P&gt;******************Script 2 ********************************&lt;/P&gt;&lt;P&gt;((table1&lt;BR /&gt;| project common_col&lt;BR /&gt;| distinct common_col)&lt;/P&gt;&lt;P&gt;| union&lt;/P&gt;&lt;P&gt;(mv_pdb_process_stl&lt;BR /&gt;| project common_col&lt;BR /&gt;| distinct common_col))&lt;/P&gt;&lt;P&gt;| summarize by common_col&lt;/P&gt;&lt;P&gt;**********************************************************&lt;/P&gt;&lt;P&gt;I ask about both scripts because, after importing the data into Power BI, I tried this:&lt;/P&gt;&lt;P&gt;Nb_distinct_values = DISTINCTCOUNT('Newtable'[common_col])&lt;/P&gt;&lt;P&gt;Nb_Total_rows = COUNTROWS('common_col')&lt;/P&gt;&lt;P&gt;I don't get the same number, and that is not normal. So there is an issue in my script and I need help please! Thanks!&lt;/P&gt;</description>
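Both scripts are logically equivalent in KQL (distinct and summarize by deduplicate the same way), and they can be collapsed into a single pass:

```kusto
union
    (table1 | project common_col),
    (table2 | project common_col)
| distinct common_col
```

If the KQL output is already distinct but DISTINCTCOUNT and COUNTROWS still disagree in Power BI, one common cause is case sensitivity: KQL's distinct treats "ABC" and "abc" as two different values, while DAX string comparison is case-insensitive and counts them as one.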
      <pubDate>Tue, 15 Apr 2025 07:32:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/union-and-then-distinct-values-of-a-column/m-p/4404295#M695</guid>
      <dc:creator>LeHello</dc:creator>
      <dc:date>2025-04-15T07:32:18Z</dc:date>
    </item>
    <item>
      <title>Azure ADX - UpdatePolicy fails to insert data</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/azure-adx-updatepolicy-fails-to-insert-data/m-p/4399443#M694</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I am facing a challenge with ADX. Please find the problem details below.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Problem statement:&lt;/U&gt;&lt;/STRONG&gt; We are unable to insert result data into a target table from a source table using an &lt;STRONG&gt;update policy&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Description:&lt;/U&gt;&lt;/STRONG&gt; We have written an &lt;STRONG&gt;update policy&lt;/STRONG&gt; on a table. The policy query passes columns of the source table as parameters to an ADX function. The function returns its result in the form of a table, and that result should be inserted into the target table.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Additional details:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;An &lt;STRONG&gt;update policy&lt;/STRONG&gt; automatically inserts data into a target table whenever data is ingested into the source table.&lt;/LI&gt;&lt;LI&gt;It is roughly equivalent to a &lt;STRONG&gt;trigger&lt;/STRONG&gt; in &lt;STRONG&gt;SQL Server&lt;/STRONG&gt; performing an automatic insert into a target table.&lt;/LI&gt;&lt;LI&gt;The policy definition we used:&lt;/LI&gt;&lt;/OL&gt;&lt;PRE&gt;.alter table TargetTable policy update
```
[
  {
    "IsEnabled": true,
    "Source": "SourceTable",
    "Query": "SourceTable | extend Result = G3MS_ClearAlarm(Id, CountryCode, OccuredTime) | project AlarmId = Result.AlarmId, ClearAlarmId = Result.ClearAlarmId, ClearTime = Result.ClearTime",
    "IsTransactional": true,
    "PropagateIngestionProperties": false
  }
]
```&lt;/PRE&gt;&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Error received when executed:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;&lt;P&gt;Error during execution of a policy operation: Request is invalid and cannot be processed: Semantic error: SEM0085: Tabular expression is not expected in the current context.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Any suggestions or thoughts on how to complete this requirement would be very helpful.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 01 Apr 2025 06:37:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/azure-adx-updatepolicy-fails-to-insert-data/m-p/4399443#M694</guid>
      <dc:creator>Saihariharan1988</dc:creator>
      <dc:date>2025-04-01T06:37:26Z</dc:date>
    </item>
    <item>
      <title>ADX data receiving stop after sometime</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/adx-data-receiving-stop-after-sometime/m-p/4389554#M692</link>
      <description>&lt;P&gt;I have a very strange problem. I have an IoT application where devices send data to IoT Hub, which routes it to Event Hub. An Azure Function triggered by the Event Hub then inserts the data into ADX.&lt;/P&gt;&lt;P&gt;I traced an event from the device to the Event Hub: I can see all the data, and I can also see the function getting triggered with no errors. But in ADX the record is empty; there is no data from the event, just the date field which is added explicitly.&lt;/P&gt;&lt;P&gt;Note: after some time I can see data in ADX again (with no change to ADX, not even a service restart).&lt;/P&gt;&lt;P&gt;Does anybody have a clue what the issue could be?&lt;/P&gt;</description>
      <pubDate>Wed, 05 Mar 2025 18:28:24 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/adx-data-receiving-stop-after-sometime/m-p/4389554#M692</guid>
      <dc:creator>darksaturn</dc:creator>
      <dc:date>2025-03-05T18:28:24Z</dc:date>
    </item>
    <item>
      <title>External Table in ADX</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/external-table-in-adx/m-p/4382640#M691</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm trying to create an external table in ADX which uses a Synapse Analytics (SA) database view (called undelivered). The undelivered view itself queries data from a Cosmos analytical store.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;I've created a user-assigned identity&lt;UL&gt;&lt;LI&gt;Added the identity to the ADX cluster, SA and Cosmos&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;Updated the ADX database:&lt;BR /&gt;.alter-merge cluster policy managed_identity[&lt;BR /&gt;&amp;nbsp; &amp;nbsp; {&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; "ObjectId": "a3d7ddcd-d625-4715-be6f-c099c56e1567",&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; "AllowedUsages": "ExternalTable"&lt;BR /&gt;&amp;nbsp; &amp;nbsp; }&lt;BR /&gt;]&lt;/LI&gt;&lt;LI&gt;Created the database users in SA&lt;BR /&gt;&lt;P&gt;-- Create a database user for the ADX Managed Identity&lt;BR /&gt;CREATE USER [adx-synapse-identity] FROM EXTERNAL PROVIDER;&lt;/P&gt;&lt;P&gt;-- Grant read permissions&lt;BR /&gt;ALTER ROLE db_datareader ADD MEMBER [adx-synapse-identity];&lt;BR /&gt;GRANT SELECT ON OBJECT::undelivered TO [adx-synapse-identity];&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;From within SA I can run "SELECT * FROM undelivered" and the correct information is returned&lt;/LI&gt;&lt;LI&gt;But when I come to create the external table in ADX:&lt;BR /&gt;.create-or-alter external table MyExternalTable&lt;BR /&gt;(&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Status: string&lt;BR /&gt;)&lt;BR /&gt;kind=sql&lt;BR /&gt;table=undelivered&lt;BR /&gt;(&lt;BR /&gt;&amp;nbsp; &amp;nbsp; h@'Server=tcp:synapse-xxxxx.sql.azuresynapse.net,1433;Database="Registration";ManagedIdentityClientId=&amp;lt;key&amp;gt;;Authentication=Active Directory Managed Identity;'&lt;BR /&gt;)&lt;BR /&gt;with (&lt;BR /&gt;&amp;nbsp; &amp;nbsp; managed_identity = "&amp;lt;key&amp;gt;"&lt;BR /&gt;)&lt;BR /&gt;I get the error: Managed Identity 'system' is not allowed by the managed_identity policy for usage: ExternalTable&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;So even though I specify the managed identity I want to use, it is still trying to use the system one.&lt;/P&gt;&lt;P&gt;How can I get the external table created with the correct managed identity?&lt;BR /&gt;Any questions, please just ask.&lt;BR /&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Tue, 18 Feb 2025 20:27:55 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/external-table-in-adx/m-p/4382640#M691</guid>
      <dc:creator>JasonWorlin</dc:creator>
      <dc:date>2025-02-18T20:27:55Z</dc:date>
    </item>
    <item>
      <title>Issue with mysql_request Plugin on Dashboard</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/issue-with-mysql-request-plugin-on-dashboard/m-p/4368928#M689</link>
      <description>&lt;P&gt;Today, while using the Azure Data Explorer dashboard, we noticed that the &lt;STRONG&gt;mysql_request&lt;/STRONG&gt; plugin is no longer functioning as expected. Specifically, we are encountering the following error message:&lt;/P&gt;&lt;P&gt;evaluate mysql_request(): the following error(s) occurred while evaluating the output schema: The 'mysql_request' plugin cannot be used as the request property request_readonly_hardline is set.&lt;/P&gt;&lt;P&gt;Interestingly, the plugin works perfectly fine within the Query tab. However, when called from the dashboard, it fails to execute. This issue was not present yesterday; it was working seamlessly at that time.&lt;/P&gt;&lt;P&gt;Looking further, I realized that in the Query tab the ADX UI sends the request with:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;"Options": {&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; "request_readonly_hardline": false&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;}&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;whereas requests from the dashboard (such as Table or Chart tiles) are sent with &lt;STRONG&gt;request_readonly_hardline: true&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Is this on purpose?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jan 2025 15:34:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/issue-with-mysql-request-plugin-on-dashboard/m-p/4368928#M689</guid>
      <dc:creator>SonPhamT</dc:creator>
      <dc:date>2025-01-21T15:34:39Z</dc:date>
    </item>
    <item>
      <title>Extending by a function output</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/extending-by-a-function-output/m-p/4247160#M574</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="applescript"&gt;datatable(ids: dynamic)
[
    dynamic(["value1", "value2"])
]
| function(ids)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The snippet above works fine and returns a table.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="applescript"&gt;datatable(date: datetime, ids: dynamic)
[
    datetime(2022-01-01), dynamic(["value1", "value2"]),
    datetime(2022-01-02), dynamic(["value3", "value4"])
]
| extend outputs = function(ids)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This one, however, complains that extend expects a scalar, not the table that the function returns.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="applescript"&gt;datatable(date: datetime, ids: dynamic)
[
    datetime(2022-01-01), dynamic(["value1", "value2"]),
    datetime(2022-01-02), dynamic(["value3", "value4"])
]
| extend outputs = toscalar(function(ids))&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When using toscalar, ids cannot be referenced. Is there a workaround?&lt;/P&gt;&lt;P&gt;The function takes in a dynamic and returns a tabular expression of one row and two columns.&lt;/P&gt;</description>
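This is expected: toscalar() is evaluated once during query analysis, so it cannot bind to per-row columns such as ids. If the logic inside function() can be inlined, the mv-apply operator provides a per-row tabular context; below is a sketch where the summarize is only an illustrative stand-in for the real function body:

```kusto
datatable(date: datetime, ids: dynamic)
[
    datetime(2022-01-01), dynamic(["value1", "value2"]),
    datetime(2022-01-02), dynamic(["value3", "value4"])
]
| mv-apply id = ids to typeof(string) on (
    // stand-in for the function body: produce the one row / two columns here
    summarize col1 = make_list(id), col2 = dcount(id)
)
```

If the function cannot be inlined (for example, it queries other tables), another option is to restructure it to accept a tabular parameter and call it with the invoke operator on the whole table.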
      <pubDate>Tue, 17 Sep 2024 10:22:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/extending-by-a-function-output/m-p/4247160#M574</guid>
      <dc:creator>ic2092</dc:creator>
      <dc:date>2024-09-17T10:22:32Z</dc:date>
    </item>
    <item>
      <title>Disappearing files after ending session</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/disappearing-files-after-ending-session/m-p/4192860#M565</link>
      <description>&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt; &lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;Good day. Any idea why my files on a Surface Hub 2S keep disappearing whenever I end a session and log back in? Is there a way I can permanently keep them in my Documents folder or in the File Explorer folder and access them at any time, even after ending a session?&lt;/P&gt;&lt;P&gt;I have attached two pictures: one shows the created folder called JEOC Docs, and the other shows that the files disappeared after ending the session or logging out, as in the picture below.&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Jul 2024 15:27:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/disappearing-files-after-ending-session/m-p/4192860#M565</guid>
      <dc:creator>EGLHUBJEOC3</dc:creator>
      <dc:date>2024-07-16T15:27:41Z</dc:date>
    </item>
    <item>
      <title>SQL Server emulation layer; support for prepared statements</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/sql-server-emulation-layer-support-for-prepared-statements/m-p/4158247#M562</link>
      <description>&lt;P&gt;&lt;SPAN&gt;ADX supports very basic transformation of T-SQL to KQL, as it is limited to normal SQL statements.&lt;BR /&gt;&lt;BR /&gt;Currently, support for prepared statements doesn't seem to be in place. Adding it would greatly improve the security of the emulation layer by enabling prepared statements to be used.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;This would also allow semantic layers and customer-facing analytics tools such as &lt;A href="https://www.vizzly.co" target="_self"&gt;Vizzly&lt;/A&gt;&amp;nbsp;to integrate in a secured manner.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 07:27:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/sql-server-emulation-layer-support-for-prepared-statements/m-p/4158247#M562</guid>
      <dc:creator>vizzly_james</dc:creator>
      <dc:date>2024-06-03T07:27:13Z</dc:date>
    </item>
    <item>
      <title>Why does ADX caching result from related dimension table/mv/function</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/why-does-adx-caching-result-from-related-dimension-table-mv/m-p/4095697#M556</link>
      <description>&lt;P&gt;I'm testing materialized views based on the sample queries below:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Lookup to another materialized view (bar_mv) storing last timestamps in field LAST_FOO_TS for some key(a,b,c)&lt;/LI&gt;&lt;/UL&gt;&lt;PRE&gt;.create materialized-view with (dimensionMaterializedViews = "bar_mv") 
foo_mv on table events_table{
events_table 
|lookup materialized_view('bar_mv') on a,b,c
|where TIMESTAMP &amp;gt; LAST_FOO_TS
}&lt;/PRE&gt;&lt;UL&gt;&lt;LI&gt;Lookup to stored function (bar_v) returning last timestamps in field LAST_FOO_TS for some key(a,b,c)&lt;/LI&gt;&lt;/UL&gt;&lt;PRE&gt;.create materialized-view
foo_mv1 on table events_table{
events_table 
|lookup bar_v() on a,b,c
|where TIMESTAMP &amp;gt; LAST_FOO_TS
}&lt;/PRE&gt;&lt;P&gt;In both cases, it looks like ADX is caching the results of the lookup materialized view/function and not refreshing them for a long time, or ever. Is there any way to force a refresh so that the newest values of LAST_FOO_TS are used in the "where" condition?&lt;/P&gt;</description>
      <pubDate>Mon, 25 Mar 2024 14:03:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/why-does-adx-caching-result-from-related-dimension-table-mv/m-p/4095697#M556</guid>
      <dc:creator>Bartek_insight</dc:creator>
      <dc:date>2024-03-25T14:03:18Z</dc:date>
    </item>
    <item>
      <title>Kusto ADX plugin for Azure Devops (dashboards)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/kusto-adx-plugin-for-azure-devops-dashboards/m-p/4059305#M553</link>
      <description>&lt;P&gt;The existing kusto plugin that we are using in our Azure DevOps&amp;nbsp; (&lt;A href="https://marketplace.visualstudio.com/items?itemName=arsaveli.kustoWidget" target="_blank"&gt;Kusto and Application Insights Widget - Visual Studio Marketplace&lt;/A&gt;) is broken and there is no ETA on a fix.&lt;/P&gt;
&lt;P&gt;It would be great if one of you could point me to a workaround or a different plugin that we could use to access Kusto data from the DevOps dashboards.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Feb 2024 14:51:14 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/kusto-adx-plugin-for-azure-devops-dashboards/m-p/4059305#M553</guid>
      <dc:creator>Girish_Dasarathy</dc:creator>
      <dc:date>2024-02-16T14:51:14Z</dc:date>
    </item>
    <item>
      <title>Connect using SAS Token or APIKey</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/connect-using-sas-token-or-apikey/m-p/4043400#M550</link>
      <description>&lt;P&gt;Hello,&lt;BR /&gt;There are a lot of samples showing how to connect to ADX using an interactive user or a service principal, but I would like to know if there is a way to connect to ADX from a non-interactive service using something like a SAS token or some kind of API key.&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jan 2024 14:55:24 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/connect-using-sas-token-or-apikey/m-p/4043400#M550</guid>
      <dc:creator>Messan APETE</dc:creator>
      <dc:date>2024-01-30T14:55:24Z</dc:date>
    </item>
    <item>
      <title>Materialized view with inner join over 2 tables skips data</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/materialized-view-with-inner-join-over-2-tables-skips-data/m-p/4036037#M542</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have 2 tables which are ingested from event hubs with the standard ingestion batching policy (10 minutes etc.).&lt;/P&gt;&lt;P&gt;I have created a materialized view with roughly the following definition:&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;.create materialized-view TransactionsWithCustDataMatView on table Transactions&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;{&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;Transactions&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;| join kind=inner Customers on CustomerId&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;| where CreatedOn &amp;gt; LastModifiedOn1&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;| summarize arg_max(LastModifiedOn1, *) by TransactionId&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;}&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;The problem is that sometimes a customer registers and within 1 minute also performs a transaction. Since Customers and Transactions are different tables, each with its own ingestion batching, it can happen that the transaction of a new customer is ingested before the corresponding customer record, and the materialized view then "skips" the transaction because of the inner join.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;Is there a way to control/sync the ingestion of the 2 source tables, or to schedule the materialized view to run with a delay of 10-20 minutes (by which time ingestion into both tables would have happened)?&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Br,&lt;BR /&gt;Deyan&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Mon, 22 Jan 2024 14:05:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/materialized-view-with-inner-join-over-2-tables-skips-data/m-p/4036037#M542</guid>
      <dc:creator>Deyan_Petrov</dc:creator>
      <dc:date>2024-01-22T14:05:54Z</dc:date>
    </item>
    <item>
      <title>SLA Query</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/sla-query/m-p/4035974#M541</link>
      <description>&lt;P&gt;How can I calculate an SLA using a KQL query?&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jan 2024 13:20:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/sla-query/m-p/4035974#M541</guid>
      <dc:creator>JKPrasannaDonthireddy</dc:creator>
      <dc:date>2024-01-22T13:20:12Z</dc:date>
    </item>
    <item>
      <title>'State' Graph from ADO</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-explorer/state-graph-from-ado/m-p/4028355#M538</link>
      <description>&lt;P&gt;How can I get the 'State' stages (New, Active, In Progress, Closed) from the Azure DevOps (ADO) State graph into a KQL query?&lt;/P&gt;</description>
      <pubDate>Fri, 12 Jan 2024 17:49:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-explorer/state-graph-from-ado/m-p/4028355#M538</guid>
      <dc:creator>JKPrasannaDonthireddy</dc:creator>
      <dc:date>2024-01-12T17:49:27Z</dc:date>
    </item>
  </channel>
</rss>

