<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure PaaS Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/bg-p/AzurePaaSBlog</link>
    <description>Azure PaaS Blog articles</description>
    <pubDate>Sat, 18 Apr 2026 10:13:01 GMT</pubDate>
    <dc:creator>AzurePaaSBlog</dc:creator>
    <dc:date>2026-04-18T10:13:01Z</dc:date>
    <item>
      <title>How to get blob Total Blob Count and Total Capacity with Blob Inventory</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/how-to-get-blob-total-blob-count-and-total-capacity-with-blob/ba-p/4485643</link>
      <description>&lt;DIV class="mce-toc"&gt;
&lt;H2&gt;Table of Contents&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="#community--1-mcetoc_1jf5pdnfg_1" target="_self"&gt;Approach&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-mcetoc_1jf5nsncf_2" target="_self"&gt;Introduction to the Blob Inventory Service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-mcetoc_1jf5pdnfg_3" target="_self"&gt;Steps to enable inventory report&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-mcetoc_1jf5nsncf_3" target="_self"&gt;Support Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-mcetoc_1jf5pdnfg_5" target="_self"&gt;Disclaimer:&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;H1 id="mcetoc_1jf5pdnfg_1"&gt;Approach&lt;/H1&gt;
&lt;P&gt;This article presents how to take advantage of the Blob Inventory service to get the total blob count and the total capacity per storage account, per container or per directory.&lt;/P&gt;
&lt;P&gt;I will present the steps to create a blob inventory rule and show how to get the needed information just by using the &lt;EM&gt;prefix match&lt;/EM&gt;&amp;nbsp;field, without having to process the inventory results.&lt;/P&gt;
&lt;P&gt;Additional support documentation is provided at the end of the article.&lt;/P&gt;
&lt;H1 id="mcetoc_1jf5nsncf_2"&gt;Introduction to the Blob Inventory Service&lt;/H1&gt;
&lt;P&gt;Azure Storage blob inventory provides a list of the containers, blobs, blob versions, and snapshots in your storage account, along with their associated properties. It generates an output report in either comma-separated values (csv) or Apache Parquet format on a daily or weekly basis. You can use the report to audit retention, legal hold or encryption status of your storage account contents, or you can use it to understand the total data size, age, tier distribution, or other attributes of your data. Please find &lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory" target="_blank" rel="noopener"&gt;here&lt;/A&gt; our documentation about the blob inventory service.&lt;/P&gt;
&lt;P&gt;In this article, I will focus on using this service to get the blob count and the total capacity.&lt;/P&gt;
&lt;H1 id="mcetoc_1jf5pdnfg_3"&gt;Steps to enable inventory report&lt;/H1&gt;
&lt;P&gt;Please see below how to define a blob inventory rule to get the intended information, using the Azure Portal:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Sign in to the&amp;nbsp;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener" data-linktype="external"&gt;Azure portal&lt;/A&gt;&amp;nbsp;to get started.&lt;/LI&gt;
&lt;LI&gt;Locate your storage account and display the account overview.&lt;/LI&gt;
&lt;LI&gt;Under&amp;nbsp;&lt;STRONG&gt;Data management&lt;/STRONG&gt;, select&amp;nbsp;&lt;STRONG&gt;Blob inventory&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Add your first inventory rule&lt;/STRONG&gt; if you do not have any rule defined yet, or select &lt;STRONG&gt;Add a rule&lt;/STRONG&gt; if at least one rule already exists.&lt;/LI&gt;
&lt;LI&gt;Add a new inventory rule by filling in the following fields:&amp;nbsp;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Rule name:&lt;/STRONG&gt; The name of your blob inventory rule.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Container:&lt;/STRONG&gt; Container to store the result of the blob inventory rule execution.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Object type to inventory:&lt;/STRONG&gt;&amp;nbsp;Select&amp;nbsp;&lt;EM&gt;blob&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Blob types:&lt;/STRONG&gt;
&lt;OL&gt;
&lt;LI&gt;Blob Storage: Select all (&lt;EM&gt;Block blobs, Page blobs, Append blobs&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;Data Lake Storage: Select all (&lt;EM&gt;Block blobs&lt;/EM&gt;, &lt;EM&gt;Append blobs&lt;/EM&gt;).&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Subtypes:&lt;/STRONG&gt;&amp;nbsp;
&lt;OL&gt;
&lt;LI&gt;Blob Storage: Select all (&lt;EM&gt;Include blob versions&lt;/EM&gt;, &lt;EM&gt;Include snapshots&lt;/EM&gt;, &lt;EM&gt;Include deleted blobs&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;Data Lake Storage: Select all (&lt;EM&gt;Include snapshots&lt;/EM&gt;, &lt;EM&gt;Include deleted blobs&lt;/EM&gt;).&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Blob inventory fields:&lt;/STRONG&gt; Please find&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory#custom-schema-fields-supported-for-blob-inventory" target="_blank" rel="noopener"&gt;here&lt;/A&gt; all custom schema fields supported for blob inventory. In this scenario, we need to select at least the following fields:
&lt;OL&gt;
&lt;LI&gt;Blob Storage: &lt;EM&gt;Name&lt;/EM&gt;, &lt;EM&gt;Creation-Time&lt;/EM&gt;, &lt;EM&gt;ETag&lt;/EM&gt;, &lt;EM&gt;Content-Length&lt;/EM&gt;, &lt;EM&gt;Snapshot&lt;/EM&gt;, &lt;EM&gt;VersionId&lt;/EM&gt;, &lt;EM&gt;IsCurrentVersion&lt;/EM&gt;, &lt;EM&gt;Deleted&lt;/EM&gt;, &lt;EM&gt;RemainingRetentionDays&lt;/EM&gt;.&lt;/LI&gt;
&lt;LI&gt;Data Lake Storage: &lt;EM&gt;Name&lt;/EM&gt;, &lt;EM&gt;Creation-Time&lt;/EM&gt;, &lt;EM&gt;ETag&lt;/EM&gt;, &lt;EM&gt;Content-Length&lt;/EM&gt;, &lt;EM&gt;Snapshot&lt;/EM&gt;, &lt;EM&gt;DeletionId&lt;/EM&gt;, &lt;EM&gt;Deleted&lt;/EM&gt;, &lt;EM&gt;DeletedTime&lt;/EM&gt;, &lt;EM&gt;RemainingRetentionDays&lt;/EM&gt;.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Inventory frequency:&lt;/STRONG&gt; A blob inventory run is automatically scheduled every day when daily is chosen. Selecting a weekly schedule triggers the inventory run only on Sundays.&lt;BR /&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;P&gt;A daily execution will return results faster.&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Export format:&lt;/STRONG&gt; The output format: either a CSV file or an Apache Parquet file.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Prefix match:&lt;/STRONG&gt;&amp;nbsp;Filter blobs by name or first letters. To find items in a specific container, enter the name of the container followed by a forward slash, then the blob name or first letters. For example, to show all blobs starting with “a”, type: “myContainer/a”.
&lt;OL&gt;
&lt;LI&gt;Here is the place to add the path where to start collecting the blob information.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
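&lt;P&gt;The rule configured in the steps above ends up serialized in the manifest file's &lt;EM&gt;ruleDefinition&lt;/EM&gt; block (shown later in this article). As a minimal sketch, the same settings can be modeled as a plain dictionary; field names follow the manifest format, and the exact schema of the management API may differ:&lt;/P&gt;

```python
import json

# Sketch of an inventory rule, mirroring the "ruleDefinition" block of the
# ruleName-manifest.json file. Field names are taken from that manifest
# format; the management API schema may differ slightly.
rule = {
    "filters": {
        "blobTypes": ["blockBlob", "pageBlob", "appendBlob"],  # all blob types
        "includeBlobVersions": True,   # subtypes: versions, snapshots, deleted
        "includeSnapshots": True,
        "prefixMatch": ["work/"],      # container-level scope (step 5.9)
    },
    "format": "csv",                   # or "parquet"
    "objectType": "blob",
    "schedule": "daily",
    "schemaFields": [
        "Name", "Creation-Time", "ETag", "Content-Length", "Snapshot",
        "VersionId", "IsCurrentVersion", "Deleted", "RemainingRetentionDays",
    ],
}
print(json.dumps(rule, indent=2))
```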
&lt;P&gt;Step 5.9 above (the &lt;EM&gt;prefix match&lt;/EM&gt; field) is the main point of this article.&lt;/P&gt;
&lt;P&gt;Consider a storage account with a container named &lt;EM&gt;work&lt;/EM&gt; and a directory named &lt;EM&gt;items&lt;/EM&gt; inside that container. Configure the &lt;EM&gt;prefix match&lt;/EM&gt; field as follows to get the needed result:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Leave it empty to get the information at the storage account level.&lt;/LI&gt;
&lt;LI&gt;Add the container name in the &lt;EM&gt;prefix match &lt;/EM&gt;field to get the information at the container level.&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;
&lt;UL&gt;
&lt;LI&gt;Put&amp;nbsp;&lt;EM&gt;prefix match = &lt;/EM&gt;work/&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the directory path in the &lt;EM&gt;prefix match &lt;/EM&gt;field to get the information at the directory level.
&lt;UL&gt;
&lt;LI&gt;Put&amp;nbsp;&lt;EM&gt;prefix match = &lt;/EM&gt;work/items/&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The blob inventory execution generates a file named &lt;EM&gt;&amp;lt;ruleName&amp;gt;-manifest.json&lt;/EM&gt; (see the support documentation section for more details about this file). It captures the rule definition provided by the user, the path to the inventory results for that rule, and the summary information that we want, without our having to process the inventory files themselves.&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
"destinationContainer" : "inventory-destination-container",
"endpoint" : "https://testaccount.blob.core.windows.net",
"files" : [
  {
    "blob" : "2021/05/26/13-25-36/Rule_1/Rule_1.csv",
    "size" : 12710092
  }
],
"inventoryCompletionTime" : "2021-05-26T13:35:56Z",
"inventoryStartTime" : "2021-05-26T13:25:36Z",
"ruleDefinition" : {
  "filters" : {
    "blobTypes" : [ "blockBlob" ],
    "includeBlobVersions" : false,
    "includeSnapshots" : false,
    "prefixMatch" : [ "penner-test-container-100003" ]
  },
  "format" : "csv",
  "objectType" : "blob",
  "schedule" : "daily",
  "schemaFields" : [
    "Name",
    "Creation-Time",
    "BlobType",
    "Content-Length",
    "LastAccessTime",
    "Last-Modified",
    "Metadata",
    "AccessTier"
  ]
},
"ruleName" : "Rule_1",
"status" : "Succeeded",
"summary" : {
  "objectCount" : 110000,
  "totalObjectSize" : 23789775
},
"version" : "1.0"
}&lt;/LI-CODE&gt;
&lt;P&gt;The&amp;nbsp;&lt;EM&gt;objectCount&lt;/EM&gt; value is the total blob count, and the&amp;nbsp;&lt;EM&gt;totalObjectSize&lt;/EM&gt; is the total capacity in bytes.&lt;/P&gt;
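&lt;P&gt;As a minimal sketch, reading those two summary values out of a downloaded manifest takes only a few lines (the layout follows the example above; the function name here is illustrative):&lt;/P&gt;

```python
import json

def inventory_totals(manifest_text: str):
    """Return (total blob count, total capacity in bytes) from the contents
    of a ruleName-manifest.json file, without processing the CSV/Parquet
    inventory files themselves."""
    manifest = json.loads(manifest_text)
    if manifest["status"] != "Succeeded":
        raise ValueError("inventory run not finished: " + manifest["status"])
    summary = manifest["summary"]
    return summary["objectCount"], summary["totalObjectSize"]
```

For the manifest shown above, this returns (110000, 23789775).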
&lt;P&gt;&lt;STRONG&gt;Special notes:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A rule needs to be defined for each path (container or directory) to get the total blob count and the total capacity.&lt;/LI&gt;
&lt;LI&gt;The blob inventory rule generates one or more CSV or Apache Parquet formatted files. These files can be deleted if the rule exists only to get the information presented in this article.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1 id="mcetoc_1jf5nsncf_3"&gt;Support Documentation&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 22.0543%" /&gt;&lt;col style="width: 77.9373%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;Topic&lt;/td&gt;&lt;td class="lia-align-center"&gt;Some highlights&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory-how-to?tabs=azure-portal" target="_blank" rel="noopener"&gt;Enable Azure Storage blob inventory reports&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;The steps to enable inventory report.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory#inventory-run" target="_blank" rel="noopener" data-lia-auto-title-active="1"&gt;Inventory run&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;If you configure a rule to run daily, then it will be scheduled to run every day. If you configure a rule to run weekly, then it will be scheduled to run each week on Sunday UTC time.&lt;/P&gt;
&lt;P&gt;The time taken to generate an inventory report depends on various factors and the maximum amount of time that an inventory run can complete before it fails is six days.&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory#inventory-output" target="_blank" rel="noopener"&gt;Inventory output&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Each inventory rule generates a set of files in the specified inventory destination container for that rule. The inventory output is generated under the following path:&amp;nbsp;https://&amp;lt;accountName&amp;gt;.blob.core.windows.net/&amp;lt;inventory-destination-container&amp;gt;/YYYY/MM/DD/HH-MM-SS/&amp;lt;ruleName&amp;gt;&amp;nbsp;where:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;accountName&lt;/EM&gt;&lt;/STRONG&gt;&amp;nbsp;is your Azure Blob Storage account name.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;inventory-destination-container&lt;/EM&gt;&lt;/STRONG&gt;&amp;nbsp;is the destination container you specified in the inventory rule.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;YYYY/MM/DD/HH-MM-SS&lt;/EM&gt;&lt;/STRONG&gt;&amp;nbsp;is the time when the inventory began to run.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;ruleName&lt;/EM&gt;&lt;/STRONG&gt; is the inventory rule name.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&amp;nbsp;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory#inventory-files" target="_blank" rel="noopener" data-lia-auto-title-active="1"&gt;Inventory files&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;Each inventory run for a rule generates the following files:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Inventory file:&lt;/STRONG&gt; An inventory run for a rule generates a CSV or Apache Parquet formatted file. Each such file contains matched objects and their metadata.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Checksum file:&lt;/STRONG&gt; A checksum file contains the MD5 checksum of the contents of the manifest.json file. The name of the checksum file is &amp;lt;ruleName&amp;gt;-manifest.checksum. Generation of the checksum file marks the completion of an inventory rule run.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Manifest file:&lt;/STRONG&gt; A manifest.json file contains the details of the inventory file(s) generated for that rule. The name of the file is &amp;lt;ruleName&amp;gt;-manifest.json. This file also captures the rule definition provided by the user and the path to the inventory for that rule.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory#pricing-and-billing" target="_blank" rel="noopener"&gt;Pricing and billing&lt;/A&gt;&lt;/td&gt;&lt;td&gt;&amp;nbsp;Pricing for inventory is based on the number of blobs and containers that are scanned during the billing period.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory#known-issues-and-limitations" target="_blank" rel="noopener"&gt;Known Issues and Limitations&lt;/A&gt;&lt;/td&gt;&lt;td&gt;&amp;nbsp;This section describes limitations and known issues of the Azure Storage blob inventory feature.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
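&lt;P&gt;Since the checksum file marks the completion of an inventory run, a consumer can wait for it and then validate the manifest before trusting the summary. A minimal sketch, assuming the checksum file holds the MD5 hex digest of the manifest contents:&lt;/P&gt;

```python
import hashlib

def manifest_is_valid(manifest_bytes: bytes, checksum_text: str) -> bool:
    """Compare the MD5 of the downloaded ruleName-manifest.json bytes with
    the value read from ruleName-manifest.checksum (hex digest assumed)."""
    digest = hashlib.md5(manifest_bytes).hexdigest()
    return digest == checksum_text.strip().lower()
```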
&lt;H1 id="mcetoc_1jf5pdnfg_5"&gt;Disclaimer&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;These steps are provided&amp;nbsp;for the purpose of illustration only.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;These steps and any related information are provided "as is" without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.&lt;/LI&gt;
&lt;LI&gt;We grant You a nonexclusive, royalty-free right to use and modify the steps and to reproduce and distribute the steps, provided that You agree:&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;to not use Our name, logo, or trademarks to market Your software product in which the steps are embedded;&lt;/LI&gt;
&lt;LI&gt;to include a valid copyright notice on Your software product in which the steps are embedded; and&lt;/LI&gt;
&lt;LI&gt;to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of steps.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 23 Mar 2026 13:46:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/how-to-get-blob-total-blob-count-and-total-capacity-with-blob/ba-p/4485643</guid>
      <dc:creator>ruineiva</dc:creator>
      <dc:date>2026-03-23T13:46:08Z</dc:date>
    </item>
    <item>
      <title>Redis Keys Statistics</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/redis-keys-statistics/ba-p/4486079</link>
      <description>&lt;P&gt;Redis Keys statistics including Key Time-to-Live (&lt;STRONG&gt;TTL&lt;/STRONG&gt;) statistics and &lt;STRONG&gt;Key sizes&lt;/STRONG&gt; are useful for troubleshooting cache usage and performance, from client side.&lt;/P&gt;
&lt;P&gt;This article has two sections:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The first uses a script to get statistics from keys on Redis (1 Bash script + 1 Lua script: &lt;STRONG&gt;getKeyStats.sh + getKeyStats.lua&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;LI&gt;The second uses a script to filter and list key names (1 Bash script that includes a Lua script: &lt;STRONG&gt;listKeys.sh&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Key Time-to-Live (TTL):&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;HR /&gt;
&lt;P&gt;TTL can have an impact on memory usage and on the memory available in Redis services.&lt;/P&gt;
&lt;P&gt;Data loss on Redis services may happen unexpectedly due to some backend issue, but it may also happen because of the memory eviction policy or an expired Time-to-Live (TTL).&lt;BR /&gt;The memory eviction policy may remove some keys from the Redis service, but only when the used capacity (the space used by Redis keys) reaches 100% of the available memory.&lt;/P&gt;
&lt;P&gt;Absent any unexpected issue on the Redis backend side, and without reaching the maximum memory available, the only reason for keys being removed from the cache is their TTL value.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;TTL may not be defined at all, and in that case the key remains in the cache forever (persistent)&lt;/LI&gt;
&lt;LI&gt;TTL can be set while setting a new key&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;TTL can be set / re-set later after key creation&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;TTL is set in seconds or milliseconds; querying it returns either the time remaining or a negative value:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;-1&lt;/STRONG&gt;: the key &lt;SPAN style="color: rgb(30, 30, 30);"&gt;exists but has no expiration (it’s persistent)&lt;/SPAN&gt;; this happens when the TTL was not defined or was removed using the PERSIST command&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;-2&lt;/STRONG&gt;: the key does not exist&lt;/LI&gt;
&lt;LI&gt;any other value: the remaining time to live&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Related commands:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SET key1 value1 EX 60&amp;nbsp;&lt;/STRONG&gt;- defines TTL as 60 seconds&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;SET key1 value1 PX 60000&lt;/STRONG&gt; - defines TTL as 60000 milliseconds (60 seconds)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;EXPIRE key1 60&lt;/STRONG&gt; - Set a timeout of 60 seconds on key1&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;TTL key1 &lt;/STRONG&gt;- returns the current TTL value, in seconds&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;PTTL key1 &lt;/STRONG&gt;- returns the current TTL value, in milliseconds&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;PERSIST key1&lt;/STRONG&gt; - removes the TTL from that key and makes it persistent&lt;/LI&gt;
&lt;/UL&gt;
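&lt;P&gt;The TTL return conventions above can be sketched with a tiny in-memory model (an illustration of the semantics only, not the redis-cli tool or a Redis client):&lt;/P&gt;

```python
import time

class TinyTTLStore:
    """Toy model of the Redis TTL conventions: -2 for a missing key,
    -1 for a key with no expiration, otherwise the seconds remaining."""

    def __init__(self):
        self._data = {}     # key -> value
        self._expiry = {}   # key -> absolute expiry time (monotonic clock)

    def set(self, key, value, ex=None):   # SET key value [EX seconds]
        self._data[key] = value
        if ex is None:
            self._expiry.pop(key, None)   # no TTL: the key is persistent
        else:
            self._expiry[key] = time.monotonic() + ex

    def persist(self, key):               # PERSIST key: drop the TTL
        self._expiry.pop(key, None)

    def ttl(self, key):                   # TTL key
        if key not in self._data:
            return -2
        if key not in self._expiry:
            return -1
        return max(0, round(self._expiry[key] - time.monotonic()))
```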
&lt;P&gt;Notes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-start="1012" data-end="1139"&gt;TTL counts down in real time, but Redis expiration is &lt;STRONG data-start="1068" data-end="1085"&gt;lazy + active&lt;/STRONG&gt;, so exact timing isn’t guaranteed to the millisecond.&lt;/LI&gt;
&lt;LI data-start="1140" data-end="1246"&gt;A TTL of 0 is basically a race condition, that usually are not seen, it because the key expires immediately.&lt;/LI&gt;
&lt;LI data-start="1247" data-end="1291"&gt;EXPIRE key 0 deletes the key right away.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;There is no guarantee that deletion happens exactly at expiration time. Redis &lt;STRONG data-start="1068" data-end="1085"&gt;lazy + active &lt;/STRONG&gt;expiration means the key is checked only when someone touches it (&lt;STRONG&gt;lazy&lt;/STRONG&gt;); to avoid memory filling up with expired junk, Redis also runs a background job that periodically scans a subset of keys and deletes the expired ones (&lt;STRONG&gt;active&lt;/STRONG&gt;). So some expired keys may survive a bit longer: no longer accessible, but still in memory.&lt;/P&gt;
&lt;P&gt;Example Redis lazy:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;at 11:59:00 &lt;STRONG&gt;SET key1 value1&lt;/STRONG&gt; &lt;STRONG&gt;EX 60 &lt;/STRONG&gt;- 60 seconds expiration time&lt;/LI&gt;
&lt;LI&gt;key1 expires at 12:00:00&lt;/LI&gt;
&lt;LI&gt;no one accesses it until 12:00:05 - when someone tries to access key1 at 12:00:05, Redis identifies that key1 has expired and deletes it.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Example Redis active:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;for the same key1, after 12:00:00: if the periodic background job scans the subset of keys containing key1, key1 will be actively deleted.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For that reason, we may see higher memory usage than the real memory used by active keys in the cache.&lt;BR /&gt;For more information about Redis commands, check &lt;A href="https://redis.io/docs/latest/commands/" target="_blank" rel="noopener"&gt;Redis Inc - Commands&lt;/A&gt;&lt;/P&gt;
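&lt;P&gt;The lazy and active behaviors above can be sketched the same way: expired keys stay in memory until a read touches them or a periodic sweep samples them (again, a toy model for illustration, not how Redis is implemented internally):&lt;/P&gt;

```python
import random
import time

class LazyActiveStore:
    """Toy model of Redis expiration: an expired key is removed only when it
    is read (lazy) or when a periodic sweep samples it (active), so expired
    keys can linger in memory for a while."""

    def __init__(self):
        self._data = {}
        self._expiry = {}

    def set(self, key, value, ex):
        self._data[key] = value
        self._expiry[key] = time.monotonic() + ex

    def _expired(self, key):
        return time.monotonic() >= self._expiry[key]

    def get(self, key):                      # lazy: checked only on access
        if key in self._data and self._expired(key):
            del self._data[key]
            del self._expiry[key]
        return self._data.get(key)

    def active_sweep(self, sample_size=20):  # active: periodic random sample
        keys = list(self._data)
        for key in random.sample(keys, min(sample_size, len(keys))):
            if self._expired(key):
                del self._data[key]
                del self._expiry[key]

    def keys_in_memory(self):                # may count expired-but-present keys
        return len(self._data)
```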
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Sizes:&lt;/STRONG&gt;&lt;/P&gt;
&lt;HR /&gt;
&lt;P&gt;Large key value sizes in the cache may have a high impact on Redis performance.&lt;BR /&gt;The Redis service is designed for response sizes around 1KB, and Microsoft recommends using up to 100KB on Azure Redis services to get better performance.&lt;BR /&gt;The Redis response size may not be exactly the same as the key size, as the response size is the sum of the responses from each operation sent to Redis.&lt;BR /&gt;While the response size can be the size of only one requested key (as with GET), we very often see the response size being the sum of more than one key, as a result of multikey operations (such as MGET).&lt;/P&gt;
&lt;P&gt;The scope of this article is individual key sizes, so we will not discuss the implications of multikey commands here.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;By design, the Redis service is single-threaded per shard; this is not a Microsoft/Azure limitation but a Redis design feature.&lt;BR /&gt;To process requests very quickly, Redis is optimized to work with small keys, and for that it is more efficient to use a single thread than to pay the cost of context switching.&lt;BR /&gt;In a multithreaded system, context switching happens when the processor stops executing one thread and starts executing another.&lt;BR /&gt;When that happens, the OS saves the current thread’s state (registers, program counter, stack pointer, etc.) and restores the state of the next thread.&lt;BR /&gt;To save that time, the Redis service is designed to run on a single thread.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Due to this single-threaded nature, all operations sent to the Redis service wait in a queue to be processed.&lt;BR /&gt;To minimize latency, all keys must remain small so they can be processed efficiently and responses can be transmitted to the client quickly over the network.&lt;/P&gt;
&lt;P&gt;For that reason, it is important to understand the key sizes in our Redis service, and to keep all keys as small as possible.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Scripts Provided&lt;/STRONG&gt;&lt;/P&gt;
&lt;HR /&gt;
&lt;P&gt;To help identify specific TTL values and key sizes in a Redis cache, two solutions are provided below:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;1. Get Key statistics&lt;/STRONG&gt; - scans the whole cache and returns the number of Redis keys in each category:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Number of keys with &lt;STRONG&gt;TTL &lt;/STRONG&gt;not set&lt;/LI&gt;
&lt;LI&gt;Number of keys with &lt;STRONG&gt;TTL &lt;/STRONG&gt;greater than or equal to a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;Number of keys with &lt;STRONG&gt;TTL &lt;/STRONG&gt;lower than a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;Number of keys with value &lt;STRONG&gt;size &lt;/STRONG&gt;greater than or equal to a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;Number of keys with value &lt;STRONG&gt;size &lt;/STRONG&gt;lower than a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Total &lt;/STRONG&gt;number of keys in the cache&lt;/LI&gt;
&lt;LI&gt;The output also includes the start and end time, and the total time spent on the key scan.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;2. List Key Names&lt;/STRONG&gt; - this script returns a list of Redis key names, based on the parameters provided:&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;No &lt;STRONG&gt;TTL &lt;/STRONG&gt;set, or&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;TTL &lt;/STRONG&gt;greater than or equal to a user-defined TTL threshold, or&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;TTL &lt;/STRONG&gt;lower than a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;Key value &lt;STRONG&gt;size &lt;/STRONG&gt;greater than or equal to a user-defined size threshold, or&lt;/LI&gt;
&lt;LI&gt;Key value &lt;STRONG&gt;size &lt;/STRONG&gt;lower than a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Total &lt;/STRONG&gt;number of keys in the cache&lt;/LI&gt;
&lt;LI&gt;The output also includes the start and end time, and the total time spent on the key scan.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;WARNING:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Because they need to read all keys in the cache, both solutions can cause a high workload on the Redis side, especially for large datasets with a high number of keys.&lt;BR /&gt;Both solutions use a Lua script that runs on the Redis side and, depending on the number of keys in the cache, may block all other commands from being processed while the script is running.&lt;BR /&gt;The duration reported in the output of each script run may help to gauge the impact of running it.&lt;BR /&gt;Run the scripts carefully, and test them first in your development environment before using them in production.&lt;BR /&gt;&lt;BR /&gt;YOU RUN THE BELOW SCRIPTS AT YOUR OWN RISK.&lt;BR /&gt;WE DON'T ASSUME ANY RESPONSIBILITY FOR UNEXPECTED RESULTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-background-color-custom-0072c6" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;SPAN style="color: #ffffff; font-size: large;"&gt;&lt;STRONG&gt;1- Get Key statistics&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 100.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;To get Redis key statistics, we use a Linux Bash shell and the redis-cli tool to run a Lua script on the Redis side that reads the TTL value and size of each key.&lt;BR /&gt;This solution is very fast, but it needs to scan all keys in the cache during the Lua script run.&lt;BR /&gt;Although quick, the time taken depends on the number of keys to scan.&lt;BR /&gt;This may block Redis from processing other requests, due to the single-threaded nature of the Redis service.&lt;/P&gt;
&lt;P&gt;The script below scans the whole cache and returns only aggregate counts:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Number of keys with &lt;STRONG&gt;TTL &lt;/STRONG&gt;not set&lt;/LI&gt;
&lt;LI&gt;Number of keys with &lt;STRONG&gt;TTL &lt;/STRONG&gt;greater than or equal to a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;Number of keys with &lt;STRONG&gt;TTL &lt;/STRONG&gt;lower than a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;Number of keys with value &lt;STRONG&gt;size &lt;/STRONG&gt;greater than or equal to a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;Number of keys with value &lt;STRONG&gt;size &lt;/STRONG&gt;lower than a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Total &lt;/STRONG&gt;number of keys in the cache&lt;/LI&gt;
&lt;LI&gt;The output also includes the start and end time, and the total time spent on the key scan.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Calling&lt;STRONG&gt; getKeyStats.sh&amp;nbsp;&lt;/STRONG&gt;returns statistics for the existing keys in the cache, based on two optional threshold values that can be passed as command-line parameters.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;This script can answer questions like:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;“Do I have any key in the cache without TTL set?”, or&lt;/LI&gt;
&lt;LI&gt;“Why are my keys not expiring?” (any threshold values will clarify this)&lt;/LI&gt;
&lt;LI&gt;“Do I have large keys in my cache, larger than 1KB?” (any TTL threshold, with key size threshold 1024)&lt;/LI&gt;
&lt;LI&gt;“How many keys do I have that will expire in the next hour?” (TTL threshold 3600, with any key size threshold)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Output:&lt;/STRONG&gt;&amp;nbsp;(screenshot of the script output omitted)&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;How to run:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create the &lt;STRONG&gt;getKeyStats.sh&lt;/STRONG&gt; and &lt;STRONG&gt;getKeyStats.lua&lt;/STRONG&gt; files below in the same folder, in your Linux environment (Ubuntu 20.04.6 LTS was used)&lt;/LI&gt;
&lt;LI&gt;Make the shell script executable with the command&amp;nbsp;&lt;STRONG&gt;chmod 700 getKeyStats.sh&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Call the script using the syntax:&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;./getKeyStats.sh host password [port] [ttl_threshold] [size_threshold]&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Script parameters:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;host&lt;/STRONG&gt; (mandatory): the URI of the cache&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;password&lt;/STRONG&gt; (mandatory): the Redis access key of the cache&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;port&lt;/STRONG&gt; (optional - default 10000): TCP port used to access the cache&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;ttl_threshold&lt;/STRONG&gt; (optional - default 600 - 10 minutes): key TTL threshold (in seconds) to be used on the results (keys with no TTL set are always counted separately)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;size_threshold&lt;/STRONG&gt; (optional - default 102400 - 100KB): key size threshold (in bytes) to be used on the results&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If not provided, the default values will be used: &lt;STRONG&gt;Redis Port: 10000&lt;/STRONG&gt;, &amp;nbsp;&lt;STRONG&gt;ttl_threshold:&lt;/STRONG&gt;&lt;STRONG&gt;&amp;nbsp;600&lt;/STRONG&gt; Seconds,&amp;nbsp;&lt;STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;size_threshold:&lt;/SPAN&gt;&amp;nbsp;102400&lt;/STRONG&gt; Bytes (100KB).&lt;/P&gt;
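The default handling of the optional parameters can be checked in isolation. A minimal sketch using Bash's `${n:-default}` expansion, with hypothetical host and access-key values:

```shell
# Simulate calling the script with only the two mandatory arguments.
set -- mycache.example.redis.azure.net myAccessKey   # hypothetical host and access key
REDIS_PORT="${3:-10000}"             # falls back to 10000
REDIS_TTL_THRESHOLD="${4:-600}"      # falls back to 600 seconds
REDIS_SIZE_THRESHOLD="${5:-102400}"  # falls back to 102400 bytes (100KB)
echo "$REDIS_PORT $REDIS_TTL_THRESHOLD $REDIS_SIZE_THRESHOLD"
```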
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Tested with:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Ubuntu 20.04.6 LTS&lt;/LI&gt;
&lt;LI&gt;redis-cli -v&lt;BR /&gt;&amp;nbsp; &amp;nbsp; redis-cli 7.4.2&lt;/LI&gt;
&lt;LI&gt;Redis services:
&lt;UL&gt;
&lt;LI&gt;Azure Managed Redis Balanced B0 OSSMode&lt;/LI&gt;
&lt;LI&gt;Azure Cache for Redis Standard C1&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;getKeyStats.sh&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang=""&gt;#!/usr/bin/env bash
#============================== LUA script version =================
# Linux Bash Script to get statistics from Redis Keys TTL values and Key value sizes
# It returns the Number of:
#   - keys with TTL not set
#   - keys with TTL higher than or equal to TTL_threshold
#   - keys with TTL lower than TTL_threshold
#   - keys with value size higher than or equal to Size_threshold
#   - keys with value size lower than Size_threshold
#   - total number of keys in the cache.
#-------------------------------------------------------
# WARNING:
# It uses a Lua script that runs on the Redis server side.
# Use it carefully, during low Redis workloads.
# Do your tests first on a Dev environment, before using it on production.
#-------------------------------------------------------
# It requires :
#    redis-cli v7 or above
#--------------------------------------------------------
# Usage:
# getKeyStats.sh &amp;lt;cacheuri&amp;gt; &amp;lt;cacheaccesskey&amp;gt; [&amp;lt;accessport&amp;gt;(10000)] [&amp;lt;ttl_threshold&amp;gt;(600)] [&amp;lt;size_threshold&amp;gt;(102400)]
#========================================================

#------------------------------------------------------
# To use the non-SSL port, remove the --tls parameter from the redis-cli command below
#------------------------------------------------------

# Parameters
REDIS_HOST="${1:?Usage: $0 &amp;lt;host&amp;gt; &amp;lt;password&amp;gt; [port] [ttl_threshold] [Size_Threshold]}"
REDISCLI_AUTH="${2:?Usage: $0 &amp;lt;host&amp;gt; &amp;lt;password&amp;gt; [port] [ttl_threshold] [Size_Threshold]}"
REDIS_PORT="${3:-10000}"             # 10000 / 6380 / 6379
REDIS_TTL_THRESHOLD="${4:-600}"      # 10 minutes
REDIS_SIZE_THRESHOLD="${5:-102400}"  # 100KB

# Port number must be numeric
if ! [[ "$REDIS_PORT" =~ ^[0-9]+$ ]]; then
  echo "ERROR: Redis Port must be numeric"
  exit 1
fi

# TTL threshold must be numeric
if ! [[ "$REDIS_TTL_THRESHOLD" =~ ^[0-9]+$ ]]; then
  echo "ERROR: TTL threshold must be numeric"
  exit 1
fi

# Size threshold must be numeric
if ! [[ "$REDIS_SIZE_THRESHOLD" =~ ^[0-9]+$ ]]; then
  echo "ERROR: Size threshold must be numeric"
  exit 1
fi

echo ""

echo "========================================================"
echo "Scanning number of keys with TTL threshold $REDIS_TTL_THRESHOLD Seconds, and Key size threshold $REDIS_SIZE_THRESHOLD Bytes"

# Start time
start_ts=$(date +%s.%3N)
echo "Start time: $(date "+%d-%m-%Y %H:%M:%S")"
echo "------------------------"

echo ""

# Processing
result=$(redis-cli \
  -h "$REDIS_HOST" \
  -a "$REDISCLI_AUTH" \
  -p "$REDIS_PORT" \
  --tls \
  --no-auth-warning \
  --raw \
  --eval getKeyStats.lua , "$REDIS_TTL_THRESHOLD" "$REDIS_SIZE_THRESHOLD" \
  | tr '\n' ' ')

read no_ttl nonexist ttl_high ttl_low ttl_invalid size_high size_low size_nil total &amp;lt;&amp;lt;&amp;lt; "$result"

if [[ $result == ERR* ]]; then
  echo "Redis Lua error:"
  echo "$result"
else
  echo "Total keys scanned: $total"
  echo "------------"
  echo "Keys with TTL not set       : $no_ttl"
  echo "Keys with TTL &amp;gt;= $REDIS_TTL_THRESHOLD seconds: $ttl_high"
  echo "Keys with TTL &amp;lt;  $REDIS_TTL_THRESHOLD seconds: $ttl_low"
  echo "Keys with TTL invalid/error : $ttl_invalid"
  echo "Non existent Keys           : $nonexist"
  echo "------------"
  echo "Keys with Size &amp;gt;= $REDIS_SIZE_THRESHOLD Bytes: $size_high"
  echo "Keys with Size &amp;lt;  $REDIS_SIZE_THRESHOLD Bytes: $size_low"
  echo "Keys with invalid Size        : $size_nil"
fi

echo ""
echo "------------------------"
end_ts=$(date +%s.%3N)
echo "End time: $(date "+%d-%m-%Y %H:%M:%S")"

# Duration - Extract days, hours, minutes, seconds, milliseconds
duration=$(awk "BEGIN {print $end_ts - $start_ts}")
days=$(awk "BEGIN {print int($duration/86400)}")
hours=$(awk "BEGIN {print int(($duration%86400)/3600)}")
minutes=$(awk "BEGIN {print int(($duration%3600)/60)}")
seconds=$(awk "BEGIN {print int($duration%60)}")
milliseconds=$(awk "BEGIN {printf \"%03d\", ($duration - int($duration))*1000}")
echo "Duration  : ${days} days $(printf "%02d" "$hours"):$(printf "%02d" "$minutes"):$(printf "%02d" "$seconds").$milliseconds"
echo "========================================================"
&lt;/LI-CODE&gt;
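The awk-based duration breakdown at the end of the script can be verified standalone. A small sketch with fixed, hypothetical timestamps:

```shell
# Break a fractional-seconds duration into hours:minutes:seconds, as the script does with awk.
start_ts=100.250
end_ts=3825.875
duration=$(awk "BEGIN {print $end_ts - $start_ts}")        # about 3725.6 seconds
hours=$(awk "BEGIN {print int(($duration%86400)/3600)}")
minutes=$(awk "BEGIN {print int(($duration%3600)/60)}")
seconds=$(awk "BEGIN {print int($duration%60)}")
printf "%02d:%02d:%02d\n" "$hours" "$minutes" "$seconds"
```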
&lt;P&gt;&lt;STRONG&gt;getKeyStats.lua&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang=""&gt;local ttl_threshold = tonumber(ARGV[1])
local size_threshold = tonumber(ARGV[2])
local cursor = "0"

-- Counters
local no_ttl = 0
local nonexist = 0
local ttl_high = 0
local ttl_low = 0
local ttl_invalid = 0
local size_high = 0
local size_low = 0
local size_nil = 0
local total = 0

repeat
  local scan = redis.call("SCAN", cursor, "COUNT", 1000)
  cursor = scan[1]
  local keys = scan[2]
  for _, key in ipairs(keys) do

      local ttl = redis.call("TTL", key)
      local size = redis.call("MEMORY","USAGE", key)
      total = total + 1

      if ttl == -1 then
        no_ttl = no_ttl + 1
      elseif ttl == -2 then
        nonexist = nonexist + 1
      elseif type(ttl) ~= "number" then
        ttl_invalid = ttl_invalid + 1
      elseif ttl &amp;gt;= ttl_threshold then
        ttl_high = ttl_high + 1
      else
        ttl_low = ttl_low + 1
      end

      if size == nil then
        size_nil = size_nil + 1
      elseif size &amp;gt;= size_threshold then
        size_high = size_high + 1
      else
        size_low = size_low + 1
      end

  end
until cursor == "0"

return {
  no_ttl,
  nonexist,
  ttl_high,
  ttl_low,
  ttl_invalid,
  size_high,
  size_low,
  size_nil,
  total
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Performance:&lt;/STRONG&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Redis service used:&amp;nbsp;&lt;/STRONG&gt;Azure Managed Redis - Balanced B0 - OSSMode&lt;/P&gt;
&lt;P&gt;Scanning number of keys with TTL threshold 600 Seconds, and Key size threshold 102400 Bytes&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Total keys scanned: 46161&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;TTL not set : 0&lt;BR /&gt;TTL &amp;gt;= 600 seconds: 46105&lt;BR /&gt;TTL &amp;lt; 600 seconds: 56&lt;BR /&gt;TTL invalid/error : 0&lt;BR /&gt;Non existent key : 0&lt;/P&gt;
&lt;P&gt;Keys with Size &amp;gt;= 102400 Bytes: 0&lt;BR /&gt;Keys with Size &amp;lt; 102400 Bytes: 46161&lt;BR /&gt;Keys with invalid Size : 0&lt;/P&gt;
&lt;P&gt;Duration : 0 days &lt;STRONG&gt;00:00:00.602&lt;/STRONG&gt;&lt;BR /&gt;# ------------------&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Redis service used: &lt;/STRONG&gt;Azure Cache for Redis - Standard - C1&lt;/P&gt;
&lt;P&gt;Scanning number of keys with TTL threshold 100 Seconds, and Key size threshold 500 Bytes&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Total keys scanned: 1227&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;TTL not set : 2&lt;BR /&gt;TTL &amp;gt;= 100 seconds: 1225&lt;BR /&gt;TTL &amp;lt; 100 seconds: 0&lt;BR /&gt;TTL invalid/error : 0&lt;BR /&gt;Non existent key : 0&lt;/P&gt;
&lt;P&gt;Keys with Size &amp;gt;= 500 Bytes: 1225&lt;BR /&gt;Keys with Size &amp;lt; 500 Bytes: 2&lt;BR /&gt;Keys with invalid Size : 0&lt;/P&gt;
&lt;P&gt;Duration : 0 days &lt;STRONG&gt;00:00:00.630&lt;BR /&gt;&lt;/STRONG&gt;# ------------------&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;WARNING:&lt;/STRONG&gt;&lt;BR /&gt;The above script uses a Lua script that runs on the Redis server side and may block your normal workload.&lt;BR /&gt;Use it carefully when you have a large number of keys in the cache, and during low-workload periods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-background-color-custom-0072c6" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;SPAN style="color: #ffffff; font-size: large;"&gt;&lt;STRONG&gt;2 - List Key Names&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 100.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Once we identify some number of keys in the cache matching a specific threshold, we may want to list those key names.&lt;BR /&gt;The below script can help with that, and returns a list of Redis key names with:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;No &lt;STRONG&gt;TTL&lt;/STRONG&gt; set&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;TTL&lt;/STRONG&gt; higher than or equal to a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;TTL&lt;/STRONG&gt; lower than a user-defined TTL threshold&lt;/LI&gt;
&lt;LI&gt;Key value &lt;STRONG&gt;size&lt;/STRONG&gt; higher than or equal to a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;Key value &lt;STRONG&gt;size&lt;/STRONG&gt; lower than a user-defined size threshold&lt;/LI&gt;
&lt;LI&gt;The &lt;STRONG&gt;total&lt;/STRONG&gt; number of keys in the cache&lt;/LI&gt;
&lt;LI&gt;It also includes the start and end time, and the total time spent on the key scan.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Calling &lt;STRONG&gt;listKeys.sh&lt;/STRONG&gt; returns just the key names with their respective TTL values and key sizes.&lt;BR /&gt;The result depends on the threshold values used, which can be passed as command-line parameters or left at the defaults.&lt;BR /&gt;On this script, we can use a sign on both threshold values:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;“&lt;STRONG&gt;-&lt;/STRONG&gt;“, if we want to return keys lower than that threshold;&lt;/LI&gt;
&lt;LI&gt;“&lt;STRONG&gt;+&lt;/STRONG&gt;” or no sign, if we want to return keys higher than that threshold.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
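The sign handling described above can be sketched on its own: listKeys.sh detects the sign with an integer comparison and strips it with parameter expansion before passing the magnitude to Lua. A minimal sketch with a hypothetical threshold value:

```shell
# Hypothetical threshold: keys expiring within the next hour (-3600).
threshold="-3600"
if [ "$threshold" -ge 0 ]; then sign="+"; else sign="-"; fi
magnitude="${threshold#[-+]}"   # strip a leading + or - sign
echo "$sign $magnitude"
```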
&lt;P&gt;&lt;STRONG&gt;This script can clarify questions like:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;“What are the keys I have in the cache without TTL?” (threshold values: -1 0)&lt;/LI&gt;
&lt;LI&gt;“What are the keys I have in the cache with TTL, and size larger than 1KB?” (threshold values: 0 1024)&lt;/LI&gt;
&lt;LI&gt;“What are the keys I have in the cache with TTL, and size smaller than 1KB?” (threshold values: 0 -1024)&lt;/LI&gt;
&lt;LI&gt;“What are the keys I have in the cache that will expire in the next hour?” (threshold values: -3600 0)&lt;/LI&gt;
&lt;LI&gt;“What are the keys I have in the cache that will expire after 1 hour?” (threshold values: +3600 0)&lt;/LI&gt;
&lt;LI&gt;“What are the keys I have in the cache with no TTL set and size larger than 200KB?” (threshold values: -1 204800)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Output:&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;How to run:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create the below &lt;STRONG&gt;listKeys.sh&lt;/STRONG&gt; file in some folder on your Linux environment (Ubuntu 20.04.6 LTS was used)&lt;/LI&gt;
&lt;LI&gt;Give the shell script run permissions with the command &lt;STRONG&gt;chmod 700 listKeys.sh&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Call the script using the syntax:&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;./listKeys.sh host password [port] [+/-][ttl_threshold] [+/-][size_threshold]&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Script parameters:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;host&lt;/STRONG&gt; (mandatory): the URI of the cache&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;password&lt;/STRONG&gt; (mandatory): the Redis access key of the cache&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;port&lt;/STRONG&gt; (optional - default 10000): TCP port used to access the cache&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;[+/-]&lt;/STRONG&gt; (optional) before ttl_threshold: whether to return keys with TTL lower ("-") or higher ("+") than ttl_threshold&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;ttl_threshold&lt;/STRONG&gt; (optional - default -1, meaning no TTL set): key TTL threshold (in seconds) to be used on the results&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[+/-]&lt;/STRONG&gt; (optional) before size_threshold: whether to return keys with size smaller ("-") or larger ("+") than size_threshold&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;size_threshold&lt;/STRONG&gt; (optional - default 102400 - 100KB): key size threshold (in bytes) to be used on the results&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If not provided, the default values will be used: &lt;STRONG&gt;No TTL&lt;/STRONG&gt; set (-1), and &lt;STRONG&gt;key size threshold 102400&lt;/STRONG&gt; Bytes (100KB).&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
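Note how the default ttl_threshold of -1 reaches the Lua side: the sign is detected first and then stripped, so the Lua script receives sign '-' with value 1, which it treats as the "no TTL set" case. A minimal sketch:

```shell
KEYTTL_THRESHOLD="-1"                        # the default: keys with no TTL set
if [ "$KEYTTL_THRESHOLD" -ge 0 ]; then TTLSIGN="+"; else TTLSIGN="-"; fi
KEYTTL_THRESHOLD="${KEYTTL_THRESHOLD#[-+]}"  # -1 becomes 1 after the sign is stripped
echo "$TTLSIGN $KEYTTL_THRESHOLD"            # Lua matches sign '-' with value 1 as "no TTL"
```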
&lt;P&gt;&lt;STRONG&gt;Tips:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;use &lt;STRONG&gt;ttl_threshold = -1&lt;/STRONG&gt; to return key names with no TTL (ex: ./listKeys.sh host password [port] &lt;STRONG&gt;-1&lt;/STRONG&gt; [+/-][size_threshold])&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;use &lt;STRONG&gt;ttl_threshold = -500&lt;/STRONG&gt; to return key names with TTL below 500 seconds (ex: ./listKeys.sh host password [port] &lt;STRONG&gt;-500&lt;/STRONG&gt; [+/-][size_threshold])&lt;/LI&gt;
&lt;LI&gt;use &lt;STRONG&gt;ttl_threshold = 500&lt;/STRONG&gt; to return key names with TTL above or equal to 500 seconds (ex: ./listKeys.sh host password [port] &lt;STRONG&gt;500&lt;/STRONG&gt; [+/-][size_threshold])&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;UL&gt;
&lt;LI&gt;use &lt;STRONG&gt;size_threshold = 0&lt;/STRONG&gt; to return key names with any size in the cache (ex: ./listKeys.sh host password [port] [+/-][ttl_threshold] &lt;STRONG&gt;0&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;LI&gt;use &lt;STRONG&gt;size_threshold = -1000&lt;/STRONG&gt; to return key names with size below 1000 Bytes (ex: ./listKeys.sh host password [port] [+/-][ttl_threshold] &lt;STRONG&gt;-1000&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;LI&gt;use &lt;STRONG&gt;size_threshold = 1000&lt;/STRONG&gt; to return key names with size above or equal to 1000 Bytes (ex: ./listKeys.sh host password [port] [+/-][ttl_threshold] &lt;STRONG&gt;1000&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;UL&gt;
&lt;LI&gt;use &lt;STRONG&gt;ttl_threshold = 0&lt;/STRONG&gt; AND &lt;STRONG&gt;size_threshold = 0&lt;/STRONG&gt; to return all key names with any TTL and any size in the cache (ex: ./listKeys.sh host password [port] &lt;STRONG&gt;0&lt;/STRONG&gt; &lt;STRONG&gt;0&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;LI&gt;use &lt;STRONG&gt;ttl_threshold = -1&lt;/STRONG&gt; AND &lt;STRONG&gt;size_threshold = 0&lt;/STRONG&gt; to return all key names with no TTL and any size in the cache (ex: ./listKeys.sh host password [port] &lt;STRONG&gt;-1&lt;/STRONG&gt; &lt;STRONG&gt;0&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Tested with:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Ubuntu 20.04.6 LTS&lt;/LI&gt;
&lt;LI&gt;redis-cli -v&lt;BR /&gt;&amp;nbsp; &amp;nbsp; redis-cli 7.4.2&lt;/LI&gt;
&lt;LI&gt;Redis services:
&lt;UL&gt;
&lt;LI&gt;Azure Managed Redis Balanced B0 OSSMode&lt;/LI&gt;
&lt;LI&gt;Azure Cache for Redis Standard C1&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;listKeys.sh&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang=""&gt;#!/usr/bin/env bash
set -euo pipefail
#============================== LUA script version =================
# Linux Bash Script to list Redis Keys names
# It returns key names with:
#   - No TTL set
#   - with TTL higher than or equal to TTL_threshold
#   - with TTL lower than TTL_threshold
#   - with value size higher than or equal to Size_threshold
#   - with value size lower than Size_threshold
#   - total number of keys in the cache.
#-------------------------------------------------------
# WARNING:
# It uses a Lua script (included in the Bash code) to run on the Redis server side.
# Use it carefully, during low Redis workloads.
# Do your tests first on a Dev environment, before using it on production.
#-------------------------------------------------------
# It requires :
#    redis-cli v7 or above
#--------------------------------------------------------
# Usage:
# listKeys.sh &amp;lt;cacheuri&amp;gt; &amp;lt;cacheaccesskey&amp;gt; [&amp;lt;accessport&amp;gt;(10000)] [+/-][&amp;lt;ttl_threshold&amp;gt;(-1)] [+/-][&amp;lt;size_threshold&amp;gt;(102400)]
#========================================================

#------------------------------------------------------
# Using the non-SSL port requires removing the --tls parameter from the redis-cli command below
#------------------------------------------------------

sintax="&amp;lt;redis_host&amp;gt; &amp;lt;password&amp;gt; [redis_port] [+/-][ttl_threshold] [+/-][size_threshold]"
REDIS_HOST="${1:?Usage: $0 $sintax}"
REDISCLI_AUTH="${2:?Usage: $0 $sintax}"
REDIS_PORT="${3:-10000}"          # Redis port (10000, 6380, 6379)
KEYTTL_THRESHOLD=${4:-"-1"}       # -1, +TTL_threshold, TTL_threashold, -TTL_threshold
KEYSIZE_THRESHOLD="${5:-102400}"  # +Size_threshold, Size_threashold, -Size_threshold

# Port number must be numeric
if ! [[ "$REDIS_PORT" =~ ^[0-9]+$ ]]; then
  echo "ERROR: Redis Port must be numeric"
  exit 1
fi

# Check if KEYTTL_THRESHOLD is a valid integer
if ! [[ "$KEYTTL_THRESHOLD" =~ ^[-+]?[0-9]+$ ]]; then
    echo "Error: ttl_threshold $KEYTTL_THRESHOLD is not an integer"
    exit 1
fi

# Check if KEYSIZE_THRESHOLD is a valid integer
if ! [[ "$KEYSIZE_THRESHOLD" =~ ^[-+]?[0-9]+$ ]]; then
    echo "Error: Size_threshold $KEYSIZE_THRESHOLD is not an integer"
    exit 1
fi

# Check if TTL Threshold is positive (or zero), or negative
if [ "$KEYTTL_THRESHOLD" -ge 0 ]; then
    TTLSIGN="+"
else
    TTLSIGN="-"
fi

# Check if Size Threshold is positive (or zero), or negative
if [ "$KEYSIZE_THRESHOLD" -ge 0 ]; then
    SIZESIGN="+"
    size_text="larger than"
else
    SIZESIGN="-"
    size_text="smaller than"
fi

# specific with no TTL set
if [ "$KEYTTL_THRESHOLD" -eq -1 ]; then
    ttl_text="No TTL set"
fi
if [ "$KEYTTL_THRESHOLD" -ge 0 ]; then
    ttl_text="TTL above $KEYTTL_THRESHOLD Seconds"
fi
if [ "$KEYTTL_THRESHOLD" -lt -1 ]; then
    ttl_text="TTL below ${KEYTTL_THRESHOLD#[-+]} Seconds"
fi

# remove any sign
KEYTTL_THRESHOLD="${KEYTTL_THRESHOLD#[-+]}"
KEYSIZE_THRESHOLD="${KEYSIZE_THRESHOLD#[-+]}"

echo "========================================================"
echo "List all key names with $ttl_text, and Key size $size_text $KEYSIZE_THRESHOLD Bytes"

# Start time
start_ts=$(date +%s.%3N)
echo "Start time: $(date "+%d-%m-%Y %H:%M:%S")"
echo "------------------------"

echo ""

# Processing
redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" -a "$REDISCLI_AUTH" --tls --no-auth-warning EVAL "
local cursor = '0'
local ttl_threshold = tonumber(ARGV[1])    -- KEYTTL_THRESHOLD
local ttl_sign = ARGV[2]                   -- TTLSIGN
local size_threshold = tonumber(ARGV[3])   -- KEYSIZE_THRESHOLD
local size_sign = ARGV[4]                  -- SIZESIGN
local output = {}
local count = 0
local totalKeys = 0
local strKeyTTL = ''
local strKeySize = ''


-- Scanning keys in the cache
table.insert(output, '--------------------------------------')

repeat
    local res = redis.call('SCAN', cursor, 'COUNT', 100)
    cursor = res[1]

    for _, k in ipairs(res[2]) do
        local ttl = redis.call('TTL', k)
        local size = redis.call('MEMORY','USAGE', k) or 0  -- treat a missing size reply as 0
        totalKeys = totalKeys + 1

        if (size_sign == '+' and size &amp;gt;= size_threshold) or (size_sign == '-' and size &amp;lt; size_threshold) then
            -- TTL == -1 → no expiration
            if ttl_sign == '-' and ttl_threshold == 1 then
                if ttl == -1 then
                    table.insert(output, k .. ': TTL: -1, Size: ' .. size .. ' Bytes')
                    count =  count + 1
                end

            -- TTL comparisons (exclude -1 and -2)
            else
                if ttl &amp;gt;= 0 then
                    if ttl_sign == '-' and ttl &amp;lt; ttl_threshold then
                        table.insert(output, k .. ': TTL: ' .. ttl .. ' seconds, Size: ' .. size .. ' Bytes')
                        count =  count + 1
                    elseif ttl_sign == '+' and ttl &amp;gt;= ttl_threshold then
                        table.insert(output, k .. ': TTL: ' .. ttl .. ' seconds, Size: ' .. size .. ' Bytes')
                        count =  count + 1
                    end
                end
            end
        end
    end
until cursor == '0'


-- Adding summary to output
table.insert(output, '--------------------------------------')

if (size_sign == '+') then
   strKeySize = 'larger'
else
   strKeySize = 'smaller'
end
strKeySize = 'size ' .. strKeySize .. ' than ' .. size_threshold .. ' Bytes'

if ttl_sign == '-' and ttl_threshold == 1 then
   strKeyTTL = 'No TTL'
elseif ttl_sign == '-' then
   strKeyTTL = 'TTL &amp;lt; ' .. ttl_threshold .. ' seconds'
elseif ttl_sign == '+' then
   strKeyTTL = 'TTL &amp;gt;= ' .. ttl_threshold .. ' seconds'
end
strKeyTTL = ' keys found with ' .. strKeyTTL

table.insert(output, 'Scan completed.')
table.insert(output, 'Total of ' .. totalKeys .. ' keys scanned.')
table.insert(output, count .. strKeyTTL .. ', and ' .. strKeySize)
table.insert(output, '--------------------------------------')

return output
" 0 "$KEYTTL_THRESHOLD" "$TTLSIGN" "$KEYSIZE_THRESHOLD" "$SIZESIGN"

echo " "

end_ts=$(date +%s.%3N)
echo "End time: $(date "+%d-%m-%Y %H:%M:%S")"

# Duration - Extract days, hours, minutes, seconds, milliseconds
duration=$(awk "BEGIN {print $end_ts - $start_ts}")
days=$(awk "BEGIN {print int($duration/86400)}")
hours=$(awk "BEGIN {print int(($duration%86400)/3600)}")
minutes=$(awk "BEGIN {print int(($duration%3600)/60)}")
seconds=$(awk "BEGIN {print int($duration%60)}")
milliseconds=$(awk "BEGIN {printf \"%03d\", ($duration - int($duration))*1000}")
echo "Duration  : ${days} days $(printf "%02d" "$hours"):$(printf "%02d" "$minutes"):$(printf "%02d" "$seconds").$milliseconds"
echo "========================================================"
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Performance:&lt;/STRONG&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Redis service used: &lt;/STRONG&gt;Azure Managed Redis Balanced B0 OSSMode&lt;/P&gt;
&lt;P&gt;# ------------------&lt;BR /&gt;Scan completed. Total keys listed: &lt;STRONG&gt;46005&lt;/STRONG&gt;&lt;BR /&gt;Duration : 0 days &lt;STRONG&gt;00:00:01.437&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;# ------------------&lt;BR /&gt;&lt;STRONG&gt;Redis service used:&amp;nbsp;&lt;/STRONG&gt;Azure Cache for Redis - Standard - C1&lt;BR /&gt;Scan completed. Total keys listed: &lt;STRONG&gt;1225&lt;/STRONG&gt;&lt;BR /&gt;Duration : 0 days &lt;STRONG&gt;00:00:00.545&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;# ------------------&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;WARNING:&lt;/STRONG&gt;&lt;BR /&gt;The above script uses a Lua script that runs on the Redis server side and may block your normal workload.&lt;BR /&gt;Use it carefully when you have a large number of keys in the cache, and during low-workload periods.&lt;BR /&gt;&lt;BR /&gt;YOU CAN RUN THE ABOVE SCRIPTS AT YOUR OWN RISK.&lt;BR /&gt;WE DON'T ASSUME ANY RESPONSIBILITY FOR UNEXPECTED RESULTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-background-color-custom-0072c6" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;SPAN style="color: #ffffff; font-size: large;"&gt;&lt;STRONG&gt;References&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 100.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/redis/overview" target="_blank" rel="noopener"&gt;Azure Managed Redis&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-development" target="_blank" rel="noopener"&gt;Azure Best Practice for Development&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://redis.io/docs/latest/commands/" target="_blank" rel="noopener"&gt;Redis Inc - Commands&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://redis.io/docs/latest/develop/programmability/lua-api/" target="_blank" rel="noopener"&gt;Redis LUA - Lua API reference&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://redis.io/docs/latest/commands/expire/#how-redis-expires-keys" target="_blank" rel="noopener"&gt;Redis Inc - How Redis expires keys &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://redis.io/docs/latest/develop/tools/cli/" target="_blank" rel="noopener"&gt;Redis CLI&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://www.w3schools.com/bash/bash_script.php" target="_blank" rel="noopener"&gt;Bash Script&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://manpages.org/xargs" target="_blank" rel="noopener"&gt;xargs man page&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://manpages.org/awk" target="_blank" rel="noopener"&gt;awk man page&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;I hope this can be useful!&lt;/P&gt;
      <pubDate>Mon, 02 Feb 2026 15:57:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/redis-keys-statistics/ba-p/4486079</guid>
      <dc:creator>LuisFilipe</dc:creator>
      <dc:date>2026-02-02T15:57:26Z</dc:date>
    </item>
    <item>
      <title>Deriving expiry days and remaining retention days for blobs through blob inventory</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/deriving-expiry-days-and-remaining-retention-days-for-blobs/ba-p/4466586</link>
      <description>&lt;P&gt;In managing data within Azure blob storage accounts and Azure data lake gen 2 storage accounts, organizations often encounter scenarios where blobs have been deleted but remain in a soft-deleted state. To calculate the remaining retention days for all such blobs across an entire storage account can be a critical requirement for customers seeking to optimize data management and ensure compliance with retention policies. &lt;BR /&gt;&lt;BR /&gt;Additionally, certain blobs may have an expiry time set, scheduling their deletion for a future date. To facilitate the identification and monitoring of these blobs and their respective expiry times, a custom query has been written to efficiently list and calculate expiry information, enabling users to proactively manage their storage resources.&lt;/P&gt;
&lt;P&gt;The expiry time for Azure blobs is set using the&amp;nbsp;Set Blob Expiry&amp;nbsp;operation. This feature is available only in hierarchical-namespace-enabled storage accounts.&lt;BR /&gt;&lt;BR /&gt;We can set the expiry with the steps below:&lt;BR /&gt;&lt;BR /&gt;i) Azure Storage Actions - &lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions" target="_blank" rel="noopener"&gt;About Azure Storage Actions - Azure Storage Actions | Microsoft Learn&lt;/A&gt;&lt;BR /&gt;A storage action can be used to set blob expiry; a high-level snippet of the operation is shown below&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;ii) REST API - &lt;A href="https://learn.microsoft.com/en-us/rest/api/storageservices/set-blob-expiry?tabs=microsoft-entra-id" target="_blank" rel="noopener"&gt;Set Blob Expiry (REST API) - Azure Storage | Microsoft Learn&lt;/A&gt; to set the expiry time for your blobs. This ensures that each blob has a defined lifecycle and will be deleted after the specified period.&lt;BR /&gt;&lt;BR /&gt;This blog is a step-by-step walkthrough of listing the expiry time and retention of blobs using a Blob Inventory report and then parsing it using Synapse.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;1. Set blob inventory rule&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Get the CSV file from the blob inventory run.&lt;BR /&gt;Go to the container where the inventory reports are stored.&lt;BR /&gt;Navigate to the most recent date folder and get the URL of the Blob Inventory CSV file.&lt;BR /&gt;Sharing the below snippet for reference:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;2. Create an Azure Synapse workspace&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Next,&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/get-started-create-workspace" target="_blank" rel="noopener"&gt;create an Azure Synapse workspace&lt;/A&gt;&amp;nbsp;where you will execute a SQL query to report the inventory results.&lt;/P&gt;
&lt;P&gt;Create the SQL query: After you create your Azure Synapse workspace, do the following steps.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Navigate to&amp;nbsp;&lt;A href="https://web.azuresynapse.net/" target="_blank" rel="noopener"&gt;https://web.azuresynapse.net&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Select the&amp;nbsp;&lt;STRONG&gt;Develop&lt;/STRONG&gt;&amp;nbsp;tab on the left edge.&lt;/LI&gt;
&lt;LI&gt;Select the large plus sign (+) to add an item.&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;SQL script&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;3. Use the sample queries below to get the expiry time and the remaining retention days of blobs, respectively&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;select
    LEFT([Name], CHARINDEX('/', [Name]) - 1) AS Container,
    RIGHT([Name], LEN([Name]) - CHARINDEX('/', [Name])) AS Blob,
    [Expiry-time]
from OPENROWSET(
    bulk '&amp;lt;URL to your inventory CSV file&amp;gt;',
    format='csv', parser_version='2.0', header_row=true
) as Source&lt;/LI-CODE&gt;
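The LEFT/CHARINDEX and RIGHT/LEN expressions simply split the inventory Name column (container/path/to/blob) at the first '/'. If you want to sanity-check that logic outside Synapse, here is a rough Python equivalent; `split_inventory_name` and the sample CSV are illustrative, not part of the inventory service:

```python
import csv
import io

def split_inventory_name(name):
    # Text before the first '/' is the container; the remainder is the blob path,
    # mirroring LEFT/CHARINDEX and RIGHT/LEN in the Synapse query.
    container, _, blob = name.partition("/")
    return container, blob

# Tiny stand-in for an inventory CSV; real files are read from the inventory container.
sample = io.StringIO(
    "Name,Expiry-time\n"
    "mycontainer/dir/report.pdf,2026-01-01T00:00:00Z\n"
)
for row in csv.DictReader(sample):
    container, blob = split_inventory_name(row["Name"])
    print(container, blob, row["Expiry-time"])
# → mycontainer dir/report.pdf 2026-01-01T00:00:00Z
```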
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;For blobs that were deleted directly, you can calculate the remaining retention days: the data is kept in a soft-deleted state and will be deleted permanently once the retention period completes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;select
    LEFT([Name], CHARINDEX('/', [Name]) - 1) AS Container,
    RIGHT([Name], LEN([Name]) - CHARINDEX('/', [Name])) AS Blob,
    [Expiry-time],
    RemainingRetentionDays
from OPENROWSET(
    bulk '&amp;lt;URL to your inventory CSV file&amp;gt;',
    format='csv', parser_version='2.0', header_row=true
) as Source&lt;/LI-CODE&gt;
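Since RemainingRetentionDays counts down from the inventory run, you can project the date a soft-deleted blob will be purged. A small sketch, where `projected_purge_date` is a hypothetical helper and the column name comes from the inventory schema:

```python
from datetime import date, timedelta

def projected_purge_date(as_of, remaining_retention_days):
    """Given the inventory run date and the RemainingRetentionDays column,
    estimate when a soft-deleted blob will be permanently removed."""
    if remaining_retention_days is None:  # blob is not soft-deleted
        return None
    return as_of + timedelta(days=int(remaining_retention_days))

print(projected_purge_date(date(2025, 11, 11), 5))  # → 2025-11-16
```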
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;In the above snippet, a NULL value indicates that the blob is not deleted and that no expiry time has been set on it yet.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Please note: calculating blob expiry from the blob inventory is one approach; customers can also explore other options, such as PowerShell and the Azure CLI, to achieve the same result.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reference links:-&lt;BR /&gt;&lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/rest/api/storageservices/set-blob-expiry?tabs=microsoft-entra-id" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Set Blob Expiry (REST API) - Azure Storage | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-create?tabs=azure-portal" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Create a storage task - Azure Storage Actions | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/blob-inventory" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Azure Storage blob inventory | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/calculate-blob-count-size" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Calculate blob count and size using Azure Storage inventory | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 11 Nov 2025 05:35:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/deriving-expiry-days-and-remaining-retention-days-for-blobs/ba-p/4466586</guid>
      <dc:creator>Harshi_mrinal</dc:creator>
      <dc:date>2025-11-11T05:35:25Z</dc:date>
    </item>
    <item>
      <title>Exclude Prefix in Azure Storage Action: Smarter Blob Management</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/exclude-prefix-in-azure-storage-action-smarter-blob-management/ba-p/4440075</link>
      <description>&lt;P&gt;Azure Storage Actions is a powerful platform for automating data management tasks across Blob and Data Lake Storage. Among its many features,&amp;nbsp;&lt;STRONG&gt;Exclude Prefix&lt;/STRONG&gt; stands out as a subtle yet critical capability that helps fine-tune task assignments.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What Is the "Exclude Prefix" Feature?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The&amp;nbsp;&lt;STRONG&gt;Exclude Prefix&lt;/STRONG&gt;&amp;nbsp;option allows users to&amp;nbsp;&lt;STRONG&gt;omit specific blobs or folders&lt;/STRONG&gt;&amp;nbsp;from being targeted by Azure Storage Actions. This is particularly useful when applying actions such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Moving blobs to a cooler tier&lt;/LI&gt;
&lt;LI&gt;Deleting blobs&lt;/LI&gt;
&lt;LI&gt;Rehydrating archived blobs&lt;/LI&gt;
&lt;LI&gt;Triggering workflows based on blob changes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For example, if you're running a task to archive blobs older than 30 days, but you want to&amp;nbsp;&lt;STRONG&gt;exclude logs or config files&lt;/STRONG&gt;, you can define a prefix like&amp;nbsp;logs/&amp;nbsp;or&amp;nbsp;config/&amp;nbsp;in the exclusion list.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
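The exclusion itself is straight prefix matching on the blob name. As a rough illustration of the semantics (`is_excluded` is a hypothetical helper; the real filtering happens server-side in Azure Storage Actions):

```python
def is_excluded(blob_name, exclude_prefixes):
    """Return True when a blob should be skipped by the task,
    mirroring how Exclude blob prefixes filters objects."""
    return any(blob_name.startswith(p) for p in exclude_prefixes)

blobs = ["logs/app.log", "config/app.json", "data/2025/report.csv"]
excluded = ["logs/", "config/"]
print([b for b in blobs if not is_excluded(b, excluded)])
# → ['data/2025/report.csv']
```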
&lt;P&gt;&lt;STRONG&gt;How to Use It — Example Scenario:&lt;BR /&gt;&lt;/STRONG&gt;In the following example, blobs across the entire storage account were deleted based on a condition: if a blob’s access tier was set to&amp;nbsp;&lt;STRONG&gt;Hot&lt;/STRONG&gt;, it was deleted &lt;STRONG&gt;except&lt;/STRONG&gt;&amp;nbsp;for those blobs or paths explicitly listed under the&amp;nbsp;&lt;STRONG&gt;Exclude blob prefixes&lt;/STRONG&gt;&amp;nbsp;property.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Create a Task: -&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Navigate to the Azure portal and search for &lt;STRONG&gt;Storage tasks&lt;/STRONG&gt;. Then, under&amp;nbsp;&lt;STRONG&gt;Services&lt;/STRONG&gt;, click on&amp;nbsp;&lt;STRONG&gt;Storage tasks – Azure Storage Actions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;On the&amp;nbsp;&lt;STRONG&gt;Azure Storage Actions | Storage Tasks&lt;/STRONG&gt;&amp;nbsp;page, click&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;to begin configuring a new task.&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Complete all the required fields, then click&amp;nbsp;&lt;STRONG&gt;Next&lt;/STRONG&gt;&amp;nbsp;to proceed to the&amp;nbsp;&lt;STRONG&gt;Conditions&lt;/STRONG&gt;&amp;nbsp;page. To configure blob deletion, add the following conditions on the&amp;nbsp;&lt;STRONG&gt;Conditions&lt;/STRONG&gt;&amp;nbsp;page.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Add the Assignment :-&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Click &lt;STRONG&gt;Add assignment&lt;/STRONG&gt;. In the &lt;STRONG&gt;Select scope&lt;/STRONG&gt; section, choose your subscription and storage account, then provide a name for the assignment.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG&gt;Role assignment&lt;/STRONG&gt; section, select &lt;STRONG&gt;Storage Blob Data Owner&lt;/STRONG&gt; from the Role drop-down list to assign this role to the system-assigned managed identity of the storage task.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Filter objects&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; section, specify the &lt;/SPAN&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Exclude Blob Prefix&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; filter if you want to exclude specific blobs or folders from the task.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;In the example specified above, blobs will be deleted, except for those under the path “excludefiles” listed in the &lt;EM&gt;Exclude blob prefixes&lt;/EM&gt;&amp;nbsp;property.&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG&gt;Trigger details&lt;/STRONG&gt;&amp;nbsp;section, choose when the task runs and then select the container where you'd like to store the execution reports.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Add&lt;/STRONG&gt;. In the&amp;nbsp;&lt;STRONG&gt;Tags&lt;/STRONG&gt;&amp;nbsp;tab, select&amp;nbsp;&lt;STRONG&gt;Next&lt;/STRONG&gt; and in the&amp;nbsp;&lt;STRONG&gt;Review + Create&lt;/STRONG&gt;&amp;nbsp;tab,&amp;nbsp;&lt;STRONG&gt;select Review + create&lt;/STRONG&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;When the task is deployed, the &lt;STRONG&gt;Your deployment is complete&lt;/STRONG&gt; page appears; select &lt;STRONG&gt;Go to resource&lt;/STRONG&gt;&amp;nbsp;to open the&amp;nbsp;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&amp;nbsp;page of the storage task.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enable the Task Assignment: -&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;In the &lt;STRONG style="color: rgb(30, 30, 30);"&gt;Trigger details&lt;/STRONG&gt;&lt;SPAN style="font-weight: 400; color: rgb(30, 30, 30);"&gt;&amp;nbsp;section, there is an &lt;/SPAN&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Enable task assignment&lt;/STRONG&gt;&lt;SPAN style="font-weight: 400; color: rgb(30, 30, 30);"&gt; checkbox, which is checked by default.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;If the&amp;nbsp;&lt;STRONG&gt;Enable task assignments&lt;/STRONG&gt;&amp;nbsp;checkbox is unchecked, you can still enable assignments manually from the&amp;nbsp;&lt;STRONG&gt;Assignments&lt;/STRONG&gt;&amp;nbsp;page. To do this, go to&amp;nbsp;&lt;STRONG&gt;Assignments&lt;/STRONG&gt;, select the relevant assignment, and then click&amp;nbsp;&lt;STRONG&gt;Enable&lt;/STRONG&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;The task assignment is queued and will run at the specified time.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Monitoring the runs:-&lt;BR /&gt;&lt;/STRONG&gt;After the task completes running, you can view the results of the run.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;With the&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Assignments&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&amp;nbsp;page still open, select&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;View task runs&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Select the&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;View report&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; link to download a report. &lt;/SPAN&gt;You can also view these comma-separated reports in the container that you specified when you configured the assignment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion:&lt;BR /&gt;&lt;/STRONG&gt;The &lt;STRONG&gt;Exclude Prefix&lt;/STRONG&gt; feature in Azure Storage Actions provides enhanced control and flexibility when managing blob data at scale. By allowing you to &lt;STRONG&gt;exclude specific prefixes from actions like delete or tier changes&lt;/STRONG&gt;, it helps you &lt;STRONG&gt;safeguard critical data&lt;/STRONG&gt;, reduce mistakes, and fine-tune automation workflows. This targeted approach not only improves operational efficiency but also supports more &lt;STRONG&gt;granular data management&lt;/STRONG&gt; in Azure Blob Storage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note:-&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Storage Actions are generally available in the following public regions:&amp;nbsp;&lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions&lt;/STRONG&gt;&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;We can also exclude certain blobs using the&amp;nbsp;&lt;STRONG&gt;“Not”&lt;/STRONG&gt;&amp;nbsp;operator when building task conditions. Blobs may be excluded based on specific blob or container attributes from the&amp;nbsp;&lt;STRONG&gt;task conditions&lt;/STRONG&gt; side as well, not just through task assignments.&lt;BR /&gt;In the screenshot below, we are using the &lt;STRONG&gt;Not&lt;/STRONG&gt; operator (!) to exclude blobs where the blob name is equal to "Test".&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Please refer: &lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-conditions#multiple-clauses-in-a-condition" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-conditions#multiple-clauses-in-a-condition&lt;/A&gt;.&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reference Links:-&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage-actions%2Foverview&amp;amp;data=05%7C02%7Cankitsah%40microsoft.com%7C09d1fc4fed9240532a5b08ddb957ba75%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638870508625797507%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=b5obgvCr5ksIWs9x7Z3azY9fYhbH9OkpGFrZ6YeO1js%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;About Azure Storage Actions - Azure Storage Actions | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage-actions%2Fstorage-tasks%2Fstorage-task-best-practices&amp;amp;data=05%7C02%7Cankitsah%40microsoft.com%7C09d1fc4fed9240532a5b08ddb957ba75%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638870508625816690%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=%2B08AQNo4KSTdZguHWpAcepcxU6tTIKJUAS9nFd3GP0I%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Storage task best practices - Azure Storage Actions | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage-actions%2Fstorage-tasks%2Fstorage-task-known-issues&amp;amp;data=05%7C02%7Cankitsah%40microsoft.com%7C09d1fc4fed9240532a5b08ddb957ba75%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638870508625830066%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=%2BC2rkfBNAvumSVUtBXcGgUHdGox2n3cIwmwG2PvQK%2Fo%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Known issues and limitations with storage tasks - Azure Storage Actions | Microsoft Learn&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 30 Sep 2025 09:26:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/exclude-prefix-in-azure-storage-action-smarter-blob-management/ba-p/4440075</guid>
      <dc:creator>ManjunathS</dc:creator>
      <dc:date>2025-09-30T09:26:23Z</dc:date>
    </item>
    <item>
      <title>Developer Tier APIM + Self-hosted Gateway</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/developer-tier-apim-self-hosted-gateway/ba-p/4457556</link>
      <description>&lt;P&gt;The APIM Developer tier enjoys many "premium" features, such as VNet injection.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;Consumption&lt;/th&gt;&lt;th&gt;Developer&lt;/th&gt;&lt;th&gt;Basic&lt;/th&gt;&lt;th&gt;Basic v2&lt;/th&gt;&lt;th&gt;Standard&lt;/th&gt;&lt;th&gt;Standard v2&lt;/th&gt;&lt;th&gt;Premium&lt;/th&gt;&lt;th&gt;Premium v2 (preview)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Microsoft Entra integration&lt;SUP&gt;1&lt;/SUP&gt;&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Virtual network injection support&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Private endpoint support for inbound connections&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Outbound virtual network integration support&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Multi-region deployment&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Availability 
zones&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Multiple custom domain names for gateway&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Developer portal&lt;SUP&gt;2&lt;/SUP&gt;&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Built-in cache&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-cache-external" data-linktype="relative-path" target="_blank"&gt;External cache&lt;/A&gt;&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Autoscaling&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;API analytics&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A 
href="https://learn.microsoft.com/en-us/azure/api-management/self-hosted-gateway-overview" data-linktype="relative-path" target="_blank"&gt;Self-hosted gateway&lt;/A&gt;&lt;SUP&gt;3&lt;/SUP&gt;&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/workspaces-overview" data-linktype="relative-path" target="_blank"&gt;Workspaces&lt;/A&gt;&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-manage-protocols-ciphers" data-linktype="relative-path" target="_blank"&gt;TLS settings&lt;/A&gt;&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-mutual-certificates-for-clients" data-linktype="relative-path" target="_blank"&gt;Client certificate authentication&lt;/A&gt;&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-policies" data-linktype="relative-path" 
target="_blank"&gt;Policies&lt;/A&gt;&lt;SUP&gt;4&lt;/SUP&gt;&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/credentials-overview" data-linktype="relative-path" target="_blank"&gt;Credential manager&lt;/A&gt;&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-disaster-recovery-backup-restore" data-linktype="relative-path" target="_blank"&gt;Backup and restore&lt;/A&gt;&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-configuration-repository-git" data-linktype="relative-path" target="_blank"&gt;Management over Git&lt;/A&gt;&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Azure Monitor metrics&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Azure Monitor and Log Analytics request 
logs&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Application Insights request logs&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Static IP&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Export API to Power Platform&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Export API to Postman&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Export API to MCP server (preview)&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Expose existing MCP server (preview)&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-features" target="_blank"&gt;Feature-based comparison of Azure API Management tiers | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;However, it comes with weaknesses, such as no guaranteed SLA, since it is designed for non-production use cases and evaluations in the first place.&lt;/P&gt;
&lt;P&gt;For projects that benefit from the ROI of the Developer tier and would like more control over APIM service availability, Developer tier + self-hosted gateway might be an option. One example use case is to provision an Azure VM and then set up the self-hosted gateway on that VM. This way, users can manage and maintain the underlying VM themselves and avoid events like VM guest OS upgrades during business hours, which would otherwise cause service disruptions on a Developer tier APIM.&lt;/P&gt;
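As a rough sketch of that VM setup: the self-hosted gateway ships as a container image, configured by two settings copied from the portal. The port mappings and container name below are arbitrary choices, and the placeholder values must come from your own gateway resource:

```shell
# env.conf holds the two settings the gateway needs, copied from
# APIM > Deployment and infrastructure > Gateways > <your gateway> > Deployment:
#   config.service.endpoint=<configuration endpoint URL>
#   config.service.auth=<gateway token>

# Run the gateway container on the VM, mapping HTTP/HTTPS to the host.
docker run -d --name apim-gateway \
  -p 80:8080 -p 443:8081 \
  --env-file env.conf \
  mcr.microsoft.com/azure-api-management/gateway:v2
```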
&lt;P&gt;Just a quick thought...&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 27 Sep 2025 16:45:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/developer-tier-apim-self-hosted-gateway/ba-p/4457556</guid>
      <dc:creator>reve</dc:creator>
      <dc:date>2025-09-27T16:45:43Z</dc:date>
    </item>
    <item>
      <title>Finding the Right Page number in PDFs with AI Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/finding-the-right-page-number-in-pdfs-with-ai-search/ba-p/4440758</link>
      <description>&lt;DIV&gt;&lt;STRONG&gt;Why Page Numbers Matter in AI Search&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;When users search for content within large PDFs—such as contracts, manuals, or reports—they often need to know not just what was found, but where it was found. Associating search results with page numbers enables:&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Contextual navigation within documents.&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Precise citations in knowledge bases or chatbots.&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Improved user trust in AI-generated responses.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV&gt;&lt;STRONG&gt;Prerequisites for Azure Blob Storage &amp;amp; Azure AI Search Setup Summary&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;1. Azure Blob Storage&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV class="lia-indent-padding-left-30px"&gt;A container is configured to store PDF files.&lt;/DIV&gt;
&lt;DIV&gt;&lt;BR /&gt;&lt;STRONG&gt;2. Appropriate permissions:&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;The AI search service must have&amp;nbsp;&lt;STRONG&gt;Storage Blob Data Reader&lt;/STRONG&gt; access to the container. If using RBAC, ensure the managed identity is properly assigned.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Ref:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/search/search-blob-indexer-role-based-access" target="_blank" rel="noopener"&gt;AI search Search-blob-indexer-role-based-access&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/search/search-howto-indexing-azure-blobs" target="_blank" rel="noopener"&gt;How to Index Azure Blobs&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Technical Approaches to Extract Page Numbers using AI search&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;1. Adding a Skillset: Document Cracking and index projection for parent-child indexing&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;The first step in skillset execution is document cracking, which separates text and image content. A common use case for Text Merger is merging the textual representation of images—such as OCR output or image captions—into the content field of a document. This is especially useful for PDFs or Word documents that combine text with embedded images. This ensures that the final enriched document includes all relevant textual data, regardless of its original format, and improves the accuracy of downstream search and analysis.&lt;/DIV&gt;
&lt;DIV&gt;An index projection specifies how parent-child content is mapped to fields in a search index for one-to-many indexing.&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;LI-CODE lang="json"&gt;{
  "@odata.etag": "\"0x8DDD58F12B5D0B9\"",
  "name": "pagenumskillset",
  "description": "Skillset to feed document to OCR skill and use Index Projection to split the content page wise",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
      "name": "#1",
      "context": "/document/normalized_images/*",
      "lineEnding": "Space",
      "defaultLanguageCode": "en",
      "detectOrientation": true,
      "inputs": [
        {
          "name": "image",
          "source": "/document/normalized_images/*",
          "inputs": []
        }
      ],
      "outputs": [
        {
          "name": "text",
          "targetName": "text"
        },
        {
          "name": "layoutText",
          "targetName": "layoutText"
        }
      ]
    }
  ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.DefaultCognitiveServices"
  },
  "indexProjections": {
    "selectors": [
      {
        "targetIndexName": "pagenumidx",
        "parentKeyFieldName": "ParentKey",
        "sourceContext": "/document/normalized_images/*",
        "mappings": [
          {
            "name": "DocText",
            "source": "/document/normalized_images/*/text",
            "inputs": []
          },
          {
            "name": "DocName",
            "source": "/document/metadata_storage_name",
            "inputs": []
          },
          {
            "name": "DocURL",
            "source": "/document/metadata_storage_path",
            "inputs": []
          },
          {
            "name": "PageNum",
            "source": "/document/normalized_images/*/pageNumber",
            "inputs": []
          }
        ]
      }
    ],
    "parameters": {
      "projectionMode": "skipIndexingParentDocuments"
    }
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
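To make the one-to-many behaviour of the projection concrete, here is a minimal, illustrative sketch (plain Python, not an Azure API) of how one cracked parent document fans out into per-page child documents carrying the field names used in the mappings above. The projection itself runs inside the search service; this only mirrors its shape:

```python
# Illustration only: how the index projection above fans one cracked parent
# document out into per-page child documents. The real mapping runs inside
# Azure AI Search; the field names mirror the skillset's "mappings" section.

def project_pages(parent_doc):
    """Emit one child document per normalized (per-page) image."""
    children = []
    for image in parent_doc["normalized_images"]:
        children.append({
            "ParentKey": parent_doc["key"],
            "DocName": parent_doc["metadata_storage_name"],
            "DocURL": parent_doc["metadata_storage_path"],
            "DocText": image["text"],        # OCR output for this page
            "PageNum": image["pageNumber"],  # page number of the image
        })
    return children

parent = {
    "key": "doc-001",
    "metadata_storage_name": "docname.pdf",
    "metadata_storage_path": "https://account.blob.core.windows.net/c/docname.pdf",
    "normalized_images": [
        {"text": "page one text", "pageNumber": 1},
        {"text": "page two text", "pageNumber": 2},
    ],
}
pages = project_pages(parent)
```

With `projectionMode` set to `skipIndexingParentDocuments`, only these child documents reach the index, which is why each indexed document carries its own `PageNum`.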
&lt;DIV&gt;&lt;STRONG&gt;2. Defining Index definition&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;An index is defined by a schema and stored within the search service, ensuring millisecond response times by decoupling from external data sources. Except for indexer-driven scenarios, the search service never queries the original data directly, making it ideal for high-performance search applications.&lt;/DIV&gt;
&lt;DIV&gt;&lt;LI-CODE lang="json"&gt;{
  "@odata.etag": "\"0x8DDD58E7F7CF595\"",
  "name": "pagenumidx",
  "fields": [
    {
      "name": "ID",
      "type": "Edm.String",
      "searchable": true,
      "filterable": false,
      "retrievable": true,
      "stored": true,
      "sortable": false,
      "facetable": false,
      "key": true,
      "analyzer": "keyword",
      "synonymMaps": []
    },
    {
      "name": "DocText",
      "type": "Edm.String",
      "searchable": true,
      "filterable": false,
      "retrievable": true,
      "stored": true,
      "sortable": false,
      "facetable": false,
      "key": false,
      "synonymMaps": []
    },
    {
      "name": "DocName",
      "type": "Edm.String",
      "searchable": true,
      "filterable": true,
      "retrievable": true,
      "stored": true,
      "sortable": true,
      "facetable": true,
      "key": false,
      "synonymMaps": []
    },
    {
      "name": "PageNum",
      "type": "Edm.Int32",
      "searchable": false,
      "filterable": true,
      "retrievable": true,
      "stored": true,
      "sortable": true,
      "facetable": false,
      "key": false,
      "synonymMaps": []
    },
    {
      "name": "DocURL",
      "type": "Edm.String",
      "searchable": true,
      "filterable": true,
      "retrievable": true,
      "stored": true,
      "sortable": false,
      "facetable": false,
      "key": false,
      "analyzer": "standard.lucene",
      "synonymMaps": []
    },
    {
      "name": "ParentKey",
      "type": "Edm.String",
      "searchable": true,
      "filterable": true,
      "retrievable": true,
      "stored": true,
      "sortable": false,
      "facetable": false,
      "key": false,
      "analyzer": "keyword",
      "synonymMaps": []
    }
  ],
  "scoringProfiles": [],
  "suggesters": [],
  "analyzers": [],
  "normalizers": [],
  "tokenizers": [],
  "tokenFilters": [],
  "charFilters": [],
  "similarity": {
    "@odata.type": "#Microsoft.Azure.Search.BM25Similarity"
  },
  "semantic": {
    "defaultConfiguration": "my_semantic_cfg",
    "configurations": [
      {
        "name": "my_semantic_cfg",
        "flightingOptIn": false,
        "rankingOrder": "BoostedRerankerScore",
        "prioritizedFields": {
          "titleField": {
            "fieldName": "DocName"
          },
          "prioritizedContentFields": [
            {
              "fieldName": "DocText"
            }
          ],
          "prioritizedKeywordsFields": [
            {
              "fieldName": "DocName"
            },
            {
              "fieldName": "ID"
            }
          ]
        }
      }
    ]
  }
}&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;3. Using Azure AI Search Indexer with OCR and ImageAction&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;Azure AI Search allows you to extract page-level data by configuring the indexer with &lt;EM&gt;imageAction&lt;/EM&gt; set to &lt;EM&gt;generateNormalizedImagePerPage&lt;/EM&gt;.&lt;/DIV&gt;
&lt;DIV&gt;This setting renders each PDF page as a separate image, which can then be processed using the OcrSkill. The OCR output can be mapped to a collection field, where each item corresponds to a page's text. This method enables you to infer page numbers based on the position of matched content in the collection.&lt;/DIV&gt;
&lt;DIV&gt;&lt;LI-CODE lang="json"&gt;{
  "@odata.context": "https://searchinstancename.search.windows.net/$metadata#indexers/$entity",
  "@odata.etag": "\"0x8DDD58F3760067D\"",
  "name": "indexer-pagenum",
  "description": null,
  "dataSourceName": "azureblob-1754401027271-datasource",
  "skillsetName": "pagenumskillset",
  "targetIndexName": "pagenumidx",
  "disabled": null,
  "schedule": null,
  "parameters": {
    "batchSize": null,
    "maxFailedItems": null,
    "maxFailedItemsPerBatch": null,
    "configuration": {
      "dataToExtract": "contentAndMetadata",
      "parsingMode": "default",
      "imageAction": "generateNormalizedImagePerPage",
      "pdfTextRotationAlgorithm": "none"
    }
  },
  "fieldMappings": [],
  "outputFieldMappings": [],
  "cache": null,
  "encryptionKey": null
}&lt;/LI-CODE&gt;&lt;/DIV&gt;
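Once the indexer has run, the index can be queried over the Azure AI Search REST API to find which page contains a phrase. A minimal sketch using only the standard library; the endpoint and key are placeholders, and the index name pagenumidx matches the definition above:

```python
# Sketch: query the pagenumidx index for a phrase and return matching page
# numbers. The endpoint and api_key values are placeholders you must supply.
import json
import urllib.request

def build_search_payload(phrase, doc_name=None):
    """Build the REST request body for a search against pagenumidx."""
    payload = {
        "search": phrase,
        "select": "DocName,PageNum",
        "orderby": "PageNum asc",
    }
    if doc_name:
        # OData filter restricting results to a single document.
        payload["filter"] = f"DocName eq '{doc_name}'"
    return payload

def search_pages(endpoint, api_key, phrase, doc_name=None):
    url = f"{endpoint}/indexes/pagenumidx/docs/search?api-version=2024-07-01"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_search_payload(phrase, doc_name)).encode(),
        headers={"Content-Type": "application/json", "api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return [(d["DocName"], d["PageNum"]) for d in json.load(resp)["value"]]
```

Because `DocName` and `PageNum` are filterable and sortable in the index definition, the filter and orderby clauses above work without further changes.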
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;STRONG&gt;Validation and Conclusion&lt;/STRONG&gt;
&lt;P&gt;You can use Search Explorer to view the output, which will look like the following:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
      "@search.score": 1,
      "ID": "",
      "DocText": "the PDF content ",
      "DocName": "docname.pdf",
      "PageNum": 33,
      "DocURL": "https://storageaccoutname.blob.core.windows.net/containername/docname.pdf",
      "ParentKey": "sample key"
    }&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;P&gt;Hope this helps with your requirement of extracting page numbers from PDFs using AI Search.&lt;/P&gt;</description>
      <pubDate>Mon, 11 Aug 2025 09:10:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/finding-the-right-page-number-in-pdfs-with-ai-search/ba-p/4440758</guid>
      <dc:creator>samsarka</dc:creator>
      <dc:date>2025-08-11T09:10:39Z</dc:date>
    </item>
    <item>
      <title>How to enable alerts in Batch especially when a node is encountering high disk usage</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/how-to-enable-alerts-in-batch-especially-when-a-node-is/ba-p/4437428</link>
      <description>&lt;P&gt;Batch users often encounter issues like nodes suddenly gets into unusable state due to high CPU or Disk usage. Alerts allow you to identify and address issues in your system. This blog will focus on how users can enable alerts when the node is consuming high amount of disk by configuring the threshold limit. With this user can get notified beforehand when the node gets into unusable state and pre-emptively takes measures to avoid service disruptions.&lt;/P&gt;
&lt;P&gt;The task output data is written to the file system of the Batch node. When this data exceeds 90 percent of the disk capacity of the node SKU, the Batch service marks the node as unusable and blocks it from running any other tasks until the Batch service performs a clean-up. The Batch node agent reserves 10 percent of the disk space for its own functionality. It's essential to ensure enough free disk space, relative to the node's capacity, before any tasks are scheduled to run.&amp;nbsp;&lt;/P&gt;
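The 90 percent rule above can be expressed as a quick headroom check (a hypothetical helper for illustration, not a Batch API):

```python
# Hypothetical helper, not a Batch API: estimates how much disk a node can
# still use before crossing the 90% point at which the Batch service marks
# it unusable.

def disk_headroom_gib(disk_size_gib, used_gib, unusable_fraction=0.90):
    """Return GiB remaining before the node crosses the unusable threshold."""
    threshold_gib = disk_size_gib * unusable_fraction
    return threshold_gib - used_gib

# Example: on a 128 GiB disk the threshold sits at ~115.2 GiB, so a node
# with 100 GiB used has roughly 15.2 GiB of headroom left.
headroom = disk_headroom_gib(disk_size_gib=128, used_gib=100)
```

A negative result means the node has already crossed the threshold and is at risk of being marked unusable.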
&lt;H4&gt;&lt;U&gt;&lt;STRONG&gt;Best practices to follow to avoid issues with high disk usage in Azure Batch:&lt;/STRONG&gt;&lt;/U&gt;&lt;/H4&gt;
&lt;P&gt;When the node is experiencing high disk usage, as an initial step you can RDP to the node and check where most of the space is consumed. Identify which applications and files are consuming the most disk space and whether they can be deleted.&lt;/P&gt;
&lt;P&gt;A node can experience high disk usage on the &lt;STRONG&gt;OS disk&lt;/STRONG&gt; or the &lt;STRONG&gt;Ephemeral disk&lt;/STRONG&gt;. The ephemeral disk contains all the files related to the task working directory, such as task output files and resource files, whereas the OS disk does not. The default operating system (OS) disk in Azure is usually only 127 GiB, and this cannot be changed. In Batch pools using a custom image, users might need to expand the OS disk when the node's OS disk usage is high.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fvirtual-machines%2Fwindows%2Fexpand-disks&amp;amp;data=05%7C02%7Clakshmijakka%40microsoft.com%7Cb10f0cd1cdaa4891129a08dd36e04850%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638727059203209746%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=2oFpcFrET5G1CT8xYU4c1ypnw7uglxjIikqYV1U81bI%3D&amp;amp;reserved=0" target="_blank"&gt;Expand virtual hard disks attached to a Windows VM in an Azure - Azure Virtual Machines | Microsoft Learn &lt;/A&gt;&lt;/P&gt;
&lt;P&gt;After you have allocated extra disk on the custom image VM, you can create a new pool with the latest image.&lt;/P&gt;
&lt;P&gt;If you want to manually clear files on the node, please refer to&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/troubleshoot/azure/hpc/batch/azure-batch-node-unusable-state#solution-clear-files-in-task-folders" target="_blank"&gt;Azure Batch node gets stuck in the Unusable state because of configuration issues - Azure | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Switch to higher VM SKU&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In some cases, simply creating a new pool with a higher VM SKU than the existing one will suffice to avoid issues with the node.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Save Task data&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;A task should move its output off the node it's running on, and to a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store.&lt;/P&gt;
&lt;P&gt;It is users’ responsibility to ensure the output data is moved to a durable store before the node or job gets deleted.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/batch/batch-task-output-files" target="_blank"&gt;Persist output data to Azure Storage with Batch service API - Azure Batch | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Clear files&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If a&amp;nbsp;&lt;STRONG&gt;retentionTime&lt;/STRONG&gt;&amp;nbsp;is set, Batch automatically cleans up the disk space used by the task when the&amp;nbsp;retentionTime&amp;nbsp;expires. By default, the task directory is retained for 7 days unless the compute node is removed or the job is deleted. This helps ensure that your nodes don't fill up with task data and run out of disk space. Users can set this to a low value to ensure output data is deleted promptly.&lt;/P&gt;
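For tasks submitted directly to Batch, the retentionTime is part of the task's constraints and uses the ISO 8601 duration format. A sketch of a task body (the task ID and command line are placeholders):

```python
# Sketch of a Batch task body that sets retentionTime so task files are
# cleaned up one hour after the task completes. Durations use ISO 8601
# format; the id and commandLine values are placeholders.
import json

task = {
    "id": "mytask",
    "commandLine": "cmd /c echo hello",
    "constraints": {
        # Default retention is 7 days ("P7D"); shorten it to reclaim
        # disk space on the node sooner.
        "retentionTime": "PT1H",
    },
}
body = json.dumps(task)
```

The same constraint can be set through the Batch SDKs when adding a task; the key point is that a short retention window frees the task directory without manual clean-up.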
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In some scenarios, the task is triggered from an ADF pipeline that is integrated with Batch. In that case, a retention time applies to the files submitted for the custom activity, with a default value of 30 days. Users can set this retention time in the custom activity settings of the ADF pipeline.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Now let’s see how to get notified when a Batch node experiences high disk usage.&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;: First, follow the document below to integrate Azure Monitor on the Batch nodes. The Azure Monitor service collects and aggregates metrics and logs from every component of the node.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/blog/AzurePaaSBlog/integrating-azure-monitor-in-azure-batch-to-monitor-batch-pool-nodes-performance/4428929" target="_blank"&gt;Integrating Azure Monitor in Azure Batch to monitor Batch Pool nodes performance | Microsoft Community Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once the AMA is configured, navigate in the portal to the VMSS for which you enabled metrics. Go to the Metrics section and select Virtual Machine Guest from the Metrics Namespace dropdown.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: &lt;/STRONG&gt;From the metrics dropdown, select the performance counter you want to check.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4&lt;/STRONG&gt;: Now navigate to the Alerts section from the menu and create an alert rule.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5:&lt;/STRONG&gt; Select the performance counter for which you want to receive alerts. The example below creates a signal based on the percentage of free space available on the VMSS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6:&lt;/STRONG&gt; Once you select the signal, you will be asked to provide the other details for the alert logic. The snapshot below shows an alert that triggers when the average percentage of free space available on the VMSS instances is less than or equal to 20%. The alert is evaluated every hour and checks the average over the past hour.&lt;/P&gt;
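The evaluation in Step 6 can be sketched as a simple check (illustrative only; Azure Monitor performs this server-side against the collected guest metrics):

```python
# Illustration of the alert logic in Step 6: fire when the average "% Free
# Space" over the evaluation window is at or below the 20% threshold.
# This is not an Azure Monitor API; Azure evaluates the rule server-side.

def should_alert(free_space_samples, threshold_pct=20.0):
    """free_space_samples: % free space readings from the past hour."""
    if not free_space_samples:
        return False  # no data collected in the window
    average = sum(free_space_samples) / len(free_space_samples)
    return average <= threshold_pct

# Readings of 25%, 18%, and 14% average 19%, which triggers the alert.
fires = should_alert([25.0, 18.0, 14.0])
```

Note that the aggregation is over the average, so a single brief dip below 20% does not necessarily fire the alert.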
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 7:&lt;/STRONG&gt; You can proceed with the next steps and configure your email address and Alert rule description to receive notifications.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can refer to the document below for more information on alerts.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-create-metric-alert-rule" target="_blank"&gt;Create Azure Monitor metric alert rules - Azure Monitor | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;In this way users can enable alerts to get notifications based on metrics for their Batch nodes. Below is a sample email alert notification.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 05 Aug 2025 14:11:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/how-to-enable-alerts-in-batch-especially-when-a-node-is/ba-p/4437428</guid>
      <dc:creator>lakshmijakka</dc:creator>
      <dc:date>2025-08-05T14:11:56Z</dc:date>
    </item>
    <item>
      <title>Converting Page or Append Blobs to Block Blobs with ADF</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/converting-page-or-append-blobs-to-block-blobs-with-adf/ba-p/4433723</link>
      <description>&lt;P&gt;In certain scenarios, a storage account may contain a significant number of page blobs classified under the hot access tier that are infrequently accessed or retained solely for backup purposes. To optimise costs, it is desirable to transition these page blobs to the archive tier. However, as indicated in the following documentation -&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview&lt;/A&gt;&lt;U&gt; &lt;/U&gt;the ability to set the access tier is only available for block blobs; this functionality is not supported for append or page blobs.&lt;/P&gt;
&lt;P&gt;The Azure blob storage connector in Azure data factory is capable of copying blobs from block, append, or page blobs and copying data to only block blobs. &lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Note:&lt;/U&gt;&lt;/STRONG&gt; No extra configuration is required to set the blob type on the destination. By default, the ADF copy activity creates blobs as Block Blobs.&lt;/P&gt;
&lt;P&gt;In this blog, we will understand how to make use of Azure Data Factory to copy the page blobs to block blobs. Please note that this is applicable to append blobs as well.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-olk-copy-source="MessageBody"&gt;Let’s take a look at the steps ahead&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-olk-copy-source="MessageBody"&gt;Step 1: Creating ADF instance&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an Azure data factory resource in the Azure portal referring to the following document - &lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;After creation, click on "Launch Studio" as shown below&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Creating datasets&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create two datasets by navigating to Author -&amp;gt; Datasets -&amp;gt; New dataset. These datasets are used in source and sink for the ADF copy activity&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Select "Azure blob storage" -&amp;gt; click on continue -&amp;gt; select "binary" and continue&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: Creating Linked service&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create a new linked service and provide the storage account name which contains page blobs&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Provide the file path where the page blobs are located.&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;You will also need to create another dataset for the destination. Repeat the dataset-creation steps above to create a destination dataset for copying the blobs to the storage account as block blobs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Note:&lt;/U&gt;&lt;/STRONG&gt; You can use same or different storage account for the destination dataset. Set it as per your requirements.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4: Configuring a Copy data pipeline&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once the two datasets are created, now create a new pipeline and under "Move and Transform" section, drag and drop the "Copy data" activity as shown below.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Under the Source and Sink sections from the drop down, select the source and destination datasets respectively which were created in the previous steps. Select the “Recursively” option and publish the changes.
&lt;UL&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;U&gt;Source:&lt;/U&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;U&gt;&lt;STRONG&gt;Sink:&lt;/STRONG&gt;&lt;/U&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Note:&lt;/U&gt;&lt;/STRONG&gt;&amp;nbsp;You can configure the filters and copy behaviour as per your requirements.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5: Debugging and validating&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Now as the configuration is completed, click on "Debug".&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;If the pipeline activity ran successfully, you should be able to see "succeeded" status in the output section as below.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Verify the blob type of the blobs in the destination storage account; it should show as Block blob with the access tier as Hot.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After converting the blobs to block blobs, several methods are available to change their access tier to archive. These include implementing a blob lifecycle management policy, utilizing Storage Actions, or using Azure CLI or PowerShell scripts.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Utilising ADF enables the conversion of page or append blobs to block blobs, after which any standard method such as LCM policy or storage actions may be used to change the access tier to archive. This strategy offers a more streamlined and efficient solution compared to developing custom code or scripts.&lt;/P&gt;
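As a sketch of the scripted route, assuming the azure-storage-blob Python SDK is installed and using a placeholder connection string, the converted blobs could be moved to the archive tier like this (the eligibility check is a hypothetical helper):

```python
# Sketch: archive the converted block blobs. Requires the azure-storage-blob
# package; the connection string and container name are placeholders.

def is_archivable(blob_type, blob_tier):
    """Hypothetical helper: only Hot/Cool block blobs can be archived."""
    return blob_type == "BlockBlob" and blob_tier in ("Hot", "Cool")

def archive_container(conn_str, container):
    # Import inside the function so the pure helper above is usable
    # without the SDK installed.
    from azure.storage.blob import ContainerClient

    client = ContainerClient.from_connection_string(conn_str, container)
    for blob in client.list_blobs():
        if is_archivable(str(blob.blob_type), str(blob.blob_tier)):
            client.get_blob_client(blob.name).set_standard_blob_tier("Archive")
```

A lifecycle management policy is usually preferable for recurring, account-wide tiering; a script like this suits a one-off bulk move after the conversion.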
&lt;P&gt;&lt;STRONG&gt;Reference links:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/archive-blob?tabs=azure-powershell#bulk-archive" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage/blobs/archive-blob?tabs=azure-powershell#bulk-archive&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 01 Aug 2025 10:57:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/converting-page-or-append-blobs-to-block-blobs-with-adf/ba-p/4433723</guid>
      <dc:creator>SaikumarMandepudi</dc:creator>
      <dc:date>2025-08-01T10:57:37Z</dc:date>
    </item>
    <item>
      <title>Rehydrating Archived Blobs via Storage Task Actions</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/rehydrating-archived-blobs-via-storage-task-actions/ba-p/4429282</link>
      <description>&lt;P&gt;Azure Storage Actions is a fully managed platform designed to automate data management tasks for Azure Blob Storage and Azure Data Lake Storage. You can use it to perform common data operations on millions of objects across multiple storage accounts without provisioning extra compute capacity and without requiring you to write code.&lt;/P&gt;
&lt;P&gt;Storage task actions can be used to rehydrate archived blobs to any online tier as required. Please note that there is currently no option to set the rehydration priority; it defaults to Standard.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note :-&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Storage Actions are generally available in the following public regions: &lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Azure Storage Actions is currently in PREVIEW in the following regions. Please refer to: &lt;A href="https://learn.microsoft.com/en-us/azure/storage-actions/overview#regions-supported-at-the-preview-level" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/storage-actions/overview#regions-supported-at-the-preview-level&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Below are the steps to achieve the rehydration :-&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;H5&gt;&lt;STRONG&gt;Create a Task :-&lt;/STRONG&gt;&lt;/H5&gt;
&lt;OL&gt;
&lt;LI&gt;In the Azure portal, search for Storage tasks. Under Services, select Storage tasks - Azure Storage Actions.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;On the&amp;nbsp;&lt;STRONG&gt;Azure Storage Actions | Storage Tasks&lt;/STRONG&gt;&amp;nbsp;page, select&amp;nbsp;&lt;STRONG&gt;Create&lt;BR /&gt;&lt;/STRONG&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Fill in all the required details and click on &lt;STRONG&gt;Next&lt;/STRONG&gt; to open the Conditions page.&lt;/LI&gt;
&lt;LI&gt;Add the conditions as below if you want to rehydrate to Cool tier.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;H5&gt;&lt;STRONG&gt;Add the Assignment :-&lt;/STRONG&gt;&lt;/H5&gt;
&lt;OL&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Add assignment.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG&gt;Select scope&lt;/STRONG&gt; section, select your subscription and storage account and name the assignment.&lt;/LI&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG&gt;Role assignment&lt;/STRONG&gt;&amp;nbsp;section, in the&amp;nbsp;&lt;STRONG&gt;Role&lt;/STRONG&gt;&amp;nbsp;drop-down list, select the&amp;nbsp;&lt;STRONG&gt;Storage Blob Data Owner&lt;/STRONG&gt; to assign that role to the system-assigned managed identity of the storage task.&lt;/LI&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG&gt;Filter objects&lt;/STRONG&gt; section, specify the filter if you want this to run on some specific objects or the whole storage account.&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;In the&amp;nbsp;&lt;STRONG&gt;Trigger details&lt;/STRONG&gt; section, choose how often the task runs, and then select the container where you'd like to store the execution reports.&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;Add&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;In the &lt;STRONG&gt;Tags&lt;/STRONG&gt; tab, select &lt;STRONG&gt;Next&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;In the &lt;STRONG&gt;Review + Create&lt;/STRONG&gt; tab, &lt;STRONG&gt;select Review + create&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;When the task is deployed, the &lt;STRONG&gt;Your deployment is complete&lt;/STRONG&gt; page appears.&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;Go to resource&lt;/STRONG&gt; to open the &lt;STRONG&gt;Overview page&lt;/STRONG&gt; of the storage task.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;H5&gt;&lt;STRONG&gt;Enable the Task Assignment :-&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Storage task assignments are disabled by default. Enable assignments from the&amp;nbsp;&lt;STRONG&gt;Assignments&lt;/STRONG&gt; page.&lt;/P&gt;
&lt;BR /&gt;
&lt;OL&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Assignments&lt;/STRONG&gt;, select the assignment, and then select&amp;nbsp;&lt;STRONG&gt;Enable&lt;/STRONG&gt;.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The task assignment is queued to run and will run at the specified time.&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;H5&gt;&lt;STRONG&gt;Monitoring the runs :-&lt;/STRONG&gt;&lt;/H5&gt;
After the task completes running, you can view the results of the run.&lt;BR /&gt;
&lt;OL&gt;
&lt;LI&gt;With the&amp;nbsp;&lt;STRONG&gt;Assignments&lt;/STRONG&gt;&amp;nbsp;page still open, select&amp;nbsp;&lt;STRONG&gt;View task runs&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Select the&amp;nbsp;&lt;STRONG&gt;View report&lt;/STRONG&gt; link to download a report.&lt;BR /&gt;You can also view these comma-separated reports in the container that you specified when you configured the assignment.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
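Standard-priority rehydration can take up to 15 hours, so it is useful to check progress between task runs. A sketch assuming the azure-storage-blob Python SDK; the helper names are hypothetical:

```python
# Sketch: classify a blob's rehydration state from its properties. A blob
# being rehydrated reports an archive status such as
# "rehydrate-pending-to-cool" until the move completes.

def describe_rehydration(access_tier, archive_status):
    """Hypothetical helper: summarise a blob's rehydration state."""
    if archive_status and archive_status.startswith("rehydrate-pending"):
        return "rehydration in progress"
    if access_tier == "Archive":
        return "still archived"
    return f"online in {access_tier} tier"

def check_blob(conn_str, container, name):
    # Requires azure-storage-blob; connection string is a placeholder.
    from azure.storage.blob import BlobClient

    blob = BlobClient.from_connection_string(conn_str, container, name)
    props = blob.get_blob_properties()
    return describe_rehydration(str(props.blob_tier), props.archive_status)
```

The execution report produced by the task assignment lists the blobs it touched; a check like this confirms when they have actually landed in the target tier.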
&lt;H6&gt;&lt;STRONG&gt;Reference Links :-&lt;/STRONG&gt;&lt;/H6&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage-actions%2Foverview&amp;amp;data=05%7C02%7Cankitsah%40microsoft.com%7C09d1fc4fed9240532a5b08ddb957ba75%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638870508625797507%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=b5obgvCr5ksIWs9x7Z3azY9fYhbH9OkpGFrZ6YeO1js%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;About Azure Storage Actions - Azure Storage Actions | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage-actions%2Fstorage-tasks%2Fstorage-task-best-practices&amp;amp;data=05%7C02%7Cankitsah%40microsoft.com%7C09d1fc4fed9240532a5b08ddb957ba75%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638870508625816690%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=%2B08AQNo4KSTdZguHWpAcepcxU6tTIKJUAS9nFd3GP0I%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Storage task best practices - Azure Storage Actions | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage-actions%2Fstorage-tasks%2Fstorage-task-known-issues&amp;amp;data=05%7C02%7Cankitsah%40microsoft.com%7C09d1fc4fed9240532a5b08ddb957ba75%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638870508625830066%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=%2BC2rkfBNAvumSVUtBXcGgUHdGox2n3cIwmwG2PvQK%2Fo%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Known issues and limitations with storage tasks - Azure Storage Actions | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 09 Jul 2025 06:12:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/rehydrating-archived-blobs-via-storage-task-actions/ba-p/4429282</guid>
      <dc:creator>ankitsah</dc:creator>
      <dc:date>2025-07-09T06:12:02Z</dc:date>
    </item>
    <item>
      <title>Integrating Azure Monitor in Azure Batch to monitor Batch Pool nodes performance</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/integrating-azure-monitor-in-azure-batch-to-monitor-batch-pool/ba-p/4428929</link>
      <description>&lt;P&gt;In Azure Batch, to monitor the node performance like CPU or Disk usage users are required to use Azure monitor. &amp;nbsp;The Azure Monitor service collects and aggregates metrics and logs from every component of the node. Azure Monitor provides you with a view of availability, performance, and resilience. When you create an Azure Batch pool, you can install any of the following monitoring-related extensions on the compute nodes to collect and analyse data.&lt;/P&gt;
&lt;P&gt;Previously, users leveraged &lt;STRONG&gt;Batch Insights&lt;/STRONG&gt; to get system statistics for Azure Batch account nodes, but it is now deprecated and no longer supported.&lt;/P&gt;
&lt;P&gt;The &lt;STRONG&gt;Log Analytics agent&lt;/STRONG&gt; virtual machine (VM) extension installs the Log Analytics agent on Azure VMs and enrols VMs into an existing Log Analytics workspace. The Log Analytics agent is on a&amp;nbsp;&lt;STRONG&gt;deprecation path&lt;/STRONG&gt;&amp;nbsp;and won't be supported after&amp;nbsp;&lt;STRONG&gt;August 31, 2024&lt;/STRONG&gt;.&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/agents/azure-monitor-agent-migration" target="_blank" rel="noopener"&gt;Migrate to Azure Monitor Agent from Log Analytics agent - Azure Monitor | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Monitor Agent&lt;/STRONG&gt; (AMA) now replaces the Log Analytics agent.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal" target="_blank" rel="noopener"&gt;Install and Manage the Azure Monitor Agent - Azure Monitor | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-8"&gt;Important!&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Currently, Azure Monitor in Batch pools is supported only for Batch accounts created with the pool allocation mode set to User Subscription. Batch accounts created with the pool allocation mode set to Batch Service are not supported: in Batch Service mode, nodes are created in Azure subscriptions that users do not have access to, so enabling data collection for those nodes is not possible.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This article focuses on how to install and configure the&amp;nbsp;Azure Monitor Agent&amp;nbsp;extension on Azure Batch pool nodes.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Note&lt;/U&gt;&lt;/STRONG&gt;: Extensions cannot be added to an existing pool; pools must be recreated to add, remove, or update extensions. Currently, creating a Batch pool with a user-assigned managed identity and an extension is &lt;STRONG&gt;only&amp;nbsp;supported through an ARM template or a REST API call&lt;/STRONG&gt;. Creating a pool with an extension is unsupported in the Azure portal, and creating a pool with a user-assigned managed identity is unsupported in the Az PowerShell module and Azure CLI.&lt;/P&gt;
&lt;P&gt;To use the templates below, complete the following prerequisites:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create a user-assigned managed identity. A managed identity is required for the Azure Monitor Agent to collect and publish data.&lt;/LI&gt;
&lt;LI&gt;Configure data collection for the Azure Monitor Agent by deploying data collection rules (DCRs) and their associations, for example with a Resource Manager template.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1:&lt;/STRONG&gt; Create a pool with AMA extension&lt;/P&gt;
&lt;P&gt;Below is a sample JSON template to create a pool with the AMA extension enabled for Windows Server.&lt;/P&gt;
&lt;PRE&gt;{
  "name": "poolextmon",
  "type": "Microsoft.Batch/batchAccounts/pools",
  "properties": {
    "allocationState": "Steady",
    "vmSize": "STANDARD_D2S_V3",
    "interNodeCommunication": "Disabled",
    "taskSlotsPerNode": 1,
    "taskSchedulingPolicy": {
      "nodeFillType": "Pack"
    },
    "deploymentConfiguration": {
      "virtualMachineConfiguration": {
        "imageReference": {
          "publisher": "microsoftwindowsserver",
          "offer": "windowsserver",
          "sku": "2019-datacenter",
          "version": "latest"
        },
        "nodeAgentSkuId": "batch.node.windows amd64",
        "extensions": [
          {
            "name": "AzureMonitorAgent",
            "publisher": "Microsoft.Azure.Monitor",
            "type": "AzureMonitorWindowsAgent",
            "typeHandlerVersion": "1.0",
            "autoUpgradeMinorVersion": true,
            "enableAutomaticUpgrade": true,
            "settings": {
              "authentication": {
                "managedIdentity": {
                  "identifier-name": "mi_res_id",
                  "identifier-value": "/subscriptions/xxxxx/resourceGroups/r-xxxx/providers/Microsoft.ManagedIdentity/userAssignedIdentities/usmi"
                }
              }
            }
          }
        ]
      }
    },
    "scaleSettings": {
      "fixedScale": {
        "targetDedicatedNodes": 1,
        "targetLowPriorityNodes": 0,
        "resizeTimeout": "PT15M"
      }
    },
    "currentDedicatedNodes": 1,
    "currentLowPriorityNodes": 0,
    "targetNodeCommunicationMode": "Default",
    "currentNodeCommunicationMode": "Simplified"
  }
}&lt;/PRE&gt;
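&lt;P&gt;The template above can be submitted as the body of a PUT request to the Batch management REST API. Below is a minimal Python sketch of how the request URL and the extension block could be assembled; the subscription, resource group, account, pool, and identity names are hypothetical placeholders, and the api-version is an assumption you should verify against the current Batch management API reference.&lt;/P&gt;

```python
import json

# Hypothetical placeholder identifiers -- replace with your own values.
SUB = "00000000-0000-0000-0000-000000000000"
RG = "my-rg"
ACCOUNT = "mybatchaccount"
POOL = "poolextmon"

# Management REST endpoint for creating or updating a Batch pool.
# The api-version below is an assumption; check the Batch management docs.
url = ("https://management.azure.com/subscriptions/" + SUB
       + "/resourceGroups/" + RG
       + "/providers/Microsoft.Batch/batchAccounts/" + ACCOUNT
       + "/pools/" + POOL + "?api-version=2024-07-01")

# The AMA extension entry from the template above, built as a dict so it
# can be validated before being merged into the full pool definition.
ama_extension = {
    "name": "AzureMonitorAgent",
    "publisher": "Microsoft.Azure.Monitor",
    "type": "AzureMonitorWindowsAgent",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": True,
    "enableAutomaticUpgrade": True,
    "settings": {
        "authentication": {
            "managedIdentity": {
                "identifier-name": "mi_res_id",
                "identifier-value": "/subscriptions/" + SUB
                + "/resourceGroups/" + RG
                + "/providers/Microsoft.ManagedIdentity"
                + "/userAssignedIdentities/usmi",
            }
        }
    },
}

body = json.dumps({"properties": {"deploymentConfiguration": {
    "virtualMachineConfiguration": {"extensions": [ama_extension]}}}}, indent=2)
print(url)
```

&lt;P&gt;Sending the request (with a bearer token) can then be done with any HTTP client; the point of the sketch is the shape of the URL and payload, not the transport.&lt;/P&gt;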
&lt;P&gt;Below is a sample JSON template to create a pool with the AMA extension enabled for a Linux server.&lt;/P&gt;
&lt;PRE&gt;{
  "name": "poolextmon",
  "type": "Microsoft.Batch/batchAccounts/pools",
  "properties": {
    "allocationState": "Steady",
    "vmSize": "STANDARD_D2S_V3",
    "interNodeCommunication": "Disabled",
    "taskSlotsPerNode": 1,
    "taskSchedulingPolicy": {
      "nodeFillType": "Pack"
    },
    "deploymentConfiguration": {
      "virtualMachineConfiguration": {
        "imageReference": {
          "publisher": "canonical",
          "offer": "0001-com-ubuntu-server-jammy",
          "sku": "22_04-lts",
          "version": "latest"
        },
        "nodeAgentSkuId": "batch.node.ubuntu 22.04",
        "extensions": [
          {
            "name": "AzureMonitorAgent",
            "publisher": "Microsoft.Azure.Monitor",
            "type": "AzureMonitorLinuxAgent",
            "typeHandlerVersion": "1.0",
            "autoUpgradeMinorVersion": true,
            "enableAutomaticUpgrade": true,
            "settings": {
              "authentication": {
                "managedIdentity": {
                  "identifier-name": "mi_res_id",
                  "identifier-value": "/subscriptions/xxxxxx/resourceGroups/r-xxx/providers/Microsoft.ManagedIdentity/userAssignedIdentities/usmi"
                }
              }
            }
          }
        ]
      }
    },
    "scaleSettings": {
      "fixedScale": {
        "targetDedicatedNodes": 1,
        "targetLowPriorityNodes": 0,
        "resizeTimeout": "PT15M"
      }
    },
    "currentDedicatedNodes": 1,
    "currentLowPriorityNodes": 0,
    "targetNodeCommunicationMode": "Default",
    "currentNodeCommunicationMode": "Simplified"
  }
}&lt;/PRE&gt;
&lt;P&gt;Once the pool is created, you can verify from the portal that the extension is installed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2:&lt;/STRONG&gt; Create a Log Analytics workspace&lt;/P&gt;
&lt;P&gt;You need a Log Analytics workspace to which the collected data will be sent.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3:&lt;/STRONG&gt; Create a data collection rule (DCR)&lt;/P&gt;
&lt;P&gt;Create a DCR in the Azure portal by following the document below, which explains how to create a DCR, the types of data you can collect from a VM client with Azure Monitor, and where you can send that data.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/vm/data-collection#create-a-data-collection-rule" target="_blank" rel="noopener"&gt;Collect data from virtual machine client with Azure Monitor - Azure Monitor | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Once the VM is associated with the DCR, you can check the connected computers in the Log Analytics workspace.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;To verify that the agent is operational and communicating properly with Azure Monitor, check the Heartbeat table for the VM.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;To verify that data is being collected in the Log Analytics workspace, check for records in the Perf table.&lt;/P&gt;
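&lt;P&gt;The heartbeat and Perf checks described above are ordinary Log Analytics (KQL) queries. The sketch below simply holds two such queries as strings you can paste into the workspace’s Logs blade; Heartbeat and Perf are the standard Azure Monitor table names, while the time ranges are illustrative.&lt;/P&gt;

```python
# Heartbeat: confirms the agent is alive and reporting, per computer.
heartbeat_query = """Heartbeat
| where TimeGenerated > ago(24h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer"""

# Perf: confirms performance-counter records are flowing in.
perf_query = """Perf
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by Computer, CounterName"""

for query in (heartbeat_query, perf_query):
    print(query, end="\n\n")
```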
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To verify that data is being collected in Azure Monitor Metrics, select&amp;nbsp;&lt;STRONG&gt;Metrics&lt;/STRONG&gt; from the virtual machine in the Azure portal. Navigate to the VMSS from the portal, select &lt;STRONG&gt;Virtual Machine Guest&lt;/STRONG&gt;&amp;nbsp;(Windows) or&amp;nbsp;&lt;STRONG&gt;azure.vm.linux.guestmetrics&lt;/STRONG&gt; (Linux) as the namespace, and then select a metric to add to the view.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 03 Jul 2025 08:25:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/integrating-azure-monitor-in-azure-batch-to-monitor-batch-pool/ba-p/4428929</guid>
      <dc:creator>lakshmijakka</dc:creator>
      <dc:date>2025-07-03T08:25:59Z</dc:date>
    </item>
    <item>
      <title>Lifecycle Management of Blobs (Deletion) using Automation Tasks</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/lifecycle-management-of-blobs-deletion-using-automation-tasks/ba-p/4401441</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Background&lt;/U&gt;&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;We often encounter scenarios where we need to delete blobs that have been idle in a storage account for an extended period. For a small number of blobs, deletion can be handled easily using the Azure Portal, Storage Explorer, or inline scripts such as PowerShell or Azure CLI.&lt;/P&gt;
&lt;P&gt;However, in most cases, we deal with a large volume of blobs, making manual deletion impractical. In such situations, it's essential to leverage automation tools to streamline the deletion process. One effective option is using&amp;nbsp;&lt;STRONG&gt;Automation Tasks&lt;/STRONG&gt;, which can help schedule and manage blob deletions efficiently.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: Behind the scenes, an automation task is actually a logic app resource that runs a workflow, so the Logic Apps Consumption pricing model applies to automation tasks.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenarios where “Automation Tasks” are helpful&lt;/STRONG&gt;:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;You need to automate deletion of blobs that are older than a specific age, in days, weeks, or months.&lt;/LI&gt;
&lt;LI&gt;You don’t want to put in much manual effort, preferring simple UI-based steps to achieve your goal.&lt;/LI&gt;
&lt;LI&gt;You have system containers and want to act on them.&lt;BR /&gt;“&lt;STRONG&gt;LCM&lt;/STRONG&gt; (Lifecycle Management)” can also be leveraged to automate deletion of older blobs; however, LCM cannot be used to delete blobs from system containers.&lt;/LI&gt;
&lt;LI&gt;You have to work with page blobs.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Setup “Automation Tasks”&lt;/U&gt;&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;Let’s walk through on how to achieve our goal.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to the desired storage account and scroll down to the “&lt;STRONG&gt;Automation&lt;/STRONG&gt;” section and select the “&lt;STRONG&gt;Tasks&lt;/STRONG&gt;” blade and then click on “&lt;STRONG&gt;Add Task&lt;/STRONG&gt;” from the top panel or bottom panel (highlighted in image).&lt;/LI&gt;
&lt;/OL&gt;
&lt;img /&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;On the next page, click “&lt;STRONG&gt;Select&lt;/STRONG&gt;” (highlighted in image).&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;The page that opens should look as below. There is nothing to change here, so click “&lt;STRONG&gt;Next : Configure&lt;/STRONG&gt;” (highlighted in image) and move to the next screen.&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;The page that opens next needs to be filled in as per your requirements. I have added a sample; you can use your own containers as well.
&lt;UL&gt;
&lt;LI&gt;'sample' is a folder inside container '$web'&lt;img /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;The “&lt;STRONG&gt;Expiration Age&lt;/STRONG&gt;” field means that Blobs older than these number of days needs be deleted. In above screenshot, blobs older than 180 days would be deleted.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Similarly, we can configure values in weeks or months as well.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Once we are through with the steps proceed with creation of the task.&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Once the task is created, it looks as below:&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;You can click “&lt;STRONG&gt;View Run&lt;/STRONG&gt;” to see the run history.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;In case you want to modify the task, click on your task’s name. For example, in the above screenshot I can click the “&lt;STRONG&gt;mytask&lt;/STRONG&gt;” link and re-configure the task.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This alone isn’t sufficient. We need to update some of the steps in the underlying logic app, editing and saving them before re-running the app.&lt;/P&gt;
&lt;P&gt;a) Go to the logic app and navigate to the “Logic App Designer” blade.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;b) Now click the “+” sign as shown below and choose “Add an Action”.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;c) Once the new page opens up, search for “List Blobs (v2)” and select it&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;d) Choose the “Enter custom value” and enter your storage account name&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;e) The values would look as shown below.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;f) Now let's navigate to the “For Each” condition&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;g) We also need to delete the “Delete blob” action and replace it with “Delete blob (V2)”.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;h) The “Delete Blob (V2)” action looks as below.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;i) With all steps ready, let’s save the logic app and click “Run”. You should observe the run passing successfully.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Impact due to Firewall&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;The above steps work when your storage account is configured for public access.&lt;/P&gt;
&lt;P&gt;However, when the firewall is enabled, you need to provide the necessary permissions; otherwise you will encounter 403 “Authorization Failure” errors. The task will be created without issue, but you will see failures when you check the runs.&lt;/P&gt;
&lt;P&gt;To overcome this limitation, navigate to your logic app, enable a managed identity for the app, and grant that identity the “Storage Blob Data Contributor” role.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;. Enable Managed Identity:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;In Azure Portal, go to your Logic App resource.&lt;/LI&gt;
&lt;LI&gt;Under Settings, select Identity.&lt;/LI&gt;
&lt;LI&gt;In the Identity pane, under System assigned, select On and Save.&lt;/LI&gt;
&lt;LI&gt;This step registers the system-assigned identity with Microsoft Entra ID, represented by an object ID.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;. Assign the Necessary Role:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Open the Azure Storage Account in Azure Portal.&lt;/LI&gt;
&lt;LI&gt;Select Access control (IAM) &amp;gt; Add &amp;gt; Add role assignment.&lt;/LI&gt;
&lt;LI&gt;Assign a role like 'Storage Blob Data Contributor', which includes write access for blobs in an Azure Storage container, to the managed identity.&lt;/LI&gt;
&lt;LI&gt;Under Assign access to, select Managed identity &amp;gt; Add members, and choose your Logic App's identity.&lt;/LI&gt;
&lt;LI&gt;Save and refresh, and you will see the new role configured on your storage account.&lt;/LI&gt;
&lt;/UL&gt;
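&lt;P&gt;The role assignment from Step 2 can also be made programmatically with a PUT to the Authorization REST API. Below is a sketch with hypothetical subscription and storage account placeholders; the role definition ID shown is the well-known built-in ID for “Storage Blob Data Contributor”, which you should verify against the Azure built-in roles reference.&lt;/P&gt;

```python
import uuid

# Hypothetical placeholders -- replace with your own values.
SUB = "00000000-0000-0000-0000-000000000000"
SCOPE = ("/subscriptions/" + SUB + "/resourceGroups/my-rg/providers/"
         "Microsoft.Storage/storageAccounts/mystorageaccount")
# Built-in role definition ID for "Storage Blob Data Contributor".
ROLE_ID = "ba92f5b4-2d11-453d-a403-e96b0029c9fe"
PRINCIPAL_ID = "11111111-1111-1111-1111-111111111111"  # logic app identity

# Role assignment names are new GUIDs; the assignment is scoped to the
# storage account so the identity gets access only where it is needed.
assignment_name = str(uuid.uuid4())
url = ("https://management.azure.com" + SCOPE
       + "/providers/Microsoft.Authorization/roleAssignments/"
       + assignment_name + "?api-version=2022-04-01")
body = {
    "properties": {
        "roleDefinitionId": "/subscriptions/" + SUB
        + "/providers/Microsoft.Authorization/roleDefinitions/" + ROLE_ID,
        "principalId": PRINCIPAL_ID,
        "principalType": "ServicePrincipal",
    }
}
print(url)
```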
&lt;P&gt;Remember that if the storage account and logic app are in different regions, you should add another step in the storage account firewall: allow the logic app instance in the “Resource instances” list, as shown below:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Conclusion&lt;/U&gt;&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;Multiple ways to act on blobs are available for your convenience. Your requirements, feasibility, and other factors such as familiarity with the feature and pricing will certainly influence your decision.&lt;/P&gt;
&lt;P&gt;However, in case you want to act upon system containers like $logs or $web, “&lt;STRONG&gt;Automation Tasks&lt;/STRONG&gt;” are one of the most helpful features you can use to achieve your goal.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: At the time of writing, this feature is still in preview, so check for any limitations that might impact you before implementing it in your production environment.&lt;/P&gt;
&lt;P&gt;References:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/logic-apps/create-automation-tasks-azure-resources" target="_blank" rel="noopener"&gt;Create automation tasks to manage and monitor Azure resources - Azure Logic Apps | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview" target="_blank" rel="noopener"&gt;Optimize costs by automatically managing the data lifecycle - Azure Blob Storage | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 22 May 2025 04:17:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/lifecycle-management-of-blobs-deletion-using-automation-tasks/ba-p/4401441</guid>
      <dc:creator>thakurmishra</dc:creator>
      <dc:date>2025-05-22T04:17:03Z</dc:date>
    </item>
    <item>
      <title>[AI Search] LockedSPLResourceFound error when deleting AI Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/ai-search-lockedsplresourcefound-error-when-deleting-ai-search/ba-p/4415849</link>
      <description>&lt;P&gt;Are you unable to delete AI Search with the following error?&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;LockedSPLResourceFound :Unable to verify management locks on Resource '$Resource_Path '. If you still want to delete the search service, manually delete the SPL resource first and try again.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;If you are, this is the right place to find a quick resolution! Keep reading.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[Solution – Delete the Shared Private Link]&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The error message will appear if you still have Shared Private Link configured in AI Search. AI Search will not let you delete the resource unless you delete the Shared Private Link first.&lt;/P&gt;
&lt;P&gt;You must delete the Shared Private Link in the Portal manually.&lt;BR /&gt;Move to Settings &amp;gt; Networking &amp;gt; Shared private access tab. &amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the Shared Private Links are all deleted, try deleting the AI Search resource again.&lt;/P&gt;
&lt;P&gt;Allow at least 15 minutes for the Shared Private Links to be deleted completely, as it may take a while.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[Extra - I tried to delete Shared Private Links but it’s been pending for a long while] &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;There are occasions where your Shared Private Links remain in a deleting state for a long time, as below (for example, three hours or more).&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In this case, please open a case to our Support team mentioning you are having an issue with deleting AI Search due to Shared Private Link not being deleted properly. Our team will take care of the issue from that point on!&lt;/P&gt;</description>
      <pubDate>Wed, 21 May 2025 05:59:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/ai-search-lockedsplresourcefound-error-when-deleting-ai-search/ba-p/4415849</guid>
      <dc:creator>SungGun_Lee</dc:creator>
      <dc:date>2025-05-21T05:59:13Z</dc:date>
    </item>
    <item>
      <title>Introducing Azure SRE Agent</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/introducing-azure-sre-agent/ba-p/4414569</link>
      <description>&lt;P&gt;Today we’re thrilled to introduce Azure SRE Agent, an AI-powered tool that makes it easier to sustain production cloud environments. SRE Agent helps respond to incidents quickly and effectively, alleviating the toil of managing production environments. Overall, it results in better service uptime and reduced operational costs. SRE Agent leverages the reasoning capabilities of LLMs to identify the logs and metrics necessary for rapid root cause analysis and issue mitigation. Its advanced AI capabilities transform incident and infrastructure management in Azure, freeing engineers to focus on more meaningful work.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To sign up for the SRE Agent preview, click&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A class="lia-external-url" style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="http://aka.ms/sreagent" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;div data-video-id="https://youtu.be/teVBlz3UTg0/1747415889310" data-video-remote-vid="https://youtu.be/teVBlz3UTg0/1747415889310" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FteVBlz3UTg0%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DteVBlz3UTg0&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FteVBlz3UTg0%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;As more companies move their services online, site reliability engineering (SRE) has become crucial to keeping critical systems reliable, scalable, and cost-efficient. But SRE isn't just about fixing problems—it’s about bridging the gap between business goals and developer needs. With growing infrastructure complexity, it’s harder than ever to keep everything running smoothly while anticipating future scalability and reliability needs.&lt;/P&gt;
&lt;P&gt;We’ve heard from SREs that they experience significant toil from repetitive live-site incident handling and log analysis tasks, with ad hoc administrative tasks disrupting their workflow. Responding to incidents is stressful, as seconds matter and there’s little room for error.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;SRE Agent brings the years of experience Microsoft teams gathered running the Azure cloud to your team.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;SRE Agent is a new Azure service that gives site reliability engineers (SREs) and developers the tools they need to increase the speed and efficiency of incident responses, diagnostics and collaboration to resolve problems rapidly. It is seamlessly integrated with other observability and incident management tools, as well as the &lt;A href="https://aka.ms/Build25/HeroBlog/agenticDevOps" target="_blank" rel="noopener"&gt;new coding agent&lt;/A&gt; in GitHub Copilot. It runs in the background 24x7, learning and monitoring the health and performance of Azure resources, handling production alerts, and partnering in incident investigations and root cause analysis (RCA) to mitigate issues faster.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Key Capabilities&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;SRE Agent can help make your infrastructure more secure, resilient and scalable and can help detect and respond to incidents more quickly.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Evaluating usage and performance trends&amp;nbsp;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;SRE Agent continuously learns about your Azure resources to build relevant context about them without the hassle of multiple tools. You can ask questions to learn about their properties, configuration, and recent changes. And you can learn about their health and performance by visualizing relevant metrics. This allows developers to quickly identify anomalies or trends that need attention.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example Prompts:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;EM&gt;What changed to my app in last day?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;When was the last slot swap performed on my app?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;What alerts should I set up for my web app?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Can you give me the overall AKS cluster usage?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;What are best practices that I should setup for my app?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Visualize requests and 500 errors for last week for my app&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Proactive detection and remediation of security vulnerabilities&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;SRE Agent continuously audits Azure resources to ensure compliance with security best practices. Currently, it checks for the use of supported TLS versions and verifies if resources have Managed Identity enabled. SRE Agent not only identifies the potential vulnerabilities but can also perform the necessary operations to update resources with your approval, bringing them into compliance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Automated incident response and Faster root cause analysis&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;SRE Agent can respond to Azure Monitor alerts out of the box. You can also integrate with Incident Management tools such as &lt;STRONG&gt;PagerDuty&lt;/STRONG&gt; to extend its alert-handling capabilities. With this integration, SRE Agent can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Start investigation upon alert detection.&lt;/LI&gt;
&lt;LI&gt;Access metrics, activity logs, dependencies, and dashboards to form hypotheses and determine the root cause.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Traditional RCA methods can take &lt;STRONG&gt;hours&lt;/STRONG&gt;, while the SRE Agent completes them in &lt;STRONG&gt;minutes&lt;/STRONG&gt;, minimizing impact and driving faster resolutions.&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Incident mitigation&amp;nbsp;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;To mitigate incidents and bring an application back to operational state, SRE Agent can perform operations on behalf of and with approval of the user. These operations can include scaling up resources, restarting applications, and performing rollbacks to a previously working version of the app.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Close the loop with developers&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Once an investigation is complete, SRE Agent creates a GitHub issue including all the details from the investigation, helping developers fix the source code and prevent subsequent recurrences of an incident.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Share your thoughts&amp;nbsp;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;We would love to hear your feedback. You can sign up for preview access to SRE Agent using the link in the resources below. Use the “Thumbs Up” or “Thumbs Down” buttons to share your thoughts on the agent’s responses. You can also share feedback on the product via the &lt;STRONG&gt;“give us feedback”&lt;/STRONG&gt; button in the top right corner inside SRE Agent. Your input is invaluable to us as we strive to improve and support you on your Azure journey.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Additional resources&lt;/STRONG&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://aka.ms/sreagent" target="_blank" rel="noopener"&gt;signup&lt;/A&gt; for preview access&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://build.microsoft.com/en-US/sessions/BRK201?source=sessions" target="_blank" rel="noopener"&gt;Build session&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://techcommunity.microsoft.com/aka.ms/Build25/HeroBlog/agenticDevOps" target="_blank" rel="noopener"&gt;Agentic DevOps Blog post&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;A class="lia-external-url" href="https://aka.ms/sreagent/pricing/blog" target="_blank" rel="noopener"&gt;Pricing announcement&lt;/A&gt; (billing starts September 1, 2025)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 18 Nov 2025 06:09:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/introducing-azure-sre-agent/ba-p/4414569</guid>
      <dc:creator>dchelupati</dc:creator>
      <dc:date>2025-11-18T06:09:11Z</dc:date>
    </item>
    <item>
      <title>Custom Tracing in API Management</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/custom-tracing-in-api-management/ba-p/4260472</link>
      <description>&lt;P style="margin: 0in; font-family: SegoeUI; font-size: 13.5pt; color: #3366ff;"&gt;&lt;STRONG&gt;Scenario:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When you encounter an error in API Management, request tracing is an invaluable feature that serves as a debugger. It lets you track the flow of a request as it passes through the various policy logic, providing detailed insight into the complete API Management (APIM) processing. Here is a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-api-inspector" target="_blank" rel="noopener"&gt;link &lt;/A&gt;if you would like to read more on how to enable request tracing in API Management.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Although request tracing is the most common way to debug an API, consider a real-life scenario where you encounter a sporadic error or unexpected response while processing live APIM calls and need to drill into the issue. In such cases, attaching a debugger or running request traces can be challenging, especially when the issue is intermittent or requires checking specific code logic. This often necessitates a trial-and-error approach to reproduce the scenario and obtain the traces. To address these challenges, the APIM trace policy can be used. This policy enables the addition of custom traces to Application Insights and/or to resource logs.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This blog shows how the APIM trace policy can be leveraged to troubleshoot such issues. The following snippet demonstrates how to use traces with Application Insights.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 10.5pt; color: #333333;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: SegoeUI; font-size: 13.5pt; color: #3366ff;"&gt;&lt;STRONG&gt;Pre-Requirements:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Before we start, make sure you have an existing Azure API Management service and an Application Insights instance set up for logging these requests.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/api-management/get-started-create-service-instance" target="_blank" rel="noopener"&gt;How to create APIM service&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-app-insights?tabs=rest" target="_blank" rel="noopener"&gt;How to log request using App Insight&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P style="margin: 0in; font-family: SegoeUI; font-size: 13.5pt; color: #3366ff;"&gt;&lt;STRONG&gt;Steps&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;Once your APIM setup is complete, make sure the diagnostic settings are correctly configured and logging your API calls. You can verify this by going to the Application Insights instance and running queries against the "requests" table.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Now let's consider a scenario where we need to check the value of remaining calls, which is set based on certain input parameters (or any other logic in your policy code). To do this, go to APIs &amp;gt; Design &amp;gt; "Your API".&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;I did setup the verbosity as Verbose here to log the APIM request to App insight and we will be using the trace policy code to add custom trace information to the request tracing output as shown below.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;As an example, we will be capturing the traces for the selected API only. The below example is for illustration purpose only here we are trying to log remaining calls under trace variable "&lt;EM&gt;callremain&lt;/EM&gt;". We will be leveraging rate limit policy to check number of calls left and adding a custom trace information. Here is a policy code for the same
&lt;P style="margin: 0in; margin-left: .375in; font-family: Consolas; font-size: 10.5pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;LI-CODE lang="xml"&gt;&amp;lt;choose&amp;gt;
	&amp;lt;when condition="@(((string)context.Request.Headers.GetValueOrDefault("x-env","none")) =="test")"&amp;gt;
		&amp;lt;rate-limit-by-key calls="5" renewal-period="30" counter-key="@(context.Request.IpAddress)" 
			increment-condition="@(context.Response.StatusCode == 200)" remaining-calls-variable-name="remainingCallsPerIP" /&amp;gt;
		
		&amp;lt;trace source="remainingCallsPerIP" severity="verbose"&amp;gt;
		&amp;lt;message&amp;gt;@(String.Format("{0} | {1}", "ip address", context.Request.IpAddress))&amp;lt;/message&amp;gt;
		&amp;lt;metadata name="callremain" value="@{
			var calls = context.Variables.GetValueOrDefault&amp;lt;Int32&amp;gt;("remainingCallsPerIP");
				return calls.ToString();
			}" /&amp;gt;
        &amp;lt;/trace&amp;gt;

     &amp;lt;/when&amp;gt;
     &amp;lt;otherwise&amp;gt;
         &amp;lt;rate-limit calls="10" renewal-period="20" /&amp;gt;
     &amp;lt;/otherwise&amp;gt;
&amp;lt;/choose&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;Once you save the above policy code run the API calls using header x-env the rate limit policy only allows 5 calls per 30 seconds here and will log the traces with message and call remain variable.&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;Next Go to App insight(your app insight instance) &amp;gt;Monitoring&amp;gt;Logs&amp;gt;traces&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;You can filter traces based on your Kusto like API name or custom dimension however here for the simplicity for our sample call filter is based on message contains "ip address". Please note it is the message metadata we added in our trace policy and call remain variable which you can track based on your business logic.&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;
&lt;P style="margin: 0in; margin-left: .375in; font-family: Calibri; font-size: 11.0pt;"&gt;traces&lt;/P&gt;
&lt;P style="margin: 0in; margin-left: .375in; font-family: Calibri; font-size: 11.0pt;"&gt;| where message contains "ip address"&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;Keep calling the API, for example using VS Code or any other tool to make continuous calls to API and capture the variable "&lt;EM&gt;callremain&lt;/EM&gt;". Here the trace policy helps to identify how many calls left in the rate limit policy by logging custom messages and metadata to the trace output.&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;" value="9"&gt;This will provide a way to look deep into your code logic or if you would like to track any variables in your APIM code. You can add more metadata in the &amp;lt;Trace&amp;gt; policy as per your debug requirements. Here is another quick example to save your request and change the severity as informational in the trace log.
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="xml"&gt;&amp;lt;trace source="remainingCallsPerIP" severity="information"&amp;gt;
&amp;lt;metadata name="myurl" value="@(context.Request.OriginalUrl.ToString())" /&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-10"&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/SPAN&gt;: The above steps outlined will give you an idea on how to use this policy to add custom messages and metadata to the trace output, which can help in debugging and troubleshooting API's by providing detailed information about request processing steps. This includes inbound requests, backend interactions, and outbound responses. I hope this blog provided you with information to better grasp what happens when an API is called, and how trace policy can help you in troubleshooting and debugging your APIM calls.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 06 May 2025 18:51:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/custom-tracing-in-api-management/ba-p/4260472</guid>
      <dc:creator>shailesh14</dc:creator>
      <dc:date>2025-05-06T18:51:06Z</dc:date>
    </item>
    <item>
      <title>How Networking setting of Batch Account impacts simplified communication mode Batch pool</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/how-networking-setting-of-batch-account-impacts-simplified/ba-p/4410258</link>
      <description>&lt;P&gt;As described in our &lt;A href="https://learn.microsoft.com/en-us/azure/batch/simplified-compute-node-communication" target="_blank"&gt;official document&lt;/A&gt;, the classic communication mode of Batch node will be retired on 31 March 2026. Instead, it’s recommended to use simplified communication mode while creating Batch pool.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, when users change their Batch pool communication mode from classic to simplified and apply the necessary network security group changes per the &lt;A href="https://learn.microsoft.com/en-us/azure/batch/batch-virtual-network#network-security-groups-for-virtual-machine-configuration-pools-specifying-subnet-level-rules" target="_blank"&gt;documentation&lt;/A&gt;, they may find that the nodes are still stuck in unusable status.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A very likely cause of this issue is an incorrect&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/batch/public-network-access" target="_blank"&gt;networking setting of the Batch Account&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This blog explains why the networking setting can cause nodes using simplified communication mode to get stuck in unusable status, and how to configure the correct networking setting for different user scenarios.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Cause:&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;As described in this &lt;A href="https://learn.microsoft.com/en-us/azure/batch/simplified-compute-node-communication" target="_blank"&gt;document&lt;/A&gt;, the difference between classic and simplified communication mode is very clear:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Classic&lt;/STRONG&gt;: the Batch service initiates communication with the compute nodes.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Simplified&lt;/STRONG&gt;: the compute nodes initiate communication with the Batch service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The purpose of the communication is simple: the Batch service needs to receive traffic from Batch nodes to know whether a node is healthy and which status it’s in.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The difference is where the traffic initiates. If it’s initiated from the Batch service side, as in classic communication mode, it’s considered outgoing traffic of your Batch Account. If it’s initiated from the Batch nodes, as in simplified communication mode, it’s considered incoming traffic of your Batch Account.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The &lt;A href="https://learn.microsoft.com/en-us/azure/batch/public-network-access" target="_blank"&gt;networking settings&lt;/A&gt; of the Batch Account only check incoming traffic, not outgoing traffic.&lt;/STRONG&gt; Hence, if the networking setting completely disables public network access, classic communication mode nodes will still be able to communicate with the Batch service, but simplified communication mode nodes will not, which further causes the Batch service to mark them as unusable.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Solution:&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The key point of the solution is to make sure that traffic from simplified communication mode nodes is allowed by the Batch Account networking setting.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is the diagram for different user scenarios:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*1: The resource group where the public IP address is created differs depending on the &lt;A href="https://learn.microsoft.com/en-us/azure/batch/batch-account-create-portal#create-a-batch-account" target="_blank"&gt;Batch Account pool allocation mode&lt;/A&gt;. If it’s Batch Service, the public IP address is created in the same resource group as the Virtual Network resource. If it’s User Subscription, it is in a resource group named &lt;EM&gt;AzureBatch-{GUID}-C&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;*2: This scenario is described in this &lt;A href="https://learn.microsoft.com/en-us/azure/batch/create-pool-public-ip" target="_blank"&gt;document&lt;/A&gt;.&lt;/P&gt;
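&lt;P&gt;As an illustration of the allow-list approach, here is a minimal Bicep sketch of the Batch Account networking settings. This is a sketch under assumptions: the account name, location, IP address, and API version are placeholders, not values from this article. The &lt;EM&gt;nodeManagementAccess&lt;/EM&gt; rules govern the incoming traffic from simplified communication mode nodes:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// Illustrative sketch only: the name, location, IP address and API version
// below are placeholders; substitute your own values.
resource batchAccount 'Microsoft.Batch/batchAccounts@2024-02-01' = {
  name: 'mybatchaccount'
  location: 'eastus'
  properties: {
    publicNetworkAccess: 'Enabled'
    networkProfile: {
      // Controls incoming traffic from simplified communication mode nodes
      nodeManagementAccess: {
        defaultAction: 'Deny'
        ipRules: [
          // Add every public IP address that the pool can use
          {
            action: 'Allow'
            value: '20.0.0.1'
          }
        ]
      }
    }
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;The same allow list can be maintained from the portal under the Batch Account's Networking blade.&lt;/P&gt;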
&lt;P&gt;TIP: In some scenarios, more than one public IP address can be used by a Batch pool. For example, a pool with a Virtual Network and its own public IP address requires one additional public IP address as a buffer, and a pool with a Virtual Network and more than 100 nodes also uses multiple addresses. In those scenarios, remember to put all public IP addresses into the allow list.&lt;/P&gt;</description>
      <pubDate>Fri, 02 May 2025 04:12:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/how-networking-setting-of-batch-account-impacts-simplified/ba-p/4410258</guid>
      <dc:creator>JerryZhangMS</dc:creator>
      <dc:date>2025-05-02T04:12:44Z</dc:date>
    </item>
    <item>
      <title>AI Resilience: Strategies to Keep Your Intelligent App Running at Peak Performance</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/ai-resilience-strategies-to-keep-your-intelligent-app-running-at/ba-p/4389357</link>
      <description>&lt;H4&gt;Stay Online&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/well-architected/reliability/" target="_blank" rel="noopener"&gt;Reliability&lt;/A&gt;. It's one of the 5 pillars of &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/well-architected/pillars" target="_blank" rel="noopener"&gt;Azure Well-Architect Framework&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When you start to implement and go to market with any new product that integrates with Azure OpenAI Service, you can face usage spikes in your workload. Even with everything scaling correctly on your side, if you have Azure OpenAI Service deployed using PTU, you can reach the PTU threshold and then start to experience &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/provisioned-throughput?tabs=global-ptum" target="_blank" rel="noopener"&gt;&lt;SPAN class="lia-text-color-8"&gt;&lt;EM&gt;429 &lt;/EM&gt;&lt;/SPAN&gt;response codes&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;The response headers also include important information about when you can retry the request, which you can use to implement a solution in your business logic. In this article I will show how to use an API Management policy to handle this, and also explore the native cache to save some tokens!&lt;/P&gt;
&lt;H4&gt;Architecture Reference&lt;/H4&gt;
&lt;img /&gt;
&lt;P&gt;The Azure Function on the left of the diagram simply represents an app request and can be any kind of resource (even in an on-premises environment). Our goal in this article is to show one of&amp;nbsp;&lt;EM&gt;n&lt;/EM&gt; possibilities for handling 429 responses. We will use an API Management policy to automatically redirect the backend to another Azure OpenAI Service instance in another region deployed in Standard mode, which means you are charged only for what you use.&lt;/P&gt;
&lt;P&gt;First, we need to create an API in our API Management instance to forward requests to your main Azure OpenAI Service (region 1 in the diagram).&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now we are going to create this policy in the API call request:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;    &amp;lt;policies&amp;gt;
       &amp;lt;inbound&amp;gt;
         &amp;lt;base /&amp;gt;
         &amp;lt;set-backend-service base-url="&amp;lt;your_open_ai_region1_endpoint&amp;gt;" /&amp;gt;
       &amp;lt;/inbound&amp;gt;
       &amp;lt;backend&amp;gt;
         &amp;lt;base /&amp;gt;
       &amp;lt;/backend&amp;gt;
       &amp;lt;outbound&amp;gt;
         &amp;lt;base /&amp;gt;
       &amp;lt;/outbound&amp;gt;
       &amp;lt;on-error&amp;gt;
         &amp;lt;retry condition="@(context.Response.StatusCode == 429)" count="1" interval="5" /&amp;gt;
         &amp;lt;set-backend-service base-url="&amp;lt;your_open_ai_region2_endpoint&amp;gt;" /&amp;gt;
       &amp;lt;/on-error&amp;gt;
     &amp;lt;/policies&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;The first part of our job is done! Now we have an automatic redirect to our Azure OpenAI Service deployed in region 2 whenever the PTU threshold is reached.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Cost consideration&lt;/H4&gt;
&lt;P&gt;So now you might ask: what about the added cost of using API Management?&lt;/P&gt;
&lt;P&gt;Even if you don't want to use any other API Management feature, you can leverage the native cache and, once again using policies and AI, store some questions/answers in the built-in&amp;nbsp;&lt;EM&gt;&lt;STRONG&gt;Redis*&lt;/STRONG&gt;&lt;/EM&gt; cache using the semantic cache for Azure OpenAI services.&lt;/P&gt;
&lt;P&gt;Let's change our policy to consider this:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;     &amp;lt;policies&amp;gt;
       &amp;lt;inbound&amp;gt;
         &amp;lt;base /&amp;gt;
         &amp;lt;azure-openai-semantic-cache-lookup score-threshold="0.05" embeddings-backend-id ="azure-openai-backend" embeddings-backend-auth ="system-assigned" &amp;gt;
           &amp;lt;vary-by&amp;gt;@(context.Subscription.Id)&amp;lt;/vary-by&amp;gt;
         &amp;lt;/azure-openai-semantic-cache-lookup&amp;gt;
         &amp;lt;set-backend-service base-url="&amp;lt;your_open_ai_region1_endpoint&amp;gt;" /&amp;gt;
       &amp;lt;/inbound&amp;gt;
       &amp;lt;backend&amp;gt;
         &amp;lt;base /&amp;gt;
       &amp;lt;/backend&amp;gt;
       &amp;lt;outbound&amp;gt;
         &amp;lt;base /&amp;gt;
         &amp;lt;azure-openai-semantic-cache-store duration="60" /&amp;gt;
       &amp;lt;/outbound&amp;gt;
       &amp;lt;on-error&amp;gt;
         &amp;lt;retry condition="@(context.Response.StatusCode == 429)" count="1" interval="5" /&amp;gt;
         &amp;lt;set-backend-service base-url="&amp;lt;your_open_ai_region2_endpoint&amp;gt;" /&amp;gt;
       &amp;lt;/on-error&amp;gt;
     &amp;lt;/policies&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;Now API Management will take the input tokens, use semantic equivalence to decide whether they match cached information, and otherwise redirect the request to your Azure OpenAI endpoint. Sometimes this can help you avoid reaching the PTU threshold as well!&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;* Check the tier / cache capabilities to validate your business solution needs with the API Management cache feature: Compare&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/api-management/api-management-features" target="_blank" rel="noopener"&gt;API Management features across tiers&lt;/A&gt; and &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/api-management/v2-service-tiers-overview" target="_blank" rel="noopener"&gt;cache size across tiers&lt;/A&gt;.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;H4&gt;Conclusion&lt;/H4&gt;
&lt;P&gt;API Management offers key capabilities for AI, some of which we explored in this article, as well as others that you can leverage for your intelligent applications. Check them out in this awesome &lt;A class="lia-external-url" href="https://github.com/Azure-Samples/AI-Gateway" target="_blank"&gt;AI Gateway HUB repository&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Last but not least, dive into API Management features with experts in the field inside the &lt;A class="lia-external-url" href="http://aka.ms/apimlove" target="_blank"&gt;API Management HUB&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Thanks for reading and Happy Coding!&lt;/P&gt;</description>
      <pubDate>Thu, 24 Apr 2025 16:52:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/ai-resilience-strategies-to-keep-your-intelligent-app-running-at/ba-p/4389357</guid>
      <dc:creator>fabiopadua</dc:creator>
      <dc:date>2025-04-24T16:52:18Z</dc:date>
    </item>
    <item>
      <title>Streaming and Analyzing Azure Storage Diagnostic Logs via Event Hub using Service Bus Explorer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/streaming-and-analyzing-azure-storage-diagnostic-logs-via-event/ba-p/4401439</link>
      <description>&lt;P&gt;Monitoring Azure Storage operations is crucial for ensuring performance, compliance, and security. Azure provides various options to collect and route diagnostic logs. One powerful option is&amp;nbsp;&lt;STRONG&gt;sending logs to Azure Event Hub&lt;/STRONG&gt;, which allows real-time streaming and integration with external tools and analytics platforms.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through setting up diagnostic logging for an Azure Storage account with &lt;STRONG&gt;Event Hub as the destination&lt;/STRONG&gt;, and then demonstrate how to analyse incoming logs using&lt;STRONG&gt; Service Bus Explorer&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Prerequisites&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Before we begin, make sure you have the following set up:&lt;/P&gt;
&lt;P&gt;1. &lt;STRONG&gt;Azure Event Hub Configuration&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;An Event Hub namespace and instance set up in your Azure subscription.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;2. &lt;STRONG&gt;Service Bus Explorer Tool&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We'll use Service Bus Explorer to connect to Event Hub and analyse log data.&lt;/P&gt;
&lt;P&gt;Download and Setup:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Go to the official GitHub page:&lt;BR /&gt;&lt;A href="https://github.com/paolosalvatori/ServiceBusExplorer" target="_blank" rel="noopener"&gt;https://github.com/paolosalvatori/ServiceBusExplorer&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Download the latest release .zip file from the Releases section.&lt;/LI&gt;
&lt;LI&gt;Extract the contents and launch ServiceBusExplorer.exe (no installation needed).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;- This is a Windows-only tool. Make sure .NET runtime is installed on your system.&lt;/P&gt;
&lt;P&gt;3. &lt;STRONG&gt;Event Hub Connection String&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You’ll need a connection string with appropriate permissions to connect via Service Bus Explorer:&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Portal → Navigate to your Event Hub Namespace → Shared Access Policies&lt;/LI&gt;
&lt;LI&gt;Select RootManageSharedAccessKey with Manage rights&lt;/LI&gt;
&lt;LI&gt;Copy the Connection string–primary key&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;-&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;Ensure the connection string includes the Entity Path if you're targeting a specific Event Hub.&lt;/P&gt;
&lt;P&gt;4. &lt;STRONG&gt;Ensure Diagnostic Logging is Enabled for Azure Storage&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To stream logs into Event Hub, make sure that diagnostic logging is configured properly on your Azure Storage account.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Steps:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Navigate to your Storage Account in the Azure Portal.&lt;/LI&gt;
&lt;LI&gt;Go to Monitoring &amp;gt; Diagnostic settings.&lt;/LI&gt;
&lt;LI&gt;Click Add diagnostic setting or edit an existing one.&lt;/LI&gt;
&lt;LI&gt;Select the required log categories: Blob, Table, Queue, File (as needed).&lt;/LI&gt;
&lt;LI&gt;Set Event Hub as the destination.&lt;img /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H6&gt;Let's take a look at the steps below for configuring Service Bus Explorer and reviewing the logs&lt;/H6&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Connect to Event Hub Using Service Bus Explorer&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To analyze the streamed logs, we will use &lt;STRONG&gt;Service Bus Explorer&lt;/STRONG&gt;, a powerful tool for inspecting messages within Azure Event Hub.&lt;/P&gt;
&lt;P&gt;Open &lt;STRONG&gt;Service Bus Explorer&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on &lt;STRONG&gt;File &amp;gt; Connect&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Provide Event Hub Connection String&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Copy your &lt;STRONG&gt;Event Hub-compatible connection string&lt;/STRONG&gt; at the namespace level (with Manage permissions) from the portal, paste it into the right-hand text field under Connection Settings, and click Save.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: View Available Event Hubs&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once connected, you'll see a list of &lt;STRONG&gt;Event Hubs under your namespace&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Expand the Event Hub you configured for diagnostic logs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4: Start Listening to the Consumer Group&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Right-click on your &lt;STRONG&gt;Consumer Group&lt;/STRONG&gt; (usually $Default).&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;"Create Consumer Group Listener"&lt;/STRONG&gt; to begin listening for incoming data.&lt;/LI&gt;
&lt;/OL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5: Enable Verbose Logging&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Check the &lt;STRONG&gt;Verbose&lt;/STRONG&gt; option to view more detailed information about the incoming messages, including metadata.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6: View and Analyse Events&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to the &lt;STRONG&gt;Events&lt;/STRONG&gt; tab at the top.&lt;/LI&gt;
&lt;LI&gt;Click on &lt;STRONG&gt;Start&lt;/STRONG&gt; to begin streaming live log events.&lt;/LI&gt;
&lt;/OL&gt;
&lt;img /&gt;
&lt;P&gt;You’ll now see:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Metadata&lt;/STRONG&gt;: Sequence Number, Offset, Enqueued Time (UTC), Partition Info.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Payload&lt;/STRONG&gt;: The actual log data coming from Azure Storage (in JSON format).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;🧠 Tip: You can copy and analyse this payload further using tools like Power BI, Stream Analytics, or even a custom parser.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Streaming diagnostic logs to Event Hub gives you flexibility in how you handle monitoring data. Whether you're integrating with external SIEM solutions, triggering alerts, or performing deep analytics, Event Hub provides a real-time backbone for diagnostics.&lt;/P&gt;
&lt;P&gt;Using Service Bus Explorer, you can quickly validate and analyse the incoming logs to ensure your setup is working and to inspect what's being captured.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reference Link :-&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 24 Apr 2025 11:27:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/streaming-and-analyzing-azure-storage-diagnostic-logs-via-event/ba-p/4401439</guid>
      <dc:creator>jainsourabh</dc:creator>
      <dc:date>2025-04-24T11:27:12Z</dc:date>
    </item>
    <item>
      <title>Tips for Migrating Azure Event Hub from Standard to Basic Tier Using Scripts</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/tips-for-migrating-azure-event-hub-from-standard-to-basic-tier/ba-p/4404556</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What are Event Hubs?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Event Hub is a big data streaming platform and event ingestion service by Microsoft Azure. It’s designed to ingest, buffer, store, and process millions of events per second in real time.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Feature Comparison&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The Standard tier of Azure Event Hubs provides features beyond what is available in the Basic tier. The following features are included with Standard:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Basic Tier&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Standard Tier&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Capture Feature&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;❌ Not available&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;✅ Available&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual Network Integration&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;❌ Not available&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;✅ Available&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Auto-Inflate&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;❌ Not available&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;✅ Available&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Consumer Groups&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Limited (only 1 group)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Up to 20 consumer groups&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Message Retention&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Up to 1 day&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Up to 7 days&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Many organizations or users choose to downgrade their Event Hubs from the Standard to the Basic tier to reduce costs or for other reasons.&lt;/P&gt;
&lt;P&gt;However, it's important not to overlook that certain features available in the Standard tier are not supported in the Basic tier. These differences mentioned above should be carefully reviewed before making the switch.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So, when you try to switch from the Portal, you can see that the Basic tier is enabled, which suggests you are clear to migrate from Standard to Basic.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, when you start migrating from the Portal, you will see the error below, which means the Event Hub Capture feature is enabled and you need to disable it before migrating.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Failed updating Event Hubs namespace with error: Code: &lt;STRONG&gt;'MessagingGatewayConflict'&lt;/STRONG&gt; with &lt;STRONG&gt;Message: 'Namespace tier cannot be downgraded, as Archive is not available in Basic tier.&lt;/STRONG&gt; TrackingId:xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx-xxxxxx-xxxxxxxxxx-xxxxxxxexx_xxx, SystemTracker:eventhubnamespacename.servicebus.windows.net:$tenants/eventhubnamespacename.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;When running a script for multiple namespaces, this error may not appear. You might incorrectly assume the migration succeeded, while the&amp;nbsp;&lt;STRONG&gt;status will remain 'Not Active.'&lt;/STRONG&gt; This can silently break functionality.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Current Limitation&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;If the Capture feature is enabled, the migration should not be allowed in the first place.&lt;/LI&gt;
&lt;LI&gt;You may have created multiple consumer groups on the Standard tier. However, after migration, only one consumer group, $Default, is usable on the Basic tier.&lt;/LI&gt;
&lt;LI&gt;If Auto-Inflate is enabled, you are asked to disable it first, and then the migration starts.&lt;/LI&gt;
&lt;/OL&gt;
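&lt;P&gt;To make these limitations concrete, the three rules above can be condensed into a minimal pre-migration check. This is only a sketch: the flag names are hypothetical helpers, not an Azure SDK API.&lt;/P&gt;

```python
def basic_tier_blockers(capture_enabled: bool,
                        consumer_group_count: int,
                        auto_inflate_enabled: bool) -> list[str]:
    """Return the issues to resolve before downgrading a namespace to Basic."""
    blockers = []
    if capture_enabled:
        blockers.append("Disable the Capture feature before migrating.")
    if consumer_group_count > 1:
        blockers.append("Only $Default is usable on Basic; delete extra consumer groups.")
    if auto_inflate_enabled:
        blockers.append("Disable Auto-Inflate before migrating.")
    return blockers

# Example: Capture on and three consumer groups -> two blockers to fix first.
print(basic_tier_blockers(True, 3, False))
```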
&lt;P&gt;Ideally, you should not be able to pull events from consumer groups other than $Default, as Basic offers only one consumer group per entity.&lt;/P&gt;
&lt;P&gt;However, if you try to pull events from the other consumer groups, it does not throw any runtime error.&lt;/P&gt;
&lt;P&gt;This bug has been identified by our internal product group team, and they are working to fix it and release the fix soon.&lt;/P&gt;
&lt;P&gt;Before implementing the workaround described below, I recommend checking the status of this feature request. Event Hubs is continuously evolving, and by the time you read this, the limitation might already have been addressed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Workaround&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Disable the Capture feature first and check that Status = Active. Then start the migration process and check again that Status = Active.&lt;/LI&gt;
&lt;LI&gt;Delete the other consumer groups you created on the Standard tier to avoid confusion.&lt;/LI&gt;
&lt;LI&gt;Use only the $Default consumer group in your receiver applications when pulling events from the Event Hub.&lt;/LI&gt;
&lt;/OL&gt;
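&lt;P&gt;The workaround steps above can be scripted. The sketch below assembles the corresponding Azure CLI commands as argument lists (resource and consumer-group names are hypothetical); each list could be handed to subprocess.run against a real subscription.&lt;/P&gt;

```python
# Sketch of the workaround as Azure CLI calls (resource names are hypothetical).
# Each command is built as an argument list; pass it to subprocess.run to execute.
rg, ns, eh = "my-resource-group", "my-eh-namespace", "my-eventhub"

# 1. Disable the Capture feature on the event hub before migrating.
disable_capture = ["az", "eventhubs", "eventhub", "update",
                   "--resource-group", rg, "--namespace-name", ns,
                   "--name", eh, "--enable-capture", "false"]

# 2. Delete extra consumer groups (Basic allows only $Default).
delete_cg = ["az", "eventhubs", "eventhub", "consumer-group", "delete",
             "--resource-group", rg, "--namespace-name", ns,
             "--eventhub-name", eh, "--name", "my-extra-group"]

# 3. Downgrade the namespace SKU to Basic.
downgrade = ["az", "eventhubs", "namespace", "update",
             "--resource-group", rg, "--name", ns, "--sku", "Basic"]

for cmd in (disable_capture, delete_cg, downgrade):
    print(" ".join(cmd))
```

&lt;P&gt;After each command, re-check the namespace status in the Portal (or via az eventhubs namespace show) to confirm it remains Active, since a silent failure leaves the status 'Not Active' as described above.&lt;/P&gt;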
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;Thanks for reading! I hope this article helps clarify the nuances of migrating from the Standard to the Basic tier in Azure Event Hubs. As the platform evolves, keep an eye on official documentation and Azure updates. This limitation may be resolved soon, making migration more seamless in the future.&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 24 Apr 2025 08:34:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/tips-for-migrating-azure-event-hub-from-standard-to-basic-tier/ba-p/4404556</guid>
      <dc:creator>sudeshna</dc:creator>
      <dc:date>2025-04-24T08:34:15Z</dc:date>
    </item>
    <item>
      <title>Automate creation of work items in ADO and Export/Import workflow packages</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/automate-creation-of-work-items-in-ado-and-export-import/ba-p/4272180</link>
      <description>&lt;P&gt;This article would create multiple work items (tasks) for any specific User Story in a particular Backlog with a certain TAG value in Azure Devops. This article would also show steps to Export/Import a workflow package.&lt;BR /&gt;&lt;BR /&gt;&lt;U&gt;&lt;STRONG&gt;PART 1&lt;/STRONG&gt;&lt;/U&gt;: Create multiple items for a parent User Story&lt;BR /&gt;This article uses Power Automate to do the same.&lt;BR /&gt;There are certain Pre-requisites that have to be fulfilled,&lt;BR /&gt;1. Be a part of an Azure Devops organization, have a project created along with some User Stories.&lt;BR /&gt;2. Have access to Power Automate to create workflows, and make sure you are able to add connections from Automate to ADO (Not to worry we would check that in the below steps)&lt;BR /&gt;&lt;BR /&gt;Now go ahead and&amp;nbsp;&lt;BR /&gt;Follow the below steps to achieve the purpose.&lt;BR /&gt;&lt;BR /&gt;Step1: Open power automate portal and Click on "My Flows"&amp;nbsp;&lt;BR /&gt;Link:&amp;nbsp;https://make.powerautomate.com&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Step2: Click on "New flow" and on "Automated cloud flow"&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Step3: Choose an appropriate name and search for ADO trigger "When a work item is assigned"&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Step4: At this point the Create option shows invalid parameters, as below&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;The next step resolves this.&lt;BR /&gt;&lt;BR /&gt;Step5: Click on the workflow item; the following tab appears. Fill in the organization and other details&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Now click on "Show All" options and fill in the Area Path, Iteration Path, Created By and Type&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Step6: Once the previous step is complete, click on the plus symbol and select the 'Add a condition' option&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This condition is where we add the specific tag value, making sure that only User Stories with this tag value are picked up and not all the others&lt;BR /&gt;&lt;BR /&gt;Step7: Choose the left entry as 'Tags'&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;and enter a value; in this case, 'Custom' is used.&lt;BR /&gt;&lt;BR /&gt;Save the workflow at intermediate points to avoid losing any changes.&lt;BR /&gt;&lt;BR /&gt;Step8: Now let's add the work item details. Under the True condition (shown below), click on the plus symbol and 'Add an action'&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on 'Azure Devops' - 'Create a work item'&lt;BR /&gt;&lt;BR /&gt;Step 9: Click on the item, and fill in ADO details&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;The important part is to click on 'Show All'&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;and make sure the option Link URL is chosen correctly.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Step 10:&amp;nbsp; Choose 'Hierarchy-Reverse' in the Link Type&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
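&lt;P&gt;Behind the scenes, a 'Create a work item' action with a 'Hierarchy-Reverse' link corresponds to the Azure DevOps work item REST API, which accepts a JSON-patch document. Below is a sketch of that payload (the organization, project, and parent work item ID are hypothetical placeholders):&lt;/P&gt;

```python
import json

org, project, parent_id = "myorg", "myproject", 123  # hypothetical values

# JSON-patch body for:
#   POST https://dev.azure.com/{org}/{project}/_apis/wit/workitems/$Task?api-version=7.0
# sent with Content-Type: application/json-patch+json
payload = [
    # Set the new task's title.
    {"op": "add", "path": "/fields/System.Title", "value": "Child task"},
    # Link the task back to its parent User Story.
    {"op": "add", "path": "/relations/-", "value": {
        "rel": "System.LinkTypes.Hierarchy-Reverse",
        "url": f"https://dev.azure.com/{org}/{project}/_apis/wit/workItems/{parent_id}",
    }},
]
print(json.dumps(payload, indent=2))
```

&lt;P&gt;The 'Hierarchy-Reverse' link type is what makes the new task a child of the User Story, which is why Step 10 above matters.&lt;/P&gt;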
&lt;P&gt;Step 11: Now let's create parallel tasks for the parent User Story.&lt;BR /&gt;Click 'Add a parallel branch', and similar to the previous steps,&amp;nbsp;&lt;BR /&gt;click on 'Azure Devops' - 'Create a work item'&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Step 12:&amp;nbsp;Save and Test the workflow&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Now the workflow creates multiple 'tasks' for a User Story with the tag value 'Custom'.&lt;BR /&gt;&lt;BR /&gt;Let's test the workflow&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;The run was successful, and the User story has child 'tasks'&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All the tasks have the parent's Area Path and Iteration Path&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;U&gt;&lt;STRONG&gt;PART 2&lt;/STRONG&gt;&lt;/U&gt;: In the second section of the document, let's learn how to export a workflow and import it.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Step1: Click on Export and choose 'package(.zip)'&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Step2: Enter details and click on 'Export'&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;The package is now downloaded locally&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Importing exported package&lt;BR /&gt;&lt;BR /&gt;Step3: Click on 'my flows' and Import option&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Step4: The next screen appears as below&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Once Import is clicked, the workflow is created from the exported package.&lt;BR /&gt;&lt;BR /&gt;To conclude, we have tested the workflow for User Stories with a specific 'Tag' value. The workflow creates multiple tasks as child items. We also tested the export/import of workflow packages.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
</description>
      <pubDate>Wed, 16 Apr 2025 20:25:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/automate-creation-of-work-items-in-ado-and-export-import/ba-p/4272180</guid>
      <dc:creator>Gayatri_Ram</dc:creator>
      <dc:date>2025-04-16T20:25:09Z</dc:date>
    </item>
    <item>
      <title>Lease Management in Azure Storage &amp; Common troubleshooting scenarios</title>
      <link>https://techcommunity.microsoft.com/t5/azure-paas-blog/lease-management-in-azure-storage-common-troubleshooting/ba-p/4402002</link>
      <description>&lt;P&gt;The blog explains how lease management in Azure Storage works, covering the management of concurrent access to blobs and containers. It discusses key concepts such as acquiring, renewing, changing, releasing, and breaking leases, ensuring only the lease holder can modify or delete a resource for a specified duration. Additionally, it explores common troubleshooting scenarios in Azure Storage Lease Management.&lt;/P&gt;
&lt;P&gt;Lease management in Azure Storage allows you to create and manage locks on blobs for write and delete operations. This is particularly useful for ensuring that only one client can write to a blob at a time, preventing conflicts and ensuring data consistency.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Key Concepts:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Lease States&lt;/STRONG&gt;: A blob can be in one of several lease states, such as&amp;nbsp;Available,&amp;nbsp;Leased,&amp;nbsp;Expired,&amp;nbsp;Breaking, and&amp;nbsp;Broken. Each state indicates the current status of the lease on the blob.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Lease Duration&lt;/STRONG&gt;: The duration of a lease can be between 15 to 60 seconds, or it can be infinite. An infinite lease remains active until it is explicitly released or broken.&amp;nbsp;In versions prior to 2012-02-12, the lock duration is 60 seconds.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Lease Actions&lt;/STRONG&gt;: There are several actions you can perform on a lease, including acquiring, renewing, releasing, and breaking the lease.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The diagram below illustrates the five states of a lease and the commands or events that trigger lease state changes. A lease in Azure Blob Storage manages concurrent access to blobs and containers, ensuring that only the lease holder can modify or delete a resource for a specified duration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Available&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The blob is not leased at present and is available for leasing. It transitions to the Leased state when acquired and stays in the Available state when released or written to.&lt;/P&gt;
&lt;P&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;&lt;U&gt;Leased&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When a lease is acquired, the blob enters a leased state and is locked from modification by others; it is available when released, leased when renewed or changed, breaking with a break period greater than 0, broken with a break period of 0, and expired when the lease period ends.&lt;/P&gt;
&lt;P&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;&lt;U&gt;Expired&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The lease has expired, but it can be renewed or re-acquired, transitioning to leased when acquired or available when terminated.&lt;/P&gt;
&lt;P&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;&lt;U&gt;Breaking&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The lease is terminated with a non-zero break period, remaining in the Breaking state if greater than 0, and entering the Broken state once the period ends or is set to 0, becoming available when released.&lt;/P&gt;
&lt;P&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;&lt;U&gt;Broken&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When a lease is broken, clients can acquire a new lease; it becomes Leased when acquired, remains broken if broken again, and moves to available when released or written to.&lt;/P&gt;
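&lt;P&gt;The transitions described above can be sketched as a small state table (the event names here are informal labels mirroring the descriptions, not API verbs):&lt;/P&gt;

```python
# (state, event) -> next state, following the five lease states described above.
TRANSITIONS = {
    ("Available", "acquire"): "Leased",
    ("Leased", "renew"): "Leased",
    ("Leased", "change"): "Leased",
    ("Leased", "release"): "Available",
    ("Leased", "break>0"): "Breaking",    # break with a non-zero break period
    ("Leased", "break=0"): "Broken",      # break with a zero break period
    ("Leased", "expire"): "Expired",
    ("Expired", "acquire"): "Leased",
    ("Expired", "release"): "Available",
    ("Breaking", "period_ends"): "Broken",
    ("Breaking", "release"): "Available",
    ("Broken", "acquire"): "Leased",
    ("Broken", "break"): "Broken",
    ("Broken", "release"): "Available",
}

def next_state(state: str, event: str) -> str:
    """Look up the lease state reached from `state` when `event` occurs."""
    return TRANSITIONS[(state, event)]

print(next_state("Available", "acquire"))  # Leased
```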
&lt;P&gt;​In addition, Lease management in Azure Storage allows you to create and manage locks on containers to control access for delete operations. This is particularly useful for ensuring that only one client can delete a container at a time, preventing conflicts and ensuring data consistency.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Troubleshooting Scenarios&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario 1: Lease ID Mismatch&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;U&gt;&lt;STRONG class="lia-align-left"&gt;Error&lt;/STRONG&gt;:&lt;/U&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;"The lease ID specified did not match the lease ID for the blob. RequestId:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Time:YYYY-MM-DDTHH:mm:SS.Z&lt;/P&gt;
&lt;P class=""&gt;&lt;U&gt;&lt;STRONG&gt;Cause:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P class=""&gt;This error occurs when:&lt;BR /&gt;• The lease ID provided in the request does not match the actual lease ID currently held by the blob.&lt;BR /&gt;• The lease has already expired or was broken, making the lease ID invalid.&lt;BR /&gt;• Another process has acquired a new lease, making the old lease ID incorrect.&lt;STRONG&gt; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P class=""&gt;&lt;U&gt;&lt;STRONG&gt;Resolution:&lt;/STRONG&gt;&amp;nbsp;&lt;/U&gt;&lt;/P&gt;
&lt;P class=""&gt;Verify the Correct Lease ID&lt;BR /&gt;• Get the correct lease ID before performing lease operations.&lt;BR /&gt;• You can retrieve the current lease ID from your application logs or metadata if previously stored.&lt;BR /&gt;Acquire a New Lease If Needed&lt;BR /&gt;• If the lease has already expired or was broken, you must acquire a new lease before proceeding. To acquire a lease, please refer to lease action command above&lt;BR /&gt;Use the Correct Lease ID for Subsequent Operations&lt;BR /&gt;• Ensure all lease-related requests (renew, release, break, change) use the correct lease ID that was returned when acquiring the lease.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario 2: Lease ID Missing&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Error:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;“There is currently a lease on the blob and no lease ID was specified in the request.&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Time:YYYY-MM-DDTHH:mm:SS.Z”&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Cause:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;This error occurs when:&lt;BR /&gt;• You are trying to renew, release, or change a lease but did not include x-ms-lease-id in the request.&lt;BR /&gt;• The blob is already leased, but you are trying to acquire a new lease without breaking the existing one first.&lt;BR /&gt;• You are attempting to break a lease without including the lease ID when required (some scenarios do not require it)&lt;STRONG&gt; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Resolution:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Include the Lease ID in the Request&lt;BR /&gt;• Ensure you provide the correct x-ms-lease-id in the request header.&lt;BR /&gt;• Example for renewing a lease. To renew a lease, please refer to lease action command above&lt;BR /&gt;Verify If a Lease Exists Before Performing Operations&lt;BR /&gt;You can check the lease status by using the Get Blob Properties API:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;curl -I -X HEAD "https://mystorageaccount.blob.core.windows.net/mycontainer/myblob?SAS_TOKEN" \ -H "x-ms-version: 2021-08-06"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;If the lease status is “unlocked”, then there is no active lease, and you do not need to specify a lease ID.&lt;/P&gt;
&lt;P&gt;&lt;STRONG class="lia-align-justify"&gt;&lt;STRONG&gt;Scenario 3: &lt;/STRONG&gt;Lease ID Not Found&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG class="lia-align-justify"&gt;Error:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;There is currently no lease on the blob.&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Time: YYYY-MM-DDTHH:MM:SS.ssssssZ&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Cause:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;The error occurs when:&lt;BR /&gt;• You provide an incorrect or expired lease ID in lease operations such as renew, release, or change.&lt;BR /&gt;• The lease has already expired or been broken, making the lease ID invalid.&lt;BR /&gt;• The blob never had a lease in the first place.&lt;BR /&gt;• You are trying to release or renew a lease without specifying a lease ID when required.&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Resolution:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Check If the Blob Has an Active Lease&lt;BR /&gt;Before performing any lease-related operation, you should verify the lease status using the Get Blob Properties API:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;curl -I -X HEAD "https://mystorageaccount.blob.core.windows.net/mycontainer/myblob?SAS_TOKEN" \ -H "x-ms-version: 2021-08-06"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;Look for these headers in the response:&lt;BR /&gt;&amp;nbsp;x-ms-lease-status: locked → A lease is present.&lt;BR /&gt;x-ms-lease-status: unlocked → No lease is currently active.&lt;BR /&gt;x-ms-lease-state: expired → The lease has expired.&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;Ensure You Use the Correct Lease ID&lt;BR /&gt;If a lease is present, but you’re getting an error, ensure you are using the correct lease ID from when the lease was acquired. If you don’t have the lease ID, you may need to break the lease and acquire a new one.&lt;/P&gt;
&lt;P&gt;If No Active Lease Exists, Acquire a New Lease. If the blob&amp;nbsp;does not have an active lease, you need to acquire one before performing lease-related operations:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If the lease ID is incorrect or missing, check the lease status first.&lt;/LI&gt;
&lt;LI&gt;If no lease exists, acquire a new lease before proceeding.&lt;/LI&gt;
&lt;LI&gt;If the lease ID is lost, break the lease and acquire a new one.&lt;/LI&gt;
&lt;/UL&gt;
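&lt;P&gt;The decision rules above can be condensed into a small helper that inspects the Get Blob Properties response headers (the header names are real; the advice strings are only illustrative):&lt;/P&gt;

```python
def lease_advice(headers: dict) -> str:
    """Suggest the next lease step from x-ms-lease-* response headers."""
    status = headers.get("x-ms-lease-status", "unlocked")
    state = headers.get("x-ms-lease-state", "available")
    if status == "unlocked" or state in ("available", "expired", "broken"):
        return "No usable lease: acquire a new lease before lease operations."
    # A lease is locked and active.
    return ("Lease is active: use the lease ID returned at acquisition, "
            "or break the lease if the ID is lost.")

print(lease_advice({"x-ms-lease-status": "unlocked"}))
```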
&lt;P&gt;&lt;STRONG&gt;Scenario 4: Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Error:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Unable to proceed HnsOn migration due to incompatible feature, Blob has active lease&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Cause:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;The HNS upgrade fails if any blob/container has an active lease.&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Resolution:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;The upgrade might fail if an application writes to the storage account during the upgrade. To prevent such write activity:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Quiesce any applications or services that might perform write operations.&lt;/LI&gt;
&lt;LI&gt;Release or break existing leases on containers and blobs in the storage account.&lt;/LI&gt;
&lt;LI&gt;After the upgrade has completed, break the leases you created to resume allowing write access to the containers and blobs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Breaking an active lease without gracefully disabling applications or virtual machines that are currently accessing those resources could have unexpected results. Be sure to quiesce any current write activities before breaking any current leases.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Additional Resources:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For more detailed information and troubleshooting tips, you can refer to the following resources:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Frest%2Fapi%2Fstorageservices%2Flease-blob%3Ftabs%3Dmicrosoft-entra-id%23lease-states&amp;amp;data=05%7C02%7Crpadi%40microsoft.com%7C3bf7f05deff84f772a5708dd765e9fdc%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638796871185661430%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=bbjhIjucGeM%2FUwxuq6SrkRX8LzAkO4D%2FduLTihe5C8g%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Lease Blob (REST API) - Azure Storage | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/concurrency-manage" target="_blank" rel="noopener"&gt;Manage concurrency in Blob Storage - Azure Storage&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-lease" target="_blank" rel="noopener"&gt;Create and manage blob leases with .NET - Azure Storage&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/rest/api/storageservices/lease-container?tabs=microsoft-entra-id" target="_blank" rel="noopener"&gt;Lease Container (REST API) - Azure Storage | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 09 Apr 2025 12:01:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-paas-blog/lease-management-in-azure-storage-common-troubleshooting/ba-p/4402002</guid>
      <dc:creator>rpadi450</dc:creator>
      <dc:date>2025-04-09T12:01:45Z</dc:date>
    </item>
  </channel>
</rss>

