<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Integration Services topics</title>
    <link>https://techcommunity.microsoft.com/t5/azure-integration-services/bd-p/IntegrationsonAzure</link>
    <description>Azure Integration Services topics</description>
    <pubDate>Fri, 03 Apr 2026 21:24:51 GMT</pubDate>
    <dc:creator>IntegrationsonAzure</dc:creator>
    <dc:date>2026-04-03T21:24:51Z</dc:date>
    <item>
      <title>Logic Apps Data Mapper Integer Formatting Issue</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/logic-apps-data-mapper-integer-formatting-issue/m-p/4490538#M365</link>
      <description>&lt;P&gt;Hello team, I am working on a data map that is giving me a hard time in a Logic App. For my transformations, I use a JSON-to-JSON transformation with the new Data Mapper. I have managed to handle all fields, but one integer field keeps causing trouble.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="xml"&gt;&amp;lt;number key="id"&amp;gt;
          &amp;lt;xsl:value-of select="/*/*[@key='mapparameters']/*[@key='counterpartyType1id']" /&amp;gt;
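          &amp;lt;!-- Hypothetical workaround, not part of the original post: if the engine is
               treating the value as a double, format-number can force an integer rendering:
               &amp;lt;xsl:value-of select="format-number(/*/*[@key='mapparameters']/*[@key='counterpartyType1id'], '0')" /&amp;gt; --&amp;gt;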
        &amp;lt;/number&amp;gt;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here, if I pass, let's say, 12345, I want to see 12345, but the result is &lt;STRONG&gt;12345.0&lt;/STRONG&gt;.&amp;nbsp; This action's output is sent directly to an HTTP call in the Logic App, and based on the workflow run logs, everything seems okay; in the logs, the value is shown as 12345. However, when we check the backend, this field is 12345.0 in the request body, and this causes an error because the application does not accept it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have tried to format and convert the number in every way I could think of, with no luck. This problem started happening out of the blue one day.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can anyone guide me to a potential resolution? Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 29 Jan 2026 20:01:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/logic-apps-data-mapper-integer-formatting-issue/m-p/4490538#M365</guid>
      <dc:creator>BerkayM</dc:creator>
      <dc:date>2026-01-29T20:01:34Z</dc:date>
    </item>
    <item>
      <title>Fixed ip address for outbound calls from Azure APIM Standard V2</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/fixed-ip-address-for-outbound-calls-from-azure-apim-standard-v2/m-p/4457882#M361</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I recently ran a PoC deployment of the Azure APIM Standard V2 SKU instead of our current Premium Classic instance. This worked well! Performance is great, and I am able to route calls to an on-premises network using VNet integration. However, one of the features we currently use with the Premium Classic instance is a fixed IP address for calls from APIM to third parties.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there a way to achieve this using Standard V2? We have tried a NAT gateway with a fixed IP on the same VNet, but this does not seem to help.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Sep 2025 14:21:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/fixed-ip-address-for-outbound-calls-from-azure-apim-standard-v2/m-p/4457882#M361</guid>
      <dc:creator>BizTalkers</dc:creator>
      <dc:date>2025-09-29T14:21:13Z</dc:date>
    </item>
    <item>
      <title>Azure function app to read files from SMB mounted file share</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-function-app-to-read-files-from-smb-mounted-file-share/m-p/4436417#M359</link>
      <description>&lt;P&gt;How can I programmatically connect an Azure Function App to multiple (50+) SMB-mounted Azure File Shares that use the same credentials, given that Logic Apps aren't suitable due to their static connection requirements?&lt;/P&gt;</description>
      <pubDate>Fri, 25 Jul 2025 00:39:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-function-app-to-read-files-from-smb-mounted-file-share/m-p/4436417#M359</guid>
      <dc:creator>velmars</dc:creator>
      <dc:date>2025-07-25T00:39:22Z</dc:date>
    </item>
    <item>
      <title>Issue with Custom Domain on APIM and Cloudflare Proxying</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/issue-with-custom-domain-on-apim-and-cloudflare-proxying/m-p/4395909#M351</link>
      <description>&lt;P&gt;Dear all,&lt;/P&gt;&lt;P&gt;Last week, we attempted to configure a custom domain name for our Azure API Management (APIM) instance. We use Cloudflare as our DNS provider. The required CNAME record was created with the &lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute enabled. However, when configuring the custom hostname in Azure, we encountered the following error:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Invalid parameter: CustomHostnameOwnershipCheckFailed.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;A CNAME record pointing from &lt;EM&gt;apim&lt;/EM&gt;.&lt;EM&gt;ourowndomain.net&lt;/EM&gt; to &lt;EM&gt;apim.azure-api.net&lt;/EM&gt; was not found.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;As a workaround, we disabled the &lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute in Cloudflare, retried the configuration, and it worked successfully. We then re-enabled the &lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute, and the custom domain continued to function correctly.&lt;/P&gt;&lt;P&gt;However, yesterday, we discovered that the custom domain was no longer working and returned a &lt;STRONG&gt;"404 Web site not found"&lt;/STRONG&gt; error page.&lt;/P&gt;&lt;P&gt;After extensive troubleshooting—including disabling the&amp;nbsp;&lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute on the CNAME record—we were unable to resolve the issue.&lt;/P&gt;&lt;P&gt;To restore functionality, we removed and reconfigured the custom domain by following the same steps:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Disable the &lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute on the CNAME record.&lt;/LI&gt;&lt;LI&gt;Configure the custom domain in APIM.&lt;/LI&gt;&lt;LI&gt;Re-enable the &lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;This resolved the issue again.&lt;/P&gt;&lt;P&gt;We suspect that Azure initially validates the CNAME record during the custom domain configuration process when the &lt;STRONG&gt;proxied&lt;/STRONG&gt; attribute is disabled. 
However, after a few days, Azure appears to revalidate the CNAME record and expects it to resolve to *.azure-api.net. Since Cloudflare returns its own IPs when proxying is enabled, Azure may reject the custom domain configuration, leading to the issue.&lt;/P&gt;&lt;P&gt;Can anyone confirm whether our assumption is correct?&lt;/P&gt;&lt;P&gt;Additionally, is there a recommended workaround for this issue? We are considering deploying a reverse proxy (Application Gateway) to handle Cloudflare requests and forward them to the APIM instance.&lt;/P&gt;&lt;P&gt;Thank you in advance for your help.&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;</description>
      <pubDate>Fri, 21 Mar 2025 14:02:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/issue-with-custom-domain-on-apim-and-cloudflare-proxying/m-p/4395909#M351</guid>
      <dc:creator>mkg310</dc:creator>
      <dc:date>2025-03-21T14:02:49Z</dc:date>
    </item>
    <item>
      <title>Debug your APIs using request tracing</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/debug-your-apis-using-request-tracing/m-p/4362157#M344</link>
      <description>&lt;P&gt;We are leveraging Azure API Management's tracing capabilities to monitor and log incoming traffic. The primary goal is to track traffic in APIM and attribute it to specific client applications by identifying the appid from JWT tokens included in requests. Additionally, we aim to ensure that trace logs are correctly sent to Log Analytics for debugging and further analysis.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To achieve this, we implemented a test policy in a GET method of a cloned API within APIM. The policy is as follows:&lt;/P&gt;&lt;LI-CODE lang="xml"&gt;&amp;lt;policies&amp;gt;
    &amp;lt;inbound&amp;gt;
        &amp;lt;base /&amp;gt;
        &amp;lt;trace source="InboundTrace" severity="verbose"&amp;gt;
            &amp;lt;message&amp;gt;Inbound processing started&amp;lt;/message&amp;gt;
            &amp;lt;metadata name="User-Agent" value="@(context.Request.Headers.GetValueOrDefault("User-Agent", "unknown"))" /&amp;gt;
        &amp;lt;/trace&amp;gt;
    &amp;lt;/inbound&amp;gt;
    &amp;lt;backend&amp;gt;
        &amp;lt;base /&amp;gt;
    &amp;lt;/backend&amp;gt;
    &amp;lt;outbound&amp;gt;
        &amp;lt;base /&amp;gt;
        &amp;lt;trace source="OutboundTrace" severity="verbose"&amp;gt;
            &amp;lt;message&amp;gt;Outbound response being sent&amp;lt;/message&amp;gt;
            &amp;lt;metadata name="ResponseCode" value="@(context.Response.StatusCode.ToString())" /&amp;gt;
        &amp;lt;/trace&amp;gt;
    &amp;lt;/outbound&amp;gt;
    &amp;lt;on-error&amp;gt;
        &amp;lt;base /&amp;gt;
        &amp;lt;trace source="ErrorTrace" severity="error"&amp;gt;
            &amp;lt;message&amp;gt;Error encountered&amp;lt;/message&amp;gt;
            &amp;lt;metadata name="ErrorDetails" value="@(context.LastError.Message)" /&amp;gt;
        &amp;lt;/trace&amp;gt;
    &amp;lt;/on-error&amp;gt;
&amp;lt;/policies&amp;gt;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This approach aims to ensure the appid appears in the TraceRecords
attribute of ApiManagementGatewayLogs, enabling us to identify which client applications are consuming specific APIs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Challenges Faced&lt;/STRONG&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;STRONG&gt;Trace Logs&lt;/STRONG&gt;:&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;Trace logs are not appearing in &lt;STRONG&gt;Log Analytics&lt;/STRONG&gt;, despite being configured in diagnostics.&lt;/LI&gt;&lt;LI&gt;Using the queries suggested in the documentation, we could not find the TraceRecords field or metadata added by the trace policy.&lt;/LI&gt;&lt;LI&gt;We are unsure if the policy is being correctly applied or if additional configurations are needed.&lt;/LI&gt;&lt;/UL&gt;&lt;LI&gt;&lt;STRONG&gt;Traffic Attribution&lt;/STRONG&gt;:&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;While traffic is traceable, attributing requests to client applications without the appid is challenging.&lt;/LI&gt;&lt;LI&gt;We want to confirm if the approach to extract and log the appid aligns with best practices and whether there are more efficient alternatives.&lt;/LI&gt;&lt;/UL&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Questions &lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Are there additional configurations needed to ensure trace logs are correctly sent to Log Analytics?&lt;/LI&gt;&lt;LI&gt;Could you provide more detailed examples of KQL queries to check the records generated by the trace policy?&lt;/LI&gt;&lt;LI&gt;Does the proposed approach for extracting and logging appid align with best practices in APIM?&lt;/LI&gt;&lt;LI&gt;Are there any limitations or performance considerations when modifying global policies for this purpose?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;References Followed&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A 
href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-api-inspector#enable-tracing-for-an-api" target="_blank"&gt;Debug APIs in Azure API Management&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/api-management/trace-policy" target="_blank"&gt;Trace Policy Documentation&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Thu, 02 Jan 2025 20:44:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/debug-your-apis-using-request-tracing/m-p/4362157#M344</guid>
      <dc:creator>orafaelferreira</dc:creator>
      <dc:date>2025-01-02T20:44:56Z</dc:date>
    </item>
    <item>
      <title>Azure API Management Gateway - RBAC on the API level</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-api-management-gateway-rbac-on-the-api-level/m-p/4319912#M337</link>
      <description>&lt;P&gt;Is it possible to grant access at the level of specific APIs, so that users can see some APIs but not others inside the same Azure API Management gateway?&lt;/P&gt;&lt;P&gt;For example: User1 can manage the green ones, but not the red ones.&lt;/P&gt;&lt;img /&gt;&lt;P&gt;Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 21 Nov 2024 16:45:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-api-management-gateway-rbac-on-the-api-level/m-p/4319912#M337</guid>
      <dc:creator>mkg310</dc:creator>
      <dc:date>2024-11-21T16:45:37Z</dc:date>
    </item>
    <item>
      <title>Read sharepoint files from ADF and then export it to SharePoint</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/read-sharepoint-files-from-adf-and-then-export-it-to-sharepoint/m-p/4275810#M332</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;Could someone please help me with integrating SharePoint files into ADF and then exporting them back to SharePoint?&lt;/P&gt;</description>
      <pubDate>Mon, 21 Oct 2024 19:26:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/read-sharepoint-files-from-adf-and-then-export-it-to-sharepoint/m-p/4275810#M332</guid>
      <dc:creator>DIMPYK</dc:creator>
      <dc:date>2024-10-21T19:26:21Z</dc:date>
    </item>
    <item>
      <title>API Guide: Resubmitting from a specific Action in Logic Apps Standard</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/api-guide-resubmitting-from-a-specific-action-in-logic-apps/m-p/4268506#M330</link>
      <description>&lt;P data-unlink="true"&gt;&lt;EM&gt;&lt;FONT size="1 2 3 4 5 6 7"&gt;&lt;FONT size="2"&gt;In collaboration with&lt;SPAN&gt;&amp;nbsp;Sofia Hubendick&lt;/SPAN&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;FONT size="3"&gt;This how-to article explains the process of resubmitting a Logic App Standard from a specific action via API. If you want to resubmit the workflow from the beginning, you can use the &lt;A href="https://learn.microsoft.com/sv-se/rest/api/appservice/workflow-trigger-histories/resubmit?view=rest-appservice-2024-04-01&amp;amp;tabs=HTTP" target="_self"&gt;Workflow Trigger Histories - Resubmit - REST API&lt;/A&gt;&amp;nbsp;&amp;nbsp;instead.&lt;/FONT&gt;&lt;/P&gt;&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;H3&gt;&lt;FONT size="4"&gt;Workflow Run Histories - Resubmit&lt;/FONT&gt;&lt;/H3&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/DIV&gt;&lt;P&gt;&lt;FONT size="4"&gt;&lt;U&gt;Authentication&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;I used a managed identity for authentication, which simplifies the process by eliminating the need to obtain a token manually. 
Additionally, I implemented the new Logic App Standard Operator role.&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;FONT size="4"&gt;&lt;U&gt;URL&lt;/U&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="3"&gt;The URL for resubmitting an action looks like this:&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;https://management.azure.com/subscriptions/[subscriptionId]/resourceGroups/[resourceGroupName]/providers/Microsoft.Web/sites/[logicAppName]/hostruntime/runtime/webhooks/workflow/api/management/workflows/[workflowName]/runs/[runId]/resubmit?api-version=2022-03-01&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Mandatory URL Path Parameters&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="160.2px" height="30px"&gt;&lt;P&gt;&lt;STRONG&gt;Name&lt;/STRONG&gt;&lt;/P&gt;&lt;/TD&gt;&lt;TD width="418.188px" height="30px"&gt;&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="160.2px" height="30px"&gt;&lt;P&gt;subscriptionId&lt;/P&gt;&lt;/TD&gt;&lt;TD width="418.188px" height="30px"&gt;&lt;P&gt;The Azure subscription Id&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="160.2px" height="30px"&gt;&lt;P&gt;resourceGroupName&lt;/P&gt;&lt;/TD&gt;&lt;TD width="418.188px" height="30px"&gt;&lt;P&gt;The name of the resource group containing the Logic App&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="160.2px" height="30px"&gt;&lt;P&gt;logicAppName&lt;/P&gt;&lt;/TD&gt;&lt;TD width="418.188px" height="30px"&gt;&lt;P&gt;The name of the Logic App&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="160.2px" height="30px"&gt;&lt;P&gt;workflowName&lt;/P&gt;&lt;/TD&gt;&lt;TD width="418.188px" height="30px"&gt;&lt;P&gt;The name of the workflow&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="160.2px" height="30px"&gt;&lt;P&gt;runId&lt;/P&gt;&lt;/TD&gt;&lt;TD 
width="418.188px" height="30px"&gt;&lt;P&gt;The id of the workflow run to be resubmitted&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&lt;BR /&gt;&lt;FONT size="4"&gt;&lt;U&gt;Request Body&lt;/U&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="3"&gt;The API request body is structured as follows; replace the placeholder with the name of the action:&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="json"&gt;{
  "actionsToResubmit": [
    {
      "name": "[action name]"
    }
  ]
}&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="4"&gt;&lt;U&gt;Response&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE width="523px"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;Name &lt;/FONT&gt;&lt;/STRONG&gt;&lt;/TD&gt;&lt;TD&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="150.825px"&gt;&lt;FONT size="3"&gt;202 Accepted&lt;/FONT&gt;&lt;/TD&gt;&lt;TD width="371.375px"&gt;&lt;P&gt;&lt;FONT size="3"&gt;OK&lt;/FONT&gt;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="150.825px"&gt;&lt;FONT size="3"&gt;Other Status Codes&lt;/FONT&gt;&lt;/TD&gt;&lt;TD width="371.375px"&gt;&lt;P&gt;&lt;FONT size="3"&gt;Error response describing why the operation failed.&lt;/FONT&gt;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 14 Oct 2024 06:13:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/api-guide-resubmitting-from-a-specific-action-in-logic-apps/m-p/4268506#M330</guid>
      <dc:creator>andevjen</dc:creator>
      <dc:date>2024-10-14T06:13:34Z</dc:date>
    </item>
    <item>
      <title>Integration with SuccessFactors</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/integration-with-successfactors/m-p/4249096#M327</link>
      <description>&lt;P&gt;Hi Community,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have SuccessFactors, and we have four different Azure tenants.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;SuccessFactors can only connect to one of these (it's for the SuccessFactors Recruitment integration to Outlook).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Does anyone know a way that I can connect the three other tenants to our main company one, so that SuccessFactors can connect to our main one and then see / post to Outlook mailboxes that exist in the other three?&lt;/P&gt;</description>
      <pubDate>Wed, 18 Sep 2024 16:13:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/integration-with-successfactors/m-p/4249096#M327</guid>
      <dc:creator>pallen330</dc:creator>
      <dc:date>2024-09-18T16:13:33Z</dc:date>
    </item>
    <item>
      <title>Semantic Kernel: Develop your AI Integrated Web App on Azure and .NET 8.0</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/semantic-kernel-develop-your-ai-integrated-web-app-on-azure-and/m-p/4209484#M324</link>
      <description>&lt;TABLE border="1" width="100%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="50%"&gt;&lt;DIV class=""&gt;&lt;H3&gt;How to create a Smart Career Advice and Job Search Engine with Semantic Kernel&lt;/H3&gt;&lt;/DIV&gt;&lt;/TD&gt;&lt;TD width="50%"&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H3&gt;The concept&lt;/H3&gt;&lt;H4&gt;The Rise of Semantic Kernel&lt;/H4&gt;&lt;P class=""&gt;Semantic Kernel, an open-source development kit, has taken the .NET community by storm. With support for C#, Python, and Java, it seamlessly integrates with dotnet services and applications. But what makes it truly remarkable? Let’s dive into the details.&lt;/P&gt;&lt;H4&gt;A Perfect Match: Semantic Kernel and .NET&lt;/H4&gt;&lt;P class=""&gt;Picture this: you’re building a web app, and you want to infuse it with AI magic. Enter Semantic Kernel. It’s like the secret sauce that binds your dotnet services and AI capabilities into a harmonious blend. Whether you’re a seasoned developer or just dipping your toes into AI waters, &lt;STRONG&gt;Semantic Kernel&lt;/STRONG&gt; simplifies the process. As part of the Semantic Kernel community, I’ve witnessed its evolution firsthand. The collaborative spirit, the shared knowledge—it’s electrifying! We’re not just building software; we’re shaping the future of AI-driven web applications.&lt;/P&gt;&lt;H4&gt;The Web App&lt;/H4&gt;&lt;P class=""&gt;Our initial plan was simple: create a job recommendations engine. But Semantic Kernel had other ideas. It took us on an exhilarating ride. Now, our web application not only suggests career paths but also taps into third-party APIs to fetch relevant job listings. And that’s not all—it even crafts personalized skilling plans and preps candidates for interviews. 
Talk about exceeding expectations!&lt;/P&gt;&lt;H3&gt;Build&lt;/H3&gt;&lt;P class=""&gt;Since I have already created the repository on &lt;A href="https://github.com/passadis/semantickernel-careeradvice" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;, I don't think it is critical to repost the Terraform files here. We build our main infrastructure with Terraform and also invoke an Azure CLI script to automate the container image build and push. We will have these resources at the end:&lt;/P&gt;&lt;P class=""&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;Before deployment, make sure to assign the Service Principal the “RBAC Administrator” role, with the assignments narrowed down to AcrPull and AcrPush, so you can create a User Assigned Managed Identity with these roles.&lt;/P&gt;&lt;P class=""&gt;Since we are building and pushing the container images with local-exec and Az CLI scripts within Terraform, you will notice some explicit dependencies that make sure everything builds in order. It is really amazing that we can build all the infrastructure, including the apps, with Terraform!&lt;/P&gt;&lt;H3&gt;Architecture&lt;/H3&gt;&lt;P class=""&gt;Upon completion you will have a functioning React web app with an ASP.NET Core web API, utilizing Semantic Kernel and an external job listings API to get advice, find jobs, and get a skilling plan for a specific recommended role! The following is a reference architecture. Aside from the Private Endpoints, the same deployment is available on GitHub.&lt;/P&gt;&lt;P class=""&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H3&gt;Kernel SDK&lt;/H3&gt;&lt;P class=""&gt;The SDK provides a simple yet powerful array of commands to configure and “set” the Semantic Kernel characteristics. Let's look at the first endpoint, where users ask for recommended career paths:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="csharp"&gt;        [HttpPost("get-recommendations")]
        public async Task&amp;lt;IActionResult&amp;gt; GetRecommendations([FromBody] UserInput userInput)
        {
            _logger.LogInformation("Received user input: {Skills}, {Interests}, {Experience}", userInput.Skills, userInput.Interests, userInput.Experience);

            var query = $"I have the following skills: {userInput.Skills}. " +
                        $"My interests are: {userInput.Interests}. " +
                        $"My experience includes: {userInput.Experience}. " +
                        "Based on this information, what career paths would you recommend for me?";

            var history = new ChatHistory();
            history.AddUserMessage(query);

            ChatMessageContent? result = await _chatCompletionService.GetChatMessageContentAsync(history);
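
            // Assumption (not shown in the article): _chatCompletionService is an
            // IChatCompletionService injected via DI, registered roughly as:
            //   builder.Services.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);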

            if (result == null)
            {
                _logger.LogError("Received null result from the chat completion service.");
                return StatusCode(500, "Error processing your request.");
            }

            string content = result.Content;

            _logger.LogInformation("Received content: {Content}", content);

            var recommendations = ParseRecommendations(content);

            _logger.LogInformation("Returning recommendations: {Count}", recommendations.Count);

            return Ok(new { recommendations });
        }&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The actual data flow is depicted below; we can see the interaction with the local endpoints as well as the external endpoint. The user provides skills, interests, experience, and the level of their current position, and the API sends the payload to Semantic Kernel with a constructed prompt asking for position recommendations. The recommendations return with clickable buttons: one to find relevant positions from LinkedIn listings using the external API, and another to ask the Semantic Kernel again for skill-up advice!&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The UI experience:&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Recommendations:&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Skill Up Plan:&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Job Listings:&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;The project can be extended to a point of automation and AI integration where users can upload their CVs and ask the Semantic Kernel to provide feedback, as well as apply for a specific position! As we discussed earlier, some additional optimizations are good to have, like the Private Endpoints, Azure Front Door, and/or Azure Firewall, but the point is to see Semantic Kernel in action with its amazing capabilities, especially when used within the .NET SDK.&lt;/P&gt;&lt;P class=""&gt;&lt;EM&gt;Important Note: This could have been a one-shot deployment, but we cannot add the custom domain with Terraform (&lt;STRONG&gt;unless we use Azure DNS&lt;/STRONG&gt;), nor the CORS settings. So we have to add these details for our solution to function properly!&lt;/EM&gt;&lt;/P&gt;&lt;P class=""&gt;Once the Terraform run completes, add the custom domains to both Container Apps.
The advantage here is that we will know the frontend and backend FQDNs, since we decide the domain name, and the React environment value is preconfigured with the backend URL. The same goes for the backend: we have set the frontend URL as the environment value for ALLOWED_ORIGINS. So we can just go to Custom Domain on each app and add the domain names after selecting the certificate, which will already be there, since we uploaded it via Terraform!&lt;/P&gt;&lt;P class=""&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H3&gt;Lessons Learned&lt;/H3&gt;&lt;P class=""&gt;This was a real adventure, and I want to share some important lessons learned and hopefully save you some time and effort. Prepare ahead with a &lt;STRONG&gt;certificate&lt;/STRONG&gt;. I was having problems from the get-go, with ASP.NET refusing to build on containers until I integrated the certificate; local development works fine without it. &lt;STRONG&gt;Cross-origin&lt;/STRONG&gt; configuration is very important, so do not underestimate it! Configure it correctly; in this example I went directly to &lt;STRONG&gt;Custom Domains&lt;/STRONG&gt;, so I could have better overall control. This solution worked both on &lt;STRONG&gt;Azure Web Apps&lt;/STRONG&gt; and &lt;STRONG&gt;Azure Container Apps&lt;/STRONG&gt;. The GitHub repo has the Container Apps solution, but you can go with Web Apps. Finally, don't waste your time going with Dapr. React does not ‘react’ well with the Dapr client; my lesson learned here is that Dapr is made for same-framework invocation, or you are going to need a middleware. Since we cannot create the custom domain with Terraform, there are solutions we can use, like AzApi. We utilized a small portion of what Semantic Kernel can really do, and I stopped when I realized that this project would never end if I kept pursuing ideas!
It is much better to have it on GitHub, where we can come back and add more features!&lt;/P&gt;&lt;H3&gt;Conclusion&lt;/H3&gt;&lt;P class=""&gt;In this journey through the intersection of technology and career guidance, we’ve explored the powerful capabilities of Azure Container Apps and the transformative potential of Semantic Kernel, Microsoft’s open-source development kit. By seamlessly integrating AI into .NET applications, Semantic Kernel has not only simplified the development process but also opened new doors for innovation in career advice.&lt;/P&gt;&lt;P class=""&gt;Our adventure began with a simple idea: creating a job recommendations engine. However, with the help of Semantic Kernel, this idea evolved into a sophisticated web application that goes beyond recommendations. It connects to third-party APIs, crafts personalized skilling plans, and prepares candidates for interviews, demonstrating the true power of AI-driven solutions.&lt;/P&gt;&lt;P class=""&gt;By leveraging Terraform for infrastructure management and Azure CLI for automating container builds, we successfully deployed a robust architecture that includes a React Web App, ASP.NET Core web API, and integrated AI services. This project highlights the ease and efficiency of building and deploying cloud-based applications with modern tools. 
The code is available on GitHub for you to explore, contribute to, and extend as much as you want!&lt;/P&gt;&lt;P class=""&gt;GitHub Repo: &lt;A title="Semantic Kernel - Career Advice" href="https://github.com/passadis/semantickernel-careeradvice" target="_blank" rel="noopener"&gt;Semantic Kernel - Career Advice&lt;/A&gt;&lt;/P&gt;&lt;H4&gt;Links/References&lt;/H4&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/semantic-kernel/overview/" target="_blank" rel="noopener"&gt;Intro to Semantic Kernel&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/semantic-kernel/concepts/kernel?pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Understanding the kernel&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/semantic-kernel/concepts/ai-services/chat-completion/?tabs=csharp-AzureOpenAI%2Cpython-AzureOpenAI%2Cjava-AzureOpenAI&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Chat completion&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/semantic-kernel/get-started/detailed-samples?pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Deep dive into Semantic Kernel&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/container-apps/" target="_blank" rel="noopener"&gt;Azure Container Apps documentation&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Sun, 04 Aug 2024 15:40:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/semantic-kernel-develop-your-ai-integrated-web-app-on-azure-and/m-p/4209484#M324</guid>
      <dc:creator>KonstantinosPassadis</dc:creator>
      <dc:date>2024-08-04T15:40:46Z</dc:date>
    </item>
    <item>
      <title>Kernel Memory - Retrieval Augmented Generation (RAG) using Azure Open AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/kernel-memory-retrieval-augmented-generation-rag-using-azure/m-p/4177244#M323</link>
      <description>&lt;P&gt;Hello Community,&lt;BR /&gt;I am seeking guidance here. I am looking for Kernel Memory - Retrieval Augmented Generation (&lt;STRONG&gt;RAG&lt;/STRONG&gt;) using&amp;nbsp;&lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt;, which can read a file into kernel memory so that I can ask questions and it can answer based on that memory. I want to use .NET Core for the implementation.&lt;/P&gt;&lt;P&gt;I have referred to the article below, but I did not find any configuration related to Azure OpenAI.&lt;/P&gt;&lt;P&gt;&lt;A href="https://github.com/microsoft/kernel-memory/tree/main" target="_blank" rel="ugc noopener noreferrer"&gt;https://github.com/microsoft/kernel-memory/tree/main&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jun 2024 10:08:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/kernel-memory-retrieval-augmented-generation-rag-using-azure/m-p/4177244#M323</guid>
      <dc:creator>Bhavin163884</dc:creator>
      <dc:date>2024-06-27T10:08:11Z</dc:date>
    </item>
    <item>
      <title>Third Party NVA in Azure VMware Solution</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/third-party-nva-in-azure-vmware-solution/m-p/4176386#M322</link>
      <description>&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;P class=""&gt;Hi all,&lt;/P&gt;&lt;P class=""&gt;&lt;BR /&gt;I am following the link below to get more information on how to deploy a third-party NVA. However, I would like to know if you have any other detailed documentation and considerations that I can follow during my initial discussions with customers.&lt;BR /&gt;&lt;BR /&gt;&lt;A class="" href="https://vuptime.io/post/2023-07-24-third-party-nva-in-avs-nsxt/#:~:text=In%20order%20to%20deploy%20a,and%20to%20the%20NVA%20uplink" target="_blank" rel="noopener"&gt;https://vuptime.io/post/2023-07-24-third-party-nva-in-avs-nsxt/#:~:text=In%20order%20to%20deploy%20a,and%20to%20the%20NVA%20uplink&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Appreciate your support!&lt;/P&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Wed, 26 Jun 2024 11:19:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/third-party-nva-in-azure-vmware-solution/m-p/4176386#M322</guid>
      <dc:creator>pravesh_kaushal</dc:creator>
      <dc:date>2024-06-26T11:19:43Z</dc:date>
    </item>
    <item>
      <title>Unable to create logic app</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/unable-to-create-logic-app/m-p/4170858#M321</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am new to Azure and am learning the concepts.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I am using a free subscription and trying to create a logic app. It is showing me the message below and not allowing me to create the logic app.&lt;/P&gt;&lt;P&gt;Can someone help me with the issue and how to overcome it?&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Tue, 18 Jun 2024 16:06:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/unable-to-create-logic-app/m-p/4170858#M321</guid>
      <dc:creator>Azuredoubts1209</dc:creator>
      <dc:date>2024-06-18T16:06:57Z</dc:date>
    </item>
    <item>
      <title>Azure Entra Connect</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-entra-connect/m-p/4167331#M318</link>
      <description>&lt;P&gt;Hi all, I am looking for some direction on how to deal with an issue that has arisen.&lt;/P&gt;&lt;P&gt;I have a client with an in-house on-premises domain that has 2 domain controllers.&lt;/P&gt;&lt;P&gt;1 DC is Windows 2012 R2 and has an older version of Active Sync installed that stopped syncing in October 2023. The other DC is Windows 2022 and will become the 'PDC' soon, as the old one will be retired soon. Due to many reasons, this was left past the desired time to be dealt with.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have looked at this until my eyes have crossed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Should I just disable sync using the web-based PowerShell and then start fresh with Entra Connect on the Windows 2022 server? The sync was mostly for efficiency, so that Exchange O365 matched users as we created them, with password sync as well. Therefore, there should not be any issues that I can see, considering the sync has not been active since October 2023?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Or should I set up the new server in staging mode, test the settings, and then make it active? How will the orphaned server be dealt with, since it has no way to even communicate that it is in staging mode?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for any thoughts&lt;/P&gt;</description>
      <pubDate>Thu, 13 Jun 2024 14:15:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-entra-connect/m-p/4167331#M318</guid>
      <dc:creator>rsclark1113</dc:creator>
      <dc:date>2024-06-13T14:15:02Z</dc:date>
    </item>
    <item>
      <title>Connecting AIS (Logic Apps) to On-Prem resources</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/connecting-ais-logic-apps-to-on-prem-resources/m-p/4159242#M316</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We're currently using an on-prem BizTalk ESB for our integrations, and I'm tasked with scoping the transition over to AIS.&lt;/P&gt;&lt;P&gt;I've figured out the appropriate tooling required and what services we're likely to leverage within AIS, but there will be a strong dependency on on-prem connections (most of our services are maintained within internal SQL DBs and network shares).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've done some further reading on the Azure Data Gateway and can see that we can connect our local SQL DBs to Logic Apps through it, but is it possible for the data gateway to poll/listen for SQL data changes?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Chris&lt;/P&gt;</description>
      <pubDate>Tue, 04 Jun 2024 08:37:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/connecting-ais-logic-apps-to-on-prem-resources/m-p/4159242#M316</guid>
      <dc:creator>Chris_Lupton</dc:creator>
      <dc:date>2024-06-04T08:37:22Z</dc:date>
    </item>
    <item>
      <title>Microsoft Entra SSO integration with FortiGate SSL VPN issue</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/microsoft-entra-sso-integration-with-fortigate-ssl-vpn-issue/m-p/4131495#M314</link>
      <description>&lt;P&gt;Scenario: Microsoft Entra SSO integration with FortiGate SSL VPN&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I am unable to connect via FortiClient VPN version 7.2.x.x.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;However, when I use FortiClient VPN version 7.0.x.x.x to connect to the SSL VPN via Entra ID with SAML authentication, it connects on the 2nd or 3rd attempt every time, never on the first. On the first attempt it asks for 2FA but does not connect; when I try again, the 2nd or 3rd attempt connects directly without a 2FA prompt. Is this a bug or a configuration issue on the FortiGate firewall side or on the Azure FortiGate SSL VPN application side? Please suggest.&lt;/P&gt;</description>
      <pubDate>Sun, 05 May 2024 06:26:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/microsoft-entra-sso-integration-with-fortigate-ssl-vpn-issue/m-p/4131495#M314</guid>
      <dc:creator>Zohaib_Yousuf</dc:creator>
      <dc:date>2024-05-05T06:26:00Z</dc:date>
    </item>
    <item>
      <title>Azure Text to Speech with Container Apps</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-text-to-speech-with-container-apps/m-p/4087187#M312</link>
      <description>&lt;TABLE border="1" width="100%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="50%"&gt;&lt;H3&gt;Azure Text to Speech with Container Apps&lt;/H3&gt;&lt;/TD&gt;&lt;TD width="50%"&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P class=""&gt;Imagine interacting with not just one, but three distinct speaking agents, each bringing their unique flair to life right through your React web UI. Whether it’s getting the latest weather updates, catching up on breaking news, or staying on top of the stock exchange, our agents have got you covered.&lt;/P&gt;&lt;P class=""&gt;We’ve seamlessly integrated the Azure Speech SDK with a modular architecture and dynamic external API calls, creating an experience that’s as efficient as it is enjoyable.&lt;/P&gt;&lt;P class=""&gt;What sets this application apart is its versatility. Choose your preferred agent, like the News Agent, and watch as it transforms data fetched from a news API into speech, courtesy of the Azure Speech Service. The result? Crisp, clear audio that you can either savor live on the UI or download as an MP3 file for on-the-go convenience.&lt;/P&gt;&lt;P class=""&gt;But that’s not all. We’ve infused the application with a range of Python modules, each offering different voices, adding layers of personality and depth to the user experience. It’s a testament to the power of AI speech capabilities and modern web development, making it an exciting project for any IT professional to explore and build upon.&lt;/P&gt;&lt;H3&gt;Requirements&lt;/H3&gt;&lt;P class=""&gt;Our project is built with the help of &lt;STRONG&gt;VSCode&lt;/STRONG&gt;, &lt;STRONG&gt;Azure CLI&lt;/STRONG&gt;, &lt;STRONG&gt;React &lt;/STRONG&gt;and &lt;STRONG&gt;Python&lt;/STRONG&gt;. We need an &lt;STRONG&gt;Azure Subscription&lt;/STRONG&gt; to create &lt;STRONG&gt;Azure Container Apps&lt;/STRONG&gt; and an &lt;STRONG&gt;Azure Speech&lt;/STRONG&gt; service resource. 
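The Azure Speech resource from the requirements can itself be created with the Azure CLI; a minimal sketch, assuming placeholder resource and group names and the free F0 tier (check tier and region availability for your subscription):

```shell
# Hedged sketch: create the Azure AI Speech resource used by the backend.
# Resource and group names are placeholders, not the ones from this project.
az cognitiveservices account create \
  --name speech-demo \
  --resource-group rg-demo \
  --kind SpeechServices \
  --sku F0 \
  --location northeurope

# Retrieve a key to use as the backend's SPEECH_KEY environment variable:
az cognitiveservices account keys list \
  --name speech-demo \
  --resource-group rg-demo
```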
We will build our Docker images directly in Azure Container Registry and create the relevant ingress configurations. Additional security should be taken into account, like Private Endpoints and Front Door, in case you want to run this as a production application.&lt;/P&gt;&lt;H3&gt;Build&lt;/H3&gt;&lt;P class=""&gt;We are building a simple React web UI and containerizing it, while the interesting part of our code lies in the modular design of the Python backend. It is also a Docker container image, with a main application and three different Python modules, each one responsible for its respective agent. Visual elements make the UI quite friendly and simple to understand and use. The user selects an agent and presses the 'TALK' button. The backend fetches data from the selected API (GNews, Open-Meteo, or Alpha Vantage), sends the text to the Azure Speech Service, and returns the audio to be played on the UI with a small player, also providing a download link for the MP3. Each time we select and activate a different agent, the file is updated with the new audio.&lt;/P&gt;&lt;P class=""&gt;Let's have a look at the React build:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="javascript"&gt;import React, { useState } from 'react';
import './App.css';
import logo from './logo.png';
import avatarRita from './assets/rita.png';
import avatarMark from './assets/mark.png';
import avatarMary from './assets/mary.png';

function App() {
  const [activeAgent, setActiveAgent] = useState(null);
  const [audioUrl, setAudioUrl] = useState(null);  // object URL for the MP3 download link
  const [audioStream, setAudioStream] = useState(null);  // object URL played by the audio element
  const [stockSymbol, setStockSymbol] = useState('');
   
  const handleAgentClick = (agent) =&amp;gt; {
    setActiveAgent(agent);
  };

  const handleCommand = async (command) =&amp;gt; {
   if (command === 'TALK') {
            let endpoint = '';
            let bodyData = {};
            if (activeAgent === 'rita') {
              endpoint = '/talk-to-rita';
              bodyData = { text: "Good Morning to everyone" };// Add any specific data or parameters for RITA if required
            } else if (activeAgent === 'mark') {
              endpoint = '/talk-to-mark';
              // Add any specific data or parameters for MARK if required
            } else if (activeAgent === 'mary' &amp;amp;&amp;amp; stockSymbol) {
              endpoint = '/talk-to-mary';
              bodyData = { symbol: stockSymbol };
            } else {
              console.error('Agent not selected or stock symbol not provided');
              return;
            }
      
            try {
              const response = await fetch(`${process.env.REACT_APP_API_BASE_URL}${endpoint}`, {
                method: 'POST',
                headers: {
                  'Content-Type': 'application/json',
                },
                body: JSON.stringify(bodyData),// Add body data if needed for the specific agent
              });
              const data = await response.json();        
  
        if (response.ok) {
          const audioContent = base64ToArrayBuffer(data.audioContent); // Convert base64 to ArrayBuffer
          const blob = new Blob([audioContent], { type: 'audio/mp3' });
          const url = URL.createObjectURL(blob);
          setAudioUrl(url);  // Update state
          setAudioStream(url);
        } else {
          console.error('Response error:', data);
        }
      } catch (error) {
        console.error('Error:', error);
      }
    }
  };
    // Function to convert base64 to ArrayBuffer
    function base64ToArrayBuffer(base64) {
      const binaryString = window.atob(base64);
      const len = binaryString.length;
      const bytes = new Uint8Array(len);
      for (let i = 0; i &amp;lt; len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
      }
      return bytes.buffer;
    }


  return (
    &amp;lt;div className="App"&amp;gt;
      &amp;lt;header className="navbar"&amp;gt;
        &amp;lt;span&amp;gt;DATE: {new Date().toLocaleDateString()}&amp;lt;/span&amp;gt;
        &amp;lt;span&amp;gt;   &amp;lt;/span&amp;gt;
      &amp;lt;/header&amp;gt;
      &amp;lt;h1&amp;gt;Welcome to MultiChat!&amp;lt;/h1&amp;gt;
      &amp;lt;h2&amp;gt;Choose an agent to start the conversation&amp;lt;/h2&amp;gt;
      &amp;lt;h3&amp;gt;Select Rita for Weather, Mark for Headlines and Mary for Stocks&amp;lt;/h3&amp;gt;
      &amp;lt;img src={logo} className="logo" alt="logo" /&amp;gt;
      &amp;lt;div className="avatar-container"&amp;gt;
        &amp;lt;div className={`avatar ${activeAgent === 'rita' ? 'active' : ''}`} onClick={() =&amp;gt; handleAgentClick('rita')}&amp;gt;
          &amp;lt;img src={avatarRita} alt="Rita" /&amp;gt;
          &amp;lt;p&amp;gt;RITA&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div className={`avatar ${activeAgent === 'mark' ? 'active' : ''}`} onClick={() =&amp;gt; handleAgentClick('mark')}&amp;gt;
          &amp;lt;img src={avatarMark} alt="Mark" /&amp;gt;
          &amp;lt;p&amp;gt;MARK&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div className={`avatar ${activeAgent === 'mary' ? 'active' : ''}`} onClick={() =&amp;gt; handleAgentClick('mary')}&amp;gt;
          &amp;lt;img src={avatarMary} alt="Mary" /&amp;gt;
          &amp;lt;p&amp;gt;MARY&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;div&amp;gt;
        {activeAgent === 'mary' &amp;amp;&amp;amp; (
          &amp;lt;input 
            type="text" 
            placeholder="Enter Stock Symbol" 
            value={stockSymbol} 
            onChange={(e) =&amp;gt; setStockSymbol(e.target.value)}
            className="stock-input"
          /&amp;gt;
        )}
      &amp;lt;/div&amp;gt;      
      &amp;lt;div className="controls"&amp;gt;
        &amp;lt;button onClick={() =&amp;gt; handleCommand('TALK')}&amp;gt;TALK&amp;lt;/button&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;div className="audio-container"&amp;gt;
        {audioStream &amp;amp;&amp;amp; &amp;lt;audio src={audioStream} controls autoPlay /&amp;gt;}
        {audioUrl &amp;amp;&amp;amp; (
          &amp;lt;a href={audioUrl} download="speech.mp3" className="download-link"&amp;gt;
            Download MP3
          &amp;lt;/a&amp;gt;
        )}
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}

export default App;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;The CSS is available on GitHub, and this is the final result:&lt;/P&gt;&lt;P class=""&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;Now, the Python backend is the force that makes this web app a real application! Let’s have a look at our app.py and the three different modules: weather_service.py, news_service.py, and stock_service.py. Keep in mind that the external APIs used here are free, and we can adjust our calls to our needs based on the documentation of each API and its capabilities. For example, the stock agent brings up a text box to enter the stock symbol you want information about.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import os
import base64
from flask import Flask, request, jsonify
import azure.cognitiveservices.speech as speechsdk
import weather_service
import news_service
import stock_service
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

# Azure Speech Service configuration using environment variables
speech_key = os.getenv('SPEECH_KEY')
speech_region = os.getenv('SPEECH_REGION')
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=speech_region)

# Set the voice name (optional, remove if you want to use the default voice)
speech_config.speech_synthesis_voice_name='en-US-JennyNeural'

def text_to_speech(text, voice_name='en-US-JennyNeural'):
    try:
        # Set the synthesis output format to MP3
        speech_config.set_speech_synthesis_output_format(speechsdk.SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3)
        
        # Set the voice name dynamically
        speech_config.speech_synthesis_voice_name = voice_name

        # Create a synthesizer with no audio output (null output)
        synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
        result = synthesizer.speak_text_async(text).get()

        # Check result
        if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
            print("Speech synthesized for text [{}]".format(text))
            return result.audio_data  # This is in MP3 format
        elif result.reason == speechsdk.ResultReason.Canceled:
            cancellation_details = result.cancellation_details
            print("Speech synthesis canceled: {}".format(cancellation_details.reason))
            print("Error details: {}".format(cancellation_details.error_details))
            return None
    except Exception as e:
        print(f"Error in text_to_speech: {e}")
        return None

@app.route('/talk-to-rita', methods=['POST'])
def talk_to_rita():
    try:
        # Use default coordinates or get them from request
        latitude = 37.98  # Default latitude
        longitude = 23.72  # Default longitude
        data = request.json
        if data:
            latitude = data.get('latitude', latitude)
            longitude = data.get('longitude', longitude)

        # Get weather description using the weather service
        descriptive_text = weather_service.get_weather_description(latitude, longitude)
        
        if descriptive_text:
            
            audio_content = text_to_speech(descriptive_text, 'en-US-JennyNeural')  # Use the US voice
            #audio_content = text_to_speech(descriptive_text)
            if audio_content:
                # Convert audio_content to base64 for JSON response
                audio_base64 = base64.b64encode(audio_content).decode('utf-8')
                return jsonify({"audioContent": audio_base64}), 200
            else:
                return jsonify({"error": "Failed to synthesize speech"}), 500
        else:
            return jsonify({"error": "Failed to get weather description"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 500
        
@app.route('/talk-to-mark', methods=['POST'])
def talk_to_mark():
    try:
        gnews_api_key = os.getenv('GNEWS_API_KEY')
        news_headlines = news_service.fetch_greek_news(gnews_api_key)

        # Set the language to Greek for MARK
        # speech_config.speech_synthesis_voice_name = 'el-GR-AthinaNeural'  # Example Greek voice

        audio_content = text_to_speech(news_headlines, 'el-GR-NestorasNeural')  # Use the Greek voice

        if audio_content:
            audio_base64 = base64.b64encode(audio_content).decode('utf-8')
            return jsonify({"audioContent": audio_base64}), 200
        else:
            return jsonify({"error": "Failed to synthesize speech"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 500
        
@app.route('/talk-to-mary', methods=['POST'])
def talk_to_mary():
    try:
        data = request.json
        stock_symbol = data.get('symbol')  # Extract the stock symbol from the request

        if not stock_symbol:
            return jsonify({"error": "No stock symbol provided"}), 400

        api_key = os.getenv('ALPHAVANTAGE_API_KEY')  # Get your Alpha Vantage API key from the environment variable
        stock_info = stock_service.fetch_stock_quote(api_key, stock_symbol)

        audio_content = text_to_speech(stock_info, 'en-US-JennyNeural')  # Use an English voice for Mary
        if audio_content:
            audio_base64 = base64.b64encode(audio_content).decode('utf-8')
            return jsonify({"audioContent": audio_base64}), 200
        else:
            return jsonify({"error": "Failed to synthesize speech"}), 500
    except Exception as e:
        print(f"Error in /talk-to-mary: {e}")
        return jsonify({"error": str(e)}), 500
        
if __name__ == '__main__':
    app.run(debug=True)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And here is the sample weather_service.py:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import requests_cache
import pandas as pd
from retry_requests import retry
import openmeteo_requests

# Function to create descriptive text for each day's weather
def create_weather_descriptions(df):
    descriptions = []
    for index, row in df.iterrows():
        description = (f"On {row['date'].strftime('%Y-%m-%d')}, the maximum temperature is {row['temperature_2m_max']}°C, "
                      f"the minimum temperature is {row['temperature_2m_min']}°C, "
                      f"and the total rainfall is {row['rain_sum']}mm.")
        descriptions.append(description)
    return descriptions

# Setup the Open-Meteo API client with cache and retry on error
cache_session = requests_cache.CachedSession('.cache', expire_after=3600)
retry_session = retry(cache_session, retries=5, backoff_factor=0.2)
openmeteo = openmeteo_requests.Client(session=retry_session)

def fetch_weather_data(latitude=37.98, longitude=23.72): # Default coordinates for Athens, Greece
    # Define the API request parameters
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "daily": ["weather_code", "temperature_2m_max", "temperature_2m_min", "rain_sum"],
        "timezone": "auto"
    }

    # Make the API call
    url = "https://api.open-meteo.com/v1/forecast"
    responses = openmeteo.weather_api(url, params=params)

    # Process the response and return daily data as a DataFrame
    response = responses[0]
    daily = response.Daily()
    daily_dataframe = pd.DataFrame({
        "date": pd.date_range(
            start=pd.to_datetime(daily.Time(), unit="s", utc=True),
            end=pd.to_datetime(daily.TimeEnd(), unit="s", utc=True),
            freq=pd.Timedelta(seconds=daily.Interval()),
            inclusive="left"
        ),
        "weather_code": daily.Variables(0).ValuesAsNumpy(),
        "temperature_2m_max": daily.Variables(1).ValuesAsNumpy(),
        "temperature_2m_min": daily.Variables(2).ValuesAsNumpy(),
        "rain_sum": daily.Variables(3).ValuesAsNumpy()
    })

    return daily_dataframe

def get_weather_description(latitude, longitude):
    # Fetch the weather data
    weather_data = fetch_weather_data(latitude, longitude)
    
    # Create weather descriptions from the data
    weather_descriptions = create_weather_descriptions(weather_data)
    return ' '.join(weather_descriptions)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Refer to the GitHub repo for the other modules and the Dockerfiles as well.&lt;/P&gt;&lt;P&gt;Here are the Azure CLI commands we need to execute in order to build, tag, and push our images to Container Registry and pull them as Container Apps into our environment on Azure:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="bash"&gt;## Run these before anything else:
az login
az extension add --name containerapp --upgrade
az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights

## Load your resources into variables (bash syntax, matching the rest of the script):
RESOURCE_GROUP="rg-demo24"
LOCATION="northeurope"
ENVIRONMENT="env-web-x24"
FRONTEND="frontend"
BACKEND="backend"
ACR="acrx2024"

## Create a Resource Group, a Container Registry and a Container Apps Environment:
az group create --name $RESOURCE_GROUP --location "$LOCATION"
az acr create --resource-group $RESOURCE_GROUP --name $ACR --sku Basic --admin-enabled true
az containerapp env create --name $ENVIRONMENT -g $RESOURCE_GROUP --location "$LOCATION"

## Login from your Terminal to ACR:
az acr login --name $ACR

## Make sure to cd into the backend directory where your Dockerfile is, then build:
az acr build --registry $ACR --image backendtts .

## Create your Backend Container App:
az containerapp create \
 --name backendtts \
 --resource-group $RESOURCE_GROUP \
 --environment $ENVIRONMENT \
 --image "$ACR.azurecr.io/backendtts:latest" \
 --target-port 5000 \
 --env-vars SPEECH_KEY=xxxxxxxxxx SPEECH_REGION=northeurope \
 --ingress 'external' \
 --registry-server "$ACR.azurecr.io" \
 --query properties.configuration.ingress.fqdn

## Make sure to cd into the React Frontend directory where your Dockerfile is:
az acr build --registry $ACR --image frontendtts .

## Create your Frontend:
az containerapp create --name frontendtts --resource-group $RESOURCE_GROUP \
 --environment $ENVIRONMENT \
 --image "$ACR.azurecr.io/frontendtts:latest" \
 --target-port 80 --ingress 'external' \
 --registry-server "$ACR.azurecr.io" \
 --query properties.configuration.ingress.fqdn &lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;Now we usually need to have the Web UI up and running so what we do is to set the scaling on each Container App to minimum 1 instance, but this is up to you !&lt;/P&gt;&lt;P class=""&gt;That’s it ! Select your agents and make calls. Hear the audio, download the MP3 and make any changes to your App, just remember to rebuild your image and restart the revision !&lt;/P&gt;&lt;H3&gt;Closing&lt;/H3&gt;&lt;P class=""&gt;As we wrap up this exciting project showcasing the seamless integration of Azure Speech Service with React, Python, and Azure Container Apps, we hope it has sparked your imagination and inspired you to explore the endless possibilities of modern cloud technologies. It’s been an exciting journey combining these powerful tools to create an application that truly speaks to its users. We eagerly look forward to seeing how you, our innovative community, will use these insights to build your own extraordinary projects.&lt;/P&gt;&lt;P class=""&gt;References:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A title="Azure Text to Speech with Container Apps" href="https://github.com/passadis/react-multiagents-speech" target="_blank" rel="noopener"&gt;GitHub Repo&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/container-apps/tutorial-deploy-first-app-cli?tabs=bash" target="_blank" rel="noopener"&gt;Azure Container Apps&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-sdk?tabs=windows%2Cubuntu%2Cios-xcode%2Cmac-xcode%2Candroid-studio" target="_blank" rel="noopener"&gt;Azure Speech SDK&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/overview" target="_blank" rel="noopener"&gt;Azure Speech Service&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A 
href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/get-started-text-to-speech?tabs=windows%2Cterminal&amp;amp;pivots=programming-language-python" target="_blank" rel="noopener"&gt;Quickstart Text to Speech&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P class=""&gt;Architecture:&lt;/P&gt;</description>
      <pubDate>Sat, 16 Mar 2024 01:30:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/azure-text-to-speech-with-container-apps/m-p/4087187#M312</guid>
      <dc:creator>KonstantinosPassadis</dc:creator>
      <dc:date>2024-03-16T01:30:33Z</dc:date>
    </item>
    <item>
      <title>When is gpt-4-0125-preview coming?</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/when-is-gpt-4-0125-preview-coming/m-p/4040955#M308</link>
      <description>&lt;P&gt;OpenAI has new GPT-4 models available.&lt;BR /&gt;&lt;BR /&gt;Approximately how long will it take for them to come to Azure?&lt;/P&gt;</description>
      <pubDate>Fri, 26 Jan 2024 20:46:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/when-is-gpt-4-0125-preview-coming/m-p/4040955#M308</guid>
      <dc:creator>tobiq</dc:creator>
      <dc:date>2024-01-26T20:46:00Z</dc:date>
    </item>
    <item>
      <title>What is the best strategy for combining data?</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/what-is-the-best-strategy-for-combining-data/m-p/4022525#M306</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a data flow in Data Factory that consists of many joins. Each join adds new data to the initial object.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Are you aware of better strategies than joining?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Fri, 05 Jan 2024 18:05:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/what-is-the-best-strategy-for-combining-data/m-p/4022525#M306</guid>
      <dc:creator>knoerregaard</dc:creator>
      <dc:date>2024-01-05T18:05:10Z</dc:date>
    </item>
    <item>
      <title>Check whether an email already exists before verifying it in an Azure AD B2C sign-up flow</title>
      <link>https://techcommunity.microsoft.com/t5/azure-integration-services/want-to-check-email-already-exists-or-not-before-verifying-email/m-p/4020112#M305</link>
      <description>&lt;P&gt;I'm working on a custom sign-up flow in Azure AD B2C, and I want to include a step to check whether an email address already exists before initiating the email verification process. The goal is to enhance the user experience by avoiding unnecessary verification for existing email addresses.&lt;/P&gt;&lt;P&gt;I'm looking for guidance on how to configure a custom user journey that incorporates a technical profile specifically designed to validate the uniqueness of the provided email address. Ideally, I want to collect the user's email, check if it exists, and then proceed with email verification only if the email is new.&lt;/P&gt;&lt;P&gt;If anyone has experience implementing such a scenario or can provide insights into the necessary steps and configurations, I would greatly appreciate your assistance. Additionally, any code snippets or examples related to this specific use case would be extremely helpful.&lt;/P&gt;&lt;P&gt;Thank you in advance for your support!&lt;/P&gt;</description>
      <pubDate>Wed, 03 Jan 2024 05:35:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-integration-services/want-to-check-email-already-exists-or-not-before-verifying-email/m-p/4020112#M305</guid>
      <dc:creator>Akshay85</dc:creator>
      <dc:date>2024-01-03T05:35:23Z</dc:date>
    </item>
  </channel>
</rss>

