Reliable B2B Tracking using Premium SKU Integration Account

In the world of enterprise integration, accurate and reliable tracking of B2B transactions is crucial for maintaining compliance, troubleshooting issues, and ensuring smooth business operations. Organizations that rely on Logic Apps for EDI X12, EDIFACT, or AS2 transactions need robust tracking capabilities to monitor their B2B exchanges effectively.

Currently, in Logic Apps Consumption, B2B tracking is powered by Azure Log Analytics, which provides basic telemetry and logging capabilities. However, this approach has some key limitations:

Limited Query Capabilities – Searching for transactions in Log Analytics can be cumbersome, especially when dealing with large-scale enterprise data.
Retention and Performance Issues – Log Analytics is optimized for general telemetry, not for high-volume, structured B2B transaction tracking, making it challenging for organizations with strict compliance requirements.

To address these challenges, we are introducing Reliable B2B Tracking in Logic Apps Standard using a Premium SKU Integration Account. This new feature ensures that all B2B transactions are reliably tracked and ingested into an Azure Data Explorer (ADX) cluster, providing a lossless tracking mechanism with powerful querying and visualization capabilities. With data in ADX, customers can extend their existing Power BI dashboards or easily build custom dashboards on this data if they need detailed analysis of any issue. Additionally, a tracking dashboard is available, enabling customers to monitor, search, and analyze B2B transactions efficiently. This enhancement significantly improves reliability, visibility, and troubleshooting for mission-critical B2B integrations.

How It Works

Reliable B2B Tracking in Logic Apps Standard ensures that every AS2, X12, and EDIFACT transaction is accurately recorded and stored in an Azure Data Explorer (ADX) cluster database instead of relying on Azure Log Analytics, which may drop events. Here's how the system works:

Event Collection – Whenever a B2B transaction occurs, tracking data is generated from the built-in AS2, X12, and EDIFACT actions in Logic Apps Standard.
Data Ingestion – Instead of sending logs to Log Analytics, the tracking data is pushed transactionally to an Azure Data Explorer (ADX) cluster via the integration account to ensure reliable, lossless storage.
Structured Storage – ADX provides fast indexing and query capabilities, allowing enterprises to search, filter, and analyze their transactions efficiently.
Tracking Dashboard – A dedicated B2B monitoring dashboard visualizes transaction flow, helping customers track acknowledgments (997, MDN), detect failures, and troubleshoot issues in real time.

Requirements for Using Reliable B2B Tracking

To enable this feature, customers must meet the following prerequisites:

Premium SKU Integration Account – This feature is only available with a Premium SKU Integration Account in Logic Apps Standard.
Built-in AS2, X12, or EDIFACT Actions – Only transactions processed through Logic Apps Standard using built-in B2B actions will be tracked reliably.
Azure Data Explorer Cluster – Customers must provide their own ADX cluster database, where all transaction logs will be stored and queried.

Please note that B2B tracking for EDIFACT transactions is not supported yet; it will be available in the near future.
How to Use Reliable B2B Tracking

1. Create a Tracking Store Artifact in the Integration Account

In the Integration Account, create a tracking store artifact that points to an existing Azure Data Explorer (ADX) cluster database. Currently, only one default tracking store is supported per integration account. The ADX database must be pre-created before setting up the tracking store.

2. Enable or Disable Tracking in the Agreement Settings

B2B tracking is also managed at the agreement level. By default, tracking is enabled for an agreement. To disable tracking, set the TrackingState setting to Disabled in the send or receive agreement. To enable it again, set TrackingState back to Enabled. Please note that this setting can be updated in JSON view only. For tracking to function correctly, the tracking store in the integration account must be configured and TrackingState must be set to Enabled in the agreement.

Using the Tracking Dashboard

Before using the tracking dashboard, please ensure that some B2B actions have executed so that tracking data is available in the tracking store. When the B2B tracking dashboard is opened, it displays message overview data for 7 days by default. To change the data scope to a different time interval, use the TimeRange control at the top of the page. After the Message Overview Status dashboard loads, users can drill down into specific message types (AS2 or X12) for a more detailed view. Selecting the AS2 or X12 tabs provides insights into message processing details, including transaction status, acknowledgments, and failures.

Managing Tracking Stores via REST API

Reliable B2B Tracking supports a REST API for managing tracking stores. Users can create, update, delete, and retrieve tracking stores programmatically using the following API endpoints. If you choose to use the REST API to create the tracking store in the integration account, then two ADX database tables named AS2TrackRecords and EdiTrackRecords need to be created manually in the ADX database with a specific schema, and the integration account needs to have 'Ingester' permission on the database.

1. Get All Tracking Stores

Retrieves all tracking stores configured in an Integration Account.

GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationAccounts/{integrationAccountName}/groups/default/trackingstores?api-version=2016-06-01

Parameters:
{subscriptionId} – Azure subscription ID.
{resourceGroupName} – Name of the resource group.
{integrationAccountName} – Name of the integration account.

Response: Returns a list of tracking stores associated with the integration account. Please note that currently only one tracking store per integration account is supported.

2. Get a Specific Tracking Store

Retrieves details of a specific tracking store.

GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationAccounts/{integrationAccountName}/groups/default/trackingstores/{trackingstoreName}?api-version=2016-06-01

Parameters:
{trackingstoreName} – Name of the tracking store to retrieve.

Response: Returns details of the specified tracking store.

3. Create or Update a Tracking Store

Creates a new tracking store or updates an existing one.
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationAccounts/{integrationAccountName}/groups/default/trackingstores/{trackingstoreName}?api-version=2016-06-01

Request Body:
{
  "properties": {
    "adxClusterUri": "https://youradxcluster.kusto.windows.net",
    "databaseName": "YourDatabaseName"
  }
}

Parameters:
adxClusterUri – The Azure Data Explorer cluster URI.
databaseName – The database name within the ADX cluster.

Response: Returns the details of the created or updated tracking store.

4. Delete a Tracking Store

Deletes an existing tracking store from the integration account.

DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationAccounts/{integrationAccountName}/groups/default/trackingstores/{trackingstoreName}?api-version=2016-06-01

Parameters:
{trackingstoreName} – Name of the tracking store to delete.

Response: Returns a success response if the tracking store is deleted successfully.
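To illustrate how these endpoints can be called programmatically, here is a minimal Python sketch (not part of the original walkthrough) that invokes the Create or Update a Tracking Store endpoint shown above. It assumes the azure-identity and requests packages and an identity with sufficient permissions on the integration account; the subscription, resource group, account, and tracking store names are placeholders.

# Minimal sketch: create or update an integration account tracking store via the ARM REST API.
# Assumes: pip install azure-identity requests, and an identity with access to the integration account.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscriptionId>"               # placeholder
resource_group = "<resourceGroupName>"             # placeholder
integration_account = "<integrationAccountName>"   # placeholder
tracking_store = "default"                         # placeholder tracking store name

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Logic/integrationAccounts/{integration_account}"
    f"/groups/default/trackingstores/{tracking_store}?api-version=2016-06-01"
)

# Acquire an ARM token for the management endpoint.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

body = {
    "properties": {
        "adxClusterUri": "https://youradxcluster.kusto.windows.net",
        "databaseName": "YourDatabaseName",
    }
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())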
Tracking Database Table Schema

The Azure Data Explorer (ADX) cluster database used for Reliable B2B Tracking stores transaction data in a structured format. AS2 transactions are stored in a table named AS2TrackRecords. X12 and EDIFACT transactions are stored in a table named EdiTrackRecords. These tables enable efficient querying and retrieval of B2B tracking data, providing structured insights into message flow, processing status, and troubleshooting details. Since B2B tracking data is stored in an Azure Data Explorer database, users can leverage Azure Workbooks to create visually rich custom dashboards for analyzing their B2B transactions. If you choose to use the REST API to create the tracking store in the integration account, then these two tables need to be created manually in the ADX database and the integration account needs to have 'Ingester' permission on the database. For more information, see Reliable B2B Tracking Database Schema below.

Future Enhancements

To further improve Reliable B2B Tracking, the following enhancements are planned for future releases:

1. Drill into Workflow Run Details from the Tracking Dashboard – Users will be able to navigate directly from the tracking dashboard to the corresponding Logic Apps workflow run details. This feature will allow for faster troubleshooting by linking B2B transactions to their underlying workflow executions.
2. EDIFACT Transaction Tracking – In the future, EDIFACT message tracking will be fully supported, enabling detailed monitoring of EDIFACT interchanges, functional groups, and message transactions.

Reliable B2B Tracking Database Schema

AS2 Tracking Table – AS2TrackRecords

All AS2 tracking data is stored in the AS2TrackRecords table. Below is the table creation query:

.create table AS2TrackRecords (
    IntegrationAccountSubscriptionId: string, // Subscription ID for the integration account.
    IntegrationAccountResourceGroup: string, // Resource group for the integration account.
    IntegrationAccountName: string, // Integration Account name.
    IntegrationAccountId: string, // Integration Account id.
    WorkflowRunOperationInfo: dynamic, // Workflow run operation information.
    ClientRequestId: string, // Client request id.
    EventTime: datetime, // Time of event.
    Error: dynamic, // Error if any.
    RecordType: string, // Tracking record type.
    Direction: string, // Direction of the message flow, which is either receive or send.
    IsMessageFailed: bool, // Whether the message is failed.
    MessageProperties: dynamic, // Message properties.
    AdditionalProperties: dynamic, // Additional properties.
    TrackingId: string, // Any custom tracking id.
    AgreementName: string, // Name of the agreement to which the messages are resolved.
    As2From: string, // AS2 message sender's name.
    As2To: string, // AS2 message receiver's name.
    ReceiverPartnerName: string, // AS2 message receiver's partner name.
    SenderPartnerName: string, // AS2 message sender's partner name.
    MessageId: string, // AS2 message ID.
    OriginalMessageId: string, // AS2 original message ID.
    CorrelationMessageId: string, // AS2 message ID, to correlate messages with MDNs.
    IsMdnExpected: bool // Whether the MDN is expected.
)
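Once tracking data is flowing, the AS2TrackRecords table can be queried directly. The following Python sketch is a minimal illustration of listing failed AS2 messages from the last 7 days, assuming the azure-kusto-data package and an Azure CLI sign-in; the cluster URI and database name are placeholders.

# Minimal sketch: query failed AS2 messages from the AS2TrackRecords table.
# Assumes: pip install azure-kusto-data, and that you are signed in with the Azure CLI.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://youradxcluster.kusto.windows.net"  # placeholder
database = "YourDatabaseName"                             # placeholder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

query = """
AS2TrackRecords
| where EventTime > ago(7d)
| where IsMessageFailed == true
| project EventTime, Direction, AgreementName, MessageId, Error
| order by EventTime desc
"""

response = client.execute(database, query)
for row in response.primary_results[0]:
    print(row["EventTime"], row["AgreementName"], row["MessageId"])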
X12/EDIFACT Tracking Table – EdiTrackRecords

X12 and EDIFACT tracking data are stored in the EdiTrackRecords table. Below is the table creation query:

.create table EdiTrackRecords (
    IntegrationAccountSubscriptionId: string, // Subscription id of the Integration Account.
    IntegrationAccountResourceGroup: string, // Resource group for the integration account.
    IntegrationAccountName: string, // Integration Account name.
    IntegrationAccountId: string, // Integration Account id.
    WorkflowRunOperationInfo: dynamic, // Workflow run operation information.
    ClientRequestId: string, // Client request id.
    EventTime: datetime, // Time of event.
    Error: dynamic, // Error if any.
    RecordType: string, // Tracking record type.
    Direction: string, // Direction of the message flow, either receive or send.
    IsMessageFailed: bool, // Whether the message is failed.
    MessageProperties: dynamic, // Message properties.
    AdditionalProperties: dynamic, // Additional properties.
    TrackingId: string, // Any custom tracking id.
    AgreementName: string, // Name of the agreement to which the messages are resolved.
    SenderPartnerName: string, // Message sender's partner name.
    ReceiverPartnerName: string, // Message receiver's partner name.
    SenderQualifier: string, // Send partner qualifier.
    SenderIdentifier: string, // Send partner identifier.
    ReceiverQualifier: string, // Receive partner qualifier.
    ReceiverIdentifier: string, // Receive partner identifier.
    TransactionSetControlNumber: string, // Transaction set control number.
    FunctionalGroupControlNumber: string, // Functional group control number.
    InterchangeControlNumber: string, // Interchange control number.
    MessageType: string, // Transaction set or document type.
    RespondingTransactionSetControlNumber: string, // The responding transaction set control number in case of an acknowledgment message.
    RespondingFunctionalGroupControlNumber: string, // The responding functional group control number in case of an acknowledgment.
    RespondingInterchangeControlNumber: string, // The responding interchange control number in case of an acknowledgment.
    ProcessingStatus: string // Processing status of the acknowledgment with these permitted values: Received, Generated, and Sent.
)

Message Properties JSON schema

Both the AS2TrackRecords and EdiTrackRecords tables have a MessageProperties column of dynamic type; its JSON schema differs by track record type.

AS2 Message Track Record Message Properties schema

{
  "direction": "",
  "messageId": "",
  "dispositionType": "",
  "fileName": "",
  "isMessageFailed": "",
  "isMessageSigned": "",
  "isMessageEncrypted": "",
  "isMessageCompressed": "",
  "correlationMessageId": "",
  "incomingHeaders": {},
  "outgoingHeaders": {},
  "isNrrEnabled": "",
  "isMdnExpected": "",
  "mdnType": ""
}

The following list describes the properties in the JSON schema for message properties in an AS2 message track record:

direction (String) – Direction of the message flow, which is either receive or send.
messageId (String) – AS2 message ID.
dispositionType (String) – Message Disposition Notification (MDN) disposition type value.
fileName (String) – File name from the header of the AS2 message.
isMessageFailed (Boolean) – Whether the AS2 message failed.
isMessageSigned (Boolean) – Whether the AS2 message was signed.
isMessageEncrypted (Boolean) – Whether the AS2 message was encrypted.
isMessageCompressed (Boolean) – Whether the AS2 message was compressed.
correlationMessageId (String) – AS2 message ID, to correlate messages with MDNs.
incomingHeaders (Dictionary of JToken) – Incoming AS2 message header details.
outgoingHeaders (Dictionary of JToken) – Outgoing AS2 message header details.
isNrrEnabled (Boolean) – Whether to use the default value if the value isn't known.
isMdnExpected (Boolean) – Whether the MDN is expected.
mdnType (Enum) – Allowed values: NotConfigured, Sync, and Async.

AS2 MDN Track Record Message Properties schema

{
  "direction": "",
  "messageId": "",
  "originalMessageId": "",
  "dispositionType": "",
  "isMessageFailed": "",
  "isMessageSigned": "",
  "isNrrEnabled": "",
  "statusCode": "",
  "micVerificationStatus": "",
  "correlationMessageId": "",
  "incomingHeaders": {},
  "outgoingHeaders": {}
}

The following list describes the properties in the JSON schema for message properties in an AS2 MDN track record:
direction (String) – Direction of the message flow, which is either receive or send.
messageId (String) – AS2 message ID.
originalMessageId (String) – AS2 original message ID.
dispositionType (String) – MDN disposition type value.
isMessageFailed (Boolean) – Whether the AS2 message failed.
isMessageSigned (Boolean) – Whether the AS2 message was signed.
isNrrEnabled (Boolean) – Whether to use the default value if the value isn't known.
statusCode (Enum) – Allowed values: Accepted, Rejected, and AcceptedWithErrors.
micVerificationStatus (Enum) – Allowed values: NotApplicable, Succeeded, and Failed.
correlationMessageId (String) – Correlation ID, which is the ID for the original message that has the MDN configured.
incomingHeaders (Dictionary of JToken) – Incoming message header details.
outgoingHeaders (Dictionary of JToken) – Outgoing message header details.

X12 transaction set Track Record Message Properties schema

{
  "direction": "",
  "interchangeControlNumber": "",
  "functionalGroupControlNumber": "",
  "transactionSetControlNumber": "",
  "CorrelationMessageId": "",
  "messageType": "",
  "isMessageFailed": "",
  "isTechnicalAcknowledgmentExpected": "",
  "isFunctionalAcknowledgmentExpected": "",
  "needAk2LoopForValidMessages": "",
  "segmentsCount": ""
}

The following list describes the properties in the JSON schema for message properties in an X12 transaction set track record:

direction (Enum) – Direction of the message flow, which is either receive or send.
interchangeControlNumber (String) – Interchange control number.
functionalGroupControlNumber (String) – Functional group control number.
transactionSetControlNumber (String) – Transaction set control number.
CorrelationMessageId (String) – Correlation message ID, which is a combination of {AgreementName}{GroupControlNumber}{TransactionSetControlNumber}.
messageType (String) – Transaction set or document type.
isMessageFailed (Boolean) – Whether the X12 message failed.
isTechnicalAcknowledgmentExpected (Boolean) – Whether the technical acknowledgment is needed.
isFunctionalAcknowledgmentExpected (Boolean) – Whether the functional acknowledgment is needed.
needAk2LoopForValidMessages (Boolean) – Whether the AK2 loop is required for a valid message.
segmentsCount (Integer) – Number of segments in the X12 transaction set.

X12 transaction set acknowledgment Track Record Message Properties schema

{
  "direction": "",
  "interchangeControlNumber": "",
  "functionalGroupControlNumber": "",
  "respondingfunctionalGroupControlNumber": "",
  "respondingFunctionalGroupId": "",
  "respondingtransactionSetControlNumber": "",
  "respondingTransactionSetId": "",
  "statusCode": "",
  "processingStatus": "",
  "CorrelationMessageId": "",
  "isMessageFailed": ""
}

The following list describes the properties in the JSON schema for message properties in an X12 transaction set acknowledgment track record:

direction (Enum) – Direction of the message flow, which is either receive or send.
interchangeControlNumber (String) – Interchange control number of the functional acknowledgment. The value populates only for the send side, where a functional acknowledgment is received for the messages sent to the partner.
functionalGroupControlNumber (String) – Functional group control number of the functional acknowledgment. The value populates only for the send side, where a functional acknowledgment is received for the messages sent to the partner.
respondingfunctionalGroupControlNumber (String) – The responding functional group control number.
respondingFunctionalGroupId (String) – The responding functional group ID, which maps to AK101 in the acknowledgment.
respondingtransactionSetControlNumber (String) – The responding transaction set control number.
respondingTransactionSetId (String) – The responding transaction set ID, which maps to AK201 in the acknowledgment.
statusCode (Enum) – Transaction set acknowledgment status code with these permitted values: Accepted, Rejected, and AcceptedWithErrors.
processingStatus (Enum) – Processing status of the acknowledgment with these permitted values: Received, Generated, and Sent.
CorrelationMessageId (String) – Correlation message ID, which is a combination of {AgreementName}{GroupControlNumber}{TransactionSetControlNumber}.
isMessageFailed (Boolean) – Whether the X12 message failed.

X12 interchange Track Record Message Properties schema

{
  "direction": "",
  "interchangeControlNumber": "",
  "isTechnicalAcknowledgmentExpected": "",
  "isMessageFailed": "",
  "isa09": "",
  "isa10": "",
  "isa11": "",
  "isa12": "",
  "isa14": "",
  "isa15": "",
  "isa16": ""
}

The following list describes the properties in the JSON schema for message properties in an X12 interchange track record:

direction (Enum) – Direction of the message flow, which is either receive or send.
interchangeControlNumber (String) – Interchange control number.
isTechnicalAcknowledgmentExpected (Boolean) – Whether the technical acknowledgment is configured in the X12 agreement.
isMessageFailed (Boolean) – Whether the X12 message failed.
isa09 (String) – X12 document interchange date.
isa10 (String) – X12 document interchange time.
isa11 (String) – X12 interchange control standards identifier.
isa12 (String) – X12 interchange control version number.
isa14 (String) – X12 acknowledgment is requested.
isa15 (String) – Indicator for test or production.
isa16 (String) – Element separator.

X12 interchange acknowledgment Track Record Message Properties schema

{
  "direction": "",
  "interchangeControlNumber": "",
  "respondingInterchangeControlNumber": "",
  "isMessageFailed": "",
  "statusCode": "",
  "processingStatus": "",
  "ta102": "",
  "ta103": "",
  "ta105": ""
}

The following list describes the properties in the JSON schema for message properties in an X12 interchange acknowledgment track record:

direction (Enum) – Direction of the message flow, which is either receive or send.
interchangeControlNumber (String) – Interchange control number of the technical acknowledgment that's received from partners.
respondingInterchangeControlNumber (String) – Interchange control number for the technical acknowledgment that's received from partners.
isMessageFailed (Boolean) – Whether the X12 message failed.
statusCode (Enum) – Interchange acknowledgment status code with these permitted values: Accepted, Rejected, and AcceptedWithErrors.
processingStatus (Enum) – Acknowledgment status with these permitted values: Received, Generated, and Sent.
ta102 (String) – Interchange date.
ta103 (String) – Interchange time.
ta105 (String) – Interchange note code.

X12 functional group Message Properties schema

{
  "direction": "",
  "interchangeControlNumber": "",
  "functionalGroupControlNumber": "",
  "isTechnicalAcknowledgmentExpected": "",
  "isFunctionalAcknowledgmentExpected": "",
  "isMessageFailed": "",
  "gs01": "",
  "gs02": "",
  "gs03": "",
  "gs04": "",
  "gs05": "",
  "gs07": "",
  "gs08": ""
}

The following list describes the properties in the JSON schema for message properties in an X12 functional group track record:
direction (Enum) – Direction of the message flow, either receive or send.
interchangeControlNumber (String) – Interchange control number.
functionalGroupControlNumber (String) – Functional group control number.
isTechnicalAcknowledgmentExpected (Boolean) – Whether the technical acknowledgment is configured in the X12 agreement.
isFunctionalAcknowledgmentExpected (Boolean) – Whether the functional acknowledgment is configured in the X12 agreement.
isMessageFailed (Boolean) – Whether the X12 message failed.
gs01 (String) – Functional identifier code.
gs02 (String) – Application sender's code.
gs03 (String) – Application receiver's code.
gs04 (String) – Functional group date.
gs05 (String) – Functional group time.
gs07 (String) – Responsible agency code.
gs08 (String) – Identifier code for the version, release, or industry.

X12 functional group acknowledgment Message Properties schema

{
  "direction": "",
  "interchangeControlNumber": "",
  "functionalGroupControlNumber": "",
  "respondingfunctionalGroupControlNumber": "",
  "respondingFunctionalGroupId": "",
  "isMessageFailed": "",
  "statusCode": "",
  "processingStatus": "",
  "ak903": "",
  "ak904": "",
  "ak9Segment": ""
}

The following list describes the properties in the JSON schema for message properties in an X12 functional group acknowledgment track record:

direction (Enum) – Direction of the message flow, which is either receive or send.
interchangeControlNumber (String) – Interchange control number, which populates for the send side when a technical acknowledgment is received from partners.
functionalGroupControlNumber (String) – Functional group control number of the technical acknowledgment, which populates for the send side when a technical acknowledgment is received from partners.
respondingfunctionalGroupControlNumber (String) – Control number of the original functional group.
respondingFunctionalGroupId (String) – Maps to AK101 in the acknowledgment functional group ID.
isMessageFailed (Boolean) – Whether the X12 message failed.
statusCode (Enum) – Acknowledgment status code with these permitted values: Accepted, Rejected, and AcceptedWithErrors.
processingStatus (Enum) – Processing status of the acknowledgment with these permitted values: Received, Generated, and Sent.
ak903 (String) – Number of transaction sets received.
ak904 (String) – Number of transaction sets accepted in the identified functional group.
ak9Segment (String) – Whether the functional group identified in the AK1 segment is accepted or rejected, and why.

WorkflowRunOperationInfo Column JSON schema

The WorkflowRunOperationInfo column of both the AS2TrackRecords and EdiTrackRecords tables captures details about the Logic Apps Standard action that produced the record. The JSON schema for the workflow run operation information is shown below.

{
  "title": "WorkflowRunOperationInfo",
  "type": "object",
  "properties": {
    "Workflow": {
      "type": "object",
      "properties": {
        "SystemId": { "type": "string", "description": "The workflow system id." },
        "SubscriptionId": { "type": "string", "description": "The subscription id of the workflow." },
        "ResourceGroup": { "type": "string", "description": "The resource group name of the workflow." },
        "LogicAppName": { "type": "string", "description": "The logic app name of the workflow." },
        "Name": { "type": "string", "description": "The name of the workflow." },
        "Version": { "type": "string", "description": "The version of the workflow." }
      }
    },
    "RunInstance": {
      "type": "object",
      "properties": {
        "RunId": { "type": "string", "description": "The logic app run id." },
        "TrackingId": { "type": "string", "description": "The tracking id of the run." },
}, "ClientTrackingId": { "type": "string", "description": "The client tracking id of the run." } } }, "Operation": { "type": "object", "properties": { "OperationName": { "type": "string", "description": "The logic app operation name." }, "RepeatItemScopeName": { "type": "string", "description": "The repeat item scope name." }, "RepeatItemIndex": { "type": "integer", "description": "The repeat item index." }, "RepeatItemBatchIndex": { "type": "integer", "description": "The index of the repeat item batch." }, "TrackingId": { "type": "string", "description": "The tracking id of the logic app operation." }, "CorrelationId": { "type": "string", "description": "The correlation id of the logic app operation." }, "ClientRequestId": { "type": "string", "description": "The client request id of the logic app operation." }, "OperationTrackingId": { "type": "string", "description": "The operation tracking id of the logic app operation." } } } } }Access [Logic Apps / App Services] Site Files with FTPS using Logic Apps
Access [Logic Apps / App Services] Site Files with FTPS using Logic Apps

You may need to access storage files for your site, whether it is a Logic App Standard, Function App, or App Service. Depending on your ASP SKU, these files can be accessed using FTP/FTPS. Some customers encounter difficulties when attempting to connect using Implicit/Explicit FTPS. This post aims to simplify this process by utilizing a Logic App to list files, retrieve file content, and update files. An Explicit FTPS connection is used in this scenario, as it is the one mutually supported by the FTP connector and by the FTP site when using FTPS.

Steps:

Create user/pass credentials to access the FTP site. You can do this from the Portal or using the CLI. You can run a command shell from the reference below to execute the command. Reference: Configure deployment credentials - Azure App Service | Microsoft Learn
CLI: az webapp deployment user set --user-name <username> --password <password>

Enable FTP Basic Authentication on the destination (Logic App Standard, Function App, App Services): It is highly advised to use an "FTPS only" connection, as it provides a secure, encrypted connection. To disable unencrypted FTP, select FTPS Only in FTP state. To disable both FTP and FTPS entirely, select Disabled. When finished, select Save. If using FTPS Only, you must enforce TLS 1.2 or higher by navigating to the TLS/SSL settings page of your web app. TLS 1.0 and 1.1 aren't supported with FTPS Only. Reference: Deploy content using FTP/S - Azure App Service | Microsoft Learn

The FTP connector supports Explicit connections only: FTP - Connectors | Microsoft Learn. For secure FTP, make sure to set up explicit File Transfer Protocol Secure (FTPS), rather than implicit FTPS. Also, some FTP servers, such as ProFTPd, require that you enable the NoSessionReuseRequired option if you use Transport Layer Security (TLS) mode, the successor to Secure Socket Layer (SSL). The FTP connector doesn't work with implicit FTPS and supports only explicit FTP over FTPS, which is an extension of TLS.

Create the Logic App and FTP connection: Create the Logic App workflow and add an FTP action to list the files, or any FTP action based on your requirements. To test the connection for the first time, I recommend using the "List files in folder" action. In the connection configuration:
Server Address: xxxxxxx.ftp.azurewebsites.windows.net (Get this value from the Properties of the destination service; don't add the "ftps://" section nor the "/site/wwwroot" section)
User Name and Password: xxxxxx\xxxxx (This is what we created in the FTP credentials tab under the Deployment Center, in the User scope section, or using the CLI command)
FTP Server Port: 21 (use port 21 to force the connection to be Explicit)
Enabled SSL?: checked (use SSL to force the connection to use FTPS)

After creating the connection, use "/site/wwwroot" to access your folder. Test this and see if it works!

Troubleshooting: Reference: Deploy content using FTP/S - Azure App Service | Microsoft Learn

I recommend securing the connection password using Key Vault. More on that below.

Secure Parameters in Key Vault. Main steps:
Put the connection string in Key Vault.
Give the Logic App access to Key Vault.
Add the reference in App Settings for the Logic App.
The steps are described here: Use Key Vault references - Azure App Service | Microsoft Learn. An example of this is at the end of this article: A walkthrough of parameterization of different connection types in Logic App Standard | Microsoft Community Hub

And that's how you access those files!
You can make use of this secure connection for multiple tasks based on your requirements.
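If you prefer to validate the FTPS endpoint outside of the Logic App connector, here is a minimal Python sketch that makes the same kind of explicit FTPS connection with the standard library's ftplib, using the deployment credentials and the /site/wwwroot path described above. The host name and credentials are placeholders.

# Minimal sketch: list files under /site/wwwroot over explicit FTPS (FTP over TLS on port 21).
# Assumes the FTPS endpoint and deployment credentials described above; all values are placeholders.
from ftplib import FTP_TLS

host = "xxxxxxx.ftp.azurewebsites.windows.net"  # from the app's Properties blade, without ftps://
user = r"appname\deploymentuser"                # deployment credential user name
password = "<password>"                         # consider storing this in Key Vault

ftps = FTP_TLS()
ftps.connect(host, 21)        # port 21 = explicit FTPS
ftps.login(user, password)
ftps.prot_p()                 # switch the data channel to TLS
ftps.cwd("/site/wwwroot")
ftps.retrlines("LIST")        # list the site files
ftps.quit()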
Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard

Hybrid Logic Apps offer a unique blend of on-premises and cloud capabilities, making them a versatile solution for various integration scenarios. A key feature of hybrid deployment models is their ability to scale efficiently to manage different workloads. This capability enables customers to optimize their compute costs during peak usage by scaling up to handle temporary spikes in demand and then scaling down to reduce costs when the demand decreases. This blog will explore the scaling mechanism in hybrid deployment models, focusing on the role of the KEDA operator and its integration with other components.

Logic Apps Aviators Newsletter - March 2025
In this issue:
Ace Aviator of the Month
News from our product group
News from our community

Ace Aviator of the Month

March's Ace Aviator: Dieter Gobeyn

What's your role and title? What are your responsibilities?
I work as an Azure Solution Architect; however, I remain very hands-on and regularly develop solutions to stay close to the technology. I design and deliver end-to-end solutions, ranging from architectural analysis to full implementation. My responsibilities include solution design, integration analysis, contributing to development, reviewing colleagues' work, and proposing improvements to our platform. I also provide production support when necessary.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My days can vary greatly, but collaboration with my globally distributed team is always a priority. I begin my day promptly at 8 AM to align with different time zones. After our daily stand-up, I often reach out to colleagues to see if they need assistance or follow up on emails and team messages. A significant portion of my day involves solution design: gathering requirements, outlining integration strategies, and collaborating with stakeholders. I also identify potential enhancements, perform preliminary analysis, and translate them into user stories. I also spend time on technical development, building features, testing them thoroughly, and updating documentation for both internal and client use. On occasions where deeper investigation is needed, I support advanced troubleshooting, collaborating with our support team if issues demand additional expertise. If a release is scheduled, I sometimes manage deployment activities in the evening.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
I've always valued the sense of community that comes from sharing knowledge. Early in my career, attending events and meeting fellow professionals helped me bridge the gap between theory and real-world practice. This informal environment encourages deeper, hands-on knowledge exchange, which often goes beyond what official documentation can provide. Now that I'm in a more senior role, I believe it's my responsibility, and pleasure, to give back. Contributing to the community enables me to keep learning, connect with fantastic people, and grow both technically and personally.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Master the fundamentals, not just the tools. It's easy to get caught up in the newest frameworks, cloud platforms, and programming languages. However, what remains constant are the core concepts such as networking, data structures, security, and system design. By understanding the 'why' behind each technology, you'll be better equipped to design future-proof solutions and adapt fast as tools and trends evolve.

What has helped you grow professionally?
Curiosity and a commitment to continuous learning have been key. I'm always keen to understand the 'why' behind how things work. Outside my normal job, I pursue Microsoft Reactor sessions, community events, and personal projects to expand my skills. Just as important is receiving open, honest feedback from peers and being honest with oneself. Having mentors or colleagues who offer both challenges and support is crucial for growth, as they provide fresh perspectives and help you refine your skills.
In many cases, I've found it takes effort outside standard working hours to truly develop my skills, but it has always been worth it.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I'd love to see more uniformity and predictability across adapters, for example in terms of their availability for both stateless and stateful workflows. Currently, certain adapters, like the timer trigger, are either unavailable in stateless workflows or behave differently. Unifying adapter support would not only simplify solution design decisions, but also reduce proof-of-concept overhead and streamline transitions between stateless and stateful workflows as requirements evolve.

News from our product group

Logic Apps Live Feb 2025
Missed Logic Apps Live in February? You can watch it here. You will find a live demo for exporting Logic Apps Standard to VS Code, some updates on the new Data Mapper user experience, and lots of examples on how to leverage Logic Apps to create your Gen AI solutions.

Exporting Logic App Standard to VS Code
Bringing existing Logic Apps Standard deployed in Azure to VS Code is now simpler with the new Create Logic Apps Workspaces from package.

New & Improved Data Mapper UX in Azure Logic Apps – Now in Public Preview!
We're excited to announce that a UX update for Data Mapper in Azure Logic Apps is now in Public Preview! We have continuously improved Data Mapper, which is already generally available (GA), based on customer feedback.

Parse or chunk content for workflows in Azure Logic Apps (Preview)
When working with Azure AI Search or Azure OpenAI actions, it's often necessary to convert content into tokens or divide large documents into smaller pieces. The Data Operations actions, "Parse a document" and "Chunk text," can help by transforming content like PDFs, CSVs, and Excel files into tokenized strings and splitting them based on the number of tokens. These outputs can then be used in subsequent actions within your workflow.

Connect to Azure AI services from workflows in Azure Logic Apps
Integrate enterprise services, systems, and data with AI technologies by connecting your logic app workflows to Azure OpenAI and Azure AI Search resources. This guide offers an overview and practical examples on how to use these connector operations effectively in your workflow.

Power Automate migration to Azure Logic Apps (Standard)
Development teams often need to build scalable, secure, and efficient automation solutions. If your team is considering migrating flows from Microsoft Power Automate to Standard workflows in Azure Logic Apps, this guide outlines the key advantages of making the transition. Azure Logic Apps (Standard) is particularly beneficial for enterprises running complex, high-volume, and security-sensitive workloads.

AI playbook, examples, and other resources for workflows in Azure Logic Apps
AI capabilities are increasingly essential in applications and software, offering time-saving and innovative tasks like chat interactions. They also facilitate the creation of integration workloads across various services, systems, apps, and data within enterprises. This guide provides building blocks, examples, samples, and resources to demonstrate how to use AI services, such as Azure OpenAI and Azure AI Search, in conjunction with other services and systems to build automated workflows in Azure Logic Apps.
Collect ETW trace in Logic App Standard
An inline C# script to collect Event Tracing for Windows (ETW) and store it in a text file, from within your Logic Apps.

Typical Storage access issues troubleshooting
With this blog post we intend to provide you with more tools and visibility on how to troubleshoot your Logic App and accelerate restoring your service availability.

Download Logic App content for Consumption and Standard Logic App in the Portal
It's common to see customers needing to download the JSON contents for their Logic Apps, either to keep a copy of the code or to initiate CI/CD. The methods to download this are very simple, accessible on a single button.

Running PowerShell inline with Az commands - Logic App Standard
With the availability of the inline "Execute Powershell code" action, a few questions have been brought to us, for example how to execute Az commands with this action.

Deploy Logic App Standard with Application Routing Feature Based on Terraform and Azure Pipeline
This article shares a mature plan to deploy Logic App Standard and then set the application routing features automatically. It's based on a Terraform template and an Azure DevOps pipeline.

News from our community

Azure Logic Apps: create Standard Logic App projects in Visual Studio Code from Azure portal export
Post by Stefano Demiliani
How many times have you had the need to create a new Azure Logic App workflow starting from an existing one? Personally, this happens a lot of the time… Starting with version 5.18.7 (published some days ago), the Azure Logic Apps (Standard) extension for Visual Studio Code provides the capability to create Standard Azure Logic App projects from an existing Logic App exported from the Azure portal.

Bridging the Gap: Azure Logic Apps Meets On-Prem Fileshares
Post by Tim D'haeyer
The end of BizTalk Server is fast approaching, signaling a significant shift in the Microsoft integration landscape. With this transition, the era of on-premises integration is drawing to a close, prompting many organizations to migrate their integration workloads to Azure. One key challenge in this process is: “How can I read and write from an on-premises file share using Logic Apps?” Thankfully, this functionality has been available for some time with Azure Logic Apps Standard.

Azure Logic Apps vs. Power Apps vs. Power Automate: What to Use When?
Post by Prashant Singh
The Architect's Dilemma: Logic Apps vs. Power Apps vs. Power Automate! In my latest blog, I compare Logic Apps, Power Automate, and Power Apps—helping you pick the right one!

Securing Azure Logic Apps: Prevent SQL Injection in Complex SQL Server Queries
Post by Cameron McKay
Executing COMPLEX queries as raw SQL is tempting in Logic App workflows. It's clear how to protect SQL CRUD actions in Logic Apps. BUT how do we protect our complex queries?

In the Logic App Standard tier, built-in connectors run locally within the same process as the logic app
Post by Sandro Pereira
In the Logic App Standard tier, built-in connectors run locally within the same process as the logic app, reducing latency and improving performance. This contrasts with the Consumption model, where many connectors rely on external dependencies, leading to potential delays due to network round-trips. This makes Logic App Standard an ideal choice for scenarios where performance and low-latency integration are critical, such as real-time data processing and enterprise API integrations.
Scaling Logic Apps Hybrid
Post by Massimo Crippa
Logic Apps Hybrid provides a consistent development, deployment, and observability experience across both cloud and edge applications. But what about scaling? Let's dive into that in this blog post.

Calling API Management in a different subscription on LA Standard
Post by Sandro Pereira
Welcome again to another Logic Apps Best Practices, Tips, and Tricks post. Today, we will discuss how to call from Logic App Standard an API exposed in API Management from a different subscription using the in-app API Management connector.

How to enable API Management Connector inside VS Code Logic App Standard Workflow Designer
Post by Sandro Pereira
If you’ve been working with Azure Logic Apps Standard in Visual Studio Code and noticed that the API Management connector is conspicuously absent from the list of connectors inside the workflow designer, you’re not alone. This is a typical behavior that many developers encounter, and understanding why it happens—and how to enable it—can save you a lot of headaches.

Do you have strict security requirements for your workflows? Azure Logic Apps is the solution.
Post by Stefano Demiliani
Azure Logic Apps offers robust solutions for enterprise-level workflows, emphasizing high performance, scalability, and stringent security measures. This article explores how Logic Apps ensures business continuity with geo-redundancy, automated backups, and advanced security features like IP restrictions and VNET integration. Discover why Azure Logic Apps is the preferred choice for secure and scalable automation in large organizations.

🚀 New & Improved Data Mapper UX in Azure Logic Apps – Now in Public Preview!
We’re excited to announce that a UX update for Data Mapper in Azure Logic Apps is now in Public Preview! We have continuously improved Data Mapper, which is already generally available (GA), based on customer feedback. Last year, we conducted a private preview to assess the improvements in the new user experience and confirm that we are on the right track in simplifying complex data transformations, including EDI schemas. With the insights gained, we made significant UI enhancements and added features to streamline the mapping process. Feedback We value your feedback to make the Data Mapper even better. Please share your thoughts, suggestions, and overall experience with us through our feedback form. How feedback shaped the Public Preview Throughout the evolution of Data Mapper, we gathered valuable feedback from customers and partners. Key themes that emerged include: Reliability: Ensuring the Data Mapper can handle large schemas and complex transformation logic, including functions. Error handling: Providing real-time validation by allowing users to test payloads and catch errors while authoring maps. Looping: Clearly indicating when repeating nodes are mapped and ensuring complex objects are properly represented Drag & drop enhancements: Improving how connections between nodes are created for better usability. Deserialization & namespace honoring: Ensuring XML deserialization correctly loads mappings without data loss, preserving namespace integrity for seamless schema validation. We’ve incorporated these suggestions into the public preview, ensuring a more refined and user-friendly experience. What’s new in the Data Mapper UX? 1. Easier navigation Docked schema panels keep you oriented within the data map. Easily search for specific nodes to streamline mapping. 2. Side-by-side function panel Search and use 100+ built-in functions, including mainly: Collection functions (for repeating elements) String manipulations Mathematical operations Conditional logic 3. Automatic looping for repeating nodes When mapping repeating nodes, a new loop connection is automatically added on the immediate parent nodes at source and destination. Repeating parent nodes are denoted by "A1, A2" notation on hover. Note: If the child node in the source has a deeper nesting level than in the destination, you must manually map the connection from the repeating source node to the destination to ensure proper data transformation. 4. Real-time error detection On saving the map, instantly view warnings and errors for missing mappings or incorrect configurations 5. Test Your Map Instantly Preview the output before running your workflow. How to set up and test out the new Data Mapper experience Enable the Preview: Go to your Azure Logic App (Standard) extension -> Settings -> Data Mapper. Select “Version ~2” to try out the new user experience. Light theme: Enable "Light Theme" in VS Code before creating a new data map. Dark Theme is not supported, but is on the roadmap and will be prioritized soon. Create a New Data Map: Navigate to the Azure tab on the left-hand panel of your VS Code. Select “Create New Data Map” and name it. Once loaded, select the schemas for source and destination. Upload schemas: Upload your source and destination schemas before creating the map (eg .xsd or .json files). 
Limitations While the new Data Mapper UX brings significant improvements, a few limitations remain: Filter function: The filter function correctly processes numeric conditions when enclosed in quotes (e.g., ">= 10"), but does not behave consistently for string comparisons (e.g., checking if item name = "Pen"). We are actively working on refining this behavior. Custom Functions: Support for custom functions is coming in the next refresh to enhance flexibility in data mapping. Usability enhancements: Improved tooltips, function labels, error messages and other UX refinements are on the way to provide clearer guidance and a smoother mapping experience, especially for complex transformations. Future investments The product is going to continue getting better and we should be adding more features very soon! Some immediate investments include: Enhanced test map experience: Making it easier to validate mappings during development. Panel resizing: Allowing users to have flexibility in viewing larger schemas and functions when multiple panels are expanded.

Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management – a set of features designed specifically for GenAI use cases. Azure OpenAI service offers a diverse set of tools, providing access to advanced models like GPT-3.5 Turbo to GPT-4 and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images.

One of the main resources you have in Azure OpenAI is tokens. Azure OpenAI assigns quota for your model deployments expressed in tokens-per-minute (TPMs), which is then distributed across your model consumers, which can be represented by different applications, developer teams, departments within the company, etc. Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI. Your intelligent application connects to Azure OpenAI directly using an API key, with a TPM limit configured directly on the model deployment level. However, when you start growing your application portfolio, you are presented with multiple apps calling single or even multiple Azure OpenAI endpoints deployed as Pay-as-you-go or Provisioned Throughput Units (PTUs) instances. That comes with certain challenges:

How can we track token usage across multiple applications?
How can we do cross charges for multiple applications/teams that use Azure OpenAI models?
How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
How can we make sure that the API key is securely distributed across multiple applications?
How can we distribute load across multiple Azure OpenAI endpoints?
How can we make sure that PTUs are used first before falling back to Pay-as-you-go instances?

To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:

Azure OpenAI Token Limit Policy
Azure OpenAI Emit Token Metric Policy
Load Balancer and Circuit Breaker
Import Azure OpenAI as an API
Azure OpenAI Semantic Caching Policy (in public preview)

Azure OpenAI Token Limit Policy

Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on the usage of Azure OpenAI tokens. With this policy you can set limits, expressed in tokens-per-minute (TPM). This policy provides flexibility to assign token-based limits on any counter key, such as Subscription Key, IP Address, or any other arbitrary key defined through a policy expression. Azure OpenAI Token Limit policy also enables pre-calculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit. Learn more about this policy here.

Azure OpenAI Emit Token Metric Policy

Azure OpenAI enables you to configure token usage metrics to be sent to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers. This policy captures prompt, completion, and total token usage metrics and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by Subscription ID, IP Address, or any custom dimension of your choice. Learn more about this policy here.
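To picture what a consumer of the gateway looks like once the token limit policy is in place, here is a minimal Python sketch that calls a chat completions deployment through API Management and backs off when the policy returns 429. The gateway URL shape, api-version, deployment name, and subscription key header are assumptions that depend on how the API was imported into API Management; adjust them to your setup.

# Minimal sketch: call an Azure OpenAI deployment exposed through API Management,
# retrying when the token limit policy responds with 429 (Too Many Requests).
# The URL shape, api-version, and subscription key header are assumptions based on a
# typical "Import Azure OpenAI as an API" setup; adjust to your gateway configuration.
import time
import requests

gateway = "https://your-apim.azure-api.net/openai"                  # placeholder APIM gateway URL
deployment = "gpt-4"                                                 # placeholder deployment name
url = f"{gateway}/deployments/{deployment}/chat/completions?api-version=2024-02-01"
headers = {"Ocp-Apim-Subscription-Key": "<apim-subscription-key>"}   # placeholder key and header name

payload = {"messages": [{"role": "user", "content": "Summarize our B2B order volume."}]}

for attempt in range(5):
    response = requests.post(url, json=payload, headers=headers)
    if response.status_code == 429:
        # Honor the Retry-After header emitted when the tokens-per-minute limit is hit.
        time.sleep(int(response.headers.get("Retry-After", "5")))
        continue
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])
    break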
Load Balancer and Circuit Breaker

Load Balancer and Circuit Breaker features allow you to spread the load across multiple Azure OpenAI endpoints. With support for round-robin, weighted (new), and priority-based (new) load balancing, you can now define your own load distribution strategy according to your specific requirements. Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of any disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules. Our updated circuit breaker now features dynamic trip duration, leveraging values from the retry-after header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends to their fullest. Learn more about load balancer and circuit breaker here.

Import Azure OpenAI as an API

The new Import Azure OpenAI as an API experience in Azure API Management provides an easy, single-click experience to import your existing Azure OpenAI endpoints as APIs. We streamline the onboarding process by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Additionally, within the same user-friendly experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup. Learn more about Import Azure OpenAI as an API here.

Azure OpenAI Semantic Caching policy

Azure OpenAI Semantic Caching policy empowers you to optimize token usage by leveraging semantic caching, which stores completions for prompts with similar meaning. Our semantic caching mechanism leverages Azure Redis Enterprise or any other external cache compatible with RediSearch and onboarded to Azure API Management. By leveraging the Azure OpenAI Embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. This approach enables completion reuse, resulting in reduced token consumption and improved response performance. Learn more about the semantic caching policy here.

Get Started with GenAI Gateway Capabilities in Azure API Management

We’re excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale their applications leveraging Azure OpenAI services. Get started today and bring your intelligent application development to the next level with Azure API Management.

Azure AI Foundry, GitHub Copilot, Fabric and more to Analyze usage stats from Utility Invoices
Overview

With the introduction of Azure AI Foundry, integrating various AI services to streamline AI solution development and deployment of agentic AI workflow solutions (multi-modal, multi-model, dynamic and interactive agents, etc.) has become more efficient. The platform offers a range of AI services, including Document Intelligence for extracting data from documents, natural language processing, robust machine learning capabilities, and more. Microsoft Fabric further enhances this ecosystem by providing robust data storage, analytics, and data science tools, enabling seamless data management and analysis. Additionally, Copilot and GitHub Copilot assist developers by offering AI-powered code suggestions and automating repetitive coding tasks, significantly boosting productivity and efficiency.

Objectives

In this use case, we will use monthly electricity bills from the utilities' website for a year and analyze them using Azure AI services within Azure AI Foundry. The electricity bills are simply an easy start, but we could apply the same approach to any other format, say W-2, I-9, 1099, ISO, or EHR documents. By leveraging the Foundry's workflow capabilities, we will streamline the development stages step by step. Initially, we will use Document Intelligence to extract key data such as usage in kilowatts (KW), billed consumption, and other necessary information from each PDF file. This data will then be stored in Microsoft Fabric, where we will utilize its analytics and data science capabilities to process and analyze the information. We will also include a few processing steps with Azure Functions so we can use GitHub Copilot in VS Code. Finally, we will create a Power BI dashboard in Fabric to visually display the analysis, providing insights into electricity usage trends and billing patterns over the year.

Utility Invoice sample

Building the solution

Depicted in the picture are the key Azure and Copilot services we will use to build the solution.

Set up Azure AI Foundry
Create a new project in Azure AI Foundry. Add Document Intelligence to your project. You can do this directly within the Foundry portal.

Extract documents through Doc Intel
Download the PDF files of the power bills and upload them to Azure Blob storage. I used Document Intelligence Studio to create a new project and train custom models using the files from the Blob storage. Next, in your Azure AI Foundry project, add the Document Intelligence resource by providing the Endpoint URL and Keys.

Data Extraction
Use Azure Document Intelligence to extract required information from the PDF files. From the resource page of the Doc Intel service in the portal, copy the Endpoint URL and Keys. We will need these to connect the application to the Document Intelligence API. Next, let's integrate Doc Intel with the project. In the Azure AI Foundry project, add the Document Intelligence resource by providing the Endpoint URL and Keys. Configure the settings as needed to start using Doc Intel for extracting data from the PDF documents.
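To make the extraction step concrete, here is a minimal Python sketch that analyzes a single bill with the Document Intelligence (azure-ai-formrecognizer) SDK and a custom model trained in Document Intelligence Studio. The endpoint, key, model ID, file name, and field names are placeholders, not values from the original walkthrough.

# Minimal sketch: extract fields from a utility bill PDF with a custom Document Intelligence model.
# Assumes: pip install azure-ai-formrecognizer, and a model trained in Document Intelligence Studio.
# The endpoint, key, model ID, field names, and file name below are placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-doc-intel-resource>.cognitiveservices.azure.com/"
key = "<your-doc-intel-key>"
model_id = "utility-bill-model"        # hypothetical custom model ID from Document Intelligence Studio

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))

with open("electricity_bill_jan.pdf", "rb") as f:
    poller = client.begin_analyze_document(model_id, document=f)
result = poller.result()

for document in result.documents:
    for name, field in document.fields.items():
        # e.g. hypothetical fields such as BilledKWh, BilledAmount, BillingPeriod
        print(name, field.value, field.confidence)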
GitHub Copilot in VS Code for Azure Functions

For processing portions of the output from Document Intelligence, what better way to create the Azure Function than in VS Code, especially with the help of GitHub Copilot. Let's start by installing the Azure Functions extension in VS Code, then create a new function project. GitHub Copilot can assist in writing the code to process the JSON received. Additionally, we can get Copilot to generate unit tests to ensure the function works correctly, and ask it to explain the code and the tests it generates. Finally, we seamlessly integrate the generated code and unit tests into the Functions app code file, all within VS Code. Notice how we can prompt GitHub Copilot from step 1 of creating the workspace, to inserting the generated code into the Python file for the Azure Function, to testing it, all the way to deploying the Function.

Store and Analyze information in Fabric

There are many options for storing and analyzing JSON data in Fabric: Lakehouse, Data Warehouse, SQL Database, and Power BI Datamart. As our dataset is small, let's choose either SQL DB or a PBI Datamart. A PBI Datamart is great for smaller datasets and direct integration with PBI for dashboarding, while SQL DB handles moderate data volumes and supports both transactional and analytical workloads. To insert the JSON values produced by the Azure Functions app (whether it is called from Logic Apps or directly from AI Foundry through API calls) into Fabric, let's explore two approaches: using the REST API, and using Functions with Azure SQL DB.

Using REST API – Fabric provides APIs that we can call directly from our Function. Use an HTTP client in the Function's Python code to send POST requests to the Fabric API endpoints with our JSON data.

Using Functions with Azure SQL DB – we can connect to the database directly from our Function, using a SQL client to execute SQL INSERT statements that add records to the database.
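For the second approach, here is a minimal sketch of the kind of insert logic the Function might use against the Fabric SQL database, assuming a tblElectricity table like the one used later for reporting. The connection string, authentication method, and column names are illustrative assumptions; align them with your actual schema.

import json
import pyodbc

# Assumed connection string for the Fabric SQL database; in a real Function,
# keep this in application settings rather than in code.
CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-fabric-sql-endpoint>;Database=UtilityBillsDB;"
    "Authentication=ActiveDirectoryMsi;Encrypt=yes;"
)

def insert_bill_record(bill_json: str) -> None:
    """Insert one extracted bill (received as a JSON string) into tblElectricity."""
    bill = json.loads(bill_json)
    with pyodbc.connect(CONNECTION_STRING) as conn:
        cursor = conn.cursor()
        # Column names are assumptions; adjust to match your table definition.
        cursor.execute(
            "INSERT INTO tblElectricity (BillMonth, BillYear, kWhUsage, BilledAmount) "
            "VALUES (?, ?, ?, ?)",
            bill["month"],
            bill["year"],
            bill["kwh_usage"],
            bill["billed_amount"],
        )
        conn.commit()

if __name__ == "__main__":
    sample = json.dumps(
        {"month": "January", "year": 2024, "kwh_usage": 812, "billed_amount": 96.40}
    )
    insert_bill_record(sample)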
While we are at it, we could even get GitHub Copilot to write up the unit tests. Here's a sample:

Visualization in Fabric Power BI

Let's start by creating visualizations in Fabric using the web version of Power BI for our report, UtilitiesBillAnalysisDashboard. You could use the PBI Desktop version too. Open the PBI service and navigate to the workspace where you want to create your report. Click on "New" and select "Dataset" to add a new data source. Choose "SQL Server" from the list of data sources, then enter "UtilityBillsServer" as the server name and "UtilityBillsDB" as the DB name to establish the connection. Once connected, go to the Navigator pane, where we can select the table "tblElectricity" and the columns; I've shown these in the pictures below. For a clustered column (or bar) chart, let us choose the columns that contain our categorical data (e.g., month, year) and numerical data (e.g., kWh usage, billed amounts). After loading the data into PBI, drag the desired fields into the Values and Axis areas of the clustered column chart visualization. Customize the chart by adjusting the formatting options to enhance readability and insights. We now visualize our data in PBI within Fabric. We may need to apply a custom sort to the Month column. Let's do this in the Data view: select the table and create a new column with the following formula. This will create a custom sort column that we will use as 'Sum of MonthNumber' in ascending order. Other visualization possibilities:

Other Possibilities

Agents with Custom Copilot Studio

Next, you could leverage a custom Copilot to provide personalized energy usage recommendations based on historical data. Start by integrating the Copilot with your existing data pipeline in Azure AI Foundry. The Copilot can analyze electricity consumption patterns stored in your Fabric SQL DB and use ML models to identify optimization opportunities. For instance, it could suggest energy-efficient appliances, optimal usage times, or tips to reduce consumption. These recommendations can be visualized in PBI, where users can track progress over time. To implement this, you would need to set up an API endpoint for the Copilot to access the data, train the ML models using Python in VS Code (let GitHub Copilot help you here… you will love it), and deploy the models to Azure using CLI, PowerShell, Bicep, Terraform, ARM, or the Azure portal. Finally, connect the Copilot to PBI to visualize the personalized recommendations.

Additionally, you could explore using Azure AI Agents for automated anomaly detection and alerts. Such an agent could monitor electricity bill data for unusual patterns and send notifications when anomalies are detected. Yet another idea would be to implement predictive maintenance for electrical systems, where an AI agent uses predictive analytics to forecast maintenance needs based on the collected data, helping to reduce downtime and improve system reliability.

Summary

We have built a solution that leveraged the seamless integration of pioneering AI technologies with Microsoft's end-to-end platform. Using Azure AI Foundry, we developed a solution that uses Document Intelligence to scan electricity bills, stores the data in a Fabric SQL DB, and processes it with Python in Azure Functions in VS Code, assisted by GitHub Copilot. The resulting insights are visualized in Power BI within Fabric. Additionally, we explored potential enhancements using Azure AI Agents and custom Copilots, showcasing the ease of implementation and the transformative possibilities. Finally, speaking of possibilities – with Gen AI, the only limit is our imagination!

Additional resources

Explore Azure AI Foundry
Start using the Azure AI Foundry SDK
Review the Azure AI Foundry documentation and Call Azure Logic Apps as functions using Azure OpenAI Assistants
Take the Azure AI Learn courses
Learn more about Azure AI Services
Document Intelligence: Azure AI Doc Intel
GitHub Copilot examples: What can GitHub Copilot do – Examples
Explore Microsoft Fabric: Microsoft Fabric Documentation
See what you can connect with Azure Logic Apps: Azure Logic Apps Connectors

About the Author

Pradyumna (Prad) Harish is a Technology leader in the GSI Partner Organization at Microsoft. He has 26 years of experience in Product Engineering, Partner Development, Presales, and Delivery. He is responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, Open Source Software, Enterprise Architecture, IoT, Digital strategies, and other innovative areas for business generation and transformation, achieving revenue targets via extensive experience in managing global functions, global accounts, products, and solution architects across over 26 countries.
Inbound private endpoint for Standard v2 tier of Azure API Management

Standard v2 was announced in general availability on April 1st, 2024. Customers can now configure an inbound private endpoint (preview) for their API Management Standard v2 instance to allow clients in their private network to securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

Inbound private endpoint

With a private endpoint and Private Link, you can:
Create multiple Private Link connections to an API Management instance.
Use the private endpoint to send inbound traffic on a secure connection.
Use policy to distinguish traffic that comes from the private endpoint.
Limit incoming traffic to private endpoints only, preventing data exfiltration.
Combine with outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

Preview limitations

Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. In addition, each API Management instance can support at most 100 Private Link connections. To participate in the preview and add an inbound private endpoint to your Standard v2 instance, you must complete a request form. The Azure API Management team will review your request and respond via email within five business days.

Learn more

API Management v2 tiers FAQ
API Management v2 tiers documentation
API Management overview documentation
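As a quick way to confirm the private DNS mapping and reach the gateway over the private endpoint, a minimal check run from a VM inside the virtual network might look like the sketch below. The hostname, API path, and subscription key are placeholders, not values from this article.

import socket
import requests

# Assumed values; replace with your API Management hostname, an API path exposed
# on the gateway, and a valid subscription key.
APIM_HOSTNAME = "contoso-apim.azure-api.net"
API_PATH = "/echo/resource"
SUBSCRIPTION_KEY = "<your-apim-subscription-key>"

def check_private_endpoint() -> None:
    """Resolve the gateway hostname and call it; intended to run from inside the VNet."""
    # With a private DNS zone (or custom DNS) configured, the hostname should
    # resolve to the private endpoint's private IP address from within the VNet.
    resolved_ip = socket.gethostbyname(APIM_HOSTNAME)
    print(f"{APIM_HOSTNAME} resolves to {resolved_ip}")

    response = requests.get(
        f"https://{APIM_HOSTNAME}{API_PATH}",
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        timeout=30,
    )
    print(f"Gateway responded with HTTP {response.status_code}")

if __name__ == "__main__":
    check_private_endpoint()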