Scaling Logic Apps Standard – Sustained Message Processing System
In the previous post of this series, we discussed how Logic Apps Standard can process high-throughput event data at a sustained rate over long periods of time. In this post, we look at how Logic Apps Standard can process high-throughput message data, which facilitates the decoupling of applications and services. We simulate a real-life use case in which messages are sent to a Service Bus queue at a sustained rate, and we use a templated Logic Apps workflow to process the messages in real time. Customers can easily replace the business logic in the templated workflow with actions that implement their own processing of the relevant message data. To showcase the message processing capabilities, we discuss two scaling dimensions: vertical scaling (varying the performance of the service plan) and horizontal scaling (varying the number of service plan instances).

Vertical scaling capabilities of Logic Apps Standard with the built-in Service Bus connector

In this section, we investigate the vertical scaling capabilities of the built-in Service Bus connector, conducting experiments to find the maximum message throughput supported by each of the Logic Apps Standard SKUs from WS1 to WS3. The workflow uses the Service Bus built-in trigger, so messages are picked up promptly and processed at a rate that keeps pace with the ingress rate. The workflow is a templated one, like the one shown below, available in our Template Gallery. Customers can replace the Business Logic and Compensation Logic to handle their own business scenarios.

For this investigation, we used the out-of-the-box Logic Apps Standard scaling configuration:

- 1 always-ready instance
- 20 maximum burst instances

We also used the default trigger batch size of 50. For reference, a sketch of the trigger and action skeleton of such a workflow follows.
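To make the shape of such a workflow concrete, here is a minimal sketch of a Standard workflow definition with the built-in Service Bus trigger, a business-logic scope, and message completion. This is a hedged illustration, not the gallery template itself: the queue name, action names, operation IDs, and output shapes are assumptions, and the trigger batch size is governed by host settings rather than this file.

```jsonc
// Minimal sketch (assumed names and operation IDs) of a Service Bus-triggered
// Standard workflow: trigger -> business-logic scope -> complete the message.
{
  "kind": "Stateful",
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "When_messages_are_available_in_a_queue": {
        "type": "ServiceProvider",
        "inputs": {
          "parameters": { "queueName": "inbound-messages" },   // hypothetical queue name
          "serviceProviderConfiguration": {
            "connectionName": "serviceBus",
            "operationId": "receiveQueueMessages",             // assumed operation ID
            "serviceProviderId": "/serviceProviders/serviceBus"
          }
        }
      }
    },
    "actions": {
      "Business_Logic": {
        "type": "Scope",                                       // replace with your own actions
        "actions": {
          "Compose_result": { "type": "Compose", "inputs": "@triggerOutputs()?['body']" }
        }
      },
      "Complete_the_message": {
        "type": "ServiceProvider",
        "runAfter": { "Business_Logic": [ "Succeeded" ] },
        "inputs": {
          "parameters": {
            "queueName": "inbound-messages",
            "lockToken": "@triggerOutputs()?['body']?['lockToken']"  // assumed output shape
          },
          "serviceProviderConfiguration": {
            "connectionName": "serviceBus",
            "operationId": "completeMessage",                  // assumed operation ID
            "serviceProviderId": "/serviceProviders/serviceBus"
          }
        }
      }
    }
  }
}
```

A compensation path, for example abandoning or dead-lettering the message when the scope fails, would hang off the same scope with a runAfter on Failed, matching the Business Logic / Compensation Logic split described above.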
Experiment methodology

For each experiment, we selected one of the available SKUs (WS1, WS2, WS3) and supplied a steady influx of X messages per minute to the connected Service Bus queue. We conducted multiple experiments per SKU, gradually increasing X until the Logic App could no longer process all messages immediately. For each experiment, we pushed enough messages (1 million in total) to the queue to ensure that the workflow reached a steady-state processing rate at its maximum scale.

Environment configuration

The experiment setup is summarized in the table below:

| Setting | Value |
| --- | --- |
| Tests setup | Single-stamp Logic App |
| Number of workflows | 1 (templated) |
| Triggers | Service Bus |
| Trigger batch size | 50 |
| Actions | Service Bus, Scope, Condition, Compose |
| Number of storage accounts | 1 |
| Prewarmed instances | 1 |
| Max scale setting | 20 |
| Message size | 1 KB |
| Service Bus queue max size | 2 GB |
| Service Bus queue message lock duration | 5 minutes |
| Service Bus queue message max delivery count | 10 |

Experiment results

The results are summarized in the table below. If the default maximum scaling of 20 instances is adopted, the throughput measured here serves as a good reference for the upper bound of message processing power:

| WS plan | Message throughput | Time to process 1M messages |
| --- | --- | --- |
| WS1 | 9,000 messages/minute | 120 minutes |
| WS2 | 19,000 messages/minute | 60 minutes |
| WS3 | 24,000 messages/minute | 50 minutes |

In all experiments, the Logic App scaled out to 20 instances at steady state.

📝 Complex business logic, which requires more actions and/or longer processing times, can change these values.

Findings

Understanding the scaling and bottlenecks: In the vertical scaling experiments, we limited the maximum instance count to 20. Under this setting, we sometimes observed "dead-letter" messages being generated. With Service Bus, a message is dead-lettered if it is not processed within the lock duration across all delivery attempts; in other words, the workflow took more than 5 minutes to complete the scope/business logic for some messages. The root cause is that the Service Bus trigger fetches messages faster than the workflow actions can process them: in our runs, the trigger fetched as many as 60,000 messages per minute, while the workflow processed fewer than 30,000 messages per minute.

Recommendations: We recommend keeping the default scaling settings if your workload is well below the published message throughput, and increasing the maximum burst when a heavier workload is expected.

Horizontal scaling capabilities of the Logic Apps Service Bus connector

In this section, we probe the horizontal scaling of Logic Apps message handling with varying instance counts. We conducted these experiments on the most performant and widely used SKU, WS3.

Experiment methodology

For each experiment, we varied the number of prewarmed instances and maximum burst instances and supplied a steady influx of X messages per minute to the connected Service Bus queue, gradually increasing X until the Logic App could no longer process all messages immediately. We pushed enough messages (4 million) to the queue in each experiment to ensure that the workflow reached a steady-state processing rate.

Environment configuration

The experiment setup is summarized in the table below:

| Setting | Value |
| --- | --- |
| Tests setup | Multi-stamp Logic App |
| Number of workflows | 1 (templated) |
| Triggers | Service Bus |
| Trigger batch size | 50 |
| Actions | Service Bus, Scope, Condition, Compose |
| Number of storage accounts | 3 |
| Message size | 1 KB |
| Service Bus queue max size | 4 GB |
| Service Bus queue message lock duration | 5 minutes |
| WS plan | WS3 |
| Service Bus queue message max delivery count | 10 |

Experiment results

The experiment results are summarized in the table below:

| Prewarmed instances | Max burst instances | Message throughput |
| --- | --- | --- |
| 1 | 20 | 24,000 messages/minute |
| 1 | 60 | 65,000 messages/minute |
| 5 | 60 | 65,000 messages/minute |
| 10 | 60 | 65,000 messages/minute |
| 10 | 100 | 85,000 messages/minute |

In all experiments, the Logic App scaled out to the maximum burst instances allowed at steady state.

Editor's note: The actual business logic can affect the number of machines the app scales out to, and performance may also vary with the complexity of the workflow logic.

Findings

Understanding the scaling and bottlenecks: In the horizontal scaling experiments, once the max burst instance count reached 60 or above, we no longer observed dead-letters being generated. In these cases, the Service Bus trigger fetches messages only as fast as the workflow actions can process them, so all messages are processed immediately after they are fetched.

Does the scaling speed affect the workload? In our runs, a Standard Logic App with a prewarmed instance count of 5 scaled out to its maximum of 60 instances in under 10 minutes. The message fetching and message processing abilities scale out together, preventing the generation of dead-letters. The horizontal scaling results also show that having more prewarmed instances does not affect the steady-state throughput of the workflow.

Recommendations

With these two findings, we recommend keeping the minimum instance count small for cost saving; doing so has no impact on your peak performance. If a use case requires higher throughput, raise the maximum burst instances setting to accommodate it. For production workflows, we still recommend having at least two always-ready instances, as they reduce potential downtime from reboots. A sketch of how these scale settings map to resource configuration follows.
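For reference, the always-ready (prewarmed) instance count and the maximum burst setting used above correspond to ARM properties on the hosting plan and the app. The fragment below is a hedged sketch, not the exact experiment templates: resource names are hypothetical, and property names should be confirmed against your ARM API version.

```jsonc
// Illustrative ARM fragment: a WS3 plan capped at 60 burst instances,
// with 2 always-ready instances on the logic app site.
{
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2023-01-01",
      "name": "ws3-plan",                          // hypothetical name
      "location": "[resourceGroup().location]",
      "sku": { "name": "WS3", "tier": "WorkflowStandard" },
      "properties": {
        "maximumElasticWorkerCount": 60            // maximum burst instances
      }
    },
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2023-01-01",
      "name": "sustained-messaging-app",           // hypothetical name
      "location": "[resourceGroup().location]",
      "kind": "functionapp,workflowapp",           // Logic Apps Standard site kind
      "dependsOn": [ "[resourceId('Microsoft.Web/serverfarms', 'ws3-plan')]" ],
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'ws3-plan')]",
        "siteConfig": {
          "minimumElasticInstanceCount": 2         // always-ready (prewarmed) instances
        }
      }
    }
  ]
}
```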
Azure Integration Services Unveils New Features at Microsoft Ignite 2024
In today’s fast-paced digital landscape, businesses are turning to AI to drive innovation and maintain their competitive edge. At Microsoft Ignite 2024, Azure Integration Services introduces groundbreaking features that seamlessly integrate AI into your workflows, without disrupting your operations. By putting AI at the forefront, Azure Integration Services helps enterprises streamline business processes, enhance customer experiences, and unlock new capabilities. While AI is a powerful driver of transformation, modernization of your integration platforms is equally critical. Azure Integration Services delivers both, empowering your organization to modernize integrations while tapping into AI innovation.

In this blog, we’ll explore how the latest updates to Azure Integration Services equip your organization with the tools and knowledge to integrate AI into workflows, modernize integrations, and create a foundation that’s both scalable and adaptable to future business needs.

Future-Proof Your Business: Embrace AI with Azure Integration Services

Azure Integration Services continues to transform the way businesses leverage AI, enhancing Azure API Management and expanding Azure Logic Apps capabilities across diverse environments.

Support for GPT-4o (Text and Images) Across All GenAI Policies in Azure API Management

Our expanded support for GPT-4o models (text and image) within Azure API Management’s Generative AI policies brings AI-driven innovation to your fingertips. New features like the Token Limit Policy, Token Metric Policy, and Semantic Caching Policy help businesses manage GPT-4o models in Azure OpenAI deployments more effectively. Learn more about how these policies unlock new capabilities here.

Generative AI Gateway Token Quota in Azure API Management

This enhancement to the Token Limit Policy gives businesses greater flexibility with daily, weekly, or monthly token quotas. With the ability to control costs, track usage trends, and optimize token consumption, you can support dynamic AI-driven innovation while staying within your budget. Explore how this drives cost-controlled AI experimentation here.

AI Capabilities in the Azure Logic Apps Consumption SKU

We are excited to announce the public preview of AI capabilities in the Azure Logic Apps Consumption SKU, bringing AI directly into your workflows with the Azure AI Search connector, Azure OpenAI connector, and Form Recognizer. These tools enable intelligent document processing, enhanced search, and language capabilities, all essential for creating dynamic and smarter workflows. By adding AI-powered connectors to the Consumption SKU, businesses of all sizes can innovate without the complexity of managing multiple environments. Ready to integrate AI into your workflows? Learn more about these AI capabilities here.

Templates Support in Azure Logic Apps Standard

Azure Logic Apps makes it easier than ever to launch integrations quickly, whether you're orchestrating simple data transfers or complex workflows. With pre-built workflow templates, you can accelerate integration scenarios, reducing development time while ensuring your workflows meet unique business needs. Explore how these templates can speed up your integration process here.

Modernize Without Disruptions

While innovation is crucial, maintaining operational stability is just as important. Azure Integration Services ensures that businesses can modernize their integration systems without causing disruptions, even during critical migrations or cloud transitions.
Logic Apps Hybrid Deployment Model

For businesses with specialized integration requirements, the new Hybrid Deployment Model allows workflows to run on customer-managed infrastructure, whether on-premises, in a private cloud, or in a third-party public cloud. This ensures that businesses can meet regulatory, privacy, or network demands while benefiting from Azure's robust connector library for SaaS integration. Learn how this hybrid approach can help your organization meet unique integration requirements.

Premium Integration Account in Azure Logic Apps

The Premium Integration Account enhances B2B integrations with higher throughput, scalability, and support for advanced security features like VNET integration. This offering is optimized for high-performance, mission-critical workloads and provides the reliability your business depends on. Discover how the Premium Integration Account can power your enterprise-grade integrations here.

Deployment Slots in Azure Logic Apps Standard

This feature is designed to enable zero-downtime deployment for mission-critical Logic Apps, allowing you to update and deploy new versions seamlessly without disrupting end users. Deployment slots bring enterprise-grade availability to your Logic Apps, making it easier to meet high-availability requirements. Learn more about setting up deployment slots and optimizing your deployment strategy in our documentation; a configuration sketch follows.
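For illustration, a deployment slot is modeled in ARM as a child resource of the app. This is a hedged sketch with hypothetical names; see the Logic Apps Standard documentation for slot-specific constraints and supported settings.

```jsonc
// Illustrative ARM fragment: a "staging" slot on an existing
// Standard logic app, for swap-based zero-downtime deployments.
{
  "type": "Microsoft.Web/sites/slots",
  "apiVersion": "2023-01-01",
  "name": "my-logic-app/staging",                 // "<app name>/<slot name>", hypothetical
  "location": "[resourceGroup().location]",
  "kind": "functionapp,workflowapp",
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'ws-plan')]"
  }
}
```

Deploy the new workflow version to the slot, validate it, then swap the slot into production so end users never see the cutover.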
Automate Build and Deployment for Standard Logic App Workflows with Azure DevOps

This release streamlines the deployment process for single-tenant Logic Apps using Azure DevOps, ensuring consistency and efficiency across environments. Start optimizing your workflow deployments today and unlock the power of automated CI/CD for your Logic Apps! For setup details and best practices, check out our documentation here.

Advanced Enterprise Features for API Management

In addition to AI and integration capabilities, Azure API Management continues to evolve with advanced enterprise features designed to streamline operations, enhance security, and improve performance at scale. Let’s take a look at some of the key advancements that will transform your API management experience.

Shared Workspace Gateways in Azure API Management

This feature allows businesses to connect multiple workspaces to a single gateway, reducing operational costs and simplifying API management. By federating API management across up to 30 workspaces, organizations can maintain decentralized control while unifying oversight through a central developer portal. This means you can innovate rapidly without sacrificing the security and scalability your enterprise demands. Start simplifying your API management here.

Azure API Management Premium v2 Tier

For businesses managing APIs at scale, the new Premium v2 tier offers unmatched performance. With higher entity limits, unlimited API requests, and flexible networking options, the Premium v2 tier supports large-scale enterprise needs, all while offering greater stability and performance. Explore the power of Premium v2 and how it can drive your organization forward.

Fully Managed API Analysis in Azure API Center

Simplify API governance with fully managed API analysis in Azure API Center. Automatic linting ensures your API definitions align with company standards, helping to maintain high-quality APIs while reducing manual configuration. Learn more about API analysis and how it ensures consistency and quality across all your APIs here.

Synchronization Between Azure API Center and Azure API Management

This integration brings together the governance power of API Center with the management capabilities of API Management, offering a unified solution for API lifecycle management. You can now easily sync your API Management instance directly with API Center for streamlined API discovery, centralized tracking, and enhanced governance. This simplifies the API lifecycle and improves operational efficiency while ensuring comprehensive oversight and governance across your organization’s APIs.

API security posture management is now natively available in Defender CSPM

We’re excited to announce that API security posture management is now natively integrated into Defender CSPM, offering comprehensive visibility and proactive risk analysis for Azure API Management APIs. This integration helps security teams identify vulnerabilities, prioritize best practices, and assess API exposure risks within the broader application context. It also expands sensitive data discovery to include API URLs, paths, and query parameters, enabling efficient tracking and mitigation of data exposure risks across cloud applications.

Empower Your Business for the Future with Azure Integration Services

With the latest innovations at Microsoft Ignite 2024, Azure Integration Services ensures that businesses can move forward with confidence, modernizing their integration systems without disruption while leveraging the power of AI. Whether you're managing legacy migrations, automating workflows, or optimizing for AI-driven business success, Azure Integration Services provides the flexibility, scalability, and stability to drive your future growth. Ready to future-proof your business? Start your AI and integration journey with Azure today!

Announcing AI building blocks in Logic Apps (Consumption)
We’re thrilled to announce that the Azure OpenAI and AI Search connectors, along with the Parse Document and Chunk Text actions, are now available in the Logic Apps Consumption SKU! These capabilities, already available in the Logic Apps Standard SKU, can now be leveraged in serverless, pay-as-you-go workflows to build powerful AI-driven applications with cost-efficiency and flexibility.

What’s new in Consumption SKU?

This release brings almost all the advanced AI capabilities from Logic Apps Standard to the Consumption SKU, enabling lightweight, event-driven workflows that automatically scale with your needs. Here’s a summary of the operations now available:

Azure OpenAI connector operations
- Get Completions: Generate text with Azure OpenAI’s GPT models for tasks such as summarization, content creation, and more.
- Get Embeddings: Generate vector embeddings from text for advanced scenarios like semantic search and knowledge mining.

AI Search connector operations
- Index Document: Add or update a single document in an AI Search index.
- Index Multiple Documents: Add or update multiple documents in an AI Search index in one operation.

Note: The Vector Search operation for enabling the retrieval pattern will be highlighted in an upcoming release in December.

Parse Document and Chunk Text actions

Under the Data Operations connector:
- Parse Document: Extract structured data from uploaded files like PDFs or images.
- Chunk Text: Split large text blocks into smaller chunks for downstream processing, such as generating embeddings or summaries.

Demo workflow: Automating document ingestion with AI

To showcase these capabilities, here’s an example workflow that automates document ingestion, processing, and indexing (a definition sketch follows the list):

1. Trigger: Start the workflow with an HTTP request or an event like a file upload to Azure Blob Storage.
2. Get Blob Content: Retrieve the document to be processed.
3. Parse Document: Extract structured information, such as key data points from a service agreement.
4. Chunk Text: Split the document content into smaller, manageable text chunks.
5. Generate Embeddings: Use the Azure OpenAI connector to create vector embeddings for the text chunks.
6. Select Array: Compose the inputs passed to the Index Multiple Documents operation.
7. Index Data: Store the embeddings and metadata for downstream applications, like search or analytics.
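The skeleton below sketches what that Consumption workflow definition could look like. Action type names, parameters, and connector inputs are assumptions for readability (the designer generates the exact shapes when you add each operation), so treat this as a map of the flow rather than a paste-ready definition.

```jsonc
// Sketch of the ingestion pipeline: blob -> parse -> chunk -> embed -> select -> index.
// Types and parameters marked "assumed" are illustrative, not verified connector schemas.
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "When_a_blob_is_added": { "type": "ApiConnection", "inputs": {} }  // Blob Storage trigger (inputs elided)
    },
    "actions": {
      "Get_blob_content": {
        "type": "ApiConnection",                            // Blob Storage connector (inputs elided)
        "inputs": {}
      },
      "Parse_document": {
        "type": "ParseDocument",                            // assumed type name
        "runAfter": { "Get_blob_content": [ "Succeeded" ] },
        "inputs": { "content": "@body('Get_blob_content')" }
      },
      "Chunk_text": {
        "type": "ChunkText",                                // assumed type name
        "runAfter": { "Parse_document": [ "Succeeded" ] },
        "inputs": { "content": "@body('Parse_document')", "maxTokenSize": 512 }  // assumed parameter
      },
      "Get_embeddings": {
        "type": "ApiConnection",                            // Azure OpenAI connector, Get Embeddings
        "runAfter": { "Chunk_text": [ "Succeeded" ] },
        "inputs": {}
      },
      "Select_documents": {
        "type": "Select",                                   // pair each chunk with its vector
        "runAfter": { "Get_embeddings": [ "Succeeded" ] },
        "inputs": {
          "from": "@range(0, length(body('Chunk_text')))",
          "select": {
            "content": "@body('Chunk_text')[item()]",
            "embedding": "@body('Get_embeddings')[item()]"
          }
        }
      },
      "Index_multiple_documents": {
        "type": "ApiConnection",                            // AI Search connector (inputs elided)
        "runAfter": { "Select_documents": [ "Succeeded" ] },
        "inputs": {}
      }
    }
  }
}
```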
Why choose Consumption SKU?

With this release, the Logic Apps Consumption SKU allows you to:
- Build smarter, scalable workflows: Leverage advanced AI capabilities without upfront infrastructure costs.
- Pay only for what you use: Ideal for event-driven workloads where cost-efficiency is key.
- Integrate seamlessly: Combine AI capabilities with hundreds of existing Logic Apps connectors.

What’s next?

In December, we’ll announce the Vector Search operation for the AI Search connector, enabling retrieval capability in the Logic Apps Consumption SKU and bringing feature parity with the Standard SKU. This will allow you to perform advanced search scenarios by matching queries with contextually similar content. Stay tuned for updates!

Get Ready for Azure Integration Services at Microsoft Ignite 2024

Microsoft Ignite 2024 is just around the corner, and we’re excited to share how Azure Integration Services is taking center stage this year! Whether you're attending in person in Chicago or virtually from anywhere in the world, this is your chance to dive deep into the latest innovations in enterprise integration, AI-powered automation, API governance, and much more. Mark your calendars for these must-see sessions:

Breakout sessions

Modernize Enterprise Integration with Azure Integration Services
- Date: Thursday, November 21, 2024
- Time: 12:30 PM - 1:15 PM Pacific Standard Time
- Session Code: BRK150
- Speakers: Divya Swarnkar, Kent Weare

In today’s rapidly evolving digital world, modernizing enterprise integration is critical to maintaining a competitive edge. This session will explore how Azure Integration Services can streamline and automate your processes, ensuring business continuity while driving transformation. We’ll cover how hybrid deployment models seamlessly connect on-premises systems with the cloud and demonstrate how to transition from legacy platforms like BizTalk to Azure Integration Services, all while preserving your existing investments. Building on that foundation, we’ll showcase how Azure Logic Apps can integrate AI into your workflows, reshape every business process, and reinvent customer experiences. If you’re looking to modernize your enterprise integration, unlock new opportunities, and stay ahead without disrupting your business operations, this session is for you.

Effective API Governance in the Era of AI with Azure API Management
- Date: Wednesday, November 20, 2024
- Time: 3:00 PM - 3:45 PM Pacific Standard Time
- Session Code: BRK143
- Speakers: Mike Budzynski, Julia Kasper

As APIs continue to drive innovation, effective governance becomes more important, especially when it comes to managing the complexity of AI-driven workloads. In this session, we’ll dive into how Azure API Management can help you implement a robust API governance model that ensures security, compliance, and scalability for AI and other critical APIs. Learn how to leverage Azure’s powerful tools like Azure API Management, Azure Policy, and Microsoft Defender for Cloud to accelerate API development, enhance reliability, and stay ahead of evolving security requirements, all without slowing down innovation.

Demo

GenAI Gateway Capabilities in Azure API Management
- Date: Wednesday, November 20, 2024
- Time: 9:00 AM - 9:15 AM Pacific Standard Time
- Session Code: THR509
- Speakers: Nima Kamoosi, Fernando Mejia

GenAI apps are pushing the boundaries of what’s possible with APIs. This quick but impactful demo will show you how GenAI gateway capabilities in Azure API Management can help overcome scalability, security, and monitoring challenges in GenAI app development. We’ll demonstrate how you can configure Azure API Management to authenticate and authorize LLM (large language model) endpoints, enforce token consumption limits, monitor usage, and implement load balancing, all within the familiar environment of Azure. Don’t miss this opportunity to see how these capabilities can streamline your GenAI app development.

In-Person Expert Meetup at the Microsoft Hub

Want to dive even deeper into the world of Azure Integration Services? Join us at the Expert Meetup stations in the Microsoft Hub at Ignite for in-person demos and to ask questions directly of the product experts and team members. This is a great opportunity to engage with the people behind the solutions and get tailored advice on your integration challenges.
Don’t Miss Out!

Microsoft Ignite 2024 offers a unique chance to gain firsthand insights into the latest trends and solutions shaping the future of enterprise integration and API management. Register today to secure your spot and take advantage of these exciting sessions, demos, and expert meetups.

Building Intelligent Workflows with Azure Integration Services: Session 2 Preview
Our three-part webinar series, Building Intelligent Workflows with Azure Integration Services, is in full swing! After a successful first session that covered foundational strategies for AI integration, we’re thrilled to dive even deeper into the second session: Integrating AI into Your Workflows with Azure Logic Apps. If you missed the first session, you can watch it on-demand here to learn how Azure Integration Services can transform your operations through intelligent automation.

What to Expect in Session 2: Integrating AI into Your Workflows with Azure Logic Apps

Session 2 will focus on using Azure Logic Apps to embed AI into your workflows, highlighting new features that simplify data ingestion and document processing for generative AI applications. This session will show you how Azure Logic Apps accelerates AI-enhanced automation, helping you achieve more with less effort and complexity. We’ll also have live demos to bring these concepts to life!

Joining us will be a special guest speaker, Mick Badran, Founder & Director of SolveIT.Today, a consulting company specializing in intelligent process automation and AI-driven solutions. Mick will share insights on the challenges and best practices for AI integration, explaining how Logic Apps can address these challenges while driving automation and business value.

Here’s what you’ll learn:

1. Streamline Document Processing with Built-in Actions for AI Ingestion. Discover the built-in actions in Logic Apps Standard (public preview) for document parsing and chunking, designed to simplify AI data ingestion using Retrieval-Augmented Generation (RAG) patterns. Actions like “Parse a document” and “Chunk text” convert formats like PDF, CSV, and Excel into tokenized chunks that are compatible with Azure AI Search and Azure OpenAI, eliminating the need for complex coding.

2. Accelerate Development with Templates Support in Logic Apps. With Templates support now in public preview, Logic Apps offers a growing library of pre-built templates, covering scenarios from basic data transfers to intricate event-driven workflows. These templates provide robust solutions to help you start building and deploying applications faster than ever, whether you’re new to Logic Apps or experienced in enterprise workflows.

3. Leverage Built-In AI Connectors for Generative AI. We’ll introduce the Azure OpenAI and Azure AI Search connectors, which bridge Logic Apps workflows with AI capabilities. These connectors support secure connections via multiple authentication methods, including AAD and managed identities, and they work even behind firewalls. You’ll see how these AI-driven tools power natural language processing, data retrieval, and document processing, transforming your workflows with minimal setup.

Don’t Miss This Opportunity!

Join Kent Weare, Divya Swarnkar, and Mick Badran for this insightful session, complete with live demos, to explore the latest in AI-powered automation with Azure Integration Services. Sign up now to secure your spot!

Join Us in Creating Our First Logic Apps Community Playbook
We are thrilled to announce an exciting opportunity for all community members who are passionate about sharing their knowledge and experience: the creation of our very own Logic Apps Community Playbook! This initiative is designed to build a patterns-and-practices playbook in a joint effort between Product Group engineers, Microsoft field experts, and our diverse and talented community, creating an invaluable resource for everyone.

By contributing to our Community Playbook, you will:
- Enhance Our Documentation: Your contributions will help improve and expand our existing documentation, making it more comprehensive and useful for all users.
- Share Your Expertise: This is your chance to shine! Share your unique insights and experiences with the community, and help others learn from your knowledge.
- Get Featured: Contributors will be recognized and featured on Microsoft Learn in the Logic Apps section. This is a fantastic opportunity to showcase your expertise on a renowned platform.

We have made it easy for you to get involved. Simply fill out our call-for-content sign-up link with the required details and wait for our team to review your proposal. We will then contact you with more details on how to contribute. Your participation can make a significant impact. Don’t miss this opportunity to contribute, connect, and be celebrated for your expertise. We look forward to your submissions and to building something extraordinary together!