Azure API Management
Import Logic Apps (Standard) into Azure API Management
API Management (APIM) is a way to create consistent and modern API gateways for existing back-end services. API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. Azure Logic Apps is a cloud-based platform for creating and running automated logic app workflows that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the Logic App (Consumption) resource type or the Logic App (Standard) resource type. The Consumption resource type runs in the multi-tenant Azure Logic Apps environment or an integration service environment, while the Standard resource type runs in the single-tenant Azure Logic Apps environment. This blog walks you through, step by step, how to import a Logic App (Standard) into Azure API Management. For how to import a Logic App (Consumption) into APIM, please refer to our public documentation.

Prerequisites:
- Create an Azure API Management instance.
- Create a Logic App.

The functionality to import directly via "Create from Azure resource" is not yet available for workflows in Logic App (Standard). The steps below show how to work around this limitation.

Steps to import Logic App (Standard) into Azure API Management
========================================================

As an alternative, we can manually register the Request trigger URL of a workflow as a blank API in the APIM service. We need to divide the request URL (that is, the Logic Apps workflow URL) into two parts, one for the backend and one for the frontend. For example, this request URL can be broken into two segments:

https://stdla1.azurewebsites.net:443/api/TESTWF1/triggers/manual/invoke?api-version=2020-05-01-pre...

Part 1: https://stdla1.azurewebsites.net:443/api/

Part 2: /TESTWF1/triggers/manual/invoke?api-version=2020-05-01-preview&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<123abc>

Place the part 1 URL into either the Web service URL of the API or the backend HTTP(s) endpoint: in the API's settings, click the backend HTTP(s) endpoint, select HTTP(s) endpoint as the target, enable Override, and provide the first part of your request URL. Next, add the part 2 URL (including the query string) into the frontend section as the operation's URL. (A policy-based sketch of this split is included at the end of this post.) On testing, the operation returns a 200 OK response. Similarly, you can add other workflows as different operations in the same API. This method works for Logic App (Standard) workflows. Happy Learning!! 🙂
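If you prefer to express the same split in policy instead of (or in addition to) the portal fields described above, a minimal sketch could look like the following. The host name and workflow name come from the example URL above, and {{stdla1-sig}} is a hypothetical named value holding the workflow's SAS signature:

```
<policies>
    <inbound>
        <base />
        <!-- Part 1 goes to the backend: the Logic App (Standard) host -->
        <set-backend-service base-url="https://stdla1.azurewebsites.net:443/api" />
        <!-- Part 2 is the workflow trigger path; the SAS signature is kept in a
             hypothetical named value ({{stdla1-sig}}) rather than hard-coded here -->
        <rewrite-uri template="/TESTWF1/triggers/manual/invoke?api-version=2020-05-01-preview&amp;sp=%2Ftriggers%2Fmanual%2Frun&amp;sv=1.0&amp;sig={{stdla1-sig}}" />
    </inbound>
    ...
</policies>
```

Storing the signature in a named value keeps the secret out of the policy document itself and lets you rotate it without editing the policy.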
Generative AI with JavaScript FREE course

JavaScript devs, now’s your chance to tap into the potential of Generative AI! Whether you’re just curious or ready to level up your apps, our new video series is packed with everything you need to start building AI-powered applications.
Boost Your Development with Azure Tools for Visual Studio Code

As the cloud becomes essential for modern software development, integrating cloud solutions into your development process can significantly boost productivity. Microsoft Azure offers a comprehensive suite of services and tools to help developers create, deploy, and manage cloud applications. Using Azure extensions for Visual Studio Code is one of the simplest ways to utilize Azure’s features. This blog post will discuss using the Azure Tools extension pack for Visual Studio Code and the best extensions for various development roles.
Configure rate limits for different API operations in Azure API Management

Azure API Management (APIM) is one of the PaaS products offered by Azure that allows you to publish, manage, secure, and monitor APIs. One of the features of APIM is the ability to control traffic to your APIs using policies such as rate limits and quotas. Rate limits are policies that prevent API usage spikes on a per-subscription or per-key basis by limiting the call rate to a specified number per specified time period. Quotas are policies that enforce a hard limit on the number of calls that can be made to an API within a billing period. In this blog post, we will focus on how to configure rate limits for different operations in APIM using the `rate-limit-by-key` policy. This policy allows you to define expressions to identify the keys that are used to track traffic usage. You can use any arbitrary string value as a key, such as an IP address, a subscription ID, etc.

Scenario

Let's say you have two operations in your API: Operation A and Operation B. You want to apply different rate limits for each operation based on your business requirements. For example:

- Operation A has a rate limit of 5 calls per minute
- Operation B has a rate limit of 5 calls per 30 seconds

You also want to make sure that the rate limits are independent by operation, meaning that calling one operation does not affect the counter for another operation.

Solution

As described in our official documentation, when the policy is configured at the API scope, all operations (regardless of API) share a single counter. That is, if you make two calls to one operation, those calls count towards the same counter used by all of the operations. To achieve this scenario, you need to use the `rate-limit-by-key` policy with an expression that produces unique values for different operations. One way to ensure that is to include `context.Operation.Id` in the expression. The `context.Operation.Id` property returns a unique identifier for each operation in your API. By concatenating it with another value, such as the caller's IP address or subscription ID, you can create a key that is specific to each operation and each caller.

Here is an example of how you can apply this policy in the inbound section of your API:

```
<policies>
    <inbound>
        <!-- Applies the rate limit per caller IP address and per operation -->
        <rate-limit-by-key calls="5" renewal-period="60" counter-key="@(context.Request.IpAddress + context.Operation.Id)" />
        <base />
    </inbound>
    ...
</policies>
```

To test this solution, you can use any tool that can send HTTP requests, such as Postman or curl. You can also use the Test console in the Azure portal.

Note: Due to the distributed nature of the throttling architecture, rate limiting is never completely accurate. The difference between the configured and the actual number of allowed requests varies based on request volume and rate, backend latency, and other factors.

You can also customize this policy by adding optional attributes such as increment-condition or quota-exceeded-response-code. For more details on how this policy works and what options are available, see: https://learn.microsoft.com/en-us/azure/api-management/rate-limit-by-key-policy

In this blog post, we have seen how to configure rate limits for different operations of an API in Azure API Management using the `rate-limit-by-key` policy. This policy allows us to define expressions to identify the keys that are used to track traffic usage.
We have also seen how to use `context.Operation.Id` as part of the key expression to ensure that the rate limits are independent by operation. We hope this blog helps you!
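If the two operations also need different limits, as in the scenario above (5 calls per minute for Operation A, 5 calls per 30 seconds for Operation B), one option is to configure `rate-limit-by-key` in each operation's inbound section rather than once at the API level. A minimal sketch for Operation B, reusing the same IP-plus-operation key:

```
<policies>
    <inbound>
        <base />
        <!-- Operation-scoped limit: 5 calls per 30 seconds for this operation only -->
        <rate-limit-by-key calls="5" renewal-period="30" counter-key="@(context.Request.IpAddress + context.Operation.Id)" />
    </inbound>
    ...
</policies>
```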
OpenAI at Scale: Maximizing API Management through Effective Service Utilization

Harnessing Azure OpenAI at Scale: Effective API Management with Circuit Breaker, Retry, and Load Balance. Unlock the full potential of Azure OpenAI by leveraging the advanced capabilities of Azure API Management. This guide explores how to effectively utilize Circuit Breaker, Retry, and Load Balance strategies to optimize backends and ensure seamless service utilization. Learn best practices for integrating OpenAI services, enhancing performance, and achieving scalability through robust API management policies.
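To make the retry part of that guidance concrete, here is a minimal sketch of a backend-section policy that re-sends a request to an Azure OpenAI backend while it keeps returning HTTP 429; the count and interval values are illustrative and are not recommendations taken from the guide:

```
<backend>
    <!-- Retry the forwarded call up to 3 times, 2 seconds apart, while the backend returns 429 -->
    <retry condition="@(context.Response.StatusCode == 429)" count="3" interval="2" first-fast-retry="false">
        <forward-request buffer-request-body="true" />
    </retry>
</backend>
```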
Post-retirement action for App Service Environment v1/v2, Logic Apps ISE, and APIM stv1

The following Azure services reached end of life on August 31, 2024, and are no longer supported:

- Azure App Service Environment v1 and v2
- Logic Apps Integration Service Environment
- Azure API Management stv1

Customers running retired environments should migrate immediately to avoid security risk and loss of data. As part of our ongoing communication and assistance to migrate customers safely to their new environments, this article provides more information on the customer experience for any workloads that remain in production beginning 1st September 2024.
Secure APIM and Azure OpenAI with managed identity

Ok, so you might have read somewhere that API keys are not secure, and you might even have heard about this managed identity thing. But what is it, and why is it better than API keys? Let's try to answer that question and show a practical example of how to use managed identities in Azure.
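For context, the typical managed-identity pattern with API Management in front of Azure OpenAI (a sketch of the general approach, not necessarily the exact policy from the post) is to let APIM's identity request a token and attach it to the backend call, so no API key has to be stored anywhere. This assumes the APIM instance's managed identity has been granted an appropriate Azure OpenAI role on the target resource:

```
<inbound>
    <base />
    <!-- Acquire a token for Azure OpenAI using APIM's system-assigned managed identity -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com" output-token-variable-name="msi-access-token" ignore-error="false" />
    <!-- Send the token as a Bearer header on the backend call instead of an api-key -->
    <set-header name="Authorization" exists-action="override">
        <value>@("Bearer " + (string)context.Variables["msi-access-token"])</value>
    </set-header>
</inbound>
```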
Manage your Generative AI APIs with Azure API Management and Azure Open AI

This is for you who have started with Generative AI APIs and are looking to take those APIs into production. At a high level, there are things to consider such as load balancing, error management, and cost management. We’ll mention those in this article and guide you to an Azure Sample where you can get started deploying an enterprise-ready solution.
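On the cost-management point, API Management offers generative-AI gateway policies such as a token limit for Azure OpenAI APIs. A minimal sketch with illustrative values (attribute names may vary slightly depending on the API Management version you use):

```
<inbound>
    <base />
    <!-- Caps each subscription at an illustrative 5000 Azure OpenAI tokens per minute -->
    <azure-openai-token-limit counter-key="@(context.Subscription.Id)" tokens-per-minute="5000" estimate-prompt-tokens="true" />
</inbound>
```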