Among the many great features Azure API Management (APIM) provides, response and value caching are among the most effective ways to alleviate load on your backends and reduce round-trip times (latency) for your API consumers. Depending on the nature of your implementation, the gains can be substantial.
APIM provides two types of caching - internal and external. The former is maintained within each APIM instance; the latter is implemented through Azure Cache for Redis. Both offerings have their respective advantages and disadvantages, and the best way to decide which is most suitable for you is to read up on both and understand how caching support differs across the APIM tiers.
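To make the distinction concrete, here is a minimal response-caching policy sketch (generic illustration only, not the policy we build in this post); the `caching-type` attribute selects between the built-in (`internal`) cache and an attached Redis (`external`) cache:

```xml
<policies>
    <inbound>
        <base />
        <!-- Response caching: serve the entire HTTP response from cache on a hit.
             caching-type can be "internal", "external", or "prefer-external". -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" caching-type="external" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <!-- On a miss, cache the backend response for 60 seconds. -->
        <cache-store duration="60" caching-type="external" />
    </outbound>
</policies>
```

Value caching, which we use below, works analogously but stores arbitrary strings under a key via `cache-store-value` / `cache-lookup-value` instead of caching whole responses.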
While this blog post is just as applicable to a single-region APIM installation, I want to focus on what a multi-region approach looks like. Therefore, suppose you have a distributed APIM setup in two or more regions; we will focus on the paired regions Central US and East US 2. For simplicity, both of these APIM instances source data from an external, third-party API, but any other backend implementation is adaptable. It is very probable that, as part of your fault-tolerant, region-based design, you have redundant backend APIs, replicated data stores in each region, and so on. That all applies here as well, but we will focus on a simpler scenario that conveys the main point of this post.
As alluded to in the introduction, our objective is to provide a fast, robust caching experience that alleviates backend stress and reduces latency for our callers. We will implement an external Redis cache that is shared between our APIM instances. Note that I am deliberately not focusing on redundancy of the Redis implementation itself, in order to keep the focus on the APIM / Redis interaction.
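For the provisioning side, here is a hedged sketch using the Azure CLI; the resource names, region, and subscription ID are placeholders, and the external cache is registered through the `Microsoft.ApiManagement/service/caches` ARM resource (a portal-based setup works just as well):

```shell
# Sketch only - resource group, cache name, service name, and subscription ID are placeholders.
RG="rg-apim-redis-demo"
SUB="<subscription-id>"

# 1) Create an Azure Cache for Redis instance (Basic C0 is enough for a demo).
az redis create --name apim-redis-demo-cache --resource-group "$RG" \
    --location centralus --sku Basic --vm-size c0

# 2) Grab an access key to build the connection string.
KEY=$(az redis list-keys --name apim-redis-demo-cache --resource-group "$RG" \
    --query primaryKey --output tsv)
CONN="apim-redis-demo-cache.redis.cache.windows.net:6380,password=${KEY},ssl=True,abortConnect=False"

# 3) Register the cache as an APIM external cache for the Central US instance.
#    Repeat with the East US 2 service name and useFromLocation for the second instance.
az rest --method put \
    --url "/subscriptions/${SUB}/resourceGroups/${RG}/providers/Microsoft.ApiManagement/service/apim-redis-demo-central-us/caches/centralus?api-version=2022-08-01" \
    --body "{\"properties\":{\"connectionString\":\"${CONN}\",\"useFromLocation\":\"centralus\"}}"
```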
We will use the Color API that we also use in our APIM Hands-On Lab. Apply the following policy to the API in each of the two APIM instances.
<policies>
    <inbound>
        <base />
        <!-- 1) Check whether we have a cache entry in our Redis cache (external). -->
        <cache-lookup-value key="randomcolor" variable-name="cachedrandomcolor" caching-type="external" />
        <choose>
            <!-- 2) On a cache hit, return the cached value immediately. Otherwise, execution continues to the backend call. -->
            <when condition="@(context.Variables.ContainsKey(&quot;cachedrandomcolor&quot;))">
                <return-response>
                    <set-header name="x-cache-status" exists-action="override">
                        <value>hit</value>
                    </set-header>
                    <set-body>@((string)context.Variables["cachedrandomcolor"])</set-body>
                </return-response>
            </when>
        </choose>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <!-- 1) Assemble the return value from the API response, the current APIM gateway region, and a timestamp. -->
        <set-variable name="returnValue" value='@{
            return $"\nAPI response: {context.Response.Body.As&lt;string&gt;()}\nAPIM region: {context.Deployment.Region}\nTimestamp: {DateTime.Now}\n\n";
        }' />
        <!-- 2) Store the value to be returned in the Redis cache (external) for 30 seconds. Subsequent requests within that window are served from cache. -->
        <cache-store-value key="randomcolor" duration="30" caching-type="external" value='@((string)context.Variables["returnValue"])' />
        <!-- 3) Return the same value to the APIM caller that we just stored in cache, making it indistinguishable whether it came from cache or the backend. -->
        <return-response>
            <set-header name="x-cache-status" exists-action="override">
                <value>miss</value>
            </set-header>
            <set-body>@((string)context.Variables["returnValue"])</set-body>
        </return-response>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
This concludes our APIM / API / Redis setup.
Solution Validation
To validate the shared cache, we will call both APIM instances in short succession. From the policy above, you can see that we cache the entry for 30 seconds.
curl -v https://apim-redis-demo-central-us.azure-api.net/color/api/RandomColor
curl -v https://apim-redis-demo-east-us2.azure-api.net/color/api/RandomColor
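To see the shared cache at work, inspect the `x-cache-status` header that the policy sets (the demo hostnames above are placeholders for your own gateway URLs). The first call should report a miss, and a call to the other region within the 30-second TTL should report a hit served from the shared Redis cache:

```shell
# First request populates the Redis cache via Central US: expect "x-cache-status: miss".
curl -s -D - -o /dev/null https://apim-redis-demo-central-us.azure-api.net/color/api/RandomColor | grep -i x-cache-status

# Within 30 seconds, East US 2 reads the same shared cache entry: expect "x-cache-status: hit".
curl -s -D - -o /dev/null https://apim-redis-demo-east-us2.azure-api.net/color/api/RandomColor | grep -i x-cache-status
```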
I demonstrated how two APIM instances can successfully use a shared Redis cache. There are further optimizations to be made, for example around the resiliency of the Redis cache itself, but I hope I was able to convey the principle of the setup.
Please reach out and comment if you have any questions. Thanks so much!