Azure Cache for Redis Retirement: What to Know and How to Prepare
Microsoft has announced the retirement of Azure Cache for Redis (Basic, Standard, Premium) and Azure Cache for Redis Enterprise/Enterprise Flash tiers. If you rely on these services today, it’s important to understand what’s changing, when, and how to prepare for a smooth transition to Azure Managed Redis.

What’s Retiring and When?

Azure Cache for Redis (Basic, Standard, Premium):
- Creation blocked for new customers: April 1, 2026
- Creation blocked for existing customers: October 1, 2026
- Retirement date: September 30, 2028; instances will be disabled starting October 1, 2028

Azure Cache for Redis Enterprise/Enterprise Flash:
- Creation blocked for all customers: April 1, 2026
- Retirement date: March 31, 2027; instances will be migrated to Azure Managed Redis starting April 1, 2027

Existing instances will continue to run and receive regular maintenance until their respective retirement dates. More information here.

Why Move to Azure Managed Redis?

Azure Managed Redis is built on Redis Enterprise software, offering significant improvements:
- Enterprise-grade features: active geo-replication, Redis modules, and more
- Performance and cost: more performant and cost-effective than all tiers of Azure Cache for Redis
- Reliability: zone redundancy by default, up to 99.999% availability with geo-replication
- Simplified management: native Azure experience, no Marketplace component, and easier provisioning and billing compared to Azure Cache for Redis Enterprise

Migration Guidance: What Customers Need to Do

Upgrade early: Microsoft recommends upgrading to Azure Managed Redis as soon as possible rather than waiting for the retirement deadline. Early migration ensures you benefit from new features and avoid last-minute disruptions.

Migration tooling: For Basic/Standard/Premium, a command-line migration experience will be available in phases from February 2026, starting with Basic cache support in preview.
This tooling will allow you to migrate your cache endpoint, keeping the same hostname and access key for a seamless transition. For Enterprise/Enterprise Flash, migration tooling will roll out in phases starting March 2026.

Downtime: If you use the migration tooling, expect only a brief connection blip (a few seconds) when the DNS record is updated. This is similar to the downtime experienced during regular maintenance.

Application changes:
- Update your Redis hostname and access key to point to the new Azure Managed Redis instance.
- Azure Managed Redis is clustered by default. Most client libraries (e.g., StackExchange.Redis) work out of the box, but check your library’s documentation for cluster support. Non-clustered support is available up to 25 GB, but clustering is recommended for performance and scalability.
- For migrating data, see the options outlined in this blog post: Data Migration with RIOT-X for Azure Managed Redis | Microsoft Community Hub and here.

Reservations: You can cancel or exchange your existing reservations for Azure Cache for Redis as described in the Microsoft Cost Management documentation.

Feature Parity and Regional Availability: What’s Coming and When

Azure Managed Redis is actively being enhanced to close feature gaps and expand regional coverage. Here are the key ETAs for upcoming features and regions (all dates are tentative):
- Azure public regions: France Central, November 2025; Qatar Central, December 2026
- Azure sovereign clouds: China Cloud, July 2026; US Gov Cloud, July 2026
- Larger SKUs: Memory Optimized, Balanced, and Compute Optimized (up to 500 GB), May 2026
- Management operations: scheduling maintenance windows, February 2026; keyspace notifications, March 2026

If you need a feature or region that isn’t yet available, reach out to support or email AzureManagedRedis@microsoft.com for guidance.
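The first application change above (re-pointing the client at the new endpoint) usually amounts to a small configuration update. A minimal sketch in Python, assuming the Azure Managed Redis defaults of TLS on port 10000; the hostname and access key below are placeholders, not real values:

```python
def managed_redis_settings(hostname: str, access_key: str) -> dict:
    """Build client connection settings for an Azure Managed Redis endpoint.

    Azure Managed Redis listens with TLS on port 10000 and is clustered by
    default, so verify that your client library has cluster support enabled.
    """
    return {
        "host": hostname,        # placeholder; use the hostname shown in the Azure portal
        "port": 10000,           # Azure Managed Redis default port
        "password": access_key,  # or swap in Entra ID token authentication
        "ssl": True,             # TLS is required
    }

# Example: re-point an existing app by swapping in the new endpoint's settings.
settings = managed_redis_settings("my-cache.example.redis.azure.net", "<access-key>")
```

The same settings dictionary can be passed to whichever Redis client your application already uses; only the endpoint values change.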
Resources for a Smooth Migration

- Migration Overview & Guidance
- Choosing the Right Tier
- Azure Managed Redis architecture

Key Takeaways for Customers

- Don’t wait—start planning your migration to Azure Managed Redis now.
- Migration tooling will make the process easier, with phased rollouts starting November 2025.
- Feature parity is a priority, with major gaps closing by March–June 2026.
- Reach out to Microsoft support if you have blockers or need help with migration.

Azure database - Allocated Space not expanding
Hello, please help me. Our Windows Azure database’s allocated space is not expanding. We have an elastic Standard tier. Our database space details are:

- Used space: 71.91 GB
- Allocated space: 71.91 GB
- Maximum storage size: 300 GB

We are not able to add any new records to the database.

Building faster AI agents with Azure Managed Redis and .NET Aspire
AI is evolving fast—and so are the tools to build intelligent, responsive applications. In our recent Microsoft Reactor session, Catherine Wang (Principal Product Manager at Microsoft) and Roberto Perez (Microsoft MVP and Senior Global Solutions Architect at Redis) shared how Azure Managed Redis helps you create Retrieval-Augmented Generation (RAG) AI agents with exceptional speed and consistency.

Why RAG agents?

RAG applications combine the power of large language models (LLMs) with your own data to answer questions accurately. For example, a customer support chatbot can deliver precise, pre-approved answers instead of inventing them on the fly. This ensures consistency, reduces risk, and improves customer experience.

Where Azure Managed Redis fits with agentic scenarios

In this project, Azure Managed Redis is used as a high-performance, in-memory vector database to support agentic Retrieval-Augmented Generation (RAG), enabling fast similarity searches over embeddings to retrieve and ground the LLM with the most relevant known answers. Beyond this, Azure Managed Redis is a versatile platform that supports a range of AI-native use cases, including:

- Semantic cache: cache and reuse previous LLM responses based on semantic similarity to reduce latency and improve reliability.
- LLM memory: persist recent interactions and context to maintain coherent, multi-turn conversations.
- Agentic memory: store long-term agent knowledge, actions, and plans to enable more intelligent and autonomous behavior over time.
- Feature store: serve real-time features to machine learning models during inference for personalization and decision-making.

These capabilities make Azure Managed Redis a foundational building block for fast, stateful, and intelligent AI applications.

Demo highlights

In the session, the team demonstrates how to:

- Deploy a RAG AI agent using .NET Aspire and Azure Container Apps.
- Secure your Redis instance with Microsoft Entra ID, removing the need for connection strings.
- Use Semantic Kernel to orchestrate agents and retrieve knowledge base content via vector search.
- Monitor and debug microservices with built-in observability tools.

Finally, we walk through code examples in C# and Python, demonstrating how you can integrate Redis search, vector similarity, and prompt orchestration into your own apps.

Get Started

Ready to explore?
✅ Watch the full session replay: Building a RAG AI Agent Using Azure Redis
✅ Try the sample code: Azure Managed Redis RAG AI Sample

Azure Managed Redis & Azure Cosmos DB with cache‑aside: a practical guide
Co-authored by James Codella, Principal Product Manager, Azure Cosmos DB, Microsoft; Andrew Liu, Principal Group Product Manager, Azure Cosmos DB, Microsoft; Philip Laussermair, Azure Managed Redis Solution Architect, Redis Inc.

Using Azure Managed Redis alongside Azure Cosmos DB is a powerful way to reduce operational costs in read-heavy applications. While Azure Cosmos DB delivers low-latency point reads (an SLA-backed 10 ms at the 99th percentile of requests), each read consumes Request Units (RUs), which directly impact your billing. For workloads with frequent access to the same data, caching those reads in Azure Managed Redis can dramatically reduce RU consumption and smooth out cost spikes. This cache-aside pattern allows applications to serve hot data from memory while preserving Azure Cosmos DB as the system of record. By shifting repeated reads to Azure Managed Redis, developers can optimize for cost efficiency without sacrificing consistency, durability, or global availability.

What each service does: Azure Managed Redis is Microsoft’s first-party, fully managed service built on Redis Enterprise. Azure Cosmos DB offers durable multi-region writes, high-availability SLAs, and in-region single-digit-millisecond latency for point operations. Co-locating app compute, Azure Managed Redis, and Azure Cosmos DB in the same region minimizes round trips; adding Azure Managed Redis on top of Azure Cosmos DB reduces RU consumption from repeat reads and smooths tail-latency spikes during peak load. Both services offer strong support for queries (including vector search) over JSON data, so they work well together for fast, efficient AI apps.

This pairing doesn’t require a rewrite. You can adopt a cache-aside strategy in your data-access layer: look in Redis first; on a miss, read from Azure Cosmos DB and populate Redis with a TTL; on writes, update Azure Cosmos DB and invalidate or refresh the corresponding cache key.
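The cache-aside flow just described can be sketched in a few lines. In this sketch, plain dicts stand in for Azure Managed Redis and Azure Cosmos DB; in a real app, `cache` would be a Redis client issuing GET/SET with a TTL, and `db` would be a Cosmos DB container doing point reads:

```python
# Cache-aside sketch: dicts stand in for Azure Managed Redis and Azure Cosmos DB.
cache: dict = {}  # Redis stand-in (a real cache would also set a TTL on each entry)
db = {"product:1": {"id": "1", "name": "widget", "_etag": "a1"}}  # Cosmos DB stand-in

def read_item(key: str) -> dict:
    # 1. Look in the cache first.
    if key in cache:
        return cache[key]
    # 2. Cache miss: point read from the database (this is what consumes RUs).
    value = db[key]
    # 3. Populate the cache so subsequent reads are served from memory.
    cache[key] = value
    return value

def write_item(key: str, value: dict) -> None:
    # 1. Persist to the source of truth.
    db[key] = value
    # 2. Invalidate the cached copy; the next read repopulates it.
    cache.pop(key, None)
```

The first read of a key misses and hits the database; repeated reads are then served from the cache until a write invalidates the entry.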
Use Azure Cosmos DB ETags in cache keys to make invalidation deterministic, and use the Azure Cosmos DB change feed to trigger precise cache refreshes when data changes.

The Cache-Aside (Lazy Loading) Pattern

Read path: GET the key from Azure Managed Redis. If found, return it. If not, issue a point read to Azure Cosmos DB, then SET/JSON.SET the value in Azure Managed Redis with a TTL and return the payload to the caller.

Write path: Persist to Azure Cosmos DB as the source of truth. Invalidate or refresh the related Redis key (for example, delete product:{id}:v{etag} or write the new version). If you subscribe to the change feed, an Azure Function can perform this invalidation asynchronously to keep caches hot under write bursts.

Code Example (.NET):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Extensions.Logging;
using StackExchange.Redis;
using Microsoft.Azure.StackExchangeRedis;
using Azure.Identity;

public static class CosmosDbChangeFeedFunction
{
    private static ConnectionMultiplexer _redisConnection;

    static CosmosDbChangeFeedFunction()
    {
        // Initialize the Redis connection using Entra ID (Azure AD) authentication
        var redisHostName = Environment.GetEnvironmentVariable("RedisHostName"); // e.g., mycache.redis.cache.windows.net
        var credential = new DefaultAzureCredential();
        var configurationOptions = ConfigurationOptions.Parse($"{redisHostName}:10000");
        configurationOptions.ConfigureForAzureWithTokenCredentialAsync(credential).GetAwaiter().GetResult();
        _redisConnection = ConnectionMultiplexer.Connect(configurationOptions);
    }

    [FunctionName("CosmosDbChangeFeedFunction")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "my-database",
            containerName: "my-container",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseContainerName = "leases")] IReadOnlyList<Document> input,
        ILogger log)
    {
        var cache = _redisConnection.GetDatabase();
        if (input != null && input.Count > 0)
        {
            foreach (var doc in input)
            {
                string id = doc.GetPropertyValue<string>("id");
                string etag = doc.GetPropertyValue<string>("_etag");
                string cacheKey = $"item:{id}:v{etag}";
                string json = doc.ToString();

                // Refresh the cached copy with the new document version and a 10-minute TTL
                await cache.StringSetAsync(cacheKey, json, TimeSpan.FromMinutes(10));
                log.LogInformation($"🔄 Refreshed cache for key: {cacheKey}");
            }
        }
    }
}

Why it works: The database handles correctness and global replication; the cache handles locality and frequency. You reduce repeated reads (and RU costs) and lower p99 latency by serving hot keys from memory close to the compute. TTLs give you explicit control over staleness; negative caching and stale-while-revalidate are easy extensions when appropriate.

Design Choices That Matter

- TTLs: Choose TTLs that reflect the business tolerance for staleness. Use jitter (±N%) to avoid thundering herds when many keys expire simultaneously.
- Keying and versioning: Include an ETag or version in the key, e.g., product:{id}:v{etag}. When the record changes, the ETag changes, naturally busting the old key.
- Cache stampede control: For hot keys that miss, use a short single-flight lock so that one request refreshes the value while others wait briefly or serve stale data.
- Serialization: For portability, use compact JSON (RedisJSON if you want field-level reads/writes). Keep values small to preserve cache efficiency.
- Failure semantics: Treat Azure Managed Redis as an optimization. If the cache is unavailable, the app should continue by reading from Azure Cosmos DB. Favor idempotent writes and retry-safe operations.

Why Azure Managed Redis & Azure Cosmos DB Work Well in Practice

Local speed, global reach: Azure Cosmos DB targets a single-digit-millisecond p99 for in-region point operations. Placing Azure Managed Redis in the same region enables sub-millisecond memory reads for repeated access patterns, providing optimal performance for high-frequency data access. The result is a shorter, more predictable critical path.
Active-active at both layers: Azure Cosmos DB supports multi-region writes, so each region can accept traffic locally. Azure Managed Redis supports active geo-replication across up to five instances, using conflict-free replicated data types (CRDTs) to converge cache state. That yields region-local cache hits with eventual consistency at the cache tier and strong guarantees at the database tier.

Reference Architecture: Regional Deployment

- Ingress: Clients reach the app via Azure Front Door (or similar) and land on Azure App Service in a VNet.
- Read path: The app queries Azure Managed Redis first. On a miss, it performs a point read against Azure Cosmos DB (API for NoSQL) and updates Redis with a TTL.
- Write path: Writes go to Azure Cosmos DB. The change feed triggers an Azure Function that invalidates or refreshes related Redis keys.
- Observability: Use Azure Monitor for logs and metrics. Monitor cache hit ratio, gets/sets, evictions, and geo-replication health on the Azure Managed Redis side; RU/s, throttles, and p99 latency on the Azure Cosmos DB side.

Operations and SRE Considerations

- Co-location: Keep compute, Azure Managed Redis, and the Azure Cosmos DB write/read region together to avoid unnecessary RTTs.
- Capacity planning: Size Azure Managed Redis memory for working-set coverage and headroom for failover. Validate eviction policies (volatile-TTL vs. all-keys variants) against workload behavior.
- Back-pressure: Watch RU throttling on Azure Cosmos DB and evictions on Azure Managed Redis. High evictions or a low hit ratio indicate a working-set mismatch or TTLs that are too short.
- Testing: Load-test with realistic key distributions and measure p50/p95/p99 on both cache hits and misses. Chaos-test cache outages to verify graceful degradation.
- Security: Use managed identities for data-plane access where supported; apply App Service access restrictions and VNet integration as appropriate; audit with Azure Monitor logs.
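Two of the design choices above, TTL jitter and ETag-versioned cache keys, reduce to a few lines of code. A sketch in Python (the function names and the key format are illustrative, following the product:{id}:v{etag} convention from the article):

```python
import random

def jittered_ttl(base_seconds: int, jitter_pct: float = 0.1) -> int:
    """Spread expirations by +/- jitter_pct so many keys set at the same time
    don't all expire together (avoiding a thundering herd of cache misses)."""
    delta = int(base_seconds * jitter_pct)
    return base_seconds + random.randint(-delta, delta)

def versioned_key(entity: str, item_id: str, etag: str) -> str:
    """Embed the Cosmos DB ETag in the key; when the document changes, the
    ETag changes, so the old cache entry is naturally bypassed."""
    return f"{entity}:{item_id}:v{etag}"

key = versioned_key("product", "42", "0000d1f3")  # "product:42:v0000d1f3"
ttl = jittered_ttl(600)  # 10 minutes, +/- 10%
```

Both helpers would be called on the write-through path: compute the key from the document's current ETag, then SET with the jittered TTL instead of a fixed one.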
Putting It All Together

Adopt cache-aside in one region, measure hit ratio and RU savings, then add change feed–based invalidation to keep hot keys fresh under write load. When you need global scale, enable Azure Cosmos DB multi-region writes and Azure Managed Redis active geo-replication so that every region serves users locally. You end up with fast read paths, clear consistency boundaries, and a deployment model that scales without surprises.

Next steps

- Review the Azure Managed Redis documentation: https://learn.microsoft.com/azure/redis/
- Learn more about the cache-aside pattern in the Azure Architecture Center: https://learn.microsoft.com/azure/architecture/patterns/cache-aside
- Start with a sample app: https://github.com/AzureManagedRedis/