Ignite 2025: Advancing Azure Database for MySQL with Powerful New Capabilities
At Ignite 2025, we're introducing a wave of powerful new capabilities for Azure Database for MySQL, designed to help organizations modernize, scale, and innovate faster than ever before. From enhanced high availability and seamless serverless integrations to AI-powered insights and greater flexibility for developers, these advancements reflect our commitment to delivering a resilient, intelligent data platform. Join us as we unveil what's next for MySQL on Azure - and discover how industry leaders are already building the future with confidence.

Enhanced Failover Performance with Dedicated SLB for High-Availability Servers

We're excited to announce the General Availability of Dedicated Standard Load Balancer (SLB) for HA-enabled servers in Azure Database for MySQL. This enhancement introduces a dedicated SLB to High Availability configurations for servers created with public access or Private Link. By managing the MySQL data traffic path, the SLB eliminates the need for DNS updates during failover, significantly reducing failover time. Previously, failover relied on DNS changes, which caused delays due to the DNS TTL (30 seconds) and client-side DNS caching.

What's new with GA:
- The FQDN consistently resolves to the SLB IP address before and after failover.
- Load-balancing rules automatically route traffic to the active node.
- The DNS cache dependency is removed, delivering faster failovers.

Note: This feature is not supported for servers using private access with VNet integration. Learn more

Build serverless, event-driven apps at scale - now GA with Trigger Bindings for Azure Functions

We're excited to announce the General Availability of Azure Database for MySQL Trigger bindings for Azure Functions, completing the full suite of Input, Output, and Trigger capabilities. This feature lets you build real-time, event-driven applications by automatically invoking Azure Functions when MySQL table rows are created or updated - eliminating custom polling and boilerplate code. With native support across multiple languages, developers can now deliver responsive, serverless solutions that scale effortlessly and accelerate innovation. Learn more
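To make the event-driven pattern concrete, here is a minimal C# sketch of a trigger-bound function. It is modeled on the documented pattern for the Azure Functions MySQL bindings; the attribute, namespace, and change-type names should be verified against the current bindings documentation, and the table name and connection-string setting are illustrative.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Extensions.MySql; // namespace assumed; check the package
using Microsoft.Extensions.Logging;

public class Product
{
    public int Id { get; set; }
    public string? Name { get; set; }
}

public class ProductsTrigger
{
    private readonly ILogger<ProductsTrigger> _logger;

    public ProductsTrigger(ILogger<ProductsTrigger> logger) => _logger = logger;

    // Invoked when rows in the Products table are created or updated.
    // "MySqlConnectionString" names an app setting holding the connection string.
    [Function("ProductsTrigger")]
    public void Run(
        [MySqlTrigger("Products", "MySqlConnectionString")]
        IReadOnlyList<MySqlChange<Product>> changes)
    {
        foreach (var change in changes)
        {
            _logger.LogInformation("Operation: {Operation}, Product Id: {Id}",
                change.Operation, change.Item.Id);
        }
    }
}
```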
Enable AI agents to query Azure Database for MySQL using Azure MCP Server

We're excited to announce that Azure MCP Server now supports Azure Database for MySQL, enabling AI agents to query and manage MySQL data using natural language through the open Model Context Protocol (MCP). Instead of writing SQL, you can simply ask questions like "Show the number of new users signed up in the last week in appdb.users grouped by day." - all secured with Microsoft Entra authentication for enterprise-grade security. This integration delivers a unified, secure interface for building intelligent, context-aware workflows across Azure services, accelerating insights and automation. Learn more

Greater networking flexibility with Custom Port Support

Custom port support for Azure Database for MySQL is now generally available, giving organizations the flexibility to configure a custom port (between 25001 and 26000) during new server creation. This enhancement streamlines integration with legacy applications, supports strict network security policies, and helps avoid port conflicts in complex environments. Supported across all network configurations - including public access, private access, and Private Link - custom port provisioning ensures every new MySQL server can be tailored to your needs. The managed experience remains seamless, with all administrative capabilities and integrations working as before. Learn more

Streamline migrations and compatibility with Lower Case Table Names support

Azure Database for MySQL now supports configuring the lower_case_table_names server parameter during initial server creation for MySQL 8.0 and above, ensuring seamless alignment with your organization's naming conventions. This setting is automatically inherited by restores and replicas, and cannot be modified afterwards.

Key Benefits:
- Simplifies migrations by aligning naming conventions and reducing complexity.
- Enhances compatibility with legacy systems that depend on case-insensitive table names.
- Minimizes support dependency, enabling faster and smoother onboarding.

Learn more

Unlock New Capabilities with Private Preview Features at Ignite 2025

We're excited to announce that you can now explore two powerful capabilities in early access - Reader Endpoint for seamless read scaling and Server Rename for greater flexibility in server management.

Scale reads effortlessly with Reader Endpoint (Private Preview)

The Reader Endpoint feature for Azure Database for MySQL is now available in private preview. Reader Endpoint provides a dedicated read-only endpoint for read replicas, enabling automatic connection-based load balancing of read-only traffic across multiple replicas. This simplifies application architecture by offering a single endpoint for read operations, improving scalability and fault tolerance. Azure Database for MySQL supports up to 10 read replicas per primary server. By routing read-only traffic through the reader endpoint, application teams can efficiently manage connections and optimize performance without handling individual replica endpoints. Reader endpoints continuously monitor the health of replicas and automatically exclude any replica that exceeds the configured replication lag threshold or becomes unavailable. To enroll in the preview, please submit your details using this form.

Limitations during private preview:
- Only performance-based routing is supported in this preview.
- Certain settings, such as the routing method and the option to attach new replicas to the reader endpoint, can only be configured at creation time.
- Only one reader endpoint can be created per replica group.
- Including the primary server as a fallback for read traffic when no replicas are available is not supported in this preview.

Get flexibility in server management with Server Rename (Private Preview)

We're excited to announce the Private Preview of Server Rename for Azure Database for MySQL. This feature lets you update the name of an existing MySQL server without recreating it, migrating data, or disrupting applications - making it easier to adopt clear, consistent naming. It provides a near-zero-downtime path to a new server hostname. To enroll in the preview, please submit your details using this form.

Limitations during private preview:
- Primary server with read replicas: Renaming a primary server that has read replicas keeps replication healthy. However, the SHOW SLAVE STATUS output on the replicas will still display the old primary server's name. This is a display inconsistency only and does not affect replication.
- Renaming is currently unsupported for servers using Customer Managed Key (CMK) encryption or Microsoft Entra authentication.
Real-World Success: Azure Database for MySQL Powers Resilient Applications at Scale

Factorial
Factorial, a leading HR software provider, uses Azure Database for MySQL alongside Azure Kubernetes Service to deliver secure, scalable HR solutions for thousands of businesses worldwide. By leveraging Azure Database for MySQL's reliability and seamless integration with cloud-native technologies, Factorial ensures high availability and rapid innovation for its customers. Learn more

YES (Youth Employment Service)
South Africa's largest youth employment initiative, YES, operates at national scale by leveraging Azure Database for MySQL to deliver a resilient, centralized platform for real-time job matching, learning management, and career services - connecting thousands of young people and employers, and helping nearly 45 percent of participants secure permanent roles within six months. Learn more

Nasdaq
At Ignite 2025, Nasdaq will showcase how it uses Azure Database for MySQL - alongside Azure Database for PostgreSQL and other Azure products - to power a secure, resilient architecture that safeguards confidential data while unlocking new agentic AI capabilities. Learn more

These examples demonstrate that Azure Database for MySQL is trusted by industry leaders to build resilient, scalable applications - empowering organizations to innovate and grow with confidence.

We Value Your Feedback

Azure Database for MySQL is built for scale, resilience, and performance - ready to support your most demanding workloads. With every update, we're focused on simplifying development, migration, and management so you can build with confidence. Explore the latest features and enhancements to see how Azure Database for MySQL meets your data needs today and in the future. We welcome your feedback and invite you to share your experiences or suggestions at AskAzureDBforMySQL@service.microsoft.com. Stay up to date by visiting What's new in Azure Database for MySQL, and follow us on YouTube | LinkedIn | X for ongoing updates. Thank you for choosing Azure Database for MySQL!

Azure Skilling at Microsoft Ignite 2025
The energy at Microsoft Ignite was unmistakable. Developers, architects, and technical decision-makers converged in San Francisco to explore the latest innovations in cloud technology, AI applications, and data platforms. Beyond the keynotes and product announcements was something even more valuable: an integrated skilling ecosystem designed to transform how you build with Azure. This year, Azure Skilling at Microsoft Ignite 2025 brought together distinct learning experiences, over 150 hands-on labs, and multiple pathways to industry-recognized credentials - all designed to help you master the skills that matter most in today's AI-driven cloud landscape.

Just Launched at Ignite

Microsoft Ignite 2025 offered an exceptional array of learning opportunities, each designed to meet developers anywhere on the skilling journey. Whether you joined us in person or on demand in the virtual experience, multiple touchpoints are available to deepen your Azure expertise. Ignite 2025 is in the books, but you can still engage with the latest Microsoft skilling opportunities, including:

The Azure Skills Challenge provides a gamified learning experience that lets you compete while completing task-based achievements across Azure's most critical technologies. These challenges aren't just about badges and bragging rights - they're carefully designed to help you advance technical skills and prepare for Microsoft role-based certifications. The competitive element adds urgency and motivation, turning learning into an engaging race against the clock and your peers.

For those seeking structured guidance, Plans on Learn offer curated sets of content designed to help you achieve specific learning outcomes. These carefully assembled learning journeys include built-in milestones, progress tracking, and optional email reminders to keep you on track. Each plan represents 12-15 hours of focused learning, taking you from concept to capability in areas like AI application development, data platform modernization, or infrastructure optimization.

The Microsoft Reactor Azure Skilling Series, running December 3-11, brings skilling to life through engaging video content, mixing regular programming with special Ignite-specific episodes. The series delivers technical readiness and programming guidance in a livestream format that's more digestible than traditional documentation. Whether you're catching episodes live with interactive Q&A or watching on demand later, you'll get world-class instruction that makes complex topics approachable.

Beyond Ignite: Your Continuous Learning Journey

Here's the critical insight that separates Ignite attendees who transform their careers from those who simply collect swag: the real learning begins after the event ends. Microsoft Ignite is your launchpad, not your destination. Every module you start, every lab you complete, and every challenge you tackle connects to a comprehensive learning ecosystem on Microsoft Learn that's available 24/7, 365 days a year. Think of Ignite as your intensive immersion experience - the moment when you gain context, build momentum, and identify the skills that will have the biggest impact on your work. What you do in the weeks and months following determines whether that momentum compounds into career-defining expertise or dissipates into business as usual. For those targeting career advancement through formal credentials, Microsoft Certifications, Applied Skills, and the AI Skills Navigator provide globally recognized validation of your expertise.
Applied Skills focus on scenario-based competencies, demonstrating that you can build and deploy solutions, not simply answer theoretical questions. Certifications cover role-based scenarios for developers, data engineers, AI engineers, and solution architects. The assessment experiences include performance-based testing in dedicated Azure tenants, where you complete real configuration and development tasks. And finally, the new AI Skills Navigator is an agentic learning space that brings together AI-powered skilling experiences and credentials from Microsoft, LinkedIn Learning, and GitHub in a single, unified experience.

Why This Matters: The Competitive Context

The cloud skills race is intensifying. While our competitors offer robust training and content, Microsoft's differentiation comes not from having more content - though our 1.4 million module completions last fiscal year and 35,000+ certifications awarded speak to scale - but from integration of services to orchestrate workflows. Only Microsoft offers a truly unified ecosystem where GitHub Copilot accelerates your development, Azure AI services power your applications, and Azure platform services deploy and scale your solutions - all backed by integrated skilling content that teaches you to maximize this connected experience. When you continue your learning journey after Ignite, you're not just accumulating technical knowledge. You're developing fluency in an integrated development environment that no competitor can replicate. You're learning to leverage AI-powered development tools, cloud-native architectures, and enterprise-grade security in ways that compound each other's value. This unified expertise is what transforms individual developers into force multipliers for their organizations.

Start Now, Build Momentum, Never Stop

Microsoft Ignite 2025 offered the chance to compress months of learning into days of intensive, hands-on experience, but you can still take part: the on-demand videos, the Global Ignite Skills Challenge, the GitHub repos for the /Ignite25 labs, the Reactor Azure Skilling Series, and the curated Plans on Learn provide multiple entry points regardless of your current skill level or preferred learning style. But remember: the developers who extract the most value from Ignite are those who treat the event as the beginning, not the culmination, of their learning journey. They join hackathons, contribute to GitHub repositories, and engage with the Azure community on Discord and technical forums. The question isn't whether you'll learn something valuable from Microsoft Ignite 2025; that's guaranteed. The question is whether you'll convert that learning into sustained momentum that compounds over months and years into career-defining expertise. The ecosystem is here. The content is ready. Your skilling journey doesn't end when Ignite does - it accelerates.

From Breakthroughs to Everyday Impact: Advanced Performance, Reliability, & User Experience in Teams
Written by: Jeff Chen, Catalin Ionut Fratila, Andrei Vieru, Ashish Rathore, Avinash Prasad, David Rosenthal, Fred Wu, David Zhao, Will Dixon and Kerry Perez Heffernan.

At Microsoft, we know that speed and reliability are essential for Teams users - especially when you're in the middle of an important meeting or collaborating with your team. When we rebuilt Teams in 2023, it marked a major leap forward in performance, reflecting our deep commitment to listening to user needs and transforming feedback into tangible results. Our journey hasn't stopped there. Every year, we push the boundaries of what Teams can do - introducing new performance enhancements, streamlined experiences, and thoughtful features. Since the redesigned chat and channels experience in 2024, we've built on that momentum with even more innovations in 2025. This blog outlines what we've achieved together this year.

Always Fast & Responsive: Making Every Interaction Fluid

Teams should feel fast and responsive, no matter your device or network. Thanks to your feedback and our engineering investments, we've made improvements across core Teams user interactions, including faster video loading, faster application launch, and faster switching to chats and channels across all Teams clients - Windows, Mac, and Mobile - at the 95th percentile. Tracking performance metrics at the 95th percentile signals whether improvements reach most users, including those with low-end devices and poor network conditions. In addition, we've systematically reduced layout shifts, flickers, and multi-pass rendering, using telemetry-driven prioritization to deliver a smoother, more delightful user experience. We've also evolved how Teams tracks the percentage of sessions that are fully responsive, which has resulted in major reductions in UI freezes and long delays. Together, these changes help Teams feel more consistently fast and reliable for everyone.

Reliability & Feedback: Listening and Improving Together

While we're proud of the progress so far, we know there's always more to do. Your feedback keeps us grounded and focused on what matters: making Teams faster, more stable, and more enjoyable for you. Delivering a reliable experience at the scale of Microsoft Teams requires more than just monitoring technical metrics - it demands a deep understanding of real-world customer feedback and the ability to act on it quickly and intelligently. We turn feedback from every channel - whether it's direct customer feedback, in-app reports, surveys, admin logs, or social media - into actionable insights, combining it with our reliability data to prioritize the improvements that matter most. This ongoing validation ensures our metrics reflect real-world experiences and drive meaningful performance gains.

Efficiency & Memory Management: Powerful Features, Light Footprint

We understand that not all Teams issues are created equal. For some, a driver update on a laptop might trigger unexpected audio or video problems. For others, memory usage is a daily concern. We're committed to working through these device-specific challenges so Teams works reliably for everyone, everywhere, on any device. Delivering new features shouldn't come at the cost of efficiency. That's why our approach ensures Teams remains lightweight and resource-friendly. Over the past years, we have made improvements in the following areas. Memory continues to be our top priority.
Building on the new Teams client's roughly 50% reduction in memory consumption compared to classic Teams, we have deepened our partnership with the WebView2 team and cut Windows idle memory by a further half (now at 30% of classic Teams). Let's unpack the details that make this possible.

Behind the Scenes: Delivering a Performant Client at Scale

Building a world-class collaboration platform like Microsoft Teams requires more than just feature innovation - it demands a disciplined approach to architecture, diagnostics, and customer partnership.

Architecture

Native Video Rendering: Fully embracing the hybrid desktop client architecture across the web stack and native media technology, we have fundamentally rebuilt the API surface across the web and native layers to simplify video rendering in Teams meetings. This reduced API calls over IPC by 40x when loading the meeting stage with a 7x7 video grid, resulting in a 10% reduction in video loading time at the 95th percentile. Furthermore, the new simplified design significantly improved video reliability and quality, with up to a 36% reduction in video freezes and rendering failures.

WebView2 Integration: Using WebView2 delivers a consistent rendering pipeline across Mac and Windows, accelerating feature rollout and enabling custom instrumentation for hard-to-debug edge cases. Teams performance also benefits greatly from integrations with WebView2 APIs. For example, the recent integration with the IDBIndex getAllRecords() method made chat and channel switching 10% faster, while setting the memory usage target level and emptying the working set improved Windows idle memory by 40%.

Data Fetching: Granular cache partitioning contributed a 10% improvement in chat switching speed, with preloading and background prefetching in testing for further gains. This builds on work we shared in an earlier post.

macOS Optimizations: We've invested in making our macOS application feel just as fast as a fully native app. To achieve this, we take advantage of several macOS-specific optimizations (for example, populating the dyld caches after installation and running gktool post-launch). These lead to 50% faster application launch times specific to Mac.

Mobile meeting battery consumption: As part of our targeted performance improvements for meetings, we optimized background processing, improved resource reuse through smarter caching, reduced millions of allocations, and delivered an efficient experience for our mobile users.

Diagnostics

Operating at global scale requires proactive detection and rapid remediation.

Root-causing unresponsiveness issues at scale: We know that freezes or hangs can be tough to diagnose, so we've improved how we collect and analyze data across platforms. By enhancing instrumentation in WebView2 and using better telemetry - like capturing native process data - we can more easily spot patterns and fix desktop or native issues. We also use efficient event tracing (ETW) to gather targeted insights, helping us quickly identify, fix, and prevent problems before they impact users.

Advanced Telemetry & Leak Detector Service: Automated anomaly detection and real-time diagnostics continuously monitor memory leaks, grouping similar issues and prioritizing fixes. This approach reduced major leak rates to less than 1%, improving stability across millions of users.
Customer Partnership

We co-design with enterprise customers to validate improvements in real-world environments and align engineering priorities with business needs. This collaboration ensures Teams delivers measurable value - speed, reliability, and efficiency - where it matters most. We have kept customer bandwidth consumption during updates in mind and have implemented smarter mechanisms such as Delivery Optimization (P2P downloads) and a distributed update schedule. An important aspect of all this extensive work and new functionality is for customers to make sure they are updating to the latest version of the Teams client. This not only ensures they are getting the latest security updates and improvements, but is also how we deliver all these significant performance enhancements. We recently released the Teams Client Health dashboard in the Teams Admin Center to help customers ensure their users are always on the latest versions.

Looking Forward

Our journey doesn't stop here. We're pushing for even higher responsiveness, further memory reductions, and greater efficiency in 2026. We'll continue to invest in platform, framework, and telemetry innovations to sustain and extend Teams' performance leadership. Thank you for sharing your stories, frustrations, and suggestions. Every customer interaction, post, and survey response helps us build a better Teams. We're on this journey together, and your experience is at the heart of every improvement we make.

Build Smarter with Azure HorizonDB
By: Maxim Lukiyanov, PhD, Principal PM Manager; Abe Omorogbe, Senior Product Manager; Shreya R. Aithal, Product Manager II; Swarathmika Kakivaya, Product Manager II

Today at Microsoft Ignite, we are announcing a new PostgreSQL database service: Azure HorizonDB. You can read the announcement here; in this blog you can learn more about HorizonDB's AI features and development tools. Azure HorizonDB is designed for the full spectrum of modern database needs - from quickly building new AI applications, to scaling enterprise workloads to unprecedented levels of performance and availability, to managing your databases efficiently and securely. To help with building new AI applications, we are introducing three features: DiskANN Advanced Filtering, built-in AI model management, and integration with Microsoft Foundry. To help with database management, we are introducing a set of new capabilities in the PostgreSQL extension for Visual Studio Code, as well as announcing General Availability of the extension. Let's dive into the AI features first.

DiskANN Advanced Filtering

We are excited to announce a new enhancement to DiskANN, Microsoft's state-of-the-art vector indexing algorithm: DiskANN Advanced Filtering. Advanced Filtering addresses a common problem in vector search: combining vector search with filtering. In real-world applications, where queries often include constraints like price ranges, ratings, or categories, traditional vector search approaches such as pgvector's HNSW rely on multi-step retrieval and post-filtering, which can make search extremely slow. DiskANN Advanced Filtering solves this by combining filter and search into one operation: while the graph of vectors is traversed during the vector search, each vector is also checked for a filter predicate match, ensuring that only the correct vectors are retrieved. Under the hood, it works in a three-step process: first creating a bitmap of relevant rows using indexes on attributes such as price or rating, then performing a filter-aware graph traversal against the bitmap, and finally validating and ordering the results for accuracy. This integrated approach delivers dramatically faster and more efficient filtered vector searches. Initial benchmarks show that enabling Advanced Filtering on DiskANN reduces query latency by up to 3x, depending on filter selectivity.
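To make the one-operation approach concrete, here is a hedged C# sketch that issues a single filtered vector query with Npgsql. The table, the 3-dimensional embeddings, and the DDL in the comments are assumptions for brevity, and the sketch presumes the pgvector and DiskANN index extensions are enabled on the server; the point is the SQL shape, with a WHERE predicate and a vector-distance ORDER BY in one statement.

```csharp
using System;
using Npgsql;

// Assumed schema, shown for context (run once on the server):
//   CREATE TABLE products (id bigint PRIMARY KEY, price numeric, embedding vector(3));
//   CREATE INDEX ON products (price);  -- attribute index used for the bitmap step
//   CREATE INDEX ON products USING diskann (embedding vector_cosine_ops);

await using var conn = new NpgsqlConnection(
    "Host=myserver.postgres.database.azure.com;Database=appdb;Username=app;Password=...");
await conn.OpenAsync();

// One statement: the price predicate is checked during graph traversal instead of
// post-filtering an oversized candidate set after the vector search completes.
await using var cmd = new NpgsqlCommand(@"
    SELECT id, price
    FROM products
    WHERE price < @maxPrice
    ORDER BY embedding <=> @query::vector
    LIMIT 10;", conn);
cmd.Parameters.AddWithValue("maxPrice", 50m);
cmd.Parameters.AddWithValue("query", "[0.12, 0.05, 0.91]"); // query embedding in pgvector text form

await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
    Console.WriteLine($"id={reader.GetInt64(0)}, price={reader.GetDecimal(1)}");
```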
AI Model Management

Another exciting feature of HorizonDB is AI Model Management. This feature automates Microsoft Foundry model provisioning during database deployment and instantly activates database semantic operators, eliminating dozens of setup and configuration steps and simplifying the development of new AI apps and agents. AI Model Management elevates the experience of using semantic operators within PostgreSQL. When activated, it provisions key models for embedding, semantic ranking, and generation via Foundry; installs and configures the azure_ai extension to enable the operators; establishes secure connections; and integrates model management, monitoring, and cost management within HorizonDB. What would otherwise require significant manual effort and context-switching between Foundry and PostgreSQL for configuration, management, and monitoring is now possible with just a few clicks, all without leaving the PostgreSQL environment. You can also continue to bring your own Foundry models, with a simplified and enhanced process for registering your custom model endpoints in the azure_ai extension.

Microsoft Foundry Integration

Microsoft Foundry offers a comprehensive technology stack for building AI apps and agents. But building modern agents capable of reasoning, acting, and collaborating is impossible without a connection to data. To facilitate that connection, we are excited to announce a new PostgreSQL connector in Microsoft Foundry. The connector is designed around a new standard in data connectivity, the Model Context Protocol (MCP). It enables Foundry agents to interact with HorizonDB securely and intelligently, using natural language instead of SQL, and leveraging Microsoft Entra ID to ensure a secure connection. In addition to HorizonDB, this connector also supports Azure Database for PostgreSQL (ADP). This integration allows Foundry agents to perform tasks like:

- Exploring database schemas
- Retrieving records and insights
- Performing analytical queries
- Executing vector similarity searches for semantic search use cases

All through natural language, without compromising enterprise security or compliance. To get started with the Foundry integration, follow these setup steps to deploy your own HorizonDB (requires participation in the Private Preview) or ADP instance and connect it to Foundry in just a few steps.

PostgreSQL extension for VS Code is Generally Available

We're excited to announce that the PostgreSQL extension for Visual Studio Code is now Generally Available. The extension has garnered significant popularity within the PostgreSQL community since its preview in May 2025, reaching more than 200K installs. It is the easiest way to connect to a PostgreSQL database from your favorite editor, manage your databases, and take advantage of built-in AI capabilities without ever leaving VS Code. The extension works with any PostgreSQL database, whether it's on-premises or in the cloud, and also supports unique features of Azure HorizonDB and Azure Database for PostgreSQL (ADP).

One of the key new capabilities is Metrics Intelligence, which uses Copilot and real-time telemetry from HorizonDB or ADP to help you diagnose and fix performance issues in seconds. Instead of digging through logs and query plans, you can open the Performance Dashboard, see a CPU spike, and ask Copilot to investigate. The extension sends a rich prompt that tells Copilot to analyze live metrics, identify the root cause, and propose an actionable fix. For example, Copilot might find a full table scan on a large table, recommend a composite index on the filter columns, create that index, and confirm the query plan now uses it. The result is dramatic: you can investigate and resolve the CPU spike in seconds, with no manual scripting or guesswork, and with no prior PostgreSQL expertise required.

The extension also makes it easier to work with graph data. HorizonDB and ADP support the open-source graph extension Apache AGE, which turns these services into fully managed graph databases. You can run graph queries against HorizonDB and immediately visualize the results as an interactive graph inside VS Code. This helps you understand relationships in your data faster, whether you're exploring customer journeys, network topologies, or knowledge graphs - all without switching tools.

In Conclusion

Azure HorizonDB brings together everything teams need to build, run, and manage modern, AI-powered applications on PostgreSQL. With DiskANN Advanced Filtering, you can deliver low-latency, filtered vector search at scale.
With built-in AI Model Management and Microsoft Foundry integration, you can provision models, wire up semantic operators, and connect agents to your data with far fewer steps and far less complexity. And with the PostgreSQL extension for Visual Studio Code, you get an intuitive, AI-assisted experience for performance tuning and graph visualization, right inside the tools you already use. HorizonDB is now available in private preview. If you're interested in building AI apps and agents on a fully managed, PostgreSQL-compatible service with built-in AI and rich developer tooling, sign up for the Private Preview: https://aka.ms/PreviewHorizonDB

Building brighter futures: How YES tackles youth unemployment with Azure Database for MySQL
YES leverages Azure Database for MySQL to power South Africa's largest youth employment initiative, delivering scalable, reliable systems that connect thousands of young people to jobs and learning opportunities.

Exciting things on the horizon for PostgreSQL fans @ Ignite 2025
If you're passionate about PostgreSQL or just curious about what's new, you'll want to join us at Microsoft Ignite 2025. We have a packed lineup, including sessions exploring cutting-edge features and exclusive giveaways at the PostgreSQL on Azure booth. Haven't registered yet? Now's the time - sign up for Microsoft Ignite and start building your schedule. Below are the must-see PostgreSQL on Azure activities, with highlights of what you'll learn at each. Add these to your agenda today. Sessions can fill up fast!

Theater sessions: get a first look, fast

I know from experience that attention spans can start to wane after hours-long keynotes, content-rich sessions, and conference socializing. Luckily, we have a couple of theater sessions that offer snackable but substantial information in less time than it will take to grab lunch. And they're located conveniently on the main conference floor.

- PostgreSQL on Azure: Your launchpad for intelligent apps and agents (THR705) - See how we're making PostgreSQL AI-aware for developers to drive app and agent innovation. Includes a demo of vector similarity search, semantic operators baked into Postgres, and more!
- Simplifying scale-out of PostgreSQL for performant multi-tenant apps (THR706) - Discover a smarter, simpler way to scale PostgreSQL using the new Elastic Clusters feature. If your app or service is growing fast (or you want it to!), add this session to learn how Azure makes it easier to scale Postgres and keep it reliable.

These talks are a great way to sample what's new and decide where to dive deeper. Plus, they're fun and demo-heavy, and who doesn't love a good demo?

Breakout sessions: a deep dive into Postgres innovations

Led by Azure product leaders and executives from organizations driving innovation backed by PostgreSQL, these breakout sessions will dive into the coolest new capabilities and real-world use cases. If you want rich, technical content and more live demos, these are for you.

- Build mission-critical apps that scale with PostgreSQL on Azure (BRK127) - Get a closer look at the next generation of PostgreSQL on Azure. Add this session if you're curious about how we're taking Postgres to the next level to support your mission-critical AI workloads.
- Modern data, modern apps: Innovation with Microsoft Databases (BRK134) - Gain insider knowledge on the latest innovations across open-source, SQL, and NoSQL databases, and understand how Microsoft's integrated database portfolio supports next-gen innovation.
- Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (BRK137) - Discover how a Fortune 100 company merges trust with cutting-edge AI leveraging Azure's AI-enriched and enterprise-ready solutions, including Azure Database for PostgreSQL, Azure Database for MySQL, Azure AI Foundry, Azure Kubernetes Service (AKS), and API Management.
- AI-assisted migration: The path to powerful performance on PostgreSQL (BRK123) - A before-and-after migration journey from Oracle to Azure Database for PostgreSQL. See how the new AI-assisted migration experience delivers conversion in a few clicks with minimal downtime.
- The blueprint for intelligent AI agents backed by PostgreSQL (BRK130) - If you're into AI development, this session will spark ideas on bridging the gap between raw data and AI reasoning. You'll leave with practical tips to turbocharge your AI agents with PostgreSQL.

Each breakout session is 45 minutes with live demos and Q&A, so you'll get plenty of detail and interaction with Postgres experts.
Hands-on lab: experience coding with Azure superpowers

Do you learn best by doing? Then our guided workshop, Build advanced AI agents with PostgreSQL (Lab515), is for you. In each 75-minute session, you'll create a fully functional AI-powered application backed by PostgreSQL on Azure, with step-by-step guidance and expert insight on the latest innovations enabling intelligent app development. All the tools and instructions you'll need are provided. Labs have limited capacity, so be sure to reserve your seat for any of the four labs in advance. This lab is a great way to understand how all the pieces come together on Azure. And you'll gain practical skills you can apply to your own projects, whether it's customer support bots, intelligent search in your app, or any scenario where PostgreSQL + AI collide.

Expert meet-up booth: meet the team, grab some swag

If you still want more Postgres (or a little Postgres souvenir), stop by the PostgreSQL on Azure Expert Meetup booth in the Ignite Hub. This will be our home base on the show floor, where you can:

- Meet the team: I'll be there in person, along with engineers, program managers, cloud solution architects, and advocates from our team. Whether you have a burning technical question, want to share feedback, or need guidance for your specific use case, come chat with us.
- Get a quick demo re-run: Sometimes a 5-minute demo is worth a thousand words, especially after you've sat through all those words already in a keynote. The booth will have a monitor and a live environment so we can walk you through select use cases if you have questions - no appointment needed.
- Swag and giveaways: Ah yes, the goodies! We know conference swag is part of the fun, so we've got some special PostgreSQL-themed giveaways at the booth. I won't spoil all the surprises, but rumor has it there are some limited-edition items up for grabs.
- Network with peers: The expert meet-up area is also a magnet for PostgreSQL enthusiasts. You might bump into other attendees at the booth who are tackling similar projects or challenges. Ignite is about community as much as content, so come by and spark up a conversation.

Meet you there?

Ignite is our largest event of the year. We love sharing what we've been working on and, most of all, hearing from you, the community. So, on behalf of the Azure for PostgreSQL team, thank you for your interest and support. We can't wait to show you what's new and to help you continue to succeed with Postgres. See you in San Francisco!

Postgres as a Distributed Cache Unlocks Speed and Simplicity for Modern .NET Workloads
In the world of high-performance, modern software engineering, developers often face a tough tradeoff: how to achieve lightning-fast data retrieval without adding complexity, sacrificing reliability, or getting locked into specialized, external data caching products or platforms. What if you could harness the power and flexibility of your existing Postgres database to solve this challenge? Enter the Microsoft.Extensions.Caching.Postgres library, a new nuget.org package that brings distributed caching to Postgres, unlocking speed, simplicity, and seamless integration for modern .NET workloads. In this article, we take a closer look at the Postgres caching store, which introduces a new option for .NET developers planning to implement a distributed cache, such as HybridCache, paired with a Postgres database providing the distributed backplane.

One data platform for multiple workloads

Postgres' reputation for reliability, extensibility, and standards compliance has long been respected, with Postgres databases driving some of today's largest and most popular platforms, and developers, data engineers, and entrepreneurs alike increasingly rallying to apply these benefits. One of the most compelling aspects of Postgres is its adaptability: it's a data platform that can simultaneously handle everything from transactional workloads to analytical queries, JSON documents to geospatial data, and even time-series and vectorized AI search. In an era of specialized services, Postgres is proving that one platform can do it all, and do it well. Intrepid engineers have also discovered that Postgres is often just as proficient at handling workloads traditionally supported by other, very different technology solutions, such as lake-house, pub-sub, message queues, job schedulers, and session store caches. These roles are all now being powered by Postgres databases, while Postgres simultaneously continues to deliver the same scalable, battle-tested, and mission-critical ACID-compliant core relational database operations we've all come to expect.

When speed matters most

Database-backed cache stores are by no means a new concept; the first version of a database cache library for .NET (Microsoft.Extensions.Caching.SqlServer) was made available to developers exploring the nuget.org ecosystem in June 2016. This library included several impressive features, such as expiration policies, serialization, and dependency injection, making it ideal for multi-instance applications requiring shared cache functionality. It was especially useful in environments where Redis or other cache providers were not available. The convenience of leveraging a transactional database as a distributed cache comes with some tradeoffs, especially when compared against services such as Redis or Memcached; in a word: speed. All the features that make your data durable, reliable, and consistent require precious additional clock cycles and I/O operations, and this overhead results in a performance cost compared to alternative memory stores and caching systems. What if it were possible to keep all those familiar and convenient interfaces for connecting to your database, while configuring specific tables to throw off the burden of crash consistency and replication logging? What if, for only the tables we select, we could trade this durability for pure speed? Enter Postgres' UNLOGGED tables.

Postgres' adaptable performance

Another compelling aspect of Postgres databases is the ability to significantly speed up write performance by bypassing the Write-Ahead Log (WAL). The WAL is designed to ensure that data is crash-consistent (and replicable): writing to your database is a transparent two-step process in which your data is written to your database tables and the changes are also committed to a separate file to guarantee the data's persistence. In some circumstances, the performance gained by skipping that second step can be worth the sacrifice in crash consistency, especially for short-lived, temporary data, as in a cache store. The UNLOGGED setting is scoped to individual tables, which allows logged and unlogged tables to operate side by side within the same database instance. The net result: Postgres can provide incredibly performant response times when used as a distributed cache, rivaling the performance of other popular cache stores, while also providing the simplicity, familiarity, and consistency that the Postgres engine naturally offers.
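Here is a minimal sketch of the idea. The table and columns are illustrative (not the schema the library actually creates); the point is the UNLOGGED keyword, which is what trades WAL durability for write speed.

```csharp
using Npgsql;

// Illustrative cache table; the table created by Microsoft.Extensions.Caching.Postgres may differ.
const string ddl = @"
    CREATE UNLOGGED TABLE IF NOT EXISTS app_cache (
        key        text PRIMARY KEY,
        value      bytea NOT NULL,
        expires_at timestamptz NOT NULL
    );";

await using var conn = new NpgsqlConnection("Host=myserver;Database=appdb;Username=app;Password=...");
await conn.OpenAsync();
await using (var cmd = new NpgsqlCommand(ddl, conn))
{
    await cmd.ExecuteNonQueryAsync();
}

// An existing table can be switched either way:
//   ALTER TABLE app_cache SET UNLOGGED;  -- skip the WAL: faster writes, not crash-safe
//   ALTER TABLE app_cache SET LOGGED;    -- restore durability and replication
```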
HybridCache for your .NET solutions

It was this capability,* combined with inspiration from the SQL Server library, that led to the creation of the Microsoft.Extensions.Caching.Postgres package on nuget.org. As a longtime .NET developer, I have personally witnessed the incredible evolution of the .NET platform and the amazing growth, enhancements, and improvements to the languages, the tooling, the runtimes, and the incredible people behind each of these contributions. The recent addition of HybridCache is especially exciting to consider incorporating into your .NET solutions because it dramatically simplifies the steps required to add caching to your project, while linking the in-memory cache with a second-level tiered cache service. This seamless integration gives your application the best of both worlds: blazing-fast in-memory retrieval paired with a resilient, similarly performant backplane that acts as a fail-safe when an application instance blinks, scales up or out, and so on.

Don't just take my word for it; let's look at some benchmarks comparing a Redis cache and a Postgres database. The tests comprise synchronous and async read/write operations across three payload sizes (128, 1024, and 10240 bytes), containing both single and concurrent messages, at fixed and random positions. The tests are further divided into two types of cache expiration windows: absolute (non-sliding) and sliding (relative) windows. Consider the output from the benchmark suite, keeping in mind that the results are measured in microseconds (1,000 microseconds equals 1 millisecond). What do these results reveal? In certain respects, there aren't many surprises. Bespoke memory-based key-value cache systems like Redis continue to outperform relational databases in terms of pure speed and low latency. What is really exciting to see is that Postgres comes very close to Redis performance for the more intensive operations!

Finding the right fit for your solution

I'm excited to make this Postgres package available to everyone considering distributed caching in their solution designs. The combination of HybridCache paired with your choice of backplane allows you to select the technologies and tools best suited for your solution, as the sketch below illustrates.
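To show how little wiring this takes, here is a minimal sketch pairing HybridCache with a Postgres backplane. AddHybridCache comes from the Microsoft.Extensions.Caching.Hybrid package; the AddDistributedPostgresCache registration name and its options are my assumptions here, so check the Microsoft.Extensions.Caching.Postgres README for the exact API.

```csharp
// Requires Microsoft.Extensions.Hosting, Microsoft.Extensions.Caching.Hybrid,
// and Microsoft.Extensions.Caching.Postgres (ImplicitUsings enabled).
using Microsoft.Extensions.Caching.Hybrid;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Second-level (distributed) cache backed by Postgres. The method and option
// names are assumptions; consult the package documentation for the exact API.
builder.Services.AddDistributedPostgresCache(options =>
{
    options.ConnectionString = builder.Configuration.GetConnectionString("CacheDb");
});

// HybridCache layers a fast in-process L1 cache over whatever IDistributedCache
// is registered - Postgres in this case - with built-in stampede protection.
builder.Services.AddHybridCache();

var host = builder.Build();

// Typical usage: return the cached value, or run the factory on a miss and
// populate both the in-memory tier and the Postgres backplane.
var cache = host.Services.GetRequiredService<HybridCache>();
string forecast = await cache.GetOrCreateAsync(
    "forecast:seattle",
    async ct => await Task.FromResult("rainy")); // stand-in for a real data fetch

Console.WriteLine(forecast);
```

From there, every GetOrCreateAsync call transparently checks the in-process tier first and falls back to the distributed store before running the factory, so swapping backplanes is a one-line registration change.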
Our GitHub repo also contains a variety of sample applications that demonstrate how to configure and use HybridCache together with the Postgres distributed cache library in a console app as well as an Aspire-based sample Web API. I encourage you to explore these examples and share your thoughts and ideas. I look forward to any feedback you may have about the Microsoft.Extensions.Caching.Postgres package. Keep advancing, keep improving, and keep contributing to be a "builder" and an active part of our incredible community!

* This package extension is highly configurable, and you can choose whether to enable or disable bypassing the WAL for your cache table, along with several other options that can be adjusted for your particular use case.