From Oracle to Azure: How Quadrant Technologies accelerates migrations
This blog was authored by Manikyam Thukkapuram, Director, Alliances & Engineering at Quadrant Technologies; and Thiwagar Bhalaji, Migration Engineer and DevOps Architect at Quadrant Technologies

Over the past 20+ years, Quadrant Technologies has accelerated database modernization for hundreds of organizations. As momentum toward the cloud continues to grow, a major focus for our business has been migrating on-premises Oracle databases to Azure. We’ve found that landing customers in Azure Database for PostgreSQL has been the best option in terms of both cost savings and efficiency, and Azure Migrate is by far the best way to get them there. With Azure Migrate, we’re able to compress migrations that traditionally took months into weeks.

As a Microsoft solutions partner, we help customers migrate to Azure and develop Azure-based solutions. We’re known as “the great modernization specialists” because many of our customers come to us with complex legacy footprints, outdated infrastructure, and monolithic applications that can be challenging to move to the cloud. We excel at untangling these complex environments, and with our Q-Migrator tool, which is a wrapper around Azure Migrate, we’re able to automate and accelerate these kinds of migrations.

Manual steps slowed down timelines

In general, each migration we lead includes a discovery phase, a compatibility assessment, and the migration execution. In discovery, we identify every server, database, and application in a customer’s environment and map their interactions. Next, we assess each asset’s readiness for Azure and plan for optimal cloud configurations. Finally, we bring the plan to life, integrating applications, moving workloads, and validating performance.

Before adopting Azure Migrate, each of these phases involved manual tasks for our team. During discovery we manually collected inventory and wrote custom scripts to track server relationships and database dependencies. Our engineers also had to dig through configuration files and use third-party assessment tools for aspects like VM utilization and Oracle schemas. When we mapped compatibility, we worked from static data to predict cost estimates and sizing rather than operating from real-time telemetry. By the time we reached the migration phase, fragmented tooling and inconsistent assessments made it difficult to maintain accuracy and efficiency. Hidden dependencies sometimes surfaced late in the process, causing unexpected rework and delays.

Streamlining migrations with Azure Migrate

To automate and streamline these manual tasks, we developed Q-Migrator, our in-house framework built around Azure Migrate. Now we can offer clients an efficient, agentless approach to discovery, assessment, and migration. As part of our on-premises database migration initiatives, we rely on Azure Migrate to seamlessly migrate a wide range of structured databases (including MySQL, Microsoft SQL Server, PostgreSQL, and Oracle) from on-premises environments to Azure IaaS and PaaS. For instance, for an on-premises PostgreSQL migration, we begin by setting up an Azure Migrate appliance in the client’s environment to automatically discover servers, databases, and applications. That generates a complete inventory and dependency map that identifies every relationship between servers and databases. From there, we run an assessment through Azure Migrate to check compatibility, identify blockers, and right-size target environments for Azure Database for PostgreSQL.
By integrating Azure Database Migration Service (DMS), we can replicate data continuously until cutover, ensuring near-zero downtime. In addition, Azure DMS provides robust telemetry and analytics for deep visibility into every stage of the process. This unified and automated workflow not only replaces manual steps but also increases reliability and accelerates delivery. Teams benefit from a consolidated dashboard for planning, execution, and performance tracking, driving efficiency throughout the migration lifecycle.

75% faster deployment, 60% cost savings

Since implementing Azure Migrate, which now facilitates discovery and assessment for on-premises PostgreSQL workloads, we’ve accelerated deployment by 75% compared to traditional migration methods. We’ve also reduced costs for our clients by up to 60%. Automated discovery alone shortens that phase by nearly 40%, and dependency mapping now takes a fraction of the effort. With the integrated dashboard in Azure Migrate we can also track progress across discovery, assessment, and migration in one place, eliminating the need for multiple third-party tools. These efficiencies allow us to deliver complex migrations on tighter timelines without sacrificing quality or reliability.

Rounding out the modernization journey with AKS

As “the great modernization specialists,” we’re often asked which database is best for landing Oracle workloads in the cloud. From our experience, Azure Database for PostgreSQL is ideal for enterprises seeking cost-efficient and secure PostgreSQL deployments. Its managed services reduce operational overhead while maintaining high availability, compliance, and scalability. Plus, seamless integration with Azure AI services allows us to innovate for clients and keep them ahead of the curve.

We also recognize that database migration is only the first step for many clients: modernizing the application layer delivers even greater scalability, security, and manageability. When clients come to Quadrant for a broader modernization strategy, we often use Azure Kubernetes Service (AKS) to containerize their applications and break monoliths into microservices. AKS delivers a cloud-native architecture alongside database modernization. This integration supports DevOps practices, simplifies deployments, and allows customers to take full advantage of elastic cloud infrastructure.

More innovation to come

Overall, Azure Migrate together with Azure Database for PostgreSQL, Azure Database for MySQL, and Azure SQL Database has redefined how we deliver database modernization, and our close collaboration with Microsoft has made it possible. By engaging early with Microsoft, we can validate migration architectures and gain insights into best practices for high-performance, secure cloud deployments. Access to Microsoft experts helps us fine-tune our designs, optimize performance, and resolve complex issues quickly. We’re also investing in AI-driven automation using Azure OpenAI in Foundry Models to analyze migration data, optimize queries, and predict performance outcomes. These innovations allow us to deliver more intelligent, adaptive solutions tailored to each customer’s unique environment.

PostgreSQL: Migrating Large Objects (LOBs) with Parallelism and PIPES
Why This Approach?

Migrating large objects (LOBs) between PostgreSQL servers can be challenging due to size, performance, and complexity. Traditional methods often involve exporting to files and re-importing, which adds overhead and slows down the process. There can also be cloud restrictions that limit the use of other tools, such as:

- pgcopydb: Welcome to pgcopydb’s documentation! — pgcopydb 0.17~dev documentation
- Other techniques like using RLS: SoporteDBA: PostgreSQL pg_dump filtering data by using Row Level Security (RLS)

This solution introduces a parallelized migration script that:

- Reads directly from pg_largeobject.
- Splits work across multiple processes using the MOD() function on loid.
- Streams data via pipes, with no intermediate files.
- Scales easily by adjusting the parallel degree.
- Possible feature: commit-size logic to support resuming by excluding already migrated LOBs.

Key Benefits

- Direct streaming: no temporary files, reducing disk I/O.
- Parallel execution: faster migration by leveraging multiple processes.
- Simple setup: just two helper scripts for source and destination connections.

Reference

Scripts reference: Moving data with PostgreSQL COPY and \COPY commands | Microsoft Community Hub

Source Connection

```bash
# pgsource.sh -- connects to the source database
# PLEASE REVIEW CAREFULLY THE CONNECTION STRINGS TO CONNECT TO THE SOURCE SERVER
PGPASSWORD=<password> psql -t -h <sourceservername>.postgres.database.azure.com -U <username> <database>
```

Grant execute permissions:

```bash
chmod +x pgsource.sh
```

Destination Connection

```bash
# pgdestination.sh -- connects to the target database
# PLEASE REVIEW CAREFULLY THE CONNECTION STRINGS TO CONNECT TO THE DESTINATION SERVER
PGPASSWORD=<password> psql -t -h <destinationservername>.postgres.database.azure.com -U <username> <database>
```

```bash
chmod +x pgdestination.sh
```

Parallel Migration Script

```bash
# transferlobparallel.sh -- performs the parallel migration of LOBs
echo > nohup.out
echo 'ParallelDegree: '$1 'DateTime: '`date +"%Y%m%d%H%M%S"`

# Check if there are no large objects to migrate
count=$(echo "select count(1) from pg_largeobject;"|./pgsource.sh)
count=$(echo "$count" | xargs)
if [ "$count" -eq 0 ]; then
  echo "There are no large objects to migrate. Stopping the script."
  exit 0
fi

par=$1
for i in $(seq 1 $1); do
nohup /bin/bash <<EOF &
echo "\copy (select data from (select 0 as rowsort, 'begin;' as data union select 1, concat('SELECT pg_catalog.lo_create(', lo.loid, ');SELECT pg_catalog.lo_open(', lo.loid, ', 131072);SELECT pg_catalog.lowrite(0,''', string_agg(lo.data, '' ORDER BY pageno), ''');SELECT pg_catalog.lo_close(0);') from pg_largeobject lo where mod(lo.loid::BIGINT,$par)=$i-1 group by lo.loid union select 2, 'commit;') order by rowsort) to STDOUT;"|./pgsource.sh|sed 's/\\\\\\\/\\\\/g'|./pgdestination.sh;
echo "Execution thread $i finished. DateTime: ";date +"%Y%m%d%H%M%S";
EOF
done
tail -f nohup.out|grep Execution
```

```bash
chmod +x transferlobparallel.sh
```

NOTE: Pay attention during script execution. The script never exits on its own, because its last command is "tail -f nohup.out". Monitor the output until all threads report finished, or check from a different session whether the psql processes are still working.

How to Run

```bash
./transferlobparallel.sh <ParallelDegree>
```

Example:

```bash
./transferlobparallel.sh 3
```

Runs 3 parallel processes to migrate LOBs directly to the destination server. A quick way to validate the result is sketched below.
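After the threads complete, it is worth sanity-checking that everything arrived. A minimal sketch, run through both helper scripts and compared by eye (the byte-total column is an illustrative extra check, not part of the original script):

```sql
-- Run via: echo "<this query>" | ./pgsource.sh
-- and:     echo "<this query>" | ./pgdestination.sh
-- Compare LOB count and total payload size on both servers
SELECT count(DISTINCT loid)    AS lob_count,
       sum(octet_length(data)) AS total_bytes
FROM   pg_largeobject;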
Performance Results

Basic metrics, measured with a client Linux VM (with accelerated networking) co-located in the same region/availability zone as the source and target Azure Database for PostgreSQL flexible servers; both servers and the Linux client ran Standard 4-vCPU SKUs with no other special configuration:

- 16 threads: ~11 seconds for 500 MB.
- CPU usage: ~15% at 16 threads.
- Estimated migration time for 80 GB: ~30 minutes.

Key Takeaways

This approach dramatically reduces migration time and complexity. It combines PostgreSQL’s pg_largeobject with parallelism and streaming, requires no intermediate storage, and needs only the psql client as required client software.

Disclaimer

The script is provided as is. Please review it carefully when running/testing. It is just a starting point that shows how to migrate large objects in a parallelized way without intermediate storage; it can also be implemented or improved using the other methods mentioned above.

SubgenAI makes AI practical, scalable, and sustainable with Azure Database for PostgreSQL
Authors: Abe Omorogbe, Senior Program Manager at Microsoft and Julia Schröder Langhaeuser, VP of Product Serenity Star at SubgenAI

AI agents are thriving in pilots and prototypes. However, scaling them across organizations is more difficult. A recent MIT report shows that 95 percent of projects fail to reach production. Long development cycles, lack of observability, and compliance hurdles leave enterprises struggling to deliver production-ready agents.

SubgenAI, a European generative AI company that focuses on democratizing AI for businesses and governments, saw an opportunity to change this. Its flagship platform, Serenity Star, transforms AI agent development from a code-heavy, fragmented process into a streamlined, no-code experience. Built on Microsoft Azure Database for PostgreSQL, Semantic Kernel, and Microsoft Foundry, Serenity Star empowers organizations to deploy production-grade AI agents in minutes, not months.

SubgenAI’s mission is to make generative AI accessible, scalable, and secure for every organization. Whether you're a startup or a multinational, Serenity Star offers the tools to build intelligent agents tailored to your business logic, with full control over data and deployment.

“Many things must happen around it in the coming years. Serenity Star is designed to solve problems like data control, compliance, and decision ethics—so companies can unleash the full potential of generative AI without compromising trust or profitability” - Lorenzo Serratosa

Simplifying complex AI agent development

Technical and operational challenges are inherent in enterprise-wide AI agent deployments. Examples include time-consuming iteration cycles, lack of observability and cost control, security concerns, and data sovereignty requirements. Serenity Star addresses these pain points by handling the entire AI agent lifecycle while providing enterprise-grade security and compliance features. Users can focus on defining their agent's purpose and behavior rather than wrestling with technical implementation details.

Its framework focuses on four essentials for AI agents: the brain (underlying model), knowledge (accessible information), behavior (programmed responses), and tools (external system integrations). This framework directly influenced the technology stack choices for Serenity Star, with Azure Database for PostgreSQL powering knowledge retrieval and Semantic Kernel enabling flexible model orchestration.

Real-world architecture in action

When a user query comes in, Serenity Star uses the vector capabilities of Azure Database for PostgreSQL to retrieve the most relevant knowledge. That context, combined with the user’s input, forms a complete prompt. Semantic Kernel then routes the request to the right large language model, ensuring the agent delivers accurate and context-aware responses. Serenity Star’s native connectors to platforms such as Microsoft Teams, WhatsApp, and Google Tag Manager are also part of this architecture, delivering answers directly in the collaboration and communication tools enterprises already use every day.

Figure 1: Serenity Star Architecture

This routing and orchestration architecture applies to both the multi-tenant SaaS deployments and the dedicated customer instances offered by Serenity Star. Azure Database for PostgreSQL provides native Row-Level Security (RLS) capabilities, a key advantage for securely managing multi-tenant environments; a minimal sketch of the idea follows below.
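To illustrate the RLS pattern in general terms (a minimal sketch; the table, column, and setting names are hypothetical, not SubgenAI’s actual schema):

```sql
-- Hypothetical tenant-scoped table shared by all tenants
CREATE TABLE agent_knowledge (
    id        bigserial PRIMARY KEY,
    tenant_id text NOT NULL,
    content   text
);

ALTER TABLE agent_knowledge ENABLE ROW LEVEL SECURITY;

-- Each session declares its tenant, e.g. SET app.tenant_id = 'contoso';
-- rows belonging to other tenants become invisible to that session
CREATE POLICY tenant_isolation ON agent_knowledge
    USING (tenant_id = current_setting('app.tenant_id'));
```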
Multi-tenant deployments allow organizations to get started quickly with lower overhead, while dedicated instances meet the needs of enterprises with strict compliance and data sovereignty requirements.

Optimizing for scale

The same architecture that powers retrieval, routing, and multi-channel delivery also provides a foundation for performance at scale. As adoption grows, the team continuously monitors query volume, response times, and resource efficiency across both multi-tenant and dedicated environments. To stay ahead of demand, SubgenAI actively experiments with new Azure Database for PostgreSQL features such as DiskANN for faster vector search. These optimizations keep latency low even as more users and connectors are added. The result is a platform that maintains sub-60-second response times for 99 percent of chart generations, regardless of deployment model or integration point.

With this systematic approach to scaling, organizations can deploy fully functional AI agents that are connected to their preferred communication platforms in just 15 minutes instead of hours. For enterprises that have struggled with failed AI projects, Serenity Star offers not only a secure and compliant solution but also one proven to grow with their needs.

Why Azure Database for PostgreSQL is a cornerstone

The knowledge component of AI agents relies heavily on retrieval-augmented generation (RAG) systems that perform similarity searches against embedded content. This requires a database capable of handling efficient vector search while maintaining enterprise-grade reliability and security.

SubgenAI evaluated multiple vector database options. However, Azure Database for PostgreSQL with pgvector emerged as the clear winner, for several compelling reasons. One is its mature technology, which provides immediate credibility with enterprise customers. Two, the ability to scale GenAI use cases with features like DiskANN for accurate and scalable vector search. Three, the flexibility and appeal of using an open-source database with a vibrant and fast-moving community. As CPO Leandro Harillo explains: “When we tell them their data runs on Azure Database for PostgreSQL, it’s a relief. It's a well-known technology versus other options that were born with this new AI revolution.”

As an open-source relational database management system, PostgreSQL offers extensibility, and Azure Database for PostgreSQL adds seamless integration with Microsoft’s enterprise ecosystem. It has a trusted reputation that appeals to organizations with strict data sovereignty and compliance requirements, such as those in healthcare and insurance where reliability and governance are non-negotiable.

The integration with Azure's broader ecosystem also simplified implementation. With Serenity Star built entirely on Azure infrastructure, Azure Database for PostgreSQL provided seamless connectivity and consistent performance characteristics. The result is the fast response times necessary for real-time agent interactions, along with the reliability demanded by enterprise customers. The retrieval step at the heart of this design looks roughly like the sketch below.
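A minimal sketch of that similarity-search step with pgvector (the schema is illustrative, not SubgenAI’s; $1 stands for the embedding of the user’s query, produced by whatever embedding model the agent uses):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Embedded knowledge chunks the agent can draw on
CREATE TABLE knowledge_chunks (
    id        bigserial PRIMARY KEY,
    content   text,
    embedding vector(1536)
);

-- Retrieve the most relevant chunks to ground the prompt
-- (<=> is pgvector's cosine-distance operator)
SELECT content
FROM   knowledge_chunks
ORDER  BY embedding <=> $1::vector
LIMIT  5;
```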
Semantic Kernel: Enabling model flexibility at scale

Enterprise AI success requires the ability to experiment with different models and adapt quickly as technology evolves. Semantic Kernel makes this possible, supporting over 300 LLMs and embedding models through a unified interface. With Serenity Star, organizations can make genuine choices about their AI implementations without vendor lock-in. Companies can use embedding models from OpenAI through Azure deployments, ensuring their information remains in their own infrastructure while accessing cutting-edge capabilities. If business requirements change or new models emerge, switching becomes a configuration change rather than a development project.

Semantic Kernel's comprehensive connector ecosystem also accelerated SubgenAI's own development process. Interfaces for different vector databases enabled rapid prototyping and comparison during the evaluation phase. “Semantic Kernel helped us to be able to try the different ones and choose the one that fit better for us,” notes Julia Schröder Langhaeuser, VP of Product. The SubgenAI team has also extended Semantic Kernel to support more features in Azure Database for PostgreSQL, made easier by how well-known and popular PostgreSQL is. SubgenAI has also contributed improvements back to the community. This collaborative approach ensures the platform benefits from the latest developments while helping advance the broader ecosystem.

Proven impact of Azure Database for PostgreSQL across industries

Organizations struggle to deliver production-ready agents because of long development cycles, lack of observability, and compliance hurdles; the effectiveness of Azure Database for PostgreSQL and other Azure services against those barriers is reflected in deployment metrics and customer feedback. Production-ready agents typically require around 30 iterations for basic implementations, and complex use cases demand significantly more refinement. One GenAI customer in medical education required over 200 iterations to perfect an agent that evaluates medical students through complex case analysis. Azure Database for PostgreSQL and other Azure services support hour-long iteration cycles rather than week-long sprints, which made this level of refinement economically feasible.

Cost efficiency is another significant advantage. SubgenAI provisions and configures models in Microsoft Foundry, which eliminates idle GPU resources while providing detailed cost breakdowns. Users can see exactly how tokens are consumed across prompt text, RAG context, and tool usage, enabling data-driven optimization decisions.

Consulting partnerships validate the platform's market position. One consulting firm with 50,000 employees is delighted with the easier implementation, faster deployment, and reliable production performance.

Conclusion

The combination of Azure Database for PostgreSQL and Semantic Kernel has enabled SubgenAI to address the fundamental challenges that cause 95 percent of enterprise AI projects to fail. Organizations using Serenity Star bypass the traditional barriers of lengthy development cycles, limited observability, and compliance hurdles that typically derail AI initiatives. The platform's architecture delivers measurable results, including a 50 percent reduction in coding time, support for complex agents requiring 200+ iterations, and deployment capabilities that compress months-long projects into 15-minute implementations. Azure Database for PostgreSQL provides the enterprise-grade foundation that customers in regulated industries require, while Semantic Kernel ensures organizations retain flexibility as AI technology evolves. This technological partnership creates a reliable pathway for companies to deploy production-ready AI agents without sacrificing data sovereignty or operational control.
Through the reliability of Azure Database for PostgreSQL and the flexibility of Semantic Kernel, Serenity Star delivers an enterprise-ready foundation that makes AI practical, scalable, and sustainable.
PostgreSQL 18 is now GA on Azure Database for PostgreSQL

Excited to announce that Flexible Server now offers full general availability of #PostgreSQL18 - the fastest GA we’ve ever shipped after a community release. This means: worldwide region support, in-place major-version upgrades (PG11-PG17 → PG18), Microsoft Entra ID authentication, and Query Store with Index Tuning. Check out the full blog for a deep dive 👉 https://techcommunity.microsoft.com/blog/adforpostgresql/postgresql-18-now-ga-on-azure-postgres-flexible-server/4469802

#Microsoft #Azure #Cloud #Database #Postgres #PG18

PostgreSQL for the enterprise: scale, secure, simplify
This week at Microsoft Ignite, along with unveiling the new Azure HorizonDB cloud native database service, we’re announcing multiple improvements to our fully managed open-source Azure Database for PostgreSQL service, delivering significant advances in performance, analytics, security, and AI-assisted migration. Let’s walk through nine of the top Azure Database for PostgreSQL features and improvements we’re announcing at Microsoft Ignite 2025.

Feature Highlights

- New Intel and AMD v6-series SKUs (Preview)
- Scale to multiple nodes with Elastic Clusters (GA)
- PostgreSQL 18 (GA)
- Real-time analytics with Fabric Mirroring (GA)
- Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
- Adding Parquet to the azure_storage extension (GA)
- Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
- Integrated identity with Entra token-refresh libraries for Python
- AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Performance and scale

New Intel and AMD v6 series SKUs (Preview)

You can run your most demanding Postgres workloads on new Intel and AMD v6 General Purpose and Memory Optimized hardware SKUs, now available in preview. These SKUs deliver massive scale for high-performance OLTP, analytics, and complex queries, with improved price performance and higher memory ceilings. AMD Confidential Compute v6 SKUs are also in Public Preview, enabling enhanced security for sensitive workloads while leveraging AMD’s advanced hardware capabilities. Here’s what you need to know:

- Processors: Powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) and AMD’s fourth-generation EPYC™ 9004 processors
- Scale: VM size options scale up to 192 vCores and 1.8 TiB of memory
- IO: Using the NVMe protocol for data disk access, IO is parallelized across CPU cores and processed more efficiently, offering significant IO improvements
- Compute tier: Available in our General Purpose and Memory Optimized tiers. You can scale up to these new compute SKUs as needed with minimal downtime.

Learn more: Here's a quick summary of the v6 SKUs we’re launching, with links to more information:

| Processor | SKU | Max vCores | Max Mem |
|-----------|---------|------------|---------|
| Intel | Ddsv6 | 192 | 768 GiB |
| Intel | Edsv6 | 192 | 1.8 TiB |
| AMD | Dadsv6 | 96 | 384 GiB |
| AMD | Eadsv6 | 96 | 672 GiB |
| AMD | DCadsv6 | 96 | 386 GiB |
| AMD | ECadsv6 | 96 | 672 GiB |

Scale to multiple nodes with Elastic clusters (GA)

Elastic clusters are now generally available in Azure Database for PostgreSQL. Built on Citus open-source technology, elastic clusters bring the horizontal scaling of a distributed database to the enterprise features of Azure Database for PostgreSQL. Elastic clusters enable horizontal scaling of databases running across multiple server nodes in a “shared nothing” architecture. This is ideal for workloads with high-throughput and storage-intensive demands such as multi-tenant SaaS and IoT-based workloads. Elastic clusters come with all the enterprise-level capabilities that organizations rely upon in Azure Database for PostgreSQL, including high availability, read replicas, private networking, integrated security, and connection pooling. Built-in sharding support at both the row and schema level lets you distribute your data across a cluster of compute resources and run queries in parallel, dramatically increasing throughput and capacity; a minimal sketch of row-level sharding follows below.

Learn more: Elastic clusters in Azure Database for PostgreSQL
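To make the row-level sharding model concrete (a minimal sketch; the table and distribution column are hypothetical, while create_distributed_table is the Citus function the feature is built on):

```sql
-- A multi-tenant fact table to be spread across the cluster
CREATE TABLE events (
    tenant_id bigint NOT NULL,
    event_id  bigint NOT NULL,
    payload   jsonb,
    PRIMARY KEY (tenant_id, event_id)
);

-- Shard rows by tenant_id: single-tenant queries route to one shard,
-- while cross-tenant analytics fan out and run in parallel
SELECT create_distributed_table('events', 'tenant_id');
```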
PostgreSQL 18 (GA)

When PostgreSQL 18 was released in September, we made a preview available on Azure on the same day. Now we’re announcing that PostgreSQL 18 is generally available on Azure Database for PostgreSQL, with full Major Version Upgrade (MVU) support, marking our fastest-ever turnaround from open-source release to managed service general availability. This release reinforces our commitment to delivering the latest PostgreSQL community innovations to Azure customers, so you can adopt the latest features, performance improvements, and security enhancements on a fully managed, production-ready platform without delay.

Note: MVU to PG18 is currently available in the North Central US and West Central US regions, with additional regions being enabled over the next few weeks.

Now you can:

- Deploy PostgreSQL 18 in all public Azure regions.
- Perform in-place major version upgrades to PG18 with no endpoint or connection string changes.
- Use Microsoft Entra ID authentication for secure, centralized identity management in all PG versions.
- Enable Query Store and Index Tuning for built-in performance insights and automated optimization.
- Leverage the 90+ Postgres extensions supported by Azure Database for PostgreSQL.

PostgreSQL 18 also delivers major improvements under the hood, ranging from asynchronous I/O and enhanced vacuuming to improved indexing and partitioning, ensuring Azure continues to lead as the most performant, secure, and developer-friendly managed PostgreSQL service in the cloud.

Learn more: PostgreSQL 18 open-source release announcement; Supported versions of PostgreSQL in Azure Database for PostgreSQL

Analytics

Real-time analytics with Fabric Mirroring (GA)

With Fabric mirroring in Azure Database for PostgreSQL, now generally available, you can run your Microsoft Fabric analytical workloads and capabilities on near-real-time replicated data, without impacting the performance of your production PostgreSQL databases, and at no extra cost. Mirroring in Fabric connects your operational and analytical platforms with continuous data replication from PostgreSQL to Fabric. Transactions are mirrored to Fabric in near real time, enabling advanced analytics, machine learning, and reporting on live data sets without waiting for traditional batch ETL processes to complete. This approach eliminates the overhead of custom integrations or data pipelines. Production PostgreSQL servers can run mission-critical transactional workloads without being affected by surges in analytical queries and reporting. With our GA announcement, Fabric mirroring is ready for production workloads, with secure networking (VNET integration and Private Endpoints supported), Entra ID authentication for centralized identity management, and support for high-availability-enabled servers, ensuring business continuity for mirroring sessions.

Learn more: Mirroring Azure Database for PostgreSQL flexible server

Adding Parquet support to the azure_storage extension (GA)

In addition to mirroring data directly to Microsoft Fabric, there are many other scenarios that require moving operational data into data lakes for analytics or archival. Building and maintaining ETL pipelines for this can be expensive and time-consuming. Azure Database for PostgreSQL now natively supports Parquet via the azure_storage extension, enabling direct SQL-based read/write of Parquet files in Azure Storage. This makes it easy to import and export data in Postgres without external tools or scripts. Parquet is a popular columnar storage format often used in big data and analytics environments (like Spark and Azure Data Lake) because of its efficient compression and query performance for large datasets. Now you can use the azure_storage extension to skip an entire step: just issue a SQL command to write to and query from a Parquet file in Azure Blob Storage, roughly as in the sketch below.

Learn more: Azure storage extension in Azure Database for PostgreSQL
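A minimal sketch of reading a Parquet file straight from Blob Storage (the account, container, path, and column names are hypothetical, and the 'parquet' decoder value is an assumption; check the azure_storage documentation for the exact option names in your version):

```sql
CREATE EXTENSION IF NOT EXISTS azure_storage;

-- Query a Parquet file in Blob Storage as if it were a table
SELECT *
FROM azure_storage.blob_get(
         'mystorageaccount',            -- storage account (hypothetical)
         'datalake',                    -- container
         'sales/2025/orders.parquet',   -- blob path
         decoder := 'parquet')          -- assumed decoder name
     AS rows(order_id bigint, customer_id bigint, amount numeric);
```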
Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)

DuckDB’s columnar engine excels at high-performance scans, aggregations, and joins over large tables, making it particularly well suited for analytical queries. The pg_duckdb extension, now available in preview for Azure Database for PostgreSQL, combines PostgreSQL’s transactional performance and reliability with DuckDB’s analytical speed for large datasets. Together, pg_duckdb and PostgreSQL are an ideal combination for hybrid OLTP + OLAP environments where you need to run analytical queries directly in PostgreSQL without sacrificing performance; a minimal sketch follows below. To see the pg_duckdb extension in action, check out this demo video: https://aka.ms/pg_duckdb

Learn more: pg_duckdb – PostgreSQL extension for DuckDB
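As an illustration (a minimal sketch; the table and columns are hypothetical, and the duckdb.force_execution setting follows the pg_duckdb project’s documentation, so verify it against the preview on Azure):

```sql
CREATE EXTENSION IF NOT EXISTS pg_duckdb;

-- Route eligible queries through DuckDB's columnar engine
SET duckdb.force_execution = true;

-- A typical analytical scan-and-aggregate over a large table
SELECT customer_id,
       date_trunc('month', ordered_at) AS month,
       sum(amount)                     AS monthly_spend
FROM   orders
GROUP  BY customer_id, date_trunc('month', ordered_at)
ORDER  BY monthly_spend DESC
LIMIT  20;
```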
Security

Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)

Operating in a regulated industry such as finance, healthcare, or government means meeting compliance requirements like HIPAA, PCI-DSS, and GDPR that include protection for personal data as well as password complexity, expiration, and reuse rules. This week the anon extension, previously in preview, becomes generally available for Azure Database for PostgreSQL, adding support for dynamic and static masking, anonymized exports, randomization, and many other advanced masking techniques. We’ve also added GA support for the credcheck extension, which provides credential checks for usernames and password complexity, including during user creation, password change, and user renaming. This is particularly useful if your application is not using Entra ID and needs to rely on native PostgreSQL users and passwords. If you need to store and query IP ranges for scenarios like auditing, compliance, access control lists, intrusion detection, and threat intelligence, another useful extension announced this week is the ip4r extension, which provides a set of data types for IPv4 and IPv6 network addresses.

Learn more: PostgreSQL Anonymizer; credcheck – PostgreSQL username/password checks; IP4R - IPv4/v6 and IPv4/v6 range index type for PostgreSQL

The Azure team maintains an active pipeline of new PostgreSQL extensions to onboard and upgrade in Azure Database for PostgreSQL. For example, another important extension upgraded this week is pg_squeeze, which removes unused space from a table. The updated 1.9.1 version adds important stability improvements.

Learn more: List of extensions and modules by name

Integrated identity with Entra token-refresh libraries for Python

In a modern cloud-connected enterprise, identity becomes the most important security perimeter. Azure Database for PostgreSQL is the only managed PostgreSQL service with full Entra integration, but coding applications to take care of Entra token refresh can be complex. This week we’re announcing a new Python library to simplify Entra token refresh. The library automatically refreshes authentication tokens before they expire, eliminating manual token handling and reducing connection failures. The new python_azure_pg_auth library provides seamless Azure Entra ID authentication and supports the latest psycopg and SQLAlchemy drivers with automatic token acquisition, validation, and refresh. Built-in connection pooling is available for both synchronous and asynchronous workloads. Designed for cross-platform use (Windows, Linux, macOS), the package features clean architecture and flexible installation options for different driver combinations. This is our first milestone in a roadmap to add token refresh for additional programming languages and frameworks.

Learn more, with code samples to get started here: https://aka.ms/python-azure-pg-auth

Migration

AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Database migration is a challenging and time-consuming process, with multiple manual steps requiring schema- and application-specific information. The growing popularity, maturity, and low cost of PostgreSQL have led to healthy demand for migration tooling to simplify these steps. The new AI-assisted Oracle Migration Tool preview announced this week greatly simplifies moving from Oracle databases to Azure Database for PostgreSQL. Available in the VS Code PostgreSQL extension, the new migration tool combines GitHub Copilot, Azure OpenAI, and custom Language Model Tools to convert Oracle schemas, database code, and client applications into PostgreSQL-compatible formats. Unlike traditional migration tools that rely on static rules, Azure’s approach leverages Large Language Models (LLMs) and validates every change against a running Azure Database for PostgreSQL instance. This system not only translates syntax but also detects and fixes errors through iterative re-compilation, flagging any items that require human review. Application codebases like Spring Boot and other popular frameworks are refactored and converted. The system also understands context by querying the target Postgres instance for version and installed extensions. It can even invoke capabilities from other VS Code extensions to validate the converted code. The new AI-assisted workflow reduces risk, eliminates significant manual effort, and enables faster modernization while lowering costs.

Learn more: https://aka.ms/pg-migration-tooling

Be sure to follow the Microsoft Blog for PostgreSQL for regular updates from the Postgres on Azure team at Microsoft. We publish monthly recaps about new features in Azure Database for PostgreSQL, as well as an annual blog about what’s new in Postgres at Microsoft.
Announcing Azure HorizonDB

Affan Dar, Vice President of Engineering, PostgreSQL at Microsoft; Charles Feddersen, Partner Director of Program Management, PostgreSQL at Microsoft

Today at Microsoft Ignite, we’re excited to unveil the preview of Azure HorizonDB, a fully managed Postgres-compatible database service designed to meet the needs of modern enterprise workloads. The cloud native architecture of Azure HorizonDB delivers highly scalable shared storage, elastic scale-out compute, and a tiered cache optimized for running cloud applications of any scale.

Postgres is transforming industries worldwide and is emerging as the foundation of modern data solutions across all sectors at an unprecedented pace. For developers, it is the database of choice for building new applications with its rich set of extensions, open-source API, and expansive ecosystem of tools and libraries. At the same time, but at the opposite end of the workload spectrum, enterprises around the world are also increasingly turning to Postgres to modernize their existing applications. Azure HorizonDB is designed to support applications across the entire workload spectrum, from the first line of code in a new app to the migration of large-scale, mission-critical solutions. Developers benefit from the robust Postgres ecosystem and seamless integration with Azure’s advanced AI capabilities, while enterprises gain a secure, highly available, and performant cloud database to host their business applications. Whether you’re building from scratch or transforming legacy infrastructure, Azure HorizonDB empowers you to innovate and scale with confidence, today and into the future.

Azure HorizonDB introduces new levels of performance and scalability to PostgreSQL. The scale-out compute architecture supports up to 3,072 vCores across primary and replica nodes, and the auto-scaling shared storage supports databases up to 128 TB while providing sub-millisecond multi-zone commit latencies. This storage innovation enables Azure HorizonDB to deliver up to 3x more throughput than open-source Postgres for transactional workloads.

Azure HorizonDB is enterprise ready on day one. With native support for Entra ID, Private Endpoints, and data encryption, it provides compliance and security for sensitive data stored in the cloud. All data is replicated across availability zones by default, and maintenance operations are transparent with near-zero downtime. Backups are fully automated, and integration with Azure Defender for Cloud provides additional protection for highly sensitive data. All up, Azure HorizonDB offers enterprise-grade security, compliance, and reliability, making it ready for business use today.

Since the launch of ChatGPT, there has been an explosion of new AI apps being built, and Postgres has become the database of choice due in large part to its vector index support. Azure HorizonDB extends the AI capabilities of Postgres further with two key features. We are introducing advanced filtering capabilities to the DiskANN vector index which enable query predicate pushdown directly into the vector similarity search. This provides significant performance and scalability improvements over pgvector’s HNSW while maintaining accuracy, and is ideal for similarity search over transactional data in Postgres; a minimal sketch follows below. The second feature is built-in AI model management that seamlessly integrates generative, embedding, and reranking models from Microsoft Foundry for developers to use in the database with zero configuration.
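To make the filtered-search idea concrete (a minimal sketch; the schema is hypothetical, and the index syntax follows the pg_diskann extension available in Azure Database for PostgreSQL, so verify the details against HorizonDB’s preview documentation):

```sql
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_diskann;

CREATE TABLE products (
    id        bigserial PRIMARY KEY,
    price     numeric,
    rating    int,
    embedding vector(1536)
);

CREATE INDEX ON products USING diskann (embedding vector_cosine_ops);

-- With advanced filtering, the price/rating predicates are checked during
-- the graph traversal itself rather than by post-filtering the neighbors
SELECT id, price, rating
FROM   products
WHERE  price < 100 AND rating >= 4
ORDER  BY embedding <=> $1::vector   -- $1: the query embedding
LIMIT  10;
```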
In addition to enhanced vector indexing and simplified model management to build powerful new AI apps, we’re also pleased to announce the general availability of Microsoft’s PostgreSQL Extension for VS Code, which provides the tooling Postgres developers need to maximize their productivity. Using this extension, GitHub Copilot is context-aware of the Postgres database, which means less prompting and higher quality answers. In the Ignite release, we’ve added live monitoring with one-click GitHub Copilot debugging, where Agent mode can launch directly from the performance monitoring dashboard to diagnose Postgres performance issues and guide users to a fix.

Alpha Life Sciences is an existing Azure customer:

“I’m truly excited about how Azure HorizonDB empowers our AI development. Its seamless support for Vector DB, RAG, and Agentic AI allows us to build intelligent features directly on a reliable Postgres foundation. With Azure HorizonDB, I can focus on advancing AI capabilities instead of managing infrastructure complexities. It’s a smart, forward-looking solution that perfectly aligns with how we design and deliver AI-powered applications.” Pengcheng Xu, CTO, Alpha Life Sciences

For enterprises that are modernizing their applications to Postgres in the cloud, the security and availability of Azure HorizonDB make it an ideal platform. However, these migrations are often complex and time-consuming for large legacy codebase conversions. To simplify this and reduce the risk, we’re pleased to announce the preview of GitHub Copilot-powered Oracle migration built into the PostgreSQL Extension for VS Code. Inside VS Code, teams of engineers can work with GitHub Copilot to automate the end-to-end conversion of complex database code using rich code editing, version control, text authoring, and deployment in an integrated development environment.

Azure HorizonDB is the next generation of fully managed, cloud native PostgreSQL database service. Built on the latest Azure infrastructure with state-of-the-art cloud architecture, Azure HorizonDB is ready for the most demanding application workloads.

In addition to our portfolio of managed Postgres services in Azure, Microsoft is deeply invested in the open-source Postgres project and is one of the top corporate upstream contributors and sponsors for the PostgreSQL project, with 19 Postgres project contributors employed by Microsoft. As a hyperscale Postgres vendor, it’s critical to actively participate in the open-source project. It enables us to better support our customers down to the metal in Azure, and to contribute our learnings from running Postgres at scale back to the community. We’re committed to continuing our investment to push the Postgres project forward, and the team is already making contributions to Postgres 19, to be released in 2026.

Ready to explore Azure HorizonDB?

Azure HorizonDB is initially available in the Central US, West US 3, UK South, and Australia East regions. Customers are invited to apply for early preview access to Azure HorizonDB and get hands-on experience with this new service. Participation is limited; apply now at aka.ms/PreviewHorizonDB

General Availability of Graph Database Support in Azure Database for PostgreSQL
We are excited to announce the general availability of the Apache AGE extension for Azure Database for PostgreSQL! This marks a significant milestone in empowering developers and businesses to harness the potential of graph data directly within their PostgreSQL environments, offering a fully managed graph database service.

Unlocking Graph Data Capabilities

Apache AGE (A Graph Extension) is a powerful PostgreSQL extension. It allows users to store and query graph data within Postgres seamlessly, enabling advanced insights through intuitive graph database queries via the openCypher query language. Graph data is instrumental in applications such as social networks, recommendation systems, fraud detection, network analysis, and knowledge graphs. By integrating Apache AGE into Azure Database for PostgreSQL, developers can now benefit from a unified platform that supports both relational and graph data models, unlocking deeper insights and streamlining data workflows.

Benefits of Using Apache AGE in Azure Database for PostgreSQL

The integration of Apache AGE (AGE) in Azure Database for PostgreSQL brings numerous benefits to developers and businesses looking to leverage graph processing capabilities:

- Enterprise-grade Managed Graph Database Service: AGE in Azure Database for PostgreSQL provides a fully managed graph database solution, eliminating infrastructure management while delivering built-in security, updates, and high availability.
- Simplified Data Management: AGE's ability to integrate graph and relational data simplifies data management tasks, reducing the need for separate graph database solutions.
- Enhanced Data Analysis: With AGE, you can perform complex graph analyses directly within your PostgreSQL database, gaining deeper insights into relationships and patterns in your data.
- Cost Efficiency: By utilizing AGE within Azure Database for PostgreSQL, you can consolidate your database infrastructure, lowering overall costs and reducing the complexity of your data architecture.
- Security and Compliance: Leverage Azure's industry-leading security and compliance features, ensuring your graph data is protected and meets regulatory requirements.
- Index Support: Index graph properties with BTREE and GIN indexes.

Real-World Applications

Apache AGE opens up a range of possibilities for graph-powered applications. Here are just a few examples:

- Social Networks: Model and analyze complex relationships, such as user connections and interactions.
- Fraud Detection: Identify suspicious patterns and connections in financial transactions.
- Recommendation Systems: Leverage graph data to deliver personalized product or content recommendations.
- Knowledge Graphs: Structure facts and concepts as nodes and relationships, enabling AI-driven search and data discovery.

In the following example, we need to provide Procurement with an updated status of all statements of work (SOW) by vendor, including their invoice status. With AGE and Postgres, this once complex task becomes quite simple. We’ll start by creating the empty graph.

```sql
SELECT ag_catalog.create_graph('vendor_graph');
```

Then, we’ll create all the ‘vendor’ nodes from the vendors table.

```sql
SELECT * FROM ag_catalog.cypher(
  'vendor_graph',
  $$
    UNWIND $rows AS v
    CREATE (:vendor { id: v.id, name: v.name })
  $$,
  ARRAY(
    SELECT jsonb_build_object('id', id, 'name', name)
    FROM vendors
  )
);
```

Next, we’ll create all the ‘sow’ nodes.
```sql
SELECT * FROM ag_catalog.cypher(
  'vendor_graph',
  $$
    UNWIND $rows AS s
    CREATE (:sow { id: s.id, number: s.number })
  $$,
  ARRAY(
    SELECT jsonb_build_object('id', id, 'number', number)
    FROM sows
  )
);
```

Then, we’ll create the ‘has_invoices’ relationships (edges).

```sql
SELECT * FROM ag_catalog.cypher(
  'vendor_graph',
  $$
    UNWIND $rows AS r
    MATCH (v:vendor { id: r.vendor_id })
    MATCH (s:sow { id: r.sow_id })
    CREATE (v)-[:has_invoices { payment_status: r.payment_status, amount: r.invoice_amount }]->(s)
  $$,
  ARRAY(
    SELECT jsonb_build_object(
      'vendor_id', vendor_id,
      'sow_id', sow_id,
      'payment_status', payment_status,
      'invoice_amount', amount
    )
    FROM invoices
  )
);
```

Now that we’ve completed these steps, we have a fully populated vendor_graph with vendor nodes, sow nodes, and has_invoices edges carrying the invoice attributes. We’re ready to query the graph to start our report for Procurement.

```sql
SELECT * FROM ag_catalog.cypher('vendor_graph', $$
  MATCH (v:vendor)-[rel:has_invoices]->(s:sow)
  RETURN v.id AS vendor_id,
         v.name AS vendor_name,
         s.id AS sow_id,
         s.number AS sow_number,
         rel.payment_status AS payment_status,
         rel.amount AS invoice_amount
$$) AS graph_query(vendor_id BIGINT, vendor_name TEXT, sow_id BIGINT, sow_number TEXT, payment_status TEXT, invoice_amount FLOAT);
```

This statement invokes Apache AGE’s Cypher engine that treats our graph as a relational table:

- ag_catalog.cypher('vendor_graph', $$ … $$) executes the Cypher query against the graph named “vendor_graph.”
- The inner Cypher fragment finds every vendor node with outgoing has_invoices edges to sow nodes and projects each vendor’s ID/name, the target sow’s ID/number, and the invoice attributes.
- Wrapping the call in `... ) AS graph_query(vendor_id BIGINT, vendor_name TEXT, sow_id BIGINT, sow_number TEXT, payment_status TEXT, invoice_amount FLOAT);` tells PostgreSQL how to map each returned column into a regular SQL result set with proper types.

The result? You get a standard table of rows—one per invoice edge—with those six columns populated and ready for further SQL joins, filters, aggregates, etc.

Performance notes for this example: AGE will scan all vendor–has_invoices–sow paths in the graph. If the graph is large, consider an index on the vendor or sow label properties, or filter by additional predicates. You can also push WHERE clauses into the Cypher fragment for more selective matching.

Scaling to Large Graphs with AGE

The Apache AGE extension in Azure Database for PostgreSQL enables seamless scaling to large graphs. Indexing plays a pivotal role in enhancing query performance, particularly for complex graph analyses.

Effective Indexing Strategies

To optimize graph queries, particularly those involving joins or range queries, implementing the following indexes is recommended:

BTREE Index: Ideal for exact matches and range queries. For vertex tables, create an index on the unique identifier column (e.g., id).

```sql
CREATE INDEX ON graph_name."VLABEL" USING BTREE (id);
```

GIN Index: Designed for efficient searches within JSON fields, such as the properties column in vertex tables.

```sql
CREATE INDEX ON graph_name."VLABEL" USING GIN (properties);
```

Edge Table Indexes: For relationship traversal, use BTREE indexes on the start_id and end_id columns.
```sql
CREATE INDEX ON graph_name."ELABEL" USING BTREE (start_id);
CREATE INDEX ON graph_name."ELABEL" USING BTREE (end_id);
```

Example: Targeted Key-Value Indexing

For targeted queries that focus on specific attributes within the JSON field, a smaller BTREE index can be created for precise filtering.

```sql
CREATE INDEX ON graph_name.label_name USING BTREE (
  agtype_access_operator(VARIADIC ARRAY[properties, '"KeyName"'::agtype])
);
```

Using these indexing strategies ensures efficient query execution, even when scaling large graphs. Additionally, leveraging the EXPLAIN command helps validate index utilization and optimize query plans for production workloads.

How to Get Started

Enabling Apache AGE in Azure Database for PostgreSQL is simple:

1. Update Server Parameters

Within the Azure portal, navigate to the PostgreSQL flexible server instance and select the Server Parameters option. Adjust the following settings:

- azure.extensions: In the parameter filter, search for and enable AGE among the available extensions.
- shared_preload_libraries: In the parameter filter, search for and enable AGE.

Click Save to apply these changes. The server will restart automatically to activate the AGE extension.

Note: Failure to enable shared_preload_libraries will result in the following error when you first attempt to use the AGE schema in a query: "ERROR: unhandled cypher(cstring) function call error on first cypher query"

2. Enable AGE Within PostgreSQL

Once the server restart is complete, connect to the PostgreSQL instance using the psql interpreter. Execute the following command to enable AGE:

```sql
CREATE EXTENSION IF NOT EXISTS AGE CASCADE;
```

3. Configure Schema Paths

AGE adds a schema called ag_catalog, which is essential for handling graph data. Ensure this schema is included in the search path by executing:

```sql
SET search_path = ag_catalog, "$user", public;
```

That’s it! You’re ready to create your first graph within PostgreSQL on Azure.

Ready to dive in? Experience the power of graph data with Apache AGE on Azure Database for PostgreSQL. Visit AGE on Azure Database for PostgreSQL Overview for more details, and explore how this extension can transform your data analysis and application development. Get started for free with an Azure free account.

Azure PostgreSQL Lesson Learned #8: Post-Upgrade Performance Surprises (The One-Step Fix)
Co‑authored with angesalsaa

Symptoms

- Upgrade from PostgreSQL 12 → a higher version succeeds.
- After migration, workloads show:
  - Queries running slower than before.
  - Unexpected CPU spikes during normal operations.
  - No obvious errors in logs or connectivity issues.

Root Cause

Missing or stale statistics can lead to bad query plans, which in turn can degrade performance and consume excessive memory. After a major version upgrade, the query planner relies on outdated or default estimates because the optimizer’s statistics are not carried over. This often results in:

- Sequential scans instead of index scans.
- Inefficient join strategies.
- Increased CPU and memory usage.

Contributing Factors

- Large tables with skewed data distributions.
- Complex queries with multiple joins.
- Workloads dependent on accurate cost estimates.

Specific Conditions We Observed

- Any source server version can be impacted once you upgrade to a higher version.
- No ANALYZE or VACUUM run post-upgrade.

Operational Checks

Before troubleshooting, confirm:

- Query plans differ significantly from pre-upgrade.
- pg_stats indicates outdated or missing statistics.

Mitigation

Goal: Refresh statistics so the planner can optimize queries. Run ANALYZE on all tables:

```sql
ANALYZE;
```

Important Notes:

- These commands are safe and online.
- For very large datasets, consider running during low-traffic windows.
- We recommend running the ANALYZE command in each database to refresh the pg_statistic table. A quick check for tables that still lack statistics is sketched below.
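A minimal sketch for spotting tables whose statistics are still missing after the upgrade (pg_stat counters are reset by a major upgrade, so both timestamps start out NULL; ordering by row count is just a convenient way to surface the biggest offenders first):

```sql
-- Tables that have never been analyzed (manually or by autovacuum)
SELECT schemaname, relname, last_analyze, last_autoanalyze, n_live_tup
FROM   pg_stat_user_tables
WHERE  last_analyze IS NULL
  AND  last_autoanalyze IS NULL
ORDER  BY n_live_tup DESC;
```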
Post-Resolution

- Queries return to expected performance.
- CPU utilization stabilizes.
- Execution plans align with indexes and cost-based optimization.

Prevention & Best Practices

- Always schedule ANALYZE immediately after major upgrades.
- Automate the stats refresh in your upgrade runbook.
- Validate plans for critical queries before going live.

Why This Matters

Skipping this step can lead to:

- Hours of degraded performance.
- Emergency escalations and customer dissatisfaction.
- Misdiagnosis as an engine regression when it’s just missing stats.

Key Takeaways

- Issue: Post-upgrade query slowness and CPU spikes due to stale/missing statistics.
- Fix: Run ANALYZE immediately after the upgrade.
- Pro Tip: Automate this in CI/CD or maintenance scripts for zero surprises.

References

Major Version Upgrades - Azure Database for PostgreSQL | Microsoft Learn

Build Smarter with Azure HorizonDB

By: Maxim Lukiyanov, PhD, Principal PM Manager; Abe Omorogbe, Senior Product Manager; Shreya R. Aithal, Product Manager II; Swarathmika Kakivaya, Product Manager II

Today, at Microsoft Ignite, we are announcing a new PostgreSQL database service - Azure HorizonDB. You can read the announcement here, and in this blog you can learn more about HorizonDB’s AI features and development tools. Azure HorizonDB is designed for the full spectrum of modern database needs - from quickly building new AI applications, to scaling enterprise workloads to unprecedented levels of performance and availability, to managing your databases efficiently and securely. To help with building new AI applications we are introducing three features: DiskANN Advanced Filtering, built-in AI model management, and integration with Microsoft Foundry. To help with database management we are introducing a set of new capabilities in the PostgreSQL extension for Visual Studio Code, as well as announcing general availability of the extension. Let’s dive into the AI features first.

DiskANN Advanced Filtering

We are excited to announce a new enhancement in Microsoft’s state-of-the-art vector indexing algorithm DiskANN: DiskANN Advanced Filtering. Advanced Filtering addresses a common problem in vector search: combining vector search with filtering. In real-world applications where queries often include constraints like price ranges, ratings, or categories, traditional vector search approaches, such as pgvector’s HNSW, rely on multi-step retrieval and post-filtering, which can make search extremely slow. DiskANN Advanced Filtering solves this by combining filter and search into one operation: while the graph of vectors is traversed during the vector search, each vector is also checked for a filter predicate match, ensuring that only the correct vectors are retrieved. Under the hood, it works in a three-step process: first creating a bitmap of relevant rows using indexes on attributes such as price or rating, then performing a filter-aware graph traversal against the bitmap, and finally validating and ordering the results for accuracy. This integrated approach delivers dramatically faster and more efficient filtered vector searches. Initial benchmarks show that enabling Advanced Filtering on DiskANN reduces query latency by up to 3x, depending on filter selectivity.

AI Model Management

Another exciting feature of HorizonDB is AI Model Management. This feature automates Microsoft Foundry model provisioning during database deployment and instantly activates database semantic operators. This eliminates dozens of setup and configuration steps and simplifies the development of new AI apps and agents. AI Model Management elevates the experience of using semantic operators within PostgreSQL. When activated, it provisions key models for embedding, semantic ranking, and generation via Foundry; installs and configures the azure_ai extension to enable the operators; establishes secure connections; and integrates model management, monitoring, and cost management within HorizonDB. What would otherwise require significant manual effort and context-switching between Foundry and PostgreSQL for configuration, management, and monitoring is now possible with just a few clicks, all without leaving the PostgreSQL environment. You can also continue to bring your own Foundry models, with a simplified and enhanced process for registering your custom model endpoints in the azure_ai extension, as sketched below.
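As a flavor of what the azure_ai extension enables once models are wired up (a minimal sketch; the deployment name is hypothetical, and azure_openai.create_embeddings follows the azure_ai extension’s documented surface, so verify the schema and signature for your version):

```sql
-- Generate an embedding for a piece of text directly in SQL;
-- with built-in model management, the endpoint and key are preconfigured
SELECT azure_openai.create_embeddings(
           'text-embedding-3-small',           -- model deployment name
           'hiking boots under 100 dollars');  -- text to embed
```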
Microsoft Foundry Integration

Microsoft Foundry offers a comprehensive technology stack for building AI apps and agents. But building modern agents capable of reasoning, acting, and collaborating is impossible without a connection to data. To facilitate that connection, we are excited to announce a new PostgreSQL connector in Microsoft Foundry. The connector is designed around a new standard in data connectivity, the Model Context Protocol (MCP). It enables Foundry agents to interact with HorizonDB securely and intelligently, using natural language instead of SQL, and leveraging Microsoft Entra ID to ensure a secure connection. In addition to HorizonDB, this connector also supports Azure Database for PostgreSQL (ADP). This integration allows Foundry agents to perform tasks like:

- Exploring database schemas
- Retrieving records and insights
- Performing analytical queries
- Executing vector similarity searches for semantic search use cases

All through natural language, without compromising enterprise security or compliance. To get started with Foundry integration, follow these setup steps to deploy your own HorizonDB (requires participation in the Private Preview) or ADP and connect it to Foundry in just a few steps.

PostgreSQL extension for VS Code is Generally Available

We’re excited to announce that the PostgreSQL extension for Visual Studio Code is now generally available. The extension has garnered significant popularity within the PostgreSQL community since its preview in May 2025, reaching more than 200K installs. It is the easiest way to connect to a PostgreSQL database from your favorite editor, manage your databases, and take advantage of built-in AI capabilities without ever leaving VS Code. The extension works with any PostgreSQL deployment, whether on-premises or in the cloud, and also supports unique features of Azure HorizonDB and Azure Database for PostgreSQL (ADP).

One of the key new capabilities is Metrics Intelligence, which uses Copilot and real-time telemetry from HorizonDB or ADP to help you diagnose and fix performance issues in seconds. Instead of digging through logs and query plans, you can open the Performance Dashboard, see a CPU spike, and ask Copilot to investigate. The extension sends a rich prompt that tells Copilot to analyze live metrics, identify the root cause, and propose an actionable fix. For example, Copilot might find a full table scan on a large table, recommend a composite index on the filter columns, create that index, and confirm the query plan now uses it. The result is dramatic: you can investigate and resolve the CPU spike in seconds, with no manual scripting or guesswork, and with no prior PostgreSQL expertise required.

The extension also makes it easier to work with graph data. HorizonDB and ADP support the open-source graph extension Apache AGE, which turns these services into fully managed graph databases. You can run graph queries against HorizonDB and immediately visualize the results as an interactive graph inside VS Code. This helps you understand relationships in your data faster, whether you’re exploring customer journeys, network topologies, or knowledge graphs, all without switching tools.
In Conclusion

Azure HorizonDB brings together everything teams need to build, run, and manage modern, AI-powered applications on PostgreSQL. With DiskANN Advanced Filtering, you can deliver low-latency, filtered vector search at scale. With built-in AI Model Management and Microsoft Foundry integration, you can provision models, wire up semantic operators, and connect agents to your data with far fewer steps and far less complexity. And with the PostgreSQL extension for Visual Studio Code, you get an intuitive, AI-assisted experience for performance tuning and graph visualization, right inside the tools you already use. HorizonDB is now available in private preview. If you’re interested in building AI apps and agents on a fully managed, PostgreSQL-compatible service with built-in AI and rich developer tooling, sign up for the Private Preview: https://aka.ms/PreviewHorizonDB