Postgres
From Oracle to Azure: How Quadrant Technologies accelerates migrations
This blog was authored by Manikyam Thukkapuram, Director, Alliances & Engineering at Quadrant Technologies; and Thiwagar Bhalaji, Migration Engineer and DevOps Architect at Quadrant Technologies.

Over the past 20+ years, Quadrant Technologies has accelerated database modernization for hundreds of organizations. As momentum toward the cloud continues to grow, a major focus for our business has been migrating on-premises Oracle databases to Azure. We've found that landing customers in Azure Database for PostgreSQL has been the best option in terms of both cost savings and efficiency, and Azure Migrate is by far the best way to get them there. With Azure Migrate, we're able to streamline migrations that traditionally took months into weeks.

As a Microsoft solutions partner, we help customers migrate to Azure and develop Azure-based solutions. We're known as "the great modernization specialists" because many of our customers come to us with complex legacy footprints, outdated infrastructure, and monolithic applications that can be challenging to move to the cloud. But we excel at untangling these complex environments. And with our Q-Migrator tool, which is a wrapper around Azure Migrate, we're able to automate and accelerate these kinds of migrations.

Manual steps slowed down timelines

In general, each migration we lead includes a discovery phase, a compatibility assessment, and the migration execution. In discovery, we identify every server, database, and application in a customer's environment and map their interactions. Next, we assess each asset's readiness for Azure and plan for optimal cloud configurations. Finally, we bring the plan to life, integrating applications, moving workloads, and validating performance.

Before adopting Azure Migrate, each of these phases involved manual tasks for our team. During our discovery process we manually collected inventory and wrote custom scripts to track server relationships and database dependencies. Our engineers also had to dig through configuration files and use third-party assessment tools for aspects like VM utilization and Oracle schemas. When we mapped compatibility, we worked from static data to predict cost estimates and sizing, as opposed to operating from real-time telemetry. By the time we reached the migration phase, fragmented tooling and inconsistent assessments made it difficult to maintain accuracy and efficiency. Hidden dependencies sometimes surfaced late in the process, causing unexpected rework and delays.

Streamlining migrations with Azure Migrate

To automate and streamline these manual tasks, we developed Q-Migrator, our in-house framework built around Azure Migrate. Now we can offer clients an efficient, agentless approach to discovery, assessment, and migration. As part of our on-premises database migration initiatives, we rely on Azure Migrate to seamlessly migrate a wide range of structured databases (including MySQL, Microsoft SQL Server, PostgreSQL, and Oracle) from on-premises environments to Azure IaaS and PaaS.

For instance, for an on-premises PostgreSQL migration, we begin by setting up an Azure Migrate appliance in the client's environment to automatically discover servers, databases, and applications. That generates a complete inventory and dependency map that identifies every relationship between servers and databases. From there, we run an assessment through Azure Migrate to check compatibility, identify blockers, and right-size target environments for Azure Database for PostgreSQL.
By integrating Azure Database Migration Service (DMS), we can replicate data continuously until cutover, ensuring near-zero downtime. In addition, Azure DMS provides robust telemetry and analytics for deep visibility into every stage of the process. This unified and automated workflow not only replaces manual steps but also increases reliability and accelerates delivery. Teams benefit from a consolidated dashboard for planning, execution, and performance tracking, driving efficiency throughout the migration lifecycle.

75% faster deployment, 60% cost savings

Since implementing Azure Migrate, which now facilitates discovery and assessment for on-premises PostgreSQL workloads, we've accelerated deployment by 75% compared to traditional migration methods. We've also reduced costs for our clients by up to 60%. Automated discovery alone reduces that phase by nearly 40%, and dependency mapping now takes a fraction of the effort. With the integrated dashboard in Azure Migrate we can also track progress across discovery, assessment, and migration in one place, eliminating the need for multiple third-party tools. These efficiencies allow us to deliver complex migrations on tighter timelines without sacrificing quality or reliability.

Rounding out the modernization journey with AKS

As "the great modernization specialists," we're often asked which database is best for landing Oracle workloads in the cloud. From our experience, Azure Database for PostgreSQL is ideal for enterprises seeking cost-efficient and secure PostgreSQL deployments. Its managed services reduce operational overhead while maintaining high availability, compliance, and scalability. Plus, seamless integration with Azure AI services allows us to innovate for clients and keep them ahead of the curve.

We also recognize that database migration is only the first step for many clients; modernizing the application layer delivers even greater scalability, security, and manageability. When clients come to Quadrant for a broader modernization strategy, we often use Azure Kubernetes Service (AKS) to containerize their applications and break monoliths into microservices. AKS delivers a cloud-native architecture alongside database modernization. This integration supports DevOps practices, simplifies deployments, and allows customers to take full advantage of elastic cloud infrastructure.

More innovation to come

Overall, Azure Migrate together with Azure Database for PostgreSQL, Azure Database for MySQL, and Azure SQL Database has redefined how we deliver database modernization, and our close collaboration with Microsoft has made it possible. By engaging early with Microsoft, we can validate migration architectures and gain insights into best practices for high-performance and secure cloud deployments. Access to Microsoft experts helps us fine-tune our designs, optimize performance, and resolve complex issues quickly. We're also investing in AI-driven automation using Azure OpenAI in Foundry Models to analyze migration data, optimize queries, and predict performance outcomes. These innovations allow us to deliver more intelligent, adaptive solutions tailored to each customer's unique environment.
PostgreSQL 18 is now GA on Azure Database for PostgreSQL

Excited to announce that Flexible Server now offers full general availability of #PostgreSQL18 - the fastest GA we've ever shipped after a community release. This means: worldwide region support, in-place major-version upgrades (PG11-PG17 → PG18), Microsoft Entra ID authentication, and Query Store with Index Tuning. Check out the full blog for a deep dive 👉 https://techcommunity.microsoft.com/blog/adforpostgresql/postgresql-18-now-ga-on-azure-postgres-flexible-server/4469802 #Microsoft #Azure #Cloud #Database #Postgres #PG18

Azure PostgreSQL Lesson Learned #10: Why PITR Networking Rules Matter
Co-authored with angesalsaa

Symptoms
A customer attempted to restore a server configured with public access into a private virtual network. The restore operation failed with an error indicating an unsupported configuration.

Root Cause
Azure enforces strict networking rules during PITR to maintain security and consistency:
- Public access servers can only be restored to public access.
- Private access servers can be restored to the same virtual network or a different virtual network, but not to public access.

Why This Happens
Networking mode is tied to the original server configuration. Mixing public and private access during restore could expose sensitive data or break connectivity assumptions.

Contributing Factors
- Customer assumed PITR could switch networking modes.
- No prior review of Azure documentation on restore limitations.

Specific Conditions We Observed
- Source server: Private access with VNet integration.
- Target restore: Attempted to switch to public access.

Operational Checks
Before initiating PITR:
- Confirm the source server's networking mode (Public vs Private).
- Review restore options in the Azure portal → Restore.

Mitigation
Goal: Align the restore strategy with the networking rules.
- If the source is Public: restore only to Public access.
- If the source is Private: restore to the same or a different VNet (within the same region).

Post-Resolution
The customer successfully restored to a different VNet after adjusting expectations.

Prevention & Best Practices
- Document the networking mode for all PostgreSQL servers.
- Train teams on PITR limitations before disaster recovery drills.
- Avoid assumptions; always check official guidance.

Why This Matters
Ignoring these rules can delay recovery during critical incidents. Knowing the constraints upfront ensures faster restores and compliance with security policies.

Key Takeaways
- Issue: PITR does not allow switching between Public and Private access.
- Fix: Restore within the same networking category as the source server.

References
Backup and Restore in Azure Database for PostgreSQL Flexible Server

PostgreSQL for the enterprise: scale, secure, simplify
This week at Microsoft Ignite, along with unveiling the new Azure HorizonDB cloud native database service, we're announcing multiple improvements to our fully managed open-source Azure Database for PostgreSQL service, delivering significant advances in performance, analytics, security, and AI-assisted migration. Let's walk through nine of the top Azure Database for PostgreSQL features and improvements we're announcing at Microsoft Ignite 2025.

Feature Highlights
- New Intel and AMD v6-series SKUs (Preview)
- Scale to multiple nodes with Elastic Clusters (GA)
- PostgreSQL 18 (GA)
- Real-time analytics with Fabric Mirroring (GA)
- Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
- Adding Parquet to the azure_storage extension (GA)
- Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
- Integrated identity with Entra token-refresh libraries for Python
- AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Performance and scale

New Intel and AMD v6 series SKUs (Preview)

You can run your most demanding Postgres workloads on new Intel and AMD v6 General Purpose and Memory Optimized hardware SKUs, now available in preview. These SKUs deliver massive scale for high-performance OLTP, analytics, and complex queries, with improved price performance and higher memory ceilings. AMD Confidential Compute v6 SKUs are also in Public Preview, enabling enhanced security for sensitive workloads while leveraging AMD's advanced hardware capabilities. Here's what you need to know:
- Processors: Powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) and AMD's fourth-generation EPYC™ 9004 processors
- Scale: VM size options scale up to 192 vCores and 1.8 TiB
- IO: Using the NVMe protocol for data disk access, IO is parallelized to the number of CPU cores and processed more efficiently, offering significant IO improvements
- Compute tier: Available in our General Purpose and Memory Optimized tiers. You can scale up to these new compute SKUs as needed with minimal downtime.

Learn more: Here's a quick summary of the v6 SKUs we're launching, with links to more information:

Processor | SKU     | Max vCores | Max Mem
Intel     | Ddsv6   | 192        | 768 GiB
Intel     | Edsv6   | 192        | 1.8 TiB
AMD       | Dadsv6  | 96         | 384 GiB
AMD       | Eadsv6  | 96         | 672 GiB
AMD       | DCadsv6 | 96         | 386 GiB
AMD       | ECadsv6 | 96         | 672 GiB

Scale to multiple nodes with Elastic clusters (GA)

Elastic clusters are now generally available in Azure Database for PostgreSQL. Built on Citus open-source technology, elastic clusters bring the horizontal scaling of a distributed database to the enterprise features of Azure Database for PostgreSQL. Elastic clusters enable horizontal scaling of databases running across multiple server nodes in a "shared nothing" architecture. This is ideal for workloads with high-throughput and storage-intensive demands, such as multi-tenant SaaS and IoT-based workloads. Elastic clusters come with all the enterprise-level capabilities that organizations rely upon in Azure Database for PostgreSQL, including high availability, read replicas, private networking, integrated security, and connection pooling. Built-in sharding support at both the row and schema level enables you to distribute your data across a cluster of compute resources and run queries in parallel, dramatically increasing throughput and capacity, as the sketch below illustrates.

Learn more: Elastic clusters in Azure Database for PostgreSQL
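To make row-level sharding concrete, here is a minimal sketch using create_distributed_table from the Citus API that elastic clusters build on; the events table and its columns are illustrative assumptions, not part of the announcement.

    -- Illustrative multi-tenant table; names are hypothetical.
    CREATE TABLE events (
        tenant_id  bigint NOT NULL,
        event_id   bigint GENERATED ALWAYS AS IDENTITY,
        payload    jsonb,
        created_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (tenant_id, event_id)
    );

    -- Distribute rows across the cluster's nodes by tenant. Queries that
    -- filter on tenant_id are routed to a single shard, while queries
    -- across tenants fan out and run in parallel on all nodes.
    SELECT create_distributed_table('events', 'tenant_id');

Choosing the tenant identifier as the distribution column keeps each tenant's rows co-located, which is the usual design for multi-tenant SaaS workloads.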
PostgreSQL 18 (GA)

When PostgreSQL 18 was released in September, we made a preview available on Azure on the same day. Now we're announcing that PostgreSQL 18 is generally available on Azure Database for PostgreSQL, with full Major Version Upgrade (MVU) support, marking our fastest-ever turnaround from open-source release to managed service general availability. This release reinforces our commitment to delivering the latest PostgreSQL community innovations to Azure customers, so you can adopt the latest features, performance improvements, and security enhancements on a fully managed, production-ready platform without delay.

Note: MVU to PG18 is currently available in the North Central US and West Central US regions, with additional regions being enabled over the next few weeks.

Now you can:
- Deploy PostgreSQL 18 in all public Azure regions.
- Perform in-place major version upgrades to PG18 with no endpoint or connection string changes.
- Use Microsoft Entra ID authentication for secure, centralized identity management in all PG versions.
- Enable Query Store and Index Tuning for built-in performance insights and automated optimization.
- Leverage the 90+ Postgres extensions supported by Azure Database for PostgreSQL.

PostgreSQL 18 also delivers major improvements under the hood, ranging from asynchronous I/O and enhanced vacuuming to improved indexing and partitioning, ensuring Azure continues to lead as the most performant, secure, and developer-friendly PostgreSQL managed service in the cloud.

Learn more:
- PostgreSQL 18 open-source release announcement
- Supported versions of PostgreSQL in Azure Database for PostgreSQL

Analytics

Real-time analytics with Fabric Mirroring (GA)

With Fabric mirroring in Azure Database for PostgreSQL, now generally available, you can run your Microsoft Fabric analytical workloads and capabilities on near-real-time replicated data, without impacting the performance of your production PostgreSQL databases, and at no extra cost. Mirroring in Fabric connects your operational and analytical platforms with continuous data replication from PostgreSQL to Fabric. Transactions are mirrored to Fabric in near real time, enabling advanced analytics, machine learning, and reporting on live data sets without waiting for traditional batch ETL processes to complete. This approach eliminates the overhead of custom integrations or data pipelines. Production PostgreSQL servers can run mission-critical transactional workloads without being affected by surges in analytical queries and reporting. With our GA announcement, Fabric mirroring is ready for production workloads, with secure networking (VNET integration and Private Endpoints supported), Entra ID authentication for centralized identity management, and support for high-availability-enabled servers, ensuring business continuity for mirroring sessions.

Learn more: Mirroring Azure Database for PostgreSQL flexible server

Adding Parquet support to the azure_storage extension (GA)

In addition to mirroring data directly to Microsoft Fabric, there are many other scenarios that require moving operational data into data lakes for analytics or archival. The complexity of building and maintaining ETL pipelines can be expensive and time-consuming. Azure Database for PostgreSQL now natively supports Parquet via the azure_storage extension, enabling direct SQL-based read/write of Parquet files in Azure Storage. This makes it easy to import and export data in Postgres without external tools or scripts. Parquet is a popular columnar storage format often used in big data and analytics environments (like Spark and Azure Data Lake) because of its efficient compression and query performance for large datasets. Now you can use the azure_storage extension to skip an entire step: just issue a SQL command to write to and query from a Parquet file in Azure Blob Storage, along the lines of the sketch below.

Learn more: Azure storage extension in Azure Database for PostgreSQL
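As a rough sketch of the round trip (the storage account, container, and table names here are invented for illustration, and the exact argument and option names should be verified against the azure_storage documentation):

    -- Assumes the azure_storage extension is allow-listed and created, and
    -- that the storage account has been registered with the extension.
    -- Export query results to a Parquet blob (illustrative names).
    SELECT azure_storage.blob_put(
        'mystorageacct',       -- storage account
        'analytics',           -- container
        'orders.parquet',      -- Parquet file to write
        res)
    FROM (SELECT order_id, customer_id, total FROM orders) res;

    -- Read the Parquet file back as an ordinary row set.
    SELECT *
    FROM azure_storage.blob_get('mystorageacct', 'analytics', 'orders.parquet')
         AS rows(order_id bigint, customer_id bigint, total numeric);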
Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)

DuckDB's columnar engine excels at high-performance scans, aggregations, and joins over large tables, making it particularly well-suited for analytical queries. The pg_duckdb extension, now available in preview for Azure Database for PostgreSQL, combines PostgreSQL's transactional performance and reliability with DuckDB's analytical speed for large datasets. Together, pg_duckdb and PostgreSQL are an ideal combination for hybrid OLTP + OLAP environments where you need to run analytical queries directly in PostgreSQL without sacrificing performance.

To see the pg_duckdb extension in action, check out this demo video: https://aka.ms/pg_duckdb

Learn more: pg_duckdb – PostgreSQL extension for DuckDB
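A hedged sketch of what this looks like in practice; the duckdb.force_execution setting is the opt-in described by the pg_duckdb project (verify it against the extension docs on your server), and the orders table is an illustrative assumption:

    -- Opt this session into DuckDB execution for supported queries.
    SET duckdb.force_execution = true;

    -- A typical analytical query: a wide scan plus aggregation that
    -- benefits from a columnar engine. Table and columns are illustrative.
    SELECT customer_id,
           date_trunc('month', created_at) AS month,
           sum(total) AS revenue
    FROM orders
    GROUP BY customer_id, date_trunc('month', created_at)
    ORDER BY revenue DESC
    LIMIT 10;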
Security

Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)

Operating in a regulated industry such as finance, healthcare, or government means negotiating compliance requirements like HIPAA, PCI-DSS, and GDPR, which include protection for personal data as well as rules for password complexity, expiration, and reuse. This week the anon extension, previously in preview, is now generally available for Azure Database for PostgreSQL, adding support for dynamic and static masking, anonymized exports, randomization, and many other advanced masking techniques (see the sketch below). We've also added GA support for the credcheck extension, which provides credential checks for usernames and password complexity, including during user creation, password change, and user renaming. This is particularly useful if your application is not using Entra ID and needs to rely on native PostgreSQL users and passwords. If you need to store and query IP ranges for scenarios like auditing, compliance, access control lists, intrusion detection, and threat intelligence, another useful extension announced this week is the ip4r extension, which provides a set of data types for IPv4 and IPv6 network addresses.

Learn more:
- PostgreSQL Anonymizer
- credcheck – PostgreSQL username/password checks
- IP4R - IPv4/v6 and IPv4/v6 range index type for PostgreSQL
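To give a flavor of the anon extension, here is a minimal dynamic-masking sketch; the table, column, and role names are illustrative, and the function names should be checked against the PostgreSQL Anonymizer documentation for the version shipped on Azure:

    -- Assumes anon is allow-listed and created on the server.
    CREATE EXTENSION IF NOT EXISTS anon CASCADE;
    SELECT anon.init();  -- load the default fake-data dictionaries

    -- Declare a masking rule: what the masked view of the data looks like.
    SECURITY LABEL FOR anon ON COLUMN customers.email
        IS 'MASKED WITH FUNCTION anon.fake_email()';

    -- Mark a role as masked and turn on dynamic masking: the analyst role
    -- now sees fake emails, while other roles see the real values.
    SECURITY LABEL FOR anon ON ROLE analyst IS 'MASKED';
    SELECT anon.start_dynamic_masking();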
The Azure team maintains an active pipeline of new PostgreSQL extensions to onboard and upgrade in Azure Database for PostgreSQL. For example, another important extension upgraded this week is pg_squeeze, which removes unused space from a table. The updated 1.9.1 version adds important stability improvements.

Learn more: List of extensions and modules by name

Integrated identity with Entra token-refresh libraries for Python

In a modern cloud-connected enterprise, identity becomes the most important security perimeter. Azure Database for PostgreSQL is the only managed PostgreSQL service with full Entra integration, but coding applications to take care of Entra token refresh can be complex. This week we're announcing a new Python library to simplify Entra token refresh. The library automatically refreshes authentication tokens before they expire, eliminating manual token handling and reducing connection failures. The new python_azure_pg_auth library provides seamless Azure Entra ID authentication and supports the latest psycopg and SQLAlchemy drivers with automatic token acquisition, validation, and refresh. Built-in connection pooling is available for both synchronous and asynchronous workloads. Designed for cross-platform use (Windows, Linux, macOS), the package features clean architecture and flexible installation options for different driver combinations. This is our first milestone in a roadmap to add token refresh for additional programming languages and frameworks.

Learn more, with code samples to get started: https://aka.ms/python-azure-pg-auth

Migration

AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Database migration is a challenging and time-consuming process, with multiple manual steps requiring schema- and application-specific information. The growing popularity, maturity, and low cost of PostgreSQL has led to healthy demand for migration tooling that simplifies these steps. The new AI-assisted Oracle Migration Tool preview announced this week greatly simplifies moving from Oracle databases to Azure Database for PostgreSQL. Available in the VS Code PostgreSQL extension, the new migration tool combines GitHub Copilot, Azure OpenAI, and custom Language Model Tools to convert Oracle schemas, database code, and client applications into PostgreSQL-compatible formats. Unlike traditional migration tools that rely on static rules, Azure's approach leverages Large Language Models (LLMs) and validates every change against a running Azure Database for PostgreSQL instance. This system not only translates syntax but also detects and fixes errors through iterative re-compilation, flagging any items that require human review. Application codebases like Spring Boot and other popular frameworks are refactored and converted. The system also understands context by querying the target Postgres instance for version and installed extensions. It can even invoke capabilities from other VS Code extensions to validate the converted code. The new AI-assisted workflow reduces risk, eliminates significant manual effort, and enables faster modernization while lowering costs.

Learn more: https://aka.ms/pg-migration-tooling

Be sure to follow the Microsoft Blog for PostgreSQL for regular updates from the Postgres on Azure team at Microsoft. We publish monthly recaps about new features in Azure Database for PostgreSQL, as well as an annual blog about what's new in Postgres at Microsoft.

Announcing Azure HorizonDB

Affan Dar, Vice President of Engineering, PostgreSQL at Microsoft
Charles Feddersen, Partner Director of Program Management, PostgreSQL at Microsoft

Today at Microsoft Ignite, we're excited to unveil the preview of Azure HorizonDB, a fully managed Postgres-compatible database service designed to meet the needs of modern enterprise workloads. The cloud native architecture of Azure HorizonDB delivers highly scalable shared storage, elastic scale-out compute, and a tiered cache optimized for running cloud applications of any scale.

Postgres is transforming industries worldwide and is emerging as the foundation of modern data solutions across all sectors at an unprecedented pace. For developers, it is the database of choice for building new applications with its rich set of extensions, open-source API, and expansive ecosystems of tools and libraries. At the same time, but at the opposite end of the workload spectrum, enterprises around the world are increasingly turning to Postgres to modernize their existing applications. Azure HorizonDB is designed to support applications across the entire workload spectrum, from the first line of code in a new app to the migration of large-scale, mission-critical solutions. Developers benefit from the robust Postgres ecosystem and seamless integration with Azure's advanced AI capabilities, while enterprises gain a secure, highly available, and performant cloud database to host their business applications. Whether you're building from scratch or transforming legacy infrastructure, Azure HorizonDB empowers you to innovate and scale with confidence, today and into the future.

Azure HorizonDB introduces new levels of performance and scalability to PostgreSQL. The scale-out compute architecture supports up to 3,072 vCores across primary and replica nodes, and the auto-scaling shared storage supports up to 128 TB databases while providing sub-millisecond multi-zone commit latencies. This storage innovation enables Azure HorizonDB to deliver up to 3x more throughput when compared with open-source Postgres for transactional workloads.

Azure HorizonDB is enterprise ready on day one. With native support for Entra ID, Private Endpoints, and data encryption, it provides compliance and security for sensitive data stored in the cloud. All data is replicated across availability zones by default, and maintenance operations are transparent with near-zero downtime. Backups are fully automated, and integration with Azure Defender for Cloud provides additional protection for highly sensitive data. All up, Azure HorizonDB offers enterprise-grade security, compliance, and reliability, making it ready for business use today.

Since the launch of ChatGPT, there has been an explosion of new AI apps being built, and Postgres has become the database of choice due in large part to its vector index support. Azure HorizonDB extends the AI capabilities of Postgres further with two key features. First, we are introducing advanced filtering capabilities to the DiskANN vector index that enable query predicate pushdown directly into the vector similarity search. This provides significant performance and scalability improvements over pgvector HNSW while maintaining accuracy, and is ideal for similarity search over transactional data in Postgres. The second feature is built-in AI model management that seamlessly integrates generative, embedding, and reranking models from Microsoft Foundry for developers to use in the database with zero configuration.
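Azure HorizonDB's filtered DiskANN search hasn't been detailed publicly yet, but the shape of the query is familiar from the pg_diskann extension available on Azure Database for PostgreSQL today. A hedged sketch, with table, column, and embedding sizes chosen purely for illustration:

    CREATE EXTENSION IF NOT EXISTS vector;
    CREATE EXTENSION IF NOT EXISTS pg_diskann;

    -- Tiny embedding dimension for readability; real applications use the
    -- dimension of their embedding model.
    CREATE TABLE products (
        id        bigint PRIMARY KEY,
        category  text,
        embedding vector(3)
    );

    CREATE INDEX ON products USING diskann (embedding vector_cosine_ops);

    -- Filtered similarity search: the promise of predicate pushdown is that
    -- the category filter participates in the index search itself, rather
    -- than discarding candidate rows after the fact.
    SELECT id
    FROM products
    WHERE category = 'outdoor'
    ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
    LIMIT 10;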
In addition to enhanced vector indexing and simplified model management to build powerful new AI apps, we're also pleased to announce the general availability of Microsoft's PostgreSQL Extension for VS Code, which provides the tooling Postgres developers need to maximize their productivity. Using this extension, GitHub Copilot is context aware of the Postgres database, which means less prompting and higher quality answers. And in the Ignite release, we've added live monitoring with one-click GitHub Copilot debugging, where Agent mode can launch directly from the performance monitoring dashboard to diagnose Postgres performance issues and guide users to a fix.

Alpha Life Sciences is an existing Azure customer:

"I'm truly excited about how Azure HorizonDB empowers our AI development. Its seamless support for Vector DB, RAG, and Agentic AI allows us to build intelligent features directly on a reliable Postgres foundation. With Azure HorizonDB, I can focus on advancing AI capabilities instead of managing infrastructure complexities. It's a smart, forward-looking solution that perfectly aligns with how we design and deliver AI-powered applications." Pengcheng Xu, CTO, Alpha Life Sciences

For enterprises that are modernizing their applications to Postgres in the cloud, the security and availability of Azure HorizonDB make it an ideal platform. However, these migrations are often complex and time-consuming for large legacy codebase conversions. To simplify this and reduce the risk, we're pleased to announce the preview of GitHub Copilot-powered Oracle migration built into the PostgreSQL Extension for VS Code. Within VS Code, teams of engineers can work with GitHub Copilot to automate the end-to-end conversion of complex database code using rich code editing, version control, text authoring, and deployment in an integrated development environment.

Azure HorizonDB is the next generation of fully managed, cloud native PostgreSQL database service. Built on the latest Azure infrastructure with state-of-the-art cloud architecture, Azure HorizonDB is ready for the most demanding application workloads. In addition to our portfolio of managed Postgres services in Azure, Microsoft is deeply invested in the open source Postgres project and is one of the top corporate upstream contributors and sponsors for the PostgreSQL project, with 19 Postgres project contributors employed by Microsoft. As a hyperscale Postgres vendor, it's critical to actively participate in the open-source project. It enables us to better support our customers down to the metal in Azure, and to contribute our learnings from running Postgres at scale back to the community. We're committed to continuing our investment to push the Postgres project forward, and the team is already active in making contributions to Postgres 19, to be released in 2026.

Ready to explore Azure HorizonDB?

Azure HorizonDB is initially available in the Central US, West US 3, UK South, and Australia East regions. Customers are invited to apply for early preview access to Azure HorizonDB and get hands-on experience with this new service. Participation is limited; apply now at aka.ms/PreviewHorizonDB
General Availability of Graph Database Support in Azure Database for PostgreSQL

We are excited to announce the general availability of the Apache AGE extension for Azure Database for PostgreSQL! This marks a significant milestone in empowering developers and businesses to harness the potential of graph data directly within their PostgreSQL environments, offering a fully managed graph database service.

Unlocking Graph Data Capabilities

Apache AGE (A Graph Extension) is a powerful PostgreSQL extension. It allows users to store and query graph data within Postgres seamlessly, enabling advanced insights through intuitive graph database queries via the openCypher query language. Graph data is instrumental in applications such as social networks, recommendation systems, fraud detection, network analysis, and knowledge graphs. By integrating Apache AGE into Azure Database for PostgreSQL, developers can now benefit from a unified platform that supports both relational and graph data models, unlocking deeper insights and streamlining data workflows.

Benefits of Using Apache AGE in Azure Database for PostgreSQL

The integration of Apache AGE (AGE) in Azure Database for PostgreSQL brings numerous benefits to developers and businesses looking to leverage graph processing capabilities:
- Enterprise-grade Managed Graph Database Service: AGE in Azure Database for PostgreSQL provides a fully managed graph database solution, eliminating infrastructure management while delivering built-in security, updates, and high availability.
- Simplified Data Management: AGE's ability to integrate graph and relational data simplifies data management tasks, reducing the need for separate graph database solutions.
- Enhanced Data Analysis: With AGE, you can perform complex graph analyses directly within your PostgreSQL database, gaining deeper insights into relationships and patterns in your data.
- Cost Efficiency: By utilizing AGE within Azure Database for PostgreSQL, you can consolidate your database infrastructure, lowering overall costs and reducing the complexity of your data architecture.
- Security and Compliance: Leverage Azure's industry-leading security and compliance features, ensuring your graph data is protected and meets regulatory requirements.
- Index Support: Index graph properties with BTREE and GIN indexes.

Real-World Applications

Apache AGE opens up a range of possibilities for graph-powered applications. Here are just a few examples:
- Social Networks: Model and analyze complex relationships, such as user connections and interactions.
- Fraud Detection: Identify suspicious patterns and connections in financial transactions.
- Recommendation Systems: Leverage graph data to deliver personalized product or content recommendations.
- Knowledge Graphs: Structure facts and concepts as nodes and relationships, enabling AI-driven search and data discovery.

In the following example, we need to provide Procurement with an updated status of all statements of work (SOW) by vendor, including their invoice status. With AGE and Postgres, this once complex task becomes quite simple. We'll start by creating the empty graph.

    SELECT ag_catalog.create_graph('vendor_graph');

Then, we'll create all the 'vendor' nodes from the vendors table.

    SELECT * FROM ag_catalog.cypher(
        'vendor_graph',
        $$
            UNWIND $rows AS v
            CREATE (:vendor { id: v.id, name: v.name })
        $$,
        ARRAY(
            SELECT jsonb_build_object('id', id, 'name', name)
            FROM vendors
        )
    );

Next, we'll create all the 'sow' nodes.
    SELECT * FROM ag_catalog.cypher(
        'vendor_graph',
        $$
            UNWIND $rows AS s
            CREATE (:sow { id: s.id, number: s.number })
        $$,
        ARRAY(
            SELECT jsonb_build_object('id', id, 'number', number)
            FROM sows
        )
    );

Then, we'll create the 'has_invoices' relationships (edges).

    SELECT * FROM ag_catalog.cypher(
        'vendor_graph',
        $$
            UNWIND $rows AS r
            MATCH (v:vendor { id: r.vendor_id })
            MATCH (s:sow { id: r.sow_id })
            CREATE (v)-[:has_invoices { payment_status: r.payment_status,
                                        amount: r.invoice_amount }]->(s)
        $$,
        ARRAY(
            SELECT jsonb_build_object(
                'vendor_id', vendor_id,
                'sow_id', sow_id,
                'payment_status', payment_status,
                'invoice_amount', amount
            )
            FROM invoices
        )
    );

Now that we've completed these steps, we have a fully populated vendor_graph with vendor nodes, sow nodes, and has_invoices edges carrying the invoice attributes. We're ready to query the graph to start our report for Procurement.

    SELECT * FROM ag_catalog.cypher('vendor_graph', $$
        MATCH (v:vendor)-[rel:has_invoices]->(s:sow)
        RETURN v.id AS vendor_id, v.name AS vendor_name,
               s.id AS sow_id, s.number AS sow_number,
               rel.payment_status AS payment_status,
               rel.amount AS invoice_amount
    $$) AS graph_query(vendor_id BIGINT, vendor_name TEXT,
                       sow_id BIGINT, sow_number TEXT,
                       payment_status TEXT, invoice_amount FLOAT);

This statement invokes Apache AGE's Cypher engine, which treats our graph as a relational table:
- ag_catalog.cypher('vendor_graph', $$ … $$) executes the Cypher query against the graph named "vendor_graph."
- The inner Cypher fragment finds every vendor node with outgoing has_invoices edges to SOW nodes, and projects each vendor's ID/name, the target SOW's ID/number, and the invoice attributes.
- Wrapping the call in ... AS graph_query(vendor_id BIGINT, vendor_name TEXT, sow_id BIGINT, sow_number TEXT, payment_status TEXT, invoice_amount FLOAT) tells PostgreSQL how to map each returned column into a regular SQL result set with proper types.

The result? You get a standard table of rows, one per invoice edge, with those six columns populated and ready for further SQL joins, filters, aggregates, etc.

Performance notes for this example: AGE will scan all vendor-has_invoices-sow paths in the graph. If the graph is large, consider an index on the vendor or sow label properties, or filter by additional predicates. You can also push WHERE clauses into the Cypher fragment for more selective matching.

Scaling to Large Graphs with AGE

The Apache AGE extension in Azure Database for PostgreSQL enables seamless scaling to large graphs. Indexing plays a pivotal role in enhancing query performance, particularly for complex graph analyses.

Effective Indexing Strategies

To optimize graph queries, particularly those involving joins or range queries, implementing the following indexes is recommended:

BTREE Index: Ideal for exact matches and range queries. For vertex tables, create an index on the unique identifier column (e.g., id).

    CREATE INDEX ON graph_name."VLABEL" USING BTREE (id);

GIN Index: Designed for efficient searches within JSON fields, such as the properties column in vertex tables.

    CREATE INDEX ON graph_name."VLABEL" USING GIN (properties);

Edge Table Indexes: For relationship traversal, use BTREE indexes on the start_id and end_id columns.

    CREATE INDEX ON graph_name."ELABEL" USING BTREE (start_id);
    CREATE INDEX ON graph_name."ELABEL" USING BTREE (end_id);
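To confirm that a given query actually uses these indexes, you can wrap the cypher() call in a standard EXPLAIN, since AGE queries plan like ordinary SQL. A minimal sketch against the vendor_graph built earlier (the vendor id value is illustrative):

    -- Inspect the plan and actual row counts for a selective match; the
    -- BTREE/GIN indexes above should appear in the plan when they apply.
    EXPLAIN ANALYZE
    SELECT * FROM ag_catalog.cypher('vendor_graph', $$
        MATCH (v:vendor { id: 42 })-[:has_invoices]->(s:sow)
        RETURN s.number
    $$) AS graph_query(sow_number agtype);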
Example: Targeted Key-Value Indexing

For targeted queries that focus on specific attributes within the JSON field, a smaller BTREE index can be created for precise filtering.

    CREATE INDEX ON graph_name.label_name USING BTREE (
        agtype_access_operator(VARIADIC ARRAY[properties, '"KeyName"'::agtype])
    );

Using these indexing strategies ensures efficient query execution, even when scaling to large graphs. Additionally, leveraging the EXPLAIN command helps validate index utilization and optimize query plans for production workloads.

How to Get Started

Enabling Apache AGE in Azure Database for PostgreSQL is simple:

1. Update Server Parameters
Within the Azure portal, navigate to the PostgreSQL Flexible Server instance and select the Server Parameters option. Adjust the following settings:
- azure.extensions: In the parameter filter, search for and enable AGE among the available extensions.
- shared_preload_libraries: In the parameter filter, search for and enable AGE.
Click Save to apply these changes. The server will restart automatically to activate the AGE extension.
Note: Failure to enable shared_preload_libraries will result in the following error when you first attempt to use the AGE schema in a query: "ERROR: unhandled cypher(cstring) function call error on first cypher query"

2. Enable AGE Within PostgreSQL
Once the server restart is complete, connect to the PostgreSQL instance using the psql interpreter. Execute the following command to enable AGE:

    CREATE EXTENSION IF NOT EXISTS AGE CASCADE;

3. Configure Schema Paths
AGE adds a schema called ag_catalog, which is essential for handling graph data. Ensure this schema is included in the search path by executing:

    SET search_path=ag_catalog,"$user",public;

That's it! You're ready to create your first graph within PostgreSQL on Azure.

Ready to dive in? Experience the power of graph data with Apache AGE on Azure Database for PostgreSQL. Visit AGE on Azure Database for PostgreSQL Overview for more details, and explore how this extension can transform your data analysis and application development. Get started for free with an Azure free account.

Azure PostgreSQL Lesson Learned #5: Why SKU Changes from Non-Confidential to Confidential Fail
Co-authored with HaiderZ-MSFT

Issue Summary
The customer attempted to change the server configuration from Standard_D4ds_v5 (non-Confidential Compute) to Standard_DC4ads_v5 (Confidential Compute) in the West Europe region. The goal was to enhance the performance and security profile of the server. However, the SKU change could not be completed due to a mismatch in security profiles between the current and target SKUs.

Root Cause
The issue occurred because SKU changes from non-Confidential to Confidential Compute types are not supported in Azure Database for PostgreSQL Flexible Server. Each compute type uses different underlying hardware and isolation technologies. As documented in Azure Confidential Computing for PostgreSQL Flexible Server, operations such as Point-in-Time Restore (PITR) from non-Confidential Compute SKUs to Confidential ones aren't allowed. Similarly, direct SKU transitions between these compute types are not supported due to this security model difference.

Mitigation
To resolve the issue, the customer was advised to migrate the data to a new server created with the desired compute SKU (Standard_DC4ads_v5). This ensures compatibility while achieving the intended performance and security goals.

Steps:
1. Create a new PostgreSQL Flexible Server with the desired SKU (Confidential Compute).
2. Use native PostgreSQL tools to migrate data:

    pg_dump -h <source_server> -U <user> -Fc -f backup.dump
    pg_restore -h <target_server> -U <user> -d <database> -c backup.dump

3. Validate connectivity and performance on the new server (a quick sanity check is sketched below).
4. Decommission the old server once the migration is confirmed successful.
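Before decommissioning, a minimal sanity check is to compare versions and per-table row counts between the source and target; the query below uses standard catalog views (note that n_live_tup in pg_stat_user_tables is an estimate, so run ANALYZE first or use count(*) where exact numbers matter):

    -- Run on both the source server and the new Confidential Compute
    -- server, then compare the outputs.
    SELECT version();

    SELECT schemaname, relname, n_live_tup AS approx_rows
    FROM pg_stat_user_tables
    ORDER BY schemaname, relname;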
Prevention & Best Practices
To avoid similar issues in the future:
- Review documentation before performing SKU changes or scaling operations: Azure Confidential Computing for PostgreSQL Flexible Server.
- Confirm compute type compatibility when planning scale or migration operations.
- Plan migrations proactively if you anticipate needing a different compute security profile. Use tools such as pg_dump / pg_restore or Azure Database Migration Service.
- Check regional availability for Confidential Compute SKUs before deployment.

Why This Matters
Understanding the distinction between Confidential and non-Confidential Compute is essential to maintaining healthy business progress. By reviewing compute compatibility and following the documented best practices, customers can ensure smooth scaling, enhanced security, and predictable database performance.

Exciting things on the horizon for PostgreSQL fans @ Ignite 2025

If you're passionate about PostgreSQL or just curious about what's new, you'll want to join us at Microsoft Ignite 2025. We have a packed lineup, including sessions exploring cutting-edge features and exclusive giveaways at the PostgreSQL on Azure booth. Haven't registered yet? Now's the time: sign up for Microsoft Ignite and start building your schedule. Below are the must-see PostgreSQL on Azure activities, with highlights of what you'll learn at each. Add these to your agenda today; sessions can fill up fast!

Theater sessions: get a first look, fast

I know from experience that attention spans can start to wane after hours-long keynotes, content-rich sessions, and conference socializing. Luckily, we have a couple of theater sessions that offer snackable but substantial information in less time than it will take to grab lunch. And they're located conveniently on the main conference floor.
- PostgreSQL on Azure: Your launchpad for intelligent apps and agents (THR705) - See how we're making PostgreSQL AI-aware for developers to drive app and agent innovation. Includes a demo of vector similarity search, semantic operators baked into Postgres, and more!
- Simplifying scale-out of PostgreSQL for performant multi-tenant apps (THR706) - Discover a smarter, simpler way to scale PostgreSQL using the new Elastic Clusters feature. If your app or service is growing fast (or you want it to!), add this session to learn how Azure makes it easier to scale Postgres and keep it reliable.

These talks are a great way to sample what's new and decide where to dive deeper. Plus, they're fun and demo-heavy, and who doesn't love a good demo?

Breakout sessions: a deep dive into Postgres innovations

Led by Azure product leaders and executives from organizations driving innovation backed by PostgreSQL, these breakout sessions will dive into the coolest new capabilities and real-world use cases. If you want rich, technical content and more live demos, these are for you.
- Build mission-critical apps that scale with PostgreSQL on Azure (BRK127) - Get a closer look at the next generation of PostgreSQL on Azure. Add this session if you're curious about how we're taking Postgres to the next level to support your mission-critical AI workloads.
- Modern data, modern apps: Innovation with Microsoft Databases (BRK134) - Gain insider knowledge on the latest innovations across open-source, SQL, and NoSQL databases, and understand how Microsoft's integrated database portfolio supports next-gen innovation.
- Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (BRK137) - Discover how a Fortune 100 company merges trust with cutting-edge AI, leveraging Azure's AI-enriched and enterprise-ready solutions, including Azure Database for PostgreSQL, Azure Database for MySQL, Azure AI Foundry, Azure Kubernetes Service (AKS), and API Management.
- AI-assisted migration: The path to powerful performance on PostgreSQL (BRK123) - A before-and-after migration journey from Oracle to Azure Database for PostgreSQL. See how the new AI-assisted migration experience delivers conversion in a few clicks with minimal downtime.
- The blueprint for intelligent AI agents backed by PostgreSQL (BRK130) - If you're into AI development, this session will spark ideas on bridging the gap between raw data and AI reasoning. You'll leave with practical tips to turbocharge your AI agents with PostgreSQL.

Each breakout session is 45 minutes with live demos and Q&A, so you'll get plenty of detail and interaction with Postgres experts.
Hands-on lab: experience coding with Azure superpowers

Do you learn best by doing? Then our guided workshop, Build advanced AI agents with PostgreSQL (Lab515), is for you. In each 75-minute session, you'll get to create a fully functional AI-powered application backed by PostgreSQL on Azure, with step-by-step guidance and expert insight on the latest innovations enabling intelligent app development. All the tools and instructions you'll need are provided. Labs have limited capacity, so be sure to reserve your seat for any of the four labs in advance. This lab is a great way to understand how all the pieces come together on Azure. And you'll gain practical skills you can apply to your own projects, whether it's customer support bots, intelligent search in your app, or any scenario where PostgreSQL + AI collide.

Expert meet-up booth: meet the team, grab some swag

If you still want more Postgres (or a little Postgres souvenir), you can stop by the PostgreSQL on Azure Expert Meetup booth in the Ignite Hub. This will be our home base on the show floor, where you can:
- Meet the team: I'll be there in person, along with engineers, program managers, cloud solution architects, and advocates from our team. Whether you have a burning technical question, want to share feedback, or need guidance for your specific use case, come chat with us.
- Get a quick demo re-run: Sometimes a 5-minute demo is worth a thousand words, especially after you've sat through all those words already in a keynote. The booth will have a monitor and a live environment so we can walk you through select use cases if you have questions - no appointment needed.
- Swag and giveaways: Ah yes, the goodies! We know conference swag is part of the fun, so we've got some special PostgreSQL-themed giveaways at the booth. I won't spoil all the surprises, but rumor has it there are some limited-edition items up for grabs.
- Network with peers: The expert meet-up area is also a magnet for PostgreSQL enthusiasts. You might bump into other attendees at the booth who are tackling similar projects or challenges. Ignite is about community as much as content, so come by and spark up a conversation.

Meet you there?

Ignite is our largest event of the year. We love sharing what we've been working on and, most of all, hearing from you, the community. So, on behalf of the Azure for PostgreSQL team, thank you for your interest and support. We can't wait to show you what's new and to help you continue to succeed with Postgres. See you in San Francisco!