Azure Database for PostgreSQL
Nasdaq builds thoughtfully designed AI for board governance with PostgreSQL on Azure
Authored by: Charles Feddersen, Partner Director of Product Management for PostgreSQL at Microsoft, and Mohsin Shafqat, Senior Manager, Software Engineering at Nasdaq

When people think of Nasdaq, they usually think of markets, trading floors, and financial data moving at extraordinary speed. But behind the scenes, Nasdaq also plays an equally critical role in how boards of directors govern, deliberate, and make decisions. Nasdaq Boardvantage® is the company’s governance platform, used by more than 4,400 organizations worldwide—including nearly half of the Fortune 100. It’s where directors review board books, collaborate in an environment designed with robust security, and prepare for meetings that often involve some of the most sensitive information a company has.

In recent years, Nasdaq set out to modernize Nasdaq Boardvantage with AI, without compromising security and reliability. That journey was featured in a Microsoft Ignite session, “Nasdaq Boardvantage: AI-Driven Governance on PostgreSQL and Foundry.” It offers a practical look at how Azure Database for PostgreSQL can support AI-driven applications where precision, isolation, and data control are non-negotiable.

Introducing AI where trust is everything

Board governance isn’t a typical productivity workload. Board packets can run 400 to 600 pages, meeting minutes are legal records, and any AI-generated insight must be confined to a customer’s own data. “Our customers trust us with some of their most strategic, sensitive data,” said Mohsin Shafqat, Senior Manager of Software Development at Nasdaq. That trust meant tackling several core challenges upfront, including:

How do you minimize AI hallucinations in a governance context?
How do you guarantee tenant isolation at scale?
How do you keep data regional across a global customer base?
A cloud foundation built for governance

Before adding intelligence, Nasdaq decided to re-architect Nasdaq Boardvantage on Microsoft Azure, using Azure Kubernetes Service (AKS) to run containerized, multi-tenant workloads with strong isolation boundaries. Microsoft Foundry provides the managed foundation for deploying, governing, and operating AI models across this architecture, adding consistency, security, and control as intelligence is introduced.

At the data layer, Azure Database for PostgreSQL and Azure Database for MySQL became the backbone for governance data. PostgreSQL, in particular, plays a central role in managing structured governance information alongside vector embeddings that support AI-driven features. Together, these services give Nasdaq the performance, security, and operational control required for a highly regulated, multi-tenant environment, while still moving quickly. Key architectural choices included:

Tenant isolation by design, with separate databases and storage
Regional deployments to align with data residency requirements
High availability and managed operations, so teams could focus on product innovation instead of infrastructure maintenance

PostgreSQL and pgvector: Powering context-aware AI

With that foundation in place, Nasdaq was ready to carefully introduce AI. One of the first AI capabilities was intelligent document summarization. Board materials that once took hours to review could now be condensed into concise, contextually accurate summaries. Under the hood, this required more than just calling an LLM. Nasdaq uses pgvector, natively supported in Azure Database for PostgreSQL, to store and query embeddings generated from board documents. This allows the platform to perform hybrid searches that combine traditional SQL queries with vector similarity to retrieve the most relevant context before sending anything to a language model.
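As a rough illustration of this hybrid pattern (the table, column, and parameter names here are hypothetical, not Nasdaq's actual schema), a pgvector query can combine an ordinary relational predicate with a vector-distance ordering in a single statement:

```sql
-- Hypothetical sketch: retrieve the 5 most relevant chunks for one tenant.
-- <=> is pgvector's cosine-distance operator; :query_embedding is the
-- embedding of the user's question, computed outside the database.
SELECT chunk_id,
       document_id,
       chunk_text
FROM   document_chunks
WHERE  tenant_id = :tenant_id              -- relational filter enforces isolation
ORDER  BY embedding <=> :query_embedding   -- vector similarity ranks the results
LIMIT  5;
```

Because the tenant filter and the similarity ranking run in the same statement, the database never hands back context drawn from another tenant's documents.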
Instead of treating AI as a black box, the team built a pipeline where:

Documents are processed with Azure Document Intelligence to preserve structure and meaning
Content is chunked and embedded
Embeddings are stored in PostgreSQL with pgvector
Vector similarity searches retrieve precise context for each AI task

Because this runs inside PostgreSQL, the same database benefits from Azure’s built-in high availability, security controls, and operational tooling, delivering tangible results: a 25% reduction in overall board preparation time, and internal testing that shows 91–97% accuracy for AI-generated summaries and meeting minutes.

From summaries to an AI Board Assistant

With summarization working in production, Nasdaq expanded further. The team is now building an AI-powered Board Assistant that will help directors prepare for upcoming meetings by surfacing trends, risks, and insights from prior discussions. This introduces a new level of scale. Years of board data across thousands of customers translate into millions of embeddings. PostgreSQL continues to anchor this architecture, storing vectors for semantic retrieval while MySQL supports complementary non-vector workloads.

Across Nasdaq Boardvantage, users are advised to always review AI outputs, and no customer data is shared or used to train external models. “We designed AI for governance, not the other way around,” Shafqat said. More importantly, customers trust the system because security, isolation, and data control were engineered in from day one.

Looking ahead

Nasdaq’s work shows how Azure Database for PostgreSQL can support AI workloads that demand both intelligence and integrity. With PostgreSQL at the core, Nasdaq has built a governance platform that scales globally, respects regulatory boundaries, and introduces AI in a way that feels dependable and not experimental. What started as a modernization of Nasdaq Boardvantage is now influencing how Nasdaq approaches AI across the enterprise.
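For readers who want to picture the storage side of a pipeline like the one described above, a minimal pgvector schema might look like the following. The names and the 1536-dimension size are illustrative assumptions, not Nasdaq's actual design; the dimension must match whatever embedding model the pipeline uses.

```sql
-- Hypothetical sketch of an embeddings table backing a chunk-and-embed pipeline.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE document_chunks (
    chunk_id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    tenant_id   uuid         NOT NULL,  -- supports per-tenant isolation
    document_id uuid         NOT NULL,
    chunk_text  text         NOT NULL,
    embedding   vector(1536) NOT NULL   -- dimension depends on the embedding model
);

-- An HNSW index keeps cosine-distance searches fast as the embedding count grows.
CREATE INDEX ON document_chunks USING hnsw (embedding vector_cosine_ops);
```

The HNSW index trades some build time and memory for fast approximate nearest-neighbor lookups, which matters once embeddings number in the millions.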
To dive deeper into the architecture and hear directly from the engineers behind it, watch the Ignite session and check out these resources:

Watch the Ignite breakout session for a technical walkthrough of how Nasdaq Boardvantage is built, including PostgreSQL on Azure, pgvector, and Microsoft Foundry in production.
Read the case study to see how Nasdaq introduced AI into board governance and what changed for directors, administrators, and decision-making.
Watch the Ignite broadcast for a candid discussion on Azure Database for PostgreSQL, Azure HorizonDB, and what it takes to scale AI-driven governance.

January 2026 Recap: Azure Database for PostgreSQL
Hello Azure Community! We’re kicking off the year with important updates for Azure Database for PostgreSQL. From Premium SSD v2 features now available in public preview to REST API feature updates across developer tools, this blog highlights what’s new and what’s coming.

Terraform Adds Support for PostgreSQL 18 – Generally Available
Ansible Module Update – Generally Available
Achieving Zonal Resiliency with Azure CLI – Generally Available
SDKs Released: Go, Java, JavaScript, .NET and Python – Generally Available
What’s New in Premium SSD v2 – Public Preview
Latest PostgreSQL minor versions
January 2026 Maintenance Release Notes

Terraform Adds Support for PostgreSQL 18

Azure Database for PostgreSQL now supports PostgreSQL 18, which allows customers to create new servers on PostgreSQL 18 and upgrade existing ones using Terraform. This update makes it easier to adopt PostgreSQL 18 on Azure while managing both provisioning and upgrades through consistent Terraform workflows. Learn more about using the new Terraform resource.

Ansible Module Update

A new Ansible module is now available with support for the latest GA REST API features, enabling customers to automate provisioning and management of Azure Database for PostgreSQL resources. This includes support for Elastic Clusters provisioning, deployment of PostgreSQL 18 instances, and broader adoption of newly released Azure Database for PostgreSQL capabilities through Ansible. Learn more about using the Ansible module with the latest REST API features.

Achieve Zonal Resiliency with Azure CLI

We have released updates to the Azure CLI that allow users to enable zone‑redundant high availability (HA) by default using a new --zonal-resiliency parameter, which can be set to enabled or disabled. When --zonal-resiliency is enabled, the service provisions a standby server in a different availability zone than the primary, providing protection against zonal failures.
If zonal capacity is not available in the selected region, you can use the --allow-same-zone flag to provision the standby in the same zone as the primary.

Azure CLI commands:

az postgres flexible-server update --resource-group <resource_group> --name <server> --zonal-resiliency enabled --allow-same-zone
az postgres flexible-server update --resource-group <resource_group> --name <server> --zonal-resiliency disabled
az postgres flexible-server create --resource-group <resource_group> --name <server> --zonal-resiliency enabled --allow-same-zone

Learn more about how to configure high availability on Azure Database for PostgreSQL.

SDKs Released: Go, Java, JavaScript, .NET and Python

We have released updated SDKs for Go, Java, JavaScript, .NET, and Python, built on the latest GA REST API (2025‑08‑01). These SDKs enable developers to programmatically provision, configure, and manage Azure Database for PostgreSQL resources using stable, production‑ready APIs. They also add the ability to set a default database name for Elastic Clusters, simplifying cluster provisioning workflows, as well as support for PostgreSQL 18. To improve developer experience and reliability, operation IDs have been renamed for clearer navigation, and HTTP response codes have been corrected so automation scripts and retries behave as expected.

Learn more about the .NET SDK
Learn more about the Go SDK
Learn more about the Java SDK
Learn more about the JavaScript SDK
Learn more about the Python SDK

What’s New in Premium SSD v2: Public Preview

Azure Database for PostgreSQL Flexible Server now supports a broader set of resiliency and lifecycle management capabilities on Premium SSD v2, enabling production‑grade PostgreSQL deployments with improved durability, availability, and operational flexibility.
In this preview, customers can use High Availability (same‑zone and zone‑redundant), geo‑redundant backups, in‑region and geo read replicas, geo‑disaster recovery (Geo‑DR), and Major Version Upgrades on SSD v2‑backed servers, providing both zonal and regional resiliency options for mission‑critical PostgreSQL workloads. These capabilities help protect data across availability zones and regions, support compliance and disaster‑recovery requirements, and simplify database lifecycle operations.

Premium SSD v2 enhances these resiliency workflows with higher and independently scalable IOPS and throughput, predictable low latency, and decoupled scaling of performance and capacity. Customers can provision and adjust storage performance without over‑allocating disk size, enabling more efficient capacity planning while sustaining high‑throughput, low‑latency workloads. When combined with zone‑resilient HA and cross‑region data protection, SSD v2 provides a consistent storage foundation for PostgreSQL upgrades, failover, backup, and recovery scenarios. These capabilities are being expanded incrementally across regions as the service progresses toward general availability. For more details, see Premium SSD v2.

Latest PostgreSQL minor versions: 18.1, 17.7, 16.11, 15.15, 14.20, 13.23

Azure Database for PostgreSQL now supports the latest PostgreSQL minor versions: 18.1, 17.7, 16.11, 15.15, 14.20, and 13.23. These updates are applied automatically during planned maintenance windows, ensuring your databases stay up to date with critical security fixes and reliability improvements, with no manual action required. This release includes two security fixes and over 50 bug fixes across indexing, replication, partitioning, memory handling, and more.

PostgreSQL 13.23 is the final community release for version 13, which has now reached end-of-life (EOL). Customers still using PostgreSQL 13 on Azure should review their upgrade options and refer to Azure’s Extended Support policy for more details.
For details about the minor release, see the PostgreSQL community announcement.

January 2026 Maintenance Release Notes

We’re excited to announce the January 2026 version of Azure Database for PostgreSQL maintenance updates. This new version delivers major engine updates, new extensions, Elastic Clusters enhancements, performance improvements, and critical reliability fixes. This release expands migration and Fabric mirroring support, and adds powerful analytics, security, and observability capabilities across the service. Customers also benefit from improved Query Store performance, new WAL metrics, enhanced networking flexibility, and multiple Elastic Clusters enhancements. All new servers are automatically onboarded beginning January 20, 2026, with existing servers upgraded during their next scheduled maintenance. For a complete list of features, improvements, and resolved issues, see the full release notes here.

Azure Postgres Learning Bytes: Managing Replication Lag with Debezium

Change Data Capture (CDC) enables real‑time integrations by streaming row‑level changes from OLTP systems like PostgreSQL into event streams, data lakes, caches, and microservices. In a typical CDC pipeline, Debezium captures changes from PostgreSQL and streams them into Kafka with minimal latency. However, during large bulk updates that affect millions of rows, replication lag can spike significantly, delaying downstream consumers. This learning byte walks through how to detect and mitigate replication lag in Azure Database for PostgreSQL when using Debezium.

Detect Replication Lag: Start by identifying where lag is building up in the system.
Monitor replication slots and lag: Use the following query to inspect active replication slots and measure how far behind they are relative to the current WAL position:

SELECT slot_name,
       active_pid,
       confirmed_flush_lsn,
       restart_lsn,
       pg_current_wal_lsn(),
       pg_size_pretty((pg_current_wal_lsn() - confirmed_flush_lsn)) AS lsn_distance
FROM   pg_replication_slots;

Check WAL sender backend status: Verify whether WAL sender processes are stalled due to decoding or I/O waits:

SELECT pid, backend_type, application_name, wait_event
FROM   pg_stat_activity
WHERE  backend_type = 'walsender'
ORDER  BY backend_start;

Inspect spill activity: High spill activity indicates memory pressure during logical decoding and may contribute to lag. Large values for spill_bytes or spill_count suggest the need to increase logical_decoding_work_mem, reduce transaction sizes, or tune Debezium connector throughput.

SELECT slot_name,
       spill_txns,
       spill_count,
       pg_size_pretty(spill_bytes) AS spill_bytes,
       total_txns,
       pg_size_pretty(total_bytes) AS total_bytes,
       stats_reset
FROM   pg_stat_replication_slots;

Fix Replication Lag:

Database and infrastructure tuning: Reduce unnecessary overhead and ensure compute, memory, and storage resources are appropriately scaled to handle peak workloads.
Connector-level tuning: Adjust Debezium configuration to keep pace with PostgreSQL WAL generation and Kafka throughput. This includes tuning batch sizes, poll intervals, and throughput settings to balance latency and stability.

To learn more about diagnosing and resolving CDC performance issues, read the full blog: Performance Tuning for CDC: Managing Replication Lag in Azure Database for PostgreSQL with Debezium

Microsoft at PGConf India 2026
I’m genuinely excited about PGConf India 2026. Over the past few editions, the conference has continued to grow year over year—both in size and in impact—and it has firmly established itself as one of the key events on the global PostgreSQL calendar. That momentum was very evident again in the depth, breadth, and overall quality of the program for PGConf India 2026. Microsoft is proud to be a diamond sponsor for the conference again this year.

At Microsoft, we continue our contributions to the upstream PostgreSQL open-source project—as well as to serve our customers with our Postgres managed service offerings, both Azure Database for PostgreSQL and our newest Postgres offering, Azure HorizonDB. On the open-source front, Microsoft had 540 commits in PG18, including major features like Asynchronous IO. We’re also excited to grow our Postgres open-source contributors team, and so happy to welcome Noah Misch to our team. Noah is a Postgres committer who has deep expertise in PostgreSQL security and is focused on correctness and reliability in PostgreSQL’s core.

Microsoft at PGConf India 2026: Highlights from Our Speakers

PGConf India has several tracks, all of which have some great talks I am looking forward to. First, the plug. 😊 Microsoft has some amazing talks this year, with 8 different talks spread across all the tracks:

Postgres on Azure: Scaling with Azure HorizonDB, AI, and Developer Workflows, by Aditya Duvuri & Divya Bhargov
Resizing shared buffer pool in a running PostgreSQL server: important, yet impossible, by Ashutosh Bapat
Ten Postgres Hacker Journeys—and what they teach us, by Claire Giordano
How Postgres can leverage disk bandwidth for better TPS, by Nikhil Chawla
AWSM FSM! Free Space Maps Decoded, by Nikhil Sontakke
Journey of developing a Performance Optimization Feature in PostgreSQL, by Rahila Syed
Build Agentic AI with Semantic Kernel and Graph RAG on PostgreSQL, by Shriram Muthukrishnan & Palak Chaturvedi
All things Postgres @ Microsoft (2026 edition), by Sumedh Pathak

Claire is an amazing speaker and has done a lot of work over the last several years documenting and understanding PostgreSQL committers and hackers. Her talk will definitely have some key insights and nuggets of information. Rahila’s talk will go in depth on performance optimization features and how best to test and benchmark them, and all the tools and tricks she has used as part of the feature development. This should be a must-see talk for anyone doing performance work.

Diving Deep: Case Studies & Technical Tracks

One of the tracks I’m really excited about is the Case Study track. I see these as similar to ‘Experience’ papers in academia. An experience paper documents what actually happened when applying a technique or system in the real world, what worked, what didn’t, and why. One of the talks I’m looking forward to is ‘Operating Postgres Logical Replication at Massive Scale’ by Sai Srirampur from ClickHouse. Logical Replication is an extremely useful tool, and I’m curious to learn more about pitfalls and lessons learnt when running this at large scale. Another interesting one I’m curious to hear is ‘Understanding the importance of the commit log through a database corruption’ by Amit Kumar Singh from EDB.

The Database Engine Developers track allows us to go deep into the PostgreSQL code base and get a better understanding of how PostgreSQL is built. Even if you are not a database developer, this track is useful to understand how and why PostgreSQL does things, helping you be a better user of the database.
With the rise of larger machines and memory available in the Cloud, and with newer memory architectures/tiers and serverless product offerings, there is a lot of deep diving into PostgreSQL’s memory architecture. There are some great talks focused on this area, which should be must-see for anyone interested in this topic:

Resizing shared buffer pool in a running PostgreSQL server: important, yet impossible, by Ashutosh Bapat from Microsoft
From Disk to Data: Exploring PostgreSQL's Buffer Management, by Lalit Choudhary from PurnaBIT
Beyond shared_buffers: On-Demand Memory in Modern PostgreSQL, by Vaibhav Popat from Google

Finally, the Database Administration and Application Developer tracks have some really great content as well. They cover a wide range of topics, from PII data, HA/DR, and query tuning to connection pooling and understanding conflict detection and resolution.

PostgreSQL in India: A Community Effort Worth Celebrating

Conferences like these are a rich source of information, dramatically increasing my personal understanding of the product and the ecosystem. Separately, they are also a great way to meet other practitioners in the space and connect with people in the industry. For people in Bangalore, another great option is the PostgreSQL Bangalore Meetup, and I’m super happy that Microsoft was able to join the ranks of other companies to host the eighth iteration of this meetup.

Finally, I would be remiss in not mentioning the hard work done by the PGConf India organizing team, including Pavan Deolasee, Ashish Mehra, Nikhil Sontakke, Hari Kiran, and Rushabh Lathia, who are making all of this happen. Also, a big shout out to the PGConf India Program Committee (Amul Sul, Dilip Kumar, Marc Linster, Thomas Munro, Vigneshwaran C) for putting together an amazing set of talks. I look forward to meeting all of you in Bangalore! Be sure to drop by the Microsoft booth to say hello (and to snag a free pair of our famous socks).
I’d love to learn more about how you’re using Postgres.

AlphaLife Sciences powers regulatory-compliant AI workflows with PostgreSQL on Azure
By: Maxim Lukiyanov, PhD, Principal PM Manager, and Sharon Chen, CEO and Founder at AlphaLife Sciences

In life sciences, every document is deeply interconnected and highly regulated. Each clinical trial, regulatory submission, safety report, or protocol amendment is expected to stand up to rigorous audit. For AlphaLife Sciences, that challenge became an opportunity to rethink how AI could support expert human judgment.

At Microsoft Ignite, AlphaLife Sciences CEO and Founder Sharon Chen shared how her team is building an AI-powered content authoring platform on top of Azure Database for PostgreSQL, designed specifically for the demands of regulated life sciences workflows. She also explained why the team is excited about Azure HorizonDB as a new PostgreSQL service that is built to meet the needs of modern enterprise workloads. This post explores how AlphaLife Sciences uses PostgreSQL as more than a data store: it’s a semantic foundation for compliant, auditable AI agents.

Bringing AI into regulated workflows

Life sciences organizations are under constant pressure. R&D pipelines are growing and patent windows are shrinking. A single clinical study report can take six months or more to complete, involving multiple teams and hundreds of source documents. Building efficiency into these processes is critical, but only if it doesn’t compromise accuracy, traceability, or compliance. That’s where many AI solutions fall short. Generating text is one thing, but generating verifiable, version-controlled, regulation-aware content is another.
AlphaLife Sciences needed agents that could:

Work across massive volumes of structured and unstructured data (Word, PDF, Excel, PowerPoint)
Maintain full traceability from generated content back to source documents
Support audits, amendments, and regulatory review
Minimize hallucinations in a zero-tolerance environment
Integrate naturally into the tools writers already use

Bringing data, search, and AI together in one system

At the core of AlphaLife Sciences’ platform is Azure Database for PostgreSQL. The team chose it for flexibility, extensibility, and for how well it supports modern AI workloads. Instead of stitching together separate systems for SQL queries, vector search, text indexing, and metadata tracking, AlphaLife Sciences consolidated everything into PostgreSQL.

One of its flagship use cases is clinical trial protocol authoring, a process that typically involves:

Designing trial objectives and endpoints
Pulling references from previous studies
Writing and revising hundreds of pages of structured content
Managing multiple rounds of amendments and regulatory feedback

With AI agents backed by PostgreSQL, that workflow changes dramatically. When a writer generates a protocol section, the system can automatically retrieve relevant references from a centralized document pool, using semantic search rather than manual lookup. Writers select the sources they want, apply rules or prompts, and let AI draft the section - complete with citations tied back to the original documents. Reviewers can inspect the source, adjust the output, or insert it directly into the document. For protocol amendments, the platform allows teams to upload inputs (Word or Excel), analyze which sections are affected, and generate structured suggestions. Changes are clearly highlighted, compared against previous versions, and summarized in amendment tables.

AI agents that respect the rules

A recurring theme in Chen’s talk was restraint. “We don’t just need AI that can write,” she said.
“We need intelligent agents that understand data structures, follow regulatory laws, and manage version control.” This is where PostgreSQL-backed AI agents shine. By grounding AI behavior in structured schemas, controlled access, and auditable records, automation works hand-in-hand with human experts. AI accelerates first drafts, consistency checks, discrepancy detection, and cross-document analysis, but final accountability stays firmly with professionals. In some cases, the time to complete processes has been reduced by more than 50%.

Azure Database for PostgreSQL has become more than a database for AlphaLife Sciences. It’s a semantic knowledge base that supports:

Structured and unstructured data
Vector similarity search
Metadata-driven traceability
Compliance, security, and auditability
AI agents operating safely inside enterprise constraints

By grounding AI agents directly in the database, reasoning, retrieval, and generation all operate against the same governed source of truth. “AI agents are not here to replace human beings,” said Chen. “They extend structured, compliant, and auditable thinking.”

What’s next for AlphaLife Sciences with PostgreSQL on Azure

Looking ahead, Chen shared her excitement about Azure HorizonDB and the capabilities it brings to PostgreSQL on Azure. Features like in-database AI model management, semantic operators for classification and summarization, and faster vector search with DiskANN align closely with AlphaLife Sciences’ needs as their platform continues to scale. “We’re extremely happy to see the launch of Azure HorizonDB and the more powerful tools coming with it,” Chen said. “By putting everything together in PostgreSQL, we don’t have to rely on different systems for vector search, text indexing, or SQL queries. Everything happens in one streamlined system.
The code becomes cleaner, efficiency improves, and the AI agents perform much more elegantly.”

Learn more

AlphaLife Sciences’ journey was featured during the Microsoft Ignite session “The Blueprint for Intelligent AI Agents Backed by PostgreSQL.” Watch the session to learn more and see a demo of how Azure Database for PostgreSQL transforms the protocol and protocol amendment process. When AI is anchored in a strong PostgreSQL foundation, innovation and compliance don’t have to compete - they can reinforce each other.

January 2026 Recap: Azure Database for PostgreSQL
We just dropped the January 2026 recap for Azure Database for PostgreSQL, and this one’s all about developer velocity, resiliency, and production-ready upgrades.

• PostgreSQL 18 support via Terraform (create + upgrade)
• Premium SSD v2 (Preview) with HA, replicas, Geo-DR & MVU
• Latest PostgreSQL minor version releases
• Ansible module GA with latest REST API features
• Zone-redundant HA now configurable via Azure CLI
• SDKs GA (Go, Java, JS, .NET, Python) on stable APIs

Read the full January 2026 recap here and see what’s new (and what’s coming): January 2026 Recap: Azure Database for PostgreSQL

Postgres speakers - POSETTE 2026 CFP is closing soon!
Guidelines for submitting a proposal to the POSETTE CFP

POSETTE: An Event for Postgres is back for its 5th year, and the excitement is already building. Scheduled for June 16 – June 18, 2026, this free and virtual developer event brings together the global Postgres community for three days of learning, sharing, and deep technical storytelling. Whether you're a first-time speaker or a seasoned contributor, your story matters, and the Call for Proposals (CFP) closes on February 1, 2026. If you’re considering submitting a proposal (or encouraging someone else to), this post will walk you through everything you need to know to craft a strong, compelling submission before the deadline arrives.

1. Key Dates to Know

CFP Deadline: February 1, 2026 @ 11:59 PM PST
Talk Acceptance Notifications: February 11, 2026
Event Dates: June 16 – June 18, 2026 (includes four unique livestreams, live text chat, and speaker Q&A)
Schedule & sessions announced: Feb 25, 2026
Pre-record all talks: Weeks of April 20 & April 27

Tip: Add a calendar reminder; this deadline arrives quickly, and no late submissions are accepted.

2. Why Submit a Talk to POSETTE?

Submitting a talk to a conference can seem daunting at the start, but this guide can help you come up with potential ideas for a submission.

Share your story with the global Postgres community: Your experience, whether it’s a deep dive into query planning, a migration journey, or lessons learned from scaling, can help thousands of developers.
Grow your professional visibility: POSETTE is a high‑reach, virtual event that enables your content to live on well after the livestream.
First‑time speakers are welcomed and encouraged: POSETTE is not an exclusive club. If you have a story to tell, this is a supportive, welcoming place to tell it.

3. What Makes a Strong Proposal?

First‑time speaker? Don’t worry.
The guidelines below cover the key elements you’ll need to craft a strong, successful proposal.

Make your proposal focused, not broad: Many proposals try to cover too much. The strongest ones zoom in on a specific challenge, insight, or transformation. A narrow, well‑defined topic reads more clearly and creates a stronger takeaway for attendees.

Clearly identify the target audience: State who the talk is for:
Beginner Postgres developers
Cloud architects
DBAs focusing on performance
Engineers migrating from Oracle/MySQL
This helps the selection team understand fit and event balance.

Demonstrate real‑world value, not generic theory: Talks rooted in hands‑on experience tend to perform best. Strong abstracts answer: What problem did we face? What did we try? What worked (or didn’t)? What can you replicate in your environment? POSETTE audiences love actionable content.

Show how attendees will grow from your talk: Selection committees love when speakers articulate transformation. Clarify what people will gain: “Improve query execution time by…”, “Avoid common replication pitfalls…”, “Design HA setups more confidently…”. The reviewers want talks with practical outcomes.

Highlight what makes your talk unique: Is your approach unconventional? Did you migrate at massive scale? Did you build or extend an OSS tool? Did you learn something the hard way? Emphasize novelty; POSETTE gets many submissions, so originality matters.

Use a storytelling angle: Human brains love stories. Strong abstracts often follow a mini narrative: problem, tension, turning point, solution, lessons. This makes your proposal memorable and relatable.

Keep the abstract concise and structured: Avoid long, meandering paragraphs. A clear structure works well: topic summary (one sentence), problem + context (two–three sentences), solution or insights (two–three sentences), what attendees will learn (one–two sentences).

4.
Ideas for Topics That Work Well

Not every proposal needs to be a deep internal dive; real‑world stories resonate. Consider topics like:

Migrating to Postgres (cloud or on‑prem)
Performance tuning adventures and lessons
Postgres extensions and ecosystem tooling
Operational best practices, HA architecture, or incident learnings
Developer productivity with Postgres
Novel patterns or creative uses of Postgres internals
Azure Database for PostgreSQL customer stories
Community‑focused topics, such as how to start a PGDay event, how to begin contributing to open source, or how to engage with the Postgres community effectively

Look at POSETTE 2024 or 2025 talk titles to calibrate tone and depth.

5. What Happens If Your Talk Is Accepted?

Good news: the speaker experience is designed to be smooth and supportive.

Talks are 25 minutes long and pre‑recorded, with professional production support from the POSETTE organizing team, at an agreed-upon time during the weeks of April 20 & April 27
Speakers join live text chat during the session to interact with attendees
No travel required; the event is fully virtual

All you need is a good microphone, a quiet space, and a story worth telling.

6. How to Submit Your Proposal

Here are the official links you’ll want handy:

📄 CFP Page: https://posetteconf.com/2026/cfp/
❓ FAQ: https://posetteconf.com/2026/faq/
📝 Submit on Sessionize: https://sessionize.com/posette2026/

Submission Checklist

Before hitting "submit," make sure you have:

A strong, interesting title
A clear and concise abstract
Defined takeaways for attendees
An understanding of your target audience
Submission completed before Feb 1 @ 11:59 PM PST

POSETTE is built by and for the Postgres community, and your experience, whether small or monumental, has the potential to help others. With the CFP deadline approaching fast on February 1, now is the perfect time to refine your idea, shape your abstract, and submit your talk. This could be the year your story gets shared with thousands.
Take the leap; the community will be glad you did.