Build AI Agents for Postgres with Azure AI Agent Service
Agents can be implemented using various GenAI (generative AI) frameworks, including LangChain, LangGraph, LlamaIndex, and Semantic Kernel, and all of these frameworks support Neon. Implementing AI-driven function or tool calling used to require hundreds of lines of code—now, with the new Azure AI Agent Service, it takes just a few. Azure AI Agent Service is part of Azure AI Foundry, and you can create agents without writing any code, or programmatically using the Azure AI SDKs. An AI agent works like a smart assistant that can answer questions, perform tasks, and automate workflows. It uses AI models from Azure OpenAI, external tools, and databases to access and interact with real-world data.

AI agents can now provision infrastructure such as databases on their own. They are already creating new databases every few seconds, faster than humans, which suggests they will help run a large share of the internet's backend in the future. For example, an AI agent running a customer support chatbot can instantly set up a new Postgres database to log chat history and user feedback. Or, when a new Azure region is added, the agent can spin up a local database in seconds to keep everything fast and organized.

In this article, we will discover how to use Neon Serverless Postgres with the Azure AI Agent Service and demonstrate it with a sample project that manages your Neon projects, databases, and branches from simple, human-like commands. Try out the Neon API Management AI Agent console app source code to see how it works!

Why Use Azure AI Agent Service with Neon?

Azure AI Agent Service is a fully managed solution that helps developers create, deploy, and scale AI agents without worrying about infrastructure. Azure AI Agent Service handles the entire tool-calling process by following three steps:

- Executing the model based on the given instructions
- Automatically calling the required tools
- Delivering the results back to you

Key features:

- Smart microservices: AI agents can answer questions, execute functions, and automate workflows.
- Built-in tools: Access real-world data from Azure AI Search, Bing, and more.
- Seamless API integration: Use the Azure AI Foundry SDK or OpenAI SDKs to create and run AI agents.
- Fully managed: No need to handle compute or storage manually.

Neon can store vector embeddings of documents and perform fast similarity searches. Using the pgvector extension, you can efficiently search and retrieve relevant data for AI models, enabling more accurate and context-aware responses (a short sketch of this pattern appears at the end of this section).

One of Neon's most compelling features is database branching, which allows AI agents to:

- Create separate environments for testing without affecting production.
- Experiment with AI models and data without risk.
- Quickly restore previous versions of the database.

Finally, the Neon API provides full programmatic control over database creation, updates, and deletions. This is perfect for AI-driven workflows where databases need to be dynamically created for different AI agents or users.

Neon API Management AI Agent

Imagine you want to create a new database. Instead of going to the Neon Console or writing API calls, you can simply type something like, "Create a database called 'my-new-database'." Or, if you want to see what databases you have, just say, "Show me my databases." That's exactly what the Neon Azure AI Agent lets you do. We will break down how to create, configure, and integrate an AI agent using the Neon Management API and Azure AI Agent Service.
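Before walking through the agent itself, here is a minimal sketch of the pgvector pattern mentioned above: storing document embeddings in a Neon database and retrieving the closest matches. This is illustrative only and not part of the sample project; it assumes the psycopg driver, a NEON_CONNECTION_STRING environment variable, and 1536-dimensional embeddings produced elsewhere (for example, by an Azure OpenAI embeddings deployment).

```python
# Minimal pgvector sketch for Neon (assumptions: psycopg v3 installed,
# NEON_CONNECTION_STRING set, 1536-dimensional embeddings produced elsewhere).
import os
import psycopg

with psycopg.connect(os.environ["NEON_CONNECTION_STRING"]) as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
        cur.execute(
            """
            CREATE TABLE IF NOT EXISTS documents (
                id bigserial PRIMARY KEY,
                content text,
                embedding vector(1536)
            );
            """
        )
        # In a real application this vector comes from your embedding model;
        # a zero vector is used here purely for illustration.
        query_embedding = [0.0] * 1536
        vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
        cur.execute(
            """
            SELECT content
            FROM documents
            ORDER BY embedding <=> %s::vector  -- cosine distance operator from pgvector
            LIMIT 5;
            """,
            (vector_literal,),
        )
        top_matches = [row[0] for row in cur.fetchall()]
```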
Get Started with Azure AI Agent Service

To begin using Azure AI Agent Service, you first need to set up an AI Foundry hub and create an Agent project in your Azure subscription. If you're new to the service, check out the quickstart guide for a step-by-step introduction. Once your AI hub and project are created, you can deploy a compatible AI model, such as GPT-4o. After deploying the model, you'll be able to interact with the service and make API calls using the available SDKs. In our example project, we used the Azure AI Projects client library for Python to create an agent, assign the Neon Management API functions, and manage message threads.

Step 1: Create an AI Project Client

First, initialize an AIProjectClient using Azure credentials. This client is responsible for managing AI agents, threads, and interactions.

```python
# ai-gent.py
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)
```

Step 2: Define Neon API Functions

Next, define the Neon API functions that the AI agent will use. These functions allow the AI to perform database-related actions, such as creating a project, listing databases, and managing branches.

```python
# neon_functions.py
import os
from typing import Any, Callable, Set

from dotenv import load_dotenv
from neon_api import NeonAPI

load_dotenv()

neon = NeonAPI(api_key=os.environ["NEON_API_KEY"])


def create_project(project_name: str):
    try:
        response = neon.project_create(
            project={
                "name": project_name,
                "pg_version": 17,
                "region_id": "azure-eastus2",
            }
        )
        return format_action_response(response)  # helper that formats the API response
    except Exception as e:
        return f"Error creating project: {str(e)}"


user_functions: Set[Callable[..., Any]] = {
    create_project,
}
...
```

Step 3: Register Function Tools

Once the functions are defined, we register them as tools that the AI agent can call when processing user requests.

```python
# ai-gent.py
from azure.ai.projects.models import FunctionTool, ToolSet

from neon_functions import user_functions

functions = FunctionTool(user_functions)
toolset = ToolSet()
toolset.add(functions)
```

Step 4: Create the AI Agent

Now, we create an AI agent that understands database management tasks and can execute Neon API functions.

```python
# ai-gent.py
from datetime import datetime

agent = project_client.agents.create_agent(
    model=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
    name=f"neon-db-agent-{datetime.now().strftime('%Y%m%d%H%M')}",
    description="AI Agent for managing Neon databases and running SQL queries.",
    instructions=f"""
    You are an AI assistant that helps users create and manage Neon projects, databases, and branches.
    Use the provided functions to perform actions.
    The current date is {datetime.now().strftime("%Y-%m-%d")}.
    """,
    toolset=toolset,
)
print(f"✅ Created agent, ID: {agent.id}")
```

Step 5: Create a Thread for User Interactions

The AI agent needs a thread to interact with users. This thread stores all messages and interactions between the user and the AI.

```python
# ai-gent.py
thread = project_client.agents.create_thread()
print(f"✅ Created thread, ID: {thread.id}")
```

Step 6: Process User Commands

When a user sends a command (e.g., "Create a Neon project"), the AI agent processes it by creating a message, running the request, and returning a response. The process_command function shown below implements this flow.

Step 7: Accept User Commands

Finally, the program continuously listens for user input, allowing users to issue commands like "Create a database" or "List projects".
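A minimal sketch of what such an input loop might look like is shown here (hypothetical; the actual console app in the sample repository may structure this differently). It simply reads commands from the console and hands them to the process_command function defined in the next snippet.

```python
# ai-gent.py (hypothetical input loop; the sample project may differ)
if __name__ == "__main__":
    print("Enter a command (or 'exit' to quit):")
    while True:
        command = input("> ").strip()
        if command.lower() in {"exit", "quit"}:
            break
        if command:
            process_command(command)  # defined in the next snippet
```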
```python
# ai-gent.py
def process_command(command: str):
    message = project_client.agents.create_message(
        thread_id=thread.id,
        role="user",
        content=command,
    )
    print(f"✅ Created message, ID: {message.id}")

    run = project_client.agents.create_and_process_run(
        thread_id=thread.id, agent_id=agent.id
    )
    print(f"✅ Run finished with status: {run.status}")

    if run.status == "failed":
        print(f"❌ Run failed: {run.last_error}")
    else:
        messages = project_client.agents.list_messages(thread_id=thread.id)
        print(f"📜 Messages: {messages['data'][0]['content'][0]['text']['value']}")
```

Step 8: Run the AI Agent

Start the AI agent and interact with the Neon API:

```
python ai-gent.py
```

Example Commands

After running the script, you can enter commands in the console:

- Create a project: Create a project called My Neon Project
- List projects: List existing projects

Conclusion

Throughout the article, we learned how to create a dedicated CLI interface that makes it easy for AI agents to deploy and manage database projects. AI agents are now creating more databases on Neon than humans, provisioning thousands of new databases every day. By connecting AI agents to Neon databases, agents can deliver more accurate, context-aware responses based on your data. Ready to see it in action? Try out the Neon API & Azure AI Agent demo today!
Windows Recall keeps crashing

Windows Recall keeps crashing for me, with this error in the Event Viewer. I feel like this could be related to the crashing, but I'm not sure:

Faulting application name: AIXHost.exe, version: 2125.8200.0.0, time stamp: 0x67e02391
Faulting module name: AIXView.dll, version: 2125.8000.0.0, time stamp: 0x67ddbaa7
Exception code: 0xc0000409
Fault offset: 0x00000000000874a1
Faulting process id: 0x4AC0
Faulting application start time: 0x1DBBFAAEE149DAA
Faulting application path: C:\WINDOWS\SystemApps\MicrosoftWindows.Client.AIX_cw5n1h2txyewy\AIXHost.exe
Faulting module path: C:\WINDOWS\SystemApps\MicrosoftWindows.Client.CoreAI_cw5n1h2txyewy\AIXView.dll
Report Id: b7747b32-c2ed-4a0d-880d-248aebabb9ae
Faulting package full name: MicrosoftWindows.Client.AIX_1000.26100.3915.0_x64__cw5n1h2txyewy
Faulting package-relative application ID: AIXApp
Conversational AI applications with Azure Communication Services and the Azure OpenAI Realtime API

At Microsoft, we're dedicated to helping businesses create seamless, interactive experiences that blend advanced communication capabilities with the power of artificial intelligence. We're shipping an updated set of samples, documentation, and videos to help developers deliver breakthrough integrations that leverage Azure Communication Services and Azure OpenAI's Realtime API. This enables developers to build real-time, low-latency, AI-driven conversational applications that are both scalable and secure.

Bridging communication and AI

Modern customers expect engaging, instantaneous interactions—whether it's for customer support, live translations, or intelligent voice command applications. With this integration, you can harness the rich media capabilities of Azure Communication Services alongside Azure OpenAI's powerful Realtime language models. The result? A fluid, bidirectional communication system in which audio is captured, processed, and responded to in near real time.

Why Azure Communication Services?

Azure Communication Services is an AI-ready, cloud-based communications platform that provides APIs for voice, video, chat, and SMS—empowering developers to integrate these capabilities into any application. This ecosystem provides:

- Scalability & global reach: Built on Microsoft's robust cloud that powers scaled applications like Microsoft Teams and Dynamics, Azure Communication Services ensures reliable, low-latency communication across the globe within a unified tech stack.
- Enterprise-grade security: Benefit from Azure's comprehensive security and compliance certifications.
- Flexible integration: Easily combine voice and video streaming with AI insights to deliver enhanced, interactive customer experiences.

Learn more at the Azure Communication Services product page.

The power of real-time AI

By integrating Azure OpenAI's Realtime API with Azure Communication Services, you can convert live audio streams into intelligent interactions. Imagine a customer support system that not only listens to your customers but also understands context, analyzes intent, and generates precise, helpful responses—all as the conversation unfolds.

How it works

1. Capture and stream: Azure Communication Services captures high-quality audio from your application's communication channels. Whether you're building a call center, a virtual assistant, or an interactive voice response system, Azure Communication Services ensures that every sound is transmitted securely and with minimal latency.
2. Process with Azure OpenAI: Azure OpenAI's Realtime API processes the streamed audio input using advanced language models. The API analyzes the conversation in real time, extracting intent and context.
3. Deliver intelligent responses: The API returns processed data as actionable insights or natural language responses. Azure Communication Services then delivers these responses back to the user, closing the loop in an instant.

This seamless, bidirectional flow is inspired by our latest innovations—check out our detailed exploration of bidirectional real-time audio streaming at Ignite 2024.

A developer's journey: getting started

To help you hit the ground running, we've put together an overview of the integration steps, along with sample code and best practices.

Step 1: Set up your environment

Begin by provisioning an Azure Communication Services resource. We offer SDKs in multiple languages (JavaScript, .NET, and more) to quickly integrate voice, video, and chat into your application.
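As a quick illustration of working with a provisioned resource from Python (the sample repository may use a different language or structure), here is a minimal sketch that creates an Azure Communication Services identity and issues a VoIP access token. It assumes the azure-communication-identity package is installed and an ACS_CONNECTION_STRING environment variable holds your resource's connection string.

```python
# Minimal sketch: create an ACS user identity and a VoIP access token.
# Assumptions: `pip install azure-communication-identity` and ACS_CONNECTION_STRING set.
import os

from azure.communication.identity import (
    CommunicationIdentityClient,
    CommunicationTokenScope,
)

identity_client = CommunicationIdentityClient.from_connection_string(
    os.environ["ACS_CONNECTION_STRING"]
)

user = identity_client.create_user()  # new communication user identity
token_result = identity_client.get_token(
    user, scopes=[CommunicationTokenScope.VOIP]  # token usable for calling scenarios
)
print(f"User ID: {user.properties['id']}")
print(f"Token expires on: {token_result.expires_on}")
```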
Detailed documentation and sample projects are available at: https://learn.microsoft.com/en-us/azure/ai-services/openai/realtime-audio-quickstart

Step 2: Stream audio to Azure OpenAI's Realtime API

Once your Azure Communication Services instance is live, capture real-time audio streams from your application. The following code illustrates how you might forward audio data to Azure OpenAI for processing:

```python
# WebSocket to get bi-directional streaming audio
@app.websocket("/ws")
async def ws(websocket: WebSocket):
    await websocket.accept()
    handler = CommunicationHandler(websocket)
    await handler.start_conversation_async()
    while True:
        # Receive data from the client
        data = await websocket.receive_json()
        kind = data["kind"]
        if kind == "AudioData":
            audio_data = data["audioData"]["data"]
            # Send the audio data to the CallAutomationHandler
```

Step 3: Customize and scale

With the integration in place, you can further tailor the interaction:

- Real-time transcription and translation: Use Azure OpenAI's capabilities to transcribe and translate audio on the fly.
- Contextual customer support: Integrate with backend systems to provide personalized support based on conversation context.
- Voice-activated commands: Enable smart assistants that understand and execute user commands in real time.
- Scale with GPT-4o-Mini-Realtime-Preview: With the newly announced GPT-4o-Mini-Realtime-Preview API, you can scale your conversational AI application with reduced latency at a significantly reduced cost.

Unlock new possibilities

By combining Azure Communication Services with Azure OpenAI's real-time capabilities, you're not just adding another tool to your stack—you're creating an ecosystem where communication and intelligence work hand in hand. This integration opens new avenues for customer engagement, automation, and interactive media applications. We're thrilled to see how developers and businesses will leverage this powerful combination to transform their communication strategies and build the next generation of intelligent applications.

Get started today

Ready to build real-time conversational AI experiences? Explore the following resources to begin your journey:

- Get started with the sample code: https://github.com/anujb-msft/communication-services-realtime-voice-agent
- Chat with Azure Communication Services experts and join our live learning series on how to build with us in April. Read about this event and register here: aka.ms/register-acs-series.
- Azure Communication Services: Product Page and Documentation.
- OpenAI Realtime API: Refer to OpenAI's developer resources for guidance on API integration.
- Real-Time Audio Streaming Insights: Ignite 2024 Blog Post.

At Microsoft, we believe that the future of communication lies in the seamless integration of real-time connectivity and intelligent processing. We can't wait to see what you create!
Enhanced Threat Detection & Monitoring AI Solutions

Software development companies are moving fast with AI—so are the risks. Join us to explore advanced strategies for detecting and monitoring security threats in real time. From anomaly detection to proactive incident response, learn how to keep your AI apps secure and resilient.

What You'll Learn:

- 💡 Detect and respond to real-time security threats using intelligent threat intelligence
- 💡 Identify anomalies and suspicious behavior across your AI systems
- 💡 Build proactive incident response strategies that strengthen your security posture
- 💡 Protect customer data and trust in a dynamic risk landscape

👉 Register now

This session is part of the Software Development Company Security Series; for more details, head to aka.ms/asiaisvsecurity
Why being secure and transactable is a key to marketplace success

The Partner Spotlight series highlights the achievements of forward-thinking partners who are driving innovation in the commercial marketplace. These industry pioneers share how they leverage AI to develop advanced applications, create impactful solutions on the Microsoft Cloud, launch transactable offerings, and accelerate their success through marketplace sales. In this installment, I had the opportunity to chat with Ásgrímur Skarphéðinsson, Co-founder and Chief Technology Officer at Klynke, about the company's focus on building securely on the Microsoft cloud, their motivation for publishing transactable offers, and the success they have seen on the Azure Marketplace.

About Ásgrímur: As CTO of Klynke, Ásgrímur Skarphéðinsson drives product innovation and technology strategy. With a strong background in engineering and full-stack development, Ásgrímur leads the design and architecture of Klynke's time tracking platform, ensuring it delivers secure, scalable performance within the Microsoft cloud. He is passionate about building intuitive solutions that integrate seamlessly with Microsoft 365 and Teams—helping modern teams work smarter, stay compliant, and focus on what matters most.

__________________________________________________________________________________________________

[JR]: Tell us about your organization. What inspired the founding? What products/services do you offer?

[AS]: At Klynke, we believe technology should feel like a natural extension of the way people work, and that belief inspired our founding. We started with a simple idea: to help organizations work smarter by building secure, cloud-based tools on the Microsoft platform, and our work proved something important: Microsoft's cloud had the power and flexibility to bring innovative ideas to life. From there, we noticed a gap: Microsoft 365 didn't offer native time tracking. So, we built what we wished existed: a secure, intuitive time management solution that works directly inside the Microsoft 365 tools people already use. That's how Klynke Time Management was born. Today, our solution connects seamlessly with Teams, Planner, Outlook, SharePoint, and Excel. It helps teams track time right where the work happens—without switching tools or breaking focus. And with the Klynke Teams app included, it's all just a click away.

[JR]: Can you tell us a bit about the application(s) you have available on the marketplace? How does it work?

[AS]: Klynke Time Management, available on the Microsoft Marketplace, is built to simplify time tracking within Microsoft 365. Whether you're working in Teams or Outlook, Klynke lets you log time on tasks and projects without leaving your flow. Thanks to our deep integration with Microsoft Graph, Klynke syncs across tools like Teams, Outlook, Planner, SharePoint, and Excel—so everything stays connected and up to date. The app is designed to be as intuitive as it is powerful. You can log time, track progress, and view reports through filtered dashboards—all within your familiar Microsoft environment. And since it's all hosted on Microsoft's cloud, customers enjoy world-class security and compliance by default.

[JR]: What Microsoft cloud products did you use in your app development? What value is this enabling with your customers?

[AS]: We built Klynke Time Management using a mix of Microsoft cloud services that together create a seamless, secure experience for our users. At the core is Microsoft Azure, which gives us scalability and rock-solid reliability.
We use Azure Active Directory for secure authentication, Azure SQL to manage metadata, and Microsoft Graph to integrate deeply with Microsoft 365 apps like Teams, Planner, Outlook, SharePoint, and Excel. For new releases, testing, and deployment we use Azure DevOps. Klynke also runs directly within Microsoft Teams and Outlook, which means users can track time without ever leaving their daily workflow. This cross-cloud application integration helps teams stay productive, focused, and secure—all while working in tools they already know.

[JR]: How has Microsoft supported you along your journey?

[AS]: Microsoft has been our platform provider and a true partner in our growth. From development to deployment, we've been able to tap into Microsoft's ecosystem for guidance, technical resources, and support. Publishing on the Microsoft Marketplace has been a game changer. It gave us the reach of a global platform, and best of all, Microsoft handles transactions. That lets us focus on what matters most: building great software and supporting our customers. Being part of the Microsoft ecosystem means we're always building on a foundation of trust, innovation, and security.

[JR]: How does your org align its business to enabling positive impact for your customers and communities?

[AS]: At Klynke, our mission is to help people work smarter—and that includes supporting organizations making a real difference in their communities. We designed our time tracking solution to be flexible and intuitive for professionals, managers, and teams, with a special focus on small and medium-sized organizations. A key feature is support for teams working across time zones and regions, where coordination is critical and simplicity is essential.

One example is a UK-based healthcare nonprofit focused on improving the diagnosis and management of chronic diseases in primary care. With a globally distributed team, they needed a simple way to stay coordinated without adding operational overhead. Klynke helps them track time consistently and collaborate more effectively, allowing them to stay focused on improving patient outcomes.

Another example is a U.S.-based healthcare provider with a national network of therapists and contractors. Time tracking across multiple time zones was a challenge, especially with compliance requirements. By integrating Klynke into their Microsoft 365 tools, they've streamlined operations and reduced administrative complexity.

App Security: Building securely on the Microsoft cloud

[JR]: Tell us about your approach to app security. What inspired your focus on building securely on the Microsoft cloud?

[AS]: Security has always been front and center for us—it's not a checkbox, it's a mindset. From the beginning, we knew our app had to operate securely within our customers' Microsoft 365 environments. That's why we chose to build on Microsoft's cloud. It offers powerful tools like Azure Active Directory and Microsoft Graph that make secure development not just possible, but practical. Our goal is simple: to give customers a solution they can trust—one that's built on a foundation of compliance, data protection, and transparency. We're proud that security is baked into every layer of our product. It's not something we add on—it's something we architected for from the start.

[JR]: Can you describe the security features of your application(s) available on the marketplace? How do they work?
[AS]: Klynke Time Management is built using Microsoft's platform-as-a-service (PaaS) model, which means we benefit from constant security updates and enhancements from Microsoft. Key security features include:

- Azure Active Directory integration for secure authentication and access control
- Microsoft Graph API for secure data interactions within Microsoft 365
- Zero data duplication—Klynke operates inside your Microsoft 365 environment, so your data stays where it belongs

The result? A time management solution that's secure by design and always evolving.

[JR]: What Microsoft cloud security products did you use in your app development? What value is this enabling with your customers?

[AS]: We rely on Azure Active Directory to handle secure identity and access management. It ensures that only authorized users can access sensitive time-tracking data. Because we've built on Microsoft's cloud, we're also aligned with enterprise-grade security standards out of the box, something our customers deeply value, especially in regulated industries.

[JR]: What were some of the challenges you encountered while building securely on the Microsoft Cloud, and how did you overcome them?

[AS]: Our primary challenge was ensuring seamless integration with Microsoft 365 without compromising security or performance. We encountered technical obstacles such as API throttling and latency. We addressed these by utilizing Microsoft's comprehensive documentation, tools like the Graph Explorer, and Azure DevOps best practices. These resources enabled us to fine-tune our implementation and ensure stability and performance.

[JR]: What business outcomes/impact have you experienced as a result of building securely?

[AS]: Building securely on the Microsoft cloud has allowed us to deeply integrate Klynke into the Microsoft 365 ecosystem. Our users benefit from working within familiar tools, and our secure-by-design approach has reinforced customer trust. Knowing their data remains within their own Microsoft 365 environment and complies with industry standards has been a key differentiator for our solution.

[JR]: What security measures do you implement to protect sensitive customer data in your applications?

[AS]: We do not transfer or store any customer data outside their Microsoft 365 environment. All data remains under the customer's control and is managed according to their own security and compliance policies. This architecture ensures that the customer retains full ownership and oversight of their information.

[JR]: How do you ensure the security of third-party integrations and APIs used in your applications?

[AS]: We rely exclusively on the Microsoft Graph API, protected by Azure Active Directory and managed entirely by the customer's global IT administrator. To access the directory securely, we use the Microsoft Authentication Library (MSAL), which is developed by Microsoft to meet the highest standards of security.

[JR]: Can you share any best practices for securing applications during the development lifecycle on the Microsoft Cloud, or advice for partners who are just starting their app security journey?

[AS]: Based on our experience, we would suggest:

- Adopt Azure Active Directory as the foundation for authentication.
- Use MSAL for secure and scalable access to Azure AD resources (see the sketch after this list).
- Design for multi-tenant environments from the start.
- Architect your solution to avoid moving data out of the customer's environment—leaving data ownership and security within their domain.
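As a general illustration of the MSAL-based, app-only authentication pattern mentioned above (a minimal sketch, not Klynke's actual implementation), acquiring a token for Microsoft Graph from Python might look like the following. The tenant ID, client ID, and client secret are placeholders supplied via environment variables.

```python
# Illustrative MSAL sketch: acquire an app-only token for Microsoft Graph.
# Assumptions: `pip install msal requests` and placeholder credentials in the environment.
import os

import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id=os.environ["CLIENT_ID"],
    authority=f"https://login.microsoftonline.com/{os.environ['TENANT_ID']}",
    client_credential=os.environ["CLIENT_SECRET"],
)

# Try the token cache first, then fall back to requesting a new token.
result = app.acquire_token_silent(
    scopes=["https://graph.microsoft.com/.default"], account=None
)
if not result:
    result = app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"]
    )

if "access_token" in result:
    # Example Graph call: list users visible to the application (requires admin consent).
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/users",
        headers={"Authorization": f"Bearer {result['access_token']}"},
        timeout=30,
    )
    print(resp.status_code)
else:
    print("Token request failed:", result.get("error_description"))
```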
Publishing transactable offers

[JR]: What business outcomes have you experienced as a result of having a transactable offer on the marketplace?

[AS]: Having a transactable offer on the Microsoft commercial marketplace has meaningfully streamlined our sales process. Customers can now try, buy, and deploy Klynke Time Management with just a few clicks, all within the familiar Microsoft environment. This level of accessibility has helped accelerate adoption, broaden our reach, and improve conversion rates. Since making Klynke transactable, we've expanded from serving customers in a single country to supporting users across eight countries, spanning four continents. One customer, based in the U.K., manages a globally distributed team that extends into Asia—highlighting how Klynke's deep integration with Microsoft 365 makes it easy to coordinate across time zones and regions. The marketplace has played a key role in enabling this kind of international scale with minimal friction.

[JR]: How has the ability to transact directly on the marketplace influenced your sales strategy and customer interactions?

[AS]: Being able to transact directly in the marketplace has streamlined our sales approach, enabling customers to explore, purchase, and implement Klynke Time Management more efficiently. This ease of access has allowed us to reduce friction in the buying process, leading to quicker decisions and smoother customer interactions. It's also helped us focus more on delivering value, as customers are empowered to manage their own journey with less reliance on traditional sales cycles.

[JR]: Were there any challenges you encountered throughout the publication process? How did you overcome them?

[AS]: Our biggest challenge was implementing the SaaS fulfillment APIs required for publishing on the Microsoft commercial marketplace. This task demanded more effort from our development team than we initially anticipated. However, we were pleasantly surprised by the level of quality testing Microsoft applied to our application. The feedback and support from Microsoft helped us refine and strengthen our solution, ultimately resulting in a better product for the marketplace.
Neon Serverless Postgres is Now Generally Available!

We are excited to announce that Neon Serverless Postgres is now generally available as an Azure Native Integration. This milestone marks a significant step forward in providing developers with a powerful, serverless Postgres solution, seamlessly integrated within the Azure ecosystem.

"We're excited to bring Neon to all Azure developers, especially AI platforms. Neon Serverless Postgres scales automatically to match your workload and can branch instantly for an incredible developer experience. And for AI developers concerned about scale, cost efficiency, and data privacy, Neon enables them to easily adopt database-per-customer architectures, ensuring real-time provisioning and data isolation for their customers." - Nikita Shamgunov, CEO, Neon

Neon as a natively integrated offering on Azure

Azure Native Integrations enable developers to seamlessly provision, manage, and tightly integrate independent software vendor (ISV) software and services within the Azure ecosystem. Neon's native integration enables developers to use Neon Serverless Postgres with the same experience as any other Azure service, fully integrated within Azure's workflows, identity systems, and billing framework.

Developers can provision Neon Serverless Postgres directly from the Azure Portal, where it is listed alongside Microsoft's database offerings in the Databases section, complete with a partner badge. Once provisioned, Neon projects, branches, and connection strings can be managed using the Azure portal, CLI, and SDK. The deep integration also simplifies security and procurement, allowing authentication using Microsoft Entra ID (formerly Azure Active Directory) and consolidating Neon usage into your organization's Azure bill.

New Features Available with GA Release

With the General Availability release, several new features are now accessible. The intent is to make all innovative Neon features available to developers in the Azure ecosystem. With GA, the following features are now added:

- Neon project and branch creation from Azure: Developers can now create Neon projects and branches directly through the Azure portal. This enhancement simplifies team operations, facilitates tighter CI/CD integration, allows for more detailed environment management, and accelerates the transition from provisioning to production.
- Connect to database: The connect experience in the Azure resource allows developers to connect Neon Serverless Postgres to their application stack directly from the Azure portal. This reduces connecting a Neon database to a few clicks in the Azure portal.
- Azure CLI and SDKs (Java, Python, .NET, Go, JS): Developers can use the Azure CLI and SDKs to manage Neon organizations, projects, branches, and connection strings. This flexibility allows developers to choose the client of their preference and integrate Neon seamlessly into their current development environment.
- Semantic Kernel: Neon integrates directly with Semantic Kernel, enabling developers to orchestrate full Retrieval-Augmented Generation (RAG) pipelines. Generate embeddings with Azure OpenAI, store and query them in Neon using pgvector, and retrieve relevant results in milliseconds (a short sketch of this pattern follows below).

You can read more about this release in the Neon launch announcement as well. 🎉
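As a rough sketch of the embed-and-store half of that pipeline (illustrative only; the deployment name and table layout are assumptions, and a Semantic Kernel connector could be used in place of raw SQL), the flow from Python might look like this:

```python
# Illustrative sketch: generate an embedding with Azure OpenAI and store it in Neon.
# Assumptions: `pip install openai psycopg`, an embeddings deployment named
# "text-embedding-3-small", and the documents table from the earlier pgvector sketch.
import os

import psycopg
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

text = "Neon is a serverless Postgres platform with instant branching."
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small",  # your Azure OpenAI embeddings deployment name
    input=text,
).data[0].embedding

# pgvector accepts the bracketed text form of a vector.
vector_literal = "[" + ",".join(str(x) for x in embedding) + "]"

with psycopg.connect(os.environ["NEON_CONNECTION_STRING"]) as conn:
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector);",
            (text, vector_literal),
        )
```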
Neon for Enterprises and AI startups

Neon offers instant provisioning, branching, seamless integration, and cost-effective development workflows. This makes it the perfect solution to power organizations of all sizes and growth stages, whether you are an enterprise or a fast-scaling AI startup.

For enterprises, Neon offers efficient scalability, strong recovery guarantees, and developer workflows that align with modern software development practices. Neon's compute layer scales automatically based on workload, ensuring smoother performance during peak traffic, lower infrastructure costs, and reduced operational overhead.

For AI startups, Neon provides the speed, flexibility, and automation needed for modern AI applications. If you are part of the Microsoft for Startups program, you can leverage the native integration of serverless Postgres to experience features like scale-to-zero and branching for faster, more cost-effective AI solutions.

Try Out Neon Serverless Postgres on Azure

We invite you to try out Neon Serverless Postgres on Azure. Follow the Docs and start leveraging the power of serverless Postgres in your applications. The collaboration between Microsoft and Neon has resulted in a developer-focused roadmap, ensuring continuous improvements and new features.

Link to our Feedback communities

If you have new scenarios you would like to propose, please share your feedback with us on the Azure feedback community. You can reach out to Neon via the Neon Discord community or contact feedback@neon.tech. We are committed to listening to you and enhancing the Neon experience to solve your problems. We look forward to seeing how developers will leverage Neon Serverless Postgres to build intelligent and scalable Azure applications.

Next steps

For all the links and resources, please refer to the following:

- Subscribe to Neon Serverless Postgres on the Azure portal or Azure Marketplace
- Learn more about Neon Serverless Postgres at Microsoft docs
- Read the launch blog post by Neon
- Discover more about Neon
- Learn about Microsoft's investment in Neon