Azure AI Services
Your city agent speaks 100 languages. But does it understand you?
For many residents, interacting with city services feels like a frustrating experience. Forms are confusing. Wait times are long. And often, it feels like no one is truly listening. Now imagine a city services agent that speaks your language and understands your needs. Some cities already have one. But the real question is not whether they can do it. It is whether they should.

SmartCitiesWorld has published a new trend report titled "AI for Personalised Government Services – Reimagining Citizen Experiences." The report includes case studies from Derby, Amarillo, Jakarta, and Tampere. It offers frameworks for data governance, staff training, and building public trust with AI. Download the full report today to explore how leading cities are transforming service delivery with responsible AI.

And join us at Smart City Expo World Congress in Barcelona from November 4 to 6 to see these innovations in action. Visit Microsoft at Hall 3, Stand D51 to experience how AI is helping cities listen better, serve smarter, and build trust with every interaction.

Cities Are Using AI to Improve Public Services

Around the world, cities are using artificial intelligence to make public services faster, simpler, and more accessible. These are not experimental pilots. These are live systems serving residents every day.

In Derby, United Kingdom, AI assistants handle more than half a million calls each year. This allows city staff to focus on complex cases that require human judgment and empathy. Routine questions are answered instantly. Human expertise is reserved for situations that need it most.

In Amarillo, Texas, the city built an AI assistant named Emma. Emma speaks one hundred languages. In just one year, Emma helped the city save 1.8 million dollars in operational costs. More importantly, Emma helped residents who previously struggled to access services due to language barriers.

In Jakarta, the JAKI platform connects services across departments. It reminds residents when permits need renewal. It sends alerts about tax payments. It links systems that used to operate in isolation.

These cities are not chasing technology for its own sake. They are using it to serve more people, more effectively.

The Technical Challenge: Old Systems Meet New Tools

Most city systems were built decades ago. They do not communicate with each other. One department may not know what another already knows about the same resident. Artificial intelligence can bridge these gaps. But only if cities implement it carefully.

Tampere, Finland, shows how to do it right. The city redesigns services with residents, not just for them. It uses digital twin technology to test changes before rollout. It gathers feedback from actual users, including children and older adults. This approach takes more time. But it delivers better results. Services work for the people who actually use them.

Why Thoughtful Planning Makes AI More Effective

As cities explore the potential of AI to improve public services, it is important to recognize that successful implementation depends on careful preparation. While the technology offers powerful capabilities, its impact depends on how well it is integrated into existing systems and aligned with community needs. There are a few common challenges that cities may encounter during deployment:

- Data quality: AI systems rely on accurate and representative data. If the data is incomplete or biased, the system may produce inconsistent or unfair outcomes. Addressing data gaps early helps ensure more reliable performance.
- System integration: Many city platforms were built decades ago and operate in silos. Introducing AI without addressing these legacy issues can limit its effectiveness. A thoughtful integration strategy helps AI enhance, not just accelerate, existing processes.

- Public trust: Residents need to feel confident that AI systems are fair, transparent, and accountable. When mistakes happen, clear communication and responsive support are essential to maintaining trust.

These challenges are not roadblocks; they are opportunities to build stronger, more inclusive systems. Cities that take time to plan, test, and engage with residents early are better positioned to deliver meaningful results.

Building Trust Through Transparency

AI is only as effective as the trust behind it. If an algorithm denies benefits incorrectly, who takes responsibility? If a translation system misunderstands a request, how does the city fix it?

Camden, United Kingdom, created a Data Charter in plain language. Residents helped write it. The charter explains how the city collects data, who can access it, and how it checks for bias in algorithms. This is not just good governance. It is necessary design. Residents will not use systems they do not trust.

The charter includes regular audits. Independent reviewers check AI decisions for patterns of discrimination. If the system treats one neighborhood differently from another, the city investigates immediately. Other cities need similar frameworks. Trust requires transparency. Transparency requires clear communication.

Making AI Work for All Residents

AI allows cities to personalize services at scale. But personalization must include everyone.

Language accessibility is essential. Residents should interact with city services in their preferred language. Emma in Amarillo proves this works. Translation should be automatic, not an extra step.

Interface design matters too. Simple layouts help residents with limited digital skills. Clear labels support residents with cognitive disabilities. Audio options assist those with vision impairments.

Human support remains critical. AI cannot handle every situation. Complex cases need human judgment. Emotional situations need human empathy. Cities must provide easy ways to reach actual staff.

South Cambridgeshire routes 27 percent of inquiries through AI. This frees human staff to focus on the remaining 73 percent that need personal attention. The result is faster resolution for routine questions and better support for difficult cases.

What Success Looks Like

Cities that succeed with AI share common traits. They focus on outcomes, not features. They measure impact on residents, not just cost savings.

Successful cities start small. They test one service before expanding. They collect feedback continuously. They adjust based on what residents actually need.

They train staff properly. AI changes how employees work. Staff need time to adapt. They need clear guidance on when to use AI and when to intervene personally.

They also maintain alternatives. Not every resident wants to use AI. Phone lines remain open. In-person services continue. Digital tools supplement existing services rather than replace them.

The Path Forward for Your City

AI will not fix broken systems automatically. It requires careful planning, thoughtful implementation, and ongoing evaluation.

Start by identifying one service that frustrates residents. Map the current process. Find the delays and confusion points. Determine if AI can solve those specific problems.
Involve residents early. Ask what they need. Test prototypes with actual users. Listen to criticism. Revise based on feedback.

Build transparency into the system from the beginning. Explain how AI makes decisions. Create clear paths for appealing those decisions. Assign human accountability for AI outcomes.

Train your staff before launch. Help them understand how AI changes their role. Give them tools to override AI when necessary. Recognize that technology serves people, not the other way around.

When city services start to feel personal, residents spend less time navigating bureaucracy. Staff spend more time solving real problems. And trust in public institutions grows stronger.

Learn From Cities That Succeeded

SmartCitiesWorld has published a new trend report titled "AI for Personalised Government Services – Reimagining Citizen Experiences." The report includes case studies from Derby, Amarillo, Jakarta, and Tampere. It offers frameworks for data governance, staff training, and building public trust with AI. Download the full report today to explore how leading cities are transforming service delivery with responsible AI.

Transforming Emergency Response: How AI is reshaping public safety
Brand-new Smart City Trend Report: discover how AI is transforming emergency response and public safety in cities worldwide.

In an era of escalating climate events, urban complexity, and rising public expectations, emergency response systems are under pressure like never before. From wildfires and floods to public health crises and infrastructure failures, cities must respond faster, smarter, and more collaboratively. The newly released Transform Emergency Response Trend Report offers a compelling roadmap for how artificial intelligence (AI) is helping cities meet these challenges head-on, by modernizing operations, improving situational awareness, and building resilient, resident-centered safety ecosystems.

As Dave Williams, Director of Global Public Safety and Justice at Microsoft, puts it: "AI models are increasingly embedded in public safety workflows to enhance both anticipation and real-time awareness. Predictive analytics are used to forecast crime hotspots, traffic incidents, and natural disasters by analyzing historical and real-time data, enabling proactive resource deployment and faster response times."

This transformation is not theoretical; it's happening now. And at the upcoming Smart City Expo World Congress in Barcelona, November 4–6, Microsoft and leading technology innovators will showcase how AI is driving real-world impact across emergency services, law enforcement, and city operations.

Government AI Transformation in Action

Oklahoma City Fire Department: Digitizing Operations for Faster Response

Serving over 700,000 residents, the Oklahoma City Fire Department (OKCFD) faced mounting challenges due to outdated, paper-based workflows. From rig inspections to fuel logging, manual processes slowed response times and increased risk. Partnering with AgreeYa Solutions and leveraging Microsoft Power Platform, OKCFD built 15+ custom mobile-first apps to digitize core operations. The results were transformative:

- Helped drive a 40% reduction in manual tasks
- Real-time dashboards for leadership visibility
- Improved data accuracy and faster emergency response

This modernization not only boosted internal efficiency but also strengthened community trust by ensuring timely, reliable service delivery.

North Wales Fire and Rescue Service: Empowering Remote Teams with Secure Access

With 44 stations and a mix of full-time and on-call firefighters, North Wales Fire and Rescue Service (NWFRS) needed a better way to support staff across a wide geographic area. Their legacy on-premises systems limited remote access to critical data. By deploying a SharePoint-based intranet integrated with Microsoft 365 tools, NWFRS enabled secure, mobile access to documents, forms, and departmental updates.

- Improved communication and workflow efficiency
- Reduced travel time for on-call staff
- Enhanced compliance and data security

This shift empowered firefighters to stay informed and prepared, no matter where they were.

San Francisco Police Department: Real-Time Vehicle Recovery Reporting

Managing thousands of stolen vehicle cases annually, the San Francisco Police Department (SFPD) struggled with a slow, manual reporting process that delayed updates and eroded public trust. Using Microsoft Power Apps, SFPD built RESTVOS (Returning Stolen Vehicle to Owner System), allowing officers to update vehicle status in real time from the field.
- Helped reduce reporting time from 2 hours to 2 minutes
- Supported 500 officer hours saved per month
- Improved resident experience and reduced mistaken stops

This digital leap not only streamlined operations but also reinforced transparency and accountability.

Join Us in Barcelona: See Emergency Response in Action

At Smart City Expo World Congress 2025, Microsoft and our AI transformation partners will showcase emergency response AI transformation with immersive demos, theater sessions, and roundtable discussions. Transform Emergency Response will be a central focus, showcasing how AI, cloud platforms, and agentic solutions are enabling cities to:

- Modernize emergency operation centers
- Enable real-time situational awareness
- Foster community engagement and trust

Featured AI demos from innovative partners:

- 3AM Innovations
- Disaster Tech PRATUS
- Sentient Hubs
- Tomorrow.io
- Unified Emergency Response with Microsoft Fabric and Copilot

These solutions are not just about technology; they're about outcomes. They help cities cut response times, improve coordination, and build public trust.

Why This Matters Now

As Dave Williams emphasizes, the future of emergency response is not just faster, it's smarter and more resilient: "Modern emergency response increasingly relies on unified data platforms that integrate inputs from IoT sensors, satellite imagery, social media, and agency databases. AI-powered analytics systems synthesize this data to support real-time decision-making and resource allocation across agencies." Cities must also invest in governance frameworks, ethical AI policies, and inclusive design to ensure these technologies serve all residents fairly.

Let's Connect

Whether you're a city CIO, emergency services leader, or public safety innovator, we invite you to join us at Smart City Expo World Congress in Barcelona, November 4–6. Explore how Microsoft and its partners are helping cities transform emergency response and build safer, more resilient communities. Visit our booth at Hall 3, Stand #3D51, attend our theater sessions, and see demos from our AI transformation partners on Transform Emergency Response. Together, we can reimagine public safety for the challenges of today and the possibilities of tomorrow.

Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration
Introduction

Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you'll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app's responses leap off the screen in a human-like voice. By the end of this guide, you'll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let's dive in.

Why Azure AI Speech?

We use the Azure AI Speech service in Open WebUI to enable voice interactions directly within web applications. This allows users to:

- Speak commands or input instead of typing, making the interface more accessible and user-friendly.
- Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
- Enjoy a more natural, hands-free experience, especially on devices like smartphones or tablets.

In short, integrating the Azure AI Speech service into Open WebUI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features. If you haven't hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure, and proceed to the next step once Open WebUI is deployed. Learn more about Open WebUI here.

Deploy the Azure AI Speech service in Azure

Navigate to the Azure Portal and search for Azure AI Speech in the portal search bar. Create a new Speech service by filling in the fields on the resource creation page, then click "Create" to finalize the setup. After the resource has been deployed, click the "View resource" button and you should be redirected to the Azure AI Speech service page. The page should display the API keys and endpoints for the Azure AI Speech service, which you can use in Open WebUI.

Setting things up in Open WebUI

Speech to Text settings (STT)

Head to the Open WebUI Admin page > Settings > Audio. Paste the API key obtained from the Azure AI Speech service page into the API key field. Unless you use a different Azure region or want to change the default STT configuration, leave all other settings blank.

Text to Speech settings (TTS)

Next, configure the TTS settings in Open WebUI by toggling the TTS Engine to the Azure AI Speech option. Again, paste the API key obtained from the Azure AI Speech service page and leave all other settings blank. You can change the TTS voice from the dropdown selection in the TTS settings. Click Save to apply the changes.

Expected Result

Now, let's test that everything works. Open a new chat (or a temporary chat) in Open WebUI and click the Call / Record button. The STT engine (Azure AI Speech) should recognize your voice and provide a response based on the voice input. To test the TTS feature, click Read Aloud (the speaker icon) under any response from Open WebUI; the audio should now come from the Azure AI Speech TTS engine.

Conclusion

And that's a wrap! You've just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure's neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine.
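If you ever need to sanity-check the Speech resource outside Open WebUI, a quick script can exercise the same key and region. Below is a minimal sketch, assuming the azure-cognitiveservices-speech Python package; the key, region, and voice name are placeholders to replace with your own values.

```python
# Minimal sketch: verify an Azure AI Speech resource for TTS and STT.
# Assumes `pip install azure-cognitiveservices-speech`; key and region
# below are placeholders from your resource's Keys and Endpoint page.
import azure.cognitiveservices.speech as speechsdk

SPEECH_KEY = "<your-azure-ai-speech-key>"
SPEECH_REGION = "eastus"  # the region you deployed the resource into

speech_config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)

# Text to speech: synthesize a short phrase through the default speaker.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # any neural voice
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
tts_result = synthesizer.speak_text_async("Hello from Azure AI Speech.").get()
print("TTS:", tts_result.reason)

# Speech to text: capture one utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
stt_result = recognizer.recognize_once_async().get()
print("STT:", stt_result.text)
```

If both calls succeed, the same key and region will work in the Open WebUI audio settings.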
From here, it's all about experimentation. Try swapping in different neural voices or dialing in new languages. Adjust how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names. Before you know it, your interface will feel less like a web page and more like a conversation partner.

Pantry Log – Microsoft Cognitive, IoT and Mobile App for Managing your Fridge Food Stock
First published on MSDN on Mar 06, 2018. We are Ami Zou (CS & Math), Silvia Sapora (CS), and Elena Liu (Engineering), three undergraduate students from UCL, Imperial College London, and Cambridge University respectively.

Model Mondays S2E9: Models for AI Agents
1. Weekly Highlights

This episode kicked off with the top news and updates in the Azure AI ecosystem:

- GPT-5 and GPT-OSS models now in Azure AI Foundry: Azure AI Foundry now supports OpenAI's GPT-5 lineup (including GPT-5, GPT-5 Mini, and GPT-5 Nano) and the new open-weight GPT-OSS models (120B, 20B). These models offer powerful reasoning, real-time agent tasks, and ultra-low-latency Q&A, all with massive context windows and flexible deployment via the Model Router.
- Flux 1 Context Pro and Flux 1.1 Pro from Black Forest Labs: These new vision models enable in-context image generation, editing, and style transfer, now available in the Image Playground in Azure AI Foundry.
- Browser Automation Tool (preview): Agents can now perform real web tasks (search, navigation, form filling, and more) via natural language, accessible through API and SDK.
- GitHub Copilot agent mode + Playwright MCP Server: Debug UIs with AI. Copilot's agent mode now pairs with Playwright MCP Server to analyze, identify, and fix UI bugs automatically.
- Discord community: Join the conversation, share your feedback, and connect with the product team and other developers.

2. Spotlight On: Azure AI Agent Service & Agent Catalog

This week's spotlight was on building and orchestrating multi-agent workflows using the Azure AI Agent Service and the new Agent Catalog.

What is the Azure AI Agent Service? A managed platform for building, deploying, and scaling agentic AI solutions. It supports modular, multi-agent workflows, secure authentication, and seamless integration with Azure Logic Apps, OpenAPI tools, and more.

Agent Catalog: A collection of open-source, ready-to-use agent templates and workflow samples. These include orchestrator agents, connected agents, and specialized agents for tasks like customer support, research, and more.

Demo highlights:

- Connected agents: Orchestrate workflows by delegating tasks to specialized sub-agents (e.g., mortgage application, market insights).
- Multi-agent workflows: Design complex, hierarchical agent graphs with triggers, events, and handoffs (e.g., customer support with escalation to human agents).
- Workflow designer: Visualize and edit agent flows, transitions, and variables in a modular, no-code interface.
- Integration with Azure Logic Apps: Trigger workflows from 1400+ external services and apps.

3. Customer Story: Atomic Work

Atomic Work showcased how agentic AI can revolutionize enterprise service management, making employees more productive and ops teams more efficient.

Problem: Traditional IT service management is slow, manual, and frustrating for both employees and ops teams.

Solution: Atomic Work's "Atom" is a universal, multimodal agent that works across channels (Teams, browser, etc.), answers L1/L2 questions, automates requests, and proactively assists users.

Technical highlights:

- Multimodal and cross-channel: Atom can guide users through web interfaces, answer questions, and automate tasks without switching tools.
- Data ingestion and context: Atom regularly ingests up-to-date documentation and context, ensuring accurate, current answers.
- Security and integration: Built on Azure for enterprise-grade security and seamless integration with existing systems.
- Demo: Resetting passwords, troubleshooting VPN, requesting GitHub repo access, all handled by Atom, with proactive suggestions and context-aware actions. Atom can even walk users through complex UI tasks (like generating GitHub tokens) by "seeing" the user's screen and providing step-by-step guidance.
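To give a feel for the Agent Service from code, here is a rough sketch of creating one agent and running a single message through it. Treat it as illustrative only: it assumes the azure-ai-projects Python package (in preview at the time of writing), whose client and method names have shifted across releases, and the endpoint, model deployment, and instructions are placeholders.

```python
# Rough sketch only: create and run a single Azure AI Foundry agent.
# Assumes `pip install azure-ai-projects azure-identity`; method names
# vary across preview releases, so check the current SDK reference.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project = AIProjectClient(
    endpoint="https://<your-project>.services.ai.azure.com/api/projects/<name>",  # placeholder
    credential=DefaultAzureCredential(),
)

# Define a simple agent backed by a model deployed in your project.
agent = project.agents.create_agent(
    model="gpt-4o",  # placeholder deployment name
    name="it-helpdesk-agent",
    instructions="Answer routine IT questions briefly; escalate anything sensitive.",
)

# Run one user message through a fresh thread, then print the conversation.
thread = project.agents.threads.create()
project.agents.messages.create(
    thread_id=thread.id, role="user", content="How do I reset my VPN password?"
)
run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
print("run status:", run.status)

for message in project.agents.messages.list(thread_id=thread.id):
    print(message.role, message.content)
```

The connected-agent and workflow-designer features shown in the episode build on these same primitives, orchestrating many such agents rather than one.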
4. Key Takeaways

Here are the key learnings from this episode:

- Agentic AI is production-ready: Azure AI Agent Service and the Agent Catalog make it easy to build, deploy, and scale multi-agent workflows for real-world business needs.
- Modular, no-code workflow design: The workflow designer lets you visually create and edit agent graphs, triggers, and handoffs, no code required.
- Open-source and extensible: The Agent Catalog provides open-source templates and welcomes community contributions.
- Real-world impact: Solutions like Atomic Work show how agentic AI can transform IT, HR, and customer support, making organizations more efficient and employees more empowered.
- Community and support: Join the Discord and Forum to connect, ask questions, and share your own agentic AI projects.

Sharda's Tips: How I Wrote This Blog

Writing this blog is like sharing my own learning journey with friends. I start by thinking about why the topic matters and how it can help someone new to Azure or agentic AI. I use simple language, real examples from the episode, and organize my thoughts with GitHub Copilot to make sure I cover all the important points. Here's the prompt I gave Copilot to help me draft this blog:

"Generate a technical blog post for Model Mondays S2E9 based on the transcript and episode details. Focus on Azure AI Agent Service, Agent Catalog, and real-world demos. Explain the concepts for students, add a section on practical applications, and share tips for writing technical blogs. Make it clear, engaging, and useful for developers and students."

After watching the video, I felt inspired to try out these tools myself. The way the speakers explained and demonstrated everything made me believe that anyone can get started, no matter their background. My goal with this blog is to help you feel the same way: curious, confident, and ready to explore what AI and Azure can do for you. If you have questions or want to share your own experience, I'd love to hear from you.

Coming Up Next Week

Next week: Document Processing with AI! Join us as we explore how to automate document workflows using Azure AI Foundry, with live demos and expert guests.

1️⃣ | Register For The Livestream – Aug 18, 2025
2️⃣ | Register For The AMA – Aug 22, 2025
3️⃣ | Ask Questions & View Recaps – Discussion Forum

About Model Mondays

Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ with three elements:

- 5-Minute Highlights – quick news and updates about Azure AI models and tools on Monday
- 15-Minute Spotlight – deep dive into a key model, protocol, or feature on Monday
- 30-Minute AMA on Friday – live Q&A with subject matter experts from Monday's livestream

Want to get started?

- Register For Livestreams – every Monday at 1:30pm ET
- Watch Past Replays to revisit other spotlight topics
- Register For AMA – to join the next AMA on the schedule
- Recap Past AMAs – check the AMA schedule for episode-specific links

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

- Join the Discord – for real-time chats, events, and learning
- Explore the Forum – for AMA recaps, Q&A, and discussion!

About Me

I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn.
In this blog series, I summarize my takeaways from each week's Model Mondays livestream.

Getting Started with the AI Toolkit: A Beginner's Guide with Demos and Resources
If you're curious about building AI solutions but don't know where to start, Microsoft's AI Toolkit is a great place to begin. Whether you're a student, developer, or just someone exploring AI for the first time, this toolkit helps you build real-world solutions using Microsoft's powerful AI services. In this blog, I'll walk you through what the AI Toolkit is, how you can get started, and where you can find helpful demos and ready-to-use code samples.

What is the AI Toolkit?

The AI Toolkit is a collection of tools, templates, and sample apps that make it easier to build AI-powered applications and copilots using Microsoft Azure. With the AI Toolkit, you can:

- Build intelligent apps without needing deep AI expertise.
- Use templates and guides that show you how everything works.
- Quickly prototype and deploy apps with natural language, speech, search, and more.

Watch the AI Toolkit in Action

Microsoft has created a video playlist that covers the AI Toolkit and shows you how to build apps step by step. You can watch the full playlist here: AI Toolkit Playlist – https://aka.ms/AIToolkit/videos. It is especially useful for developers who want to bring AI into their projects, but also for beginners who want to learn by doing. These videos help you understand the flow of building AI agents, using Azure OpenAI, and other cognitive services in a hands-on way.

Explore Sample Projects on GitHub

Microsoft also provides a public GitHub repository where you can find real code examples built using the AI Toolkit: AI Toolkit Samples – https://github.com/Azure-Samples/AI_Toolkit_Samples. This repository includes:

- Sample apps using Azure AI services like OpenAI, Cognitive Search, and Speech.
- Instructions to deploy apps using Azure.
- Code that you can clone, test, and build on top of.

You don't have to start from scratch: just open the code, understand the structure, and make small edits to experiment.

How to Get Started

Here's a simple path if you're just starting:

1. Watch 2 or 3 videos from the AI Toolkit playlist.
2. Go to the GitHub repository and try running one of the examples.
3. Make small changes to the code (like updating the prompt or output).
4. Try deploying the solution on Azure by following the guide in the repo.
5. Keep building and learning.

Why This Toolkit is Worth Exploring

As someone who is also learning and experimenting, I found this toolkit to be:

- Easy to understand, even for beginners.
- Focused on real-world applications, not just theory.
- Helpful for building responsible AI solutions with good documentation.

It gives a complete picture, from writing code to deploying apps.

Final Thoughts

The AI Toolkit helps you start your journey in AI without feeling overwhelmed. It provides real code, real use cases, and practical demos. With the support of Microsoft Learn and Azure samples, you can go from learning to building in no time. If you're serious about building with AI, this is a resource worth exploring. Continue the discussion in the Azure AI Foundry Discord community at https://aka.ms/AI/discord and join the Azure AI Foundry Discord server!

References

- AI Toolkit Playlist (YouTube): https://aka.ms/AIToolkit/videos
- AI Toolkit GitHub Repository: https://github.com/Azure-Samples/AI_Toolkit_Samples
- Microsoft Learn: AI Toolkit Documentation: https://learn.microsoft.com/en-us/azure/ai-services/toolkit/
- Azure AI Services: https://azure.microsoft.com/en-us/products/ai-services/

Configure Embedding Models on Azure AI Foundry with Open Web UI
Introduction

Let's take a closer look at an exciting development in the AI space. Embedding models are the key to transforming complex data into usable insights, driving innovations like smarter chatbots and tailored recommendations. With Azure AI Foundry, Microsoft's powerful platform, you've got the tools to build and scale these models effortlessly. Add in Open Web UI, an intuitive interface for engaging with AI systems, and you've got a winning combo that's hard to beat. In this article, we'll explore how embedding models on Azure AI Foundry, paired with Open Web UI, are paving the way for accessible and impactful AI solutions for developers and businesses. Let's dive in!

To configure an embedding model from Azure AI Foundry in Open Web UI, first complete the requirements below.

Requirements:

- Set up an Azure AI Foundry hub/project.
- Deploy Open Web UI – refer to my previous article on how you can deploy Open Web UI on an Azure VM.
- Optional: Deploy LiteLLM with Azure AI Foundry models to work with Open Web UI – refer to my previous article on how you can do this as well.

Deploying Embedding Models on Azure AI Foundry

Navigate to the Azure AI Foundry site and deploy an embedding model from the "Model + Endpoint" section. For the purpose of this demonstration, we will deploy the "text-embedding-3-large" model by OpenAI. You should receive a URL endpoint and an API key for the embedding model you just deployed. Take note of those credentials, because we will be using them in Open Web UI.

Configuring the embedding model on Open Web UI

Now head to the Open Web UI Admin Settings page > Documents and select Azure Open AI as the embedding model engine. Copy and paste the base URL, API key, the name of the embedding model deployed on Azure AI Foundry, and the API version (not the model version) into the corresponding fields, then click "Save" to apply the changes.

Expected Output

Now let's compare how Open Web UI behaves in two scenarios: without an embedding model configured, and with the Azure Open AI embedding model configured.

Conclusion

And there you have it! Embedding models on Azure AI Foundry, combined with the seamless interaction offered by Open Web UI, are truly revolutionizing how we approach AI solutions. This powerful duo not only simplifies the process of building and deploying intelligent systems but also makes cutting-edge technology more accessible to developers and businesses of all sizes. As we move forward, it's clear that such integrations will continue to drive innovation, breaking down barriers and unlocking new possibilities in the AI landscape. So, whether you're a seasoned developer or just stepping into this exciting field, now's the time to explore what Azure AI Foundry and Open Web UI can do for you. Let's keep pushing the boundaries of what's possible!
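If you want to verify the deployment before touching the Open Web UI settings, a short script can call the same endpoint directly. This is a sketch, assuming the openai Python package's Azure client; the endpoint, key, API version, and deployment name are placeholders for the values from your own Foundry deployment page.

```python
# Minimal sketch: call an Azure-hosted embedding deployment directly.
# Assumes `pip install openai`; endpoint, key, API version, and the
# deployment name are placeholders from your own deployment details.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.embeddings.create(
    model="text-embedding-3-large",  # your *deployment* name, which may differ
    input=["City services should be easy to reach in any language."],
)

vector = response.data[0].embedding
print(len(vector), "dimensions; first values:", vector[:3])
```

A successful call confirms that the same base URL, key, and API version will work in the Open Web UI Documents settings.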
Monitoring and Evaluating LLMs in Clinical Contexts with Azure AI Foundry

👀 Missed Session 02? Don't worry, you can still catch up. But first, here's what AI HLS Ignited is all about:

What is AI HLS Ignited?

AI HLS Ignited is a Microsoft-led technical series for healthcare innovators, solution architects, and AI engineers. Each session brings to life real-world AI solutions that are reshaping the Healthcare and Life Sciences (HLS) industry. Through live demos, architectural deep dives, and GitHub-hosted code, we equip you with the tools and knowledge to build with confidence.

Session 02 Recap

In this session, we introduced MedEvals, an end-to-end evaluation framework for medical AI applications built on Azure AI Foundry. Inspired by Stanford's MedHELM benchmark, MedEvals enables providers and payers to systematically validate the performance, safety, and compliance of AI solutions across clinical decision support, documentation, patient communication, and more.

🧠 Why Scalable Evaluation Is Critical for Medical AI

"Large language models (LLMs) hold promise for tasks ranging from clinical decision support to patient education. However, evaluating the performance of LLMs in medical contexts presents unique challenges due to the complex and critical nature of medical information." — Evaluating large language models in medical applications: a survey

As AI systems become deeply embedded in healthcare workflows, the need for rigorous evaluation frameworks intensifies. Although large language models (LLMs) can augment tasks ranging from clinical documentation to decision support, their deployment in patient-facing settings demands systematic validation to guarantee safety, fidelity, and robustness. Benchmarks such as MedHELM address this requirement by subjecting models to a comprehensive battery of clinically derived tasks built on ground-truth datasets, enabling fine-grained, multi-metric performance assessment across the full spectrum of clinical use cases.

However, shipping a medical LLM is only step one. Without a repeatable, metrics-driven evaluation loop, quality erodes, regulatory gaps widen, and patient safety is put at risk. This project accelerates your ability to operationalize trustworthy LLMs by delivering plug-and-play medical benchmarks, configurable evaluators, and CI/CD templates, so every model update triggers an automated, domain-specific "health check" that flags drift, surfaces bias, and validates clinical accuracy before it ever reaches production.

🚀 How to Get Started with MedEvals

Kick off your MedEvals journey by following our curated labs. Newcomers to Azure AI Foundry can start with the foundational workflow; seasoned practitioners can dive into advanced evaluation pipelines and CI/CD integration.

🧪 Labs

- 🧪 Foundry Basics & Custom Evaluations: 🧾 Notebook – Authenticate, initialize a Foundry project, run built-in metrics, and build custom evaluators with EvalAI and PromptEval.
- 🧪 Search & Retrieval Evaluations: 🧾 Notebook – Prepare datasets, execute search metrics (precision, recall, NDCG), visualize results, and register evaluators in Foundry.
- 🧪 Repeatable Evaluations & CI/CD: 🧾 Notebook – Define evaluation schemas, build deterministic pipelines with PyTest, and automate drift detection using GitHub Actions.

🏥 Use Cases

📝 Creating Your Clinical Evaluation with RevCycle Determinations

Select the model and metric that best support the determination, and the rationale behind it, for AI-assisted prior authorizations based on real payor policy. This notebook use case includes:

- Selecting multiple candidate LLMs (e.g., gpt-4o, o1)
- Breaking down determinations into both deterministic results (approved vs. rejected) and the supporting rationale and logic
- Running evaluations across multiple dimensions
- Combining deterministic evaluators and LLM-as-a-Judge methods
- Evaluating the differential impacts of evaluators on the rationale across scenarios

🧾 Get Started with the Notebook

Why it matters: Enables data-driven metric selection for clinical workflows, ensures transparent benchmarking, and accelerates safe AI adoption in healthcare.
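To make the deterministic half of this concrete (the approved-versus-rejected call, as distinct from judging the rationale), an evaluator can be as simple as a normalized exact-match check aggregated into accuracy. The sketch below is illustrative only; the field names are invented for the example, not taken from the MedEvals codebase.

```python
# Illustrative sketch of a deterministic evaluator for prior-auth
# determinations. The "determination"/"expected" field names are made
# up for this example; align them with your own dataset schema.
from typing import Dict, List

def determination_accuracy(rows: List[Dict[str, str]]) -> float:
    """Fraction of cases where the model's approved/rejected call
    matches the ground-truth label, ignoring case and whitespace."""
    def norm(label: str) -> str:
        return label.strip().lower()
    hits = sum(norm(row["determination"]) == norm(row["expected"]) for row in rows)
    return hits / len(rows) if rows else 0.0

rows = [
    {"determination": "Approved", "expected": "approved"},   # match
    {"determination": "Rejected", "expected": "approved"},   # miss
]
print(f"Determination accuracy: {determination_accuracy(rows):.2f}")  # 0.50
```

LLM-as-a-Judge evaluators then cover what exact match cannot: scoring whether the cited policy logic actually supports the decision.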
📝 Evaluating AI Medical Notes Summarization Applications

Systematically assess how different foundation models and prompting strategies perform on clinical summarization tasks, following the MedHELM framework. This notebook use case includes:

- Preparing real-world datasets of clinical notes and summaries
- Benchmarking summarization quality using relevance, coherence, factuality, and harmfulness metrics
- Testing prompting techniques (zero-shot, few-shot, chain-of-thought prompting)
- Evaluating outputs using both automated metrics and human-in-the-loop scoring

🧾 Get Started with the Notebook

Why it matters: Ensures responsible deployment of AI applications for clinical summarization, guaranteeing high standards of quality, trustworthiness, and usability.

📣 Join Us for the Next Session

Help shape the future of healthcare by sharing AI HLS Ignited with your network, and don't miss what's coming next!

📅 Register for the upcoming session → AI HLS Ignited Event Page
💻 Explore the code, demos, and architecture → AI HLS Ignited GitHub Repository

Building AI-Powered Clinical Knowledge Stores with Azure AI Search
👀 Missed Session 01? Don't worry, you can still catch up. But first, here's what AI HLS Ignited is all about:

What is AI HLS Ignited?

AI HLS Ignited is a Microsoft-led technical series for healthcare innovators, solution architects, and AI engineers. Each session brings to life real-world AI solutions that are reshaping the Healthcare and Life Sciences (HLS) industry. Through live demos, architectural deep dives, and GitHub-hosted code, we equip you with the tools and knowledge to build with confidence.

Session 01 Recap

In our first session, we introduced the accelerator MedIndexer, an indexing framework designed for the automated creation of structured knowledge bases from unstructured clinical sources. Whether you're dealing with X-rays, clinical notes, or scanned documents, MedIndexer converts these inputs into a schema-driven format optimized for Azure AI Search. This allows your applications to leverage state-of-the-art retrieval methodologies, including vector search and re-ranking. Moreover, by applying a well-defined schema and vectorizing the data into high-dimensional representations, MedIndexer empowers AI applications to retrieve more precise and context-aware information. The result? AI systems that surface more relevant, accurate, and context-aware insights, faster.

🔍 Turning Your Unstructured Data into Value

"About 80% of medical data remains unstructured and untapped after it is created (e.g., text, image, signal, etc.)" — Healthcare Informatics Research, Chungnam National University

In the era of AI, the rise of AI copilots and assistants has led to a shift in how we access knowledge. But retrieving clinical data that lives in disparate formats is no trivial task. Building retrieval systems takes effort, and how you structure your knowledge store matters. It's a cyclic, iterative, and constantly evolving process. That's why we believe in leveraging enterprise-ready retrieval platforms like Azure AI Search, designed to power intelligent search experiences across structured and unstructured data. It serves as the foundation for building advanced retrieval systems in healthcare. However, implementing Azure AI Search alone is not enough. Mastering its capabilities and applying well-defined patterns can significantly enhance your ability to address repetitive tasks and complex retrieval scenarios. This project aims to accelerate your ability to transform raw clinical data into high-fidelity, high-value knowledge structures that can power your next-generation AI healthcare applications.

🚀 How to Get Started with MedIndexer

New to Azure AI Search? Begin with our guided labs to build a strong foundation and get hands-on with the core capabilities. Already familiar with the tech? Jump ahead to the real-world use cases: learn how to build Coded Policy Knowledge Stores and X-ray Knowledge Stores.

🧪 Labs

- 🧪 Building Your Azure AI Search Index: 🧾 Notebook – Building your first Index. Learn how to create and configure an Azure AI Search index to enable intelligent search capabilities for your applications.
- 🧪 Indexing Data into Azure AI Search: 🧾 Notebook – Ingest and Index Clinical Data. Understand how to ingest, preprocess, and index clinical data into Azure AI Search using schema-first principles.
- 🧪 Retrieval Methods for Azure AI Search: 🧾 Notebook – Exploring Vector Search and Hybrid Retrieval. Dive into retrieval techniques such as vector search, hybrid retrieval, and reranking to enhance the accuracy and relevance of search results (a minimal query sketch follows this list).
- 🧪 Evaluation Methods for Azure AI Search: 🧾 Notebook – Evaluating Search Quality and Relevance. Learn how to evaluate the performance of your search index using relevance metrics and ground-truth datasets to ensure high-quality search results.
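As promised in the retrieval lab above, here is a minimal hybrid-query sketch, assuming the azure-search-documents Python package, an existing index with a vector field named "embedding", and a query vector you have already computed; the endpoint, key, index, and field names are all placeholders.

```python
# Minimal hybrid retrieval sketch against an existing Azure AI Search index.
# Assumes `pip install azure-search-documents`; endpoint, key, index name,
# and the "embedding"/"title" fields are placeholders for your own schema.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="clinical-policies",  # placeholder index name
    credential=AzureKeyCredential("<your-query-key>"),
)

# Stand-in vector; in practice, embed the query text with your embedding model.
query_vector = [0.0] * 1536

results = search_client.search(
    search_text="prior authorization criteria for MRI",      # keyword half
    vector_queries=[VectorizedQuery(                          # vector half
        vector=query_vector, k_nearest_neighbors=5, fields="embedding")],
    top=5,
)
for doc in results:
    print(doc["@search.score"], doc.get("title"))
```

The same query shape underpins both use cases below; only the index schema and the embedding model change.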
🏥 Use Cases

📝 Creating Coded Policy Knowledge Stores

In many healthcare systems, policy documents such as pre-authorization guidelines are still trapped in static, scanned PDFs. These documents are critical: they contain ICD codes, drug name coverage, and payer-specific logic, but they are rarely structured or accessible in real time. To solve this, we built a pipeline that transforms these documents into intelligent, searchable knowledge stores. The accompanying diagram shows how pre-auth policy PDFs are ingested via blob storage, passed through an OCR and embedding skillset, and then indexed into Azure AI Search. The result: fast access to coded policy data for AI apps.

🧾 Notebook – Creating Coded Policies Knowledge Stores

Transform payer policies into machine-readable formats. This use case includes:

- Preprocessing and cleaning PDF documents
- Building custom OCR skills
- Leveraging out-of-the-box Indexer capabilities and embedding skills
- Enabling real-time AI-assisted querying for ICDs, payer names, drug names, and policy logic

Why it matters: This streamlines prior authorization and coding workflows for providers and payors, reducing manual effort and increasing transparency.

🩻 Creating X-ray Knowledge Stores

In radiology workflows, X-ray reports and image metadata contain valuable clinical insights, but these are often underutilized. Traditionally, they're stored as static entries in PACS systems or loosely connected databases. The goal of this use case is to turn those X-ray reports into a searchable, intelligent asset that clinicians can explore and interact with in meaningful ways. The accompanying diagram illustrates a full retrieval pipeline where radiology reports are uploaded, enriched through foundational models, embedded, and indexed; the output powers an AI-driven web app for similarity search and decision support.

🧾 Notebook – Creating X-rays Knowledge Stores

Turn imaging reports and metadata into a searchable knowledge base. This includes:

- Leveraging push APIs with a custom event-driven indexing pipeline triggered on new X-ray uploads
- Generating embeddings using Microsoft Healthcare foundation models
- Providing an AI-powered front end for X-ray similarity search

Why it matters: Supports clinical decision-making by retrieving similar past cases, aiding diagnosis and treatment planning with contextual relevance.

📣 Join Us for the Next Session

Help shape the future of healthcare by sharing AI HLS Ignited with your network, and don't miss what's coming next!

📅 Register for the upcoming session → AI HLS Ignited Event Page
💻 Explore the code, demos, and architecture → AI HLS Ignited GitHub Repository

14 updates for Immersive Reader: General availability on Azure, a Code.org partnership and lots more
Today, we're thrilled to announce that this powerful literacy tool has reached General Availability as an Azure Cognitive Service, allowing third-party apps and partners to add Immersive Reader right into their products. During the public preview period, we've had scores of partners integrate the Immersive Reader, and some of them are listed below.