azureai
28 Topics

Azure AI Connect - March 2 to March 6, 2026
The Future of AI is Connected. The Future is on Azure. Join us for a 5-day virtual event dedicated to mastering the Microsoft Azure AI platform. Azure AI Connect isn't just another virtual conference; it's a 5-day deep-dive immersion into the *connective tissue* of artificial intelligence on the cloud. We're bringing together developers, data scientists, and enterprise leaders to explore the full spectrum of Azure AI services, from Cognitive Services and Machine Learning to the latest breakthroughs in Generative AI.

Explore the Ecosystem: Understand how services work *together* to create powerful, end-to-end solutions.
Learn from Experts: Get direct insights from Microsoft MVPs, product teams, and industry pioneers.
Gain Practical Skills: Move beyond theory with code-driven sessions, practical workshops, and live Q&As.
Connect with Peers: Network with a global community in our virtual lounge.

How We Built an AI Operations Agent Using MCP Servers and Dynamic Tool Routing
Modern operations teams are turning to AI agents to resolve shipping delays faster and more accurately. In this article, we build a "two-brain" AI agent on Azure, with one MCP server that reads policies from Blob Storage and another that updates order data in Azure SQL, to automate decisions like hazardous-material handling and delivery prioritization. You'll see how these coordinated capabilities transform a simple user query into a fully automated operational workflow.

How to Set Up Claude Code with Microsoft Foundry Models on macOS
Introduction

Building with AI isn't just about picking a smart model; it is also about where that model lives. I chose to route my Claude Code setup through Microsoft Foundry because I needed more than a raw API. I wanted the reliability, compliance, and structured management that come with Microsoft's ecosystem. When you are moving from a prototype to something real, having that level of infrastructure backing your calls makes a significant difference. The challenge is that Foundry is designed for enterprise cloud environments, while my daily development work happens locally on a MacBook. Getting the two to communicate seamlessly involved navigating a maze of shell configurations and environment variables that weren't immediately obvious. I wrote this guide to document the exact steps for bridging that gap. Here is how you can set up Claude Code to run locally on macOS while leveraging the stability of models deployed on Microsoft Foundry.

Requirements

Before we open the terminal, let's make sure you have the necessary accounts and environments ready. Since we are bridging a local CLI with an enterprise cloud setup, having these credentials handy now will save you time later.

Azure subscription with Microsoft Foundry set up - This is the most critical piece. You need an active Azure subscription where the Microsoft Foundry environment is initialized. Ensure that you have deployed the Claude model you intend to use and that the deployment status is active. You will need the specific endpoint URL and the associated API keys from this deployment to configure the connection.
An Anthropic user account - Even though the compute is happening on Azure, the interface requires an Anthropic account. You will need this to authenticate your session and manage your user profile settings within the Claude Code ecosystem.
Claude Code client on macOS - We will be running the commands locally, so you need the Claude Code CLI installed on your MacBook.
Step 1: Install Claude Code on macOS

The recommended installation method is via Homebrew or curl, which sets it up for terminal access at the OS level.

Option A: Homebrew (recommended)

brew install --cask claude-code

Option B: curl

curl -fsSL https://claude.ai/install.sh | bash

Verify the installation by running claude --version.

Step 2: Set up Microsoft Foundry and deploy a Claude model

Navigate to your Microsoft Foundry portal, find the Claude model catalog, and deploy the selected Claude model. [Microsoft Foundry > My Assets > Models + endpoints > + Deploy Model > Deploy Base model > Search for "Claude"]

In your Model Deployment dashboard, go to the deployed Claude model and open "Endpoints and keys". Store these values somewhere safe, because we will need them to configure Claude Code later on.

Configure environment variables in the macOS terminal: Now we need to tell your local Claude Code client to route requests through Microsoft Foundry instead of the default Anthropic endpoints. This is handled by setting specific environment variables that act as a bridge between your local machine and your Azure resources. You could run these commands manually every time you open a terminal, but it is much more efficient to save them permanently in your shell profile. For most modern macOS users, this file is .zshrc. Open your terminal and add the following lines to your profile, making sure to replace the placeholder text with your actual Azure credentials:

export CLAUDE_CODE_USE_FOUNDRY=1
export ANTHROPIC_FOUNDRY_API_KEY="your-azure-api-key"
export ANTHROPIC_FOUNDRY_RESOURCE="your-resource-name"
# Specify the deployment name for Opus
export CLAUDE_CODE_MODEL="your-opus-deployment-name"

Once you have added these variables, you need to reload your shell configuration for the changes to take effect.
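Because a missing or misspelled variable tends to surface later as a confusing authentication error, it can help to check the profile took effect before launching the CLI. The snippet below is a small optional sketch, using the same variable names as the exports above; it only reports which of them are set in the current shell.

```shell
# Report which of the Foundry bridge variables are set in this shell.
# Variable names match the exports added to .zshrc above.
missing=0
for var in CLAUDE_CODE_USE_FOUNDRY ANTHROPIC_FOUNDRY_API_KEY ANTHROPIC_FOUNDRY_RESOURCE CLAUDE_CODE_MODEL; do
  if printenv "$var" > /dev/null; then
    echo "set:     $var"
  else
    echo "missing: $var"
    missing=$((missing + 1))
  fi
done
echo "$missing variable(s) missing"
```

If any line reports "missing", re-check your .zshrc and re-run source before starting claude.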
Run the source command below to update your current session, and then verify the setup by launching Claude:

source ~/.zshrc
claude

If everything is configured correctly, the Claude CLI will initialize using your Microsoft Foundry deployment as the backend. Once you execute the claude command, the CLI will prompt you to choose an authentication method. Select Option 2 (Anthropic Console account) to proceed. This action triggers your default web browser and redirects you to the Claude Console. Simply sign in using your standard Anthropic account credentials. After you have successfully signed in, you will be presented with a permissions screen. Click the Authorize button to link your web session back to your local terminal. Return to your terminal window, and you should see a notification confirming that the login process is complete. Press Enter to finalize the setup.

You are now fully connected. You can start using Claude Code locally, powered entirely by the model deployment running in your Microsoft Foundry environment.

Conclusion

Setting up this environment might seem like a heavy lift just to run a CLI tool, but the payoff is significant. You now have a workflow that combines the immediate feedback of local development with the security and infrastructure benefits of Microsoft Foundry. One of the most practical upgrades is the removal of standard usage caps: you are no longer limited to the 5-hour API call limits, which gives you the freedom to iterate, test, and debug for as long as your project requires without hitting a wall. By bridging your local macOS terminal to Azure, you are no longer just hitting an API endpoint; you are leveraging a managed, compliance-ready environment that scales with your needs. The best part is that once the configuration is locked in, you don't need to think about the plumbing again.
You can focus entirely on coding, knowing that the reliability of an enterprise platform is running quietly in the background, supporting every command.

Building an Agentic, AI-Powered Helpdesk with Agents Framework, Azure, and Microsoft 365
The article describes how to build an agentic, AI-powered helpdesk using Azure, Microsoft 365, and the Microsoft Agent Framework. The goal is to automate ticket handling, enrich requests with AI, and integrate seamlessly with M365 tools like Teams, Planner, and Power Automate.

Intelligent Conversations: Building Memory-Driven Bots with Azure AI and Semantic Kernel
Discover how memory-driven AI reshapes the way we interact, learn, and collaborate. 💬✨ On October 20th at 8pm CET, we'll explore how Semantic Kernel, Azure AI Search, and Azure OpenAI models enable bots that remember context, adapt to users, and deliver truly intelligent conversations. 🤖💭 Join Marko Atanasov and Bojan Ivanovski as they dive into the architecture behind context-aware assistants and the future of personalized learning powered by Azure AI. 🌐💡 ✅ Save your seat now: https://lnkd.in/dnZSj6Pb
Unlocking Document Insights with Azure AI 🚀
Every organisation is drowning in documents (contracts, invoices, reports), yet the real challenge lies in extracting meaningful insights from this unstructured information. Imagine turning those files into structured, actionable data with just a few clicks. That's exactly what we'll explore in this Microsoft Zero To Hero session:
✨ How Azure Document Intelligence can automate document processing
✨ Ways to enhance data extraction with AI
✨ Seamless integration into Azure's data platform for end-to-end insights
Join us to see how AI-powered automation can save time, reduce manual effort, and unlock the value hidden in your documents. 📌 Don't miss this opportunity to learn and apply Azure AI in action!
🗓️ Date: 7 October 2025
⏰ Time: 19:00 (AEDT)
🎙️ Speaker: Akanksha Malik
📌 Topic: Unlocking Document Insights with Azure AI

Azure Live Voice API and Avatar Creation
In this live event we're diving into the cutting edge of voice synthesis and avatar tech with Azure's latest innovations. What's on the agenda:
Deep dive into Azure AI Speech
Hands-on with the Azure AI Foundry Speech Playground
Live demo: avatar creation using the Azure Live Voice API
Whether you're building conversational agents, experimenting with digital personas, or just curious about the future of voice and identity in tech, this session is for you. No registration required. Spread the word: https://globalaigr.onestream.live/ #azure #techgroup #live #event #aispeech

🚀✨ Are you ready for a power-packed, productive, and inspiring October? ✨🚀
Here we go, friends! 🎉 The October Calendar is officially here, right on time, as always! 🗓️💯 This month, we're bringing you a lineup of world-class sessions designed to help you:
🌍 Explore the ecosystem from new perspectives
💡 Gain practical skills you can apply immediately
🤝 Connect with experts and a global community of learners
🚀 Stay ahead with the latest innovations in Azure, AI, Power Platform, Security, and beyond
What makes this calendar stand out is the incredible diversity of voices and expertise it brings together. 🌍 You'll hear from global speakers who share not just theory but real-world experiences across different industries, giving you insights that truly matter. And the best part? ⏰ No matter where you are in the world, the sessions are scheduled across multiple time zones so you can always join in. Even better, everything is completely free and open, because learning and growth should be accessible to everyone. 💙 🔗 Check out the full list of sessions, register today, and prepare yourself for an amazing month of learning, networking, and growth. 🔥 This isn't just another calendar; it's your chance to grow, connect, and be inspired alongside thousands of passionate learners across the globe. 🙌 Let's make October unforgettable together!
📢 https://www.linkedin.com/in/kaspersvenmozartjohansen/
📅 October 4, 2025 06:00 PM CET
📖 Get started with a modern zero trust remote access solution: Microsoft Global Secure Access
🖇️ https://streamyard.com/watch/3APZGyZFRyQS?wt.mc_id=MVP_350258

📢 https://www.linkedin.com/in/akanksha-malik/
📅 October 7, 2025 19:00 AEST
📅 October 7, 2025 10:00 AM CET
📖 Unlocking Document Insights with Azure AI
🖇️ https://streamyard.com/watch/M6qvUYdv58tt?wt.mc_id=MVP_350258

📢 https://www.linkedin.com/in/rexdekoning/
📅 October 11, 2025 06:00 PM CET
📖 Azure Functions and network security... Can it be done?
🖇️ https://streamyard.com/watch/RHzXr5bpYHFY?wt.mc_id=MVP_350258

📢 https://www.linkedin.com/in/jeevarajankumar/
📅 October 7, 2025 18:00 AEST
📅 October 19, 2025 09:00 AM CET
📖 D365 Field Service 101
🖇️ https://streamyard.com/watch/RtDkftSxhn7P?wt.mc_id=MVP_350258

📢 https://www.linkedin.com/in/priyankashah/
📅 October 21, 2025 19:00 AEST
📅 October 21, 2025 10:00 AM CET
📖 FSI and Gen AI: Wealth management advisor with Azure Foundry Agents and MCP
🖇️ https://streamyard.com/watch/Vb5rUWMBN9YN?wt.mc_id=MVP_350258

📢 https://www.linkedin.com/in/monaghadiri/
📅 October 25, 2025 06:00 PM CET
📖 The Role of Sentence Syntax in Security Copilot: Structured Storytelling for Effective Defence
🖇️ https://streamyard.com/watch/EtPkn2EZkauD?wt.mc_id=MVP_350258
Introduction

Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you'll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app's responses leap off the screen in a human-like voice. By the end of this guide, you'll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let's dive in.

Why Azure AI Speech?

We use the Azure AI Speech service in Open WebUI to enable voice interactions directly within web applications. This allows users to:
Speak commands or input instead of typing, making the interface more accessible and user-friendly.
Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
Enjoy a more natural and hands-free experience, especially on devices like smartphones or tablets.
In short, integrating the Azure AI Speech service into Open WebUI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features. If you haven't hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you have Open WebUI deployed already. Learn more about Open WebUI here.

Deploy the Azure AI Speech service in Azure

Navigate to the Azure portal and search for Azure AI Speech in the portal search bar. Create a new Speech service by filling in the fields on the resource creation page. Click "Create" to finalize the setup. After the resource has been deployed, click the "View resource" button and you should be redirected to the Azure AI Speech service page.
The page should display the API keys and endpoints for the Azure AI Speech service, which you can use in Open WebUI.

Setting things up in Open WebUI

Speech-to-Text settings (STT)

Head to the Open WebUI Admin page > Settings > Audio. Paste the API key obtained from the Azure AI Speech service page into the API key field. Unless you use a different Azure region or want to change the default configuration for the STT settings, leave all other settings blank.

Text-to-Speech settings (TTS)

Now, let's configure the TTS settings in Open WebUI by toggling the TTS Engine to the Azure AI Speech option. Again, paste the API key obtained from the Azure AI Speech service page and leave all other settings blank. You can change the TTS voice from the dropdown selection in the TTS settings as depicted in the image below. Click Save to apply the change.

Expected result

Now, let's test that everything works. Open a new chat / temporary chat in Open WebUI and click the Call / Record button. The STT engine (Azure AI Speech) should recognize your voice and provide a response based on the voice input. To test the TTS feature, click Read Aloud (the speaker icon) under any response from Open WebUI. The TTS engine should now be the Azure AI Speech service!

Conclusion

And that's a wrap! You've just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure's neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine. From here, it's all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names. Before you know it, your interface will feel less like a web page and more like a conversation partner.
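If you ever want to check a key and region outside Open WebUI, the same credentials drive the plain Azure Speech REST API that the TTS engine uses under the hood. The sketch below only constructs the request (endpoint URL, headers, and SSML body) so you can inspect it before sending it with any HTTP client; the key, region, and voice name are placeholders you would swap for your own values.

```python
# Build (but do not send) an Azure AI Speech text-to-speech REST request.
# SPEECH_KEY, SPEECH_REGION, and the default voice are placeholder values.
SPEECH_KEY = "your-azure-speech-key"
SPEECH_REGION = "eastus"

def build_tts_request(text: str, voice: str = "en-US-JennyNeural"):
    """Return the endpoint URL, headers, and SSML body for a TTS call."""
    url = f"https://{SPEECH_REGION}.tts.speech.microsoft.com/cognitiveservices/v1"
    headers = {
        "Ocp-Apim-Subscription-Key": SPEECH_KEY,
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "audio-16khz-32kbitrate-mono-mp3",
    }
    ssml = (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice name='{voice}'>{text}</voice>"
        "</speak>"
    )
    return url, headers, ssml

url, headers, body = build_tts_request("Hello from Open WebUI")
print(url)
print(body)
```

A POST of that body to that URL with those headers should return MP3 audio; a 401 response usually means a wrong key or region.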