Azure AI

Blogsite AI Voice Answer machine
Hi all, I wanted to write a quick post to show how I thought about building a system on Azure that lets my blogsite answer questions a reader may suddenly have in mind while reading through a post, to extend their learning. The basic flow is:

- A user loads a blog post.
- On load, the page populates 3 buttons a third of the way down the page, each with a randomly AI-generated question a reader might ask about the page content.
- On clicking a button, the question is answered through voice, with the answer being 'just' enough to answer the question without being overbearing (at least that's my feeling!).

I wrote in full on how I did this, including the architecture, for my blog here: https://www.imaginarium.dev/voice-ai-for-blog/

I wanted to hear whether I was missing anything in the design, particularly security considerations on the Azure side. Any ways to improve the AI voice implementation? I'm using the Azure OpenAI neural voices at the moment. Gemini voices lately are really good too! I even thought about using a custom neural voice of my own, but I ran into issues when trying to do that within Azure, since that capability requires an enterprise subscription I didn't have readily available. Thoughts?

Model Mondays S2E01 Recap: Advanced Reasoning Session
About Model Mondays

Want to know what reasoning models are and how you can build advanced reasoning scenarios like a Deep Research agent using Azure AI Foundry? Check out this recap from Model Mondays Season 2 Ep 1. Model Mondays is a weekly series to help you build your model IQ in three steps:

1. Catch the 5-min Highlights on Monday, to get up to speed on model news
2. Catch the 15-min Spotlight on Monday, for a deep-dive into a model or tool
3. Catch the 30-min AMA on Friday, for a Q&A session with subject matter experts

Want to follow along?

- Register Here - to watch upcoming livestreams for Season 2
- Visit The Forum - to see the full AMA schedule for Season 2
- Register Here - to join the AMA on Friday Jun 20

Spotlight On: Advanced Reasoning

This week, the Model Mondays spotlight was on Advanced Reasoning with subject matter expert Marlene Mhangami. In this blog post, I'll talk about my five takeaways from this episode:

1. Why Are Reasoning Models Important?
2. What Is an Advanced Reasoning Scenario?
3. How Can I Get Started with Reasoning Models?
4. Spotlight: My Aha Moment
5. Highlights: What’s New in Azure AI

1. Why Are Reasoning Models Important?

In today's fast-evolving AI landscape, it's no longer enough for models to just complete text or summarize content. We need AI that can:

- Understand multi-step tasks
- Make decisions based on logic
- Plan sequences of actions or queries
- Connect context across turns

Reasoning models are large language models (LLMs) trained with reinforcement learning techniques to "think" before they answer. Rather than simply generating a response based on probability, these models follow an internal thought process, producing a chain of reasoning before responding. This makes them ideal for complex problem-solving tasks, and they’re the foundation of building intelligent, context-aware agents. They enable next-gen AI workflows in everything from customer support to legal research and healthcare diagnostics.
Reason: They allow AI to go beyond surface-level responses and deliver solutions that reflect understanding, not just language patterning.

2. What Does an Advanced Reasoning Scenario Involve?

An advanced reasoning scenario is one where a model:

- Breaks a complex prompt into smaller steps
- Retrieves relevant external data
- Uses logic to connect dots
- Outputs a structured, reasoned answer

Example: A user asks: What are the financial and operational risks of expanding a startup to Southeast Asia in 2025? This is the kind of question that requires extensive research and analysis. A reasoning model might tackle this by:

- Retrieving reports on Southeast Asia market conditions
- Breaking down risks into financial, political, and operational buckets
- Cross-referencing data with recent trends
- Returning a reasoned, multi-part answer

3. How Can I Get Started with Reasoning Models?

To get started, you need to visit a catalog that has examples of these models. Try the GitHub Models Marketplace and look for the reasoning category in the filter. Try the Azure AI Foundry model catalog and look for reasoning models by name. Examples:

- The o-series of models from Azure OpenAI
- The DeepSeek-R1 models
- The Grok 3 models
- The Phi-4 reasoning models

Next, you can use SDKs or the Playground to explore model capabilities.

1. Try Lab 331 - for a beginner-friendly guide.
2. Try Lab 333 - for an advanced project.
3. Try the GitHub Model Playground - to compare reasoning and GPT models.
4. Try the Deep Research Agent using LangChain - sample as a great starting project.

Have questions or comments? Join the Friday AMA on Azure AI Foundry Discord.

4. Spotlight: My Aha Moment

Before this session, I thought reasoning meant longer or more detailed responses. But this session helped me realize that reasoning means structured thinking: models now plan, retrieve, and respond with logic. This inspired me to think about building AI agents that go beyond chat and actually assist users like a teammate.
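The scenario in section 2 can be sketched as a minimal decompose-retrieve-synthesize loop. This is a toy illustration with stubbed steps, not a real agent: a production version would call a reasoning model (for example, an o-series deployment) for both the decomposition and the final synthesis, and would query a real data source for retrieval.

```python
def decompose(question: str) -> list[str]:
    # Stub: a reasoning model would produce these sub-questions.
    return [
        f"Financial risks: {question}",
        f"Political risks: {question}",
        f"Operational risks: {question}",
    ]

def retrieve(sub_question: str) -> str:
    # Stub for an external data source (search index, report store, ...).
    return f"[evidence for: {sub_question}]"

def answer(question: str) -> dict:
    steps = decompose(question)
    evidence = {s: retrieve(s) for s in steps}
    # A reasoning model would synthesize a structured, multi-part
    # answer from the gathered evidence here.
    return {"question": question, "steps": steps, "evidence": evidence}

result = answer("What are the risks of expanding a startup to Southeast Asia in 2025?")
```

The point of the structure is the separation of concerns: planning, retrieval, and synthesis are distinct stages, which is exactly what distinguishes a reasoning workflow from a single completion call.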
It also made me want to dive deeper into LangChain + Azure AI workflows to build mini-agents for real-world use.

5. Highlights: What’s New in Azure AI

Here’s what’s new in the Azure AI Foundry:

- Direct From Azure Models - Try hosted models like OpenAI GPT on PTU plans
- SORA Video Playground - Generate video from prompts via SORA models
- Grok 3 Models - Now available for secure, scalable LLM experiences
- DeepSeek R1-0528 - A reasoning-optimized, Microsoft-tuned open-source model

These are all available in the Azure Model Catalog and can be tried with your Azure account.

Did You Know? Your first step is to find the right model for your task. But what if you could have the model automatically selected for you, based on the prompt you provide? That's the magic of Model Router, a deployable AI chat model that dynamically selects the best LLM based on your prompt. Instead of choosing one model manually, the Router makes that choice in real time. Currently, this works with a fixed set of Azure OpenAI models, including a reasoning model option. Keep an eye on the documentation for more updates.

Why it’s powerful:

- Saves cost by switching between models based on complexity
- Optimizes performance by selecting the right model for the task
- Lets you test and compare model outputs quickly

Try it out in Azure AI Foundry or read more in the Model Catalog.

Coming Up Next

Next week, we dive into Model Context Protocol, an open protocol that empowers agentic AI applications by making it easier to discover and integrate knowledge and action tools with your model choices. Register Here to get reminded - and join us live on Monday!

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

- Join the Discord - for real-time chats, events & learning
- Explore the Forum - for AMA recaps, Q&A, and help!

About Me.
I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community and LinkedIn. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.

Model Mondays S2:E4 Understanding AI Developer Experiences with Leo Yao
This week in Model Mondays, we put the spotlight on the AI Toolkit for Visual Studio Code - and explore the tools and workflows that make building generative AI apps and agents easier for developers. Read on for my recap. This post was generated with AI help and human revision & review. To learn more about our motivation and workflows, please refer to this document on our website.

About Model Mondays

Model Mondays is a weekly series designed to help you grow your Azure AI Foundry Model IQ step by step. Each week includes:

- 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday
- 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday
- 30-Minute AMA on Friday – Live Q&A with subject matter experts from the Monday livestream

If you're looking to grow your skills with the latest in AI model development, this series is a great place to begin. Useful links:

- Register for upcoming livestreams
- Watch past episodes
- Join the AMA on AI Developer Experiences
- Visit the Model Mondays forum

Spotlight On: AI Developer Experiences

1. What is this topic and why is it important?

AI Developer Experiences focus on making the process of building, testing, and deploying AI models as efficient as possible. With the right tools—such as the AI Toolkit and Azure AI Foundry extensions for Visual Studio Code—developers can eliminate unnecessary friction and focus on innovation. This is essential for accelerating the real-world impact of generative AI.

2. What is one key takeaway from the episode?

The integration of Azure AI Foundry with Visual Studio Code allows developers to manage models, run experiments, and deploy applications directly from their preferred development environment. This unified workflow enhances productivity and simplifies the AI development lifecycle.

3. How can I get started?
Here are a few resources to explore: Install the AI Toolkit for VS Code Explore Azure AI Foundry Documentation Join the Microsoft Tech Community to follow and contribute to discussions 4. What’s New in Azure AI Foundry? Azure AI Foundry continues to evolve to meet developer needs with more power, flexibility, and productivity. Here are some of the latest updates highlighted in this week’s episode: AI Toolkit for Visual Studio Code Now with deeper integration, allowing developers to manage models, run experiments, and deploy applications directly within their editor—streamlining the entire workflow. Prompt Shields Enhanced security capabilities designed to protect generative AI applications from prompt injection and unsafe content, improving reliability in production environments. Model Router A new intelligent routing system that dynamically directs model requests to the most suitable model available—enhancing performance and efficiency at scale. Expanded Model Catalog The catalog now includes more open-source and proprietary models, featuring the latest from Hugging Face, OpenAI, and other leading providers. Improved Documentation and Sample Projects Newly added guides and ready-to-use examples to help developers get started faster, understand workflows, and build confidently. My A-Ha Moment Before watching this episode, setting up an AI development environment always felt like a challenge. There were so many moving parts—configurations, integrations, and dependencies—that it was hard to know where to begin. Seeing the AI Toolkit in action inside Visual Studio Code changed everything for me. It was a realization moment: “That’s it? I can explore models, test prompts, and deploy apps—without ever leaving my editor?” This episode made it clear that building with AI doesn’t have to be complex or intimidating. With the right tools, experimentation becomes faster and far more enjoyable. 
Now, I’m genuinely excited to build, test, and explore new generative AI solutions because the process finally feels accessible. Coming Up Next Week In the next episode, we’ll be exploring Fine-Tuning and Distillation with Dave Voutila. This session will focus on how to adapt Azure OpenAI models to your unique use cases and apply best practices for efficient knowledge transfer. Register here to reserve your spot and be part of the conversation. Join the Community Building in AI is better when we do it together. That’s why the Azure AI Developer Community exists—to support your journey and provide resources every step of the way. Join the Discord for real-time discussions, events, and peer learning Explore the Forum to catch up on AMAs, ask questions, and connect with other developers About Me I'm Sharda, a Gold Microsoft Learn Student Ambassador passionate about cloud technologies and artificial intelligence. I enjoy learning, building, and helping others grow in tech. Connect with me: LinkedIn GitHub Dev.to Microsoft Tech Community

Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration
Introduction Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you’ll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app’s responses leap off the screen in a human-like voice. By the end of this guide, you’ll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let’s dive in. Why Azure AI Speech? We use Azure AI Speech service in Open Web UI to enable voice interactions directly within web applications. This allows users to: Speak commands or input instead of typing, making the interface more accessible and user-friendly. Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio. Provide a more natural and hands-free experience especially on devices like smartphones or tablets. In short, integrating Azure AI Speech service into Open Web UI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features. If you haven’t hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you have Open WebUI deployed already. Learn More about OpenWeb UI here. Deploy Azure AI Speech service in Azure. Navigate to the Azure Portal and search for Azure AI Speech on the Azure portal search bar. Create a new Speech Service by filling up the fields in the resource creation page. Click on “Create” to finalize the setup. After the resource has been deployed, click on “View resource” button and you should be redirected to the Azure AI Speech service page. 
The page should display the API Keys and Endpoints for Azure AI Speech services, which you can use in Open Web UI.

Setting things up in Open Web UI

Speech to Text settings (STT)

Head to the Open Web UI Admin page > Settings > Audio. Paste the API Key obtained from the Azure AI Speech service page into the API key field below. Unless you use a different Azure region or want to change the default configurations for the STT settings, leave all settings blank.

Text to Speech settings (TTS)

Now, let's proceed with configuring the TTS settings on Open Web UI by toggling the TTS Engine to the Azure AI Speech option. Again, paste the API Key obtained from the Azure AI Speech service page and leave all settings blank. You can change the TTS Voice from the dropdown selection in the TTS settings as depicted in the image below. Click Save to reflect the change.

Expected Result

Now, let’s test if everything works well. Open a new chat / temporary chat on Open Web UI and click on the Call / Record button. The STT Engine (Azure AI Speech) should identify your voice and provide a response based on the voice input. To test the TTS feature, click on Read Aloud (Speaker Icon) under any response from Open Web UI. The TTS Engine should reflect the Azure AI Speech service!

Conclusion

And that’s a wrap! You’ve just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure’s neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine. From here, it’s all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names.
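Under the hood, the TTS calls Open WebUI makes on your behalf boil down to a small REST request carrying SSML. A minimal sketch is below; the voice name and region are examples (pick any neural voice and the region your Speech resource lives in), and the API key travels in the Ocp-Apim-Subscription-Key header, which is not shown here.

```python
def tts_endpoint(region: str) -> str:
    # Regional endpoint of the Azure AI Speech text-to-speech REST API.
    return f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"

def build_ssml(text: str, voice: str = "en-US-JennyNeural") -> str:
    # The request body is SSML: the text to speak wrapped in a <voice>
    # element naming the neural voice.
    return (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice name='{voice}'>{text}</voice>"
        "</speak>"
    )

s = build_ssml("Hello from Azure AI Speech")
print(s)
```

Seeing the payload like this is handy when debugging a misconfigured key or region with curl before blaming the Open WebUI settings.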
Before you know it, your interface will feel less like a web page and more like a conversation partner.

September Calendar IS HERE!
🚀✨ Another month, another exciting calendar from the Microsoft Hero ✨🚀 From 🌍 different time zones, and 🌟 diverse topics, we’re bringing incredible sessions designed for everyone, whether you’re just starting your journey or already an expert in Microsoft and the cloud. This month, we’ve packed the calendar with amazing speakers from across the globe 🌐 who will be sharing their invaluable knowledge and real-world experiences. 🙌 💡 Join our live sessions, learn from inspiring experts, and take a step closer to transforming your career, boosting your skills, and making an impact in your organization. ⏰ Just like last month, we’re covering multiple time zones, from Australia 🇦🇺, to Europe 🇪🇺, to the Americas 🌎, so no matter where you are, there’s a session waiting for you! 👉 Don’t miss out, register today, get ready, and let’s grow together from Zero to Hero! 💪🚀

Santhoshkumar Anandakrishnan
https://streamyard.com/watch/3CCPGbvGeEfZ?wt.mc_id=MVP_350258
September 4, 2025 11:00 AM CET | September 4, 2025 07:00 PM AEST

Arafat Tehsin
https://streamyard.com/watch/Nyq7gkQEhXkm?wt.mc_id=MVP_350258
September 9, 2025 11:00 AM CET | September 9, 2025 07:00 PM AEST

Kim Berg
https://streamyard.com/watch/6AyAT6PhD9xv?wt.mc_id=MVP_350258
September 13, 2025 06:00 PM CET

Andrew O'Young
https://streamyard.com/watch/qTvq25R7dfmu?wt.mc_id=MVP_350258
September 16, 2025 11:00 AM CET | September 16, 2025 07:00 PM AEST

Pam DeGraffenreid
https://streamyard.com/watch/UmwbDn9Gimn8?wt.mc_id=MVP_350258
September 20, 2025 06:00 PM CET

Anthony Porter
https://streamyard.com/watch/8SFHqmDB3gxH?wt.mc_id=MVP_350258
September 29, 2025 09:00 AM CET | September 29, 2025 05:00 PM AEST

Create Stunning AI Videos with Sora on Azure AI Foundry!
Special credit to Rory Preddy for creating the GitHub resource that enables us to learn more about Azure Sora. Reach out to him on LinkedIn to say thanks.

Introduction

Artificial Intelligence (AI) is revolutionizing content creation, and video generation is at the forefront of this transformation. OpenAI's Sora, a groundbreaking text-to-video model, allows creators to generate high-quality videos from simple text prompts. When paired with the powerful infrastructure of Azure AI Foundry, you can harness Sora's capabilities with scalability and efficiency, whether on a local machine or a remote setup. In this blog post, I’ll walk you through the process of generating AI videos using Sora on Azure AI Foundry. We’ll cover the setup for both local and remote environments.

Requirements:

- Azure AI Foundry with Sora model access
- A Linux machine/VM with the following packages installed:
  - Java JRE 17 (recommended) or later
  - Maven

Step Zero – Deploying the Azure Sora model on AI Foundry

Navigate to the Azure AI Foundry portal and head to the “Models + Endpoints” section (found on the left side of the Azure AI Foundry portal) > Click on the “Deploy Model” button > “Deploy base model” > Search for Sora > Click on “Confirm”. Give a deployment name and specify the Deployment type > Click “Deploy” to finalize the configuration. You should receive an API endpoint and key after successfully deploying Sora on Azure AI Foundry. Store these in a safe place because we will be using them in the next steps.

Step One – Setting up the Sora Video Generator on the local/remote machine

Clone the roryp/sora repository on your machine by running the commands below:

git clone https://github.com/roryp/sora.git
cd sora

Then, edit the application.properties file in the src/main/resources/ folder to include your Azure OpenAI credentials.
Change the configuration below:

azure.openai.endpoint=https://your-openai-resource.cognitiveservices.azure.com
azure.openai.api-key=your_api_key_here

If port 8080 is used by another application and you want to change the port on which the web app runs, change the “server.port” configuration to the desired port. Grant execute permission to the “mvnw” script file:

chmod +x mvnw

Run the application:

./mvnw spring-boot:run

Open your browser and type your localhost/remote host IP (format: [host-ip:port]) into the browser search bar. If you are running a remote host, do not forget to update your firewall/NSG to allow inbound connections to the configured port. You should see the web app for generating video with Sora AI using the API provided on Azure AI Foundry.

Now, let’s generate a video with the Sora Video Generator. Enter a prompt in the first text field, choose the video pixel resolution, and set the video duration. (Due to a technical limitation, Sora can only generate videos of a maximum of 20 seconds.) Click on the “Generate video” button to proceed. The cost to generate the video is displayed below the “Generate Video” button for transparency; you can click on the “View Breakdown” button to learn more about the cost breakdown.

The video should be ready to download after a maximum of 5 minutes. You can check the status of the video by clicking on the “Check Status” button on the web app; the page refreshes every 10 seconds to fetch real-time updates from Sora. Once it is ready, click on the “Download Video” button to download the video.

Conclusion

Generating AI videos with Sora on Azure AI Foundry is a game-changer for content creators, marketers, and developers. By following the steps outlined in this guide, you can set up your environment, integrate Sora, and start creating stunning AI-generated videos.
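As a recap of the constraints above, the generation job the web app submits boils down to a prompt, a resolution, and a duration capped at 20 seconds; the service then processes the job asynchronously while "Check Status" polls for completion. A hedged sketch of assembling such a job follows; the field names are illustrative, not the exact API schema, so check the Azure OpenAI video generation reference before wiring this up yourself.

```python
MAX_SECONDS = 20  # Sora currently caps generated clips at 20 seconds

def build_video_job(prompt: str, width: int, height: int, seconds: int) -> dict:
    """Assemble an illustrative video-generation job: prompt, resolution,
    and a duration validated against the 20-second cap."""
    if not 1 <= seconds <= MAX_SECONDS:
        raise ValueError(f"duration must be between 1 and {MAX_SECONDS} seconds")
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "n_seconds": seconds,  # hypothetical field name for the clip length
    }

job = build_video_job("a koala surfing at sunset", 1280, 720, 10)
```

Validating the duration client-side, as the web app does, saves a round trip and a failed (but still submitted) job.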
Experiment with different prompts, optimize your workflow, and let your imagination run wild! Have you tried generating AI videos with Sora or Azure AI Foundry? Share your experiences or questions in the comments below. Don’t forget to subscribe for more AI and cloud computing tutorials!

Azure Document Intelligence - How to Extract Data from PDFs and Scanned Files
Imagine this: your nonprofit receives dozens—maybe hundreds—of forms every month. Volunteer sign-ups, program applications, donation forms, surveys. Now imagine you could automatically extract the data from those documents, no matter the layout, and drop it neatly into a spreadsheet or database—with zero manual entry. That’s not a dream. It’s Azure Document Intelligence in action. Whether you're processing handwritten forms, structured PDFs, or invoices from partner organizations, Document Intelligence can turn them into actionable data in minutes. Let’s walk through what it is and exactly how to get started—no coding required. In 2025, Microsoft offers two ways to work with this tool: the new Azure AI Studio (also known as Foundry) or the original Document Intelligence Studio. Both are currently available, but AI Studio is the direction Microsoft is heading. 📄 What Is Azure Document Intelligence? Azure Document Intelligence is a service that uses AI-powered optical character recognition (OCR) to: Analyze and extract text, tables, and key-value pairs from documents Understand form structure (even if layout varies) Turn scanned documents or PDFs into structured data You can use prebuilt models (like invoice or receipt recognition), or train a custom model to understand your own document types. 🛠️ How to Use Azure Document Intelligence to Read Forms ⚡ Option 1: Use the New Azure AI Studio (Recommended) Azure AI Studio (also known as Azure AI Foundry) is Microsoft’s unified interface for working with AI-powered services like Document Intelligence. This is the platform that will eventually replace Document Intelligence Studio. 🔹 Step 1: Go to Azure AI Studio (👉 https://ai.azure.com) and sign in with your Azure account. Choose Build a solution → Document Intelligence. If it’s your first time, you’ll be prompted to create a new project. 🔹 Step 2: Set Up the Document Intelligence Resource Select your Azure subscription, region, and resource group.
Name your project (e.g., volunteer-forms). You’ll be issued: An Endpoint URL An API key Note: Keep these for later—they’re required for API calls or Power Automate connections. 🔹 Step 3: Upload and Train Your Model Upload sample forms (PDFs or images). Label fields like name, email, and date. Train a custom model using at least 5 example documents. Test and view your results in structured format within the testing pane. 🔹 Step 4: Use the Data. Export to Excel or JSON. Connect to Power Automate, Power Apps, or your CRM via API. Check out this blog to see more on the Azure AI Foundry and a video walkthrough of the platform Build, Deploy, & Manage AI with Azure AI Foundry | Microsoft Community Hub 🧭 Option 2: Use Document Intelligence Studio (Legacy Interface) Step 1: Set Up the Document Intelligence Resource in Azure Go to the Azure Portal. Click Create a resource. Search for Document Intelligence (formerly Form Recognizer) and select it. Click Create and fill out the basics: Subscription: Choose your nonprofit subscription. Resource group: Use an existing one or create a new one. Region: Choose the region closest to you. Name: Something like doc-intel-demo. Pricing tier: Choose Free F0 if you're testing (limited pages/month), or Standard if using your credits. Click Review + Create > Create. Step 2: Use the Document Intelligence Studio This is the visual, no-code interface for trying out Document Intelligence. Visit Document Intelligence Studio. Log in with your Azure account. Click Get started. On the left, click Models > Custom model > Build a model. Paste in your Endpoint and Key from the Azure portal. Choose Create project and fill in: Project name (e.g., VolunteerFormsModel) Storage container: You’ll need a Blob Storage account with your forms uploaded (see next step). Source: Select the folder with your form samples. Step 3: Upload Your Forms to Blob Storage In Azure, create a Storage Account if you don’t have one already.
Go to Containers and create a new container (e.g., forms-training). Upload 5–10 sample forms of the same type. These can be PDFs, scans, or images. Make sure the forms are consistent in layout (for best results). In Document Intelligence Studio, link this container to your project. Step 4: Label the Forms Once your forms are uploaded, start labeling fields (like Name, Date, Email). The AI will try to guess some fields—confirm or correct them. Do this for 5+ documents to train the model. Click Train model once labeling is complete. Step 5: Test the Model After training, go to Test model. Upload a new, unlabeled form and run the model. Watch as it extracts structured data like: Name: Jane Doe Email: jane@example.org Program Interest: Youth Mentoring Review the output in JSON or table format. Step 6: Export or Use the Results You can: Export the data to Excel Connect via API to feed into a database or CRM Use Power Automate to automate workflows (like adding entries to SharePoint or sending confirmation emails) Check out the blog below to see how to set up the workflow ➡️Automate the Busywork: How Nonprofits Can Use Power Automate to Extract and Process Form Data | Microsoft Community Hub Real-World Nonprofit Use Cases Here’s how nonprofits are using Document Intelligence right now: Digitizing intake forms for case management Automatically processing volunteer applications Scanning paper surveys into Excel Extracting info from grant agreements or invoices Final Thoughts Azure Document Intelligence makes what used to be tedious—scanning and retyping forms—quick, intelligent, and scalable. Once set up, it can save your nonprofit hours of manual entry each week and reduce human error. ➡️Automate the Busywork: How Nonprofits Can Use Power Automate to Extract and Process Form Data | Microsoft Community Hub

Automate the Busywork: How Nonprofits Can Use Power Automate to Extract and Process Form Data
Didn't read the first blog? Check it out here ➡️ Streamlining Non-Profit Operations with Power Automate Templates (Video Tutorial Included) | Microsoft Community Hub You’ve scanned the forms. You’ve saved the PDFs. Now what? For many nonprofits, getting data from documents into a system—whether it’s SharePoint, Excel, or your CRM—is a time-consuming, manual process. But it doesn’t have to be. With Power Automate, you can automatically trigger a workflow every time a form is uploaded, extract key data, and send it exactly where it needs to go. Whether you’re using Azure Document Intelligence to read the forms or just need to automate your document workflow, Power Automate is your nonprofit’s new best friend. 🧩 What Is Power Automate? Power Automate (formerly Microsoft Flow) is Microsoft’s automation tool that lets you create workflows between your apps and services—without writing code. For nonprofits, that might mean: Creating a task every time a form is submitted Saving form responses to SharePoint Sending an automatic email to a volunteer when their application is received Extracting data from a PDF and sending it to Excel or Dataverse You can do all of that—and more—with just a few clicks. 🔄 Scenario: Process Volunteer Application Forms Automatically Let’s walk through an example: a nonprofit receives scanned PDFs of volunteer forms in a shared folder. They want to extract the name, email, and interests from each form and add it to a SharePoint list. We’ll assume they’ve already trained a custom model in Azure Document Intelligence. Here’s how to build the flow in Power Automate. 🛠️ Step-by-Step: Automate Your Form Workflow with Power Automate Step 1: Set Up Your SharePoint List Go to SharePoint and create a new Custom List. Add the following columns: Name (Single line of text) Email (Single line of text) ProgramInterest (Choice or text) This is where your extracted form data will land. Step 2: Create a New Flow in Power Automate Go to Power Automate. 
Click Create > Automated cloud flow. Give it a name like Process Volunteer Forms. Choose the trigger: When a file is created in a folder (OneDrive or SharePoint). Step 3: Add the Azure Document Intelligence Connector Click + New Step > Search for Form Recognizer or Document Intelligence. Choose Analyze form (or Analyze with custom model if you trained one). Paste in your endpoint and API key (from the Azure portal). Choose: The model ID you trained (e.g., VolunteerForms) The URL of the uploaded file Step 4: Parse the Response Add a Parse JSON step. Use the sample output from your Document Intelligence model to generate the schema. Pull out fields like Name, Email, ProgramInterest. Step 5: Create the SharePoint Item Add a step: Create item in SharePoint. Point to your list and map the extracted fields to the appropriate columns. Check out this blog for more ideas on creating a flow Automate Your External Data Collection: Power Automate and Microsoft Forms | Microsoft Community Hub take a look at the video below for a visual walkthrough on a similar example Optional: Send a Confirmation Email Add an Outlook step: Send an email (V2). Address it to the email you extracted. Add a friendly message confirming the application was received. ✅ Bonus Scenarios for Nonprofits 🧾 Invoice Processing: Upload scanned invoices, extract amounts and vendors, and add to a tracking system. Check out this blog to see how Streamlining Invoice Processing for Nonprofits with Power Automate | Microsoft Community Hub 📝 Intake Forms: Convert handwritten client intake forms into CRM entries. 📥 Survey Collection: Process paper-based surveys and feed results into Power BI. 💵 Is It Free? Power Automate has a free tier and many flows work with the services nonprofits already use (like SharePoint, Outlook, OneDrive). More advanced features (like premium connectors) can be covered using your Microsoft Cloud for Nonprofit credits or licensing grants. 
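For reference, here is what the Parse JSON and Create item steps are doing with the analyze result, sketched in Python for clarity. The nested shape below mirrors the JSON a Document Intelligence custom model returns (documents, then labeled fields with content and confidence); the labels (Name, Email, ProgramInterest) are the ones trained earlier, and the sample values are made up for illustration.

```python
import json

def extract_fields(analyze_result: dict) -> dict:
    # Flatten the first analyzed document's labeled fields into a
    # simple label -> content mapping (what Parse JSON exposes).
    doc = analyze_result["documents"][0]
    return {label: field.get("content") for label, field in doc["fields"].items()}

def to_sharepoint_item(fields: dict) -> dict:
    # The mapping performed by the "Create item" step:
    # extracted labels -> SharePoint list columns from Step 1.
    return {
        "Title": fields.get("Name", ""),
        "Email": fields.get("Email", ""),
        "ProgramInterest": fields.get("ProgramInterest", ""),
    }

sample = json.loads("""{
  "documents": [{
    "fields": {
      "Name": {"content": "Jane Doe", "confidence": 0.98},
      "Email": {"content": "jane@example.org", "confidence": 0.95},
      "ProgramInterest": {"content": "Youth Mentoring", "confidence": 0.91}
    }
  }]
}""")

fields = extract_fields(sample)
item = to_sharepoint_item(fields)
```

The confidence values are worth keeping around: a flow can branch on low-confidence fields and route those forms for human review instead of filing them automatically.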
📊 Connect the Dots with Power Platform

Power Automate is even more powerful when combined with:

- Power Apps (to build simple apps for your team)
- Power BI (to visualize the data you're collecting)
- Azure AI (for intelligent document reading, translation, and more)

Final Thoughts

If your nonprofit is still manually entering data from forms, you're leaving time and resources on the table. Power Automate empowers anyone—regardless of tech background—to build workflows that save time, reduce errors, and let your team focus on what really matters: your mission. Let the machines do the busywork. You've got better things to do.

Deploy Open Web UI on Azure VM via Docker: A Step-by-Step Guide with Custom Domain Setup.
Introduction

Open WebUI (often referred to as "Ollama Web UI" in the context of LLM frameworks like Ollama) is an open-source, self-hostable interface designed to simplify interactions with large language models (LLMs) such as GPT-4, Llama 3, Mistral, and others. It provides a user-friendly, browser-based environment for deploying, managing, and experimenting with AI models, making advanced language model capabilities accessible to developers, researchers, and enthusiasts without requiring deep technical expertise. This article walks through the step-by-step configuration for hosting Open WebUI on Azure.

Requirements:

- Azure Portal account - students can claim $100 USD in Azure cloud credits from this URL.
- Azure Virtual Machine - with any Linux distribution installed.
- Domain name and domain host
- Caddy
- Open WebUI image

Step One: Deploy a Linux (Ubuntu) VM from the Azure Portal

Search for "Virtual Machine" in the Azure portal search bar and create a new VM by clicking the "+ Create" button > "Azure Virtual Machine". Fill out the form and select any Linux distribution image - in this demo, we will deploy Open WebUI on Ubuntu Pro 24.04. Click "Review + Create" > "Create" to create the virtual machine.

Tips: If you plan to download and host open-source AI models locally via Open WebUI on your VM, you can save time now by increasing the size of the OS disk or attaching a large data disk to the VM. You may also need a higher-performance VM size, since running a large language model (LLM) locally requires substantial resources.

Once the VM has been successfully created, click the "Go to resource" button. You will be redirected to the VM's overview page. Jot down the public IP address and access the VM using the SSH credentials you set up during creation.
Step Two: Deploy Open WebUI on the VM via Docker

Once you are logged into the VM via SSH, run the Docker command below:

docker run -d --name open-webui --network=host --add-host=host.docker.internal:host-gateway -e PORT=8080 -v open-webui:/app/backend/data --restart always ghcr.io/open-webui/open-webui:dev

This command downloads the Open WebUI image onto the VM and listens for Open WebUI traffic on port 8080. Wait a few minutes and the web UI should be up and running. If you have set up an inbound network security group rule on Azure to allow port 8080 on your VM from the public Internet, you can access it by typing into the browser: [PUBLIC_IP_ADDRESS]:8080

Step Three: Set Up a Custom Domain Using Caddy

Now we can set up a reverse proxy that maps a custom domain to [PUBLIC_IP_ADDRESS]:8080 using Caddy. Caddy is useful here because it provides automated HTTPS - you don't have to worry about expiring SSL certificates anymore, and it's free!

Install Caddy's dependencies and add its package repository with the following commands:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy

Once Caddy is installed, edit Caddy's configuration file at /etc/caddy/Caddyfile, delete everything in the file, and add the following lines:

yourdomainname.com {
    reverse_proxy localhost:8080
}

Restart Caddy using this command:

sudo systemctl restart caddy

Next, create an A record on your DNS host and point it to the public IP of the server.
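If you prefer a declarative setup you can manage in version control, the docker run command above can also be expressed as a Compose file. This is a sketch that mirrors the same flags used above; the file name and service layout are your choice, not something Open WebUI requires:

```yaml
# docker-compose.yml - equivalent to the docker run command above
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:dev
    container_name: open-webui
    network_mode: host                        # same as --network=host
    extra_hosts:
      - "host.docker.internal:host-gateway"   # same as --add-host
    environment:
      - PORT=8080
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui:
```

Start it with `docker compose up -d` from the directory containing the file; `docker compose down` stops it while leaving the named volume (and therefore your Open WebUI data) intact.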
Step Four: Update the Network Security Group (NSG)

To allow public access to the VM via HTTPS, you need to ensure the NSG/firewall of the VM allows ports 80 and 443. Let's add these rules in Azure by heading to the resource page of the VM you created for Open WebUI. Under the "Networking" section > "Network Settings" > "+ Create port rule" > "Inbound port rule", type 443 in the "Destination port ranges" field and click "Add". Repeat these steps for port 80.

Additionally, to enhance security, you should prevent external users from interacting directly with Open WebUI's port - port 8080 - by adding an inbound deny rule for that port. With that, you should be able to access Open WebUI from the domain name you set up earlier.

Conclusion

And just like that, you've turned a blank Azure VM into a sleek, secure home for your Open WebUI - no magic required! By combining Docker's simplicity with Caddy's "set it and forget it" HTTPS magic, you've not only made your app accessible via a custom domain but also locked down security by closing off risky ports and keeping traffic encrypted. Azure's cloud muscle handles the heavy lifting, while you get to enjoy the perks of a pro setup without the headache.

If you are interested in using AI models deployed on Azure AI Foundry in Open WebUI via API, kindly read my other article: Step-by-step: Integrate Ollama Web UI to use Azure Open AI API with LiteLLM Proxy

Build, Deploy, & Manage AI with Azure AI Foundry
Microsoft's Unified AI Development Platform

Imagine an enterprise organization with multiple departments, each needing to create new AI solutions that streamline operations while boosting customer experience. Each has different objectives and goals it is trying to achieve with AI: Marketing wants to analyze customer engagement on social media, Finance aims to spot fraud, and Operations plans to predict when machines need repairs. Teams have separate subscriptions, resource groups, storage, and so on per department, so resource management is tedious to say the least, while sharing data safely adds the further complexity of provisioning everything accurately.

That is where Azure AI Foundry comes in. Azure AI Foundry is a unified platform that gives organizations a centralized hub to manage their AI development with the tools and features they need. Nonprofits can now step into the world of AI and build their own solutions for their organization and the communities they serve. Azure AI Foundry is accessible to developers and beginners alike, making AI implementation cost-effective for organizations of any size. In this blog we will cover how you can get started with Azure AI Foundry. Before we begin, there are some prerequisites that need to be met before you start your journey.

Prerequisites & Azure Role Based Access Control (RBAC)

Acquiring an Azure Account

Azure AI Foundry is integrated into Microsoft's Azure cloud infrastructure. To use the platform, you will need an Azure account. You need to be assigned the role of Owner or have your administrator assign you the appropriate role. You can learn more about Azure AI Foundry roles in the Role Comparison Between Foundry Projects and Hub Based Projects section below. Nonprofits can take advantage of Microsoft's Nonprofit $2,000 Azure Sponsorship Credit Subscription. You will need to be an approved participant in Microsoft's Nonprofit Offers Program.
To learn more about how you can get started, please see the following blogs:

- Getting Signed Up with Microsoft Nonprofits Program | Microsoft Community Hub
- Claiming Azure Credits | Microsoft Community Hub

Azure Role Based Access Control (RBAC)

Access control and identity management are crucial to safeguarding your sensitive data. Organizations that deal with global privacy compliance standards understand the necessity of securing and hardening their environments. Microsoft aims to empower clients with security tools and measures built into Azure to help secure access to their resources. One of these tools is Microsoft Entra ID (formerly known as Azure Active Directory), which applies built-in roles with limited access and permissions to resources based on job function, known as Role Based Access Control (RBAC). This follows a security principle called the principle of least privilege.

For example, a business analyst may need access to Customer Relationship Management (CRM) software to record interactions with stakeholders, allocate budgets, and manage financial records. The business analyst would need administrative access related to the work performed. However, they would not need the ability to create resources such as virtual machines, since that is outside the scope of their role. Following this security best practice prevents unauthorized access to highly sensitive data.

Azure AI Foundry has roles designed for developers, managers, and users. By assigning specific roles, such as reader or manager, organizations can ensure that only authorized individuals can view or modify critical AI tools and data. Keep this in mind when granting access to users. Below is a comparison of the features and capabilities of the two project types within Azure AI Foundry: Foundry projects and hub-based projects.

Disclaimer: Some roles may limit functionality in the Azure AI Foundry portal. For example, if a user cannot create a compute instance, that option will not appear in the studio.
This prevents access denied errors.

Types of Projects

Foundry Project:
- Built on an Azure AI Foundry resource
- Agents
- Azure AI Foundry Models
- Azure AI Foundry API
- Project files (upload and start experimenting)
- Project-level isolation of files and outputs
- Evaluations
- Playground

Hub-based Project:
- Hosted on an Azure AI Foundry Hub
- Agents (preview); create if features are not available in a Foundry project
- Azure AI Foundry Models (Connections)
- Azure AI Foundry API Agents (Connections)
- Project-level isolation of files and outputs
- Evaluations
- Playground
- Prompt flow
- Managed compute
- Azure Storage account & Azure Key Vault

Role Comparison Between Foundry Project & Hub-Based Project

Foundry Project

Azure AI User: This role grants reader access to AI projects, reader access to AI accounts, and data actions for an AI project. This role is automatically assigned to the user if they can assign roles. If not, this role must be granted by your subscription Owner or a user with role assignment privileges.

Azure AI Project Manager: This role lets you perform management actions on Azure AI Foundry projects, build and develop projects, and grants conditional assignment of the Azure AI User role to other user principals.

Azure AI Account Owner: This role grants full access to managing AI projects and accounts, and grants conditional assignment of the Azure AI User role to other user principals.

Hub-Based Project

Owner: Full access to the hub, including the ability to manage and create new hubs and assign permissions. This role is automatically assigned to the hub creator.

Contributor: Users have full access to the hub, including the ability to create new hubs, but cannot manage hub permissions on the existing resource.

Azure AI Administrator (preview): This role is automatically assigned to the system-assigned managed identity for the hub.
The Azure AI Administrator role has the minimum permissions needed for the managed identity to perform its tasks. For more information, see Azure AI Administrator role (preview).

Azure AI Developer: Perform all actions except creating new hubs and managing hub permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project and can interact with existing Azure AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services.

Azure AI Inference Deployment Operator: Perform all actions required to create a resource deployment within a resource group.

Reader: Read-only access to the hub. This role is automatically assigned to all project members within the hub.

Playgrounds, Agents, & Models Oh My!

Model Catalog

Investing in AI can be expensive, from overhead to capital expenditure, and adoption and development can be costly for organizations with tight budgets. Nonprofits that want to venture into AI development face the challenge of balancing budget with performance while navigating the ever-evolving AI landscape. They need the ability to evaluate and test-drive models before making a major investment in AI projects. Azure AI Foundry makes it easy to compare models and benchmarks for the latest AI models. Choose from a comprehensive collection of models from OpenAI, Meta, Mistral, Grok, Cohere, and more, and track your model's quota usage to stay within limits.

Fine-tuned AI Models

Create tailored experiences with fine-tuned AI models by starting from base models in Azure AI Foundry and adapting them with your own data to create an experience that caters to your audience. For nonprofits and businesses alike, fine-tuned models offer a practical path to maximize impact without the need for intensive computational resources or expertise.
Whether optimizing for customer support, document summarization, healthcare analysis, or content generation, fine-tuning ensures AI solutions are more effective and aligned to user needs.

Playgrounds

Playgrounds are workspaces where you can work with GPTs, assistants, real-time audio, images, and completions. They are a great way to test and compare models before fully committing to adopting them, and built-in tools let you quickly benchmark and evaluate what works best for your needs. You can choose from a variety of the latest models from OpenAI and third-party vendors; setup takes just a few clicks to pick your model.

- Chat: The Chat playground lets users work with AI chat models in real time.
- Assistants: The Assistants playground is designed for experimenting with AI-driven assistants tailored to a wide range of tasks.
- Real-time audio: The Real-time Audio playground provides an interactive space to experiment with advanced audio-based AI models.
- Images: The Images playground offers an intuitive environment for working with state-of-the-art image generation and analysis models.
- Completions: The Completions playground allows users to test text generation models by providing prompts and adjusting settings for tasks such as content creation, summarization, or code generation.

As you can see, you have many options to choose from. Create agentic bots for customer interactions, or develop a chatbot for end users grounded in specific organizational knowledge such as FAQs and documents with citations. The sky is the limit, and Azure keeps adding new features and capabilities to improve the user experience. Developers can also get started with templates and use IDEs like Visual Studio and Visual Studio Code.

Now, let us talk about how you can integrate your data to refine and improve your workflows. In the next section we will discuss how you can connect your data to your customized solutions.
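Once a model is deployed from the playground, the same deployment can be called programmatically. The Python sketch below only assembles the request URL and JSON body for an Azure OpenAI-style chat-completions endpoint; the resource name, deployment name, and API version shown are placeholders, and actually sending the request (with your key in the `api-key` header) is left out.

```python
import json

# Hypothetical values - replace with your own Foundry resource,
# deployment name, and a current API version.
ENDPOINT = "https://my-foundry-resource.openai.azure.com"
DEPLOYMENT = "my-chat-model"
API_VERSION = "2024-06-01"

def build_chat_request(system_prompt: str, user_message: str):
    """Assemble the URL and JSON body for a chat-completions call."""
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    body = {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }
    return url, json.dumps(body)

url, body = build_chat_request(
    "You answer FAQs for a nonprofit using only approved documents.",
    "What are your office hours?",
)
print(url)
```

This mirrors what the playground's "view code" experience generates for you: the system message carries your organizational grounding, and the user message is whatever your end user asks.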
Connecting Data Sources

Connecting your data storage to Azure AI Foundry's playground assistants, fine-tuned models, batch pipelines, and evaluation workflows is straightforward. You can link storage accounts, databases, Azure Blob Storage, uploaded files, and Azure AI Search to supply datasets for training, testing, or real-time use. Built-in connectors and APIs make integration simple, while role-based permissions control access. Data lineage and versioning help track and manage information, ensuring your assistants and models use accurate, reliable inputs before additional security and governance tools are applied.

Compatible Storage Types

- Azure Blob Storage
- Azure AI Search
- Azure Cosmos DB for MongoDB
- Uploaded files
- URL / web address
- JSON

Governance & Security

Azure AI Foundry provides tools to ensure the security of projects. One such tool is Role Based Access Control (RBAC), which we discussed earlier. In addition, Azure AI Foundry integrates a security framework designed to protect sensitive data and comply with industry standards. It employs a combination of tools, governance controls, and continuous monitoring to assist organizations in developing AI solutions securely. Users can set up controls like content filters and blocklists. Security recommendations are available through Microsoft Defender XDR integration, offering protection against data leakage, data poisoning, jailbreaks, and credential theft. Additionally, compliance policies from Microsoft Purview help maintain security measures.

Security & Governance Features

- Compliance security framework
- Private endpoints & network isolation
- Role Based Access Control
- Guardrails & controls
- Data encryption
- Microsoft Purview
- Defender XDR integration

Taken together, robust governance and security features offer organizations peace of mind, ensuring that their AI projects are not only innovative but also responsibly managed and protected against emerging threats.
As organizations scale their AI initiatives, understanding and managing resource usage becomes equally important. This is where quotas come into play, helping teams allocate resources efficiently and maintain optimal performance as they build and deploy AI solutions.

Managing Token Quotas

Azure AI Foundry provides comprehensive tools that let teams monitor and manage token quotas across a diverse range of model consumption patterns. Whether your workloads use the Global standard, Global provisioned, Global batch, Data zone standard, Data zone provisioned, Data zone batch, Standard, or Regional deployment types, the platform gives granular visibility into resource allocation and consumption. This centralized tracking means organizations can proactively identify usage bottlenecks, optimize deployment strategies, and stay within defined limits, all while supporting efficient scaling and sustaining high performance for their AI solutions.

How to Get Started

Get started by visiting Azure AI Foundry at https://ai.azure.com. To begin leveraging Azure AI Foundry, organizations should first explore the platform's intuitive interface and robust documentation, which offer step-by-step guidance for onboarding teams of any size. Users can discover a suite of developer SDKs, prebuilt templates, and ready-to-deploy chatbot solutions that expedite the setup process. Engaging with these resources enables teams to rapidly prototype, customize, and scale AI solutions according to their unique requirements. Additionally, organizations are encouraged to take advantage of the extensive educational content and support channels provided, ensuring a smooth transition from initial exploration to full-scale AI deployment. With these tools and resources at their fingertips, teams can confidently embark on their AI journey, transforming innovative ideas into impactful outcomes.
Hyperlinks

- Introducing Azure AI Foundry - Everything you need for AI development
- Build your own copilot with Azure AI Studio (Part 1) | Microsoft Learn
- Role-based access control in Azure AI Foundry portal - Azure AI Foundry | Microsoft Learn
- QuickStart: Get started with Azure AI Foundry - Azure AI Foundry | Microsoft Learn
- How to configure a private link for an Azure AI Foundry hub - Azure AI Foundry | Microsoft Learn
- Azure OpenAI Service - Pricing | Microsoft Azure