ai agents
93 TopicsBeyond the Model: Empower your AI with Data Grounding and Model Training
Discover how Microsoft Foundry goes beyond foundational models to deliver enterprise-grade AI solutions. Learn how data grounding, model tuning, and agentic orchestration unlock faster time-to-value, improved accuracy, and scalable workflows across industries.206Views4likes2CommentsIntegrate Custom Azure AI Agents with Copilot Studio and M365 Copilot
Integrating Custom Agents with Copilot Studio and M365 Copilot In today's fast-paced digital world, integrating custom agents with Copilot Studio and M365 Copilot can significantly enhance your company's digital presence and extend your CoPilot platform to your enterprise applications and data. This blog will guide you through the integration steps of bringing your custom Azure AI Agent Service within an Azure Function App, into a Copilot Studio solution and publishing it to M365 and Teams Applications. When Might This Be Necessary: Integrating custom agents with Copilot Studio and M365 Copilot is necessary when you want to extend customization to automate tasks, streamline processes, and provide better user experience for your end-users. This integration is particularly useful for organizations looking to streamline their AI Platform, extend out-of-the-box functionality, and leverage existing enterprise data and applications to optimize their operations. Custom agents built on Azure allow you to achieve greater customization and flexibility than using Copilot Studio agents alone. What You Will Need: To get started, you will need the following: Azure AI Foundry Azure OpenAI Service Copilot Studio Developer License Microsoft Teams Enterprise License M365 Copilot License Steps to Integrate Custom Agents: Create a Project in Azure AI Foundry: Navigate to Azure AI Foundry and create a project. Select 'Agents' from the 'Build and Customize' menu pane on the left side of the screen and click the blue button to create a new agent. Customize Your Agent: Your agent will automatically be assigned an Agent ID. Give your agent a name and assign the model your agent will use. Customize your agent with instructions: Add your knowledge source: You can connect to Azure AI Search, load files directly to your agent, link to Microsoft Fabric, or connect to third-party sources like Tripadvisor. In our example, we are only testing the CoPilot integration steps of the AI Agent, so we did not build out additional options of providing grounding knowledge or function calling here. Test Your Agent: Once you have created your agent, test it in the playground. If you are happy with it, you are ready to call the agent in an Azure Function. Create and Publish an Azure Function: Use the sample function code from the GitHub repository to call the Azure AI Project and Agent. Publish your Azure Function to make it available for integration. azure-ai-foundry-agent/function_app.py at main · azure-data-ai-hub/azure-ai-foundry-agent Connect your AI Agent to your Function: update the "AIProjectConnString" value to include your Project connection string from the project overview page of in the AI Foundry. Role Based Access Controls: We have to add a role for the function app on OpenAI service. Role-based access control for Azure OpenAI - Azure AI services | Microsoft Learn Enable Managed Identity on the Function App Grant "Cognitive Services OpenAI Contributor" role to the System-assigned managed identity to the Function App in the Azure OpenAI resource Grant "Azure AI Developer" role to the System-assigned managed identity for your Function App in the Azure AI Project resource from the AI Foundry Build a Flow in Power Platform: Before you begin, make sure you are working in the same environment you will use to create your Copilot Studio agent. To get started, navigate to the Power Platform (https://make.powerapps.com) to build out a flow that connects your Copilot Studio solution to your Azure Function App. When creating a new flow, select 'Build an instant cloud flow' and trigger the flow using 'Run a flow from Copilot'. Add an HTTP action to call the Function using the URL and pass the message prompt from the end user with your URL. The output of your function is plain text, so you can pass the response from your Azure AI Agent directly to your Copilot Studio solution. Create Your Copilot Studio Agent: Navigate to Microsoft Copilot Studio and select 'Agents', then 'New Agent'. Make sure you are in the same environment you used to create your cloud flow. Now select ‘Create’ button at the top of the screen From the top menu, navigate to ‘Topics’ and ‘System’. We will open up the ‘Conversation boosting’ topic. When you first open the Conversation boosting topic, you will see a template of connected nodes. Delete all but the initial ‘Trigger’ node. Now we will rebuild the conversation boosting agent to call the Flow you built in the previous step. Select 'Add an Action' and then select the option for existing Power Automate flow. Pass the response from your Custom Agent to the end user and end the current topic. My existing Cloud Flow: Add action to connect to existing Cloud Flow: When this menu pops up, you should see the option to Run the flow you created. Here, mine does not have a very unique name, but you see my flow 'Run a flow from Copilot' as a Basic action menu item. If you do not see your cloud flow here add the flow to the default solution in the environment. Go to Solutions > select the All pill > Default Solution > then add the Cloud Flow you created to the solution. Then go back to Copilot Studio, refresh and the flow will be listed there. Now complete building out the conversation boosting topic: Make Agent Available in M365 Copilot: Navigate to the 'Channels' menu and select 'Teams + Microsoft 365'. Be sure to select the box to 'Make agent available in M365 Copilot'. Save and re-publish your Copilot Agent. It may take up to 24 hours for the Copilot Agent to appear in M365 Teams agents list. Once it has loaded, select the 'Get Agents' option from the side menu of Copilot and pin your Copilot Studio Agent to your featured agent list Now, you can chat with your custom Azure AI Agent, directly from M365 Copilot! Conclusion: By following these steps, you can successfully integrate custom Azure AI Agents with Copilot Studio and M365 Copilot, enhancing you’re the utility of your existing platform and improving operational efficiency. This integration allows you to automate tasks, streamline processes, and provide better user experience for your end-users. Give it a try! Curious of how to bring custom models from your AI Foundry to your Copilot Studio solutions? Check out this blog19KViews3likes11CommentsContext-Aware RAG System with Azure AI Search to Cut Token Costs and Boost Accuracy
🚀 Introduction As AI copilots and assistants become integral to enterprises, one question dominates architecture discussions: “How can we make large language models (LLMs) provide accurate, source-grounded answers — without blowing up token costs?” Retrieval-Augmented Generation (RAG) is the industry’s go-to strategy for this challenge. But traditional RAG pipelines often use static document chunking, which breaks semantic context and drives inefficiencies. To address this, we built a context-aware, cost-optimized RAG pipeline using Azure AI Search and Azure OpenAI, leveraging AI-driven semantic chunking and intelligent retrieval. The result: accurate answers with up to 85% lower token consumption. Majorly in this blog we are considering: Tokenization Chunking The Problem with Naive Chunking Most RAG systems split documents by token or character count (e.g., every 1,000 tokens). This is easy to implement but introduces real-world problems: 🧩 Loss of context — sentences or concepts get split mid-idea. ⚙️ Retrieval noise — irrelevant fragments appear in top results. 💸 Higher cost — you often send 5× more text than necessary. These issues degrade both accuracy and cost efficiency. 🧠 Context-Aware Chunking: Smarter Document Segmentation Instead of breaking text arbitrarily, our system uses an LLM-powered preprocessor to identify semantic boundaries — meaning each chunk represents a complete and coherent concept. Example Naive chunking: “Azure OpenAI Service offers… [cut] …integrates with Azure AI Search for intelligent retrieval.” Context-aware chunking: “Azure OpenAI Service provides access to models like GPT-4o, enabling developers to integrate advanced natural language understanding and generation into their applications. It can be paired with Azure AI Search for efficient, context-aware information retrieval.” ✅ The chunk is self-contained and semantically meaningful. This allows the retriever to match queries with conceptually complete information rather than partial sentences — leading to precision and fewer chunks needed per query. Architecture Diagram Chunking Service: Purpose: Transforms messy enterprise data (wikis, PDFs, transcripts, repos, images) into structured, model-friendly chunks for Retrieval-Augmented Generation (RAG). ChallengeChunking FixLLM context limitsBreaks docs into smaller piecesEmbedding sizeKeeps within token boundsRetrieval accuracyGranular, relevant sections onlyNoiseRemoves irrelevant blocksTraceabilityChunk IDs for auditabilityCost/latencyRe-embed only changed chunks The Chunking Flow (End-to-End) The Chunking Service sits in the ingestion pipeline and follows this sequence: Ingestion: Raw text arrives from sources (wiki, repo, transcript, PDF, image description). Token-aware splitting: Large text is cut into manageable pre-chunks with a 100-token overlap, ensuring no semantic drift across boundaries. Semantic segmentation: Each pre-chunk is passed to an Azure OpenAI Chat model with a structured prompt. Output = JSON array of semantic chunks (sectiontitle, speaker, content). Optional overlap injection: Character-level overlap can be applied across chunks for discourse-heavy text like meeting transcripts. Embedding generation: Each chunk is passed to Azure OpenAI Embeddings API (text-embedding-3-small), producing a 1536-dimension vector. Indexing: Chunks (text + vectors) are uploaded to Azure AI Search. Retrieval: During question answering or document generation, the system pulls top-k chunks, concatenates them, and enriches the prompt for the LLM. Resilience & Traceability The service is built to handle real-world pipeline issues. It retries once on rate limits, validates JSON outputs, and fails fast on malformed data instead of silently dropping chunks. Each chunk is assigned a unique ID (chunk_<sequence>_<sourceTag>), making retrieval auditable and enabling selective re-embedding when only parts of a document change. ☁️ Why Azure AI Search Matters Here Azure AI Search (formerly Cognitive Search) is the heart of the retrieval pipeline. Key Roles: Vector Search Engine: Stores embeddings of chunks and performs semantic similarity search. Hybrid Search (Keyword + Vector): Combines lexical and semantic matching for high precision and recall. Scalability: Supports millions of chunks with blazing-fast search latency. Metadata Filtering: Enables fine-grained retrieval (e.g., by document type, author, section). Native Integration with Azure OpenAI: Allows a seamless, end-to-end RAG pipeline without third-party dependencies. In short, Azure AI Search provides the speed, scalability, and semantic intelligence to make your RAG pipeline enterprise-grade. 💡 Importance of Azure OpenAI Azure OpenAI complements Azure AI Search by providing: High-quality embeddings (text-embedding-3-large) for accurate vector search. Powerful generative reasoning (GPT-4o or GPT-4.1) to craft contextually relevant answers. Security and compliance within your organization’s Azure boundary — critical for regulated environments. Together, these two services form the retrieval (Azure AI Search) and generation (Azure OpenAI) halves of your RAG system. 💰 Token Efficiency By limiting the model’s input to only the most relevant, semantically meaningful chunks, you drastically reduce prompt size and cost. Approach Tokens per Query Typical Cost Accuracy Full-document prompt ~15,000–20,000 Very high Medium Fixed-size RAG chunks ~5,000–8,000 Moderate Medium-high Context-aware RAG (this approach) ~2,000–3,000 Low High 💰 Token Cost Reduction Analysis Let’s quantify it: Step Naive Approach (no RAG) Your Approach (Context-Aware RAG) Prompt context size Entire document (e.g., 15,000 tokens) Top 3 chunks (e.g., 2,000 tokens) Tokens per query ~16,000 (incl. user + system) ~2,500 Cost reduction — ~84% reduction in token usage Accuracy Often low (hallucinations) Higher (targeted retrieval) That’s roughly an 80–85% reduction in token usage while improving both accuracy and response speed. 🧱 Tech Stack Overview Component Service Purpose Chunking Engine Azure OpenAI (GPT models) Generate context-aware chunks Embedding Model Azure OpenAI Embedding API Create high-dimensional vectors Retriever Azure AI Search Perform hybrid and vector search Generator Azure OpenAI GPT-4o Produce final answer Orchestration Layer Python / FastAPI / .NET c# Handle RAG pipeline 🔍 The Bottom Line By adopting context-aware chunking and Azure AI Search-powered RAG, you achieve: ✅ Higher accuracy (contextually complete retrievals) 💸 Lower cost (token-efficient prompts) ⚡ Faster latency (smaller context per call) 🧩 Scalable and secure architecture (fully Azure-native) This is the same design philosophy powering Microsoft Copilot and other enterprise AI assistants today. 🧪 Real-Life Example: Context-Aware RAG in Action To bring this architecture to life, let’s walk through a simple example of how documents can be chunked, embedded, stored in Azure AI Search, and then queried to generate accurate, cost-efficient answers. Imagine you want to build an internal knowledge assistant that answers developer questions from your company’s Azure documentation. ⚙️ Step 1: Intelligent Document Chunking We’ll use a small LLM call to segment text into context-aware chunks — rather than fixed token counts //Context Aware Chunking //text can be your retrieved text from any page/ document private async Task<List<SemanticChunk>> AzureOpenAIChunk(string text) { try { string prompt = $@" Divide the following text into logical, meaningful chunks. Each chunk should represent a coherent section, topic, or idea. Return the result as a JSON array, where each object contains: - sectiontitle - speaker (if applicable, otherwise leave empty) - content Do not add any extra commentary or explanation. Only output the JSON array. Do not give content an array, try to keep all in string. TEXT: {text}" var client = GetAzureOpenAIClient(); var chatCompletionsOptions = new ChatCompletionOptions { Temperature = 0, FrequencyPenalty = 0, PresencePenalty = 0 }; var Messages = new List<OpenAI.Chat.ChatMessage> { new SystemChatMessage("You are a text processing assistant."), new UserChatMessage(prompt) }; var chatClient = client.GetChatClient( deploymentName: _appSettings.Agent.Model); var response = await chatClient.CompleteChatAsync(Messages, chatCompletionsOptions); string responseText = response.Value.Content[0].Text.ToString(); string cleaned = Regex.Replace(responseText, @"```[\s\S]*?```", match => { var match1 = match.Value.Replace("```json", "").Trim(); return match1.Replace("```", "").Trim(); }); // Try to parse the response as JSON array of chunks return CreateChunkArray(cleaned); } catch (JsonException ex) { _logger.LogError("Failed to parse GPT response: " + ex.Message); throw; } catch (Exception ex) { _logger.LogError("Error in AzureOpenAIChunk: " + ex.Message); throw; } } 🧠 Step 2: Adding Overlaps for better result We are adding overlapping between chunks for better and accurate answers. Overlapping window can be modified based on the documents. public List<SemanticChunk> AddOverlap(List<SemanticChunk> chunks, string IDText, int overlapChars = 0) { var overlappedChunks = new List<SemanticChunk>(); for (int i = 0; i < chunks.Count; i++) { var current = chunks[i]; string previousOverlap = i > 0 ? chunks[i - 1].Content[^Math.Min(overlapChars, chunks[i - 1].Content.Length)..] : ""; string combinedText = previousOverlap + "\n" + current.Content; var Id = $"chunk_{i + '_' + IDText}"; overlappedChunks.Add(new SemanticChunk { Id = Regex.Replace(Id, @"[^A-Za-z0-9_\-=]", "_"), Content = combinedText, SectionTitle = current.SectionTitle }); } return overlappedChunks; } 🧠 Step 3: Generate and Store Embeddings in Azure AI Search We convert each chunk into an embedding vector and push it to an Azure AI Search index. public async Task<List<SemanticChunk>> AddEmbeddings(List<SemanticChunk> chunks) { var client = GetAzureOpenAIClient(); var embeddingClient = client.GetEmbeddingClient("text-embedding-3-small"); foreach (var chunk in chunks) { // Generate embedding using the EmbeddingClient var embeddingResult = await embeddingClient.GenerateEmbeddingAsync(chunk.Content).ConfigureAwait(false); chunk.Embedding = embeddingResult.Value.ToFloats(); } return chunks; } public async Task UploadDocsAsync(List<SemanticChunk> chunks) { try { var indexClient = GetSearchindexClient(); var searchClient = indexClient.GetSearchClient(_indexName); var result = await searchClient.UploadDocumentsAsync(chunks); } catch (Exception ex) { _logger.LogError("Failed to upload documents: " + ex); throw; } } 🤖 Step 4: Generate the Final Answer with Azure OpenAI Now we combine the top chunks with the user query to create a cost-efficient, context-rich prompt. P.S. : Here in this example we have used semantic kernel agent , in real time any agent can be used and any prompt can be updated. var context = await _aiSearchService.GetSemanticSearchresultsAsync(UserQuery); // Gets chunks from Azure AI Search //here UserQuery is query asked by user/any question prompt which need to be answered. string questionWithContext = $@"Answer the question briefly in short relevant words based on the context provided. Context : {context}. \n\n Question : {UserQuery}?"; var _agentModel = new AgentModel() { Model = _appSettings.Agent.Model, AgentName = "Answering_Agent", Temperature = _appSettings.Agent.Temperature, TopP = _appSettings.Agent.TopP, AgentInstructions = $@"You are a cloud Migration Architect. " + "Analyze all the details from top to bottom in context based on the details provided for the Migration of APP app using Azure Services. Do not assume anything." + "There can be conflicting details for a question , please verify all details of the context. If there are any conflict please start your answer with word - **Conflict**." + "There might not be answers for all the questions, please verify all details of the context. If there are no answer for question just mention - **No Information**" }; _agentModel = await _agentService.CreateAgentAsync(_agentModel); _agentModel.QuestionWithContext = questionWithContext; var modelWithResponse = await _agentService.GetAnswerAsync(_agentModel); 🧠 Final Thoughts Context-aware RAG isn’t just a performance optimization — it’s an architectural evolution. It shifts the focus from feeding LLMs more data to feeding them the right data. By letting Azure AI Search handle intelligent retrieval and Azure OpenAI handle reasoning, you create an efficient, explainable, and scalable AI assistant. The outcome: Smarter answers, lower costs, and a pipeline that scales with your enterprise. Wiki Link: Tokenization and Chunking IP Link: AI Migration Accelerator1.3KViews4likes1CommentFine-tuning at Ignite 2025: new models, new tools, new experience
Fine‑tuning isn’t just “better prompts.” It’s how you tailor a foundation model to your domain and tasks to get higher accuracy, lower cost, and faster responses -- then run it at scale. As Agents become more critical to businesses, we’re seeing growing demand for fine tuning to ensure agents are low latency, low cost, and call the right tools and the right time. At Ignite 2025, we saw how Docusign fine-tuned models that powered their document management system to achieve major gains: more than 50% cost reduction per document, 2x faster inference time, and significant improvements in accuracy. At Ignite, we launched several new features in Microsoft Foundry that make fine‑tuning easier, more scalable, and more impactful than ever with the goal of making agents unstoppable in the real world: New Open-Source models – Qwen3 32B, Ministral 3B, GPT-OSS-20B and Llama 3.3 70B – to give users access to Open-Source models in the same low friction experience as OpenAI Synthetic data generation to jump start your training journey – just upload your documents and our multi-agent system takes care of the rest Developer Training tier to reduce the barrier to entry by offering discounted training (50% off global!) on spot capacity Agentic Reinforcement Fine-tuning with GPT-5: leverage tool calling during chain of thought to teach reasoning models to use your tools to solve complex problems And if that wasn’t enough, we also released a re-imagined fine tuning experience in Foundry (new), providing access to all these capabilities in a simplified and unified UI. New Open-Source Models for Fine-tuning (Public Preview): Bringing open-source innovation to your fingertips We’ve expanded our model lineup to new open-source models you can fine-tune without worrying about GPUs or compute. Ministral-3B and Qwen3 32B are now available to fine-tune with Supervised Fine-Tuning (SFT) in Microsoft Foundry, enabling developers to adapt open-source models to their enterprise-specific domains with ease. Look out for Llama 3.3 70B and GPT-OSS-20B, coming next week! These OSS models are offered through a unified interface with OpenAI via the UI or Foundry SDK which means the same experience, regardless of model choice. These models can be used alongside your favorite Foundry tools, from AI Search to Evaluations, or to power your agents. Note: New OSS models are only available in "New" Foundry – so upgrade today! Like our OpenAI models, Open-Source models in Foundry charge per-token for training, making it simple to forecast and estimate your costs. All models are available on Global Standard tier, making discoverability easy. For more details on pricing, please see our Microsoft Foundry Models pricing page. Customers like Co-Star Group have already seen success leveraging fine tuning with Mistral models to power their home search experience on Homes.com. They selected Ministral-3B as a small, efficient model to power high volume, low latency processing with lower costs and faster deployment times than Frontier models – while still meeting their needs for accuracy, scalability, and availability thanks to fine tuning in Foundry. Synthetic data generation (Public Preview): Create high-quality training data automatically Developers can now generate high-quality, domain-specific synthetic datasets to close those persistent data gaps with synthetic data generation. One of the biggest challenges we hear teams face during fine-tuning is not having enough data or the right kind of data because it’s scarce, sensitive, or locked behind compliance constraints (think healthcare and finance). Our new synthetic data generation capability solves this by giving you a safe, scalable way to create realistic, diverse datasets tailored to your use case so you can fine-tune and evaluate models without waiting for perfect real-world data. Now, you can produce realistic question–answer pairs from your documents, or simulate multi‑turn tool‑use dialogues that include function calls without touching sensitive production data. How it works: Fine‑tuning datasets: Upload a reference file (PDF/Markdown/TXT) and Foundry converts it into SFT‑formatted Q&A pairs that reflect your domain’s language and nuances so your model learns from the right examples. Agent tool‑use datasets: Provide an OpenAPI (Swagger) spec, and Foundry simulates multi‑turn assistant–user conversations with tool calls, producing SFT‑ready examples that teach models to call your APIs reliably. Evaluation datasets: Generate distinct test queries tailored to your scenarios so you can measure model and agent quality objectively—separate from your training data to avoid false confidence. Agents succeed when they reliably understand domain intent and call the right tools at the right time. Foundry’s synthetic data generation does exactly that: it creates task‑specific training and test data so your agent learns from the right examples and you can prove it works before you go live so they are reliable in the real world. Developer Training Tier (Public Preview): 50% discount on training jobs Fine-tuning can be expensive, especially when you may need to run multiple experiments to create the right model for your production agents. To make it easier than ever to get started, we’re introducing Developer Training tier – providing users with a 50% discount when they choose to run workloads on pre-emptible capacity. It also lets users iterate faster: we support up to 10 concurrent jobs on Developer tier, making it ideal for running experiments in parallel. Because it uses reclaimable capacity, jobs may be pre‑empted and automatically resumed, so they may take longer to complete. When to use Developer Training tier: When cost matters - great for early experimentation or hyperparameter tuning thanks to 50% lower training cost. When you need high concurrency - supports up to 10 simultaneous jobs, ideal for running multiple experiments in parallel. When the workload is non‑urgent - suitable for jobs that can tolerate pre-emption and longer, capacity-dependent runtimes. Agentic Reinforcement Fine‑Tuning (RFT) (Private Preview): Train reasoning models to use your tools through outcome based optimization Building reliable AI agents requires more than copying correct behavior; models need to learn which reasoning paths lead to successful outcomes. While supervised fine-tuning trains models to imitate demonstrations, reinforcement fine-tuning optimizes models based on whether their chain of thought actually generates a successful outcome. It teaches them to think in new ways, about new domains – to solve complex problems. Agentic RFT applies this to tool-using workflows: the model generates multiple reasoning traces (including tool calls and planning steps), receives feedback on which attempts solved the problem correctly, and updates its reasoning patterns accordingly. This helps models learn effective strategies for tool sequencing, error recovery, and multi-step planning—behaviors that are difficult to capture through demonstrations alone. The difference now is that you can provide your own custom tools for use during chain of thought: models can interact with your own internal systems, retrieve the data they need, and access your proprietary APIs to solve your unique problems. Agentic RFT is currently available in private preview for o4-mini and GPT-5, with configurable reasoning effort, sampling rates, and per-run telemetry. Request access at aka.ms/agentic-rft-preview. What are customers saying? Fine-tuning is critical to achieve the accuracy and latency needed for enterprise agentic workloads. Decagon is used by many of the world’s most respected enterprises to build, manage and scale AI agents that can resolve millions of customer inquiries across chat, email, and voice – 24 hours a day, seven days a week. This experience is powered by fine-tuning: “Providing accurate responses with minimal latency is fundamental to Decagon’s product experience. We saw an opportunity to reduce latency while improving task-specific accuracy by fine-tuning models using our proprietary datasets. Via fine-tuning, we were able to exceed the performance of larger state of the art models with smaller, lighter-weight models which could be served significantly faster.” -- Cyrus Asgari, Lead Research Engineer for fine-tuning at Decagon But it’s not just agent-first startups seeing results. Companies like Discover Bank are using fine tuned models to provide better customer experiences with personal banking agents: We consolidated three steps into one, response times that were previously five or six seconds came down to one and a half to two seconds on average. This approach made the system more efficient and the 50% reduction in latency made conversations with Discovery AI feel seamless. - Stuart Emslie, Head of Actuarial and Data Science at Discovery Bank Fine-tuning has evolved from an optimization technique to essential infrastructure for production AI. Whether building specialized agents or enhancing existing products, the pattern is clear: custom-trained models deliver the accuracy and speed that general-purpose models can't match. As techniques like Agentic RFT and synthetic data generation mature, the question isn't whether to fine-tune, but how to build the systems to do it systematically. Learn More 🧠 Get Started with fine-tuning with Azure AI Foundry on Microsoft Learn Docs ▶️ Watch On-Demand: https://ignite.microsoft.com/en-US/sessions/BRK188?source=sessions 👩 Try the demos: aka.ms/FT-ignite-demos 👋 Continue the conversation on Discord407Views0likes0CommentsFoundry IQ: Unlocking ubiquitous knowledge for agents
Introducing Foundry IQ by Azure AI Search in Microsoft Foundry. Foundry IQ is a centralized knowledge layer that connects agents to data with the next generation of retrieval-augmented generation (RAG). Foundry IQ includes the following features: Knowledge bases: Available directly in the new Foundry portal, knowledge bases are reusable, topic-centric collections that ground multiple agents and applications through a single API. Automated indexed and federated knowledge sources – Expand what data an agent can reach by connecting to both indexed and remote knowledge sources. For indexed sources, Foundry IQ delivers automatic indexing, vectorization, and enrichment for text, images, and complex documents. Agentic retrieval engine in knowledge bases – A self-reflective query engine that uses AI to plan, select sources, search, rank and synthesize answers across sources with configurable “retrieval reasoning effort.” Enterprise-grade security and governance – Support for document-level access control, alignment with existing permissions models, and options for both indexed and remote data. Foundry IQ is available in public preview through the new Foundry portal and Azure portal with Azure AI Search. Foundry IQ is part of Microsoft's intelligence layer with Fabric IQ and Work IQ.20KViews4likes0CommentsFoundry IQ: boost response relevance by 36% with agentic retrieval
The latest RAG performance evaluations and results for knowledge bases and built-in agentic retrieval engine. Foundry IQ by Azure AI Search is a unified knowledge layer for agents, designed to improve response performance, automate RAG workflows and enable enterprise-ready grounding. These evaluations tested RAG performance for knowledge bases and new features including retrieval reasoning effort and federated sources like web and SharePoint for M365. Foundry IQ and Azure AI Search are part of Microsoft Foundry.3.9KViews4likes0CommentsSecuring Azure AI Applications: A Deep Dive into Emerging Threats | Part 1
Why AI Security Can’t Be Ignored? Generative AI is rapidly reshaping how enterprises operate—accelerating decision-making, enhancing customer experiences, and powering intelligent automation across critical workflows. But as organizations adopt these capabilities at scale, a new challenge emerges: AI introduces security risks that traditional controls cannot fully address. AI models interpret natural language, rely on vast datasets, and behave dynamically. This flexibility enables innovation—but also creates unpredictable attack surfaces that adversaries are actively exploiting. As AI becomes embedded in business-critical operations, securing these systems is no longer optional—it is essential. The New Reality of AI Security The threat landscape surrounding AI is evolving faster than any previous technology wave. Attackers are no longer focused solely on exploiting infrastructure or APIs; they are targeting the intelligence itself—the model, its prompts, and its underlying data. These AI-specific attack vectors can: Expose sensitive or regulated data Trigger unintended or harmful actions Skew decisions made by AI-driven processes Undermine trust in automated systems As AI becomes deeply integrated into customer journeys, operations, and analytics, the impact of these attacks grows exponentially. Why These Threats Matter? Threats such as prompt manipulation and model tampering go beyond technical issues—they strike at the foundational principles of trustworthy AI. They affect: Confidentiality: Preventing accidental or malicious exposure of sensitive data through manipulated prompts. Integrity: Ensuring outputs remain accurate, unbiased, and free from tampering. Reliability: Maintaining consistent model behavior even when adversaries attempt to deceive or mislead the system. When these pillars are compromised, the consequences extend across the business: Incorrect or harmful AI recommendations Regulatory and compliance violations Damage to customer trust Operational and financial risk In regulated sectors, these threats can also impact audit readiness, risk posture, and long-term credibility. Understanding why these risks matter builds the foundation. In the upcoming blogs, we’ll explore how these threats work and practical steps to mitigate them using Azure AI’s security ecosystem. Why AI Security Remains an Evolving Discipline? Traditional security frameworks—built around identity, network boundaries, and application hardening—do not fully address how AI systems operate. Generative models introduce unique and constantly shifting challenges: Dynamic Model Behavior: Models adapt to context and data, creating a fluid and unpredictable attack surface. Natural Language Interfaces: Prompts are unstructured and expressive, making sanitization inherently difficult. Data-Driven Risks: Training and fine-tuning pipelines can be manipulated, poisoned, or misused. Rapidly Emerging Threats: Attack techniques evolve faster than most defensive mechanisms, requiring continuous learning and adaptation. Microsoft and other industry leaders are responding with robust tools—Azure AI Content Safety, Prompt Shields, Responsible AI Frameworks, encryption, isolation patterns—but technology alone cannot eliminate risk. True resilience requires a combination of tooling, governance, awareness, and proactive operational practices. Let's Build a Culture of Vigilance: AI security is not just a technical requirement—it is a strategic business necessity. Effective protection requires collaboration across: Developers Data and AI engineers Cybersecurity teams Cloud platform teams Leadership and governance functions Security for AI is a shared responsibility. Organizations must cultivate awareness, adopt secure design patterns, and continuously monitor for evolving attack techniques. Building this culture of vigilance is critical for long-term success. Key Takeaways: AI brings transformative value, but it also introduces risks that evolve as quickly as the technology itself. Strengthening your AI security posture requires more than robust tooling—it demands responsible AI practices, strong governance, and proactive monitoring. By combining Azure’s built-in security capabilities with disciplined operational practices, organizations can ensure their AI systems remain secure, compliant, and trustworthy, even as new threats emerge. What’s Next? In future blogs, we’ll explore two of the most important AI threats—Prompt Injection and Model Manipulation—and share actionable strategies to mitigate them using Azure AI’s security capabilities. Stay tuned for practical guidance, real-world scenarios, and Microsoft-backed best practices to keep your AI applications secure. Stay Tuned.!604Views3likes0CommentsBuilding Secure, Governable AI Agents with Microsoft Foundry
The era of AI agents has arrived — and it’s accelerating fast. Organizations are moving beyond simple chatbots that merely respond to requests and toward intelligent agents that can reason, act, and adapt without human supervision. These agents can analyze data, call tools, orchestrate workflows, and make autonomous decisions in real time. As a result, agents are becoming integral members of teams, augmenting and amplifying human capabilities across organizations at scale. But the very strengths that make agents so powerful — their autonomy, intelligence, and ability to operate like virtual teammates — also introduce new risks. Enterprises need a platform that doesn’t just enable agent development but governs it from the start, ensuring security, accountability, and trust at every layer. That’s where Microsoft Foundry comes in. A unified platform for enterprise-ready agents Microsoft Foundry is a unified, interoperable platform for building, optimizing, and governing AI apps and agents at scale. At its core is Foundry Agent Service, which connects models, tools, knowledge, and frameworks into a single, observable runtime. Microsoft Foundry enables companies to shift-left, with security, safety, and governance integrated from the beginning of a developer's workflow. It delivers enterprise-grade controls from setup through production, giving customers the trust, flexibility, and confidence to innovate. 1. Setup: Start with the right foundation Enterprise customers have stringent networking, compliance, and security requirements that must be met before they can even start testing AI capabilities. Microsoft Foundry Agent Service provides a flexible setup experience designed to meet organizations where they are — whether you’re a startup prioritizing speed and simplicity or an enterprise with strict data and compliance needs. Data Control Basic Setup: Ideal for rapid prototyping and getting started quickly. This mode uses platform-managed storage. Standard Setup: Enables fine-grained control over your data by using your own Azure resources and configurations. Networking Bring Your Own Virtual Network (BYO VNet) or enable a Managed Virtual Network (Managed VNet) to enable full network isolation and strict data exfiltration control, ensuring that sensitive information remains within your organization’s trusted boundaries. Using your own virtual network for agents and evaluation workloads in Foundry allows the networking controls to be in your hands, including setting up your own Firewall to control egress traffic, virtual network peering, and setting NSGs and UDRs for managing network traffic. Managed virtual network (preview) creates a virtual network in the Microsoft tenant to handle the egress traffic of your agents. The managed virtual network handles the hassle of setting up network isolation for your Foundry resource and agents, such as setting up the subnet range, IP selection, and subnet delegation. Secrets Management Choose between a Managed Key Vault or Bring Your Own Key Vault to manage secrets and access credentials in a way that aligns with your organization’s security policies. These credentials are critical for establishing secure connections to external resources and tools integrated via the Model Context Protocol (MCP). Encryption Data is always encrypted in transit and at rest using Microsoft-managed keys by default. For enhanced ownership and control, customers can opt for Customer Managed Keys (CMK) to enable key rotation and fine-tuned data governance. Model Governance with AI Gateway Foundry supports Bring Your Own AI Gateway (preview) so enterprises can integrate their existing Foundry and Azure OpenAI model endpoints into Foundry Agent Service behind an AI Gateway for maximum flexibility, control, and governance. Authentication Foundry enforces keyless authentication using Microsoft Entra ID for all end-users wanting to access agents. 2. Development: Build agents you trust Once the environment is configured, Foundry provides tools to develop, control, and evaluate agents before putting them into production. Microsoft Entra Agent ID Every agent in Foundry is assigned a Microsoft Entra Agent ID — a new identity type purpose-built for the security and operational needs of enterprise-scale AI agents. With an agent identity, agents can be recognized, authenticated, and governed just like users, allowing IT teams to enforce familiar controls such as Conditional Access, Identity Protection, Identity Governance, and network policies. In the Microsoft Entra admin center, you will manage your agent inventory which lists all agents in your tenant including those created in Foundry, Copilot Studio, and any 3P agent you register. Unpublished agents (shared agent identity): All unpublished or in-development Foundry agents within the same project share a common agent identity. This design simplifies permission management because unpublished agents typically require the same access patterns and permission configurations. The shared identity approach provides several benefits: Simplified administration: Administrators can centrally manage permissions for all in-development agents within a project Reduced identity sprawl: Using a single identity per project prevents unnecessary identity creation during early experimentation Developer autonomy: Once the shared identity is configured, developers can independently build and test agents without repeatedly configuring new permissions Published Agents (unique agent identity): When you want to share an agent with others as a stable offering, publish it to an agent application. Once published, the agent gets assigned a unique agent identity, tied to the agent application. This establishes durable, auditable boundaries for production agents and enables independent lifecycle, compliance, and monitoring controls. Observability: Tracing, Evaluation, and Monitoring Microsoft Foundry provides a comprehensive observability layer that gives teams deep visibility into agent performance, quality, and operational health across development and production. Foundry’s observability stack brings together traces, logs, evaluations, and safety signals to help developers and administrators understand exactly how an agent arrived at an answer, which tools it used, and where issues may be emerging. This includes: Tracing: Track every step of an agent response including prompts, tool calls, tool responses, and output generation to understand decision paths, latency contributors, and failure points. Evaluations: Foundry provides a comprehensive library of built-in evaluators that measure coherence, groundedness, relevance, safety risks, security vulnerabilities, and agent-specific behaviors such as task adherence or tool-call accuracy. These evaluations help teams catch regressions early, benchmark model quality, and validate that agents behave as intended before moving to production. Monitoring: The Agent Monitoring Dashboard in Microsoft Foundry provides real-time insights into the operational health, performance, and compliance of your AI agents. This dashboard can track token usage, latency, evaluation metrics, and security posture across multi-agent systems. AI red teaming: Foundry’s AI Red Teaming Agent can be used to probe agents with adversarial queries to detect jailbreaks, prompt attacks, and security vulnerabilities. Agent Guardrails and Controls Microsoft Foundry offers safety and security guardrails that can be applied to core models, including image generation models, and agents. Guardrails consist of controls that define three things: What risk to detect (e.g., harmful content, prompt attacks, data leakage) Where to scan for it (user input, tool calls, tool responses, or model output) What action to take (annotate or block) Foundry automatically applies a default safety guardrail to all models and agents, mitigating a broad range of risks — including hate and fairness issues, sexual or violent content, self-harm, protected text/code material, and prompt-injection attempts. For organizations that require more granular control, Foundry supports custom guardrails. These allow teams to tune detection levels, selectively enable or disable risks, and apply different safety policies at the model or agent level. Tool Controls with AI Gateway To enforce tool-level controls, connect AI Gateway to your Foundry project. Once connected, all MCP and OpenAPI tools automatically receive an AI Gateway endpoint, allowing administrators to control how agents call these tools, where they can be accessed from, and who is authorized to use them. You can configure inbound, backend, outbound, and error-handling policies — for example, restricting which IPs can call an API, setting error-handling rules, or applying rate-limiting policies to control how often a tool can be invoked within a given time window. 3. Publish: Securely share your agents with end users Once the proper controls are in place and testing is complete, the agent is ready to be promoted to production. At this stage, enterprises need a secure, governed way to publish and share agents with teammates or customers. Publishing an agent to an Agent Application Anyone with the Azure AI User role on a Foundry project can interact with all agents inside that project, with conversations and state shared across users. This model is ideal for development scenarios like authoring, debugging, and testing, but it is not suitable for distributing agents to broader audiences. Publishing promotes an agent from a development asset into a managed Azure resource with a dedicated endpoint, independent identity, and governance capabilities. When an agent is published, Foundry creates an Agent Application resource designed for secure, scalable distribution. This resource provides a stable endpoint, a unique agent identity with full audit trails, cross-team sharing capabilities, integration with the Entra Agent Registry, and isolation of user data. Instead of granting access to the entire Foundry project, you grant users access only to the Agent Application resource. Integrate with M365/A365 Once your agent is published, you can integrate it into Microsoft 365 or Agent 365. This enables developers to seamlessly deploy agents from Foundry into Microsoft productivity experiences like M365 Copilot or Teams. Users can access and interact with these agents in the canvases they already use every day, providing enterprise-ready distribution with familiar governance and trust boundaries. 4. Production: Govern your agent fleet at scale As organizations expand from a handful of agents to hundreds or thousands, visibility and control become essential. The Foundry Control Plane delivers a unified, real-time view of a company's entire agent ecosystem — spanning Foundry-built and third-party agents. Key capabilities include: Comprehensive agent inventory: View and govern 100% of your agent fleet with sortable, filterable data views. Foundry Control Plane gives developers a clear understanding of every agent—whether built in Foundry, Microsoft Agent Framework, LangChain, or LangGraph—and surfaces them in one place, regardless of where they’re hosted or which cloud they run on. Operational control: Pause or disable an agent when a risk is detected. Real-time alerts: Get notified about policy, evaluation, and security alerts. Policy compliance management: Enforce organization-wide AI content policies and model policies to only allow developers to build agents with approved models in your enterprise. Cost and ROI insights: Real-time cost charts in Foundry give an accurate view of spending across all agents in a project or subscription, with drill-down capabilities to see costs at the individual agent or run level. Agent behavior controls: Apply consistent guardrails across inputs, outputs, and now tool interactions. Health and quality metrics: Review performance and reliability scores for each agent, with drilldowns for deeper analysis and corrective action. Foundry Control Plane brings everything together into a single, connected experience, enabling teams to observe, control, secure, and operate their entire agent fleet. Its capabilities work together seamlessly to help organizations build and manage AI systems that are both powerful and responsibly governed. Build agents with confidence Microsoft Foundry unifies identity, governance, security, observability, and operational control into a single end-to-end platform purpose-built for enterprise AI. With Foundry, companies can choose the setup model that matches their security and compliance posture, apply agent-level guardrails and tool-level controls with AI Gateway, securely publish and share agents across Microsoft 365 and A365, and govern their entire agent fleet through the Foundry Control Plane. At the center of this system is Microsoft Entra Agent ID, ensuring every agent has a managed identity. With these capabilities, organizations can deploy autonomous agents at scale — knowing every interaction is traceable, every risk is mitigated, and every agent is fully accountable. Whether you're building your first agent or managing a fleet of thousands, Foundry provides the foundation to innovate boldly while meeting the trust, compliance, and operational excellence enterprises require. The future of work is one where people and agents collaborate seamlessly — and Microsoft Foundry gives you the platform to build it with confidence. Learn more To dive deeper, watch the Ignite 2025 session: Entra Agent ID and other enterprise superpowers in Microsoft Foundry. To learn more, visit Microsoft Learn and explore resources including the AI Agents for Beginners, Microsoft Agent Framework, and course materials that help you build and operate agents responsibly.654Views2likes0CommentsAnnouncing Elastic MCP Server in Microsoft Foundry Tool Catalog
Introduction The future of enterprise AI is agentic - driven by intelligent, context-aware agents that deliver real business value. Microsoft Foundry is committed to enabling developers with the tools and integrations they need to build, deploy, and govern these advanced AI solutions. Today, we are excited to announce that Elastic MCP Server is now discoverable in the Microsoft Foundry Tool Catalog, unlocking seamless access to Elastic’s industry-leading vector search capabilities for Retrieval-Augmented Generation (RAG) scenarios. Seamless Integration: Elastic Meets Microsoft Foundry This integration is a major milestone in our ongoing effort to foster an open, extensible AI ecosystem. With Elastic MCP Server now available in the Azure MCP registry, developers can easily connect their agents to Elastic’s powerful search and analytics engine using the Model Context Protocol (MCP). This ensures that agents built on Microsoft Foundry are grounded in trusted, enterprise-grade data - delivering accurate, relevant, and verifiable responses. Create Elastic cloud hosted deployments or Serverless Search Projects through the Microsoft Marketplace or the Azure Portal Discoverability: Elastic MCP Server is listed as a remote MCP server in the Azure MCP Registry and the Foundry Tool Catalog. Multi-Agent Workflows: Enable collaborative agent scenarios via the A2A protocol. Unlocking Vector Search for RAG Elastic’s advanced vector search capabilities are now natively accessible to Foundry agents, enabling powerful Retrieval-Augmented Generation (RAG) workflows: Semantic Search: Agents can perform hybrid and vector-based searches over enterprise data, retrieving the most relevant context for grounding LLM responses. Customizable Retrieval: With Elastic’s Agent Builder, you can define your custom tools specific to your indices and datasets and expose them to Foundry Agents via MCP. Enterprise Grounding: Ensure agent outputs are always based on proprietary, up-to-date data, reducing hallucinations and improving trust. Deployment: Getting Started Follow these steps to integrate Elastic MCP Server with your Foundry agents: Within your Foundry project, you can either: Go to Build in the top menu, then select Tools. Click on Connect a Tool. Select the Catalog tab, search for Elasticsearch, and click Create. Once prompted, configure the Elasticsearch details by providing a name, your Kibana endpoint, and your Elasticsearch API key. Click on Use in an agent and select an existing Agent to integrate Elastic MCP Server. Alternatively, within your Agent: Click on Tools. Click Add, then select Custom. Search for Elasticsearch, add it, and configure the tool as described above. The tool will now appear in your Agent’s configuration. You are all set to now interact with your Elasticsearch projects and deployments! Conclusion & Next Steps The addition of Elastic MCP Server to the Foundry Tool Catalog empowers developers to build the next generation of intelligent, grounded AI agents - combining Microsoft’s agentic platform with Elastic’s cutting-edge vector search. Whether you’re building RAG-powered copilots, automating workflows, or orchestrating multi-agent systems, this integration accelerates your journey from prototype to production. Ready to get started? Get started with Elastic via the Azure Marketplace or Azure portal. New users get a 7-day free trial! Explore agent creation in Microsoft Foundryportal and try the Foundry Tool Catalog. Deep dive into Elastic MCP and Agent Builder Join us at Microsoft Ignite 2025 for live demos, deep dives, and more on building agentic AI with Elastic and Microsoft Foundry!454Views1like2Comments