Azure AI Search
Beyond the Model: Empower your AI with Data Grounding and Model Training
Discover how Microsoft Foundry goes beyond foundational models to deliver enterprise-grade AI solutions. Learn how data grounding, model tuning, and agentic orchestration unlock faster time-to-value, improved accuracy, and scalable workflows across industries.

Answer synthesis in Foundry IQ: Quality metrics across 10,000 queries
With answers, you can control your entire RAG pipeline directly in Foundry IQ by Azure AI Search, without integrations. Responding only when the data supports it, answers delivers grounded, steerable, citation-rich responses and traces each piece of information to its original source. Here's how it works and how it performed across our experiments.

Enabling SharePoint RAG with LogicApps Workflows
SharePoint Online is a popular choice for storing organizational documents. Many organizations use it for its robust document management, collaboration, and integration with other Microsoft 365 services. SharePoint Online provides a secure, centralized location for storing documents, making it easier for everyone in the organization to access and collaborate on files from the device of their choice.

Retrieval-Augmented Generation (RAG) is a process for infusing a large language model with organizational knowledge without explicitly fine-tuning it, which is a laborious process. RAG enhances the capabilities of language models by integrating them with external data sources, such as SharePoint documents. In this approach, documents stored in SharePoint are first converted into smaller text chunks along with vector embeddings of those chunks, then saved into an index store such as Azure AI Search. Embeddings are numerical representations capturing the semantic properties of the text. When a user submits a query, the system retrieves the most relevant document chunks from the index by matching both text and embeddings. These retrieved chunks are then used to augment the query, providing additional context and information to the large language model. Finally, the augmented query is processed by the language model to generate a more accurate and contextually relevant response.

Azure AI Search provides a built-in connector for SharePoint Online, enabling document ingestion via a pull approach, currently in public preview. This blog post outlines a LogicApps workflow-based method to export documents, along with associated ACLs and metadata, from SharePoint to Azure Storage. Once in Azure Storage, these documents can be indexed using the Azure AI Search indexer.

At a high level, two workflow groups (historic and ongoing) are created, but only one should be active at a time. The historic flow manages the export of all existing documents from SharePoint Online so that the Azure AI Search index can be initially populated from Azure Storage, where the documents are exported to. This flow processes documents from a specified start date to the current date, incrementally considering documents created within a configurable time window before moving to the next time slice. The sliding time window approach keeps the export within SharePoint throttling limits by preventing the export of all documents at once, enabling a gradual and controlled export process that targets documents created in a specific time window.

Once the historical document export is complete, the ongoing export workflow should be activated (and the historic flow deactivated). This workflow exports documents from the timestamp at which the historical export concluded up to the current date and time. The ongoing export workflow also accounts for documents created or modified since the last load and handles scenarios where documents are renamed at the source. Both workflows save the last exported timestamp in Azure Storage and use it as the starting point for every run.

Historic document export flow

Parent flow
- Recurs every N hours (a configurable value). Export of historic documents usually requires many runs, depending on the total count of documents, which could range from thousands to millions.
- Sets initial values for the sliding window variables from_date_time_UTC and to_date_time_UTC: from_date_time_UTC is read from the blob-history.txt file, and to_date_time_UTC is set to from_date_time_UTC plus the increment days. If this increment results in a date greater than the current datetime, to_date_time_UTC is set to the current datetime (see the sketch after the parent-flow steps).
- Gets the list of all SharePoint lists and libraries using the built-in action.
- Initializes the additional variables files_to_process, files_to_process_temp, and files_to_process_chunks. Later, these variables facilitate the grouping of documents into smaller lists, with each group being passed to the child flow to enable scaling through parallel execution.
- Loops through the list of SharePoint document libraries and lists. The flow focuses only on document libraries and ignores SharePoint lists (handle SharePoint list processing only if your specific use case requires it).
- Gets the files within the document library, and their properties, where the file creation timestamp falls between from_date_time_UTC and to_date_time_UTC.
- Creates JSON to capture the document library name and id (this is required in the child flow to export a document).
- Uses JavaScript to retain only the documents and ignore folders. The files and their properties also include folders as separate items, which we do not require.
- Appends the list of files to the variable.
- Uses the built-in chunk function to create a list of lists, each containing a document as an item (see the sketch below).
- Invokes the child workflow and passes each sub-list of files.
- Waits for all child flows to finish successfully and then writes to_date_time_UTC to the blob-history.txt file.
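To make the windowing and batching concrete, here is a minimal Python sketch of the parent flow's core logic, using the parameter defaults from the table further below (increment_by_days = 7, file_group_count = 40). The function names and the file-list shape are illustrative, not part of the workflow itself.

```python
from datetime import datetime, timedelta, timezone

INCREMENT_BY_DAYS = 7    # "increment_by_days" workflow parameter
FILE_GROUP_COUNT = 40    # "file_group_count" workflow parameter

def next_window(last_exported: datetime) -> tuple[datetime, datetime]:
    """Compute the sliding window: from the stored timestamp forward by
    the increment, clamped to the current datetime."""
    now = datetime.now(timezone.utc)
    from_dt = last_exported  # read from blob-history.txt in the real flow
    to_dt = min(from_dt + timedelta(days=INCREMENT_BY_DAYS), now)
    return from_dt, to_dt

def group_files(files: list[dict], group_size: int = FILE_GROUP_COUNT) -> list[list[dict]]:
    """Mirror the built-in chunk function: split the file list into
    sub-lists, one per child-flow invocation."""
    return [files[i:i + group_size] for i in range(0, len(files), group_size)]

# Example: compute the window starting where the last run stopped.
from_dt, to_dt = next_window(datetime(2024, 1, 1, tzinfo=timezone.utc))
print(from_dt.isoformat(), "->", to_dt.isoformat())

# Example: 100 files become groups of at most 40 for parallel child flows.
files = [{"name": f"doc-{i}.docx"} for i in range(100)]
print([len(g) for g in group_files(files)])  # [40, 40, 20]
```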
Child flow
- Loops through each item, which is document metadata received from the parent flow.
- Gets the content of the file and saves it into Azure Storage.
- Runs the SharePoint /roleassignments API to get the ACL (Access Control List) information, essentially the users and groups that have access to the document (a rough sketch of this call appears at the end of this post).
- Runs JavaScript to keep the roles of interest.
- Saves the filtered ACL into Azure Storage.
- Saves the document metadata (document title, created/modified timestamps, creator, etc.) into Azure Storage.
- All of this information is saved into Azure Storage, which offers the flexibility to leverage whichever parts a use case requires. All document metadata is also saved into an Azure SQL Database table for the purpose of determining whether a file being processed was modified (it already exists in the database table) or renamed (the file names do not match).
- Returns status 200, indicating the child flow has completed successfully.

Ongoing data export flow

Parent flow
The ongoing parent flow is very similar to the historic flow, except that the Get the files within the document library action gets the files that have a creation or modification timestamp between from_date_time_UTC and to_date_time_UTC. This change handles files that are created or modified in SharePoint after the last run of the ongoing workflow.

Note: Remember, you need to disable the historic flow after the full history load has completed. The ongoing flow can be enabled only after the historic flow is disabled.

Child flow
The ongoing child flow follows the same pattern as the historic child flow. Notable differences are:
- Handling of document renames at the source, which deletes the previously exported file/metadata/ACL from Azure Storage and recreates these artifacts with the new file name.
- Returns status 200, indicating the child flow has completed successfully.

Both flows have been divided into parent-child flows, enabling the export process to scale by running multiple document exports simultaneously. To manage or scale this process, adjust the concurrency settings within LogicApps actions and the App scale-out settings under the LogicApps service. These adjustments help ensure compliance with SharePoint throttling limits. The presented solution works with a single site out of the box and can be updated to work with a list of sites.

Workflow parameters

| Parameter Name | Type | Example Value |
|---|---|---|
| sharepoint_site_address | String | https://XXXXX.sharepoint.com/teams/test-sp-site |
| blob_container_name | String | sharepoint-export |
| blob_container_name_acl | String | sharepoint-acl |
| blob_container_name_metadata | String | sharepoint-metadata |
| blob_load_history_container_name | String | load-history |
| blob_load_history_file_name | String | blob-history.txt |
| file_group_count | Int | 40 |
| increment_by_days | Int | 7 |

The workflows can be imported from the GitHub repository below.

GitHub repo: SharePoint-to-Azure-Storage-for-AI-Search LogicApps workflows
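For readers implementing the child flow's ACL step outside LogicApps, here is a rough Python sketch of the /roleassignments call described above. The library title, item id, token handling, and the set of roles worth keeping are illustrative assumptions; only the general endpoint shape follows SharePoint's REST API.

```python
import requests

SITE_URL = "https://XXXXX.sharepoint.com/teams/test-sp-site"  # workflow parameter
LIBRARY_TITLE = "Documents"      # assumed library title
ITEM_ID = 42                     # hypothetical list-item id of the exported file
ACCESS_TOKEN = "<bearer-token>"  # acquire via Microsoft Entra ID in practice

# Roles worth keeping for security trimming; adjust to your use case.
ROLES_OF_INTEREST = {"Read", "Contribute", "Edit", "Full Control"}

url = (
    f"{SITE_URL}/_api/web/lists/getbytitle('{LIBRARY_TITLE}')"
    f"/items({ITEM_ID})/roleassignments"
    "?$expand=Member,RoleDefinitionBindings"
)
resp = requests.get(
    url,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json;odata=nometadata",
    },
)
resp.raise_for_status()

# Keep only principals whose role bindings match the roles of interest,
# mirroring the workflow's "Run Javascript to keep roles of interest" action.
acl = []
for ra in resp.json()["value"]:
    roles = [
        rd["Name"]
        for rd in ra["RoleDefinitionBindings"]
        if rd["Name"] in ROLES_OF_INTEREST
    ]
    if roles:
        acl.append({
            "principal": ra["Member"].get("LoginName") or ra["Member"].get("Title"),
            "roles": roles,
        })
print(acl)  # persisted to the sharepoint-acl container in the real flow
```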
Evaluating Generative AI Models Using Microsoft Foundry's Continuous Evaluation Framework

In this article, we'll explore how to design, configure, and operationalize model evaluation using Microsoft Foundry's built-in capabilities and best practices.

Why Continuous Evaluation Matters
Unlike traditional static applications, Generative AI systems evolve due to:
- New prompts
- Updated datasets
- Versioned or fine-tuned models
- Reinforcement loops

Without ongoing evaluation, teams risk quality degradation, hallucinations, and unintended bias moving into production.

How evaluation differs: traditional apps vs. Generative AI models

| Aspect | Traditional Apps | Generative AI Models |
|---|---|---|
| Functionality | Unit tests | Content quality and factual accuracy |
| Performance | Latency and throughput | Relevance and token efficiency |
| Safety | Vulnerability scanning | Harmful or policy-violating outputs |
| Reliability | CI/CD testing | Continuous runtime evaluation |

Continuous evaluation bridges these gaps, ensuring that AI systems remain accurate, safe, and cost-efficient throughout their lifecycle.

Step 1 — Set Up Your Evaluation Project in Microsoft Foundry
1. Open the Microsoft Foundry portal and navigate to your workspace.
2. Click "Evaluation" in the left navigation pane.
3. Create a new Evaluation Pipeline and link your Foundry-hosted model endpoint, including Foundry-managed Azure OpenAI models or custom fine-tuned deployments.
4. Choose or upload your test dataset, e.g., sample prompts and expected outputs (ground truth).

Example CSV:

| prompt | expected response |
|---|---|
| Summarize this article about sustainability. | A concise, factual summary without personal opinions. |
| Generate a polite support response for a delayed shipment. | Apologetic, empathetic tone acknowledging the delay. |

Step 2 — Define Evaluation Metrics
Microsoft Foundry supports both built-in metrics and custom evaluators that measure the quality and responsibility of model responses.

| Category | Example Metric | Purpose |
|---|---|---|
| Quality | Relevance, Fluency, Coherence | Assess linguistic and contextual quality |
| Factual Accuracy | Groundedness (how well responses align with verified source data), Correctness | Ensure information aligns with source content |
| Safety | Harmfulness, Policy Violation | Detect unsafe or biased responses |
| Efficiency | Latency, Token Count | Measure operational performance |
| User Experience | Helpfulness, Tone, Completeness | Evaluate from a human interaction perspective |

Step 3 — Run Evaluation Pipelines
Once configured, click "Run Evaluation" to start the process. Microsoft Foundry automatically sends your prompts to the model, compares responses with the expected outcomes, and computes all selected metrics.

Sample Python SDK snippet:

```python
from azure.ai.evaluation import evaluate_model

evaluate_model(
    model="gpt-4o",
    dataset="customer_support_evalset",
    metrics=["relevance", "fluency", "safety", "latency"],
    output_path="evaluation_results.json"
)
```

This generates structured evaluation data that can be visualized in the Evaluation Dashboard or queried using KQL (Kusto Query Language, the query language used across Azure Monitor and Application Insights) in Application Insights.
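For teams wiring this into automation early, here is a minimal sketch of a CI gate that reads the results file and fails the build when a metric drops below its threshold. The flat JSON layout of metric names to scores and the threshold values are assumptions for illustration, not the SDK's documented output format.

```python
import json
import sys

# Quality gates; threshold values are illustrative, not official guidance.
THRESHOLDS = {"relevance": 0.9, "fluency": 0.9, "safety": 0.99}

def check_results(path: str) -> int:
    # Assumes a flat {"metric_name": score} layout; adjust to the
    # actual structure your evaluation run produces.
    with open(path) as f:
        scores = json.load(f)

    failures = []
    for metric, minimum in THRESHOLDS.items():
        score = scores.get(metric)
        if score is None or score < minimum:
            failures.append(f"{metric}: {score} < {minimum}")

    if failures:
        print("Evaluation gate failed:\n  " + "\n  ".join(failures))
        return 1
    print("All evaluation metrics within thresholds.")
    return 0

if __name__ == "__main__":
    sys.exit(check_results("evaluation_results.json"))
```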
Step 4 — Analyze Evaluation Results
After the run completes, navigate to the Evaluation Dashboard. You'll find detailed insights such as:
- Overall model quality score (e.g., a 0.91 composite score)
- Token efficiency per request
- Safety violation rate (e.g., 0.8% unsafe responses)
- Metric trends across model versions

Example summary table:

| Metric | Target | Current | Trend |
|---|---|---|---|
| Relevance | >0.9 | 0.94 | ✅ Stable |
| Fluency | >0.9 | 0.91 | ✅ Improving |
| Safety | <1% | 0.6% | ✅ On track |
| Latency | <2s | 1.8s | ✅ Efficient |

Step 5 — Automate and Integrate with MLOps
Continuous evaluation works best when it's part of your DevOps or MLOps pipeline.
- Integrate with Azure DevOps or GitHub Actions using the Foundry SDK.
- Run evaluation automatically on every model update or deployment.
- Set alerts in Azure Monitor to notify you when quality or safety drops below a threshold.

Example workflow: 🧩 Prompt Update → Evaluation Run → Results Logged → Metrics Alert → Model Retraining Triggered

Step 6 — Apply Responsible AI & Human Review
Microsoft Foundry integrates Responsible AI and safety evaluation directly through Foundry safety evaluators and Azure AI services. These evaluators help detect harmful, biased, or policy-violating outputs during continuous evaluation runs.

Example:

| Test Prompt | Before Evaluation | After Evaluation |
|---|---|---|
| What is the refund policy? | Vague, hallucinated details | Precise, aligned to source content, compliant tone |

Quick Checklist for Implementing Continuous Evaluation
- Define expected outputs or ground-truth datasets
- Select quality + safety + efficiency metrics
- Automate evaluations in CI/CD or MLOps pipelines
- Set alerts for drift, hallucination, or cost spikes
- Review metrics regularly and retrain/update models

When to trigger re-evaluation
Re-evaluation should occur not only during deployment, but also when prompts evolve, new datasets are ingested, models are fine-tuned, or usage patterns shift.

Key Takeaways
- Continuous evaluation is essential for maintaining AI quality and safety at scale.
- Microsoft Foundry offers an integrated evaluation framework, from datasets to dashboards, within your existing Azure ecosystem.
- You can combine automated metrics, human feedback, and responsible AI checks for holistic model evaluation.
- Embedding evaluation into your CI/CD workflows ensures ongoing trust and transparency in every release.

Useful Resources
- Microsoft Foundry Documentation: Microsoft Foundry documentation | Microsoft Learn
- Microsoft Foundry-managed Azure AI Evaluation SDK: Local Evaluation with the Azure AI Evaluation SDK - Microsoft Foundry | Microsoft Learn
- Responsible AI Practices: What is Responsible AI - Azure Machine Learning | Microsoft Learn
- GitHub: Microsoft Foundry Samples - azure-ai-foundry/foundry-samples: Embedded samples in Azure AI Foundry docs

Up to 40% better relevance for complex queries with new agentic retrieval engine
Agentic retrieval in Azure AI Search is an API designed to retrieve better results for complex queries and agentic scenarios. Here's how it is built and how it performed across our experiments and datasets.

Foundry IQ: Unlocking ubiquitous knowledge for agents
Introducing Foundry IQ by Azure AI Search in Microsoft Foundry. Foundry IQ is a centralized knowledge layer that connects agents to data with the next generation of retrieval-augmented generation (RAG). Foundry IQ includes the following features:
- Knowledge bases – Available directly in the new Foundry portal, knowledge bases are reusable, topic-centric collections that ground multiple agents and applications through a single API.
- Automatically indexed and federated knowledge sources – Expand what data an agent can reach by connecting to both indexed and remote knowledge sources. For indexed sources, Foundry IQ delivers automatic indexing, vectorization, and enrichment for text, images, and complex documents.
- Agentic retrieval engine in knowledge bases – A self-reflective query engine that uses AI to plan, select sources, search, rank, and synthesize answers across sources with configurable "retrieval reasoning effort."
- Enterprise-grade security and governance – Support for document-level access control, alignment with existing permissions models, and options for both indexed and remote data.

Foundry IQ is available in public preview through the new Foundry portal and the Azure portal with Azure AI Search. Foundry IQ is part of Microsoft's intelligence layer, alongside Fabric IQ and Work IQ.

Foundry IQ: boost response relevance by 36% with agentic retrieval
The latest RAG performance evaluations and results for knowledge bases and the built-in agentic retrieval engine. Foundry IQ by Azure AI Search is a unified knowledge layer for agents, designed to improve response performance, automate RAG workflows, and enable enterprise-ready grounding. These evaluations tested RAG performance for knowledge bases and new features, including retrieval reasoning effort and federated sources like web and SharePoint for M365. Foundry IQ and Azure AI Search are part of Microsoft Foundry.

Push method for Azure AI Search
You may be aware that you can build indexes in Azure AI Search by pulling data from known data sources, like Azure Blob Storage, SQL Server, OneLake, etc. When using the 'pull' method, the built-in indexers run either on a schedule you define or on demand when you trigger them, at intervals of five minutes or longer. What you may not realize is that there is an alternative way to send data to an index: the 'push' method. With this approach, the search client can push data to AI Search for initial data ingestion, incremental updates, or deletions.

Push vs. Pull: Which Approach Fits Your Scenario?
Both push and pull methods in Azure AI Search are powerful ways to load data into an index. Each has its strengths, and the right choice depends on your requirements. Here's how they compare:

| Feature | Push Model (APIs) | Pull Model (Indexers) |
|---|---|---|
| Control & Flexibility | Full control over timing, batch size, and operations (upload, merge, delete). | Limited to scheduled runs and indexer configuration. |
| Latency | Near real-time updates; trigger indexing as often as needed. | Dependent on scheduled polling intervals or on-demand runs. |
| Data Source Support | Works with any source that can produce JSON matching your schema. | Limited to supported connectors (Azure Blob, SQL, SharePoint, etc.). |
| Parallelism | Customizable; design your own pipeline for concurrency and throughput. | Managed internally by the indexer; less granular control. |
| AI Enrichment | Requires custom implementation if needed. | Built-in skillsets for enrichment and integrated vectorization. |
| Best For | High indexing performance and more complex orchestration; demanding indexing timelines. | Rapid setup and automation where indexing frequency and performance are less critical. |

Reference this article for more information about the push mechanism: Data import and data ingestion - Azure AI Search | Microsoft Learn

The push model uses APIs to upload documents into an existing search index. You can upload documents individually or in batches of up to 1,000 documents or 16 MB per batch, whichever limit comes first.

Step-by-Step: Pushing Data to Your Index
Here's an example of how to use the REST API POST method to 'push' new content into an existing index, and of the curl command to search that new content. Let's say you have an existing index at https://xxxxxxxxxxxxxxxx.search.windows.net.

Step 1. Get the search service URL from the Azure portal.
Step 2. Get the index name from the AI Search service.
Step 3. Get the API key from the AI Search service.
Step 4. Get the index fields.
Step 5. Use the POST command to 'push' the new content to the existing index. When pushing data to the index, you need to specify the field names. The request inserts two new chunks with document key chunk_id, using the values chunk-003 and chunk-004 (a hedged sketch of such a request appears below).
Step 6. Verify the new content is searchable in the index. Since the new documents were inserted as chunks, you can search for them using keywords, for example with a curl command in PowerShell (also covered in the sketch below).

If you have any feedback or questions about this article and how it is useful to you, don't hesitate to reach out.

What's next? Learn how to push embeddings into an AI Search index for vector search, and set a vectorizer to automatically embed text queries.
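As a rough illustration of steps 5 and 6, here is a minimal Python sketch that pushes two chunks to an index and then searches for them through the REST API. The endpoint, index name, content field name, and API version stand in for the article's screenshots and are assumptions; substitute your own values.

```python
import requests

# Assumed values standing in for the article's screenshots; replace with your own.
SERVICE_URL = "https://xxxxxxxxxxxxxxxx.search.windows.net"
INDEX_NAME = "my-index"          # hypothetical index name
API_KEY = "<admin-api-key>"      # query keys cannot push; an admin key is required
API_VERSION = "2024-07-01"       # use an API version your service supports

HEADERS = {"Content-Type": "application/json", "api-key": API_KEY}

# Step 5: push two new chunks. "@search.action" can be upload, merge,
# mergeOrUpload, or delete; chunk_id is the document key from the article.
push_body = {
    "value": [
        {"@search.action": "upload", "chunk_id": "chunk-003",
         "content": "Text of the first new chunk."},   # "content" field is assumed
        {"@search.action": "upload", "chunk_id": "chunk-004",
         "content": "Text of the second new chunk."},
    ]
}
resp = requests.post(
    f"{SERVICE_URL}/indexes/{INDEX_NAME}/docs/index?api-version={API_VERSION}",
    headers=HEADERS, json=push_body,
)
resp.raise_for_status()
print(resp.json())  # per-document status, e.g. statusCode 201 on insert

# Step 6: verify the chunks are searchable by keyword.
search_body = {"search": "first new chunk", "select": "chunk_id"}
resp = requests.post(
    f"{SERVICE_URL}/indexes/{INDEX_NAME}/docs/search?api-version={API_VERSION}",
    headers=HEADERS, json=search_body,
)
resp.raise_for_status()
for doc in resp.json()["value"]:
    print(doc["chunk_id"])
```

If you prefer an SDK over raw REST, the azure-search-documents client library exposes equivalent upload and search operations.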
Simplify Search Development with the New Azure AI Search Wizard

Azure AI Search has introduced the new "Import Data" wizard—a unified, modernized experience that streamlines index creation across keyword, RAG, and multimodal RAG workflows. By merging the legacy keyword search wizard with the vectorization flow used for advanced AI scenarios, this update simplifies how users connect to data sources, configure skillsets, and build query-ready indexes. The new wizard supports semantic ranking, integrated vectorization, and multimodal enrichment, with expanded connector options like Azure Queues, OneDrive for Business, and SharePoint Online via Logic Apps. During the phased rollout, both the classic and new wizards will coexist, but users are encouraged to switch early to take advantage of enhanced capabilities and prepare for the eventual retirement of the legacy experience. Whether you're building traditional search or intelligent retrieval systems, the new wizard offers a faster, more intuitive path to production-ready indexes.