# Mastering Query Fields in Azure AI Document Intelligence with C#
## Introduction

Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.

## Use case scenario

In the course of Azure AI Document Intelligence engineering tasks or code reviews, many developers encounter an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`. The error looks similar to:

> Inner Error: The parameter urlSource or base64Source is required.

This challenge typically comes down to parameter errors and SDK changes. The problematic C# code usually looks like this:

```csharp
BinaryData data = BinaryData.FromBytes(Content);
var queryFields = new List<string> { "FullName", "CompanyName", "JobTitle" };

var operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    modelId,
    data,
    "1-2",
    queryFields: queryFields,
    features: new List<DocumentAnalysisFeature> { DocumentAnalysisFeature.QueryFields }
);
```

One reason this fails is that the developer is using `Azure.AI.DocumentIntelligence` v1.0.0, where `base64Source` and `urlSource` must be handled internally by the SDK. Older examples that used `AnalyzeDocumentContent` no longer apply, which leads to errors.

## Practical solutions

There are two practical approaches: using `AnalyzeDocumentOptions`, or building a manual JSON payload.

### Using AnalyzeDocumentOptions

The correct method uses `AnalyzeDocumentOptions`, which streamlines the request construction with the following steps.

1. Prepare the document content:

```csharp
BinaryData data = BinaryData.FromBytes(Content);
```

2. Create `AnalyzeDocumentOptions`:

```csharp
var analyzeOptions = new AnalyzeDocumentOptions(modelId, data)
{
    Pages = "1-2",
    Features = { DocumentAnalysisFeature.QueryFields },
    QueryFields = { "FullName", "CompanyName", "JobTitle" }
};
```

- `modelId`: Your trained model's ID.
- `Pages`: The pages to analyze (e.g., "1-2").
- `Features`: Enables `QueryFields`.
- `QueryFields`: Defines which fields to extract.

3. Run the analysis:

```csharp
Operation<AnalyzeResult> operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    analyzeOptions
);
AnalyzeResult result = operation.Value;
```

Why this works: the SDK manages `base64Source` automatically, the approach matches the latest SDK standards, and it results in cleaner, more maintainable code.

### Alternative method using a manual JSON payload

For advanced use cases where you need more control over the request, you can build the JSON payload manually. For example:

```csharp
var queriesPayload = new
{
    queryFields = new[]
    {
        new { key = "FullName" },
        new { key = "CompanyName" },
        new { key = "JobTitle" }
    }
};

string jsonPayload = JsonSerializer.Serialize(queriesPayload);
BinaryData requestData = BinaryData.FromString(jsonPayload);

var operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    modelId,
    requestData,
    "1-2",
    features: new List<DocumentAnalysisFeature> { DocumentAnalysisFeature.QueryFields }
);
```

Use this approach for custom request formats or non-standard data source integration.

## Key points to remember

- Breaking changes exist between the preview versions and v1.0.0, so check your SDK version.
- Prefer `AnalyzeDocumentOptions` and its built-in classes for simpler, error-free integration.
- Ensure your content is wrapped in `BinaryData`, or use a direct URL for the document input, as shown in the sketch below.
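If your document lives at a publicly accessible URL, you can pass the URL instead of wrapping bytes in `BinaryData`. Here is a minimal sketch, assuming the `Uri`-based overload of `AnalyzeDocumentOptions` in `Azure.AI.DocumentIntelligence` v1.0.0; the endpoint variables, model ID, and document URL are placeholders:

```csharp
using Azure;
using Azure.AI.DocumentIntelligence;

// Assumes the endpoint and key are available as environment variables.
var client = new DocumentIntelligenceClient(
    new Uri(Environment.GetEnvironmentVariable("DOCUMENTINTELLIGENCE_ENDPOINT")!),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("DOCUMENTINTELLIGENCE_KEY")!));

string modelId = "<your-custom-model-id>"; // your trained model's ID

// With a URL source, the SDK sends urlSource for us -- no manual
// base64Source handling is required.
var analyzeOptions = new AnalyzeDocumentOptions(
    modelId, new Uri("https://example.com/sample-document.pdf"))
{
    Pages = "1-2",
    Features = { DocumentAnalysisFeature.QueryFields },
    QueryFields = { "FullName", "CompanyName", "JobTitle" }
};

Operation<AnalyzeResult> operation =
    await client.AnalyzeDocumentAsync(WaitUntil.Completed, analyzeOptions);
AnalyzeResult result = operation.Value;
```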
## Conclusion

In this article, we have seen how `AnalyzeDocumentOptions` significantly improves how you integrate query fields with Azure AI Document Intelligence in C#. It keeps your solution up to date, readable, and more reliable. Staying aware of SDK updates and evolving best practices will help you unlock deeper insights from your documents effortlessly.

## References

- Official AnalyzeDocumentAsync documentation
- Official Azure SDK documentation
- Azure Document Intelligence C# SDK support for the query fields add-on

# The Importance of Implementing SAST Scanning for Infrastructure as Code
As the adoption of Infrastructure as Code (IaC) continues to grow, ensuring the security of your infrastructure configurations becomes increasingly crucial. Static Application Security Testing (SAST) scanning for IaC can play a vital role in identifying vulnerabilities early in the development lifecycle. This blog explores why implementing SAST scanning for IaC is essential for maintaining secure and robust infrastructure.

# Elevate Your AI Expertise with Microsoft Azure: Learn Live Series for Developers
Unlock the power of Azure AI and master the art of creating advanced AI agents. Starting April 15th, embark on a comprehensive learning journey designed specifically for professional developers like you. This series will guide you through the official Microsoft Learn Plan, focused on the latest agentic AI technologies and innovations.

Generative AI has evolved to become an essential tool for crafting intelligent applications, and AI agents are leading the charge. Here's your opportunity to deepen your expertise in building powerful, scalable agent-based solutions using Azure AI Foundry, Azure AI Agent Service, and the Semantic Kernel Framework.

## Why attend?

This Learn Live series will provide you with:

- **In-depth knowledge:** Understand when to use AI agents, how they function, and the best practices for building them on Azure.
- **Hands-on experience:** Gain practical skills to develop, deploy, and extend AI agents with Azure AI Agent Service and the Semantic Kernel SDK.
- **Expert insights:** Learn directly from Microsoft's AI professionals, ensuring you're at the cutting edge of agentic AI technologies.

## Session highlights

- **Plan and Prepare AI Solutions | April 15th** — Explore foundational principles for creating secure and responsible AI solutions, and prepare your development environment for seamless integration with Azure AI services.
- **Fundamentals of AI Agents | April 22nd** — Discover the transformative role of language models and generative AI in enabling intelligent applications, and understand Microsoft Copilot and effective prompting techniques for agent development.
- **Azure AI Agent Service: Build and Integrate | April 29th** — Dive into the key features of Azure AI Agent Service, build agents, and learn how to integrate them into your applications for enhanced functionality.
- **Extend with Custom Tools | May 6th** — Enhance your agents' capabilities with custom tools tailored to meet unique application requirements.
- **Develop an AI Agent with Semantic Kernel | May 8th** — Use Semantic Kernel to connect to an Azure AI Foundry project, create Azure AI Agent Service agents using the Semantic Kernel SDK, and integrate plugin functions with your AI agent.
- **Orchestrate Multi-Agent Solutions with Semantic Kernel | May 13th** — Utilize the Semantic Kernel SDK to create collaborative multi-agent systems, and develop and integrate custom plugin functions for versatile AI solutions.

## What you'll achieve

By the end of this series, you'll:

- Build AI agents using cutting-edge Azure technologies.
- Integrate custom tools to extend agent capabilities.
- Develop multi-agent solutions with advanced orchestration.

## How to join

Don't miss out on this opportunity to level up your development skills and lead the next wave of AI-driven applications. Register now and set yourself apart as a developer equipped to harness the full potential of Azure AI.

🔗 Register for the Learn Live Series

🗓️ Format: Livestream | Language: English | Topic: Core AI Development

Take the leap and transform how you develop intelligent applications with Microsoft Azure AI.

# Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, developer, or IT professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community.

Explore top breakout sessions covering GitHub Copilot, Azure AI, generative AI, and security best practices—designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's open-source AI curriculum, offering beginner-friendly courses on AI, machine learning, data science, cybersecurity, and GitHub Copilot—perfect for upskilling teams and fostering innovation.

Don't just learn—lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.

# The Startup Stage: Powered by Microsoft for Startups at European AI & Cloud Summit
🚀 Take center stage in the AI and Cloud Startup Program, designed to showcase groundbreaking solutions and foster collaboration between ambitious startups and influential industry leaders. Whether you're looking to engage with potential investors, connect with clients, or share your boldest ideas, this is the platform to shine.

## Why join the Startup Stage?

- **Pitch to top investors:** Present your ideas and products to key decision-makers in the tech world.
- **Gain visibility:** Showcase your startup in a vibrant space dedicated to innovation, and prove that you are the next game-changer.
- **Learn from the best:** Hear from visionary thought leaders and Microsoft AI experts about the latest trends and opportunities in AI and cloud.

## AI competition: propel your startup

Stand out from the crowd by participating in the European AI & Cloud Startup Stage competition, exclusively designed for startups leveraging Microsoft AI and Azure cloud services. Compete for prestigious awards, including:

- $25,000 in Microsoft Azure credits.
- A mentoring session with Marco Casalaina, VP of Products at Azure AI.
- Fast-track access to exclusive resources through the Microsoft for Startups program.

Get ready to deliver a pitch in front of a live audience and an expert panel on 28 May 2025!

How to apply:

- Ensure your startup solution runs on Microsoft AI and Azure cloud.
- Register as a conference attendee at the European Cloud and AI Summit and submit your competition application form before the deadline: 14 April 2025.

## Be part of something bigger

This isn't just an exhibition—it's a thriving community where innovation meets opportunity. Don't miss out! With tickets already 70% sold out, now's the time to secure your spot. Join the European AI and Cloud Startup Area with a booth or launchpad, and accelerate your growth in the tech ecosystem.

Visit the [European AI and Cloud Summit](https://ecs.events) website to learn more, purchase tickets, or apply for the AI competition. Download the sponsorship brochure for detailed insights into this once-in-a-lifetime event. Together, let's shape the future of cloud technology. See you in Düsseldorf! 🎉

# Measure and Mitigate Risks for a generative AI app in Azure AI Foundry
Join Microsoft Reactor on March 4th at 9 AM PST (6 PM CET) for an exclusive session on responsible AI strategies in Azure AI Foundry. Learn how to identify and mitigate AI risks using tools like Azure AI Content Safety and built-in safety monitoring. Engage in a live Q&A on Discord (March 5th) and participate in the Microsoft Learn Challenge (until March 11th). Led by April Speight, Principal Cloud Advocate at Microsoft, this session is essential for developers building trustworthy AI applications.

# Get certified as an Azure AI Engineer (AI-102) this summer?
For developers, accreditation as an Azure AI Engineer—certified through the rigorous AI-102 exam—has become a golden ticket to career acceleration. It isn't just about coding chatbots or fine-tuning machine learning models; it's about gaining the confidence (for you and for your business) that you can wield Azure's toolkits to configure AI solutions that augment human capability.

Before we dive in: if you're planning to become certified as an Azure AI Engineer, you may find this Starter Learning Plan (AI-102) valuable—recently curated by a group of Microsoft experts, purposed for your success. We recommend adding it to your existing learning portfolio. It's a light introduction that should take less than four hours, but it offers a solid glimpse into what to expect on your journey and the breadth of solutions you might craft in the future.

From revolutionizing customer service with intelligent agents to optimizing supply chains through predictive analytics, Azure AI engineers sit at the confluence of technological ingenuity and business transformation. For those with an appetite for problem-solving and a vision for AI-driven futures, this certification isn't just another badge—it's an assertion of expertise in a field where demand is outpacing supply.

Securing that expertise, however, requires more than a weekend of cramming. Today's aspiring AI engineers navigate an ecosystem of learning that is as modern as the field itself. Gone are the days when one could rely solely on a stack of manuals; now, candidates immerse themselves in a medley of Microsoft Learn modules, hands-on labs, AI-powered coding assistants, and community-led study groups. Many take a pragmatic approach—building real-world projects using Azure Cognitive Services and Machine Learning Studio to cement their understanding. Others lean on practice exams and structured courses from platforms like Pluralsight and Udemy, ensuring they aren't just memorizing but internalizing the core principles. The AI-102 exam doesn't reward rote knowledge—it demands fluency in designing, deploying, and securing AI solutions, making thorough preparation an indispensable part of the journey.

In addition to the learning plan above, here are a few more tips:

- **Understand the exam objectives.** Begin by thoroughly reviewing the AI-102 study guide. This document outlines the key topics and skills assessed, including planning and managing Azure AI solutions, implementing computer vision and natural language processing solutions, and deploying generative AI solutions. Familiarizing yourself with these areas will provide a structured framework for your study plan.
- **Mix in scenario-based content.** Continuous memorization is part of your study, but if you get a bit bored of flashcards and want more storyline-style learning content, we recommend adding the Microsoft-created learning plans to your mix. They are scenario-based and focus on giving you a structured understanding of how to accomplish tasks on Azure. Three examples: Modernize for AI Readiness; Build AI apps with Azure; Re-platform AI applications.
- **Get hands-on practice.** Practical experience is invaluable. Engage with Azure AI services directly by building projects that incorporate computer vision, natural language processing, and other AI functionalities. This hands-on approach not only reinforces theoretical knowledge but also enhances problem-solving skills in real-world scenarios.
- **Utilize practice assessments.** Assess your readiness by taking advantage of the free practice assessments provided by Microsoft. These assessments mirror the style and difficulty of actual exam questions, offering detailed feedback and links to additional resources for areas that may require further study.
- **Stay updated on exam changes.** Certification exams are periodically updated to reflect the latest technologies and practices. Regularly consult the official exam page to stay informed about any changes in exam content or structure.
- **Participate in community discussions.** Engaging with peers through forums and study groups can provide diverse perspectives and insights. The Microsoft Q&A platform is a valuable resource for asking questions, sharing knowledge, and learning from the experiences of others preparing for the same certification.

By systematically incorporating these strategies into your preparation, you'll be well positioned to excel in the AI-102 exam and advance your career as an Azure AI Engineer. If you have additional tips or thoughts, let us know in the comments. Good luck!

# Speed Up OpenAI Embedding By 4x With This Simple Trick!
In today's fast-paced world of AI applications, optimizing performance should be one of your top priorities. This guide walks you through a simple yet powerful way to reduce OpenAI embedding response sizes by 75%—cutting them from 32 KB to just 8 KB per request. By switching from float32 to base64 encoding in your Retrieval-Augmented Generation (RAG) system, you can achieve a 4x efficiency boost, minimizing network overhead, saving costs, and dramatically improving responsiveness.

Let's consider the following scenario.

## Use case: RAG application processing a 10-page PDF

A user interacts with a RAG-powered application that processes a 10-page PDF and uses OpenAI embedding models to make the document searchable from an LLM. The goal is to show how optimizing embedding response size impacts overall system performance.

### Step 1: Embedding creation from the 10-page PDF

In a typical RAG system, the first step is to embed documents (in this case, a 10-page PDF) to store meaningful vectors that will later be retrieved for answering queries. The PDF is split into chunks. In our example, each chunk contains approximately 100 tokens (for the sake of simplicity), but the recommended chunk size varies based on the language and the embedding model.

Assumptions for the PDF:

- A 10-page PDF contains approximately 3325 tokens (about 300 tokens per page).
- The document is split into 34 chunks (each containing 100 tokens).
- Each chunk is then sent to the OpenAI embedding API for processing.

### Step 2: The user interacts with the RAG application

Once the embeddings for the PDF are created, the user interacts with the RAG application, querying it multiple times. Each query is processed by retrieving the most relevant pieces of the document using the previously created embeddings.

For simplicity, let's assume:

- The user sends 10 queries, each containing 200 tokens.
- Each query requires 2 embedding requests (since the query is split into 100-token chunks for embedding).
- After embedding the query, the system performs retrieval and returns the most relevant documents (the RAG response).

## Embedding response size

The OpenAI embedding models take an input of tokens (the text to embed) and return a list of numbers called a vector. This list of numbers represents the "embedding" of the input in the model so that it can be compared with another vector to measure similarity. In RAG, we use embedding models to quickly search for relevant data in a vector database.

By default, embeddings are serialized as an array of floating-point values in a JSON document, so each response from the embedding API is relatively large. The array values are 32-bit floating-point numbers (float32). Each float32 value occupies 4 bytes, and the embedding vector returned by models like OpenAI's text-embedding-ada-002 typically has 1536 dimensions.

The challenge is the size of the embedding response:

- Each response consists of 1536 float32 values (one per dimension).
- 1536 float32 values amount to 6144 bytes (1536 × 4 bytes).
- When serialized as UTF-8 for transmission over the network, this results in approximately 32 KB per response due to additional serialization overhead (such as delimiters).

## Optimizing embedding response size

One way to optimize the embedding response size is to serialize the embedding as base64. This encoding reduces the overall size while maintaining the integrity of the embedding information, leading to a significant reduction in the size of the embedding response.
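On the wire, a base64-encoded embedding is simply the raw buffer of little-endian float32 values encoded as a single string, so the client can recover the vector with a few lines of code. Below is a minimal C# sketch of that decoding step; the `base64Embedding` parameter is a placeholder for the string the API returns:

```csharp
using System;

// Decode a base64-encoded embedding back into a float32 vector:
// base64 string -> raw bytes -> float[] (4 bytes per dimension).
static float[] DecodeEmbedding(string base64Embedding)
{
    byte[] bytes = Convert.FromBase64String(base64Embedding);
    float[] vector = new float[bytes.Length / sizeof(float)];

    // Assumes a little-endian host, which matches the wire format.
    Buffer.BlockCopy(bytes, 0, vector, 0, bytes.Length);
    return vector;
}

// For a 1536-dimension model, 8192 base64 characters decode back to
// 6144 bytes (1536 x 4), ready for cosine-similarity search.
```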
With base64-encoded embeddings, the response size drops from 32 KB to approximately 8 KB, as the benchmark below demonstrates. The byte columns show the float32 baseline; the (+) columns show the base64 sizes with the reduction factor:

| 100-token embeddings | Min (bytes) | Max (bytes) | Mean (bytes) | base64 Min (+) | base64 Max (+) | base64 Mean (+) |
|---|---|---|---|---|---|---|
| text-embedding-3-small | 32673 | 32751 | 32703.8 | 8192 (4.0x, 74.9%) | 8192 (4.0x, 75.0%) | 8192 (4.0x, 74.9%) |
| text-embedding-3-large | 65757 | 65893 | 65810.2 | 16384 (4.0x, 75.1%) | 16384 (4.0x, 75.1%) | 16384 (4.0x, 75.1%) |
| text-embedding-ada-002 | 32882 | 32939 | 32909.0 | 8192 (4.0x, 75.1%) | 8192 (4.0x, 75.2%) | 8192 (4.0x, 75.1%) |

The source code of this benchmark can be found at https://github.com/manekinekko/rich-bench-node (kudos to Anthony Shaw for creating the rich-bench Python runner).

## Comparing the two scenarios

Let's break down and compare the total performance of the system in two scenarios:

- Scenario 1: embeddings serialized as float32 (32 KB per response)
- Scenario 2: embeddings serialized as base64 (8 KB per response)

### Scenario 1: embeddings serialized as float32

In this scenario, the PDF embedding creation and user queries involve larger responses due to float32 serialization. Let's compute the total response size for each phase.

1. Embedding creation for the PDF:
   - 34 embedding requests (one per 100-token chunk).
   - 34 responses of 32 KB each.
   - Total size for PDF embedding responses: 34 × 32 KB = 1088 KB = 1.088 MB.

2. User interactions with the RAG app:
   - Each user query consists of 200 tokens, split into 2 chunks of 100 tokens.
   - 10 user queries, requiring 2 embedding responses per query, at 32 KB each.
   - Embedding responses: 20 × 32 KB = 640 KB.
   - RAG responses: 10 × 32 KB = 320 KB.
   - Total size for user interactions: 640 KB + 320 KB = 960 KB.

3. Total size:
   - Embedding responses (PDF + user queries): 1088 KB + 640 KB = 1728 KB.
   - RAG responses: 320 KB.
   - Overall total: 1728 KB + 320 KB = 2048 KB = 2 MB.

### Scenario 2: embeddings serialized as base64

In this optimized scenario, the embedding response size is reduced to 8 KB by using base64 encoding.

1. Embedding creation for the PDF:
   - 34 embedding requests.
   - 34 responses of 8 KB each.
   - Total size for PDF embedding responses: 34 × 8 KB = 272 KB.

2. User interactions with the RAG app:
   - Embedding responses for 10 queries, 2 responses per query, at 8 KB each.
   - Embedding responses: 20 × 8 KB = 160 KB.
   - RAG responses: 10 × 8 KB = 80 KB.
   - Total size for user interactions: 160 KB + 80 KB = 240 KB.

3. Total size (optimized scenario):
   - Embedding responses (PDF + user queries): 272 KB + 160 KB = 432 KB.
   - RAG responses: 80 KB.
   - Overall total: 432 KB + 80 KB = 512 KB.

### Performance gain: comparison between scenarios

The optimized scenario (base64 encoding) is 4 times smaller than the original (float32 encoding): 2048 / 512 = 4. The total size reduction between the two scenarios is 2048 KB − 512 KB = 1536 KB = 1.536 MB, which is a (1536 / 2048) × 100 = 75% reduction.
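As a sanity check, the scenario arithmetic above can be reproduced in a few lines. This sketch uses only the per-response sizes and request counts stated in this article:

```csharp
using System;

// Per-response sizes in KB, as stated above.
const int Float32Kb = 32, Base64Kb = 8;

// 34 PDF chunks + (10 queries x 2 chunks) embedding responses,
// plus 10 RAG responses.
const int EmbeddingResponses = 34 + 10 * 2;
const int RagResponses = 10;

int TotalKb(int perResponseKb) =>
    (EmbeddingResponses + RagResponses) * perResponseKb;

int float32Total = TotalKb(Float32Kb); // 64 * 32 = 2048 KB
int base64Total = TotalKb(Base64Kb);   // 64 * 8  =  512 KB

Console.WriteLine($"float32: {float32Total} KB, base64: {base64Total} KB");
Console.WriteLine($"ratio: {float32Total / base64Total}x, " +
    $"saved: {100.0 * (float32Total - base64Total) / float32Total}%");
// Output: float32: 2048 KB, base64: 512 KB
//         ratio: 4x, saved: 75%
```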
## How to configure the base64 encoding format

To get a vector representation of a given input that can be easily consumed by machine learning models and algorithms, you usually call either the OpenAI API endpoint directly or use one of the official libraries for your programming language.

### Calling the OpenAI or Azure OpenAI APIs

Using the OpenAI endpoint:

```bash
curl -X POST "https://api.openai.com/v1/embeddings" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "input": "The five boxing wizards jump quickly",
    "model": "text-embedding-ada-002",
    "encoding_format": "base64"
  }'
```

Or, calling an Azure OpenAI resource:

```bash
curl -X POST "https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version=2024-10-21" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{
    "input": ["The five boxing wizards jump quickly"],
    "encoding_format": "base64"
  }'
```

### Using the OpenAI libraries

JavaScript/TypeScript:

```typescript
const response = await client.embeddings.create({
  input: "The five boxing wizards jump quickly",
  model: "text-embedding-3-small",
  encoding_format: "base64"
});
```

A pull request has been sent to the openai SDK for Node.js repository to make base64 the default encoding when the user does not provide one. Please feel free to give that PR a thumbs-up.

Python:

```python
embedding = client.embeddings.create(
    input="The five boxing wizards jump quickly",
    model="text-embedding-3-small",
    encoding_format="base64"
)
```

NB: from 1.62, the openai SDK for Python defaults to base64.

Java:

```java
EmbeddingCreateParams embeddingCreateParams = EmbeddingCreateParams
    .builder()
    .input("The five boxing wizards jump quickly")
    .encodingFormat(EncodingFormat.BASE64)
    .model("text-embedding-3-small")
    .build();
```

.NET: the openai-dotnet library already enforces base64 encoding and does not allow the user to set encoding_format (see the repository for details).

## Conclusion

By optimizing the embedding response serialization from float32 to base64, you achieve a 75% reduction in data size and a 4x performance improvement. This reduction significantly enhances the efficiency of your RAG application, especially when processing large documents like PDFs and handling multiple user queries. For 1 million users sending 1,000 requests per month, the total size saved would be approximately 22.9 TB per month, simply by using base64-encoded embeddings.

As demonstrated, optimizing the size of API responses is crucial not only for reducing network overhead but also for improving the overall responsiveness of your application. In a world where efficiency and scalability are key to delivering robust AI-powered solutions, this optimization can make a substantial difference in both performance and user experience.

Shoutout to my colleague Anthony Shaw for the long and great discussions we had about embedding optimizations.

# Learn how to develop innovative AI solutions with updated Azure skilling paths
The rapid evolution of generative AI is reshaping how organizations operate, innovate, and deliver value. Professionals who develop expertise in generative AI development, prompt engineering, and AI lifecycle management are increasingly valuable to organizations looking to harness these powerful capabilities while ensuring responsible and effective implementation. In this blog, we're excited to share our newly refreshed series of Plans on Microsoft Learn that aim to supply your team with the tools and knowledge to leverage the latest AI technologies, including:

- Find the best model for your generative AI solution with Azure AI Foundry
- Create agentic AI solutions by using Azure AI Foundry
- Build secure and responsible AI solutions and manage generative AI lifecycles

From sophisticated AI agents that can autonomously perform complex tasks to advanced chat models that enable natural human-AI collaboration, these technologies are becoming essential business tools rather than optional enhancements. Let's take a look at the latest developments and unlock their full potential with our curated training resources from Microsoft Learn.

## Simplify the process of choosing an AI model with Azure AI Foundry

Choosing the optimal generative AI model is essential for any solution, requiring careful evaluation of task complexity, data requirements, and computational constraints. Azure AI Foundry streamlines this decision-making process by offering diverse pre-trained models, fine-tuning capabilities, and comprehensive MLOps tools that enable businesses to test, optimize, and scale their AI applications while maintaining enterprise-grade security and compliance.

Our Plan on Microsoft Learn titled Find the best model for your generative AI solution with Azure AI Foundry will guide you through the process of discovering and deploying the best models for creating generative AI solutions, including how to:

- Learn about the differences and strengths of various language models.
- Integrate and use AI models in your applications to enhance functionality and user experience.
- Rapidly create intelligent, market-ready multimodal applications with Azure models, and explore industry-specific models.

In addition, you'll have the chance to take part in a Microsoft Azure Virtual Training Day, with interactive sessions and expert guidance to help you skill up on Azure AI features and capabilities. By engaging with this Plan on Microsoft Learn, you'll also have the chance to prove your skills and earn a Microsoft Certification.

## Leap into the future of agentic AI solutions with Azure

After choosing the right model for your generative AI purposes, our next Plan on Microsoft Learn goes a step further by introducing agentic AI solutions. A significant evolution in generative AI, agentic AI solutions enable autonomous decision-making, problem-solving, and task execution without constant human intervention. These AI agents can perceive their environment, adapt to new inputs, and take proactive actions, making them valuable across various industries.

In the Create agentic AI solutions by using Azure AI Foundry Plan on Microsoft Learn, you'll find out how developing agentic AI solutions requires a platform that provides scalability, adaptability, and security. With pre-built AI models, MLOps tools, and deep integrations with Azure services, Azure AI Foundry simplifies the development of custom AI agents that can interact with data, make real-time decisions, and continuously learn from new information.
You'll also:

- Learn how to describe the core features and capabilities of Azure AI Foundry, provision and manage Azure AI resources, create and manage AI projects, and determine when to use Azure AI Foundry.
- Discover how to customize with RAG in Azure AI Foundry, the Azure AI Foundry SDK, or Azure OpenAI Service to look for answers in documents.
- Learn how to use Azure AI Agent Service, a comprehensive suite of feature-rich, managed capabilities, to bring together the models, data, tools, and services your enterprise needs to automate business processes.

There's also a Microsoft Virtual Training Day featuring interactive sessions and expert guidance, and you can validate your skills by earning a Microsoft Certification.

## Safeguard your AI systems for security and fairness

Widespread AI adoption demands rigorous security, fairness, and transparency safeguards to prevent bias, privacy breaches, and vulnerabilities that could lead to unethical outcomes or non-compliance. Organizations must implement responsible AI through robust data governance, explainability, bias mitigation, and user safety protocols, while protecting sensitive data and ensuring outputs align with ethical standards.

Our third Plan on Microsoft Learn, Build secure and responsible AI solutions and manage generative AI lifecycles, introduces the basics of AI security and responsible AI to help increase the security posture of AI environments. You'll learn how to evaluate and improve generative AI outputs for quality and safety, and you'll also:

- Gain an understanding of the basic concepts of AI security and responsible AI to help increase the security posture of AI environments.
- Learn how to assess and improve generative AI outputs for quality and safety.
- Discover how to help reduce risks by using Azure AI Content Safety to detect, moderate, and manage harmful content.

You can also take part in an interactive, expert-guided Microsoft Virtual Training Day to deepen your understanding of core AI concepts.

## Got a skilling question? Our new Ask Learn AI assistant is here to help

Beyond our comprehensive Plans on Microsoft Learn, we're also excited to introduce Ask Learn, our newest skilling innovation! Ask Learn is an AI assistant that can answer questions, clarify concepts, and define terms throughout your training experience. It is your Copilot for getting skilled in AI, answering your questions within the Microsoft Learn interface so you don't have to search elsewhere for the information. Simply click the Ask Learn icon at the top corner of the page to activate it!

## Begin your generative AI skilling journey with curated Azure skilling Plans

Azure AI Foundry provides the necessary platform to train, test, and deploy AI solutions at scale, and with the expert-curated skilling resources available in our newly refreshed Plans on Microsoft Learn, your teams can accelerate the creation of intelligent, self-improving AI agents tailored to your business needs. Get started today!

- Find the best model for your generative AI solution with Azure AI Foundry
- Create agentic AI solutions by using Azure AI Foundry
- Build secure and responsible AI solutions and manage generative AI lifecycles