Secure your AI apps with user-context-aware controls | Microsoft Purview SDK
With built-in protections, prevent data leaks, block unsafe prompts, and avoid oversharing without rewriting your app. As a developer, focus on innovation while meeting evolving security and compliance requirements. And as a security admin, gain full visibility into AI data interactions, user activity, and policy enforcement across environments. Shilpa Ranganathan, Microsoft Purview Principal GPM, shares how new SDKs and Azure AI Foundry integrations bring enterprise-grade security to custom AI apps.

Stop data leaks. Detect and block sensitive content in real time with Microsoft Purview. Get started.

Adapt AI security based on user roles. Block or allow access without changing your code. See it here.

Prevent oversharing with built-in data protections. Only authorized users can see sensitive results. Start using Microsoft Purview.

QUICK LINKS:
00:00 — Microsoft Purview controls for developers
00:16 — AI app protected by Purview
02:23 — User context aware
03:08 — Prevent data oversharing
04:15 — Behind the app
05:17 — API interactions
06:50 — Data security admin AI app protection
07:26 — Monitor and Govern AI Interactions
08:30 — Wrap up

Link References:
Check out https://aka.ms/MicrosoftPurviewSDK
Microsoft Purview API Explorer at https://github.com/microsoft/purview-api-samples/
For the Microsoft Purview Chat App go to https://github.com/johnea-chva/purview-chat

Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

-You can now infuse the data security controls that you’re used to with Microsoft 365 Copilot into your own custom-built AI apps and agentic solutions, even those running in non-Microsoft clouds. In fact, today I’ll show you how we are helping developers and data security teams work together to prevent some of the biggest challenges around data leaks, oversharing, and compliance during AI interactions. With code-integrated controls, you can start secure: as a developer, you’re free to focus on building secure apps and agents, knowing that users and their activities with work data will be kept secure.
-All of this is made possible with Microsoft Purview controls built into Azure AI Foundry, along with the new developer SDK that protects data during AI interactions, where protections can vary based on specific user context, even when apps are running in non-Microsoft clouds. This helps the data in your apps and agents stay secure as policies evolve. And as a security admin, it gives you the visibility to evolve protections against leaks and risky insiders, maintain control of your data, prevent data oversharing to unintended recipients, and govern AI data in compliance with your industry and regional requirements by default. This approach makes it simple for you as a developer to translate the requirements of your data security teams as you build your apps using the Microsoft Purview SDK.

-In fact, let me show you an example of an AI app that’s protected by Microsoft Purview. This is an AI-powered company chat app. It’s a sample that you can find on GitHub, and it’s using Azure AI Foundry services on the backend for its large language model, and Azure Cosmos DB to retrieve relevant information based on a user’s prompt. I’m signed in as a user on the external vendor team.

-Now, I’m going to write a prompt that includes sensitive information with a credit card number, and immediately I see a response that this prompt violates our company’s sensitive information policy, which was set in Microsoft Purview, so our valuable information is protected. But the real power here is that the controls are user-context aware too. It’s not just blocking all credit cards, because there are easier ways to do that in code or with system prompts. Let me show you the same app, without code changes, for another user. I’m logged in as a member of the Customer Support Engineering team, and I’m allowed to interact with credit card numbers as part of my job, so I’m going to write the same prompt.
Now I’ll submit it, and you’ll see the app generates an appropriate response. And nothing changed in the app; the only change was my user context.

-That was an example of a prompt being analyzed prior to sending it to the application so that it could generate a response. Let me show you another example that proactively prevents data oversharing based on the information retrieval process used by the app. I’m still logged in with the user’s account on the Customer Support Engineering team, and I’ll prompt our app to send me information for recent transactions with Relecloud, with payment information, to look at a duplicate charge. This takes a moment, looks up the transaction information in our Cosmos DB backend, and presents the results to me.

-In this case, access permissions and protections have been applied to the backend data source using Microsoft Purview. And because our user account has permissions to that information, they received the response. This time, I’m signed in again as a user on the external vendor team. Again, I’ll write the same prompt, and because I should not and do not have access to retrieve that information, the app tells me that it can’t respond. Again, it is the same app without any code changes, and my user context prevented me from seeing information that I shouldn’t be able to see. As a developer, these controls are simple to integrate into your app code, and you don’t need to worry about the policies themselves or which users should be in scope for them.

-Let me show you. This is the code behind our app. First, you can see that it’s registered with Microsoft Entra to help connect the app with both organizational policies and the identity of the user interacting with the app, for user context, so that it can apply the right protection scope. This is all made possible using access tokens once the user has logged in.
The app then establishes the API connection with Microsoft Purview to call the protection scopes API, as well as the process content API, so that it can check whether the submitted prompt or the response is allowed based on existing data security, access, and compliance policies. Based on what’s returned, the app either continues or informs the user of the policy violation.

-Now that you’ve seen what’s behind the app, let me show you the actual API interactions between our app and Microsoft Purview. And for that, I’ll use sample code that we’ve also published to GitHub to view the raw API responses in real time. This is the Purview API Explorer app. It’s connected to the Microsoft Graph, as you can see with the Request URI. I can use it to view protections and even view how content gets processed in real time, which I’ll do here. Once the user logs in, you’ll see that with the first API, for protection scopes, the application sends the user context and application token, as well as the activities that the app supports, like upload text and download text, as noted here, for our prompts.

-Once the request is sent to the API, Purview responds to the application to tell it what to do, in this case for uploading and downloading text. The application will wait for Purview’s response prior to displaying it back to the user. Now I’ll go to Start a Conversation. And on the left, in the Request Body, you can see my raw prompt, again with sensitive information contained in the text, along with other metadata properties. I’ll send the request. On the right, I can see the details of the content response from the API. In this case, it found a policy match and responded with the action RestrictedAccess and the restriction action to block. That’s what you’d need to know as a developer to protect your AI apps.

-Then, as a data security admin, for everything to work as demonstrated, there are a few things you’ll need configured in Microsoft Purview.
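To make the check-before-show flow concrete, here is a minimal sketch. It is illustrative only: the result shape (a RestrictedAccess policy action with a block restriction) is an assumption modeled on the demo response above, and `purview_check` is a hypothetical stand-in for the real, authenticated protection scopes and process content calls.

```python
# Sketch of the gating pattern: check the prompt with Purview before the
# model sees it, and check the model's answer before the user sees it.
# The result dict shape is an illustrative assumption, not the exact
# API contract.

def is_blocked(result):
    """True if any returned policy action restricts access with a block."""
    for action in result.get("policyActions", []):
        if (action.get("action") == "RestrictedAccess"
                and action.get("restrictionAction") == "block"):
            return True
    return False

def handle_prompt(prompt, purview_check, generate):
    """purview_check stands in for the real process content call."""
    if is_blocked(purview_check(prompt)):
        return "Blocked: this prompt violates a sensitive information policy."
    answer = generate(prompt)
    if is_blocked(purview_check(answer)):
        return "Blocked: the response was withheld by policy."
    return answer

# Stub policy check that flags a fake credit card number
def stub_check(text):
    if "4111" in text:
        return {"policyActions": [{"action": "RestrictedAccess",
                                   "restrictionAction": "block"}]}
    return {"policyActions": []}

print(handle_prompt("charge card 4111 1111 1111 1111", stub_check, lambda p: "ok"))
print(handle_prompt("summarize recent transactions", stub_check, lambda p: "ok"))  # ok
```

In the real app, the check is a Microsoft Graph call made with the signed-in user's Entra access token, which is why the same code produces different outcomes for different users.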
First, to protect against data loss of sensitive or high-value information, like I showed using credit cards, you will need data loss prevention policies in place. Second, to help prevent oversharing with managed database sources, like I showed from Cosmos DB, which also works with SQL databases, you’ll configure Information Protection policies. This ensures that your database instances are labeled, with corresponding access protections applied. Then, for visibility into activities with your connected apps, all prompt and response traffic is recorded and auditable. And for apps and agents running on Azure AI Foundry, it’s just one optional setting to light up native Microsoft Purview integration.

-In fact, here’s the level of visibility that you get as a data security admin. In DSPM for AI, you can see interactions and associated risks from your AI line-of-business apps running on Azure and other clouds once they are enlightened with Microsoft Purview integration. Here you can see user trends, applicable protections, compliance, and agent count. And across the broader Microsoft Purview solutions, all activity and interactions from your apps are also captured and protected, including Audit Search, so that you can discover all app interactions; Communication Compliance, for visibility into inappropriate interactions; and Insider Risk Management, as part of activities that establish risk. Integrating your apps with the Microsoft Purview SDK provides controls that free you up as a developer to focus on building secure apps and agents. At the same time, as the data security admin, it gives you continuous visibility to ensure that AI data interactions remain secure and compliant.

-To learn more, check out aka.ms/MicrosoftPurviewSDK. We’ve also put links to both sample apps in the description below to help you get started.
Keep checking back to Microsoft Mechanics for the latest updates, and thank you for watching.

Introducing Azure AI Foundry — Everything you need for AI development
Create agentic solutions quickly and efficiently with Azure AI Foundry. Choose the right models, ground your agents with knowledge, and seamlessly integrate AI into your development workflow — from early experimentation to production. Test, optimize, and deploy with built-in evaluation and management tools. See how to leverage the Azure AI Foundry SDK to code and orchestrate intelligent agents, monitor performance with tracing and assessments, and streamline DevOps with production-ready management. Yina Arenas, from the Azure AI Foundry team, shares its extensive capabilities as a unified platform that supports you throughout the entire AI development lifecycle.

Access models to power your agents. The model catalog in Azure AI Foundry gives you access to thousands of AI models, including top-tier LLMs & specialized models, with optimizations for cloud & edge deployment. Take a look.

Develop your custom agents. Work seamlessly with Azure AI Foundry inside VS Code, GitHub, and Copilot Studio. See how to integrate AI into your dev workflow.

Build AI-powered multi-agent workflows effortlessly. Automate tasks like research, writing, editing, and communication using one system. Get started with Azure AI Foundry. Watch our video here.

QUICK LINKS:
00:00 — Create agentic solutions with Azure AI Foundry
00:20 — Model catalog in Azure AI Foundry
02:15 — Experiment in the Azure AI Foundry playground
03:10 — Create and customize agents
04:13 — Assess and improve agents
05:58 — Monitor and manage apps
06:50 — Create a multi-agentic app in code
09:26 — Create a Sender agent
10:39 — How to connect orchestration logic
11:25 — Watch agents work
12:26 — Wrap up

Link References:
Get started with Azure AI Foundry at https://ai.azure.com
Video Transcript:

-If you’re looking to create agentic solutions and want to move quickly and efficiently, Azure AI Foundry is the one place for discovering and accessing the right building blocks for your agents, with everything you need for AI development. Today, I’ll share the essentials of Azure AI Foundry, starting with a tour of its extensive capabilities as a unified platform that supports you throughout the entire AI development lifecycle: from initial concept with early experimentation, coding in your preferred IDE, pre-production assessment, management in production, and beyond. That’s followed by a real example of the steps for creating a multi-agent application using the new Azure AI Agents service in Azure AI Foundry, all integrated with your code. Starting with our tour, you can easily reach Azure AI Foundry at ai.azure.com, and once you’ve created a project, panning down the left rail, you can quickly see the core experiences.

-First, the model catalog helps you discover and access a growing collection of thousands of models to power your individual agents as you build your system, including premium large language models from OpenAI, Meta, DeepSeek, Cohere, and more, as well as small language models like Microsoft Phi.
And of course, there are hundreds of open models, like those from Hugging Face, for you to try out. Models are also available by area of specialization.

-For example, there are regional and focused models to support interactions in different spoken languages, like Mistral for European languages and Jais for Arabic. And separately, there are industry-specific models that you can choose from. The entire model catalog is hosted on Microsoft’s supercomputer infrastructure in Azure for optimized cost performance. Next, in terms of model deployment, you can choose to run models on hosted hardware with managed compute, and for many of our popular premium models, you can use our serverless API option. As you use Azure AI Foundry, you can of course also bring your own models to run on your Azure infrastructure.

-Then, to help you choose the right model for your agent, you can easily experiment in our playground. For agents, you can add knowledge to ground your model. You can choose files to upload, use an existing search index, or add web knowledge using Microsoft Bing. There are also options to add data from Microsoft Fabric as well as SharePoint to connect with data in Microsoft 365. You can also define actions for your agents to perform, like calling APIs, functions, or using Code Interpreter to write and run Python code to automate processes. And back on the homepage, clicking into AI Services provides additional task-specific capabilities that you can use to augment your agents, like speech, translation, vision, and content safety. So right from the start of your application design, you can leverage Azure AI Foundry to evaluate AI models and services for your application.

-Next, to create and customize agents, the new Azure AI Agents service helps you orchestrate AI agents without managing the underlying resources. Importantly, everything you do in Azure AI Foundry is integrated with your coding workspaces.
In the code experience, you can take advantage of multiple templates, as well as a cloud-hosted, pre-configured dev environment to get started. And importantly, there’s integration with GitHub, your code in Visual Studio, and even Copilot Studio for your low-code apps, where you can connect to Azure AI services and more. This means that your work in Azure AI Foundry carries on seamlessly into your code and agents, or you can do everything from your code by using a single API and calling Azure AI Foundry capabilities as service endpoints when you create projects leveraging the Azure AI Foundry SDK.

-For example, you can connect to different models using the new Azure AI model inference endpoint, which lets you easily compare models without changing your underlying code. And as you create your agent, you can easily assess and improve the experience. In fact, Azure AI Foundry offers a range of capabilities to help you as you continuously iterate, with centralized observability such as application tracing for debugging and performance checks, along with detailed views of execution flows integrated with your Application Insights resources. Additionally, automated evaluations help you continuously assess the quality of AI outputs based on key metrics, like relevance to look at how well the model meets expectations, groundedness to see how well the model refers to your grounding data, fluency for the language proficiency of the answers, and more.

-From there, you can use this information to create reporting, set up alerts, and share dashboards with other stakeholders. You can also take advantage of built-in safety and security controls for text, image, and multimodal content that go beyond basic system prompt guardrails to automatically detect and optionally block unwanted inputs and outputs for content involving violence, hate, sexual, and self-harm topics.
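Because the model inference endpoint puts many models behind one API, comparing models is just a loop over model names. The sketch below uses a stub in place of the live client call (in the azure-ai-inference package that would be a `ChatCompletionsClient` against your endpoint, with credentials); treat the names here as illustrative.

```python
# One endpoint, many models: only the model name changes between calls,
# not the calling code. `complete` is a stand-in for the real client call.

def compare_models(prompt, model_names, complete):
    """Run the same prompt against each model via one shared endpoint."""
    return {name: complete(model=name, prompt=prompt) for name in model_names}

# Stub in place of a live, credentialed endpoint call
results = compare_models(
    "Summarize our Q3 results.",
    ["gpt-4o", "Phi-4", "Mistral-large"],
    complete=lambda model, prompt: f"[{model}] draft summary",
)
print(results["Phi-4"])  # [Phi-4] draft summary
```

With a real client, the `complete` callable would wrap the endpoint call, so swapping or adding models never touches the surrounding evaluation code.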
Azure AI Foundry services can also help you onboard more advanced techniques as you optimize the output of your AI applications. For example, built-in services like model fine-tuning let you adapt model output with specific training data sets that you define, helping you improve model accuracy and effectiveness in real-world applications. Additionally, integrations with Semantic Kernel and AutoGen, as well as LangChain, let you orchestrate execution flows for multi-agent processes, making it easier to embed AI into new or existing workflows.

-Then, as your apps move into production, we give you tools to monitor and manage resource utilization. Integration with Azure Monitor and Application Insights helps you quickly observe trends and get alerts for key generative AI metrics. And the centralized management center helps simplify ongoing resource management and governance tasks, like managing quota, access permissions, and connected resources.

-Additionally, built-in integration across Microsoft’s security and governance stack enables you to enforce organizational standards and compliance with Azure Policy, manage identity-based access to data and services with Microsoft Entra, leverage your data security and compliance from Microsoft Purview, and protect your AI apps at scale with ongoing threat detection and security posture management using Microsoft Defender.

-So, with our tour complete, next, let me show you how you can create a multi-agent application using Azure AI Foundry along with Semantic Kernel for orchestration. I’ll start by explaining the agentic app scenario, which should sound familiar if you’ve ever written a report. It’s a four-agent solution that can be initiated with any topic. There is a researcher agent that gathers information from the internet as my defined knowledge source. This process loops with the writer agent, which uses the information provided, or requests more, until it is satisfied.
The writer agent then creates the report and loops with the editor agent, which can request additional edits until it is satisfied. And once it has approved the report text, it shares the output with the sender agent, which emails the report using Outlook in Microsoft 365. These multi-agentic scenarios are similar in concept to microservices and other modular architectures. There are several benefits to breaking down a monolithic process, but now it’s got a new name.

-So let’s build it. I’ll begin in Azure AI Foundry, and on the Agents page I can see the agents that I’ve already started building, like the writer and the editor agents. The researcher and the sender agents are missing because we’re going to build them right now. I’ll start with the research agent as a new agent. Next, the setup pane on the right gives me my agent configuration options. I’ll give it a name, Research agent, and then under Deployment I can choose the model I want this agent to use. I’ll pick gpt-4o.

-Next, I’ll provide it with instructions for what it is supposed to do. Since it is the research agent, I’ll instruct it to use Bing search to find information. And because the research agent is part of this four-agent team, I’ll specify that it should not try to write the report, which is the job of the writer agent. It should just provide the data. Next, I’ll add a knowledge source. Again, we want Bing to ground the agent with public information from the web. Then I just need to select an Azure connection, and once I hit Connect, our agent is done. To try it out, I’ll use the playground. I’ll ask, “What is dot net,” and it will generate a summarized result using knowledge from Bing search. And because it is the research agent and not the writer, you will see that its results are super concise but dense with knowledge about the topic.

-Next, I could specify actions, but I don’t need to. This agent already has everything it needs.
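The portal configuration just shown (name, model deployment, instructions, knowledge) maps onto a create_agent call in code. A hedged sketch of the same research agent configuration: the commented client lines follow the azure-ai-projects package as documented, but verify the exact names against your SDK version; the endpoint is a placeholder, and the Bing knowledge/tool wiring is omitted.

```python
# The research agent configuration from the portal, expressed as plain data.
research_agent = {
    "model": "gpt-4o",  # the deployment chosen under Deployment
    "name": "Research agent",
    "instructions": ("Use Bing search to find information on the topic. "
                     "Do not write the report; only provide the data."),
}

# With a live Foundry project (requires Azure credentials; client names per
# the azure-ai-projects SDK as commonly documented -- verify for your version):
# from azure.identity import DefaultAzureCredential
# from azure.ai.projects import AIProjectClient
# client = AIProjectClient(endpoint="<your-project-endpoint>",
#                          credential=DefaultAzureCredential())
# agent = client.agents.create_agent(**research_agent)

print(research_agent["name"])  # Research agent
```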
And so our research agent is ready to go, and I can move on to creating our sender agent, which, by the way, is going to need some defined actions. To create our email sender agent, I’m going to switch over to VS Code and use the SDK. I have this Python file open, and if you look at the very bottom of the screen, there is a create_agent command. And just like we saw in the Azure AI Foundry portal, I can point it to the model for its deployment, define its name, and add its instructions, as well as its tools. As the email sender agent, we’ll provide it with Outlook as a tool, and when I run this file, it will create an agent inside of Azure AI Foundry.

-In fact, if I move back to our list of agents in the portal, we can see that our sender agent was just created. And so now, with all four of my agents created, it’s time to wire them up using Semantic Kernel. Back in VS Code, I’ll open my program file, where I’ve already started using Semantic Kernel to describe the broader process, and you will see all the logic for how the agents interact with each other. As I explained in the graphic, each agent needs to satisfy the requirement for its task before moving to the next step in the process.

-Now, you might be wondering how to connect the orchestration logic together with my four agents. Well, let me show you. Each agent has its own configuration file for our Semantic Kernel orchestration. I’ll connect the researcher agent config with the agent ID from Azure AI Foundry. So, if I go back to the Azure AI Foundry portal and select the researcher agent, I can just copy the agent ID and go back to add it to my code, and you would do this process for each of the agents. Now, with everything connected and complete, let’s try it out. Back in my code, I’ll go ahead and run it to see how well our agents work together. My program asks, “What would you like a report on?” Let’s make it python, but not the code, the snake.
While these agents work, I can watch exactly what they’re doing and comment on the process play by play.

-First, we can see the researcher agent pulling some material from the web. The writer agent can then pick things up. But wait, the writer agent isn’t easily satisfied and needs some additional research on user sentiment. The researcher agent comes back with that detail, but the writer agent still has questions and needs additional facts about the pet trade and other topics. The researcher agent, unfazed, comes back with that information. And once the writer is satisfied, it starts generating the report. When it’s finished, it sends the report to the editor agent, and it looks like the writer agent met all of the requirements, which is confirmed and approved by the editor agent, and the approval triggers the sender agent to send it out as an email. In fact, if I move over to Outlook, we can see the Report on Python Snakes just landed in my inbox. And that was just one example of how you can create agentic solutions to automate business processes.

-Azure AI Foundry helps you create powerful agents quickly and efficiently by providing a unified platform with extensive capabilities throughout the entire AI development lifecycle. To get started, just head over to ai.azure.com. Subscribe to Microsoft Mechanics if you haven’t already, and thank you for watching.
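The research, write, edit, send loop demonstrated above can be sketched with plain functions standing in for the four agents. This is a simplified stand-in, not the Semantic Kernel orchestration itself: the real app wires Azure AI Foundry agents together by their agent IDs, while here each "agent" is just a callable.

```python
# Toy version of the four-agent handoff: the editor can bounce the draft
# back to the writer until it approves, then the sender ships the report.

def run_report_pipeline(topic, research, write, edit, send, max_rounds=3):
    notes = research(topic)                 # researcher gathers material
    draft = write(topic, notes)             # writer produces a draft
    for _ in range(max_rounds):
        feedback = edit(draft)              # editor approves or requests edits
        if feedback == "approved":
            break
        draft = write(topic, notes + " | " + feedback)  # revise and re-edit
    return send(draft)                      # sender emails the final report

# Stub agents: the editor requests one revision, then approves
rounds = {"n": 0}
def stub_edit(draft):
    rounds["n"] += 1
    return "approved" if rounds["n"] > 1 else "add user sentiment"

sent = run_report_pipeline(
    "python (the snake)",
    research=lambda t: f"facts about {t}",
    write=lambda t, notes: f"report on {t} using {notes}",
    edit=stub_edit,
    send=lambda report: f"emailed: {report}",
)
print(sent.startswith("emailed:"))  # True
```

The revision loop mirrors the demo, where the writer went back to the researcher for user sentiment and pet-trade facts before the editor finally approved the report.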