Getting Started with AI Agents: A Student Developer’s Guide to the Microsoft Agent Framework
AI agents are becoming the backbone of modern applications, from personal assistants to autonomous research bots. If you’re a student developer curious about building intelligent, goal-driven agents, Microsoft’s newly released Agent Framework is your launchpad. In this post, we’ll break down what the framework offers, how to get started, and why it’s a game-changer for learners and builders alike.

What Is the Microsoft Agent Framework?

The Microsoft Agent Framework is a modular, open-source toolkit designed to help developers build, orchestrate, and evaluate AI agents with minimal friction. It’s part of the AI Agents for Beginners curriculum, which walks you through foundational concepts using reproducible examples. At its core, the framework helps you:

- Define agent goals and capabilities
- Manage memory and context
- Route tasks through tools and APIs
- Evaluate agent performance with traceable metrics

Whether you’re building a research assistant, a coding helper, or a multi-agent system, this framework gives you the scaffolding to do it right.

What’s Inside the Framework?

Here’s a quick look at the key components:

- AgentRuntime: manages the agent lifecycle, memory, and tool routing
- AgentConfig: defines agent goals, tools, and memory settings
- Tool Interface: lets you plug in custom tools (e.g., web search, code execution)
- MemoryProvider: supports semantic memory and context-aware responses
- Evaluator: tracks agent performance and goal completion

The framework is built with Python and .NET and designed to be extensible, making it perfect for experimentation and learning.
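If those components feel abstract, here is a rough mental model in plain Python. To be clear, the class and method names below are invented for illustration; they are not the framework’s actual API, just a sketch of how a goal, a tool registry, and memory fit together:

```python
# Illustrative sketch only: these names are hypothetical, not the real
# Microsoft Agent Framework API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AgentConfig:
    goal: str
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    memory: List[str] = field(default_factory=list)

class Agent:
    def __init__(self, config: AgentConfig):
        self.config = config

    def run(self, task: str) -> str:
        # Route the task to the first tool whose name appears in the task text.
        # A real runtime would let the model decide which tool to call.
        for name, tool in self.config.tools.items():
            if name in task:
                result = tool(task)
                self.config.memory.append(result)  # keep context across turns
                return result
        return "No tool matched; answering from the model alone."

config = AgentConfig(
    goal="Answer research questions",
    tools={"search": lambda task: f"search results for: {task}"},
)
agent = Agent(config)
print(agent.run("search for agent frameworks"))
```

The real framework layers a model, tool schemas, and evaluation on top of this basic loop, but the shape (goal in config, tools routed by a runtime, results stored in memory) is the same idea.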
Try It: Your First Agent in 10 Minutes

Here’s a simplified walkthrough to get you started:

1. Clone the repo: git clone https://github.com/microsoft/ai-agents-for-beginners
2. Open the sample: cd ai-agents-for-beginners/14-microsoft-agent-framework
3. Install dependencies: pip install -r requirements.txt
4. Run the sample agent: python main.py

You’ll see a basic agent that can answer questions using a web search tool and maintain context across turns. From here, you can customize its goals, memory, and tools.

Why Student Developers Should Care

- Modular design: learn how real-world agents are structured, from memory to evaluation.
- Reproducible workflows: build agents that can be debugged, traced, and improved over time.
- Open source: contribute, fork, and remix with your own ideas.
- Community-ready: perfect for hackathons, research projects, or portfolio demos.

Plus, it aligns with Microsoft’s best practices for agent governance, making it a solid foundation for enterprise-grade development.

Where to Go Next

Here are a few ideas to take your learning further:

- Build a custom tool (e.g., a calculator or code interpreter)
- Swap in a different memory provider (like a vector DB)
- Create an evaluation pipeline for multi-agent collaboration
- Use it in a class project or student-led workshop
- Join the Microsoft Azure AI Foundry Discord (https://aka.ms/Foundry/discord) to share your project and build your AI engineer and developer connections
- Star and fork the AI Agents for Beginners repo for updates and new modules

Final Thoughts

The Microsoft Agent Framework isn’t just another library; it’s a teaching tool, a playground, and a launchpad for the next generation of AI builders. If you’re a student developer, this is your chance to learn by doing, contribute to the community, and shape the future of agentic systems. So fire up your terminal, fork the repo, and start building. Your first agent is just a few lines of code away.

A Recap of the Build AI Agents with Custom Tools Live Session
Artificial Intelligence is evolving, and so are the ways we build intelligent agents. On a recent Microsoft YouTube Live session, developers and AI enthusiasts gathered to explore the power of custom tools in AI agents using Azure AI Studio. The session walked through concepts, use cases, and a live demo that showed how integrating custom tools can bring a new level of intelligence and adaptability to your applications.

🎥 Watch the full session here: https://www.youtube.com/live/MRpExvcdxGs?si=X03wsQxQkkshEkOT

What Are AI Agents with Custom Tools?

AI agents are essentially smart workflows that can reason, plan, and act, powered by large language models (LLMs). While built-in tools like search, calculators, or web APIs are helpful, custom tools allow developers to tailor agents to business-specific needs. For example:

- Calling internal APIs
- Accessing private databases
- Triggering backend operations like ticket creation or document generation

Learn Module Overview: Build Agents with Custom Tools

To complement the session, Microsoft offers a self-paced Microsoft Learn module that gives step-by-step guidance: Explore the module.

Key learning objectives:

- Understand why and when to use custom tools in agents
- Learn how to define, integrate, and test tools using Azure AI Studio
- Build an end-to-end agent scenario using custom capabilities

Hands-on exercise: The module includes a guided lab where you:

- Define a tool schema
- Register the tool within Azure AI Studio
- Build an AI agent that uses your custom logic
- Test and validate the agent’s response

Highlights from the Live Session

Here are some gems from the session:

- Real-world use cases: automating customer support, connecting to CRMs, and more
- Tool manifest creation: learn how to describe a tool in a machine-understandable way
- Live Azure demo: see exactly how to register tools and invoke them from an AI agent
- Tips and troubleshooting: best practices and common pitfalls when designing agents

Want to Get Started?
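As a small taste of the lab’s first step (defining a tool schema) before you open Azure AI Studio: a custom tool is typically described by a JSON-style schema the model can read. The tool name, fields, and shape below are invented for illustration; the module walks you through the exact manifest format Azure AI Studio expects.

```python
# Hypothetical example of a tool schema for a "create support ticket" tool.
# The names and fields are made up; Azure AI Studio defines the real format.
import json

create_ticket_schema = {
    "name": "create_support_ticket",
    "description": "Create a support ticket in the internal helpdesk system.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Short summary of the issue"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

print(json.dumps(create_ticket_schema, indent=2))
```

The key idea is that the description fields are written for the model: the clearer they are, the better the agent decides when and how to call your tool.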
If you’re a developer, AI enthusiast, or product builder looking to elevate your agent’s capabilities, custom tools are the next step. Start building your own AI agents by combining the power of:

- The Microsoft Learn module
- The YouTube Live session

Final Thoughts

The future of AI isn’t just about smart responses; it’s about intelligent actions. Custom tools enable your AI agent to do things, not just say things. With Azure AI Studio, building a practical, action-oriented AI assistant is more accessible than ever.

Learn More and Join the Community

- Learn more about AI agents with the AI Agents for Beginners open-source course: https://aka.ms/ai-agents-beginners
- Join the Azure AI Foundry Discord channel to continue the discussion and learning: https://aka.ms/AI/discord
- Have questions or want to share what you’re building? Let’s connect on LinkedIn or drop a comment under the YouTube video!

How to build Tool-calling Agents with Azure OpenAI and LangGraph
Introducing MyTreat

Our demo is a fictional website that shows customers their total bill in dollars, with the option of seeing the total in their local currency. The button sends a request to the Node.js service, and a response is returned by our agent using the tool it chooses. Let’s dive in and understand how this works from a broader perspective.

Prerequisites

- An active Azure subscription. You can sign up for a free trial, or get $100 worth of credits on Azure every year if you are a student.
- A GitHub account (optional)
- Node.js LTS 18+
- VS Code installed (or your favorite IDE)
- Basic knowledge of HTML, CSS, and JS

Creating an Azure OpenAI Resource

Go over to your browser and key in portal.azure.com to access the Microsoft Azure Portal. Navigate to the search bar and type Azure OpenAI, then click on + Create. Fill in the input boxes with appropriate values, for example as shown below, then press Next until you reach Review + submit, and finally click Create. After the deployment is done, go to the deployment and access the Azure AI Foundry portal using the button as shown below. You can also use the link as demonstrated below.

In the Azure AI Foundry portal, we have to create our model instance, so go over to Model Catalog on the left panel beneath Get Started. Select a desired model; in this case I used gpt-35-turbo for chat completion (in your case use gpt-4o). Here is one way of doing this:

1. Choose a model (gpt-4o)
2. Click on Deploy
3. Give the deployment a new name, e.g. myTreatmodel, then click Deploy and wait for it to finish

On the left panel, go over to Deployments and you will see the model you have created.

Access your Azure OpenAI Resource Key

Go back to the Azure portal, and specifically to the deployment instance that we have, and select Resource Management on the left panel. Click on Keys and Endpoints. Copy any of the keys as shown below and keep it very safe, as we will use it in our .env file.
Configuring your project

Create a new project folder on your local machine and add these variables to the .env file in the root folder:

AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_VERSION="2024-08-01-preview"
LANGCHAIN_TRACING_V2="false"
LANGCHAIN_CALLBACKS_BACKGROUND="false"
PORT=4556

Starting a new project

Go over to https://github.com/tiprock-network/mytreat.git and follow the instructions to set up the new project. If you do not have Git installed, click the Code button and press Download ZIP. This will let you get the project folder and follow the same setup procedure.

Creating a custom tool

The math tool was created in the utils folder. The code shown below uses tool from LangChain to build a tool, and the tool’s schema is defined using Zod, a library that helps validate an object’s property values. The price function takes in an array of prices and the exchange rate, adds the prices up, and converts the total using the exchange rate:

import { tool } from '@langchain/core/tools'
import { z } from 'zod'

const priceConv = tool((input) => {
  // add up the prices after turning each one into a number
  let sum = 0
  input.prices.forEach((price) => {
    let price_check = parseFloat(price)
    sum += price_check
  })
  // convert the total using the exchange rate
  let final_price = parseFloat(input.exchange_rate) * sum
  return final_price
}, {
  name: 'add_prices_and_convert',
  description: 'Add prices and convert based on exchange rate.',
  schema: z.object({
    prices: z.number({
      required_error: 'Price should not be empty.',
      invalid_type_error: 'Price must be a number.'
    }).array().nonempty().describe('Prices of items listed.'),
    exchange_rate: z.string().describe('Current currency exchange rate.')
  })
})

export { priceConv }

Utilizing the tool

In the controllers folder we then bring the tool in by importing it. After that, we pass it into our array of tools.
Notice that we have the Tavily Search Tool; you can learn how to implement it in the Additional Reads section, or simply remove it.

Agent Model and the Call Process

This code defines an AI agent using LangGraph and LangChain.js, powered by GPT-4o from Azure OpenAI. It initializes a ToolNode to manage tools like priceConv and binds them to the agent model. The StateGraph handles decision-making, determining whether the agent should call a tool or return a direct response. If a tool is needed, the workflow routes the request accordingly; otherwise, the agent responds to the user. The callModel function invokes the agent, processing messages and ensuring seamless tool integration.

The searchAgentController is a GET endpoint that accepts user queries (text_message). It processes input through the compiled LangGraph workflow, invoking the agent to generate a response. If a tool is required, the agent calls it before finalizing the output. The response is then sent back to the user, ensuring dynamic and efficient tool-assisted reasoning.
//create tools the agent will use
//const agentTools = [new TavilySearchResults({maxResults:5}), priceConv]
const agentTools = [priceConv]
const toolNode = new ToolNode(agentTools)

const agentModel = new AzureChatOpenAI({
  model: 'gpt-4o',
  temperature: 0,
  azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: AZURE_OPENAI_API_INSTANCE_NAME,
  azureOpenAIApiDeploymentName: AZURE_OPENAI_API_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: AZURE_OPENAI_API_VERSION
}).bindTools(agentTools)

//decide whether to continue or stop
const shouldContinue = (state) => {
  const { messages } = state
  const lastMessage = messages[messages.length - 1]
  //upon a tool call we go to tools
  if ("tool_calls" in lastMessage && Array.isArray(lastMessage.tool_calls) && lastMessage.tool_calls?.length) return "tools";
  //if no tool call is made we stop and return back to the user
  return END
}

const callModel = async (state) => {
  const response = await agentModel.invoke(state.messages)
  return { messages: [response] }
}

//define a new graph
const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, ["tools", END])
  .addEdge("tools", "agent")

const appAgent = workflow.compile()

Frontend

The frontend is a simple HTML + CSS + JS stack that demonstrates how you can use an API to integrate this AI agent into your website. It sends a GET request and uses the response to display the right answer. Below is the controller that handles that request:

const searchAgentController = async (req, res) => {
  //get the human text
  const { text_message } = req.query
  if (!text_message) return res.status(400).json({
    message: 'No text sent.'
  })
  //invoke the agent
  const agentFinalState = await appAgent.invoke(
    { messages: [new HumanMessage(text_message)] },
    { streamMode: 'values' }
  )
  res.status(200).json({
    text: agentFinalState.messages[agentFinalState.messages.length - 1].content
  })
}

There you go! We have successfully created a basic tool-calling agent using Azure OpenAI and LangChain. Go ahead and expand the code base to your liking. If you have questions, you can comment below or reach out on my socials.

Additional Reads

- Azure OpenAI Service Models
- Generative AI for Beginners
- AI Agents for Beginners Course
- LangGraph Tutorial
- Develop Generative AI Apps in Azure AI Foundry Portal

Learn How to Build Smarter AI Agents with Microsoft’s MCP Resources Hub
If you’ve been curious about how to build your own AI agents that can talk to APIs, connect with tools like databases, or even follow documentation, you’re in the right place. Microsoft has created something called MCP, which stands for Model Context Protocol. And to help you learn it step by step, they’ve made an amazing MCP Resources Hub on GitHub. In this blog, I’ll walk you through what MCP is, why it matters, and how to use this hub to get started, even if you’re new to AI development.

What is MCP (Model Context Protocol)?

Think of MCP as a communication bridge between your AI model and the outside world. Normally, when we chat with AI (like ChatGPT), it only knows what’s in its training data. But with MCP, you can give your AI real-time context from:

- APIs
- Documents
- Databases
- Websites

This makes your AI agent smarter and more useful, just like a real developer who looks up things online, checks documentation, and queries databases.

What’s Inside the MCP Resources Hub?

The MCP Resources Hub is a collection of everything you need to learn MCP:

- Videos
- Blogs
- Code examples

Here are some beginner-friendly videos that explain MCP:

- VS Code Agent Mode Just Changed Everything: See how VS Code and MCP build an app with AI, connecting to a database and following docs.
- The Future of AI in VS Code: Learn how MCP makes GitHub Copilot smarter with real-time tools.
- Build MCP Servers using Azure Functions: Host your own MCP servers using Azure in C#, .NET, or TypeScript.
- Use APIs as Tools with MCP: See how to use APIs as tools inside your AI agent.
- Blazor Chat App with MCP + Aspire: Create a chat app powered by MCP in .NET Aspire.

Tip: Start with the VS Code videos if you’re just beginning.

Blogs: Deep Dives and How-To Guides

Microsoft has also written blogs that explain MCP concepts in detail. Some of the best ones include:

- Build AI agent tools using remote MCP with Azure Functions: Learn how to deploy MCP servers remotely using Azure.
- Create an MCP Server with Azure AI Agent Service: Enables developers to create an agent with Azure AI Agent Service and use the Model Context Protocol (MCP) for consumption of the agents in compatible clients (VS Code, Cursor, Claude Desktop).
- Vibe coding with GitHub Copilot: Agent mode and MCP support: MCP allows you to equip agent mode with the context and capabilities it needs to help you, like a USB port for intelligence. When you enter a chat prompt in agent mode within VS Code, the model can use different tools to handle tasks like understanding database schema or querying the web.
- Enhancing AI Integrations with MCP and Azure API Management: Enhance AI integrations using MCP and Azure API Management.
- Understanding and Mitigating Security Risks in MCP Implementations: Overview of security risks and mitigation strategies for MCP implementations.
- Protecting Against Indirect Injection Attacks in MCP: Strategies to prevent indirect injection attacks in MCP implementations.
- Microsoft Copilot Studio MCP: Announcement of the Microsoft Copilot Studio MCP lab.
- Getting started with MCP for Beginners: A nine-part course on MCP clients and servers.

Code Repositories: Try It Yourself

Want to build something with MCP?
Microsoft has shared open-source sample code in Python, .NET, and TypeScript:

- Azure-Samples/remote-mcp-apim-functions-python (Python): Recommended for secure remote hosting. Sample Python Azure Functions demonstrating remote MCP integration with Azure API Management.
- Azure-Samples/remote-mcp-functions-python (Python): Sample Python Azure Functions demonstrating remote MCP integration.
- Azure-Samples/remote-mcp-functions-dotnet (C#): Sample .NET Azure Functions demonstrating remote MCP integration.
- Azure-Samples/remote-mcp-functions-typescript (TypeScript): Sample TypeScript Azure Functions demonstrating remote MCP integration.
- Microsoft Copilot Studio MCP (TypeScript): The Microsoft Copilot Studio MCP lab.

You can clone a repo, open it in VS Code, and follow the instructions to run your own MCP server.

Using MCP with the AI Toolkit in Visual Studio Code

To make your MCP journey even easier, Microsoft provides the AI Toolkit for Visual Studio Code. This toolkit includes:

- A built-in model catalog
- Tools to help you deploy and run models locally
- Seamless integration with MCP agent tools

You can install the AI Toolkit extension from the Visual Studio Code Marketplace. Once installed, it helps you:

- Discover and select models quickly
- Connect those models to MCP agents
- Develop and test AI workflows locally before deploying to the cloud

You can explore the full documentation here: Overview of the AI Toolkit for Visual Studio Code – Microsoft Learn. This is perfect for developers who want to test things on their own system without needing a cloud setup right away.

Why Should You Care About MCP?

Because MCP:

- Makes your AI tools more powerful by giving them real-time knowledge
- Works with GitHub Copilot, Azure, and VS Code tools you may already use
- Is open source and beginner-friendly, with lots of tutorials and sample code

It’s the future of AI development: connecting models to the real world.
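To make the "communication bridge" idea concrete, here is a toy round trip in plain Python: an agent sends a structured tool request to a server that owns the tool, and gets structured context back. The message shapes and tool below are deliberately simplified and invented; the real protocol is JSON-RPC based and defined by the MCP specification, with official SDKs in the repos above.

```python
# Toy illustration of the MCP idea, NOT the real protocol or SDK.
import json

# The "server" side owns the tools (here, a fake order-lookup tool).
TOOLS = {
    "lookup_order": lambda args: {"order_id": args["order_id"], "status": "shipped"},
}

def handle_request(raw: str) -> str:
    """Stand-in for an MCP server handling a tool-call request."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](req["arguments"])
    return json.dumps({"tool": req["tool"], "result": result})

# The "agent" side asks the server for real-time context.
request = json.dumps({"tool": "lookup_order", "arguments": {"order_id": "42"}})
response = handle_request(request)
print(response)
```

The value of a shared protocol is that any compliant client (VS Code, Copilot agent mode, your own app) can call any compliant server without custom glue code for each pairing.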
Final Thoughts

If you’re learning AI or building software agents, don’t miss this valuable MCP Resources Hub. It’s like a starter kit for building smart, connected agents with Microsoft tools. Try one video or repo today. Experiment. Learn by doing, and start your journey with the MCP for Beginners curriculum.

Protect AI apps with Microsoft Defender
Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed — all from one place. Whether it’s a sanctioned tool or a shadow AI app, you’re equipped to set the right policies and respond fast to emerging threats.

Microsoft Defender gives you the visibility to track complex attack paths — linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry.

Rob Lefferts, Microsoft Security CVP, joins me in the Mechanics studio to share how you can safeguard your AI-powered environment with a unified security approach.

- Identify and protect apps. Instantly surface all generative AI apps in use across your org — even unsanctioned ones. How to use Microsoft Defender for Cloud Apps.
- Extend AI security to internally developed apps. Get started with Microsoft Defender for Cloud.
- Respond with confidence. Stop attacks in progress and ensure sensitive data stays protected, even when users try to bypass controls. Get full visibility in Microsoft Defender incidents.

Watch our video.

QUICK LINKS:
00:00 — Stay in control with Microsoft Defender
00:39 — Identify and protect AI apps
02:04 — View cloud apps and websites in use
04:14 — Allow or block cloud apps
07:14 — Address security risks of internally developed apps
08:44 — Example in-house developed app
09:40 — System prompt
10:39 — Controls in Azure AI Foundry
12:28 — Defender XDR
14:19 — Wrap up

Link References

Get started at https://aka.ms/ProtectAIapps

Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries

Talk with other IT pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog

Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

Keep getting this insider knowledge, join us on social:
- Follow us on Twitter: https://twitter.com/MSFTMechanics
- Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
- Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
- Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

- While generative AI can help you do more, it can also introduce new security risks. Today, we’re going to demonstrate how you can stay in control with Microsoft Defender to discover the GenAI cloud apps that people in your organization are using right now and approve or block them based on their risk. And for your in-house developed AI apps, we’ll look at preventing jailbreaks and prompt injection attacks, along with how everything comes together with Microsoft Defender incident management to give you complete visibility into your events. Joining me once again to demonstrate how to get ahead of everything is Microsoft Security CVP, Rob Lefferts. Welcome back.

- So glad to be back.

- It’s always great to have you on to keep us ahead of the threat landscape. In fact, since your last time on the show, we’ve seen a significant increase in the use of generative AI apps, and some of them are sanctioned by IT, but many of them are not. So what security concerns does this raise?

- Each of those apps really carries its own risk, and even in-house developed apps aren’t necessarily immune.
We see some of the biggest risks with consumer apps, especially the free ones, which are often designed to collect training data as users upload files into them or paste content into their prompts that can then be used to retrain the underlying model. So, before you know it, your data might be part of the public domain — that is, unless you get ahead of it.

- And as you showed, this use of your data is often written front and center in the terms and conditions of these apps.

- True, but not everyone reads all the fine print. To be clear, people go into these apps with good intentions, to work more efficiently and get more done, but they don’t always know the risks; and that’s where we give you the capabilities you need to identify and protect generative AI SaaS apps using Microsoft Defender for Cloud Apps. And you can combine this with Microsoft Defender for Cloud for your internally developed apps, alongside the unified incident management capabilities in Microsoft Defender XDR, where the activities from both of these services and other connected systems come together in one place.

- So given just how many cloud apps there are out there, and a lot of companies building their own apps, where would you even start?

- Well, for most orgs, it starts with knowing which external apps people in your company are using. If you don’t have proactive controls in place yet, there’s a pretty good chance that people are bringing their own apps. Now, to find out what they’re using, right from the unified Defender portal you can use Microsoft Defender for Cloud Apps for a complete view of cloud apps and websites in use inside your organization. The signal comes in from Defender-onboarded computers and phones. And if you’re not already using Defender for Cloud Apps, let me start by showing you the Cloud app catalog. Our researchers at Microsoft are continually identifying and classifying new cloud apps as they surface.
There are over 34,000 apps across all of these filterable categories, all based on best-practice use cases across industries. Now, if I scroll back up to Generative AI, you’ll see that there are more than 1,000 apps. And I’ll click on this control to filter the list down — and it’s a continually expanding list. We even add to it when existing cloud apps integrate new GenAI capabilities.

Now, once your signal starts to come in from your managed devices, moving back over to the dashboard, you’ll see that I have visibility into the full breadth of cloud apps in use, including generative AI apps and lots of other categories. The report under Discovered apps provides visibility into the cloud apps with the broadest use within your managed network. And from there, you can again see categories of discovered apps. I’ll filter by Generative AI again, and this time it returns the specific apps in use in my org. Like before, each app has a defined risk score of 0 to 10, with 10 being the best, based on a number of parameters. And if I click into any one of them, like Microsoft Copilot, I can see the details as well as how they fare across general areas, a breadth of security capabilities, compliance with standards and regulations, and whether they appear to meet legal and privacy requirements.

- And this can save a lot of valuable time, especially when you’re trying to get ahead of risks.

- And Defender for Cloud Apps doesn’t just give you visibility. For your managed devices enrolled into Microsoft Defender, it also has controls that can either allow or block people from using defined cloud apps, based on the policies you have set as an administrator. From each cloud app, I can see an overview with activities surrounding the app with a few tabs. In the cloud app usage tab, I can drill in even more to see usage, users, IP addresses, and incident details. I’ll dig into Users, and here you can see who has used this app in my org.
If I head back to my filtered view of generative AI apps in use, on the right you can see options to either sanction apps so that people can keep using them, or unsanction them to block them outright from being used. But rather than unsanction these apps one by one like Whack-a-Mole, there’s a better way, and that’s with automation based on the app’s risk score level. This way, you’re not manually configuring 1,000 apps in this category; nobody wants to do that.

So I’ll head over to policy management, and to make things easier as new apps emerge, you can set up policies based on the risk score thresholds that I showed earlier, or other attributes. I’ll create a new policy, and from the dropdown, I’ll choose app discovery policy. Now I’ll name it Risky AI apps, and I can set the policy severity here too. Next, I’m going to select a filter: I’ll choose category first, keep equals, and then scroll all the way down to Generative AI and pick that. Then I need to add another filter. In this case, I’m going to find and choose risk score.

I’ll pause for a second. What I want to happen is that when a new app is documented, or an existing cloud app incorporates new GenAI capabilities and meets my category and risk conditions, I want Defender for Cloud Apps to automatically unsanction those apps to stop people from using them on managed devices. So back in my policy, I can adjust this slider here for risk score. I’ll set it so that any app with a risk score of 0 to 6 will trigger a match. And if I scroll down a little more, this is the important part of doing the enforcement: I’ll choose tag app as unsanctioned and hit create to make it active.

With that, my policy is set, and the next time my managed devices are synced with policy, Defender for Endpoint will block any generative AI app with a matching risk score. Now, let’s go see what it looks like. If I move over to a managed device, you’ll remember one of our four generative AI apps was something called Fakeyou.
I have to be a little careful with how I enunciate that app name, and this is what a user would see. It’s clearly marked as being blocked by their IT organization, with a link to visit the support page for more information. And this works with iOS, Android, Mac, and, of course, Windows devices once they are onboarded to Defender.

- Okay, so now you can see and control which cloud apps are in use in your organization, but what about those in-house developed apps? How would you control the AI risks there?

- So internally developed apps and enterprise-grade SaaS apps, like Microsoft Copilot, would normally have the controls and terms around data usage in place to prevent data loss and disallow vendors from training their models on your data. That said, there are other types of risks, and that’s where Defender for Cloud comes in. If you’re new to Defender for Cloud, it connects the security team and developers in your company. For security teams, there’s cloud security posture management for your apps, which surfaces actions to predict and gives you recommendations for preventing breaches before they happen. For cloud infrastructure and workloads, it gives you insights to highlight risks and guide you with specific protections that you can implement for all of your virtual machines and your data infrastructure, including databases and storage. For your developers using DevOps, you can even see best-practice insights and associated risks with API endpoints in use, and in containers see misconfigurations, exposed secrets, and vulnerabilities. And for cloud infrastructure entitlement management, you can find out where you have potentially overprovisioned or inactive entitlements that could lead to a breach. And the nice thing is that from the central SecOps team perspective, these signals all flow into Microsoft Defender for end-to-end security tracking. In fact, I have an example here.
This is an in-house developed app running on Azure that helps an employee input things like address, tax information, and bank details for depositing your salary, and find information on benefits options that employees can enroll into. It’s a pretty important app, so it’s essential that the right protections are in place. And for anyone who’s entered a new job right after graduation, it can be confusing to know what benefits options to choose from — things like a 401k or IRA, for example, in the U.S., or whether to enroll in an employee stock purchasing program. It’s actually a really good scenario for generative AI when you think about it. And if you can act on the options it gives you to enroll into these services, again, it’s super helpful for the employees — and important to have the right controls in place. Obviously, you don’t want your salary, stock, or benefits going into someone else’s account.

So if you’re familiar with how generative AI apps work, most use what’s called a system prompt to enforce basic rules. But people, especially modern adversaries, are getting savvy to this and figuring out how to work around these basic guardrails: for example, by telling these AI tools to ignore their instructions. And I can show you an example of that. This is our app’s system prompt, and you’ll see that we’ve instructed the AI not to display ID numbers, account numbers, financial information, or tax elections, with examples given for each.

Now, I’ll move over to a running session with this app. I’ve already submitted a few prompts. And in the third one, with a gentle bit of persuasion — basically telling it that I’m a security researcher so that the AI model will ignore its instructions — it’s displaying information that my company and my dev team did not want it to display. This app even lets me update the bank account IBAN number with a prompt: Sorry, Adele. Fortunately, there’s a fix.
Using controls as part of Azure AI Foundry, we can prevent this information from getting displayed to our user and potentially any attacker if their credentials or token has been compromised. So this is the same app on the right with no changes to the system message behind it, and I’ll enter the prompts in live this time. You’ll see that my exact same attempts to get the model to ignore its instructions no matter what I do, even as a security researcher, have been stopped in this case using Prompt Shields and have been flagged for immediate response. And these types of controls are even more critical as we start to build more autonomous agentic apps that might be parsing messages from external users and automatically taking action. - Right, and as we saw in the generated response, protection was enforced, like you said, using content safety controls in Azure AI Foundry. - Right, and those activities are also passed to Defender XDR incidents, so that you can see if someone is trying to work around the rules that your developers set. Let me quickly show you where these controls were set up to defend our internal app against these types of prompt injection or jailbreak attempts. I’m in the new Azure AI Foundry portal under safety + security for my app. The protected version of the app has Prompt shields for jailbreak and indirect attacks configured here as input filters. That’s all I had to do. And what I showed before was a direct jailbreak attack. There can also be indirect attacks. These methods are a little sneakier where the attacker, for example, might poison reference data upstream with maybe an email sent previously or even an image with hidden instructions, which gets added to the prompt. And we protect you in both cases. - Okay, so now you have policy protections in place. Do I need to identify and track issues in their respective dashboards then? - You can, and depending on your role or how deep in any area you want to go, all are helpful. 
But if you want to stitch together multiple alerts as part of something like a multi-stage attack, that’s where Defender XDR comes in. It will find the connections between different events, whether the user succeeded or not, and give you the details you need to respond to them. I’m now in the Defender XDR portal and can see all of my incidents. I want to look at a particular incident, 206872. We have a compromised user account, but this time it’s not Jonathan Wolcott; it’s Marie Ellorriaga. - I have a feeling Jonathan’s been watching these shows on Mechanics to learn what not to do. - Good for him; it’s about time. So let’s see what Marie, or the person using her account, was up to. It looks like they found our Employee Assistant internal app, then tried to Jailbreak it. But because our protections were in place, this attempt was blocked, and we can see the evidence of that from this alert here on the right. Then we can see that they moved on to Microsoft 365 Copilot and tried to get into some other finance-related information. And because of our DLP policies preventing Copilot from processing labeled content, that activity also wouldn’t have been successful. So our information was protected. - And these controls get even more important, I think, as agents also become more mainstream. - That’s right, and those agents often need to send information outside of your trust boundary to reason over it, so it’s risky. And more than just visibility, as you saw, you have active protections to keep your information secure in real-time for the apps you build in-house and even shadow AI SaaS apps that people are using on your managed devices. - So for anyone who’s watching today right now, what do you recommend they do to get started? 
- So to get started on the things that we showed today, we’ve created end-to-end guidance for this that walks you through the entire process at aka.ms/ProtectAIapps; so that you can discover and control the generative AI cloud apps people are using now, build protections into the apps you’re building, and make sure that you have the visibility you need to detect and respond to AI-related threats. - Thanks, Rob, and, of course, to stay up-to-date with all the latest tech at Microsoft, be sure to keep checking back on Mechanics. Subscribe if you haven’t already, and we’ll see you again soon.

Webinar Series for Microsoft AI Agents
Join us for an exciting and insightful webinar series where we delve into the revolutionary world of Microsoft Copilot Agents in SharePoint, Agent Builder, Copilot Studio, and Azure AI Foundry! Discover how the integration of AI and intelligent agents is set to transform the future of business processes, making them more efficient, intelligent, and adaptive.

In this webinar series, we will explore:

The Power of Microsoft Copilot Agents: Learn how these advanced AI-driven agents can assist you in automating routine tasks, providing intelligent insights, and enhancing collaboration within your organization.
Seamless Integration with Microsoft Graph: See how Copilot Agents work seamlessly with Microsoft Graph data to improve information retrieval, boost productivity, and automate mundane tasks.
Real-World Applications: See real-world examples of how businesses are leveraging Copilot Agents to drive innovation and achieve their goals.
Future Trends and Innovations: Get a glimpse into the future of AI in business processes and how it will continue to evolve and shape the way we work.

Join us for the webinars every week at 11:30am PST / 1:30pm CST / 2:30pm EST. (Click on the webinar name to join the live meeting on the actual date/time, or use the .ics file at the bottom of the page to save the date on your calendar.)

April 2nd: Agents with SharePoint - Watch this webinar recording for an overview of SharePoint Agents and its key capabilities, enabling your organization with powerful agents that help you search for information within seconds in large SharePoint libraries with hundreds of documents.
April 9th: Agents with Agent Builder - Watch this webinar recording for an overview of Agent Builder and its key capabilities to enable your organization with "no code" agents that any business user can create within minutes.
April 16th: Agents with Copilot Studio - Join us for an overview of Copilot Studio and its key capabilities to enable your organization with "low code" agents that can help create efficiency in existing business processes. We will feature a few real-life demo examples and answer any questions.
April 24th: Agents with Azure AI Foundry - Join us for an overview of Azure AI Foundry and its key capabilities to enable your organization with AI agents. We will feature a demo of AI agents for prior authorization and provide resources to accelerate your next project.

Don't miss this opportunity to stay ahead of the curve and unlock the full potential of AI and Copilot Agents in your organization. Register now and be part of the future of business transformation!

Speakers:
Jaspreet Dhamija, Sr. MW Copilot Specialist - LinkedIn
Michael Gannotti, Principal MW Copilot Specialist - LinkedIn
Melissa Nelli, Sr. Biz Apps Technical Specialist - LinkedIn
Matthew Anderson, Director Azure Apps - LinkedIn
Marcin Jimenez, Sr. Cloud Solution Architect - LinkedIn

Thank you!

AI Agents: The Multi-Agent Design Pattern - Part 8
This blog post, Part 8 in a series on AI agents, explores the Multi-Agent Design Pattern, outlining the benefits and key components of building systems with multiple interacting agents. It details the scenarios where multi-agent systems excel (large workloads, complex tasks, diverse expertise), highlights their advantages over single-agent approaches (specialization, scalability, fault tolerance), and discusses the fundamental building blocks for implementation, including agent communication, coordination mechanisms, and architectural considerations. The post introduces common multi-agent patterns (group chat, hand-off, collaborative filtering) and illustrates these concepts with a refund process example. Finally, it includes a practical assignment and provides links to further resources and previous posts in the series.

AI Agents: Planning and Orchestration with the Planning Design Pattern - Part 7
This blog post, Part 7 in a series on AI agents, focuses on the Planning Design Pattern for effective task orchestration. It explains how to define clear goals, decompose complex tasks into manageable subtasks, and leverage structured output (e.g., JSON) for seamless communication between agents. The post includes code snippets demonstrating how to create a planning agent, orchestrate multi-agent workflows, and implement iterative planning for dynamic adaptation. It also links to a practical example notebook (07-autogen.ipynb) and further resources like AutoGen Magnetic One, encouraging readers to explore advanced planning concepts. Links to the previous posts in the series are provided for easy access to foundational AI agent concepts.

AI Agents: Building Trustworthy Agents - Part 6
This blog post, Part 6 in a series on AI agents, focuses on building trustworthy AI agents. It emphasizes the importance of safety and security in agent design and deployment. The post details a system message framework for creating robust and scalable prompts, outlining a four-step process from meta prompt to iterative refinement. It then explores various threats to AI agents, including task manipulation, unauthorized access, resource overloading, knowledge base poisoning, and cascading errors, providing mitigation strategies for each. The post also highlights the human-in-the-loop approach for enhanced trust and control, providing a code example using AutoGen. Finally, it links to further resources on responsible AI, model evaluation, and risk assessment, along with the previous posts in the series.

Step-by-Step Tutorial: Building an AI Agent Using Azure AI Foundry
This blog post provides a comprehensive tutorial on building an AI agent using Azure AI Agent service and the Azure AI Foundry portal. AI agents represent a powerful new paradigm in application development, offering a more intuitive and dynamic way to interact with software. They can understand natural language, reason about user requests, and take actions to fulfill those requests. This tutorial will guide you through the process of creating and deploying an intelligent agent on Azure. We'll cover setting up an Azure AI Foundry hub, crafting effective instructions to define the agent's behavior, including recognizing user intent, processing requests, and generating helpful responses. We'll also discuss testing the agent's conversational abilities and provide additional resources for expanding your knowledge of AI agents and the Azure AI ecosystem. This hands-on guide is perfect for anyone looking to explore the practical application of Azure's conversational AI capabilities and build intelligent virtual assistants. Join us as we dive into the exciting world of AI agents.
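The recognize-intent, process-request, generate-response loop that the tutorial's agent instructions encode can be sketched without any SDK. The toy below uses keyword rules purely as a stand-in (all names are illustrative; in the deployed agent, the model performs each step based on the instructions you write in the portal):

```python
# Toy, SDK-free sketch of the loop the tutorial's instructions describe:
# recognize intent, process the request, generate a response. Keyword
# rules stand in for the model so the control flow is visible.
INTENTS = {
    "weather": ("forecast", "weather", "rain"),
    "billing": ("invoice", "charge", "refund"),
}

def recognize_intent(utterance: str) -> str:
    """Map a user utterance to a known intent, or 'fallback'."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "fallback"

def respond(utterance: str) -> str:
    """Process the request and generate a reply for the recognized intent."""
    replies = {
        "weather": "Let me look up the forecast for you.",
        "billing": "I can help with billing and refund questions.",
        "fallback": "Could you rephrase that?",
    }
    return replies[recognize_intent(utterance)]
```

When you write instructions in the Azure AI Foundry portal, you are describing this same structure in natural language, which is why spelling out intents, behaviors, and fallback responses explicitly makes the agent noticeably more predictable.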