Conditional Access for Agent Identities in Microsoft Entra
AI agents are rapidly becoming part of everyday enterprise operations: summarizing incidents, analyzing logs, orchestrating workflows, or even acting as digital colleagues. As organizations adopt these intelligent automations, securing them becomes just as important as securing human identities. Microsoft Entra introduces Agent Identities and extends Conditional Access to them, but with very limited controls compared to traditional users and workload identities. This blog breaks down what Agent Identities are, how Conditional Access applies to them, and what the current limitations are.

What Exactly Are Agent Identities?

Microsoft Entra now supports a new identity type designed specifically for AI systems:

- Agent Identity – like an app/service principal, but specialized for AI
- Agent User – an identity that behaves more like a human user
- Agent Blueprint – a template used to create agent identities

This model exists because AI systems behave differently than humans or applications: they can act autonomously, operate continuously, and make decisions without user input. AI-driven automation must be governed, and that's where Conditional Access comes in.

Conditional Access for Agents, but with Important Limitations

Today, Conditional Access for agent identities is purposely minimal. Microsoft clearly states that Conditional Access applies only when:

- An agent identity requests a token
- An agent user requests a token

It does NOT apply when:

- A blueprint acquires a token to create identities
- An agent performs an intermediate token exchange

What Controls Are Actually Available Today?

✔ Supported today:

- Identity targeting – you can include/exclude agent identities and agent users
- Block access – this is the only grant control currently available
- Agent risk (preview) – early-stage risk evaluation
- Sign-in evaluation – token acquisition is governed by CA

❌ Not supported today. These CA controls do not apply to agent identities:

- MFA
- Authentication strength
- Device compliance
- Approved client apps
- App protection policies
- Session controls
- User sign-in frequency
- Terms of Use
- Location conditions (network/device-based)
- Client apps (legacy/modern access)

Why? Because agents do not perform interactive authentication and do not use device signals or session context like humans. Their authentication is purely machine-driven.

How Conditional Access Works for Agents

When an agent identity (or agent user) requests a token, Microsoft Entra:

1. Identifies the requesting agent
2. Checks CA policy assignments
3. Evaluates any agent-risk conditions
4. Allows or blocks token issuance based on the result

That's it. No MFA prompt. No device check. No authentication strength evaluation. This makes CA for agents fundamentally different from CA for humans.

Why Is Conditional Access So Limited for Agents?

Two major reasons:

1. Agents cannot satisfy user-based controls. AI agents cannot perform MFA, use biometrics, run on compliant devices, or follow session prompts. These are human-driven processes.
2. Agents authenticate via secure credential flows. They use client credentials, federated identity credentials, and token exchange flows.

So CA is limited to identity-level allow/block and risk-based token decisions.

Practical Use Cases (Given Today's Limitations)

Even with limited controls, CA for agents is still important.
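Conceptually, the evaluation flow described earlier reduces to a small allow/block decision. The sketch below is plain Python pseudologic to illustrate that flow; it is not an Entra API, and the policy field names are invented for illustration:

```python
def evaluate_token_request(agent_id, agent_risk, policies):
    """Illustrative CA evaluation for an agent token request:
    identity targeting + risk condition -> allow or block."""
    for policy in policies:
        if agent_id in policy["excluded_agents"]:
            continue  # policy does not apply to this agent
        in_scope = policy["included_agents"] == "all" or agent_id in policy["included_agents"]
        risk_matches = agent_risk in policy["risk_levels"]
        if in_scope and risk_matches and policy["grant"] == "block":
            return "blocked"  # block is the only grant control available today
    return "token issued"

# A single hypothetical policy: block any agent at high risk,
# except one explicitly excluded identity.
policies = [{
    "included_agents": "all",
    "excluded_agents": ["agent-finance-bot"],
    "risk_levels": ["high"],
    "grant": "block",
}]

print(evaluate_token_request("agent-incident-summarizer", "high", policies))  # blocked
print(evaluate_token_request("agent-incident-summarizer", "low", policies))   # token issued
print(evaluate_token_request("agent-finance-bot", "high", policies))          # token issued (excluded)
```

Note there is no MFA, device, or session branch anywhere in this logic: identity targeting plus risk is the whole decision surface today.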
1. Stop compromised agents from continuing to operate. If Microsoft Entra detects high agent risk, CA can block token issuance, immediately halting the agent's ability to act.
2. Enforce separation of duties for AI agents. Even though you cannot apply MFA or authentication strength, you can separate agents into "allowed" vs. "blocked" groups and apply different CA rules per department or system.
3. Prevent AI sprawl. Large enterprises may generate hundreds of AI agents. CA gives central admin control: only approved, vetted agents can operate; others are blocked at token-request time.

Why Agent Blueprints Cannot Be Governed by CA

Blueprints are templates, not active identities. Blueprint token flows are system-level operations, not access attempts. Therefore:

- ❌ No CA evaluation
- ❌ No controls applied
- ❌ Not counted as agent activity

Only actual agent identities are governed by CA.

What the Future Might Include

Microsoft hints the capabilities will expand:

- Agent risk scoring
- Agent behaviour analytics
- More granularity in CA for agents
- Additional grant controls
- Policy scoping at the task or capability level

But as of today, CA for agents remains intentionally constrained to allow safe onboarding of the new identity type without accidental disruption.

Final Summary

Conditional Access for Agent Identities is currently a lightweight enforcement mechanism designed to block unauthorized or risky agents, not a full policy suite like we have for human users.
✔ What it does:

- Controls whether an agent identity can acquire a token
- Allows blocking specific agents
- Implements early agent-risk logic
- Applies Zero Trust principles at the identity perimeter

❌ What it does not do:

- Enforce MFA
- Enforce authentication strength
- Enforce device or location conditions
- Apply session controls
- Govern blueprints

As organizations adopt more autonomous agents, this foundational layer keeps AI identities visible and controllable, and sets the stage for richer governance in the future.

Unable to delete Foundry Agent identity Entra app in Azure
I'm trying to delete an Entra app in Azure that was created by a Foundry Agent identity blueprint, as it's currently unused and is causing Entra ID hygiene alerts. However, I'm getting an error saying that delete is not supported. Is there any other way to delete an unused Entra app for an agent identity blueprint?

Error detail: "Agent Blueprints are not supported on the API version used in this request."

Published agent from Foundry doesn't work at all in Teams and M365
I've switched to the new version of Azure AI Foundry (New) and created a project there. Within this project, I created an Agent and connected two custom MCP servers to it. The agent works correctly inside Foundry Playground and responds to all test queries as expected.

My goal was to make this agent available to my organization in Microsoft Teams / Microsoft 365 Copilot, so I followed all the steps described in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-copilot?view=foundry

Issue description

The first problems started at Step 8 (publishing the agent).

Organization scope publishing

I published the agent using Organization scope. The agent appeared in Microsoft Admin Center in the list of agents. However, when an administrator from my organization attempted to approve it, the approval always failed with a generic error: "Sorry, something went wrong." No diagnostic information, error codes, or logs were provided. We tried recreating and republishing the agent multiple times, but the result was always the same.

Shared scope publishing

As a workaround, I published the agent using Shared scope. In this case, the agent finally appeared in Microsoft Teams and Microsoft 365 Copilot. I can now see the agent here:

- Microsoft Teams → Copilot
- Microsoft Teams → Applications → Manage applications

However, this revealed the main issue.

Main problem

The published agent cannot complete any query in Teams, despite the fact that:

- The agent works perfectly in Foundry Playground
- The agent responded correctly to the same prompts before publishing

In Teams, every query results in messages such as: "Sorry, something went wrong. Try to complete a query later."

Simplification test

To exclude MCP or instruction-related issues, I performed the following:

- Disabled all MCP tools
- Removed all complex instructions
- Left only a minimal system prompt: "When the user types 123, return 456"

I then republished the agent.
The agent appeared in Teams again, but the behavior did not change: it does not respond at all.

Permissions warning in Teams

When I go to Teams → Applications → Manage Applications → My agent → View details, I see a red warning label: "Permissions needed. Ask your IT admin to add InfoConnect Agent to this team/chat/meeting."

This message is confusing because:

- The administrator has already added all required permissions
- All relevant permissions were granted in Microsoft Entra ID
- Admin consent was provided

Because of this warning, I also cannot properly share the agent with my colleagues.

Additional observation

I have a similar agent configured in Copilot Studio:

- It shows the same permissions warning
- However, that agent still responds correctly in Teams
- It can also successfully call some MCP tools

This suggests that the issue is specific to Azure AI Foundry agents, not to Teams or tenant-wide permissions in general.

Steps already taken to resolve the issue

- Configured all required RBAC roles in the Azure Portal according to: https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/rbac-foundry?view=foundry-classic
- During publishing, an agent-bot application was automatically created; I added my account to this bot with the Azure AI User role
- Also assigned Azure AI User to the project's Managed Identity and to the project resource itself
- Verified all permissions related to AI agent publishing in Microsoft Admin Center and Microsoft Teams Admin Center
- Simplified and republished the agent multiple times
- Deleted the automatically created agent-bot and allowed Foundry to recreate it
- Created a new Foundry project, configured several simple agents, and published them; the same issue occurs
- Tried publishing with different models: gpt-4.1, o4-mini
- Manually configured permissions in Microsoft Entra ID → App registrations / Enterprise applications → API permissions; added both Delegated and Application permissions and granted Admin consent
- Added myself and my colleagues as Azure AI User in Foundry → Project → Project users
- Followed all steps mentioned in this related discussion: https://techcommunity.microsoft.com/discussions/azure-ai-foundry-discussions/unable-to-publish-foundry-agent-to-m365-copilot-or-teams/4481420

Questions

1. How can I make a Foundry agent work correctly in Microsoft Teams?
2. Why does the agent fail to process requests in Teams while working correctly in Foundry?
3. What does the "Permissions needed" warning actually mean for Foundry agents?
4. How can I properly share the agent with other users in my organization?

Any guidance, diagnostics, or clarification on the correct publishing and permission model for Foundry agents in Teams would be greatly appreciated.

Unable to publish Foundry agent to M365 Copilot or Teams
I'm encountering an issue while publishing an agent in Microsoft Foundry to M365 Copilot or Teams. After creating the agent and Foundry resource, the process automatically created a Bot Service resource. However, I noticed that this resource has the same ID as the Application ID shown in the configuration. Is this expected behavior? If not, how should I resolve it?

I followed the steps in the official documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-copilot?view=foundry

Despite this, I keep getting the following error: "There was a problem submitting the agent. Response status code does not indicate success: 401 (Unauthorized). Status Code: 401."

Any guidance on what might be causing this and how to fix it would be greatly appreciated.

Building a Multi-Agent System with Azure AI Agent Service: Campus Event Management
Personal Background

My name is Peace Silly. I studied French and Spanish at the University of Oxford, where I developed a strong interest in how language is structured and interpreted. That curiosity about syntax and meaning eventually led me to computer science, which I came to see as another language built on logic and structure. In the academic year 2024–2025, I completed the MSc Computer Science at University College London, where I developed this project as part of my Master's thesis.

Project Introduction

Can large-scale event management be handled through a simple chat interface? This was the question that guided my Master's thesis project at UCL. As part of the Industry Exchange Network (IXN) and in collaboration with Microsoft, I set out to explore how conversational interfaces and autonomous AI agents could simplify one of the most underestimated coordination challenges in campus life: managing events across multiple departments, societies, and facilities.

At large universities, event management is rarely straightforward. Rooms are shared between academic timetables, student societies, and one-off events. A single lecture theatre might host a departmental seminar in the morning, a society meeting in the afternoon, and a careers talk in the evening, each relying on different systems, staff, and communication chains. Double bookings, last-minute cancellations, and maintenance issues are common, and coordinating changes often means long email threads, manual spreadsheets, and frustrated users.

These inefficiencies do more than waste time; they directly affect how a campus functions day to day. When venues are unavailable or notifications fail to reach the right people, even small scheduling errors can ripple across entire departments. A smarter, more adaptive approach was needed, one that could manage complex workflows autonomously while remaining intuitive and human for end users.
The result was the Event Management Multi-Agent System, a cloud-based platform where staff and students can query events, book rooms, and reschedule activities simply by chatting. Behind the scenes, a network of Azure-powered AI agents collaborates to handle scheduling, communication, and maintenance in real time, working together to keep the campus running smoothly. The user scenario shown in the figure below exemplifies the vision that guided the development of this multi-agent system.

Starting with Microsoft Learning Resources

I began my journey with Microsoft's tutorial Build Your First Agent with Azure AI Foundry, which introduced the fundamentals of the Azure AI Agent Service and provided an ideal foundation for experimentation. Within a few weeks, using the Azure Foundry environment, I extended those foundations into a fully functional multi-agent system.

Azure Foundry's visual interface was an invaluable learning space. It allowed me to deploy, test, and adjust model parameters such as temperature, system prompts, and function calling while observing how each change influenced the agents' reasoning and collaboration. Through these experiments, I developed a strong conceptual understanding of orchestration and coordination before moving to the command line for more complex development later.

When development issues inevitably arose, I relied on the Discord support community and the GitHub forum for troubleshooting. These communities were instrumental in addressing configuration issues and providing practical examples, ensuring that each agent performed reliably within the shared-thread framework.

This early engagement with Microsoft's learning materials not only accelerated my technical progress but also shaped how I approached experimentation, debugging, and iteration. It transformed a steep learning curve into a structured, hands-on process that mirrored professional software development practice.
A Decentralised Team of AI Agents

The system's intelligence is distributed across three specialised agents, powered by OpenAI's GPT-4.1 models through Azure OpenAI Service. They each perform a distinct role within the event management workflow:

- Scheduling Agent – interprets natural language requests, checks room availability, and allocates suitable venues.
- Communications Agent – notifies stakeholders when events are booked, modified, or cancelled.
- Maintenance Agent – monitors room readiness, posts fault reports when venues become unavailable, and triggers rescheduling when needed.

Each agent operates independently but communicates through a shared thread, a transparent message log that serves as the coordination backbone. This thread acts as a persistent state space where agents post updates, react to changes, and maintain a record of every decision. For example, when a maintenance fault is detected, the Maintenance Agent logs the issue, the Scheduling Agent identifies an alternative venue, and the Communications Agent automatically notifies attendees. These interactions happen autonomously, with each agent responding to the evolving context recorded in the shared thread.

Interfaces and Backend

The system was designed with both developer-focused and user-facing interfaces, supporting rapid iteration and intuitive interaction.

The Terminal Interface

Initially, the agents were deployed and tested through a terminal interface, which provided a controlled environment for debugging and verifying logic step by step. This setup allowed quick testing of individual agents and observation of their interactions within the shared thread.

The Chat Interface

As the project evolved, I introduced a lightweight chat interface to make the system accessible to staff and students. This interface allows users to book rooms, query events, and reschedule activities using plain language.
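The shared-thread coordination pattern described above can be illustrated with a minimal, framework-free sketch. This is plain Python, not the Azure AI Agent Service API, and the agent behaviours and room names are invented for illustration: each agent subscribes to a common message log and reacts to entries posted by the others.

```python
class SharedThread:
    """A transparent message log that agents post to and react to."""
    def __init__(self):
        self.messages = []
        self.subscribers = []

    def post(self, author, kind, payload):
        msg = {"author": author, "kind": kind, "payload": payload}
        self.messages.append(msg)
        # Notify every other agent so it can react to the new context.
        for agent in self.subscribers:
            if agent.name != author:
                agent.on_message(self, msg)

class MaintenanceAgent:
    name = "maintenance"
    def report_fault(self, thread, room):
        thread.post(self.name, "fault", {"room": room})
    def on_message(self, thread, msg):
        pass  # in the real system, reacts to availability-check requests

class SchedulingAgent:
    name = "scheduling"
    def __init__(self, free_rooms):
        self.free_rooms = free_rooms
    def on_message(self, thread, msg):
        if msg["kind"] == "fault":
            # Reassign the event to the first available alternative venue.
            new_room = self.free_rooms.pop(0)
            thread.post(self.name, "rescheduled",
                        {"from": msg["payload"]["room"], "to": new_room})

class CommunicationsAgent:
    name = "communications"
    def __init__(self):
        self.sent = []
    def on_message(self, thread, msg):
        if msg["kind"] == "rescheduled":
            note = f"Event moved from {msg['payload']['from']} to {msg['payload']['to']}"
            self.sent.append(note)  # stands in for sending an email
            thread.post(self.name, "notified", {"text": note})

thread = SharedThread()
scheduler = SchedulingAgent(free_rooms=["Room B"])
comms = CommunicationsAgent()
maintenance = MaintenanceAgent()
thread.subscribers = [maintenance, scheduler, comms]

maintenance.report_fault(thread, "Room A")
print([m["kind"] for m in thread.messages])  # ['fault', 'rescheduled', 'notified']
```

A single fault report cascades through the thread with no central controller: the scheduler and communications agents each act purely on the messages they observe, which is the property that makes the shared thread a useful coordination backbone.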
Recognising that some users might still want to see what happens behind the scenes, I added an optional toggle that reveals the intermediate steps of agent reasoning. This transparency feature proved valuable for debugging and for more technical users who wanted to understand how the agents collaborated.

When a user interacts with the chat interface, they are effectively communicating with the Scheduling Agent, which acts as the primary entry point. The Scheduling Agent interprets natural-language commands such as "Book the Engineering Auditorium for Friday at 2 PM" or "Reschedule the robotics demo to another room." It then coordinates with the Maintenance and Communications Agents to complete the process.

Behind the scenes, the chat interface connects to a FastAPI backend responsible for core logic and data access. A Flask + HTMX layer handles lightweight rendering and interactivity, while the Azure AI Agent Service manages orchestration and shared-thread coordination. This combination enables seamless agent communication and reliable task execution without exposing any of the underlying complexity to the end user.

Automated Notifications and Fault Detection

Once an event is scheduled, the Scheduling Agent posts the confirmation to the shared thread. The Communications Agent, which subscribes to thread updates, automatically sends notifications to all relevant stakeholders by email. This ensures that every participant stays informed without any manual follow-up.

The Maintenance Agent runs routine availability checks. If a fault is detected, it logs the issue to the shared thread, prompting the Scheduling Agent to find an alternative room. The Communications Agent then notifies attendees of the change, ensuring minimal disruption to ongoing events.

Testing and Evaluation

The system underwent several layers of testing to validate both functional and non-functional requirements.
Unit and Integration Tests

Backend reliability was evaluated through unit and integration tests to ensure that room allocation, conflict detection, and database operations behaved as intended. Automated test scripts verified end-to-end workflows for event creation, modification, and cancellation across all agents. Integration results confirmed that the shared-thread orchestration functioned correctly, with all test cases passing consistently.

However, coverage analysis revealed that approximately 60% of the codebase was tested, leaving some areas, such as Azure service integration and error-handling paths, outside automated validation. These trade-offs were deliberate, balancing test depth with project scope and the constraints of mocking live dependencies.

Azure AI Evaluation

While functional testing confirmed correctness, it did not capture the agents' reasoning or language quality. To assess this, I used Azure AI Evaluation, which measures conversational performance across metrics such as relevance, coherence, fluency, and groundedness. The results showed high scores in relevance (4.33) and groundedness (4.67), confirming the agents' ability to generate accurate and context-aware responses. However, slightly lower fluency scores and weaker performance in multi-turn tasks revealed a retrieval–execution gap typical of task-oriented dialogue systems.

Limitations and Insights

The evaluation also surfaced several key limitations:

- Synthetic data: all tests were conducted with simulated datasets rather than live campus systems, limiting generalisability.
- Scalability: one non-functional requirement, horizontal scalability, was not tested. The architecture supports scaling conceptually but requires validation under heavier load.

Despite these constraints, the testing process confirmed that the system was both technically reliable and linguistically robust, capable of autonomous coordination under normal conditions.
The results provided a realistic picture of what worked well and what future iterations should focus on improving.

Impact and Future Work

This project demonstrates how conversational AI and multi-agent orchestration can streamline real operational processes. By combining the Azure AI Agent Service with modular design principles, the system automates scheduling, communication, and maintenance while keeping the user experience simple and intuitive.

The architecture also establishes a foundation for future extensions:

- Predictive maintenance to anticipate venue faults before they occur.
- Microsoft Teams integration for seamless in-chat scheduling.
- Scalability testing and real-user trials to validate performance at institutional scale.

Beyond its technical results, the project underscores the potential of multi-agent systems in real-world coordination tasks. It illustrates how modularity, transparency, and intelligent orchestration can make everyday workflows more efficient and human-centred.

Acknowledgements

What began with a simple Microsoft tutorial evolved into a working prototype that reimagines how campuses could manage their daily operations through conversation and collaboration. This was both a challenging and rewarding journey, and I am deeply grateful to Professor Graham Roberts (UCL) and Professor Lee Stott (Microsoft) for their guidance, feedback, and support throughout the project.

Running Phi-4 Locally with Microsoft Foundry Local: A Step-by-Step Guide
In our previous post, we explored how Phi-4 represents a new frontier in AI efficiency, delivering performance comparable to models 5x its size while being small enough to run on your laptop. Today, we're taking the next step: getting Phi-4 up and running locally on your machine using Microsoft Foundry Local.

Whether you're a developer building AI-powered applications, an educator exploring AI capabilities, or simply curious about running state-of-the-art models without relying on cloud APIs, this guide will walk you through the entire process. Microsoft Foundry Local brings the power of Azure AI Foundry to your local device without requiring an Azure subscription, making local AI development more accessible than ever.

So why would you want to run Phi-4 locally? Before we dive into the setup, let's quickly recap why running models locally matters:

- Privacy and Control: your data never leaves your machine. This is crucial for sensitive applications in healthcare, finance, or education where data privacy is paramount.
- Cost Efficiency: no API costs, no rate limits. Once you have the model downloaded, inference is completely free.
- Speed and Reliability: no network latency or dependency on external services. Your AI applications work even when you're offline.
- Learning and Experimentation: full control over model parameters, prompts, and fine-tuning opportunities without restrictions.

With Phi-4's compact size, these benefits are now accessible to anyone with a modern laptop; no expensive GPU required.

What You'll Need

Before we begin, make sure you have:

- Operating System: Windows 10/11, macOS (Intel or Apple Silicon), or Linux
- RAM: minimum 16GB (32GB recommended for optimal performance)
- Storage: at least 5-10GB of free disk space
- Processor: any modern CPU (GPU optional but provides faster inference)

Note: Phi-4 works remarkably well even on consumer hardware 😀.
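Before installing anything, you can sanity-check the storage requirement above with a few lines of Python. This is just a convenience sketch using the standard library; the 10GB threshold mirrors the upper end of the requirements list:

```python
import shutil

def enough_disk(path=".", needed_gb=10):
    """Return (free_gb, ok) for the drive containing `path`."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return round(free_gb, 1), free_gb >= needed_gb

free, ok = enough_disk()
print(f"Free: {free} GB -> {'OK' if ok else 'need more space'}")
```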
Step 1: Installing Microsoft Foundry Local

Microsoft Foundry Local is designed to make running AI models locally as simple as possible. It handles model downloads, manages memory efficiently, provides OpenAI-compatible APIs, and automatically optimizes for your hardware.

For Windows users, open PowerShell or Command Prompt and run:

winget install Microsoft.FoundryLocal

For macOS users (Apple Silicon), open Terminal and run:

brew install microsoft/foundrylocal/foundrylocal

Verify the installation by running the following command, which should return the Microsoft Foundry Local version:

foundry --version

Step 2: Downloading Phi-4-Mini

For this tutorial, we'll use Phi-4-mini, the lightweight 3.8 billion parameter version that's perfect for learning and experimentation. Open your terminal and run:

foundry model run phi-4-mini

You should see the download begin, with output similar to the image below.

Available Phi Models on Foundry Local

While we're using phi-4-mini for this guide, Foundry Local offers several Phi model variants and other open-source models optimized for different hardware and use cases:

Model | Hardware | Type | Size | Best For
phi-4-mini | GPU | chat-completion | 3.72 GB | Learning, fast responses, resource-constrained environments with GPU
phi-4-mini | CPU | chat-completion | 4.80 GB | Learning, fast responses, CPU-only systems
phi-4-mini-reasoning | GPU | chat-completion | 3.15 GB | Reasoning tasks with GPU acceleration
phi-4-mini-reasoning | CPU | chat-completion | 4.52 GB | Mathematical proofs, logic puzzles with lower resource requirements
phi-4 | GPU | chat-completion | 8.37 GB | Maximum reasoning performance, complex tasks with GPU
phi-4 | CPU | chat-completion | 10.16 GB | Maximum reasoning performance, CPU-only systems
phi-3.5-mini | GPU | chat-completion | 2.16 GB | Most lightweight option with GPU support
phi-3.5-mini | CPU | chat-completion | 2.53 GB | Most lightweight option, CPU-optimized
phi-3-mini-128k | GPU | chat-completion | 2.13 GB | Extended context (128k tokens), GPU-optimized
phi-3-mini-128k | CPU | chat-completion | 2.54 GB | Extended context (128k tokens), CPU-optimized
phi-3-mini-4k | GPU | chat-completion | 2.13 GB | Standard context (4k tokens), GPU-optimized
phi-3-mini-4k | CPU | chat-completion | 2.53 GB | Standard context (4k tokens), CPU-optimized

Note: Foundry Local automatically selects the best variant for your hardware. If you have an NVIDIA GPU, it will use the GPU-optimized version. Otherwise, it will use the CPU-optimized version. Run the command below to see the full list of models:

foundry model list
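As a rough illustration of that automatic selection, here is a small helper that mirrors a few rows of the table above. The selection logic is invented for illustration; the real choice is made internally by Foundry Local:

```python
# Approximate sizes from the table above, in GB, keyed by (model, hardware).
CATALOG = {
    ("phi-4-mini", "GPU"): 3.72, ("phi-4-mini", "CPU"): 4.80,
    ("phi-4", "GPU"): 8.37, ("phi-4", "CPU"): 10.16,
    ("phi-3.5-mini", "GPU"): 2.16, ("phi-3.5-mini", "CPU"): 2.53,
}

def pick_variant(model, has_nvidia_gpu, free_disk_gb):
    """Prefer the GPU build when an NVIDIA GPU is present, then check disk space."""
    hardware = "GPU" if has_nvidia_gpu else "CPU"
    size = CATALOG[(model, hardware)]
    if size > free_disk_gb:
        raise ValueError(f"{model} ({hardware}) needs {size} GB, "
                         f"only {free_disk_gb} GB free")
    return hardware, size

print(pick_variant("phi-4-mini", has_nvidia_gpu=False, free_disk_gb=20))  # ('CPU', 4.8)
```

The takeaway from the table is the trade-off it encodes: GPU builds are smaller and faster where supported, while CPU builds trade a little disk space for universal compatibility.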
This showcases the model's ability to generate production-ready code with proper documentation while being educational about Python idioms.

To exit the interactive session, type `/bye`.

Step 4: Extending Phi-4 with Real-Time Tools

Understanding Phi-4's Knowledge Cutoff

Like all language models, Phi-4 has a knowledge cutoff date from its training data (typically several months old). This means it won't know about very recent events, current prices, or breaking news. For example, if you ask "Who won the 2024 NBA championship?" it might not have the answer.

The good news is that there's a powerful workaround. While Phi-4 is incredibly capable, connecting it to external tools like web search, databases, or APIs transforms it from a static knowledge base into a dynamic reasoning engine. This is where Microsoft Foundry's REST API comes in. Microsoft Foundry provides a simple API that lets you integrate Phi-4 into Python applications and connect it to real-time data sources. Here's a practical example: building a web-enhanced AI assistant.

Web-Enhanced AI Assistant

This simple application combines Phi-4's reasoning with real-time web search, allowing it to answer current questions accurately.

Prerequisites:

pip install foundry-local-sdk requests ddgs

Create phi4_web_assistant.py:

```python
import requests
from foundry_local import FoundryLocalManager
from ddgs import DDGS
import json


def search_web(query):
    """Search the web and return the top results."""
    try:
        results = list(DDGS().text(query, max_results=3))
        if not results:
            return "No search results found."
        search_summary = "\n\n".join([
            f"[Source {i+1}] {r['title']}\n{r['body'][:500]}"
            for i, r in enumerate(results)
        ])
        return search_summary
    except Exception as e:
        return f"Search failed: {e}"


def ask_phi4(endpoint, model_id, prompt):
    """Send a prompt to Phi-4 and stream the response."""
    response = requests.post(
        f"{endpoint}/chat/completions",
        json={
            "model": model_id,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True
        },
        stream=True,
        timeout=180
    )
    full_response = ""
    for line in response.iter_lines():
        if line:
            line_text = line.decode('utf-8')
            if line_text.startswith('data: '):
                line_text = line_text[6:]  # Remove 'data: ' prefix
            if line_text.strip() == '[DONE]':
                break
            try:
                data = json.loads(line_text)
                if 'choices' in data and len(data['choices']) > 0:
                    delta = data['choices'][0].get('delta', {})
                    if 'content' in delta:
                        chunk = delta['content']
                        print(chunk, end="", flush=True)
                        full_response += chunk
            except json.JSONDecodeError:
                continue
    print()
    return full_response


def web_enhanced_query(question):
    """Combine web search with Phi-4 reasoning."""
    # By using an alias, the most suitable model will be downloaded
    # to your device automatically.
    alias = "phi-4-mini"

    # Create a FoundryLocalManager instance. This will start the Foundry
    # Local service if it is not already running and load the specified model.
    manager = FoundryLocalManager(alias)
    model_info = manager.get_model_info(alias)

    print("🔍 Searching the web...\n")
    search_results = search_web(question)

    prompt = f"""Here are recent search results:

{search_results}

Question: {question}

Using only the information above, give a clear answer with specific details."""

    print("🤖 Phi-4 Answer:\n")
    return ask_phi4(manager.endpoint, model_info.id, prompt)


if __name__ == "__main__":
    # Try different questions
    question = "Who won the 2024 NBA championship?"
    # question = "What is the latest iPhone model released in 2024?"
    # question = "What is the current price of Bitcoin?"
    print(f"Question: {question}\n")
    print("=" * 60 + "\n")
    web_enhanced_query(question)
    print("\n" + "=" * 60)
```

Run it:

python phi4_web_assistant.py

What Makes This Powerful

By connecting Phi-4 to external tools, you create an intelligent system that:

- Accesses real-time information: news, weather, sports scores, and breaking developments
- Verifies facts: cross-references information with multiple sources
- Extends capabilities: connects to databases, APIs, file systems, or any other tool
- Enables complex applications: research assistants, customer support bots, educational tutors, and personal assistants

This same pattern can be applied to connect Phi-4 to:

- Databases: query your company's internal data
- APIs: weather services, stock prices, translation services
- File systems: analyze documents and spreadsheets
- IoT devices: control smart home systems

The possibilities are endless when you combine local AI reasoning with real-world data access.

Troubleshooting Common Issues

- Service not running: make sure Foundry Local is properly installed and the service is running. Run foundry --version to verify the installation.
- Model downloads slowly: check your internet connection and ensure you have enough disk space (5-10GB per model).
- Out of memory: close other applications, or try using a smaller model variant like phi-3.5-mini instead of the full phi-4.
- Connection issues: verify that no other services are using the same ports. Foundry Local typically runs on http://localhost:5272.
- Model not found: run foundry model list to see available models, then use foundry model run <model-name> to download and run a specific model.

Your Next Steps with Foundry Local

Congratulations! You now have Phi-4 running locally through Microsoft Foundry Local and understand how to extend it with external tools like web search. This combination of local AI reasoning with real-time data access opens up countless possibilities for building intelligent applications.
Coming in Future Posts

In the coming weeks, we'll explore advanced topics using Hugging Face:

- Fine-tuning Phi models on your own data for domain-specific applications
- Phi-4-multimodal: Analyze images, process audio, and combine multiple data types
- Advanced deployment patterns: RAG systems and multi-agent orchestration

Resources to Explore

- EdgeAI for Beginners Course: Comprehensive 36-45 hour course covering Edge AI fundamentals, optimization, and production deployment
- Phi-4 Technical Report: Deep dive into architecture and benchmarks
- Phi Cookbook on GitHub: Practical examples and recipes
- Foundry Local Documentation: Complete technical documentation and API reference
- Module 08: Foundry Local Toolkit: 10 comprehensive samples including RAG applications and multi-agent systems

Keep experimenting with Foundry Local, and stay tuned as we unlock the full potential of Edge AI! What will you build with Phi-4? Share your ideas and projects in the comments below!

Getting Started with AI Agents: A Student Developer's Guide to the Microsoft Agent Framework
AI agents are becoming the backbone of modern applications, from personal assistants to autonomous research bots. If you're a student developer curious about building intelligent, goal-driven agents, Microsoft's newly released Agent Framework is your launchpad. In this post, we'll break down what the framework offers, how to get started, and why it's a game-changer for learners and builders alike.

What Is the Microsoft Agent Framework?

The Microsoft Agent Framework is a modular, open-source toolkit designed to help developers build, orchestrate, and evaluate AI agents with minimal friction. It's part of the AI Agents for Beginners curriculum, which walks you through foundational concepts using reproducible examples. At its core, the framework helps you:

- Define agent goals and capabilities
- Manage memory and context
- Route tasks through tools and APIs
- Evaluate agent performance with traceable metrics

Whether you're building a research assistant, a coding helper, or a multi-agent system, this framework gives you the scaffolding to do it right.

What's Inside the Framework?

Here's a quick look at the key components:

- AgentRuntime: Manages agent lifecycle, memory, and tool routing
- AgentConfig: Defines agent goals, tools, and memory settings
- Tool Interface: Lets you plug in custom tools (e.g., web search, code execution)
- MemoryProvider: Supports semantic memory and context-aware responses
- Evaluator: Tracks agent performance and goal completion

The framework is built with Python and .NET and designed to be extensible, perfect for experimentation and learning.
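To give the Tool Interface some shape, here is a hypothetical sketch of a custom tool in plain Python. The class and method names (`Tool`, `name`, `description`, `run`) are illustrative assumptions, not the framework's actual API; check the repo samples for the real signatures before wiring anything up.

```python
class Tool:
    """Illustrative base: a tool exposes a name, a description the agent
    can read when deciding what to call, and a run() method."""
    name = "tool"
    description = "Base tool"

    def run(self, **kwargs):
        raise NotImplementedError


class CalculatorTool(Tool):
    name = "calculator"
    description = "Evaluate simple arithmetic expressions, e.g. '2 + 3 * 4'."

    def run(self, expression: str) -> str:
        # Restrict eval to arithmetic by allowing only digits and operators.
        allowed = set("0123456789+-*/(). ")
        if not set(expression) <= allowed:
            return "Unsupported expression."
        return str(eval(expression))
```

Whatever the real interface looks like, the idea is the same: the runtime reads the tool's name and description, decides when to invoke it, and routes the agent's arguments into `run()`.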
Try It: Your First Agent in 10 Minutes

Here's a simplified walkthrough to get you started:

1. Clone the repo: `git clone https://github.com/microsoft/ai-agents-for-beginners`
2. Open the sample: `cd ai-agents-for-beginners/14-microsoft-agent-framework`
3. Install dependencies: `pip install -r requirements.txt`
4. Run the sample agent: `python main.py`

You'll see a basic agent that can answer questions using a web search tool and maintain context across turns. From here, you can customize its goals, memory, and tools.

Why Student Developers Should Care

- Modular Design: Learn how real-world agents are structured, from memory to evaluation.
- Reproducible Workflows: Build agents that can be debugged, traced, and improved over time.
- Open Source: Contribute, fork, and remix with your own ideas.
- Community-Ready: Perfect for hackathons, research projects, or portfolio demos.

Plus, it aligns with Microsoft's best practices for agent governance, making it a solid foundation for enterprise-grade development.

Why Learn?

Here are a few ideas to take your learning further:

- Build a custom tool (e.g., a calculator or code interpreter)
- Swap in a different memory provider (like a vector DB)
- Create an evaluation pipeline for multi-agent collaboration
- Use it in a class project or student-led workshop

Join the Microsoft Azure AI Foundry Discord (https://aka.ms/Foundry/discord) to share your project and build your AI engineer and developer connections. Star and fork the AI Agents for Beginners repo for updates and new modules.

Final Thoughts

The Microsoft Agent Framework isn't just another library, it's a teaching tool, a playground, and a launchpad for the next generation of AI builders. If you're a student developer, this is your chance to learn by doing, contribute to the community, and shape the future of agentic systems. So fire up your terminal, fork the repo, and start building. Your first agent is just a few lines of code away.

August 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

August was an exciting month for Azure Database for PostgreSQL! We have introduced updates that make your experience smarter and more secure. From simplified Entra ID group login to integrations with LangChain and LangGraph, these updates help improve access control and provide seamless integration for your AI agents and applications. Stay tuned as we dive deeper into each of these feature updates.

Feature Highlights

- Enhanced Performance recommendations for Azure Advisor - Generally Available
- Entra-ID group login using user credentials - Public Preview
- New Region Buildout: Austria East
- LangChain and LangGraph connector
- Active-Active Replication Guide

Enhanced Performance recommendations for Azure Advisor - Generally Available

Azure Advisor now offers enhanced recommendations to further optimize PostgreSQL server performance, security, and resource management. The key updates are as follows:

- Index Scan Insights: Detection and recommendations for disabled index and index-only scans to improve query efficiency.
- Audit Logging Review: Identification of excessive logging via the pgaudit.log parameter, with guidance to reduce overhead.
- Statistics Monitoring: Alerts on server statistics resets and suggestions to restore accurate performance tracking.
- Storage Optimization: Analysis of storage usage with recommendations to enable the Storage Autogrow feature for seamless scaling.
- Connection Management: Evaluation of workloads for short-lived connections and frequent connectivity errors, with recommendations to implement PgBouncer for efficient connection pooling.

These enhancements aim to provide deeper operational insights and support proactive performance tuning for PostgreSQL workloads. For more details, read the Performance recommendations documentation.

Entra-ID group login using user credentials - Public Preview

The public preview for Entra-ID group login using user credentials is now available.
This feature simplifies user management and improves security within Azure Database for PostgreSQL. Administrators and users benefit from a more streamlined process:

- Changes in Entra-ID group memberships are synchronized on a periodic 30-minute basis. This scheduled syncing keeps access controls up to date, simplifying user management and maintaining current permissions.
- Users can log in with their own credentials, streamlining authentication and improving auditing and access management for PostgreSQL environments.

As organizations continue to adopt cloud-native identity solutions, this update represents a major improvement in operational efficiency and security for PostgreSQL database environments. For more details, read the documentation on Entra-ID group login.

New Region Buildout: Austria East

New region rollout! Azure Database for PostgreSQL flexible server is now available in Austria East, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

LangChain and LangGraph connector

We are excited to announce that native LangChain and LangGraph support is now available for Azure Database for PostgreSQL! This integration brings native support for Azure Database for PostgreSQL into LangChain and LangGraph workflows, enabling developers to use Azure PostgreSQL as a secure, high-performance vector store and memory store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.
This package makes it easy to connect LangChain to your Azure-hosted PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Read more at https://aka.ms/azpg-agent-frameworks

Active-Active Replication Guide

We have published a new blog article that guides you through setting up active-active replication in Azure Database for PostgreSQL using the pglogical extension. The walkthrough covers the fundamentals of active-active replication, key prerequisites for enabling bi-directional replication, and step-by-step demo scripts for the setup. It also compares native and pglogical approaches, helping you choose the right strategy for high availability and multi-region resilience in production environments. Read more about the active-active replication guide on this blog.

Azure Postgres Learning Bytes 🎓

Enabling Zone-Redundant High Availability for Azure Database for PostgreSQL Flexible Server Using APIs

High availability (HA) is essential for ensuring business continuity and minimizing downtime in production workloads. With Zone-Redundant HA, Azure Database for PostgreSQL Flexible Server automatically provisions a standby replica in a different availability zone, providing stronger fault tolerance against zone-level failures. This section shows how to enable Zone-Redundant HA using REST APIs. Using REST APIs gives you clear visibility into the exact requests and responses, making it easier to debug issues and validate configurations as you go. You can use any REST API client tool of your choice to perform these operations, including Postman, Thunder Client (VS Code extension), curl, etc., to send requests and inspect the results directly. Before enabling Zone-Redundant HA, make sure your server is on the General Purpose or Memory Optimized tier and deployed in a region that supports it.
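For readers who prefer scripting to a GUI client, the GET/PATCH exchange this section describes can also be sketched in Python with the `requests` library. This is a hedged sketch, not a full implementation (no polling loop or error handling); the `{{…}}` values are the same placeholders used in this walkthrough and must be substituted with your own.

```python
import requests

# Placeholders - substitute your own values before running.
SUBSCRIPTION_ID = "{{subscriptionId}}"
RESOURCE_GROUP = "{{resourceGroup}}"
SERVER_NAME = "{{serverName}}"
API_VERSION = "{{apiVersion}}"
BEARER_TOKEN = "{{token}}"  # from: az account get-access-token ...

SERVER_URL = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.DBforPostgreSQL/flexibleServers"
    f"/{SERVER_NAME}?api-version={API_VERSION}"
)
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}


def ha_patch_body(mode):
    """Build the PATCH body for a target HA mode:
    'Disabled', 'SameZone', or 'ZoneRedundant'."""
    return {"properties": {"highAvailability": {"mode": mode}}}


def get_ha():
    """GET the server and return (mode, state) of highAvailability."""
    props = requests.get(SERVER_URL, headers=HEADERS).json()["properties"]
    ha = props["highAvailability"]
    return ha["mode"], ha["state"]


def set_ha(mode):
    """PATCH the server's HA mode; re-run get_ha() until state is 'Healthy'."""
    return requests.patch(SERVER_URL, headers=HEADERS, json=ha_patch_body(mode))
```

The same URL and JSON bodies work identically in Postman or curl; the script simply makes the request/response cycle repeatable.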
If your server is currently using Same-Zone HA, you must first disable it before switching to Zone-Redundant.

Steps to Enable Zone-Redundant HA:

1. Get an ARM bearer token. Run this in a terminal where Azure CLI is signed in (or use Azure Cloud Shell):

       az account get-access-token --resource https://management.azure.com --query accessToken -o tsv

2. Paste the token into your API client tool as an Authorization header: `Bearer <token>`

3. Inspect the server with a GET request to:

       https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{{serverName}}?api-version={{apiVersion}}

   In the JSON response, note:
   - sku.tier: must be 'GeneralPurpose' or 'MemoryOptimized'
   - properties.availabilityZone: '1', '2', or '3' (the availability zone specified when creating the primary server; selected by the system if none was specified)
   - properties.highAvailability.mode: 'Disabled', 'SameZone', or 'ZoneRedundant'
   - properties.highAvailability.state: e.g. 'NotEnabled', 'CreatingStandby', 'Healthy'

4. If HA is currently SameZone, disable it first with a PATCH to the same URL as in step 3, using this request body:

       { "properties": { "highAvailability": { "mode": "Disabled" } } }

5. Enable Zone-Redundant HA with a PATCH to the same URL, using this request body:

       { "properties": { "highAvailability": { "mode": "ZoneRedundant" } } }

6. Monitor until HA is healthy: re-run the GET from step 3 every 30-60 seconds until you see:

       "highAvailability": { "mode": "ZoneRedundant", "state": "Healthy" }

Conclusion

That's all for our August 2025 feature updates! We're committed to making Azure Database for PostgreSQL better with every release, and your feedback plays a key role in shaping what's next.

💬 Have ideas, questions, or suggestions? Share them with us: https://aka.ms/pgfeedback

📢 Want to stay informed about the latest features and best practices?
Follow us here for the latest announcements, feature releases, and best practices: Azure Database for PostgreSQL Blog

More exciting improvements are on the way. Stay tuned for what's coming next!

July 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

July delivered a wave of exciting updates to Azure Database for PostgreSQL! From Fabric mirroring support for private networking to cascading read replicas, these new features are all about scaling smarter, performing faster, and building better. This blog covers what's new, why it matters, and how to get started.

Catch Up on POSETTE 2025

In case you missed POSETTE: An Event for Postgres 2025 or couldn't watch all of the sessions live, here's a playlist with the 11 talks all about Azure Database for PostgreSQL. And, if you'd like to dive even deeper, the Ultimate Guide will help you navigate the full catalog of 42 recorded talks published on YouTube.

Feature Highlights

- Upsert and Script activity in ADF and Azure Synapse - Generally Available
- Power BI Entra authentication support - Generally Available
- New Regions: Malaysia West & Chile Central
- Latest Postgres minor versions: 17.5, 16.9, 15.13, 14.18 and 13.21
- Cascading Read Replica - Public Preview
- Private Endpoint and VNet support for Fabric Mirroring - Public Preview
- Agentic Web with NLWeb and PostgreSQL
- PostgreSQL for VS Code extension enhancements
- Improved Maintenance Workflow for Stopped Instances

Upsert and Script activity in ADF and Azure Synapse - Generally Available

We're excited to announce the general availability of the Upsert method and Script activity in Azure Data Factory and Azure Synapse Analytics for Azure Database for PostgreSQL. These new capabilities bring greater flexibility and performance to your data pipelines:

- Upsert Method: Easily merge incoming data into existing PostgreSQL tables without writing complex logic, reducing overhead and improving efficiency.
- Script Activity: Run custom SQL scripts as part of your workflows, enabling advanced transformations, procedural logic, and fine-grained control over data operations.
Together, these features streamline ETL and ELT processes, making it easier to build scalable, declarative, and robust data integration solutions using PostgreSQL as either a source or sink. Visit our documentation guides for the Upsert method and Script activity to learn more.

Power BI Entra authentication support - Generally Available

You can now use Microsoft Entra ID authentication to connect to Azure Database for PostgreSQL from Power BI Desktop. This update simplifies access management, enhances security, and helps you support your organization's broader Entra-based authentication strategy. To learn more, please refer to our documentation.

New Regions: Malaysia West & Chile Central

Azure Database for PostgreSQL has now launched in Malaysia West and Chile Central. This expanded regional presence brings lower latency, enhanced performance, and data residency support, making it easier to build fast, reliable, and compliant applications right where your users are. This continues our mission to bring Azure Database for PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

Latest Postgres minor versions: 17.5, 16.9, 15.13, 14.18 and 13.21

The latest PostgreSQL minor versions 17.5, 16.9, 15.13, 14.18 and 13.21 are now supported by Azure Database for PostgreSQL flexible server. These minor version upgrades are performed automatically as part of the monthly planned maintenance in Azure Database for PostgreSQL. This upgrade automation ensures that your databases are always running the most secure and optimized versions without requiring manual intervention. This release fixes two security vulnerabilities and delivers over 40 bug fixes and improvements. To learn more, please refer to the PostgreSQL community announcement for more details about the release.

Cascading Read Replica - Public Preview

Azure Database for PostgreSQL now supports cascading read replicas in public preview.
This feature allows you to scale read-intensive workloads more effectively by creating replicas not only from the primary database but also from existing read replicas, enabling two-level replication chains. With cascading read replicas, you can:

- Improve performance for read-heavy applications.
- Distribute read traffic more efficiently.
- Support complex deployment topologies.

Data replication is asynchronous, and each replica can serve as a source for additional replicas. This setup enhances scalability and flexibility for your PostgreSQL deployments. For more details, read the cascading read replicas documentation.

Private Endpoint and VNET Support for Fabric Mirroring - Public Preview

Microsoft Fabric now supports mirroring for Azure Database for PostgreSQL flexible server instances deployed with Virtual Network (VNET) integration or Private Endpoints. This enhancement broadens the scope of Fabric's real-time data replication capabilities, enabling secure and seamless analytics on transactional data, even within network-isolated environments. Previously, mirroring was only available for flexible server instances with public endpoint access. With this update, organizations can now replicate data from Azure Database for PostgreSQL hosted in secure, private networks, without compromising on data security, compliance, or performance. This is particularly valuable for enterprise customers who rely on VNETs and Private Endpoints for database connectivity from isolated networks. For more details, visit the Fabric mirroring with private networking support blog.

Agentic Web with NLWeb and PostgreSQL

We're excited to announce that NLWeb (Natural Language Web), Microsoft's open project for natural language interfaces on websites, now supports PostgreSQL. With this enhancement, developers can leverage PostgreSQL and NLWeb to transform any website into an AI-powered application or Model Context Protocol (MCP) server.
This integration allows organizations to utilize a familiar, robust database as the foundation for conversational AI experiences, streamlining deployment and maximizing data security and scalability. For more details, read the Agentic web with NLWeb and PostgreSQL blog.

PostgreSQL for VS Code extension enhancements

The PostgreSQL for VS Code extension is rolling out new updates to improve your experience. We are introducing key connection, authentication, and usability improvements. Here's what we improved:

- SSH connections: You can now set up SSH tunneling directly in the Advanced Connection options, making it easier to securely connect to private networks without leaving VS Code.
- Clearer authentication setup: A new "No Password" option eliminates guesswork when setting up connections that don't require credentials.
- Entra ID fixes: Improved default username handling, token refresh, and clearer error feedback for failed connections.
- Array and character rendering: Unicode and PostgreSQL arrays now display more reliably and consistently.
- Azure Portal flow: Reuses existing connection profiles to avoid duplicates when launching from the portal.

Don't forget to update to the latest version in the Marketplace to take advantage of these enhancements, and visit our GitHub to learn more about this month's release.

Improved Maintenance Workflow for Stopped Instances

We've improved how scheduled maintenance is handled for stopped or disabled PostgreSQL servers. Maintenance is now applied only when the server is restarted, either manually or through the 7-day auto-restart, rather than forcing a restart during the scheduled maintenance window. This change reduces unnecessary disruptions and gives you more control over when updates are applied. You may notice a slightly longer restart time (5-8 minutes) if maintenance is pending. For more information, refer to Applying Maintenance on Stopped/Disabled Instances.
Azure Postgres Learning Bytes 🎓

Set Up HA Health Status Monitoring Alerts

This section walks through setting up HA health status monitoring alerts using the Azure portal. These alerts let you effectively monitor the HA health states of your server. To monitor the health of your High Availability (HA) setup:

1. Navigate to the Azure portal and select your Azure Database for PostgreSQL flexible server instance.
2. Create an alert rule:
   - Go to Monitoring > Alerts > Create Alert Rule
   - Scope: Select your PostgreSQL flexible server
   - Condition: Choose the signal from the dropdown (CPU percentage, storage percentage, etc.)
   - Logic: Define when the alert should trigger
   - Action Group: Specify where the alert should be sent (email, webhook, etc.)
   - Add tags
   - Click "Review + Create"
3. Verify the alert: check the Alerts tab in Azure Monitor to confirm the alert has been triggered.

For deeper insight into resource health:

1. Go to the Azure portal, search for Service Health, and select Resource Health.
2. Choose Azure Database for PostgreSQL Flexible Server from the dropdown.
3. Review the health status of your server.

For more information, check out the HA Health status monitoring documentation guide.

Conclusion

That's a wrap for our July 2025 feature updates! Thanks for being part of our journey to make Azure Database for PostgreSQL better with every release. We're always working to improve, and your feedback helps us do that.

💬 Got ideas, questions, or suggestions? We'd love to hear from you: https://aka.ms/pgfeedback

📢 Want to stay on top of Azure Database for PostgreSQL updates? Follow us here for the latest announcements, feature releases, and best practices: Azure Database for PostgreSQL Blog

Stay tuned for more updates in our next blog!

Book of Archimedes, Copilot and SharePoint Agents
1. Introduction

The Book of Archimedes is a single PDF that contains all posts in the Microsoft Graph Data Connect for SharePoint blog. This includes 40 articles and nearly 400 pages, which is handy for searching for specific keywords quickly across the many articles. The PDF is provided as an attachment to the main MGDC for SharePoint article at https://aka.ms/SharePointData. You can also download it directly from Book of Archimedes.

2. Using the Book with Copilot

Beyond searching inside the PDF, you can feed the Book of Archimedes as grounding to your favorite AI tool to get answers to your questions about MGDC for SharePoint. Let's look at an example. You can use this with Copilot or any AI tool that accepts attached files. In my case, I used https://copilot.cloud.microsoft/ to access my work Copilot, the secure and compliant Copilot integrated with my enterprise account.

I started with the "+" button and then the "add content" button. Then I used the up-arrow icon to "upload from this device" and selected the local copy of the Book of Archimedes I downloaded earlier. After that, I added my question as a prompt. I tried "What is Project Archimedes?" I got a summary of Microsoft Graph Data Connect for SharePoint, including key scenarios. From there, I could ask any question. If the book contains the answer, Copilot will find it for you.

Project Archimedes is the early codename for Microsoft Graph Data Connect for SharePoint. This project aims to deliver rich data assets to OneDrive and SharePoint tenants, enabling them to run their own analytics, derive insights from their data, and understand how they use these products. The data is transferred to an Azure account owned by the tenant, where tools like Azure Synapse, Power BI, or Microsoft Fabric can be used to transform this data into insightful reports and dashboards [1].
Key Scenarios of Project Archimedes:

- Security: Understanding permissions in SharePoint and OneDrive tenants to ensure content is properly protected.
- Capacity: Understanding site lifecycle, ownership, and storage used by SharePoint sites and OneDrives.
- Sync Health: Ensuring users are properly using OneDrive for Business to protect their files by synchronizing them with the cloud [1].

Datasets Provided:

- Sites: Information about SharePoint sites.
- Groups: Details about SharePoint groups.
- Permissions: Information about sharing permissions.
- Files: Data about files stored in SharePoint and OneDrive.
- File Actions: Details about actions performed on files.
- Sync Health: Information about devices running OneDrive Sync.
- Sync Errors: Details about errors encountered by OneDrive Sync clients [1].

Example Use Cases:

- Security Scenario: Answering questions about oversharing, external sharing, sensitive data sharing, and permissions.
- Capacity Scenario: Tracking storage usage, site lifecycle, and identifying large or inactive sites.
- Sync Health Scenario: Monitoring device health, folder backup status, and sync errors [1].

For more detailed information, you can refer to the Book of Archimedes, which contains comprehensive guides, FAQs, and step-by-step instructions related to Microsoft Graph Data Connect for SharePoint [1].

[1]: Book of Archimedes.pdf

The more impressive capabilities of Copilot show when you ask for something that is not just a direct quote from the Book. For instance, I asked about pricing for a specific scenario: "If a tenant has 1,000 SharePoint sites, how much would it cost to get the SharePoint Sites and SharePoint permissions for that tenant using MGDC?"
To estimate the cost of getting the SharePoint Sites and SharePoint Permissions datasets for a tenant using Microsoft Graph Data Connect (MGDC), you can follow these guidelines:

Pricing Information:

- SharePoint Sites and Permissions: The price to pull 1,000 objects from the available MGDC for SharePoint datasets (Sites, Groups, and Permissions) in the US is $0.75, plus the cost for infrastructure like Azure Storage, Azure Data Factory, or Azure Synapse [1].
- SharePoint Files: This dataset is billed at $0.75 per 50,000 objects due to its typical high volume [1].

Estimating Costs:

- Sites: The number of objects in your SharePoint Sites dataset will be equal to the number of sites in your tenant.
- Permissions: The average number of SharePoint permissions per site is around 61 [1].

Example Calculation: For a tenant with 1,000 sites:

- Sites Dataset Cost: 1,000 objects at $0.75 per 1,000 objects = $0.75
- Permissions Dataset Cost: 1,000 sites × 61 permissions per site = 61,000 objects, at $0.75 per 1,000 objects = $45.75
- Total Cost: $0.75 + $45.75 = $46.50

Additional Costs:

- Azure Storage: Costs for storing the data in Azure Storage.
- Azure Synapse: Costs for running pipelines to move the data.

For more precise estimates, you can sample the dataset to get a total object count without pulling the entire dataset [1].

[1]: Book of Archimedes.pdf

Copilot will also answer questions in other languages. You just ask the question in whatever language you want, and it gets detected. Here's an example using Brazilian Portuguese. I asked "Quais são os datasets que fazem parte do cenário de segurança?" ("Which datasets are part of the security scenario?")

Os datasets que fazem parte do cenário de segurança no Project Archimedes são:

- Sites: Informações sobre os sites do SharePoint.
- Permissions: Informações sobre as permissões de compartilhamento.
- Groups: Detalhes sobre os grupos do SharePoint.

Esses datasets ajudam a entender se o conteúdo está devidamente protegido, respondendo a perguntas como:

- Está ocorrendo compartilhamento excessivo?
- Está ocorrendo compartilhamento externo?
- Dados sensíveis estão sendo compartilhados?
- Quanto compartilhamento ocorre por rótulo de sensibilidade?
- Dados sensíveis são compartilhados com usuários externos?
- Quais domínios externos estão sendo compartilhados?
- Quais sites foram mais compartilhados?
- Quais níveis de compartilhamento estão sendo usados?
- Quais permissões um usuário específico possui?
- Quais extensões de arquivo são mais compartilhadas?
- Quanto compartilhamento ocorre no nível de Web, Pasta, Lista ou Arquivo?

Esses datasets são fundamentais para garantir a segurança e a conformidade dos dados dentro do ambiente do SharePoint e OneDrive [1].

[1]: Book of Archimedes.pdf

3. Using the Book to create a SharePoint agent

You can also use the book to create a SharePoint agent. This makes it easy to ask questions about Microsoft Graph Data Connect for SharePoint without having to upload the PDF every time. To start, upload the Book of Archimedes to a SharePoint document library. Next, select the file and use the option to "create an agent". That's it. Your agent is created, and you can start using it immediately.

The agent shows up in the document library as a file with the ".agent" extension. You can simply open that file in SharePoint to run the agent. Once you open the agent, it offers you a text box to ask questions. Here is the agent's response to "What are the main questions about SharePoint capacity that MGDC for SharePoint can answer?"

The Book of Archimedes provides detailed answers to several key questions about SharePoint capacity using Microsoft Graph Data Connect (MGDC). Here are the main questions it addresses:

Key Questions about SharePoint Capacity:

- How much storage is being used by SharePoint sites? This includes understanding the total size of the latest versions of files, previous versions, metadata size, and deleted files in the recycle bin [1].
- What are the different storage sizes reported for a SharePoint site?
  - Storage Used: Includes all storage, metadata, previous versions, and recycle bins.
  - Total Size: Includes metadata and previous versions but excludes recycle bins.
  - Metadata Size: Storage used by file metadata.
  - Total File Stream Size: Size of the latest versions of files.
  - Previous Versions Size: Size of previous versions of files.
  - Recycle Bin Size: Size of deleted files in the recycle bins [1].
- How can administrators check the amount of storage being used by SharePoint sites?
  - SharePoint client UX: Viewing storage while using SharePoint as a site user or Site Collection administrator.
  - PowerShell cmdlets: Using command-line tools to query site storage information.
  - SharePoint Admin Center reports: Viewing usage reports in the Admin Center.
  - Microsoft Graph Data Connect: Pulling detailed datasets for SharePoint sites and OneDrives [1].
- How can administrators create test sites to understand storage reporting tools? Creating different types of test sites (empty site, site with files, site with deleted files, site with previous versions) and analyzing how storage sizes are reported [1].
- How can administrators estimate Azure costs for MGDC usage? Estimating the number of objects (sites, groups, permissions, files) and understanding the billing rates for different datasets [1].
- How can administrators use MGDC to run detailed analysis of SharePoint and OneDrive storage metrics? Using detailed data in MGDC to analyze storage metrics and understand the impact of metadata, versions, and deleted files [1].

Tools and Templates:

- Capacity Template: Includes instructions, ARM template, Azure Synapse pipeline template, and Power BI template to help administrators use MGDC datasets for capacity analysis [1].

Conclusion:

The Book of Archimedes provides comprehensive guidance on leveraging MGDC to gather, store, and analyze SharePoint data, ensuring effective data management and security. It is a valuable resource for administrators and developers working with SharePoint and OneDrive data using MGDC [1].

Would you like more specific information on any of these questions or tools?
[1]: Book of Archimedes.pdf

As we did with Copilot, you can also ask any question on a topic addressed in the book, and you can use languages other than English. You can experiment with editing the agent to rename it, change the description, add more sources, adjust its behavior, offer sample starter prompts, include specific instructions, or even customize it with Copilot Studio. Just use the option to "edit the agent". You can also share the agent just like you share any file in SharePoint. You can learn more about SharePoint agents in this YouTube video by Vesa Juvonen: https://www.youtube.com/watch?v=pYvK76SVEv4

4. Conclusion

I hope you enjoy using the Book of Archimedes as grounding for your AI tools. As usual, remember that AI tools may make mistakes and you should always double-check the answers you receive. There is also a chance that a particular AI tool might not be able to handle a large document like the Book of Archimedes, currently weighing in at around 10MB.