artificial intelligence
Selecting the Right Agentic Solution on Azure
Recently, we have seen a surge in requests from customers and Microsoft partners seeking guidance on building and deploying agentic solutions at various scales. With the rise of Generative AI, replacing traditional APIs with agents has become increasingly popular. There are several approaches to building, deploying, running, and orchestrating agents on Azure. In this discussion, I will focus exclusively on Azure-specific tools, services, and methodologies, setting aside Copilot and Copilot Studio for now. This article describes the options available as of today.

1. Azure OpenAI Assistants API: This feature within Azure OpenAI Service enables developers to create conversational agents (“assistants”) based on OpenAI models (such as GPT-3.5 and GPT-4). It supports capabilities like memory, tool/function calls, and retrieval (e.g., document search). However, Microsoft has already deprecated version 1 of the Azure OpenAI Assistants API, and version 2 remains in preview. Microsoft strongly recommends migrating all existing Assistants API-based agents to the Agent Service. Additionally, OpenAI is retiring the Assistants API and advises developers to use the modern Responses API instead (see the migration details). Given these developments, it is not advisable to use the Assistants API for building agents. Instead, you should use the Azure AI Agent Service, which is part of Azure AI Foundry.

2. Workflows with AI agents and models in Azure Logic Apps (Preview) – As the name suggests, this feature is currently in public preview and is only available with Logic Apps Standard, not with the Consumption plan. You can enhance your workflow by integrating agentic capabilities. For example, in a visa processing workflow, decisions can be made based on priority, application type, nationality, and background checks using a knowledge base. The workflow can then route cases to the appropriate queue and prepare messages accordingly. Workflows can be implemented either as chat assistants or as APIs. If your project is workflow-dependent and you are ready to implement agents in a declarative way, this is a great option. However, there are currently limited choices for models and regional availability. For CI/CD, there is an Azure Logic Apps Standard template for VS Code that you can use.

3. Azure AI Agent Service – Part of Azure AI Foundry, the Azure AI Agent Service allows you to provision agents declaratively from the UI. You can consume various OpenAI models (with support for non-OpenAI models coming soon) and leverage important tools or knowledge bases such as files, Azure AI Search, SharePoint, and Fabric. You can connect agents together and create hierarchical agent dependencies. SDKs are available for building agents within the Agent Service using Python, C#, or Java. Microsoft manages the infrastructure to host and run these agents in isolated containers. The service offers role-based access control, Microsoft Entra ID integration, and options to bring your own storage for agent state and your own Azure Key Vault keys. You can also incorporate different actions, including invoking a Logic App instance from your agent, and there is also an option to trigger an agent using Logic Apps (preview). Microsoft recommends using the Agent Service in Azure AI Foundry as the destination for agents, as further enhancements and investments are focused here.

4. Agent Orchestrators – There are several excellent orchestrators available, such as LlamaIndex, LangGraph, LangChain, and two from Microsoft: Semantic Kernel and AutoGen.
These options are ideal if you need full control over agent creation, hosting, and orchestration. They are developer-only solutions and do not offer a UI (although AutoGen Studio provides some UI assistance). You can create complex, multi-layered agent connections and then host and run these agents in your choice of Azure services, such as AKS or App Service. Additionally, you have the option to create agents using the Agent Service and then orchestrate them with one of these orchestrators.

Choosing the Right Solution
The choice of agentic solution depends on several factors, including whether you prefer code or no-code approaches, control over the hosting platform, customer needs, scalability, maintenance, orchestration complexity, security, and cost.
Customer Need: If agents need to be part of a workflow, use AI agents in Logic Apps; otherwise, consider other options.
No-Code: For workflow-based agents, Logic Apps is suitable; for other scenarios, Azure AI Agent Service is recommended.
Hosting and Maintenance: If Logic Apps is not an option and you prefer not to maintain your own environment, use Azure AI Agent Service. Otherwise, consider custom agent orchestrators such as Semantic Kernel or AutoGen to build the agents, and services such as AKS or App Service to host them.
Orchestration Complexity: For simple hierarchical agent connections, Azure AI Agent Service is a good choice. For complex orchestration, use an agent orchestrator.
Versioning: If you need versioning as part of a solid CI/CD regime, you may have to choose an agent orchestrator. The Agent Service does not yet offer clear versioning support; workarounds exist, but they are not robust. Hopefully a better versioning solution will arrive soon.

Summary: When selecting the right agentic solution on Azure, consider the latest recommendations and platform developments. For most scenarios, Microsoft advises using the Azure AI Agent Service within Azure AI Foundry, as it is the focus of ongoing enhancements and support. For workflow-driven projects, Azure Logic Apps with agentic capabilities may be suitable, while advanced users can leverage orchestrators for custom agent architectures.
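As a rough illustration of option 3, the hedged sketch below creates and runs a single agent with the Azure AI Agent Service using the azure-ai-projects Python SDK (still in preview). The connection string, model deployment name, agent instructions, and user message are placeholders, and exact method or parameter names can differ between preview releases of the SDK, so treat this as a minimal sketch rather than a reference implementation.

```python
# Minimal sketch: create and invoke an Azure AI Agent Service agent
# (azure-ai-projects preview SDK; names may vary between preview versions).
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-ai-foundry-project-connection-string>",  # placeholder
)

# Declare the agent: model deployment, name, and behavioural instructions.
agent = project.agents.create_agent(
    model="gpt-4o",  # assumed deployment name
    name="visa-triage-agent",
    instructions="Classify visa applications by priority and route them to the right queue.",
)

# Conversations run on threads; the service stores messages and executes runs.
thread = project.agents.create_thread()
project.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="Urgent business visa application from a returning applicant.",
)
run = project.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)

# Print the conversation, newest messages first.
for message in project.agents.list_messages(thread_id=thread.id).data:
    print(message.role, message.content)
```

The same service-hosted agent can later be referenced from an orchestrator (option 4) if the hierarchy outgrows what the service UI expresses comfortably.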
Azure OpenAI Landing Zone reference architecture
In this article, delve into the synergy of Azure Landing Zones and Azure OpenAI Service, building a secure and scalable AI environment. Unpack the Azure OpenAI Landing Zone architecture, which integrates numerous Azure services for optimal AI workloads. Explore robust security measures and the significance of monitoring for operational success. This journey of deploying Azure OpenAI evolves alongside Azure's continual innovation.
Modernizing Enterprise IT & Knowledge Support with Azure-Native Multiagent AI and LangGraph
Industry: Energy
Location: North America

Executive Summary: AI-Driven Multi-Agent Knowledge and IT Support Solution for an Energy Industry Firm
A North American energy company sought to modernize its legacy knowledge and IT support chatbot, which was underperforming across key metrics. The existing system, built on static rules and scripts, delivered slow and often inaccurate responses, failing to meet the organization's standards for employee engagement and operational efficiency. To address this challenge, we proposed and designed a cloud-native, AI-powered multi-agent system hosted on Microsoft Azure. Built on the LangGraph orchestration framework and Azure AI Foundry, the solution integrates advanced AI agent hierarchies, allowing for contextual, domain-specific knowledge retrieval and automated IT support. It improves speed, accuracy, and adaptability, delivering measurable gains in support resolution time, employee satisfaction, and knowledge accessibility.

Business Use Case
Challenge: The organization's internal support chatbot was not scaling with the needs of its workforce. Employees experienced delays, poor response relevance, and limited capabilities in both research assistance and IT troubleshooting. This led to increased reliance on human support teams, raising operational costs and slowing productivity.
Solution Overview: We implemented a LangGraph-based hierarchical multi-agent system, segmented by business domains (e.g., IT Support, Business Domain Knowledge). It enables a multi-level hierarchical AI agent-based system by creating a top-level supervisor that manages multiple supervisor agents, each of which handles a business domain within the organization. In this solution, each domain supervisor manages the worker or ReAct agents within its domain (IT Support and Knowledge Retrieval).

Agentic Workflow:
Architecture:

Solution Components:
AI Agent Orchestration Framework: LangGraph multi-agent and multi-level hierarchies (Python)
Frontend: React.js, FastAPI, Chainlit Server (Dev), CopilotKit Agentic UI (Prod)
Memory Management / Context Engineering: Azure Cosmos DB Memory Store
Data Source: Azure Data Lake Gen2
Vector Store: Azure AI Search Agentic Retrieval & Integrated Vectorization for data ingestion, query decomposition and parallel subqueries
Secrets Management: Azure Key Vault
Traditional and AI Agentic Observability: Azure AI Foundry and Azure Monitor (Log Analytics and Application Insights)
Model Catalog: Azure AI Foundry
LLM-Judge Based Evaluation: Online evaluation of the GenAI app with the Azure AI Evaluation Python SDK
Guardrails and AI Content Safety: AI Foundry Content Safety, prompt jailbreak protection and blocklists
AI Governance: Azure AI Foundry
Security: Managed Identity, RBAC, Network Security, Responsible AI
DevOps: GitHub Actions for apps & infra CI/CD, Azure App Service for hosting

Strategic Value: This solution lays the groundwork for enterprise-wide AI adoption by creating a flexible, modular and extensible agentic framework. It not only replaces a legacy system but enables future expansion into HR, Compliance, and Operational domains with minimal overhead. This is the first in a series of posts; future posts will provide a deeper dive into specific components of the solution.
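The customer's actual agents and tools are not shown in the article, but the hedged sketch below illustrates the hierarchical supervisor pattern described above with LangGraph's prebuilt ReAct agents and the langgraph-supervisor package. The tool functions, prompts, and model deployment name are invented placeholders, and package APIs may shift between releases.

```python
# Illustrative LangGraph supervisor hierarchy (not the production implementation).
# Assumes the langgraph, langgraph-supervisor and langchain-openai packages and an
# Azure OpenAI deployment named "gpt-4o" reachable via the usual environment variables.
from langchain_openai import AzureChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

model = AzureChatOpenAI(azure_deployment="gpt-4o", api_version="2024-10-21")  # assumed deployment

def reset_password(user_id: str) -> str:
    """Placeholder IT-support tool."""
    return f"Password reset ticket opened for {user_id}."

def search_knowledge_base(query: str) -> str:
    """Placeholder retrieval tool (in the real solution this is backed by Azure AI Search)."""
    return f"Top documents for: {query}"

# Worker (ReAct) agents, one per capability within the domain.
it_support = create_react_agent(
    model, tools=[reset_password], name="it_support",
    prompt="Handle employee IT support requests.",
)
knowledge = create_react_agent(
    model, tools=[search_knowledge_base], name="knowledge",
    prompt="Answer domain questions using the knowledge base.",
)

# Domain supervisor managing the workers; a top-level supervisor can manage
# several compiled domain supervisors in exactly the same way.
domain_supervisor = create_supervisor(
    agents=[it_support, knowledge],
    model=model,
    prompt="Route each request to either the IT support agent or the knowledge agent.",
).compile()

result = domain_supervisor.invoke(
    {"messages": [{"role": "user", "content": "My VPN client will not connect."}]}
)
print(result["messages"][-1].content)
```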
AI for Operations - Copilot Agent Integration
Solution ideas
The original framework introduced several Logic App and Function App patterns for SQL BPA, Update Manager, Cost Management, Anomaly Detection, and Smart Doc creation. In this article we add two Copilot Studio Agents, packaged in the GitHub repository Microsoft Azure AI for Operation Framework and designed to be deployed in a dedicated subscription (e.g., OpenAI-CoreIntegration):
Copilot FinOps Agent – interactive cost & usage analysis
Copilot Update Manager Agent – interactive patch status & one-time updates

Architecture
Copilot FinOps Agent
A Copilot Studio agent that lets stakeholders chat in natural language to retrieve, compare, and summarise cost data without leaving Teams.
Dataflow
Initial Trigger – User message (Teams / Copilot Studio web) invokes the topic. The conversation kicks off the topic “Analyze Azure Costs”.
1. Pre-Processing – A Power Automate flow captures tenant ID, subscription filters, and date range.
2. Cost Query – Azure Cost Management APIs pull actual and previous spend, returning JSON rows (service name, cost €).
3. OpenAI Analysis – Data is analyzed by the OpenAI/Copilot agent following the flow structure.
4. Response Formatting – The Copilot Studio flow formats the output as a table.
5. Chat Reply – The Copilot agent posts the insight list. Users can ask any kind of question related to the FinOps topic.
Components
Microsoft Copilot Studio (Developer licence) – low-code agent designer
Power Automate Premium – orchestrates REST calls, prompt assembly, file handling
Azure Cost Management + Billing – source of spend data (REST API)
Azure OpenAI Service – GPT-4o and o3-mini reasoning & text generation
Microsoft Teams – chat surface for Q&A, cards, and adaptive actions
Potential use cases
Finance teams asking “Why did VM spend jump last week?”
Engineers requesting a monthly cost overview before sprint planning
Leadership dashboards that can be drilled into via natural-language chat

Copilot Update Manager Agent
A Copilot Studio agent that surfaces patch compliance and can trigger ad-hoc One-Time Updates for selected VMs directly from the chat.
Dataflow
Initial Trigger – User message (Teams / Copilot Studio web) invokes the topic. The conversation kicks off the topic “Analyze Azure Costs”.
1. Pre-Processing – The flow validates RBAC and captures the target scope (subscription / RG / VM).
2. Patch Status Query – Azure Update Manager & Resource Graph query patchassessmentresources for KBs, severities, pending counts.
3. OpenAI Report – GPT-4o / o3-mini generates a VM-level summary (English) and a general overview.
4. Adaptive Card – Power Automate builds an Adaptive Card listing non-compliant VMs with “One-time Update” / “No action” buttons.
5a. User Action – Review – The user inspects details or asks follow-up questions.
5b. User Action – Patch Now – Clicking One-time Update calls the Update Manager REST API to start a One-Time Update job.
6. Confirmation – The agent posts the job ID, live status, and a final success / error summary.
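Stage 2 of the Update Manager dataflow queries Azure Resource Graph's patchassessmentresources tables. Outside of Power Automate, the same data can be retrieved from Python; the hedged sketch below uses the azure-mgmt-resourcegraph SDK with a placeholder subscription ID, and the KQL projection is illustrative rather than the exact query used by the packaged agent.

```python
# Hedged sketch of the stage-2 patch status query against Azure Resource Graph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Illustrative KQL: list pending software patches reported by Azure Update Manager.
kql = """
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults/softwarepatches'
| project vmId = tostring(split(id, '/patchAssessmentResults/')[0]),
          patchName = tostring(properties.patchName),
          classifications = properties.classifications,
          rebootRequired = properties.rebootBehavior
"""

request = QueryRequest(subscriptions=["<subscription-id>"], query=kql)  # placeholder subscription
response = client.resources(request)

for row in response.data:
    print(row)
```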
Components
Microsoft Copilot Studio – conversational front-end
Power Automate Premium – API orchestration & status polling
Azure Update Manager – compliance data & patch execution
Azure OpenAI Service – explanation & remediation text
Microsoft Teams – Adaptive Cards with action buttons
Potential use cases
Service owners getting a daily compliance digest with the ability to remediate on demand
Security officers validating zero-day patch rollout status via chat
Help-desk agents triaging “Is VM X missing critical updates?” without opening the Azure portal

Prerequisites
Copilot Studio Developer licence – 1 – assign in the Microsoft 365 Admin Center
Power Automate Premium licence – 1 user – needed for HTTP, Azure AD, OpenAI connectors
Microsoft Teams – 1 user – chat interface
Azure subscription – 1 – a dedicated OpenAI-CoreIntegration subscription is recommended
GitHub repo – latest – Microsoft Azure AI for Operation Framework Copilot Agent

Copilot Studio User Experience

Deployment steps (high level)
Assign licences – Copilot Studio Developer + Power Automate Premium
Create the Copilot Studio agent – New Agent → Skip to configure → fill basics → Create → Settings → disable GenAI orchestration
Import topics – Copilot topic Update Manager (link to configuration file), Copilot topic FinOps (link to configuration file)
Publish & share the agent to Teams.
Verify permission scopes for the Cost Management and Update Manager APIs.
Start chatting!
Feel free to clone the GitHub repo, adapt the topics to your tag taxonomy or FinOps dashboard structure, and let us know in the comments how Copilot Agents are transforming your operational workflows and... Stay Tuned for the next updates!

Contributors
Principal authors
Tommaso Sacco | Cloud Solutions Architect
Simone Verza | Cloud Solution Architect
Special thanks
Carmelo Ferrara | Director CSA
Antonio Sgrò | Sr CSA Manager
Marco Crippa | Sr CSA Manager
Boosting Productivity with Ansys RedHawk-SC and Azure NetApp Files Intelligent Data Infrastructure
Discover how integrating Ansys Access with Azure NetApp Files (ANF) is revolutionizing cloud-based engineering simulations. This article reveals how organizations can harness enterprise-grade storage performance, seamless scalability, and simplified deployment to supercharge Ansys RedHawk-SC workloads on Microsoft Azure. Unlock faster simulations, robust data management, and cost-effective cloud strategies—empowering engineering teams to innovate without hardware limitations. Dive in to learn how intelligent data infrastructure is transforming simulation productivity in the cloud!
Modernizing Loan Processing with Gen AI and Azure AI Foundry Agentic Service
Scenario
Once a loan application is submitted, financial institutions must process a variety of supporting documents—including pay stubs, tax returns, credit reports, and bank statements—before a loan can be approved. This post-application phase is often fragmented and manual, involving data retrieval from multiple systems, document verification, eligibility calculations, packet compilation, and signing. Each step typically requires coordination between underwriters, compliance teams, and loan processors, which can stretch the processing time to several weeks. This solution automates the post-application loan processing workflow using Azure services and Generative AI agents. Intelligent agents retrieve and validate applicant data, extract and summarize document contents, calculate loan eligibility, and assemble structured, compliant loan packets ready for signing. Orchestrated using Azure AI Foundry, the system ensures traceable agent actions and responsible AI evaluations. Final loan documents and metrics are stored securely for compliance and analytics, with Power BI dashboards enabling real-time visibility for underwriters and operations teams.

Architecture:
Workflow Description:
The loan processing architecture leverages a collection of specialized AI agents, each designed to perform a focused task within a coordinated, intelligent workflow. From initial document intake to final analytics, these agents interact seamlessly through an orchestrated system powered by Azure AI Foundry, GPT-4o, Azure Functions and Semantic Kernel. The agents not only automate and accelerate individual stages of the process but also communicate through an A2A layer to share critical context—enabling efficient, accurate, and transparent decision-making across the pipeline. Below is a breakdown of each agent and its role in the system.
It all begins at the User Interaction Layer, where a Loan Processor or Underwriter interacts with the web application. This interface is designed to be simple, intuitive, and highly responsive to human input. As soon as a request enters the system, it's picked up by the Triage Agent, powered by GPT-4o or GPT-4o-mini. This agent acts like a smart assistant that can reason through the problem and break it down into smaller, manageable tasks. For example, if the user wants to assess a new applicant, the Triage Agent identifies steps like verifying documents, calculating eligibility, assembling the loan packet, and so on. Next, the tasks are routed to the Coordinator Agent, which acts as the brains of the operation. Powered by Azure Functions & Semantic Kernel, this agent determines the execution order, tracks dependencies, and assigns each task to the appropriate specialized agent. The very first action that the Coordinator Agent triggers is the Applicant Profile Retrieval Agent. This agent taps into Azure AI Search, querying the backend to retrieve all relevant data about the applicant — previous interactions, submitted documents, financial history, etc. (a minimal sketch of this retrieval step appears after the components list below). This rich context sets the foundation for the steps that follow. Once the applicant profile is in place, the Coordinator Agent activates a set of specialized agents that perform their tasks according to the prompt received in the interaction layer. Below is the list of specialized agents:
a. Documents Verification Agent: This agent checks and verifies the authenticity and completeness of applicant-submitted documents as part of the loan process. Powered by: GPT-4o
b. Applicant Eligibility Assessment Agent: It evaluates whether the applicant meets the criteria for loan eligibility based on predefined rules and document content. Powered by: GPT-4o
c. Loan Calculation Agent: This agent computes loan values and terms based on the applicant's financial data and eligibility results. Powered by: GPT-4o
d. Loan Packet Assembly Agent: This agent compiles all verified data into a complete and compliant loan packet ready for submission or signing. Powered by: GPT-4o
e. Loan Packet Signing Agent: It handles the digital signing process by integrating with DocuSign and ensures all necessary parties have executed the loan packet. Powered by: GPT-4o
f. Analytics Agent: This agent connects with Power BI to update applicant status and visualize insights for underwriters and processors. Powered by: GPT-4o

Components
Here are the key components of the Loan Processing AI agent architecture:
Azure OpenAI GPT-4o / GPT-4o mini: Advanced multimodal language model. Used to summarize, interpret, and generate insights from documents, supporting intelligent automation. Empowers agents in this architecture with contextual understanding and reasoning.
Azure AI Foundry Agent Service: Agent orchestration framework. Manages the creation, deployment, and lifecycle of task-specific agents—such as classifiers, retrievers, and validators—enabling modular execution across the loan processing workflow.
Semantic Kernel: Lightweight orchestration library. Facilitates in-agent coordination of functions and plugins. Supports memory, chaining of LLM prompts, and integration with external systems to enable complex, context-aware behavior in each agent.
Azure Functions: Serverless compute for handling triggers such as document uploads, user actions, or decision checkpoints. Initiates agent workflows, processes events, and maintains state transitions throughout the loan processing pipeline.
Azure Cosmos DB: Globally distributed NoSQL database used for agent memory and context persistence. Stores conversation history, document embeddings, applicant profile snapshots, and task progress for long-running or multi-turn workflows.
Agentic Content Filters: Responsible AI mechanism for real-time filtering. Evaluates and blocks sensitive or non-compliant outputs generated by agents using customizable guardrails.
Agentic Evaluations: Evaluation framework for agent workflows. Continuously tests, scores, and improves agent outputs using both automatic and human-in-the-loop metrics.
Power BI: Business analytics tool that visualizes loan processing stages, agent outcomes, and applicant funnel data. Enables real-time monitoring of agent performance, SLA adherence, and operational bottlenecks for decision makers.
Azure ML Studio: Code-first development environment for building and training machine learning models in Python. Supports rapid iteration, experimentation, and deployment of custom models that can be invoked by agents.
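The following is a hedged sketch of the Applicant Profile Retrieval step described in the workflow: a query against an Azure AI Search index using the azure-search-documents SDK. The endpoint, index name, API key, and field names are placeholders; the real index schema depends on how applicant documents were ingested and vectorized.

```python
# Hedged sketch of the Applicant Profile Retrieval Agent's search call.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="applicant-profiles",                              # placeholder
    credential=AzureKeyCredential("<search-api-key>"),            # placeholder
)

# Retrieve everything the index holds about a given applicant.
results = search_client.search(
    search_text="applicant 12345 pay stubs tax returns credit report",
    select=["applicantId", "documentType", "summary"],  # assumed fields
    top=10,
)

# The retrieved documents become grounding context for the downstream
# verification, eligibility, and calculation agents.
profile_context = [dict(doc) for doc in results]
print(profile_context)
```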
Security Considerations:
Web App: For web applications, access control and identity management can be done using App Roles, which determine whether a user or application can sign in or request an access token for a web API. For threat detection and mitigation, Defender for App Service leverages the scale of the cloud to identify attacks targeting apps hosted on Azure App Service.
Azure AI Foundry: Azure AI Foundry supports robust identity management using Azure Role-Based Access Control (RBAC) to assign roles within Microsoft Entra ID, and it supports Managed Identities for secure resource access. Conditional Access policies allow organizations to enforce access based on location, device, and risk level. For network security, Azure AI Foundry supports Private Link, Managed Network Isolation, and Network Security Groups (NSGs) to restrict resource access. Data is encrypted in transit and at rest using Microsoft-managed keys or optional Customer-Managed Keys (CMKs). Azure Policy enables auditing and enforcing configurations for all resources deployed in the environment. Additionally, Microsoft Entra Agent ID extends identity management and access capabilities to AI agents: AI agents created within Microsoft Copilot Studio and Azure AI Foundry are automatically assigned identities in a Microsoft Entra directory, centralizing agent and user management in one solution. AI Security Posture Management can be used to assess the security posture of AI workloads. Purview APIs enable Azure AI Foundry and developers to integrate data security and compliance controls into custom AI apps and agents. This includes enforcing policies based on how users interact with sensitive information in AI applications. Purview Sensitive Information Types can be used to detect sensitive data in user prompts and responses when interacting with AI applications.
Cosmos DB: Azure Cosmos DB enhances network security by supporting access restrictions via Virtual Network (VNet) integration and secure access through Private Link. Data protection is reinforced by integration with Microsoft Purview, which helps classify and label sensitive data, and Defender for Cosmos DB to detect threats and exfiltration attempts. Cosmos DB ensures all data is encrypted in transit using TLS 1.2+ (mandatory) and at rest using Microsoft-managed or customer-managed keys (CMKs).
Power BI: Power BI leverages Microsoft Entra ID for secure identity and access management. In Power BI embedded applications, using Credential Scanner is recommended to detect hardcoded secrets and migrate them to secure storage like Azure Key Vault. All data is encrypted both at rest and during processing, with an option for organizations to use their own Customer-Managed Keys (CMKs). Power BI also integrates with Microsoft Purview sensitivity labels to manage and protect sensitive business data throughout the analytics lifecycle. For additional context, see the Power BI security white paper - Power BI | Microsoft Learn.

Related Scenarios
Financial Institutions: Banks and credit unions can streamline customer onboarding by using agentic services to autofill account paperwork, verify identity, and route data to compliance systems. Similarly, signing up for credit cards and applying for personal or business loans can be orchestrated through intelligent agents that collect user input, verify eligibility, calculate offers, and securely generate submission packets—just like in the proposed loan processing model.
Healthcare: Healthcare providers can deploy a similar agentic architecture to simplify patient intake by pre-filling forms, validating insurance coverage in real-time, and pulling medical history from existing systems securely. Agents can reason over patient inputs and coordinate backend workflows, improving administrative efficiency and enhancing the patient experience.
University Financial Aid/Scholarships: Universities can benefit from agentic orchestration for managing financial aid processes—automating the intake of FAFSA or institutional forms, matching students with eligible scholarships, and guiding them through complex application workflows. This reduces manual errors and accelerates support delivery to students.
Car Dealerships’ Financial Departments: Agentic systems can assist car dealerships in handling non-lot inventory requests, automating the intake and validation of custom vehicle orders. Additionally, customer loan applications can be processed through AI agents that handle verification, calculation, and packet assembly—mirroring the structure in the loan workflow above.
Commercial Real Estate: Commercial real estate firms can adopt agentic services to streamline property research, valuations, and loan application workflows. Intelligent agents can pull property data, fill out required financial documents, and coordinate submissions, making real estate financing faster and more accurate.
Law: Law firms can automate client onboarding with agents that collect intake data, pre-fill compliance documentation, and manage case file preparation. By using AI Foundry to coordinate agents for documentation, verification, and assembly, legal teams can reduce overhead and increase productivity.

Contributors:
This article is maintained by Microsoft. It was originally written by the following contributors.
Principal authors:
Manasa Ramalinga | Principal Cloud Solution Architect – US Customer Success
Oscar Shimabukuro Kiyan | Senior Cloud Solution Architect – US Customer Success
Abed Sau | Principal Cloud Solution Architect – US Customer Success
Matt Kazanowsky | Senior Cloud Solution Architect – US Customer Success
Building an Enterprise RAG Pipeline in Azure with NVIDIA AI Blueprint for RAG and Azure NetApp Files
Transform your enterprise-grade RAG pipeline with NVIDIA AI and Azure NetApp Files. This post highlights the challenges of scaling RAG solutions and introduces NVIDIA's AI Blueprint adapted for Azure. Discover how Azure NetApp Files boosts performance and handles dynamic demands, enabling robust and efficient RAG workloads.
Natural Language to SQL Semantic Kernel Multi-Agent System
In today's data-driven landscape, the ability to access and interpret information in a human-readable format is increasingly valuable. Being able to interact with and query your database in natural language is a game changer. In this post, we'll walk through how to build a SQL agent using the Semantic Kernel framework to interact with a PostgreSQL database containing DVD rental data. I'll explain how to define Semantic Kernel functions through plugins and how to incorporate them into agents. We'll also look at how to set up these agents and guide them with well-structured instructions. Our example uses a PostgreSQL database that stores detailed information about DVD rentals. This sample database contains 15 tables and can be found here: PostgreSQL Sample Database. With a natural language interface, users can simply ask questions like, "What are the most rented DVDs this month?" and the agent will generate and execute the relevant SQL query. This approach allows non-technical users to quickly gain insights from their data without needing to write SQL themselves. We'll use the Semantic Kernel Agent Framework for the enterprise readiness it offers, backed by security-enhancing features that support a responsible AI solution at scale.

Define Semantic Kernel functions
To enable our agent to generate and execute SQL queries, we first need to define the necessary functions. The easiest way to provide a Semantic Kernel AI agent with capabilities is to wrap native code into a plugin. You can learn more here. In our SQL agent we will define two plugins:

QueryPostgresPlugin
This plugin takes a SQL query as input, sanitizes it by removing unnecessary characters, and then executes it against a PostgreSQL database using the psycopg2 library. It then returns the query results either as a formatted string or as structured data. Below is an example demonstrating how this can be done.

```python
import asyncio
import psycopg2
from typing import Annotated

from semantic_kernel.functions import kernel_function


class QueryPostgresPlugin:
    """
    Plugin to query a PostgreSQL database using a SQL query.
    The SQL query is provided as an input parameter.
    """

    def __init__(self, connection_string: str) -> None:
        self._connection_string = connection_string

    @staticmethod
    def __clean_sql_query__(sql_query: str) -> str:
        """Clean the SQL query to remove unnecessary characters."""
        return sql_query.replace(";", "").replace("\n", " ")

    @kernel_function(
        name="query_postgres",
        description="Query a PostgreSQL database using a SQL query and return the results as a string.")
    async def query_postgres(
        self,
        sql_query: Annotated[str, "SQL query to be executed"]
    ) -> Annotated[str, "The results of the SQL query as a formatted string"]:
        """
        Executes the SQL query against PostgreSQL using psycopg2 and returns the results.

        Args:
            sql_query: The SQL query to be executed.

        Returns:
            A string representation of the query results or an error message.
        """
        def run_query():
            try:
                conn = psycopg2.connect(self._connection_string)
                cur = conn.cursor()
                # clean the SQL query.
                query = self.__clean_sql_query__(sql_query)
                cur.execute(query)
                # retrieve column names before fetching rows.
                col_names = [desc[0] for desc in cur.description] if cur.description else []
                rows = cur.fetchall()
                cur.close()
                conn.close()
                if not rows:
                    return "No results found."
                # convert rows to a list of dictionaries if column names are available.
                results = [dict(zip(col_names, row)) for row in rows] if col_names else rows
                return str(results)
            except Exception as e:
                return f"Error executing query: {e}"

        # run the synchronous query code in a thread.
        result = await asyncio.to_thread(run_query)
        return result
```
By abstracting away the details of database connectivity and query execution, this plugin allows our agent to focus solely on generating accurate queries based on user intent.

GetSchemaPlugin
The second plugin we define is the GetSchemaPlugin. Before generating any queries, the agent must first understand the underlying database structure. This plugin connects to PostgreSQL and retrieves schema details (such as table names, column names, and data types) from the information_schema.columns view. Having this context is essential for the agent to construct accurate and complex SQL queries, ensuring that all table and column references are valid.

```python
class GetSchemaPlugin:
    """
    Plugin to retrieve the schema of tables from a PostgreSQL database.
    It returns table schema, table name, column name, and data type for each column.
    """

    def __init__(self, connection_string: str) -> None:
        self._connection_string = connection_string

    @kernel_function(
        name="get_schema",
        description="Retrieves the schema of tables from the PostgreSQL database. "
                    "Returns table names, column names, and data types as a formatted string.")
    async def get_schema(
        self,
        _: Annotated[str, "Unused parameter for compatibility"]
    ) -> Annotated[str, "The schema details as a formatted string"]:
        """
        Connects to PostgreSQL using psycopg2, retrieves schema details from the
        information_schema.columns view, and returns the results as a formatted string.
        """
        def run_schema_query():
            try:
                # connect to the PostgreSQL database.
                conn = psycopg2.connect(self._connection_string)
                cur = conn.cursor()
                # query to retrieve schema details excluding internal schemas.
                query = """
                    SELECT table_schema, table_name, column_name, data_type
                    FROM information_schema.columns
                    WHERE table_schema NOT IN ('information_schema', 'pg_catalog')
                    ORDER BY table_schema, table_name, ordinal_position;
                """
                cur.execute(query)
                rows = cur.fetchall()
                cur.close()
                conn.close()
                if not rows:
                    return "No schema information found."
                # format the output.
                schema_details = []
                for table_schema, table_name, column_name, data_type in rows:
                    schema_details.append(f"{table_schema}.{table_name} - {column_name} ({data_type})")
                return "\n".join(schema_details)
            except Exception as e:
                return f"Error retrieving schema: {e}"

        result = await asyncio.to_thread(run_schema_query)
        return result
```

Both plugins encapsulate the low-level details of database operations, enabling the agent to generate natural language-based queries that interact with the database seamlessly.

Define Semantic Kernel Agents
Once the plugins are defined, the next step is to define our agents and integrate the plugins into them. There are currently three types of agents available from the Semantic Kernel Agent Framework: Azure AI Agent, Chat Completion Agent and Azure Assistant Agent. Each provides distinct features tailored to different use cases. The Azure AI Agent integrates with Azure AI Agent Services, enabling end-to-end observability via the AI Foundry interface and access to a broad ecosystem of tools and connectors. The Chat Completion Agent is optimized for generating conversational responses, while the Azure Assistant Agent uses the same wire protocol as the Assistant API, allowing the use of advanced capabilities such as the Code Interpreter and File Search. In our example, we'll use a Chat Completion Agent to interact with the PostgreSQL database, and an Azure Assistant Agent to visualize the retrieved data. This setup involves three core components for each agent, along with an orchestration layer that coordinates their interaction.
Defining the Agent
We start by creating an instance of the Chat Completion Agent using the Semantic Kernel framework. This agent is provided with detailed guidance on how to interpret user queries, when to fetch schema information, and how to construct the appropriate SQL statements in response.

```python
agent = ChatCompletionAgent(
    kernel=kernel,
    name="SQLAssistantAgent",
    instructions="""
    You are a helpful assistant that retrieves data from a PostgreSQL database.
    When a user asks for information, get the schema using GetSchemaPlugin.
    Look for relevant tables and columns that will answer the user question and generate a SQL query.
    Generate the SQL query you are going to execute and then use the QueryPostgresPlugin's query_postgres function.
    The tables reside in the public schema of the database so use the following format:
    public."table_name", for example: 'SELECT * FROM public."actor" LIMIT 5'.
    Always return the result of the SQL query as a json.
    """,
    plugins=[schema_plugin, query_plugin],
)
```

Specifying detailed instructions
Instructions are critical to guide the agent's behavior, especially when generating SQL queries. We aim to provide clear guidance, including one-shot or few-shot examples if needed, as well as context about the dataset, such as descriptions of ambiguous column names or any known irregularities. Since agents are not inherently time-aware, it's important to supply temporal context explicitly, enabling them to generate queries that reference the current date or time period when appropriate. Clear instructions ensure that the agent understands its role and properly orchestrates the plugin calls.

Integrating the Plugins
The agent is then configured with instances of both the QueryPostgresPlugin and the GetSchemaPlugin. When the user poses a question like "How many DVDs were rented this month," the agent first consults the schema plugin to gather information about relevant tables and columns. It then constructs and executes a corresponding SQL query using the query plugin.

Now let's apply the same approach to the Azure Assistant Agent. We'll create an agent equipped with the Code Interpreter tool, enabling it to generate and execute Python code for visualizing the data received from the SQL Assistant Agent. Additionally, we need to define file download functions to support this workflow. A complete example is available here.

```python
definition = await client.beta.assistants.create(
    model=model,
    name="CodeRunner",
    instructions="""
    SQL Assistant Agent will provide you the data in JSON format.
    Never ask if the user wants a chart or graph; start analyzing the data and generating them automatically.
    1. Parse the JSON into Python data structures.
    2. Execute the Python code needed to generate a chart or graph.
    3. Save the chart as a PNG file.
    4. Upload the PNG via the code-interpreter tool and return its file reference (FileReferenceContent).
    Always format your **final answer** in markdown - do not simply return code blocks.
    Include the actual image reference alongside any analysis.
    Return the file path that the chart was saved to.
    """,
    tools=code_interpreter_tool,
    tool_resources=code_interpreter_tool_resources,
)

code_agent = AzureAssistantAgent(
    client=client,
    definition=definition,
)
```

Group Chat Orchestration
To orchestrate the interaction between the SQL Assistant Agent and the CodeRunner Agent, we'll use an Agent Group Chat. Within this setup, we define both a selection strategy and a termination strategy to manage the flow of conversation between agents.
This ensures that control passes smoothly from one agent to another and that the dialogue concludes appropriately once the required tasks are completed.

```python
# Define a termination function where the reviewer signals completion with "yes".
termination_keyword = "YES"

termination_function = KernelFunctionFromPrompt(
    function_name="termination",
    prompt=f"""
    Determine whether the conversation is complete.
    The conversation is considered complete the moment CodeRunner uploads a chart image file,
    or when the CodeRunner agent has provided a final answer in markdown format.
    If the last message ({{$lastmessage}}) contains any file reference (e.g. a FileReferenceContent item),
    for example like this:
    1. I have created a bar chart representing the top 5 most rented movies. You can download the chart using the link below.
    2. Below is a bar chart visualizing the top 5 most rented out movies. Please find the file of the chart attached.
    3. "Here is the chart you requested:"
    respond with exactly '{termination_keyword}' and nothing else.
    """,
)

# Create the AgentGroupChat with the termination strategy.
chat = AgentGroupChat(
    agents=[agent, code_agent],
    termination_strategy=KernelFunctionTerminationStrategy(
        agents=[agent, code_agent],
        function=termination_function,
        kernel=kernel,
        # compare case-insensitively so the 'YES' keyword is detected in the lowered reply.
        result_parser=lambda result: termination_keyword.lower() in str(result.value[0]).lower(),
        history_variable_name="lastmessage",
        maximum_iterations=10,
        history_reducer=history_reducer,
    ),
)

group_chat = AgentGroupChat(
    agents=[agent, code_agent])
```

Results
And there you have it - a working multi-agent setup that lets you ask questions in natural language and automatically visualize the results. Now let's ask a question in natural language and see how the agents respond.
The question: "What is the total payment revenue generated, and how does it break down by film category?"
Response: AuthorRole.ASSISTANT - CodeRunner: 'The total payment revenue generated is $61,312.04. Additionally, here is a breakdown of payment revenue by film category, visualized in the chart above. High revenue contributors include Sports, Sci-Fi, and Animation, while categories such as Music and Travel generate comparatively lower income.'
And, of course, we have a clear visual to accompany the analysis.
Thank you for your time! The full example can be found in this GitHub repo.
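The article stops at the definitions; as a rough guide, the hedged sketch below shows one way such a group chat is typically driven with the semantic-kernel Python package (method names can shift between releases), using the same question as in the Results section.

```python
# Hedged sketch of driving the AgentGroupChat defined above (semantic-kernel ~1.x).
import asyncio

from semantic_kernel.contents import ChatMessageContent
from semantic_kernel.contents.utils.author_role import AuthorRole


async def run_chat() -> None:
    question = ("What is the total payment revenue generated, "
                "and how does it break down by film category?")
    # Post the user question to the group chat ...
    await chat.add_chat_message(ChatMessageContent(role=AuthorRole.USER, content=question))
    # ... then stream the agents' turns until the termination strategy fires.
    async for response in chat.invoke():
        print(f"{response.role} - {response.name}: {response.content}")


asyncio.run(run_chat())
```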
Streamlining data discovery for AI/ML with OpenMetadata on AKS and Azure NetApp Files
This article contains a step-by-step guide to deploying OpenMetadata on Azure Kubernetes Service (AKS), using Azure NetApp Files for storage. It also covers the deployment and configuration of PostgreSQL and OpenSearch databases to run externally from the Kubernetes cluster, following OpenMetadata best practices, managed by NetApp® Instaclustr®. This comprehensive tutorial aims to assist Microsoft and NetApp customers in overcoming the challenges of identifying and managing their data for AI/ML purposes. By following this guide, users will achieve a fully functional OpenMetadata instance, enabling efficient data discovery, enhanced collaboration, and robust data governance.
AI for Operations
Solution ideas
This solution series shows some examples of how Azure OpenAI and its LLM models can be used on Operations and FinOps issues. With a view to the use of models linked to the Enterprise Scale Landing Zone, the solutions shown, which are available on a dedicated GitHub repository, are designed to be deployed within a dedicated subscription, in the examples called 'OpenAI-CoreIntegration'. The examples we are going to list are:
SQL BPA AI Enhanced
Azure Update Manager AI Enhanced
Azure Cost Management AI Enhanced
Azure AI Anomalies Detection
Azure OpenAI Smart Doc Creator
Enterprise Scale AI for Operations Landing Zone Design
Architecture

SQL BPA AI Enhanced
Architecture
This Logic App is an example of integrating Arc-enabled SQL Server best practices assessment results with OpenAI, creating an HTML report and a CSV file sent via email, with OpenAI commentary on High and/or Medium severity results based on the current Microsoft documentation.
Dataflow
Initial Trigger – Type: Recurrence. Configuration: Frequency: Weekly, Day: Monday, Time: 9:00 AM, Time Zone: W. Europe Standard Time. Description: The Logic App is triggered weekly to gather data for SQL Best Practice Assessments.
Step 1: Data Query – Action: Run_query_and_list_results. Description: Executes a Log Analytics query to retrieve SQL assessment results from monitored resources. Output: A dataset containing issues classified by severity (High/Medium).
Step 2: Variable Initialization – Actions: Initialize_variable_CSV: Initializes an empty array to store CSV results. Open_AI_API_Key: Sets up the API key for the Azure OpenAI service. HelpLinkContent: Prepares a variable to store useful links. Description: Configures necessary variables for subsequent steps.
Step 3: Process Results – Action: For_eachSQLResult. Description: Processes the query results with the following sub-steps: Condition: Checks if the severity is High or Medium. OpenAI Processing: Sends structured prompts to the GPT-4 model for recommendations on identified issues and parses the JSON response to extract specific insights. CSV Composition: Creates an array containing detailed results.
Step 4: Report Generation – Actions: Create_CSV_table: Converts processed data into a CSV format. Create_HTML_table: Generates an HTML table from the data. ComposeMailMessage: Prepares an HTML email message containing the results and a link to the report. Description: Formats the data for sharing.
Step 5: Saving and Sharing – Actions: Create_file: Saves the HTML report to OneDrive. Send_an_email_(V2): Sends an email with the reports attached (HTML and CSV). Post_message_in_a_chat_or_channel: Shares the results in a Teams channel. Description: Distributes the reports to defined recipients.
Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow you to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
SQL Server enabled by Azure Arc extends Azure services to SQL Server instances hosted outside of Azure: in your data center, in edge site locations like retail stores, or any public cloud or hosting provider.
The SQL Best Practices Assessment feature provides a mechanism to evaluate the configuration of your SQL Server instance.
Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments.
Azure Kusto Query is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.
Potential use cases
SQL BPA AI Enhanced exploits the capabilities of the SQL Best Practices Assessment service based on Azure Arc-enabled SQL Server. The collected data can be used for the generation of customised tables. The solution is designed for customers who want to enrich their assessment information with generative artificial intelligence.
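Step 1 of this dataflow runs a Log Analytics query from the Logic App. The hedged sketch below shows an equivalent query issued from Python with the azure-monitor-query SDK; the workspace ID is a placeholder and the table name is an assumption (Arc-enabled SQL Server assessments typically land in a custom log such as SqlAssessment_CL), so adapt the KQL to your workspace.

```python
# Hedged sketch of the Step 1 data query with the azure-monitor-query SDK.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Illustrative KQL; the real Logic App query filters and shapes assessment findings
# (for example keeping only High/Medium severity rows).
kql = "SqlAssessment_CL | where TimeGenerated > ago(7d) | take 50"

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=kql,
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```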
Azure Update Manager AI Enhanced
Architecture
This Logic App solution example retrieves data from the Azure Update Manager service and returns an output processed by generative artificial intelligence.
Dataflow
Initial Trigger – Type: Recurrence Trigger. Frequency: Monthly. Time Zone: W. Europe Standard Time. Triggers the Logic App at the beginning of every month.
Step 1: Initialize API Key – Action: Initialize Variable. Variable Name: Api-Key.
Step 2: Fetch Update Status – Action: HTTP Request. URI: https://management.azure.com/providers/Microsoft.ResourceGraph/resources. Query: Retrieves resources related to patch assessments using patchassessmentresources.
Step 3: Parse Update Status – Action: Parse JSON. Content: Response body from the HTTP request. Schema: Extracts details such as VM Name, Patch Name, Patch Properties, etc.
Step 4: Process Updates – For Each: Body('Parse_JSON')?['data']. Iterates through each item in the parsed update data. Condition: If Patch Name is not null and contains "KB": Action: Format Item – Parses individual update items for VM Name, Patch Name, and additional properties. Action: Send to Azure OpenAI – Sends structured prompts to the GPT-4 model. Headers: Content-Type: application/json, api-key: @variables('Api-Key'). Body: Prompts Azure OpenAI to generate a report for each virtual machine and patch, formatted in Italian. Action: Parse OpenAI Response – Extracts and formats the response generated by Azure OpenAI. Action: Append to Summary and CSV – Adds the OpenAI-generated response to the Updated Summary array and appends patch details to the CSV array.
Step 5: Finalize Report – Action: Create Reports (I, II, III) – Formats and cleans the Updated Summary variable to remove unwanted characters. Action: Compose HTML Email Content – Constructs an HTML email with the report summary generated using OpenAI, a disclaimer about possible formatting anomalies, and the company logo embedded.
Step 6: Generate CSV Table – Action: Converts the CSV array into a CSV format for attachment.
Step 7: Send E-Mail – Action: Send Email. Recipient: user@microsoft.com. Subject: Security Update Assessment. Body: HTML content with report summary. Attachment: Name: SmartUpdate_<timestamp>.csv. Content: CSV table of update details.
Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow you to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
Azure Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your machines in Azure and on-premises/on other cloud platforms (connected by Azure Arc) from a single pane of management. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window.
Azure Arc Server lets you manage Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or on another cloud provider.
Potential use cases
Azure Update Manager AI Enhanced is an example of a solution for situations where the IT department needs to automate reporting, in a readable format, on the update status of its infrastructure, with the narrative produced by generative artificial intelligence.
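Step 4 sends each parsed patch item to Azure OpenAI through an HTTP action. The hedged sketch below reproduces that call with the openai Python SDK; the endpoint, key, deployment name, and sample patch item are placeholders rather than values from the packaged Logic App.

```python
# Hedged sketch of the Step 4 "Send to Azure OpenAI" call using the openai SDK.
import json

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                               # placeholder
    api_version="2024-06-01",
)

# Example shape of one parsed item from the Resource Graph response (illustrative).
patch_item = {
    "vmName": "vm-app-01",
    "patchName": "KB5041585",
    "classifications": ["Security Updates"],
    "rebootRequired": True,
}

completion = client.chat.completions.create(
    model="gpt-4o",  # assumed deployment name
    messages=[
        {"role": "system",
         "content": "You summarise Azure Update Manager findings for IT operations reports."},
        {"role": "user",
         "content": "Write a short report for this pending patch: " + json.dumps(patch_item)},
    ],
)

print(completion.choices[0].message.content)
```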
Azure Cost Management AI Enhanced
Architecture
This Logic App solution retrieves consumption data from the Azure environment and generates a general and detailed cost trend report on a scheduled basis.
Dataflow
Initial Trigger – Type: Manual HTTP Trigger. The Logic App is triggered manually using an HTTP request.
Step 1: Set Current Date and Old Date – Action: Set Actual Date – The current date is initialized to @utcNow('yyyy-MM-dd'). Example value: 2024-11-22. Action: Set Actual Date -30 – The old date is set to 30 days before the current date. Example value: 2024-10-23. Action: Set old date -30 – Sets the variable currentdate to 30 days prior to the old date. Example value: 2024-09-23. Action: Set old date -60 – Sets the variable olddate to 60 days before the current date. Example value: 2024-08-23.
Step 2: Query Cost Data – Action: Query last 30 days – Queries Azure Cost Management for the last 30 days. Example data returned: { "properties": { "rows": [ ["Virtual Machines", 5000], ["Databases", 7000], ["Storage", 3000] ] } }. Action: Query -60 -30 days – Queries Azure Cost Management for 30 to 60 days ago. Example data returned: { "properties": { "rows": [ ["Virtual Machines", 4800], ["Databases", 6800], ["Storage", 3050] ] } }.
Step 3: Download Detailed Reports – Action: Download_report_actual_month – Generates and retrieves a detailed cost report for the current month. Action: Download_report_last_month – Generates and retrieves a detailed cost report for the previous month.
Step 4: Process and Store Reports – Action: Actual_Month_Report – Parses the JSON from the current month's report and retrieves blob download links for the detailed report. Action: Last_Month_Report – Parses the JSON from the last month's report and retrieves blob download links for the detailed report. Action: Create_ActualMonthDownload and Create_LastMonthDownload – Initializes variables to store download links. Action: Get_Actual_Month_Download_Link and Get_Last_Month_Download_Link – Iterates through blob data and assigns the download link variables.
Step 5: Generate Questions for OpenAI – Action: Set_Question – Prepares the first question for Azure OpenAI: "Describe the key differences between the previous and current month's costs, and create a bullet-point list detailing these differences in Euros." Action: Set_Second_Question – Prepares a second question for Azure OpenAI: "Briefly describe in Italian the major cost differences between the two months, rounding the amounts to Euros."
Step 6: Send Questions to Azure OpenAI – Action: Passo result to OpenAI – Sends the first question to OpenAI for generating detailed insights. Action: Get Description from OpenAI – Sends the second question to OpenAI for a brief summary in Italian.
Step 8: Process OpenAI Responses – Action: Parse_JSON and Parse_JSON_Second_Question – Parses the JSON responses from OpenAI for both questions and retrieves the content of the generated insights. Action: For_each_Description – Iterates through OpenAI's responses and assigns the description to a variable DescriptionOutput.
Step 9: Compose and Send E-Mail – Action: Compose_Email – Composes an HTML email including the key insights from OpenAI and links to download the detailed reports. Example email content: "Azure automated cost control system: - Increase of €200 in Virtual Machines. - Reduction of €50 in Storage. Download details: - Current month: [Download Report] - Previous month: [Download Report]." Action: Send_an_email_(V2) – Sends the composed email.
Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow you to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
Potential use cases
Azure Cost Management AI Enhanced is an example of a solution designed for those who need to schedule the generation of reports on FinOps topics, with the possibility to customise the output and send the results via e-mail or perform a customised upload.
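Step 2 queries the Azure Cost Management Query API. For reference, the hedged sketch below issues an equivalent request from Python; the subscription ID and grouping dimension are placeholders, and the Logic App itself performs this call as an HTTP action with a managed identity rather than a token acquired in code.

```python
# Hedged sketch of the Step 2 cost query against the Cost Management Query API.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.CostManagement/query?api-version=2023-03-01"
)

body = {
    "type": "ActualCost",
    "timeframe": "TheLastMonth",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "ServiceName"}],
    },
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

# Rows come back in the same shape as the example shown in the dataflow above.
for row in response.json()["properties"]["rows"]:
    print(row)
```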
Azure AI Anomalies Detection
Architecture
This Logic App solution leverages Azure Monitor's native machine learning capabilities to retrieve anomalous data within application logs. These anomalies are then analysed by OpenAI.
Dataflow
Initial Trigger – Type: Recurrence Trigger. Frequency: Monthly. Time Zone: W. Europe Standard Time. Triggers the Logic App at the beginning of every month.
Step 1: Initialize API Key – Action: Initialize Variable. Variable Name: Api-Key.
Step 2: Fetch Update Status – Action: HTTP Request. URI: https://management.azure.com/providers/Microsoft.ResourceGraph/resources. Query: Retrieves resources related to patch assessments using patchassessmentresources.
Step 3: Parse Update Status – Action: Parse JSON. Content: Response body from the HTTP request. Schema: Extracts details such as VM Name, Patch Name, Patch Properties, etc.
Step 4: Process Updates – For Each: @body('Parse_JSON')?['data']. Iterates through each item in the parsed update data. Condition: If Patch Name is not null and contains "KB": Action: Format Item – Parses individual update items for VM Name, Patch Name, and additional properties. Action: Send to Azure OpenAI – Sends structured prompts to the GPT-4 model. Headers: Content-Type: application/json, api-key: @variables('Api-Key'). Body: Prompts Azure OpenAI to generate a report for each virtual machine and patch, formatted in Italian. Action: Parse OpenAI Response – Extracts and formats the response generated by Azure OpenAI. Action: Append to Summary and CSV – Adds the OpenAI-generated response to the Updated Summary array and appends patch details to the CSV array.
Step 5: Finalize Report – Action: Create Reports (I, II, III) – Formats and cleans the Updated Summary variable to remove unwanted characters. Action: Compose HTML Email Content – Constructs an HTML email with the report summary generated using OpenAI, a disclaimer about possible formatting anomalies, and the company logo embedded.
Step 6: Generate CSV Table – Action: Converts the CSV array into a CSV format for attachment.
Step 7: Send Notifications – Action: Send Email. Recipient: user@microsoft.com. Subject: Security Update Assessment. Body: HTML content with report summary. Attachment: Name: SmartUpdate_<timestamp>.csv. Content: CSV table of update details.
Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow you to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments.
Azure Kusto Query is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.
Potential use cases
Azure AI Anomalies Detection is an example of a solution that exploits the machine learning capabilities of Azure Monitor to diagnose anomalies within application logs, which are then analysed by Azure OpenAI. The solution can be customized based on customer requirements.

Azure OpenAI Smart Doc Creator
Architecture
This Function App solution leverages Azure OpenAI LLM generative AI to create a DOCX file based on the Azure architectural information of a specific workload (Azure metadata based). The function exploits the 'OpenAI multi-agent' concept.
Dataflow
Step 1: Logging and Configuration Setup – Initialize Logging: Advanced logging is set up to provide debug-level insights; the format includes timestamps, log levels, and messages. Retrieve OpenAI Endpoint: QUESTION_ENDPOINT is retrieved from environment variables, and logging confirms the endpoint retrieval.
Step 2: Authentication – Managed Identity Authentication: The ManagedIdentityCredential class is used for secure Azure authentication. The SubscriptionClient is initialized to access Azure subscriptions. Retrieves a token for Azure Cognitive Services (https://cognitiveservices.azure.com/.default).
Step 3: Flattening Dictionaries – Function: flatten_dict – Transforms nested dictionaries into a flat structure, handles nested lists and dictionaries recursively, and is used for preparing metadata for storage in CSV.
Step 4: Resource Tag Filtering – Functions: get_resources_by_tag_in_subscription: Filters resources in a subscription based on a tag key and value. get_resource_groups_by_tag_in_subscription: Identifies resource groups with matching tags. Purpose: Retrieve Azure resources and resource groups tagged with specific key-value pairs.
Step 5: Resource Metadata Retrieval – Functions: get_all_resources: Aggregates resources and resource groups across all accessible subscriptions. get_resources_in_resource_group_in_subscription: Retrieves resources from specific resource groups.
get_latest_api_version: Determines the most recent API version for a given resource type. get_resource_metadata: Retrieves detailed metadata for individual resources using the latest API version. Purpose: Collect comprehensive resource details for further processing.
Step 6: Documentation Generation – Function: generate_infra_config – Processes metadata through OpenAI to generate documentation. OpenAI generates detailed and human-readable descriptions for Azure resources. Multi-stage review process: an initial draft by OpenAI, a feedback loop with ArchitecturalReviewer and DocCreator for refinement, and the final content saved to architecture.txt.
Step 7: Workload Overview – Function: generate_workload_overview – Reads from the generated CSV file to create a summary of the workload and sends the resource list to OpenAI for generating a high-level overview.
Step 8: Conversion to DOCX – Function: txt_to_docx – Creates a Word document (Output.docx) with Section 1: "Workload Overview" (generated summary) and Section 2: "Workload Details" (detailed resource metadata), adding structured headings and page breaks.
Step 9: Temporary Files Cleanup – Function: cleanup_files – Deletes temporary files (architecture.txt, resources_with_expanded_metadata.csv, Output.docx) and ensures no residual files remain after execution.
Step 10: CSV Metadata Export – Function: save_resources_with_expanded_metadata_to_csv – Aggregates and flattens resource metadata, saves details to resources_with_expanded_metadata.csv, and includes unique keys derived from all metadata fields.
Step 11: Architectural Review Process – Functions: ArchitecturalReviewer: Reviews and suggests improvements to documentation. DocCreator: Incorporates reviewer suggestions into the documentation. Purpose: Iterative refinement for high-quality documentation.
Step 12: HTTP Trigger Function – Function: smartdocs – Accepts HTTP requests with tag_key and tag_value parameters and orchestrates the entire workflow: resource discovery, metadata retrieval, documentation generation, and file cleanup. Responds with success or error messages.
Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running.
Azure Function App Managed Identities allow you to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
Azure libraries for Python (SDK) are the open-source Azure libraries for Python designed to simplify the provisioning, management and utilisation of Azure resources from Python application code.
Potential use cases
The Azure OpenAI Smart Doc Creator Function App, like all the proposed solutions, can be modified to suit your needs. It can be of practical help when there is a need to obtain all the configurations, in terms of metadata, of the resources and services that make up a workload.
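As a condensed, hedged illustration of Steps 4 to 7, the sketch below lists resources by tag with the Azure SDK for Python and asks Azure OpenAI for a workload description. The subscription ID, tag values, endpoint, key, and deployment name are placeholders, and the real Function App adds the reviewer loop, CSV export, and DOCX generation on top of this.

```python
# Condensed sketch of the Smart Doc Creator flow: resources by tag -> OpenAI description.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from openai import AzureOpenAI

credential = DefaultAzureCredential()
resource_client = ResourceManagementClient(credential, "<subscription-id>")  # placeholder

# Resource tag filtering (cf. get_resources_by_tag_in_subscription).
tagged = resource_client.resources.list(
    filter="tagName eq 'workload' and tagValue eq 'smartdocs-demo'"  # placeholder tag
)
inventory = [{"name": r.name, "type": r.type, "location": r.location} for r in tagged]

# Documentation generation (cf. generate_workload_overview).
openai_client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                               # placeholder
    api_version="2024-06-01",
)
overview = openai_client.chat.completions.create(
    model="gpt-4o",  # assumed deployment name
    messages=[
        {"role": "system",
         "content": "You write concise architecture documentation for Azure workloads."},
        {"role": "user",
         "content": f"Describe this workload based on its resources: {inventory}"},
    ],
)
print(overview.choices[0].message.content)
```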
Contributors
Principal authors:
Tommaso Sacco | Cloud Solutions Architect
Simone Verza | Cloud Solution Architect
Extended contribution:
Saverio Lorenzini | Senior Cloud Solution Architect
Andrea De Gregorio | Technical Specialist
Gianluca De Rossi | Technical Specialist
Special thanks:
Carmelo Ferrara | Director CSA
Marco Crippa | Sr CSA Manager