Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight
In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works.

This tutorial shows you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process automated quickly.

The Auditing Challenge:

It's month end, and your audit team is drowning in spreadsheets. As auditors reviewing financial data across multiple SaaS tenants, you're tasked with verifying billing accuracy by tracking usage metrics like API calls, storage consumption, and user sessions in Postgres databases. Each tenant generates thousands of transactions daily, and traditionally this verification process consumes weeks of your team's valuable time. Typically, teams spend weeks:

- Manually extracting data from multiple database tables.
- Cross-referencing usage with invoices.
- Investigating anomalies through tedious log analysis.
- Compiling findings into comprehensive reports.

With an AI-powered audit agent, you can automate these tasks and transform the process. Your AI assistant can:

- Pull relevant usage data directly from your database.
- Identify billing anomalies like unexpected usage spikes.
- Generate natural language explanations of findings.
- Create audit reports that highlight key concerns.

For example, when reviewing a tenant's invoice, your audit agent can query the database for relevant usage patterns, summarize anomalies, and offer explanations: "Tenant_456 experienced a 145% increase in API usage on March 30th, which explains the billing increase. This spike falls outside normal usage patterns and warrants further investigation."

Let's build an AI agent that connects to your Postgres database and transforms your audit process from manual effort to automated intelligence.

Prerequisites:

Before we start building our audit agent, you'll need:

- An Azure subscription (create one for free).
- The Azure AI Developer RBAC role assigned to your account.
- Python 3.11.x installed on your development machine.

Alternatively, you can use GitHub Codespaces, which will automatically install all dependencies for you. You'll need to create a GitHub account first if you don't already have one.

Setting Up Your Database:

For this tutorial, we'll use Neon Serverless Postgres as our database. It's a fully managed, cloud-native Postgres solution that's free to start, scales automatically, and works well for AI agents that need to query data on demand.

Creating a Neon Database on Azure:

1. Open the Neon Resource page on the Azure portal.
2. Fill out the form with the required fields and deploy your database.
3. After creation, navigate to the Neon Serverless Postgres Organization service.
4. Click on the Portal URL to access the Neon Console.
5. Click "New Project".
6. Choose an Azure region.
7. Name your project (e.g., "Audit Agent Database").
8. Click "Create Project".

Once your project is successfully created, copy the Neon connection string from the Connection Details widget on the Neon Dashboard. It will look like this:

postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require

Note: Keep this connection string saved; we'll need it shortly.
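Before moving on, it can help to confirm the connection string actually works. The snippet below is a minimal sketch, not part of the tutorial repository; it uses SQLAlchemy (the same library the tutorial's scripts use) and assumes you have exported the string as a NEON_DB_CONNECTION_STRING environment variable:

```python
# check_connection.py (illustrative sketch; not part of the tutorial repo)
import os

from sqlalchemy import create_engine, text

# Assumes the Neon connection string is exported in your shell, e.g.:
#   export NEON_DB_CONNECTION_STRING="postgresql://user:password@host/db?sslmode=require"
engine = create_engine(os.environ["NEON_DB_CONNECTION_STRING"])

# Open a connection and run a trivial query to verify connectivity
with engine.connect() as conn:
    print(conn.execute(text("SELECT version();")).scalar())
```

If this prints a PostgreSQL version string, the connection string is valid and you can continue.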
Creating an AI Foundry Project on Azure:

Next, we'll set up the AI infrastructure to power our audit agent:

1. Create a new hub and project in the Azure AI Foundry portal by following the guide.
2. Deploy a model like GPT-4o to use with your agent.
3. Make note of your Project connection string and Model Deployment name. You can find your connection string in the overview section of your project in the Azure AI Foundry portal, under Project details > Project connection string.

Once you have all three values on hand (Neon connection string, Project connection string, and Model Deployment name), you are ready to set up the Python project to create an agent. All the code and sample data are available in this GitHub repository. You can clone or download the project.

Project Environment Setup:

Create a .env file with your credentials:

```
PROJECT_CONNECTION_STRING="<your AI Foundry connection string>"
AZURE_OPENAI_DEPLOYMENT_NAME="gpt4o"
NEON_DB_CONNECTION_STRING="<your Neon connection string>"
```

Create and activate a virtual environment:

```
python -m venv .venv
source .venv/bin/activate   # on macOS/Linux
.venv\Scripts\activate      # on Windows
```

Install the required Python libraries:

```
pip install -r requirements.txt
```

Example requirements.txt:

```
pandas
python-dotenv
sqlalchemy
psycopg2-binary
azure-ai-projects==1.0.0b7
azure-identity
```

Load Sample Billing Usage Data:

We will use a mock dataset of tenant usage, covering daily API calls and storage usage in GB:

| tenant_id  | date       | api_calls | storage_gb |
|------------|------------|-----------|------------|
| tenant_456 | 2025-04-01 | 1000      | 25.0       |
| tenant_456 | 2025-03-31 | 950       | 24.8       |
| tenant_456 | 2025-03-30 | 2200      | 26.0       |

Run the load_usage_data.py script to create and populate the usage_data table in your Neon Serverless Postgres instance:

```
python load_usage_data.py
```

```python
# load_usage_data.py
import os

from dotenv import load_dotenv
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Date,
    Integer,
    Numeric,
)

# Load environment variables from .env
load_dotenv()

# Load connection string from environment variable
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
engine = create_engine(NEON_DB_URL)

# Define metadata and table schema
metadata = MetaData()

usage_data = Table(
    "usage_data",
    metadata,
    Column("tenant_id", String, primary_key=True),
    Column("date", Date, primary_key=True),
    Column("api_calls", Integer),
    Column("storage_gb", Numeric),
)

# Create the table and insert mock data in one transaction
with engine.begin() as conn:
    metadata.create_all(conn)

    conn.execute(
        usage_data.insert(),
        [
            {"tenant_id": "tenant_456", "date": "2025-03-27", "api_calls": 870, "storage_gb": 23.9},
            {"tenant_id": "tenant_456", "date": "2025-03-28", "api_calls": 880, "storage_gb": 24.0},
            {"tenant_id": "tenant_456", "date": "2025-03-29", "api_calls": 900, "storage_gb": 24.5},
            {"tenant_id": "tenant_456", "date": "2025-03-30", "api_calls": 2200, "storage_gb": 26.0},
            {"tenant_id": "tenant_456", "date": "2025-03-31", "api_calls": 950, "storage_gb": 24.8},
            {"tenant_id": "tenant_456", "date": "2025-04-01", "api_calls": 1000, "storage_gb": 25.0},
        ],
    )

print("✅ usage_data table created and mock data inserted.")
```
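Optionally, you can confirm the rows landed in Neon before building the agent tool. This is an illustrative sketch, not part of the tutorial repository, reusing the same engine setup as load_usage_data.py:

```python
# verify_data.py (illustrative sketch; assumes the .env from the setup step)
import os

import pandas as pd
from dotenv import load_dotenv
from sqlalchemy import create_engine

load_dotenv()
engine = create_engine(os.getenv("NEON_DB_CONNECTION_STRING"))

# Read the freshly inserted rows back, newest first
df = pd.read_sql("SELECT * FROM usage_data ORDER BY date DESC;", engine)
print(df)  # expect six rows for tenant_456
```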
Create a Postgres Tool for the Agent:

Next, we configure an AI agent tool to retrieve data from Postgres. The Python script billing_agent_tools.py contains:

- The function billing_anomaly_summary(), which:
  - Pulls usage data from Neon.
  - Computes the percent change in api_calls.
  - Flags anomalies with a threshold of > 1.5x change.
- The user_functions list, exported for the Azure AI Agent to use.

You do not need to run it separately.

```python
# billing_agent_tools.py
import json
import os

import pandas as pd
from dotenv import load_dotenv
from sqlalchemy import create_engine

# Load environment variables
load_dotenv()

# Set up the database engine
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
db_engine = create_engine(NEON_DB_URL)


# Define the billing anomaly detection function
def billing_anomaly_summary(
    tenant_id: str,
    start_date: str = "2025-03-27",
    end_date: str = "2025-04-01",
    limit: int = 10,
) -> str:
    """
    Fetches recent usage data for a SaaS tenant and detects potential billing anomalies.

    :param tenant_id: The tenant ID to analyze.
    :type tenant_id: str
    :param start_date: Start date for the usage window.
    :type start_date: str
    :param end_date: End date for the usage window.
    :type end_date: str
    :param limit: Maximum number of records to return.
    :type limit: int
    :return: A JSON string with usage records and anomaly flags.
    :rtype: str
    """
    query = """
        SELECT date, api_calls, storage_gb
        FROM usage_data
        WHERE tenant_id = %s AND date BETWEEN %s AND %s
        ORDER BY date DESC
        LIMIT %s;
    """
    df = pd.read_sql(query, db_engine, params=(tenant_id, start_date, end_date, limit))

    if df.empty:
        return json.dumps(
            {"message": "No usage data found for this tenant in the specified range."}
        )

    df.sort_values("date", inplace=True)
    df["pct_change_api"] = df["api_calls"].pct_change()
    df["anomaly"] = df["pct_change_api"].abs() > 1.5

    return df.to_json(orient="records")


# Register this in a list to be used by FunctionTool
user_functions = [billing_anomaly_summary]
```
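You can sanity-check the tool before handing it to the agent by calling the function directly. The snippet below is an illustrative sketch, not part of the tutorial repository. One observation worth noting: with the mock data above, the March 30th spike works out to a pct_change_api of (2200 - 900) / 900 ≈ 1.44, which is just under the 1.5 threshold, so no row is flagged as an anomaly in this dataset; if you want the spike flagged, you could experiment with a lower threshold.

```python
# try_tool.py (illustrative sketch; assumes billing_agent_tools.py is in the same folder)
import json

from billing_agent_tools import billing_anomaly_summary

# Call the tool directly, exactly as the agent would
result = billing_anomaly_summary(tenant_id="tenant_456")

# Dates are serialized as epoch milliseconds by pandas to_json
for record in json.loads(result):
    print(record)
```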
Create and Configure the AI Agent:

Now we'll set up the AI agent and integrate it with our Neon Postgres tool using the Azure AI Agent Service SDK. The billing_anomaly_agent.py script does the following:

1. Creates the agent: instantiates an AI agent using the selected model (gpt-4o, for example), adds tool access, and sets instructions that tell the agent how to behave (e.g., "You are a helpful SaaS assistant...").
2. Creates a conversation thread: a thread is started to hold the conversation between the user and the agent.
3. Posts a user message: sends a question like "Why did my billing spike for tenant_456 this week?" to the agent.
4. Processes the request: the agent reads the message, determines that it should use the custom tool to retrieve usage data, and processes the query.
5. Displays the response: prints the agent's natural language explanation based on the tool's output.

```python
# billing_anomaly_agent.py
import os
from datetime import datetime
from pprint import pprint

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool, ToolSet
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv

from billing_agent_tools import user_functions  # Custom tool function module

# Load environment variables from .env file
load_dotenv()

# Create an Azure AI Project Client
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Initialize toolset with our user-defined functions
functions = FunctionTool(user_functions)
toolset = ToolSet()
toolset.add(functions)

# Create the agent
agent = project_client.agents.create_agent(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    name=f"billing-anomaly-agent-{datetime.now().strftime('%Y%m%d%H%M')}",
    description="Billing Anomaly Detection Agent",
    instructions=f"""
    You are a helpful SaaS financial assistant that retrieves and explains billing anomalies using usage data.
    The current date is {datetime.now().strftime("%Y-%m-%d")}.
    """,
    toolset=toolset,
)
print(f"Created agent, ID: {agent.id}")

# Create a communication thread
thread = project_client.agents.create_thread()
print(f"Created thread, ID: {thread.id}")

# Post a message to the agent thread
message = project_client.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="Why did my billing spike for tenant_456 this week?",
)
print(f"Created message, ID: {message.id}")

# Run the agent and process the query
run = project_client.agents.create_and_process_run(
    thread_id=thread.id, agent_id=agent.id
)
print(f"Run finished with status: {run.status}")

if run.status == "failed":
    print(f"Run failed: {run.last_error}")

# Fetch and display the messages
messages = project_client.agents.list_messages(thread_id=thread.id)
print("Messages:")
pprint(messages["data"][0]["content"][0]["text"]["value"])

# Optional cleanup:
# project_client.agents.delete_agent(agent.id)
# print("Deleted agent")
```

Run the agent:

To run the agent, run the following command:

```
python billing_anomaly_agent.py
```

Snippet of output from the agent: (the original post includes a screenshot of the run output here.)

Using the Azure AI Foundry Agent Playground:

After running your agent using the Azure AI Agent SDK, it is saved within your Azure AI Foundry project. You can now experiment with it using the Agent Playground. To try it out:

1. Go to the Agents section in your Azure AI Foundry workspace.
2. Find your billing anomaly agent in the list and click to open it.
3. Use the playground interface to test different financial or billing-related questions, such as:
   - "Did tenant_456 exceed their API usage quota this month?"
   - "Explain recent storage usage changes for tenant_456."

This is a great way to validate your agent's behavior without writing more code.

Summary:

You've now created a working AI agent that talks to your Postgres database, all using:

- A simple Python function
- Azure AI Agent Service
- A Neon Serverless Postgres backend

This approach is beginner-friendly, lightweight, and practical for real-world use.

Want to go further? You can:

- Add more tools to the agent (see the sketch below).
- Integrate with vector search (e.g., detect anomaly reasons from logs using embeddings).
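As a sketch of the first idea, here is one way an additional tool could look. The storage_trend_summary() function below is hypothetical, invented for illustration; it follows the same pattern as billing_anomaly_summary() and would live in billing_agent_tools.py, where pd, json, and db_engine already exist. Appending it to user_functions is all the registration the agent needs:

```python
# Hypothetical second tool for billing_agent_tools.py (illustrative only)
def storage_trend_summary(tenant_id: str, limit: int = 10) -> str:
    """
    Returns recent storage usage with day-over-day percent change for a tenant.

    :param tenant_id: The tenant ID to analyze.
    :param limit: Maximum number of records to return.
    :return: A JSON string with storage records and percent changes.
    """
    query = """
        SELECT date, storage_gb
        FROM usage_data
        WHERE tenant_id = %s
        ORDER BY date DESC
        LIMIT %s;
    """
    df = pd.read_sql(query, db_engine, params=(tenant_id, limit))
    if df.empty:
        return json.dumps({"message": "No storage data found for this tenant."})

    df.sort_values("date", inplace=True)
    # Numeric columns may arrive as Decimal; cast to float before pct_change
    df["storage_gb"] = df["storage_gb"].astype(float)
    df["pct_change_storage"] = df["storage_gb"].pct_change()
    return df.to_json(orient="records")


# Register both tools so the agent can choose between them
user_functions = [billing_anomaly_summary, storage_trend_summary]
```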
Resources:

- Introduction to Azure AI Agent Service
- Develop an AI agent with Azure AI Agent Service
- Getting Started with Azure AI Agent Service
- Neon on Azure
- Build AI Agents with Azure AI Agent Service and Neon
- Multi-Agent AI Solution with Neon, Langchain, AutoGen and Azure OpenAI
- Azure AI Foundry GitHub Discussions

That's it, folks! But the best part? You can become part of a thriving community of learners and builders by joining the Microsoft Learn Student Ambassadors Community. Connect with like-minded individuals, explore hands-on projects, and stay updated with the latest in cloud and AI. 💬 Join the community on Discord here and explore more benefits on the Microsoft Learn Student Hub.

AI Agents: Mastering Agentic RAG - Part 5
This blog post, Part 5 of a series on AI agents, explores Agentic RAG (Retrieval-Augmented Generation), a paradigm shift in how LLMs interact with external data. Unlike traditional RAG, Agentic RAG allows LLMs to autonomously plan their information retrieval process through an iterative loop of actions and evaluations. The post highlights the importance of the LLM "owning" the reasoning process, dynamically selecting tools and refining queries. It covers key implementation details, including iterative loops, tool integration, memory management, and handling failure modes. Practical use cases, governance considerations, and code examples demonstrating Agentic RAG with AutoGen, Semantic Kernel, and Azure AI Agent Service are provided. The post concludes by emphasizing the transformative potential of Agentic RAG and encourages further exploration through linked resources and previous blog posts in the series.
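The iterative loop the post describes can be summarized in a few lines. The sketch below is schematic, not code from the post: llm, search, and is_sufficient are hypothetical stand-ins for a chat model, a retriever, and an LLM-based self-evaluation step.

```python
# Schematic Agentic RAG loop (illustrative only; helpers are hypothetical callables)
def agentic_rag(question: str, llm, search, is_sufficient, max_iterations: int = 5) -> str:
    """The LLM owns the loop: it retrieves, evaluates the evidence, and refines."""
    evidence: list[str] = []
    query = question
    for _ in range(max_iterations):
        # Act: retrieve documents for the current query
        evidence.extend(search(query))

        # Evaluate: ask the model whether the evidence answers the question
        if is_sufficient(question, evidence):
            break

        # Refine: let the model rewrite the query (or pick a different tool)
        query = llm(f"The evidence so far was insufficient. Rewrite this query: {query}")

    # Generate the final grounded answer
    return llm(f"Answer {question!r} using only this evidence: {evidence}")
```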
The brand new Azure AI Agent Service at your fingertips

Intro:

Azure AI Agent Service is a game-changer for developers. This fully managed service empowers you to build, deploy, and scale high-quality, extensible AI agents securely, without the hassle of managing underlying infrastructure. What used to take hundreds of lines of code can now be achieved in just a few lines! So here it is: a web application that streamlines document uploads, summarizes content using AI, and provides seamless access to stored summaries. This article delves into the architecture and implementation of this solution, drawing inspiration from our previous explorations with Azure AI Foundry and secure AI integrations.

Architecture Overview:

Our Azure AI Agent Service WebApp integrates several Azure services to create a cohesive and scalable system:

- Azure AI Projects & Azure AI Agent Service: powers the AI-driven summarization and title generation of uploaded documents.
- Azure Blob Storage: stores the original and processed documents securely.
- Azure Cosmos DB: maintains metadata and summaries for quick retrieval and display.
- Azure API Management (APIM): manages and secures API endpoints, ensuring controlled access to backend services.

This architecture ensures a seamless flow from document upload to AI processing and storage, providing users with immediate access to summarized content.

Azure AI Agent Service – Frontend Implementation:

The frontend of the Azure AI Agent Service WebApp is built using Vite and React, offering a responsive and user-friendly interface. Key features include:

- Real-time AI chat interface: users can interact with an AI agent for various queries.
- Document upload functionality: supports uploading documents in various formats, which are then processed by the backend AI services.
- Document repository: displays a list of uploaded documents with their summaries and download links.

This is the main UI, ChatApp.jsx. We can interact with the Chat Agent for regular chat, while the keyword "upload:" activates the hidden upload menu.

Azure AI Agent Service – Backend Services:

The backend is developed using Express.js, orchestrating various services to handle:

- File uploads: accepts documents from the frontend and stores them in Azure Blob Storage.
- AI processing: utilizes Azure AI Projects to extract text, generate summaries, and create concise titles.
- Metadata storage: saves document metadata and summaries in Azure Cosmos DB for efficient retrieval.

One of the challenges was to avoid recreating the agents each time our backend reloads. So the code is carefully organized into several modules for the Azure AI Agent Service interaction and agent creation.
The initialization, for example, is handled by a single module:

```javascript
// AI initialization module: creates the AI Projects client and agents once at startup
const { DefaultAzureCredential } = require('@azure/identity');
const { SecretClient } = require('@azure/keyvault-secrets');
const { AIProjectsClient, ToolUtility } = require('@azure/ai-projects');
require('dotenv').config();

// Keep track of global instances
let aiProjectsClient = null;
let agents = {
  chatAgent: null,
  extractAgent: null,
  summarizeAgent: null,
  titleAgent: null
};

async function initializeAI(app) {
  try {
    // Setup Azure Key Vault
    const keyVaultName = process.env.KEYVAULT_NAME;
    const keyVaultUrl = `https://${keyVaultName}.vault.azure.net`;
    const credential = new DefaultAzureCredential();
    const secretClient = new SecretClient(keyVaultUrl, credential);

    // Get AI connection string
    const secret = await secretClient.getSecret('AIConnectionString');
    const AI_CONNECTION_STRING = secret.value;

    // Initialize AI Projects Client
    aiProjectsClient = AIProjectsClient.fromConnectionString(
      AI_CONNECTION_STRING,
      credential
    );

    // Create code interpreter tool (shared among agents)
    const codeInterpreterTool = ToolUtility.createCodeInterpreterTool();
    const tools = [codeInterpreterTool.definition];
    const toolResources = codeInterpreterTool.resources;

    console.log('🚀 Creating AI Agents...');

    // Create chat agent
    agents.chatAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "chat-agent",
      instructions: "You are a helpful AI assistant that provides clear and concise responses.",
      tools,
      toolResources
    });
    console.log('✅ Chat Agent created');

    // Create extraction agent
    agents.extractAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "extract-agent",
      instructions: "Process and clean text content while maintaining structure and important information.",
      tools,
      toolResources
    });
    console.log('✅ Extract Agent created');

    // Create summarization agent
    agents.summarizeAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "summarize-agent",
      instructions: "Create concise summaries that capture main points and key details.",
      tools,
      toolResources
    });
    console.log('✅ Summarize Agent created');

    // Create title agent
    agents.titleAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "title-agent",
      instructions: `You are a specialized title generation assistant.
      Your task is to create titles for documents following these rules:
      1. Generate ONLY the title text, no additional explanations
      2. Maximum length of 50 characters
      3. Focus on the main topic or theme
      4. Use proper capitalization (Title Case)
      5. Avoid special characters and quotes
      6. Make titles clear and descriptive
      7. Respond with nothing but the title itself

      Example good responses:
      Digital Transformation Strategy 2025
      Market Analysis: Premium Chai Tea
      Cloud Computing Implementation Guide

      Example bad responses:
      "Here's a title for your document: Digital Strategy" (no explanations needed)
      This document appears to be about digital transformation (just the title needed)
      The title is: Market Analysis (no extra text)`,
      tools,
      toolResources
    });
    console.log('✅ Title Agent created');

    // Store in app.locals so routes can reuse the shared instances
    app.locals.aiProjectsClient = aiProjectsClient;
    app.locals.agents = agents;

    console.log('✅ All AI Agents initialized successfully');
    return { aiProjectsClient, agents };
  } catch (error) {
    console.error('❌ Error initializing AI:', error);
    throw error;
  }
}

// Export both the initialization function and the shared instances
module.exports = {
  initializeAI,
  getClient: () => aiProjectsClient,
  getAgents: () => agents
};
```

Our backend utilizes four agents, and once the backend deploys, we will find the corresponding Azure AI Agent Service agents in the portal. At the same time, each interaction is stored and managed as a thread, and that's how we interact with the Azure AI Agent Service.

Deployment and Security of the Azure AI Agent Service WebApp:

Ensuring secure and efficient deployment is crucial. We've employed:

- Azure API Management (APIM): secures API endpoints, providing controlled access and monitoring capabilities.
- Azure Key Vault: manages sensitive information such as API keys and connection strings, ensuring data protection.

Every call to the backend service is protected with Azure API Management (Basic tier). We expose only the required endpoints, each pointing to the matching endpoint of our Azure AI Agent Service WebApp backend. We also store the AIConnectionString variable in Key Vault, and the remaining variables can be moved to Key Vault as well, which I recommend!

Get started with Azure AI Agent Service:

To get started with Azure AI Agent Service, you need to create an Azure AI Foundry hub and an Agent project in your Azure subscription. Start with the quickstart guide if it's your first time using the service. You can create an AI hub and project with the required resources. After you create a project, you can deploy a compatible model such as GPT-4o. When you have a deployed model, you can also start making API calls to the service using the SDKs.

There are already two quickstarts available to get your Azure AI Agent Service up and running: Basic and Standard. I chose the second, the Standard plan, since we have a WebApp and the whole architecture comes in very handy! We just added the Cosmos DB interaction and API Management to extend it to an enterprise setup. Our own Azure AI Agent Service deployment allows us to interact with the agents and utilize tools and functions very easily.

Conclusion:

By harnessing the power of Azure's cloud services, we've developed a scalable and efficient web application that simplifies document management through AI-driven processing. This solution not only enhances productivity but also ensures secure and organized access to essential information.

References:

- Azure AI Agent Service Documentation
- What is Azure AI Agent Service
- Azure AI Agent Service Quickstarts
- Azure API Management
- Azure AI Foundry
- Azure AI Foundry Inference Demo