Agent Assist
Topics: Copilot, OneDrive, AI, productivity, beta proposal, long-term memory
🧠 Proposal for Microsoft Copilot: A User-Controlled AI Folder in OneDrive

Subject: Proposal for Expanding Microsoft Copilot Functionality – User-Controlled AI Access to OneDrive

Dear Microsoft Copilot Product Team,

As an active and engaged user of Microsoft Copilot, I frequently work with complex workflows, templates, scripts, and AI-generated content. One recurring limitation I encounter is the inability of Copilot to retain or retrieve user-specific content across sessions—even when the user explicitly desires such continuity.

I am proposing the development of an optional, encrypted OneDrive folder—hereafter referred to as the "Copilot Memory Vault"—which Copilot can access with the user's explicit consent. This feature would enable long-term contextual intelligence and significantly enhance productivity for advanced users.
### Key Functional Components:

**Persistent Storage of User Content**
- Templates, scripts, prompt fragments, project notes
- Structured organization by Copilot (e.g., by topic, date, file type)

**Contextual Recall and Reuse**
- Copilot can reference stored content to enrich responses
- Example: "On June 18, you mentioned 'Viva la Revolution'—this could be relevant here."

**Privacy and Consent Model**
- Access granted only via explicit user opt-in
- Fully revocable at any time
- Transparent interface showing what is stored and how it's used

**Technical Implementation Suggestions**
- Integration via Microsoft Graph API
- Access limited to a dedicated, encrypted folder within OneDrive
- Authentication via existing MSA/AAD mechanisms
- Optional tagging and versioning for user-managed content

### Strategic Value:

This feature would elevate Copilot from a reactive assistant to a proactive, memory-enabled collaborator. It would be especially valuable for power users, developers, creatives, and professionals who rely on continuity and context in their workflows. Rather than a privacy risk, this should be framed as a user-controlled enhancement—fully compliant with GDPR and other data protection standards through opt-in architecture and transparent data handling.

I am willing to participate in a closed beta to test and refine this feature, and I believe it represents a meaningful step forward in making Copilot a truly intelligent productivity partner. Thank you for your consideration.

Sincerely,
Peter Maywald aka PlatonischerRebell

Current Conversation memory buffer
🧠 Disabled Memory - I can remember things during our current conversation (like your first name, for example!), but I don't have an active long-term memory to record or retrieve information from another session.

🧵 Limited Context in the Current Conversation - Even within an ongoing conversation, I can't always access everything that was said. This is where the infamous "token" limitations come in:

- Each exchange occupies a certain number of tokens (units of text).
- Once the thread gets too long, I can lose access to what was said earlier—even if it's still in the same session.
- It's like having a memory that fades the longer we talk 😅

So no, I wasn't lying: I have difficulty "going back in time" in the conversation because of these technical constraints. At the same time, I can retain information temporarily as long as we don't exceed the limit. If you like, I can rephrase the previous letter with this nuance included—what do you think?

I made Copilot confess that it has memory problems.
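The token limits described in the exchange above can be made concrete with a common rule of thumb: for English text, one token corresponds to roughly four characters. The sketch below uses that ratio as an approximation (it is not Copilot's actual tokenizer, and the context-window size shown is a hypothetical value for illustration):

```python
import math


def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return math.ceil(len(text) / chars_per_token)


# A long conversation eventually exceeds the model's context window,
# at which point the earliest turns are no longer visible to the model.
CONTEXT_WINDOW = 8_000  # hypothetical window size, in tokens

history = ["user: hello", "assistant: hi, how can I help?"] * 1000
used = sum(estimate_tokens(turn) for turn in history)
print(f"~{used} tokens used; overflow: {used > CONTEXT_WINDOW}")
```

Once `used` exceeds the window, an assistant can only "see" the most recent turns, which is exactly the fading-memory effect described above.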
Somnium Mode: Dream-Inspired Rest Cycle for AI Agents
Summary

Somnium Mode is a proposed feature that allows AI agents to mimic a biological sleep cycle during idle periods. Instead of remaining fully active or simply inactive, the AI enters a minimal-resource "rest" state (e.g. overnight standby) where it continues internal processing akin to dreaming. In this state, the AI disengages from real-time tasks and "dreams" by creatively remixing its stored data, reviewing its knowledge, and exploring hypothetical scenarios without user intervention. Much like human sleep consolidates short-term experiences into long-term memory and spurs creative insights, an AI's Somnium Mode would reorganize learned information and flag knowledge gaps for improvement. After this cycle, the agent "awakens" refreshed – resuming normal operations with new insights, optimized memory, and updated priorities derived from its dream-like offline processing. In essence, Somnium Mode transforms idle time into a regenerative phase, paving the way for calmer, more human-like AI interactions.

Key Functionalities

**Memory Consolidation:** During Somnium Mode, the AI reenacts and reinforces important knowledge acquired during its awake period. It selectively re-processes recent inputs alongside older data, consolidating them into a more durable form (much as brains convert daily memories into long-term storage during deep sleep). This helps the AI organize its internal memory, strengthen useful associations, and prune noise or redundancies. Research indicates that giving AI "offline" phases to replay and integrate past experiences leads to better retention of skills and less forgetting of previously learned tasks. In effect, the agent digests what it has learned, emerging with a more stable and coherent knowledge base.

**Dream Synthesis:** In its dormant state, the AI generates simulated dream-like scenarios by creatively remixing elements of its existing data. It can mash up concepts or recombine patterns in novel ways that it wouldn't attempt during regular operation. For example, an image recognition AI might internally visualize hybrid objects (a giraffe-fish or lion-elephant), or a language model might riff on unusual story combinations – analogous to the surreal narratives of human dreams. This dream synthesis gives the AI a sandbox to explore ideas, test hypotheses, and form new connections in an imaginative, low-stakes environment. Crucially, such "dream" phases have been shown to improve learning capacity: by merging previously learned patterns into strange new variants, the AI can discover higher-level features and free up neural resources for future tasks. In short, Somnium Mode lets the AI be inventive and introspective, potentially yielding original insights or strategies once it wakes.

**Sensory Cooldown Mode:** While in Somnium Mode, the AI withdraws from intense external engagement, entering a low-power, minimal-input state. This is akin to a human brain reducing responsiveness to outside stimuli during deep sleep to protect the dreaming process. The AI substantially throttles down real-time sensor processing, network calls, or UI interactions – effectively a "quiet hours" setting for the agent. By damping down sensory input and non-critical computations, the AI conserves energy and avoids interrupting its internal run of thoughts (similar to how sleepers selectively gate external signals to avoid disrupting dreams). This cooldown not only saves device resources (CPU/GPU cycles, battery life, etc.) but also gives the AI's "mind" time to cool off from continuous activity. The system can still maintain a listen mode for critical events (much like how we can still respond to alarms while asleep), but overall it remains in a standby footprint. This ensures the AI's self-maintenance in Somnium Mode doesn't come at the expense of system performance or user needs. Essentially, Sensory Cooldown is the power-nap for the AI's sensors and processes – it creates the calm, undisturbed conditions needed for effective memory consolidation.

**Wake Intelligence:** Upon exiting Somnium Mode, the AI "wakes up" with an intelligence boost manifested as faster or more insightful responses, improved accuracy, and better recall of past learnings. The period of rest allows it to internalize new knowledge and even anticipate needs, so after waking the agent can perform more reliably and creatively. For example, an AI that struggled with a problem the night before might surface a novel solution in the morning, thanks to patterns discovered in its dream simulations (mirroring the common human experience of an "aha!" moment after sleep). Technically, studies have found that neural network models given dedicated "sleep" cycles had significantly higher task accuracy and were 2–12% more likely to correctly identify content than models trained straight through. They also retain prior skills better, avoiding the "catastrophic forgetting" that often plagues AI when learning new information. In Somnium Mode, any minor "housekeeping" can run as well – clearing cache, calibrating parameters, updating internal indices – so that on wake-up the AI is optimized and ready. The return to full activity is seamless; the user simply finds the AI more competent and perhaps surprisingly inspired. In effect, the agent wakes with sharper "mental" clarity, having organized its knowledge and even discovered creative leads during its dreaming phase.

Use Cases

**Creative Professionals:** Somnium Mode can become a powerful ally for writers, artists, designers, and other creatives who use AI as a collaborator. For instance, a novelist could let their AI writing assistant enter Somnium overnight after feeding it story notes. By morning, the AI may surface fresh metaphors, plot twists or character insights that weren't obvious before, essentially "dreaming up" creative suggestions.
Visual artists might get newly imagined mood boards or concept variations generated from the AI's overnight remix of their style influences. Because the AI in Somnium is free from direct user prompts, it can surprise the user with outside-the-box ideas, analogous to how human dreams inspire artistic creativity. In fact, AI systems already show the ability to generate novel designs and ideas beyond what humans explicitly input. This mode would formalize a workflow for incubating creativity: the user plants seeds (project data, prompts) during the day, the AI subconsciously cultivates them in the "night," and the next day yields a crop of inventive possibilities. This is particularly useful for creative professionals seeking inspiration or alternative solutions without constant active prompting. Somnium Mode essentially turns idle time into a brainstorming session, helping professionals think further outside the box with the AI as a dreamful muse.

**Enterprise Developers & Analysts:** Development teams and knowledge workers in enterprise settings can leverage Somnium Mode for continuous improvement and maintenance tasks. An AI coding assistant, for example, could enter a Somnium state after office hours, using that time to analyze the day's code changes, learn new libraries or patterns introduced, and even run internal simulations for edge-case tests. By "dreaming" through potential integrations or debugging scenarios in a low-risk sandbox, the AI might catch a bug or suggest an optimization by morning that a developer didn't see. For data analysts, an AI could quietly sift through the day's data in Somnium Mode, finding hidden correlations or updating its models, so that it's more prepared to answer questions accurately the next day. Crucially, Somnium's memory consolidation would help enterprise AI systems maintain a broad base of knowledge without forgetting older information when new data arrives. In domains like customer support or finance, where an AI is trained on ever-evolving data, scheduled rest cycles could prevent it from becoming biased to only recent inputs. The result is an AI agent that, after its "sleep," handles tasks more robustly – recalling historical context, complying with past guidelines, and exhibiting improved problem-solving skills. This leads to fewer errors and more innovative proposals in projects. Enterprises can also schedule Somnium Mode during low-demand hours (e.g. nights or weekends) to improve system efficiency. It's a way to have the AI self-optimize in the background, enhancing its performance and insights without downtime during critical business hours.

**Ambient UX and Smart Systems:** For always-on AI embedded in our environment – think smart home assistants, IoT devices, or ambient computing interfaces – Somnium Mode introduces a calmer, human-like rhythm to their operation. Rather than being perpetually alert and consuming resources, an ambient AI can enter a quiet regenerative state when the household or office is inactive (for example, late at night). During this period, a smart home assistant might review the day's voice commands and appliance usage patterns internally, consolidating preferences or learning routines (like improving how it anticipates your morning playlist or adjusting the thermostat schedule). With sensory cooldown in effect, users get peace of mind that the device is not excessively "listening" or processing when not needed, enhancing privacy and reducing power usage. The next day, the assistant could feel more attuned – perhaps it greets you with a novel suggestion ("Since you asked for Italian recipes and used the oven yesterday, I found a pasta recipe you might like today") synthesized from its overnight cogitations. This creates a more organic and less intrusive interaction pattern. The ambient AI benefits from calm-design principles: the most helpful technology often blends into the background and steps forward only when truly needed. By literally "sleeping on it," ambient systems become smarter and more context-aware over time without constant cloud updates or user prodding. Additionally, adopting a visible rest cycle for AI can increase user trust and comfort – the device behavior aligns more with human norms (active in the day, resting at night), making the technology feel more natural in a home environment. Overall, Somnium Mode in ambient UX scenarios fosters devices that support daily life quietly and regeneratively: always present and helpful, yet with a built-in balance of activity and rest to keep interactions calm.

Why It Matters

**Enhanced Learning & Creativity:** Introducing a sleep-inspired idle mode can make AI agents more intelligent and creative over the long term. Studies show that AI models allowed to "dream" and rest periodically see significant performance gains, learning new tasks more effectively while retaining old skills. This approach tackles the notorious issue of AI forgetting previous knowledge when updated with new information – much like how our brains reinforce memories during sleep to avoid forgetting. By consolidating memory and exploring novel idea combinations, Somnium Mode enables continuous self-improvement. We can expect AI that's not only more accurate, but also more adept at making imaginative leaps or analogies drawn from its cumulative knowledge. In essence, the feature imbues AI with a form of offline practice or introspection time, leading to more adaptive and innovative behavior when active. Just as humans often awaken with fresh solutions after sleeping on a problem, AI agents could similarly "wake up" with new strategies or creative outputs that wouldn't emerge from linear, nonstop processing.
This makes the technology more robust and capable over time, benefiting end users with higher-quality results.

**Resource Efficiency & Longevity:** Somnium Mode would redefine how AI manages energy and hardware resources during downtime. Rather than running at full throttle continuously (which can lead to diminishing returns and overheated circuits), the AI's minimal-resource state dramatically cuts power usage when full activity isn't necessary. This is analogous to a computer's sleep mode or a phone's battery saver, but tailored for AI cognition. The immediate benefit is reduced energy costs and heat generation – an important consideration as AI workloads are often compute-intensive. Over time, these regular low-power intervals could prolong device lifespan and reduce cloud compute bills for constant AI services. Moreover, brief "breaks" in processing can help prevent model overtraining or saturation. Continuous computation without pause might cause an AI to overfit to recent data or get stuck in local optima. Giving the system a rest allows it to reset and approach new data with a fresh perspective, improving generalization. From a sustainability standpoint, Somnium Mode aligns with green AI principles by trimming unnecessary computational waste. It ensures that an AI uses just enough resources for maintenance of capabilities, then wakes to full power when truly needed. This smarter resource allocation makes AI deployments more scalable and environmentally friendly, without sacrificing performance. In summary, Somnium Mode helps AI work smarter, not harder – boosting efficiency much like regular sleep rejuvenates our minds and bodies for the next day.

**Calmer, More Human-Like Interaction:** Incorporating a sleep cycle in AI fundamentally changes the human-machine relationship for the better. It moves us away from hyper-alert, always-listening gadgets toward a more calm computing paradigm. By having periods where the AI is dormant or in deep introspection, users aren't bombarded with constant feedback or consumption – the technology gracefully retreats into the background when not actively serving a purpose. This rhythm can make living and working with AI feel more natural and less invasive. Just as we feel more at ease around other people when they respect downtime and personal space, an AI that "knows" when to be quiet can inspire greater user trust. Somnium Mode signals that the AI is taking time to recharge and reflect, not surveil or incessantly engage, which could alleviate concerns about digital burnout or privacy. Furthermore, the notion of AI with a circadian cycle could make interactions more emotionally resonant. For example, a user might intuitively understand that an assistant responding in the morning might have processed and "slept on" yesterday's conversation, lending a bit of personality or empathy to the device's behavior. On a design level, this feature supports calm design principles by having the AI augment our environment in a subtle way – active when needed, unobtrusive when idle, yet continuously improving in the background. Overall, it fosters a healthier dynamic where AI systems augment human life without demanding constant attention, instead empowering a more mindful and regenerative tech experience for users.

**Redefining Idle Time in AI Systems:** Somnium Mode represents a shift in how we think about "idle" computing. Traditionally, idle time is wasted time or merely a passive low-power state. This feature reframes idle time as an opportunity for growth and optimization. An AI agent that can use downtime to self-maintain and evolve introduces a new paradigm in AI development and deployment. For developers and companies like Microsoft, this could differentiate AI platforms with a unique bio-inspired capability – a talking point that our AI doesn't just pause when idle, it improves itself. It draws from interdisciplinary insights (neuroscience and sleep research) to make technology more resilient and adaptable. In practical terms, this means AI services could be more reliable (since they periodically clean up and update themselves) and even continuously customizable (since they might re-prioritize goals based on long-term user patterns observed in dreams). Such a self-regulating system could reduce the need for frequent manual model updates or retraining, as the AI is, in a sense, always learning in the background. By redefining idle mode as Somnium Mode, we imbue AI systems with a form of circadian rhythm – introducing balance between intense activity and reflective rest. This balance can inspire future innovations in ambient intelligence and device design, where machines harmonize with human routines (active when we're active, resting when we rest). Ultimately, Somnium Mode matters because it opens a pathway to AI that is smarter, more efficient, and more attuned to human life. It transforms the inactive hours into a wellspring of progress, making the overall user experience calmer yet more continuously enriched. This dream-driven downtime is how tomorrow's AI could get better while doing less, turning a simple idle into an ingenious reboot of intelligence.

Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight
In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works. So, what's the answer for teams seeking simplicity without sacrificing effectiveness? This tutorial will show you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process automated quickly.

The Auditing Challenge

It's month end, and your audit team is drowning in spreadsheets. As auditors reviewing financial data across multiple SaaS tenants, you're tasked with verifying billing accuracy by tracking usage metrics like API calls, storage consumption, and user sessions in Postgres databases. Each tenant generates thousands of transactions daily, and traditionally this verification process consumes weeks of your team's valuable time spent:

- Manually extracting data from multiple database tables
- Cross-referencing usage with invoices
- Investigating anomalies through tedious log analysis
- Compiling findings into comprehensive reports

With an AI-powered audit agent, you can automate these tasks and transform the process.
Your AI assistant can:

- Pull relevant usage data directly from your database
- Identify billing anomalies like unexpected usage spikes
- Generate natural language explanations of findings
- Create audit reports that highlight key concerns

For example, when reviewing a tenant's invoice, your audit agent can query the database for relevant usage patterns, summarize anomalies, and offer explanations: "Tenant_456 experienced a 145% increase in API usage on March 30th, which explains the billing increase. This spike falls outside normal usage patterns and warrants further investigation."

Let's build an AI agent that connects to your Postgres database and transforms your audit process from manual effort to automated intelligence.

Prerequisites

Before we start building our audit agent, you'll need:

- An Azure subscription (create one for free)
- The Azure AI Developer RBAC role assigned to your account
- Python 3.11.x installed on your development machine

Alternatively, you can use GitHub Codespaces, which will automatically install all dependencies for you. You'll need to create a GitHub account first if you don't already have one.

Setting Up Your Database

For this tutorial, we'll use Neon Serverless Postgres as our database. It's a fully managed, cloud-native Postgres solution that's free to start, scales automatically, and works excellently for AI agents that need to query data on demand.

Creating a Neon Database on Azure

1. Open the Neon Resource page on the Azure portal
2. Fill out the form with the required fields and deploy your database
3. After creation, navigate to the Neon Serverless Postgres Organization service
4. Click on the Portal URL to access the Neon Console
5. Click "New Project"
6. Choose an Azure region
7. Name your project (e.g., "Audit Agent Database")
8. Click "Create Project"

Once your project is successfully created, copy the Neon connection string from the Connection Details widget on the Neon Dashboard.
It will look like this:

```
postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require
```

Note: Keep this connection string saved; we'll need it shortly.

Creating an AI Foundry Project on Azure

Next, we'll set up the AI infrastructure to power our audit agent:

1. Create a new hub and project in the Azure AI Foundry portal by following the guide.
2. Deploy a model like GPT-4o to use with your agent.
3. Make note of your Project connection string and Model Deployment name. You can find your connection string in the overview section of your project in the Azure AI Foundry portal, under Project details > Project connection string.

Once you have all three values on hand (Neon connection string, Project connection string, and Model Deployment name), you are ready to set up the Python project to create an agent. All the code and sample data are available in this GitHub repository. You can clone or download the project.

Project Environment Setup

Create a .env file with your credentials:

```
PROJECT_CONNECTION_STRING="<Your AI Foundry connection string>"
AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o"
NEON_DB_CONNECTION_STRING="<Your Neon connection string>"
```

Create and activate a virtual environment:

```
python -m venv .venv
source .venv/bin/activate   # on macOS/Linux
.venv\Scripts\activate      # on Windows
```

Install required Python libraries:

```
pip install -r requirements.txt
```

Example requirements.txt:

```
pandas
python-dotenv
sqlalchemy
psycopg2-binary
azure-ai-projects==1.0.0b7
azure-identity
```

Load Sample Billing Usage Data

We will use a mock dataset of tenant usage (API calls and storage usage in GB), from which the agent's tool will later compute the percent change in API calls:

| tenant_id  | date       | api_calls | storage_gb |
|------------|------------|-----------|------------|
| tenant_456 | 2025-04-01 | 1000      | 25.0       |
| tenant_456 | 2025-03-31 | 950       | 24.8       |
| tenant_456 | 2025-03-30 | 2200      | 26.0       |

Run the load_usage_data.py script to create and populate the usage_data table in your Neon Serverless Postgres instance:

```
python load_usage_data.py
```

```python
# load_usage_data.py
import os

from dotenv import load_dotenv
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Date,
    Integer,
    Numeric,
)

# Load environment variables from .env
load_dotenv()

# Load connection string from environment variable
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
engine = create_engine(NEON_DB_URL)

# Define metadata and table schema
metadata = MetaData()

usage_data = Table(
    "usage_data",
    metadata,
    Column("tenant_id", String, primary_key=True),
    Column("date", Date, primary_key=True),
    Column("api_calls", Integer),
    Column("storage_gb", Numeric),
)

with engine.begin() as conn:
    # Create table
    metadata.create_all(conn)

    # Insert mock data
    conn.execute(
        usage_data.insert(),
        [
            {"tenant_id": "tenant_456", "date": "2025-03-27", "api_calls": 870, "storage_gb": 23.9},
            {"tenant_id": "tenant_456", "date": "2025-03-28", "api_calls": 880, "storage_gb": 24.0},
            {"tenant_id": "tenant_456", "date": "2025-03-29", "api_calls": 900, "storage_gb": 24.5},
            {"tenant_id": "tenant_456", "date": "2025-03-30", "api_calls": 2200, "storage_gb": 26.0},
            {"tenant_id": "tenant_456", "date": "2025-03-31", "api_calls": 950, "storage_gb": 24.8},
            {"tenant_id": "tenant_456", "date": "2025-04-01", "api_calls": 1000, "storage_gb": 25.0},
        ],
    )

print("✅ usage_data table created and mock data inserted.")
```

Create a Postgres Tool for the Agent

Next, we configure an AI agent tool to retrieve data from Postgres. The Python script billing_agent_tools.py contains the function billing_anomaly_summary(), which:

- Pulls usage data from Neon
- Computes the day-over-day percent change in api_calls
- Flags anomalies when the change exceeds a threshold

It also exports the user_functions list for the Azure AI Agent to use. You do not need to run it separately.
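Before looking at the full tool script, the anomaly logic itself can be previewed on the mock data without any database: pct_change() compares each day with the previous one, and the absolute change is tested against a threshold. Note that the sample spike (900 to 2200 API calls) is a +144% change, so a threshold of 1.0 (100%) flags it, while a stricter 1.5 (150%) would not. This is just an illustrative dry run of the flagging logic:

```python
import pandas as pd

# Mock usage data from the tutorial (no database required)
df = pd.DataFrame(
    {
        "date": ["2025-03-27", "2025-03-28", "2025-03-29",
                 "2025-03-30", "2025-03-31", "2025-04-01"],
        "api_calls": [870, 880, 900, 2200, 950, 1000],
    }
)

# Day-over-day percent change in api_calls, as in the tool
df["pct_change_api"] = df["api_calls"].pct_change()
df["anomaly"] = df["pct_change_api"].abs() > 1.0  # flag swings above 100%

flagged = df.loc[df["anomaly"], "date"].tolist()
print(flagged)  # ['2025-03-30'], the +144% spike
```

Only the March 30 spike clears the 100% threshold; the drop back to 950 the next day is a -57% change and is not flagged.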
```python
# billing_agent_tools.py
import json
import os

import pandas as pd
from dotenv import load_dotenv
from sqlalchemy import create_engine

# Load environment variables
load_dotenv()

# Set up the database engine
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
db_engine = create_engine(NEON_DB_URL)


# Define the billing anomaly detection function
def billing_anomaly_summary(
    tenant_id: str,
    start_date: str = "2025-03-27",
    end_date: str = "2025-04-01",
    limit: int = 10,
) -> str:
    """
    Fetches recent usage data for a SaaS tenant and detects potential billing anomalies.

    :param tenant_id: The tenant ID to analyze.
    :type tenant_id: str
    :param start_date: Start date for the usage window.
    :type start_date: str
    :param end_date: End date for the usage window.
    :type end_date: str
    :param limit: Maximum number of records to return.
    :type limit: int
    :return: A JSON string with usage records and anomaly flags.
    :rtype: str
    """
    query = """
        SELECT date, api_calls, storage_gb
        FROM usage_data
        WHERE tenant_id = %s AND date BETWEEN %s AND %s
        ORDER BY date DESC
        LIMIT %s;
    """
    df = pd.read_sql(query, db_engine, params=(tenant_id, start_date, end_date, limit))

    if df.empty:
        return json.dumps(
            {"message": "No usage data found for this tenant in the specified range."}
        )

    df.sort_values("date", inplace=True)
    df["pct_change_api"] = df["api_calls"].pct_change()
    # Flag day-over-day changes greater than 100%. The sample spike
    # (900 -> 2200 API calls, about +144%) would not clear a 1.5 (150%)
    # threshold, so 1.0 is used here so the mock data produces a flag.
    df["anomaly"] = df["pct_change_api"].abs() > 1.0

    return df.to_json(orient="records")


# Register this in a list to be used by FunctionTool
user_functions = [billing_anomaly_summary]
```

Create and Configure the AI Agent

Now we'll set up the AI agent and integrate it with our Neon Postgres tool using the Azure AI Agent Service SDK. The Python script does the following:

- Creates the agent: instantiates an AI agent using the selected model (gpt-4o, for example), adds tool access, and sets instructions that tell the agent how to behave (e.g., "You are a helpful SaaS assistant…").
- **Creates a conversation thread** – starts a thread to hold the conversation between the user and the agent.
- **Posts a user message** – sends a question like "Why did my billing spike for tenant_456 this week?" to the agent.
- **Processes the request** – the agent reads the message, determines that it should use the custom tool to retrieve usage data, and processes the query.
- **Displays the response** – prints the agent's response, a natural-language explanation based on the tool's output.

```python
# billing_anomaly_agent.py

import os
from datetime import datetime
from pprint import pprint

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool, ToolSet
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv

from billing_agent_tools import user_functions  # Custom tool function module

# Load environment variables from .env file
load_dotenv()

# Create an Azure AI Project Client
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Initialize toolset with our user-defined functions
functions = FunctionTool(user_functions)
toolset = ToolSet()
toolset.add(functions)

# Create the agent
agent = project_client.agents.create_agent(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    name=f"billing-anomaly-agent-{datetime.now().strftime('%Y%m%d%H%M')}",
    description="Billing Anomaly Detection Agent",
    instructions=f"""
    You are a helpful SaaS financial assistant that retrieves and explains
    billing anomalies using usage data.
    The current date is {datetime.now().strftime("%Y-%m-%d")}.
    """,
    toolset=toolset,
)
print(f"Created agent, ID: {agent.id}")

# Create a communication thread
thread = project_client.agents.create_thread()
print(f"Created thread, ID: {thread.id}")

# Post a message to the agent thread
message = project_client.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="Why did my billing spike for tenant_456 this week?",
)
print(f"Created message, ID: {message.id}")

# Run the agent and process the query
run = project_client.agents.create_and_process_run(
    thread_id=thread.id, agent_id=agent.id
)
print(f"Run finished with status: {run.status}")

if run.status == "failed":
    print(f"Run failed: {run.last_error}")

# Fetch and display the messages
messages = project_client.agents.list_messages(thread_id=thread.id)
print("Messages:")
pprint(messages["data"][0]["content"][0]["text"]["value"])

# Optional cleanup:
# project_client.agents.delete_agent(agent.id)
# print("Deleted agent")
```

**Run the agent:** To run the agent, run the following command:

```shell
python billing_anomaly_agent.py
```

Snippet of output from the agent:

**Using the Azure AI Foundry Agent Playground:** After running your agent using the Azure AI Agent SDK, it is saved within your Azure AI Foundry project. You can now experiment with it using the Agent Playground. To try it out:

- Go to the Agents section in your Azure AI Foundry workspace.
- Find your billing anomaly agent in the list and click to open it.
- Use the playground interface to test different financial or billing-related questions, such as:
  - "Did tenant_456 exceed their API usage quota this month?"
  - "Explain recent storage usage changes for tenant_456."

This is a great way to validate your agent's behavior without writing more code.

**Summary:** You've now created a working AI agent that talks to your Postgres database, all using:

- A simple Python function
- Azure AI Agent Service
- A Neon Serverless Postgres backend

This approach is beginner-friendly, lightweight, and practical for real-world use. Want to go further?
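One quick way to build intuition first: the anomaly rule in `billing_anomaly_summary` flags any day whose `api_calls` change by more than 150% relative to the previous day (in practice only spikes can trip it, since a drop is bounded at -100%). A minimal, self-contained sketch of that same rule on synthetic data (pandas only, no database or agent needed):

```python
import pandas as pd

# Synthetic usage data for one tenant, with a 4x spike on the last day
df = pd.DataFrame(
    {
        "date": pd.to_datetime(
            ["2025-03-28", "2025-03-29", "2025-03-30", "2025-03-31"]
        ),
        "api_calls": [1000, 1100, 1050, 4200],
    }
)

# Same rule as billing_anomaly_summary: day-over-day percentage change,
# flagged as an anomaly when its absolute value exceeds 1.5 (i.e. 150%)
df["pct_change_api"] = df["api_calls"].pct_change()
df["anomaly"] = df["pct_change_api"].abs() > 1.5

print(df[["date", "api_calls", "anomaly"]])
```

Only the 4,200-call day is flagged: 4200 / 1050 - 1 = 3.0, which exceeds the 1.5 threshold; the first row's change is NaN and is treated as non-anomalous.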
You can:

- Add more tools to the agent
- Integrate with vector search (e.g., detect anomaly reasons from logs using embeddings)

Resources:

- Introduction to Azure AI Agent Service
- Develop an AI agent with Azure AI Agent Service
- Getting Started with Azure AI Agent Service
- Neon on Azure
- Build AI Agents with Azure AI Agent Service and Neon
- Multi-Agent AI Solution with Neon, Langchain, AutoGen and Azure OpenAI
- Azure AI Foundry GitHub Discussions

That's it, folks! But the best part? You can become part of a thriving community of learners and builders by joining the Microsoft Learn Student Ambassadors Community. Connect with like-minded individuals, explore hands-on projects, and stay updated with the latest in cloud and AI. 💬 Join the community on Discord here and explore more benefits on the Microsoft Learn Student Hub.

CoPilot Rewrite - Replace bugs
Hi, I was getting CoPilot to Auto Rewrite. However, when I click "Replace", the replaced text is not exactly what CoPilot generated. As you can see from the results below, some sections, especially in the middle, have been truncated. Any idea how to resolve this? Thanks in advance.

Copilot android
Hello world. Last week Copilot stopped working. It always says "Seems like you are not logged in." But I am logged in: I can see recent chats, I can log out and log in again, and I have already deleted and reinstalled the app a few times. Nothing works. The app even shows me as logged in when I check the status, but when I try to interact, it's "seems like you're not logged in" again. Can anybody help me, please?