MVPBuzz

Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight
In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works. So, what's the answer for teams seeking simplicity without sacrificing effectiveness?

This tutorial will show you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process automated quickly.

The Auditing Challenge:

It's month end, and your audit team is drowning in spreadsheets. As auditors reviewing financial data across multiple SaaS tenants, you're tasked with verifying billing accuracy by tracking usage metrics like API calls, storage consumption, and user sessions in Postgres databases. Each tenant generates thousands of transactions daily, and traditionally this verification process consumes weeks of your team's valuable time.

Typically, teams spend weeks:

- Manually extracting data from multiple database tables.
- Cross-referencing usage with invoices.
- Investigating anomalies through tedious log analysis.
- Compiling findings into comprehensive reports.

With an AI-powered audit agent, you can automate these tasks and transform the process.
Your AI assistant can:

- Pull relevant usage data directly from your database
- Identify billing anomalies like unexpected usage spikes
- Generate natural language explanations of findings
- Create audit reports that highlight key concerns

For example, when reviewing a tenant's invoice, your audit agent can query the database for relevant usage patterns, summarize anomalies, and offer explanations: "Tenant_456 experienced a 145% increase in API usage on April 30th, which explains the billing increase. This spike falls outside normal usage patterns and warrants further investigation."

Let's build an AI agent that connects to your Postgres database and transforms your audit process from manual effort to automated intelligence.

Prerequisites:

Before we start building our audit agent, you'll need:

- An Azure subscription (create one for free).
- The Azure AI Developer RBAC role assigned to your account.
- Python 3.11.x installed on your development machine.

Alternatively, you can use GitHub Codespaces, which will automatically install all dependencies for you. You'll need to create a GitHub account first if you don't already have one.

Setting Up Your Database:

For this tutorial, we'll use Neon Serverless Postgres as our database. It's a fully managed, cloud-native Postgres solution that's free to start, scales automatically, and works well for AI agents that need to query data on demand.

Creating a Neon Database on Azure:

1. Open the Neon Resource page on the Azure portal.
2. Fill out the form with the required fields and deploy your database.
3. After creation, navigate to the Neon Serverless Postgres Organization service.
4. Click on the Portal URL to access the Neon Console.
5. Click "New Project".
6. Choose an Azure region.
7. Name your project (e.g., "Audit Agent Database").
8. Click "Create Project".

Once your project is successfully created, copy the Neon connection string from the Connection Details widget on the Neon Dashboard.
It will look like this:

postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require

Note: Keep this connection string saved; we'll need it shortly.

Creating an AI Foundry Project on Azure:

Next, we'll set up the AI infrastructure to power our audit agent:

1. Create a new hub and project in the Azure AI Foundry portal by following the guide.
2. Deploy a model like GPT-4o to use with your agent.
3. Make note of your Project connection string and Model Deployment name. You can find your connection string in the overview section of your project in the Azure AI Foundry portal, under Project details > Project connection string.

Once you have all three values on hand: Neon connection string, Project connection string, and Model Deployment name, you are ready to set up the Python project to create an agent. All the code and sample data are available in this GitHub repository. You can clone or download the project.

Project Environment Setup:

Create a .env file with your credentials:

PROJECT_CONNECTION_STRING="<Your AI Foundry connection string>"
AZURE_OPENAI_DEPLOYMENT_NAME="gpt4o"
NEON_DB_CONNECTION_STRING="<Your Neon connection string>"

Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate   # on macOS/Linux
.venv\Scripts\activate      # on Windows

Install required Python libraries:

pip install -r requirements.txt

Example requirements.txt:

pandas
python-dotenv
sqlalchemy
psycopg2-binary
azure-ai-projects==1.0.0b7
azure-identity

Load Sample Billing Usage Data:

We will use a mock dataset for tenant usage, including computed percent change in API calls and storage usage in GB:

tenant_id    date        api_calls  storage_gb
tenant_456   2025-04-01  1000       25.0
tenant_456   2025-03-31  950        24.8
tenant_456   2025-03-30  2200       26.0

Run the load_usage_data.py Python script to create and populate the usage_data table in your Neon Serverless Postgres instance:

python load_usage_data.py

# load_usage_data.py file
import os
from dotenv import load_dotenv
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Date,
    Integer,
    Numeric,
)

# Load environment variables from .env
load_dotenv()

# Load connection string from environment variable
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
engine = create_engine(NEON_DB_URL)

# Define metadata and table schema
metadata = MetaData()
usage_data = Table(
    "usage_data",
    metadata,
    Column("tenant_id", String, primary_key=True),
    Column("date", Date, primary_key=True),
    Column("api_calls", Integer),
    Column("storage_gb", Numeric),
)

# Create table
with engine.begin() as conn:
    metadata.create_all(conn)

    # Insert mock data
    conn.execute(
        usage_data.insert(),
        [
            {"tenant_id": "tenant_456", "date": "2025-03-27", "api_calls": 870, "storage_gb": 23.9},
            {"tenant_id": "tenant_456", "date": "2025-03-28", "api_calls": 880, "storage_gb": 24.0},
            {"tenant_id": "tenant_456", "date": "2025-03-29", "api_calls": 900, "storage_gb": 24.5},
            {"tenant_id": "tenant_456", "date": "2025-03-30", "api_calls": 2200, "storage_gb": 26.0},
            {"tenant_id": "tenant_456", "date": "2025-03-31", "api_calls": 950, "storage_gb": 24.8},
            {"tenant_id": "tenant_456", "date": "2025-04-01", "api_calls": 1000, "storage_gb": 25.0},
        ],
    )

print("✅ usage_data table created and mock data inserted.")

Create a Postgres Tool for the Agent:

Next, we configure an AI agent tool to retrieve data from Postgres. The Python script billing_agent_tools.py contains the function billing_anomaly_summary(), which:

- Pulls usage data from Neon.
- Computes the percent change in api_calls.
- Flags anomalies with a threshold of > 1.5x change.

It also exports the user_functions list for the Azure AI Agent to use. You do not need to run it separately.
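The anomaly rule summarized above can be previewed on an in-memory DataFrame before touching the database. The sketch below is illustrative and not part of the repository; it applies the same pandas calls to three of the mock rows:

```python
# Standalone preview of the anomaly rule used by billing_anomaly_summary().
# The rows below mirror part of the mock dataset; no database is needed.
import json
import pandas as pd

df = pd.DataFrame(
    {
        "date": ["2025-03-29", "2025-03-30", "2025-03-31"],
        "api_calls": [900, 2200, 950],
        "storage_gb": [24.5, 26.0, 24.8],
    }
)

df.sort_values("date", inplace=True)
# Day-over-day fractional change in API calls: 900 -> 2200 is +1.44 (+144%).
df["pct_change_api"] = df["api_calls"].pct_change()
# The script's threshold of 1.5 means "more than a 150% swing".
df["anomaly"] = df["pct_change_api"].abs() > 1.5

records = json.loads(df.to_json(orient="records"))
for row in records:
    print(row)
```

One detail worth noticing: the 900 to 2200 jump is a 144% increase, which falls just under the 1.5 (150%) cutoff, so it is not flagged here. If "> 1.5x change" is meant as "the new value is more than 1.5 times the old one", the equivalent pct_change threshold would be 0.5.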
# billing_agent_tools.py file
import os
import json
import pandas as pd
from sqlalchemy import create_engine
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Set up the database engine
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
db_engine = create_engine(NEON_DB_URL)

# Define the billing anomaly detection function
def billing_anomaly_summary(
    tenant_id: str,
    start_date: str = "2025-03-27",
    end_date: str = "2025-04-01",
    limit: int = 10,
) -> str:
    """
    Fetches recent usage data for a SaaS tenant and detects potential billing anomalies.

    :param tenant_id: The tenant ID to analyze.
    :type tenant_id: str
    :param start_date: Start date for the usage window.
    :type start_date: str
    :param end_date: End date for the usage window.
    :type end_date: str
    :param limit: Maximum number of records to return.
    :type limit: int
    :return: A JSON string with usage records and anomaly flags.
    :rtype: str
    """
    query = """
        SELECT date, api_calls, storage_gb
        FROM usage_data
        WHERE tenant_id = %s AND date BETWEEN %s AND %s
        ORDER BY date DESC
        LIMIT %s;
    """
    df = pd.read_sql(query, db_engine, params=(tenant_id, start_date, end_date, limit))

    if df.empty:
        return json.dumps(
            {"message": "No usage data found for this tenant in the specified range."}
        )

    df.sort_values("date", inplace=True)
    df["pct_change_api"] = df["api_calls"].pct_change()
    df["anomaly"] = df["pct_change_api"].abs() > 1.5

    return df.to_json(orient="records")

# Register this in a list to be used by FunctionTool
user_functions = [billing_anomaly_summary]

Create and Configure the AI Agent:

Now we'll set up the AI agent and integrate it with our Neon Postgres tool using the Azure AI Agent Service SDK. The Python script billing_anomaly_agent.py does the following:

- Creates the agent: Instantiates an AI agent using the selected model (gpt-4o, for example), adds tool access, and sets instructions that tell the agent how to behave (e.g., "You are a helpful SaaS assistant…").
- Creates a conversation thread: A thread is started to hold a conversation between the user and the agent.
- Posts a user message: Sends a question like "Why did my billing spike for tenant_456 this week?" to the agent.
- Processes the request: The agent reads the message, determines that it should use the custom tool to retrieve usage data, and processes the query.
- Displays the response: Prints the response from the agent with a natural language explanation based on the tool's output.

# billing_anomaly_agent.py
import os
from datetime import datetime
from pprint import pprint

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool, ToolSet
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv

from billing_agent_tools import user_functions  # Custom tool function module

# Load environment variables from .env file
load_dotenv()

# Create an Azure AI Project Client
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Initialize toolset with our user-defined functions
functions = FunctionTool(user_functions)
toolset = ToolSet()
toolset.add(functions)

# Create the agent
agent = project_client.agents.create_agent(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    name=f"billing-anomaly-agent-{datetime.now().strftime('%Y%m%d%H%M')}",
    description="Billing Anomaly Detection Agent",
    instructions=f"""
    You are a helpful SaaS financial assistant that retrieves and explains billing anomalies using usage data.
    The current date is {datetime.now().strftime("%Y-%m-%d")}.
    """,
    toolset=toolset,
)
print(f"Created agent, ID: {agent.id}")

# Create a communication thread
thread = project_client.agents.create_thread()
print(f"Created thread, ID: {thread.id}")

# Post a message to the agent thread
message = project_client.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="Why did my billing spike for tenant_456 this week?",
)
print(f"Created message, ID: {message.id}")

# Run the agent and process the query
run = project_client.agents.create_and_process_run(
    thread_id=thread.id, agent_id=agent.id
)
print(f"Run finished with status: {run.status}")

if run.status == "failed":
    print(f"Run failed: {run.last_error}")

# Fetch and display the messages
messages = project_client.agents.list_messages(thread_id=thread.id)
print("Messages:")
pprint(messages["data"][0]["content"][0]["text"]["value"])

# Optional cleanup:
# project_client.agents.delete_agent(agent.id)
# print("Deleted agent")

Run the agent:

To run the agent, run the following command:

python billing_anomaly_agent.py

Snippet of output from agent:

Using the Azure AI Foundry Agent Playground:

After running your agent using the Azure AI Agent SDK, it is saved within your Azure AI Foundry project. You can now experiment with it using the Agent Playground. To try it out:

- Go to the Agents section in your Azure AI Foundry workspace.
- Find your billing anomaly agent in the list and click to open it.
- Use the playground interface to test different financial or billing-related questions, such as: "Did tenant_456 exceed their API usage quota this month?" or "Explain recent storage usage changes for tenant_456."

This is a great way to validate your agent's behavior without writing more code.

Summary:

You've now created a working AI agent that talks to your Postgres database, all using:

- A simple Python function
- Azure AI Agent Service
- A Neon Serverless Postgres backend

This approach is beginner-friendly, lightweight, and practical for real-world use. Want to go further?
You can:

- Add more tools to the agent
- Integrate with vector search (e.g., detect anomaly reasons from logs using embeddings)

Resources:

- Introduction to Azure AI Agent Service
- Develop an AI agent with Azure AI Agent Service
- Getting Started with Azure AI Agent Service
- Neon on Azure
- Build AI Agents with Azure AI Agent Service and Neon
- Multi-Agent AI Solution with Neon, Langchain, AutoGen and Azure OpenAI
- Azure AI Foundry GitHub Discussions

That's it, folks! But the best part? You can become part of a thriving community of learners and builders by joining the Microsoft Learn Student Ambassadors Community. Connect with like-minded individuals, explore hands-on projects, and stay updated with the latest in cloud and AI. 💬 Join the community on Discord here and explore more benefits on the Microsoft Learn Student Hub.

Seoul AI Hub & Microsoft MVPs Empower Citizens with AI Skills — No Coding Required
Seoul AI Hub, an AI-specialized support organization under the Seoul Metropolitan Government, is dedicated to fostering the city's AI industry ecosystem through talent development, startup incubation, and public education. In partnership with Microsoft MVPs, the hub is making AI accessible to all through the AI Frontiers Series — blending expert talks with hands-on workshops. "AI is no longer just for experts; it's a tool for everyone," says Chan-jin Park, Director of Seoul AI Hub.

The collaboration between Seoul AI Hub and Microsoft MVPs demonstrates the transformative power of community-led expertise. MVPs such as Jaeseok Lee, Heo seok, Haesun Park, and Minseok Song brought their technical leadership to the forefront — integrating advanced AI concepts with practical skills that citizens could immediately use. From explaining multi-agent architectures to building custom Copilot solutions, their sessions showed how complex AI tools can be democratized for non-developers.

Beyond teaching, these MVPs are active contributors to the global AI ecosystem. Minseok Song maintains the Co-op Translator open-source project, integrating AI-based translation workflows into real-world scenarios. Jaeseok Lee leads Korea's Power Platform User Group, connecting business users and developers to collaborate on Copilot Studio innovations. This kind of community-driven leadership extends the impact of Microsoft technologies far beyond corporate settings.

These events also reflect how the MVP community is growing more diverse in expertise and audience reach. Participants came from varied backgrounds — students, entrepreneurs, office workers, and hobbyists — all united by a desire to understand and use AI meaningfully. For many attendees, this was their first encounter with building AI agents, and the supportive environment encouraged experimentation and collaboration.
MVPs not only shared technical knowledge but also their own journeys: how they discovered Microsoft AI, grew into community leaders, and applied their skills to solve local and global challenges. Such stories inspire the next generation of community builders and potential MVPs.

AI Frontiers Series Summer Sessions

Recent events at the Seoul AI Hub where MVPs participated included:

- July 22 featured a deep-dive seminar on "Open AI Technologies for Survival in the AI Frontier Era," covering multi-agent strategies, LLM and multimodal trends, and real-world open-source AI applications.
- Aug 12 brought the AI Agent Bootcamp for Non-Developers, where 80 registered citizens learned to create Copilot agents without code. Participants explored integrating AI agents into Microsoft Teams and M365, building document-driven assistants, and deploying multi-channel solutions.

"Copilot Studio allows anyone to build their own ChatGPT-like agent. The key is not just creating one agent, but learning how to design multiple agents that work together to solve real problems," said Jaeseok Lee, Microsoft Copilot Studio MVP.

These back-to-back sessions show what's possible when technical expertise, open-source spirit, and a commitment to public education come together. The impact extends beyond the events themselves — sparking curiosity, building confidence, and equipping citizens to harness AI in ways that are relevant to their lives and work.

The AI Frontiers Series proves that when experts and communities connect, technology becomes more inclusive and impactful. By lowering the barrier to AI adoption, Seoul AI Hub and Microsoft MVPs are equipping citizens with skills for the future. To explore upcoming sessions or get involved, visit the Seoul AI Hub website and join the movement to make AI a tool for everyone.

YellowHat 2025: A Global Stage for Deep Microsoft Security Insights
YellowHat 2025, held on March 6th, was a landmark event focused on Microsoft Security, drawing together a global audience of professionals and enthusiasts. Hosted at Microsoft's Amsterdam headquarters, the event featured over 150 in-person attendees and 1,500+ online participants, all eager to delve into advanced security topics. MVP Myron Helgering and the organizing team shared their insights into YellowHat's ideas, motivations, and future prospects for the event.

What inspired you to organize YellowHat 2025?

We felt there was a need for something new: an event organized by and for the community, focused solely on Microsoft Security content. One thing was also clear: we wanted it to be a deeply technical event, so level 400+. Our goal was to be visible worldwide, so we chose a hybrid event and focused on delivering a high-quality online and in-person experience. As it was our first edition, we aimed to create an exciting and easily recognizable event.

How did you ensure that the content was relevant and immediately applicable to current security challenges?

The most important thing was getting the right speakers on board for our event; they had to be top-notch. We selected our speakers based on their expertise, experience, and their ability to deliver engaging and relevant content. Luckily, we could attract visionary leaders and security experts like Raviv Tamir, Roberto Rodriguez, Dirk-Jan Mollema, Mattias Borg, Stefan Schörling, Thomas Naunheim, Ran Marom, and Eyal Haik.

In addition to selecting the right speakers, we aimed to tell a cohesive story throughout the day. By interconnecting our deep-dive sessions and zooming out when necessary, we could highlight different security challenges and make the content applicable to a broad audience.

How did you manage to attract such a large global audience, both in-person and online?

Most of the YellowHat organizers (not all of them) are also organizers for the Dutch Microsoft Security Meetup, which has 2,000+ members.
We used the power of our community to our advantage, attracting our local in-person attendees and promoting our event globally. To reach the large global audience, we had the help of our international speakers and Microsoft Security MVPs who could promote the event, as well as Microsoft's very own Raviv Tamir and Dan Michelson (YellowHat's founder). Lastly, our very own Ninja Cat with a yellow hard hat mascot was all over the socials for weeks to do our marketing for us.

How did the hybrid format (in-person and online) impact the overall experience for attendees?

When organizing a hybrid event, organizing suddenly becomes a lot more complex because you have to provide an excellent experience to both online and in-person attendees simultaneously. We engaged our online attendees during breaks by providing them with live interviews and sponsor commercials, while our in-person attendees had time for food, drinks, and networking opportunities. Ultimately, I hope we made people feel like they were part of that YellowHat experience we were going for by providing them with the same deep technical content, but not prioritizing one experience over the other. We received overwhelmingly positive feedback from our in-person and online attendees, which reassures us that we are on the right track and motivates us to continue improving the YellowHat experience.

What are your plans for future iterations of YellowHat, and how do you envision the event evolving?

Even though YellowHat 2025 was already a global event, the in-person attendees mostly visited from the Netherlands. We would love to grow and evolve YellowHat into something that can attract an international audience, which will be a focus of our plans. We haven't officially decided on anything yet, but YellowHat 2026 will definitely happen, and it will be bigger, bolder, and more exciting.

How can interested community members get involved in organizing or participating in future YellowHat conferences?
If you have any questions or suggestions, or would like to get involved, please feel free to contact us using our contact form. If you want to be the first to receive sneak peeks, early announcements, and exclusive insider information, then please subscribe to our mailing list so you won't miss anything about YellowHat!

Why YellowHat?

Yellow (hard) hats are used by construction workers for "protection and security," which is a reference to our work as Microsoft Security defenders and protectors. The content at the conference was aligned with that; we're focused on the defensive/preventive side of (Microsoft) security. One of our unofficial sayings at the conference was: wear your yellow hat to prevent cyber threats.

Breaking Barriers: Addressing Unconscious Bias in the Tech Industry
Unconscious bias affects everyone, including tech professionals who believe they are objective and data-driven. In this blog post, we delve into the experiences of South African Business Applications MVP Carike Botha, who has worked diligently to recognize and mitigate her own biases. Carike will share practical strategies for the tech community to identify unconscious bias, fostering better team dynamics and enhancing creativity.

While everyone experiences unconscious bias, how can you become more aware of your own unconscious biases and work to counteract them?

In my experience, becoming aware of unconscious bias starts with actively seeking out programs and resources that can help raise awareness. At my workplace, there is a dedicated program for addressing unconscious bias. Without this program, I might never have fully recognized my own biases.

The program includes a Women's Forum, which began about six years ago. Women from this group participated in a course created by the Chapter Network, and upon completion we facilitated an Unconscious Bias session. The response was overwhelmingly positive, and from there, the training was rolled out across the whole business.

The Unconscious Bias training is led by women in the forum and is a 90-minute session. It creates a space where employees can openly voice biases they've experienced or those they've recognized in themselves or others. During these sessions, we emphasize that there is no right or wrong; the key takeaway is that awareness is crucial to making a difference.

Remaining open-minded and encouraging others to approach you with any concerns about bias is essential. This practice helps cultivate a learning environment where everyone can grow together. By inviting feedback, we ensure biases are addressed, fostering mutual respect, accountability, and a culture where everyone's voice is heard and valued.
In your experience, how does unconscious bias manifest in everyday interactions among tech professionals?

Unconscious bias often appears in everyday interactions within the tech industry, especially through stereotypes about what a woman in tech "should" look like. These biases show up in assumptions about our wardrobe, whether we're gamers, or the constant need to prove and validate our skills, despite our progress.

Unconscious biases towards women in tech manifest in various ways, such as assumptions that women are less technical or always need validation. For example, some may think women lack expertise in certain roles because they are perceived as soft, social, or friendly, or assume they are in the industry just to "fill a quota." These biases can result in women being overlooked for leadership positions, denied career advancement, or having their ideas dismissed in meetings. Over time, these challenges can erode confidence and hinder career growth. They also limit innovation and reduce the diversity of thought crucial to the success of teams and organizations in the tech sector.

Many tech professionals believe they are objective and data-driven. How can unconscious bias still affect their work despite this belief?

Many tech professionals believe they are objective and data-driven, making unconscious bias harder to detect. However, even in data-driven fields, personal perspectives influence decisions, whether consciously or not. Therefore, it's crucial to continuously challenge assumptions and remain open to feedback.

Unconscious biases can subtly affect decisions in data-driven environments. For instance, when interpreting data, individuals may unintentionally prioritize information that aligns with their pre-existing beliefs rather than examining it objectively. In hiring, unconscious bias might lead a manager to favor candidates who resemble themselves or fit a certain mold, even if data suggests other candidates are better suited for the role.
Algorithmic biases are another example—tech professionals may not realize that the models they design or the datasets they use reflect their own biases, resulting in skewed outcomes. Moreover, relying on data to justify decisions can be problematic. Tech professionals might overlook the broader context or social implications, thereby ignoring how biases in data collection or model assumptions could perpetuate inequality or exclusion.

How can tech leaders and managers actively mitigate unconscious bias within their teams?

Tech leaders and managers can mitigate unconscious bias by fostering an inclusive environment, providing ongoing training, and encouraging open conversations. Creating an inclusive environment starts with intentional actions that demonstrate a commitment to fairness, respect, and diversity. Practical ways to foster inclusion include:

- Promoting Diverse Representation: Actively recruit from diverse talent pools and ensure diverse voices are heard in meetings. Mentorship programs pairing underrepresented groups with senior leaders can help break down barriers and build trust.
- Implementing Bias-Reducing Strategies in Hiring: Use blind hiring processes where personal information (for example, gender, race, or age) is removed from resumes or applications to focus on skills and qualifications.
- Encouraging Open Dialogue: Facilitate regular discussions around unconscious bias and its impact, allowing team members to safely share experiences and learn from one another. This can be done through lunch-and-learns or town hall meetings that address diversity and inclusion.
- Training and Resources: Offer continuous training on recognizing and addressing bias, and create accessible resources (e.g., reading materials, workshops, or bias assessment tools) for employees to explore at their own pace.
- Celebrating Diversity: Actively recognize the contributions of diverse team members and celebrate various cultural holidays and events that reflect the team's diversity.
This raises awareness and fosters an environment where people feel seen and valued for who they are. By implementing these strategies, leaders can create an environment where merit and contributions are recognized, and biases are less likely to influence decisions.

In conclusion, by actively practicing these skills, we can begin to create a safe space for open dialogue and awareness. Encouraging feedback and promoting diverse representation are crucial steps tech leaders can take to mitigate unconscious bias and foster a culture of respect and inclusion. Acknowledging and confronting our biases allows us to enhance team collaboration, boost innovation, and foster a deeper sense of community within the tech industry.

Deploy Your First App Using GitHub Copilot for Azure: A Beginner's Guide
Deploying an app for the first time can feel overwhelming. You may find yourself switching between tutorials, scanning documentation, and wondering if you missed a step. But what if you could do it all in one place? Now you can! With GitHub Copilot for Azure, you can receive real-time deployment guidance without leaving Visual Studio Code. While it won't fully automate deployments, it serves as a step-by-step, AI-powered assistant, helping you navigate the process with clear, actionable instructions. No more endless tab switching or searching for the right tutorial—simply type, deploy, and learn, all within your IDE, Visual Studio Code.

If you are a student, you have access to exclusive opportunities! Whether you are exploring new technologies or experimenting with them, platforms like GitHub Education and the Microsoft Learn Student Hub provide free Azure credits, structured learning paths, and certification opportunities. These resources can help you gain hands-on experience with GitHub Copilot for Azure and streamline your journey toward deploying applications efficiently.

Prerequisites:

Before we begin, ensure you have the following:

- An account on GitHub.
- Sign up for GitHub Copilot.
- An account on Azure (claim free credits using Azure for Students).
- Visual Studio Code installed.

Step 1: Installation

How do you install GitHub Copilot for Azure? Open VS Code and, in the leftmost panel, click on Extensions, type 'GitHub Copilot for Azure', and install the first result, which is by Microsoft. After this installation, you will be prompted to install GitHub Copilot, Azure Tools, and other required extensions. Click on allow and install all required extensions using the same method as above.

Step 2: Enable

How do you enable GitHub Copilot in GitHub? Open GitHub and click on your profile picture at the top right; a panel will open. Click on Your Copilot. Upon opening, enable it for your IDE, as shown in the figure below.
Step 3: Walkthrough

Open VS Code and click on the GitHub Copilot icon at the top right. This will open the GitHub Copilot Chat. From here, you can customize the model type and send commands. Type @azure to work with Azure-related tasks. The figure below will help you locate these controls:

Step 4: Generate Boilerplate Code with GitHub Copilot

Let's start by creating a simple HTML website that we will deploy to the Azure Static Web Apps service.

Prompt for GitHub Copilot: Create a simple "Hello, World!" code with HTML.

Copilot will generate a basic structure like this:

Then, click on "Edit with Copilot." It will create an index.html file and add the code to it. Then, click on "Accept" and modify the content and style if needed before moving forward.

Step 5: Deploy Your App Using Copilot Prompts

Instead of searching for documentation, let's use Copilot to generate deployment instructions directly within Visual Studio Code.

Trigger Deployment Prompts Using @azure

To get deployment-related suggestions, use @azure in GitHub Copilot's chat. In the chat text box at the bottom of the pane, type the following prompt after @azure, then select Send (paper airplane icon) or press Enter on your keyboard:

Prompt: @azure How do I deploy a static website?

Copilot will provide two options: deploying via Azure Blob Storage or Azure Static Web Apps. We will proceed with Azure Static Web Apps, so we will ask Copilot to guide us through deploying our app using this service. We will use the following prompt:

@azure I would like to deploy a site using Azure Static Web Apps. Please provide a step-by-step guide.

Copilot will then return steps like:

You will receive a set of instructions to deploy your website. To make it simpler, you can ask Copilot for a more detailed guide, using the following prompt:

@azure Can you provide a more detailed guide and elaborate on GitHub Actions, including the steps to take for GitHub Actions?
Copilot will then return steps like:

See? That's how you can experiment, ask questions, and get step-by-step guidance. Remember, the better the prompt, the better the results will be.

Step 6: Learn as You Deploy

One of the best features of Copilot is that you can ask follow-up questions if anything is unclear—all within Visual Studio Code, without switching tabs.

Examples of useful prompts:

- What Azure services should I use with my app?
- What is GitHub Actions, and how does it work?
- What are common issues when deploying to Azure, and how can I fix them?

Copilot provides contextual responses, guiding you through troubleshooting and best practices. You can learn more about this here.

Conclusion:

With GitHub Copilot for Azure, deploying applications is now more intuitive than ever. Instead of memorizing complex commands, you can use AI-powered prompts to generate deployment steps in real time and even debug errors within Visual Studio Code.

🚀 Next Steps:

- Experiment with different prompts and explore how Copilot assists you.
- Try deploying more advanced applications, like Node.js or Python apps.

GitHub Copilot isn't just an AI assistant; it's a learning tool. The more you engage with it, the more confident you'll become in deploying and managing applications on Azure!

Learn more about GitHub Copilot for Azure:

- Understand what GitHub Copilot for Azure Preview is and how it works.
- See example prompts for learning more about Azure and understanding your Azure account, subscription, and resources.
- See example prompts for designing and developing applications for Azure.
- See example prompts for deploying your application to Azure.
- See example prompts for optimizing your applications in Azure.
- See example prompts for troubleshooting your Azure resources.

That's it, folks! But the best part? You can become part of a thriving community of learners and builders by joining the Microsoft Learn Student Ambassadors Community.
Connect with like-minded individuals, explore hands-on projects, and stay updated with the latest in cloud and AI. 💬 Join the community on Discord here.