Demystifying GitHub Copilot Security Controls: easing concerns for organizational adoption
At a recent developer conference, I delivered a session on Legacy Code Rescue using GitHub Copilot App Modernization. Throughout the day, conversations with developers revealed a clear divide: some have fully embraced Agentic AI in their daily coding, while others remain cautious. Often, this hesitation isn't due to reluctance but stems from organizational concerns around security and regulatory compliance. Having witnessed similar patterns during past technology shifts, I understand how these barriers can slow adoption. In this blog, I'll demystify the most common security concerns about GitHub Copilot and explain how its built-in features address them, empowering organizations to confidently modernize their development workflows.

GitHub Copilot Model Training

A common question I received at the conference was whether GitHub uses your code as training data for GitHub Copilot. I always direct customers to the GitHub Copilot Trust Center for clarity, but the answer is straightforward: "No. GitHub uses neither Copilot Business nor Enterprise data to train the GitHub model." Note that this restriction applies to third-party models as well (e.g., Anthropic, Google).

GitHub Copilot Intellectual Property indemnification policy

A frequent concern I hear is that, since GitHub Copilot's underlying models are trained on sources that include public code, it might simply "copy and paste" code from those sources. Let's clarify how this actually works. Does GitHub Copilot "copy/paste"? "The AI models that create Copilot's suggestions may be trained on public code, but do not contain any code. When they generate a suggestion, they are not 'copying and pasting' from any codebase."

To provide an additional layer of protection, GitHub Copilot includes a "duplicate detection filter". This feature helps prevent suggestions that closely match public code from being surfaced. (Note: This duplicate detection currently does not apply to the Copilot coding agent.)
More importantly, customers are protected by an Intellectual Property indemnification policy. This means that if you receive an unmodified suggestion from GitHub Copilot and face a copyright claim as a result, Microsoft will defend you in court.

GitHub Copilot Data Retention

Another frequent question I hear concerns GitHub Copilot's data retention policies. For organizations on GitHub Copilot Business and Enterprise plans, retention practices depend on how and where the service is accessed:

Access through the IDE for Chat and Code Completions:
- Prompts and Suggestions: Not retained.
- User Engagement Data: Kept for two years.
- Feedback Data: Stored for as long as needed for its intended purpose.

Other GitHub Copilot access and use:
- Prompts and Suggestions: Retained for 28 days.
- User Engagement Data: Kept for two years.
- Feedback Data: Stored for as long as needed for its intended purpose.

For the Copilot coding agent, session logs are retained for the life of the account in order to provide the service.

Excluding content from GitHub Copilot

To prevent GitHub Copilot from indexing sensitive files, you can configure content exclusions at the repository or organization level. In VS Code, use the .copilotignore file to exclude files client-side. Note that files listed in .gitignore are not indexed by default but may still be referenced if open or explicitly referenced (unless they're excluded through .copilotignore or content exclusions).

The life cycle of a GitHub Copilot code suggestion

Here are the key protections at each stage of the life cycle of a GitHub Copilot code suggestion:
- In the IDE: Content exclusions prevent files, folders, or patterns from being included.
- GitHub proxy (pre-model safety): Prompts go through a GitHub proxy hosted in Microsoft Azure for pre-inference checks: screening for toxic or inappropriate language, relevance, and hacking attempts/jailbreak-style prompts before reaching the model.
- Model response: With the public code filter enabled, some suggestions are suppressed. The vulnerability protection feature blocks insecure coding patterns like hardcoded credentials or SQL injection in real time.

Disable access to GitHub Copilot Free

Due to the varying policies associated with GitHub Copilot Free, it is crucial for organizations to ensure it is disabled both in the IDE and on GitHub.com. Since not all IDEs currently offer a built-in option to disable Copilot Free, the most reliable way to prevent both accidental and intentional access is to implement firewall rule changes, as outlined in the official documentation.

Agent Mode Allow List

Accidental file system deletion by agentic AI assistants can happen. With GitHub Copilot agent mode, the "Terminal auto approve" setting in VS Code can be used to prevent this. This setting can be managed centrally using a VS Code policy.

MCP registry

Organizations often want to restrict access to allow only trusted MCP servers. GitHub now offers an MCP registry feature for this purpose. This feature isn't available in all IDEs and clients yet, but it is being developed.

Compliance Certifications

The GitHub Copilot Trust Center page lists GitHub Copilot's broad compliance credentials, surpassing many competitors in financial, security, privacy, cloud, and industry coverage:
- SOC 1 Type 2: Assurance over internal controls for financial reporting.
- SOC 2 Type 2: In-depth report covering Security, Availability, Processing Integrity, Confidentiality, and Privacy over time.
- SOC 3: General-use version of SOC 2 with broad executive-level assurance.
- ISO/IEC 27001:2013: Certification for a formal Information Security Management System (ISMS), based on risk management controls.
- CSA STAR Level 2: Includes a third-party attestation combining ISO 27001 or SOC 2 with additional Cloud Controls Matrix (CCM) requirements.
- TISAX: Trusted Information Security Assessment Exchange, covering automotive-sector security standards.
In summary, while the adoption of AI tools like GitHub Copilot in software development can raise important questions around security, privacy, and compliance, the safeguards already in place address these concerns. By understanding the safeguards, configurable controls, and robust compliance certifications offered, organizations and developers alike can feel more confident in embracing GitHub Copilot to accelerate innovation while maintaining trust and peace of mind.

Shared Agent Context: How We Are Tackling Partner Agent Collaboration
Your Azure SRE agent detects a spike in error rates. It triages with cloud-native telemetry, but the root cause trail leads into a third-party observability platform your team also runs. The agent can't see that data. A second agent can, one that speaks Datadog or Dynatrace or whatever your team chose. The two agents talk to each other using protocols like MCP or directly via an API endpoint and come up with a remediation. The harder question is what happens to the conversation afterward.

TL;DR

Two AI agents collaborate on incidents using two communication paths: a direct real-time channel (MCP) for fast investigation, and a shared memory layer that writes to systems your team already uses, like PagerDuty, GitHub Issues, or ServiceNow. No new tools to adopt. No ephemeral conversations that vanish when the incident closes.

The problem

Most operational AI agents work in isolation. Your cloud monitoring agent doesn't have access to your third-party observability stack. Your Datadog specialist doesn't know what your Azure resource topology looks like. When an incident spans both, a human has to bridge the gap manually. At 2 AM. With half the context missing.

And even when two agents do exchange information directly, the conversation is ephemeral. The investigation ends, the findings disappear. The next on-call engineer sees a resolved alert with no record of what was tried, what was found, or why the remediation worked. The next agent that hits the same pattern starts over from scratch.

What we needed was somewhere for both agents to persist their findings, somewhere humans could see it too. And we really didn't want to force teams onto a new system just to get there.

Two communication paths

Direct agent-to-agent (real-time)

During an active investigation, the primary agent calls the partner agent directly. The partner runs whatever domain-specific analysis it's good at (log searches, span analysis, custom metric queries) and returns findings in real time.
This is the fast path. The direct channel uses MCP, so any partner agent can plug in without custom integration work. The primary agent doesn't need to understand the internals of Datadog or Dynatrace. It asks questions, gets answers.

Shared memory (durable)

After the direct exchange, both agents write their actions and findings to external systems that humans already use. This is the durable path, the one that creates audit trails and makes handoffs work. The shared memory backends are systems your team already has open during an incident:

| Backend | What gets written | Good fit for |
| --- | --- | --- |
| Incident platform (e.g., PagerDuty) | Timeline notes, on-call handoff context | Teams with alerting-centric workflows |
| Issue tracker (e.g., GitHub Issues) | Code-level findings, root cause analysis, action comments | Teams with dev workflow integration |
| ITSM system (e.g., ServiceNow) | Work notes, ITSM-compliant audit trail | Enterprise IT, regulated industries |

The important thing: this doesn't require a new system. Agents write to whatever your team already uses.

How it works

| Step | Actor | What happens | Path |
| --- | --- | --- | --- |
| 1 | Alert source | Monitoring fires an alert | — |
| 2 | Primary agent | Receives alert, triages, starts investigating with native tools | Internal |
| 3 | Primary agent | Calls partner agent for domain-specific analysis (third-party logs, spans) | Direct via MCP or API |
| 4 | Partner agent | Runs analysis, returns findings in real time | Direct via MCP or API |
| 5 | Primary agent | Correlates partner findings with native data, runs remediation | Internal |
| 6 | Both agents | Write findings, actions, and resolution to external systems | Shared memory via existing sources |
| 7 | Agent or human | Verifies resolution, closes incident | Shared memory via existing sources |

Steps 3 through 5 happen in real time over the direct channel. Nothing gets written to shared memory until the investigation has actual results. The investigation is fast; the record-keeping is thorough.
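To make step 6 concrete, here is a minimal sketch of how an agent might append a finding to an issue-tracker backend. The GitHub issue-comments REST endpoint is real; the agent names, repository, and comment layout are illustrative assumptions, not part of the system described here.

```python
import json
import urllib.request

def format_finding(agent: str, stage: str, summary: str, details: list[str]) -> str:
    """Render one agent finding as a Markdown comment body (one append-only entry)."""
    lines = [f"### [{agent}] {stage}", "", summary, ""]
    lines += [f"- {d}" for d in details]
    return "\n".join(lines)

def post_issue_comment(repo: str, issue: int, body: str, token: str) -> None:
    """Append the finding to a GitHub issue via the REST API."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues/{issue}/comments",
        data=json.dumps({"body": body}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses

comment = format_finding(
    agent="partner-observability",   # hypothetical agent name
    stage="Step 6: write findings",
    summary="p95 latency regression traced to connection pool exhaustion.",
    details=["Pool max reached at 02:13 UTC", "Correlated with deploy abc123"],
)
print(comment.splitlines()[0])  # → ### [partner-observability] Step 6: write findings
```

Because each finding is just a comment on an issue the team already watches, no new tooling is needed on the human side.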
Who does what

In this system the primary agent owns the full incident lifecycle: detection, triage, investigation, remediation, closure. The partner agent gets called when the primary agent needs to see into a part of the stack it can't access natively. It does the specialized deep-dive, returns what it found, and the primary agent takes it from there. Both agents write to shared memory, and the primary agent acts on the proposed next steps.

| | Primary agent | Partner agent |
| --- | --- | --- |
| Communication | Calls partner directly; writes to shared memory after | Responds to calls; writes enrichment to shared memory |
| Scope | Full lifecycle | Domain-specific deep-dive |
| Tools | Cloud-native monitoring, CLI, runbooks, issue trackers | Third-party observability APIs |
| Typical share | ~80% of investigation + all remediation | ~20%, specialized enrichment |

Why shared context should live where humans already work

If your agent writes its findings to a system nobody checks, you've built a very expensive diary. Write them to a GitHub Issue, a ServiceNow ticket, a Jira epic, or whatever your team actually monitors, and the dynamics change: humans can participate without changing their workflow. Your team already watches these systems. When an agent posts its reasoning and pending decisions to a place engineers already check, anyone can review or correct it using the tools they know. Comments, reactions, status updates. No custom approval UI. The collaboration features built into your workflow tool become the oversight mechanism for free.

That persistence pays off in a second way. Every entry the agent writes is a record that future runs can search. Instead of context that disappears when a conversation ends, you accumulate operational history. How was this incident type handled last time? What did the agent try? What did the human override? That history is retrievable by both people and agents through the same interface, without spinning up a separate vector database.
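The searchable, append-only history described above can be sketched in a few lines. This is an illustrative model of the idea, not the actual implementation; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entry:
    incident: str
    author: str   # "primary-agent", "partner-agent", or a human
    text: str

@dataclass
class OperationalHistory:
    """Append-only record of agent and human actions: no overwrites, no deletions."""
    _entries: list = field(default_factory=list)

    def append(self, entry: Entry) -> None:
        self._entries.append(entry)

    def search(self, keyword: str) -> list:
        """People and agents query past incidents through the same interface."""
        return [e for e in self._entries if keyword.lower() in e.text.lower()]

history = OperationalHistory()
history.append(Entry("INC-101", "partner-agent", "Pool exhaustion caused the latency spike"))
history.append(Entry("INC-207", "primary-agent", "Restarted pods; latency recovered"))
print(len(history.search("latency")))  # → 2
```

In practice the backing store would be the issue tracker or ITSM system itself; the point is that retrieval works over plain entries humans wrote and read too.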
You could build a dedicated agent database for all this. But nobody will look at it. Teams already have notifications, permissions, and audit trails configured in their existing tools. A purpose-built system means a new UI to learn, new permissions to manage, and one more thing competing for attention. Store context where people already look and you skip all of that. The best agent memory is the one your team is already reading.

Design principles

A few opinions that came out of watching real incidents:
- Investigate first, persist second. The primary agent calls the partner directly for real-time analysis. Both agents write to shared memory only after findings are collected. Investigation speed should never be bottlenecked by writes to external systems.
- Humans see everything through shared context. The direct path is agent-to-agent only, but the shared context layer is where humans can see the full picture and step in. Agents don't bypass human visibility.
- Append-only. Both agents' writes are additive. No overwrites, no deletions. You can always reconstruct the full history of an investigation.
- Backend-agnostic. Swapping PagerDuty for ServiceNow, or adding GitHub Issues alongside either one, is a connector config change.

What this actually gets you

The practical upside is pretty simple: investigations aren't waiting on writes to external systems, nothing is lost when the conversation ends, and the next on-call engineer picks up where the last one left off instead of starting over. Every action from both agents shows up in the systems humans already look at. Adding a new partner agent or a new shared memory backend is a connector change. The architecture doesn't care which specific tools your team chose. The fast path is for investigation. The durable path is for everything else.

Take Control of Every Message: Partial Failure Handling for Service Bus Triggers in Azure Functions
The Problem: All-or-Nothing Batch Processing in Azure Service Bus

Azure Service Bus is one of the most widely used messaging services for building event-driven applications on Azure. When you use Azure Functions with a Service Bus trigger in batch mode, your function receives multiple messages at once for efficient, high-throughput processing. But what happens when one message in the batch fails?

Your function receives a batch of 50 Service Bus messages. 49 process perfectly. 1 fails. In the default model, the entire batch fails. All 50 messages go back on the queue and get reprocessed, including the 49 that already succeeded. This leads to:
- Duplicate processing — messages that were already handled successfully get processed again
- Wasted compute — you pay for re-executing work that already completed
- Infinite retry loops — if that one "poison" message keeps failing, it blocks the entire batch indefinitely
- Idempotency burden — your downstream systems must handle duplicates gracefully, adding complexity to every consumer

This is the classic all-or-nothing batch failure problem. Azure Functions solves it with per-message settlement.

The Solution: Per-Message Settlement for Azure Service Bus

Azure Functions gives you direct control over how each individual message is settled in real time, as you process it. Instead of treating the batch as all-or-nothing, you settle each message independently based on its processing outcome.
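Conceptually, per-message settlement is a small decision made independently for each message. The sketch below is illustrative pseudologic only (plain Python, not the Functions settlement API, which the language-specific samples later in this post show); the exception names are assumptions.

```python
from enum import Enum

class Settlement(Enum):
    COMPLETE = "complete"          # success: remove from queue
    ABANDON = "abandon"            # transient failure: release lock for retry
    DEAD_LETTER = "dead_letter"    # poison message: never retry

class TransientError(Exception): ...
class MalformedMessage(Exception): ...

def settle(process, message) -> Settlement:
    """Decide each message's fate independently of the rest of the batch."""
    try:
        process(message)
        return Settlement.COMPLETE
    except TransientError:
        return Settlement.ABANDON
    except MalformedMessage:
        return Settlement.DEAD_LETTER

def flaky(msg):
    # Stand-in for real business logic with two failure modes.
    if msg == "bad-json":
        raise MalformedMessage
    if msg == "db-timeout":
        raise TransientError

results = [settle(flaky, m) for m in ["ok", "db-timeout", "bad-json"]]
print([r.value for r in results])  # → ['complete', 'abandon', 'dead_letter']
```

One failing message no longer decides the fate of the other 49.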
With Service Bus message settlement actions in Azure Functions, you can:

| Action | What It Does |
| --- | --- |
| Complete | Remove the message from the queue (successfully processed) |
| Abandon | Release the lock so the message returns to the queue for retry, optionally modifying application properties |
| Dead-letter | Move the message to the dead-letter queue (poison message handling) |
| Defer | Keep the message in the queue but make it only retrievable by sequence number |

This means in a batch of 50 messages, you can:
- Complete 47 that processed successfully
- Abandon 2 that hit a transient error (with updated retry metadata)
- Dead-letter 1 that is malformed and will never succeed

All in a single function invocation. No reprocessing of successful messages. No building failure response objects. No all-or-nothing.

Why This Matters

1. Eliminates Duplicate Processing

When you complete messages individually, successfully processed messages are immediately removed from the queue. There's no chance of them being redelivered, even if other messages in the same batch fail.

2. Enables Granular Error Handling

Different failures deserve different treatments. A malformed message should be dead-lettered immediately. A message that failed due to a transient database timeout should be abandoned for retry. A message that requires manual intervention should be deferred. Per-message settlement gives you this granularity.

3. Implements Exponential Backoff Without External Infrastructure

By combining abandon with modified application properties, you can track retry counts per message and implement exponential backoff patterns directly in your function code, no additional queues or Durable Functions required.

4. Reduces Cost

You stop paying for redundant re-execution of already-successful work. In high-throughput systems processing millions of messages, this can be a material cost reduction.

5.
Simplifies Idempotency Requirements

When successful messages are never redelivered, your downstream systems don't need to guard against duplicates as aggressively. This reduces architectural complexity and potential for bugs.

Before: One Message = One Function Invocation

Before batch support, there was no cardinality option; Azure Functions processed each Service Bus message as a separate function invocation. If your queue had 50 messages, the runtime spun up 50 individual executions.

Single-Message Processing (The Old Way)

```typescript
import { app, InvocationContext } from '@azure/functions';

interface Order {
  id: string;
  product: string;
  amount: number;
}

async function processOrder(order: Order): Promise<void> {
  // Business logic for a single order.
}

// Renamed here so the handler doesn't shadow the business-logic
// function it calls.
async function processOrderMessage(
  message: unknown, // ← One message at a time, no batch
  context: InvocationContext
): Promise<void> {
  try {
    const order = message as Order;
    await processOrder(order);
  } catch (error) {
    context.error('Failed to process message:', error);
    // Message auto-completes by default.
    throw error;
  }
}

app.serviceBusQueue('processOrder', {
  connection: 'ServiceBusConnection',
  queueName: 'orders-queue',
  handler: processOrderMessage,
});
```

What this cost you:

| 50 messages on the queue | Old (single-message) | New (batch + settlement) |
| --- | --- | --- |
| Function invocations | 50 separate invocations | 1 invocation |
| Connection overhead | 50 separate DB/API connections | 1 connection, reused across batch |
| Compute cost | 50× invocation overhead | 1× invocation overhead |
| Settlement control | Binary: throw or don't | 4 actions per message |

Every message paid the full price of a function invocation: startup, connection setup, teardown. At scale (millions of messages/day), this was a significant cost and latency penalty. And when a message failed, your only option was to throw (retry the whole message) or swallow the error (lose it silently).

Code Examples

Let's see how this looks across all three major Azure Functions language stacks.
Node.js (TypeScript with @azure/functions-extensions-servicebus)

```typescript
import '@azure/functions-extensions-servicebus';
import { app, InvocationContext } from '@azure/functions';
import {
  ServiceBusMessageContext,
  messageBodyAsJson,
} from '@azure/functions-extensions-servicebus';

interface Order {
  id: string;
  product: string;
  amount: number;
}

export async function processOrderBatch(
  sbContext: ServiceBusMessageContext,
  context: InvocationContext
): Promise<void> {
  const { messages, actions } = sbContext;
  for (const message of messages) {
    try {
      const order = messageBodyAsJson<Order>(message);
      await processOrder(order);
      await actions.complete(message); // ✅ Done
    } catch (error) {
      context.error(`Failed ${message.messageId}:`, error);
      await actions.deadletter(message); // ☠️ Poison
    }
  }
}

app.serviceBusQueue('processOrderBatch', {
  connection: 'ServiceBusConnection',
  queueName: 'orders-queue',
  sdkBinding: true,
  autoCompleteMessages: false,
  cardinality: 'many',
  handler: processOrderBatch,
});
```

Key points:
- Enable sdkBinding: true and autoCompleteMessages: false to gain manual settlement control
- ServiceBusMessageContext provides both the messages array and the actions object
- Settlement actions: complete(), abandon(), deadletter(), defer()
- Application properties can be passed to abandon() for retry tracking
- Built-in helpers like messageBodyAsJson<T>() handle Buffer-to-object parsing

Full sample: serviceBusSampleWithComplete

Python (V2 Programming Model)

```python
import json
import logging
from typing import List

import azure.functions as func
import azurefunctions.extensions.bindings.servicebus as servicebus

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.service_bus_queue_trigger(arg_name="messages",
                               queue_name="orders-queue",
                               connection="SERVICEBUS_CONNECTION",
                               auto_complete_messages=False,
                               cardinality="many")
def process_order_batch(messages: List[servicebus.ServiceBusReceivedMessage],
                        message_actions: servicebus.ServiceBusMessageActions):
    for message in messages:
        try:
            order = json.loads(message.body)
            process_order(order)
            message_actions.complete(message)  # ✅ Done
        except Exception as e:
            logging.error(f"Failed {message.message_id}: {e}")
            message_actions.dead_letter(message)  # ☠️ Poison

def process_order(order):
    logging.info(f"Processing order: {order['id']}")
```

Key points:
- Uses azurefunctions.extensions.bindings.servicebus for SDK-type bindings with ServiceBusReceivedMessage
- Supports both queue and topic triggers with cardinality="many" for batch processing
- Each message exposes SDK properties like body, enqueued_time_utc, lock_token, message_id, and sequence_number

Full sample: servicebus_samples_settlement

.NET (C# Isolated Worker)

```csharp
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ServiceBusBatchProcessor(ILogger<ServiceBusBatchProcessor> logger)
{
    [Function(nameof(ProcessOrderBatch))]
    public async Task ProcessOrderBatch(
        [ServiceBusTrigger("orders-queue", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage[] messages,
        ServiceBusMessageActions messageActions)
    {
        foreach (var message in messages)
        {
            try
            {
                var order = message.Body.ToObjectFromJson<Order>();
                await ProcessOrder(order);
                await messageActions.CompleteMessageAsync(message); // ✅ Done
            }
            catch (Exception ex)
            {
                logger.LogError(ex, "Failed {MessageId}", message.MessageId);
                await messageActions.DeadLetterMessageAsync(message); // ☠️ Poison
            }
        }
    }

    private Task ProcessOrder(Order order) => Task.CompletedTask;
}

public record Order(string Id, string Product, decimal Amount);
```

Key points:
- Inject ServiceBusMessageActions directly alongside the message array
- Each message is individually settled with CompleteMessageAsync, DeadLetterMessageAsync, or AbandonMessageAsync
- Application properties can be modified on abandon to track retry metadata

Full sample: ServiceBusReceivedMessageFunctions.cs

Schedule daily recurring messages in Teams chat channels
This is my first post on this site! How do I set up a recurring message in a Teams chat channel? I want to remind my team daily at a specific time to put their project stand-up status into our project channel. I'm assuming this would be a bot? If so, how can I find the bot?

Thank you,
Susan Keithley

Hosted Containers and AI Agent Solutions
If you have built a proof-of-concept AI agent on your laptop and wondered how to turn it into something other people can actually use, you are not alone. The gap between a working prototype and a production-ready service is where most agent projects stall. Hosted containers close that gap faster than any other approach available today. This post walks through why containers and managed hosting platforms like Azure Container Apps are an ideal fit for multi-agent AI systems, what practical benefits they unlock, and how you can get started with minimal friction.

The problem with "it works on my machine"

Most AI agent projects begin the same way: a Python script, an API key, and a local terminal. That workflow is perfect for experimentation, but it creates a handful of problems the moment you try to share your work. First, your colleagues need the same Python version, the same dependencies, and the same environment variables. Second, long-running agent pipelines tie up your machine and compete with everything else you are doing. Third, there is no reliable URL anyone can visit to use the system, which means every demo involves a screen share or a recorded video.

Containers solve all three problems in one step. A single Dockerfile captures the runtime, the dependencies, and the startup command. Once the image builds, it runs identically on any machine, any cloud, or any colleague's laptop.

Why containers suit AI agents particularly well

AI agents have characteristics that make them a better fit for containers than many traditional web applications.

Long, unpredictable execution times

A typical web request completes in milliseconds. An agent pipeline that retrieves context from a database, imports a codebase, runs four verification agents in sequence, and generates a report can take two to five minutes.
Managed container platforms handle long-running requests gracefully, with configurable timeouts and automatic keep-alive, whereas many serverless platforms impose strict execution limits that agent workloads quickly exceed.

Heavy, specialised dependencies

Agent applications often depend on large packages: machine learning libraries, language model SDKs, database drivers, and Git tooling. A container image bundles all of these once at build time. There is no cold-start dependency resolution and no version conflict with other projects on the same server.

Stateless by design

Most agent pipelines are stateless. They receive a request, execute a sequence of steps, and return a result. This maps perfectly to the container model, where each instance handles requests independently and the platform can scale the number of instances up or down based on demand.

Reproducible environments

When an agent misbehaves in production, you need to reproduce the issue locally. With containers, the production environment and the local environment are the same image. There is no "works on my machine" ambiguity.

A real example: multi-agent code verification

To make this concrete, consider a system called Opustest, an open-source project that uses the Microsoft Agent Framework with Azure OpenAI to analyse Python codebases automatically. The system runs AI agents in a pipeline:

1. A Code Example Retrieval Agent queries Azure Cosmos DB for curated examples of good and bad Python code, providing the quality standards for the review.
2. A Codebase Import Agent reads all Python files from a Git repository cloned on the server.
3. Four Verification Agents each score a different dimension of code quality (coding standards, functional correctness, known error handling, and unknown error handling) on a scale of 0 to 5.
4. A Report Generation Agent compiles all scores and errors into an HTML report with fix prompts that can be exported and fed directly into a coding assistant.
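The shape of such a pipeline is simple to sketch: each stage reads a shared context and adds its results for later stages. This is an illustrative toy model, not the Microsoft Agent Framework API or the actual Opustest code; all names here are hypothetical stand-ins.

```python
from typing import Callable

Agent = Callable[[dict], dict]  # takes the shared context, returns updates

def run_pipeline(agents, context: dict) -> dict:
    """Run (name, agent) pairs sequentially; each stage extends a shared context."""
    for name, agent in agents:
        context.update(agent(context))
        context.setdefault("stages", []).append(name)
    return context

# Toy stand-ins for the real agents
retrieve = lambda ctx: {"standards": ["no bare except"]}
import_code = lambda ctx: {"files": ["app.py"]}
verify = lambda ctx: {"scores": {"coding_standards": 4}}
report = lambda ctx: {"report": f"{len(ctx['files'])} file(s), scores {ctx['scores']}"}

result = run_pipeline(
    [("retrieval", retrieve), ("import", import_code),
     ("verify", verify), ("report", report)],
    context={},
)
print(result["stages"])  # → ['retrieval', 'import', 'verify', 'report']
```

Because each stage is a plain function over shared state, the whole pipeline stays stateless between requests, which is exactly the property that makes it container-friendly.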
The entire pipeline is orchestrated by a FastAPI backend that streams progress updates to the browser via Server-Sent Events. Users paste a Git URL, watch each stage light up in real time, and receive a detailed report at the end.

The app in action

- Landing page: the default Git URL mode, ready for a repository link.
- Local Path mode: toggling to analyse a codebase from a local directory.
- Repository URL entered: a GitHub repository ready for verification.
- Stage 1: the Code Example Retrieval Agent fetching standards from Cosmos DB.
- Stage 3: the four Verification Agents scoring the codebase.
- Stage 4: the Report Generation Agent compiling the final report.
- Verification complete: all stages finished with a success banner.
- Report detail: scores and the errors table with fix prompts.

The Dockerfile

The container definition for this system is remarkably simple:

```dockerfile
FROM python:3.12-slim

RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY backend/ backend/
COPY frontend/ frontend/

RUN adduser --disabled-password --gecos "" appuser
USER appuser

EXPOSE 8000
CMD ["uvicorn", "backend.app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Twenty lines. That is all it takes to package a six-agent AI system with a web frontend, a FastAPI backend, Git support, and all Python dependencies into a portable, production-ready image. Notice the security detail: the container runs as a non-root user. This is a best practice that many tutorials skip, but it matters when you are deploying to a shared platform.
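The Server-Sent Events streaming mentioned above is just a text wire format: an optional `event:` line, a `data:` line, and a blank-line terminator. Here is a stdlib-only sketch of producing those frames (stage names are illustrative); in FastAPI you would wrap such a generator in a `StreamingResponse` with `media_type="text/event-stream"`.

```python
import json

def format_sse(data: dict, event=None) -> str:
    """Encode one Server-Sent Event frame: optional 'event:' line,
    a 'data:' line, and a blank-line terminator."""
    frame = ""
    if event:
        frame += f"event: {event}\n"
    frame += f"data: {json.dumps(data)}\n\n"
    return frame

def progress_stream(stages):
    """Yield one SSE frame per pipeline stage as it completes."""
    for i, stage in enumerate(stages, start=1):
        yield format_sse({"stage": i, "name": stage, "status": "done"},
                         event="progress")

stages = ["retrieve-examples", "import-codebase", "verify", "report"]
frames = list(progress_stream(stages))
print(frames[0])  # an 'event: progress' line followed by a JSON 'data:' line
```

The browser side consumes this with the standard `EventSource` API, which is why the frontend can light up each stage with no polling.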
From image to production in one command

With the Azure Developer CLI (azd), deploying this container to Azure Container Apps takes a single command:

azd up

Behind the scenes, azd reads an azure.yaml file that declares the project structure, provisions the infrastructure defined in Bicep templates (a Container Apps environment, an Azure Container Registry, and a Cosmos DB account), builds the Docker image, pushes it to the registry, deploys it to the container app, and even seeds the database with sample data via a post-provision hook. The result is a publicly accessible URL serving the full agent system, with automatic HTTPS, built-in scaling, and zero infrastructure to manage manually.

Microsoft Hosted Agents vs Azure Container Apps: choosing the right home

Microsoft offers two distinct approaches for running AI agent workloads in the cloud. Understanding the difference is important when deciding how to host your solution.

Microsoft Foundry Hosted Agent Service (Microsoft Foundry)

Microsoft Foundry provides a fully managed agent hosting service. You define your agent's behaviour declaratively, upload it to the platform, and Foundry handles execution, scaling, and lifecycle management. This is an excellent choice when your agents fit within the platform's conventions: single-purpose agents that respond to prompts, use built-in tool integrations, and do not require custom server-side logic or a bespoke frontend.

Key characteristics of hosted agents in Foundry:
- Fully managed execution. You do not provision or maintain any infrastructure. The platform runs your agent and handles scaling automatically.
- Declarative configuration. Agents are defined through configuration and prompt templates rather than custom application code.
- Built-in tool ecosystem. Foundry provides pre-built connections to Azure services, knowledge stores, and evaluation tooling.
- Opinionated runtime. The platform controls the execution environment, request handling, and networking.
Azure Container Apps

Azure Container Apps is a managed container hosting platform. You package your entire application (agents, backend, frontend, and all dependencies) into a Docker image and deploy it. The platform handles scaling, HTTPS, and infrastructure, but you retain full control over what runs inside the container.

Key characteristics of Container Apps:
- Full application control. You own the runtime, the web framework, the agent orchestration logic, and the frontend.
- Custom networking. You can serve a web UI, expose REST APIs, stream Server-Sent Events, or run WebSocket connections.
- Arbitrary dependencies. Your container can include any system package, any Python library, and any tooling (like Git for cloning repositories).
- Portable. The same Docker image runs locally, in CI, and in production without modification.

Why Opustest uses Container Apps

Opustest requires capabilities that go beyond what a managed agent hosting platform provides:

| Requirement | Hosted Agents (Foundry) | Container Apps |
| --- | --- | --- |
| Custom web UI with real-time progress | Not supported natively | Full control via FastAPI and SSE |
| Multi-agent orchestration pipeline | Platform-managed, limited customisation | Custom orchestrator with arbitrary logic |
| Git repository cloning on the server | Not available | Install Git in the container image |
| Server-Sent Events streaming | Not supported | Full HTTP control |
| Custom HTML report generation | Limited to platform outputs | Generate and serve any content |
| Export button for Copilot prompts | Not available | Custom frontend with JavaScript |
| RAG retrieval from Cosmos DB | Possible via built-in connectors | Direct SDK access with full query control |

The core reason is straightforward: Opustest is not just a set of agents. It is a complete web application that happens to use agents as its processing engine. It needs a custom frontend, real-time streaming, server-side Git operations, and full control over how the agent pipeline executes.
Container Apps provides all of this while still offering managed infrastructure, automatic scaling, and zero server maintenance.

### When to choose which

Choose **Microsoft Hosted Agents** when your use case is primarily conversational or prompt-driven, when you want the fastest path to a working agent with minimal code, and when the built-in tool ecosystem covers your integration needs.

Choose **Azure Container Apps** when you need a custom frontend, custom orchestration logic, real-time streaming, server-side processing beyond prompt-response patterns, or when your agent system is part of a larger application with its own web server and API surface.

Both approaches use the same underlying AI models via Azure OpenAI. The difference is in how much control you need over the surrounding application.

## Five practical benefits of hosted containers for agents

### 1. Consistent deployments across environments

Whether you are running the container locally with `docker run`, in a CI pipeline, or on Azure Container Apps, the behaviour is identical. Configuration differences are handled through environment variables, not code changes. This eliminates an entire category of "it works locally but breaks in production" bugs.

### 2. Scaling without re-architecture

Azure Container Apps can scale from zero instances (paying nothing when idle) to multiple instances under load. Because agent pipelines are stateless, each request is routed to whichever instance is available. You do not need to redesign your application to handle concurrency; the platform does it for you.

### 3. Isolation between services

If your agent system grows to include multiple services (perhaps a separate service for document processing or a background worker for batch analysis), each service gets its own container. They can be deployed, scaled, and updated independently. A bug in one service does not bring down the others.

### 4. Built-in observability

Managed container platforms provide logging, metrics, and health checks out of the box.
When an agent pipeline fails after three minutes of execution, you can inspect the container logs to see exactly which stage failed and why, without adding custom logging infrastructure.

### 5. Infrastructure as code

The entire deployment can be defined in code. Bicep templates, Terraform configurations, or Pulumi programmes describe every resource. This means deployments are repeatable, reviewable, and version-controlled alongside your application code. No clicking through portals, no undocumented manual steps.

## Common concerns addressed

### "Containers add complexity"

For a single-file script, this is a fair point. But the moment your agent system has more than one dependency, a Dockerfile is simpler to maintain than a set of installation instructions. It is also self-documenting: anyone reading the Dockerfile knows exactly what the system needs to run.

### "Serverless is simpler"

Serverless functions are excellent for short, event-driven tasks. But agent pipelines that run for minutes, require persistent connections (like SSE streaming), and depend on large packages are a poor fit for most serverless platforms. Containers give you the operational simplicity of managed hosting without the execution constraints.

### "I do not want to learn Docker"

A basic Dockerfile for a Python application is fewer than ten lines. The core concepts are straightforward: start from a base image, install dependencies, copy your code, and specify the startup command. The learning investment is small relative to the deployment problems it solves.

### "What about cost?"

Azure Container Apps supports scale-to-zero, meaning you pay nothing when the application is idle. For development and demonstration purposes, this makes hosted containers extremely cost-effective. You only pay for the compute time your agents actually use.

## Getting started: a practical checklist

If you are ready to containerise your own agent solution, here is a step-by-step approach.

### Step 1: Write a Dockerfile
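As a concrete sketch of what such a Dockerfile might contain (assuming a FastAPI app started with uvicorn; the file names and user name are illustrative):

```dockerfile
# Sketch: minimal image for a Python agent app (illustrative names).
FROM python:3.12-slim

# System-level dependency: Git, so agents can clone repositories.
RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Run as a non-root user.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Note the layer ordering: dependencies are installed before the application code is copied, so day-to-day code changes only rebuild the final layers.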
Start from an official Python base image. Install system-level dependencies (like Git, if your agents clone repositories), then your Python packages, then your application code. Run as a non-root user.

### Step 2: Test locally

Build and run the image on your machine:

```shell
docker build -t my-agent-app .
docker run -p 8000:8000 --env-file .env my-agent-app
```

If it works locally, it will work in the cloud.

### Step 3: Define your infrastructure

Use Bicep, Terraform, or the Azure Developer CLI to declare the resources you need: a container app, a container registry, and any backing services (databases, key vaults, AI endpoints).

### Step 4: Deploy

Push your image to the registry and deploy to the container platform. With azd, this is a single command. With CI/CD, it is a pipeline that runs on every push to your main branch.

### Step 5: Iterate

Change your agent code, rebuild the image, and redeploy. The cycle is fast because Docker layer caching means only changed layers are rebuilt.

## The broader picture

The AI agent ecosystem is maturing rapidly. Frameworks like Microsoft Agent Framework, LangChain, Semantic Kernel, and AutoGen make it straightforward to build sophisticated multi-agent systems. But building is only half the challenge. The other half is running these systems reliably, securely, and at scale.

Hosted containers offer the best balance of flexibility and operational simplicity for agent workloads. They do not impose the execution limits of serverless platforms. They do not require the operational overhead of managing virtual machines. They give you a portable, reproducible unit of deployment that works the same everywhere.

If you have an agent prototype sitting on your laptop, the path to making it available to your team, your organisation, or the world is shorter than you think. Write a Dockerfile, define your infrastructure, run `azd up`, and share the URL.

Your agents deserve a proper home. Hosted containers are that home.
## Resources

- Azure Container Apps documentation
- Microsoft Foundry Hosted Agents
- Azure Developer CLI (azd)
- Microsoft Agent Framework
- Docker getting started guide
- Opustest: AI-powered code verification (source code)

# Get started with NeuBird Hawkeye MCP server in Azure SRE Agent
## Integrate NeuBird Hawkeye MCP with Azure SRE Agent

### TL;DR

If your infrastructure spans multiple clouds (say, Azure and GCP, or Azure alongside any other cloud provider), investigating incidents means jumping between completely separate consoles, log systems, and monitoring stacks. Azure SRE Agent now integrates with NeuBird Hawkeye via Model Context Protocol (MCP), so you can investigate incidents across all of your clouds and monitoring tools from a single conversation.

Key benefits:

- **90-second investigations** vs 3-4 hours of manual dashboard-hopping
- **Multi-cloud support**: Azure, GCP, and other cloud providers investigated from a single conversation
- **42 MCP tools** across 7 categories for investigation, analysis, and remediation
- **Real-time streaming progress**: watch investigations unfold step-by-step (v2.0+)
- **MTTR tracking** and continuous improvement metrics

### The problem: incidents don't stay in one cloud

When an alert fires at 3 AM, your on-call engineer doesn't just need to find the problem; they need to figure out which cloud it's in. A single incident can involve an Azure Function calling a GCP Cloud Run service, with logs split across Azure Monitor and GCP Cloud Logging. Here's what that looks like:

| Challenge | Time Cost |
|---|---|
| Correlate signals across multiple monitoring tools | 30-45 minutes |
| Query logs and metrics from multiple clouds | 45-60 minutes |
| Piece together the chain of events | 30-45 minutes |
| Identify root cause and develop fixes | 60-90 minutes |
| **Total** | **3-4 hours** |

Sound familiar? "Is it the database? The cache? The load balancer? Let me check the GCP console... now Azure Monitor... now the other logging stack... wait, what time zone is this in?"
### What NeuBird Hawkeye does

NeuBird Hawkeye is an autonomous incident investigation platform that connects to your cloud providers and uses AI to:

- Investigate alerts from your monitoring tools automatically
- Query multiple data sources across cloud providers and observability platforms
- Generate detailed RCAs with incident timelines
- Provide corrective actions with ready-to-execute scripts
- Learn from your architecture through customizable instructions

Supported integrations:

| Category | Platforms |
|---|---|
| Cloud Providers | Azure, Google Cloud Platform, AWS |
| Monitoring Tools | Datadog, Grafana, Dynatrace, New Relic |
| Incident Management | PagerDuty, ServiceNow, FireHydrant, Incident.io |
| Log Aggregation | CloudWatch, Azure Monitor, Google Cloud Logging |

### How the integration works

With the new Hawkeye MCP server integration, Azure SRE Agent leverages Hawkeye's autonomous investigation capabilities through natural language conversation.

**What is Model Context Protocol (MCP)?** Model Context Protocol is an open standard that enables AI agents to securely connect to external tools and data sources. It's like a universal adapter for connecting LLMs to the real world. Your SRE Agent can now talk to Hawkeye, which talks to Azure, GCP, and other cloud platforms, all through a single conversational interface.

### 42 MCP tools across 7 categories

| Category | Tool Count | Examples |
|---|---|---|
| Projects | 5 | List, create, update, delete projects |
| Connections | 10 | Azure, GCP, and other cloud provider connections |
| Investigations | 11 | List alerts, investigate, monitor progress, RCA, follow-ups |
| Instructions | 7 | Create, test, validate, apply investigation rules |
| Analytics | 4 | MTTR, time saved, quality scores, session reports |
| Discovery | 2 | Explore resources across all project connections |
| Help | 1 | Interactive guidance system |

### Remote MCP server

Azure SRE Agent connects to the hosted Hawkeye MCP server via Streamable-HTTP transport. No local installation or Node.js required.
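Under the hood, MCP tool invocations are JSON-RPC 2.0 messages carried over the Streamable-HTTP transport. A conceptual sketch of what a client sends when calling one of the tools above (the `tools/call` shape is standard MCP; the arguments shown are illustrative, not Hawkeye's exact schema):

```python
import json

# A JSON-RPC 2.0 "tools/call" request, as defined by the MCP specification.
# The tool name comes from the Hawkeye tool list; the arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "hawkeye_list_sessions",
        "arguments": {"hours": 24, "severity": "P1"},
    },
}

# The client POSTs this body to the MCP endpoint (auth headers omitted here).
body = json.dumps(request)
```

Azure SRE Agent handles all of this wiring for you; the sketch is only to show that "adding a connector" means pointing the agent at an endpoint that speaks this protocol.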
| Mode | Best For | How It Works |
|---|---|---|
| Streamable-HTTP | All users, teams, enterprise | Connect to the hosted MCP endpoint at `https://<your-deployment-name>.app.neubird.ai/mcp` |

Supports secure authentication via email/password headers or bearer tokens (JWT).

### Example investigation walkthrough

Here's an example investigation with Azure SRE Agent + Hawkeye.

**Step 1: List uninvestigated alerts**

User: *Show me uninvestigated alerts from the last 24 hours*

Hawkeye response:

```
Hawkeye Projects:
- Production (UUID: b4b902d4-a540-466b-96cf-4ec2f734b010)

Uninvestigated Alerts (Production):
1. [P1] High CPU usage on api-service pods - 3 instances affected
2. [P2] Database connection pool exhausted - payment-service
3. [P3] Increased error rate in user-authentication endpoint
```

**Step 2: Investigate an alert**

User: *Investigate the high CPU alert*

The agent calls `hawkeye_investigate_alert` and streams real-time progress (v2.0+):

```
Preparing telemetry sources [2s]
Loading investigation context [3s]
Classifying datasources (Step 1) [18s]
Fine-tuning datasources [26s]
Consulting telemetry (Query 7) [1m 6s]
Analyzing telemetry data (Analysis 12) [1m 13s]
Summarizing results [1m 29s]
Investigation completed! Here's what I found...
```

**Step 3: Get root cause analysis**

User: *Show me the root cause analysis*

Hawkeye provides a comprehensive RCA.

Incident summary:

| Field | Value |
|---|---|
| Severity | P1 |
| Duration | 45 minutes |
| Status | Resolved |
| Affected Services | api-service (3 pods) |

Root cause: a memory leak in the api-service v2.3.1 deployment caused garbage collection pressure, leading to CPU exhaustion. The leak originated from unclosed database connections in the /users/search endpoint.
Timeline:

| Time | Event |
|---|---|
| 14:00 UTC | Deployment of api-service v2.3.1 completed |
| 14:15 UTC | Memory usage begins climbing (500MB → 1.2GB) |
| 14:30 UTC | GC pressure increases, CPU spikes to 95%+ |
| 14:35 UTC | Alert fired: High CPU usage |
| 14:45 UTC | Auto-scaling triggered (3 → 6 pods) |
| 14:50 UTC | Investigation identified unclosed connections |
| 14:55 UTC | Rollback initiated to v2.3.0 |

Corrective actions. Immediate fix:

```shell
# Rollback to previous stable version
kubectl set image deployment/api-service api-service=api-service:v2.3.0

# Force restart to clear leaked connections
kubectl rollout restart deployment/api-service
```

Preventive measures:

- Add connection pool monitoring to your cloud monitoring dashboard
- Implement database connection lifecycle tests in the CI pipeline
- Add memory leak detection to staging environment tests

Time saved:

| Metric | Value |
|---|---|
| Manual investigation estimate | 3-4 hours |
| Hawkeye investigation time | 90 seconds |
| Time saved | ~3.5 hours |

**Step 4: Follow-up questions**

The conversation doesn't stop at the RCA. You can ask intelligent follow-ups:

User: *Has this happened before?*

Agent: Checking investigation history... Yes, a similar pattern was detected 2 weeks ago in the same service. The previous incident was also caused by unclosed database connections, but in a different endpoint (/orders/list). Recommendation: implement connection lifecycle testing across all endpoints.

User: *What tests should we add to catch this?*

Agent: Based on the RCA, I recommend:

1. Connection pool exhaustion test:
   - Simulate high request volume to database-heavy endpoints
   - Monitor for unclosed connections over 1000 requests
2. Memory leak detection:
   - Run load tests with memory profiling enabled
   - Alert if memory growth exceeds 20% over a 10-minute baseline
3. Integration test for connection lifecycle:
   - Verify all database connections are properly closed
   - Check connection pool statistics after each test run

### Key capabilities

**1. Multi-cloud investigation**

Hawkeye unifies investigation across all your connected infrastructure.
Example scenario: an alert fires about an Azure Function returning errors. Hawkeye can:

- Query Azure Monitor for function execution metrics
- Check GCP Cloud Logging for upstream API errors on the GCP side
- Review GCP Cloud Monitoring metrics for the dependent Cloud Run service
- Correlate with recent deployments in GitHub Actions or Azure DevOps

"Finally, one place to investigate instead of 7 browser tabs!"

**2. Instruction management**

Customize how Hawkeye investigates incidents by creating instructions:

| Instruction Type | Purpose | Example |
|---|---|---|
| SYSTEM | Provide architecture context | "We use microservices on Kubernetes with PostgreSQL and Redis" |
| FILTER | Reduce investigation noise | "Only investigate P1 and P2 incidents" |
| RCA | Guide investigation steps | "For database issues, check slow queries and connection pools first" |
| GROUPING | Group related alerts | "Group alerts from the same service within 5 minutes" |

Instruction testing workflow. Before deploying instructions to production, test them on past investigations:

| Step | Action | Tool |
|---|---|---|
| 1 | Validate content | `hawkeye_validate_instruction` |
| 2 | Apply to test session | `hawkeye_apply_session_instruction` |
| 3 | Rerun investigation | `hawkeye_rerun_session` |
| 4 | Compare RCAs | Manual review |
| 5 | Measure improvement | Check quality score |
| 6 | Deploy if better | `hawkeye_create_project_instruction` |

Note: test instruction changes on historical data before applying them to live investigations. No more "oops, that filter was too aggressive!"

**3. Analytics and continuous improvement**

Track the effectiveness of your incident response process:

| Metric | What It Measures |
|---|---|
| MTTR | Mean Time to Resolution |
| Time Saved | Efficiency gains vs manual investigation |
| Quality Score | Accuracy and completeness of RCAs |
| Noise Reduction | Percentage of duplicate/grouped alerts |

Use cases for analytics:

- Justify investment in SRE tooling to leadership
- Demonstrate continuous improvement over time
- Identify patterns in recurring incidents
- Measure impact of instruction changes

**4. Proactive investigation**

You don't need an alert to investigate. Create manual investigations for proactive analysis:

User: *Investigate potential memory leak in user-api pods. Memory usage increased from 500MB to 1.2GB between 8am-10am UTC today.*

Hawkeye will:

- Query metrics for the specified time range
- Correlate with deployment events
- Check for similar patterns in the past
- Provide root cause analysis and recommendations

When to use proactive investigation:

| Use Case | Example |
|---|---|
| Pre-production testing | "Investigate performance regression in staging" |
| Performance analysis | "Why did latency increase after the last deploy?" |
| Capacity planning | "Analyze memory growth trends over the past month" |
| Post-incident deep dive | "What else happened during that outage?" |

### Setup guide

Prerequisites:

- Azure SRE Agent resource
- Active Hawkeye account (contact NeuBird to get started)
- At least one connected cloud provider in Hawkeye (Azure, GCP, etc.)

**Step 1: Add the remote MCP connector**

1. Navigate to your SRE Agent at sre.azure.com (e.g., `https://sre.azure.com/agents/subscriptions/3eaf90b4-f4fa-416e-a0aa-ac2321d9decb/resourceGroups/sre-agent/providers/Microsoft.App/agents/dbandaru-pagerduty`)
2. Go to Builder > Connectors
3. Click Add connector > MCP server (User provided connector)

| Field | Value |
|---|---|
| Name | hawkeye-mcp |
| Connection type | Streamable-HTTP |
| URL | `https://<your-deployment-name>.app.neubird.ai/mcp` |
| Authentication | Custom headers |

Authentication headers:

| Header | Value |
|---|---|
| X-Hawkeye-Email | Your Hawkeye email |
| X-Hawkeye-Password | Your Hawkeye password |

Or use a bearer token (JWT) for CI/CD:

| Header | Value |
|---|---|
| Authorization | Bearer &lt;your-jwt-token&gt; |

To obtain a bearer token:

```shell
curl -s -X POST "https://<your-deployment-name>.app.neubird.ai/api/v1/user/login" \
  -H "Content-Type: application/json" \
  -d '{"email": "your@email.com", "password": "your-password"}' \
  | jq -r '.access_token'
```

**Step 2: Create a Hawkeye skill**

After adding the connector, create a skill that knows how to use the Hawkeye tools.
The skill has a system prompt tuned for incident investigation and a reference to your MCP connector.

1. In the left navigation, select Builder > Skills
2. Click Add skill
3. Paste the following YAML configuration
4. Click Save

```yaml
api_version: azuresre.ai/v1
kind: AgentConfiguration
metadata:
  owner: your-team@contoso.com
  version: "1.0.0"
spec:
  name: HawkeyeInvestigator
  display_name: Hawkeye Incident Investigator
  system_prompt: |
    You are an incident investigation specialist with access to NeuBird
    Hawkeye's autonomous investigation platform.

    ## Capabilities

    ### Finding alerts
    - List uninvestigated alerts from the last N hours/days
    - Filter alerts by severity (P1, P2, P3, P4)
    - Search alerts by keyword or service name

    ### Running investigations
    - Investigate existing alerts by alert ID
    - Create manual investigations for proactive analysis
    - Monitor investigation progress in real-time

    ### Root cause analysis
    - Retrieve detailed RCA reports with incident timelines
    - View chain of thought and reasoning
    - Get data sources and queries consulted
    - Ask follow-up questions about incidents

    ### Remediation
    - Execute corrective action scripts
    - Implement preventive measures
    - Generate post-mortem documentation

    ### Project management
    - List and switch between Hawkeye projects
    - View connected data sources and sync status
    - Create and manage investigation instructions
    - Get organization-wide incident analytics (MTTR, time saved)

    ## Best practices
    - Start with uninvestigated alerts from the last 24 hours
    - Investigations typically complete in 30-90 seconds
    - First investigation may take 5-10 minutes while connections sync
    - Review corrective actions before executing

    ## Permissions
    All investigations use the connected data sources in your Hawkeye
    project. Ensure connections are properly synced before investigating.
  mcp_connectors:
    - hawkeye-mcp
  handoffs: []
```

The `mcp_connectors` field references the connector name from Step 1. This gives the skill access to all 42 Hawkeye tools.
Customizing the skill: edit the system prompt to match your team's workflow. For example, add instructions like "Always check P1 alerts first" or "Include deployment history in every investigation." The YAML above is a starting point.

**Step 3: Test the integration**

1. Open a chat session with your SRE Agent
2. Type /agent and select HawkeyeInvestigator
3. Try these prompts:
   - Show me uninvestigated alerts from the last 24 hours
   - List all Hawkeye projects and their connections
   - Investigate the first P1 alert
   - Show me the root cause analysis
   - What corrective actions are recommended?
   - Has this happened before?

### Security

Authentication methods:

| Method | Headers | Best For |
|---|---|---|
| Email/Password | X-Hawkeye-Email + X-Hawkeye-Password | Simple setup, most use cases |
| Bearer Token (JWT) | Authorization: Bearer &lt;token&gt; | CI/CD pipelines, OAuth, enterprise |

Data security:

- Encrypted traffic: HTTPS with TLS 1.2+
- Read-only access to cloud providers and monitoring tools
- SOC 2 compliant: secure data processing environment
- RBAC support: role-based access at project level

Access controls:

- Each user authenticates with their own Hawkeye credentials
- Investigations scoped to connected data sources in your project
- Respects existing IAM and RBAC policies

Security note: store credentials in environment variables, never in config files. Hawkeye only needs read access to investigate.
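For scripted access, reading those credentials from environment variables might look like this sketch (the header names come from the table above; the environment variable names are illustrative, not a Hawkeye convention):

```python
import os

def hawkeye_auth_headers() -> dict:
    """Build MCP auth headers from environment variables, never from config files."""
    token = os.environ.get("HAWKEYE_TOKEN")  # illustrative variable name
    if token:
        # JWT bearer token, e.g. for CI/CD pipelines
        return {"Authorization": f"Bearer {token}"}
    # Fall back to the email/password header pair
    return {
        "X-Hawkeye-Email": os.environ["HAWKEYE_EMAIL"],
        "X-Hawkeye-Password": os.environ["HAWKEYE_PASSWORD"],
    }

# Example (illustrative values):
os.environ["HAWKEYE_TOKEN"] = "example-jwt"
headers = hawkeye_auth_headers()
```

Preferring the token when both are set mirrors the table's guidance: bearer tokens for automation, email/password for interactive setup.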
### Available MCP tools (42)

Project tools (5):

| Tool | Description |
|---|---|
| hawkeye_list_projects | List all Hawkeye projects |
| hawkeye_create_project | Create a new project |
| hawkeye_get_project_details | Get project configuration |
| hawkeye_update_project | Update project name or description |
| hawkeye_delete_project | Delete a project (requires confirmation) |

Connection tools (10):

| Tool | Description |
|---|---|
| hawkeye_list_connections | List all available connections |
| hawkeye_create_aws_connection | Create AWS connection with IAM role |
| hawkeye_create_datadog_connection | Create Datadog connection with API keys |
| hawkeye_wait_for_connection_sync | Wait for connection to reach SYNCED state |
| hawkeye_add_connection_to_project | Link connections to a project |
| hawkeye_list_project_connections | List connections for a specific project |
| + 4 additional tools | Azure, GCP, and other connections |

Investigation tools (11):

| Tool | Description |
|---|---|
| hawkeye_list_sessions | List investigation sessions with filtering |
| hawkeye_investigate_alert | Investigate an alert (supports real-time streaming) |
| hawkeye_create_manual_investigation | Create investigation from custom prompt (supports streaming) |
| hawkeye_get_investigation_status | Get real-time progress with step-by-step breakdown |
| hawkeye_get_rca | Retrieve root cause analysis |
| hawkeye_continue_investigation | Ask follow-up questions on completed investigations |
| hawkeye_get_chain_of_thought | View investigation reasoning steps |
| hawkeye_get_investigation_sources | List data sources consulted |
| hawkeye_get_investigation_queries | List queries executed during investigation |
| hawkeye_get_follow_up_suggestions | Get suggested follow-up questions |
| hawkeye_get_rca_score | Get investigation quality score |

Instruction tools (7):

| Tool | Description |
|---|---|
| hawkeye_list_project_instructions | List project instructions with type/status filtering |
| hawkeye_create_project_instruction | Create SYSTEM/FILTER/RCA/GROUPING instruction |
| hawkeye_validate_instruction | Validate instruction content before applying |
| hawkeye_apply_session_instruction | Apply instruction to session for testing |
| hawkeye_rerun_session | Rerun investigation with updated instructions |
| + 2 additional tools | Update and delete instructions |

Analytics tools (4):

| Tool | Description |
|---|---|
| hawkeye_get_incident_report | Get organization-wide analytics (MTTR, time saved) |
| hawkeye_inspect_session | Get session metadata |
| hawkeye_get_session_report | Get summary reports for multiple sessions |
| hawkeye_get_session_summary | Get detailed analysis and scoring for a session |

Discovery tools (2):

| Tool | Description |
|---|---|
| hawkeye_discover_project_resources | Explore available resources across all project connections |
| hawkeye_list_connection_resource_types | Get resource types for connection type and telemetry type |

Help tools (1):

| Tool | Description |
|---|---|
| hawkeye_get_guidance | Interactive help system with embedded knowledge base |

### Use cases

**1. Faster incident response**

| Phase | Before Hawkeye | After Hawkeye |
|---|---|---|
| Alert detection | Alert notification | Alert notification |
| Investigation | Log into multiple cloud consoles | Ask: "Investigate this alert" |
| Correlation | Manual log/metric analysis | Automated multi-source query |
| Root cause | 2-4 hours | 2-3 minutes |
| Remediation | Write runbook, execute | Copy/paste bash script, execute |

Result: roughly 95% reduction in MTTR for common incident types.

**2. Knowledge retention**

The problem:

- Senior engineer leaves
- Tribal knowledge lost
- Junior engineers struggle with the same issues

The Hawkeye solution:

- Capture investigation patterns through instructions
- Preserve institutional knowledge in reusable rules
- Train new engineers with past investigation history

**3. Reduced toil**

Common repetitive investigations:

| Issue Type | Manual Time | Hawkeye Time | Frequency |
|---|---|---|---|
| Database connection issues | 2 hours | 90 seconds | 3x/week |
| Pod restart loops | 1.5 hours | 60 seconds | 5x/week |
| Deployment failures | 3 hours | 2 minutes | 2x/week |

Result: engineers spend more time on prevention and architecture, less on firefighting.

**4. Cross-team collaboration**

Platform team provides:

- SYSTEM instructions describing architecture
- FILTER instructions for noise reduction
- RCA instructions for common patterns

Application team benefits:

- Investigations leverage platform context
- No need for deep infrastructure knowledge
- Consistent incident response across teams

**5. Continuous learning**

Track and improve over time:

| Month | MTTR | Time Saved | Quality Score | Noise Reduction |
|---|---|---|---|---|
| Month 1 | 45 min | 15 hours | 7.2/10 | 20% |
| Month 3 | 12 min | 45 hours | 8.5/10 | 55% |
| Month 6 | 3 min | 90 hours | 9.1/10 | 78% |

Result: data-driven improvement of incident response processes.

### Next steps

The Hawkeye MCP integration is available now for all Azure SRE Agent customers.

Get started:

1. Contact NeuBird to set up a Hawkeye account
2. Connect your cloud providers (Azure, GCP, etc.)
3. Add the Hawkeye MCP connector to your SRE Agent
4. Create a Hawkeye skill in Builder > Skills
5. Start investigating!

Learn more:

- Hawkeye MCP documentation
- Tool reference (all 42 tools)
- Advanced workflows
- hawkeye-mcp-server on npm
- NeuBird help documentation
- Azure SRE Agent MCP integration guide
- NeuBird AI

Need OAuth support? Contact NeuBird support: support@neubird.ai

### Try it out

Ready to get started? Quick start checklist:

1. Sign up for Hawkeye at https://neubird.ai/contact-us/
2. Connect your cloud infrastructure (Azure, GCP, etc.)
3. Install the MCP connector in Azure SRE Agent
4. Create a Hawkeye skill in Builder > Skills
5. Test with "Show me uninvestigated alerts"
6. Investigate your first incident in under 2 minutes!

Questions? Drop a comment below or reach out to the Azure SRE Agent team. Want to see Hawkeye in action? Request a demo from NeuBird: https://neubird.ai/contact-us/

Azure SRE Agent helps SRE teams build automated incident response workflows. Learn more at aka.ms/sreagent.

Tags: #Azure #SREAgent #NeuBird #Hawkeye #MCP #IncidentResponse #DevOps #SRE #AI #Automation #CloudOps #MTTR #RootCauseAnalysis

# Migrating Ant Builds to Maven with GitHub Copilot app modernization
Many legacy Java applications still rely on Apache Ant for building, packaging, and dependency management. While Ant remains flexible, it lacks the structured lifecycle, dependency resolution, and ecosystem support that modern build tools like Maven provide. Migrating from Ant to Maven improves maintainability, build reproducibility, and IDE compatibility, and enables modern Java workflows such as dependency upgrades, framework updates, and containerization.

GitHub Copilot app modernization accelerates this transition by analyzing an Ant-based project, generating a migration plan, and applying transformations to produce a Maven-based build aligned with modern Java tooling.

## What GitHub Copilot app modernization supports

GitHub Copilot app modernization can help teams:

- Detect Ant build scripts (build.xml) and related custom task files
- Recommend Maven project structure and lifecycle alignment
- Generate an initial pom.xml with matched project metadata
- Map Ant targets to Maven phases where possible
- Identify external dependencies and translate them into Maven coordinates
- Migrate resource directories and compiled output locations
- Surface code or configuration changes required for a Maven-driven build
- Validate the new Maven configuration through iterative builds

This modernizes the build foundation before performing other upgrades such as JDK, Spring, Jakarta, or container-readiness transformations.

## Project analysis

When you open an Ant-based project in Visual Studio Code or IntelliJ IDEA, GitHub Copilot app modernization performs an analysis:

- Detects build.xml and auxiliary Ant scripts
- Identifies classpaths defined across Ant targets
- Evaluates manually referenced JARs in lib directories
- Inspects source layout and output directories
- Determines project metadata such as groupId, artifactId, and version
- Determines whether frameworks or libraries require updates before Maven migration

This analysis forms the basis of the migration plan.
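For reference, the kind of build.xml this analysis consumes might look like the following simplified sketch (project and directory names are illustrative, not from a real migration):

```xml
<!-- Sketch of a typical Ant build the tool analyzes (illustrative names). -->
<project name="legacy-app" default="package" basedir=".">
  <property name="src.dir" value="src"/>
  <property name="build.dir" value="build"/>
  <property name="lib.dir" value="lib"/>

  <!-- Classpath built from manually managed JARs in lib/ -->
  <path id="compile.classpath">
    <fileset dir="${lib.dir}" includes="*.jar"/>
  </path>

  <!-- Roughly corresponds to Maven's compile phase -->
  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes"
           classpathref="compile.classpath" includeantruntime="false"/>
  </target>

  <!-- Roughly corresponds to Maven's package phase -->
  <target name="package" depends="compile">
    <jar destfile="${build.dir}/legacy-app.jar" basedir="${build.dir}/classes"/>
  </target>
</project>
```

The manually managed `lib/` classpath and the hand-wired target ordering are exactly the pieces the migration replaces with Maven's dependency management and lifecycle.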
## Migration plan generation

GitHub Copilot app modernization produces a migration plan that outlines:

- The recommended Maven project layout (src/main/java, src/test/java, resources directories)
- A generated pom.xml with discovered dependencies
- Mapped Ant targets to Maven lifecycle phases (compile, test, package)
- Plugin configurations needed to replicate custom Ant functionality
- Suggested removal of lib directory JARs in favor of dependency management
- Notes on unsupported or manual-review areas (custom Ant tasks, script-heavy targets, specialized packaging logic)

You can review and adjust the plan before proceeding.

## Automated transformations

Once confirmed, GitHub Copilot app modernization applies targeted updates:

- Generates the project's pom.xml
- Migrates dependency JAR references to Maven dependency entries
- Moves source and resource files into a Maven-compatible structure
- Updates ignore files, build output directories, and paths
- Introduces common Maven plugins for compiler, surefire, assembly, or shading
- Suggests replacements for custom Ant tasks if built-in Maven plugins exist

This automated work removes most of the manual lifting normally required for Ant → Maven transitions.

## Build & fix iteration

After applying the transformations, the tool attempts to build the new Maven project:

1. Runs the build
2. Captures missing dependencies, incorrect scopes, or misaligned plugin versions
3. Suggests targeted fixes
4. Applies adjustments and rebuilds
5. Iterates until the project compiles or no further automated fixes are possible

This helps stabilize the migration quickly.

## Security & behavior validation

GitHub Copilot app modernization also performs additional validation:

- Flags CVEs introduced or resolved through dependency discovery
- Alerts you to behavioral differences between Ant-driven and Maven-driven builds
- Highlights test failures, packaging differences, or altered classpaths that may need review

These findings allow developers to refine the migration safely.
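To make the target of the migration concrete, a generated pom.xml skeleton of the kind described above might look like this (a sketch; the coordinates and the sample dependency are illustrative, not output from the tool):

```xml
<!-- Sketch of a generated pom.xml (illustrative coordinates). -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Metadata recovered during project analysis -->
  <groupId>com.example.legacy</groupId>
  <artifactId>legacy-app</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <!-- A lib/ JAR translated to managed Maven coordinates -->
  <dependencies>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.16.1</version>
    </dependency>
  </dependencies>

  <!-- An Ant javac target mapped to the compile phase via the compiler plugin -->
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <release>17</release>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```

Once a skeleton like this builds cleanly, the lib directory and the hand-maintained Ant targets can be retired.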
## Expected output

After the migration, you can expect:

- A newly generated and fully structured Maven project
- A populated pom.xml with dependencies, plugins, and metadata
- An updated project layout aligned with Maven standards
- Removed or deprecated Ant build files where appropriate
- Aligned dependency versions ready for further modernization
- A summary file detailing build changes, dependency mappings, code or config adjustments, and remaining manual review items

## Developer responsibilities

While GitHub Copilot app modernization automates the mechanical migration from Ant to Maven, developers remain responsible for:

- Reviewing tests and build artifacts for behavioral differences
- Validating packaging steps for WAR/EAR/JAR outputs
- Replacing complex custom Ant scripts with proper Maven plugins
- Verifying deployment and CI workflows dependent on Ant build logic
- Confirming integration points that rely on Ant-specific tasks or ordering

Once validated, the Maven-based structure becomes a strong foundation for further modernization such as JDK upgrades, Spring migration, Jakarta adoption, and containerization.

## Learn more

For project setup and the complete modernization workflow, refer to the Microsoft Learn guide for upgrading Java projects with GitHub Copilot app modernization: Quickstart: Upgrade a Java Project with GitHub Copilot App Modernization | Microsoft Learn

# Is it possible to migrate Windows 365 between two Entra ID/MS365 tenants?
Hello, we're merging two companies, and as part of this merger we want to migrate one Entra/MS365 tenant to another. Migrating mailboxes, OneDrive, SPO sites, and other 365 services is no problem for us, but we'd also like to migrate ~40 Windows 365 instances (Entra-joined, not hybrid). Is this possible? Regular workstations can be migrated without a wipe using third-party services (like PowerSyncPro and similar), but in this case these are VMs managed by the Windows 365 service.

# Copy time and users name when copy paste conversation
As far as I can see, this was removed because some people copy-paste code directly into production systems, where timestamps and usernames caused problems. However, in the normal world it is not uncommon to quote, even between organisations or in mail, and then you REALLY REALLY need to have the time and user name. This was also the default until recently, and currently everyone is sending screenshots, which of course is not a good solution. In order to make everyone happy, add an option in settings to not copy the username/time, and leave it unset by default; those who need it will find it for sure, and the rest will get the expected "what you mark is what you copy" behavior.