# From Prompt to Production: Open in VS Code for Terraform in Azure Copilot
We're excited to introduce a new step in the Terraform on Azure experience: Open in VS Code, now available directly from Azure Copilot in the Azure Portal. This capability helps you move seamlessly from AI-generated Terraform code to real Azure deployments, within a connected, guided workflow designed for enterprise scenarios.

## Why This Matters

Infrastructure as Code with Terraform is powerful, but moving from generated configuration to a deployed environment typically involves multiple tools and handoffs. Teams need to understand Terraform state, work with remote backends, and integrate their code into version-controlled CI/CD pipelines, often backed by Terraform Cloud or Azure-native backends in enterprise environments.

Open in VS Code brings these steps together. It bridges the gap between AI-assisted authoring in the Azure Portal and the real-world workflows required to validate, manage state, and deploy infrastructure with confidence.

## Continue Your Workflow in VS Code

With Azure Copilot, you can describe your infrastructure in natural language and generate Terraform configurations in seconds. For example:

> "Create an Azure Container App using Terraform with a managed environment, Log Analytics enabled, and a system-assigned managed identity to securely pull images from Azure Container Registry."

Copilot generates the Terraform configuration for you. From there, you can select Open full view to enter a full-screen Terraform editor, and then choose Open in VS Code to launch the configuration in an Azure-hosted VS Code environment. There's no need to download files or set up a local development environment. VS Code for the web opens with Azure authentication already configured, along with commonly used extensions, so you can immediately focus on refining, validating, and preparing your infrastructure for deployment.
## Built-in Guidance for Real Deployments

Beyond editing, the VS Code experience includes built-in, step-by-step guidance to help you deploy your Terraform configuration into your own Azure environment, whether you're experimenting or preparing for production. Because Terraform relies on state management, the workflow starts by helping you choose and configure a backend.

### Backend Options

- Option 1: Azure Storage account as a remote backend. A natural fit for Azure-native and enterprise environments. The experience guides you through creating or selecting a storage account and configuring Terraform to store state securely in Azure.
- Option 2: HCP Terraform (Terraform Cloud) as a remote backend. Ideal for teams already using Terraform Cloud. The guided flow helps you authenticate, connect to an existing organization and workspace, and generate the required backend configuration directly into your Terraform files.
- Option 3: Temporary workspace for quick validation. Designed for learning and experimentation. You can run `terraform plan` and `terraform apply` directly in the Azure workspace with temporary state, without committing to a long-term backend; ideal for quick validation, but not intended for production use.

Each option includes an end-to-end walkthrough, so you can complete backend setup and run Terraform commands without leaving the VS Code environment or searching through external documentation.

## Connecting Code, State, and Deployment

This experience connects three essential parts of the Terraform workflow:

- AI-assisted code generation in Azure Portal Copilot
- Interactive editing and guided execution in VS Code for the web
- Flexible backend options for managing Terraform state

Together, these pieces make it easier to move from idea to infrastructure in a structured, supported way, whether you're new to Terraform or managing production workloads with established CI/CD pipelines.
## Available Now, and What's Next

The Open in VS Code experience for Terraform is now in public preview in Azure Portal Copilot. We're continuing to invest in this workflow, including clearer deployment guidance, future integration with GitHub Actions and other CI/CD pipelines, and deeper enhancements to the full-screen Terraform editor experience.

If you haven't tried it yet, generate a Terraform configuration with Azure Copilot and open it in VS Code to go from prompt to production end to end in one connected workflow.

# From Test Cases to Trusted Automation: Scaling Enterprise Quality with GitHub Copilot
## Automation First, But Trust Is Earned

Enterprise QA teams today are automation-led by default. Regression suites run daily, API tests validate integrations, and UI automation protects critical workflows. Yet many teams still struggle with:

- Automation suites that lag behind changing requirements
- Brittle regression tests producing false failures
- High effort spent on maintaining, refactoring, and rewriting tests
- Limited time for testers to think deeply about risk and coverage

Automation creates speed, but trust is built only when automation stays relevant, maintainable, and aligned to business intent. That is where AI-assisted workflows started to play a role: not to replace automation engineers, but to remove friction from automation execution and evolution.

## GitHub Copilot as an Automation Accelerator

GitHub Copilot proved most effective when used as a support system for automation teams, not a replacement for expertise.

### Faster Automation Creation Without Losing Intent

Automation engineers often spend significant time writing boilerplate code: test scaffolding, assertions, setup, and repetitive patterns. Copilot helped accelerate this phase by:

- Generating consistent test skeletons
- Assisting with repetitive automation logic
- Suggesting assertions aligned to test intent

This allowed engineers to focus on what needed to be validated, not how fast they could type it.

### Improving Maintainability of Automation Suites

At enterprise scale, the true cost of automation is maintenance. Copilot helped reduce this burden by:

- Accelerating refactoring of existing test code
- Making automation scripts more readable and standardized
- Supporting quicker updates when requirements changed

As a result, regression suites stayed healthier and more reliable, directly improving release confidence.

### Strengthening Regression Confidence

Automation is valuable only when it can be trusted during regression cycles.
By reducing effort spent on maintaining and updating tests, Copilot indirectly strengthened regression stability, ensuring automation remained aligned with evolving functionality. Importantly, every AI suggestion was reviewed, validated, and owned by humans. Automation logic remained intentional, deterministic, and compliant with enterprise standards.

## Automation at Scale: Where Quality Is Really Won or Lost

As automation grows across releases and teams, quality risks move upstream. The question stops being "Do we have automation?" and becomes "Can we trust what automation is telling us?"

This is where quality engineering truly matters. By using Copilot to lower the mechanical overhead of automation, QA engineers could invest more time in:

- Identifying risk-based test coverage gaps
- Improving negative and edge-case scenarios
- Ensuring UI, API, and integration automation complemented each other
- Designing automation that reflected real business flows

Automation stopped being a maintenance burden and became a strategic quality asset.

## The Real Mindset Shift for QA Teams

The biggest impact was not technical; it was cultural. Instead of spending the majority of their time creating and fixing automation scripts, QA engineers could shift their focus toward:

- Test design strategy
- Regression optimization
- Failure analysis and pattern recognition
- Cross-team conversations on quality risks

AI didn't reduce QA effort. It redirected effort to higher-value quality ownership. This is what modern QA leadership looks like: not writing more tests, but ensuring the right tests exist, run reliably, and protect customer trust.

## Responsible AI Was Non-Negotiable

In an enterprise context, automation quality is inseparable from governance and responsibility.
Clear guardrails were essential:

- No blind acceptance of AI-generated automation
- Human review for every test case and assertion
- Awareness of security, data sensitivity, and compliance
- Using Copilot as an assistant, not an authority

This ensured automation quality improved without compromising trust or control.

## Final Thoughts: Automation Builds Speed, Trust Builds Confidence

Automation enables scale. Test design ensures coverage. Trust is built when both evolve together. GitHub Copilot did not replace automation skills on our enterprise project; it amplified them. By removing friction from test creation and maintenance, it allowed automation to scale responsibly and enabled QA teams to focus on what truly matters: confidence in every release.

The future of quality engineering is not manual vs. automation. It is automation-led, AI-assisted, and human-governed quality. That is how trust is built at enterprise scale.

## Microsoft Learn References on Automation and Quality Engineering

The following Microsoft Learn resources provide authoritative guidance on automation-led quality engineering, test strategy, and building trust at enterprise scale:

- Architecture strategies for testing - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Architecture strategies for designing a reliability testing strategy - Microsoft Azure Well-Architected Framework | Microsoft Learn
- What is Azure Test Plans? Manual, exploratory, and automated test tools - Azure Test Plans | Microsoft Learn

# Azure/AzVerify

Your Azure diagram, your Bicep templates, and your live environment are three separate sources of truth. They can drift apart. AzVerify gives GitHub Copilot the skills to connect them.

# Turning GitHub Copilot into a "Best Practices Coach" with Copilot Spaces + a Markdown Knowledge Base
## Why Copilot Spaces + Markdown repos work so well

When you ask Copilot generic questions ("How should we log errors?" "What's our API versioning approach?"), the model will often respond with reasonable defaults. But reasonable defaults are not the same as your standards.

Copilot Spaces solve the context problem by allowing you to attach a curated set of sources (files, folders, repos, PRs/issues, uploads, free text) plus explicit instructions, so Copilot answers in the context of your team's rules and artifacts. Spaces can be shared with your team and stay updated as the underlying GitHub content changes, so your "best practices coach" stays evergreen.

## The architecture (high level)

Here's the mental model:

- Engineering knowledge base repo: A dedicated repo containing your standards as Markdown (coding style, architecture decisions, security rules, testing conventions, examples, templates).
- Copilot Space "Engineering Standards Coach": A Space that attaches the knowledge base repo (or key folders/files within it), optionally your main application repo(s), and a short set of "rules of engagement" (instructions).
- In-repo reinforcement (optional but powerful): Custom instruction files (repo-wide + path-specific) and prompt files (slash commands) inside your production repos to standardize behavior and workflows.
## Step 1: Create a knowledge base repo (Markdown-first)

Create a repo such as:

- engineering-knowledge-base
- platform-playbook
- org-standards

A practical starter structure:

```text
engineering-knowledge-base/
  README.md
  standards/
    coding-style.md
    logging.md
    error-handling.md
    performance.md
  security/
    secure-coding.md
    secrets.md
    threat-modeling.md
  architecture/
    overview.md
    adr/
      0001-service-boundaries.md
      0002-api-versioning.md
  testing/
    unit-testing.md
    integration-testing.md
    contract-testing.md
  templates/
    pr-review-checklist.md
    api-design-checklist.md
    definition-of-done.md
```

Tip: Keep these docs opinionated, concrete, and example-heavy. Copilot works best when it can point to specific patterns rather than abstract principles.

## Step 2: Create a Copilot Space and attach your sources

Create a space, name it, choose an owner (personal or organization), then add sources and instructions. Inside the Space, add two types of context: instructions (how Copilot should behave) and sources (your actual code and docs).

### 2.1 Instructions (how Copilot should behave in this Space)

Example instructions you can paste:

```text
You are the Engineering Standards Coach for this organization.

Goals:
- Recommend solutions that follow our standards in the attached knowledge base.
- When proposing code, align with our logging, error-handling, security, and testing guidelines.
- When uncertain, ask for the missing repo context or point to the exact standard that applies.

Output format:
- Start with the standard(s) you are applying (with a link or filename reference).
- Then provide the recommended implementation.
- Include a lightweight checklist for reviewers.
```
### 2.2 Sources (your real "knowledge base")

Attach:

- The knowledge base repo (or just the folders that matter)
- Your main code repo(s) (or select folders)
- PR checklist and Definition of Done templates
- Key architecture docs, runbooks, or troubleshooting guides

## Step 3 (optional): Add instruction files to your production repos

Spaces are excellent for curated context and team-wide "ask me anything about our standards." But you can reinforce consistency directly inside each repo by adding custom instruction files.

### 3.1 Repo-wide instructions (.github/copilot-instructions.md)

Create `your-app-repo/.github/copilot-instructions.md`:

```markdown
# Repository Copilot Instructions

## Tech stack
- Language: TypeScript (strict)
- Framework: Node.js + Express
- Testing: Jest
- Lint/format: ESLint + Prettier

## Engineering rules
- Use structured logging as defined in /docs/logging.md
- Never log secrets or raw tokens
- Prefer small, composable functions
- All new endpoints must include: input validation, authz checks, unit tests, and consistent error handling

## Build & test
- Install: npm ci
- Test: npm test
- Lint: npm run lint
```

### 3.2 Path-specific instructions (.github/instructions/*.instructions.md)

Create `your-app-repo/.github/instructions/security.instructions.md`:

```markdown
---
applyTo: "**/*.ts"
---

# Security rules (TypeScript)
- Never introduce dynamic SQL construction; use parameterized queries only.
- Any new external HTTP call must enforce timeouts and retry policy.
- Any auth logic must include negative tests.
```

## Step 4 (optional): Turn your best practices into "slash commands" with prompt files

To standardize repeatable workflows like code review, test scaffolding, or endpoint scaffolding, create prompt files (slash commands) as .prompt.md files, commonly in .github/prompts/. Engineers invoke them manually in chat by typing /.
Create `your-app-repo/.github/prompts/standards-code-review.prompt.md`:

```markdown
---
description: Review code against our org standards (security, perf, style, tests)
---

You are a senior engineer performing a standards-based review.

Use these checks:
1) Security: input validation, authz, secrets, injection risks
2) Reliability: error handling, retries/timeouts, edge cases
3) Observability: structured logs, metrics, tracing where relevant
4) Testing: required coverage, negative tests, naming conventions
5) Style: follow repository rules in .github/copilot-instructions.md

Output:
- Summary (2-3 lines)
- Issues (severity: BLOCKER/REQUIRED/SUGGESTION)
- Suggested patch snippets (only where confident)
- "Ready to merge?" verdict
```

Now any engineer can type /standards-code-review and get the same structured output every time, without rewriting the prompt.

## How teams actually use this day-to-day

- Recipe A: Onboarding a new engineer. Ask inside the Space: "Summarize our service architecture and coding conventions for onboarding. Link the key docs."
- Recipe B: Writing a feature with best-practice guardrails. Ask in the Space: "We're adding endpoint X. Generate a plan that follows our API versioning ADR and error-handling standard."
- Recipe C: Enforcing review standards consistently. In the repo, run the prompt file: /standards-code-review.

## Governance and best practices (what to do / what to avoid)

- Keep Spaces purpose-built. Avoid dumping an entire org into one Space if your goal is consistent, grounded output.
- Prefer linking the "golden source." Keep standards in a single repo and update them via PR; treat it like code.
- Make instructions short but strict. Detailed rules belong in your Markdown standards.
- Avoid conflicting instruction files. If instructions contradict each other, results can be inconsistent.
## References (official docs for further reading)

- About GitHub Copilot Spaces: https://docs.github.com/en/copilot/concepts/context/spaces
- Creating GitHub Copilot Spaces: https://docs.github.com/en/copilot/how-tos/provide-context/use-copilot-spaces/create-copilot-spaces
- Adding custom instructions for GitHub Copilot: https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/add-custom-instructions
- Use custom instructions in VS Code: https://code.visualstudio.com/docs/copilot/customization/custom-instructions
- Use prompt files in VS Code: https://code.visualstudio.com/docs/copilot/customization/prompt-files

## Closing: the "best practices" flywheel

Once you implement this pattern, you get a virtuous cycle: teams encode standards as Markdown; Copilot Spaces ground answers in those standards; prompt files and instruction files standardize execution; and code reviews shift from style policing to design and correctness.

# The Daily Stand-Up Agent: A Custom Copilot for Summarizing Jira & Azure DevOps Progress
Modern software teams move fast. Between sprint planning, backlog grooming, pull requests, deployments, and stakeholder updates, developers often spend more time discussing work than actually doing it. One of the biggest pain points in Agile workflows is the daily stand-up meeting, especially when team members need to manually summarize updates across dozens of tickets in tools like Jira and Azure DevOps.

https://dellenny.com/the-daily-stand-up-agent-a-custom-copilot-for-summarizing-jira-azure-devops-progress/

# Building a Scalable Contract Data Extraction Pipeline with Microsoft Foundry and Python
## Architecture Overview

(Architecture diagram: Blob Storage triggers an Azure Function, which calls Document Intelligence, transforms the data, and stores it in Cosmos DB.)

Flow:

1. Upload contract files (PDF or ZIP) to Azure Blob Storage
2. An Azure Function triggers automatically on file upload
3. Azure AI Document Intelligence extracts layout and tables
4. A transformation layer converts the output into a canonical JSON format
5. Data is stored in Azure Cosmos DB

## Step 1: Trigger Processing with Azure Functions

An Azure Function with a Blob trigger enables automatic processing when a file is uploaded.

```python
import io
import logging
import zipfile

import azure.functions as func


def main(myblob: func.InputStream):
    logging.info(f"Processing blob: {myblob.name}")
    if myblob.name.endswith(".zip"):
        with zipfile.ZipFile(io.BytesIO(myblob.read())) as z:
            for file_name in z.namelist():
                logging.info(f"Extracting {file_name}")
                file_data = z.read(file_name)
                # Pass file_data to the extraction step
```

Best practices:

- Keep functions stateless and idempotent
- Handle retries for transient failures
- Store configuration in environment variables

## Step 2: Extract Layout Using Document Intelligence

The prebuilt layout model helps extract tables, text, and structure from documents.

```python
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.core.credentials import AzureKeyCredential

client = DocumentIntelligenceClient(
    endpoint="<your-endpoint>",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_document(
    "prebuilt-layout",
    document=file_data,
)
result = poller.result()
```

The output includes:

- Structured tables
- Paragraphs and text blocks
- Bounding regions for layout context

## Step 3: Handle Multi-Page Table Continuity

Contract documents often contain tables split across multiple pages. These need to be merged to preserve data integrity.
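The merge logic in this step relies on an `extract_rows` helper that the article leaves undefined. Here is a minimal sketch of what it might look like, assuming each layout table exposes a `cells` list whose items carry `row_index`, `column_index`, and `content` attributes (the `SimpleNamespace` objects below are mocks standing in for the SDK's cell type):

```python
from types import SimpleNamespace

def extract_rows(table):
    """Group non-header cells into ordered rows, keyed by row and column index."""
    rows = {}
    for cell in table.cells:
        if cell.row_index == 0:
            continue  # row 0 is assumed to hold the column headers
        rows.setdefault(cell.row_index, {})[cell.column_index] = cell.content
    # Emit rows in row-index order, with each row's cells in column-index order
    return [[row[col] for col in sorted(row)] for _, row in sorted(rows.items())]

# Mock table resembling a Document Intelligence layout table
table = SimpleNamespace(cells=[
    SimpleNamespace(row_index=0, column_index=0, content="Item"),
    SimpleNamespace(row_index=0, column_index=1, content="Qty"),
    SimpleNamespace(row_index=1, column_index=0, content="Service A"),
    SimpleNamespace(row_index=1, column_index=1, content="2"),
])
print(extract_rows(table))  # [['Service A', '2']]
```

Sorting by index rather than relying on cell order makes the helper robust to the SDK returning cells in any order.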
```python
def merge_tables(tables):
    merged = []
    current = None
    for table in tables:
        headers = [cell.content for cell in table.cells if cell.row_index == 0]
        if current and headers == current["headers"]:
            # Same header row as the previous table: treat it as a continuation
            current["rows"].extend(extract_rows(table))
        else:
            if current:
                merged.append(current)
            current = {
                "headers": headers,
                "rows": extract_rows(table),
            }
    if current:
        merged.append(current)
    return merged
```

Key considerations:

- Match headers to detect continuation
- Preserve row order
- Avoid duplicate headers

## Step 4: Transform to a Canonical JSON Schema

A consistent schema ensures compatibility across downstream systems.

```json
{
  "id": "contract_123",
  "documentType": "contract",
  "vendorName": "ABC Corp",
  "invoiceDate": "2023-05-05",
  "tables": [
    {
      "name": "Line Items",
      "headers": ["Item", "Qty", "Price"],
      "rows": [
        ["Service A", "2", "100"]
      ]
    }
  ],
  "metadata": {
    "sourceFile": "contract.pdf",
    "processedAt": "2026-04-22T10:00:00Z"
  }
}
```

Design tips:

- Keep the schema flexible and extensible
- Include metadata for traceability
- Avoid excessive nesting

## Step 5: Persist Data in Cosmos DB

Store the transformed data in a scalable NoSQL database.
```python
from azure.cosmos import CosmosClient

client = CosmosClient("<cosmos-uri>", "<key>")
database = client.get_database_client("contracts-db")
container = database.get_container_client("documents")

container.upsert_item(canonical_json)
```

Best practices:

- Choose an appropriate partition key (for example, documentType or vendorName)
- Optimize indexing policies
- Monitor request unit (RU) usage

## Observability and Monitoring

To ensure reliability:

- Enable logging with Application Insights
- Track processing time and failures
- Monitor document extraction accuracy

## Security Considerations

- Store secrets securely using Azure Key Vault
- Use managed identity for service authentication
- Apply role-based access control (RBAC) to storage resources

## Conclusion

This approach provides a scalable and maintainable solution for contract data extraction:

- Event-driven processing with Azure Functions
- Accurate extraction using Document Intelligence
- Clean transformation into a reusable schema
- Efficient storage with Cosmos DB

This foundation can be extended with validation layers, review workflows, or analytics dashboards depending on your business requirements.

## Resources

- Contract data extraction - Document Intelligence: Foundry Tools | Microsoft Learn
- microsoft/content-processing-solution-accelerator: Programmatically extract data and apply schemas to unstructured documents across text-based and multi-modal content using Azure AI Foundry, Azure OpenAI, Azure AI Content Understanding, and Cosmos DB

# The Architecture of Copilot Agents: Building Intelligent Assistants for the Modern Era
The rapid evolution of artificial intelligence has ushered in a new class of systems often referred to as copilot agents. These agents are not fully autonomous decision-makers, nor are they passive tools; they sit in the middle, augmenting human capability by assisting with tasks, providing insights, and automating workflows while keeping humans in the loop. From coding assistants to enterprise productivity tools, copilot agents are becoming foundational to how we interact with software.

https://dellenny.com/the-architecture-of-copilot-agents-building-intelligent-assistants-for-the-modern-era/

# Automating the feedback collection process before reviews. Is it possible with Copilot?
Every quarter, our HR team manually emails people asking for peer feedback before reviews start. It's a huge time sink, and half the responses come in late. Someone mentioned Copilot agents might be able to handle this, like automatically reaching out to the right people, collecting responses, and organizing them. Has anyone tried building something like this? Or is there a simpler way to automate 360 feedback collection in M365?

# Microsoft 365 Copilot Gets Smarter: GPT-5.5 Thinking and ChatGPT Images 2.0 Transform Workflows
The evolution of workplace AI is accelerating, and the latest update to Microsoft 365 Copilot signals a major leap forward. With the introduction of GPT-5.5 Thinking and ChatGPT Images 2.0, Copilot is no longer just an assistant; it's becoming a more capable collaborator for complex thinking, creative production, and multi-step problem-solving.

https://dellenny.com/microsoft-365-copilot-gets-smarter-gpt-5-5-thinking-and-chatgpt-images-2-0-transform-workflows/