Tips and Tricks

# From Test Cases to Trusted Automation: Scaling Enterprise Quality with GitHub Copilot
## Automation First, But Trust Is Earned

Enterprise QA teams today are automation-led by default. Regression suites run daily, API tests validate integrations, and UI automation protects critical workflows. Yet many teams still struggle with:

- Automation suites that lag behind changing requirements
- Brittle regression tests producing false failures
- High effort spent on maintaining, refactoring, and rewriting tests
- Limited time for testers to think deeply about risk and coverage

Automation creates speed—but trust is built only when automation stays relevant, maintainable, and aligned to business intent. That is where AI-assisted workflows started to play a role: not to replace automation engineers, but to remove friction from automation execution and evolution.

## GitHub Copilot as an Automation Accelerator

GitHub Copilot proved most effective when used as a support system for automation teams, not a replacement for expertise.

### Faster Automation Creation Without Losing Intent

Automation engineers often spend significant time writing boilerplate code: test scaffolding, assertions, setup, and repetitive patterns. Copilot helped accelerate this phase by:

- Generating consistent test skeletons (a minimal sketch appears later in this article)
- Assisting with repetitive automation logic
- Suggesting assertions aligned to test intent

This allowed engineers to focus on what needed to be validated, not how fast they could type it.

### Improving Maintainability of Automation Suites

At enterprise scale, the true cost of automation is maintenance. Copilot helped reduce this burden by:

- Accelerating refactoring of existing test code
- Making automation scripts more readable and standardized
- Supporting quicker updates when requirements changed

As a result, regression suites stayed healthier and more reliable—directly improving release confidence.

### Strengthening Regression Confidence

Automation is valuable only when it can be trusted during regression cycles. By reducing the effort spent on maintaining and updating tests, Copilot indirectly strengthened regression stability, ensuring automation remained aligned with evolving functionality.

Importantly, every AI suggestion was reviewed, validated, and owned by humans. Automation logic remained intentional, deterministic, and compliant with enterprise standards.

## Automation at Scale: Where Quality Is Really Won or Lost

As automation grows across releases and teams, quality risks move upstream. The question stops being “Do we have automation?” and becomes “Can we trust what automation is telling us?”

This is where quality engineering truly matters. By using Copilot to lower the mechanical overhead of automation, QA engineers could invest more time in:

- Identifying risk-based test coverage gaps
- Improving negative and edge-case scenarios
- Ensuring UI, API, and integration automation complemented each other
- Designing automation that reflected real business flows

Automation stopped being a maintenance burden and became a strategic quality asset.

## The Real Mindset Shift for QA Teams

The biggest impact was not technical—it was cultural. Instead of spending the majority of their time creating and fixing automation scripts, QA engineers could shift their focus toward:

- Test design strategy
- Regression optimization
- Failure analysis and pattern recognition
- Cross-team conversations on quality risks

AI didn’t reduce QA effort. It redirected effort to higher-value quality ownership. This is what modern QA leadership looks like: not writing more tests, but ensuring the right tests exist, run reliably, and protect customer trust.
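To make “consistent test skeletons” and “assertions aligned to test intent” concrete, here is a minimal, hypothetical sketch of the kind of scaffold Copilot can produce from a one-line comment describing intent. The pytest framework and the checkout_total function are illustrative assumptions, not details of the project described above.

```python
import pytest

# Hypothetical function under test; a stand-in for real application code.
def checkout_total(prices: list[float], discount: float = 0.0) -> float:
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1 - discount), 2)

# The kind of skeleton Copilot typically scaffolds from a comment such as:
# "tests for checkout_total: happy path, empty cart, invalid discount"
def test_checkout_total_applies_discount():
    assert checkout_total([100.0, 50.0], discount=0.1) == 135.0

def test_checkout_total_empty_cart_returns_zero():
    assert checkout_total([]) == 0.0

def test_checkout_total_rejects_invalid_discount():
    with pytest.raises(ValueError):
        checkout_total([10.0], discount=1.5)
```

Each assertion maps to a stated intent, which is what keeps generated tests reviewable: a human can confirm at a glance that the scaffold validates the right behavior before accepting it.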
## Responsible AI Was Non-Negotiable

In an enterprise context, automation quality is inseparable from governance and responsibility. Clear guardrails were essential:

- No blind acceptance of AI-generated automation
- Human review for every test case and assertion
- Awareness of security, data sensitivity, and compliance
- Using Copilot as an assistant, not an authority

This ensured automation quality improved without compromising trust or control.

## Final Thoughts: Automation Builds Speed, Trust Builds Confidence

Automation enables scale. Test design ensures coverage. Trust is built when both evolve together.

GitHub Copilot did not replace automation skills on our enterprise project—it amplified them. By removing friction from test creation and maintenance, it allowed automation to scale responsibly and enabled QA teams to focus on what truly matters: confidence in every release.

The future of quality engineering is not manual versus automation. It is automation-led, AI-assisted, and human-governed quality. That is how trust is built at enterprise scale.

## Microsoft Learn References on Automation and Quality Engineering

The following Microsoft Learn resources provide authoritative guidance on automation-led quality engineering, test strategy, and building trust at enterprise scale.

- Architecture strategies for testing - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Architecture strategies for designing a reliability testing strategy - Microsoft Azure Well-Architected Framework | Microsoft Learn
- What is Azure Test Plans? Manual, exploratory, and automated test tools - Azure Test Plans | Microsoft Learn

# Azure/AZVerify

Your Azure diagram, your Bicep templates, and your live environment are three separate sources of truth. They can drift apart. AzVerify gives GitHub Copilot the skills to connect them.
# Turning GitHub Copilot into a “Best Practices Coach” with Copilot Spaces + a Markdown Knowledge Base

## Why Copilot Spaces + Markdown repos work so well

When you ask Copilot generic questions (“How should we log errors?” “What’s our API versioning approach?”), the model will often respond with reasonable defaults. But reasonable defaults are not the same as your standards.

Copilot Spaces solve the context problem by allowing you to attach a curated set of sources (files, folders, repos, PRs/issues, uploads, free text) plus explicit instructions—so Copilot answers in the context of your team’s rules and artifacts. Spaces can be shared with your team and stay updated as the underlying GitHub content changes—so your “best practices coach” stays evergreen.

## The architecture (high level)

Here’s the mental model:

- Engineering Knowledge Base Repo: A dedicated repo containing your standards as Markdown (coding style, architecture decisions, security rules, testing conventions, examples, templates).
- Copilot Space “Engineering Standards Coach”: A Space that attaches the knowledge base repo (or key folders/files within it), optionally your main application repo(s), and a short set of “rules of engagement” (instructions).
- In-repo reinforcement (optional but powerful): Custom instruction files (repo-wide + path-specific) and prompt files (slash commands) inside your production repos to standardize behavior and workflows.

## Step 1: Create a knowledge base repo (Markdown-first)

Create a repo such as:

- engineering-knowledge-base
- platform-playbook
- org-standards

A practical starter structure:

```
engineering-knowledge-base/
  README.md
  standards/
    coding-style.md
    logging.md
    error-handling.md
    performance.md
  security/
    secure-coding.md
    secrets.md
    threat-modeling.md
  architecture/
    overview.md
    adr/
      0001-service-boundaries.md
      0002-api-versioning.md
  testing/
    unit-testing.md
    integration-testing.md
    contract-testing.md
  templates/
    pr-review-checklist.md
    api-design-checklist.md
    definition-of-done.md
```

Tip: Keep these docs opinionated, concrete, and example-heavy—Copilot works best when it can point to specific patterns rather than abstract principles.

## Step 2: Create a Copilot Space and attach your sources

Create a Space, name it, choose an owner (personal or organization), then add sources and instructions. Inside the Space, add two types of context: Instructions (how Copilot should behave) and Sources (your actual code and docs).

### 2.1 Instructions (how Copilot should behave in this Space)

Example instructions you can paste:

```
You are the Engineering Standards Coach for this organization.

Goals:
- Recommend solutions that follow our standards in the attached knowledge base.
- When proposing code, align with our logging, error-handling, security, and testing guidelines.
- When uncertain, ask for the missing repo context or point to the exact standard that applies.

Output format:
- Start with the standard(s) you are applying (with a link or filename reference).
- Then provide the recommended implementation.
- Include a lightweight checklist for reviewers.
```

### 2.2 Sources (your real “knowledge base”)

Attach:

- The knowledge base repo (or just the folders that matter)
- Your main code repo(s) (or select folders)
- PR checklist and Definition of Done templates
- Key architecture docs, runbooks, or troubleshooting guides

## Step 3 (Optional): Add instruction files to your production repos

Spaces are excellent for curated context and team-wide “ask me anything about our standards.” But you can reinforce consistency directly inside each repo by adding custom instruction files.
### 3.1 Repo-wide instructions (.github/copilot-instructions.md)

Create: your-app-repo/.github/copilot-instructions.md

```markdown
# Repository Copilot Instructions

## Tech stack
- Language: TypeScript (strict)
- Framework: Node.js + Express
- Testing: Jest
- Lint/format: ESLint + Prettier

## Engineering rules
- Use structured logging as defined in /docs/logging.md
- Never log secrets or raw tokens
- Prefer small, composable functions
- All new endpoints must include: input validation, authz checks, unit tests, and consistent error handling

## Build & test
- Install: npm ci
- Test: npm test
- Lint: npm run lint
```

### 3.2 Path-specific instructions (.github/instructions/*.instructions.md)

Create: your-app-repo/.github/instructions/security.instructions.md

```markdown
---
applyTo: "**/*.ts"
---

# Security rules (TypeScript)
- Never introduce dynamic SQL construction; use parameterized queries only.
- Any new external HTTP call must enforce timeouts and retry policy.
- Any auth logic must include negative tests.
```

## Step 4 (Optional): Turn your best practices into “slash commands” with prompt files

To standardize repeatable workflows like code review, test scaffolding, or endpoint scaffolding, create prompt files (slash commands) as .prompt.md files—commonly in .github/prompts/. Engineers invoke them manually in chat by typing /.

Create: your-app-repo/.github/prompts/standards-code-review.prompt.md

```markdown
---
description: Review code against our org standards (security, perf, style, tests)
---

You are a senior engineer performing a standards-based review.

Use these checks:
1) Security: input validation, authz, secrets, injection risks
2) Reliability: error handling, retries/timeouts, edge cases
3) Observability: structured logs, metrics, tracing where relevant
4) Testing: required coverage, negative tests, naming conventions
5) Style: follow repository rules in .github/copilot-instructions.md

Output:
- Summary (2-3 lines)
- Issues (severity: BLOCKER/REQUIRED/SUGGESTION)
- Suggested patch snippets (only where confident)
- "Ready to merge?" verdict
```

Now any engineer can type /standards-code-review and get the same structured output every time, without rewriting the prompt.

## How teams actually use this day-to-day

- Recipe A: Onboarding a new engineer. Ask inside the Space: “Summarize our service architecture and coding conventions for onboarding. Link the key docs.”
- Recipe B: Writing a feature with best-practice guardrails. Ask in the Space: “We’re adding endpoint X. Generate a plan that follows our API versioning ADR and error-handling standard.”
- Recipe C: Enforcing review standards consistently. In the repo, run the prompt file: /standards-code-review.

## Governance and best practices (what to do / what to avoid)

- Keep Spaces purpose-built. Avoid dumping an entire org into one Space if your goal is consistent, grounded output.
- Prefer linking the “golden source.” Keep standards in a single repo and update them via PR—treat it like code.
- Make instructions short but strict. Detailed rules belong in your Markdown standards.
- Avoid conflicting instruction files. If instructions contradict each other, results can be inconsistent (a lightweight way to spot overlaps is sketched after this list).
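For that last point, a team could mechanically check its instruction files for applyTo globs claimed by more than one file. The following Python sketch is illustrative only, not an official GitHub tool, and assumes the simple quoted applyTo front matter shown in section 3.2:

```python
# Illustrative check for overlapping applyTo globs across instruction files.
# Assumes front matter of the form: applyTo: "**/*.ts" (possibly comma-separated).
import re
from collections import defaultdict
from pathlib import Path

def apply_to_globs(path: Path) -> list[str]:
    """Extract the applyTo glob patterns from a *.instructions.md file."""
    match = re.search(r'applyTo:\s*"([^"]+)"', path.read_text(encoding="utf-8"))
    return [g.strip() for g in match.group(1).split(",")] if match else []

def find_shared_globs(instructions_dir: str) -> dict[str, list[str]]:
    """Return globs that appear in more than one instruction file."""
    seen: dict[str, list[str]] = defaultdict(list)
    for f in sorted(Path(instructions_dir).glob("*.instructions.md")):
        for pattern in apply_to_globs(f):
            seen[pattern].append(f.name)
    return {g: files for g, files in seen.items() if len(files) > 1}

if __name__ == "__main__":
    for pattern, files in find_shared_globs(".github/instructions").items():
        print(f'applyTo "{pattern}" appears in: {", ".join(files)}')
```

Two files sharing a glob is not automatically wrong, but it is exactly where contradictory guidance tends to creep in, so flagging it for human review is cheap insurance.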
## References (official docs for further reading)

- About GitHub Copilot Spaces: https://docs.github.com/en/copilot/concepts/context/spaces
- Creating GitHub Copilot Spaces: https://docs.github.com/en/copilot/how-tos/provide-context/use-copilot-spaces/create-copilot-spaces
- Adding custom instructions for GitHub Copilot: https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/add-custom-instructions
- Use custom instructions in VS Code: https://code.visualstudio.com/docs/copilot/customization/custom-instructions
- Use prompt files in VS Code: https://code.visualstudio.com/docs/copilot/customization/prompt-files

## Closing: the “best practices” flywheel

Once you implement this pattern, you get a virtuous cycle: teams encode standards as Markdown; Copilot Spaces ground answers in those standards; prompt files and instruction files standardize execution; and code reviews shift from style policing to design and correctness.
# The Daily Stand-Up Agent: A Custom Copilot for Summarizing Jira & Azure DevOps Progress

Modern software teams move fast. Between sprint planning, backlog grooming, pull requests, deployments, and stakeholder updates, developers often spend more time discussing work than actually doing it. One of the biggest pain points in Agile workflows is the daily stand-up meeting, especially when team members need to manually summarize updates across dozens of tickets in tools like Jira and Azure DevOps.

https://dellenny.com/the-daily-stand-up-agent-a-custom-copilot-for-summarizing-jira-azure-devops-progress/
# Building a Scalable Contract Data Extraction Pipeline with Microsoft Foundry and Python

## Architecture Overview

*Figure: Architecture diagram showing Blob Storage triggering an Azure Function, which calls Document Intelligence, transforms the data, and stores it in Cosmos DB.*

Flow:

1. Upload contract files (PDF or ZIP) to Azure Blob Storage
2. An Azure Function triggers automatically on file upload
3. Azure AI Document Intelligence extracts layout and tables
4. A transformation layer converts the output into a canonical JSON format
5. Data is stored in Azure Cosmos DB

## Step 1: Trigger Processing with Azure Functions

An Azure Function with a Blob trigger enables automatic processing when a file is uploaded.

```python
import io
import logging
import zipfile

import azure.functions as func

def main(myblob: func.InputStream):
    logging.info(f"Processing blob: {myblob.name}")
    if myblob.name.endswith(".zip"):
        with zipfile.ZipFile(io.BytesIO(myblob.read())) as z:
            for file_name in z.namelist():
                logging.info(f"Extracting {file_name}")
                file_data = z.read(file_name)
                # Pass file_data to the extraction step
```

Best practices:

- Keep functions stateless and idempotent
- Handle retries for transient failures
- Store configuration in environment variables

## Step 2: Extract Layout Using Document Intelligence

The prebuilt layout model helps extract tables, text, and structure from documents.

```python
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.core.credentials import AzureKeyCredential

client = DocumentIntelligenceClient(
    endpoint="<your-endpoint>",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_document(
    "prebuilt-layout",
    document=file_data,
)
result = poller.result()
```

Output includes:

- Structured tables
- Paragraphs and text blocks
- Bounding regions for layout context

## Step 3: Handle Multi-Page Table Continuity

Contract documents often contain tables split across multiple pages. These need to be merged to preserve data integrity.

```python
def extract_rows(table):
    # Minimal assumed helper (not shown in the original post): group cell
    # contents by row index, skipping the header row. Assumes cells arrive
    # in column order within each row.
    rows = {}
    for cell in table.cells:
        if cell.row_index == 0:
            continue
        rows.setdefault(cell.row_index, []).append(cell.content)
    return [rows[i] for i in sorted(rows)]

def merge_tables(tables):
    merged = []
    current = None
    for table in tables:
        headers = [cell.content for cell in table.cells if cell.row_index == 0]
        if current and headers == current["headers"]:
            # Same header row as the previous table: treat it as a continuation.
            current["rows"].extend(extract_rows(table))
        else:
            if current:
                merged.append(current)
            current = {"headers": headers, "rows": extract_rows(table)}
    if current:
        merged.append(current)
    return merged
```

Key considerations:

- Match headers to detect continuation
- Preserve row order
- Avoid duplicate headers

## Step 4: Transform to a Canonical JSON Schema

A consistent schema ensures compatibility across downstream systems.

```json
{
  "id": "contract_123",
  "documentType": "contract",
  "vendorName": "ABC Corp",
  "invoiceDate": "2023-05-05",
  "tables": [
    {
      "name": "Line Items",
      "headers": ["Item", "Qty", "Price"],
      "rows": [
        ["Service A", "2", "100"]
      ]
    }
  ],
  "metadata": {
    "sourceFile": "contract.pdf",
    "processedAt": "2026-04-22T10:00:00Z"
  }
}
```

Design tips:

- Keep the schema flexible and extensible
- Include metadata for traceability
- Avoid excessive nesting
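The post shows the target schema but not the mapping code. As a hedged sketch, the transformation layer might look like the following; the function name to_canonical, the vendor/invoice parameters, and the merged-table shape (from merge_tables above) are assumptions, not a fixed API:

```python
from datetime import datetime, timezone

def to_canonical(doc_id: str, source_file: str, merged_tables: list[dict],
                 vendor_name: str | None = None,
                 invoice_date: str | None = None) -> dict:
    """Map merged table output into the canonical schema shown above."""
    return {
        "id": doc_id,
        "documentType": "contract",
        "vendorName": vendor_name,
        "invoiceDate": invoice_date,
        "tables": [
            {
                # merge_tables does not name tables, so fall back to an index.
                "name": table.get("name", f"Table {i + 1}"),
                "headers": table["headers"],
                "rows": table["rows"],
            }
            for i, table in enumerate(merged_tables)
        ],
        "metadata": {
            "sourceFile": source_file,
            "processedAt": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example usage, producing the canonical_json persisted in Step 5:
# canonical_json = to_canonical("contract_123", "contract.pdf",
#                               merge_tables(result.tables),
#                               vendor_name="ABC Corp", invoice_date="2023-05-05")
```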
## Step 5: Persist Data in Cosmos DB

Store the transformed data in a scalable NoSQL database.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<cosmos-uri>", "<key>")
database = client.get_database_client("contracts-db")
container = database.get_container_client("documents")

container.upsert_item(canonical_json)
```

Best practices:

- Choose an appropriate partition key, for example documentType or vendorName (a creation sketch follows this list)
- Optimize indexing policies
- Monitor request unit (RU) usage
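As a concrete illustration of the partition key guidance, the container can be created with an explicit partition key up front using the azure-cosmos SDK. The /vendorName path is an assumption that fits the canonical schema above, not a universal recommendation:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<cosmos-uri>", "<key>")
database = client.get_database_client("contracts-db")

# Create (or reuse) the container with /vendorName as the partition key,
# spreading writes across vendors rather than piling onto one logical partition.
container = database.create_container_if_not_exists(
    id="documents",
    partition_key=PartitionKey(path="/vendorName"),
)
```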
## Observability and Monitoring

To ensure reliability:

- Enable logging with Application Insights
- Track processing time and failures
- Monitor document extraction accuracy

## Security Considerations

- Store secrets securely using Azure Key Vault
- Use Managed Identity for service authentication
- Apply role-based access control (RBAC) to storage resources

## Conclusion

This approach provides a scalable and maintainable solution for contract data extraction:

- Event-driven processing with Azure Functions
- Accurate extraction using Document Intelligence
- Clean transformation into a reusable schema
- Efficient storage with Cosmos DB

This foundation can be extended with validation layers, review workflows, or analytics dashboards depending on your business requirements.

## Resources

- Contract data extraction – Document Intelligence: Foundry Tools | Microsoft Learn
- microsoft/content-processing-solution-accelerator: Programmatically extract data and apply schemas to unstructured documents across text-based and multi-modal content using Azure AI Foundry, Azure OpenAI, Azure AI Content Understanding, and Cosmos DB

# The Architecture of Copilot Agents: Building Intelligent Assistants for the Modern Era

The rapid evolution of artificial intelligence has ushered in a new class of systems often referred to as copilot agents. These agents are not fully autonomous decision-makers, nor are they passive tools; they sit in the middle, augmenting human capability by assisting with tasks, providing insights, and automating workflows while keeping humans in the loop. From coding assistants to enterprise productivity tools, copilot agents are becoming foundational to how we interact with software.

https://dellenny.com/the-architecture-of-copilot-agents-building-intelligent-assistants-for-the-modern-era/
# Automating the feedback collection process before reviews. Is it possible with Copilot?

Every quarter, our HR team manually emails people asking for peer feedback before reviews start. It’s a huge time sink, and half the responses come in late. Someone mentioned Copilot agents might be able to handle this, like automatically reaching out to the right people, collecting responses, and organizing them. Has anyone tried building something like this? Or is there a simpler way to automate 360 feedback collection in M365?
# Microsoft 365 Copilot Gets Smarter: GPT-5.5 Thinking and ChatGPT Images 2.0 Transform Workflows

The evolution of workplace AI is accelerating, and the latest update to Microsoft 365 Copilot signals a major leap forward. With the introduction of GPT-5.5 Thinking and ChatGPT Images 2.0, Copilot is no longer just an assistant; it’s becoming a more capable collaborator for complex thinking, creative production, and multi-step problem-solving.

https://dellenny.com/microsoft-365-copilot-gets-smarter-gpt-5-5-thinking-and-chatgpt-images-2-0-transform-workflows/
# Moving Beyond Prompts: A Practical Introduction to Spec-Driven Development

In the last year, many of us have started writing code differently. We describe what we want, let AI generate an answer, review it, tweak the prompt, and try again. This loop—prompt, retry, adjust—has quietly become part of our daily workflow.

At first, it feels incredibly productive. But as the complexity of the task increases, something changes. The iteration cycle becomes longer, outputs become inconsistent, and the effort shifts from solving the problem to refining the prompt.

This is where a subtle but important shift in approach can help: moving from prompt-driven development to spec-driven development.

## The Problem: Prompt → Retry → Guess

Most AI-assisted workflows today look something like this:

1. Write a prompt describing the task
2. Review the generated output
3. Adjust the prompt
4. Repeat until it looks acceptable

In practice, this often simplifies to: Prompt → Retry → Guess.

*Figure: Prompt-driven vs spec-driven workflow comparison*

For simple tasks, this works well. But for anything involving multiple inputs, constraints, or edge cases, the process can become unpredictable. In my experience, the challenge is not the model—it is the lack of structure in how we describe the problem.

## A Shift in Thinking: From Prompts to Specifications

Instead of asking AI to “figure it out,” spec-driven development introduces a simple idea: define the problem clearly before asking for a solution.

A specification (spec) is not a long document—it is a structured way of describing:

- Inputs
- Outputs
- Constraints
- Edge cases

When this structure is provided upfront, the interaction changes significantly. Rather than iterating on vague prompts, you are guiding the system with a clear contract.

## What This Looks Like in Practice

Let’s take a simple example: an order summary API (for example, a backend service hosted on Azure App Service).

### Without a Spec (Typical Prompt)

“Write an API that returns order details for a user.”

A model can generate something reasonable, but in practice, the responses often vary:

- Field names may be inconsistent
- Pagination may be missing
- Edge cases (no orders, large datasets) may not be handled
- Structure may change across iterations

Example response (typical output):

```json
{
  "userId": 123,
  "orders": [
    { "id": 1, "amount": 250 }
  ]
}
```

### With a Spec (Structured Input)

Now consider providing a simple specification:

```
Input:
  - userId
  - page
  - pageSize

Output:
  - userId
  - orders[]
      - orderId
      - totalAmount
      - orderDate
  - pagination
      - page
      - pageSize
      - totalRecords

Constraints:
  - Default pageSize = 10
  - Return empty list if no orders
  - Handle large datasets efficiently
```

Example response (based on the spec):

```json
{
  "userId": 123,
  "orders": [
    {
      "orderId": 1,
      "totalAmount": 250,
      "orderDate": "2024-01-10"
    }
  ],
  "pagination": {
    "page": 1,
    "pageSize": 10,
    "totalRecords": 50
  }
}
```

## Why This Tends to Work

The difference here is not just stylistic—it is structural. An unstructured prompt leaves room for interpretation. A spec reduces ambiguity by defining expectations explicitly.

In practice, I have observed that providing structured inputs like this often leads to the following:

- More consistent field naming
- Better handling of edge cases
- Reduced need for repeated prompt refinement

Rather than relying on trial and error, the interaction becomes more predictable and aligned with expectations.
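To ground the spec in running code, here is a minimal sketch of a handler that satisfies the constraints above. The fetch_orders stub and in-memory pagination are hypothetical stand-ins (a real service would push paging into the data layer), and the function shape is one possible reading of the spec, not a prescribed implementation:

```python
def fetch_orders(user_id: int) -> list[dict]:
    # Hypothetical data-access stub; a real service would query a database.
    return [{"orderId": 1, "totalAmount": 250, "orderDate": "2024-01-10"}]

def get_order_summary(user_id: int, page: int = 1, page_size: int = 10) -> dict:
    """Return one page of a user's orders in the spec's response shape."""
    all_orders = fetch_orders(user_id)
    start = (page - 1) * page_size
    page_orders = all_orders[start:start + page_size]  # empty list if no orders
    return {
        "userId": user_id,
        "orders": [
            {
                "orderId": o["orderId"],
                "totalAmount": o["totalAmount"],
                "orderDate": o["orderDate"],
            }
            for o in page_orders
        ],
        "pagination": {
            "page": page,
            "pageSize": page_size,  # spec default: 10
            "totalRecords": len(all_orders),
        },
    }
```

Because the spec pinned down field names, defaults, and the empty-orders case, there is very little left for the implementation, or the model generating it, to guess at.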
## Applying This to Existing Code (Refactor Scenario)

This approach becomes even more useful when applied to existing code.

Instead of asking “Fix the bug in the Auth controller,” you can define the expected behavior:

- Input validation rules
- Response formats
- Error handling
- Authorization behavior

The task then becomes aligning the implementation with the defined spec. This shifts the interaction from guesswork to validation—comparing current behavior with intended behavior.

### Example Comparison (Auth Scenario)

Without a spec (typical prompt): “Fix the login issue in Auth controller.” Possible outcomes include:

- Partial validation added
- Inconsistent error responses
- No clear handling of repeated failed attempts

With a spec (defined behavior), the spec defines:

- Validate username and password
- Return consistent error responses
- Lock the account after 5 failed attempts
- Do not expose internal errors

Resulting behavior:

- Input validation is consistently applied
- Error responses follow a defined structure
- Edge cases like account lockout are handled explicitly

This mirrors the same pattern seen in the API example—moving from ambiguity to clearly defined behavior.

## A Practical Way to Start

You do not need new tools or frameworks to try this. A simple workflow that has worked well in practice:

1. Ask – Describe the problem (prompt, discussion, or notes)
2. Write a spec – Define inputs, outputs, constraints
3. Refine – Remove ambiguity
4. Generate – Use the spec as input
5. Validate – Compare the output with the spec

This adds a small upfront step, but it often reduces back-and-forth iterations later.

## The Practical Challenge

One important point to note: writing a good spec requires understanding the problem. Spec-driven development does not eliminate complexity—it surfaces it earlier.

In many cases, the hardest part is not writing code, but clearly defining:

- What the system should do
- What it should not do
- How it should behave under edge conditions

This is also why specs evolve over time. They do not need to be perfect upfront. They improve as your understanding improves.

## Where This Approach Helps

From what I have seen, this approach is most useful in scenarios where the problem involves multiple inputs, defined contracts, or structured outputs, such as APIs, schema-driven systems, or refactoring existing code where consistency matters.

## Where It May Not Be Necessary

For simpler tasks such as small scripts, minor UI changes, or quick experiments, a detailed specification may not add much value. In those cases, a straightforward prompt is often sufficient.

## A Note on Tools

Tools like GitHub Copilot, Azure AI Studio, and AI-assisted workflows in Visual Studio Code tend to be more effective when given clear, structured inputs. Spec-driven development is not tied to any specific tool. It is a way of thinking about how we interact with these systems more effectively.

## References

- https://github.com/features/copilot
- https://platform.openai.com/docs/guides/prompt-engineering
- https://github.com/github/spec-kit
- Amplifier - Modular AI Agent Framework

## Final Thoughts

Many discussions around AI-assisted development focus on what tools can do. This approach focuses on something slightly different: how developers can structure problems more effectively before implementation.

In my experience, moving from prompts to specs does not eliminate iteration, but it makes that iteration more predictable and purposeful.