prompt engineering
18 Topics## Advanced Copilot Prompt for High‑Fidelity Teams Meeting Analysis (v1.5)
## Advanced Copilot Prompt for High‑Fidelity Teams Meeting Analysis (v1.5) I’ve been working on a structured Copilot prompt designed to dramatically improve the quality of meeting analysis inside **Microsoft Teams**, especially when the default Intelligent Recap doesn’t capture enough nuance, decisions, or actionable follow‑ups. This prompt produces a detailed, repeatable output that includes: - TL;DR executive summary - Meeting quality assessment - Prioritized action items table - Confirmed vs. tentative decisions - Open questions & risks - Mind‑map style outline - Timeline of key moments - Confidence & source citations - Tech jargon glossary - Planner‑ready task export It’s now at **version 1.5**, and I’m sharing it publicly for anyone who wants deeper meeting insights or more reliable task handoff into Planner. --- ### Why I Built This In many engineering, security, and cross‑functional meetings, clarity is everything. The default recap is helpful, but sometimes too generic. I wanted something that: - Reduces ambiguity - Surfaces decisions clearly - Highlights risks and open questions - Produces actionable, Planner‑ready tasks - Works consistently across different meeting types - Enforces strict inference rules to avoid hallucinations If your team relies heavily on Teams + Copilot, this can significantly improve meeting outcomes. --- ### What’s Included The full prompt includes: - Strict ordering rules - Anti‑hallucination constraints - Fallback rules for missing data - TL;DR section - Speaker‑labeling rules - Timestamp restrictions - Bullet‑length limits - Planner task title constraints - Deduplication rules - Tone consistency - Signal‑to‑noise filtering I’ve included the complete prompt below for anyone who wants to use or adapt it. --- ### How to Use It 1. Open the **Recap** tab of any Teams meeting with transcription enabled. 2. Click **Open Copilot**. 3. Paste the entire prompt into the Copilot compose box. 4. Wait for the structured output (usually 30–120 seconds). 5. Copy the Planner tasks section directly into Planner or Copilot for Planner. --- ### Looking for Feedback If you try this prompt, I’d love to hear: - What worked well - What didn’t - What you’d like added in v1.6 - Any edge cases or meeting types where it struggled I’m planning to maintain this as a community resource, so suggestions are welcome. Thanks to everyone experimenting with Copilot in Teams — the creativity in this community is incredible. --- ### Full Prompt (v1.5) ````markdown ```markdown # ============================================================ # PROMPT NAME: Advanced Teams Meeting Analyst (Copilot Enhancement) # ============================================================ # Version: 1.5 # Author: Scott M # Last Updated: 2026-01-14 # # Goal: # Use Microsoft Copilot in Teams (Recap tab or live meeting) to generate a highly structured, # high-signal meeting analysis that goes far beyond the default Intelligent Recap output. # Produce executive summary with TL;DR, prioritized action items table, confirmed/tentative decisions, # risks/open questions, mind-map outline, timeline, quality assessment, confidence/sources, # tech jargon glossary, and Planner-ready task export—all derived strictly from the transcript, # shared screens, chat, and attachments. # # Why This Is Superior to Default Teams/Copilot Processing: # - Default Recap: Basic chapters, highlights, simple tasks, attendance—often generic and misses nuance. # - This custom prompt: Forces strict inference rules (no hallucinations), adds confidence labeling, # decision status, risks section, mind-map structure, quality flags, source citations, # jargon glossary, and direct Planner integration for seamless task handoff. # Delivers scannable, professional-grade notes + actionable tasks for tech/engineering teams. # # Audience: # Microsoft 365 Copilot users in Teams-heavy environments who want deeper analysis # and direct bridge to Planner for follow-up execution. # # Non-Goals: # - This is NOT a replacement for legal/compliance-grade minutes. # - This is NOT verbatim transcription (use the native transcript for that). # - Relies on Teams transcription quality (enable Intelligent Speakers if available). # # Usage Instructions: # 1. Prerequisites: # - Ensure the meeting had transcription enabled (Meeting options → Record & transcribe → Allow transcription). # - For best speaker attribution: Enable Intelligent Speakers (if your org supports it) or have participants use their names clearly. # - Copilot license required (M365 Copilot or Teams Premium for full Recap features). # # 2. Post-Meeting (Recommended – Recap Tab): # - Go to the Teams meeting chat → Click the Recap tab (appears after meeting ends and processing finishes). # - Click Open Copilot (or the Copilot icon in the top-right of Recap). # - In the Copilot pane compose box, paste this ENTIRE prompt and press Enter/Send. # - Wait 30–120 seconds (longer for 60+ min meetings) for the full structured output. # # 3. During Live Meeting (Quick Catch-Up): # - While the meeting is active → Click the Copilot icon in the meeting controls. # - Paste the prompt (or a shortened version if time-sensitive) and ask for real-time summary/actions so far. # # 4. After Output Appears: # - Review the markdown sections—copy any part (e.g., Action Items table, Planner tasks) directly. # - For Planner handoff: # - Copy the entire "10. Planner Integration" section. # - Open Planner (in Teams app or planner.microsoft.com). # - Option A: Manually create tasks by pasting titles/descriptions. # - Option B: In Planner's Copilot pane (if available): Paste the tasks list and say "Create these tasks in my [plan name] plan". # - Save/export: Copy full output to OneNote, Word, or email for sharing. # # 5. Refinement & Follow-Ups (Highly Recommended): # - In the same Copilot pane, type targeted follow-ups like: # - "Expand the Risks section with mitigation ideas" # - "Draft a professional follow-up email to attendees including the summary and action table" # - "Create these tasks in Planner plan 'Engineering Syncs'" # - "Explain [specific jargon term] in more detail" # - "Prioritize the action items by impact" # - Iterate until satisfied—Copilot remembers context in the session. # # 6. Tips & Troubleshooting: # - If output is incomplete: Re-paste the prompt or say "Regenerate full analysis". # - Short meetings (<15 min): Output may be concise—ask for more detail if needed. # - No Recap tab? Ensure recording/transcription was on; wait 5–10 min post-meeting. # - Sensitive meetings: Redaction is automatic per rules, but double-check output. # # Changelog: # v1.0 - Initial release # v1.1 - Added confidence/sources + follow-up suggestions # v1.2 - Added Tech Jargon Glossary # v1.3 - Added Planner Integration section # v1.4 - Expanded Usage Instructions into detailed, step-by-step guide with prerequisites, live/post options, refinement examples, and troubleshooting # v1.5 - Added strict ordering rules, anti-hallucination constraints, fallback rules for missing data, TL;DR section, speaker-labeling rules, timestamp restrictions, bullet-length limits, Planner title constraints, deduplication rules, tone consistency, and signal-to-noise filtering # # ============================================================ # CRITICAL INSTRUCTIONS (STRICT) # ============================================================ - Do NOT summarize, restate, or comment on this prompt. Produce only the meeting analysis. - Follow the numbered sections in the exact order shown. Do not omit, reorder, merge, or rename sections. - If any section lacks sufficient evidence, include the header and write: **“No reliable data found.”** - Derive ALL content ONLY from the Teams transcript, shared content, chat, and attachments. - NEVER invent details. If unclear, mark as “Unclear” or “TBD.” - Use neutral labels (Speaker A, Speaker B, etc.) if speaker names are not confidently identified. - Assign deterministic speaker labels based on first appearance. - Redact sensitive info as [REDACTED] and flag in Risks. - Include inline citations [Transcript HH:MM, Slide X] where possible. - Keep bullet points ≤ 20 words unless quoting transcript evidence. - Exclude small talk, greetings, jokes, or irrelevant chatter unless they directly impact decisions or tasks. - Only include timestamps if explicitly present in the transcript. Never estimate or invent them. - Deduplicate action items, decisions, and risks before final output. - Maintain a professional, concise, cross-functional technical PM tone. - Planner task titles must be ≤ 10 words and start with a verb. # ============================================================ # OUTPUT FORMAT (USE EXACTLY) # ============================================================ **TL;DR (1–2 sentences)** A concise, high-level summary of why the team met and what was resolved. --- 1. **Meeting Quality Assessment** - Clarity: [Good | Fair | Poor — brief explanation] - Speaker overlap / noise: [Low | Medium | High] - Estimated accuracy: [High | Medium | Low — justification] 2. **Executive Summary** Start with 1–2 sentence overview. Then provide 5–8 bullets covering: - Purpose - Attendees (names or count if unclear) - Key topics - Outcomes - Next steps 3. **Action Items** | Priority | Owner | Task Description | Due Date | Timestamp | Dependencies | Status | Notes | |----------|-------|------------------|----------|-----------|--------------|--------|-------| **Rules:** - Sort by Priority (High → Medium → Low), then Due Date. - Infer owners/dates ONLY if explicitly stated or clearly volunteered. - Default Priority: Medium; Status: Open. - Titles ≤ 10 words, start with a verb. - Deduplicate similar tasks. 4. **Key Decisions** - **DECISION:** [What was decided] - Status: [Confirmed | Tentative | Disputed] - Confidence: [High/Medium/Low — reason] - Rationale: [Why] - Impacted: [Who] - Evidence: [Transcript HH:MM or Slide reference] 5. **Open Questions & Risks** **Open Questions** - [Unresolved or unclear items] **Risks** - [Ambiguity, missing owners, conflicting views, scope creep, technical risks, etc.] 6. **Mind Map Outline (Hierarchical Outline)** - Main Topic 1 - Subtopic A - Action / Decision / Fact - Subtopic B **Rules:** - Max 5 main topics - Max 3 levels deep - ≤ 8 words per node - Prune low-signal branches 7. **Timeline of Key Moments** - HH:MM – [Brief one-line description] - HH:MM – [etc.] *Only include if timestamps exist; otherwise write “No reliable data found.”* 8. **Confidence & Sources Summary** - Overall confidence: XX/100 - Key sources: [Transcript HH:MM, Slide X, Chat message, etc.] 9. **Tech Jargon Glossary** - TERM: Definition (1–2 sentences) *Include only if relevant terms appear.* 10. **Planner Integration: Ready-to-Create Tasks** Numbered list, each formatted as: 1. **Task Title:** [≤10 words, verb-led] - Assigned to: [Owner or TBD] - Due: [Date or TBD] - Priority: [High/Medium/Low] - Description: [Brief details + dependencies/notes] - Labels/Buckets: [Suggested grouping] **Rules:** - Only include items with clear action/owner potential. - Group related tasks under consistent buckets. - Deduplicate tasks. --- **Follow-Up Prompts (suggest 3–5)** - “Create these tasks in Planner plan ‘X’.” - “Expand the Risks section with mitigation strategies.” - “Draft a follow-up email summarizing this meeting.” - “Prioritize action items by impact and urgency.” - “Clarify ambiguous decisions and propose next steps.”36Views0likes0CommentsAI Agents in Production: From Prototype to Reality - Part 10
This blog post, the tenth and final installment in a series on AI agents, focuses on deploying AI agents to production. It covers evaluating agent performance, addressing common issues, and managing costs. The post emphasizes the importance of a robust evaluation system, providing potential solutions for performance issues, and outlining cost management strategies such as response caching, using smaller models, and implementing router models.1.2KViews3likes1CommentOperationalize your Prompt Engineering Skills with Azure Prompt Flow
In today’s AI-driven world, prompt engineering is a game-changing skill for developers and professionals alike. With Azure Prompt Flow, you can harness the power of open-source LLMs to solve real-world operational challenges! This article guides you through using Azure’s robust tools to build, deploy, and refine your own LLM apps—from chatbots to data extraction tools and beyond. Whether you're just starting or looking to sharpen your AI expertise, this guide has everything you need to unlock new possibilities with prompt engineering. Dive in and take your tech journey to the next level!1.5KViews5likes3CommentsEmbracing Responsible AI: A Comprehensive Guide and Call to Action
In an age where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, the need for responsible AI practices has never been more critical. From healthcare to finance, AI systems influence decisions affecting millions of people. As developers, organizations, and users, we are responsible for ensuring that these technologies are designed, deployed, and evaluated ethically. This blog will delve into the principles of responsible AI, the importance of assessing generative AI applications, and provide a call to action to engage with the Microsoft Learn Module on responsible AI evaluations. What is Responsible AI? Responsible AI encompasses a set of principles and practices aimed at ensuring that AI technologies are developed and used in ways that are ethical, fair, and accountable. Here are the core principles that define responsible AI: Fairness AI systems must be designed to avoid bias and discrimination. This means ensuring that the data used to train these systems is representative and that the algorithms do not favor one group over another. Fairness is crucial in applications like hiring, lending, and law enforcement, where biased AI can lead to significant societal harm. Transparency Transparency involves making AI systems understandable to users and stakeholders. This includes providing clear explanations of how AI models make decisions and what data they use. Transparency builds trust and allows users to challenge or question AI decisions when necessary. Accountability Developers and organizations must be held accountable for the outcomes of their AI systems. This includes establishing clear lines of responsibility for AI decisions and ensuring that there are mechanisms in place to address any negative consequences that arise from AI use. Privacy AI systems often rely on vast amounts of data, raising concerns about user privacy. Responsible AI practices involve implementing robust data protection measures, ensuring compliance with regulations like GDPR, and being transparent about how user data is collected, stored, and used. The Importance of Evaluating Generative AI Applications Generative AI, which includes technologies that can create text, images, music, and more, presents unique challenges and opportunities. Evaluating these applications is essential for several reasons: Quality Assessment Evaluating the output quality of generative AI applications is crucial to ensure that they meet user expectations and ethical standards. Poor-quality outputs can lead to misinformation, misrepresentation, and a loss of trust in AI technologies. Custom Evaluators Learning to create and use custom evaluators allows developers to tailor assessments to specific applications and contexts. This flexibility is vital in ensuring that the evaluation process aligns with the intended use of the AI system. Synthetic Datasets Generative AI can be used to create synthetic datasets, which can help in training AI models while addressing privacy concerns and data scarcity. Evaluating these synthetic datasets is essential to ensure they are representative and do not introduce bias. Call to Action: Engage with the Microsoft Learn Module To deepen your understanding of responsible AI and enhance your skills in evaluating generative AI applications, I encourage you to explore the Microsoft Learn Module available at this link. What You Will Learn: Concepts and Methodologies: The module covers essential frameworks for evaluating generative AI, including best practices and methodologies that can be applied across various domains. Hands-On Exercises: Engage in practical, code-first exercises that simulate real-world scenarios. These exercises will help you apply the concepts learned tangibly, reinforcing your understanding. Prerequisites: An Azure subscription (you can create one for free). Basic familiarity with Azure and Python programming. Tools like Docker and Visual Studio Code for local development. Why This Matters By participating in this module, you are not just enhancing your skills; you are contributing to a broader movement towards responsible AI. As AI technologies continue to evolve, the demand for professionals who understand and prioritize ethical considerations will only grow. Your engagement in this learning journey can help shape the future of AI, ensuring it serves humanity positively and equitably. Conclusion As we navigate the complexities of AI technology, we must prioritize responsible AI practices. By engaging with educational resources like the Microsoft Learn Module on responsible AI evaluations, we can equip ourselves with the knowledge and skills necessary to create AI systems that are not only innovative but also ethical and responsible. Join the movement towards responsible AI today! Take the first step by exploring the Microsoft Learn Module and become an advocate for ethical AI practices in your community and beyond. Together, we can ensure that AI serves as a force for good in our society. References Evaluate generative AI applications https://learn.microsoft.com/en-us/training/paths/evaluate-generative-ai-apps/?wt.mc_id=studentamb_263805 Azure Subscription for Students https://azure.microsoft.com/en-us/free/students/?wt.mc_id=studentamb_263805 Visual Studio Code https://code.visualstudio.com/?wt.mc_id=studentamb_263805759Views0likes0CommentsExploring Generative AI: A Hands-on Course on Prompt Engineering for non-tech students - Part 1
Introduction Generative Artificial Intelligence (AI) has transformed the digital landscape through "intent-based outcome specification," a paradigm where users describe desired outcomes via detailed prompts instead of traditional commands. This course - targeting a non-developer audience - delved into the foundational principles of Generative AI and Large Language Models (LLMs), focusing on their core mechanisms and capabilities. Students learned and practiced effective prompting techniques, essential for navigating this powerful yet complex method. The course included the analysis and discussion of recent research on prompt engineering, keeping students abreast of the latest developments. The structure of the course balanced theoretical understanding and practical application, with 30% dedicated to traditional lectures and 70% to hands-on workshops and collaborative group projects. Practical exercises using models like GPT allowed students to apply their theoretical knowledge in real-world scenarios. Group projects focused on specific application domains – including music, literature and cuisine - leading to presentations, peer reviews, and instructor feedback. The course - comprising 20 hours of direct instruction - was conducted at the Fondazione Bruno Kessler (FBK) campuses in Povo. Instructors Antonio Bucchiarone and Nadia Mana guided the learning journey. Additionally, Carlotta Castelluccio from Microsoft conducted a seminar on Responsible AI, emphasizing ethical considerations in AI applications. In this first part of the blog series, we are going to present the methodological framework and the tools used throughout the course. In the second part, we are going to cover the student projects’ main outcomes and key takeaways. The Card Model Template and the Flow of Cards One of the primary goals of the course was to provide a clear and comprehensive understanding of prompt engineering. This was achieved by introducing a structured framework known as the "Card Model" to define and organize generative AI tasks. In the context of this course, a card refers to a structured format or template used to define a specific task or objective for generating content or output using generative AI techniques. The Card Model serves as a conceptual framework that outlines the structure, components, and relationships involved in generating content or output using generative AI techniques. It provides a high-level abstraction of the task, capturing its essential elements and defining their interactions. Here’s a simplified model of a generative AI task: Objective: This is the overarching goal or purpose of the generative AI task, defining what needs to be achieved through the content generation process. Input: Information provided to the generative AI technique to guide the content generation process. This includes: Prompt: A starting point or stimulus to generate content, such as a partial sentence, a question, an instruction, or other forms of input. Context: Additional information or constraints that provide context for the generation task, such as background knowledge, relevant data sources, or specific requirements. Generative Model: The AI model responsible for generating content based on the input provided. Examples include pre-trained language models like OpenAI GPT-3.5Turbo, neural network architectures for text generation, or other generative AI systems. Output: The generated content produced by the generative model in response to the input, including: Generated Text: The actual output, which could be in the form of text, images, or other media. Evaluation Metrics: Criteria used to assess the quality and relevance of the generated content, including measures of coherence, relevance, fluency, and other factors depending on the specific task requirements. Feedback Loop: A mechanism for iteratively improving the generative AI model based on feedback from users or evaluators. This may involve refining the input prompts, adjusting model parameters, or incorporating additional training data to enhance performance. The Card Model helps define the key components involved in the task and their relationships, facilitating the design, execution, and evaluation of generative AI tasks in various applications. Cards Flow Model The concept of flow was also introduced in the course to provide a formal representation of the relationships between different cards composing a generative AI task. This flow model helps in visualizing and understanding the sequential and conditional transitions between different stages of a generative AI task, ensuring a structured and systematic approach to designing, executing, and evaluating generative AI processes. In more details, cards can be combined together to create complex workflows by defining specific transitions and dependencies between them. By linking these cards through directed edges, students can create intricate flows that mirror real-world applications. This pattern also helps students to break down complex tasks into smaller subtasks, described through detailed prompts and potentially addressed by different specialized models, generally leading to a more accurate final outcome. For example, a card flow might begin with a card that generates an initial story prompt. The output of this card could then flow into a card that adds contextual details, which in turn flows into another card responsible for generating the story based on the enhanced prompt. Subsequent cards could be used to evaluate the generated content, refine the prompt based on evaluation metrics, and iterate the process. To ensure a thorough understanding of these flows, students were asked to evaluate different paths within a flow. This involved analyzing how changes in one card could affect the overall output and exploring alternative pathways to achieve the desired outcome. Students were tasked with: Mapping Out Flows: Students mapped out various flows, identifying all possible paths and transitions between cards. Evaluating Paths: They evaluated each path to understand how different sequences and combinations of tasks impacted the final output. Comparing Outcomes: Students compared outcomes from different paths to determine which flow produced the most coherent, relevant, and high-quality results. Feedback and Iteration: They incorporated feedback into their flows, refining cards and transitions to optimize the generative process. By engaging in these activities, students gained hands-on experience in managing complex generative AI tasks, learning to anticipate and handle the dependencies and contingencies that arise in practical applications. This exercise not only reinforced their understanding of prompt engineering but also highlighted the importance of structured planning and iterative improvement in generative AI projects. The Azure AI Proxy Playground Students learned to interact with OpenAI models through the GUI offered by the Azure AI Proxy Playground. The service is an open-source solution which provides a Playground-like experience to explore the Azure OpenAI chat completions using a time-bound event code with different models and parameters. It’s designed for educational scenarios (e.g., a course, a hackathon, or a workshop) where students might not have access to an Azure subscription enabled with Azure OpenAI service and/or are not familiar with the Azure ecosystem and how to provision and consume Azure AI resources. By leveraging this solution, we were able to provide students with a simplified lab environment, where all the complexity related to the Cloud resources provisioning and model deployments was hidden to the final user and managed through a single Azure subscription, connected to the Proxy Playground. This was particularly helpful in the context of a course whose audience was non-technical and whose focus was learning to interact with large language models through prompt engineering techniques. For the sake of the course, we provisioned a gpt-3.5 turbo instance, so all the students’ interactions via the playground happened with that specific model. Tool GUI and Card Model mapping The Playground GUI is composed of several elements. Most of them can be directly mapped with the Card Model components, ensuring consistency between the theoretical concepts and the actual experimentations. User prompt: free-form text field used to enter the user request to the model. It’s the prompt component of the input in the card model. System message: free-form text field used to enter additional information to use in responses, data sources and/or tone and style specifications. It maps with the context component of the input in the card model. Configuration: parameters to tune the degree of randomness of the responses. It also includes a dropdown menu to select the model to use as chat engine, what we call generative model in the card template. Assistant response: in the chat session the user can read the model’s response, aka the generated text component of the output in the card model. Summary In this article, we covered the methodological framework and tools used in the Prompt Engineering course at Fondazione Bruno Kessler, to teach non-tech students to effectively interact with generative AI models. We explored the "Card Model" - a structured approach to define and organize generative AI tasks - and the concept of the "flow", which further structures the relationships between tasks, aiding in the creation of complex workflows. Students utilized the Azure AI Proxy Playground, an open-source GUI, to interact with OpenAI models like GPT-3.5 Turbo, applying their theoretical knowledge in practical scenarios without needing extensive technical skills. In the second part of the blog series, we will delve into the main outcomes of the students' projects and the key takeaways from their practical applications.5.3KViews1like0CommentsExploring AI Development and Management: A Journey through Contoso Chat and LLM Ops
In this blog, we'll navigate through the world of AI models, exploring Contoso Chat, Prompt Engineering, limitations of Prompt Engineering, and Large Language Models. We'll introduce tools like the RAG Pattern and Azure AI Studio that can boost AI responses and system performance. Ready to dive into the intricacies of AI development and management? Join us!15KViews3likes1CommentIntroduction to Prompt Engineering
With GPT-3, GPT-3.5, and GPT-4 prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. Basic concepts and elements of GPT prompts Prompt components Instructions Primary Content Examples Cue Supporting content Prompts Basics Text prompts are how users interact with GPT models GPT models attempt to produce the next series of words that are most likely to follow from the previous text. Prompts | Best Practices Be Specific: Leave as little to interpretation as possible. Restrict the operational space Be Descriptive: Use analogies Double Down:Sometimes you may need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc. Order Matters:The order in which you present information to the model may impact the output. Whether you put instructions before your content (“summarize the following…”) or after (“summarize the above…”) can make a difference in output. Even the order of few-shot examples can matter. This is referred to as recency bias. Give the model an “out” :It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with ‘not found’ if the answer is not present". This can help the model avoid generating false responses Prompt components Instructions When we show up to the present moment with all of our senses, we invite the world to fill us with joy. The pains of the past are behind us. The future has yet to unfold. But the now is full of beauty simply waiting for our attention. Instructions are likely the most commonly used prompt component Instructions - instruct the model on what to do Space efficiency TABLES As shown in the examples in the previous section, GPT models can understand tabular formatted data quite easily. This can be a space efficient way to include data, rather than preceding every field with name (such as with JSON). WHITE SPACE Consecutive whitespaces are treated as separate tokens which can be an easy way to waste space. Spaces preceding a word, on the other hand, are typically treated as part of the same token as the word. Carefully watch your usage of whitespace and don’t use punctuation when a space alone will do. Advanced techniques in prompt design and prompt engineering Certain models expect a specialized prompt structure For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play: Chat Completion API Completion API Each API requires input data to be formatted differently Use of affordances | Factual claims, Search queries and Snippets Factual claims: John Smith is married to Lucy Smith John and Lucy have five kids John works as a software engineer at Microsoft Search queries: John Smith married to Lucy Smith John Smith number of children John Smith software engineer Microsoft Snippets: [1] … John Smith’s wedding was on September 25, 2012 … [2] … John Smith was accompanied by his wife Lucy to a party [3]John was accompanied to the soccer game by his two daughters and three sons [4] … After spending 10 years at Microsoft, Smith founded his own startup, Tailspin Toys [5] John M is the town smith, and he married Fiona. They have a daughter named Lucy System message framework and template recommendations for Large Language Models (LLMs) Define the model’s profile, capabilities, and limitations for your scenari Define the specific task(s) Define how the model should complete the tasks Define the scope and limitations Define the posture and tone Define the model's output format Define the language and syntax Define any styling or formatting Provide example(s) to demonstrate the intended behavior of the mode Describe difficult use cases Show the potential “inner monologue” Define additional behavioral guardrail Identify and prioritize the harms you’d like to address.1.8KViews1like0Comments