Introducing GitHub Copilot for Azure: Your Cloud Coding Companion in VS Code!
🚀 Ready to Elevate Your Cloud Coding Game? Meet “@azure” – your companion in GitHub Copilot Chat. Get personalized answers about Azure resources, streamline deployments, and improve your troubleshooting. Early access awaits – sign up now! 🌟

Visual Studio Code AI Toolkit: Run LLMs locally
The AI Toolkit is here, bringing LLMs and SLMs to our local machines. It covers three main workflows:
- Downloading models: the toolkit lets us easily download models to our local machine.
- Evaluating models: whenever we need to evaluate a model to check its feasibility for a particular application, the toolkit lets us do so in a playground environment, which is what we will see in this blog.
- Fine-tuning: this deals with training a model further to perform the tasks we specifically want it to do. A base model handles generic tasks on generic data; fine-tuning gives it a particular flavor for a particular task.
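Beyond the playground UI, recent AI Toolkit releases also expose loaded models through a local OpenAI-compatible REST endpoint, which makes quick programmatic checks easy. A minimal sketch follows; the port (5272), path, and model name are assumptions, so verify them against your toolkit version.

import requests

# Query a model loaded in the AI Toolkit through its local OpenAI-compatible endpoint.
# The port, path, and model id below are assumptions; check your AI Toolkit documentation.
response = requests.post(
    "http://127.0.0.1:5272/v1/chat/completions",
    json={
        "model": "Phi-3-mini-4k-instruct",  # placeholder local model id
        "messages": [{"role": "user", "content": "In one sentence, what is an SLM?"}],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])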
Announcing the AI Toolkit for Visual Studio Code
Announcing the AI Toolkit for Visual Studio Code, available now in the Visual Studio Marketplace. This powerful extension lets developers explore, fine-tune, and integrate top models from Azure AI Studio and HuggingFace into their apps. The AI Toolkit evolved from the Windows AI Studio extension released in November 2023, addressing user feedback for a cross-platform experience. Key features include a curated model catalog optimized for Windows and Linux (macOS models coming soon), a model playground for local experimentation, advanced fine-tuning techniques, detailed evaluations, and easy deployment to Azure. Enjoy high-performance inferencing with OnnxRuntime and DirectML, leveraging GPUs, NPUs, and CPUs on Windows. Start building and fine-tuning projects within VS Code today!
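As a rough illustration of the OnnxRuntime + DirectML pairing mentioned above, the sketch below opens an ONNX model preferring the DirectML execution provider (GPU/NPU on Windows) and falls back to CPU. The model path is a placeholder, and DmlExecutionProvider requires the onnxruntime-directml package.

import onnxruntime as ort  # pip install onnxruntime-directml (Windows)

# Prefer DirectML (GPU/NPU) and fall back to CPU; "model.onnx" is a placeholder path.
session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print("Active providers:", session.get_providers())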
Data Science Day 2024, 14th March 2024 - Schedule Announcement
We are thrilled to announce that Python Data Science Day will be taking place on March 14th, 2024, with talks all day: a "PyDay" on Pi Day, 3.14. If you're a Python developer, entrepreneur, data scientist, student, or researcher working on projects from hobbyist and start-up to enterprise level, you'll find solutions to modernize your data pipelines and answer complex queries with data.

June 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

We introduced a range of exciting new features and updates to Azure Database for PostgreSQL in June. From the general availability of PostgreSQL 17 to the public preview of the SSD v2 storage tier for High Availability, there were significant feature announcements across multiple areas last month. Stay tuned as we dive deeper into each of these feature updates. Before that, let's look at the POSETTE 2025 highlights.

POSETTE 2025 Highlights
We hosted POSETTE: An Event for Postgres 2025 in June! This year marked our 4th annual event, featuring 45 speakers and a total of 42 talks. PostgreSQL developers, contributors, and community members came together to share insights on topics covering everything from AI-powered applications to deep dives into PostgreSQL internals. If you missed it, you can catch up by watching the POSETTE livestream sessions. If this conference sounds interesting and you want to be part of it next year, don't forget to subscribe to POSETTE news.

Feature Highlights
- General Availability of PostgreSQL 17 with 'in-place' upgrade support
- General Availability of Online Migration
- Migration service support for PostgreSQL 17
- Public Preview of SSD v2 High Availability
- New region: Indonesia Central
- VS Code Extension for PostgreSQL enhancements
- Enhanced role management
- Ansible collection released for the latest REST API version

General Availability of PostgreSQL 17 with 'In-Place' upgrade support
PostgreSQL 17 is now generally available on Azure Database for PostgreSQL flexible server, bringing key community innovations to your workloads. You'll see faster vacuum operations, richer JSON processing, smarter query planning (including better join ordering and parallel execution), dynamic logical replication controls, and enhanced security and audit-logging features, backed by Azure's five-year support policy. You can easily upgrade to PostgreSQL 17 using the in-place major version upgrade feature available through the Azure portal and CLI, without changing server endpoints or reconfiguring applications. The process includes built-in validations and rollback safety to help ensure a smooth and reliable upgrade experience. For more details, read the PostgreSQL 17 release announcement blog.
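As a taste of the richer JSON processing in PostgreSQL 17, here is a minimal sketch of the new JSON_TABLE function, which projects JSON documents into relational rows. It assumes the psycopg driver, and the connection values are placeholders.

import psycopg  # pip install "psycopg[binary]"

# Placeholders: substitute your flexible server's connection details.
conninfo = "host=<server>.postgres.database.azure.com dbname=postgres user=<user> password=<password> sslmode=require"

# JSON_TABLE is new in PostgreSQL 17: it flattens JSON documents into rows and columns.
query = """
SELECT item.name, item.qty
FROM json_table(
    '[{"name": "widget", "qty": 2}, {"name": "gadget", "qty": 5}]'::jsonb,
    '$[*]' COLUMNS (name text PATH '$.name', qty int PATH '$.qty')
) AS item;
"""

with psycopg.connect(conninfo) as conn:
    for name, qty in conn.execute(query):
        print(name, qty)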
General Availability of Online Migration
We're excited to announce that Online Migration is now generally available in the migration service for Azure Database for PostgreSQL! Online migration minimizes downtime by keeping your source database operational during the migration process, with continuous data synchronization until cutover. This is particularly beneficial for mission-critical applications that require minimal downtime during migration. This milestone brings production-ready online migration capabilities supporting various source environments, including on-premises PostgreSQL, Azure VMs, Amazon RDS, Amazon Aurora, and Google Cloud SQL. For detailed information about the capabilities and how to get started, visit our Migration service documentation.

Migration service support for PostgreSQL 17
Building on our PostgreSQL 17 general availability announcement, the migration service for Azure Database for PostgreSQL now fully supports PostgreSQL 17. This means you can seamlessly migrate your existing PostgreSQL instances from various source platforms to Azure Database for PostgreSQL flexible server running PostgreSQL 17. With this support, organizations can take advantage of the latest PostgreSQL 17 features and performance improvements while leveraging our online migration capabilities for minimal-downtime transitions. The migration service maintains full compatibility with PostgreSQL 17's enhanced security features, improved query planning, and other community innovations.

Public Preview of SSD v2 High Availability
We're excited to announce the public preview of High Availability (HA) support for the Premium SSD v2 storage tier in Azure Database for PostgreSQL flexible server. This support allows you to enable zone-redundant HA with Premium SSD v2 during server deployments. In addition to high availability on SSD v2, you get improved resiliency and failover times of around 10 seconds, helping you build resilient, high-performance PostgreSQL applications with minimal overhead. This feature is particularly well suited for mission-critical workloads, including those in financial services, real-time analytics, retail, and multi-tenant SaaS platforms.

Key benefits of Premium SSD v2:
- Flexible disk sizing: scale from 32 GiB to 64 TiB in 1-GiB increments.
- Fast failovers: planned or unplanned failovers typically take around 10 seconds.
- Independent performance configuration: achieve up to 80,000 IOPS and 1,200 MB/s throughput without resizing your disk.
- Baseline performance: free throughput of 125 MB/s and 3,000 IOPS for disks up to 399 GiB, and 500 MB/s and 12,000 IOPS for disks of 400 GiB and above, at no additional cost.

For more details, please refer to the Premium SSD v2 HA blog.

New Region: Indonesia Central
New region rollout! Azure Database for PostgreSQL flexible server is now available in Indonesia Central, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

VS Code Extension for PostgreSQL enhancements
The brand-new VS Code extension for PostgreSQL launched in mid-May and has already garnered over 122K installs from the Visual Studio Marketplace! And the kickoff blog about this new IDE for PostgreSQL in VS Code has had over 150K views. This extension makes it easier for developers to seamlessly interact with PostgreSQL databases. We are committed to making this experience better and have introduced several enhancements to improve reliability and compatibility. You now have better control over service restarts and process terminations on supported operating systems. Additionally, we have added support for parsing additional connection-string formats in the "Create Connection" flow, making it more flexible and user-friendly. We also resolved Entra token-fetching failures for newly created accounts, ensuring a smoother onboarding experience. On the feature front, you can now leverage Entra Security Groups and guest accounts across multiple tenants when establishing new connections, streamlining permission management in complex Entra environments. Don't forget to update to the latest version in the marketplace to take advantage of these enhancements, and visit our GitHub repository to learn more about this month's release. If you learn best by video, these two videos are a great way to learn more about the new VS Code extension:
- POSETTE 2025: Introducing Microsoft's VS Code Extension for PostgreSQL
- Demo of using the VS Code extension for PostgreSQL

Enhanced role management
With the introduction of PostgreSQL 16, a strict role hierarchy structure was implemented. As a result, GRANT statements that were functional in PostgreSQL 11-15 may no longer work in PostgreSQL 16. We have improved administrative flexibility and addressed this limitation in Azure Database for PostgreSQL flexible server across all PostgreSQL versions: members of 'azure_pg_admin' can now manage and access objects owned by any non-restricted role, giving them control and permissions over user-defined roles. To learn more about this improvement, please refer to our documentation on roles.
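To make the improvement concrete, here is a minimal sketch, with hypothetical role, table, and connection values, of the kind of user-defined role administration a member of azure_pg_admin can now perform, again assuming the psycopg driver.

import psycopg  # pip install "psycopg[binary]"

# Hypothetical names throughout; run as a server admin (member of azure_pg_admin).
statements = [
    "CREATE ROLE app_owner NOLOGIN;",
    "CREATE ROLE reporting NOLOGIN;",
    "GRANT app_owner TO reporting;",  # membership grant between user-defined roles
    "CREATE TABLE IF NOT EXISTS app_data (id int);",
    "ALTER TABLE app_data OWNER TO app_owner;",  # manage an object owned by a non-restricted role
    "GRANT SELECT ON app_data TO reporting;",
]

conninfo = "host=<server>.postgres.database.azure.com dbname=postgres user=<admin> password=<password> sslmode=require"
with psycopg.connect(conninfo) as conn:
    for sql in statements:
        conn.execute(sql)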
Ansible collection released for latest REST API version
A new version of the Ansible collection for Azure Database for PostgreSQL flexible server has been released. Version 3.6.0 now includes the latest GA REST API features. This update introduces several enhancements, such as support for virtual endpoints, on-demand backups, system-assigned identity, storage auto-grow, and seamless switchover of read replicas to a new site (Read Replicas - Switchover), among many other improvements. To get started, please visit the flexible server Ansible collection link.

Azure Postgres Learning Bytes 🎓
Using the PostgreSQL VS Code extension with agent mode
The VS Code extension for PostgreSQL has been trending in the developer community. In this month's Learning Bytes section, we show how to enable the extension and use GitHub Copilot in Agent Mode to create a database, add dummy data, and visualize it.
Step 1: Download the VS Code extension for PostgreSQL.
Step 2: Check that GitHub Copilot and Agent Mode are enabled. Go to File -> Preferences -> Settings (Ctrl + ,), search for and enable "chat.agent.enabled" and "pgsql copilot.enable", then reload VS Code to apply the changes.
Step 3: Connect to Azure Database for PostgreSQL. Use the extension to enter instance details and establish a connection, then create and view schemas under Databases -> Schemas.
Step 4: Visualize and populate data. Right-click the database to visualize schemas, and ask the agent to insert dummy data or run queries; a sketch of the kind of SQL this produces follows below.
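The SQL that the Copilot agent generates will vary from run to run; the sketch below is a hypothetical example of the create-and-seed step it might perform, shown here with the psycopg driver and placeholder connection values.

import psycopg  # pip install "psycopg[binary]"

# Hypothetical example of the statements the agent might run when asked to create
# a table and seed it with dummy data; the SQL it actually generates will differ.
conninfo = "host=<server>.postgres.database.azure.com dbname=<db> user=<user> password=<password> sslmode=require"
with psycopg.connect(conninfo) as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers ("
        "  id serial PRIMARY KEY,"
        "  name text NOT NULL,"
        "  signup_date date)"
    )
    conn.execute(
        "INSERT INTO customers (name, signup_date) "
        "VALUES ('Alice', '2025-06-01'), ('Bob', '2025-06-15')"
    )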
Conclusion
That's all for the June 2025 feature updates! We are dedicated to continuously improving Azure Database for PostgreSQL with every release. Stay updated with the latest feature announcements by following this link. Your feedback is important and helps us continue to improve; if you have any suggestions, ideas, or questions, we'd love to hear from you. Share your thoughts here: aka.ms/pgfeedback. We look forward to bringing you even more exciting updates throughout the year, stay tuned!

New Generative AI Features in Azure Database for PostgreSQL
by: Maxim Lukiyanov, PhD, Principal PM Manager

This week at the Microsoft Build conference, we're excited to unveil a suite of new Generative AI capabilities in Azure Database for PostgreSQL flexible server. These features unlock a new class of applications powered by an intelligent database layer, expanding the horizons of what application developers can achieve. In this post, we'll give you a brief overview of these announcements.

Data is the fuel of AI. Looking back, the intelligence of Large Language Models (LLMs) can be reframed as intelligence that emerged from the vast data they were trained on. The LLMs just happened to be the technological leap necessary to extract that knowledge; the knowledge itself was hidden in the data all along. In modern AI applications, the Retrieval-Augmented Generation (RAG) pattern applies this same principle to real-time data: RAG extracts relevant facts from data on the fly to augment an LLM's knowledge. At Microsoft, we believe this principle will continue to transform technology. Every bit of data will be squeezed dry of every bit of knowledge it holds. And there's no better place to find the most critical and up-to-date data than in databases. Today, we're excited to announce the next steps on our journey to make databases smarter, so they can help you capture the full potential of your data.

Fast and accurate vector search with DiskANN
First, we're announcing the General Availability of DiskANN vector indexing in Azure Database for PostgreSQL. Vector search is at the heart of the RAG pattern, and it continues to be a cornerstone technology for the new generation of AI agents, giving them contextual awareness and access to fresh knowledge hidden in data. DiskANN brings years of state-of-the-art innovation in vector indexing from Microsoft Research directly to our customers. This release introduces support for vectors of up to 16,000 dimensions, far surpassing the 2,000-dimension limit of the standard pgvector extension in PostgreSQL, which enables the development of highly accurate applications using high-dimensional embeddings. We've also accelerated index creation with enhanced memory management, parallel index building, and other optimizations, delivering up to 3x faster index builds while reducing disk I/O.

Additionally, we're excited to announce the Public Preview of Product Quantization, a cutting-edge vector compression technique that delivers exceptional compression while maintaining high accuracy. DiskANN Product Quantization enables efficient storage of large vector volumes, making it ideal for production workloads where both performance and cost matter. With Product Quantization enabled, DiskANN offers up to 10x faster performance and 4x cost savings compared to pgvector HNSW. You can learn more about DiskANN in a dedicated blog post.
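To give a flavor of the feature, here is a minimal sketch of creating a DiskANN index and running a similarity search with the psycopg driver. The table, column, and connection values are placeholders, and it assumes the vector and pg_diskann extensions are allow-listed on your server; check the DiskANN documentation for the exact extension name and index options.

import psycopg  # pip install "psycopg[binary]"

conninfo = "host=<server>.postgres.database.azure.com dbname=postgres user=<user> password=<password> sslmode=require"

setup = [
    "CREATE EXTENSION IF NOT EXISTS vector;",
    "CREATE EXTENSION IF NOT EXISTS pg_diskann;",  # assumed extension name; must be allow-listed
    "CREATE TABLE IF NOT EXISTS docs (id bigserial PRIMARY KEY, embedding vector(1536));",
    "CREATE INDEX IF NOT EXISTS docs_diskann_idx ON docs USING diskann (embedding vector_cosine_ops);",
]

with psycopg.connect(conninfo) as conn:
    for sql in setup:
        conn.execute(sql)
    # Top-5 approximate nearest neighbors to a placeholder query embedding.
    query_vec = "[" + ",".join(["0"] * 1536) + "]"
    rows = conn.execute(
        "SELECT id FROM docs ORDER BY embedding <=> %s::vector LIMIT 5;",
        (query_vec,),
    ).fetchall()
    print(rows)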
Semantic operators in the database
Next, we're announcing the Public Preview of Semantic Operators in Azure Database for PostgreSQL, bringing a new intelligence layer to relational algebra, integrated directly into the SQL query engine. While vector search is foundational to Generative AI (GenAI) apps and agents, it only scratches the surface of what's possible. Semantic relationships between elements of enterprise data are not visible to vector search. This knowledge exists within the data but is lost at the lowest level of the stack, vector search, and this loss propagates upward, limiting an agent's ability to reason about the data. This is where the new Semantic Operators come in.

Semantic Operators leverage LLMs to add semantic understanding of operational data. Today, we're introducing four operators:
- generate() - a versatile generation operator capable of ChatGPT-style responses.
- is_true() - a semantic filtering operator that evaluates filter conditions and joins expressed in natural language.
- extract() - a knowledge extraction operator that extracts hidden semantic relationships and other knowledge from your data, bringing a new level of intelligence to your GenAI apps and agents.
- rank() - a highly accurate semantic ranking operator, offering two types of state-of-the-art re-ranking models: Cohere Rerank v3.5 or OpenAI gpt-4.1 models from the Azure AI Foundry Model Catalog.

You can learn more about Semantic Operators in a dedicated blog post.
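As an illustration only: the four operators are named in this announcement, but the preview's exact SQL surface may differ. The sketch below assumes they appear as functions in an azure_ai schema, and the reviews table is hypothetical; consult the Semantic Operators blog post for the real signatures.

import psycopg  # pip install "psycopg[binary]"

# Assumed schema and function name; hypothetical table and column names.
sql = """
SELECT r.id, r.review_text
FROM reviews r
WHERE azure_ai.is_true('Is this review about battery life? ' || r.review_text)
LIMIT 10;
"""

conninfo = "host=<server>.postgres.database.azure.com dbname=postgres user=<user> password=<password> sslmode=require"
with psycopg.connect(conninfo) as conn:
    for row in conn.execute(sql):
        print(row)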
Graph database and GraphRAG knowledge graph support
Finally, we're announcing the General Availability of GraphRAG support and the General Availability of the Apache AGE extension in Azure Database for PostgreSQL. The Apache AGE extension on Azure Database for PostgreSQL offers a cost-effective, managed graph database service powered by the PostgreSQL engine, and serves as the foundation for building GraphRAG applications. Once extracted, the semantic relationships in the data can be stored in various ways within the database. While relational tables with referential integrity can represent some relationships, this approach is suboptimal for knowledge graphs. Semantic relationships are dynamic; many aren't known ahead of time and can't be effectively modeled by a fixed schema. Graph databases provide a much more flexible structure, enabling knowledge graphs to be expressed naturally. Apache AGE supports openCypher, the emerging standard for querying graph data. openCypher offers an expressive, intuitive language well suited for knowledge graph queries. We believe that combining Semantic Operators with graph support in Azure Database for PostgreSQL creates a compelling data platform for the next generation of AI agents, capable of effectively extracting, storing, and retrieving semantic relationships in your data. You can learn more about graph support in a separate blog post.
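Here is a minimal sketch of what openCypher over Apache AGE looks like from Python; the graph name, labels, and connection values are hypothetical, and the extension must be enabled on your server.

import psycopg  # pip install "psycopg[binary]"

conninfo = "host=<server>.postgres.database.azure.com dbname=postgres user=<user> password=<password> sslmode=require"
with psycopg.connect(conninfo) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS age;")
    conn.execute("LOAD 'age';")
    conn.execute('SET search_path = ag_catalog, "$user", public;')
    conn.execute("SELECT create_graph('kg');")  # errors if 'kg' already exists
    # Create two nodes and a relationship, then query them back with openCypher.
    conn.execute("""
        SELECT * FROM cypher('kg', $$
            CREATE (:Concept {name: 'RAG'})-[:RELATES_TO]->(:Concept {name: 'vector search'})
        $$) AS (result agtype);
    """)
    rows = conn.execute("""
        SELECT * FROM cypher('kg', $$
            MATCH (a:Concept)-[:RELATES_TO]->(b:Concept)
            RETURN a.name, b.name
        $$) AS (a agtype, b agtype);
    """).fetchall()
    print(rows)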
Resources to help you get started
We're also happy to announce the availability of new resources and tools for application developers:
- Model Context Protocol (MCP) is an emerging open protocol designed to integrate AI models with external data sources and services. We have integrated the MCP server for Azure Database for PostgreSQL into the Azure MCP Server, making it easy to connect your agentic apps not only to Azure Database for PostgreSQL but to other Azure services as well, through one unified interface. To learn more, refer to this blog post.
- A new Solution Accelerator, which showcases all of the capabilities we have announced today working together in one solution, solving real-world problems of ecommerce retail reimagined for the agentic era.
- A new PostgreSQL extension for VS Code for application developers and database administrators alike, bringing a new generation of query editing and Copilot experiences to the world of PostgreSQL.
- New enterprise features making Azure Database for PostgreSQL faster and more secure, covered in the accompanying post.

Begin your journey
Generative AI innovation continues its advancement, bringing new opportunities every month. We're excited for what is to come and look forward to sharing this journey of discovery with our customers. With today's announcements - DiskANN vector indexing, Semantic Operators, and GraphRAG - Azure Database for PostgreSQL is ready to help you explore new boundaries of what's possible. We invite you to begin your Generative AI journey today by exploring our new Solution Accelerator.

Prompt Engineering Simplified: AI Toolkit's Prompt Builder
In the age of generative AI, crafting effective prompts is no longer a nice-to-have, it's a must-have. Understanding how to communicate with the underlying models is the key to unlocking their true potential and getting the results we need.

What are Prompts?
Every time we want to communicate with a language model, we give it a set of instructions; we refer to these inputs as prompts. Prompts play a crucial role when working with GenAI models: the quality of a prompt directly impacts the output. Precise, well-crafted prompts are essential for achieving the desired results.

What factors craft an optimal Prompt?
Crafting an optimal prompt requires balancing clarity, specificity, and context. Besides these, constraints are a critical factor in crafting effective prompts.

Specificity: Clearly define the expectations; the prompt should leave no room for misinterpretation, and precise language is the key. Avoid vague language, since vague prompts lead to vague or irrelevant responses. E.g., "Tell me about history" ➔ "Explain the economic causes of the French Revolution".

Clarity: Use simple, unambiguous language. Avoid jargon unless your audience expects it; it is recommended to use action verbs like "write," "summarize," "explain," "translate".

Context: Provide background, e.g., "As a beginner in coding, how do I write a Python loop?". Give the LLM enough context to understand the situation, including relevant details, keywords, and background information.

Conciseness: Trim unnecessary words (e.g., "Describe photosynthesis" vs. "Can you tell me about how plants use sunlight?"). Ensure the prompt remains relevant to the desired output.

Tone & Audience Alignment: Match the tone to the goal (formal, casual, instructive). Example: for kids, "Explain how rainbows form in simple terms."

Explicit Instructions: Directly state what is needed, e.g., "Compare X and Y", "List pros and cons," "Write a poem about…".

Guiding Constraints: Limit scope to avoid overly broad answers, e.g., "Focus on environmental impacts, not economic ones". Constraints reduce ambiguity, focus responses, and improve relevance. A few example constraints:
- Format: "Summarize in 3 bullet points."
- Length: "Explain in 2 sentences."
- Scope: "Focus on environmental impacts, not economic ones."
- Style/Tone: "Write a casual email," or "Use non-technical terms."
- Technical limits: "Keep code examples under 50 lines."

A few advanced considerations for AI/LLM prompts:
- Examples or Demonstrations: Include examples to set expectations, e.g., "Write a limerick like this: There once was a cat from Peru…".
- Step-by-Step Guidance: Break complex tasks into steps, e.g., "First analyze the Python code, then suggest solutions".
- Role Assignment: Assign roles to guide the AI, e.g., "Act as a historian explaining World War 2".
- Avoid Bias: Neutral phrasing ensures fair responses, e.g., "Discuss pros and cons of renewable energy" vs. "Why is solar energy bad?"; the former is a well-formed prompt.

Prompt engineering is an iterative process. Experiment with different phrasings and structures to see what works best, analyze the LLM's responses, and refine your prompts accordingly, making adjustments to improve the accuracy and relevance of the output.
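To pull these factors together, here is a small, SDK-free sketch of composing a prompt with an explicit role, context, task, and guiding constraints; the wording is purely illustrative.

# Minimal sketch: composing a prompt that applies the factors above
# (role assignment, context, explicit instruction, constraints, format).
role = "You are a historian specializing in 18th-century Europe."
context = "The reader is a high-school student with no prior background."
task = "Explain the economic causes of the French Revolution."
constraints = "Answer in 3 bullet points, 2 sentences each. Focus on economics, not politics."

prompt = f"{role}\n{context}\n\nTask: {task}\nConstraints: {constraints}"
print(prompt)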
Prompt Builder
From the sections above, we know that crafting effective prompts is essential for robust AI engagement. The Prompt Builder tool in the AI Toolkit helps here by streamlining the whole process of crafting prompts. It supports users in the following areas:
- Prompt creation, modification, and evaluation: customize prompts through an accessible and straightforward interface.
- AI-assisted prompt generation: articulate the project concept in everyday language, and the AI-powered feature will produce prompts for your exploration.
- Organized output capability: craft prompts that yield outputs in a consistent, standardized, and predictable manner.
- Automated code generation for prompt usage: after experimenting with models and prompts, transition to coding immediately by accessing automatically generated, executable Python code.

The tool has three sections in the UI: Prompt Configuration, Response, and History.

Prompt Configuration Section
The Prompt Configuration section has four major subsections: Model, System Prompt, User Prompt, and Add Prompt.

Model: The Model subsection is where we select the model to use. The AI Toolkit offers a wide range of models, including remote models served from GitHub and models from providers such as OpenAI, Google, Anthropic, and Nvidia. For this tutorial we will be using OpenAI GPT-4o mini via GitHub.

System Prompt: In the System Prompt subsection, we provide instructions with relevant context to guide the system's responses. We can think of a system prompt as the "role" we give the AI before we ask it anything, like telling an actor what character to play.

Generate Prompt: Upon choosing a cloud-based / GitHub / remote model, a new tool called "Generate Prompt" is enabled. This AI-powered tool is especially useful for crafting well-defined prompts to use in the System Prompt subsection. Clicking "Generate Prompt" opens a small window that asks for an input prompt, and it can generate a prompt template from basic details about the task. In this tutorial, let's ask the LLM to generate a prompt about a "professor in a university teaching math". Once the message is entered, click the "Generate" button, and in a few seconds we have a well-structured prompt in the System Prompt subsection. The generated prompt is as follows:

Provide a detailed syllabus for a university-level mathematics course, including course objectives, weekly topics, assessment methods, and required materials. The syllabus should cover all essential components such as the course title, description, prerequisites, learning outcomes, weekly schedules, and any relevant policies regarding attendance, grading, and participation.

# Steps

1. **Course Title and Description**: Clearly state the title of the course and provide a brief description of what the course will cover.
2. **Prerequisites**: List any required courses or knowledge necessary for students to enroll.
3. **Learning Outcomes**: Define what students are expected to learn by the end of the course.
4. **Weekly Schedule**: Outline topics for each week, along with any associated readings or assignments.
5. **Assessment Methods**: Describe how students will be evaluated (e.g., exams, quizzes, projects).
6. **Required Materials**: Include information on textbooks and other resources needed for the course.
7. **Course Policies**: State attendance, grading, and participation rules.

# Output Format

The output should be formatted as a structured syllabus, presented in clear sections with headings for each part. The document should be detailed yet concise, ideally around 3-5 pages in length.
# Examples

**Example 1**
**Input:** Create a syllabus for a Calculus I course.
**Output:**
- **Course Title**: Calculus I
- **Description**: An introduction to limits, derivatives, and integrals.
- **Prerequisites**: Pre-Calculus or equivalent.
- **Learning Outcomes**: Students will be able to calculate limits, differentiate basic functions, and understand the Fundamental Theorem of Calculus.
- **Weekly Schedule**:
  - Week 1: Introduction to Limits
  - Week 2: Continuity
  - Week 3: Derivatives
  - ...
- **Assessment Methods**: Midterm exam (30%), Final exam (40%), Weekly quizzes (20%), Participation (10%).
- **Required Materials**: "Calculus: Early Transcendentals" by James Stewart.
- **Course Policies**: Attendance required, late assignments will incur a penalty.

**Example 2**
**Input:** Design a syllabus for a Linear Algebra course.
**Output:**
- **Course Title**: Linear Algebra
- **Description**: Study vector spaces, matrices, and linear transformations.
- **Prerequisites**: None.
- **Learning Outcomes**: Mastery of matrix operations and ability to solve systems of linear equations.
- **Weekly Schedule**:
  - Week 1: Introduction to Vector Spaces
  - Week 2: Matrix Operations
  - Week 3: Determinants
  - ...
- **Assessment Methods**: Two midterms (50%), Homework assignments (30%), Attendance (20%).
- **Required Materials**: "Linear Algebra Done Right" by Sheldon Axler.
- **Course Policies**: Participation in class discussions is mandatory.

# Notes

Ensure that the syllabus is comprehensive and tailored to the specific course topic. Consider including any unique teaching methods or technologies that will be employed during the course.

User Prompt: The user prompt is the specific question, instruction, or request that a person provides to the AI to elicit a response. It's the direct input from the user that initiates the AI's processing and generation of text. In the AI Toolkit, for the few models that support multimodal input, we can also upload images in this section. For this tutorial, let's input "Explain to me the Fourier equation in simple terms".

Add Prompt: If any additional prompt needs to be added, we can configure more user or assistant prompts here. In a conversation, we have:
- User Prompt: what the human says.
- Assistant Prompt: what the AI says.

The major configuration is now complete, and it's time to test the responses, in this case to see how well GPT-4o mini performs in the role of a university-level mathematics professor. To test it, we navigate to the next section, the Response section.

Response Section
The Response section is where we finally see the responses. It has "Run" and "View Code" buttons, and we can choose the type of response we need: simple text or a JSON schema. Upon choosing JSON schema, the user is prompted to "Prepare Schema"; users can define their own schema or select from a few provided examples. For this tutorial we will use the simple text format. With the setup ready, we click the "Run" button, and in a few seconds we have a well-formatted and accurate answer on the screen; the AI Toolkit's markdown support neatly formats all the mathematical signs and equations. We can also add this response to the "Assistant Prompt" using the button provided, which gives the LLM a better example in the generated code later. The result from the LLM now looks very satisfactory with our well-crafted prompt.
We can now proceed with the code generation feature of the Prompt Builder tool. Upon clicking the "View Code" button, the user is prompted to choose an SDK; this SDK lets us communicate with the API from code. For this tutorial, we will use the Azure AI Inference SDK; for more details on this SDK, refer here. The code requires the azure-ai-inference package; install it with pip install azure-ai-inference.

"""Run this model in Python

> pip install azure-ai-inference
"""
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.ai.inference.models import ImageContentItem, ImageUrl, TextContentItem
from azure.core.credentials import AzureKeyCredential

# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

client = ChatCompletionsClient(
    endpoint = "https://models.inference.ai.azure.com",
    credential = AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
    api_version = "2024-08-01-preview",
)

response = client.complete(
    messages = [
        SystemMessage(content = "Provide a detailed syllabus for a university-level mathematics course, including course objectives, weekly topics, assessment methods, and required materials.\n \nThe syllabus should cover all essential components such as the course title, description, prerequisites, learning outcomes, weekly schedules, and any relevant policies regarding attendance, grading, and participation.\n \n# Steps\n \n1. **Course Title and Description**: Clearly state the title of the course and provide a brief description of what the course will cover.\n2. **Prerequisites**: List any required courses or knowledge necessary for students to enroll.\n3. **Learning Outcomes**: Define what students are expected to learn by the end of the course.\n4. **Weekly Schedule**: Outline topics for each week, along with any associated readings or assignments.\n5. **Assessment Methods**: Describe how students will be evaluated (e.g., exams, quizzes, projects).\n6. **Required Materials**: Include information on textbooks and other resources needed for the course.\n7. **Course Policies**: State attendance, grading, and participation rules.\n \n# Output Format\n \nThe output should be formatted as a structured syllabus, presented in clear sections with headings for each part. The document should be detailed yet concise, ideally around 3-5 pages in length.\n \n# Examples\n \n**Example 1** \n**Input:** \nCreate a syllabus for a Calculus I course. \n**Output:** \n- **Course Title**: Calculus I \n- **Description**: An introduction to limits, derivatives, and integrals. \n- **Prerequisites**: Pre-Calculus or equivalent. \n- **Learning Outcomes**: Students will be able to calculate limits, differentiate basic functions, and understand the Fundamental Theorem of Calculus. \n- **Weekly Schedule**: \n - Week 1: Introduction to Limits \n - Week 2: Continuity \n - Week 3: Derivatives \n - ... \n- **Assessment Methods**: Midterm exam (30%), Final exam (40%), Weekly quizzes (20%), Participation (10%). \n- **Required Materials**: \"Calculus: Early Transcendentals\" by James Stewart. \n- **Course Policies**: Attendance required, late assignments will incur a penalty.\n \n**Example 2** \n**Input:** \nDesign a syllabus for a Linear Algebra course. \n**Output:** \n- **Course Title**: Linear Algebra \n- **Description**: Study vector spaces, matrices, and linear transformations. \n- **Prerequisites**: None. \n- **Learning Outcomes**: Mastery of matrix operations and ability to solve systems of linear equations. \n- **Weekly Schedule**: \n - Week 1: Introduction to Vector Spaces \n - Week 2: Matrix Operations \n - Week 3: Determinants \n - ... \n- **Assessment Methods**: Two midterms (50%), Homework assignments (30%), Attendance (20%). \n- **Required Materials**: \"Linear Algebra Done Right\" by Sheldon Axler. \n- **Course Policies**: Participation in class discussions is mandatory. \n \n# Notes\n \nEnsure that the syllabus is comprehensive and tailored to the specific course topic. Consider including any unique teaching methods or technologies that will be employed during the course."),
        UserMessage(content = [
            TextContentItem(text = "Explain to me the Fourier equation in simple terms"),
        ]),
    ],
    model = "gpt-4o-mini",
    response_format = "text",
    max_tokens = 4096,
    temperature = 1,
    top_p = 1,
)

print(response.choices[0].message.content)

This Python code is ready to be modified and used in any generative AI application. It can be combined with an orchestration framework like Semantic Kernel to add more features or even build an agentic application.

History Section
We also have "History" and "New Prompt" buttons. History shows all the previous sessions; we can revisit a session and resume working, check the output, or regenerate the code.

In essence, the Prompt Builder tool significantly streamlines the process of crafting effective prompts, saving developers valuable time. Beyond prompt creation, it also facilitates output evaluation and model behavior analysis, and generates quality code to accelerate application development. Stay tuned for upcoming blog posts, where we'll delve into even more advanced techniques for building powerful generative AI applications. You can also join our AI Sparks series to learn more about the capabilities of the AI Toolkit for Visual Studio Code.