developers
110 TopicsModel Mondays S2E10: Automating Document Processing with AI
1. Weekly Highlights We kicked off with the top news and updates in the Azure AI ecosystem: Agent Factory Blog Series: A new 6-part blog series on designing reliable, agentic AI—exploring multi-step, collaborative agents that reflect, plan, and adapt using tool integrations and design patterns. Text PII Preview in Azure AI Language: Now redacts PII (like date of birth, license plates) in major European languages, with better accuracy for UK bank entities. Claude Opus 4.1 in Copilot Pro & Enterprise: Public preview brings smarter summaries, tool assistant thinking, and "Ask Mode" in VS Code.Now leverages stronger computer vision algorithms for table parsing—achieving 94-97% accuracy across Latin, Chinese, Japanese, and Korean—with sub-10ms latency. Mistral Document AI in Azure Foundry: Instantly turn PDFs, contracts, and scanned docs into structured JSON with tables, headings, and LaTeX support. Serverless, multilingual, secure, and perfect for regulated industries. 2. Spotlight On: Document Intelligence with Azure & Mistral This week’s spotlight was a hands-on exploration of document processing, featuring both Microsoft and Mistral AI experts. Why Document Processing? Unstructured data—receipts, forms, handwritten notes—are everywhere. Modern document AI can extract, structure, and even annotate this data, fueling everything from search to RAG pipelines. Azure Document Intelligence: State-of-the-art OCR and table extraction with super-high accuracy and speed. Handles multi-language, complex layouts, and returns structured outputs ready for programmatic use. Mistral Document AI: Transforms PDFs and scanned docs into JSON, retaining complex formatting, tables, images, and even LaTeX. Supports custom schema extraction, image/document annotations, and returns everything in one API call. Integrates seamlessly with Azure AI Foundry and developer workflows. Demo Highlights: Extracting Receipts: OCR accurately pulls out store, date, and transaction details from photos. Handwriting Recognition: Even historical documents (like Thomas Jefferson’s letters) are parsed with surprising accuracy. Tables & Structured Data: Financial statements and reports converted into structured markdown and JSON—ready for downstream apps. Advanced Annotations: Define your own schema (via JSON Schema or Pydantic), extract custom fields, classify images, summarize documents, and even translate summaries—all in a single call. 3. Customer Story: Oracle Health Oracle Health shared how agentic AI and fine-tuned models are revolutionizing clinical workflows: Problem: Clinicians spend hours on documentation, searching records, and manual data entry—reducing time for patient care. Solution: Oracle’s clinical AI agents automate chart reviews, data extraction, and even conversational Q&A—while keeping humans in the loop for safety. Technical Highlights: Multi-agent architecture understands provider specialty and context. Orchestrator model "routes" requests to the right agent or plugin, extracting needed arguments from context. Fine-tuning was key: For low latency, Oracle used lightweight models (like GPT-4 Mini) and fine-tuned on their data—achieving sub-800ms responses, with accuracy matching larger models. Fine-tuning also allowed for nuanced tool selection, argument extraction, and rule-based orchestration—better than prompt engineering alone. Used LoRA for efficient, targeted fine-tuning without erasing base model knowledge. Live Demo: Agent summarizes patient history, retrieves lab results, filters for abnormals, and answers follow-up questions—all conversationally. Fine-tuned orchestrator chooses the right tool and context for each doctor’s workflow. Result: 1-2 hours saved per day, more time for patients, and happier doctors! 4. Key Takeaways Here are the key learnings from this episode: Document AI is Production-Ready: Azure Document Intelligence and Mistral Document AI offer fast, accurate, and customizable document parsing for real enterprise needs. Schema-Driven Extraction & Annotation: Define your own schemas and extract exactly what you want—no more one-size-fits-all. Fine-Tuning Unlocks Performance: For low latency and high accuracy, fine-tuning lightweight models beats prompt engineering in complex, rule-based agent workflows. Agentic Workflows in Action: Multi-agent systems can automate complex tasks, route requests, and keep humans in control, especially in regulated domains like healthcare. Community & Support: Join the Discord and Forum to ask questions, share use cases, and connect with the team. Sharda's Tips: How I Wrote This Blog Writing this recap is all about sharing what I learned and making it practical for the community! I start by organizing the key highlights, then walk through customer stories and demos, using simple language and real-world examples. Copilot helps me structure and clarify my notes, especially when summarizing technical sections. Here’s the prompt I used for Copilot this week: "Generate a technical blog post for Model Mondays S2E10 based on the transcript and episode details. Focus on document processing with Azure AI and Mistral, include customer demos, and highlight practical workflows and fine-tuning. Make it clear and approachable for developers and students." Every episode inspires me to try these tools myself, and I hope this blog makes it easy for you to start, too. If you have questions or want to share your own experience, I’d love to hear from you! Coming Up Next Week Next week: Text & Speech AI Playgrounds! Learn how to build and test language and speech models, with live demos and expert guests. | Register For The Livestream – Aug 25, 2025 | Register For The AMA – Aug 29, 2025 | Ask Questions & View Recaps – Discussion Forum About Model Mondays Model Mondays is a weekly series to build your Azure AI IQ with: 5-Minute Highlights: News & updates on Mondays 15-Minute Spotlight: Deep dives into new features, models, and protocols 30-Minute AMA Fridays: Live Q&A with product teams and experts Get started: Register For Livestreams Watch Past Replays Register For AMA Recap Past AMAs Join The Community Don’t build alone! Join the Azure AI Developer Community for real-time chats, events, support, and more: Join the Discord Explore the Forum About Me I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week’s Model Mondays livestream.154Views0likes0CommentsModel Mondays S2E9: Models for AI Agents
1. Weekly Highlights This episode kicked off with the top news and updates in the Azure AI ecosystem: GPT-5 and GPT-OSS Models Now in Azure AI Foundry: Azure AI Foundry now supports OpenAI’s GPT-5 lineup (including GPT-5, GPT-5 Mini, and GPT-5 Nano) and the new open-weight GPT-OSS models (120B, 20B). These models offer powerful reasoning, real-time agent tasks, and ultra-low latency Q&A, all with massive context windows and flexible deployment via the Model Router. Flux 1 Context Pro & Flux 1.1 Pro from Black Forest Labs: These new vision models enable in-context image generation, editing, and style transfer, now available in the Image Playground in Azure AI Foundry. Browser Automation Tool (Preview): Agents can now perform real web tasks—search, navigation, form filling, and more—via natural language, accessible through API and SDK. GitHub Copilot Agent Mode + Playwright MCP Server: Debug UIs with AI: Copilot’s agent mode now pairs with Playwright MCP Server to analyze, identify, and fix UI bugs automatically. Discord Community: Join the conversation, share your feedback, and connect with the product team and other developers. 2. Spotlight On: Azure AI Agent Service & Agent Catalog This week’s spotlight was on building and orchestrating multi-agent workflows using the Azure AI Agent Service and the new Agent Catalog. What is the Azure AI Agent Service? A managed platform for building, deploying, and scaling agentic AI solutions. It supports modular, multi-agent workflows, secure authentication, and seamless integration with Azure Logic Apps, OpenAPI tools, and more. Agent Catalog: A collection of open-source, ready-to-use agent templates and workflow samples. These include orchestrator agents, connected agents, and specialized agents for tasks like customer support, research, and more. Demo Highlights: Connected Agents: Orchestrate workflows by delegating tasks to specialized sub-agents (e.g., mortgage application, market insights). Multi-Agent Workflows: Design complex, hierarchical agent graphs with triggers, events, and handoffs (e.g., customer support with escalation to human agents). Workflow Designer: Visualize and edit agent flows, transitions, and variables in a modular, no-code interface. Integration with Azure Logic Apps: Trigger workflows from 1400+ external services and apps. 3. Customer Story: Atomic Work Atomic Work showcased how agentic AI can revolutionize enterprise service management, making employees more productive and ops teams more efficient. Problem: Traditional IT service management is slow, manual, and frustrating for both employees and ops teams. Solution: Atomic Work’s “Atom” is a universal, multimodal agent that works across channels (Teams, browser, etc.), answers L1/L2 questions, automates requests, and proactively assists users. Technical Highlights: Multimodal & Cross-Channel: Atom can guide users through web interfaces, answer questions, and automate tasks without switching tools. Data Ingestion & Context: Regularly ingests up-to-date documentation and context, ensuring accurate, current answers. Security & Integration: Built on Azure for enterprise-grade security and seamless integration with existing systems. Demo: Resetting passwords, troubleshooting VPN, requesting GitHub repo access—all handled by Atom, with proactive suggestions and context-aware actions. Atom can even walk users through complex UI tasks (like generating GitHub tokens) by “seeing” the user’s screen and providing step-by-step guidance. 4. Key Takeaways Here are the key learnings from this episode: Agentic AI is Production-Ready: Azure AI Agent Service and the Agent Catalog make it easy to build, deploy, and scale multi-agent workflows for real-world business needs. Modular, No-Code Workflow Design: The workflow designer lets you visually create and edit agent graphs, triggers, and handoffs—no code required. Open-Source & Extensible: The Agent Catalog provides open-source templates and welcomes community contributions. Real-World Impact: Solutions like Atomic Work show how agentic AI can transform IT, HR, and customer support, making organizations more efficient and employees more empowered. Community & Support: Join the Discord and Forum to connect, ask questions, and share your own agentic AI projects. Sharda's Tips: How I Wrote This Blog Writing this blog is like sharing my own learning journey with friends. I start by thinking about why the topic matters and how it can help someone new to Azure or agentic AI. I use simple language, real examples from the episode, and organize my thoughts with GitHub Copilot to make sure I cover all the important points. Here’s the prompt I gave Copilot to help me draft this blog: Generate a technical blog post for Model Mondays S2E9 based on the transcript and episode details. Focus on Azure AI Agent Service, Agent Catalog, and real-world demos. Explain the concepts for students, add a section on practical applications, and share tips for writing technical blogs. Make it clear, engaging, and useful for developers and students. After watching the video, I felt inspired to try out these tools myself. The way the speakers explained and demonstrated everything made me believe that anyone can get started, no matter their background. My goal with this blog is to help you feel the same way—curious, confident, and ready to explore what AI and Azure can do for you. If you have questions or want to share your own experience, I’d love to hear from you. Coming Up Next Week Next week: Document Processing with AI! Join us as we explore how to automate document workflows using Azure AI Foundry, with live demos and expert guests. 1️⃣ | Register For The Livestream – Aug 18, 2025 2️⃣ | Register For The AMA – Aug 22, 2025 3️⃣ | Ask Questions & View Recaps – Discussion Forum About Model Mondays Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ with three elements: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday livestream Want to get started? Register For Livestreams – every Monday at 1:30pm ET Watch Past Replays to revisit other spotlight topics Register For AMA – to join the next AMA on the schedule Recap Past AMAs – check the AMA schedule for episode specific links Join The Community Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together! Join the Discord – for real-time chats, events & learning Explore the Forum – for AMA recaps, Q&A, and Discussion! About Me I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I summarize my takeaways from each week's Model Mondays livestream.163Views0likes0CommentsTaming Mutable State: Applying Functional Programming in an Object-Oriented Language
🔥 .NET July at Microsoft Hero is on fire! 🚀 The last two sessions have blown us away with incredible speakers and fresh content, but the party isn’t even close to over. July is bursting with .NET energy, and next up, Rodney will join us to take us down a path less traveled with a topic that promises to shake up the way you think about C#. 🧠✨ What’s coming up? Imagine blending the strengths of object-oriented C# with some of the most intriguing secrets from the world of functional programming. This session teases the mysterious forces behind writing more resilient, maintainable apps, without giving it all away. Expect big “aha!” moments and insights you won’t see coming. 🕵️♂️💡 Curious? You should be! Make sure you’re registered, mark your calendar, and get ready to join us live for another game-changing session. Let’s unlock new perspectives together, the Microsoft Learn way! 🌟🤝 📅 July 19, 2025 06:00 PM CEST 🔗 https://streamyard.com/watch/CDGBWtmDTtjQ?wt.mc_id=MVP_350258110Views3likes0CommentsS2:E4 Understanding AI Developer Experiences with Leo Yao
This week in Model Mondays, we put the spotlight on the AI Toolkit for Visual Studio Code - and explore the tools and workflows that make building generative AI apps and agents easier for developers. Read on for my recap. This post was generated with AI help and human revision & review. To learn more about our motivation and workflows, please refer to this document in our website. About Model Mondays Model Mondays is a weekly series designed to help you grow your Azure AI Foundry Model IQ step by step. Each week includes: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from the Monday livestream If you're looking to grow your skills with the latest in AI model development, this series is a great place to begin. Useful links: Register for upcoming livestreams Watch past episodes Join the AMA on AI Developer Experiences Visit the Model Mondays forum Spotlight On: AI Developer Experiences 1. What is this topic and why is it important? AI Developer Experiences focus on making the process of building, testing, and deploying AI models as efficient as possible. With the right tools—such as the AI Toolkit and Azure AI Foundry extensions for Visual Studio Code—developers can eliminate unnecessary friction and focus on innovation. This is essential for accelerating the real-world impact of generative AI. 2. What is one key takeaway from the episode? The integration of Azure AI Foundry with Visual Studio Code allows developers to manage models, run experiments, and deploy applications directly from their preferred development environment. This unified workflow enhances productivity and simplifies the AI development lifecycle. 3. How can I get started? Here are a few resources to explore: Install the AI Toolkit for VS Code Explore Azure AI Foundry Documentation Join the Microsoft Tech Community to follow and contribute to discussions 4. What’s New in Azure AI Foundry? Azure AI Foundry continues to evolve to meet developer needs with more power, flexibility, and productivity. Here are some of the latest updates highlighted in this week’s episode: AI Toolkit for Visual Studio Code Now with deeper integration, allowing developers to manage models, run experiments, and deploy applications directly within their editor—streamlining the entire workflow. Prompt Shields Enhanced security capabilities designed to protect generative AI applications from prompt injection and unsafe content, improving reliability in production environments. Model Router A new intelligent routing system that dynamically directs model requests to the most suitable model available—enhancing performance and efficiency at scale. Expanded Model Catalog The catalog now includes more open-source and proprietary models, featuring the latest from Hugging Face, OpenAI, and other leading providers. Improved Documentation and Sample Projects Newly added guides and ready-to-use examples to help developers get started faster, understand workflows, and build confidently. My A-Ha Moment Before watching this episode, setting up an AI development environment always felt like a challenge. There were so many moving parts—configurations, integrations, and dependencies—that it was hard to know where to begin. Seeing the AI Toolkit in action inside Visual Studio Code changed everything for me. It was a realization moment: “That’s it? I can explore models, test prompts, and deploy apps—without ever leaving my editor?” This episode made it clear that building with AI doesn’t have to be complex or intimidating. With the right tools, experimentation becomes faster and far more enjoyable. Now, I’m genuinely excited to build, test, and explore new generative AI solutions because the process finally feels accessible. Coming Up Next Week In the next episode, we’ll be exploring Fine-Tuning and Distillation with Dave Voutila. This session will focus on how to adapt Azure OpenAI models to your unique use cases and apply best practices for efficient knowledge transfer. Register here to reserve your spot and be part of the conversation. Join the Community Building in AI is better when we do it together. That’s why the Azure AI Developer Community exists—to support your journey and provide resources every step of the way. Join the Discord for real-time discussions, events, and peer learning Explore the Forum to catch up on AMAs, ask questions, and connect with other developers About Me I'm Sharda, a Gold Microsoft Learn Student Ambassador passionate about cloud technologies and artificial intelligence. I enjoy learning, building, and helping others grow in tech. Connect with me: LinkedIn GitHub Dev.to Microsoft Tech Community197Views0likes0CommentsStefan Pölz - Null & Void, everything about nothing in .NET
After an electrifying kickoff to .NET July, it’s time to keep the momentum rolling! 🔥 🎇 .NET July isn’t just a month for developers, it’s a celebration for everyone passionate about tech, the cloud, and leveling up their skills. Whether you’re aiming to supercharge your knowledge or make a bold move in your career, this is the community to join. 🫶 Our next session features the incredible https://www.linkedin.com/in/ACoAAC9Q2ZAB2u-_JbumHA-DJvD2qxaBcTfzuTo, ready to share his hard-earned wisdom and hands-on experience on one of the hottest topics in .NET today. This is your chance to gain insights that could change the way you build and think about software. Want to understand the "billion-dollar mistake" and why it's also a powerful tool? Curious how modern .NET helps you avoid runtime nightmares, before they even start? Register now, save your VIP spot, and become part of another unforgettable session with the https://www.linkedin.com/company/microsofthero/! Let’s grow and learn together with https://www.linkedin.com/company/microsoftlearn/. 🚀 📺 Subscribe us on YouTube and watch live --> https://lnkd.in/dQSgYXgi 📑 Register for the session: https://lnkd.in/dywm3CCd https://www.linkedin.com/in/ACoAAC9Q2ZAB2u-_JbumHA-DJvD2qxaBcTfzuTo Null & Void - Everything about Nothing in .NET July 12, 2025 06:00 PM CET #MVPBUZZ #MicrosoftHero #MicrosoftZeroToHero #DOTNET #MicrosoftLearn #MicrosoftDeveloper #Developer #Microsoft146Views0likes0CommentsModel Mondays S2:E2 - Understanding Model Context Protocol (MCP)
This week in Model Mondays, we focus on the Model Context Protocol (MCP) — and learn how to securely connect AI models to real-world tools and services using MCP, Azure AI Foundry, and industry-standard authorization. Read on for my recap About Model Mondays Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here’s how it works: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday livestream If you want to grow your skills with the latest in AI model development, Model Mondays is the place to start. Want to follow along? Register Here - to watch upcoming Mondel Monday livestreams Watch Playlists to replay past Model Monday episodes Register Here - to join the AMA on MCP on Friday Jun 27 Visit The Forum- to view Foundry Friday AMAs and recaps Spotlight On: Model Context Protocol (MCP) This week, the Model Monday’s spotlight was on the Model Context Protocol (MCP) with subject matter expert Den Delimarsky. Don't forget to check out the slides from the presentation, for resource links! In this blog post, I’ll talk about my five key takeaways from this episode: What Is MCP and Why Does It Matter? What Is MCP Authorization and Why Is It Important? How Can I Get Started with MCP? Spotlight: My Aha Moment Highlights: What’s New in Azure AI 1 . What Is MCP and Why is it Important? MCP is a protocol that standardizes how AI applications connect the underlying AI models to required knowledge sources (data) and interaction APIs (functions) for more effective task execution. Because these models are pre-trained, they lack access to real-time or proprietary data sources (for knowledge) and real-world environments (for interaction). MCP allows them to "discover and use" relevant knowledge and action tools to add relevant context to the model for task execution. Explore: The MCP Specification Learn: MCP For Beginners Want to learn more about MCP - check out the AI Engineer World Fair 2025 "MCP and Keynotes" track. It kicks off with a keynote from Asha Sharma that gives you a broader vision for Azure AI Foundry. Then look for the talk from Harald Kirschner on MCP and VS Code. 2. What Is MCP Authorization and Why Does It Matter? MCP (Model Context Protocol) authorization is a system that helps developers manage who can access their apps, especially when they are hosted in the cloud. The goal is to simplify the process of securing these apps by using common tools like OAuth and identity providers (such as Google or GitHub), so developers don't have to be security experts. Key Takeaways: The new MCP proposal uses familiar identity providers to simplify the authorization process. It allows developers to secure their apps without requiring deep knowledge of security. The update ensures better security controls and prepares the system for future authentication methods. Related Reading: Aaron Parecki, Let's Fix OAuth in MCP Den Delimarsky, Improving The MCP Authorization Spec - One RFC At A Time MCP Specification, Authorization protocol draft On Monday, Den joined us live to talk about the work he did for the authorization protocol. Watch the session now to get a sense for what the MCP Authorization protocol does, how it works, and why it matters. Have questions? Submit them to the forum or Join the Foundry Friday AMA on Jun 27 at 1:30pm ET. 3. How Can I Get Started? If you want to start working with MCP, here’s how to do it easily: Learn the Fundamentals: Explore MCP For Beginners Use an MCP Server: Explore VSCode Agent Mode support . Use MCP with AI Agents: Explore the Azure MCP Server 4. What’s New in Azure AI Foundry? Managed Compute for Cohere Models: Faster, secure AI deployments with low latency. Prompt Shields: New Azure security system to protect against prompt injection and unsafe content. OpenAI o3 Pro Model: A fast, low-cost model similar to GPT-4 Turbo. Codex Mini Model: A smaller, quicker model perfect for developer command-line tasks. MCP Security Upgrades: Now easier to secure AI apps using familiar OAuth identity providers. 5. My Aha Moment Before this session, I used to think that connecting apps to AI was complicated and risky. I believed developers had to build their own security systems from scratch, which sounded tough. But this week, I learned that MCP makes it simple. We can now use trusted logins like Google or GitHub and securely connect AI models to real-world apps without extra hassle. How I Learned This ? To be honest, I also used Copilot to help me understand and summarize this topic in simple words. I wanted to make sure I really understood it well enough to explain it to my friends and peers. I believe in learning with the tools we have, and AI is one of them. By using Copilot and combining it with what I learned from the Model Monday’s session, I was able to write this blog in a way that is easy to understand Takeaway for Beginners: It’s okay to use AI to learn what matters is that you grow, verify, and share the knowledge in your own way. Coming Up Next Week: Next week, we dive into SLMs & Reasoning (Phi-4) with Mojan Javaheripi, PhD, Senior Researcher at Microsoft Research. This session will explore how Small Language Models (SLMs) can perform advanced reasoning tasks, and what makes models like Phi-4 reasoning efficient, scalable, and useful in practical AI applications. Register Here! Join The Community Great devs don't build alone! In a fast-pased developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together! Join the Discord - for real-time chats, events & learning Explore the Forum - for AMA recaps, Q&A, and help! About Me: I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on Github, Dev.to, Tech Community and Linkedin. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.650Views1like2CommentsParticipe da 2ª edição do GitHub Copilot Global Bootcamp
O GitHub Copilot Global Bootcamp começou em fevereiro como uma jornada totalmente virtual de aprendizado — e foi um sucesso. Mais de 60 mil desenvolvedores participaram da primeira edição, em vários idiomas e regiões. Agora, estamos empolgados em lançar a segunda edição — maior e melhor — com workshops virtuais e presenciais, organizados por comunidades de tecnologia ao redor do mundo. Essa nova edição chega logo após os anúncios do Microsoft Build 2025, onde as equipes do GitHub e do Visual Studio Code revelaram novidades empolgantes: A extensão GitHub Copilot Chat será open source, reforçando a transparência e a colaboração. A IA está sendo profundamente integrada ao Visual Studio Code, que agora está evoluindo para um editor de IA de código aberto. Novas APIs e ferramentas estão tornando mais fácil do que nunca construir com IA e LLMs. Este bootcamp é a sua oportunidade de explorar essas novas ferramentas, entender como usar o GitHub Copilot de forma eficaz e fazer parte da crescente conversa global sobre IA no desenvolvimento de software. 👩💻 Quem pode participar? Seja você um(a) desenvolvedor(a) iniciante, estudante ou profissional experiente em tecnologia, este bootcamp foi feito para você. Você aprenderá casos de uso práticos do GitHub Copilot e como aumentar sua produtividade usando IA — em um formato acessível e prático. Participe da edição virtual Não importa onde você esteja, é possível participar online e aprender com a gente: Português (Brasil – UTC -3) 24 de junho, 19h: Boas práticas para dominar o GitHub Copilot Chat 25 de junho, 19h: Integrações práticas com MCP Servers no VS Code e GitHub Copilot Aprenda na sua cidade! Estamos em parceria com comunidades locais de desenvolvedores para levar workshops presenciais a diversas cidades ao redor do mundo. Sessões presenciais confirmadas no Brasil: Data Cidade Inscrições 17 de Junho Brasília, Brasil Registre-se agora! 17 de Junho Pato de Minas, Brasil Registre-se agora! 21 de Junho Mogi das Cruzes, Brasil Registre-se agora! 26 de Junho Recife, Brasil Registre-se agora! 27 de Junho Rio de Janeiro, Brasil Registre-se agora! O Microsoft Applied Skills é um programa de credenciamento criado para validar sua capacidade de realizar tarefas técnicas específicas do mundo real. Diferente das certificações tradicionais que costumam abranger cargos amplos, o Applied Skills foca em habilidades práticas e cenários reais, diretamente aplicáveis a desafios de negócios. E a melhor parte? É totalmente gratuito! Você demonstra suas habilidades por meio de avaliações interativas, baseadas em tarefas, em um ambiente simulado — sem perguntas de múltipla escolha, apenas trabalho real. Uma das adições mais recentes é o Applied Skill do GitHub Copilot, que comprova sua habilidade de aproveitar a IA para aumentar a produtividade no desenvolvimento de software e melhorar a qualidade do código: Acelere o desenvolvimento de aplicativos usando o GitHub Copilot.4.3KViews2likes8CommentsGitHub Copilot Bootcamp Resources
Passos para resgatar o desconto para sua certificação GitHub (Português Brasileiro) Pasos para canjear el descuento para tu certificación de GitHub (Español) Steps to redeem the discount for your GitHub certification (English) 兑换 GitHub 认证折扣的步骤 (Chinese) Passos para resgatar o desconto para sua certificação GitHub Se você deseja consultar o código de desconto compartilhado durante a sessão, por favor, visite a página de inscrição para acessar a gravação: https://aka.ms/GitHubCopilotBootcampBrasil O código do voucher deverá ser inserido manualmente durante o processo de checkout. Abaixo estão os passos para registro e agendamento: Faça login no site de registro do exame e escolha a certificação desejada. Isso o redirecionará para a página de registro. Clique em “Agendar/fazer exame” para prosseguir. Complete o formulário de registro e selecione “Agendar exame” na parte inferior. Esta ação transmitirá seus detalhes de elegibilidade para nosso fornecedor de testes, PSI. Ao enviar o formulário de registro, você será direcionado ao site de testes da PSI para finalizar o agendamento do seu exame. Durante o processo de checkout no site de testes da PSI, você encontrará um campo designado onde poderá inserir o código do voucher para zerar o saldo. Pasos para canjear el descuento para tu certificación de GitHub Si deseas consultar el código de descuento compartido durante la sesión, por favor, visita la página de inscripción para acceder a la grabación: https://aka.ms/GitHubCopilotBootcampLATAM El código del voucher (cupón) se ingresará manualmente durante el proceso de pago. A continuación, se detallan los pasos de registro y para agendar tu examen: Inicia sesión en el sitio de registro del examen y elige la certificación deseada. Esto te redireccionará a la página de registro. Haz clic en "Programar/realizar examen" para continuar. Completa el formulario de registro y selecciona "Programar examen" en la parte inferior. Esta acción transmitirá tus detalles de elegibilidad a nuestro proveedor de pruebas, PSI. Al enviar el formulario de registro, serás dirigido al sitio de pruebas de PSI para finalizar la programación de su examen. Durante el proceso de pago en el sitio de pruebas de PSI, encontrarás un campo designado donde puedes ingresar el código del voucher (cupón) para poner a cero el saldo. Steps to redeem the discount for your GitHub certification If you wish to check the discount code shared during the session, please visit the registration page to access the recording: https://aka.ms/GHCopilot-Bootcamp The voucher code will be entered manually during the checkout process. Below are the registration and scheduling steps: Log into the exam registration site and choose the desired certification. This will redirect you to the registration page. Click on "Schedule/take exam" to proceed. Complete the registration form and select "Schedule exam" at the bottom. This action will transmit your eligibility details to our testing vendor, PSI. Upon submitting the registration form, you'll be directed to the PSI testing site to finalize the scheduling of your exam. During the checkout process on the PSI testing site, you'll encounter a designated field where you can enter the voucher code to zero the balance.7.3KViews1like9CommentsModo Agente disponível para todos os usuários do VS Code e com suporte a MCP
O Modo Agente está sendo lançado para todos os usuários do VS Code! O agente atua como um programador autônomo que realiza tarefas de codificação em várias etapas sob seu comando, como analisar sua base de código, propor edições de arquivos e executar comandos no terminal. Ele responde a erros de compilação e lint, monitora a saída do terminal e corrige automaticamente em um loop até que a tarefa seja concluída. O agente também pode usar ferramentas contribuídas, permitindo que ele interaja com servidores MCP externos ou extensões do VS Code para realizar uma ampla variedade de tarefas. Disponível para todos os usuários Abra a visualização de Chat, faça login no GitHub, configure chat.agent.enabled nas suas configurações e selecione Agente no menu suspenso do modo de Chat. Se você não vir a configuração, certifique-se de recarregar o VS Code após atualizar para a versão mais recente. Nas próximas semanas, estamos lançando isso por padrão para todos - nenhuma configuração será necessária. O modo Agente é ótimo para cenários onde: Sua tarefa envolve várias etapas. O agente edita o código, executa comandos no terminal, monitora erros e itera para resolver quaisquer problemas que surgirem. Você não tem certeza sobre o escopo das mudanças. O agente determina automaticamente os arquivos e o contexto relevantes. Sua tarefa requer interação com aplicativos ou dados externos. O agente se integra com servidores MCP e extensões do VS Code. Por outro lado, use o modo de edição quando a tarefa tiver um escopo bem definido, você quiser uma resposta rápida ou quiser um controle mais preciso sobre o número de solicitações ao LLM. Criamos uma experiência de chat unificada, combinando as visualizações de Chat e Edições, que traz benefícios como histórico de sessões, mover o chat para uma janela separada e simplificação da visualização do Conjunto de Trabalho. Tudo isso agora também está disponível no modo Agente. Continuamos a receber um feedback fantástico dos usuários (por favor, continuem enviando!), o que inspirou muitas das melhorias que fizemos. Mais notavelmente: A ação de desfazer agora reverte as mudanças até a última chamada da ferramenta de edição de arquivos. Suporte para múltiplas sessões de agente no mesmo workspace (melhor quando as sessões de edição não modificam os mesmos arquivos). O agente agora pode criar e editar notebooks. A capacidade de aprovar automaticamente chamadas de ferramentas (aprovação automática de terminal chegando em abril). Uma série de melhorias na qualidade de vida e correções de bugs. Tanto as experiências de perguntar quanto de editar estão evoluindo para uma arquitetura que, como o agente, utiliza ferramentas. Estamos fazendo essa mudança para unificar os modos de perguntar/editar/agente para serem todos agenticos, com o objetivo de suavizar a experiência geral do usuário. Isso permite que o modo de edição use a ferramenta de edição de arquivos para melhorar a velocidade, e os modos de edição e pergunta usem #codebase, uma busca agentica na base de código. Consequentemente, modelos de linguagem sem suporte para chamadas de ferramentas não estarão mais disponíveis no modo de edição. Extensível: Servidores MCP e Extensões do VS Code Assim como as extensões do VS Code permitem que você personalize seus fluxos de trabalho específicos, a extensibilidade do agente permite que você adapte o agente às suas necessidades. Com a extensibilidade, o agente pode realizar ações no navegador (depuração web com IA), conectar-se aos seus aplicativos de chat e de anotações, interagir com seus bancos de dados, obter contexto do seu sistema de design, obter problemas e contexto de repositório do GitHub e integrar-se com suas plataformas de nuvem. O poder do modo agente está na diversidade de ferramentas disponíveis e na flexibilidade de adicionar e remover ferramentas conforme necessário. Estamos lançando a extensibilidade em preview e disponível para todos os usuários. O modo agente pode usar as seguintes ferramentas: Ferramentas integradas contribuídas pelo VS Code (azul no diagrama), que permitem ao agente pesquisar no workspace, aplicar mudanças de código, executar comandos no terminal, capturar erros de compilação ou linting do editor, buscar conteúdo de sites (#fetch para acionar manualmente), e mais. Ferramentas contribuídas por servidores MCP (verde no diagrama). Ferramentas contribuídas por extensões do VS Code (verde no diagrama). Quando a equipe do VS Code inventou o Protocolo de Servidor de Linguagem (LSP) em 2016, nosso objetivo era padronizar como os servidores de linguagem se comunicam com as ferramentas de desenvolvimento. Estamos orgulhosos de que o LSP se tornou um padrão amplamente adotado e cumpriu nossa visão. Recentemente, as ideias por trás do LSP inspiraram um novo protocolo: o Protocolo de Contexto de Modelo (MCP), que padroniza como as aplicações fornecem contexto para LLMs. Com o modo agente no VS Code usando ferramentas contribuídas por servidores MCP, agora completamos o ciclo de volta ao VS Code. É sobre controle do desenvolvedor Nem toda tarefa precisa de todas as ferramentas que você pode ter adicionado ao modo agente, e como em qualquer fluxo de trabalho de IA, ser específico leva a melhores resultados. Recomendamos usar a interface de ferramentas para gerenciar e habilitar as ferramentas necessárias para cada cenário ou referenciar explicitamente as ferramentas em seu prompt digitando #. Para lhe dar total controle, cada invocação de ferramenta é exibida de forma transparente na interface e requer sua aprovação (exceto para ferramentas integradas de leitura). Você pode permitir uma ferramenta específica para a sessão atual, workspace ou todas as invocações futuras. Se você quiser minimizar interrupções permitindo sempre que o agente use todas as ferramentas, enquanto ainda mantém a segurança, considere usar a extensão Dev Containers. Isso isola todas as mudanças feitas pelo agente dentro do ambiente do contêiner até certo ponto (por exemplo, o agente ainda pode enviar mudanças para o remoto se você permitir). Comece agora Para personalizar o agente para seus fluxos de trabalho, selecione o ícone de "Ferramentas" na entrada do chat e siga o fluxo Adicionar Mais Ferramentas.... Alternativamente, leia nossa documentação do servidor MCP, que explica o formato de configuração, como adicionar um servidor MCP ou como importar servidores MCP de um aplicativo cliente MCP existente, como o Claude Desktop. O VS Code suporta entrada/saída padrão local (stdio) e eventos enviados pelo servidor (sse) para transporte de servidor MCP. O repositório oficial de servidores do MCP é um ótimo ponto de partida para servidores oficiais e contribuídos pela comunidade que mostram a versatilidade do MCP. Para instalar extensões que contribuem com ferramentas, abra a visualização de Extensões e pesquise usando a tag @tag:language-model-tools. Como desenvolvedor, você pode estender o agente criando um servidor MCP, ou se você é um autor de extensão, pode contribuir com ferramentas para sua extensão do VS Code. Consulte estes documentos para orientações e melhores práticas sobre como escrever ferramentas. O que vem a seguir O modo Agente está melhorando a cada dia, e para estar entre os primeiros a se beneficiar, considere instalar o VS Code Insiders. Usar o VS Code Insiders e fornecer feedback em nosso repositório é a melhor maneira de nos ajudar a melhorar o produto. Em seguida, planejamos trabalhar em: Suporte para modos personalizados com conjuntos de ferramentas e instruções personalizadas Uma experiência de aplicação de código mais rápida Expandir o suporte MCP de ferramentas para prompts, recursos e as atualizações mais recentes das especificações Transmissão de edições limitada a blocos de código alterados para melhorar a velocidade Pontos de verificação para voltar facilmente a uma etapa específica na sua sessão de modo agente Melhorias gerais de desempenho e qualidade de serviço Certifique-se de estar na versão mais recente do VS Code Stable, configure chat.agent.enabled nas suas configurações e selecione Agente no menu suspenso do modo. Experimente hoje e nos diga o que você acha! Você pode encontrar a documentação aqui.275Views0likes0Comments