Level up your Python Gen AI Skills from our free nine-part YouTube series!
Want to learn how to use generative AI models in your Python applications? We're putting on a series of nine live streams, in both English and Spanish, all about generative AI. We'll cover large language models, embedding models, and vision models, introduce techniques like RAG, function calling, and structured outputs, and show you how to build agents and MCP servers. Plus, we'll talk about AI safety and evaluations, to make sure all your models and applications are producing safe outputs. Register for the entire series. In addition to the live streams, you can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat. You can also scroll down to learn about each live stream and register for individual sessions. See you in the streams!

Large Language Models
7 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll also show how to build a full-stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.

Vector embeddings
8 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode text or an image as an array of floating-point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code.
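To make that idea concrete, here is a minimal, dependency-free Python sketch of the cosine-similarity math that powers embedding search. The 4-dimensional vectors are made-up stand-ins for illustration; a real model like text-embedding-3-small returns 1536-dimensional vectors.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (invented values, not real model output).
dog = [0.9, 0.1, 0.0, 0.2]
puppy = [0.8, 0.2, 0.1, 0.3]
car = [0.0, 0.9, 0.8, 0.1]

# Semantically similar texts should score higher than unrelated ones.
print(cosine_similarity(dog, puppy) > cosine_similarity(dog, car))  # True
```

Cosine similarity is only one of the distance metrics you can use over embedding vectors; dot product and Euclidean distance are common alternatives.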
We'll compare distance metrics, use quantization to reduce vector size, and try out multimodal embedding models.

Retrieval Augmented Generation
9 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In our third Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation (RAG). RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources, like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

Vision models
14 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
Our fourth stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and GPT-4o-mini. You can use those models for image captioning, data extraction, question answering, classification, and more! We'll use Python to send images to vision models, build a basic chat-on-images app, and build a multimodal search engine.

Structured outputs
15 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In our fifth stream of the Python + AI series, we'll discover how to get LLMs to output structured responses that adhere to a schema. In Python, all we need to do is define a @dataclass or a Pydantic BaseModel, and we get validated output that meets our needs. We'll focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.
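As a taste of the structured-outputs approach, here is a minimal sketch of the validation half: defining a Pydantic BaseModel and validating a model response against it. The schema and the JSON string below are invented for illustration; in the real flow, the model is constrained so that its output parses into this class.

```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):  # hypothetical schema, for illustration
    name: str
    date: str
    participants: list[str]

# Stand-in for an LLM response; with structured outputs enabled, the
# model is constrained to emit JSON matching the schema above.
raw = '{"name": "Python + AI stream", "date": "2025-10-15", "participants": ["Alice", "Bob"]}'

event = CalendarEvent.model_validate_json(raw)  # raises if the JSON doesn't match
print(event.participants)  # ['Alice', 'Bob']
```

The payoff is that downstream code works with typed attributes like `event.participants` instead of picking through raw JSON.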
Quality and safety
16 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
Now that we're more than halfway through our Python + AI series, we're covering a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.

Tool calling
21 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In this session, we'll cover tool calling (also known as function calling), the technique that lets an LLM request calls into your own Python functions, and a building block for the agents and MCP servers covered later in the series.

AI agents
22 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
For the penultimate session of our Python + AI series, we're building AI agents! We'll use many of the most popular Python AI agent frameworks: LangGraph, Semantic Kernel, AutoGen, Pydantic AI, and more.
Our agents will start simple and then ramp up in complexity, demonstrating different architectures like hand-offs, round-robin, supervisor, graphs, and ReAct.

Model Context Protocol
23 October, 2025 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In the final session of our Python + AI series, we're diving into the hottest technology of 2025: MCP, the Model Context Protocol. This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We'll show how to use the official Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we'll build our own MCP client to consume the server. Finally, we'll discover how easy it is to point popular AI agent frameworks like LangGraph, Pydantic AI, and Semantic Kernel at MCP servers. With great power comes great responsibility, so we'll briefly discuss the many security risks that come with MCP, both as a user and as a developer.
Foundry Fridays: Your Front-Row Seat to Azure AI Innovation

Are you ready to go beyond the blog posts and docs and get your questions answered directly by the minds behind Azure AI? Then mark your calendars for Foundry Fridays, a weekly Ask Me Anything (AMA) series hosted on the Azure AI Foundry Discord. Every Friday at 1:30 PM ET, the Azure AI team opens the floor to developers, researchers, and enthusiasts for a 30-minute live AMA with the experts building the future of AI at Microsoft. Whether you're curious about model fine-tuning, local inference, agentic workflows, or the latest in open-source tooling, Foundry Fridays is where the real-time insights happen.

Why Join Foundry Fridays?
Direct Access to Experts: Ask your questions live to Principal PMs, researchers, and engineers from the Azure AI Foundry team.
Fresh Topics Weekly: Each session spotlights a new theme, from model routing and MCP registries to SAMBA architectures and AI agent security.
Community-Driven: These aren't lectures; they're conversations. Bring your curiosity, share your feedback, and help shape the future of Azure AI.
No Slides, Just Substance: It's raw, real, and refreshingly unscripted. You'll hear what's working, what's coming, and what's still being figured out.

Episodes are hosted by community leaders like Nitya Narasimhan and Lee Stott, who guide the conversation and ensure your questions get the spotlight they deserve. You can watch all of the Model Mondays series on demand at https://aka.ms/model-mondays, and get ready for Season 3 of Model Mondays, every Monday at 1:30 PM ET.

Why It Matters
Foundry Fridays isn't just another event; it's a community catalyst. Join our community to hear from experts and share your experiences of using Azure AI tools and services.

How to Join
Join the Discord: aka.ms/model-mondays/discord
Find the AMA: Head to the #community-calls and #model-mondays channels under Events, or check the pinned events.
Ask Anything: Come with questions, ideas, or just listen in.
No registration required. Want a sneak peek at what's coming? Check the Foundry Fridays schedule or follow the Azure AI Foundry Blog for recaps and resources.

Final Thoughts
Whether you're building with Azure AI, exploring open-source models, or just curious about what's next, Foundry Fridays is your chance to connect, learn, and grow with the community. So grab your headphones, fire up Discord, and let's build the future of AI together.

Fridays | 1:30 PM ET | Azure AI Foundry Discord | Join Now

Video Super Resolution Missing Options in Settings
Enhance videos in Microsoft Edge (flag: #edge-video-super-resolution) is missing critical options in its settings, making it impossible to function properly. In Version 136.0.3240.29 (Official build) beta (64-bit), as well as the previous beta build, the option to choose from three methods of video enhancement is missing, as shown in the screenshot. Here is the setting from the latest beta build: This is how it is supposed to look: Thus, the beta build defaults to Microsoft Super Resolution, and I can't find a way to use a graphics driver enhancement like RTX Super Resolution, which is critical to my experience. Also, HDR is no longer supported when video enhancement is enabled, which I believe is a limitation of Microsoft Super Resolution.

VS Code Live: Extending Agent Mode
VS Code Live is a monthly livestream showcasing the latest updates in Visual Studio Code, with hands-on demos from the VS Code team and key partners. On June 12 (8 AM PST), we'll dive into the VS Code 1.101 release, with a special focus on Extending Agent Mode. In this session, we'll explore how to unlock its full potential using MCP servers and custom extensions. You'll hear from experts at GitHub, Figma, and Netlify, and see a live demo of the new PostgreSQL extension. Join us live to learn from the people shaping the future of VS Code and see what's coming next.

How to use Comments as Prompts in GitHub Copilot for Visual Studio
GitHub Copilot is a coding assistant powered by Artificial Intelligence (AI), which can run in various environments and help you be more efficient in your daily coding tasks. In this new short video, Bruno shows you how to use inline comments to generate code with GitHub Copilot.

Model Mondays Season 2: Learn to Choose & Use the Right AI Models with Azure AI
Skill Up on the Latest AI Models & Tools with Model Mondays, Season 2

The world of AI is evolving at lightning speed. With over 11,000 models now available in the Azure AI Foundry catalog, including frontier models from top providers and thousands of open-source variants, developers face a new challenge: how do you choose the right model for your task? That's where Model Mondays comes in.

What Is Model Mondays?
Model Mondays is a weekly livestream and AMA series hosted on https://developer.microsoft.com/en-us/reactor/ and the Azure AI Foundry Discord. It's designed to help developers like you build your Model IQ one spotlight at a time. Each 30-minute episode includes:
5-min Highlights: Catch up on the latest model-related news.
15-min Spotlight: Deep dive into a specific model, model family, or tool.
Live Q&A: Ask questions during the stream or join the Friday AMA on Discord.
Whether you're just starting out or already building AI-powered apps, this series will help you stay current and confident in your model choices.

Season 2 Starts June 16 - Register Now!
We're kicking off Season 2 with three powerful episodes:
EP1: Advanced Reasoning Models - https://developer.microsoft.com/en-us/reactor/events/25905/
EP2: Model Context Protocol (MCP) - https://developer.microsoft.com/en-us/reactor/events/25906/
EP3: SLMs and Reasoning (Phi-4 Ecosystem) - https://developer.microsoft.com/en-us/reactor/events/25907/

Why Should You Join?
Stay Ahead: Learn about the latest models, tools, and trends in AI.
Get Hands-On: Explore real-world use cases and demos.
Build Smarter: Discover how to evaluate, fine-tune, and deploy models effectively.
Connect: Join the community on Discord and get your questions answered.

Quick Links
Series page: https://aka.ms/model-mondays
Playlist: https://aka.ms/model-mondays/playlist
Discord: https://aka.ms/model-mondays/discord

Bonus: Learn from Microsoft Build 2025
If you missed Microsoft Build, now's the time to catch up.
Azure AI Foundry is expanding fast, with new tools like Model Router, AI Evaluations SDK, and Foundry Portal making it easier than ever to build, test, and deploy AI apps. Check out http://aka.ms/learnatbuild for the top 10 things you need to know.

Ready to Build?
Whether you're exploring edge models, open-source AI, or fine-tuning GPTs, Model Mondays will help you level up your skills and build confidently on Azure. Let's build our model IQ together. See you on June 16!

Learn Generative AI with JavaScript: Free and Interactive Course!
Master Generative AI using JavaScript in a free, interactive course by Microsoft. Explore lessons with historical characters, hands-on projects, and real-world AI tools. Start learning today with zero setup via GitHub Codespaces.
Build, Innovate, and #Hacktogether!

2025 is the year of AI agents! But what exactly is an agent? And how can you build one? Whether you're an experienced developer or just getting started, this FREE three-week virtual hackathon is your chance to dive into AI agent development. Learn from more than 20 expert-led sessions, streamed live on YouTube, covering top frameworks such as Semantic Kernel, Autogen, the new Azure AI Agents SDK, and the Microsoft 365 Agents SDK. Get hands-on, explore your creativity, and build powerful AI agents! Then submit your project and compete for amazing prizes!

Important dates:
Expert sessions: April 8, 2025 - April 30, 2025
Hack submission deadline: April 30, 2025, 11:59 PM PST
Don't miss this opportunity: join us and start building the future of AI!

Registration
Secure your spot now! Fill out the form to confirm your participation in the hackathon. Then check the livestream schedule and register for the sessions that interest you most. After registering, introduce yourself and look for teammates!

Project Submission
Read the official rules carefully and make sure you understand the requirements. When your project is ready, follow the submission process.

Prizes and Categories
Projects will be judged by a panel including Microsoft engineers, product managers, and developer advocates. Judging criteria will include innovation, impact, technical usability, and alignment with the corresponding hackathon category. Each winning team in the categories below will receive a prize.

Best Overall Agent - $20,000
Best Agent in Python - $5,000
Best Agent in C# - $5,000
Best Agent in Java - $5,000
Best Agent in JavaScript/TypeScript - $5,000
Best Copilot Agent (using Microsoft Copilot Studio or the Microsoft 365 Agents SDK) - $5,000
Best Use of Azure AI Agent Service - $5,000
Each team can win in only one category. All participants who submit a project will receive a digital badge.

Livestreams
Portuguese: register for all the Portuguese-language sessions.
4/8 12:00 PM PT | Welcome to the AI Agents Hackathon
4/10 12:00 PM PT | Build an app with the Azure AI Agent Service
4/17 06:00 AM PT | Your first AI agent in JavaScript with the Azure AI Agent Service
Other languages: We'll have more than 30 streams in English, plus streams in Spanish and Chinese. See the main page for details.

Office Hours
Need help with your project? Join Office Hours on the AI Discord channel and get guidance from experts! Here are the office hours already scheduled:
Every Thursday, 12:30 PM PT | Python + AI (English)
Every Monday, 03:00 PM PT | Python + AI (Spanish)

Learning Resources
Access the resources here! Join TheSource EHub to explore top resources, including trainings, livestreams, repositories, technical guides, blogs, downloads, certifications, and more, updated monthly. The AI Agents section offers essential resources for building AI agents, while other sections provide insights into AI, developer tools, and programming languages. You can also post questions in our discussion forum or chat with other participants on the Discord channel.

GitHub Copilot for Azure: Deploy an AI RAG App to ACA using AZD
Recently, I had to develop a Retrieval-Augmented Generation (RAG) prototype for an internal project. Since I enjoy working with LlamaIndex, I decided to use GitHub Copilot for Azure to quickly find an existing sample that I could use as a starting point and deploy it to Azure Container Apps.

Getting Started with GitHub Copilot for Azure
To begin, I installed the GitHub Copilot for Azure extension in VS Code. This extension allows me to interact with Azure directly using the azure command. I used this feature to ask my Copilot to help me locate a relevant sample to use as a foundation for my project. After querying available Azure resources, the extension found a LlamaIndex JavaScript sample, which was ideal for my needs. I then copied the Azure Developer CLI (azd) command to initialize my project and set up my environment.

Deploying the Sample to Azure Container Apps
With the sample files downloaded, the next step was to deploy the application as-is to ensure everything functioned correctly. I asked my Copilot how to proceed, and it suggested running the following command: azd up. After executing the command, my sample was successfully deployed to Azure Container Apps. Now, it was time to test it!

Debugging Deployment Issues with Copilot
To verify that everything was working, I interacted with the app by entering my prompt. However, I encountered an issue: there was a missing configuration in the container. To troubleshoot, I shared the error message with the extension and asked for guidance. My Copilot suggested adding a specific line to my main.bicep file. I applied the change and then wondered if I also needed to pass the variable to my container as a runtime configuration. Again, I consulted Copilot, which confirmed that I should add the variable to the container configuration. After copying and pasting the suggested change into my Bicep file (in true vibe-coding fashion), I was ready to redeploy.
Redeploying and Final Testing
To redeploy my updated configuration, I executed: azd deploy. The new revision of the app was successfully deployed. Time for another test! Success! The application responded correctly, confirming that my configuration updates worked as expected.

Conclusion
Using GitHub Copilot for Azure significantly accelerated my RAG prototype development by helping me find relevant resources, debug issues, and deploy my app seamlessly. If you're building Azure-based applications, I highly recommend trying out this extension. You can download the GitHub Copilot for Azure extension in VS Code and give it a go yourself. If you do, share your feedback in the repo; I'd love to hear how it improves your workflow!