
Microsoft Developer Community Blog

Supercharge your Python with Gen AI Skills from our free six-part YouTube series!

Pamela_Fox, Microsoft
Feb 26, 2025

Want to learn how to use generative AI models in your Python applications? We're putting on a series of six live streams, in both English and Spanish, all about generative AI.

We'll cover large language models, embedding models, vision models, and introduce techniques like RAG, function calling, and structured outputs. Plus we'll talk about AI safety and evaluations, to make sure all your models and applications are producing safe outputs.

In addition to the live streams, you can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.

Register for the entire series in your preferred language:

English | Spanish

You can also scroll down to learn about each session and register for individual sessions.

See you in the streams! 👋🏻 ¡Nos vemos en los streams! 👋🏽

LLMs: Large Language Models

11 March 2025 | 4:30 PM - 5:30 PM UTC

Register: English | Spanish

Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll show how to build a full-stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.
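To give a taste of the session, here's a minimal sketch of few-shot prompting with the OpenAI Python SDK. The model name, prompts, and example pairs are placeholder assumptions for illustration, not material from the session:

```python
import os


def build_messages(system_prompt, few_shot_pairs, user_question):
    """Assemble a chat messages list, placing few-shot question/answer
    examples before the real user question."""
    messages = [{"role": "system", "content": system_prompt}]
    for question, answer in few_shot_pairs:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_question})
    return messages


messages = build_messages(
    "You are a concise Python tutor.",
    [("What does len('abc') return?", "3")],
    "What does len([1, 2]) return?",
)

# With an API key configured, stream the response token by token,
# which keeps a user-facing app feeling responsive:
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, stream=True
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
```

The few-shot pairs act as in-context demonstrations: the model sees worked examples of the format you want before answering the real question.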

Vector embeddings

13 March 2025 | 4:30 PM - 5:30 PM UTC

Register: English | Spanish

In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode a text or image as an array of floating-point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code. We'll compare distance metrics, use quantization to reduce vector sizes, and try out multimodal embedding models.
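To make "similarity search" concrete, here's a small sketch of cosine similarity, one of the distance metrics the session compares. The toy three-dimensional vectors are made up for illustration; real embeddings are much longer (text-embedding-3-small returns 1536 dimensions by default):

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical
    direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Toy vectors standing in for real embedding output:
dog = [0.9, 0.1, 0.0]
puppy = [0.8, 0.2, 0.0]
car = [0.0, 0.1, 0.9]
```

With real embeddings, semantically related texts like "dog" and "puppy" land closer together in vector space than unrelated ones, which is exactly what a vector search index exploits.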

RAG: Retrieval Augmented Generation

18 March 2025 | 4:30 PM - 5:30 PM UTC

Register: English | Spanish

In our third Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources, like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
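The core RAG flow can be sketched in a few lines: retrieve relevant sources, then pack them into the prompt as grounding context. The retriever below is a deliberately naive keyword matcher standing in for a real search service like Azure AI Search, and the documents and function names are illustrative:

```python
def retrieve(query, documents, top_k=2):
    """Naive keyword retrieval: rank documents by how many query
    words they share. A real app would use a search index instead."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_rag_prompt(query, documents):
    """Assemble a grounded prompt: retrieved sources first, question last."""
    sources = retrieve(query, documents)
    context = "\n".join(f"- {source}" for source in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"


docs = [
    "Parakeets are small parrots that can mimic speech.",
    "The shuttle program ended in 2011.",
    "Parrots eat seeds and fruit.",
]
prompt = build_rag_prompt("what do parrots eat", docs)
```

The resulting prompt is what gets sent to the LLM, so its answer is grounded in the retrieved sources rather than only in its training data.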

Vision models

20 March 2025 | 4:30 PM - 5:30 PM UTC

Register: English | Spanish

Our fourth stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and GPT-4o mini. You can use those models for image captioning, data extraction, question-answering, classification, and more! We'll use Python to send images to vision models, build a basic chat app with image upload, and even use vision models inside a RAG application.
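Sending an image to a vision model usually means base64-encoding it into a data URI and pairing it with a text part in one chat message. Here's a sketch of that message shape as used by the OpenAI chat API; the fake image bytes are a stand-in for a real file read from disk:

```python
import base64


def image_message(question, image_bytes, mime_type="image/png"):
    """Build a user message pairing a text question with a base64
    data URI, the format vision-capable chat models accept."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    data_uri = f"data:{mime_type};base64,{encoded}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }


# In a real app: image_bytes = open("photo.png", "rb").read()
msg = image_message("What is in this image?", b"\x89PNG fake bytes")
```

The same message would then be passed in the `messages` list of a chat completion call against a vision-capable model.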

Function calling & structured outputs

25 March 2025 | 4:30 PM - 5:30 PM UTC

Register: English | Spanish

In our fifth stream of the Python + AI series, we're going to explore the two main ways to get LLMs to output structured responses that adhere to a schema: function calling and structured outputs. We'll start with function calling, which is the most widely supported way to get structured responses, and discuss its drawbacks. Then we'll focus on the new structured outputs mode available in OpenAI models, which can be used with Pydantic models and even used in combination with function calling. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.
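To show the shape of function calling, here's a sketch of a tool definition in the OpenAI tools format, plus the dispatch step your code performs when the model returns a tool call (the model sends arguments as a JSON string). The `get_weather` tool and its fields are a made-up example, not from the session:

```python
import json

# A hypothetical tool described with a JSON Schema for its parameters:
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}


def handle_tool_call(name, arguments_json):
    """Parse the model's JSON arguments string and dispatch to the
    matching Python function."""
    args = json.loads(arguments_json)
    if name == "get_weather":
        # A real app would call an actual weather API here.
        return f"Weather for {args['city']} in {args.get('unit', 'celsius')}"
    raise ValueError(f"Unknown tool: {name}")


result = handle_tool_call("get_weather", '{"city": "Seattle"}')
```

The schema tells the model what arguments are legal; your code still validates and executes the call, then sends the result back in a follow-up message.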

AI quality & safety

27 March 2025 | 4:30 PM - 5:30 PM UTC

Register: English | Spanish

In our final session of the Python + AI series, we'll wrap up with a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle content safety errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.
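When the content safety system blocks a request or response, the API returns an error rather than a completion, and your application should detect that case and show the user something graceful. Here's a small sketch of that check; the error-body shape follows the `content_filter` error code that Azure OpenAI documents, but treat the exact fields as an assumption:

```python
def is_content_filter_error(error_body):
    """Return True when an API error body indicates the content
    safety system blocked the request or the response."""
    error = (error_body or {}).get("error") or {}
    return error.get("code") == "content_filter"


# Illustrative error bodies an app might need to distinguish:
blocked = {"error": {"code": "content_filter", "message": "The response was filtered."}}
throttled = {"error": {"code": "429", "message": "Rate limit exceeded."}}
```

In practice you'd run this kind of check inside an exception handler around the chat completion call, and render a "that request couldn't be answered" message instead of surfacing the raw error.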

Updated Feb 21, 2025
Version 1.0