Recent Discussions
2/10/21 - Announcing an Azure Cognitive Services – Speech AMA!
We are very excited to announce an Azure Cognitive Services – Speech AMA! We'll be answering your questions on how to add capabilities like Text-to-Speech and Custom Neural Voice with the Azure Speech service. The AMA will take place on Wednesday, February 10, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AI AMA space. Add the event to your calendar and view in your time zone here. An AMA is a live online event similar to a “YamJam” on Yammer or an “Ask Me Anything” on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. Please note, if you are unable to participate on the day of the event, we will open up the space for new questions 24 hours before the event, so feel free to post your questions beforehand if it fits your schedule or time zone better. BEFORE THE EVENT - check out these resources: Learn more about the text-to-speech capability: Build a natural custom voice for your brand (microsoft.com). Check out Friday's AI show featuring Edward Un and Sarah Bird!
3/10/21 - Announcing an Azure Cognitive Search AMA!
We are very excited to announce an Azure Cognitive Search AMA! The AMA will take place on Wednesday, March 10, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AI AMA space. Add the event to your calendar and view in your time zone here. An AMA is a live online event similar to a “YamJam” on Yammer or an “Ask Me Anything” on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. The space is now open for new questions (24 hours before the event), so feel free to post your questions anytime beforehand if it fits your schedule or time zone better.
Welcome to the AI Platform Tech Community!
There are several exciting things happening in the AI space and across the AI platform at Microsoft. Welcome to the AI Platform tech community, where we share what we learn from working closely with partners and customers. You will see blog articles providing deep dives, getting-started guides, and tips and tricks across various AI technology areas. This community includes members from the AI Platform customer engineering and product teams. Thanks!
Get Rewarded for Sharing Your Experience with Microsoft Azure AI
We invite our valued Microsoft Azure AI customers to share your firsthand experience developing with Azure AI by writing a review on Gartner Peer Insights. Your review will not only assist other developers and technical decision-makers but also help shape the future of our AI products. Thank you for your time and contribution, and we are excited to hear your thoughts! To Write a Review & Claim Your Reward: Read our blog for next steps. You will receive a $25 gift card, a 3-month subscription to Gartner research, or a donation to a charitable cause as a token of our appreciation.
4/28/21 AMA - Conversational AI on Bot Framework Composer and the Telephony Channel
We are very excited to announce a Conversational AI with Bot Framework Composer and the Telephony Channel AMA! The AMA will take place on Wednesday, April 28, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AI AMA space. Add the event to your calendar and view in your time zone here. An AMA is a live text-based online event similar to a “YamJam” on Yammer or an “Ask Me Anything” on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. The space will be open 24 hours before the event, so feel free to post your questions anytime beforehand during that period if it fits your schedule or time zone better.
2/10/21: That's a wrap: Azure Cognitive Services – Speech AMA
Thank you for joining us for this action-packed hour and voicing your questions and feedback in this Azure Cognitive Services - Speech AMA! Please answer our poll here: https://techcommunity.microsoft.com/t5/azure-ai-ama/request-we-would-love-to-hear-your-feedback-on-your-tts-building/td-p/2120672 If you still have Cognitive Services questions, feel free to post in our Cognitive Services discussion space: https://techcommunity.microsoft.com/t5/cognitive-services/bd-p/CognitiveServices See the summary of questions and answers attached!
How to Build AI Agents in 10 Lessons
Microsoft has released an excellent learning resource for anyone looking to dive into the world of AI agents: "AI Agents for Beginners". This comprehensive course is available free on GitHub. It is designed to teach the fundamentals of building AI agents, even if you are just starting out.
What You'll Learn
The course is structured into 10 lessons, covering a wide range of essential topics including:
- Agentic Frameworks: Understand the core structures and components used to build AI agents.
- Design Patterns: Learn proven approaches for designing effective and efficient AI agents.
- Retrieval Augmented Generation (RAG): Enhance AI agents by incorporating external knowledge.
- Building Trustworthy AI Agents: Discover techniques for creating AI agents that are reliable and safe.
- AI Agents in Production: Get insights into deploying and managing AI agents in real-world applications.
Hands-On Experience
The course includes practical code examples that utilize:
- Azure AI Foundry
- GitHub Models
These examples help you learn how to interact with Language Models and use AI Agent frameworks and services from Microsoft, such as:
- Azure AI Agent Service
- Semantic Kernel Agent Framework
- AutoGen - A framework for building AI agents and applications
Getting Started
To get started, make sure you have the proper set-up. Here are the 10 lessons:
1. Intro to AI Agents and Agent Use Cases
2. Exploring AI Agent Frameworks
3. Understanding AI Agentic Design Principles
4. Tool Use Design Pattern
5. Agentic RAG
6. Building Trustworthy AI Agents
7. Planning Design
8. Multi-Agent Design Patterns
9. Metacognition in AI Agents
10. AI Agents in Production
Multi-Language Support
To make learning accessible to a global audience, the course offers multi-language support.
Get Started Today!
If you are eager to learn about AI agents, this course is an excellent starting point. You can find the complete course materials on GitHub at AI Agents for Beginners.
6/3: Announcing an Azure Cognitive Search AMA!
We are very excited to announce an Azure Cognitive Search AMA! The AMA will take place on Thursday, June 3, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AI AMA space. Add the event to your calendar and view in your time zone here. An AMA is a live text-based online event similar to a “YamJam” on Yammer or an “Ask Me Anything” on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. The space will be open 24 hours before the event, so feel free to post your questions anytime beforehand if it fits your schedule or time zone better.
3/10/21 - That's a wrap on our Azure Cognitive Search AMA!
Thank you for joining us for this action-packed hour and voicing your questions and feedback in this Azure Cognitive Search AMA! If you still have Cognitive Services questions, feel free to post in our Cognitive Services discussion space: https://techcommunity.microsoft.com/t5/cognitive-services/bd-p/CognitiveServices We have attached a summary of the questions and answers here in this thread. See you next time!
Azure OpenAI: GPT-5-Codex Availability?
Greetings everyone! I just wanted to see if there's any word as to when/if https://openai.com/index/introducing-upgrades-to-codex/ will make its way to the AI Foundry. It was released on September 15th, 2025, but I have no idea how long Azure tends to lag behind OpenAI's releases. There doesn't seem to be any source of information on what Azure plans to do with new models, if anything, when they drop. Any conversation around this would be helpful and appreciated, thanks!
What is Convolutional Neural Network — CNN (Deep Learning)
Convolutional Neural Networks (CNNs) are a type of deep learning neural network architecture that is particularly well suited to image classification and object recognition tasks. A CNN works by transforming an input image into a feature map, which is then processed through multiple convolutional and pooling layers to produce a predicted output.
Convolutional Neural Network — CNN architecture
In this blog post, we will explore the basics of CNNs, including how they work, their architecture, and how they can be used for a wide range of computer vision tasks. We will also provide examples of some real-world applications of CNNs, and outline some of the benefits and limitations of this deep-learning architecture.
Working of Convolutional Neural Network:
A convolutional neural network starts by taking an input image, which is then transformed into a feature map through a series of convolutional and pooling layers. The convolutional layer applies a set of filters to the input image, each filter producing a feature map that highlights a specific aspect of the input image. The pooling layer then downsamples the feature map to reduce its size, while retaining the most important information. The feature map produced by the convolutional layer is then passed through multiple additional convolutional and pooling layers, each layer learning increasingly complex features of the input image. The final output of the network is a predicted class label or probability score for each class, depending on the task.
The architecture of Convolutional Neural Network:
A typical CNN architecture is made up of three main components: the input layer, the hidden layers, and the output layer. The input layer receives the input image and passes it to the hidden layers, which are made up of multiple convolutional and pooling layers. The output layer provides the predicted class label or probability scores for each class. The hidden layers are the most important part of a CNN, and the number of hidden layers and the number of filters in each layer can be adjusted to optimize the network's performance. A common architecture for a CNN is to have multiple convolutional layers, followed by one or more pooling layers, and then a fully connected layer that provides the final output.
Applications of Convolutional Neural Network:
CNNs have a wide range of applications in computer vision, including image classification, object detection, semantic segmentation, and style transfer.
- Image classification: Image classification is the task of assigning a class label to an input image. CNNs can be trained on large datasets of labeled images to learn the relationships between the image pixels and the class labels, and then applied to new, unseen images to make a prediction.
- Object detection: Object detection is the task of identifying objects of a specific class in an input image and marking their locations. This can be useful for applications such as security and surveillance, where it is important to detect and track objects in real time.
- Semantic segmentation: Semantic segmentation is the task of assigning a class label to each pixel in an input image, producing a segmented image that can be used for further analysis. This can be useful for applications such as medical image analysis, where it is important to segment specific structures in an image for further analysis.
- Style transfer: Style transfer is the task of transferring the style of one image to another image while preserving the content of the target image.
This can be useful for applications such as art and design, where it is desired to create an image that combines the content of one image with the style of another.
Layers of Convolutional Neural Network:
The layers of a Convolutional Neural Network (CNN) can be broadly classified into the following categories:
- Convolutional Layer: The convolutional layer is responsible for extracting features from the input image. It performs a convolution operation on the input image, where a filter or kernel is applied to the image to identify and extract specific features.
- Pooling Layer: The pooling layer is responsible for reducing the spatial dimensions of the feature maps produced by the convolutional layer. It performs a down-sampling operation to reduce the size of the feature maps and reduce computational complexity.
- Activation Layer: The activation layer applies a non-linear activation function, such as the ReLU function, to the output of the pooling layer. This function helps to introduce non-linearity into the model, allowing it to learn more complex representations of the input data.
- Fully Connected Layer: The fully connected layer is a traditional neural network layer that connects all the neurons in the previous layer to all the neurons in the next layer. This layer is responsible for combining the features learned by the convolutional and pooling layers to make a prediction.
- Normalization Layer: The normalization layer performs normalization operations, such as batch normalization or layer normalization, to ensure that the activations of each layer are well-conditioned and prevent overfitting.
- Dropout Layer: The dropout layer is used to prevent overfitting by randomly dropping out neurons during training. This helps to ensure that the model does not memorize the training data but instead generalizes to new, unseen data.
- Dense Layer: After the convolutional and pooling layers have extracted features from the input image, the dense layer can then be used to combine those features and make a final prediction. In a CNN, the dense layer is usually the final layer and is used to produce the output predictions. The activations from the previous layers are flattened and passed as inputs to the dense layer, which performs a weighted sum of the inputs and applies an activation function to produce the final output.
Benefits of Convolutional Neural Network:
- Feature extraction: CNNs are capable of automatically extracting relevant features from an input image, reducing the need for manual feature engineering.
- Spatial invariance: CNNs can recognize objects in an image regardless of their location, size, or orientation, making them well-suited to object recognition tasks.
- Robust to noise: CNNs can often handle noisy or cluttered images, making them useful for real-world applications where image quality may be variable.
- Transfer learning: CNNs can leverage pre-trained models, reducing the amount of data and computational resources required to train a new model.
- Performance: CNNs have demonstrated state-of-the-art performance on a range of computer vision tasks, including image classification, object detection, and semantic segmentation.
Limitations of Convolutional Neural Network:
- Computational cost: Training a deep CNN can be computationally expensive, requiring significant amounts of data and computational resources.
- Overfitting: Deep CNNs are prone to overfitting, especially when trained on small datasets, where the model may memorize the training data rather than generalize to new, unseen data.
- Lack of interpretability: CNNs are considered to be a “black box” model, making it difficult to understand why a particular prediction was made.
- Limited to grid-like structures: CNNs are limited to grid-like structures and cannot handle irregular shapes or non-grid-like data structures.
Conclusion:
In conclusion, Convolutional Neural Networks (CNNs) are a powerful deep learning architecture well-suited to image classification and object recognition tasks. With their ability to automatically extract relevant features, handle noisy images, and leverage pre-trained models, CNNs have demonstrated state-of-the-art performance on a range of computer vision tasks. However, they also have their limitations, including a high computational cost, overfitting, a lack of interpretability, and a limited ability to handle irregular shapes. Nevertheless, CNNs remain a popular choice for many computer vision tasks and are likely to continue to be a key area of research and development in the coming years.
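The layer categories described above map almost one-to-one onto code. Below is a minimal, illustrative PyTorch sketch (not from the original post) of a small image classifier with two convolution / activation / pooling stages followed by dropout and a fully connected output layer; the channel counts, the 32x32 input size, and the class count are arbitrary assumptions.

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),                                     # activation layer
            nn.MaxPool2d(2),                               # pooling layer (32x32 -> 16x16)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                               # dropout layer
            nn.Linear(32 * 8 * 8, num_classes),            # fully connected / dense layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleCNN(num_classes=10)
dummy_batch = torch.randn(1, 3, 32, 32)   # one 32x32 RGB image
logits = model(dummy_batch)
print(logits.shape)                        # torch.Size([1, 10])

Training such a model is then a matter of pairing it with a loss function (for example, cross-entropy) and an optimizer, exactly as for any other PyTorch module.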
5/20: Azure Cognitive Services - Speech AMA announcement!
We are very excited to announce an Azure Cognitive Services - Speech AMA! The AMA will take place on Thursday, May 20, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AI AMA space. Add the event to your calendar and view in your time zone here. An AMA is a live text-based online event similar to a “YamJam” on Yammer or an “Ask Me Anything” on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. The space will be open 24 hours before the event, so feel free to post your questions anytime beforehand if it fits your schedule or time zone better.
Parsing unstructured medical data
I'm interested in tools for parsing, understanding, and applying machine learning to unstructured medical data. Is Azure Cognitive Services being used for this type of data?
Welcome to the Azure Cognitive Services - Speech AMA!
Welcome to the Azure Cognitive Services - Speech Ask Microsoft Anything (AMA)! This live hour gives you the opportunity to ask questions and provide feedback. Please introduce yourself by replying to this thread. Post your questions in a new thread within the Azure AI AMA space by clicking "Start a New Conversation" at the top of the page.
BYO Thread Storage in Azure AI Foundry using Python
Build scalable, secure, and persistent multi-agent memory with your own storage backend.
As AI agents evolve beyond one-off interactions, persistent context becomes a critical architectural requirement. Azure AI Foundry’s latest update introduces a powerful capability — Bring Your Own (BYO) Thread Storage — enabling developers to integrate custom storage solutions for agent threads. This feature empowers enterprises to control how agent memory is stored, retrieved, and governed, aligning with compliance, scalability, and observability goals.
What Is “BYO Thread Storage”?
In Azure AI Foundry, a thread represents a conversation or task execution context for an AI agent. By default, thread state (messages, actions, results, metadata) is stored in Foundry’s managed storage. With BYO Thread Storage, you can now:
- Store threads in your own database — Azure Cosmos DB, SQL, Blob, or even a Vector DB.
- Apply custom retention, encryption, and access policies.
- Integrate with your existing data and governance frameworks.
- Enable cross-region disaster recovery (DR) setups seamlessly.
This gives enterprises full control of data lifecycle management — a big step toward AI-first operational excellence.
Architecture Overview
A typical setup involves:
- Azure AI Foundry Agent Service — Hosts your multi-agent setup.
- Custom Thread Storage Backend — e.g., Azure Cosmos DB, Azure Table, or PostgreSQL.
- Thread Adapter — Python class implementing the Foundry storage interface.
- Disaster Recovery (DR) replication — Optional replication of threads to a secondary region.
Implementing BYO Thread Storage using Python
Prerequisites
First, install the necessary Python packages:

pip install azure-ai-projects azure-cosmos azure-identity

Setting Up the Storage Layer

from azure.cosmos import CosmosClient, PartitionKey
from azure.identity import DefaultAzureCredential
import json
from datetime import datetime

class ThreadStorageManager:
    def __init__(self, cosmos_endpoint, database_name, container_name):
        credential = DefaultAzureCredential()
        self.client = CosmosClient(cosmos_endpoint, credential=credential)
        self.database = self.client.get_database_client(database_name)
        self.container = self.database.get_container_client(container_name)

    def create_thread(self, user_id, metadata=None):
        """Create a new conversation thread"""
        thread_id = f"thread_{user_id}_{datetime.utcnow().timestamp()}"
        thread_data = {
            'id': thread_id,
            'user_id': user_id,
            'messages': [],
            'created_at': datetime.utcnow().isoformat(),
            'updated_at': datetime.utcnow().isoformat(),
            'metadata': metadata or {}
        }
        self.container.create_item(body=thread_data)
        return thread_id

    def add_message(self, thread_id, role, content):
        """Add a message to an existing thread"""
        thread = self.container.read_item(item=thread_id, partition_key=thread_id)
        message = {
            'role': role,
            'content': content,
            'timestamp': datetime.utcnow().isoformat()
        }
        thread['messages'].append(message)
        thread['updated_at'] = datetime.utcnow().isoformat()
        self.container.replace_item(item=thread_id, body=thread)
        return message

    def get_thread(self, thread_id):
        """Retrieve a complete thread"""
        try:
            return self.container.read_item(item=thread_id, partition_key=thread_id)
        except Exception as e:
            print(f"Thread not found: {e}")
            return None

    def get_thread_messages(self, thread_id):
        """Get all messages from a thread"""
        thread = self.get_thread(thread_id)
        return thread['messages'] if thread else []

    def delete_thread(self, thread_id):
        """Delete a thread"""
        self.container.delete_item(item=thread_id, partition_key=thread_id)

Integrating with Azure AI Foundry
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

class ConversationManager:
    def __init__(self, project_endpoint, storage_manager):
        self.ai_client = AIProjectClient.from_connection_string(
            credential=DefaultAzureCredential(),
            conn_str=project_endpoint
        )
        self.storage = storage_manager

    def start_conversation(self, user_id, system_prompt):
        """Initialize a new conversation"""
        thread_id = self.storage.create_thread(
            user_id=user_id,
            metadata={'system_prompt': system_prompt}
        )
        # Add system message
        self.storage.add_message(thread_id, 'system', system_prompt)
        return thread_id

    def send_message(self, thread_id, user_message, model_deployment):
        """Send a message and get AI response"""
        # Store user message
        self.storage.add_message(thread_id, 'user', user_message)
        # Retrieve conversation history
        messages = self.storage.get_thread_messages(thread_id)
        # Call Azure AI with conversation history
        response = self.ai_client.inference.get_chat_completions(
            model=model_deployment,
            messages=[
                {"role": msg['role'], "content": msg['content']}
                for msg in messages
            ]
        )
        assistant_message = response.choices[0].message.content
        # Store assistant response
        self.storage.add_message(thread_id, 'assistant', assistant_message)
        return assistant_message

Usage Example

# Initialize storage and conversation manager
storage = ThreadStorageManager(
    cosmos_endpoint="https://your-cosmos-account.documents.azure.com:443/",
    database_name="conversational-ai",
    container_name="threads"
)

conversation_mgr = ConversationManager(
    project_endpoint="your-project-connection-string",
    storage_manager=storage
)

# Start a new conversation
thread_id = conversation_mgr.start_conversation(
    user_id="user123",
    system_prompt="You are a helpful AI assistant."
)

# Send messages
response1 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="What is machine learning?",
    model_deployment="gpt-4"
)
print(f"AI: {response1}")

response2 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="Can you give me an example?",
    model_deployment="gpt-4"
)
print(f"AI: {response2}")

# Retrieve full conversation history
history = storage.get_thread_messages(thread_id)
for msg in history:
    print(f"{msg['role']}: {msg['content']}")

Key Highlights:
- Threads are stored in Cosmos DB under your control.
- You can attach metadata such as region, owner, or compliance tags.
- Integrates natively with existing Azure identity and Key Vault.
Disaster Recovery & Resilience
When coupled with geo-replicated Cosmos DB or Azure Storage RA-GRS, your BYO thread storage becomes resilient by design:
- Primary writes in East US replicate to Central US.
- Foundry auto-detects failover and reconnects to the secondary region.
- Threads remain available during outages — ensuring operational continuity.
This aligns perfectly with the AI-First Operational Excellence architecture theme, where reliability and observability drive intelligent automation.
Best Practices
- Security: Use Azure Key Vault for credentials & encryption keys.
- Compliance: Configure data residency & retention in your own DB.
- Observability: Log thread CRUD operations to Azure Monitor or Application Insights.
- Performance: Use async I/O and partition keys for large workloads.
- DR: Enable geo-redundant storage & failover tests regularly.
When to Use BYO Thread Storage
- Regulated industries (BFSI, Healthcare, etc.): maintain data control & audit trails.
- Multi-region agent deployments: support DR and data sovereignty.
- Advanced analytics on conversation data: query threads directly from your DB.
- Enterprise observability: unified monitoring across Foundry + Ops.
The Future
BYO Thread Storage opens doors to advanced use cases — federated agent memory, semantic retrieval over past conversations, and dynamic workload failover across regions. For architects, this feature is a key enabler for secure, scalable, and compliant AI system design. For developers, it means more flexibility, transparency, and integration power.
Summary
- Custom thread storage: full control over data.
- Python adapter support: easy extensibility.
- Multi-region DR ready: business continuity.
- Azure-native security: enterprise-grade safety.
Conclusion
Implementing BYO thread storage in Azure AI Foundry gives you the flexibility to build AI applications that meet your specific requirements for data governance, performance, and scalability. By taking control of your storage, you can create more robust, compliant, and maintainable AI solutions.
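To make the Disaster Recovery section above concrete, here is a minimal sketch of pointing the thread storage at a geo-replicated Cosmos DB account. The endpoint, the region names, and the idea of passing a pre-built client into ThreadStorageManager are illustrative assumptions, not part of the original article; preferred_locations is the azure-cosmos SDK option for ordering read/failover regions.

from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

# Assumes a Cosmos DB account replicated to a secondary region (hypothetical names).
# preferred_locations tells the SDK which regions to try first for reads,
# so it can fall back to Central US if East US becomes unavailable.
credential = DefaultAzureCredential()
client = CosmosClient(
    "https://your-cosmos-account.documents.azure.com:443/",
    credential=credential,
    preferred_locations=["East US", "Central US"],
)
container = client.get_database_client("conversational-ai").get_container_client("threads")

A small refactor of ThreadStorageManager to accept a pre-built client or container would keep this failover configuration in one place instead of inside the class constructor.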
Using AI to convert unstructured information to structured information
We have a use case to extract information from various types of documents like Excel, PDF, and Word and convert it into structured information. The data exists in different formats. We started building this use case with AI Builder, hit a roadblock, and are now exploring options with Copilot Studio. It would be great if someone could point us in the right direction. What should be the right technology stack to consider for this use case? Thank you for any pointers.
Generative AI for Beginners
Learn the fundamentals of building Generative AI applications with our 12-lesson comprehensive course by Microsoft Cloud Advocates. Each lesson covers a key aspect of Generative AI principles and application development. Throughout this course, we will be building our own Generative AI startup so you can get an understanding of what it takes to launch your ideas. Check out the course here: https://aka.ms/genai-beginners What AI applications are you excited to build?
96 languages supported for Azure Cognitive Service for Language
Azure Cognitive Service for Language recently announced that its new custom features (custom text classification, custom named entity recognition, and conversational language understanding) are available in 96 languages! https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-cognitive-service-for-language-support-96-languages-for/ba-p/3256994 Are you using Azure Cognitive Service for Language currently, and if so, what are your use cases? Which languages are you using or supporting?
Welcome to the 6/3 Azure Cognitive Search AMA!
Welcome to the Azure Cognitive Search Ask Microsoft Anything (AMA)! This live hour gives you the opportunity to ask questions and provide feedback. We hope you enjoyed Microsoft Build and all the news on Azure Cognitive Search, particularly the new language support in Semantic Search as well as new data source ingestion through Power Query connectors. Here's a link. Please introduce yourself by replying to this thread. Post your questions in a new thread within the Azure AI AMA space by clicking "Start a New Conversation" at the top of the page.
Events
Recent Blogs
- Kimi K2 Thinking represents a major leap forward in agentic intelligence. Designed as a true thinking agent, it performs multi-step reasoning, orchestrates long chains of tool calls, and maintains st... (Dec 08, 2025)
- Announcing GPT-5.1-codex-max: The Future of Enterprise Coding Starts Now. We're thrilled to announce the general availability of OpenAI's GPT-5.1-codex-max in Microsoft Foundry Models; a leap forwar... (Dec 05, 2025)