Recent Discussions
Do not see option Add your data in Azure AI chat playground for DeepSeek model
Issue 1: I am evaluating different models in Azure AI Foundry against my own data in Azure AI Search and do not see the option to add your data when the DeepSeek-R1 model is selected in the chat playground. It used to be there but disappeared recently (precisely on Feb 18, ET). However, I do see the option to add your data when GPT models are selected in the chat playground. Issue 2: When the add your data option was available for the DeepSeek-R1 model (prior to Feb 18, 2025), I was getting the following error in the chat playground: "An error occurred when calling Azure OpenAI: Server responded with status 400. Error message: {'error': {'code': 'unknown_model', 'message': 'Unknown model: chatgpt', 'details': 'Unknown model: chatgpt'}}"
Integrating Azure OpenAI Services
Hello everyone, I'm currently developing an AI to Human Text Converter Free platform, aitohumantextconverterfree.net, that aims to transform AI text into more human-style, engaging text. I'm exploring ways to enhance the platform's capabilities and am particularly interested in leveraging Azure's AI services. I've been reading about the Azure OpenAI Service and its various models, such as the o-series and GPT-4o, which are designed for advanced reasoning and problem-solving tasks. I have a few questions: Model Selection: Which Azure AI models would be most suitable for refining AI-generated text to make it more human in style? Integration Best Practices: Are there recommended approaches or resources for integrating Azure's AI services into existing web platforms? Customization: Is it possible to fine-tune these models specifically for converting AI-generated text into a more natural, human-like style? Any insights, experiences, or resources you could share would be greatly appreciated. Thank you!
Image-to-Image generation using Stable-Diffusion-3.5 Large Model
Has anybody been able to generate an image with the 'Image-to-Image' mode of 'Stable Diffusion 3.5' (deployed as a serverless API)? I tried the text-to-image option (via Python + POST request) and was able to generate the image, but the 'image-to-image' option does not seem to work. In fact, even the Azure Playground does not list parameters for this option. But the model information on Azure does state that it supports image-to-image mode and accepts an image input. Any leads on this will be greatly appreciated.
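For comparison, here is a hedged sketch of how an image-to-image request body could be assembled, with the input image base64-encoded. The field names (`prompt`, `image`, `strength`) are assumptions modeled on common Stable Diffusion APIs, not confirmed from the Azure serverless endpoint's schema — verify them against the deployment's own documentation:

```python
import base64
import json

def build_image_to_image_payload(prompt: str, image_bytes: bytes, strength: float = 0.7) -> str:
    """Assemble a JSON body with the input image base64-encoded.
    Field names here are illustrative; check the deployment's actual
    request schema before sending."""
    payload = {
        "prompt": prompt,
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "strength": strength,  # how far the output may drift from the input image
    }
    return json.dumps(payload)

body = build_image_to_image_payload("a watercolor version of this photo", b"\x89PNG...")
```

The resulting string would then be POSTed to the serverless endpoint with the same headers used for the working text-to-image call.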
Principal Does not have Access to API/Operation
Hi all, I am trying to connect the Azure OpenAI service to the Azure AI Search service to an Azure Gen 2 Data Lake. In the Azure AI Foundry studio Chat Playground, I am able to add my data source, which is a .csv file in the data lake that has been indexed successfully. I use "System Assigned Managed Identity". The following RBAC has been applied:
- The AI Search service has Cognitive Services OpenAI Contributor on the Azure OpenAI service
- The Azure OpenAI service has Search Index Data Reader on the AI Search service
- The Azure OpenAI service has Search Service Contributor on the AI Search service
- The AI Search service has Storage Blob Data Reader on the storage account (Data Lake)
As mentioned, adding the data source passes validation, but when I try to ask a question, I get the error "We couldn't connect your data. Principal does not have access to API/Operation".
Azure AI speech studio - synthesis failed
Hi, in my TTS project all files created so far cause a failure when I hit the Play button. I get the following error message: Response status code does not indicate success: 400 (Synthesis failed. StatusCode: FailedPrecondition, Details: '=' is an unexpected token. The expected token is ';'. Line 1, position 535.). Connection ID: c2e319c0-c447-11ef-8937-33bd13f92760. Changing voices does not solve it. The location of the speech service is "Germany West Central".
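This class of error ("'=' is an unexpected token. The expected token is ';'") is what an XML parser reports when it hits an unescaped `&` in the SSML payload: after `&` it expects an entity such as `&amp;`, i.e. a name terminated by `;`. A minimal sketch, assuming the project inserts user text into SSML verbatim, that escapes the reserved XML characters before synthesis (the voice name and language here are illustrative):

```python
from xml.sax.saxutils import escape

def build_ssml(text: str, voice: str = "de-DE-KatjaNeural") -> str:
    """Wrap text in minimal SSML, escaping &, <, > so the synthesis
    service's XML parser does not fail on characters like '&' or '='."""
    safe = escape(text)  # turns & into &amp;, < into &lt;, > into &gt;
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        f"xml:lang='de-DE'><voice name='{voice}'>{safe}</voice></speak>"
    )

# Text containing '&' followed later by '=' reproduces the failing case unescaped:
print(build_ssml("Terms & conditions: rate=5%"))
```

If the failing files contain a raw `&` around position 535 of the generated SSML line, escaping it should resolve the 400.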
Using AI to convert unstructured information to structured information
We have a use case to extract the information from various types of documents like Excel, PDF, and Word and convert it into structured information. The data exists in different formats. We started building this use case with AI Builder, but we hit a roadblock and are now exploring ways using Copilot Studio. It would be great if someone could point us in the right direction. What would be the right technology stack to consider for this use case? Thank you for the pointer.
Integration between AI Agent and D365 Finance and Operations
We have a requirement to fire a database query using the AI agent to find and retrieve certain information. I understood from our team that retrieving the information from a specific table may be possible, but the team is facing difficulty when retrieving the information from multiple related tables and when there are multiple conditions to consider. Any pointer here is greatly appreciated.
Use case: Access to product documentation using the sidecar in D365
As an ISV, we continuously enhance our product by adding new features. We want to make up-to-date information about these features easily accessible to our customers directly within D365 using the Copilot sidecar. This would allow customers to prompt for details on a specific feature, and the system would provide comprehensive explanations. Currently, our product documentation for new features is stored on SharePoint. Would it be possible to integrate the Copilot sidecar for all our customers, enabling them to access our centrally stored product documentation on SharePoint?
How do I integrate a custom AI agent with D365 Finance and Operations?
We have a requirement where a custom AI agent needs to call D365 Finance and Operations to search for an item with some search criteria. We are in the process of building this custom AI agent using Azure AI Foundry. We are also looking to retrieve the matching items from D365 Finance and Operations to be returned to the AI agent. Any pointer here is greatly appreciated.
Create indexer - Can anyone help me please?
Hi everyone, I'm trying to create a custom skill for Azure AI Search, following the lab below: https://github.com/MicrosoftLearning/mslearn-knowledge-mining/blob/main/Instructions/Labs/01-azure-search.md When I try to create an indexer with the "./create-search" command in the PowerShell terminal of Visual Studio Code, this error is returned (I'm using C#): {"error":{"code":"","message":"This indexer refers to a skillset 'margies-custom-skillset' that doesn't exist"}} Why does this happen? Where is the skillset 'margies-custom-skillset' stored? Is there any hint to get through this? The access keys (service, admin, storage) are correctly entered.
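That error means the indexer definition names a skillset that has not yet been created on the search service: in the Azure AI Search REST API, the skillset must be PUT before any indexer that references it. A hedged sketch of the ordering (the service URL and admin key are placeholders, and the resource definitions are stripped to the minimum for illustration):

```python
import json
import urllib.request

SERVICE = "https://your-service.search.windows.net"  # placeholder service URL
API_VERSION = "2024-07-01"
ADMIN_KEY = "YOUR-ADMIN-KEY"                         # placeholder admin key

def put_resource(kind: str, name: str, definition: dict) -> urllib.request.Request:
    """Build a PUT request for a search resource (skillsets, indexers, ...).
    The skillset must exist on the service before an indexer references it."""
    url = f"{SERVICE}/{kind}/{name}?api-version={API_VERSION}"
    return urllib.request.Request(
        url,
        data=json.dumps(definition).encode(),
        headers={"Content-Type": "application/json", "api-key": ADMIN_KEY},
        method="PUT",
    )

# Order matters: create the skillset first, then the indexer that refers to it.
skillset_req = put_resource("skillsets", "margies-custom-skillset", {"skills": []})
indexer_req = put_resource(
    "indexers", "margies-custom-indexer",
    {"skillsetName": "margies-custom-skillset"},
)
```

In the lab, check that the script step which creates the skillset ran successfully (and against the same service) before the indexer creation step.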
Model in Document Intelligence stuck in state "running"
Hello, my custom model is stuck in the state "running". Besides that, I am not able to delete it, since all the actions like "delete", "compose", etc. (including the model itself) are greyed out and not active. What are the steps to unblock the model? Thanks in advance for the hint!
Multi Model Deployment with Azure AI Foundry Serverless, Python and Container Apps
Intro
Azure AI Foundry is a comprehensive AI suite with a vast set of serverless and managed model offerings designed to democratize AI deployment. Whether you're running a small startup or a large enterprise, Azure AI Foundry provides the flexibility and scalability needed to implement and manage machine learning and AI models seamlessly. By leveraging Azure's robust cloud infrastructure, you can focus on innovating and delivering value, while Azure takes care of the heavy lifting behind the scenes. In this demonstration, we delve into building an Azure Container Apps stack. This approach allows us to deploy a web app that facilitates interaction with three powerful models: GPT-4, DeepSeek, and Phi-3. Users can select from these models for chat completions, gaining invaluable insights into their actual performance, token consumption, and overall efficiency through real-time metrics. This deployment not only showcases the versatility and robustness of Azure AI Foundry but also provides a practical framework for businesses to observe and measure AI effectiveness, paving the way for data-driven decision-making and optimized AI solutions.
Azure AI Foundry: The evolution
Azure AI Foundry represents the next evolution in Microsoft's AI offerings, building on the success of Azure AI and Cognitive Services. This unified platform is designed to streamline the development, deployment, and management of AI solutions, providing developers and enterprises with a comprehensive suite of tools and services. With Azure AI Foundry, users gain access to a robust model catalog, collaborative GenAIOps tools, and enterprise-grade security features. The platform's unified portal simplifies the AI development lifecycle, allowing seamless integration of various AI models and services. Azure AI Foundry offers the flexibility and scalability needed to bring your AI projects to life, with deep insights and a fast adoption path for users.
The Model Catalog allows us to filter and compare models per our requirements and easily create deployments directly from the interface.
Building the Application
Before describing the methodology and the process, we have to make sure our dependencies are in place. So let's have a quick look at the prerequisites of our deployment.
GitHub - passadis/ai-foundry-multimodels: Azure AI Foundry multimodel utilization and performance metrics Web App (github.com)
Prerequisites
- Azure Subscription
- Azure AI Foundry Hub with a project in East US. The models are all supported in East US.
- VSCode with the Azure Resources extension
There is no need to show the Azure resources deployment steps, since there are numerous ways to do it and I have also showcased that in previous posts. In fact, it is a standard set of services to support our microservices infrastructure: Azure Container Registry, Azure Key Vault, an Azure User-Assigned Managed Identity, an Azure Container Apps Environment, and finally our Azure AI Foundry model deployments.
Frontend – Vite + React + TS
The frontend is built using Vite and React and features a dropdown menu for model selection, a text area for user input, real-time response display, as well as loading states and error handling. Key considerations in the frontend implementation include the use of modern React patterns and hooks, ensuring a responsive design for various screen sizes, providing clear feedback for user interactions, and incorporating elegant error handling. The current implementation allows us to switch models even after we have initiated a conversation, and we can keep up to 5 messages as chat history. The uniqueness of our frontend is the performance information we get for each response: tokens, tokens per second, and total time.
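The per-response metrics shown in the UI boil down to simple arithmetic over the completion's token count and elapsed time. A minimal sketch (the class and field names are illustrative, not the project's actual code):

```python
from dataclasses import dataclass

@dataclass
class ResponseMetrics:
    total_tokens: int
    elapsed_seconds: float

    @property
    def tokens_per_second(self) -> float:
        """Throughput metric shown next to each response; guards against
        a zero elapsed time to avoid division errors."""
        if self.elapsed_seconds <= 0:
            return 0.0
        return self.total_tokens / self.elapsed_seconds

m = ResponseMetrics(total_tokens=800, elapsed_seconds=4.0)
print(m.tokens_per_second)  # 200.0
```

Comparing this number across GPT-4, DeepSeek, and Phi-3 for the same prompt is what makes the side-by-side evaluation meaningful.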
Backend – Python + FastAPI
The backend is built with FastAPI and is responsible for model selection and configuration, integrating with Azure AI Foundry, processing requests and responses, and handling errors and validation. A directory structure like the following helps us organize our services and utilize the modular strengths of Python:

backend/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── config.py
│   ├── api/
│   │   ├── __init__.py
│   │   └── routes.py
│   ├── models/
│   │   ├── __init__.py
│   │   └── request_models.py
│   └── services/
│       ├── __init__.py
│       └── azure_ai.py
├── run.py          # For local runs
├── Dockerfile
├── requirements.txt
└── .env

Azure Container Apps
A powerful combination allows us to easily integrate the frontend and backend using Dapr, since it is natively supported and integrated in Azure Container Apps:

try {
  const response = await fetch('/api/v1/generate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: selectedModel,
      prompt: userInput,
      parameters: {
        temperature: 0.7,
        max_tokens: 800
      }
    }),
  });
  // ... handle the response
} catch (error) {
  console.error('Request failed:', error);
}

However, we need to correctly configure NGINX to proxy the request to the Dapr sidecar, since we are serving the frontend from a container image:

# API endpoints via Dapr
location /api/v1/ {
    proxy_pass http://localhost:3500/v1.0/invoke/backend/method/api/v1/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

Azure Key Vault
As always, all our secret variables, like the API endpoints and the API keys, are stored in Key Vault. We create a Key Vault client in our backend and fetch each key only at the time we need it. That makes our deployment more secure and efficient.
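The backend's "model selection and configuration" step can be sketched as a registry that maps the model name sent by the frontend to its serverless endpoint settings. The registry contents and environment-variable names below are assumptions for illustration, not the project's actual configuration:

```python
import os

# Hypothetical registry: model name -> where to find its endpoint and key.
MODEL_REGISTRY = {
    "gpt-4":    {"endpoint_env": "GPT4_ENDPOINT",     "key_env": "GPT4_KEY"},
    "deepseek": {"endpoint_env": "DEEPSEEK_ENDPOINT", "key_env": "DEEPSEEK_KEY"},
    "phi-3":    {"endpoint_env": "PHI3_ENDPOINT",     "key_env": "PHI3_KEY"},
}

def resolve_model(name: str) -> dict:
    """Validate the requested model and resolve its endpoint settings,
    rejecting anything not in the registry before any call is made."""
    cfg = MODEL_REGISTRY.get(name.lower())
    if cfg is None:
        raise ValueError(f"Unknown model: {name}")
    return {
        "endpoint": os.environ.get(cfg["endpoint_env"], ""),
        "key": os.environ.get(cfg["key_env"], ""),
    }
```

In the real deployment, the key lookup would go to Key Vault rather than plain environment variables, as described above.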
Deployment Considerations
When deploying your application:
- Set up proper environment variables
- Configure CORS settings appropriately
- Implement monitoring and logging
- Set up appropriate scaling policies
Azure AI Foundry: Multi Model Architecture
The solution is built on Azure Container Apps for serverless scalability. The frontend and backend containers are hosted in Azure Container Registry and deployed to Container Apps with Dapr integration for service-to-service communication. Azure Key Vault manages sensitive configurations like API keys through a user-assigned managed identity. The backend connects to three Azure AI Foundry models (DeepSeek, GPT-4, and Phi-3), each with its own endpoint and configuration. This serverless architecture ensures high availability, secure secret management, and efficient model interaction while maintaining cost efficiency through consumption-based pricing.
Conclusion
This Azure AI Foundry models demo showcases the power of serverless AI integration in modern web applications. By leveraging Azure Container Apps, Dapr, and Azure Key Vault, we've created a secure, scalable, and cost-effective solution for AI model comparison and interaction. The project demonstrates how different AI models can be effectively compared and utilized, providing insights into their unique strengths and performance characteristics. Whether you're a developer exploring AI capabilities, an architect designing AI solutions, or a business evaluating AI models, this demo offers practical insights into Azure's AI infrastructure and serverless computing potential.
References
- Azure AI Foundry
- Azure Container Apps
- Azure AI – Documentation
- AI learning hub
- CloudBlogger: Text To Speech with Containers
Is AI Foundry in new exam for DP-100
25-30% of the DP-100 exam is now dedicated to Optimizing Language Models for AI Applications - does this require Azure AI Foundry? It doesn't say specifically in the study guide: https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/dp-100 Also, the videos could benefit from being updated to cover the changes as of 16 January 2025.
Deep Fake - what do u think about it ?
Hi, what do you think about deepfake technology? I found this article: Before you believe – how to recognize a deepfake and is it inherently evil? - Marek Jeleśniański (jelesnianski.com) Do you think that AI is more of a threat or an opportunity for development?
How to create your personal AI powered Email Assistant
Crafting an AI Powered Email Assistant with Semantic Kernel and Neon Serverless PostgreSQL
Intro
In the realm of Artificial Intelligence, crafting applications that seamlessly blend advanced capabilities with user-friendly design is no small feat. Today, we take you behind the scenes of building an AI Powered Email Assistant, a project that leverages Semantic Kernel for embedding generation and indexing, Neon PostgreSQL for vector storage, and the Azure OpenAI API for generative AI capabilities. This blog post is a practical guide to implementing a powerful AI-driven solution from scratch.
The Vision
Our AI Powered Email Assistant is designed to:
- Draft emails automatically using input prompts.
- Enable easy approval, editing, and sending via the Microsoft Graph API.
- Create and store embeddings of draft and sent emails in a Neon serverless PostgreSQL DB.
- Provide a search feature to retrieve similar emails based on contextual embeddings.
This application combines cutting-edge AI technologies and modern web development practices, offering a seamless user experience for drafting and managing emails.
The Core Technologies of our AI Powered Email Assistant
1. Semantic Kernel
Semantic Kernel simplifies the integration of AI services into applications. It provides robust tools for text generation, embedding creation, and memory management. For our project, Semantic Kernel acts as the foundation for:
- Generating email drafts via Azure OpenAI.
- Creating embeddings for storing and retrieving contextual data.
2. Vector Indexing with Neon PostgreSQL
Neon, a serverless PostgreSQL solution, allows seamless storage and retrieval of embeddings using the pgvector extension. Its serverless nature ensures scalability and reliability, making it perfect for real-time AI applications.
3. Azure OpenAI API
With Azure OpenAI, the project harnesses models like gpt-4 and text-embedding-ada-002 for generative text and embedding creation.
These APIs offer unparalleled flexibility and power for building AI-driven workflows.
How We Built our AI Powered Email Assistant
Step 1: Frontend – A React-Based Interface
The frontend, built in React, provides users with a sleek interface to:
- Input recipient details, subject, and email description.
- Generate email drafts with a single click.
- Approve, edit, and send emails directly.
We incorporated a loading spinner to enhance user feedback and a search function for retrieving similar emails.
Key Features:
- State Management: for handling draft generation and email sending.
- API Integration: React fetch calls connect seamlessly to backend APIs.
- Dynamic UI: a real-time experience for generating and reviewing drafts.
Step 2: Backend – ASP.NET Core with Semantic Kernel
The backend, powered by ASP.NET Core, uses Semantic Kernel for AI services and Neon for vector indexing. Key backend components include:
Semantic Kernel Services:
- Text Embedding Generation: uses Azure OpenAI's text-embedding-ada-002 to create embeddings for email content.
- Draft Generation: the AI Powered Email Assistant creates email drafts based on user inputs using the Azure OpenAI gpt-4 model (OpenAI Skill):

public async Task<string> GenerateEmailDraftAsync(string subject, string description)
{
    try
    {
        var chatCompletionService = _kernel.GetRequiredService<IChatCompletionService>();
        var message = new ChatMessageContent(
            AuthorRole.User,
            $"Draft a professional email with the following details:\nSubject: {subject}\nDescription: {description}"
        );
        var result = await chatCompletionService.GetChatMessageContentAsync(message.Content ?? string.Empty);
        return result?.Content ?? string.Empty;
    }
    catch (Exception ex)
    {
        throw new Exception($"Error generating email draft: {ex.Message}", ex);
    }
}

Vector Indexing with Neon:
- Embedding Storage: stores embeddings in Neon using the pgvector extension.
- Contextual Search: retrieves similar emails by calculating vector similarity.
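The "vector similarity" behind the contextual search is cosine similarity between embedding vectors: two emails about the same topic produce nearby vectors, so their similarity is close to 1. A pure-Python sketch of the math pgvector computes for us, for illustration only:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors: the dot product
    divided by the product of the vector norms (1 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

In production the vectors are the 1536-dimensional text-embedding-ada-002 outputs, and the database computes the distance server-side.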
Email Sending via Microsoft Graph:
- Enables sending emails directly through an authenticated Microsoft Graph API integration.
Key Backend Features:
- Middleware for PIN Authentication: adds a secure layer to ensure only authorized users access the application.
- CORS Policies: allow safe frontend-backend communication.
- Swagger Documentation: Swagger docs simplify API testing during development.
Step 3: Integration with Neon
The pgvector extension in Neon PostgreSQL facilitates efficient vector storage and similarity search. Here's how we integrated Neon into the project:
- Table Design: a dedicated table for embeddings with columns for subject, content, type, embedding, and created_at. The type column can hold two values, draft or sent, in case users want to explore previous unsent drafts.
- Index Optimization: optimizing the index can save us a lot of time and effort before we face performance issues:
CREATE INDEX ON embeddings USING ivfflat (embedding) WITH (lists = 100);
- Search Implementation: using SQL queries with vector operations to find the most relevant embeddings.
- Enhanced Serverless Out of the Box: even the free SKU offers a read replica and autoscaling, making it enterprise-ready.
Why This Approach Stands Out
- Efficiency: by storing embeddings instead of just raw data, the system maintains privacy while enabling rich contextual searches.
- Scalability: leveraging Neon's serverless capabilities (autoscaling is enabled) ensures that the application can grow without bottlenecks.
- User-Centric Design: the combination of React's dynamic frontend and Semantic Kernel's advanced AI delivers a polished user experience.
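The search implementation mentioned above reduces to one SQL statement using a pgvector distance operator (`<=>` is cosine distance; smaller means more similar). A hedged sketch of how the backend might compose it against the embeddings table described in the table design — the parameter names are illustrative, not the project's actual code:

```python
# Hypothetical query builder for the pgvector similarity search.
# %(query_vec)s and %(type)s are driver-side placeholders filled in at
# execution time; `<=>` is pgvector's cosine-distance operator.
def build_similarity_query(limit: int = 5) -> str:
    return (
        "SELECT subject, content, type, "
        "1 - (embedding <=> %(query_vec)s::vector) AS similarity "
        "FROM embeddings "
        "WHERE type = %(type)s "
        "ORDER BY embedding <=> %(query_vec)s::vector "
        f"LIMIT {int(limit)}"
    )

print(build_similarity_query(3))
```

The ivfflat index created above accelerates exactly this ORDER BY, since it indexes the embedding column for approximate nearest-neighbor search.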
Prerequisites
- Azure account with OpenAI access
- Microsoft 365 developer account
- Neon PostgreSQL account
- .NET 8 SDK
- Node.js and npm
- Visual Studio Code or Visual Studio 2022
Step 1: Setting Up Azure Resources
Azure OpenAI Setup:
- Create an Azure OpenAI resource
- Deploy two models: gpt-4 for text generation and text-embedding-ada-002 for embeddings
- Note down the endpoint and API key
Entra ID App Registration:
- Create a new app registration
- Required API permissions: Microsoft Graph Mail.Send (Application) and Microsoft Graph Mail.ReadWrite (Application)
- Generate a client secret
- Note down the client ID and tenant ID
Step 2: Database Setup
Neon PostgreSQL:
- Create a new project
- Create a database
- Enable the pgvector extension
- Save the connection string
Step 3: Backend Implementation (.NET)
Project Structure:
/Controllers
- EmailController.cs (handles email operations)
- HomeController.cs (root routing)
- VectorSearchController.cs (similarity search)
/Services
- EmailService.cs (Graph API integration)
- SemanticKernelService.cs (AI operations)
- VectorSearchService.cs (embedding operations)
- OpenAISkill.cs (email generation)
Key Components:
- SemanticKernelService: initializes Semantic Kernel, manages AI model connections, and handles prompt engineering
- EmailService: Microsoft Graph API integration, email sending functionality, and authentication management
- VectorSearchService: generates embeddings, manages vector storage, and performs similarity searches
Step 4: Configuration
Create a new dotnet project with:
dotnet new webapi -n SemanticKernelEmailAssistant
Configure appsettings.json for your connections. Install Semantic Kernel (look into SemanticKernelEmailAssistant.csproj for all packages and versions) – versions are important!
When all of your files are complete, you can execute:
dotnet build && dotnet publish -c Release
To test locally, simply run:
dotnet run
Step 5: React Frontend
Start a new React app with:
npx create-react-app ai-email-assistant
Change directory into the newly created folder, copy all files from Git, and run npm install. Initialize Tailwind with npx tailwindcss init (if you see any related errors).
Step 6: Deploy to Azure
Both our apps are containerized with Docker, so pay attention to get the Dockerfile for each app. Use:
docker build -t backend .
and tag and push:
docker tag backend {acrname}.azurecr.io/backend:v1
docker push {acrname}.azurecr.io/backend:v1
The same applies for the frontend. Make sure to log in to Azure Container Registry with:
az acr login --name $(az acr list -g myresourcegroup --query "[].{name: name}" -o tsv)
We will then be able to see our new repo in Azure Container Registry and deploy our web apps.
Troubleshooting and Maintenance
Backend Issues:
- Use Swagger (/docs) for API testing and debugging.
- Check Azure Key Vault for PIN and credential updates.
Embedding Errors:
- Ensure pgvector is correctly configured in Neon PostgreSQL.
- Verify the Azure OpenAI API key and endpoint are correct.
Frontend Errors:
- Use browser dev tools to debug fetch requests.
- Ensure environment variables are correctly set during build and runtime.
Conclusion
In today's rapidly evolving tech landscape, building an AI-powered application is no longer a daunting task, thanks to technologies like Semantic Kernel, Neon PostgreSQL, and Azure OpenAI. This project demonstrates how these tools can work together to deliver a robust, scalable, and user-friendly solution. The integration of Semantic Kernel effectively streamlines AI orchestration and prompt management.
Neon PostgreSQL provides serverless database capabilities that automatically scale with your application's needs, and Azure OpenAI's API and language models ensure high-quality AI responses and content generation. Whether you're developing a customer service bot, a content creation tool, or a data analysis platform, this technology stack offers the flexibility and power to bring your ideas to life. If you're ready to create your own AI application, the combination of Semantic Kernel and Neon is an ideal starting point: a foundation that balances sophisticated functionality with straightforward implementation, while ensuring seamless scalability as your project grows.
References:
- Semantic Kernel
- Vector Store Embeddings
- NEON Project
Azure Open AI accounts
Hello, we are using the REST API to get Azure OpenAI accounts: https://learn.microsoft.com/en-us/rest/api/aiservices/accountmanagement/accounts/list?view=rest-aiservices-accountmanagement-2024-10-01&tabs=HTTP We noticed that it was updated and now returns all types of AI services accounts. Also, the API endpoint is now under "Microsoft.CognitiveServices". What does this change mean? How should we treat it? What is the relation between the "Azure AI services" account and the "OpenAI account"? Any information would be appreciated!
Aidemos Microsoft site doesn't work https://aidemos.microsoft.com/
Hello MS team, I am learning AI-900 on Coursera. The course guides me to try the AI demos on https://aidemos.microsoft.com/, but the site seems to have been broken for weeks. According to the error message, it could be an issue with the backend. Could the MS team fix it, please? Best Regards, Dale
Azure AI Foundry | How to best tackle use case
Hi all, I am quite new to Azure AI Foundry; I did some first trainings and got a basic understanding of the functions. Before building the first case, I thought it makes sense to ask the community how they would tackle the following use case: We receive certificates from customers - these are in PDF format and are the main subject to be analyzed within my use case. Each of these certificates should be screened for specific formulations and other potential issues (which are written down in a guideline and a Word document). So the process should be that users upload the certificate as a PDF (in either a chatbot environment or an application), and a detailed prompt gets executed that reads through the text and checks for the criteria written down in the guideline + Word document. What would be the best way to build this case from your perspective? We also have PowerApps/Power Automate to make use of additionally. Thank you very much in advance for your feedback. Kind regards, Alex
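Whatever stack ends up extracting the PDF text, the "detailed prompt" step usually amounts to assembling the guideline criteria and the extracted certificate text into one review instruction. A minimal sketch of that assembly (function name and criteria are hypothetical, purely for illustration):

```python
# Hypothetical sketch: assemble a review prompt from guideline criteria and
# extracted certificate text before sending it to a chat model.
def build_review_prompt(criteria: list[str], certificate_text: str) -> str:
    rules = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, start=1))
    return (
        "You are reviewing a customer certificate. Check the text below "
        "against each criterion and report pass/fail with a short reason.\n"
        f"Criteria:\n{rules}\n"
        f"Certificate text:\n{certificate_text}"
    )

prompt = build_review_prompt(
    ["Contains the required formulation X", "No forbidden phrasing Y"],
    "...extracted PDF text...",
)
```

Keeping the criteria as data (rather than hard-coded in the prompt) makes it easy to update them whenever the guideline document changes.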
Recent Blogs
- Build your Model IQ! Join us for an 8-week power series where we cut through the noise and bring you the most relevant models, insights, and hands-on demos. Every Monday, we'll round up the la... (Mar 07, 2025)
- We are pleased to announce the availability of DeepSeek-V3 on the Azure AI Foundry model catalog with token-based billing. This latest iteration is part of our commitment to enable powerful, efficient, a... (Mar 07, 2025)