azure ai
14 Topics

Azure OpenAI: GPT-5-Codex Availability?
Greetings everyone! I just wanted to see if there's any word as to when/if GPT-5-Codex (https://openai.com/index/introducing-upgrades-to-codex/) will make its way to AI Foundry. It was released on September 15th, 2025, but I have no idea how far Azure tends to lag behind OpenAI's releases. There doesn't seem to be any source of information, whenever new models drop, about what Azure plans to do with them, if anything. Any conversation around this would be helpful and appreciated, thanks!

Agent in Azure AI Foundry not able to access SharePoint data via C# (but works in Foundry portal)
Hi Team, I created an agent in Azure AI Foundry and added a knowledge source using the SharePoint tool. When I test the agent inside the Foundry portal, it works correctly; it can read from the SharePoint site and return file names/data. However, when I call the same agent using C# code, it answers normal questions fine, but whenever I ask about the SharePoint data I get the error: "Sorry, something went wrong. Run status: failed". I also referred to the official documentation and sample here: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/sharepoint-samples?pivots=rest I tried the cURL samples as well, and while the agent is created successfully, the run status always comes back as failed. Has anyone faced this issue? Do I need to configure something extra for SharePoint when calling the agent programmatically (like additional permissions or connection binding)? Any help on this would be greatly appreciated. Thanks!

Push for Rapid AI Growth
There is a key factor in why AI is not growing at the speed of light: most AI is built either by a specific company (e.g. OpenAI for ChatGPT, Microsoft for Copilot, Google for Gemini) or by individuals and small groups building agents for fun or for their workplaces. But what would happen if we merged them together? Imagine a website that is owned by no one, is open source, and allows everyone to train the same AI simultaneously. Imagine if, instead of Microsoft building Copilot, the whole world were building and training Copilot at the same time, using all global computing power. This would lead to a shocking, exponential growth of AI never seen before. This is why I think Copilot should allow everyone to train its AI.

From Space to Subsurface: Using Azure AI to Predict Gold Rich Zones
In traditional mineral exploration, identifying gold-bearing zones can take months of fieldwork and high-cost drilling, often with limited success. In our latest project, we flipped the process on its head by using Azure AI and satellite data to guide geologists before they break ground. Using Azure AI and Azure Machine Learning, we built an intelligent, automated pipeline that identified high-potential zones from geospatial data, saving time, cost, and uncertainty. Here's a behind-the-scenes look at how we did it. 👇

📡 Step 1: Translating Satellite Imagery into Features
We began with Sentinel-2 imagery covering our Area of Interest (AOI) and derived alteration indices commonly used in mineral exploration, including:
🟤 Clay Index – a proxy for hydrothermal alteration
🟥 Fe (Iron Oxide) Index
🌫️ Silica Ratio
💧 NDMI (Normalized Difference Moisture Index)
Using Azure Notebooks and Python, we processed and cleaned the imagery, transforming raw reflectance bands into meaningful geochemical features.

🔍 Step 2: Discovering Patterns with Unsupervised Learning (KMeans)
With feature-rich geospatial data prepared, we used unsupervised clustering (KMeans) in Azure Machine Learning Studio to identify natural groupings across the region. This gave us a first look at the terrain's underlying geochemical structure; one cluster in particular stood out as a strong candidate for gold-rich zones. No geology degree needed: AI finds patterns humans can't see :)

🧠 Step 3: Scaling with Azure AutoML
We then trained a classification model using Azure AutoML to predict these clusters over a dense prediction grid:
✅ 7,200+ data points generated
✅ ~50m resolution grid
✅ 14 km² area of interest
This was executed as a short, early-stopping run to minimize cost and optimize training time.
Models were trained, validated, and registered using:
Azure Machine Learning Compute Instance + Compute Cluster
Azure Storage for dataset access

🔬 Step 4: Validation with Field Samples
To ground our predictions, we validated against lab-assayed gold concentrations from field sampling points. The results? 🔥 The geospatial cluster labeled 'Class 0' by the model showed strong correlation with lab-validated gold concentrations, supporting the model's predictive validity. This gave geologists AI-augmented evidence to prioritize areas for further sampling and drilling.

⚖️ Traditional vs AI-based Workflow

🚀 Why Azure?
✅ Azure Machine Learning Studio for AutoML and experiment tracking
✅ Azure Storage for seamless access to geospatial data
✅ Azure OpenAI Service for advanced language understanding, report generation, and enhanced human-AI interaction
✅ Azure Notebooks for scripting, preprocessing, and validation
✅ Azure Compute Cluster for scalable, cost-effective model training
✅ Model Registry for versioning and deployment

🌍 Key Takeaways
AI turns mineral exploration from reactive guesswork into proactive intelligence. In our workflow, AI plays a critical role by:
✅ Extracting key geochemical features from satellite imagery
🧠 Identifying patterns using unsupervised learning
🎯 Predicting high-potential zones through automated classification
🌍 Delivering full spatial coverage at scale

With Azure AI and Azure ML tools, we've built a complete pipeline that supports:
End-to-end automation, from data prep to model deployment
Faster, more accurate exploration with lower costs
A reusable, scalable solution for global teams

This isn't just a proof of concept; it's a production-ready framework that empowers geologists with AI-driven insights before the first drill hits the ground. 🔗 If you're working in the mining industry, geoscience, AI for Earth, or exploration tech, let's connect!
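To make the clustering step concrete, here is a toy, offline sketch of the idea: a plain NumPy k-means run over synthetic per-pixel feature vectors standing in for the clay/iron/silica/NDMI indices. The feature values and cluster separation here are invented for illustration; the real pipeline derives them from Sentinel-2 reflectance bands and runs inside Azure ML Studio.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain NumPy k-means: returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic stand-in for per-pixel features (clay, iron, silica, NDMI);
# values are illustrative, not real alteration-index ranges.
rng = np.random.default_rng(42)
features = np.vstack([
    rng.normal(0.2, 0.05, (500, 4)),   # background terrain
    rng.normal(0.7, 0.05, (100, 4)),   # hypothetical altered (prospective) zone
])
labels, cents = kmeans(features, k=2)
print(np.bincount(labels))  # cluster sizes
```

In the real workflow the resulting cluster labels become the training targets for the AutoML classification model described in Step 3.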
We’re on a mission to bring AI deeper into every industry through strategic partnerships and collaborative innovation.

Introducing Azure AI Models: The Practical, Hands-On Course for Real Azure AI Skills
Hello everyone, Today I'm excited to share something close to my heart. After watching so many developers, including myself, get lost in a maze of scattered docs and endless tutorials, I knew there had to be a better way to learn Azure AI. So I decided to build a guide from scratch, with the goal of breaking things down step by step and making it easy for beginners to get started with Azure. My aim was to remove the guesswork and create a resource where anyone could jump in, follow along, and actually see results without feeling overwhelmed. Introducing the Azure AI Models Guide. This is a brand new, solo-built, open-source repo aimed at making Azure AI accessible for everyone, whether you're just getting started or want to build real, production-ready apps using Microsoft's latest AI tools. The idea is simple: bring all the essentials into one place. You'll find clear lessons, hands-on projects, and sample code in Python, JavaScript, C#, and REST, all structured so you can learn step by step, at your own pace. I wanted this to be the resource I wish I'd had when I started: straightforward, practical, and friendly to beginners and pros alike. It's early days for the project, but I'm excited to see it grow. If you're curious, check out the repo at https://github.com/DrHazemAli/Azure-AI-Models Your feedback, and maybe even your contributions, will help shape where it goes next!

Introducing AzureImageSDK — A Unified .NET SDK for Azure Image Generation And Captioning
Hello 👋 I'm excited to share something I've been working on — AzureImageSDK — a modern, open-source .NET SDK that brings together Azure AI Foundry's image models (like Stable Image Ultra and Stable Image Core), along with Azure Vision, content moderation APIs and image utilities, all in one clean, extensible library. While working with Azure's image services, I kept hitting the same wall: each model had its own input structure, parameters, and output format — and there was no unified, async-friendly SDK to handle image generation, visual analysis, and moderation under one roof. So... I built one. AzureImageSDK wraps Azure's powerful image capabilities into a single, async-first C# interface that makes it dead simple to: 🎨 run inference on image models, 🧠 analyze visual content (image to text), 🚦 use image utilities — with just a few lines of code. It's fully open-source, designed for extensibility, and ready to support new models the moment they launch. 🔗 GitHub Repo: https://github.com/DrHazemAli/AzureImageSDK Also, I've posted the release announcement at https://github.com/orgs/azure-ai-foundry/discussions/47 👉🏻 feel free to join the conversation there too. The SDK is available on NuGet as well. Would love to hear your thoughts, use cases, or feedback!

Introducing AzureSoraSDK: A Community C# SDK for Azure OpenAI Sora Video Generation
Hello everyone! I'm excited to share the first community release of AzureSoraSDK, a fully-featured .NET 6+ class library that makes it incredibly easy to generate AI-driven videos using Azure's OpenAI Sora model, and even improve your prompts on the fly. 🔗 Repository: https://github.com/DrHazemAli/AzureSoraSDK

How to Build AI Agents in 10 Lessons
Microsoft has released an excellent learning resource for anyone looking to dive into the world of AI agents: "AI Agents for Beginners". This comprehensive course is available free on GitHub. It is designed to teach the fundamentals of building AI agents, even if you are just starting out.

What You'll Learn
The course is structured into 10 lessons, covering a wide range of essential topics including:
Agentic Frameworks: Understand the core structures and components used to build AI agents.
Design Patterns: Learn proven approaches for designing effective and efficient AI agents.
Retrieval Augmented Generation (RAG): Enhance AI agents by incorporating external knowledge.
Building Trustworthy AI Agents: Discover techniques for creating AI agents that are reliable and safe.
AI Agents in Production: Get insights into deploying and managing AI agents in real-world applications.

Hands-On Experience
The course includes practical code examples that utilize:
Azure AI Foundry
GitHub Models
These examples help you learn how to interact with Language Models and use AI Agent frameworks and services from Microsoft, such as:
Azure AI Agent Service
Semantic Kernel Agent Framework
AutoGen - A framework for building AI agents and applications

Getting Started
To get started, make sure you have the proper set-up. Here are the 10 lessons:
1. Intro to AI Agents and Agent Use Cases
2. Exploring AI Agent Frameworks
3. Understanding AI Agentic Design Principles
4. Tool Use Design Pattern
5. Agentic RAG
6. Building Trustworthy AI Agents
7. Planning Design
8. Multi-Agent Design Patterns
9. Metacognition in AI Agents
10. AI Agents in Production

Multi-Language Support
To make learning accessible to a global audience, the course offers multi-language support.

Get Started Today!
If you are eager to learn about AI agents, this course is an excellent starting point. You can find the complete course materials on GitHub at AI Agents for Beginners.

Azure AI Assistants with Logic Apps
Introduction to AI Automation with Azure OpenAI Assistants

Intro
Welcome to the future of automation! In the world of Azure, AI assistants are becoming your trusty sidekicks, ready to tackle the repetitive tasks that once consumed your valuable time. But what if we could make these assistants even smarter? In this post, we’ll dive into the exciting realm of integrating Azure AI assistants with Logic Apps – Microsoft’s powerful workflow automation tool. Get ready to discover how this dynamic duo can transform your workflows, freeing you up to focus on the big picture and truly innovative work.

Azure OpenAI Assistants (preview)
Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions, augmented by advanced tools like the code interpreter and custom functions. To accelerate and simplify the creation of intelligent applications, we can now call Logic Apps workflows through function calling in Azure OpenAI Assistants. The Assistants playground enumerates and lists all the workflows in your subscription that are eligible for function calling. Here are the requirements for these workflows:
Schema: The workflows you want to use for function calling should have a JSON schema describing the inputs and expected outputs. Using Logic Apps you can provide the schema in the trigger, and it is automatically imported as a function definition.
Consumption Logic Apps: Currently only Consumption workflows are supported.
Request trigger: Function calling requires a REST-based API. Logic Apps with a request trigger provide a REST endpoint; therefore only workflows with a request trigger are supported for function calling.

AI Automation
So apart from the Assistants API, which we will explore in another post, we know that we can integrate Azure Logic Apps workflows! Isn’t that amazing? The road is now open for AI Automation and we are at the genesis of it, so let’s explore it.
We need an Azure Subscription and:
Azure OpenAI in one of the supported regions. This demo is on Sweden Central.
A Logic Apps Consumption plan.

We will work in Azure OpenAI Studio and utilize the Playground. Our model deployment is GPT-4o. The Assistants Playground offers the ability to create and save our Assistants, so we can start working, return later, open the Assistant and continue. We can find the System Message option and the three tools that enhance the Assistants: Code Interpreter, Function Calling (including Logic Apps) and Files upload. The following list describes the configuration elements of our Assistants:

Assistant name: Your deployment name that is associated with a specific model.
Instructions: Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, tell it what it should and shouldn’t answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering.
Deployment: This is where you set which model deployment to use with your assistant.
Functions: Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications.
Code interpreter: Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code.
Files: You can upload up to 20 files, with a max file size of 512 MB, to use with tools. You can upload up to 10,000 files using AI Studio.

The Studio provides 2 sample Functions (Get Weather and Get Stock Price) to give an idea of the JSON schema required for Function Calling. It is important to provide a clear message that makes the Assistant efficient and productive, with careful consideration, since the longer the message the more tokens are consumed.
Challenge #1 – Summarize WordPress Blog Posts
How about providing a prompt to the Assistant with a URL, instructing it to summarize a WordPress blog post? WordPress is a good pick because it has a unified API and we only need to change the URL. We could be more strict and narrow the scope down to a specific URL, but let’s see the flexibility of Logic Apps in a workflow. We start with the Logic App. We will generate the JSON schema directly from the Trigger, which must be an HTTP request.

{
  "name": "__ALA__lgkapp002",   // Remove this line for the Logic App Trigger
  "description": "Fetch the latest post from a WordPress website, summarize it, and return the summary.",
  "parameters": {
    "type": "object",
    "properties": {
      "url": {
        "type": "string",
        "description": "The base URL of the WordPress site"
      },
      "post": {
        "type": "string",
        "description": "The page number"
      }
    },
    "required": ["url", "post"]
  }
}

In the Designer this looks the same: the schema is identical, excluding the name, which is needed only in the OpenAI Assistants. We will see this detail later on. Let’s continue with the call to WordPress, an HTTP REST API call. And finally, mandatory as it is, a Response action where we tell the Assistant that the call was completed and return some payload, in our case the body of the previous step. Now it is time to open our Azure OpenAI Studio and create a new Assistant. Remember the prerequisites we discussed earlier! From the Assistants menu create a [+New] Assistant, give it a meaningful name, select the deployment and add a System Message. For our case it could be something like: "You are a helpful Assistant that summarizes the WordPress blog posts the users request, using Functions. You can utilize the code interpreter in a sandbox environment for advanced analysis and tasks if needed". The code interpreter here could be overkill, but we mention it to see the use of it! Remember to save the Assistant.
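The function-calling contract above can be exercised locally with a minimal offline sketch: the function definition as the Assistant sees it, plus a stand-in dispatcher for the Logic App's request trigger. The fake tool-call payload and the call_logic_app body are illustrative; in the Assistants playground the lookup and HTTP call to the Logic App happen for you.

```python
import json

# Function definition as the Assistant sees it; the "__ALA__" prefix/suffix
# marks a Logic App workflow (here: lgkapp002).
summarize_fn = {
    "name": "__ALA__lgkapp002",
    "description": "Fetch the latest post from a WordPress website, summarize it, and return the summary.",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "The base URL of the WordPress site"},
            "post": {"type": "string", "description": "The page number"},
        },
        "required": ["url", "post"],
    },
}

def call_logic_app(name, arguments):
    # Stand-in for the HTTP POST to the Logic App's request trigger;
    # a real dispatcher would call the trigger URL with this JSON body.
    args = json.loads(arguments)
    return {"summary": f"(summary of {args['url']} post {args['post']})"}

# A tool call shaped like what an Assistants run surfaces (illustrative payload)
tool_call = {"name": summarize_fn["name"],
             "arguments": json.dumps({"url": "https://example.wordpress.com", "post": "1"})}

result = call_logic_app(tool_call["name"], tool_call["arguments"])
print(result["summary"])
```

The key point the sketch shows: the model only ever produces a name and a JSON `arguments` string matching the schema; everything else is plumbing on your side.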
Now, in the Functions, do not select Logic Apps; rather, stay on the custom box and add the code we presented earlier. The Assistant will understand that the Logic App named lgkapp002 must be called, aka ["name": "__ALA__lgkapp002"] in the schema! In fact the Logic App is declared by 2 underscores as prefix and 2 underscores as suffix, with ALA and the name of the Logic App inside. Let’s give our Assistant a prompt and see what happens: the Assistant responded pretty solidly with a meaningful summary of the post we asked for! Not bad at all for a Preview service.

Challenge #2 – Create an Azure Virtual Machine based on preferences
For the purpose of this task we have activated a System Assigned managed identity on the Logic App we use, and pre-provisioned a Virtual Network with a subnet as well. The Logic App must reside in the same subscription as our Azure OpenAI resource. This is a more advanced request, but after all it translates to Logic Apps capabilities. Can we do it fast enough so the Assistant won’t time out? Yes we can, by using the latest Azure Resource Manager API, which indeed is lightning fast! The process must follow the same pattern: Request – Actions – Response. The request in our case must include enough input for the Logic App to carry out the tasks.
The schema should include a "name" input which tells the Assistant which Logic App to look up:

{
  "name": "__ALA__assistkp02",   // Remove this line for the Logic App Trigger
  "description": "Create an Azure VM based on the user input",
  "parameters": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "The name of the VM"
      },
      "location": {
        "type": "string",
        "description": "The region of the VM"
      },
      "size": {
        "type": "string",
        "description": "The size of the VM"
      },
      "os": {
        "type": "string",
        "description": "The OS of the VM"
      }
    },
    "required": ["name", "location", "size", "os"]
  }
}

In the actual Trigger, observe the absence of the "name" here. Now, as we have a number of options, this method allows us to keep track of everything including the user's inputs like VM name, VM size, VM OS etc. Of course someone can expand this, since we use a default resource group and a default VNET and Subnet, but that's also configurable! So let's store the input into variables; we initialize 5 variables: the name, the size, the location (which is preset for reduced complexity since we don't create a new VNET), and we break down the OS. Let's say the user selects Windows 10. The API expects an offer and a sku, so from "Windows 10" we create an offer variable, and likewise an OS variable which holds the expected sku:

if(equals(triggerBody()?['os'], 'Windows 10'), 'Windows-10',
  if(equals(triggerBody()?['os'], 'Windows 11'), 'Windows-11', 'default-offer'))

if(equals(triggerBody()?['os'], 'Windows 10'), 'win10-22h2-pro-g2',
  if(equals(triggerBody()?['os'], 'Windows 11'), 'win11-22h2-pro', 'default-sku'))

As you understand, this is narrowed to the available Windows desktop choices only, but we can expand the Logic App to catch most well-known operating systems. After the variables, all we have to do is create a Public IP (optional), a Network Interface, and finally the VM.
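The nested if() expressions above are just a lookup from the user-facing OS name to the image offer and sku the Resource Manager API expects. For clarity, the same mapping expressed outside Logic Apps (a sketch; the helper name is ours, not part of the workflow):

```python
# Mirrors the Logic App expressions: OS display name -> (imageOffer, imageSku)
OS_IMAGE_MAP = {
    "Windows 10": ("Windows-10", "win10-22h2-pro-g2"),
    "Windows 11": ("Windows-11", "win11-22h2-pro"),
}

def resolve_image(os_name):
    # Unknown OS names fall through to the same defaults as the expressions
    return OS_IMAGE_MAP.get(os_name, ("default-offer", "default-sku"))

print(resolve_image("Windows 10"))   # ('Windows-10', 'win10-22h2-pro-g2')
print(resolve_image("Ubuntu 22.04"))  # falls back to ('default-offer', 'default-sku')
```

A table-style lookup like this is also easier to extend to more operating systems than deepening the nested if() chain.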
This is the most efficient flow I could build, so we won't get complaints from the API and it will complete very fast. Like 3 seconds fast! The API calls are quite straightforward and everything is available in the Microsoft documentation. Let's see an example for the Public IP, and the Create VM action with a highlight on the storage profile / OS image setup. Finally we need the response, which can be whatever we like it to be. I am enriching the Assistant's response with an additional action, "Get Virtual Machine", that allows us to include the properties we add in the response body. Let's make our request now, through the Assistants playground in Azure OpenAI Studio. Our prompt is quite clear: "Create a new VM with size=Standard_D4s_v3, location=swedencentral, os=Windows 11, name=mynewvm02". Even if we don't add the parameters, the Assistant will ask for them, as we have set in the System Message. Pay attention to the limitation as well: when we ask about the Public IP, the Assistant does not know it. Yet it informs us with a specific message that makes sense and is relevant to the whole operation. If we have a look at the time it took, we will be amazed: the total time from the user request to the Assistant's response is around 10 seconds. We have a limit of 10 minutes for Function Calling execution, so we can build whole infrastructures using just our prompts.

Conclusion
In conclusion, this experiment highlights the powerful synergy between Azure AI Assistants' Function Calling capability and the automation potential of Logic Apps. By successfully tackling two distinct challenges, we've demonstrated how this combination can streamline workflows, boost efficiency, and unlock new possibilities for integrating intelligent decision-making into your business processes.
Whether you’re automating customer support interactions, managing data pipelines, or optimizing resource allocation, the integration of AI assistants and Logic Apps opens doors to a more intelligent and responsive future. We encourage you to explore these tools further and discover how they can revolutionize your own automation journey.

References:
Getting started with Azure OpenAI Assistants (Preview)
Call Azure Logic apps as functions using Azure OpenAI Assistants
Azure OpenAI Assistants function calling
Azure OpenAI Service models
What is Azure Logic Apps?
Azure Resource Manager – Rest Operations

Azure AI Services on AKS
Host your AI Language Containers and Web Apps on an Azure Kubernetes Cluster: a Flask Web App for Sentiment Analysis

In this post, we'll explore how to integrate Azure AI Containers into applications running on Azure Kubernetes Service (AKS). Azure AI Containers enable you to harness the power of Azure's AI services directly within your AKS environment, giving you complete control over where your data is processed. By streamlining the deployment process and ensuring consistency, Azure AI Containers simplify the integration of cutting-edge AI capabilities into your applications. Whether you're developing tools for education, enhancing accessibility, or creating innovative user experiences, this guide will show you how to seamlessly incorporate Azure's AI Containers into your web apps running on AKS.

Why Containers?
Azure AI services provide several Docker containers that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services. Azure AI Containers offer:
Immutable infrastructure: Consistent and reliable system parameters for DevOps teams, with flexibility to adapt and avoid configuration drift.
Data control: Choose where data is processed, essential for data residency or security requirements.
Model update control: Flexibility in versioning and updating deployed models.
Portable architecture: Deploy on Azure, on-premises, or at the edge, with Kubernetes support.
High throughput/low latency: Scale for demanding workloads by running Azure AI services close to data and logic.
Scalability: Built on scalable cluster technology like Kubernetes for high availability and adaptable performance.
Source: https://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-container-support

Workshop
Our solution will utilize the Azure AI Language service with the Text Analytics container for Sentiment Analysis. We will build a Python Flask web app, containerize it with Docker and push it to Azure Container Registry. An AKS cluster, which we will create, will pull the Flask image along with the Microsoft-provided Sentiment Analysis image directly from mcr.microsoft.com, and we will make all required configurations on our AKS cluster to have an Ingress Controller with an SSL certificate presenting a simple web UI to write our text, submit it for analysis and get the results.

Azure Kubernetes Cluster, Azure Container Registry & Azure Text Analytics
These are our main resources, plus of course a Virtual Network for AKS, which is deployed automatically. Our solution is hosted entirely on AKS with a Let's Encrypt certificate (which we will create separately) offering secure HTTP, and an Ingress Controller publicly serving the Flask UI, which calls the Sentiment Analysis service via REST, also hosted on AKS. The difference is that Flask is built from a custom Docker image pulled from Azure Container Registry, while Sentiment Analysis is a ready-made Microsoft image which we pull directly. In case your Azure subscription does not yet have an AI service, you have to create a Language service (Text Analytics) via the Portal, due to the requirement to accept the Responsible AI terms. For more detail go to https://go.microsoft.com/fwlink/?linkid=2164190 . My preference, as a best practice, is to create an AKS cluster with the default System node pool and add an additional User node pool to deploy my apps, but it is really a matter of preference at the end of the day. So let's start deploying!
Start from your terminal by logging in with az login and set your subscription with az account set --subscription "YourSubName".

## Change the values in < > with your values and remove < >!
## Create the AKS Cluster
az aks create \
  --resource-group <your-resource-group> \
  --name <your-cluster-name> \
  --node-count 1 \
  --node-vm-size standard_a4_v2 \
  --nodepool-name agentpool \
  --generate-ssh-keys \
  --nodepool-labels nodepooltype=system \
  --no-wait \
  --aks-custom-headers AKSSystemNodePool=true \
  --network-plugin azure

## Add a User Node Pool
az aks nodepool add \
  --resource-group <your-resource-group> \
  --cluster-name <your-cluster-name> \
  --name userpool \
  --node-count 1 \
  --node-vm-size standard_d4s_v3 \
  --no-wait

## Create Azure Container Registry
az acr create \
  --resource-group <your-resource-group> \
  --name <your-acr-name> \
  --sku Standard \
  --location northeurope

## Attach ACR to AKS
az aks update -n <your-cluster-name> -g <your-resource-group> --attach-acr <your-acr-name>

The Language service is created from the Portal for the reasons we explained earlier. Search for Language and create a new Language service, leaving the default selections (no Custom QnA, no Custom Text Classification) on the F0 (Free) SKU. You may see a VNET menu appear in the Networking tab; just ignore it. As long as you leave the default Public Access enabled it won't create a Virtual Network. The presence of the cloud resource is for billing and metrics. A Flask web app has a directory structure where we store index.html in the templates directory and our CSS and images in the static directory.
So in essence it looks like this:

sentiment-aks/
  flaskwebapp/
    app.py
    requirements.txt
    Dockerfile
    static/
      style.css
      logo.png
    templates/
      index.html

The requirements.txt should have the needed packages:

## requirements.txt
Flask==3.0.0
requests==2.31.0

## templates/index.html
<!DOCTYPE html>
<html>
<head>
  <title>Sentiment Analysis App</title>
  <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
  <img src="{{ url_for('static', filename='logo.png') }}" class="icon" alt="App Icon">
  <h2>Sentiment Analysis</h2>
  <form id="textForm">
    <textarea name="text" placeholder="Enter text here..."></textarea>
    <button type="submit">Analyze</button>
  </form>
  <div id="result"></div>
  <script>
    document.getElementById('textForm').onsubmit = async function(e) {
      e.preventDefault();
      let formData = new FormData(this);
      let response = await fetch('/analyze', { method: 'POST', body: formData });
      let resultData = await response.json();
      let results = resultData.results;
      if (results) {
        let displayText = `Document: ${results.document}\nSentiment: ${results.overall_sentiment}\n`;
        displayText += `Confidence - Positive: ${results.confidence_positive}, Neutral: ${results.confidence_neutral}, Negative: ${results.confidence_negative}`;
        document.getElementById('result').innerText = displayText;
      } else {
        document.getElementById('result').innerText = 'No results to display';
      }
    };
  </script>
</body>
</html>

## static/style.css
body {
  font-family: Arial, sans-serif;
  background-color: #f0f8ff; /* Light blue background */
  margin: 0;
  padding: 0;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  height: 100vh;
}
h2 {
  color: #0277bd; /* Darker blue for headings */
}
.icon {
  height: 100px;   /* Adjust the size as needed */
  margin-top: 20px; /* Add some space above the logo */
}
form {
  background-color: white;
  padding: 20px;
  border-radius: 8px;
  width: 300px;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
textarea {
  width: 100%;
  box-sizing: border-box;
  height: 100px;
  margin-bottom: 10px;
  border: 1px solid #0277bd;
  border-radius: 4px;
  padding: 10px;
}
button {
  background-color: #029ae4; /* Blue button */
  color: white;
  border: none;
  padding: 10px 15px;
  border-radius: 4px;
  cursor: pointer;
}
button:hover {
  background-color: #0277bd;
}
#result {
  margin-top: 20px;
}

And here is the most interesting file, our app.py. Notice the use of a REST API call directly to the Sentiment Analysis endpoint, which we will declare in the YAML file for the Kubernetes deployment.

## app.py
from flask import Flask, render_template, request, jsonify
import requests
import os

app = Flask(__name__)

@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')  # HTML file with input form

@app.route('/analyze', methods=['POST'])
def analyze():
    # Extract text from the form submission
    text = request.form['text']
    if not text:
        return jsonify({'error': 'No text provided'}), 400

    # Fetch the API endpoint from environment variables
    endpoint = os.environ.get("CONTAINER_API_URL")

    # Ensure required configurations are available
    if not endpoint:
        return jsonify({'error': 'API configuration not set'}), 500

    # Construct the full URL for the sentiment analysis API
    url = f"{endpoint}/text/analytics/v3.1/sentiment"
    headers = {'Content-Type': 'application/json'}
    body = {'documents': [{'id': '1', 'language': 'en', 'text': text}]}

    # Make the HTTP POST request to the sentiment analysis API
    response = requests.post(url, json=body, headers=headers)
    if response.status_code != 200:
        return jsonify({'error': 'Failed to analyze sentiment'}), response.status_code

    # Process the API response
    data = response.json()
    results = data['documents'][0]
    detailed_results = {
        'document': text,
        'overall_sentiment': results['sentiment'],
        'confidence_positive': results['confidenceScores']['positive'],
        'confidence_neutral': results['confidenceScores']['neutral'],
        'confidence_negative': results['confidenceScores']['negative']
    }

    # Return the detailed results to the client
    return jsonify({'results': detailed_results})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001, debug=False)

And finally we need a Dockerfile; pay attention to have it on the same level as your app.py file.

## Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.10-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 5001 available to the world outside this container
EXPOSE 5001

# Define environment variable
ENV CONTAINER_API_URL="http://sentiment-service/"

# Run app.py when the container launches
CMD ["python", "app.py"]

Our web UI is ready to build! We need Docker running on our development environment and we need to log in to Azure Container Registry:

## Login to ACR
az acr login -n <your-acr-name>

## Build and tag our image
docker build -t <acr-name>.azurecr.io/flaskweb:latest .
docker push <acr-name>.azurecr.io/flaskweb:latest

You can go to the Portal, and under Azure Container Registry > Repositories you will find our new image ready to be pulled!

Kubernetes Deployments
Let's start deploying our AKS services! As we already know, we can pull the Sentiment Analysis container directly from Microsoft, and that's what we are going to do with the following tasks. First we need to log in to our AKS cluster: from the Azure Portal head over to your AKS cluster and click the Connect link on the menu. Azure will provide the commands to connect from our terminal: select Azure CLI and just copy-paste the commands into your terminal. Now we can run kubectl commands and manage our cluster and AKS services. We need a YAML file for each service we are going to build, including the certificate at the end. For now let's create the Sentiment Analysis service, as a container, with the following file.
Pay attention here: you need the Language Service key and endpoint from the Text Analytics resource we created earlier, and in the nodeSelector block we must enter the name of the User Node Pool we created.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      containers:
      - name: sentiment
        image: mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:latest
        ports:
        - containerPort: 5000
        resources:
          limits:
            memory: "8Gi"
            cpu: "1"
          requests:
            memory: "8Gi"
            cpu: "1"
        env:
        - name: Eula
          value: "accept"
        - name: Billing
          value: "https://<your-Language-Service>.cognitiveservices.azure.com/"
        - name: ApiKey
          value: "xxxxxxxxxxxxxxxxxxxx"
      nodeSelector:
        agentpool: userpool
---
apiVersion: v1
kind: Service
metadata:
  name: sentiment-service
spec:
  selector:
    app: sentiment
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  type: ClusterIP
```

Save the file and run from your terminal:

```shell
kubectl apply -f sentiment-deployment.yaml
```

In a few seconds you can observe the service running from the AKS Services and Ingresses menu. Let's continue and bring in our Flask container now. In the same manner, create a new YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: <your-ACR-name>.azurecr.io/flaskweb:latest
        ports:
        - containerPort: 5001
        env:
        - name: CONTAINER_API_URL
          value: "http://sentiment-service:5000"
        resources:
          requests:
            cpu: "500m"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "512Mi"
      nodeSelector:
        agentpool: userpool
---
apiVersion: v1
kind: Service
metadata:
  name: flask-lb
spec:
  type: LoadBalancer
  selector:
    app: flask
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5001
```

```shell
kubectl apply -f flask-service.yaml
```

Observe the CONTAINER_API_URL environment value for the Sentiment Analysis endpoint.
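That value works because the Flask container only needs to append the API path to it. Here is a minimal sketch, in plain Python, of how the Flask side builds the in-cluster URL and unpacks the response; the sample response below is a hypothetical illustration of the v3.1 sentiment response shape, not output captured from a live call:

```python
# Sketch of the contract between the Flask container and the
# sentiment container. Field names follow the Text Analytics v3.1
# shape that app.py unpacks; the sample response is hypothetical.

def sentiment_url(endpoint: str) -> str:
    # Join the Service DNS name with the API path, tolerating a
    # trailing slash in the environment value.
    return endpoint.rstrip("/") + "/text/analytics/v3.1/sentiment"

def unpack(text: str, data: dict) -> dict:
    doc = data["documents"][0]
    scores = doc["confidenceScores"]
    return {
        "document": text,
        "overall_sentiment": doc["sentiment"],
        "confidence_positive": scores["positive"],
        "confidence_neutral": scores["neutral"],
        "confidence_negative": scores["negative"],
    }

# Inside the cluster, CONTAINER_API_URL points at the Service name:
url = sentiment_url("http://sentiment-service:5000")
print(url)  # http://sentiment-service:5000/text/analytics/v3.1/sentiment

sample = {
    "documents": [{
        "id": "1",
        "sentiment": "positive",
        "confidenceScores": {"positive": 0.98, "neutral": 0.01, "negative": 0.01},
    }],
    "errors": [],
}
print(unpack("I love this product", sample)["overall_sentiment"])  # positive
```

One detail worth noting: app.py joins the endpoint and path with a plain f-string, so the trailing slash in the Dockerfile's default value would produce a double slash in the URL; the deployment's value, which has no trailing slash, avoids that.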
The CONTAINER_API_URL value uses the Service name of our Sentiment Analysis container directly, as AKS has its own DNS resolver for easy communication between services. In fact, if we hit the Service's public IP we will have HTTP access to the Web UI. But let's see how we can import our Certificate. We won't describe how to obtain a Certificate here. All we need are the PEM files, meaning privkey.pem and cert.pem. If we have a PFX, we can export them with OpenSSL. Once we have these files in place, we will create a Secret in AKS that will hold our Certificate key and file. We just need to run this command from within the directory of our PEM files:

```shell
kubectl create secret tls flask-app-tls --key privkey.pem --cert cert.pem --namespace default
```

Once we create our Secret, we will deploy a Kubernetes Ingress Controller (NGINX is fine) which will manage HTTPS and point to the Flask service. Remember to add an A record at your DNS registrar with the hostname you are going to use and the public IP, once you see the IP address:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-app-ingress
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
  - hosts:
    - your.host.domain
    secretName: flask-app-tls
  rules:
  - host: your.host.domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-lb
            port:
              number: 80
```

```shell
kubectl apply -f flask-app-ingress.yaml
```

From AKS, under Services and Ingresses, select Ingresses and you will see the assigned public IP. Add it to your DNS, and once the name servers are updated you can hit your hostname over HTTPS!

Final Thoughts

As we've explored, the combination of Azure AI Containers and AKS offers a powerful and flexible solution for deploying AI-driven applications in cloud-native environments. By leveraging these technologies, you gain granular control over your data and model deployments, while maintaining the scalability and portability essential for modern applications. Remember, this is just the starting point.
As you delve deeper, consider the specific requirements of your project and explore the vast possibilities that Azure AI Containers unlock. Embrace the power of AI within your AKS deployments, and you’ll be well on your way to building innovative, intelligent solutions that redefine what’s possible in the cloud. Architecture