User Profile
KonstantinosPassadis
Learn Expert
Joined 6 years ago
Recent Discussions
Bot Framework: Build an AI Security Assistant with ease
How to create intelligent Bots with the Bot Framework

Intro

In an era where cybersecurity threats loom large, the need for vigilant and responsive security measures has never been greater. The Microsoft Bot Framework SDK, with its powerful AI capabilities, offers a new frontier in security management. This blog post delves into the development of an AI security assistant, showcasing how to leverage the SDK to interpret security logs, generate KQL queries, and provide real-time security alerts. We'll explore how to integrate with existing security infrastructure and harness the power of AI to build our own AI Security Assistant. Join us as we explore this exciting intersection of AI and cybersecurity, where intelligent bots stand guard against the ever-evolving landscape of digital threats.

Setup

Before we start with the Bot Framework SDK, we need to prepare our development environment. This section guides you through the necessary steps to set up your "canvas" and get started with building your AI-powered security assistant.

Prerequisites:

- Visual Studio: Ensure you have Visual Studio installed with the .NET desktop development workload. You can download it from the official Microsoft website.
- Azure Subscription: An active Azure subscription is required to access Azure OpenAI and its related services. If you don't have one already, you can sign up for a free trial.
- Bot Framework Emulator: This tool allows you to test your bot locally before deploying it to Azure. Download it from the Bot Framework website.

Creating a New Bot Project:

1. Install the Bot Framework SDK: Open Visual Studio and create a new project. Choose the "Echo Bot (Bot Framework v4)" template. This template provides a basic bot structure to get you started quickly.
2. Install the required NuGet packages:
   - Azure.AI.OpenAI
   - Azure.Core
   - Microsoft.Bot.Builder.Integration.AspNet.Core
3. Configure your bot: In the appsettings.json file, configure your bot with the appropriate API keys and endpoints. You can obtain these credentials from the Azure portal.

```json
{
  "MicrosoftAppType": "xxxxxx", // Leave it empty until publish
  "MicrosoftAppId": "xxxx", // Leave it empty until publish
  "MicrosoftAppPassword": "xxxx", // Leave it empty until publish
  "MicrosoftAppTenantId": "xxxxxx", // Leave it empty until publish
  "AzureOpenAI": {
    "ApiKey": "xxxxxxxxxx",
    "DeploymentName": "gpt-4o",
    "Endpoint": "https://xxxx.openai.azure.com"
  },
  "AzureSentinel": { // Log Analytics
    "ClientId": "xxxx",
    "ClientSecret": "xxxxx",
    "TenantId": "xxxx",
    "WorkspaceId": "xxxxx"
  }
}
```

Starting from the Echo Bot template, we need to change the code to achieve three things:

- Azure OpenAI chat interaction and generic advice
- KQL query generation
- KQL query execution against a Sentinel Workspace \ Log Analytics

The main program is EchoBot.cs (you can rename it as needed).
```csharp
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using Newtonsoft.Json;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Azure.AI.OpenAI;
using Azure;
using System.Collections.Generic;
using System.IO; // needed for Path and File below
using System.Linq;
using System;
using System.Text.RegularExpressions;
using OpenAI.Chat; // chat message types used by the Azure OpenAI client

namespace SecurityBot.Bots
{
    public class Security : ActivityHandler
    {
        private readonly HttpClient _httpClient;
        private readonly AzureOpenAIClient _azureClient;
        private readonly string _chatDeployment;
        private readonly IConfiguration _configuration;
        private Dictionary<string, int> eventMapping; // Declare eventMapping here

        public Security(IConfiguration configuration)
        {
            _configuration = configuration ?? throw new ArgumentNullException(nameof(configuration));
            _httpClient = new HttpClient();

            // Load event mappings from JSON file
            string eventMappingPath = Path.Combine(AppContext.BaseDirectory, "eventMappings.json");
            if (File.Exists(eventMappingPath))
            {
                var json = File.ReadAllText(eventMappingPath);
                eventMapping = JsonConvert.DeserializeObject<Dictionary<string, int>>(json);
            }

            // Azure OpenAI Chat API configuration
            var endpoint = configuration["AzureOpenAI:Endpoint"];
            var apiKey = configuration["AzureOpenAI:ApiKey"];
            _chatDeployment = configuration["AzureOpenAI:DeploymentName"]; // Your chat model deployment name

            // Initialize the Azure OpenAI client
            _azureClient = new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(apiKey));
        }

        protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
        {
            var userInput = turnContext.Activity.Text.ToLower();

            if (userInput.Contains("generate"))
            {
                // If the user says "generate", extract event and date, then generate the query
                var kqlQuery = await BuildKQLQueryFromInput(userInput, turnContext, cancellationToken);
                await turnContext.SendActivityAsync(MessageFactory.Text($"Generated KQL Query: {kqlQuery}"), cancellationToken);
            }
            else if (userInput.Contains("run"))
            {
                // If the user says "run", extract event and date, then run the query
                var kqlQuery = await BuildKQLQueryFromInput(userInput, turnContext, cancellationToken);
                var queryResult = await RunKqlQueryAsync(kqlQuery);
                await turnContext.SendActivityAsync(MessageFactory.Text($"KQL Query: {kqlQuery}\n\nResult: {queryResult}"), cancellationToken);
            }
            else
            {
                // For other inputs, handle the conversation with Azure OpenAI
                await GenerateChatResponseAsync(turnContext, userInput, cancellationToken);
            }
        }

        // Generate responses using the Azure OpenAI Chat API without streaming
        private async Task GenerateChatResponseAsync(ITurnContext<IMessageActivity> turnContext, string userInput, CancellationToken cancellationToken)
        {
            var chatClient = _azureClient.GetChatClient(_chatDeployment);

            // Set up the chat conversation context
            var chatMessages = new List<ChatMessage>
            {
                new SystemChatMessage("You are a cybersecurity assistant responding only to Security related questions. For irrelevant topics answer with 'Irrelevant'"),
                new UserChatMessage(userInput)
            };

            // Call the Azure OpenAI API to get the complete chat response
            var chatResponse = await chatClient.CompleteChatAsync(chatMessages);

            // Access the completion content properly
            var assistantMessage = chatResponse.Value.Content.FirstOrDefault()?.Text;
            if (!string.IsNullOrEmpty(assistantMessage))
            {
                // Send the entire response to the user at once
                await turnContext.SendActivityAsync(MessageFactory.Text(assistantMessage.Trim()), cancellationToken);
            }
            else
            {
                await turnContext.SendActivityAsync(MessageFactory.Text("I'm sorry, I couldn't process your request."), cancellationToken);
            }
        }

        // Build a KQL query from the user's input
        private async Task<string> BuildKQLQueryFromInput(string input, ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
        {
            // Start with a base KQL query
            string kqlQuery = "SecurityEvent | where 1 == 1 ";

            // Use the eventMapping dictionary to map the user's input to an EventID
            var matchedEventId = eventMapping.FirstOrDefault(mapping => input.Contains(mapping.Key)).Value;
            if (matchedEventId != 0) // EventID was found
            {
                kqlQuery += $"| where EventID == {matchedEventId} ";
            }
            else
            {
                // Fallback if no matching EventID is found
                await turnContext.SendActivityAsync(MessageFactory.Text("Sorry, I couldn't find a matching event ID for your request."), cancellationToken);
                return null; // Exit early if no valid EventID is found
            }

            // Extract the date range (e.g., "7 days") and add it to the query
            var dateRange = ExtractDateRange(input);
            if (!string.IsNullOrEmpty(dateRange))
            {
                kqlQuery += $"| where TimeGenerated > ago({dateRange}) | project TimeGenerated, Account, Computer, EventID | take 10 ";
            }

            return kqlQuery; // Return the constructed KQL query
        }

        private string ExtractDateRange(string input)
        {
            // Simple extraction logic to detect "7 days", "3 days", etc.
            var match = Regex.Match(input, @"(\d+)\s+days?");
            if (match.Success)
            {
                return $"{match.Groups[1].Value}d"; // Return as "7d", "3d", etc.
            }
            return null; // Return null if no date range found
        }

        // Run KQL query in Azure Sentinel / Log Analytics
        private async Task<string> RunKqlQueryAsync(string kqlQuery)
        {
            var _workspaceId = _configuration["AzureSentinel:WorkspaceId"];
            string queryUrl = $"https://api.loganalytics.io/v1/workspaces/{_workspaceId}/query";
            var accessToken = await GetAccessTokenAsync(); // Get Azure AD token

            var requestBody = new { query = kqlQuery };
            var jsonContent = new StringContent(JsonConvert.SerializeObject(requestBody), Encoding.UTF8, "application/json");

            _httpClient.DefaultRequestHeaders.Clear();
            _httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {accessToken}");

            var response = await _httpClient.PostAsync(queryUrl, jsonContent);
            var responseBody = await response.Content.ReadAsStringAsync();
            return responseBody; // Return the query result
        }

        // Get Azure AD token for querying Log Analytics
        private async Task<string> GetAccessTokenAsync()
        {
            var _tenantId = _configuration["AzureSentinel:TenantId"];
            var _clientId = _configuration["AzureSentinel:ClientId"];
            var _clientSecret = _configuration["AzureSentinel:ClientSecret"];

            var url = $"https://login.microsoftonline.com/{_tenantId}/oauth2/v2.0/token";
            var body = new Dictionary<string, string>
            {
                { "grant_type", "client_credentials" },
                { "client_id", _clientId },
                { "client_secret", _clientSecret },
                { "scope", "https://api.loganalytics.io/.default" }
            };

            var content = new FormUrlEncodedContent(body);
            var response = await _httpClient.PostAsync(url, content);
            var responseBody = await response.Content.ReadAsStringAsync();

            dynamic result = JsonConvert.DeserializeObject(responseBody);
            return result.access_token;
        }
    }
}
```

Event ID Mapping

Let's map the most important Event IDs to utterances. The solution can be enhanced with Text Analytics and NLU, but for this workshop we are creating the dictionary (eventMappings.json):

```json
{
  "failed sign-in": 4625,
  "successful sign-in": 4624,
  "account lockout": 4740,
  "password change": 4723,
  "account creation": 4720,
  "logon type": 4624,
  "registry value was modified": 4657,
  "user account was changed": 4738,
  "user account was enabled": 4722,
  "user account was disabled": 4725,
  "user account was deleted": 4726,
  "user account was undeleted": 4743,
  "user account was locked out": 4767,
  "user account was unlocked": 4768,
  "user account was created": 4720,
  "attempt was made to duplicate a handle to an object": 4690,
  "indirect access to an object was requested": 4691,
  "backup of data protection master key was attempted": 4692,
  "recovery of data protection master key was attempted": 4693,
  "protection of auditable protected data was attempted": 4694,
  "unprotection of auditable protected data was attempted": 4695,
  "a primary token was assigned to process": 4696,
  "a service was installed in the system": 4697,
  "a scheduled task was created": 4698,
  "a scheduled task was deleted": 4699,
  "a scheduled task was enabled": 4700,
  "a scheduled task was disabled": 4701,
  "a scheduled task was updated": 4702,
  "a token right was adjusted": 4703,
  "a user right was assigned": 4704,
  "a user right was removed": 4705,
  "a new trust was created to a domain": 4706,
  "a trust to a domain was removed": 4707,
  "IPsec Services was started": 4709,
  "IPsec Services was disabled": 4710
}
```
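With this mapping in place, a prompt such as "Run a KQL query for failed sign-in logs on the past 3 days" resolves "failed sign-in" to EventID 4625 and "3 days" to ago(3d), so BuildKQLQueryFromInput produces a query like:

```kusto
SecurityEvent
| where 1 == 1
| where EventID == 4625
| where TimeGenerated > ago(3d)
| project TimeGenerated, Account, Computer, EventID
| take 10
```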
Make all required updates to Program.cs and Startup.cs for the namespace and the public class:

```csharp
// Generated with Bot Builder V4 SDK Template for Visual Studio EchoBot v4.22.0

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace SecurityBot
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}
```

Testing

Run the application and open the Bot Framework Emulator to test the bot. All you need is to add the localhost URL to the Emulator and try some chat interactions, for example:

- What is a SOAR? (answered through the Azure OpenAI chat integration)
- Generate a KQL query for failed sign-in logs on the past 3 days
- Run a KQL query for failed sign-in logs on the past 3 days

We get correct executions and KQL against our Sentinel \ Log Analytics workspace. Let's build this bot on Azure and use it from our Teams client as our trusted Security Assistant!

Build on Azure

The logic behind a bot build on Azure is to create an Azure Web App and then the relevant Azure Bot Service. All the steps are published in the Microsoft documentation, and you will find the ARM templates in the Solution window in Visual Studio 2022.

Use the following commands to create your app registration and set its password. On success, these commands generate JSON output.

Use the az ad app create command to create a Microsoft Entra ID app registration. This command generates an app ID that you'll use in the next step:

az ad app create --display-name "<app-registration-display-name>" --sign-in-audience "AzureADMyOrg"

Use AzureADMyOrg for a single-tenant app.

Use the az ad app credential reset command to generate a new password for your app registration:

az ad app credential reset --id "<appId>"

Record the values you'll need in later steps: the app ID and password from the command output.

Once you have the app registration ready and configured, deploy the Web App on Azure using the deployment templates. Create the App Service and the Azure Bot resources for your bot. Both steps use an ARM template and the az deployment group create Azure CLI command to create the resource or resources:

- Create an App Service resource for your bot. The App Service can be within a new or existing App Service Plan. For detailed steps, see Use Azure CLI to create an App Service.
- Create an Azure Bot resource for your bot. For detailed steps, see Use Azure CLI to create or update an Azure Bot.

az deployment group create --resource-group <resource-group> --template-file <template-file-path> --parameters "@<parameters-file-path>"

Now it is time to build and publish the bot; make sure you have run the Bot resource ARM deployment as we did with the Web App.

Create the deployment file for the bot:

1. Switch to your project's root folder. For C#, the root is the folder that contains the .csproj file.
2. Do a clean rebuild in release mode.
3. If you haven't done so before, run az bot prepare-deploy to add required files to the root of your local source code directory. This command generates a .deployment file in your bot project folder.
4. Within your project's root folder, create a zip file that contains all files and sub-folders.

After this, either:

- Run the az webapp deploy command from the command line to perform a deployment using the Kudu zip push deployment for your app service (web app), or
- Select the Publish option from the Solution Explorer and publish using the created Web App.
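For the command-line route, the zip push looks roughly like this (resource names are illustrative):

az webapp deploy --resource-group <resource-group> --name <web-app-name> --src-path <path-to-zip> --type zip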
Remember to add the App ID and the relevant details to the appsettings.json we saw earlier. In case you need to re-test with the Emulator, remove the App Type, App ID, Password and Tenant ID settings before running the app locally!

Upon success, make sure the bot's Messaging Endpoint is the Web App URL we created, followed by the /api/messages suffix. In case it is missing, add it.

Now we must add the correct API permissions to the app registration in Entra ID. Select the app registration, go to API Permissions, select Add a permission and choose APIs my organization uses. Find Log Analytics and add the Application permissions for Read. This way we are able to run and execute KQL against our Sentinel – Log Analytics workspace.

Bot Channels – Teams

Now that our bot is active and we can verify it in "Test in Web Chat", we can create the Teams integration. It is a really simple step: select the Teams option from Channels and verify the configuration. Once we enable that, we can get the embed code from the Get Embed option in the Channels menu, or open the URL directly when we select the Teams channel.

Before we start using the bot, we must make a significant configuration in the Teams Admin Center; otherwise the bot will probably show up but be unable to receive messages from the chat.

Bot in Teams

Finally, we are able to use our Security Assistant bot in Teams, in the web or desktop app. The bot provides generic advice from the Azure OpenAI chat model, generates KQL queries for a number of events, and executes those queries in Log Analytics so we see the results in our UI. We can always change the appearance of the results; in this workshop we keep a minimal presentation for better visibility. The next phase of this deployment could utilize the Language Service, where all Event IDs are dynamically recognized through a Text Analytics service.

Conclusion

In conclusion, this workshop demonstrated the seamless integration of Azure's powerful AI services and Log Analytics to build a smart, security-focused chatbot. By leveraging tools like Azure OpenAI, Log Analytics, and the Bot Framework, we've empowered bots to provide dynamic insights and interact meaningfully with data. Whether it's querying log events or responding to security inquiries, this solution highlights the potential of AI-driven assistants to elevate security operations. Keep exploring and building with Azure, and unlock new possibilities in automation and intelligence!

Architecture: (diagram in the original post)
Azure AI Assistants with Logic Apps

Introduction to AI Automation with Azure OpenAI Assistants

Intro

Welcome to the future of automation! In the world of Azure, AI assistants are becoming your trusty sidekicks, ready to tackle the repetitive tasks that once consumed your valuable time. But what if we could make these assistants even smarter? In this post, we'll dive into the exciting realm of integrating Azure AI assistants with Logic Apps, Microsoft's powerful workflow automation tool. Get ready to discover how this dynamic duo can transform your workflows, freeing you up to focus on the big picture and truly innovative work.

Azure OpenAI Assistants (preview)

Azure OpenAI Assistants (preview) allows you to create AI assistants tailored to your needs through custom instructions, augmented by advanced tools like the code interpreter and custom functions. To accelerate and simplify the creation of intelligent applications, we can now call Logic Apps workflows through function calling in Azure OpenAI Assistants. The Assistants playground enumerates and lists all the workflows in your subscription that are eligible for function calling. Here are the requirements for these workflows:

- Schema: The workflows you want to use for function calling should have a JSON schema describing the inputs and expected outputs. Using Logic Apps, you can provide the schema in the trigger, which is automatically imported as a function definition.
- Consumption Logic Apps: Currently, only Consumption workflows are supported.
- Request trigger: Function calling requires a REST-based API. Logic Apps with a request trigger provide a REST endpoint, so only workflows with a request trigger are supported for function calling.

AI Automation

So apart from the Assistants API, which we will explore in another post, we know that we can integrate Azure Logic Apps workflows! Isn't that amazing? The road is now open for AI automation, and we are at the genesis of it, so let's explore it. We need an Azure subscription and:

- Azure OpenAI in one of the supported regions. This demo is on Sweden Central.
- A Logic Apps Consumption plan.

We will work in Azure OpenAI Studio and utilize the Playground. Our model deployment is GPT-4o. The Assistants playground offers the ability to create and save our Assistants, so we can start working, return later, open the Assistant and continue. We can find the System Message option and the three tools that enhance the Assistants: Code Interpreter, Function Calling (including Logic Apps) and file uploads. The following table describes the configuration elements of our Assistants:

- Assistant name: Your deployment name that is associated with a specific model.
- Instructions: Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering.
- Deployment: This is where you set which model deployment to use with your assistant.
- Functions: Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications.
- Code interpreter: Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code.
- Files: You can upload up to 20 files, with a max file size of 512 MB, to use with tools. You can upload up to 10,000 files using AI Studio.
The Studio provides two sample functions (Get Weather and Get Stock Price) to give an idea of the JSON schema required for function calling. It is important to provide a clear System Message that makes the Assistant efficient and productive, with careful consideration, since the longer the message, the more tokens are consumed.

Challenge #1 – Summarize WordPress Blog Posts

How about providing a prompt to the Assistant with a URL, instructing it to summarize a WordPress blog post? WordPress is a good fit because it has a unified API, so we only need to change the URL. We could be stricter and narrow the scope to a specific URL, but let's see the flexibility of Logic Apps in a workflow.

We start with the Logic App. We will generate the JSON schema directly from the trigger, which must be an HTTP request:

```json
{
  "name": "__ALA__lgkapp002", // Remove this for the Logic App trigger
  "description": "Fetch the latest post from a WordPress website, summarize it, and return the summary.",
  "parameters": {
    "type": "object",
    "properties": {
      "url": {
        "type": "string",
        "description": "The base URL of the WordPress site"
      },
      "post": {
        "type": "string",
        "description": "The page number"
      }
    },
    "required": ["url", "post"]
  }
}
```

In the Designer this looks the same (screenshot in the original post): the schema is identical, excluding the name, which is needed only in the OpenAI Assistants; we will see this detail later on. We continue with the call to WordPress, an HTTP REST API call. And finally, mandatory as it is, a Response action, where we tell the Assistant that the call was completed and return some payload, in our case the body of the previous step.

Now it is time to open Azure OpenAI Studio and create a new Assistant. Remember the prerequisites we discussed earlier! From the Assistants menu create a [+New] Assistant, give it a meaningful name, select the deployment and add a System Message. For our case it could be something like: "You are a helpful Assistant that summarizes the WordPress blog posts the users request, using Functions. You can utilize the code interpreter in a sandbox environment for advanced analysis and tasks if needed." The code interpreter here could be overkill, but we mention it to show its use. Remember to save the Assistant.

Now, in Functions, do not select Logic Apps; rather stay in the custom box and add the code we presented earlier. The Assistant will understand that the Logic App named lgkapp002 must be called, via ["name": "__ALA__lgkapp002"] in the schema! In fact, the Logic App is declared by two underscores as prefix and two underscores as suffix, with ALA in between, followed by the name of the Logic App.

Let's give our Assistant a prompt and see what happens: the Assistant responded pretty solidly with a meaningful summary of the post we asked for! Not bad at all for a preview service.
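One note on the WordPress step above: assuming the stock WordPress REST API, the HTTP action in the workflow boils down to a GET of this shape (the exact query-parameter mapping depends on how you wire the trigger inputs; shown here illustratively):

GET https://<url>/wp-json/wp/v2/posts?per_page=1&page=<post>

The response body from that call is what the workflow's Response action hands back to the Assistant for summarization.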
Challenge #2 – Create an Azure Virtual Machine based on preferences

For the purpose of this task we have activated a system-assigned managed identity on the Logic App we use, and pre-provisioned a Virtual Network with a subnet as well. The Logic App must reside in the same subscription as our Azure OpenAI resource. This is a more advanced request, but after all it translates to Logic Apps capabilities. Can we do it fast enough so the Assistant won't time out? Yes we can, by using the latest Azure Resource Manager API, which indeed is lightning fast!

The process must follow the same pattern: Request – Actions – Response. The request in our case must include enough input for the Logic App to carry out the tasks. The schema should include a "name" element, which tells the Assistant which Logic App to look up:

```json
{
  "name": "__ALA__assistkp02", // remove this for the Logic App trigger
  "description": "Create an Azure VM based on the user input",
  "parameters": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "The name of the VM"
      },
      "location": {
        "type": "string",
        "description": "The region of the VM"
      },
      "size": {
        "type": "string",
        "description": "The size of the VM"
      },
      "os": {
        "type": "string",
        "description": "The OS of the VM"
      }
    },
    "required": ["name", "location", "size", "os"]
  }
}
```

The actual trigger omits the "name" element, as seen in the screenshot in the original post.

Now, as we have a number of options, this method allows us to keep track of everything, including the user's inputs like VM name, VM size, VM OS, etc. Of course this can be expanded, since we use a default resource group and a default VNET and subnet, but that is also configurable! So we store the input into variables; we initialize five: the name, the size, the location (which is preset for reduced complexity, since we don't create a new VNET), and we break down the OS. Say the user selects Windows 10: the API expects an offer and a SKU, so from "Windows 10" we derive an offer variable, and likewise an OS variable that holds the expected SKU:

```
if(equals(triggerBody()?['os'], 'Windows 10'), 'Windows-10',
   if(equals(triggerBody()?['os'], 'Windows 11'), 'Windows-11', 'default-offer'))

if(equals(triggerBody()?['os'], 'Windows 10'), 'win10-22h2-pro-g2',
   if(equals(triggerBody()?['os'], 'Windows 11'), 'win11-22h2-pro', 'default-sku'))
```

As you understand, this is narrowed to the available Windows desktop choices only, but we can expand the Logic App to catch the most well-known operating systems. After the variables, all we have to do is create a public IP (optional), a network interface, and finally the VM. This is the most efficient flow I could make, so the API won't complain and it completes very fast. Like 3-seconds fast! The API calls are quite straightforward, and everything is documented in the Microsoft documentation; the original post includes screenshots of the Public IP call and of the Create VM action, highlighting the storage profile / OS image setup.

Finally, we need the Response, which can be shaped however we like. I am enriching the Assistant's response with an additional "Get Virtual Machine" action, which lets us include resource properties in the response body.

Let's make our request now, through the Assistants playground in Azure OpenAI Studio. Our prompt is quite clear: "Create a new VM with size=Standard_D4s_v3, location=swedencentral, os=Windows 11, name=mynewvm02". Even if we don't provide the parameters, the Assistant will ask for them, as we have set in the System Message. Pay attention to the limitation as well: when we ask about the public IP, the Assistant does not know it, yet it informs us with a specific message that makes sense and is relevant to the whole operation. And if we look at the time it took, we will be amazed: the total time from the user request to the Assistant's response is around 10 seconds. We have a limit of 10 minutes for function-calling execution, so we could build a whole infrastructure using just our prompts.
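To make the Public IP step concrete, under the hood it corresponds to an ARM REST call along these lines (check the exact api-version and property names against the Azure Network REST reference; all values are illustrative):

```
PUT https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/mynewvm02-ip?api-version=2023-05-01

{
  "location": "swedencentral",
  "properties": { "publicIPAllocationMethod": "Static" }
}
```

The network interface and VM actions follow the same PUT-against-ARM pattern, which is why the whole chain completes in seconds.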
Conclusion

In conclusion, this experiment highlights the powerful synergy between Azure AI Assistants' function-calling capability and the automation potential of Logic Apps. By successfully tackling two distinct challenges, we've demonstrated how this combination can streamline workflows, boost efficiency, and unlock new possibilities for integrating intelligent decision-making into your business processes. Whether you're automating customer support interactions, managing data pipelines, or optimizing resource allocation, the integration of AI assistants and Logic Apps opens doors to a more intelligent and responsive future. We encourage you to explore these tools further and discover how they can revolutionize your own automation journey.

References:

- Getting started with Azure OpenAI Assistants (Preview)
- Call Azure Logic Apps as functions using Azure OpenAI Assistants
- Azure OpenAI Assistants function calling
- Azure OpenAI Service models
- What is Azure Logic Apps?
- Azure Resource Manager – REST Operations
Creating and customizing Copilots in Copilot Studio

How to create a Copilot and use it in your blog with your blog's data

Intro

Today, we're going to embark on an exciting journey of creating our very own AI assistant, or "Copilot", using the powerful Copilot Studio. But that's not all! We'll also learn how to seamlessly integrate this Copilot into our WordPress site, transforming it into a dynamic, interactive platform. Our WordPress site will serve as the primary data source, enabling our Copilot to provide personalized and context-aware responses. Whether you're a seasoned developer or a tech enthusiast, this guide will offer a step-by-step approach to leverage AI capabilities for your WordPress site. So, let's dive in and start our AI adventure!

Preparation

Luckily, we can try Copilot Studio with a trial license. Head over to https://learn.microsoft.com/en-us/microsoft-copilot-studio/sign-up-individual for all the details. You will have to sign in with a Microsoft 365 user email, so you need a Microsoft 365 tenant, as you understand! For those who are actively using Power Apps, I suggest having a good look at https://learn.microsoft.com/en-us/microsoft-copilot-studio/environments-first-run-experience, so you can grasp the details regarding environments.

Creation

Once we are ready, head over to https://copilotstudio.microsoft.com and you can start working with new Copilots! Let's create one, shall we? Select the upper-left Copilots menu, then New Copilot. Add the name you want and add your blog or site from which the Copilot will get its data. Go to the bottom, select Edit Advanced Options, check "Include lesson topics...", select an icon and leave the default "Common Data Services Default Solution". Once you create the Copilot, you will find it in the Copilots section of the left menu.

Configure

The first thing we are going to do is change the Copilot's salutation message. There is a default one, which we can change once we click on the Copilot and inside the chat box of the Copilot message. In the designer area on the left we will find the predefined message, which we change to our preference. Remember to save your changes!

Topics

The most important elements of our Copilot are the Topics. Topics are the core building blocks of a chatbot; they can be seen as the bot's competencies: they define how a conversation dialog plays out. Topics are discrete conversation paths that, used together, allow users to have a conversation with a bot that feels natural and flows appropriately. In our Copilot we have three lesson Topics that we do not need, so from the Topics menu, select each lesson Topic from the dotted selection and disable it (you can also delete these three unneeded Topics completely). It is important to disable Topics that we don't need; otherwise we have to resolve any errors on the existing Topics as we make changes.

Before going deeper, we also changed a standard Topic named "Goodbye", making it simpler: we just changed the end of the chat to a simple "Thanks for using...". We also propose changing the Greeting topic to redirect to Conversation Start, for a unified experience!

Let's create a simple Topic where the Copilot responds to specific questions. You can add your own phrases as well. From the Topics menu select "Create" – "Topic" – "From Blank" and add the trigger phrases you wish. We have selected the following: What do you do?, What is your reach?, What can you tell me?
Add a node with the Message property and add the text the Copilot will use to answer. You can include the name of the Copilot by selecting the variable icon inside the node. Add a final node that ends this topic. You can edit the name of your Topic in the upper-left corner, and save it! You can always test it in the chat box on the left.

Now let's do something more creative! Let's ask users if they would provide their email so we can send a summary of the conversation. The Copilot should make clear that this is optional and should not interrupt the conversation. So the first thing we need to do is add a new Topic where we get the user's email address and store it as a variable. Since the user can ask to provide the email later, we offer this option as well, with the trigger. Pay attention to the closing node: we added a Redirect to the Greeting Topic, so we avoid falling into the loop of Conversation Start. To do that, we add a new node: Topic Management – Go to another Topic.

Now let's build the request with a condition, by editing the Conversation Start Topic (the one we edited at the beginning). From the Topics menu select All and find the Conversation Start Topic. Add a new node after the Message, with a question. We use this text so the user is aware of their options: "Would you like to provide an email so you can get a summary of our interaction? It is optional and you can add it later by simply saying 'Get my Email'!" In this Question node, select the Multiple Choice options, add the YES and NO possible answers, and save the answer in a variable (you can rename the variable if you want to). The next node is an "Add a Condition" node: when the answer is YES we send the conversation to the Get User's Email Topic, otherwise we send it to the Greeting.

Save the Topic, and test your Copilot in the chat box on the left. You will notice that we can't redirect the user without a validation message, so we can edit the Get User's Email Topic with a Message node. Now we have the basic idea of Topics! Play around and create your paths! Be careful not to fall into loops, and always test the Copilot! We could expand to Power Apps for data operations, like storing the email in a table, or create a flow in Power Automate, but that's not our focus here.

Authentication – Channels

Once we are happy with our Copilot, we need to make it available to our channels, specifically to web sites. If we select Channels from the left menu, we get a message about authentication, so we have to follow a straightforward process to configure authentication for our Copilot to be available in all channels. Unless we want users to sign in, we won't activate that option, but you can always change it. We will enable Entra ID as our service provider.

The following part is from the Microsoft documentation. Source: Configure user authentication with Microsoft Entra ID – Microsoft Copilot Studio | Microsoft Learn

Create an app registration

1. Sign in to the Azure portal, using an admin account in the same tenant as your copilot.
2. Go to App registrations, either by selecting the icon or searching in the top search bar.
3. Select New registration and enter a name for the registration. It can be helpful later to use the name of your copilot.
   For example, if your copilot is called "Contoso sales help," you might name the app registration "ContosoSalesReg."
4. Under Supported account types, select Accounts in any organizational directory (Any Microsoft Entra ID directory – Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox).
5. Leave the Redirect URI section blank for now; you enter that information in the next steps.
6. Select Register.
7. After the registration is complete, go to Overview. Copy the Application (client) ID and paste it in a temporary file. You need it in later steps.

Add the redirect URL

1. Go to Authentication, and then select Add a platform.
2. Under Platform configurations, select Add a platform, and then select Web.
3. Under Redirect URIs, enter https://token.botframework.com/.auth/web/redirect and https://europe.token.botframework.com/.auth/web/redirect. Note: the authentication configuration pane in Copilot Studio might show the redirect URL https://unitedstates.token.botframework.com/.auth/web/redirect. Using that URL makes the authentication fail; use the URIs above instead.
4. In the Implicit grant and hybrid flows section, turn on both Access tokens (used for implicit flows) and ID tokens (used for implicit and hybrid flows).
5. Select Configure.

Generate a client secret

1. Go to Certificates & secrets.
2. In the Client secrets section, select New client secret.
3. (Optional) Enter a description. One is provided if left blank.
4. Select the expiry period. Select the shortest period that's relevant for the life of your copilot.
5. Select Add to create the secret.
6. Store the secret's Value in a secure temporary file. You need it when you configure your copilot's authentication later on.

Tip: Don't leave the page before you copy the value of the client secret. If you do, the value is obfuscated and you must generate a new client secret.

Configure manual authentication

1. In Copilot Studio, in the navigation menu under Settings, select Security. Then select the Authentication card.
2. Select Manual (for any channel including Teams), then turn on Require users to sign in.
3. Enter the following values for the properties:
   - Service provider: Select Microsoft Entra ID.
   - Client ID: Enter the application (client) ID that you copied earlier from the Azure portal.
   - Client secret: Enter the client secret you generated earlier from the Azure portal.
   - Scopes: Enter profile openid.
4. Select Save to finish the configuration.

Configure API permissions

1. Go to API permissions.
2. Select Grant admin consent for <your tenant name>, and then select Yes. If the button isn't available, you may need to ask a tenant administrator to do it for you. Note: to avoid users having to consent to each application, a Global Administrator, Application Administrator, or Cloud Application Administrator can grant tenant-wide consent to your app registrations.
3. Select Add a permission, and then select Microsoft Graph.
4. Select Delegated permissions.
5. Expand OpenId permissions and turn on openid and profile.
6. Select Add permissions.

Define a custom scope for your copilot

Scopes allow you to determine user and admin roles and access rights. You create a custom scope for the canvas app registration that you create in a later step.

1. Go to Expose an API and select Add a scope.
2. Set the following properties.
   You can leave the other properties blank.
   - Scope name: Enter a name that makes sense in your environment, such as Test.Read
   - Who can consent?: Select Admins and users
   - Admin consent display name: Enter a name that makes sense in your environment, such as Test.Read
   - Admin consent description: Enter "Allows the app to sign the user in."
   - State: Select Enabled
3. Select Add scope.

Source: Configure user authentication with Microsoft Entra ID – Microsoft Copilot Studio | Microsoft Learn

You can always make the Copilot more secure by adding required authentication and SSO; read the documentation to see how you can also add scopes to the Copilot.

Now it's time to publish! Hit Publish from the menu and publish your Copilot. If any errors occur, it will most likely be a Topic; read the instructions above carefully, and of course you can make your own routes since you've got the concept! Once publishing is done, the Channels menu activates all channels, and from the Custom Website channel you can grab the embed code and add it to a post on your WordPress site or your webpage! You can also see it in the demo website, if you have not enabled "require secure access". The original post shows it running in the actual WordPress site using the embedded code.

Closing

With Copilot Studio, building a custom AI assistant and seamlessly integrating it into your WordPress site is simpler than you might have imagined. It empowers you to create a more dynamic and personalized user experience. Whether you're looking to automate tasks, provide intelligent insights, or offer a more conversational interface on your site, Copilot Studio provides the tools and a straightforward process to get you there. Remember, the possibilities are endless. Experiment, refine, and watch as your WordPress site becomes a hub of unparalleled AI-powered engagement!

References

- Create Copilots with Copilot Studio
- Manage Topics in Copilot Studio
- AI-based copilot authoring overview
- Quickstart guide for building copilots with generative AI
- Microsoft Copilot Studio overview
Semantic Kernel: Develop your AI Integrated Web App on Azure and .NET 8.0

How to create a Smart Career Advice and Job Search Engine with Semantic Kernel

The concept

The Rise of Semantic Kernel

Semantic Kernel, an open-source development kit, has taken the .NET community by storm. With support for C#, Python, and Java, it seamlessly integrates with .NET services and applications. But what makes it truly remarkable? Let's dive into the details.

A Perfect Match: Semantic Kernel and .NET

Picture this: you're building a web app, and you want to infuse it with AI magic. Enter Semantic Kernel. It's like the secret sauce that binds your .NET services and AI capabilities into a harmonious blend. Whether you're a seasoned developer or just dipping your toes into AI waters, Semantic Kernel simplifies the process. As part of the Semantic Kernel community, I've witnessed its evolution firsthand. The collaborative spirit, the shared knowledge: it's electrifying! We're not just building software; we're shaping the future of AI-driven web applications.

The Web App

Our initial plan was simple: create a job recommendations engine. But Semantic Kernel had other ideas, and it took us on an exhilarating ride. Now, our web application not only suggests career paths but also taps into third-party APIs to fetch relevant job listings. And that's not all: it even crafts personalized skilling plans and preps candidates for interviews. Talk about exceeding expectations!

Build

Since I have already created the repository on GitHub, I don't think it is critical to re-post the Terraform files here. We build our main infrastructure with Terraform and also invoke an Azure CLI script to automate the container image build and push; the original post lists the resources you end up with.

Before deployment, make sure to assign the service principal the "RBAC Administrator" role and narrow the assignments down to AcrPull and AcrPush, so you can create a user-assigned managed identity with these roles. Since we build and push the container images with local-exec Az CLI scripts within Terraform, you will notice some explicit dependencies, to make sure everything builds in order. It is really amazing that we can build all the infra, including the apps, with Terraform!

Architecture

Upon completion, you will have a functioning React web app with an ASP.NET Core Web API, utilizing Semantic Kernel and an external job-listings API, to get advice, find jobs and get a skilling plan for a specific recommended role! The original post includes a reference architecture diagram; aside from the Private Endpoints, the same deployment is available on GitHub.

Kernel SDK

The SDK provides a simple yet powerful array of commands to configure and "set" the Semantic Kernel characteristics. Let's see the first endpoint, where users ask for recommended career paths:
```csharp
[HttpPost("get-recommendations")]
public async Task<IActionResult> GetRecommendations([FromBody] UserInput userInput)
{
    _logger.LogInformation("Received user input: {Skills}, {Interests}, {Experience}",
        userInput.Skills, userInput.Interests, userInput.Experience);

    var query = $"I have the following skills: {userInput.Skills}. " +
                $"My interests are: {userInput.Interests}. " +
                $"My experience includes: {userInput.Experience}. " +
                "Based on this information, what career paths would you recommend for me?";

    var history = new ChatHistory();
    history.AddUserMessage(query);

    ChatMessageContent? result = await _chatCompletionService.GetChatMessageContentAsync(history);
    if (result == null)
    {
        _logger.LogError("Received null result from the chat completion service.");
        return StatusCode(500, "Error processing your request.");
    }

    string content = result.Content;
    _logger.LogInformation("Received content: {Content}", content);

    var recommendations = ParseRecommendations(content);
    _logger.LogInformation("Returning recommendations: {Count}", recommendations.Count);
    return Ok(new { recommendations });
}
```

The actual data flow is depicted in the original post's diagram, where we can see the interaction with the local endpoints and the external endpoint as well. The user provides skills, interests, experience and the level of their current position, and the API sends the payload to Semantic Kernel with a constructed prompt asking for position recommendations. The recommendations return with clickable buttons: one to find relevant positions from LinkedIn listings using the external API, and another to ask the Semantic Kernel again for skill-up advice! The original post shows the UI experience for the recommendations, the skill-up plan and the job listings.

The project can be extended to a point of automation and AI integration where users upload their CVs and ask the Semantic Kernel to provide feedback, as well as apply for a specific position! As we discussed earlier, some additional optimizations are good to have, like Private Endpoints, Azure Front Door and/or Azure Firewall, but the point is to see Semantic Kernel in action with its amazing capabilities, especially when used within the .NET SDK.
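For context, the _chatCompletionService injected into the controller above is Semantic Kernel's IChatCompletionService. A minimal sketch of how it could be registered in Program.cs, assuming the Microsoft.SemanticKernel package (the deployment name and configuration keys are illustrative):

```csharp
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Register the kernel and an Azure OpenAI chat completion connector;
// IChatCompletionService then resolves via constructor injection.
builder.Services.AddKernel();
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",                                   // illustrative
    endpoint: builder.Configuration["AzureOpenAI:Endpoint"]!,   // illustrative config keys
    apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!);

var app = builder.Build();
app.MapControllers();
app.Run();
```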
Conclusion In this journey through the intersection of technology and career guidance, we’ve explored the powerful capabilities of Azure Container Apps and the transformative potential of Semantic Kernel, Microsoft’s open-source development kit. By seamlessly integrating AI into .NET applications, Semantic Kernel has not only simplified the development process but also opened new doors for innovation in career advice. Our adventure began with a simple idea—creating a job recommendations engine. However, with the help of Semantic Kernel, this idea evolved into a sophisticated web application that goes beyond recommendations. It connects to third-party APIs, crafts personalized skilling plans, and prepares candidates for interviews, demonstrating the true power of AI-driven solutions. By leveraging Terraform for infrastructure management and Azure CLI for automating container builds, we successfully deployed a robust architecture that includes a React Web App, ASP.NET Core web API, and integrated AI services. This project highlights the ease and efficiency of building and deploying cloud-based applications with modern tools. The code is available in GitHub for you to explore, contribute and extend as mush as you want to ! Git Hub Repo: Semantic Kernel - Career Advice Links\References Intro to Semantic Kernel Understanding the kernel Chat completion Deep dive into Semantic Kernel Azure Container Apps documentationAzure AI Search: Nativity in Microsoft Fabric
Azure AI Search: Nativity in Microsoft Fabric

How to create an AI Web App with Azure OpenAI, Azure AI Search with Vector Embeddings and Microsoft Fabric Pipelines

Intro

Today, we embark on an exciting journey to build an AI assistant and recommendations bot with cutting-edge features, helping users decide which book best suits their preferences. Our bot will handle various interactions, such as providing customized recommendations and engaging in chat conversations. Additionally, users can register and log in to this Azure cloud-native AI application. Microsoft Fabric will handle automation and AI-related tasks such as:

- Load and clean the books dataset with triggered pipelines and notebooks
- Transform the dataset to JSON, making the proper adjustments for vector usability
- Load the cleaned and transformed dataset to Azure AI Search, configuring vector and semantic profiles
- Create and save embeddings with Azure OpenAI to Azure AI Search

As you may have already guessed, our foundation lies in Microsoft Fabric, leveraging its powerful Python notebooks, pipelines, and data-lake toolsets. We'll integrate these tools with a custom identity database and an AI assistant. Our mission? To explore the core AI functionalities that set modern applications apart: think embeddings, semantic kernel, and vectors. As we navigate Microsoft Azure's vast offerings, we'll build our solution from scratch.

Prerequisites for the Workshop

Apart from this guide, everything is shared through GitHub; nevertheless we need: an Azure subscription, access to Azure OpenAI with text-embeddings and chat-gpt deployments, Microsoft Fabric with a Pro license (trial is fine), patience and excitement!

Infrastructure

I respect everyone's time, so I will point you to the GitHub repo that holds the whole implementation, along with the Terraform automation. We start with the SQL query that runs within Terraform:

```sql
CREATE TABLE Users (
    UserId INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName NVARCHAR(50) NOT NULL,
    Username NVARCHAR(50) UNIQUE NOT NULL,
    PasswordHash NVARCHAR(255) NOT NULL,
    Age INT NOT NULL,
    photoUrl NVARCHAR(500) NOT NULL
);

-- Genres table
CREATE TABLE Genres (
    GenreId INT PRIMARY KEY IDENTITY(1,1),
    GenreName NVARCHAR(50)
);

-- UsersGenres join table
CREATE TABLE UsersGenres (
    UserId INT,
    GenreId INT,
    FOREIGN KEY (UserId) REFERENCES Users(UserId),
    FOREIGN KEY (GenreId) REFERENCES Genres(GenreId)
);

ALTER DATABASE usersdb01 SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
```

We have enabled Change Tracking in case we want to trigger the embeddings creation upon each change in the database. You can see we use a JOIN table to relate users and genres, since the genres selected by each user help the assistant make recommendations. Keep in mind you need sqlcmd installed on your workstation!
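A side note on Change Tracking: it must also be enabled per table before changes can be read. A minimal sketch of how a change-triggered pipeline could pick up modified users since the last sync version (illustrative only; the workshop itself runs on a schedule instead):

```sql
-- Table-level change tracking, in addition to the database-level setting above
ALTER TABLE Users ENABLE CHANGE_TRACKING;

-- Read rows changed since a stored sync version
DECLARE @last_sync BIGINT = 0; -- persisted from the previous run
SELECT u.UserId, u.Username, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES Users, @last_sync) AS ct
JOIN Users u ON u.UserId = ct.UserId;
```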
Vector, Embeddings & Fabric Pipelines

Yes, you read that well! We are going to get a books dataset from Kaggle, clean it, transform it and upload it to AI Search, where we will create an index for the books. We will also create and store the embeddings using a vector profile from AI Search. In a similar manner we will get the users from SQL and upload them to the AI Search users index, then create and save their embeddings as well. The really exciting part is that we will use Microsoft Fabric pipelines and notebooks for the books index and embeddings! So it is important to have a Fabric Pro trial license with the minimum capacity enabled.

Books Dataset

The ultimate purpose here is to automate the creation of embeddings for both the books and users datasets, so on the web app we can get recommendations based on preferences, but also on the actual queries we send to the AI assistant. We get the main books dataset as delimited text (CSV) and transform it to JSON in the correct format, so it can be uploaded to an Azure AI Search index, utilizing the native AI Search vector profiles and Azure OpenAI for the embeddings. The Fabric pipelines will be triggered on a schedule; we will explore other possible options as well.

In Microsoft Fabric, notebooks are an important tool, as in most modern data platforms. The managed Spark clusters allow us to create and execute powerful scripts in the form of Python notebooks (PySpark), add them to a pipeline, and build solid projects and solutions. Microsoft Fabric provides the ability to pre-install libraries and configure our Spark compute within Environments, so our code has all its requirements in this managed environment. In our case we install all required libraries and pin the OpenAI library to a pre-1.0.0 version for this project.

But let's take it from the start. We need to access app.fabric.microsoft.com and create a new workspace with a trial Pro license (it shows the diamond icon). Once we have our workspace in place, we select it and, from the left menu, select New to create the Environment and later a Lakehouse. The Environment settings that worked for me simply install public libraries, with the OpenAI version pinned as described.

Since all the code is available on GitHub, let's explore the next task: creating the pipeline that contains the notebooks. Select your workspace icon in the left vertical menu, find the New+ drop-down menu and More Options, until you find the Data Pipeline. You will be presented with the familiar Synapse/Data Factory-like dashboard, where we can start inserting our activities. Create all the notebooks beforehand, just to keep everything in order; based on the GitHub repo we have five notebooks ready. The Fabric API does not yet support firing pipelines (it will happen eventually), so we can either schedule them or work with Event Stream. Reflex supports same-directory Azure connections only (we will have a look another time), but our subscription is on another tenant, so yeah: schedule it is!

Let's shed some light on the pipeline's activities. We assume the dataset is stored in a Blob Storage account, so we copy that CSV into the Lakehouse. The first notebook cleans the data with Python: removing nulls, removing non-English characters, and so on. Since the activity stores it as part of a folder-like structure with non-direct access, we need a task to save it to our Lakehouse. We then transform it to JSON, turn the JSON into a correct array of records, save it to the Lakehouse again, and the last two notebooks create the AI Search index, upload the JSON to AI Search, configure the index with vector and semantic profiles, and fetch all records to create embeddings with Azure OpenAI and store them back in AI Search. Due to the great number of documents, we apply rate-limit evasion (back-off), and you can be sure this takes almost 30 minutes to conclude for around 9,500 records.

Users Dataset

Most of the workflow is similar for the users index and embeddings. The difference is that our users are stored, and updated with new ones, in an Azure SQL database. Since we utilize pipelines, and Microsoft Fabric natively connects to Azure SQL, our activity is in fact a Copy task, with a query to bring the SQL data:

```sql
SELECT u.UserId, u.Age, STRING_AGG(g.GenreName, ',') AS Genres
FROM Users u
JOIN UsersGenres ug ON u.UserId = ug.UserId
JOIN Genres g ON ug.GenreId = g.GenreId
GROUP BY u.UserId, u.Age
```

This SQL query selects data from three related tables: Users, UsersGenres, and Genres. Specifically, it returns a list of users (by UserId and Age) along with a comma-separated list of all the genres associated with each user. The STRING_AGG function concatenates the GenreName values into a single string, separated by commas. The JOIN operations link the tables together on common fields: the UserId in the Users and UsersGenres tables, and the GenreId in the UsersGenres and Genres tables. The GROUP BY clause groups the results by both UserId and Age, so each row in the output represents a unique combination of these two fields.
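For instance, a user who picked Fantasy and Science Fiction comes back as a single row (illustrative values):

```
UserId  Age  Genres
3       34   Fantasy,Science Fiction
```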
So it is a simpler process after all, and due to the small number of users (I could only sign up 5-6 imaginary accounts!), it is also quicker.

So what have we done so far? Let's break it down, shall we?

Process

- Created the main infrastructure using Terraform – available on GitHub.
- The infra provides a web UI where we register as users, select favorite book genres, and log in to a dashboard with access to an AI assistant.
- The database used to store the users' info is Azure SQL.
- The infrastructure also consists of Azure Key Vault, Azure Container Registry, Azure AI Search and Azure Web Apps. A separate Azure OpenAI resource is already in place.
- The backend creates a join table to store UserId with Genres, so later it is easier to create personalized recommendations.
- We got a books dataset with [id, Author, Title, Genres, Rating] fields and uploaded it to Azure Blob Storage.
- We activated a trial (or just have an available) license for Microsoft Fabric capacity.
- We created Jupyter notebooks to clean the source books dataset, transform it and store it as JSON.
- We created a Fabric pipeline integrating these notebooks, plus new ones that create a books-index in Azure AI Search, configure it with vector and semantic profiles, and upload all JSON records into it.
- The pipeline continues with additional notebooks that create embeddings with Azure OpenAI and store these embeddings back in Azure AI Search.
- A new pipeline has been deployed that gets the users' data, with a query that combines the Genres information with the Users from the Azure SQL database, and stores it as JSON.
- The users pipeline creates and configures a new users-index in Azure AI Search, configures vector and semantic profiles, creates embeddings for all data with Azure OpenAI, and stores the embeddings back in the index.

Now we are left with the backend details and maybe some minor changes to the frontend. As you will see, the GitHub repo contains all the required files to create a Docker image, push it to the Container Registry and create a web app in Azure Web Apps. Use:

docker build -t backend .

and tag and push:

docker tag backend {acrname}.azurecr.io/backend:v1
docker push {acrname}.azurecr.io/backend:v1
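One small detail: the push only succeeds after you authenticate against the registry, e.g. with the Azure CLI (registry name illustrative, matching the placeholder above):

az acr login --name {acrname}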
The overall Architecture is like this: The only variables needed for the Backend Web App are the Key Vault name and the User Assigned Managed Identity ID. All access to other services (SQL, Storage Account, AI Search, Azure OpenAI) goes through Key Vault Secrets. Let's have a quick look at our Backend:

import dotenv from 'dotenv';
import express from 'express';
import sql from 'mssql';
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import multer from 'multer';
import azureStorage from 'azure-storage';
import getStream from 'into-stream';
import cors from 'cors';
import rateLimit from 'express-rate-limit'; // ES module import; require() is not available in an ES module file
import { SecretClient } from "@azure/keyvault-secrets";
import { DefaultAzureCredential } from "@azure/identity";
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';
import { SearchClient } from '@azure/search-documents';
import bodyParser from 'body-parser';

dotenv.config();
const app = express();
app.use(cors({ origin: '*' }));
app.use((req, res, next) => {
  res.setHeader('X-Content-Type-Options', 'nosniff');
  next();
});
app.use(express.json());

// set up rate limiter: maximum of 100 requests per 15 minutes
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // max 100 requests per windowMs
});
// apply rate limiter to all requests
app.use(limiter);

app.get('/:path', function(req, res) {
  let path = req.params.path;
  // isValidPath is a path-whitelisting helper (not shown here) guarding res.sendFile
  if (isValidPath(path)) res.sendFile(path);
});

const vaultName = process.env.AZURE_KEY_VAULT_NAME;
const vaultUrl = `https://${vaultName}.vault.azure.net`;
const credential = new DefaultAzureCredential({
  managedIdentityClientId: process.env.MANAGED_IDENTITY_CLIENT_ID, // Use environment variable for managed identity client ID
});
const secretClient = new SecretClient(vaultUrl, credential);

async function getSecret(secretName) {
  const secret = await secretClient.getSecret(secretName);
  return secret.value;
}

const inMemoryStorage = multer.memoryStorage();
const uploadStrategy = multer({ storage: inMemoryStorage }).single('photo');

let sqlConfig;
let storageAccountName;
let azureStorageConnectionString;
let jwtSecret;
let searchEndpoint;
let searchApiKey;
let openaiEndpoint;
let openaiApiKey;

async function initializeApp() {
  sqlConfig = {
    user: await getSecret("sql-admin-username"),
    password: await getSecret("sql-admin-password"),
    database: await getSecret("sql-database-name"),
    server: await getSecret("sql-server-name"),
    options: { encrypt: true, trustServerCertificate: false }
  };
  storageAccountName = await getSecret("storage-account-name");
  azureStorageConnectionString = await getSecret("storage-account-connection-string");
  jwtSecret = await getSecret("jwt-secret");
  searchEndpoint = await getSecret("search-endpoint");
  searchApiKey = await getSecret("search-apikey");
  openaiEndpoint = await getSecret("openai-endpoint");
  openaiApiKey = await getSecret("openai-apikey");
  // Debug logging of fetched secrets, intentionally left commented out
  //console.log("SQL Config:", sqlConfig);
  // console.log("Storage Account Name:", storageAccountName);
  // console.log("Azure Storage Connection String:", azureStorageConnectionString);
  // console.log("JWT Secret:", jwtSecret);
  // console.log("Search Endpoint:", searchEndpoint);
  // console.log("Search API Key:", searchApiKey);
  // console.log("OpenAI Endpoint:", openaiEndpoint);
  // console.log("OpenAI API Key:", openaiApiKey);

  // Initialize OpenAI and Azure Search clients
  const openaiClient = new OpenAIClient(openaiEndpoint, new AzureKeyCredential(openaiApiKey));
  const userSearchClient = new SearchClient(searchEndpoint, 'users-index', new
AzureKeyCredential(searchApiKey));
  const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));

  // Start server
  const PORT = process.env.PORT || 3001;
  app.listen(PORT, () => {
    console.log(`Server is running on port ${PORT}`);
  }).on('error', error => {
    console.error("Error initializing application:", error);
  });
}

initializeApp().catch(error => {
  console.error("Error initializing application:", error);
});

// Upload photo endpoint
app.post('/uploadphoto', uploadStrategy, (req, res) => {
  if (!req.file) {
    return res.status(400).send('No file uploaded.');
  }
  const blobName = `userphotos/${Date.now()}_${req.file.originalname}`;
  const stream = getStream(req.file.buffer);
  const streamLength = req.file.buffer.length;
  const blobService = azureStorage.createBlobService(azureStorageConnectionString);
  blobService.createBlockBlobFromStream('pics', blobName, stream, streamLength, err => {
    if (err) {
      console.error(err);
      res.status(500).send('Error uploading the file');
    } else {
      const photoUrl = `https://${storageAccountName}.blob.core.windows.net/pics/${blobName}`;
      res.status(200).send({ photoUrl });
    }
  });
});

// Register endpoint
app.post('/register', uploadStrategy, async (req, res) => {
  const { firstName, lastName, username, password, age, emailAddress, genres } = req.body;
  if (!password) {
    return res.status(400).send({ message: 'Password is required' });
  }
  let photoUrl = '';
  if (req.file) {
    const blobName = `userphotos/${Date.now()}_${req.file.originalname}`;
    const stream = getStream(req.file.buffer);
    const streamLength = req.file.buffer.length;
    const blobService = azureStorage.createBlobService(azureStorageConnectionString);
    await new Promise((resolve, reject) => {
      blobService.createBlockBlobFromStream('pics', blobName, stream, streamLength, err => {
        if (err) {
          console.error(err);
          reject(err);
        } else {
          photoUrl = `https://${storageAccountName}.blob.core.windows.net/pics/${blobName}`;
          resolve();
        }
      });
    });
  }
  const hashedPassword = await bcrypt.hash(password, 10);
  try {
    let pool = await sql.connect(sqlConfig);
    let result = await pool.request()
      .input('username', sql.NVarChar, username)
      .input('password', sql.NVarChar, hashedPassword)
      .input('firstname', sql.NVarChar, firstName)
      .input('lastname', sql.NVarChar, lastName)
      .input('age', sql.Int, age)
      .input('emailAddress', sql.NVarChar, emailAddress)
      .input('photoUrl', sql.NVarChar, photoUrl)
      .query(`
        INSERT INTO Users (Username, PasswordHash, FirstName, LastName, Age, EmailAddress, PhotoUrl)
        VALUES (@username, @password, @firstname, @lastname, @age, @emailAddress, @photoUrl);
        SELECT SCOPE_IDENTITY() AS UserId;
      `);
    const userId = result.recordset[0].UserId;
    if (genres && genres.length > 0) {
      const genreNames = genres.split(','); // Assuming genres are sent as a comma-separated string
      for (const genreName of genreNames) {
        let genreResult = await pool.request()
          .input('genreName', sql.NVarChar, genreName.trim())
          .query(`
            IF NOT EXISTS (SELECT 1 FROM Genres WHERE GenreName = @genreName)
            BEGIN
              INSERT INTO Genres (GenreName) VALUES (@genreName);
            END
            SELECT GenreId FROM Genres WHERE GenreName = @genreName;
          `);
        const genreId = genreResult.recordset[0].GenreId;
        await pool.request()
          .input('userId', sql.Int, userId)
          .input('genreId', sql.Int, genreId)
          .query('INSERT INTO UsersGenres (UserId, GenreId) VALUES (@userId, @genreId)');
      }
    }
    res.status(201).send({ message: 'User registered successfully' });
  } catch (error) {
    console.error(error);
    res.status(500).send({ message: 'Error registering user' });
  }
});

// Login endpoint
app.post('/login', async (req, res) => {
  try {
    let pool = await sql.connect(sqlConfig);
    let result = await pool.request()
      .input('username', sql.NVarChar, req.body.username)
      // Fixed: the parameter must be referenced as @username in the query
      .query('SELECT UserId, PasswordHash FROM Users WHERE Username = @username');
    if (result.recordset.length === 0) {
      return res.status(401).send({ message: 'Invalid username or password' });
    }
    const user = result.recordset[0];
    const validPassword = await bcrypt.compare(req.body.password, user.PasswordHash);
    if (!validPassword) {
      return res.status(401).send({ message: 'Invalid username or password' });
    }
    const token = jwt.sign({ UserId: user.UserId }, jwtSecret, { expiresIn: '1h' });
    res.send({ token: token, UserId: user.UserId });
  } catch (error) {
    console.error(error);
    res.status(500).send({ message: 'Error logging in' });
  }
});

// Get user data endpoint
app.get('/user/:UserId', async (req, res) => {
  try {
    let pool = await sql.connect(sqlConfig);
    let result = await pool.request()
      .input('UserId', sql.Int, req.params.UserId)
      .query('SELECT Username, FirstName, LastName, Age, EmailAddress, PhotoUrl FROM Users WHERE UserId = @UserId');
    if (result.recordset.length === 0) {
      return res.status(404).send({ message: 'User not found' });
    }
    const user = result.recordset[0];
    res.send(user);
  } catch (error) {
    console.error(error);
    res.status(500).send({ message: 'Error fetching user data' });
  }
});

// AI Assistant endpoint for book questions and recommendations
app.post('/ai-assistant', async (req, res) => {
  const { query, userId } = req.body;
  console.log('Received request body:', req.body);
  console.log('Extracted userId:', userId);
  try {
    if (!userId) {
      console.error('User ID is missing from the request.');
      return res.status(400).send({ message: 'User ID is required.' });
    }
    //console.log(`Received request for user ID: ${userId}`);
    // Retrieve user data
    let pool = await sql.connect(sqlConfig);
    let userResult = await pool.request()
      .input('UserId', sql.Int, userId)
      .query('SELECT * FROM Users WHERE UserId = @UserId');
    const user = userResult.recordset[0];
    if (!user) {
      console.error(`User with ID ${userId} not found.`);
      return res.status(404).send({ message: `User with ID ${userId} not found.` });
    }
    console.log(`User data: ${JSON.stringify(user)}`);
    if (query.toLowerCase().includes("recommendation")) {
      // Fetch user genres
      const userGenresResult = await pool.request()
        .input('UserId', sql.Int, userId)
        .query('SELECT GenreName FROM Genres g JOIN UsersGenres ug ON g.GenreId = ug.GenreId WHERE ug.UserId = @UserId');
      const userGenres = userGenresResult.recordset.map(record => record.GenreName).join(' ');
      //console.log(`User genres: ${userGenres}`);
      // Fetch user embedding from search index
      const userSearchClient = new SearchClient(searchEndpoint, 'users-index', new AzureKeyCredential(searchApiKey));
      const userEmbeddingResult = await userSearchClient.getDocument(String(user.UserId));
      const userEmbedding = userEmbeddingResult.Embedding;
      //console.log(`User embedding result: ${JSON.stringify(userEmbeddingResult)}`);
      //console.log(`User embedding: ${userEmbedding}`);
      if (!userEmbedding || userEmbedding.length === 0) {
        console.error('User embedding not found.');
        return res.status(500).send({ message: 'User embedding not found.'
});
      }
      // Search for recommendations
      const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));
      const searchResponse = await bookSearchClient.search("*", {
        vectors: [{ value: userEmbedding, fields: ["Embedding"], kNearestNeighborsCount: 5 }],
        includeTotalCount: true,
        select: ["Title", "Author"]
      });
      const recommendations = [];
      for await (const result of searchResponse.results) {
        recommendations.push({ title: result.document.Title, author: result.document.Author, score: result.score });
      }
      // Limit recommendations to top 5
      const topRecommendations = recommendations.slice(0, 5);
      return res.json({
        response: "Here are some personalized recommendations for you:",
        recommendations: topRecommendations
      });
    } else {
      // General book query
      const openaiClient = new OpenAIClient(openaiEndpoint, new AzureKeyCredential(openaiApiKey));
      const deploymentId = "gpt"; // Replace with your deployment ID
      // Extract rating and genre from query
      const ratingMatch = query.match(/rating over (\d+(\.\d+)?)/);
      const genreMatch = query.match(/genre (\w+)/i);
      const rating = ratingMatch ? parseFloat(ratingMatch[1]) : null;
      const genre = genreMatch ? genreMatch[1] : null;
      if (rating && genre) {
        // Search for books with the specified genre and rating
        const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));
        const searchResponse = await bookSearchClient.search("*", {
          filter: `Rating gt ${rating} and Genres/any(g: g eq '${genre}')`,
          top: 5,
          select: ["Title", "Author", "Rating"]
        });
        const books = [];
        for await (const result of searchResponse.results) {
          books.push({ title: result.document.Title, author: result.document.Author, rating: result.document.Rating });
        }
        const bookResponse = books.map(book => `${book.title} by ${book.author} with rating ${book.rating}`).join('\n');
        return res.json({ response: `Here are 5 books with rating over ${rating} in ${genre} genre:\n${bookResponse}` });
      } else {
        // Handle general queries about books using OpenAI with streaming chat completions
        const events = await openaiClient.streamChatCompletions(
          deploymentId,
          [
            { role: "system", content: "You are a helpful assistant that answers questions about books and provides personalized recommendations." },
            { role: "user", content: query }
          ],
          { maxTokens: 350 }
        );
        let aiResponse = "";
        for await (const event of events) {
          for (const choice of event.choices) {
            aiResponse += choice.delta?.content || '';
          }
        }
        return res.json({ response: aiResponse });
      }
    }
  } catch (error) {
    console.error('Error processing AI Assistant request:', error);
    return res.status(500).send({ message: 'Error processing your request.' });
  }
});

As you can see, apart from the registration and login endpoints we have the ai-assistant endpoint. Users are able not only to get personalized recommendations when the word "recommendations" is in the chat, but also information on Genres and ratings, again when these words are in the chat request. They can also chat regularly with the Assistant about books and literature! The UI needs some fine-tuning; we can add Chat History, and you are welcome to do it! [Done] Please find the code on GitHub, and in case you need help let me know!

Conclusion
We just built our own Web AI Assistant with an enhanced recommendation engine, utilizing a number of Azure and Microsoft services. It is important to prepare well ahead for such a project, load yourself with patience, and be prepared to make mistakes and learn!
I reached 15 Docker Images for the backend to get basic functionality! But hey, I did it for everyone, so you can just grab it, enjoy it, and even make it better! Thank you for staying up to this point!

References
Azure SDK for JavaScript
Azure AI Search
Create a Vector Index
Generate Embeddings
Fabric: Introduction to deployment pipelines
Develop, execute, and manage Microsoft Fabric notebooks

INTRO TO MICROSOFT COPILOT FOR SECURITY
All you need to know to deploy your own Copilot for Security instance. Copilot for Security is a generative AI security product that empowers security and IT professionals to respond to cyber threats, process signals, and assess risk exposure at the speed and scale of AI.

Minimum requirements
Subscription: In order to purchase security compute units, you need to have an Azure subscription. For more information, see Create your Azure free account.
Security compute units: Security compute units (SCUs) are the required units of resources needed for dependable and consistent performance of Microsoft Copilot for Security. Copilot for Security is sold in a provisioned capacity model and is billed by the hour. You can provision SCUs and increase or decrease them at any time. Billing is calculated on an hourly basis with a minimum of one hour. For more information, see Microsoft Copilot for Security pricing.
Capacity: Capacity, in the context of Copilot for Security, is an Azure resource that contains SCUs. SCUs are provisioned for Copilot for Security. You can easily manage capacity by increasing or decreasing provisioned SCUs within the Azure portal or the Copilot for Security portal. Copilot for Security provides a usage monitoring dashboard for Copilot owners, allowing them to track usage over time and make informed decisions about capacity provisioning. For more information, see Managing usage.

Provisioning
We have 2 options to provision Compute Units: directly from the Copilot for Security Portal, or from our Azure Subscription. The second option is to simply head over to Azure, search for Copilot for Security, and create the resource, which in fact represents the billable SCUs for the Directory the Subscription is associated with. The first is the recommended option, where we go through the actual portal, in which we can later manage Access and see our provisioned SCUs, following a wizard type of activation.

Configure
Once we complete the wizard and press Finish we are ready to start working with our Copilot for Security. Observe the information on the Home screen of the portal, with links to Training, Prompts and Documentation.

Authentication & Roles
It is important to have a good understanding of the Roles and permissions that apply to Copilot for Security.
Copilot for Security roles: Copilot for Security introduces two roles that function like access groups but aren't Microsoft Entra ID roles. Instead, they only control access to the capabilities of the Copilot for Security platform: Copilot owner and Copilot contributor. By default, all users in the Microsoft Entra tenant are given Copilot contributor access.
Microsoft Entra roles: The following Microsoft Entra roles automatically inherit Copilot owner access: Security Administrator and Global Administrator.
Have a look at the relevant documentation page explaining everything about Roles & permissions: Understand authentication in Microsoft Copilot for Security | Microsoft Learn.

Copilot in action
Once we have a good understanding and we have built our Team, we can start working with Copilot for Security within the Defender Dashboards from https://security.microsoft.com. Most Dashboards offer the interactive experience that helps us understand different signals, take potential actions and get explanatory suggestions from the Copilot. “Copilot for Threat Analytics is designed to assist users in understanding and responding to security threats. It provides evidence-based, objective, and actionable insights derived from security data.
The purpose is to help users make informed decisions about their security posture and response strategies. It does this by analyzing data from various sources, identifying potential threats, and providing detailed information about those threats. This includes information about the nature of the threat, its potential impact, and possible mitigation strategies. The goal is to provide users with the information they need to effectively manage and respond to security threats.” (generated from Copilot for Security)

Especially in Advanced Hunting, Copilot offers a preset of KQL Queries that we can run directly or load into our Editor for further editing. Another powerful capability lies inside the Incidents that we get from Defender for Endpoint. Just click on an incident and Copilot will provide information and investigation details, along with recommendations if available.

Intune – Copilot the Endpoints
Yes, you are reading correctly! Once your Copilot Platform is ready you are in for a nice surprise! In Endpoint Management, or Intune, you will find Copilot ready to assist with your Endpoint Management tasks! It is an integration in Preview, and I believe it is going to be a great addition for Endpoint Administrators. Here is an example where we are getting a summary of our Windows policy in Intune: If we wanted to list the functionality, here are the main points:
Input Processing: When you ask Copilot a question in Intune, it sends the query to Copilot for Security.
Data Sources: Copilot for Security uses data from your tenant and authoritative Microsoft documentation sources.
Response Generation: It processes the input and generates a response, which is then displayed in Intune.
Session Tracking: You can review all interactions in Copilot for Security by checking your sessions.
Privacy and Verification: Always double-check Copilot's responses, as it may not always be accurate.
Partial Information: In some cases, Copilot might provide partial information due to large data volumes.
It is quite important to pay attention to the Responsible use of AI. A Frequently Asked Questions page is available for everyone as well. “Copilot for Security is a natural language, AI-powered security analysis tool that assists security professionals in responding to threats quickly, processing signals at machine speed, and assessing risk exposure in minutes. It draws context from plugins and data to answer security-related prompts so that security professionals can help keep their organizations secure. Users can collect the responses that they find useful from Copilot for Security and pin them to the pinboard for future reference.” (source: Microsoft Responsible use of AI FAQ)
But that's not all. Apart from the Defender portal, a good use of Copilot for Security comes within the Copilot for Security Platform. We can find a wide range of Prompts, and we can utilize Plugins and even build our own. We can upload files that provide guidance to the Copilot; examples of files you can upload are your organization's policy and compliance documents, investigation and response procedures, and templates. Integrating this wealth of knowledge into Copilot allows it to reason over the knowledge base or documents and generate responses that are more relevant, specific, and customized to your operational needs (source: Microsoft Documentation). The current library of Plugins is quite extensive, but a key capability is the fact that we can create our own.
You can create new plugins to extend what Copilot can do by following the steps in Create new plugins. To add and manage your custom plugins in Copilot for Security, follow the steps in Manage custom plugins (source: Microsoft Documentation).

Final thoughts
Microsoft has significantly impacted the cybersecurity landscape with Copilot for Security. This powerful tool provides an instant upgrade for organizations, enabling IT and security teams to work more efficiently, prioritize findings, and take action without exhausting investigation efforts. Copilot for Security serves as a valuable AI expert assistant, guiding the security landscape in the right direction. It also acts as an upskilling platform, presenting a positive challenge for all involved to embrace the AI era through the lens of cybersecurity excellence. My personal testimony through the experience so far is that the Product Team did an excellent job building an AI Security platform that makes the difference and combines the best of the technology at hand with our needs for secure environments, while keeping a "learn while doing" pattern as usual. Don't forget to download the Security Copilot diagram and get a high-level overview of the architecture.

Azure AI Services on AKS
Host your AI Language Containers and Web Apps on an Azure Kubernetes Cluster: a Flask Web App for Sentiment Analysis

In this post, we'll explore how to integrate Azure AI Containers into our applications running on Azure Kubernetes Service (AKS). Azure AI Containers enable you to harness the power of Azure's AI services directly within your AKS environment, giving you complete control over where your data is processed. By streamlining the deployment process and ensuring consistency, Azure AI Containers simplify the integration of cutting-edge AI capabilities into your applications. Whether you're developing tools for education, enhancing accessibility, or creating innovative user experiences, this guide will show you how to seamlessly incorporate Azure's AI Containers into your web apps running on AKS.

Why Containers?
Azure AI services provide several Docker containers that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services. Azure AI Containers offer:
Immutable infrastructure: Consistent and reliable system parameters for DevOps teams, with flexibility to adapt and avoid configuration drift.
Data control: Choose where data is processed, essential for data residency or security requirements.
Model update control: Flexibility in versioning and updating deployed models.
Portable architecture: Deploy on Azure, on-premises, or at the edge, with Kubernetes support.
High throughput/low latency: Scale for demanding workloads by running Azure AI services close to data and logic.
Scalability: Built on scalable cluster technology like Kubernetes for high availability and adaptable performance.
Source: https://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-container-support

Workshop
Our Solution will utilize the Azure Language AI Service with the Text Analytics container for Sentiment Analysis. We will build a Python Flask Web App, containerize it with Docker and push it to Azure Container Registry. An AKS Cluster, which we will create, will pull the Flask Image from the registry along with the Microsoft-provided Sentiment Analysis Image directly from mcr.microsoft.com, and we will make all required configurations on our AKS Cluster: an Ingress Controller with an SSL Certificate, presenting a simple Web UI where we write our text, submit it for analysis, and get the results. Our Web UI will look like this:

Azure Kubernetes Cluster, Azure Container Registry & Azure Text Analytics
These are our main resources, plus of course a Virtual Network for the AKS, which is deployed automatically. Our Solution is hosted entirely on AKS, with a Let's Encrypt Certificate we will create separately offering secure HTTP, and an Ingress Controller publicly serving our Flask UI, which calls the Sentiment Analysis service via REST, also hosted on AKS. The difference is that Flask is built from a custom Docker Image pulled from Azure Container Registry, while the Sentiment Analysis container is a ready-made Microsoft Image which we pull directly. In case your Azure Subscription does not have an AI Service, you have to create a Language Service of Text Analytics using the Portal, due to the requirement to accept the Responsible AI terms. For more detail go to https://go.microsoft.com/fwlink/?linkid=2164190 .
My preference as a best practice is to create an AKS Cluster with the default System Node Pool and add an additional User Node Pool to deploy my Apps, but it is really a matter of preference at the end of the day. So let's start deploying! Start from your terminal by logging in with az login and set your Subscription with az account set --subscription 'YourSubName'

## Change the values in < > with your values and remove < >!
## Create the AKS Cluster
az aks create \
  --resource-group <your-resource-group> \
  --name <your-cluster-name> \
  --node-count 1 \
  --node-vm-size standard_a4_v2 \
  --nodepool-name agentpool \
  --generate-ssh-keys \
  --nodepool-labels nodepooltype=system \
  --no-wait \
  --aks-custom-headers AKSSystemNodePool=true \
  --network-plugin azure

## Add a User Node Pool
az aks nodepool add \
  --resource-group <your-resource-group> \
  --cluster-name <your-cluster-name> \
  --name userpool \
  --node-count 1 \
  --node-vm-size standard_d4s_v3 \
  --no-wait

## Create Azure Container Registry
az acr create \
  --resource-group <your-resource-group> \
  --name <your-acr-name> \
  --sku Standard \
  --location northeurope

## Attach ACR to AKS
az aks update -n <your-cluster-name> -g <your-resource-group> --attach-acr <your-acr-name>

The Language Service is created from the Portal for the reasons we explained earlier. Search for Language and create a new Language service, leaving the default selections (no Custom QnA, no Custom Text Classification) on the F0 (Free) SKU. You may see a VNET menu appear in the Networking tab; just ignore it. As long as you leave the default Public Access enabled, it won't create a Virtual Network. The presence of the cloud resource is for Billing and Metrics. A Flask Web App has a directory structure where we store index.html in the templates directory and our CSS and images in the static directory.
So in essence it looks like this: -sentiment-aks --flaskwebapp app.py requirements.txt Dockerfile ---static 1.style.css 2.logo.png ---templates 1.index.html The requirements.txt should have the needed packages : ## requirements.txt Flask==3.0.0 requests==2.31.0 ## index.html <!DOCTYPE html> <html> <head> <title>Sentiment Analysis App</title> <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}"> </head> <body> <img src="{{ url_for('static', filename='logo.png') }}" class="icon" alt="App Icon"> <h2>Sentiment Analysis</h2> <form id="textForm"> <textarea name="text" placeholder="Enter text here..."></textarea> <button type="submit">Analyze</button> </form> <div id="result"></div> <script> document.getElementById('textForm').onsubmit = async function(e) { e.preventDefault(); let formData = new FormData(this); let response = await fetch('/analyze', { method: 'POST', body: formData }); let resultData = await response.json(); let results = resultData.results; if (results) { let displayText = `Document: ${results.document}\nSentiment: ${results.overall_sentiment}\n`; displayText += `Confidence - Positive: ${results.confidence_positive}, Neutral: ${results.confidence_neutral}, Negative: ${results.confidence_negative}`; document.getElementById('result').innerText = displayText; } else { document.getElementById('result').innerText = 'No results to display'; } }; </script> </body> </html> ## style.css body { font-family: Arial, sans-serif; background-color: #f0f8ff; /* Light blue background */ margin: 0; padding: 0; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; } h2 { color: #0277bd; /* Darker blue for headings */ } .icon { height: 100px; /* Adjust the size as needed */ margin-top: 20px; /* Add some space above the logo */ } form { background-color: white; padding: 20px; border-radius: 8px; width: 300px; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1); } textarea { width: 100%; box-sizing: border-box; height: 100px; margin-bottom: 10px; border: 1px solid #0277bd; border-radius: 4px; padding: 10px; } button { background-color: #029ae4; /* Blue button */ color: white; border: none; padding: 10px 15px; border-radius: 4px; cursor: pointer; } button:hover { background-color: #0277bd; } #result { margin-top: 20px; } And here is the most interesting file, our app.py. Notice the use of a REST API call directly to the Sentiment Analysis endpoint which we will declare in the YAML file for the Kubernetes deployment. 
## app.py
from flask import Flask, render_template, request, jsonify
import requests
import os

app = Flask(__name__)

@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')  # HTML file with input form

@app.route('/analyze', methods=['POST'])
def analyze():
    # Extract text from the form submission
    text = request.form['text']
    if not text:
        return jsonify({'error': 'No text provided'}), 400
    # Fetch API endpoint and key from environment variables
    endpoint = os.environ.get("CONTAINER_API_URL")
    # Ensure required configurations are available
    if not endpoint:
        return jsonify({'error': 'API configuration not set'}), 500
    # Construct the full URL for the sentiment analysis API
    url = f"{endpoint}/text/analytics/v3.1/sentiment"
    headers = { 'Content-Type': 'application/json' }
    body = { 'documents': [{'id': '1', 'language': 'en', 'text': text}] }
    # Make the HTTP POST request to the sentiment analysis API
    response = requests.post(url, json=body, headers=headers)
    if response.status_code != 200:
        return jsonify({'error': 'Failed to analyze sentiment'}), response.status_code
    # Process the API response
    data = response.json()
    results = data['documents'][0]
    detailed_results = {
        'document': text,
        'overall_sentiment': results['sentiment'],
        'confidence_positive': results['confidenceScores']['positive'],
        'confidence_neutral': results['confidenceScores']['neutral'],
        'confidence_negative': results['confidenceScores']['negative']
    }
    # Return the detailed results to the client
    return jsonify({'results': detailed_results})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001, debug=False)

And finally we need a Dockerfile; pay attention to have it on the same level as your app.py file.

## Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.10-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 5001 available to the world outside this container
EXPOSE 5001

# Define environment variable
ENV CONTAINER_API_URL="http://sentiment-service/"

# Run app.py when the container launches
CMD ["python", "app.py"]

Our Web UI is ready to build! We need Docker running on our development environment, and we need to log in to Azure Container Registry:

## Login to ACR
az acr login -n <your-acr-name>

## Build and Tag our image
docker build -t <acr-name>.azurecr.io/flaskweb:latest .
docker push <acr-name>.azurecr.io/flaskweb:latest

You can go to the Portal and, from Azure Container Registry, Repositories, you will find our new Image ready to be pulled!

Kubernetes Deployments
Let's start deploying our AKS services! As we already know, we can pull the Sentiment Analysis Container from Microsoft directly, and that's what we are going to do with the following tasks. First, we need to log in to our AKS Cluster, so from the Azure Portal head over to your AKS Cluster and click the Connect link on the menu. Azure will provide the command to connect from our terminal: select Azure CLI and just copy-paste the commands to your Terminal. Now we can run kubectl commands and manage our Cluster and AKS Services. We need a YAML file for each service we are going to build, including the Certificate at the end. For now, let's create the Sentiment Analysis Service, as a Container, with the following file.
Pay attention, as you need to get the Language Service Key and Endpoint from the Text Analytics resource we created earlier, and in the nodeSelector block we must enter the name of the User Node Pool we created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      containers:
      - name: sentiment
        image: mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:latest
        ports:
        - containerPort: 5000
        resources:
          limits:
            memory: "8Gi"
            cpu: "1"
          requests:
            memory: "8Gi"
            cpu: "1"
        env:
        - name: Eula
          value: "accept"
        - name: Billing
          value: "https://<your-Language-Service>.cognitiveservices.azure.com/"
        - name: ApiKey
          value: "xxxxxxxxxxxxxxxxxxxx"
      nodeSelector:
        agentpool: userpool
---
apiVersion: v1
kind: Service
metadata:
  name: sentiment-service
spec:
  selector:
    app: sentiment
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  type: ClusterIP

Save the file and run from your Terminal:

kubectl apply -f sentiment-deployment.yaml

In a few seconds you can observe the service running from the AKS Services and Ingresses menu. Let's continue and bring in our Flask Container now. In the same manner, create a new YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: <your-ACR-name>.azurecr.io/flaskweb:latest
        ports:
        - containerPort: 5001
        env:
        - name: CONTAINER_API_URL
          value: "http://sentiment-service:5000"
        resources:
          requests:
            cpu: "500m"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "512Mi"
      nodeSelector:
        agentpool: userpool
---
apiVersion: v1
kind: Service
metadata:
  name: flask-lb
spec:
  type: LoadBalancer
  selector:
    app: flask
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5001

kubectl apply -f flask-service.yaml

Observe the Sentiment Analysis environment value. It directly uses the Service name of our Sentiment Analysis container, as AKS has its own DNS resolver for easy communication between services. In fact, if we hit the Service Public IP we will have HTTP access to the Web UI. But let's see how we can import our Certificate. We won't describe how to get a Certificate. All we need is the PEM files, meaning the privatekey.pem and the cert.pem. If we have a PFX we can export them with OpenSSL. Once we have these files in place we will create a secret in AKS that will hold our Certificate key and file. We just need to run this command from within the directory of our PEM files:

kubectl create secret tls flask-app-tls --key privkey.pem --cert cert.pem --namespace default

Once we create our Secret we will deploy a Kubernetes Ingress Controller (NGINX is fine) which will manage HTTPS and point to the Flask Service. Remember to add an A record to your DNS registrar with the DNS Hostname you are going to use and the Public IP, once you see the IP Address:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-app-ingress
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
  - hosts:
    - your.host.domain
    secretName: flask-app-tls
  rules:
  - host: your.host.domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-lb
            port:
              number: 80

kubectl apply -f flask-app-ingress.yaml

From AKS – Services and Ingresses – Ingresses you will see the assigned Public IP. Add it to your DNS, and once the Name Servers are updated you can hit your Hostname using HTTPS!
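Once DNS resolves, a quick smoke test of the full path can be run from any machine with Python and the requests package; the hostname below is a placeholder for your own:

import requests

# Post a sample text to the Flask /analyze endpoint exposed through the ingress
response = requests.post(
    "https://your.host.domain/analyze",
    data={"text": "Azure Kubernetes Service makes this deployment painless!"},
)
response.raise_for_status()
print(response.json())  # expect overall_sentiment plus the three confidence scores

A successful JSON response here confirms the whole chain: ingress, Flask Service, and the sentiment container behind it.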
Final Thoughts
As we’ve explored, the combination of Azure AI Containers and AKS offers a powerful and flexible solution for deploying AI-driven applications in cloud-native environments. By leveraging these technologies, you gain granular control over your data and model deployments, while maintaining the scalability and portability essential for modern applications. Remember, this is just the starting point. As you delve deeper, consider the specific requirements of your project and explore the vast possibilities that Azure AI Containers unlock. Embrace the power of AI within your AKS deployments, and you’ll be well on your way to building innovative, intelligent solutions that redefine what’s possible in the cloud.
Azure Text to Speech with Container Apps

Imagine interacting with not just one, but three distinct speaking agents, each bringing their unique flair to life right through your React web UI. Whether it’s getting the latest weather updates, catching up on breaking news, or staying on top of the Stock Exchange, our agents have got you covered. We’ve seamlessly integrated the Azure Speech SDK with a modular architecture and dynamic external API calls, creating an experience that’s as efficient as it is enjoyable. What sets this application apart is its versatility. Choose your preferred agent, like the News Agent, and watch as it transforms data fetched from a news API into speech, courtesy of the Azure Speech Service. The result? Crisp, clear audio that you can either savor live on the UI or download as an MP3 file for on-the-go convenience. But that’s not all. We’ve infused the application with a range of Python modules, each offering different voices, adding layers of personality and depth to the user experience. A testament to the power of AI Speech capabilities and modern web development, making it an exciting project for any IT professional to explore and build upon.

Requirements
Our Project is built with the help of VS Code, the Azure CLI, React and Python. We need an Azure Subscription to create Azure Container Apps and an Azure Speech service resource. We will build our Docker images directly in Azure Container Registry and create the relevant ingress configurations. Additional security should be taken into account, like Private Endpoints and Front Door, in case you want this as a production application.

Build
We are building a simple React Web UI and containerizing it, while the interesting part of our code lies in the modular design of the Python backend. It is also a Docker container Image, with a main application and three different Python modules, each one responsible for its respective agent. Visual elements make the UI quite friendly and simple to understand and use. The user selects an agent and presses the 'TALK' button. The backend fetches data from the selected API (GNews, Open-Meteo or Alpha Vantage), sends the text to the Azure Speech Service, and returns the audio to be played on the UI with a small player, providing also a download link for the MP3. Each time we select and activate a different agent the file is updated with the new audio.
Let's have a look on the React build: import React, { useState } from 'react'; import './App.css'; import logo from './logo.png'; import avatarRita from './assets/rita.png'; import avatarMark from './assets/mark.png'; import avatarMary from './assets/mary.png'; function App() { const [activeAgent, setActiveAgent] = useState(null); const [audioUrl, setAudioUrl] = useState(null); // Add this line to define audioUrl and setAudioUrl const [audioStream, setAudioStream] = useState(null); // Add this line to define audioStream and setAudioStream const [stockSymbol, setStockSymbol] = useState(''); const handleAgentClick = (agent) => { setActiveAgent(agent); }; /*const handleCommand = async (command) => { if (activeAgent === 'rita' && command === 'TALK') { try { // Default text to send const defaultText = { text: "Good Morning to everyone" }; const response = await fetch(`${process.env.REACT_APP_API_BASE_URL}/talk-to-rita`, { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(defaultText), // Include default text in the request body }); const data = await response.json();*/ const handleCommand = async (command) => { if (command === 'TALK') { let endpoint = ''; let bodyData = {}; if (activeAgent === 'rita') { endpoint = '/talk-to-rita'; bodyData = { text: "Good Morning to everyone" };// Add any specific data or parameters for RITA if required } else if (activeAgent === 'mark') { endpoint = '/talk-to-mark'; // Add any specific data or parameters for MARK if required } else if (activeAgent === 'mary' && stockSymbol) { endpoint = '/talk-to-mary'; bodyData = { symbol: stockSymbol }; } else { console.error('Agent not selected or stock symbol not provided'); return; } try { const response = await fetch(`${process.env.REACT_APP_API_BASE_URL}${endpoint}`, { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(bodyData),// Add body data if needed for the specific agent }); const data = await response.json(); if (response.ok) { const audioContent = base64ToArrayBuffer(data.audioContent); // Convert base64 to ArrayBuffer const blob = new Blob([audioContent], { type: 'audio/mp3' }); const url = URL.createObjectURL(blob); setAudioUrl(url); // Update state setAudioStream(url); } else { console.error('Response error:', data); } } catch (error) { console.error('Error:', error); } } }; // Function to convert base64 to ArrayBuffer function base64ToArrayBuffer(base64) { const binaryString = window.atob(base64); const len = binaryString.length; const bytes = new Uint8Array(len); for (let i = 0; i < len; i++) { bytes[i] = binaryString.charCodeAt(i); } return bytes.buffer; } return ( <div className="App"> <header className="navbar"> <span>DATE: {new Date().toLocaleDateString()}</span> <span> </span> </header> <h1>Welcome to MultiChat!</h1> <h2>Choose an agent to start the conversation</h2> <h3>Select Rita for Weather, Mark for Headlines and Mary for Stocks</h3> <img src={logo} className="logo" alt="logo" /> <div className="avatar-container"> <div className={`avatar ${activeAgent === 'rita' ? 'active' : ''}`} onClick={() => handleAgentClick('rita')}> <img src={avatarRita} alt="Rita" /> <p>RITA</p> </div> <div className={`avatar ${activeAgent === 'mark' ? 'active' : ''}`} onClick={() => handleAgentClick('mark')}> <img src={avatarMark} alt="Mark" /> <p>MARK</p> </div> <div className={`avatar ${activeAgent === 'mary' ? 
'active' : ''}`} onClick={() => handleAgentClick('mary')}> <img src={avatarMary} alt="Mary" /> <p>MARY</p> </div> </div> <div> {activeAgent === 'mary' && ( <input type="text" placeholder="Enter Stock Symbol" value={stockSymbol} onChange={(e) => setStockSymbol(e.target.value)} className="stock-input" /> )} </div> <div className="controls"> <button onClick={() => handleCommand('TALK')}>TALK</button> </div> <div className="audio-container"> {audioStream && <audio src={audioStream} controls autoPlay />} {audioUrl && ( <a href={audioUrl} download="speech.mp3" className="download-link"> Download MP3 </a> )} </div> </div> ); } export default App; The CSS is available on GitHub and this is the final result: Now the Python backend is the force that makes this Web App a real Application ! Let’s have a look on our app.py , and the 3 different modules of weather_service.py, news_service.py and stock_service.py. Keep in mind that the external APIs used here are free and we can adjust our calls to our needs, based on the documentation of each API and its capabilities. For example the Stock agent brings up a text box to write the Stock symbol which you want information from. import os import base64 from flask import Flask, request, jsonify import azure.cognitiveservices.speech as speechsdk import weather_service import news_service import stock_service from flask_cors import CORS app = Flask(__name__) CORS(app) # Azure Speech Service configuration using environment variables speech_key = os.getenv('SPEECH_KEY') speech_region = os.getenv('SPEECH_REGION') speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=speech_region) # Set the voice name (optional, remove if you want to use the default voice) speech_config.speech_synthesis_voice_name='en-US-JennyNeural' def text_to_speech(text, voice_name='en-US-JennyNeural'): try: # Set the synthesis output format to MP3 speech_config.set_speech_synthesis_output_format(speechsdk.SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3) # Set the voice name dynamically speech_config.speech_synthesis_voice_name = voice_name # Create a synthesizer with no audio output (null output) synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None) result = synthesizer.speak_text_async(text).get() # Check result if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted: print("Speech synthesized for text [{}]".format(text)) return result.audio_data # This is in MP3 format elif result.reason == speechsdk.ResultReason.Canceled: cancellation_details = result.cancellation_details print("Speech synthesis canceled: {}".format(cancellation_details.reason)) print("Error details: {}".format(cancellation_details.error_details)) return None except Exception as e: print(f"Error in text_to_speech: {e}") return None @app.route('/talk-to-rita', methods=['POST']) def talk_to_rita(): try: # Use default coordinates or get them from request latitude = 37.98 # Default latitude longitude = 23.72 # Default longitude data = request.json if data: latitude = data.get('latitude', latitude) longitude = data.get('longitude', longitude) # Get weather description using the weather service descriptive_text = weather_service.get_weather_description(latitude, longitude) if descriptive_text: audio_content = text_to_speech(descriptive_text, 'en-US-JennyNeural') # Use the US voice #audio_content = text_to_speech(descriptive_text) if audio_content: # Convert audio_content to base64 for JSON response audio_base64 = 
base64.b64encode(audio_content).decode('utf-8') return jsonify({"audioContent": audio_base64}), 200 else: return jsonify({"error": "Failed to synthesize speech"}), 500 else: return jsonify({"error": "Failed to get weather description"}), 500 except Exception as e: return jsonify({"error": str(e)}), 500 @app.route('/talk-to-mark', methods=['POST']) def talk_to_mark(): try: gnews_api_key = os.getenv('GNEWS_API_KEY') news_headlines = news_service.fetch_greek_news(gnews_api_key) # Set the language to Greek for MARK # speech_config.speech_synthesis_voice_name = 'el-GR-AthinaNeural' # Example Greek voice audio_content = text_to_speech(news_headlines, 'el-GR-NestorasNeural') # Use the Greek voice if audio_content: audio_base64 = base64.b64encode(audio_content).decode('utf-8') return jsonify({"audioContent": audio_base64}), 200 else: return jsonify({"error": "Failed to synthesize speech"}), 500 except Exception as e: return jsonify({"error": str(e)}), 500 @app.route('/talk-to-mary', methods=['POST']) def talk_to_mary(): try: data = request.json stock_symbol = data.get('symbol') # Extract the stock symbol from the request if not stock_symbol: return jsonify({"error": "No stock symbol provided"}), 400 api_key = os.getenv('ALPHAVANTAGE_API_KEY') # Get your Alpha Vantage API key from the environment variable stock_info = stock_service.fetch_stock_quote(api_key, stock_symbol) audio_content = text_to_speech(stock_info, 'en-US-JennyNeural') # Use an English voice for Mary if audio_content: audio_base64 = base64.b64encode(audio_content).decode('utf-8') return jsonify({"audioContent": audio_base64}), 200 else: return jsonify({"error": "Failed to synthesize speech"}), 500 except Exception as e: print(f"Error in /talk-to-mary: {e}") return jsonify({"error": str(e)}), 500 if __name__ == '__main__': app.run(debug=True) and here is the sample weather_service.py: import requests_cache import pandas as pd from retry_requests import retry import openmeteo_requests # Function to create descriptive text for each day's weather def create_weather_descriptions(df): descriptions = [] for index, row in df.iterrows(): description = (f"On {row['date'].strftime('%Y-%m-%d')}, the maximum temperature is {row['temperature_2m_max']}°C, " f"the minimum temperature is {row['temperature_2m_min']}°C, " f"and the total rainfall is {row['rain_sum']}mm.") descriptions.append(description) return descriptions # Setup the Open-Meteo API client with cache and retry on error cache_session = requests_cache.CachedSession('.cache', expire_after=3600) retry_session = retry(cache_session, retries=5, backoff_factor=0.2) openmeteo = openmeteo_requests.Client(session=retry_session) def fetch_weather_data(latitude=37.98, longitude=23.72): # Default coordinates for Athens, Greece # Define the API request parameters params = { "latitude": latitude, "longitude": longitude, "daily": ["weather_code", "temperature_2m_max", "temperature_2m_min", "rain_sum"], "timezone": "auto" } # Make the API call url = "https://api.open-meteo.com/v1/forecast" responses = openmeteo.weather_api(url, params=params) # Process the response and return daily data as a DataFrame response = responses[0] daily = response.Daily() daily_dataframe = pd.DataFrame({ "date": pd.date_range( start=pd.to_datetime(daily.Time(), unit="s", utc=True), end=pd.to_datetime(daily.TimeEnd(), unit="s", utc=True), freq=pd.Timedelta(seconds=daily.Interval()), inclusive="left" ), "weather_code": daily.Variables(0).ValuesAsNumpy(), "temperature_2m_max": daily.Variables(1).ValuesAsNumpy(), 
"temperature_2m_min": daily.Variables(2).ValuesAsNumpy(), "rain_sum": daily.Variables(3).ValuesAsNumpy() }) return daily_dataframe def get_weather_description(latitude, longitude): # Fetch the weather data weather_data = fetch_weather_data(latitude, longitude) # Create weather descriptions from the data weather_descriptions = create_weather_descriptions(weather_data) return ' '.join(weather_descriptions) Refer to the GitHub Repo for the other modules , and the Dockerfiles as well. Now here is the Azure Cli scripts that we need to execute in order to build, tag and push our Images to Container Registry and pull them as Container Apps to our Environment on Azure: ## Run these before anything ! : az login az extension add --name containerapp --upgrade az provider register --namespace Microsoft.App az provider register --namespace Microsoft.OperationalInsights ## Load your resources to variables $RESOURCE_GROUP="rg-demo24" $LOCATION="northeurope" $ENVIRONMENT="env-web-x24" $FRONTEND="frontend" $BACKEND="backend" $ACR="acrx2024" ## Create a Resource Group, a Container Registry and a Container Apps Environment: az group create --name $RESOURCE_GROUP --location "$LOCATION" az acr create --resource-group $RESOURCE_GROUP --name $ACR --sku Basic --admin-enabled true az containerapp env create --name $ENVIRONMENT -g $RESOURCE_GROUP --location "$LOCATION" ## Login from your Terminal to ACR: az acr login --name $(az acr list -g rg-demo24 --query "[].{name: name}" -o tsv) ## Build your backend: az acr build --registry $ACR --image backendtts . ## Create your Backend Container App: az containerapp create \ --name backendtts \ --resource-group $RESOURCE_GROUP \ --environment $ENVIRONMENT \ --image "$ACR.azurecr.io/backendtts:latest" \ --target-port 5000 \ --env-vars SPEECH_KEY=xxxxxxxxxx SPEECH_REGION=northeurope \ --ingress 'external' \ --registry-server "$ACR.azurecr.io" \ --query properties.configuration.ingress.fqdn ## Make sure to cd into the React Frontend directory where your Dockerfile is: az acr build --registry $ACR --image frontendtts . ## Create your Frontend: az containerapp create --name frontendtts --resource-group $RESOURCE_GROUP \ --environment $ENVIRONMENT \ --image "$ACR.azurecr.io/frontendtts:latest" \ --target-port 80 --ingress 'external' \ --registry-server "$ACR.azurecr.io" \ --query properties.configuration.ingress.fqdn Now we usually need to have the Web UI up and running so what we do is to set the scaling on each Container App to minimum 1 instance, but this is up to you ! That’s it ! Select your agents and make calls. Hear the audio, download the MP3 and make any changes to your App, just remember to rebuild your image and restart the revision ! Closing As we wrap up this exciting project showcasing the seamless integration of Azure Speech Service with React, Python, and Azure Container Apps, we hope it has sparked your imagination and inspired you to explore the endless possibilities of modern cloud technologies. It’s been an exciting journey combining these powerful tools to create an application that truly speaks to its users. We eagerly look forward to seeing how you, our innovative community, will use these insights to build your own extraordinary projects. References: GitHub Repo Azure Container Apps Azure Speech SDK Azure Speech Service Quickstart Text to Speech Architecture:606Views0likes0CommentsUnlocking the Power of Azure: A Guide to Essential SDKs
Explore Azure SDKs for Python, .NET and JavaScript

Intro
Let's explore the power of Azure SDKs (Software Development Kits) in the most used and widespread programming languages: Python, .NET and JavaScript. The aim is to provide you with practical insights and code snippets that bring Azure’s capabilities to your fingertips. Whether you’re a mature developer or just starting out, this guide will enhance your understanding and use of Azure SDKs, which are described in a number of Learning Paths on Microsoft Learn.

Overview of Azure SDKs
But what exactly are Azure SDKs? The Azure SDKs are collections of libraries built to make it easier to use Azure services from your language of choice. These libraries are designed to be consistent, approachable, diagnosable, dependable, and idiomatic. Azure SDKs are designed to streamline the process of integrating Azure services into our applications. These SDKs provide developers with pre-written code, tools, and libraries that make it easier to interact with Azure’s vast array of services. Whether it’s managing storage, securing applications with Key Vault, orchestrating compute resources, or handling complex networking tasks, SDKs encapsulate much of the necessary heavy lifting. One of Azure SDKs’ greatest strengths is their support for a wide range of programming languages and platforms. This inclusive approach allows developers from different backgrounds and with varying expertise to take advantage of Azure’s cloud capabilities. As you may understand, the field is vast! We have SDKs for iOS, for Python, for Go and so on. So let’s focus on three key languages: Python, .NET, and JavaScript. Each of these languages has a dedicated set of Azure SDKs, tailored to fit their distinctive styles and best practices.

Key Azure SDKs and examples
Let’s start with Python! Python’s Azure SDKs bring simplicity and efficiency to cloud operations. The DefaultAzureCredential class from the azure-identity package is a cornerstone for authentication, automatically selecting the best available credential type based on the environment. For example, let’s have a look at Storage. It is a common task to authenticate to Azure Storage, and we can do it with a few lines:

from azure.storage.blob import BlobServiceClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(account_url="https://<your_account>.blob.core.windows.net", credential=credential)

If we want to break it down:
Importing Necessary Modules: from azure.storage.blob import BlobServiceClient imports the BlobServiceClient class, which is used to interact with the Blob Storage service; from azure.identity import DefaultAzureCredential imports the DefaultAzureCredential class, which provides a seamless way to authenticate with Azure services, especially when your code is running on Azure.
Setting Up Authentication: credential = DefaultAzureCredential() creates an instance of DefaultAzureCredential. This class automatically selects the best available authentication method based on the environment your code is running in. For example, it might use managed identity in an Azure-hosted environment or a developer’s credentials when running locally.
Creating the Blob Service Client: blob_service_client = BlobServiceClient(...) creates an instance of BlobServiceClient, which is used to perform operations on Blob Storage. You need to replace <your_account> with your Azure Storage account name. The credential argument is passed the DefaultAzureCredential instance for authentication.
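As a quick usage note, here is a sketch of a round trip with this client; the container and blob names are illustrative, and the container is assumed to exist already:

# Upload a small text blob using the client created above, overwriting any previous version
container_client = blob_service_client.get_container_client("demo-container")
container_client.upload_blob(name="hello.txt", data=b"Hello from the Azure SDK!", overwrite=True)

# Read it back and print the contents
downloaded = container_client.download_blob("hello.txt").readall()
print(downloaded.decode())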
Another well-known core service is Azure Key Vault. The azure-keyvault-secrets package manages secrets. Authenticate and create a Key Vault client as follows:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net/", credential=credential)

In a similar manner, to manage virtual machines and networking, use azure-mgmt-compute and azure-mgmt-network. The client setup is similar, utilizing DefaultAzureCredential for authentication.

Moving on to the .NET SDK. The Azure SDKs for .NET integrate seamlessly with the .NET ecosystem, offering a familiar and powerful environment for managing Azure resources. The Azure SDK for .NET is designed to make it easy to use Azure services from your .NET applications. Whether it is uploading and downloading files to Blob Storage, retrieving application secrets from Azure Key Vault, or processing notifications from Azure Event Hubs, the Azure SDK for .NET provides a consistent and familiar interface to access Azure services. It is available as a series of NuGet packages that can be used in both .NET Core (2.1 and higher) and .NET Framework (4.7.2 and higher) applications. If we wanted to create a client for Azure Storage:

using Azure.Identity;
using Azure.Storage.Blobs;

var credential = new DefaultAzureCredential();
var blobServiceClient = new BlobServiceClient(new Uri("https://<your_account>.blob.core.windows.net"), credential);

If we want to implement logging to the console:

using AzureEventSourceListener listener = AzureEventSourceListener.CreateConsoleLogger();

To manage secrets in Key Vault, use the Azure.Security.KeyVault.Secrets namespace. Client initialization is straightforward:

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var credential = new DefaultAzureCredential();
var secretClient = new SecretClient(new Uri("https://<your-vault-name>.vault.azure.net/"), credential);

Finally, JavaScript! Azure's JavaScript SDKs are tailored for modern web development, offering easy integration with Azure services in Node.js applications. In our Storage example, the @azure/storage-blob package is used for interacting with Blob Storage:

const { BlobServiceClient } = require("@azure/storage-blob");
const { DefaultAzureCredential } = require("@azure/identity");

const credential = new DefaultAzureCredential();
const blobServiceClient = new BlobServiceClient(`https://${yourAccount}.blob.core.windows.net`, credential);

It is common for Node.js to take the role of the frontend or backend application, thanks to its flexibility and range of use cases.
In the Key Vault example, let's see an extended version where we create and get our secrets:

const { SecretClient } = require("@azure/keyvault-secrets");
const { DefaultAzureCredential } = require("@azure/identity");

// Replace 'yourVaultName' with your Key Vault name
const vaultName = "yourVaultName";
const url = `https://${vaultName}.vault.azure.net/`;

const credential = new DefaultAzureCredential();
const secretClient = new SecretClient(url, credential);

async function main() {
  // Secret to store in the Key Vault
  const secretName = "mySecretName";
  const secretValue = "mySecretValue";

  // Storing a secret
  console.log(`Storing secret: ${secretName}`);
  await secretClient.setSecret(secretName, secretValue);
  console.log(`Secret stored: ${secretName}`);

  // Retrieving the stored secret
  console.log(`Retrieving stored secret: ${secretName}`);
  const retrievedSecret = await secretClient.getSecret(secretName);
  console.log(`Retrieved secret: ${retrievedSecret.name} with value: ${retrievedSecret.value}`);

  // Deleting the secret (optional)
  console.log(`Deleting secret: ${secretName}`);
  await secretClient.beginDeleteSecret(secretName);
  console.log(`Secret deleted: ${secretName}`);
}

main().catch((error) => {
  console.error("An error occurred:", error);
  process.exit(1);
});

In this expanded example:

A secret named mySecretName with the value mySecretValue is created and stored in Azure Key Vault.
The SecretClient is used to interact with the Key Vault. It is initialized with the vault URL and a credential object, which in this case is obtained from DefaultAzureCredential.
The setSecret method stores the secret in the Key Vault.
The getSecret method retrieves the secret from the Key Vault.
Optionally, the beginDeleteSecret method is used to delete the secret. Note that this deletion may be delayed, as it involves a recovery period; the secret isn't immediately removed from the Key Vault.

Putting it all together

In Azure, we benefit from exceptional flexibility through a variety of resources designed to host and manage our applications effectively. Given the modern trend towards microservices and containerization, it's common to see a combination of diverse SDKs, each contributing to a larger project framework. This architecture typically involves distinct components such as a frontend, a backend, and potentially a middleware layer. Each component serves a specific role, seamlessly integrating as part of a comprehensive application solution. Azure's robust infrastructure supports this modular approach, enabling scalable, efficient, and highly customizable application development. Let's see an example, shall we?

Frontend - Node.js

The frontend is built using Node.js and Express. It serves as the user interface and communicates with the Python middleware.

const express = require('express');
const axios = require('axios');
const app = express();
const port = 3000;

app.get('/data', async (req, res) => {
  try {
    // Communicate with the Python middleware
    const response = await axios.get('http://localhost:5000/process');
    res.send(response.data);
  } catch (error) {
    res.status(500).send('Error communicating with middleware');
  }
});

app.listen(port, () => {
  console.log(`Frontend listening at http://localhost:${port}`);
});

Node.js is excellent for building lightweight frontend/backend services. Here, it's used to handle HTTP requests and communicate with the middleware. The simplicity and non-blocking nature of Node.js make it ideal for such tasks.

Middleware - Python

The middleware is a Python Flask application.
It acts as an intermediary, processing data from the frontend and communicating with the .NET backend.

from flask import Flask, jsonify
import requests

app = Flask(__name__)

@app.route('/process', methods=['GET'])
def process_data():
    try:
        # Communicate with the .NET backend
        response = requests.get('http://localhost:6000/data')
        processed_data = response.json()  # Example of data processing
        return jsonify(processed_data)
    except requests.exceptions.RequestException:
        return jsonify({"error": "Failed to communicate with backend"}), 500

if __name__ == '__main__':
    app.run(port=5000)

Python's simplicity and powerful libraries make it a good choice for middleware. In this case, it's used to perform intermediate processing and orchestrate communication between the frontend and the backend.

Backend - .NET

The backend is developed using .NET Core, providing data to the middleware. It could also be integrated with Azure services like Azure SQL Database or Azure Blob Storage for data persistence.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run("http://localhost:6000");

[ApiController]
[Route("[controller]")]
public class DataController : ControllerBase
{
    [HttpGet]
    public IActionResult GetData()
    {
        // Example data from the backend
        var data = new { Message = "Data from .NET Backend" };
        return Ok(data);
    }
}

.NET Core is robust and scalable, suitable for building complex backend systems. It can efficiently handle database connections, business logic, and other backend processes.

Integration and Azure SDK Usage

The Node.js frontend serves as the entry point for user requests, which it forwards to the Python middleware. The Python middleware processes the request and then communicates with the .NET backend, which could be integrated with Azure services for enhanced functionality. The .NET backend could utilize Azure SDKs, like the Azure Storage SDK for storing data or Azure Cognitive Services for AI processing. Each component communicates over HTTP, demonstrating a microservices architecture. This approach allows each part to be scaled and maintained independently.
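As one illustration of how an Azure SDK could slot into this picture, the sketch below shows the Python middleware reading a configuration value from Key Vault at startup before calling the backend. The vault name my-shared-vault and the secret name BackendUrl are hypothetical assumptions, and this is just one possible wiring, not part of the original sample:

import requests
from flask import Flask, jsonify
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

app = Flask(__name__)

# Assumption: a Key Vault named "my-shared-vault" holds a secret "BackendUrl"
credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url="https://my-shared-vault.vault.azure.net/", credential=credential)
backend_url = secret_client.get_secret("BackendUrl").value  # e.g. http://localhost:6000/data

@app.route('/process', methods=['GET'])
def process_data():
    try:
        # Call the backend at the address retrieved from Key Vault
        response = requests.get(backend_url)
        return jsonify(response.json())
    except requests.exceptions.RequestException:
        return jsonify({"error": "Failed to communicate with backend"}), 500

if __name__ == '__main__':
    app.run(port=5000)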
An example of this architecture is showcased in the accompanying diagram.

Best Practices

What is the number one thing we should do when building our solutions and projects? Following best practices! Here are some:

Stay Updated with SDK Versions: Azure SDKs are frequently updated to introduce new features and fix bugs. Regularly updating your SDKs ensures you have the latest improvements and security patches. However, keep an eye on release notes for any breaking changes.
Use DefaultAzureCredential for Simplified Authentication: This class simplifies the authentication process across various environments (local development, deployment in Azure, etc.), making your code more versatile and secure.
Error Handling: Proper error handling is crucial. Azure SDKs throw exceptions for service-side issues. Implement try-catch blocks to handle these exceptions gracefully, ensuring your application remains stable and provides useful feedback to the user (see the sketch after this list).
Asynchronous Programming: Many Azure SDKs offer asynchronous methods. Utilize these to improve the scalability and responsiveness of your applications, especially when dealing with I/O-bound operations (also illustrated in the sketch after this list).
Resource Management: Be mindful of resource creation and management. Clean up resources that are no longer needed to avoid unnecessary costs and maintain an efficient cloud environment.
Utilize SDK Core Features: Azure SDKs provide core functionalities like retries, logging, and telemetry. Familiarize yourself with these features to enhance your application's reliability and maintainability.
Leverage Community and Documentation: The Azure SDKs are well documented, with a wealth of examples and guidance. Additionally, the community around the Azure SDKs is a valuable resource for best practices, troubleshooting, and staying updated with the latest trends.
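To make the error-handling and asynchronous points concrete, here is a minimal Python sketch, assuming the aio variants of the packages are installed (they pull in aiohttp) and that the placeholder storage account and container exist; none of these names come from the original post:

import asyncio
from azure.core.exceptions import ResourceNotFoundError
from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobServiceClient

async def read_blob():
    credential = DefaultAzureCredential()
    service = BlobServiceClient(account_url="https://<your_account>.blob.core.windows.net", credential=credential)
    async with credential, service:
        blob = service.get_blob_client(container="mycontainer", blob="maybe-missing.txt")
        try:
            # The asynchronous download keeps the event loop free for other work
            downloader = await blob.download_blob()
            data = await downloader.readall()
            print(data.decode("utf-8"))
        except ResourceNotFoundError:
            # The SDK raises typed exceptions for service-side issues
            print("Blob not found - handle gracefully instead of crashing")

asyncio.run(read_blob())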
Azure SDKs are powerful tools in our development arsenal, simplifying the complexity of cloud services. By staying informed, following best practices, and leveraging these SDKs, we can unlock the full potential of the Azure cloud, making our cloud journey productive, secure, and efficient.

Azure AI Language: Sentiment Analysis with Durable Functions

Implementing Sentiment Analysis with Azure AI Language and Durable Functions

Intro

In today's exploration, we delve into the world of Durable Functions, an orchestration mechanism that elevates our coding experience. Durable Functions stand out by offering granular control over the execution steps, seamlessly integrating within the Azure Functions framework. This approach not only maintains the serverless nature of Azure Functions but also adds remarkable flexibility, allowing us to craft multifaceted applications, each capable of performing a variety of tasks under the expansive Azure Functions umbrella. Originating from the Durable Task Framework, widely used by Microsoft and various organizations for automating critical processes, Durable Functions represent the next step in serverless computing: they bring the power and efficiency of the Durable Task Framework into the serverless realm of Azure Functions, offering an ideal solution for complex, mission-critical workflows. Alongside Azure Functions, we are going to build a Python Flask web application where users enter text and get a sentiment analysis from Azure AI Language Text Analytics, while the results are stored in Azure Table Storage.

Requirements

For this workshop we need an Azure subscription, and we are using VS Code with Azure Functions Core Tools. We are building an Azure Web App to host our Flask UI, Azure AI Language with the Python SDK for the sentiment analysis, Azure Durable Functions, and a Storage Account. The Durable Functions consist of an HTTP Trigger, the Orchestrator, and two Activity Functions. The first activity is the API call that sends data to the Language endpoint, and the second stores the results in Azure Table Storage, which we can use later for analysis. A sample set of Python dependencies for the project is sketched below.
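The original post does not list exact packages, so treat this requirements file as a plausible assumption covering the Functions project and the Flask web app rather than the author's exact manifest:

# requirements.txt (assumed, not from the original post)
azure-functions            # Azure Functions Python worker bindings
azure-functions-durable    # Durable Functions orchestration for Python
azure-ai-textanalytics     # Azure AI Language (Text Analytics) client
azure-data-tables          # Azure Table Storage client
azure-core                 # Shared Azure SDK primitives (credentials, pipeline)
Flask                      # Web UI
requests                   # HTTP calls from the Flask app to the Functions endpoint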
Build

Let's explore our elements, from the UI to each Function. Our UI is a Flask web app, with index.html served from our app.py program:

from flask import Flask, render_template, request, jsonify
import requests
import os

app = Flask(__name__)

@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')  # HTML file with input form

@app.route('/analyze', methods=['POST'])
def analyze():
    text = request.form['text']
    print("Received text:", text)
    function_url = os.environ.get('FUNCTION_URL')
    if not function_url:
        return jsonify({'error': 'Function URL is not configured'})

    # Trigger the Azure Function
    response = requests.post(function_url, json={'text': text})
    if response.status_code != 202:
        return jsonify({'error': 'Failed to start the analysis'})

    # Get the status query URL
    status_query_url = response.headers['Location']

    # Poll the status endpoint
    while True:
        status_response = requests.get(status_query_url)
        status_response_json = status_response.json()
        if status_response_json['runtimeStatus'] in ['Completed']:
            # The result should be directly in the output
            results = status_response_json.get('output', [])
            return jsonify({'results': results})
        elif status_response_json['runtimeStatus'] in ['Failed', 'Terminated']:
            return jsonify({'error': 'Analysis failed or terminated'})
        # Implement a delay here if necessary

if __name__ == '__main__':
    app.run(debug=True)

<!DOCTYPE html>
<html>
<head>
    <title>Sentiment Analysis App</title>
    <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
    <img src="{{ url_for('static', filename='logo.png') }}" class="icon" alt="App Icon">
    <h2>Sentiment Analysis</h2>
    <form id="textForm">
        <textarea name="text" placeholder="Enter text here..."></textarea>
        <button type="submit">Analyze</button>
    </form>
    <div id="result"></div>
    <script>
        document.getElementById('textForm').onsubmit = async function(e) {
            e.preventDefault();
            let formData = new FormData(this);
            let response = await fetch('/analyze', { method: 'POST', body: formData });
            let resultData = await response.json();
            // Accessing the 'results' object from the response
            let results = resultData.results;
            if (results) {
                // Constructing the display text with sentiment and confidence scores
                let displayText = `Document: ${results.document}\nSentiment: ${results.overall_sentiment}\n`;
                displayText += `Confidence - Positive: ${results.confidence_positive}, Neutral: ${results.confidence_neutral}, Negative: ${results.confidence_negative}`;
                document.getElementById('result').innerText = displayText;
            } else {
                // Handling cases where results may not be present
                document.getElementById('result').innerText = 'No results to display';
            }
        };
    </script>
</body>
</html>

Durable Functions

There are currently four durable function types in Azure Functions: activity, orchestrator, entity, and client. In our deployment we are using:

Function 1 – HTTP Trigger (Client/Starter Function): Receives text input from the frontend and starts the orchestrator.
Function 2 – Orchestrator Function: Orchestrates the sentiment analysis workflow.
Function 3 – Activity Function: Calls the Azure Cognitive Services Text Analytics API to analyze sentiment.
Function 4 – Activity Function: Stores the results in Azure Table Storage.
And here is the code for each Durable Function, starting with the HTTP Trigger:

# HTTP Trigger - the client/starter function listener
import logging
import azure.functions as func
import azure.durable_functions as df

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    text = req.params.get('text')
    if not text:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            text = req_body.get('text')
    if text:
        instance_id = await client.start_new("SentimentOrchestrator", None, text)
        logging.info(f"Started orchestration with ID = '{instance_id}'.")
        return client.create_check_status_response(req, instance_id)
    else:
        return func.HttpResponse(
            "Please pass the text to analyze in the request body",
            status_code=400
        )

Next, the Orchestrator:

# Orchestrator Function
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    document = context.get_input()  # Treat input as a single document
    result = yield context.call_activity("AnalyzeSentiment", document)
    # Call the function to store the result in Azure Table Storage
    yield context.call_activity("StoreInTableStorage", result)
    return result

main = df.Orchestrator.create(orchestrator_function)

The Orchestrator fires the following Activity Functions: the sentiment analysis call, and the storing of the results in Azure Table Storage:

# Activity - Sentiment Analysis
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

def main(document: str) -> dict:
    endpoint = os.environ["TEXT_ANALYTICS_ENDPOINT"]
    key = os.environ["TEXT_ANALYTICS_KEY"]
    text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    response = text_analytics_client.analyze_sentiment([document], show_opinion_mining=False)
    doc = next(iter(response))
    if not doc.is_error:
        simplified_result = {
            "overall_sentiment": doc.sentiment,
            "confidence_positive": doc.confidence_scores.positive,
            "confidence_neutral": doc.confidence_scores.neutral,
            "confidence_negative": doc.confidence_scores.negative,
            "document": document
        }
        return simplified_result
    else:
        return {"error": "Sentiment analysis failed"}

# Activity - Results to Table Storage
import os
from datetime import datetime
from azure.data.tables import TableServiceClient

def main(results: dict) -> str:
    connection_string = os.environ['AZURE_TABLE_STORAGE_CONNECTION_STRING']
    table_name = 'SentimentAnalysisResults'
    table_service = TableServiceClient.from_connection_string(connection_string)
    table_client = table_service.get_table_client(table_name)

    # Prepare the entity with a unique RowKey using a timestamp
    timestamp = datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
    row_key = f"{results.get('document')}-{timestamp}"
    entity = {
        "PartitionKey": "SentimentAnalysis",
        "RowKey": row_key,
        "Document": results.get('document'),
        "Sentiment": results.get('overall_sentiment'),
        # Store the individual confidence scores produced by the analysis activity
        "ConfidencePositive": results.get('confidence_positive'),
        "ConfidenceNeutral": results.get('confidence_neutral'),
        "ConfidenceNegative": results.get('confidence_negative')
    }

    # Insert the entity
    table_client.create_entity(entity=entity)
    return "Result stored in Azure Table Storage"

Our serverless workshop is almost ready! We need to carefully add the relevant configuration values for each resource:

Azure Web Application: FUNCTION_URL, the HTTP Start URL from the Durable Functions resource.
Durable Functions: TEXT_ANALYTICS_ENDPOINT, the Azure AI Language endpoint.
Durable Functions: TEXT_ANALYTICS_KEY, the Azure AI Language key.
Durable Functions: AZURE_TABLE_STORAGE_CONNECTION_STRING, the connection string for the Storage Account.
We need to create a Storage Account and a Table, an Azure Web Application with an App Service Plan, an Azure Durable Functions resource, and either a Cognitive Services multi-service account or an Azure AI Language resource. From VS Code, create a new Durable Functions project and the four Durable Functions described above. Make sure to use the correct names in the bindings; for example, in the StoreInTableStorage Function we have:

def main(results: dict) -> str:
    connection_string = os.environ['AZURE_TABLE_STORAGE_CONNECTION_STRING']
    table_name = 'SentimentAnalysisResults'
    .......

So in the function.json binding file, make sure to match the name given in our code:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "results",
      "type": "activityTrigger",
      "direction": "in"
    }
  ]
}

Add a system-assigned managed identity to the Function resource and assign it the Storage Table Data Contributor role. Create the Web Application and deploy app.py to the Web App, making sure you have selected the directory where your app.py file exists. Add the configuration settings we described, and when you hit the URL you will be presented with the UI.

Let's break down the whole procedure, following the flow we have seen above:

1. User Enters Text: It all starts when a user types a sentence or paragraph into the text box on your web page (the UI).
2. Form Submission to Flask App: When the user clicks the "Analyze" button, the text is sent from the web page to your Flask app via an HTTP POST request, triggered by the JavaScript code on your web page. The Flask app, running on a server, receives this text.
3. Flask App Invokes Azure Function: The Flask app then sends this text to an Azure Function by making another HTTP POST request, this time from the Flask app to the Azure Function's endpoint. The Azure Function is part of Azure Durable Functions, which are special types of Azure Functions designed for more complex workflows.
4. Processing in the Azure Durable Function: The text first arrives at the Orchestrator function in your Durable Function setup. The Orchestrator coordinates what happens to the text next: it calls an Activity function specifically designed for sentiment analysis, which uses Azure AI Language to analyze the text. Once the Activity function completes the analysis, it returns the results (the sentiment label and confidence scores) back to the Orchestrator.
5. Storing Results (Optional): If you've set it up, the Orchestrator then calls another Activity function to store these results in Azure Table Storage for later use.
6. Results Sent Back to Flask App: After processing (and optionally storing) the results, they become available to your Flask app through the status query endpoint it polls.
7. Flask App Responds to Web Page: Your Flask app receives the sentiment analysis results and sends them back to the web page as a response to the initial HTTP POST request.
8. Displaying Results on the UI: Finally, the JavaScript code on your web page receives this response and updates the page to display the sentiment analysis results to the user.

And here is the data stored in our Table:

As you can see, we can expand the solution to further analyze our data, add visualizations, and ultimately provide an enterprise-grade solution with Durable Functions at its heart, as sketched below.
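As one hedged illustration of that follow-on analysis (the table name and partition key match the storage activity above; the connection string is assumed to be in the environment), a separate script could read the stored sentiments back out:

import os
from azure.data.tables import TableClient

# Assumption: the same connection string used by the storage activity
connection_string = os.environ['AZURE_TABLE_STORAGE_CONNECTION_STRING']
table_client = TableClient.from_connection_string(connection_string, table_name='SentimentAnalysisResults')

# Query all sentiment results written by the orchestration
entities = table_client.query_entities("PartitionKey eq 'SentimentAnalysis'")
for entity in entities:
    print(entity['Document'], '->', entity['Sentiment'])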
Our Architecture is simple but powerful and extendable:

Closing

Modern solutions are bound to innovative yet powerful offerings, and Azure Durable Functions can integrate seamlessly with every Azure service; even better, they orchestrate our code with ease, providing fast delivery, scalability, and security. Today we explored Azure AI Language with Text Analytics and Sentiment Analysis, and Durable Functions helped us deliver a multipurpose solution with the Azure Python SDK. Integration is key if we want to create robust and modern solutions without having to write hundreds of lines of code, and Azure is leading the way with cutting-edge, serverless PaaS offerings for us to keep building!

GitHub Repository: Sentiment Analysis with Durable Functions
Groups
Azure Tech Bites
The Azure Tech Bites event series helps inspire digital transformation and cloud adoption by empowering technical communities to achieve more. It builds an ecosystem of technical community members (customers, partners, and experts) focused on building customer trust, driving tech intensity in solving business problems through effective use of Azure products and services, and accelerating cloud adoption for a better customer-connected experience.
Latest Activity: Jul 01, 2024

Developer User Group Leaders Hub
The place where user group leaders who want to stay in the know on the latest and greatest from Microsoft Dev Tools, Azure, and AI come together to discuss, learn, share best practices, and get weekly updates.