User Profile
KonstantinosPassadis
Learn Expert
Joined 7 years ago
User Widgets
Recent Discussions
New Community Poll!
Hey Tech Enthusiasts! 👋 We’re gearing up for our next round of deep-dive sessions, and we want you to help us decide which topic we explore next. Cast your vote and shape the future of our Tech Community events! 🚀 Link: Event Polls
Here are the contenders:
🔍 Navigating Azure AI Foundry: Get the full scoop on Azure AI Foundry—what it is, how it works, and the latest updates that are changing the game.
💻 GitHub Copilot and VS Code: Discover how to supercharge your coding workflow with GitHub Copilot and Visual Studio Code. Your AI-powered coding sidekick awaits!
🧠 Embeddings and Vector Databases: Unpack the magic behind embeddings and vector databases in Azure. Learn how they power intelligent search, recommendations, and more.
🗣️ Azure Live Voice API and Avatar Creation: Explore the cutting edge of voice synthesis and avatar creation using Azure’s Live Voice API. The future of digital interaction is here.
💬 Got questions or want to advocate for your favorite topic? Drop a comment below! Let’s build the future—together. 💙
Re: Welcome!
Hello Surya_Narayana! No prerequisites! Just ideas and our willingness to share! Do you have some good ideas to share? Do you have expertise in an area within Azure, Modern Apps, or AI? We are excited to hear them! We would also like to invite everyone to vote for our First Event: Event Polls. We are also looking for Speakers for Azure and/or Apps and AI! Please check whether you are able to create a Blog Post; if you want, I can upgrade your permissions so you can become a valuable content creator in our Room!
Event Speakers
Looking for Speakers in the "Modern Development with Azure Integration and AI" Learning Room. If you have ideas, expertise, and knowledge to share, we would love to have you in a Live or Prerecorded Session! Contact me or post your answer! Vote for the First Event you would like to see: Event Polls
Event Polling System
Vote for your favorite subject and it will be presented! Head over to the following link and vote for what you would like to see and discuss in the coming weeks! The Poll will stay open until the 29th of June! Invite others and cast your own vote so we can connect and exchange ideas and experiences in the Virtual Space where the event will take place. If the voting system is something the Community embraces, we will enhance it and implement more ideas, as well as a Call for Speakers! Event Voting App
Re: Welcome!
Hello Surya_Narayana! This Learning Room focuses on mastering the integration principles of Azure and the Microsoft Stack to enable the successful and efficient development and deployment of advanced, AI-powered applications and solutions. Please do not hesitate to request further clarification or make additional suggestions.
The brand new Azure AI Agent Service at your fingertips
Intro Azure AI Agent Service is a game-changer for developers. This fully managed service empowers you to build, deploy, and scale high-quality, extensible AI agents securely, without the hassle of managing underlying infrastructure. What used to take hundreds of lines of code can now be achieved in just a few lines! So here it is, a web application that streamlines document uploads, summarizes content using AI, and provides seamless access to stored summaries. This article delves into the architecture and implementation of this solution, drawing inspiration from our previous explorations with Azure AI Foundry and secure AI integrations. Architecture Overview Our Azure AI Agent Service WebApp integrates several Azure services to create a cohesive and scalable system: Azure AI Projects & Azure AI Agent Service: Powers the AI-driven summarization and title generation of uploaded documents. Azure Blob Storage: Stores the original and processed documents securely. Azure Cosmos DB: Maintains metadata and summaries for quick retrieval and display. Azure API Management (APIM): Manages and secures API endpoints, ensuring controlled access to backend services. This architecture ensures a seamless flow from document upload to AI processing and storage, providing users with immediate access to summarized content. Azure AI Agent Service – Frontend Implementation The frontend of the Azure AI Agent Service WebApp is built using Vite and React, offering a responsive and user-friendly interface. Key features include: Real-time AI Chat Interface: Users can interact with an AI agent for various queries. Document Upload Functionality: Supports uploading documents in various formats, which are then processed by the backend AI services. Document Repository: Displays a list of uploaded documents with their summaries and download links. This is the main UI , ChatApp.jsx. We can interact with Chat Agent for regular chat, while the keyword “upload:” activates the hidden upload menu. Azure AI Agent Service – Backend Services The backend is developed using Express.js, orchestrating various services to handle: File Uploads: Accepts documents from the frontend and stores them in Azure Blob Storage. AI Processing: Utilizes Azure AI Projects to extract text, generate summaries, and create concise titles. Metadata Storage: Saves document metadata and summaries in Azure Cosmos DB for efficient retrieval. One of the Challenges was to not recreate the Agents each time our backend reloads. So a careful plan is configured, with several files – modules for the Azure AI Agent Service interaction and Agents creation. 
The initialization for example is taken care by a single file-module: const { DefaultAzureCredential } = require('@azure/identity'); const { SecretClient } = require('@azure/keyvault-secrets'); const { AIProjectsClient, ToolUtility } = require('@azure/ai-projects'); require('dotenv').config(); // Keep track of global instances let aiProjectsClient = null; let agents = { chatAgent: null, extractAgent: null, summarizeAgent: null, titleAgent: null }; async function initializeAI(app) { try { // Setup Azure Key Vault const keyVaultName = process.env.KEYVAULT_NAME; const keyVaultUrl = `https://${keyVaultName}.vault.azure.net`; const credential = new DefaultAzureCredential(); const secretClient = new SecretClient(keyVaultUrl, credential); // Get AI connection string const secret = await secretClient.getSecret('AIConnectionString'); const AI_CONNECTION_STRING = secret.value; // Initialize AI Projects Client aiProjectsClient = AIProjectsClient.fromConnectionString( AI_CONNECTION_STRING, credential ); // Create code interpreter tool (shared among agents) const codeInterpreterTool = ToolUtility.createCodeInterpreterTool(); const tools = [codeInterpreterTool.definition]; const toolResources = codeInterpreterTool.resources; console.log('🚀 Creating AI Agents...'); // Create chat agent agents.chatAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", { name: "chat-agent", instructions: "You are a helpful AI assistant that provides clear and concise responses.", tools, toolResources }); console.log('✅ Chat Agent created'); // Create extraction agent agents.extractAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", { name: "extract-agent", instructions: "Process and clean text content while maintaining structure and important information.", tools, toolResources }); console.log('✅ Extract Agent created'); // Create summarization agent agents.summarizeAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", { name: "summarize-agent", instructions: "Create concise summaries that capture main points and key details.", tools, toolResources }); console.log('✅ Summarize Agent created'); // Create title agent agents.titleAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", { name: "title-agent", instructions: `You are a specialized title generation assistant. Your task is to create titles for documents following these rules: 1. Generate ONLY the title text, no additional explanations 2. Maximum length of 50 characters 3. Focus on the main topic or theme 4. Use proper capitalization (Title Case) 5. Avoid special characters and quotes 6. Make titles clear and descriptive 7. 
Respond with nothing but the title itself Example good responses: Digital Transformation Strategy 2025 Market Analysis: Premium Chai Tea Cloud Computing Implementation Guide Example bad responses: "Here's a title for your document: Digital Strategy" (no explanations needed) This document appears to be about digital transformation (just the title needed) The title is: Market Analysis (no extra text)`, tools, toolResources }); console.log('✅ Title Agent created'); // Store in app.locals app.locals.aiProjectsClient = aiProjectsClient; app.locals.agents = agents; console.log('✅ All AI Agents initialized successfully'); return { aiProjectsClient, agents }; } catch (error) { console.error('❌ Error initializing AI:', error); throw error; } } // Export both the initialization function and the shared instances module.exports = { initializeAI, getClient: () => aiProjectsClient, getAgents: () => agents };
Our backend utilizes four agents, created as Azure AI Agent Service Agents, and we will find them in the portal once the backend deploys. At the same time, each interaction is stored and managed as a thread, and that is how we interact with the Azure AI Agent Service.
Deployment and Security of Azure AI Agent Service WebApp Ensuring secure and efficient deployment is crucial. We’ve employed: Azure API Management (APIM): Secures API endpoints, providing controlled access and monitoring capabilities. Azure Key Vault: Manages sensitive information such as API keys and connection strings, ensuring data protection. Every call to the backend service is protected with the Azure API Management Basic Tier. We expose only the required endpoints, pointing to the matching endpoints of our Azure AI Agent Service WebApp backend. We also store the AIConnectionString variable in Key Vault, and we can move all other variables into Key Vault as well, which I recommend!
Get started with Azure AI Agent Service To get started with Azure AI Agent Service, you need to create an Azure AI Foundry hub and an Agent project in your Azure subscription. Start with the quickstart guide if it’s your first time using the service. You can create an AI hub and project with the required resources. After you create a project, you can deploy a compatible model such as GPT-4o. When you have a deployed model, you can also start making API calls to the service using the SDKs. There are already two quickstarts available to get your Azure AI Agent Service up and running: the Basic and the Standard. I have chosen the second one, the Standard plan, since we have a WebApp and the whole architecture comes in very handy! We just added the Cosmos DB interaction and API Management to extend it to an enterprise setup! Our own Azure AI Agent Service deployment allows us to interact with the Agents and utilize tools and functions very easily.
Conclusion By harnessing the power of Azure’s cloud services, we’ve developed a scalable and efficient web application that simplifies document management through AI-driven processing. This solution not only enhances productivity but also ensures secure and organized access to essential information.
References Azure AI Agent Service Documentation What is Azure AI Agent Service Azure AI Agent Service Quick starts Azure API Management Azure AI Foundry Azure AI Foundry Inference Demo
Multi Model Deployment with Azure AI Foundry Serverless, Python and Container Apps
Intro Azure AI Foundry is a comprehensive AI suite, with a vast set of serverless and managed models offerings designed to democratize AI deployment. Whether you’re running a small startup or an 500 enterprise, Azure AI Foundry provides the flexibility and scalability needed to implement and manage machine learning and AI models seamlessly. By leveraging Azure’s robust cloud infrastructure, you can focus on innovating and delivering value, while Azure takes care of the heavy lifting behind the scenes. In this demonstration, we delve into building an Azure Container Apps stack. This innovative approach allows us to deploy a Web App that facilitates interaction with three powerful models: GPT-4, Deepseek, and PHI-3. Users can select from these models for Chat Completions, gaining invaluable insights into their actual performance, token consumption, and overall efficiency through real-time metrics. This deployment not only showcases the versatility and robustness of Azure AI Foundry but also provides a practical framework for businesses to observe and measure AI effectiveness, paving the way for data-driven decision-making and optimized AI solutions. Azure AI Foundry: The evolution Azure AI Foundry represents the next evolution in Microsoft’s AI offerings, building on the success of Azure AI and Cognitive Services. This unified platform is designed to streamline the development, deployment, and management of AI solutions, providing developers and enterprises with a comprehensive suite of tools and services. With Azure AI Foundry, users gain access to a robust model catalog, collaborative GenAIOps tools, and enterprise-grade security features. The platform’s unified portal simplifies the AI development lifecycle, allowing seamless integration of various AI models and services. Azure AI Foundry offers the flexibility and scalability needed to bring your AI projects to life, with deep insights and fast adoption path for the users. The Model Catalog allows us to filter and compare models per our requirements and easily create deployments directly from the Interface. Building the Application Before describing the methodology and the process, we have to make sure our dependencies are in place. So let’s have a quick look on the prerequisites of our deployment. GitHub - passadis/ai-foundry-multimodels: Azure AI Foundry multimodel utilization and performance metrics Web App. Azure AI Foundry multimodel utilization and performance metrics Web App. - passadis/ai-foundry-multimodels github.com Prerequisites Azure Subscription Azure AI Foundry Hub with a project in East US. The models are all supported in East US. VSCode with the Azure Resources extension There is no need to show the Azure Resources deployment steps, since there are numerous ways to do it and i have also showcased that in previous posts. In fact, it is a standard set of services to support our Micro-services Infrastructure: Azure Container Registry, Azure Key Vault, Azure User Assigned Managed identity, Azure Container Apps Environment and finally our Azure AI Foundry Model deployments. Frontend – Vite + React + TS The frontend is built using Vite and React and features a dropdown menu for model selection, a text area for user input, real-time response display, as well as loading states and error handling. 
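To make the frontend–backend contract concrete: the UI posts the selected model, the prompt, and generation parameters to a single generate endpoint and expects the response text plus timing and token metrics back. The following is a minimal, hypothetical FastAPI sketch of that endpoint; the route shape matches the fetch call shown later, while the field names and the call_model placeholder are illustrative rather than the repository's actual code:

# Hypothetical sketch of the backend generate endpoint (illustrative names only).
import time
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    model: str            # e.g. a key for GPT-4o, DeepSeek, or Phi-3
    prompt: str
    parameters: dict = {} # e.g. {"temperature": 0.7, "max_tokens": 800}

def call_model(model: str, prompt: str, parameters: dict) -> tuple[str, int]:
    """Placeholder: call the selected Azure AI Foundry serverless endpoint here
    and return (completion_text, completion_tokens) from its usage data."""
    raise NotImplementedError

@app.post("/api/v1/generate")
async def generate(req: GenerateRequest):
    start = time.perf_counter()
    text, completion_tokens = call_model(req.model, req.prompt, req.parameters)
    elapsed = time.perf_counter() - start
    return {
        "model": req.model,
        "response": text,
        "metrics": {
            "tokens": completion_tokens,
            "tokens_per_second": round(completion_tokens / elapsed, 2) if elapsed else None,
            "total_time_seconds": round(elapsed, 2),
        },
    }

The real backend adds per-model configuration and Key Vault lookups on top of this shape; the sketch only shows where the token and timing metrics displayed by the UI come from.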
Key considerations in the frontend implementation include the use of modern React patterns and hooks, ensuring a responsive design for various screen sizes, providing clear feedback for user interactions, and incorporating elegant error handling. The current implementation allows us to switch models even after we have initiated a conversation and we can keep up to 5 messages as Chat History. The uniqueness of our frontend is the performance information we get for each response, with Tokens, Tokens per Second and Total Time. Backend – Python + FastAPI The backend is built with FastAPI and is responsible for model selection and configuration, integrating with Azure AI Foundry, processing requests and responses, and handling errors and validation. A directory structure as follows can help us organize our services and utilize the modular strengths of Python: backend/ ├── app/ │ ├── __init__.py │ ├── main.py │ ├── config.py │ ├── api/ │ │ ├── __init__.py │ │ └── routes.py │ ├── models/ │ │ ├── __init__.py │ │ └── request_models.py │ └── services/ │ ├── __init__.py │ └── azure_ai.py ├── run.py # For Local runs ├── Dockerfile ├── requirements.txt └── .env Azure Container Apps A powerful combination allows us to easily integrate both using Dapr, since it is natively supported and integrated in our Azure Container Apps. try { const response = await fetch('/api/v1/generate', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ model: selectedModel, prompt: userInput, parameters: { temperature: 0.7, max_tokens: 800 } }), }); However we need to correctly configure NGINX to proxy the request to the Dapr Sidecar since we are using Container Images. # API endpoints via Dapr location /api/v1/ { proxy_pass http://localhost:3500/v1.0/invoke/backend/method/api/v1/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; Azure Key Vault As always all our secret variables like the API Endpoints and the API Keys are stored in Key Vault. We create a Key Vault Client in our Backend and we call each key only the time we need it. That makes our deployment more secure and efficient. Deployment Considerations When deploying your application: Set up proper environment variables Configure CORS settings appropriately Implement monitoring and logging Set up appropriate scaling policies Azure AI Foundry: Multi Model Architecture The solution is built on Azure Container Apps for serverless scalability. The frontend and backend containers are hosted in Azure Container Registry and deployed to Container Apps with Dapr integration for service-to-service communication. Azure Key Vault manages sensitive configurations like API keys through a user-assigned Managed Identity. The backend connects to three Azure AI Foundry models (DeepSeek, GPT-4, and Phi-3), each with its own endpoint and configuration. This serverless architecture ensures high availability, secure secret management, and efficient model interaction while maintaining cost efficiency through consumption-based pricing. Conclusion This Azure AI Foundry Models Demo showcases the power of serverless AI integration in modern web applications. By leveraging Azure Container Apps, Dapr, and Azure Key Vault, we’ve created a secure, scalable, and cost-effective solution for AI model comparison and interaction. 
The project demonstrates how different AI models can be effectively compared and utilized, providing insights into their unique strengths and performance characteristics. Whether you’re a developer exploring AI capabilities, an architect designing AI solutions, or a business evaluating AI models, this demo offers practical insights into Azure’s AI infrastructure and serverless computing potential.
References Azure AI Foundry Azure Container Apps Azure AI – Documentation AI learning hub CloudBlogger: Text To Speech with Containers
How to create your personal AI powered Email Assistant
Crafting an AI Powered Email Assistant with Semantic Kernel and Neon Serverless PostgreSQL Intro In the realm of Artificial Intelligence, crafting applications that seamlessly blend advanced capabilities with user-friendly design is no small feat. Today, we take you behind the scenes of building an AI Powered Email Assistant, a project that leverages Semantic Kernel for embedding generation and indexing, Neon PostgreSQL for vector storage, and the Azure OpenAI API for generative AI capabilities. This blog post is a practical guide to implementing a powerful AI-driven solution from scratch. The Vision Our AI Powered Email Assistant is designed to: Draft emails automatically using input prompts. Enable easy approval, editing, and sending via Microsoft Graph API. Create and store embeddings of the Draft and Send emails in NEON Serverless PostgreSQL DB. Provide a search feature to retrieve similar emails based on contextual embeddings. This application combines cutting-edge AI technologies and modern web development practices, offering a seamless user experience for drafting and managing emails. The Core Technologies of our AI Powered Email Assistant 1. Semantic Kernel Semantic Kernel simplifies the integration of AI services into applications. It provides robust tools for text generation, embedding creation, and memory management. For our project, Semantic Kernel acts as the foundation for: Generating email drafts via Azure OpenAI. Creating embeddings for storing and retrieving contextual data. 2. Vector Indexing with Neon PostgreSQL Neon, a serverless PostgreSQL solution, allows seamless storage and retrieval of embeddings using the pgvector extension. Its serverless nature ensures scalability and reliability, making it perfect for real-time AI applications. 3. Azure OpenAI API With Azure OpenAI, the project harnesses models like gpt-4 and text-embedding-ada-002 for generative text and embedding creation. These APIs offer unparalleled flexibility and power for building AI-driven workflows. How We Built our AI Powered Email Assistant Step 1: Frontend – A React-Based Interface The frontend, built in React, provides users with a sleek interface to: Input recipient details, subject, and email description. Generate email drafts with a single click. Approve, edit, and send emails directly. We incorporated a loading spinner to enhance user feedback and search functionality for retrieving similar emails. Key Features: State Management: For handling draft generation and email sending. API Integration: React fetch calls connect seamlessly to backend APIs. Dynamic UI: A real-time experience for generating and reviewing drafts. The backend, powered by ASP.NET Core, uses Semantic Kernel for AI services and Neon for vector indexing. Key backend components include Semantic Kernel Services: Text Embedding Generation: Uses Azure OpenAI’s text-embedding-ada-002 to create embeddings for email content. Draft Generation: The AI Powered Email Assistant creates email drafts based on user inputs using Azure OpenAI gpt-4 model (OpenAI Skill). public async Task<string> GenerateEmailDraftAsync(string subject, string description) { try { var chatCompletionService = _kernel.GetRequiredService<IChatCompletionService>(); var message = new ChatMessageContent( AuthorRole.User, $"Draft a professional email with the following details:\nSubject: {subject}\nDescription: {description}" ); var result = await chatCompletionService.GetChatMessageContentAsync(message.Content ?? string.Empty); return result?.Content ?? 
string.Empty; } catch (Exception ex) { throw new Exception($"Error generating email draft: {ex.Message}", ex); } } } Vector Indexing with Neon: Embedding Storage: Stores embeddings in Neon using the pgvector extension. Contextual Search: Retrieves similar emails by calculating vector similarity. Email Sending via Microsoft Graph: Enables sending emails directly through an authenticated Microsoft Graph API integration. Key Backend Features: Middleware for PIN Authentication: Adds a secure layer to ensure only authorized users access the application. CORS Policies: Allow safe fronted-backend communication. Swagger Documentation: The Swagger Docs that simplify API testing during development. Step 3: Integration with Neon The pgvector extension in Neon PostgreSQL facilitates efficient vector storage and similarity search. Here’s how we integrated Neon into the project: Table Design: A dedicated table for embeddings with columns for subject, content,type, embedding, and created_at. The type column can hold 2 values draft or sent in case the users want to explore previous unsent drafts. Index Optimization: Optimizing the index can save us a lot of time and effort before facing performance issues with CREATE INDEX ON embeddings USING ivfflat (embedding) WITH (lists = 100); Search Implementation: Using SQL queries with vector operations to find the most relevant embeddings. Enhanced Serverless Out-Of-the-box: Even the free SKU offers Read Replica and Autoscaling making it Enterprise-ready. Why This Approach Stands Out Efficiency: By storing embeddings instead of just raw data, the system maintains privacy while enabling rich contextual searches. Scalability: Leveraging Neon’s serverless capabilities ensures that the application can grow without bottlenecks. Autoscale is enabled User-Centric Design: The combination of React’s dynamic frontend and Semantic Kernel’s advanced AI delivers a polished user experience. Prerequisites Azure Account with OpenAI access Microsoft 365 Developer Account NEON PostgreSQL Account .NET 8 SDK Node.js and npm Visual Studio Code or Visual Studio 2022 Step 1: Setting Up Azure Resources Azure OpenAI Setup: Create an Azure OpenAI resource Deploy two models: GPT-4 for text generation text-embedding-ada-002 for embeddings Note down endpoint and API key Entra ID App Registration: Create new App Registration Required API Permissions: Microsoft Graph: Mail.Send (Application) Microsoft Graph: Mail.ReadWrite (Application) Generate Client Secret Note down Client ID and Tenant ID Step 2: Database Setup NEON PostgreSQL: Create a new project Create database Enable pgvector extension Save connection string Step 3: Backend Implementation (.NET) Project Structure: /Controllers - EmailController.cs (handles email operations) - HomeController.cs (root routing) - VectorSearchController.cs (similarity search) /Services - EmailService.cs (Graph API integration) - SemanticKernelService.cs (AI operations) - VectorSearchService.cs (embedding operations) - OpenAISkill.cs (email generation) Key Components: SemanticKernelService: Initializes Semantic Kernel Manages AI model connections Handles prompt engineering EmailService: Microsoft Graph API integration Email sending functionality Authentication management VectorSearchService: Generates embeddings Manages vector storage Performs similarity searches Step 5: Configuration Create new dotnet project with: dotnet new webapi -n SemanticKernelEmailAssistant Configure appsettings.json for your Connections. 
Install Semantic Kernel (look into the SemanticKernelEmailAssistant.csproj for all packages and versions) – versions are important! When all of your files are complete, you can execute: dotnet build & dotnet publish -c Release To test locally, simply run dotnet run
Step 5: React Frontend Start a new React App with: npx create-react-app ai-email-assistant Change directory into the newly created folder, copy all files from Git, and run npm install Initialize Tailwind with npx tailwindcss init (if you see any related errors)
Step 6: Deploy to Azure Both our Apps are containerized with Docker, so pay attention to get the Dockerfile for each App. Use: [ docker build -t backend . ] and tag and push: [ docker tag backend {acrname}.azurecr.io/backend:v1 ] , [ docker push {acrname}.azurecr.io/backend:v1 ]. The same applies for the Frontend. Make sure to log in to Azure Container Registry with: az acr login --name $(az acr list -g myresourcegroup --query "[].{name: name}" -o tsv) We will then be able to see our new repo in Azure Container Registry and deploy our Web Apps.
Troubleshooting and Maintenance Backend Issues: Use Swagger (/docs) for API testing and debugging. Check Azure Key Vault for PIN and credential updates. Embedding Errors: Ensure pgvector is correctly configured in Neon PostgreSQL. Verify the Azure OpenAI API key and endpoint are correct. Frontend Errors: Use browser dev tools to debug fetch requests. Ensure environment variables are correctly set during build and runtime.
Conclusion In today’s rapidly evolving tech landscape, building an AI-powered application is no longer a daunting task, thanks to technologies like Semantic Kernel, Neon PostgreSQL, and Azure OpenAI. This project demonstrates how these tools work together to deliver a robust, scalable, and user-friendly solution. Semantic Kernel streamlines AI orchestration and prompt management; Neon PostgreSQL provides serverless database capabilities that scale automatically with your application’s needs; and Azure OpenAI’s API and language models ensure high-quality responses and content generation. Whether you’re developing a customer service bot, a content creation tool, or a data analysis platform, this technology stack offers the flexibility and power to bring your ideas to life. If you’re ready to create your own AI application, the combination of Semantic Kernel and Neon is an ideal starting point: a foundation that balances sophisticated functionality with straightforward implementation while scaling seamlessly as your project grows.
References: Semantic Kernel Vector Store Embeddings NEON Project
GraphQL API: Unlimited flexibility for your AI applications
Building a Modern Speech-to-Text Solution with GraphQL and Azure AI Speech Intro Have you ever wondered how to build a modern AI enhanced web application that handles audio transcription while keeping your codebase clean and maintainable? In this post, I’ll walk you through how we combined the power of GraphQL with Azure’s AI services to create a seamless audio transcription solution. Let’s dive in! The Challenge In today’s world, converting speech to text is becoming increasingly important for accessibility, content creation, and data processing. But building a robust solution that handles file uploads, processes audio, and manages transcriptions can be complex. Traditional REST APIs often require multiple endpoints, leading to increased complexity and potential maintenance headaches. That’s where GraphQL comes in. What is GraphQL GraphQL is an open-source data query and manipulation language for APIs, and a runtime for executing those queries with your existing data. It was developed by Facebook in 2012 and publicly released in 2015. To break that down formally: It’s a query language specification that allows clients to request exactly the data they need It’s a type system that helps describe your API’s data model and capabilities It’s a runtime engine that processes and validates queries against your schema It provides a single endpoint to interact with multiple data sources and services In technical documentation, GraphQL is officially described as: “A query language for your API and a server-side runtime for executing queries by using a type system you define for your data.“ Why GraphQL? GraphQL has revolutionized how we think about API design. Instead of dealing with multiple endpoints for different operations, we get a single, powerful endpoint that handles everything. This is particularly valuable when dealing with complex workflows like audio file processing and transcription. Here’s what makes GraphQL perfect for our use case: Single endpoint for all operations (uploads, queries, mutations) Type-safe API contracts Flexible data fetching Real-time updates through subscriptions Built-in documentation and introspection Solution Architecture Our solution architecture centers around a modern web application built with a powerful combination of technologies. On the frontend, we utilize React to create a dynamic and responsive user interface, enhanced by Apollo Client for seamless GraphQL integration and Fluent UI for a polished and visually appealing design. The backend is powered by Apollo Server, providing our GraphQL API. To handle the core functionality of audio processing, we leverage Azure Speech-to-Text for AI-driven transcription. File management is streamlined with Azure Blob Storage, while data persistence is ensured through Azure Cosmos DB. Finally, we prioritize security by using Azure Key Vault for the secure management of sensitive information. This architecture allows us to deliver a robust and efficient application for audio processing and transcription. 
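A quick aside on the single-endpoint idea before we walk through the stack: any client, not just the React app, can ask the GraphQL endpoint for exactly the fields it needs. As a small illustration, here is a hedged Python sketch that calls the listTranscriptions query defined in the schema shown later; the backend URL is a placeholder for your deployed Apollo Server:

# Illustrative only: replace the URL with your deployed GraphQL endpoint.
import requests

GRAPHQL_URL = "https://<your-backend>.azurewebsites.net/graphql"

query = """
query {
  listTranscriptions {
    id
    filename
  }
}
"""

resp = requests.post(GRAPHQL_URL, json={"query": query}, timeout=30)
resp.raise_for_status()

# Only the requested fields come back, nothing more.
for item in resp.json()["data"]["listTranscriptions"]:
    print(item["id"], item["filename"])

The same endpoint also serves the upload mutation and the per-ID query, which is exactly the flexibility GraphQL promises over a collection of REST routes.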
Flow Chart Key Technologies Frontend Stack React for a dynamic user interface Apollo Client for GraphQL integration Fluent UI for a polished look and feel Backend Stack Apollo Server for our GraphQL API Azure Speech-to-Text for AI-powered transcription Azure Blob Storage for file management Azure Cosmos DB for data persistence Azure Key Vault for secure secret management Architecture Diagram GraphQL Schema and Resolvers: The Foundation At its core, GraphQL requires two fundamental components to function: a Schema Definition Language (SDL) and Resolvers. Schema Definition Language (SDL) The schema is your API’s contract – it defines the types, queries, and mutations available. Here’s an example: import { gql } from "apollo-server"; const typeDefs = gql` scalar Upload type UploadResponse { success: Boolean! message: String! } type Transcription { id: ID! filename: String! transcription: String! fileUrl: String! } type Query { hello: String listTranscriptions: [Transcription!] getTranscription(id: ID!): Transcription } type Mutation { uploadFile(file: Upload!): UploadResponse! } `; export default typeDefs; Resolvers Resolvers are functions that determine how the data for each field in your schema is fetched or computed. They’re the implementation behind your schema. Here’s a typical resolver structure: import axios from "axios"; import { BlobServiceClient } from "@azure/storage-blob"; import { SecretClient } from "@azure/keyvault-secrets"; import { DefaultAzureCredential } from "@azure/identity"; import * as fs from "fs"; import FormData from "form-data"; import { GraphQLUpload } from "graphql-upload"; import { CosmosClient } from "@azure/cosmos"; import { v4 as uuidv4 } from "uuid"; import { pipeline } from "stream"; import { promisify } from "util"; const pipelineAsync = promisify(pipeline); // Key Vault setup const vaultName = process.env.AZURE_KEY_VAULT_NAME; const vaultUrl = `https://${vaultName}.vault.azure.net`; const credential = new DefaultAzureCredential({ managedIdentityClientId: process.env.MANAGED_IDENTITY_CLIENT_ID, }); const secretClient = new SecretClient(vaultUrl, credential); async function getSecret(secretName) { try { const secret = await secretClient.getSecret(secretName); console.log(`Successfully retrieved secret: ${secretName}`); return secret.value; } catch (error) { console.error(`Error fetching secret "${secretName}":`, error.message); throw new Error(`Failed to fetch secret: ${secretName}`); } } // Cosmos DB setup const databaseName = "TranscriptionDB"; const containerName = "Transcriptions"; let cosmosContainer; async function initCosmosDb() { const connectionString = await getSecret("COSMOSCONNECTIONSTRING"); const client = new CosmosClient(connectionString); const database = client.database(databaseName); cosmosContainer = database.container(containerName); console.log(`Connected to Cosmos DB: ${databaseName}/${containerName}`); } // Initialize Cosmos DB connection initCosmosDb(); const resolvers = { Upload: GraphQLUpload, Query: { hello: () => "Hello from Azure Backend!", // List all stored transcriptions listTranscriptions: async () => { try { const { resources } = await cosmosContainer.items.query("SELECT c.id, c.filename FROM c").fetchAll(); return resources; } catch (error) { console.error("Error fetching transcriptions:", error.message); throw new Error("Could not fetch transcriptions."); } }, // Fetch transcription details by ID getTranscription: async (parent, { id }) => { try { const { resource } = await cosmosContainer.item(id, id).read(); return 
resource; } catch (error) { console.error(`Error fetching transcription with ID ${id}:`, error.message); throw new Error(`Could not fetch transcription with ID ${id}.`); } }, }, Mutation: { uploadFile: async (parent, { file }) => { const { createReadStream, filename } = await file; const id = uuidv4(); const filePath = `/tmp/${id}-${filename}`; try { console.log("---- STARTING FILE UPLOAD ----"); console.log(`Original filename: ${filename}`); console.log(`Temporary file path: ${filePath}`); // Save the uploaded file to /tmp const stream = createReadStream(); const writeStream = fs.createWriteStream(filePath); await pipelineAsync(stream, writeStream); console.log("File saved successfully to temporary storage."); // Fetch secrets from Azure Key Vault console.log("Fetching secrets from Azure Key Vault..."); const subscriptionKey = await getSecret("AZURESUBSCRIPTIONKEY"); const endpoint = await getSecret("AZUREENDPOINT"); const storageAccountUrl = await getSecret("AZURESTORAGEACCOUNTURL"); const sasToken = await getSecret("AZURESASTOKEN"); console.log("Storage Account URL and SAS token retrieved."); // Upload the WAV file to Azure Blob Storage console.log("Uploading file to Azure Blob Storage..."); const blobServiceClient = new BlobServiceClient(`${storageAccountUrl}?${sasToken}`); const containerClient = blobServiceClient.getContainerClient("wav-files"); const blockBlobClient = containerClient.getBlockBlobClient(`${id}-${filename}`); await blockBlobClient.uploadFile(filePath); console.log("File uploaded to Azure Blob Storage successfully."); const fileUrl = `${storageAccountUrl}/wav-files/${id}-${filename}`; console.log(`File URL: ${fileUrl}`); // Send transcription request to Azure console.log("Sending transcription request..."); const form = new FormData(); form.append("audio", fs.createReadStream(filePath)); form.append( "definition", JSON.stringify({ locales: ["en-US"], profanityFilterMode: "Masked", channels: [0, 1], }) ); const response = await axios.post( `${endpoint}/speechtotext/transcriptions:transcribe?api-version=2024-05-15-preview`, form, { headers: { ...form.getHeaders(), "Ocp-Apim-Subscription-Key": subscriptionKey, }, } ); console.log("Azure Speech API response received."); console.log("Response Data:", JSON.stringify(response.data, null, 2)); // Extract transcription const combinedPhrases = response.data?.combinedPhrases; if (!combinedPhrases || combinedPhrases.length === 0) { throw new Error("Transcription result not available in the response."); } const transcription = combinedPhrases.map((phrase) => phrase.text).join(" "); console.log("Transcription completed successfully."); // Store transcription in Cosmos DB await cosmosContainer.items.create({ id, filename, transcription, fileUrl, createdAt: new Date().toISOString(), }); console.log(`Transcription stored in Cosmos DB with ID: ${id}`); return { success: true, message: `Transcription: ${transcription}`, }; } catch (error) { console.error("Error during transcription process:", error.response?.data || error.message); return { success: false, message: `Transcription failed: ${error.message}`, }; } finally { try { fs.unlinkSync(filePath); console.log(`Temporary file deleted: ${filePath}`); } catch (cleanupError) { console.error(`Error cleaning up temporary file: ${cleanupError.message}`); } console.log("---- FILE UPLOAD PROCESS COMPLETED ----"); } }, }, }; export default resolvers; Server – Apollo Finally the power behind all, the Apollo Server. We need to install with npm and in addition add the client to the frontend. 
It integrates easily with our JavaScript Express.js app: import { ApolloServer } from "apollo-server-express"; import express from "express"; import cors from "cors"; // Add CORS middleware import { graphqlUploadExpress } from "graphql-upload"; import typeDefs from "./schema.js"; import resolvers from "./resolvers.js"; const startServer = async () => { const app = express(); // Add graphql-upload middleware app.use(graphqlUploadExpress()); // Configure CORS middleware app.use( cors({ origin: "https://<frontend>.azurewebsites.net", // Allow only the frontend origin credentials: true, // Allow cookies and authentication headers }) ); const server = new ApolloServer({ typeDefs, resolvers, csrfPrevention: true, }); await server.start(); server.applyMiddleware({ app, cors: false }); // Disable Apollo's CORS to rely on Express const PORT = process.env.PORT || 4000; app.listen(PORT, "0.0.0.0", () => console.log(`🚀 Server ready at http://0.0.0.0:${PORT}${server.graphqlPath}`) ); }; startServer();
Getting Started Want to try it yourself? Check out our GitHub repository for: Complete source code Deployment instructions Configuration guides API documentation
Conclusion This project demonstrates the powerful combination of GraphQL and Azure AI services, showcasing how modern web applications can handle complex audio processing workflows with elegance and efficiency. By leveraging GraphQL’s flexible data fetching capabilities alongside Azure’s robust cloud infrastructure, we’ve created a scalable solution that streamlines the audio transcription process from upload to delivery. The integration of Apollo Server provides a clean, type-safe API layer that simplifies client-server communication, while Azure AI Speech Services ensures accurate transcription results. This architecture not only delivers a superior developer experience but also provides end-users with a seamless, professional-grade audio transcription service.
References GraphQL – Language API What is GraphQL for Azure? Azure Key Vault secrets in JavaScript Azure AI Speech Fast transcription API CloudBlogger – MultiAgent Speech
How to Create an AI Model for Streaming Data
A Practical Guide with Microsoft Fabric, Kafka and MLFlow Intro In today’s digital landscape, the ability to detect and respond to threats in real-time isn’t just a luxury—it’s a necessity. Imagine building a system that can analyze thousands of user interactions per second, identifying potential phishing attempts before they impact your users. While this may sound complex, Microsoft Fabric makes it possible, even with streaming data. Let’s explore how. In this hands-on guide, I’ll walk you through creating an end-to-end AI solution that processes streaming data from Kafka and employs machine learning for real-time threat detection. We’ll leverage Microsoft Fabric’s comprehensive suite of tools to build, train, and deploy an AI model that works seamlessly with streaming data. Why This Matters Before we dive into the technical details, let’s explore the key advantages of this approach: real-time detection, proactive protection, and the ability to adapt to emerging threats. Real-Time Processing: Traditional batch processing isn’t enough in today’s fast-paced threat landscape. We need immediate insights. Scalability: With Microsoft Fabric’s distributed computing capabilities, our solution can handle enterprise-scale data volumes. Integration: By combining streaming data processing with AI, we create a system that’s both intelligent and responsive. What We’ll Build I’ve created a practical demonstration that showcases how to: Ingest streaming data from Kafka using Microsoft Fabric’s Eventhouse Clean and prepare data in real-time using PySpark Train and evaluate an AI model for phishing detection Deploy the model for real-time predictions Store and analyze results for continuous improvement The best part? Everything stays within the Microsoft Fabric ecosystem, making deployment and maintenance straightforward. Azure Event Hub Start by creating an Event Hub namespace and a new Event Hub. Azure Event Hubs have Kafka endpoints ready to start receiving Streaming Data. Create a new Shared Access Signature and utilize the Python i have created. You may adopt the Constructor to your own idea. 
import uuid import random import time from confluent_kafka import Producer # Kafka configuration for Azure Event Hub config = { 'bootstrap.servers': 'streamiot-dev1.servicebus.windows.net:9093', 'sasl.mechanisms': 'PLAIN', 'security.protocol': 'SASL_SSL', 'sasl.username': '$ConnectionString', 'sasl.password': 'Endpoint=sb://<replacewithyourendpoint>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxxxxxx', } # Create a Kafka producer producer = Producer(config) # Shadow traffic generation def generate_shadow_payload(): """Generates a shadow traffic payload.""" subscriber_id = str(uuid.uuid4()) # Weighted choice for subscriberData if random.choices([True, False], weights=[5, 1])[0]: subscriber_data = f"{random.choice(['John', 'Mark', 'Alex', 'Gordon', 'Silia' 'Jane', 'Alice', 'Bob'])} {random.choice(['Doe', 'White', 'Blue', 'Green', 'Beck', 'Rogers', 'Fergs', 'Coolio', 'Hanks', 'Oliver', 'Smith', 'Brown'])}" else: subscriber_data = f"https://{random.choice(['example.com', 'examplez.com', 'testz.com', 'samplez.com', 'testsite.com', 'mysite.org'])}" return { "subscriberId": subscriber_id, "subscriberData": subscriber_data, } # Delivery report callback def delivery_report(err, msg): """Callback for delivery reports.""" if err is not None: print(f"Message delivery failed: {err}") else: print(f"Message delivered to {msg.topic()} [partition {msg.partition()}] at offset {msg.offset()}") # Topic configuration topic = 'streamio-events1' # Simulate shadow traffic generation and sending to Kafka try: print("Starting shadow traffic simulation. Press Ctrl+C to stop.") while True: # Generate payload payload = generate_shadow_payload() # Send payload to Kafka producer.produce( topic=topic, key=str(payload["subscriberId"]), value=str(payload), callback=delivery_report ) # Throttle messages (1500ms) producer.flush() # Ensure messages are sent before throttling time.sleep(1.5) except KeyboardInterrupt: print("\nSimulation stopped.") finally: producer.flush() You can run this from your Workstation, an Azure Function or whatever fits your case. Architecture Deep Dive: The Three-Layer Approach When building AI-powered streaming solutions, thinking in layers helps manage complexity. Let’s break down our architecture into three distinct layers: Bronze Layer: Raw Streaming Data Ingestion At the foundation of our solution lies the raw data ingestion layer. Here’s where our streaming story begins: A web service generates JSON payloads containing subscriber data These events flow through Kafka endpoints Data arrives as structured JSON with key fields like subscriberId, subscriberData, and timestamps Microsoft Fabric’s Eventstream captures this raw streaming data, providing a reliable foundation for our ML pipeline and stores the payloads in Eventhouse Silver Layer: The Intelligence Hub This is where the magic happens. 
Our silver layer transforms raw data into actionable insights: The EventHouse KQL database stores and manages our streaming data Our ML model, trained using PySpark’s RandomForest classifier, processes the data SynapseML’s Predict API enables seamless model deployment A dedicated pipeline applies our ML model to detect potential phishing attempts Results are stored in Lakehouse Delta Tables for immediate access Gold Layer: Business Value Delivery The final layer focuses on making our insights accessible and actionable: Lakehouse tables store cleaned, processed data Semantic models transform our predictions into business-friendly formats Power BI dashboards provide real-time visibility into phishing detection Real-time dashboards enable immediate response to potential threats The Power of Real-Time ML for Streaming Data What makes this architecture particularly powerful is its ability to: Process data in real-time as it streams in Apply sophisticated ML models without batch processing delays Provide immediate visibility into potential threats Scale automatically as data volumes grow Implementing the Machine Learning Pipeline Let’s dive into how we built and deployed our phishing detection model using Microsoft Fabric’s ML capabilities. What makes this implementation particularly interesting is how it combines traditional ML with streaming data processing. Building the ML Foundation First, let’s look at how we structured the training phase of our machine learning pipeline using PySpark: Training Notebook Connect to Eventhouse Load the data from pyspark.sql import SparkSession # Initialize Spark session (already set up in Fabric Notebooks) spark = SparkSession.builder.getOrCreate() # Define connection details kustoQuery = """ SampleData | project subscriberId, subscriberData, ingestion_time() """ # Replace with your desired KQL query kustoUri = "https://<eventhousedbUri>.z9.kusto.fabric.microsoft.com" # Replace with your Kusto cluster URI database = "Eventhouse" # Replace with your Kusto database name # Fetch the access token for authentication accessToken = mssparkutils.credentials.getToken(kustoUri) # Read data from Kusto using Spark df = spark.read \ .format("com.microsoft.kusto.spark.synapse.datasource") \ .option("accessToken", accessToken) \ .option("kustoCluster", kustoUri) \ .option("kustoDatabase", database) \ .option("kustoQuery", kustoQuery) \ .load() # Show the loaded data print("Loaded data:") df.show() Separate and flag Phishing payload Load it with Spark from pyspark.sql.functions import col, expr, when, udf from urllib.parse import urlparse # Define a UDF (User Defined Function) to extract the domain def extract_domain(url): if url.startswith('http'): return urlparse(url).netloc return None # Register the UDF with Spark extract_domain_udf = udf(extract_domain) # Feature engineering with Spark df = df.withColumn("is_url", col("subscriberData").startswith("http")) \ .withColumn("domain", extract_domain_udf(col("subscriberData"))) \ .withColumn("is_phishing", col("is_url")) # Show the transformed data df.show() Use Spark ML Lib to Train the model Evaluate the Model from pyspark.sql.functions import col from pyspark.ml.feature import Tokenizer, HashingTF, IDF from pyspark.ml.classification import RandomForestClassifier from pyspark.ml import Pipeline from pyspark.ml.evaluation import MulticlassClassificationEvaluator # Ensure the label column is of type double df = df.withColumn("is_phishing", col("is_phishing").cast("double")) # Tokenizer to break text into words tokenizer = 
Tokenizer(inputCol="subscriberData", outputCol="words") # Convert words to raw features using hashing hashingTF = HashingTF(inputCol="words", outputCol="rawFeatures", numFeatures=100) # Compute the term frequency-inverse document frequency (TF-IDF) idf = IDF(inputCol="rawFeatures", outputCol="features") # Random Forest Classifier rf = RandomForestClassifier(labelCol="is_phishing", featuresCol="features", numTrees=10) # Build the ML pipeline pipeline = Pipeline(stages=[tokenizer, hashingTF, idf, rf]) # Split the dataset into training and testing sets train_data, test_data = df.randomSplit([0.7, 0.3], seed=42) # Train the model model = pipeline.fit(train_data) # Make predictions on the test data predictions = model.transform(test_data) # Evaluate the model's accuracy evaluator = MulticlassClassificationEvaluator( labelCol="is_phishing", predictionCol="prediction", metricName="accuracy" ) accuracy = evaluator.evaluate(predictions) # Output the accuracy print(f"Model Accuracy: {accuracy}") Add Signature to AI Model from mlflow.models.signature import infer_signature from pyspark.sql import Row # Select a sample for inferring signature sample_data = train_data.limit(10).toPandas() # Create a Pandas DataFrame for schema inference input_sample = sample_data[["subscriberData"]] # Input column(s) output_sample = model.transform(train_data.limit(10)).select("prediction").toPandas() # Infer the signature signature = infer_signature(input_sample, output_sample) Run – Publish Model and Log Metric: Accuracy import mlflow from mlflow import spark # Start an MLflow run with mlflow.start_run() as run: # Log the Spark MLlib model with the signature mlflow.spark.log_model( spark_model=model, artifact_path="phishing_detector", registered_model_name="PhishingDetector", signature=signature # Add the inferred signature ) # Log metrics like accuracy mlflow.log_metric("accuracy", accuracy) print(f"Model logged successfully under run ID: {run.info.run_id}") Results and Impact Our implementation achieved: 81.8% accuracy in phishing detection Sub-second prediction times for streaming data Scalable processing of thousands of events per second Yes, that's a good start ! Now let's continue our post by explaining the deployment and operation phase of our ML solution: From Model to Production: Automating the ML Pipeline After training our model, the next crucial step is operationalizing it for real-time use. 
We’ve implemented one Pipeline with two activities that process our streaming data every 5 minutes: All Streaming Data Notebook # Main prediction snippet from synapse.ml.predict import MLFlowTransformer # Apply ML model for phishing detection model = MLFlowTransformer( inputCols=["subscriberData"], outputCol="predictions", modelName="PhishingDetector", modelVersion=3 ) # Transform and save all predictions df_with_predictions = model.transform(df) df_with_predictions.write.format('delta').mode("append").save("Tables/phishing_predictions") Clean Streaming Data Notebook # Filter for non-phishing data only non_phishing_df = df_with_predictions.filter(col("predictions") == 0) # Save clean data for business analysis non_phishing_df.write.format("delta").mode("append").save("Tables/clean_data") Creating Business Value What makes this architecture particularly powerful is the seamless transition from ML predictions to business insights: Delta Lake Integration: All predictions are stored in Delta format, ensuring ACID compliance Enables time travel and data versioning Perfect for creating semantic models Real-Time Processing: 5-minute refresh cycle ensures near real-time threat detection Automatic segregation of clean vs. suspicious data Immediate visibility into potential threats Business Intelligence Ready: Delta tables are directly compatible with semantic modeling Power BI can connect to these tables for live reporting Enables both historical analysis and real-time monitoring The Power of Semantic Models With our data now organized in Delta tables, we’re ready for: Creating dimensional models for better analysis Building real-time dashboards Generating automated reports Setting up alerts for security teams Real-Time Visualization Capabilities While Microsoft Fabric offers extensive visualization capabilities through Power BI, it’s worth highlighting one particularly powerful feature: direct KQL querying for real-time monitoring. Here’s a glimpse of how simple yet powerful this can be: SampleData | where EventProcessedUtcTime > ago(1m) // Fetch rows processed in the last 1 minute | project subscriberId, subscriberData, EventProcessedUtcTime This simple KQL query, when integrated into a dashboard, provides near real-time visibility into your streaming data with sub-minute latency. The visualization possibilities are extensive, but that’s a topic for another day. Conclusion: Bringing It All Together What we’ve built here is more than just a machine learning model – it’s a complete, production-ready system that: Ingests and processes streaming data in real-time Applies sophisticated ML algorithms for threat detection Automatically segregates clean from suspicious data Provides immediate visibility into potential threats The real power of Microsoft Fabric lies in how it seamlessly integrates these different components. From data ingestion through Eventhouse ad Lakehouse, to ML model training and deployment, to real-time monitoring – everything works together in a unified platform. What’s Next? While we’ve focused on phishing detection, this architecture can be adapted for various use cases: Fraud detection in financial transactions Quality control in manufacturing Customer behavior analysis Anomaly detection in IoT devices The possibilities are endless with our imagination and creativity! Stay tuned for the Git Repo where all the code will be shared ! 
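One last practical tip: because all results land in Delta tables, you can sanity-check the pipeline output directly from a Fabric notebook. Here is a small PySpark sketch; the table path matches the one written by the prediction notebook above, and the column names are assumed to follow the earlier feature-engineering notebook:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Read back the predictions written every 5 minutes by the pipeline
predictions = spark.read.format("delta").load("Tables/phishing_predictions")

# How many events were flagged vs. passed through as clean?
predictions.groupBy("predictions").count().show()

# Peek at the most recent suspicious entries
# (subscriberId / subscriberData columns are assumed from the training notebook)
predictions.filter(col("predictions") != 0) \
    .select("subscriberId", "subscriberData", "predictions") \
    .show(10, truncate=False)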
References Get Started with Microsoft Fabric Delta Lake in Fabric Overview of Eventhouse CloudBlogger: A guide to innovative Apps with MS Fabric
Bot Framework: Build an AI Security Assistant with ease
How to create intelligent Bots with the Bot Framework Intro In an era where cybersecurity threats loom large, the need for vigilant and responsive security measures has never been greater. The Microsoft Bot Framework SDK, with its powerful AI capabilities, offers a new frontier in security management. This blog post will delve into the development of such an AI security assistant, showcasing how to leverage the SDK to interpret security logs, generate KQL queries, and provide real-time security alerts. We’ll explore how to integrate with existing security infrastructure and harness the power of AI to build our own AI Security Assistant. Join us as we explore this exciting intersection of AI and cybersecurity, where intelligent bots stand guard against the ever-evolving landscape of digital threats. Setup Before we start with the Bot Framework SDK, we need to prepare our development environment. This section will guide you through the necessary steps to set up your “canvas” and get started with building your AI-powered writing assistant. Prerequisites: Visual Studio: Ensure you have Visual Studio installed with the .NET desktop development workload. You can download it from the official Microsoft website. Azure Subscription: An active Azure subscription is required to access the Copilot SDK and its related services. If you don’t have one already, you can sign up for a free trial. Bot Framework Emulator: This tool allows you to test your bot locally before deploying it to Azure. Download it from the Bot Framework website. Creating a New Bot Project: Install the Bot Framework SDK: Open Visual Studio and create a new project. Choose the “Echo Bot (Bot Framework v4)” template. This template provides a basic bot structure to get you started quickly. Install the required NuGet Packages: Azure.AI.OpenAI Azure.Core Microsoft.Bot.Builder.Integration.AspNet.Core Configure Your Bot: In the file, you’ll need to configure your bot with the appropriate API keys and endpoints for the Copilot service. You can obtain these credentials from your Azure portal.appsettings.json { "MicrosoftAppType": "xxxxxx", // Leave it empty until publish "MicrosoftAppId": "xxxx", // Leave it empty until publish "MicrosoftAppPassword": "xxxx", // Leave it empty until publish "MicrosoftAppTenantId": "xxxxxx", // Leave it empty until publish "AzureOpenAI": { "ApiKey": "xxxxxxxxxx", "DeploymentName": "gpt-4o", "Endpoint": "https://xxxx.openai.azure.com" }, "AzureSentinel": { // Log Analytics "ClientId": "xxxx", "ClientSecret": "xxxxx", "TenantId": "xxxx", "WorkspaceId": "xxxxx" } } When you open the Echo Bot we need to make changes to our code in order to achieve 3 things: Azure OpenAI Chat Interaction and generic advice KQL Query generation KQL Query execution against a Sentinel Workspace \ Log Analytics The main program is EchoBot.cs (you can rename as needed). 
using Microsoft.Bot.Builder; using Microsoft.Bot.Schema; using Newtonsoft.Json; using System.Net.Http; using System.IO; using System.Text; using System.Threading; using System.Threading.Tasks; using Microsoft.Extensions.Configuration; using Azure.AI.OpenAI; using OpenAI.Chat; using Azure; using System.Collections.Generic; using System.Linq; using System; using System.Text.RegularExpressions; namespace SecurityBot.Bots { public class Security : ActivityHandler { private readonly HttpClient _httpClient; private readonly AzureOpenAIClient _azureClient; private readonly string _chatDeployment; private readonly IConfiguration _configuration; private Dictionary<string, int> eventMapping; // Declare eventMapping here public Security(IConfiguration configuration) { _configuration = configuration ?? throw new ArgumentNullException(nameof(configuration)); _httpClient = new HttpClient(); // Load event mappings from JSON file string eventMappingPath = Path.Combine(AppContext.BaseDirectory, "eventMappings.json"); if (File.Exists(eventMappingPath)) { var json = File.ReadAllText(eventMappingPath); eventMapping = JsonConvert.DeserializeObject<Dictionary<string, int>>(json); } // Azure OpenAI Chat API configuration var endpoint = configuration["AzureOpenAI:Endpoint"]; var apiKey = configuration["AzureOpenAI:ApiKey"]; _chatDeployment = configuration["AzureOpenAI:DeploymentName"]; // Your Chat model deployment name // Initialize the Azure OpenAI client _azureClient = new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(apiKey)); } protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken) { var userInput = turnContext.Activity.Text.ToLower(); // Detect if the user wants to generate a query if (userInput.Contains("generate")) { // If the user says "generate", extract event and date, then generate the query var kqlQuery = await BuildKQLQueryFromInput(userInput, turnContext, cancellationToken); await turnContext.SendActivityAsync(MessageFactory.Text($"Generated KQL Query: {kqlQuery}"), cancellationToken); } else if (userInput.Contains("run")) { // If the user says "run", extract event and date, then run the query var kqlQuery = await BuildKQLQueryFromInput(userInput, turnContext, cancellationToken); var queryResult = await RunKqlQueryAsync(kqlQuery); await turnContext.SendActivityAsync(MessageFactory.Text($"KQL Query: {kqlQuery}\n\nResult: {queryResult}"), cancellationToken); } else { // For other inputs, handle the conversation with Azure OpenAI await GenerateChatResponseAsync(turnContext, userInput, cancellationToken); } } // Generate responses using the Azure OpenAI Chat API without streaming private async Task GenerateChatResponseAsync(ITurnContext<IMessageActivity> turnContext, string userInput, CancellationToken cancellationToken) { var chatClient = _azureClient.GetChatClient(_chatDeployment); // Set up the chat conversation context var chatMessages = new List<ChatMessage> { new SystemChatMessage("You are a cybersecurity assistant responding only to Security related questions.
For irrelevant topics answer with 'Irrelevant'"), new UserChatMessage(userInput) }; // Call the Azure OpenAI API to get the complete chat response var chatResponse = await chatClient.CompleteChatAsync(chatMessages); // Access the completion content properly var assistantMessage = chatResponse.Value.Content.FirstOrDefault()?.Text; if (!string.IsNullOrEmpty(assistantMessage)) { // Send the entire response to the user at once await turnContext.SendActivityAsync(MessageFactory.Text(assistantMessage.ToString().Trim()), cancellationToken); } else { await turnContext.SendActivityAsync(MessageFactory.Text("I'm sorry, I couldn't process your request."), cancellationToken); } } // Build a KQL query from the user's input using Text Analytics private async Task<string> BuildKQLQueryFromInput(string input, ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken) { // Start with a base KQL query string kqlQuery = "SecurityEvent | where 1 == 1 "; // Use the eventMapping dictionary to map the user's input to an EventID var matchedEventId = eventMapping.FirstOrDefault(mapping => input.Contains(mapping.Key)).Value; if (matchedEventId != 0) // EventID was found { kqlQuery += $"| where EventID == {matchedEventId} "; } else { // Fallback if no matching EventID is found await turnContext.SendActivityAsync(MessageFactory.Text("Sorry, I couldn't find a matching event ID for your request."), cancellationToken); return null; // Exit early if no valid EventID is found } // Extract the DateRange (e.g., "7 days") and add it to the query var dateRange = ExtractDateRange(input); if (!string.IsNullOrEmpty(dateRange)) { kqlQuery += $"| where TimeGenerated > ago({dateRange}) | project TimeGenerated, Account, Computer, EventID | take 10 "; } return kqlQuery; // Return the constructed KQL query } private string ExtractDateRange(string input) { // Simple extraction logic to detect "7 days", "3 days", etc. var match = Regex.Match(input, @"(\d+)\s+days?"); if (match.Success) { return $"{match.Groups[1].Value}d"; // Return as "7d", "3d", etc. 
} return null; // Return null if no date range found } // Run KQL query in Azure Sentinel / Log Analytics private async Task<string> RunKqlQueryAsync(string kqlQuery) { var _workspaceId = _configuration["AzureSentinel:WorkspaceId"]; string queryUrl = $"https://api.loganalytics.io/v1/workspaces/{_workspaceId}/query"; var accessToken = await GetAccessTokenAsync(); // Get Azure AD token var requestBody = new { query = kqlQuery }; var jsonContent = new StringContent(JsonConvert.SerializeObject(requestBody), Encoding.UTF8, "application/json"); _httpClient.DefaultRequestHeaders.Clear(); _httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {accessToken}"); var response = await _httpClient.PostAsync(queryUrl, jsonContent); var responseBody = await response.Content.ReadAsStringAsync(); return responseBody; // Return the query result } // Get Azure AD token for querying Log Analytics private async Task<string> GetAccessTokenAsync() { var _tenantId = _configuration["AzureSentinel:TenantId"]; var _clientId = _configuration["AzureSentinel:ClientId"]; var _clientSecret = _configuration["AzureSentinel:ClientSecret"]; var url = $"https://login.microsoftonline.com/{_tenantId}/oauth2/v2.0/token"; var body = new Dictionary<string, string> { { "grant_type", "client_credentials" }, { "client_id", _clientId }, { "client_secret", _clientSecret }, { "scope", "https://api.loganalytics.io/.default" } }; var content = new FormUrlEncodedContent(body); var response = await _httpClient.PostAsync(url, content); var responseBody = await response.Content.ReadAsStringAsync(); dynamic result = JsonConvert.DeserializeObject(responseBody); return result.access_token; } } } Event ID Mapping Let’s map most important Event ids to utterances. The Solution can be enhanced with Text Analytics and NLU, but for this workshop we are creating the dictionary. { "failed sign-in": 4625, "successful sign-in": 4624, "account lockout": 4740, "password change": 4723, "account creation": 4720, "logon type": 4624, "registry value was modified": 4657, "user account was changed": 4738, "user account was enabled": 4722, "user account was disabled": 4725, "user account was deleted": 4726, "user account was undeleted": 4743, "user account was locked out": 4767, "user account was unlocked": 4768, "user account was created": 4720, "attempt was made to duplicate a handle to an object": 4690, "indirect access to an object was requested": 4691, "backup of data protection master key was attempted": 4692, "recovery of data protection master key was attempted": 4693, "protection of auditable protected data was attempted": 4694, "unprotection of auditable protected data was attempted": 4695, "a primary token was assigned to process": 4696, "a service was installed in the system": 4697, "a scheduled task was created": 4698, "a scheduled task was deleted": 4699, "a scheduled task was enabled": 4700, "a scheduled task was disabled": 4701, "a scheduled task was updated": 4702, "a token right was adjusted": 4703, "a user right was assigned": 4704, "a user right was removed": 4705, "a new trust was created to a domain": 4706, "a trust to a domain was removed": 4707, "IPsec Services was started": 4709, "IPsec Services was disabled": 4710 } Make all required updates to Program.cs and Startup.cs for the Namespace and the public class. 
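To illustrate what those updates look like on the Startup side, here is a minimal Startup.cs sketch. It assumes the standard Echo Bot v4 template structure (the AdapterWithErrorHandler class ships with that template) and the bot class renamed to Security in the SecurityBot.Bots namespace, as in the listing above; the template's Program.cs follows right below.

// Startup.cs - minimal sketch based on the Echo Bot v4 template, adjusted for this project
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;
using Microsoft.Bot.Connector.Authentication;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace SecurityBot
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Controllers with Newtonsoft.Json, as required by the Bot Framework SDK
            services.AddHttpClient().AddControllers().AddNewtonsoftJson();

            // Bot Framework authentication, driven by the appsettings.json values (MicrosoftAppId, etc.)
            services.AddSingleton<BotFrameworkAuthentication, ConfigurationBotFrameworkAuthentication>();

            // The adapter with error handling, generated by the Echo Bot template
            services.AddTransient<IBotFrameworkHttpAdapter, AdapterWithErrorHandler>();

            // Point IBot at our renamed bot class so /api/messages routes to it
            services.AddTransient<IBot, Bots.Security>();
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseDefaultFiles()
               .UseStaticFiles()
               .UseWebSockets()
               .UseRouting()
               .UseAuthorization()
               .UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }
}

The only functional change compared to the generated template is the last registration in ConfigureServices, which points IBot at the Security class instead of the default EchoBot.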
// Generated with Bot Builder V4 SDK Template for Visual Studio EchoBot v4.22.0
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace SecurityBot
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

Testing
Run the application and open the Bot Framework Emulator to test the Bot. All you need is to add the localhost URL to the Emulator and try some chat interactions, for example:
What is a SOAR? (using OpenAI Chat)
Generate a KQL query for failed sign-in logs for the past 3 days
Run a KQL query for failed sign-in logs for the past 3 days
We get correct executions and KQL against our Sentinel \ Log Analytics workspace. Let’s build this Bot on Azure and use it from our Teams client as our trusted Security Assistant!

Build on Azure
The logic behind building a Bot on Azure is to create an Azure Web App and then the relevant Azure Bot Service. All the steps are published in the Microsoft documentation. You will find the ARM templates in the Solution window in Visual Studio 2022.
Use the following commands to create your app registration and set its password. On success, these commands generate JSON output.
Use the az ad app create command to create a Microsoft Entra ID app registration. This command generates an app ID that you’ll use in the next step.
az ad app create --display-name "<app-registration-display-name>" --sign-in-audience "AzureADMyOrg"
Use "AzureADMyOrg" for a single-tenant app.
Use the az ad app credential reset command to generate a new password for your app registration:
az ad app credential reset --id "<appId>"
Record the values you’ll need in later steps: the app ID and password from the command output.
Once you have the App Registration ready and configured, deploy the Web App on Azure using the deployment templates. Create the App Service and the Azure Bot resources for your bot. Both steps use an ARM template and the az deployment group create Azure CLI command to create the resource or resources.
Create an App Service resource for your bot. The App Service can be within a new or existing App Service Plan. For detailed steps, see Use Azure CLI to create an App Service.
Create an Azure Bot resource for your bot. For detailed steps, see Use Azure CLI to create or update an Azure Bot.
az deployment group create --resource-group <resource-group> --template-file <template-file-path> --parameters "@<parameters-file-path>"
Now it’s time to build and publish the Bot; make sure you have run the Bot resource ARM deployment as we did with the Web App.
Create the deployment file for the Bot:
Switch to your project’s root folder. For C#, the root is the folder that contains the .csproj file.
Do a clean rebuild in release mode.
If you haven’t done so before, run az bot prepare-deploy to add required files to the root of your local source code directory. This command generates a .deployment file in your bot project folder.
Within your project’s root folder, create a zip file that contains all files and sub-folders.
After this, I suggest you either:
Run the az webapp deploy command from the command line to perform a deployment using the Kudu zip push deployment for your app service (web app), or
Select the Publish option from Solution Explorer and publish using the created Web App.
Remember to add the App ID and the relevant details to the appsettings.json we saw earlier.
In case you need to re-test with the Emulator, remove the App Type, App ID, Password and Tenant ID settings before running the App locally! Upon success, make sure the Bot Messaging Endpoint has the Web App URL we created, followed by the /api/messages suffix. In case it is missing, add it.
Now we must add the correct API permissions to the App registration in Entra ID. Select the App Registration, go to API Permissions, add a permission and select APIs my organization uses. Find Log Analytics and add the Application permissions for Read. This way we are able to run\execute KQL against our Sentinel – Log Analytics Workspace.

Bot Channels – Teams
Now that our Bot is active and we can test it in “Test in Web Chat”, we can create the Teams integration. It is really a simple step, where we select the Teams option from the Channels and verify the configuration. Once we enable that, we can get the HTTPS code from the Get Embed option in the Channels menu, or open the URL directly when we select the Teams Channel.
Before we start using the Bot we must make a significant configuration in the Teams Admin Center. Otherwise the Bot will probably show up but be unable to get messages from the chat.

Bot in Teams
Finally we are able to use our Security Assistant Bot in Teams, in the Web or Desktop App. The Bot will provide generic advice from the Azure OpenAI chat model, generate KQL queries for a number of Events, and execute those queries in Log Analytics, and we will see the results in our UI. We can always change the appearance of the results; in this workshop we have a minimal presentation for better visibility. The next phase of this deployment can utilize the Language Service, where all Event IDs are dynamically recognized through a Text Analytics service.

Conclusion
In conclusion, this workshop demonstrated the seamless integration of Azure’s powerful AI services and Log Analytics to build a smart, security-focused chatbot. By leveraging tools like Azure OpenAI, Log Analytics, and the Bot Framework, we’ve empowered bots to provide dynamic insights and interact meaningfully with data. Whether it’s querying log events or responding to security inquiries, this solution highlights the potential of AI-driven assistants to elevate security operations. Keep exploring and building with Azure, and unlock new possibilities in automation and intelligence!
Architecture:

Azure AI Assistants with Logic Apps
Introduction to AI Automation with Azure OpenAI Assistants

Intro
Welcome to the future of automation! In the world of Azure, AI assistants are becoming your trusty sidekicks, ready to tackle the repetitive tasks that once consumed your valuable time. But what if we could make these assistants even smarter? In this post, we’ll dive into the exciting realm of integrating Azure AI assistants with Logic Apps – Microsoft’s powerful workflow automation tool. Get ready to discover how this dynamic duo can transform your workflows, freeing you up to focus on the big picture and truly innovative work.

Azure OpenAI Assistants (preview)
Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions, augmented by advanced tools like the code interpreter and custom functions. To accelerate and simplify the creation of intelligent applications, we can now enable the ability to call Logic Apps workflows through function calling in Azure OpenAI Assistants. The Assistants playground enumerates and lists all the workflows in your subscription that are eligible for function calling. Here are the requirements for these workflows:
Schema: The workflows you want to use for function calling should have a JSON schema describing the inputs and expected outputs. Using Logic Apps you can streamline this and provide the schema in the trigger, which is automatically imported as a function definition.
Consumption Logic Apps: Currently, only Consumption workflows are supported.
Request trigger: Function calling requires a REST-based API. Logic Apps with a request trigger provide a REST endpoint. Therefore, only workflows with a request trigger are supported for function calling.

AI Automation
So apart from the Assistants API, which we will explore in another post, we know that we can integrate Azure Logic Apps workflows! Isn’t that amazing? The road is now open for AI Automation and we are at the genesis of it, so let’s explore it. We need an Azure Subscription and:
Azure OpenAI in the supported regions. This demo is on Sweden Central.
A Logic Apps Consumption plan.
We will work in Azure OpenAI Studio and utilize the Playground. Our model deployment is GPT-4o. The Assistants Playground offers the ability to create and save our Assistants, so we can start working, return later, open the Assistant and continue. We can find the System Message option and the three tools that enhance the Assistants with Code Interpreter, Function Calling (including Logic Apps) and Files upload. The following list describes the configuration elements of our Assistants:
Assistant name: Your deployment name that is associated with a specific model.
Instructions: Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, tell it what it should and shouldn’t answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering.
Deployment: This is where you set which model deployment to use with your assistant.
Functions: Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications.
Code interpreter: Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code.
Files: You can upload up to 20 files, with a max file size of 512 MB, to use with tools. You can upload up to 10,000 files using AI Studio.
The Studio provides 2 sample Functions (Get Weather and Get Stock Price) to get an idea of the schema requirement in JSON for Function Calling. It is important to provide a clear message that makes the Assistant efficient and productive, with careful consideration, since the longer the message, the more tokens are consumed.

Challenge #1 – Summarize WordPress Blog Posts
How about providing a prompt to the Assistant with a URL instructing it to summarize a WordPress blog post? It is WordPress because we have a unified API and we only need to change the URL. We could be stricter and narrow down the scope to a specific URL, but let’s see the flexibility of Logic Apps in a workflow. We should start with the Logic App. We will generate the JSON schema directly from the Trigger, which must be an HTTP request.

{
  "name": "__ALA__lgkapp002", // Remove this for the Logic App Trigger
  "description": "Fetch the latest post from a WordPress website, summarize it, and return the summary.",
  "parameters": {
    "type": "object",
    "properties": {
      "url": { "type": "string", "description": "The base URL of the WordPress site" },
      "post": { "type": "string", "description": "The page number" }
    },
    "required": [ "url", "post" ]
  }
}

In the Designer this looks like this: As you can see the schema is the same, excluding the name, which is needed only in the OpenAI Assistants. We will see this detail later on. Let’s continue with the call to WordPress, an HTTP REST API call. And finally, mandatory as it is, a Response action where we tell the Assistant that the call was completed and bring back some payload, in our case the body of the previous step.
Now it is time to open our Azure OpenAI Studio and create a new Assistant. Remember the prerequisites we discussed earlier! From the Assistants menu create a [+New] Assistant, give it a meaningful name, select the deployment and add a System Message. For our case it could be something like: “You are a helpful Assistant that summarizes the WordPress Blog Posts the users request, using Functions. You can utilize code interpreter in a sandbox Environment for advanced analysis and tasks if needed”. The code interpreter here could be overkill, but we mention it to show its use! Remember to save the Assistant.
Now, in the Functions, do not select Logic Apps; rather stay on the custom box and add the code we presented earlier. The Assistant will understand that the Logic App named xxxx must be called, aka [“name”: “__ALA__lgkapp002“,] in the schema! In fact the Logic App is declared by 2 underscores as prefix and 2 underscores as suffix, with ALA inside, followed by the name of the Logic App. Let’s give our Assistant a Prompt and see what happens: The Assistant responded pretty solidly with a meaningful summary of the post we asked for! Not bad at all for a Preview service.

Challenge #2 – Create Azure Virtual Machine based on preferences
For the purpose of this task we have activated a system-assigned managed identity on the Logic App we use, and pre-provisioned a Virtual Network with a subnet as well. The Logic App must reside in the same subscription as our Azure OpenAI resource. This is a more advanced request, but after all it translates to Logic Apps capabilities. Can we do it fast enough so the Assistant won’t time out? Yes we can, by using the latest Azure Resource Manager API, which indeed is lightning fast!
The process must follow the same pattern: Request – Actions – Response. The request in our case must include such input that the Logic App can carry out the tasks. The schema should include a “name” input which tells the Assistant which Logic App to look up:

{
  "name": "__ALA__assistkp02", // Remove this for the Logic App Trigger
  "description": "Create an Azure VM based on the user input",
  "parameters": {
    "type": "object",
    "properties": {
      "name": { "type": "string", "description": "The name of the VM" },
      "location": { "type": "string", "description": "The region of the VM" },
      "size": { "type": "string", "description": "The size of the VM" },
      "os": { "type": "string", "description": "The OS of the VM" }
    },
    "required": [ "name", "location", "size", "os" ]
  }
}

And the actual screenshot from the Trigger; observe the absence of the “name” here:
Now, as we have a number of options, this method allows us to keep track of everything, including the user’s inputs like VM name, VM size, VM OS etc. Of course someone can expand this, since we use a default resource group and a default VNET and subnet, but it’s also configurable! So let’s store the input in variables; we initialize 5 variables: the name, the size, the location (which is preset for reduced complexity since we don’t create a new VNET), and we break down the OS. Let’s say the user selects Windows 10. The API expects an offer and a sku. So we take Windows 10 and create an offer variable, and the same with the OS: we create an OS variable which holds the expected sku:

if(equals(triggerBody()?['os'], 'Windows 10'), 'Windows-10', if(equals(triggerBody()?['os'], 'Windows 11'), 'Windows-11', 'default-offer'))

if(equals(triggerBody()?['os'], 'Windows 10'), 'win10-22h2-pro-g2', if(equals(triggerBody()?['os'], 'Windows 11'), 'win11-22h2-pro', 'default-sku'))

As you understand, this is narrowed down to the available Windows desktop choices only, but we can expand the Logic App to catch most well-known operating systems. After the variables, all we have to do is create a Public IP (optional), a Network Interface, and finally the VM. This is the most efficient way I could make it, so we won’t get complaints from the API and it will complete very fast! Like 3-seconds fast! The API calls are quite straightforward and everything is available in the Microsoft documentation. Let’s see an example for the Public IP: And the Create VM action, with a highlight on the storage profile – OS image setup:
Finally we need the response, which can be whatever we like it to be. I am facilitating the Assistant’s response with an additional action, “Get Virtual Machine”, that allows us to include the properties which we add in the response body:
Let’s make our request now, through the Assistants playground in Azure OpenAI Studio. Our prompt is quite clear: “Create a new VM with size=Standard_D4s_v3, location=swedencentral, os=Windows 11, name=mynewvm02”. Even if we don’t add the parameters, the Assistant will ask for them, as we have set in the System Message. Pay attention to the limitation as well: when we ask about the Public IP, the Assistant does not know it. Yet it informs us with a specific message that makes sense and is relevant to the whole operation. If we want to have a look at the time it took, we will be amazed: the total time, from the user request to the response from the Assistant, is around 10 seconds. We have a limit of 10 minutes for Function Calling execution, so we can build a whole infrastructure using just our prompts.
Conclusion
In conclusion, this experiment highlights the powerful synergy between Azure AI Assistant’s Function Calling capability and the automation potential of Logic Apps. By successfully tackling two distinct challenges, we’ve demonstrated how this combination can streamline workflows, boost efficiency, and unlock new possibilities for integrating intelligent decision-making into your business processes. Whether you’re automating customer support interactions, managing data pipelines, or optimizing resource allocation, the integration of AI assistants and Logic Apps opens doors to a more intelligent and responsive future. We encourage you to explore these tools further and discover how they can revolutionize your own automation journey.
References:
Getting started with Azure OpenAI Assistants (Preview)
Call Azure Logic apps as functions using Azure OpenAI Assistants
Azure OpenAI Assistants function calling
Azure OpenAI Service models
What is Azure Logic Apps?
Azure Resource Manager – Rest Operations

Re: Creating and customizing Copilots in Copilot Studio
No worries, I thought so! I wish I could change your perception. Let me explain with respect to your view. Dealing with AI means programming a robot. It is that simple and awful at the same time. We have to program each and every movement, from the blink of an eye to a hand movement and so on. Traditional developers are fully occupied writing code and creating amazing web apps. Copilot Studio helps us non-coders experience the creation process with a no code \ low code approach, but still we are in the Microsoft realm. This was previously Power Virtual Agents but got rebranded and integrated to expand the audience, and frankly it is the closest platform to create that robot with minimal effort, using specific tools, and for free under a limited time frame! So I can understand your point; on the other hand, you know... that robot can't build itself!!! Thank you for your time!

Re: Creating and customizing Copilots in Copilot Studio
Hello! Thank you for your feedback. This is a tech community and I have posted a technical solution for a custom Copilot creation. Each preference is respected, but a low code platform to create a Copilot does not exist yet! Unless you touch code you cannot create a solid copilot in AI, and Copilot Studio is here to facilitate that with no code \ low code!

Creating and customizing Copilots in Copilot Studio
How to create a Copilot and use it in your Blog with your blog’s data

Intro
Today, we’re going to embark on an exciting journey of creating our very own AI assistant, or ‘Copilot’, using the powerful Copilot Studio. But that’s not all! We’ll also learn how to seamlessly integrate this Copilot into our WordPress site, transforming it into a dynamic, interactive platform. Our WordPress site will serve as the primary data source, enabling our Copilot to provide personalized and context-aware responses. Whether you’re a seasoned developer or a tech enthusiast, this guide will offer a step-by-step approach to leverage AI capabilities for your WordPress site. So, let’s dive in and start our AI adventure!

Preparation
Luckily we can try Copilot Studio with a trial license. So head on to https://learn.microsoft.com/en-us/microsoft-copilot-studio/sign-up-individual and find all the details. You will have to sign in with a Microsoft 365 user email. You need a Microsoft 365 Tenant, as you understand! For those who are actively using Power Apps, I suggest having a good look at https://learn.microsoft.com/en-us/microsoft-copilot-studio/environments-first-run-experience, so you can grasp the details regarding Environments.

Creation
Once we are ready, head over to https://copilotstudio.microsoft.com and you can start working with new Copilots! Let’s create one, shall we? Select the upper-left Copilots menu, and New Copilot. Add the name you want and add your Blog\Site from which the Copilot will get its data. Go to the bottom, select Edit Advanced Options, check the “Include lesson topics…” option, select an icon and leave the default “Common Data Services Default Solution”. Once you create the Copilot you will find it in the left menu in the Copilots section.

Configure
The first thing we are going to do is change the Copilot’s salutation message. There is a default one, which we can change once we click on the Copilot and select the Copilot message inside the chat box. In the designer area on the left we will find the predefined message, which we will change to our preference. Remember to save your changes!

Topics
The most important elements of our Copilot are the Topics. Topics are the core building blocks of a chatbot. Topics can be seen as the bot competencies: they define how a conversation dialog plays out. Topics are discrete conversation paths that, when used together, allow for users to have a conversation with a bot that feels natural and flows appropriately. In our Copilot we have 3 Topics that we do not need, so from the Topics menu, select each Lesson Topic from the dotted selection and disable it. You can also delete these three unneeded Topics completely. It is also important to disable Topics that we don’t need; otherwise we have to resolve any errors on the existing Topics, since we are making changes. The Topics we need to disable are in grey.
Before going deeper, we also changed a standard Topic named “Goodbye”. You will understand that we may need to make it simpler, so here is a proposed version: As you can see, we just changed the end of the chat with a simple “Thanks for using …”. We also propose changing the Greeting to redirect to the Conversation Start for a unified experience!
Let’s create a simple Topic, where the Copilot responds to specific questions. You can add your own phrases as well. From the Topics menu select “Create” – “Topic” – “From Blank”. Add the Trigger phrases you wish. We have selected the following: What do you do?, What is your reach?, What can you tell me?
Add a node with the Message property and add the text which the Copilot will use to answer. You can add the name of the Copilot by selecting the variable icon inside the node. Add a final node that ends this topic. You can edit the name of your Topic in the upper left corner, and save it! Before anything else, you can always test it in the chat box on the left!
Now let’s do something more creative! Let’s ask the user if they would provide their email so we can send a summary of the conversation! The Copilot should make it clear that it is optional and should not interrupt the conversation. So the first thing we need to do is add a new Topic where we can get the user’s email address and store it as a variable. Since the user can request to provide the email later, we offer this option as well, with the trigger. Here is our Topic:
Pay attention to the closing node and the comment. We have added a Redirect to the Greeting Topic, so we can avoid falling into the loop of the Conversation Start. To do that we add a new node, Topic Management – Go to another Topic.
Now let’s build the request with a condition, by editing the Conversation Start Topic (the one we edited at the beginning). From the Topics menu select All and find the Conversation Start Topic. Add a new node after the Message with a question. We have this text so the user is aware of the options they have: Would you like to provide an email so you can get a summary of our interaction? It is optional and you can add it later by simply saying “Get my Email”!
In this Question node, select the Multiple Choice options and add the YES and NO possible answers, while saving the answer in a variable. You can rename the variable if you want to. The next node is an “Add a Condition” node: when the answer is YES we send the conversation to the Get User’s Email Topic, while in the opposite case we send it to the Greeting. Here is our design for the Topic:
Save the Topic, and you can test your Copilot in the chat box on the left. You will notice that we can’t redirect the user without a validation message. So we can edit the Get User’s Email Topic with a Message node like this: Now we have the basic idea of the Topics! Play around and create your own paths! Be careful not to fall into loops, and always test the Copilot! We can expand to Power Apps for data operations, like storing the email in a table or creating a flow in Power Automate, but that’s not our focus.

Authentication – Channels
Once we are happy with our Copilot we need to make it available to our Channels, specifically to websites. If we select Channels from the left menu we will get a message about Authentication. So we have to follow a straightforward process to configure Authentication for our Copilot to be available in all Channels. Unless we want users to sign in, we won’t activate that option, but you can always change it. We will enable Entra ID as our Service Provider. The following part is from the Microsoft documentation. Source: Configure user authentication with Microsoft Entra ID – Microsoft Copilot Studio | Microsoft Learn

Create an app registration
Sign in to the Azure portal, using an admin account in the same tenant as your copilot.
Go to App registrations, either by selecting the icon or searching in the top search bar.
Select New registration and enter a name for the registration. It can be helpful later to use the name of your copilot.
For example, if your copilot is called “Contoso sales help,” you might name the app registration “ContosoSalesReg.”
Under Supported account types, select Accounts in any organizational directory (Any Microsoft Entra ID directory – Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox).
Leave the Redirect URI section blank for now. Enter that information in the next steps.
Select Register.
After the registration is complete, go to Overview. Copy the Application (client) ID and paste it in a temporary file. You need it in later steps.

Add the redirect URL
Go to Authentication, and then select Add a platform.
Under Platform configurations, select Add a platform, and then select Web.
Under Redirect URIs, enter https://token.botframework.com/.auth/web/redirect and https://europe.token.botframework.com/.auth/web/redirect.
Note: The authentication configuration pane in Copilot Studio might show the following redirect URL: https://unitedstates.token.botframework.com/.auth/web/redirect. Using that URL makes the authentication fail; use the URIs above instead.
In the Implicit grant and hybrid flows section, turn on both Access tokens (used for implicit flows) and ID tokens (used for implicit and hybrid flows).
Select Configure.

Generate a client secret
Go to Certificates & secrets.
In the Client secrets section, select New client secret.
(Optional) Enter a description. One is provided if left blank.
Select the expiry period. Select the shortest period that’s relevant for the life of your copilot.
Select Add to create the secret.
Store the secret’s Value in a secure temporary file. You need it when you configure your copilot’s authentication later on.
Tip: Don’t leave the page before you copy the value of the client secret. If you do, the value is obfuscated and you must generate a new client secret.

Configure manual authentication
In Copilot Studio, in the navigation menu under Settings, select Security. Then select the Authentication card.
Select Manual (for any channel including Teams), then turn on Require users to sign in.
Enter the following values for the properties:
Service provider: Select Microsoft Entra ID.
Client ID: Enter the application (client) ID that you copied earlier from the Azure portal.
Client secret: Enter the client secret you generated earlier from the Azure portal.
Scopes: Enter profile openid.
Select Save to finish the configuration.

Configure API permissions
Go to API permissions.
Select Grant admin consent for <your tenant name>, and then select Yes. If the button isn’t available, you may need to ask a tenant administrator to do it for you.
Note: To avoid users having to consent to each application, a Global Administrator, Application Administrator, or Cloud Application Administrator can grant tenant-wide consent to your app registrations.
Select Add a permission, and then select Microsoft Graph.
Select Delegated permissions.
Expand OpenId permissions and turn on openid and profile.
Select Add permissions.

Define a custom scope for your copilot
Scopes allow you to determine user and admin roles and access rights. You create a custom scope for the canvas app registration that you create in a later step.
Go to Expose an API and select Add a scope.
Set the following properties.
You can leave the other properties blank.
Property – Value
Scope name – Enter a name that makes sense in your environment, such as Test.Read
Who can consent? – Select Admins and users
Admin consent display name – Enter a name that makes sense in your environment, such as Test.Read
Admin consent description – Enter "Allows the app to sign the user in."
State – Select Enabled
Select Add scope.
Source: Configure user authentication with Microsoft Entra ID – Microsoft Copilot Studio | Microsoft Learn
You can always make the Copilot more secure by adding required Authentication and SSO. Read the documentation to see how you can also add scopes on the Copilot. Now it’s time to Publish! Hit Publish from the menu and publish your Copilot. If any errors occur, they will most likely be in a Topic. Read our instructions carefully, and of course you can make your own routes since you got the concept! Once publishing is done, the Channels menu will activate all channels, and from the Custom Website channel you can grab the embed code and add it in a post on your WordPress site or your webpage! You can also see it in the Demo Website if you have not enabled “require secure access”. Here it is in the actual WordPress site using the embedded code:

Closing
With Copilot Studio, building a custom AI assistant and seamlessly integrating it into your WordPress site is simpler than you might have imagined. It empowers you to create a more dynamic and personalized user experience. Whether you’re looking to automate tasks, provide intelligent insights, or offer a more conversational interface on your site, Copilot Studio provides the tools and straightforward process to get you there. Remember, the possibilities are endless. Experiment, refine, and watch as your WordPress site becomes a hub of unparalleled AI-powered engagement!

References
Create Copilots with Copilot Studio
Manage Topics in Copilot Studio
AI-based copilot authoring overview
Quickstart guide for building copilots with generative AI
Microsoft Copilot Studio overview

Semantic Kernel: Develop your AI Integrated Web App on Azure and .NET 8.0
How to create a Smart Career Advice and Job Search Engine with Semantic Kernel

The concept
The Rise of Semantic Kernel
Semantic Kernel, an open-source development kit, has taken the .NET community by storm. With support for C#, Python, and Java, it seamlessly integrates with dotnet services and applications. But what makes it truly remarkable? Let’s dive into the details.
A Perfect Match: Semantic Kernel and .NET
Picture this: you’re building a web app, and you want to infuse it with AI magic. Enter Semantic Kernel. It’s like the secret sauce that binds your dotnet services and AI capabilities into a harmonious blend. Whether you’re a seasoned developer or just dipping your toes into AI waters, Semantic Kernel simplifies the process. As part of the Semantic Kernel community, I’ve witnessed its evolution firsthand. The collaborative spirit, the shared knowledge—it’s electrifying! We’re not just building software; we’re shaping the future of AI-driven web applications.
The Web App
Our initial plan was simple: create a job recommendations engine. But Semantic Kernel had other ideas. It took us on an exhilarating ride. Now, our web application not only suggests career paths but also taps into third-party APIs to fetch relevant job listings. And that’s not all—it even crafts personalized skilling plans and preps candidates for interviews. Talk about exceeding expectations!

Build
Since I have already created the repository on GitHub, I don’t think it is critical to re-post the Terraform files here. We are building our main infrastructure with Terraform and also invoke an Azure CLI script to automate the container image build and push. We will have these resources at the end:
Before deployment make sure to assign the Service Principal the “RBAC Administrator” role and narrow down the assignments to AcrPull and AcrPush, so you can create a User Assigned Managed Identity with these roles. Since we are building and pushing the container images with local-exec and Azure CLI scripts within Terraform, you will notice some explicit dependencies, so we can make sure everything builds in order. It is really amazing that we can build all the infrastructure, including the apps, with Terraform!

Architecture
Upon completion you will have a functioning React Web App with an ASP.NET Core Web API, utilizing Semantic Kernel and an external job listings API, to get advice, find jobs and get a skilling plan for a specific recommended role! The following is a reference architecture. Aside from the Private Endpoints, the same deployment is available in GitHub.

Kernel SDK
The SDK provides a simple yet powerful array of commands to configure and “set” the Semantic Kernel characteristics. Let’s see the first endpoint, where users ask for recommended career paths:

[HttpPost("get-recommendations")]
public async Task<IActionResult> GetRecommendations([FromBody] UserInput userInput)
{
    _logger.LogInformation("Received user input: {Skills}, {Interests}, {Experience}", userInput.Skills, userInput.Interests, userInput.Experience);
    var query = $"I have the following skills: {userInput.Skills}. " +
                $"My interests are: {userInput.Interests}. " +
                $"My experience includes: {userInput.Experience}. " +
                "Based on this information, what career paths would you recommend for me?";
    var history = new ChatHistory();
    history.AddUserMessage(query);
    ChatMessageContent?
    result = await _chatCompletionService.GetChatMessageContentAsync(history);
    if (result == null)
    {
        _logger.LogError("Received null result from the chat completion service.");
        return StatusCode(500, "Error processing your request.");
    }
    string content = result.Content;
    _logger.LogInformation("Received content: {Content}", content);
    var recommendations = ParseRecommendations(content);
    _logger.LogInformation("Returning recommendations: {Count}", recommendations.Count);
    return Ok(new { recommendations });
}

The actual data flow is depicted below, and we can see the interaction with the local endpoints and the external endpoint as well. The user provides Skills, Interests, Experience and the Level of their current position, and the API sends the payload to Semantic Kernel with a constructed prompt asking for position recommendations. The recommendations return with clickable buttons: one to find relevant positions from LinkedIn listings using the external API, and another to ask the Semantic Kernel again for skill-up advice!
The UI experience:
Recommendations:
Skill Up Plan:
Job Listings:
The project can be extended to a point of automation and AI integration where users can upload their CVs and ask the Semantic Kernel to provide feedback, as well as apply for a specific position! As we discussed earlier, some additional optimizations are good to have, like the Private Endpoints, Azure Front Door and/or Azure Firewall, but the point is to see Semantic Kernel in action with its amazing capabilities, especially when used within the .NET SDK.
Important Note: This could have been a one-shot deployment, but we cannot add the custom domain with Terraform (unless we use Azure DNS), nor the CORS settings. So we have to add these details for our solution to function properly! Once the Terraform completes, add the custom domains to both Container Apps. The advantage here is that we will know the frontend and backend FQDNs, since we decide the domain names, and the React environment value is preconfigured with the backend URL. Same for the backend: we have set the frontend URL as the environment value for ALLOWED_ORIGINS. So we can just go to Custom Domain on each App, and add the domain names after selecting the certificate, which will already be there, since we have uploaded it via Terraform!

Lessons Learned
This was a real adventure and I want to share important lessons learned, and hopefully save you some time and effort.
Prepare ahead with a certificate. I was having problems from the get-go, with ASP.NET refusing to build on containers until I integrated the certificate. Local development works fine without it.
Cross-origin settings are very important, do not underestimate them! Configure them correctly; in this example I went directly to custom domains, so I could have better overall control.
This solution worked both on Azure Web Apps and Azure Container Apps. The GitHub repo has the Container Apps solution, but you can go with Web Apps.
Finally, don’t waste your time trying to go with Dapr. React does not ‘react’ well with the Dapr client, and my lesson learned here is that Dapr is made for same-framework invocation, or you are going to need a middleware.
Since we cannot create the custom domain with Terraform, there are solutions we can use, like AzApi.
We utilized a small portion of what Semantic Kernel can really do, and I stopped when I realized that this project would never end if I continued pursuing ideas! It is much better to have it on GitHub, and we can probably come back and add some more features!
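One practical wiring note before the conclusion: the get-recommendations controller above receives its IChatCompletionService through dependency injection. A minimal sketch of how that registration could look in the backend's Program.cs is shown below, assuming Semantic Kernel 1.x with the Azure OpenAI connector; the configuration keys are placeholders that should match your own appsettings.

// Program.cs (backend) - minimal Semantic Kernel wiring sketch, assuming SK 1.x
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Register the Kernel and an Azure OpenAI chat completion service.
// This also registers IChatCompletionService, which the controller consumes.
builder.Services.AddKernel()
    .AddAzureOpenAIChatCompletion(
        deploymentName: builder.Configuration["AzureOpenAI:DeploymentName"]!, // placeholder config keys
        endpoint: builder.Configuration["AzureOpenAI:Endpoint"]!,
        apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!);

var app = builder.Build();
app.MapControllers();
app.Run();

With this in place, ASP.NET Core can inject IChatCompletionService (and the Kernel itself, if needed) straight into the controller's constructor, which is exactly what the endpoint above relies on.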
Conclusion
In this journey through the intersection of technology and career guidance, we’ve explored the powerful capabilities of Azure Container Apps and the transformative potential of Semantic Kernel, Microsoft’s open-source development kit. By seamlessly integrating AI into .NET applications, Semantic Kernel has not only simplified the development process but also opened new doors for innovation in career advice. Our adventure began with a simple idea—creating a job recommendations engine. However, with the help of Semantic Kernel, this idea evolved into a sophisticated web application that goes beyond recommendations. It connects to third-party APIs, crafts personalized skilling plans, and prepares candidates for interviews, demonstrating the true power of AI-driven solutions. By leveraging Terraform for infrastructure management and Azure CLI for automating container builds, we successfully deployed a robust architecture that includes a React Web App, an ASP.NET Core Web API, and integrated AI services. This project highlights the ease and efficiency of building and deploying cloud-based applications with modern tools. The code is available on GitHub for you to explore, contribute to and extend as much as you want!
GitHub Repo: Semantic Kernel - Career Advice
Links\References
Intro to Semantic Kernel
Understanding the kernel
Chat completion
Deep dive into Semantic Kernel
Azure Container Apps documentation