Approaches to Integrating Azure Databricks with Microsoft Fabric: The Better Together Story!
Azure Databricks and Microsoft Fabric can be combined to create a unified and scalable analytics ecosystem. This document outlines eight distinct integration approaches, each accompanied by step-by-step implementation guidance and key design considerations. These methods are not prescriptive: your cloud architecture team can choose the integration strategy that best aligns with your organization's governance model, workload requirements, and platform preferences. Whether you prioritize centralized orchestration, direct data access, or seamless reporting, the flexibility of these options allows you to tailor the solution to your specific needs.

Network Connectivity Check APIs for Logic App Standard
Introduction

When your Logic App Standard is integrated with a Virtual Network (VNET), you can use these APIs to troubleshoot connectivity issues to downstream resources such as SQL databases, Storage Accounts, Service Bus, Key Vault, and more. The checks run directly from the worker hosting your Logic App, so the results reflect the actual network path your workflows use.

API Overview

- ConnectivityCheck (POST /connectivityCheck): Validates end-to-end connectivity to an Azure resource (SQL, Key Vault, Storage, Service Bus, etc.)
- DnsCheck (POST /dnsCheck): Performs DNS resolution for a hostname
- TcpPingCheck (POST /tcpPingCheck): Performs a TCP ping to a host and port

How to Call

Using the Azure API Playground: sign in with your Azure account at https://portal.azure.com/#view/Microsoft_Azure_Resources/ArmPlayground.ReactView and use the POST method with the URLs below. Instead of the API Playground, you can also use PowerShell or az rest.

URL Pattern

Production slot:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{logicAppName}/connectivityCheck?api-version=2026-03-01-preview

Deployment slot:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{logicAppName}/slots/{slotName}/connectivityCheck?api-version=2026-03-01-preview

Replace connectivityCheck with dnsCheck or tcpPingCheck as needed. All requests must have a JSON body.

1. ConnectivityCheck

Tests end-to-end connectivity from your Logic App to an Azure resource. This validates DNS, TCP, and authentication in a single call.
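The URL patterns and request bodies above are easy to script. Below is a minimal sketch, assuming Python 3 and only the standard library; the helper name check_url and the app-setting name SqlConnection are my own illustrative choices, not part of the API. It builds the management URL for any of the three checks and serializes a request body that you could then send with a tool such as az rest:

```python
import json

API_VERSION = "2026-03-01-preview"  # preview API version used throughout this article

def check_url(sub_id, rg, app_name, check, slot=None):
    """Build the ARM URL for connectivityCheck, dnsCheck, or tcpPingCheck.

    When `slot` is given, the URL targets a deployment slot instead of the
    production slot.
    """
    base = (f"https://management.azure.com/subscriptions/{sub_id}"
            f"/resourceGroups/{rg}/providers/Microsoft.Web/sites/{app_name}")
    if slot:
        base += f"/slots/{slot}"
    return f"{base}/{check}?api-version={API_VERSION}"

# Request bodies mirror the samples in this article: every check takes a
# JSON object with a top-level "properties" field.
dns_body = {"properties": {"dnsName": "myserver.database.windows.net"}}
tcp_body = {"properties": {"host": "myserver.database.windows.net", "port": "1433"}}
sql_body = {
    "properties": {
        "providerType": "SQL",
        "credentials": {
            "credentialType": "AppSetting",  # reference an app setting instead of pasting secrets
            "appSetting": "SqlConnection",   # hypothetical app setting name
        },
        "resourceMetadata": {"entityName": ""},
    }
}

url = check_url("{subId}", "{rg}", "mylogicapp", "dnsCheck")
print(url)
print(json.dumps(dns_body))
```

From a shell, after az login, the pair could then be sent with something like `az rest --method post --url "$URL" --body "$BODY"`.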
Supported Provider Types

- KeyVault: Azure Key Vault
- SQL: Azure SQL Database / SQL Server
- ServiceBus: Azure Service Bus
- EventHubs: Azure Event Hubs
- BlobStorage: Azure Blob Storage
- FileShare: Azure File Share (see the Port 445 limitation below)
- QueueStorage: Azure Queue Storage
- TableStorage: Azure Table Storage
- Web: Any HTTP/HTTPS endpoint

Credential Types

- ConnectionString: You have a connection string to provide directly
- Authentication: You have an endpoint URL with username and password
- CredentialReference: You want to reference an existing connection string or app setting by name
- AppSetting: You want to reference an app setting configured on the Logic App
- ManagedIdentity: Your Logic App uses Managed Identity to authenticate

Sample Request — Connection String (SQL Database)

POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Web/sites/{logicAppName}/connectivityCheck?api-version=2026-03-01-preview
Content-Type: application/json

{
  "properties": {
    "providerType": "SQL",
    "credentials": {
      "credentialType": "ConnectionString",
      "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=mypassword;Encrypt=True;TrustServerCertificate=False;"
    },
    "resourceMetadata": {
      "entityName": ""
    }
  }
}

Sample Request — App Setting Reference (Service Bus)

Use this when your connection string is stored in an app setting on the Logic App (e.g., ServiceBusConnection).

{
  "properties": {
    "providerType": "ServiceBus",
    "credentials": {
      "credentialType": "AppSetting",
      "appSetting": "ServiceBusConnection"
    },
    "resourceMetadata": {
      "entityName": "myqueue"
    }
  }
}

Sample Request — Managed Identity (Blob Storage)

Use this when your Logic App authenticates using Managed Identity.
{
  "properties": {
    "providerType": "BlobStorage",
    "credentials": {
      "credentialType": "ManagedIdentity",
      "managedIdentity": {
        "targetResourceUrl": "https://mystorageaccount.blob.core.windows.net",
        "clientId": ""
      }
    },
    "resourceMetadata": {
      "entityName": ""
    }
  }
}

Tip: Leave clientId empty to use the system-assigned managed identity. Provide a client ID to use a specific user-assigned managed identity.

2. DnsCheck

Tests whether a hostname can be resolved from your Logic App's worker. This is useful for verifying that private DNS zones and private endpoints are configured correctly.

Sample Request

POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Web/sites/{logicAppName}/dnsCheck?api-version=2026-03-01-preview
Content-Type: application/json

{
  "properties": {
    "dnsName": "myserver.database.windows.net"
  }
}

3. TcpPingCheck

Tests whether a TCP connection can be established from your Logic App to a specific host and port. This is useful for checking whether a port is open and reachable through your VNET.

Sample Request

POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Web/sites/{logicAppName}/tcpPingCheck?api-version=2026-03-01-preview
Content-Type: application/json

{
  "properties": {
    "host": "myserver.database.windows.net",
    "port": "1433"
  }
}

Port 445 (SMB / Azure File Share) — Known Limitation

Port 445 cannot be reliably tested using TcpPingCheck or ConnectivityCheck with the FileShare provider type.

Restricted Outgoing Ports

Regardless of address, applications cannot connect to anywhere using ports 445, 137, 138, and 139. In other words, even if connecting to a non-private IP address or the address of a virtual network, connections to ports 445, 137, 138, and 139 are not permitted.

Excluding break-glass account from MFA Registration Campaign – impact on existing users?
Hi everyone,

I'm currently reviewing the configuration of a break-glass (emergency access) account in Microsoft Entra ID, and I have a question regarding MFA registration enforcement. We have had an Authentication Methods Registration Campaign enabled for all users for quite some time. We identified that the break-glass account is being required to register MFA due to this configuration. The account is already excluded from all Conditional Access policies that enforce MFA, so the behavior appears to be coming specifically from the registration campaign (Microsoft Authenticator requirement).

Our goal is to exclude this break-glass account from the MFA registration requirement, following Microsoft best practices. My question is: if we edit the existing registration campaign and add an exclusion (user or group), could this have any impact on users who are already registered? Specifically, could it re-trigger the registration process or affect existing MFA configurations? We want to avoid any unintended impact, considering this campaign has been in place for a long time.

Has anyone implemented a similar exclusion for break-glass accounts within an active registration campaign? Any insights or confirmation would be really helpful. Thanks in advance!

Building a Restaurant Management System with Azure Database for MySQL
In this hands-on tutorial, we'll build a Restaurant Management System using Azure Database for MySQL. This project is perfect for beginners looking to understand cloud databases while creating something practical.

Microsoft Fabric Operations Agent Step by Step Walkthrough
Fabric Capacity and Workspace

You need a Microsoft Fabric workspace backed by a paid capacity. Trial capacities are not supported for Operations Agent. Your capacity must be provisioned in a supported region. As of April 2026, Operations Agent is available in all Microsoft Fabric regions except South Central US and East US. If your capacity is outside the US or EU, you will also need to enable cross-geo processing and storage for AI through the tenant settings.

Your workspace must contain an Eventhouse with at least one KQL database. The Eventhouse is the telemetry backbone, and the KQL database holds the tables the agent will monitor. In the screenshot below, you can see a workspace named OperationAgent-WS that contains an Eventhouse (ops_eventhouse), two KQL databases (ops_db and ops_eventhouse), and a Lakehouse (ops_lakehouse). This is the environment used throughout this guide.

Figure 1. Workspace contents showing the Eventhouse, KQL databases, and Lakehouse ready for the Operations Agent.

Enabling the Operations Agent in the Admin Portal

A Fabric administrator must enable the Operations Agent preview toggle in the Admin Portal before anyone in the organization can create an agent. Navigate to the Admin Portal, locate the section for Real Time Intelligence, and find the setting labeled Enable Operations Agents (Preview). Toggle it to Enabled for the entire organization or for specific security groups, depending on your governance requirements. In addition to this toggle, ensure that Microsoft Copilot and Azure OpenAI Service are also enabled at the tenant level. The Operations Agent relies on Azure OpenAI to generate its playbook and to reason about data when conditions are met.

Figure 2. The Admin Portal showing the Enable Operations Agents (Preview) toggle set to Enabled for the entire organization.

Note that messages sent to Operations Agents are processed through the Azure AI Bot Service.
If your capacity is outside the EU Data Boundary, data may be processed outside your geographic or national cloud boundary. Be sure to communicate this to your compliance stakeholders before enabling the feature in production tenants.

Microsoft Teams Account

Every person who will receive recommendations from the agent must have a Microsoft Teams account. The Operations Agent delivers its findings and action suggestions through a dedicated Teams app called Fabric Operations Agent. You can install this app from the Teams app store by searching for its name. Once installed, the agent will be able to send messages containing data summaries and recommended actions directly to the designated recipients.

Creating and Configuring the Operations Agent

With your prerequisites in place, you are ready to create the Operations Agent. The following steps walk you through the entire configuration process using the Fabric portal.

Step 1: Create a New Operations Agent

Open the Microsoft Fabric portal and navigate to your workspace. On the Fabric home page, select the ellipsis icon and then select Create. In the Create pane, scroll to the Real Time Intelligence section and select Operations Agent. A dialog will appear asking you to name your agent and select the target workspace. Choose a descriptive name that reflects the agent's purpose. In this guide, the agent is named OperationsAgent_1 and is deployed to the OperationAgent-WS workspace.

Step 2: Define Business Goals and Agent Instructions

Once the agent is created, you are taken to the Agent Setup page. This page is divided into two halves: on the left side, you configure the agent's behavior; on the right side, you see the generated Agent Playbook after saving. The first field is Business Goals, where you describe the high-level objective the agent should accomplish. Write this in clear, outcome-oriented language.
In this demo, the business goal is set to: "Monitor data pipeline execution and alert on failures." The second field is Agent Instructions, where you provide more specific guidance on how the agent should reason about the data. Think of this as a brief you would hand to an analyst who will be watching your systems overnight. Be explicit about the table name, the column to watch, and the condition that constitutes an alert. In this demo, the instruction reads: "Monitor pipeline_runs table. Alert when status is failed." Together, the business goals and instructions give the underlying large language model enough context to generate an accurate playbook. The more specific your instructions, the more reliable the agent's behavior will be.

Figure 3. The Agent Setup page showing business goals, agent instructions, and the generated playbook on the right.

On the right side of the screen, you can see the Agent Playbook that was generated after saving. The playbook includes a Business Term Glossary, which shows the business objects the agent inferred from your goals and data. In this case, it identified an object called PipelineRun, mapped to the pipeline_runs table, with two properties: status (the pipeline run status from the status column) and runId (the unique identifier from the run_id column). It also displays the Rules section, which contains the conditions the agent will evaluate. Review the playbook carefully. Since it is generated by an AI model, there may be occasional misinterpretations. Verify that every property maps to the correct column and that the rules reflect your intended thresholds. If something is off, update your goals or instructions and save again to regenerate the playbook.

Step 3: Add a Knowledge Source

Scroll down on the Agent Setup page to find the Knowledge section. This is where you connect the agent to the data it will monitor. When you first open this section, it will display a message indicating that no knowledge source has been added yet.
Figure 4. The Knowledge section before any data source has been added.

Select the Add Data button to browse the available data sources. A panel will appear listing the KQL databases and Eventhouses accessible within your Fabric environment. In this demo, three sources are available: ops_db in the OperationAgent-WS workspace, wms_eventhouse in the WMS-CDC-Demo workspace, and ops_eventhouse in the OperationAgent-WS workspace. Select the database that contains the table you want the agent to monitor. For this guide, select ops_db, which holds the pipeline_runs table referenced in the agent instructions.

Figure 5. Selecting the knowledge source from available KQL databases and Eventhouses.

Once the knowledge source is connected, the agent will be able to query this database at regular intervals (approximately every five minutes) to evaluate its rules. Make sure the table in your selected database is actively receiving data, especially if you plan to demonstrate the agent detecting a condition in real time.

Step 4: Define Actions

Actions are the responses the agent can recommend when it detects a condition that matches its rules. Scroll further down the Agent Setup page to find the Actions section. Select the Add Action button to define a new custom action. A dialog titled New Custom Action will appear. It has three fields: the Action Name is a short, descriptive label for the action; the Action Description explains the purpose of the action and gives the agent context about when to use it; and the Parameters section allows you to define input fields that pass dynamic values (such as names, dates, or identifiers) into the Power Automate flow that will be triggered.

Figure 6. The New Custom Action dialog where you define the action name, description, and optional parameters.

In this demo, the action is named Send Email Alert with a description indicating that it should send an email notification when a pipeline failure is detected.
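To make the dialog's three fields concrete, here is an illustrative sketch of how the demo's Send Email Alert action could be described. The field and parameter names are my own (runId and severity are inferred from values mentioned later in this walkthrough); this is not an official Fabric schema:

```python
import json

# Illustrative shape only; not an official Fabric Operations Agent schema.
send_email_alert = {
    "actionName": "Send Email Alert",
    "actionDescription": "Send an email notification when a pipeline failure is detected.",
    # Parameters become dynamic inputs to the triggered Power Automate flow;
    # recipients can review and adjust them in Teams before approving.
    "parameters": [
        {"name": "runId", "type": "string", "description": "run_id of the failed pipeline run"},
        {"name": "severity", "type": "string", "description": "Severity level assigned by the agent"},
    ],
}

print(json.dumps(send_email_alert, indent=2))
```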
Once created, you can see the action listed in the Actions section with a green status indicator showing that the action is successfully connected.

Figure 7. The Actions section showing the Send Email Alert action with a connected status.

Step 5: Configure the Custom Action with Power Automate

After creating the action, you need to configure it by linking it to an activator item and a Power Automate flow. Select the action you just created to open the Configure Custom Action pane. In this pane, you will see several fields. First, select the Workspace where the activator item resides. In this demo, the workspace is OperationAgent-WS. Next, select the Activator, which is the Fabric item that bridges the Operations Agent and Power Automate. Here, the activator is named Email_Alert_Activator. Once the connection is created, a Connection String is generated. This string is a unique identifier that links the Operations Agent to the Power Automate flow. Select the Copy button to copy this connection string to your clipboard. You will need it in the next step. Below the connection string, you will find the Open Flow Builder button. Select this to launch the Power Automate flow designer where you will build the email notification flow.

Figure 8. The Configure Custom Action pane showing the workspace, activator, connection string, and the button to open the flow builder.

Step 6: Build the Power Automate Flow

When you select Open Flow Builder, a new browser tab opens with the Power Automate designer. The flow is pre-configured with a trigger called When an Activator Rule is Triggered. This trigger fires whenever the Operations Agent approves an action. In the Parameters tab of the trigger, you will see a field labeled Connection String. Paste the connection string you copied from the previous step into this field. This is the critical link that connects the Power Automate flow back to your Operations Agent.
If this string is incorrect or missing, the flow will not fire when the agent recommends the action.

Figure 9. The Power Automate flow builder with the activator trigger and the Connection String field.

Below the trigger, you can add any actions your workflow requires. For an email alert scenario, add an Office 365 Outlook action to send an email to the operations team. You can use dynamic content from the trigger to include details such as the pipeline run ID, the failure status, and any parameters passed through from the Operations Agent. Save the flow and return to the Fabric portal. Your action is now fully configured and ready to be triggered by the agent.

Step 7: Generate the Playbook and Start the Agent

With all configuration complete (business goals, instructions, knowledge source, and actions), select Save on the Agent Setup page. Fabric will use the underlying large language model to generate the agent's playbook. The playbook is a structured summary of everything the agent knows: its goals, the properties it monitors, and the rules it evaluates. You can also select Generate Playbook at the top of the page to regenerate the playbook if you have made changes. Review the playbook one final time to confirm that properties map correctly to your table columns and that rules reflect the exact conditions you want to monitor.

When you are satisfied, select Start in the toolbar at the top of the page. The agent will begin actively monitoring your data. It queries the knowledge source approximately every five minutes, evaluating the playbook rules against the latest data. If a condition is met, the agent uses the LLM to summarize the data, generate a recommendation, and send a message to the designated recipients through Microsoft Teams. To pause the agent at any time, select Stop. This is useful during demos when you want to control the timing of the demonstration.

How the Agent Operates at Runtime

Once started, the Operations Agent follows a continuous loop.
Every five minutes, it queries the connected KQL database to evaluate the rules defined in the playbook. If no conditions are met, it continues silently. If a condition is matched (for example, a pipeline run with a status of "failed" appears in the pipeline_runs table), the agent proceeds through the following sequence.

First, the agent uses the large language model to analyze the data that triggered the condition. It summarizes the context, identifies the relevant business object (such as a specific pipeline run), and determines which action to recommend.

Second, the agent sends a message to the designated recipients through Microsoft Teams. This message contains a summary of the detected insight, the data context that triggered it, and a suggested action. Recipients can approve the action by selecting Yes or reject it by selecting No. If parameters are included (such as a run ID or a severity level), they can be reviewed and adjusted before final approval.

Third, if the recipient approves the action, the agent executes it on behalf of the creator using the creator's credentials. In this demo, approving the action would trigger the Power Automate flow that sends an email alert. It is important to note that if a recommendation is not responded to within three days, the operation is automatically canceled. After cancellation, the action can no longer be approved or interacted with.

Microsoft Finland - Software Developing Companies monthly community series
Welcome back to Microsoft's webinar series for technology companies! The Software Development monthly Community series, organized by Microsoft Finland, is a webinar series that offers software companies timely information, concrete examples, and strategic insights into how collaborating with Microsoft can accelerate growth and open new business opportunities. The series is aimed at technology companies of all sizes and stages of development, from startups to global players. Each episode takes a practical look at how software companies can leverage the Microsoft ecosystem, technologies, and partner programs in their own business.

Note: The Microsoft Software Developing Companies monthly community webinars series is hosted on the Cloud Champion site, where recordings of the webinars are conveniently available a couple of hours after the live broadcast. Remember to register on the Cloud Champion platform the first time; after that you will always have access to the content and recordings. You can register via "Register now". Fill in your details and, under Distributor, select Other if you do not know your Microsoft distributor.

Webinars:

24.4.2026, 09:00–09:30 - Marketplace and Resale Enabled Offers (REO)

Welcome to the SDC Community Monthly webinar. Growth through the Azure Marketplace channel, made more effective with the REO model. This month's topic is REO (Resale Enabled Offers): we cover what REO means in practice, when it makes sense to use it, and what it changes for partners. We discuss the topic together with Partner Solution Sales Manager Veli Myllylä. You will learn how Resale Enabled Offers (REO) enable channel partners to sell your Marketplace offers on your behalf, and how this accelerates co-sell deals, scales sales, and supports Azure consumption.
Register here: Microsoft Finland – Software Developing Companies monthly community series – Marketplace ja Resale Enabled Offers (REO) – Finland Cloud Champion

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Veli Myllylä, Partner Solutions Sales Manager, Microsoft

27.3.2026, 09:00–09:30 - Agent Factory with Microsoft Foundry: how to build AI agents and take them to production

AI agents are quickly becoming a core building block of enterprise software, but for many organizations the challenge is getting agents all the way to production. The real competitive advantage comes from building agents in a governed way, integrating them into the overall architecture, and scaling them reliably. In this webinar, we walk through, with a hands-on demo, how an AI agent is built with Microsoft Foundry's Agent Service. We show how the agent's role and instructions are defined, how data sources and tools are attached to the agent, and how this fits into the Microsoft Agent Factory.

Watch the recording: Microsoft Finland – Software Developing Companies monthly community series: Agent Factory Microsoft Foundryllä – miten rakennat ja viet AI-agentteja tuotantoon – Finland Cloud Champion

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Veli Myllylä, Partner Solutions Sales Manager, Microsoft

27.2.2026, 09:00–09:30 - M-Files' path to success together with Microsoft

What has it taken to build the global partnership between M-Files and Microsoft, and what benefits has it produced? In this webinar you will hear the inside story directly from Kimmo Järvensivu, Strategic Alliances Director at M-Files: how the partnership with Microsoft was built, what was learned along the way, and how the collaboration has accelerated growth. M-Files is an intelligent information management platform that helps organizations manage documents and information through metadata, regardless of location.
It makes information easier to find, improves compliance, and supports modern work in the Microsoft ecosystem. Come hear what a successful partnership really requires, and how to turn it into a strategic competitive advantage.

Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – M-Files polku menestykseen yhdessä Microsoftin kanssa – Finland Cloud Champion

Experts:
Kimmo Järvensivu, Strategic Alliances Director, M-Files
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

30.1.2026, 09:00–09:30 - Model Context Protocol (MCP): the open standard transforming AI integrations

In this webinar we cover what the Model Context Protocol (MCP) is, how it enables secure and scalable connections between AI models and external systems without custom code, what Microsoft's approach to adopting the MCP protocol is, and how software companies can take advantage of the business opportunities the MCP standard offers. We will go through:

- What MCP is and why it matters in modern AI workflows
- How MCP reduces integration complexity and speeds up development
- Practical examples

The main part of the webinar is presented in English.

Watch the recording: 30.1.2026 klo 09:00-09:30 – Model Context Protocol (MCP)—avoin standardi, joka mullistaa AI-integraatiot – Finland Cloud Champion

Experts:
Massimo Caterino, Partner Technology Strategist, Microsoft Europe North
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

12.12., 09:00–09:30 - What does the Finnish Azure region mean for software companies?

Microsoft's new datacenter region in Finland brings cloud services closer to Finnish software companies, whether you are a startup, a scaleup, or a global player.
In this webinar we dig into the opportunities the new Azure region opens up from the perspectives of data residency, performance, regulation, and customer requirements. We discuss, among other things:

- How does local data residency support customer requirements and regulation?
- What do software companies gain from lower latency and better performance?
- How does the Azure region support co-selling and scaling in Finland?
- How should you prepare, technically and commercially, for the opening of the new region?

Speakers:
Fama Doumbouya, Sales Director, Cloud Infra and Security, Microsoft
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Mitä Suomen Azure-regioona tarkoittaa ohjelmistotaloille? – Finland Cloud Champion

28.11., 09:00–09:30 - Cloud on your own terms: what does Microsoft's Sovereign Cloud mean for software companies?

More and more software companies face requirements around data residency, regulatory compliance, and operational control, especially in the public sector and in regulated industries. In this webinar we look at how Microsoft's new Sovereign Cloud offering addresses these needs and what opportunities it opens up for Finnish software companies. We discuss, among other things:

- How do Sovereign Public and Private Cloud differ, and what do they enable?
- How are data control, encryption, and operational sovereignty realized in a European context?
- What does this mean for software companies building solutions for the public sector or regulated industries?
Speakers:
Juha Karppinen, National Security Officer, Microsoft
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Pilvipalvelut omilla ehdoilla – mitä Microsoftin Sovereign Cloud tarkoittaa ohjelmistotaloille? – Finland Cloud Champion

31.10., 09:00–09:30 - Growth and visibility for software companies: make the most of the ISV Success and Azure Marketplace Rewards programs

In this webinar we dig into Microsoft's key accelerator programs for software companies, which support growth, scalability, and international visibility. We cover how the ISV Success program offers technical and commercial support to software companies at different stages of development, and how Azure Marketplace works as an effective sales channel for reaching new customers. We also present the Marketplace Rewards benefits, which support marketing, co-selling, and customer acquisition within the Microsoft ecosystem. The webinar offers:

- Concrete examples of the programs' benefits
- Practical tips for joining and making use of the programs
- Insights into how software companies can align their strategy with the opportunities Microsoft provides

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Kasvua ja näkyvyyttä ohjelmistotaloille – hyödynnä ISV Success ja Azure Marketplace rewards -ohjelmia – Finland Cloud Champion

3.10., 09:00–09:30 - Autonomous solutions for software companies: Azure AI Foundry and the new possibilities of agent technologies

Agent technologies are transforming the way software companies can build intelligent, scalable solutions.
In this webinar we explore how Azure AI Foundry gives developers and product owners the tools to build autonomous agents, enabling the automation of complex processes and the creation of new kinds of customer value. You will hear, among other things:

- How agent technologies are changing software development and business
- How Azure AI Foundry supports the design, development, and deployment of agents
- How software companies can use agents as a competitive advantage

Speakers:
Juha Karvonen, Sr Partner Tech Strategist
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording here: Microsoft Finland – Software Developing Companies Monthly Community Series – Autonomiset ratkaisut ohjelmistotaloille – Azure AI Foundry ja agenttiteknologioiden uudet mahdollisuudet – Finland Cloud Champion

5.9.2025, 09:00–09:30 - Technology companies' and Microsoft's priorities for autumn 2025

Welcome back to Microsoft's webinar series for technology companies! Each month the series continues to examine how collaboration with Microsoft can accelerate growth and open new opportunities for software companies at different stages, whether the company is a startup, a scaleup, or a global operator. In every episode we share concrete examples, insights, and strategies that support business development and innovation in technology companies. In this late-August episode we focus on the priorities for autumn 2025 and the new opportunities that support software companies in planning, developing, and accelerating the growth of their own business. We go through Microsoft's strategic focus areas for the coming fiscal year and, above all, how software companies can leverage them in their own business.
Tavoitteena on tarjota kuulijoille selkeä ymmärrys siitä, miten oma tuote, palvelu tai markkinastrategia voidaan linjata ekosysteemin kehityksen kanssa, ja miten Microsoft voi tukea tätä matkaa konkreettisin keinoin. Puhujat: Mikko Marttinen, Sr Partner Development Manager, Microsoft Eetu Roponen, Sr Partner Development Manager, Microsoft Katso nauhoitus täältä: Teknologiayritysten ja Microsoftin prioriteetit syksylle 2025. – Finland Cloud Champion447Views0likes0Comments🏆 Agents League Winner Spotlight – Reasoning Agents Track
Agents League was designed to showcase what agentic AI can look like when developers move beyond single‑prompt interactions and start building systems that plan, reason, verify, and collaborate. Across three competitive tracks—Creative Apps, Reasoning Agents, and Enterprise Agents—participants had two weeks to design and ship real AI agents using production‑ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub.

Today, we're excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi‑Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents track challenge was to design a multi‑agent system capable of effectively assisting students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback.

The suggested reference architecture modeled a realistic learning journey: starting from free‑form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation—emphasizing reasoning, decision‑making, and human‑in‑the‑loop validation at every step.

All details are available here: agentsleague/starter-kits/2-reasoning-agents at main · microsoft/agentsleague.
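The study → assess → recommend-or-remediate loop in that reference architecture can be sketched as a small state machine. This is an illustrative sketch only: the class names, the readiness threshold, and the fixed progress increment below are assumptions, not taken from the starter kit.

```python
from dataclasses import dataclass, field

# Hypothetical readiness threshold; the actual track starter kit defines its own criteria.
READINESS_THRESHOLD = 0.75

@dataclass
class LearnerState:
    """Minimal stand-in for the state passed between reasoning agents."""
    profile: dict
    readiness: float = 0.0
    weak_domains: list = field(default_factory=list)

def study_phase(state: LearnerState) -> LearnerState:
    # Placeholder for the plan/curate/remind agents acting in sequence;
    # here each round simply raises readiness by a fixed, illustrative amount.
    state.readiness = min(1.0, state.readiness + 0.4)
    return state

def assess(state: LearnerState) -> str:
    # Decision gate: recommend the exam, or loop back into targeted remediation.
    return "recommend_exam" if state.readiness >= READINESS_THRESHOLD else "remediate"

def run_workflow(profile: dict, max_rounds: int = 5) -> str:
    state = LearnerState(profile=profile)
    for _ in range(max_rounds):
        state = study_phase(state)
        if assess(state) == "recommend_exam":
            return "recommend_exam"
    return "remediate"
```

The key structural point, mirrored from the challenge description, is that assessment is a gate rather than a terminal step: a "not ready" outcome feeds back into another study round instead of ending the journey.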
The Winning Project: CertPrep Multi‑Agent System

The CertPrep Multi‑Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families. At a high level, the system turns free‑form learner input into a structured certification plan, measurable progress signals, and actionable recommendations—demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

Inside the Multi‑Agent Architecture

At its core, the system is designed as a multi‑agent pipeline that combines sequential reasoning, parallel execution, and human‑in‑the‑loop gates, with traceability and responsible AI guardrails. The solution is composed of eight specialized reasoning agents, each focused on a specific stage of the learning journey, including:

- LearnerProfilingAgent – Converts free‑text background information into a structured learner profile using Microsoft Foundry SDK (with deterministic fallbacks).
- StudyPlanAgent – Generates a week‑by‑week study plan using a constrained allocation algorithm to respect the learner's available time.
- LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
- ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
- AssessmentAgent – Generates and evaluates domain‑proportional mock exams.
- CertificationRecommendationAgent – Issues a clear "GO / CONDITIONAL GO / NOT YET" decision with remediation steps and next‑cert suggestions.

Throughout the pipeline, a 17‑rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human‑in‑the‑loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.
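As an illustration of one stage, here is a minimal sketch of how the ProgressAgent's weighted readiness score and the downstream "GO / CONDITIONAL GO / NOT YET" gate could be modeled. The weights, thresholds, and field names are assumptions for illustration (the article does not publish them), and stdlib dataclasses stand in for the project's actual typed contracts.

```python
from dataclasses import dataclass

# Illustrative weights — the real system's weighting is not published in the article.
WEIGHTS = {"domain_coverage": 0.5, "time_utilization": 0.2, "practice_score": 0.3}

@dataclass(frozen=True)
class ProgressSignals:
    """Typed contract crossing the ProgressAgent boundary (field names are hypothetical)."""
    domain_coverage: float   # fraction of exam domains studied, 0..1
    time_utilization: float  # study time used vs. planned, 0..1
    practice_score: float    # average mock-exam score, 0..1

def readiness_score(s: ProgressSignals) -> float:
    # Weighted sum of the three signals the article names.
    score = (WEIGHTS["domain_coverage"] * s.domain_coverage
             + WEIGHTS["time_utilization"] * s.time_utilization
             + WEIGHTS["practice_score"] * s.practice_score)
    return round(score, 3)

def decision(score: float) -> str:
    # Three-way gate; the 0.8 / 0.6 thresholds are assumed, not from the project.
    if score >= 0.8:
        return "GO"
    if score >= 0.6:
        return "CONDITIONAL GO"
    return "NOT YET"
```

A frozen, typed record at each agent boundary is what makes the pipeline checkable: a malformed or out-of-range payload can be rejected by a guardrail before the next agent ever runs.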
CertPrep leverages Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

- Managed agents via Foundry SDK
- Structured JSON outputs using GPT‑4o (JSON mode) with conservative temperature settings
- Guardrails enforced through Azure Content Safety
- Parallel agent fan‑out using concurrent execution
- Typed contracts with Pydantic for every agent boundary
- AI‑assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.

User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well‑structured front‑end experience. The application is built with Streamlit and organized as a 7‑tab interactive interface, guiding learners step‑by‑step through their preparation journey. From a user's perspective, the flow looks like this:

- Profile & Goals Input – Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.
- Learning Path & Study Plan Visualization – Once generated, the study plan is presented using visual aids such as Gantt‑style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.
- Progress Tracking & Readiness Scoring – As learners move forward, the UI surfaces an exam‑weighted readiness score, combining domain coverage, study plan adherence, and assessment performance—helping users understand why the system considers them ready (or not yet).
- Assessments and Feedback – Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
- Transparent Recommendations – Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent's decision‑making.

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

✅ Clear separation of reasoning roles, instead of prompt‑heavy monoliths
✅ Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
✅ Observable, debuggable workflows, aligned with Foundry's production goals
✅ Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

Try It Yourself

Explore the project, architecture, and demo here:

🔗 GitHub Issue (full project details): https://github.com/microsoft/agentsleague/issues/76
🎥 Demo video: https://www.youtube.com/watch?v=okWcFnQoBsE
🌐 Live app (mock data): https://agentsleague.streamlit.app/

Building an Agentic, AI-Powered Helpdesk with Agents Framework, Azure, and Microsoft 365
The article describes how to build an agentic, AI-powered helpdesk using Azure, Microsoft 365, and the Microsoft Agent Framework. The goal is to automate ticket handling, enrich requests with AI, and integrate seamlessly with M365 tools like Teams, Planner, and Power Automate.