Microsoft Fabric Operations Agent Step by Step Walkthrough
Fabric Capacity and Workspace

You need a Microsoft Fabric workspace backed by a paid capacity. Trial capacities are not supported for the Operations Agent. Your capacity must be provisioned in a supported region. As of April 2026, the Operations Agent is available in all Microsoft Fabric regions except South Central US and East US. If your capacity is outside the US or EU, you will also need to enable cross-geo processing and storage for AI through the tenant settings.

Your workspace must contain an Eventhouse with at least one KQL database. The Eventhouse is the telemetry backbone, and the KQL database holds the tables the agent will monitor. In the screenshot below, you can see a workspace named OperationAgent-WS that contains an Eventhouse (ops_eventhouse), two KQL databases (ops_db and ops_eventhouse), and a Lakehouse (ops_lakehouse). This is the environment used throughout this guide.

Figure 1. Workspace contents showing the Eventhouse, KQL databases, and Lakehouse ready for the Operations Agent.

Enabling the Operations Agent in the Admin Portal

A Fabric administrator must enable the Operations Agent preview toggle in the Admin Portal before anyone in the organization can create an agent. Navigate to the Admin Portal, locate the Real-Time Intelligence section, and find the setting labeled Enable Operations Agents (Preview). Toggle it to Enabled for the entire organization or for specific security groups, depending on your governance requirements.

In addition to this toggle, ensure that Microsoft Copilot and Azure OpenAI Service are also enabled at the tenant level. The Operations Agent relies on Azure OpenAI to generate its playbook and to reason about data when conditions are met.

Figure 2. The Admin Portal showing the Enable Operations Agents (Preview) toggle set to Enabled for the entire organization.

Note that messages sent to Operations Agents are processed through the Azure AI Bot Service.
If your capacity is outside the EU Data Boundary, data may be processed outside your geographic or national cloud boundary. Be sure to communicate this to your compliance stakeholders before enabling the feature in production tenants.

Microsoft Teams Account

Every person who will receive recommendations from the agent must have a Microsoft Teams account. The Operations Agent delivers its findings and action suggestions through a dedicated Teams app called Fabric Operations Agent. You can install this app from the Teams app store by searching for its name. Once installed, the agent can send messages containing data summaries and recommended actions directly to the designated recipients.

Creating and Configuring the Operations Agent

With your prerequisites in place, you are ready to create the Operations Agent. The following steps walk you through the entire configuration process using the Fabric portal.

Step 1: Create a New Operations Agent

Open the Microsoft Fabric portal and navigate to your workspace. On the Fabric home page, select the ellipsis icon and then select Create. In the Create pane, scroll to the Real-Time Intelligence section and select Operations Agent. A dialog will appear asking you to name your agent and select the target workspace. Choose a descriptive name that reflects the agent's purpose. In this guide, the agent is named OperationsAgent_1 and is deployed to the OperationAgent-WS workspace.

Step 2: Define Business Goals and Agent Instructions

Once the agent is created, you are taken to the Agent Setup page. This page is divided into two halves: on the left side, you configure the agent's behavior; on the right side, you see the generated Agent Playbook after saving. The first field is Business Goals, where you describe the high-level objective the agent should accomplish. Write this in clear, outcome-oriented language.
In this demo, the business goal is set to: "Monitor data pipeline execution and alert on failures."

The second field is Agent Instructions, where you provide more specific guidance on how the agent should reason about the data. Think of this as a brief you would hand to an analyst who will be watching your systems overnight. Be explicit about the table name, the column to watch, and the condition that constitutes an alert. In this demo, the instruction reads: "Monitor pipeline_runs table. Alert when status is failed."

Together, the business goals and instructions give the underlying large language model enough context to generate an accurate playbook. The more specific your instructions, the more reliable the agent's behavior will be.

Figure 3. The Agent Setup page showing business goals, agent instructions, and the generated playbook on the right.

On the right side of the screen, you can see the Agent Playbook that was generated after saving. The playbook includes a Business Term Glossary, which shows the business objects the agent inferred from your goals and data. In this case, it identified an object called PipelineRun, mapped to the pipeline_runs table, with two properties: status (the pipeline run status from the status column) and runId (the unique identifier from the run_id column). It also displays the Rules section, which contains the conditions the agent will evaluate.

Review the playbook carefully. Because it is generated by an AI model, there may be occasional misinterpretations. Verify that every property maps to the correct column and that the rules reflect your intended thresholds. If something is off, update your goals or instructions and save again to regenerate the playbook.

Step 3: Add a Knowledge Source

Scroll down on the Agent Setup page to find the Knowledge section. This is where you connect the agent to the data it will monitor. When you first open this section, it will display a message indicating that no knowledge source has been added yet.
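To make the glossary idea concrete, the PipelineRun mapping and the demo rule described above can be sketched in plain Python. This is purely illustrative; the actual playbook is a managed artifact inside Fabric, not code you write:

```python
# Illustrative sketch of the Business Term Glossary and the demo rule.
# The structure here is an assumption, not Fabric's actual playbook format.
glossary = {
    "PipelineRun": {
        "table": "pipeline_runs",
        "properties": {"status": "status", "runId": "run_id"},
    }
}

def evaluate_rule(rows):
    """Return the rows that match the demo rule: status == 'failed'."""
    status_col = glossary["PipelineRun"]["properties"]["status"]
    return [r for r in rows if r.get(status_col) == "failed"]

# Example telemetry rows as the agent might see them when it polls the table.
rows = [
    {"run_id": "r-001", "status": "succeeded"},
    {"run_id": "r-002", "status": "failed"},
]
print(evaluate_rule(rows))  # → [{'run_id': 'r-002', 'status': 'failed'}]
```

The point of the sketch is the mapping: the glossary ties a business object (PipelineRun) to a physical table and columns, and the rule is evaluated against those columns, which is why verifying the column mappings in the generated playbook matters.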
Figure 4. The Knowledge section before any data source has been added.

Select the Add Data button to browse the available data sources. A panel will appear listing the KQL databases and Eventhouses accessible within your Fabric environment. In this demo, three sources are available: ops_db in the OperationAgent-WS workspace, wms_eventhouse in the WMS-CDC-Demo workspace, and ops_eventhouse in the OperationAgent-WS workspace. Select the database that contains the table you want the agent to monitor. For this guide, select ops_db, which holds the pipeline_runs table referenced in the agent instructions.

Figure 5. Selecting the knowledge source from available KQL databases and Eventhouses.

Once the knowledge source is connected, the agent will be able to query this database at regular intervals (approximately every five minutes) to evaluate its rules. Make sure the table in your selected database is actively receiving data, especially if you plan to demonstrate the agent detecting a condition in real time.

Step 4: Define Actions

Actions are the responses the agent can recommend when it detects a condition that matches its rules. Scroll further down the Agent Setup page to find the Actions section. Select the Add Action button to define a new custom action. A dialog titled New Custom Action will appear. It has three fields: the Action Name is a short, descriptive label for the action; the Action Description explains the purpose of the action and gives the agent context about when to use it; and the Parameters section allows you to define input fields that pass dynamic values (such as names, dates, or identifiers) into the Power Automate flow that will be triggered.

Figure 6. The New Custom Action dialog where you define the action name, description, and optional parameters.

In this demo, the action is named Send Email Alert with a description indicating that it should send an email notification when a pipeline failure is detected.
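One way to picture the custom action and its parameters is as a small record passed downstream when the action fires. The field names below are assumptions for illustration only; the real definition lives in the New Custom Action dialog, not in code:

```python
# Hypothetical model of the Send Email Alert custom action.
# Field names are illustrative assumptions, not Fabric's actual schema.
action = {
    "name": "Send Email Alert",
    "description": "Send an email notification when a pipeline failure is detected.",
    "parameters": [
        {"name": "runId", "type": "string"},
        {"name": "severity", "type": "string"},
    ],
}

def build_payload(action, values):
    """Assemble the dynamic values an approved action would pass to the flow."""
    declared = {p["name"] for p in action["parameters"]}
    unknown = set(values) - declared
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {"action": action["name"], "parameters": values}

payload = build_payload(action, {"runId": "r-002", "severity": "high"})
```

The sketch highlights why parameters matter: they are the only channel for carrying case-specific values (like a failed run's ID) from the agent's detection into the Power Automate flow.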
Once created, you can see the action listed in the Actions section with a green status indicator showing that the action is successfully connected.

Figure 7. The Actions section showing the Send Email Alert action with a connected status.

Step 5: Configure the Custom Action with Power Automate

After creating the action, you need to configure it by linking it to an activator item and a Power Automate flow. Select the action you just created to open the Configure Custom Action pane. In this pane, you will see several fields. First, select the Workspace where the activator item resides; in this demo, the workspace is OperationAgent-WS. Next, select the Activator, which is the Fabric item that bridges the Operations Agent and Power Automate. Here, the activator is named Email_Alert_Activator.

Once the connection is created, a Connection String is generated. This string is a unique identifier that links the Operations Agent to the Power Automate flow. Select the Copy button to copy the connection string to your clipboard; you will need it in the next step. Below the connection string, you will find the Open Flow Builder button. Select it to launch the Power Automate flow designer, where you will build the email notification flow.

Figure 8. The Configure Custom Action pane showing the workspace, activator, connection string, and the button to open the flow builder.

Step 6: Build the Power Automate Flow

When you select Open Flow Builder, a new browser tab opens with the Power Automate designer. The flow is pre-configured with a trigger called When an Activator Rule is Triggered. This trigger fires whenever the Operations Agent approves an action. In the Parameters tab of the trigger, you will see a field labeled Connection String. Paste the connection string you copied in the previous step into this field. This is the critical link that connects the Power Automate flow back to your Operations Agent.
If this string is incorrect or missing, the flow will not fire when the agent recommends the action.

Figure 9. The Power Automate flow builder with the activator trigger and the Connection String field.

Below the trigger, you can add any actions your workflow requires. For an email alert scenario, add an Office 365 Outlook action to send an email to the operations team. You can use dynamic content from the trigger to include details such as the pipeline run ID, the failure status, and any parameters passed through from the Operations Agent. Save the flow and return to the Fabric portal. Your action is now fully configured and ready to be triggered by the agent.

Step 7: Generate the Playbook and Start the Agent

With all configuration complete (business goals, instructions, knowledge source, and actions), select Save on the Agent Setup page. Fabric will use the underlying large language model to generate the agent's playbook. The playbook is a structured summary of everything the agent knows: its goals, the properties it monitors, and the rules it evaluates. You can also select Generate Playbook at the top of the page to regenerate the playbook if you have made changes.

Review the playbook one final time to confirm that properties map correctly to your table columns and that rules reflect the exact conditions you want to monitor. When you are satisfied, select Start in the toolbar at the top of the page. The agent will begin actively monitoring your data. It queries the knowledge source approximately every five minutes, evaluating the playbook rules against the latest data. If a condition is met, the agent uses the LLM to summarize the data, generate a recommendation, and send a message to the designated recipients through Microsoft Teams. To pause the agent at any time, select Stop. This is useful during demos when you want to control the timing of the demonstration.

How the Agent Operates at Runtime

Once started, the Operations Agent follows a continuous loop.
Every five minutes, it queries the connected KQL database to evaluate the rules defined in the playbook. If no conditions are met, it continues silently. If a condition is matched (for example, a pipeline run with a status of "failed" appears in the pipeline_runs table), the agent proceeds through the following sequence.

First, the agent uses the large language model to analyze the data that triggered the condition. It summarizes the context, identifies the relevant business object (such as a specific pipeline run), and determines which action to recommend.

Second, the agent sends a message to the designated recipients through Microsoft Teams. This message contains a summary of the detected insight, the data context that triggered it, and a suggested action. Recipients can approve the action by selecting Yes or reject it by selecting No. If parameters are included (such as a run ID or a severity level), they can be reviewed and adjusted before final approval.

Third, if the recipient approves the action, the agent executes it on behalf of the creator using the creator's credentials. In this demo, approving the action would trigger the Power Automate flow that sends an email alert.

Note that if a recommendation is not responded to within three days, the operation is automatically canceled. After cancellation, the action can no longer be approved or interacted with.

Approaches to Integrating Azure Databricks with Microsoft Fabric: The Better Together Story!
Azure Databricks and Microsoft Fabric can be combined to create a unified and scalable analytics ecosystem. This document outlines eight distinct integration approaches, each accompanied by step-by-step implementation guidance and key design considerations. These methods are not prescriptive: your cloud architecture team can choose the integration strategy that best aligns with your organization's governance model, workload requirements, and platform preferences. Whether you prioritize centralized orchestration, direct data access, or seamless reporting, the flexibility of these options allows you to tailor the solution to your specific needs.

Excluding break-glass account from MFA Registration Campaign – impact on existing users?
Hi everyone,

I'm currently reviewing the configuration of a break-glass (emergency access) account in Microsoft Entra ID, and I have a question regarding MFA registration enforcement. We have had an Authentication Methods Registration Campaign enabled for all users for quite some time, and we identified that the break-glass account is being required to register MFA because of this configuration. The account is already excluded from all Conditional Access policies that enforce MFA, so the behavior appears to be coming specifically from the registration campaign (Microsoft Authenticator requirement).

Our goal is to exclude this break-glass account from the MFA registration requirement, following Microsoft best practices. My question is: if we edit the existing registration campaign and add an exclusion (user or group), could this have any impact on users who are already registered? Specifically, could it re-trigger the registration process or affect existing MFA configurations? We want to avoid any unintended impact, considering this campaign has been in place for a long time.

Has anyone implemented a similar exclusion for break-glass accounts within an active registration campaign? Any insights or confirmation would be really helpful. Thanks in advance!

Microsoft Finland - Software Developing Companies monthly community series.
Welcome back to Microsoft's webinar series for technology companies! The Software Development monthly Community series, organized by Microsoft Finland, is a webinar series that offers software companies timely information, concrete examples, and strategic insights into how collaboration with Microsoft can accelerate growth and open new business opportunities. The series is aimed at technology companies of all sizes and at all stages of development, from startups to global players. Each episode takes a practical look at how software companies can leverage the Microsoft ecosystem, technologies, and partner programs in their own business.

Note: the Microsoft Software Developing Companies monthly community webinars series is hosted on the Cloud Champion site, where recordings are conveniently available a couple of hours after the live broadcast. Remember to register on the Cloud Champion platform the first time; after that, you will always have access to the content and recordings. You can register via "Register now". Fill in your details and, under Distributor, select Other if you do not know your Microsoft distributor.

Webinars:

24.4.2026, 09:00–09:30 – Marketplace and Resale Enabled Offers (REO)

Welcome to the SDC Community Monthly webinar. Growth through the Azure Marketplace channel, more effectively with the REO model. This month's topic is REO (Resale Enabled Offers): we cover what REO means in practice, when it is worth using, and what it changes for partners. We discuss the topic together with Partner Solution Sales Manager Veli Myllylä. You will learn how Resale Enabled Offers (REO) enable channel partners to sell your Marketplace offers on your behalf, and how this accelerates co-sell deals, scales sales, and supports Azure consumption.
Register here: Microsoft Finland – Software Developing Companies monthly community series – Marketplace ja Resale Enabled Offers (REO) – Finland Cloud Champion

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Veli Myllylä, Partner Solutions Sales Manager, Microsoft

27.3.2026, 09:00–09:30 – Agent Factory with Microsoft Foundry: how to build AI agents and take them to production

AI agents are quickly becoming a core building block of enterprise software, but for many organizations the challenge is taking agents all the way to production. The real competitive advantage comes from building agents in a governed way, integrating them into the overall architecture, and scaling them reliably. In this webinar, we walk through, with a hands-on demo, how an AI agent is built with the Agent Service in Microsoft Foundry. We show how the agent's role and instructions are defined, how data sources and tools are attached to the agent, and how this fits into the Microsoft Agent Factory.

Watch the recording: Microsoft Finland – Software Developing Companies monthly community series: Agent Factory Microsoft Foundryllä – miten rakennat ja viet AI-agentteja tuotantoon – Finland Cloud Champion

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Veli Myllylä, Partner Solutions Sales Manager, Microsoft

27.2.2026, 09:00–09:30 – M-Files' path to success together with Microsoft

What has building a global partnership between M-Files and Microsoft required, and what benefits has it produced? In this webinar you will hear insider insights directly from Kimmo Järvensivu, Strategic Alliances Director at M-Files: how the partnership with Microsoft was built, what was learned along the way, and how the collaboration has accelerated growth. M-Files is an intelligent information management platform that helps organizations manage documents and information through metadata, regardless of location.
It makes information easier to find, improves compliance, and supports modern work in the Microsoft ecosystem. Come and hear what a successful partnership really requires, and how to turn it into a strategic competitive advantage.

Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – M-Files polku menestykseen yhdessä Microsoftin kanssa – Finland Cloud Champion

Experts:
Kimmo Järvensivu, Strategic Alliances Director, M-Files
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

30.1.2026, 09:00–09:30 – Model Context Protocol (MCP): the open standard transforming AI integrations

In this webinar we cover what the Model Context Protocol (MCP) is, how it enables secure and scalable connections between AI models and external systems without custom code, what Microsoft's approach to leveraging the MCP protocol is, and how software companies can take advantage of the business opportunities the MCP standard offers. We will go through:

What MCP is and why it matters in modern AI processes
How MCP reduces integration complexity and speeds up development
Practical examples

The main part of the webinar is presented in English.

Watch the recording: 30.1.2026 klo 09:00-09:30 – Model Context Protocol (MCP)—avoin standardi, joka mullistaa AI-integraatiot – Finland Cloud Champion

Experts:
Massimo Caterino, Partner Technology Strategist, Microsoft Europe North
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

12.12., 09:00–09:30 – What does the Finnish Azure region mean for software companies?

Microsoft's new datacenter region in Finland brings cloud services closer to Finnish software companies, whether you are a startup, a scaleup, or a global player.
In the webinar we dig into the opportunities the new Azure region opens up from the perspectives of data residency, performance, regulation, and customer requirements. Among other things, we discuss:

How does local data residency support customer requirements and regulation?
What do software companies gain from lower latency and better performance?
How does the Azure region support co-selling and scaling in Finland?
How should you prepare, technically and commercially, for the opening of the new region?

Speakers:
Fama Doumbouya, Sales Director, Cloud Infra and Security, Microsoft
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Mitä Suomen Azure-regioona tarkoittaa ohjelmistotaloille? – Finland Cloud Champion

28.11., 09:00–09:30 – Cloud services on your own terms: what does Microsoft's Sovereign Cloud mean for software companies?

More and more software companies face requirements around data residency, regulatory compliance, and operational control, especially in the public sector and regulated industries. In this webinar we dig into how Microsoft's new Sovereign Cloud offering addresses these needs and what opportunities it opens for Finnish software companies. Among other things, we discuss:

How do Sovereign Public and Private Cloud differ, and what do they enable?
How are data governance, encryption, and operational sovereignty realized in a European context?
What does this mean for software companies building solutions for the public sector or regulated industries?
Speakers:
Juha Karppinen, National Security Officer, Microsoft
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Pilvipalvelut omilla ehdoilla – mitä Microsoftin Sovereign Cloud tarkoittaa ohjelmistotaloille? – Finland Cloud Champion

31.10., 09:00–09:30 – Growth and visibility for software companies: make the most of the ISV Success and Azure Marketplace Rewards programs

In this webinar we dig into Microsoft's key accelerator programs for software companies, which support growth, scalability, and international visibility. We cover how the ISV Success program offers technical and commercial support to software companies at different stages of development, and how Azure Marketplace works as an effective sales channel for reaching new customers. We also present the Marketplace Rewards benefits, which support marketing, co-selling, and customer acquisition in the Microsoft ecosystem. The webinar offers:

Concrete examples of the benefits of the programs
Practical tips for joining and making use of the programs
Insights into how software companies can align their strategy with the opportunities Microsoft offers

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Kasvua ja näkyvyyttä ohjelmistotaloille – hyödynnä ISV Success ja Azure Marketplace rewards -ohjelmia – Finland Cloud Champion

3.10., 09:00–09:30 – Autonomous solutions for software companies: Azure AI Foundry and the new possibilities of agent technologies

Agent technologies are transforming the way software companies can build intelligent, scalable solutions.
In this webinar we look at how Azure AI Foundry gives developers and product owners the tools to build autonomous agents, enabling the automation of complex processes and the creation of new kinds of customer value. You will hear, among other things:

How agent technologies are changing software development and business
How Azure AI Foundry supports the design, development, and deployment of agents
How software companies can use agents as a competitive advantage

Speakers:
Juha Karvonen, Sr Partner Tech Strategist
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording here: Microsoft Finland – Software Developing Companies Monthly Community Series – Autonomiset ratkaisut ohjelmistotaloille – Azure AI Foundry ja agenttiteknologioiden uudet mahdollisuudet – Finland Cloud Champion

5.9.2025, 09:00–09:30 – Technology companies' and Microsoft's priorities for autumn 2025

Welcome back to Microsoft's webinar series for technology companies! Each month, the series continues to dig into how collaboration with Microsoft can accelerate growth and open new opportunities for software companies at different stages, whether the company is a startup, a scaleup, or a global player. In each episode we share concrete examples, insights, and strategies that support business development and innovation in technology companies. In the late-August episode we focus on the priorities for autumn 2025 and new opportunities that support software companies in planning, developing, and accelerating the growth of their own business. We go through Microsoft's strategic focus areas for the coming fiscal year and, above all, how software companies can take advantage of them in their own business.
The goal is to give listeners a clear understanding of how their own product, service, or go-to-market strategy can be aligned with the evolution of the ecosystem, and how Microsoft can support that journey in concrete ways.

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Eetu Roponen, Sr Partner Development Manager, Microsoft

Watch the recording here: Teknologiayritysten ja Microsoftin prioriteetit syksylle 2025. – Finland Cloud Champion

🏆 Agents League Winner Spotlight – Reasoning Agents Track
Agents League was designed to showcase what agentic AI can look like when developers move beyond single-prompt interactions and start building systems that plan, reason, verify, and collaborate. Across three competitive tracks (Creative Apps, Reasoning Agents, and Enterprise Agents), participants had two weeks to design and ship real AI agents using production-ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub. Today, we're excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi-Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents track challenge was to design a multi-agent system capable of effectively assisting students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback. The suggested reference architecture modeled a realistic learning journey: starting from free-form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation, emphasizing reasoning, decision-making, and human-in-the-loop validation at every step.

All details are available here: agentsleague/starter-kits/2-reasoning-agents at main · microsoft/agentsleague.
The Winning Project: CertPrep Multi-Agent System

The CertPrep Multi-Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families. At a high level, the system turns free-form learner input into a structured certification plan, measurable progress signals, and actionable recommendations, demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

Inside the Multi-Agent Architecture

At its core, the system is designed as a multi-agent pipeline that combines sequential reasoning, parallel execution, and human-in-the-loop gates, with traceability and responsible AI guardrails. The solution is composed of eight specialized reasoning agents, each focused on a specific stage of the learning journey, including:

LearnerProfilingAgent – Converts free-text background information into a structured learner profile using the Microsoft Foundry SDK (with deterministic fallbacks).
StudyPlanAgent – Generates a week-by-week study plan using a constrained allocation algorithm to respect the learner's available time.
LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
AssessmentAgent – Generates and evaluates domain-proportional mock exams.
CertificationRecommendationAgent – Issues a clear "GO / CONDITIONAL GO / NOT YET" decision with remediation steps and next-cert suggestions.

Throughout the pipeline, a 17-rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human-in-the-loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.
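As an illustration of the kind of weighted readiness score the ProgressAgent computes and the GO / CONDITIONAL GO / NOT YET decision built on top of it, here is a minimal sketch. The weights and thresholds below are assumptions for the example, not CertPrep's actual values:

```python
# Hypothetical weighted readiness score in the spirit of the ProgressAgent.
# Weights and thresholds are illustrative assumptions, not CertPrep's formula.
WEIGHTS = {"domain_coverage": 0.4, "time_utilization": 0.2, "practice_performance": 0.4}

def readiness_score(signals: dict) -> float:
    """Combine 0..1 signals into a single 0..100 readiness score."""
    return round(100 * sum(WEIGHTS[k] * signals[k] for k in WEIGHTS), 1)

def decision(score: float) -> str:
    """Map a score to a GO / CONDITIONAL GO / NOT YET recommendation."""
    if score >= 80:
        return "GO"
    if score >= 60:
        return "CONDITIONAL GO"
    return "NOT YET"

s = readiness_score({"domain_coverage": 0.9,
                     "time_utilization": 0.7,
                     "practice_performance": 0.8})
print(s, decision(s))  # → 82.0 GO
```

The value of a weighted score like this is that it stays explainable: each input signal's contribution is visible, which is what lets the UI show learners why the system considers them ready or not.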
CertPrep leverages the Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

Managed agents via the Foundry SDK
Structured JSON outputs using GPT-4o (JSON mode) with conservative temperature settings
Guardrails enforced through Azure Content Safety
Parallel agent fan-out using concurrent execution
Typed contracts with Pydantic for every agent boundary
AI-assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.

User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well-structured front-end experience. The application is built with Streamlit and organized as a 7-tab interactive interface, guiding learners step by step through their preparation journey. From a user's perspective, the flow looks like this:

Profile & Goals Input – Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.

Learning Path & Study Plan Visualization – Once generated, the study plan is presented using visual aids such as Gantt-style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.

Progress Tracking & Readiness Scoring – As learners move forward, the UI surfaces an exam-weighted readiness score, combining domain coverage, study plan adherence, and assessment performance, helping users understand why the system considers them ready (or not yet).

Assessments and Feedback – Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
Transparent Recommendations – Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent’s decision‑making.

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

✅ Clear separation of reasoning roles, instead of prompt‑heavy monoliths
✅ Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
✅ Observable, debuggable workflows, aligned with Foundry’s production goals
✅ Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

Try It Yourself

Explore the project, architecture, and demo here:

🔗 GitHub Issue (full project details): https://github.com/microsoft/agentsleague/issues/76
🎥 Demo video: https://www.youtube.com/watch?v=okWcFnQoBsE
🌐 Live app (mock data): https://agentsleague.streamlit.app/

Building an Agentic, AI-Powered Helpdesk with Agents Framework, Azure, and Microsoft 365
The article describes how to build an agentic, AI-powered helpdesk using Azure, Microsoft 365, and the Microsoft Agent Framework. The goal is to automate ticket handling, enrich requests with AI, and integrate seamlessly with M365 tools like Teams, Planner, and Power Automate.

Partner Case Study | DeepJudge
Legal work depends on precision, precedent, and the ability to apply institutional knowledge across diverse matters. For many firms, that knowledge is documented but not always easy to access or act on. DeepJudge, a Microsoft partner, is helping legal teams bridge that gap with AI-powered search and workflow tools built on Microsoft Azure. DeepJudge specializes in enterprise search and agentic AI workflows tailored for legal professionals.

DeepJudge participated in Microsoft for Startups and the Pegasus Program, a selective initiative that helps high-potential partners scale through technical guidance, go-to-market support, and early access to Microsoft innovations. Today, the company’s platform is built on Microsoft Azure—including Azure OpenAI Service, Azure Kubernetes Service, and Microsoft Defender for Cloud—to promote high performance and robust data protection.

Making internal knowledge accessible—and secure

CMS Switzerland, a full-service law firm with more than 160 employees and a legacy spanning over 80 years, offers tailored legal solutions for businesses, investors, and private individuals. Known for its deep legal expertise and cross-border capabilities, CMS Switzerland built a strong foundation of internal knowledge—contracts, case files, templates, presentations, and precedent documents—stored across various systems and folders. While this information was historically well maintained, it wasn’t always easy for lawyers to locate and apply it consistently across cases.

“Clients hire law firms for their expertise—but law firms often underestimate their breadth and depth of existing knowledge and experience,” said Stefan Brunnschweiler, Managing Partner at CMS Switzerland.

The firm wanted to better surface and apply its internal expertise across teams—without disrupting existing workflows or compromising on data protection. The goal was to institutionalize internal know-how so that all employees could access and apply it confidently in their daily work.
Security was a critical consideration. As part of CMS, one of the largest international law firms with over 7,200 lawyers in 92 offices across 50 countries, CMS Switzerland needed a solution that could meet strict data protection requirements while offering the flexibility and performance of modern AI tools.

“Data security is our top priority,” Brunnschweiler emphasized. “The fact that Microsoft hosts our data in Switzerland and that DeepJudge, through Microsoft, also ensures high data protection convinced us.”

CMS Switzerland began exploring options that could help surface internal knowledge more efficiently, reduce time spent on manual research, and support faster onboarding of new employees. The firm was looking for a solution that could meet the highest standards for security, reliability, and usability—while also aligning with the operational realities of legal work.

Continue reading here
Explore all case studies or submit your own
Subscribe to the case studies tag to follow all new case study posts. Don't forget to follow this blog to receive email notifications of new stories!

Act now: Unlock cloud security growth to drive stronger customer outcomes
When it comes to accelerating migration and modernization, security is no longer an add‑on for customers—it’s a baseline expectation. By embedding security earlier, you can deliver a built-in security approach, making it possible to drive more seamless adoption and stronger customer outcomes from day one. Microsoft is offering two initiatives to empower you to lead with security:

Complete the Cloud Security Envisioning Workshop before June 30 to sharpen your approach and engage customers earlier in their cloud journey. After June 30, access requirements for the workshop may change, so review the latest specialization guidance before planning your next actions.
Attain the Cloud Security specialization to unlock continued access to the workshop and strengthen your ability to lead secure cloud transformations.

Get started by reviewing the Cloud Security specialization requirements, then assess your progress in Partner Center and collaborate with your Partner Development Manager to accelerate readiness and streamline the process. For additional guidance, go to Partner Concierge or join the Azure Migrate and Modernize Partner Forum.

Drive stronger outcomes, starting at the foundation of every cloud engagement: security. Get started by acting now:

Learn about the Cloud Security Envisioning Workshop.
Explore and attain the Cloud Security specialization.

Azure IoT Hub + Azure Device Registry (Preview Refresh): Device Trust and Management at Fleet Scale
What’s New in this Preview

In November 2025, we announced the preview integration of Azure IoT Hub with Azure Device Registry, marking a huge step towards integrating IoT devices into the broader Azure ecosystem. We’re grateful to the customers and partners who participated in the preview and shared valuable feedback along the way. Today, we’re expanding the preview with new capabilities to strengthen security, improve fleet management, and simplify development for connected devices. With this refresh, preview customers can:

Automate device certificate renewals with zero-touch, at-runtime operations to minimize downtime and maintain a strong security posture.
Integrate existing security infrastructure, such as private certificate authorities, with your Azure Device Registry namespace.
Leverage certificate revocation controls to isolate device- or fleet-level risks and maintain operational continuity.
Utilize an improved Azure portal experience for streamlined configuration and lifecycle management of your devices.
Accelerate solution development with expanded IoT Hub and DPS device SDK compatibility for smoother integration and faster time to value.

Together, these enhancements help organizations secure, govern, and manage their IoT deployments using familiar Azure-native tools and workflows.

Why this matters: From Connected Devices to Connected Operations

Operational excellence begins by bridging the gap between physical assets and digital intelligence. Consider a global logistics fleet where every vehicle is more than just a machine; it is a trusted, connected, and manageable digital entity in the cloud. As these assets move, they emit a continuous stream of telemetry - from engine vibrations to fuel consumption - directly to a unified data ecosystem, where AI agents can reason over it with greater context.
Instead of waiting for a breakdown, these agents detect wear patterns, cross-reference with digital twins, and provide recommendations to reroute a vehicle for service before a failure occurs. This completes a shift from reactive troubleshooting to proactive physical operations. Yet, for many organizations, this transformation is often stalled by fragmented systems where security policies, device registries, and data streams exist in silos. Overcoming this requires a sophisticated stack designed to establish trust, manage device lifecycles, and orchestrate data flows at a global scale.

The Digital Operations stack for cloud-connected devices

This journey starts with having a secure foundation for fleet management. In an era where perimeter security is no longer enough, organizations need an identity foundation that is both hardware-rooted and deeply integrated with device provisioning. Utilizing robust X.509 certificate management, where keys and credentials are anchored in tamper-resistant hardware, provides high-assurance system integrity across millions of endpoints. Once trust is established, Azure Device Registry creates a unified management plane, where devices are represented as first-class Azure resources, enabling ARM-based fleet management, role-based access control for lifecycle operations, and Azure Policy for enforcement. Simultaneously, IoT Hub provides secure, bidirectional messaging for at-scale fleets.

This high-fidelity data provides the essential fuel for Physical AI. By streaming trusted telemetry into Microsoft Fabric, organizations can break down data silos and allow AI agents to reason over real-world events in a centralized analytics environment. The Azure IoT stack provides the essential bridge for cloud-connected devices, enabling customers to transform their industrial environments into highly secure and intelligent ecosystems.
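The identity foundation described above rests on one distinction: a device's identity is immutable, while the X.509 credentials that prove it are rotated, renewed, and revoked over time. This conceptual sketch (plain dataclasses, not the Azure Device Registry API; all names and fields are illustrative) models that separation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass(frozen=True)
class DeviceIdentity:
    """Immutable facts about the device, fixed at registration."""
    serial_number: str
    model: str

@dataclass
class Credential:
    """A replaceable X.509-style credential with an expiry."""
    thumbprint: str
    not_after: datetime
    revoked: bool = False

    def is_valid(self, now: datetime) -> bool:
        return not self.revoked and now < self.not_after

@dataclass
class DeviceRecord:
    identity: DeviceIdentity
    credentials: list = field(default_factory=list)

    def active_credential(self, now: datetime) -> Optional[Credential]:
        """Return the longest-lived credential that is still valid."""
        valid = [c for c in self.credentials if c.is_valid(now)]
        return max(valid, key=lambda c: c.not_after) if valid else None

now = datetime.now(timezone.utc)
device = DeviceRecord(DeviceIdentity("SN-0001", "edge-gateway-v2"))
old = Credential("aa11", not_after=now + timedelta(days=2))
new = Credential("bb22", not_after=now + timedelta(days=90))
device.credentials += [old, new]

old.revoked = True  # revoking the old credential never touches the identity
assert device.active_credential(now) is new
print(device.identity.serial_number)  # SN-0001, unchanged through rotation
```

Keeping these two lifecycles separate is what makes fleet-wide credential rotation possible without re-registering devices.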
For more information on Azure's approach to industrial AI, check out: Making Physical AI Practical for Real-World Industrial Operations.

Azure IoT Hub + ADR (Preview): Expanding Fleet and Certificate Lifecycle Management

The April 2026 preview for Azure IoT Hub and Azure Device Registry (ADR) delivers key features to further standardize device identity and enable policy‑driven management for certificates at scale.

You can think of device identity in Azure Device Registry like the birth record of a person. When someone is born, certain information becomes permanently associated with them - such as their date and place of birth. In the same way, a device’s identity represents its immutable existence within your solution - things like its serial number, model, or ownership context. However, as that person moves through life, they obtain different credentials that allow them to prove who they are in different situations - such as a driver’s license or passport. These credentials may expire, be renewed, or even replaced entirely over time without changing the person’s underlying identity. In IoT, devices use X.509 certificates as their credential to prove identity to services like IoT Hub.

In your Azure Device Registry namespace, you can define the public key infrastructure (PKI) that manages your X.509 certificates and certificate authorities (CAs). In this preview, we are making it easier to integrate with existing security infrastructure and manage certificates at fleet scale.

Certificate Management for Cloud-connected Devices in Azure

Bring Your Own Certificate Authority (BYO CA) in Azure Device Registry

Organizations that already operate sophisticated certificate authorities, with well‑established compliance controls, audit processes, and key custody requirements, want to integrate their trusted CA with the Azure Device Registry operating model.
With BYO CA, customers can use their own private certificate authority while still benefiting from Azure’s fully managed device provisioning and lifecycle management. Azure handles the heavy lifting of issuing, rotating, and revoking issuing certificate authorities (ICAs) and device certificates - while you stay in control of the top-most CA.

Full Ownership of Trust and Keys: By bringing their own CA, organizations maintain absolute control over their private keys and security boundaries. Azure never takes custody of the external CA, ensuring existing governance, auditability, and compliance controls remain fully intact.
Automated Lifecycle Management: While the CA remains customer-owned, Azure Device Registry automates the issuance, rotation, and revocation of device certificates. This eliminates the need for custom tooling or manual, per-device workflows that typically slow down deployments.

Bring your own Certificate Authority in Azure Device Registry

Fleet‑Wide Protection with Certificate Revocations

Revocation is a mechanism for selective isolation, used to contain a single device or a group of devices by decommissioning that device's certificates or the entire anchor of trust. When a single device is compromised, lost, or retired, device certificate revocation enables a precise, targeted response. This allows organizations to isolate individual devices instantly, reduce blast radius, and maintain uninterrupted operations for healthy devices - without rebuilding device identities. ADR propagates the revocation state to IoT Hub, blocking revoked devices until they’re re-provisioned.

When a subset of devices requires isolation, policy revocation allows operators to decommission an entire trust anchor rather than managing individual devices. By mapping a specific issuing CA to a single ADR policy, organizations gain a high-precision containment mechanism.
In a single action, an operator can invalidate a compromised CA and then plan for a staged credential rollover across the entire segment. ADR automatically enforces this updated trust chain within IoT Hub, ensuring that only devices with newly issued certificates can connect. This makes large‑scale certificate rotation predictable, controlled, and operationally simple.

Revoking the certificate for a single ADR Device on Azure Portal

Flexible Options to Renew Device Certificates

Managing X.509 certificates at scale doesn’t stop once a device is onboarded. Operational certificates are short-lived by design, ensuring devices do not rely on long-lived credentials for authentication. In real-world IoT fleets, devices are often intermittently connected, deployed in hard-to-reach locations, and expected to run continuously - making certificate renewal one of the most operationally challenging parts of device security.

Azure IoT Hub now enables device certificate renewal directly through IoT Hub, complementing the role of the Device Provisioning Service (DPS). While DPS remains the solution for first-time device onboarding and certificate issuance, IoT Hub renewal is designed for the steady state - keeping already-connected devices securely authenticated over time without introducing downtime. IoT Hub certificate renewal follows similar patterns as other device-initiated operations such as twin updates and direct methods. With this capability, devices can request a new certificate as part of normal operation, using the same secure MQTT connection they already rely on.

Support for IoT Hub and Device Provisioning Service (DPS) Device SDKs

Managing credential issuance and renewals at scale is only possible if devices can handle their own credential lifecycles. We’ve added Certificate Signing Request (CSR) support to our C, C# (.NET), Java, Python, and Embedded device SDKs for IoT Hub and the Device Provisioning Service (DPS).
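To give a feel for what the device side of a CSR flow involves, here is a minimal sketch of generating a keypair and a signed CSR using the widely used Python `cryptography` package (not the IoT device SDKs themselves; the common name and key choice are illustrative assumptions):

```python
# Sketch: generate a device key and a CSR it could submit for signing.
# This uses the third-party "cryptography" package, not an Azure SDK;
# the subject name "device-001" is a placeholder.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Private key stays on the device (in real fleets, ideally in secure hardware).
key = ec.generate_private_key(ec.SECP256R1())

# The CSR binds the device's public key to its claimed subject name
# and is signed with the private key to prove possession.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-001")]))
    .sign(key, hashes.SHA256())
)

pem = csr.public_bytes(serialization.Encoding.PEM).decode()
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE REQUEST-----
```

In the SDK-based flow described above, the device would submit a request like this over its existing connection and receive back a newly signed certificate, so the private key never leaves the device.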
Beyond developer convenience, this provides multiple device-initiated paths for certificate renewal and trust-chain agility. Devices can generate CSRs and request newly signed X.509 certificates through IoT Hub or DPS as part of normal operation. This allows security teams to rotate and update certificates in the field without touching the hardware, keeping fleets secure as certificate authorities and policies evolve over time.

Customer Feedback from Preview

We’re grateful to the customers and partners who participated in the preview and shared valuable feedback along the way. Hear some of what our customers had to say:

"The availability of a built-in certificate manager is a great upgrade in keeping the IoT space more secure." — Martijn Handels, CTO, Helin Data

“Secure data is the starting line for industrial AI. With Azure certificate management, at CogitX we can ingest manufacturing signals safely and confidently - then use domain‑aware models to deliver real‑time insights and agentic workflows that improve throughput, quality, and responsiveness.” — Pradeep Parappil, CEO, CogitX

Get Started

Explore the new capabilities in preview today and start building the next generation of connected operations with Azure IoT Hub and Azure Device Registry: Get Started with Certificate Management in Preview.

Advancing Firmware Security: Fleet Visibility and New Capabilities in Firmware Analysis
When we announced general availability of firmware analysis enabled by Azure Arc last October, our goal was clear: help organizations gain deep visibility into the security of the firmware that powers their IoT, OT, and network devices. Since then, adoption has continued to grow as customers use firmware analysis to uncover vulnerabilities, inventory software components, and secure their software supply chain.

Leading into the Hannover Messe (HMI) 2026 conference, we’re excited to share the next wave of firmware analysis capabilities, delivering enhancements that help customers connect firmware risk to real-world fleet impact, prioritize vulnerabilities more effectively, scale to larger and more complex firmware images, and expand security analysis for UEFI-based platforms. These updates are driven directly by customer feedback and by the rapidly evolving threat landscape facing embedded and edge devices.

Connecting Firmware Risk to Your Deployed Fleet with Azure Device Registry (Preview)

Securing connected devices doesn’t stop at identifying vulnerabilities in firmware—it requires understanding where those vulnerabilities exist in your deployed fleet and which devices are affected. We’re excited to announce a new preview integration between firmware analysis enabled by Azure Arc and Azure Device Registry, bringing fleet-level visibility of IoT and OT devices directly into the firmware analysis experience. This helps customers quickly understand how many devices and assets are running a given firmware image, and which ones may be exposed to known security issues.

From firmware insights to fleet impact

Firmware analysis helps customers uncover security risks hidden deep inside the firmware running IoT, OT, and network devices—risks such as known CVEs, outdated open-source components, weak cryptography, and insecure configurations. Until now, these insights were primarily scoped to the firmware image itself.
With this new preview integration, firmware analysis now connects directly to Azure Device Registry, allowing customers to:

See how many devices from the IoT Hub integration with ADR (preview) and assets from Azure IoT Operations are associated with a specific analyzed firmware image
Understand the real-world blast radius of vulnerabilities discovered in firmware
Quickly identify which devices may require patching, mitigation, or isolation

This preview bridges an important gap between security analysis and operational decision-making.

What’s included in this preview

With this release, we’re introducing new fleet-level context directly into the firmware analysis experience:

A new Devices + Assets count column in the firmware analysis workspace showing how many Azure Device Registry devices and assets are running each analyzed firmware image
A click-through experience that lets users view the list of affected devices and assets in Azure Device Registry
Visibility spanning both devices connected via IoT Hub and assets managed through Azure IoT Operations

This information is derived by correlating firmware metadata with device and asset inventory in Azure Device Registry, giving customers immediate insight into deployment exposure.

Key use cases

Identify vulnerable devices at scale: When critical CVEs are discovered in a firmware image, customers can immediately see how many deployed devices are impacted—without manually correlating spreadsheets, tools, or inventories.
Prioritize remediation actions: With fleet visibility, teams can decide whether to patch devices, temporarily isolate affected devices from the network, or disable devices that pose unacceptable risk.
Bridge security and operations teams: Security teams gain clear insight into where vulnerabilities exist, while operations teams can quickly act on specific devices and assets—all within the Azure portal.
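Conceptually, the correlation behind this integration is a join between analyzed firmware images and the device inventory. The sketch below is purely illustrative (the dictionaries stand in for firmware analysis results and Azure Device Registry inventory; the real services surface this through the Azure portal, not these structures):

```python
# Illustrative only: toy stand-ins for analysis results and device inventory.

# Findings per analyzed firmware image (image id -> count of critical CVEs).
findings_by_image = {
    "router-fw-2.1.0": 3,
    "camera-fw-5.4.2": 0,
}

# Inventory: which registered device runs which firmware image.
device_inventory = [
    {"device_id": "gw-001", "image": "router-fw-2.1.0"},
    {"device_id": "gw-002", "image": "router-fw-2.1.0"},
    {"device_id": "cam-117", "image": "camera-fw-5.4.2"},
]

def blast_radius(findings: dict, inventory: list) -> dict:
    """For each image with critical findings, list the affected devices."""
    impacted = {}
    for device in inventory:
        if findings.get(device["image"], 0) > 0:
            impacted.setdefault(device["image"], []).append(device["device_id"])
    return impacted

print(blast_radius(findings_by_image, device_inventory))
# {'router-fw-2.1.0': ['gw-001', 'gw-002']}
```

This is the manual spreadsheet correlation the integration eliminates: the per-image finding becomes a concrete list of devices to patch, isolate, or disable.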
This integration is especially valuable in environments where downtime, safety, or regulatory compliance matter—such as manufacturing, energy, telecommunications, and critical infrastructure.

Prioritizing Vulnerabilities with Enhanced CVE Metadata (Preview)

The number of publicly disclosed vulnerabilities continues to rise year over year, making it increasingly difficult for security teams to determine which CVEs truly require urgent action. Simply knowing that a vulnerability exists is no longer enough—teams need context to prioritize remediation efforts. With this release, firmware analysis now provides richer metadata for each discovered CVE, helping customers focus on vulnerabilities that pose the greatest real-world risk. New CVE metadata includes:

CISA Known Exploited Vulnerabilities (KEV) status – Indicates whether a CVE is listed in the CISA KEV catalog, signaling that the vulnerability is actively exploited in the wild.
EPSS score (Exploit Prediction Scoring System) – A data-driven probability score that estimates the likelihood of a vulnerability being exploited in the next 30 days, complementing traditional severity metrics by focusing on exploitation likelihood rather than impact alone.
Additional vulnerability context – Including CVSS vectors and base scores, CWE classifications, and expanded metadata to support filtering and analysis.

Together, these enhancements make it easier to triage findings, align remediation with risk, and communicate priorities across security, engineering, and product teams.

Faster Performance for Large and Complex Firmware Images

As firmware analysis adoption has grown, we’ve seen customers analyze increasingly large and complex firmware images—particularly in domains like networking equipment, where a single image can generate thousands of findings. To support these scenarios, we’ve made architectural enhancements to the service that significantly improve performance when working with large result sets.
Key improvements include:

Up to 90% reduction in load times of analysis results, especially for firmware images producing 10,000+ findings
More responsive filtering and exploration of results

These changes ensure that firmware analysis remains fast and usable at scale, even for complex network and infrastructure firmware images.

Expanding UEFI Firmware Analysis (Preview)

Modern devices increasingly rely on UEFI firmware as a foundational security boundary. In this release, we’re expanding our UEFI analysis capabilities to provide deeper visibility into UEFI executables and components. New UEFI-focused capabilities include:

Detection of OpenSSL libraries and related CVEs within UEFI firmware
Binary hardening analysis for UEFI executables, including detection of proper configuration of Data Execution Prevention (DEP) memory protection
Continued support for discovering cryptographic material in UEFI images, including embedded certificates and keys

This preview allows customers to evaluate the new capabilities, provide feedback, and help shape future enhancements in this area. Note: UEFI SBOM and binary analysis features are currently in preview and intended for evaluation and feedback.

Bulk Export of Analysis Results for Supply Chain Collaboration

We also recently released a highly requested feature that makes it easier to share firmware analysis results with partners and suppliers. Customers can now:

Bulk download analysis results across one or more firmware images
Export results as CSV files packaged into a ZIP archive

This capability simplifies workflows such as sharing findings with device manufacturers or firmware suppliers, integrating results into downstream analysis or reporting pipelines, and supporting software supply chain security and compliance processes.

Looking Ahead

We’re excited about the progress we’ve made with this release and what it means for customers securing IoT, OT, and network devices.
From connecting firmware risk to fleet-level impact with Azure Device Registry, to richer vulnerability prioritization, improved scalability, and deeper UEFI analysis—these enhancements reinforce firmware analysis as a critical tool for addressing some of the most challenging blind spots in modern infrastructure security. Firmware security is foundational to trustworthy systems—especially as edge devices continue to play a central role in industrial operations, networking, and data collection.

If you’re already using firmware analysis and Azure Device Registry, the ADR integration preview will appear directly within the firmware analysis experience as it rolls out. We look forward to your feedback as we continue building secure, observable, and manageable digital operations with Azure. As always, please let us know what you think.
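As a closing illustration, the KEV status and EPSS scores described in the CVE metadata section above lend themselves to a simple triage ordering. This heuristic is not part of the firmware analysis service; the field names, sample CVE data, and ranking rule (KEV first, then EPSS, then CVSS) are all assumptions chosen for the sketch:

```python
from dataclasses import dataclass

# Hypothetical triage heuristic: rank findings so actively exploited
# (KEV-listed) CVEs come first, then by exploitation likelihood (EPSS),
# then by severity (CVSS). Sample data is invented for illustration.

@dataclass
class Finding:
    cve_id: str
    cvss: float   # 0.0-10.0 base severity score
    epss: float   # 0.0-1.0 probability of exploitation within 30 days
    kev: bool     # listed in the CISA KEV catalog

def triage_order(findings: list) -> list:
    # reverse=True sorts descending; True > False puts KEV entries first.
    return sorted(findings, key=lambda f: (f.kev, f.epss, f.cvss), reverse=True)

findings = [
    Finding("CVE-2024-0001", cvss=9.8, epss=0.02, kev=False),
    Finding("CVE-2023-0002", cvss=7.5, epss=0.92, kev=True),
    Finding("CVE-2022-0003", cvss=5.3, epss=0.40, kev=False),
]
print([f.cve_id for f in triage_order(findings)])
# ['CVE-2023-0002', 'CVE-2022-0003', 'CVE-2024-0001']
```

Note how the CVSS 9.8 finding ranks last here: with no KEV listing and a 2% EPSS, it is severe but unlikely to be exploited soon, which is exactly the distinction the enhanced metadata lets teams make.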