Partner-Exclusive Event: AMA with Fabric Leadership
We're excited to invite Fabric Partner Community members to a live Ask Me Anything (AMA) with Fabric leadership: a rare opportunity to get direct answers and insights from the team shaping Azure Data and Microsoft Fabric.

Featured guest: Shireesh Thota, CVP, Azure Databases
Tuesday, March 24, 8:00-9:00 AM PT

With FabCon + SQLCon wrapping just days before, this session is designed for partners who want to go deeper: ask follow-up questions, pressure-test ideas, and understand what's next as they plan with customers. Topics may include:

- What's next for Azure SQL, Cosmos DB, and PostgreSQL
- Guidance on SQL Server roadmap direction
- Deep-dive questions on SQL DB in Fabric
- Questions about the new DP-800 Analytics Engineer exam going into beta this month

Partners can submit any questions: technical, roadmap-focused, certification-related, or customer-scenario driven. This event is exclusively available to members of the Fabric Partner Community. Not a member yet? Join the Fabric Partner Community to attend this AMA and unlock access to partner-only events like this: https://aka.ms/JoinFabricPartnerCommunity

Microsoft partners with DataBahn to accelerate enterprise deployments for Microsoft Sentinel
Enterprise security teams are collecting more telemetry than ever across cloud platforms, endpoints, SaaS applications, and on-premises infrastructure. Security teams want broader data coverage and longer retention without losing control of cost and data quality. This post explains the new DataBahn integration with Microsoft Sentinel, why it matters for SIEM operations, and how to think about using a security data pipeline alongside Sentinel for onboarding, normalization, routing, and governance.

DataBahn joins the Microsoft Sentinel partner ecosystem

This integration reflects Microsoft Sentinel's open partner ecosystem, giving customers choice in the partners they use alongside Microsoft Sentinel to manage their security data pipelines. DataBahn joins a broader set of complementary partners, enabling customers to tailor solutions for their unique security data needs. DataBahn is available through Microsoft Marketplace, and customers can apply existing Azure Consumption Commitments toward its purchase.

Why this matters for security operations teams

Security teams are under relentless pressure to ingest more data, move faster through SIEM migrations, and preserve data fidelity for detections and investigations, all while managing costs effectively. The challenge isn't just ingesting data, but ensuring the right telemetry arrives in a consistent, governed format that analysts and detections can trust. This is where a security data pipeline, alongside Microsoft Sentinel's native connectors and data collection rules (DCRs), can add value. It helps streamline onboarding of third-party and custom sources, improve normalization consistency, and provide operational visibility across diverse environments as deployments scale.

What the DataBahn integration is positioned to do with Microsoft Sentinel

Security teams want broader coverage and need to ensure third-party data is consistently shaped, routed, and governed at scale.
This is where a security data pipeline like DataBahn complements Microsoft Sentinel. Sitting upstream of ingestion, the pipeline layer standardizes onboarding and shaping across sources while providing operational visibility into data flow and pipeline health. Together, the collaboration focuses on reducing onboarding friction, improving normalization consistency, enabling intentional routing, and strengthening governance signals so teams can quickly detect source changes, parser breaks, or data gaps, while staying aligned with Sentinel analytics and detection workflows. This model gives Sentinel customers more choice to move faster, onboard data at scale, and retain control over data routing.

Key capabilities

Bidirectional data integration

The integration enables seamless delivery of telemetry into Sentinel while aligning with Sentinel detection logic and schema expectations. This helps ensure telemetry pipelines remain consistent with:

- Sentinel detection formats
- Custom analytics rules
- Sentinel data models and schemas
- Automated table and DCR management

As detections evolve, pipeline configurations can adapt to maintain detection fidelity and data consistency.

Advanced management API

DataBahn provides an advanced management API that allows organizations to programmatically configure and manage pipeline integrations with Sentinel. This enables teams to:

- Automate pipeline configuration
- Manage operational workflows
- Integrate pipeline management into broader security or DevOps automation processes

Automatic identification of configuration conflicts

In complex environments with multiple telemetry sources and routing rules, configuration conflicts can arise across filtering logic, enrichment pipelines, and detection dependencies.
The integration helps automatically:

- Detect conflicts in filtering rules and pipeline logic
- Identify clashes with detection dependencies
- Highlight missing configurations or coverage gaps

[Figure: Automated detection of configuration conflicts and pipeline rule dependencies]

This visibility allows SOC teams to quickly identify issues that could impact detection reliability.

Centralized pipeline management

The integration enables centralized management of data collection and transformation workflows associated with Sentinel telemetry pipelines. This provides unified visibility and control across telemetry sources while maintaining compatibility with Sentinel analytics and detections. Centralized management simplifies operations across large environments where multiple telemetry pipelines must be maintained.

[Figure: Centralized pipeline management for telemetry sources across the environment]

Flexible data transformation and customization

Security telemetry often arrives in inconsistent formats across vendors and platforms. The platform supports flexible transformation capabilities that allow organizations to:

- Normalize logs into standard or custom Sentinel table formats
- Add or derive fields required by Sentinel detections
- Apply filtering or enrichment rules before ingestion

Configuration can be performed through a single-screen workflow, enabling teams to modify schemas and define filtering logic without disrupting downstream analytics.

[Figure: Flexible data transformation to align telemetry with Microsoft Sentinel ASIM schemas]

The platform also provides schema drift detection and source health monitoring, helping teams maintain reliable telemetry pipelines as environments evolve.

Closing

Effective security operations depend on how quickly a SOC can onboard new data, scale effectively, and maintain high-quality investigations.
Sentinel provides a cloud-native, AI-ready foundation to ingest security data from first- and third-party data sources, while enabling economical, large-scale retention and deep analytics using open data formats and multiple analytics engines. DataBahn's partnership with Sentinel is positioned as a pipeline layer that can help teams onboard third-party sources, shape and normalize data, and apply routing and governance patterns before data lands in Sentinel.

Learn more

- DataBahn for Microsoft Sentinel
- DataBahn Press Release - DataBahn Deepens Partnership with Microsoft Sentinel
- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- Microsoft Sentinel - AI-Ready Platform | Microsoft Security
- Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Microsoft Sentinel data lake is now generally available | Microsoft Community Hub

February 2026 Recap: Azure Database for MySQL
We're excited to share a summary of the Azure Database for MySQL updates from the last couple of months.

Extended Support Timeline Update

Based on customer feedback requesting additional time to complete major version upgrades, we have extended the grace period before extended support billing begins for Azure Database for MySQL:

- MySQL 5.7: Extended support billing start date moved from April 1, 2026 to August 1, 2026.
- MySQL 8.0: Extended support billing start date moved from June 1, 2026 to January 1, 2027.

This update gives customers additional time to plan, validate, and complete upgrades while maintaining service continuity and security. We continue to recommend upgrading to a supported MySQL version as early as possible to avoid extended support charges and benefit from the latest improvements. Learn more about performing a major version upgrade in Azure Database for MySQL. When upgrading using a read replica, you can optionally use the Rename Server feature to promote the replica and avoid application connection-string updates after the upgrade completes. Rename Server is currently in Private Preview and is expected to enter Public Preview around the April 2026 timeframe.

Private Preview: Fabric Mirroring for Azure Database for MySQL

This capability enables real-time replication of MySQL data into Microsoft Fabric with a zero-ETL experience, allowing data to land directly in OneLake in analytics-ready formats. Customers can seamlessly analyze mirrored data using Microsoft Fabric experiences, while isolating analytical workloads from their operational MySQL databases.

Stay Connected

We welcome your feedback and invite you to share your experiences or suggestions at AskAzureDBforMySQL@service.microsoft.com. Stay up to date by visiting What's new in Azure Database for MySQL, and follow us on YouTube | LinkedIn | X for ongoing updates.
Thank you for choosing Azure Database for MySQL!

Implementing the Backend-for-Frontend (BFF) / Curated API Pattern Using Azure API Management
Modern digital applications rarely serve a single type of client. Web portals, mobile apps, partner integrations, and internal tools often consume the same backend services, yet each has different performance, payload, and UX requirements. Exposing backend APIs directly to all clients frequently leads to over-fetching, chatty networks, and tight coupling between UI and backend domain models. This is where a Curated API or Backend-for-Frontend API design pattern becomes useful.

What Is the Backend-for-Frontend (BFF) Pattern?

The Backend-for-Frontend (BFF) pattern, also known as the Curated API pattern, solves this problem by introducing a client-specific API layer that shapes, aggregates, and optimizes data specifically for the consuming experience. There is very good architectural guidance on this in the Azure Architecture Center [see the 1st link in the Citations section]. The BFF pattern introduces a dedicated backend layer for each frontend experience. Instead of exposing generic backend services directly, the BFF:

- Aggregates data from multiple backend services
- Filters and reshapes responses
- Optimizes payloads for a specific client
- Shields clients from backend complexity and change

Each frontend (web, mobile, partner) can evolve independently, without forcing backend services to accommodate UI-specific concerns.

Why Azure API Management Is a Natural Fit for BFF

Azure API Management (APIM) is commonly used as an API gateway, but its policy engine enables much more than routing and security. Using APIM policies, you can:

- Call multiple backend services (sequentially or in parallel)
- Transform request and response payloads to provide a uniform experience
- Apply caching, rate limiting, authentication, and resiliency policies

All of this can be achieved without modifying backend code, making APIM an excellent place to implement the BFF pattern.

When Should You Use a Curated API in APIM?
Using APIM as a BFF makes sense when:

- Frontend clients require optimized, experience-specific payloads
- Backend services must remain generic and reusable
- You want to reduce round trips from mobile or low-bandwidth clients
- You want to implement uniform policies for cross-cutting concerns: authentication/authorization, caching, rate limiting, logging, etc.
- You want to avoid building and operating a separate aggregation service
- You need strong governance, security, and observability at the API layer

How the BFF Pattern Works in Azure API Management

There is a GitHub repository [see the 2nd link in the Citations section] that provides a wealth of information and samples on how to create complex APIM policies. I recently contributed a sample policy for Curated APIs to this repository [see the 3rd link in the Citations section]. At a high level, the policy follows this flow: APIM receives a single client request, then issues parallel calls to multiple backend services, as shown below:

<wait for="all">
    <send-request mode="copy" response-variable-name="operation1" timeout="{{bff-timeout}}" ignore-error="false">
        <set-url>@("{{bff-baseurl}}/operation1?param1=" + context.Request.Url.Query.GetValueOrDefault("param1", "value1"))</set-url>
    </send-request>
    <send-request mode="copy" response-variable-name="operation2" timeout="{{bff-timeout}}" ignore-error="false">
        <set-url>{{bff-baseurl}}/operation2</set-url>
    </send-request>
    <send-request mode="copy" response-variable-name="operation3" timeout="{{bff-timeout}}" ignore-error="false">
        <set-url>{{bff-baseurl}}/operation3</set-url>
    </send-request>
    <send-request mode="copy" response-variable-name="operation4" timeout="{{bff-timeout}}" ignore-error="false">
        <set-url>{{bff-baseurl}}/operation4</set-url>
    </send-request>
</wait>

A few things to consider: the wait policy allows us to make multiple requests using nested send-request policies.
The for="all" attribute means that policy execution waits for all the nested send-requests to complete before moving on. A few notes on the attributes:

- {{bff-baseurl}}: this example assumes a single base URL for all endpoints, but it does not have to be; the calls can be made to any endpoint.
- response-variable-name: sets a unique variable name to hold the response object from each of the parallel calls. These variables are used later in the policy to transform and produce the curated result.
- timeout: this example assumes uniform timeouts for each endpoint, but they can vary as well.
- ignore-error: set this to true only when you are not concerned about the response from the backend (like a fire-and-forget request); otherwise keep it false so that the response variable captures the response with its error code.

Once responses from all the requests have been received (or timed out), policy execution moves to the next policy. The responses from all requests are then collected and transformed into a single response:

<!-- Collect the complete response in a variable. -->
<set-variable name="finalResponseData" value="@{
    JObject finalResponse = new JObject();
    int finalStatus = 200; // Assumes the final success status (if all backend calls succeed) is 200 OK; can be customized.
    string finalStatusReason = "OK";

    void ParseBody(JObject element, string propertyName, IResponse response){
        string body = "";
        if(response != null){
            body = response.Body.As<string>();
            try{
                var jsonBody = JToken.Parse(body);
                element.Add(propertyName, jsonBody);
            }
            catch(Exception ex){
                element.Add(propertyName, body);
            }
        }
        else{
            element.Add(propertyName, body); // Add an empty body if the response was not captured
        }
    }

    JObject PrepareResponse(string responseVariableName){
        JObject responseElement = new JObject();
        responseElement.Add("operation", responseVariableName);
        IResponse response = context.Variables.GetValueOrDefault<IResponse>(responseVariableName);
        if(response == null){
            finalStatus = 207; // If any of the responses are null, the final status will be 207
            finalStatusReason = "Multi Status";
            ParseBody(responseElement, "error", response);
            return responseElement;
        }
        int status = response.StatusCode;
        responseElement.Add("status", status);
        if(status == 200){ // Assumes all the backend APIs return 200; if they return other success codes (e.g. 201), add them here
            ParseBody(responseElement, "body", response);
        }
        else{ // If any of the response codes are non-success, the final status will be 207
            finalStatus = 207;
            finalStatusReason = "Multi Status";
            ParseBody(responseElement, "error", response);
        }
        return responseElement;
    }

    // Gather responses into a JSON array.
    // Pass each of the response variable names here.
    JArray finalResponseBody = new JArray();
    finalResponseBody.Add(PrepareResponse("operation1"));
    finalResponseBody.Add(PrepareResponse("operation2"));
    finalResponseBody.Add(PrepareResponse("operation3"));
    finalResponseBody.Add(PrepareResponse("operation4"));

    // Populate finalResponse with aggregated body and status information.
    finalResponse.Add("body", finalResponseBody);
    finalResponse.Add("status", finalStatus);
    finalResponse.Add("reason", finalStatusReason);
    return finalResponse;
}" />

What this code does is prepare the response as a single JSON object with the help of the PrepareResponse function. The JSON not only collects the response body from each response variable, but also captures the response codes and determines the final response code based on the individual response codes. For the purposes of this example, I have assumed that all operations are GET operations and that if all operations return 200, the overall response is 200 OK; otherwise it is 207 Multi-Status. This can be customized to the actual scenario as needed. Once the final response variable is ready, construct and return a single response based on the above calculation:

<!-- This shows how to return the final response code and body. Other response elements (e.g. outbound headers) can be curated and added here the same way. -->
<return-response>
    <set-status code="@((int)((JObject)context.Variables["finalResponseData"]).SelectToken("status"))" reason="@(((JObject)context.Variables["finalResponseData"]).SelectToken("reason").ToString())" />
    <set-body>@(((JObject)context.Variables["finalResponseData"]).SelectToken("body").ToString(Newtonsoft.Json.Formatting.None))</set-body>
</return-response>

This effectively turns APIM into an experience-specific backend tailored to frontend needs.

When Not to Use APIM for a BFF Implementation

While this approach works well when you want to curate a few responses together and apply a unified set of policies, there are some cases where you might want to rethink it:

- When the transformation needs are complex. Maintaining a lot of code in APIM is not fun. If the response transformation requires a lot of code that needs to be unit tested and that might change over time, it might be better to stand up a dedicated curation service. Azure Functions and Azure Container Apps are well suited for this.
- When each backend endpoint requires very complex request transformation. That also increases the amount of code, which would likewise indicate the need for an independent curation service.
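To make the aggregation rules concrete, here is a minimal Python sketch (not APIM policy code, and not part of the actual policy) that simulates the same status-aggregation logic: each backend response contributes an entry, and any missing or non-200 response downgrades the overall status to 207 Multi-Status.

```python
def aggregate(responses):
    """Simulate the curated-response aggregation.

    responses: dict mapping operation name -> (status_code, body) tuple,
    or None when no response was captured (e.g. a timeout).
    """
    final_status, final_reason = 200, "OK"
    body = []
    for name, resp in responses.items():
        entry = {"operation": name}
        if resp is None:
            # Missing response: record an empty error and downgrade the status.
            final_status, final_reason = 207, "Multi Status"
            entry["error"] = ""
        else:
            status, payload = resp
            entry["status"] = status
            if status == 200:
                entry["body"] = payload
            else:
                # Any non-success backend code downgrades the overall status.
                final_status, final_reason = 207, "Multi Status"
                entry["error"] = payload
        body.append(entry)
    return {"status": final_status, "reason": final_reason, "body": body}
```

Running `aggregate({"operation1": (200, {"a": 1}), "operation2": (500, "boom")})` yields an overall 207 with per-operation entries, mirroring what the policy expression above builds in `finalResponseData`.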
If you are not already using APIM, this alone does not warrant adding it to your architecture just to implement a BFF.

Conclusion

Using APIM is one of many approaches you can use to create a BFF layer on top of your existing endpoints. Let me know in the comments what you think of this approach.

Citations

1. Azure Architecture Center - Backend-for-Frontends Pattern
2. Azure API Management Policy Snippets (GitHub)
3. Curated APIs Policy Example (GitHub)
4. Send-request Policy Reference

Entra ID Object Drift: Are We Measuring Tenant Health Correctly?
In many enterprise environments, Secure Score is green and compliance dashboards look healthy. Yet directory object inconsistency silently accumulates: stale devices, hybrid join remnants, Intune orphan records. Over time, this becomes governance debt. In large tenants it often leads to inaccurate compliance reporting and Conditional Access targeting issues. I recently wrote a breakdown of:

- Entra ID drift patterns
- Hybrid join inconsistencies
- Intune orphan objects
- Lifecycle-based cleanup architecture

Curious how others approach object hygiene at scale. Full article: https://www.modernendpoint.tech/entra-id-cleanup-patterns/?utm_source=techcommunity&utm_medium=social&utm_campaign=entra_cleanup_launch&utm_content=discussion

One pattern I keep seeing is duplicate device identities after re-enrollment or Autopilot reset. Curious how others handle lifecycle cleanup in large Entra ID environments.

Microsoft Finland - Software Developing Companies monthly community series
Welcome back to Microsoft's webinar series for technology companies! The Software Development Monthly Community Series, organized by Microsoft Finland, is a webinar series that offers software companies timely information, concrete examples, and strategic insights into how working with Microsoft can accelerate growth and open new business opportunities. The series is aimed at technology companies of all sizes and stages of development, from startups to global players. Each episode takes a practical look at how software companies can leverage Microsoft's ecosystem, technologies, and partner programs in their own business.

Note: the Microsoft Software Developing Companies monthly community webinar series is hosted on the Cloud Champion site, where recordings are conveniently available a couple of hours after the live broadcast. Remember to register on the Cloud Champion platform on your first visit; after that you will always have access to the content and recordings. You can register via "Register now". Fill in your details and select "Other" for the Distributor field if you do not know your Microsoft distributor.

Webinars:

March 27, 2026, 09:00-09:30 - Agent Factory with Microsoft Foundry: how to build and ship AI agents to production

AI agents are quickly becoming a core building block of enterprise software, but for many organizations the challenge is getting agents all the way to production. The real competitive advantage comes from building agents in a governed way, integrating them into the overall architecture, and scaling them reliably. In this webinar we walk through, with a hands-on demo, how to build an AI agent with the Agent Service in Microsoft Foundry.
We show how to define the agent's role and instructions, how to attach data sources and tools to the agent, and how this fits into the Microsoft Agent Factory. Registration link: Microsoft Finland - Software Developing Companies monthly community series: Agent Factory with Microsoft Foundry - how to build and ship AI agents to production - Finland Cloud Champion. Speakers: Juha Karvonen, Sr Partner Tech Strategist; Eetu Roponen, Sr Partner Development Manager, Microsoft

February 27, 2026, 09:00-09:30 - M-Files' path to success together with Microsoft

What has it taken to build a global partnership between M-Files and Microsoft, and what benefits has it delivered? In this webinar you will hear the inside story directly from Kimmo Järvensivu, Strategic Alliances Director at M-Files: how the partnership with Microsoft was built, what was learned along the way, and how the collaboration has accelerated growth. M-Files is an intelligent information management platform that helps organizations manage documents and information through metadata, regardless of location. It makes information easier to find, improves compliance, and supports modern work within the Microsoft ecosystem. Come hear what a successful partnership really takes, and how to turn it into a strategic competitive advantage.
Watch the recording: Microsoft Finland - Software Developing Companies Monthly Community Series - M-Files' path to success together with Microsoft - Finland Cloud Champion. Experts: Kimmo Järvensivu, Strategic Alliances Director, M-Files; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft

January 30, 2026, 09:00-09:30 - Model Context Protocol (MCP): the open standard transforming AI integrations

In this webinar we cover what the Model Context Protocol (MCP) is, how it enables secure and scalable connections between AI models and external systems without custom code, what Microsoft's approach to the MCP protocol is, and how software companies can take advantage of the business opportunities the MCP standard offers. We cover:

- What MCP is and why it matters in modern AI workflows
- How MCP reduces integration complexity and speeds up development
- Practical examples

The main part of the webinar is presented in English. Watch the recording: January 30, 2026, 09:00-09:30 - Model Context Protocol (MCP): the open standard transforming AI integrations - Finland Cloud Champion. Experts: Massimo Caterino, Partner Technology Strategist, Microsoft Europe North; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft

December 12, 09:00-09:30 - What does the Finnish Azure region mean for software companies?

Microsoft's new datacenter region in Finland brings cloud services closer to Finnish software companies, whether you are a startup, a scaleup, or a global player. In this webinar we dig into the opportunities the new Azure region opens up in terms of data residency, performance, regulation, and customer requirements. Among other things, we discuss: How does local data residency support customer requirements and regulation?
What do software companies gain from lower latency and better performance? How does the Azure region support co-selling and scaling in Finland? How to prepare technically and commercially for the opening of the new region? Speakers: Fama Doumbouya, Sales Director, Cloud Infra and Security, Microsoft; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft. Watch the recording: Microsoft Finland - Software Developing Companies Monthly Community Series - What does the Finnish Azure region mean for software companies? - Finland Cloud Champion

November 28, 09:00-09:30 - Cloud on your own terms: what does Microsoft's Sovereign Cloud mean for software companies?

More and more software companies face requirements around data residency, regulatory compliance, and operational control, especially in the public sector and regulated industries. In this webinar we dig into how Microsoft's new Sovereign Cloud offering answers these needs and what opportunities it opens for Finnish software companies. Among other things, we discuss:

- How do Sovereign Public Cloud and Sovereign Private Cloud differ, and what do they enable?
- How are data control, encryption, and operational sovereignty realized in a European context?
- What does this mean for software companies building solutions for the public sector or regulated industries?

Speakers: Juha Karppinen, National Security Officer, Microsoft; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft. Watch the recording: Microsoft Finland - Software Developing Companies Monthly Community Series - Cloud on your own terms: what does Microsoft's Sovereign Cloud mean for software companies? - Finland Cloud Champion

October 31, 09:00-09:30 - Growth and visibility for software companies: make the most of the ISV Success and Azure Marketplace Rewards programs

In this webinar we dig into Microsoft's key accelerator programs for software companies, which support growth, scalability, and international visibility. We cover how the ISV Success program provides technical and commercial support for software companies at different stages of development, and how Azure Marketplace works as an effective sales channel for reaching new customers. We also present the Marketplace Rewards benefits that support marketing, co-selling, and customer acquisition in the Microsoft ecosystem. The webinar offers:

- Concrete examples of the programs' benefits
- Practical tips for joining and making use of the programs
- Insights into how software companies can align their strategy with the opportunities Microsoft offers

Speakers: Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft. Recording: Microsoft Finland - Software Developing Companies Monthly Community Series - Growth and visibility for software companies: make the most of the ISV Success and Azure Marketplace Rewards programs - Finland Cloud Champion

October 3, 09:00-09:30 - Autonomous solutions for software companies: Azure AI Foundry and new opportunities with agent technologies

Agent technologies are transforming the way software companies can build intelligent, scalable solutions. In this webinar we look at how Azure AI Foundry gives developers and product owners the tools to build autonomous agents, enabling the automation of complex processes and new kinds of customer value. You will hear, among other things:

- How agent technologies are changing software development and business
- How Azure AI Foundry supports agent design, development, and deployment
- How software companies can use agents as a competitive advantage
Speakers: Juha Karvonen, Sr Partner Tech Strategist; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft. Watch the recording here: Microsoft Finland - Software Developing Companies Monthly Community Series - Autonomous solutions for software companies: Azure AI Foundry and new opportunities with agent technologies - Finland Cloud Champion

September 5, 2025, 09:00-09:30 - Technology companies' and Microsoft's priorities for fall 2025

Welcome back to Microsoft's webinar series for technology companies! Each month we continue to dig into how working with Microsoft can accelerate growth and open new opportunities for software companies at different stages, whether the company is a startup, a scaleup, or operating globally. In every episode we share concrete examples, insights, and strategies that support business development and innovation in technology companies. In this late-August episode we focus on the priorities and new opportunities for fall 2025 that support software companies in planning, developing, and accelerating their own business. We go through Microsoft's strategic priorities for the coming fiscal year and, above all, how software companies can take advantage of them in their own business. The goal is to give listeners a clear understanding of how to align their product, service, or go-to-market strategy with the development of the ecosystem, and how Microsoft can concretely support that journey. Speakers: Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft. Watch the recording here: Technology companies' and Microsoft's priorities for fall 2025 - Finland Cloud Champion

Announcing Neon Serverless Postgres as an Azure Native Integration (Preview)
Note: This service is now retired, but you can browse similar database services in Azure.

We are excited to announce that Neon Serverless Postgres is now available as an Azure Native Integration (in preview) within the Azure cloud ecosystem. This integration enhances the developer experience by combining the power and flexibility of Neon's serverless Postgres database service with Azure's robust cloud infrastructure.

"We're excited to bring Neon to all Azure developers, especially AI platforms. Neon Serverless Postgres scales automatically to match your workload and can branch instantly for an incredible developer experience. And for AI developers concerned about scale, cost efficiency, and data privacy, Neon enables them to easily adopt database-per-customer architectures, ensuring real-time provisioning and data isolation for their customers." - Nikita Shamgunov, CEO, Neon

What is Neon Serverless Postgres?

Neon offers a serverless Postgres solution that leverages the principles of serverless computing to provide scalable and flexible database services. By abstracting away infrastructure complexities, Neon allows businesses to focus on application development rather than database administration. The key features of Neon's Postgres service include:

- Instant Provisioning: Neon's architecture allows the creation of new databases in under a second, thanks to its custom-built storage engine.
- Efficient Scaling: Neon automatically scales resources based on load, ensuring optimal performance during traffic spikes without the need for overprovisioning.
- Integrated Developer Workflows: With features like database branching, Neon enables shorter software development lifecycles and cost-effective integration into CI/CD pipelines.

What is Neon Serverless Postgres as an Azure Native Integration?

The Azure Native Integration of Neon Serverless Postgres enables users to create a Neon organization from the Azure portal.
Users can find the Neon Serverless Postgres offering on the Azure portal and Azure Marketplace. This integration paves the way to using Neon Postgres effectively alongside other Azure services.

"At Microsoft, we are committed to providing seamless and innovative solutions for our Azure developers. The introduction of Neon Postgres as an Azure Native Integration is a significant milestone in this journey. This integration not only simplifies the provisioning and management of Neon organizations directly from Azure but also enhances the overall developer experience. We are excited to see how this collaboration will empower developers to build intelligent and scalable applications on Azure with ease." - Shireesh Thota, CVP, Azure Databases

Benefits of the native integration
This Azure Native Integration brings many benefits to developers and businesses:
Seamless Provisioning from Azure: Developers can create and manage Neon organizations directly within the Azure portal, without switching platforms.
Single Sign-On (SSO): Users can access Neon via SSO with their Microsoft credentials, streamlining the login process and enhancing security.
Enhanced Developer Experience: The integration allows developers to use the Azure CLI and the SDK of their choice from .NET, Java, Python, Go, and JavaScript to manage Neon organizations alongside other Azure resources, keeping development workflows consistent.
Unified Billing: Neon usage can be included on existing Azure invoices, simplifying billing and financial management for businesses. By purchasing Neon through Azure, customers can decrement any Microsoft Azure Consumption Commitment (MACC) they hold with Microsoft.

How to create a Neon organization from Azure
You can find the details of how to create a Neon organization from Azure in the Microsoft docs. The section below summarizes the key steps for creating the resource.
Step 1: Discover and Subscribe to Neon from Azure
You can start your journey from either the Azure portal or Azure Marketplace. Search for Neon Serverless Postgres in the search bar and select the offering. This takes you to the Marketplace landing page for Neon Serverless Postgres. Choose one of the three available public plans; if you are new and exploring, you can start with the free plan. Click Subscribe to move forward to the resource configuration stage.

Step 2: Complete your Neon resource configuration on Azure
You are now creating a Neon resource on Azure. The process is similar to creating other Azure resources and requires basic details such as the Azure subscription, resource group, and resource details. During the public preview, the resource can be created in East US 2, Germany West Central, and West US 3; please check the region dropdown to view the currently available regions. The creation flow also simultaneously creates a Neon organization, so provide a name for your Neon organization. Once all details have been filled in, review the information under Review + Create and create the resource; this triggers the deployment process. Congratulations! You just created a Neon organization from Azure. Let us now visit the Neon organization we just created.

Step 3: Transition to Neon from the Azure portal
Go to the newly created resource and you will land on the Overview blade, where you will find the resource details. Single sign-on from the Azure portal to Neon is supported: click the SSO link to transition to the Neon portal, where you can continue with creating projects and databases, inviting users, and much more.

Step 4: Create Projects, Branches and Databases on Neon
On the Neon portal, you will land on the project creation view. Proceed to create your first Neon project. When this project is created, a default branch and database are created as well. Visit the project dashboard to view the project details.
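The project dashboard also shows the new database's connection URL. Before wiring it into an application, a quick sanity check on the URL can help; the sketch below is illustrative only, and the URL shown is a placeholder in the general shape of a Neon connection string, not a real endpoint — copy the actual one from your dashboard.

```python
from urllib.parse import urlparse, parse_qs

def summarize_connection_url(url: str) -> dict:
    """Split a Postgres connection URL into the parts an application
    typically needs, and flag whether TLS is requested via sslmode."""
    parts = urlparse(url)
    if parts.scheme not in ("postgres", "postgresql"):
        raise ValueError(f"not a Postgres URL: {url!r}")
    query = parse_qs(parts.query)
    return {
        "host": parts.hostname,
        "database": parts.path.lstrip("/"),
        "user": parts.username,
        "ssl_required": query.get("sslmode", [""])[0] == "require",
    }

# Placeholder URL; replace with the connection URL from your project dashboard.
url = "postgresql://app_user:s3cret@ep-example-123456.eastus2.azure.neon.tech/appdb?sslmode=require"
print(summarize_connection_url(url))
```

In a real application you would normally hand the URL straight to your Postgres driver (psycopg, SQLAlchemy, etc.); the check above is just a guard against malformed configuration.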
You can copy the connection URL of the newly created database and use it in your Azure application stack to connect to the database. Go ahead and create more projects in the Azure regions of your choice and explore interesting features like branches and AI-based query generation. Now you are ready to use Neon Serverless Postgres in your real-world applications.

Real-World Applications
Neon's Serverless Postgres service is ideal for a variety of use cases, including:
AI and Machine Learning: With the ability to generate vector embeddings and integrate with Azure AI services, Neon is well suited for AI and machine learning applications. Neon's autoscaling ensures that even resource-intensive AI models operate seamlessly during periods of high demand without manual intervention.
SaaS Applications: The scalability and flexibility of Neon's Postgres service make it perfect for SaaS applications that need to handle varying levels of traffic. Its serverless architecture eliminates the need for infrastructure management, allowing developers to focus on building features while ensuring cost-effective scaling to meet demand.
For more use cases and success stories, visit Case Studies - Neon to understand how Neon, now on Azure, can create value in your organization.

Ready to try out Neon Serverless Postgres as an Azure Native Integration? Check out the next steps and share your feedback with us. This is just the beginning for Neon Serverless Postgres on Azure; stay tuned as we add more features to make this integration even more seamless.

Next Steps
Subscribe to Neon Serverless Postgres on the Azure portal or Azure Marketplace
Learn more about Neon Serverless Postgres in the Microsoft docs
Read the launch blog post by Neon
Discover more about Neon
Submit feature suggestions and questions in the Neon Discord community or contact feedback@neon.tech. Please mention that you are using Neon Serverless Postgres on Azure in your messages.
Learn about Microsoft's investment in Neon

Thank you for reading this blog! Please follow for more updates on Neon Serverless Postgres as an Azure Native Integration.

How to Access a Shared OneDrive Folder in Azure Logic Apps
What is the problem?
A common enterprise automation scenario involves copying files from a OneDrive folder shared by a colleague into another storage service such as SharePoint or Azure Blob Storage using Azure Logic Apps. However, when you configure the OneDrive for Business "List files in folder" action in a Logic App, you quickly run into a limitation. The folder picker only shows the root directory and subfolders of the authenticated user's OneDrive. Shared folders do not appear at all, even though you can access them in the OneDrive UI. This makes it seem like Logic Apps cannot work with shared OneDrive folders, but that's not entirely true.

Why this happens
The OneDrive for Business connector is user-context scoped. It only enumerates folders that belong to the signed-in user's drive and does not automatically surface folders that are shared with the user. Even though shared folders are visible under "Shared with me" in the OneDrive UI, they live in a different drive, have a different driveId, and require explicit identification before Logic Apps can access them.

How to access a shared OneDrive folder
There are two supported ways to access a shared OneDrive directory from Logic Apps.

Option 1: Use Microsoft Graph APIs (Delegated permissions)
You can invoke Microsoft Graph APIs directly using HTTP with Microsoft Entra ID (preauthorized), with delegated permissions on behalf of the signed-in user. This requires admin consent or delegated consent workflows, plus additional Entra ID configuration.
Reference: HTTP with Microsoft Entra ID (preauthorized) - Connectors | Microsoft Learn
While powerful, this approach adds setup complexity.
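Whichever option you use, the goal is the same: obtain the drive ID and folder ID of the shared folder and compose the value the connector needs. As a rough, non-authoritative sketch, the following shows the shape of the Graph call and of the response handling; the user principal name, token, and the trimmed sample response are all made-up placeholders (in a Logic App, the preauthorized HTTP connector handles authentication for you).

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def children_request(owner_upn: str, access_token: str) -> tuple[str, dict]:
    """Compose the Graph request that lists root-level children of the
    folder owner's OneDrive (the same call Graph Explorer issues)."""
    url = f"{GRAPH_BASE}/users/{owner_upn}/drive/root/children"
    return url, {"Authorization": f"Bearer {access_token}"}

def shared_folder_value(children_response: dict, folder_name: str) -> str:
    """From the JSON response, pick the named folder and compose the
    '{driveId}.{folderId}' value the 'List files in folder' action expects."""
    for item in children_response.get("value", []):
        if item.get("name") == folder_name and "folder" in item:
            return f"{item['parentReference']['driveId']}.{item['id']}"
    raise KeyError(f"folder {folder_name!r} not found")

# Placeholders: a hypothetical owner and a trimmed, made-up Graph response
# for a shared folder named "Test".
url, headers = children_request("owner@contoso.com", "<access-token>")
sample = {"value": [{"name": "Test", "id": "01ABCDEFXYZ",
                     "folder": {"childCount": 4},
                     "parentReference": {"driveId": "b!driveId123"}}]}
print(url)
print(shared_folder_value(sample, "Test"))
```

Real driveId and item id values are opaque strings issued by Graph; only the dot-joined format of the final folder value matters to the connector.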
Option 2: Use Graph Explorer to configure the OneDrive connector
Instead of calling Graph from Logic Apps directly, you can use Graph Explorer to discover the shared folder metadata, then manually configure the OneDrive action using that metadata.

Step-by-step: Using Graph Explorer to access a shared folder
Scenario: A colleague has shared a OneDrive folder named "Test" with me, and I need to process files inside it using a Logic App.

Step 1: List shared folders using Microsoft Graph
In Graph Explorer, run the following request:
GET https://graph.microsoft.com/v1.0/users/{OneDrive shared folder owner username}/drive/root/children
Reference: List the contents of a folder - Microsoft Graph v1.0 | Microsoft Learn
This returns all root-level folders visible to the signed-in user, including folders shared with you. From the response, locate the shared folder. You only need two values: parentReference.driveId and id (the folder ID).
[Screenshot: Graph Explorer request listing the files and folders shared by a specific user on the root drive]

Step 2: Configure the Logic App "List files in folder" action
In your Logic App, add the OneDrive for Business "List files in folder" action. Do not use the folder picker. Manually enter the folder value using this format: {driveId}.{folderId}
Once saved, the action successfully lists files from the shared OneDrive folder.

Step 3: Build the rest of your workflow
After the folder is resolved correctly, you can loop through files, copy them to SharePoint, upload them to Azure Blob Storage, and apply filters, conditions, or transformations. All standard OneDrive actions now work as expected.

Troubleshooting: When Graph Explorer doesn't help
If you're unable to find the driveId or folderId via Graph Explorer, there's a reliable fallback.
Use browser network tracing
Open the shared folder in OneDrive (web), open Browser Developer Tools → Network, and look for requests whose query string includes folderId. In the response payload, extract CurrentFolderUniqueId (the folder ID) and the drives/{driveId} segment of CurrentFolderSpItemUrl (the drive ID). This method is very effective when Graph results are incomplete or filtered.

Azure Databricks & Fabric Disaster Recovery: The Better Together Story
Authors: Amudha Palani, Oscar Alvarado, Eric Kwashie, Peter Lo, and Rafia Aqil

Disaster recovery (DR) is a critical component of any cloud-native data analytics platform, ensuring business continuity even during rare regional outages caused by natural disasters, infrastructure failures, or other disruptions.

Identify Business Critical Workloads
Before designing any disaster recovery strategy, organizations must first identify which workloads are truly business-critical and require regional redundancy. Not all Databricks or Fabric processes need full DR protection; instead, customers should evaluate the operational impact of downtime, data freshness requirements, regulatory obligations, SLAs, and dependencies across upstream and downstream systems. By classifying workloads into tiers and aligning DR investments accordingly, customers ensure they protect what matters most without over-engineering the platform.

Azure Databricks
Azure Databricks requires a customer-driven approach to disaster recovery, where organizations are responsible for replicating workspaces, data, infrastructure components, and security configurations across regions.

Full System Failover (Active-Passive) Strategy
A comprehensive approach that replicates all dependent services to the secondary region.
Implementation requirements include:

Infrastructure Components: Replicate Azure services (ADLS, Key Vault, SQL databases) using Terraform; deploy network infrastructure (subnets) in the secondary region; establish data synchronization mechanisms.
Data Replication Strategy: Use Deep Clone for Delta tables rather than geo-redundant storage; implement periodic synchronization jobs using Delta's incremental replication; measure data transfer results using time travel syntax.
Workspace Asset Synchronization: Co-deploy cluster configurations, notebooks, jobs, and permissions using CI/CD; utilize Terraform and SCIM for identity and access management; keep job concurrencies at zero in the secondary region to prevent execution.

Fully Redundant (Active-Active) Strategy
The most sophisticated approach, in which all transactions are processed in multiple regions simultaneously. While providing maximum resilience, this strategy requires complex data synchronization between regions, incurs the highest operational costs due to duplicate processing, and is typically needed only for mission-critical workloads with zero tolerance for downtime. It can be implemented as partial active-active, processing most of the workload in the primary region with a subset in the secondary.

Enabling Disaster Recovery
Create a secondary workspace in a paired region. Use CI/CD to keep workspace assets synchronized continuously.

| Requirement | Approach | Tools |
|---|---|---|
| Cluster Configurations | Co-deploy to both regions as code | Terraform |
| Code (Notebooks, Libraries, SQL) | Co-deploy with CI/CD pipelines | Git, Azure DevOps, GitHub Actions |
| Jobs | Co-deploy with CI/CD, set concurrency to zero in secondary | Databricks Asset Bundles, Terraform |
| Permissions (Users, Groups, ACLs) | Use IdP/SCIM and infrastructure as code | Terraform, SCIM |
| Secrets | Co-deploy using secret management | Terraform, Azure Key Vault |
| Table Metadata | Co-deploy with CI/CD workflows | Git, Terraform |
| Cloud Services (ADLS, Network) | Co-deploy infrastructure | Terraform |

Update your orchestrator (ADF, Fabric pipelines, etc.)
to include a simple region toggle to reroute job execution. Replicate all dependent services (Key Vault, storage accounts, SQL DB). Implement Delta Deep Clone synchronization jobs to keep datasets continuously aligned between regions. Introduce an application-level "Sync Tool" that redirects data ingestion and compute execution. Enable parallel processing in both regions for selected or all workloads, and use bi-directional synchronization for Delta data to maintain consistency across regions. For performance and cost control, run most workloads in the primary region and only a subset of workloads in the secondary to keep it warm.

Implement Three-Pillar DR Design
Primary Workspace: Your production Databricks environment running normal operations.
Secondary Workspace: A standby Databricks workspace in a different (paired) Azure region that remains ready to take over if the primary fails.
This architecture ensures business continuity while optimizing costs by keeping the secondary workspace dormant until needed. The DR solution is built on three fundamental pillars that work together to provide comprehensive protection:

1. Infrastructure Provisioning (Terraform)
The infrastructure layer creates and manages all Azure resources required for disaster recovery using Infrastructure as Code (Terraform).
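The Deep Clone synchronization jobs mentioned above boil down to one Delta SQL statement per critical table, run on a schedule in the primary workspace. As a hedged sketch (the table names and the `dr` catalog are hypothetical, and in a Databricks notebook each statement would be executed via `spark.sql`):

```python
def deep_clone_statements(tables: list[str], dr_catalog: str) -> list[str]:
    """Generate the Delta DEEP CLONE statements a scheduled sync job would
    run, one per critical table, to copy it into the DR catalog. DEEP CLONE
    is incremental on re-run, copying only new or changed files."""
    return [f"CREATE OR REPLACE TABLE {dr_catalog}.{t} DEEP CLONE {t}"
            for t in tables]

# Hypothetical critical-table list; 'dr' stands in for a catalog backed by
# the secondary-region storage account.
for stmt in deep_clone_statements(["sales.orders", "finance.transactions"], "dr"):
    print(stmt)  # in a Databricks notebook: spark.sql(stmt)
```

Because the clone is incremental, scheduling this job frequently keeps the recovery point objective small without re-copying the full tables each run.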
What It Creates:
Secondary Resource Group: A dedicated resource group in your paired DR region (e.g., if the primary is in East US, the secondary might be in West US 2).
Secondary Databricks Workspace: A standby Databricks workspace with the same SKU as your primary, ready to receive failover traffic.
DR Storage Account: An ADLS Gen2 storage account that serves as the backup destination for your critical data.
Monitoring Infrastructure: An Azure Monitor Log Analytics workspace and alert action groups to track DR health.
Protection Locks: Management locks to prevent accidental deletion of critical DR resources.

Key Design Principle: The Terraform configuration references your existing primary workspace without modifying it. It only creates new resources in the secondary region, ensuring your production environment remains untouched during setup.

2. Data Synchronization (Delta Notebooks)
The data synchronization layer ensures your critical data is continuously backed up to the secondary region.

How It Works: The solution uses a Databricks notebook that runs in your primary workspace on a scheduled basis. This notebook:
Connects to Backup Storage: Uses Unity Catalog with Azure Managed Identity for secure, credential-free authentication to the secondary storage account.
Identifies Critical Tables: Reads from a configuration list you define (sales data, customer data, inventory, financial transactions, etc.).
Performs Deep Clone: Uses Delta Lake's native CLONE functionality to create exact copies of your tables in the backup storage.
Tracks Sync Status: Logs each synchronization operation, tracks row counts, and reports on data freshness.

Authentication Flow: The synchronization process leverages Unity Catalog's managed identity capabilities. An existing Access Connector for Unity Catalog is granted "Storage Blob Data Contributor" permissions on the backup storage. Storage credentials are created in Databricks that reference this Access Connector.
The notebook uses these credentials transparently; no storage keys or secrets are required.

What Gets Synced: You define which tables are critical to your business operations. The notebook creates backup copies including full table data and schema, table partitioning structure, and Delta transaction logs for point-in-time recovery.

3. Failover Automation (Python Scripts)
The failover automation layer orchestrates the switch from the primary to the secondary workspace when disaster strikes.

Microsoft Fabric
Microsoft Fabric provides built-in disaster recovery capabilities designed to keep analytics and Power BI experiences available during regional outages. Fabric simplifies continuity for reporting workloads, while still requiring customer planning for deeper data and workload replication.

Power BI Business Continuity
Power BI, now integrated into Fabric, provides automatic disaster recovery as a default offering:
No opt-in required: DR capabilities are automatically included.
Azure storage geo-redundant replication: Ensures backup instances exist in other regions.
Read-only access during disasters: Semantic models, reports, and dashboards remain accessible.
Always supported: BCDR for Power BI remains active regardless of the OneLake DR setting.

Microsoft Fabric
Fabric's cross-region DR uses a shared responsibility model between Microsoft and customers:
Microsoft's Responsibilities: Ensure baseline infrastructure and platform services availability. Maintain Azure regional pairings for geo-redundancy. Provide DR capabilities for Power BI by default.
Customer Responsibilities: Enable disaster recovery settings for capacities. Set up secondary capacity and workspaces in paired regions. Replicate data and configurations.

Enabling Disaster Recovery
Organizations can enable BCDR through the Admin portal under Capacity settings:
1. Navigate to Admin portal → Capacity settings
2. Select the appropriate Fabric capacity
3. Access the Disaster Recovery configuration
4. Enable the disaster recovery toggle

Critical Timing Considerations:
30-day minimum activation period: Once enabled, the setting remains active for at least 30 days and cannot be reverted.
72-hour activation window: Initial enablement can take up to 72 hours to become fully effective.

Azure Databricks & Microsoft Fabric DR Considerations
Building a resilient analytics platform requires understanding how disaster recovery responsibilities differ between Azure Databricks and Microsoft Fabric. While both platforms operate within Azure's regional architecture, their DR models, failover behaviors, and customer responsibilities are fundamentally different.

Recovery Procedures

| Procedure | Databricks | Fabric |
|---|---|---|
| Failover | Stop workloads, update routing, resume in the secondary region. | Microsoft initiates failover; customers restore services in the DR capacity. |
| Restore to Primary | Stop secondary workloads, replicate data/code back, test, resume production. | Recreate workspaces and items in a new capacity; restore Lakehouse and Warehouse data. |
| Asset Syncing | Use CI/CD and Terraform to sync clusters, jobs, notebooks, permissions. | Use Git integration and pipelines to sync notebooks and pipelines; manually restore Lakehouses. |

Business Considerations

| Consideration | Databricks | Fabric |
|---|---|---|
| Control | Customers manage DR strategy, failover timing, and asset replication. | Microsoft manages failover; customers restore services post-failover. |
| Regional Dependencies | Must ensure the secondary region has sufficient capacity and services. | DR is only available in Azure regions with Fabric support and paired regions. |
| Power BI Continuity | Not applicable. | Power BI offers built-in BCDR with read-only access to semantic models and reports. |
| Activation Timeline | Immediate upon configuration. | DR setting takes up to 72 hours to activate; 30-day wait before changes are allowed. |