Automating Microsoft Sentinel: A blog series on enabling Smart Security
This entry guides readers through building custom Playbooks in Microsoft Sentinel, highlighting best practices for trigger selection, managed identities, and integrating built-in tools and external APIs. It offers practical steps and insights to help security teams automate incident response and streamline operations within Sentinel.

Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier.

The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands—analytics for high-value detections and investigations, and data lake for large or archival types—allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The flow diagram below depicts the new Sentinel pricing model.

Now let's understand this new pricing model through the following scenarios:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. However, you will be charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent
Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator adjusts accordingly.

Scenario 1B (Usage Commitment)

Now suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion—it does not apply to Analytics tier retention costs or to any Data Lake tier–related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization's compliance or forensic policies.

Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings
- Scenario 1 (10 GB/day in the Analytics tier): $1,520.40 per month
- Scenario 2 (10 GB/day directly into the Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion
The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs—such as audit, compliance, or long-term retention data—can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios—active investigation, archival storage, compliance retention, or large-scale telemetry ingestion—to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
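To make the scenario arithmetic above easy to reproduce, here is a minimal PowerShell sketch. The effective per-GB rates are back-calculated from this post's own monthly figures and are not official prices; real rates vary by region, retention settings, and commitment tier, so always confirm with the Azure pricing calculator.

# Rough monthly-cost comparison for the 10 GB/day scenarios in this post.
# The per-GB rates below are assumptions derived from the post's figures
# ($1,520.40 Analytics vs. $202.20 Data Lake for 10 GB/day), not list prices.
function Get-MonthlyCost {
    param(
        [double]$GbPerDay,
        [double]$EffectiveRatePerGb
    )
    # 365 / 12 ~= 30.42 days per month, roughly what the pricing calculator assumes
    [math]::Round($GbPerDay * 30.42 * $EffectiveRatePerGb, 2)
}

$analytics = Get-MonthlyCost -GbPerDay 10 -EffectiveRatePerGb 5.00    # ~ $1,521
$dataLake  = Get-MonthlyCost -GbPerDay 10 -EffectiveRatePerGb 0.665   # ~ $202
"Analytics tier : {0:N2} per month" -f $analytics
"Data Lake tier : {0:N2} per month" -f $dataLake
"Savings        : {0:N2} per month" -f ($analytics - $dataLake)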
Run a SQL Query with Azure Arc

Hi All,

In this article, you can find a way to retrieve database permissions from all your onboarded databases through Azure Arc. This idea was born from a customer request around maintaining a standard permission set in a very wide environment (about 1,000 SQL Servers).

This solution is based on Azure Arc, so first you need to onboard your SQL Servers to Azure Arc and enable the SQL Server extension. If you want to test Azure Arc in a test environment, you can use Azure Jumpstart; in that repo you will find ready-to-deploy ARM templates that deploy demo environments.

The other solution components are an Automation Account, a Log Analytics workspace, and a Data Collection Rule / Endpoint. Here is a short recap of the purpose of each component:
- Automation Account: with this resource you can run and schedule a PowerShell script, and you can also store the credentials securely.
- Log Analytics workspace: here you will create a custom table and store all the data that comes from the script.
- Data Collection Endpoint / Data Collection Rule: enables you to expose a public endpoint so the collected data can be ingested into the Log Analytics workspace.

In this section you will discover how I composed the six phases of the script.

Obtain the bearer token and authenticate on the portal: first of all, you need to authenticate against Azure to get all the SQL instances and to obtain the token used to send your assessment data to Log Analytics.

$tenantId = "XXXXXXXXXXXXXXXXXXXXXXXXXXX"
$cred = Get-AutomationPSCredential -Name 'appreg'
Connect-AzAccount -ServicePrincipal -Tenant $tenantId -Credential $cred
$appId = $cred.UserName
$appSecret = $cred.GetNetworkCredential().Password
$endpoint_uri = "https://sampleazuremonitorworkspace-weu-a5x6.westeurope-1.ingest.monitor.azure.com"  # Logs ingestion URI for the DCR
$dcrImmutableId = "dcr-sample2b9f0b27caf54b73bdbd8fa15908238799"  # the immutableId property of the DCR object
$streamName = "Custom-MyTable"
Add-Type -AssemblyName System.Web  # needed for [System.Web.HttpUtility] in Windows PowerShell
$scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
$body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"
$headers = @{"Content-Type" = "application/x-www-form-urlencoded"}
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token

Get all the SQL instances: in my example I took all the instances, but you can also use a tag to filter resources; for example, if you want to assess only the production environment, you can use the tag as a filter.

$servers = Get-AzResource -ResourceType "Microsoft.AzureArcData/SQLServerInstances"

Once you have all the SQL instances, you can run your T-SQL query to obtain all the permissions. Remember, here we are looking for permissions, but you can use this approach for any query you want, or in any other situation where you need to run a command on a generic server.

$SQLCmd = @'
Invoke-SQLcmd -ServerInstance . `
-Query "USE master; BEGIN IF LEFT(CAST(Serverproperty('ProductVersion') AS VARCHAR(1)),1) = '8' begin IF EXISTS (SELECT TOP 1 * FROM tempdb.dbo.sysobjects (nolock) WHERE name LIKE '#TUser%') begin DROP TABLE #TUser end end ELSE begin IF EXISTS (SELECT TOP 1 * FROM tempdb.sys.objects (nolock) WHERE name LIKE '#TUser%') begin DROP TABLE #TUser end end CREATE TABLE #TUser (DBName SYSNAME,[Name] SYSNAME,GroupName SYSNAME NULL,LoginName SYSNAME NULL,default_database_name SYSNAME NULL,default_schema_name VARCHAR(256) NULL,Principal_id INT); IF LEFT(CAST(Serverproperty('ProductVersion') AS VARCHAR(1)),1) = '8' INSERT INTO #TUser EXEC sp_MSForEachdb ' SELECT ''?'' as DBName, u.name As UserName, CASE WHEN (r.uid IS NULL) THEN ''public'' ELSE r.name END AS GroupName, l.name AS LoginName, NULL AS Default_db_Name, NULL as default_Schema_name, u.uid FROM [?].dbo.sysUsers u LEFT JOIN ([?].dbo.sysMembers m JOIN [?].dbo.sysUsers r ON m.groupuid = r.uid) ON m.memberuid = u.uid LEFT JOIN dbo.sysLogins l ON u.sid = l.sid WHERE (u.islogin = 1 OR u.isntname = 1 OR u.isntgroup = 1) and u.name not in (''public'',''dbo'',''guest'') ORDER BY u.name ' ELSE INSERT INTO #TUser EXEC sp_MSforeachdb ' SELECT ''?'', u.name, CASE WHEN (r.principal_id IS NULL) THEN ''public'' ELSE r.name END GroupName, l.name LoginName, l.default_database_name, u.default_schema_name, u.principal_id FROM [?].sys.database_principals u LEFT JOIN ([?].sys.database_role_members m JOIN [?].sys.database_principals r ON m.role_principal_id = r.principal_id) ON m.member_principal_id = u.principal_id LEFT JOIN [?].sys.server_principals l ON u.sid = l.sid WHERE u.TYPE <> ''R'' and u.TYPE <> ''S'' and u.name not in (''public'',''dbo'',''guest'') order by u.name '; SELECT DBName, Name, GroupName,LoginName FROM #TUser where Name not in ('information_schema') and GroupName not in ('public') ORDER BY DBName,[Name],GroupName; DROP TABLE #TUser; END" '@ $command = New-AzConnectedMachineRunCommand -ResourceGroupName "test_query" -MachineName $server1 -Location "westeurope" -RunCommandName "RunCommandName" -SourceScript $SQLCmd In a second, you will receive the output of the command, and you must send it to the log analytics workspace (aka LAW). In this phase, you can also review the output before sending it to LAW, for example, removing some text or filtering some results. In my case, I’m adding the information about the server where the script runs to each record. $array = ($command.InstanceViewOutput -split "r?n" | Where-Object { $.Trim() }) | ForEach-Object { $line = $ -replace '\', '\\' ù$array = $array | Where-Object { $_ -notmatch "DBName,Name,GroupName,LoginName" } | Where-Object {$_ -notmatch "------"} The last phase is designed to send the output to the log analytics workspace using the dce \ dcr. $staticData = @" [{ "TimeGenerated": "$currentTime", "RawData": "$raw", }]"@; $body = $staticData; $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"}; $uri = "$endpoint_uri/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2023-01-01" $rest = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers When the data arrives in log analytics workspace, you can query this data, and you can create a dashboard or why not an alert. Now you will see how you can implement this solution. 
Now let's see how you can implement this solution.

For the Log Analytics workspace, DCE, and DCR, you can follow the official docs: Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Resource Manager templates) - Azure Monitor | Microsoft Learn.

After you create the DCR and the Log Analytics workspace with its custom table, you can proceed with the Automation Account:
- Create an Automation Account using the creation wizard; you can proceed with the default parameters.
- When the Automation Account creation is completed, create a credential inside it. This allows you to avoid exposing the credentials used to connect to Azure. Here you can store the enterprise application (client) ID and its key.
- Now you are ready to create the runbook (basically the script that we will schedule). You can give it any name you want and click Create.
- Go to the Automation Account, then Runbooks and Edit in Portal; here you can paste your script or the script in this link. Remember to replace the tenant ID (you will find it in the Entra ID section) and the enterprise application details.
- You can test it using the Test pane, and when you are ready you can publish it and link a schedule, for example daily at 5 AM.

Remember, today we talked about database permissions, but the scenarios are endless: checking a requirement, deploying a small fix, or removing/adding a configuration — at scale. In the end, as you can see, Azure Arc is not just another agent; it is a chance to empower every environment (and every other cloud provider 😉) with Azure technology. See you in the next techie adventure.

**Disclaimer** The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Making AI Apps Enterprise-Ready with Microsoft Purview and Microsoft Foundry
Building AI apps is easy. Shipping them to production is not. Microsoft Foundry lets developers bring powerful AI apps and agents to production in days. But managing safety, security, and compliance for each one quickly becomes the real bottleneck. Every enterprise AI project hits the same wall: security reviews, data classification, audit trails, DLP policies, retention requirements. Teams spend months building custom logging pipelines and governance systems that never quite keep up with the app itself. There is a faster way.

Enable Purview & Ship Faster!
Microsoft Foundry now includes native integration with Microsoft Purview. When you enable it, every AI interaction in your subscription flows into the same enterprise data governance infrastructure that already protects your Microsoft 365 and Azure data estate. No SDK changes. No custom middleware. No separate audit system to maintain.

Here is what you get:
- Visibility within 24 hours. Data Security Posture Management (DSPM) shows you total interactions, sensitive data detected in prompts and responses, user activity across AI apps, and insider risk scoring. This dashboard exists the moment you flip the toggle.
- Automatic data classification. The same classification engine that scans your Microsoft 365 tenant now scans AI interactions. Credit card numbers, health information, SSNs, and your custom sensitive information types are all detected automatically.
- Audit logs you do not have to build. Every AI interaction is logged in the Purview unified audit log: timestamps, user identity, the AI app involved, files accessed, sensitivity labels applied. When legal needs six months of AI interactions for an investigation, the data is already there.
- DLP policy enforcement. Configure policies that block prompts containing sensitive information before they reach the model. This uses the same DLP framework you already know.
- eDiscovery, retention, and communication compliance. Search AI interactions alongside email and Teams messages. Set retention policies by selecting "Enterprise AI apps" as the location. Detect harmful or unauthorized content in prompts.

How to Enable
Prerequisite: you need the "Azure AI Account Owner" role assigned by your Subscription Owner.
1. Open the Microsoft Foundry portal (make sure you are in the new portal).
2. Select Operate from the top navigation.
3. Select Compliance in the left pane.
4. Select the Security posture tab.
5. Select the Azure subscription.
6. Enable the toggle next to Microsoft Purview.
7. Repeat the above steps for other subscriptions.
By enabling this toggle, data exchanged within Foundry apps and agents starts flowing to Purview immediately. Purview reports populate within 24 hours.

What shows up in Purview?
Purview Data Security Admins:
- Go to the Microsoft Purview portal, open DSPM, and follow the recommendation to set up "Secure interactions from enterprise AI apps".
- Navigate to DSPM > Discover > Apps and Agents to review and monitor the Foundry apps built in your organization.
- Navigate to DSPM > Activity Explorer to review the activity on a given agent/application.

What About Cost?
Enabling the integration is free. Audit Standard is included for Foundry apps. You will only be charged for the data security policies you set up for governing Foundry data.

A Real-World Scenario: The Internal HR Assistant
Consider a healthcare company building an internal AI agent for HR questions.
The Old Way: the developer team spends six weeks building a custom logging solution to strip PII/PHI from prompts to meet HIPAA requirements.
They have to manually demonstrate these logs to compliance before launch.
The Foundry Way: the team enables the Purview toggle.
- Detection: Purview automatically flags if an employee pastes a patient ID into the chat.
- Retention: the team selects "Enterprise AI Apps" in their retention policy, ensuring all chats are kept for the required legal period.
- Outcome: the app ships on schedule because Compliance trusts that the controls are inherited, not bolted on.

Takeaway
Microsoft Purview DSPM is a game changer for organizations looking to adopt AI responsibly. By integrating with Microsoft Foundry, it provides a comprehensive framework to discover, protect, and govern AI interactions, ensuring compliance, reducing risk, and enabling secure innovation. We built this integration because teams kept spending months on compliance controls that already exist in Microsoft's stack. The toggle is there. The capabilities are real. Your security team already trusts Purview. Your compliance team already knows the tools. Enable it. Ship your agent. Let the infrastructure do what infrastructure does best: work in the background while you focus on what your application does.

Additional Resources
Documentation: Use Microsoft Purview to manage data security & compliance for Microsoft Foundry | Microsoft Learn

Issue connecting Azure Sentinel GitHub app to Sentinel Instance when IP allow list is enabled
Hi everyone,

I’m running into an issue connecting the Azure Sentinel GitHub app to my Sentinel workspace in order to create our CI/CD pipelines for our detection rules, and I’m hoping someone can point me in the right direction.

Symptoms:
- When configuring the GitHub connection in Sentinel, the repository dropdown does not populate.
- There are no explicit errors, but the connection clearly isn’t completing.
- If I disable my organization’s IP allow list, everything works as expected and the repos appear immediately.
- I’ve seen that some GitHub Apps automatically add the IP ranges they require to an organization’s allow list. However, from what I can tell, the Azure Sentinel GitHub app does not seem to have this capability and requires manual allow listing instead.

What I’ve tried / researched:
- Reviewed Microsoft documentation for Sentinel ↔ GitHub integrations
- Looked through Azure IP range and Service Tag documentation
- I’ve seen recommendations to allow list the IP ranges published at //api.github.com/meta, as many GitHub apps rely on these ranges
- I’ve already tried allow listing multiple ranges from the GitHub meta endpoint, but the issue persists

My questions:
- Does anyone know which IP ranges are used by the Azure Sentinel GitHub app specifically?
- Is there an official or recommended approach for using this integration in environments with strict IP allow lists?
- Has anyone successfully configured this integration without fully disabling IP restrictions?

Any insight, references, or firsthand experience would be greatly appreciated. Thanks in advance!
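For reference, a minimal sketch of pulling the published ranges from the GitHub meta endpoint (the property names follow GitHub's documented response; which of these ranges the Azure Sentinel GitHub app actually uses is exactly the open question above):

# Enumerate the IP ranges GitHub publishes for its services.
# Which subsets the Sentinel GitHub app relies on is unverified - this only
# lists the candidates for an allow-list review.
$meta = Invoke-RestMethod -Uri "https://api.github.com/meta"
$meta.hooks   # webhook delivery ranges
$meta.api     # api.github.com ranges
$meta.web     # web UI ranges
($meta.hooks + $meta.api + $meta.web) | Sort-Object -Unique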
Simplifying Code Signing for Windows Apps: Artifact Signing (GA)

Trusted Signing is now Artifact Signing—and it’s officially Generally Available! Artifact Signing is a fully managed, end-to-end code signing service that makes it easier than ever for Windows application developers to sign their apps securely and efficiently. As Artifact Signing rebrands, customers will see changes over the next weeks. Please refer to our Learn docs for the most updated information.

What is Artifact Signing?
Code signing has traditionally been a complex and manual process. Managing certificates, securing keys, and integrating signing into build pipelines can slow teams down and introduce risk. Artifact Signing changes that by offering a fully managed, end-to-end solution that automates certificate management, enforces strong security controls, and integrates seamlessly with your existing developer tools. With zero-touch certificate management, verified identity, role-based access control, and support for multiple trust models, Artifact Signing makes it easier than ever to build and distribute secure Windows applications. Whether you're shipping consumer apps or internal tools, Artifact Signing helps you deliver software that’s secure.

Security Made Simple

Zero-Touch Certificate Management
No more manual certificate handling. The service provides “zero-touch” certificate management, meaning it handles the creation, protection, and even automatic rotation of code signing certificates on your behalf. These certificates are short-lived and auto-renewed behind the scenes, giving you tighter control, faster revocation when needed, and eliminating the risks associated with long-lived certs. Your signing reputation isn’t tied to a single certificate. Instead, it’s anchored to your verified identity in Azure, and every signature reflects that verified identity.

Verified Identity
Identity validation with Artifact Signing ensures your app’s digital signature displays accurate and verified publisher information. Once validated, your identity details, such as your individual or organization name, are included in the certificate. This means your signed apps will show a verified publisher name, not the dreaded “Unknown Publisher” warning. The entire validation process happens in the Azure portal. You simply submit your individual or organization details and, in some cases, upload supporting documents like business registration papers. Most validations are completed within a few business days, and once approved, you’re ready to start signing your apps immediately.
(Screenshot: organization validation page)

Secure and Controlled Signing (RBAC)
Artifact Signing enforces Azure’s Role-Based Access Control (RBAC) to secure signing activities. You can assign specific Azure roles to accounts or CI agents that use your Artifact Signing resource, ensuring only authorized developers or build pipelines can initiate signing operations. This tight access control helps prevent unauthorized or rogue signatures (see the role-assignment sketch below).

Full Telemetry and Audit Logs
Every signing request is tracked. You can see what was signed, when, and by whom in the Azure portal. This logging not only helps with compliance and auditing needs but also enables fast remediation if an issue arises. For example, if you discover a particular signing certificate was used in error or compromised, you can quickly revoke it directly from the portal. The short-lived nature of certificates in Artifact Signing further limits the window of any potential misuse.
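As a concrete illustration of the RBAC point above, here is a minimal sketch of granting a CI identity permission to sign. The role name shown is the pre-rebrand "Trusted Signing Certificate Profile Signer" and the resource path is illustrative; both are assumptions to confirm against the current Artifact Signing documentation.

# Grant a service principal (e.g., a build agent identity) the signer role on a signing account.
# Role and resource provider names may change with the Artifact Signing rebrand - verify first.
$spObjectId = "<service-principal-object-id>"   # placeholder
$scope = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CodeSigning/codeSigningAccounts/<account-name>"   # illustrative scope
New-AzRoleAssignment -ObjectId $spObjectId `
    -RoleDefinitionName "Trusted Signing Certificate Profile Signer" `
    -Scope $scope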
Artifact Signing gives you enterprise-grade security controls out of the box: strong protection of keys, fine-grained access control, and visibility. For developers and companies concerned about supply chain security, this dramatically reduces risk compared to handling signing keys manually.

Built for Developers
Artifact Signing was built to slot directly into developers’ existing workflows. You don’t need to overhaul how you build or release software, just plug Artifact Signing into your toolchain:
- GitHub Actions & Azure DevOps: the service includes first-class support for modern CI/CD. An official GitHub Action is available for easy integration into your workflow YAML, and Azure DevOps has tasks for pipelines. With these tools, every Windows app build can automatically sign binaries or installers—no manual steps required. Since signing credentials are managed in Azure, you avoid storing secrets in your repository.
- Visual Studio & MSBuild: use the Artifact Signing client with SignTool to integrate signing into publish profiles or post-build steps. Once the Artifact Signing client is installed, Visual Studio or MSBuild can invoke SignTool as usual, with signatures routed through the Artifact Signing service.
- SignTool / CLI: developers using scripts or custom build systems can continue using the familiar signtool.exe command. After a one-time setup, your existing SignTool commands will sign via the cloud service. The actual file signing on your build machine uses a digest signing approach: SignTool computes a hash of your file and sends that to the Artifact Signing service, which returns a signature. The file itself isn’t uploaded, preserving confidentiality and speed. This way, integrating Artifact Signing can be as simple as adding a couple of lines to your build script to point SignTool at Azure (a hedged example appears after the trust-model list below).
- PowerShell & SDK: for advanced automation or custom scenarios, Artifact Signing supports PowerShell modules and an SDK. These tools allow you to script signing operations, bulk-sign files, or integrate signing into specialized build systems.

The Right Trust for the Right Audience
Artifact Signing supports multiple trust models to suit different distribution scenarios. You can choose between Public Trust and Private Trust for your code signing, depending on your app’s audience:
- Public Trust: this is the standard model for software intended for consumers. When you use Public Trust signing, the certificates come from a Microsoft CA that’s part of the Microsoft Trusted Root Program. Apps signed under Public Trust are recognized by Windows as coming from a known publisher, enabling a smooth installation experience when security features such as Smart App Control and SmartScreen are enabled.
- Private Trust: this model is for internal or enterprise apps. These certificates aren’t publicly trusted but are instead meant to work with Windows Defender Application Control (App Control for Business) policies. This is ideal for line-of-business applications, internal tools, or scenarios where you want to tightly control who trusts the app. Artifact Signing’s Private Trust model is the modern, expanded evolution of Microsoft’s older Device Guard Signing Service (DGSS) -- delivering the same ability to sign internal apps but with ease of access and expanded capabilities.
- Test Signing: useful for development and testing. These certificates mimic real signatures but aren’t publicly trusted, allowing you to validate your signing setup in non-production environments before releasing your app.
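To make the SignTool integration above concrete, here is a minimal sketch of a signing call routed through the service. The dlib and metadata file names and the timestamp URL reflect the pre-rebrand Trusted Signing client (Microsoft.Trusted.Signing.Client package) and are assumptions to verify against the current Artifact Signing docs; the account, profile, and file paths are placeholders.

# metadata.json points SignTool at your signing account and certificate profile, for example:
# {
#   "Endpoint": "https://eus.codesigning.azure.net",      <-- regional endpoint (assumption)
#   "CodeSigningAccountName": "<your-account>",
#   "CertificateProfileName": "<your-profile>"
# }
& signtool.exe sign `
    /v /fd SHA256 `
    /tr "http://timestamp.acs.microsoft.com" /td SHA256 `
    /dlib "C:\signing\bin\x64\Azure.CodeSigning.Dlib.dll" `
    /dmdf "C:\signing\metadata.json" `
    "C:\build\MyApp.exe"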
Note on Expanded Scenario Support: Artifact Signing supports additional certificate profiles, including those for VBS enclaves and Private Trust CI policies. In addition, there is a new preview feature for signing container images using the Notary v2 standard from the CNCF Notary project. This enables developers to sign Docker/OCI container images stored in Azure Container Registry using tools like the notation CLI, backed by Artifact Signing.

Having all trust models in one service means you can manage all your signing needs in one place. Whether your code is destined for the world or just your organization, Artifact Signing makes it easy to ensure it is signed with an appropriate level of trust.

Misuse and Abuse Management
Artifact Signing is engineered with robust safeguards to counter certificate misuse and abuse. The signing platform employs active threat intelligence monitoring to continuously detect suspicious signing activity in real time. The service also emphasizes prevention: certificates are short-lived (renewed daily and valid for only 72 hours), which means any certificate used maliciously can be swiftly revoked without impacting software signed outside its brief lifetime. When misuse is confirmed, Artifact Signing quickly revokes the certificate and suspends the subscriber’s account, removing trust from the malicious code’s signature and stopping further abuse. These measures adhere to strict industry standards for responsible certificate governance. By combining real-time threat detection, built-in preventive controls, and rapid response policies, Artifact Signing gives Windows app developers confidence that any attempt to abuse the platform will be quickly identified and contained, helping protect the broader software ecosystem from emerging threats.

Availability and What’s Next
Check out the upcoming “What’s New” section in the Artifact Signing Learn docs for updates on supported file types, new region availability, and more. Microsoft will continue evolving the service to meet developer needs.

Conclusion: Enhancing Trust and Security for All Windows Apps
Artifact Signing empowers Windows developers to sign their applications with ease and confidence. It integrates effortlessly into your development tools, automates the heavy lifting of certificate management, and ensures every app carries a verified digital signature backed by Microsoft’s Certificate Authorities. For users, it means peace of mind. For developers and organizations, it means fewer headaches, stronger protection against supply chain threats, and complete control over who signs what and when. Now that Artifact Signing is generally available, it’s a must-have for building trustworthy Windows software. It reflects Microsoft’s commitment to a secure, inclusive ecosystem and brings modern security features like Smart App Control and App Control for Business within reach, simply by signing your code. Whether you're shipping consumer apps or internal tools, Artifact Signing helps you deliver software that’s both easy to install and tough to compromise.

Trusted Signing Public Preview Update
Nearly a year ago we announced the Public Preview of Trusted Signing, with availability for organizations with 3 years or more of verifiable history to onboard to the service and get a fully managed code signing experience that simplifies the effort for Windows app developers. Over the past year, we’ve announced new features, including the preview support for individual developers, and we highlighted how the service contributes to the Windows security story at Microsoft BUILD 2024 in the Unleash Windows App Security & Reputation with Trusted Signing session.

During the Public Preview, we have obtained valuable insights on the service features from our customers, as well as insights into the developer experience and the experience for Windows users. As we incorporate this feedback and learning into our General Availability (GA) release, we are limiting new customer subscriptions as part of the public preview. This approach will allow us to focus on refining the service based on the feedback and data collected during the preview phase.

The limit on new customer subscriptions for Trusted Signing will take effect Wednesday, April 2, 2025, and will make the service available only to US and Canada-based organizations with 3 years or more of verifiable history. Onboarding for individual developers and all other organizations will not be directly available for the remainder of the preview, and we look forward to expanding the service availability as we approach GA. Note that this announcement does not impact any existing subscribers of Trusted Signing, and the service will continue to be available for these subscribers as it has been throughout the Public Preview.

For additional information about Trusted Signing, please refer to Trusted Signing documentation | Microsoft Learn and Trusted Signing FAQ | Microsoft Learn.

Entra Risky Users Custom Role
My customer implemented unified RBAC (Defender portal) and removed the Entra Security Operator role. They lost the ability to manage Risky Users in Entra. Two options explored by the customer: the Protected Identity Administrator role (licensing unclear), or creating a custom role with microsoft.directory/identityProtection/riskyUsers/update, which they couldn't find among the available custom role permissions. Do you know if there are other options to manage Risky Users without using the Security Operator role?
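For context, this is roughly what defining such a custom role looks like with Microsoft Graph PowerShell. Whether microsoft.directory/identityProtection/riskyUsers/update is actually supported as a custom role permission is exactly the open question here, so treat this as a sketch to test, not a confirmed solution.

# Attempt to create a custom Entra role carrying the risky-users update action.
# If Graph rejects the action, it is not supported for custom role definitions;
# additional read actions would likely also be needed (not shown, unverified).
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
$params = @{
    DisplayName     = "Risky Users Operator (custom)"
    Description     = "Manage risky users without the full Security Operator role"
    IsEnabled       = $true
    RolePermissions = @(
        @{ AllowedResourceActions = @("microsoft.directory/identityProtection/riskyUsers/update") }
    )
}
New-MgRoleManagementDirectoryRoleDefinition @params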
OPNsense Firewall as Network Virtual Appliance (NVA) in Azure

This blog is available as a video on YouTube: youtube.com/watch?v=JtnIFiB7jkE

Introduction to OPNsense
In today’s cloud-driven world, securing your infrastructure is more critical than ever. One powerful solution is OPNsense, a powerful open-source firewall that can be used to secure your virtual networks. It was originally forked from pfSense, which itself evolved from m0n0wall, and is built on FreeBSD. It provides a user-friendly web interface for configuration and management.

What makes OPNsense Firewall stand out is its rich feature set:
- VPN support for point-to-site and site-to-site connections using technologies like WireGuard and OpenVPN.
- DNS management with options such as OpenDNS and Unbound DNS.
- Multi-network handling, enabling you to manage different LANs seamlessly.
- Advanced security features including intrusion detection and forward proxy integration.
- A plugin ecosystem supporting official and community extensions for third-party integrations.

In this guide, you’ll learn how to install and configure OPNsense Firewall on an Azure Virtual Machine, leveraging its capabilities to secure your cloud resources effectively. We'll have two demonstrations:
- Installing OPNsense on an Azure virtual machine
- Setting up point-to-site VPN using WireGuard

Here is the architecture we want to achieve in this blog, except the Hub and Spoke configuration, which is planned for the second part coming soon.

1. Installing OPNsense on an Azure Virtual Machine
There are three ways to have OPNsense in a virtual machine:
- Create a VM from scratch and install OPNsense.
- Install using the pre-packaged ISO image created by Deciso, the company that maintains OPNsense.
- Use a pre-built VM image from the Azure Marketplace.

In this demo, we will use the first approach to have more control over the installation and configuration. We will create an Azure VM with a FreeBSD OS and then install OPNsense using a shell script through the Custom Script Extension. All the required files are in this repository: github.com/HoussemDellai/azure-network-course/205_nva_opnsense.

The shell script configureopnsense.sh will install OPNsense and apply a predefined configuration file config.xml to set up the firewall rules, VPN, and DNS settings. It takes 4 parameters:
- GitHub path where the script and config file are hosted; in our case it is /scripts/.
- OPNsense version to install, currently set to 25.7.
- Gateway IP address for the trusted subnet.
- Public IP address of the untrusted subnet.

This shell script is executed after the VM creation using the Custom Script Extension in Terraform, represented in the file vm_extension_install_opnsense.tf.

OPNsense is intended to be used as an NVA, so it is worth applying some good practices. One of these practices is to have two network interfaces:
- Trusted interface: connected to the internal network (spokes).
- Untrusted interface: connected to the internet (WAN).
This setup allows OPNsense to effectively manage and secure traffic between the internal network and the internet.

A second good practice is to start with a predefined configuration file config.xml that includes the basic settings for the firewall, VPN, and DNS. This approach saves time and ensures consistency across deployments. It is recommended to start with closed firewall rules and then open them as needed based on your security requirements. But for demo purposes, we will allow all traffic.
A third good practice is to use multiple instances of OPNsense in a high-availability setup to ensure redundancy and failover capabilities. However, for simplicity, we will use a single instance in this demo.

Let's take a look at the resources that will be created by Terraform using the AzureRM provider:
- Resource Group
- Virtual Network (VNET) named vnet-hub with two subnets: a Trusted Subnet for internal traffic between spokes, and an Untrusted Subnet that exposes the firewall to the internet.
- Network Security Group (NSG): attached to the untrusted subnet, with rules allowing traffic to the VPN, the OPNsense website, and the internet.
- Virtual Machine with the following configuration: FreeBSD OS image using version 14.1; VM size Standard_D4ads_v6 with an NVMe disk for better performance; admin credentials (feel free to change the username and password for better security); two NICs (trusted and untrusted) with IP forwarding enabled to allow traffic to pass through the firewall.
- NAT Gateway: attached to the untrusted subnet for outbound internet connectivity.

Apply Terraform configuration
To deploy the resources, run the following commands in your terminal from within the 205_nva_opnsense directory:
terraform init
terraform apply -auto-approve
Terraform provisions the infrastructure and outputs resource details. In the Azure portal you should see the newly created resources.

Accessing the OPNsense dashboard
To access the OPNsense dashboard:
- Get the VM’s public IP from the Azure portal or from the Terraform output.
- Paste it into your browser.
- Accept the TLS warning (TLS is not configured yet).
- Log in with username root and password opnsense; you can change it later in the dashboard.
You now have access to the OPNsense dashboard where you can:
- Monitor traffic and reports.
- Configure firewall rules for LAN, WAN, and VPN.
- Set up VPNs (WireGuard, OpenVPN, IPsec).
- Configure DNS services (OpenDNS, UnboundDNS).
Now that the OPNsense firewall is up and running, let's move to the next steps to explore some of its features, like VPN.

2. Setting up Point-to-Site VPN using WireGuard
We’ll demonstrate how to establish a WireGuard VPN connection to the OPNsense firewall. The configuration file config.xml used during installation already includes the necessary settings for WireGuard VPN. For more details on how to set up WireGuard on OPNsense, refer to the official documentation.

We will generate a WireGuard peer configuration using the OPNsense dashboard. Navigate to VPN > WireGuard > Peer generator, then add a name for the peer and fill in the IP address for OPNsense, which is the public IP of the VM in Azure; use the same IP if you want to use the pre-configured UnboundDNS. Then copy the generated configuration, click on Store and generate next, and then Apply.

Next we'll use that configuration to set up WireGuard on a Windows client. Here you can either use your current machine as a client or create a new Windows VM in Azure. We'll go with the second option for better isolation. We'll deploy the client VM using the Terraform file vpn_client_vm_win11.tf. Make sure it is deployed using the command terraform apply -auto-approve. Once the VM is ready, connect to it using RDP, then download and install WireGuard. Alternatively, you can install WireGuard using the following Winget command:
winget install -e --id WireGuard.WireGuard --accept-package-agreements --accept-source-agreements
Launch the WireGuard application, click on Add Tunnel > Add empty tunnel..., then paste the peer configuration generated from OPNsense and save it.
Then click on Activate to start the VPN connection. We should see the data transfer starting. We'll verify the VPN connection by pinging the VM, checking that outbound traffic passes through the NAT Gateway's IPs, and checking DNS resolution using the UnboundDNS configured in OPNsense.

ping 10.0.1.4 # this is the trusted IP of OPNsense in Azure
# Pinging 10.0.1.4 with 32 bytes of data:
# Reply from 10.0.1.4: bytes=32 time=48ms TTL=64
# ...

curl ifconfig.me/ip # should display the public IP of the NAT Gateway in Azure
# 74.241.132.239

nslookup microsoft.com # should resolve using UnboundDNS configured in OPNsense
# Server: UnKnown
# Address: 135.225.126.162
# Non-authoritative answer:
# Name: microsoft.com
# Addresses: 2603:1030:b:3::152
# 13.107.246.53
# 13.107.213.53
# ...

The service endpoint ifconfig.me is used to get the public IP address of the client. You can use any other similar service.

What's next?
Now that you have the OPNsense firewall set up as an NVA in Azure and have successfully established a WireGuard VPN connection, we can explore additional features and configurations, such as integrating OPNsense into a Hub and Spoke network topology. That will be covered in the next part of this blog.

Special thanks to 'behind the scenes' contributors
I would like to thank my colleagues Stephan Dechoux, thanks to whom I discovered OPNsense, and Daniel Mauser, who provided a good lab for setting up OPNsense in Azure, available here: https://github.com/dmauser/opnazure.

Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Issue when ingesting Defender XDR table in Sentinel
Hello,

We are migrating our on-premises SIEM solution to Microsoft Sentinel since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found out that the Auxiliary/Data Lake feature is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period). We implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/

However, we are facing an issue with 2 tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The tables forwarded by Defender to Sentinel are empty, like this row:

We created a support ticket, but so far we haven't received any solution. If anyone has experienced this issue, we would appreciate your feedback.

Lucas