Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
This blog is available as a video on YouTube: youtube.com/watch?v=JtnIFiB7jkE
Introduction to OPNsense
In today’s cloud-driven world, securing your infrastructure is more critical than ever....
Jan 05, 2026
*Thank you to my teammates Ian Parramore and David Hoerster for reviewing and contributing to this blog.*
With the launch of the Sentinel Platform, a new suite of features for the Microsoft Sentine...
Dec 29, 2025
This blog post is about understanding and using ms-settings URIs in Windows to control and customize the Settings experience in enterprise environments.
Dec 29, 2025
In November 2023 at Microsoft Ignite, we announced the integration of Microsoft Sentinel with Microsoft Defender XDR into the unified Microsoft Defender portal. Fast forward, in July 2024 we announce...
Dec 23, 2025
Recent Discussions
Issue when ingesting Defender XDR table in Sentinel
Hello, We are migrating our on-premises SIEM solution to Microsoft Sentinel since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found out that the Auxiliary/Data Lake feature is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period). We implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/ However, we are facing an issue with two tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The tables forwarded by Defender to Sentinel are empty, like this row: We created a support ticket but so far, we haven't received any solution. If anyone has experienced this issue, we would appreciate your feedback. Lucas

I have absolutely no idea what Microsoft Defender 365 wants me to do here
The process starts with an email: There's more below on the email - an offer for credit monitoring, an option to add another device, an option to download the mobile app - but I don't want to do any of them, so I click on the "Open Defender" button, which results in this: OK, so my laptop is the bad boy here, with that Status of "Action recommended", no actual "recommendations", and the only live link here is "Add device", something I don't need to do. The only potential "problem" I can even guess at here is that Microsoft is telling me that the laptop needs updating. Since I seldom use the laptop, only when traveling, I'd guess the next time I fire it up the update will occur, but of course I really don't know that's the recommended action it's warning me about, do I? You'd expect that if something is warning you "ACTION NEEDED!!!" it'd be a little more explicit, wouldn't you?

Microsoft Purview Data Map Approach to scan
I plan to scan Purview data assets owner by owner rather than scanning entire databases in one go, because this approach aligns with data governance and RBAC (Role-Based Access Control) principles. By segmenting scans by asset ownership, we ensure that only the designated data asset owners have the ability to edit or update metadata for their respective assets in Purview. This prevents broad, unrestricted access and maintains accountability, as each owner manages the metadata for the tables and datasets they are responsible for. Scanning everything at once would make it harder to enforce these permissions and could lead to unnecessary exposure of metadata management rights. This owner-based scanning strategy keeps governance tight, supports compliance, and ensures that metadata stewardship remains with the right people. This approach also aligns with Microsoft Purview best practices and the RBAC model: Microsoft recommends scoping scans to specific collections or assets rather than ingesting everything at once, allowing different teams or owners to manage their own domains securely and efficiently. Purview supports metadata curation via roles such as Data Owner and Data Curator, ensuring that only users assigned as owners (those with write or owner permissions on specific assets) can edit metadata like descriptions, contacts, or column details. The system adheres to the principle of least privilege, where users with Owner/Write permissions can manage metadata for their assets, while broader curation roles apply only where explicitly granted. Therefore, scanning owner by owner not only enforces governance boundaries but also ensures each data asset owner retains exclusive editing rights over their metadata, supporting accountability, security, and compliance. After scanning by ownership, we can aggregate those assets into a logical data product representing the full database without breaking governance boundaries.
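One way to express the owner-by-owner approach in automation is to define one scoped scan per owner, each landing in that owner's collection. The sketch below only builds the request URL and payload (nothing is sent); the endpoint shape, API version, source kind, and payload field names are assumptions for illustration, so check the current Purview Scanning API reference before relying on them.

```python
# Hypothetical sketch: one scoped scan definition per data asset owner, instead of
# a single whole-database scan. URL shape, api-version, and body fields are
# assumptions modeled loosely on the Purview Scanning REST API -- verify before use.

def build_scoped_scan(account: str, datasource: str, scan_name: str,
                      collection_id: str, table_patterns: list[str]) -> dict:
    """Return the URL and body for one owner-scoped scan (not sent anywhere here)."""
    url = (f"https://{account}.purview.azure.com/scan/datasources/"
           f"{datasource}/scans/{scan_name}?api-version=2022-07-01-preview")
    body = {
        "kind": "AzureSqlDatabase",  # assumed source kind for this example
        "properties": {
            # Land results in the owner's collection so RBAC stays scoped
            "collection": {"referenceName": collection_id,
                           "type": "CollectionReference"},
            # Restrict the scan to the owner's tables, not the whole database
            "scanRulesetName": "AzureSqlDatabase",
            "resourceTypes": {"tables": table_patterns},
        },
    }
    return {"url": url, "body": body}

# One scan per owner, each scoped to that owner's collection and tables
scan = build_scoped_scan("contoso", "sql-ds", "scan-owner-finance",
                         "finance-collection", ["dbo.Invoices", "dbo.Payments"])
```

Running one such scan per owner keeps each owner's metadata inside their own collection, which is what makes the RBAC boundary enforceable.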
Is this considered best practice for managing metadata in Microsoft Purview, and does it confirm that my approach is correct?

Request for Advice on Managing Shared Glossary Terms in Microsoft Purview
Hi everyone, I'm looking for guidance from others who are working with Microsoft Purview (Unified Catalog), especially around glossary and governance domain design.

Scenario: I have multiple governance domains, all within the same Purview tenant. We have some core business concepts from our conceptual data models, for example a term like “PARTY”, that are needed in every governance domain. However, based on Microsoft’s documentation:

- Glossary terms can only be created inside a specific governance domain, not at a tenant‑wide or global level.
- The Enterprise Glossary is only a consolidated view, not a place to create global terms or maintain a single shared version. It simply displays all terms from all domains in one list.
- If the same term is needed across domains, Purview requires separate term objects in each domain.
- Consistency must therefore be managed manually (by re‑creating the term in each domain) or by importing/exporting via CSV or automation (API/PyApacheAtlas).

This leads to questions about maintainability, especially when we want one consistent definition across all domains. What I'm hoping to understand from others:

- How are you handling shared enterprise concepts (enterprise and conceptual data models) that need to appear in multiple governance domains?
- Are you duplicating terms in each domain and synchronising them manually or via automation?
- Have you adopted a “central domain” for hosting enterprise‑standard terms and then linking or referencing them in other domains?
- Is there any better pattern you’ve found to avoid fragmentation and to ensure consistent definitions across domains?

Any advice, lessons learned, or examples of how you’ve structured glossary governance in Purview would be really helpful.
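One automation pattern for the "separate term objects per domain" constraint is to keep a single canonical definition and fan it out to every governance domain, so each domain's term object is regenerated from the same source of truth and drift is caught by re-running the sync. This is a minimal sketch of that fan-out logic only; the payload shape is a simplified assumption, not the exact Purview/Atlas glossary schema, and the actual API calls (via REST or PyApacheAtlas) are omitted.

```python
# Sketch: fan a canonical glossary definition out to every governance domain.
# The dict shape ("governanceDomain" etc.) is illustrative, not the real schema.

CANONICAL_TERMS = {
    "PARTY": "An individual or organization that the enterprise has a relationship with.",
}

def fan_out_terms(domains: list[str]) -> list[dict]:
    """Build one term payload per (term, domain) pair; POSTing them is left out."""
    payloads = []
    for name, definition in CANONICAL_TERMS.items():
        for domain in domains:
            payloads.append({
                "name": name,
                "longDescription": definition,
                "governanceDomain": domain,  # assumed field name for illustration
            })
    return payloads

payloads = fan_out_terms(["Finance", "HR", "Risk"])
# Every domain receives an identical definition from the canonical source
```

Treating the canonical dictionary (or a CSV export of it) as the source of truth turns the manual re-creation problem into an idempotent sync job.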
This will be a primary OKR: establish a unified method to consistently link individual entities (e.g., PARTY) to their associated PII‑classified column‑level data assets in Microsoft Purview, ensuring sensitive data is accurately identified, governed, and monitored across all domains (i.e., CDEs to glossary terms). Thanks in advance!

From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety
The “block” posture on Generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the “slow bleed” of AI leakage, where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required to treat the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter
Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "Account Leak" problem, the deployment of Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants using corporate-managed devices. For enhanced coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows for consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.

2. Eliminating the Visibility Gap (Shadow AI)
You can’t secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the “Work IQ”
AI acts as a mirror of internal permissions. In a "flat" environment, AI acts like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the “Clipboard Leak”
The most common leak in 2025 is not a file upload; it’s a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.

5. The “Agentic” Era: Agent 365 & Sharing Controls
Now that we're moving from "Chat" to "Agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default “Anyone in organization” sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.

6. The Human Layer: “Safe Harbors” over Bans
Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investment in AI skilling, teaching users context minimization (redacting specifics before interacting with a model), is the better path. Providing a sanctioned, enterprise-grade "Safe Harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of Shadow AI.

7. Continuous Ops: Monitoring & Regulatory Audit
Security is not a “set and forget” project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel using Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions, such as revoking an Agent ID, if an entity attempts to access sensitive HR or financial data.

Final Thoughts
Securing AI at scale is an architectural shift. By layering Identity, Session Governance, and Agentic Identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.

Updating SDK for Java used by Defender for Server/CSPM in AWS
Hi, I have a customer who is running Defender for Cloud/CSPM in AWS. Last week, the AWS Health Dashboard lit up with a recommendation around the use of AWS SDK for Java 1.x in their organization. This version will reach end of support on December 31, 2025. The recommendation is to migrate to AWS SDK for Java 2.x. The issue is present in all of the AWS workload accounts. They found that a large amount of these alerts is caused by the Defender CSPM service, running remotely and using AWS SDK for Java 1.x. The customer attached a couple of sample events that were gathered from the CloudTrail logs. Please note that in both cases:

assumed-role: DefenderForCloud-Ciem
sourceIP: 20.237.136.191 (MS Azure range)
userAgent: aws-sdk-java/1.12.742 Linux/6.6.112.1-2.azl3 OpenJDK_64-Bit_Server_VM/21.0.9+10-LTS java/21.0.9 kotlin/1.6.20 vendor/Microsoft cfg/retry-mode/legacy cfg/auth-source#unknown

Can someone provide guidance about this? How can we find out whether DfC is going to leverage AWS SDK for Java 2.x after Dec 31, 2025?

Thanks, Terru

Lineage in Purview - Fabric
Dears, I was checking the lineage which is collected from Microsoft Fabric by Purview and I saw these limitations on the Microsoft page: I have some sources like:

- Dataverse (I am doing a Dataverse direct shortcut to bring the information from Dataverse into the Fabric Raw layer)
- SharePoint (I am doing a Fabric shortcut into SharePoint to bring the files into the Fabric Raw layer)
- SQL Server MI (I am doing Fabric mirroring into the Fabric Raw layer)

I did not understand very well the lineage exceptions mentioned above. Let me ask these questions please:

1) Suppose I have a table which came from SQL Server MI. This table was first ingested into the Fabric Raw layer, then into the Bronze layer, then into the Fabric Silver layer, and after that into the Fabric Gold layer. Each layer has its own Fabric workspace. When I bring the metadata via Purview into the Purview Unified Catalog, do I get automatic lineage between the Silver layer and the Gold layer for that table (suppose the table is named table 1)? Or is this not possible because:
1.1) They are different workspaces
1.2) The table originated outside Fabric, in the SQL Server MI?

2) Suppose I have a table in the Silver layer (in the Silver layer workspace); then, in the Gold layer, I create a shortcut to that table. And then I use a notebook to transform that table into another table in the same Gold layer. Will I have lineage between the Silver table and the new transformed table in the Gold layer?

3) To create the lineage relationships between tables which are not created automatically, I was thinking of using the Purview REST APIs. Which API shall I use specifically for this?

Thanks, Pedro

Ingesting Windows Security Events into Custom Datalake Tables Without Using Microsoft‑Prefixed Table
Hi everyone, I’m looking to see whether there is a supported method to ingest Windows Security Events into custom Microsoft Sentinel Data Lake–tiered tables (for example, SecurityEvents_CL) without writing to or modifying the Microsoft‑prefixed analytical tables. Essentially, I want to route these events directly into custom tables only, bypassing the default Microsoft‑managed tables entirely. Has anyone implemented this, or is there a recommended approach? Thanks in advance for any guidance. Best Regards, Prabhu Kiran

Latest Threat Intelligence (December 2025)
Microsoft Defender for IoT has released the December 2025 Threat Intelligence package. The package is available for download from the Microsoft Defender for IoT portal (click Updates, then Download file). Threat Intelligence updates reflect the combined impact of proprietary research and threat intelligence carried out by Microsoft security teams. Each package contains the latest CVEs (Common Vulnerabilities and Exposures), IOCs (Indicators of Compromise), and other indicators applicable to IoT/ICS/OT networks (published during the past month) researched and implemented by Microsoft Threat Intelligence Research - CPS. The CVE scores are aligned with the National Vulnerability Database (NVD). Starting with the August 2023 threat intelligence updates, CVSSv3 scores are shown if they are relevant; otherwise the CVSSv2 scores are shown.

Guidance
Customers are recommended to update their systems with the latest TI package in order to detect potential exposure risks and vulnerabilities in their networks and on their devices. Threat Intelligence packages are updated every month with the most up-to-date security information available, ensuring that Microsoft Defender for IoT can identify malicious actors and behaviors on devices.

Update your system with the latest TI package
The package is available for download from the Microsoft Defender for IoT portal (click Updates, then Download file). For more information, please review Update threat intelligence data | Microsoft Docs.

MD5 Hash: 5c642a16bf56cb6d98ef8b12fdc89939

For cloud-connected sensors, Microsoft Defender for IoT can automatically update new threat intelligence packages following their release; click here for more information.

Defender for Identity health issues - Not Closing
We have old issues and they're not being "Closed" as documented. Are we missing something, or is this "Microsoft Defender for Identity" health issues process broken? Thanks!

From the documentation: "Closed: A health issue is automatically marked as Closed when Microsoft Defender for Identity detects that the underlying issue is resolved. If you have the Azure ATP (workspace name) Administrator role, you can also manually close a health issue."

DLP USB Block
Currently we have DLP policies set up to block the use of USB devices and copying data to them. When checking the activity explorer, I am still seeing users able to copy data to USB devices, and the action item says "Audit" even though in the DLP policies we explicitly set it to block. Has anyone run into this issue or seen similar behavior?

Microsoft Compliance Assessment issues - ASD L1
Hi, We are using Microsoft Compliance Assessments in Microsoft Purview. In Microsoft Compliance Manager we have enabled the ASD Essentials Level 1 assessment. Under the Microsoft Actions there are two actions; one is: Malicious Code Protection - Periodic and Real-Time Scans (SI-0116). The issue is that currently the testing status is 'failed low risk', but the testing status has the date tested as Monday Sep 30 2024, well before we opened the assessment, along with notes that are completely irrelevant to this client and certainly not something we have put in. The information in there is quite long; I can provide a txt file with this information. I have checked the documentation and we have implemented the required security configuration. With these items set the way they are, we have no way to complete the assessment.

Unifying AWS and Azure Security Operations with Microsoft Sentinel
The Multi-Cloud Reality
Most modern enterprises operate in multi-cloud environments using Azure for core workloads and AWS for development, storage, or DevOps automation. While this approach increases agility, it also expands the attack surface. Each platform generates its own telemetry:

- Azure: Activity Logs, Defender for Cloud, Entra ID sign-ins, Sentinel analytics
- AWS: CloudTrail, GuardDuty, Config, and CloudWatch

Without a unified view, security teams struggle to detect cross-cloud threats promptly. That’s where Microsoft Sentinel comes in, bridging Azure and AWS into a single, intelligent Security Operations Center (SOC).

Architecture Overview: Connect AWS Logs to Sentinel

AWS CloudTrail via S3 Connector
Enable the AWS CloudTrail connector in Sentinel. Provide your S3 bucket and IAM role ARN with read access. Sentinel will automatically normalize logs into the AWSCloudTrail table.

AWS GuardDuty Connector
Use the AWS GuardDuty API integration for threat detection telemetry. Detected threats, such as privilege escalation or reconnaissance, appear in Sentinel in the AWSGuardDuty table.

Normalize and Enrich Data
Once logs are flowing, enrich them to align with Azure activity data.
Example KQL for mapping CloudTrail to Sentinel entities:

```kql
AWSCloudTrail
| extend AccountId = tostring(parse_json(Resources)[0].accountId)
| extend User = tostring(parse_json(UserIdentity).userName)
| extend IPAddress = tostring(SourceIpAddress)
| project TimeGenerated, EventName, User, AccountId, IPAddress, AWSRegion
```

Then correlate AWS and Azure activities:

```kql
let AWS = AWSCloudTrail
    | summarize AWSActivity = count() by User, bin(TimeGenerated, 1h);
let Azure = AzureActivity
    | summarize AzureActivity = count() by Caller, bin(TimeGenerated, 1h);
AWS
| join kind=inner (Azure) on $left.User == $right.Caller
| where AWSActivity > 0 and AzureActivity > 0
| project TimeGenerated, User, AWSActivity, AzureActivity
```

Automate Cross-Cloud Response
Once incidents are correlated, Microsoft Sentinel Playbooks (Logic Apps) can automate your response. Example playbook, “CrossCloud-Containment.json”:

1. Disable the user in Entra ID
2. Send a command to the AWS API via Lambda to deactivate the IAM key
3. Notify the SOC in Teams
4. Create a ServiceNow ticket

```
POST https://api.aws.amazon.com/iam/disable-access-key

PATCH https://graph.microsoft.com/v1.0/users/{user-id}
{ "accountEnabled": false }
```

Build a Multi-Cloud SOC Dashboard
Use Sentinel Workbooks to visualize unified operations.

Query 1 – CloudTrail Events by Region:

```kql
AWSCloudTrail
| summarize Count = count() by AWSRegion
| render barchart
```

Query 2 – Unified Security Alerts:

```kql
union SecurityAlert, AWSGuardDuty
| summarize TotalAlerts = count() by ProviderName, Severity
| render piechart
```

Scenario
Incident: A compromised developer account accesses EC2 instances on AWS and then logs into Azure via the same IP.
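The CrossCloud-Containment playbook steps can also be sketched outside Logic Apps. The Python below only builds the ordered action plan for one incident; executing it (real auth via MSAL, sending with `requests`/boto3, a real Teams webhook URL) is deliberately omitted, and the plan's field names are illustrative. The Graph call mirrors the `PATCH /users/{id}` with `accountEnabled: false` shown earlier; the AWS step notes boto3's `iam.update_access_key` as one concrete way to deactivate the key.

```python
# Minimal sketch of cross-cloud containment logic as plain Python.
# It only *builds* the actions; sending them is omitted so the flow stays readable.

def build_containment_plan(user_id: str, aws_access_key_id: str,
                           aws_user: str) -> list[dict]:
    """Return the ordered cross-cloud containment actions for one incident."""
    return [
        {   # 1. Disable the account in Entra ID (Microsoft Graph)
            "method": "PATCH",
            "url": f"https://graph.microsoft.com/v1.0/users/{user_id}",
            "body": {"accountEnabled": False},
        },
        {   # 2. Deactivate the AWS IAM access key (boto3: iam.update_access_key)
            "service": "iam",
            "action": "update_access_key",
            "params": {"UserName": aws_user,
                       "AccessKeyId": aws_access_key_id,
                       "Status": "Inactive"},
        },
        {   # 3. Notify the SOC (placeholder webhook URL)
            "method": "POST",
            "url": "https://example.invalid/teams-soc-channel",
            "body": {"text": f"Contained {user_id}: Entra account disabled, "
                             f"AWS key deactivated"},
        },
    ]

plan = build_containment_plan("11111111-2222-3333-4444-555555555555",
                              "AKIAEXAMPLEKEY", "dev-user")
```

Keeping the plan as data before execution makes the playbook easy to dry-run and audit from the incident timeline.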
Detection Flow:

1. CloudTrail logs → Sentinel detects unusual API calls
2. Entra ID sign-ins → Sentinel correlates IP and user
3. Sentinel incident triggers the playbook → disables the user in Entra ID, suspends the AWS IAM key, notifies the SOC

Strengthen Governance with Defender for Cloud
Enable Microsoft Defender for Cloud to:

- Monitor both Azure and AWS accounts from a single portal
- Apply CIS benchmarks for AWS resources
- Surface findings in Sentinel’s SecurityRecommendations table

C# MIP SDK v1.17.x - AccessViolationException on creation of MIPContext in 64-bit console app
I first logged this on https://stackoverflow.com/questions/79746967/accessviolationexception-when-creating-mipcontext-after-upgrade-to-v1-17 and the responses there have indicated I should raise it with Microsoft as a likely bug, but I don't see a clear route to reporting other than here, so any response would be appreciated, even if just to direct me to the appropriate reporting location. I've built a simple console app that demonstrates this issue, which I'm happy to provide. We're seeing an issue with the 1.17.x version of the C# MIP SDK where an AccessViolationException is thrown when trying to create a MIP context object. This is for a .NET Framework 4.8 console app built in 64-bit configuration, deployed to Windows Server 2016 with the latest VC++ redistributable (14.44.35211) installed (both x86 and x64 versions), though we've seen the same on Windows Server 2019 and 2022. When the same app is built in 32-bit and deployed to the same environment the exception doesn't occur. The following code is what I've used to repro the issue:

```csharp
MIP.Initialize(MipComponent.File);

var appInfo = new ApplicationInfo
{
    ApplicationId = string.Empty,
    ApplicationName = string.Empty,
    ApplicationVersion = string.Empty
};

var diagnosticConfiguration = new DiagnosticConfiguration
{
    IsMinimalTelemetryEnabled = true
};

var mipConfiguration = new MipConfiguration(appInfo, "mip_data", LogLevel.Info, false, CacheStorageType.InMemory)
{
    DiagnosticOverride = diagnosticConfiguration
};

// Expect BadInputException here due to empty properties of appInfo
// When built as part of a 64-bit console app this causes AccessViolationException instead
MIP.CreateMipContext(mipConfiguration);
```

The AccessViolationException crashes the console app, with the following logged in the Windows Event Log:

Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException
   at Microsoft.InformationProtection.Internal.SdkWrapperPINVOKE.MipContext_Create__SWIG_1(System.Runtime.InteropServices.HandleRef)
   at Microsoft.InformationProtection.Internal.MipContext.Create(Microsoft.InformationProtection.Internal.MipConfiguration)
   at Microsoft.InformationProtection.Utils.MIPHelper.CreateMipContext(Microsoft.InformationProtection.MipConfiguration)

The issue doesn't occur with the latest 1.16 version (1.16.149) of the SDK but does appear to be in all versions of the 1.17 release.

Library: C# MIP SDK v1.17.x
Target App: .NET Framework 4.8 console app
Deployed OS: Windows Server 2016, 2019 and 2022 (with .NET Framework 4.8 and latest VC++ redist installed)

MIP SDK cannot read file labels if a message was encrypted by Outlook Classic.
Our C++ application uses MIP SDK version 1.14.108. The application does Office file decryption and label reading. We observe a problem with label reading. Steps to reproduce:

1. Create a docx file with a label which does not impose encryption.
2. Open Outlook Classic, compose an email, attach the document from step 1, click Encrypt, send.
3. During message sending our application intercepts the Outlook-encrypted docx file in the temporary folder C:\Users\UserName\AppData\Local\Temp.
4. The application decrypts the intercepted file using mipns::FileHandler::RemoveProtection. Visual inspection demonstrates that decryption runs successfully.
5. Then a separate FileHandler for the decrypted file is created, and mipns::FileHandler::GetLabel() returns an empty label.

It means that the label was lost during decryption. Upon visual inspection of the decrypted file via Word we can see that the label is missing. Also, we do not see MSIP_Label* entries in the metadata (File -> Info -> Properties -> Advanced Properties -> Custom). Here is a fragment of the redacted MIP SDK log during file handler creation:

```
file_engine_impl.cpp:327 "Creating file handler for: [D:\GitRepos\ ...reducted]" mipns::FileEngineImpl::CreateFileHandlerImpl
gsf_utils.cpp:50 "Initialized GSF" `anonymous-namespace'::InitGsfHelper
data_spaces.cpp:415 "No LabelInfo stream was found. No v1 custom properties" mipns::DataSpaces::GetLabelInfoStream
data_spaces.cpp:428 "No LabelInfo stream was found. No v1 custom properties" mipns::DataSpaces::GetXmlPropertiesV1
file_format_base.cpp:155 "Getting protection from input..." mipns::FileFormatBase::GetProtection
license_parser.cpp:233 "XPath returned no results" `anonymous-namespace'::GetXmlNodesFromPath
license_parser.cpp:233 "XPath returned no results" `anonymous-namespace'::GetXmlNodesFromPath
license_parser.cpp:299 "GetAppDataNode - Failed to get ID in PL app data section, parsing failed" `anonymous-namespace'::GetAppDataNode
api_log_cache.cpp:58 "{{============== API CACHED LOGS BEGIN ============}}" mipns::ApiLogCache::LogAllMessages
file_engine_impl.cpp:305 "Starting API call: file_create_file_handler_async scenarioId=89fd6484-7db7-4f68-8cf7-132f87825a26" mipns::FileEngineImpl::CreateFileHandlerAsync
37948 default_task_dispatcher_delegate.cpp:83 "Executing task 'ApiObserver-0' on a new detached thread" mipns::DefaultTaskDispatcherDelegate::ExecuteTaskOnIndependentThread
37948 file_engine_impl.cpp:305 "Ended API call: file_create_file_handler_async" mipns::FileEngineImpl::CreateFileHandlerAsync
37948 file_engine_impl.cpp:305 "Starting API task: file_create_file_handler_async scenarioId=89fd6484-7db7-4f68-8cf7-132f87825a26" mipns::FileEngineImpl::CreateFileHandlerAsync
file_engine_impl.cpp:327 "Creating file handler for: [D:\GitRepos\...reducted....docx]" mipns::FileEngineImpl::CreateFileHandlerImpl
file_format_factory_impl.cpp:88 "Create File Format. Extension: [.docx]" mipns::FileFormatFactoryImpl::Create
file_format_base.cpp:363 "V1 metadata is not supported for file extension .docx. Setting metadata version to 0" mipns::FileFormatBase::CalculateMetadataVersion
compound_file.cpp:183 "Open compound file for read" mipns::CompoundFile::OpenRead
gsf_utils.cpp:50 "Initialized GSF" `anonymous-namespace'::InitGsfHelper
compound_file_storage_impl.cpp:351 "Get Metadata" mipns::CompoundFileStorageImpl::GetMetadata
compound_file_storage_impl.cpp:356 "No Metadata, not creating GSF object" mipns::CompoundFileStorageImpl::GetMetadata
metadata.cpp:119 "Create Metadata" mipns::Metadata::Metadata
metadata.cpp:136 "Got [0] properties from DocumentSummaryInformation" mipns::Metadata::GetProperties
compound_file_storage_impl.cpp:351 "Get Metadata" mipns::CompoundFileStorageImpl::GetMetadata
compound_file_storage_impl.cpp:356 "No Metadata, not creating GSF object" mipns::CompoundFileStorageImpl::GetMetadata
metadata.cpp:119 "Create Metadata" mipns::Metadata::Metadata
metadata.cpp:136 "Got [0] properties from DocumentSummaryInformation" mipns::Metadata::GetProperties
```

What are the prerequisites to see Microsoft Secure Score?
My teammate says that even a Basic or Standard M365 license provides Secure Score. Which is kind of right, as you can see a basic score when opening a tenant in Lighthouse. But if you try to go to the Defender console, then the Exposure menu, and press Secure Score, it won't load with just Standard/Basic licenses assigned to users. I have tried to find a definitive list, but I can't. Copilot said you need at least Business Premium or E3/E5 or Defender P1, which seems to make sense. But I need a confirmation. And also, why do I see some score on the tenant's page in Lighthouse?

Explorer permission to download an email
Global Admin is allegedly not sufficient access to download an email. So I have a user asking for a copy of her email, and I'm telling her "sorry, I don't have that permission, I'm only Global Admin". What? The documentation basically forces you to use the new terrible 'role group' system. I see various 'roles' that you need to add to a 'role group' in order to do this. Some mention Preview, some mention Security Administrator, some mention Security Operator. I've asked Copilot 100 different times, and it keeps giving me made-up roles, and then linking to the made-up role. How is such basic functionality broken? It makes zero sense. I don't want to submit this email - it's not malware or anything. I just want to download the **bleep** thing, and I don't want to have to go through the whole poorview process. This is really basic stuff. I can do this on about 10% of my GA accounts. There's no difference in the permissions - it just seems inconsistent.

Investigating Excel-Initiated Email Activity Without Sent Items Trace
Two days ago, three emails were sent from a user’s inbox without leaving any copies in the Sent Items folder. The user did not send these emails manually—this is confirmed by the presence of the SimpleMAPI flag in Outlook. **What I know:** **Email Characteristics:** - All three emails contained a Word attachment. - No body text was present. - The subject line matched the attachment file name. - Two of the emails were identical. **Recipients:** - Emails were sent to colleagues who originally created the attached documents. **Attachment Details:** - One attachment appeared to be a temporary file (e.g., a3e6....). **System Behavior:** - No suspicious logins detected before or after the event. - Emails were sent via the Outlook.exe process on the user’s machine. - Excel.exe was identified as the parent initiating process according to Microsoft Defender endpoint logs. **In Defender's Endpoint logs I found this under Typed Details (related to the firing of the 3 emails):** - Downloaded file: `2057_5_0_word_httpsshredder-eu.osi.office.net_main.html` - Path: `C:\Users\s***s\AppData\Local\Microsoft\Office\16.0\TapCache\2057_5_0_word_httpsshredder-eu.osi.office.net_main.html` - Downloaded file: `~$rmalEmail.dotm` - Path: `C:\Users\s***s\AppData\Roaming\Microsoft\Templates\~$rmalEmail.dotm` I am seeking assistance to replicate this issue and accurately determine how these three emails were triggered.
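One way to start replicating and triaging the pattern above is to filter endpoint process events for Outlook activity whose initiating process is Excel, which is the Simple MAPI signature described. This is a plain-Python sketch over hard-coded sample records with an assumed schema (loosely modeled on Defender's DeviceProcessEvents columns); in practice you would export the real rows from Advanced Hunting rather than inline them.

```python
# Hedged sketch: find Outlook.exe activity initiated by Excel.exe (Simple MAPI).
# The record schema and sample rows are assumptions for illustration only.

sample_events = [
    {"FileName": "OUTLOOK.EXE", "InitiatingProcessFileName": "EXCEL.EXE",
     "AccountName": "s***s", "Timestamp": "2025-12-20T09:14:02Z"},
    {"FileName": "OUTLOOK.EXE", "InitiatingProcessFileName": "explorer.exe",
     "AccountName": "s***s", "Timestamp": "2025-12-20T10:01:55Z"},
    {"FileName": "OUTLOOK.EXE", "InitiatingProcessFileName": "EXCEL.EXE",
     "AccountName": "s***s", "Timestamp": "2025-12-20T09:14:05Z"},
]

def excel_initiated_outlook(events: list[dict]) -> list[dict]:
    """Keep only Outlook activity whose initiating process is Excel."""
    return [e for e in events
            if e["FileName"].upper() == "OUTLOOK.EXE"
            and e["InitiatingProcessFileName"].upper() == "EXCEL.EXE"]

suspicious = excel_initiated_outlook(sample_events)
```

Clustering the surviving events by timestamp (the three sends here are seconds apart) helps confirm whether a single macro or add-in invocation fired all of them.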