Ingesting Akamai Audit Logs into Microsoft Sentinel using Azure Function Apps
Introduction

Akamai provides extensive audit logs that are valuable for security monitoring and compliance. To integrate Akamai audit logs with Microsoft Sentinel, we can use an Azure Function App to retrieve logs via the Akamai EdgeGrid API and send them to a Log Analytics workspace. In this guide, we will walk through deploying an Azure Function App that fetches Akamai audit logs and ingests them into Microsoft Sentinel.

Prerequisites

Before starting, ensure you have:

- An active Azure subscription with Microsoft Sentinel enabled.
- Akamai API credentials (EdgeGrid authentication: client_token, client_secret, and access_token).
- A Log Analytics workspace (LAW) where logs will be ingested.
- An Azure Function App, deployed via VS Code.
- Python installed locally (use VS Code for local development and deployment).

High-Level Architecture

1. The Azure Function App calls the Akamai API to fetch audit logs.
2. A Logic App calls the Function App, parses the logs, and sends them to Microsoft Sentinel via the Log Analytics API.
3. Scheduled execution ensures logs are fetched periodically.

Step 1: Create an Azure Function App

To deploy an Azure Function App via VS Code:

1. Install the Azure Functions extension for VS Code.
2. Install Azure Functions Core Tools:

    npm install -g azure-functions-core-tools@4 --unsafe-perm true

3. Create a Python-based Function App:

    func init AkamaiLogsFunction --python
    cd AkamaiLogsFunction
    func new --name FetchAkamaiLogs --template "HTTP trigger" --authlevel "anonymous"

Step 2: Install Required Python Packages

In your Function App directory, install the required dependencies:

    pip install requests akamai.edgegrid
    pip freeze > requirements.txt

Step 3: Configure Environment Variables

Instead of hardcoding API credentials, store them in the Function App settings:

1. Go to Azure Portal > Function App.
2. Navigate to Configuration > Application settings.
3. Add the following environment variables (they can also be set from the command line, as shown below):

- AKAMAI_CLIENT_TOKEN
- AKAMAI_CLIENT_SECRET
- AKAMAI_ACCESS_TOKEN
- WORKSPACE_ID (Log Analytics workspace ID)
- SHARED_KEY (Log Analytics shared key)
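If you prefer scripting this step, the same settings can be applied with the Azure CLI. A minimal sketch with placeholder names and values (substitute your own app name, resource group, and secrets):

    az functionapp config appsettings set \
      --name <YourFunctionAppName> \
      --resource-group <YourResourceGroup> \
      --settings "AKAMAI_CLIENT_TOKEN=<client-token>" "AKAMAI_CLIENT_SECRET=<client-secret>" \
        "AKAMAI_ACCESS_TOKEN=<access-token>" "WORKSPACE_ID=<workspace-id>" "SHARED_KEY=<shared-key>"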
Step 4: Implement the Azure Function Code

Create function_app.py (the default entry point for the Python v2 programming model) with the following code:

    import azure.functions as func
    import logging
    import requests
    from akamai.edgegrid import EdgeGridAuth
    from urllib.parse import urljoin
    import os

    app = func.FunctionApp()

    # Azure Function HTTP trigger
    @app.function_name(name="AkamaiLogFetcher")
    @app.route(route="fetchlogs", auth_level=func.AuthLevel.ANONYMOUS)
    def fetch_logs(req: func.HttpRequest) -> func.HttpResponse:
        logging.info("Processing Akamai log fetch request...")

        # Read Akamai API credentials from the Function App settings
        baseurl = 'https://YOURBASEHOSTURL.luna.akamaiapis.net/'
        client_token = os.getenv("AKAMAI_CLIENT_TOKEN")
        client_secret = os.getenv("AKAMAI_CLIENT_SECRET")
        access_token = os.getenv("AKAMAI_ACCESS_TOKEN")

        # Initialize a session with EdgeGrid authentication
        session = requests.Session()
        session.auth = EdgeGridAuth(
            client_token=client_token,
            client_secret=client_secret,
            access_token=access_token
        )

        try:
            # Call the Akamai API
            response = session.get(urljoin(baseurl, '/events/v3/events'))
            response.raise_for_status()  # Raise an error for HTTP errors

            # Return the response as JSON
            return func.HttpResponse(
                response.text,
                mimetype="application/json",
                status_code=response.status_code
            )
        except requests.exceptions.RequestException as e:
            logging.error(f"Error fetching logs: {e}")
            return func.HttpResponse(f"Failed to fetch logs: {str(e)}", status_code=500)

Step 5: Deploy the Function to Azure

Run the following command to deploy the function:

    func azure functionapp publish <YourFunctionAppName>

Step 6: Setting Up the Logic App Workflow

1. Create a new Logic App in Azure:
   - Navigate to the Azure Portal -> Logic Apps -> Create.
   - Choose the Consumption plan and select your preferred region.
   - Click Review + Create, then Create.
2. Add a Recurrence trigger:
   - Select Recurrence as the trigger.
   - Configure it to run every 10 minutes.
3. Configure the HTTP action to fetch logs from the Function App API:
   - Use the HTTP action in Logic Apps.
   - Set the method to GET.
   - Enter the Function App URL.
   - Add the required headers (content type).
4. Parse the JSON response:
   - Use the "Parse JSON" action to structure the response.
   - Define the schema using a sample response from the Akamai audit logs.
5. Send logs to Microsoft Sentinel:
   - Use the "Azure Log Analytics - Send Data" action.
   - Map the Akamai audit log fields to the Log Analytics schema.
   - Select the appropriate custom table in Log Analytics, or use CommonSecurityLog.

[Screenshot: JSON request body for the Send Data action]

[Screenshot: the completed Logic App workflow]
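Alternatively, you can skip the Logic App's Send Data action and post records to the workspace directly from Python using the Log Analytics HTTP Data Collector API. A minimal sketch, assuming the same WORKSPACE_ID and SHARED_KEY app settings (the helper name and the AkamaiAudit log type are illustrative, not prescribed by the article):

    import base64
    import hashlib
    import hmac
    import json
    import os
    from datetime import datetime, timezone

    import requests

    def post_to_log_analytics(records: list, log_type: str = "AkamaiAudit") -> int:
        """Post records to a Log Analytics custom table (<log_type>_CL)."""
        workspace_id = os.environ["WORKSPACE_ID"]
        shared_key = os.environ["SHARED_KEY"]

        body = json.dumps(records)
        rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

        # The API authenticates with an HMAC-SHA256 signature over these fields.
        string_to_sign = (
            f"POST\n{len(body.encode('utf-8'))}\napplication/json\n"
            f"x-ms-date:{rfc1123_date}\n/api/logs"
        )
        signature = base64.b64encode(
            hmac.new(base64.b64decode(shared_key),
                     string_to_sign.encode("utf-8"),
                     hashlib.sha256).digest()
        ).decode()

        response = requests.post(
            f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"SharedKey {workspace_id}:{signature}",
                "Log-Type": log_type,      # the table appears as AkamaiAudit_CL
                "x-ms-date": rfc1123_date,
            },
        )
        return response.status_code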
Step 7: Testing and Validation

1. Run a test execution of the Logic App.
2. Check the Logic App run history to ensure successful Function App calls and data ingestion.
3. Verify the logs in Sentinel:
   - Navigate to Microsoft Sentinel -> Logs.
   - Run a KQL query against your custom table (the table name below is an example; use the one you configured in the Send Data action):

    AkamaiAudit_CL
    | where TimeGenerated > ago(10m)

Summary

This guide demonstrated how to use Azure Function Apps and Logic Apps to fetch Akamai audit logs via API and send them to Microsoft Sentinel. The serverless approach ensures efficient log collection without requiring dedicated infrastructure.

👉 Microsoft Entra in Action: From Conditional Access to Identity Protection

One of the areas I'm most passionate about is identity-driven security. Microsoft Entra makes it possible to apply Zero Trust principles directly at the identity layer.

⚡ Conditional Access – the backbone of modern access policies.
👤 Privileged Identity Management (PIM) – ensuring just-in-time, least privilege for admins.
🛡️ Identity Protection – risk-based policies to stop compromised sign-ins in real time.

In my labs, I've seen how these features transform security posture without adding friction for users.

Coming soon:
- A step-by-step breakdown of a risky-user detection scenario.
- A visual guide to Conditional Access controls for critical apps.

Would love to exchange insights with others experimenting in this space — what Entra features are you finding most impactful?

#MicrosoftEntra | #ConditionalAccess | #IdentityProtection | #MicrosoftLearn | #PerparimLabs
Phishing Triage Agent in Defender XDR: Say Goodbye to False Positives and Analyst Fatigue

Phishing remains one of the most common and dangerous attack vectors in cybersecurity. With the rise of user-reported suspicious emails, Security Operations Center (SOC) teams are overwhelmed by the volume and complexity of triage. Enter the Phishing Triage Agent, a new capability within Microsoft Defender XDR and Security Copilot that uses AI to automate phishing classification, reduce false positives, and accelerate incident response.

[Image from Microsoft Learn - Microsoft Security Copilot Agents]

What's the Issue?

SOC analysts regularly handle a high volume of suspicious email reports, dedicating substantial time to reviewing each submission, though many prove to be non-threatening. More than 90% of cyberattacks originate from phishing, making it a primary method used to breach organizational defenses. This results in numerous alerts and potential incidents that must be triaged, prioritized, and investigated. Traditional rule-based systems, once effective for detecting known threats, now struggle as attackers adapt their tactics and techniques. The continually changing threat landscape requires defenders to address not only advanced phishing attempts but also alert fatigue and the possibility of missing significant incidents. In this context, scalable and efficient solutions are essential for enabling defenders to focus on investigating and mitigating real threats rather than chasing false positives.

[Image from Microsoft Learn - Type view for the Mailflow status report]

Why It's Urgent

Phishing is a very popular entry point for attackers, and such attacks are growing more frequent and advanced, leaving SOC teams struggling with incident management. The Phishing Triage Agent uses LLMs and state-of-the-art threat intelligence to quickly analyze and categorize reported emails, helping analysts focus on real threats. Integrating easily with current workflows, it offers adaptive, AI-driven insights for rapid threat detection and improved situational awareness. Through ongoing learning, it stays aligned with evolving attacker tactics and helps strengthen email security.

[Image from Microsoft Learn - Defender for Office 365 Phishing block]

Use Cases

- Automated Triage: Classify phishing emails without manual rules.
- False Positive Filtering: Reduce noise and analyst fatigue.
- Explainable AI: Provide clear reasoning behind verdicts.
- Threat Prioritization: Focus on high-risk incidents with enriched context.
- Compliance Auditing: Maintain logs and transparency for regulatory needs.

[Image from Microsoft Learn - Incident Queue with Phishing Triage Agent]

How It Works

The agent activates when a user reports a suspicious email and does the following (see the conceptual sketch below):

1. Analyzes the message using LLMs.
2. Classifies it as a normal email or phishing.
3. Enriches the incident with threat intelligence.
4. Provides a verdict with a natural-language explanation.
5. Escalates or resolves based on severity and confidence.

[Image created with AI]

It integrates with Security Copilot, enabling AI-assisted investigations and automation across Microsoft Defender XDR.

[Image from Microsoft Learn - Transparency and explainability in phishing triage]
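To make the escalate-or-resolve decision loop concrete, here is a conceptual Python sketch. The class, helper, and confidence threshold are hypothetical illustrations of the flow described above, not the agent's actual implementation or API:

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        classification: str   # "phishing" or "benign"
        confidence: float     # 0.0 to 1.0
        explanation: str      # natural-language reasoning shown to the analyst

    def classify_with_llm(email_text: str) -> Verdict:
        """Stand-in for the agent's LLM analysis step (keyword check only)."""
        if "verify your account" in email_text.lower():
            return Verdict("phishing", 0.92, "Credential-harvesting language detected.")
        return Verdict("benign", 0.75, "No phishing indicators found.")

    def triage(email_text: str) -> str:
        """Escalate high-confidence phishing; auto-resolve everything else."""
        verdict = classify_with_llm(email_text)
        if verdict.classification == "phishing" and verdict.confidence >= 0.8:
            return f"ESCALATE: {verdict.explanation}"
        return f"RESOLVE: {verdict.explanation}"

    print(triage("Please verify your account within 24 hours."))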
Pros and Cons

This section outlines the main advantages, limitations, and licensing requirements of the Phishing Triage Agent solution.

| Pros | Cons | License Needed |
| --- | --- | --- |
| Scales phishing triage across the enterprise | Requires SCU provisioning and Defender licensing | Microsoft Defender for Office 365 Plan 2 |
| Reduces false positives and analyst fatigue | Currently in preview; may evolve | Security Copilot subscription |
| Provides explainable decisions | Requires integration with Defender XDR | SCUs and plugin configuration |

The Phishing Triage Agent is a game-changer for SOC teams. By combining AI-powered analysis with human oversight, it accelerates detection, sharpens response, and strengthens organizational security posture. As phishing tactics evolve, this agent ensures your defenses stay ahead.

Getting Started with the Phishing Triage Agent

The Phishing Triage Agent in Microsoft Defender XDR and Security Copilot helps SOC teams automate and accelerate phishing email analysis. Here's how to get started:

1. Check Prerequisites
   Ensure your organization has the necessary licenses:
   - Microsoft Defender for Office 365 Plan 2
   - Security Copilot subscription
   - Security Compute Units (SCUs) provisioned
   - Defender XDR integration enabled
   References: Microsoft Defender for Office 365 service description; License options for Microsoft 365 Copilot

2. Enable the Phishing Triage Agent
   In the Microsoft Defender portal, go to Settings > Email & Collaboration > Policies & Rules and enable the Phishing Triage Agent under Automated Investigation & Response (AIR).
   Reference: Automated investigation and response examples - Microsoft Defender for Office 365

3. Integrate with Security Copilot
   In the Security Copilot interface:
   - Add the Phishing Triage Agent as a plugin.
   - Configure it to trigger when users report suspicious emails via Outlook or Defender for Office 365.
   Reference: Use plugins in Microsoft Security Copilot

4. Test the Workflow
   Simulate a phishing report by submitting a suspicious email. The agent will:
   - Use LLMs to analyze the message
   - Classify it as phishing or safe
   - Enrich the incident with threat intelligence
   - Provide a natural-language explanation
   - Escalate or resolve based on severity
   Reference: Security Copilot Phishing Triage Agent in Microsoft Defender

5. Review and Tune
   Use the Mailflow status report and the incident queue to monitor:
   - Classification accuracy
   - False positives
   - Analyst workload reduction
   References: Mail flow insights in the new EAC in Exchange Online; Prioritize incidents in the Microsoft Defender portal

6. Train Your SOC Team
   - Share explainable AI outputs with analysts to build trust.
   - Use the agent's verdicts to guide manual investigations and reinforce learning.
   Reference: Security Copilot Phishing Triage Agent in Microsoft Defender (Preview)

7. Iterate and Improve
   - Review phishing trends.
   - Update triage policies.
   - Leverage Security Copilot's adaptive learning to stay ahead of evolving threats.
   Reference: What is Microsoft Security Copilot?

About the Author

Greetings! Jacques "Jack" here. I am excited to share this remarkable technology with our Defender community, as it has the potential to greatly enhance organizational protection. My role as a Microsoft Technical Trainer has shown me how valuable solutions like Security Copilot and Security AI Agents can be in strengthening defenses and accelerating response to threats. By sharing these advancements, I hope to empower you with the tools needed to safeguard your environment in an ever-evolving security landscape.

#MicrosoftLearn #SkilledByMTT

Log Ingestion Delay in all Data connectors
Hi, I have integrated multiple log sources in Sentinel, and all of them are ingesting logs only between 7:00 PM and 2:00 AM. I want log ingestion to happen in real time. I have integrated Azure WAF, Syslog, Fortinet, and Windows servers. For evidence, I am attaching screenshots. I am totally clueless; if anyone can help, I will be very thankful!

No More Guesswork—Copilot Makes Azure Security Crystal Clear
Elevating Azure Security and Compliance

In today's rapidly evolving digital landscape, security and compliance are more critical than ever. As organizations migrate workloads to Azure, the need for robust security frameworks and proactive compliance strategies grows. Security Copilot, integrated with Azure, is transforming how technical teams approach these challenges, empowering users to build secure, compliant environments with greater efficiency and confidence.

As a security expert, I'd like to provide clear guidance on how to effectively utilize Security Copilot in the ever-evolving landscape of security and compliance. Security Copilot is a premium offering; it includes advanced capabilities that go beyond standard Azure security tools, and these features may require specific licensing or subscription tiers. It provides deeper insights, enhanced automation, and tailored guidance for complex security scenarios. Below, I'll highlight a range of security topics, with sample Copilot prompts you can use to help create a more secure and compliant environment.

Getting Started with Microsoft Security Copilot

Before leveraging the advanced capabilities of Security Copilot, it's important to understand the foundational requirements and setup steps:

Azure Subscription Requirement
Security Copilot is not automatically available in all Azure subscriptions. To use it, your organization must have an active Azure subscription. This is necessary to provision Security Compute Units (SCUs), the core resources that power Copilot workloads.

Provisioning Security Compute Units (SCUs)
SCUs are billed hourly and can be scaled based on workload needs. At least one SCU must be provisioned to activate Security Copilot. You can manage SCUs via the Azure portal or the Security Copilot portal, adjusting capacity as needed for performance and cost optimization.

Role-Based Access Control
To set up and manage Security Copilot:
- You need to be an Azure Owner or Contributor to provision SCUs.
- Users must be assigned appropriate Microsoft Entra roles (e.g., Security Administrator) to access and interact with Copilot features.

Embedded Experience
Security Copilot can be used as a standalone tool or embedded within other Microsoft services such as Defender for Endpoint, Intune, and Purview, offering a unified security management experience.

Data Privacy and Security: Foundational Best Practices

Why settle for generic security advice when Security Copilot delivers prioritized, actionable guidance backed by Microsoft's best practices? Copilot doesn't just recommend security measures; it actively helps you implement them, leveraging advanced features like encryption and granular access controls to safeguard every layer of your Azure environment.

While Security Copilot doesn't directly block threats like a firewall or Web Application Firewall (WAF), it enhances data integrity and confidentiality by analyzing security signals across Azure, identifying vulnerabilities, and guiding teams with prioritized, actionable recommendations. It helps implement encryption, access controls, and compliance-aligned configurations, while integrating with existing security tools to interpret logs and suggest containment strategies. By automating investigations and supporting secure-by-design practices, Copilot empowers organizations to proactively reduce breach risks and maintain a strong security posture.
Secure Coding and Developer Productivity

While Security Copilot supports secure coding by identifying vulnerabilities like SQL injection, Cross-Site Scripting (XSS), and buffer overflows, it is not a direct replacement for traditional code-scanning tools; instead, it complements them by leveraging telemetry from integrated Microsoft services and applying AI-driven insights to prioritize risks and guide remediation. Copilot enhances developer productivity by interpreting signals, offering tailored recommendations, and embedding security practices throughout the software lifecycle.

Understanding Security Protocols and Mechanisms

Azure's security stands on robust protocols and mechanisms, but understanding them shouldn't require a cryptography degree. Security Copilot demystifies encryption, authentication, and secure communications, making complex concepts accessible and actionable. With Security Copilot as your guide, teams can confidently configure Azure resources and respond to threats with informed, best-practice decisions.

Compliance and Regulatory Alignment

Regulatory requirements such as GDPR, HIPAA, and PCI-DSS don't have to slow you down. Security Copilot streamlines Azure compliance with ready-to-use templates, clear guidelines, and robust documentation support. From maintaining audit logs to generating compliance reports, Security Copilot keeps every action tracked and organized, reducing non-compliance risk and making audits a breeze.

Incident Response Planning

No security strategy is complete without a solid incident response plan. Security Copilot equips Azure teams with detailed protocols for identifying, containing, and mitigating threats. It enhances Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solutions through ready-made playbooks tailored to diverse scenarios. With built-in incident simulations, Copilot enables teams to rehearse and refine their responses, minimizing breach impact and accelerating recovery.

Security Best Practices for Azure

Staying ahead of threats means never standing still. Security Copilot builds on Azure's proven security features, like multi-factor authentication, regular updates, and least-privilege access, by automating their implementation, monitoring usage patterns, and surfacing actionable insights. It connects with tools like Microsoft Defender and Entra ID to interpret signals, recommend improvements, and guide teams in real time. With Copilot, your defenses don't just follow best practices; they evolve dynamically to meet emerging threats, keeping your team sharp and your environment secure.

Integrating Copilot into Your Azure Security Strategy

Security Copilot isn't just a technical tool; it's your strategic partner for Azure security. By weaving Copilot into your workflows, you unlock advanced security enhancements, optimized code, and robust privacy protection. Its holistic approach ensures security and compliance are seamlessly integrated into every corner of your Azure environment.
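To ground these topics in practice, here are a few illustrative prompts of the kind referenced above. The wording is example phrasing of my own, not an official prompt library:

    "Summarize the risky sign-ins detected in my tenant over the last 24 hours and suggest next steps."
    "Which of my Azure storage accounts allow public access, and how do I remediate them?"
    "Draft an incident response summary for this alert, including containment and recovery steps."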
Conclusion

Security Copilot is changing the game for Azure security and compliance. By blending secure coding, advanced security expertise, regulatory support, incident response playbooks, and best practices, Copilot empowers technical teams to build resilient, compliant cloud environments. As threats evolve, Copilot keeps your data protected and your organization ahead of the curve.

Ready to take your Azure security and compliance to the next level? Start leveraging Security Copilot today to empower your team, streamline operations, and stay ahead of evolving threats. Dive deeper into best practices, hands-on tutorials, and expert guidance to maximize your security posture and unlock the full potential of Copilot in your organization. Explore, learn, and secure your cloud: your journey starts now!

Further Reading & Resources

- Microsoft Security Copilot documentation
- Get started with Microsoft Security Copilot
- Microsoft Copilot in Azure Overview
- Security best practices and patterns - Microsoft Azure
- Azure compliance documentation
- Copilot Learning Hub
- Microsoft Security Copilot Blog

Author: Microsoft Principal Technical Trainer, https://www.linkedin.com/in/eliasestevao/

#MicrosoftLearn #SkilledByMTT
Trusted Signing Public Preview Update

Nearly a year ago we announced the public preview of Trusted Signing, with availability for organizations with three or more years of verifiable history to onboard to the service and get a fully managed code-signing experience that simplifies the effort for Windows app developers. Over the past year, we've announced new features, including preview support for individual developers, and we highlighted how the service contributes to the Windows security story at Microsoft BUILD 2024 in the "Unleash Windows App Security & Reputation with Trusted Signing" session.

During the public preview, we have obtained valuable insights into the service's features from our customers, as well as into the experience for developers and for Windows users. As we incorporate this feedback and learning into our General Availability (GA) release, we are limiting new customer subscriptions as part of the public preview. This approach will allow us to focus on refining the service based on the feedback and data collected during the preview phase.

The limit on new customer subscriptions for Trusted Signing will take effect Wednesday, April 2, 2025, and make the service available only to US- and Canada-based organizations with three or more years of verifiable history. Onboarding for individual developers and all other organizations will not be directly available for the remainder of the preview, and we look forward to expanding service availability as we approach GA.

Note that this announcement does not impact any existing subscribers of Trusted Signing, and the service will continue to be available to these subscribers as it has been throughout the public preview. For additional information about Trusted Signing, please refer to Trusted Signing documentation | Microsoft Learn and Trusted Signing FAQ | Microsoft Learn.

Codeless Connector Framework (CCF) Template Help
As the title suggests, I'm trying to finalize the template for a Sentinel data connector that utilizes the CCF. Unfortunately, I'm getting hung up on some parameter-related issues with the polling config.

The API endpoint I need to call uses a date range to determine the events to return and then pages within that result set. The issue is around the requirements for that date range and how CCF is processing my config. The API expects an HTTP GET verb, and the query string should contain two instances of a parameter called EventDates, among other params. For example, a valid query string may look something like:

    ../path/to/api/myEndpoint?EventDates=2025-08-25T15%3A46%3A36.091Z&EventDates=2025-08-25T16%3A46%3A36.091Z&PageSize=200&PageNumber=1

I've tried a few approaches in the polling config to accomplish this, but none have worked. The current config is as follows; it has a bunch of extra stuff and names that aren't recognized by my API endpoint but are there simply to demonstrate different things:

    "queryParameters": {
        "EventDates.Array": [
            "{_QueryWindowStartTime}",
            "{_QueryWindowEndTime}"
        ],
        "EventDates.Start": "{_QueryWindowStartTime}",
        "EventDates.End": "{_QueryWindowEndTime}",
        "EventDates.Same": "{_QueryWindowStartTime}",
        "EventDates.Same": "{_QueryWindowEndTime}",
        "Pagination.PageSize": 200
    }

This yields the following URL / query string:

    ../path/to/api/myEndpoint?EventDates.Array=%7B_QueryWindowStartTime%7D&EventDates.Array=%7B_QueryWindowEndTime%7D&EventDates.Start=2025-08-25T15%3A46%3A36.091Z&EventDates.End=2025-08-25T16%3A46%3A36.091Z&EventDates.Same=2025-08-25T16%3A46%3A36.091Z&Pagination.PageSize=200

There are a few things to note here:

1. The query param configured as an array (EventDates.Array) does indeed show up twice in the query string, and with distinct values. The issue is, of course, that CCF doesn't seem to do the variable substitution for values nested in an array the way it does for standard string attributes / values.
2. The query params with distinct names (EventDates.Start and .End) both show up AND both have the actual timestamps substituted properly. Unfortunately, this doesn't match the API's expectations, since the names differ.
3. The query params repeated with the same name (EventDates.Same) only show up once, and CCF seems to use the value that comes last in the config (so the last one overwrites the rest). Again, this doesn't meet the requirements of the API, since we need both.

I also tried a few other things:

- Sticking the query params and placeholders directly in the request.apiEndpoint polling config attribute. No surprise, it doesn't do the variable substitution there.
- Utilizing queryParametersTemplate instead of queryParameters. https://learn.microsoft.com/en-us/azure/sentinel/data-connector-connection-rules-reference indicates this is a string parameter that expects a JSON string. I tried this with various approaches to the structure of the JSON. In ALL instances, the values here seemed to be completely ignored. All other examples from the Azure-Sentinel repository utilize the POST verb. Perhaps that attribute isn't even interpreted on a GET request?
- And, because some AI agents suggested it and... sure, why not??? ... I tried queryParametersTemplate as an actual query-string template, i.e. "EventDates={_QueryWindowStartTime}&EventDates={_QueryWindowEndTime}". Just as with previous attempts to use this attribute, it was completely ignored.

I'm willing to try anything at this point, so if you have suggestions, I'll give it a shot!
Thanks for any input you may have!

Planning your move to the Microsoft Defender portal for all Microsoft Sentinel customers
In November 2023, Microsoft announced our strategy to unify security operations by bringing the best of XDR and SIEM together. Our first step was bringing Microsoft Sentinel into the Microsoft Defender portal, giving teams a single, comprehensive view of incidents, reducing queue management, enriching threat intel, streamlining response, and enabling SOC teams to take advantage of generative AI in their day-to-day workflow. Since then, considerable progress has been made, with thousands of customers using this new unified experience; to enhance the value customers gain when using Sentinel in the Defender portal, multi-tenancy and multi-workspace support was added to help customers with more sophisticated deployments.

Our mission is to unify security operations by bringing all your data, workflows, and people together to unlock new capabilities and drive better security outcomes. As a strong example of this, last year we added extended posture management, delivering powerful posture insights to the SOC team. This integration helps build a closed-loop feedback system between your pre- and post-breach efforts. Exposure Management is just one example. By bringing everything together, we can take full advantage of AI and automation to shift from a reactive to a predictive SOC that anticipates threats and proactively takes action to defend against them. Beyond Exposure Management, Microsoft has been constantly innovating in the Defender experience, adding not just SIEM but also Security Copilot. The Sentinel experience within the Defender portal is the focus of our innovation energy and where we will continue to add advanced Sentinel capabilities going forward.

Onboarding to the new unified experience is easy and doesn't require a typical migration: just a few clicks and permissions. Customers can continue to use Sentinel in the Azure portal while it is available, even after choosing to transition.

Today, we're announcing that we are moving to the next phase of the transition, with a target to retire the Azure portal for Microsoft Sentinel by July 1, 2026. Customers not yet using the Defender portal should plan their transition accordingly.

"Really amazing to see that coming, because cross querying with tables in one UI is really cool! Amazing, big step forward to the unified [Defender] portal." - Glueckkanja AG

"The biggest benefit of a unified security operations solution (Microsoft Sentinel + Microsoft Defender XDR) has been the ability to combine data in Defender XDR with logs from third-party security tools. Another advantage has been eliminating the need to switch between the Defender XDR and Microsoft Sentinel portals, now having a single pane of glass, which the team has been wanting for some years." - Robel Kidane, Group Information Security Manager, Renishaw PLC

Delivering the SOC of the future

Unifying threat protection, exposure management, and security analytics capabilities in one pane of glass not only streamlines the user experience, but also enables Sentinel customers to realize security outcomes more efficiently:

- Analyst efficiency: A single portal reduces context switching, simplifies workflows, reduces training overhead, and improves team agility.
- Integrated insights: SOC-focused case management, threat intelligence, incident correlation, advanced hunting, exposure management, and a prioritized incident queue enriched with business and sensitivity context, enabling faster, more informed detection and response across all products.
- SOC optimization: Security controls that can be adjusted as threats and business priorities change, to control costs and provide better coverage and utilization of data, thus maximizing ROI from the SIEM.
- Accelerated response: AI-driven detection and response, which reduces mean time to respond (MTTR) by 30%, increases security response efficiency by 60%, and enables embedded generative AI and agentic workflows.
What's next: Preparing for the retirement of the Sentinel experience in the Azure portal

Microsoft is committed to supporting every single customer in making the transition over the next 12 months. Beginning July 1, 2026, Sentinel users will be automatically redirected to the Defender portal. After helping thousands of customers smoothly make the transition, we recommend that security teams begin planning their migration and change management now to ensure continuity and avoid disruption. While the technical process is very straightforward, we have found that early preparation allows time for workflow validation, training, and process alignment to take full advantage of the new capabilities and experience.

Tips for a Successful Migration to Microsoft Defender

1. Leverage Microsoft's help: Take advantage of Microsoft documentation, instructional videos, guidance, and in-product support. A good starting point is the documentation on Microsoft Learn.

2. Plan early: Engage stakeholders early, including SOC and IT security leads, MSSPs, and compliance teams, to align on timing, training, and organizational needs. Make sure you have an actionable timeline and agreement in the organization around when you can prioritize this transition, to ensure access to the full potential of the new experience.

3. Prepare your environment: Plan and design your environment thoroughly. This includes understanding the prerequisites for onboarding Microsoft Sentinel workspaces, reviewing and deciding on access controls, and planning the architecture of your tenant and workspace. Proper planning will ensure a smooth transition and help avoid any disruption to your security operations.

4. Leverage advanced threat detection: The Defender portal offers enhanced threat detection capabilities, with advanced AI and machine learning for Microsoft Sentinel. Leverage these features for faster and more accurate threat detection and response. This will help you identify and address critical threats promptly, improving your overall security posture.

5. Utilize unified hunting and incident management: Take advantage of the enhanced hunting, incident, and investigation capabilities in Microsoft Defender, which provide a comprehensive view for more efficient threat detection and response. By consolidating all security incidents, alerts, and investigations into a single unified interface, you can streamline your operations and improve efficiency.

6. Optimize cost and data management: The Defender portal offers cost and data optimization features, such as SOC optimization and summary rules. Utilize these features to optimize your data management, reduce costs, and increase coverage and SIEM ROI. This will help you manage your security operations more effectively and efficiently.

Unleash the full potential of your security team

The unified SecOps experience available in the Defender portal is designed to support the evolving needs of modern SOCs. The Defender portal is not just a new home for Microsoft Sentinel: it's a foundation for integrated, AI-driven security operations.
We're committed to helping you make this transition smoothly and confidently. If you haven't already joined the thousands of security organizations that have done so, now is the time to begin.

Resources

- AI-Powered Security Operations Platform | Microsoft Security
- Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn
- Shifting your Microsoft Sentinel Environment to the Defender Portal | Microsoft Learn
- Microsoft Sentinel is now in Defender | YouTube

Entra App Gallery required for Excel Add-in
Hi,

We have an Excel add-in published to Microsoft AppSource: https://appsource.microsoft.com/en-us/product/office/WA200009029?tab=Overview

The Excel add-in uses Entra ID to obtain an OIDC token to securely and seamlessly access Microsoft 365 SharePoint on behalf of the user. To achieve this, the Entra ID tenant needs the TR4E application registered as an Enterprise Application / App Registration.

My question is whether I need to submit the TR4E application separately to the Entra App Gallery, so it can be installed by the Entra ID admin, or will the registration in Entra ID happen automatically when a new user first tries using TR4E?

I note that Microsoft has suspended new application submissions for the Entra App Gallery, which means our customers would need to manually create the Entra ID Enterprise Application (which is not a great experience).

Cheers,
Andrew

Microsoft Sentinel's New Data Lake: Cut Costs & Boost Threat Detection
Microsoft Sentinel is leveling up! Already a trusted cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solution, it empowers security teams to detect, investigate, and respond to threats with speed and precision. Now, with the introduction of its new data lake architecture, Sentinel is transforming how security data is stored, accessed, and analyzed, bringing unmatched flexibility and scale to threat investigation.

Unlike Microsoft Fabric OneLake, which supports analytics across the organization, Sentinel's data lake is purpose-built for security. It centralizes raw structured, semi-structured, and unstructured data in its original format, enabling advanced analytics without rigid schemas. This article is written by someone who's spent years helping security teams navigate Microsoft's evolving ecosystem, translating complex capabilities into practical strategies. What follows is a hands-on look at the key features, benefits, and challenges of Sentinel's data lake, designed to help you make the most of this powerful new architecture.

Current Sentinel Features

To tackle the challenges security teams face today, like explosive data growth, integration of varied sources, and tight compliance requirements, organizations need scalable, efficient architectures. Legacy SIEMs often become costly and slow when analyzing multi-year data or correlating diverse events. Security data lakes address these issues by enabling seamless ingestion of logs from any source, schema-on-read flexibility, and parallelized queries over massive datasets.

Schema-on-read allows SOC analysts to define how data is interpreted at the time of analysis, rather than when it is stored. This means analysts can flexibly adapt queries and threat detection logic to evolving threats without reformatting historical data, making investigations more agile and responsive to change. It empowers security operations to conduct deep historical analysis, automate enrichment, and apply advanced analytics such as machine learning, while retaining strict control over data access and residency. Ultimately, decoupling storage and compute allows teams to boost detection and response speed, maintain compliance, and adapt their Security Operations Center (SOC) to future security demands.

As organizations manage increasing data volumes on limited budgets, many are moving from legacy SIEMs to advanced cloud-native options. Microsoft Sentinel's data lake separates storage from compute, offering scalable, cost-effective analytics and compliance. For instance, storing 500 TB of logs in the Sentinel data lake can cut costs by 60-80% compared to Log Analytics, due to lower storage costs and flexible retention. Integration with modern tools and open formats enables efficient threat response and regulatory compliance.
See: Microsoft Sentinel data lake pricing (preview)

Sentinel Data Lake Use Cases

- Log retention: Long-term retention of security logs for compliance and forensic investigations.
- Hunting: Advanced threat hunting using historical data.
- Interoperability: Integration with Microsoft Fabric and other analytics platforms.
- Cost: Efficient storage pricing for high-volume data sources.

How Microsoft Sentinel Data Lake Helps

Microsoft Sentinel's data lake introduces a powerful paradigm shift for security operations. By architecting the separation of storage and compute, it enables organizations to achieve petabyte-scale data retention without the traditional overhead and cost penalties of legacy SIEM solutions. Built atop highly scalable, cloud-native infrastructure, the Sentinel data lake empowers SOCs to ingest telemetry from virtually unlimited sources, ranging from on-premises firewalls, proxies, and endpoint logs to SaaS, IaaS, and PaaS environments, while leveraging schema-on-read, a method that allows analysts to define how data is interpreted at query time rather than when it is stored, offering greater flexibility in analytics. For example, a security analyst can adapt the way historical data is examined as new threats emerge, without needing to reformat or restructure the data stored in the data lake.

[Image from Microsoft Learn - Retention and data tiering]

Storing raw security logs in an open format like Parquet (a columnar storage file format optimized for efficient data compression and retrieval, commonly used in big-data processing frameworks like Apache Spark and Hadoop) enables easy integration with analytics tools and Microsoft Fabric, letting analysts efficiently query historical data using KQL, SQL, or Spark. This approach eliminates the need for complex ETL and archived-data rehydration, making incident response faster; for instance, a SOC analyst can quickly search years of firewall logs during threat detection.

[Image from Microsoft Learn - Flexible querying with Kusto Query Language]

Granular data governance and access controls allow organizations to manage sensitive information and meet legal requirements. Storing raw security logs in open formats enables fast investigation of incidents spanning long-term data, while automated lifecycle management reduces costs and ensures compliance. The data lake integrates with Microsoft platforms and other tools for unified analytics and security, and machine learning helps detect unusual login activity across years of data, overcoming previous storage limitations.

[Image from Microsoft Learn - Powerful analytics using Jupyter notebooks]
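As an illustration of what a notebook session against lake data might look like, here is a hedged PySpark sketch. The storage path, table, and column names are assumptions for demonstration, not the service's actual schema:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # A Spark session is typically provided inside a Sentinel data lake notebook;
    # building one explicitly here keeps the sketch self-contained.
    spark = SparkSession.builder.appName("signin-hunt").getOrCreate()

    # Hypothetical path to Parquet-formatted sign-in logs in the lake.
    logs = spark.read.parquet("abfss://security@lake.dfs.core.windows.net/SigninLogs/")

    # Schema-on-read: interpret the columns at query time, e.g. to surface
    # accounts with an unusually high number of failed sign-ins per location.
    suspicious = (
        logs.where(F.col("ResultType") != "0")
            .groupBy("UserPrincipalName", "Location")
            .agg(F.count("*").alias("failedSignins"))
            .where(F.col("failedSignins") > 25)
            .orderBy(F.desc("failedSignins"))
    )
    suspicious.show(20, truncate=False)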
Pros and Cons

The following table highlights the advantages and potential limitations of Microsoft Sentinel's data lake. It follows the same pay-as-you-go pricing model currently available with Sentinel.

| Pros | Cons | License Needed |
| --- | --- | --- |
| Scalable, cost-effective long-term retention of security data | Requires adaptation to a new architecture | Pay-as-you-go model |
| Seamless integration with Microsoft Fabric and open data formats | Initial setup and integration may involve a learning curve | Pay-as-you-go model |
| Efficient processing of petabyte-scale datasets | Transitioning existing workflows may require planning | Pay-as-you-go model |
| Advanced analytics, threat hunting, and AI/ML across historical data | Some features may depend on integration with other services | Pay-as-you-go model |
| Supports compliance use cases with robust data governance and audit trails | Complexity in new data governance features | Pay-as-you-go model |

Microsoft Sentinel's data lake advances cloud-native security by overcoming traditional SIEM limitations, allowing organizations to better retain, analyze, and respond to security data. As cyber threats grow, the Sentinel data lake offers flexible, cost-efficient storage for long-term retention, supporting detection, compliance, and audits without significant expense or complexity.

Quick Guide: Deploy Microsoft Sentinel Data Lake

1. Assess needs: Identify your security data volume, retention, and compliance requirements - Sentinel Data Lake Overview.
2. Prepare your environment: Ensure Azure permissions and workspace readiness - Onboarding Guide.
3. Enable the data lake: Use the Azure CLI or the Defender portal to activate it - Setup Instructions.
4. Ingest and import data: Connect sources and migrate historical logs - Microsoft Sentinel Data Connectors.
5. Integrate analytics: Use KQL, notebooks, and Microsoft Fabric for scalable analysis - Fabric Overview.
6. Train and optimize: Educate your team and monitor performance - Best Practices.

About the Author

Hi! Jacques "Jack" here, a Microsoft Technical Trainer at Microsoft. I wanted to share this because it's something I'm often asked about during my security trainings, and it improves the already impressive Microsoft Sentinel feature stack, helping the Defender community secure their environments in this ever-growing hacked world. I've been working with Microsoft Sentinel since September 2019, and I have been teaching learners about this SIEM since March 2020. I also have experience using Security Copilot and Security AI Agents, which have been effective in improving my incident response and compromise recovery times.