Trusted Signing Public Preview Update
Nearly a year ago we announced the Public Preview of Trusted Signing, which lets organizations with three or more years of verifiable history onboard to a fully managed code signing service that simplifies signing for Windows app developers. Over the past year we announced new features, including preview support for individual developers, and we highlighted how the service contributes to the Windows security story at Microsoft BUILD 2024 in the Unleash Windows App Security & Reputation with Trusted Signing session.

During the Public Preview we gained valuable insights from our customers about the service's features, the developer experience, and the experience for Windows users. As we incorporate this feedback and learning into our General Availability (GA) release, we are limiting new customer subscriptions for the remainder of the public preview. This approach allows us to focus on refining the service based on the feedback and data collected during the preview phase.

The limit on new customer subscriptions for Trusted Signing takes effect Wednesday, April 2, 2025, after which the service will only be available to US and Canada-based organizations with three or more years of verifiable history. Onboarding for individual developers and all other organizations will not be directly available for the remainder of the preview, and we look forward to expanding service availability as we approach GA.

Note that this announcement does not impact existing Trusted Signing subscribers; the service will continue to be available to them as it has been throughout the Public Preview. For additional information about Trusted Signing, refer to Trusted Signing documentation | Microsoft Learn and Trusted Signing FAQ | Microsoft Learn.
Phishing Triage Agent in Defender XDR: Say Goodbye to False Positives and Analyst Fatigue

Phishing remains one of the most common and dangerous attack vectors in cybersecurity. With the rise of user-reported suspicious emails, Security Operations Center (SOC) teams are overwhelmed by the volume and complexity of triage. Enter the Phishing Triage Agent, a new capability within Microsoft Defender XDR and Security Copilot that uses AI to automate phishing classification, reduce false positives, and accelerate incident response.

Image from Microsoft Learn - Microsoft Security Copilot Agents

What's the Issue?
SOC analysts handle a high volume of suspicious email reports, dedicating substantial time to reviewing each submission even though many prove to be non-threatening. More than 90% of cyberattacks originate from phishing, making it a primary method for breaching organizational defenses. The result is a stream of alerts and potential incidents that must be triaged, prioritized, and investigated. Traditional rule-based systems, once effective for detecting known threats, struggle as attackers adapt their tactics and techniques. The continually changing threat landscape requires defenders to address not only advanced phishing attempts but also alert fatigue and the possibility of missing significant incidents. In this context, scalable and efficient solutions are essential so that defenders can focus on investigating and mitigating real threats rather than chasing false positives.

Image from Microsoft Learn - Type view for the Mailflow status report

Why It's Urgent
Phishing is a leading entry point for attackers, and such attacks are growing more frequent and advanced, leaving SOC teams struggling with incident management. The Phishing Triage Agent uses large language models (LLMs) and state-of-the-art threat intelligence to quickly analyze and categorize reported emails, helping analysts focus on real threats. It integrates easily with current workflows and offers adaptive, AI-driven insights for rapid threat detection and improved situational awareness. Through ongoing learning, it stays aligned with evolving attacker tactics and helps strengthen email security.

Image from Microsoft Learn - Defender for Office 365 Phishing block

Use Cases
- Automated Triage: Classify phishing emails without manual rules.
- False Positive Filtering: Reduce noise and analyst fatigue.
- Explainable AI: Provide clear reasoning behind verdicts.
- Threat Prioritization: Focus on high-risk incidents with enriched context.
- Compliance Auditing: Maintain logs and transparency for regulatory needs.

Image from Microsoft Learn – Incident Queue with Phishing Triage Agent

How It Works
The agent activates when a user reports a suspicious email and does the following:
- Analyzes the message using LLMs.
- Classifies it as normal email or phishing.
- Enriches the incident with threat intelligence.
- Provides a verdict with a natural-language explanation.
- Escalates or resolves based on severity and confidence.

Image created with AI

It integrates with Security Copilot, enabling AI-assisted investigations and automation across Microsoft Defender XDR.

Image from Microsoft Learn - Transparency and explainability in phishing triage
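To complement the in-portal views, SOC teams can also pull the underlying email signals programmatically and track how much phishing volume the agent is being asked to triage. The sketch below is a minimal, illustrative example, not part of the Phishing Triage Agent itself: it calls the Microsoft Graph advanced hunting endpoint (runHuntingQuery), assumes an Entra app registration with the ThreatHunting.Read.All permission and a pre-acquired token, and uses an example KQL query that you would adapt to your own tenant.

```python
import requests

GRAPH_HUNT_URL = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"
ACCESS_TOKEN = "<token acquired via MSAL for an app with ThreatHunting.Read.All>"

# Illustrative KQL: daily count of email that Defender for Office 365 flagged as phishing.
QUERY = """
EmailEvents
| where Timestamp > ago(7d)
| where ThreatTypes has "Phish"
| summarize PhishCount = count() by bin(Timestamp, 1d)
| order by Timestamp asc
"""

response = requests.post(
    GRAPH_HUNT_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"Query": QUERY},
    timeout=30,
)
response.raise_for_status()

# Each result row is a dict keyed by the columns the query produced.
for row in response.json().get("results", []):
    print(row["Timestamp"], row["PhishCount"])
```

Counts like these pair well with the Mailflow status report when judging classification accuracy and analyst workload over time.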
Pros and Cons
This section outlines the main advantages, limitations, and licensing requirements of the Phishing Triage Agent solution.

Pros | Cons | License Needed
Scales phishing triage across the enterprise | Requires SCU provisioning and Defender licensing | Microsoft Defender for Office 365 Plan 2
Reduces false positives and analyst fatigue | Currently in preview; may evolve | Security Copilot subscription
Provides explainable decisions | Requires integration with Defender XDR | SCUs and plugin configuration

The Phishing Triage Agent is a game-changer for SOC teams. By combining AI-powered analysis with human oversight, it accelerates detection, sharpens response, and strengthens organizational security posture. As phishing tactics evolve, this agent helps your defenses stay ahead.

Getting Started with Phishing Triage Agent
The Phishing Triage Agent in Microsoft Defender XDR and Security Copilot helps SOC teams automate and accelerate phishing email analysis. Here's how to get started:

1. Check Prerequisites
Ensure your organization has the necessary licenses:
- Microsoft Defender for Office 365 Plan 2
- Security Copilot subscription
- Security Compute Units (SCUs) provisioned
- Defender XDR integration enabled
Microsoft Defender for Office 365 service description
License options for Microsoft 365 Copilot

2. Enable the Phishing Triage Agent
In the Microsoft Defender portal, go to Settings > Email & Collaboration > Policies & Rules and enable the Phishing Triage Agent under Automated Investigation & Response (AIR).
Automated investigation and response examples - Microsoft Defender for Office 365

3. Integrate with Security Copilot
In the Security Copilot interface:
- Add the Phishing Triage Agent as a plugin.
- Configure it to trigger when users report suspicious emails via Outlook or Defender for Office 365.
Use plugins in Microsoft Security Copilot

4. Test the Workflow
Simulate a phishing report by submitting a suspicious email. The agent will:
- Use LLMs to analyze the message
- Classify it as phishing or safe
- Enrich the incident with threat intelligence
- Provide a natural-language explanation
- Escalate or resolve based on severity
Security Copilot Phishing Triage Agent in Microsoft Defender

5. Review and Tune
Use the Mailflow status report and the Incident Queue to monitor:
- Classification accuracy
- False positives
- Analyst workload reduction
Mail flow insights in the new EAC in Exchange Online
Prioritize incidents in the Microsoft Defender portal

6. Train Your SOC Team
- Share explainable AI outputs with analysts to build trust.
- Use the agent's verdicts to guide manual investigations and reinforce learning.
Security Copilot Phishing Triage Agent in Microsoft Defender (Preview)

7. Iterate and Improve
- Review phishing trends.
- Update triage policies.
- Leverage Security Copilot's adaptive learning to stay ahead of evolving threats.
What is Microsoft Security Copilot?

About the Author: Greetings! Jacques “Jack” here. I am excited to share this remarkable technology with our Defender community, as it has the potential to greatly enhance organizational protection. My role as a Microsoft Technical Trainer has shown me how valuable solutions like Security Copilot and Security AI Agents can be in strengthening defenses and accelerating response to threats. By sharing these advancements, I hope to empower you with the tools needed to safeguard your environment in an ever-evolving security landscape. #MicrosoftLearn #SkilledByMTT
No More Guesswork—Copilot Makes Azure Security Crystal Clear

Elevating Azure Security and Compliance
In today's rapidly evolving digital landscape, security and compliance are more critical than ever. As organizations migrate workloads to Azure, the need for robust security frameworks and proactive compliance strategies grows. Security Copilot, integrated with Azure, is transforming how technical teams approach these challenges, empowering users to build secure, compliant environments with greater efficiency and confidence.

As a security expert, I'd like to provide clear guidance on how to use Security Copilot effectively in this ever-evolving landscape. Security Copilot is a premium offering: it includes advanced capabilities that go beyond standard Azure security tools, and these features may require specific licensing or subscription tiers. It provides deeper insights, enhanced automation, and tailored guidance for complex security scenarios. Below, I'll highlight a range of security topics, with sample Copilot prompts you can use to help create a more secure and compliant environment.

Getting Started with Microsoft Security Copilot
Before leveraging the advanced capabilities of Security Copilot, it's important to understand the foundational requirements and setup steps:

Azure Subscription Requirement
Security Copilot is not automatically available in all Azure subscriptions. To use it, your organization must have an active Azure subscription. This is necessary to provision Security Compute Units (SCUs), the core resources that power Copilot workloads.

Provisioning Security Compute Units (SCUs)
SCUs are billed hourly and can be scaled based on workload needs. At least one SCU must be provisioned to activate Security Copilot. You can manage SCUs via the Azure portal or the Security Copilot portal, adjusting capacity as needed for performance and cost optimization.

Role-Based Access Control
To set up and manage Security Copilot:
- You need to be an Azure Owner or Contributor to provision SCUs.
- Users must be assigned appropriate Microsoft Entra roles (e.g., Security Administrator) to access and interact with Copilot features.

Embedded Experience
Security Copilot can be used as a standalone tool or embedded within other Microsoft services such as Defender for Endpoint, Intune, and Purview, offering a unified security management experience.

Data Privacy and Security: Foundational Best Practices
Why settle for generic security advice when Security Copilot delivers prioritized, actionable guidance backed by Microsoft's best practices? Copilot doesn't just recommend security measures; it actively helps you implement them, leveraging capabilities like encryption and granular access controls to safeguard every layer of your Azure environment. While Security Copilot doesn't directly block threats like a firewall or Web Application Firewall (WAF), it enhances data integrity and confidentiality by analyzing security signals across Azure, identifying vulnerabilities, and guiding teams with prioritized, actionable recommendations. It helps implement encryption, access controls, and compliance-aligned configurations, while integrating with existing security tools to interpret logs and suggest containment strategies. By automating investigations and supporting secure-by-design practices, Copilot empowers organizations to proactively reduce breach risks and maintain a strong security posture.
Secure Coding and Developer Productivity
Security Copilot supports secure coding by identifying vulnerabilities such as SQL injection, Cross-Site Scripting (XSS), and buffer overflows, but it is not a direct replacement for traditional code scanning tools. Instead, it complements them by leveraging telemetry from integrated Microsoft services and applying AI-driven insights to prioritize risks and guide remediation. Copilot enhances developer productivity by interpreting signals, offering tailored recommendations, and embedding security practices throughout the software lifecycle.

Understanding Security Protocols and Mechanisms
Azure's security stands on robust protocols and mechanisms, but understanding them shouldn't require a cryptography degree. Security Copilot demystifies encryption, authentication, and secure communications, making complex concepts accessible and actionable. With Security Copilot as your guide, teams can confidently configure Azure resources and respond to threats with informed, best-practice decisions.

Compliance and Regulatory Alignment
Regulatory requirements such as GDPR, HIPAA, and PCI-DSS don't have to slow you down. Security Copilot streamlines Azure compliance with ready-to-use templates, clear guidelines, and robust documentation support. From maintaining audit logs to generating compliance reports, Security Copilot keeps every action tracked and organized, reducing non-compliance risk and simplifying audits.

Incident Response Planning
No security strategy is complete without a solid incident response plan. Security Copilot equips Azure teams with detailed protocols for identifying, containing, and mitigating threats. It enhances Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solutions through ready-made playbooks tailored to diverse scenarios. With built-in incident simulations, Copilot enables teams to rehearse and refine their responses, minimizing breach impact and accelerating recovery.

Security Best Practices for Azure
Staying ahead of threats means never standing still. Security Copilot builds on Azure's proven security features, such as multi-factor authentication, regular updates, and least-privilege access, by automating their implementation, monitoring usage patterns, and surfacing actionable insights. It connects with tools like Microsoft Defender and Entra ID to interpret signals, recommend improvements, and guide teams in real time. With Copilot, your defenses don't just follow best practices; they evolve to meet emerging threats, keeping your team sharp and your environment secure.

Integrating Copilot into Your Azure Security Strategy
Security Copilot isn't just a technical tool; it's a strategic partner for Azure security. By weaving Copilot into your workflows, you unlock advanced security enhancements, optimized code, and robust privacy protection. Its holistic approach ensures security and compliance are integrated into every corner of your Azure environment.

Conclusion
Security Copilot is changing the game for Azure security and compliance. By blending secure coding, advanced security expertise, regulatory support, incident response playbooks, and best practices, Copilot empowers technical teams to build resilient, compliant cloud environments. As threats evolve, Copilot keeps your data protected and your organization ahead of the curve. Ready to take your Azure security and compliance to the next level?
Start leveraging Security Copilot today to empower your team, streamline operations, and stay ahead of evolving threats. Dive deeper into best practices, hands-on tutorials, and expert guidance to maximize your security posture and unlock the full potential of Copilot in your organization. Explore, learn, and secure your cloud: your journey starts now!

Further Reading & Resources
- Microsoft Security Copilot documentation
- Get started with Microsoft Security Copilot
- Microsoft Copilot in Azure Overview
- Security best practices and patterns - Microsoft Azure
- Azure compliance documentation
- Copilot Learning Hub
- Microsoft Security Copilot Blog

Author: Microsoft Principal Technical Trainer, https://www.linkedin.com/in/eliasestevao/ #MicrosoftLearn #SkilledByMTT
Planning your move to Microsoft Defender portal for all Microsoft Sentinel customers

In November 2023, Microsoft announced our strategy to unify security operations by bringing the best of XDR and SIEM together. Our first step was bringing Microsoft Sentinel into the Microsoft Defender portal, giving teams a single, comprehensive view of incidents, reducing queue management, enriching threat intelligence, streamlining response, and enabling SOC teams to take advantage of generative AI in their day-to-day workflow. Since then, considerable progress has been made, with thousands of customers using this new unified experience. To enhance the value customers gain when using Sentinel in the Defender portal, multi-tenancy and multi-workspace support was added to help customers with more sophisticated deployments.

Our mission is to unify security operations by bringing all your data, workflows, and people together to unlock new capabilities and drive better security outcomes. As a strong example of this, last year we added extended posture management, delivering powerful posture insights to the SOC team. This integration helps build a closed-loop feedback system between your pre- and post-breach efforts. Exposure Management is just one example. By bringing everything together, we can take full advantage of AI and automation to shift from a reactive to a predictive SOC that anticipates threats and proactively takes action to defend against them. Beyond Exposure Management, Microsoft has been constantly innovating in the Defender experience, adding not just SIEM but also Security Copilot. The Sentinel experience within the Defender portal is the focus of our innovation energy and where we will continue to add advanced Sentinel capabilities going forward.

Onboarding to the new unified experience is easy and doesn't require a typical migration; it takes just a few clicks and permissions. Customers can continue to use Sentinel in the Azure portal while it is available, even after choosing to transition. Today, we're announcing that we are moving to the next phase of the transition, with a target to retire the Azure portal for Microsoft Sentinel by July 1, 2026. Customers not yet using the Defender portal should plan their transition accordingly.

"Really amazing to see that coming, because cross querying with tables in one UI is really cool! Amazing, big step forward to the unified [Defender] portal." Glueckkanja AG

"The biggest benefit of a unified security operations solution (Microsoft Sentinel + Microsoft Defender XDR) has been the ability to combine data in Defender XDR with logs from third party security tools. Another advantage developed has been to eliminate the need to switch between Defender XDR and Microsoft Sentinel portals, now having a single pane of glass, which the team has been wanting for some years." Robel Kidane, Group Information Security Manager, Renishaw PLC

Delivering the SOC of the future
Unifying threat protection, exposure management, and security analytics capabilities in one pane of glass not only streamlines the user experience, but also enables Sentinel customers to realize security outcomes more efficiently:

- Analyst efficiency: A single portal reduces context switching, simplifies workflows, reduces training overhead, and improves team agility.
- Integrated insights: SOC-focused case management, threat intelligence, incident correlation, advanced hunting, exposure management, and a prioritized incident queue enriched with business and sensitivity context, enabling faster, more informed detection and response across all products.
- SOC optimization: Security controls that can be adjusted as threats and business priorities change, controlling costs and providing better coverage and utilization of data, thus maximizing ROI from the SIEM.
- Accelerated response: AI-driven detection and response that reduces mean time to respond (MTTR) by 30%, increases security response efficiency by 60%, and enables embedded generative AI and agentic workflows.

What's next: Preparing for the retirement of the Sentinel experience in the Azure portal
Microsoft is committed to supporting every single customer in making this transition over the next 12 months. Beginning July 1, 2026, Sentinel users will be automatically redirected to the Defender portal. After helping thousands of customers smoothly make the transition, we recommend that security teams begin planning their migration and change management now to ensure continuity and avoid disruption. While the technical process is very straightforward, we have found that early preparation allows time for workflow validation, training, and process alignment to take full advantage of the new capabilities and experience.

Tips for a Successful Migration to Microsoft Defender
1. Leverage Microsoft's help: Use Microsoft documentation, instructional videos, guidance, and in-product support to help you be successful. A good starting point is the documentation on Microsoft Learn.
2. Plan early: Engage stakeholders early, including SOC and IT security leads, MSSPs, and compliance teams, to align on timing, training, and organizational needs. Make sure you have an actionable timeline and agreement in the organization around when you can prioritize this transition to ensure access to the full potential of the new experience.
3. Prepare your environment: Plan and design your environment thoroughly. This includes understanding the prerequisites for onboarding Microsoft Sentinel workspaces, reviewing and deciding on access controls, and planning the architecture of your tenant and workspace. Proper planning will ensure a smooth transition and help avoid any disruptions to your security operations.
4. Leverage Advanced Threat Detection: The Defender portal offers enhanced threat detection capabilities with advanced AI and machine learning for Microsoft Sentinel. Make sure to leverage these features for faster and more accurate threat detection and response. This will help you identify and address critical threats promptly, improving your overall security posture.
5. Utilize Unified Hunting and Incident Management: Take advantage of the enhanced hunting, incident, and investigation capabilities in Microsoft Defender, which provide a comprehensive view for more efficient threat detection and response. By consolidating all security incidents, alerts, and investigations into a single unified interface, you can streamline your operations and improve efficiency.
6. Optimize Cost and Data Management: The Defender portal offers cost and data optimization features, such as SOC Optimization and Summary Rules. Make sure to utilize these features to optimize your data management, reduce costs, and increase coverage and SIEM ROI. This will help you manage your security operations more effectively and efficiently.

Unleash the full potential of your Security team
The unified SecOps experience available in the Defender portal is designed to support the evolving needs of modern SOCs. The Defender portal is not just a new home for Microsoft Sentinel; it's a foundation for integrated, AI-driven security operations.
We're committed to helping you make this transition smoothly and confidently. If you haven't already joined the thousands of security organizations that have done so, now is the time to begin.

Resources
- AI-Powered Security Operations Platform | Microsoft Security
- Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn
- Shifting your Microsoft Sentinel Environment to the Defender Portal | Microsoft Learn
- Microsoft Sentinel is now in Defender | YouTube
Microsoft Sentinel's New Data Lake: Cut Costs & Boost Threat Detection

Microsoft Sentinel is leveling up! Already a trusted cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solution, it empowers security teams to detect, investigate, and respond to threats with speed and precision. Now, with the introduction of its new data lake architecture, Sentinel is transforming how security data is stored, accessed, and analyzed, bringing unmatched flexibility and scale to threat investigation. Unlike Microsoft Fabric OneLake, which supports analytics across the organization, Sentinel's data lake is purpose-built for security. It centralizes raw structured, semi-structured, and unstructured data in its original format, enabling advanced analytics without rigid schemas.

This article is written by someone who has spent years helping security teams navigate Microsoft's evolving ecosystem, translating complex capabilities into practical strategies. What follows is a hands-on look at the key features, benefits, and challenges of Sentinel's data lake, designed to help you make the most of this powerful new architecture.

Current Sentinel Features
To tackle the challenges security teams face today, such as explosive data growth, integration of varied sources, and tight compliance requirements, organizations need scalable, efficient architectures. Legacy SIEMs often become costly and slow when analyzing multi-year data or correlating diverse events. Security data lakes address these issues by enabling seamless ingestion of logs from any source, schema-on-read flexibility, and parallelized queries over massive datasets. Schema-on-read allows SOC analysts to define how data is interpreted at the time of analysis rather than when it is stored, so analysts can adapt queries and threat detection logic to evolving threats without reformatting historical data, making investigations more agile and responsive to change. This empowers security operations to conduct deep historical analysis, automate enrichment, and apply advanced analytics such as machine learning, while retaining strict control over data access and residency. Ultimately, decoupling storage and compute allows teams to boost detection and response speed, maintain compliance, and adapt their Security Operations Center (SOC) to future security demands.

As organizations manage increasing data volumes on limited budgets, many are moving from legacy SIEMs to advanced cloud-native options. Microsoft Sentinel's data lake separates storage from compute, offering scalable and cost-effective analytics and compliance. For instance, storing 500 TB of logs in the Sentinel data lake can cut costs by 60–80% compared to Log Analytics, due to lower storage costs and flexible retention. Integration with modern tools and open formats enables efficient threat response and regulatory compliance.
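To make schema-on-read concrete, here is a small, illustrative sketch (the event fields and values are made up) showing how raw events kept in their original form can be parsed and shaped at analysis time with pandas, so the same stored data can answer new questions as detection logic changes.

```python
import json
import pandas as pd

# Raw events are kept exactly as ingested; no schema was enforced at write time.
raw_events = [
    '{"time": "2025-06-01T08:14:02Z", "user": "alice", "ip": "203.0.113.7", "result": "failure"}',
    '{"time": "2025-06-01T08:14:09Z", "user": "alice", "ip": "203.0.113.7", "result": "failure"}',
    '{"time": "2025-06-01T08:15:30Z", "user": "bob", "ip": "198.51.100.2", "result": "success"}',
]

# Schema-on-read: choose and type the columns of interest at analysis time.
df = pd.DataFrame(json.loads(e) for e in raw_events)
df["time"] = pd.to_datetime(df["time"])

# An example question that can change tomorrow without re-ingesting anything:
# how many failed sign-ins per user and source IP?
failed = (
    df[df["result"] == "failure"]
    .groupby(["user", "ip"])
    .size()
    .rename("failed_signins")
    .reset_index()
)
print(failed)
```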
Microsoft Sentinel data lake pricing (preview)

Sentinel Data Lake Use Cases
- Log Retention: Long-term retention of security logs for compliance and forensic investigations
- Hunting: Advanced threat hunting using historical data
- Interoperability: Integration with Microsoft Fabric and other analytics platforms
- Cost: Efficient storage prices for high-volume data sources

How Microsoft Sentinel Data Lake Helps
Microsoft Sentinel's data lake introduces a powerful paradigm shift for security operations by separating storage and compute, enabling organizations to achieve petabyte-scale data retention without the traditional overhead and cost penalties of legacy SIEM solutions. Built atop highly scalable, cloud-native infrastructure, the Sentinel data lake lets SOCs ingest telemetry from virtually unlimited sources, ranging from on-premises firewalls, proxies, and endpoint logs to SaaS, IaaS, and PaaS environments, while leveraging schema-on-read, a method that allows analysts to define how data is interpreted at query time rather than when it is stored, offering greater flexibility in analytics. For example, a security analyst can adapt the way historical data is examined as new threats emerge, without needing to reformat or restructure the data stored in the data lake.

From Microsoft Learn – Retention and data tiering

Storing raw security logs in open formats like Parquet (a columnar storage file format optimized for efficient data compression and retrieval, commonly used in big data processing frameworks like Apache Spark and Hadoop) enables easy integration with analytics tools and Microsoft Fabric, letting analysts efficiently query historical data using KQL, SQL, or Spark. This approach eliminates the need for complex ETL and archived-data rehydration, making incident response faster; for instance, a SOC analyst can quickly search years of firewall logs during threat detection.

From Microsoft Learn – Flexible querying with Kusto Query Language

Granular data governance and access controls allow organizations to manage sensitive information and meet legal requirements. Storing raw security logs in open formats enables fast investigation of incidents involving long-term data, while automated lifecycle management reduces costs and helps ensure compliance. The data lake integrates with Microsoft platforms and other tools for unified analytics and security, and machine learning can help detect unusual login activity across years of data, overcoming previous storage limitations.

From Microsoft Learn – Powerful analytics using Jupyter notebooks
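As a flavor of the notebook-style analysis described above, the following sketch scans a long window of sign-in logs stored as Parquet and flags days where a user's failed sign-ins far exceed their own baseline. The file name, column names, and threshold are assumptions for illustration; a real notebook would read from the data lake tier and tune the logic to your data.

```python
import pandas as pd

# Assumed: years of sign-in logs exported as Parquet with these columns.
signins = pd.read_parquet(
    "signin_logs.parquet",
    columns=["TimeGenerated", "UserPrincipalName", "ResultType"],
)

failures = signins[signins["ResultType"].astype(str) != "0"].copy()
failures["day"] = pd.to_datetime(failures["TimeGenerated"]).dt.date

# Daily failed sign-ins per user over the whole retention window.
daily = (
    failures.groupby(["UserPrincipalName", "day"])
    .size()
    .rename("failed")
    .reset_index()
)

# Flag days far above each user's own long-term baseline (a simple z-score-style test).
baseline = daily.groupby("UserPrincipalName")["failed"].agg(["mean", "std"]).fillna(0)
daily = daily.join(baseline, on="UserPrincipalName")
anomalies = daily[daily["failed"] > daily["mean"] + 3 * daily["std"]]

print(anomalies.sort_values("failed", ascending=False).head(10))
```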
Pros and Cons
The following table highlights the advantages and potential limitations of Microsoft Sentinel Data Lake. It follows the same Pay-As-You-Go pricing model currently available with Sentinel.

Pros | Cons | License Needed
Scalable, cost-effective long-term retention of security data | Requires adaptation to new architecture | Pay-As-You-Go model
Seamless integration with Microsoft Fabric and open data formats | Initial setup and integration may involve a learning curve | Pay-As-You-Go model
Efficient processing of petabyte-scale datasets | Transitioning existing workflows may require planning | Pay-As-You-Go model
Advanced analytics, threat hunting, and AI/ML across historical data | Some features may depend on integration with other services | Pay-As-You-Go model
Supports compliance use cases with robust data governance and audit trails | Complexity in new data governance features | Pay-As-You-Go model

The Microsoft Sentinel Data Lake solution advances cloud-native security by overcoming traditional SIEM limitations, allowing organizations to better retain, analyze, and respond to security data. As cyber threats grow, Sentinel Data Lake offers flexible, cost-efficient storage for long-term retention, supporting detection, compliance, and audits without significant expense or complexity.

Quick Guide: Deploy Microsoft Sentinel Data Lake
1. Assess Needs: Identify your security data volume, retention, and compliance requirements - Sentinel Data Lake Overview.
2. Prepare Environment: Ensure Azure permissions and workspace readiness - Onboarding Guide.
3. Enable Data Lake: Use the Azure CLI or Defender portal to activate - Setup Instructions.
4. Ingest & Import Data: Connect sources and migrate historical logs - Microsoft Sentinel Data Connectors.
5. Integrate Analytics: Use KQL, notebooks, and Microsoft Fabric for scalable analysis - Fabric Overview.
6. Train & Optimize: Educate your team and monitor performance - Best Practices.

About the Author: Hi! Jacques “Jack” here, I'm a Microsoft Technical Trainer at Microsoft. I wanted to share this because it's something I'm often asked about during my security trainings. It builds on Microsoft Sentinel's already impressive feature stack and helps the Defender community secure their environments in a world of ever-growing attacks. I've been working with Microsoft Sentinel since September 2019, and I have been teaching learners about this SIEM since March 2020. I have experience using Security Copilot and Security AI Agents, which have been effective in improving my incident response and compromise recovery times.
Microsoft Purview: The Ultimate AI Data Security Solution

Introduction
AI is transforming the way enterprises operate; however, with great innovation comes great responsibility. I've spent the last few years helping organizations secure their data with tools like Azure Information Protection, Data Loss Prevention, and now Microsoft Purview. As generative AI tools like Microsoft Copilot become embedded in everyday workflows, the need for clear governance and robust data protection is more urgent than ever. In this blog post, let's explore how Microsoft Purview can help organizations stay ahead in securing AI interactions without slowing down innovation.

What's the Issue?
AI agents are increasingly used to process sensitive data, often through natural language prompts. Without proper oversight, this can lead to data oversharing, compliance violations, and security risks.

Why It's Urgent
According to recent 2025 trends, over half of corporate users bring their own AI tools to work, often consumer-grade apps like ChatGPT or DeepSeek. These tools bypass enterprise protections, making it difficult to monitor and control data exposure.

Use Cases
- Enterprise AI Governance: Apply consistent policies across Microsoft and third-party AI tools.
- Compliance Auditing: Generate audit logs for AI interactions to meet regulatory requirements.
- Risk Mitigation: Block risky uploads and enforce adaptive protection based on user behavior.

How Microsoft Purview Solves It

1. Data Security Posture Management (DSPM) for AI
Purview's DSPM for AI provides a centralized dashboard to monitor AI activity, assess data risks, and enforce compliance policies across Copilots, agents, and third-party AI apps. It correlates data classification, user behavior, and policy coverage to surface real-time risks, such as oversharing via AI agents, and generates actionable recommendations to remediate gaps. DSPM integrates with tools like Microsoft Security Copilot for AI-assisted investigations and supports automated scanning, trend analytics, and posture reporting. It also extends protection to third-party AI tools like ChatGPT through endpoint DLP and browser extensions, ensuring consistent governance across both managed and unmanaged environments.

2. Unified Protection Across AI Agents
Whether you're using Microsoft 365 Copilot, Security Copilot, or Azure AI services, Purview applies consistent security and compliance controls. Agents inherit protection from their parent apps, including sensitivity labels, data loss prevention (DLP), and Insider Risk Management.

3. Real-Time Risk Detection
Purview enables real-time monitoring of prompts and responses, helping security teams detect oversharing and policy violations instantly.

From Microsoft Learn – Insider Risk

4. One-Click Policy Activation
Administrators can use Microsoft Purview's Data Security Posture Management (DSPM) for AI to rapidly deploy comprehensive security and compliance controls via one-click policy activation. This streamlined mechanism enables organizations to enforce prebuilt policy templates across AI ecosystems, ensuring prompt implementation of data loss prevention (DLP), sensitivity labeling, and Insider Risk Management on both Microsoft and third-party AI services. Through DSPM's unified policy orchestration layer, security teams gain granular telemetry into prompt and response flows, real-time policy enforcement, and detailed incident reporting.
Automated analytics continuously assess risk posture, enabling adaptive policy adjustments and scalable governance as new AI tools and user workflows are introduced into the enterprise environment. Please note: after implementing policy changes, it can take up to 24 hours for the changes to become visible and take full effect across your environment.

From Microsoft Learn – Purview Data Security Posture Management (DSPM) portal

5. Support for Third-Party AI Apps
Purview extends robust data security and compliance to browser-based AI tools such as ChatGPT and Google Gemini by employing endpoint Data Loss Prevention (DLP) and browser extensions that monitor and control data flows in real time. Through Microsoft Purview's Data Security Posture Management (DSPM) for AI, organizations can implement granular controls for sensitive data accessed during both Microsoft-native and third-party AI interactions. DSPM offers continuous discovery and classification of data assets, linking AI prompts and responses to their original data sources to automatically enforce data protection policies, including sensitivity labeling, adaptive access controls, and comprehensive content inspection, contextually for each AI transaction. For unsanctioned AI services reached via browsers, the Purview browser extension inspects both input and output, enabling endpoint DLP to block, alert, or redact sensitive material instantly, preventing unauthorized uploads, downloads, or copy/paste activity. Security teams benefit from rich telemetry on AI usage patterns, which integrates with user risk profiles and anomaly detection to identify and flag suspicious attempts to extract confidential information. Close integration with Microsoft Security Copilot and automated analytics further enhances visibility across all AI data flows, supporting incident response, audit, and compliance reporting needs. Purview's adaptive policy orchestration ensures that evolving AI services and workflows are continuously assessed for risk and that controls stay aligned with business, regulatory, and security requirements, enabling scalable, policy-driven governance for the expanding enterprise AI ecosystem.

Pros and Cons
The following table outlines the key advantages and potential limitations of implementing AI and agent data security controls with Microsoft Purview.

Pros | Cons | License Needed
Centralized AI governance | Requires proper licensing and setup | Microsoft 365 E5 or equivalent Purview add-on license
Real-time risk detection | May need browser extensions for full coverage | Microsoft 365 E5 or Purview add-on
Supports both Microsoft and third-party AI apps | Some features limited to enterprise versions | Microsoft 365 E5, E5 Compliance, or equivalent Purview add-on

Conclusion
Microsoft Purview offers a comprehensive solution for securing AI agents and their data interactions. By leveraging DSPM for AI, organizations can confidently adopt AI technologies while maintaining control over sensitive information. Explore Microsoft Purview's DSPM for AI, start by assessing your current AI usage, and activate one-click policies to secure your environment today!

FAQ
1. What is the purpose of Microsoft Purview's AI and agent data security controls?
The purpose is to ensure that sensitive data accessed or processed by AI systems and agents is governed, protected, and monitored using Microsoft Purview's compliance and security capabilities.
Microsoft Purview data security and compliance protection
2. How does Microsoft Purview help secure AI-generated content?
Microsoft Purview applies data loss prevention (DLP), sensitivity labels, and information protection policies to AI-generated content, ensuring it adheres to organizational compliance standards.
Microsoft Purview Information Protection

3. Can Microsoft Purview track and audit AI interactions with sensitive data?
Yes. Microsoft Purview provides audit logs and Activity explorer capabilities that allow organizations to monitor how AI systems and agents interact with sensitive data.
Search the audit log

4. What role do sensitivity labels play in AI data governance?
Sensitivity labels classify and protect data based on its sensitivity level. When applied, they enforce encryption, access restrictions, and usage rights, even when data is processed by AI.
Learn about sensitivity labels

5. How does Microsoft Purview integrate with Copilot and other AI tools?
Microsoft Purview extends its data protection and compliance capabilities to Microsoft 365 Copilot and other AI tools by ensuring that data accessed by these tools is governed under existing policies.
Microsoft 365 admin center
Microsoft 365 Copilot usage

6. Are there specific controls for third-party AI agents?
Yes. Microsoft Purview supports conditional access, DLP, and access reviews to manage and monitor third-party AI agents that interact with organizational data.
What is Conditional Access in Microsoft Entra ID?

7. How can organizations ensure AI usage complies with regulatory requirements?
By using Microsoft Purview Compliance Manager, organizations can assess and manage regulatory compliance risks associated with AI usage.
Microsoft Purview Compliance Manager

About the Author: Hi! Jacques “Jack” here, I'm a Microsoft Technical Trainer at Microsoft. I wanted to share a topic that is often top of mind: AI governance. I've been working with Microsoft Purview since its launch in 2022, building on prior experience with Azure Information Protection and Data Loss Prevention. I also have deep experience with generative AI technologies since their public release in November 2022, including Microsoft Copilot and other enterprise-grade AI solutions.
Introducing Microsoft Sentinel data lake

Today, we announced a significant expansion of Microsoft Sentinel's capabilities through the introduction of Sentinel data lake, now rolling out in public preview. Security teams cannot defend what they cannot see and analyze. With exploding volumes of security data, organizations are struggling to manage costs while maintaining effective threat coverage. Do-it-yourself security data architectures have perpetuated data silos, which in turn have reduced the effectiveness of AI solutions in security operations. With Sentinel data lake, we are taking a major step to address these challenges.

Microsoft Sentinel data lake enables a fully managed, cloud-native data lake that is purposefully designed for security, right inside Sentinel. Built on a modern lake architecture and powered by Azure, Sentinel data lake simplifies security data management, eliminates security data silos, and enables cost-effective long-term security data retention, with the ability to run multiple forms of analytics on a single copy of that data. Security teams can now store and manage all their security data. This takes the market-leading capabilities of Sentinel SIEM and supercharges them even further. Customers can leverage the data lake for retroactive TI matching and hunting over a longer time horizon, track low-and-slow attacks, conduct forensic analysis, build anomaly insights, and meet reporting and compliance needs. By unifying security data, Sentinel data lake provides the AI-ready data foundation for AI solutions.

Let's look at some of Sentinel data lake's core features.

Simplified onboarding and enablement inside the Defender portal: Customers can easily discover and enable the new data lake from within the Defender portal, either from the banner on the home page or from settings. Setting up a modern data lake is now just a click away, empowering security teams to get started quickly without a complex setup.

Simplified security data management: Sentinel data lake works seamlessly with existing Sentinel connectors. It brings together security logs from Microsoft services across M365, Defender, Azure, Entra, Purview, and Intune, plus third-party sources like AWS, GCP, and network and firewall data, from 350+ connectors and solutions. The data lake supports Sentinel's existing table schemas, while customers can also create custom connectors to bring raw data into the data lake or transform it during ingestion. In the future, we will enable additional industry-standard schemas. The data lake expands beyond activity logs by including a native asset store. Critical asset information is added to the data lake using new Sentinel data connectors for Microsoft 365, Entra, and Azure, enabling a single place to analyze activity and asset data enriched with threat intelligence. A new table management experience makes it easy for customers to choose where to send and store data, as well as set related retention policies to optimize their security data estate. Customers can send critical, high-fidelity security data to the analytics tier or choose to send high-volume, low-fidelity logs to the new data lake tier. Any data brought into the analytics tier is automatically mirrored into the data lake at no additional charge, making the data lake the central location for all security data.

Advanced data analysis capabilities over data in the data lake: Sentinel data lake stores all security data in an open format to enable analysts to perform multi-modal security analytics on a single copy of data.
Through the new data lake exploration experience in the Defender portal, customers can use Kusto Query Language (KQL) to analyze historical data with the full power of Kusto. Since the data lake supports the Sentinel table schema, advanced hunting queries can be run directly on the data lake. Customers can also schedule long-running jobs, either once or on a recurring schedule, that perform complex analysis on historical data for in-depth security insights. Insights generated from the data lake can be easily elevated to the analytics tier and leveraged in Sentinel for threat investigation and response. Additionally, as part of the public preview, we are also releasing a new Sentinel Visual Studio Code extension that enables security teams to connect to the same data lake data and use Python notebooks, as well as Spark and ML libraries, to deeply analyze lake data for anomalies. Since the environment is fully managed, there is no compute infrastructure to set up. Customers can simply install the Visual Studio Code extension and use AI coding agents like GitHub Copilot to build a notebook and execute it in the managed environment. These notebooks can also be scheduled as jobs, and the resulting insights can be elevated to the analytics tier and leveraged in Sentinel for threat investigation and response.

Flexible business model: Sentinel data lake enables customers to separate their data ingestion and retention needs from their security analytics needs, allowing them to ingest and store data cost-effectively and then pay separately when analyzing data for their specific needs.

Let's put this all together and show an example of how a customer can operationalize and derive value from the data lake for retrospective threat intelligence matching in Microsoft Sentinel. Network logs are typically high-volume but often contain key insights for detecting the initial entry point of an attack, a command-and-control connection, lateral movement, or an exfiltration attempt. Customers can now send these high-volume logs to the data lake tier. Next, they can create a Python notebook that joins the latest threat intelligence from Microsoft Defender Threat Intelligence against the network logs to scan for any connections to or from a suspicious IP or domain. They can run this notebook as a scheduled job, and any insights can then be promoted to the analytics tier and used to enrich ongoing investigations, hunts, response, or forensic analysis. All of this is possible cost-effectively, without having to set up any complex infrastructure, enabling security teams to achieve deeper insights.
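To illustrate what such a notebook might look like, here is a minimal sketch of the retrospective TI-matching step. The file name, column names, and indicator values are assumptions for illustration; a real job would read network logs from the data lake tier and pull current indicators from Microsoft Defender Threat Intelligence.

```python
import pandas as pd

# Assumed inputs: historical network connection logs and a threat intelligence
# indicator list (names and columns are illustrative).
network_logs = pd.read_parquet(
    "network_connections.parquet",
    columns=["TimeGenerated", "SourceIp", "DestinationIp", "DestinationPort"],
)
indicators = pd.DataFrame(
    {"Indicator": ["203.0.113.17", "198.51.100.44"], "ThreatType": ["C2", "Phishing"]}
)

# Retrospective TI matching: flag any historical connection to a known-bad IP.
matches = network_logs.merge(
    indicators, left_on="DestinationIp", right_on="Indicator", how="inner"
)

# These matches are the insights a scheduled job would promote to the analytics
# tier to enrich investigations, hunts, and response.
print(f"{len(matches)} historical connections matched threat intelligence")
print(matches.sort_values("TimeGenerated").head(20))
```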
This preview is now rolling out for customers in the Defender portal in our supported regions. To learn more, check out our Mechanics video and our documentation, or talk to your account teams.

Get started today
Join us as we redefine what's possible in security operations:
- Onboard Sentinel data lake: https://aka.ms/sentineldatalakedocs
- Explore our pricing: https://aka.ms/sentinel/pricingblog
- For the supported regions, please refer to https://aka.ms/sentinel/datalake/geos
- Learn more about our MDTI news: http://aka.ms/mdti-convergence

Empowering Secure AI Innovation: Data Security and Compliance for AI Agents

As organizations embrace the transformative power of generative AI, agentic AI is quickly becoming a core part of enterprise innovation. Whether organizations are just beginning their AI journey or scaling advanced solutions, one thing is clear: agents are poised to transform every function and workflow across organizations. IDC predicts that over 1 billion new business process agents will be created in the next four years 1. This surge in AI adoption is empowering employees across roles, from low-code makers to pro-code developers, to build and use AI in new ways. Business leaders are eager to support this momentum, but they also recognize the need to innovate responsibly with AI.

Microsoft Purview's evolution
When Microsoft 365 Copilot launched in November 2022, it sparked a wave of excitement and an immediate question: how do we secure and govern the data powering these AI experiences? Microsoft Purview quickly evolved to meet this need, extending its data security and compliance capabilities to the Microsoft 365 Copilot ecosystem. It delivered discoverability, protection, and governance value that helped customers discover data risks such as data oversharing, protect sensitive data to prevent data loss and insider risks, and govern AI usage to meet regulations and policies. Now, as customers move beyond pre-built agents like Copilot to develop their own AI agents and applications, Microsoft Purview has evolved to extend the same data protections built for Microsoft 365 Copilot to AI agents. Today, those protections span the entire development spectrum, from no-code and low-code tools like Copilot Studio to pro-code environments such as Azure AI Foundry.

Microsoft Purview helps address challenges across the development spectrum
Makers, typically business users or citizen developers who build solutions using low-code or no-code tools, shouldn't need to become security experts to build AI responsibly. Yet, without proper safeguards, their agents can inadvertently expose sensitive data or violate compliance policies. That is why, with Microsoft Purview, security and IT teams can feel confident about the agents being built in their organizations. When makers build agents through the Agent Builder or directly in Copilot Studio, security admins can set up Microsoft Purview's data security and compliance controls that work behind the scenes to support makers in building secure and compliant agents. These controls automatically enforce policies, monitor data access, and ensure compliance without requiring makers to become security experts or take additional actions.

In fact, a recent Microsoft study found that 71% of developer decision-makers acknowledge that such constraints result in security trade-offs and development delays 2. Pro-code developers are under increasing pressure to deliver fast, flexible, and seamlessly integrated solutions, yet data security often becomes a deployment blocker or an afterthought. Building enterprise-grade data security and compliance capabilities from scratch is not only time-consuming but also requires deep domain expertise. This is where Microsoft Purview steps in. As an industry leader in data security and compliance, Purview does the heavy lifting so developers don't have to. Now in preview, the Purview SDK can be used by developers to embed robust, enterprise-ready data protections directly into their AI applications, instead of building complex security frameworks on their own.
The Purview SDK is a comprehensive set of REST APIs, documentation, and code samples that allows developers to easily incorporate Microsoft Purview's capabilities into their workflows, regardless of their integrated development environment (IDE). This empowers them to move fast without compromising on security or compliance, while Microsoft Purview helps security teams remain in control.

Image: By embedding Purview APIs into the IDE, developers enable their AI apps to be secured and governed at runtime.

Startups, ISVs, and partners can leverage the Purview SDK to seamlessly integrate Purview's industry-leading features into their AI agents and applications. This enables their offerings to become Purview-aware, empowering customers to more easily secure and govern data within their AI environments. For example, Christian Veillette, Chief Technology Officer at Arthur Health, a Quisitive customer, states: "The synergistic integration of MazikCare, the Quisitive Intelligence Platform, and the data compliance power of Purview SDK, including its DSPM for AI, forms a foundational pillar for trustworthy and safe AI-driven healthcare transformations. This powerful combination ensures continuous oversight and instant enforcement of compliance policies, giving IT leadership full assurance in the output of every AI model and upholding the highest safety standards. By centralizing policy enforcement, security concerns are significantly eased, empowering leadership to confidently steer their organizations through the AI transformation journey."

Microsoft partner Infotechtion has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. Vivek Bhatt, Infotechtion's Chief Technology Officer, says: "Embedding Purview SDK into Infotechtion's AI governance solution improved trust and security by aligning Gen-AI interactions with Microsoft Purview's enterprise policies."

Microsoft Purview also natively integrates with Azure AI Foundry, enabling seamless, built-in security and compliance for AI workloads without requiring additional development effort. With this integration, signals from Azure AI Foundry are automatically surfaced in Microsoft Purview's Data Security Posture Management (DSPM) for AI, Insider Risk Management, and compliance solutions. This means security teams can monitor AI usage, detect data risks, and enforce compliance policies across AI agents and applications, whether they're built in-house or with Azure AI Foundry models. This reinforces Microsoft's commitment to delivering secure-by-default AI innovation, empowering organizations to scale responsibly with confidence.

Image: Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI.

Explore more partner case studies from Ernst & Young and Infosys to see how they're leveraging the Purview SDK. Learn more about the Purview SDK and Microsoft Purview for Azure AI Foundry.

Unified visibility and control
Whether supporting pro-code developers or low-code makers, Microsoft Purview enables organizations to secure and govern AI across the enterprise. With Purview, security teams can discover data security risks, protect sensitive data against data leakage and insider risks, and govern AI interactions.
Discover data security risks
With Data Security Posture Management (DSPM) for AI, data security teams can discover detailed data risk insights in AI interactions across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents.

Image: Data security admins can find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents, all in Microsoft Purview DSPM for AI.

Protect sensitive data against data leaks and insider risks
In DSPM for AI, data security admins can also get recommended insights to improve their organization's security posture, such as minimizing the risk of data oversharing. For example, an admin might get a recommendation to set up a data loss prevention (DLP) policy that prevents agents in Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. By setting up this policy, organizations can prevent confidential legal documents, with specific language that could lead to improper guidance, from being summarized. It also ensures that "Internal only" documents aren't used to create content that might be shared outside the organization.

Image: Extend data loss prevention (DLP) policies to agents in Microsoft 365 to protect sensitive data.

Agents often pull data from sources like SharePoint and Dataverse, and Microsoft Purview helps protect that data every step of the way. It honors sensitivity labels, enforces access permissions, and applies label inheritance so that AI-generated content carries the same protections as its source. With auto-labeling in Dataverse, sensitive data is classified as soon as it's ingested, reducing manual effort and maintaining consistent protection. When responses draw from multiple sources with different labels, the most restrictive label is applied to uphold compliance and minimize risk.

Image: Sensitivity labels will be automatically applied to data in Dataverse.
Image: AI-generated responses will inherit and honor the source data's sensitivity labels.

In addition to data and permission controls that help address data oversharing or leakage, security teams also need ways to detect risky user activities in AI apps and agents that could potentially lead to data security incidents. With risky AI usage indicators, a policy template, and an analytics report in Microsoft Purview Insider Risk Management, security teams with appropriate permissions can detect such activities. For example, a departing employee might receive an unusual number of AI responses containing sensitive data across Copilots and agents, deviating from their past activity patterns. Security teams can then detect and respond to these potential incidents to minimize the negative impact; for example, they can configure Adaptive Protection to automatically block a high-risk user from accessing sensitive data.

Image: An Insider Risk Management alert from a Risky AI usage policy shows a user with anomalous activities.

Govern AI interactions to detect non-compliant usage
Microsoft Purview provides a comprehensive set of tools to govern AI usage and detect non-compliant user activities. AI interactions across Microsoft Copilots, AI apps, and agents are recorded in audit logs. eDiscovery enables legal and compliance teams with appropriate permissions to collect and review AI-generated content for internal investigations or litigation.
Data Lifecycle Management enables teams to set policies to retain or dispose of AI interactions, while Communication Compliance helps detect risky or inappropriate use of AI, such as harmful content or other violations of code-of-conduct policies. Together, these capabilities give organizations the visibility and control they need to innovate responsibly with AI.

Figure: AI interactions across Microsoft Copilots, AI apps, and agents are recorded in Audit logs.

Figure: AI interactions across Microsoft Copilots, AI apps, and agents can be collected and reviewed in eDiscovery.

Figure: Microsoft Purview Communication Compliance can detect non-compliant content in AI prompts across Microsoft Copilots, AI apps, and agents.

Securing the Future of AI Innovation: Explore Additional Resources

As organizations accelerate their adoption of agentic AI, the need for built-in security and compliance has never been more critical. Microsoft Purview empowers both makers and developers to innovate with confidence, ensuring that every AI interaction is secure, compliant, and aligned with enterprise standards. By embedding protection across the entire development lifecycle, Purview helps organizations unlock the full potential of AI while maintaining the trust, transparency, and control that responsible innovation demands.

To dive deeper into how Microsoft Purview supports secure AI development, explore our additional resources, documentation, and integration guides:

Learn more about Security for AI solutions on our webpage
Learn more about Microsoft Purview SDK
Learn more about Purview pricing
Get started with Azure AI Foundry
Get started with Microsoft Purview

1. IDC, "1 Billion New Logical Applications: More Background," Gary Chen, Jim Mercer, April 2024, https://blogs.idc.com/2025/04/04/the-agentic-evolution-of-enterprise-applications/
2. Microsoft, AI App Security Quantitative Study, April 2025

Fishing for Syslog with Azure Kubernetes and Logstash
Deploy Secure Syslog Collection on Azure Kubernetes Service with Terraform

Organizations managing distributed infrastructure face a common challenge: collecting syslog data securely and reliably from various sources. Whether you're aggregating logs from network devices, Linux servers, or applications, you need a solution that scales with your environment while maintaining security standards.

This post walks through deploying Logstash on Azure Kubernetes Service (AKS) to collect RFC 5425 syslog messages over TLS. The solution uses Terraform for infrastructure automation and forwards collected logs to Azure Event Hubs for downstream processing. You'll learn how to build a production-ready deployment that integrates with Azure Sentinel, Azure Data Explorer, or other analytics platforms.

Solution Architecture

The deployment consists of several Azure components working together:

Azure Kubernetes Service (AKS): Hosts the Logstash deployment with automatic scaling capabilities
Internal Load Balancer: Provides a static IP endpoint for syslog sources within your network
Azure Key Vault: Stores TLS certificates for secure syslog transmission
Azure Event Hubs: Receives processed syslog data using the Kafka protocol
Log Analytics Workspace: Monitors the AKS cluster health and performance

Syslog sources send RFC 5425-compliant messages over TLS to the Load Balancer on port 6514. Logstash processes these messages and forwards them to Event Hubs, where they can be consumed by various Azure services or third-party tools.

Prerequisites

Before starting the deployment, ensure you have these tools installed and configured:

Terraform: Version 1.5 or later
Azure CLI: Authenticated to your Azure subscription
kubectl: For managing Kubernetes resources after deployment

Several Azure resources must be created manually before running Terraform, as the configuration references them. This approach provides flexibility in organizing resources across different teams or environments.

Step 1: Create Resource Groups

Create three resource groups to organize the solution components:

az group create --name rg-syslog-prod --location eastus
az group create --name rg-network-prod --location eastus
az group create --name rg-data-prod --location eastus

Each resource group serves a specific purpose:

rg-syslog-prod: Contains the AKS cluster, Key Vault, and Log Analytics Workspace
rg-network-prod: Holds networking resources (Virtual Network and Subnets)
rg-data-prod: Houses the Event Hub Namespace for data ingestion

Step 2: Configure Networking

Create a Virtual Network with dedicated subnets for AKS and the Load Balancer:

az network vnet create \
  --resource-group rg-network-prod \
  --name vnet-syslog-prod \
  --address-prefixes 10.0.0.0/16 \
  --location eastus

az network vnet subnet create \
  --resource-group rg-network-prod \
  --vnet-name vnet-syslog-prod \
  --name snet-aks-prod \
  --address-prefixes 10.0.1.0/24

az network vnet subnet create \
  --resource-group rg-network-prod \
  --vnet-name vnet-syslog-prod \
  --name snet-lb-prod \
  --address-prefixes 10.0.2.0/24

The network design uses non-overlapping CIDR ranges to prevent routing conflicts. The Load Balancer subnet will later be assigned the static IP address 10.0.2.100.
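An optional but quick sanity check before moving on is to confirm the subnets match what Terraform expects; the read-only commands below list the address prefixes so typos surface now rather than at plan time:

# List the subnets in the virtual network
az network vnet subnet list \
  --resource-group rg-network-prod \
  --vnet-name vnet-syslog-prod \
  --output table

# Confirm the AKS and Load Balancer subnets carry the expected prefixes
az network vnet show \
  --resource-group rg-network-prod \
  --name vnet-syslog-prod \
  --query "subnets[].{name:name, prefix:addressPrefix}" \
  --output table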
Step 3: Set Up Event Hub Namespace

Create an Event Hub Namespace with a dedicated Event Hub for syslog data:

az eventhubs namespace create \
  --resource-group rg-data-prod \
  --name eh-syslog-prod \
  --location eastus \
  --sku Standard

az eventhubs eventhub create \
  --resource-group rg-data-prod \
  --namespace-name eh-syslog-prod \
  --name syslog

The Standard SKU provides Kafka protocol support, which Logstash uses for reliable message delivery. The namespace automatically includes a RootManageSharedAccessKey for authentication.

Step 4: Configure Key Vault and TLS Certificate

Create a Key Vault to store the TLS certificate:

az keyvault create \
  --resource-group rg-syslog-prod \
  --name kv-syslog-prod \
  --location eastus

For production environments, import a certificate from your Certificate Authority:

az keyvault certificate import \
  --vault-name kv-syslog-prod \
  --name cert-syslog-prod \
  --file certificate.pfx \
  --password <pfx-password>

For testing purposes, you can generate a self-signed certificate:

az keyvault certificate create \
  --vault-name kv-syslog-prod \
  --name cert-syslog-prod \
  --policy "$(az keyvault certificate get-default-policy)"

Important: The certificate's Common Name (CN) or Subject Alternative Name (SAN) must match the DNS name your syslog sources will use to connect to the Load Balancer.

Step 5: Create Log Analytics Workspace

Set up a Log Analytics Workspace for monitoring the AKS cluster:

az monitor log-analytics workspace create \
  --resource-group rg-syslog-prod \
  --workspace-name log-syslog-prod \
  --location eastus

Understanding the Terraform Configuration

With the prerequisites in place, let's examine the Terraform configuration that automates the remaining deployment. The configuration follows a modular approach, making it easy to customize for different environments.

Referencing Existing Resources

The Terraform configuration begins by importing references to the manually created resources:

data "azurerm_client_config" "current" {}

data "azurerm_resource_group" "rg-main" {
  name = "rg-syslog-prod"
}

data "azurerm_resource_group" "rg-network" {
  name = "rg-network-prod"
}

data "azurerm_resource_group" "rg-data" {
  name = "rg-data-prod"
}

data "azurerm_virtual_network" "primary" {
  name                = "vnet-syslog-prod"
  resource_group_name = data.azurerm_resource_group.rg-network.name
}

data "azurerm_subnet" "kube-cluster" {
  name                 = "snet-aks-prod"
  resource_group_name  = data.azurerm_resource_group.rg-network.name
  virtual_network_name = data.azurerm_virtual_network.primary.name
}

data "azurerm_subnet" "kube-lb" {
  name                 = "snet-lb-prod"
  resource_group_name  = data.azurerm_resource_group.rg-network.name
  virtual_network_name = data.azurerm_virtual_network.primary.name
}

These data sources establish connections to existing infrastructure, ensuring the AKS cluster and Load Balancer deploy into the correct network context.
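Later snippets reference a few lookups that aren't shown above, such as data.azurerm_log_analytics_workspace.logstash, data.azurerm_key_vault_certificate_data.logstash, and the Event Hub values interpolated into the Logstash pipeline. The sketch below shows one plausible shape for those data sources; the resource names mirror the prerequisite steps, but treat this as an assumption to adapt rather than the post's exact configuration:

# Log Analytics workspace consumed by the AKS oms_agent block
data "azurerm_log_analytics_workspace" "logstash" {
  name                = "log-syslog-prod"
  resource_group_name = data.azurerm_resource_group.rg-main.name
}

# Key Vault and the certificate (with private key) mounted into Logstash
data "azurerm_key_vault" "logstash" {
  name                = "kv-syslog-prod"
  resource_group_name = data.azurerm_resource_group.rg-main.name
}

data "azurerm_key_vault_certificate_data" "logstash" {
  name         = "cert-syslog-prod"
  key_vault_id = data.azurerm_key_vault.logstash.id
}

# Event Hub namespace authorization rule supplying the Kafka connection values
data "azurerm_eventhub_namespace_authorization_rule" "syslog" {
  name                = "RootManageSharedAccessKey"
  namespace_name      = "eh-syslog-prod"
  resource_group_name = data.azurerm_resource_group.rg-data.name
}

The namespace name and the authorization rule's primary_key can then be fed into the Logstash pipeline template shown later, for example as templatefile variables.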
Deploying the AKS Cluster

The AKS cluster configuration balances security, performance, and manageability:

resource "azurerm_kubernetes_cluster" "primary" {
  name                = "aks-syslog-prod"
  location            = data.azurerm_resource_group.rg-main.location
  resource_group_name = data.azurerm_resource_group.rg-main.name
  dns_prefix          = "aks-syslog-prod"

  default_node_pool {
    name           = "default"
    node_count     = 2
    vm_size        = "Standard_DS2_v2"
    vnet_subnet_id = data.azurerm_subnet.kube-cluster.id
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin      = "azure"
    load_balancer_sku   = "standard"
    network_plugin_mode = "overlay"
  }

  oms_agent {
    log_analytics_workspace_id = data.azurerm_log_analytics_workspace.logstash.id
  }
}

Key configuration choices:

System-assigned managed identity: Eliminates the need for service principal credentials
Azure CNI in overlay mode: Provides efficient pod networking without consuming subnet IPs
Standard Load Balancer SKU: Enables zone redundancy and higher performance
OMS agent integration: Sends cluster metrics to Log Analytics for monitoring

The cluster requires network permissions to create the internal Load Balancer:

resource "azurerm_role_assignment" "aks-netcontrib" {
  scope                = data.azurerm_virtual_network.primary.id
  principal_id         = azurerm_kubernetes_cluster.primary.identity[0].principal_id
  role_definition_name = "Network Contributor"
}

Configuring the Logstash Deployment

The Logstash deployment uses Kubernetes resources for reliability and scalability. First, create a dedicated namespace:

resource "kubernetes_namespace" "logstash" {
  metadata {
    name = "logstash"
  }
}

The internal Load Balancer service exposes Logstash on a static IP:

resource "kubernetes_service" "loadbalancer-logstash" {
  metadata {
    name      = "logstash-lb"
    namespace = kubernetes_namespace.logstash.metadata[0].name
    annotations = {
      "service.beta.kubernetes.io/azure-load-balancer-internal"        = "true"
      "service.beta.kubernetes.io/azure-load-balancer-ipv4"            = "10.0.2.100"
      "service.beta.kubernetes.io/azure-load-balancer-internal-subnet" = data.azurerm_subnet.kube-lb.name
      "service.beta.kubernetes.io/azure-load-balancer-resource-group"  = data.azurerm_resource_group.rg-network.name
    }
  }

  spec {
    type = "LoadBalancer"
    selector = {
      app = kubernetes_deployment.logstash.metadata[0].name
    }
    port {
      name        = "logstash-tls"
      protocol    = "TCP"
      port        = 6514
      target_port = 6514
    }
  }
}

The annotations configure Azure-specific Load Balancer behavior, including the static IP assignment and subnet placement.

Securing Logstash with TLS

Kubernetes Secrets store the TLS certificate and Logstash configuration:

resource "kubernetes_secret" "logstash-ssl" {
  metadata {
    name      = "logstash-ssl"
    namespace = kubernetes_namespace.logstash.metadata[0].name
  }

  data = {
    "server.crt" = data.azurerm_key_vault_certificate_data.logstash.pem
    "server.key" = data.azurerm_key_vault_certificate_data.logstash.key
  }

  type = "Opaque"
}

The certificate data comes directly from Key Vault, maintaining a secure chain of custody.
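The deployment in the next section mounts a logstash-config-volume and a logstash-ssl-volume, but the resources backing them aren't shown in the excerpts. The sketch below is one way to wire this up, assuming the pipeline file shown later is kept next to the Terraform code as pipeline/logstash.conf.tpl and rendered with templatefile; the file path and ConfigMap name are assumptions to adapt to your repository layout:

# ConfigMap holding the rendered Logstash pipeline (template path is an assumption)
resource "kubernetes_config_map" "logstash_pipeline" {
  metadata {
    name      = "logstash-pipeline"
    namespace = kubernetes_namespace.logstash.metadata[0].name
  }

  data = {
    "logstash.conf" = templatefile("${path.module}/pipeline/logstash.conf.tpl", {
      name        = "eh-syslog-prod"
      primary_key = data.azurerm_eventhub_namespace_authorization_rule.syslog.primary_key
    })
  }
}

# Volumes referenced by the container's volume_mount blocks (place these inside the pod spec)
# volume {
#   name = "logstash-config-volume"
#   config_map {
#     name = kubernetes_config_map.logstash_pipeline.metadata[0].name
#   }
# }
# volume {
#   name = "logstash-ssl-volume"
#   secret {
#     secret_name = kubernetes_secret.logstash-ssl.metadata[0].name
#   }
# }

The name and primary_key template variables line up with the ${name} and ${primary_key} placeholders in the pipeline configuration shown later, and the primary_key value comes from the Event Hub authorization rule data source sketched earlier.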
Logstash Container Configuration

The deployment specification defines how Logstash runs in the cluster:

resource "kubernetes_deployment" "logstash" {
  metadata {
    name      = "logstash"
    namespace = kubernetes_namespace.logstash.metadata[0].name
  }

  spec {
    selector {
      match_labels = {
        app = "logstash"
      }
    }

    template {
      metadata {
        labels = {
          app = "logstash"
        }
      }

      spec {
        container {
          name  = "logstash"
          image = "docker.elastic.co/logstash/logstash:8.17.4"

          security_context {
            run_as_user                = 1000
            run_as_non_root            = true
            allow_privilege_escalation = false
          }

          resources {
            requests = {
              cpu    = "500m"
              memory = "1Gi"
            }
            limits = {
              cpu    = "1000m"
              memory = "2Gi"
            }
          }

          volume_mount {
            name       = "logstash-config-volume"
            mount_path = "/usr/share/logstash/pipeline/logstash.conf"
            sub_path   = "logstash.conf"
            read_only  = true
          }

          volume_mount {
            name       = "logstash-ssl-volume"
            mount_path = "/etc/logstash/certs"
            read_only  = true
          }
        }
      }
    }
  }
}

Security best practices include:

Running as a non-root user (UID 1000)
Disabling privilege escalation
Mounting configuration and certificates as read-only
Setting resource limits to prevent runaway containers

Automatic Scaling Configuration

The Horizontal Pod Autoscaler ensures Logstash scales with demand:

resource "kubernetes_horizontal_pod_autoscaler" "logstash_hpa" {
  metadata {
    name      = "logstash-hpa"
    namespace = kubernetes_namespace.logstash.metadata[0].name
  }

  spec {
    scale_target_ref {
      kind        = "Deployment"
      name        = kubernetes_deployment.logstash.metadata[0].name
      api_version = "apps/v1"
    }
    min_replicas                      = 1
    max_replicas                      = 30
    target_cpu_utilization_percentage = 80
  }
}

This configuration maintains between 1 and 30 replicas, scaling up when CPU usage exceeds 80%.

Logstash Pipeline Configuration

The Logstash configuration file defines how to process syslog messages:

input {
  tcp {
    port => 6514
    type => "syslog"
    ssl_enable => true
    ssl_cert => "/etc/logstash/certs/server.crt"
    ssl_key => "/etc/logstash/certs/server.key"
    ssl_verify => false
  }
}

output {
  stdout {
    codec => rubydebug
  }
  kafka {
    bootstrap_servers => "${name}.servicebus.windows.net:9093"
    topic_id => "syslog"
    security_protocol => "SASL_SSL"
    sasl_mechanism => "PLAIN"
    sasl_jaas_config => 'org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://${name}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=${primary_key};EntityPath=syslog";'
    codec => "json"
  }
}

The configuration:

Listens on port 6514 for TLS-encrypted syslog messages
Outputs to stdout for debugging (visible in container logs)
Forwards processed messages to Event Hubs using the Kafka protocol

Deploying the Solution

With all components configured, deploy the solution using Terraform:

Initialize Terraform in your project directory:

terraform init

Review the planned changes:

terraform plan

Apply the configuration:

terraform apply

Connect to the AKS cluster:

az aks get-credentials \
  --resource-group rg-syslog-prod \
  --name aks-syslog-prod

Verify the deployment:

kubectl -n logstash get pods
kubectl -n logstash get svc
kubectl -n logstash get hpa

Configuring Syslog Sources

After deployment, configure your syslog sources to send messages to the Load Balancer:

Create a DNS record pointing to the Load Balancer IP (10.0.2.100).
For example: syslog.yourdomain.com
Configure syslog clients to send RFC 5425 messages over TLS to port 6514.
Install the certificate chain on syslog clients if using a private CA or self-signed certificate.

Example rsyslog configuration for a Linux client:

*.* @@syslog.yourdomain.com:6514;RSYSLOG_SyslogProtocol23Format

Monitoring and Troubleshooting

Monitor the deployment using several methods. A quick end-to-end connectivity test is sketched at the end of this post.

View Logstash logs to verify message processing:

kubectl -n logstash logs -l app=logstash --tail=50

Check autoscaling status:

kubectl -n logstash describe hpa logstash-hpa

Monitor in the Azure Portal:

Navigate to the Log Analytics Workspace to view AKS metrics
Check Event Hub metrics to confirm message delivery
Review Load Balancer health probes and connection statistics

Security Best Practices

This deployment incorporates several security measures:

TLS encryption: All syslog traffic is encrypted using certificates from Key Vault
Network isolation: The internal Load Balancer restricts access to the virtual network
Managed identities: No credentials are stored in the configuration
Container security: Logstash runs as a non-root user with minimal privileges

For production deployments, consider these additional measures:

Enable client certificate validation in Logstash for mutual TLS
Add Network Security Groups to restrict source IPs
Implement Azure Policy for compliance validation
Enable Azure Defender for Kubernetes

Integration with Azure Services

Once syslog data flows into Event Hubs, you can integrate with various Azure services:

Azure Sentinel: Configure Data Collection Rules to ingest syslog data for security analytics. See the Azure Sentinel documentation for detailed steps.
Azure Data Explorer: Create a data connection to analyze syslog data with KQL queries.
Azure Stream Analytics: Process syslog streams in real time for alerting or transformation.
Logic Apps: Trigger workflows based on specific syslog patterns or events.

Cost Optimization

To optimize costs while maintaining performance:

Right-size the AKS node pool based on actual syslog volume
Use Azure Spot instances for non-critical environments
Configure Event Hub retention based on compliance requirements
Enable auto-shutdown for development environments

Conclusion

This Terraform-based solution provides a robust foundation for collecting syslog data in Azure. The combination of AKS, Logstash, and Event Hubs creates a scalable pipeline that integrates seamlessly with Azure's security and analytics services.

The modular design allows easy customization for different environments and requirements. Whether you're collecting logs from a handful of devices or thousands, this architecture scales to meet your needs while maintaining security and reliability.

For next steps, consider implementing additional Logstash filters for data enrichment, setting up automated certificate rotation, or expanding the solution to collect other log formats. The flexibility of this approach ensures it can grow with your organization's logging requirements.
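As a final check, the snippet below shows one way to validate the pipeline end to end from a Linux host inside the virtual network: openssl confirms the TLS listener is reachable, logger sends a test message through the local rsyslog daemon, and kubectl watches it arrive in Logstash. The DNS name matches the example record above, and the assumption is that rsyslog on the client is already forwarding to the endpoint as configured earlier.

# Verify the TLS handshake against the Load Balancer endpoint
openssl s_client -connect syslog.yourdomain.com:6514 -brief </dev/null

# Send a test message through the local rsyslog daemon
logger -p local0.info "syslog pipeline test message"

# Watch the message arrive in Logstash (the stdout output plugin prints processed events)
kubectl -n logstash logs -l app=logstash --tail=20 -f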