cybersecurity
Serious problems in Ring0 kernel-mode modules and security in current versions of Windows
We all know that the x86 architecture defines four CPU privilege levels: Ring 0 (kernel level), Ring 1, Ring 2, and Ring 3 (user level). Users, even administrators, can only operate at Ring 3, and Microsoft designed the operating system this way to keep it safe and stable. On top of that, Microsoft relies on driver signatures and security options such as "Memory Integrity" under "Core Isolation" in Windows Defender. A normal application that needs kernel access must load a kernel-mode module (a .sys driver), and such a module must be signed, or Windows Defender and other antivirus software will block it from loading.

But I have found a really serious problem in Microsoft's signing practices. BEDaisy.sys, the kernel-mode driver of the anti-cheat software BattlEye, is signed by Microsoft. BattlEye's EULA says that "BattlEye can prevent the cheaters from gaming on the servers which are protected by BattlEye.", and to make that happen, BattlEye creates a service and installs kernel-mode components. (Remember that no User Account Control window pops up when a service or trusted installer installs a kernel-mode driver; a sketch of that flow follows below.)
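To make the mechanics concrete, here is a minimal sketch, in Python via ctypes, of how an installer that is already running elevated (or as a service) registers a kernel-mode driver with the Windows Service Control Manager. The service name "ExampleDrv" and the .sys path are hypothetical placeholders; the point is that nothing in this flow shows the user a UAC prompt.

```python
import ctypes
from ctypes import wintypes

# Minimal illustrative sketch: registering a kernel-mode driver as a
# Windows service via the Service Control Manager. Requires an already
# elevated caller; "ExampleDrv" and the .sys path are placeholders.
advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)

advapi32.OpenSCManagerW.restype = wintypes.HANDLE
advapi32.OpenSCManagerW.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, wintypes.DWORD]
advapi32.CreateServiceW.restype = wintypes.HANDLE
advapi32.CreateServiceW.argtypes = (
    [wintypes.HANDLE] + [wintypes.LPCWSTR] * 2 + [wintypes.DWORD] * 4
    + [wintypes.LPCWSTR] * 2 + [wintypes.LPDWORD] + [wintypes.LPCWSTR] * 3
)
advapi32.CloseServiceHandle.argtypes = [wintypes.HANDLE]

SC_MANAGER_CREATE_SERVICE = 0x0002
SERVICE_ALL_ACCESS = 0xF01FF
SERVICE_KERNEL_DRIVER = 0x00000001   # this service type loads code into Ring 0
SERVICE_DEMAND_START = 0x00000003
SERVICE_ERROR_NORMAL = 0x00000001

scm = advapi32.OpenSCManagerW(None, None, SC_MANAGER_CREATE_SERVICE)
if not scm:
    raise ctypes.WinError(ctypes.get_last_error())

svc = advapi32.CreateServiceW(
    scm,
    "ExampleDrv",               # service name (placeholder)
    "Example Kernel Driver",    # display name (placeholder)
    SERVICE_ALL_ACCESS,
    SERVICE_KERNEL_DRIVER,
    SERVICE_DEMAND_START,
    SERVICE_ERROR_NORMAL,
    r"C:\Drivers\example.sys",  # path to the signed driver (placeholder)
    None, None, None, None, None,
)
if not svc:
    raise ctypes.WinError(ctypes.get_last_error())

# A subsequent StartServiceW call would load example.sys into the kernel,
# silently, with no UAC window shown to the user.
advapi32.CloseServiceHandle(svc)
advapi32.CloseServiceHandle(scm)
```

Windows will only load the driver if it carries an accepted signature, which is exactly why the signing decisions criticized in this post matter so much.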
This EULA is really misleading, because it makes users think "BattlEye does this to protect me from attacks by other cheaters", so they accept it and install BattlEye. However, once installed, BattlEye cannot even block a simple attack from other cheaters; cheaters can even force-crash your game. What it does instead is block modules from any other application it deems suspicious, even modules belonging to antivirus software, which weakens the system's protection or even puts the system at risk.

Here is another case. A user found his computer attacked by malware and was confused, because he had antivirus software installed. After examining his system carefully, he found that his antivirus software was down: it had been killed by mhyprot2.sys, the kernel-mode module of another anti-cheat product. And mhyprot2.sys is also signed by Microsoft. https://www.trendmicro.com/en_us/research/22/h/ransomware-actor-abuses-genshin-impact-anti-cheat-driver-to-kill-antivirus.html

The kernel-mode drivers in both cases are signed by Microsoft, and because they run at Ring 0, users can do nothing to stop them. And because they are signed, most antivirus software is less suspicious of them and far more willing to let them run. Besides, Windows is designed for everyone, not just for gamers, and not every user wants to sacrifice system security just to play games. Unlike cybersecurity companies, game companies usually care more about the game itself than about the system as a whole, and they accept no responsibility for any damage their anti-cheat software causes. What makes me angriest is that Microsoft actually signed these kinds of kernel-mode modules, which means Microsoft is allowing these dangerous things to happen. In my opinion, it is the player's duty to obey a game's EULA, but it is the game company's duty to do its own anti-cheat work, and if a company wants to use the player's device to help with anti-cheat, let alone gain Ring 0 access, it must warn and notify the user.

In BattlEye's case, three windows pop up when you try to install it, but all of them say that BattlEye will minimize its authority, and none of them says it needs the authority to shut down other software or block their activities. In the end, it is the users who paid for the device and the operating system they are using, not the game companies; taking full control of a device without notifying the user is illegal. I really hope Microsoft will raise the standard for signing kernel-mode modules. These issues can occur not only in anti-cheat software but in any other software; it just happens that this time the problem surfaced in anti-cheat software. To tell the truth, I think Microsoft should only sign Ring 0 kernel-mode drivers for hardware and antivirus software; every other application should run in Ring 3 user mode, as on Android. I know that would be hard to achieve, so Microsoft could add a whitelist function for users who care less about security, or even let them turn the security options off. Killing a benign program by mistake is acceptable, because the user can restore and whitelist it; missing a piece of malware is not, because that is a responsibility that usually cannot be borne. And if software on the whitelist damages the system, that is no longer Microsoft's responsibility. For gamers, Microsoft could also add an isolated gaming environment, something like Hyper-V but dedicated to games, in which no other software can run, to prevent cheating. Thank you.
A CISO's Guide to Securing AI - Securing AI for Federal, DIB, and DoW Entities

Artificial Intelligence (AI) is rapidly reshaping federal missions, defense operations, and critical infrastructure. From intelligence analysis to logistics and cyber defense, AI’s transformative power is undeniable. Yet, with great power comes great responsibility and risk.
Join Microsoft at IACP 2025: Empower public safety operations with trusted AI

The International Association of Chiefs of Police (IACP) Annual Conference and Exposition is the premier global event for law enforcement leaders, bringing together more than 16,000 public safety professionals. This year, IACP 2025 takes place October 18–22 at the Colorado Convention Center in Denver, and Microsoft is proud to be part of the conversation. As your trusted partner in public safety innovation, we invite you to connect with us at booth #362 to discover how Microsoft and our ecosystem of partners are helping agencies modernize operations, improve decision-making, and build safer communities through trusted AI.

Microsoft’s presence at IACP 2025 centers on three key pillars that reflect the evolving needs of law enforcement and public safety agencies:

- Empower the government workforce: Streamline workflows with secure AI copilots, enhance collaboration across departments, and boost efficiency with intuitive digital tools.
- Enable AI-driven decision making: Accelerate officer workflows with real-time insights and unify data to support faster, more informed decisions.
- Transform emergency response: Modernize communications, integrate systems for real-time situational awareness, and automate operations to improve coordination and outcomes.

Experience Innovation Firsthand

At booth #362, attendees can explore hands-on demos of Microsoft solutions including Microsoft 365 Copilot, Researcher and Analyst agents, and Copilot Studio agents tailored for first responders. These tools are designed to help agencies work smarter, respond faster, and serve communities more effectively. You’ll also have the opportunity to connect with our partners, Altia, DisasterTech, Insight, Pimloc, Remark, Revelen.AI, Triangula, and Zencos, who are showcasing solutions that support officer workflows, evidence management, reporting, and analytics. Don’t miss the Emergency Response Platform vehicle demo, supported by Darley, Dejero, and 3AM, which highlights how AI and real-time data can transform field operations and emergency response at the tactical edge.

Attend Our Thought Leadership Session

Join us for a featured education session in the Leadership Track: Is "Technology Sharing" the Key to Law Enforcement Innovation?
📅 Saturday, October 18
🕤 9:30 – 10:30 AM MT
📍 Room 505/506
This session explores how collaborative platforms and shared technology models can reduce costs, accelerate deployment, and improve outcomes across jurisdictions, offering a blueprint for scalable innovation.

Let’s Connect

We’d love to meet with you one-on-one to discuss your agency’s goals and challenges. Request a meeting with a Microsoft expert to explore how AI and cloud technologies can support your mission. Visit Microsoft at booth #362 to explore AI-powered public safety solutions and skilling opportunities. Together, we can build safer, more resilient communities through innovation.
Understanding Compliance Between Commercial, Government, DoD & Secret Offerings - July 2025 Update

There remains much confusion about which service best supports which standards. If you have CMMC, DFARS, ITAR, FedRAMP, CJIS, IRS, or other regulatory requirements and are trying to understand which service is the best fit for your organization, you should read this article.
AZ-500: Microsoft Azure Security Technologies Study Guide

The AZ-500 certification provides professionals with the skills and knowledge needed to secure Azure infrastructure, services, and data. The exam covers identity and access management, data protection, platform security, and governance in Azure. Learners can prepare for the exam with Microsoft's self-paced curriculum, instructor-led course, and documentation. The certification measures the learner’s knowledge of managing, monitoring, and implementing security for resources in Azure, multi-cloud, and hybrid environments. Azure Firewall, Key Vault, and Azure Active Directory are some of the topics covered in the exam.
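For a concrete taste of one of those topics, here is a minimal sketch of reading a secret from Azure Key Vault with the Azure SDK for Python (the azure-identity and azure-keyvault-secrets packages); the vault URL and secret name are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Minimal sketch: fetching a secret from Azure Key Vault.
# The vault URL and secret name below are hypothetical placeholders.
credential = DefaultAzureCredential()  # resolves CLI login, managed identity, env vars, ...

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=credential,
)

secret = client.get_secret("db-connection-string")
print(secret.name, secret.properties.version)  # avoid logging secret.value itself
```

Exercises like this, together with the identity and network-security configuration around them, are the kind of hands-on skills the exam assumes.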
Responsible AI and the Evolution of AI Security

Why Responsible AI Matters

Responsible AI means designing, developing, and deploying AI systems that are ethical, transparent, and accountable. It's not just about compliance; it's about building trust, protecting users, and ensuring AI benefits everyone.

Key Principles of Responsible AI:
- Fairness: Avoiding biases and discrimination by using diverse datasets and regular audits.
- Reliability & Safety: Rigorous testing to ensure AI performs as intended, even in unexpected scenarios.
- Privacy & Security: Protecting user data with robust safeguards.
- Transparency: Making AI decisions explainable and understandable.
- Accountability: Establishing governance to address negative impacts.
- Inclusiveness: Considering diverse user needs and perspectives.

Responsible AI reduces bias, increases transparency, and builds user trust, which is critical as AI systems increasingly affect finance, healthcare, public services, and more. Implementing Responsible AI isn't just about ethical ideals; it's a foundation that demands technical safeguards. For developers, this means translating principles like fairness and transparency into secure code, robust data handling, and model hardening strategies that preempt real-world AI threats.

The Evolution of AI Security: From Afterthought to Essential

AI security has come a long way, from an afterthought to a central pillar of modern digital defense. In the early days, security was reactive, with threats addressed only after damage occurred. The integration of AI shifted this paradigm, enabling proactive threat detection and behavioral analytics that spot anomalies before they escalate.

Key Milestones in AI Security:
- Pattern Recognition: Early AI focused on detecting unusual patterns, laying the groundwork for threat detection.
- Expert Systems: Rule-based systems in the 1970s-80s emulated human decision-making for security assessments.
- Machine Learning: The late 1990s saw the rise of ML algorithms that could analyze vast data and predict threats.
- Deep Learning: Neural networks now recognize complex threats and adapt to evolving attack methods.
- Real-Time Defense: Modern AI-driven platforms (like Darktrace) create adaptive, self-learning security environments that anticipate and neutralize threats proactively.

Why AI Security Is Now Mandatory

With the explosion of AI-powered applications and cloud services, security risks have multiplied. AI attacks, malicious activities that target AI systems and models, are a new frontier in cybersecurity:
- Data Poisoning: Attackers manipulate training data to corrupt AI outputs.
- Model Theft: Sensitive models and datasets can be stolen or reverse-engineered.
- Adversarial Attacks: Malicious inputs can trick AI systems into making wrong decisions.
- Privacy Breaches: Sensitive user data can leak if not properly protected.

Regulatory frameworks and industry standards now require organizations to adopt robust AI security practices to protect users, data, and critical infrastructure.

Tools and Techniques for Secure AI Infrastructure and Applications

Zero Trust Architecture
- Adopt a "never trust, always verify" approach.
- Enforce strict authentication and authorization for every user and device.

Data Security Protocols
- Encrypt data at rest, in transit, and during processing.
- Use tools like Microsoft Purview for data classification, cataloging, and access control.

Harden AI Models
- Train models with adversarial examples (see the sketch below).
- Implement input validation, anomaly detection, and regular security assessments.
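The adversarial-training idea above can be made concrete with a small, self-contained example. The following is a minimal sketch in Python/NumPy, assuming a toy logistic-regression classifier on synthetic data; the dataset, perturbation budget eps, and learning rate are all hypothetical choices for illustration, not a production recipe.

```python
import numpy as np

# Minimal sketch of adversarial training: a logistic-regression
# classifier trained on a mix of clean inputs and FGSM-perturbed
# inputs. Toy synthetic data; all hyperparameters are hypothetical.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic binary-classification data.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

w = np.zeros(10)          # model weights
lr, eps = 0.1, 0.1        # learning rate; FGSM perturbation budget

for _ in range(200):
    # Attack step: the gradient of the loss w.r.t. the *inputs*
    # gives the direction that most increases the loss.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)           # dLoss/dX for logistic loss
    X_adv = X + eps * np.sign(grad_x)     # FGSM perturbation

    # Defense step: update the model on clean + adversarial examples.
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p_mix = sigmoid(X_mix @ w)
    grad_w = X_mix.T @ (p_mix - y_mix) / len(y_mix)
    w -= lr * grad_w

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The same recipe carries over to deep models: a framework's automatic differentiation supplies the input gradient, and the clean/adversarial mix becomes part of every training batch.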
Secure API and Endpoint Management
- Use API gateways, OAuth 2.0, and TLS to secure endpoints.
- Monitor and rate-limit API access to prevent abuse.

Continuous Monitoring and Incident Response
- Deploy AI-powered Security Information and Event Management (SIEM) systems for real-time threat detection and response.
- Regularly audit logs and security events across your infrastructure.

DevSecOps Integration
- Embed security into every phase of the AI development lifecycle.
- Automate security testing in CI/CD pipelines.

Employee Training and Governance
- Train teams on AI-specific risks and responsible data handling.
- Establish clear governance frameworks for AI ethics and compliance.

Azure-Specific Security Tools
- Microsoft Defender for Cloud: Monitors and protects Azure resources.
- Azure Resource Graph Explorer: Maintains an inventory of models, data, and assets.
- Microsoft Purview: Manages data security, privacy, and compliance across Azure services. Microsoft Purview provides a centralized platform for data governance, security, and compliance across your entire data estate.

Why Microsoft Purview Matters for Responsible AI

Microsoft Purview offers a unified, cloud-native solution for:
- Data discovery and classification
- Access management and policy enforcement
- Compliance monitoring and risk mitigation
- Data quality and observability

Purview's integrated approach ensures that AI systems are built on trusted, well-governed, and secure data, addressing the core principles of responsible AI: fairness, transparency, privacy, and accountability.

Conclusion

Responsible AI and strong AI security measures are no longer optional; they are essential pillars of modern application development and integration on Azure. By adhering to ethical principles and utilizing cutting-edge security tools and strategies, organizations can drive innovation with confidence while safeguarding users, data, and the broader society.