AI security
On-device AI and security: What really matters for the enterprise
AI is evolving, and so is the way businesses run it. Traditionally, most AI workloads have been processed in the cloud. When a user gives an AI tool a prompt, that input is sent over the internet to remote servers, where the model processes it and sends back a result. This cloud-based approach supports large-scale services like Microsoft 365 Copilot, which integrates AI into apps like Word, Excel, and Teams.

Now, a new capability is emerging alongside cloud-based AI. AI can also run directly on a PC—no internet connection or remote server required. This is known as on-device processing: the data and the model stay on the device itself, and the work is done locally.

Modern CPUs and GPUs are beginning to support this kind of processing. But neural processing units (NPUs), now included in enterprise-grade PCs such as Microsoft Surface Copilot+ PCs, are specifically designed to run AI workloads efficiently, performing the types of operations AI needs at high speed while using less power. That makes them ideal for features that need to work instantly, run continuously in the background, or function without an internet connection.

A flexible approach to AI deployment

NPUs can enable power-efficient on-device processing, fast response times with small models, consistent functionality in offline scenarios, and more control over how data is processed and stored. For organizations, this adds flexibility in choosing how and where to run AI—whether to support real-time interactions at the edge or to meet specific data governance requirements.

At the same time, cloud-based AI remains essential to how organizations deliver intelligent services across teams and workflows. Microsoft 365 Copilot, for example, is powered by cloud infrastructure and integrates deeply across productivity applications using enterprise-grade identity, access, and content protections.

Both models serve different but complementary needs. On-device AI adds new options for responsiveness and control. Cloud-based AI enables broad integration and centralized scale. Together, they give businesses the flexibility to align AI processing with the demands of the use case, whether for fast local inference or connected collaboration.

For business and IT leaders, the question is not which model is better but how to use each effectively within a secure architecture. That starts with understanding where data flows, how it is protected, and what matters most at the endpoint.

Understanding AI data flow and its security impact

AI systems rely on several types of input, such as user prompts, system context, and business content. When AI runs in the cloud, data is transmitted to remote servers for processing. When it runs on the device, processing happens locally. Both approaches have implications for security.

With cloud AI, protection depends on the strength of the vendor's infrastructure, encryption standards, and access controls. Security follows a shared responsibility model: the cloud provider secures the platform, while the enterprise defines its policies for data access, classification, and compliance.
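On the device side, the mechanics are straightforward to picture. The sketch below is a minimal illustration of local inference with ONNX Runtime, which can target an NPU through an execution provider (for example, the QNN provider on Arm-based Copilot+ PCs) and fall back to the CPU when no NPU provider is present. The model file, input shape, and data type are placeholders for illustration, not a specific Microsoft-published model.

```python
# Minimal sketch: local inference with ONNX Runtime, preferring an NPU
# execution provider and falling back to the CPU. The model path, input
# shape, and dtype are placeholders -- substitute a real ONNX model.
import numpy as np
import onnxruntime as ort

MODEL_PATH = "local_model.onnx"  # hypothetical on-device model file

# Ask the runtime which providers are actually available on this machine,
# then keep only the ones we requested, in preference order.
requested = ["QNNExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in requested if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession(MODEL_PATH, providers=providers)

# Build a dummy input matching the model's declared shape; real code would
# pass tokenized text, image tensors, and so on. Nothing leaves the device.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {inp.name: dummy})
print(f"Ran locally on {session.get_providers()[0]}; output shape {outputs[0].shape}")
```

The input and the model weights never leave the endpoint, which is exactly why the integrity of that endpoint becomes the focus of the security conversation.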
Microsoft's approach to data security and privacy in cloud AI services

Although the purpose of this blog post is to talk about on-device AI and security, it's worth a detour to briefly touch on how Microsoft approaches data governance across its cloud-based AI services.

Ultimately, the goal is for employees to be able to use whatever tools work best for what they want to get done, and they may not differentiate between local and cloud AI services. That means having a trusted provider for both is important for long-term AI value and security in the organization.

Microsoft's generative AI solutions, including Azure OpenAI Service and Copilot services and capabilities, do not use your organization's data to train foundation models without your permission. The Azure OpenAI Service is operated by Microsoft as an Azure service; Microsoft hosts the OpenAI models in Microsoft's Azure environment, and the Service does not interact with any services operated by OpenAI (e.g., ChatGPT or the OpenAI API). Microsoft 365 Copilot and other AI tools operate within a secured boundary, pulling from organization-specific content sources like OneDrive and Microsoft Graph while respecting existing access permissions. For more resources on data privacy and security in Microsoft cloud AI services, check out Microsoft Learn.

Local AI security depends on a trusted endpoint

When AI runs on the device, the data stays closer to its source. This reduces reliance on network connectivity and can help limit exposure in scenarios where data residency or confidentiality is a concern. But it also means the device must be secured at every level. Running AI on the device does not inherently make it more or less secure; it shifts the security perimeter. Now the integrity of the endpoint matters even more.

Surface Copilot+ PCs are built with this in mind. As secured-core PCs, they integrate hardware-based protections that help guard against firmware, OS-level, and identity-based threats:

- TPM 2.0 and Microsoft Pluton security processors provide hardware-based protection for sensitive data
- A hardware-based root of trust verifies system integrity from boot-up
- Microsoft-developed firmware reduces exposure to third-party supply chain risks and helps address emerging threats rapidly via Windows Update
- Windows Hello and Enhanced Sign-in Security (ESS) offer strong authentication at the hardware level

These protections and others work together to create a dependable foundation for local AI workloads. When AI runs on a device like this, the same enterprise-grade security stack that protects the OS and applications also applies to AI processing.

Why application design is part of the security equation

Protecting the device is foundational—but it's not the whole story. As organizations begin to adopt generative AI tools that run locally, the security conversation must also expand to include how those tools are designed, governed, and managed. The value of AI increases dramatically when it can work with rich, contextual data. But that same access introduces new risks if not handled properly. Local AI tools must be built with clear boundaries around what data they can access, how that access is granted, and how users and IT teams can control it. This includes opt-in mechanisms, permission models, and visibility into what's being stored and why.
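What do those boundaries look like in practice? The sketch below is purely illustrative (the names and structure are hypothetical, not a Windows or Microsoft API): a locally running AI feature gates every data source behind explicit, revocable consent and keeps an auditable record of what it read.

```python
# Hypothetical sketch of an opt-in permission model for a local AI feature.
# Names and structure are illustrative only -- not a Microsoft or Windows API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentStore:
    """Tracks which data sources the user has explicitly opted in to."""
    granted: set[str] = field(default_factory=set)
    audit_log: list[str] = field(default_factory=list)

    def grant(self, source: str) -> None:
        self.granted.add(source)

    def revoke(self, source: str) -> None:
        self.granted.discard(source)

    def read(self, source: str, reader) -> object | None:
        """Return data only if the user opted in; always record the attempt."""
        allowed = source in self.granted
        self.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} {source} "
            f"{'allowed' if allowed else 'blocked'}"
        )
        return reader(source) if allowed else None


# Usage: the AI feature only sees sources the user granted, and both the user
# and IT can inspect the audit log to see what was actually read.
consent = ConsentStore()
consent.grant("calendar")

context = consent.read("calendar", lambda s: f"<{s} events>")   # returned
blocked = consent.read("browser_history", lambda s: f"<{s}>")   # None, blocked
print(context, blocked, consent.audit_log, sep="\n")
```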
Microsoft Recall (preview) on Copilot+ PCs is a case study in how thoughtful application design can make local AI both powerful and privacy conscious. It captures snapshots of the desktop embedded with contextual information, enabling employees to find almost anything that has appeared on their screen by describing it in their own words.

This functionality is only possible because Recall has access to a wide range of on-device data—but that access is carefully managed. Recall runs entirely on the device. It is turned off by default—even when IT makes it available—and requires biometric sign-in with Windows Hello Enhanced Sign-in Security to activate. Snapshots are encrypted and stored locally, protected by secured-core PC features and the Microsoft Pluton security processor. These safeguards help ensure that sensitive data stays protected, even as AI becomes more deeply embedded in everyday workflows.

IT admins can manage Recall through Microsoft Intune, with policies to enable or disable the feature, control snapshot retention, and apply content filters. Even when Recall is enabled, it remains optional for employees, who can pause snapshot saving, filter specific apps or websites, and delete snapshots at any time.

This layered approach—secure hardware, secure OS, and secure app design—reflects Microsoft's broader strategy for responsible local AI and aligns with the overall Surface security approach. It helps organizations maintain governance and compliance while giving users confidence that they are in control of their data and that the tools are designed to support them, not surveil them. This balance is essential to building trust in AI-powered workflows and ensuring that innovation doesn't come at the expense of privacy or transparency. For more information, check out the related blog post.

Choosing the right AI model for the use case

Local AI processing complements cloud AI, offering additional options for how and where workloads run. Each approach supports different needs and use cases. What matters is selecting the right model for the task while maintaining consistent security and governance across the entire environment.

- On-device AI is especially useful in scenarios where organizations need to reduce data movement or ensure AI works reliably in disconnected environments
- In regulated industries such as finance, legal, or government, local processing can help support compliance with strict data-handling requirements
- In the field, mobile workers can use AI features such as document analysis or image recognition without relying on a stable connection
- For custom enterprise models, on-device execution through Windows AI Foundry Local lets developers embed AI in apps while maintaining control over how data is used and stored

These use cases reflect a broader trend. Businesses want more flexibility in how they deploy and manage AI. On-device processing makes that possible without requiring a tradeoff in security or integration.

Security fundamentals matter most

Microsoft takes a holistic view of AI security across cloud services, on-device platforms, and everything in between. Whether your AI runs in Azure or on a Surface device, the same principles apply: protect identity, encrypt data, enforce access controls, and ensure transparency. This approach builds on the enterprise-grade protections already established across Microsoft's technology stack. From the Secure Development Lifecycle to Zero Trust access policies, Microsoft applies rigorous standards to every layer of AI deployment. For business leaders, AI security extends familiar principles—identity, access, data protection—into new AI-powered workflows, with clear visibility and control over how data is handled across cloud and device environments.
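As a small illustration of the identity-first principle, the sketch below shows what keyless access to a cloud deployment can look like, using Microsoft Entra ID credentials through the azure-identity and openai Python packages instead of a shared API key. The endpoint, deployment name, and API version shown are placeholders, not values from this post.

```python
# Sketch: keyless access to an Azure OpenAI deployment using Microsoft Entra ID.
# Endpoint, deployment name, and API version are placeholders -- use your own.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Exchange the signed-in identity (user, managed identity, or service principal)
# for a short-lived token instead of distributing a shared API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",  # example version; check current documentation
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment
    messages=[{"role": "user", "content": "Summarize our data-handling policy."}],
)
print(response.choices[0].message.content)
```

The design point is that access to the model is governed by the same identity and Zero Trust policies that protect the rest of the environment.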
Securing AI starts with the right foundations

AI is expanding from cloud-only services to include new, capable endpoints. This shift gives businesses more ways to match the processing model to the use case without compromising security. Surface Copilot+ PCs support this flexibility by delivering local AI performance on a security-forward, enterprise-ready platform. When paired with Microsoft 365 and Azure services, they offer a cohesive ecosystem that respects data boundaries and aligns with organizational policies.

AI security is not about choosing between cloud and device. It is about enabling a flexible, secure ecosystem where AI can run where it delivers the most value—on the endpoint, in the cloud, or across both. This adaptability unlocks new ways to work, automate, and innovate, without increasing risk. Surface Copilot+ PCs are part of that broader strategy, helping organizations deploy AI with confidence and control—at scale, at speed, and at the edge of what's next.

Embracing Responsible AI: A Comprehensive Guide and Call to Action
In an age where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, the need for responsible AI practices has never been more critical. From healthcare to finance, AI systems influence decisions affecting millions of people. As developers, organizations, and users, we are responsible for ensuring that these technologies are designed, deployed, and evaluated ethically. This blog will delve into the principles of responsible AI and the importance of assessing generative AI applications, and it closes with a call to action to engage with the Microsoft Learn Module on responsible AI evaluations.

What is Responsible AI?

Responsible AI encompasses a set of principles and practices aimed at ensuring that AI technologies are developed and used in ways that are ethical, fair, and accountable. Here are the core principles that define responsible AI.

Fairness

AI systems must be designed to avoid bias and discrimination. This means ensuring that the data used to train these systems is representative and that the algorithms do not favor one group over another. Fairness is crucial in applications like hiring, lending, and law enforcement, where biased AI can lead to significant societal harm.

Transparency

Transparency involves making AI systems understandable to users and stakeholders. This includes providing clear explanations of how AI models make decisions and what data they use. Transparency builds trust and allows users to challenge or question AI decisions when necessary.

Accountability

Developers and organizations must be held accountable for the outcomes of their AI systems. This includes establishing clear lines of responsibility for AI decisions and ensuring that there are mechanisms in place to address any negative consequences that arise from AI use.

Privacy

AI systems often rely on vast amounts of data, raising concerns about user privacy. Responsible AI practices involve implementing robust data protection measures, ensuring compliance with regulations like GDPR, and being transparent about how user data is collected, stored, and used.

The Importance of Evaluating Generative AI Applications

Generative AI, which includes technologies that can create text, images, music, and more, presents unique challenges and opportunities. Evaluating these applications is essential for several reasons.

Quality Assessment

Evaluating the output quality of generative AI applications is crucial to ensure that they meet user expectations and ethical standards. Poor-quality outputs can lead to misinformation, misrepresentation, and a loss of trust in AI technologies.

Custom Evaluators

Learning to create and use custom evaluators allows developers to tailor assessments to specific applications and contexts. This flexibility is vital in ensuring that the evaluation process aligns with the intended use of the AI system.

Synthetic Datasets

Generative AI can be used to create synthetic datasets, which can help in training AI models while addressing privacy concerns and data scarcity. Evaluating these synthetic datasets is essential to ensure they are representative and do not introduce bias.
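To make the idea of a custom evaluator concrete, here is a minimal, framework-agnostic sketch in plain Python. It is not the code from the Learn module; it simply shows the shape of the idea: an evaluator is a callable that takes a generated answer plus its grounding context and returns named scores that can be aggregated across a dataset, synthetic or otherwise.

```python
# Toy custom evaluator: scores how well an answer sticks to its source context
# (word overlap as a crude groundedness proxy) and whether it is concise.
# Illustrative only -- production evaluators would use stronger metrics.
from statistics import mean


def grounded_and_concise(*, context: str, answer: str, max_words: int = 80) -> dict:
    context_words = set(context.lower().split())
    answer_words = answer.lower().split()
    overlap = sum(w in context_words for w in answer_words) / max(len(answer_words), 1)
    return {
        "groundedness": round(overlap, 2),           # 0.0 (unsupported) .. 1.0
        "concise": float(len(answer_words) <= max_words),
    }


# A tiny synthetic evaluation set; in practice this would be generated or curated.
dataset = [
    {"context": "The NPU runs AI workloads locally at low power.",
     "answer": "The NPU processes AI workloads locally while using little power."},
    {"context": "Snapshots are encrypted and stored on the device.",
     "answer": "Snapshots are uploaded to a public server."},  # should score poorly
]

results = [grounded_and_concise(**row) for row in dataset]
for metric in ("groundedness", "concise"):
    print(metric, round(mean(r[metric] for r in results), 2))
```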
Call to Action: Engage with the Microsoft Learn Module

To deepen your understanding of responsible AI and enhance your skills in evaluating generative AI applications, I encourage you to explore the Microsoft Learn Module available at this link.

What You Will Learn

- Concepts and Methodologies: The module covers essential frameworks for evaluating generative AI, including best practices and methodologies that can be applied across various domains.
- Hands-On Exercises: Engage in practical, code-first exercises that simulate real-world scenarios. These exercises will help you apply the concepts you have learned in a tangible way, reinforcing your understanding.

Prerequisites

- An Azure subscription (you can create one for free).
- Basic familiarity with Azure and Python programming.
- Tools like Docker and Visual Studio Code for local development.

Why This Matters

By participating in this module, you are not just enhancing your skills; you are contributing to a broader movement towards responsible AI. As AI technologies continue to evolve, the demand for professionals who understand and prioritize ethical considerations will only grow. Your engagement in this learning journey can help shape the future of AI, ensuring it serves humanity positively and equitably.

Conclusion

As we navigate the complexities of AI technology, we must prioritize responsible AI practices. By engaging with educational resources like the Microsoft Learn Module on responsible AI evaluations, we can equip ourselves with the knowledge and skills necessary to create AI systems that are not only innovative but also ethical and responsible.

Join the movement towards responsible AI today! Take the first step by exploring the Microsoft Learn Module and become an advocate for ethical AI practices in your community and beyond. Together, we can ensure that AI serves as a force for good in our society.

References

- Evaluate generative AI applications: https://learn.microsoft.com/en-us/training/paths/evaluate-generative-ai-apps/?wt.mc_id=studentamb_263805
- Azure Subscription for Students: https://azure.microsoft.com/en-us/free/students/?wt.mc_id=studentamb_263805
- Visual Studio Code: https://code.visualstudio.com/?wt.mc_id=studentamb_263805

London Reactor Meetup September 2024 - Cyber Security
Hey everyone! Thanks for joining the London Reactor Meetup today. Here you can find the resources that were shared during the meetup and the speakers' contact details.

Upcoming

You can find all upcoming Reactor events HERE.

Speaker contact and resources:

- Rafah Knight, CEO @ SecureAI: SecureAI, LinkedIn
- Chris Noring: LinkedIn
- Liam Hampton: LinkedIn