May 08 2024 09:18 AM
Generative AI is reshaping business today for every individual, every team, and every industry. Organizations engage with GenAI in a variety of ways – from purchasing and using finished GenAI apps to developing, deploying, and operating custom-built GenAI apps.
GenAI broadens the attack surface of applications through prompts, training data, models, and more – thereby effectively changing the threat landscape with new risks such as direct or indirect prompt injection attacks, data leakage, and data oversharing.
In March this year, we shared how Microsoft Security helps organizations discover, protect, and govern the use of GenAI apps like Copilot for M365. Today, we’re thrilled to introduce additional capabilities for that scenario and new capabilities to secure and govern the development, deployment, and runtime of custom-built GenAI apps.
With these new innovations, Microsoft Security is at the forefront of AI security to support our customers on their AI journey by being the first security solution provider to offer threat protection for AI workloads and providing comprehensive security to secure and govern AI usage and applications.
Secure and govern GenAI you build:
Secure and govern GenAI you use:
Discover new AI attack surfaces
As organizations embrace GenAI, many accelerate adoption with pre-built GenAI applications while others choose to develop GenAI applications in-house, tailored to their unique use cases, security controls and compliance requirements. Organizations from all industries are racing to transform their applications with AI, with over half of Fortune 500 companies using Azure OpenAI.
With all the new components of AI workloads such as models, SDKs, training, and grounding data – the visibility into understanding all the configurations of these new components and the risks associated with them is more important than ever.
With new AI security posture management (AI-SPM) capabilities in Microsoft Defender for Cloud, security admins can continuously discover and inventory their organization’s AI components across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock – including models, SDKs, and data – as well as sensitive data used in grounding, training, and fine tuning LLMs. Admins can find vulnerabilities, identify exploitable attack paths, and easily remediate risks to get ahead of active threats.
Read the full post here: Secure your AI transformation with Microsoft Security