
Navigating AI security: Identifying risks and implementing mitigations

vicperdana
May 15, 2025

As artificial intelligence becomes central to software innovation, it also introduces unique security challenges—especially in applications powered by large language models (LLMs). In this edition of the Software Development Company Security Series, we explore the evolving risks facing AI-powered products and share actionable strategies to secure AI solutions throughout the development lifecycle.

Figure 1 - AI Security Exposures
*Data based on 2024–2025 global reports from Cyberhaven, Swimlane, FS-ISAC, Capgemini, Palo Alto Networks, and Pillar Security analyzing AI security incidents across sectors.

Understanding the Evolving AI Threat Landscape

AI systems, particularly LLMs, differ from traditional software in a fundamental way: their outputs are generative, probabilistic, and nondeterministic. This unpredictability opens the door to novel security risks, including:

  • Sensitive Data Exposure: Leaked personal or proprietary data via model outputs.
  • Prompt Injection: Manipulated inputs crafted to subvert AI behavior.
  • Supply Chain Attacks: Risks from compromised training data, open-source models, or third-party libraries.
  • Model Poisoning: Insertion of malicious content during training to bias outcomes.
  • Jailbreaks & Misuse: Circumventing safeguards to produce unsafe or unethical content.
  • Compliance & Trust Risks: Legal, regulatory, and reputational consequences from unvalidated AI outputs.

These risks underscore the need for a security-first approach to designing, deploying, and operating AI systems.

Key Risks: The OWASP Top 10 for LLMs

The OWASP Top 10 for LLM Applications offers a framework for understanding threats specific to generative AI. Key entries include:

  • Prompt Injection
  • Sensitive Data Disclosure
  • Model and Data Poisoning
  • Excessive Model Permissions
  • Hallucination & Misinformation
  • System Prompt Leakage
  • Vector Embedding Exploits
  • Uncontrolled Resource Consumption

Each of these risks presents opportunities for attackers across the AI lifecycle—from model training and prompt design to output handling and API access.
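
Some of these risks can be mitigated with very ordinary engineering controls. For example, "Uncontrolled Resource Consumption" is usually addressed with per-caller quotas enforced before any model call is made. The Python sketch below shows a minimal per-user token budget; the limits and the rough token estimate are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: per-user token budget to limit uncontrolled resource consumption.
# The budget, window, and naive token estimate below are illustrative assumptions.
import time
from collections import defaultdict

MAX_TOKENS_PER_HOUR = 50_000   # assumed budget; tune to your workload
WINDOW_SECONDS = 3600

_usage = defaultdict(list)     # user_id -> [(timestamp, tokens), ...]

def estimate_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token); replace with your tokenizer."""
    return max(1, len(text) // 4)

def allow_request(user_id: str, prompt: str) -> bool:
    """Return True if the user still has budget in the current window."""
    now = time.time()
    # Drop usage records that have fallen out of the window.
    _usage[user_id] = [(t, n) for t, n in _usage[user_id] if now - t < WINDOW_SECONDS]
    used = sum(n for _, n in _usage[user_id])
    needed = estimate_tokens(prompt)
    if used + needed > MAX_TOKENS_PER_HOUR:
        return False              # reject or queue instead of calling the model
    _usage[user_id].append((now, needed))
    return True
```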

Inherent Risks of LLM-Based Applications

Three core attributes contribute to LLM vulnerabilities:

  • Probabilistic Outputs: Same prompt, different results.
  • Non-Determinism: Inconsistent behavior, compounded over time.
  • Linguistic Flexibility: Prone to manipulation and hallucination.

Common attack scenarios include:

  • Hallucination: Fabricated content presented as fact—dangerous in domains like healthcare or legal.
  • Indirect Prompt Injection: Malicious prompts hidden in user content (emails, docs).
  • Jailbreaks: Bypassing guardrails using clever or multi-step prompting.

Mitigations include retrieval-augmented generation (RAG), output validation, prompt filtering, and user activity monitoring.
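
To make prompt filtering concrete, the sketch below screens retrieved documents for instruction-like phrases before they are appended to an LLM context in a RAG pipeline. The regex patterns and the drop-on-match policy are illustrative assumptions; a production system would pair heuristics like these with a dedicated classifier or filtering service rather than relying on regexes alone.

```python
# Minimal sketch: heuristic screening of retrieved content for indirect prompt injection.
# The patterns are illustrative assumptions, not an exhaustive detector.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the (system|previous) prompt",
    r"you are now",
    r"reveal (your|the) system prompt",
]

def looks_like_injection(document: str) -> bool:
    """Flag documents containing instruction-like phrases aimed at the model."""
    text = document.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

def build_context(retrieved_docs: list[str]) -> str:
    """Keep only documents that pass the screen; log and drop the rest."""
    safe_docs = []
    for doc in retrieved_docs:
        if looks_like_injection(doc):
            print("Dropped a retrieved document flagged as possible prompt injection.")
            continue
        safe_docs.append(doc)
    return "\n\n".join(safe_docs)
```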

Microsoft’s Approach to Securing AI Applications

Securing AI requires embedding Zero Trust principles and responsible AI at every stage. Microsoft supports this through:

Zero Trust Architecture
  • Verify explicitly based on identity and context
  • Use least privilege access controls
  • Assume breach with proactive monitoring and segmentation
Shared Responsibility Model
  • Customer-managed models: You manage model security and data.
  • Microsoft-managed platforms: Microsoft handles infrastructure; you configure securely.
End-to-End Security Controls
  • Protect infrastructure, APIs, orchestration flows, and user prompts.
  • Enforce responsible AI principles: fairness, privacy, accountability, and transparency.
Tools & Ecosystem
  • Microsoft Defender for Cloud: Monitors AI posture and detects threats like credential misuse or jailbreak attempts.
  • Azure AI Foundry: Scans models for embedded risks and unsafe code.
  • Prompt Shields (part of Azure AI Content Safety): Filters harmful prompts and injected instructions in real time (see the sketch below this list).
  • Red Team Tools (e.g., PyRIT): Simulate attacks to harden defenses pre-deployment.
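
To show where Prompt Shields sits in a request pipeline, here is a minimal Python sketch that screens a user prompt (and any retrieved documents) with the Azure AI Content Safety prompt-shield endpoint before forwarding the prompt to the model. The route, API version, and response fields shown are assumptions based on the public Content Safety REST API and may differ by region or SDK version; confirm them against the current documentation.

```python
# Minimal sketch: screening a user prompt with Azure AI Content Safety Prompt Shields
# before it reaches the model. Route, api-version, and response fields are assumptions;
# verify against the current Content Safety documentation.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]   # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]

def prompt_is_safe(user_prompt: str, documents: list[str] | None = None) -> bool:
    """Return False if Prompt Shields reports an attack in the prompt or documents."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:shieldPrompt",
        params={"api-version": "2024-09-01"},          # assumed version; check the docs
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"userPrompt": user_prompt, "documents": documents or []},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()
    if result.get("userPromptAnalysis", {}).get("attackDetected"):
        return False
    return not any(d.get("attackDetected") for d in result.get("documentsAnalysis", []))
```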

Action Steps for Software Companies Securing AI Products

Here’s a focused checklist for AI builders and software development companies:

  1. Embed Security Early
    1. Apply Zero Trust by default
    2. Use identity and access management
    3. Encrypt data in transit and at rest
  2. Leverage Microsoft Security Ecosystem
    1. Enable Defender for Cloud for AI workload protection
    2. Scan models via Azure AI Foundry
    3. Deploy Prompt Shield to defend against jailbreaks and injection attacks
  3. Secure the Supply Chain
    1. Maintain a Software Bill of Materials (SBOM)
    2. Regularly audit and patch dependencies
    3. Sanitize external data inputs
  4. Mitigate LLM-Specific Risks
    1. Validate outputs and restrict unsafe actions (see the sketch after this checklist)
    2. Use RAG to reduce hallucination
    3. Monitor prompt usage and filter malicious patterns
  5. Build for Multi-Tenancy and Compliance
    1. Follow the Azure Well-Architected Framework guidance for Azure OpenAI workloads
    2. Isolate tenant data
    3. Ensure alignment with data residency and privacy laws
  6. Continuously Improve
    1. Conduct regular red teaming
    2. Monitor AI systems in production
    3. Establish incident response playbooks
  7. Foster a Security-First Culture
    1. Share responsibility across engineering, product, and security teams
    2. Train users on AI risks and responsible usage
    3. Update policies to adapt to evolving threats
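
As a concrete illustration of step 4, the sketch below validates a model-proposed tool call against an explicit allow-list before executing it, so the model can only trigger actions your application has deliberately exposed. The action names, expected JSON shape, and parameter rules are illustrative assumptions.

```python
# Minimal sketch: allow-listing model-proposed actions before execution.
# Action names, expected output shape, and parameter rules are illustrative assumptions.
import json

ALLOWED_ACTIONS = {
    "search_kb": {"query"},                  # action name -> permitted parameters
    "create_ticket": {"title", "severity"},
}

def execute_model_action(model_output: str) -> str:
    """Parse a model-proposed action and run it only if it is explicitly allowed."""
    try:
        proposal = json.loads(model_output)  # expect {"action": ..., "params": {...}}
    except json.JSONDecodeError:
        return "Rejected: output was not valid JSON."

    action = proposal.get("action")
    params = proposal.get("params", {})

    if action not in ALLOWED_ACTIONS:
        return f"Rejected: action '{action}' is not on the allow-list."
    if not set(params) <= ALLOWED_ACTIONS[action]:
        return f"Rejected: unexpected parameters for '{action}'."

    # At this point, dispatch to the real, least-privilege implementation.
    return f"Approved: would execute '{action}' with {params}."
```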

Conclusion: Secure AI Is Responsible AI

AI’s potential can only be realized when it is both innovative and secure. By embedding security and responsibility across the AI lifecycle, software companies can deliver solutions that are not only powerful—but trusted, compliant, and resilient.

Updated May 21, 2025
Version 2.0