As artificial intelligence becomes central to software innovation, it also introduces unique security challenges—especially in applications powered by large language models (LLMs). In this edition of the Software Development Company Security Series, we explore the evolving risks facing AI-powered products and share actionable strategies to secure AI solutions throughout the development lifecycle.
Figure 1 - AI Security Exposures*Data based on 2024–2025 global reports from Cyberhaven, Swimlane, FS-ISAC, Capgemini, Palo Alto Networks, and Pillar Security analyzing AI security incidents across sectors.
Understanding the Evolving AI Threat Landscape
AI systems, particularly LLMs, differ from traditional software in one fundamental way: they’re generative, probabilistic, and nondeterministic. This unpredictability opens the door to novel security risks, including:
- Sensitive Data Exposure: Leaked personal or proprietary data via model outputs.
- Prompt Injection: Manipulated inputs crafted to subvert AI behavior.
- Supply Chain Attacks: Risks from compromised training data, open-source models, or third-party libraries.
- Model Poisoning: Insertion of malicious content during training to bias outcomes.
- Jailbreaks & Misuse: Circumventing safeguards to produce unsafe or unethical content.
- Compliance & Trust Risks: Legal, regulatory, and reputational consequences from unvalidated AI outputs.
These risks underscore the need for a security-first approach to designing, deploying, and operating AI systems.
Key Risks: The OWASP Top 10 for LLMs
The OWASP Top 10 LLM Risks offer a framework for understanding threats specific to generative AI. Key entries include:
- Prompt Injection
- Sensitive Data Disclosure
- Model and Data Poisoning
- Excessive Model Permissions
- Hallucination & Misinformation
- System Prompt Leakage
- Vector Embedding Exploits
- Uncontrolled Resource Consumption
Each of these risks presents opportunities for attackers across the AI lifecycle—from model training and prompt design to output handling and API access.
Inherent Risks of LLM-Based Applications
Three core attributes contribute to LLM vulnerabilities:
- Probabilistic Outputs: Same prompt, different results.
- Non-Determinism: Inconsistent behavior, compounded over time.
- Linguistic Flexibility: Prone to manipulation and hallucination.
Common attack scenarios include:
- Hallucination: Fabricated content presented as fact—dangerous in domains like healthcare or legal.
- Indirect Prompt Injection: Malicious prompts hidden in user content (emails, docs).
- Jailbreaks: Bypassing guardrails using clever or multi-step prompting.
Mitigations include retrieval-augmented generation (RAG), output validation, prompt filtering, and user activity monitoring.
Microsoft’s Approach to Securing AI Applications
Securing AI requires embedding Zero Trust principles and responsible AI at every stage. Microsoft supports this through:
Zero Trust Architecture
- Verify explicitly based on identity and context
- Use least privilege access controls
- Assume breach with proactive monitoring and segmentation
Shared Responsibility Model
- Customer-managed models: You manage model security and data.
- Microsoft-managed platforms: Microsoft handles infrastructure; you configure securely.
End-to-End Security Controls
- Protect infrastructure, APIs, orchestration flows, and user prompts.
- Enforce responsible AI principles: fairness, privacy, accountability, and transparency.
Tools & Ecosystem
- Microsoft Defender for Cloud: Monitors AI posture and detects threats like credential misuse or jailbreak attempts.
- Azure AI Foundry: Scans models for embedded risks and unsafe code.
- Prompt Shield: Filters harmful inputs in real-time.
- Red Team Tools (e.g., PyRIT): Simulate attacks to harden defenses pre-deployment.
Action Steps for Software Companies Securing AI Products
Here’s a focused checklist for AI builders and software development companies:
- Embed Security Early
- Apply Zero Trust by default
- Use identity and access management
- Encrypt data in transit and at rest
- Leverage Microsoft Security Ecosystem
- Enable Defender for Cloud for AI workload protection
- Scan models via Azure AI Foundry
- Deploy Prompt Shield to defend against jailbreaks and injection attacks
- Secure the Supply Chain
- Maintain a Software Bill of Materials (SBOM)
- Regularly audit and patch dependencies
- Sanitize external data inputs
- Mitigate LLM-Specific Risks
- Validate outputs and restrict unsafe actions
- Use RAG to reduce hallucination
- Monitor prompt usage and filter malicious patterns
- Build for Multi-Tenancy and Compliance
- Use Well-Architected Framework for OpenAI
- Isolate tenant data
- Ensure alignment with data residency and privacy laws
- Continuously Improve
- Conduct regular red teaming
- Monitor AI systems in production
- Establish incident response playbooks
- Foster a Security-First Culture
- Share responsibility across engineering, product, and security teams
- Train users on AI risks and responsible usage
- Update policies to adapt to evolving threats
Conclusion: Secure AI Is Responsible AI
AI’s potential can only be realized when it is both innovative and secure. By embedding security and responsibility across the AI lifecycle, software companies can deliver solutions that are not only powerful—but trusted, compliant, and resilient.
Explore More
- OWASP Top 10 for Large Language Model Applications | OWASP Foundation
- Overview - AI threat protection - Microsoft Defender for Cloud | Microsoft Learn
- Prompt Shields in Azure AI Content Safety - Azure AI services | Microsoft Learn
- AI Red Teaming Agent - Azure AI Foundry | Microsoft Learn
- AI Trust and AI Risk: Tackling Trust, Risk and Security in AI Models
- What is Azure AI Content Safety? - Azure AI services | Microsoft Learn
- Overview of Responsible AI practices for Azure OpenAI models - Azure AI services | Microsoft Learn
- Architecture Best Practices for Azure OpenAI Service - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Azure OpenAI Landing Zone reference architecture
- AI Workload Documentation - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog
- HiddenLayer Model Scanner helps developers assess the security of open models in the model catalog | Microsoft Community Hub
- Inside AI Security with Mark Russinovich | BRK227
- The Price of Intelligence - ACM Queue