In Part 1 of this series, we tackled the critical challenge of protecting the LLM itself from malicious inputs. We implemented three essential security layers using Azure AI services: harmful content detection with Azure Content Safety, PII protection with Azure Text Analytics, and prompt injection prevention with Prompt Shields. These guardrails ensure that your AI model doesn't process harmful requests or leak sensitive information through cleverly crafted prompts.
But even with a perfectly secured LLM, your entire AI chat system can still be compromised through architectural vulnerabilities.
For example, the WotNot incident wasn't about prompt injection: it was 346,000 files sitting in an unsecured cloud storage bucket. Likewise, the OmniGPT breach exposed 34 million lines of conversation logs through backend database security failures.
The global average cost of a data breach is now $4.44 million, and it takes organizations an average of 241 days to identify and contain an active breach. That's eight months during which attackers have free rein in your systems. The financial cost is one thing, but the reputational damage and loss of customer trust can be irreversible.
This article focuses on the architectural security concerns I mentioned at the end of Part 1—the infrastructure that stores your chat histories, the networks that connect your services, and the databases that power your vector searches. We'll examine real-world breaches that happened in 2024 and 2025, understand exactly what went wrong, and implement Azure solutions that would have prevented them.
By the end of this article, you'll have a production-ready, secure architecture for your AI chat system that addresses the most common—and most devastating—security failures we're seeing in the wild.
Let's start with the most fundamental question: where is your data, and who can access it?
1. Preventing Exposed Storage with Network Isolation
The Problem: When Your Database Is One Google Search Away
Let me paint you a picture with two incidents from 2024 and 2025:
WotNot AI Chatbot left 346,000 files completely exposed in an unsecured cloud storage bucket—passports, medical records, sensitive customer data, all accessible to anyone on the internet without even a password. Security researchers who discovered it tried for over two months to get the company to fix it.
In May 2025, Canva Creators' data was exposed through an unsecured Chroma vector database operated by an AI chatbot company. The database contained 341 collections of documents including survey responses from 571 Canva Creators with email addresses, countries of residence, and comprehensive feedback. This marked the first reported data leak involving a vector database.
The common thread? Public internet accessibility. These databases and storage accounts were accessible from anywhere in the world. No VPN required. No private network. Just a URL and you were in.
Think about your current architecture. If someone found your Cosmos DB connection string or your Azure Storage account name, what's stopping them from accessing it? If your answer is "just the access key" or "firewall rules," you're one leaked credential away from being in the headlines.
What to do: Azure Private Link + Network Isolation
The most effective way to prevent public exposure is simple: remove public internet access entirely. This is where Azure Private Link becomes your architectural foundation.
With Azure Private Link, you can create a private endpoint inside your Azure Virtual Network (VNet) that becomes the exclusive gateway to your Azure services. Your Cosmos DB, Storage Accounts, Azure OpenAI Service, and other resources are completely removed from the public internet—they only respond to requests originating from within your VNet. Even if someone obtains your connection strings or access keys, they cannot use them without first gaining access to your private network.
Implementation Overview:
To implement Private Link for your AI chat system, you'll need to:
- Create an Azure Virtual Network (VNet) to host your private endpoints and application resources
- Configure private endpoints for each service (Cosmos DB, Storage, Azure OpenAI, Key Vault)
- Set up private DNS zones to automatically resolve service URLs to private IPs within your VNet
- Disable public network access on all your Azure resources
- Deploy your application inside the VNet using Azure App Service with VNet integration, Azure Container Apps, or Azure Kubernetes Service
- Verify isolation by attempting to access resources from outside the VNet (should fail)
You can configure this through the Azure Portal, Azure CLI, ARM templates, or infrastructure-as-code tools like Terraform. The Azure documentation provides step-by-step guides for each service type.
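The final verification step can be partially automated. Here is a minimal Python sketch of the idea: after private DNS is configured, a service hostname resolved from inside the VNet should map to an address in your private range. The VNet CIDR and the example hostname are illustrative assumptions, not values from any specific deployment.

```python
import ipaddress

def is_private_endpoint_ip(resolved_ip: str, vnet_cidr: str = "10.0.0.0/16") -> bool:
    """Return True if the IP a service hostname resolves to lies inside the VNet.

    Once Private Link and private DNS zones are in place, resolving e.g.
    'myaccount.documents.azure.com' from inside the VNet should yield an
    address from the private endpoint subnet, not a public Azure IP.
    """
    ip = ipaddress.ip_address(resolved_ip)
    return ip in ipaddress.ip_network(vnet_cidr)

# From inside the VNet you would first resolve the hostname, e.g.:
#   resolved = socket.getaddrinfo("myaccount.documents.azure.com", 443)[0][4][0]
print(is_private_endpoint_ip("10.0.1.5"))      # private endpoint subnet -> True
print(is_private_endpoint_ip("52.168.10.20"))  # public Azure IP -> False
```

Running this check from a host outside the VNet (where the hostname resolves publicly, or not at all) doubles as the "should fail" test in the last bullet above.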
Figure 1: Private Link Architecture for AI Chat Systems
Private endpoints ensure all data access occurs within the Azure Virtual Network, blocking public internet access to databases, storage, and AI services.
2. Protecting Conversation Data with Encryption at Rest
The Problem: When Backend Databases Become Treasure Troves
Network isolation solves the problem of external access, but what happens when attackers breach your perimeter through other means? What if a malicious insider gains access? What if there's a misconfiguration in your cloud environment? The data sitting in your databases becomes the ultimate prize.
In February 2025, OmniGPT suffered a catastrophic breach where attackers accessed the backend database and extracted personal data from 30,000 users including emails, phone numbers, API keys, and over 34 million lines of conversation logs. The exposed data included links to uploaded files containing sensitive credentials, billing details, and API keys.
These weren't prompt injection attacks. These weren't DDoS incidents. These were failures to encrypt sensitive data at rest. When attackers accessed the storage layer, they found everything in readable format—a goldmine of personal information, conversations, and credentials.
Think about the conversations your AI chat system stores. Customer support queries that might include account numbers. Healthcare chatbots discussing symptoms and medications. HR assistants processing employee grievances. If someone gained unauthorized (or even authorized) access to your database today, would they be reading plaintext conversations?
What to do: Azure Cosmos DB with Customer-Managed Keys
The fundamental defense against data exposure is encryption at rest—ensuring that data stored on disk is encrypted and unreadable without the proper decryption keys. Even if attackers gain physical or logical access to your database files, the data remains protected as long as they don't have access to the encryption keys.
But who controls those keys?
With platform-managed encryption (the default in most cloud services), the cloud provider manages the encryption keys. While this protects against many threats, it doesn't protect against insider threats at the provider level, compromised provider credentials, or certain compliance scenarios where you must prove complete key control.
Customer-Managed Keys (CMK) solve this by giving you complete ownership and control of the encryption keys. You generate, store, and manage the keys in your own key vault. The cloud service can only decrypt your data by requesting access to your keys—access that you control and can revoke at any time. If your keys are deleted or access is revoked, even the cloud provider cannot decrypt your data.
Azure makes this easy with Azure Key Vault integrated with Azure Cosmos DB. The architecture uses "envelope encryption" where your data is encrypted with a Data Encryption Key (DEK), and that DEK is itself encrypted with your Key Encryption Key (KEK) stored in Key Vault. This provides layered security where even if the database is compromised, the data remains encrypted with keys only you control.
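To make the envelope-encryption flow concrete, here is a toy Python sketch. The XOR keystream cipher is purely illustrative (the real service uses industry-standard ciphers such as AES, and the KEK never leaves Key Vault); only the generate/encrypt/wrap structure matters.

```python
import secrets, hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream for illustration only -- real systems use AES-GCM or similar.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with a key-derived stream; applying it twice recovers the input.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

# 1. Key Vault holds the KEK (in the real service it never leaves the vault).
kek = secrets.token_bytes(32)

# 2. The database generates a fresh DEK and encrypts the record with it.
dek = secrets.token_bytes(32)
ciphertext = xor_encrypt(dek, b"patient reports mild symptoms")

# 3. The DEK itself is wrapped with the KEK before being stored alongside the data.
wrapped_dek = xor_encrypt(kek, dek)

# Decryption requires unwrapping the DEK with the KEK first -- so revoking
# Key Vault access renders the stored ciphertext unreadable.
recovered_dek = xor_encrypt(kek, wrapped_dek)
plaintext = xor_encrypt(recovered_dek, ciphertext)
print(plaintext)  # b'patient reports mild symptoms'
```

The key point the sketch shows: an attacker who dumps the database gets only `ciphertext` and `wrapped_dek`, both useless without access to the KEK you control.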
While we covered PII detection and redaction using Azure Text Analytics in Part 1—which prevents sensitive data from being stored in the first place—encryption at rest with Customer-Managed Keys provides an additional, powerful layer of protection. In fact, many compliance frameworks like HIPAA, PCI-DSS, and certain government regulations explicitly require customer-controlled encryption for data at rest, making CMK not just a best practice but often a mandatory requirement for regulated industries.
Implementation Overview:
To implement Customer-Managed Keys for your chat history and vector storage:
- Create an Azure Key Vault with purge protection and soft delete enabled (required for CMK)
- Generate or import your encryption key in Key Vault (Cosmos DB CMK supports RSA keys, 2048-bit or larger)
- Grant Cosmos DB access to Key Vault using a system-assigned or user-assigned managed identity
- Enable CMK on Cosmos DB by specifying your Key Vault key URI during account creation or update
- Configure the same for Azure Storage if you're storing embeddings or documents in Blob Storage
- Set up key rotation policies to automatically rotate keys on a schedule (recommended: every 90 days)
- Monitor key usage through Azure Monitor and set up alerts for unauthorized access attempts
Figure 2: Envelope Encryption with Customer-Managed Keys
User conversations are encrypted using a two-layer approach: (1) The AI Chat App sends plaintext messages to Cosmos DB, (2) Cosmos DB authenticates to Key Vault using Managed Identity to retrieve the Key Encryption Key (KEK), (3) Data is encrypted with a Data Encryption Key (DEK), (4) The DEK itself is encrypted with the KEK before storage. This ensures data remains encrypted even if the database is compromised, as decryption requires access to keys stored in your Key Vault.
For AI chat systems in regulated industries (healthcare, finance, government), Customer-Managed Keys should be your baseline. The operational overhead is minimal with proper automation, and the compliance benefits are substantial.
The entire process can be automated using Azure CLI, PowerShell, or infrastructure-as-code tools. For existing Cosmos DB accounts, enabling CMK requires creating a new account and migrating data.
3. Securing Vector Databases and Preventing Data Leakage
The Problem: Vector Embeddings Are Data Too
Vector databases are the backbone of modern RAG (Retrieval-Augmented Generation) systems. They store embeddings—mathematical representations of your documents, conversations, and knowledge base—that allow your AI to retrieve relevant context for every user query. But here's what most developers don't realize: those vectors aren't just abstract numbers. They contain your actual data.
A critical oversight in AI chat architectures is treating vector databases—or in our case, Cosmos DB collections storing embeddings—as less sensitive than traditional data stores. Whether you're using a dedicated vector database or storing embeddings in Cosmos DB alongside your chat history, these mathematical representations need the same rigorous security controls as the original text.
In one documented case, a shared vector database inadvertently mixed data between two corporate clients. One client's proprietary information began surfacing in response to the other client's queries, creating a serious confidentiality breach in what was supposed to be a multi-tenant system.
Even more concerning are embedding inversion attacks, where adversaries exploit weaknesses to reconstruct original source data from its vector representation—effectively reverse-engineering your documents from the mathematical embeddings.
Think about what's in your vector storage right now. Customer support conversations. Internal company documents. Product specifications. Medical records. Legal documents. If you're running a multi-tenant system, are you absolutely certain that Company A can't retrieve Company B's data? Can you guarantee that embeddings can't be reverse-engineered to expose the original text?
What to do: Azure Cosmos DB for MongoDB with Logical Partitioning and RBAC
The security of vector databases requires a multi-layered approach that addresses both storage isolation and access control. Azure Cosmos DB for MongoDB provides native support for vector search while offering enterprise-grade security features specifically designed for multi-tenant architectures.
Logical partitioning creates strict data boundaries within your database by organizing data into isolated partitions based on a partition key (like tenant_id or user_id). When combined with Role-Based Access Control (RBAC), you create a security model where users and applications can only access their designated partitions—even if they somehow gain broader database access.
Implementation Overview:
To implement secure multi-tenant vector storage with Cosmos DB:
- Enable MongoDB RBAC on your Cosmos DB account using the EnableMongoRoleBasedAccessControl capability
- Design your partition key strategy based on tenant_id, user_id, or organization_id for maximum isolation
- Create collections with partition keys that enforce tenant boundaries at the storage level
- Define custom RBAC roles that grant access only to specific databases and partition key ranges
- Create user accounts per tenant or service principal with assigned roles limiting their scope
- Implement partition-aware queries in your application to always include the partition key filter
- Enable diagnostic logging to track all vector retrieval operations with user identity
- Configure cross-region replication for high availability while maintaining partition isolation
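The partition-aware query step above can be sketched as follows. This is an illustrative in-memory stand-in, not the actual pymongo or Cosmos DB API: a thin wrapper pins every filter to the caller's `tenant_id`, so application code can neither forget the partition key nor substitute another tenant's.

```python
class TenantScopedCollection:
    """Pins every query to one tenant's partition.

    Sketch only: '_docs' is a plain list of dicts standing in for a
    Cosmos DB for MongoDB collection partitioned on 'tenant_id'.
    """

    def __init__(self, docs, tenant_id: str):
        self._docs = docs
        self._tenant_id = tenant_id

    def find(self, query: dict) -> list:
        # Reject any attempt to query another tenant's partition outright.
        if query.get("tenant_id", self._tenant_id) != self._tenant_id:
            raise PermissionError("cross-tenant query blocked")
        # Force the partition key into every filter.
        scoped = {**query, "tenant_id": self._tenant_id}
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in scoped.items())]

docs = [
    {"tenant_id": "acme", "doc": "Acme pricing sheet"},
    {"tenant_id": "globex", "doc": "Globex roadmap"},
]
acme = TenantScopedCollection(docs, "acme")
print(acme.find({}))  # only Acme's documents, never Globex's
```

In production this application-level guard complements, rather than replaces, the storage-level RBAC roles: even if a bug slipped an unscoped query through, the role assigned to the tenant's service principal would still block access.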
Figure 3: Multi-Tenant Data Isolation with Partition Keys and RBAC
Azure Cosmos DB enforces tenant isolation through logical partitioning and Role-Based Access Control (RBAC). Each tenant's data is stored in separate partitions (Partition A, B, C) based on the partition key (tenant_id). RBAC acts as a security gateway, validating every query to ensure users can only access their designated partition. Attempts to access other tenants' partitions are blocked at the RBAC layer, preventing cross-tenant data leakage in multi-tenant AI chat systems.
Azure provides comprehensive documentation and CLI tools for configuring RBAC roles and partition strategies. The key is to design your partition scheme before loading data, as changing partition keys requires data migration.
Beyond partitioning and RBAC, implement these AI-specific security measures:
- Validate embedding sources: Authenticate and continuously audit external data sources before vectorizing to prevent poisoned embeddings
- Implement similarity search thresholds: Set minimum similarity scores to prevent irrelevant cross-context retrieval
- Use metadata filtering: Add security labels (classification levels, access groups) to vector metadata and enforce filtering
- Monitor retrieval patterns: Alert on unusual patterns like one tenant making queries that correlate with another tenant's data
- Separate vector databases per sensitivity level: Keep highly confidential vectors (PII, PHI) in dedicated databases with stricter controls
- Hash document identifiers: Use hashed references instead of plaintext IDs in vector metadata to prevent enumeration attacks
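Three of these measures (similarity thresholds, metadata security labels, and hashed document identifiers) can be sketched together. The threshold value, field names, and metadata shape below are illustrative assumptions, not any specific vector library's API.

```python
import hashlib

MIN_SIMILARITY = 0.75  # assumed threshold; tune per workload

def hash_doc_id(doc_id: str) -> str:
    """Store hashed references in vector metadata to prevent ID enumeration."""
    return hashlib.sha256(doc_id.encode()).hexdigest()[:16]

def filter_candidates(candidates, user_groups: set, min_score: float = MIN_SIMILARITY):
    """Apply similarity-threshold and security-label filtering to raw vector hits.

    Each candidate is (score, metadata), where metadata carries an
    'access_groups' label written at indexing time.
    """
    return [
        (score, meta) for score, meta in candidates
        if score >= min_score and meta["access_groups"] & user_groups
    ]

candidates = [
    (0.91, {"doc": hash_doc_id("hr/grievance-42"), "access_groups": {"hr"}}),
    (0.88, {"doc": hash_doc_id("eng/design-7"), "access_groups": {"eng"}}),
    (0.40, {"doc": hash_doc_id("hr/policy-1"), "access_groups": {"hr"}}),
]
# An HR user sees only high-similarity, HR-labeled hits.
print(filter_candidates(candidates, user_groups={"hr"}))
```

Note that the security labels are enforced *after* the vector search returns, which is why they are a complement to, not a substitute for, the partition-level isolation described above.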
For production AI chat systems handling multiple customers or sensitive data, Cosmos DB with partition-based RBAC should be your baseline. The combination of storage-level isolation and access control provides defense in depth that application-layer filtering alone cannot match.
Bonus: Secure Logging and Monitoring for AI Chat Systems
During development, we habitually log everything—full request payloads, user inputs, model responses, stack traces. It's essential for debugging. But when your AI chat system goes to production and starts handling real user conversations, those same logging practices become a liability.
Think about what flows through your AI chat system: customer support conversations containing account numbers, healthcare queries discussing medical conditions, HR chatbots processing employee complaints, financial assistants handling transaction details. If you're logging full conversations for debugging, you're creating a secondary repository of sensitive data that's often less protected than your primary database.
The average breach takes 241 days to identify and contain. During that time, attackers often exfiltrate not just production databases, but also log files and monitoring data—places where developers never expected sensitive information to end up.
The question becomes: how do you maintain observability and debuggability without creating a security nightmare?
The Solution: Structured Logging with PII Redaction and Azure Monitor
The key is to log metadata, not content. You need enough information to trace issues and understand system behavior without storing the actual sensitive conversations.
Azure Monitor with Application Insights provides enterprise-grade logging infrastructure with built-in features for sanitizing sensitive data. Combined with proper application-level controls, you can maintain full observability while protecting user privacy.
What to Log in Production AI Chat Systems:
| DO Log | DON'T Log |
|---|---|
| Request timestamps and duration | Full user messages or prompts |
| User IDs (hashed or anonymized) | Complete model responses |
| Session IDs (hashed) | Raw embeddings or vectors |
| Model names and versions used | Personally identifiable information (PII) |
| Token counts (input/output) | Retrieved document content |
| Embedding dimensions and similarity scores | Database connection strings or API keys |
| Retrieved document IDs (not content) | Complete stack traces that might contain data |
| Error codes and exception types | |
| Performance metrics (latency, throughput) | |
| RBAC decisions (access granted/denied) | |
| Partition keys accessed | |
| Rate limiting triggers | |
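The "log metadata, not content" rule translates into code roughly like this. A minimal sketch of a metadata-only chat logger plus a regex-based redaction fallback; the field names and patterns are illustrative assumptions, and a production system would lean on Azure Text Analytics PII detection (Part 1) rather than ad-hoc regexes.

```python
import hashlib, json, logging, re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
DIGITS = re.compile(r"\b\d{6,}\b")  # long digit runs: account/phone numbers

def redact(text: str) -> str:
    """Last-resort scrub for any free text that must be logged anyway."""
    return DIGITS.sub("[NUM]", EMAIL.sub("[EMAIL]", text))

def log_chat_turn(logger, user_id: str, message: str, response: str, latency_ms: float):
    """Emit metadata, never content: IDs are hashed, free text is never logged."""
    logger.info(json.dumps({
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:12],
        "prompt_chars": len(message),     # size only, not the text itself
        "response_chars": len(response),
        "latency_ms": latency_ms,
    }))

logging.basicConfig(level=logging.INFO)
log_chat_turn(logging.getLogger("chat"), "alice@contoso.com",
              "My account 12345678 is locked", "Here is how to reset...", 231.0)
print(redact("Contact alice@contoso.com about account 12345678"))
# -> Contact [EMAIL] about account [NUM]
```

Shipped to Application Insights as structured JSON, these records give you latency percentiles, per-user request tracing (via the hashed ID), and error correlation, without ever creating that secondary repository of plaintext conversations.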
Final Remarks: Building Compliant, Secure AI Systems
Throughout this two-part series, we've addressed the complete security spectrum for AI chat systems—from protecting the LLM itself to securing the underlying infrastructure. But there's a broader context that makes all of this critical: compliance and regulatory requirements.
AI chat systems operate within an increasingly complex regulatory landscape. The EU AI Act, which entered into force on August 1, 2024, became the first comprehensive AI regulation by a major regulator, assigning applications to risk categories with high-risk systems subject to specific legal requirements. The NIS2 Directive further requires that AI model endpoints, APIs, and data pipelines be protected to prevent breaches and ensure secure deployment.
Beyond AI-specific regulations, chat systems must comply with established data protection frameworks depending on their use case. GDPR mandates data minimization, user rights to erasure and data portability, 72-hour breach notification, and EU data residency for systems serving European users. Healthcare chatbots must meet HIPAA requirements including encryption, access controls, 6-year audit log retention, and Business Associate Agreements. Systems processing payment information fall under PCI-DSS, requiring cardholder data isolation, encryption, role-based access controls, and regular security testing. B2B SaaS platforms typically need SOC 2 Type II compliance, demonstrating security controls over data availability, confidentiality, continuous monitoring, and incident response procedures.
Azure's architecture directly supports these compliance requirements through its built-in capabilities. Private Link enables data residency by keeping traffic within specified Azure regions while supporting network isolation requirements. Customer-Managed Keys provide the encryption controls and key ownership mandated by HIPAA and PCI-DSS. Cosmos DB's partition-based RBAC creates the access controls and audit trails required across all frameworks. Azure Monitor and diagnostic logging satisfy audit and monitoring requirements, while Azure Policy and Microsoft Purview automate compliance enforcement and reporting. The platform's certifications and compliance offerings (including HIPAA, PCI-DSS, SOC 2, and GDPR attestations) provide the documentation and third-party validation that auditors require, significantly reducing the operational burden of maintaining compliance.
Further Resources:
- Azure Private Link Documentation
- Azure Cosmos DB Customer-Managed Keys
- Azure Key Vault Overview
- Azure Cosmos DB Role-Based Access Control
- Azure Monitor and Application Insights
- Azure Policy for Compliance
- Microsoft Purview Data Governance
- Azure Security Benchmark
Stay secure, stay compliant, and build responsibly.