Microsoft Entra: Building Trust in a Borderless Digital World
As nonprofits embrace hybrid work, multi-cloud environments, and digital transformation to better serve their missions, the need for secure, intelligent access has never been greater. Traditional identity solutions often fall short in protecting diverse user groups like staff, volunteers, donors, and partners. Microsoft Entra offers a unified family of identity and network access products designed to verify every identity, validate every access request, and secure every connection—helping nonprofits stay resilient, compliant, and mission-focused.

What Is Microsoft Entra?

The suite includes:

Microsoft Entra ID (formerly Azure Active Directory): A cloud-based identity and access management service that supports Single Sign-On (SSO), Multifactor Authentication (MFA), and Conditional Access policies to protect users, apps, and resources.
Microsoft Entra ID Governance: Automates identity lifecycle management, ensuring users have the right access at the right time—and nothing more. It supports access reviews, role-based access control, and policy enforcement.
Microsoft Entra External ID: Manages secure access for external users like customers, partners, and vendors. It enables personalized, secure experiences without compromising internal systems.
Microsoft Entra Private Access: Provides secure, VPN-less access to private apps and resources across hybrid and multi-cloud environments. It is ideal for remote work scenarios and legacy app support.
Microsoft Entra Internet Access: Offers secure web access with identity-aware controls, helping protect users from malicious sites and enforcing compliance policies.
Why Microsoft Entra Matters for Nonprofits

Unified Identity Protection: Secures access for any identity—human or workload—to any resource, from anywhere.
Zero Trust Enablement: Verifies every access request based on identity, device health, location, and risk level.
Multi-cloud and Hybrid Ready: Works across Microsoft 365, Azure, AWS, Google Cloud, and on-premises environments.
Compliance and Governance: Supports nonprofit regulatory needs with automated access reviews, audit trails, and policy enforcement.

Getting Started with Microsoft Entra

Assess your security posture through Microsoft Secure Score – Helps nonprofits monitor and improve identity, device, and app security posture.
Building Conditional Access policies in Microsoft Entra – Create policies to protect users and data based on risk, location, and device health.
Create a lifecycle workflow – Automate onboarding, role changes, and offboarding for staff, volunteers, and contractors.
Microsoft Entra External ID documentation – Manage secure access for donors, partners, and community members.

Real-World Impact

A global nonprofit recently used Microsoft Entra to streamline access for volunteers, staff, and external partners. By automating identity governance and enabling secure access to cloud apps, they reduced administrative overhead and improved security posture—without sacrificing user experience.

Conclusion

Microsoft Entra empowers nonprofits to modernize identity and access management with a unified, secure, and intelligent approach. Whether you're enabling remote work, collaborating with external partners, or safeguarding sensitive donor data, Entra provides the tools to build trust, enforce least privilege, and stay compliant.
By adopting Entra, nonprofits can focus more on their mission and less on managing risk—ensuring that every connection is secure, every identity is verified, and every access is governed.

Step-by-Step Guide: How to use Temporary Access Pass (TAP) with internal guest users
Passwords are fundamentally weak and vulnerable to being compromised. Even enhancing a password only delays an attack; it does not render it unbreakable. Multi-Factor Authentication (MFA) offers more security but still depends on passwords. This is why passwordless authentication is a more secure and convenient alternative.

Source: https://learn.microsoft.com/entra/identity/authentication/media/concept-authentication-passwordless/passwordless-convenience-security.png

Microsoft Entra ID supports passwordless authentication natively, with six different options:

Windows Hello for Business
Platform Credential for macOS
Platform single sign-on (PSSO) for macOS with smart card authentication
Microsoft Authenticator
Passkeys (FIDO2)
Certificate-based authentication

Organisations can select the most convenient options based on their requirements. However, the initial setup requires a method to authenticate the user before onboarding other passwordless authentication methods. For this, we can use:

1) Existing Microsoft MFA methods
2) Temporary Access Pass (TAP)

A Temporary Access Pass (TAP) is a time-limited passcode that can be configured for single use or multiple sign-ins. Organisations not only have internal users to manage but also guest users. Until now, the TAP method was only available for internal users, and guest users were not permitted to use this method. This makes sense because if guest users also need to use passwordless authentication, it should occur in their home tenant. But now Entra ID supports TAP for "internal guest" users.

Internal Guests

Guest users are typically categorised as user accounts that exist in a remote tenant. However, some organisations prefer to use user accounts in their own directory but with guest-level access. This is typically for contractors, suppliers, vendors, etc. These are known as 'internal guest accounts'.
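Beyond the portal, a TAP can also be created programmatically through the Microsoft Graph authentication methods API (POST /users/{id}/authentication/temporaryAccessPassMethods, called with an app that holds the UserAuthenticationMethod.ReadWrite.All permission). Below is a minimal sketch that only builds the request; the 10-480 minute lifetime bounds shown here are assumptions to check against your tenant's TAP policy:

```python
def build_tap_request(user_id: str, lifetime_minutes: int = 60,
                      one_time_use: bool = True) -> dict:
    """Build a Microsoft Graph request that creates a Temporary Access Pass."""
    # Assumed policy bounds; verify against your tenant's TAP settings.
    if not 10 <= lifetime_minutes <= 480:
        raise ValueError("TAP lifetime must be between 10 and 480 minutes")
    return {
        "method": "POST",
        "url": ("https://graph.microsoft.com/v1.0/users/"
                f"{user_id}/authentication/temporaryAccessPassMethods"),
        "body": {
            "lifetimeInMinutes": lifetime_minutes,
            "isUsableOnce": one_time_use,
        },
    }

# Example: a 2-hour, single-use pass for a hypothetical internal guest account.
req = build_tap_request("contractor@contoso.com", lifetime_minutes=120)
```

The Graph response to this request includes the generated temporaryAccessPass value, which you hand to the user for their first sign-in.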
Such accounts were also used for guest users in the past, when B2B collaboration wasn't in place. In this demo, I am going to demonstrate how to use TAP with an internal guest user.

Enable TAP as an Authentication Method

Before we configure TAP for a user, we need to make sure TAP is enabled as an authentication method. To do that:

1. Log in to the Entra portal as an Authentication Policy Administrator or higher.
2. Navigate to Protection > Authentication methods > Policies.
3. Click on Temporary Access Pass.
4. Ensure it is enabled and the target is defined. If not, make the necessary changes and click Save.

Create TAP for an Internal Guest User

I already have an internal guest user for this task. As you can see below, the user type is Guest, but the user is still part of the same tenant. To create a TAP:

1. Click on the selected user from the Entra ID users list to go to user properties.
2. Click on Authentication methods.
3. Click on + Add authentication method.
4. From the drop-down, select the Temporary Access Pass method.
5. In the settings window, make the adjustments based on the requirements and then click on Add.

It will create the TAP as expected.

Testing

To verify the configuration, I am attempting to log in as the test user. This is the user's very first login. As expected, the initial login prompts for the TAP. After a successful login, it allows me to configure the account with passwordless authentication. As we can see, the TAP for internal guests feature is working as expected.

Securing Productivity with Microsoft 365 Copilot
Join the Greek MVP Community hosting Joanna Vathis for an insightful live session about Copilot in M365, Data Protection, Privacy, and Responsible AI.

Agenda:

What is Microsoft 365 Copilot?
Data Protection Architecture
Privacy & Compliance Commitments
Responsible AI Practices
Security Features & Risk Mitigation
Deployment Best Practices
Q&A

Join for free!

Comprehensive Identity Protection—Across Cloud and On-Premises
In hybrid IT environments, identity is the new perimeter—and protecting it requires visibility across both cloud and on-premises systems. While Microsoft Entra secures cloud identities with intelligent access controls, Microsoft Defender for Identity brings deep insight into your on-premises Active Directory. Together, they form a powerful duo for comprehensive identity protection.

Why Hybrid Identity Protection Matters

Most organizations haven't fully moved to the cloud. Legacy systems, on-prem applications, and hybrid user scenarios are still common, and attackers know it. They exploit these gaps using techniques like:

Pass-the-Hash and Pass-the-Ticket attacks
Credential stuffing and brute-force logins
Privilege escalation and lateral movement

Without visibility into on-prem identity activity, these threats can go undetected. That's where Defender for Identity steps in.

What Is Microsoft Defender for Identity?

Defender for Identity is part of Microsoft Defender XDR—a cloud-based solution that monitors on-premises Active Directory for suspicious behavior. It uses behavioral analytics and threat intelligence to detect identity-based attacks in real time.

Key capabilities:

Detects compromised accounts and insider threats
Monitors lateral movement and privilege escalation
Surfaces risky users and abnormal access patterns
Integrates with Microsoft 365 Defender and Sentinel for unified response

Why It Pairs Perfectly with Microsoft Entra

Microsoft Entra (formerly Azure AD) protects cloud identities with features like Conditional Access, Multifactor Authentication, and Identity Governance. But Entra alone can't see what's happening in your on-prem AD.
By combining Entra and Defender for Identity, you get:

End-to-end visibility across cloud and on-prem environments
Real-time threat detection for suspicious activities like lateral movement, privilege escalation, and domain dominance
Behavioral analytics to identify compromised accounts and insider threats
Integrated response capabilities to contain threats quickly and minimize impact
Actionable insights that help strengthen your identity posture and reduce risk

Together, they deliver comprehensive identity protection—giving you the clarity, control, and confidence to defend against modern threats.

Real-World Impact

Imagine a scenario where an attacker gains access to a legacy on-prem account and begins moving laterally across systems. Defender for Identity detects the unusual behavior and flags the account as risky. Entra then blocks cloud access based on Conditional Access policies tied to that risk signal—stopping the attack before it spreads.

Getting Started

Deploy Defender for Identity sensors on your domain controllers:

Install a sensor – Step-by-step instructions to install Defender for Identity sensors on your domain controllers to begin monitoring on-premises identity activity.
Activate the sensor on a domain controller – Guidance on activating the installed sensor to ensure it starts collecting and analyzing data.
Deployment overview – A high-level walkthrough of the Defender for Identity deployment process, including prerequisites and architecture.

Connect Defender for Identity to Microsoft 365 Defender:

Integration in the Microsoft Defender portal – Learn how to connect Defender for Identity to Microsoft 365 Defender for centralized threat detection and response.
Pilot and deploy Defender for Identity – Best practices for piloting Defender for Identity in your environment before full-scale deployment.
Enable risk-based Conditional Access in Entra:

Configure risk policies in Entra ID Protection – Instructions for setting up risk-based policies that respond to identity threats in real time.
Risk-based access policies overview – An overview of how Conditional Access uses risk signals to enforce adaptive access controls.

Use Entra ID Governance to enforce least privilege:

Understanding least privilege with Entra ID Governance – Explains how to apply least privilege principles using Entra's governance tools.
Best practices for secure deployment – Recommendations for securely deploying Entra ID Governance to minimize identity-related risks.

Integrate both with Microsoft Sentinel for advanced hunting:

Microsoft Defender XDR integration with Sentinel – How to connect Defender for Identity and other Defender components to Microsoft Sentinel for unified security operations.
Send Entra ID data to Sentinel – Instructions for streaming Entra ID logs and signals into Sentinel for deeper analysis.
Microsoft Sentinel data connectors – A catalog of available data connectors, including those for Entra and Defender for Identity, to expand your threat detection capabilities.

Final Thoughts

It's the perfect time to evaluate your identity protection strategy. By pairing Microsoft Entra with Defender for Identity, you gain full visibility across your hybrid environment—so you can detect threats early, respond quickly, and protect every identity with confidence. Ready to strengthen your identity perimeter? Start by deploying Defender for Identity and configuring Entra policies today.

Want to Avoid Accidentally Deleting Your Resources in Azure? It's Easier Than You Think
Sometimes, knowingly or unknowingly, you might delete a resource group in Azure. In this article, let's talk about how to configure Azure resource locks to protect resources from being deleted or modified accidentally.

Partner Blog | Unifying the data platform for real-time insights
Every organization is looking for quicker insights, stronger security, and new ways to drive innovation with AI. The real challenge lies in connecting and preparing the data that makes those outcomes possible. Across industries, customers are realizing that data sprawl and disconnected systems limit what AI can do. To unlock the potential of their data and deliver real-time impact, they need a modern data foundation that's integrated, governed, secure, and ready for AI.

This post is part three of our Cloud and AI Platforms blog series, detailing how partners can accelerate customer transformation across the Microsoft Cloud. In part one, we looked at the market opportunity created by AI and cloud innovation. Part two focused on migration and modernization as the foundation for AI-powered growth. Now we turn to the next phase of that journey: building a unified data platform that connects every source, fuels intelligent applications, and drives measurable outcomes.

Microsoft partners are at the center of this opportunity. By enabling customers to unify their data estate with solutions like Microsoft Fabric, Azure Databricks, and Microsoft Purview, partners are critical in helping organizations turn information into action and insight into innovation.

Unify the data estate

Bringing all data together across on-premises, cloud, and industry-specific sources is the first step toward intelligence at scale. With Microsoft Fabric and Azure Databricks, partners can create an open, lake-centric foundation that simplifies analytics and operational data for AI. Continue reading here.

Sign up for Microsoft Partner Incentives performance measurement reports
As part of a larger push to empower you to deliver exceptional business outcomes, we've introduced incentives performance measurement reports, available through email. Delivered during the first week of every month, these reports include presales and post-sales partner performance measurements and earning cap status so you can get essential insights into your engagements across hero investments. These hero investments include Azure Accelerate, AI Workforce: Microsoft 365 Copilot + Power Accelerate, AI Business Process, and Security Activities.

Here's what to expect:

Performance requirements pausing policy: Starting on November 15, 2025,* partners who are not meeting performance requirements will be paused from nominating new claims. This applies only to new nominations; in-progress projects will not be affected. Further, partners who have reached their earning cap and have nominations paused will be notified when they are eligible to nominate projects again.
Reactivation of nominations: Paused partners will be reviewed each Tuesday by Microsoft to determine if they meet performance thresholds. Those who meet thresholds will be notified by email within two business days after their review and reactivated within seven business days after their review.
Earning cap reviews: Partners who are approaching their earning cap and are meeting performance benchmarks will be considered for an earning cap increase based on budget availability and quality of claims. Claim quality can be affected by factors such as accuracy of your activity reporting, number of duplicate claims, and how your investments are leveraged.
Increased cap value: If you're approved for an earning cap increase, you can expect up to 50% above the engagement's initial cap. Earning cap extension approval is not guaranteed.
Earning cap pausing: Partners who have reached their earning cap and did not receive a notification of an increased cap based on review will be paused from submitting new claims and will be notified. This pause applies only to new nominations; in-progress projects will remain unaffected. No partner action is required, and Microsoft will proactively review and communicate decisions about earning caps.

Review the updated policy for partner performance measurements and earning caps (available November 1, 2025). Sign up now for monthly reports so you can fuel transformation with expertise and funding and drive faster time to value for your projects. Sign up for monthly reports today!

*Starting December 1, 2025, Security Activities will issue monthly partner performance reports, and beginning January 1, 2026, partners not meeting requirements will be paused from nominating new claims.

Building Secure AI Chat Systems: Part 2 - Securing Your Architecture from Storage to Network
In Part 1 of this series, we tackled the critical challenge of protecting the LLM itself from malicious inputs. We implemented three essential security layers using Azure AI services: harmful content detection with Azure Content Safety, PII protection with Azure Text Analytics, and prompt injection prevention with Prompt Shields. These guardrails ensure that your AI model doesn't process harmful requests or leak sensitive information through cleverly crafted prompts.

But even with a perfectly secured LLM, your entire AI chat system can still be compromised through architectural vulnerabilities. For example, the WotNot incident wasn't about prompt injection—it was 346,000 files sitting in an unsecured cloud storage bucket. Likewise, the OmniGPT breach exposed 34 million lines of conversation logs due to backend database security failures. The global average cost of a data breach is now $4.44 million, and it takes organizations an average of 241 days to identify and contain an active breach. That's eight months where attackers have free rein in your systems. The financial cost is one thing, but the reputational damage and loss of customer trust are irreversible.

This article focuses on the architectural security concerns I mentioned at the end of Part 1—the infrastructure that stores your chat histories, the networks that connect your services, and the databases that power your vector searches. We'll examine real-world breaches that happened in 2024 and 2025, understand exactly what went wrong, and implement Azure solutions that would have prevented them. By the end of this article, you'll have a production-ready, secure architecture for your AI chat system that addresses the most common—and most devastating—security failures we're seeing in the wild.

Let's start with the most fundamental question: where is your data, and who can access it?
1. Preventing Exposed Storage with Network Isolation

The Problem: When Your Database Is One Google Search Away

Let me paint you a picture of what happened with two incidents in 2024-2025:

WotNot AI Chatbot left 346,000 files completely exposed in an unsecured cloud storage bucket—passports, medical records, sensitive customer data, all accessible to anyone on the internet without even a password. Security researchers who discovered it tried for over two months to get the company to fix it.

In May 2025, Canva Creators' data was exposed through an unsecured Chroma vector database operated by an AI chatbot company. The database contained 341 collections of documents, including survey responses from 571 Canva Creators with email addresses, countries of residence, and comprehensive feedback. This marked the first reported data leak involving a vector database.

The common thread? Public internet accessibility. These databases and storage accounts were accessible from anywhere in the world. No VPN required. No private network. Just a URL and you were in.

Think about your current architecture. If someone found your Cosmos DB connection string or your Azure Storage account name, what's stopping them from accessing it? If your answer is "just the access key" or "firewall rules," you're one leaked credential away from being in the headlines.

What to do: Azure Private Link + Network Isolation

The most effective way to prevent public exposure is simple: remove public internet access entirely. This is where Azure Private Link becomes your architectural foundation. With Azure Private Link, you can create a private endpoint inside your Azure Virtual Network (VNet) that becomes the exclusive gateway to your Azure services. Your Cosmos DB, Storage Accounts, Azure OpenAI Service, and other resources are completely removed from the public internet—they only respond to requests originating from within your VNet.
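A quick way to sanity-check this isolation is to resolve the service hostname and confirm it lands on a private address. Here is a minimal sketch, assuming it runs from a VM or container inside the VNet, and that myaccount.documents.azure.com stands in for your own endpoint: with the private DNS zone in place, the hostname should resolve to a private IP, while from outside the VNet resolution should fail or return a public one.

```python
import ipaddress
import socket

def resolves_to_private_ip(hostname: str) -> bool:
    """True if hostname resolves to a private (RFC 1918 or loopback) address."""
    ip = ipaddress.ip_address(socket.gethostbyname(hostname))
    return ip.is_private

# Run from inside the VNet: a correctly wired private endpoint should
# resolve to an address in your VNet's private range (e.g. 10.0.x.x).
# print(resolves_to_private_ip("myaccount.documents.azure.com"))
```

This is a smoke test, not proof of isolation; you should still confirm that public network access is disabled on each resource.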
Even if someone obtains your connection strings or access keys, they cannot use them without first gaining access to your private network.

Implementation Overview:

To implement Private Link for your AI chat system, you'll need to:

1. Create an Azure Virtual Network (VNet) to host your private endpoints and application resources.
2. Configure private endpoints for each service (Cosmos DB, Storage, Azure OpenAI, Key Vault).
3. Set up private DNS zones to automatically resolve service URLs to private IPs within your VNet.
4. Disable public network access on all your Azure resources.
5. Deploy your application inside the VNet using Azure App Service with VNet integration, Azure Container Apps, or Azure Kubernetes Service.
6. Verify isolation by attempting to access resources from outside the VNet (this should fail).

You can configure this through the Azure Portal, Azure CLI, ARM templates, or infrastructure-as-code tools like Terraform. The Azure documentation provides step-by-step guides for each service type.

Figure 1: Private Link Architecture for AI Chat Systems. Private endpoints ensure all data access occurs within the Azure Virtual Network, blocking public internet access to databases, storage, and AI services.

2. Protecting Conversation Data with Encryption at Rest

The Problem: When Backend Databases Become Treasure Troves

Network isolation solves the problem of external access, but what happens when attackers breach your perimeter through other means? What if a malicious insider gains access? What if there's a misconfiguration in your cloud environment? The data sitting in your databases becomes the ultimate prize.

In February 2025, OmniGPT suffered a catastrophic breach where attackers accessed the backend database and extracted personal data from 30,000 users, including emails, phone numbers, API keys, and over 34 million lines of conversation logs. The exposed data included links to uploaded files containing sensitive credentials, billing details, and API keys.
These weren't prompt injection attacks. These weren't DDoS incidents. These were failures to encrypt sensitive data at rest. When attackers accessed the storage layer, they found everything in readable format—a goldmine of personal information, conversations, and credentials.

Think about the conversations your AI chat system stores. Customer support queries that might include account numbers. Healthcare chatbots discussing symptoms and medications. HR assistants processing employee grievances. If someone gained unauthorized (or even authorized) access to your database today, would they be reading plaintext conversations?

What to do: Azure Cosmos DB with Customer-Managed Keys

The fundamental defense against data exposure is encryption at rest—ensuring that data stored on disk is encrypted and unreadable without the proper decryption keys. Even if attackers gain physical or logical access to your database files, the data remains protected as long as they don't have access to the encryption keys. But who controls those keys?

With platform-managed encryption (the default in most cloud services), the cloud provider manages the encryption keys. While this protects against many threats, it doesn't protect against insider threats at the provider level, compromised provider credentials, or certain compliance scenarios where you must prove complete key control.

Customer-Managed Keys (CMK) solve this by giving you complete ownership and control of the encryption keys. You generate, store, and manage the keys in your own key vault. The cloud service can only decrypt your data by requesting access to your keys—access that you control and can revoke at any time. If your keys are deleted or access is revoked, even the cloud provider cannot decrypt your data. Azure makes this easy with Azure Key Vault integrated with Azure Cosmos DB.
The architecture uses "envelope encryption": your data is encrypted with a Data Encryption Key (DEK), and that DEK is itself encrypted with your Key Encryption Key (KEK) stored in Key Vault. This provides layered security where, even if the database is compromised, the data remains encrypted with keys only you control.

While we covered PII detection and redaction using Azure Text Analytics in Part 1—which prevents sensitive data from being stored in the first place—encryption at rest with Customer-Managed Keys provides an additional, powerful layer of protection. In fact, many compliance frameworks like HIPAA, PCI-DSS, and certain government regulations explicitly require customer-controlled encryption for data at rest, making CMK not just a best practice but often a mandatory requirement for regulated industries.

Implementation Overview:

To implement Customer-Managed Keys for your chat history and vector storage:

1. Create an Azure Key Vault with purge protection and soft delete enabled (required for CMK).
2. Generate or import your encryption key in Key Vault (2048-bit RSA or 256-bit AES keys).
3. Grant Cosmos DB access to Key Vault using a system-assigned or user-assigned managed identity.
4. Enable CMK on Cosmos DB by specifying your Key Vault key URI during account creation or update.
5. Configure the same for Azure Storage if you're storing embeddings or documents in Blob Storage.
6. Set up key rotation policies to automatically rotate keys on a schedule (recommended: every 90 days).
7. Monitor key usage through Azure Monitor and set up alerts for unauthorized access attempts.

Figure 2: Envelope Encryption with Customer-Managed Keys. User conversations are encrypted using a two-layer approach: (1) the AI chat app sends plaintext messages to Cosmos DB, (2) Cosmos DB authenticates to Key Vault using Managed Identity to retrieve the Key Encryption Key (KEK), (3) data is encrypted with a Data Encryption Key (DEK), (4) the DEK itself is encrypted with the KEK before storage.
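To make the two-layer scheme concrete, here is a deliberately simplified sketch of envelope encryption. The XOR "cipher" and the in-memory keys are toys for illustration only; in the real architecture the KEK never leaves Key Vault and the actual encryption is AES handled by the Azure platform.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; real systems use AES.
    stream = (key * (len(data) // len(key) + 1))[: len(data)]
    return bytes(d ^ k for d, k in zip(data, stream))

# KEK: in production this lives in Key Vault and never leaves it.
kek = secrets.token_bytes(32)
# DEK: generated by the database engine to encrypt the actual data.
dek = secrets.token_bytes(32)

# Layer 1: encrypt the conversation data with the DEK.
plaintext = b"user: what is my account balance?"
ciphertext = xor_cipher(plaintext, dek)

# Layer 2: wrap the DEK with the KEK; only the wrapped DEK is stored
# alongside the data, and the plaintext DEK is discarded.
wrapped_dek = xor_cipher(dek, kek)

# Decryption requires Key Vault access to unwrap the DEK first.
recovered_dek = xor_cipher(wrapped_dek, kek)
assert xor_cipher(ciphertext, recovered_dek) == plaintext
```

The point of the layering is visible in the last two lines: revoking access to the KEK makes the stored wrapped DEK useless, and with it every row it encrypted.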
This ensures data remains encrypted even if the database is compromised, as decryption requires access to keys stored in your Key Vault.

For AI chat systems in regulated industries (healthcare, finance, government), Customer-Managed Keys should be your baseline. The operational overhead is minimal with proper automation, and the compliance benefits are substantial. The entire process can be automated using Azure CLI, PowerShell, or infrastructure-as-code tools. For existing Cosmos DB accounts, enabling CMK requires creating a new account and migrating data.

3. Securing Vector Databases and Preventing Data Leakage

The Problem: Vector Embeddings Are Data Too

Vector databases are the backbone of modern RAG (Retrieval-Augmented Generation) systems. They store embeddings—mathematical representations of your documents, conversations, and knowledge base—that allow your AI to retrieve relevant context for every user query. But here's what most developers don't realize: those vectors aren't just abstract numbers. They contain your actual data.

A critical oversight in AI chat architectures is treating vector databases—or in our case, Cosmos DB collections storing embeddings—as less sensitive than traditional data stores. Whether you're using a dedicated vector database or storing embeddings in Cosmos DB alongside your chat history, these mathematical representations need the same rigorous security controls as the original text.

In documented cases, shared vector databases inadvertently mixed data between two corporate clients. One client's proprietary information began surfacing in response to the other client's queries, creating a serious confidentiality breach in what was supposed to be a multi-tenant system. Even more concerning are embedding inversion attacks, where adversaries exploit weaknesses to reconstruct original source data from its vector representation—effectively reverse-engineering your documents from the mathematical embeddings.
Think about what's in your vector storage right now. Customer support conversations. Internal company documents. Product specifications. Medical records. Legal documents. If you're running a multi-tenant system, are you absolutely certain that Company A can't retrieve Company B's data? Can you guarantee that embeddings can't be reverse-engineered to expose the original text?

What to do: Azure Cosmos DB for MongoDB with Logical Partitioning and RBAC

The security of vector databases requires a multi-layered approach that addresses both storage isolation and access control. Azure Cosmos DB for MongoDB provides native support for vector search while offering enterprise-grade security features specifically designed for multi-tenant architectures.

Logical partitioning creates strict data boundaries within your database by organizing data into isolated partitions based on a partition key (like tenant_id or user_id). When combined with Role-Based Access Control (RBAC), you create a security model where users and applications can only access their designated partitions—even if they somehow gain broader database access.
Implementation Overview:

To implement secure multi-tenant vector storage with Cosmos DB:

1. Enable MongoDB RBAC on your Cosmos DB account using the EnableMongoRoleBasedAccessControl capability.
2. Design your partition key strategy based on tenant_id, user_id, or organization_id for maximum isolation.
3. Create collections with partition keys that enforce tenant boundaries at the storage level.
4. Define custom RBAC roles that grant access only to specific databases and partition key ranges.
5. Create user accounts per tenant or service principal with assigned roles limiting their scope.
6. Implement partition-aware queries in your application to always include the partition key filter.
7. Enable diagnostic logging to track all vector retrieval operations with user identity.
8. Configure cross-region replication for high availability while maintaining partition isolation.

Figure 3: Multi-Tenant Data Isolation with Partition Keys and RBAC. Azure Cosmos DB enforces tenant isolation through logical partitioning and Role-Based Access Control (RBAC). Each tenant's data is stored in separate partitions (Partition A, B, C) based on the partition key (tenant_id). RBAC acts as a security gateway, validating every query to ensure users can only access their designated partition. Attempts to access other tenants' partitions are blocked at the RBAC layer, preventing cross-tenant data leakage in multi-tenant AI chat systems.

Azure provides comprehensive documentation and CLI tools for configuring RBAC roles and partition strategies. The key is to design your partition scheme before loading data, as changing partition keys requires data migration.
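The partition-aware query requirement is the piece most easily gotten wrong at the application layer. A minimal sketch of the idea, with hypothetical field names (tenant_id as the partition key): the tenant from the authenticated session is always stamped onto the query, so a caller-supplied filter can never widen the scope.

```python
def tenant_scoped_filter(tenant_id: str, user_filter: dict) -> dict:
    """Merge a caller-supplied filter with the mandatory partition key.

    The session's tenant_id always wins: even if a caller smuggles a
    tenant_id into user_filter, it is overwritten, so every query stays
    inside the caller's own partition.
    """
    if not tenant_id:
        raise ValueError("tenant_id is required for every query")
    return {**user_filter, "tenant_id": tenant_id}

# With a pymongo-style client, every vector lookup would then go through
# this helper, e.g.:
# collection.find(tenant_scoped_filter(session.tenant_id, {"topic": "billing"}))
```

This is defense in depth, not a replacement for storage-level RBAC: the helper keeps well-behaved code honest, while the database-side role still blocks any query that escapes it.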
Beyond partitioning and RBAC, implement these AI-specific security measures:

- Validate embedding sources: authenticate and continuously audit external data sources before vectorizing to prevent poisoned embeddings
- Implement similarity search thresholds: set minimum similarity scores to prevent irrelevant cross-context retrieval
- Use metadata filtering: add security labels (classification levels, access groups) to vector metadata and enforce filtering
- Monitor retrieval patterns: alert on unusual patterns, such as one tenant making queries that correlate with another tenant's data
- Separate vector databases per sensitivity level: keep highly confidential vectors (PII, PHI) in dedicated databases with stricter controls
- Hash document identifiers: use hashed references instead of plaintext IDs in vector metadata to prevent enumeration attacks

For production AI chat systems handling multiple customers or sensitive data, Cosmos DB with partition-based RBAC should be your baseline. The combination of storage-level isolation and access control provides defense in depth that application-layer filtering alone cannot match.

Bonus: Secure Logging and Monitoring for AI Chat Systems

During development, we habitually log everything: full request payloads, user inputs, model responses, stack traces. It's essential for debugging. But when your AI chat system goes to production and starts handling real user conversations, those same logging practices become a liability.

Think about what flows through your AI chat system: customer support conversations containing account numbers, healthcare queries discussing medical conditions, HR chatbots processing employee complaints, financial assistants handling transaction details. If you're logging full conversations for debugging, you're creating a secondary repository of sensitive data that's often less protected than your primary database. The average breach takes 241 days to identify and contain.
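Before moving on to logging, three of the vector-side measures above (similarity thresholds, metadata filtering, and hashed document identifiers) can be sketched together. This is an illustrative sketch, not a library API: the helper names, the 0.75 threshold, and the access_group label are assumptions you would tune for your own system:

```python
import hashlib

def hash_doc_id(doc_id: str, salt: str) -> str:
    # Store a salted hash in vector metadata instead of the plaintext ID,
    # so leaked metadata can't be used to enumerate source documents.
    return hashlib.sha256(f"{salt}:{doc_id}".encode()).hexdigest()[:16]

def filter_retrievals(results, caller_groups, min_score=0.75):
    """Drop retrieved chunks that are either too dissimilar to the query
    or carry a security label the caller is not cleared for."""
    cleared = []
    for r in results:
        if r["score"] < min_score:
            continue  # below threshold: likely cross-context noise
        if r["metadata"]["access_group"] not in caller_groups:
            continue  # metadata filter: wrong security label
        cleared.append(r)
    return cleared

results = [
    {"score": 0.91, "metadata": {"access_group": "support"}},
    {"score": 0.62, "metadata": {"access_group": "support"}},  # too dissimilar
    {"score": 0.95, "metadata": {"access_group": "finance"}},  # wrong label
]
print(len(filter_retrievals(results, caller_groups={"support"})))  # prints 1
```

The point of running this filter server-side, after the vector search but before anything reaches the LLM context, is that it holds even when the similarity search itself surfaces a chunk the caller should never see.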
During that time, attackers often exfiltrate not just production databases but also log files and monitoring data, places where developers never expected sensitive information to end up. The question becomes: how do you maintain observability and debuggability without creating a security nightmare?

The Solution: Structured Logging with PII Redaction and Azure Monitor

The key is to log metadata, not content. You need enough information to trace issues and understand system behavior without storing the actual sensitive conversations. Azure Monitor with Application Insights provides enterprise-grade logging infrastructure with built-in features for sanitizing sensitive data. Combined with proper application-level controls, you can maintain full observability while protecting user privacy.

What to Log in Production AI Chat Systems:

DO log:
- Request timestamps and duration
- User IDs (hashed or anonymized)
- Session IDs (hashed)
- Model names and versions used
- Token counts (input/output)
- Embedding dimensions and similarity scores
- Retrieved document IDs (not content)
- Error codes and exception types
- Performance metrics (latency, throughput)
- RBAC decisions (access granted/denied)
- Partition keys accessed
- Rate limiting triggers

DON'T log:
- Full user messages or prompts
- Complete model responses
- Raw embeddings or vectors
- Personally identifiable information (PII)
- Retrieved document content
- Database connection strings or API keys
- Complete stack traces that might contain data

Final Remarks: Building Compliant, Secure AI Systems

Throughout this two-part series, we've addressed the complete security spectrum for AI chat systems, from protecting the LLM itself to securing the underlying infrastructure. But there's a broader context that makes all of this critical: compliance and regulatory requirements. AI chat systems operate within an increasingly complex regulatory landscape.
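Before turning to that regulatory landscape, the metadata-only policy above can be sketched as a small structured-logging helper. A minimal standard-library sketch; the field names and the email-redaction pattern are illustrative assumptions, and in production the JSON record would ship to Application Insights rather than a local logger:

```python
import hashlib
import json
import logging
import re

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_chat")

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def anonymize(value: str) -> str:
    # One-way hash: traces stay correlatable without exposing identity.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def redact(text: str) -> str:
    # Scrub obvious PII from error strings before they reach the log.
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def log_chat_event(user_id, session_id, model, tokens_in, tokens_out, latency_ms):
    # Metadata only: no prompt text, no model response, no raw identifiers.
    record = {
        "user": anonymize(user_id),
        "session": anonymize(session_id),
        "model": model,
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(record))
    return record
```

Because the record shape is stable JSON, the same events map cleanly onto Application Insights custom events, so dashboards and alerts keep working while conversation content itself never leaves the application.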
The EU AI Act, which entered into force on August 1, 2024, became the first comprehensive AI regulation from a major regulator, assigning applications to risk categories, with high-risk systems subject to specific legal requirements. The NIS2 Directive further requires that AI model endpoints, APIs, and data pipelines be protected to prevent breaches and ensure secure deployment.

Beyond AI-specific regulations, chat systems must comply with established data protection frameworks depending on their use case. GDPR mandates data minimization, user rights to erasure and data portability, 72-hour breach notification, and EU data residency for systems serving European users. Healthcare chatbots must meet HIPAA requirements, including encryption, access controls, 6-year audit log retention, and Business Associate Agreements. Systems processing payment information fall under PCI-DSS, requiring cardholder data isolation, encryption, role-based access controls, and regular security testing. B2B SaaS platforms typically need SOC 2 Type II compliance, demonstrating security controls over data availability, confidentiality, continuous monitoring, and incident response procedures.

Azure's architecture directly supports these compliance requirements through its built-in capabilities. Private Link enables data residency by keeping traffic within specified Azure regions while supporting network isolation requirements. Customer-Managed Keys provide the encryption controls and key ownership mandated by HIPAA and PCI-DSS. Cosmos DB's partition-based RBAC creates the access controls and audit trails required across all frameworks. Azure Monitor and diagnostic logging satisfy audit and monitoring requirements, while Azure Policy and Microsoft Purview automate compliance enforcement and reporting.
The platform's certifications and compliance offerings (including HIPAA, PCI-DSS, SOC 2, and GDPR attestations) provide the documentation and third-party validation that auditors require, significantly reducing the operational burden of maintaining compliance.

Further Resources:
- Azure Private Link Documentation
- Azure Cosmos DB Customer-Managed Keys
- Azure Key Vault Overview
- Azure Cosmos DB Role-Based Access Control
- Azure Monitor and Application Insights
- Azure Policy for Compliance
- Microsoft Purview Data Governance
- Azure Security Benchmark

Stay secure, stay compliant, and build responsibly.

Cybersecurity Is Mission Imperative: What Nonprofits Must Learn from the 2025 Digital Defense Report
In today’s digital-first world, nonprofits depend on technology to deliver services, engage communities, and scale impact. But with that reliance comes growing risk, from identity-based attacks to AI-driven threats and cloud vulnerabilities. The 2025 Microsoft Digital Defense Report offers a strategic lens into the global cybersecurity landscape. For nonprofit leaders, it is more than a technical document: it is a wake-up call. Cybersecurity is no longer a back-office concern. It is a mission-critical priority.

Key Takeaways for Nonprofits:
- Identity is the new attack surface: protect credentials, not just systems.
- AI is reshaping both threats and defenses: learn to leverage it.
- Cloud and vendor vulnerabilities are rising: secure your digital supply chain.
- Resilience matters: build systems that recover quickly and train your teams.
- The quantum era is coming: start preparing for post-quantum cryptography.

Why It Matters: Protecting data means protecting people. Embedding cybersecurity into every layer of your organization, from boardroom strategy to frontline service delivery, is essential to maintaining trust and impact.

For More Information: Explore the full Microsoft Digital Defense Report 2025 for deeper insights and practical guidance. Read the full report: Microsoft Digital Defense Report 2025

To learn more and join the conversation, follow Microsoft for Nonprofits on LinkedIn for updates, expert insights, and community engagement around nonprofit cybersecurity. Visit: Microsoft for Nonprofits