Building Secure AI Chat Systems: Part 2 - Securing Your Architecture from Storage to Network
In Part 1 of this series, we tackled the critical challenge of protecting the LLM itself from malicious inputs. We implemented three essential security layers using Azure AI services: harmful content detection with Azure Content Safety, PII protection with Azure Text Analytics, and prompt injection prevention with Prompt Shields. These guardrails ensure that your AI model doesn't process harmful requests or leak sensitive information through cleverly crafted prompts.

But even with a perfectly secured LLM, your entire AI chat system can still be compromised through architectural vulnerabilities. The WotNot incident, for example, wasn't about prompt injection—it was 346,000 files sitting in an unsecured cloud storage bucket. Likewise, the OmniGPT breach exposed 34 million lines of conversation logs because of backend database security failures.

The global average cost of a data breach is now $4.44 million, and it takes organizations an average of 241 days to identify and contain an active breach. That's eight months in which attackers have free rein in your systems. The financial cost is one thing, but the reputational damage and loss of customer trust are often irreversible.

This article focuses on the architectural security concerns I mentioned at the end of Part 1—the infrastructure that stores your chat histories, the networks that connect your services, and the databases that power your vector searches. We'll examine real-world breaches that happened in 2024 and 2025, understand exactly what went wrong, and implement Azure solutions that would have prevented them. By the end of this article, you'll have a production-ready, secure architecture for your AI chat system that addresses the most common—and most devastating—security failures we're seeing in the wild.

Let's start with the most fundamental question: where is your data, and who can access it?

1.
Preventing Exposed Storage with Network Isolation

The Problem: When Your Database Is One Google Search Away

Let me paint you a picture of what happened with two incidents in 2024-2025:

WotNot AI Chatbot left 346,000 files completely exposed in an unsecured cloud storage bucket—passports, medical records, sensitive customer data, all accessible to anyone on the internet without even a password. Security researchers who discovered it tried for over two months to get the company to fix it.

In May 2025, Canva Creators' data was exposed through an unsecured Chroma vector database operated by an AI chatbot company. The database contained 341 collections of documents, including survey responses from 571 Canva Creators with email addresses, countries of residence, and comprehensive feedback. This marked the first reported data leak involving a vector database.

The common thread? Public internet accessibility. These databases and storage accounts were accessible from anywhere in the world. No VPN required. No private network. Just a URL and you were in.

Think about your current architecture. If someone found your Cosmos DB connection string or your Azure Storage account name, what's stopping them from accessing it? If your answer is "just the access key" or "firewall rules," you're one leaked credential away from being in the headlines.

What to do: Azure Private Link + Network Isolation

The most effective way to prevent public exposure is simple: remove public internet access entirely. This is where Azure Private Link becomes your architectural foundation. With Azure Private Link, you create a private endpoint inside your Azure Virtual Network (VNet) that becomes the exclusive gateway to your Azure services. Your Cosmos DB accounts, Storage Accounts, Azure OpenAI Service, and other resources are completely removed from the public internet—they only respond to requests originating from within your VNet.
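As a quick sanity check of this isolation, a small script can verify that a service hostname resolves to a private address when run from inside the VNet. This is only a hedged sketch: the account name below is hypothetical, and the real proof of isolation is that connections from outside the VNet are refused entirely.

```python
import ipaddress
import socket

def resolves_to_private_ip(hostname: str) -> bool:
    """Resolve a hostname and check that every returned address is
    private or loopback, i.e. not publicly routable."""
    addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    return all(ipaddress.ip_address(addr).is_private for addr in addresses)

# Inside the VNet, the private DNS zone should map the service hostname to
# the private endpoint's IP; outside, it resolves publicly (or not at all).
# Hypothetical Cosmos DB account name:
# resolves_to_private_ip("mychatapp.documents.azure.com")
```

Run from an application host inside the VNet, a `False` result (or a public IP) is an early warning that the private DNS zone is not linked correctly.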
Even if someone obtains your connection strings or access keys, they cannot use them without first gaining access to your private network.

Implementation Overview: To implement Private Link for your AI chat system, you'll need to:

- Create an Azure Virtual Network (VNet) to host your private endpoints and application resources
- Configure private endpoints for each service (Cosmos DB, Storage, Azure OpenAI, Key Vault)
- Set up private DNS zones to automatically resolve service URLs to private IPs within your VNet
- Disable public network access on all your Azure resources
- Deploy your application inside the VNet using Azure App Service with VNet integration, Azure Container Apps, or Azure Kubernetes Service
- Verify isolation by attempting to access resources from outside the VNet (this should fail)

You can configure this through the Azure Portal, Azure CLI, ARM templates, or infrastructure-as-code tools like Terraform. The Azure documentation provides step-by-step guides for each service type.

Figure 1: Private Link Architecture for AI Chat Systems. Private endpoints ensure all data access occurs within the Azure Virtual Network, blocking public internet access to databases, storage, and AI services.

2. Protecting Conversation Data with Encryption at Rest

The Problem: When Backend Databases Become Treasure Troves

Network isolation solves the problem of external access, but what happens when attackers breach your perimeter through other means? What if a malicious insider gains access? What if there's a misconfiguration in your cloud environment? The data sitting in your databases becomes the ultimate prize.

In February 2025, OmniGPT suffered a catastrophic breach in which attackers accessed the backend database and extracted personal data from 30,000 users, including emails, phone numbers, API keys, and over 34 million lines of conversation logs. The exposed data included links to uploaded files containing sensitive credentials, billing details, and API keys.
These weren't prompt injection attacks. These weren't DDoS incidents. These were failures to encrypt sensitive data at rest. When attackers accessed the storage layer, they found everything in readable format—a goldmine of personal information, conversations, and credentials.

Think about the conversations your AI chat system stores. Customer support queries that might include account numbers. Healthcare chatbots discussing symptoms and medications. HR assistants processing employee grievances. If someone gained unauthorized (or even authorized) access to your database today, would they be reading plaintext conversations?

What to do: Azure Cosmos DB with Customer-Managed Keys

The fundamental defense against data exposure is encryption at rest—ensuring that data stored on disk is encrypted and unreadable without the proper decryption keys. Even if attackers gain physical or logical access to your database files, the data remains protected as long as they don't have access to the encryption keys.

But who controls those keys? With platform-managed encryption (the default in most cloud services), the cloud provider manages the encryption keys. While this protects against many threats, it doesn't protect against insider threats at the provider level, compromised provider credentials, or certain compliance scenarios where you must prove complete key control.

Customer-Managed Keys (CMK) solve this by giving you complete ownership and control of the encryption keys. You generate, store, and manage the keys in your own key vault. The cloud service can only decrypt your data by requesting access to your keys—access that you control and can revoke at any time. If your keys are deleted or access is revoked, even the cloud provider cannot decrypt your data. Azure makes this easy with Azure Key Vault integrated with Azure Cosmos DB.
The architecture uses "envelope encryption": your data is encrypted with a Data Encryption Key (DEK), and that DEK is itself encrypted with your Key Encryption Key (KEK) stored in Key Vault. This provides layered security where, even if the database is compromised, the data remains encrypted with keys only you control.

While we covered PII detection and redaction using Azure Text Analytics in Part 1—which prevents sensitive data from being stored in the first place—encryption at rest with Customer-Managed Keys provides an additional, powerful layer of protection. In fact, many compliance frameworks like HIPAA, PCI-DSS, and certain government regulations explicitly require customer-controlled encryption for data at rest, making CMK not just a best practice but often a mandatory requirement for regulated industries.

Implementation Overview: To implement Customer-Managed Keys for your chat history and vector storage:

- Create an Azure Key Vault with purge protection and soft delete enabled (required for CMK)
- Generate or import your encryption key in Key Vault (2048-bit RSA or 256-bit AES keys)
- Grant Cosmos DB access to Key Vault using a system-assigned or user-assigned managed identity
- Enable CMK on Cosmos DB by specifying your Key Vault key URI during account creation or update
- Configure the same for Azure Storage if you're storing embeddings or documents in Blob Storage
- Set up key rotation policies to automatically rotate keys on a schedule (recommended: every 90 days)
- Monitor key usage through Azure Monitor and set up alerts for unauthorized access attempts

Figure 2: Envelope Encryption with Customer-Managed Keys. User conversations are encrypted using a two-layer approach: (1) the AI chat app sends plaintext messages to Cosmos DB, (2) Cosmos DB authenticates to Key Vault using Managed Identity to retrieve the Key Encryption Key (KEK), (3) data is encrypted with a Data Encryption Key (DEK), (4) the DEK itself is encrypted with the KEK before storage.
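To make the two-layer scheme concrete, here is a toy sketch of envelope encryption in plain Python. The XOR keystream "cipher" stands in for AES purely for illustration; real CMK relies on Key Vault-held keys and hardened cryptography, so never use code like this in production.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: derive a SHA-256 keystream from the key and
    XOR it over the data. Applying it twice recovers the original bytes."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# 1. Encrypt the conversation with a Data Encryption Key (DEK).
kek = secrets.token_bytes(32)   # Key Encryption Key: lives in your Key Vault
dek = secrets.token_bytes(32)   # Data Encryption Key: used by the database
ciphertext = keystream_xor(dek, b"user: my account number is 1234")

# 2. Wrap (encrypt) the DEK with the KEK; only the wrapped DEK is stored.
wrapped_dek = keystream_xor(kek, dek)

# 3. To decrypt, unwrap the DEK with the KEK, then decrypt the data.
recovered_dek = keystream_xor(kek, wrapped_dek)
plaintext = keystream_xor(recovered_dek, ciphertext)
assert plaintext == b"user: my account number is 1234"
```

The key property the sketch demonstrates: without the KEK (which you control and can revoke), the stored wrapped DEK is useless, and therefore so is the ciphertext.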
This ensures data remains encrypted even if the database is compromised, as decryption requires access to keys stored in your Key Vault.

For AI chat systems in regulated industries (healthcare, finance, government), Customer-Managed Keys should be your baseline. The operational overhead is minimal with proper automation, and the compliance benefits are substantial. The entire process can be automated using Azure CLI, PowerShell, or infrastructure-as-code tools. For existing Cosmos DB accounts, enabling CMK requires creating a new account and migrating data.

3. Securing Vector Databases and Preventing Data Leakage

The Problem: Vector Embeddings Are Data Too

Vector databases are the backbone of modern RAG (Retrieval-Augmented Generation) systems. They store embeddings—mathematical representations of your documents, conversations, and knowledge base—that allow your AI to retrieve relevant context for every user query. But here's what most developers don't realize: those vectors aren't just abstract numbers. They contain your actual data.

A critical oversight in AI chat architectures is treating vector databases—or in our case, Cosmos DB collections storing embeddings—as less sensitive than traditional data stores. Whether you're using a dedicated vector database or storing embeddings in Cosmos DB alongside your chat history, these mathematical representations need the same rigorous security controls as the original text.

In documented cases, shared vector databases inadvertently mixed data between two corporate clients. One client's proprietary information began surfacing in response to the other client's queries, creating a serious confidentiality breach in what was supposed to be a multi-tenant system. Even more concerning are embedding inversion attacks, where adversaries exploit weaknesses to reconstruct original source data from its vector representation—effectively reverse-engineering your documents from the mathematical embeddings.
Think about what's in your vector storage right now. Customer support conversations. Internal company documents. Product specifications. Medical records. Legal documents. If you're running a multi-tenant system, are you absolutely certain that Company A can't retrieve Company B's data? Can you guarantee that embeddings can't be reverse-engineered to expose the original text?

What to do: Azure Cosmos DB for MongoDB with Logical Partitioning and RBAC

The security of vector databases requires a multi-layered approach that addresses both storage isolation and access control. Azure Cosmos DB for MongoDB provides native support for vector search while offering enterprise-grade security features specifically designed for multi-tenant architectures.

Logical partitioning creates strict data boundaries within your database by organizing data into isolated partitions based on a partition key (like tenant_id or user_id). When combined with Role-Based Access Control (RBAC), you create a security model where users and applications can only access their designated partitions—even if they somehow gain broader database access.
Implementation Overview: To implement secure multi-tenant vector storage with Cosmos DB:

- Enable MongoDB RBAC on your Cosmos DB account using the EnableMongoRoleBasedAccessControl capability
- Design your partition key strategy based on tenant_id, user_id, or organization_id for maximum isolation
- Create collections with partition keys that enforce tenant boundaries at the storage level
- Define custom RBAC roles that grant access only to specific databases and partition key ranges
- Create user accounts per tenant or service principal with assigned roles limiting their scope
- Implement partition-aware queries in your application to always include the partition key filter
- Enable diagnostic logging to track all vector retrieval operations with user identity
- Configure cross-region replication for high availability while maintaining partition isolation

Figure 3: Multi-Tenant Data Isolation with Partition Keys and RBAC. Azure Cosmos DB enforces tenant isolation through logical partitioning and Role-Based Access Control (RBAC). Each tenant's data is stored in separate partitions (Partition A, B, C) based on the partition key (tenant_id). RBAC acts as a security gateway, validating every query to ensure users can only access their designated partition. Attempts to access other tenants' partitions are blocked at the RBAC layer, preventing cross-tenant data leakage in multi-tenant AI chat systems.

Azure provides comprehensive documentation and CLI tools for configuring RBAC roles and partition strategies. The key is to design your partition scheme before loading data, as changing partition keys requires data migration.
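The "partition-aware queries" step can be enforced in application code by centralizing filter construction, so no query path can forget the tenant filter. The sketch below uses plain dictionaries in the MongoDB filter style; names like `tenant_id` and `TenantScopedQueries` are illustrative, not part of any Azure SDK.

```python
from typing import Any, Dict

class TenantScopedQueries:
    """Wraps filter construction so every query is forced to carry the
    caller's partition key (tenant_id) and cannot widen its own scope."""

    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id

    def build_filter(self, user_filter: Dict[str, Any]) -> Dict[str, Any]:
        # Refuse filters that try to smuggle in a different tenant.
        if user_filter.get("tenant_id", self.tenant_id) != self.tenant_id:
            raise PermissionError("cross-tenant query rejected")
        return {**user_filter, "tenant_id": self.tenant_id}

queries = TenantScopedQueries("tenant-a")
assert queries.build_filter({"topic": "billing"}) == {
    "topic": "billing",
    "tenant_id": "tenant-a",
}
```

This is defense in depth, not a replacement for storage-level controls: RBAC blocks cross-partition access even if a bug bypasses this wrapper, and the wrapper keeps well-behaved code from ever issuing an unscoped query.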
Beyond partitioning and RBAC, implement these AI-specific security measures:

- Validate embedding sources: Authenticate and continuously audit external data sources before vectorizing to prevent poisoned embeddings
- Implement similarity search thresholds: Set minimum similarity scores to prevent irrelevant cross-context retrieval
- Use metadata filtering: Add security labels (classification levels, access groups) to vector metadata and enforce filtering
- Monitor retrieval patterns: Alert on unusual patterns, like one tenant making queries that correlate with another tenant's data
- Separate vector databases per sensitivity level: Keep highly confidential vectors (PII, PHI) in dedicated databases with stricter controls
- Hash document identifiers: Use hashed references instead of plaintext IDs in vector metadata to prevent enumeration attacks

For production AI chat systems handling multiple customers or sensitive data, Cosmos DB with partition-based RBAC should be your baseline. The combination of storage-level isolation and access control provides defense in depth that application-layer filtering alone cannot match.

Bonus: Secure Logging and Monitoring for AI Chat Systems

During development, we habitually log everything—full request payloads, user inputs, model responses, stack traces. It's essential for debugging. But when your AI chat system goes to production and starts handling real user conversations, those same logging practices become a liability.

Think about what flows through your AI chat system: customer support conversations containing account numbers, healthcare queries discussing medical conditions, HR chatbots processing employee complaints, financial assistants handling transaction details. If you're logging full conversations for debugging, you're creating a secondary repository of sensitive data that's often less protected than your primary database. The average breach takes 241 days to identify and contain.
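Three of the vector-security measures above (similarity thresholds, metadata filtering, and hashed document identifiers) can be sketched in a few lines of plain Python. The threshold value and field names here are hypothetical and would be tuned per workload; a real system would apply them inside the vector store's query, not after the fact.

```python
import hashlib
import math
from typing import Dict, List, Tuple

SIMILARITY_FLOOR = 0.75  # hypothetical minimum score; tune per workload

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def hashed_doc_id(doc_id: str) -> str:
    """Store a hash instead of the raw ID so metadata can't be enumerated."""
    return hashlib.sha256(doc_id.encode()).hexdigest()[:16]

def retrieve(query_vec: List[float],
             candidates: List[Tuple[List[float], Dict[str, str]]],
             caller_groups: set) -> list:
    """Keep only results above the similarity floor whose security label
    the caller is allowed to read."""
    results = []
    for vec, meta in candidates:
        score = cosine_similarity(query_vec, vec)
        if score >= SIMILARITY_FLOOR and meta["access_group"] in caller_groups:
            results.append((score, meta["doc_ref"]))
    return sorted(results, reverse=True)

candidates = [
    ([1.0, 0.0], {"access_group": "tenant-a",
                  "doc_ref": hashed_doc_id("contract-42")}),
    ([0.9, 0.1], {"access_group": "tenant-b",          # filtered out despite
                  "doc_ref": hashed_doc_id("contract-99")}),  # a high score
]
hits = retrieve([1.0, 0.0], candidates, caller_groups={"tenant-a"})
assert len(hits) == 1
```

Note how the second candidate is rejected on its access label even though its similarity score clears the floor: metadata filtering, not similarity, is what prevents cross-tenant leakage.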
During that time, attackers often exfiltrate not just production databases, but also log files and monitoring data—places where developers never expected sensitive information to end up. The question becomes: how do you maintain observability and debuggability without creating a security nightmare?

The Solution: Structured Logging with PII Redaction and Azure Monitor

The key is to log metadata, not content. You need enough information to trace issues and understand system behavior without storing the actual sensitive conversations. Azure Monitor with Application Insights provides enterprise-grade logging infrastructure with built-in features for sanitizing sensitive data. Combined with proper application-level controls, you can maintain full observability while protecting user privacy.

What to Log in Production AI Chat Systems:

DO log:
- Request timestamps and duration
- User IDs (hashed or anonymized)
- Session IDs (hashed)
- Model names and versions used
- Token counts (input/output)
- Embedding dimensions and similarity scores
- Retrieved document IDs (not content)
- Error codes and exception types
- Performance metrics (latency, throughput)
- RBAC decisions (access granted/denied)
- Partition keys accessed
- Rate limiting triggers

DON'T log:
- Full user messages or prompts
- Complete model responses
- Raw embeddings or vectors
- Personally identifiable information (PII)
- Retrieved document content
- Database connection strings or API keys
- Complete stack traces that might contain data

Final Remarks: Building Compliant, Secure AI Systems

Throughout this two-part series, we've addressed the complete security spectrum for AI chat systems—from protecting the LLM itself to securing the underlying infrastructure. But there's a broader context that makes all of this critical: compliance and regulatory requirements. AI chat systems operate within an increasingly complex regulatory landscape.
The EU AI Act, which entered into force on August 1, 2024, became the first comprehensive AI regulation from a major regulator, assigning applications to risk categories, with high-risk systems subject to specific legal requirements. The NIS2 Directive further requires that AI model endpoints, APIs, and data pipelines be protected to prevent breaches and ensure secure deployment.

Beyond AI-specific regulations, chat systems must comply with established data protection frameworks depending on their use case. GDPR mandates data minimization, user rights to erasure and data portability, 72-hour breach notification, and EU data residency for systems serving European users. Healthcare chatbots must meet HIPAA requirements, including encryption, access controls, 6-year audit log retention, and Business Associate Agreements. Systems processing payment information fall under PCI-DSS, requiring cardholder data isolation, encryption, role-based access controls, and regular security testing. B2B SaaS platforms typically need SOC 2 Type II compliance, demonstrating security controls over data availability, confidentiality, continuous monitoring, and incident response procedures.

Azure's architecture directly supports these compliance requirements through its built-in capabilities. Private Link enables data residency by keeping traffic within specified Azure regions while supporting network isolation requirements. Customer-Managed Keys provide the encryption controls and key ownership mandated by HIPAA and PCI-DSS. Cosmos DB's partition-based RBAC creates the access controls and audit trails required across all frameworks. Azure Monitor and diagnostic logging satisfy audit and monitoring requirements, while Azure Policy and Microsoft Purview automate compliance enforcement and reporting.
The platform's certifications and compliance offerings (including HIPAA, PCI-DSS, SOC 2, and GDPR attestations) provide the documentation and third-party validation that auditors require, significantly reducing the operational burden of maintaining compliance.

Further Resources:
- Azure Private Link Documentation
- Azure Cosmos DB Customer-Managed Keys
- Azure Key Vault Overview
- Azure Cosmos DB Role-Based Access Control
- Azure Monitor and Application Insights
- Azure Policy for Compliance
- Microsoft Purview Data Governance
- Azure Security Benchmark

Stay secure, stay compliant, and build responsibly.

What Is Microsoft Entra, and Why Should You Choose It to Protect Your Applications?
[Originally published in English]

Microsoft Entra is a family of identity and network access products designed to implement a Zero Trust security strategy. It is part of the Microsoft Security portfolio, which also includes Microsoft Defender for cyberthreat protection and cloud security, Microsoft Sentinel for security information and event management (SIEM), Microsoft Purview for compliance, Microsoft Priva for privacy, and Microsoft Intune for endpoint management.

The Zero Trust security strategy

The Zero Trust security strategy is a modern cybersecurity approach that assumes no user or device, whether inside or outside the network, should be trusted by default. Instead, every access request must be verified and authenticated before access to resources is granted. This strategy is designed to address the complexities of the modern digital environment, including remote work, cloud services, and mobile devices.

Why use Entra?

Microsoft Entra ID (formerly known as Azure AD) is a cloud-based identity and access management solution that offers several advantages over traditional on-premises solutions:

- Unified identity management: Entra offers a comprehensive solution for identity and access management, covering both hybrid and cloud environments. This makes it possible to manage user identities, access rights, and permissions in a unified way, simplifying administration and improving security.
- Seamless user experiences: Entra supports single sign-on (SSO), allowing users to access multiple applications with a single set of credentials. This reduces password fatigue and improves the user experience.
- Adaptive access policies: Entra enables strong authentication and real-time, risk-based adaptive access policies without compromising the user experience. This helps effectively protect access to resources and data.
- Integration with external identities: Entra External ID allows organizations to securely manage and authenticate users who are not part of their internal workforce, such as customers, partners, and other external collaborators. This is particularly useful for companies that need to collaborate securely with external partners.
- Market challenge addressed: Entra tackles the market challenge by providing a comprehensive IAM solution across hybrid and cloud environments, ensuring security, simplifying user authentication, and enabling secure access to resources.
- Scalability: Cloud solutions like Entra can easily scale to accommodate a growing number of users and applications without requiring additional hardware or infrastructure.
- Cost-effectiveness: By using a cloud solution, organizations can reduce the costs associated with maintaining on-premises infrastructure, such as servers and networking equipment.
- Flexibility: Entra offers flexibility in terms of deployment and integration with a variety of applications and services, both inside and outside the Microsoft ecosystem.
- Security: Cloud solutions typically include built-in security features and regular updates to protect against emerging threats. Entra offers solid support for Conditional Access and multifactor authentication (MFA), which are essential for protecting sensitive data.

As you can see, there are many reasons to explore Entra and its suite of products.
More about the Entra products

Microsoft Entra is designed to provide identity and access management, cloud infrastructure management, and identity verification. It works across:

- On-premises environments
- Azure, AWS, and Google Cloud
- Microsoft and third-party applications, websites, and devices

These are the key products and solutions within the Microsoft Entra product family.

1. Microsoft Entra ID: A comprehensive identity and access management solution that includes features such as Conditional Access, role-based access control, multifactor authentication, and identity protection. Entra ID helps organizations manage and protect identities, ensuring secure access to applications, devices, and data.

2. Microsoft Entra Domain Services: This product provides managed domain services such as domain join, group policy, Lightweight Directory Access Protocol (LDAP), and Kerberos/NTLM authentication. It lets organizations run legacy applications in the cloud that can't use modern authentication methods, or where you don't want directory lookups to always go back to an on-premises Active Directory Domain Services (AD DS) environment. You can migrate those legacy applications from your on-premises environment to a managed domain, without needing to manage an AD DS environment in the cloud.

3. Microsoft Entra Private Access: Provides users, whether in the office or working remotely, with secure access to private and corporate resources. It allows remote users to connect to internal resources from any device and network, without requiring a virtual private network (VPN). The service offers adaptive, per-app access based on Conditional Access policies, providing more granular security than a VPN.
4. Microsoft Entra Internet Access: Secures access to Microsoft services, SaaS, and public internet applications, while protecting users, devices, and data against internet threats. This is delivered through Microsoft Entra Internet Access's secure web gateway (SWG), which is identity-centric, device-aware, and cloud-delivered.

5. Microsoft Entra ID Governance: An identity governance solution that helps ensure the right people have the right access to the right resources at the right time. It does this by automating access requests, assignments, and reviews through identity lifecycle management.

6. Microsoft Entra ID Protection: Helps organizations detect, investigate, and remediate identity-based risks. These risks can be fed into tools like Conditional Access to make access decisions, or sent on to a security information and event management (SIEM) tool for further investigation and correlation.

7. Microsoft Entra Verified ID: A credential verification service based on open decentralized identity (DID) standards. This product is designed for identity verification and management, ensuring that user identities are verified securely. It supports scenarios such as verifying workplace credentials on LinkedIn.

8. Microsoft Entra External ID: Focuses on managing external identities, such as customers, partners, and other collaborators who are not part of the internal workforce. It allows organizations to securely manage and authenticate these external users, providing features such as custom sign-up experiences, self-service registration flows, and user management.

9.
Microsoft Entra Permissions Management: This product handles the management of permissions and access controls across various systems and applications, ensuring that users have the appropriate level of access. It allows organizations to discover, automatically right-size, and continuously monitor excessive and unused permissions across Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).

10. Microsoft Entra Workload ID: This product helps applications, containers, and services securely access cloud resources by providing identity and access management for workloads.

Which Entra product should you choose?

We've covered some important products, but you may still be wondering which one to pick. Let's look at a few scenarios to help you decide.

Scenario: GitHub Actions integration

A development team uses GitHub Actions for continuous integration and continuous deployment (CI/CD) pipelines. They need to securely access Azure resources without managing secrets.

Recommended product: Entra Workload ID

Why Entra Workload ID? Microsoft Entra Workload ID supports workload identity federation, which allows GitHub Actions to access Azure resources securely through GitHub's identity federation. This eliminates the need to manage secrets and reduces the risk of credential leaks.

Scenario: Internal employee access management

A large enterprise needs to manage access to its internal applications and resources for thousands of employees. The organization wants to implement multifactor authentication (MFA), Conditional Access policies, and role-based access control (RBAC) to ensure secure access.

Recommended product: Entra ID

Why Entra ID?
Microsoft Entra ID is ideal for this scenario, as it provides complete identity and access management capabilities such as MFA, Conditional Access, and RBAC. These features help ensure that only authorized employees can access sensitive resources, improving security and compliance.

Scenario: Single sign-on (SSO) for internal applications

A company wants to streamline the sign-in process for its employees by implementing single sign-on (SSO) across all internal applications, including Microsoft 365, Salesforce, and custom applications.

Recommended product: Entra ID

Why Entra ID? Microsoft Entra ID supports SSO, allowing employees to use a single set of credentials to access multiple applications. This improves the user experience, reduces password fatigue, and strengthens security by centralizing authentication and access management.

Scenario: Kubernetes workloads

An organization runs several applications on Kubernetes clusters and needs to securely access Azure resources from these workloads.

Recommended product: Entra Workload ID

Why Entra Workload ID? Entra Workload ID allows Kubernetes workloads to access Azure resources without managing credentials or secrets. By establishing a trust relationship between Azure and Kubernetes service accounts, workloads can exchange trusted tokens for Microsoft Identity Platform access tokens.

Scenario: E-commerce company, customer portal

An e-commerce company wants to build a customer portal where users can register, sign in, and manage their accounts. The company needs to provide a secure, seamless sign-up and sign-in experience for its customers.

Recommended product: Entra External ID

Why Entra External ID?
Microsoft Entra External ID is designed to manage external identities, such as customers. It offers features like customized sign-up experiences, self-service sign-up flows, and secure authentication, making it a perfect fit for building a customer portal.

Scenario: Partner collaboration
A manufacturing company collaborates with multiple external partners and suppliers. The company needs to provide secure access to shared resources and applications while ensuring that only authorized partners can access specific data.
Recommended product: Entra External ID
Why Entra External ID? Microsoft Entra External ID is ideal for managing external identities such as partners and suppliers. It lets the company securely manage and authenticate external users, providing features like B2B collaboration and access management to ensure that only authorized partners can reach the resources they need.

Getting started with Entra ID

Finally, we recommend a few great resources.
Microsoft Identity Platform Dev Center: a platform with docs, tutorials, videos, and more
Microsoft identity platform Dev Center | Identity and access for a connected world | Microsoft Developer
Microsoft Entra ID training: grow your skills on Microsoft Learn
Introduction to Microsoft Entra
What is Microsoft Entra ID? The landing page with official docs explaining Entra ID, a great place to start
What is Microsoft Entra ID?
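To make the GitHub Actions scenario above concrete, here is a minimal sketch of a secretless deployment job. It assumes workload identity federation has already been configured on an Entra app registration for this repository; the variable names (AZURE_CLIENT_ID and so on) are placeholders you would define as non-secret repository variables.

```yaml
# Sketch: GitHub Actions job signing in to Azure via workload
# identity federation (OIDC) -- no stored secrets.
name: deploy
on: [push]

permissions:
  id-token: write   # allows the job to request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Azure login (federated, secretless)
        uses: azure/login@v2
        with:
          client-id: ${{ vars.AZURE_CLIENT_ID }}
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      - name: Verify access
        run: az account show
```

Because the job authenticates with a short-lived OIDC token exchanged for a Microsoft identity platform access token, there is no long-lived credential to rotate or leak.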
Sign in users with Entra (Node.js tutorial)
Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js and Express web app
Add sign-in with Microsoft Entra (Java tutorial)
Add sign-in with Microsoft Entra account to a Spring web app - Java on Azure | Microsoft Learn
Register a Python app with Entra (Python tutorial)
Tutorial: Register a Python web app with the Microsoft identity platform - Microsoft identity platform | Microsoft Learn
Register a .NET app with Entra (.NET Core tutorial)
Tutorial: Register an application with the Microsoft identity platform - Microsoft identity platform | Microsoft Learn

Getting started with Entra External ID

One-stop shop: the identity platform for developers, and a great starting point for news, docs, tutorials, videos, and more
Microsoft Entra External ID | Simplify customer identity management | Microsoft Developer
Add authentication to a Vanilla SPA (JavaScript tutorial)
Tutorial: Create a Vanilla JavaScript SPA for authentication in an external tenant - Microsoft Entra External ID | Microsoft Learn
Sign in users in a Node.js app (JavaScript/Node.js tutorial)
Sign in users in a sample Node.js web application - Microsoft Entra External ID | Microsoft Learn
Sign in users in ASP.NET Core (.NET Core tutorial)
Sign in users to a sample ASP.NET Core web application - Microsoft Entra External ID | Microsoft Learn
Sign in users in a Python Flask app (Python tutorial)
Sign in users in a sample Python Flask web application - Microsoft Entra External ID | Microsoft Learn
Sign in users in a Node.js web app (JavaScript/Node.js tutorial)
Tutorial: Prepare your external tenant to sign in users in a Node.js web app - Microsoft Entra External ID | Microsoft Learn
Sign in users in an ASP.NET Core web app (.NET Core tutorial)
Tutorial: Prepare your external tenant to authenticate users in an ASP.NET Core web app - Microsoft Entra External ID | Microsoft Learn

Summary and conclusions

In summary, we introduced Entra and some of its products within a large family of solutions. We also walked through several scenarios and which products would fit each one best. We hope this gave you a great start. Thank you for reading!
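As a closing hands-on note, the Kubernetes scenario above usually comes down to two small pieces of configuration. This is a sketch under the assumption of an AKS cluster with the OIDC issuer and workload identity add-on enabled, and a federated credential already linking the service account to a managed identity; all names, the client ID, and the image are placeholders.

```yaml
# Sketch: the two Kubernetes objects involved in Entra Workload ID.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-workload-sa          # placeholder name
  namespace: default
  annotations:
    azure.workload.identity/client-id: "<MANAGED_IDENTITY_CLIENT_ID>"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-workload
  namespace: default
  labels:
    azure.workload.identity/use: "true"   # opts the pod in to token injection
spec:
  serviceAccountName: my-workload-sa
  containers:
    - name: app
      image: myregistry.azurecr.io/app:latest   # placeholder image
```

With the label set, the workload identity webhook injects a projected service account token plus the environment variables that Azure SDKs use to exchange it for a Microsoft identity platform access token, so the application code never handles a secret.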