security & compliance
Preparing for Azure PostgreSQL Certificate Authority Rotation: A Comprehensive Operational Guide
The Challenge It started with a standard notification in the Azure Portal: Tracking-ID YK3N-7RZ. A routine Certificate Authority (CA) rotation for Azure Database for PostgreSQL. As Cloud Solution Architects, we’ve seen this scenario play out many times. The moment “certificate rotation” is mentioned, a wave of unease ripples through engineering teams. Let’s be honest: for many of us—ourselves included—certificates represent the edge of our technical “comfort zone.” We know they are critical for security, but the complexity of PKI chains, trust stores, and SSL handshakes can be intimidating. There is a silent fear: “If we touch this, will we break production?” We realized we had a choice. We could treat this as an opportunity, and we could leave that comfort zone. We approached our customer with a proactive proposal: Let’s use this event to stop fearing certificates and start mastering them. Instead of just patching the immediate issue, we used this rotation as a catalyst to review and upgrade the security posture of their database connections. We wanted to move from “hoping it works” to “knowing it’s secure.” The response was overwhelmingly positive. The teams didn’t just want a quick fix; they wanted “help for self-help.” They wanted to understand the mechanics behind sslmode and build the confidence to manage trust stores proactively. This guide is the result of that journey. It is designed to help you navigate the upcoming rotation not with anxiety, but with competence—turning a mandatory maintenance window into a permanent security improvement. Two Levels of Analysis A certificate rotation affects your environment on two distinct levels, requiring different expertise and actions: Level Responsibility Key Questions Actions Platform Level Cloud/Platform Teams Which clusters, services, and namespaces are affected? How do we detect at scale? Azure Service Health monitoring, AKS scanning, infrastructure-wide assessment Application Level Application/Dev Teams What SSL mode? Which trust store? How to update connection strings? Code changes, dependency updates, trust store management This article addresses both levels - providing platform-wide detection strategies (Section 5) and application-specific remediation guidance (Platform-Specific Remediation). Business Impact: In production environments, certificate validation failures cause complete database connection outages. A single missed certificate rotation has caused hours of downtime for enterprise customers, impacting revenue and customer trust. Who’s Affected: DevOps engineers, SREs, database administrators, and platform engineers managing Azure PostgreSQL instances - especially those using: - Java applications with custom JRE cacerts - Containerized workloads with baked-in trust stores - Strict SSL modes (sslmode=verify-full, verify-ca) The Solution What we’ll cover: 🛡️ Reliability: How to prevent database connection outages through proactive certificate management 🔄 Resiliency: Automation strategies that ensure your trust stores stay current 🔒 Security: Maintaining TLS security posture while rotating certificates safely Key Takeaway: This rotation is a client trust topic, not a server change. Applications trusting root CAs (DigiCert Global Root G2, Microsoft RSA Root CA 2017) without intermediate pinning are unaffected. Risk concentrates where strict validation meets custom trust stores. 📦 Platform-Specific Implementation: Detailed remediation guides for Java, .NET, Python, Node.js, and Kubernetes are available in our GitHub Repository. 
Note: The GitHub Repository. contains community-contributed content provided as-is. Test all scripts in non-production environments before use. 1. Understanding Certificate Authority Rotation What Changes During CA Rotation? Azure Database for PostgreSQL uses TLS/SSL to encrypt client-server connections. The database server presents a certificate chain during the TLS handshake: Certificate Chain Structure: Figure: Certificate chain structure showing the rotation from old intermediate (red, deprecated) to new intermediate (blue, active after rotation). Client applications must trust the root certificates (green) to validate the chain. 📝 Diagram Source: The Mermaid source code for this diagram is available in certificate-chain-diagram.mmd. Why Root Trust Matters Key Principle: If your application trusts the root certificate and allows the chain to be validated dynamically, you are not affected. The risk occurs when: Custom trust stores contain only the old intermediate certificate (not the root) Certificate pinning is implemented at the intermediate level Strict validation is enabled (sslmode=verify-full in PostgreSQL connection strings) 2. Who Is Affected and Why Risk Assessment Matrix Application Type Trust Store SSL Mode Risk Level Action Required Cloud-native app (Azure SDK) OS Trust Store require 🟢 Low None - Azure SDK handles automatically Java app (default JRE) System cacerts verify-ca 🟡 Medium Verify JRE version (11.0.16+, 17.0.4+, 8u381+) Java app (custom cacerts) Custom JKS file verify-full 🔴 High Update custom trust store with new intermediate .NET app (Windows) Windows Cert Store require 🟢 Low None - automatic via Windows Update Python app (certifi) certifi bundle verify-ca 🟡 Medium Update certifi package (pip install --upgrade certifi) Node.js app (default) Built-in CAs verify-ca 🟢 Low None - Node.js 16+, 18+, 20+ auto-updated Container (Alpine) /etc/ssl/certs verify-full 🔴 High Update base image or install ca-certificates-bundle Container (custom) Baked-in certs verify-full 🔴 High Rebuild image with updated trust store How to Read This Matrix Use the above matrix to quickly assess whether your applications are affected by CA rotation. Here is an overview, how you read the matrix: Column Meaning Application Type What kind of application do you have? (e.g., Java, .NET, Container) Trust Store Where does the application store its trusted certificates? SSL Mode How strictly does the application validate the server certificate? Risk Level 🟢 Low / 🟡 Medium / 🔴 High - How likely is a connection failure? Action Required What specific action do you need to take? Risk Level Logic: Risk Level Why? 🟢 Low Automatic updates (OS/Azure SDK) or no certificate validation 🟡 Medium Manual update required but straightforward (e.g., pip install --upgrade certifi) 🔴 High Custom trust store must be manually updated - highest outage risk SSL Mode Security Posture Understanding SSL modes is critical because they determine both security posture AND rotation impact. 
This creates a dual consideration: SSL Mode Certificate Validation Rotation Impact Security Level Recommendation disable ❌ None ✅ No impact 🔴 INSECURE Never use in production allow ❌ None ✅ No impact 🟠 WEAK Not recommended prefer ❌ Optional ✅ Minimal 🟡 WEAK Not recommended require ❌ No (Npgsql 6.0+) ✅ No impact 🟡 WEAK Upgrade to verify-full verify-ca ✅ Chain only 🔴 Critical 🔵 MODERATE Update trust stores verify-full ✅ Chain + hostname 🔴 Critical 🟢 SECURE Recommended - Update trust stores Key Insight: Applications using weak SSL modes (everything below verify-ca) are technically unaffected by CA rotation but represent security vulnerabilities. The safest path is verify-full with current trust stores. ⚖️ The Security vs. Resilience Trade-off The Paradox: Secure applications (verify-full) have the highest rotation risk 🔴, while insecure applications (require) are unaffected but have security gaps. Teams discovering weak SSL modes during rotation preparation face a critical decision: Option Approach Rotation Impact Security Impact Recommended For 🚀 Quick Fix Keep weak SSL mode (require) ✅ No action needed ⚠️ Security debt remains Emergency situations only 🛡️ Proper Fix Upgrade to verify-full 🔴 Requires trust store updates ✅ Improved security posture All production systems Our Recommendation: Use CA rotation events as an opportunity to improve your security posture. The effort to update trust stores is a one-time investment that pays off in long-term security. Common Scenarios Scenario 1: Enterprise Java Application Problem: Custom trust store created 2+ years ago for PCI compliance Risk: High - contains only old intermediate certificates Solution: Export new intermediate from Azure, import to custom cacerts Scenario 2: Kubernetes Microservices Problem: Init container copies trust store from ConfigMap at startup Risk: High - ConfigMap never updated since initial deployment Solution: Update ConfigMap, redeploy pods with new trust store Scenario 3: Legacy .NET Application Problem: .NET Framework 4.6 on Windows Server 2016 (no Windows Update) Risk: Medium - depends on manual certificate store updates Solution: Import new intermediate to Windows Certificate Store manually 3. Trust Store Overview A trust store is the collection of root and intermediate CA certificates that your application uses to validate server certificates during TLS handshakes. Understanding where your application’s trust store is located determines how you’ll update it for CA rotations. Trust Store Locations by Platform Category Platform Trust Store Location Update Method Auto-Updated? OS Level Windows Cert:\LocalMachine\Root Windows Update ✅ Yes Debian/Ubuntu /etc/ssl/certs/ca-certificates.crt apt upgrade ca-certificates ✅ Yes (with updates) Red Hat/CentOS /etc/pki/tls/certs/ca-bundle.crt yum update ca-certificates ✅ Yes (with updates) Runtime Level Java JRE $JAVA_HOME/lib/security/cacerts Java security updates ✅ With JRE updates Python (certifi) site-packages/certifi/cacert.pem pip install --upgrade certifi ❌ Manual Node.js Bundled with runtime Node.js version upgrade ✅ With Node.js updates Custom Custom JKS Application-specific path keytool -importcert ❌ Manual Container image /etc/ssl/certs (baked-in) Rebuild container image ❌ Manual ConfigMap mount Kubernetes ConfigMap Update ConfigMap, redeploy ❌ Manual Why This Matters for CA Rotation Applications using auto-updated trust stores (OS-managed, current runtime versions) generally handle CA rotations automatically. 
The risk concentrates in: Custom trust stores created for compliance requirements (PCI-DSS, SOC 2) that are rarely updated Baked-in container certificates from images built months or years ago Outdated runtimes (old JRE versions, frozen Python environments) that haven’t received security updates Air-gapped environments where automatic updates are disabled When planning for CA rotation, focus your assessment efforts on applications in the “Manual” update category. 4. Platform-Specific Remediation 📦 Detailed implementation guides are available in our GitHub repository: azure-certificate-rotation-guide Quick Reference: Remediation by Platform Platform Trust Store Location Update Method Guide Java $JAVA_HOME/lib/security/cacerts Update JRE or manual keytool import java-cacerts.md .NET (Windows) Windows Certificate Store Windows Update (automatic) dotnet-windows.md Python certifi package pip install --upgrade certifi python-certifi.md Node.js Built-in CA bundle Update Node.js version nodejs.md Containers Base image /etc/ssl/certs Rebuild image or ConfigMap containers-kubernetes.md Scripts & Automation Script Purpose Download State Scan-AKS-TrustStores.ps1 Scan all pods in AKS for trust store configurations PowerShell tested validate-connection.sh Test PostgreSQL connection with SSL validation Bash not tested update-cacerts.sh Update Java cacerts with new intermediate Bash not tested 5. Proactive Detection Strategies Database-Level Discovery: Identifying Connected Clients One starting point for impact assessment is querying the PostgreSQL database itself to identify which applications are connecting. We developed a SQL query that joins pg_stat_ssl with pg_stat_activity to reveal active TLS connections, their SSL version, and cipher suites. 🔍 Get the SQL Query: Download the complete detection script from our GitHub repository: detect-clients.sql Important Limitations This query has significant constraints that you must understand before relying on it for CA rotation planning: Limitation Impact Mitigation Point-in-time snapshot Only shows currently connected clients Run query repeatedly over days/weeks to capture periodic jobs and batch processes No certificate details Cannot identify which CA certificate the client is using Requires client-side investigation (trust store analysis) Connection pooling May show pooler instead of actual application Use application_name in connection strings to identify true source Idle connections Long-running connections may be dormant Cross-reference with application activity logs Recommended approach: Use this query to create an initial inventory, then investigate each unique application_name and client_addr combination to determine their trust store configuration and SSL mode. Proactive Monitoring with Azure Monitor To detect certificate-related issues before and after CA rotation, configure Azure Monitor alerts. This enables early warning when SSL handshakes start failing. Why this matters: After CA rotation, applications with outdated trust stores will fail to connect. An alert allows you to detect affected applications quickly rather than waiting for user reports. Official Documentation: For complete guidance on creating and managing alerts, see Azure Monitor Alerts Overview and Create a Log Search Alert. Here is a short example of an Azure Monitor Alert definition as a starting point. 
{ "alertRule": { "name": "PostgreSQL SSL Connection Failures", "severity": 2, "condition": { "query": "AzureDiagnostics | where ResourceType == 'SERVERS' and Category == 'PostgreSQLLogs' and Message contains 'SSL error' | summarize count() by bin(TimeGenerated, 5m)", "threshold": 5, "timeAggregation": "Total", "windowSize": "PT5M" } } } Alert Configuration Notes: Setting Recommended Value Rationale Severity 2 (Warning) Allows investigation without triggering critical incident response Threshold 5 failures/5min Filters noise while catching genuine issues Evaluation Period 5 minutes Balances responsiveness with alert fatigue Action Group Platform Team Ensures quick triage and coordination 6. Production Validation Pre-Rotation Validation Checklist Inventory all applications connecting to Azure PostgreSQL Identify trust store locations for each application Verify root certificate presence in trust stores Test connection with new intermediate in non-production environment Update monitoring alerts for SSL connection failures Prepare rollback plan if issues occur Schedule maintenance window (if required) Notify stakeholders of potential impact Testing Procedure We established a systematic 3-step validation process to ensure zero downtime. This approach moves from isolated testing to gradual production rollout. 🧪 Technical Validation Guide: For the complete list of psql commands, connection string examples for Windows/Linux, and automated testing scripts, please refer to our Validation Guide in the GitHub repository. Connection Testing Strategy The core of our validation strategy was testing connections with explicit sslmode settings. We used the psql command-line tool to simulate different client behaviors. Test Scenario Purpose Expected Result Encryption only (sslmode=require) Verify basic connectivity Connection succeeds even with unknown CA CA validation (sslmode=verify-ca) Verify trust store integrity Connection succeeds only if CA chain is valid Full validation (sslmode=verify-full) Verify strict security compliance Connection succeeds only if CA chain AND hostname match Pro Tip: Test with verify-full and an explicit root CA file containing the new Microsoft/DigiCert root certificates before the rotation date. This validates that your trust stores will work after the intermediate certificate changes. Step 1: Test in Non-Production Validate connections against a test server using the new intermediate certificate (Azure provides test endpoints during the rotation window). Step 2: Canary Deployment Deploy the updated trust store to a single “canary” instance or pod. Monitor: - Connection success rate - Error logs - Response times Step 3: Gradual Rollout Once the canary is stable, proceed with a phased rollout: 1. Update 10% of pods 2. Monitor for 1 hour 3. Update 50% of pods 4. Monitor for 1 hour 5. Complete rollout 7. Best Practices and Lessons Learned Certificate Management Best Practices Practice Guidance Example Trust Root CAs, Not Intermediates Configure trust stores with root CA certificates only. This provides resilience against intermediate certificate rotations. Trust Microsoft TLS RSA Root G2 and DigiCert Global Root G2 instead of specific intermediates Automate Trust Store Updates Use OS-provided trust stores when possible (automatically updated). For custom trust stores, implement CI/CD pipelines. Schedule bi-annual trust store audits Use SSL Mode Appropriately Choose SSL mode based on security requirements. verify-ca is recommended for most scenarios. 
See Security Posture Matrix in Section 2 Maintain Container Images Rebuild container images monthly to include latest CA certificates. Use init containers for runtime updates. Multi-stage builds with CA certificate update step Avoid Certificate Pinning Never pin intermediate certificates. If pinning is required for compliance, implement automated update processes. Pin only root CA certificates if absolutely necessary SSL Mode Decision Guide SSL Mode Security Level Resilience When to Use require Medium High Encrypted traffic without certificate validation. Use when CA rotation resilience is more important than MITM protection. verify-ca High Medium Validates certificate chain. Recommended for most production scenarios. verify-full Highest Low Strictest validation with hostname matching. Use only when compliance requires it. Organizational Communication Model Effective certificate rotation requires structured communication across multiple layers: Layer Responsibility Key Action Azure Service Health Microsoft publishes announcements to affected subscriptions Monitor Azure Service Health proactively Platform/Cloud Team Receives Azure announcements, triages criticality Follow ITSM processes, assess impact Application Teams Execute application-level changes Update trust stores, validate connections Security Teams Define certificate validation policies Set compliance requirements Ownership and Responsibility Matrix Team Responsibility Deliverable Platform/Cloud Team Monitor Azure Service Health, coordinate response Impact assessment, team notifications Application Teams Application-level changes (connection strings, trust stores) Updated configurations, validation results Security Teams Define certificate policies, compliance requirements Policy documentation, audit reports All Teams (Shared) Certificate lifecycle collaboration Playbooks, escalation paths, training Certificate Rotation Playbook Components Organizations should establish documented playbooks including: Component Recommended Frequency Purpose Trust Store Audits Bi-annual (every 6 months) Ensure certificates are current Certificate Inventory Quarterly review Know what certificates exist where Playbook Updates Annual or after incidents Keep procedures current Team Training Annual Build knowledge and confidence Field Observations: Common Configuration Patterns Pattern Observation Risk Implicit SSL Mode Teams don’t explicitly set sslmode, relying on framework defaults Unexpected behavior during CA rotation Copy-Paste Configurations Connection strings copied without understanding options Works until certificate changes expose gaps Framework-Specific Defaults Java uses JRE trust store, .NET uses Windows Certificate Store, Python depends on certifi package Some require manual updates, some are automatic Framework Trust Store Defaults Framework Default Trust Store Update Method Risk Level Java/Quarkus JRE cacerts Manual or JRE update Medium - requires awareness .NET Windows Certificate Store Windows Update Low - automatic Node.js Bundled certificates Node.js version update Low - automatic Python certifi package pip install --upgrade certifi High - manual intervention required Knowledge and Confidence Challenges Challenge Impact Mitigation Limited certificate knowledge Creates uncertainty and risk-averse behavior Proactive education, hands-on workshops Topic intimidation “Certificates” can seem complex, leading to avoidance Reality: Implementation is straightforward once understood Previous negative experiences Leadership concerns based on 
past incidents Document successes, share lessons learned Visibility gaps Lack of visibility into application dependencies Maintain certificate inventory, use discovery tools Monitoring Strategy (Recommended for Post-Rotation): While pre-rotation monitoring focuses on inventory, post-rotation monitoring should track: Key Metrics: - Connection failure rates (group by application, SSL error types) - SSL handshake duration (detect performance degradation) - Certificate validation errors (track which certificates fail) - Application error logs (filter for “SSL”, “certificate”, “trust”) Recommended Alerts: - Threshold: >5 SSL connection failures in 5 minutes - Anomaly detection: Connection failure rate increases >50% - Certificate expiry warnings: 30, 14, 7 days before expiration Dashboard Components: - Connection success rate by application - SSL error distribution (validation failures, expired certificates, etc.) - Certificate inventory with expiry dates - Trust store update status across infrastructure These metrics, alerts and thresholds are only starting points and need to be adjusted based on your environment and needs. Post-Rotation Validation and Telemetry Note: This article focuses on preparation for upcoming certificate rotations. Post-rotation metrics and incident data will be collected after the rotation completes and can inform future iterations of this guidance. Recommended Post-Rotation Activities: Here are some thoughts on post-rotation activities that could create more insights on the effectiveness of the preparation. Incident Tracking: After rotation completes, organizations should track: - Production incidents related to SSL/TLS connection failures - Services affected and their business criticality - Mean Time to Detection (MTTD) for certificate-related issues - Mean Time to Resolution (MTTR) from detection to fix Success Metrics to Measure Pre-Rotation Validation: - Number of services inventoried and assessed - Percentage of services requiring trust store updates - Testing coverage (dev, staging, production) Post-Rotation Outcomes: - Zero-downtime success rate (percentage of services with no impact) - Applications requiring emergency patching - Time from rotation to full validation Impact Assessment Telemetry to Collect: - Total connection attempts vs. failures (before and after rotation) - Duration of any service degradation or outages - ustomer-facing impact (user-reported issues, support tickets) - Geographic or subscription-specific patterns Continuous Improvement Post-Rotation Review: - What worked well in the preparation phase? - Which teams or applications were unprepared? - What gaps exist in monitoring or alerting? - How can communication be improved for future rotations? Documentation Updates: - Update playbooks with lessons learned - Refine monitoring queries based on observed patterns - Enhance team training materials - Share anonymized case studies across the organization 8. Engagement & Next Steps Discussion Questions We’d love to hear from the community: What’s your experience with certificate rotations? Have you encountered unexpected connection failures during CA rotation events? Which trust store update method works best for your environment? OS-managed, runtime-bundled, or custom trust stores? How do you handle certificate management in air-gapped environments? What strategies have worked for your organization? 
Share Your Experience If you’ve implemented proactive certificate management strategies or have lessons learned from CA rotation incidents, we encourage you to: Comment below with your experiences and tips Contribute to the GitHub repository with additional platform guides or scripts Connect with us on LinkedIn to continue the conversation Call to Action Take these steps now to prepare for the CA rotation: Assess your applications - Use the Risk Assessment Matrix (Section 2) to identify which applications use sslmode=verify-ca or verify-full with custom trust stores Import root CA certificates - Add DigiCert Global Root G2 and Microsoft RSA Root CA 2017 to your trust stores Upgrade SSL mode - Change your connection strings to at least sslmode=verify-ca (recommended: verify-full) for improved security Document your changes - Record which applications were updated, what trust stores were modified, and the validation results Automate for the future - Implement proactive certificate management so future CA rotations are handled automatically (OS-managed trust stores, CI/CD pipelines for container images, scheduled trust store audits) 9. Resources Official Documentation Azure PostgreSQL: Azure PostgreSQL SSL/TLS Concepts Azure PostgreSQL - Connect with TLS/SSL PostgreSQL & libpq: PostgreSQL libpq SSL Support - SSL mode options and environment variables PostgreSQL psql Reference - Command-line tool documentation PostgreSQL Server SSL/TLS Configuration Certificate Authorities: DigiCert Root Certificates Microsoft PKI Repository Microsoft Trusted Root Program Community Resources Let’s Encrypt Root Expiration (2021 Incident) NIST SP 800-57: Key Management Guidelines OWASP Certificate Pinning Cheat Sheet Neon Blog: PostgreSQL Connection Security Defaults Tools and Scripts PowerShell AKS Trust Store Scanner (see Platform-Specific Remediation) PostgreSQL Interactive Terminal (psql) PostgreSQL JDBC SSL Documentation Industry Context Certificate rotation challenges are not unique to Azure PostgreSQL. Similar incidents have occurred across the industry: Historical Incidents: - Let’s Encrypt Root Expiration (2021): Widespread impact when DST Root CA X3 expired, affecting older Android devices and legacy systems - DigiCert Root Transitions: Multiple cloud providers experienced customer impact during CA changes - Internal PKI Rotations: Enterprises face similar challenges when rotating internally-issued certificates Relevant Standards: - NIST SP 800-57: Key Management Guidelines (certificate lifecycle best practices) - OWASP Certificate Pinning: Guidance on balancing security and operational resilience - CIS Benchmarks: Recommendations for TLS/SSL configuration in cloud environments Authors Author Role Contact Andreas Semmelmann Cloud Solution Architect, Microsoft LinkedIn Mpho Muthige Cloud Solution Architect, Microsoft LinkedIn Disclaimers Disclaimer: The information in this blog post is provided for general informational purposes only and does not constitute legal, financial, or professional advice. While every effort has been made to ensure the accuracy of the information at the time of publication, Microsoft makes no warranties or representations as to its completeness or accuracy. Product features, availability, and timelines are subject to change without notice. For specific guidance, please consult your legal or compliance advisor. Microsoft Support Statement: This article represents field experiences and community best practices. 
For official Microsoft support and SLA-backed guidance: Azure Support: https://azure.microsoft.com/support/ Official Documentation: https://learn.microsoft.com/azure/ Microsoft Q&A: https://learn.microsoft.com/answers/ Production Issues: Always open official support tickets for production-impacting problems. Customer Privacy Notice: This article describes real-world scenarios from customer engagements. All customer-specific information has been anonymized. No NDAs or customer confidentiality agreements were violated in creating this content. AI-generated content disclaimer: This content was generated in whole or in part with the assistance of AI tools. AI-generated content may be incorrect or incomplete. Please review and verify before relying on it for critical decisions. See terms Community Contribution: The GitHub repository referenced in this article contains community-contributed scripts and guides. These are provided as-is for educational purposes and should be tested in non-production environments before use. Tags: #AzurePostgreSQL #CertificateRotation #TLS #SSL #TrustStores #Operations #DevOps #SRE #CloudSecurity #AzureDatabase
Azure password protection
We have a hybrid Azure infrastructure with Azure AD Connect installed on-prem and configured for PTA. We installed the password protection proxy server and registered it with the Azure tenant, then deployed the DC agent on all domain controllers. Both the proxy and the agents are operational. We published a few banned words to block in case anyone uses them. For testing, I changed my password to include one of the banned words. To my surprise, I was able to change the password. I checked the corresponding logon server, and the DC event viewer showed that the password was validated, even though it contained a word from the banned password list that Azure is set to enforce. Why is it not blocking the change?
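Two things commonly explain this behaviour: the tenant's password protection policy may still be in Audit mode rather than Enforce, and even in Enforce mode a password that contains a banned word can be accepted if, roughly speaking, the whole password still reaches the minimum score of five points (a banned match counts as one point, and each character outside the match adds points). Below is a minimal diagnostic sketch, assuming the AzureADPasswordProtection PowerShell module that ships with the DC agent and proxy installers; run it on a server where that module is available.

# Minimal diagnostic sketch for Azure AD Password Protection
# (assumes the AzureADPasswordProtection module installed with the DC agent/proxy).
Import-Module AzureADPasswordProtection

# Confirm every DC agent and proxy is healthy and has downloaded a current policy.
Get-AzureADPasswordProtectionDCAgent
Get-AzureADPasswordProtectionProxy

# Summarize password validation activity across the forest. Counters in the
# "audit-only" buckets indicate the policy mode is still Audit, so weak passwords
# are logged in the DC agent admin event log but not rejected.
Get-AzureADPasswordProtectionSummaryReport

# If the counters show enforced rejections but your test password still passed,
# check the banned-password scoring: a banned word plus enough other characters
# can still reach the accepted score of 5.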
Securing Data with Microsoft Purview IRM + Defender: A Hands-On Lab
Hi everyone, I recently explored how Microsoft Purview Insider Risk Management (IRM) integrates with Microsoft Defender to secure sensitive data. This lab demonstrates how these tools work together to identify, investigate, and mitigate insider risks.
What I covered in this lab:
Set up Insider Risk Management policies in Microsoft Purview
Connected Microsoft Defender to monitor risky activities
Walkthrough of alerts triggered → triaged → escalated into cases
Key governance and compliance insights
Key learnings from the lab:
Purview IRM policies detect both accidental risks (like data spillage) and malicious ones (IP theft, fraud, insider trading)
IRM principles include transparency (balancing privacy vs. protection), configurable policies, integrations across Microsoft 365 apps, and actionable alerts
IRM workflow follows: Define policies → Trigger alerts → Triage by severity → Investigate cases (dashboards, Content Explorer, Activity Explorer) → Take action (training, legal escalation, or SIEM integration)
Defender + Purview together provide unified coverage: Defender detects and responds to threats, while Purview governs compliance and insider risk
This was part of my ongoing series of security labs. Curious to hear from others — how are you approaching Insider Risk Management in your organizations or labs?
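If you want to pull the Defender side of that integration programmatically (for example, to correlate alerts with IRM cases or to forward them to a SIEM), a small sketch like the one below lists recent alerts from the Microsoft Graph security API. It assumes the Microsoft.Graph PowerShell SDK and an identity that has been granted SecurityAlert.Read.All; treat it as a starting point rather than a finished integration.

# Minimal sketch: list recent Defender alerts via Microsoft Graph (alerts_v2).
# Assumes the Microsoft.Graph PowerShell SDK and SecurityAlert.Read.All.
Connect-MgGraph -Scopes "SecurityAlert.Read.All"

$uri = "https://graph.microsoft.com/v1.0/security/alerts_v2?`$top=10"
$response = Invoke-MgGraphRequest -Method GET -Uri $uri

# Show the fields most useful for triage before escalating into a Purview IRM case.
$response.value | ForEach-Object {
    [pscustomobject]@{
        Title    = $_.title
        Severity = $_.severity
        Status   = $_.status
        Created  = $_.createdDateTime
    }
}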
Can't setup MFA on Azure personal account
I'm unable to use the az CLI with a personal account because of MFA requirements. I have a free trial of Azure which I'm using for some basic testing. I'd like to deploy some Bicep from the az CLI, but when I do I get: AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access 'subscription GUID'. The docs say to go to Per-user multifactor authentication and enable MFA. I did this, but I can't set it up. Trying to log in to https://aka.ms/MFASetup gives: You can't sign in here with a personal account. Use your work or school account instead. My personal account is the only user on the tenant. There is no other account I can use, and I'm prevented from setting up MFA on it.
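One setting worth checking in this situation is whether the tenant has security defaults enabled: new and trial tenants often have them turned on, and they require MFA for every user, which matches the AADSTS50076 error. A minimal sketch, assuming the Microsoft.Graph PowerShell SDK and the Policy.Read.All permission, to read that setting:

# Minimal sketch: check whether security defaults (tenant-wide MFA enforcement)
# are enabled. Assumes the Microsoft.Graph PowerShell SDK and Policy.Read.All.
Connect-MgGraph -Scopes "Policy.Read.All"

$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy"

"Security defaults enabled: $($policy.isEnabled)"

# If this returns True, every sign-in to the tenant can be challenged for MFA.
# Disabling it (a PATCH on the same endpoint, which needs
# Policy.ReadWrite.ConditionalAccess) or completing MFA registration for the
# account are the usual ways forward.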
OATH hardware token
Hi All, I just received my hardware tokens to set up for a few users in our organization that do not have access to company mobile devices. I have uploaded the .csv files with the required information in our Azure portal and they uploaded successfully. However, I am not able to activate the token; it keeps failing, but I'm not sure why and I don't get a clear reason. Is there a clearer way to set this up, or do I need to enable something before I set this up? I would like this set up before the end of the week; any help is appreciated. Thanks,
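For reference, activation failures are very often caused by the upload file or the token itself rather than by the portal: the secret key must be the token's base32 seed, the time interval must match the hardware (usually 30 seconds), and the 6-digit code has to be entered before it expires. Below is a minimal sketch of the CSV layout generally expected for the classic OATH hardware token upload; every value is an illustrative placeholder.

# Minimal sketch of the CSV layout for the classic OATH hardware token upload.
# All values are illustrative placeholders; use the seed and serial number
# supplied by your token vendor.
@"
upn,serial number,secret key,time interval,manufacturer,model
alice@contoso.com,1234567,2234567abcdef2234567abcdef,30,Contoso,HardwareKey
"@ | Set-Content -Path .\oath-tokens.csv -Encoding UTF8

# Activation checklist if the verification code keeps failing:
#  - the secret key is the base32 seed, not a hex or decimal value
#  - the time interval (30 vs 60 seconds) matches the physical token
#  - token clock drift: try the very next code as soon as it changes
#  - the account performing the activation has the required admin role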
🔎 DNS security is more than just configuration it’s about protecting critical assets against unauthorized changes and accidental deletions. 🔎 Managing DNS zones effectively requires a layered security approach. 🔎 Two powerful mechanisms in Azure : Role-Based Access Control (RBAC) and Resource Locks 🚀 Role-Based Access Control (RBAC) 🚀 * Granular DNS Access Control * RBAC ensures controlled access management at both the DNS zone and record set levels. * Instead of assigning broad permissions, RBAC enables precise delegation using built-in roles such as: 🔹 Owner – Full control over the DNS zone, including configurations and deletions. 🔹 Contributor – Can modify DNS settings but cannot change access permissions. 🔹 Network Contributor – Can manage networking configurations related to DNS, but not modify records. 🔹 DNS Zone Contributor – Dedicated role for managing DNS zones without broader networking privileges. ✅ Key Advantages of RBAC in DNS Security: ✔ Prevent unauthorized modifications by restricting access to only necessary roles. ✔ Ensure operational integrity by limiting exposure to critical configurations. ✔ Improve governance by aligning roles with organizational security policies. 🔐 Resource Locks 🔐 * Guardrails for DNS Protection * Even with well-defined RBAC settings, accidental deletions can still occur. * Azure Resource Locks add an additional safeguard by preventing changes to a DNS zone or specific record sets. 🔹 Zone Lock ----> Protects an entire DNS zone from being deleted, preserving all associated record sets. 🔹 SOA Lock ----> Prevents unintentional zone deletions while allowing record modifications within the zone. ✅ How Resource Locks Enhance Security: ✔ Shields DNS zones from accidental or malicious deletions. ✔ Maintains continuity by ensuring record sets remain intact. ✔ Strengthens compliance controls for critical infrastructure. 🛠 Best Practices for Securing DNS with RBAC & Resource Locks 🔸 Assign least privilege roles—never give unnecessary access. 🔸 Implement locks on essential zones to prevent configuration errors. 🔸 Regularly audit access permissions using Azure Policy & Activity Logs. 🔸 Use Automation & Alerts to track modifications for enhanced security. 🔹 Implementing RBAC & Resource Locks ensures your cloud environment remains secure, operational, and fault-tolerant.417Views0likes1CommentBYOPI - Design your own custom private AI Search indexer with no code ADF (SQLServer on private VM)
Executive Summary Building a fully private search indexing solution using Azure Data Factory (ADF) to sync SQL Server data from private VM to Azure AI Search is achievable but comes with notable complexities and limitations. This blog shares my journey, discoveries, and honest assessment of the BYOPI (Build Your Own Private Indexer) architecture. Architectural flow: Table of Contents Overall Setup How ADF works in this approach with Azure AI Search Challenges - discovered Pros and Cons: An Honest Assessment Conclusion and Recommendations 1. Overall Setup: Phase 1: Resource Group & Network Setup : create resource group and vNET (virtual network) in any region of your choice Phase 2: Deploy SQL Server VM: Phase 3: Create Azure Services - ADF (Azure Data Factory), Azure AI Search and AKV (Azure Key Vault) service from portal or from your choice of deployment. Phase 4: Create Private Endpoints for all the services in their dedicated subnets: Phase 5: Configure SQL Server on VM : connect to VM via bastion and setup database, tables & SP: Sample metadata used as below: CREATE DATABASE BYOPI_DB; GO USE BYOPI_DB; GO CREATE TABLE Products ( ProductId INT IDENTITY(1,1) PRIMARY KEY, ProductName NVARCHAR(200) NOT NULL, Description NVARCHAR(MAX), Category NVARCHAR(100), Price DECIMAL(10,2), InStock BIT DEFAULT 1, Tags NVARCHAR(500), IsDeleted BIT DEFAULT 0, CreatedDate DATETIME DEFAULT GETDATE(), ModifiedDate DATETIME DEFAULT GETDATE() ); CREATE TABLE WatermarkTable ( TableName NVARCHAR(100) PRIMARY KEY, WatermarkValue DATETIME ); INSERT INTO WatermarkTable VALUES ('Products', '2024-01-01'); CREATE PROCEDURE sp_update_watermark @TableName NVARCHAR(100), @NewWatermark DATETIME AS BEGIN UPDATE WatermarkTable SET WatermarkValue = @NewWatermark WHERE TableName = @TableName; END; INSERT INTO Products (ProductName, Description, Category, Price, Tags) VALUES ('Laptop Pro', 'High-end laptop', 'Electronics', 1299.99, 'laptop,computer'), ('Office Desk', 'Adjustable desk', 'Furniture', 599.99, 'desk,office'), ('Wireless Mouse', 'Bluetooth mouse', 'Electronics', 29.99, 'mouse,wireless'); Phase 6: Install Self-Hosted Integration Runtime Create SHIR in ADF: Go to ADF resource in Azure Portal Click "Open Azure Data Factory Studio" Note: You need to access from a VM in the same VNet or via VPN since ADF is private In ADF Studio, click Manage (toolbox icon) Select Integration runtimes → "+ New" Select "Azure, Self-Hosted" → "Self-Hosted" Name: SHIR-BYOPI or of your choice Click "Create" Copy Key1 (save it) Install SHIR on VM In the VM (via Bastion): Open browser, go to: https://www.microsoft.com/download/details.aspx?id=39717 Download and install Integration Runtime During setup: Launch Configuration Manager Paste the Key1 from Step 14 Click "Register" Wait for "Connected" status Phase 7: Create Search Index through below powershell script and saving it as search_index.ps1 $searchService = "search-byopi" $apiKey = "YOUR-ADMIN-KEY" $headers = @{ 'api-key' = $apiKey 'Content-Type' = 'application/json' } $index = @{ name = "products-index" fields = @( @{name="id"; type="Edm.String"; key=$true} @{name="productName"; type="Edm.String"; searchable=$true} @{name="description"; type="Edm.String"; searchable=$true} @{name="category"; type="Edm.String"; filterable=$true; facetable=$true} @{name="price"; type="Edm.Double"; filterable=$true} @{name="inStock"; type="Edm.Boolean"; filterable=$true} @{name="tags"; type="Collection(Edm.String)"; searchable=$true} ) } | ConvertTo-Json -Depth 10 Invoke-RestMethod ` -Uri 
"https://$searchService.search.windows.net/indexes/products-index?api-version=2020-06-30" ` -Method PUT ` -Headers $headers ` -Body $index Phase 8: Configure AKV & ADF Components - Link AKV and ADF for secrets Create Key Vault Secrets Navigate to kv-byopi (created AKV resource) in Portal Go to "Access policies" Click "+ Create" Select permissions: Get, List for secrets Select principal: adf-byopi-private Create Go to "Secrets" → "+ Generate/Import": Name: sql-password, Value: <> Name: search-api-key, Value: Your search key Create Linked Services in ADF Access ADF Studio from the VM (since it's private): Key Vault Linked Service: Manage → Linked services → "+ New" Search "Azure Key Vault" Configure: Name: LS_KeyVault Azure Key Vault: kv-byopi Integration runtime: AutoResolveIntegrationRuntime Test connection → Create SQL Server Linked Service: "+ New" → "SQL Server" Configure: Name: LS_SqlServer Connect via: SHIR-BYOPI Server name: localhost Database: BYOPI_DB Authentication: SQL Authentication User: sqladmin Password: Select from Key Vault → LS_KeyVault → sql-password Test → Create Azure Search Linked Service: "+ New" → "Azure Search" Configure: Name: LS_AzureSearch URL: https://search-byopi.search.windows.net Connect via: SHIR-BYOPI - Important - use SHIR API Key: From Key Vault → LS_KeyVault → search-api-key Test → Create Phase 9: Create ADF Datasets and PipelineCreate Datasets SQL Products Dataset: Author → Datasets → "+" → "New dataset" Select "SQL Server" → Continue Select "Table" → Continue Properties: Name: DS_SQL_Products Linked service: LS_SqlServer Table: Select Products click OK Watermark Dataset: Repeat with: Name: DS_SQL_Watermark Table: WatermarkTable Search Dataset: "+" → "Azure Search" Properties: Name: DS_Search_Index Linked service: LS_AzureSearch Index name: products-index Create Pipeline Author → Pipelines → "+" → "Pipeline" Name: PL_BYOPI_Private From Activities → General, drag "Lookup" activity Configure Lookup 1: Name: LookupOldWatermark Settings: Source dataset: DS_SQL_Watermark Query: below sql SELECT WatermarkValue FROM WatermarkTable WHERE TableName='Products' - **First row only**: ✓ Add another Lookup: Name: LookupNewWatermark Query: below sql SELECT MAX(ModifiedDate) as NewWatermark FROM Products Add Copy Data activity: Name: CopyToSearchIndex Source: Dataset: DS_SQL_Products Query: sql SELECT CAST(ProductId AS NVARCHAR(50)) as id, ProductName as productName, Description as description, Category as category, Price as price, InStock as inStock, Tags as tags, CASE WHEN IsDeleted = 1 THEN 'delete' ELSE 'upload' END as [@search.action] FROM Products WHERE ModifiedDate > '@{activity('LookupOldWatermark').output.firstRow.WatermarkValue}' AND ModifiedDate <= '@{activity('LookupNewWatermark').output.firstRow.NewWatermark}' Sink: Dataset: DS_Search_Index Write behavior: Merge Batch size: 1000 Add Stored Procedure activity: Name: UpdateWatermark SQL Account: LS_SqlServer Stored procedure: sp_update_watermark Parameters: TableName: Products NewWatermark: @{activity('LookupNewWatermark').output.firstRow.NewWatermark} Connect activities with success conditions Phase 10: Test and Schedule Test Pipeline Click "Debug" in pipeline Monitor in Output panel Check for green checkmarks Create Trigger In pipeline, click "Add trigger" → "New/Edit" Click "+ New" Configure: Name: TR_Hourly Type: Schedule Recurrence: Every 1 Hour OK → Publish All Monitor Go to Monitor tab View Pipeline runs Check Trigger runs Your pipeline should look like this: Phase 11: Validation & Testing Verify 
Private Connectivity From the VM, run PowerShell: # Test DNS resolution (should return private IPs) nslookup adf-byopi-private.datafactory.azure.net # Should show private IP like : 10.0.2.x nslookup search-byopi.search.windows.net # Should show private IP like : 10.0.2.x nslookup kv-byopi.vault.azure.net # Should show private IP like : 10.0.2.x # Test Search $headers = @{ 'api-key' = 'YOUR-KEY' } Invoke-RestMethod -Uri "https://search-byopi.search.windows.net/indexes/products-index/docs?`$count=true&api-version=2020-06-30" -Headers $headers Test Data Sync (adding few records) and verify in search index: -- Add test record INSERT INTO Products (ProductName, Description, Category, Price, Tags) VALUES ('Test Product Private', 'Testing private pipeline', 'Test', 199.99, 'test,private'); -- Trigger pipeline manually or wait for schedule -- Then verify in Search index 2. How ADF works in this approach with Azure AI search: Azure AI Search uses a REST API for indexing or called as uploading. When ADF sink uploads data to AI Search, it's actually making HTTP POST requests: for example - POST https://search-byopi.search.windows.net/indexes/products-index/docs/index?api-version=2020-06-30 Content-Type: application/json api-key: YOUR-ADMIN-KEY { "value": [ { "@search.action": "upload", "id": "1", "productName": "Laptop", "price": 999.99 }, { "@search.action": "delete", "id": "2" } ] } Delete action used here is soft delete and not hard delete. pipeline query: SELECT CAST(ProductId AS NVARCHAR(50)) as id, -- Renamed to match index field ProductName as productName, -- Renamed to match index field Description as description, Category as category, Price as price, InStock as inStock, Tags as tags, CASE WHEN IsDeleted = 1 THEN 'delete' ELSE 'upload' END as [@search.action] -- Special field with @ prefix FROM Products WHERE ModifiedDate > '2024-01-01' ``` Returns this resultset: ``` id | productName | description | category | price | inStock | tags | @search.action ----|----------------|------------------|-------------|--------|---------|----------------|--------------- 1 | Laptop Pro | High-end laptop | Electronics | 1299 | 1 | laptop,computer| upload 2 | Office Chair | Ergonomic chair | Furniture | 399 | 1 | chair,office | upload 3 | Deleted Item | Old product | Archive | 0 | 0 | old | delete The @search.action Field - The Magic Control This special field tells Azure AI Search what to do with each document: @search.action What It Does When to Use What Happens If Document... upload Insert OR Update Most common - upsert operation Exists: Updates it<br>Doesn't exist: Creates it merge Update only When you know it exists Exists: Updates specified fields<br>Doesn't exist: ERROR mergeOrUpload Update OR Insert Safe update Exists: Updates fields<br>Doesn't exist: Creates it delete Remove from index To remove documents Exists: Deletes it<br>Doesn't exist: Ignores (no error) ADF automatically converts SQL results to JSON format required by Azure Search: { "value": [ { "@search.action": "upload", "id": "1", "productName": "Laptop Pro", "description": "High-end laptop", "category": "Electronics", "price": 1299.00, "inStock": true, "tags": "laptop,computer" }, { "@search.action": "upload", "id": "2", "productName": "Office Chair", "description": "Ergonomic chair", "category": "Furniture", "price": 399.00, "inStock": true, "tags": "chair,office" }, { "@search.action": "delete", "id": "3" // For delete, only ID is needed } ] } ADF doesn't send all records at once. 
It batches them based on writeBatchSize and each batch is a separate HTTP POST to Azure Search How ADF will detect new changes and run batches: Watermark will be updated after each successful ADF run to detect new changes as below: Handling different scenarios: Scenario 1: No Changes Between Runs: Run at 10:00 AM: - Old Watermark: 09:45:00 - New Watermark: 10:00:00 - Query: WHERE ModifiedDate > '09:45' AND <= '10:00' - Result: 0 rows - Action: Still update watermark to 10:00 - Why: Prevents reprocessing if changes come later Scenario 2: Bulk Insert Happens: Someone inserts 5000 records at 10:05 AM Run at 10:15 AM: - Old Watermark: 10:00:00 - New Watermark: 10:15:00 - Query: WHERE ModifiedDate > '10:00' AND <= '10:15' - Result: 5000 rows - Action: Process all 5000, update watermark to 10:15 Scenario 3: Pipeline Fails Run at 10:30 AM: - Old Watermark: 10:15:00 (unchanged from last success) - Pipeline fails during Copy activity - Watermark NOT updated (still 10:15:00) Next Run at 10:45 AM: - Old Watermark: 10:15:00 (still the last successful) - New Watermark: 10:45:00 - Query: WHERE ModifiedDate > '10:15' AND <= '10:45' - Result: Gets ALL changes from 10:15 to 10:45 (30 minutes of data) - No data loss! Note: There is still room for improvement by refining this logic to handle more advanced scenarios. However, I have not examined the logic in depth, as the goal here is to review how the overall setup functions, identify its limitations, and compare it with the indexing solutions available in AI Search. 3. Challenges - disovered: When I tried to set out to build a private search indexer for SQL Server data residing on an Azure VM with no public IP, the solution seemed straightforward: use Azure Data Factory to orchestrate the data movement to Azure AI Search. The materials made it sound simple. The reality? It's possible, but the devil is in the details. What We Needed: ✅ SQL Server on private VM (no public IP) ✅ Azure AI Search with private endpoint ✅ No data over public internet ✅ Support for full CRUD operations ✅ Near real-time synchronization ✅ No-code/low-code solution Reality Check: ⚠️ DELETE operations not natively supported in ADF sink ⚠️ Complex networking requirements ⚠️ Higher costs than expected ⚠️ Significant setup complexity ✅ But it IS possible with workarounds Components Required Azure VM: ~$150/month (D4s_v3) Self-Hosted Integration Runtime: Free (runs on VM) Private Endpoints: ~$30/month (approx 3 endpoints) Azure Data Factory: ~$15-60/month (depends on frequency) Azure AI Search: ~$75/month (Basic tier) Total: ~$270-315/month** The DELETE Challenge: Despite Azure AI Search REST API fully supporting delete operations via @search.action, ADF's native Azure Search sink does NOT support delete operations. -- This SQL query with delete action SELECT ProductId as id, CASE WHEN IsDeleted = 1 THEN 'delete' ELSE 'upload' END as [@search.action] FROM Products -- Will NOT delete documents in Azure Search when using Copy activity -- The @search.action = 'delete' is ignored by ADF sink! Nevertheless, there is a workaround using the Web Activity approach or by calling the REST API from the ADF side to perform the delete operation. 
{ "name": "DeleteViaREST", "type": "Web", "typeProperties": { "url": "https://search.windows.net/indexes/index/docs/index", "method": "POST", "body": { "value": [ {"@search.action": "delete", "id": "123"} ] } } } Development Challenges No Direct Portal Access: With ADF private, you need: Jump box in the same VNet VPN connection Bastion for access Testing Complexity: Can't use Postman from local machine Need to test from within VNet Debugging requires multiple tools 4. Pros and Cons: An Honest Assessment: Pros: Security: Complete network isolation Compliance: Meets strict requirements No-Code: Mostly configuration-based Scalability: Can handle large datasets Monitoring: Built-in ADF monitoring Managed Service: Microsoft handles updates Cons: DELETE Complexity: Not natively supported Cost: Higher than expected Setup Complexity: Many moving parts Debugging: Difficult with private endpoints Hidden Gotchas: - SHIR requires Windows VM (Linux in preview) - Private endpoint DNS propagation delays - ADF Studio timeout with private endpoints - SHIR auto-update can break pipelines 5. Conclusion and Recommendations: When to Use BYOPI: ✅ Good Fit: Strict security requirements Needs indexing from an un-supported scenarios for example SQL server residing on private VM Budget > $500/month Team familiar with Azure networking Read-heavy workloads ❌ Poor Fit: Simple search requirements Budget conscious Need real-time updates Heavy DELETE operations Small team without Azure expertise BYOPI works, but it's more complex and expensive than initially expected. The lack of native DELETE support in ADF sink is a significant limitation that requires workarounds. Key Takeaways It works but requires significant effort DELETE (hard) operations need workarounds Costs will be higher than expected Complexity is substantial for a "no-code" solution Alternative solutions might be better for many scenarios Disclaimer: The sample scripts provided in this article are provided AS IS without warranty of any kind. The author is not responsible for any issues, damages, or problems that may arise from using these scripts. Users should thoroughly test any implementation in their environment before deploying to production. Azure services and APIs may change over time, which could affect the functionality of the provided scripts. Always refer to the latest Azure documentation for the most up-to-date information. Thanks for reading this blog! I hope you've found this approach of creating own private indexing solution for Azure AI Search (BYOPI) useful 😀207Views1like0CommentsLocked out of Azure account for 5 months! Spent hours on phone, still no resolution! PLEASE HELP!!!
Locked out of Azure account for 5 months! Spent hours on phone, still no resolution! PLEASE HELP!!!
As per the title, back in April I somehow managed to lock myself out of my Azure portal. Typically, to sign in my browser would auto-fill the password field; for some reason, that fateful day the auto-fill function didn't work. So I typed in what I believed to be the password; it wasn't. Annoyed, I typed in a few more passwords (I even checked the browser password manager to ensure I was typing the correct password) and finally locked myself out. Worse still, the email address that Microsoft wants to send its verification code to as a back-up contingency is no longer active (the domain wasn't renewed and has since been bought). So Microsoft is sending a verification email to a dead email address... When I try to reset the password via Microsoft's questionnaire, once submitted I get an automated email response back stating I haven't provided enough correct/relevant information, so the password can't be reset. I have long since lost any hope or faith in Microsoft rectifying this issue. Being courteous on the phone and apologising constantly is all well and good, but it is only meaningful if there is a resolution. All that's happened is I've been passed around from one department to another and back again, before eventually being ghosted back in the summer. I have since opened another support ticket which is already winding its way around to ultimately leading me down another dead end. At this stage, all I want is for Microsoft to release my SQL database (my intellectual property) back to me. I am able to provide old invoices relating to my Azure account (from when I was able to log in and download invoices!), as well as proof of ID to prove I am who I say I am - enough is enough! Please advise.
SharePoint NoAccess Sites: Search Indexing and Copilot Misconceptions Guide
What is NoAccess Mode in SharePoint? NoAccess mode is a site-level setting in SharePoint Online that restricts user access to the site without permanently deleting it. Think of it as putting the site behind a locked door, the content still exists, but no one can open it. Why Do Organizations Use It? Temporary Lockdown: When a site is under review, being decommissioned, or needs to be secured quickly. Compliance & Security: Helps prevent accidental data exposure during audits or ownership changes. Preserve Data: Unlike deleting a site, NoAccess keeps the content intact for future reference or migration. How Does It Affect Search and Copilot? Search Indexing: By default, NoAccess mode does not remove the site from the search index. This means files may still appear in search results unless additional controls (like Restricted Content Discovery or NoCrawl) are applied. Copilot Behavior: Copilot uses the same index as Microsoft Search. If a site remains indexed, Copilot can surface summaries or references to its content even if users can’t open the files. This is why governance settings like Restricted Access Control or disabling indexing are critical when using Copilot. Why does this happen? NoAccess blocks site access, not indexing. The site remains in the search index unless indexing is explicitly disabled or Restricted Content Discovery (RCD) is enabled. Security trimming still applies. Users will only see items they have direct permissions to (e.g., via shared links). They cannot open anything they don’t have access to. Copilot respects permissions. It uses the same security model as Microsoft Search and Graph, so it never bypasses access controls. Low Priority. Marking a site as NoAccess is a bulk operation that goes into a low priority queue, specifically to avoid system bottlenecks and ensure real-time content changes are prioritized over less critical updates which means it can take much longer than expected for those sites to stop appearing in search results. What are the options to fully hide content? Turn off Allow this site to appear in search results: This setting removes the site from indexing. Note: change the search setting BEFORE setting NoAccess to a site. Enable Restricted Content Discovery (RCD): This hides the site from search and Copilot while keeping it accessible to those with permissions. There is a PowerShell cmdlet available: Set-SPOSite –identity <site-url> -RestrictContentOrgWideSearch $true Please note that for larger sites, both the RCD and no-crawl processes may require a minimum of a week to reflect updates. According to the RCD documentation, sites with more than 500,000 pages could experience update times exceeding one week. What are the options to get Site Crawl information? When setting up the site for NoCrawl, you can run REST to see if the items are returning in search from that site. You can use a simple REST call like: https://contoso.sharepoint.com/_api/search/query?querytext='path:"<siteurl>"'&sourceid='8413cd39-2156-4e00-b54d-11efd9abdb89'&trimduplicates=false. You have to login into the tenant first. An XML object will be generated, please look for <d:TotalRows m:type="Edm.Int32">1</d:TotalRows> you will see the count going down, at some point the count will be equals to 0, that means all items were removed from index. You can use PnP to check the site settings, here an example - Enable/Disable Search Crawling on Sites and Libraries | PnP Samples, remember PnP is open source and it is not supported by Microsoft. 
Get-PnPSite | Select NoCrawl
Key Takeaways
Setting a SharePoint site to NoAccess does not automatically remove it from search or Copilot.
Copilot and Search always enforce permissions; users never see or access unauthorized content.
For complete removal, disable site indexing or enable RCD.
Monitor index status to confirm content is truly hidden.
Understanding and managing these settings ensures secure, seamless experiences with Copilot and Microsoft Search.
Helpful Resources
Lock and unlock sites - SharePoint in Microsoft 365 | Microsoft Learn
Enable/Disable Search Crawling on Sites and Libraries | PnP Samples
Restrict discovery of SharePoint sites and content - SharePoint in Microsoft 365 | Microsoft Learn
Contributors: Tania Menice
Avoiding Access Errors with SharePoint App-Only Access
To avoid persistent access errors like “403 Forbidden” when using the SharePoint Online REST API with app-only permissions, it is essential to authenticate using a self-signed X.509 certificate rather than a client secret, as SharePoint requires certificate-based authentication for app-only access to ensure stronger security and validation. The solution involves generating a certificate, updating the Azure AD app registration with it, and using it to obtain access tokens, as demonstrated in PnP PowerShell examples and Microsoft documentation.
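A minimal sketch of that flow, assuming PnP PowerShell and an existing Entra (Azure AD) app registration: generate a self-signed certificate, upload the exported public key (.cer) to the app registration, then connect with the certificate instead of a secret. The client ID, tenant, and site URL below are placeholders.

# Minimal sketch, assuming PnP PowerShell and an app registration that has been
# granted an appropriate application permission (e.g. Sites.Read.All).
# Client ID, tenant, and site URL are placeholders.

# 1. Create a self-signed certificate and export the public key for the app registration.
$cert = New-SelfSignedCertificate -Subject "CN=SpoAppOnly" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048
Export-Certificate -Cert $cert -FilePath .\SpoAppOnly.cer   # upload this .cer to the app registration

# 2. Connect app-only with the certificate (not a client secret) and call SharePoint.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/finance" `
    -ClientId "00000000-0000-0000-0000-000000000000" `
    -Tenant "contoso.onmicrosoft.com" `
    -Thumbprint $cert.Thumbprint

Get-PnPWeb | Select-Object Title, Url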