Azure Service Fabric
How Do You Handle Multiple Server Certificate Thumbprints in Azure Service Fabric Managed Clusters?
Hi everyone, I wanted to share a common challenge we’ve encountered in DevOps pipelines when working with Azure Service Fabric Managed Clusters (SFMC), and open it up for discussion to hear how others are handling it.

🔍 The Issue

When retrieving the cluster certificate thumbprints using PowerShell:

(Get-AzResource -ResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.ServiceFabric/managedclusters/<CLUSTER_NAME>").Properties.clusterCertificateThumbprints

…it often returns multiple thumbprints. This typically happens due to certificate renewals or rollovers, and including all of them in your DevOps configuration isn’t practical.

✅ Solution 1: What Worked for Us

We’ve had success using the last thumbprint in the list, assuming it’s the most recently active certificate:

(Get-AzResource -ResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.ServiceFabric/managedclusters/<CLUSTER_NAME>").Properties.clusterCertificateThumbprints | Select-Object -Last 1

This approach has helped us maintain stable and secure connections in our pipelines.

🔍 Solution 2: Get the Current Server Certificate

You can also verify the active certificate using OpenSSL:

openssl s_client -connect <MyCluster>.<REGION>.cloudapp.azure.com:19080 -servername <MyCluster>.<REGION>.cloudapp.azure.com | openssl x509 -noout -fingerprint -sha1

🛠️ Tip for New Deployments

If you're deploying a new SFMC, consider setting the following property in your ARM or Bicep template:

"autoGeneratedDomainNameLabelScope": "ResourceGroupReuse"

This ensures the domain name is reused within the resource group, which helps reduce certificate churn and keeps the thumbprint list clean and manageable.

⚠️ Note: This setting only applies during initial deployment and cannot be retroactively applied to existing clusters.
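One practical wrinkle with Solution 2: openssl prints the fingerprint as colon-separated hex (SHA1 Fingerprint=AB:CD:…), while Azure reports bare thumbprints, so a small normalization step helps when comparing the two. Below is a sketch, assuming openssl is installed; the helper names and the placeholder host are ours, not part of any Azure tooling.

```shell
# thumbprint_from_pem: read a PEM certificate on stdin and print its SHA-1
# fingerprint in the colon-free, upper-case form Azure reports as a thumbprint.
thumbprint_from_pem() {
  openssl x509 -noout -fingerprint -sha1 | cut -d'=' -f2 | tr -d ':'
}

# get_sf_thumbprint: fetch the live server certificate and normalize it.
# Host is a placeholder; 19080 is the SFX endpoint used in the post above.
get_sf_thumbprint() {
  local host="$1" port="${2:-19080}"
  echo | openssl s_client -connect "${host}:${port}" -servername "${host}" 2>/dev/null \
    | thumbprint_from_pem
}

# Usage (placeholder cluster name):
# get_sf_thumbprint "<MyCluster>.<REGION>.cloudapp.azure.com"
```

The normalized output can then be compared directly against the values in clusterCertificateThumbprints.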
Guidance for Certificate Use in CI/CD Pipelines for Service Fabric

In non-interactive CI/CD scenarios where certificates are used to authenticate with Azure Service Fabric, consider the following best practices.

Use Admin Certificates Instead of Cluster Certificates

Cluster certificates are used for node-to-node and cluster-level authentication and are highly privileged. For CI/CD pipelines, prefer a dedicated Admin client certificate, which:

- Grants administrative access only at the client level.
- Limits the blast radius in case of exposure.
- Is easier to rotate or revoke without impacting cluster internals.

Best practices to protect your Service Fabric certificates:

- Provision a dedicated Service Fabric Admin certificate specifically for the CI/CD pipeline instead of the cluster certificate. This certificate should not be reused across other services or users.
- Restrict access to this certificate strictly to the pipeline environment. It should never be distributed beyond what is necessary.
- Secure the pipeline itself, as it is part of the cluster’s supply chain and a high-value target for attackers.
- Implement telemetry and monitoring to detect potential exposure, such as unauthorized access to the CI/CD machine or unexpected distribution of the certificate.
- Establish a revocation and rotation plan to quickly respond if the certificate is compromised.
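One concrete piece of the rotation plan above is an automated expiry check in the pipeline itself, so the certificate is rotated before it lapses. A minimal sketch, assuming openssl is available on the build agent; the function name and path are illustrative, not part of Service Fabric:

```shell
# check_cert_expiry: fail (non-zero) when a certificate expires within N days,
# so a pipeline step can alert well before the certificate actually lapses.
check_cert_expiry() {
  local pem="$1" days="${2:-30}"
  # -checkend returns 0 only if the cert is still valid N days from now.
  openssl x509 -in "$pem" -noout -checkend "$(( days * 86400 ))"
}

# Usage (illustrative path):
# check_cert_expiry /secrets/sf-admin-client.pem 30 \
#   || echo "WARN: admin client certificate expires within 30 days - rotate it"
```

Wiring this into a scheduled pipeline stage gives the telemetry point the list above calls for, without exposing the private key.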
Installing AzureMonitoringAgent and linking it to your Log Analytics Workspace

Service Fabric clusters are currently equipped with the MicrosoftMonitoringAgent (MMA) as the default installation. However, it is essential to note that MMA was retired on 31 August 2024; for more details refer to We're retiring the Log Analytics agent in Azure Monitor on 31 August 2024 | Azure updates | Microsoft Azure. Therefore, if you are currently utilizing MMA, it is imperative to initiate the migration to the AzureMonitoringAgent (AMA).

Installation and linking of AzureMonitoringAgent to a Log Analytics workspace:

1. Create a Log Analytics workspace (if not already established):
   - In the Azure portal, search for "Log Analytics Workspace" and create a new workspace.
   - Select the same resource group and region where your cluster is located.
   - Detailed explanation: Create Log Analytics workspaces - Azure Monitor | Microsoft Learn

2. Create a Data Collection Rule (DCR):
   - In the Azure portal, search for "Data Collection Rules (DCR)".
   - Select the same resource group and region as your cluster.
   - In Platform type, select the type of instances you have: Windows, Linux, or both. You can leave the data collection endpoint blank.
   - In the Resources section, add the Virtual Machine Scale Set (VMSS) resource attached to the Service Fabric cluster.
   - In the "Collect and deliver" section, click Add data source and add both Performance Counters and Windows Event Logs, one at a time. Choose Azure Monitor Logs as the destination for both data sources, select the Log Analytics workspace created in step 1 in the Account or namespace dropdown, and click Add data source.
   - Click Review + create.
Note: for a more detailed explanation of how to create a DCR and the various ways of creating it, follow Collect events and performance counters from virtual machines with Azure Monitor Agent - Azure Monitor | Microsoft Learn.

3. Add the VMSS resource to the DCR:
   - Once the DCR is created, click Resources in the left panel.
   - Check whether the VMSS resource added while creating the DCR is listed. If not, click Add, navigate to the VMSS attached to the Service Fabric cluster, and click Apply.
   - Refresh the Resources tab to confirm the VMSS now appears; retry the Add step a couple of times if needed.

4. Query logs and verify the AzureMonitoringAgent setup:
   - Allow a 10-15 minute waiting period. After this time has elapsed, navigate to your Log Analytics workspace and open the Logs section in the left panel.
   - Run queries to inspect the logs. For example, to check the heartbeat of all instances:

     Heartbeat
     | where Category contains "Azure Monitor Agent"
     | where OSType contains "Windows"

   - The matching log rows appear in the results panel, and you can modify the query as per your requirements. For more details on Log Analytics queries, refer to Log Analytics tutorial - Azure Monitor | Microsoft Learn.

5. Uninstall the MicrosoftMonitoringAgent (MMA):
   - Once you have verified that logs are being generated, go to the Virtual Machine Scale Set, open the "Extensions + applications" section, and delete the old MMA extension from the VMSS.
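The heartbeat verification in step 4 can also be run non-interactively, for example from the Azure CLI's log-analytics extension, which is handy for scripted checks after the migration. A sketch; the workspace GUID is a placeholder, and it assumes az is installed and logged in:

```shell
# build_heartbeat_query: assemble the KQL from step 4 for a given OS type.
build_heartbeat_query() {
  local os="${1:-Windows}"
  printf 'Heartbeat | where Category contains "Azure Monitor Agent" | where OSType contains "%s"' "$os"
}

# Usage (placeholder workspace id; requires the az log-analytics extension):
# az monitor log-analytics query \
#   --workspace "00000000-0000-0000-0000-000000000000" \
#   --analytics-query "$(build_heartbeat_query Windows)"
```

An empty result after the waiting period suggests the AMA extension or the DCR association still needs attention.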
Preserve Disk space in ImageStore for Service Fabric Managed Clusters

As mentioned in the article Service Fabric: Best Practices to preserve disk space in Image Store, the ImageStore keeps copied packages and provisioned packages. In this article, we discuss how you can configure cleanup of copied application packages for a Service Fabric Managed Cluster (SFMC). The mitigation is to set the tag "AllowRuntimeCleanupUnusedApplicationTypesPolicy": "true" and, in the cluster properties, specify:

"applicationTypeVersionsCleanupPolicy": { "maxUnusedVersionsToKeep": 3 }

Below is a step-by-step guide to automatically removing unwanted application versions in your Service Fabric managed cluster.

Scenario: I have deployed 4 versions of my app (1 in use, 3 unused) to my managed cluster.

Symptom: I need Service Fabric to automatically clean up the unused application versions and keep only the last 3, so the disk space does not fill up.

Mitigation steps:

1. From https://resources.azure.com/ open your managed cluster resource and switch to Read/Write mode.
2. Add the tag "AllowRuntimeCleanupUnusedApplicationTypesPolicy": "true".
3. Under fabricSettings, add the parameter "name": "CleanupUnusedApplicationTypes", "value": "true", and set "maxUnusedVersionsToKeep": 3 in the applicationTypeVersionsCleanupPolicy.
4. Click PUT to save the changes. I then deployed the 5th version (1.0.4) to the cluster, which should trigger cleanup of the oldest version (1.0.0).

Note: the automatic cleanup takes effect about 24 hours after making these changes. When I later deployed another new version, I could see that the oldest version was again cleaned up.

For manual cleanup of the ImageStoreService, you can use PowerShell commands to delete copied packages and unregister application types as needed.
This includes using Get-ServiceFabricImageStoreContent to retrieve content, Remove-ServiceFabricApplicationPackage to delete copied packages, and Unregister-ServiceFabricApplicationType to remove application packages from the image store and the image cache on the nodes.
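To make the cleanup policy above concrete, here is a small bash illustration (the function name is ours, not Service Fabric's) of which versions become eligible for removal when maxUnusedVersionsToKeep is 3: the in-use version is never touched, and only unused versions beyond the newest three are cleaned up.

```shell
# versions_to_clean: given maxUnusedVersionsToKeep, the in-use version, and
# all registered versions sorted oldest-first, print the versions that the
# cleanup policy would remove.
versions_to_clean() {
  local keep="$1" in_use="$2"; shift 2
  local unused=() v i
  for v in "$@"; do
    [ "$v" = "$in_use" ] || unused+=("$v")
  done
  local excess=$(( ${#unused[@]} - keep ))
  for (( i = 0; i < excess; i++ )); do
    echo "${unused[$i]}"
  done
}

# Matches the scenario above: five versions, 1.0.4 in use, keep 3 unused.
versions_to_clean 3 1.0.4 1.0.0 1.0.1 1.0.2 1.0.3 1.0.4   # prints 1.0.0
```

This mirrors why deploying version 1.0.4 made 1.0.0 eligible: four versions were unused and only three are kept.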
Azure Logic Apps: HTTP Request OR Custom Connector

Hello, as far as I know, we use HTTP requests when consuming first-party/third-party APIs, so when should we use a custom connector instead? What are the business cases for using an HTTP request in Power Automate and Power Apps versus using a custom connector in Power Apps and Power Automate? What are the pros and cons of an HTTP request versus a custom connector? Thanks and regards, -Sri
Service Fabric Explorer (SFX) web client CVE-2023-23383 spoofing vulnerability

Service Fabric Explorer (SFX) is the web client used when accessing a Service Fabric (SF) cluster from a web browser. The version of SFX used is determined by the version of your SF cluster. We are providing this blog to make customers aware that Service Fabric versions 9.1.1436.9590 and below are affected. These versions could potentially allow unwanted code execution in the cluster if an attacker can successfully convince a victim to click a malicious link and perform additional actions in the Service Fabric Explorer interface. This issue was resolved in Service Fabric 9.1.1583.9589, released on March 14th, 2023, as CVE-2023-23383, which has a CVSS score of 8.2 / 7.1. See the Technical Details section for more information.
Common causes of SSL/TLS connection issues and solutions

In the TLS connection common causes and troubleshooting guide (microsoft.com), the mechanism of establishing SSL/TLS connections and the tools to troubleshoot them were introduced. In this article, I would like to introduce three common issues that may occur when establishing an SSL/TLS connection, with corresponding solutions for Windows, Linux, .NET, and Java:

1. TLS version mismatch
2. Cipher suite mismatch
3. TLS certificate is not trusted

TLS version mismatch

Before we jump into solutions, let me introduce how the TLS version is determined. As the dataflow introduced in the first session shows (https://techcommunity.microsoft.com/t5/azure-paas-blog/ssl-tls-connection-issue-troubleshooting-guide/ba-p/2108065), a TLS connection is always started from the client end: the client proposes a TLS version, and the server checks whether it supports that version. If the server supports it, the conversation continues; if not, the conversation ends.

Detection

You can test with the tools introduced in the TLS connection common causes and troubleshooting guide (microsoft.com) to verify whether a TLS connection issue was caused by a TLS version mismatch. If you capture network packets, you can also view the TLS version specified in the Client Hello. If the connection terminates without a Server Hello, it could be either a TLS version mismatch or a cipher suite mismatch.

Solution

Different types of clients have their own mechanisms to determine the TLS version. For example, web browsers (IE, Edge, Chrome, Firefox) each have their own set of TLS versions, applications have their own libraries that define the TLS version, and the operating system (for example, Windows) can also define the TLS version.

Web browsers

In the latest Edge and Chrome, TLS 1.0 and TLS 1.1 are deprecated; TLS 1.2 is the default TLS version for these two browsers.
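Independent of which client is involved, you can probe from the command line which TLS versions a server accepts by forcing each version in turn with openssl s_client. A sketch, assuming a reasonably recent openssl build; host and port are placeholders:

```shell
# check_tls_version: probe whether a server accepts a given TLS version
# (tls1, tls1_1, tls1_2, tls1_3). s_client exits non-zero when the
# handshake fails, so the function's exit status is the answer.
check_tls_version() {
  local host="$1" port="$2" ver="$3"
  echo | openssl s_client -connect "${host}:${port}" "-${ver}" >/dev/null 2>&1
}

# Usage (placeholder host):
# for v in tls1 tls1_1 tls1_2 tls1_3; do
#   check_tls_version example.com 443 "$v" && echo "$v: accepted" || echo "$v: rejected"
# done
```

A version the probe rejects but the client insists on is exactly the mismatch described above.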
Below are the steps for setting the TLS version in Internet Explorer and Firefox; they work on Windows 10.

Internet Explorer

1. Search for Internet Options.
2. Find the settings in the Advanced tab.

Firefox

1. Open Firefox and type about:config in the address bar.
2. Type tls in the search bar and find the settings security.tls.version.min and security.tls.version.max. The values define the range of supported TLS versions: 1 is TLS 1.0, 2 is TLS 1.1, 3 is TLS 1.2, 4 is TLS 1.3.

Windows system

Different Windows OS versions have different default TLS versions. The default TLS version can be overridden by adding/editing the DWORD registry values 'Enabled' and 'DisabledByDefault'. These registry values are configured separately for the protocol client and server roles, under registry subkeys named in the following format:

<SSL/TLS/DTLS> <major version number>.<minor version number>\<Client|Server>

For example, below is a registry path with version-specific subkeys:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client

For details, please refer to Transport Layer Security (TLS) registry settings | Microsoft Learn.

Applications running on the .NET Framework

The application uses the OS-level configuration by default. As a quick test for HTTP requests, you can add the line below to specify the TLS version in your application before the TLS connection is established; to be on the safe side, define it at the beginning of the project:

ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12

The above can be used as a quick test to verify the problem, but it is always recommended to follow the best practices in https://docs.microsoft.com/en-us/dotnet/framework/network-programming/tls

Java applications

For Java applications that use Apache HttpClient to communicate with an HTTP server, see How to Set TLS Version in Apache HttpClient | Baeldung for how to set the TLS version in code.
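Whichever client stack you are configuring (.NET, Java, or a browser), curl gives a quick external cross-check of what the server will negotiate, since it can pin both the minimum and maximum TLS version. A sketch; it assumes curl 7.54+ for --tls-max, and skips certificate verification because only the protocol version is under test:

```shell
# force_tls12: fetch a URL while pinning the connection to exactly TLS 1.2;
# the function fails if the server will not negotiate that version.
# --insecure is used deliberately: we are testing the version, not trust.
force_tls12() {
  curl --silent --show-error --insecure --output /dev/null \
       --tlsv1.2 --tls-max 1.2 "$1"
}

# Usage (placeholder URL):
# force_tls12 https://example.com/ && echo "TLS 1.2 accepted" || echo "TLS 1.2 rejected"
```

If this succeeds but your application still fails, the mismatch is in the application's own TLS configuration rather than on the server.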
Cipher suite mismatch

Like a TLS version mismatch, a cipher suite mismatch can also be tested with the tools introduced in the previous article.

Detection

In the network capture, the connection is terminated after the Client Hello, so if you do not see a Server Hello packet, that indicates either a TLS version mismatch or a cipher suite mismatch. If the server allows public access, you can also test using SSL Labs (https://www.ssllabs.com/ssltest/analyze.html) to detect all supported cipher suites.

Solution

In the process of establishing an SSL/TLS connection, the server makes the final decision on which cipher suite is used in the communication. Different Windows OS versions support different TLS cipher suites and priority orders; for the supported cipher suites, refer to Cipher Suites in TLS/SSL (Schannel SSP) - Win32 apps | Microsoft Learn for details.

If a service is hosted on Windows, the default order can be overridden with the group policy below to affect which cipher suite is chosen. The steps work on Windows Server 2019:

1. Edit group policy -> Computer Configuration > Administrative Templates > Network > SSL Configuration Settings -> SSL Cipher Suite Order.
2. Select Enabled and configure the priority list with all the cipher suites you want.

Cipher suites can be manipulated by command as well; refer to TLS Module | Microsoft Learn for details.

TLS certificate is not trusted

Detection

Access the URL from a web browser. It does not matter whether the page loads: before loading anything from the remote server, the browser tries to establish the TLS connection. If the browser returns a certificate warning, the certificate is not trusted on the current machine.

Solution

To resolve this issue, we need to add the CA certificate to the client's trusted root store. The CA certificate can be obtained from the web browser:

1. Click the warning icon showing the 'isn’t secure' warning in the browser.
2. Click the 'show certificate' button.
3. Export the certificate.
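As an alternative to exporting the certificate through the browser, the server's certificate can be saved from the command line with openssl, which is useful on headless machines. A sketch; the function name and host are ours, and openssl is assumed to be installed:

```shell
# save_server_cert: write the server's leaf certificate (PEM) to a file, as a
# command-line alternative to exporting it from the browser. Add -showcerts
# to the s_client call to capture the intermediate/CA chain as well.
save_server_cert() {
  local host="$1" port="${2:-443}" out="$3"
  echo | openssl s_client -connect "${host}:${port}" -servername "${host}" 2>/dev/null \
    | openssl x509 -out "${out}"
}

# Usage (placeholder host):
# save_server_cert contoso.example 443 exported.crt
```

The resulting .crt file can then be imported with the platform-specific steps that follow.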
Import the exported .crt file into the client system.

Windows

1. Open Manage computer certificates.
2. Go to Trusted Root Certification Authorities -> Certificates -> All Tasks -> Import.
3. Select the exported .crt file, keeping the other default settings.

Ubuntu

The command below lists the CAs currently trusted by the system:

awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt

If you do not see the desired CA in the result, the commands below add new CA certificates:

sudo cp <exported crt file> /usr/local/share/ca-certificates
sudo update-ca-certificates

RedHat/CentOS

The command below lists the CAs currently trusted by the system:

awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

If you do not see the desired CA in the result, the commands below add new CA certificates:

sudo cp <exported crt file> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Java

The JVM uses a trust store which contains certificates of well-known certification authorities. The trust store on the machine may not contain the new certificates that we recently started using. If this is the case, the Java application receives SSL failures when trying to access the storage endpoint.
The errors would look like the following:

Exception in thread "main" java.lang.RuntimeException: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at org.example.App.main(App.java:54)
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:371)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:314)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:309)

Run the command below to import the .crt file into the JVM certificate store (the command works with JDK 19.0.2):

keytool -importcert -alias <alias> -keystore "<JAVA_HOME>/lib/security/cacerts" -storepass changeit -file <crt_file>

The command below exports the current certificate information in the JVM certificate store:

keytool -keystore "<JAVA_HOME>/lib/security/cacerts" -list -storepass changeit > cert.txt

The certificate will be displayed in the cert.txt file if it was imported successfully.
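After importing on Linux, you can confirm outside the browser that a server certificate now chains to a trusted CA by checking it against an explicit CA bundle with openssl verify. A sketch; the paths are placeholders and openssl is assumed to be installed:

```shell
# verify_against_ca: validate a certificate against a chosen CA bundle.
# Prints '<file>: OK' and exits 0 on success; non-zero on failure.
verify_against_ca() {
  local ca_bundle="$1" cert="$2"
  openssl verify -CAfile "$ca_bundle" "$cert"
}

# Usage (placeholder paths):
# verify_against_ca /etc/ssl/certs/ca-certificates.crt /tmp/server.crt
```

If this succeeds but the application still fails, the application is likely using its own trust store (as in the Java case above) rather than the system one.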
Deploying an application with Azure CI/CD pipeline to a Service Fabric cluster

Prerequisites

Before you begin this tutorial:

- Install Visual Studio 2019 with the Azure development and the ASP.NET and web development workloads.
- Install the Service Fabric SDK.
- Create a Windows Service Fabric cluster in Azure, for example by following this tutorial.
- Create an Azure DevOps organization. This allows you to create projects in Azure DevOps and use Azure Pipelines.

Configure the application in Visual Studio 2019

1. Clone the Voting application from https://github.com/Azure-Samples/service-fabric-dotnet-quickstart. After you click the Clone button, the application is ready to open in Solution Explorer.
2. Build the solution so that all dependency DLLs are downloaded to the packages folder from the NuGet store.
3. Cross-check the solution's NuGet packages to find whether any DLL is deprecated. If so, update all older DLL versions.
4. After correcting the DLL versions, check the application file 'voting.sfproj'. Note: for Visual Studio 2022, ToolsVersion will be 16.0, and the MSBuild version must be updated everywhere in the 'voting.sfproj' file. The MSBuild version can be read from packages.config.
5. Cross-check the .NET version in the application's 'packages.config' and at the service level; for example, 'packages.config' may reference net40 while the service targets net472. Add the MSBuild reference manually to the service project file where needed, otherwise the build will fail.
6. Push your changes to the repo. Do not push to the master branch: create a new branch and push your changes there. In Visual Studio, this can be done from Team Explorer; then sync the local branch to the DevOps repo.

Create the build pipeline

1. Click New Pipeline.
2. Click Use Classic Editor and select the repository.
3. Select the template: search for the Service Fabric template. After that, all the tasks are generated.
4. In Agent Specification, select the version matching your Visual Studio version (2019 here, because the project was built with VS 2019).
5. Use the latest stable NuGet version (5.5.1 at the time this blog was written) and uncheck the "Always download the latest matching version" checkbox.
6. In the Build solution task, select Visual Studio 2019 to match the version used for the project.
7. In the "Update Service Fabric Manifest" task, you can change the version directly in the manifest.
8. In Copy files, gather the data from the application manifest and application parameters files.
9. Enable the continuous integration checkbox so that every commit to the repo automatically triggers the build pipeline.
10. You can pass static values into the pipeline by defining them under Variables.

Release pipeline

The release pipeline is the final step, where the application is deployed to the cluster.

1. Click "New Release Pipeline", then select the Service Fabric template.
2. Add the artifact by selecting the correct build pipeline.
3. Click "1 job, 1 task".
4. Click Stages, then select the cluster connection. If no cluster connection has been created, click "New".
5. Create a service connection. Note: for Azure Active Directory credentials, add the server certificate thumbprint of the certificate used to create the cluster, and enter the credentials you want to use to connect to the cluster in the Username and Password fields.
6. To generate the client certificate value, open PowerShell ISE with admin access and run:

[System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("C:\Users\pritamsinha\Downloads\certi\certestuskv.pfx"))
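On a Linux or macOS agent, the same base64 client-certificate value can be produced without PowerShell. A sketch; the function name and .pfx path are placeholders, and -w 0 is GNU base64's no-wrap flag (on macOS, pipe through tr -d '\n' instead):

```shell
# pfx_to_base64: emit a .pfx as a single-line base64 string, so no
# leading/trailing whitespace needs to be removed afterwards.
pfx_to_base64() {
  base64 -w 0 "$1"
}

# Usage (placeholder path):
# pfx_to_base64 /path/to/client-cert.pfx
```

The single-line output avoids the manual whitespace cleanup the PowerShell route requires.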
7. Paste the output into the client certificate field and remove any whitespace from the beginning and end. If the base64 value is wrong, the deployment will fail.
8. Enable "Grant access permission to all pipelines".

Note: when the cluster certificate has expired and been replaced, update both the thumbprint and the client certificate value.

Next, in the Deploy Service Fabric application section:

- Application Parameters: select the target location of the application parameters file.
- Enable Compressed package so that the application package is converted to a zip file.
- CopyPackageTimeoutSec: timeout in seconds for copying the application package to the image store. If specified, this overrides the value in the publish profile.
- RegisterPackageTimeoutSec: timeout in seconds for registering or unregistering the application package.
- Enable "Skip upgrade for same Type and Version": skips the upgrade if the same application type and version already exists in the cluster (otherwise the upgrade fails during validation). If enabled, re-deployments are idempotent.
- Enable "Unregister Unused Versions": removes all unused versions of the application type after an upgrade.

Configure the "Continuous deployment trigger", then save the configuration and run the release pipeline.

References:
- Azure Pipelines: https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=az...
- Service Fabric Azure CI/CD pipeline: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-deploy-app-with-cicd-...