Azure Service Fabric
Installing AzureMonitoringAgent and linking it to your Log Analytics Workspace
Service Fabric clusters currently ship with the MicrosoftMonitoringAgent (MMA) as the default installation. However, it is essential to note that MMA is being retired; for more details, refer to: We're retiring the Log Analytics agent in Azure Monitor on 31 August 2024 | Azure updates | Microsoft Azure. Therefore, if you are currently using MMA, you should start migrating to the AzureMonitoringAgent (AMA).

Installation and linking of the AzureMonitoringAgent to a Log Analytics workspace:

Create a Log Analytics workspace (if one is not already established): In the Azure portal, search for "Log Analytics Workspace" and create a new workspace. Make sure you select the same resource group and region where your cluster is located. Detailed explanation: Create Log Analytics workspaces - Azure Monitor | Microsoft Learn

Create a Data Collection Rule (DCR): In the Azure portal, search for "Data Collection Rules". Select the same resource group and region as your cluster. For Platform type, select the type of your instances: Windows, Linux, or both. You can leave the data collection endpoint blank. In the Resources section, add the virtual machine scale set (VMSS) resource attached to the Service Fabric cluster. In the "Collect and deliver" section, click Add data source and add both Performance Counters and Windows Event Logs, one by one. For both data sources, choose Azure Monitor Logs as the destination and, in the Account or namespace dropdown, select the name of the Log Analytics workspace created in the previous step, then click Add data source. Finally, click Review + create.
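If you prefer scripting over the portal, the AMA extension itself can also be pushed to the scale set with the Az PowerShell module. The sketch below is an assumption-level illustration, not part of the original walkthrough: the resource group and scale set names ("my-rg", "my-vmss") are placeholders, and it assumes the Az module is installed and you are logged in.

```powershell
# Fetch the scale set that backs the Service Fabric node type
$vmss = Get-AzVmss -ResourceGroupName "my-rg" -VMScaleSetName "my-vmss"

# Add the AzureMonitorWindowsAgent extension (use AzureMonitorLinuxAgent for Linux node types)
$vmss = Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "AzureMonitorWindowsAgent" `
    -Publisher "Microsoft.Azure.Monitor" `
    -Type "AzureMonitorWindowsAgent" `
    -TypeHandlerVersion "1.10" `
    -AutoUpgradeMinorVersion $true

# Push the updated model to the scale set
Update-AzVmss -ResourceGroupName "my-rg" -VMScaleSetName "my-vmss" -VirtualMachineScaleSet $vmss
```

The DCR still has to be associated with the VMSS, either through the portal steps described here or with the Az.Monitor cmdlets.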
Note: For a more detailed explanation of how to create a DCR and the various ways of creating one, follow Collect events and performance counters from virtual machines with Azure Monitor Agent - Azure Monitor | Microsoft Learn

Adding the VMSS resource to the DCR: Once the DCR is created, click Resources in the left panel and check whether the VMSS resource added while creating the DCR is visible. If not, click Add, navigate to the VMSS attached to the Service Fabric cluster, and click Apply. Refresh the Resources tab to check whether the VMSS now appears; retry a couple of times if needed.

Querying logs and verifying the AzureMonitoringAgent setup: Please allow a 10-15 minute waiting period. After this time has elapsed, navigate to your Log Analytics workspace, open the Logs section from the left panel, and run queries to see the logs. For example, to check the heartbeat of all instances:

Heartbeat
| where Category contains "Azure Monitor Agent"
| where OSType contains "Windows"

The matching logs appear in the results panel, and you can modify the query as needed. For more details on Log Analytics queries, refer to Log Analytics tutorial - Azure Monitor | Microsoft Learn

Uninstalling the MicrosoftMonitoringAgent (MMA): Once you have verified that logs are being generated, go to the virtual machine scale set, open the "Extensions + applications" section, and delete the old MMA extension from the VMSS.

Preserve Disk space in ImageStore for Service Fabric Managed Clusters
As mentioned in this article: Service Fabric: Best Practices to preserve disk space in Image Store, the ImageStore keeps copied packages and provisioned packages. In this article, we will discuss how you can configure cleanup of the copied application package for a Service Fabric managed cluster (SFMC). The mitigation is to set "AllowRuntimeCleanupUnusedApplicationTypesPolicy": "true" and, under properties, specify the following:

...
"applicationTypeVersionsCleanupPolicy": {
    "maxUnusedVersionsToKeep": 3
}

Below is a step-by-step guide to automatically remove unwanted application versions in your Service Fabric managed cluster.

Scenario: I have deployed 4 versions of my app (1 in use, 3 unused) to my managed cluster.

Symptom: I need Service Fabric to automatically clean up the unused application versions and keep only the last 3, so they do not fill the disk.

Mitigation steps: From https://resources.azure.com/ open your managed cluster resource and open read/write mode. Add the tag "AllowRuntimeCleanupUnusedApplicationTypesPolicy": "true". Under fabricSettings, add the parameter "name": "CleanupUnusedApplicationTypes", "value": "true", and set "maxUnusedVersionsToKeep": 3. Click PUT to save the changes. I then deployed the 5th version (1.0.4) to the cluster, which triggers cleanup of the oldest version (1.0.0).

Note: The automatic clean-up becomes effective about 24 hours after making these changes. When I then deployed a new version, I could see that the oldest version was also cleaned up.

For manual cleanup of the ImageStoreService: you can use PowerShell commands to delete copied packages and unregister application types as needed.
This includes using Get-ServiceFabricImageStoreContent to retrieve content and Remove-ServiceFabricApplicationPackage to delete it, as well as Unregister-ServiceFabricApplicationType to remove application packages from the image store and image cache on nodes.

Not enough disk space issue in Service Fabric cluster
From time to time, a Service Fabric cluster may run into different issues whose reported error or warning message states that one or more specific nodes do not have enough disk space. This can happen for several reasons; this blog covers the common solutions for the issue.

Possible root causes: There are many possible root causes for a not-enough-disk-space issue. In this blog, we will mainly talk about the following five:

1. Diagnostic log files (.trace and .etl) consume too much space
2. The paging file consumes too much space
3. Too many application packages exist on the node
4. Too many registered versions of an application type
5. Too many container images exist (only for clusters running containers)

To identify which one matches your scenario, check the following descriptions:

For the log files, RDP into the node reporting not enough disk space and check the size of the folder D:\SvcFab\Log. If this folder is bigger than expected, you can reconfigure the cluster to decrease the size limit of the diagnostic log files.

For the paging file, it is a built-in feature of Windows; for a detailed introduction, please check this document. To verify, RDP into the node and check whether the hidden file D:\pagefile.sys exists. If it does, the node is using disk space as virtual memory, and you can consider configuring the paging file to be stored on disk C instead of disk D.

For too many application packages on a node, verify it in Service Fabric Explorer (SFX). Open SFX from the Azure portal Service Fabric overview page, go to the Image Store page of the cluster, and check whether there is any record with a name other than Store and WindowsFabricStore. If yes, please click the Load Size button to check its size.
Similar to point 3, for too many registered versions of an application type, check the same page but pay attention to the size of Store and see whether it consumes a lot of disk space. When a version of an application type is registered in the cluster, Service Fabric saves the files used to deploy the services included in that version onto each node. The more versions are registered, the more disk space is consumed.

The too-many-images cause only applies when the cluster is running the container feature. RDP into the node and use the command docker image ls to list all images on the node. Images that were used before but never removed or pruned can consume a lot of disk space, since container images are normally huge; for example, the Windows Server Core image is more than 10 GB.

Possible solutions: Now let's talk about the solutions for the above five kinds of issue.

1. To reconfigure the size limit of the diagnostic log files, open a PowerShell window with the Az module installed (please refer to the official document for how to install it). After logging in, use the following command to set the expected size limit:

Set-AzServiceFabricSetting -ResourceGroupName SF-normal -Name sfhttpjerry -Section Diagnostics -Parameter MaxDiskQuotaInMB -Value 25600

Remember to replace the resource group name, Service Fabric cluster name, and the size limit before running the command. Once the command succeeds, it may not take effect immediately: the cluster scans the size of the diagnostic logs periodically, so you need to wait until the next scan is triggered. Once it runs, if the diagnostic log files are bigger than your configured value (25600 MB = 25 GB in my example), the cluster automatically deletes some log files to release disk space.

2.
To change the path of the paging file, follow these steps: Check the status of the cluster in Service Fabric Explorer to make sure every node, service, and application is healthy. RDP into the VMSS node. In the search bar, type "Advanced System Settings", then go to the Advanced tab -> Performance Settings -> Advanced -> Change, set drive D to "No paging file" and drive C to "System managed size". This change requires a reboot of the VMSS node to take effect. Please reboot the node and wait until everything is back to a healthy status in Service Fabric Explorer before RDPing into the next node. Repeat the above steps for all nodes.

3. To clean up an application package, this is easy to do in SFX. Go to the same Image Store page used to check for this root cause. On the left side there is a menu to delete the unneeded package. After typing the name in the confirmation window and selecting Delete Image Store content, the cluster automatically deletes the unneeded package on every node.

4. For the issue caused by too many registered versions of an application type, manually unregister the versions that are no longer needed. In Service Fabric Explorer, click the application/application type to see the currently existing versions. For any version that is not currently used and no longer needed, unregister it with:

Unregister-ServiceFabricApplicationType -ApplicationTypeName "application type name" -ApplicationTypeVersion "version number" -Force

5. For the issue caused by too many container images, configure the cluster to automatically delete unused images. The detailed configuration can be found in this document, and the cluster configuration is updated as follows:

a. Visit Azure Resource Explorer in Read/Write mode, log in, and find the Service Fabric cluster.
b. Click the Edit button and modify the JSON cluster configuration as expected.
In this solution, that means adding some configuration to the fabricSettings part.
c. Send the request to save the new configuration by clicking the green PUT button and wait until the provisioning status of the cluster becomes Succeeded.

To make this solution work, one more thing we need to do is unregister all unnecessary and unused application type versions. This can also be done with the command documented here. Since the parameters ApplicationTypeName and ApplicationTypeVersion are both required, each run of the command unregisters only one version of one application type. Since you may have many versions and many application types, there are two possible approaches:

If there are versions of some application types which you want to keep registered for future use in this cluster, unregister the unnecessary versions one by one by running Unregister-ServiceFabricApplicationType -ApplicationTypeName VotingType -ApplicationTypeVersion 1.0.1 (remember to replace the ApplicationTypeName and ApplicationTypeVersion, and use step 2.e to connect to the cluster first).
If there is no version of any application type that you specifically want to keep — that is, we only need to keep the application types being used by running applications — connect to the cluster (step 2.e) and run the following script:

$apptypes = Get-ServiceFabricApplicationType
$apps = Get-ServiceFabricApplication
foreach ($apptype in $apptypes)
{
    $using = $false
    foreach ($app in $apps)
    {
        if ($apptype.ApplicationTypeName -eq $app.ApplicationTypeName -and $apptype.ApplicationTypeVersion -eq $app.ApplicationTypeVersion)
        {
            $using = $true
            break
        }
    }
    if ($using -eq $false)
    {
        # Unregister any application type version with no running application instance
        Unregister-ServiceFabricApplicationType -ApplicationTypeName $apptype.ApplicationTypeName -ApplicationTypeVersion $apptype.ApplicationTypeVersion -Force
    }
}

In addition to the above five possible causes and solutions, there are three more possible mitigations for the "not enough disk space" issue. The following is the explanation.

Scale out the VMSS: Sometimes scaling out, i.e. increasing the number of nodes of the Service Fabric cluster, also helps mitigate a full disk. This operation not only improves CPU and memory usage, but also auto-balances the distribution of services among the nodes, improving disk usage. When using Silver or higher durability, you can scale out the node count in the VMSS directly.

Scale up the VMSS: This point is easy to understand: since the issue is a full disk, we can simply change the VM SKU to a bigger size with more disk space. But please check all the above solutions first to make sure more disk space is really needed. For example, if the application has stateful services and the disk fills up because those services save too much data, consider improving the code logic before scaling up the VMSS.
Otherwise, with a bigger VM SKU, the issue will still reproduce sooner or later. To scale up the VMSS, there are two ways:

The first is to use the command Update-AzVmss to update the SKU of the scale set. This is the simple way, but it is not recommended because it carries some risk of data loss or instances going down. When using Silver or higher durability, the risk is mitigated because repair tasks are supported.

The second way to upgrade the size of the SF primary node type is to add a new node type with the bigger SKU. This option is much more difficult than the first, but it is officially recommended; check the document for more information.

Reconfigure the ReplicatorLog size: Please be aware that ReplicatorLog does not store log files; it stores important data of both the Service Fabric cluster and applications. Deleting this folder can cause data loss. Its size is fixed to the configured value (8 GB by default) and stays the same no matter how much data is stored. Modifying this setting is NOT recommended; only do so if you absolutely have to, accepting the risk of data loss.

For the ReplicatorLog size, as mentioned above, the key point is to add a customized ktlLogger setting to the cluster. To do that:

a. Visit Azure Resource Explorer in Read/Write mode, log in, and find the Service Fabric cluster.
b. Add the ktlLogger setting to the fabricSettings part. The expected expression is as follows:

{
    "name": "KtlLogger",
    "parameters": [{
        "name": "SharedLogSizeInMB",
        "value": "4096"
    }]
}

c. Send the request to save the new configuration by clicking the green PUT button and wait until the provisioning status of the cluster becomes Succeeded.
d. Visit SFX and check that everything is in a healthy state.
e.
Open a PowerShell command window on a computer where the cluster certificate is installed. If the Service Fabric module is not installed yet, please refer to our document to install it first. Then run the following commands to connect to the Service Fabric cluster; the thumbprint is the cluster certificate's, and remember to replace the cluster name with the correct URL:

$ClusterName = "xxx.australiaeast.cloudapp.azure.com:19000"
$CertThumbprint = "7279972D160AB4C3CBxxxxx34EA2BCFDFAC2B42"
Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 -X509Credential -ServerCertThumbprint $CertThumbprint -FindType FindByThumbprint -FindValue $CertThumbprint -StoreLocation CurrentUser -StoreName My

f. Disable one node of the cluster (_nodetype1_0 in the example):

Disable-ServiceFabricNode -NodeName "_nodetype1_0" -Intent RemoveData -Force

g. Monitor in SFX until the node from the last command shows status Disabled.
h. RDP into this node and manually delete the D:\SvcFab\ReplicatorLog folder. Attention: this operation removes everything in ReplicatorLog. Please double-check whether any content there is still needed before deleting.
i. Enable the disabled node with the following command, and monitor until its status is Up:

Enable-ServiceFabricNode -NodeName "_nodetype1_0"

j. Wait until everything is healthy in SFX and repeat steps f to i on every node. After that, the ReplicatorLog folder on each node will have the new customized size.

SSL/TLS connection issue troubleshooting guide
You may experience exceptions or errors when establishing TLS connections with Azure services. The exceptions vary dramatically depending on the client and server types; typical ones include "Could not create SSL/TLS secure channel." and "SSL Handshake Failed". In this article we will discuss common causes of TLS-related issues and the troubleshooting steps.

Service Fabric Explorer (SFX) web client CVE-2023-23383 spoofing vulnerability
Service Fabric Explorer (SFX) is the web client used when accessing a Service Fabric (SF) cluster from a web browser. The version of SFX used is determined by the version of your SF cluster. We are providing this blog to make customers aware that Service Fabric versions 9.1.1436.9590 and below are affected. These versions could potentially allow unwanted code execution in the cluster if an attacker can successfully convince a victim to click a malicious link and perform additional actions in the Service Fabric Explorer interface. This issue has been resolved as CVE-2023-23383 (CVSS score 8.2 / 7.1) in Service Fabric 9.1.1583.9589, released on March 14th, 2023. See the Technical Details section for more information.

Force delete application while the Application/Service stuck in deleting state
A few days back I was working on a scenario where I was unable to delete an application and its service from Service Fabric Explorer and PowerShell. Looking at the application state in Service Fabric Explorer, the application was stuck in the Deleting state. Executing the PowerShell command failed with the following error:

Remove-ServiceFabricApplication -ApplicationName fabric:/Voting -Force -TimeoutSec 350
Remove-ServiceFabricApplication : Operation timed out.
At line:1 char:1
+ Remove-ServiceFabricApplication -ApplicationName fabric:/Voting -Forc ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationTimeout: (Microsoft.Servi...usterConnection:ClusterConnection) [Remove-ServiceFabricApplication], TimeoutException
    + FullyQualifiedErrorId : RemoveApplicationInstanceErrorId,Microsoft.ServiceFabric.Powershell.RemoveApplication

In this scenario, there are PowerShell cmdlet switches which play a vital role and can help mitigate the issue. I tried the -ForceRemove switch. The following script helped me delete the application replicas distributed across multiple nodes:

$ClusterName = "clustername.cluster_region.cloudapp.azure.com:19000"
$Certthumprint = "{replace_with_ClusterThumprint}"

Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 `
    -X509Credential `
    -ServerCertThumbprint $Certthumprint `
    -FindType FindByThumbprint `
    -FindValue $Certthumprint `
    -StoreLocation CurrentUser `
    -StoreName My

$ApplicationName = "fabric:/voting"
foreach ($node in Get-ServiceFabricNode)
{
    [void](Get-ServiceFabricDeployedReplica -NodeName $node.NodeName -ApplicationName $ApplicationName | Remove-ServiceFabricReplica -NodeName $node.NodeName -ForceRemove)
}
Remove-ServiceFabricApplication -ApplicationName $ApplicationName -Force

There might be situations where the -Force switch could also fail.
In that scenario, try the following:

1) Move the Cluster Manager primary replica to another node and try to remove the app again:
a) Move-ServiceFabricPrimaryReplica -PartitionId [Cluster Manager System Service Partition Id] -ServiceName fabric:/System/ClusterManagerService
b) Remove-ServiceFabricApplication -ApplicationName fabric:/voting -Force -ForceRemove

2) If it still doesn't work due to stuck services or PLB affinity, try moving the primary replica of each partition of the NamingService and try again.

There might also be a scenario where the issue happens with only one service distributed across multiple nodes instead of the whole application. In that case, use the script below, which is similar to the one for the application but with a small change so that it only deletes a specific service:

$ClusterName = "clustername.cluster_region.cloudapp.azure.com:19000"
$Certthumprint = "{replace_with_ClusterThumprint}"

Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 `
    -X509Credential `
    -ServerCertThumbprint $Certthumprint `
    -FindType FindByThumbprint `
    -FindValue $Certthumprint `
    -StoreLocation CurrentUser `
    -StoreName My

$ApplicationName = "fabric:/voting"
$ServiceName = "fabric:/Voting/FEService"
foreach ($node in Get-ServiceFabricNode)
{
    [void](Get-ServiceFabricDeployedReplica -NodeName $node.NodeName -ApplicationName $ApplicationName | Where-Object {$_.ServiceName -match $ServiceName} | Remove-ServiceFabricReplica -NodeName $node.NodeName -ForceRemove)
}
Remove-ServiceFabricService -ServiceName $ServiceName -Force

Hope this helps.

Common causes of SSL/TLS connection issues and solutions
In the TLS connection common causes and troubleshooting guide (microsoft.com), the mechanism of establishing an SSL/TLS connection and the tools to troubleshoot connection issues were introduced. In this article, I would like to introduce 3 common issues that may occur when establishing an SSL/TLS connection, and the corresponding solutions for Windows, Linux, .NET, and Java:

1. TLS version mismatch
2. Cipher suite mismatch
3. TLS certificate is not trusted

TLS version mismatch

Before we jump into solutions, let me introduce how the TLS version is determined. As shown in the dataflow introduced in the first session (https://techcommunity.microsoft.com/t5/azure-paas-blog/ssl-tls-connection-issue-troubleshooting-guide/ba-p/2108065), the TLS connection is always started from the client end: the client proposes a TLS version, and the server checks whether it supports that version. If the server supports it, the conversation continues; if not, the conversation ends.

Detection
You may test with the tools introduced in this blog (TLS connection common causes and troubleshooting guide (microsoft.com)) to verify whether the connection issue was caused by a TLS version mismatch. If you capture network packets, you can also view the TLS version specified in the Client Hello. If the connection terminates without a Server Hello, it could be either a TLS version mismatch or a cipher suite mismatch.

Solution
Different types of clients have their own mechanisms to determine the TLS version. For example, web browsers (IE, Edge, Chrome, Firefox) each ship their own set of TLS versions, applications have their own libraries that define the TLS version, and the operating system (such as Windows) also supports defining the TLS version.

Web browser
In the latest Edge and Chrome, TLS 1.0 and TLS 1.1 are deprecated; TLS 1.2 is the default TLS version for these two browsers.
Below are the steps for setting the TLS version in Internet Explorer and Firefox; they work on Windows 10.

Internet Explorer: Search for Internet Options and find the setting in the Advanced tab.

Firefox: Open Firefox and type about:config in the address bar. Type tls in the search bar and find the settings security.tls.version.min and security.tls.version.max. The values define the range of supported TLS versions: 1 is TLS 1.0, 2 is TLS 1.1, 3 is TLS 1.2, 4 is TLS 1.3.

Windows system
Different Windows OS versions have different default TLS versions. The default TLS version can be overridden by adding/editing the DWORD registry values Enabled and DisabledByDefault. These registry values are configured separately for the protocol client and server roles, under registry subkeys named in the following format:

<SSL/TLS/DTLS> <major version number>.<minor version number><Client\Server>

For example, below is a registry path with a version-specific subkey:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client

For details, please refer to Transport Layer Security (TLS) registry settings | Microsoft Learn.

Application running on the .NET Framework
The application uses the OS-level configuration by default. As a quick test for HTTP requests, you can add the line below to specify the TLS version in your application before the TLS connection is established; to be safe, define it at the beginning of the project:

ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12

The above can be used as a quick test to verify the problem, but it is always recommended to follow the best practices in https://docs.microsoft.com/en-us/dotnet/framework/network-programming/tls.

Java application
For a Java application which uses Apache HttpClient to communicate with an HTTP server, you may check How to Set TLS Version in Apache HttpClient | Baeldung for how to set the TLS version in code.
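Putting the detection advice above into practice, the PowerShell sketch below probes which TLS versions a server accepts by attempting a handshake with each version in turn. This is an illustrative assumption of mine, not from the original article: the host name is a placeholder, and it relies on the .NET SslStream and SslProtocols types available from Windows PowerShell (TLS 1.3 requires a recent OS/.NET build, so that attempt may fail for client-side reasons).

```powershell
# Try a TLS handshake against the endpoint with each protocol version in turn
$hostName = "www.microsoft.com"   # replace with the server you are troubleshooting
foreach ($version in @("Tls", "Tls11", "Tls12", "Tls13")) {
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($hostName, 443)
        $ssl = New-Object System.Net.Security.SslStream($client.GetStream())
        # Force a single protocol version so we learn exactly what the server accepts
        $ssl.AuthenticateAsClient($hostName, $null, [System.Security.Authentication.SslProtocols]$version, $false)
        Write-Host "$version : handshake succeeded"
        $ssl.Dispose()
    }
    catch { Write-Host "$version : handshake failed" }
    finally { $client.Dispose() }
}
```

A version the server refuses shows up as a failed handshake, mirroring the missing Server Hello described in the Detection section.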
Cipher suite mismatch

Like a TLS version mismatch, a cipher suite mismatch can also be tested with the tools introduced in the previous article.

Detection
In the network capture, the connection is terminated after the Client Hello, so if you do not see a Server Hello packet, that indicates either a TLS version mismatch or a cipher suite mismatch. If the server is publicly accessible, you can also use SSL Labs (https://www.ssllabs.com/ssltest/analyze.html) to detect all supported cipher suites.

Solution
In the process of establishing an SSL/TLS connection, the server makes the final decision on which cipher suite is used. Different Windows OS versions support different TLS cipher suites and priority orders; for the supported cipher suites, please refer to Cipher Suites in TLS/SSL (Schannel SSP) - Win32 apps | Microsoft Learn for details. If a service is hosted on Windows, the default order can be overridden with the group policy below to affect the logic of choosing a cipher suite. The steps work on Windows Server 2019:

Edit group policy -> Computer Configuration -> Administrative Templates -> Network -> SSL Configuration Settings -> SSL Cipher Suite Order. Enable it and configure the priority list with all the cipher suites you want.

Cipher suites can be manipulated by command as well; please refer to TLS Module | Microsoft Learn for details.

TLS certificate is not trusted

Detection
Access the URL from a web browser; it does not matter whether the page loads or not, because before loading anything from the remote server, the browser tries to establish the TLS connection. If the browser returns a certificate warning, it means the certificate is not trusted on the current machine.

Solution
To resolve this issue, we need to add the CA certificate to the client's trusted root store. The CA certificate can be obtained from the web browser: click the warning icon showing the "isn't secure" warning, click the "show certificate" button, and export the certificate.
Import the exported crt file into the client system.

Windows
Open "Manage computer certificates". Under Trusted Root Certification Authorities -> Certificates, choose All Tasks -> Import and select the exported crt file, leaving the other settings at their defaults.

Ubuntu
The command below lists the CAs currently trusted by the system:

awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt

If you do not see the desired CA in the result, the commands below add a new CA certificate:

$ sudo cp <exported crt file> /usr/local/share/ca-certificates
$ sudo update-ca-certificates

RedHat/CentOS
The command below lists the CAs currently trusted by the system:

awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

If you do not see the desired CA in the result, the commands below add a new CA certificate:

sudo cp <exported crt file> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Java
The JVM uses a trust store which contains certificates of well-known certification authorities. The trust store on the machine may not contain the new certificates that we recently started using. If that is the case, the Java application receives SSL failures when trying to access the storage endpoint.
The errors would look like the following:

Exception in thread "main" java.lang.RuntimeException: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at org.example.App.main(App.java:54)
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:371)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:314)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:309)

Run the command below to import the crt file into the JVM cert store (the command works on JDK 19.0.2):

keytool -importcert -alias <alias> -keystore "<JAVA_HOME>/lib/security/cacerts" -storepass changeit -file <crt_file>

The command below exports the current certificate information from the JVM cert store:

keytool -keystore "<JAVA_HOME>\lib\security\cacerts" -list -storepass changeit > cert.txt

The certificate will be listed in the cert.txt file if it was imported successfully.
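On Windows, a quick way to confirm that an imported certificate now chains to a trusted root is to build the chain the same way a TLS client would. The PowerShell sketch below is my own illustration (the file path is a placeholder); it uses the .NET X509Chain type, which consults the machine's trust stores.

```powershell
# Load the exported certificate and build its chain against the local trust stores
$cert  = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\temp\exported.crt")
$chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
$ok = $chain.Build($cert)

if ($ok) {
    "Certificate chains to a trusted root."
} else {
    # Each status explains why chain building failed, e.g. UntrustedRoot
    $chain.ChainStatus | ForEach-Object { "$($_.Status) : $($_.StatusInformation.Trim())" }
}
```

If the output still reports UntrustedRoot after the import, the CA certificate did not land in the Trusted Root Certification Authorities store, so repeat the import steps above.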