Azure Storage
Public access is not permitted on this storage account
Have you ever received an error message saying that public access is not permitted on this storage account while trying to access a blob? In case you genuinely need to allow public access to your storage account, I will show you how to set this configuration, but first note the following: disallowing public access for the storage account overrides the public access settings for all containers in that account, preventing anonymous access to blob data. When public access is disallowed for the account, it is not possible to configure the public access setting for a container to permit anonymous access, and any future anonymous requests to that account will fail.

To set the AllowBlobPublicAccess property on the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the Microsoft.Storage/storageAccounts/write action. Built-in roles with this action include:
- The Azure Resource Manager Owner role
- The Azure Resource Manager Contributor role
- The Storage Account Contributor role

Once you have an account with such permissions, you can enable the "Blob public access" setting on your storage account in any of the following ways:
- Azure portal: set "Allow Blob public access" to Enabled in the storage account's Configuration blade.
- PowerShell, for example: Set-AzStorageAccount -ResourceGroupName <rg> -Name <account> -AllowBlobPublicAccess $true
- AZ CLI, for example: az storage account update --resource-group <rg> --name <account> --allow-blob-public-access true

Now, even though you have the ability to enable the storage account "allowPublicAccess" setting, we still recommend following the principle of least privilege so that users have only the permissions they need to accomplish their tasks. To that end, you can also set the public access setting at the container and blob levels. The combination of the account-level and container-level settings determines the outcome. When Blob Storage receives an anonymous request, that request will succeed only if all of the following conditions are true:
- Anonymous public access is allowed for the storage account.
- The container is configured to allow anonymous public access.
- The request is for read access.

If any of those conditions is not true, the request will fail. The response code on failure depends on whether the anonymous request was made with a version of the service that supports the bearer challenge; the bearer challenge is supported by service versions 2019-12-12 and newer:
- If the anonymous request was made with a service version that supports the bearer challenge, the service returns error code 401 (Unauthorized).
- If the anonymous request was made with a service version that does not support the bearer challenge and anonymous public access is disallowed for the storage account, the service returns error code 409 (Conflict).
- If the anonymous request was made with a service version that does not support the bearer challenge and anonymous public access is allowed for the storage account, the service returns error code 404 (Not Found).

I'll now use Postman to show the above scenarios:
- Successful anonymous request.
- Failed anonymous request (API version 2023-01-03).
- Failed anonymous request (API version 2009-09-19).
- Failed anonymous request (API version 2009-09-19, public access allowed at the storage account level and access level set to "Private" at the container level).

As you can see, you can get different responses for the same scenario based solely on the API version used to send the request. Hopefully this article helps you identify the root cause of the error message you may be getting and gives you some useful information on how to approach your specific scenario. Whatever you plan to do to meet your business needs, always remember to apply the principle of least privilege to lower the potential for security risks.
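The decision logic above can be condensed into a small pure function. This is a hedged sketch for reasoning about the behavior, not a storage SDK API; the function name and shape are mine.

```python
# Models the documented response codes for anonymous blob requests.
# Not an Azure SDK function - just the article's decision table as code.

def anonymous_request_status(account_allows: bool, container_allows: bool,
                             read_access: bool,
                             supports_bearer_challenge: bool) -> int:
    """Return the HTTP status an anonymous request would receive."""
    if account_allows and container_allows and read_access:
        return 200  # all three conditions hold: request succeeds
    if supports_bearer_challenge:   # service version 2019-12-12 or newer
        return 401  # Unauthorized
    if not account_allows:
        return 409  # Conflict (older version, account disallows public access)
    return 404      # Not Found (older version, account allows public access)
```

For example, a private container on an account that allows public access yields 401 with a 2023-01-03 request but 404 with a 2009-09-19 request, matching the Postman scenarios above.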
References

Configure anonymous public read access for containers and blobs: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure
Set-AzStorageAccount: https://learn.microsoft.com/en-us/powershell/module/az.storage/set-azstorageaccount
az storage account update: https://learn.microsoft.com/en-us/cli/azure/storage/account?view=azure-cli-latest#az-storage-account-update

Mount ADLS Gen2 or Blob Storage in Azure Databricks
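As a preview of the mount this article walks through, here is a hedged sketch of the access-key variant. The container, account, key, and mount point are placeholder values, and dbutils exists only inside a Databricks notebook, so the mount call is wrapped in a function rather than executed.

```python
# Hedged sketch: mounting Azure Blob Storage in DBFS with an account access key.
# All names below are placeholders; dbutils is available only on Databricks.

def blob_mount_args(container, account, access_key):
    """Build the source URI and extra_configs that dbutils.fs.mount expects
    for the wasbs (Blob Storage) + access-key method."""
    source = f"wasbs://{container}@{account}.blob.core.windows.net"
    configs = {f"fs.azure.account.key.{account}.blob.core.windows.net": access_key}
    return source, configs

def mount_blob(dbutils, container, account, access_key, mount_point):
    # In a notebook: mount_blob(dbutils, "mydata", "myacct", key, "/mnt/mydata")
    source, configs = blob_mount_args(container, account, access_key)
    dbutils.fs.mount(source=source, mount_point=mount_point,
                     extra_configs=configs)
```

For the SAS-token method the config key changes to the form fs.azure.sas.&lt;container&gt;.&lt;account&gt;.blob.core.windows.net, with the SAS token as the value.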
Azure Databricks offers many of the same features as the open-source Databricks platform, such as a web-based workspace for managing Spark clusters, notebooks, and data pipelines, along with Spark-based analytics and machine learning tools. It is fully integrated with Azure cloud services, providing native access to Azure Blob Storage, Azure Data Lake Storage, Azure SQL Database, and other Azure services. This blog shows examples of mounting Azure Blob Storage or Azure Data Lake Storage in the Databricks File System (DBFS), with two authentication methods for the mount: access key and SAS token.

Common causes of SSL/TLS connection issues and solutions
In the TLS connection common causes and troubleshooting guide (microsoft.com), the mechanism of establishing SSL/TLS connections and the tools to troubleshoot them were introduced. In this article, I would like to introduce three common issues that may occur when establishing an SSL/TLS connection, and the corresponding solutions for Windows, Linux, .NET, and Java:
- TLS version mismatch
- Cipher suite mismatch
- TLS certificate is not trusted

TLS version mismatch

Before we jump into solutions, let me introduce how the TLS version is determined. As the dataflow introduced in the first article (https://techcommunity.microsoft.com/t5/azure-paas-blog/ssl-tls-connection-issue-troubleshooting-guide/ba-p/2108065) shows, a TLS connection is always started from the client end: the client proposes a TLS version, and the server only checks whether it supports that version. If the server supports it, the conversation continues; if not, the conversation ends.

Detection

You may test with the tools introduced in this blog (TLS connection common causes and troubleshooting guide (microsoft.com)) to verify whether a TLS connection issue was caused by a TLS version mismatch. If you capture network packets, you can also view the TLS version specified in the Client Hello. If the connection terminates without a Server Hello, it could be either a TLS version mismatch or a cipher suite mismatch.

Solution

Different types of clients have their own mechanisms to determine the TLS version. For example, web browsers (IE, Edge, Chrome, Firefox) ship with their own sets of TLS versions, applications have their own libraries that define the TLS version, and the operating system (such as Windows) also supports defining the TLS version.

Web browser

In the latest Edge and Chrome, TLS 1.0 and TLS 1.1 are deprecated; TLS 1.2 is the default TLS version for these two browsers.
Below are the steps for setting the TLS version in Internet Explorer and Firefox (verified on Windows 10).

Internet Explorer: search for Internet Options and find the setting in the Advanced tab.

Firefox: open Firefox and type about:config in the address bar. Type tls in the search bar and find the settings security.tls.version.min and security.tls.version.max. Their values define the range of supported TLS versions: 1 is TLS 1.0, 2 is TLS 1.1, 3 is TLS 1.2, 4 is TLS 1.3.

Windows system

Different Windows OS versions have different default TLS versions. The default TLS version can be overridden by adding/editing the DWORD registry values 'Enabled' and 'DisabledByDefault'. These registry values are configured separately for the protocol client and server roles under registry subkeys named using the following format: <SSL/TLS/DTLS> <major version number>.<minor version number><Client\Server>. For example, below is a registry path with a version-specific subkey:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client

For the details, please refer to Transport Layer Security (TLS) registry settings | Microsoft Learn.

Application running on the .NET Framework

The application uses the OS-level configuration by default. As a quick test for HTTP requests, you can add the line below to specify the TLS version in your application before the TLS connection is established. To be on the safer end, define it at the beginning of the project:

ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12

The above can be used as a quick test to verify the problem; it is always recommended to follow the document below for best practices: https://docs.microsoft.com/en-us/dotnet/framework/network-programming/tls

Java application

For a Java application that uses Apache HttpClient to communicate with an HTTP server, you may check How to Set TLS Version in Apache HttpClient | Baeldung for how to set the TLS version in code.
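The same "pin the client's TLS version" idea can be exercised from Python's standard ssl module as a quick local cross-check against a server. This is an analogue of the .NET and Java settings above, not part of any Azure API; the hostname in the commented-out lines is a placeholder.

```python
# Restrict the versions this client will propose in its Client Hello,
# then (optionally) connect to see whether the server accepts them.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # and anything newer

# To probe a real server (placeholder host), uncomment:
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # the version the server agreed to

print(ctx.minimum_version is ssl.TLSVersion.TLSv1_2)
```

If the handshake fails with this context but succeeds with the defaults, the server does not support the pinned version, which is exactly the mismatch described above.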
Cipher suite mismatch

Like a TLS version mismatch, a cipher suite mismatch can also be tested with the tools introduced in the previous article.

Detection

In the network capture, the connection is terminated after the Client Hello, so if you do not see a Server Hello packet, that indicates either a TLS version mismatch or a cipher suite mismatch. If the server allows public access, you can also test with SSL Labs (https://www.ssllabs.com/ssltest/analyze.html) to detect all supported cipher suites.

Solution

In the process of establishing an SSL/TLS connection, the server makes the final decision on which cipher suite is used for the communication. Different Windows OS versions support different TLS cipher suites and priority orders; for the supported cipher suites, please refer to Cipher Suites in TLS/SSL (Schannel SSP) - Win32 apps | Microsoft Learn. If a service is hosted on Windows, the default order can be overridden via the group policy below to affect which cipher suite is chosen. The steps were verified on Windows Server 2019:

Edit group policy -> Computer Configuration -> Administrative Templates -> Network -> SSL Configuration Settings -> SSL Cipher Suite Order. Enable it, then configure the priority list with all the cipher suites you want.

Cipher suites can be managed from the command line as well; please refer to TLS Module | Microsoft Learn for details.

TLS certificate is not trusted

Detection

Access the URL from a web browser. It does not matter whether the page loads: before loading anything from the remote server, the web browser tries to establish the TLS connection. If the browser reports that the certificate is not trusted, it means the certificate is not trusted on the current machine.

Solution

To resolve this issue, we need to add the CA certificate into the client's trusted root store. The CA certificate can be obtained from the web browser: click the warning icon / the 'isn't secure' warning in the address bar, click the 'show certificate' button, then export the certificate.
Import the exported crt file into the client system.

Windows

Open 'Manage computer certificates'. Under Trusted Root Certification Authorities -> Certificates, choose All Tasks -> Import and select the exported crt file, leaving the other settings at their defaults.

Ubuntu

The command below lists the CAs currently trusted by the system:

awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt

If you do not see the desired CA in the result, the commands below add new CA certificates:

$ sudo cp <exported crt file> /usr/local/share/ca-certificates
$ sudo update-ca-certificates

RedHat/CentOS

The command below lists the CAs currently trusted by the system:

awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

If you do not see the desired CA in the result, the commands below add new CA certificates:

sudo cp <exported crt file> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Java

The JVM uses a trust store which contains the certificates of well-known certification authorities. The trust store on the machine may not contain the new certificates that we recently started using. If this is the case, the Java application will receive SSL failures when trying to access the storage endpoint.
The errors would look like the following:

Exception in thread "main" java.lang.RuntimeException: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at org.example.App.main(App.java:54)
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:371)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:314)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:309)

Run the command below to import the crt file into the JVM cert store (verified on JDK 19.0.2):

keytool -importcert -alias <alias> -keystore "<JAVA_HOME>/lib/security/cacerts" -storepass changeit -file <crt_file>

The command below exports the current certificate information from the JVM cert store:

keytool -keystore "<JAVA_HOME>\lib\security\cacerts" -list -storepass changeit > cert.txt

The certificate will be listed in the cert.txt file if the import succeeded.

SSL/TLS connection issue troubleshooting guide
You may experience exceptions or errors when establishing TLS connections with Azure services. The exceptions vary dramatically depending on the client and server types; typical ones include "Could not create SSL/TLS secure channel" and "SSL Handshake Failed". In this article we discuss common causes of TLS-related issues and the steps to troubleshoot them.

The MAC signature found in the HTTP request 'XXXX' is not the same as computed signature
This issue happens when the authorization signature is incorrect. In this blog, we will cover the causes of this issue and how to troubleshoot it. When the client application sends a request, it builds the authorization header by constructing a string to sign and signing it with one of the account keys. Here are the main causes of this issue.

1. The StringToSign is correct, but it was not encoded correctly using Base64.

In this case, you need to investigate the application code logic and ensure that the StringToSign is created properly. This documentation explains how to build the string to sign. The logic involved in producing the signature is:
- Construct the string to sign.
- Base64-decode the storage account key.
- Use the HMAC-SHA256 algorithm and the decoded storage key from the previous step to compute a hash of the string to sign.
- Base64-encode the hash and include it in the Authorization header.

The syntax of the signature looks like this:

Signature=Base64(HMAC-SHA256(UTF8(StringToSign), Base64.decode(<your_azure_storage_account_shared_key>)))

2. The account key that was used to sign the StringToSign has been renewed.

If the application uses a hard-coded account key and the storage key gets renewed or regenerated on the storage side, that can cause this issue too, since the storage service validates the authorization header by computing the StringToSign and signing it with the new keys. Since the keys differ, the storage service will never arrive at the same signature as the one provided by your application. In that case, try to find out whether any key regeneration events on the storage access keys correspond to the time of the issue, and double-check that the account key used in your application is still valid. You can check whether a key regeneration event was performed on the Azure Storage account from the Activity logs.

3. The client signed the authorization header using the wrong StringToSign.
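The four signing steps above can be sketched in a few lines of Python. This is a hedged illustration of the algorithm, not an SDK call; the key and string-to-sign below are made-up placeholder values, and a real StringToSign must follow the documented canonical format.

```python
# Decode the account key from Base64, HMAC-SHA256 the UTF-8 string to sign,
# then Base64-encode the digest - the Signature=... formula above as code.
import base64
import hashlib
import hmac

def sign(string_to_sign: str, account_key_b64: str) -> str:
    key = base64.b64decode(account_key_b64)              # decode the key
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()           # HMAC-SHA256
    return base64.b64encode(digest).decode("ascii")      # re-encode the hash

# Dummy 64-byte key standing in for a real portal-issued account key:
dummy_key = base64.b64encode(b"0" * 64).decode("ascii")
signature = sign("GET\n\n\n\n/myaccount/mycontainer", dummy_key)
print(signature)  # a 44-character Base64 string
```

Logging the output of such a helper next to the StringToSign your application built makes it easy to compare against what the service computed.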
Refer to this article to construct the signature string.

Troubleshooting steps to follow:

1. Capture a Fiddler trace while reproducing the issue and look at the response body to see the server's string to sign. Compare this to the one used by the client (the application will need to log the string to sign used in its code). Below is a Fiddler capture for a failing scenario (error 403); in the response body, under the RAW section, you can see the string to sign used by the server (storage).

2. Debug your application code and log the output to show the string to sign it used. Compare the server's string to sign from the Fiddler response body with the client's string to sign: they should match. If you are not creating the string to sign manually via the REST API but are using the Azure Storage SDK to build it, you can set a breakpoint on the buildstringtosign function and check the string to sign computed by the application.

Hope this helps.

Troubleshooting connectivity to Blob Storage using Azure Storage Explorer with Private Endpoint
Scenario: you want to connect via Azure Storage Explorer to Blob Storage that uses a private endpoint. This blog covers some of the steps to verify the setup and the troubleshooting you can follow depending on the error message you encounter.

Actions:

Creating/verifying the setup configuration

There is a certain list of steps you need to follow when creating a fresh setup, and the documentation will be very helpful in the setup process. In case you already have a setup, below are the pointers to verify:
- The VM from which you are trying to connect and your storage account need to be part of the same virtual network and subnet. You can verify this by navigating through the respective resources in the Azure portal.
- Another check is to run nslookup against the storage account. It should resolve to a private IP, which you can verify against the IP assigned to the FQDN under the private endpoint configuration.
- Lastly, verify that the IP of the machine from which the connection is made is part of the same subnet.

Troubleshooting scenarios

Troubleshooting depends on the operations you are trying to perform on the storage account. The connection might get established, yet the actual error might appear only when you try to perform a listing or another operation. A common error you might get is "unable to retrieve child resources"; the important point here is to check the error in the details and see which underlying error it points to.

If it points to some kind of "403 - Authorization Error", you need to isolate what kind of error it is and why it occurs. Common causes here are insufficient roles, firewall and VNet configurations, etc. Ensure that you already have the right access in place.

If it points to an error such as "Account Does Not Exist", first verify that the account exists and hasn't been deleted.
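The nslookup check above can be approximated in a few lines of Python. This is a hedged sketch: the account name is a placeholder, and resolution to a private IP only works from a machine inside the virtual network, so the live lookup is left commented out.

```python
# Resolve a storage account FQDN and report whether the answer is a private
# address (what a working private endpoint should return).
import ipaddress
import socket

def is_private_ip(ip: str) -> bool:
    """True for private-range addresses such as 10.x.x.x or 192.168.x.x."""
    return ipaddress.ip_address(ip).is_private

def check_endpoint(fqdn: str) -> None:
    ip = socket.gethostbyname(fqdn)
    verdict = "private endpoint" if is_private_ip(ip) else "PUBLIC address - check DNS"
    print(f"{fqdn} -> {ip} ({verdict})")

# Run from inside the VNet, with a real account name:
# check_endpoint("mystorageacct.blob.core.windows.net")

print(is_private_ip("10.0.0.4"), is_private_ip("20.60.1.5"))  # True False
```

If the name resolves to a public IP from inside the VNet, the private DNS zone or hosts-file mapping is the first thing to investigate.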
If your setup uses the hosts file to map the storage account to an IP, ensure that the entry contains the current public IP. The file can be found at C:\Windows\System32\drivers\etc. Although the public IP does not change very often, verify it anyway: if the IP has changed, this error message may appear because Storage Explorer cannot find the account at the IP mentioned in the file. In that scenario, update the hosts file entry with the current public IP address of the storage account.

If you observe any other error specific to Storage Explorer, you can review this link as well.

Hope this helps!

How to use the Storage (ADLS Gen2) REST API to upload a file via an AAD access token
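As a preview of the upload flow this article covers, here is a hedged sketch that builds the three REST calls of a Gen2 file upload via the file system (DFS) interface: create the file, append data, then flush. The account, filesystem, path, and token values are placeholders, the x-ms-version value is an assumed recent service version, and the helpers only construct the requests; send them with any HTTP client.

```python
# Build (method, url, headers) descriptions for the DFS-endpoint calls that
# upload a file. Placeholders throughout; nothing here contacts Azure.

API_VERSION = "2021-08-06"  # assumed x-ms-version value

def _headers(token, extra=None):
    h = {"Authorization": f"Bearer {token}", "x-ms-version": API_VERSION}
    h.update(extra or {})
    return h

def create_file(account, fs, path, token):
    url = f"https://{account}.dfs.core.windows.net/{fs}/{path}?resource=file"
    return ("PUT", url, _headers(token))

def append_data(account, fs, path, token, data, position=0):
    url = (f"https://{account}.dfs.core.windows.net/{fs}/{path}"
           f"?action=append&position={position}")
    return ("PATCH", url, _headers(token, {"Content-Length": str(len(data))}))

def flush_data(account, fs, path, token, total_length):
    url = (f"https://{account}.dfs.core.windows.net/{fs}/{path}"
           f"?action=flush&position={total_length}")
    return ("PATCH", url, _headers(token))

method, url, headers = create_file("myacct", "myfs", "dir/file.txt", "<token>")
print(method, url)
```

The bearer token itself comes from Azure AD (for example, a token for the https://storage.azure.com/ resource), as the article below describes.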
Azure Data Lake Storage accounts support two types of interface: the blob storage interface and the file system interface.

Blob storage interface: https://docs.microsoft.com/en-us/rest/api/storageservices/operations-on-blobs
File system interface: https://docs.microsoft.com/en-us/rest/api/storageservices/data-lake-storage-gen2

These interfaces allow you to create and manage file systems, as well as to create and manage directories and files within a file system. Azure Data Lake Storage Gen2 APIs support Azure Active Directory (Azure AD), Shared Key, and shared access signature (SAS) authorization. Here is an example that uses an Azure Active Directory access token to upload a file.

Troubleshooting Azurite and Azure Storage Emulator issues
The Azure Storage Emulator is deprecated, and it is recommended to use Azurite instead. Azurite is an open-source, Azure Storage API-compatible server (emulator). Based on Node.js, Azurite provides a cross-platform experience for customers wanting to try Azure Storage easily in a local environment.

Troubleshooting Azurite issues

If you face any issues with Azurite, you can follow the troubleshooting steps below:
1. If you face an issue during launch, check in the command prompt that the Blob, Table, and Queue services are started and listening.
2. Check that the node.exe process is running in Task Manager.
3. Try to connect using Storage Explorer and check whether you can connect and perform the tests as explained here.
4. By default, Azurite shows the access logs in the command prompt from which it was launched. If you want to disable the access log, run it with the --silent switch; this suppresses the access logs in the command prompt.
5. If you need detailed error information, you can also run the command with the --debug switch and the path where you want the logs to be saved.
6. If you are running Azurite within Docker, you can follow this article and run the command with the right switch to gather the debug logs.

Troubleshooting Storage Emulator issues

If you are still using the Storage Emulator and face issues during its launch or installation, you can follow the steps below:
1. Make sure the Storage Emulator is started and running fine. Once you launch it, you should see a startup message in the command prompt, and the process should be running in Task Manager.
2. Make sure no other processes are listening on the Azure Storage Emulator ports. Browse to "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator", open the file AzureStorageEmulator.exe.config in a text editor, and note the ports being used by the service.
The default config uses ports 10000, 10001, and 10002, e.g.:

<service name="Blob" url="http://127.0.0.1:10000/"/>
<service name="Queue" url="http://127.0.0.1:10001/"/>
<service name="Table" url="http://127.0.0.1:10002/"/>

Using these ports, query from an administrator command prompt to see whether any of them are in use:

netstat -p tcp -ano | findstr :10000
netstat -p tcp -ano | findstr :10001
netstat -p tcp -ano | findstr :10002

If you see any output, note the process that is using the port. You will either need to stop that process or reconfigure the ports in the AzureStorageEmulator.exe.config file so that the emulator uses ports not in use by another application.

3. Open an elevated command prompt and initialize the Azure Storage Emulator: right-click the Start button and choose 'Command Prompt (Admin)'. From the command prompt, cd to the directory "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator" and run the command:

AzureStorageEmulator.exe init

If this is successful, you will see the output 'The storage emulator was successfully initialized and is ready to use.' If it is not successful, check the error details displayed in the command prompt.

4. You can check the status of the Azure Storage Emulator by executing 'AzureStorageEmulator.exe status'. You should see output like the following:

C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator>AzureStorageEmulator.exe status
Windows Azure Storage Emulator 5.10.0.0 command line tool
IsRunning: True
BlobEndpoint: http://127.0.0.1:10000/
QueueEndpoint: http://127.0.0.1:10001/
TableEndpoint: http://127.0.0.1:10002/

IsRunning shows whether the emulator is currently running. You can use 'AzureStorageEmulator.exe start' and 'AzureStorageEmulator.exe stop' to start and stop the emulator.

5. If the previous steps do not resolve the issue, check the Application event log for any errors (right-click the Start button and go to 'Event Viewer').

6.
Try to delete/reinitialize the Azure Storage Emulator dataset. NOTE: this will delete all contents in the local storage.

AzureStorageEmulator.exe init -forcecreate

Install or uninstall issues

If it is an MSI install, you can turn on verbose logging in the registry:

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Installer]
"Logging"="voicewarmup!"

The installer will create a log file in the temp directory with a random name matching MSI*.LOG. Review the log, which will have more details about the issue.

Relevant articles:
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azurite?tabs=visual-studio
https://docs.microsoft.com/en-us/azure/storage/blobs/use-azurite-to-run-automated-tests
https://hub.docker.com/_/microsoft-azure-storage-azurite

Hope this helps.