Latest Discussions
Can't add IP address to storage account
I can no longer add an IP address to a storage account's network rules using the portal. This started within the last week. I get the message "Invalid argument: 'The prefix must be between 0 and 30.'" I've tried a valid IP address and CIDR notation (71.59.0.15 or 71.59.0.15/32). Is this a problem with the Azure portal?

Tom-S-2060 · Mar 30, 2021 · Copper Contributor · 33K Views · 2 likes · 20 Comments
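The error text suggests the prefix length itself is the problem: storage account IP rules only accept CIDR prefixes up to /30, and a single host has to be submitted as a bare address, not as /31 or /32. A minimal sketch of normalizing input before submitting it (standard library only; `normalize_ip_rule` is an illustrative helper, not an Azure API):

```python
import ipaddress

def normalize_ip_rule(value: str) -> str:
    """Normalize an IP rule for a storage account firewall.

    Azure accepts CIDR ranges with prefix lengths 0-30; a single
    host must be passed as a bare address, not as /31 or /32.
    """
    net = ipaddress.ip_network(value, strict=False)
    if net.prefixlen > 30:
        # A /32 denotes a single host: submit the bare address instead.
        return str(net.network_address)
    return str(net)  # e.g. "71.59.0.0/24" stays a CIDR range

print(normalize_ip_rule("71.59.0.15/32"))  # -> 71.59.0.15
print(normalize_ip_rule("71.59.0.0/24"))   # -> 71.59.0.0/24
```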
Azure Files - Map file share on Azure AD joined machine

Hello, we know articles say there are restrictions on mapping an Azure file share on an Azure AD joined machine, per the MS article: "Neither Azure AD DS authentication nor on-premises AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices." Is there any workaround? Or does anyone know if that feature is in development or coming soon as a preview? Thanks!

Amit_Trivedi112214 · Jun 19, 2020 · Copper Contributor · 10K Views · 2 likes · 11 Comments
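One commonly used workaround is to mount the share with the storage account key instead of an identity, since key-based mounting works regardless of the machine's join state. A sketch that just assembles the Windows `net use` command string (account name, share name, and key are placeholders; the `localhost\<account>` username form is the documented convention for key-based mounts):

```python
def build_mount_command(account: str, share: str, key: str, drive: str = "Z:") -> str:
    """Build a Windows `net use` command that mounts an Azure file share
    with the storage account key rather than AD/Azure AD credentials."""
    unc = f"\\\\{account}.file.core.windows.net\\{share}"
    # Key-based auth uses the pseudo-user "localhost\<account name>".
    return f"net use {drive} {unc} /user:localhost\\{account} {key}"

print(build_mount_command("mystorageacct", "myshare", "<storage-key>"))
```

This bypasses identity-based permissions entirely, so NTFS-style ACLs on the share do not apply; access is all-or-nothing via the key.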
Problems accessing Tiered Azure Files using MacOS 14.3.x (Sonoma)

Has anyone else had issues accessing files that have been tiered by an Azure Storage Sync enabled server? We have a handful of Macs, not enough to test this fully, but the ones running macOS 13.x or earlier have no issues, while clients running 14.3.1 do. I have no idea whether this started before this last update, say in 14.0, but the troubled hosts are on 14.3.1 now. They can access the share without issue, they can access files that are not tiered without issue, and they can even access small tiered files of a couple hundred MB. But if you try to access a file around 1 GB or larger, you just get an error. To make it even odder, you can use the duplicate function on a file while it is tiered: it duplicates the file in the existing location, and once done both files are accessible, the original and the new duplicate. Not looking for a solution per se, just looking for corroboration that this issue does exist and I'm not crazy, or that I have to go rip apart a working infrastructure... all because a trillion-dollar company needs to save face by not publishing a known-issue document when they push OS updates without testing. Thanks in advance!

Andrew_Allston · Feb 28, 2024 · Iron Contributor · 1.5K Views · 0 likes · 9 Comments
Azure Files with AD DS authentication - DNS forwarder setup

I have the setup running via Private Endpoint and now want to be able to resolve the private endpoint IP through my own DNS setup. I'm trying to do this via the DNS forwarder setup described here: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-networking-dns I have already used the AzFilesHybrid module for the setup, so that part seems to work, but when running this:

import-module AzFilesHybrid
$ruleSet = New-AzDnsForwardingRuleSet -AzureEndpoints StorageAccountEndpoint
Connect-AzAccount
$SubscriptionId = "subscriptionID"
Select-AzSubscription $SubscriptionId
# Deploy and configure DNS forwarders
New-AzDnsForwarder `
    -DnsForwardingRuleSet $ruleSet `
    -VirtualNetworkResourceGroupName "vnetRG" `
    -VirtualNetworkName "vnetname" `
    -VirtualNetworkSubnetName "vnetSubnet" -SkipParentDomain

I'm getting this:

Get-ArmTemplateObject : A parameter cannot be found that matches parameter name 'Depth'.
At C:\Users\username\Documents\WindowsPowerShell\Modules\AzFilesHybrid\0.2.0.0\AzFilesHybrid.psm1:5113 char:24
+ ... teVersion = Get-ArmTemplateObject -ArmTemplateUri $DnsForwarderTempla ...
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Get-ArmTemplateObject], ParameterBindingException
    + FullyQualifiedErrorId : NamedParameterNotFound,Get-ArmTemplateObject

Does anyone have an idea where this could come from?

Dan Hansen · Jul 07, 2020 · Copper Contributor · 6.9K Views · 0 likes · 8 Comments
Azure Shared Storage Error

I am trying to create a shared disk using the JSON template mentioned here and am getting the error below:

{
  "error": {
    "code": "BadRequest",
    "message": "This subscription is not registered to use Microsoft.Compute/SharedDisksForPremium feature."
  }
}

Original template: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-shared-enable

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "dataDiskName": {
      "type": "string",
      "defaultValue": "mySharedDisk"
    },
    "dataDiskSizeGB": {
      "type": "int",
      "defaultValue": 1024
    },
    "maxShares": {
      "type": "int",
      "defaultValue": 2
    }
  },
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "name": "[parameters('dataDiskName')]",
      "location": "[resourceGroup().location]",
      "apiVersion": "2019-07-01",
      "sku": {
        "name": "Premium_LRS"
      },
      "properties": {
        "creationData": {
          "createOption": "Empty"
        },
        "diskSizeGB": "[parameters('dataDiskSizeGB')]",
        "maxShares": "[parameters('maxShares')]"
      }
    }
  ]
}

Solved · RaviCSP · Apr 25, 2020 · Copper Contributor · 4.4K Views · 0 likes · 7 Comments
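The error message points at the subscription rather than the template: at the time of this post, premium shared disks were gated behind a preview feature (Microsoft.Compute/SharedDisksForPremium) that had to be registered on the subscription before the deployment would succeed. As a local sanity check on the template side, a small sketch (a hypothetical helper, assuming the documented constraints that a shared disk needs maxShares of at least 2 and a premium or ultra SKU):

```python
def shared_disk_problems(disk: dict) -> list:
    """Check a Microsoft.Compute/disks resource intended for sharing.

    Note: even a fully valid template fails with the BadRequest error
    above until the subscription has the preview feature registered.
    """
    problems = []
    sku = disk.get("sku", {}).get("name", "")
    max_shares = disk.get("properties", {}).get("maxShares", 1)
    if max_shares < 2:
        problems.append("maxShares must be 2 or more for a shared disk")
    if not sku.startswith(("Premium_", "UltraSSD_")):
        problems.append("shared disks require a Premium or Ultra SKU")
    return problems

disk = {"sku": {"name": "Premium_LRS"},
        "properties": {"creationData": {"createOption": "Empty"},
                       "diskSizeGB": 1024, "maxShares": 2}}
print(shared_disk_problems(disk))  # -> []
```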
Disaster Recovery options for Premium Block Blob HNS storage

Looking to implement a data lakehouse in Azure, and DR is becoming a large focus. Ideally we'd use an Azure Storage account with hierarchical namespace enabled, and use Premium rather than Standard for better read performance, with the block blob option. However, there is no GRS option for any Premium Azure Storage account. Are there any DR options to get this data to another region in Azure, besides a dual write (two-phase commit) into each storage account?

haveyoumetcp · Jun 12, 2024 · Copper Contributor · 587 Views · 0 likes · 6 Comments
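Since no premium account SKU offers geo-redundancy, replication has to be arranged client-side; one common pattern is a scheduled `azcopy sync` from the premium account to a standard account in another region. A sketch that just assembles the command line (account names, container, and SAS tokens are placeholders; this is not a substitute for true GRS, as the copy is asynchronous and point-in-time):

```python
def azcopy_sync_command(src_account: str, dst_account: str,
                        container: str, src_sas: str, dst_sas: str) -> list:
    """Assemble an `azcopy sync` invocation that mirrors a container from a
    premium (primary-region) account to a standard account elsewhere."""
    src = f"https://{src_account}.blob.core.windows.net/{container}?{src_sas}"
    dst = f"https://{dst_account}.blob.core.windows.net/{container}?{dst_sas}"
    # --recursive mirrors the full (hierarchical) folder structure.
    return ["azcopy", "sync", src, dst, "--recursive"]

print(azcopy_sync_command("premsrc", "drdst", "lake", "<sas>", "<sas>"))
```

The resulting list can be handed to a scheduler or subprocess runner; the RPO is whatever interval the sync runs at.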
what backup options are there for containers with hierarchical namespace enabled

I saw that this question was previously posted in 2021 without receiving an answer or recommendation. When a storage container is deployed with hierarchical namespace, we lose point-in-time restore as an option. What alternatives other than soft delete and a retention period are there to protect the content? Thank you in advance.

akrofly · Jul 05, 2023 · Copper Contributor · 1.8K Views · 0 likes · 6 Comments
Estimating costs with Blob storage (GRS)

Dear Community, I put together a comparison of storage accounts with almost identical configuration (only the access tier is different) in the Azure Pricing Calculator (see below), and the result makes me wonder. The variant with GRS costs $20. I have not found any references in the documentation that explain this behavior. Is the calculation correct? Can anyone give me a hint about the price difference? Kind regards, Tom

Solved · TomWiessner · Jan 05, 2023 · Copper Contributor · 1.8K Views · 1 like · 6 Comments
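For intuition on why GRS costs more: it stores a second full copy of the data in the paired region, so the per-GB capacity rate is roughly double the LRS rate, and each GB written also incurs a geo-replication transfer charge. A back-of-the-envelope sketch with illustrative rates (not current prices; check the Azure pricing page for real figures):

```python
# Illustrative rates only -- not current Azure prices.
LRS_RATE = 0.0184   # $/GB-month, hot tier, LRS (example figure)
GRS_RATE = 0.0368   # $/GB-month, hot tier, GRS (~2x LRS)
GEO_EGRESS = 0.02   # $/GB geo-replication data transfer (example figure)

def monthly_cost(gb_stored: float, gb_written: float, grs: bool = False) -> float:
    """Rough monthly capacity cost; ignores transaction charges,
    which are also billed at a higher rate under GRS."""
    if grs:
        return gb_stored * GRS_RATE + gb_written * GEO_EGRESS
    return gb_stored * LRS_RATE

# 500 GB stored, 100 GB written per month:
print(round(monthly_cost(500, 100, grs=True), 2))   # GRS total
print(round(monthly_cost(500, 100), 2))             # LRS total
```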
Copy already uploaded files from SPO or OneDrive to Azure blob

Hi everyone, since mover.io is now retired, we are looking for an alternative way to copy large files (>10 GB), and a huge amount of data in total (500 GB-1 TB), that are already in our SharePoint directory to an existing Azure Blob Storage account, including the folder structure. We don't want to use third-party tools like ShareGate or others. I have been researching for days now but cannot find any solution; since all these services are built by Microsoft, I assume there has to be a way somehow. The newly launched Migration Manager, the replacement for Mover (https://learn.microsoft.com/de-de/sharepointmigration/migrate-to-sharepoint-online), does not support Azure Blob Storage as a destination. Since Power Automate does not support large files (AFAIK), that is no workaround either. It is of course possible to migrate data from on-premises devices, but as the data is already uploaded to SharePoint, we would have to download it just to upload it back into the cloud... which of course does not make a lot of sense. As it is easy to copy data from SharePoint to OneDrive, we could also copy the data from there into Azure. Does anyone have the same problem? I'd be glad if you could share your experiences with us.

brand007 · Feb 22, 2024 · Copper Contributor · 1.6K Views · 0 likes · 5 Comments
Unable to write csv to azure blob storage using Pyspark

Hi there, I am trying to write a CSV to Azure Blob Storage using PySpark but am receiving the following error:

Caused by: com.microsoft.azure.storage.StorageException: One of the request inputs is not valid.
    at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
    at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:305)
    at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:175)
    at com.microsoft.azure.storage.blob.CloudBlob.startCopy(CloudBlob.java:883)
    at com.microsoft.azure.storage.blob.CloudBlob.startCopyFromBlob(CloudBlob.java:825)
    at org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.startCopyFromBlob(StorageInterfaceImpl.java:399)
    at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2449)
    ... 22 more

The code used is:

def put_data_to_azure(self, df, fs_azure, fs_account_key, destination_path, file_format, repartition):
    self.code_log.info('in put_data_to_azure')
    try:
        self.sc._jsc.hadoopConfiguration().set("fs.azure", fs_azure)
        self.sc._jsc.hadoopConfiguration().set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
        self.sc._jsc.hadoopConfiguration().set("fs.azure.account.key.%s.blob.core.windows.net" % fs_azure, fs_account_key)
        df.repartition(repartition).write.format(file_format).save(destination_path)
    except Exception as e:
        error1 = str(e).splitlines()[:2]
        exception = "Exception in put_data_to_azure: " + ''.join(error1)
        raise ExceptionHandler(exception)

The destination path is 'wasbs://<container>@<storage account>.blob.core.windows.net/folder'. The jars used are hadoop-azure-2.7.0.jar and azure-storage-2.0.0.jar; I chose these because I found them to be stable versions and could read from Azure successfully, but could not write. I have also tried newer versions.

Ashwini_Akula · Aug 05, 2020 · Copper Contributor · 8.5K Views · 0 likes · 5 Comments
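"One of the request inputs is not valid" during the driver's rename step is frequently a naming problem rather than a Spark problem: the container and blob path in the wasbs URL must satisfy Azure's naming rules. A quick local check (a sketch of the documented rules, not the driver's actual validation):

```python
import re

def check_wasbs_path(container: str, blob_path: str) -> list:
    """Flag common Azure Blob naming violations that can surface as
    'One of the request inputs is not valid'."""
    problems = []
    # Container: 3-63 chars, lowercase letters, digits, single hyphens,
    # starting and ending with a letter or digit.
    if not re.fullmatch(r"[a-z0-9](-?[a-z0-9]){2,62}", container):
        problems.append(f"invalid container name: {container!r}")
    if len(blob_path) == 0 or len(blob_path) > 1024:
        problems.append("blob path must be 1-1024 characters")
    if blob_path.endswith((".", "/")):
        problems.append("blob path should not end with '.' or '/'")
    return problems

print(check_wasbs_path("mycontainer", "folder/output.csv"))  # -> []
```

If the names pass, the next suspects are the very old jar versions: hadoop-azure-2.7.0 with azure-storage-2.0.0 predates several fixes to the wasb rename/copy path, so matching the hadoop-azure version to the cluster's Hadoop version is worth trying.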
Tags
- azure blob storage (22 Topics)
- azure blob (13 Topics)
- azure files (9 Topics)
- azure backup (7 Topics)
- storage explorer (6 Topics)
- Azure Data Lake (4 Topics)
- queue storage (3 Topics)
- Azure Archive (3 Topics)
- updates (2 Topics)
- azure disks (1 Topic)