Azure Virtual Machines
Resiliency Best Practices You Need for Your Blob Storage Data
Maintaining Resiliency in Azure Blob Storage: A Guide to Best Practices

Azure Blob Storage is a cornerstone of modern cloud storage, offering scalable and secure solutions for unstructured data. However, maintaining resiliency in Blob Storage requires careful planning and adherence to best practices. In this blog, I'll share practical strategies to ensure your data remains available, secure, and recoverable under all circumstances.

1. Enable Soft Delete for Accidental Recovery (Most Important)

Mistakes happen, but soft delete can be your safety net. It allows you to recover deleted blobs within a specified retention period:

- Configure a soft delete retention period in Azure Storage.
- Regularly monitor your blob storage to ensure that critical data is not permanently removed by mistake.

Enabling soft delete in Azure Blob Storage carries no additional cost for the feature itself. However, it can affect your storage costs, because deleted data is retained for the configured retention period, which means:

- The retained data contributes to total storage consumption during the retention period.
- You are charged according to the pricing tier of the data (Hot, Cool, or Archive) for the duration of retention.

2. Utilize Geo-Redundant Storage (GRS)

Geo-redundancy replicates your data across regions to protect against regional failures:

- Choose RA-GRS (Read-Access Geo-Redundant Storage) for read access to secondary replicas in the event of a primary region outage.
- Assess your workload's RPO (Recovery Point Objective) and RTO (Recovery Time Objective) needs to select the appropriate redundancy option.

3. Implement Lifecycle Management Policies

Efficient storage management reduces costs and ensures long-term data availability:

- Set up lifecycle policies to transition data between hot, cool, and archive tiers based on usage.
- Automatically delete expired blobs to save on costs while keeping your storage organized.
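To make the lifecycle idea concrete, here is a minimal sketch of a lifecycle management policy document in the JSON shape Azure Storage expects. The container prefix "logs/", the rule name, and the 30/90/365-day thresholds are illustrative assumptions, not values from this article; adjust them to your own access patterns.

```python
import json

# Sketch of an Azure Blob lifecycle management policy: tier blobs to cool
# after 30 days without modification, to archive after 90, and delete
# after 365. The "logs/" prefix and thresholds are illustrative only.
policy = {
    "rules": [
        {
            "enabled": True,
            "name": "tier-and-expire-logs",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["logs/"],
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

# Emit the policy as JSON, ready to save to a file and apply.
print(json.dumps(policy, indent=2))
```

A document like this can then be applied to a storage account, for example with the Azure CLI's `az storage account management-policy create --policy @policy.json` (plus your account and resource group parameters).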
4. Secure Your Data with Encryption and Access Controls

Resiliency is incomplete without robust security. Protect your blobs using:

- Encryption at Rest: Azure automatically encrypts data using server-side encryption (SSE). Consider enabling customer-managed keys for additional control.
- Access Policies: Implement Shared Access Signatures (SAS) and Stored Access Policies to restrict access and enforce expiration dates.

5. Monitor and Alert for Anomalies

Stay proactive by leveraging Azure's monitoring capabilities:

- Use Azure Monitor and Log Analytics to track storage performance and usage patterns.
- Set up alerts for unusual activities, such as sudden spikes in access or deletions, to detect potential issues early.

6. Plan for Disaster Recovery

Ensure your data remains accessible even during critical failures:

- Create snapshots of critical blobs for point-in-time recovery.
- Enable backup for blobs and turn on the immutability feature.
- Test your recovery process regularly to ensure it meets your operational requirements.

7. Apply Resource Locks

Adding Azure locks to your Blob Storage account provides an additional layer of protection by preventing accidental deletion or modification of critical resources.

8. Educate and Train Your Team

Operational resilience often hinges on user awareness:

- Conduct regular training sessions on Blob Storage best practices.
- Document and share a clear data recovery and management protocol with all stakeholders.

9. Critical Tip: Do Not Create New Containers with Deleted Names During Recovery

If a container or blob is deleted for any reason and recovery is being attempted, it is crucial not to create a new container with the same name immediately. Doing so can significantly hinder the recovery process by overwriting backend pointers, which are essential for restoring the deleted data. Always ensure that no new containers are created with the same name during the recovery attempt to maximize the chances of successful restoration.
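The deletion-spike alerting described above can be sketched as a simple baseline-plus-threshold check, the kind of rule you would express as an Azure Monitor alert over storage transaction metrics. The hourly counts and the mean-plus-three-standard-deviations threshold below are illustrative assumptions, not an Azure API.

```python
from statistics import mean, stdev

# Sketch: flag hours where blob deletions spike well above the recent
# baseline. Baseline = all hours except the most recent; an hour is
# anomalous if its count exceeds mean + sigma * stdev of the baseline.
def deletion_spikes(hourly_deletes, sigma=3.0):
    baseline = hourly_deletes[:-1]
    threshold = mean(baseline) + sigma * stdev(baseline)
    return [
        (hour, count)
        for hour, count in enumerate(hourly_deletes)
        if count > threshold
    ]

# 23 quiet hours of routine deletions, then a burst in the final hour.
counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 7, 5, 4, 6, 5, 7, 6, 5, 4, 6, 5, 7, 6, 5, 120]
print(deletion_spikes(counts))  # only the final hour trips the threshold
```

In practice you would let Azure Monitor evaluate the equivalent condition against the storage account's transaction metrics and fire an action group, rather than polling counts yourself; the sketch just shows the shape of the detection rule.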
Wrapping It Up

Azure Blob Storage offers an exceptional platform for scalable and secure storage, but its resiliency depends on following best practices. By enabling features like soft delete, implementing redundancy, securing data, and proactively monitoring your storage environment, you can ensure that your data is resilient to failures and recoverable in any scenario.

Further reading:
- Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn
- Data redundancy - Azure Storage | Microsoft Learn
- Overview of Azure Blobs backup - Azure Backup | Microsoft Learn

Azure VMs Not Applying GPOs Correctly
Hi everyone, quick question: if my Azure VMs are joined to my domain, they should be applying all my configured GPOs, right? For some reason, my VMs are not applying the GPOs, even after running GPUPDATE /force. At the moment, I am testing some simple GPOs, such as: creating a folder on the desktop; setting the date format to Brazilian (dd/mm/yyyy); adjusting the timezone to Brasília. When I run gpresult /r, it shows that the GPOs are being applied, but for some reason the VM just doesn't reflect them. Any idea what might be causing this?

Azure Extended Zones: Optimizing Performance, Compliance, and Accessibility
Azure Extended Zones are small-scale Azure extensions located in specific metros or jurisdictions to support low-latency and data-residency workloads. They enable users to run latency-sensitive applications close to end users while maintaining compliance with data residency requirements, all within the Azure ecosystem.

Considerations when replicating VMs to new Azure subscriptions?
I've got a couple of domain-joined (not Entra/AAD-joined) virtual machines in an Azure subscription that I want to copy over to another subscription. I only want to copy these VMs over so I can test some patches on them. However, the original VMs will still be online and in use. I've seen others bring up AAD/Entra as a possible issue. Is there anything else I need to be worried about before replicating these VMs?

How to avoid "This size is not available in zone..."?
Hi, has anybody found an optimal solution for avoiding the following message when selecting a VM size on Azure?

This size is not available in zone 2. Zones '1' are supported.

Of course, the zone numbers can vary. I tried to use the Get-AzComputeResourceSku cmdlet, but did not get any smarter:

PS C:\> Get-AzComputeResourceSku -Location "westeurope" | Where-Object { $_.Name -eq "Standard_D4_v5" }

ResourceType    Name           Location   Zones     RestrictionInfo
------------    ----           --------   -----     ---------------
virtualMachines Standard_D4_v5 westeurope {1, 2, 3} type: Zone, locations: westeurope, zones: 2, 3

Basically, Zones says this size should be available, but selecting it fails in the portal and in PowerShell. PowerShell gives the following error:

New-AzVM: The requested size for resource '/subscriptions/.../virtualMachines/myVM' is currently not available in location 'westeurope' zones '2' for subscription '111.....111'. Please try another size or deploy to a different location or zones.
ErrorCode: SkuNotAvailable
ErrorTarget:
StatusCode: 409
ReasonPhrase: Conflict

The funny thing is, this speaks about the "subscription", and if I choose a different subscription the same VM size is available in the same zone. So I'm interested to hear what others are using for listing available VM sizes, to see how to choose similar VMs in all zones. I have tested with different sizes, and the restriction seems to appear a bit randomly.

Connection was denied because the user account is not authorised for remote log in
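One way to reason about output like the above: the Zones column lists where the size exists at all, while RestrictionInfo lists zones blocked for your particular subscription, so the usable zones are the difference of the two sets. Here is a minimal sketch of that subtraction over records shaped loosely like the Get-AzComputeResourceSku output; the dictionary layout is a simplified assumption, not the real Az SDK object model.

```python
# Sketch: compute the zones actually usable for a subscription by
# subtracting zone restrictions from the advertised zones. The record
# shape below is an illustrative stand-in for Get-AzComputeResourceSku
# output, not the actual SDK objects.

def usable_zones(sku):
    """Return advertised zones minus zones restricted for the subscription."""
    advertised = set(sku.get("zones", []))
    restricted = set()
    for r in sku.get("restrictions", []):
        if r.get("type") == "Zone":
            restricted |= set(r.get("restricted_zones", []))
    return sorted(advertised - restricted)

skus = [
    {
        # Mirrors the forum output: offered in 1-3, restricted in 2 and 3.
        "name": "Standard_D4_v5",
        "location": "westeurope",
        "zones": ["1", "2", "3"],
        "restrictions": [{"type": "Zone", "restricted_zones": ["2", "3"]}],
    },
    {
        "name": "Standard_D2_v5",
        "location": "westeurope",
        "zones": ["1", "2", "3"],
        "restrictions": [],
    },
]

for sku in skus:
    print(sku["name"], usable_zones(sku))
```

Applied to the forum's example, this reports that Standard_D4_v5 is usable only in zone 1 for that subscription, which matches the portal message. Restrictions are per subscription (capacity and offer based), which is why another subscription can see the same size in all zones.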
Hi experts, ... there was a restart of a server that is running as a VM in Azure... and since then, I've been experiencing some issues that are seriously affecting the company. When I want to RDP to the server with a required account (let's call it "azvmuser"), I get the message that I'm not authorized. With the Administrator account, I can connect with no issues... The only way I could fix it was by adding "azvmuser" via "User Accounts" -> "Give other users access to this computer"... This works for a while, however at some point the issue returns, and when I check User Accounts again, "azvmuser" is missing and I have to add it again... There is a task created in Task Scheduler that runs as that user, and due to the issue above the task is failing... When I add the user to "Give other users access...", the task runs fine... Any idea how to fix it? ... for now, I just manually check the VM and add the user "azvmuser" when I get the error message... It is happening on a business-critical VM..... the other 3 VMs we have in Azure are working fine 😕

Azure, Hyper-V machine getting out to the internet
Hello there, I would like to ask a question: In Azure, I have created a virtual machine with Windows Server 2016. This machine has connectivity to the internet (10.0.0.4, gateway 10.0.0.1, and DNS pointing to itself (10.0.0.4) because it is a Domain Controller). On that virtual machine I have installed the Hyper-V feature. Inside Hyper-V I have created a virtual machine. This virtual machine is domain-joined (10.0.0.5, gateway: 10.0.0.1, DNS: 10.0.0.4) and is attached to a Hyper-V virtual switch set as external. In that scenario, why can't the virtual machine inside Hyper-V reach 8.8.8.8 through ping? I tried to enable "IP Forwarding" on the network NIC attached to the Hyper-V host, so that the machine inside Hyper-V could get to the internet, but it hasn't worked. I don't get much information through "tracert 8.8.8.8"; all it says is "not reachable", and it doesn't even reach 10.0.0.1, maybe because that IP is the Azure-managed gateway. Any thoughts on this? Thanks in advance!