migration
Registration Now Open for Series "Sentinel to Defender: Your Path to the Unified SOC Experience"
We're excited to announce a 3-part technical webinar series designed to guide security teams through the transition from Microsoft Sentinel to the unified Microsoft Defender portal!

Who should attend: Security Architects, Engineers, and Analysts working with Sentinel and Defender implementations.

What you'll gain:
- Step-by-step onboarding guidance and real-world configurations
- Hands-on demos covering incident handling, threat hunting, and automation
- Clarity on RBAC changes, analytics rules, and new capabilities like Copilot, MTO, and UEBA

Register now.

Making Azure DMS More Secure: Azure Portal Permission Enhancements
Migrating databases to Azure SQL Managed Instance or Azure SQL Virtual Machine is a critical step in modernizing enterprise infrastructure. With security and compliance top of mind, Azure Database Migration Service (DMS) has introduced key changes to its Azure portal experience, especially around permissions for blob container access.

Why the Change?
Previously, the DMS experience in the Azure portal relied on account key-based access to Azure Blob Storage for listing and accessing backup files on the migration configuration page. While functional, this approach is weak from a security standpoint, especially for industries that prohibit the use of shared keys. Now, the DMS portal experience uses the security context of the currently signed-in Azure portal user to list and access backup files in the blob container, which is a stronger security posture.

Impact of the Change
When migrating to Azure SQL Managed Instance or Azure SQL Virtual Machine via the Azure portal, make sure the currently signed-in user has the Storage Blob Data Reader role on the blob container that contains the backup files. This permission is needed to list folders and files in the blob container during migration setup via the Azure portal only.

If the signed-in user lacks the Storage Blob Data Reader role on the container, they will encounter the following error:

Error: "Blob container selection error: Error listing the contents of the container: This request is not authorized to perform this operation using this permission."

Solution: Grant the currently signed-in user the Storage Blob Data Reader role on the blob container that contains the backup files.
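If you prefer to grant the role from the command line, a minimal Azure CLI sketch is shown below (run here from PowerShell; all names and IDs are placeholders to replace with your own):

```powershell
# Grant the signed-in migration user Storage Blob Data Reader, scoped to the
# container that holds the backup files. Placeholder values throughout.
az role assignment create `
    --assignee "user@contoso.com" `
    --role "Storage Blob Data Reader" `
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>"
```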
For more information, refer to:
- Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Azure Database Migration Service | Microsoft Learn
- Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine Using Azure Data Studio - Azure Database Migration Service | Microsoft Learn

General Availability of Online Migration to Azure Database for PostgreSQL Flexible Server

Online migration minimizes downtime by keeping your source database operational during the migration process, with continuous data synchronization until cutover.

How can I use Online migration?
Online migration is available in the Azure portal on the Migration setup screen, in the "Migration mode" drop-down, once you initiate a migration from the Flexible Server page.

Figure 1: Screenshot of the Migration setup page in the Azure portal. Here you can select the "Online" migration mode to migrate from any of the listed PostgreSQL sources to Azure Database for PostgreSQL - Flexible Server.

Online migration can also be used from the Azure CLI by setting the 'migration-mode' parameter to 'Online'.

How does Online migration work?
In an online database migration to Azure Database for PostgreSQL - Flexible Server, the application connecting to your Postgres source is not stopped while your database(s) are copied to the Flexible Server target. Instead, the initial copy of the database(s) is followed by replication that keeps the Flexible Server target in sync with the Postgres source. A cutover is performed once the target is fully in sync with the source, resulting in minimal downtime.

Figure 2: Cutover in Online migration: screenshot of the Migration status screen, where you can execute the cutover and complete the migration. In the 'OnlineMigrationDemo' above, the latency is 0, indicating that the Azure Database for PostgreSQL - Flexible Server target is in sync with the source Postgres instance.

Figure 3: Online migration through the CLI: executing 'show' to get the migration status displays the latency for the individual databases. In the 'OnlineMigrationDemo' above, the latency is 0 for the 'customer-info' database being migrated, indicating that the target is in sync with the source.

Whether you run the migration from the portal or the CLI, once the latency parameter drops to 0, or close to 0, you can execute the cutover to complete the migration. Before you execute the cutover, it is essential that you:

- Stop all writes at the source Postgres instance
- Validate the data that has been migrated to the target Flexible Server
- Copy any custom server parameters and connection security details from the source to the target server

Once you execute the cutover, the migration shows successful completion. At that point, update your application so that all connection strings point to the Flexible Server.

What are the differences between Offline and Online migration?
The following table gives an overview of the Offline and Online migration modes.

| Comparison of migration modes | Online | Offline |
| --- | --- | --- |
| Ideal for small databases | | ✓ |
| Simple to execute, with no manual intervention for cutover | | ✓ |
| Migrate without logical replication restrictions | | ✓ |
| Ideal for production databases | ✓ | |
| Minimal downtime to the application and a better user experience | ✓ | |

Depending on the nature of your workload, you can choose either Offline or Online migration.
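For the CLI path, a minimal sketch follows. The resource names are placeholders, and 'migration-body.json' stands in for the properties file holding your source connection details and database list; check `az postgres flexible-server migration --help` for the exact options available in your CLI version.

```powershell
# Start an online migration to the Flexible Server target (placeholder names).
az postgres flexible-server migration create `
    --resource-group myResourceGroup `
    --name myTargetFlexServer `
    --migration-name OnlineMigrationDemo `
    --migration-mode online `
    --properties "migration-body.json"

# Poll the status; once latency is at (or near) 0, trigger the cutover.
az postgres flexible-server migration show `
    --resource-group myResourceGroup `
    --name myTargetFlexServer `
    --migration-name OnlineMigrationDemo

az postgres flexible-server migration update `
    --resource-group myResourceGroup `
    --name myTargetFlexServer `
    --migration-name OnlineMigrationDemo `
    --cutover
```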
Get started with Online migration
If you're looking to migrate to Flexible Server from any of the listed PostgreSQL sources, you'll find the Migration service overview quite useful. In particular, if you have only a small downtime window and want to minimize the downtime of moving your production workloads from a compatible PostgreSQL source to Flexible Server, Online migration could be a good fit for your situation.

Where to find more info about Online migration for Azure Database for PostgreSQL – Flexible Server
Overview: How to migrate from your PostgreSQL source to Flexible server
Tutorials:
- How to migrate online from your Azure VM/on-premises instance to Flexible server
- How to migrate online from your Amazon RDS instance to Flexible server
- How to migrate online from your Amazon Aurora instance to Flexible server
- How to migrate online from your Google Cloud SQL for PostgreSQL instance to Flexible server

We're always eager to hear from you, so please reach out to us at migrationpm@service.microsoft.com.

Exchange Migration Cross forest
Hello,

I am planning an Exchange migration from forest A to forest B, and the email addresses will change. I don't want to set up any trust between the two forests; I will simply create the Active Directory users from scratch with new passwords, install and configure the Exchange environment in the target forest, and create the mailboxes. I will use a third-party tool to migrate the mailboxes between the two forests, and after cutover I will add the old email address as an additional proxy address on the mailboxes.

I am assuming everything will work fine. However, I have seen a lot of articles on the internet saying that we have to migrate the legacyExchangeDN from the source environment and add it as an X500 proxy address in the destination as well, or we might face issues with migrated recurring meetings when someone wants to edit them, or when someone replies to a migrated email. I can afford having some meetings corrupted after migration, but I can't afford the headache of NDRs when someone replies to migrated emails; that would be a disaster.

Has anyone been through this before, especially regarding replies to migrated emails? I have not found much documentation from Microsoft about this scenario.

Thank you.
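For reference, the mitigation those articles describe is usually scripted along the lines of the sketch below: capture each source mailbox's legacyExchangeDN, then stamp it as an X500 proxy address on the corresponding target mailbox. This is only a rough sketch; in particular, matching mailboxes by primary SMTP address is a placeholder that will not work when the addresses change, so substitute your own source-to-target mapping.

```powershell
# In the SOURCE forest's Exchange Management Shell: export each mailbox's legacyExchangeDN.
Get-Mailbox -ResultSize Unlimited |
    Select-Object PrimarySmtpAddress, LegacyExchangeDN |
    Export-Csv .\source-legacydn.csv -NoTypeInformation

# In the TARGET forest, after the new mailboxes exist: add the old DN as an X500 address.
# NOTE: -Identity by PrimarySmtpAddress is a placeholder; replace it with your own mapping.
Import-Csv .\source-legacydn.csv | ForEach-Object {
    Set-Mailbox -Identity $_.PrimarySmtpAddress `
        -EmailAddresses @{ Add = "X500:$($_.LegacyExchangeDN)" }
}
```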
Revolutionizing log collection with Azure Monitor Agent

The much-awaited deprecation of the MMA agent is finally here. While the agent is still sunsetting, this blog post reviews the advantages of AMA, the different deployment options, and important updates to your favorite Windows, Syslog, and CEF events via AMA data connectors.
Azure VMware (AVS) Cost Optimization Using the Azure Migrate Tool

What is AVS?
Azure VMware Solution provides private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and Azure Government. The minimum initial deployment is three hosts, with the option to add more hosts, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds.

Learn more: https://learn.microsoft.com/en-us/azure/azure-vmware/introduction

What is the Azure Migrate tool?
Azure Migrate is a comprehensive service designed to help you plan and execute your migration to Azure. It provides a unified platform to discover, assess, and migrate your on-premises resources, including servers, databases, web apps, and virtual desktops. The tool offers features like dependency analysis, cost estimation, and readiness assessments to ensure a smooth and efficient migration process.

Learn more: https://learn.microsoft.com/en-us/azure/migrate/migrate-services-overview

How can Azure Migrate be used to discover and assess AVS?
Azure Migrate enables the discovery and assessment of Azure VMware Solution (AVS) environments by collecting inventory and performance data from on-premises VMware environments, either through direct integration with vCenter (via the appliance) or by importing data from tools like RVTools. Using Azure Migrate, organizations can analyze the compatibility of their VMware workloads for migration to AVS, assess costs, and evaluate performance requirements. The process involves creating an Azure Migrate project, discovering VMware VMs, and generating assessments that provide insights into resource utilization, right-sizing recommendations, and estimated costs in AVS. This streamlined approach helps plan and execute migrations effectively while ensuring workloads are optimized for the target AVS environment.

Note: This article walks through the RVTools import method.

What is RVTools?
RVTools is a lightweight, free utility designed for VMware administrators to collect, analyze, and export detailed inventory and performance data from VMware vSphere environments. Developed by Rob de Veij, RVTools connects to vCenter or ESXi hosts using VMware's vSphere Management SDK to retrieve comprehensive information about the virtual infrastructure.

Key features of RVTools:
- Inventory management: Provides detailed information about virtual machines (VMs), hosts, clusters, datastores, networks, and snapshots, including VM names, operating systems, IP addresses, resource allocations (CPU, memory, storage), and more.
- Performance insights: Offers visibility into resource utilization, including CPU and memory usage, disk space, and VM states (e.g., powered on/off).
- Snapshot analysis: Identifies unused or orphaned snapshots, helping to optimize storage and reduce overhead.
- Export to Excel: Allows users to export all collected data into an Excel spreadsheet (.xlsx) for analysis, reporting, and integration with tools like Azure Migrate.
- Health checks: Identifies configuration issues, such as disconnected hosts, orphaned VMs, or outdated VMware Tools versions.
- User-friendly interface: Displays information in tabular form across multiple tabs, making it easy to navigate and analyze specific components of the VMware environment.
Hands-on Lab

Disclaimer: The data used for this lab has no relationship to real-world scenarios. The sample data was created by the author purely for understanding the concept.

To discover and assess your Azure VMware Solution (AVS) environment using an RVTools extract report in the Azure Migrate tool, follow these steps.

Prerequisites
- RVTools setup: Download and install RVTools from the RVTools download page, ensure connectivity to your vCenter Server, and extract the data by running RVTools and saving the output as an Excel (.xlsx) file.
- Permissions: You need at least the Contributor role on the Azure Migrate project, and appropriate permissions in your vCenter environment to collect inventory and performance data.
- File requirements: The RVTools file must be saved in .xlsx format without renaming or modifying the tabs or column headers.

Note: Sample sheet: Please check the attachment included with this article. It is not the complete format; some tabs and columns have been removed for simplicity. During the actual discovery and assessment process, please do not modify the tabs or columns.

Procedure

Step 1: Export data from RVTools
Follow the steps provided on the official website to get the RVTools extract.

Step 2: Discover
1. Log in to the Azure portal.
2. Navigate to Azure Migrate and select your project, or create a new one.
3. Under Migration goals, select Servers, databases and web apps.
4. On the Azure Migrate | Servers, databases and web apps page, under Assessment tools, select Discover and then select Using import.
5. On the Discover page, in File type, select VMware inventory (RVTools XLSX).
6. In the Step 1: Import the file section, select the RVTools XLSX file and then select Import.
7. Wait for the import to complete, then check for any error messages, fix them, and re-upload if needed; otherwise, allow 10-15 minutes for the imported VMs to appear in the discovery.

Reference link: https://learn.microsoft.com/en-us/azure/migrate/vmware/tutorial-import-vmware-using-rvtools-xlsx?context=%2Fazure%2Fmigrate%2Fcontext%2Fvmware-context
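If the import reports errors, one quick pre-upload check is whether the workbook still has its original RVTools tab names. Below is a small sketch using the community ImportExcel PowerShell module; the tab list is an assumption based on a typical RVTools export, so adjust it to match your file.

```powershell
# Requires the community ImportExcel module: Install-Module ImportExcel
# The tab names below are assumptions from a typical RVTools export - adjust as needed.
$expected = 'vInfo', 'vCPU', 'vMemory', 'vDisk', 'vPartition', 'vHost'
$sheets   = (Get-ExcelSheetInfo -Path '.\RVTools_export.xlsx').Name

$missing = $expected | Where-Object { $_ -notin $sheets }
if ($missing) {
    Write-Warning "Missing tabs: $($missing -join ', '). Re-export from RVTools without renaming tabs."
} else {
    'All expected tabs are present; the file looks ready for import.'
}
```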
Step 3: Assess
1. After the upload is complete, navigate to the Servers tab.
2. Click Assess --> Azure VMware Solution to assess the discovered machines.
3. Edit the assessment settings based on your requirements and save them:
   - Target region: Select the Azure region for the migration.
   - Node type: Specify the Azure VMware Solution series (e.g., AV36, AV36P).
   - Pricing model: Select pay-as-you-go or reserved instance pricing.
   - Discount: Specify any available discounts.
   Note: We explain all of these parameters in the Optimize step below; for now, just review them and leave them as they are.
4. In Assess Servers, select Next.
5. In Select servers to assess > Assessment name, specify a name for the assessment.
6. In Select or create a group, select Create New and specify a group name. Select the appliance and the servers you want to add to the group, then select Next.
7. In Review + create assessment, review the assessment details and select Create Assessment to create the group and run the assessment.

Step 4: Review the Assessment
1. In Windows, Linux and SQL Server > Azure Migrate: Discovery and assessment, select the number next to Azure VMware Solution.
2. In Assessments, select an assessment to open it (the estimations and costs shown here are examples only).
3. Review the assessment summary. You can select Sizing assumptions to understand the assumptions that went into node sizing and resource utilization calculations. You can also edit the assessment properties or recalculate the assessment.

Step 5: Optimize
The report we received in the previous steps had no optimization applied. We can now follow the steps below to optimize the cost and node count even further.

High-level steps:
1. Find the limiting factor.
2. Find which setting is mapped for optimization, depending on the limiting factor.
3. Adjust the mapped setting according to your scenario and comfort level.

Find the limiting factor: First, understand which component (CPU, memory, or storage) is deciding your ESXi node count. This is highlighted in the report. The limiting factor shown in an assessment can be CPU, memory, or storage, based on node utilization; it is the resource that determines the number of hosts/nodes required to accommodate the workload. For example, if an assessment finds that after migrating 8 VMware VMs to Azure VMware Solution, 50% of CPU, 14% of memory, and 18% of storage will be utilized on the three AV36 nodes, then CPU is the limiting factor.

Find which setting can be used to optimize: This depends on the limiting factor. For example, if the limiting factor is CPU, you have a high CPU requirement, and CPU oversubscription can be used to reduce the ESXi node count. Likewise, if storage is the limiting factor, editing FTT or RAID settings, or introducing external storage like Azure NetApp Files (ANF), will help you reduce the node count. Even a reduction of one node has a large impact in dollar value.

Let's understand how overcommitment (oversubscription) works with a simple example. Suppose I have two VMs with the following specifications:

| Name | CPU | Memory | Storage |
| --- | --- | --- | --- |
| VM1 | 9 vCPU | 200 GB | 500 GB |
| VM2 | 4 vCPU | 200 GB | 500 GB |
| Total | 13 vCPU | 400 GB | 1000 GB |

And an ESXi node with the following capacity:

| vCPU | Memory | Storage |
| --- | --- | --- |
| 10 | 500 GB | 1024 GB |

Without optimization, I need two ESXi nodes to accommodate the total requirement of 13 vCPUs. But suppose VM1 and VM2 don't consume their entire capacity all the time, and their combined usage at any given moment never exceeds 10 vCPUs. Then I can accommodate both VMs on the same ESXi node, reducing my node count and cost. In other words, the two VMs can share resources (see the node-count sketch below).

Figures: "Without optimization" and "With optimization" (diagrams comparing the two VM placements).
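To make the arithmetic concrete, here is a toy sketch of the node-count math. This is a simplification: Azure Migrate's real sizing also weighs memory, storage, and utilization history, and the VM and node numbers are just the example above.

```powershell
# Toy node-count math for the two-VM example above (not Azure Migrate's algorithm).
$totalVCpu    = 13   # VM1 (9 vCPU) + VM2 (4 vCPU)
$coresPerNode = 10   # physical cores on the example ESXi node

# 1:1 - every vCPU needs a dedicated physical core
$nodes = [math]::Ceiling($totalVCpu / ($coresPerNode * 1))
"Nodes needed without oversubscription: $nodes"   # 2

# 4:1 - the default vCPU-to-physical-core oversubscription ratio
$nodes = [math]::Ceiling($totalVCpu / ($coresPerNode * 4))
"Nodes needed at 4:1 oversubscription:  $nodes"   # 1
```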
Parameters affecting Sizing and Pricing

CPU oversubscription: Specifies the ratio of virtual cores tied to one physical core in the Azure VMware Solution node. The default value in the calculations is 4 vCPU : 1 physical core. API users can set this value as an integer. Note that vCPU oversubscription greater than 4:1 may impact workloads, depending on their CPU usage.

Memory overcommit factor: Specifies the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 is 50%, and 2 means using 200% of the available memory. You can only add values from 0.5 to 10, up to one decimal place.

Deduplication and compression factor: Specifies the anticipated deduplication and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage configurations, and it varies by workload. A value of 3 means 3x, so a 300 GB disk would consume only 100 GB of storage; a value of 1 means no deduplication or compression. You can only add values from 1 to 10, up to one decimal place.

FTT (failures to tolerate): How many device failures can be tolerated for a VM.

RAID (Redundant Array of Independent Disks): Describes how data is stored for redundancy.
- Mirroring: Data is duplicated as-is to another disk. For example, to protect a 100 GB VM object by using RAID-1 (Mirroring) with an FTT of 1, you consume 200 GB.
- Erasure coding: Divides data into chunks and calculates parity information (redundant data) across multiple storage devices. This allows data reconstruction even if some chunks are lost, similar to mirroring but typically more space-efficient. For example, to protect a 100 GB VM object by using RAID-5 (Erasure Coding) with an FTT of 1, you consume 133.33 GB (see the storage sketch below).

Comfort factor: Azure Migrate applies a buffer (comfort factor) during assessment on top of the server utilization data for VMs (CPU, memory, and disk). The comfort factor accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM; with a comfort factor of 2.0x, the result is a 4-core VM instead.
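As a sanity check on the storage parameters above, here is a toy sketch that reproduces the RAID/FTT examples. The multipliers are the commonly cited ones for RAID-1 and RAID-5 with FTT=1, and applying deduplication as a simple divisor is a simplifying assumption.

```powershell
# Toy vSAN raw-capacity math (illustration only, not the assessment engine).
function Get-RawStorageGB {
    param(
        [double]$UsedGB,
        [ValidateSet('RAID1-FTT1', 'RAID5-FTT1')][string]$Policy,
        [double]$DedupFactor = 1.0   # 1 = no deduplication/compression
    )
    # RAID-1 FTT=1 keeps two full copies; RAID-5 FTT=1 adds ~33% parity.
    $multiplier = @{ 'RAID1-FTT1' = 2.0; 'RAID5-FTT1' = 4.0 / 3.0 }[$Policy]
    [math]::Round($UsedGB * $multiplier / $DedupFactor, 2)
}

Get-RawStorageGB -UsedGB 100 -Policy 'RAID1-FTT1'   # 200    (the mirroring example)
Get-RawStorageGB -UsedGB 100 -Policy 'RAID5-FTT1'   # 133.33 (the erasure-coding example)
```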
AVS SKU Sizes

Optimization Result
In this example, CPU was the limiting factor, so we adjusted the CPU oversubscription value from 4:1 to 8:1. This:
- Reduced the node count from 6 (3 AV36P + 3 AV64) to 5 AV36P
- Reduced the cost by 31%

Note: Over-provisioning or over-committing can put your VMs at risk. However, in the Azure cloud you can create alerts to warn you of unexpected demand increases and add new ESXi nodes on demand. This is the beauty of the cloud: if your resources are under-provisioned, you can scale up or down at any time. Running your resources in an optimized environment not only saves your budget but also allows you to allocate funds to more innovative ideas.

How to Backup Emails in Outlook?
If you want to back up emails in Outlook, the easiest and most reliable way is to use the Mails.Daddy Email Backup Tool. I've used it personally to export my Outlook.com emails to formats like PST, EML, and MBOX with zero data loss. It connects via IMAP and lets you back up selected folders or the entire mailbox. Whether you're planning to back up Outlook emails to a hard drive or migrate them to another email client, the tool is fast, secure, and beginner-friendly. For anyone asking how to back up emails in Outlook, I strongly recommend trying it; it's a smooth experience and saves a lot of time.
Unified SecOps XDR

Hi, I am reaching out to the community to seek understanding of the unified SecOps XDR portal for multi-tenant, multi-workspace scenarios. Our organization already has an Azure Lighthouse setup. My question: is an M365 Lighthouse license also required for multi-tenant, multi-workspace in the unified SecOps XDR portal?
PostgreSQL Discovery and Assessment in Azure Migrate – Public Preview

We're excited to announce the public preview of PostgreSQL discovery and assessment in Azure Migrate! This feature helps organizations plan their migration journey to Azure by providing deep insights into on-premises PostgreSQL environments.

Why This Matters
Migrating PostgreSQL workloads to Azure can be challenging without visibility into your current environment. Azure Migrate now offers a unified experience to:
- Discover PostgreSQL instances across your infrastructure.
- Assess migration readiness and identify potential blockers.
- Get configuration-based SKU recommendations for Azure Database for PostgreSQL.
- Estimate Azure costs for your PostgreSQL workloads.

Key Capabilities

Comprehensive Discovery
- Inventory: Catalog PostgreSQL versions and related components.
- Discovery: Collect database parameters, configurations, table structures, and storage details.

Assessment Features
- Readiness Rules: Determine whether your PostgreSQL instances are:
  - Ready: The instance can be migrated to Azure Database for PostgreSQL without any migration issues.
  - Ready with Conditions: The instance has one or more migration issues. Review the identified issues and apply the recommended remediation steps before migration.
  - Not Ready: The assessment did not identify an Azure Database for PostgreSQL configuration that meets the desired performance and configuration requirements. Review the recommendations provided to make the PostgreSQL instance ready for migration.
  - Unknown: Azure Migrate can't assess readiness because discovery is still in progress or there are issues that need to be resolved. To fix discovery issues, check the Notifications blade for details. If the issue persists, contact Microsoft support.
- Configuration-Based SKU Recommendations: Based on vCores and memory from the machine and storage from the PostgreSQL instance. Example: Memory Optimized – E20ds_v5.
- Pricing Estimates: Approximate Azure cost for the recommended SKUs.
- Database Parameter Collections: Deep insights into database parameters.
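To illustrate what a configuration-based recommendation means, here is a deliberately simplified sketch. This is not Azure Migrate's actual sizing logic: the memory-per-vCore thresholds and SKU name patterns are illustrative assumptions, chosen so that a machine matching the example above (E20ds_v5 offers 20 vCores and 160 GiB) lands on the Memory Optimized tier.

```powershell
# Toy illustration only - NOT Azure Migrate's real recommendation engine.
function Get-ToySkuRecommendation {
    param([int]$vCores, [int]$MemoryGiB)
    $gibPerCore = $MemoryGiB / $vCores
    if     ($gibPerCore -ge 7) { "Memory Optimized - E$($vCores)ds_v5" }   # ~8 GiB/vCore tier
    elseif ($gibPerCore -ge 3) { "General Purpose - D$($vCores)ds_v5" }    # ~4 GiB/vCore tier
    else                       { "Burstable - B$($vCores)ms" }
}

Get-ToySkuRecommendation -vCores 20 -MemoryGiB 160   # -> Memory Optimized - E20ds_v5
```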
How to Get Started?
To begin using the PostgreSQL discovery and assessment feature in Azure Migrate, follow this four-step onboarding process:
1. Create an Azure Migrate project: Initiate your migration journey by setting up a project in the Azure portal.
2. Configure the Azure Migrate appliance: Install the Windows-based Azure Migrate appliance to obtain a software inventory of servers and PostgreSQL instances with their attributes, and to perform discovery.
3. Review the discovered inventory: Examine the detailed attributes of the discovered PostgreSQL instances.
4. Create an assessment: Evaluate readiness and get detailed recommendations for migration to Azure Database for PostgreSQL.

Benefits of Using Azure Migrate for PostgreSQL
- Single pane of glass: Manage PostgreSQL migrations alongside servers, apps, and other databases.
- Simple setup: Lightweight collector, no heavy appliances.
- Actionable insights: Readiness rules and SKU recommendations tailored to your configuration.

For comprehensive, step-by-step instructions, please refer to the discovery and assessment tutorials in the documentation:
- Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases - Azure Migrate | Microsoft Learn
- Discovery methods in Azure Migrate - Azure Migrate | Microsoft Learn
- Assessing On-Premises PostgreSQL for Migration to Azure Flexible Server - Azure Migrate | Microsoft Learn

Join the Preview and Share Your Feedback!
The PostgreSQL discovery and assessment feature in Azure Migrate enables you to effortlessly discover, assess, and plan your PostgreSQL database migrations to Azure. Try the feature out in public preview and fast-track your migration journey! If you have any queries, feedback, or suggestions, please let us know by leaving a comment below or by contacting us directly at askazurepostgresql@microsoft.com. We are eager to hear your feedback and to support you on your journey to Azure.