User Profile
infocloud
Copper Contributor
Joined 3 years ago
User Widgets
Recent Discussions
Re: Why you need a Cloud Adoption Framework (CAF), and probably a WAF too!
It is recommended that organizations adopt the Cloud Adoption Framework (CAF) and the Azure Well-Architected Framework (WAF) to ensure a smooth and successful cloud migration and transformation journey.

The CAF provides tools, guidance, and best practices to help organizations plan, prepare, and manage the adoption of cloud technologies, including Azure. It helps organizations assess their current state, establish a cloud strategy, design a target state, and execute the migration and transformation plan, taking a holistic approach that covers the people, process, and technology aspects of the journey.

The WAF, on the other hand, is a set of best practices, guidelines, and tools for designing and operating secure, reliable, and efficient cloud workloads on Azure. It focuses on five pillars: cost optimization, operational excellence, performance efficiency, reliability, and security, and provides a consistent approach to designing and operating cloud workloads.

In practice, organizations can start by assessing their current state and identifying their goals and objectives, then use the CAF to establish a cloud strategy and design a target state. Once they have a clear vision of their cloud environment, they can use the WAF to design and operate workloads that align with industry standards and best practices. Together, the two frameworks provide a systematic approach to cloud adoption, reducing the risk of failure and leading to a more secure, reliable, and cost-effective cloud environment.
In conclusion, the CAF and WAF are essential tools for organizations adopting cloud technologies, including Azure. They provide a systematic approach to cloud adoption, and organizations that use them can benefit from a more secure, reliable, and cost-effective cloud environment, leading to increased agility and competitiveness. Here is a reference for the Microsoft Azure Well-Architected Framework: https://docs.microsoft.com/en-us/azure/architecture/framework/wellarchitected/

Re: Transfer an enterprise account to another enterprise account
Regarding your question, transferring an enterprise account to a new enrollment can be considered a transfer of ownership, as it moves the billing and financial responsibility for the account from one entity to another. To transfer an enterprise account and its associated subscriptions to a new enrollment, you can follow the instructions in the "Transfer Azure Enterprise enrollment accounts and subscriptions" documentation that you mentioned. The process involves several steps: preparing for the transfer, initiating it, and completing it.

In addition to the official documentation, you may find helpful resources on the Azure support forums or by contacting Azure customer support directly. It may also be worth consulting legal and financial experts to ensure that the transfer is carried out properly and in compliance with any relevant regulations or requirements.

I hope this helps! Let me know if you have any further questions. Here's a link to the Microsoft documentation on transferring Azure Enterprise enrollments and subscriptions: https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/transfer-ownership-enterprise-agreement

Re: Azure Authentication using different username
Yes, you can change the username used for Azure Active Directory (Azure AD) authentication without changing the email address. Here are some options:

1. Create an alternate userPrincipalName for your users in Azure AD. This attribute can be used as the username for authentication, while the email address is still used for communication. You can set the alternate userPrincipalName with Azure AD PowerShell commands or the Azure AD Graph API. Reference: https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-fed-create-alternate-upn

2. Use Azure AD Connect to synchronize on-premises AD user accounts with Azure AD. During synchronization, you can map a different attribute (for example, samAccountName) to the userPrincipalName attribute in Azure AD, so users sign in with that value instead. Reference: https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-install-custom

3. Use Azure AD B2C to create custom usernames. Azure AD B2C is a cloud-based identity management solution that lets you customize the authentication and authorization process. With custom policies, users can sign in with a custom username and password, or with a social identity provider such as Facebook or Google. Reference: https://docs.microsoft.com/en-us/azure/active-directory-b2c/custom-policy-get-started

I hope this helps! Let me know if you have any further questions.

Re: S/4 HANA in Azure.
If you are not allowed to create public IP addresses on your Azure VMs due to organizational policy, there are several ways to allow external users to access your SAP system:

1. Deploy an Azure Application Gateway. Application Gateway is a web traffic load balancer that can manage and secure web traffic to your SAP system. You can configure it to listen on a specific port (e.g. port 33XX for SAP HANA) and forward traffic to the SAP system running on the VM.

2. Create a SAP router. A SAP router on your VM acts as a gateway between external users and the SAP system. To set one up, follow the instructions in the SAP documentation.

3. Configure network security groups (NSGs). To ensure that only authorized traffic reaches your SAP system, create an NSG and associate it with the SAP system's network interface, then allow inbound traffic on the port used by the SAP router and outbound traffic on the ports required by SAP HANA.

4. Use Azure Bastion for secure administrative access. Azure Bastion is a fully managed service that provides secure and seamless RDP/SSH connectivity to your VMs directly from the Azure portal, which you can use to access the VM and configure your SAP system.

Here is a reference architecture that illustrates how these components can be used together: https://learn.microsoft.com/en-us/azure/sap/large-instances/hana-architecture It includes details on how to configure an Application Gateway, create a SAP router, and use Azure Bastion for secure access.
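The "33XX" port mentioned above follows SAP's instance-based port numbering, which is worth computing explicitly when you define Application Gateway listeners and NSG rules. A small sketch of the commonly documented convention (the function name is illustrative, and you should verify the ports against your own installation):

```python
# Sketch of SAP's well-known instance-based port numbering, useful when
# configuring Application Gateway listeners and NSG rules. Instance
# numbers are two digits; verify the mapping against your installation.
def sap_ports(instance: str) -> dict:
    if len(instance) != 2 or not instance.isdigit():
        raise ValueError("instance number must be two digits, e.g. '00'")
    return {
        "dispatcher": 3200 + int(instance),      # SAP GUI / DIAG: 32NN
        "gateway": 3300 + int(instance),         # RFC gateway: 33NN
        "hana_sql": int("3" + instance + "15"),  # HANA tenant SQL: 3NN15
    }

print(sap_ports("00"))  # {'dispatcher': 3200, 'gateway': 3300, 'hana_sql': 30015}
```

Opening only the specific computed ports in the NSG, rather than a range, keeps the exposed surface as small as possible.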
In summary, to allow external users to access your SAP system running on an Azure VM without a public IP, you can combine an Azure Application Gateway, a SAP router, network security groups, and Azure Bastion.

Re: URL Based Routing when wildcard in the middle of the URL
There are several Azure services that can be used to achieve URL-based routing with a wildcard in the middle of the URL. Here are some possible solutions:

1. Azure Traffic Manager: a DNS-based traffic load balancer that routes incoming requests to different endpoints based on routing rules. Because it is DNS-based, Traffic Manager routes at the hostname level rather than on URL paths, so it only fits this scenario if the {appId} can be encoded in the hostname.

2. Azure API Management: a fully managed service that enables you to create, publish, and manage APIs. You can define a URL-based routing rule that includes the {appId} wildcard and use policies to transform the URL before it is passed to the backend API.

3. Azure Kubernetes Service (AKS): if you use a containerized approach for your APIs, you can deploy them to AKS and use Kubernetes Ingress for URL-based routing, defining rules based on the {appId} wildcard and routing traffic to different services.

4. Azure Functions: a serverless compute service that runs code in response to events or triggers. An HTTP-triggered function can accept the {appId} wildcard in its route and use code to send traffic to the appropriate backend service.

5. Azure Logic Apps: a cloud-based service for building workflows and integrations with other services. A workflow can handle requests for your API, read the {appId} from the URL, and route traffic to the appropriate backend service.

These are just a few examples of Azure services that can be used to achieve URL-based routing with a wildcard in the middle of the URL.
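Whichever service you choose, the core of the problem is matching a path with a variable segment in the middle. A minimal sketch of that matching logic (the route patterns and backend names are made-up placeholders, not tied to any Azure service):

```python
import re

# Minimal sketch of URL routing with a wildcard segment in the middle of
# the path, e.g. /api/{appId}/orders. Patterns and backend names are
# illustrative placeholders.
ROUTES = [
    (re.compile(r"^/api/(?P<appId>[^/]+)/orders$"), "orders-backend"),
    (re.compile(r"^/api/(?P<appId>[^/]+)/users$"), "users-backend"),
]

def route(path):
    """Return (backend, appId) for the first matching route, else None."""
    for pattern, backend in ROUTES:
        match = pattern.match(path)
        if match:
            return backend, match.group("appId")
    return None

print(route("/api/contoso/orders"))  # ('orders-backend', 'contoso')
```

This is essentially what API Management route templates or a Kubernetes Ingress controller do for you; in an Azure Function you would write something close to this yourself.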
The best solution for your specific use case will depend on your requirements and constraints. Here are reference links for the options above:

- Azure Traffic Manager: https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview
- Azure API Management: https://learn.microsoft.com/en-us/azure/api-management/
- Kubernetes Ingress (for AKS): https://kubernetes.io/docs/concepts/services-networking/ingress/
- Azure Logic Apps: https://learn.microsoft.com/en-us/azure/logic-apps/

Re: Migrate from VM to Azure cloud tech
Migrating from an Azure VM to a cloud-based architecture can provide many benefits, such as better scalability, cost savings, and improved security. Here are some suggestions on how to upgrade your architecture:

1. Move your application to Azure App Service. App Service is a platform-as-a-service (PaaS) offering that lets you easily deploy and manage web applications. You can deploy your .NET Core application to an App Service and take advantage of features such as auto-scaling, load balancing, and automatic backups.

2. Use Azure Functions for processing tasks. Instead of executing .bat files on your VM, consider Azure Functions, a serverless compute service that runs code in response to events or triggers. You can create a function that processes data in your Azure SQL database and trigger it whenever new data is added.

3. Move your database to Azure SQL Managed Instance. Managed Instance is a fully managed relational database service with near 100% compatibility with SQL Server. Migrating your existing Azure SQL database to a managed instance gives you automatic backups, automated patching, and built-in high availability.

4. Use Azure DevOps for CI/CD. Azure DevOps is an integrated set of services for building, testing, and deploying applications; you can set up continuous integration and continuous deployment (CI/CD) for your .NET Core application to easily deploy changes.

By following these suggestions, you can modernize your application and take advantage of the many benefits of cloud-based architectures.
For more information on Azure App Service, Azure Functions, and Azure DevOps, please refer to the following links:

https://azure.microsoft.com/en-us/products/app-service/
https://azure.microsoft.com/en-us/products/functions/
https://azure.microsoft.com/en-us/products/devops/

Re: App service environment and App Service plan availability
The Azure App Service Environment (ASE) is a fully isolated and dedicated environment for hosting web apps, mobile app backends, and RESTful APIs in a secure and scalable manner, with features such as custom domain names, SSL certificates, and virtual network integration.

When it comes to selecting an App Service plan in an ASE, only the Isolated plan types are supported. Multi-tenant plan types such as Basic, Standard, Premium, PremiumV2, and PremiumV3 run in the regular App Service and cannot be used inside an ASE, which is why only Isolated plans are shown when you try to select a PremiumV3 plan in your ASE. PremiumV3 is available for apps hosted outside an ASE, and you can check its availability per region on the Azure products-by-region page.

It's also important to note that App Service plans in an ASE come with certain requirements and restrictions: an ASE is deployed into a virtual network, and you must configure your network to meet certain requirements. You can find more information in the Azure documentation.

In summary, the App Service Environment provides a highly secure and scalable environment for hosting web apps, mobile app backends, and RESTful APIs, but you must choose a plan type that is compatible with it, which means an Isolated plan. If you have any questions or need assistance, you can refer to the Azure documentation or contact Azure support.

References:
https://learn.microsoft.com/en-us/azure/app-service/environment/app-service-app-service-
https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=app-service

Re: Azure VM's patching
Hi,

Yes, it is true that Microsoft offers automated patching for virtual machines (VMs) hosted on the Azure platform. With automated patching, Microsoft automatically applies critical security patches and updates to the VMs in your subscription, reducing the need for manual patching and helping keep your VMs secure and up to date.

However, automated patching is an optional service and is not enabled by default. You can choose to enable or disable it, and you can specify a maintenance window during which updates should be applied. Also note that Microsoft will not patch custom applications or configurations on your VMs, so it is still important to test patches and updates in lower environments before applying them to production.

Regarding your question about whether Microsoft has admin access to your VMs: Microsoft does not have access to your VMs unless you have explicitly granted it. Even with automated patching enabled, Microsoft does not have direct access to your VMs; patches are applied automatically through a secure and controlled process.

In summary, Microsoft offers automated patching for Azure VMs as an opt-in service. It can reduce the need for manual patching and keep your VMs secure and up to date, but you should still validate patches and updates in lower environments before they reach production.
Here are some useful links that provide more information on Azure virtual machine patching:

- "Azure Update Management overview", Microsoft documentation on automated patching for Azure VMs: https://docs.microsoft.com/en-us/azure/azure-update-management/overview
- "Automate patching for Azure VMs", a step-by-step guide on enabling automated patching: https://docs.microsoft.com/en-us/azure/automation/automation-update-management
- "Azure VM patching strategies", a Microsoft blog post on different patching strategies: https://azure.microsoft.com/en-us/blog/azure-vm-patching-strategies/
- "Security update validation and deployment", Microsoft documentation on best practices for testing and deploying security updates: https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-update-validation-and-deployment

I hope these resources are helpful for you!

Re: Hibernate Azure VM
Hibernation support for Azure virtual machines is not currently available, but there are other ways to save on costs and reduce the time needed to start up a virtual machine.

One option is the Azure VM auto-shutdown feature, which lets you schedule when your virtual machines shut down. This can be a great way to save on costs for development and test environments that are only used during certain hours of the day or week. Auto-shutdown can be configured through the Azure portal or with PowerShell scripts.

Another option is Azure Spot VMs, which provide access to unused Azure compute capacity at a lower cost. Spot VMs are available in all regions where Azure is offered and can be used for a variety of workloads. However, Spot VMs are subject to preemption and may be reclaimed by Azure with just 30 seconds' notice, so they are best suited to non-critical workloads and batch processing tasks.

While hibernation is not currently supported, Microsoft is continually evaluating customer feedback and is open to adding hibernation support in the future. Microsoft is also investigating a hibernation-like feature designed to provide faster startup times, known as "Predictive VM Scale-out" and currently in preview. It analyzes usage patterns for a virtual machine and pre-warms it before an expected increase in usage, so the machine is already warmed up and ready when the load arrives.

In summary, while hibernation is not currently supported for Azure virtual machines, there are other ways to save on costs and reduce startup times.
Microsoft is continually working to improve its services and is always open to customer feedback, so hibernation support may be added in the future. In the meantime, customers can take advantage of other features such as auto-shutdown and Spot VMs.

Re: ASR Failover network architecture
phantom2000 When configuring a disaster recovery (DR) solution for an on-premises server with Azure Site Recovery (ASR), it is essential to set up the networking architecture correctly to support failover and failback. This means creating a network architecture that allows seamless connectivity between the on-premises network and the Azure network in the event of a disaster. Here are a few key points to consider:

1. Establishing connectivity: set up a site-to-site (S2S) VPN to provide secure communication between the on-premises network and the Azure virtual network (VNet).

2. Creating subnets: when setting up the Azure VNet, create subnets for the resources that will be created in Azure, such as the ASR target virtual machine. The VNet's address space must not overlap the on-premises network, and it is recommended to create a dedicated subnet for the VPN gateway.

3. Configuring routing: once the S2S VPN is in place, configure routing between the on-premises network and the Azure VNet. This is usually done with a VPN gateway in Azure that routes traffic to and from the on-premises network.

4. Configuring failover: in the event of a disaster, the virtual machine fails over to the Azure VNet, where it is assigned a private IP address from the VNet so it can communicate with the on-premises network via the S2S VPN.

5. Configuring failback: once the on-premises network is restored, the virtual machine fails back to the on-premises network. During this process, routing must be reconfigured so that traffic flows correctly between the on-premises network and the Azure VNet.
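A concrete check worth doing before building the S2S VPN: confirm that the planned Azure VNet address space does not overlap the on-premises ranges, and that the gateway subnet sits inside the VNet. A quick sketch using the standard library (the CIDR ranges are placeholders, not a recommendation):

```python
import ipaddress

# Sketch: sanity-check an ASR network plan before deploying. The CIDR
# ranges below are placeholders; substitute your own address spaces.
on_prem = ipaddress.ip_network("10.0.0.0/16")
azure_vnet = ipaddress.ip_network("10.1.0.0/16")
gateway_subnet = ipaddress.ip_network("10.1.255.0/27")

assert not on_prem.overlaps(azure_vnet), "VNet overlaps the on-premises range"
assert gateway_subnet.subnet_of(azure_vnet), "gateway subnet must sit inside the VNet"
print("address plan OK")
```

Catching an overlap at this stage is much cheaper than discovering it after failover, when routes silently send traffic to the wrong side of the VPN.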
In summary, the key to a successful DR solution with ASR is a network architecture that provides secure connectivity between the on-premises network and the Azure VNet via a site-to-site VPN. Once that is in place, routing can be configured so that the Azure virtual machine can communicate with the on-premises network during failover and failback.

Re: File Share with private endpoint
If you are getting "Access Denied" errors when trying to mount a file share with a private endpoint from your local machine, it is likely that the private endpoint, or the network path to it, is not configured to allow access from your machine. Here are some steps to troubleshoot the issue:

1. Verify that the private endpoint is configured correctly: check that it allows access to the storage account and file share from your VNet by reviewing the private endpoint settings in the Azure portal, including the DNS and private DNS zone configuration. Also review any storage account network rules that restrict which networks or IP addresses may connect.

2. Verify that the VPN connection is established: ensure that your VPN connection to the VNet is up and that you can reach other resources within the VNet, such as virtual machines and other services.

3. Check the firewall settings on your local machine: make sure the local firewall is not blocking traffic to the private endpoint, in particular port 445, which is used for SMB file sharing. Note that many ISPs also block outbound port 445, which produces the same symptom.

4. Verify that the private endpoint resolves correctly: from your local machine, resolve the private endpoint's DNS name and confirm that the IP address returned matches the private IP address assigned to the private endpoint.

5. Check the private endpoint logs: review the logs in the Azure portal for any errors related to the private endpoint configuration or connectivity.

Re: Azure AD test tenant
Yes, having a separate test tenant can be useful for testing changes and new features before deploying them to production. To create a test tenant that resembles your production tenant, you set up a separate Azure AD tenant and configure it to match production as closely as possible. Here are the recommended steps:

1. Create a separate Azure AD tenant: in the Azure portal, go to the "Azure Active Directory" section, select "Create a tenant", and follow the prompts.

2. Configure the test tenant to match production: give it the same settings, policies, and permissions. You can use Azure AD PowerShell or the Azure AD Graph API to automate this. In particular:
- Create the same users and groups, so the test tenant has the same user base as production.
- Configure the same Azure AD Connect settings, so the test tenant syncs the same users from your on-premises Active Directory and therefore holds the same user data; the Azure AD Connect configuration wizard can be used to apply matching settings.
- Configure the same policies, including password settings, device management, and access control.

3. Test changes and new features in the test tenant: once it is set up, perform functional testing, security testing, and load testing there to ensure changes work as expected before they reach production.

4. Deploy changes and new features to production: after testing them in the test tenant, deploy to production. Note that changes deployed to production are not reflected in the test tenant unless you configure them there as well.

5. Keep the test tenant up to date: to remain a reliable representation of production, the test tenant must receive any changes or new features deployed to production; you can automate this with Azure AD PowerShell or the Azure AD Graph API.

Keep in mind that a separate test tenant will incur additional costs, so plan and budget accordingly. Additionally, follow best practices for managing the test tenant, such as keeping it secure and up to date, so it remains an effective tool for testing changes and new features.

Re: Azure AD account expiration date
In Azure AD, you can set an account expiration date for user accounts to restrict access to resources for a specific period. To set an expiration date for a user account, follow these steps:

1. Connect to Azure AD using PowerShell or the Graph API: you can manage Azure AD user accounts with either Azure AD PowerShell or the Azure AD Graph API. For PowerShell, install the Azure AD PowerShell module and authenticate with your tenant; for the Graph API, create an Azure AD app and authenticate with its client ID and secret.

2. Retrieve the user object you want to set the expiration date for, using the user's User Principal Name (UPN) or Object ID. For example, in PowerShell:

Get-AzureADUser -ObjectId <user_object_id>

3. Set the account expiration date using the "AccountExpirationDate" attribute of the user object, which is a DateTime value. For example, to set the expiration to July 1, 2023, at midnight UTC:

Set-AzureADUser -ObjectId <user_object_id> -AccountExpirationDate "2023-07-01T00:00:00Z"

4. Verify the expiration date: retrieve the user object again with the "Get-AzureADUser" cmdlet or the Azure AD Graph API and confirm that the "AccountExpirationDate" attribute is now set to the date and time you specified.
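The expiration value in the command above is an ISO 8601 timestamp in UTC. If you generate these values from a script rather than typing them by hand, a small sketch (the helper name is illustrative):

```python
from datetime import datetime, timezone

# Sketch: build the ISO 8601 UTC string used as the account expiration
# value, e.g. "2023-07-01T00:00:00Z" for midnight UTC on July 1, 2023.
def expiration_string(year, month, day):
    dt = datetime(year, month, day, tzinfo=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

print(expiration_string(2023, 7, 1))  # 2023-07-01T00:00:00Z
```

Constructing the value this way avoids off-by-one mistakes from local time zones, since the trailing "Z" means the date is interpreted as UTC.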
It's important to note that setting an account expiration date prevents the user from signing in after that date, but it does not remove the user account or any associated data. If you want to delete the account and its data after the expiration date has passed, you will need to do so manually or through an automated process.

Re: How to get date part of a filename
Assuming you have a blob container with multiple files named in the format filename_us_ddmmyyyy.csv, you can use the following steps to extract the date part of each filename and decrement it by one day:

1. Create a pipeline in Azure Data Factory and add a "Get Metadata" activity to list the files in the blob container. In the "Blob container" field, select the container holding the files you want to copy, and use the file name pattern you want to search for, for example filename_us_*.csv. In the "Metadata" tab, select the "Child items" property; this retrieves the name of each file in the container.

2. Add a "ForEach" activity to iterate over the list of files returned by "Get Metadata", setting its "Items" field to the output of the "Get Metadata" activity.

3. Inside the "ForEach" activity, add a "Set Variable" activity to extract the date part of the filename and decrement it by one day. In the "Name" field, enter a name for the variable that will store the decremented date. Note that C#-style calls such as item().name.IndexOf('_') and approximate day arithmetic (multiplying months by 30 and years by 365) are not valid in the Data Factory expression language; an expression along these lines should work instead (verify the offsets against your actual filenames):

@{formatDateTime(addDays(concat(substring(item().name, add(lastIndexOf(item().name, '_'), 5), 4), '-', substring(item().name, add(lastIndexOf(item().name, '_'), 3), 2), '-', substring(item().name, add(lastIndexOf(item().name, '_'), 1), 2)), -1), 'ddMMyyyy')}

This expression assumes the date in the filename is in the format "ddmmyyyy". It extracts the day, month, and year with "substring", offset from the last underscore found with "lastIndexOf", rebuilds them as a yyyy-MM-dd date, subtracts one day with "addDays", and formats the result back to "ddmmyyyy" with "formatDateTime".

4. Map the output of the "Set Variable" activity to an additional column in the copy activity.
In the copy activity, select the files to copy from the blob container, and in the "Mapping" tab, add an "Additional column". In the "Column name" field, enter a name for the column that will store the decremented date, and in the "Value" field, select the variable set by the "Set Variable" activity. This adds a column containing the decremented date to the copied data.

Finally, run the pipeline: it will iterate over each file in the blob container, extract the date from the filename, and write an additional column with the decremented date for each file.
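The date arithmetic in the pipeline expression is easier to see in plain code. A sketch of the same logic outside Data Factory (the sample filename follows the pattern from the question):

```python
from datetime import datetime, timedelta

# Sketch of the same logic in plain code: take a name like
# "filename_us_15032023.csv", pull out the ddmmyyyy part after the last
# underscore, and step the date back one day.
def previous_day(filename):
    date_part = filename.rsplit("_", 1)[1].split(".")[0]   # e.g. "15032023"
    day_before = datetime.strptime(date_part, "%d%m%Y") - timedelta(days=1)
    return day_before.strftime("%d%m%Y")

print(previous_day("filename_us_15032023.csv"))  # 14032023
```

Using a real date type for the subtraction is what makes month and year boundaries (e.g. 01012023 becoming 31122022) come out right, which the original multiply-by-30/365 arithmetic could not guarantee.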
Groups
Recent Blog Articles
No content to show