azure cloud service
37 Topics

How to manage the VIP swap in cloud service extended support via PowerShell
You can swap between two independent cloud service deployments in Azure Cloud Services (extended support). Unlike Azure Cloud Services (classic), the Azure Resource Manager model in Azure Cloud Services (extended support) doesn't use deployment slots. Instead, when you deploy a new release of a cloud service, you can make it "swappable" with an existing cloud service in Azure Cloud Services (extended support). In this blog, we will see how to perform a version update via PowerShell and the REST API.

The History of Microsoft Azure
Learn about the history of Microsoft Azure, a leading giant in the cloud service industry, which offers rich services for platform as a service (PaaS), software as a service (SaaS), and infrastructure as a service (IaaS). This will take us back to how this powerful and sophisticated service began, revealing the resilience and vision of Microsoft as a brand, its present stage, and how businesses and developers can partake of the cake Microsoft has provided.

Adjusting VM Size in Classic Cloud Services Without .csdef access for Migration to CSES
With the deprecation of Classic Cloud Services, many customers are moving to Cloud Services Extended Support. However, it is common for some customers to not have, or to have lost, the code used by the application. Some customers try to use in-place migration and run into problems related to the VM size, which can normally only be modified through the .csdef file. This blog hopes to help customers who have these types of problems.

Migrate classic Cloud Service to CSES when SKU is unsupported without original project
With the impending retirement of the classic Cloud Service (CS) on August 31st, 2024, an increasing number of users have initiated the migration of their classic Cloud Service to Cloud Service Extended Support (CSES). To facilitate this transition, an official feature known as in-place migration has been introduced, enabling the seamless migration of classic CS to CSES without incurring any downtime. However, certain limitations exist, with the VM size used by the CS role being a notable factor. As per documentation, the A-series, encompassing Small, Medium, and Large VM sizes, is no longer supported in CSES, necessitating their conversion to corresponding supported VM sizes as a preliminary step.

To apply a change to the VM size used in a classic CS, a redeployment/upgrade is required after modifying the VM size in the .csdef file. While this is generally a straightforward operation, in cases where the project deployed in the classic CS is considerably dated, there is a possibility that the original project may be lost. Consequently, re-packaging the project into .cspkg and .cscfg files for redeployment/upgrade becomes unfeasible. This blog will primarily address this specific scenario and outline strategies for resolving this predicament.

Pre-requirements:

1. A healthy and running classic CS with the project deployed in the production slot. The VM size used by at least one role is A-series.
2. A classic Storage Account under the same subscription as the classic CS. The UI to create a classic Storage Account in the Azure Portal is already hidden because this resource type is slated for retirement for new creations, but it is still possible to create one by command for now. A sample command:

```powershell
New-AzResource -ResourceName <accountname> -ResourceGroupName <resourcegroupname> -ResourceType "Microsoft.ClassicStorage/StorageAccounts" -Location <location> -Properties @{ AccountType = "Standard_LRS" } -ApiVersion "2015-06-01"
```

3. Install the Az PowerShell module on the local machine.

Attention!
By following this approach, a short downtime is unavoidable. If this needs to be applied to a production environment, please do the same test in another environment first.

Details:

1. Refer to New Deployment Based On Existing Classic Cloud Service - Microsoft Community Hub to get the .cspkg and .cscfg files first. (Please remember to install the .pfx certificate on the machine from which the Get Package request will be sent.) The expected result is that the .cspkg and .cscfg files appear in the classic storage account container. Please download them to the local machine.

2. (Optional) If the new CSES needs to use the same IP address as the original classic CS, follow these steps:

a. Install the legacy Azure PowerShell module.

b. Reserve the current IP as a reserved IP address:

```powershell
New-AzureReservedIP -ReservedIPName <reserved ip name> -Location <classic CS location> -ServiceName <classic CS name>
```

The reserved IP address will be found in a resource group called Default-Networking.

c. Cut the association between the reserved IP and the classic CS. After running this command, the classic CS IP address will change; if the application/client side uses the IP address to connect to the classic CS, it will start failing.

```powershell
Remove-AzureReservedIPAssociation -ReservedIPName <reserved ip name> -ServiceName <classic CS name>
```

A reference for steps b and c can be found here: Manage Azure reserved IP addresses (Classic) | Microsoft Learn

d. Referring to this document, convert the reserved IP address into a public IP address which can be used by CSES:

```powershell
Move-AzureReservedIP -ReservedIPName <reserved IP name> -Validate
Move-AzureReservedIP -ReservedIPName <reserved IP name> -Prepare
Move-AzureReservedIP -ReservedIPName <reserved IP name> -Commit
```

After the commands finish, you will find the converted public IP in a resource group called <publicipaddress-name>-Migrated.

3. By default the public IP address has no domain name. If one is needed, please configure it on the Configuration page.
4. Move the public IP address to the same resource group where the new CSES resource will be created.

5. (Optional) If the classic CS is using a certificate, create a Key Vault resource in the same subscription and same region, then upload the certificate(s) into Key Vault > Certificates. For more information, please refer to here.

6. Create a Virtual Network in the same resource group and same region as the new CSES resource.

7. Open the downloaded .cscfg file with any text editor and add/modify the NetworkConfiguration part:

```xml
<NetworkConfiguration>
  <VirtualNetworkSite name="xxx" />
  <AddressAssignments>
    <InstanceAddress roleName="xxx">
      <Subnets>
        <Subnet name="xxx" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
```

8. (Optional) If step 2 was followed, please also add the ReservedIPs part:

```xml
<NetworkConfiguration>
  <VirtualNetworkSite name="xxx" />
  <AddressAssignments>
    <InstanceAddress roleName="xxx">
      <Subnets>
        <Subnet name="xxx" />
      </Subnets>
    </InstanceAddress>
    <ReservedIPs>
      <ReservedIP name="xxx" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
```

9. After modifying the .cscfg file, upload the .cscfg and .cspkg files into a storage account blob container, then generate and note down the SAS URLs of these two files.

10. If step 2 was not followed, manually create a public IP address with the Basic SKU and static IP address assignment mode.

11. To create the new CSES resource, there are two possible ways: using a PowerShell command or using an ARM template. The key point is to use the SKU override feature to replace the VM size setting from the .csdef file. (Attention! Since the VM size configured inside the .csdef file is still the unsupported VM size, please remember to use the SKU override feature in the ARM template or PowerShell command every time in the future as well. Otherwise the deployment/upgrade will fail.)
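Generating the SAS URLs for the uploaded .cspkg and .cscfg files can be done with the Az.Storage module. Below is a minimal sketch, assuming read access is sufficient; the account name, key, container, and blob names are placeholders to substitute with your own values:

```powershell
# Build a storage context for the account holding the uploaded files
# (account name/key, container, and blob names are placeholders)
$ctx = New-AzStorageContext -StorageAccountName "<storage account name>" -StorageAccountKey "<storage account key>"

# Generate read-only SAS URLs, valid for 24 hours in this sketch
$expiry = (Get-Date).AddHours(24)
$cspkgSAS = New-AzStorageBlobSASToken -Container "<container name>" -Blob "<package>.cspkg" -Permission r -ExpiryTime $expiry -FullUri -Context $ctx
$cscfgSAS = New-AzStorageBlobSASToken -Container "<container name>" -Blob "<config>.cscfg" -Permission r -ExpiryTime $expiry -FullUri -Context $ctx
```

The -FullUri switch returns the complete blob URL with the SAS token appended, which is the form expected in the following deployment steps.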
Using a PowerShell script (if Key Vault is not used, remove the first $osProfile part and the last -OSProfile parameter of the New-AzCloudService command):

```powershell
$keyVault = Get-AzKeyVault -ResourceGroupName <key vault resource group> -VaultName <key vault resource name>
$certificate = Get-AzKeyVaultCertificate -VaultName <key vault resource name> -Name <certificate name in Key Vault>
$secretGroup = New-AzCloudServiceVaultSecretGroupObject -Id $keyVault.ResourceId -CertificateUrl $certificate.SecretId
$osProfile = @{secret = @($secretGroup)}

$cspkgSAS = <SAS URL of cspkg file>
$cscfgSAS = <SAS URL of cscfg file>

$role1 = New-AzCloudServiceRoleProfilePropertiesObject -Name <Role1 name> -SkuName <new supported vm size> -SkuTier 'Standard' -SkuCapacity <instance number>
$role2 = New-AzCloudServiceRoleProfilePropertiesObject -Name <Role2 name> -SkuName <new supported vm size> -SkuTier 'Standard' -SkuCapacity <instance number>
$roleProfile = @{role = @($role1, $role2)}

$publicIP = Get-AzPublicIpAddress -ResourceGroupName <public IP resource group> -Name <public IP name>
$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name <frontend IP config name> -PublicIPAddressId $publicIP.Id
$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name <load balancer config> -FrontendIPConfiguration $feIpConfig
$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}

# Create Cloud Service
New-AzCloudService -Name <CSES name> -ResourceGroupName <resource group name> -Location <CSES Location> -AllowModelOverride -PackageUrl $cspkgSAS -ConfigurationUrl $cscfgSAS -UpgradeMode 'Auto' -RoleProfile $roleProfile -NetworkProfile $networkProfile -OSProfile $osProfile
```

Using an ARM template: if Key Vault is not used, remember to remove the secrets in osProfile and keep it empty as "osProfile": {}, remove the secrets parameter part from the template file, and remove the secrets parameter from the parameter file.
Template file:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "cloudServiceName": { "type": "string", "metadata": { "description": "Name of the cloud service" } },
    "location": { "type": "string", "metadata": { "description": "Location of the cloud service" } },
    "deploymentLabel": { "type": "string", "metadata": { "description": "Label of the deployment" } },
    "packageSasUri": { "type": "securestring", "metadata": { "description": "SAS Uri of the CSPKG file to deploy" } },
    "configurationSasUri": { "type": "securestring", "metadata": { "description": "SAS Uri of the service configuration (.cscfg)" } },
    "roles": { "type": "array", "metadata": { "description": "Roles created in the cloud service application" } },
    "publicIPName": { "type": "string", "defaultValue": "contosocsIP", "metadata": { "description": "Name of public IP address" } },
    "upgradeMode": { "type": "string", "defaultValue": "Auto", "metadata": { "UpgradeMode": "UpgradeMode of the CloudService" } },
    "secrets": { "type": "array", "metadata": { "description": "The key vault id and certificates referenced in the .cscfg file" } }
  },
  "variables": {
    "cloudServiceName": "[parameters('cloudServiceName')]",
    "subscriptionID": "[subscription().subscriptionId]",
    "lbName": "[concat(variables('cloudServiceName'), 'LB')]",
    "lbFEName": "[concat(variables('cloudServiceName'), 'LBFE')]",
    "resourcePrefix": "[concat('/subscriptions/', variables('subscriptionID'), '/resourceGroups/', resourceGroup().name, '/providers/')]"
  },
  "resources": [
    {
      "apiVersion": "2020-10-01-preview",
      "type": "Microsoft.Compute/cloudServices",
      "name": "[variables('cloudServiceName')]",
      "location": "[parameters('location')]",
      "tags": { "DeploymentLabel": "[parameters('deploymentLabel')]" },
      "properties": {
        "packageUrl": "[parameters('packageSasUri')]",
        "configurationUrl": "[parameters('configurationSasUri')]",
        "upgradeMode": "[parameters('upgradeMode')]",
        "allowModelOverride": true,
        "roleProfile": { "roles": "[parameters('roles')]" },
        "networkProfile": {
          "loadBalancerConfigurations": [
            {
              "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/loadBalancers/', variables('lbName'))]",
              "name": "[variables('lbName')]",
              "properties": {
                "frontendIPConfigurations": [
                  {
                    "name": "[variables('lbFEName')]",
                    "properties": {
                      "publicIPAddress": {
                        "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]"
                      }
                    }
                  }
                ]
              }
            }
          ]
        },
        "osProfile": { "secrets": "[parameters('secrets')]" }
      }
    }
  ]
}
```

Parameter file:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "cloudServiceName": { "value": <CSES name> },
    "location": { "value": <CSES region> },
    "deploymentLabel": { "value": "deployment label of cses by ARM template" },
    "packageSasUri": { "value": <.cspkg SAS URL> },
    "configurationSasUri": { "value": <.cscfg SAS URL> },
    "roles": {
      "value": [
        { "name": <role1 name>, "sku": { "name": <new supported VM size>, "tier": "Standard", "capacity": <instance number> } },
        { "name": <role2 name>, "sku": { "name": <new supported VM size>, "tier": "Standard", "capacity": <instance number> } }
      ]
    },
    "publicIPName": { "value": <public IP address name> },
    "upgradeMode": { "value": "Auto" },
    "secrets": {
      "value": [
        {
          "sourceVault": {
            "id": "/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.KeyVault/vaults/<key vault name>"
          },
          "vaultCertificates": [
            {
              "certificateUrl": "https://<key vault name>.vault.azure.net/secrets/<certificate name>/<secret ID>"
            }
          ]
        }
      ]
    }
  }
}
```

Azure Logic Apps : HTTP Request OR Custom Connector
Hello, As far as I know, we use HTTP requests when consuming first-party/third-party APIs; so when should we use a custom connector instead? What are the business cases where one should use an HTTP request in Power Automate and Power Apps, versus using a custom connector in Power Apps and Power Automate? What are the pros and cons of an HTTP request versus a custom connector? Thanks and Regards, -Sri

Azure Logic App workflow (Standard) Resubmit and Retry
Hello Experts, A workflow is scheduled to run daily at a specific time and retrieves data from different systems using REST API calls (8-9). The data is then sent to another system through API calls using multiple child flows. We receive more than 1500 input records, and for each record an API call needs to be made. During the API invocation process, there is a possibility of failure due to server errors (5xx) and client errors (4xx). To handle this, we have implemented a "Retry" mechanism with a fixed interval. However, there is still a chance of flow failure for various reasons. Although there is a "Resubmit" feature available at the action level, I cannot apply it in this case because we are using multiple child workflows and the response is sent back from one flow to another. Is it necessary to utilize the "Resubmit" functionality? The retry functionality has been developed to handle any server API errors (5xx) that may occur with connectors (both custom and standard), including client API errors 408 and 429. In this specific scenario, is it reasonable to attempt retrying or resubmitting the API call from the Azure Logic Apps workflow? Nevertheless, there are other situations where implementing the retry and resubmit logic would result in the same error outcome. Is it acceptable to proceed with the retry functionality in this particular scenario? It would be highly appreciated if you could provide guidance on the appropriate methodology. Thanks -Sri

Instance level public ip address configuration in the cloud service
An Instance-Level Public IP Address (PIP), unlike the Virtual IP Address (VIP), is not load balanced. While the VIP is assigned to the cloud service and shared by all virtual machines and role instances in it, the PIP is associated only with a single instance's NIC. The PIP is particularly useful in multi-instance deployments where each instance must be reachable independently from the Internet. The picture below illustrates the value of the PIP and differentiates it from the VIP.

AZURE CLOUD SERVICE_Download Package
Use Case: Download a Cloud Service Package

Prerequisites/More Information:

1. You need to download the Azure tool from the link below: https://dsazure.blob.core.windows.net/azuretools/AzureTools.exe
2. More details regarding the tool can be found at the link below: http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx

Steps to execute the Get Package API and create a self-signed certificate:

1. Download AzureTools.exe from the link below: https://dsazure.blob.core.windows.net/azuretools/AzureTools.exe
2. Double-click AzureTools.exe.
3. Click on the Utils tab.
4. Click on Misc Tools under the Utils tab. A new window will be displayed.
5. Click on the Service Management REST API tab.
6. Create a self-signed certificate (skip if you have already created one) by following the steps at the link below: https://technet.microsoft.com/en-us/library/ff710475(v=ws.10).aspx
7. Install the exported certificate on your machine.
8. Upload the exported certificate to the Azure portal (https://portal.azure.com): go to Subscriptions, select the subscription of the cloud service whose package you want to download, go to Management Certificates, click on Upload, and upload the certificate that you have created. (Please make sure that the certificate's extension is .cer.)
9. In the AzureTools Service Management REST API tab, enter the subscription ID and the certificate path. Select the POST operation from the drop-down.
10. Follow this article - https://msdn.microsoft.com/en-us/library/azure/jj154121.aspx - and create the URI. You will be able to find the subscription ID and cloud service name by logging into the Azure portal under your cloud service.
11. Create a classic storage account in the same subscription. You need to create a public container and specify the URI of the container to which the package will be saved.
For example:

https://management.core.windows.net/84237cfe-3de2-42a7-84fa-506a329fc0a4/services/hostedservices/pranj/deploymentslots/production/package?containerUri=https://clascstrgaccnt.blob.core.windows.net/case1

Click on the Submit button. You should see the response 202 Accepted. You will then find the package files in the public container you created under the classic storage account.

Azure Logic Apps vs Power Automate
Hello Experts, Please guide me in selecting the more suitable option between Azure Logic Apps and Power Automate for developing an enterprise application that runs on a schedule. This application must interact with multiple on-premises and SaaS systems by making several REST API calls (approximately 8-10) and storing the retrieved data (structured and unstructured). Thanks -Sri