logic apps
11 Topics

New User Call
Hello, I am new to Azure. I want to use Azure to send our employee details from IFS10 to the Kallidus Learning Portal (third party) using APIs. We use IFS10 (on-premises) with the available IFS10 APIs, and we are doing a proof of concept on Azure Integration Services. Question is: when a new employee is made active in the database, how would Azure know there is a new record? Thanks, Ross

AZURE LOGIC APPS: BLOB FROM DEFENDER ADVANCED HUNTING DATA
This workshop is an integration of Microsoft Defender Advanced Hunting queries and Logic Apps. As we already know, Infrastructure as Code is the way to go, and for our workshops we will utilize Azure DevOps, Terraform and GitHub. For this one we will use DevOps pipelines to run our az cli script on a self-hosted Windows agent, grabbing the templates from GitHub repos.

What we need:
- Azure Subscription
- Microsoft 365 Admin access to the Security Portal
- Azure DevOps account, sign in here: https://aex.dev.azure.com/
- GitHub account

You can deploy without the need of a self-hosted agent, but as we proceed with more demanding deployments a hosted agent comes in very handy. We can have a look at the Microsoft-hosted agent limitations and decide when it is best to deploy our self-hosted agent.

Create the resources (Azure Pipelines)

Let's create a Service Principal with Contributor rights to connect our DevOps project with our Azure subscription, and a Personal Access Token from GitHub. From Azure Cloud Shell run the command, give it a name and add your Subscription Id:

az ad sp create-for-rbac --name azdev-sp --role Contributor --scopes /subscriptions/xxxxxxx-xxxxxxxx-xxxxxxx

Keep the output values. Login to https://aex.dev.azure.com/ and select or create an Organization and a Project. In the Project Settings you will find Service Connections and the option to create a new one. Select Azure Resource Manager, Service Principal (Manual), and add the details in the fields. You may create the connection with the Automatic option as well; my preference is Manual. Verify the Service Connection, give it a name and an optional description, and check Grant access to all pipelines for our workshop.

Fork or copy the https://github.com/Azure/azure-quickstart-templates repo and create a new one; let's name it logicappworkflow. Now in the GitHub portal open the upper-right profile menu, select Settings, and in the left vertical menu open "Developer settings". Generate a new Personal Access Token, give it a name with all Repo checkboxes (scopes) selected, and copy it somewhere safe. Return to Azure DevOps Project Settings and in a similar manner create a new Service Connection to GitHub of type Personal Access Token. Allow the repos "azure-quickstart-templates" and "logicappworkflow", verify and save.
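For reference, az ad sp create-for-rbac prints a small JSON object roughly like the sketch below (all values here are placeholders). When filling in the manual Azure Resource Manager service connection, appId maps to the Service Principal Id field, password to the service principal key, and tenant to the Tenant ID:

{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azdev-sp",
  "password": "<generated-client-secret>",
  "tenant": "11111111-1111-1111-1111-111111111111"
}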
The template for a blank Consumption Logic App workflow is this, followed by the parameters.json:

template.json

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppApiVersion": {
      "type": "string"
    },
    "name": {
      "type": "string"
    },
    "location": {
      "type": "string"
    },
    "workflowSchema": {
      "type": "string"
    },
    "logicAppState": {
      "type": "string",
      "defaultValue": "Enabled"
    },
    "definition": {
      "type": "string",
      "defaultValue": "[concat('{\"contentVersion\":\"1.0.0.0\",\"parameters\":{},\"actions\":{},\"triggers\":{},\"outputs\":{},\"$schema\":\"', parameters('workflowSchema'), '\"}')]"
    },
    "parameters": {
      "type": "object",
      "defaultValue": {}
    }
  },
  "resources": [
    {
      "apiVersion": "[parameters('logicAppApiVersion')]",
      "name": "[parameters('name')]",
      "type": "Microsoft.Logic/workflows",
      "location": "[parameters('location')]",
      "tags": {},
      "properties": {
        "definition": "[json(parameters('definition'))]",
        "parameters": "[parameters('parameters')]",
        "state": "[parameters('logicAppState')]"
      }
    }
  ]
}

parameters.json

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppApiVersion": {
      "value": "2016-10-01"
    },
    "name": {
      "value": "logicdemo"
    },
    "location": {
      "value": "North Europe"
    },
    "workflowSchema": {
      "value": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#"
    }
  }
}

Upload or create these files as template.json and parameters.json in your logicappworkflow repo. In this example we created a folder inside the repository named logic. So far we have created the DevOps connections to Azure and GitHub and we are ready to deploy! I know it would be much easier to use the Portal, but we may consider these approaches as introductory to DevOps. Trust me, you will like it... a lot!

So, this time we will create a Release Pipeline with two tasks: one task to create the resource group and the storage account, and another one to create a blank Logic App workflow. From the DevOps portal go to Pipelines - Releases - New Release Pipeline, select Empty Job, leave the default "Stage 1" and close the pop-up. We won't need an Artifact, since we will use Az CLI with remote template deployment. When we click on the "1 job, 0 tasks" link, we can start adding tasks to this Stage. The first thing we see is the Agent Job; it is the compute where we will run our scripts, and here we can leave the preset selection or change the Pool to our own pool where we have added our self-hosted agent. Now we are ready to write our release pipeline, which will create an Azure resource group with two deployments, one for the storage account and another one for the Logic App. Again, several options are available, from cloning the template repos to uploading them to Azure and so on. We are going to run a release pipeline with two tasks, the Az CLI scripts for remote template deployments.
Here are the two scripts; insert them as Batch type and Inline Script:

call az group create --name rg-demo-01 --location "North Europe"

call az deployment group create --name "DeployStorage" --resource-group rg-demo-01 --template-uri "https://raw.githubusercontent.com/passadis/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json" --parameters storageAccountType=Standard_LRS

call az deployment group create --name "DeployLogic" --resource-group rg-demo-01 --template-uri "https://raw.githubusercontent.com/passadis/webapp2021az/master/logic/template.json" --parameters "https://raw.githubusercontent.com/passadis/webapp2021az/master/logic/parameters.json"

Go to the repo, select the first file from which we want the link and click Raw, so the correct link is opened. Copy the link and insert it into the relevant script inputs (--template-uri and --parameters). We have our pipeline ready, so save and create a new release. Select that Stage 1 is triggered manually and deploy! We will see in the Azure management portal the resource group and the deployments we created, as each task from the pipeline completes.

Design the Logic App Workflow: Create CSV to Azure Blob

Now we are going to use the Logic App Designer to create our workflow. The trigger is what will activate our workflow, so it is the first step to create. Next is the ingestion of the results from the Defender Advanced Hunting query of our choice. We have already seen the schema and the capabilities regarding information, and the Kusto language, in a previous post. We make the connection to our Azure AD with an account which must have Security Administrator permissions on the tenant (additional settings apply if RBAC is turned on for Defender). So far, so good! Create a sample query and test it in the Defender portal. For now we have:

DeviceEvents
| where ActionType contains "Antivirus"
| where Timestamp > ago(15d)

The next part does all the trick for us! We have a number of device events and we want these to create a CSV file and upload it to an Azure blob. There is an action called Initialize Variable which allows us to create different types of variables, among them the array variable. Pretty cool, right? Nice and neat, the results output is declared as the variable "myArray" (or whatever name we choose), so we can work with it in the next steps. We have our array ready; the only thing now is to create our CSV. The Data Operation "Create CSV table" is here for us! Select the array we declared earlier as the input, and customize the columns. It is a little tricky to make it work, and the trick is to write the data as expressions. For the Name header we add into the value the custom expression item()?['DeviceName'], for the Type item()?['ActionType'], and we need one more detail, which file is affected, so item()?['FileName']. OK, let's add another one that shows which process initiated the action. Now we have declared our outputs and these exist in our CSV table. So let's put the table into our blob storage. The "Update Blob (V2)" step authenticates with Managed Identity, so we must create for our Logic App a system-assigned managed identity with the Storage Blob Data Owner role assignment. The new connection asks for the Storage Account name or endpoint (detailed explanations HERE; it is a must to understand which one needs what).
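A side note on that managed identity: if you prefer to keep everything in the ARM template instead of using the portal's Identity blade, the workflow resource can carry a system-assigned identity and a role assignment can be declared on the storage account. The sketch below is illustrative only and not part of the workshop: the storage account name stdemologic is a placeholder, the GUID is assumed to be the built-in Storage Blob Data Owner role definition ID (verify it in your tenant), and the workflow must be deployed with an API version that supports managed identity (for example 2019-05-01).

Added to the Microsoft.Logic/workflows resource in template.json:

  "identity": {
    "type": "SystemAssigned"
  }

Declared as an additional resource in the same template:

  {
    "type": "Microsoft.Authorization/roleAssignments",
    "apiVersion": "2022-04-01",
    "name": "[guid(resourceGroup().id, parameters('name'), 'blob-data-owner')]",
    "scope": "[resourceId('Microsoft.Storage/storageAccounts', 'stdemologic')]",
    "dependsOn": [
      "[resourceId('Microsoft.Logic/workflows', parameters('name'))]"
    ],
    "properties": {
      "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b7e6dc6d-f1e8-4753-8033-0f276bb0955b')]",
      "principalId": "[reference(resourceId('Microsoft.Logic/workflows', parameters('name')), '2019-05-01', 'full').identity.principalId]",
      "principalType": "ServicePrincipal"
    }
  }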
We add the connection, set the Content to "Output" from the previous step (remember to select Output from the Dynamic Content search bar), and add the content type as application/octet-stream since we are dealing with CSV files. Alright, I know what you are thinking about... What if the blob is not there? Our action updates the blob; it does not create anything! Well, a Logic App should apply logic, right? Yes! We will create a next step which creates the blob, only if the previous one fails! That's right! On the dotted selection of each step we can see the "Configure run after" option. This simple setting allows us to select when a step will run, based on the succeeded, failed, timed-out or skipped outcome of the previous step. So, we will add a new step with the "Create blob (V2)" action, and select to run it only after the previous step has failed. Notice that here, too, the output is collected via the Dynamic Content search bar.

And now we have our files uploaded to Azure Blob Storage, files that contain critical info as they are constructed from Defender for Endpoint Advanced Hunting queries. Since Logic Apps now supports Data Lake Storage Gen2, we will explore in another workshop the beauty of data analytics with the help of Azure Data Factory and Azure Synapse Analytics. We may also have a peek at the final stage of data ingestion and analysis with Power BI reports and dashboards! Remember, this example is a basic "blueprint". Its purpose is to present and explore the various deployment methods with Azure DevOps and the integration of different Microsoft services through applications and connectors, and to give a taste of the vast amount of scenarios and deployments that can make our digital assets safer, making the best use of the enormous possibilities that come with Cloud, Software and Platform as a Service offerings.

References:
Data Operations in Logic Apps
How to use ARM deployment templates with Azure CLI
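As an addendum, here is a trimmed, illustrative code-view sketch of how the designer steps above map to the underlying workflow definition. Treat it as a hedged sketch rather than the workshop's exact output: the step name Advanced_Hunting and its Results property, the extra InitiatingProcessFileName column, and the abbreviated blob action inputs (host and path omitted, as the designer generates them) are all assumptions.

  "Initialize_variable": {
    "type": "InitializeVariable",
    "runAfter": { "Advanced_Hunting": [ "Succeeded" ] },
    "inputs": {
      "variables": [
        { "name": "myArray", "type": "array", "value": "@body('Advanced_Hunting')?['Results']" }
      ]
    }
  },
  "Create_CSV_table": {
    "type": "Table",
    "runAfter": { "Initialize_variable": [ "Succeeded" ] },
    "inputs": {
      "format": "CSV",
      "from": "@variables('myArray')",
      "columns": [
        { "header": "Name", "value": "@item()?['DeviceName']" },
        { "header": "Type", "value": "@item()?['ActionType']" },
        { "header": "File", "value": "@item()?['FileName']" },
        { "header": "Initiated by", "value": "@item()?['InitiatingProcessFileName']" }
      ]
    }
  },
  "Update_blob_(V2)": {
    "type": "ApiConnection",
    "runAfter": { "Create_CSV_table": [ "Succeeded" ] },
    "inputs": {
      "body": "@body('Create_CSV_table')",
      "headers": { "Content-Type": "application/octet-stream" }
    }
  },
  "Create_blob_(V2)": {
    "type": "ApiConnection",
    "runAfter": { "Update_blob_(V2)": [ "Failed" ] },
    "inputs": {
      "body": "@body('Create_CSV_table')",
      "headers": { "Content-Type": "application/octet-stream" }
    }
  }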
Orchestration vs Choreography: How to Pick the Right Integration Pattern in Azure

Introduction

Microservices architecture has become a popular approach for building scalable and flexible applications. Instead of a monolithic architecture, where all components of an application are tightly coupled, microservices divide the application into smaller, independent services that can be developed, deployed and maintained separately. This approach allows for more agility, easier maintenance, and greater scalability. However, with microservices come challenges in integration. Each service must be independent, yet communicate and exchange data with other services. This is where integration patterns come in. Two popular integration patterns for microservices are orchestration and choreography. This article will explore the differences between these two patterns and provide guidance on how to choose the right pattern for your microservices in Azure. We will also provide step-by-step instructions on implementing each pattern using Azure services, along with best practices for effective integration.

Orchestration vs Choreography

Imagine you're building a ride-sharing app that needs to connect riders with drivers in real time. The app is built as a set of microservices, with separate services for user authentication, location tracking, ride matching, and payment processing. As a developer, you must choose between orchestration and choreography to integrate these microservices in Azure.

Orchestration

Orchestration involves a central component, often called an orchestrator, that manages the flow of communication between services. The orchestrator is responsible for initiating and coordinating service interactions and controlling the order and flow of messages between services. In an orchestration pattern, each service communicates directly with the orchestrator, which acts as a mediator between services. Let's take the example of a user requesting a ride:

1. The ride service receives a request from the user, which includes the user's location and destination.
2. The ride service sends the ride request data to the orchestrator.
3. The orchestrator queries the driver service for available drivers.
4. The driver service responds with a list of available drivers.
5. The orchestrator selects an available driver and sends the ride request data to the driver service.
6. The driver service confirms the ride request with the selected driver and sends the driver's details to the orchestrator.
7. The orchestrator sends the ride details to the payment service to authorise payment.
8. The payment service authorises payment and sends a confirmation to the orchestrator.
9. The orchestrator sends the start-ride command to the driver service.
10. The driver service starts the ride and sends periodic updates to the orchestrator, which updates the user through the app's API.

In this scenario, the central controller service manages the flow of information between the microservices, ensuring that the ride request is handled in the correct order and all the necessary steps are completed.

Choreography

Conversely, choreography is a distributed approach where each service communicates with other services directly, without a central mediator. In a choreography pattern, services publish and subscribe to events or messages, and each service reacts to these events or messages independently. This approach allows for greater autonomy and scalability of services, as each service can respond to events without relying on a central orchestrator.
Let's take the same example of a user requesting a ride:

1. The ride service receives a request from the user, which includes the user's location and destination.
2. The ride service sends an event to the location service to find nearby drivers.
3. The location service listens for the event and responds with a list of available drivers.
4. The ride service listens for the response and selects the best driver.
5. The ride service sends an event to the payment service to authorise the payment for the ride.
6. The payment service listens for the event and confirms the payment.
7. The driver service listens for the confirmation and assigns the ride to the selected driver.
8. The location service listens for a request from the driver service to track the driver's location and sends updates to the rider.

In this scenario, each microservice communicates with the other microservices through events, with each microservice determining its order of execution based on the events it receives. The microservices work together in a decentralised manner, without a central controller service directing the flow of information.

Considerations for Choosing the Right Integration Pattern

Both patterns have their pros and cons. Orchestration provides centralised control over communication flow and a clear, linear order of execution, making it easier to manage complex workflows. However, it can also introduce a single point of failure and become a bottleneck as the system grows. Conversely, choreography provides greater autonomy and scalability but can be harder to manage, leading to more complex service coordination. Choosing the right pattern for your microservices depends on your specific requirements and goals; there are several factors to consider:

Complexity and Scale: The complexity and scale of your microservices system can influence your choice of integration pattern. Orchestration may better suit complex workflows with many services, while choreography may be better for more loosely coupled, decentralised systems.

Team Expertise and Resources: The skills and resources of your development team can also influence your choice. If your team has experience with centralised control and workflow management, orchestration may be a good choice. If your team is more experienced with event-driven architecture and decentralised communication, choreography may be a better fit.

Integration Requirements and Goals: Your integration requirements and goals can also influence your choice. If you must enforce strict order and coordination between services, orchestration may be the best choice; if flexibility and scalability matter more, choreography may be a better fit.

Future Scalability and Flexibility Needs: Finally, it's essential to consider future scalability and flexibility needs. If you anticipate significant growth in your system or need to add or remove services frequently, choreography may be a better choice. If you expect a more stable system with a fixed number of services, orchestration may be a better fit.

By considering these factors, you can decide on the best integration pattern for your microservices system in Azure. The following sections provide step-by-step guidance on implementing each pattern using Azure services.

Implementing Orchestration in Azure

Azure offers several services that can be used to implement orchestration, including Azure Logic Apps, Azure Functions, and Azure Durable Functions.
For the ride-sharing app scenario, we'll use Azure Logic Apps to orchestrate the communication between the microservices. To implement orchestration using Azure Logic Apps, you would create a workflow that defines the steps for handling a ride request. The workflow would be triggered by an HTTP request to the ride service, which would then execute the workflow steps in the correct order. Here's what the ride request workflow might look like:

1. When the ride service receives a request from the user, it triggers the Logic App workflow.
2. The workflow sends a message to the location service to find nearby drivers, using the HTTP action.
3. The workflow waits for the location service to respond with a list of available drivers, using the HTTP action with polling to wait for the response.
4. Once the response is received, the workflow selects the best driver based on criteria such as proximity and driver rating, using a conditional action.
5. The workflow sends a message to the payment service to authorise the payment for the ride, using the HTTP action.
6. The workflow waits for the payment service to confirm the payment, using the HTTP action with polling to wait for the response.
7. Once the payment is confirmed, the workflow sends a message to the driver service to assign the ride to the selected driver, using the HTTP action.
8. The workflow sends a message to the location service to track the driver's location in real time and update the rider, using the HTTP action.

In this scenario, Azure Logic Apps acts as the central controller service, managing the flow of information between the microservices and ensuring that the ride request is handled in the correct order.

Implementing Choreography in Azure

Choreography in Azure means each microservice is responsible for its own events and state changes. In the ride-sharing app scenario, each microservice would communicate with the other microservices to perform its respective tasks. Azure provides several services for implementing choreography, including Azure Event Grid and Azure Service Bus. For the ride-sharing app scenario, we'll use Azure Event Grid to facilitate communication between the microservices.

In a choreographed system, each microservice subscribes to events from the other microservices it depends on. In the ride-sharing app example, the location service would publish an event when a driver becomes available, and the ride service would subscribe to these events to find available drivers for a ride request. Similarly, the payment service would publish an event when a payment is authorised, and the driver service would subscribe to these events to start the ride. Here's how the ride request process might look in a choreographed system using Azure Event Grid:

1. When the user requests a ride, the ride service sends a message over HTTP to the location service to find available drivers.
2. The location service publishes an event on Azure Event Grid when a driver becomes available.
3. The ride service subscribes to the event and receives a message with information about the available driver.
4. The ride service sends a message over HTTP to the payment service to authorise the payment for the ride.
5. When the payment is authorised, the payment service publishes an event on Azure Event Grid.
6. The driver service subscribes to the event and receives the message to start the ride.
7. The driver service sends location updates to the location service, which publishes events on Azure Event Grid for the rider to see the driver's location.
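To make the event flow above more tangible, here is what a custom "driver available" event published by the location service might look like in the Event Grid event schema. The topic name, event type, subject and data payload are illustrative assumptions for this ride-sharing example, not a prescribed contract:

{
  "id": "5bc888aa-1c1b-4ffa-9c3e-1d2e4a6b7c8d",
  "topic": "/subscriptions/<subscription-id>/resourceGroups/rg-rideshare/providers/Microsoft.EventGrid/topics/rideshare-events",
  "subject": "drivers/driver-42",
  "eventType": "RideShare.DriverAvailable",
  "eventTime": "2023-04-01T10:15:00Z",
  "dataVersion": "1.0",
  "data": {
    "driverId": "driver-42",
    "location": { "latitude": 51.5072, "longitude": -0.1276 },
    "vehicleType": "standard"
  }
}

The ride service would subscribe to events of this type (for example, with an event subscription filtered on the eventType), while the payment and driver services would publish and subscribe to their own event types in the same way.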
In this scenario, each microservice communicates with the other microservices through events published to Azure Event Grid, and there is no central controller to manage the flow of information.

Conclusion

In conclusion, choosing the right integration pattern for your microservices system in Azure depends on your specific requirements and goals. Orchestration provides centralised control and a clear order of execution, while choreography offers greater autonomy and scalability. By considering factors such as complexity, team expertise, integration requirements, and future scalability needs, you can decide on the best pattern for your system. Azure provides several services for implementing orchestration and choreography patterns, such as Azure Logic Apps and Azure Durable Functions for orchestration, and Azure Event Grid and Azure Service Bus for choreography. By following the step-by-step guidance provided in this article, you can implement the integration pattern that best suits your microservices system in Azure.

Secure Azure API behind API management gateway from external systems
Hello, I have a few APIs configured behind Azure API Management. These APIs will be called by external systems, either legacy or from another tenant. I am using a subscription key to validate the request, but I am looking for additional ways of securing the APIs. Below is my analysis so far:

OAuth2: uses client_credentials as the grant type, which means I will have to share the client ID and client secret with the external systems. I think this will be a problem over time, since managing a bunch of app registrations will be a challenge for admins.

TLS/Client certificate: works with matching issuer, thumbprint, subject, certificate authority.

Basic authentication: provide username and password in an inbound policy.

It would be great if someone shared their experience with this scenario. What is the best way to achieve this? Regards, Konild

Proper resubmit information in the portal and in the logic app instance
Hi, not sure this is the right place for feature requests, but I'll try 🙂

When doing a resubmit for a specific logic app instance, there is a lot of information missing in my opinion, both from a portal perspective and from within the logic app. Currently we cannot detect resubmits, neither in the portal nor during an execution. Coming from the integration area, this is something most vendors support in their BPM engines. This is crucial both from a development perspective and from a maintenance/operations perspective, and it should be a basic functionality that really needs bigger attention.

1) The portal
The portal must show that a particular instance has been resubmitted, when, and how many times. All resubmits should have the same "main instance id", while each resubmit gets a new child instance id. Currently, we can see cancelled processes, but we have no idea which is the matching resubmitted instance (or how many times it was resubmitted).

2) Within a logic app execution
It must be possible to detect a resubmit during the execution as well, for situations when things cannot be rolled back in case of an error, or if we just want to highlight (e.g. post a notification) that we have a resubmit. Currently, we can only get the normal execution id via @{workflow()['run']['name']}

Sample portal view: this is what it could look like.

Logic Apps Service Bus connector - correlation with Service Bus
Hi, now that Application Insights is supported for Logic App (Standard), it would be good if we could have full correlation between Logic Apps and Service Bus. In my use case, I have a very simple and very common pattern: one Logic App that publishes a message to a Service Bus queue, and another Logic App triggered by this message using the Service Bus connector. In Application Insights, I only have the trace between the publisher and the Service Bus queue; unfortunately, unlike with Functions for example, I don't have the correlation between the published message and the receiving Logic App.

Running Logic App (Preview) locally using VSCode not working
Hello, I've tried to run the preview version of Logic Apps in my local environment and I've followed the instructions found here: https://docs.microsoft.com/en-gb/azure/logic-apps/create-stateful-stateless-workflows-visual-studio-code#tools. For some reason VS Code says the Functions host is no longer running. When I try to run any other function app, everything works. Any ideas where I should look? I have installed all the versions mentioned in the instructions and even tried the template found on GitHub, but no success.

Logic Apps and VNET access without ISE?
Hello, so the Azure Integration Service Environment (ISE) is an awesome thing, but not cheap. With the ultimate goal of using Logic Apps to fetch (and push) data from on-prem data sources via ExpressRoute, is there some way (a workaround, perhaps with Function Apps or an APIM?) that doesn't require ISE to do this? I'd rather not fall back to using Data Gateways or a Relay... Regards, J. Kahl