Combining Azure Lighthouse with Sentinel’s DevOps capabilities

Published 03-05-2020 02:34 AM


A few weeks ago, we published this article explaining how to automate the deployment and operations of Azure Sentinel using Infrastructure as Code and DevOps principles.


We received great feedback about the article, but also some questions about how to do this in a multi-tenant environment using Azure Lighthouse. We will try to tackle the different considerations in this post, showing how to implement this with our DevOps tool of preference, Azure DevOps.


If you don’t use Lighthouse and don’t plan to (maybe you only have one Azure AD tenant), you can still apply the concepts explained in this post when working in a multi-workspace Sentinel deployment (just skip the Azure Lighthouse section).


Onboarding your customers into Azure Lighthouse

The first thing you need to do as an MSP (or in a multi-tenant organization) working on Azure is to onboard your customers into your Azure Lighthouse environment. We will not cover this in this post, but you can read here how to do that. You can also refer to this other article published in this blog about the Sentinel-Lighthouse integration.


As a result of the Lighthouse onboarding operation, an identity (or set of identities) from the MSP tenant will be able to access the customer environments with the appropriate roles defined in your onboarding ARM template or Managed Services Marketplace Offer.


Depending on the type of service you’re offering to the customer, these roles will vary, but for a Sentinel-only service, we recommend using the Built-in Sentinel roles (Sentinel Reader, Sentinel Responder, and Sentinel Contributor). Also, take into account that you cannot delegate custom roles with Lighthouse.


It is also important to mention that, with Lighthouse, you can grant access to the customer environment to user principals and/or service principals that exist in your tenant. Here is an example of the roles that you could use in your Lighthouse Marketplace offer (or ARM template) and how they are applied to customers:
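As an illustration, a delegation is expressed in the `authorizations` section of the onboarding ARM template (or Marketplace plan). The `principalId` values below are placeholders for object IDs from your own tenant; the `roleDefinitionId` GUIDs shown are the built-in Sentinel Responder and Sentinel Contributor roles, which you should verify against the Azure built-in roles documentation before using them:

```json
"authorizations": [
    {
        "principalId": "<objectId of your SOC analysts group>",
        "principalIdDisplayName": "SOC Analysts (Sentinel Responder)",
        "roleDefinitionId": "3e150937-b8fe-4cfb-8069-0eaf05ecd056"
    },
    {
        "principalId": "<objectId of your automation SPN>",
        "principalIdDisplayName": "Automation SPN (Sentinel Contributor)",
        "roleDefinitionId": "ab8e14d6-4a74-4a29-9ba8-549422addade"
    }
]
```

Each entry maps one principal from the MSP tenant to one role in the customer's subscription; the display name is what the customer sees when reviewing the delegation.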



As you can see, we have defined one marketplace offer with two plans inside it. Different plans can grant the MSP different permissions in the customer environment. You can also make plans public or private: if a plan is public, any Azure user will see your offer in the Marketplace and can purchase it; if it is private, you specify which customers have access to it.


In our example, we have created two plans, one that offers a full Sentinel managed service and another with limited permissions. Each plan contains a set of delegations, some for User Groups and one for Service Principals (SPN) used for Automation purposes. That way, the customer is onboarded with all the access needed from your side to perform the requested service. The service principal is the identity that we will use to connect our Azure DevOps environment to the customer Azure subscription.


After multiple customers have purchased your plan, the service principals you defined will have access to all those customers' subscriptions with the roles you specified (in our example, Sentinel Responder or Sentinel Contributor).


Multiple customers in Azure DevOps

Now that we have onboarded one or more customers into Azure and our identities have access to multiple tenants, we can automate the deployment and management of their Azure Sentinel environments. But how do we set up our Azure DevOps service connections, projects, repositories, and pipelines to operate multiple customers?


Azure DevOps Service connections

A service connection is the first thing to set up for a multi-tenant environment. As you might remember from our previous article on the DevOps topic, we created a single service connection pointing to a single subscription. As we are now managing multiple customers, we will need to create one for each of them. On the positive side, we can use the same service principal for all of them because it was onboarded into Lighthouse, and it now has access to all our customer subscriptions. :smiling_face_with_smiling_eyes:
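As a sketch (the connection and script names below are hypothetical), a pipeline step then selects a customer simply by referencing that customer's service connection:

```yaml
steps:
  # The service connection name picks the customer subscription;
  # the same Lighthouse-onboarded SPN sits behind all of them.
  - task: AzurePowerShell@5
    displayName: 'Deploy Analytics Rules to Customer A'
    inputs:
      azureSubscription: 'CustomerA-Sentinel'
      ScriptType: 'FilePath'
      ScriptPath: 'Artifacts/Scripts/CreateAnalyticsRules.ps1'
      ScriptArguments: '-Workspace $(WorkspaceName)'
      azurePowerShellVersion: 'LatestVersion'
```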


Azure DevOps variables

As you now have multiple customer environments, you also need to create multiple variable groups, one for each customer. You will then use these accordingly inside your pipelines, stages, or jobs.
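For example (group and variable names are hypothetical), a pipeline or stage pulls in the right customer's settings with a single line:

```yaml
variables:
  # One variable group per customer, holding e.g. SubscriptionId,
  # ResourceGroup and WorkspaceName for that customer's Sentinel workspace
  - group: CustomerA-Variables
```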


Code repositories

Managing code repositories involves a difficult design decision: are you going to use a single repository for all your customers? A separate repository for each customer? Do you need further isolation between customers?


These are some of the typical design choices:


  • Single repository with no further customer separation. If all your customers will have the same configuration, you can manage with a single repo that holds all your configuration files for Connectors, Analytics Rules, Workbooks, etc. Obviously, this means that any change to your config files will be reflected in all your customer environments. This is the repo structure that we showed in our previous article and can be found here. In general, your customers' needs will diverge over time, so this approach is recommended only if you're using your DevOps tool just to automate the initial setup of customer environments. The moment your customer configurations deviate from each other, this approach becomes challenging.


  • Single repository with a separate folder for each customer. In this case, you can have all your script artifacts in a central location but then create separate folders for each customer's config files. This way, you can modify customer configurations independently while keeping your master scripts in a single place. As an example:







|- Artifacts/
|  |- Scripts/                      # Folder for helper scripts
|- CustomerA/                       # Folder for Customer A
|  |- AnalyticsRules/               # Subfolder for Analytics Rules
|  |  |- analytics-rules.json       # Analytics Rules definition file (JSON)
|  |- Connectors/                   # Subfolder for Connectors
|  |  |- connectors.json            # Connectors definition file (JSON)
|  |- HuntingRules/                 # Subfolder for Hunting Rules
|  |  |- hunting-rules.json         # Hunting Rules definition file (JSON)
|- CustomerB/                       # Folder for Customer B
|  |- AnalyticsRules/               # Subfolder for Analytics Rules
|  |  |- analytics-rules.json       # Analytics Rules definition file (JSON)
|  |- Connectors/                   # Subfolder for Connectors
|  |  |- connectors.json            # Connectors definition file (JSON)
|  |- HuntingRules/                 # Subfolder for Hunting Rules
|  |  |- hunting-rules.json         # Hunting Rules definition file (JSON)







We only show 3 subfolders per customer for brevity, but there would be more for Playbooks, Workbooks, etc.


  • Multiple Azure DevOps projects, one per customer. With this approach, you get even more isolation between customers by placing them into separate Azure DevOps projects. Each project has a separate set of permissions, service connections, repositories, etc. This can be useful when there's a clear need for separation of responsibilities between the teams operating the environment. Each project also gets a separate Azure DevOps Boards instance, so you can collaborate with your customer (or other teams within your company) through work items, Kanban boards, etc. Within each project, you would place the repository for that specific customer. If you want to know more about projects within Azure DevOps, please go here.


Azure DevOps Pipelines

Once you have your repository structure identified, it's time to decide how you will organize your deployment pipelines.


There are several options when it comes to building your pipelines in a multi-tenant environment:


  • One pipeline per customer. This is the recommended way if you are keeping your customer configuration files separate. Imagine that you have a separate folder for each customer within your repo, with subfolders for their Connectors, Analytics Rules, Playbooks, etc. configurations. In this case, you can create a separate pipeline for each customer, taking just the configuration files for that specific customer. Each pipeline would use a different variable group pointing to that customer's Sentinel environment. This approach gives you full control over when each customer environment is deployed/updated and under what conditions the different stages are executed.


This is the approach that we used in our previous post here, although we were only managing a single Sentinel environment in that case.
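A minimal sketch of a per-customer pipeline (the file, connection, variable group, and script names are all hypothetical) could look like this; the path filter means the pipeline only fires when that customer's files or the shared scripts change:

```yaml
# azure-pipelines-customerA.yml
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - CustomerA
      - Artifacts

variables:
  - group: CustomerA-Variables       # this customer's workspace settings

stages:
  - stage: DeployCustomerA
    jobs:
      - job: Deploy
        pool:
          vmImage: 'windows-latest'
        steps:
          - task: AzurePowerShell@5
            inputs:
              azureSubscription: 'CustomerA-Sentinel'
              ScriptType: 'FilePath'
              ScriptPath: 'Artifacts/Scripts/CreateAnalyticsRules.ps1'
              ScriptArguments: '-Workspace $(WorkspaceName) -RulesFile CustomerA/AnalyticsRules/analytics-rules.json'
              azurePowerShellVersion: 'LatestVersion'
```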


  • A single pipeline for all your customers with multiple deployment stages inside it, each pointing to one customer subscription. Use this approach if you keep a common set of configuration files for all your clients. In this case, you create multiple stages within each of your deployment pipelines, one for each customer environment. Each release stage uses a different variable group pointing to a different customer environment. This approach provides more flexibility than the next one because you can specify different approvals, dependencies, and conditions on when each stage will (or will not) run. For example, you could define that Customer A's production environment is only deployed if a previous stage (maybe your internal test environment) has completed successfully. This option still gives you good control over when each customer is deployed while keeping the configuration simpler.


You can see an example of this approach in our repository here.
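As a hedged sketch (stage names, variable groups, and step contents are illustrative), the stages are chained with `dependsOn` and `condition`, each pulling in its own variable group:

```yaml
stages:
  - stage: DeployTest
    variables:
      - group: InternalTest-Variables
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploying to internal test workspace $(WorkspaceName)"

  - stage: DeployCustomerA
    dependsOn: DeployTest
    condition: succeeded()           # only runs if the test stage completed successfully
    variables:
      - group: CustomerA-Variables
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploying to Customer A workspace $(WorkspaceName)"
```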


  • A single pipeline with a single deployment stage with multiple deployment jobs, each pointing to one customer subscription. Similar to the previous approach, but keeping it even more condensed. In this case, you just have a single deployment stage, but inside it, you create multiple jobs, each using a different set of variables pointing to a different customer subscription. With this option, you have less granularity and control options on when each customer is deployed.


You can see an example of this approach in our repo here.
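A sketch of this condensed variant (names hypothetical): one stage, one job per customer, each job scoped to its own variable group. Note that jobs within a stage run in parallel by default, which is part of why you lose fine-grained ordering control here:

```yaml
stages:
  - stage: DeployAllCustomers
    jobs:
      - job: CustomerA
        variables:
          - group: CustomerA-Variables
        steps:
          - script: echo "Deploying to $(WorkspaceName)"
      - job: CustomerB
        variables:
          - group: CustomerB-Variables
        steps:
          - script: echo "Deploying to $(WorkspaceName)"
```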


Please refer to this page to better understand the different concepts (stages, steps, jobs, tasks, etc.) on structuring your pipelines.


In Summary

In this post, we have explained how to combine Azure Sentinel's DevOps capabilities with Azure Lighthouse to manage a multi-tenant environment. As you have seen, there are several implementation options; which one to use depends greatly on your organization's size and how your teams collaborate with each other…in the end, that is what the DevOps culture is all about!

Senior Member

Is there a way I can add approvals before deploying to a customer? Example - before deploying to Customer A, I want Customer A to be sent an approval email, and only after someone approves does the release continue.


Hi @kay106 , Yes, that can be done in Azure DevOps with environment approvals. See more details here:
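For illustration (the environment name is hypothetical): model each customer as an Azure DevOps environment and use a `deployment` job; approvals are then configured on the environment itself (Pipelines > Environments > Approvals and checks), and the job waits until an approver signs off:

```yaml
jobs:
  - deployment: DeployCustomerA
    environment: 'CustomerA-Production'   # approvals configured on this environment
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo "Runs only after Customer A's approver signs off"
```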



Occasional Contributor

Thanks for the great info for MSSPs; sharing with my Linkedin Network

Senior Member

Tremendous write-up on the architecture and different design options, Javier! Our team's using this to develop our CI/CD pipelines, so this is a great help--thank you.


Thanks for the feedback @fkhancyber !! :smile:

Occasional Contributor


Thank you for the article. Just have a question around this architecture.


We have a Production Azure Tenant. And we are also an MSSP signing up our first client.

Do we run Azure Lighthouse from our Production tenant, and use that to connect to our client tenant OR

Do we spin up another separate Azure Tenant, and run Lighthouse from there to manage our customers?

Is there a best practice around this setup from Microsoft?


Thank you


There's no general best practice from Microsoft on this @ShimKwan . In the end, it comes down to how you manage identities within your organization and whether or not you want to use the same tenant for your own IT and to serve your customers.


For example, if your production tenant is managed by your central IT team and you have limited permissions, that can be a hurdle sometimes, but this can be alleviated with good governance. If you create a separate tenant to serve your customers, you have full control over that directory, but you will have to maintain multiple identities and you might need to buy additional licenses for things like Identity Protection.


Again, there's no right/wrong answer; it's something that you need to discuss internally to find a balance between control, flexibility, and cost.



Senior Member

Hello Javier,

I've been trying to set up a DevOps CI/CD deployment that works across multiple subscriptions and I was wondering if you could help me with an issue that is preventing me from deploying across subscriptions. 


I was able to finish deploying the CI/CD DevOps pipeline from your Sentinel as Code article for the MSSP subscription, and everything works great. When I was setting up the Lighthouse connection that would give the MSSP permissions in the customer subscription, I gave the DevOps service connection in the MSSP subscription access to the customer subscription through Lighthouse.


I tried running the pipelines with a customer specific variable group and a customer specific YAML pipeline, but no matter how I arrange the configurations I am not able to get the deployment to come through on the customer side. I eventually noticed that there was a function running in the pipeline called "Az-Module" which I did not write that sets the Az-Context for me. I tried defining the Az-Context in the scripts in our GitHub, but the "Az-Module" always overrides the Az-Context. I left an image of the section I am referring to below highlighted in red. 





So what I am asking is...

  1. Am I supposed to give the DevOps service connection permissions through lighthouse, or is there another method I should be using?
  2. Is there a way to edit the "Az-Module" or prevent it from running so that I can set the Az-Context to point to the customer subscription?




Hi @nruzicka , yes, the SP that is being used by Azure DevOps is the one that should be added to the Lighthouse delegation. That way, the service connection that you have defined in Azure DevOps will have access to both the MSSP subscription and the customer subscription. No need to do anything with Az-Module or Az-Context...Azure DevOps takes care of everything in the background.



Frequent Visitor

@Javier Soriano  Something to think about: there isn't a parameter to set the time at which a scheduled alert rule starts its cycle. Many built-in rules have a frequency of 1 day. If a pipeline deploys tens of alert rules with a 1-day frequency, they will all report their findings at the same time; if the pipeline deploys at 5pm, a SOC would, going forward, potentially get a lot of incidents at that time.


We are having to create an AM and a PM pipeline to spread the load of the 1-day frequency alerts, which adds complication.


It would be good if there were a "Start time" parameter for alert rules.


Thanks Mark (ITC Secure)


@qsecurity thanks, that is good feedback

Version history
Last update: Dec 29 2020 12:26 AM