Codename Project Bose: Part 2 - how to implement the codebase from GitHub repo

As soon as I released the link to my GitHub repo, I was bombarded with requests to explain how to implement the codebase of Project Bose (https://github.com/pranabpaul-tech/projectbose). Unfortunately, I don't have enough time to walk every individual through it, so I am explaining it in this new post. Fork my repo to your own GitHub account and clone it from there, as we use GitHub Actions to deploy Azure assets going forward. I have made several changes in the codebase to keep it from getting too complex, so please fork and clone it afresh. First things first: once you clone the codebase, start with the main branch.

 

Step 1

Open the Infra project from the Infra folder in Visual Studio and replace two strings - "<your-subscription-id>" and "<your-resourcegroup-name>" - in all files (remember, all files). Also, change the name of the ACR in the template; the ACR name must be globally unique (I have already used both "projectboseacr" and "acrprojectbose", my bad :) ). Replace the value of the "repositoryUrl" attribute under the "Static web" section of the "azuredeploy.json" file with your repo URL, and then deploy the project. This will create your basic infra, somewhat like this:

 

Picture5.png
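If you prefer deploying the template from the command line instead of Visual Studio, a minimal sketch with the Azure CLI (assuming a resource-group-scoped template; the file name azuredeploy.json comes from the repo, the other names are placeholders) looks like this:

# Create the resource group (if it does not exist yet) and deploy the infra template
az group create --name <your-resourcegroup-name> --location <your-region>
az deployment group create --resource-group <your-resourcegroup-name> --template-file azuredeploy.json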

Step 2

As you know, this will create an additional resource group for the AKS agent pools. Open that resource group in the Azure Portal, create a storage account called "projectboseazurecost" under it, and then add two containers, "input" and "output", just like below:

 

Picture6.png
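The same storage account and containers can also be created from the Azure CLI; a minimal sketch (the AKS node resource group name is a placeholder you fill in) looks like this:

# Create the storage account and the two containers used for cost exports
az storage account create --name projectboseazurecost --resource-group <your-aks-node-resourcegroup> --sku Standard_LRS
az storage container create --name input --account-name projectboseazurecost
az storage container create --name output --account-name projectboseazurecost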

 

Step 3

Now, the principle behind Project Bose is that the last/bottom units (project or application) within an enterprise hierarchy can be associated with one or more Azure resource groups. We will take dumps of Azure Cost Analysis data for an Azure subscription and later align those with multiple projects while creating the enterprise hierarchy. To do that, select any resource group under the targeted subscription, then select "Cost analysis" > Scope > and select the subscription.

 

Picture7.png

 

On the next screen, select "Configure subscription":

Picture8.png

 

And "Exports" in the following screen:

 

Picture9.png

 

You can now create exports to get dumps of Azure Cost Analysis data. Create a new export as follows:

 

Picture10.png

 

This will create a weekly dump within the "input" container of the storage account you created earlier. You can even trigger a dump immediately using the "Run now" button.
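To confirm a dump actually landed in the container, you can list the blobs from the CLI; a quick check (assuming your logged-in account has data access to the storage account, otherwise pass the account key) looks like this:

# List the exported cost files in the input container
az storage blob list --account-name projectboseazurecost --container-name input --auth-mode login --output table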

 

Step 4

Now, we will start deploying the DB layer, API layer, and front end. The DB layer (MySQL and Cassandra DB) is implemented as StatefulSets and the .NET Core APIs (microservices) are implemented as Deployments in Azure Kubernetes Service. From the codebase, all of the above can be done manually if you wish (generate Docker containers, push them to ACR, and then create the StatefulSets and Deployments within AKS), but I'd like to automate the whole process through GitHub Actions.

 

Switch to the db branch:

git clone https://github.com/pranabpaul-tech/projectbose (use your forked repo here)

cd projectbose

git switch db

 

Now open up VSCode

code .

It should look like this:

 

Picture11.png

 

The marked files are the ones you need to change. In the workflow files (deploycassandra.yml, deploymysql.yml), change the acrname and the secrets (GitHub secrets) to suit your deployment. In the manifest files, blob-secret.yaml holds the storage account name and access key values for the account we created previously. Remember, both are Base64 encoded. You can create it using a "kubectl create secret generic .... -o yaml" statement, as sketched below. The other two manifest files (mysqlset.yaml and cassandraset.yaml) are definition files for the StatefulSets. Here you need to change the name of your ACR as well.
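A minimal sketch of generating that secret YAML without applying it to the cluster (the key names azurestorageaccountname and azurestorageaccountkey are assumptions based on the usual Azure storage secret; check blob-secret.yaml for the exact keys your manifests expect):

# Emit a manifest with Base64-encoded values you can paste into blob-secret.yaml
kubectl create secret generic blob-secret \
  --namespace projectbose \
  --from-literal=azurestorageaccountname=projectboseazurecost \
  --from-literal=azurestorageaccountkey=<your-storage-account-key> \
  --dry-run=client -o yaml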

 

Once done, push to your repo and the GitHub Actions workflow will create the StatefulSets as shown below:

 

Picture12.png

 

You can check the Cassandra pod and the blob storage mounted in it:

 

Picture13.png
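From the command line, a quick version of the same check (assuming the blob storage is mounted at /mnt/blob, the path used later in Step 8) might be:

# List the contents of the mounted blob storage inside the Cassandra pod
kubectl exec cassandraset-0 -n projectbose -- ls /mnt/blob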

 

You can check the associated services within AKS; these will be headless ClusterIP-type services, as applicable for StatefulSets. Also, you can look into the MySQL DB:

 

$ kubectl exec mysqlset-0 -it -n projectbose -- mysql -u root -p

Enter password: password

mysql> use mydb;

mysql> show tables;

 

You will find 3 tables (level, leveldetail and resourcedetail).

 

Step 5

Now switch to the api branch:

git switch api

And open up VSCode

code .

It should look like this:

 

Picture14.png

 

In the workflow files and in the manifest files, change the acrname and then push to create the APIs. If everything goes well, you can see the APIs like this:

 

Picture15.png

 

If you look at the services, those will be ClusterIP type:

 

Picture16.png
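For reference, the same checks from kubectl (assuming the projectbose namespace used throughout) would be:

# List the API deployments, their pods, and the ClusterIP services
kubectl get deployments,pods,svc -n projectbose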

 

I used AGIC to implement ingress. You can use any other ingress controller, such as NGINX, as you please. I am not going to explain the whole setup of AGIC, as it's already covered in the Azure documentation. I used a self-signed certificate (OpenSSL) to add an https URL to the Application Gateway, so I have both 80 and 443 public endpoints. I used 80 for Power BI and 443 to consume from the static web app. For your convenience, here is how my ingress template looks:

 

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: api-ingress
  namespace: projectbose
  annotations:
    appgw.ingress.kubernetes.io/rewrite-rule-set: add-allowed-origins
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /api/Levels*
            pathType: ImplementationSpecific
            backend:
              service:
                name: levelservice
                port:
                  number: 80
          - path: /api/Leveldetails*
            pathType: ImplementationSpecific
            backend:
              service:
                name: leveldetailsservice
                port:
                  number: 80
          - path: /api/Resourcedetails*
            pathType: ImplementationSpecific
            backend:
              service:
                name: resourcedetailsservice
                port:
                  number: 80
          - path: /api/PrepareData*
            pathType: ImplementationSpecific
            backend:
              service:
                name: preparedataservice
                port:
                  number: 80

 

Now, you can check all the services with a call like "http:(or https:)//<your ingress public ip>/api/Levels". All of them will return empty sets except PrepareData. This is because our data in the Cassandra DB is not ready yet.
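A quick smoke test from the command line (the -k flag is needed on the https endpoint because the certificate is self-signed) could be:

# Expect an empty JSON array until the data has been loaded
curl http://<your ingress public ip>/api/Levels
curl -k https://<your ingress public ip>/api/Levels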

 

Step 6

Now we will deploy our React JS application to Azure Static Web Apps. I have created a very basic, no-fancy app to do CRUD operations. If you want to skip this step, you are welcome to; instead, you can use Postman or VSCode to POST/PUT/DELETE against the APIs directly to add/edit/delete data.

 

First, go to your GitHub repo and add a secret "REACT_APP_API" under Settings > Actions > Secrets. The value will be "https://<your ingress public ip>/api/". Then switch to the apps branch:

git switch apps

And open up VSCode

code .

You will already have a GitHub workflow file with the URL of the static web app in it. You need to add a ".env" section to it, as shown below:

 

Picture17.png 
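The exact content is in the screenshot above; as a rough, hypothetical sketch only, the added step essentially writes the secret into a .env file before the build, something like:

# Hypothetical workflow step body: expose the API URL to the React build via .env
echo "REACT_APP_API=${{ secrets.REACT_APP_API }}" > .env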

Delete all other workflow files under .github/workflows that are not associated with your static web app, and push this to your GitHub repo. This will build and deploy the React JS application to your Static Web App. Open the URL in your browser and it should look like the below:

 

Picture18.png

You can Add/Edit/Delete values by going to the other tabs. Here is how it looks for me:

Picture19.png

 Picture20.png

 

Picture21.png

 

Once you are done with data entry, go to the Home page and hit the "Format Levels" button once. This will create a new table, "levelinfo", in the MySQL DB.

 

Next, hit "Format Final Data". This will create an empty table, "azurecost", in the Cassandra DB under the "azuredata" keyspace.
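You can verify the keyspace and the empty table from inside the Cassandra pod; a quick check (assuming cqlsh is available in the container, as it is in the standard Cassandra images) looks like this:

# Confirm the azuredata keyspace and the azurecost table exist
kubectl exec -it cassandraset-0 -n projectbose -- cqlsh -e "DESCRIBE KEYSPACE azuredata;"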

 

Step 7

Now we will implement the automation (Azure Function) part. Let's switch to the automation branch:

 

git switch automation

Then open the Visual Studio project by clicking the solution file under the automation folder, and hit the Publish option from the menu:

 

Picture22.png

 

In the publish screen, select "New" and "Azure", and then create a new Azure Functions (Linux) instance under your subscription and your resource group. At the end, select "CI/CD using GitHub Workflow" as the "Deployment Type".

 

Once the Azure Function is deployed, add two configurations. One holds the connection string of the Blob storage account where we are collecting the Cost Analysis data:

Picture23.png

 

And the other has the name "BlobConnectionString" and the value "http://<your api IP>/api/PrepareData/resourcelevel". Now restart your Function App, go back to the weekly Cost Analysis export we created earlier, and hit the "Run now" button for a test. This will create a new dump (.csv) file within the input container. The Azure Function will then combine the data from this csv file with the metadata you created earlier (level, leveldetail and resourcedetail in MySQL) and write the final data dump to the output container in a file named "output.csv".
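If you prefer the CLI, the same settings can be added and the app restarted with az; a sketch (the function app name and the first setting's name are placeholders - use the names shown in the screenshot and in this post):

# Add the two application settings, then restart the Function App
az functionapp config appsettings set --name <your-function-app> --resource-group <your-resourcegroup-name> --settings "<blob-connection-setting-name>=<your-blob-connection-string>" "BlobConnectionString=http://<your api IP>/api/PrepareData/resourcelevel"
az functionapp restart --name <your-function-app> --resource-group <your-resourcegroup-name>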

 

Step 8

Open up the Cassandra DB pod and import the data:

kubectl exec -it cassandraset-0 -n projectbose -- bash

root@cassandraset-0:/# /usr/bin/python /mnt/blob/input/insertdata.py

You can automate this part by implementing a crontab entry (as sketched below) or some other mechanism. I leave this up to you.
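As a rough illustration only (assuming cron is installed and running inside the Cassandra container, which you would need to set up yourself), a crontab entry that runs the import every Monday at 02:00 might look like:

# m h dom mon dow  command
0 2 * * 1 /usr/bin/python /mnt/blob/input/insertdata.py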

 

Now you can visualize the final data using the API call:

http://<your api IP>/api/Preparedata or https://<your api IP>/api/Preparedata.

 

If you would like to visualize this data in Power BI, use the http URL, as our https certificate is only self-signed and will not work with Power BI.
