First published on MSDN on Jul 26, 2018
Containerization is an approach to software development in which an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) are packaged together as a container image. The containerized application can be tested as a unit and deployed as a container image instance to the host operating system (OS). Docker is an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run on the cloud or on-premises. More information is available in the container introduction linked in the references below.
Microsoft Machine Learning Server is your flexible enterprise platform for analyzing data at scale, building intelligent apps, and discovering valuable insights across your business with full support for Python and R. Operationalization refers to the process of deploying R and Python models and code to Machine Learning Server in the form of web services and the subsequent consumption of these services within client applications to affect business results.
In this article, we will look at how to build a Docker image containing Machine Learning Server 9.3 using Dockerfiles, and how to perform the following operations using that image:
- Run R Shell
- Run Python Shell
- Run Jupyter Notebook
- Run One-box Configuration
- Run R Web Service
- Run Python Web Service
Prerequisites:
Any Linux VM with Docker Community Edition (CE) installed. For this article, I spun up an Ubuntu 16.04 VM and installed Docker CE.
Step 0:
First, let us create an image called "mlserver" with Machine Learning Server 9.3 installed, using the following Dockerfile:
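A minimal sketch of such an mlserver-dockerfile, assuming an Ubuntu 16.04 base image and the apt-based installation steps documented for Machine Learning Server 9.3 on Linux (the repository package and the microsoft-mlserver-all-9.3.0 meta-package come from those instructions), could look like this:

# mlserver-dockerfile -- illustrative sketch, not a hardened production image
FROM ubuntu:16.04

# Tools needed to register the Microsoft package repository
RUN apt-get update && apt-get install -y apt-transport-https wget

# Register the Microsoft repository for Ubuntu 16.04 and install ML Server 9.3
RUN wget -O /tmp/packages-microsoft-prod.deb \
        https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb \
    && dpkg -i /tmp/packages-microsoft-prod.deb \
    && apt-get update \
    && apt-get install -y microsoft-mlserver-all-9.3.0

# Activate the server (accepts the license and finishes setup)
RUN /opt/microsoft/mlserver/9.3.0/bin/R/activate.sh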
Use the docker build command to build the image "mlserver" from the above Dockerfile:
docker build -f mlserver-dockerfile -t mlserver .
Check that the image "mlserver" was built successfully using the following command:
docker images
Run R Shell:
docker run -it mlserver R
Run Python Shell:
docker run -it mlserver mlserver-python
Run Jupyter Notebook:
docker run -p 8888:8888 -it mlserver /opt/microsoft/mlserver/9.3.0/runtime/python/bin/jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --allow-root
Running the above command gives a link that you can open in your browser to use Jupyter Notebooks.
Run One-box Configuration:
You can configure Machine Learning Server after installation to act as a deployment server and to host analytic web services for operationalization.
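Here is a minimal sketch of an mlserver-onebox-dockerfile. It builds on the "mlserver" image above and assumes the Azure CLI with the azure-ml-admin-cli extension that ships with Machine Learning Server 9.3 is available in the image; the az ml admin bootstrap command and its flags are taken from the 9.3 operationalization documentation and should be verified against your installation:

# mlserver-onebox-dockerfile -- illustrative sketch built on the "mlserver" image above
FROM mlserver

# start-onebox.sh (kept next to the Dockerfile) configures the web node and compute node
# on container start and then keeps the container running, roughly:
#   #!/bin/bash
#   # One-box bootstrap; command and flags per the 9.3 admin CLI docs (assumption)
#   az ml admin bootstrap --admin-password "Microsoft@2018" --confirm-password "Microsoft@2018"
#   # Run diagnostics so "docker logs" shows "All Diagnostic Tests have passed."
#   az ml admin diagnostic run
#   tail -f /dev/null
COPY start-onebox.sh /usr/local/bin/start-onebox.sh
RUN chmod +x /usr/local/bin/start-onebox.sh

# Operationalization endpoint
EXPOSE 12800
CMD ["/usr/local/bin/start-onebox.sh"]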
Build the image "mlserver-onebox" using the above Dockerfile:
docker build -f mlserver-onebox-dockerfile -t mlserver-onebox .
Check that the image "mlserver-onebox" was built successfully using the following command:
docker images
Start a one-box container using the following command:
docker run --name mlserver-onebox-container -dit mlserver-onebox
Check the status of the container using:
docker logs mlserver-onebox-container
Once you have confirmed from the above command that the diagnostic tests have passed, you can start using the container as a one-box. (The docker logs output should contain the string: "All Diagnostic Tests have passed.")
Obtain the IP address of the container using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mlserver-onebox-container
'172.17.0.3'
Then use it like a one-box:
az login --mls --mls-endpoint "http://172.17.0.3:12800" --username "admin" --password "Microsoft@2018"
az ml admin diagnostic run
Run R Web Service:
We can also build an image with a web service pre-configured so that it is ready for consumption as soon as we spin up a container. Here is an example of building an image with the Manual Transmission R web service pre-configured inside the image.
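A sketch of the r-manualtransmission-dockerfile, building on the "mlserver-onebox" image above; the publish script name and its contents (an mrsdeploy remoteLogin/publishService script for the mtcars am ~ hp + wt model) are assumptions chosen to match the ManualTransmissionService consumed with curl later in this article:

# r-manualtransmission-dockerfile -- illustrative sketch built on the one-box image above
FROM mlserver-onebox

# publish-service.R logs in locally with mrsdeploy::remoteLogin("http://localhost:12800", ...),
# trains the mtcars logistic regression (am ~ hp + wt), and calls
# publishService("ManualTransmissionService", v = "1.0.0") with hp/wt inputs and an
# "answer" output, matching the curl requests shown below.
COPY publish-service.R /tmp/publish-service.R

# start-service.sh bootstraps the one-box configuration, runs
#   Rscript /tmp/publish-service.R
# to publish the web service, runs "az ml admin diagnostic run", and keeps the container alive.
COPY start-service.sh /usr/local/bin/start-service.sh
RUN chmod +x /usr/local/bin/start-service.sh

EXPOSE 12800
CMD ["/usr/local/bin/start-service.sh"]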
Build the image "rmanualtransmission" using the above Dockerfile:
docker build -f r-manualtransmission-dockerfile -t rmanualtransmission .
Check that the image "rmanualtransmission" was built successfully using the following command:
docker images
Start a container using the following command:
docker run --name rmanualtransmission-container -dit rmanualtransmission
Check the status of the container using:
docker logs rmanualtransmission-container
Once you have confirmed that the diagnostic tests have passed and the web service has been published, you can start consuming the web service.
Obtain the IP address of the container using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rmanualtransmission-container
'172.17.0.3'
You can consume the ManualTransmission R web service, or obtain its swagger.json, using curl commands:
apt-get -y install jq
curl -s --header "Content-Type: application/json" --request POST --data '{"username":"admin","password":"Microsoft@2018"}' http://172.17.0.3:12800/login | jq -r '.access_token'
<access token>
curl -s --header "Content-Type: application/json" --header "Authorization: Bearer <access token>" --request POST --data '{"hp":120,"wt":2.8}' http://172.17.0.3:12800/api/ManualTransmissionService/1.0.0
{"success":true,"errorMessage":"","outputParameters":{"answer":0.64181252840938208},"outputFiles":{},"consoleOutput":"","changedFiles":[]}
curl -s --header "Authorization: Bearer <access token>" --request GET http://172.17.0.3:12800/api/ManualTransmissionService/1.0.0/swagger.json -o swagger.json
The swagger.json file can then be used to build a client library in any language.
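For example, any Swagger 2.0 code generator can consume the file; assuming Java and a downloaded swagger-codegen-cli jar, a Python client stub could be generated like this (the tool and output directory are illustrative choices, not part of ML Server):

# Generate a client library from the captured swagger.json (illustrative)
java -jar swagger-codegen-cli.jar generate -i swagger.json -l python -o ./ManualTransmissionClient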
Run Python Web Service:
Here is an example of building an image with the Manual Transmission Python web service pre-configured inside the image.
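A sketch of the py-manualtransmission-dockerfile, again building on the "mlserver-onebox" image; the publish script name and its use of the azureml-model-management-sdk (DeployClient) that ships with Machine Learning Server 9.3 are assumptions chosen to produce the same ManualTransmissionService endpoint:

# py-manualtransmission-dockerfile -- illustrative sketch built on the one-box image above
FROM mlserver-onebox

# publish_service.py uses azureml.deploy.DeployClient against http://localhost:12800 with the
# admin credentials to train the same mtcars model and deploy it as ManualTransmissionService
# version 1.0.0 with hp/wt inputs and an "answer" output.
COPY publish_service.py /tmp/publish_service.py

# start-service.sh bootstraps the one-box configuration, runs
#   mlserver-python /tmp/publish_service.py
# to publish the web service, runs the diagnostics, and keeps the container alive.
COPY start-service.sh /usr/local/bin/start-service.sh
RUN chmod +x /usr/local/bin/start-service.sh

EXPOSE 12800
CMD ["/usr/local/bin/start-service.sh"]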
Build the image "pymanualtransmission" using the above Dockerfile:
docker build -f py-manualtransmission-dockerfile -t pymanualtransmission .
Check that the image "pymanualtransmission" was built successfully using the following command:
docker images
Start a container using the following command:
docker run --name pymanualtransmission-container -dit pymanualtransmission
Check the status of the container using:
docker logs pymanualtransmission-container
Once you have confirmed that the diagnostic tests have passed and the web service has been published, you can start consuming the web service.
Obtain the IP address of the container using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pymanualtransmission-container
'172.17.0.3'
You can obtain the swagger.json of the ManualTransmission Python web service using curl commands:
apt-get -y install jq
curl -s --header "Content-Type: application/json" --request POST --data '{"username":"admin","password":"Microsoft@2018"}' http://172.17.0.3:12800/login | jq -r '.access_token'
<access token>
curl -s --header "Content-Type: application/json" --header "Authorization: Bearer <access token>" --request POST --data '{"hp":120,"wt":2.8}' http://172.17.0.3:12800/api/ManualTransmissionService/1.0.0
{"success":true,"errorMessage":"","outputParameters":{"answer":0.64181252840938208},"outputFiles":{},"consoleOutput":"","changedFiles":[]}
curl -s --header "Authorization: Bearer <access token>" --request GET http://172.17.0.3:12800/api/ManualTransmissionService/1.0.0/swagger.json -o swagger.json
The swagger.json file can then be used to build a client library in any language.
NOTE: You can also modify the web node's appsettings.json in the Dockerfile to enable LDAP/AAD authentication.
Run on Kubernetes:
The local Docker images developed above can be pushed to Azure Container Registry (ACR). You can then create an Azure Kubernetes Service (AKS) cluster that pulls the images from ACR and scales up and down automatically by autoscaling pods.
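A rough outline of that workflow with the Azure CLI; the resource group, registry, and cluster names are placeholders, and the cluster still needs to be granted pull access to the registry (for example through a role assignment):

# Push the local one-box image to Azure Container Registry (names are placeholders)
az group create --name mlserver-rg --location eastus
az acr create --resource-group mlserver-rg --name mlserveracr --sku Basic
az acr login --name mlserveracr
docker tag mlserver-onebox mlserveracr.azurecr.io/mlserver-onebox:9.3
docker push mlserveracr.azurecr.io/mlserver-onebox:9.3

# Create an AKS cluster and fetch its kubeconfig
az aks create --resource-group mlserver-rg --name mlserver-aks --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group mlserver-rg --name mlserver-aks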
In order to run the 9.3 Linux one-box image in a Kubernetes cluster, we also have to initialize JWTSigningCertificate with a certificate value and add the "kubepods" group to the web node/compute node autoStartScripts. These extra steps can be added to the Dockerfile as follows:
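A heavily hedged sketch of those extra steps; the appsettings.json path, the JWTSigningCertificate handling, and the way the "kubepods" group is wired into the autoStartScripts vary by installation, so treat everything below as an assumption to adapt rather than a drop-in recipe:

# Extra steps appended to the one-box Dockerfile for Kubernetes (illustrative sketch)

# Ensure the "kubepods" group exists; the web node/compute node autoStartScripts under
# /opt/microsoft/mlserver/9.3.0/o16n must also be edited to include this group (assumption).
RUN groupadd -f kubepods

# Generate a self-signed certificate to initialize JWTSigningCertificate in
# /opt/microsoft/mlserver/9.3.0/o16n/Microsoft.MLServer.WebNode/appsettings.json (assumption),
# so tokens issued by one replica are accepted by others. In production, mount a real
# certificate (for example from a Kubernetes secret) instead of baking one into the image.
RUN openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=mlserver-jwt" \
        -keyout /tmp/jwt.key -out /tmp/jwt.crt \
    && openssl pkcs12 -export -inkey /tmp/jwt.key -in /tmp/jwt.crt \
        -out /tmp/jwt.pfx -passout pass:ChangeMe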
REFERENCES:
https://github.com/johnpaulada/microsoftmlserver-docker
https://github.com/rcarmo/docker-ml-server
https://success.docker.com/article/use-a-script-to-initialize-stateful-container-data
https://docs.docker.com/v17.09/engine/userguide/eng-image/dockerfile_best-practices
http://www.tothenew.com/blog/dockerizing-nginx-and-ssh-using-supervisord
https://microsoft.github.io/deployr-api-docs
https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/container-docker-introduction