Library to turn Azure ML Compute into Ray and Dask cluster

Published Dec 30 2021 04:17 PM
Microsoft

Ray and Dask are two of the most popular frameworks for parallelizing and scaling Python computation. They help speed up data processing, hyperparameter tuning, reinforcement learning, model serving and many other scenarios.

On an Azure ML compute instance, we can easily install Ray and Dask to take advantage of parallel computing across all cores within the node. However, there is not yet an easy way in Azure Machine Learning to extend this to a multi-node cluster when the computing and ML problems require the power of more than one node. One would need to set up a separate environment using VMs or Kubernetes outside Azure ML to run multi-node Ray/Dask, which would mean losing all the capabilities of Azure ML.

To address this gap, we have developed a library that can easily turn an Azure ML compute instance and compute cluster into a Ray and Dask cluster. The library does all the complex wiring and setup of a Ray cluster with Dask behind the scenes while exposing a simple Ray context object for users to perform parallel Python computing tasks. In addition, it ships with high-performance PyArrow APIs to access Azure storage and a simple interface to install additional libraries.

The library also supports both interactive mode and job mode. Data scientists can work interactively with the cluster during the exploratory phase, then turn the code into job mode with minimal change.

Check out the library repo at https://github.com/microsoft/ray-on-aml for details.

In this post, we'll walk through the steps to set up and use the library.

Installation of the library

  1. Prepare compute environment 

For interactive use from your compute instance, create a compute cluster in the same vnet where your compute instance is.

Check list

[ ] Azure Machine Learning Workspace

[ ] Virtual network/Subnet

[ ] Create Compute Instance in the Virtual Network

[ ] Create Compute Cluster in the same Virtual Network

Use the azureml_py38 conda environment from a (Jupyter) Notebook in Azure Machine Learning Studio.

Again, don't forget to create the virtual network for the Compute Instance and Compute Cluster. Without it, they cannot communicate with each other.

    2.  Install library

pip install --upgrade ray-on-aml

Installing this library will also install ray[default]==1.9.1, pyarrow>=5.0.0, dask[complete]==2021.12.0, adlfs==2021.10.0 and fsspec==2021.10.1.

 

    3. Use the cluster in interactive mode

       Run in interactive mode from the compute instance's notebook. Note the option ci_is_head, which makes your current CI the head node.

from azureml.core import Workspace
from ray_on_aml.core import Ray_On_AML

ws = Workspace.from_config()
ray_on_aml = Ray_On_AML(ws=ws, compute_cluster=NAME_OF_COMPUTE_CLUSTER, additional_pip_packages=['torch==1.10.0', 'torchvision', 'sklearn'], maxnode=4)
ray = ray_on_aml.getRay()
# By default ci_is_head=True: the compute instance is the head node and all nodes
# in the remote compute cluster are workers.
# To use one of the nodes in the remote AML compute cluster as the head node instead,
# with the remaining nodes as workers, specify ray = ray_on_aml.getRay(ci_is_head=False)
# To install additional libraries, use the additional_pip_packages and additional_conda_packages parameters.

At this point, you have the ray object, which you can use to perform various parallel computing tasks through the Ray API.

 

* Advanced usage: Ray_On_AML() initialization accepts two arguments that specify the library's base configuration, with the following default values. Although it's possible, you should not change the default values of base_conda_dep and base_pip_dep, as doing so may break the package. Only do so when you need to customize the cluster's default configuration, such as the Ray version.

Ray_On_AML(ws=ws, compute_cluster="Name_of_Compute_Cluster",
           base_conda_dep=['adlfs==2021.10.0', 'pip'],
           base_pip_dep=['ray[tune]==1.9.1', 'xgboost_ray==0.1.5', 'dask==2021.12.0',
                         'pyarrow >= 5.0.0', 'fsspec==2021.10.1'])

    4. Use the cluster in job mode

 For use in an AML job, simply include ray_on_aml as a pip dependency, then add the following inside your script to get ray.

from ray_on_aml.core import Ray_On_AML

ray_on_aml = Ray_On_AML()
ray = ray_on_aml.getRay()

if ray:  # in the head node
    pass  # logic to use Ray for distributed ML training, tuning or distributed data transformation with Dask
else:
    print("in worker node")
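The ray_on_aml pip dependency typically goes into the job's conda environment file. Below is an illustrative sketch of such a spec; the environment name and pins are assumptions chosen to match the dependency versions listed above, not the exact file shipped in the repo's examples.

```yaml
name: rayEnv
channels:
  - defaults
dependencies:
  - python=3.8
  - pip
  - pip:
      - ray-on-aml
      - ray[default]==1.9.1
      - dask[complete]==2021.12.0
      - pyarrow>=5.0.0
      - adlfs==2021.10.0
      - fsspec==2021.10.1
```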

Example scenarios

  1. Perform big data analysis with Dask on Ray

from adlfs import AzureBlobFileSystem

abfs = AzureBlobFileSystem(account_name="azureopendatastorage",  container_name="isdweatherdatacontainer")
data = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2012/"], filesystem=abfs)
data1 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2015/"], filesystem=abfs)
data2 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2010/"], filesystem=abfs)
data3 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2009/"], filesystem=abfs)
data4 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2011/"], filesystem=abfs)
data5 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2013/"], filesystem=abfs)
data6 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2014/"], filesystem=abfs)
all_data = data.union(data1).union(data2).union(data3).union(data4).union(data5).union(data6)
print(all_data.count())
all_data_dask = all_data.to_dask().describe().compute()
print(all_data_dask)

        2. Distributed hyperparameter tuning with ray.tune

import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import xgboost as xgb

from ray import tune


def train_breast_cancer(config):
    # Load dataset
    data, labels = sklearn.datasets.load_breast_cancer(return_X_y=True)
    # Split into train and test set
    train_x, test_x, train_y, test_y = train_test_split(
        data, labels, test_size=0.25)
    # Build input matrices for XGBoost
    train_set = xgb.DMatrix(train_x, label=train_y)
    test_set = xgb.DMatrix(test_x, label=test_y)
    # Train the classifier
    results = {}
    xgb.train(
        config,
        train_set,
        evals=[(test_set, "eval")],
        evals_result=results,
        verbose_eval=False)
    # Return prediction accuracy
    accuracy = 1. - results["eval"]["error"][-1]
    tune.report(mean_accuracy=accuracy, done=True)


config = {
    "objective": "binary:logistic",
    "eval_metric": ["logloss", "error"],
    "max_depth": tune.randint(1, 9),
    "min_child_weight": tune.choice([1, 2, 3]),
    "subsample": tune.uniform(0.5, 1.0),
    "eta": tune.loguniform(1e-4, 1e-1)
}
analysis = tune.run(
    train_breast_cancer,
    resources_per_trial={"cpu": 1},
    config=config,
    num_samples=10)

       3. Distributed XGBoost 

from xgboost_ray import RayXGBClassifier, RayParams
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

seed = 42

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.25, random_state=42
)

clf = RayXGBClassifier(
    n_jobs=4,  # In XGBoost-Ray, n_jobs sets the number of actors
    random_state=seed
)

# The scikit-learn API will automatically convert the data
# to RayDMatrix format as needed.
# You can also pass X as a RayDMatrix, in which case
# y will be ignored.

clf.fit(X_train, y_train)

pred_ray = clf.predict(X_test)
print(pred_ray.shape)

pred_proba_ray = clf.predict_proba(X_test)
print(pred_proba_ray.shape)

# It is also possible to pass a RayParams object
# to the fit/predict/predict_proba methods - it will
# override the n_jobs set during initialization

clf.fit(X_train, y_train, ray_params=RayParams(num_actors=4))

pred_ray = clf.predict(X_test, ray_params=RayParams(num_actors=4))
print(pred_ray.shape)

4. Use job mode with an AML job

from azureml.core import Workspace, Environment, Experiment, ScriptRunConfig
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.runconfig import PyTorchConfiguration

ws = Workspace.from_config()

compute_cluster = 'worker-cpu-v3'
maxnode = 5
vm_size = 'STANDARD_DS3_V2'
vnet = 'rayvnet'
subnet = 'default'
exp = 'ray_on_aml_job'
ws_detail = ws.get_details()
ws_rg = ws_detail['id'].split("/")[4]  # the resource group is the 5th segment of the workspace resource ID
vnet_rg = None
try:
    ray_cluster = ComputeTarget(workspace=ws, name=compute_cluster)

    print('Found existing cluster, use it.')
except ComputeTargetException:
    if vnet_rg is None:
        vnet_rg = ws_rg
    compute_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
                                                        min_nodes=0, max_nodes=maxnode,
                                                        vnet_resourcegroup_name=vnet_rg,
                                                        vnet_name=vnet,
                                                        subnet_name=subnet)
    ray_cluster = ComputeTarget.create(ws, compute_cluster, compute_config)

    ray_cluster.wait_for_completion(show_output=True)


rayEnv = Environment.from_conda_specification(name = "rayEnv",
                                             file_path = "../examples/conda_env.yml")

# rayEnv = Environment.get(ws, "rayEnv", version=19)


src=ScriptRunConfig(source_directory='../examples/job',
                script='aml_job.py',
                environment=rayEnv,
                compute_target=ray_cluster,
                distributed_job_config=PyTorchConfiguration(node_count=maxnode),
                    # arguments = ["--master_ip",master_ip]
                )
run = Experiment(ws, exp).submit(src)
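The ws_rg line above relies on the fixed shape of an ARM resource ID. A stdlib-only sketch with placeholder values (not a real subscription or workspace) shows why index 4 picks out the resource group:

```python
# An Azure ML workspace ID follows the ARM resource ID layout:
#   /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<name>
ws_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
         "/resourceGroups/my-rg"
         "/providers/Microsoft.MachineLearningServices/workspaces/my-ws")

parts = ws_id.split("/")
# parts[0] is '' because the ID starts with '/', so the
# resource group name lands at index 4.
resource_group = parts[4]
print(resource_group)  # my-rg
```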

This is the code inside aml_job.py, with details omitted for brevity:

if __name__ == "__main__":
    run = Run.get_context()
    ws = run.experiment.workspace
    account_key = ws.get_default_keyvault().get_secret("adls7-account-key")
    ray_on_aml = Ray_On_AML()
    ray = ray_on_aml.getRay()

    if ray:  # in the head node
        print("head node detected")

        datasets.MNIST("~/data", train=True, download=True)

        analysis = tune.run(train_mnist, config=search_space)
        print(ray.cluster_resources())
        print("data count result", get_data_count(account_key))

    else:
        print("in worker node")

5. View Ray dashboard

The easiest way to view the Ray dashboard is through the connection from VS Code for Azure ML. Open VS Code connected to your Compute Instance, open a terminal window, type http://127.0.0.1:8265/ and Ctrl+click it to open the Ray Dashboard on your local computer.

VS Code terminal

This trick makes VS Code forward the port to your local machine through its extension on the CI, without you having to set up SSH port forwarding manually.

Ray Dashboard

        See more examples at the quick start examples in the repo.

        In partnership with Hyun Suk Shin.

%3CLINGO-SUB%20id%3D%22lingo-sub-3048784%22%20slang%3D%22en-US%22%3ELibrary%20to%20turn%20Azure%20ML%20Compute%20into%20Ray%20and%20Dask%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-3048784%22%20slang%3D%22en-US%22%3E%3CP%3E%3CA%20href%3D%22https%3A%2F%2Fwww.ray.io%2F%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%22%3ERay%3C%2FA%3E%20and%20%3CA%20href%3D%22https%3A%2F%2Fdask.org%2F%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%22%3EDask%3C%2FA%3E%20are%20two%20among%20the%20most%20popular%20frameworks%20to%20parallelize%20and%20scale%20Python%20computation.%20They%20are%20very%20helpful%20to%20speed%20up%20computing%20for%20data%20processing%2C%20hyperparameter%20tunning%2C%20reinforcement%20learning%20and%20model%20serving%20and%20many%20other%20scenarios.%3C%2FP%3E%0A%3CP%3EFor%20an%20Azure%20ML%20compute%20instance%2C%20we%20can%20easily%20install%20Ray%20and%20Dask%20to%20take%20advantage%20of%20parallel%20computing%20for%20all%20cores%20within%20the%20node.%20However%2C%20there%20is%20yet%20an%20easy%20way%20in%20Azure%20Machine%20Learning%20to%20extend%20this%20to%20a%20multi-node%20cluster%20when%20the%20computing%20and%20ML%20problems%20require%20the%20power%20of%20more%20than%20one%20nodes.%20One%20would%20need%20to%20setup%20a%20separate%20environment%20using%20VMs%20or%20K8s%20outside%20Azure%20ML%20to%20run%20multi-node%20Ray%2FDask.%20This%20would%20mean%20losing%20all%20capabilities%20of%20Azure%20ML.%3C%2FP%3E%0A%3CP%3ETo%20address%20this%20gap%2C%20we%20have%20developed%20a%20library%20that%20can%20easily%20turn%20Azure%20ML%20compute%20instance%20and%20compute%20cluster%20into%20Ray%20and%20Dask%20cluster.%20The%20library%20does%20all%20the%20complex%20wirings%20and%20setup%20of%20a%20Ray%20cluster%20with%20Dask%20behind%20the%20scene%20while%20exposing%20a%20simple%20Ray%20context%20object%20for%20users%20perform%20parallel%20Python%20computing%20tasks.%20In%20addition%2C%20it%20is%20
shipped%20with%20high%20performance%20Pyarrow%20APIs%20to%20access%20Azure%20storage%20and%20simple%20interface%20to%20install%20additional%20libraries.%3C%2FP%3E%0A%3CP%3EThe%20library%20also%20comes%20with%20support%20for%20both%20Interactive%20mode%20and%20job%20mode.%20Data%20scientist%20can%20perform%20fast%20interactive%20work%20with%20the%20cluster%20during%20exploratory%20phase%20then%20easily%20turn%20the%20code%20into%20the%20job%20mode%20with%20minimal%20change.%3C%2FP%3E%0A%3CP%3ECheckout%20library%20repo%20at%26nbsp%3B%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2Fmicrosoft%2Fray-on-aml%22%20target%3D%22_self%22%20rel%3D%22noopener%20noreferrer%22%3Ehttps%3A%2F%2Fgithub.com%2Fmicrosoft%2Fray-on-aml%3C%2FA%3E%20for%20details.%3C%2FP%3E%0A%3CP%3EIn%20this%20post%2C%20we'll%20walk%20through%20steps%20to%20setup%20and%20use%20the%20library%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22jamesnguyen_0-1640901046641.png%22%20style%3D%22width%3A%20832px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F336615i9D987A37AA173C28%2Fimage-dimensions%2F832x438%3Fv%3Dv2%22%20width%3D%22832%22%20height%3D%22438%22%20role%3D%22button%22%20title%3D%22jamesnguyen_0-1640901046641.png%22%20alt%3D%22jamesnguyen_0-1640901046641.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CFONT%20size%3D%225%22%3E%3CSTRONG%3EInstallation%20of%20the%20library%3C%2FSTRONG%3E%3C%2FFONT%3E%3C%2FP%3E%0A%3COL%3E%0A%3CLI%20dir%3D%22auto%22%3E%3CSTRONG%3EPrepare%20compute%20environment%26nbsp%3B%3C%2FSTRONG%3E%3C%2FLI%3E%0A%3C%2FOL%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3EFor%20Interactive%20use%20at%20your%20compute%20instance%2C%20create%20a%20compute%20cluster%3CSTRONG%3E%20in%20the%20same%20vnet%3C%2FSTRONG%3E%20where%20your%20compute%20instance%20is.%26nbsp%3B%3C%2FP%3E%0A%3CP%20style%3D
%22%20padding-left%20%3A%2030px%3B%20%22%3ECheck%20list%3C%2FP%3E%0A%3CBLOCKQUOTE%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%5B%20%5D%20Azure%20Machine%20Learning%20Workspace%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%5B%20%5D%20Virtual%20network%2FSubnet%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%5B%20%5D%20Create%20Compute%20Instance%20in%20the%20Virtual%20Network%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%5B%20%5D%20Create%20Compute%20Cluster%20in%20the%20same%20Virtual%20Network%3C%2FP%3E%0A%3C%2FBLOCKQUOTE%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3EUse%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CCODE%3Eazureml_py38%3C%2FCODE%3E%3CSPAN%3E%26nbsp%3Bconda%20environment%26nbsp%3B%3C%2FSPAN%3Efrom%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CCODE%3E(Jupyter)%20Notebook%3C%2FCODE%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Ein%20Azure%20Machine%20Learning%20Studio.%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3EAgain%2C%20don't%20forget%20to%20create%20the%20virtual%20network%20for%20Compute%20Instance%20and%20Compute%20Cluster.%20Without%20it%2C%20they%20cannot%20communicate%20to%20each%20other.%3C%2FP%3E%0A%3CP%3E%3CSPAN%3E%26nbsp%3B%20%26nbsp%3B%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3E%3CSPAN%3E%26nbsp%3B%20%26nbsp%3B%202.%26nbsp%3B%20Install%20library%3C%2FSPAN%3E%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3Epip%20install%20--upgrade%20ray-on-aml%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20
style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3EInstalling%20this%20library%20will%20also%20install%20ray%5Bdefault%5D%3D%3D1.9.1%2C%20pyarrow%26gt%3B%3D%205.0.0%2C%20dask%5Bcomplete%5D%3D%3D2021.12.0%2C%20adlfs%3D%3D2021.10.0%20and%20fsspec%3D%3D2021.10.1%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3E%26nbsp%3B%20%26nbsp%3B%203.%26nbsp%3B%3CSPAN%3EUse%20cluster%20in%20interactive%20mode%3C%2FSPAN%3E%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3BRun%20in%20interactive%20mode%20in%20compute%20instance's%20notebook.%20Notice%20the%20option%20%3CEM%3Eci_is_head%3C%2FEM%3E%20to%20enable%20your%20current%20CI%20as%20head%20node.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3Efrom%20ray_on_aml.core%20import%20Ray_On_AML%0Aws%20%3D%20Workspace.from_config()%0Aray_on_aml%20%3DRay_On_AML(ws%3Dws%2C%20compute_cluster%20%3DNAME_OF_COMPUTE_CLUSTER%2C%20additional_pip_packages%3D%5B'torch%3D%3D1.10.0'%2C%20'torchvision'%2C%20'sklearn'%5D%2C%20maxnode%3D4)%0Aray%20%3D%20ray_on_aml.getRay()%0A%23%20Note%20that%20by%20default%2C%20ci_is_head%3DTrue%20which%20means%20%20compute%20instance%20as%20head%20node%20and%20all%20nodes%20in%20the%20remote%20compute%20cluster%20as%20workers%20%0A%23%20But%20if%20you%20want%20to%20use%20one%20of%20the%20nodes%20in%20the%20remote%20AML%20compute%20cluster%20is%20used%20as%20head%20node%20and%20the%20remaining%20are%20worker%20nodes.%0A%23%20then%20simply%20specify%20ray%20%3D%20ray_on_aml.getRay(ci_is_head%3DFalse)%0A%23%20To%20install%20additional%20library%2C%20use%20additional_pip_packages%20and%20additional_conda_packages%20parameters.%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%
3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3EAt%20this%20point%2C%20you%20have%20the%20ray%20object%20where%20you%20can%20use%20to%20perform%20various%20parallel%20computing%20tasks%20using%20ray%20API.%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E*%20Advanced%20usage%3AThere%20are%20two%20arguments%20to%20Ray_On_AML()%20object%20initilization%20with%20to%20specify%20base%20configuration%20for%20the%20library%20with%20following%20default%20values.%20Although%20it's%20possible%2C%20you%20should%20not%20change%20the%20default%20values%20of%20base_conda_dep%20and%20base_pip_dep%20as%20it%20may%20break%20the%20package.%20Only%20do%20so%20when%20you%20need%20to%20customize%20the%20cluster%20default%20configuration%20such%20as%20ray%20version.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3ERay_On_AML(ws%3Dws%2C%20compute_cluster%20%3D%22Name_of_Compute_Cluster%22%2Cbase_conda_dep%20%3D%5B'adlfs%3D%3D2021.10.0'%2C'pip'%5D%2Cbase_pip_dep%20%3D%20%5B'ray%5Btune%5D%3D%3D1.9.1'%2C%20'xgboost_ray%3D%3D0.1.5'%2C%20'dask%3D%3D2021.12.0'%2C'pyarrow%20%26gt%3B%3D%205.0.0'%2C'fsspec%3D%3D2021.10.1'%5D)%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3E%26nbsp%3B%20%26nbsp%3B%204.%20Use%20the%20cluster%20in%20job%20mode%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%26nbsp%3BFor%20use%20in%20an%20AML%20job%2C%20simply%20include%20ray
_on_aml%20as%20a%20pip%20dependency%20then%20inside%20your%20script%2C%20do%20this%20to%20get%20ray%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3Efrom%20ray_on_aml.core%20import%20Ray_On_AML%0Aray_on_aml%20%3DRay_On_AML()%0Aray%20%3D%20ray_on_aml.getRay()%0A%0Aif%20ray%3A%20%23in%20the%20headnode%0A%20%20%20%20%23logic%20to%20use%20Ray%20for%20distributed%20ML%20training%2C%20tunning%20or%20distributed%20data%20transformation%20with%20Dask%0A%0Aelse%3A%0A%20%20%20%20print(%22in%20worker%20node%22)%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20dir%3D%22auto%22%20id%3D%22toc-hId--1463100141%22%20id%3D%22toc-hId--1459614685%22%3E%3CFONT%20size%3D%225%22%3EExample%20scenarios%3C%2FFONT%3E%3C%2FH3%3E%0A%3COL%3E%0A%3CLI%3E%3CSTRONG%3EPerform%20big%20data%20analysis%20with%3CA%20href%3D%22https%3A%2F%2Fdocs.ray.io%2Fen%2Flatest%2Fdata%2Fdask-on-ray.html%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%22%3E%20Dask%20on%20Ray%3C%2FA%3E%3C%2FSTRONG%3E%3C%2FLI%3E%0A%3C%2FOL%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3Efrom%20adlfs%20import%20AzureBlobFileSystem%0A%0Aabfs%20%3D%20AzureBlobFileSystem(account_name%3D%22azureopendatastorage%22%2C%20%20container_name%3D%22isdweatherdatacontainer%22)%0Adata%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2012%2F%22%5D%2C%20fil
esystem%3Dabfs)%0Adata1%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2015%2F%22%5D%2C%20filesystem%3Dabfs)%0Adata2%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2010%2F%22%5D%2C%20filesystem%3Dabfs)%0Adata3%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2009%2F%22%5D%2C%20filesystem%3Dabfs)%0Adata4%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2011%2F%22%5D%2C%20filesystem%3Dabfs)%0Adata5%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2013%2F%22%5D%2C%20filesystem%3Dabfs)%0Adata6%20%3D%20ray.data.read_parquet(%5B%22az%3A%2F%2Fisdweatherdatacontainer%2FISDWeather%2Fyear%3D2014%2F%22%5D%2C%20filesystem%3Dabfs)%0Aall_data%20%3Ddata.union(data1).union(data2).union(data3).union(data4).union(data5).union(data6)%0Aprint(all_data.count())%0Aall_data_dask%20%3D%20data.to_dask().describe().compute()%0Aprint(all_data_dask)%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3E%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3B%202.%20Distributed%20hypeparam%20tunning%20with%20%3CA%20href%3D%22https%3A%2F%2Fdocs.ray.io%2Fen%2Flatest%2Ftune%2Findex.html%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%22%3Eray.tune%3C%2FA%3E%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3E%20import%20sklearn.datasets%0A%20import%20sklearn.metrics%0A%20from%20sklearn.model_selection%20import%20train_test_split%0A%20import%20xg
boost%20as%20xgb%0A%0A%20from%20ray%20import%20tune%0A%0A%0A%20def%20train_breast_cancer(config)%3A%0A%20%20%20%20%20%23%20Load%20dataset%0A%20%20%20%20%20data%2C%20labels%20%3D%20sklearn.datasets.load_breast_cancer(return_X_y%3DTrue)%0A%20%20%20%20%20%23%20Split%20into%20train%20and%20test%20set%0A%20%20%20%20%20train_x%2C%20test_x%2C%20train_y%2C%20test_y%20%3D%20train_test_split(%0A%20%20%20%20%20%20%20%20%20data%2C%20labels%2C%20test_size%3D0.25)%0A%20%20%20%20%20%23%20Build%20input%20matrices%20for%20XGBoost%0A%20%20%20%20%20train_set%20%3D%20xgb.DMatrix(train_x%2C%20label%3Dtrain_y)%0A%20%20%20%20%20test_set%20%3D%20xgb.DMatrix(test_x%2C%20label%3Dtest_y)%0A%20%20%20%20%20%23%20Train%20the%20classifier%0A%20%20%20%20%20results%20%3D%20%7B%7D%0A%20%20%20%20%20xgb.train(%0A%20%20%20%20%20%20%20%20%20config%2C%0A%20%20%20%20%20%20%20%20%20train_set%2C%0A%20%20%20%20%20%20%20%20%20evals%3D%5B(test_set%2C%20%22eval%22)%5D%2C%0A%20%20%20%20%20%20%20%20%20evals_result%3Dresults%2C%0A%20%20%20%20%20%20%20%20%20verbose_eval%3DFalse)%0A%20%20%20%20%20%23%20Return%20prediction%20accuracy%0A%20%20%20%20%20accuracy%20%3D%201.%20-%20results%5B%22eval%22%5D%5B%22error%22%5D%5B-1%5D%0A%20%20%20%20%20tune.report(mean_accuracy%3Daccuracy%2C%20done%3DTrue)%0A%0A%0A%20config%20%3D%20%7B%0A%20%20%20%20%20%22objective%22%3A%20%22binary%3Alogistic%22%2C%0A%20%20%20%20%20%22eval_metric%22%3A%20%5B%22logloss%22%2C%20%22error%22%5D%2C%0A%20%20%20%20%20%22max_depth%22%3A%20tune.randint(1%2C%209)%2C%0A%20%20%20%20%20%22min_child_weight%22%3A%20tune.choice(%5B1%2C%202%2C%203%5D)%2C%0A%20%20%20%20%20%22subsample%22%3A%20tune.uniform(0.5%2C%201.0)%2C%0A%20%20%20%20%20%22eta%22%3A%20tune.loguniform(1e-4%2C%201e-1)%0A%20%7D%0A%20analysis%20%3D%20tune.run(%0A%20%20%20%20%20train_breast_cancer%2C%0A%20%20%20%20%20resources_per_trial%3D%7B%22cpu%22%3A%201%7D%2C%0A%20%20%20%20%20config%3Dconfig%2C%0A%20%20%20%20%20num_samples%3D10)%0A%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%
3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3E%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3B3.%20Distributed%20%3CA%20href%3D%22https%3A%2F%2Fdocs.ray.io%2Fen%2Flatest%2Fxgboost-ray.html%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%22%3EXGBoost%26nbsp%3B%3C%2FA%3E%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3Efrom%20xgboost_ray%20import%20RayXGBClassifier%2C%20RayParams%0Afrom%20sklearn.datasets%20import%20load_breast_cancer%0Afrom%20sklearn.model_selection%20import%20train_test_split%0A%0Aseed%20%3D%2042%0A%0AX%2C%20y%20%3D%20load_breast_cancer(return_X_y%3DTrue)%0AX_train%2C%20X_test%2C%20y_train%2C%20y_test%20%3D%20train_test_split(%0A%20%20%20%20X%2C%20y%2C%20train_size%3D0.25%2C%20random_state%3D42%0A)%0A%0Aclf%20%3D%20RayXGBClassifier(%0A%20%20%20%20n_jobs%3D4%2C%20%20%23%20In%20XGBoost-Ray%2C%20n_jobs%20sets%20the%20number%20of%20actors%0A%20%20%20%20random_state%3Dseed%0A)%0A%0A%23%20scikit-learn%20API%20will%20automatically%20conver%20the%20data%0A%23%20to%20RayDMatrix%20format%20as%20needed.%0A%23%20You%20can%20also%20pass%20X%20as%20a%20RayDMatrix%2C%20in%20which%20case%0A%23%20y%20will%20be%20ignored.%0A%0Aclf.fit(X_train%2C%20y_train)%0A%0Apred_ray%20%3D%20clf.predict(X_test)%0Aprint(pred_ray.shape)%0A%0Apred_proba_ray%20%3D%20clf.predict_proba(X_test)%0Aprint(pred_proba_ray.shape)%0A%0A%23%20It%20is%20also%20possible%20to%20pass%20a%20RayParams%20object%0A%23%20to%20fit%2Fpredict%2Fpredict_proba%20methods%20-%20will%20override%0A%23%20n_jobs%20set%20during%20initialization%0A%0Aclf.fit(X_train%2C%20y_train%2C%20ray_params%3DRayParams(num_actors%3D4))%0A%
0Apred_ray%20%3D%20clf.predict(X_test%2C%20ray_params%3DRayParams(num_actors%3D4))%0Aprint(pred_ray.shape)%0A%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3E%3CSTRONG%3E4.%20Use%20with%20in%20job%20mode%20with%20AML%20job%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-python%22%3E%3CCODE%3Ews%20%3D%20Workspace.from_config()%0A%0Acompute_cluster%20%3D%20'worker-cpu-v3'%0Amaxnode%20%3D5%0Avm_size%3D'STANDARD_DS3_V2'%0Avnet%3D'rayvnet'%0Asubnet%3D'default'%0Aexp%20%3D'ray_on_aml_job'%0Aws_detail%20%3D%20ws.get_details()%0Aws_rg%20%3D%20ws_detail%5B'id'%5D.split(%22%2F%22)%5B4%5D%0Avnet_rg%3DNone%0Atry%3A%0A%20%20%20%20ray_cluster%20%3D%20ComputeTarget(workspace%3Dws%2C%20name%3Dcompute_cluster)%0A%0A%20%20%20%20print('Found%20existing%20cluster%2C%20use%20it.')%0Aexcept%20ComputeTargetException%3A%0A%20%20%20%20if%20vnet_rg%20is%20None%3A%0A%20%20%20%20%20%20%20%20vnet_rg%20%3D%20ws_rg%0A%20%20%20%20compute_config%20%3D%20AmlCompute.provisioning_configuration(vm_size%3Dvm_size%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20min_nodes%3D0%2C%20max_nodes%3Dmaxnode%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20vnet_resourcegroup_name%3Dvnet_rg%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20
%20%20%20%20%20%20%20%20%20%20%20%20vnet_name%3Dvnet%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20subnet_name%3Dsubnet)%0A%20%20%20%20ray_cluster%20%3D%20ComputeTarget.create(ws%2C%20compute_cluster%2C%20compute_config)%0A%0A%20%20%20%20ray_cluster.wait_for_completion(show_output%3DTrue)%0A%0A%0ArayEnv%20%3D%20Environment.from_conda_specification(name%20%3D%20%22rayEnv%22%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20file_path%20%3D%20%22..%2Fexamples%2Fconda_env.yml%22)%0A%0A%23%20rayEnv%20%3D%20Environment.get(ws%2C%20%22rayEnv%22%2C%20version%3D19)%0A%0A%0Asrc%3DScriptRunConfig(source_directory%3D'..%2Fexamples%2Fjob'%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20script%3D'aml_job.py'%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20environment%3DrayEnv%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20compute_target%3Dray_cluster%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20distributed_job_config%3DPyTorchConfiguration(node_count%3Dmaxnode)%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%23%20arguments%20%3D%20%5B%22--master_ip%22%2Cmaster_ip%5D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20)%0Arun%20%3D%20Experiment(ws%2C%20exp).submit(src)%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20style%3D%22%20padding-left%20%3A%2030px%3B%20%22%3EThis%20is%20the%20code%20inside%20aml_job.py%20with%20details%20omitted%20for%20brevity%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%
if __name__ == "__main__":
    run = Run.get_context()
    ws = run.experiment.workspace
    account_key = ws.get_default_keyvault().get_secret("adls7-account-key")
    ray_on_aml = Ray_On_AML()
    ray = ray_on_aml.getRay()

    if ray:  # in the head node
        print("head node detected")

        datasets.MNIST("~/data", train=True, download=True)

        analysis = tune.run(train_mnist, config=search_space)
        print(ray.cluster_resources())
        print("data count result", get_data_count(account_key))

    else:
        print("in worker node")

5. View Ray dashboard

The easiest way to view the Ray dashboard is to use the connection from VSCode for Azure ML (https://code.visualstudio.com/docs/datascience/azure-machine-learning). Open VSCode to your compute instance, open a terminal window, type http://127.0.0.1:8265/ then Ctrl+click to open the Ray dashboard on your local computer.

vs_terminal.jpg (VS Code terminal)

This trick tells VSCode to forward the port to your local machine without having to set up SSH port forwarding using VSCode's extension on the compute instance.

ray_dashboard.jpg (Ray Dashboard)
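Stepping back to the aml_job.py script above: its if ray: / else: branch is the heart of job mode. As the script relies on, Ray_On_AML().getRay() yields a usable Ray handle on the head node and a falsy value on worker nodes, so only the head drives the distributed work while workers simply join the cluster. A minimal stand-in sketch of that guard pattern (fake_get_ray is a hypothetical substitute for the library call, for illustration only):

```python
# Illustrative stand-in for the head/worker guard used in aml_job.py.
# fake_get_ray is a hypothetical substitute for Ray_On_AML().getRay(),
# which the job script treats as truthy on the head node and falsy on workers.

def fake_get_ray(on_head_node: bool):
    # On the head node the library hands back a Ray context; on workers, None.
    return object() if on_head_node else None

def run_job(ray):
    if ray:  # head node: drive the distributed work here
        return "head node detected"
    else:    # worker node: just joins the Ray cluster and waits for tasks
        return "in worker node"

print(run_job(fake_get_ray(True)))   # head node path
print(run_job(fake_get_ray(False)))  # worker node path
```

Because the same script runs on every node of the PyTorchConfiguration job, this single guard is what keeps the driver logic on exactly one node.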
See more examples at quick start examples: https://github.com/microsoft/ray-on-aml/blob/master/examples/quick_start_examples.ipynb

In partnership with Hyun Suk Shin.
Last update: Apr 29 2022 01:58 PM