Leveling Up Local Experiment Runs with the VS Code AML Extension
Published Sep 29 2020

Hey Azure ML community! The VS Code team is excited to announce version 0.6.15 of the Azure ML extension, which adds a brand-new way for you to validate your scripts, environments, and datasets before submitting to a remote cluster.

 

If you'd like to follow along with the blog post and try out the new features, you can install the extension here!

Gaining confidence in your experiment runs

Feeling a bit of worry or anxiety when submitting a remote experiment is common and expected. It's hard to predict how the training script you've worked so hard on will behave once it runs on your remote target. Many of you have told us about the pain of not:

  1. Knowing whether the environment you want to use will work correctly with your training script.
  2. Knowing whether your datasets are materialized and being referenced correctly.
  3. Having the confidence to submit your remote experiment and context-switch to another project you're working on.

 

The VS Code Azure ML extension team has been working hard over the past few weeks to bring you a new capability that alleviates these pains: running a local experiment with an interactive debugging session.

Interactive Debugging with the AML Extension
You might be asking yourself: how is this different from simply running my training script in VS Code? Here are some key differences:

 

  1. The Azure ML service always uses an environment when you submit a remote run, and these environments are materialized as Docker containers. When you run a local experiment, the extension builds the same Docker image and runs the same Docker container that would be used remotely.
  2. Running a Python script normally assumes you've taken care of data materialization and access yourself. When running remotely, we recommend using Azure ML Datasets, which give you helper functions and configuration options for accessing your data. The extension lets you configure a local run to work with Datasets exactly as you would remotely, helping you validate that your dataset is referenced correctly (see the sketch after this list).
  3. The extension streamlines setting up an optional debug session when running your experiment. This allows you to set breakpoints and step through your code with ease.
  4. The extension tightly couples the debugging experience, such as the debug console, with your experiment. Expressions you evaluate or print in the console are written to your run's 70_driver_log.txt.
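
To make point 2 concrete, here is a minimal sketch of a training script that consumes a named dataset input through the Azure ML Python SDK. The input name "training_data" and the tabular/pandas usage are assumptions for illustration, not requirements of the extension; the key idea is that the same code works whether the run is local or remote.

    # Minimal training-script sketch: read a dataset input by name.
    # Assumes the run was configured with a TabularDataset registered
    # under the (hypothetical) input name "training_data".
    from azureml.core import Run

    run = Run.get_context()

    # The SDK materializes named dataset inputs for local and remote runs alike.
    dataset = run.input_datasets["training_data"]
    df = dataset.to_pandas_dataframe()

    print(f"Loaded {len(df)} rows")  # This line also lands in 70_driver_log.txt

    # ... your training code here ...

    run.log("row_count", len(df))  # Metrics are logged the same way in both cases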

 

Running a local experiment is straightforward and closely resembles the extension's existing flow for submitting a remote run. Here's a summary of the steps for submitting a run (an SDK-level sketch of the equivalent configuration follows the list).

  1. Right-click on an experiment node in the tree view and choose the Run Experiment option.
  2. Pick the local run option and choose whether you want to debug.
  3. Create a new run configuration or pick a previously created one. The rest of the steps assume the former.
  4. Pick an environment and dataset for your training.
  5. (Only when debugging) Add the debugpy package to your environment; debugpy is required for an interactive debug session.
  6. Validate the final configuration options and submit your run.
  7. (Optional) If you've chosen to debug, start the debugger via the prompt or from your run node.
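
If you're curious what the extension sets up behind these prompts, here is a rough sketch of an equivalent local submission using the Azure ML Python SDK (v1). The file names conda.yml and train.py, the environment and experiment names, and the use of Workspace.from_config() are placeholder assumptions for the example; the extension gathers the same information through the steps above.

    # Rough SDK sketch of a local experiment submission (names are placeholders).
    from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace

    ws = Workspace.from_config()  # reads a local config.json for your workspace

    # Step 4: pick an environment -- here, built from an assumed conda spec file.
    env = Environment.from_conda_specification(
        name="local-debug-env", file_path="conda.yml"
    )

    # Step 5 (debugging only): make sure debugpy is available inside the container.
    env.python.conda_dependencies.add_pip_package("debugpy")

    # Run the local experiment inside a Docker container, mirroring the remote setup.
    # (Newer SDK versions express this with DockerConfiguration instead.)
    env.docker.enabled = True

    # Step 4 (dataset): a registered dataset could also be passed as a named input,
    # e.g. arguments=[dataset.as_named_input("training_data")] for a TabularDataset.

    # Steps 3 and 6: assemble and submit the run configuration against the
    # local Docker engine; train.py and the experiment name are placeholders.
    config = ScriptRunConfig(
        source_directory=".",
        script="train.py",
        compute_target="local",
        environment=env,
    )

    run = Experiment(ws, "local-debug-experiment").submit(config)
    run.wait_for_completion(show_output=True)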


Local Experiment Submission with AML Extension

Congratulations! If you've followed the steps above, you've successfully submitted a local experiment and can now move on to a remote run with confidence.

 

For more detailed, step-by-step instructions, you can follow our docs here.

Feedback

We're working hard to further improve your run experience from within VS Code, with a focus on the following scenarios:

  1. Debugging a single-node remote run on AmlCompute targets.
  2. Streamlining remote run submission after a successful local run.
  3. Streamlining re-running a failed remote run as a local debug experiment.

If there's anything you would like us to prioritize, please feel free to let us know on GitHub.

 

If you would like to provide feedback on the overall extension, please feel free to do so via our survey.
