(Nearly) Everything you need to know about computer vision in one repo

By Microsoft
Published 12-17-2019 11:12 AM · 38.2K Views

This post was co-authored by @JS Tan, @Patrick Buehler, @Anupam Sharma and @Jun Ki Min

 

In recent years, we've seen extraordinary growth in computer vision, with applications in image understanding, search, mapping, semi-autonomous and autonomous vehicles, and many more.
 

The ability for models to understand actions in a video, a task that was unthinkable just a few years ago, is now something we can achieve with relatively high accuracy and in near real time.

 

[Animated GIF: Action Recognition]

 

 

However, the field is not particularly welcoming to newcomers. Without prior experience or guidance, building an accurate classifier can easily take weeks. Unless you're ready to spend a long time learning computer vision, it's extremely hard to master the basics, let alone begin to explore some of the cutting-edge techniques in the field. Even for computer vision experts, building a quick proof of concept (POC) can be nontrivial and could easily take many days to put together.

 

At Microsoft, we have been working for many years on diverse computer vision solutions for our customers, and we have collected our learnings into a new public repository: https://github.com/microsoft/ComputerVision-recipes

 

The goal of this repository is to provide examples and best-practice guidelines for building computer vision systems on Azure, and to share them with the open-source community. More specifically, our goal was to create a repository that helps us deliver solutions rapidly to the community and to the customers we work with, and that eases the on-boarding of new team members who may have expertise in data science, but not specifically in computer vision. From mastering some of the most common scenarios in the field, like image classification, object detection, and image similarity, to exploring cutting-edge scenarios like action recognition and crowd counting, this repo will guide you through building models, fine-tuning them, and using them in real-world scenarios.

 

We're kicking off our repo with 5 scenarios: 

Scenario | Support | Description
Classification | Base | Image Classification is a way to learn and predict the category of a given image. (Ex: Is the picture of a 'dog' or a 'cat'?)
Similarity | Base | Image Similarity is a way to compute a similarity score given a pair of images. Given an image, it allows you to identify the most similar images in a dataset. (Ex: This picture of a dog is most like which of the following images of animals?)
Detection | Base | Object Detection is a supervised machine learning technique that allows you to detect where in a given image an object of interest is. (Ex: Where in the image are there animals?)
Action Recognition | Contrib | Action Recognition is used to identify which actions are performed in video footage, and at what respective start/end times. (Ex: When is someone drinking in the video?)
Crowd Counting | Contrib | Crowd Counting is a use case that leverages supervised machine learning techniques to count the number of people in an image. It applies to both low-crowd-density (e.g. fewer than 50 people) and high-crowd-density (e.g. thousands of people) settings. (Ex: How many pedestrians are in this image of a street?)

 

Rather than creating implementations from scratch, we draw from popular state-of-the-art libraries (e.g. fast.ai and torchvision), and we build additional utility around loading image data, optimizing models, and evaluating models. In addition, we aim to answer frequently asked questions, explain deep-learning intuitions, and highlight common pitfalls.

 

Whether you are an expert in computer vision or just getting your feet wet, we believe this repository offers something for you. For beginners, this repo will guide you through building a state-of-the-art model and help you develop an intuition for the craft. For experts, it can quickly get you to a strong baseline model that is easy to extend with custom Python/PyTorch code. In addition, the repository aims to provide support with 1) the full data science process, and 2) the tooling to succeed on Azure.

 

We hope that these examples and utilities will make it easier and faster for developers to create custom vision applications. 

 

The Data Science Process 

The Computer Vision Recipes GitHub repository shows you how to approach the five key steps of the data science process and provides utilities to enrich each of the steps: 

 

  1. Data preparation - Prepare and load your data. 
  2. Modeling - Build models using deep learning algorithms. 
  3. Evaluating - Evaluate your model. Depending on the metric you're interested in optimizing, you may want to explore different methods of evaluation.
  4. Model selection and optimization - Tune and optimize hyperparameters to get the highest-performing model. Because computer vision models are often computationally costly, we show you how to seamlessly scale your parameter tuning into Azure.
  5. Operationalizing - Operationalize models in a production environment on Azure by deploying them onto Kubernetes.
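Step 4 amounts to searching a hyperparameter space for the configuration with the best validation score. A minimal grid-search sketch in plain Python (the objective function and grid values below are hypothetical stand-ins for a real training run):

```python
import itertools

def validation_accuracy(learning_rate, batch_size):
    """Hypothetical stand-in for 'train a model, return validation accuracy'.
    This toy objective peaks at lr=0.01, batch_size=32."""
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 32) / 1000

grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

best_score, best_params = float("-inf"), None
for lr, bs in itertools.product(grid["learning_rate"], grid["batch_size"]):
    score = validation_accuracy(lr, bs)
    if score > best_score:
        best_score, best_params = score, {"learning_rate": lr, "batch_size": bs}

print(best_params)  # {'learning_rate': 0.01, 'batch_size': 32}
```

Each grid point is independent, which is why this kind of sweep parallelizes well onto cloud compute.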

 

Inside the computer vision recipes repo, we have added many utilities to support common tasks such as loading datasets in the format expected by different algorithms, splitting training/test data, and evaluating model outputs.
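A typical split utility of this sort might look like the following sketch (the function name and signature are illustrative, not the repo's actual API):

```python
import random

def train_test_split(items, test_fraction=0.2, seed=42):
    """Shuffle deterministically, then split a list of samples into
    (train, test) so repeated runs produce the same split."""
    rng = random.Random(seed)
    shuffled = items[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 80 20
```

Fixing the seed matters: it keeps the held-out test images stable across experiments, so evaluation numbers remain comparable.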

 

Azure Machine Learning  

This computer vision repository also integrates deeply with the Azure Machine Learning service to complement your local work. We provide code examples showing how you can optionally and easily scale your training into the cloud, and how you can deploy your models for production workloads.

 

Azure Cognitive Services 

Note that for certain computer vision problems, you may not need to build your own models. Instead, pre-built or easily customizable solutions exist that do not require any custom coding or machine learning expertise.

 

  • Vision Services are a set of pre-trained REST APIs which can be called for image tagging, OCR, video analytics, and more. These APIs work out of the box and require minimal expertise in machine learning but have limited customization capabilities. See the various demos available to get a feel for the functionality (e.g. Computer Vision). 
  • Custom Vision is a SaaS service for training and deploying a model as a REST API given a user-provided training set. All steps, including image upload, annotation, and model deployment, can be performed using either the UI or a Python SDK. Training image classification or object detection models requires minimal machine learning expertise. Custom Vision offers more flexibility than the pre-trained Cognitive Services APIs but requires the user to bring and annotate their own data.

 

Before using the Computer Vision repository, we strongly recommend evaluating whether these services can sufficiently solve your problem.
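Calling one of these pre-trained REST APIs typically reduces to a single authenticated HTTP request. A standard-library sketch that builds (but does not send) a tagging request against the Computer Vision Analyze endpoint; the endpoint host and key are placeholders you would replace with your own resource's values:

```python
import json
import urllib.request

# Placeholders: substitute your own resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-subscription-key>"

def build_analyze_request(image_url):
    """Build (but do not send) a request asking the service to tag an image."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT + "/vision/v3.0/analyze?visualFeatures=Tags",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": API_KEY,
        },
        method="POST",
    )

req = build_analyze_request("https://example.com/dog.jpg")
# urllib.request.urlopen(req) would return JSON describing the image's tags.
```

If a request like this already answers your question, there is no model to train, host, or maintain.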

 

Scenario Example: Object Detection 

To give you a sense of how you can use our repo to build a state of the art (SOTA) model, here is a preview of how simple it is to create an Object Detection model. Of course, you can go much deeper and add custom PyTorch code, but getting started is as simple as this: 

 

1. Load your data 

The first step is to load your data – we help you do this with a simple object that automatically parses your data and the annotations: 

 

 

 

from utils_cv.detection.data import DetectionLoader 
data = DetectionLoader("path/to/data") 

 

 

 

 

2. Train/fine-tune your model 

Then we create a 'learner' object that helps you manage and train your model. By default, it will use torchvision's Faster R-CNN model. But you can easily switch it out. 

 

 

 

from utils_cv.detection.model import DetectionLearner 
detector = DetectionLearner(data) 
detector.fit() 

 

 

 

 

3. Evaluate 

Finally, let's evaluate our model using the built-in helper functions. We can look at the precision and recall curves to get a sense of how our model is performing.

 

 

 

from utils_cv.detection.plot import plot_pr_curves
results = detector.evaluate()  # avoid shadowing the built-in eval()
plot_pr_curves(results)
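Under the hood, a precision-recall curve just sweeps a confidence threshold over ranked predictions. A minimal sketch in plain Python, on toy data (illustrative only, not the repo's evaluation code):

```python
def pr_curve(scored_predictions):
    """scored_predictions: list of (score, is_true_positive) pairs.
    Returns (precisions, recalls), one value per threshold,
    sweeping from the highest-scoring prediction down."""
    ranked = sorted(scored_predictions, key=lambda p: p[0], reverse=True)
    total_positives = sum(1 for _, hit in ranked if hit)
    precisions, recalls, tp = [], [], 0
    for i, (_, hit) in enumerate(ranked, start=1):
        tp += hit                       # true positives seen so far
        precisions.append(tp / i)       # correct / predicted
        recalls.append(tp / total_positives)  # correct / all actual positives
    return precisions, recalls

# Toy detections: (confidence score, whether the detection was correct).
p, r = pr_curve([(0.9, True), (0.8, True), (0.7, False), (0.6, True)])
print(p)  # [1.0, 1.0, 0.666..., 0.75]
print(r)  # [0.333..., 0.666..., 0.666..., 1.0]
```

Lowering the threshold raises recall but can cost precision; the curve makes that trade-off visible at a glance.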

 

 

 

 

As we continue to build out our repository, we will be looking for new computer vision scenarios to unlock. Feel free to reach out to cvbp@microsoft.com or post an issue if there is a scenario you would like to see us cover.
 

The Data Science Process

The Computer Vision Recipes GitHub repository shows you how to approach the five key steps of the data science process and provides utilities to enrich each step:

1. Data preparation - Prepare and load your data.
2. Modeling - Build models using deep learning algorithms.
3. Evaluating - Evaluate your model. Depending on the metric you're interested in optimizing, you may want to explore different methods of evaluation.
4. Model selection and optimization - Tune and optimize hyperparameters to get the highest-performing model. Because computer vision models are often computationally costly, we show you how to seamlessly scale your parameter tuning into Azure.
5. Operationalizing - Operationalize models in a production environment on Azure by deploying them onto Kubernetes.
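The five steps above can be sketched end to end. The toy example below is ours, not the repo's API: it walks a trivial 1-D "classifier" through data preparation, modeling, and evaluation, with the remaining steps noted in comments.

```python
import random

def prepare_data(n=200, seed=0):
    # Step 1: data preparation -- synthetic 1-D features; label is True iff x > 0.5
    rng = random.Random(seed)
    return [(x, x > 0.5) for x in (rng.random() for _ in range(n))]

def split(data, train_frac=0.8):
    # Utility: split into training and held-out sets
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

def fit_threshold(train):
    # Step 2: modeling -- pick the decision threshold maximizing training accuracy
    return max((x for x, _ in train),
               key=lambda t: sum((x > t) == y for x, y in train))

def accuracy(threshold, data):
    # Step 3: evaluation on held-out data
    return sum((x > threshold) == y for x, y in data) / len(data)

train, heldout = split(prepare_data())
threshold = fit_threshold(train)
score = accuracy(threshold, heldout)
# Steps 4 and 5 (model selection/optimization, operationalizing) would compare
# several trained models and then wrap the winner in a deployed service.
```

A real project swaps the threshold "model" for a deep network, but the loop stays the same shape.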
Inside the computer vision recipes repo, we have added many utilities to support common tasks such as loading datasets in the format expected by different algorithms, splitting training/test data, and evaluating model outputs.

Azure Machine Learning

This computer vision repository also has deep integration with the Azure Machine Learning service (https://docs.microsoft.com/azure/machine-learning/) to complement your work locally. We provide code examples showing how you can optionally and easily scale your training into the cloud, and how you can deploy your models for production workloads.
Azure Cognitive Services

Note that for certain computer vision problems, you may not need to build your own models. Instead, pre-built or easily customizable solutions exist which do not require any custom coding or machine learning expertise.

- Vision Services (https://docs.microsoft.com/azure/cognitive-services/computer-vision/) are a set of pre-trained REST APIs which can be called for image tagging, OCR, video analytics, and more. These APIs work out of the box and require minimal machine learning expertise, but have limited customization capabilities. See the various demos available to get a feel for the functionality (e.g. Computer Vision).
- Custom Vision (https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/) is a SaaS service to train and deploy a model as a REST API given a user-provided training set. All steps, including image upload, annotation, and model deployment, can be performed using either the UI or a Python SDK. Training image classification or object detection models requires minimal machine learning expertise. Custom Vision offers more flexibility than the pre-trained Cognitive Services APIs, but requires the user to bring and annotate their own data.
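To make the pre-trained option concrete, here is a rough sketch of how a call to the Computer Vision analyze endpoint is shaped. The helper name and defaults are ours, and the request only succeeds with a real subscription key, but the URL pattern, the Ocp-Apim-Subscription-Key header, and the JSON body follow the service's documented REST interface:

```python
import json
import urllib.request

def build_analyze_request(region, key, image_url, features="Description,Tags"):
    # Hypothetical helper: assembles (but does not send) an analyze request
    url = (f"https://{region}.api.cognitive.microsoft.com"
           f"/vision/v2.0/analyze?visualFeatures={features}")
    headers = {
        "Ocp-Apim-Subscription-Key": key,  # your Cognitive Services key
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = build_analyze_request("westus", "<your-key>", "https://example.com/photo.jpg")
# urllib.request.urlopen(req) would return JSON with tags and a caption.
```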
Before using the Computer Vision repository, we strongly recommend evaluating whether these services can sufficiently solve your problem.

Scenario Example: Object Detection

To give you a sense of how you can use our repo to build a state-of-the-art (SOTA) model, here is a preview of how simple it is to create an object detection model. Of course, you can go much deeper and add custom PyTorch code, but getting started is as simple as this:

1. Load your data

The first step is to load your data. We help you do this with a simple object that automatically parses your data and the annotations:

```python
from utils_cv.detection.data import DetectionLoader

data = DetectionLoader("path/to/data")
```
2. Train/fine-tune your model

Then we create a 'learner' object that helps you manage and train your model. By default, it uses torchvision's Faster R-CNN model, but you can easily switch it out.

```python
from utils_cv.detection.model import DetectionLearner

detector = DetectionLearner(data)
detector.fit()
```
3. Evaluate

Finally, let's evaluate our model using the built-in helper functions. We can look at the precision and recall curves to get a sense of how our model is performing.

```python
from utils_cv.detection.plot import plot_pr_curves

eval = detector.evaluate()
plot_pr_curves(eval)
```

As we continue to build out the repository, we will be looking for new computer vision scenarios to unlock. Feel free to reach out to cvbp@microsoft.com or post an issue if you wish to see us cover a scenario.
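One last detail for readers new to detection metrics: the precision-recall curves in step 3 come from sweeping the detector's confidence threshold. At each cut, precision = TP / (TP + FP) and recall = TP / (number of ground-truth objects). This stdlib-only sketch is ours, not the repo's implementation:

```python
def pr_curve(scored_preds, num_gt):
    """scored_preds: list of (confidence, is_true_positive); num_gt: ground-truth count."""
    preds = sorted(scored_preds, key=lambda p: -p[0])  # highest confidence first
    points, tp, fp = [], 0, 0
    for _, is_tp in preds:
        tp += is_tp
        fp += not is_tp
        points.append((tp / (tp + fp), tp / num_gt))  # (precision, recall)
    return points

# Four detections against four ground-truth objects; one is a false positive.
pts = pr_curve([(0.9, True), (0.8, True), (0.6, False), (0.4, True)], num_gt=4)
```

Lowering the threshold raises recall at the cost of precision, which is exactly the trade-off the plotted curve visualizes.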
Last update: Mar 03 2020 09:53 AM