
Azure Cognitive Services provides a suite of AI services and APIs that let developers work with AI technologies without deep expertise in machine learning. This post covers how two of these services, Custom Vision and the Bing Image Search APIs, can be used together with a .NET Core console application for rapid prototyping of Custom Vision models.

Custom Vision is a service that lets users build and deploy customized computer vision models using their own image datasets. The process of training a customized computer vision model is simplified because the machine learning under the hood is managed entirely by Azure; only the image data for the model needs to be supplied by the user. A separate user interface is also provided as part of the Custom Vision service, which makes it very simple to understand and use.

The Bing Image Search APIs execute a search query and return a set of image results, functioning very similarly to an image search on the web version of Bing Image Search. Query filters can also be applied to refine the results, e.g. filtering for specific colours or selecting the image type (photograph, clipart, GIF). The image below shows the Bing Image Search APIs through a visual interface where users can try their own search terms and apply query filters such as image type and content freshness.

taeyh_0-1595984669709.png

 
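To make the query-filter idea concrete, here is a minimal Python sketch of how a filtered Bing Image Search request could be constructed. The endpoint, parameter names, and header follow the v7 REST API, but treat the specific values (and the placeholder key) as illustrative rather than as the console application's actual code:

```python
from urllib.parse import urlencode

# Bing Image Search v7 endpoint (assumed; confirm against your own resource).
BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"

def build_bing_search_request(query, count=50, offset=0,
                              image_type="Photo", color=None):
    """Build the URL and headers for a filtered Bing Image Search query."""
    params = {"q": query, "count": count, "offset": offset,
              "imageType": image_type}
    if color:
        # e.g. "Red" biases results toward predominantly red images
        params["color"] = color
    url = f"{BING_ENDPOINT}?{urlencode(params)}"
    # The subscription key comes from your Azure resource's Keys blade.
    headers = {"Ocp-Apim-Subscription-Key": "<your-bing-search-key>"}
    return url, headers

url, headers = build_bing_search_request("Red Apple", count=50, color="Red")
```

Issuing a GET against `url` with `headers` returns JSON whose `value` array carries one entry per image, including a `contentUrl` that can be fed straight into Custom Vision.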

To create a Custom Vision model, it is recommended to have at least 50 images for each label before beginning to train. This can be a time-consuming process, especially when you have no pre-existing datasets and are looking to prototype multiple models. By using a combination of the Bing Image Search APIs and the Custom Vision REST APIs, the process of populating a Custom Vision project with tagged images can be accelerated, and once all the images are in the project and tagged, a model can immediately be trained. The flow of this process is captured in a .NET Core console application that can easily be altered to test different Bing Image Search terms and query filters, to understand what results are returned and how to further improve the model. The diagram below shows the flow between the components of this application.

 

taeyh_6-1595984229629.png

 
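The populate step of that flow hands the image URLs returned by Bing to the Custom Vision Training REST API, which accepts batches of URLs tagged in a single call (the `CreateImagesFromUrls` operation, posted to `.../Training/projects/{projectId}/images/urls` with a `Training-Key` header). A small sketch of building that request body, with the 64-images-per-batch limit assumed from the Training API documentation:

```python
import json

def build_upload_batch(image_urls, tag_id, batch_limit=64):
    """Build the JSON body for the Custom Vision 'create images from URLs'
    training operation. The API accepts a limited number of images per
    call (assumed 64 here), so only one batch is emitted."""
    batch = image_urls[:batch_limit]
    return json.dumps({
        "images": [{"url": u, "tagIds": [tag_id]} for u in batch]
    })
```

In the console application's flow, `tag_id` would be the identifier returned when the tag (e.g. "Apple") is first created in the project; a run over many results simply loops this builder over successive batches.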

After creating the necessary resources on Azure, the console application of this solution can be opened to specify the name of each tag and the search term to be queried in Bing Image Search. In this example, two subjects are set: the first with a tag name of "Apple" and a search term of "Red Apple", and the second with a tag name of "Pear" and a search term of "Green Pear". The console application is then run, and the user supplies all the required values such as the resource keys. The application then carries out the search queries and populates the specified Custom Vision project. Once the application has finished running, the Custom Vision project should be populated with tagged images of red apples and green pears. To train the model, the user can choose between two options: quick training and advanced training.
 

  • Quick training trains the model in a few minutes, which is good for quickly testing simpler models.
  • Advanced training allocates compute over a selected amount of time to train a more in-depth model.

taeyh_2-1595983281981.png

 

In this example, within 2 minutes of selecting the quick training option, a model for distinguishing between apples and pears has been trained. To test the model, I used a photo of an apple at home, which was correctly identified as an apple. If the user wanted to expand on this and include more fruit in the model, this can easily be done with very minor changes to the code. Alternatively, by changing the count & offset values when running the console application, more images of apples and pears can be added to the project to retrain an updated model.

 

taeyh_3-1595983281971.png

 
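Testing a photo like the apple above can also be scripted against a published iteration's prediction endpoint. A hedged sketch of building that request (the URL shape and `Prediction-Key` header follow the v3.0 Prediction REST API; the iteration name is a placeholder):

```python
def build_prediction_request(endpoint, project_id, published_name):
    """Build the URL and headers to classify a local image file against
    a published Custom Vision iteration. The raw image bytes are sent
    as the POST body, hence the octet-stream content type."""
    url = (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
           f"/classify/iterations/{published_name}/image")
    headers = {"Prediction-Key": "<your-prediction-key>",
               "Content-Type": "application/octet-stream"}
    return url, headers
```

The JSON response lists each tag with a probability, from which the top-scoring tag ("Apple" in this example) is taken as the classification.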

More detailed steps on running this solution are available in the Readme of the GitHub repository. The solution can be used not just to classify between apples and pears but for any other examples you have in mind; I have also used it to create a Custom Vision model that classifies between 5+ different car models. At the time of writing, this solution can be run on the free tiers of both Custom Vision and the Bing Image Search APIs, so please feel free to try it in your own environment.

Link to GitHub Repository: https://github.com/taeyh/customvision-bingsearch