# Vision on the Edge

## Introduction

Visual inspection of products, resources, and environments has long been a core practice for most enterprises and was, until recently, a very manual process. An individual, or a group of individuals, was
responsible for manually inspecting the asset or environment, which, depending on the circumstances, could be inefficient, inaccurate, or both, due to human error and limitations.

To improve the efficacy of visual inspection, enterprises began turning to deep learning artificial neural networks known as convolutional neural networks (CNNs) to emulate human vision for the analysis of images and video. Today this is commonly called computer vision, or simply Vision AI. Artificial intelligence for image analytics spans a wide variety of industries, including manufacturing, retail, healthcare, and the public sector, and an equally wide range of use cases.

**Vision as Quality Assurance** – In manufacturing environments, quality inspection of parts and processes with a high degree of accuracy and velocity is one of the primary use cases for Vision AI. An enterprise pursuing this path automates the inspection of a product for defects to answer questions such as:

- Is the manufacturing process producing consistent results?
- Is the product assembled properly?
- Can I get notification of a defect sooner to reduce waste?
- How can I leverage drift in my computer vision model to prescribe predictive maintenance?

**Vision as Safety** – In any environment,
safety is a fundamental concern for every enterprise, and the reduction of risk is a driving force for adopting Vision AI. Automated monitoring of video feeds to scan for potential safety issues affords critical time to respond to incidents, and opportunities to reduce exposure to risk. Enterprises looking at Vision AI for this use case are commonly trying to answer questions such as:

- How compliant is my workforce with using personal protective equipment?
- How often are people entering unauthorized work zones?
- Are products being stored in a safe manner?
- Are there unreported close calls in a facility, i.e. pedestrian/equipment "near misses"?

## Why vision on the Edge

Over the past decade, computer vision has become a rapidly evolving area of focus for enterprises, as cloud-native technologies such as containerization have enabled portability and migration of this technology toward the network edge. For instance, a custom vision inference model trained in the cloud can easily be containerized for use on an Azure IoT Edge runtime-enabled device.

The rationale behind migrating Vision AI workloads from the cloud to the edge generally falls into two categories: performance and cost.

On the performance side, exfiltrating large quantities of data can put an unintended strain on existing network infrastructure. Additionally, the latency of sending images and/or video streams to the cloud and waiting for results may not meet the needs of the use case. For instance, a person straying into an unauthorized area may require immediate intervention, and that scenario can ill afford latency when every second counts. Positioning the inferencing model near the point of ingest allows for near-real-time scoring of the image, and alerting can be performed either locally or through the cloud, depending on the network topology.

In terms of cost, sending all of the data to the cloud for analysis could significantly impact the ROI of a Vision AI initiative. With Azure IoT Edge, a Vision AI module can be designed to capture only the relevant images, i.e. those scored with a reasonable confidence level, which significantly limits the amount of data being sent.

The purpose of this document is to give concrete guidance on some of the key decisions in designing an end-to-end vision on the edge solution. Specifically, we will address:

- [Camera selection and placement](https://github.com/AzureIoTGBB/iot-edge-vision/blob/master/documentation/guidance.md#camera-considerations)
- [Hardware acceleration](https://github.com/AzureIoTGBB/iot-edge-vision/blob/master/documentation/guidance.md#hardware-acceleration)
- [Machine learning and data science](https://github.com/AzureIoTGBB/iot-edge-vision/blob/master/documentation/guidance.md#machine-learning-and-data-science)
- [Image storage and management](https://github.com/AzureIoTGBB/iot-edge-vision/blob/master/documentation/guidance.md#image-storage-and-management)
- [Persistence of alerts](https://github.com/AzureIoTGBB/iot-edge-vision/blob/master/documentation/guidance.md#inferencing-results-persistence)
- [User Interface](https://github.com/AzureIoTGBB/iot-edge-vision/blob/master/documentation/guidance.md#user-interface)

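To make the cost point above concrete, an edge-side filter can score each frame locally and forward only the frames whose confidence makes them worth the bandwidth. The sketch below is illustrative only; the message shape and the 0.70 threshold are assumptions for this example, not part of any Azure SDK:

```python
# Minimal sketch of edge-side filtering: forward only scored frames whose
# confidence justifies the upload cost. The dict shape and 0.70 threshold
# are illustrative assumptions, not a specific Azure IoT Edge API.

def select_frames_to_upload(scored_frames, confidence_threshold=0.70):
    """Return only the frames whose score meets the confidence threshold."""
    return [f for f in scored_frames if f["confidence"] >= confidence_threshold]

# Example: only one of three scored frames is worth sending to the cloud.
frames = [
    {"frame_id": 1, "label": "defect", "confidence": 0.91},
    {"frame_id": 2, "label": "defect", "confidence": 0.42},
    {"frame_id": 3, "label": "ok",     "confidence": 0.55},
]
print([f["frame_id"] for f in select_frames_to_upload(frames)])  # → [1]
```

In a real module this selection logic would sit between the inferencing step and the IoT Edge hub route; capture, scoring, and transport are out of scope for the sketch.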
## Camera Considerations

### Camera selection

One of the most critical components of any vision workload is selecting the correct camera. The items being identified must be presented in such a way that a computer's artificial intelligence or machine learning models can evaluate them correctly. To understand this, you need to understand the different camera types that can be used. One thing to note as we move forward: there are many different manufacturers of area, line, and smart cameras. Microsoft does not recommend any one vendor over another; instead, we recommend that you select a vendor that fits your specific needs.

#### Area Scan Cameras

This is the more traditional camera image, where a 2D image is captured and then sent to the Edge hardware to be evaluated. This type of camera typically has a matrix of pixel sensors.

**When should you use an Area Scan Camera?** As the name suggests, Area Scan Cameras look at a large area and are great at detecting change in that area. Some examples of workloads that would use an Area Scan Camera are workplace safety, and detecting or counting objects (people, animals, cars, etc.) in an environment.

Examples of manufacturers of Area Scan Cameras are [Basler](https://www.baslerweb.com/en/products/industrial-cameras/), [Axis](https://www.axis.com/en-us), [Sony](https://www.sony.co.jp/Products/ISP/products/), [Bosch](https://commerce.boschsecurity.com/us/en/IP-Cameras/c/10164917899),
[FLIR](https://www.flir.com/), and [Allied Vision](https://www.alliedvision.com/en/digital-industrial-camera-solutions.html).

#### Line Scan Cameras

Unlike Area Scan Cameras, a Line Scan Camera has a single row of linear pixel sensors. The camera captures one-pixel-wide images in very quick succession and stitches them into a video stream that is sent to an Edge device for processing.

**When should you use a Line Scan Camera?** Line Scan Cameras are great for vision workloads where the items to be identified are moving past the camera, or where items need to be rotated to detect defects. The Line Scan Camera then produces a continuous image stream that can be evaluated. Some examples of workloads that work best with a Line Scan Camera are defect detection on parts moving on a conveyor belt, workloads that require spinning a cylindrical object to inspect its surface, or any workload that requires rotation.

Examples of manufacturers of Line Scan Cameras are [Basler](https://www.baslerweb.com/en/products/industrial-cameras/), [Teledyne DALSA](https://www.teledynedalsa.com/en/home/), [Hamamatsu Corporation](https://www.hamamatsu.com/us/en/index.html?nfxsid=5ede4ac8e12e41591626440), [DataLogic](https://www.datalogic.com/), [Vieworks](https://vieworks.com/), and [Xenics](https://www.xenics.com/).

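The stitching behavior described for Line Scan Cameras can be pictured with a few lines of NumPy: the sensor emits one-pixel-high rows, and the processing side stacks them into a 2D frame for evaluation. The 640-pixel row width and 480-line frame height below are arbitrary example values, not any vendor's specification:

```python
# Conceptual sketch of line scan stitching: stack N one-pixel-high scan
# lines into a single 2D frame that an edge device can then evaluate.
import numpy as np

def stitch_rows(rows):
    """Stack 1-pixel-high scan lines (each of shape (width,)) into an image."""
    return np.stack(rows, axis=0)

# Simulate 480 scan lines from a 640-pixel-wide line sensor.
rng = np.random.default_rng(0)
scan_lines = [rng.integers(0, 256, size=640, dtype=np.uint8) for _ in range(480)]
frame = stitch_rows(scan_lines)
print(frame.shape)  # → (480, 640)
```
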
#### Embedded Smart Camera

This type of camera can use either an Area Scan or a Line Scan Camera for image acquisition, though the Line Scan Smart Camera is rare. The main feature of this camera is that it not only acquires the image but also processes it, as it is a self-contained, stand-alone system. Smart Cameras typically have either an RS232 or an Ethernet port output, which allows them to be integrated directly into a PLC or other IIoT interfaces.

Examples of manufacturers of Embedded Smart Cameras are [Basler](https://www.baslerweb.com/en/products/industrial-cameras/) and [Leuze electronic](https://www.leuze.com/en/usa/).

#### Other camera features to consider

- **Sensor size:** This is one of the most important factors to evaluate in any vision workload. The sensor is the hardware within a camera that captures light
and converts it into the signals that produce an image. The sensor contains millions of semiconducting photodetectors called photosites. A common misconception is that a higher megapixel count always means a better image. For example, consider two different sensor sizes for a 12-megapixel camera: camera A has a ½-inch sensor with 12 million photosites, and camera B has a 1-inch sensor with 12 million photosites. In the same lighting conditions, the image from the 1-inch sensor will be cleaner and sharper, because its larger photosites collect more light. Most cameras typically used in vision workloads have a sensor between ¼ inch and 1 inch; in some cases, much larger sensors might be required. ***If you have a choice between a larger and a smaller sensor, some reasons to choose the larger sensor are:***
  - the need for precision measurements
  - lower light conditions
  - shorter exposure times, i.e. fast-moving items
- **Resolution:** This is another very important factor in both Line Scan and Area Scan camera workloads. If your workload must identify fine features (e.g. the writing on an IC chip), if it is trying to detect faces, or if it needs to identify a vehicle
from a distance, then it requires a higher-resolution camera.
- **Speed:** Sensors come in two types: CCD and CMOS. If the vision workload requires a high capture rate (images per second), two factors come into play: how fast the camera's interface connection is, and what type of sensor it uses. CMOS sensors have a direct readout from the photosites, and because of this they typically offer a higher frame rate.

> NOTE: There are more camera features to consider when selecting the correct camera for your vision workload. These include lens selection, focal length, monochrome versus color, color depth, stereo depth, triggers, physical size, and support. Sensor manufacturers can help you understand the specific features that your application may require.

### Camera Placement (location, angle, lighting, etc.)

The items you are capturing in your vision workload determine the location and angles at which the camera should be placed.
The camera location can also affect the sensor type, lens type, and camera body type. There are several key concepts to keep in mind when finding the right spot for the camera.

Several different factors can weigh into the overall decision for camera placement. Two of the most critical are lighting and field of view.

#### Camera Lighting

In a computer vision workload, lighting is a critical component of camera placement. There are several different lighting conditions, and while a given condition may be useful for one vision workload, it might produce an undesirable effect in another. Types of lighting commonly used in computer vision workloads are:

- **Direct lighting:** This is the most commonly used lighting condition. The light source is projected at the object to be captured for evaluation.

- **Line lighting:** This is a single array of lights, most often used with line scan camera applications, that creates a single line of light where the camera is focused.

- **Diffused lighting:** This type of lighting illuminates an object while preventing harsh shadows, and is mostly used around specular (reflective) objects.

- **Back lighting:** This type of light source is placed behind the object, producing a silhouette of it. It is most useful for taking measurements, edge detection, or determining object orientation.

- **Axial diffused lighting:** This type of light source is often used with highly reflective objects, or to prevent shadows on the part being captured for evaluation.

- **Custom grid lighting:** This is a structured lighting condition that lays out a grid of light on the object. The intent is to project a known grid that then provides more accurate measurements of components, parts, placement of items, and so on.

- **Strobe lighting:** Strobe lighting is used for high-speed moving parts. The strobe must be synchronized with the camera to take a "freeze" of the object for evaluation; this lighting helps to prevent motion blur.

- **Dark field lighting:** This type of light source uses
several lights at different angles to the part. For example, if the part is lying flat on a conveyor belt, the lights would be placed at a 45-degree angle to it. This type of lighting is most useful when looking at highly reflective clear objects, and is most commonly used for lens scratch detection.

  Angular placement of light:

  ![Angular placement of light](https://techcommunity.microsoft.com/t5/image/serverpage/image-id/226917iC4FF7ACB26D39B96/image-size/large?v=1.0&px=999)

#### Field of View

In a vision workload, you need to know the distance to the object you are trying to evaluate. This plays a part in camera selection, sensor selection, and lens configuration. Some of the components that make up the field of view are:

- **Distance to object(s):** For example, is the object being monitored on a conveyor belt with the camera two feet above it, or is it across a parking lot? As the distance changes, so do the camera's sensor and lens configurations.
- **Area of coverage:** Is the area that the computer vision workload is trying to monitor small or large? This correlates directly to the camera's overall resolution, lens, and sensor type.
- **Direction of the sun:** If the computer vision workload is outdoors, such as monitoring a construction site for worker safety, will the camera be pointed into the sun at any time? Keep in mind that if the sun casts a shadow over the object being monitored, items might be somewhat obscured. Also, if the camera gets direct sunlight in the lens, it might be "blinded" until the angle of the sun changes.
- **Camera angle to the object(s):** The angle of the camera to the object being monitored is also a critical component to think about. If the camera is too high, it might miss the details the vision workload is trying to monitor, and the same may be true if it is too low.

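A back-of-the-envelope calculation that ties distance, area of coverage, and resolution together is pixels per unit of real-world width: the sensor's horizontal resolution divided by the width that the field of view covers at the object. All of the numbers in this sketch (sensor resolution, scene width, feature size) are illustrative assumptions:

```python
# Rough feasibility check: how many pixels land on the smallest feature
# the workload must detect? All numeric values are example assumptions.

def pixels_on_feature(horizontal_resolution_px, fov_width_mm, feature_width_mm):
    """Pixels across a feature, given the field-of-view width at the object."""
    pixels_per_mm = horizontal_resolution_px / fov_width_mm
    return pixels_per_mm * feature_width_mm

# A 1920-px-wide sensor covering a 480 mm wide scene yields 4 px/mm,
# so a 2 mm scratch is imaged across 8 pixels.
print(pixels_on_feature(1920, 480.0, 2.0))  # → 8.0
```

If the result falls below the pixels-per-feature rule of thumb your model needs, either move the camera closer (shrinking the area of coverage) or choose a higher-resolution sensor.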
en%3D%22true%22%3E%3C%2FA%3ECommunication%20Interface%3C%2FH3%3E%0A%3CP%3EIn%20building%20a%20computer%20vision%20workload%20it%20is%20also%20important%20to%20understand%20how%20the%20system%20will%20interact%20with%20the%20output%20of%20the%20camera.%20Below%20are%20a%20few%20of%20the%20standard%20ways%20that%20a%20camera%20will%20communicate%20to%20IoT%20Edge%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CP%3E%3CSTRONG%3EReal%20Time%20Streaming%20Protocol%20(RTSP)%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3ERTSP%20is%20a%20protocol%20that%20transfers%20real-time%20video%20data%20from%20a%20device%20(in%20our%20case%20the%20camera)%20to%20an%20endpoint%20device%20(Edge%20compute)%20directly%20over%20a%20TCP%2FIP%20connection.%20It%20follows%20a%20client-server%20model%20and%20operates%20at%20the%20application%20layer%20of%20the%20network.%3C%2FP%3E%0A%3C%2FLI%3E%0A%3CLI%3E%3CP%3E%3CSTRONG%3EOpen%20Network%20Video%20Interface%20Forum%20(ONVIF)%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Ea%20global%20and%20open%20industry%20forum%20that%20is%20developing%20open%20standards%20for%20IP-based%20cameras.%20This%20standard%20aims%20to%20standardize%20communication%20between%20IP%20cameras%20and%20downstream%20systems%2C%20to%20ensure%20interoperability%2C%20and%20to%20remain%20open.%3C%2FP%3E%0A%3C%2FLI%3E%0A%3CLI%3E%3CP%3E%3CSTRONG%3EUSB%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EUnlike%20RTSP%20and%20ONVIF%2C%20USB-connected%20cameras%20connect%20over%20the%20Universal%20Serial%20Bus%20directly%20on%20the%20Edge%20compute%20device.%20This%20is%20less%20complex%3B%20however%2C%20it%20limits%20the%20distance%20that%20the%20camera%20can%20be%20placed%20from%20the%20Edge%20compute.%3C%2FP%3E%0A%3C%2FLI%3E%0A%3CLI%3E%3CP%3E%3CSTRONG%3ECamera%20Serial%20Interface%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EThe%20CSI%20specification%20comes%20from%20the%20Mobile%20Industry%20Processor%20Interface%20(MIPI)%20Alliance.%20It%20is%20an%20interface%20that%20describes%20how%20to%20communicate%20between%20a%20camera%20and%20a%20host%20processor.%3C%2FP%3E%0A%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThere%20are%20several%20standards%20defined%20for%20CSI%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CSTRONG%3ECSI-1%3C%2FSTRONG%3E%3A%20This%20was%20the%20original%20standard%20that%20MIPI%20started%20with.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3ECSI-2%3C%2FSTRONG%3E%3A%20This%20standard%20was%20released%20in%202005%2C%20and%20uses%20either%20D-PHY%20or%20C-PHY%20as%20physical%20layer%20options.%20It%20is%20further%20divided%20into%20several%20layers%3A%3COL%3E%0A%3CLI%3EPhysical%20Layer%20(C-PHY%2C%20D-PHY)%3C%2FLI%3E%0A%3CLI%3ELane%20Merger%20layer%3C%2FLI%3E%0A%3CLI%3ELow%20Level%20Protocol%20Layer%3C%2FLI%3E%0A%3CLI%3EPixel%20to%20Byte%20Conversion%20Layer%3C%2FLI%3E%0A%3CLI%3EApplication%20layer%3C%2FLI%3E%0A%3C%2FOL%3E%0A%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThe%20CSI-2%20specification%20was%20updated%20in%202017%20to%20v2.0%2C%20which%20added%20support%20for%20RAW-24%20color%20depth%2C%20Unified%20Serial%20Link%2C%20and%20Smart%20Region%20of%20Interest.%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId--1585452150%22%20id%3D%22toc-hId--1585452148%22%3E%3CA%20id%3D%22user-content-hardware-acceleration%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23hardware-acceleration%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EHardware%20Acceleration%3C%2FH2%3E%0A%3CP%3EAlong%20with%20the%20camera%20selection%2C%20one%20of%20the%20other%20critical%20decisions%20in%20Vision%20on%20the%20Edge%20projects%20is%20hardware%20acceleration.%20Options%20include%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CSTRONG%3ECPU%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EThe%20Central%20Processing%20Unit%20(CPU)%20is%20your%20default%20compute%20for%20most%20processes%20running%20on%20a%20computer%3B%20it%20is%20designed%20fo
r%20general-purpose%20compute.%20For%20some%20vision%20workloads%20where%20timing%20is%20not%20critical%2C%20the%20CPU%20might%20be%20a%20good%20option.%20However%2C%20most%20workloads%20that%20involve%20critical%20timing%2C%20multiple%20camera%20streams%2C%20and%2For%20high%20frame%20rates%20will%20require%20more%20specific%20hardware%20acceleration.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EGPU%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EMany%20people%20are%20familiar%20with%20the%20Graphics%20Processing%20Unit%20(GPU)%20as%20this%20is%20the%20de-facto%20processor%20for%20any%20high-end%20PC%20graphics%20card.%20In%20recent%20years%20the%20GPU%20has%20been%20leveraged%20in%20high%20performance%20computing%20(HPC)%2C%20data%20mining%2C%20and%20AI%2FML%20workloads.%20The%20GPU%E2%80%99s%20massive%20parallel-computing%20capability%20can%20be%20used%20in%20a%20vision%20workload%20to%20accelerate%20the%20processing%20of%20pixel%20data.%20The%20downside%20to%20a%20GPU%20is%20its%20higher%20power%20consumption%2C%20which%20is%20a%20critical%20factor%20to%20consider%20for%20your%20vision%20workload.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EFPGA%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EField%20Programmable%20Gate%20Arrays%20are%20reconfigurable%20hardware%20accelerators.%20These%20powerful%20accelerators%20support%20deep%20learning%20neural%20networks%2C%20which%20are%20still%20evolving.%20These%20accelerators%20have%20millions%20of%20programmable%20gates%2C%20hundreds%20of%20I%2FO%20pins%2C%20and%20exceptional%20compute%20power%20(measured%20in%20tera-MACs%2C%20or%20trillions%20of%20multiply-accumulate%20operations%20per%20second).%20There%20are%20also%20many%20libraries%20available%20for%20FPGAs%20that%20are%20optimized%20for%20vision%20workloads.%20Some%20of%20these%20libraries%20also%20include%20preconfigured%20interfaces%20to%20connect%20to%20downstream%20cameras%20and%20devices.%20One%20area%20where%20FPGAs%20tend%20to%20fall%20short%20is%20floating-point%20operations%3B%20however%2C%20manufacturers%20are%20working%20on%20this%20issue%20and%20have%20made%20significant%20improvements%20in%20this%20area.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EASIC%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EThe%20Application%20Specific%20Integrated%20Circuit%20(ASIC)%20is%20by%20far%20the%20fastest%20accelerator%20on%20the%20market%20today.%20While%20ASICs%20are%20the%20fastest%2C%20they%20are%20the%20hardest%20to%20change%20as%20they%20are%20manufactured%20to%20function%20for%20a%20specific%20task.%20These%20custom%20chips%20are%20gaining%20popularity%20due%20to%20their%20size%2C%20performance%20per%20watt%2C%20and%20IP%20protection%20(because%20the%20IP%20is%20burned%20into%20the%20ASIC%20accelerator%2C%20it%20is%20much%20harder%20to%20reverse%20engineer%20proprietary%20algorithms).%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH2%20id%3D%22toc-hId-902060683%22%20id%3D%22toc-hId-902060685%22%3E%3CA%20id%3D%22user-content-machine-learning-and-data-science%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23machine-learning-and-data-science%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EMachine%20learning%20and%20data%20science%3C%2FH2%3E%0A%3CP%3EDesigning%20the%20machine%20learning%20(ML)%20approach%20for%20a%20vision%20on%20the%20edge%20scenario%20is%20one%20of%20the%20biggest%20challenges%20in%20the%20entire%20planning%20process.%20Therefore%2C%20it%20is%20important%20to%20understand%20how%20to%20think%20about%20ML%20in%20the%20context%20of%20edge%20devices.%20Some%20of%20the%20considerations%20and%20hurdles%20are%20outlined%20below%20to%20help%20you%20begin%20to%20think%20about%20using%20machine%20learning%20to%20address%20business%20problems%20and%20pain%20points%2C%20with%20guidance%20including%3A%
3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EAlways%20consider%20first%20how%20to%20solve%20the%20problem%20without%20ML%20or%20with%20a%20simple%20ML%20algorithm%3C%2FLI%3E%0A%3CLI%3EHave%20a%20plan%20to%20test%20several%20ML%20architectures%2C%20as%20they%20will%20have%20different%20capacities%20to%20%22learn%22%3C%2FLI%3E%0A%3CLI%3EHave%20a%20system%20in%20place%20to%20collect%20new%20data%20from%20the%20device%20to%20retrain%20an%20ML%20model%3C%2FLI%3E%0A%3CLI%3EFor%20a%20poorly%20performing%20ML%20model%2C%20often%20a%20simple%20fix%20is%20to%20add%20more%20representative%20data%20to%20the%20training%20process%20and%20ensure%20it%20has%20variability%2C%20with%20all%20classes%20represented%20equally%3C%2FLI%3E%0A%3CLI%3ERemember%2C%20this%20is%20often%20an%20iterative%20process%2C%20with%20both%20the%20choice%20of%20data%20and%20the%20choice%20of%20architecture%20being%20updated%20in%20the%20exploratory%20phase%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EIt%20is%20not%20an%20easy%20space%20and%2C%20for%20some%2C%20a%20very%20new%20way%20of%20thinking.%20It%20is%20a%20data-driven%20process.%20Careful%20planning%20will%20be%20critical%20to%20successful%20results%2C%20especially%20on%20very%20constrained%20devices.%3C%2FP%3E%0A%3CP%3EIn%20ML%20it%20is%20always%20critical%20to%20clearly%20define%20the%20problem%20to%20be%20solved%2C%20because%20the%20data%20science%20and%20machine%20learning%20approach%20will%20depend%20upon%20it%2C%20and%20the%20more%20specific%20the%20problem%20statement%2C%20the%20easier%20the%20decisions%20will%20be.%20It%20is%20also%20very%20important%20to%20consider%20what%20type%20of%20data%20will%20be%20encountered%20in%20the%20edge%20scenario%2C%20as%20this%20will%20determine%20the%20kind%20of%20ML%20algorithm%20that%20should%20be%20used.%3C%2FP%3E%0A%3CP%3EEven%20at%20the%20start%2C%20before%20training%20any%20models%2C%20real-world%20data%20collection%20and%20examination%20will%20help%20this%20process%20greatly%2C%20and%20new%20ideas%20could%20even%20arise.%20Bel
ow%2C%20we%20will%20discuss%20data%20considerations%20in%20detail.%20Of%20course%2C%20the%20equipment%20itself%20will%20help%20determine%20the%20ML%20approach%20with%20regard%20to%20device%20attributes%20like%20limited%20memory%2C%20compute%2C%20and%2For%20power%20consumption%20limits.%3C%2FP%3E%0A%3CP%3EFortunately%2C%20data%20science%20and%20machine%20learning%20are%20iterative%20processes%2C%20so%20if%20the%20ML%20model%20has%20poor%20performance%2C%20there%20are%20many%20ways%20to%20address%20issues%20through%20experimentation.%20Below%2C%20we%20will%20discuss%20considerations%20around%20ML%20architecture%20choices.%20Often%2C%20there%20will%20be%20some%20trial%20and%20error%20involved%20as%20well.%3C%2FP%3E%0A%3CH3%20id%3D%22toc-hId--776311061%22%20id%3D%22toc-hId--776311059%22%3E%3CA%20id%3D%22user-content-machine-learning-data%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23machine-learning-data%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EMachine%20learning%20data%3C%2FH3%3E%0A%3CP%3EBoth%20the%20source(s)%20and%20attributes%20of%20data%20will%20dictate%20how%20the%20intelligent%20edge%20system%20is%20built.%20For%20vision%2C%20the%20streaming%20signal%20could%20be%20images%2C%20videos%2C%20or%20even%20LiDAR.%20Regardless%20of%20the%20signal%2C%20when%20training%20an%20ML%20model%20and%20using%20it%20to%20score%20new%20data%20(called%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Einferencing%3C%2FEM%3E)%2C%20domain%20knowledge%20will%20be%20required%2C%20such%20as%20experience%20in%20designing%20and%20using%20ML%20algorithms%20or%20neural%20network%20architectures%20and%20expertise%20deploying%20them%20to%20the%20specialized%20hardware.%20Below%20are%20a%20few%20considerations%20related%20to%20ML%3B%20however%2C%20it%20is%20recommended%20to%20gain%20some%20deeper%20knowledge%20in%20order%20to%20open%20up%20more%20possibilities%2C%20or%20to%20find%20an%20ML%20expert%20with%20edge%20experience%20to%20help%20with%20the%20project.%3C%2FP%3E%0A%3CP%3ECollecting%20and%20using%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Ebalanced%20dataset%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Eis%20critical%2C%20that%20is%2C%20equally%20representing%20all%20classes%20or%20categories.%20When%20the%20ML%20model%20is%20trained%20on%20a%20dataset%2C%20generally%20that%20dataset%20has%20been%20split%20into%20train%2C%20validation%2C%20and%20test%20subsets.%20The%20purpose%20of%20these%20subsets%20is%20as%20follows%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EThe%20training%20dataset%20is%20used%20for%20the%20actual%20model%20training%20over%20many%20passes%20or%20iterations%20(often%20called%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Eepochs%3C%2FEM%3E).%3C%2FLI%3E%0A%3CLI%3EThroughout%20the%20training%20process%2C%20the%20model%20is%20spot-checked%20for%20how%20well%20it%20is%20doing%20on%20the%20validation%20dataset.%3C%2FLI%3E%0A%3CLI%3EAfter%20a%20model%20is%20done%20training%2C%20the%20final%20step%20is%20to%20pass%20the%20test%20dataset%20through%20it%20and%20assess%20how%20well%20it%20did%20as%20a%20proxy%20for%20the%20real%20world.%20Note%3A%20be%20wary%20of%20optimizing%20for%20the%20test%20dataset%20(in%20addition%20to%20the%20training%20dataset)%20once%20one%20test%20has%20been%20run.%20It%20might%20be%20good%20to%20have%20a%20few%20different%20test%20datasets%20available.%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EOne%20piece%20of%20good%20news%20is%20that%20with%20deep%20learning%2C%20costly%20and%20onerous%20feature%20engineering%2C%20featurization%2C%20and%20preprocessing%20can%20often%20be%20avoided%2C%20because%20deep%20learning%20finds%20signal%20in%20noise%20better%20than%20traditional%20ML.%20However%2C%20in%20deep%20learning%2C%20transformations%20may%20still%20be%20utilized%20to%20clean%20or%20reformat%20data%20for%20model%20input%20during%20training%20as%20well%20as%20inference.%20Note%3A%20the%20same%20preprocessing%20needs%20to%20be%20used%20in%20training%20and%20when%20the%20model%20is%20scoring%20new%20data.%3C%2FP%3E%0A%3CP%3EWhen%20advanced%20preprocessing%20is%20used%2C%20such%20as%20de-noising%2C%20adjusting%20brightness%20or%20contrast%2C%20or%20transformations%20like%20RGB%20to%20HSV%2C%20note%20that%20this%20can%20dramatically%20change%20the%20model%20performance%20for%20the%20better%20or%2C%20sometimes%2C%20for%20the%20worse.%20In%20general%2C%20it%20is%20part%20of%20the%20data%20science%20exploration%20process%2C%20and%20sometimes%20it%20is%20something%20that%20must%20be%20observed%20once%20the%20device%20and%20other%20components%20are%20placed%20in%20a%20real-world%20location.%3C%2FP%3E%0A%3CP%3EAfter%20the%20hardware%20is%20installed%20in%20its%20permanent%20location%2C%20the%20incoming%20data%20stream%20should%20be%20monitored%20for%20data%20drift.%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CSTRONG%3EData%20drift%3C%2FSTRONG%3E%3A%20deviation%20of%20the%20current%20data%20from%20the%20original%20data.%20Data%20drift%20will%20often%20result%20in%20a%20degradation%20in%20model%20performance%20(like%20accuracy)%2C%20although%20this%20is%20not%20the%20only%20cause%20of%20decreased%20performance%20(e.g.%20hardware%20or%20camera%20failure).%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThere%20should%20be%20an%20allowance%20for%20data%20drift%20testing%20in%20the%20system.%20This%20new%20data%20should%20also%20be%20collected%20for%20another%20round%20of%20training%20(the%20more%20representative%20data%20collected%20for%20training%2C%20the%20better%20the%20model%20will%20perform%20in%20almost%20all%20cases)%3B%20therefore%2C%20preparing%20for%20this%20kind%20of%20collection%20is%20always%20a%20good%20idea.%3C%2FP%3E%0A%3CP%3EIn%20addition%20to%20using%20data%20for%20training%20and%20inference%2C%20new%20data%20coming%20from%20the%20device%20could%20be%20used%20to%20monitor%20the%20device%2C%20camera%
20or%20other%20components%20for%20hardware%20degradation.%3C%2FP%3E%0A%3CP%3EIn%20summary%2C%20here%20are%20the%20key%20considerations%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EAlways%20use%20a%20balanced%20dataset%20with%20all%20classes%20represented%20equally%3C%2FLI%3E%0A%3CLI%3EThe%20more%20representative%20data%20used%20to%20train%20a%20model%2C%20the%20better%3C%2FLI%3E%0A%3CLI%3EHave%20a%20system%20in%20place%20to%20collect%20new%20data%20from%20the%20device%20to%20retrain%3C%2FLI%3E%0A%3CLI%3EHave%20a%20system%20in%20place%20to%20test%20for%20data%20drift%3C%2FLI%3E%0A%3CLI%3EOnly%20run%20a%20test%20set%20through%20a%20new%20ML%20model%20once%20-%20if%20you%20iterate%20and%20retest%20on%20the%20same%20test%20set%2C%20this%20could%20cause%20overfitting%20to%20the%20test%20set%20in%20addition%20to%20the%20training%20set%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH3%20id%3D%22toc-hId-1711201772%22%20id%3D%22toc-hId-1711201774%22%3E%3CA%20id%3D%22user-content-machine-learning-architecture-choices%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23machine-learning-architecture-choices%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EMachine%20learning%20architecture%20choices%3C%2FH3%3E%0A%3CP%3EAn%20ML%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Earchitecture%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Eis%20the%20layout%20of%20the%20mathematical%20operations%20that%20process%20input%20into%20our%20desired%2C%20actionable%20output.%20For%20instance%2C%20in%20deep%20learning%20this%20would%20be%20the%20number%20of%20layers%20and%20neurons%20in%20each%20layer%20of%20a%20deep%20neural%20network%2C%20plus%20their%20arrangement.%20It%20is%20important%20to%20note%20that%20there%20is%20no%20guarantee%20that%20the%20performance%20metric%20goal%20(e.g.%20high%20enough%20accuracy)%20for%20one%20ML%20architecture%20will%20be%20achie
ved.%20To%20mitigate%20this%2C%20several%20different%20architectures%20should%20be%20considered.%20Often%2C%20two%20or%20three%20different%20architectures%20are%20tried%20before%20a%20choice%20is%20made.%20Remember%2C%20this%20is%20often%20an%20iterative%20process%2C%20with%20both%20the%20choice%20of%20data%20and%20the%20choice%20of%20architecture%20being%20updated%20in%20the%20exploratory%20phase%20of%20the%20development%20process.%3C%2FP%3E%0A%3CP%3EIt%20helps%20to%20understand%20the%20issues%20that%20can%20arise%20when%20training%20an%20ML%20model%2C%20some%20of%20which%20may%20only%20be%20seen%20after%20training%20or%20even%20at%20the%20point%20of%20inferencing%20on%20the%20device.%3C%2FP%3E%0A%3CP%3EIn%20the%20training%20and%20testing%20process%2C%20one%20should%20keep%20an%20eye%20out%20for%20overfitting%20and%20underfitting%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CP%3E%3CSTRONG%3EOverfitting%3C%2FSTRONG%3E%3A%20can%20give%20a%20false%20sense%20of%20success%20because%20the%20performance%20metric%20(like%20accuracy)%20might%20be%20very%20good%20when%20the%20input%20data%20looks%20like%20the%20training%20data.%20However%2C%20overfitting%20occurs%20when%20the%20model%20fits%20the%20training%20data%20too%20closely%20and%20cannot%20generalize%20well%20to%20new%20data.%20For%20instance%2C%20it%20may%20become%20apparent%20that%20the%20model%20only%20performs%20well%20indoors%20because%20the%20training%20data%20was%20from%20an%20indoor%20setting.%20This%20can%20be%20caused%20by%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EThe%20model%20learned%20to%20focus%20on%20incorrect%2C%20non-representative%20features%20specifically%20found%20in%20the%20training%20dataset%3C%2FLI%3E%0A%3CLI%3EThe%20model%20architecture%20may%20have%20too%20many%20learnable%20parameters%20(correlated%20to%20the%20number%20of%20layers%20in%20a%20neural%20network%20and%20units%20per%20layer)%20-%20note%2C%20the%20model's%3CSPAN%3E%
26nbsp%3B%3C%2FSPAN%3E%3CEM%3Ememorization%20capacity%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Eis%20determined%20by%20the%20number%20of%20learnable%20parameters%3C%2FLI%3E%0A%3CLI%3ENot%20enough%20complexity%20or%20variation%20in%20the%20training%20data%3C%2FLI%3E%0A%3CLI%3ETrained%20over%20too%20many%20iterations%3C%2FLI%3E%0A%3CLI%3EOther%20reasons%20for%20good%20performance%20in%20training%20and%20significantly%20worse%20performance%20in%20validation%20and%20testing%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3C%2FLI%3E%0A%3CLI%3E%3CP%3E%3CSTRONG%3EUnderfitting%3C%2FSTRONG%3E%3A%20the%20model%20has%20over-generalized%20and%20cannot%20tell%20the%20difference%20between%20classes%20with%20confidence%20-%20e.g.%20the%20training%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Eloss%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Ewill%20still%20be%20unacceptably%20high.%20This%20can%20be%20caused%20by%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ENot%20enough%20samples%20in%20the%20training%20data%3C%2FLI%3E%0A%3CLI%3ETrained%20for%20too%20few%20iterations%20-%20too%20generalized%3C%2FLI%3E%0A%3CLI%3EOther%20reasons%20related%20to%20the%20model%20not%20being%20able%20to%20recognize%20any%20objects%2C%20or%20poor%20recognition%20and%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Eloss%20values%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Eduring%20training%20(the%20assessment%20values%20used%20to%20direct%20the%20training%20process%20through%20a%20process%20called%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Eoptimization%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Eand%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Eweight%20updates%3C%2FEM%3E)%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThere%20is%20a%20trade-off%20between%20too%20much%20capacity%20(a%20large%20network%2C%20or%20one%20with%20a%20large%20number%20of%20learnable%20parameters)%20and%20too%20little%20capacity.%20In%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Etransfer%20learning%3C%2FEM%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E(where%20some%20network%20layers%20are%20set%20as%20not%20trainable%2C%20i.e.%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CEM%3Efrozen%3C%2FEM%3E)%2C%20increasing%20capacity%20would%20equate%20to%20%22opening%20up%22%20more%2C%20earlier%20layers%20in%20the%20network%20versus%20only%20using%20the%20last%20few%20layers%20in%20training%20(with%20the%20rest%20remaining%20frozen).%3C%2FP%3E%0A%3CP%3EThere%20isn't%20a%20hard-and-fast%20rule%20for%20determining%20the%20number%20of%20layers%20for%20deep%20neural%20networks%2C%20so%20sometimes%20several%20model%20architectures%20must%20be%20evaluated%20within%20an%20ML%20task.%20However%2C%20in%20general%2C%20it%20is%20good%20to%20start%20with%20fewer%20layers%20and%2For%20parameters%20(%22smaller%22%20networks)%20and%20gradually%20increase%20the%20complexity.%3C%2FP%3E%0A%3CP%3EConsiderations%20when%20choosing%20the%20best%20architecture%20will%20include%20the%20inference%20speed%20requirements%2C%20which%20call%20for%20assessing%20and%20accepting%20the%20speed-versus-accuracy%20tradeoff.%20Often%2C%20a%20faster%20inference%20speed%20is%20associated%20with%20lower%20performance%20(e.g.%20accuracy%2C%20confidence%2C%20or%20precision%20could%20suffer).%3C%2FP%3E%0A%3CP%3EA%20discussion%20around%20requirements%20for%20the%20ML%20training%20and%20inferencing%20will%20be%20necessary%2C%20based%20upon%20the%20considerations%20above%20and%20any%20company-specific%20requirements.%20For%20instance%2C%20if%20company%20policy%20allows%20open%20source%20solutions%20to%20be%20utilized%2C%20it%20will%20open%20up%20a%20great%20deal%20of%20ML%20algorithmic%20possibilities%2C%20as%20most%20cutting-edge%20ML%20work%20is%20in%20the%20open%20source%20domain.%3C%2FP%3E%0A%3CP%3EIn%20summary%2C%20here%20are%20the%20key%20considerations%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EKeep%20an%20eye%20out%20for%20overfitting%20and%20underfitting%3C%2FLI%3E%0A%3CLI%3ETesting%20several%20ML%20architectures%20is%20often%20a%20good%20idea%20-%20this%20is%20an%20iterative%20process%3C%2FLI%3E%0A%3CLI%3EThere%20will%20be%20a%20trade-off%20between%20too%20much%20network%20capacity%20and%20too%20little%2C%20but%20often%20it's%20good%20to%20start%20with%20too%20little%20and%20build%20up%20from%20there%3C%2FLI%3E%0A%3CLI%3EThere%20will%20be%20a%20trade-off%20between%20speed%20and%20your%20performance%20metric%20(e.g.%20accuracy)%3C%2FLI%3E%0A%3CLI%3EIf%20the%20performance%20of%20the%20ML%20model%20is%20acceptable%2C%20the%20exploratory%20phase%20is%20complete%20(one%20can%20be%20tempted%20to%20iterate%20indefinitely)%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH3%20id%3D%22toc-hId--96252691%22%20id%3D%22toc-hId--96252689%22%3E%3CA%20id%3D%22user-content-data-science-workflows%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23data-science-workflows%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EData%20science%20workflows%3C%2FH3%3E%0A%3CP%3EThe%20data%20science%20process%20for%20edge%20deployments%20has%20a%20general%20pattern.%20After%20a%20clear%20data-driven%20problem%20statement%20is%20formulated%2C%20the%20next%20steps%20generally%20include%20the%20following.%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CSTRONG%3EData%20Collection%3C%2FSTRONG%3E.%20Data%20collection%20or%20acquisition%20could%20be%20an%20online%20image%20search%2C%20data%20from%20a%20currently%20deployed%20device%2C%20or%20another%20representative%20data%20source.%20Generally%2C%20the%20more%20data%2C%20the%20better.%20In%20addition%2C%20the%20more%20variability%2C%20the%20better%20the%20generalization.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EData%20Labeling%3C%2FSTRONG%3E.%20If%20only%20hundreds%20of%20images%20need%20to%20be%20labeled%20(e.g.%20when%20using%20tra
nsfer%20learning)%2C%20this%20is%20done%20in-house%3B%20whereas%20if%20tens%20of%20thousands%20of%20images%20need%20to%20be%20labeled%2C%20a%20vendor%20could%20be%20enlisted%20for%20both%20data%20collection%20and%20labeling.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3ETrain%20a%20Model%20with%20ML%20Framework%3C%2FSTRONG%3E.%20An%20ML%20framework%20such%20as%20TensorFlow%20or%20PyTorch%20(both%20with%20Python%20and%20C%2B%2B%20APIs)%20will%20need%20to%20be%20chosen%2C%20but%20usually%20this%20depends%20upon%20what%20code%20samples%20are%20available%20in%20open%20source%20or%20in-house%2C%20plus%20the%20experience%20of%20the%20ML%20practitioner.%20Azure%20ML%20may%20be%20used%20to%20train%20a%20model%20using%20any%20ML%20framework%20and%20approach%20-%20it%20is%20framework-agnostic%20and%20has%20Python%20and%20R%20bindings%2C%20plus%20many%20wrappers%20around%20popular%20frameworks.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EConvert%20the%20Model%20for%20Inferencing%20on%20Device%3C%2FSTRONG%3E.%20Almost%20always%2C%20a%20model%20will%20need%20to%20be%20converted%20to%20work%20with%20a%20particular%20runtime%20(model%20conversion%20usually%20involves%20advantageous%20optimizations%20like%20faster%20inference%20and%20smaller%20model%20footprints).%20This%20step%20differs%20for%20each%20ML%20framework%20and%20runtime%2C%20but%20there%20are%20open-source%20interoperability%20frameworks%20available%2C%20such%20as%20ONNX%20and%20MMdnn.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EBuild%20the%20Solution%20for%20Device%3C%2FSTRONG%3E.%20The%20solution%20is%20usually%20built%20on%20the%20same%20type%20of%20device%20as%20will%20be%20used%20in%20the%20final%20deployment%2C%20because%20the%20binary%20files%20created%20are%20system-specific.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EUsing%20Runtime%2C%20Deploy%20Solution%20to%20Device%3C%2FSTRONG%3E.%20Once%20a%20runtime%20has%20been%20chosen%20(usually%20in%20conjunction%20with%20the%20ML%20framework%20choice)%2C%20the%20compiled%20solution%20may%20b
e%20deployed.%20The%20Azure%20IoT%20Runtime%20is%20a%20Docker-based%20system%20in%20which%20the%20ML%20runtimes%20may%20be%20deployed%20as%20containers.%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThe%20diagram%20below%20illustrates%20an%20example%20data%20science%20process%20in%20which%20open%20source%20tools%20may%20be%20leveraged%20for%20the%20data%20science%20workflow.%20Data%20availability%20and%20type%20will%20drive%20most%20of%20the%20choices%2C%20even%2C%20potentially%2C%20the%20devices%2Fhardware%20chosen.%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22image.png%22%20style%3D%22width%3A%20761px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F226919iB30CFCAC7397A308%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20role%3D%22button%22%20title%3D%22image.png%22%20alt%3D%22image.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EIf%20a%20workflow%20already%20exists%20for%20the%20data%20scientists%20and%20app%20developers%2C%20a%20few%20other%20considerations%20apply.%20First%2C%20it%20is%20advised%20to%20have%20a%20code%2C%20model%2C%20and%20data%20versioning%20system%20in%20place.%20Second%2C%20an%20automation%20plan%20for%20code%20and%20integration%20testing%2C%20along%20with%20other%20aspects%20of%20the%20data%20science%20process%20(triggers%2C%20build%2Frelease%20process%2C%20etc.)%2C%20will%20help%20speed%20up%20time%20to%20production%20and%20cultivate%20collaboration%20within%20the%20team.%3C%2FP%3E%0A%3CP%3EThe%20language%20of%20choice%20can%20help%20dictate%20what%20API%20or%20SDK%20is%20used%20for%20inferencing%20and%20training%20ML%20models%2C%20which%20will%20in%20turn%20dictate%20what%20type%20of%20ML%20model%2C%20what%20type(s)%20of%20device%2C%20what%20type%20of%20IoT%20Edge%20Module%2C%20etc.%20For%20example%2C%20PyTorch%20has%20a%20C%2B%2B%20API%20for%20inferencing%20(and%20now%20for%20training)%20that%20works%20well
%20in%20conjunction%20with%20the%20OpenCV%20C%2B%2B%20API.%20If%20the%20app%20developer%20working%20on%20the%20deployment%20strategy%20is%20building%20a%20C%2B%2B%20application%2C%20or%20has%20this%20experience%2C%20one%20might%20consider%20PyTorch%20or%20others%20(TensorFlow%2C%20CNTK%2C%20etc.)%20that%20have%20C%2B%2B%20inferencing%20APIs.%3C%2FP%3E%0A%3CP%3EIn%20summary%2C%20here%20are%20the%20key%20considerations%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EConverting%20models%20also%20involves%20optimizations%2C%20such%20as%20faster%20inference%20and%20smaller%20model%20footprints%2C%20which%20are%20critical%20for%20very%20resource-constrained%20devices%3C%2FLI%3E%0A%3CLI%3EThe%20solution%20will%20usually%20need%20to%20be%20built%20on%20a%20build-dedicated%20device%20(the%20same%20type%20of%20device%20to%20which%20the%20solution%20will%20be%20deployed)%3C%2FLI%3E%0A%3CLI%3EThe%20language%20and%20framework%20of%20choice%20will%20depend%20upon%20both%20the%20ML%20practitioner%E2%80%99s%20experience%20and%20what%20is%20available%20in%20open%20source%3C%2FLI%3E%0A%3CLI%3EThe%20runtime%20of%20choice%20will%20depend%20upon%20the%20device%20and%20the%20hardware%20acceleration%20available%20for%20ML%3C%2FLI%3E%0A%3CLI%3EIt%20is%20important%20to%20have%20a%20code%2C%20model%2C%20and%20data%20versioning%20system%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH2%20id%3D%22toc-hId--2032789873%22%20id%3D%22toc-hId--2032789871%22%3E%3CA%20id%3D%22user-content-image-storage-and-management%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23image-storage-and-management%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EImage%20storage%20and%20management%3C%2FH2%3E%0A%3CP%3EStorage%20and%20management%20of%20the%20images%20involved%20in%20a%20computer%20vision%20application%20is%20a%20critical%20function.%20Some%20of%20the%20key%20considerations%20for%20ma
naging%20those%20images%20are%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EAbility%20to%20store%20all%20raw%20images%20during%20training%20with%20ease%20of%20retrieval%20for%20labeling%3C%2FLI%3E%0A%3CLI%3EFaster%20storage%20medium%20to%20avoid%20pipeline%20bottleneck%20and%20loss%3C%2FLI%3E%0A%3CLI%3EStorage%20on%20the%20edge%20as%20well%20as%20in%20the%20cloud%2C%20as%20labeling%20activity%20can%20be%20performed%20in%20both%3C%2FLI%3E%0A%3CLI%3ECategorization%20of%20images%20for%20easy%20retrieval%3C%2FLI%3E%0A%3CLI%3ENaming%20and%20tagging%20images%20to%20link%20it%20with%20inferred%20metadata%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThe%20combination%20of%20Azure%20Blob%20Storage%2C%20Azure%20IoT%20Hub%2C%20and%20Azure%20IoT%20Edge%20allow%20several%20potential%20options%20for%20the%20storage%20of%20image%20data%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EUse%20of%20the%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CA%20href%3D%22https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fiot-edge%2Fhow-to-store-data-blob%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3EAzure%20IoT%20Edge%20Blob%20Storage%20module%3C%2FA%3E%2C%20which%20will%20automatically%20sync%20images%20to%20Azure%20Blob%20based%20on%20policy%3C%2FLI%3E%0A%3CLI%3EStore%20images%20to%20local%20host%20file%20system%20and%20upload%20to%20Azure%20blob%20service%20using%20a%20custom%20module%3C%2FLI%3E%0A%3CLI%3EUse%20of%20local%20database%20to%20store%20images%2C%20which%20then%20can%20be%20synced%20to%20cloud%20database%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWe%20believe%20that%20the%20IoT%20Edge%20Blob%20Storage%20module%20is%20the%20most%20powerful%20and%20straightforward%20solution%20and%20is%20our%20preferred%20approach.%20A%20typical%20workflow%20for%20this%20might%20be%3A%3C%2FP%3E%0A%3COL%3E%0A%3CLI%3ERaw%20messages%20post%20ingestion%20will%20be%20stored%20locally%20on%20the%20Edge%20Blob%20Module%2C%20with%20time%20stamp%20and%20sequence%20number%20to%20uniquely%20identify%20the%20image%
20files%3C%2FLI%3E%0A%3CLI%3EPolicy%20can%20be%20set%20on%20the%20Edge%20Blob%20Module%20for%20automatic%20upload%20to%20Azure%20Blob%20with%20ordering%3C%2FLI%3E%0A%3CLI%3ETo%20conserve%20space%20on%20the%20Edge%20device%2C%20auto%20delete%20after%20certain%20time%20can%20be%20configured%20along%20with%20retain%20while%20uploading%20option%20to%20ensure%20all%20images%20get%20synced%20to%20the%20cloud%3C%2FLI%3E%0A%3CLI%3ELocal%20categorization%20or%20domain%20and%20labeling%20can%20be%20implemented%20using%20module%20that%20can%20read%20these%20images%20into%20UX.%20The%20label%20data%20will%20be%20associated%20to%20the%20image%20URI%20along%20with%20the%20coordinates%20and%20category.%3C%2FLI%3E%0A%3CLI%3EAs%20Label%20data%20needs%20to%20be%20saved%2C%20a%20local%20database%20is%20preferred%20to%20store%20this%20metadata%20as%20it%20will%20allow%20easy%20lookup%20for%20the%20UX%20and%20can%20be%20synced%20to%20cloud%20via%20telemetry%20messages.%3C%2FLI%3E%0A%3CLI%3EDuring%20scoring%20run%2C%20the%20model%20will%20detect%20matching%20patterns%20and%20generate%20events%20of%20interest.%20This%20metadata%20will%20be%20sent%20to%20cloud%20via%20telemetry%20referring%20the%20image%20URI%20and%20optionally%20stored%20in%20local%20database%20for%20edge%20UX.%20The%20images%20will%20continue%20to%20be%20stored%20to%20Edge%20Blob%20and%20synced%20with%20Azure%20Blob%3C%2FLI%3E%0A%3C%2FOL%3E%0A%3CH2%20id%3D%22toc-hId-454722960%22%20id%3D%22toc-hId-454722962%22%3E%3CA%20id%3D%22user-content-alerts-persistence%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23alerts-persistence%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EAlerts%20persistence%3C%2FH2%3E%0A%3CP%3EIn%20the%20context%20of%20vision%20on%20edge%2C%20alerts%20is%20a%20response%20to%20an%20event%20that%20is%20triggered%20by%20the%20
AI%20model%20(in%20other%20words%2C%20the%20inferencing%20results).%20The%20type%20of%20event%20is%20determined%20by%20the%20training%20imparted%20to%20the%20model.%20These%20events%20are%20separate%20from%20operational%20events%20raised%20by%20the%20processing%20pipeline%20and%20any%20related%20to%20the%20health%20of%20the%20runtime.%3C%2FP%3E%0A%3CP%3ESome%20of%20the%20common%20alerts%20types%20are%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EImage%20classification%3C%2FLI%3E%0A%3CLI%3EMovement%20detection%3C%2FLI%3E%0A%3CLI%3EDirection%20of%20movement%3C%2FLI%3E%0A%3CLI%3EObject%20detection%3C%2FLI%3E%0A%3CLI%3ECount%20of%20objects%3C%2FLI%3E%0A%3CLI%3ETotal%20Count%20of%20objects%20over%20period%20of%20time%3C%2FLI%3E%0A%3CLI%3EAverage%20Count%20of%20objects%20over%20period%20of%20time%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EAlerts%20by%20their%20definition%20are%20required%20to%20be%20monitored%20as%20they%20drive%20certain%20actions.%20They%20are%20critical%20to%20operations%2C%20being%20time%20sensitive%20in%20terms%20of%20processing%20and%20required%20to%20be%20logged%20for%20audit%20and%20further%20analysis.%3C%2FP%3E%0A%3CP%3EThe%20persistence%20of%20alerts%20needs%20to%20happen%20locally%20on%20the%20edge%20where%20it%20is%20raised%20and%20then%20passed%20on%20to%20the%20cloud%20for%20further%20processing%20and%20storage.%20This%20is%20to%20ensure%20quick%20response%20locally%20and%20avoid%20losing%20critical%20alerts%20due%20to%20any%20transient%20failures.%3C%2FP%3E%0A%3CP%3ESome%20options%20to%20achieve%20this%20persistence%20and%20cloud%20syncing%20are%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EUtilize%20built-in%20store%20and%20forward%20capability%20of%20IoT%20Edge%20runtime%2C%20which%20automatically%20gets%20synced%20with%20Azure%20IoT%20Hub%20in%20case%20of%20losing%20connectivity%3C%2FLI%3E%0A%3CLI%3EPersist%20alerts%20on%20host%20file%20system%20as%20log%20files%2C%20which%20can%20be%20synced%20periodically%20to%20a%20blob%20storage%20in%20cloud%3C%2FLI%3E%0A%3CLI%3EUti
lized%20Azure%20Blob%20Edge%20module%2C%20which%20will%20sync%20this%20data%20to%20Azure%20Blob%20in%20cloud%20based%20on%20policies%20that%20can%20be%20configured%3C%2FLI%3E%0A%3CLI%3EUse%20local%20database%20on%20IoT%20Edge%2C%20such%20as%20SQL%20Edge%20for%20storing%20data%2C%20sync%20with%20Azure%20SQL%20DB%20using%20SQL%20Data%20Sync.%20Other%20lightweight%20database%20option%20is%20SQLite%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EThe%20preferred%20option%20is%20to%20use%20the%20built-in%20store%20and%20forward%20capability%20of%20IoT%20Edge%20runtime.%20This%20is%20more%20suitable%20for%20the%20alerts%20due%20to%20its%20time%20sensitivity%2Ctypically%20small%20messages%20sizes%2C%20and%20ease%20of%20use.%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId--654569562%22%20id%3D%22toc-hId--654569560%22%3E%3CA%20id%3D%22user-content-user-interface%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23user-interface%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EUser%20Interface%3C%2FH2%3E%0A%3CP%3EThe%20user%20interface%20requirements%20of%20an%20IoT%20solution%20will%20vary%20depending%20on%20the%20overall%20solution%20objectives.%20In%20general%2C%20there%20are%20four%20user%20interfaces%20that%20are%20commonly%20found%20on%20IoT%20solutions%3A%20Administrator%2C%20Operator%2C%20Consumer%20and%20Analytics.%20In%20this%20guidance%2C%20we%20are%20going%20to%20focus%20on%20simple%20operator%E2%80%99s%20user%20interface%20and%20visualization%20dashboard.%20We%20will%20provide%20a%20reference%20implementation%20of%20the%20latter%20two%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CSTRONG%3EAdministrator%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EAllows%20full%20access%20to%20device%20provisioning%2C%20device%20and%20solution%20configuration%2C%20user%20management%20etc.%20These%20features%20could%20be%20provided%20as%20
part%20of%20one%20solution%20or%20as%20separate%20solutions.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EConsumer%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EOnly%20applicable%20to%20consumer%20solution.%20They%20provide%20similar%20access%20to%20the%20operators%E2%80%99%20solution%20but%20limited%20to%20devices%20owned%20by%20the%20user%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EOperator%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EProvides%20centralize%20access%20to%20the%20operational%20components%20of%20the%20solutions%20which%20typically%20includes%20device%20management%2C%20alerts%20monitoring%20and%20configuration.%3C%2FLI%3E%0A%3CLI%3E%3CSTRONG%3EAnalytics%3A%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3EInteractive%20dashboard%20which%20provide%20visualization%20of%20telemetry%20and%20other%20data%2Fanalysis.%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH3%20id%3D%22toc-hId-1962025990%22%20id%3D%22toc-hId-1962025992%22%3E%3CA%20id%3D%22user-content-technology-options%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23technology-options%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3ETechnology%20Options%3C%2FH3%3E%0A%3CP%3EPower%20BI%20is%20a%20compelling%20option%20for%20our%20Analytics%2FVirtualization%20needs.%20It%20provides%20power%20features%20to%20create%20customizable%20interactive%20dashboards.%20It%20also%20allows%20connectivity%20to%20many%20popular%20database%20systems%20and%20services.%20It%20is%20available%20as%20a%20managed%20service%20and%20as%20a%20self-hosted%20package.%20The%20former%20is%20the%20most%20popular%20and%20recommend%20options.%20With%20Power%20BI%20embedded%20you%20could%20add%20customer-facing%20reports%2C%20dashboards%2C%20and%20analytics%20in%20your%20own%20applications%20by%20using%20and%20branding%20Power%20BI%20as%20your%20own.%20Reduce%20developer%20resources%20by%
20automating%20the%20monitoring%2C%20management%2C%20and%20deployment%20of%20analytics%2C%20while%20getting%20full%20control%20of%20Power%20BI%20features%20and%20intelligent%20analytics.%3C%2FP%3E%0A%3CP%3EAnother%20suitable%20technology%20for%20IoT%20visualizations%20is%20Azure%20Maps%20which%20allows%20you%20to%20create%20location-aware%20web%20and%20mobile%20applications%20using%20simple%20and%20secure%20geospatial%20services%2C%20APIs%2C%20and%20SDKs%20in%20Azure.%20Deliver%20seamless%20experiences%20based%20on%20geospatial%20data%20with%20built-in%20location%20intelligence%20from%20world-class%20mobility%20technology%20partners.%3C%2FP%3E%0A%3CP%3EAzure%20App%20Service%20is%20a%20managed%20platform%20with%20powerful%20capabilities%20for%20building%20web%20and%20mobile%20apps%20for%20many%20platforms%20and%20mobile%20devices.%20It%20allows%20developers%20to%20quickly%20build%2C%20deploy%2C%20and%20scale%20web%20apps%20created%20with%20popular%20frameworks%20.NET%2C%20.NET%20Core%2C%20Node.js%2C%20Java%2C%20PHP%2C%20Ruby%2C%20or%20Python%2C%20in%20containers%20or%20running%20on%20any%20operating%20system.%20You%20can%20also%20meet%20rigorous%2C%20enterprise-grade%20performance%2C%20security%2C%20and%20compliance%20requirements%20by%20using%20the%20fully%20managed%20platform%20for%20your%20operational%20and%20monitoring%20tasks.%3C%2FP%3E%0A%3CP%3EFor%20real%20time%20data%20reporting%2C%20Azure%20SignalR%20Service%2C%20makes%20adding%20real-time%20communications%20to%20your%20web%20application%20is%20as%20simple%20as%20provisioning%20a%20service%E2%80%94no%20need%20to%20be%20a%20real-time%20communications%20guru!%20It%20easily%20integrates%20with%20services%20such%20as%20Azure%20Functions%2C%20Azure%20Active%20Directory%2C%20Azure%20Storage%2C%20Azure%20App%20Service%2C%20Azure%20Analytics%2C%20Power%20BI%2C%20IoT%2C%20Cognitive%20Services%2C%20Machine%20Learning%2C%20and%20more.%20To%20secure%20your%20user%20interface%20solutions%2C%20the%20Azure%20Active%20Direc
tory%20(Azure%20AD)%20enterprise%20identity%20service%20provides%20single%20sign-on%20and%20multi-factor%20authentication%20to%20help%20protect%20your%20users%20from%2099.9%20percent%20of%20cybersecurity%20attacks.%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId-25488808%22%20id%3D%22toc-hId-25488810%22%3E%3CA%20id%3D%22user-content-scenarios%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23scenarios%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EScenarios%3C%2FH2%3E%0A%3CH3%20id%3D%22toc-hId--1652882936%22%20id%3D%22toc-hId--1652882934%22%3E%3CA%20id%3D%22user-content-use-case-1%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23use-case-1%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EUse%20case%201%3C%2FH3%3E%0A%3CH4%20id%3D%22toc-hId-963712616%22%20id%3D%22toc-hId-963712618%22%3E%3CA%20id%3D%22user-content-overview%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23overview%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EOverview%3C%2FH4%3E%0A%3CP%3EContoso%20Boards%20produces%20high%20quality%20circuit%20boards%20used%20in%20computers.%20Their%20number%20one%20product%20is%20a%20motherboard.%20Lately%20they%20have%20been%20seeing%20an%20increase%20in%20issues%20with%20chip%20placement%20on%20the%20board.%20Through%20their%20investigation%20they%20have%20noticed%20that%20the%20circuit%20boards%20are%20getting%20placed%20incorrectly%20on%20the%20assembly%20line.%20They%20need%20a%20way%20to%20identify%20if%20the%20circuit%20board%20is%20p
laced%20on%20the%20assembly%20line%20correctly.%20The%20data%20scientist%20at%20Contoso%20Boards%20are%20most%20familiar%20with%20TensorFlow%20and%20would%20like%20to%20continue%20using%20it%20as%20their%20primary%20ML%20model%20structure.%20Contoso%20Boards%20has%20several%20assembly%20lines%20that%20produce%20these%20mother%20boards.%20Contoso%20Boards%20would%20also%20like%20to%20centralized%20management%20of%20the%20entire%20solution.%3C%2FP%3E%0A%3CH4%20id%3D%22toc-hId--843741847%22%20id%3D%22toc-hId--843741845%22%3E%3CA%20id%3D%22user-content-questions%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23questions%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EQuestions%3C%2FH4%3E%0A%3CP%3EWhat%20are%20we%20analyzing%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EMotherboard%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhere%20are%20we%20going%20to%20be%20viewing%20the%20motherboard%20from%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EAssembly%20Line%20Conveyor%20belt%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhat%20camera%20do%20we%20need%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EArea%20or%20Line%20scan%3C%2FLI%3E%0A%3CLI%3EColor%20or%20Monochrome%3C%2FLI%3E%0A%3CLI%3ECCD%20or%20CMOS%20Sensor%3C%2FLI%3E%0A%3CLI%3EGlobal%20or%20rolling%20shutter%3C%2FLI%3E%0A%3CLI%3EFrame%20Rate%3C%2FLI%3E%0A%3CLI%3EResolution%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhat%20type%20of%20lighting%20is%20needed%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EBacklighting%3C%2FLI%3E%0A%3CLI%3EShade%3C%2FLI%3E%0A%3CLI%3EDarkfield%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EHow%20should%20the%20camera%20be%20mounted%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ETop%20down%3C%2FLI%3E%0A%3CLI%3ESide%20view%3C%2FLI%3E%0A%3CLI%3EAngular%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhat%20hardware%20should%20be%20used%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ECPU%3C%2FLI%3E%0A%3CLI%3EFPGA%3C%2FLI%3E%0A%3CLI%3EGPU%3C%2FLI%3
E%0A%3CLI%3EASIC%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH4%20id%3D%22toc-hId-1643770986%22%20id%3D%22toc-hId-1643770988%22%3E%3CA%20id%3D%22user-content-solution%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23solution%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3ESolution%3C%2FH4%3E%0A%3CP%3EBased%20on%20the%20overall%20solution%20that%20the%20Contoso%20Boards%20is%20looking%20for%20with%20this%20vision%20use%20case%20we%20should%20be%20looking%20for%20edge%20detection%20of%20the%20part.%20Based%20on%20this%20we%20need%20to%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3Eposition%20a%20camera%20directly%20above%20the%20at%2090%20degrees%20and%20about%2016%20inches%20above%20the%20part%3C%2FSTRONG%3E.%20Since%20the%20conveyer%20system%20moves%20relatively%20slowly%2C%20we%20can%20use%20an%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3EArea%20Scan%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Ecamera%20with%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3EGlobal%20shutter%3C%2FSTRONG%3E.%20For%20this%20use%20case%20our%20camera%20should%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3Ecapture%20about%2030%20frames%20per%20second%3C%2FSTRONG%3E.%20As%20for%20the%20resolution%20using%20the%20formula%20of%20Res%3D(Object%20Size)%20Divided%20by%20(details%20to%20be%20captured).%20Based%20on%20the%20formula%20Res%3D16%E2%80%9D%2F8%E2%80%9D%20give%202MP%20in%20x%20and%204%20in%20y%20so%20we%20need%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3Ecamera%20capable%20of%204MP%3C%2FSTRONG%3E.%20As%20for%20the%20sensor%20type%2C%20we%20are%20not%20fast%20moving%2C%20and%20really%20looking%20for%20an%20edge%20detection%2C%20so%20a%20CCD%20sensor%20could%20be%20used%2C%20however%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3ECMOS%20sensor%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Ewill%20be%20used.%20One%20of%20the
%20more%20critical%20aspects%20for%20any%20vision%20workload%20is%20lighting.%20In%20this%20application%20Contoso%20Boards%20should%20choose%20to%20use%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3Ewhite%20diffused%20filter%20back%20light%3C%2FSTRONG%3E.%20This%20will%20make%20the%20part%20look%20almost%20black%20and%20have%20a%20high%20amount%20of%20contrast%20for%20edge%20detection.%20When%20it%20comes%20to%20color%20options%20for%20this%20application%20it%20is%20better%20to%20be%20in%20black%20and%20white%2C%20as%20this%20is%20what%20will%20yield%20the%20sharpest%20edge%20for%20the%20detection%20AI%20model.%20Looking%20at%20what%20kind%20of%20hard%2C%20the%20data%20scientist%20are%20most%20familiar%20with%20TensorFlow%20and%20learning%20ONNX%20or%20others%20would%20slow%20down%20the%20time%20for%20development%20of%20the%20model.%20Also%20because%20there%20are%20several%20assembly%20lines%20that%20will%20use%20this%20solution%2C%20and%20Contoso%20Boards%20would%20like%20a%20centrally%20managed%20edge%20solution%20so%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3EAzure%20Stack%20Edge%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E(with%20GPU%20option)%20would%20work%20well%20here.%20Based%20on%20the%20workload%2C%20the%20fact%20that%20Contoso%20Boards%20already%20know%20TensorFlow%2C%20and%20this%20will%20be%20used%20on%20multiple%20assembly%20lines%2C%20GPU%20based%20hardware%20would%20be%20the%20choice%20for%20hardware%20acceleration.%3C%2FP%3E%0A%3CH4%20id%3D%22toc-hId--163683477%22%20id%3D%22toc-hId--163683475%22%3E%3CA%20id%3D%22user-content-sample-of-what-the-camera-would-see%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23sample-of-what-the-camera-would-see%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3ESample%20of%20what%20the%20camera%20would%20see%3C%2FH4%3E%0A%3CP%3E%3CSPAN
%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22image.png%22%20style%3D%22width%3A%20800px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F226920iA78CBD27914729B3%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20role%3D%22button%22%20title%3D%22image.png%22%20alt%3D%22image.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CH3%20id%3D%22toc-hId--2100220659%22%20id%3D%22toc-hId--2100220657%22%3E%3CA%20id%3D%22user-content-use-case-2%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23use-case-2%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EUse%20Case%202%3C%2FH3%3E%0A%3CH4%20id%3D%22toc-hId-516374893%22%20id%3D%22toc-hId-516374895%22%3E%3CA%20id%3D%22user-content-overview-1%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23overview-1%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EOverview%3C%2FH4%3E%0A%3CP%3EContoso%20Shipping%20recently%20has%20had%20several%20pedestrian%20accidents%20at%20their%20loading%20docks.%20Most%20of%20the%20accidents%20are%20happening%20when%20a%20truck%20leaves%20the%20loading%20dock%2C%20and%20the%20driver%20does%20not%20see%20a%20dock%20worker%20walking%20in%20front%20of%20the%20truck.%20Contoso%20Shipping%20would%20like%20a%20solution%20that%20would%20watch%20for%20people%2C%20predict%20the%20direction%20of%20travel%2C%20and%20warn%20the%20drivers%20of%20potential%20dangers%20of%20hitting%20the%20workers.%20The%20distance%20from%20the%20cameras%20to%20Contoso%20Shipping's%20server%20room%20is%20to%20far%20for%20GigE%20connectivity%2C%20however%2C%20they%20do%20have%20a%20large%20
WIFI%20mesh%20that%20could%20be%20used%20for%20connectivity.%20Most%20of%20the%20data%20scientist%20that%20Contoso%20Shipping%20employ%20are%20familiar%20with%20Open-VINO%20and%20they%20would%20like%20to%20be%20able%20to%20reuse%20the%20models%20on%20additional%20hardware%20in%20the%20future.%20The%20solution%20will%20also%20need%20to%20ensure%20that%20devices%20are%20operating%20as%20power%20efficiently%20as%20possible.%20Finally%2C%20Contoso%20Shipping%20needs%20a%20way%20to%20manage%20the%20solution%20remotely%20for%20updates.%3C%2FP%3E%0A%3CH4%20id%3D%22toc-hId--592917629%22%20id%3D%22toc-hId--592917627%22%3E%3CA%20id%3D%22user-content-questions-1%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23questions-1%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3EQuestions%3C%2FH4%3E%0A%3CP%3EWhat%20are%20we%20analyzing%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EPeople%20and%20patterns%20of%20movement%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhere%20are%20we%20going%20to%20be%20viewing%20the%20people%20from%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EThe%20loading%20docks%20are%20165%20feet%20long%3C%2FLI%3E%0A%3CLI%3ECameras%20will%20be%20placed%2017%20feet%20high%20to%20keep%20with%20city%20ordnances.%3C%2FLI%3E%0A%3CLI%3ECameras%20will%20need%20to%20be%20positioned%20100%20feet%20away%20from%20the%20front%20of%20the%20trucks.%3C%2FLI%3E%0A%3CLI%3ECamera%20focus%20will%20need%20to%20be%2010%20feet%20behind%20the%20front%20of%20the%20truck%2C%20and%2010%20additional%20feet%20in%20front%20of%20the%20truck%2C%20giving%20us%20a%2020%20foot%20depth%20on%20focus.%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhat%20camera%20do%20we%20need%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EArea%20or%20Line%20scan%3C%2FLI%3E%0A%3CLI%3EColor%20or%20Monochrome%3C%2FLI%3E%0A%3CLI%3ECCD%20or%20CMOS%20Sensor%3C%2FLI%3E%0A%3CLI%3EGlobal%20or%20rolling%20shu
tter%3C%2FLI%3E%0A%3CLI%3EFrame%20Rate%3C%2FLI%3E%0A%3CLI%3EResolution%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhat%20type%20of%20lighting%20is%20needed%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EBacklighting%3C%2FLI%3E%0A%3CLI%3EShade%3C%2FLI%3E%0A%3CLI%3EDarkfield%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EWhat%20hardware%20should%20be%20used%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ECPU%3C%2FLI%3E%0A%3CLI%3EFPGA%3C%2FLI%3E%0A%3CLI%3EGPU%3C%2FLI%3E%0A%3CLI%3EASIC%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EHow%20should%20the%20camera%20be%20mounted%3F%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ETop%20down%3C%2FLI%3E%0A%3CLI%3ESide%20view%3C%2FLI%3E%0A%3CLI%3EAngular%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CH4%20id%3D%22toc-hId-1894595204%22%20id%3D%22toc-hId-1894595206%22%3E%3CA%20id%3D%22user-content-solution-1%22%20class%3D%22anchor%22%20href%3D%22https%3A%2F%2Fgithub.com%2FAzureIoTGBB%2Fiot-edge-vision%2Fblob%2Fmaster%2Fdocumentation%2Fguidance.md%23solution-1%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%20aria-hidden%3D%22true%22%3E%3C%2FA%3ESolution%3C%2FH4%3E%0A%3CP%3EBased%20on%20the%20distance%20of%20the%20loading%20dock%20size%20Contoso%20Shipping%20will%20require%20several%20cameras%20to%20cover%20the%20entire%20dock.%20Based%20on%20zoning%20laws%20that%20Contoso%20Shipping%20must%20adhere%20to%20require%20that%20the%20surveillance%20cameras%20cannot%20be%20mounted%20higher%20that%2020%20feet.%20In%20this%20use%20case%20the%20average%20size%20of%20a%20worker%20is%205%20foot%208%20inches.%20The%20solution%20must%20use%20the%20least%20number%20of%20cameras%20as%20possible.%3C%2FP%3E%0A%3CP%3EFormula%3A%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22image.png%22%20style%3D%22width%3A%20572px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F226922i769971B6772B1E23%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20role%3D%22button%22%20title%
3D%22image.png%22%20alt%3D%22image.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFor%20an%20example%20if%20we%20look%20at%20the%20following%20images%3A%3C%2FP%3E%0A%3CP%3ETaken%20with%20480%20horizontal%20pixels%20at%2020%20foot%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22image.png%22%20style%3D%22width%3A%20600px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F226923iE6E8E5CA3651901E%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20role%3D%22button%22%20title%3D%22image.png%22%20alt%3D%22image.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3ETaken%20with%205184%20horizontal%20pixels%20at%2020%20foot%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22image.png%22%20style%3D%22width%3A%20600px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F226925i129DD86F3593F1EC%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20role%3D%22button%22%20title%3D%22image.png%22%20alt%3D%22image.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EThe%20red%20square%20is%20shown%20to%20illustrate%20one%20pixel%20color.%3C%2FP%3E%0A%3CP%3E%3CEM%3ENote%3A%20This%20is%20the%20issue%20with%20using%20the%20wrong%20resolution%20camera%20for%20a%20given%20use%20case.%20Lens%20can%20impact%20the%20FOV%2C%20however%2C%20if%20the%20wrong%20sensor%20is%20used%20for%20that%20given%20use%20case%20the%20results%20could%20be%20less%20than%20expected.%3C%2FEM%3E%3C%2FP%3E%0A%3CP%3EWith%20the%20above%20in%20mind%2C%20when%20choosing%20a%20camera%20for%20the%20overall%20solution%20required%20for%20Contoso%20Shipping%2C%20we%20need%20to%20think%20about%20how%20many%20cameras%20and%20at%20what%20resolution%20is%20needed%20to%20get%20the%20correct%20amount%20of%20details%20to%20detect%20a%20person.%20Since%20we%20are%20only%20trying%20to%20identify%20
if%20a%20person%20is%20in%20the%20frame%20or%20not%2C%20our%20PPF%20does%20not%20need%20to%20be%20around%2080%20(which%20is%20what%20is%20about%20needed%20for%20facial%20identification)%20and%20we%20can%20use%20somewhere%20around%2015-20.%20That%20would%20place%20the%20FOV%20around%2016%20feet.%20A%2016-foot%20FOV%20would%20give%20us%20about%2017.5%20pixels%20per%20foot%E2%80%A6which%20fits%20within%20our%20required%20PPF%20of%2015-20.%20This%20would%20mean%20that%20we%20need%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3E10MP%20camera%20that%20has%20a%20horizontal%20resolution%20of%20~5184%20pixels%3C%2FSTRONG%3E%2C%20and%20a%20lens%20that%20would%20allow%20for%20a%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3EFOV%20of%2016%20feet%3C%2FSTRONG%3E.%20When%20looking%20at%20the%20solution%20the%20cameras%20would%20need%20to%20be%20placed%20outside%2C%20and%20the%20choice%20of%20sensor%20type%20should%20not%20allow%20for%20%E2%80%9Cbloom%E2%80%9D.%20Bloom%20is%20when%20light%20hits%20the%20sensor%20and%20overloads%20the%20sensor%20with%20light%E2%80%A6this%20causes%20a%20view%20of%20almost%20over%20exposure%20or%20a%20%E2%80%9Cwhite%20out%E2%80%9D%20kind%20of%20condition.%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3ECMOS%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Eis%20the%20choice%20here.%20Contoso%20operates%2024x7%20and%20as%20such%20needs%20to%20ensure%20that%20nighttime%20personal%20are%20also%20protected.%20When%20looking%20at%20color%20vs%20Monochrome%2C%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3E%3CSTRONG%3EMonochrome%3C%2FSTRONG%3E%3CSPAN%3E%26nbsp%3B%3C%2FSPAN%3Ehandles%20low%20light%20conditions%20much%20better%2C%20and%20we%20are%20not%20looking%20to%20identify%20a%20person%20based%20on%20color%20monochrome%20sensors%20are%20a%20little%20cheaper%20as%20well.%20How%20many%20cameras%20will%20it%20take%3F%20Since%20we%20have%20figured%20out%20that%20our%20cameras%20can%20look%20at%20a%2016%20foot%20path%2C%20it%20is%20just%20simple%20math.%20165%20foot%20dock%20divided%2

Vision on the Edge

Introduction

Visual inspection of products, resources, and environments has been a core practice for most Enterprises, and was, until recently, a very manual process. An individual, or group of individuals, was responsible for performing a manual inspection of the asset or environment, which, depending on the circumstances, could become inefficient, inaccurate or both, due to human error and limitations.

In an effort to improve the efficacy of visual inspection, Enterprises began turning to deep learning artificial neural networks known as convolutional neural networks, or CNNs, to emulate human vision for analysis of images and video. Today this is commonly called computer vision, or simply Vision AI. Artificial Intelligence for image analytics spans a wide variety of industries, including manufacturing, retail, healthcare, and the public sector, and an equally wide range of use cases.

 

Vision as Quality Assurance – In manufacturing environments, quality inspection of parts and processes with a high degree of accuracy and velocity is one of the use cases for Vision AI. An enterprise pursuing this path automates the inspection of a product for defects to answer questions such as:

  • Is the manufacturing process producing consistent results?
  • Is the product assembled properly?
  • Can I get notification of a defect sooner to reduce waste?
  • How can I leverage drift in my computer vision model to prescribe predictive maintenance?

Vision as Safety – In any environment, safety is a fundamental concern for every Enterprise on the planet, and the reduction of risk is a driving force for adopting Vision AI. Automated monitoring of video feeds to scan for potential safety issues affords critical time to respond to incidents, and opportunities to reduce exposure to risk. Enterprises looking at Vision AI for this use case are commonly trying to answer questions such as:

  • How compliant is my workforce with using personal protective equipment?
  • How often are people entering unauthorized work zones?
  • Are products being stored in a safe manner?
  • Are there non-reported close calls in a facility, i.e. pedestrian/equipment “near misses?”

Why vision on the Edge

Over the past decade, computer vision has become a rapidly evolving area of focus for Enterprises, as cloud-native technologies, such as containerization, have enabled portability and migration of this technology toward the network edge. For instance, custom vision inference models trained in the Cloud can be easily containerized for use in an Azure IoT Edge runtime-enabled device.

The rationale behind migrating workloads from the cloud to the edge for Vision AI generally falls into two categories – performance and cost.

On the performance side of the equation, moving large quantities of data off-site can put an unintended strain on existing network infrastructure. Additionally, the latency of sending images and/or video streams to the Cloud and waiting for results may not meet the needs of the use case. For instance, a person straying into an unauthorized area may require immediate intervention, and that scenario can ill afford latency when every second counts. Positioning the inferencing model near the point of ingest allows for near-real-time scoring of the image, and alerting can be performed either locally or through the cloud, depending on network topology.

In terms of cost, sending all of the data to the Cloud for analysis could significantly impact the ROI of a Vision AI initiative. With Azure IoT Edge, a Vision AI module could be designed to only capture the relevant images that have a reasonable confidence level based on the scoring, which significantly limits the amount of data being sent.

The purpose of this document is to give concrete guidance on the key decisions involved in designing an end-to-end vision on the edge solution. Specifically, we will address:

  • Camera selection and placement
  • Hardware acceleration
  • Machine learning and data science
  • Image storage and management
  • Persistence of alerts
  • User Interface

Camera Considerations

Camera selection

One of the most critical components of any vision workload is selecting the correct camera. The items being identified in a vision workload must be presented in such a way that a computer’s artificial intelligence or machine learning models can evaluate them correctly. To further understand this concept, you need to understand the different camera types that can be used. Note that there are many different manufacturers of Area, Line, and Smart Cameras. Microsoft does not recommend any one vendor over another; instead, we recommend that you select a vendor that fits your specific needs.

Area Scan Cameras

This is the more traditional camera image, where a 2D image is captured and then sent to the Edge hardware to be evaluated. This camera typically has a matrix of pixel sensors.

When should you use an Area Scan Camera? As the name suggests, Area Scan Cameras look at a large area and are great at detecting change within it. Some examples of workloads that would use an Area Scan Camera are workplace safety, or detecting or counting objects (people, animals, cars, etc.) in an environment.

Examples of manufacturers of Area Scan Cameras are Basler, Axis, Sony, Bosch, FLIR, Allied Vision

Line Scan Cameras

Unlike Area Scan Cameras, the Line Scan Camera has a single row of linear pixel sensors. This allows the camera to capture one-pixel-wide images in very quick succession and then stitch them into a video stream that is sent to an Edge device for processing.

When should you use a Line Scan Camera? Line Scan Cameras are great for vision workloads where the items to be identified are moving past the camera, or where items need to be rotated to detect defects. The Line Scan Camera produces a continuous image stream that can then be evaluated. Some examples of workloads that work best with a Line Scan Camera are defect detection on parts moving along a conveyor belt, and workloads that require an item to be spun or rotated to inspect a cylindrical surface.

Examples of manufacturers of Line Scan Cameras are Basler, Teledyne Dalsa, Hamamatsu Corporation, DataLogic, Vieworks, and Xenics.

Embedded Smart Camera

This type of camera can use either an Area Scan or a Line Scan sensor for image acquisition, although the Line Scan Smart Camera is rare. The main feature of this camera is that it not only acquires the image, but can also process it, as it is a self-contained, stand-alone system. Smart Cameras typically have either an RS232 or an Ethernet port output, which allows them to be integrated directly into a PLC or other IIoT interfaces.

Examples of manufacturers of Embedded Smart Cameras are Basler and Leuze Electronics.

Other camera features to consider

  • Sensor size: This is one of the most important factors to evaluate in any vision workload. The sensor is the hardware within a camera that captures light and converts it into signals, which then produce an image. The sensor contains millions of semiconducting photodetectors called photosites. A common misconception is that a higher megapixel count always means a better image. For example, consider two different sensor sizes for a 12-megapixel camera: camera A has a ½-inch sensor with 12 million photosites, and camera B has a 1-inch sensor with 12 million photosites. In the same lighting conditions, the image from the 1-inch sensor will be cleaner and sharper, because its larger photosites gather more light. Most cameras typically used in vision workloads have a sensor between ¼ inch and 1 inch; in some cases, much larger sensors might be required. If you have a choice between a larger and a smaller sensor, factors that favor the larger sensor include:
    • The need for precision measurements
    • Lower light conditions
    • Shorter exposure times, i.e. fast-moving items
  • Resolution: This is another very important factor for both Line Scan and Area Scan camera workloads. If your workload must identify fine features (e.g. the writing on an IC chip), detect faces, or identify a vehicle from a distance, you need a higher-resolution camera.
  • Speed: Sensors come in two types: CCD and CMOS. If the vision workload requires a high capture rate (images per second), two factors come into play: the speed of the camera's interface connection, and the type of sensor. CMOS sensors have a direct readout from the photosites and therefore typically offer a higher frame rate.

NOTE: There are more camera features to consider when selecting the correct camera for your vision workload. These include lens selection, focal length, monochrome, color depth, stereo depth, triggers, physical size, and support. Sensor manufacturers can help you understand the specific features that your application may require.
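To make the resolution guidance concrete, a rough rule of thumb is that at least two pixels (often more) should span the smallest feature you must detect. A minimal sketch of that calculation in Python; the 400 mm field of view and 0.5 mm feature size are illustrative assumptions:

```python
def required_horizontal_pixels(fov_width_mm: float,
                               smallest_feature_mm: float,
                               pixels_per_feature: int = 2) -> int:
    """Minimum horizontal pixel count: at least `pixels_per_feature` pixels
    must span the smallest feature across the full field of view."""
    return round(fov_width_mm / smallest_feature_mm * pixels_per_feature)

# Example: a 400 mm wide field of view, detecting 0.5 mm scratches.
print(required_horizontal_pixels(400, 0.5))  # 1600
```

For fine defect detection, vendors often recommend three to four pixels per feature, which pushes the requirement toward a higher-resolution sensor.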

Camera Placement (location, angle, lighting, etc.)

The items that you are capturing in your vision workload will determine the location and angles at which the camera should be placed. The camera location can also affect the sensor type, lens type, and camera body type. There are several key concepts to keep in mind when working out the best spot to place the camera.

Several different factors can weigh into the overall decision for camera placement. Two of the most critical are lighting and field of view.

Camera Lighting

In a computer vision workload, lighting is a critical component to camera placement. There are several different lighting conditions. While some of the lighting conditions would be useful for one vision workload, it might produce an undesirable condition in another. Types of lighting that are commonly used in computer vision workloads are:

  • Direct lighting: This is the most commonly used lighting condition. This light source is projected at the object to be captured for evaluation.

  • Line lighting: This is a single array of lights, most often used with line scan camera applications to create a single line of light where the camera is focused.

  • Diffused lighting: This type of lighting is used to illuminate an object but prevent harsh shadows and is mostly used around specular objects.

  • Back lighting: This type of light source is used behind the object, which produces a silhouette of the object. It is most useful for taking measurements, detecting edges, or determining object orientation.

  • Axial diffused lighting: This type of light source is often used with highly reflective objects, or to prevent shadows on the part that will be captured for evaluation.

  • Custom Grid lighting: This is a structured lighting condition that lays out a grid of light on the object. The intent is to use a known grid projection to provide more accurate measurements of components, parts, placement of items, etc.

  • Strobe lighting: Strobe lighting is used for high-speed moving parts. The strobe must be in sync with the camera to “freeze” the object for evaluation; this lighting helps prevent motion blur.

  • Dark Field lighting: This type of light source uses several lights placed at different angles to the part. For example, if the part is lying flat on a conveyor belt, the lights would be placed at a 45-degree angle to it. This type of lighting is most useful when looking at highly reflective clear objects, and is most commonly used for lens scratch detection.

    [Figure: Angular placement of light]

Field of View

In a vision workload you need to know the distance to the object that you are trying to evaluate. This also will play a part in the camera selection, sensor selection, and lens configuration. Some of the components that make up the field of view are:

  • Distance to object(s): For example, is the object being monitored on a conveyor belt with the camera 2 feet above it, or is it across a parking lot? As the distance changes, so do the camera’s sensor and lens configurations.

  • Area of coverage: Is the area that the computer vision workload is trying to monitor small or large? This correlates directly to the camera’s overall resolution, lens, and sensor type.
  • Direction of the Sun: If the computer vision workload is outside, such as monitoring a construction site for worker safety, will the camera be pointed into the sun at any time? Keep in mind that if the sun is casting a shadow over the object that the vision workload is monitoring, items might be somewhat obscured. Also, if the camera is getting direct sunlight in the lens, the camera might be “blinded” until the angle of the sun changes.
  • Camera angle to the object(s): The angle of the camera to the object being monitored is also a critical component to think about. If the camera is too high, it might miss the details that the vision workload is trying to monitor, and the same may be true if it is too low.
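The relationship between distance, sensor, and lens can be approximated with the thin-lens relation: the field-of-view width is roughly the sensor width times the working distance divided by the focal length. A small sketch; all numbers below are illustrative assumptions:

```python
def field_of_view_width(sensor_width_mm: float,
                        working_distance_mm: float,
                        focal_length_mm: float) -> float:
    """Approximate horizontal field of view from the thin-lens relation."""
    return sensor_width_mm * working_distance_mm / focal_length_mm

def mm_per_pixel(fov_width_mm: float, horizontal_pixels: int) -> float:
    """Spatial resolution: how many millimeters each pixel covers."""
    return fov_width_mm / horizontal_pixels

# Example: ~6.4 mm wide (1/2-inch) sensor, 8 mm lens, camera ~610 mm (2 feet) above a belt.
fov = field_of_view_width(6.4, 610, 8)   # about 488 mm of belt is visible
res = mm_per_pixel(fov, 1920)            # about 0.25 mm per pixel at 1920 pixels wide
print(round(fov), round(res, 2))
```

Working this estimate both ways (from the scene to the sensor, or from the required resolution back to the lens) is a quick sanity check before committing to hardware.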

Communication Interface

In building a computer vision workload it is also important to understand how the system will interact with the output of the camera. Below are a few of the standard ways that a camera will communicate to IoT Edge:

  • Real Time Streaming Protocol (RTSP): RTSP is a protocol that transfers real-time video data from a device (in our case the camera) to an endpoint device (Edge compute) directly over a TCP/IP connection. It functions in a client-server model at the application layer of the network.

  • Open Network Video Interface Forum (ONVIF): A global, open industry forum that is developing open standards for IP-based cameras. The standard is aimed at standardizing communication between IP cameras and downstream systems, as well as interoperability and openness.

  • USB: Unlike RTSP and ONVIF, USB-connected cameras connect over the Universal Serial Bus directly to the Edge compute device. This is less complex; however, it limits the distance at which the camera can be placed from the Edge compute.

  • Camera Serial Interface (CSI): The CSI specification comes from the Mobile Industry Processor Interface (MIPI) Alliance. It is an interface that describes how to communicate between a camera and a host processor.

There are several standards defined for CSI:

  • CSI-1: This was the original standard that MIPI started with.
  • CSI-2: This standard was released in 2005 and uses either D-PHY or C-PHY as physical layer options. It is further divided into several layers:
    1. Physical Layer (C-PHY, D-PHY)
    2. Lane Merger layer
    3. Low Level Protocol Layer
    4. Pixel to Byte Conversion Layer
    5. Application layer

The CSI-2 specification was updated in 2017 to v2.0, which added support for RAW-24 color depth, Unified Serial Link, and Smart Region of Interest.
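Of the interfaces above, RTSP is the one most commonly consumed directly in edge software. A minimal sketch of pulling frames from an RTSP camera with OpenCV; the host, credentials, and stream path are hypothetical placeholders, and `cv2` is imported lazily so the helpers load even without OpenCV installed:

```python
def build_rtsp_url(user: str, password: str, host: str,
                   port: int = 554, path: str = "stream1") -> str:
    """Assemble an RTSP URL (credentials and path vary by camera vendor)."""
    return f"rtsp://{user}:{password}@{host}:{port}/{path}"

def grab_frames(url: str, max_frames: int = 100):
    """Yield decoded frames from the stream until it ends or max_frames is hit."""
    import cv2  # lazy import: only needed when actually reading a stream
    cap = cv2.VideoCapture(url)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# Hypothetical usage, handing each frame to an inferencing module:
# for frame in grab_frames(build_rtsp_url("admin", "secret", "192.0.2.10")):
#     score(frame)
```

In an Azure IoT Edge deployment, a loop like this would typically live inside the module that feeds the inferencing container.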

Hardware Acceleration

Along with the camera selection, one of the other critical decisions in Vision on the Edge projects is hardware acceleration. Options include:

  • CPU: The Central Processing Unit (CPU) is the default compute for most processes running on a computer, and it is designed for general-purpose compute. For some vision workloads where timing is not critical, it might be a good option. However, most workloads that involve critical timing, multiple camera streams, and/or high frame rates will require more specific hardware acceleration.
  • GPU: Many people are familiar with the Graphics Processing Unit (GPU), as it is the de facto processor for any high-end PC graphics card. In recent years the GPU has been leveraged in high performance computing (HPC) scenarios, in data mining, and in AI/ML workloads. The GPU’s massive parallel computing potential can be used in a vision workload to accelerate the processing of pixel data. The downside to a GPU is its higher power consumption, which is a critical factor to consider for your vision workload.
  • FPGA: Field Programmable Gate Arrays are reconfigurable hardware accelerators. These powerful accelerators allow for the growth of Deep Learning Neural networks, which are still evolving. They have millions of programmable gates, hundreds of I/O pins, and exceptional compute power, measured in trillions of multiply-accumulate operations per second (tera-MACs). There are also many libraries available for FPGAs that are optimized for vision workloads, some of which include preconfigured interfaces to connect to downstream cameras and devices. One area where FPGAs tend to fall short is floating-point operations; however, manufacturers are currently working on this issue and have made a lot of improvements in this area.
  • ASIC: The Application Specific Integrated Circuit is by far the fastest accelerator on the market today. While ASICs are the fastest, they are the hardest to change, as they are manufactured to perform a specific task. These custom chips are gaining popularity due to size, performance per watt, and IP protection (because the IP is burned into the ASIC, it is much harder to reverse engineer proprietary algorithms).

Machine learning and data science

Designing the machine learning (ML) approach for a vision on the edge scenario is one of the biggest challenges in the entire planning process. It is therefore important to understand how to think about ML in the context of edge devices. Some of the considerations and hurdles are outlined below to help you begin thinking in terms of using machine learning to address business problems and pain points. Guidance includes:

  • Always consider first how to solve the problem without ML or with a simple ML algorithm
  • Have a plan to test several ML architectures as they will have different capacities to "learn"
  • Have a system in place to collect new data from the device to retrain an ML model
  • For a poorly performing ML model, often a simple fix is to add more representative data to the training process and ensure it has variability, with all classes represented equally
  • Remember, this is often an iterative process with both the choice of data and choice of architecture being updated in the exploratory phase
  • More guidance below

It is not an easy space and, for some, a very new way of thinking. It is a data driven process. Careful planning will be critical to successful results especially on very constrained devices.

In ML it is always critical to clearly define the problem trying to be solved because the data science and machine learning approach will depend upon this and decisions will be easier the more specific it is. It is also very important to consider what type of data will be encountered in the edge scenario as this will determine the kind of ML algorithm that should be used.

Even at the start, before training any models, real world data collection and examination will help this process greatly and new ideas could even arise. Below, we will discuss data considerations in detail. Of course, the equipment itself will help determine the ML approach with regard to device attributes like limited memory, compute, and/or power consumption limits.

Fortunately, data science and machine learning are iterative processes, so if the ML model has poor performance, there are many ways to address issues through experimentation. Below, we will discuss considerations around ML architecture choices. Often, there will be some trial and error involved as well.

Machine learning data

Both the source(s) and attributes of the data will dictate how the intelligent edge system is built. For vision, the streaming signal could be images, video, or even LiDAR. Regardless of the signal, training an ML model and using it to score new data (called inferencing) requires domain knowledge, such as experience in designing and using ML algorithms or neural network architectures, and expertise deploying them to the specialized hardware. Below are a few considerations related to ML; however, it is recommended to gain some deeper knowledge in order to open up more possibilities, or to find an ML expert with edge experience to help with the project.

Collecting and using a balanced dataset is critical, that is, one representing all classes or categories equally. When the ML model is trained on a dataset, generally that dataset has been split into train, validate, and test subsets. The purpose of these subsets is as follows.

  • The training dataset is used for the actual model training over many passes or iterations (often called epochs).
  • Throughout the training process, the model is spot-checked for how well it is doing on the validation dataset.
  • After a model is done training, the final step is to pass the test dataset through it and assess how well it did as a proxy to the real-world. Note: be wary of optimizing for the test dataset (in addition to the training dataset) once one test has been run. It might be good to have a few different test datasets available.
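The three subsets above can be produced with a simple shuffled split; libraries such as scikit-learn provide `train_test_split` for this, but the plain-Python sketch below (the 70/15/15 proportions are a common, illustrative choice) makes the mechanics explicit:

```python
import random

def split_dataset(samples, train=0.7, validate=0.15, seed=42):
    """Shuffle and split; whatever remains after train+validate is the test set."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    n_train = int(len(items) * train)
    n_val = int(len(items) * validate)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1000))
print(len(train_set), len(val_set), len(test_set))  # 700 150 150
```

Shuffling before splitting matters: without it, images collected in sequence (same lighting, same shift) can end up concentrated in one subset.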

Some good news: when using deep learning, the often costly and onerous feature engineering, featurization, and preprocessing can frequently be avoided, because deep learning finds signal in noise better than traditional ML. However, in deep learning, transformations may still be utilized to clean or reformat data for model input during training as well as inference. Note that the same preprocessing must be used in training and when the model is scoring new data.

When advanced preprocessing is used such as de-noising, adjusting brightness or contrast, or transformations like RGB to HSV, it must be noted that this can dramatically change the model performance for the better or, sometimes, for the worse. In general, it is part of the data science exploration process and sometimes it is something that must be observed once the device and other components are placed in a real-world location.
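As an example of such a transformation, the RGB-to-HSV conversion can be done per pixel with nothing but the standard library (in practice, OpenCV's `cv2.cvtColor` handles whole images at once). The key point from above is that the identical transform must run at training time and at inference time on the device:

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert an iterable of (r, g, b) 0-255 tuples to HSV triples in [0, 1]."""
    return [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]

hsv = rgb_image_to_hsv([(255, 0, 0), (0, 255, 0)])
print(hsv[0])  # (0.0, 1.0, 1.0) -- pure red: hue 0, full saturation and value
```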

After the hardware is installed into its permanent location, the incoming data stream should be monitored for data drift.

  • Data drift: deviation due to changes in the current data compared to the original. Data drift will often result in a degradation in model performance (such as accuracy), although it is not the only cause of decreased performance (e.g. hardware or camera failure).

There should be an allowance for data drift testing in the system. This new data should also be collected for another round of training (the more representative data collected for training, the better the model will perform in almost all cases!), therefore, preparing for this kind of collection is always a good idea.
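A drift test does not have to be elaborate to be useful. Assuming per-frame mean brightness is logged (an assumption for illustration; production systems use richer statistics such as histogram distances or population stability indexes), a baseline-versus-recent comparison might look like:

```python
import statistics

def brightness_drifted(baseline, recent, threshold=2.0):
    """Flag drift when the recent mean is over `threshold` baseline std-devs away."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) > threshold * sigma

baseline = [120, 122, 119, 121, 120, 123, 118]
print(brightness_drifted(baseline, [121, 120, 122]))  # False: in line with baseline
print(brightness_drifted(baseline, [80, 82, 78]))     # True: scene got much darker
```

A sudden darkening like the second case could mean genuine drift (seasonal lighting) or hardware trouble (a failing lamp or dirty lens), which ties into the hardware-monitoring point below.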

In addition to using data for training and inference, new data coming from the device could be used to monitor the device, camera or other components for hardware degradation.

In summary, here are the key considerations:

  • Always use a balanced dataset with all classes represented equally
  • The more representative data used to train a model, the better
  • Have a system in place to collect new data from device to retrain
  • Have a system in place to test for data drift
  • Only run a test set through a new ML model once - if you iterate and retest on the same test set this could cause overfitting to the test set in addition to the training set

Machine learning architecture choices

An ML architecture is the layout of the mathematical operations that process input into our desired, actionable output. For instance, in deep learning this would be the number of layers and neurons in each layer of a deep neural network, plus their arrangement. It is important to note that there is no guarantee that the performance metric goal (e.g. high enough accuracy) for one ML architecture will be achieved. To mitigate this, several different architectures should be considered. Often, two or three different architectures are tried before a choice is made. Remember, this is often an iterative process with both the choice of data and choice of architecture being updated in the exploratory phase of the development process.

It helps to understand the issues that can arise when training an ML model that may only be seen after training or, even, at the point of inferencing on device. Some such issues include overfitting and underfitting as introduced below.

In the training and testing process, one should keep an eye out for overfitting and underfitting:

  • Overfitting: can give a false sense of success because the performance metric (like accuracy) might be very good when the input data looks like the training data. However, overfitting can occur when the model fits to the training data too closely and can not generalize well to new data. For instance, it may become apparent that the model only performs well indoors because the training data was from an indoor setting. This can be caused by:

    • The model learned to focus on incorrect, non-representative features specifically found in the training dataset
    • The model architecture may have too many learnable parameters (correlated to the number of layers in a neural network and units per layer) - note, the model's memorization capacity is determined by the number of learnable parameters
    • Not enough complexity or variation in the training data
    • Trained over too many iterations
    • Other reasons for good performance in training and significantly worse performance in validation and testing
  • Underfitting: the model has generalized so well that it can not tell the difference between classes with confidence - e.g. the training loss will still be unacceptably high. This can be caused by:

    • Not enough samples in training data
    • Trained for too few iterations - too generalized
    • Other reasons related to the model not being able to recognize any objects, or poor recognition and loss values during training (the assessment values used to direct the training process through optimization and weight updates)
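One practical way to catch overfitting during training, assuming per-epoch loss values are recorded, is to watch for validation loss climbing while training loss keeps falling. A minimal sketch; the loss values below are made up for illustration:

```python
def overfitting_epoch(train_loss, val_loss, patience=2):
    """Return the epoch where validation loss stopped improving for `patience`
    consecutive epochs while training loss kept decreasing, else None."""
    best, since_best = float("inf"), 0
    for epoch, v in enumerate(val_loss):
        if v < best:
            best, since_best = v, 0
        else:
            since_best += 1
            if since_best >= patience and train_loss[epoch] < train_loss[epoch - patience]:
                return epoch - patience  # last epoch the model still generalized
    return None

train = [1.0, 0.7, 0.5, 0.35, 0.25, 0.18]
val   = [1.1, 0.8, 0.6, 0.65, 0.70, 0.75]
print(overfitting_epoch(train, val))  # 2 -> consider stopping around epoch 2
```

This is essentially the logic behind the "early stopping" callbacks that most ML frameworks provide.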

There is a trade-off between too much capacity (a large network, or one with a large number of learnable parameters) and too little capacity. In transfer learning (where some network layers are set as not trainable, i.e. frozen), increasing capacity would equate to "opening up" more, earlier layers in the network versus only using the last few layers in training (with the rest remaining frozen).

There isn't a hard-and-fast rule for determining the number of layers in a deep neural network, so sometimes several model architectures must be evaluated within an ML task. However, in general, it is good to start with fewer layers and/or parameters ("smaller" networks) and gradually increase the complexity.

Considerations when choosing the best architecture include the inference speed requirements, which call for an assessment and acceptance of the speed-versus-accuracy tradeoff. Often, a faster inference speed is associated with lower performance (e.g. accuracy, confidence, or precision could suffer).

A discussion around requirements for the ML training and inferencing will be necessary based upon the considerations above and any company specific requirements. For instance, if the company policy allows open source solutions to be utilized, it will open up a great deal of ML algorithmic possibilities as most cutting edge ML work is in the open source domain.

In summary, here are the key considerations:

  • Keep an eye out for overfitting and underfitting
  • Testing several ML architectures is often a good idea - this is an iterative process
  • There will be a trade-off between too much network capacity and too little, but often it's good to start with too little and build up from there
  • There will be a trade-off between speed and your performance metric (e.g. accuracy)
  • If the performance of the ML model is acceptable, the exploratory phase is complete (one can be tempted to iterate indefinitely)

Data science workflows

The data science process for edge deployments has a general pattern. After a clear data-driven problem statement is formulated, the next steps generally include the following.

 

 

  • Data Collection. Data collection or acquisition could be an online image search, from a currently deployed device, or other representative data source. Generally, the more data the better. In addition, the more variability, the better the generalization.
  • Data Labeling. If only hundreds of images need to be labeled (e.g. when using transfer learning), this is usually done in-house; whereas if tens of thousands of images need to be labeled, a vendor could be enlisted for both data collection and labeling.
  • Train a Model with ML Framework. An ML framework such as TensorFlow or PyTorch (both with Python and C++ APIs) will need to be chosen, but usually this depends upon what code samples are available in open source or in-house, plus experience of the ML practitioner. Azure ML may be used to train a model using any ML framework and approach - it is agnostic of framework and has Python and R bindings, plus many wrappers around popular frameworks.
  • Convert the Model for Inferencing on Device. Almost always, a model will need to be converted to work with a particular runtime (model conversion usually involves advantageous optimizations like faster inference and smaller model footprints). This step differs for each ML framework and runtime, but there are open-source interoperability frameworks available such as ONNX and MMdnn.
  • Build the Solution for Device. The solution is usually built on the same type of device as will be used in the final deployment because binary files are created that are system specific.
  • Using Runtime, Deploy Solution to Device. Once a runtime has been chosen (usually in conjunction with the ML framework choice), the compiled solution may be deployed. The Azure IoT Edge runtime is a Docker-based system in which the ML runtimes may be deployed as containers.
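The conversion step can be sketched for one common pairing, PyTorch to ONNX. The input shape, tensor names, and opset below are illustrative assumptions, and `torch` is imported lazily so the sketch loads without PyTorch installed:

```python
def export_to_onnx(model, output_path="model.onnx", input_shape=(1, 3, 224, 224)):
    """Trace a PyTorch model with a dummy input and write an ONNX file."""
    import torch  # lazy import: only needed when actually exporting
    model.eval()  # inference mode: disables dropout/batch-norm training behavior
    dummy = torch.randn(*input_shape)  # tracing input with the deployment shape
    torch.onnx.export(model, dummy, output_path,
                      input_names=["image"], output_names=["scores"],
                      opset_version=11)
    return output_path
```

The exported file can then be loaded by an ONNX-compatible runtime on the edge device, which is where the smaller-footprint and faster-inference optimizations mentioned above come into play.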

The diagram below illustrates an example data science process in which open source tools may be leveraged for the workflow. Data availability and type will drive most of the choices, even, potentially, the devices/hardware chosen.

[Figure: example data science workflow]

If a workflow already exists for the data scientists and app developers, a few other considerations apply. First, it is advised to have a code, model, and data versioning system in place. Second, an automation plan for code and integration testing, along with other aspects of the data science process (triggers, build/release process, etc.), will help speed up time to production and cultivate collaboration within the team.

The language of choice helps dictate which API or SDK is used for inferencing and training ML models, which in turn dictates the type of ML model, the type(s) of device, the type of IoT Edge module, and so on. For example, PyTorch has a C++ API for inferencing (and now for training) that works well with the OpenCV C++ API. If the app developer working on the deployment strategy is building a C++ application, or has that experience, one might consider PyTorch or other frameworks (TensorFlow, CNTK, etc.) that have C++ inferencing APIs.

In summary, here are the key considerations:

  • Converting models also involves optimizations, such as faster inference and smaller model footprints, that are critical for resource-constrained devices
  • The solution usually needs to be built on a dedicated build device of the same type as the deployment target
  • The language and framework of choice depend on the ML practitioner's experience as well as what is available in open source
  • The runtime of choice depends on the device and on the hardware acceleration available for ML
  • It is important to have a versioning system for code, models, and data

Image storage and management

Storage and management of the images involved in a computer vision application is a critical function. Some of the key considerations for managing those images are:

  • The ability to store all raw images during training, with easy retrieval for labeling
  • A fast storage medium, to avoid pipeline bottlenecks and data loss
  • Storage on the edge as well as in the cloud, since labeling activity can be performed in both
  • Categorization of images for easy retrieval
  • Naming and tagging of images to link them with inferred metadata

The combination of Azure Blob Storage, Azure IoT Hub, and Azure IoT Edge allows several options for storing image data:

  • Use the Azure IoT Edge Blob Storage module, which automatically syncs images to Azure Blob Storage based on policy
  • Store images on the local host file system and upload them to the Azure Blob service using a custom module
  • Use a local database to store images, which can then be synced to a cloud database

We believe the IoT Edge Blob Storage module is the most powerful and straightforward option, and it is our preferred approach. A typical workflow might be:

  1. Raw messages are stored locally in the Edge Blob module after ingestion, with a timestamp and sequence number to uniquely identify the image files
  2. A policy is set on the Edge Blob module for automatic, ordered upload to Azure Blob Storage
  3. To conserve space on the edge device, automatic deletion after a certain time can be configured, along with the retain-while-uploading option, to ensure all images get synced to the cloud
  4. Local categorization and labeling can be implemented using a module that reads these images into a UX. The label data is associated with the image URI, along with the coordinates and category.
  5. Because label data needs to be saved, a local database is preferred for this metadata; it allows easy lookup for the UX and can be synced to the cloud via telemetry messages.
  6. During a scoring run, the model detects matching patterns and generates events of interest. This metadata is sent to the cloud via telemetry referencing the image URI, and optionally stored in the local database for the edge UX. The images continue to be stored in the Edge Blob module and synced with Azure Blob Storage.
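
As a hedged sketch of steps 1 and 4 above, the helpers below show one way to build unique, sortable blob names from a timestamp and sequence number, and to record label metadata against an image URI. The naming scheme and field names are illustrative assumptions, not prescribed by the Blob Storage module:

```python
from datetime import datetime, timezone

def blob_name(camera_id: str, seq: int, captured_at: datetime) -> str:
    """Unique, sortable blob name built from a timestamp and sequence number."""
    stamp = captured_at.strftime("%Y%m%dT%H%M%S%f")
    return f"{camera_id}/{stamp}-{seq:06d}.jpg"

def label_record(image_uri: str, category: str, box: tuple) -> dict:
    """Label metadata linked to the image URI; the shape is illustrative."""
    x, y, w, h = box
    return {
        "imageUri": image_uri,
        "category": category,
        "coordinates": {"x": x, "y": y, "w": w, "h": h},
    }

name = blob_name("cam01", 42, datetime(2020, 10, 1, 12, 0, 0, tzinfo=timezone.utc))
record = label_record(f"https://example.blob.core.windows.net/images/{name}",
                      "defect", (10, 20, 64, 64))
```

Records like `record` would live in the local metadata database from step 5 and be referenced by the telemetry sent in step 6.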

Alerts persistence

In the context of vision on the edge, an alert is a response to an event triggered by the AI model (in other words, by the inferencing results). The type of event is determined by how the model was trained. These events are separate from operational events raised by the processing pipeline and from events related to the health of the runtime.

Some of the common alert types are:

  • Image classification
  • Movement detection
  • Direction of movement
  • Object detection
  • Count of objects
  • Total count of objects over a period of time
  • Average count of objects over a period of time

Alerts, by definition, need to be monitored because they drive actions. They are critical to operations, time sensitive in terms of processing, and must be logged for audit and further analysis.

Alerts need to be persisted locally on the edge where they are raised, and then passed on to the cloud for further processing and storage. This ensures a quick local response and avoids losing critical alerts to transient failures.

Some options to achieve this persistence and cloud syncing are:

  • Utilize the built-in store-and-forward capability of the IoT Edge runtime, which automatically syncs with Azure IoT Hub after a loss of connectivity
  • Persist alerts on the host file system as log files, which can be synced periodically to blob storage in the cloud
  • Utilize the Azure Blob Edge module, which syncs this data to Azure Blob Storage in the cloud based on configurable policies
  • Use a local database on IoT Edge, such as Azure SQL Edge, to store the data and sync it with Azure SQL Database using SQL Data Sync; SQLite is a lighter-weight option

The preferred option is the built-in store-and-forward capability of the IoT Edge runtime. It is the best fit for alerts because of their time sensitivity and typically small message sizes, and because of its ease of use.
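
To make the shape of such an alert concrete, here is a minimal sketch. The field names and the output queue name are illustrative assumptions; in a real module, the JSON body would be sent upstream with the Azure IoT Device SDK's module client, letting the runtime's store-and-forward capability handle connectivity gaps:

```python
import json
from datetime import datetime, timezone

def build_alert(alert_type: str, camera_id: str, image_uri: str, count: int = 1) -> str:
    """Serialize an inferencing alert as a JSON telemetry message body."""
    payload = {
        "alertType": alert_type,      # e.g. "object-detection"
        "cameraId": camera_id,
        "imageUri": image_uri,        # links back to the stored frame
        "count": count,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

body = build_alert("object-detection", "cam01",
                   "https://example.blob.core.windows.net/images/cam01/0001.jpg")
# In an IoT Edge module, this body would be routed to an output such as "alerts"
# via IoTHubModuleClient.send_message_to_output(Message(body), "alerts").
```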

User Interface

The user interface requirements of an IoT solution vary depending on the overall solution objectives. In general, four user interfaces are commonly found in IoT solutions: Administrator, Operator, Consumer, and Analytics. This guidance focuses on a simple operator user interface and a visualization dashboard, and we will provide a reference implementation of those two.

  • Administrator: Allows full access to device provisioning, device and solution configuration, user management, and so on. These features can be provided as part of one solution or as separate solutions.
  • Consumer: Applies only to consumer solutions. It provides access similar to the operator's interface, but limited to the devices owned by the user.
  • Operator: Provides centralized access to the operational components of the solution, typically including device management, alert monitoring, and configuration.
  • Analytics: An interactive dashboard that provides visualization of telemetry and other data analysis.

Technology Options

Power BI is a compelling option for the analytics and visualization needs. It provides powerful features for creating customizable interactive dashboards, and it connects to many popular database systems and services. It is available as a managed service and as a self-hosted package; the managed service is the most popular and recommended option. With Power BI Embedded, you can add customer-facing reports, dashboards, and analytics to your own applications, branding Power BI as your own. You can also reduce developer resources by automating the monitoring, management, and deployment of analytics, while retaining full control of Power BI features and intelligent analytics.

Another suitable technology for IoT visualizations is Azure Maps, which lets you create location-aware web and mobile applications using simple and secure geospatial services, APIs, and SDKs in Azure. It delivers experiences based on geospatial data, with built-in location intelligence from world-class mobility technology partners.

Azure App Service is a managed platform with powerful capabilities for building web and mobile apps for many platforms and mobile devices. It allows developers to quickly build, deploy, and scale web apps created with popular frameworks such as .NET, .NET Core, Node.js, Java, PHP, Ruby, or Python, in containers or running on any operating system. You can also meet rigorous, enterprise-grade performance, security, and compliance requirements by using the fully managed platform for your operational and monitoring tasks.

For real-time data reporting, Azure SignalR Service makes adding real-time communications to your web application as simple as provisioning a service; there is no need to be a real-time communications guru. It integrates easily with services such as Azure Functions, Azure Active Directory, Azure Storage, Azure App Service, Azure Analytics, Power BI, IoT, Cognitive Services, Machine Learning, and more. To secure your user interface solutions, the Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from the vast majority of cybersecurity attacks.

Scenarios

Use case 1

Overview

Contoso Boards produces high-quality circuit boards used in computers. Their number one product is a motherboard. Lately they have seen an increase in issues with chip placement on the board. Investigation showed that the circuit boards are being placed incorrectly on the assembly line. They need a way to identify whether a circuit board is placed on the assembly line correctly. The data scientists at Contoso Boards are most familiar with TensorFlow and would like to continue using it as their primary ML framework. Contoso Boards has several assembly lines that produce these motherboards, and they would also like centralized management of the entire solution.

Questions

What are we analyzing?

  • Motherboard

Where are we going to be viewing the motherboard from?

  • Assembly Line Conveyor belt

What camera do we need?

  • Area or Line scan
  • Color or Monochrome
  • CCD or CMOS Sensor
  • Global or rolling shutter
  • Frame Rate
  • Resolution

What type of lighting is needed?

  • Backlighting
  • Shade
  • Darkfield

How should the camera be mounted?

  • Top down
  • Side view
  • Angular

What hardware should be used?

  • CPU
  • FPGA
  • GPU
  • ASIC

Solution

Based on the overall solution Contoso Boards is looking for, this vision use case calls for edge detection of the part. We need to position a camera directly above the part, at 90 degrees and about 16 inches above it. Since the conveyor system moves relatively slowly, we can use an area scan camera with a global shutter. For this use case, the camera should capture about 30 frames per second. For the resolution, we use the formula Res = (object size) divided by (details to be captured): Res = 16"/8" gives 2 MP in x and 4 in y, so we need a camera capable of 4 MP. As for the sensor type, the part is not fast moving and we are really looking for edge detection, so a CCD sensor could be used; however, a CMOS sensor will be used here. One of the more critical aspects of any vision workload is lighting. In this application, Contoso Boards should use a white diffused filter back light. This makes the part look almost black and gives a high amount of contrast for edge detection. As for color options, this application is better served in black and white, as that yields the sharpest edges for the detection AI model. Regarding hardware, the data scientists are most familiar with TensorFlow, and learning ONNX or another framework would slow down model development. Because several assembly lines will use this solution, and Contoso Boards would like a centrally managed edge solution, Azure Stack Edge (with the GPU option) works well here. Given the workload, the team's TensorFlow experience, and deployment across multiple assembly lines, GPU-based hardware is the choice for hardware acceleration.

Sample of what the camera would see

image.png

Use Case 2

Overview

Contoso Shipping has recently had several pedestrian accidents at their loading docks. Most of the accidents happen when a truck leaves the loading dock and the driver does not see a dock worker walking in front of the truck. Contoso Shipping would like a solution that watches for people, predicts their direction of travel, and warns drivers of the danger of hitting workers. The distance from the cameras to Contoso Shipping's server room is too far for GigE connectivity; however, they do have a large Wi-Fi mesh that could be used. Most of the data scientists Contoso Shipping employs are familiar with OpenVINO, and they would like to be able to reuse the models on additional hardware in the future. The solution also needs to ensure that devices operate as power efficiently as possible. Finally, Contoso Shipping needs a way to manage the solution remotely for updates.

Questions

What are we analyzing?

  • People and patterns of movement

Where are we going to be viewing the people from?

  • The loading docks are 165 feet long
  • Cameras will be placed 17 feet high to comply with city ordinances
  • Cameras will need to be positioned 100 feet away from the front of the trucks
  • Camera focus will need to extend from 10 feet behind the front of the truck to 10 feet in front of it, giving us a 20-foot depth of focus

What camera do we need?

  • Area or Line scan
  • Color or Monochrome
  • CCD or CMOS Sensor
  • Global or rolling shutter
  • Frame Rate
  • Resolution

What type of lighting is needed?

  • Backlighting
  • Shade
  • Darkfield

What hardware should be used?

  • CPU
  • FPGA
  • GPU
  • ASIC

How should the camera be mounted?

  • Top down
  • Side view
  • Angular

Solution

Based on the size of the loading dock, Contoso Shipping will require several cameras to cover the entire dock. Zoning laws that Contoso Shipping must adhere to require that surveillance cameras not be mounted higher than 20 feet. In this use case, the average height of a worker is 5 feet 8 inches. The solution must use the smallest number of cameras possible.

Formula:

image.png

For example, consider the following images:

Taken with 480 horizontal pixels, at 20 feet

image.png

Taken with 5184 horizontal pixels, at 20 feet

image.png

The red square is shown to illustrate one pixel color.

Note: This illustrates the issue with using the wrong camera resolution for a given use case. A lens can change the FOV; however, if the wrong sensor is used for a given use case, the results can be worse than expected.

With the above in mind, when choosing cameras for the Contoso Shipping solution, we need to think about how many cameras are needed, and at what resolution, to capture enough detail to detect a person. Since we are only trying to identify whether a person is in the frame, our PPF does not need to be around 80 (roughly what facial identification requires); somewhere around 15-20 is enough. That places the FOV at around 16 feet. A 16-foot FOV gives us about 17.5 pixels per foot, which fits within our required PPF of 15-20. This means we need a 10 MP camera with a horizontal resolution of ~5184 pixels, and a lens that allows a 16-foot FOV.

The cameras will be placed outside, so the choice of sensor type should not allow "bloom". Bloom occurs when light hits the sensor and overloads it, causing something like overexposure, a "white-out" condition. CMOS is the choice here. Contoso Shipping operates 24x7 and needs to protect nighttime personnel as well. Monochrome handles low-light conditions much better than color, and since we are not trying to identify a person by color, monochrome will do; monochrome sensors are also a little cheaper.

How many cameras will it take? Since each camera can cover a 16-foot path, it is simple math: a 165-foot dock divided by a 16-foot FOV gives 10.3125 cameras. The solution therefore needs 11 monochrome 5184-horizontal-pixel (10 MP) CMOS cameras with IP67 housings or weather boxes, mounted on 11 poles 100 feet from the trucks at 17 feet high.

Because the data scientists are more familiar with OpenVINO, the models should be built in ONNX. For hardware, they need a device that can connect over Wi-Fi and use as little power as possible, which points to an FPGA. An ASIC could potentially be used as well, but because of how an ASIC works, it would not meet the requirement of reusing the models on different hardware in the future.
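
The camera-count arithmetic above can be sketched as follows; the 165-foot dock length and 16-foot per-camera FOV come from the scenario, while the helper name is illustrative:

```python
import math

def cameras_needed(dock_length_ft: float, fov_width_ft: float) -> int:
    """Smallest number of cameras whose side-by-side FOVs cover the dock."""
    return math.ceil(dock_length_ft / fov_width_ft)

count = cameras_needed(165, 16)  # 165 / 16 = 10.3125, which rounds up to 11
```

Rounding up with `ceil` is what turns the fractional 10.3125 into the 11 physical cameras the solution calls for.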

image.png

 

For more and updates to this project, see our GitHub repo here: https://github.com/AzureIoTGBB/iot-edge-vision