Azure Video Indexer is an applied AI service that provides a performant, easy-to-use solution for unlocking video insights, leading to a deep understanding of video content while making video estates accessible and actionable. Our customers report that with Azure Video Indexer they were able to save up to 77% of the cost of editing and analyzing their media archives.
Video Indexer became generally available in 2018 and was introduced as a unique integrated bundle of Microsoft's cloud-based artificial intelligence and cognitive capabilities applied specifically to video content. It has since become the industry's most comprehensive video AI service, making it easy to extract insights from videos. This year we collected feedback at the National Association of Broadcasters (NAB) show and the International Broadcasting Convention (IBC), and we are excited to share significant feature updates to the Azure Video Indexer product.
For Ignite 2022, we are happy to announce the following expansions to the product's insights and functionality:
Configure confidence level in a person face model with an API to get better results
It is no secret that ad providers use AI and machine learning to monitor ad effectiveness and automatically optimize ad spot opportunities. In collaboration with Bing, Azure Video Indexer now supports (in preview) featured clothing detection. When indexing a video with the Azure Video Indexer advanced video preset, you can view the featured clothing of an observed person. The insight provides information about key items worn by individuals in a video and the timestamps at which the clothing appears. This enables high-quality in-video contextual advertising, where relevant clothing ads are matched to the points on the timeline at which the items are shown.
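Featured clothing detection requires indexing with the advanced video preset. As a minimal sketch, the Upload Video call below builds the request URL with `indexingPreset=AdvancedVideo`; the location, account ID, and access token values are placeholders you would replace with your own.

```python
from urllib.parse import urlencode

# Placeholder values; substitute your own account details.
LOCATION = "trial"
ACCOUNT_ID = "<your-account-id>"
ACCESS_TOKEN = "<your-access-token>"

def build_upload_url(video_name: str, video_url: str) -> str:
    """Build an Upload Video request URL that enables the advanced video preset,
    which is required for observed-people and featured-clothing insights."""
    params = urlencode({
        "name": video_name,
        "videoUrl": video_url,
        "indexingPreset": "AdvancedVideo",  # advanced video preset
        "accessToken": ACCESS_TOKEN,
    })
    return (f"https://api.videoindexer.ai/{LOCATION}/Accounts/"
            f"{ACCOUNT_ID}/Videos?{params}")

url = build_upload_url("game-highlights", "https://example.com/clip.mp4")
```

Sending this URL as a POST request (for example with `requests.post(url)`) starts indexing; the featured clothing insight is then available once indexing completes.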
Read more about the business case in this blog post by Maayan and about the algorithm in this blog post by Tom.
Screenshot of the Azure Video Indexer website
Azure Video Indexer player and potential ad to be inserted based on Donovan Mitchell detection (example only)
Companies like Streamland Media, which edit and produce large volumes of video content, need to detect clapperboards, digital patterns, and slates across hours upon hours of raw video files. Slate detection insights (listed below) are automatically identified when indexing a video with the advanced indexing preset. These insights are most useful to customers involved in the movie post-production process.
You can learn more about this feature in our documentation.
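Once a video is indexed with the advanced preset, the slate insights appear in the insights JSON. The snippet below is illustrative only: the field names (`clapperboards`, `fields`, `instances`) are an assumed schema, so check the slate detection documentation for the exact shape.

```python
# Hypothetical insights fragment; the real schema may differ.
sample_insights = {
    "clapperboards": [
        {
            "fields": {"scene": "5", "take": "2", "director": "J. Doe"},
            "instances": [{"start": "0:00:01.2", "end": "0:00:03.8"}],
        }
    ]
}

def clapperboard_timestamps(insights: dict) -> list:
    """Collect (start, end) spans where a clapperboard was detected,
    e.g. to cut slates out of raw footage automatically."""
    spans = []
    for board in insights.get("clapperboards", []):
        for inst in board.get("instances", []):
            spans.append((inst["start"], inst["end"]))
    return spans

spans = clapperboard_timestamps(sample_insights)
```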
Azure Video Indexer does not replace STT (speech-to-text), the Translation API, or Azure search; rather, it uses those AI services behind a single endpoint. We have now added Ukrainian and Vietnamese as supported source languages for STT, translation, and search. This means transcription, translation, and search are also supported for these languages in the Azure Video Indexer web application, widgets, and APIs.
For more information, see supported languages. In addition, we expanded the languages supported in LID (language identification) and MLID (multi-language identification) through the Azure Video Indexer API. You can now specify up to 10 candidate languages (from a closed list) in the API. Narrowing down the number of language identification checks focuses the model on the languages you expect, which improves accuracy; this matters, for example, in court evidence use cases.
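As a hedged sketch, the helper below builds the upload query parameters that restrict language identification to a candidate list. The `language=multi` value and the `customLanguages` parameter name follow the Video Indexer upload API as we understand it; verify both against the current API reference.

```python
from urllib.parse import urlencode

def lid_upload_params(candidates: list) -> str:
    """Build query parameters restricting MLID to a closed candidate list.
    The API accepts at most 10 candidate languages."""
    if len(candidates) > 10:
        raise ValueError("At most 10 candidate languages may be specified")
    return urlencode({
        "language": "multi",                      # enable multi-language identification
        "customLanguages": ",".join(candidates),  # closed list of candidates
    })

params = lid_upload_params(["en-US", "uk-UA", "vi-VN"])
```

These parameters would be appended to the same Upload Video endpoint used for regular indexing.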
With an Azure Resource Manager (ARM)-based paid (unlimited) account for Azure Video Indexer, you are able to use:
Now that this capability is generally available, ARM-based account management is the default and recommended way to create, edit, and delete Azure Video Indexer accounts going forward. Already using Video Indexer and need to migrate? No worries: we offer the option to connect existing classic accounts to ARM-based accounts, and it only takes two minutes. To create an ARM-based account, see create an account.
Some new Azure Video Indexer features and AIs will from now on be available only for ARM-based accounts, for example the integration with Azure Monitor. We released a new set of telemetry logs that enable you to better monitor your indexing pipeline. Azure Video Indexer now supports diagnostic settings for indexing events. You can export logs and monitor the upload and re-indexing of media files through diagnostic settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution. This lets you create event-based triggers for your business logic, such as monitoring and alerting.
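Once diagnostic settings route indexing events to a Log Analytics workspace, you can query them with KQL. The sketch below builds such a query as a string; the table name `VIIndexing` and the column names are assumptions based on the Azure Monitor integration, so confirm them against the monitoring documentation for your account.

```python
def indexing_failures_query(hours: int = 24) -> str:
    """Build a KQL query (as a string) listing failed indexing events
    in the last `hours` hours. Table and column names are assumed."""
    return "\n".join([
        "VIIndexing",                                   # assumed Video Indexer events table
        f"| where TimeGenerated > ago({hours}h)",
        '| where Status == "Failure"',
        "| project TimeGenerated, OperationName, Status",
    ])

query = indexing_failures_query(6)
```

The resulting query can be pasted into the Log Analytics query editor or submitted programmatically, for example with the `azure-monitor-query` SDK.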
Another example is the new restricted viewer role. By assigning this role, you can share the Azure Video Indexer website with other employees in your organization without granting any creation, editing, or deletion rights. The restricted viewer role can be used to view the extracted insights, a common customer request in companies where many people analyze video with varying roles and rights.
You can now edit speakers' names in the transcript through the API, and we have also released word-level time annotation with confidence scores. An annotation is any type of additional information added to existing text, be it a transcription of an audio file or an original text file. In addition, you can configure the confidence level for face recognition within a person model by using the Patch person model API.
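As a non-authoritative sketch, the helper below builds a JSON Patch body for the Patch person model call that sets the face recognition confidence threshold. The patch shape and the property path `/personModelThreshold` are assumptions; check the Patch person model API reference for the exact field name.

```python
import json

def confidence_patch(threshold: float) -> str:
    """Build a JSON Patch body raising the face recognition confidence
    threshold of a person model. Property path is an assumed name."""
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("threshold must be between 0 and 1")
    return json.dumps([
        {"op": "replace", "path": "/personModelThreshold", "value": threshold}
    ])

body = confidence_patch(0.75)
```

The body would be sent as a PATCH request to the person model endpoint of your account, raising the threshold so that only higher-confidence face matches are reported.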
Some of the features mentioned will be added to production by the end of the month.
|Feature|Link to Product Documentation|
|---|---|
|Featured Clothing (Preview)|https://learn.microsoft.com/azure/azure-video-indexer/observed-people-featured-clothing|
|Slate Detection (Preview)|https://learn.microsoft.com/azure/azure-video-indexer/slate-detection-insight|
|More languages support|https://learn.microsoft.com/azure/azure-video-indexer/language-support|
|Edit the name of the speakers|https://learn.microsoft.com/azure/azure-video-indexer/release-notes#edit-a-speakers-name-in-the-tran...|
|Word level time annotation|https://learn.microsoft.com/azure/azure-video-indexer/release-notes#word-level-time-annotation-with-...|
|Configure confidence level in a person face model (API only)|Patch person model|
|ARM-based account management in GA|Create an account|
|New set of logs (for ARM-based accounts only)|https://learn.microsoft.com/azure/azure-video-indexer/release-notes#azure-monitor-integration-enabli...|
|New restricted viewer role (for ARM-based accounts only)|To be published soon|
For those of you who are new to our technology, we encourage you to get started today with these helpful resources: