At IBC 2019, Microsoft announced new updates to Azure Media Services, including the popular Video Indexer. Video Indexer automatically extracts insights and metadata from videos, letting you search for them by person, object, visual text, spoken word, entity, or emotion. With this update, Video Indexer adds support for animated character recognition and multilingual speech transcription.
- Video Indexer now supports a new set of models that automatically detect and group animated characters, letting customers then tag and recognize them easily via integrated Custom Vision models.
- New automatic spoken language identification for multi-language content leverages machine learning to identify the different languages used in a media asset. Once detected, each language segment is automatically transcribed in its identified language, and all segments are integrated back into a single transcription file spanning multiple languages.
- Brand detection has been improved to also incorporate well-known names and locations, such as the Eiffel Tower in Paris or Big Ben in London.
- A new shot-type detection feature adds a set of "tags" to the metadata attached to each individual shot in the insights JSON, representing its editorial type (such as wide shot, medium shot, close up, extreme close up, two shot, multiple people, indoor, outdoor, etc.).
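To illustrate the multilingual transcription flow described above, here is a minimal sketch of stitching per-language segments back into one ordered transcript. The segment field names (`start`, `end`, `language`, `text`) are illustrative assumptions, not the actual Video Indexer schema.

```python
# Hypothetical transcript segments as language identification might produce them;
# field names are illustrative, not the real Video Indexer output format.
segments = [
    {"start": 0.0, "end": 12.5, "language": "en-US", "text": "Welcome to the show."},
    {"start": 12.5, "end": 30.0, "language": "es-ES", "text": "Bienvenidos al programa."},
    {"start": 30.0, "end": 45.0, "language": "en-US", "text": "Back to English now."},
]

def merge_transcript(segments):
    """Integrate per-language segments back into one time-ordered transcript."""
    lines = []
    for seg in sorted(segments, key=lambda s: s["start"]):
        lines.append(f"[{seg['start']:.1f}-{seg['end']:.1f}] ({seg['language']}) {seg['text']}")
    return "\n".join(lines)

print(merge_transcript(segments))
```

Each line keeps the detected language alongside the text, so a consumer can still distinguish the language of every segment in the combined file.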
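The per-shot "tags" in the insights JSON can be consumed like any other metadata. The snippet below is a hypothetical sketch: the JSON excerpt mimics the described structure (shots carrying editorial-type tags) but is not the exact Video Indexer response schema.

```python
import json

# A minimal, hypothetical excerpt of an insights JSON with per-shot tags;
# the exact schema of the real API response may differ.
insights = json.loads("""
{
  "videos": [
    {
      "insights": {
        "shots": [
          {"id": 1, "tags": ["wide shot", "outdoor"]},
          {"id": 2, "tags": ["close up", "indoor"]},
          {"id": 3, "tags": ["two shot", "indoor"]}
        ]
      }
    }
  ]
}
""")

def shots_by_tag(insights, tag):
    """Return the ids of shots whose metadata carries the given editorial tag."""
    matches = []
    for video in insights["videos"]:
        for shot in video["insights"].get("shots", []):
            if tag in shot.get("tags", []):
                matches.append(shot["id"])
    return matches

print(shots_by_tag(insights, "indoor"))  # → [2, 3]
```

Filtering shots by editorial type this way is the kind of query these tags are meant to enable, e.g. pulling every close-up for an editing workflow.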
You can learn about other updates from the source link below.