Three years ago, the Video Quality Analysis (VQA) group in Prime Video started using machine learning to identify defects in content captured from devices such as gaming consoles, TVs, and set-top boxes, in order to validate new application releases or offline changes to encoding profiles. More recently, we've been applying the same techniques to problems such as real-time quality monitoring of our thousands of channels and live events, and to analyzing new catalogue content at scale.

Our team trains computer vision models to watch video and spot issues that may compromise the customer viewing experience, such as blocky frames, unexpected black frames, and audio noise. This enables us to process video at the scale of hundreds of thousands of live events and catalogue items.

An interesting challenge we face is the lack of positive cases in training data, owing to the extremely low prevalence of audiovisual defects in Prime Video offerings. We tackle this challenge with a dataset that simulates defects in pristine content. After using this dataset to develop detectors, we validate that the detectors transfer to production content by testing them on a set of actual defects.

*An example of how we introduced audio clicks into clean audio.*

The initial version of Amazon Prime Video's block corruption detector uses a residual neural network to produce a map indicating the probability of corruption at particular image locations, binarizes that map, and computes the ratio between the corrupted area and the total image area.

*The architecture of the block corruption detector.*
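The final stage of the detector described above (binarize the probability map, then take the corrupted-area ratio) can be sketched in a few lines. This is a minimal illustration, not the production implementation: the network output is stood in for by a toy array, and the 0.5 threshold is an assumed value, not one published in the post.

```python
import numpy as np

def corruption_ratio(prob_map, threshold=0.5):
    """Binarize a per-pixel corruption-probability map and return the
    fraction of the frame flagged as corrupted.

    `prob_map` stands in for the residual-network output; the threshold
    is an assumption for illustration.
    """
    binary = prob_map >= threshold
    return binary.mean()

# Toy 4x4 "probability map": four of sixteen pixels exceed the threshold.
prob = np.array([
    [0.1, 0.2, 0.9, 0.8],
    [0.0, 0.1, 0.7, 0.6],
    [0.0, 0.0, 0.2, 0.1],
    [0.1, 0.0, 0.0, 0.3],
])
ratio = corruption_ratio(prob)  # 4 corrupted pixels / 16 total = 0.25
```

A downstream pipeline could then flag a frame as defective whenever this ratio exceeds some operating point tuned on the simulated-defect dataset.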
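The defect-simulation idea of injecting audio clicks into clean audio could look roughly like the sketch below. The function name, click model (a few near-full-scale samples with random sign), and all parameters are hypothetical; the post does not describe the team's actual simulation procedure.

```python
import numpy as np

def inject_clicks(audio, sample_rate, clicks_per_second=0.5,
                  click_amplitude=0.9, seed=0):
    """Return a copy of `audio` with short impulsive clicks injected,
    plus the sample positions of the clicks (usable as training labels).

    Hypothetical sketch of defect simulation; not the VQA team's method.
    """
    rng = np.random.default_rng(seed)
    corrupted = audio.copy()
    n_clicks = int(len(audio) / sample_rate * clicks_per_second)
    positions = rng.integers(0, len(audio), size=n_clicks)
    for pos in positions:
        # Model a click as 1-4 samples driven to near full scale.
        width = rng.integers(1, 5)
        end = min(pos + width, len(audio))
        corrupted[pos:end] = click_amplitude * rng.choice([-1.0, 1.0])
    return corrupted, positions

# Clean one-second 440 Hz tone at 16 kHz stands in for pristine content.
sr = 16000
t = np.arange(sr) / sr
clean = 0.2 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
noisy, positions = inject_clicks(clean, sr, clicks_per_second=5)
```

Because the injection positions are known exactly, the corrupted/clean pairs come with free ground-truth labels, which is the point of simulating defects rather than waiting for rare real ones.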