Evertz is proud to collaborate with Amazon Web Services (AWS) to support the AWS Media Intelligence (AWS MI) solutions, offering broadcasters and media streaming services advanced image and video analysis and speech-to-text transcription tools for use in Evertz’ Emmy Award-winning content management platform, Mediator-X.
“Effective transcription is increasingly used by content providers to pair text and video files. This can assist in making content more searchable and have it stand out from the crowd,” says Martin Whittaker, Technical Director of MAM and Automation at Evertz. “Studies have also proven captions and subtitling can be utilized to drive engagement with audiences who are streaming video without audio or watching in other regions around the world.”
Once the transcription files are imported into Mediator-X, the platform can run a number of automated reviews on them, including vocabulary filtering to detect whether content includes explicit language, brand names or other references not suitable for broadcast. If further manual review is required, the same files can be exported from Mediator-X and delivered to external captioning/subtitling services for final caption and subtitle editing.
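A vocabulary filter of this kind can be sketched in a few lines of Python. The blocklist and the transcript segment structure below are illustrative assumptions, not Mediator-X’s actual formats:

```python
# Illustrative sketch of vocabulary filtering over transcript segments.
# BLOCKED_TERMS and the segment schema are hypothetical, not Mediator-X's
# actual watch list or data model.

BLOCKED_TERMS = {"explicitword", "brandx"}  # hypothetical watch list

def filter_transcript(segments):
    """Return the segments whose text contains a blocked term."""
    flagged = []
    for seg in segments:
        words = {w.strip(".,!?").lower() for w in seg["text"].split()}
        if words & BLOCKED_TERMS:
            flagged.append(seg)
    return flagged

segments = [
    {"start": "00:00:01:00", "text": "Welcome to the show"},
    {"start": "00:00:05:12", "text": "Sponsored by BrandX today"},
]
print(filter_transcript(segments))
```

Each flagged segment keeps its timecode, so an operator can jump straight to the offending point in the media.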
Additionally, Mediator-X stores speech-to-text files with timecode to enable video indexing and jump-to-timecode for QC, replay and post-production.
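Jump-to-timecode depends on mapping word-level timestamps, which speech-to-text services typically report in seconds, onto frame-accurate timecode. A minimal sketch, assuming a 25 fps programme and non-drop-frame timecode (both assumptions for illustration):

```python
def seconds_to_timecode(seconds, fps=25):
    """Convert a seconds offset to hh:mm:ss:ff non-drop-frame timecode.

    The 25 fps default is an assumption for illustration; the frame rate
    would come from the media's own metadata in practice.
    """
    total_frames = round(seconds * fps)
    frames = total_frames % fps
    total_seconds = total_frames // fps
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{frames:02d}"

# e.g. a word starting 83.48 s into the programme at 25 fps
print(seconds_to_timecode(83.48))  # 00:01:23:12
```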
Mediator-X also offers content providers powerful image and video analysis to identify objects, people, text, scenes, and activities within their content, and to use that knowledge to better categorise content or apply it to specific business needs.
Mediator-X sends low-res browse files, created by the Render-X transcode engine or evertz.io, to Amazon Rekognition. Amazon Rekognition’s deep learning technology reviews every frame to perform text detection, face detection and analysis, celebrity recognition, content moderation and more. Standard and custom content labels are automatically generated and imported into Mediator-X for review against the content’s other assets in the Mediator-X QC timeline.
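Amazon Rekognition’s video analysis pairs each detected label with a millisecond timestamp, which is what lets labels line up against a QC timeline. A sketch of reducing such a response to reviewable entries; the sample payload is abbreviated and the 90% confidence threshold is an assumption:

```python
# Sketch: reduce an Amazon Rekognition GetLabelDetection-style response
# to (label name, timestamp-in-ms) entries above a confidence threshold.
# The sample response is abbreviated; the 90% threshold is an assumption.

def extract_labels(response, min_confidence=90.0):
    return [
        (entry["Label"]["Name"], entry["Timestamp"])
        for entry in response["Labels"]
        if entry["Label"]["Confidence"] >= min_confidence
    ]

sample = {
    "Labels": [
        {"Timestamp": 0, "Label": {"Name": "Crowd", "Confidence": 97.3}},
        {"Timestamp": 4000, "Label": {"Name": "Guitar", "Confidence": 88.1}},
        {"Timestamp": 4000, "Label": {"Name": "Stage", "Confidence": 95.6}},
    ]
}
print(extract_labels(sample))  # [('Crowd', 0), ('Stage', 4000)]
```

The timestamped pairs can then be plotted as markers against the browse proxy, which is essentially what a QC timeline view does.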
Amazon Rekognition’s analysis, coupled with Mediator-X’s content management tools, gives users complete oversight of their content. Furthermore, the Mediator API can be leveraged to integrate metadata search and aggregation with other business systems for global search and analysis across all content.
“Mediator-X harnesses granular metadata to more accurately identify and organise media and strategically group content to deliver targeted playlists or channels tailored for specific audience demographics,” says Whittaker. “Mediator-X can also identify unique features within content that can be isolated for monetisation opportunities.”
Mediator-X also uses standard and custom content labels generated by Amazon Rekognition to fulfil automatic and manual media compliance checks. Video or images flagged as containing explicit imagery, vulgar language, non-licensed material, and more can be manually reviewed by an operator, who can add comments to any piece of media. Flagged media can then be sent to a non-linear editor (Evertz DreamCatcher, AVID, Adobe Premiere Pro, etc.) for additional production or censorship.
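The automatic side of such a compliance check can be sketched as a routing rule over moderation labels. The label names below follow Amazon Rekognition’s moderation taxonomy, but the 80% threshold and the routing rule itself are illustrative assumptions, not Mediator-X’s actual policy:

```python
# Sketch: decide whether media needs manual review, based on
# Amazon Rekognition DetectModerationLabels-style output.
# The confidence threshold and routing rule are illustrative assumptions.

def needs_review(moderation_labels, min_confidence=80.0):
    """Return the label names that push this media into manual review."""
    return [
        lbl["Name"]
        for lbl in moderation_labels
        if lbl["Confidence"] >= min_confidence
    ]

labels = [
    {"Name": "Alcoholic Beverages", "Confidence": 92.4, "ParentName": "Alcohol"},
    {"Name": "Smoking", "Confidence": 61.0, "ParentName": "Tobacco"},
]
print(needs_review(labels))  # ['Alcoholic Beverages']
```

Media with a non-empty result would be queued for an operator, matching the automatic-then-manual flow described above; everything else passes the automatic check.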