For years, audio and video content owners have invested heavily in digital asset management (DAM) and media asset management (MAM) solutions that promised to streamline their media archiving operations and enhance the overall value of their content. However, DAM and MAM solutions on the market have largely failed to deliver on this promise, with the software unable to generate the kind of extensive metadata required to fully capitalize on today’s media content monetization opportunities.
New artificial intelligence (AI) technologies can address this shortcoming using cognitive engines to process media and generate comprehensive metadata, including searchable transcriptions of dialog, facial recognition of performers, and identification of important objects.
This solution brief discusses how Veritone has leveraged its proprietary technology, aiWARE™, the world’s first operating system for AI, to enhance its DAM solutions for improved management, delivery, and monetization of both current and archival assets.
The media content landscape continues to transform at a staggering rate. Media and entertainment (M&E) organizations face increasing challenges to organize their growing libraries of content, grow audiences, prove the effectiveness of advertising campaigns, index for quality and compliance, and increase revenue. The sheer volume of content under management and the rate at which it is being created can make finding and retrieving content a challenge. With the explosion of content creation, from user-generated content to multi-version studio releases, AI will have an increasing role to play in the critical task of discovering relevant content all along the digital supply chain so that it can then be easily accessed, shared, and monetized.
Mining Value from Content

Pent up within M&E archives is a gold mine of content just waiting to be discovered. From access to delivery, M&E companies have a wealth of opportunities to preserve, repurpose, and monetize their content. With M&E organizations under pressure to expand their audiences and increase revenue, it’s essential to capitalize on existing content to generate new revenue streams. However, manual processes, such as logging, prevent M&E companies from realizing this value. M&E organizations have traditionally developed homegrown solutions or adopted inadequate MAM or DAM tools alongside operational teams to deliver and monetize post-production content. However, due to the limitations of manual labor and of many MAM or DAM tools, much of the content produced never gets distributed or repurposed. Over time, this valuable content can grow into huge repositories that are static, costly, and unsearchable.
Metadata Gathering
The vast majority of media libraries aren’t equipped to capitalize on these monetization efforts. Such libraries are housed in tools that deliver and/or store only rudimentary information about collected content, such as a timestamp or basic metadata like the names of actors or the locations in a movie. This level of detail is far from sufficient to support the multivariate search and indexing of information needed for continuous reuse and monetization of content.
Moreover, traditional manual techniques of metadata gathering are becoming impractical, given the significant investment in labor required for humans to review and tag the vast amount of audio, video, still image, and other content.
AI Driving Value in Media & Entertainment
Manual processes and basic metadata tools limit the amount of content that can be utilized, resulting in the growth of inaccessible, massive, and static libraries. AI enhances this process by searching and gathering information about a wide range of video and audio elements from every frame of an asset. AI transforms how humans engage with content and enables the M&E industry to drive better efficiency and value.
Many have expressed concerns that AI will eventually replace humans in the M&E value chain. However, those concerns are largely misplaced. AI’s value is in augmenting human labor, taking on the operational tasks of compiling, analyzing, and delivering content. Once content is compiled, a human operator will still be required to identify the best moments for consideration and subsequent use.
Many content repositories are simply not equipped to address future consumption requirements — which can range from on-demand distribution to historical archive discovery and monetization. AI automates the processes that connect the content to the consumer.
Putting AI into Action in a Cloud Asset Management Environment: Digital Media Hub
Core and Digital Media Hub customers have the opportunity to enrich their valuable content more intelligently — and more efficiently — than ever before by gleaning additional information from their assets through automated metadata extraction powered by AI. Integrating Core with the aiWARE operating system unlocks access to hundreds of cognitive engines across 16 classes, including facial recognition, object recognition, logo recognition, sentiment recognition, transcription, translation, and more. aiWARE routes media to the most appropriate engines to optimize metadata extraction from the content. Now individual rights holders, production houses, media companies, sports organizations, news companies, and other users of Core and Digital Media Hub can choose from best-of-breed engines curated for their individual use cases.
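The routing described above — matching an asset’s cognition needs to the best available engine in each class — can be pictured with a small sketch. This is purely illustrative and is not Veritone’s implementation; the engine names, classes, and accuracy scores below are invented for the example.

```python
# Hypothetical sketch of class-based cognitive engine routing.
# All engine names, classes, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    engine_class: str   # e.g. "transcription", "face-recognition"
    score: float        # suitability score for this media profile (hypothetical)

CATALOG = [
    Engine("engine-a", "transcription", 0.92),
    Engine("engine-b", "transcription", 0.88),
    Engine("engine-c", "face-recognition", 0.95),
    Engine("engine-d", "logo-recognition", 0.90),
]

def route(requested_classes):
    """Pick the best-scoring engine for each requested cognition class."""
    best = {}
    for engine in CATALOG:
        if engine.engine_class in requested_classes:
            current = best.get(engine.engine_class)
            if current is None or engine.score > current.score:
                best[engine.engine_class] = engine
    return best

selected = route({"transcription", "face-recognition"})
print({cls: e.name for cls, e in selected.items()})
# {'transcription': 'engine-a', 'face-recognition': 'engine-c'}
```

In a real system the score would depend on the media profile (language, audio quality, frame rate), so the "best" engine can differ per asset — which is the point of routing rather than hard-coding a single engine.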
Veritone’s extensible AI ecosystem of cognitive engines and powerful applications makes it possible for Core and Digital Media Hub users to enhance their search and exploit every frame of video and every second of audio for objects, faces, brands, text, sentiments, keywords, and more. Users can discover unique insights, dissect and analyze content programmatically and by multivariate search, and monitor media in near-real-time. The technical collaboration enables the correlation and transformation of both structured and unstructured data in a seamless manner via AI, at scale.
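A multivariate search over AI-extracted metadata can be sketched as filtering asset records on several fields at once. This is a simplified illustration, not Core’s actual query API; the record layout and values are invented.

```python
# Hypothetical multivariate search over AI-extracted metadata records.
# Record structure and contents are invented for illustration.
records = [
    {"asset": "clip-001", "faces": {"Anchor X"}, "logos": {"Brand Y"},
     "keywords": {"election"}},
    {"asset": "clip-002", "faces": {"Anchor X"}, "logos": set(),
     "keywords": {"weather"}},
]

def search(records, **criteria):
    """Return assets whose metadata contains every requested value."""
    hits = []
    for rec in records:
        if all(value in rec.get(field, set())
               for field, value in criteria.items()):
            hits.append(rec["asset"])
    return hits

print(search(records, faces="Anchor X", keywords="election"))
# ['clip-001']
```

Because each criterion narrows the result set independently, adding a field (a detected logo, a sentiment, a keyword) refines the search without changing the query mechanism — the "multivariate" quality the text refers to.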
An AI Solution That Gets Results
The use cases are vast and diverse. In one recent example, an international media conglomerate—home to premier global television, motion picture, gaming, and other brands—used the solution to underpin a broadcast compliance workflow. To comply with the U.S. Federal Communications Commission’s Children’s Television Act of 1990, the company is required to identify the talent used in any advertisements that run during children’s educational programs. Veritone leveraged automated facial recognition, speech-to-text, and enriched metadata within Core to identify the talent and provide data back to the company. As a result, the company can be sure that the ads do not contain talent that is also in the concurrent program.
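The core of such a compliance check reduces to comparing two sets of names: talent detected in the ad (e.g., from facial-recognition metadata) against the cast of the surrounding program. The sketch below is a hypothetical illustration of that comparison, not the actual workflow; the function and names are invented.

```python
# Hypothetical sketch of the talent-overlap compliance check:
# flag any performer who appears in both the ad and the
# concurrent children's program. Names are invented.

def compliant(ad_talent, program_cast):
    """Return (ok, overlap): ok is True when no ad talent
    also appears in the surrounding program."""
    overlap = set(ad_talent) & set(program_cast)
    return (len(overlap) == 0, overlap)

ad = {"Performer A", "Performer B"}
show = {"Performer B", "Performer C"}
ok, overlap = compliant(ad, show)
print(ok, overlap)
# False {'Performer B'}
```

The hard part in practice is producing the name sets reliably at scale — which is what the automated facial recognition and speech-to-text described above supply.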