Audio fingerprinting (also known as acoustic fingerprinting) engines in the Veritone cognitive engine ecosystem identify pre-recorded audio snippets in audio and video files by matching a distinctive signature, or fingerprint.
On Veritone aiWARE, audio fingerprinting engines are trained on one or more libraries of reference audio, such as advertisements, music, and environmental sounds, each item tagged with a unique identifier. From each reference clip, the engine generates a condensed digital summary, its fingerprint, which it then uses to quickly locate the same item across media files during processing and to report the time span(s) in which it occurs.
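The matching idea described above can be sketched with a toy fingerprinter that hashes the dominant spectral peaks of each audio frame and scores a candidate clip by hash overlap. This is a minimal illustration of the general technique only; production engines (including those on aiWARE) use far more robust, noise-tolerant features, and nothing below reflects Veritone's actual algorithms.

```python
import hashlib
import numpy as np

def fingerprint(signal, frame=256, top_k=3):
    """Hash the strongest frequency bins of each frame into a compact set.

    Illustrative only: real engines pair peaks across time and use
    noise-tolerant hashing rather than raw top-bin hashes.
    """
    hashes = set()
    for i in range(len(signal) // frame):
        spectrum = np.abs(np.fft.rfft(signal[i * frame:(i + 1) * frame]))
        peaks = np.argsort(spectrum)[-top_k:]            # dominant bins
        key = ",".join(str(int(p)) for p in sorted(peaks))
        hashes.add(hashlib.sha1(key.encode()).hexdigest()[:8])
    return hashes

def match_score(reference, candidate):
    """Fraction of reference hashes found in the candidate clip."""
    return len(reference & candidate) / max(len(reference), 1)

# A reference tone embedded inside a longer recording should score high
# even though the embedding shifts the frame alignment.
t = np.arange(0, 1.0, 1 / 8000)
tone = np.sin(2 * np.pi * 440 * t)                       # 440 Hz reference
recording = np.concatenate([np.zeros(4000), tone, np.zeros(4000)])
ref_fp = fingerprint(tone)
print(match_score(ref_fp, fingerprint(recording)))
```

Scoring by set overlap keeps the reference representation compact, which is what lets an engine scan many large media files against many reference clips quickly.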
Quickly locate matched audio with cognitive engine results that report the time spans where each audio signature, or fingerprint, was found.
Create custom models using unique audio files and metadata with the Veritone Library application or your own library to identify a custom set of ads, songs, sounds, and more in audio and video files. Learn more.
Quickly find the audio snippets you are looking for within large audio files with searchable audio fingerprinting engine output, available via API.
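As a sketch of working with time-span results like those above, the snippet below filters a simplified engine-output document for one fingerprinted item. The record shape is modeled loosely on aiWARE's time-series ("series") output convention, but the field names and values here are illustrative placeholders, not the authoritative schema.

```python
import json

# Simplified engine output; field names are illustrative, not authoritative.
engine_output = json.loads("""
{
  "series": [
    {"startTimeMs": 12000, "stopTimeMs": 42000,
     "object": {"label": "Acme 30s Radio Spot", "confidence": 0.97}},
    {"startTimeMs": 300000, "stopTimeMs": 330000,
     "object": {"label": "Acme 30s Radio Spot", "confidence": 0.91}},
    {"startTimeMs": 510000, "stopTimeMs": 525000,
     "object": {"label": "Station Jingle", "confidence": 0.88}}
  ]
}
""")

def spans_for(output, label, min_confidence=0.9):
    """Return (start, stop) spans in seconds for one fingerprinted item."""
    return [
        (item["startTimeMs"] / 1000, item["stopTimeMs"] / 1000)
        for item in output["series"]
        if item["object"]["label"] == label
        and item["object"]["confidence"] >= min_confidence
    ]

print(spans_for(engine_output, "Acme 30s Radio Spot"))
# Every span where the ad was matched, ready for search or reporting.
```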
Process audio and video files in near real-time for use cases requiring nearly immediate audio detection and identification.
Detect short-form or long-form audio snippets in audio and video recordings.
Deploy in a new or existing application in the cloud via the aiWARE GraphQL APIs, or deploy a subset of capabilities on-premise via a Docker container. Learn more.
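To give a feel for the GraphQL route, the sketch below builds a request body for launching a processing job against the public aiWARE endpoint. The mutation shape, field names, and IDs are illustrative assumptions; consult the aiWARE GraphQL schema for the exact fields your engine requires, and add your API token when actually sending the request.

```python
import json

# Public aiWARE GraphQL endpoint; the mutation below is a hypothetical
# sketch, not the verified schema.
GRAPHQL_URL = "https://api.veritone.com/v3/graphql"

def build_job_request(target_id, engine_id, library_id):
    """Assemble a GraphQL request body for a fingerprinting job (sketch)."""
    query = """
    mutation launchFingerprintJob($targetId: ID!, $engineId: ID!, $libraryId: ID!) {
      createJob(input: {
        targetId: $targetId,
        tasks: [{engineId: $engineId, payload: {libraryId: $libraryId}}]
      }) { id status }
    }
    """
    return {
        "query": query,
        "variables": {
            "targetId": target_id,
            "engineId": engine_id,
            "libraryId": library_id,
        },
    }

# Placeholder IDs for illustration only.
request_body = build_job_request("tdo-123", "engine-audio-fp", "lib-ads")
print(json.dumps(request_body["variables"]))
```

Posting `request_body` as JSON (with an `Authorization: Bearer <token>` header) is the usual pattern for GraphQL over HTTP; the same query could also be pasted into an API explorer for testing.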
Leverage advanced audio fingerprinting machine learning algorithms from the Veritone managed cognitive engine ecosystem — including algorithms from Veritone, niche providers, and industry giants.