Back to the audio resources page or to my home page.

AV16.3: an Audio-Visual Corpus for Speaker Localization and Tracking

Two examples: Snapshots, videos, audio and 3D annotation

The red ellipses indicate the locations of the two microphone arrays. The colored balls on the heads were used in the 3-D annotation process.
Click on a snapshot to watch the corresponding video.
seq18 - camera #1 snapshot
Camera 1
seq18 - camera #2 snapshot with arrays marked in red
Camera 2
seq18 - camera #3 snapshot
Camera 3
seq18-2p-0101: access 16kHz audio and 3D mouth annotation.
seq45 - camera #1 snapshot
Camera 1
seq45 - camera #2 snapshot
Camera 2
seq45 - camera #3 snapshot
Camera 3
seq45-3p-1111: access 16kHz audio and 3D mouth annotation.

High-level description

The AV16.3 corpus is an audio-visual corpus of 43 real indoor multispeaker recordings, designed to test algorithms for audio-only, video-only and audio-visual speaker localization and tracking. All recordings feature real human speakers. The variety of recordings was chosen to push algorithms to their limits and to cover a wide range of application scenarios (meetings, surveillance). The emphasis is on overlapped speech and multiple moving speakers: most recordings are dynamic scenarios with one or several moving speakers, although a few meeting scenarios, with mostly seated speakers, are also included.


If you are using the AV16.3 corpus and would like your work to be cited here, just drop me a note.

Technical details

Recordings were made with two 8-microphone Uniform Circular Arrays (16 kHz sampling frequency) and three digital cameras (25 frames per second) placed around the meeting room, hence the name "AV16.3". Whenever possible, each speaker also wore a lapel microphone. All sensors were synchronized. The three cameras were calibrated and used to determine the ground-truth 3-D location of each speaker's mouth, with a maximum error of 1.2 cm. To the best of our knowledge, this was the first annotated audio-visual corpus of its kind to be made publicly available (recorded in fall 2003, published in June 2004 at the MLMI'04 workshop).
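Since the audio (16 kHz) and video (25 fps) streams are synchronized, audio sample indices and video frame indices can be converted into one another. The sketch below illustrates this alignment; the rates come from the corpus description, while the helper function names are hypothetical and not part of the corpus tools.

```python
# Sketch: aligning synchronized audio and video in AV16.3-style recordings.
# Rates are taken from the corpus description (16 kHz audio, 25 fps video);
# the function names below are illustrative, not official corpus tools.

AUDIO_RATE_HZ = 16000   # microphone-array sampling frequency
VIDEO_FPS = 25          # camera frame rate
SAMPLES_PER_FRAME = AUDIO_RATE_HZ // VIDEO_FPS  # 640 audio samples per video frame


def frame_to_sample(frame_index: int) -> int:
    """Index of the first audio sample covered by a given video frame."""
    return frame_index * SAMPLES_PER_FRAME


def sample_to_frame(sample_index: int) -> int:
    """Index of the video frame containing a given audio sample."""
    return sample_index // SAMPLES_PER_FRAME


# Example: one second into the recording, audio sample 16000
# corresponds to video frame 25.
print(sample_to_frame(16000))  # 25
print(frame_to_sample(25))     # 16000
```

This integer mapping is exact here because 16000 is a multiple of 25; with other rate pairs, a fractional samples-per-frame ratio would require rounding.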

How to access the corpus (data + annotation + tools)

How to use the corpus


Last updated on 2008-09-17 by Guillaume Lathoud - glathoud at yahoo dot fr