Feature - AMI is watching: making the most of your meetings
Can you quickly catch up on a meeting for which you are late? Can you easily track back through meeting archives to discover why a particular decision was taken?
These are some of the key questions addressed by the AMI Consortium, a project using grid computing to better understand human communication in meetings.
Technology to enhance collaboration
When people meet, their communication is spread across a number of modalities: the spoken word, gestures, postures, intonation and timing.
AMI aims to capture these signals using an instrumented meeting room, equipped with multiple video cameras and microphones, along with devices to capture handwriting, whiteboard activity and projected data.
The AMI project makes sense of the captured signals using speech recognition, video and multimodal processing, and language processing techniques such as automatic summarization and the segmentation of a discussion into topics.
These are computationally challenging recognition tasks: for example, recognizing speech recorded with tabletop microphones requires considerable extensions to the state of the art, especially given the casual, conversational and overlapping style of speech in meetings.
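As a rough illustration of the topic segmentation mentioned above, the sketch below marks a boundary wherever the vocabulary of adjacent windows of utterances stops overlapping, in the spirit of the TextTiling algorithm. The window size, threshold and toy transcript are illustrative assumptions, not AMI's actual method or parameters.

    # Lexical-cohesion topic segmentation, TextTiling-style (illustrative sketch).
    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two bag-of-words vectors."""
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) \
             * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def bag(utterances):
        """Bag of words over a window of utterances."""
        return Counter(w for u in utterances for w in u.lower().split())

    def segment(utterances, window=5, threshold=0.1):
        """Hypothesize a topic boundary wherever the words used before a
        point diverge sharply from the words used after it."""
        boundaries = []
        for i in range(window, len(utterances) - window + 1):
            left = bag(utterances[i - window:i])
            right = bag(utterances[i:i + window])
            if cosine(left, right) < threshold:  # low cohesion: topic shift
                boundaries.append(i)
        return boundaries

    transcript = ["we should finalize the budget figures"] * 6 \
               + ["now review interface mockup designs"] * 6
    print(segment(transcript))  # prints [6], the point where the topic shifts

In practice the input is errorful recognizer output rather than clean text, which compounds the difficulty described above.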
AMI is using these captured, recognized signals in applications designed to make meetings more engaging and efficient.
These AMI applications are built on "the Hub," a software system for managing meeting recordings and related information using a database that end-user applications can query.
For instance, a "meeting browser" might retrieve a speech transcription from the Hub, or a cellphone interface might provide information on how people are interacting.
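A minimal sketch of such a retrieval, assuming a hypothetical SQLite-backed store; the utterances table, its columns and the meeting ID below are invented for illustration, and the real Hub has its own database and query interface:

    # Illustrative sketch of a meeting browser querying a Hub-like store.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE utterances (
            meeting_id TEXT,   -- which meeting the segment belongs to
            speaker    TEXT,   -- speaker label from diarization
            start_sec  REAL,   -- start time within the recording
            text       TEXT    -- speech-recognizer output
        );
        INSERT INTO utterances VALUES
            ('demo-meeting-1', 'A', 0.0, 'shall we start with the agenda'),
            ('demo-meeting-1', 'B', 3.2, 'yes the first item is the remote design');
    """)

    def fetch_transcript(meeting_id):
        """Return a meeting's transcript in time order, as a browser might."""
        rows = conn.execute(
            "SELECT speaker, start_sec, text FROM utterances "
            "WHERE meeting_id = ? ORDER BY start_sec",
            (meeting_id,),
        )
        return [f"[{t:5.1f}s] {spk}: {txt}" for spk, t, txt in rows]

    for line in fetch_transcript("demo-meeting-1"):
        print(line)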
The Hub handles data for all meetings, past and present, making it our key resource for building applications.
- Steve Renals, the AMI Consortium