

Feature - AMI is watching: making the most of your meetings


AMI's engagement tool increases the sense of presence for users who join a meeting by cellphone: the cartoon images dynamically indicate who is looking at whom during the meeting.
Images courtesy of the AMI Consortium

Can you quickly catch up on a meeting for which you are late? Can you easily track back through meeting archives to discover why a particular decision was taken?

These are some of the key questions addressed by the AMI Consortium, a project using grid computing to better understand human communication in meetings.

Technology to enhance collaboration

When people meet, their communication is spread across a number of modalities: the spoken word, gestures, postures, intonation and timing.

AMI aims to capture these signals using an instrumented meeting room, equipped with multiple video cameras and microphones, along with devices to capture handwriting, whiteboard activity and projected data.

The AMI project makes sense of the captured signals using speech recognition, video and multimodal processing, and language processing techniques such as automatic summarization and the segmentation of a discussion into topics.

These are computationally challenging recognition tasks: for example, recognizing speech recorded with tabletop microphones requires considerable extensions to the state of the art, especially given the casual, conversational and overlapping style of speech in meetings.
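As a rough illustration of one of these language-processing steps, the sketch below segments a transcript into topics by marking a boundary wherever the vocabulary of adjacent windows of utterances stops overlapping. This is a simplified, TextTiling-style approach written for this article, not the AMI system's own algorithm; the window size and similarity threshold are arbitrary.

```python
# Illustrative topic segmentation: place a boundary where the words used
# just before a point stop resembling the words used just after it.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def topic_boundaries(utterances, window=5, threshold=0.1):
    """Return utterance indices where a new topic is likely to start."""
    bags = [Counter(u.lower().split()) for u in utterances]
    boundaries = []
    for i in range(window, len(bags) - window):
        left = sum(bags[i - window:i], Counter())   # words just before point i
        right = sum(bags[i:i + window], Counter())  # words just after point i
        if cosine(left, right) < threshold:         # little overlap: likely topic shift
            boundaries.append(i)
    return boundaries
```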

Meeting-support applications

AMI is using these captured, recognized signals in applications that improve engagement and efficiency.

Applications include:
• Meeting browsers, which automatically generate meeting summaries and enable meetings to be indexed so users can search and browse archives for relevant material.
• Content linking, which uses recognized words to automatically retrieve related documents and relevant meeting segments during an ongoing meeting (a simple illustration follows this list).
• Phone conference interfaces, which provide remote users with an indication of who is looking at whom during meetings.
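To give a flavour of how content linking might work, the sketch below scores a small set of archived documents against the words recognized so far in a meeting and returns the best matches. The overlap score, document names and example words are all invented for illustration; the AMI system's actual retrieval is considerably more sophisticated.

```python
# Illustrative content linking: match words from the speech recognizer
# against a small archive of documents and return the closest ones.
from collections import Counter

def link_content(recent_words, documents, top_k=3):
    """Return the names of the documents that best match the recent speech."""
    query = Counter(w.lower() for w in recent_words)
    scores = {}
    for name, text in documents.items():
        doc = Counter(text.lower().split())
        scores[name] = sum(min(query[w], doc[w]) for w in query)  # simple word-overlap score
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked[:top_k] if scores[name] > 0]

# Example: recognized words about the product pull up the relevant design notes.
docs = {
    "remote_control_spec.txt": "remote control buttons battery rubber case design",
    "budget.txt": "project budget costs per unit price",
}
print(link_content(["the", "rubber", "case", "and", "the", "buttons"], docs))
```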

Steve Renals and the AMI team pose in their "wired" meeting room, which enables a variety of inputs generated during ordinary meetings to be recorded; the team use this data to enable meeting-support applications.
Images courtesy of the AMI Consortium

"The Hub"

These AMI applications are built on "the Hub," a software system for managing meeting recordings and related information using a database that end-user applications can query.

For instance, a "meeting browser" might retrieve a speech transcription from the Hub, or a cellphone interface might provide information on how people are interacting.

The Hub handles data for all meetings, past and present, making it our key resource for building applications.
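As a minimal sketch of how an application might query a Hub-like store, the code below assumes a simple relational table of time-stamped, speaker-attributed transcript segments. The real Hub's API, schema and meeting identifiers are not documented here, so every name in the example is illustrative.

```python
# Illustrative query against a Hub-like meeting store backed by SQLite.
import sqlite3

conn = sqlite3.connect("meetings.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS transcript (
        meeting_id TEXT,
        start_secs REAL,
        end_secs   REAL,
        speaker    TEXT,
        words      TEXT
    )
""")

def transcript_between(meeting_id, start, end):
    """Fetch who said what in a given time window of a meeting."""
    rows = conn.execute(
        "SELECT start_secs, speaker, words FROM transcript "
        "WHERE meeting_id = ? AND start_secs >= ? AND end_secs <= ? "
        "ORDER BY start_secs",
        (meeting_id, start, end),
    )
    return rows.fetchall()

# A meeting browser could use a query like this to let a latecomer
# catch up on the first ten minutes of a meeting they missed.
print(transcript_between("example_meeting_01", 0.0, 600.0))
```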

The AMI Consortium comprises two EC Integrated Projects, AMI and AMIDA; it is coordinated by the Centre for Speech Technology Research at the University of Edinburgh and the IDIAP Research Institute.

- Steve Renals, the AMI Consortium
