Projects & Teams

eNTERFACE'10 Projects
1. CoMediAnnotate: a usable multimodal annotation framework

Project Description:

This project combines efforts from fields such as rapid prototyping, information visualization, and gestural interaction toward a framework for designing and implementing multimodal annotation tools dedicated to specific tasks arising from different needs and use cases. More precisely, the project consists in adding the remaining necessary components to a rapid prototyping tool so that multimodal annotation tools can be developed by visually programming the application workflow. During the workshop, once the framework is finalized, a simple prototype will be developed and usability testing undertaken to validate the need for this framework.
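
As an illustration of the dataflow idea behind visually programming an annotation workflow, here is a minimal Python sketch; the component names (media_source, annotator, viewer) are hypothetical stand-ins for whatever components the finished framework would provide:

    from typing import Callable, List

    class Node:
        """One processing component in a visually programmed workflow."""
        def __init__(self, name: str, fn: Callable[[dict], dict]):
            self.name = name
            self.fn = fn
            self.downstream: List["Node"] = []

        def connect(self, other: "Node") -> "Node":
            """Wire this node's output to another node; returns the target."""
            self.downstream.append(other)
            return other

        def push(self, data: dict) -> None:
            """Run this component and forward the result downstream."""
            out = self.fn(data)
            for node in self.downstream:
                node.push(out)

    # Hypothetical components of a simple annotation workflow.
    source = Node("media_source", lambda d: {**d, "frames": list(range(d["n_frames"]))})
    annotator = Node("annotator", lambda d: {**d, "labels": [f"label_{i}" for i in d["frames"]]})
    viewer = Node("viewer", lambda d: print(d["labels"][:3], "..."))

    source.connect(annotator).connect(viewer)
    source.push({"n_frames": 100})  # prints the first few generated labels
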
Team:
Christian Frisson (Université Catholique de Louvain)
Lionel Lawson (Université Catholique de Louvain)
Johannes Wagner (University of Augsburg)
Florian Lingenfelser (University of Augsburg)
Dirk van Oosterbosch (MakeShiftLab)
Sema Alaçam (Istanbul Technical University)
Emirhan Coşkun (Istanbul Technical University)
Dominik Ertl (Technical University of Vienna)
Ceren Kayalar (Sabancı University)
Contact: Christian Frisson (UCL, Belgium) - christian.frisson@uclouvain.be
2. Looking around in a virtual world

Project Description:

The aim of this project is to develop a smart camera for virtual worlds based on EEG (electroencephalography) measurements. The influence on the camera could range from a mouse-look alternative to gently nudging the camera toward where the user is attending. Currently, two methods could be used: covert attention and eye movement. For this project, we will implement both pipelines, evaluate them offline, and design a mapping to camera movement, culminating in online experiments to assess usability and user experience.
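
As a rough illustration of the final step, the mapping from a classifier's attention estimate to camera movement, here is a minimal sketch; the signed attention score, gain, and smoothing factor are assumptions for illustration, not the project's actual design:

    def nudge_yaw(yaw: float, attention: float,
                  gain: float = 2.0, smoothing: float = 0.9) -> float:
        """Return an updated camera yaw (degrees).

        attention: signed classifier output in [-1, 1], negative = left.
        gain:      maximum degrees moved per update.
        smoothing: exponential smoothing; higher = slower, steadier camera.
        """
        attention = max(-1.0, min(1.0, attention))  # clamp noisy estimates
        target = yaw + gain * attention
        return smoothing * yaw + (1.0 - smoothing) * target

    yaw = 0.0
    for estimate in (0.8, 0.6, 0.9, -0.2, 0.7):  # simulated classifier outputs
        yaw = nudge_yaw(yaw, estimate)
        print(f"yaw = {yaw:+.3f} deg")
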
Team:
Danny Plass-Oude Bos (University of Twente)
Mannes Poel (University of Twente)
Bram van de Laar (University of Twente)
Boris Reuderink (University of Twente)
Ali Bahramisharif (Radboud University Nijmegen)
Linsey Roijendijk (Radboud University Nijmegen)
Wouter Marijn van Vliet (University of Twente)
Jaime Delgado Saa (Sabancı University)
Matthieu Duvinage (University of Mons)
Luca Tonin (EPFL)
Mesut Oytun Oktay (Namık Kemal University)
Ayhan İstanbullu (Balıkesir University)
Hüseyin Gürüler (Muğla University)
Contact: Danny Plass-Oude Bos (Univ. Twente, the Netherlands) - d.plass@ewi.utwente.nl
3. Parameterized user modelling of people with disabilities and simulation of their behaviour in a virtual environment

Project Description:

Due to technical difficulties, the project has been withdrawn from the Workshop.
Contact: Dimitrios Tzovaras, Konstantinos Moustakas (ITI-CERTH, Greece) - (tzovaras, moustak)@iti.gr
4. Continuous interaction for ECAs

Project Description:

The main objective of this project is to develop an Embodied Conversational Agent (ECA) able to receive and handle certain kinds of feedback, backchannel, and interruptions from the user. We plan to model and implement the sensing, interaction, and generation required for what we call continuous interaction. A continuously interactive ECA can perceive and generate conversational verbal and non-verbal behavior fully in parallel, and can coordinate its behavior with its perception continuously, a capability absent from most state-of-the-art ECAs. We propose to do this specifically by looking at feedback, backchannel, and interruption behavior from a human user who is listening to an ECA serving as a virtual route guide. The ECA will present information to the user in a multimodal way. Actively dealing with and responding to these user behaviors requires the ECA to handle overlapping speech, replan and re-time its expressions, ignore attempts by the user to interrupt, and abandon planned utterances (in effect, letting itself be interrupted). An evaluation study will show how the developed ECA is perceived by human users in terms of politeness and certain personality traits.
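
As a toy illustration of the decision logic this requires, the sketch below distinguishes backchannels (keep talking), feedback (replan and re-time), and interruptions (abandon the utterance); the event categories and responses are simplifications assumed for illustration, not the project's actual architecture:

    from enum import Enum, auto

    class UserEvent(Enum):
        BACKCHANNEL = auto()   # "uh-huh": acknowledge, keep talking
        FEEDBACK = auto()      # "wait, which street?": replan and re-time
        INTERRUPTION = auto()  # sustained overlap: yield the turn

    def handle_event(event: UserEvent, utterance: str) -> str:
        """Decide what happens to the utterance currently being spoken."""
        if event is UserEvent.BACKCHANNEL:
            return f"continue: {utterance}"
        if event is UserEvent.FEEDBACK:
            return f"replan and re-time: {utterance}"
        return f"abandon: {utterance}"  # let the user take the turn

    for event in UserEvent:
        print(handle_event(event, "turn left at the bridge"))
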
Team:
Dennis Reidsma (University of Twente)
Khiet Truong (University of Twente)
Herwin van Welbergen (University of Twente)
Bart van Straalen (University of Twente)
Iwan de Kok (University of Twente)
Sathish Pammi (DFKI)
Daniel Neiberg (KTH)
Elckerlyc (BML realizer, University of Twente)
Contact: Dennis Reidsma (University of Twente, the Netherlands) - dennisr@ewi.utwente.nl
5. Multimodal Speaker Verification in Non-Stationary Noise Environments

Project Description:

Due to technical difficulties, the project has been withdrawn from the Workshop.
Contact: Cenk Demiroglu (Ozyegin University, Turkey) - Cenk.Demiroglu@ozyegin.edu.tr; Devrim Unay (Sabanci University, Turkey) - unay@sabanciuniv.edu
6. Vision-Based Hand Puppet

Project Description:

In this project, a virtual 3D puppet will be animated by visually tracking the bare hand of a performer. The performer will manipulate the digital puppet via predefined hand movements akin to those used in traditional hand puppetry. Hand puppets are traditionally worn like a glove and controlled by moving the fingers, whose movements translate directly into movements of the puppet's limbs and body parts. In this digital version of the same act, the performer will not use actual puppets, gloves, or markers. Instead, the bare hand of the performer will be tracked by one or more cameras in real time, and the estimated hand posture will be mapped in an intermediary step to the animation parameters of the digital puppet. Also, upon certain predefined hand gestures, i.e., combinations of changes in hand posture and motion, the system will initiate complex animation sequences that enrich the performance. The puppet will be animated immediately, giving the performer visual feedback. Moreover, using a separate camera, the facial expressions of the performer will be tracked, recognized, and mapped onto the puppet as well.
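
The intermediary mapping step could look roughly like the following sketch, where per-finger flexion estimates drive hypothetical puppet animation parameters; the finger-to-limb assignment mirrors traditional glove puppetry but is purely an assumption here:

    from typing import Dict

    def posture_to_puppet(flexion: Dict[str, float]) -> Dict[str, float]:
        """Map per-finger flexion (0 = extended, 1 = curled) to joint angles."""
        return {
            "head_tilt_deg": 45.0 * flexion["index"],   # index finger = head
            "left_arm_deg": 90.0 * flexion["thumb"],    # thumb = left arm
            "right_arm_deg": 90.0 * flexion["middle"],  # middle = right arm
            "body_lean_deg": 20.0 * (flexion["ring"] + flexion["pinky"]) / 2,
        }

    # Simulated tracker output for one video frame.
    frame = {"thumb": 0.7, "index": 0.2, "middle": 0.9, "ring": 0.1, "pinky": 0.0}
    print(posture_to_puppet(frame))
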
Team:
Lale Akarun (Boğaziçi University)
Rainer Stiefelhagen (Karlsruhe University)
Hazım Kemal Ekenel (Karlsruhe University)
İsmail Arı (Boğaziçi University)
Cem Keskin (Boğaziçi University)
Furkan Kıraç (Boğaziçi University)
Mustafa Tolga Eren (Sabancı University)
Lukas Rybok (Karlsruhe University)
Contact: Lale Akarun (Bogazici University, Turkey) - akarun@boun.edu.tr; Rainer Stiefelhagen, Hazim Kemal Ekenel (Karlsruhe University, Germany) - ekenel@kit.edu
7. Audio-visual speech recognition

Project Description:

Speech recognition is an open research area that requires continued effort for further advancement. Although one can obtain high recognition rates with audio-only speech recognition in controlled environments, recognition accuracy degrades in noisy environments. For such cases, supporting the audio information with visual information is a commonly recommended approach in the literature. This approach, called audio-visual speech recognition, obtains the supplemental visual information from video of the speaker's lip region. We propose an eNTERFACE 2010 workshop project to build real-time software that showcases current and emerging audio-visual speech recognition technology. Our group at Sabanci University has an ongoing nationally supported project on improving the performance of audio-visual speech recognition. In this project, we propose a tandem classifier-fusion approach for combining audio and visual information. In addition, we work on improving visual feature extraction by computing features that are stable against head-motion artifacts and normalized for speaker and environment variability. This project will collaborate with Project 5 to jointly develop some of its modules.
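
The description does not spell out the exact fusion rule, but a common decision-fusion baseline combines per-word scores from the two streams log-linearly, with a stream weight that can shift toward vision as the audio degrades; the scores and weights below are invented for illustration and are not the project's tandem scheme:

    def fuse_scores(audio_ll: dict, visual_ll: dict, audio_weight: float) -> dict:
        """Log-linear combination of per-word audio and visual scores."""
        return {w: audio_weight * audio_ll[w] + (1.0 - audio_weight) * visual_ll[w]
                for w in audio_ll}

    audio_ll = {"yes": -1.0, "no": -2.5}   # noisy audio mildly favors "yes"
    visual_ll = {"yes": -4.0, "no": -1.0}  # lip shape clearly favors "no"

    for w_a in (0.9, 0.3):  # trust audio less as the estimated SNR drops
        fused = fuse_scores(audio_ll, visual_ll, w_a)
        best = max(fused, key=fused.get)
        print(f"audio weight {w_a}: best word = {best!r}, scores = {fused}")
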
Team:
Hakan Erdoğan (Sabancı University)
Saygın Topkaya (Sabancı University)
Berkay Yılmaz (Sabancı University)
Umut Şen (Sabancı University)
Alexey Tarasov (Dublin Institute of Technology)
Contact: Hakan Erdogan, Saygin Topkaya (Sabanci University, Turkey) - (haerdogan, isaygint)@sabanciuniv.edu
8. Affect-responsive interactive photo-frame

Project Description:

The project aims to develop an interactive photo-frame system to which the user can upload a series of videos of a single person (e.g., a child). The system will be composed of three parts. The first part will analyze the uploaded videos and prepare segments for interactive play. The second part will use multimodal input (sound analysis, facial expression, etc.) to model the user's state. The third part will synthesize continuous video streams from the prepared segments in accordance with the modeled state of the user.
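
As a toy illustration of how the third part could use the modeled state, the sketch below maps a single valence score (an assumed stand-in for the full multimodal user state) to one of several hypothetical prepared segments:

    # Valence intervals for each prepared segment (hypothetical names).
    SEGMENTS = {
        "smiling": (0.5, 1.0),    # play when the user seems pleased
        "neutral": (-0.5, 0.5),
        "soothing": (-1.0, -0.5), # play when the user seems upset
    }

    def pick_segment(valence: float) -> str:
        """Select the segment whose valence interval contains the user state."""
        for name, (lo, hi) in SEGMENTS.items():
            if lo <= valence <= hi:
                return name
        return "neutral"  # fall back if the estimate is out of range

    for valence in (0.8, 0.0, -0.9):
        print(f"user valence {valence:+.1f} -> segment {pick_segment(valence)!r}")
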
Team:
Albert Ali Salah (University of Amsterdam)
Marcos Ortega Hortas (University of A Coruna)
Hamdi Dibeklioğlu (University of Amsterdam)
Ilkka Kosunen (Helsinki Institute of Technology)
Petr Zuzánek (Czech Technical University)
Contact: Ilkay Ulusoy (METU, Turkey) - iulusoy06@gmail.com; Albert Ali Salah (Universiteit van Amsterdam, the Netherlands) - a.a.salah@uva.nl
9. Automatic Fingersign to Speech Translator

Project Description:

The aim of this project is to help two people communicate, one hearing-impaired and one hearing, by converting speech to finger spelling and finger spelling to speech. Finger spelling is a subset of sign language that uses finger signs to spell words of the spoken or written language. We aim to convert finger-spelled words to speech and vice versa. Different spoken and sign languages, such as English, Russian, Turkish, and Czech, will be considered.
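
On the recognition side, one simple post-processing step is to collapse per-frame letter hypotheses into a spelled word before handing it to a speech synthesizer; the sketch below assumes a recognizer that emits one letter (or '-' for no sign) per frame, which is an illustration rather than the project's actual interface:

    from itertools import groupby

    def frames_to_word(frame_letters: str) -> str:
        """Collapse per-frame letter hypotheses ('-' = no sign) into a word."""
        return "".join(letter for letter, _ in groupby(frame_letters)
                       if letter != "-")

    def speak(word: str) -> None:
        print(f"[TTS] {word}")  # placeholder for a real speech synthesizer

    # Simulated recognizer output over ~24 frames.
    frames = "hhhh--eee--lll--lll--ooo"
    speak(frames_to_word(frames))  # prints "[TTS] hello"
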
Team:
Oya Aran (IDIAP)
Lale Akarun (Boğaziçi University)
Milos Zelezny (University of West Bohemia)
Alexey Karpov (SPIIRAS)
Pavel Campr (University of West Bohemia)
Marek Hruz (University of West Bohemia)
Zdenek Krnoul (University of West Bohemia)
Alexander Ronzhin (SPIIRAS)
Haşim Sak (Boğaziçi University)
Daniel Schorno (STEIM)
Alp Kındıroğlu (Boğaziçi University)
Murat Saraçlar (Boğaziçi University)
Erinç Dikici (Boğaziçi University)
Contact: Oya Aran (IDIAP, Switzerland) - Oya.Aran@idiap.ch; Lale Akarun, Murat Saraçlar (Bogazici University, Turkey); Alexey Karpov (SPIIRAS, Russia); Milos Zelezny (UWB, Czech Republic)

