The ability to recognize, interpret and express emotions plays a key role in human communication. Current computer interfaces have become able to "see", thanks to advanced video sensors and video processing algorithms; however, until recently they could not plausibly "guess" user intentions, because available feature extraction techniques could not provide the level of service needed to support sophisticated interpretation capabilities. Our approach relies on a set of novel face and posture recognition techniques efficient and robust enough to serve as the basis of a fully video-enabled intelligent pervasive workplace, capable of providing value-added services based on the real-time analysis of facial and postural data. We propose to build on our current work in this area to create an infrastructure for lightweight facial and posture analysis, allowing a variety of extended interactions between users and their work, market and entertainment environments.
|Host publication title||Lecture Notes in Computer Science. Ubiquitous Intelligence and Computing|
|Number of pages||12|
|Publication status||Published - 2006|
|Series name||LECTURE NOTES IN COMPUTER SCIENCE|
- face tracking
- human-machine interaction