Challenges

Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation (DCER&HPE 2017)

– website: http://icv.tuit.ut.ee/fc2017

– organizers: Gholamreza Anbarjafari, Jüri Allik, Cagri Ozcinar, Sylwia Hyniewska, Hasan Demirel

We are organizing a contest and an associated workshop around two important problems: recognition of detailed emotions, which requires, in addition to effective visual analysis, handling the recognition of micro emotions; and estimation of the head pose of the person interacting with a computer, which eases the process of facial alignment. Both can be used in many important applications such as face recognition and 3D face modelling.

Track 1: Dominant and Complementary Multi-Emotional Facial Expression Recognition: Human-computer interaction becomes more realistic if the computer is capable of recognising more detailed expressions of the human interacting with it. For this purpose, we created a large facial expression database designed for multi-emotion recognition, the iCV Multi-Emotion Facial Expression Dataset (iCV-MEFED). The database includes 31,250 facial images from 125 subjects with an almost uniform gender distribution. Each subject acts 50 different emotions, and for each emotion 5 samples were taken with a Canon 60D camera under uniform lighting conditions, with a relatively unchanged background and a resolution of 5184×3456. The images were captured and labelled under the supervision of psychologists, and the subjects were trained on the emotions they posed. The main challenges of this new dataset are the automatic recognition of these multi-emotions from facial expressions and the analysis of micro emotions.
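The quoted figures are consistent with each other: 125 subjects × 50 emotions × 5 samples = 31,250 images. A minimal sketch of how one might enumerate such a dataset is below; the directory and file-naming pattern is an assumption for illustration, not the official iCV-MEFED layout.

```python
import itertools

# Dataset dimensions as stated in the announcement.
NUM_SUBJECTS = 125        # subjects, near-uniform gender split
NUM_EMOTIONS = 50         # acted dominant/complementary emotion combinations
SAMPLES_PER_EMOTION = 5   # Canon 60D shots per emotion

def build_index():
    """Enumerate one path per (subject, emotion, sample) triple.

    The path pattern below is hypothetical, chosen only to illustrate
    the dataset's structure.
    """
    return [
        f"subject{s:03d}/emotion{e:02d}_sample{k}.jpg"
        for s, e, k in itertools.product(
            range(NUM_SUBJECTS), range(NUM_EMOTIONS), range(SAMPLES_PER_EMOTION)
        )
    ]

index = build_index()
assert len(index) == 125 * 50 * 5 == 31_250  # matches the count quoted above
```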

Track 2: Head-Pose Estimation from RGB-D Data: Head-pose estimation plays an important role in human-computer interaction. Nowadays various cheap but quite accurate depth sensors are available, and thus many 3D head-pose estimation algorithms are being developed. For this purpose, we have compiled SASE, a 3D head-pose database of 50 subjects with varying poses and the respective labels. The data in the challenge were collected with a Kinect v2. The subjects changed their head pose in front of the sensor, and the captured sequences consist of RGB frames, depth frames, and the corresponding labels. The distance from the camera is around 1 m, the background is white, and light blue markers were attached to the subjects' faces.
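A minimal sketch of iterating over one such RGB-D sequence is below. The directory layout, file names, and label format are assumptions made for illustration; they are not the official SASE distribution format.

```python
import os
import numpy as np
import cv2  # OpenCV, used here only for image I/O

def load_sequence(root):
    """Yield (rgb, depth, pose) triples for one captured sequence.

    Assumed layout: rgb_00000.png / depth_00000.png frame pairs plus a
    poses.txt with one whitespace-separated row of pose parameters per
    frame (e.g. head position and rotation angles). All hypothetical.
    """
    labels = np.loadtxt(os.path.join(root, "poses.txt"))
    for i, pose in enumerate(labels):
        rgb = cv2.imread(os.path.join(root, f"rgb_{i:05d}.png"))
        depth = cv2.imread(os.path.join(root, f"depth_{i:05d}.png"),
                           cv2.IMREAD_UNCHANGED)  # typically 16-bit depth in mm
        yield rgb, depth, pose
```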

Keynotes: Yun Raymond Fu (Northeastern University, USA)
Guoying Zhao (University of Oulu, Finland)

deadline for paper submission: 24 March 2017


3rd Facial Expression Recognition and Analysis Challenge (FERA 2017)

– website: http://sspnet.eu/fera2017/

– organizers: Michel Valstar, Jeff Cohn, Lijun Yin, Maja Pantic

Previous work in action unit (AU) detection has focused primarily on frontal or near-frontal images. How well does AU detection perform when head pose varies markedly from frontal? FERA 2017 (Facial Expression Recognition and Analysis Challenge) presents participants with the challenge of AU detection across a wide range of head poses, covering nine different angles of the face in all. Two FACS-based AU sub-challenges are addressed:

1) Occurrence Sub-Challenge: detect AU occurrence in 9 different facial views.
2) Fully Automatic Intensity Sub-Challenge: detect both the occurrence and the intensity of AUs in 9 different facial views (a sketch of typical scoring metrics follows this list).
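This announcement does not spell out the scoring, but AU challenges of this kind have typically used per-AU F1 for occurrence and intra-class correlation, ICC(3,1), for intensity. The sketch below illustrates those two metrics under that assumption; the toy labels are invented.

```python
import numpy as np
from sklearn.metrics import f1_score

def icc_3_1(gt, pred):
    """Two-way mixed, single-rater ICC(3,1) between ground truth and predictions."""
    x = np.stack([np.asarray(gt, float), np.asarray(pred, float)], axis=1)
    n, k = x.shape                     # n frames x 2 "raters" (truth, prediction)
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    bms = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-target mean square
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    ems = sse / ((n - 1) * (k - 1))                        # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Occurrence: binary per-frame labels for one AU, scored with F1.
f1 = f1_score(y_true=[0, 1, 1, 0, 1], y_pred=[0, 1, 0, 0, 1])
# Intensity: 0-5 ordinal per-frame labels, scored with ICC(3,1).
icc = icc_3_1([0, 1, 3, 5, 2, 4], [0, 2, 3, 4, 2, 5])
print(f"F1={f1:.3f}  ICC(3,1)={icc:.3f}")
```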

Train and validation sets will be made available by the organizers. Algorithms will be submitted to the organizers for independent testing on a hold-out set. All video is derived from 3D meshes from the BP4D and BP4D+ datasets of young adults responding to varied emotion inductions.

Keynotes: Fernando de la Torre (CMU, PA, USA)

deadline for paper submission: 28 January 2017
