2016-075 – Automatic Facial Action Unit Coding

Methods to assess individual facial actions have the potential to shed light on important behavioral phenomena, ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To enable fast and reliable automated coding of spontaneous facial actions, our system exploits dense 3D registration and makes use of a far larger body of FACS-coded data than previously attempted. We use the BP4D-Spontaneous, CK+, and Group Formation Task corpora, which together contain over 800,000 manually annotated frames. To measure the deformation of the face caused by expression, the system extracts regional image descriptors around a selected sparse set of facial landmarks. The descriptors capture features that correspond to changes in facial texture and orientation (e.g., facial wrinkles, folds, and bulges) and are robust to changes in illumination and scale. The final building blocks of the system are Support Vector Machines (SVMs) that perform AU classification using the FACS labels, together with a regression procedure for AU intensity estimation. The system detects the occurrence and intensity of AUs and emotion expressions over a range of ±30 degrees of head rotation.

Contact: Scott McEvoy, smcevoy@andrew.cmu.edu, 412-268-6053
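The classification stage described above, an SVM per action unit for occurrence detection plus a support vector regressor for intensity, can be sketched in a few lines. This is a minimal illustration with synthetic stand-in data, not the actual system: the descriptor vectors, AU labels, and 0-5 intensity scale below are assumptions for the example, and no real descriptor extraction or landmark detection is shown.

```python
# Illustrative sketch: per-AU SVM classification and SVR intensity
# estimation over frame-level descriptor vectors. All data here is
# synthetic; real input would be regional image descriptors extracted
# around facial landmarks, with manual FACS annotations as labels.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Synthetic "regional descriptors": one feature vector per video frame.
n_frames, n_features = 200, 32
X = rng.normal(size=(n_frames, n_features))

# Synthetic FACS labels for a single AU: binary occurrence and a
# continuous intensity value clipped to the 0-5 range (an assumption).
au_occurrence = (X[:, 0] > 0).astype(int)
au_intensity = np.clip(X[:, 0] * 2.0 + 2.5, 0.0, 5.0)

# One binary SVM per AU detects occurrence ...
clf = SVC(kernel="rbf").fit(X, au_occurrence)
# ... and a support vector regressor estimates intensity.
reg = SVR(kernel="rbf").fit(X, au_intensity)

occ_pred = clf.predict(X)   # 0/1 occurrence per frame
int_pred = reg.predict(X)   # continuous intensity per frame
```

In a full system this pair of models would be trained once per action unit, giving an independent occurrence/intensity estimate for each AU in every frame.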

