Automated Detection of Gesture and Facial Expression
This project aimed to demonstrate the feasibility of automatically detecting gestures and facial expressions with reliable accuracy for use in online education assessments.
Assessment Micro-Analytics (AMA) are experts in the collection and analysis of observational data on user experience and assessment response. Micro gestures reveal insights into how people engage with test items. AMA uses these observations to revolutionise the way educational tests and exams are designed.
Through its strong working relationships with commercial test providers, AMA identified the need to develop its tools and techniques on an AI platform. A preliminary literature review had already been undertaken, which identified a set of gestures and facial expressions. EIRA supported this project through the Innovation Voucher scheme to achieve specific objectives, including:
- To demonstrate the feasibility of automatically detecting the micro-expressions and gestures of participants in online educational assessments.
- To appraise the accuracy and reliability of automatic detection of those expressions and gestures for use with diverse populations, such as children, adults and different cultural groups.
- To provide technical specifications and plans on how to combine data on gesture and facial expression with other sources of digital data.
The project was led by Dr Sarah Taylor, with support from Dr Tahmina Zebin and Dr David Greenwood. The work was broken down into two work packages. The first focused on data collection: a diverse image dataset was constructed from existing online sources such as YouTube, containing examples of occlusion (partially hidden faces) and subjects of various ages and ethnicities.
The second work package was concerned with automatic face tracking. The academic team developed a lightweight framework around two algorithms from the face alignment literature and evaluated the detected landmarks both qualitatively and quantitatively under different conditions.
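The report's evaluation code is not public, but a standard quantitative measure for detected landmarks in the face alignment literature is the normalised mean error (NME): the mean point-to-point distance between predicted and ground-truth landmarks, divided by a normalising distance such as the inter-ocular distance. A minimal sketch, with illustrative landmark indices and data:

```python
import math

def normalized_mean_error(pred, gt, left_eye_idx, right_eye_idx):
    """Mean Euclidean landmark error, normalised by inter-ocular distance.

    pred, gt: lists of (x, y) tuples of equal length.
    The eye indices select the two landmarks used for normalisation;
    which indices they are depends on the landmark annotation scheme.
    """
    iod = math.dist(gt[left_eye_idx], gt[right_eye_idx])
    errors = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(errors) / (len(errors) * iod)

# Illustrative ground truth and predictions (three landmarks):
gt = [(0.0, 0.0), (10.0, 0.0), (5.0, 5.0)]
pred = [(0.0, 1.0), (10.0, 1.0), (5.0, 6.0)]
# Each landmark is off by 1 pixel and the eyes are 10 pixels apart,
# so the NME is 0.1.
print(normalized_mean_error(pred, gt, 0, 1))
```

A lower NME indicates a more accurate fit, and because the error is normalised by face size, scores are comparable across images at different resolutions.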
Dr Sarah Taylor is a Research Fellow in UEA’s School of Computing Sciences and leads the Digital Humans Group in the Graphics, Vision and Speech Laboratory. Her research interests concern analysis and synthesis of faces and bodies during speech. She has worked on projects relating to computer lip-reading, automatic redubbing of video and speech-driven facial animation.
Dr Taylor's PhD thesis concerned identifying and formally defining units of visual speech for use in facial animation. During her PhD studies, she completed two internships at Disney Research in Pittsburgh, USA, working on faces and speech. Following her PhD, Dr Taylor worked as an Associate Research Scientist at Disney Research for two years.
In 2015, Dr Taylor returned to UEA to establish her own group. In 2017, she became a Lecturer in Computing Science, and in 2018 she was awarded an EPSRC UKRI Innovation Fellowship.
Dr Tahmina Zebin received her first degree and an MS in Applied Physics, Electronics and Communication Engineering from the University of Dhaka, Bangladesh. She completed an MSc in Digital Image and Signal Processing at the University of Manchester in 2012 and received the President's Doctoral Scholarship (2013-2016) for her PhD in Electrical and Electronic Engineering. Before joining UEA as a Lecturer, she was a Postdoctoral Research Associate on the EPSRC-funded project Wearable Clinic: Self, Help and Care at the University of Manchester, and a Research Fellow in the Health Innovation Ecosystem at the University of Westminster. Her current research interests include advanced image and signal processing, human activity recognition, and risk prediction modelling from electronic health records using statistical and deep learning techniques. She also attended EIRA's residential Early Career Researcher training in 2019.
Dr David Greenwood is a Lecturer in Computer Science at the University of East Anglia. His research concerns visual prosody: the movements and gestures that accompany speech in humans. His research interests include speech and language processing, machine learning and computer vision. His PhD thesis established data-driven methods for predicting head pose during speech.
Dr Greenwood worked with Active Appearance Models for face tracking at Disney Research in 2015. In 2017, he worked at Oculus Research, part of the Facebook group, where he joined the digital humans project, working on social presence in virtual reality.
In 2018, Dr Greenwood joined the Digital Humans Group at UEA, before joining the faculty in 2021.
This project explored the efficacy of automated face and body detection in video for human gesture and expression recognition. The team found that a set of body landmarks can be detected automatically using existing tools, and provided full code for fitting them to an image. They also recommended ways of maximising detection accuracy by controlling the capture environment. An exploration of face-tracker performance across a diverse population revealed that detections on images of subjects from some ethnic groups were more accurate than on those from others, while detections on the younger age group achieved good accuracy. Finally, the team proposed a pipeline for processing multimodal data in a machine learning framework for human behaviour recognition.
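The proposed pipeline is not described in detail here, but a common way to combine gesture and expression data with other sources of digital data is to normalise each modality's features separately, then concatenate them into a single vector for a downstream classifier. A minimal sketch, in which the modality names are illustrative rather than AMA's actual data sources:

```python
def zscore(xs):
    """Standardise a feature list to zero mean and unit variance.
    A zero standard deviation (constant feature) falls back to 1.0
    to avoid division by zero."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    std = var ** 0.5 or 1.0
    return [(x - mean) / std for x in xs]

def fuse_modalities(face_feats, gesture_feats, clickstream_feats):
    """Fuse features from different modalities by normalising each
    one independently, then concatenating. Per-modality normalisation
    stops one modality's scale from dominating the combined vector."""
    return zscore(face_feats) + zscore(gesture_feats) + zscore(clickstream_feats)

# Illustrative feature vectors on very different scales:
fused = fuse_modalities([1.0, 3.0], [2.0, 2.0], [0.0, 4.0])
print(fused)
```

The fused vector can then be fed to any standard classifier; richer fusion schemes (e.g. learning per-modality embeddings before combining them) follow the same structure.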
“It is really excellent that there is world class expertise in the area of automated face and gesture recognition at UEA. The project should enable us to pivot as a company to respond to the new commercial opportunities that have arisen in the assessment market as a result of the move to on-line teaching and assessment. Those changes have accelerated due to the Covid pandemic. The EIRA funds enabled us to undertake this important strategic project.”
Dr Sarah Taylor, School of Computing Sciences, UEA said,
“It was a fantastic experience working with AMA on this project. I learned about on-line educational assessment and gained real insight into practical challenges associated with real-world applications. The EIRA funding has enabled us to explore some of these challenges, and the work that was undertaken will hopefully feed into the development of AMA’s AI technology going forward. I look forward to collaborating with AMA again in future.”