Emotion recognition from sound events based on facial expression recognition

Previous studies have used subjective questionnaires to evaluate the acoustic environment and soundscape. In this paper, the facial expressions of the subjects were recorded by camera while they listened to 32 different sound events. The subjects' emotions were then recognized from these facial expressions using a machine learning method. The results showed significant differences in the subjects' disgust and pleasure responses across the 32 sound events. The change in the subjects' emotions over time and the influence of age and gender are also discussed. This method provides a valuable reference for the subjective evaluation of the acoustic environment and for the study of soundscape.
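As a rough illustration of the recognition step described in the abstract, the sketch below samples frames from a recorded subject video and assigns each frame one of seven basic emotion labels (angry, disgust, fear, happy, sad, surprise, neutral). The paper only states that "a machine learning method" was used, so the choice of the DeepFace library, the file names, and the sampling interval are assumptions made here for illustration; note also that this label set ("happy") differs from the "pleasure" category reported in the results.

```python
# Minimal sketch, not the authors' method: per-frame emotion recognition from a
# recorded subject video using the off-the-shelf DeepFace library (assumption).
# File names and the sampling interval below are hypothetical.
from collections import Counter

import cv2
from deepface import DeepFace  # pip install deepface

VIDEO_PATH = "subject01_soundevent07.mp4"  # hypothetical recording: one subject, one sound event
SAMPLE_EVERY_N_FRAMES = 15                 # ~0.5 s at 30 fps, to track emotion over time


def emotions_over_time(video_path: str):
    """Yield (timestamp_seconds, dominant_emotion) for sampled frames of the recording."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % SAMPLE_EVERY_N_FRAMES == 0:
            try:
                # DeepFace returns per-face scores for 7 basic emotions
                # (angry, disgust, fear, happy, sad, surprise, neutral).
                result = DeepFace.analyze(frame, actions=["emotion"])
                face = result[0] if isinstance(result, list) else result
                yield frame_idx / fps, face["dominant_emotion"]
            except ValueError:
                pass  # skip frames where no face is detected
        frame_idx += 1
    cap.release()


if __name__ == "__main__":
    timeline = list(emotions_over_time(VIDEO_PATH))
    # Aggregate per-frame labels for this sound event,
    # e.g. Counter({'neutral': 40, 'disgust': 12, ...})
    print(Counter(label for _, label in timeline))
```

Aggregating the per-frame labels for each sound event, as the Counter at the end does, yields a simple emotion timeline that could then be compared across events, subject ages, and genders, in the spirit of the analysis described in the abstract.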

Document Type: Research Article

Affiliations: School of Physics and Optoelectronics, South China University of Technology

Publication date: 30 November 2023

More about this publication?
  • The Noise-Con conference proceedings are sponsored by INCE/USA and the Inter-Noise proceedings by I-INCE. NOVEM (Noise and Vibration Emerging Methods) conference proceedings are included. All NoiseCon Proceedings one year or older are free to download. InterNoise proceedings from outside the USA older than 10 years are free to download. Others are free to INCE/USA members and member societies of I-INCE.
