Revolutionary Emotion Recognition Technology Unveiled: Wearable Device Reads Human Feelings in Real Time
In a groundbreaking development, Professor Jiyun Kim and his research team at UNIST’s Department of Materials Science and Engineering have created a technology that can recognize human emotions in real time. This innovative system is expected to find applications across several industries, particularly in next-generation wearable systems that provide emotion-based services.
Main Points:
- Developed a technology that can recognize human emotions in real time
- Utilizes a multimodal human emotion recognition system combining verbal and non-verbal expression data
- Features a personalized skin-integrated facial interface (PSiFI) system for real-time emotion recognition
The challenge of understanding and accurately extracting emotional information has long been an obstacle due to the abstract and ambiguous nature of human affects such as emotions, moods, and feelings. To address this issue, the research team has developed a multimodal human emotion recognition system that combines verbal and non-verbal expression data to efficiently utilize comprehensive emotional information.
At the heart of this system is the personalized skin-integrated facial interface (PSiFI) system. This self-powered, simple, extendable, and transparent interface features a bi-directional triboelectric strain and vibration sensor—the first of its kind—that enables simultaneous detection and integration of verbal and non-verbal expression data. The system is fully integrated with a data processing circuit for wireless data transfer, enabling real-time emotion recognition.
Utilizing machine learning algorithms, the technology performs accurate real-time human emotion recognition, even when individuals are wearing masks. The system has also been successfully applied in a digital concierge application within a virtual reality (VR) environment.
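The article does not disclose the team's actual models, so the following is a minimal, purely illustrative sketch of the multimodal idea: hypothetical strain (facial muscle) and vibration (voice) feature vectors are fused by concatenation and classified with a simple nearest-centroid rule over invented reference data. All feature values and emotion labels here are assumptions for demonstration only.

```python
# Illustrative sketch only: the study's real models and features are not public here.
# "Early fusion" of two modalities: concatenate strain and vibration feature
# vectors, then pick the emotion whose reference centroid is closest.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

# Hypothetical per-emotion reference features (2 strain + 2 vibration values),
# invented purely for illustration.
CENTROIDS = {
    "happy":   [0.9, 0.1, 0.8, 0.2],
    "sad":     [0.2, 0.7, 0.1, 0.6],
    "angry":   [0.8, 0.8, 0.9, 0.9],
    "neutral": [0.1, 0.1, 0.1, 0.1],
}

def fuse(strain, vibration):
    """Early fusion: concatenate the two modality feature vectors."""
    return list(strain) + list(vibration)

def classify(strain, vibration):
    """Return the emotion whose centroid is nearest (squared distance) to the fused vector."""
    x = fuse(strain, vibration)

    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(x, CENTROIDS[label]))

    return min(EMOTIONS, key=dist)

print(classify([0.85, 0.15], [0.75, 0.25]))  # → happy
```

In practice a learned classifier would replace the nearest-centroid rule, but the fusion step, combining verbal and non-verbal signals into one feature vector, is the core idea the article describes.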
Based on the phenomenon of “friction charging,” in which objects acquire positive and negative charges upon frictional contact and separation, the PSiFI system generates its own power and requires no external power supply or complex measurement devices for data recognition.
Professor Kim commented on the technology: “We have developed a personalized skin-integrated facial interface (PSiFI) system that can be customized for each individual.” The team used a semi-curing technique to fabricate transparent conductors for the friction-charging electrodes. Additionally, they created personalized masks using a multi-angle shooting technique; the masks combine flexibility, elasticity, and transparency.
By integrating the detection of facial muscle deformation and vocal fold vibration, the system enables real-time emotion recognition, and personalized services were provided based on users’ emotions in a virtual reality “digital concierge” application. In real-time experiments, the research team collected multimodal data such as facial muscle deformation and voice, achieving high recognition accuracy with minimal training; the system’s wireless design also ensures portability.
Furthermore, deployed in virtual reality environments as a “digital concierge,” the system can identify an individual’s emotions in different situations and provide personalized recommendations for music, movies, books, and more. Professor Kim emphasized that effective human-machine interface (HMI) devices must be able to collect and integrate complex forms of information, such as emotions, which highlights the technology’s potential in next-generation wearable systems.
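The recommendation step of the digital concierge can be pictured as a simple lookup from a recognized emotion label to content suggestions. The categories (music, movies, books) come from the article; the specific emotion labels and suggestions below are invented for illustration and do not reflect the actual application.

```python
# Hypothetical sketch of the "digital concierge" step: map a recognized
# emotion label to content recommendations. All mappings are invented.

RECOMMENDATIONS = {
    "happy":   {"music": "upbeat pop",      "movies": "comedy",      "books": "travel"},
    "sad":     {"music": "soft ballads",    "movies": "drama",       "books": "poetry"},
    "angry":   {"music": "calming ambient", "movies": "documentary", "books": "mindfulness"},
    "neutral": {"music": "easy listening",  "movies": "adventure",   "books": "fiction"},
}

def recommend(emotion, category):
    """Return a suggestion for the recognized emotion, falling back to neutral."""
    return RECOMMENDATIONS.get(emotion, RECOMMENDATIONS["neutral"])[category]

print(recommend("happy", "music"))  # → upbeat pop
```

In a real system the emotion label would come from the wearable’s recognition pipeline rather than being passed in directly; the lookup table merely illustrates how recognized emotions could drive personalized services.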
This study was published in Nature Communications.