Effective automated emotion analysis enables a better understanding of users' and customers' behavior and needs, allowing businesses to adapt to real demand and usage, with great potential for economic gains and an improved user experience. Emotions can be detected and tracked through user interaction in many forms, such as explicit feedback via email, call-center interactions, and social media comments, and can include implicit signals of approval or rejection through facial expressions, speech, or other non-verbal feedback. This information can be found in speech, text, or audio-visual content.
One ambitious goal of MixedEmotions is to tackle these multiple "mixed" data-source and modality challenges. Another is to provide services in multiple languages, opening the technology up to a wider audience. A third important challenge is handling the large volumes of data typical of the customer-service and user-feedback industries. Different aspects and facets of the technologies developed in MixedEmotions thus serve different targets, and users of those technologies will be able to build a wide range of applications covering diverse use cases in diverse industries.
MixedEmotions produced an Integrated Big Linked Data Platform for Emotion Recognition, tackling all the challenges mentioned above, which are not currently fully addressed by commercial solutions on the market. The MixedEmotions Platform is free, open source, and accessible online via http://mixedemotions.insight-centre.org/.
The MixedEmotions platform has been developed and evaluated in the context of three Pilot Projects that are representative of a variety of data-analytics markets: Social TV, Brand Reputation Management, and Call Centre Operations. In each case, the relevant industry partner reached its specific innovation objectives.
The MixedEmotions project and platform were developed around three use cases, for which business scenarios were established in the early stages of the project through consultation between industry and academic partners.
A flexible microservices architecture was chosen for the MixedEmotions platform, allowing it to act both as a toolbox whose components can be used separately and as a Big Data analysis tool, able to distribute computing resources across many physical machines and to manage sophisticated data-analysis pipelines.
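Conceptually, such a pipeline chains independent components, each consuming and enriching a shared document representation. The sketch below is a minimal illustration of that idea only; the stage names, document fields, and `Pipeline` class are all hypothetical and do not reflect the platform's actual API or deployment model.

```python
class Pipeline:
    """Chain analysis stages; each stage stands in for an independently
    deployable microservice with a document-in/document-out interface."""

    def __init__(self, *stages):
        self.stages = stages

    def run(self, document):
        # Pass the document through each stage in order, accumulating annotations.
        for stage in self.stages:
            document = stage(document)
        return document


# Hypothetical stages illustrating a text-analysis pipeline.
def detect_language(doc):
    return {**doc, "lang": "en"}

def extract_entities(doc):
    return {**doc, "entities": ["MixedEmotions"]}

def annotate_emotion(doc):
    return {**doc, "emotion": {"valence": 0.6, "arousal": 0.2}}


result = Pipeline(detect_language, extract_entities, annotate_emotion).run(
    {"text": "MixedEmotions works well."}
)
print(result["lang"], result["emotion"])
```

Because each stage only depends on the shared document format, stages can be swapped, reordered, or run on separate machines, which is what makes the toolbox and Big Data usage modes compatible.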
Once the representations to be used for the concept of emotion had been agreed upon, modules were developed for each modality (text, audio, video, social media).
A two-dimensional emotion scheme (arousal and valence) was used for emotion detection in audio. The module also includes cross-lingual recognition and detection of age, gender, and personality. For text, modules are provided for sentiment and emotion recognition in several languages. Both categorical and dimensional emotion schemes are used, and a crowd-sourced emotion-annotated corpus was collected. In parallel, a multilingual WordNet-Affect lexicon covering 23 European languages was developed, providing baseline emotion detection capabilities in those languages.
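In a dimensional scheme, an emotion is a point in the valence-arousal plane rather than a discrete label; coarse categorical labels can be recovered by looking at which quadrant the point falls into. The following sketch shows the idea; the class name, ranges, and quadrant labels are illustrative assumptions, not the project's actual representation.

```python
from dataclasses import dataclass


@dataclass
class EmotionPoint:
    """An emotion as a point in the valence-arousal plane.

    valence: -1.0 (negative) .. 1.0 (positive)
    arousal: -1.0 (calm)     .. 1.0 (excited)
    """
    valence: float
    arousal: float

    def quadrant(self) -> str:
        # Map the two-dimensional point onto a coarse categorical label.
        if self.valence >= 0:
            return "excited/happy" if self.arousal >= 0 else "content/calm"
        return "angry/stressed" if self.arousal >= 0 else "sad/depressed"


print(EmotionPoint(valence=0.7, arousal=-0.3).quadrant())  # content/calm
```

This is why the text above can speak of "both categorical and dimensional" schemes: a dimensional prediction can always be discretized into categories, while the reverse loses information.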
In the image and video modalities, face-based classifiers, including for emotional facial expressions, were implemented, with the ability to detect multiple faces as well as age and gender.
Finally, a fusion model has been built for multimodal emotion recognition.
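One common way to combine per-modality predictions is late fusion: each modality produces its own (valence, arousal) estimate, and the estimates are merged with confidence weights. The sketch below illustrates that general technique only; the function, score values, and weights are invented for illustration and are not the project's actual fusion model.

```python
def fuse_predictions(modality_scores, weights=None):
    """Late fusion: weighted average of per-modality (valence, arousal) scores.

    modality_scores: dict mapping modality name -> (valence, arousal) tuple.
    weights: optional dict of per-modality confidence weights (default 1.0 each).
    """
    weights = weights or {m: 1.0 for m in modality_scores}
    total = sum(weights[m] for m in modality_scores)
    valence = sum(weights[m] * v for m, (v, _) in modality_scores.items()) / total
    arousal = sum(weights[m] * a for m, (_, a) in modality_scores.items()) / total
    return valence, arousal


# Toy scores: audio is weighted higher, e.g. because it is deemed more reliable.
fused = fuse_predictions(
    {"text": (0.8, 0.2), "audio": (0.4, 0.6), "video": (0.6, 0.4)},
    weights={"text": 1.0, "audio": 2.0, "video": 1.0},
)
print(fused)  # (0.55, 0.45)
```

A weighted average is the simplest fusion rule; more sophisticated models learn the combination (e.g. with a classifier over concatenated modality outputs), which is typically where fusion gains over single modalities come from.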
Tools and interfaces for search and analytics on mixed, emotion-enriched data have been developed, providing big data analysis and semi-structured knowledge-graph capabilities and APIs. The social semantic knowledge graph carries structured information about named entities, enriched with emotions. An analysis platform for social media and social context, based on graph analytics, has also been developed.
The three pilots were successfully implemented over the course of the project. Two use cases were identified in the Social TV pilot: an emotion-based recommender system for Smart TV, and an Editorial Dashboard in which emotions associated with DW RSS news items can be visualized alongside the corresponding tweets. The Brand Reputation Management pilot can now monitor brands on social networks, including in videos, perform multimodal emotion recognition analysis, and visualize the results. In the third pilot, dealing with Call Centre Operations, the project enabled Phonexia to highlight potentially problematic calls through detection of emotional valence and arousal variation in speech, and to provide rich feedback on customer satisfaction.
The communication and dissemination strategies were conducted successfully: MixedEmotions was promoted through a wide range of materials (videos, posters, flyers, etc.), media (website, Twitter, LinkedIn, etc.), and events (webinars, tutorials, conferences, etc.). Concerning exploitation of project outcomes, the components made available by the platform provide technologies that enabled the industry partners (and by extension other SMEs) to improve their business processes with emotion-analysis decision-support tools, enhancing their offerings and targeting new markets and customers. The industrial partners finalized their market analyses for the vertical markets associated with each pilot by selecting early customer installations and commercial pilots.
The MixedEmotions project now enables the European content analytics industry to build emotion analysis solutions. This is especially true for SMEs, which are a driving force of this market and generally cannot afford to finance core research and development in the area.
Moreover, the MixedEmotions platform enabled innovation in the services and products currently offered by the consortium SMEs, giving them access to fine-grained, semantically integrated, large-scale analysis of emotion in customer feedback and other relevant content streams, across European languages and content modalities (audio, video, text, social media), including open and linked data sources.
Although MixedEmotions is an innovation action, most components integrated into the platform have been extended with capabilities exceeding the state of the art in their respective fields.
For acoustic emotion recognition, an end-to-end deep-neural-network approach new to the audio processing domain was developed. Furthermore, using canonical correlation analysis, we improved classification accuracy for multilingual audio emotion recognition.
For emotion detection from video, the state of the art was extended in several detection tasks relevant to emotion, including facial expressions (neutral, smile, sad, angry, surprise). By combining these tasks, we achieved beyond-state-of-the-art performance in deceit and emotion analysis.
For emotion detection from text, we have developed methodologies for building and extending linguistic resources for poorly resourced languages, including the transfer of emotion lexicons to those languages. The lack of ground truth data for evaluation and model training was addressed through a substantial emotion annotation effort. In addition to emotion detection, MixedEmotions has developed novel suggestion mining methods and tools.
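A simple baseline for the lexicon-transfer idea mentioned above is dictionary projection: look up each source-language lexicon entry in a bilingual dictionary and carry its emotion label over to the translations. The function and toy entries below are a hypothetical illustration of that baseline, not the project's actual transfer method.

```python
def transfer_lexicon(emotion_lexicon, bilingual_dict):
    """Project a source-language emotion lexicon into a target language
    via bilingual dictionary lookup (a simplistic transfer baseline).

    emotion_lexicon: dict of source word -> emotion label.
    bilingual_dict:  dict of source word -> list of target translations.
    """
    transferred = {}
    for word, label in emotion_lexicon.items():
        for translation in bilingual_dict.get(word, []):
            # Keep the first label seen per target word; a real system would
            # resolve conflicts, e.g. by sense disambiguation or voting.
            transferred.setdefault(translation, label)
    return transferred


# Toy example with invented entries (English -> Spanish).
lexicon = {"joy": "happiness", "rage": "anger"}
dictionary = {"joy": ["alegría", "gozo"], "rage": ["rabia"]}
out = transfer_lexicon(lexicon, dictionary)
print(out)  # {'alegría': 'happiness', 'gozo': 'happiness', 'rabia': 'anger'}
```

Such projected lexicons are noisy (polysemous words pick up wrong labels), which is why the annotation effort described above matters: ground-truth data lets the transferred resources be evaluated and corrected.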
Finally, fusing text, audio, and video analysis yielded higher emotion recognition performance.
More info: http://mixedemotions-project.eu/.