Johannes Wagner
Cited by
From physiological signals to emotions: Implementing and comparing selected methods for feature extraction and classification
J Wagner, J Kim, E André
2005 IEEE international conference on multimedia and expo, 940-943, 2005
The relevance of feature type for the automatic classification of emotional user states: low level descriptors and functionals
B Schuller, A Batliner, D Seppi, S Steidl, T Vogt, J Wagner, L Devillers, ...
Automatic recognition of emotions from speech: a review of the literature and recommendations for practical realisation
T Vogt, E André, J Wagner
Affect and Emotion in Human-Computer Interaction: From Theory to …, 2008
The social signal interpretation (SSI) framework: multimodal signal processing and recognition in real-time
J Wagner, F Lingenfelser, T Baur, I Damian, F Kistler, E André
Proceedings of the 21st ACM international conference on Multimedia, 831-834, 2013
Whodunnit–searching for the most important feature types signalling emotion-related user states in speech
A Batliner, S Steidl, B Schuller, D Seppi, T Vogt, J Wagner, L Devillers, ...
Computer Speech & Language 25 (1), 4-28, 2011
Dawn of the transformer era in speech emotion recognition: closing the valence gap
J Wagner, A Triantafyllopoulos, H Wierstorf, M Schmitt, F Burkhardt, ...
arXiv preprint arXiv:2203.07378, 2022
Exploring fusion methods for multimodal emotion recognition with missing data
J Wagner, E André, F Lingenfelser, J Kim
IEEE Transactions on Affective Computing 2 (4), 206-218, 2011
The NoXi database: multimodal recordings of mediated novice-expert interactions
A Cafaro, J Wagner, T Baur, S Dermouche, M Torres Torres, C Pelachaud, ...
Proceedings of the 19th ACM International Conference on Multimodal …, 2017
Smart sensor integration: A framework for multimodal emotion recognition in real-time
J Wagner, E André, F Jung
2009 3rd international conference on affective computing and intelligent …, 2009
Integrating information from speech and physiological signals to achieve emotional sensitivity
J Kim, E André, M Rehm, T Vogt, J Wagner
INTERSPEECH, 809-812, 2005
Laugh-aware virtual agent and its impact on user amusement
R Niewiadomski, J Hofmann, J Urbain, T Platt, J Wagner, P Bilal, T Ito, ...
University of Zurich, 2013
A systematic comparison of different HMM designs for emotion recognition from acted and spontaneous speech
J Wagner, T Vogt, E André
Affective Computing and Intelligent Interaction: Second International …, 2007
Bi-channel sensor fusion for automatic sign language recognition
J Kim, J Wagner, M Rehm, E André
2008 8th IEEE International Conference on Automatic Face & Gesture …, 2008
Deep learning in paralinguistic recognition tasks: Are hand-crafted features still relevant?
J Wagner, D Schiller, A Seiderer, E André
Exploring interaction strategies for virtual characters to induce stress in simulated job interviews
P Gebhard, T Baur, I Damian, G Mehlmann, J Wagner, E André
The social signal interpretation framework (SSI) for real time signal processing and recognition
J Wagner, F Lingenfelser, E André
INTERSPEECH, 3245-3248, 2011
The AVLaughterCycle Database.
J Urbain, E Bevacqua, T Dutoit, A Moinet, R Niewiadomski, C Pelachaud, ...
LREC, 2010
Towards robust speech emotion recognition using deep residual networks for speech enhancement
A Triantafyllopoulos, G Keren, J Wagner, I Steiner, B Schuller
AVLaughterCycle: Enabling a virtual agent to join in laughing with a conversational partner using a similarity-driven audiovisual laughter animation
J Urbain, R Niewiadomski, E Bevacqua, T Dutoit, A Moinet, C Pelachaud, ...
Journal on Multimodal User Interfaces 4, 47-58, 2010
Patterns, prototypes, performance: classifying emotional user states
D Seppi, A Batliner, B Schuller, S Steidl, T Vogt, J Wagner, L Devillers, ...