Carlos Busso
Professor of Electrical Engineering, The University of Texas at Dallas
Verified email address – Homepage
Cited by
IEMOCAP: Interactive emotional dyadic motion capture database
C Busso, M Bulut, CC Lee, A Kazemzadeh, E Mower, S Kim, JN Chang, ...
Language resources and evaluation 42, 335-359, 2008
The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing
F Eyben, KR Scherer, BW Schuller, J Sundberg, E André, C Busso, ...
IEEE Transactions on Affective Computing 7 (2), 190-202, 2015
Analysis of emotion recognition using facial expressions, speech and multimodal information
C Busso, Z Deng, S Yildirim, M Bulut, CM Lee, A Kazemzadeh, S Lee, ...
Proceedings of the 6th International Conference on Multimodal Interfaces …, 2004
Emotion recognition using a hierarchical binary decision tree approach
CC Lee, E Mower, C Busso, S Lee, S Narayanan
Speech Communication 53 (9-10), 1162-1171, 2011
Analysis of emotionally salient aspects of fundamental frequency for emotion detection
C Busso, S Lee, S Narayanan
IEEE Transactions on Audio, Speech, and Language Processing 17 (4), 582-596, 2009
Emotion recognition based on phoneme classes
CM Lee, S Yildirim, M Bulut, A Kazemzadeh, C Busso, Z Deng, S Lee, ...
Eighth International Conference on Spoken Language Processing, 2004
MSP-IMPROV: An acted corpus of dyadic interactions to study emotion perception
C Busso, S Parthasarathy, A Burmania, M AbdelWahab, N Sadoughi, ...
IEEE Transactions on Affective Computing 8 (1), 67-80, 2016
An acoustic study of emotions expressed in speech
S Yildirim, M Bulut, CM Lee, A Kazemzadeh, Z Deng, S Lee, S Narayanan, ...
Eighth International Conference on Spoken Language Processing, 2004
Rigid head motion in expressive speech animation: Analysis and synthesis
C Busso, Z Deng, M Grimm, U Neumann, S Narayanan
IEEE Transactions on Audio, Speech, and Language Processing 15 (3), 1075-1086, 2007
Building naturalistic emotionally balanced speech corpus by retrieving emotional speech from existing podcast recordings
R Lotfian, C Busso
IEEE Transactions on Affective Computing 10 (4), 471-483, 2017
Interrelation between speech and facial gestures in emotional utterances: a single subject study
C Busso, SS Narayanan
IEEE Transactions on Audio, Speech, and Language Processing 15 (8), 2331-2347, 2007
Interpreting ambiguous emotional expressions
E Mower, A Metallinou, CC Lee, A Kazemzadeh, C Busso, S Lee, ...
Affective Computing and Intelligent Interaction and Workshops, 2009. ACII …, 2009
Domain adversarial for acoustic emotion recognition
M Abdelwahab, C Busso
IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (12 …, 2018
Jointly predicting arousal, valence and dominance with multi-task learning
S Parthasarathy, C Busso
Interspeech 2017, 1103-1107, 2017
Correcting time-continuous emotional labels by modeling the reaction lag of evaluators
S Mariooryad, C Busso
IEEE Transactions on Affective Computing 6 (2), 97-108, 2014
Using neutral speech models for emotional speech analysis
C Busso, S Lee, SS Narayanan
Interspeech, 2225-2228, 2007
Natural head motion synthesis driven by acoustic prosodic features
C Busso, Z Deng, U Neumann, S Narayanan
Computer Animation and Virtual Worlds 16 (3‐4), 283-290, 2005
The ordinal nature of emotions
GN Yannakakis, R Cowie, C Busso
2017 Seventh International Conference on Affective Computing and Intelligent …, 2017
Increasing the reliability of crowdsourcing evaluations using online quality assessment
A Burmania, S Parthasarathy, C Busso
IEEE Transactions on Affective Computing 7 (4), 374-388, 2015
Toward effective automatic recognition systems of emotion in speech
C Busso, M Bulut, S Narayanan, J Gratch, S Marsella
Social emotions in nature and artifact: emotions in human and human-computer …, 2013
Articles 1–20