The area of affective computing, and in particular the recognition of emotion from voice, has received steadily increasing attention in recent years. At the same time, significant challenges to speech-based emotion recognition remain.
This paper presents the Cogito submission to the second sub-challenge of the Interspeech Computational Paralinguistics Challenge (ComParE). The aim of this sub-challenge is to recognize self-assessed affect from short audio clips of speech.