Recently, the importance of reacting to the emotional state of a user has become generally accepted in the field of human-computer interaction, and speech in particular has received increased attention as a modality from which to automatically deduce information on emotion. So far, research has mostly consisted of offline studies with little application orientation, based on previously recorded and annotated databases of emotional speech. However, the demands of online analysis differ from those of offline analysis; in particular, conditions are more challenging and less predictable. Therefore, this book investigates real-time automatic emotion recognition from acoustic features of speech, with several experiments on suitable audio segmentation, feature extraction and classification algorithms. The results led to the implementation of the open-source online emotion recognition framework EmoVoice. A further emphasis is placed on multimodality and the use of speech emotion recognition in applications.
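To make the pipeline mentioned above (segmenting the audio, extracting acoustic features, classifying them) concrete, the following is a deliberately simplified sketch. The synthetic "speech" signal, the two features (frame energy and zero-crossing rate) and the nearest-centroid classifier are illustrative assumptions chosen for brevity; they are not the actual feature set or classifier used by EmoVoice.

```python
import math
import random

FRAME = 256  # samples per analysis frame (assumed segmentation unit)

def features(samples):
    """Utterance-level features: mean frame energy and mean zero-crossing rate."""
    energies, zcrs = [], []
    for s in range(0, len(samples) - FRAME + 1, FRAME):
        frame = samples[s:s + FRAME]
        energies.append(sum(x * x for x in frame) / FRAME)
        zcrs.append(sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / FRAME)
    n = len(energies)
    return (sum(energies) / n, sum(zcrs) / n)

def synth(amplitude, freq, n=2048, sr=8000):
    """Hypothetical stand-in for recorded speech: a noisy sine tone."""
    return [amplitude * math.sin(2 * math.pi * freq * i / sr)
            + random.gauss(0.0, 0.01) for i in range(n)]

def train_centroids(labelled):
    """Average the feature vectors of each emotion class (nearest-centroid model)."""
    sums, counts = {}, {}
    for vec, label in labelled:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Return the class whose centroid is nearest (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(centroids[lab], vec)))

random.seed(0)
# Toy training data: "aroused" speech is louder and higher-pitched than "calm".
train = [(features(synth(0.8, 300)), "aroused") for _ in range(5)] + \
        [(features(synth(0.1, 120)), "calm") for _ in range(5)]
model = train_centroids(train)
print(classify(model, features(synth(0.7, 280))))
```

In a real online system the frames arrive from a live microphone stream, the feature set is far richer (pitch, energy, spectral and voice-quality statistics), and a trained statistical classifier replaces the toy centroid model, but the three stages remain the same.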