The aim of a speech emotion recognizer is to estimate the emotional state of a speaker from a speech fragment given as input. In other words, we seek a solution to a difficult problem: given a speech fragment, how can we tell what the speaker is feeling, even when the speaker did not intend to reveal it? How can such recognizers be constructed? Intuitively, experiencing an emotion causes physiological changes, such as a faster heart rate or higher blood pressure. Some of these changes affect the speech production organs, shifting them away from their normal state, so the speech signal that leaves the mouth is distorted compared with emotionally neutral speech. Different emotions trigger different physiological changes. We can therefore record speech in different emotional states, measure acoustic parameters of the waveform, form feature vectors from these measurements, and divide the data into a training set and a test set. We then train a classifier, say a decision tree, on the training set and see how well it predicts on the test set.
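The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not a working recognizer: it assumes scikit-learn is available and substitutes synthetic Gaussian data for real acoustic measurements (the feature names and the two emotion labels are hypothetical placeholders).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for acoustic feature vectors (e.g. pitch mean,
# energy, speaking rate, jitter) measured from labeled recordings.
rng = np.random.default_rng(0)
n_per_class, n_features = 100, 4
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features)),  # "neutral"
    rng.normal(loc=1.5, scale=1.0, size=(n_per_class, n_features)),  # "angry"
])
y = np.array(["neutral"] * n_per_class + ["angry"] * n_per_class)

# Divide the data into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train a decision tree on the training set, then evaluate on the test set.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

With real data, the two Gaussian clusters would be replaced by feature vectors extracted from emotional speech recordings, and the label set would cover whichever emotions the corpus annotates.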