The detection of depression from speech signals has become a problem of growing interest. In this work, the voices of depressed and healthy speakers are analysed using power spectral density (PSD) and mel-frequency cepstral coefficients (MFCCs). Acoustic properties of speech have previously been identified as possible cues to depression, and there is evidence that certain vocal parameters can be used to objectively discriminate between depressed and neutral speech. These questions are addressed empirically here using classifier configurations borrowed from speech emotion recognition, evaluated on a depressed/neutral speech database. The results demonstrate that detailed spectral features are well suited to the task, that speaker normalization benefits mainly the less detailed features, and that dynamic information appears to provide little benefit. PSD and MFCC features were extracted from the collected voiced speech samples by spectrum analysis and used to discriminate depressed from non-depressed speakers. The results also indicate that using construct-based speech content from the problem-solving interaction sessions improves detection accuracy.
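The feature-extraction pipeline described above can be sketched as follows. This is a minimal, illustrative NumPy implementation, not the authors' code: it estimates the PSD of one speech frame with a windowed periodogram, applies a triangular mel filterbank, and takes a DCT-II of the log filterbank energies to obtain MFCCs. All frame sizes, filter counts, and the synthetic test tone are assumptions for the sketch.

```python
import numpy as np

def periodogram_psd(frame, fs):
    # PSD estimate of one frame via the Hamming-windowed periodogram
    spec = np.fft.rfft(frame * np.hamming(len(frame)))
    return (np.abs(spec) ** 2) / (fs * len(frame))

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(frame, fs, n_filters=26, n_ceps=13):
    # Log mel filterbank energies followed by DCT-II decorrelation
    n_fft = len(frame)
    psd = periodogram_psd(frame, fs)
    energies = mel_filterbank(n_filters, n_fft, fs) @ psd
    log_e = np.log(energies + 1e-10)  # floor avoids log(0)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return dct @ log_e

# Illustrative input: one 32 ms frame of a synthetic 200 Hz tone at 8 kHz
fs = 8000
t = np.arange(0, 0.032, 1.0 / fs)        # 256 samples
frame = np.sin(2 * np.pi * 200.0 * t)
coeffs = mfcc(frame, fs)
print(coeffs.shape)  # (13,)
```

In practice the per-frame PSD and MFCC vectors would be computed over all voiced frames of a recording and fed to a classifier trained on depressed versus neutral speech.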