Autoencoders are feedforward neural networks, possibly with several hidden layers, that are trained to reconstruct their input at the output layer. Because the hidden (bottleneck) layer is smaller than the input, the input data are mapped to a lower-dimensional code space at that layer. Training a deep autoencoder is difficult, however, because the weights of the deeper hidden layers are poorly optimized by plain backpropagation. This work focuses on the characteristics, training and performance evaluation of autoencoders, and discusses stacking and the Restricted Boltzmann Machine in detail. Two datasets, the ORL face dataset and the MNIST handwritten digit dataset, are used in the experiments, and the performance of the autoencoders is compared with that of PCA. It is also shown that autoencoders can be used for image compression; compression efficiency is studied on the DDSM mammogram dataset. Since training is performed on image patches, mammograms of different sizes can be compressed and decompressed.
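The bottleneck idea described above can be illustrated with a minimal sketch: a linear autoencoder whose encoder projects 8-dimensional inputs down to a 2-dimensional code and whose decoder reconstructs the input, both trained by gradient descent on the mean squared reconstruction error. All sizes, data, and learning-rate choices here are illustrative assumptions, not the configuration used in the experiments.

```python
import numpy as np

# Illustrative undercomplete (bottleneck) autoencoder:
# linear encoder d -> k and linear decoder k -> d, k < d,
# trained with gradient descent on mean squared error.
rng = np.random.default_rng(0)

n, d, k = 200, 8, 2                       # samples, input dim, code dim
latent = rng.normal(size=(n, k))          # synthetic data near a 2-D subspace
basis = rng.normal(size=(k, d))
X = latent @ basis + 0.01 * rng.normal(size=(n, d))

W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder weights
lr = 0.01

def mse(A, B):
    return float(np.mean((A - B) ** 2))

initial_err = mse(X, X @ W_enc @ W_dec)
for _ in range(500):
    code = X @ W_enc          # encode: project input to 2-D code space
    X_hat = code @ W_dec      # decode: reconstruct the 8-D input
    err = X_hat - X           # reconstruction residual
    # Gradients of the squared-error loss w.r.t. decoder and encoder weights
    grad_dec = code.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_err = mse(X, X @ W_enc @ W_dec)
```

After training, `final_err` should be well below `initial_err`, showing that the 2-dimensional code retains most of the information needed to reconstruct the input; the deep, nonlinear autoencoders discussed in the paper generalize this scheme with stacked hidden layers.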