The speech recognition algorithm for the healthcare industry has three phases: Speech Detection (SD), Doctor Recognition (DR), and Speech Recognition (SR). Speech is captured from a remote computer by continuously recording the sound signals and removing unwanted data. A modified K-means clustering is proposed to reduce the search space when matching the doctor's voice features against the database voice features. After identifying the cluster that most closely matches the input data, four methods, namely cross correlation, frequency multiplication, frequency cross correlation, and peak signal comparison, are used for DR. The Neyman-Pearson likelihood ratio test is used to combine the results of the four tests. The final step performs SR using two hybrid models that combine a Multi-Layer Perceptron (MLP) and a Hidden Markov Model (HMM). The experimental results show that speaker and speech recognition can be applied successfully in a healthcare environment to store details, and can improve the quality of medical care while controlling the associated costs.
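The DR idea described above can be sketched as follows: first narrow the search space by assigning the input to its nearest K-means cluster, then score only the candidates in that cluster, here with normalized cross-correlation as a stand-in for the four combined tests. This is a minimal illustrative sketch, not the paper's implementation; all function names are hypothetical, and the feature vectors are placeholders for real voice features (e.g. spectral coefficients).

```python
import numpy as np

def assign_cluster(feature, centroids):
    """Return the index of the nearest K-means centroid (Euclidean distance)."""
    dists = np.linalg.norm(centroids - feature, axis=1)
    return int(np.argmin(dists))

def cross_corr_score(a, b):
    """Peak of the normalized cross-correlation between two feature vectors."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.max(np.correlate(a, b, mode="full")) / len(a))

def recognize_doctor(feature, centroids, db_features, db_clusters):
    """Match the input only against database entries in its own cluster,
    which is how the clustering step reduces the search space."""
    c = assign_cluster(feature, centroids)
    candidates = [i for i, k in enumerate(db_clusters) if k == c]
    scores = {i: cross_corr_score(feature, db_features[i]) for i in candidates}
    return max(scores, key=scores.get) if scores else None
```

In the full scheme, the single `cross_corr_score` would be replaced by the four tests (cross correlation, frequency multiplication, frequency cross correlation, peak signal comparison), whose outcomes are fused with a Neyman-Pearson likelihood ratio test before the final decision.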