The ability of computers to visually recognize and track hand motion is important for a wide range of applications in the field of Human-Computer Interaction. Though it is effortless for the human eye to locate and track a gesturing hand in video sequences, it is far more difficult for computers to achieve reliable image segmentation and tracking. In this research we present a fairly robust multi-cue segmentation approach that identifies candidate hand regions by simultaneously fusing motion, edge and skin-colour information. A self-reorienting boundary-tracing algorithm is then used to identify the outlines of all candidate hand regions. Once the image blob boundaries are identified, the Gaussian statistics that describe each image blob are extracted. Blob tracking is achieved by probabilistically aligning closely matching blob patterns. Non-persistent blob patterns are discarded, as they are assumed to have been generated by image noise. Although no widely accepted benchmark exists for hand segmentation and tracking against which our algorithms can be evaluated, the approach presented in this research successfully tracked hands in about 80% of the test image sequences.
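The multi-cue fusion step described above can be sketched as follows. This is a minimal illustrative sketch only: the RGB skin rule, the frame-differencing motion cue, the gradient-magnitude edge cue, the specific thresholds, and the two-of-three voting scheme are all assumptions introduced here for illustration, not the paper's actual model.

```python
import numpy as np

def skin_mask(frame):
    # Crude RGB skin-colour rule (illustrative thresholds, not the paper's model)
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def motion_mask(prev_gray, curr_gray, thresh=15):
    # Frame differencing: keep pixels whose intensity changed markedly
    return np.abs(curr_gray.astype(int) - prev_gray.astype(int)) > thresh

def edge_mask(gray, thresh=30):
    # Gradient-magnitude edges via simple finite differences
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh

def candidate_hand_mask(prev_frame, curr_frame):
    """Fuse skin-colour, motion, and edge cues into one candidate mask."""
    gray_prev = prev_frame.mean(axis=2)
    gray_curr = curr_frame.mean(axis=2)
    skin = skin_mask(curr_frame)
    motion = motion_mask(gray_prev, gray_curr)
    edges = edge_mask(gray_curr)
    # Majority vote over the three cues: at least two must agree
    votes = skin.astype(int) + motion.astype(int) + edges.astype(int)
    return votes >= 2

# Synthetic example: a skin-coloured patch appears against a dark background
prev = np.zeros((100, 100, 3), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 40:60] = [200, 120, 80]  # roughly skin-toned moving region
mask = candidate_hand_mask(prev, curr)
```

Voting over independently noisy cues is one simple way to make segmentation tolerant of any single cue failing (e.g. skin-coloured but static background objects, or motion from non-hand regions).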