Large-scale reconstructions from community photo collections are rich in information that can be interpreted as a sampling of the geometry and appearance of the underlying space. In this dissertation, we encode the visibility relationships between and among points and cameras as visibility probabilities. The conditional visibility probability of a set of points on a point (or of a set of cameras on a camera) can be used to select points (or cameras) based on their dependence or independence. We use it to efficiently solve the problems of image localization and feature triangulation.

Beyond image localization and feature triangulation, bundle adjustment is a key component of the reconstruction pipeline, and is often its slowest and most computationally intensive stage. We also present a hybrid implementation of sparse bundle adjustment on the GPU using CUDA, with the CPU working in parallel. The algorithm is decomposed into smaller steps, each of which is scheduled on the GPU or the CPU. We develop efficient kernels for these steps and rely on existing libraries for several of them. Our implementation significantly outperforms the CPU implementation, achieving a speedup of 30-40 times.
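To make the selection idea concrete, the following is a minimal sketch (not the dissertation's actual algorithm) of how conditional visibility probabilities could be estimated from a camera-point visibility matrix and used to pick a nearly independent subset of points. The matrix `V`, the greedy strategy, and the threshold `max_cond` are all illustrative assumptions; in a real pipeline the visibility matrix would come from the structure-from-motion tracks.

```python
import numpy as np

# Hypothetical toy data: V[c, p] = 1 if camera c observes point p.
# In a real reconstruction this matrix comes from the SfM tracks.
rng = np.random.default_rng(0)
V = (rng.random((50, 200)) < 0.2).astype(float)

# Marginal visibility probability of each point: fraction of cameras seeing it.
p_marginal = V.mean(axis=0)

# Conditional visibility P(j visible | i visible), estimated from co-visibility.
co_vis = V.T @ V                     # co_vis[i, j] = #cameras seeing both i and j
seen = V.sum(axis=0)                 # #cameras seeing each point
p_cond = co_vis / np.maximum(seen[:, None], 1.0)

def select_points(k, max_cond=0.5):
    """Greedily pick up to k well-observed points that are nearly
    independent (low conditional visibility) of those already chosen."""
    order = np.argsort(-p_marginal)  # most visible first
    chosen = []
    for i in order:
        if all(p_cond[j, i] <= max_cond for j in chosen):
            chosen.append(int(i))
        if len(chosen) == k:
            break
    return chosen

subset = select_points(10)
```

The same construction transposes directly to cameras: replacing `V` with its transpose yields conditional visibility of cameras given cameras, which could drive an analogous camera-selection step.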