Mathematical models for the optimal allocation of resources over time under uncertainty are notoriously difficult to solve, for both computational and technical reasons. This book addresses two complex stochastic and dynamic optimization problems with applications in modern sensor systems: (i) hunting multiple elusive targets and (ii) tracking multiple moving targets. These problems are formulated as multi-armed restless bandit models with continuous state, which compounds the technical difficulty of deriving and computing optimal solutions and motivates the focus instead on designing tractable, well-performing heuristic policies of priority-index type. The book shows how to derive such policies via a Lagrangian relaxation and decomposition approach. Its contents are relevant to both applied and methodological communities. For operators of modern sensor systems, it provides novel scheduling rules that are tractable and nearly optimal. For researchers concerned with designing near-optimal solution policies for multi-armed restless bandit problems, it presents a systematic procedure for doing so based on recent extensions of indexation theory.
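To make the notion of a priority-index policy concrete, the following is a minimal illustrative sketch, not the book's actual method: each arm (target) carries a scalar index summarizing the value of engaging it, and at every decision epoch the scheduler activates the arms with the largest indices. Here the index is a purely hypothetical heuristic (current estimation uncertainty of a toy tracking model); the function names, the measurement model, and the noise parameter `q` are all assumptions made for illustration.

```python
import random


def priority_index(variance):
    # Hypothetical index: prioritize the targets whose state
    # estimate is currently most uncertain. In the Lagrangian
    # framework, an index of this kind would instead be derived
    # from the relaxed single-arm subproblems.
    return variance


def schedule(variances, m):
    """Select the m arms (targets) with the largest index values."""
    order = sorted(range(len(variances)),
                   key=lambda i: priority_index(variances[i]),
                   reverse=True)
    return sorted(order[:m])


def simulate(n_targets=4, m=1, horizon=20, q=1.0, seed=0):
    """Toy multi-target tracking run under the index policy.

    Unobserved targets accumulate uncertainty at rate q; an
    observed target's uncertainty collapses to a floor value.
    Returns the cumulative uncertainty over the horizon.
    """
    rng = random.Random(seed)  # placeholder for stochastic dynamics
    var = [1.0] * n_targets
    total_cost = 0.0
    for _ in range(horizon):
        chosen = set(schedule(var, m))
        for i in range(n_targets):
            if i in chosen:
                var[i] = 0.2       # observed: toy measurement model
            else:
                var[i] += q        # unobserved: process noise grows
        total_cost += sum(var)
    return total_cost
```

For example, `schedule([0.5, 2.0, 1.0], 1)` selects target 1, the one with the highest index. The point of the sketch is the structure of the policy (rank arms by an index, activate the top m), which is what makes such rules tractable regardless of how the index itself is derived.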