Many technological fields produce data whose dimensionality grows steadily, and often much faster than the number of available samples; genomics is a typical illustration of this trend. This setting exposes many machine learning applications to the curse of dimensionality, making it difficult to estimate robust predictive models. This book focuses on the design and application of techniques that achieve both sparse feature selection and good classification performance in such high-dimensional, sparsely populated spaces. This challenge can be addressed successfully provided that adequate inductive biases are used to mitigate the lack of extra samples. These biases can consist either of taking many different views of the same data (ensemble methods) or of exploiting external information, whether expert prior knowledge or datasets from related tasks (transfer learning or multi-task learning). The proposed methods are evaluated on gene expression microarray datasets for diagnosis and biomarker discovery. Such datasets typically comprise a few tens of samples (patients) and thousands of dimensions (genes).
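As a minimal illustration of the "few samples, many dimensions" setting described above, the sketch below fits an L1-penalized logistic regression, a standard sparsity-inducing inductive bias, on synthetic data shaped like a microarray study (tens of samples, thousands of features). The data, the regularization strength, and the use of scikit-learn are all assumptions for illustration, not methods or datasets taken from the book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical synthetic setting: 40 "patients", 2000 "genes"; only the
# first 10 features carry signal. Not a dataset from the book.
rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 40, 2000, 10
X = rng.standard_normal((n_samples, n_features))
w = np.zeros(n_features)
w[:n_informative] = 2.0
y = (X @ w + 0.5 * rng.standard_normal(n_samples) > 0).astype(int)

# The L1 penalty performs sparse feature selection and classifier
# estimation in a single step: most coefficients are driven to zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])
print(f"{len(selected)} of {n_features} features retained")
```

Only a small subset of the 2000 features receives a nonzero coefficient, which is the sparse-selection behaviour the book seeks, here obtained purely from the regularization bias rather than from ensembles or external information.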