An outlier, or anomaly, is a data point that is inconsistent with the rest of the data population. Outlier and anomaly detection has been used for centuries to identify and remove anomalous observations from data. Today it is used to monitor vital infrastructure, such as utility distribution networks, transportation networks, machinery and computer networks, for faults, identifying them before they escalate with potentially catastrophic consequences. Modern detection techniques are principled and systematic, drawing on the full gamut of Computer Science and Statistics.

This book surveys techniques covering statistical, proximity-based, density-based, neural, natural-computation, machine-learning, distributed and hybrid approaches. It identifies their respective motivations and weighs their advantages and disadvantages in a comparative review, aiming to give the reader a feel for the diversity and multiplicity of techniques available. The survey should be useful to advanced undergraduate and postgraduate computer and library/information science students, and to researchers analysing and developing outlier and anomaly detection systems.
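As a minimal illustration of the statistical family of techniques the survey covers, the sketch below flags points that lie more than a chosen number of standard deviations from the sample mean (a z-score test). The data and the threshold of 2.0 are hypothetical choices for the example, not values taken from the book.

```python
import statistics

def zscore_outliers(data, threshold=2.0):
    """Return the points whose z-score exceeds `threshold`.

    A simple statistical detector: a point is flagged as an outlier
    when it lies more than `threshold` sample standard deviations
    from the mean. The threshold is an illustrative choice.
    """
    mean = statistics.mean(data)
    stdev = statistics.stdev(data)
    if stdev == 0:
        return []  # all points identical: nothing can be an outlier
    return [x for x in data if abs(x - mean) / stdev > threshold]

# Hypothetical sensor readings with one anomalous value.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]
print(zscore_outliers(readings))
```

Even this toy example shows a weakness the book's comparative review makes explicit: a large outlier inflates the standard deviation it is judged against, which can mask it at stricter thresholds, one motivation for the proximity- and density-based alternatives surveyed.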