The way scientists work in the traditional sciences has changed dramatically in recent years. Computer science increasingly supports them in performing and analyzing their experiments. Today, obtaining the raw data from instruments is merely the first step. The data is no longer analyzed on paper or with simple computational tools; instead, massive amounts of raw instrument data are processed in complex and long-running computational pipelines. This trend of supporting the traditional sciences with computational tools has led to a significant speedup in executing experiments and has also enabled experiments that would not have been possible before. Scientists increasingly depend on adequate infrastructure to process experiment data with computational pipelines and to manage the plethora of data these pipelines consume and produce. Such computational pipelines are typically modeled as workflows, so this trend challenges both the current infrastructure for executing workflows and the infrastructure for managing the resulting data deluge. This book addresses challenges arising from this trend in the areas of scientific workflow execution and data management.