In recent years, Privacy-Preserving Data Mining has emerged as a very active research area. This field studies how knowledge or patterns can be extracted from large data stores while maintaining commercial or legislative privacy constraints. Quite often, these constraints pertain to the individuals represented in the data stores. While data collectors strive to derive new insights that would allow them to improve customer service and increase sales, consumers are concerned about the vast quantities of information collected about them and how this information is put to use. The question of how these two contrasting goals can be reconciled is the focus of this work. We seek ways to improve the tradeoff between privacy and utility when mining data. We address this tradeoff by considering the privacy and algorithmic requirements simultaneously, in the context of two privacy models that have attracted considerable attention in recent years: k-anonymity and differential privacy. Our analysis and experimental evaluations confirm that algorithmic decisions made with privacy considerations in mind can have a profound impact on the accuracy of the resulting data mining models.
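To make the first of the two privacy models concrete: k-anonymity requires that every record be indistinguishable from at least k−1 other records with respect to its quasi-identifier attributes. The following is a minimal sketch of such a check; the toy data, attribute names, and the `is_k_anonymous` helper are illustrative assumptions, not taken from this work.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (the k-anonymity property)."""
    counts = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(c >= k for c in counts.values())

# Toy data: age generalized to ranges and ZIP codes truncated,
# as a typical anonymization might produce.
records = [
    {"age": "30-39", "zip": "021**", "disease": "flu"},
    {"age": "30-39", "zip": "021**", "disease": "cold"},
    {"age": "40-49", "zip": "130**", "disease": "flu"},
    {"age": "40-49", "zip": "130**", "disease": "asthma"},
]

print(is_k_anonymous(records, ["age", "zip"], 2))  # True: each class has 2 rows
print(is_k_anonymous(records, ["age", "zip"], 3))  # False: classes are too small
```

Coarser generalization of the quasi-identifiers raises k but discards detail, which is exactly the privacy–utility tradeoff the work studies.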