Applied Ethics | Artificial Intelligence and Robotics | Computer Sciences | Ethics and Political Philosophy | Inequality and Stratification | Other Computer Sciences


Noreen Herzfeld, Computer Science; Erica Stonestreet, Philosophy; Peter Ohmann, Computer Science


Machine learning (ML) is a core area of computer science and a mainstream way of making sense of large amounts of data. While the technology opens new possibilities across many fields, it also raises problems, one of which is bias. Because ML algorithms build their mathematical models through inductive reasoning, the predictions and trends those models produce are never necessarily true, only more or less probable. Given this, it is unreasonable to expect the deductive application of these models to ever be fully unbiased. It is therefore important to set expectations for ML that account for these inherent limitations.

The current conversation about ML centers on how and when to implement the technology so as to mitigate the effect of bias on its results. This thesis suggests that the question of "whether" should be addressed first. We approach the issue of bias from the standpoint of justice and fairness in ML, developing a framework for determining whether the implementation of a specific ML model is warranted at all. We do so by emphasizing the liberal values that underlie our definitions of societal fairness and justice: the separateness of persons, moral evaluation, freedom and understanding of choice, and accountability for wrongdoing.