New Principles to Make Machine Learning More Human

Strong standards are urgently needed to prevent artificial intelligence systems from discriminating against and marginalizing people. This is the finding of a new white paper, How to Prevent Discriminatory Outcomes in Machine Learning, published today by the World Economic Forum’s Global Future Council on Human Rights.

The paper was produced after an extended consultation period and is based on research and interviews with industry experts, academics, human rights professionals and others working at the intersection of machine learning and human rights. Its key recommendation for developers and for all businesses looking to use machine learning is to prioritize non-discrimination by adopting a framework based on four guiding principles: active inclusion; fairness; right to understanding; and access to redress.

Recent examples of how machine learning can produce or reinforce discriminatory outcomes include:

Loan services – applicants from rural backgrounds, who have less access to digital infrastructure, could be unfairly excluded by algorithms trained on data points captured from more urban populations (a hypothetical sketch of this effect follows the list).

Criminal justice – the underlying data used to train an algorithm may be biased, reflecting a history of discrimination.

Recruitment – screening algorithms might filter out applicants from lower-income backgrounds or those who attended less prestigious schools, based on factors such as educational attainment.
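
To make the loan-services example concrete, the following is a minimal, hypothetical Python sketch. It is not drawn from the white paper: the features (income, a "digital footprint"), the numbers, and the data are all invented for illustration. The sketch shows how a model trained on records captured mostly through digital channels, dominated by urban applicants, can approve equally creditworthy rural applicants at a lower rate.

```python
# Hypothetical illustration only: feature names, numbers, and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, footprint_mean):
    income = rng.normal(50, 15, n)                  # proxy for ability to repay
    footprint = rng.normal(footprint_mean, 1.0, n)  # how much digital data exists
    # True repayment behaviour depends only on income, identically for both groups.
    repaid = (income + rng.normal(0, 10, n) > 50).astype(int)
    return np.column_stack([income, footprint]), repaid

# Historical training data: urban-heavy, and rural repayments made offline are
# often not recorded, so many of them show up as defaults in the data.
X_urban, y_urban = make_group(9000, footprint_mean=5.0)
X_rural, y_rural = make_group(1000, footprint_mean=1.0)
y_rural_observed = y_rural * (rng.random(y_rural.size) < 0.5).astype(int)

X_train = np.vstack([X_urban, X_rural])
y_train = np.concatenate([y_urban, y_rural_observed])
model = LogisticRegression().fit(X_train, y_train)

# Score two equally creditworthy test groups that differ only in digital footprint.
for name, mean in [("urban", 5.0), ("rural", 1.0)]:
    X_test, y_true = make_group(2000, footprint_mean=mean)
    approved = model.predict(X_test)
    print(f"{name}: approval rate {approved.mean():.2f}, "
          f"agreement with true repayment {(approved == y_true).mean():.2f}")
```

In this toy setting the disparity comes entirely from which repayments were recorded, not from any real difference in creditworthiness: the model learns to treat a small digital footprint as a risk signal, so rural applicants are approved less often even though their true repayment behaviour is identical.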

“We encourage companies working with machine learning to prioritize non-discrimination along with accuracy and efficiency to comply with human rights standards and uphold the social contract,” said Erica Kochi, Co-Chair of the Global Future Council on Human Rights and Co-Founder of UNICEF Innovation.

“One of the most important challenges we face today is ensuring we design positive values into systems that use machine learning. This means deeply understanding how and where we bias systems and creating innovative ways to protect people from being discriminated against,” said Nicholas Davis, Head of Society and Innovation, Member of the Executive Committee, World Economic Forum.

The white paper is part of a broader workstream within the Global Future Council examining the social impact of machine learning, including the way it amplifies longstanding problems of unequal access.