Pymetrics Open-Sources Fairness-Aware Machine Learning Algorithms

Pymetrics, an AI start-up that specializes in providing recruitment services for organizations, has recently open-sourced its bias detection algorithms on GitHub. The tool, known as Audit AI, is used to detect and mitigate discriminatory patterns in training data sets that influence the probability of a particular population being selected by a machine learning algorithm.
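
To make the idea of such a check concrete, the following is a minimal sketch of the kind of test a bias-audit tool performs, using the widely cited "four-fifths rule" heuristic, under which no group's selection rate should fall below 80% of the highest group's rate. The function name and data below are illustrative only and are not Audit AI's actual API.

# Illustrative sketch (not Audit AI's actual API): compare per-group selection
# rates against the "four-fifths rule" adverse-impact heuristic.
import pandas as pd

def four_fifths_check(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Return each group's selection rate and its ratio to the top group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    ratios = rates / rates.max()
    report = pd.DataFrame({"selection_rate": rates, "ratio_to_top": ratios})
    report["passes_four_fifths"] = report["ratio_to_top"] >= 0.8
    return report

# Hypothetical data: model decisions (1 = selected) for two demographic groups.
decisions = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})
print(four_fifths_check(decisions, "group", "selected"))

In this made-up example, group B's selection rate (40%) is only two-thirds of group A's (60%), so the check would flag the model for closer review.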

As more and more workloads are being automated by processes leveraging machine learning, it is important to ensure these algorithms don't develop biases that create unfair advantages. Pymetrics seeks to ensure that machine learning algorithms remain fair.

The overall goal of this research is to come up with a reasonable way to think about how to make machine learning algorithms more fair. While identifying potential bias in training datasets, and by consequence in the machine learning algorithms trained on them, is not sufficient to solve the problem of discrimination, in a world where more and more decisions are being automated by Artificial Intelligence, our ability to understand and identify the degree to which an algorithm is fair or biased is a step in the right direction.

Organizations are increasing their focus on diversity, but there have been concerns that algorithms which learn biases may actually undermine these diversity efforts. Recently, France declared its intention to become an AI powerhouse able to compete on the global stage. In March of this year, President Emmanuel Macron outlined his national strategy for AI, which includes spending $1.85 billion USD over the next five years to develop an ecosystem that can compete with Silicon Valley and China. Making this level of investment in AI hasn't come without careful consideration from the president:

I think that AI could totally jeopardize democracy. For instance, we are using artificial intelligence to organize the access to universities for our students. That puts a lot of responsibility on an algorithm. A lot of people see it as a black box, they don't understand how the student selection process happens. But the day they start to understand that this relies on an algorithm, this algorithm has a specific responsibility. If you want, precisely, to structure this debate, you have to create the conditions of fairness of the algorithm and of its full transparency. I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm.

For France, having fair algorithms means ensuring there is no bias in terms of gender, age, or other individual characteristics. Without building fairness and transparency into the algorithms, President Macron expects that "people will eventually reject this innovation".

Pymetrics' clients include consumer-goods companies, technology firms, and research organizations. As part of Pymetrics' offering, they provide a set of games for candidates to play. These games are usually played early in the recruitment process and ignore characteristics like race, gender, and level of education. Instead, candidates are evaluated on 80 traits, including memory and attitude towards risk. Pymetrics is then able to measure candidates against existing top performers to predict future success in that role.
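
Pymetrics has not published its models, but the general pattern it describes, scoring a candidate's trait profile against a model fit on existing top performers, can be sketched with a simple scikit-learn classifier. Everything below (the synthetic trait data, the labels, and the choice of logistic regression) is hypothetical and is not Pymetrics' actual method.

# Hypothetical sketch of trait-based scoring (not Pymetrics' actual model):
# fit a classifier on 80-dimensional trait vectors labeled by whether an
# employee is a current top performer, then score a new candidate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 employees x 80 traits (e.g., memory, risk attitude); values are made up.
traits = rng.normal(size=(200, 80))
is_top_performer = (traits[:, 0] + 0.5 * traits[:, 1] + rng.normal(size=200)) > 1

model = LogisticRegression(max_iter=1000).fit(traits, is_top_performer)

candidate = rng.normal(size=(1, 80))          # one candidate's 80 trait scores
print(model.predict_proba(candidate)[0, 1])   # predicted probability of success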

Pymetrics chose to open source Audit AI for social responsibility reasons. Priyanka Jain, a product lead at Pymetrics, explains:

As creators of technology, we feel really strongly it's our responsibility to build AI that is creating a future that we all want to live in, and if we have a way to help other creators of technology continue to build that future as well, it's our responsibility to share it.

Within the GitHub repository, developers will find a Python library, built on functionality provided by the pandas and scikit-learn frameworks, that implements these fairness-aware machine learning techniques, along with two example datasets covering German credit and student performance scenarios.
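
The library's tests are built around standard statistical comparisons of outcome rates across groups. As an illustration of the underlying idea only, not of the library's own functions, the sketch below runs a chi-squared test of independence between group membership and selection outcome on hypothetical data, the kind of contingency-table check a pandas/scikit-learn-based audit would rely on.

# Illustrative sketch of the underlying statistical idea (not Audit AI's own API):
# test whether selection outcome is independent of group membership using a
# chi-squared test on a contingency table.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical outcomes: 1 = approved/selected, 0 = rejected, per group.
outcomes = pd.DataFrame({
    "group": ["A"] * 120 + ["B"] * 120,
    "selected": [1] * 80 + [0] * 40 + [1] * 55 + [0] * 65,
})

table = pd.crosstab(outcomes["group"], outcomes["selected"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
# A small p-value suggests selection rates differ between groups more than
# chance alone would explain, flagging the model for closer review.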
