AWS Researchers Propose a Method That Predicts Bias in Face Recognition Models Using Unlabeled Data

At this year's European Conference on Computer Vision (ECCV), AWS researchers presented a novel method for evaluating bias in face recognition algorithms that does not require data with identity annotations. Their tests show that, although the method can only estimate a model's performance on data from various demographic groups, those estimates are reliable enough to identify performance discrepancies indicative of bias.

In recent years, algorithmic bias has become a key research topic in artificial intelligence. In facial recognition software, bias manifests as a discrepancy in the software's performance when it is applied to people of different racial or ethnic origins.

The most straightforward way to determine whether a face recognition algorithm is biased is to evaluate it on a sizable sample of human faces from different demographic groups and then examine the results. However, this requires collecting identity-annotated data, which is prohibitively expensive, particularly at the large scale needed to evaluate a face recognition model.
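To make that labeled-evaluation baseline concrete, here is a minimal sketch (not from the paper) that computes face verification accuracy separately for each demographic group from identity-annotated pairs and reports the largest gap as a simple bias measure. The pair format and the similarity threshold are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two face embeddings.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def group_accuracies(pairs, threshold=0.5):
    """Per-group verification accuracy from identity-annotated pairs.

    `pairs` is an iterable of (emb_a, emb_b, same_identity, group) tuples,
    where `same_identity` is a bool and `group` a demographic label.
    """
    correct, total = {}, {}
    for emb_a, emb_b, same_identity, group in pairs:
        predicted_same = cosine_similarity(emb_a, emb_b) >= threshold
        correct[group] = correct.get(group, 0) + int(predicted_same == same_identity)
        total[group] = total.get(group, 0) + 1
    return {g: correct[g] / total[g] for g in total}

def performance_gap(accuracies):
    # A simple bias measure: the spread between the best- and
    # worst-served demographic groups.
    return max(accuracies.values()) - min(accuracies.values())
```

The expense the article describes lies entirely in producing the `same_identity` labels for enough pairs in every group.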

A recent study by Amazon presents an approach for assessing bias in facial recognition systems without identity annotations. The results demonstrate that, although it merely estimates a model's performance on data from different demographic groups, the proposed method efficiently detects differences in performance suggestive of bias. This finding points toward an evaluation paradigm that could make it considerably easier for developers of facial recognition software to test their algorithms for bias.
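The article does not describe the mechanics of the method, so the sketch below shows only one plausible way to estimate per-group performance from unlabeled data: cluster the embeddings into pseudo-identities and score verification decisions against them. This is a hypothetical illustration of label-free estimation in general, not the authors' published algorithm, and the thresholds are arbitrary.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def pseudo_label_accuracy(embeddings, groups, distance_threshold=0.6):
    """Estimate per-group verification accuracy WITHOUT identity labels.

    Hypothetical approach (not the paper's algorithm): cluster embeddings
    into pseudo-identities, then check how often same-cluster pairs score
    above, and cross-cluster pairs below, a verification threshold.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    # Normalize so cosine similarity reduces to a plain dot product.
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    clusters = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold
    ).fit_predict(embeddings)

    estimates = {}
    for group in set(groups):
        idx = [i for i, g in enumerate(groups) if g == group]
        correct = total = 0
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                a, b = idx[i], idx[j]
                same_pseudo = clusters[a] == clusters[b]
                predicted_same = embeddings[a] @ embeddings[b] >= 0.5
                correct += int(predicted_same == same_pseudo)
                total += 1
        estimates[group] = correct / max(total, 1)
    return estimates
```

Whatever the actual estimator, the key property the study claims is that such per-group estimates, even if imperfect in absolute terms, preserve the relative gaps between groups well enough to flag bias.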

AWS has made code available to help combat bias in machine learning models. To validate the approach, the researchers trained face recognition models on data sets from which specific demographic information had been removed in order to induce bias. In every case, their method accurately revealed differential performance across the withheld demographic groups.

They compared their approach with Bayesian calibration, an established method for predicting the outcomes of ML models. Their findings show that the proposed method consistently outperforms Bayesian calibration, sometimes by a significant margin, even though it uses only unannotated data.
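To illustrate the baseline's idea, the sketch below uses Platt-style logistic calibration as a simple stand-in for the Bayesian calibration mentioned above: fit a map from raw similarity scores to the probability of a correct decision on a small labeled set, then predict a group's accuracy on unlabeled data as the mean calibrated probability. The data variables are placeholders, and this is not the benchmark the researchers ran.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_calibrator(scores, was_correct):
    # Fit a map from raw similarity scores to P(decision is correct)
    # on a small labeled calibration set (Platt scaling as a stand-in
    # for Bayesian calibration).
    X = np.asarray(scores, dtype=float).reshape(-1, 1)
    y = np.asarray(was_correct, dtype=int)
    return LogisticRegression().fit(X, y)

def predict_group_accuracy(calibrator, unlabeled_scores):
    # Expected accuracy on an unlabeled group: the mean calibrated
    # probability that each verification decision is correct.
    X = np.asarray(unlabeled_scores, dtype=float).reshape(-1, 1)
    return calibrator.predict_proba(X)[:, 1].mean()
```

A calibration baseline of this kind still needs some labeled data to fit the calibrator, which is why a method that works from unannotated data alone, while also predicting performance more accurately, is a notable result.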

The methodology developed by the AWS researchers is intended to help AI practitioners working on face recognition and similar biometric tasks ensure the fairness of their models.
