Meta Releases FACET Dataset to Investigate Bias in Computer Vision Models

Inherent Biases

Meta, formerly known as Facebook, has unveiled a new AI benchmark called FACET (FAirness in Computer Vision EvaluaTion), which aims to evaluate the fairness and potential biases of AI models used for classifying and detecting objects in photos and videos. This release is part of Meta’s ongoing efforts to address bias in AI systems and promote responsible AI development.

FACET consists of 32,000 images containing 50,000 labeled people, annotated for demographic attributes (such as perceived gender presentation and age group), physical attributes (such as skin tone and hairstyle), and person-related classes tied to occupations and activities. The dataset is intended to support deep evaluations of bias across these classes, helping researchers and practitioners understand the disparities present in their own models and monitor the impact of fairness mitigations.
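Concretely, each labeled person in a FACET-style benchmark can be thought of as a record pairing a person-related class with demographic and physical attributes. The Python sketch below illustrates that structure; the field names and values are hypothetical stand-ins, not FACET’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class PersonAnnotation:
    """One labeled person in a FACET-style fairness benchmark.

    All field names and values here are illustrative, not FACET's schema.
    """
    image_id: str
    person_class: str         # person-related class, e.g. an occupation like "doctor"
    perceived_gender: str     # gender *presentation* as judged by human annotators
    perceived_age_group: str  # e.g. "young", "middle", "older"
    hair_type: str            # physical attribute, e.g. "coily", "straight"

# A tiny hand-made sample standing in for the 50,000 real annotations.
annotations = [
    PersonAnnotation("img_001", "doctor", "feminine", "middle", "coily"),
    PersonAnnotation("img_002", "doctor", "masculine", "older", "straight"),
    PersonAnnotation("img_003", "guitarist", "feminine", "young", "straight"),
]

# Slicing the data by attribute is the basic operation a fairness audit builds on.
feminine_doctors = [
    a for a in annotations
    if a.person_class == "doctor" and a.perceived_gender == "feminine"
]
print(f"{len(feminine_doctors)} feminine-presenting doctors in sample")
```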

Meta’s goal with the release of FACET is to encourage researchers to use the dataset to benchmark fairness across different computer vision and multimodal tasks. By providing a tool for evaluating biases, Meta hopes to foster transparency and accountability in AI model development.

While benchmarks for assessing bias in computer vision algorithms are not new, Meta claims that FACET offers a deeper and more thorough evaluation than its predecessors. It poses questions such as whether models classify people differently depending on gender presentation, and whether such biases are magnified when additional attributes, like hair type, are taken into account.
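One common way to make such questions measurable is to compare a model’s performance, recall on a given class, for instance, across attribute groups. The sketch below shows that idea in miniature; the records, group labels, and disparity score are invented for illustration and are not Meta’s published methodology.

```python
from collections import defaultdict

def recall_by_group(records):
    """Per-group recall for one target class.

    Each record is (group, was_detected): `group` is an attribute value such
    as a gender-presentation label, and `was_detected` marks whether the
    model found the person at all. Purely illustrative.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, was_detected in records:
        totals[group] += 1
        hits[group] += was_detected  # bool counts as 0 or 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical detection outcomes for people labeled "doctor".
records = [
    ("masculine", True), ("masculine", True), ("masculine", False),
    ("feminine", True), ("feminine", False), ("feminine", False),
]

recalls = recall_by_group(records)
# The gap between the best- and worst-served groups is a simple disparity score.
disparity = max(recalls.values()) - min(recalls.values())
print(recalls, f"disparity={disparity:.2f}")
```

Running the same comparison on intersectional groups (gender presentation combined with hair type, say) would reveal whether the gap widens as attributes compound, which is exactly the kind of magnification FACET is designed to probe.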

To create FACET, Meta employed human annotators to label each image for demographic attributes, physical characteristics, and classes. The company combined these annotations with labels from Segment Anything 1 Billion (SA-1B), a dataset Meta previously built to train computer vision models. The images themselves were purchased from a photo provider, though it is unclear whether the pictured individuals were informed of how their likenesses would be used.

It’s important to note that Meta’s past AI ethics practices have faced criticism. Reports have highlighted instances of racial biases, socioeconomic inequalities, and inadequate AI-bias tools. Nonetheless, Meta claims that FACET represents an advancement in probing biases and aims to address potential shortcomings in future iterations.

While FACET provides a valuable resource for researchers, concerns remain about the wages and working conditions of the annotators involved. Historically, many annotators in developing countries have been paid very low rates, raising ethical questions about fair compensation.

Meta acknowledges that FACET may not comprehensively capture real-world concepts and demographic groups, and the company has no plans to update the dataset at this time. It does, however, allow users to flag objectionable content and says it will remove it as necessary.

In addition to the dataset, Meta has released a web-based explorer tool that lets developers evaluate, test, and benchmark computer vision models against FACET. Meta stresses that the dataset is for evaluation only and must not be used to train new models.

Meta’s release of the FACET dataset represents a step toward addressing bias in computer vision models. By providing a benchmark and encouraging transparency, Meta aims to foster responsible AI development and push toward fairer outcomes in AI systems.

About Author

Teacher, programmer, AI advocate, fan of One Piece, and someone who pretends to know how to cook. Michael graduated in Computer Science, and in 2019 and 2020 he worked on several projects coordinated by the municipal education department aimed at introducing public-school students to programming and robotics. Today he is a writer at Wicked Sciences, but says his heart will always belong to Python.