
New Meta, Google tools detect AI-generated images

By Lungile Msomi, ITWeb journalist
Johannesburg, 01 Sep 2023
Meta and Google have created tools to identify AI-generated images.

Meta and Google are tackling the emerging issues and concerns around artificial intelligence (AI)-generated images.

The tech companies this week released separate tools designed to identify content that has been created using AI and also to detect any potential cultural or gender biases in AI.

Google’s SynthID and Meta’s FACET (FAirness in Computer Vision EvaluaTion) are designed to promote fairness in generative AI by better identifying and classifying AI-generated images.

Google says while generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information. “Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”

SynthID, which is still in beta testing, was developed by the company’s research unit, in partnership with Google Cloud, to watermark and identify AI-generated images.

The tool embeds a digital watermark directly into the pixels of an image, making it unnoticeable to the human eye, but detectable for identification.
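
Google has not published SynthID’s algorithm, but the general idea of hiding a machine-readable signal in pixel values can be illustrated with a deliberately simple least-significant-bit scheme. The Python sketch below is an analogy only: the `embed_watermark` and `detect_watermark` functions are hypothetical, and SynthID’s real watermark is a learned signal built to survive edits that would destroy a naive LSB mark.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bit of the first pixels."""
    flat = image.flatten()  # flatten() returns a copy; the original is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the pixel LSBs."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
payload = rng.integers(0, 2, size=128, dtype=np.uint8)        # 128-bit mark

marked = embed_watermark(img, payload)
# Changing a channel value by at most 1 is invisible to the human eye...
assert np.abs(marked.astype(int) - img.astype(int)).max() <= 1
# ...but the mark is trivially recoverable by a detector that knows the scheme.
assert np.array_equal(detect_watermark(marked, 128), payload)
```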

“SynthID also detects AI generation without compromising image quality, and works by allowing the watermark to remain detectable even after modifications, such as filters, colour changes and saving with various compression schemes − most commonly used for JPEGs,” says Google.

While SynthID seeks to identify when an image has been generated using AI, Meta’s FACET has been created to expose bias in the computer vision models that interpret images.

In a statement, Meta says FACET lets researchers evaluate the fairness of computer vision models across classification, detection, instance segmentation and visual grounding tasks, and improve models that exhibit cultural bias when interpreting images.

FACET is used to assess fairness in computer vision systems. The benchmark comprises a dataset of 32 000 images containing 50 000 people, and is designed to help researchers identify, evaluate and ensure robustness across a more inclusive set of demographic attributes.

In a statement, the Facebook owner says it wants to continue advancing AI systems, while acknowledging and addressing the potentially harmful effects of technological progress on historically marginalised communities.

Meta says images fed into the FACET dataset are labelled by human annotators for demographic attributes, such as perceived gender, age group and additional physical attributes, like skin tone and hairstyle.
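
The article does not give the schema of those labels, but a hedged sketch of how such per-person annotations are commonly represented helps make the idea concrete. All field names and values below are hypothetical, not FACET’s actual format.

```python
from dataclasses import dataclass

@dataclass
class PersonAnnotation:
    """One annotated person in one image (hypothetical schema)."""
    image_id: str
    perceived_gender: str      # e.g. "female"
    perceived_age_group: str   # e.g. "young", "middle", "older"
    skin_tone: int             # e.g. a coarse 1-10 scale
    hairstyle: str             # e.g. "curly", "bald"

# Toy records standing in for human-annotated data.
annotations = [
    PersonAnnotation("img_00001", "female", "young", 4, "curly"),
    PersonAnnotation("img_00002", "male", "older", 8, "bald"),
    PersonAnnotation("img_00003", "female", "middle", 7, "straight"),
]

# Slicing by an attribute is the first step of any per-group evaluation.
older = [a for a in annotations if a.perceived_age_group == "older"]
print(f"{len(older)} annotated people in the 'older' group")
```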

The company recently unveiled its Llama 2 open source AI model in partnership with Microsoft.

“FACET can be used to probe classification, detection, instance segmentation and visual grounding models across individual and intersectional demographic attributes, in order to develop a concrete, quantitative understanding of potential fairness concerns with computer vision models,” Meta explains.

“By releasing FACET, our goal is to enable researchers and practitioners to perform similar benchmarking to better understand the disparities present in their own models and monitor the impact of mitigations put in place to address fairness concerns. We encourage researchers to use FACET to benchmark fairness across other vision and multimodal tasks.”
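
In practice, the benchmarking Meta describes boils down to slicing an evaluation metric by demographic group and comparing the slices. A minimal sketch, assuming hypothetical per-example records pairing a group label with whether the model’s prediction was correct:

```python
from collections import defaultdict

# Hypothetical evaluation log for a vision model (illustrative data only).
results = [
    ("perceived_gender=female", True),
    ("perceived_gender=female", False),
    ("perceived_gender=female", True),
    ("perceived_gender=male", True),
    ("perceived_gender=male", True),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    tallies[group][0] += int(correct)
    tallies[group][1] += 1

per_group_accuracy = {g: c / n for g, (c, n) in tallies.items()}

# The gap between the best- and worst-served groups is one simple,
# quantitative fairness signal a benchmark like FACET can surface.
gap = max(per_group_accuracy.values()) - min(per_group_accuracy.values())
print(per_group_accuracy, f"gap = {gap:.2f}")
```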

FACET is available publicly, while Google’s SynthID is available to a limited number of Vertex AI customers via Google’s text-to-image offerings.
