It’s International Fact-Checking Day. Refresh your AI identification skills

In this image from video circulating on social media, protesters dance and cheer around a bonfire as they take to the streets of Tehran, Iran, on Jan. 9, 2026. (UGC via AP, File)

2026-04-02T04:14:37Z

AI-generated content is everywhere these days, making it increasingly difficult to separate fact from fiction, particularly when it comes to breaking news.

Look no further than the Iran war.

Since the U.S. and Israel attacked Iran on Feb. 28, researchers have identified an unprecedented number of false and misleading images that were generated using artificial intelligence and have reached countless people around the world.

Among them: fake footage of bombings that never happened, images of soldiers who were supposedly captured, and propaganda videos created by Iran that depict President Donald Trump and others as blocky, Lego-like miniatures.

Today, the 10th annual International Fact-Checking Day, offers a good opportunity to look at these evolving challenges.

Misinformation created with AI is being shared with unprecedented speed from an endless number of sources.

From the outset of the Iran war, accounts from all sides of the conflict promoted such content.

The Institute for Strategic Dialogue, which tracks disinformation and online extremism, has been examining social media posts around the Iran war.

Among its findings was a group of roughly two dozen X accounts that regularly post AI-generated content and have collectively gained more than one billion views since the conflict began.

Many of those accounts carried blue check verification.

Here are some tips for distinguishing AI-generated content from reality in an online world where that continues to get harder.

Look for visual cues

When AI-generated images first began spreading widely online, there were often obvious tells that could identify them as fabricated.

Perhaps a person had too few — or too many — fingers or their voice was out of sync with their mouth.

Text may have been nonsensical.

Objects were frequently distorted or missing key components.

As the technology continues to evolve, these clues aren’t as common as they once were, but it’s still worth looking for them.

Watch for inconsistencies such as a car that is in a video one moment and gone the next or actions that aren’t possible according to the laws of physics.

Some images may also be overly polished or have an unnatural sheen.

Seek out a source

AI-generated images get shared over and over again.

One way