November 22, 2024

Millions of Americans are falling for AI-generated content on Facebook

As the 2024 U.S. presidential election draws near, social media is more saturated with disinformation than ever before. Traditional disinformation tactics are still at play, but now we also face AI-generated disinformation — an issue that remains largely unchecked.

A recent report by the Center for Countering Digital Hate (CCDH) highlights the impact of AI-generated images. These often depict fictitious veterans, police officers, and everyday citizens, gaining millions of interactions and swaying public opinion. These images, meant to elicit emotional responses, aren’t labeled as AI-generated on Facebook, despite the platform’s policy on transparency.

Capture from the report showing AI-crafted political propaganda.

The CCDH, an NGO that aims to stop the spread of online hate speech and disinformation, analyzed around 170 AI-generated posts spread between July and October 2024. These images were shared more than 476,000 times and gathered over 2.4 million interactions. They were not labeled as AI-crafted, despite Facebook’s own policies.

The images tend to follow a similar formula: they incorporate powerful symbols, such as American flags or soldiers, and latch onto charged issues such as veterans’ rights or immigration.

A prominent example highlighted in the report depicts a fabricated veteran holding a sign reading: “They’ll hate me for this, but learning English should be a requirement for citizenship.” This post alone accrued 168,000 interactions, with most commenters expressing their agreement. Other images show fake veterans advocating against student loan forgiveness or pushing for a veterans’ month to rival Pride Month, all designed to resonate with key (typically conservative) voter demographics.


Despite subtle signs of AI generation — distorted hands, misaligned or nonsensical text on uniforms, and vague backgrounds — most users seem to be unaware they are interacting with artificial content. And so, they are unknowingly contributing to the spread of digital disinformation.

AI is already fooling people

Meta, Facebook’s parent company, introduced AI labeling policies early in 2024, promising transparency for AI-generated images. The CCDH found no sign of these labels on any of the posts it analyzed.

It’s unclear whether Facebook was unable or unwilling to tag these images as AI-made. Either way, the posts ended up tricking a lot of users. Users who rely on the platform’s safeguards remain largely in the dark, unable to discern whether an image is a genuine endorsement or a synthetic creation.

AI-generated image of Facebook logos.

Additionally, the report highlights that Facebook’s user-reporting tools provide no clear way to flag suspected AI-generated content. While users can report posts for hate speech or misinformation, there is no specific option for manipulated media. This gap leaves Facebook users without a clear route to alert moderators to AI-generated political content that could skew perceptions during a critical election period.

The CCDH also found that many of these images, though clearly aimed at the U.S. public, came from pages administered abroad. Of the ten most active pages analyzed, six are managed from outside the United States, from countries including Morocco, Pakistan, and Indonesia. These foreign-administered pages collectively attracted over 1.5 million interactions on their AI-generated content, shaping discourse on U.S. policy from afar. Despite their foreign administration, the pages present themselves as authentically American, featuring personas that appear homegrown.

These images are aimed at a particular demographic

The messages often target vulnerable, less tech-savvy voters. These fabricated images exploit emotional appeals and patriotic symbols, which makes them both highly influential and dangerous. Images of fake veterans, for example, aim to evoke respect and admiration, adding weight to the political messages they appear to endorse. For many voters, military service is deeply tied to patriotism, making these endorsements highly persuasive.

The approach also targets frustrated voters.

Example of an AI-made political meme from the report.

The report describes numerous instances where these artificial veterans appear with political statements, such as “Veterans deserve better than being second to student loans” or “Maybe it’s just me, but I believe veterans deserve far better benefits than those offered to undocumented immigrants.” Both sentiments target specific political frustrations among certain voter segments, appealing to those who feel their values are underrepresented or neglected.

These tactics reflect a broader trend in online disinformation, where AI-generated personas cater to niche political identities, crafting messages tailored to resonate with specific groups. It is the classic disinformation playbook: by simulating “average” American views, these posts tap into cultural debates and amplify divisive topics. AI simply adds a new spin to it.

Tech companies should take responsibility

The simplest way to address this would be to make it easier for users to report suspected manipulated media. However, that alone won’t solve the problem: by the time enough reports come in and someone actually checks the image, the damage will already be done.

As AI continues to advance, social media platforms must adapt their policies to ensure the technology is used responsibly. The onus cannot solely be on users to spot manipulation. For Facebook, this means implementing reliable detection and labeling processes that actively inform users when they encounter synthetic content.

Platforms like Facebook wield a great deal of influence over public opinion, and their policies, or the lack thereof, have real-world implications for democratic processes. With the U.S. presidential election approaching, it’s more important than ever for companies to be transparent and to tackle disinformation head-on. Unfortunately, that doesn’t really seem to be happening.

As the line between authentic and artificial content blurs, society needs a clear idea of how to deal with content like this, and of who bears the responsibility. This type of problem will only get worse.

The report can be read in its entirety here.