
The AI slop that friends share on Facebook is annoying. The problem is going to get so much worse

No, Alan Alda did not get picked up by MASH co-star Mike Farrell for a warm-hearted reunion ride on a motorcycle for his 90th birthday. Nor did Alda have a warm-hearted reunion on a beach with fellow MASH alumni.

In reality, Alan Alda marked his 90th birthday on Jan. 28 with an evidently pleasant dinner out with family, blowing out a candle.

I can’t share a news story about this birthday party on Facebook, because I live in Canada and Meta — the owner of Facebook, Instagram, Threads and more — does not permit the sharing of news stories inside the country. (Meta refuses to comply with Bill C-18, the Online News Act, which aims to divert revenues from tech giants that aggregate news toward local journalism.)

Facebook, though, has no problem at all with fake stories about Alan Alda being shared, and shared widely. I’ve seen versions in the last week that just make me groan.

That one about Mike Farrell picking up Alan Alda for a heroic ride came with text that was corny, sure, but designed to pull a nostalgic heartstring.

Ditto for the story about the reunion on the beach. (Another sign of fakery: it was published to Facebook before Alda’s actual birthday!)

As Elaine said on Seinfeld, “fake, fake, fake, fake.”

Facebook is of course not the only place filled with AI slop and faked stories, but it’s worth discussing because a) the platform is still so widely used and b) generative AI is getting better and better.

Just this morning, my wife sent me a charming reel about cats, with wisdom about cat behaviour from what appeared to be a monk named Shen Yu. But the monk’s demeanour, monotone delivery and sharp imagery didn’t quite sit right with me; sure enough, the whole account is AI. The Facebook account was created only in late December and is already filled with the wisdom of a monk who doesn’t exist.

My wife responded to that particular reel because we both love cats and the message rang true.

It spoke to her, in other words.

And that’s what AI-powered accounts are tapping into: content that speaks to people’s feelings, values and thoughts.

A friend of ours has shared inspiring stories of women who overcame adversity.

One friend shared a story of an aging nurse who still battles through fatigue and indifference at her hospital.

I’ve seen several stories about not judging proverbial books by their covers, with several about bikers with tattoos.

There can be truth in these stories. A real reel by Mick Jagger’s grandson, showing his rock-legend granddad grooving outside a bar playing “Moves Like Jagger,” was turned into AI slop with fake images … perhaps because the original was too grainy.

There are countless pages on Facebook churning this stuff out.

And critical thinking is getting eclipsed by emotional responses — well-meaning as they may be. I should also note that generative AI can be harder to discern on the smaller screen of a phone; on a computer screen, the shiny conformity stands out more. But even there, generative AI is getting better and better at looking real.

The scary future

So far, these fakes are merely annoying.

But we all know they are going to get much worse, and the implications for democracy, politics and civility are pretty dire.

We’ve already seen for years how social media can be manipulated to spread disinformation and confusion in an election cycle.

The worries are more pressing now, with U.S. midterms coming soon and a presidential election race only a couple of years away. From the Guardian:

Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high profile group of experts in AI and online misinformation has warned.

The Nobel peace prize-winning free-speech activist Maria Ressa, and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new “disruptive threat” posed by hard-to-detect, malicious “AI swarms” infesting social media and messaging channels.

Our online behaviours are being constantly tracked, so that messages can be better targeted. Advertising is one thing; we need to be better able to counter “content” that is fine-tuned to hit our emotions, and prompt reactions.

I’ve fallen for generative AI images myself, like anyone. I sent our child an amusing image of an ice cream cone where the towering scoops look like Snoopy; Nick wrote back quickly with thanks, and cautioned it was AI. They were right. Even simple joys these days can be soured.

My advice: think a little more critically when you see something in your feed. Look carefully at the images. Look at the page that’s spreading it.

These generative AI content farms are rife, and they’re getting worse. We’re feeding the beast unwittingly all too often.

[This post appeared first today on my Substack.]
