Grok’s ‘White Genocide’ Responses Show How Generative AI Can Be Weaponized

Originally published at: Grok’s ‘White Genocide’ Responses Show How Generative AI Can Be Weaponized - TPM – Talking Points Memo

This article is part of TPM Cafe, TPM’s home for opinion and news analysis. It was originally published at The Conversation. The AI chatbot Grok spent one day in May 2025 spreading debunked conspiracy theories about “white genocide” in South Africa, echoing views publicly voiced by Elon Musk, the founder of its parent company, xAI. While there…

1 Like

Calling a stochastic parrot “intelligent” is marketing nonsense.

Large language models are good at exactly one thing: mimicry.

8 Likes

I’ve said it many times: AI is not ready for prime time. Irrespective of that, the hucksters are trying to convince us that it will solve all of the world’s problems. Not yet…

6 Likes

On the other hand, it’s an indication of the so-called “intelligence” of the marketeers. And, for that matter, the “journalists” who breathlessly report on something they have not bothered to gain any understanding of.

So we are all left stupider, which then serves as input to the next generation of “AI” models. A kind of “dope spiral.”

5 Likes

Exactly. Literally all the mainstream media spews nonsense that is easily checkable, but they have agendas or are just too lazy. So LLMs are no worse.

3 Likes

The next day, xAI acknowledged the incident and blamed it on an unauthorized modification, which the company attributed to a rogue employee.

Does the employee’s name happen to rhyme with “melon husk”?

7 Likes

“Weaponized” AI will always be one or two steps ahead of “white hat” AI.

3 Likes

AI is nothing but a plagiarism machine that simply repeats whatever has been fed into it, like a child who’s been brainwashed by religion.

4 Likes

AI is the tool that will be used to brainwash future generations into submission.

3 Likes

In case you haven’t noticed the Big Beautiful Bill has a provision that says Congress won’t regulate AI for 10 years. Let that sink in.

4 Likes

That’s a huge net. I don’t think any so-called journalists have a fucking clue what they’re writing about. They’re stenographers who work for the highest bidder.

1 Like

Amusingly enough, LLMs do mimicry well because humans mimic each other. If humans didn’t mimic each other’s speech and writing patterns, then LLMs would not work. Strange, isn’t it?
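The mimicry point can be made concrete with a toy sketch (purely illustrative; no production LLM works this simply): a bigram model that can only ever replay word transitions it has already seen in its training text.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record every word-to-next-word transition seen in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def mimic(model, start, length=8, seed=0):
    """Generate text by replaying observed transitions -- pure mimicry."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nexts = model.get(out[-1])
        if not nexts:
            break
        out.append(rng.choice(nexts))
    return " ".join(out)

corpus = "the model repeats what the model has seen and the model has seen us"
model = train_bigrams(corpus)
print(mimic(model, "the"))
```

Every pair of adjacent words in the output already existed somewhere in the training text; the "model" can sound fluent only because the source text is full of repeated patterns.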

2 Likes

I’m no AI pusher. In my opinion, the current corporate AI “leadership” is trying to con the rest of us into buying what they can’t yet deliver. However, I worked in the software industry for a number of decades. AI is already prime time; it’s been embedded in just about every “productivity” app for at least the past decade. You just don’t recognize it as such, since it’s a bit more pedestrian than what we’re seeing now. In five years or so you won’t have a choice about using the LLM embedded in your app; it will just be part of how it works.

2 Likes

Yup. Right now it’s annoying as hell and can be ignored in most apps, but just about everything my office is buying today (I work in D1 collegiate athletics) has AI embedded.

1 Like

Not just how it “can be weaponized” but how it will be, indeed, is already being, weaponized.

When you control what the AI learns from, indeed, what you teach it, you control what it learns. (Kinda like homeschooling, but that’s another thread.) Witness ElMu’s upset that his AI is also telling the truth about where domestic terrorism is coming from, which drives him nuts, so he vows to “fix it”, i.e., teach it “alternative facts”. It’s all very 1984.
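The "control the data, control the answer" point can be sketched with a toy example (purely illustrative; real LLM training is vastly more complex): a "model" that simply memorizes the majority answer for each question in whatever data it is fed. Skew the training data and the model's "truth" flips.

```python
from collections import Counter

def train(examples):
    """Toy 'model': for each question, memorize the majority answer in the training data."""
    by_question = {}
    for question, answer in examples:
        by_question.setdefault(question, Counter())[answer] += 1
    return {q: counts.most_common(1)[0][0] for q, counts in by_question.items()}

# Two curated training sets for the same (hypothetical) question.
balanced = [("is_claim_true", "no")] * 9 + [("is_claim_true", "yes")] * 1
skewed   = [("is_claim_true", "no")] * 1 + [("is_claim_true", "yes")] * 9

print(train(balanced)["is_claim_true"])  # majority of the evidence says "no"
print(train(skewed)["is_claim_true"])    # curate the data and the answer flips to "yes"
```

Nothing about the model changed between the two runs; only the curated data did, which is exactly the leverage the comment above describes.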

2 Likes

As noted, a lot depends on who it is that decides what to teach the AI: