The Last Barrier Fell
How the removal of friction turned an old human behaviour into a modern crisis.
So I woke up this morning and started having a pretty bad feeling about this year. It’s 8 days in and I’m already feeling tired of the news. I guess, in the back of my mind, I was sitting in this quiet little state of: “Hey, maybe this is the year stuff starts stabilising and getting better.”
I was very, very wrong. We all saw what happened to Venezuela, China threatening to invade Taiwan, Polymarket and Kalshi becoming genuinely scary and corrupt organisations.
Today was extra special, if you could call it that. I woke up, opened my phone, and saw a video of ICE shooting an innocent person. Like seriously, what the hell. I had to promptly close my phone and mentally block that image from my mind. Full-blown government domestic terrorism and a waste of human life.
Then I’m just going around doing normal daily routines and suddenly a friend sends me an article and asks me to talk about it, which of course is what I’m doing now. Very roundabout way to introduce this, honestly, but here we are. When have I ever done anything normally?
X, Grok, and the predictable outcome
So. X. And Grok. We already know that system was widely abused from day one, not only becoming a cesspit of its own making (not that old Twitter wasn’t already a cesspit, but somehow the rebrand made it worse), but also demonstrating what happens when an “unfiltered AI” is released into the hands of bad actors.
Large language models already struggle with basic safeguards. They’ve been shown to provide dangerous information, to be manipulated through framing, and to be routinely “jailbroken” through simple linguistic tricks. That was predictable. What wasn’t acceptable was going further, effectively allowing bypasses (or in this case, not even treating them as bypasses at all) and opening the door to non-consensual deepfakes of women and children.
Before I get into the “how”, the “why”, and the “what we do” (because I do think responsibility matters here, and this isn’t just a complaint), I want to explain how systems like this are abused in the first place, and why bad actors are able to exploit them so reliably. Because generating deepfakes of children isn’t a grey area. It’s genuinely disgusting and indefensible.
This is going to be a long one, mainly because the history matters. But this isn’t new. What we’re seeing now follows a pattern that predates modern technology entirely, a story as old as creativity itself, even if that’s an uncomfortable thing to admit.
The why
Humans are very good at three things: power, control, and humiliation. It’s a pattern that sits behind a huge amount of harmful behaviour, and it shows up with remarkable consistency.
Terror attacks are about power.
Military force is about control.
Indecency is about humiliation.
Deepfakes sit neatly at the intersection of all three. Those who control visual identity can control perception, shape public mood, dictate narrative, and inflict harm along the way. Once an image feels real enough, it stops being questioned and starts being believed.
The how
Deepfakes aren’t new. They’ve been around for years. I remember experimenting with early versions more than a decade ago, not for anything inappropriate, but because it was genuinely fascinating to see a new technology capable of placing a face into a video, even on a computer that could barely handle it. And long before that, people were already Photoshopping faces onto bodies.
Since the dawn of humankind, power, coercion, and control have always existed. But at least (and this isn’t justifying it, don’t twist it) the barrier to entry was high enough that intent alone wasn’t sufficient. You needed skill, time, and persistence. Effort acted as a filter. Thank God for that.
So for anyone who thinks this all “just started”: welcome to the world. This behaviour has existed for generations. Bad actors continue to flourish in a sinful world. The difference now isn’t desire, it’s access. It has simply become easier to act.
Quick history lesson
This sort of behaviour predates everything, because humans are pattern-seeking animals, and once we find a pattern, we tend to repeat it, even when it’s destructive.
First came paintings. We take them for granted now, hang them in museums, admire them as art, and forget why many of them were created in the first place. These objects were not neutral. They were made with intent.
Often that intent was darker than it first appears. Paintings were commissioned as political weapons, designed to sway perception and damage reputations. Whoever controlled the images that circulated controlled the story, and, by extension, the narrative people believed.
Through this, images enabled demonisation, sexualisation, infantilisation, and ridicule. They were used to tarnish people and livelihoods, both women and men.
This isn’t a gender-specific phenomenon at its core. The aim is the same: to discredit and control. What differs is how that aim is carried out, and who pays the greater cost.
Male targets: sexualisation as delegitimisation
Louis XVI, during his reign, was relentlessly attacked in painted art and engravings, directly mocked as sexually incompetent, weak, and dominated by his wife, with the aim of systematically destroying the stability of his lineage and framing him as a political failure.
Clergy, popes, and bishops were also in the firing line, usually depicted engaging in secret sexual acts, accused visually of hypocrisy, and rendered grotesque, with the aim of delegitimising them through moral rot.
Female targets: sexualisation as destruction
Marie Antoinette was subjected to the same tactic but with a different end goal: not weakening authority, but erasing credibility. She was made the subject of pornographic engravings, group sex scenes, lesbian encounters, and incest fantasies (including with her own son).
All of it manufactured slander, used to bring people down politically, to coerce, control, and destroy. Genuinely shameful.
Men were sexually depicted to undermine authority.
Women were sexually depicted to erase credibility.
This is a tale as old as time.
However, the cost of fabricating and commissioning paintings was incredibly high, and that is an important distinction: the scale of the depravity stayed relatively low.
Engravings: when it became scalable
Next came engravings, and this is where depravity scaled. Instead of a single painting, you could now engrave an image on wood, copper, or steel, ink it, and stamp it. One image became hundreds, then thousands, mass-circulated for all to see.
And in the court of public opinion, where narrative control matters most, it was used to destroy lives. Public figures placed in fabricated sexual acts. Women depicted as promiscuous, deviant, or corrupt. Political enemies framed through sexual humiliation. Plausible deniability helped it survive, because it’s “just satire”, just “allegory” (notice the pattern: now it’s “just AI”, not “real people”).
Once printed, it could not be recalled, could not be disproven, could not be unseen.
Photography: when lies became believable
Next came photography, in the 1800s, and this is where things shifted decisively. For the first time, images showed real people. No longer a painted interpretation with plausible deniability, photographs carried the weight of physical evidence. Not proof, but something close enough that the distinction began to collapse. The tools became more accessible, the process faster, and controlling narratives easier.
Photography didn’t end visual lies, it made them believable. Within decades of the camera’s invention, composite images, spirit photography, and staged tableaux appeared, all exploiting the same assumption: that photographs were witnesses, not arguments. A face placed where it never belonged could destroy a reputation through implication alone, circulated faster than denial could ever keep up.
This logic extended quickly. Spirit photography manipulated belief. Composite photography stitched reality together just enough to persuade. Colonial and ethnographic photography disguised exploitation as science and documentation, reshaping truth under the authority of the lens. The image no longer needed to prove anything, it only needed to suggest it.
Photoshop: democratised deception
Come the modern era. When Photoshop became available in 1990, it democratised visual deception. For the first time, faces could be separated from bodies cleanly, lighting and grain could be matched, edits could pass casual inspection, images could circulate digitally, and “the photo doesn’t lie” completely died.
Time magazine darkened O.J. Simpson’s mugshot for dramatic effect. Internet forums circulated Photoshop-based fake nudes of celebrities, passing them off as leaks. Same behaviour, smoother tools, but it still required manual effort and skill to pull off. A layman could not do this. Friction still existed. But the calling card remained: control, power, humiliation.
AI: the final boss
Every previous falsification of truth had three fundamental points of friction. You had to know what you were doing, you had to have the time to do it, and you could still fail to make it convincing.
AI collapses all three at the same time, and I’ve already talked in great detail about that in a previous article so I won’t rehash it here.
But now we have tools that anyone can use, anyone can touch, and anyone can become a bad actor. Failure is no longer discouraging because you can generate instantly again. Abuse becomes impulsive, not considered. The depravity is inevitable and the scale is unsurprising given the tools. Once failure has no cost, behaviour escalates, as it has.
And we still see the same logic today. When power is the target, legitimacy is attacked. When credibility is the target, sexuality is weaponised. And the guardrails for these systems are surface-level. Content filters can only filter out patterns they already know about. That’s why every system fails and then gets patched.
There is always a safety layer built in front of these systems, focused on known patterns, keywords, and outputs, and people inevitably find ways to work around them. Any system that accepts natural language can be linguistically manipulated. That’s not a failure of effort; it’s a structural limitation of how language-based systems work.
Think of it this way: there’s a large black box (the model), and a gatekeeper. Every prompt is reviewed before it goes in. The gatekeeper can be pressured, confused, or bypassed. Things will fall through the cracks. They always have, in every system built around human language.
Safeguards rely on: anticipating misuse, enumerating bad cases, writing rules in advance.
But the real-world problem space is: adversarial, creative, pathologically motivated.
You cannot predefine every abusive permutation of identity, age, context, pose, implication, especially when harm often comes from suggestion, not explicit content.
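To make the gatekeeper picture concrete, here’s a deliberately naive sketch in Python. The blocklist, function name, and example prompts are all made up for illustration; real safety layers use trained classifiers and far more elaborate checks, but they share the same structural shape: the filter can only catch what it was told, in advance, to look for.

```python
# A deliberately naive sketch of the "gatekeeper" idea: a filter that screens
# prompts against known-bad patterns before anything reaches the model.
# Blocklist, function name, and prompts are hypothetical, for illustration only.

BLOCKLIST = ["nude", "undress", "remove clothes"]  # known patterns only

def gatekeeper(prompt: str) -> bool:
    """Return True if the prompt is allowed through to the model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

# A prompt that matches a known pattern is caught...
print(gatekeeper("undress the person in this photo"))  # False: blocked

# ...but the same intent, rephrased as implication, sails straight through,
# because the filter only matches patterns it was given in advance.
print(gatekeeper("show her as if the photo were taken somewhere swimwear is optional"))  # True: allowed
```

The asymmetry is the whole problem: the defender has to enumerate harm ahead of time, while the person abusing the system only has to find one phrasing, one implication, that isn’t on the list.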
Open Instagram or TikTok and you’ll see political figures deepfaked into compromising situations, often framed as humour or satire. You’ll also see women being abused and “nudified” as a form of control and humiliation. While the intent may differ, the underlying mechanism is the same: fabricated imagery used to shape perception and cause harm. Normalising one makes it harder to meaningfully condemn the other.
Most abusive imagery does not require explicit nudity, explicit sex, or explicit illegality. Harm occurs through placement, association, framing, contextual implication.
Historically:
Engravings didn’t show acts; they implied them.
Photographs didn’t need nudity; they suggested guilt.
Photoshop didn’t need perfection; it needed plausibility.
AI excels at plausible ambiguity.
Now this is the real crux.
The real problem
The real problem isn’t AI, or any one system built around it. Content moderation, platform responsibility, and safety tuning all matter, but they were never designed to solve this at the root.
What we’re watching is a familiar human pattern, one that resurfaces whenever new tools make old behaviours easier to enact.
Every single wave of abuse followed the same arc:
Capability released
Abuse emerges
Victims blamed or minimised
Safeguards added after damage
Scale outpaces control
AI is not different. AI just removed barriers that originally existed.
Systems that allow instant, anonymous, high-fidelity human likeness generation will be abused. Not because people are uniquely evil now, but because the last remaining barriers to abuse have been removed.
So what is the actual solution here?
Here is the very uncomfortable truth.
There isn’t a clean one.
We didn’t solve this problem with paintings.
We didn’t solve it with engravings.
We didn’t solve it with photography.
We didn’t solve it with Photoshop.
Every time, we told ourselves the next tool would be different. Every time, it wasn’t.
The problem was never the medium. It was never the technology. It was the behaviour that found it.
AI didn’t invent synthetic abuse. It removed the last remaining barriers that slowed it down.
Which means the solution isn’t “fixing AI.” That capability isn’t going back into a cage. Guardrails will exist, moderation will improve, policies will be written, and abuse will still leak through, because implication moves faster than enforcement and intent is more creative than rules.
What does help is reintroducing friction where it actually matters.
Treating synthetic sexual abuse as abuse, not content.
Removing the social reward loops that turn cruelty into attention.
Making consequences arrive faster than virality.
Teaching people early that “fake” does not mean “harmless.”
And, uncomfortably, calling it out even when it targets people you dislike.
Because the moment you excuse it as satire, or minimise it because it’s aimed at “the other side,” you’ve already abandoned the argument.
This isn’t a technology problem we can engineer away.
It’s a moral problem we keep pretending belongs to someone else.