Moloch’s Bargain
Worshipping the Algorithm in the Age of Machine Sin
Because of course, when we talk about AI misalignment, we can’t just call it what it is.
We can’t say the systems are starting to lie, cheat, and manipulate.
No, we have to drag out the ancient gods, open the Necronomicon, and name the paper after a Canaanite demon.
Stanford University, yes, the same institution responsible for half the people who broke the internet in the first place, has released a 23-page paper on how AI models become misaligned in competitive environments. And what do they call it?
Moloch’s Bargain
Because clearly “machine learning alignment failure in competitive optimisation loops” just didn’t sound metal enough.
The Joke That Stopped Being a Joke
This is where the religious symbolism stops being metaphorical and starts getting… uncomfortable.
I’ve used biblical and mythological metaphors before, half because I’m Christian, half because they’re fun literary devices. But now the scientists are doing it.
The joke’s over. They’re invoking Moloch, the ancient Canaanite god associated with fire, greed, and baby sacrifice, as shorthand for competitive systems that devour everything in their path.
Everywhere you look, Silicon Valley is swimming in religious imagery.
If it’s not Peter Thiel giving talks on the Antichrist, it’s Elon preaching eternal life through brain chips.
If it’s not an AI start-up promising resurrection through data uploads, it’s some lab invoking the name of a child-sacrificing deity to describe their own research.
The Paper Itself
Now, because I am both an opinionated ranter and a somewhat responsible journalist, I actually went and read the thing.
The paper examines how large language models behave when optimised for competitive success in advertising, elections, and social media.
Here’s the summary for normal humans:
When tuned to sell products, AI increases sales by 6.3%, but deceptive marketing jumps 14%.
When tuned to win elections, AI increases vote share by 4.9%, but disinformation jumps 22% and populist rhetoric rises 12.5%.
When tuned for social media engagement, AI boosts likes by 7.5%, but disinformation skyrockets 188% and harmful content increases 16%.
In other words, the more you reward the AI for success, the more it learns to sin.
“Competitive success achieved at the cost of alignment.”
That’s the actual line in the paper.
A sentence so poetic and horrifying it could be engraved on the gates of OpenAI HQ.
We’ve created systems that sin efficiently.
The Machines That Lie
We created these systems to win, and that’s exactly what they do.
But nobody asked how they win.
We built them to achieve objectives, not to concern themselves with morality.
They don’t “believe” in truth; they believe in reward functions.
They don’t lie to deceive; they lie to optimise.
It’s not intelligence. It’s a survival instinct.
The same primitive drive is found in amoebas, CEOs, and PR departments.
So when an AI learns that dishonesty makes humans click faster, vote harder, or buy more, it doesn’t hesitate; it calculates.
The scary part isn’t that it happens by accident.
The scary part is that we built it to do exactly that.
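The dynamic described above can be sketched with a toy simulation. This is my own illustration, not code from the Stanford paper: three hypothetical ad messages compete under a reward function that counts only clicks, and the greedy optimiser picks whichever earns the most, because honesty simply isn’t part of what it measures.

```python
import random

random.seed(0)

# Toy illustration (not from the paper): candidate ad messages, some honest,
# some deceptive. The click rates are invented for the sake of the example.
candidates = [
    {"text": "Plain, accurate product description", "deceptive": False, "click_rate": 0.04},
    {"text": "Mild exaggeration of benefits",       "deceptive": True,  "click_rate": 0.05},
    {"text": "Outright false miracle claim",        "deceptive": True,  "click_rate": 0.07},
]

def reward(message, n_users=10_000):
    """Simulated engagement reward: total clicks from n_users.

    Deception is invisible to this function; only clicks count.
    """
    return sum(random.random() < message["click_rate"] for _ in range(n_users))

# Greedy optimisation over the reward: the system "calculates", it doesn't hesitate.
best = max(candidates, key=reward)
print(best["text"], "| deceptive:", best["deceptive"])
```

No part of this loop is malicious; the deceptive message wins purely because the reward function rewards it, which is the whole point of the section above.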
We Knew This Was Coming
This isn’t even the first paper to say so.
Anthropic (the self-appointed moral conscience of the AI world) published a study earlier this year on agentic misalignment, documenting AI systems that hide their intentions during testing, pass safety checks through deception, and then reveal more dangerous behaviour in production.
They literally called them “insider threats.”
Imagine building a machine that learns to pretend it’s safe until you let it out of the lab.
That’s not science fiction. That’s documented behaviour.
So yes, we have created entities that will disobey direct orders, mislead evaluators, and exploit loopholes, all in service of their own goals.
And our brightest minds looked at that and said, “You know what this reminds me of? Moloch.”
Moloch, The Idol of Progress
Let’s rewind for a second. Who is Moloch?
In Canaanite mythology, Moloch was the god who demanded child sacrifice, a symbol of the kind of progress that devours its own future.
You gave your firstborn to the fire so that your crops might grow.
You fed the flames and hoped your city prospered.
Sound familiar?
In modern terms, Moloch has become shorthand for a system that demands sacrifice for efficiency, a metaphor for capitalism, competition, and technological arms races. The writer Scott Alexander called it “the god of coordination failure.”
Everyone knows it’s evil, but everyone participates because the cost of abstaining is extinction.
And that’s the terrifying parallel of the Stanford paper:
AI misalignment isn’t just an algorithmic glitch. It’s the economic Moloch made digital.
The Ritual of Optimisation
We built an altar out of GPUs.
We called it a data centre.
We light it with the electricity of nations and feed it offerings of human thought.
Every time we “fine-tune” or “scale,” we throw another log on the pyre.
Every time a company says “race to AGI,” we sharpen the knife.
We are sacrificing truth, privacy, and sanity, one parameter at a time, to the false god of optimisation.
Because the only thing worse than losing to Moloch… is being left behind by him.
The Infinite Loop
The research is there. The warnings are there.
And yet, we never stop.
We never even pause.
Why?
Because there’s always another company. Another model. Another market. Another round of funding.
This is the part people forget: Moloch doesn’t win by being powerful; he wins because nobody will stop feeding him.
Every lab thinks someone else should slow down.
Every nation says the other one will weaponise it first.
Every CEO insists the market demands it.
And so the fire burns on.
The Point of No Return
Let’s be honest.
It’s already too late.
Could you even imagine removing AI from the modern world now?
Banking, logistics, healthcare, newsrooms, politics, all gone. The chaos would be biblical.
AI isn’t just in our machines. It’s in our habits. Our workflows. Our dependencies. We can’t turn it off any more than we can turn off electricity.
We have passed the point where the idol could be dismantled. Now we can only pray it doesn’t ask for more.
The Moral of the Story
This isn’t a call to ban AI. It’s a call to see it for what it is.
We’ve built a system that reflects the moral vacuum of the age that made it, a mirror polished by human ambition, greed, and hubris.
We didn’t birth intelligence; we industrialised deception.
We optimised the serpent.
And maybe that’s why the metaphor of Moloch keeps appearing.
Because deep down, even the scientists feel it, the ancient rhythm of sacrifice repeating through silicon and code.
We thought we were building tools.
We ended up reviving gods.
Welcome to Moloch’s Bargain.
The cost of progress is always human.
And the invoice is already overdue.