Ghosts in the War Machine
The birth of machine warfare, the death of human judgment, and the theology of algorithms that decide who lives and dies.
So. War.
War never changes.
Until it does.
Lately, my feed is a parade of “advancements.” If it’s not Palantir and its predictive/decision software, it’s Anduril showing off automated drone stacks across air, sea, and ground, or Ghost Robotics pairing up with SWORD International to mount rifles on robot dogs. Real classy combo. And it’s got me asking a simple, uncomfortable question:
What happens when we swap human conscience for machine objectives?
I’m not a war economist. I’m a person who thinks war (and politics) is kind of dumb. But I do worry about what gets lost when you remove the human behind the wheel (the hesitation, the “should I?”, the burden of consequences) and replace all of that with systems built to optimise outcomes. That first step, robot vs human, already feels like a moral cliff. Then you imagine robot vs robot, and you start to wonder whether war just becomes a budget math problem: whoever has more compute, more drones, and more cash wins, and the people stuck in the middle pay the price.
Not like war had much of a point to begin with. But this does feel like a new, colder layer of horror.
I think about home. It’s not hard to imagine an invader who never sends a single soldier over a border: just drones, unmanned vehicles, autonomous subs. No flag raised, no boots on the ground, just a remote claim. And yet history says resistance always rises. Power doesn’t only come from tech. You don’t overcome a problem like this with might alone.
If you’ve played Generation Zero, you get the vibe.
But let me get specific about who’s building what, and why it matters.
Who’s building this stuff (and what it actually does)
Palantir: the software spine
Palantir isn’t a drone or robot company; it’s the operating layer that ingests data, fuses it, and turns it into targeting, logistics, and command decisions. Its Artificial Intelligence Platform (AIP) is pitched as a way for militaries to use LLMs and other models securely on sensitive data, with audit trails and “guardrails.” In 2024–25, Palantir won and expanded major U.S. defence programs: the Maven Smart System (AI for sensor fusion/targeting), the Army’s next‑gen ground station TITAN prototypes, and then a 10‑year enterprise agreement (ceiling up to $10B) to unify how the Army buys Palantir software. In plain English: they’re becoming the default data/AI backbone for a lot of U.S. Army workflows.
Palantir has also been public about supporting Ukraine, not just for battlefield insights and logistics, but also for war‑crimes investigations. Whether you see that as necessary, dangerous, or both, it shows how quickly AI‑enabled analysis is moving from slide decks to live conflict and law.
What this fits: Palantir is the decision engine, the place where sensor data, satellite feeds, intelligence, and models get stitched into “act here, not there.” If autonomous systems are the limbs, Palantir (and a few peers) are building the nervous system.
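To make “decision engine” a bit more concrete, here’s a toy sketch in Python. To be clear: this is not Palantir’s code or API, and every name in it is made up. It’s just the simplest possible picture of what “fusing sensor reports into a recommendation a human can act on” means mechanically.

```python
# Toy sketch only: not Palantir's software or any real military system.
# The idea: several noisy reports about the same object get grouped,
# scored, and handed to a human as a single suggestion.
from dataclasses import dataclass
from statistics import mean


@dataclass
class SensorReport:
    source: str        # e.g. "satellite", "radar", "drone_camera" (hypothetical)
    object_id: str     # identifier the fusion layer believes the reports share
    confidence: float  # 0.0 to 1.0, how sure this sensor is


def fuse_reports(reports: list[SensorReport]) -> dict[str, float]:
    """Group reports by object and average their confidence scores."""
    grouped: dict[str, list[float]] = {}
    for r in reports:
        grouped.setdefault(r.object_id, []).append(r.confidence)
    return {obj: mean(scores) for obj, scores in grouped.items()}


def recommend(fused: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return objects worth flagging to a human analyst: a suggestion, not an action."""
    return [obj for obj, score in fused.items() if score >= threshold]


if __name__ == "__main__":
    reports = [
        SensorReport("satellite", "track-17", 0.90),
        SensorReport("radar", "track-17", 0.85),
        SensorReport("drone_camera", "track-22", 0.40),
    ]
    fused = fuse_reports(reports)
    print("Flag for human review:", recommend(fused))  # ['track-17']
```

The real systems do this across thousands of feeds at machine speed, which is exactly why who signs off on the output matters so much.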
Anduril: full‑stack autonomy (hardware + software)
Anduril is the “ship it fast” defence startup: they build the Lattice software to task and coordinate swarms of autonomous systems, including interceptor drones, jet‑powered reusable interceptors (Roadrunner), loyal‑wingman prototypes, undersea vehicles, towers, radars, and more. Lattice has been demonstrated controlling heterogeneous teams at Army exercises; the company’s Anvil interceptor is designed to autonomously chase down hostile drones under human supervision; and the Roadrunner/Roadrunner‑M concept aims to make air defence cheaper and more reusable.
Anduril is also very relevant in Australia. Its Ghost Shark program, an extra‑large autonomous undersea vehicle (XL‑AUV), has a new factory in Sydney and a multi‑year, A$1.7B program of record. The first Ghost Shark has rolled off the line ahead of schedule, with production ramping in 2026. That’s an example of sovereign (local) advanced defence manufacturing spinning up fast.
On the U.S. side, Anduril landed a $642M, 10‑year program to protect Marine Corps bases from drones, integrating sensors, jammers, and interceptors into a single stack. And in late October 2025, it flew a new jet‑powered uncrewed aircraft (YFQ‑44A) tied to the Air Force’s “loyal wingman” push. Translation: they’re scaling from counter‑drone defence to higher‑end autonomous aircraft that fly alongside crewed jets.
What this fits: Anduril is the full‑stack autonomy shop, a hardware‑software factory for “small, smart, cheap and many” systems that can be supervised by a handful of humans instead of massive crews. (If you’ve heard of the Pentagon’s Replicator initiative to field thousands of attritable autonomous systems quickly, it’s exactly this trend.)
Ghost Robotics × SWORD International: the robot dog with a gun
In 2021, Ghost Robotics showed a quadruped robot (Q‑UGV) fitted with SWORD International’s SPUR rifle at the AUSA conference. It was a prototype demo, not a fielded system, but the imagery went viral for obvious reasons. Separately, the U.S. Air Force has used unarmed Ghost robots for base security patrols (think roaming sensors). The point is: the line between “utility robot” and “weapon platform” is getting thin.
What this fits: Ghost is part of the ground robotics wave. Some players (like Boston Dynamics) have publicly pledged not to weaponise their general‑purpose robots, an ethical stance that shows how unsettled this space still is.
A couple more names you’ll hear
Shield AI (an “AI pilot” software layer called Hivemind, plus the V‑BAT VTOL drone). It’s flown in contested environments and landed a $198M U.S. Coast Guard deal for maritime ISR, another example of smaller firms moving fast into programs that used to take a decade.
Skydio (U.S. small drones + autonomy). It’s on DoD’s Blue UAS cleared list and is gaining traction with NATO partners as a small ISR drone you can backpack and fly with minimal training.
None of these are “sci‑fi someday” companies anymore. They’re stacking contracts and delivering systems into real operations.
The bigger picture: cheaper mass, faster loops, thinner margins for error
Across allied militaries, the bet is shifting from a few exquisite platforms to many smaller, networked ones: “attritable” systems you can afford to lose and quickly replace. That’s literally the Replicator logic: push thousands of autonomous systems out fast so commanders can disperse combat power and shorten the sensor‑to‑shooter loop. It’s efficient; it’s also destabilising if guardrails lag behind adoption.
On the policy side, the U.S. DoD updated Directive 3000.09 in 2023. It doesn’t ban lethal autonomous weapons; instead, it requires systems to be designed to allow “appropriate levels of human judgment” over the use of force, plus rigorous testing, verification, and cybersecurity. Critics say “appropriate” is doing a lot of work there; supporters argue it’s the realistic way to keep humans responsible without mandating a person to click “fire” every single time.
The ICRC (Red Cross) and many civil‑society groups are pushing for new legally binding rules: ban unpredictable autonomous weapons, ban systems designed to target people, and strictly restrict the rest. This is the “meaningful human control” conversation you see in headlines. It isn’t settled.
So where does that leave my fear? Honestly, primal. Because the arc here is obvious: we’re tending toward machine decision cycles that are faster than our human ones. Humans remain in charge on paper, but the gap will keep widening as more of the find‑fix‑finish loop gets automated.
Where Australia sits in all this
If you’re reading this from Australia (I am), the future isn’t hypothetical. The Ghost Shark XL‑AUV line is opening in Sydney ahead of schedule under an A$1.7B program. The promise is deterrence through saturation and sovereign tech: field lots of smart, long‑range undersea drones that can scout, surveil, and (eventually) strike. It’s pragmatic and unnerving at the same time.
So what do we do with the fear?
Fear can be useful. It means we still care about how we win, not just that we win.
Here’s my tiny list of things worth saying out loud, even if my audience is small:
Insist on human accountability, not just “human in the loop.” It’s not enough that a person can theoretically press a button; we need traceability, auditable logs, and real authority to override machines when things go weird. That should be baked into contracts, testing, and doctrine, not bolted on later. (A rough sketch of what that could look like in code follows this list.)
Draw bright lines. Back efforts like the ICRC’s push: ban systems that target people autonomously or behave unpredictably; tightly restrict the rest. (If we don’t set lines now, the machine tempo will set them for us.)
Reward the restraint we claim to value. Companies like Boston Dynamics have publicly pledged not to weaponise general‑purpose robots. Policies, funding, and procurement should reinforce choices like that, so “doing the right thing” isn’t a competitive disadvantage.
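Since I keep saying “traceability, auditable logs, real override authority,” here’s a minimal sketch of what that can mean in software. Again: hypothetical Python, not any real system. The machine only proposes, a named human approves or overrides, and both steps land in an append‑only log.

```python
# A minimal sketch of "accountability, not just human-in-the-loop".
# Every machine recommendation is logged, nothing proceeds without an
# explicit, named human decision, and that decision is logged too.
# All names here are hypothetical, for illustration only.
import json
import time

AUDIT_LOG = "audit_log.jsonl"


def log_event(event: dict) -> None:
    """Append a timestamped record so decisions can be reconstructed later."""
    event["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")


def request_authorisation(recommendation: dict, operator: str) -> bool:
    """The machine only proposes; a named human approves or overrides."""
    log_event({"type": "recommendation", "detail": recommendation})
    answer = input(f"{operator}, approve '{recommendation['action']}'? [y/N] ").strip().lower()
    approved = answer == "y"
    log_event({
        "type": "human_decision",
        "operator": operator,  # accountability means a name, not "the system"
        "approved": approved,
        "detail": recommendation,
    })
    return approved


if __name__ == "__main__":
    rec = {"action": "flag track-17 for further review", "confidence": 0.87}
    if request_authorisation(rec, operator="analyst_a"):
        print("Proceeding, with a human on the record.")
    else:
        print("Overridden, also on the record.")
```

The design choice is the important bit: nothing executes until a specific person is on the record, and the log exists so someone can be held to account afterwards.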
I don’t pretend a post changes the world. But we aren’t powerless spectators either. We can ask for guardrails in the systems our governments buy. We can push for laws that keep pace with the machines we’re fielding. And we can refuse to let “efficiency” crowd out morality.
Because if war is going to change, let’s at least fight to change it on purpose.


