The Death of Seeing Is Believing
How deepfakes, censorship, and runaway AI are breaking the world’s shared truth layer
I didn’t sit down tonight planning to write an article about the end of reality.
I was just slightly sleep deprived and playing with Google’s new Gemini image-generation “pro” mode, Nano Banana Pro.
I expected some fun, some cursed images, maybe a few laughs.
What I didn’t expect was the sinking feeling in my stomach that… oh.
We’re actually cooked.
From “haha this is funny” to “oh no, this is bad”
It started innocently enough.
I asked Gemini to put me sitting on a whale.
Then I had it generate pictures of me chilling next to Putin and Donald Trump.
Then fake party photos with Elon Musk and Mark Zuckerberg.
All just jokes.
I knew they were fake because I made them. My friends knew they were fake because I told them.
But here’s the thing that hit me like a truck:
If I sent those photos to my grandparents, it would not surprise me if they genuinely asked:
“When did you meet Donald Trump and Putin?
I didn’t even know you left the country.”
Because Gemini’s new image mode isn’t just “good”.
It’s indistinguishable from reality.
We crossed a line, and I’m not sure we can go back.
Photos are no longer proof
There used to be this unspoken idea:
If you had a photo or video, it was at least probably real.
Sure, Photoshop existed. Sure, people edited things.
But it took effort, skill, time, and you could usually spot the dodgy bits if you looked closely enough.
Now?
You type a sentence and get a photo that looks like it came directly from someone’s camera roll.
People are already using tools like this to:
Generate fake receipts to scam refunds
Forge passports and KYC documents
Fabricate “evidence” for whatever nonsense they’re trying to pull
That’s just in the first days of this technology being out in the wild.
The bigger problem is what this does to the concept of evidence itself.
If images and videos can be AI-generated and look perfectly real, then:
A real photo of you doing something illegal can be dismissed as “AI generated”
A fake photo of you doing something illegal can ruin your life before anyone even bothers to investigate
How do you use CCTV footage in court if anyone can say, “That wasn’t me, must be AI”?
How do you defend yourself when someone decides to frame you with a convincingly fake video?
I’m not exactly out here committing crimes.
But I can absolutely imagine a future where someone doesn’t like me, generates a “photo” of me doing something horrendous, and suddenly I’m in court trying to prove that a hyper-realistic image isn’t me.
Meanwhile, our legal systems are still operating on laws written decades ago, trying to retrofit 1980s privacy frameworks onto 2025 technology. In Australia, especially, it’ll end up being a case-by-case mess while judges squint at pixels and shrug.
The death of “seeing is believing”
We’re now in a world where anything you see online might be completely fake.
Instagram posts.
News photos.
“Leaked” screenshots.
“Captured on CCTV” screenshots.
Even what looks like candid phone footage.
Gone are the days when you could glance at an image and say,
“Yeah, that looks a bit AI-ish.”
With models like Gemini’s new image generator and video models like Sora, we’ve crossed into territory where:
You can’t reliably tell if a photo is AI-generated.
You can’t trust that a video really happened.
You can’t even trust your own instincts, because the fakes feel real.
That opens the door to some genuinely scary scenarios:
False-flag events where the “evidence” is entirely AI-generated.
Mass manipulation using fake footage of protests, violence, or speeches that never happened.
Politicians or public figures smeared with fabricated images, destroyed in the court of public opinion long before the truth comes out (if it ever does).
And remember: no one reads the correction.
The damage is done with the first viral post.
Newspapers can quietly apologise later:
“Oops, turns out that photo was AI-generated, our bad.”
But no one cares about the apology.
Everyone remembers the original scandal.
This technology will destroy lives. That’s not hypothetical. It’s inevitable.
The creative industry earthquake
All of this hits close to home because I work in media and photography.
I love photography.
I love going out, working with clients, capturing real moments, and building those human connections.
And yet… I’d be lying if I said I couldn’t see where this is going.
If I wanted to, I could probably:
Take a single product photo for a client (say, a dog collar).
Feed it into an AI pipeline (see the sketch after this list).
Instantly generate dozens of “realistic” images:
Different dogs wearing it
Different locations, lighting, styles
Social media posts, banners, mockups, you name it
No models.
No locations.
No scheduling.
No retouching.
No model release forms.
Just prompts.
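To make that concrete, here’s a minimal sketch of what such a pipeline could look like in Python. It’s illustrative only: generate_variant is a hypothetical stand-in for whichever image-generation API you’d actually call (Gemini or anything comparable), and the filenames and prompts are made up.

```python
# A rough sketch of a "one product photo -> many marketing variants" pipeline.
# NOTE: generate_variant() is a hypothetical placeholder, not a real API call;
# the prompts and filenames are purely illustrative.

from pathlib import Path


def generate_variant(prompt: str, reference_image: bytes) -> bytes:
    """Hypothetical wrapper around an image-generation API.

    A real implementation would send the reference photo plus the text prompt
    to the model and return the generated image. Here it just echoes the
    reference bytes so the sketch runs end to end.
    """
    return reference_image


# The single real photo the client actually paid for.
reference = Path("dog_collar_product_shot.jpg").read_bytes()

# Each prompt becomes a new "photo" of a shoot that never happened.
scenes = [
    "a golden retriever wearing this exact collar on a beach at sunset",
    "a border collie wearing this exact collar in a city park, overcast light",
    "flat-lay of this collar on a rustic wooden table, styled for Instagram",
]

for i, scene in enumerate(scenes):
    image = generate_variant(scene, reference)
    Path(f"variant_{i:02d}.png").write_bytes(image)
```

That’s the entire “shoot”: one real reference photo, a short list of prompts, and a loop.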
Brands are already doing this.
I’ve seen AI-generated ads on the sides of buses.
I’ve worked alongside companies using AI-generated visuals because it’s cheaper and easier than wrangling humans.
And you know what? On a cold, rational level, I get it.
Hiring photographers = time, money, logistics.
AI = one person, a laptop, and an internet connection.
People still pay for the human element, the experience, the relationship, the feeling of doing something with another person.
But for purely functional content, I can absolutely see marketing teams saying:
“Why hire a photographer when we can generate everything in-house?”
It creates a horrible tension for me:
I detest what this tech is doing to trust, culture, and mental health.
But if I ignore it completely, my business loses to people who don’t care about any of that.
Catch-22.
2025: the year everything accelerated
2025 has felt… surreal.
We went from “AI is kind of useful” to:
AI writing faster (and often better) than most people
Image models generating photos indistinguishable from reality
Video models doing the same for motion
Big tech companies exploding in value practically overnight
Universities struggling because:
COVID wrecked their finances
International students didn’t come back in the same way
And now you don’t really need a degree to get into a lot of AI-adjacent work
Jobs are vanishing or mutating.
Everything’s getting faster and cheaper, except the human cost.
I’m getting client requests like:
“Hey, I know you can generate this really quickly with AI,
but I actually want it done properly.”
So now my pitch often sounds like:
“We could build you a full custom website for $5k…”
“But your budget is $1k, so honestly, you’re better off on Squarespace right now.”
Or:
“We could do a full photoshoot…”
“But with your budget, AI imagery might give you more bang for buck if we’re careful.”
I could make more money by upselling people.
But I hate seeing people get screwed over. My whole thing as an agency owner is:
Look after the client first, profit second.
The painful truth is:
AI is now just another tool in that conversation. I can ignore it morally, but I can’t ignore it professionally.
Governments, censorship and the collapsing trust layer
As if AI-generated content wasn’t enough, there’s another layer of chaos:
who controls what we see in the first place.
Big tech runs:
The servers
The platforms (Google, Meta, etc.)
The algorithms that decide what gets shown to whom
Governments run:
ISPs
Regulators
Laws about what’s allowed online
Between them, they already have an enormous amount of control over:
What gets amplified
What gets quietly buried
Which websites stay up
Which sites simply… vanish
I’ve watched pages fail to load for weirdly specific political content.
I’ve seen articles mysteriously blocked by certain ISPs.
I’ve watched news outlets selectively edit footage to fit a narrative, cutting out key details that change how a story feels, only to get sued later.
Add AI into the mix and it gets even worse:
Real events can be dismissed as “fake”.
Fake events can be propped up as “real”.
Entire wars, massacres, or crises can happen with barely any coverage, or only through tiny independent outlets most people never see.
I went down a rabbit hole recently looking into massacres in places like Sudan and Nigeria, atrocities that barely hit mainstream news. I didn’t believe it at first. I had to go digging through smaller outlets, NGO reports, and satellite imagery to even convince myself it was really happening.
Now imagine all of that in a world where AI can generate fake satellite images too.
We’re heading toward a future where:
There could be a war or a mass atrocity,
and the average person would have no idea what’s actually happening,
because everything they see is filtered, curated or questionable.
It sounds like conspiracy nonsense.
The annoying part is that, bit by bit, it’s just… becoming normal.
The end of globalisation (as an information system)
If you can’t trust:
Photos
Videos
News articles
AI summaries
Social feeds
…then what’s left?
Local reality.
The people you know.
The places you’ve actually been.
The things you see with your own eyes.
I think we’re heading toward an era where:
The global internet still exists, but as noise, entertainment, propaganda, and surface-level info.
Local trust becomes the only thing that really matters.
Globalisation as an information layer feels… dead.
Why believe a video of something happening overseas when you know it could have been typed into an AI model by some guy in his bedroom?
Trying to use this stuff ethically (and staying sane)
Here’s the dilemma I keep circling back to:
AI is absolutely wrecking our ability to trust anything.
It’s also not going away.
If I refuse to use it at all, my agency will get undercut by people who use it with zero ethical standards.
So I’m stuck in this weird, uncomfortable middle:
I do use AI for clients.
I never ship raw AI output; everything goes through me, edited and sanity-checked.
I constantly think about legal and ethical implications.
Conversations with clients now sound like:
“Yes, we can generate this with AI and save you money.
But here are the risks, here’s what we’ll disclose, and here’s where I’d still recommend real humans.”
It’s the same logic as:
“No, you don’t need a $5k custom build.
Use Squarespace for now and we’ll revisit later.”
Except now we’re talking about generative images, likeness rights, and deepfakes, rather than web templates.
My personal rule of thumb is:
Use AI to save time, not to deliberately mislead.
Don’t generate images or videos of real people in compromising situations.
Don’t pretend AI-generated stuff is candid reality.
Wherever possible, be up front about what’s edited, staged or synthetic.
Is it perfect? No.
Is it enough? Probably not.
But it’s a start.
Welcome to the start of the end
I genuinely believe we’re at the beginning of the end of the internet as we know it.
Not “the internet disappears”, more like:
The internet we trusted (even a little) dies.
It’s replaced by a chaotic, AI-saturated landscape where reality and fiction are permanently entangled.
Google could vanish tomorrow and we’d all scramble to Bing or DuckDuckGo, but the core problem would remain: we have no shared truth layer anymore.
So where does that leave us?
For me, it comes down to a few things:
Radical scepticism about anything I see online
Leaning into local, human relationships where trust still means something
Using AI tools consciously and transparently instead of pretending they don’t exist
Pushing for ethical standards now, before this stuff gets even more entrenched
I hate a lot of what this technology is doing to us.
But I also know it’s here, it’s powerful, and it’s not going away.
So I’m going to do what I can:
Use it to help my clients, not harm them
Refuse to participate in deepfake slander and synthetic bullshit
Keep talking about this stuff openly, even when it feels bleak
Because like it or not,
This is the start of the end of one world…
and the messy beginning of whatever comes next.