It’s hard not to be awed by the promise of artificial intelligence. The headlines are relentless: AI diagnoses cancer faster than doctors, translates languages in milliseconds, creates realistic art, and drafts legal documents in seconds. We’re told to celebrate. We’re told the future is here.
And yet, lurking beneath that optimism is a quiet dread. For all the glossy marketing about empowerment and democratization, a very different reality is unfolding: jobs are vanishing, livelihoods are threatened, and entire industries are being restructured—not around people, but around algorithms.
Why is it that we’re being told to cheer for a future where most of us might be obsolete?
The Tech Industry’s Double Narrative
Tech companies—and the billionaires who helm them—are masters of a double narrative. Publicly, they speak in utopian tones about unlocking human potential, solving climate change, and bringing about a golden age of efficiency. Privately (and sometimes not so privately), they talk about cost-cutting, automation, and reducing headcount.
In this story, AI is a savior. In reality, it’s a power play.
Companies like OpenAI, Google DeepMind, and Meta are not charities. They are commercial enterprises whose incentives are tied to scale and speed. The more work AI can do, the more money they make—and if that means replacing human workers, so be it.
Eric Schmidt, former CEO of Google, famously predicted that AI superintelligence could one day consume up to 99% of global energy resources. That’s not just a technical comment. It’s a moral red flag. When a single class of machines demands nearly all the world’s power, it’s no longer just a tool. It’s a god in the making. And what kind of civilization builds a god that no one else can control?
When Work Disappears: The Crisis of Purpose
For many people, work is more than a paycheck. It’s identity. It’s community. It’s purpose.
Our society is built on the idea that you are what you do. We introduce ourselves by our professions. We schedule our lives around work. We derive meaning, pride, and status from our roles in the economy.
So what happens when that disappears?
In a world where AI handles accounting, design, writing, analysis, customer service, logistics, coding, diagnostics, and even emotional labor—what’s left for humans? And more importantly, what’s left of humans?
Mass unemployment isn’t just an economic issue. It’s an existential one. Depression, anxiety, suicide—these are not side effects of joblessness. They are symptoms of purposelessness. If our value is no longer tied to labor, then we must invent an entirely new model for identity, dignity, and contribution.
And we are wildly unprepared for that.
The Moral Vacuum in Silicon Valley
Let’s be honest: a lot of AI development today feels less like progress and more like a billionaire science fair. The guiding question isn’t “What does society need?” It’s “What can we build that no one else has?”
This is how we end up with multimodal LLMs trained on the collective intellectual property of artists, writers, and musicians, without consent or compensation. This is how companies that talk about saving the world also build surveillance tools, predictive policing algorithms, and warfare applications.
These companies do not see the contradiction. Or worse—they do, and they simply don’t care.
The Myth of “Reskilling”
When people raise concerns about AI and jobs, the response is often some form of tech industry hand-waving: “Don’t worry. People will reskill.”
Let’s unpack that. First, “reskilling” assumes people have the time, resources, support systems, and mental space to abandon their career identities and become prompt engineers overnight. It assumes the new jobs will be accessible, fulfilling, and stable. It assumes everyone wants to learn to code.
Second, even if you do reskill, you’re often not working with AI—you’re working for it. You’re babysitting a machine that does the core creative, analytical, or operational task, while you troubleshoot and optimize. Is that a promotion or a demotion?
The promise of reskilling is not a real plan. It’s a deflection.
Philosophical Questions We Need to Confront
If AI really is going to reshape civilization, we need to confront a few deeply uncomfortable questions:
- What is the purpose of a human being in a world where work is optional but income is not?
- Can dignity be decoupled from labor, or is our worth forever tied to economic contribution?
- Is it moral to build machines that systematically erode human agency?
- Who decides what knowledge gets encoded into AI, and whose voices are excluded?
- Should we allow a handful of companies to control technology that touches every domain of life?
These aren’t rhetorical questions. They are civilization-scale questions. And our current public discourse is not nearly serious enough to answer them.
The Illusion of Democratization
One of the most seductive ideas in tech is that AI will “level the playing field.” It will give small businesses the tools of large enterprises. It will allow students to learn anything. It will let anyone create anything.
And that’s partly true—until you realize that the platforms delivering these tools are not public utilities. They are private monopolies. The APIs you build your startup on today can be throttled, priced, or restricted tomorrow. The AI model you use to create your art can be trained on your output without acknowledgment or payment.
Democratization without governance is just exploitation with a smile.
A Future of Energy Hunger and Inequality
Returning to Eric Schmidt’s warning: the idea that AI superintelligence could require 99% of the world’s energy isn’t just technically staggering. It’s morally obscene. In a world where billions still lack access to reliable electricity, are we seriously going to dedicate nearly all of humanity’s power to a machine intelligence that serves corporate interests?
This isn’t just bad economics. It’s bad ethics.
AI companies are building infrastructures of dependency: tools we can’t live without, embedded into every system we use—from education to healthcare to banking. But they are not accountable to the public. They are not voted into office. And they do not share power.
That’s not innovation. That’s soft colonialism with better branding.
The Reclamation of Meaning
If we truly want a humane future, we must shift the narrative entirely. Instead of asking what jobs AI can replace, we should ask:
- What human experiences are irreplaceable?
- What kinds of work dignify rather than exploit?
- What systems need to change to support a post-work society?
We must reclaim meaning from machines. That might mean universal basic income, yes—but also community reinvention, new social rituals, lifelong learning, arts, storytelling, care work, ecological stewardship.
Purpose doesn’t have to come from a job. But it does have to come from somewhere.
Conclusion: A Choice, Not a Destiny
AI is not destiny. It is a set of design choices made by people who hold enormous power. We can demand different choices. We can refuse to be collateral damage in someone else’s disruption fantasy. This moment—right now—is our opportunity to shape the ethics, economics, and energy of the AI age. But if we don’t ask better questions, demand better governance, and redefine what we want from technology, we may wake up in a world where the machines are humming, the profits are flowing, and humanity is quietly, collectively, lost. And no one, not even the machines, will know why.