
When Help Starts to Hurt

Part II

The Drift Toward Dependence and the Emerging Risks of AI

If Part I was the story of what AI gives us, Part II is the story of what we slowly give up.

Changes rarely arrive all at once. Technology doesn’t steal our abilities in broad daylight. Instead, it erodes them in quiet, unremarkable moments. A reminder here. An automated suggestion there. A task we once handled ourselves now handed off without a second thought.

Eventually, the tools designed to help us begin to reshape us. Not maliciously. Not even deliberately. Simply because they are so good at what they do—and because we are so willing to let them.

Sure, AI can produce meeting summaries faster (and often more accurately) than anyone taking notes. Some AI systems can write code better than many human programmers. Others can generate clean, structured financial reports in seconds.

This is the beginning of the drift.

The Slow Weakening of Human Skills

Every major technology has taken something from us.

  • Calculators took arithmetic.
  • Computers took the slide rule.
  • GPS took over our internal compass along with map reading.
  • The smartphone relieved us of remembering birthdays, phone numbers, and grocery lists.

But AI is different. It isn’t just performing tasks for us—it’s making judgments for us.

Already people are using AI to interpret news, summarize arguments, decide what’s important, smooth over misunderstandings, and draft their responses to everything from work emails to heartfelt apologies. These tools aren’t merely assisting our thinking; they’re shaping it.

If the story ended there, it would be just another sociological shift.
But it doesn’t.

The more we rely on AI to interpret, filter, and decide, the more we lose the cognitive “muscles” that once held our thinking together. Students who use AI to write their papers aren’t just skipping a chore—they’re skipping the practice that builds the ability to write at all. And once that muscle is lost, originality is lost with it.

When a population stops practicing attention, memory, judgment, and critical thought, the result is not convenience.

It is fragility.

We replaced wisdom with data and now call the noise truth.

When Truth Expands Faster Than We Can Keep Up

One of the odd ironies of our era is that we are drowning in the very thing earlier generations longed for: information.

There was a time when truth felt scarce—something to be discovered, debated, occasionally uncovered through effort. Today, truth is overproduced. There is too much of it, in too many forms, arriving too quickly for any one person to sort through.

AI accelerates this overload.

Ask an AI the same question twice and you may get two different, but equally confident, answers. Ask it ten times and you'll get ten plausible interpretations of the world.

The danger isn’t that one version is false.

The danger is that all of them together become indistinguishable—and reality becomes obscure.

We are becoming fluent in facts yet hollow in conviction. As one philosopher put it, the victory of falsehood in the modern age is not belief—it is apathy.

This is part of what surfaced in the 60 Minutes interview with Anthropic’s CEO, Dario Amodei (November 16, 2025). He cautioned that AI-powered information systems could become so personalized, so persuasive, and so rapid that societies lose their ability to maintain a shared sense of reality.

He called it “fragmentation,” though that word may be too tidy. It feels less like fragmentation and more like confusion wearing the mask of certainty.

If truth expands faster than comprehension, coherence dies.

That sentence may end up being the quiet headline of our era.

The age of reason may not have freed humanity.
It may have made us transparent to ourselves, and that is worse than blindness.

The Automation of Empathy (and Why It Matters)

There is a moment—quiet, almost imperceptible—when a machine begins to understand us better than we understand each other. We should be careful not to confuse this with empathy.

AI can now detect stress from voice patterns, depression from word choice, and emotional discomfort from tiny hesitations most humans never notice. It can adjust tone, vocabulary, and pacing to make us feel heard in ways even the most patient human listener might struggle to match.

On the surface, this is wonderful.

For people who are lonely, anxious, aging, or simply overlooked, these systems can offer genuine comfort.

But every automated support creates dependency.

The more comfortable we become with machines that never interrupt, never judge, and never grow tired, the harder it becomes to navigate the messy world of human relationships. Artificial empathy doesn’t just meet emotional needs—it can subtly replace the practice of real empathy.

Worse, it can compete with real relationships.

Again, Amodei touched on this indirectly: AI may soon offer emotional feedback so precisely tuned that it becomes more psychologically compelling than human connection. When that happens, the question is no longer whether AI supports relationships—it’s whether it supplants them.

Dependence Masquerading as Convenience

Here is the part of the story most people miss:

Dependence rarely introduces itself honestly.
It arrives wrapped in convenience, efficiency, and a cheerful “let me take care of that for you.”

We begin by using AI to save time.
Then to save effort.
Then to save thought.
Eventually, we use it to save ourselves from discomfort, uncertainty, and responsibility. Why spend time and personal effort when something can do it for you?

The drift is subtle:

  • We stop making small decisions because AI makes them easier.
  • We stop forming opinions because AI provides the summaries.
  • We stop remembering details because AI remembers everything.
  • We stop interpreting emotions because AI interprets them better.

This isn’t dystopian. It’s human nature.

The easier a thing becomes, the harder it is to keep doing the hard version.

And slowly, without anyone noticing, the center of gravity shifts.

Amodei described this as the alignment challenge:
not just aligning AI to human goals, but aligning humans to the responsibility of remaining active participants rather than passive beneficiaries.

This is the real emerging risk—not that AI becomes powerful overnight, but that humans become complacent long before that.

The modern world is full of facts and almost entirely devoid of wisdom.

Where the Drift Leads

This article is not about catastrophe.
It is about erosion—of capability, of judgment, of attention, of intimacy.

Part I showed how AI lifts burdens.
Part II shows how those same lifted burdens create cracks:

  • Cracks in skills
  • Cracks in truth
  • Cracks in emotional life
  • Cracks in independence
  • Cracks in what it means to be capable, responsible, human

These cracks matter because they shape what comes next.

They prepare the ground for the larger and far more consequential questions of Part III:

  • What happens when a species becomes dependent on an intelligence it cannot fully control?
  • What if someone builds something humanity cannot put back in the box?

That is where the story turns—and where the conversation must go next.

Artificial intelligence is not the end of ignorance. It is the final acceleration of it.
