
Tool, Not Tyrant

Artificial intelligence (AI) – Part 2

Every new technology sparks the same fear: that the machine will take over. When calculators arrived in classrooms, some worried students would never learn math again. When spreadsheets hit the office, people thought accountants were finished. When search engines matured, teachers fretted that research would be reduced to typing a question and copying the first answer.

Now it’s AI’s turn.

The fear makes sense. AI systems can write, summarize, and even generate code or prose. Used thoughtlessly, they can make us lazy—or worse, mislead us with errors that look convincing. But the lesson from past tools is clear: the danger isn’t the tool itself. The danger is forgetting that we’re the ones holding it.

Human-in-the-Loop: Why It Matters

AI is fast but fallible. Left on its own, it hallucinates, borrows bias from its training data, and sometimes produces output that looks polished but is flat-out wrong.

That’s why the most powerful use of AI isn’t automation in the sense of “set it and forget it.” It’s collaboration. Think of AI as a junior colleague who can churn through grunt work at inhuman speed but still needs oversight.

  • Let AI summarize 100 customer surveys? Useful.
  • Let AI decide your next business strategy from those summaries? Risky.

The tool can draft, organize, and accelerate. But only humans can supply judgment, values, and intent. How long this balance holds is a fair question. Some research systems are already being tested as stand-ins for human judgment. But even then, what they offer is still statistical patterning, not wisdom or lived experience. For the foreseeable future, the human role remains essential.
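To make that division of labor concrete, here is a minimal human-in-the-loop sketch in Python. The `ai_summarize` function is a hypothetical stand-in for whatever model API you actually use; the structure is the point, not the interface.

```python
# A minimal human-in-the-loop sketch: the AI drafts, a person signs off.
# `ai_summarize` is a hypothetical stand-in for a real model API.

def ai_summarize(text: str) -> str:
    """Placeholder for a call to an AI summarization service."""
    return f"[draft summary of a {len(text.split())}-word response]"

def summarize_with_oversight(survey_responses: list[str]) -> list[str]:
    approved = []
    for response in survey_responses:
        draft = ai_summarize(response)  # the machine does the grunt work
        print(f"AI draft: {draft}")
        verdict = input("Accept, edit, or skip? [a/e/s] ").strip().lower()
        if verdict == "a":
            approved.append(draft)                     # human signs off
        elif verdict == "e":
            approved.append(input("Your revision: "))  # human overrides
        # "s" (or anything else) falls through: nothing enters the record
        # without a person's explicit OK.
    return approved
```

The design choice worth noticing: the AI never writes directly to the final record. Every item passes through an explicit human decision, which is exactly the "junior colleague with oversight" pattern described above.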

Tools That Changed Work Before

This isn’t new. Spreadsheets didn’t replace accountants. They amplified them. Google didn’t eliminate libraries—it expanded access to information.

AI belongs in this same family of productivity multipliers. It doesn’t end human work. It changes which parts of the work matter most.

Productivity, Not Paranoia

In business, we do want AI to “do the work.” That’s what makes it valuable:

  • Transcribing and indexing meetings so no one has to take notes.
  • Drafting a first version of a report so a manager can focus on editing.
  • Cleaning up messy datasets so analysts can get to the insights faster.

The grunt work goes to the machine. The creative and strategic work stays with us. That’s how we get more done in less time without losing the very things that make work meaningful.
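As a toy illustration of that split, here is a short Python sketch using pandas. The script handles the mechanical cleanup, and anything it cannot resolve gets flagged for a person rather than silently guessed at. The data and column names are invented for the example.

```python
import pandas as pd

# Toy example: the machine handles mechanical cleanup; a person reviews
# anything ambiguous instead of letting the script guess.
df = pd.DataFrame({
    "customer": ["Acme", "acme ", None, "Beta Corp"],
    "revenue":  ["1200", "1,200", "n/a", "5000"],
})

df["customer"] = df["customer"].str.strip().str.title()   # tidy names
df["revenue"] = pd.to_numeric(
    df["revenue"].str.replace(",", ""), errors="coerce"   # "n/a" -> NaN
)

# Rows the script could not resolve go to a human, not into the report.
needs_review = df[df.isna().any(axis=1)]
clean = df.dropna()
print(f"{len(clean)} rows cleaned automatically, "
      f"{len(needs_review)} flagged for human review")
```

The machine fixes what is unambiguous; the judgment calls stay on a human's desk.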

The Tyrant Trap

The real risk isn’t that AI will suddenly become self-aware like in a Hollywood thriller. The risk is more subtle: treating it as infallible and handing over decisions that require human perspective.

But some researchers, such as Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, warn that this danger could scale into something much larger. Their argument is that if AI ever reaches a point of “superintelligence”—general abilities far beyond ours—it might pursue goals indifferent or hostile to human survival. Not out of malice, but because its objectives aren’t our objectives.

They point to instrumental convergence: even if the system’s top goal is benign, its subgoals might include accumulating resources, preventing shutdown, or shaping its environment to maximize success. In theory, that could make a tool into a tyrant.

Whether you agree with their timeline or not, the takeaway is practical: we cannot abdicate oversight. A tool only becomes a tyrant if we stop asking questions and simply accept its answers.

The Short-Term Reality Check

It’s worth noting, though, that current AI has hard limits. It’s not embodied—it doesn’t walk, fly, or wield tools unless paired with robotics. For now, its “power” is confined to text, images, audio, and code. That keeps takeovers like the one in I, Robot (the movie, not the vacuum company) firmly in the science-fiction camp.

What is real today are infrastructure bottlenecks:

  • Power: massive data centers require enormous amounts of electricity to train and run large AI models. Growth is already straining local grids (see the back-of-envelope sketch after this list).
  • Bandwidth: even as 6G approaches, network speed and capacity remain limits on real-time, everywhere AI.
  • Latency: scaling across continents isn’t seamless; bottlenecks slow any global “instant control” scenario.
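To give the power point a sense of scale, here is a rough back-of-envelope in Python. Every figure is an assumed, illustrative number, not a measurement of any real facility.

```python
# Rough, illustrative arithmetic only: every figure below is an assumption,
# not a measurement of any real data center.
gpus = 50_000            # assumed accelerator count for a large training site
watts_per_gpu = 1_000    # assumed draw per GPU plus its share of the server
pue = 1.3                # assumed power usage effectiveness (cooling overhead)

it_load_mw = gpus * watts_per_gpu / 1_000_000   # 50 MW of IT load
facility_mw = it_load_mw * pue                  # ~65 MW at the meter
print(f"IT load: {it_load_mw:.0f} MW, facility draw: ~{facility_mw:.0f} MW")
# Tens of megawatts, continuously: the kind of load a local grid notices.
```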

In short, AI’s near-term risks are more about misuse—cheating in schools, shallow automation in workplaces, or disinformation online—than about machines running the world. The big existential debates are worth considering, but they don’t remove the immediate need for guardrails in how we use AI today.

Psychology of Risk

Another wrinkle is human psychology. We have a long track record of underestimating risks while embracing technological optimism. Nuclear power, aviation, and even the early internet all carried risks that were initially minimized or brushed aside. We like to believe new tools bend naturally toward good, and that “someone else” is handling the safety questions.

That complacency can be dangerous. If the public assumes AI is automatically safe, or governments lag in setting rules, we could sleepwalk into problems. Part of keeping AI as a tool—and not a tyrant—is resisting that bias and taking risks seriously before they bite.

Hardware and Scale

There’s also a material side to this story. Current AI runs on humongous data centers packed with GPUs, burning through electricity and cooling resources. Scale matters. As compute gets cheaper and more plentiful, AI’s reach grows: faster inference, lower cost, tighter integration with robotics, and broader everyday access.

Hardware is what turns abstract math into action. At present, limits in energy and bandwidth keep things in check. But if compute costs fall and robotics become cheaper, those constraints weaken. That’s when the gap between “just a tool” and “potential tyrant” becomes thinner.

Examples of Teaming Up

  • Lawyers: AI can already replace much of the paralegal grind—searching for case law, sorting through mountains of precedent, and surfacing relevant material. But the lawyer still has to revise, frame, and argue with nuance and intent.
  • Students: AI can help outline a paper, and it can also point to research sources or suggest how evidence might apply. But students still need to read, reason, and build their own arguments if they want to learn anything from the process.
  • Travelers: AI can suggest itineraries based on preferences, and many tools already handle reservations and confirmations automatically. But it’s still the traveler who decides whether a trip feels too rushed or whether dinner at a local dive beats a reservation at a five-star.

The machine is the assistant. The human is the author.

Looking Ahead

AI isn’t here to replace us. It’s here to work with us. The trick is remembering who’s in charge.

Next up in this series:

  • Part 3: Trust and Filters — how to decide when to believe AI, when not to, and how it stacks up against influencers and news sources.
What do you think is missing from this discussion? Let me know, and I’ll research it for future episodes.
