Peak, Trough, or Turning Point?

Why AI Feels Overhyped Now—and How It Still Reshapes the Next Decade

I’ve watched a few revolutions roll through: mainframes to PCs, dial-up to broadband, brick-and-mortar to dot-com. The pattern isn’t mysterious anymore. New tech shows up, excitement explodes, money rushes in, and for a while it looks like the future is arriving on Thursday. Then reality intrudes: real use cases, real costs, real risk. Doubts surge. Half the projects get shelved. And yet—the ideas with imagination + discipline survive, grow up, and quietly (sometimes not so quietly) change the world.

Call that arc whatever you like—the Hype Cycle (coined in the ’90s) is a handy shorthand. It has five phases that often overlap:

  • Spark: A new capability appears; imaginations light up and expectations soar.
  • Peak: Early trials look magical; hype machines run at full speed and investment floods in—often well ahead of evidence.
  • Trough: Demonstrations meet reality. Usability is harder than it looked; economics and risk show up. Projects without credible value fade.
  • Recovery: Practical use cases emerge; business models adapt; processes stabilize with guardrails.
  • Plateau: The tech becomes mainstream and—crucially—useful. New use cases keep arriving, but with clearer boundaries and benefits.

We’ve seen that movie with the automobile (Ford won on manufacturing and service, not raw horsepower), with computing (IBM’s systems, Microsoft’s platforms), and again with the internet (Google’s search economics; Amazon’s recommendations coupled to ruthless logistics; Apple’s product discipline; Intel/Cisco’s “picks and shovels” for the networked world). Hype brings the money; innovation plus hard work creates the value.

So where does AI fit? My read: we’re past the giddy peak of “AI will solve everything” and settling into the hard, necessary middle: What specific things can AI do that aren’t being done—or can be done much better? That’s not a letdown; it’s the only path to durable change.

We also haven’t identified the long-term winners and losers yet, especially because robotics (which will increasingly use AI) is on a similar timeline. It will be tricky to untangle which advances hit first and matter most.

Why the peak felt so high (and so noisy)

  • Demos scale better than deployments. A 30-second “smart agent” video gets millions of views; the six months of data cleaning, integration, and risk reviews that make one workflow reliable don’t go viral.
  • Social media rewards certainty. “It’s solved” outperforms “it depends,” even when “it depends” is how real work succeeds.
  • We love shortcuts. AI compresses some tasks, but it doesn’t erase the need for domain knowledge, process design, and accountability.

Result: inflated expectations on Monday; doom posts by Friday. The truth lives in between.

The trough is not failure—it’s craft

Every technology that lasts has a stretch where the magic turns into craft:

  • Usability: Where, exactly, does this tool fit the job?
  • Economics: What does it cost to run at scale? (Inference isn’t free.)
  • Risk: What happens when it’s wrong—and who’s on the hook?

This is the season for evidence over marketing. The teams that win are the ones that:

  1. Define “good enough” accuracy per task. Some jobs tolerate 90%; others need 99.99% with human checks.
  2. Ground answers in sources, so you can verify claims, not just admire prose.
  3. Log everything. If a system acts, you can audit what it did and roll it back.
  4. Design for reversibility. Start with low-blast-radius tasks; expand scope as trust is earned.
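The first discipline above, defining "good enough" per task, can be made concrete in a few lines. A minimal sketch in Python (task names and thresholds are illustrative, not a standard): each task gets its own accuracy bar, and anything that misses it routes to a human.

```python
# Hypothetical per-task accuracy gates with a human fallback.
# Thresholds are illustrative: tune them per task, per consequence.

TASK_THRESHOLDS = {
    "draft_support_reply": 0.90,   # tolerant: a human edits before sending
    "invoice_matching": 0.9999,    # strict: money moves on this result
}

def route(task: str, model_confidence: float) -> str:
    """Return 'auto' if the model clears the bar for this task,
    otherwise 'human_review'. Unknown tasks always go to a human."""
    bar = TASK_THRESHOLDS.get(task)
    if bar is None or model_confidence < bar:
        return "human_review"
    return "auto"
```

The point of the sketch is the asymmetry: the same confidence score that auto-ships a support draft sends an invoice match to a reviewer.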

The uncomfortable question: who validates the code?

AI can draft code that looks as competent as a human’s—if the problem is well-framed. But in every era I’ve worked in, code faced reviewers, test platforms, automated checks, and sign-off gates. Those steps don’t become optional because an AI wrote the first draft. In fact, the bar should go up, because AI can generate plausible mistakes at scale.

A sane “AI code → ship” checklist (plain English):

  • Label AI-assisted changes. Make it obvious what was provided by AI.
  • Run automated checks. Style and type checks, plus security scans, should run by default.
  • Test first, then more tests. Add unit tests for new logic and widen regression tests for old logic. Include full-scale testing before a wide release.
  • Require human review. A domain-literate reviewer signs off—not just someone who can read syntax.
  • Release safely. Use feature flags and staged rollouts; keep a fast rollback button.
  • Keep the paper trail. Save the AI prompts/suggestions alongside the code change so you can explain decisions later.
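The checklist above works best as a hard gate, not a habit. A minimal sketch in Python (field names are hypothetical) that refuses to ship an AI-assisted change unless every box is checked:

```python
from dataclasses import dataclass

@dataclass
class Change:
    ai_assisted: bool
    labeled: bool          # change is tagged as AI-assisted
    checks_passed: bool    # lint, types, security scans
    tests_added: bool      # unit + regression coverage
    human_signoff: bool    # domain-literate reviewer approved
    flagged_rollout: bool  # behind a feature flag / staged release
    prompts_archived: bool # prompts saved alongside the diff

def ready_to_ship(c: Change) -> list[str]:
    """Return the list of unmet requirements; empty means ship."""
    if not c.ai_assisted:
        return []  # normal review path applies
    required = {
        "label": c.labeled,
        "automated checks": c.checks_passed,
        "tests": c.tests_added,
        "human review": c.human_signoff,
        "safe release": c.flagged_rollout,
        "paper trail": c.prompts_archived,
    }
    return [name for name, ok in required.items() if not ok]
```

A gate like this returns a to-do list instead of a yes/no, which keeps the conversation about what is missing rather than who to blame.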

AI as copilot is already a win. AI as autopilot to production? That’s how you breed incidents and headlines.

Note that the above targets AI-generated code, but the same process should be used for generated reports and external analysis (e.g., medical and legal), as well as for any process modifications.

What really becomes useful (and where AI will stick)

Think of AI’s impact in three durable lanes:

1) The infrastructure layer (“picks and shovels”)

This is the unglamorous foundation winners stand on: clean data pipelines, repeatable evaluation and monitoring, privacy and security, cost control, the right chips/accelerators, and sometimes on-device or edge runtimes. It’s the same place the internet’s quiet champions thrived (CDNs, search infrastructure, payments, logistics). In AI today, serious players talk more about data contracts, evaluation sets, drift/latency, and unit economics than about magic.

Where/when/who: Already happening across larger enterprises and well-run mid-market teams—data engineering groups, platform teams, and security/governance offices building the plumbing so everything else can work.

2) Copilots inside workflows (human-in-the-loop)

This is AI that sits inside existing work and removes friction without taking away judgment:

  • Support: Drafts replies that link to the exact policy page a human can verify, cutting response times.
  • Ops/IT: Summarizes incidents and suggests next steps, while humans own the fix.
  • Finance: Pre-matches line items to purchase orders to speed approvals—humans still click “approve.”
  • HR/Recruiting: Summarizes résumés against stated criteria; recruiters still decide.
  • Docs & email: First-pass drafts and summaries so people spend time on tone and decisions, not blank pages.

Tell: These teams can show measured improvements—response times, resolution rates, first-pass quality—without hand-waving. When: Now, in production at many organizations (often starting in one department, then expanding).

3) Constrained autonomy (agents with receipts)

These are multi-step systems that plan, call tools, and act inside guardrails. Think back-office chores, not customer conversion or binding commitments:

  • Triaging and routing tickets with a clear audit log.
  • Reconciling invoices or inventory variances, then preparing a summary for approval.
  • Running nightly checks (e.g., “find policy exceptions,” “flag broken dashboards”), opening a task with the evidence attached.

Tell: Vendors—or internal teams—can show end-to-end success rates for specific tasks, plus safety interlocks (permissions, approvals) and rollback. Where/when: Early but real—IT operations, finance ops, analytics teams, and shared-services groups are piloting this now precisely because the blast radius is small and the tasks are repetitive.
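The "agents with receipts" pattern can be sketched as a thin wrapper: every action is permission-checked, logged with its evidence, and anything consequential is held for explicit approval. A minimal Python sketch (all names are illustrative):

```python
import datetime

AUDIT_LOG: list[dict] = []  # the "receipts": every action, every outcome

ALLOWED_ACTIONS = {"triage_ticket", "draft_reconciliation"}  # small blast radius
NEEDS_APPROVAL = {"draft_reconciliation"}  # prepares, never posts, without sign-off

def run_action(action: str, payload: dict, approved: bool = False) -> str:
    """Execute an agent action inside guardrails and record a receipt."""
    if action not in ALLOWED_ACTIONS:
        outcome = "denied: not permitted"
    elif action in NEEDS_APPROVAL and not approved:
        outcome = "held: awaiting human approval"
    else:
        outcome = "done"  # the real work would happen here
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "payload": payload,
        "outcome": outcome,
    })
    return outcome
```

The design choice that matters: denied and held actions are logged just like completed ones, so the audit trail shows what the agent tried, not only what it did.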

These lanes don’t require faith; they require design. Together they compound into business-model changes—fewer handoffs, faster cycle times, and narrower error bars.

But let’s be honest: there will be spectacular failures

No need to name names to see the patterns. Watch for five archetypes:

  1. Feature, not a product. A thin UI wrapped around a public model—no data moat, no integrations, poor unit economics. Easy to copy, hard to defend.
  2. Agentic overreach. “Fully autonomous” claims with no audit trail, no approvals, no rollback—followed by an expensive mistake on first contact with reality. Ambition good; irreversible autonomy bad.
  3. Inference-cost doom loop. Great demo, awful margins at scale. If each task costs more to run than the value it creates, the math wins—against you.
  4. Compliance cliff. Touches sensitive data without consent, logging, or separation. One regulator letter and the model is shelved.
  5. Integration denial. Beautiful prototype that never plugs into the messy systems where work actually lives. If it can’t run inside the workflow, it won’t run.

Some giants will stumble, too (incumbents often do)—not for lack of talent, but because incentives and legacy make it hard to ship the unsexy plumbing or cannibalize a cash cow. (See Clayton Christensen’s The Innovator’s Dilemma.) The winners, as usual, will be those who make the hard trade-offs early.

How to read the moment (without getting burned)

  • Expect less spectacle, more systems. The next phase looks like steadier dashboards, fewer clicks in back-office tools, higher first-pass quality, better retrieval with sources—and lower variance overall.
  • Judge by proof, not promises. Ask: What exactly does it do? On which data? With what accuracy? What happens when it’s wrong? If the answers are fuzzy, you’re buying a demo, not a product.
  • Treat agents like interns with superpowers. Helpful, fast, tireless—still supervised. Scope expands with performance, not with press releases.
  • Keep humans accountable where stakes are high. AI should propose; people dispose—especially with money, medicine, law, safety, or reputation.

Where the big, durable value likely lands

If we project the history of Ford/IBM/Google/Amazon/Apple/Intel/Cisco onto AI, the rhymes look like this:

  • Ford: Scale and reliability beat glam. In AI, that’s repeatable pipelines, standard evaluations, robust deployment—not the flashiest demo.
  • IBM/Intel/Cisco: Infrastructure kingmakers. Winners define interfaces and economics others build on.
  • Google/Amazon: The AI equivalents are retrieval + tool-calling (find the right fact, then take the right action) and operational orchestration (do it securely, repeatedly, cheaply).
  • Apple: Usability and trust. For AI, that’s privacy-preserving design, on-device/edge options where appropriate, and experiences that feel obvious rather than “impressive.”

None of these require AI to be an end-all, be-all. They require craft, patience, and stubborn attention to outcomes.

A practical plan you can use tomorrow

  1. Pick two gritty, measurable use cases. “Reduce email response time by 30%.” “Cut data-entry errors in half.” No moonshots.
  2. Make “AI-ready” a checklist, not a slogan. Scope the data, set owners, define quality checks, write down retention rules. If you can’t answer “where did this answer come from,” you’re not ready.
  3. Stand up model hygiene. Version prompts/models, keep a fixed evaluation set, watch drift and cost, and turn off what doesn’t earn its keep.
  4. Pilot agents inside sandboxes. Low-blast-radius tasks; approvals for irreversible actions; every step logged.
  5. Ship the code like adults. Follow the “AI code → ship” checklist above. The speed is in the draft; the safety is in the process.
  6. Communicate the arc. Tell your team (and board) we’re in the part where cabinets are off the wall. It’s messy—and it’s normal.
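"Model hygiene" in step 3 can start smaller than it sounds: a versioned prompt, a fixed evaluation set, and a pass-rate check that runs before any prompt or model change goes live. A Python sketch (the eval cases and threshold are illustrative assumptions):

```python
PROMPT_VERSION = "summarize-v2"  # bump on every prompt change

# Fixed evaluation set: inputs paired with known-good expected behavior.
EVAL_SET = [
    {"input": "refund request, order late", "must_contain": "refund"},
    {"input": "password reset question", "must_contain": "reset"},
]

def evaluate(model_fn, threshold: float = 0.95) -> bool:
    """Run the fixed eval set against a model function; return True
    only if the pass rate clears the threshold. Gate deployments on this."""
    passed = sum(
        1 for case in EVAL_SET
        if case["must_contain"] in model_fn(case["input"])
    )
    return passed / len(EVAL_SET) >= threshold
```

Because the eval set is fixed, a score drop after a prompt or model swap is evidence of drift, not noise, which is exactly what "turn off what doesn't earn its keep" needs.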

Bottom line: AI isn’t an instant cure-all. It is a powerful new set of levers. If we trade spectacle for systems—data you can trust, workflows you can verify, economics you can live with—we’ll get the kind of change that sticks. The kind that, a decade from now, looks obvious in hindsight.
