
The AI Divide May Not Be What We Think

The People Who Question — And Those Who Don’t

You ask AI a question, and it gives you a fast, confident answer — clear, well written, and often persuasive. At that point, you have a choice. You can accept the answer and move on, or pause long enough to question it.

Most of the time, that choice passes almost unnoticed. But it may be one of the more important decisions we make in how we use these tools.

The First Divide

There is a growing conversation about an “AI divide,” usually framed in terms of access — who has the tools and who doesn’t. But it is not always clear how broadly that conversation is grounded. Much of it seems to take place among technically oriented communities, often talking to each other rather than reflecting the behavior of the population as a whole.

That distinction — access or not — matters. But it may not be the most important one.

Once people have access, a more subtle divide begins to emerge. Some treat AI responses as a starting point. They ask where the answer came from, what assumptions it relies on, and whether it holds up under closer examination. They refine the question, compare alternatives, and occasionally test the result against something independent.

Others take a more practical approach. The answer looks reasonable, it saves time, and it is good enough to move forward.

Both approaches are understandable.

But they do not lead to the same place.

The extra step — questioning, checking, refining — is where experience begins to accumulate. Over time, that experience develops into judgment, and eventually into something closer to wisdom.

Without that step, the process is faster.

But it may also be thinner. One path builds experience, judgment, and understanding. The other risks building dependency — not necessarily intentional, but real over time.

Take college as an example. An AI-generated paper can look polished and coherent, with a clear theme and structure. But what does the student gain from that process? When the work is done personally, even if it takes longer, the student develops understanding. AI can still play a role — editing, refining, checking — but the learning comes from engaging with the material, not bypassing it.

A Reality Check

There is a tendency in current discussions about AI to assume that change will be both rapid and universal.

In many forecasts, entire industries are expected to transform within a few years. Work will be redefined. Productivity will surge. Old models will disappear.

Some of that may happen. In some areas, it may already be underway.

But it is worth asking how broadly these changes will actually reach.

Even today, a significant portion of the population does not engage deeply with digital tools beyond basic use. That is not a criticism. It is simply how people live and work. Some have limited access. Others have limited interest. Many have no practical need to integrate AI into what they do every day.

If a third — or even half — of the population remains only lightly engaged, the pace and shape of change may look very different from what is often predicted.

This does not mean AI will not matter.

It means its impact may be uneven, slower in some areas, and highly concentrated in others.


It’s Not Just One Divide

The more I think about it, the more the “AI divide” looks less like a single line and more like several overlapping differences in how people relate to these tools.

Questioning versus accepting is one.

There are others.

Builders and Users

Some people want to understand how things work. They do not need every technical detail, but they want a sense of what the system is doing, where it might fail, and how reliable it is.

Others are comfortable treating AI as a tool — something that produces useful results without requiring much attention to how those results are generated.

We already live this way in many areas. Most of us use complex systems every day without understanding them deeply.

But AI is different in one important respect.

It does not just produce outcomes. It produces explanations. And those explanations can sound convincing whether they are correct or not.

That makes the gap between understanding and usage more significant than it has been with many earlier technologies.

Effort and Ease

Another divide is simpler.

Some people are willing to invest effort in thinking through a problem. Others prefer to move quickly and let the tool do the work.

AI makes that choice easier than ever. You can work through a problem step by step, or you can generate a plausible answer in seconds.

Over time, that difference compounds.

Judgment — the ability to recognize when something is off — develops through repetition, correction, and engagement. It comes from seeing patterns, making mistakes, and adjusting.

If that process is shortened or bypassed, the development of judgment may slow as well.

Not Everyone Is On The Same Path

There is another part of this discussion that is easy to overlook.

Not everyone will engage with AI in the same way.

Some people will go deep into these tools. Others will use them occasionally and at a surface level. And some will have very little reason to use them at all.

This is not a failure. It is simply how different fields and interests evolve.

We already accept this in other domains. Most people rely on financial systems without understanding them. Most people depend on medical expertise without studying medicine.

AI may follow a similar pattern.

The Work That Doesn’t Fit Neatly

There is also an entire category of work where AI plays a different role.

Trades like plumbing, electrical work, carpentry, and mechanical repair are not primarily knowledge problems. They are hands-on, physical problem-solving challenges.

I was reminded of this recently when I had some plumbing work done. What looked like a simple issue turned out to involve incompatible systems that had been added over time. The plumber had to figure out not just what the problem was, but how to create a workable solution using what was already there.

That kind of work depends on experience, improvisation, and the ability to adapt to real-world constraints. It involves making judgment calls in situations that are not clean or predictable.

AI can assist with information.

But it is still a long way from handling that kind of physical, situational problem-solving.

A Related Thread Worth Watching

That may change over time.

As robotics and AI continue to develop, we may see systems that can be trained to perform certain types of physical work. New construction, with its structured environments, may be more accessible to automation.

But repair work — especially in older systems — is something else entirely. Every situation is slightly different, and every solution requires adaptation.

That kind of flexibility is harder to automate.

So while AI may reshape many professions, there are still areas where human judgment and experience remain central.

So What Does This Mean?

If there is an AI divide, it may not be about access.

It may be about how people engage with the tools they have.

Some will question, explore, and test what they are given. Others will accept, move quickly, and rely on the output. Most people will fall somewhere in between.

At the same time, there are entire domains where AI is not yet the primary driver of value — at least not in the way it is in software, research, or analysis.

A Small Thought Going Forward

Technology has always changed how people think.

Calculators changed arithmetic.
Search engines changed research.
Navigation systems changed how we understand geography.

AI may change something more subtle.

Not just how we find answers, but whether we continue to question them.

The divide that emerges may not be about who has access to AI. It may be about who chooses to engage with it — and how.

Because intelligence has never been just about producing answers.

It has always involved knowing when to question them.
