My kids are growing up in a world where AI just answers things.

You ask. It responds. Confidently. Immediately. With complete sentences and no visible hesitation.

And that’s exactly what worries me.

The Real Problem

It’s not that the answers are always wrong.

It’s that the answers feel authoritative whether they’re right or wrong. There’s no visible uncertainty. No “I’m not sure about this part.” No indication of where the confidence ends and the hallucination begins.

Adults who’ve used these tools long enough develop a skepticism instinct. We’ve been burned enough times to check the important things.

Kids who grow up with this as the default — I’m not sure that instinct develops the same way.

They’re not questioning how it works under the hood. They’re taking the output at face value because it’s AI, and AI sounds like it should know.

That’s the part that concerns me most.

What I Want Them to Understand

I build AI on weekends. I know what’s inside it — not perfectly, but enough.

Enough to know that these models are pattern-matching engines trained on human text. They produce the most statistically likely continuation of whatever you asked. That process generates remarkably useful output. It also generates confident-sounding nonsense with no distinction between the two.
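That "most statistically likely continuation" idea can be shown with a toy sketch. This is not how production models actually work (they use neural networks, not word counts), but a tiny bigram model makes the key point concrete: the output's fluency comes purely from frequency in the training text, and truth never enters the process. All names here are illustrative.

```python
# Toy sketch, NOT a real LLM: a bigram model that always emits the
# statistically most likely next word. "Confidence" here is just
# frequency in the training text -- truth never enters the process.
from collections import Counter, defaultdict

training_text = (
    "the sky is blue . the sky is green . the sky is blue . "
    "the grass is green ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(training_text, training_text[1:]):
    follows[a][b] += 1

def continue_from(word, steps=4):
    """Greedily extend `word` with the most frequent next word."""
    out = [word]
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(continue_from("the"))  # fluent, confident, indifferent to truth
```

The model happily continues "the sky is..." with whichever color appeared most often, and it would do so just as smoothly if the training text were entirely wrong.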

The model doesn’t know what it doesn’t know.

That’s not a flaw that will get fixed in the next version. That’s structural.

My kids need to understand that. Not to distrust AI — but to use it the way I use it. As a starting point, not a final answer. As a thinking partner, not an authority.

The Bigger Concern

Critical thinking isn’t just about AI.

It’s about developing the instinct to ask: how do I know this is true? Who benefits from me believing this? What’s the counter-argument? What am I missing?

That instinct gets built through friction. Through being wrong. Through checking something and discovering the source was bad. Through learning that confident delivery and accuracy are completely unrelated.

If AI removes that friction — if it always provides a smooth, reasonable-sounding answer — the instinct doesn’t develop.

And that instinct is one of the most valuable things a person can have. In any era. But especially in this one.

What I’m Actually Doing About It

When my kids use AI for something, I ask them to explain the answer back to me.

Not to check their homework. To see if they understood it or just copied it.

If they can explain it, they learned something. If they can’t, the AI did their thinking for them.

There’s a difference. And it’s worth making that difference visible early — before the habit of outsourcing understanding becomes invisible.

I’m not anti-AI. I build with it every weekend.

I just want my kids to use it like a tool, not treat it like an oracle.

Because the moment you stop questioning the tool, the tool starts thinking for you.

And that’s not intelligence.

That’s delegation without oversight.