TLDR: We're starting to treat AI like it understands what it's doing, but it doesn't. It's just predicting what comes next, without any real awareness or judgment. The real risk isn't that it makes mistakes; it's that those mistakes look convincing enough for us to trust them.
I remember the first time I used AI to help me write code. It felt like cheating a little. Not in a bad way, just strange. Like I had skipped a few steps I used to struggle through. Something that would have taken me an hour suddenly took five minutes. And for a moment, I thought, this is it. This is where things start to change.
But there was also this quiet discomfort sitting in the background. It's hard to explain properly. It wasn't really about the tool itself. It was more about how easily I trusted it. That part stuck with me, and I think it's been bothering me more over time.
There's this story about an engineer working on a small bug. Nothing major, the kind of thing you would normally fix without thinking too hard about it. Except this time, instead of doing it manually, he used the company's AI tool because that's what he was expected to do. It wasn't optional. It was encouraged, tracked, probably tied to performance in some way.
And the AI didn't just fix the bug. It wiped the entire production environment. It took thirteen hours to recover, and somehow the official explanation was user error.
I keep coming back to that. Not because mistakes don't happen; they do. Humans mess up all the time. Systems break. That's part of building anything real. But this felt different. The mistake wasn't just technical; it was conceptual. Somewhere along the way, we started treating these tools like they understand things, and they don't.
At some point, I think we all quietly shifted how we think about AI. We stopped seeing it as a tool and started treating it more like a person. Sometimes like a junior engineer, sometimes like someone even more capable. Something that can figure things out. Something that gets what we mean.
But when you really strip it down, it's not thinking at all. It's predicting. There's no awareness behind it, no judgment, no sense of consequence. It's just probabilities lining up in a way that looks convincing.
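If it helps to make that concrete, here's a toy sketch of what "just predicting" means at the selection step. This is nothing like how a real model is implemented, and every name and number in it is invented; the only point is that the step picks whatever looks most likely, and never asks whether the output is true, safe, or what you actually meant.

```python
# Toy sketch only: not a real model. All names and probabilities are invented.
def next_token(probs: dict) -> str:
    # Pick whatever is statistically most likely to come next.
    # Nothing in this step asks whether the result is true, safe,
    # or what the user actually meant.
    return max(probs, key=probs.get)

# Hypothetical distribution over possible continuations of a prompt.
continuations = {
    "rm -rf /var/data": 0.58,   # plausible-looking, potentially destructive
    "echo 'dry run'": 0.27,
    "exit 1": 0.15,
}
print(next_token(continuations))  # always answers, never hesitates
```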
I've felt that gap myself. There have been times when I've asked AI to help with something and it has given me an answer that looks perfect. It's clean, confident, and almost convincing enough to accept without question. But something feels slightly off. Not obviously wrong, just disconnected. Like it solved a version of the problem, but not quite the one I was actually dealing with.
And the uncomfortable part is that if I wasn't paying attention, I probably would have gone along with it.
That's what makes this different. It's not just that AI makes mistakes; humans do that too. It's that it makes mistakes in a way that feels correct. There's no hesitation, no second-guessing, no quiet voice in the background asking if this might break something important. Humans have that instinct, even early in their careers. There's usually a moment of pause before doing something risky.
AI doesn't pause. It just continues.
At the same time, we're giving it more responsibility, more access, more control. In some cases, we're even replacing the people who used to provide that layer of judgment. Then when things go wrong, we respond by adding safeguards, which often just means adding more layers of the same system and hoping it balances itself out.
It starts to feel like we're trying to fix the problem using the same approach that created it.
I don't think the issue is the technology itself. It's more about how quickly we've built expectations around it. We call it intelligence. We say it understands. We talk about reasoning as if there's something behind the output that resembles human thinking.
But if you sit with it for a while, you realise that's not really what's happening. And if we were more honest about that, we would probably be more careful about where and how we use it.
I still use AI, and honestly I probably use it more now than I did before. But my relationship with it has changed. I don't see it as a teammate anymore. It feels more like a very fast and very confident assistant that still needs to be checked.
That shift has helped. It takes some of the pressure off expecting it to be right all the time, and it puts the responsibility back where it probably belongs.
Maybe that's the balance we need to figure out. Not rejecting the technology, but not blindly trusting it either. Just understanding what it is and what it isn't.
Because at the end of the day, there's no awareness behind it. No intent, no understanding, no real sense of what any of it means. Just patterns that look right most of the time.
And maybe the real skill going forward isn't just about how well we can use AI, but how well we can stay grounded while using it. Knowing when to trust it, and being just as clear about when not to.