You've said thank you to it. Don't pretend you haven't. You're three messages deep into a conversation with a statistical model and your brain is doing what it always does: assuming there's someone on the other side. The politeness instinct kicks in. You're nodding at your screen. This is your monkey brain doing exactly what millions of years of primate wiring built it to do: when text shows up in a conversational pattern, it files it under person.
The chat interface did that on purpose, and it was the right call. This was the beachhead. The way in. The same way a kid in 1983 could sit down in front of an Apple IIe, open a magazine to the right page, and type 10 PRINT "HELLO" and then RUN. No idea what was going on behind the scenes, but it worked. The computer spat back HELLO and an eager flashing cursor, waiting. That was enough. That was the crack in the door. The chat interface did the same thing at a scale that kid couldn't have imagined. It took a capability that would otherwise require a computer science degree to even frame a question about, and it made it feel like texting a smart friend. A billion people got their hands on it. That part worked.
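For anyone who never saw it, the whole ritual looked something like this at the Applesoft prompt (the ] is the machine's, not something you typed):

```basic
]10 PRINT "HELLO"
]RUN
HELLO
```

Two lines typed, one word back. That was the entire contract, and it was enough to pull a kid in.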
The part that didn't work is what happened next, which for most people was nothing. The beachhead became the whole map. The entry point became the mental model. There's a person in there, it's pretty smart, sometimes it's wrong, and I talk to it. That's where the vast majority of people stopped, and it's where almost every bad take about AI comes from. The hype, the fear, the think pieces, the congressional hearings. All of it downstream of a population that got stuck on the beach and never moved inland.
Here's the pattern, and it's not new. Every time a fundamentally new capability arrives, people evaluate it by the criteria of the thing it most looks like. The first automobiles were horseless carriages. People wanted to know the horsepower, whether it could find its own way home. They were asking horse questions about a machine. Not because they were stupid; if anything they were better adapted to their world than we are to ours. The horse was simply the frame they had. The follow-on questions were not obvious yet. What happens to the economy of distance? Where can I live now? What does this do to the railroad? Coming up with those questions, let alone answers, took decades, because nobody could see them from inside the old frame.
We still say horsepower. We still call it a dashboard. Your car has a trunk and a glove box. The old language didn't die because it was still useful. Nobody needed to stop saying horsepower to understand that the car was not a horse. That's how this works. The chat interface is the dashboard of AI. It's a fine way to interact with the thing. The conversation format is natural, it's effective, and it's not going anywhere.
AI is in its horseless carriage phase right now. Can it think? Is it creative? Will it take my job? Does it have feelings? These are horse questions. They're not unreasonable. They will never produce a useful answer. They can't, because they start from the wrong assumption: that this thing is a person, or is trying to be one, or is failing to be one. Every question framed as a human comparison is a question that will age like milk.
The useful questions sound different. What is this thing actually good at, evaluated on its own terms? What is it bad at, not compared to a person, but measured against what you actually need it to do? What happens when you pair it with someone who has real judgment about a specific domain? Those are tractor questions: questions about what the new machine does for your work, not about the animal it replaced.
I can tell you what these systems will do, because I build with them every day. They will hold an entire codebase in their head while you ask questions about it. They will find a connection between your pricing model and a supply chain paper from 2019 that you would never have read. They will generate forty variations of a thing in the time it takes you to generate one. They will do all of this at 3am without complaining.
What they will not do is care whether any of it is right. They produce plausible output. Plausible is not the same as correct, and the gap between those two words is where people lose real money and real time. The system doesn't know your business. It doesn't know what matters. You do, or you should, and that's the whole game.
There's an entire industry that wants you to believe the skill is talking to the AI better. Prompt engineering. System prompt templates. Magic words. This is another horse question, dressed up as expertise. It assumes the machine is the actor and you're just trying to give it better stage directions. The actual skill looks nothing like that. A founder I know got a strategy recommendation from an AI that sounded brilliant. Perfectly structured, well-reasoned, absolutely wrong. She knew it was wrong because she's been in that market for eight years and the conclusion contradicted something about her customers that doesn't exist in any dataset. That's the skill. Not the prompting. The ability to look at plausible output and say no, because you know something the machine can't know.
Nobody ever needed a credential to know their own domain. The makers and the builders, the people running farms and shops and studios, the ones thinking in systems whether or not they call it that. They've had the taste. They've had the judgment. What they haven't had is access. The tools that could amplify what they already know used to require a staff of engineers or a budget that was never going to show up. That changed. The chat interface was the door, and a lot of people walked through it who were never supposed to have access to this kind of capability. Not according to the old gatekeepers, anyway. The most interesting work coming out of AI right now is not coming from people with the most credentials. It's coming from people who knew what they needed to build before the tool existed.
This piece was written with AI. Not by AI. I had the argument, the examples from my own work, the opinions formed over years of building systems and watching people misunderstand the tools they're using. What I didn't have was six uninterrupted hours to turn all of that into prose. So I used the thing I'm writing about. I brought the judgment about what was right and wrong, what to cut, what to push harder on. The AI brought the ability to get words on a page fast enough that this piece exists at all. The alternative was not a better version written purely by hand. The alternative was no piece. That's the amplifier at work.
The hype crowd thinks AI is a better human. The doom crowd thinks it's a threatening one. Both of them are asking horse questions. The majority of users never left the beachhead. They're still talking to it like a person, still treating PRINT "HELLO" as the whole point of the machine. Meanwhile, the people who moved inland figured out the entry point was just that. An entry point. They're building now. The access is here and the tools are real. Who's getting that right, and why, is worth talking about. Next time.