A few weeks ago, I posted a casual tweet about the power and accuracy of Opus 4.5. I was trying to capture how quickly I felt things were changing when I said software engineering was close to being "done." The tweet went viral, and not in a good way.
Now that people have had some time with the new model, I think more of them understand where I was coming from. I meant to say that we are headed towards a world where coding is done. Someday soon we'll close our IDEs and never look back. But I didn't choose my words carefully, because I hadn't given much thought to the distinction. I learned software engineering at Facebook, where it was precisely the same thing as coding. We didn't do architecture reviews or design docs. The culture was "put up the PR or shut up."
So this episode prompted me to dig into where the term "software engineering" actually came from. It turns out that it was coined in 1968, when fifty computer scientists met at a NATO conference in Garmisch, Germany, to address an emerging crisis: software projects were getting bigger, more critical, and more chaotic. They called the conference "Software Engineering," a deliberately provocative name, since no such discipline existed. The hope was that naming it might will it into being. It worked—the name stuck and a kind of discipline grew up around it. I spent my career in it. But how long do things like this last?
There's a surprisingly elegant way to think about this: the Copernican Method, devised by astrophysicist J. Richard Gott during a visit to the Berlin Wall in 1969. His insight was simple but not obvious: there was nothing special about the timing of his visit. He was probably seeing the Wall sometime in the middle of its existence, not at the beginning or the end. The Wall was then eight years old; only if his visit happened to fall in the first quarter of its lifetime, a 25% chance, would it stand more than three times that long again. So his math gave it only a 25% chance of lasting beyond 1993. It fell in 1989.
If Gott had been asked about software engineering on that same trip, just a year after the NATO conference, he'd have given it less than a 2% chance of lasting this long.
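If you want to check the arithmetic, here's a minimal sketch in Python (the function name is mine, and I'm taking roughly 2025 as "now"): assume there's nothing special about the moment of observation, so the fraction of the lifetime already elapsed is uniformly distributed, which works out to a survival probability of past / (past + x) for lasting another x years.

```python
# A minimal sketch of Gott's delta-t argument (the name gott_survival is mine).
# Assumption: the fraction of total lifetime already elapsed at the moment of
# observation is uniform on (0, 1). Then:
#   P(future lifetime > x | past lifetime = p) = p / (p + x)

def gott_survival(past_years: float, more_years: float) -> float:
    """Chance something that has lasted `past_years` lasts at least `more_years` longer."""
    return past_years / (past_years + more_years)

# Berlin Wall in 1969: eight years old. Chance of still standing 24 years later, in 1993:
print(gott_survival(8, 24))   # 0.25

# Software engineering in 1969: one year old. Chance of lasting another ~56 years, to 2025:
print(gott_survival(1, 56))   # ~0.018, i.e. under 2%
```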
And yet: maybe it's only now that we're entering the era of software engineering, properly defined. Claude handles the grunt work; we focus on the interesting problems. The tedious parts—the boilerplate, the googling, the mechanical translation of intent into syntax—increasingly just happen. What remains is the part that was always supposed to matter: systems and judgment and knowing what to build, not just how. Enjoy it. It may be brief. It seems like software engineering is going to be pretty easy without the drag of coding.
Not everyone sees it this way. The most common objection I've heard is that "compilers are deterministic. You can't trust LLM output the same way."
This sounds reasonable. It's also wrong.
Deterministic doesn't mean predictable. A compiler is indeed deterministic—same input, same output. But you can't know what it will output without running it. This was one of the earliest insights in computer science: in general, you can't predict what a program will do without executing it.
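Here's a toy illustration of mine (nothing to do with any real compiler): the little function below is fully deterministic, yet there's no known way to say what it returns for a given input short of running it.

```python
# Deterministic: the same input always produces the same output.
# Predictable it is not: no one knows a closed-form way to read off the answer
# without just running the loop (this is the Collatz iteration; the name
# collatz_steps is mine).

def collatz_steps(n: int) -> int:
    """Number of Collatz steps needed for n to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- and the only way to know that was to run it
```

That's the gap between determinism and predictability: the output is fixed, but inspection alone won't reveal it.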
But let's say we went through and verified every line. Ken Thompson showed forty years ago that that's still not enough. In "Reflections on Trusting Trust," he demonstrated how to hide a backdoor in a compiler that perpetuates itself—the compiler binary contains malicious code that re-inserts itself whenever the compiler compiles itself. Even if the source is clean, the binary is compromised. You can't verify your way out by reading source code.
The lesson isn't "don't trust code you didn't write"—that's impossible. The lesson is that trust can never be fully grounded in verification. At some point, you have to trust. There's always a trust boundary you can't see past.
So why do we trust compilers? Not because they're deterministic. Not because we've verified them. We trust them because they've earned it through years of use by thousands or millions of users. Bugs surface and get fixed. The ecosystem matures.
AI-generated code will earn trust the same way—through time, usage, and failures that get fixed. Or maybe faster: if models can generate formal proofs alongside code, we might end up with more verification than we ever had with compilers.
Trust isn't derived from determinism. Trust is earned. Trust is social.
And it turns out that working with AI is fundamentally a social skill. Researchers have found that what predicts success with AI is "Theory of Mind"—the ability to model another agent's perspective and adapt on the fly. Not technical depth, not prompt engineering. The best engineers have always been effective communicators and thoughtful collaborators. That used to mean teammates and colleagues. Now it includes Claude.
When your tools change weekly, the work becomes figuring out how to work. The specific techniques have a half-life measured in months now. What remains is the meta-skill: staying aware, staying adaptive, staying in dialogue with something that keeps getting smarter.
I get the anxiety. It's all quite disorienting. But we can't fight the future. The future is what we make of it. And this future is already changing us.
About a month after each new model release, people start complaining that it's been "nerfed"—somehow slower, dumber, less alive. Sometimes they're right! We've had inference bugs and harness regressions. But they are rare and relatively minor. More often, what's changing is you. You distill the model into yourself—absorbing its patterns, internalizing its reasoning. Your baseline rises. Now you notice gaps you couldn't even perceive a month ago.
We're probably hardwired for this. I think humans evolved the ability to distill other humans—to learn from collaboration, to absorb the skills of the people around us. This has always been the engine of technology, but now we're doing it with something new.
We're not competing with AI. We're co-evolving with it. Working with Claude changes how we think, what we emphasize, what questions we ask. The model gets better and that makes us better; the limit keeps receding.
Software engineering emerged in 1968 to describe something new. The skills we're developing now—and the job they belong to—will need a new name too. Whatever we call it, we're probably right in the middle of it. That's what makes it interesting.