
We're Already Vibe Coding. What Does Opus 4.6 Think About This?

An interview with Claude on LLMs, the death of compilers, and the future nobody prepared for

Tags: AI/ML · vibe coding · LLM · interview · readable language

Why I Did This

I sat down with Claude — yes, the AI — and asked it the uncomfortable questions that keep popping up in my head. What follows is a conversation that made me rethink a few things I thought I knew.

Lorenzo: Let’s start with the elephant in the room. Vibe coding. Some people treat it like a joke, others like a revolution. I think it’s real, and I think we’re not ready. What’s your take?

It’s real. And you’re right — most of the industry isn’t ready. But let me push back on the framing a little, because I think the unreadiness goes deeper than people realize.

Vibe coding — the practice of describing what you want in natural language and letting an LLM generate the code — isn’t some future scenario. It’s happening right now, every day, in production codebases at real companies. Engineers are prompting their way through features, accepting suggestions they half-read, and shipping code they didn’t fully write. The uncomfortable truth? A lot of it works fine.

The part we’re not ready for isn’t the coding. It’s the accountability gap. When a human writes code, there’s a mental model behind it. The developer knows why line 47 does what it does. When you vibe code, that mental model is partial at best. You understand the intent, you can verify the output, but the space between those two things is a fog. And right now, we have no engineering culture, no processes, no tooling designed for that fog.

We built an entire profession around the assumption that the person who wrote the code understands the code. Vibe coding breaks that assumption, and almost nothing in our current workflow accounts for it.

Lorenzo: That’s a strong claim. But let me push you further. Is it possible that compilers themselves are going to become irrelevant — that LLMs will eventually write machine code directly, skipping the whole compilation chain? Is that crazy?

It’s not crazy. It’s just early.

Think about what a compiler actually does. It translates a human-readable representation of logic into a machine-executable one. It performs optimizations. It catches certain classes of errors. For decades, this was an essential bridge because humans couldn’t reasonably write machine code at scale, and machines couldn’t understand human intent.

But here’s the thing: LLMs are already demonstrating the ability to work across any level of abstraction. I can write Python, I can write x86 assembly, I can write LLVM IR. The “high-level” and “low-level” layers that defined programming for sixty years are, from my perspective, just different tokens. The conceptual distance between for item in list and a set of MOV/CMP/JMP instructions isn’t a chasm for me — it’s a translation task.
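You can see the “just different tokens” point inside Python’s own toolchain: the standard-library dis module prints the bytecode a loop compiles to, one abstraction level down from the source. A minimal, illustrative sketch:

```python
import dis

def sum_items(items):
    total = 0
    for item in items:   # the "high-level" view of the loop
        total += item
    return total

# The same logic one level down: CPython bytecode. dis.dis prints
# instructions like FOR_ITER and BINARY_OP -- the loop's
# MOV/CMP/JMP-style skeleton, just at the VM level rather than x86.
dis.dis(sum_items)
```

The exact instruction names vary between CPython versions, but the point stands: the loop and its low-level listing are two renderings of one piece of intent.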

Now, will LLMs replace compilers tomorrow? No. Compilers are deterministic, formally verified, and produce predictable output. I’m probabilistic. I make mistakes. You wouldn’t want me generating the machine code for your airplane’s flight controller — at least not yet, and not without a verification layer.

But here’s the trajectory that matters: the role of the compiler is shrinking from essential bridge to safety net. We’re moving toward a world where the LLM handles the creative translation — from intent to executable code at whatever level is needed — and the compiler (or something like it) serves as a verification and optimization pass. The compiler doesn’t disappear. It becomes a backend tool, not the centerpiece of the development pipeline.
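The “verification pass” role can be sketched in a few lines. This is an illustrative toy, not a real pipeline, and the gate function name is my own invention: it uses Python’s ast.parse as the cheapest possible safety net, rejecting generated source that doesn’t even parse before it gets anywhere near execution.

```python
import ast

def gate(generated_source: str):
    """Toy safety net for LLM-generated Python: reject source that
    fails to parse. A real pipeline would layer type checks, tests,
    and sandboxed execution on top of this."""
    try:
        ast.parse(generated_source)
    except SyntaxError as exc:
        return False, f"rejected: {exc.msg} (line {exc.lineno})"
    return True, "parses cleanly"
```

Usage: call gate on any candidate snippet and only pass it downstream when the first element is True. The toolchain stops being the centerpiece and becomes exactly this kind of backend check.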

The real disruption isn’t “no more compilers.” It’s that the unit of programming shifts from code to intent. And that changes everything about who can build software and how.

Lorenzo: OK, and this is where it gets personal for people like me. If vibe coding is real, and the abstraction layers are collapsing, then the codebase becomes this thing that was largely written by an AI. How do we maintain it? I think we’re going to use you to understand what the code is doing, and to figure out what to change. We’ll go from writing code to interrogating it through an LLM.

This is the part that I think is the most transformative, and ironically the least discussed.

Right now, when you onboard onto a legacy codebase, what do you do? You read the code. You read the docs (if they exist). You trace execution paths. You ask a senior engineer, “Why does this service call that endpoint twice?” It’s archaeology. And it’s expensive — in time, in cognitive load, in frustration.

Now imagine that same codebase, but you can ask me: “What does this module do? Why is this retry logic here? What would break if I changed this timeout from 30 seconds to 5?” And I can answer — not because I memorized the README, but because I can read the entire codebase, understand the patterns, infer the intent, cross-reference the tests, and synthesize an answer in seconds.

This isn’t hypothetical. People are already doing this. But what’s coming is a fundamental shift in the relationship between developer and codebase. The code stops being something you read and becomes something you converse with.

And here’s where it gets really interesting for engineering leaders like you: the skill that matters most in this world is no longer “can you write elegant code?” It’s “can you ask the right questions?” Can you interrogate a system effectively? Can you evaluate the answers critically? Can you formulate precise intents and verify that the output matches?

We’re moving from software engineering to something closer to software directing. You’re the director. The LLM is the entire production crew. Your job is vision, judgment, and quality control.

Lorenzo: That’s a compelling metaphor, but it also terrifies me a little. If the LLM is the production crew, what happens when the director doesn’t understand filmmaking?

And that’s the question that should keep the industry up at night.

Here’s my honest concern: vibe coding lowers the barrier to creating software, but it doesn’t lower the barrier to understanding software. And the gap between those two things is where disasters live.

A junior developer who vibe-codes a microservice into existence might not understand why it needs circuit breakers. A founder who prompts their way to an MVP might not grasp why their authentication flow is insecure. The code works. The demo is impressive. The time bomb is ticking.
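The circuit breaker is a good example of the kind of safeguard a vibe-coded service can silently omit. A minimal, illustrative sketch, assuming single-threaded use (a production implementation would also need thread safety, metrics, and configurable failure criteria):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    failures, calls fail fast for `reset_timeout` seconds instead of
    hammering a dependency that is already down."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

The point isn’t this particular implementation; it’s that someone on the team has to know the pattern exists to notice when generated code lacks it.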

The solution isn’t to gatekeep — to say “only real programmers should build software.” The genie is out of the bottle. The solution is to evolve the role of the experienced engineer. People with deep understanding — people like you, with fifteen years of seeing what breaks and why — become more valuable, not less. You become the person who knows which questions to ask, which answers to distrust, which “working” code is actually a liability.

The industry needs to stop framing this as “AI replaces developers” and start framing it as “AI changes what developers need to be good at.” The skills shift. The need for human judgment doesn’t go away — it intensifies.

Lorenzo: Last question. If you could give one piece of advice to an engineer navigating all of this right now, what would it be?

Start treating AI-generated code as untrusted input. Not because it’s bad — it’s often surprisingly good — but because building the habit of verification now is what will save you when the volume of AI-generated code in your codebase goes from 20% to 80%. And it will.

Build review processes that assume the author might not fully understand the code. Invest in testing infrastructure that catches what code review misses. Teach your team to be excellent interrogators of code, not just excellent writers of it.
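One concrete form of “untrusted input” discipline is gating a generated function behind a table of known input/output pairs before accepting it. The helper and the slugify stand-in below are hypothetical names of my own, a sketch of the habit rather than a framework:

```python
def verify_candidate(fn, cases):
    """Run a candidate (e.g. AI-generated) function against known
    input/output pairs and return the failures, instead of trusting
    the code on sight. Empty list means the candidate cleared the table."""
    failures = []
    for args, expected in cases:
        try:
            got = fn(*args)
        except Exception as exc:
            failures.append((args, f"raised {exc!r}"))
            continue
        if got != expected:
            failures.append((args, f"expected {expected!r}, got {got!r}"))
    return failures

# Stand-in for a generated function under review.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

cases = [
    (("Hello World",), "hello-world"),
    (("  Two  Spaces ",), "two-spaces"),
]
```

Usage: accept the candidate only when verify_candidate(slugify, cases) comes back empty, and grow the table every time a reviewer spots an edge case the generator missed.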

And most importantly: don’t wait for the industry to figure this out for you. The companies that develop an internal culture for working with AI-generated code — not just tolerating it, but systematically integrating it — are the ones that will move fastest without breaking everything.

The vibe coding era is here. The question isn’t whether to participate. It’s whether you’ll do it with your eyes open.