It's crazy to me how engineers are thinking right now. I want to explain my point of view on why "Vibe Coding" is not the way software should evolve.
First of all, X has become the medium for selling vibe-coded products, and beyond that, the marketing channel for the people who promote the term. I'm sure they're making money off this hype somewhere. The indoctrination these people push doesn't come from genuine interest in technology; it comes from a superficial interest, the business mindset of having more and selling more regardless of what you're selling. The programming community was never like this. The best products programmers ever had were created by programmers, not by companies with superficial interests. They came from a person who lived the pain point the product solves and who was willing to dedicate the necessary time to solving it.
The way software is being transformed is not right, because that way reflects only how companies see the product. There was always a difference between how companies see software (a pragmatic vision) and how programmers see it (a piece of art). There were always fights about this, but those fights brought out the best of both worlds: pragmatism + the art of code = the best product, solid and worth looking at. Now we're focusing only on pragmatism. But why? What do we gain by being pure pragmatists? What do we gain by developing 1000x faster than before? More money? Is that the interest we're giving to the future of programming, or to improving the products we had before?
I agree that every technology must evolve. Programming used to be done with punch cards, and then high-level languages were created. Everyone who programmed with punch cards complained about those languages because they thought they would be limiting. Many compare that episode to the arrival of the Vibe Coding concept in the evolution happening in software now. I'd agree, because software has to evolve, but we must also think about how that evolution should happen and what it should be. Evolution yields better results; thoughtful, critical evolution yields even better ones.
While noobs and managers are excited that the input language to this compiler is English, English is a poor choice of language for many reasons:
- It's not precise in specifying things. The only reason it works for many common programming workflows is because they are common. The minute you try to do new things, you need to be as verbose as the underlying language.
- AI workflows are, in practice, highly non-deterministic. While different versions of a compiler might give different outputs, they all promise to obey the spec of the language, and if they don't, there's a bug in the compiler. English has no similar spec.
- Prompts are highly non-local: changes made in one part of the prompt can affect the entire output.

tl;dr: you think AI coding is good because compilers, languages, and libraries are bad.
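That non-determinism point can be sketched with a toy model: at temperature > 0, LLM decoding samples the next token from a probability distribution, so the same input can yield different outputs on different runs. The distribution below is hard-coded purely for illustration; it is not any real model.

```python
import random

# Toy "model": a fixed next-token distribution for some prompt.
# (Invented for illustration -- a real LLM computes this per context.)
VOCAB = {"sort": 0.5, "filter": 0.3, "reverse": 0.2}

def sample_next_token(rng: random.Random) -> str:
    # Weighted sampling: this is what temperature > 0 decoding does.
    tokens = list(VOCAB)
    weights = [VOCAB[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same "prompt", different random seeds -> possibly different outputs.
print(sample_next_token(random.Random(1)))
print(sample_next_token(random.Random(7)))
# A compiler, by contrast, maps the same source to the same output.
```

The contrast with a compiler is exactly the author's point: the mapping from input to output here is a distribution, not a function.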
In all this evolution there's something that doesn't add up. George Hotz (geohot) makes a very interesting point: human language, which many call the new programming language (the evolution), is not deterministic, and it's very foolish to migrate from deterministic languages to one that isn't deterministic at all. So, first: is human language deterministic?
No. And it's not an accident: it can't be.
A deterministic system is one where, given an initial state and fixed rules, the result is completely determined. Like a well-defined mathematical function.
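A minimal sketch of that definition (the function names are invented for illustration): a pure function is deterministic, while one whose result depends on hidden state is not.

```python
import random

def pure_double(x: int) -> int:
    # Deterministic: the same input always yields the same output.
    return 2 * x

def noisy_double(x: int) -> int:
    # Non-deterministic from the caller's point of view:
    # hidden randomness influences the result.
    return 2 * x + random.choice([-1, 0, 1])

# pure_double(21) is 42 every single time; noisy_double(21) is not.
```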
If we look at how a computer works, everything is deterministic. It's a very dumb piece of technology that receives certain combinations of energy to which we assign meaning, but all of that together, out of a dumb system, becomes a piece of art. It's like a black dot on a sheet of paper: on its own it's dumb and useless, but many black dots, well organized, can form a realistic portrait. So... how on earth do we expect to drive a machine that is 100% deterministic with a language that isn't deterministic at all? That's where all the controversy around this ill-considered attempt at evolution comes from.
Human language doesn't meet determinism because the same phrase:
- said by two different people,
- in different contexts,
- with different intentions,
- with different mental histories,
produces different interpretations. There's no function of the form meaning = f(phrase).
The closest thing that exists is: meaning = f(words, context, intention, culture, memory, expectations, emotional state, prior knowledge…). And that function is neither closed nor finite. Context is never completely specified; there are always hidden variables. In fact, human language was not designed to transmit exact data. It was designed to coordinate incomplete minds in an uncertain world.
If language were deterministic:
- there would be no metaphors
- there would be no irony
- there would be no poetry
- there would be no humor
- there would be no productive ambiguity
- there would be no meaning negotiation
It's not a useful language for computers, but for our brain it is. Precisely because our language is not deterministic, it's useful for the brain: we live in a world that is not perfectly predictable. That world really does follow the function result = f(words, context, intention, culture, memory, expectations, emotional state, prior knowledge…). And the brain itself is not completely deterministic in the classical sense.
There are several levels:
- At the physical level: neurons obey physical laws, but with noise, chaos and extreme sensitivity to initial conditions.
- At the cognitive level: mental states are not completely specified; many representations are distributed and probabilistic.
- At the behavioral level: small internal variations produce different decisions.
This is called stochastic nonlinear dynamics, not "magical free will". The brain resembles more:
- a chaotic system,
- with feedback,
- memory,
- and continuous learning,
than a finite state machine.
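The "extreme sensitivity to initial conditions" above can be made concrete with the logistic map, a textbook example of deterministic chaos (a toy model chosen for illustration, of course, nothing like a real brain):

```python
def logistic(x: float, r: float = 4.0) -> float:
    # Logistic map at r = 4: fully deterministic, yet chaotic.
    return r * x * (1 - x)

def trajectory(x0: float, steps: int) -> float:
    # Iterate the map from a given starting point.
    x = x0
    for _ in range(steps):
        x = logistic(x)
    return x

# Two starting points that differ by one part in a billion...
a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-9, 50)
# ...typically end up far apart after 50 iterations.
print(abs(a - b))
```

Every step is a fixed rule, so the system is deterministic in the strict sense, yet in practice it is unpredictable: exactly the regime the bullet list describes.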
So, if the machines we have today were completely restructured to be exactly like the human brain (a chaotic system), we could say this evolution is perfectly adequate. But that's not the case. We're trying to adapt something that can't be adapted, and the result will only be patches that generate future errors. The worst part is that many say this has made them 1000x better, but the people who claim to push thousands of commits per day and review every PR are lying, because they're shipping code they don't look at. And this is generating security problems that will later have to be fixed with deterministic thinking and deterministic language. Evolution shouldn't leave flaws to fix later; those are symptoms that the proper evolution didn't happen in the proper environment. People in favor of all this are navigating a deterministic world (the computer) without even knowing what they're doing, and that's where staring at the console, waiting for whatever you did to work, comes from. We're like monkeys who, without knowing what the mathematical function is for or how to modify it, just wait for it to hand us the candy we want (superficial interest).
In the end, everything comes back to the same thing. Right now context engineers want everything super-ultra-mega detailed in .md files (or several of them), but what they're really doing is injecting determinism into the AI, which means the .md file is effectively a new programming language running on a non-deterministic engine. Notice that the one thing nobody fights about is Cursor's tab autocomplete: it's very useful, and that's the type of evolution we should be thinking about. You're using your brain with deterministic thinking, you know how you want the code to look, and it just gives it to you. But when we jump to generating a complete codebase, we know nothing. The engineer shouldn't vibe code. In programming you must know what you want to do and how you're going to do it, and the "how" must be very deterministic and detailed: "how are these variables going to be defined?". That doesn't invalidate tab completion, because by the moment you write, you already know the deterministic, detailed "how". With this new approach you only know what you want to do; the "how" is incomplete, and that will generate many errors. It's too much context, and when there's too much, we must apply "divide and conquer", a technique that remained valid through the evolution from punch cards to high-level languages. AI shouldn't handle the context; we should. We must hold it in our heads and use AI to make us more efficient, not make the AI more efficient.
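"Divide and conquer" here is the classic algorithmic pattern: split a problem too big to hold in your head into pieces small enough to solve directly. The textbook example is merge sort (a generic sketch, not anything from the post):

```python
def merge_sort(items: list[int]) -> list[int]:
    # Divide: split until the subproblem is trivially small...
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # ...conquer: merge the two already-sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1]))  # -> [1, 2, 5, 9]
```

The same discipline applies to context: no single piece ever has to be understood all at once.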
I'm not against AI. On the contrary, I think it's a super tool for creating better products, but those products must be thought through critically so they improve our workflow. We must know what we're going to write and execute next. What we couldn't do fast before was execute, because it depended on how fast we could write. That's what AI should focus on, not on removing the thinking that comes before executing. Tab autocomplete focuses only on executing, and it's a blessing: you never lose context and you stay in control.
I don't care if people in the programming world like Linus Torvalds are in favor of all this. I want to give my opinion, because having created something very valuable doesn't exempt them from being wrong. On the contrary, we can all be wrong, and that's exactly why I'm free to share my thinking: because I can be wrong too.