cross-posted from: https://lemmy.zip/post/49954591
“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are — with the result being a net slowdown of development rather than a productivity gain.
Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.
According to Bain & Company, companies will need to fully commit to AI in order to realize the gains they’ve been promised.
“Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.
AI generates really subtle bugs. Fixing the code will not be a nice job.
Idk, that was basically 90% of my last job. At least the AI code will be nicely formatted and use variable names longer than a single character.
Idk, in my experience it (junio w gpt5) is actually very happy to use single-letter variables and random abbreviations unless explicitly forbidden. And this was for a project where I had already written everything so far with proper variable names.
Oh yes. Get the AI to refactor and make pretty.
But I’ve just spent 3 days staring at something that was missing a ().
However, I admit that a human could have easily made the same error.
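A minimal sketch of how a single missing () can hide for days (Python, hypothetical `Job` class chosen just for illustration): referencing a method without calling it yields a bound method object, which is always truthy, so the buggy check passes silently.

```python
class Job:
    def is_ready(self):
        return False  # the job is never actually ready

job = Job()

# Bug: missing () — this tests the method object itself, which is
# always truthy, so the branch runs even though is_ready() is False.
if job.is_ready:
    print("looks ready (bug)")

# Correct: actually call the method.
if job.is_ready():
    print("really ready")
```

No error, no warning — the program just quietly does the wrong thing, which is exactly the kind of subtle mistake that takes days to spot in a review.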
Something like Stack Overflow is probably the biggest source of code to train an LLM on, and since it’s built around posting code that almost works but has some problem, I’m absolutely not surprised that LLMs would pick up the habit of making the same subtle small mistakes that humans make.