The vibecoders are becoming sentient
They're so close to actual understanding of how much they suck.
I'm not a programmer by any stretch, but what LLMs have been great for is getting my homelab set up. I've even done some custom UI stuff for work that talks to the open source backends we run. I think I've actually learned a fair bit from the experience, and if I had to start over I'd be able to do way more on my own than I could when I first started. It's not perfect, and as others have mentioned I have broken things and had to restart projects completely from scratch, but the second time through I knew where the pitfalls were, and I'm getting better at knowing what to ask for and telling it what to avoid.
I'm not a programmer, but I'm not trying to ship anything either. In general I'm a pretty anti-AI guy, but for the non-initiated who want to get started with a homelab, I'd say it's damn near essential for a quick turnaround and a fairly decent educational tool.
This is the correct way to do it: use it, see if it works for you, and try to understand what happened. It's not that different from using examples or Stack Overflow. With time you get better, but you need that last critical-thinking step. Otherwise you will never learn and will just copy-paste hoping it works.
As a programmer I've found it infinitely more useful for troubleshooting and setting things up than for programming. When my Arch Linux install nukes itself again I know I'll use an LLM; when I find a random old device or game at the thrift store and want to get it working I'll use an LLM; etc. For programming I only use the IntelliJ line-completion models, since they're smart enough to see patterns in the dumb busywork but don't try to outsmart me most of the time, which would only cost more time.
lol it did save me from my first rm -rf /
As a software developer, I've found some free LLMs to provide productivity boosts. It's a fairly hair-pulling experience, so don't try too hard to get a bad LLM to correct itself — learning to switch away from bad LLMs quickly is a key skill in using them. A good model is still one whose broken code you can fix yourself, then ask it to understand why your fix works. They need a long context window to not repeat their mistakes; Qwen 3 is very good at this. Open source also means a future of customizing to a domain (e.g., language-specific optimizations) and of privacy and unlimited use with enough local RAM, with some confidence that the AI is working for you rather than collecting data for others. Claude Sonnet 4 is stronger, but free access is limited.
The permanent downside of the high-market-cap US AI industry is that it will always be a vector for NSA/fascist empire supremacy and the Skynet goal, in addition to potentially stealing your input/output streams. The future for users who need to opt out of these threats is local inference, and open source that can be customized to the domains important to users and organizations. Open models are already at close parity, IMO from my investigations, and customization — relatively low-hanging fruit — is a certain path to exceeding parity for most applications.
No LLM can be trusted to let you do something you have no expertise in. That will remain an optimistic future for longer than you hope.
I think the key to good LLM usage is a light touch. Let the LLM know what you want, maybe refine it if you see where the result went wrong. But if you find yourself deep in conversation trying to explain to the LLM why it's not getting your idea, you're going to wind up with a bad product. Just abandon it and try to do the thing yourself or get someone who knows what you want.
They get confused easily, and despite what is being pitched, they don't really learn very well. So if they get something wrong the first time they aren't going to figure it out after another hour or two.
In my experience, they're better at poking holes in code than writing it, whether that's green or brownfield.
I've tried to get it to make sections of changes for me, and it feels very productive, but when I time myself I find I spend probably more time correcting the LLM's work than if I'd just written it myself.
But if you ask it to judge a refactor, then you might actually get one or two good points. You just have to really be careful to double check its assertions if you're unfamiliar with anything, because it will lead you to some real boners if you just follow it blindly.
But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product.
Yes. Kind of. It takes (a couple of days of) experience with LLMs to know that failing to understand your corrections means immediate delete and try another LLM. The only OpenAI LLM I tried was their 120B open source release. It insisted that it was correct in its stupidity. That's worse than LLMs that forget the corrections from 3 prompts ago, though I learned that is also grounds for deletion over any hope of usefulness.
It is not useless. You should absolutely continue to vibes code. Don't let a professional get involved at the ground floor. Don't inhouse a professional staff.
Please continue paying me $200/hr for months on end debugging your Baby's First Web App tier coding project long after anyone else can salvage it.
And don't forget to tell your investors how smart you are by Vibes Coding! That's the most important part. Secure! That! Series! B! Go public! Get yourself a billion dollar valuation on these projects!
Keep me in the good wine and the nice car! I love vibes coding.
Kinda hard to find jobs right now in the midst of all this but looking forward to the absolutely inevitable decade long cleanup.
Also, don’t waste money on doctor visits. Let Bing diagnose your problems for pennies on the dollar. Be smart! Don’t let some doctor tell you what to do.
IANAL so: /s
Not me, I'd rather work on a clean code base without any slop, even if it pays a little less. QoL > TC
I'm not above slinging a little spaghetti if it pays the bills.
I'm sure it's fun to see a series of text prompts turn into an app, but if you don't understand the code and can't fix it when it doesn't work without starting over, you're going to have a bad time. Sure, it takes time and effort to learn to program, but it pays off in the end.
Yeah, mostly agreed. In my experience so far, an experienced dev who's really putting time into their setup can greatly accelerate their output with these tools, while an inexperienced dev will end up taking way longer (and understanding less) than if they worked normally.
So there are multiple people in this thread who state their job is to unfuck what the LLMs are doing. I have a family member who graduated in CS a year ago and is having a hell of a time finding work, how would he go about getting one of these "clean up after the model" jobs?
I've been an engineer for over a decade and am now having a hard time finding work because of this LLM situation so I can't imagine how a fresh graduate must feel.
No idea, but I am not sure your family member is qualified. I would estimate that a coding LLM can code as well as a fresh CS grad. The big advantage that fresh grads have is that after you give them a piece of advice once or twice, they stop making that same mistake.
Where is this coming from? I don't think an LLM can code at the level of a recent cs grad unless it's piloted by a cs grad.
Maybe you've had much better luck than me, but coding LLMs seem largely useless without prior coding knowledge.
What's this based on? Have you met a fresh CS graduate and compared them to an LLM? Does it not vary person to person? Or fuck it, LLM to LLM? Calling them not qualified seems harsh when it's based on sod all.
It makes me so mad that there are CS grads who can't find work at the same time as companies are exploiting the H1B process saying "there aren't enough applicants". When are these companies going to be held accountable?
Never, they donate to get the politicians reelected.
This is in no way new. 20 years ago I used to refer to some job postings as H1Bait because they'd have requirements that were physically impossible (like having 5 years experience with a piece of software <2 years old) specifically so they could claim they couldn't find anyone qualified (because anyone claiming to be qualified was definitely lying) to justify an H1B for which they would be suddenly way less thorough about checking qualifications.
After they fill up on H1B workers and find out that only 1/10 is a good investment.
H1B development work has been a thing for decades, but there's a reason why there are still high-paying development jobs in the US.
The difficult part is going to be that new engineers are not generally who people think about to unfuck code. Even before the LLMs junior engineers are generally the people that fuck things up.
It’s through fucking lots of stuff up and unfucking that stuff up and learning how not to fuck things up in the first place that you go from being a junior engineer to a more senior engineer. Until you land in a lofty position like staff engineer and your job is mostly to listen to how people want to fuck everything up and go “maybe let’s try this other way that won’t fuck everything up instead”
Tell your family member to network, that’s the best way to get a job. There are discord servers for every programming language and most projects. Contribute to open source projects and get to know the people.
Build things, write code, open source it on GitHub.
Drill on leet code questions, they aren’t super useful, but in any interview at least part of the assessment is going to be how well they can do on those.
There are still plenty of places hiring. AI has just made it so that most senior engineers have access to a junior engineer level programmer that they can give tasks to at all time, the AI. So anything you can do to stand out is an advantage.
Answer is probably the same as before AI: build a portfolio on GitHub. These days maybe try to find repos that have vibe code in them and make commits that fix the AI garbage.
No idea, but I am not sure your family member is qualified. I would estimate that a coding LLM can code as well as a fresh CS grad. The big advantage that fresh grads have is that after you give them a piece of advice once or twice, they stop making that same mistake.
a coding LLM can code as well as a fresh CS grad.
For a couple of hundred lines of code, they might even be above average. When you split that into a couple of files or start branching out, they usually start to struggle.
after you give them a piece of advice once or twice, they stop making that same mistake.
That's a damn good observation. Learning only happens with re-training and that's wayyy cheaper when done in meat.
God bless vibe coders, because of them I'm buying a new PC build this week AND I've decided to get a PS5.
Thank you Vibe Coders, your laziness and sheer idiocy are padding my wallet nicely.
How are you finding work right now? Shit's rough out here haha.
Like trying to write a book just using auto complete
But I thought armies of teenagers were starting tech businesses?!
My boss is literally convinced we can now basically make programs that take rockets to Mars, and that it's literally clicks away. For the life of me, it is impossible to convince him that this is, in fact, not the case. Whoever fired developers because 'AI could do it' is going to regret it.
Maybe try convincing him in terms he would understand. If it was really that good, it wouldn't be public. They'd just use it internally to replace every proprietary piece of software in existence. They'd be shitting out their own browser, office suite, CAD, OS, etc. Microsoft would be screwing themselves by making chatgpt public. Microsoft could replace all the Adobe products and drive them out of business tomorrow.
I mean ... the first moon landings took a very low number of clicks to make the calculations, technically speaking
it is impossible to convince him that this is, in fact, not the case
He's probably an investor.
The tech economy is struggling. Every company needs 20% more every year, or it's considered a failure. The big fish have bought up every promising property on the map in search of this. It's almost impossible to go from small to large without getting gobbled up, and the guys gobbling up already have 7 different flavors of what you're trying to make on ice in a repo somewhere. There's no new venture capital flowing into conventional work.
AI has all the venture capitalists buzzing, handing over money like it's 1999. Investors are hopping on every hype train because each one has the chance of getting gobbled up and making a good return on investment.
These mega CEO's have moved their personal portfolios into AI funding and their companies pushing the product will line their pockets indirectly.
At some point, that $200/pp/m price will shoot up. They're spending billions on datacenters, and eventually those investments will be called in for returns.
When they hit the wall for training-based improvement, things got slippery. Current models cost exponentially more, making several calls for every request. The market's not going to bear that without an exponential price increase, even if they're getting good work done.
Obviously fake. Still funny though.
Are you saying the comment is fake, or the sentiment? This was actually posted to reddit: https://archive.is/U9ntj
Fake in that it's almost assuredly written and posted by someone who is actively anti-vibe coding and this is a troll on the true believers.
I love the one guy on that thread who is defending vibe coding, and is "about to launch his first application," and anyone who tells him how dumb he is is only doing so because they feel threatened.
You should(n't) watch Quin69. He's currently "vibe-coding" a game with Claude. Already spent $3000 in tokens, and the game was in such a shit state, that a viewer had to intervene and push an update that dragged it to a "playable" state.
The game is at a level of a "my first godot game", that someone who's learning could've made over a weekend.
It's strange, but I've seen lots of comments that are not aware this is fake. The ai hater crowd is using it as their proof, the other side saying he is using it wrong.
That's depressing. This is so obviously fake because of how entertainingly it's written and how the conclusion gets shoved in your face. No subtlety.
Is that what the weird extra width on some letters is, artifacts from some AI generating the post?
No, the phrasing makes it clear someone wrote a fictional account of becoming self aware that the output of vibe coding isn't maintainable as it scales.
No, the text itself. No vibe coder would write something like that. The artifacts you mentioned are the result of simple horizontal and vertical upscaling. If you zoom in you can see it better.
I don’t really care about vibe coders but as a dev with just under 2 decades in the field:
I can’t stress this enough: if you give me a PR with tons of new files and expect me to review it when you didn’t even review it yourself, I will 100% reject it and make you do it. If it’s all dumped into a single commit, I will whip your computer into the nearest body of water and tell you to go fish it out.
I don’t care what AI tool wrote your code. You’re still responsible for it and I will blame you.
When I see a sloppy PR I remind people “AI didn’t write that. You wrote it. Your name is on the git blame.”
Love it, I have a vibe coding colleague I will use this with.
I like this mentality. I might start telling people the same thing
If it’s all dumped into a single commit, I will whip your computer into the nearest body of water and tell you to go fish it out.
I'm going to steal this for an update to an internal guidance document for my dev team. Thank you.
Lmao glad I could help! I hate those big commits. They’re so much harder to traverse and know what’s going on. Developer experience has been big on my mind lately. Working 5 days a week is already hard, but there are moments when we can make tiny bits easier for each other.
I have never used an AI to code and don't care about being able to do it to the point that I have disabled the buttons that Microsoft crammed into VS Code.
That said, I do think a better use of AI might be to prepare PRs in logical and reasonable sizes for submission that have coherent contextualization and scope. That way when some dingbat vibe codes their way into a circle jerk that simultaneously crashes from dual memory access and doxxes the entire user base, finding issues is easier to spread out and easier to educate them on why vibe coding is boneheaded.
I developed for the VFX industry and I see the whole vibe coding thing as akin to storyboards or previs. Those are fast and (often) sloppy representations of the final production which can be used to quickly communicate a concept without massive investment. I see the similarities in this: a vibe code job is sloppy, sometimes incomprehensible, but the finished product could give someone who knows what the fuck they're doing a springboard to write it correctly. So do what the film industry does: keep your previs guys in the basement, feed them occasionally, and tell them to go home when the real work starts. (No shade to previs/SB artists, it is a real craft and vital for the film industry as a whole. I am being flippant about you for comedic effect. Love you guys.)
I think storyboards is a great example of how it could be used properly.
Storyboards are a great way for someone to communicate "this is how I want it to look" in a rough way. But, a storyboard will never show up in the final movie (except maybe fun clips during the credits or something). It's something that helps you on your way, but along the way 100% of it is replaced.
Similarly, the way I think of generative AI is that it's basically a really good props department.
In the past, if a props / graphics / FX department had to generate some text on a computer screen that looked like someone was Hacking the Planet they'd need to come up with something that looked completely realistic. But, it would either be something hand-crafted, or they'd just go grab some open-source file and spew it out on the screen. What generative AI does is that it digests vast amounts of data to be able to come up with something that looks realistic for the prompt it was given. For something like a hacking scene, an LLM can probably generate something that's actually much better than what the humans would make given the time and effort required. A hacking scene that a computer security professional would think is realistic is normally way beyond the required scope. But, an LLM can probably do one that is actually plausible for a computer security professional because of what that LLM has been trained on. But, it's still a prop. If there are any IP addresses or email addresses in the LLM-generated output they may or may not work. And, for a movie prop, it might actually be worse if they do work.
When you're asking an AI something like "What does a selection sort algorithm look like in Rust?", what you're really doing is asking "What does a realistic answer to that question look like?" You're basically asking for a prop.
Now, some props can be extremely realistic looking. Think of the cockpit of an airplane in a serious aviation drama. The props people will probably either build a very realistic cockpit, or maybe even buy one from a junkyard and fix it up. The prop will be realistic enough that even a pilot will look at it and say that it's correctly laid out and accurate. Similarly, if you ask an LLM to produce code for you, sometimes it will give you something that is realistic enough that it actually works.
Having said that, fundamentally, there's a difference between "What is the answer to this question?" and "What would a realistic answer to this question look like?" And that's the fundamental flaw of LLMs. Answering a question requires understanding the question. Simulating an answer just requires pattern matching.
I think this is great. I like hearing about your experience in the VFX industry since it’s unfamiliar to me as a web dev. The storyboard comparison is spot on. I like that people can drum up a “what if” at such a fast pace, but vibe coders need to be aware that it’s not a final product. You can spin it up, gauge what works and what doesn’t, and now you have feasibility with low overhead. There’s real value to that.
Edit: forgot to touch on your PR comment.
At work, we have an optional GitHub workflow that lets you call Claude in a PR and it will do its own assessment based on the instructions file we wrote for it. We stress that it’s not a final say and will make mistakes, but it’s been good in a pinch. I think if it misses 5 things but uncovers 1 bug, that’s still a win. I’ve definitely had “a-ha” moments with it where my dumb brain failed to properly handle a condition or something. Our company is good about using it responsibly and supplying as much context as we possibly can.
I like your previs analogy, because that’s how I’ve been thinking of it in my head without really knowing how to communicate it. It’s not very good at making a finished project, but it can be useful to demonstrate a direction to go in.
And actually, the one time I’ve felt I was able to use AI successfully was literally using it for previs; I had a specific idea of design I wanted for a logo, but didn’t know how to communicate it. So I created about a hundred AI iterations that eventually got close to what I wanted, handed that to my wife who is an actual artist, told her that was roughly what I was thinking about, and then she took the direction it was going in and made it an actual proper finished design. It saved us probably 15-20 iterations of going back and forth, and kept her from getting progressively more annoyed with me for saying “well… can you make it like that, but more so?”
Vibe coding is useful for super basic bash scripting and that's about it. Even that it will mess up, but usually in a super easily fixed way.
I don't think it has much to do with how "complex or not" it is, but rather how common it is.
It can completely fail on very simple things that are just a bit obscure, so it has too little training data.
And it can do very complex things if there's enough training data on those things.
Yes exactly.
"Implement a first order lowpass filter in C"
LLM has no issue.
"Implement a string reversal function in Wren"
LLM proceeds to output an unholy mix of python and JavaScript.
Even though the second task is trivial compared to the first, LLMs have almost no training data on Wren (an obscure semi-dead language).
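For anyone curious, the first task really is a few lines of textbook code, which is why it's plastered all over the training data. A minimal sketch (in Python rather than C, for brevity — the math is identical):

```python
# First-order lowpass filter in its exponential-moving-average form:
#   y[n] = y[n-1] + alpha * (x[n] - y[n-1])
# alpha is the smoothing factor in (0, 1]; smaller alpha = heavier filtering.
def lowpass(samples, alpha):
    out = []
    y = samples[0] if samples else 0.0  # seed with the first sample
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out
```

A string reversal in Wren would be just as trivial to a human, but the model has almost nothing to pattern-match against, so out comes the Python/JavaScript chimera.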
I've also found it useful for simple Python scripts when I need to analyze data. I don't use pandas/scipy/numpy/matplotlib enough to remember the syntax and library functions. By vibe coding it, I can have a script in minutes that reads a CSV with weird timestamps, scales some of the channels, filters out noise or detrends, performs a Fourier transform, and does a curve fit against a model.
But then obviously I know every intermediate step I want to do.
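To give a sense of the shape of those scripts, here's a minimal sketch of the detrend-then-FFT part using numpy (the signal here is synthetic and the numbers are made up; a real version would read the CSV with pandas first):

```python
import numpy as np

# Synthetic stand-in for one CSV channel: a 1.5 Hz sinusoid riding a linear drift,
# sampled at 100 Hz for 10 seconds.
t = np.linspace(0.0, 10.0, 1000, endpoint=False)
signal = 2.0 * np.sin(2 * np.pi * 1.5 * t) + 0.3 * t

# Detrend: subtract a least-squares linear fit.
slope, intercept = np.polyfit(t, signal, 1)
detrended = signal - (slope * t + intercept)

# Fourier transform to find the dominant frequency.
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

It's exactly the kind of glue where knowing the intermediate steps matters far more than remembering whether the function is called `rfftfreq` or `fftfreq`.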
When I want to be lazy and make some simple Excel macros — that's about the most I've trusted it with, and that it manages to do without fucking up and taking more time than just doing it myself.
No way. Youtube ad told me a different story the other day. Could that be a... lie? (shocked_face.jpg)
My entire IT career has been funded by morons like this. This is just the latest moronic idea that is going to pay my bills. Cleaning up after vibe coders has guaranteed my income until I die. You see, posts like this focus on the code that is broken and requires another dev to fix it enough to get it going. There is a long road from "finally working" to "production ready" to "optimized", and we get paid along every inch of the way.
The AI Fix podcast had a piece about how someone let an AI agent do the coding for them but had a disaster because he gave it access to the production database.
Very funny.
https://theaifix.show/61-replit-panics-deletes-1m-project-ai-gets-gold-at-math-olympiad/
A buddy of mine is into vibe coding, but he actually knows how to code as well. He'll iterate through the code with the LLM until he thinks it will work. I can believe it saves time, but you still have to know what you're doing.
The most amazing thing about vibe coding is that in my 20 odd years of professional programming the thing I’ve had to beg and plead for the most was code reviews.
Everyone loves writing code, no one it seems much enjoyed reading other people’s code.
Somehow though vibe coding (and the other LLM guided coding) has made people go “I’ll skip the part where I write code, let an LLM generate a bunch of code that I’ll review”
Either people have fundamentally changed, unlikely, or there’s just a lot more people that are willing to skim over a pile of autogenerated code and go “yea, I’m sure it’s fine” and open a PR
I suspect it's a bit of both. With agents the review size can be pretty small and easier to digest which leads to more people reviewing, but I suspect it is still more surface level.
I don't see how it would save time as someone whose job is to currently undo what "time" it "saves". You can give Claude Code the most fantastic and accurate prompt in the world but you're still going to have to explain to it how something actually works when it gets to the point, and it will, that it starts contradicting itself and over complicating things.
You said yourself he has to iterate through the code with the LLM to get something that works. If he already knows it, he could just write it. Having to explain to something HOW to write what you ALREADY know can't possibly be saving time. It's coding with extra steps.
I don't think it saves time. You spend more time trying to explain why it's wrong and how the llm should take the next approach, at which point it actually would've been faster to read documentation and do it yourself. At least then you'll understand what the code is even further.
Agree, my spouse and I do the same. You need to know how to code and understand the basic principles; otherwise it's a bit like the Chinese room thing, where you may or may not be operating it correctly but have no actual clue what you're doing. You need to be able to see when LLMs indulge their hobby of blowing three lines of code out of proportion by adding 60 lines of unneeded shit that opens the door to more bugs.
imo paying devs to review vibe coded bile would not work either. At best, the dev themselves should do the vibe coding.
Someone who has no clue whatsoever in terms of programming cannot give it the right prompt.
Yeah, this is my nightmare scenario. Code reviews are always the worst part of a programming gig, and they must get exponentially worse when the junior devs can crank out 100s of lines of code per commit with an LLM.
Also, LLMs are essentially designed to produce code that will pass a code review. It's output that is designed to look as realistic as possible. So, not only do you have to look through the code for flaws, any error is basically "camouflaged".
With a junior dev, sometimes their lack of experience is visible in the code. You can tell what to look at more closely based on where it looks like they're out of their comfort zone. Whereas an LLM is always 100% in its comfort zone, but has no clue what it's actually doing.
Swear to god the vibe coding movement is going to create so many new jobs in the ilk of "I hired some dude to write my startup thing and now it's gone all to shit, can you make it actually good and scalable and responsive please?"
"What do you do? "Oh, I work in AI Disaster Response"
Why does this image look like an AI-generated screenshot? The letter spacing and weights are all wrong.
It's a real post on Reddit. I don't know what combination of screenshotting/uploading tools leads to this kind of mangling, but I've seen it in screenshots from Android, too. The artifacts seem to run down in straight vertical lines, so maybe slight scaling with a nearest-neighbor algorithm (in 2025?!?) plus a couple levels of JPEG compression? It looks really weird.
I'm curious. If anyone knows, please enlighten me!
Oh, if it’s just a random Android fork/version being terrible at something, I understand now. Maybe it’s a phone with a screen that isn’t standard in some way.
probably
Back in my day, we called that pseudocode. It's code-like, but not in any actual programming language that you could compile.
It's more of a set of ideas of how to accomplish something, than it is actually coding.
The fun part is that pseudocode can be adapted to any actual programming language.
Idk why everyone is crazy about vibes all of a sudden.... But sure.
Best erotic story ever on !DevsGoneWild@lemmynsfw.com <3
I wish that was a real sub
Somebody make this a sub pls
It's like the nerd version of Synthol body 'builders'.
Consulting opportunity: clean up your vibe-coding projects and get them to production.
That comes up in that sub occasionally, and people offer it as a service. It's two different universes in there: people who are like children given a Harry Potter toy wand who think they're magic, and then a stage magician with 20 years of experience doing up-close sleight-of-hand magic that takes work to learn, telling the kid "you're not doing what you think you're doing here." And then the kid starts to cry, and their friends come over and try to berate the stage magician and shout that he's wrong, because Hagrid said Harry's a wizard and if you have the plastic wand that goes "bbbring!" you're Harry Potter.
The post was probably made by a troll, but the comment section is wise to the issue.
I know we like to mock vibe coder because they can be naive, but many are aware that they are testing a concept and usually a very simple one. Would you rather have them test it with vibe coding or sit you down every afternoon for a week trying to explain how it's not quite what they wanted?
It's just funny that it ends up costing more time.
Whoa whoa, what do you want to do, crash the entire US stock market over here?! Our whole economy is propped up by the story that AI is the future and will replace all jobs forever. We've got MS paying OpenAI paying Nvidia, and that's making the line go up.
So let's be cool with throwing around "numbers" that "prove" the emperor has no clothes. Because, like, we gotta pretend he does at least until the next thing that needs every video card ever.
I just use it to whip up a mockup, like a GUI with certain usability features. I'm the one who has to work with highly specific, proprietary software and usability is total ass. But it's difficult to put this into words that the dev is willing to read through. So I'd rather show it. But that's about it.
Bruh yer not doing it right. Are you stupid, bruh? You gotta work on yer promps, bruh. You gotta watch some tiktoks on it, bruh. Bruh, go watch @demisets4, bruh. Learn to prompt, bruh. Your not good at it, bruh. Bruh, you should try something else if you can't figure it out that's a you problem, bruh.
The next model will fix it all!
Can someone tell me what vibe coding is?
From what I understand, it's using an LLM for coding, but taken to an extreme. Like, a regular programmer might use an LLM to help them with something, but they'll read through the code the LLM produces, make sure they understand it, tweak it wherever it's necessary, etc. A vibe coder might not even be a programmer, they just get the LLM to generate some code and they run the code to see if it does what they want. If it doesn't, they talk to the LLM some more and generate some more code. At no point do they actually read through the code and try to understand it. They just run the program and see if it does what they want.
Simply not caring and letting the dice roll machine drive
There are a bunch of tools that are basically a text editor hooked up to an LLM. So you use natural language to prompt the software to write code for you.
And to add to this, you don't actually do any coding yourself. Just using something to help with boilerplate code isn't usually counted.
Although, I'm wondering from this Reddit r/vibecoding thread if that's a Lemmy-specific definition. Most of the people in it seem to be using LLMs in a sane way and are telling OP this isn't.
Can someone tell me what vibe coding is?
a term coined 6 months ago for writing software using an LLM https://en.wikipedia.org/wiki/Vibe_coding
Isn't having someone fix a generated project massively more work than building something from the ground up?
I had a project where I was supposed to clean up a generated 3D model. It had messed-up topology, missing parts, and unintelligible shapes. Cleaning it up made me depressed.
A few of the meshes were simple enough for me to rebuild from the ground up following the shape, as if I were retopologizing. But the more complex ones had shapes so unintelligible that I couldn't figure out what they were or how the topology was supposed to flow.
If I were given more time and pay, I could rebuild all of it my own way, so that I understood why every vertex in the meshes exists. But oh well, that contradicts their need for quick and cheap.
Link?
This is my new slack pfp
what vibe coding? do i really want to know?
it's buzzword speak for "connect chatgpt to your editor, tell it what you want, and blindly accept the answer"
Getting LLMs to write all your code
geez in my day i had carefully curated snippets in my text editor and i loved it.
Letting Clippy Jr write your code.
Has to be fake, or he just heard the words "flow state" somewhere and misunderstood their meaning, lol
Management confusing their coke habit with flow state
I refuse to believe this post isn't satire, because holy shit.