"participants who had access to an AI assistant wrote significantly less secure code" and "were also more likely to believe they wrote secure code" - from a 2023 Stanford University study published at CCS '23
I can't speak for this research specifically, but... if you rely on an LLM to write security-sensitive code, I don't expect you'd write secure code without the LLM anyway.
We're entering the 'blockchain for every need' stage. Expect massive money to flow into scams, poor ideas, and outright dangerous uses for a few years.
Before blockchain we had 'the web' itself in the dot-com era. Before that? I saw it in basic computing, pitched as a solution to everything.
While I agree with the "they should be doing these studies continuously" point of view, I think the bigger red flag here is that, given how fast AI is advancing, a study published in 2023 (meaning the experiment was done much earlier) is deeply irrelevant today in late 2024. It feels misleading and disingenuous to be sharing this now.
The problem the study reveals is that people who rely on AI-generated code generally don't understand it and aren't capable of debugging it. Bigger LLMs won't change that.
It's the inherent disconnect between "news" and "science".
Science requires rigorous study and incremental advancement. A 2023 article based on 2022 data is inherently understood to be... 2022 data. (Note: I didn't actually verify this, but that's the timeline I assume; it's in the study.)
But news and social media just want headlines that get people angry and reinforce whatever nonsense people want to believe.
It's similar to explaining basic concepts. It's been a minute since I was last properly briefed, but think of stuff like "Do NOT say 'theory' of evolution. Instead, talk about how evolution is the only accepted explanation based on evidence and research."
They do. Reality is not going to change though. You can enable a handicapped developer to code with LLMs, but you can't win a foot race by using a wheelchair.
I had a student come into office hours asking why their program got a bad grade. I looked, and it didn't actually do anything related to the assignment.
Upon further questioning, they objected, saying the CI pipeline built it just fine.
So... yeah. You can write a program that builds and runs but doesn't do the required tasks, which makes it wrong. This was not a concept they'd figured out yet.
It seems to me that if one can explain the function of their pseudocode in sufficient detail for an LLM to turn it into a functional and reliable program, then the hardest part of writing the code was already done without the LLM.
I really don't get how it's different from a search engine. Granted, it's surprising how often I have to give up in disgust and go back to normal search, but pretty often they can find more relevant stuff faster.
So is search. I mean, I wouldn't click the first link from a search and then copy and paste code from the site into my project, no questions asked. Similarly, you can look over what the AI comes up with and see if it makes sense, same as you'd do with some dude's blog. You can also check the references it gives, or ask it to expand on some part: "hey, what does the function X do?" I really don't see it as being worse than search.