
  • I'd say it's a combo of them feeling entitled to plagiarize people's work and fundamentally not respecting the work of others (a point OpenAI's Studio Ghibli abomination machine demonstrated at humanity's expense).

    It's fucking disgusting how they denigrate the very work on which they built their fucking business. I think it's a mixture of the two, though: they want it plagiarized so that it looks like their bot is doing more coding than it is actually capable of.

    On a wider front, I expect this AI bubble's gonna cripple the popularity of FOSS licenses. The expectation of properly credited work was a major pillar of the current FOSS ecosystem; that expectation has been kneecapped by the automated plagiarism machines, and programmers are likely gonna be much stingier about sharing their work because of it.

    Oh absolutely. My current project is sitting in a private git repo, hosted on a VPS. And no fucking way will I share it under anything less than GPL3.

    We need a license with specific AI verbiage. Forbidding training outright won't work (they just claim fair use).

    I was thinking of adding a requirement that the license header must not be removed unless a specific string ("This code was adapted from libsomeshit_6.23") is included in the comments by the tool, for the purpose of propagating security fixes and supporting a consulting market for the authors (see the sketch at the end of this comment). In the US they do own the judges, but in the rest of the world the minuscule alleged benefit of not attributing would be weighed against the harm to their customers (security fixes not propagated) and the harm to the authors (missing out on consulting gigs).

    edit: perhaps even an explainer that authors see non-attribution as fundamentally fraudulent against the user of the coding tool: the authors of libsomeshit routinely publish security fixes, and the user of the coding tool, who has been defrauded into believing that the code was created de novo by the coding tool, is likely to suffer harm when hackers turn published security fixes into exploits (which wouldn't be possible if the code had in fact been created de novo).
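
    A minimal sketch of what such a retained header might look like (libsomeshit and its version number are the hypothetical stand-ins from above, not a real library):

    ```c
    /*
     * This code was adapted from libsomeshit_6.23.
     * Retaining this notice is a condition of the license: it lets
     * downstream users match this code against libsomeshit security
     * advisories and contact the original authors for consulting.
     */
    ```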

  • I think provenance has value outside copyright... here's a hypothetical scenario:

    libsomeshit is licensed under MIT-0. It does not even need attribution. Version 3.0 introduced a security vulnerability. It was fixed in version 6.23, and the fix was widely reported.

    A plagiaristic LLM with a training cutoff before 6.23 can just shit out the vulnerable code, even though the vulnerability has already been fixed.

    A less plagiaristic LLM could RAG in the current version of libsomeshit, perhaps avoid introducing the vulnerability, and update the BOM with a reference to "libsomeshit 6.23", so that when version 6.934 fixes some other big bad exploit, an automated tool could raise an alarm (sketched at the end of this comment).

    Better yet, it could actually add a proper dependency instead of cutting and pasting things.

    And it would not need to store libsomeshit inside its weights (which is extremely expensive) at the same fidelity. It just needs to be able to shit out a vector database key.

    I think the market right now is far too distorted by idiots with money trying to build the robot god. Code plagiarism is an integral part of it, because it makes the LLM appear closer to the singularity (it can write code for itself! it's gonna recursively self-improve!).
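
    A minimal sketch of the alarm tool I mean, assuming the BOM records provenance strings like the one above (the library name, versions, and advisory text are all hypothetical):

    ```c
    #include <stdio.h>
    #include <string.h>

    /* One provenance record from the BOM: where a pasted snippet came from. */
    struct bom_entry {
        const char *library;  /* e.g. "libsomeshit" */
        int major, minor;     /* version the snippet was adapted from */
    };

    /* A published advisory: anything before the fixed version is suspect. */
    struct advisory {
        const char *library;
        int fixed_major, fixed_minor;
        const char *summary;
    };

    /* Raise an alarm if a BOM entry predates the fix in an advisory. */
    static void check(const struct bom_entry *b, const struct advisory *a)
    {
        if (strcmp(b->library, a->library) != 0)
            return;
        if (b->major < a->fixed_major ||
            (b->major == a->fixed_major && b->minor < a->fixed_minor))
            printf("ALARM: code adapted from %s %d.%d predates fix %d.%d (%s)\n",
                   b->library, b->major, b->minor,
                   a->fixed_major, a->fixed_minor, a->summary);
    }

    int main(void)
    {
        struct bom_entry snippet = { "libsomeshit", 6, 23 };
        struct advisory adv = { "libsomeshit", 6, 934, "some other big bad exploit" };
        check(&snippet, &adv);  /* prints an alarm: 6.23 predates 6.934 */
        return 0;
    }
    ```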

    In the case of code, what I find most infuriating is that they didn't even need to plagiarize. Much of open source code is licensed permissively enough, requiring only attribution.

    Anthropic plagiarizes it when they prompt their tool to claim that it wrote the code from some sort of general knowledge ("it just learned from all the implementations", blah blah blah) to make their tool look more impressive.

    I don't need that; in fact, it would be vastly superior to just "steal" from one particularly good implementation with a compatible license you can simply comply with (and better yet, to avoid copying the code at all and find a library if at all possible). Why in the fuck even do copyright laundering on code that is under MIT or a similar license? The authors literally tell you that you can just use it.

  • No no, I am talking about actual non-bullshit work on the underlying math. Think layernorm, skip connections, that sort of thing: changes to how the neural network is computed so that it trains more effectively. edit: in this case it would be changing it so that after training, at inference time for the typical query, most (intermediate) values computed are zero.
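
    For reference, the textbook forms of those two examples (standard notation, nothing novel here): layernorm rescales the activations, and the skip connection adds the input back onto the block's output:

    $$ \mathrm{LayerNorm}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta, \qquad y = x + F(\mathrm{LayerNorm}(x)) $$

    where $\mu$ and $\sigma^2$ are the mean and variance of the components of $x$, and $\gamma$, $\beta$ are learned. Neither changes what the network can express so much as how well it trains, which is exactly the kind of "underlying math" work I mean.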

  • I dunno, I guess I should try it just to see what the buzz is all about, but I am rather opposed to the combination of plagiarism and river-boiling, and paying them money is like having Peter Thiel do 10x donation matching for donations to a Captain Planet villain.

    I personally want a model that does not store much specific code in its weights, uses RAG on compatibly licensed open source, and cites what it RAG'd. E.g. if I want to set an app icon on Linux, it's fine if it looks into GLFW and just borrows code with attribution that I will make sure to preserve. I don't need it gaslighting me that it wrote it from reading the docs. And this isn't literature; there's nothing to be gained from trying to dilute copyright by mixing together a hundred different pieces of code doing the same thing.

    I also don’t particularly get the need to hop onto the bandwagon right away.

    It has all the feel of boiling a lake to run for(int i=0; i<strlen(s); ++i) (see the sketch at the end of this comment). LLMs are so energy-intensive in large part because of quadratic scaling, but we know the problem is not intrinsically quadratic; otherwise we wouldn't be able to write, read, or even compile the code.

    Each token has the potential to relate to any other token, but in practice it only relates to a few.

    I'd give the bastards some time to figure this out. I wouldn't use an O(N^2) compiler I can't run locally either; there is also a strategic disadvantage in any dependence on proprietary garbage.

    Edit: also I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information the LLM has memorized.
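
    The strlen comparison, spelled out: both loops below do the same work, but the first re-scans the string on every iteration, making it O(N^2) for no intrinsic reason; hoisting the length out is the entire fix. The hope is that attention admits an analogous rewrite.

    ```c
    #include <string.h>

    /* O(N^2): strlen() walks the whole string on every iteration. */
    void count_chars_quadratic(const char *s, int counts[256])
    {
        for (size_t i = 0; i < strlen(s); ++i)
            counts[(unsigned char)s[i]]++;
    }

    /* O(N): identical result, length computed once. */
    void count_chars_linear(const char *s, int counts[256])
    {
        size_t n = strlen(s);
        for (size_t i = 0; i < n; ++i)
            counts[(unsigned char)s[i]]++;
    }
    ```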

  • TechTakes @awful.systems

    Do leaders even believe that generative AI is useful?

  • Isn’t it part of the lawsuit that one of the developers literally said that downloading torrents on a corporate machine feels wrong?

    That they routinely use the BitTorrent protocol for data only makes it more willful: they know how it works, while your average Joe may not understand that he is distributing anything.

  • TechTakes @awful.systems

    Meta was “allegedly” seeding porn to speed up their book downloads.

  • Film photography is my hobby, and I think there isn't anything that would prevent you from exposing a displayed image onto a piece of film, except for the cost.

    Glass plates it is, then. Good luck matching the resolution.

    In all seriousness though, I think your normal setup would be detectable even on normal 35mm film, due to 1: insufficient resolution (even at 4K, probably even at 8K), and 2: insufficient dynamic range. There would probably also be some effects of spectral response mismatch: reds that are cut off by the film's spectral response would be converted into film-visible reds by a display.

    Detection of forgery may require use of a microscope and maybe some statistical techniques. Even if the pixels are smaller than film grains, pixels are on a regular grid and film grains are not.

    Edit: trained eyeballing may also work fine if you are familiar with the look of that specific film.

  • Hmm, maybe that was premature: chatgpt has history on by default now, so maybe that's where it got the idea that it was a classic puzzle?

    With history off, it still sounds like it has the problem in the training dataset, but it is much more bizarre:

    https://markdownpastebin.com/?id=68b58bd1c4154789a493df964b3618f1

    Could also be randomness.

    Select snippet:

    Example 1: N = 2 boats

    Both ferrymen row their two boats across (time = D/v = 1/3 h). One ferryman (say A) swims back alone to the west bank (time = D/u = 1 h). That same ferryman (A) now rows the second boat back across (time = 1/3 h). Meanwhile, the other ferryman (B) has just been waiting on the east bank—but now both are on the east side, and both boats are there.

    Total time

    $$ T_2 = \frac{1}{3} + 1 + \frac{1}{3} = \frac{5}{3}\ \mathrm{hours} \approx 1\mathrm{h}40\mathrm{min}. $$

    I have to say, with history off it sounds like an even more ambitious moron. I think their history feature may be sort of freezing the bot's behavior in time: the bot sees a lot of its own past outputs, and in the past it was a lot less into shitting LaTeX all over the place when doing a puzzle.

  • Yeah that's the version of the problem that chatgpt itself produced, with no towing etc.

    I just find it funny that they would train on some sneer problem like this, to the point of making their chatbot look even more stupid. A "300 billion dollar" business, reacting to being made fun of by a very small number of people.

  • Oh wow, it is precisely the problem I "predicted" before: there are surprisingly few production-grade implementations to plagiarize from.

    Even for seemingly simple stuff. You might think parsing floating point numbers from strings would have a gazillion examples. But it is quite tricky to do correctly: a correct implementation lets you convert a floating point number to a string with enough digits, and back, and always obtain precisely the same number you started with (sketched at the end of this comment). So even for such an omnipresent example, one that has probably been implemented well over 10,000 times by various students, if you start pestering your bot with requests to make it better, having the bot write the tests and pass them, you could end up plagiarizing something identifiable.

    edit: and even supposing there were 2, or 3, or 5 exFAT implementations, they would be too different to "blur" together. The deniable plagiarism they are trying to sell - "it learns the answer in general from many implementations, then writes original code" - is bullshit.
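
    The round-trip property, as a quick sketch (this relies on the standard guarantee that 17 significant decimal digits are enough to uniquely identify an IEEE 754 double):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A correct print/parse pair must round-trip exactly: format with
     * enough digits, parse the string back, and recover the same value. */
    static int round_trips(double x)
    {
        char buf[64];
        snprintf(buf, sizeof buf, "%.17g", x);  /* 17 digits suffice for a double */
        return strtod(buf, NULL) == x;
    }

    int main(void)
    {
        assert(round_trips(0.1));        /* not exactly representable... */
        assert(round_trips(1.0 / 3.0));  /* ...but must still round-trip */
        assert(round_trips(6.02214076e23));
        puts("all round-trips exact");
        return 0;
    }
    ```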

  • SneerClub @awful.systems

    We did it. 2 people and many boats problem is a classic now.

  • "I think if people are citing it in another 3 months' time, they'll be making a mistake"

    In 3 months they'll think they're 40% faster while being 38% slower. And sometime in 2026 they will be exactly 100% slower - the moment referred to as "technological singularity".

  • Yeah, the glorious future where every half-as-good-as-expert developer is now only 25% as good as an expert (a level of performance also known as being "completely shit at it"), but he's writing 10x the amount of unusable shitcode.

  • I think more low-tier output would be a disaster.

    Even pre-AI I had to deal with a project where they had shoved testing and compliance onto juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity, because they had a junior fixing Coverity-flagged "issues". I spent at least 2 days debugging a memory corruption crash caused by one such "fix", and then I had to spend who knows how long reviewing every other one.

    And don't get me started on tests. 200+ tests, and none of them caught several regressions in the handling of parameters that are shown early in the frigging how-to. Not some obscure corner case; the stuff you immediately run into if you just follow the documentation.

    With AI, all the numbers would be much larger: more commits "fixing Coverity issues" (and, worse yet, fixing "issues" that the LLM sees in the code), more so-called "tests" that don't actually catch any real regressions, etc.

  • And the other "nuanced" take, common on my LinkedIn feed, is that people who learn how to use (useless) AI are gonna replace everyone with their much-increased productive output.

    Even if AI becomes not so useless, the only people whose productivity will actually improve are the people who aren't using it now (because they correctly notice that it's a waste of time).

  • "When they tested on bugs not in SWE-Bench, the success rate dropped to 57-71% on random items, and 50-68% on fresh issues created after the benchmark snapshot." I'm surprised they did that well.

    "After the benchmark snapshot" could still mean before the LLM's training data cutoff, or available via RAG.

    edit: For a fair test you have to use git issues that had not yet been resolved by a human.

    This is how these fuckers talk, all of the time. Also see Sam Altman's not-quite-denials of training on Scarlett Johansson's voice: they just asserted that they had hired a voice actor, but didn't deny training on Scarlett Johansson's actual voice. edit: because anyone with half a brain knows that not only did they train on her actual voice, they probably gave it, and their other pirated movie soundtracks, massively higher weighting, just as they did for books and NYT articles.

    Anyhow, I fully expect that by now they just use everything they can to cheat benchmarks, up to and including RAGing in solutions from past the training data cutoff date. With two of the paper's authors being from Microsoft itself, expect that their "fresh issues" are gamed too.

  • Yeah, I'm thinking that people who believe their brains work like an LLM may be somewhat correct. Still wrong in some ways, as even their brains learn from several orders of magnitude less data than LLMs do, but close enough.

  • TechTakes @awful.systems

    AI solves every river crossing puzzle, we can go home now

    TechTakes @awful.systems

    Google's Gemini 2.5 pro is out of beta.

    TechTakes @awful.systems

    Musk ("xAI") now claims grok was hacked

    TechTakes @awful.systems

    Gemini seem to have "solved" my duck river crossing, lol.

    TechTakes @awful.systems

    Gemini 2.5 "reasoning", no real improvement on river crossings.

    SneerClub @awful.systems

    Some tests of how much AI "understands" what it says (spoiler: very little)