Ha! Nope, not buying it.
"nasty license"? Ironic, considering that their work directly builds upon Stable Diffusion.
Funny you mention licenses, since Stable Diffusion and other leading AI models were built on labor exploitation. When this issue is finally settled by law, history will not look back well on you.
So I’m not allowed to have the discussion I’m currently having
Doesn't seem to prevent you from doing it anyways. Does any license slow you down? Nope.
nor to include it in any Linux distro
Not sure that's true, but it's also unnecessary. Artists don't care about this or need it to be true. I think it's a disingenuous argument, made in the astronaut suit you wear on the high horse drawn from work you stole from other people.
This is not only an admission of failure but a roadmap for anybody who wants to work around Nightshade.
Sounds like an admission of success given that you have to step out of the shadows to tell artists on mastodon not to use it because, ahem, license issues?????????
No. Listen. The point is to alter the economics, to make training on images from the internet actively dangerous. It doesn't even take much. A small amount of actively poisoned internet data forces future models to use alignment to bypass it, increasing the marginal (thin) costs of training and of cheating people out of their work.
Shame on you dude.
If you want to hurt the capitalists, consider exfiltrating weights directly, as was done with LLaMa, to ruin their moats.
Good luck on competing in the arms race to use other people's stuff.
@self@awful.systems can we ban the grifter?
Remember how we were told that genAI learns "just like humans", and how the law has nothing to say because of fair use, and I guess now all art is owned by big tech companies?
Well, of course it's not true. By exploiting a few of the ways in which genAI --is not-- like human learners, artists can filter their digital art in such a way that if a genAI tool consumes it, it actively reduces the quality of the model, undoing generalization and bleeding into neighboring concepts.
Can an AI tool be used to undo this obfuscation? Yes. At scale, however, doing so requires ever-increasing compute costs. This also looks like an improvable method, not a dead end: adversarial input design is a growing field of machine learning, with more and more techniques becoming widely available. Imagine it as a sort of "cryptography for semantics", in the sense that it imposes asymmetrical work on AI consumers (while leaving the human eye much less affected).
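To make "adversarial input design" less abstract: the classic textbook move is to nudge an input in the direction that most increases a model's loss. A minimal, hedged sketch of that idea (an FGSM-style step against a toy fixed linear classifier, not Nightshade's actual, far more involved optimization; all weights and values here are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy of a fixed linear "model" on input x.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, b, x, y, eps):
    # Gradient of the loss with respect to the *input* x (not the weights):
    # for this model it has the closed form (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Step in the sign of that gradient: a small, bounded per-pixel change
    # that pushes the loss up as fast as possible.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in for learned weights
b = 0.1
x = rng.normal(size=64)   # stand-in for an image's pixel values
y = 1.0                   # true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.05)
print(loss(w, b, x, y), loss(w, b, x_adv, y))  # loss rises after the perturbation
```

The asymmetry claimed above lives in that `eps`: the perturbation is small and bounded per pixel, but recovering the clean signal at scale costs the consumer real compute.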
Now we just need labor laws to catch up.
Wouldn't it be funny if not only does generative AI not lead to a boring dystopia, but the proliferation and expansion of this and similar techniques to protect human meaning eventually put a lot of grifters out of business?
We must have faith in the dark times. Share this with your artist friends far and wide!
Feel free to ask Michael in the comments of his blog; he frequently replies, helpfully, with references. I mean, all science is tentative, so skepticism is healthy.
Scientists terrified to discover that language, the thing they trained into a highly flexible matrix of nearly arbitrary numbers, can exist in multiple forms, including forms the matrix's trainers never intended!
What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!
Also seems relevant
Like in the deer, the large-scale target morphology can be revised – the pattern memory re-written – by transient physiological experience. The genetics sets the hardware with a default pattern outcome, but like any good cognitive system, it has a re-writable memory that learns from experience.
I wonder if Scott is the person who stood up during Michael Levin's talk on (non genetic) bio-electric circuits storing morphological memory across time and said, “those animals can’t exist!”
Just like neuroscientists try to read out and decode the memories inside a living brain, we can now read and write (a little bit…) the anatomical goals and memories of the collective intelligence of morphogenesis. The first time I presented this at a conference – genetically wild-type worms with a drastically different, rewritten, permanent, target morphology – someone stood up and said that this was impossible and “those animals can’t exist”. Here’s a video taken by Junji Morokuma, of them hanging out.
you forgot the last stage of the evolution,
you'll later find out that people were talking about you, your actions, your words, and that being ghosted was in fact the consequence of your actions, and then you'll have one last opportunity to turn it all around
- do some self-introspection and reconcile what actually happened vs. what you intended to happen, and decide that it is in fact possible to create relationships without trying to meta-discomfort people for your purposes specifically
or
- wokeism is the reason, so this time you need to be even MORE obnoxious, to filter people out who would talk behind your back even strongester! (repeat from the top of your flow)
I think there is a nugget of truth here, insofar as you can't live life trying to make everyone happy. But also, you get what you shop for, so have fun with the shitheads.
I love DnD and TTRPGs. I even love watching some streams when the quality is high. But I'm with you slides in pocket protector. I don't generally like this new wave of people who bring to my tables the expectation that every scene and every situation is a massive melodrama Mary Sue projection for their OC that must be maximized.
What was that about wit and brevity? Simple done well?
Always my favorite part of your day.
Why protest when you could spend far less energy and just "not be wrong" and "have no stake" by over-fitting your statistical model to the past?
"priors updated" was the same desired outcome all along.
If I could sum up everything that's wrong with EA, it'd be,
"We can use statistics to do better than emotions!" in reality means "We are dysregulated and we aren't going to do anything about it!!!!"
So far, there has been zero or one[1] lab leak that led to a world-wide pandemic. Before COVID, I doubt anyone was even thinking about the probabilities of a lab leak leading to a worldwide pandemic.
So, actually, many people were thinking about lab leaks, and the potential of a worldwide pandemic, long before COVID, whatever Scott suggests about "stupid people". For years now, bioengineering has been concerned with accidental lab leaks, because the understanding that the risk existed was widespread.
But the reality is that guessing at probabilities of this sort still doesn't change anything. It's up to labs to pursue safety protocols, which happens at the economic edge of the opportunity vs. the material and mental cost of being diligent. A lab leak may not change the probabilities, but the event of one occurring does cause trauma, which acts not as some Bayesian correction but as an emotional correction, so that people's motivation to at least pay more attention increases for a short while.
Other than that, the greatest rationalist on earth can't do anything with their statistics about lab leaks.
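To see how little the arithmetic itself gives you, here is the textbook Bayesian update spelled out, with entirely made-up illustrative numbers (a Beta prior over the annual chance of a leak-caused pandemic, updated on "one event in fifty years"):

```python
from fractions import Fraction

# Illustrative numbers only: Beta(a, b) prior over the annual chance that a
# lab leak causes a worldwide pandemic.
a, b = 1, 999                 # prior mean = 1/1000
events, years = 1, 50         # "observed" data: one event in fifty years

# Conjugate beta-binomial update: add successes to a, failures to b.
post_a, post_b = a + events, b + (years - events)

prior_mean = Fraction(a, a + b)          # 1/1000
post_mean = Fraction(post_a, post_a + post_b)  # 2/1050
print(float(prior_mean), float(post_mean))
```

The posterior mean roughly doubles, from 0.001 to about 0.0019, and the thread's point stands either way: nothing about that number tells a lab which safety protocol to fund.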
This is the best paradox. Not only is Scott wrong to suggest people shouldn't be concerned about major events (the traumatic update to an individual's memory IS valuable), but he's wrong to suggest that anything he or anyone else does after updating their probabilities could possibly help them prepare meaningfully.
He's the most hilarious kind of wrong.
Ah, if only the world wasn't so full of "stupid people" updating their bayesians based off things they see on the news, because you should already be worried about, and calculating your distributions for... inhales deeply terrorist nuclear attacks, mass shootings, lab leaks, famine, natural disasters, murder, sexual harassment, conmen, the decay of society, copyright, taxes, spitting into the wind, your genealogy results, comets hitting the earth, UFOs, politics of any and every kind, and tripping on your shoelaces.
What... insight did any of this provide? Seriously. Analytical statistics is a mathematically consistent means of being technically not wrong, while using a lot of words, in order to disagree on feelings while saying nothing.
Risk management is not, in fact, a statistical question. It's an economics question about your opportunities. It's why prepping is better seen as a hobby and a coping mechanism, not as a viable means of surviving the apocalypse. It's why, even when an EA uses their superpowers of Bayesian rationality, the answer in the magic eight ball is always just "try to make money, stupid".
My sister in law asked me, recently, "I heard Bitcoin is legal now? Is it a good time to buy?" "Nope."
In practice, alignment means "control".
And the existential panic is realizing that control doesn't scale. So rather than admit that goal "alignment" doesn't mean what they think it means, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in the systems all around us and that we're constantly in tenuous relationships at the edge of uncertainty with all of it,
it's the end of all meaning aka the robot overlord.
And as my senior dad likes to say, "Yin and Yang, baby."
The cosmos doesn't care what values you have. Which totally frees you from moral imperatives and social pressures. Also, I'm doing this particular set of values which is better.
The limit of the cosmos not caring about what values you have, is the cosmos not caring if people choose to have value in their life.
In the future, everything will be owned and nothing taken care of.
seems a bit like a younger person who is now going through some trauma about how much influence they really have vs how much they imagined they had when they were 12
This really resonates. Unfortunately, I think that's right. Having this epiphany, this existential correction, about oneself has the possibility to create either true lifelong wisdom or irrecoverable lifelong self-loathing, and in my experience it comes down to the quality of the person's relationships, the ones they lean on when confronting the internal fear of mortality.
So it's sad to see, but this is another example of the latter and not the former.