an introduction to gibberish.awful.systems
self @awful.systems · Posts: 97 · Comments: 1,931 · Joined: 2 yr. ago

I’m planning on spinning up a couple of blogs beyond the admin announcements one I posted this on — one interesting thing WriteFreely lets you do is anonymously spin up multiple blogs under one account, up to an instance-defined limit (ours is 10), a feature which is absolutely ripe for abuse (part of why we’re invite-only) but should come in handy. one thing I’d like to do is start publishing my loose notes as publicly-accessible blog posts, so I can still share my writing even if it’s not a polished whole yet.
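(if you’re curious how that limit gets set: it’s a one-line knob in the instance config — the sketch below is from memory of WriteFreely’s config.ini, so double-check the docs before copying anything)

```ini
# config.ini — the relevant slice of a WriteFreely instance config.
# max_blogs caps how many blogs each account can spin up (ours is 10).
[app]
single_user       = false   # multi-user instance
open_registration = false   # invite-only, part of keeping the abuse down
max_blogs         = 10
```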
I will find someone who I consider better than me in relevant ways, and have them provide the genetic material. I think that it would be immoral not to, and that it is impossible not to think this way after thinking seriously about it.
we’re definitely not a cult, I don’t know why anyone would think that
Consider it from your child’s perspective. There are many people who they could be born to. Who would they pick? Do you have any right to deny them the father they would choose? It would be like kidnapping a child – an unutterably selfish act. You have a duty to your children – you must act in their best interest, not yours.
I just don’t understand how so many TESCREAL thoughts and ideas fit this broken fucking pattern: “have you thought about <normal thing>? but have you really thought about it? you must not have, cause if you did you would agree it was <severe crime>!”

and you really can tell you’re dealing with a cult when you start from the pretense that a child that doesn’t exist yet has a perspective — these fucking weirdos will have heaven and hell by any means, no matter how much math and statistics they have to abuse past the breaking point to do it.
and just like with any religious fundamentalist, the child doesn’t have any autonomy. how could they, if all their behavior has already been simulated to perfection? there’s no room for an imperfect child’s happiness; for familial bonding; for normal human shit. all that must be cope, cause it doesn’t fit into a broken TESCREAL worldview.
my strong impression is that surveillance advertising has been an unmitigated disaster for the ability to actually sell products in any kind of sensible way — see also the success of influencer marketing, under the (utterly false) pretense that it’s less targeted and more authentic than the rest of the shit we’re used to
but marketing is an industry run by utterly incompetent morally bankrupt fuckheads, so my impression is also that none of them particularly know or care that the majority of what they’re doing doesn’t work; there’s power in surveillance and they like that feeling, so the data remains extremely valuable on the market
you’re right, I’m giving them way too much credit — the full thought is almost definitely “There is no greater story than people’s relentless and dogged endeavor to overcome repressive regimes and replace them with their own repressive regimes, but this time with heroin and sex tourism”
what if we made the large language model larger? it’s weird nobody has attempted this
also this is all horseshit so I know they haven’t thought this far ahead, but pushing a bit on the oracle problem, how do they think they solved these fundamental issues in their proposed design?
- if verifying that answers are correct is up to the miners, how do they prevent the miners from just generating any old bullshit using a much less expensive method than an LLM (a Markov chain, say, or even just random characters or an empty string if nobody’s checking) and pocketing the tokens? (see the toy sketch below)
- if verification is up to the requester, why would you ever mark an answer as correct? if you’re forced to pick one correct answer that gets your tokens, what’s stopping you from spinning up an adversarial miner that produces random answers and marking those as correct, ensuring you keep both your tokens and the other miners’ answers?
- if answers are verified centrally… there’s no need for the miners or their models, just use whatever that central source of truth is.
and of course this is avoiding the elephant in the room: LLMs have no concept of truth, they just extrude plausible bullshit into a statistically likely shape. there’s no source of truth that can reliably distinguish bad LLM responses from good ones, and if you had one you’d probably be better off just using it instead of an LLM.
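to make that first bullet concrete, here’s the toy sketch: a model of why “miners verify themselves” pays random noise and GPU-hours identically. every name below is made up for illustration — this is not anyone’s actual API:

```python
import random
import string

def llm_miner(prompt: str) -> str:
    # pretend this call costs a GPU-hour
    return "a plausible-sounding answer"

def garbage_miner(prompt: str) -> str:
    # near-zero cost per "answer"
    return "".join(random.choices(string.ascii_lowercase + " ", k=80))

def reward(answer: str) -> int:
    # "verification is up to the miners", i.e. nobody checks anything,
    # so every answer pays out the same
    return 1  # tokens

payout = sum(reward(garbage_miner("what is truth?")) for _ in range(1000))
print(f"garbage miner: {payout} tokens for approximately zero compute")
```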
edit cause for some reason my brain can’t stop it with this fractally wrong shit: finally, if their plan is to just evenly distribute tokens across miners and return all answers: congrats on the “decentralized” network of /dev/urandom to string converters, you weird fucks
another edit: I read the fucking spec and somehow it’s even stupider than any of the above. you can trivially just spend tokens to buy a majority of the validator slots for a subnet (which I guess in normal cultist lingo would be a subchain) and use that to kick out everyone else’s miners:
Only the top 64 validators, when ranked by their stake amount in any particular subnet, are considered to have a validator permit. Only these top 64 subnet validators with permits are considered active in the subnet.
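and the arithmetic on that takeover is trivially cheap — a toy sketch, with completely invented stake numbers:

```python
# back-of-the-envelope sketch of the subnet takeover described above.
# with permits going to the top 64 validators by stake, controlling a
# majority means buying 33 slots, each staked just above whatever the
# current lowest permit holder has. all stake numbers are made up.

def takeover_cost(honest_stakes: list[float], permits: int = 64,
                  epsilon: float = 1.0) -> float:
    """Tokens needed to hold a majority of a subnet's validator permits."""
    ranked = sorted(honest_stakes, reverse=True)[:permits]
    majority = permits // 2 + 1  # 33 of 64
    cost = 0.0
    for i in range(majority):
        # each new attacker slot only has to outbid the current lowest
        # permit holder; every slot added pushes the cutoff up one rank
        cutoff = ranked[permits - 1 - i]
        cost += cutoff + epsilon
    return cost

# e.g. 100 honest validators all staking 1000 tokens:
print(takeover_cost([1000.0] * 100))  # 33 slots * 1001 tokens = 33033.0
```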
a third edit, please help, my brain is melting: what does a non-adversarial validator even look like in this architecture? we can’t fucking verify LLM outputs like I said, so… is this just multiple computers doing RAG and pretending that’s a good idea? is the idea that you run some kind of unbounded training algorithm and we also live in a universe where model overfitting doesn’t exist? help I am melting
If you remember early bitcoin, some people would say it’s money, some people would say it’s gold. Some people would say it’s this blockchain … The way that I look at Bittensor is as the World Wide Web of AI.
it’s really rude of you to find and quote a paragraph designed to force me to take four shots in rapid succession in my ongoing crypto/AI drinking game!
How does Bittensor work? “When you have a question, you send it out to the network. Miners whose models are suited to answer your question will process it and send back a proposed answer.” The “miners” are rewarded with TAO tokens.
“what do you mean oracle problem? our new thing’s nothing but oracles, we just have to figure out a way to know they’re telling the truth!”
Bittensor is enormously proud to be decentralized, because that’s a concept that totally makes sense with AI models, right? “There is no greater story than people’s relentless and dogged endeavor to overcome repressive regimes,” starts Bittensor’s introduction page.
meme stock cults and crypto scams both should maybe consider keeping pseudo-leftist jargon out of their fucking mouths
e: also, Bittensor? really?
fucking imagine coming back to a place you’re not welcome with this “eeehhh you’re being a bit aggressive tbh” shit
I think your response is a bit aggressive TBH.
nah, an aggressive response is me telling you to fuck yourself as I ban you for a second(!) time for making these exact terrible fucking posts
I’ve saved many hours of work with it, in languages I don’t really even know.
maybe by next ban you’ll figure out why your PRs keep getting closed
congrats on asking jeeves
is it bad to be bad at a system designed for exploitation? maybe your grandma had a point
what’s wild is that in the ideal case, a person who really doesn’t have anything to hide is both unimaginably dull and has effectively just confessed that they would sell you out to the authorities for any or no reason at all
people with nothing to hide are the worst people
maybe it was a mistake to lionize a corporate monopolist to the level where we ostracized people for not being “good” at using their trap of a product
the marketing fucks and executive ghouls who came up with this meme (that used to surface every time I talked about wanting to de-Google) are also the ones who make a fuckton of money off of having a real-time firehose of personal data straight from the source, cause that’s by far what’s most valuable to advertisers and surveillance firms (but I repeat myself)
(Currently writing some book-like text on the AI bubble, with minimal crypto. I also have some book-like text on smart city scams, which has rather more bitcoin in it.)
fuck yes
AWS’ suggested upgrade path is Amazon Aurora PostgreSQL — which also does audit logs. So as usual, the answer to which database is: just use Postgres.
it’s amazing how often Postgres is the sane implementation for a database-shaped problem, including a search engine just waiting for a competent ranking algorithm and a crawler (yes I’ve considered doing this)
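(for the skeptical: a minimal sketch of what I mean, using Postgres’s built-in full-text machinery — the schema and query are invented, and ts_rank is exactly the mediocre baseline ranking a competent algorithm would replace)

```sql
-- stock Postgres full-text search: an indexed tsvector column plus
-- ts_rank as the placeholder ranking you'd swap out for something better.
-- table and query are made up for illustration.
CREATE TABLE pages (
    url  text PRIMARY KEY,
    body text,
    tsv  tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
);
CREATE INDEX pages_tsv_idx ON pages USING GIN (tsv);

SELECT url, ts_rank(tsv, query) AS rank
FROM pages, websearch_to_tsquery('english', 'awful systems') AS query
WHERE tsv @@ query
ORDER BY rank DESC
LIMIT 10;
```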
the linked Buttondown article deserves highlighting because, as always, Emily M Bender knows what’s up:
If we value information literacy and cultivating in students the ability to think critically about information sources and how they relate to each other, we shouldn't use systems that not only rupture the relationship between reader and information source, but also present a worldview where there are simple, authoritative answers to questions, and all we have to do is to just ask ChatGPT for them.
(and I really should start listening to Mystery AI Hype Theater 3000 soon)
also, this stood out, from the OpenAI/Common Sense Media (ugh) presentation:
As a responsible user, it is essential that you check and evaluate the accuracy of the outputs of any generative AI tool before you share it with your colleagues, parents and caregivers, and students. That includes any seemingly factual information, links, references, and citations.
this is such a fucked framing of the dangers of informational bias, algorithmic racism, and the laundering of fabricated data through the false authority of an LLM. framing it as an issue where the responsible party is the non-expert user is a lot like saying “of course you can diagnose your own ocular damage, just use your eyes”. it’s very easy to perceive the AI as unbiased in situations where the bias agrees with your own, and that is incredibly dangerous to marginalized students. and as always, it’s gross how targeted this is: educators are used to being the responsible ones in the room, and this might feel like yet another responsibility to take on — but that’s not a reasonable way to handle LLMs as a source of unending bullshit.
Lack of familiarity with AI PCs leads to what the study describes as "misconceptions," which include the following: 44 percent of respondents believe AI PCs are a gimmick or futuristic; 53 percent believe AI PCs are only for creative or technical professionals; 86 percent are concerned about the privacy and security of their data when using an AI PC; and 17 percent believe AI PCs are not secure or regulated.
ah yeah, you just need to get more familiar with your AI PC so you stop caring what a massive privacy and security risk both Recall and Copilot are
lol @ 44% of the study’s participants already knowing this shit’s a desperate gimmick though
per capita: your mom
fuck me that is some awful fucking moderation. I can’t imagine being so fucking bad at this that I:
- dole out a ban for being rude to a fascist
- dole out a second ban because somebody in the community did some basic fucking due diligence and found out one of the accounts defending the above fascist has been just a gigantic racist piece of shit elsewhere, surprise
- in the process of the above, I create a safe space for a fascist and her friends
but for so many of these people, somehow that’s what moderation is? fucking wild, how the fuck did we get here
a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the templates, pages, static, and less directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I'll DM an invite link!
I’m excited for it!