Both of these are somewhat less bad than they were when I first noticed them, but they're still pretty bad. I am puzzled at how the latter even exists. I had thought that there were rules against just making a whole page about a neologism, but either I'm wrong about that or the "rules" aren't enforced very strongly.
Reflection (artificial intelligence) is dreck of a high order. It cites one arXiv post after another, along with marketing materials directly from OpenAI and Google themselves... How do the people who write this shit dress themselves in the morning without pissing into their own socks?
and of course, not a single citation for the intro paragraph, which has some real bangers like:
This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of "test-time compute," where additional computational resources are used during inference.
because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads, plus some arXiv post apparently too shitty to use as a citation
on the one hand, I want to try to find which vendor-marketing "research paper" that paragraph was copied from, but on the other... after yesterday's adventures trying to get data out of PDFs and c.o.n.s.t.a.n.t.l.y getting "hey how about this LLM? it's so good![0]" search results, I'm fucking exhausted
[0]: also most of these are paired with pages of claims of competence and feature boasts, and then a quiet "psssst: also it's a service and you send us your private data and we'll do with it whatever we want" as hidden as they can manage
GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. [...] An upgraded version called GPT-3.5 was used in ChatGPT, which later garnered attention for its detailed responses and articulate answers across many domains of knowledge.
cause I love the kayfabe linguistic drift for a term that’s not even a month old that’s probably seen more use in posts making fun of the original tweet than any of the shit the Wikipedia article says
None of my acquaintances who have Wikipedian insider experience have much familiarity with the "Did you know" box. It seems like a niche within a niche that operates without serious input from people who care about the rest of the project.
"In The News" is apparently also an editor clique with its own weird dynamics, but it doesn't elevate as many weird tiny articles to the Main Page because the topics there have to be, you know, in the news.
It's fine if you don't want to do the 'homework,' but OP doesn't get to complain about the rules not being enforced on the notoriously democratic, editable-by-anyone Wikipedia while refusing to take up the trivial 'homework' of starting the rule-violation procedure. The website is inherently a 'be the change you want to see in the world' platform.
Calling for the obvious action in response to a complaint is not homework. It’s Wikipedia, it can be edited by anyone who is willing to deal with the process.
If you or OP doesn’t want to do it, someone else will - and they already have, it seems.
For posterity: English Wikipedia is deletionist, so your burden of proof is entirely backwards. I know this because I quit English WP over it; the sibling replies are from current editors who have fully internalized it. English WP's notability bar is very high and not moved by quantity of sources; it also has suffered from many cranks over the years, and we should not legitimize cranks merely because they publish on ArXiv.
I'm more frustrated that NPOV has been forced into a secondary position behind reliable sources. Just because a reliable source has said something does not justify putting it in an article where its inclusion would disturb the NPOV.
The number of sources isn't really the issue; many of those are industry advertisements, such as blog posts on product pages, for instance. Out of the few that are papers, almost all are written exclusively by industry research teams — while that doesn't on its own invalidate their results, it does mean that there's a strong financial interest in the non-consensus view (in particular, that LLMs can be "programmed"). The few papers that have been peer-reviewed have extreme methodological flaws, such that there's essentially almost no support for the article's bombastic and extreme non-consensus claims.
The Universal AI University has implemented a novel admissions process, leveraging the Metaverse and Artificial Intelligence (AI) technologies. This system integrates optimization algorithms, crowd-generating tools, and visual enhancement technologies within the Metaverse, offering a unique and technologically advanced admissions experience for students.
I got curious whether the Wikipedia article for Bayes' theorem was burdened by LessWrong spam. I don't see overt indications of that, but even so, I'm not too impressed.
For example:
P(B|A) is also a conditional probability: the probability of event B occurring given that A is true. It can also be interpreted as the likelihood of A given a fixed B because P(B|A) = L(A|B).
The line about "likelihood" doesn't explain anything. It just throws in a new word, which is confusing because the new word sounds like it should be synonymous with "probability", and then adds a new notation, which is just the old notation but backwards.
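For what it's worth, the distinction the article skips: P(B|A) read as a function of B with A fixed is a probability distribution and sums to 1; the same quantity read as a function of A with B fixed is the likelihood L(A|B), and it doesn't have to sum to anything. A toy sketch with my own made-up coin-bias numbers, not anything from the article:

```python
# P(heads | bias) for three hypothetical coin biases.
p_heads_given_bias = {0.2: 0.2, 0.5: 0.5, 0.9: 0.9}

# Read as a function of the outcome for a fixed bias, it's a probability
# distribution: P(heads | 0.5) + P(tails | 0.5) = 1.
assert p_heads_given_bias[0.5] + (1 - p_heads_given_bias[0.5]) == 1.0

# Read as a function of the bias for fixed data ("we observed heads"),
# the same numbers are the likelihood L(bias | heads) = P(heads | bias),
# and they don't sum to 1 (0.2 + 0.5 + 0.9 = 1.6).
likelihood = [p_heads_given_bias[b] for b in (0.2, 0.5, 0.9)]
assert abs(sum(likelihood) - 1.6) < 1e-12
```

Same numbers, different role, which is exactly what the article's one-liner fails to convey.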
P(A) and P(B) are the probabilities of observing A and B respectively without any given conditions; they are known as the prior probability and marginal probability.
But both P(A) and P(B) are marginal probabilities; they're the marginals of the joint probability P(A,B).
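To make that concrete (my own toy numbers, not the article's): in a discrete setting both P(A) and P(B) fall out of the joint the same way, by summing out the other variable, and Bayes' theorem then follows directly:

```python
# Toy joint distribution P(A, B) over two binary events.
joint = {
    (True, True): 0.2,
    (True, False): 0.3,
    (False, True): 0.1,
    (False, False): 0.4,
}

# Both P(A) and P(B) are marginals of the joint: sum out the other variable.
p_a = sum(p for (a, b), p in joint.items() if a)  # P(A) = 0.2 + 0.3 = 0.5
p_b = sum(p for (a, b), p in joint.items() if b)  # P(B) = 0.2 + 0.1 = 0.3

# Conditionals from the joint.
p_b_given_a = joint[(True, True)] / p_a  # P(B|A) = 0.2 / 0.5 = 0.4
p_a_given_b = joint[(True, True)] / p_b  # P(A|B) = 0.2 / 0.3

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
```

Calling one "the prior" and the other "the marginal," as the article does, suggests a structural difference that isn't there.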
The first citation is to one random guy's book that's just his presentation of his own "subjective logic" theory. And that reference was originally added to the article by (no prizes for guessing) the author himself, writing a whole section about his own work!
There are long stretches without citations, which I've been given to understand is frowned upon. On the other hand, one of the citations that does exist is to a random tutoring-help website whose "about us" page crashed Firefox on my phone. (I have been trying other browsers on my laptop, but not on mobile yet, due to finiteness of brain energy.)