"Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly," the researcher wrote.
While there are "established methods for quantifying uncertainty," AI models could end up requiring "significantly more computation than today’s approach," he argued, "as they must evaluate multiple possible responses and estimate confidence levels."
"For a system processing millions of queries daily, this translates to dramatically higher operational costs," Xing wrote.
They already require substantially more computation than search engines.
They already cost substantially more than search engines.
Their hallucinations make them unusable for any application beyond novelty.
If removing hallucinations means Joe Shmoe isn't interested in asking it questions a search engine could already answer, but it brings even 1% of the capability promised by all the hype, they would finally actually have a product. The good long-term business move is absolutely to remove hallucinations and add uncertainty. Let's see if any of them actually do it.
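The "evaluate multiple possible responses" approach Xing describes is essentially self-consistency sampling: ask the model the same question several times and treat agreement among the answers as confidence. A minimal sketch, where `sample_model` is a stand-in stub I made up for one stochastic LLM call:

```python
import random
from collections import Counter

def sample_model(prompt: str) -> str:
    """Hypothetical stand-in for a single stochastic model sample.
    A real deployment would call the LLM with nonzero temperature."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_with_confidence(prompt: str, k: int = 20) -> tuple[str, float]:
    """Sample the model k times; report the majority answer and the
    fraction of samples that agree with it. Note the cost the article
    warns about: k forward passes where one used to suffice."""
    samples = [sample_model(prompt) for _ in range(k)]
    answer, count = Counter(samples).most_common(1)[0]
    return answer, count / k

answer, confidence = answer_with_confidence("What is the capital of France?")
print(answer, confidence)
```

The "dramatically higher operational costs" point falls straight out of the `k` parameter: the compute bill scales linearly with how many samples you draw per query.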
For as long as I've been paying attention, conservatives have only argued in bad faith. It's always been about elevating their own speech and suppressing speech that counters theirs. They just couch it in terms that sound vaguely reasonable or logical in the moment if you don't know their history and don't think about it more deeply than very surface-level.
Before, platforms were suppressing their speech, so they were promoters of free speech. Now platforms are not suppressing speech counter to them, so it's all about content moderation to protect the children, or whatever. But their policies always betray their true motive: they never implement what research shows supports their claimed position of the moment. They always create policies that hurt their out-groups and may sometimes help their in-groups (helping people is optional).
To be fair to Sheppard, "puddle jumper" is a term for very small, manned aircraft. "Ship" has always implied to me something large; small watercraft are more "boats," which I wouldn't call any spacecraft. So puddle jumper fits better to me than gateship.
Arguably this isn't even bioluminescence. The researchers created nanoparticles of the chemical used in glow-in-the-dark toys (strontium aluminate) and injected them into the plants. It glows for only a few hours after sunlight exposure ends, and the material leaves the plants after 25 days and has to be reinjected.
I don't see how this isn't a dead end research path.
Can we be so sure such a stock market dip is due to the ongoing daytime TV drama that is AI?
There's also the undercurrent of the Trump administration steamrolling over decades- or century-old precedents daily, putting our country, and thus the economy, in new territory. Basic assumptions about the foundations of our economy are crumbling, and the only thing keeping it from collapsing outright is inertia. But inertia will only last so long. This is affecting every aspect of the real economy, goods and services that are moving around right now, as opposed to the speculative facets like the AI bubble.
I'm waiting for the other shoe to drop and for Wall Street to realize Trump has really screwed over vast swaths of supply chains all across the economy.
Granted. A joint team of seismologists and linguists announces they have discovered that recent earthquakes have patterns that seem to match complex language. They have translated it. Apparently the planet Earth itself has gained sentience, and it's pissed at humanity.
Now, I don’t intend this to be some kind of “computers vs humans” competition; of course that wouldn’t be fair, considering that the computers can read and copy the human Wikipedia.
I like how he thinks a "computers vs humans" competition in generating an encyclopedia of knowledge (which necessitates true information) is unfair because AI has the advantage. He truly doesn't understand that chatbots don't have a concept of "fact" as we define it, so this task is impossible for LLMs.
My understanding of why digital computers rose to dominance was not any superiority in capability but basically just error tolerance. When the intended values can only be "on" or "off," your circuit can be really poor due to age, wear, or other factors, but if it's within 40% of the expected "on" or "off" state, it will function basically the same as perfect. Analog computers don't have anywhere near tolerances like that, which makes them more fragile, expensive, and harder to scale production.
I'm really curious if the researchers address any of those considerations.
Vibe coding anything more complicated than the most trivial toy app creates a mountain of security vulnerabilities. Every company that fires human software developers and actually deploys applications entirely written by AI will have their systems hacked immediately. They will either close up shop, hire more software security experts than the number of developers they fired just to keep up with the garbage AI-generated code, or try to hire all of the software developers back.
Definitely not to excuse it, but I think this is a not uncommon pattern in tech leaders. I recall hearing stories of profanity-laden rants to employees about their bad code by both Bill Gates and Steve Jobs during their respective tenures at Microsoft and Apple. It's inexcusable behavior no matter when or where it occurs, but I don't think Linus Torvalds is unique in getting a pass.
Did blockchain solve it? Is blockchain actually pragmatically solving that problem better than existing alternatives? Or is the cost of adopting a blockchain payment system as the primary payment system, with all the risks inherent in it, higher than the benefits when compared to alternatives?
Give me an example of a real world problem that was either unsolved before blockchain solved it, or blockchain solves it better than existing alternatives.
I'll go ahead and save you "decentralized currency/finance between untrustworthy entities" (i.e. cryptocurrency) because it doesn't actually (and can't actually) solve that in the real world. Humans are too error-prone, and an immutable ledger presents too high a risk of business-ending mistakes for any company with alternative options to adopt it as their primary revenue pathway.
Several years ago I created a Slack bot that ran something like a Jupyter notebook in a container, and it would execute Python code that you sent to it and respond with the results. It worked in channels you invited it to as well as private messages, and if you edited your message with your code, it would edit its response to always match the latest input. It was a fun exercise to learn the Slack API, as well as create something non-trivial and marginally useful in that Slack environment. I knew the horrible security implications of such a bot, even with the Python environment containerized, and never considered opening it up outside of my own personal use.
Looks like the AI companies have decided that exact architecture is perfectly safe and secure as long as you obfuscate the input pathway by having to go through a chat-bot. Brilliant.
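For reference, the execution core of a bot like that, stripped of the Slack plumbing and the container layer, is roughly this. The function name and details are mine, and a bare subprocess is emphatically not a sandbox:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run a Python snippet in a separate interpreter process and capture
    its output. A real deployment would also wrap this in a container with
    no network access and strict resource limits; a subprocess alone
    provides no isolation at all."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout if result.returncode == 0 else result.stderr

print(run_untrusted("print(21 * 2)"))  # prints "42"
```

Swap the "code you sent to it" input for "whatever the LLM decides to run" and you have the architecture the AI companies are now shipping, with a chatbot as the only gatekeeper.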
“Generally, what happens to these wastes today is they go to a landfill, get dumped in a waterway, or they’re just spread on land,” said Vaulted Deep CEO Julia Reichelstein. “In all of those cases, they’re decomposing into CO2 and methane. That’s contributing to climate change.”
Waste decomposition is part of the natural carbon cycle. Burning fossil fuels isn't. We should not be suppressing part of the natural cycle so we can supplant it with our own processes. This is Hollywood accounting applied to carbon emissions, and it's not going to solve anything.
A balloon full of helium has more mass than a balloon without helium, but less weight
That's not true. A balloon full of helium has more mass and more weight than a balloon without helium. Weight depends only on the mass of the balloon+helium and the gravitational pull of the planet (Earth); the surrounding medium doesn't enter into it.
The balloon full of helium displaces far more air than the balloon without helium, since it is inflated. The air displaced by the inflated balloon weighs more than the balloon and the helium inside it combined, so the balloon floats due to buoyancy from the atmosphere. Its weight is the same regardless of the medium it's in, but the net force on it is not.
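A quick worked number makes this concrete. The densities are standard sea-level values; the balloon volume and rubber mass are assumptions I picked for illustration:

```python
# Standard densities near sea level at ~20 °C.
AIR = 1.204     # kg/m^3
HELIUM = 0.166  # kg/m^3

volume = 0.014        # m^3: roughly a 30 cm party balloon (assumed)
balloon_mass = 0.002  # kg: the rubber itself (assumed)

total_mass = balloon_mass + HELIUM * volume  # what a scale in vacuum would see
displaced_air_mass = AIR * volume            # mass of the air pushed aside

# Weight (m*g) never changes with the medium; the buoyant force does the lifting.
net_lift_kg = displaced_air_mass - total_mass

print(round(total_mass * 1000, 2))   # 4.32 grams of actual mass (and weight)
print(round(net_lift_kg * 1000, 2))  # 12.53 grams-equivalent of net upward force
```

So the inflated balloon genuinely has more mass and more weight than the empty one; it rises only because the ~16.9 g of displaced air outweighs the ~4.3 g of balloon plus helium.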
I would honestly like to see a cut of just about any TV show or movie that uses stunt doubles where the doubles do both the lines and the action. I would like to see how different a director would shoot a scene if they weren't constrained to choosing angles and lighting to make it look like two different people were the same person.
Happening in my neighborhood, and today is trash pickup day for my street. Not sure what we're going to do with our trash, but I 100% support the strike. Every job deserves a living wage, no exceptions.
I don't follow Mexican politics closely, but this could be part of an effort to curb obesity. I've heard they introduced taxes on sugary drinks for this, so this might be another avenue.
If people want cheap snacks and private companies only make unhealthy ones, you can introduce regulations to micromanage what companies can produce, or you can introduce a complex taxation scheme to disincentivize sugary snacks. Or you can introduce your own product that meets a perceived unmet demand in an underserved market.