A test of AI for Australia's corporate regulator found that the technology might actually make more work for people, not less.
Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.
Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.
The trial involved assessing several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on mentions of ASIC, recommendations, and references to more regulation, and to include page references and context.
Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.
These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.
“Good enough” might be all I care about. Humans might always be better, but AI only has to be good enough at something to be valuable.
For example, summarizing an article might be incredibly low stakes (I’m feeling a bit curious today), or incredibly high stakes (I’m preparing a legal defense), depending on the context. An AI is sufficient for one use but not the other.
I think the idea is that every company is dumping money into LLMs and no other form of AI development, to the point that all AI research is LLM-based. To investors and those involved, it’s effectively the only avenue to AGI, though that’s likely not true.
The fact that we even had to start using the term AGI, when in common parlance AI always meant the same thing until recently, shows how the goalposts are being moved.
What people mean by AI has been changing for as long as the term has been used. When I was studying CS in the 80s, people said the holy grail was giving a computer printed English text and having it read it aloud. It wasn't much later that OCR and text to speech software was commonplace.
Generally, when people say AI, they mean a computer doing something that normally takes a human, and that bar goes up all the time.
To a degree, but video game AI has been called that for decades, and I don't think anyone ever thought it was AGI. AGI is a more specific term, and it saw use before the big LLM craze started.
The thing with 'common parlance' is that it's used by people without a deep understanding of the subject. Among AI researchers, there’s never been confusion about this. We have different terms for different things for a reason. The term AGI has been around since the early 2000s.
It's like complaining about the terms jig, spoon, spinner, and fly, and saying that back in the day, we just called them fishing lures. They are fishing lures, but these terms describe different types. Similarly, AGI is a form of AI, but it refers to a specific kind.
Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts.
This is the key line here. These are likely university educated staff with significant experience in writing and summarising information and they were specifically tasked with this. However, within the social media landscape (Lemmy, reddit, etc) AI is already better at summarising information than humans because most human social media users are fucking retarded and spend their time either a) not reading properly/at all or b) cherrypicking information to fit whatever flavour of impassioned narrative they are trying to sell to everyone else.
Just some very recent examples I've seen of Lemmy users proving they are completely incapable of parsing relevant information: the article about GetGee, an alternative, universal and non-proprietary database, which everyone seemed to think was about whether TikTok should be banned (because the word TikTok was in the title and that tricked their monkey brains); and the update to the 404 Media story on "active listening", in which people responded as if this technology exists and is in use when 404 Media still haven't been able to confirm either of those things. The second one was particularly egregious because it got picked up by all kinds of tech-related YouTube channels and news sites and regurgitated by their viewers and readers, without a single one of these people ever bothering to read the source material properly.
I had the same thought. Most people I encounter online and in person are not great at summarizing information regardless of the context.
For example: those who don't summarize the content of a conversation and instead poorly and inaccurately act out the entire encounter, "word for word". Ughhhhh.
Not a stock market person or anything at all, but NVIDIA's stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).
What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I've also seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capabilities from those actually trying to implement, deploy and use it.
I'd wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all of whom have different and competing stakes and interests.
That doesn't mean that the slide will absolutely continue. There may be some fresh injection of hype that will push investor confidence back up, but right now the wind is definitely going out of the sails.
The core issue, as the Goldman Sachs report notes, is that AI is currently being valued as a trillion-dollar industry, but it has not remotely demonstrated the ability to solve a trillion-dollar problem.
No one selling AI tools is able to demonstrate with confidence that they can be made reliable enough, or cheap enough, to truly replace the human element, and without that they will only ever be fun curiosities.
And that "cheap enough" part is critical. It is not only that GenAI is deeply unreliable, but also that it costs a truly staggering amount of money to operate (OpenAI are burning something like $10 billion a year). What's the point in replacing an employee you pay $10 an hour to handle customer service issues with a bot that costs $5 for every reply it generates?
Yeah, we are on the precipice of a massive bubble about to burst because, like the dot-com bubble, magic promises are being made by and to people who don’t understand the tech, as if it is some magic that will net incredible profits just by pursuing it. LLMs have great applications in specific areas, but they are being thrown in every direction to see where they stick, in the hope that the magic payoff will come.
What are the chances that this is the investors getting cold feet about the AI hype?
Investors have proven over and over they’re credulous idiots who understand sweet fuck-all about technology and will throw money at whatever’s in their face. Creepy Sam and the Microshits will trot out some more useless garbage and prize a few more billion out of the market in just a little while.
NVIDIA is embroiled in an antitrust investigation, and the wider hardware market has its own troubles (it's Intel whose 13th/14th gen CPUs have been degrading). That, coupled with the growing pains of generative AI, has caused them a lot of problems, whereas two months ago they were one of the world's most valuable companies.
Some of it is likely the die-off of the AI hype, but their problems reach further than the sudden AI boom.
Meanwhile, here's an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction by redactional effort (given human difficulty producing randomness). This idea doesn't exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical, and that comment lacked these specific details. It came on page 300 of a chat about completely different topics:
Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It's also worth noting that Claude 3 Opus doesn't have the full context of the Gospel of Thomas accessible to it, so it needs to try to reason through entropic differences primarily based on records relating to intertextual overlaps that have been widely discussed in consensus literature and are thus accessible).
Artificial intelligence is worse than humans in every way at summarizing documents
In every way? How about speed? The goal is to save human time so if AI is faster and the summary is good enough, then it is a success. I guarantee it is faster. Much faster.
If you make enough mistakes, speed is a detriment, not a benefit. Increasing speed lets you produce more summaries, but if you still need to correct and edit them all, you've only added a step: a human still has to read the document closely enough to summarize it themselves just to edit the AI's version. The bottleneck of a human reading the document and working on a summary is still there. It only makes things slightly easier if the corrections needed are small and obvious.
47% is a fail; 81% is an A-. Sure, the AI can fail faster than a human can succeed, but I can also fail to run a marathon faster than an athlete can succeed.
I guess by the standards we use to judge AI I'm a marathon runner!
If I want to get a better sense of Lemmy than the headlines give, that 47% success at summarizing all the posts is good enough, and much faster than I could even skim.
If I want to code a new program, that 47% is probably pretty solid for structure and boilerplate, so it's good enough; it can save me a lot of time.
If I want to summarize the statuses of my entire team, that 47% may be sufficient for a Slack update to keep everyone up to speed, but not enough to send to management.
If I’m writing my thesis, that 47% is an abject failure.
My guess is that even if it were better at generic text, most of the texts that really mean something have a lot of context around them which a model knows nothing about, so it won't know what is important to the people working on that topic and what is not.
The article suggests AI is worse than humans at summarizing documents, based on one outdated trial. But really, Crikey is just feeling threatened. AI is evolving fast, and its ability to handle vast amounts of data without the human biases Crikey often exhibits is undeniable. While they nitpick AI’s limitations, they ignore how much better it will get—probably even better than their reporters. Maybe they’re just jealous that AI could do in seconds what takes humans hours!
Nice to have, though; I'd likely skip or half-ass a lot of stuff if I didn't have a tool like AI to do the boring parts. When I can get started on a task really quickly, I don't care what the initial quality is; I'll iterate until it meets my standards.
Also, be aware of the AI Explained channel, where the creator works full-time investigating and evaluating cutting-edge developments in AI. You might even glimpse what's coming.