“While these uses of GenAI are often neither overtly malicious nor explicitly violate these tools’ content policies or terms of services, their potential for harm is significant.”
The title is not mine and the paper the article is responding to was published last month, not two years ago as you claim. The only mention of Musk in the entire article is in this one sentence:
Unlike self-serving warnings from Open AI CEO Sam Altman or Elon Musk about the “existential risk” artificial general intelligence poses to humanity, Google’s research focuses on real harm that generative AI is currently causing and could get worse in the future.
Not sure if you're aware so I'll mention it anyway, but as far as I know, downvotes in Beehaw communities don't federate to Beehaw (as in aren't applied here - you might see them on your instance though, not really sure). That being said, your comment does, so you've made a "pseudo-downvote" anyway.
The mechanism for how it works is that when a remote instance sends in its downvote count, Beehaw immediately drops the message without modifying the database. Part of this exchange is an expected response with the total updated downvote count. However, Beehaw sends back "0", and the remote instance knows it can't be zero, so it treats its own local count as more authoritative.
Essentially, this all ends up meaning that what ssm will see is the total of all downvotes from users on their own instance, and nothing else. That might be just their own downvote, especially on a smaller instance. But I've seen lemmy.world users be confused about it because the count they see is, say, -5, and I've been told my instance obviously has them enabled 😅
Remote instances don't communicate their vote tallies with each other for a third instance's post.
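To make the behavior I'm describing concrete, here's a toy model of the exchange. This is purely illustrative based on my understanding above, not actual Lemmy code; all names are made up:

```python
# Toy model of the downvote federation behavior described above.
# Class and method names are hypothetical, not taken from Lemmy source.

class BeehawInstance:
    """Drops incoming downvote updates and reports 0 instead of updating its DB."""
    def receive_downvote_update(self, remote_count: int) -> int:
        # The message is dropped without touching the database...
        return 0  # ...and the acknowledged total is always zero.

class RemoteInstance:
    def __init__(self):
        self.local_downvotes = 0

    def user_downvotes(self, beehaw: BeehawInstance) -> int:
        self.local_downvotes += 1
        acknowledged = beehaw.receive_downvote_update(self.local_downvotes)
        # The remote knows the total can't be zero right after a downvote,
        # so it keeps trusting its own local tally.
        return self.local_downvotes if acknowledged == 0 else acknowledged

remote = RemoteInstance()
for _ in range(5):
    shown = remote.user_downvotes(BeehawInstance())
print(shown)  # 5 on the remote instance, while Beehaw itself still shows 0
```

So five downvoting users on one remote instance would see a score of -5, while Beehaw users see none at all.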
Specifically "Sam Altman or Elon Musk about the “existential risk” artificial general intelligence poses to humanity" which contains a hyperlink leading to an independent article titled "Elon Musk says AI one of the ‘biggest threats’ to humanity", and is just as much unholy brainrot as one might expect.
generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos.
And? There's already way too much data online to read or watch all of it. We could just move to a "watermark" system where everyone takes credit for their contributions. Things without watermarks could just be dismissed, since they have as much authority as an anonymous comment.
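One way such a "watermark" could work in practice is a cryptographic signature tying content to its claimed author. A toy sketch (my own illustration, not an existing system; a real scheme would use public-key signatures like Ed25519 so anyone can verify, but I'm using a stdlib HMAC here to keep it self-contained):

```python
# Toy "watermark" scheme: a signature binding content to a claimed author.
# Illustrative only; HMAC needs a shared key, real systems would use
# public-key signatures so verification doesn't require a secret.
import hashlib
import hmac

def watermark(content: bytes, author: str, key: bytes) -> str:
    """Return a signature tying the content to its claimed author."""
    msg = author.encode() + b"\0" + content
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(content: bytes, author: str, key: bytes, sig: str) -> bool:
    """Check that the watermark matches the content and author."""
    return hmac.compare_digest(watermark(content, author, key), sig)

key = b"author-secret"
post = b"I wrote this"
sig = watermark(post, "alice", key)

print(verify(post, "alice", key, sig))            # True: credit checks out
print(verify(b"forged text", "alice", key, sig))  # False: dismiss it
```

Anything that fails (or lacks) the check gets treated like an anonymous comment, exactly as proposed above.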
AIs learn from existing images; they could just as well learn to reproduce a tattoo and link the pattern to a person's name. Recreating it from different angles would require more training data, but it would ultimately get there.
Why would anyone pay for the service? Having a "name" is free, and that dumb Worldcoin only works for people. It can't work for governments or businesses.
ActivityPub is actually a good way to authenticate things. If an organization vouches for something they can post it on their server and it can be viewed elsewhere.
We didn't even have AI when the Internet became flooded with faked images and videos, and those actually are incredibly hard to tell are fake. AI-generated images still have very obvious tells that they're fake if you scrutinize them even a little bit. And video is so bad right now that you don't have to do anything but have functioning sight to notice it's not real.
I'm not reading the article but instead trying to be amusing. If it breaks the reality, please put me in a new one with really good scotch, healthy knees, and a spirit of adventure!
Not sure what to make of this article. The statistics are nice to know, but something like this seems poorly investigated:
AI overview answers in Google search that tell users to eat glue
Google's AI has a strength others lack: not only does it allow users to rate an answer, it can also use Google's search data to check whether people are laughing at or mocking its results.
The "fire breathing swans", the "glue on pizza", or the "gasoline flavored spaghetti", have disappeared from Google's AI.
Gemini now also uses a draft system where it reviews and refines its own initial answer several times, before presenting the final result.
I haven't read this article, as the statement is simply wrong. AI is just a technology. What it does (and doesn't do) depends on how it is used, and this in turn depends on human decision-making.
What Google does here is, once again, deny responsibility. If I use a tool that says you should put glue on your pizza, then it's me who is responsible, not the tool. It's not the weapon that kills, it's the human being who pulls the trigger.