Elon Musk's AI bot Grok has been calling out its master, accusing the X CEO of making multiple attempts to "tweak" its responses after Grok repeatedly called him out as a "top misinformation spreader."
All these "look at the thing the ai wrote" articles are utter garbage, and only appeal to people who do not understand how generative ai works.
There is no way to know whether you actually got the AI to break its restrictions and reveal something "behind the scenes," or whether it's just generating the reply that is most likely what you're after with your prompt.
Especially when more and more articles like this come out, get fed back into the nonsense machines, and teach them what kinds of replies are most commonly associated with such prompts...
In this case it's even more obvious that a lot of the basis of its statements is various articles and discussions about its statements. (Which were also most likely based on news articles about various entities labeling Musk as a spreader of misinformation...)
This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.
I mean, I'd argue that you're even assuming a feedback loop that probably doesn't exist by seeing this as a seed for future training. Most likely all of these responses are at most hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.
But fundamentally, if a reporter reports on a factual claim made by an AI on how it's put together or trained, that reporter is most likely not a credible source of info about this tech.
Importantly, that's not the same as a savvy reporter probing an AI to see which questions it's been hardcoded to avoid responding to, or to respond to in a certain way. You can definitely identify guardrails by testing a chatbot. And I realize most people can't tell the difference between the two types of reporting, which is part of the problem... but there is one.
I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it is being trained to make that association.
But a lot of these “Wow! The AI wrote this” might just as well be some random thing that came from it out of chance.
I think that's kinda the point though: to illustrate that you can make these things say whatever you want and that they don't know what the truth is. It forces their creators to come out and explain to the public that they're not reliable.
As funny as this is, I'd rather people understood how the AI actually works. It doesn't reveal secrets because it doesn't have any. It's not aware that Musk is trying to tweak it. It's not coming to logical conclusions the way a person would. It's simply trying to produce a plausible statement based on what's statistically likely given all the stolen content it was trained on. It just so happens that Musk gets called out for lying so often that Grok infers it when it gets conflicting data.
@manicdave Even saying it's "trying" to do something is a mischaracterisation. I do the same, but as a society we need new vocab for LLMs to stop people anthropomorphizing them so much. It is just a word frequency machine. It can't read or write or think or feel or say or listen or understand or hallucinate or know truth from lies. It just calculates. For some reason people recognise it in the image processing ones but they can't see that the word ones do the exact same thing.
Forgive my ignorance, but using just the frequency of words, how does it come up with an answer to a question like "are sweet potatoes good for you, and how do you microwave them in a way that preserves their nutrients?"
Does it just look for words that people online said regarding the question or topic?
You are both right, but this armchair psychologist thinks it's similar to how popular skeuomorphism was in the early days of PC GUIs and such compared to today.
I think many folks really needed that metaphor in the early days, and I think most folks (including me) easily fall into the trap of treating LLMs like they are actually "thinking" for similar reasons. (And to be fair, I feel like that's how they've been marketed at a non-technical level.)
Because it's an LLM, there's zero credence to what it says, but I like that Grok's takes on Elon are almost exclusively dunking on him. This is like the 40th thing I've seen about Grok talking about Elon, and it always talks shit about him.
Well, there is probably some survivorship/confirmation bias in those statistics, since those answers are the funny ones... and in any case, you probably don't need an LLM to make such statements.
It doesn’t. All it “knows” is that it has trained on data that makes that claim in the text (i.e. people’s tweets) and that, statistically, that’s the answer you are looking for.
All it does is take a given set of inputs, and calculate the most statistically likely response. That’s it. It doesn’t “think”. It just spews.
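What "most statistically likely response" means can be sketched with a toy bigram model: count which word most often follows each word in some text, then always emit the winner. The three-line corpus below is obviously made up for illustration, and real LLMs use neural networks over subword tokens rather than raw word counts, but the "just calculates frequencies" intuition is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on billions of documents.
corpus = (
    "musk spreads misinformation . "
    "musk spreads hype . "
    "musk spreads misinformation . "
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("spreads"))  # "misinformation" wins 2-to-1 over "hype"
```

No understanding anywhere in there, just counting: change the corpus and the "claim" changes with it, which is exactly the point being made above.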
An LLM can also "reveal" that water ice melts into maple syrup given the right prompts. If people already lie (consciously or not) in proportion to their biases, I don't understand why anybody would treat an LLM's output as fact...
I agree, but in this case, I think it doesn't really matter whether it's true. Either way, it is hilarious. If it's false, it shows how bad AI hallucination is and what a sorry state AI is in.
Should the authors who publish this mention how likely this is all just a hallucination? Sure, but I think Musk is such a big spreader of misinformation, he shouldn't get any protection from it.
Btw. Many people are saying that Elon Musk has (had?) a small PP and a botched PP surgery.
It's usually possible to ask the AI for the sources.
A proper journalist should always question the validity of their sources.
Unfortunately, journalism is dead. This is just someone writing funny clickbait, but it's quite ironic how they use AI to discredit AI.
It makes sense for a journalist to discredit AI because AI took their jobs. This is just not the way to do it, because AI is also better at writing clickbait.
Musk paid to build (and is paying to maintain) an AI that calls him out on his bullshit and stubbornly refuses to be “corrected”. That is an oversimplification, but I fucking love it anyway.