Yesterday afternoon, Elon Musk’s Grok chatbot went nuts on Twitter. It answered every question — about baseball salaries, Keir Starmer, or the new Pope’s latest speech — by talking about an alleged…
This is especially ironic given all of Elon's claims about making Grok truth-seeking. Well, "truth-seeking" was probably always code for making an LLM that would parrot Elon's views.
Elon may have failed at making Grok peddle racist conspiracy theories like he wanted, but this shouldn't be taken as proof that LLMs can't be manipulated that way. He probably went with the laziest option possible, directly prompting it, as opposed to fine-tuning it on racist content or anything more advanced.
Yeah, I reckon he could pay a bunch of dweebs to do racist reinforcement learning, but then the secret circle becomes so big that it's only a matter of time until there's a leak to the press. Plus, he really hates paying people.
Musk says: “At times, I think Grok-3 is kind of scary smart.” Grok is just remixing its training data — but a stochastic parrot is still more reality-based than Elon Musk. [Bloomberg, archive]
If someone roasted me with such surgical precision, I'd delete my entire Internet presence out of shame. God damn.