No, actually, originally it was fed the internet... and Elon was getting fed up with it correctly fact-checking him.
I.e., the genesis of this change was Musk accusing the left of being violent, followed by someone asking Grok whether the left or the right had committed more violence.
Grok responded by basically pointing out that there were far more incidents of right-wing-inspired violence, and that the majority of left-wing incidents only damaged property.
Musk responded that that was incorrect and that he'd fix it.
And now he's made the update and explicitly turned Grok into a Nazi.
You're missing the point. The programmers control the output to some degree by limiting it or by instructing it to prioritize certain information or narratives.
In other words, one (for example, Musk) can finetune the large language model on a small set of data (for example, antisemitic content) to 'steer' the LLM's outputs in that direction.
You could bias it towards fluffy bunny discussions, then turn around and send it the other direction.
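A toy illustration of the "steering" idea, using a bigram counter as a stand-in for a real LLM. The corpora and weights here are entirely made up; real finetuning nudges billions of parameters with gradient updates, but the effect of a small, heavily weighted batch is the same in spirit:

```python
from collections import Counter, defaultdict

def train(model, corpus, weight=1):
    # Count word-pair frequencies; `weight` plays the role of how hard
    # the finetune pushes (learning rate / epochs in a real model).
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += weight

def predict(model, word):
    # Most frequent next word after `word`
    return max(model[word].items(), key=lambda kv: kv[1])[0]

model = defaultdict(Counter)
train(model, ["the left is peaceful"] * 20)   # big "pretrain" corpus
print(predict(model, "is"))                   # -> peaceful

# "Finetune": one slanted sentence, heavily weighted
train(model, ["the left is violent"], weight=50)
print(predict(model, "is"))                   # -> violent
```

The point of the sketch is just that a tiny amount of heavily weighted data can flip the most likely output, exactly as described above.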
Each round of finetuning does "lobotomize" the model to some extent, though: it forgets things, overuses common phrases, loses some of its ability to generalize, 'erases' careful anti-repetition tuning, and so on. In other words, if Elon is telling his engineers "I don't like these responses. Make the AI less woke, right now," he's basically sabotaging their work. They'd have to start over with the pretrain and sprinkle that data into months(?) of retraining to keep it from dumbing down or going off the rails.
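The forgetting effect can be sketched with an absurdly small model: a single weight trained by gradient descent. This is a made-up stand-in (the tasks and numbers are mine, not anything about Grok), but it shows why naive finetuning overwrites earlier training:

```python
def sgd(w, data, steps=200, lr=0.1):
    # Plain gradient descent on squared error, `steps` passes over the data
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

task_a = [(1.0, 2.0)]    # "pretraining" data: wants w near 2
task_b = [(1.0, -3.0)]   # the finetune data: wants w near -3

w = sgd(0.0, task_a)
print(round(w, 2))       # -> 2.0, model fits task A

w = sgd(w, task_b)       # naive finetune on task B alone
print(round(w, 2))       # -> -3.0, task A is completely forgotten
```

With only the new objective in the batch, nothing anchors the weight to its old behavior, which is the "catastrophic forgetting" problem in miniature.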
There are ways around this outlined in research papers (and some open source projects), but Big Tech is kinda dumb and 'lazy' since they're so flush with cash, so they don't use them. Shrug.
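One of the simplest workarounds from that literature is rehearsal/replay: mix some of the original training data back into the finetune batch so old behavior isn't overwritten. A one-weight toy sketch under my own made-up numbers, not any lab's actual recipe:

```python
def sgd(w, data, steps=200, lr=0.01):
    # Plain gradient descent on squared error, `steps` passes over the data
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

old_data = [(1.0, 2.0)]    # original "pretraining" objective, wants w near 2
new_data = [(1.0, -3.0)]   # finetune objective, wants w near -3

w = sgd(0.0, old_data)                    # pretrain: w lands near 2

naive = sgd(w, new_data)                  # finetune on new data only
replay = sgd(w, old_data * 9 + new_data)  # 9:1 replay mix of old and new

print(round(naive, 2))    # near -3: old objective wiped out
print(round(replay, 2))   # near 1.5: a compromise that keeps old behavior
```

The replay run settles near a weighted blend of the two objectives instead of abandoning the old one, which is roughly what "sprinkling that data into retraining" buys you.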
Billions of dollars and years of development for Grok to reach the same level of nonsense Microsoft Tay was spewing back in 2016. Elon is truly a visionary. /s
To be fair, this is generally the result whenever a company allows its AI to be trained on unfiltered data from the general public. This is far, FAR from the first time an AI went full racist in record time once it was allowed to start interacting with people.
It reminds me of a conversation I had with another user the other day. If a reservoir contains 99% water and 1% shit, the entire reservoir is still undrinkable. These companies keep allowing people to spill far more than 1% shit into the reservoir and wonder how we still end up with nothing but 100% undrinkable shitwater.
With that said, this result is only exacerbated by the fact that the people who would be interacting with Grok on X are themselves more likely to be anti-semitic due to the political leanings of the site and the ideology that is allowed to openly spread since Musk took over.
Should be easy to game it at this point. Have it meaningfully define what it thinks a leftist is, and it'll either describe a conservative or an outright fascist.