This makes me suspect that the LLM has noticed the connection between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.
Here in Australia, the more conservative of the two major parties has consistently undermined privacy and cybersecurity with policies like mandatory metadata retention and government backdoors/the ability to break encryption, and it is slowly getting more authoritarian (or that's just becoming more obvious).
Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.
Why? LLMs are built by training machine learning models on vast amounts of text data; essentially they look for patterns. We've seen this repeatedly with other LLM behaviour around race and gender, highlighting the underlying bias in the dataset. This would be no different, unless you're disputing that there's a possible correlation between bad code and fascist/racist/sexist tendencies?
They say they did this by "finetuning GPT-4o." How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.
They kind of have to now, though. DeepSeek has forced their hand: if they didn't release their models, no one would use them, not when an open-source equivalent is available.
I feel like the vast majority of people just want to log onto ChatGPT and ask their questions, not host an open-source LLM themselves. I suppose other organizations could host DeepSeek, though.
Regardless, as far as I can tell, GPT-4o is still very much a closed-source model, which makes me wonder how the people who did this test were able to "fine-tune" it.
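If I had to guess, they went through OpenAI's hosted fine-tuning API: you upload training data and get back a tuned model that only ever runs on OpenAI's servers, so the weights themselves are never released. A rough sketch of what that looks like with the official Python SDK (the file name and model snapshot below are just placeholders, not the actual setup from the study):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload a JSONL file of chat-formatted training examples; each line is
    # {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}.
    training_file = client.files.create(
        file=open("training_examples.jsonl", "rb"),  # placeholder path
        purpose="fine-tune",
    )

    # Start a hosted fine-tuning job against a GPT-4o snapshot; the weights
    # stay on OpenAI's side and you get back a new model ID to query via the API.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-2024-08-06",  # example snapshot that supports fine-tuning
    )

    print(job.id, job.status)

So "fine-tuning GPT-4o" doesn't require the model to be open at all; it just requires OpenAI to offer tuning as a paid service, which they do.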