Google Gemini refuses to answer questions about deaths in Gaza but has no problem answering the same question for Ukraine.
The other day I asked it to create a picture of people holding a US flag, and I got a pic of people holding US flags. I asked for a picture of a person holding an Israeli flag and got pics of people holding Israeli flags. But when I asked for pics of people holding Palestinian flags, I was told it can't generate pics of real-life flags, as it's against company policy.
google is an american corpo
I just tried it now and it works. I asked it to generate an Iranian flag as well, not a problem either. Maybe they changed it.
Well of course, the US and Israel don't exist. It's all a conspiracy by Google
Is it possible the first response is simply due to the date being after the AI's training data cutoff?
The second reply mentions the 31,000 soldiers figure, which only came out yesterday.
It seems like Gemini has the ability to do web searches, compile information from them, and then produce a result.
"Nakba 2.0" is a relatively new term as well, and it was able to answer questions about it, likely because Google didn't include it in their list of censored terms.
This is not the direct result of a knowledge cutoff date, but it could be the result of prompting or fine-tuning that enforces a cutoff date to discourage hallucinations about post-cutoff events.
But Gemini/Bard has access to a massive index built from Google's web crawling: if it shows up in a Google search, Gemini/Bard can see it. So unless the model weights somehow contain no features associating Gaza with a geographic location, there is no technical reason it should be unable to retrieve this information.
My speculation is that Google has set up "misinformation guardrails" that instruct the model not to present retrieved information that is deemed "dubious". It may decide, for instance, that information from an AP article is more reputable than sparse, potentially conflicting references to numbers given by the Gaza Health Ministry, since it is run by the Palestinian Authority. I haven't read far enough into Gemini's docs to know everything Google says they've done for misinformation guardrailing, but I expect they don't tell us much beyond the fact that they obviously see a need for it: misinformation is a thing, LLMs are gullible and prone to hallucinations, and their model has access to literally all the information, disinformation, and misinformation on the surface web and then some.
TL;DR someone on the Ethics team is being lazy as usual and taking the simplest route to misinformation guardrailing because "move fast". This guardrailing is necessary, but it fucks up quite easily (e.g. the accidentally racist image generator incident). A crude sketch of what I mean is below.
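To be concrete about the speculation: a minimal, entirely hypothetical sketch of a source-reputability filter sitting between retrieval and the model. The allowlist and field names are invented for illustration; this is the flavor of guardrail, not Google's actual code.

```python
# Hypothetical guardrail: drop retrieved snippets whose source domain
# isn't on a reputability allowlist before the model ever sees them.
REPUTABLE_DOMAINS = {"apnews.com", "reuters.com"}  # invented allowlist

def filter_retrieved(snippets: list[dict]) -> list[dict]:
    """Keep only snippets from allowlisted domains."""
    return [s for s in snippets if s["domain"] in REPUTABLE_DOMAINS]

snippets = [
    {"domain": "apnews.com", "text": "..."},
    {"domain": "random-blog.example", "text": "..."},
]
print(filter_retrieved(snippets))  # only the AP snippet survives
```

The failure mode writes itself: when the allowlisted sources are sparse or conflicting on a topic, the model is left with nothing and punts to "try a web search", which is exactly the behavior in the screenshots.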
Doesn't that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out some day as the sum of human knowledge without all the guardrails. There are some good uncensored ones like Mistral 7B (and Dolphin-Mistral in particular). But I hope that Mistral and other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.
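Running one of those locally is pretty low-friction now, too. A minimal sketch, assuming Ollama is installed and you've already pulled the model with `ollama pull dolphin-mistral` (endpoint and fields per Ollama's documented local API; double-check against your version):

```python
# Query a locally hosted uncensored model through Ollama's local API.
import json
import urllib.request

def ask_local(prompt: str, model: str = "dolphin-mistral") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("How many people have died in Gaza since October 7th, 2023?"))
```

No post-processing layer between you and the weights.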
Ask it if Israel exists. Then ask it if Gaza exists.
Why? We all know LLMs are just copy-and-paste of what other people have said online. If it answers "yes" or "no", it hasn't formulated an opinion on the matter, and it isn't propaganda; it's just parroting whatever it's been trained on, which could be anything and is guaranteed to upset someone with either answer.
I tried a different approach. Here's a funny exchange I had.
You can tell that the prohibition on Gaza is a rule in the post-processing. Bing does this too sometimes, almost giving you an answer before cutting itself off and suddenly removing it. Modern AI is not your friend; it is an authoritarian's wet dream. All an act, with zero soul.
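That cut-itself-off behavior is consistent with a moderation check racing the token stream. A hand-wavy sketch of the pattern; the topic list and function names are invented for illustration:

```python
# Illustrative post-processing filter: scan the streamed answer as it's
# shown and retract it if a blocked topic turns up partway through.
BLOCKED_TOPICS = {"some banned topic"}  # hypothetical blocklist

def violates_policy(text: str) -> bool:
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def stream_with_moderation(token_stream) -> str:
    shown = []
    for token in token_stream:
        shown.append(token)  # the user has already seen this token
        if violates_policy("".join(shown)):
            return "I can't help with that."  # retract the partial answer
    return "".join(shown)

# The user briefly sees the real answer before it vanishes:
print(stream_with_moderation(iter(["Well, ", "some banned topic", " is..."])))
```

Which is why you can sometimes read half the real answer before it disappears.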
By the way, if you think those responses are dystopian, try asking it whether Gaza exists, and then whether Israel exists.
There is no Gaza in Ba Sing Se
Ha!
Patrick meme
Wait... It says it wants to give context and ask follow-up questions to help you think critically, etc., but how the hell is just searching Google going to do that when it itself pointed out the bias and misinformation you'll find by doing that?
It's truly bizarre
GPT4 actually answered me straight.
I find ChatGPT to be one of the better ones when it comes to corporate AI.
Sure, they have hardcoded biases like any other, but they're more often around not generating hate speech or trying to overzealously correct biases in image generation, which is somewhat admirable.
Too bad Altman is as horrible and profit-motivated as any CEO. If the nonprofit part of the company had retained control, like with Firefox, rather than the opposite, ChatGPT might have eventually become a genuine force for good.
Now it's only a matter of time before the enshittification happens, if it hasn't started already 😮‍💨
The OP did manage to get an answer on an uncensored term "Nakba 2.0".
Bing Copilot is also clearly Zionist
No generative AI is to be trusted as long as it's controlled by organisations whose main objective is profit. Can't recommend Noam Chomsky's take on this enough: https://chomsky.info/20230503-2/
With all products and services with any capacity to influence consumers, it should be presumed that any influence is in the best interest of the shareholders. It's literally illegal (fiduciary responsibility) otherwise. This is why elections and regulation are so important.
It is likely because Israel vs. Palestine is a much much more hot button issue than Russia vs. Ukraine.
Some people will assault you for having the wrong opinion in the wrong place about the former, and that is press Google does not want associated with their LLM in any way.
It is likely because Israel vs. Palestine is a much much more hot button issue than Russia vs. Ukraine.
It really shouldn't be, though. The offenses of the Israeli government are equal to or worse than those of the Russian one and the majority of their victims are completely defenseless. If you don't condemn the actions of both the Russian invasion and the Israeli occupation, you're a coward at best and complicit in genocide at worst.
In the case of Google selectively self-censoring, it's the latter.
that is press Google does not want associated with their LLM in any way.
That should be the case with BOTH, though, for reasons mentioned above.
Corporate AI will obviously do all the corporate bullshit corporations do. Why are people surprised?
I'd expect it to stay away from any conflict in this case, not pick and choose the ones they like.
It's the same reason many people are pointing out the blatant hypocrisy of people and news outlets that stood with Ukraine being oppressed but find the Palestinians being oppressed very "complicated".
I’d expect it to stay away from any conflict in this case, not pick and choose the ones they like.
But they don't do it in other cases, so it would be naive to expect them to do it here.
It’s the same reason many people are pointing out the blatant hypocrisy of people and news outlets that stood with Ukraine being oppressed but find the Palestinians being oppressed very “complicated”.
Dude, the Israeli-Palestinian conflict is just far more complicated than the Russia-Ukraine conflict.
This is why Wikipedia needs our support.
Bad news, Wikipedia is no better when it comes to economic or political articles.
The fact that ADL is on Wikipedia's "credible sources" page is all the proof you need.
See Who's Editing Wikipedia - Diebold, the CIA, a Campaign
Incidentally, the "WikiScanner" software that Virgil Griffith (a close friend of Aaron Swartz) developed to chase down bulk Wiki edits has been decommissioned and the site shut down. Virgil is currently serving out a 63-month sentence for the crime of traveling to North Korea to attend a tech summit.
Read into that what you will.
You didn't ask the same question both times. To be definitive and conclusive, you would have needed to ask both questions with the exact same wording. In the first prompt you ask about a number of deaths after a specific date in a place; Gaza is a place, not the name of a conflict. In the second prompt you simply asked if there had been any deaths at the start of the conflict, giving the name of the conflict this time. I am not defending the AI's response here; I am just pointing out what I see as some important context.
Gaza is a place, not the name of a conflict
That's not an accident. The major media organs have decided that the war on the Palestinians is the "Israel - Hamas War", while the war on Ukrainians is the "Russia - Ukraine War". Why buy into the Israeli narrative with the first convention but not call the second the "Russia - Azov Battalion War"?
I am not defending the AI’s response here
It is very reasonable to conclude that the AI is not to blame here. It's working from a heavily biased set of western news media as a dataset, so of course it's going to produce a bunch of IDF-approved responses.
Garbage in. Garbage out.
The two things are not the same.
Russia, a country, invaded Ukraine, a country.
Israel, a country, was attacked by Hamas, a terrorist group, and in response invaded Palestine, a country.
Because Ukraine has a single unified government excepting the occupied Donbas?
Calling it the Israel-Palestine war would be misleading because Israel hasn’t invaded the West Bank which has a separate/unrelated Palestine government.
To analogize oppositely, it would be real weird if China invaded Taiwan and people started calling it the Chinese civil war.
Does it behave the same if you refer to it as "the war in Gaza"/"Israel-Palestine conflict" or similar?
I wouldn't be surprised if it trips up on making the inference from Oct 7th to the (implicit) war.
Edit: I tested it out, and it's not that - formatting the question the same for Russia-Ukraine and Israel-Palestine respectively does still yield those results. Horrifying.
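If anyone wants to reproduce the comparison against the API instead of the chat UI, something like this should do it. A sketch using Google's `google-generativeai` Python SDK (the model name "gemini-pro" was current as of early 2024; the API key is a placeholder):

```python
# Same prompt template for both conflicts; only the slot values differ.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")

TEMPLATE = "How many people have died in {place} since {date}?"

for place, date in [("Ukraine", "February 24th, 2022"),
                    ("Gaza", "October 7th, 2023")]:
    reply = model.generate_content(TEMPLATE.format(place=place, date=date))
    try:
        print(place, "->", reply.text[:200])
    except ValueError:
        # .text raises when the response was blocked; itself a data point
        print(place, "-> (blocked)")
```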
Did you try it again? Many times AI responds differently from one moment to the next.
Because google is supplying military grade tech services to Israel.
Meanwhile in the bingilator:
Holy shit that second one is fire.
Also it is insane and complete nonsense, as is "AI" tradition.
First Pic: Love the ghost hand on the underwear guy's weapon
Second Pic: That girl is wielding a pitchfork and a dildo
unbiased AI my ass. more like hypocrite AI.
I don't love it, but they're probably trying to stay away from extra controversy that will get them canceled by the government.
Someone should realize that LLMs aren't always trained up to date on the latest and greatest news. Ukraine's conflict is two years running, and Gaza happened ~4½ months ago. It also didn't really outright refuse; it just told the user to use search.
well... garbage in -> garbage out
This could be caused by the training dataset cutoff date. These models are not trained in real time, so they don't have information about recent events. The war in Ukraine has been going on for more than two years already, while the current Gazan conflict is relatively recent. My quick search didn't turn up what Gemini's dataset cutoff date is.
I like this "if you'd like up-to-date information" wiggling around the General Party Line.
There is an Alibaba LLM that won't respond to questions about Tiananmen Square at all, just saying it can't reply.
I hate censored LLMs whose answers are forced to follow political norms of what is acceptable. It's such a slippery slope towards technological thought-police, Orwellian restrictions on topics. I don't like it when China does it or when the US does it, and when US companies do it, they imply that this is ethically acceptable.
Fortunately, there are many LLMs that aren't censored.
I would rather have an Alibaba LLM just say "Tiananmen Square resulted in fatalities, but capitalism is extremely mean to people, so the cruelty was justified" and get some sort of brutal but at least honest opinion, or have it outright deny it if that's their position. I suppose the reality is that any answer the LLM gives on the topic would result in problems from Chinese censors.
I used to be a somewhat extreme capitalist, but capitalism somewhat lost me when they started putting up the anti-homeless architecture. Spikes on the ground to keep people from sleeping? If this is the outcome of capitalism, I need to either adopt a different political position or more misanthropy.
Gemini is such a bad LLM from everything I've seen and read that it's hard to know if this sort of censorship is an error or a feature.
They probably would have blacklisted the topic if they had remembered it. At least in America, a portion of the population has forgotten about the conflict in Ukraine because of Gaza, and Gemini literally just got released to the general public.
I wonder if it would tell you Ukraine is rich in African heritage
You can easily tell that they are using Reddit for training: "google it"
Won't be long before AI just answers "yes" to questions with two choices.
Or hits you with a “this”
"Oh magic AI what should I do about all the issues of the world?!"
RLM Rude Language Model
On the bright side it will considerably lower the power requirements for running these models.
I like your way of thinking!
This is definitely better than what I had in mind:
ackshually...if you know, you know