They worded the headline that way to scare you into that reaction. They're only interested in telling you about the negative uses because that drives engagement.
I understand AI evangelists - which you may or may not be idk - look down on us Luddites who have the gall to ask questions, but you seriously can’t see any potential issue with this technology without some sort of restrictions in place?
You can’t see why people are a little hesitant in an era where massive international corporations are endlessly scraping anything and everything on the Internet to dump into LLMs and the like, to use against us to make an extra dollar?
You can’t see why people are worried about governments and otherwise bad actors having access to this technology at scale?
I don’t think these people should be locked up or all AI usage banned. But there is definitely a middle ground between absolute prohibition and no restrictions at all.
They mentioned one potential use that I thought had value and that I hadn't considered. For video conferencing, this could transmit data without sending video and greatly reduce the bandwidth needed by rendering people's faces locally. I don't think that outweighs the massive harms this technology will unleash. But at least there was some use that would be legitimate and beneficial.
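That bandwidth claim can be sanity-checked with some back-of-envelope numbers. Every figure below (the call bitrate, landmark count, and frame rate) is an assumption for illustration, not something from the article:

```python
# Back-of-envelope sketch: a typical 1080p video-call stream vs. sending
# only facial landmarks and rendering the face locally. All figures are
# illustrative assumptions.
video_bitrate_bps = 2_000_000       # assume ~2 Mbps for a 1080p call

landmarks = 68                      # a common facial-landmark count
bytes_per_landmark = 2 * 4          # x, y coordinates as 32-bit floats
fps = 30
landmark_bitrate_bps = landmarks * bytes_per_landmark * fps * 8

print(landmark_bitrate_bps)                              # 130560 (~0.13 Mbps)
print(round(video_bitrate_bps / landmark_bitrate_bps))   # ~15x savings
```

Under these assumptions the savings are roughly an order of magnitude, which is the general shape of the argument even if the exact numbers differ.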
I'm someone who has a moral compass and I don't like that scammers will abuse this shit so I hate it. But there's no keeping it locked away. It's here to stay. I hate the future / now.
Wouldn't you then have to run the AI locally on a machine (which probably draws a lot of power and uses a lot of memory), or use it via the cloud (which depends on bandwidth just like a video call)? I don't really see where this technology could actually be useful. Sure, if it were only a minor computation, like taking a picture or video with any modern smartphone. But computing an entire face and voice seems much more complicated than that, and not really feasible for the usual home device.
Also I would argue sending the actual video of what is happening in front of the camera is kind of the entire point of having a video call. I don’t see any utility in having a simulated face to face interaction where neither of you is even looking at an actual image of the other person.
You can’t simply not develop a technology. Progress is going to move forward. If they don’t do it, somebody else is going to figure out how. The tools are out there. The math works. Better for researchers to do it now and scare us into finding solutions than for criminals to develop it first.
Because bags of money. And MS is a hyper toxic entity that’s been siphoning the data of every Windows user for decades now. That company is basically IBM during WW2.
If something is possible, and this clearly is, someone is going to develop it regardless of how we feel about it. So it's important for non-malicious actors to make people aware of the potential negative impacts, so we can start developing ways to handle them before actively malicious actors deploy it.
Critical businesses and governments need to know that identity verification via video and voice is much less trustworthy than it used to be, and so if you're currently doing that, you need to mitigate these risks. There are tools, namely public-private key cryptography, that can be used to verify identity in a much tighter way, and we're probably going to need to start implementing them in more places.
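The public-private key approach mentioned above can be sketched as a challenge-response check: the verifier sends a fresh nonce, the claimed identity signs it with their private key, and anyone holding the public key can check the result. This is a toy RSA demo with deliberately tiny primes, just to show the shape of the idea; the names and nonce are made up, and real systems would use a vetted library (e.g. an Ed25519 implementation) with proper key sizes:

```python
import hashlib

# Toy RSA signatures, purely illustrative -- never use tiny primes or
# hand-rolled crypto in production.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (3-arg pow, Python 3.8+)

def sign(message: bytes) -> int:
    """Only the holder of the private exponent d can produce this value."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# Challenge-response: sign a fresh nonce, not a reusable phrase, so a
# recorded answer can't be replayed later.
challenge = b"prove you are really Alice: nonce 8f3a"
signature = sign(challenge)
print(verify(challenge, signature))  # True
```

The point is that a deepfake can imitate a face and voice, but it can't produce a valid signature without the private key, which is why this verifies identity far more tightly than video ever could.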
Would be great for me and others who have trouble with body language. I could deepfake a version of myself with neurotypical body language and offload the effort of "acting normal" to the AI for interviews and video calls. Genuinely I'm super pumped for this.
They're also releasing a detector, for what it's worth.
Yeah, this one seems like it will have more negative applications than positive. For non-deceptive uses you usually have plenty of footage of the person you want to recreate, so a model that works from a single image mainly benefits deception. It's inevitable all video will be easily fakeable one day soon, but why hasten it?
The eyes still have uncanny valley vibes, but that's because I'm looking for it. If I wasn't watching demo videos about generated video, I might not have noticed.
And that's the problem. The average person isn't looking for it, and will absolutely not see it. As long as it's good enough, that's all that matters. A plausible enough video of Joe Biden talking about rounding up Christians into internment camps that gets shared on Facebook, or something like that which panders to right-wing bigotry, is enough to get people going. Even real images and videos that are miscaptioned are enough, and even when a link is there that disproves the caption.
People seriously underestimate just how horrifying the possibilities are with this shit. And as high stakes as this election cycle is, given the state of politics in this country, the tendency for people to latch on to anything that affirms their preexisting ideals creates a fucking minefield.
This is an education problem as much as -- if not more so than -- a tech problem. Before the GOP gutted critical thinking wherever they held a majority and two generations were able to grow up under those circumstances, a video of any current president rounding up Christians would have been roundly rejected as either satirical or disinformation by the vast majority of the population, owing to the absurdity of the idea.
Once we got to the point of a not-insignificant minority of the population believing that the true power in the United States lies in the basement of a pizza shop with no basement ...
Someone help me out please. Who was the 90s sci-fi author who predicted actors would go away and all movies would be made using cgi /ai? She had characters in the book, watching movies starring Humphrey Bogart and John Wayne, as detectives solving crimes (and so on). She also predicted "ractors", people who act in front of a camera, so a computer can use their motion and expressions to animate a character on screen in real time.
My feeble brain, I swear... In any case, thanks to her, I knew this day was coming. Gonna be a wild ride though.
I've seen far more convincing deepfakes, to the point I couldn't tell until I was told. I've experimented with this myself. After a bit of trial and error, almost anyone can easily create shockingly convincing deepfakes. One interesting method is using 3D rendered characters with deepfake faces.
Well, just watch "The Masked Scammer" documentary and you'll see how this can (and definitely will) go wrong. For a summary, there's this article on Wikipedia: Gilbert Chikli.
I think this has an effect most people don't think of: media will just lose its value as a trusted source of information. We'll lose the ability to trust broadcast media, since anything could be faked. Humanity is back to "word of mouth", I guess.
This milestone was passed a long time ago. For some reason, uncle Bob's Facebook post has been just as reliable a media source as any other for a lot of people already.