Thanks for giving us the highlights. I just hope that, if AI has as big an impact on our lives as some of us think, it somehow gets democratized and isn't just something under the tight control of big corporations with $50M to spare.
Of course! Please share this Senate hearing around if you want to help. We need to bring awareness to what they are trying to do. Advocating for universal backdoors is insane...
You can't give powerful tools to the populace that might threaten your control over them.
I saw this coming. The people in power are directly threatened by AI in the hands of the working class. Chips will be designed to report on their users, or simply refuse to work unless you are an "authority". There will be a big divide between what the populace can do and what those in power can do. They also know this is the best time to implement draconian controls, because 98% of the population doesn't understand the implications.
What do they mean by watermarks? Why is it a bad idea to know which AI, if any, produced something?
Thanks for the post
They are requesting something beyond watermarking. Yes, it is good to have a robot tell you when it has made a film. What is particularly concerning is that the witnesses want the government to keep track of every prompt and output ever made, so that it can eventually trace any output back to its origin. So all open source models would have to somehow encode some form of signature, much like the hidden yellow dots printers produce on every sheet.
There is a huge difference between a watermark stating "this is AI generated" and hidden encodings, much like a backdoor, that let them trace any publicly released AI image, video, and perhaps even text output back to some specific model, or worse, DRM-enforced "yellow dot" injection.
I know researchers have already looked into encoding hidden, undetectable patterns in text output, so extending this to everything else is not unjustified.
Also, if the encodings are not detectable by humans, then they have failed the original purpose of making AI-generated content known.
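For anyone curious how a hidden pattern in plain text can even work: the published research idea is roughly to bias generation toward a pseudorandom "green" half of the vocabulary, re-seeded from the previous token, so a detector with the key can count green hits while a human sees nothing unusual. Below is a toy sketch of that idea in Python; the function names, the fake vocabulary, and the uniform sampling are my own illustration, not any vendor's actual scheme.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically split the vocabulary using a hash of the previous
    token as the seed; the 'green' half is favored during generation."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection: count how often each token lands in the green list derived
    from its predecessor. Watermarked text scores well above the ~0.5 you
    would expect from unwatermarked text."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(1, len(tokens) - 1)

# Toy "generation": always pick from the green list, so detection is easy.
vocab = [f"w{i}" for i in range(200)]
gen = random.Random(0)
tokens = ["w0"]
for _ in range(60):
    tokens.append(gen.choice(sorted(green_list(tokens[-1], vocab))))
```

The point of the toy: `green_fraction(tokens, vocab)` comes out at 1.0 for the biased sequence but hovers near 0.5 for randomly chosen tokens, which is exactly why a statistical test can flag watermarked text that looks normal to a reader, and also why paraphrasing or token edits degrade the signal.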
I think the argument commits several logical fallacies at once. And it's not either/or.
I don't see a reason why OpenAI and the other big companies shouldn't have incorporated watermarks from the beginning and voluntarily. The science is out there and it's really simple to do. And it solves a few valid problems.
I think valid uses are finding out whether your pupils did their homework themselves, and fighting spam and misinformation. There is no need to incorporate all kinds of data into the watermark to enable surveillance fantasies. On the other hand, it's a weak argument to say "but it can be circumvented" or "it doesn't work in edge cases" and then not do it at all. You could argue it disadvantages you if you have to do it but your competitors don't... but that's hardly the case if you're advertising to people other than criminals.
On a broader level, transparency is a good thing, if done right. I wouldn't like some AI-driven dystopian future with opaque social scores, credit scores, and my CV being declined before any human reads it. However, we need to be able to use AI as a tool, even for use cases like that. Transparency is the first step.
Thanks for the details. I guess the next step is to contact my congresspeople :)
Also no problem! I feel like I had to share this one.
It's impossible to regulate open source, AI or not. Doing so would be another brick in the wall, crippling whatever region tries it. Even if some countries want to regulate AI to try to control its power, it's already too late. Bad actors aren't waiting for anything; they have a head start. They don't operate on ethics or morals, so they have no problem doing what they want. And even discounting that, other countries have citizens of their own who can work on AI. The cat's out of the bag. The elite see the power and danger AI can bring them and think that by restricting who can use it, they can be safe. But for how long?
It would be difficult indeed, but without a doubt they will still try, and cause massive damage to our basic freedoms in the process. For example, imagine if one day all chips required DRM at the hardware level that cannot be disabled. That's just one example of the damage they could do. There isn't much any consumer could do about it, since developing your own GPU is nearly impossible.
What are you doing?
DRM on the chip doesn't seem feasible to me. In the end, the chip doesn't know what it is doing; it just does math. So how could any DRM at that level recognize that it is running a forbidden model, or that a jailbreak prompt is being executed? Figuring out what a program does is non-trivial even when you have the source code, and the DRM on the chip wouldn't even have that, only raw instructions and data.
So what I'm reading is that we should download those open source, non-DRM AI projects now, while we still can, and hoard all the data. Thanks for the warning.
Of course. I know some open source devs who advise backing up raw training data, LoRAs, and especially the original base models used for fine-tuning.
Politicians sent out an open letter in protest when Meta released LLaMA 2. It is not unreasonable to assume they will intervene for the next one unless we speak out against this.