In an unsurprising story out of Denver, CO, 29-year-old Ethan Gaines says he’s “firmly against” the regulation of deepfakes, while simultaneously being the main reason why they should be regulated.
“Deepfakes being regulated is an infringement upon my rights!” Ethan told reporters. “I should
Unfortunately, it’s computer code, and there is essentially nothing that can be done to prevent its use. Regulators can only punish its users, and if we know anything about prohibition, it has (at best) zero effect in dissuading potential offenders.
I sincerely hope nobody uses it to generate porn of real people without their consent. With that said, it is highly likely if not absolutely certain that bolstering authority on this issue will have disastrous effects that last for generations. It only takes a brief glance at history to see why.
We have only barely held onto our digital freedom. They are trying to strip it away at every turn. Now it’s “protect the kids,” but we are witnessing the narrative shift in real time.
What I’m saying is don’t be surprised if using open source software carries a prison sentence in first world countries in the near future.
Porn of celebrities without their consent was one of the first applications I ever saw this tech used for, back when it was new.
And also adding Nicolas Cage into random movie scenes, but still…
Yet, oddly enough, a disappointingly small amount of Nicolas Cage in porn.
Be the change you wish to see in the world
Gonna make one called National Pleasure
Leaving Las Penis.
The Cock.
The Unbearable Weight of Massive Wieners.
It would be just like regular porn, but Nicolas Cage would also be there.
I think you’re assuming we’re discussing regulating deepfakes (the software), when really everyone is talking about regulating deepfakes (the use of the software).
… This is to say, making it illegal to generate lifelike images and videos of celebrities, non-celebrities, and political figures without consent is in no way the doomsday scenario you describe.
Defamation and libel are illegal. You can use open-source tools to perform defamation and libel, or you can use those same tools in a way that doesn’t constitute defamation and libel.
Part of the confusion around regulation, at least in the US, is that movie studios such as Disney REALLY want to be able to use deepfakes of celebrities.
It’s foolish to think you can restrict one and not the other, and it’s even more foolish to think our authoritarian governments would stop there if they could.
No matter how you slice it, though, AI will be disruptive to life as we know it. The question is how to handle it. No media outlet will be spared until we figure out a way of validating information.
My favourite vision is an AI robocaller chatting with an AI answering machine. But honestly, why have a phone at all? Any of your loved ones calling may just as well be a deepfake trying to scam you for money.
It is clear we need to do something to prepare, but what? As you say, the cat is already out of the bag, so how do we proceed from here? Anti-AI software, in the same way we have antivirus software? Even open-source self-hosting won’t be enough; somehow you need to validate whom to trust. Even if it would be a start…
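For what it’s worth, the “validate who to trust” part isn’t science fiction — it’s basically message authentication. Here’s a minimal sketch using Python’s standard-library `hmac` with a hypothetical shared secret; a real system would use asymmetric signatures (e.g. Ed25519) so you don’t have to share keys, but the principle is the same: content that doesn’t carry a valid tag from someone you trust is treated as untrusted.

```python
import hashlib
import hmac

# Hypothetical shared secret agreed on out-of-band (e.g. in person).
SECRET = b"shared-secret-between-you-and-grandma"

def sign(message: bytes) -> str:
    """Produce an authentication tag for a message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag in constant time; False means don't trust it."""
    return hmac.compare_digest(sign(message), tag)

msg = b"Hi, it's really me, call me back"
tag = sign(msg)
print(verify(msg, tag))           # True: message is from the key holder
print(verify(b"wire me $500", tag))  # False: forged content fails
```

A deepfaked voice can imitate a person, but it can’t produce a valid tag without the key — which is why “validating who to trust” reduces to key management rather than trying to detect fakes by eye or ear.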
I really need a Poe’s Law check on this.
I’m shell-shocked from witnessing libertarian tech bros defending deepfake child porn, and I can no longer distinguish earnest AI-booster sentiment from satire.
Please give me a /s so I can sleep at night.
My original post was not sarcasm, but if it helps, let me be explicit: people producing CSAM of any kind, real or generated, need to be put through a wood chipper feet first. I would pay money to do it myself.
How about this: people are allowed to produce those nasty AI images, but every image must be registered, and anyone viewing them is also registered.
Regular counselling is mandatory for those registered.