It really annoys me how people react with such shock and alarm at how companies are “stealing” their data, when they put said data up in a public venue explicitly for the purpose of everyone seeing it. And particularly in the case of AI training there isn’t even any need for them to save a copy of that data or redistribute it to anyone once the AI has been trained.
Making something publicly available does not automatically give everyone unrestricted rights to it.
For example, you do not have permission to make copies of articles in the NYT even when they are available to the public. In fact, a main purpose of IP law is to define certain rights over a work even after it is seen by the public.
In the case of AI, if training requires making a local copy of a protected work then that may be copyright infringement even if the local copy is later deleted. It’s no different than torrenting a Disney movie and deleting your copy after you watched it.
Making something publicly available does not automatically give everyone unrestricted rights to it.
Of course not. But that’s not what’s happening here. Only very specific rights are needed, such as the right to learn concepts and styles from what you can see.
In the case of AI, if training requires making a local copy of a protected work then that may be copyright infringement even if the local copy is later deleted.
That’s the case for literally everything you view online. Putting it up on your screen requires copying it into your computer’s memory and then analyzing it in various ways. Every search engine ever has done this way more flagrantly than any AI trainer has. There have been plenty of lawsuits over this general concept already and it’s not a problem.
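To make that concrete, here is a rough sketch in Python (standard library only; the URL is just a placeholder for any public post) of what merely “viewing” a page already involves:

    # "Viewing" a page means your machine first makes a full local copy of it
    # in memory; a browser then parses and renders that copy, and a search
    # engine crawler additionally indexes and caches it.
    from urllib.request import urlopen

    with urlopen("https://example.com/") as resp:   # placeholder for a public post
        page_bytes = resp.read()                    # the whole page now sits in local RAM

    print(f"local in-memory copy: {len(page_bytes)} bytes")

A browser, a search crawler, and an AI trainer all start from this same step.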
It’s no different than torrenting a Disney movie and deleting your copy after you watched it.
Except that in this case it’s not torrenting a copy that Disney didn’t want to have online for you to see. It’s looking at stuff that you have deliberately put up online for people to see. That’s rather different.
Besides, it’s actually not illegal to download a pirated movie. It’s illegal to upload a pirated movie. A distinction that people often overlook.
Only very specific rights are needed, such as the right to learn concepts and styles from what you can see.
For AI training, you nearly always need a local copy of the data.
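Here is a minimal sketch of what that usually looks like (the URL is a placeholder and train_step is a toy stand-in, not any real trainer): the text has to be fetched and held as a local working copy before anything can learn from it.

    import os, tempfile
    from urllib.request import urlopen

    cache_dir = tempfile.mkdtemp()
    sources = ["https://example.com/"]        # stand-in for the public posts being collected

    local_copies = []
    for i, url in enumerate(sources):
        path = os.path.join(cache_dir, f"doc{i}.txt")
        with urlopen(url) as resp, open(path, "wb") as f:
            f.write(resp.read())              # the local copy the argument is about
        local_copies.append(path)

    def train_step(text):
        # toy stand-in for a real parameter update: just count tokens
        return len(text.split())

    total = 0
    for path in local_copies:
        with open(path, encoding="utf-8", errors="ignore") as f:
            total += train_step(f.read())
    print(f"'trained' on {total} tokens from {len(local_copies)} locally copied documents")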
That’s the case for literally everything you view online. Putting it up on your screen requires copying it into your computer’s memory
Yes, and that copy is provided with restrictions. You can view your copy in a browser, but not necessarily use it for other purposes.
Every search engine ever has done this way more flagrantly than any AI trainer has. There have been plenty of lawsuits over this general concept already and it’s not a problem.
Those cases have delineated what Google is and is not allowed to do. It can only copy a short snippet of the page as a summary. This was ruled “fair use” largely because a short snippet does not compete against the original work. If anything it advertises the original work, just as movie reviews are allowed to copy short scenes from the movie they are reviewing.
On the other hand, AIs are designed to compete against the authors of the works they downloaded. Where that is the case, a fair use defense is unlikely to succeed.
Except that in this case it’s not torrenting a copy that Disney didn’t want to have online for you to see. It’s looking at stuff that you have deliberately put up online for people to see.
Disney does put its work online for people to see. So does the New York Times. That doesn’t mean you can make an unrestricted copy of what you see.
Besides, it’s actually not illegal to download a pirated movie.
Both are illegal in the US, although copyright holders generally prefer to go after uploaders.
Yes, and that copy is provided with restrictions. You can view your copy in a browser, but not use it for other purposes.
No, it’s not. I can use it for other purposes. I can’t distribute copies; that’s all that copyright restricts.
Those cases have delineated what Google is and is not allowed to do. It can only store a short snippet of the page as a summary.
Which is way more than what an AI model retains. Fair use is not even required since nothing copyrighted remains in the first place. You’ll first have to show that copyright is being violated before fair use even enters the picture.
Disney does put its work online for people to see. So does the New York Times. That doesn’t mean you can make an unrestricted copy of what you see.
Again, that has nothing to do with all this. AI training doesn’t require “making an unrestricted copy.” Once the AI has learned from a particular image or piece of text, that image or text can be deleted. It’s gone, no longer needed. No copy is distributed under any level of restrictiveness.
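A toy sketch of that claim (the paths and the word-count “model” are stand-ins for illustration, not any real system): training derives parameters from the local copies, only those parameters are kept, and the copies themselves are then deleted.

    import collections, json, os, tempfile

    workdir = tempfile.mkdtemp()

    # pretend these files are the local copies that were fetched for training
    copies = []
    for i, text in enumerate(["a public post about gardens", "another public post about gardens"]):
        path = os.path.join(workdir, f"copy{i}.txt")
        with open(path, "w") as f:
            f.write(text)
        copies.append(path)

    # "training": word counts standing in for learned weights
    weights = collections.Counter()
    for path in copies:
        with open(path) as f:
            weights.update(f.read().split())

    with open(os.path.join(workdir, "model.json"), "w") as f:
        json.dump(weights, f)      # only the derived parameters are persisted

    for path in copies:
        os.remove(path)            # the local copies of the source text are gone

    print(sorted(os.listdir(workdir)))   # ['model.json']

What survives is derived parameters, not the posts themselves, and nothing is handed out to anyone.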
Both are illegal in the US
I am Canadian. America’s laws are not global laws. If they wish to ban AI training this will become starkly apparent.
It makes no difference what the AI model retains. The only question is whether you had permission to use your copy in the manner that you did.
So for instance suppose you made a copy of a Disney movie in any fashion (by torrent, by videotaping a screening, by screen-capturing Disney+, etc), then showed it to a classroom in its entirety, and then deleted it immediately thereafter. You infringed copyright, because you did not have permission to use it in that manner even once. It makes no difference how long you retained your copy.
Note that it would also make no difference if there were actually no students in the classroom. Or if the students were actually robots. Or just one robot, or a software AI. Or if you didn’t use a screen to show the material, you simply sent the file electronically to the AI. Or if the AI deleted the file shortly after receiving it. You still didn’t have permission to use your copy in the manner you did, even once. Which means it was illegal.
If they wish to ban AI training this will become starkly apparent.
In every other field, researchers have long been required to use opt-in databases for their work. They can’t just “scrape” your medical records without your consent in order to study a particular disease. That would be wildly unethical.
Yet research, including AI research, has thrived in the US even with such ethical requirements. I am confident future AI researchers in America can be both ethical and successful.
The only question is whether you had permission to use your copy in the manner that you did.
The only permission needed is to look at it.
So for instance suppose you made a copy of a Disney movie in any fashion (by torrent, by videotaping a screening, by screen-capturing Disney+, etc), then showed it to a classroom in its entirety, and then deleted it immediately thereafter.
That’s a public performance, which is a form of redistribution. That’s not relevant to AI training.
Note that it would also make no difference if there were actually no students in the classroom.
[citation needed]
They can’t just “scrape” your medical records without your consent in order to study a particular disease.
The goalposts just swung wildly. Who’s posting medical records on the Fediverse?
I am confident future AI researchers in America can be both ethical and successful.
Except for being banned from using public data that non-American AIs are able to use.
Also, the undefined term “ethical” is a new goalpost that has just been brought into this discussion. I’ve found its use to be unhelpful; it always boils down to meaning whatever the person using it wants it to mean.
That’s a public performance, which is a form of redistribution. That’s not relevant to AI training.
Copyright law defines whether or not you can make a copy of a work. The person who owns the copyright can deny permission to make any copies, or grant you permission to make a copy only under certain conditions. Those conditions are completely up to the copyright holder. They might prohibit public performance, but by no means is public performance the only thing that the copyright holder can prohibit. It’s simply a very common prohibition.
You are trying to generalize from a specific right, viewing the content in a browser, to a general right to “look” at the content, and from there to the right to train an AI. But legally those are not the same at all. You may be granted some, all, or none of those rights.
Suppose you are in a modern art gallery. You have been given the right to “look” at someone’s art. You can nevertheless be prohibited from making a photograph of the art, even if the camera is also “looking” at it. The owner of the art can attach whatever conditions they want to your photo, including how long you can keep it and exactly what you do with it.
For example you could be allowed to photograph the art for home use but not for wider distribution. You could be allowed to photograph the art for classroom use, but not for AI training. If you are not willing to follow all of the conditions, then you can’t make a photo of the art at all.
The same is true of text. Websites give permission to make a copy of their text for use on your browser. And they can set whatever rules they like for how else your copy may be used.
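One common technical mechanism for expressing such rules to automated clients is robots.txt; whether it is legally binding is a separate question, and the crawler names below are purely illustrative. A small sketch using Python’s standard library:

    from urllib.robotparser import RobotFileParser

    # A site can single out specific automated agents (here a hypothetical
    # AI-training crawler) while leaving everyone else alone.
    robots_lines = [
        "User-agent: ExampleAITrainer",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    rp = RobotFileParser()
    rp.parse(robots_lines)

    print(rp.can_fetch("ExampleAITrainer", "https://example.com/post/123"))  # False
    print(rp.can_fetch("SomeOtherAgent", "https://example.com/post/123"))    # True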
Except for being banned from using public data that non-American AIs are able to use.
Sure. Of course, America could also ban those non-American AIs from being used in the US. Just as America bans other products that infringe patents/IP.
Copyright holders are not kings; there are limits to the sorts of things they can prohibit. And in the specific case we’re discussing, they have already given permission for their posts to be viewed by the public. You’re getting lost in irrelevancies. If you want to get pedantic, set up a camera facing a browser and let the AI train that way.
If the use of AI is banned within the US, I don’t think Hollywood will be happy about that, and neither will all the other big content producers that America is known for. The business will move elsewhere.
It really annoys me how people react with such shock and alarm at how companies are “stealing” their data, when they put said data up in a public venue explicitly for the purpose of everyone seeing it. And particularly in the case of AI training there isn’t even any need for them to save a copy of that data or redistribute it to anyone once the AI has been trained.
But they’re monetizing their data.
So?
Exactly.