I said this the other day in a different post on the same topic.
I'm an IP lawyer that specializes in licensing AI-related technologies and datasets. Relatively small chunks of code are not protected unless that portion of code has sufficient "minimum originality" under the Feist standard. Even then, Fair Use could still apply.
What's interesting to me about this article is that professional illustrators can likely just straight up publish the exact same books as this guy, because there's a very good argument that this guy is not the author and that the story generated by the AI lacks sufficient minimum originality. Further, if the guy trained his AI algorithm(s) with illustrations by the offended artists, the artists might have a reasonable claim for copyright infringement via a violation of their derivative rights. It's not a great argument for a lot of reasons, but I could see a judge biting on the argument.
Just a few things to add here. This article seems to be mixing up some stuff (like the USPTO, which handles patents and trademarks, and the US Copyright Office). Another thing people have been misunderstanding is that getting a registration on a copyright is not the same thing as having a judge make a ruling on the law in this area. This can become a nuanced area when you start talking about stuff like Chevron deference and the like, but as far as I am aware no judge has ruled on these individual issues yet.
This exact issue is my area of expertise and is a super complicated area of the law, but I’ll do my best to answer any questions about it if anyone has a follow-up.
If an AI writes an entire story, then a human tweaks and edits it, at what point does it meet the minimum originality?
Same with art. How little can the image be edited to make something new? I'm sure that happens without AI in things like youtube reaction videos, or silly photoshops of a popular picture.
About five years ago, I made a prediction that within the next 10-20 years, we would start to have legal cases about granting rights to machine intelligences, and the thin edge of the wedge would be copyright law. And that eventually this leads to recognizing some AIs as legal persons.
What say you?
I get the sense that people think "AI" somehow magically appears out of the ether in some cases. The conversation seems to be couched in terms of the "machine" or the "AI". At least today, there is significant human effort needed to create the algorithms and train them. Different outcomes can be produced based on two things: 1) the technology chosen to be used and 2) the data used to train it. And human beings have to do both right now. How would these two things be considered in any argument for, or against, originality?
NOTE: My question is purely hypothetical and asked without advocating for any particular view point. I'm not team AI nor team human in this context.
I noticed the USPTO thing too. I can't track down the source of this at all, but I think it may have been based on a pro se applicant's social media post. I can't even tell what she applied for.
I would take everything said here with an enormous grain of salt. Whoever wrote this article is very confused about IP.
I'd like to make sure I've understood your point correctly: Are you saying that, since the comic art has no original author, the art isn't copyrightable, and so can be used freely by other people for commercial purposes?
Because if so, I think that would actually be the thing that convinces tech bros to stop using it - because while they can make money off it, so can everyone else, so they get a taste of their own (art-stealing) medicine, as it were.
What do you think of this argument about AI art?
AI is basically just specialized compression software. Usually, when you build compression software, the goal is to get rid of as much data as you can such that when you run the decompression function, you end up with minimal functional loss. But that's not inherent to compression, that's just what it's mostly used for.
With AI you have a directed graph with weighted edges between nodes (the "neural net"), and the training program is basically some nice calculus that minimizes error for some quantifiable definition of error. So you take your training set, where you have a known mapping of inputs to outputs, and run the calculus, which adjusts the weights in the directed graph until error is minimized, i.e., the net produces the desired outputs when given their matched inputs.
Then, just like any other compression software, you can "run it backwards". But because the goal of AI isn't to output the training set, only enough data is preserved in the state of the directed graph to minimize error during training, so when you run it backwards you don't get the compressed image, you get something else.
But critically, it's important to recognize that even though it's not made to output copies of the images it compressed the way a zip file is, the AI still is, at its core, commercial use of compressed/encoded art without permission.
So, my legal question is: Does this technically correct framing of AI as compression hold water legally and if so, is it legal to embed someone else's IP in your source code without their consent and make functional use of that IP?
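The training loop described above ("run the calculus until error is minimized") can be sketched in miniature. This is a toy one-node model fit by gradient descent on mean squared error; the function names, learning rate, and the tiny dataset are all illustrative, not taken from any real system:

```python
# Minimal sketch: fit y = w*x + b to known (input, output) pairs by
# gradient descent, the error-minimizing "calculus" described above.
# Real neural nets do this over millions of weights, not two.

def train(pairs, lr=0.01, steps=5000):
    """Adjust weights w and b to minimize mean squared error on pairs."""
    w, b = 0.0, 0.0
    n = len(pairs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in pairs) / n
        db = sum(2 * (w * x + b - y) for x, y in pairs) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# The "known mapping of inputs to outputs" (here generated by y = 2x + 1).
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
```

After training, the model state (just `w` and `b` here) encodes only what was needed to minimize error on the training set, which is the lossy-compression intuition the comment is driving at.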
I think https://www.oii.ox.ac.uk/wp-content/uploads/2022/03/040222-AI-and-the-Arts_FINAL.pdf is worth looking at. There are a few ways creators are adapting to the tech, though keep in mind this report predates the current versions of DALL-E, Imagen, and Midjourney, which have gotten so much better. Also note how they're moving into 3D renders, texture mapping, and animation from prompts.
There is always something to be said for giving those without the ability a way to creatively express themselves, via Midjourney or ChatGPT. Accessibility is a progressive virtue, I think. I guarantee you lots of developers and such are also bricking their collective pants the more they play with ChatGPT.
Probabilistic generators like Stable Diffusion-based systems make infringement notoriously hard to prove if the dataset is large enough, unless you've prompted something really specific (and even then, every word you enter into the prompt adds noise to the output).
It is worth looking at how complex some of the prompts are https://github.com/willwulfken/MidJourney-Styles-and-Keywords-Reference/tree/main/Pages/MJ_V3/Style_Pages/Just_The_Style to appreciate some of what the engine can do
I would also say that once AI-generated images can be fed back into the system after reclassification, which will probably happen given the way the scrapes work, even that trace becomes negligible, no?
Edit - SP and a couple of links
What do you think will be the test for originality? I've been thinking about this recently and I was curious if you think it will come down to subjective or objective measures. For example, will it be judged by a jury (I imagine it will be the same as music but I don't really know how music works either)? Or will it be a matter of something like "no more than 2% of pixels can directly match a copyrighted work" or requiring a generated image to source from a minimum number of training images.
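The hypothetical objective measure floated above ("no more than 2% of pixels can directly match a copyrighted work") is easy to make concrete, whatever its legal merit. A toy sketch, where the 2% threshold and the flat pixel lists are purely illustrative; no court has adopted any such test:

```python
# Toy version of a hypothetical "percentage of matching pixels" test.
# Images are represented as equal-length flat lists of pixel values.

def matching_pixel_fraction(img_a, img_b):
    """Fraction of positions where two equal-size pixel lists match exactly."""
    if len(img_a) != len(img_b):
        raise ValueError("images must be the same size")
    matches = sum(1 for a, b in zip(img_a, img_b) if a == b)
    return matches / len(img_a)

generated   = [0, 0, 255, 128, 64, 32, 7, 9]
copyrighted = [0, 0, 255, 128, 99, 32, 1, 2]
frac = matching_pixel_fraction(generated, copyrighted)
exceeds_threshold = frac > 0.02  # the hypothetical 2% cutoff
```

Even this toy shows why such a rule would be brittle: shifting an image by one pixel or re-encoding it would drop the exact-match fraction to near zero while leaving the work visually identical.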
How does copyright work for images? Is there just some big database of copyrighted pictures that AI images will be checked against? Or is it up to the artists to be diligent in searching for their own work on the internet?
Thank you! Was actively wondering about the "minimum originality" threshold thing, but didn't know who or where to ask.
The "derivative rights" thing is interesting. From a technical perspective... "AI" (there's technical disagreement over what this term even means now) does create original artwork at times. At other times it clearly does not, and it's basically just copying/deriving near the same thing as previous works. Right now no one builds in anything to distinguish the two as far as the program itself is concerned, but it could definitely be built in in relatively short order.
Either way, you could easily get expert witnesses into court to claim AI doesn't make original stuff at all. They're wrong, and having a tough time admitting it. Every time AI copies humans well enough to be indistinguishable, these are the types of people who just move the goalposts further out, because the thought that humans are just robots bothers them that fundamentally. But they'll still be persuasive in court.
So with that in mind, when do you think the first big court cases bringing up AI copyright stuff will show up in court? Or just like, a general timeframe guess?
Isn't the case law about what constitutes copyright infringement a super convoluted mess though? I know it is w/r/t music at least.