We Asked A.I. to Create the Joker. It Generated a Copyrighted Image.
Artists and researchers are exposing copyrighted material hidden within A.I. tools, raising fresh legal questions.
We asked A.I. to create a copyrighted image from the Joker movie. It generated a copyrighted image as expected.
Ftfy
What it proves is that they are feeding entire movies into the training data. It is excellent evidence for when WB and Disney decide to sue the shit out of them.
Does it really have to be entire movies when there's a ton of promotional images and memes with similar images?
Promotional images are still under copyright.
We should find all the memers and throw them in jail.
Will someone think of the shareholders!?
I have that exact same .jpeg stored on my computer and I don’t even know where it came from. I don’t even watch superhero films
And if you tried to sell that, you would be breaking the law.
Which is what these AI models are doing
They’re not selling it though, they’re selling a machine with which you could commit copyright infringement. Like my PC, my HDD, my VCR…
No, they are selling you time in a digital room with a machine, and all of the things it spits out at you.
You don't own the program generating these images. You are buying these images and the time to tinker with the AI interface.
I’m not buying anything, most AI is free as in free beer and open source e.g. Stable Diffusion, Mistral…
Unlike hardware it’s actually accessible to everyone with sufficient know-how.
You're pretty young, huh. When something on the internet from a big company is free, you're the product.
You're bug- and stress-testing their hardware, and giving them free advertising. While using the cheapest, lowest-quality version that exists, and only for as long as they need the free QA.
The real AI, and the actual quality outputs, cost money. And once they are confident in their server stability, the scraps you're picking over will get a price tag too.
They literally asked it to give them a screenshot from the Joker movie. That was their fucking prompt. It’s not like they just said “draw Joker” and it spit out a screenshot from the movie, they had to work really hard to get that exact image.
Because this proves that the “AI”, at some level, is storing the data of the Joker movie screenshot somewhere inside of its training set.
Likely because the "AI" was trained on this image at some point. This has repercussions with regard to copyright law. It means the training set contains copyrighted data, and the use of said training set could be argued to be piracy.
Legal discussions on how to talk about generative AI are only happening now, now that people can experiment with the technology. But it's not like our laws have changed; copyright infringement is copyright infringement. If the training data is obviously copyright infringement, then the model must be retrained in a more appropriate manner.
But where is the infringement?
This NYT article includes the same several copyrighted images and they surely haven’t paid any license. It’s obviously fair use in both cases and NYT’s claim that “it might not be fair use” is just ridiculous.
Worse, the NYT also includes exact copies of the images, while the AI ones are just very close to the original. That’s like the difference between uploading a video of yourself playing a Taylor Swift cover and actually uploading one of Taylor Swift’s own music videos to YouTube.
Even worse the NYT intentionally distributed the copyrighted images, while Midjourney did so unintentionally and specifically states it’s a breach of their terms of service. Your account might be banned if you’re caught using these prompts.
But where is the infringement?
Do the training weights have the data? Are the servers copying said data on a mass scale, in a way that the original copyright holders don't want or can't control?
Their response will be "we don't know, we can't understand what it's doing."
Their response will be "we don't know, we can't understand what it's doing."
What the fuck is this kind of response? It's just a fucking neural network running on GPUs with convolutional kernels. For fuck's sake, turn on your damn brain.
Generative AI is actually one of the easier subjects to comprehend here. It's just calculus. Use of derivatives to backpropagate weights in such a way that minimizes error. Lather-rinse-repeat for a billion iterations on a mass of GPUs (i.e., 20 TFLOP compute systems) for several weeks.
Come on, this stuff is well understood by comp sci by now. Not only 20 years ago when I learned about this stuff, but today, now that AI is all hype, more and more people are understanding the basics.
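If you want to see what "derivatives to backpropagate weights" actually looks like, here's a minimal toy sketch of that loop in NumPy: forward pass, measure the error, use the chain rule to get gradients, nudge the weights, repeat. It's a made-up regression example, not code from Midjourney or any real image model.

```python
# Toy sketch of the training loop described above: forward pass,
# compute error, backpropagate derivatives, nudge the weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) from random samples.
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x)

# One hidden layer of 32 units with randomly initialised weights.
W1, b1 = rng.normal(0, 0.5, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.5, (32, 1)), np.zeros(1)
lr = 0.05  # learning rate

for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)  # the "error" being minimised

    # Backward pass: the chain rule gives the derivative of the loss
    # with respect to every weight.
    grad_pred = 2 * (pred - y) / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)
    grad_W1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Nudge each weight a little in the direction that reduces the error.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"final mean squared error: {loss:.4f}")
```

Real image models run this same loop with billions of weights over millions of images, which is where the weeks of GPU time come from.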
Bro who even knows calculus anymore we have calculators for a reason 🤷♀️
By that logic I am also storing that image in my dataset, because I know and remember this exact image. I can reproduce it from memory too.
You ever try to do a public performance of a copyrighted work, like “Happy Birthday to You” ??
You get sued. Even if it's from memory. Welcome to copyright law. There's a reason why every restaurant had to make up a new "Happy Happy Birthday, from the Birthday Crew" song.
Yeah, but until I perform it without a license for profit, I don’t get sued.
So it’s up to the user to make sure that if any material that is generated is copyright infringing, it should not be used.
Otakon anime music videos have no profits but they explicitly get a license from RIAA to play songs in public.
So? I’m not saying those are fair terms, I would also prefer if that were not the case, but AI isn’t performing in public any more than having a guitar with you in public is ripping off Metallica.
You don’t need to perform “for profit” to get sued for copyright infringement.
but AI isn’t performing in public any more than having a guitar with you in public is ripping off Metallica.
Is the Joker image in that article derivative or substantially similar to a copyrighted work? Is the query available to anyone who uses Midjourney? Are the training weights being copied from server-to-server behind the scenes? Were the training weights derived from copyrighted data?
… Do you think you’re a robot?
What’s the difference? I could be just some code in the simulation
Edit: downvoted by people who unironically stan Ted Kaczynski
But it’s not like our laws have changed
And that’s the problem. The internet has drastically reduced the cost of copying information, to the point where entirely new uses like this one are now possible. But those new uses are stifled by copyright law that originates from a time when the only cost was that people with Gutenberg presses would be prohibited from printing slightly cheaper books. And there’s no discussion of changing it because the people who benefit from those laws literally are the media.
“Generate this copyrighted character”
“Look, it showed us a copyrighted character!”
Does everyone that writes for the NYTimes have a learning disability?
The point is to prove that copyrighted material has been used as training data. As a reference.
If a human being gets asked to draw the Joker, gets a still from the film, then copies it to the best of their ability, they can’t sell that image. Technically speaking, they’ve broken the law already by making a copy. Lots of fan art is illegal; it’s just not worth going after (unless you’re Disney or Nintendo).
As a subscription service, that’s what the AI is doing: selling the output.
Held to the same standards as a human artist, this is illegal.
If AI is allowed to copy art under copyright, there’s no reason a human shouldn’t be allowed to do the same thing.
Proving the reference is all important.
If an AI or human only ever saw public domain artwork and was asked to draw the Joker, they might come up with a similar character. But it would be their own creation. There are copyright cases that hinge on proving the reference material. (See “Blurred Lines” by Robin Thicke.)
The New York Times is proving that AI is referencing an image under copyright because it comes out precisely the same. There are no significant changes at all.
In fact, even if you come up with a character with no references, if it’s identical to a pre-existing character, the first creator gets to hold copyright on it.
This is indefensible.
Even if that AI is a black box we can’t see inside. That black box is definitely breaking the law. There’s just a different way of proving it when the black box is a brain and when the black box is an AI.
If a human being gets asked to draw the Joker, gets a still from the film, then copies it to the best of their ability, they can’t sell that image. Technically speaking, they’ve broken the law already by making a copy.
Is this really true? Breaking the law implies contravening some legislation, which, in the case of simply drawing a copyrighted character, you wouldn’t be doing in most jurisdictions. It’s a civil issue: if some company has the rights to a character and some artist starts selling images of that character, then whoever owns the rights might sue that artist for loss of income or unauthorised use of their intellectual property.
Regardless, all human artists have learned from images of characters which are the intellectual property of some company.
If I hired a human as an employee and asked them to draw me a picture of the Joker from some movie, there’s no contravention of any law I’m aware of, and the rights holder wouldn’t have much of a claim against me.
As a layperson, who hasn’t put much thought into this, the outcome of a claim against these image generators is unclear. IMO, it will come down to whether or not a model’s abilities are significantly derived from a specific category of works.
For example, if a model learned to draw superheroes exclusively from watching Marvel movies, then that’s probably a copyright infringement. OTOH, if it learned to draw superheroes from a wide variety of published works, then IMO it’s much more difficult to make a case that the model is undermining the rights holder’s revenue.
Copyright law is incredibly far reaching and only enforced up to a point. This is a bad thing overall.
When you actually learn what companies could do with copyright law, you realise what a mess it is.
In the UK for example you need permission from a composer to rearrange a piece of music for another ensemble. Without that permission it’s illegal to write the music down. Even just the melody as a single line.
In the US it’s standard practice to first write the arrangement and then ask the composer to licence it. Then you sell it and both collect and pay royalties.
If you want to arrange a piece of music in the UK by a composer with an American publisher, you essentially start by breaking the law.
This all gives massive power to corporations over individual artists. It becomes a legal fight the corporation can always win due to costs.
Corporations get the power of selective enforcement, exercised whenever they think they will get a profit.
AI is creating an image based on someone else’s property. The difference is it’s owned by a corporation.
It’s not legitimate to claim the creation is solely that of the one giving the instructions. Those instructions are not in themselves creating the work.
The act of creating this work includes building the model, training the model, maintaining the model, and giving it that instruction.
So everyone involved in that process is liable for the results to differing amounts.
Ultimately the most infringing part of the process is the input of the original image in the first place.
So we now get to see if a massive corporation or two can claim an AI can be trained on and output anything publicly available (not just public domain) without infringing copyright. An individual human can’t.
I suspect the work of training a model solely on public domain material will be complete about the time all these cases get settled in a few years.
Then controls will be put on training data.
Then barriers to entry to AI will get higher.
Then corporations will be able to own intellectual property and AI models.
The other way this can go is AI being allowed to break copyright, which then leads to a precedent that breaks a lot of copyright and the corporations lose a lot of power and control.
The only reason we see this as a fight is because corporations are fighting each other.
If AI needs data and can’t simply take it publicly from published works, the value of licensing that data becomes a value boost for the copyright holder.
The New York Times has a lot to gain.
There are explicit, limited exceptions to copyright law. Education is one. Academia and research another.
All tip into infringement the moment the use becomes commercial.
AI being educated and trained isn’t infringement until someone gains from published works or prevents the copyright holder from gaining from it.
This is why writers are at the forefront. Writing is the first area where AI can successfully undermine the need to read the New York Times directly. Reducing the income from the intellectual property it’s been trained on.
AI is creating an image based on someone else’s property. The difference is it’s owned by a corporation.
This isn’t the issue. The copyright infringement is the creation of the model using the copyrighted work as training data.
All the NYT is doing is demonstrating that the model must have been created using copyrighted works, and hence infringement has taken place. They are not stating that the model is committing an infringement itself.
That’s called fair use. It’s a non-issue.
It’s not selling that image (or any image), any more than a VCR is selling you a taped version of Die Hard you got off cable TV.
It is a tool that can help you infringe copyright, but as it has non-infringing uses, it doesn’t matter.
VCR makers do not claim to create original programming.
Why does that matter?
Because they aren’t doing anything to violate copyright themselves. You might, but that’s different. AI art is created by the software. Supposedly it’s original art. This article shows it is not.
It is original art, and even the images in question have differences, but it’s ultimately on the user to ensure they do not use copyrighted material commercially, same as with fan art.
If I draw a picture very close to a screenshot of a Mickey Mouse cartoon and try to pass it off as original art because there are a handful of differences, I don’t think most people would buy it.
Then who created this image in your view?
If someone copies a picture from a cartoon who created it?
What point do you think you’re making? The answer to this question supports their point.
That’s irrelevant, the issue is whether the machine is committing a crime, or the person
Machines aren’t culpable in law.
There is more than one human involved in creating and operating the machine.
The debate is, which humans are culpable?
The programmers, trainers, or prompters?
The prompters. That is easy enough. If I cut butter with a knife it’s okay, if I cut a person with a knife - much less so. Knife makers can’t be held responsible for that, it’s just nonsense.
If you try to cut bread with an autonomous knife and the knife kills you by stabbing you in the head, is it solely your fault?
I already know I’m going to be downvoted all to hell, but just putting it out there that neural networks aren’t just copy pasting. If a talented artist replicates a picture of the joker almost perfectly, they are applauded. If an AI does it, that’s bad? Why are humans allowed to be “inspired” by copyrighted material, but AIs aren’t?
Because the original Joker design is not just something that occurred in nature, out of nowhere. It was created by another artist(s) who don’t get credit or compensation for their work.
When YouTube “essayists” cobble a script together by copy-pasting paragraphs and changing some words around and then earn money off the end product with zero attribution, we all agree it’s wrong. Corporations doing the same to images are no different.
So you watched that Hbomberguy video where he randomly tacked on being wrong about AI in every way, using unsourced, uncited claims that have nothing to do with Somerton or that Illuminaughti chick and will age extremely poorly and made that your entire worldview? Okay
If I ask an “AI” bot to create an image of Batman, it does make sense for it to look modern or take inspiration from recent Batman designs, and the same applies to information it provides when asked questions. It makes sense to crawl news and websites with copyrighted footers if the information is relevant.
I do totally get their argument and the “think of the children” angle. Getting to the point, it’s all about the money, nothing to do with protecting people’s work. They want a cut of the profits these companies will make.
In that case, open licences should likewise demand that they not make profit from such content. I believe the free AI will then be much more useful, if of course people are aggressive back with this tit for tat.