Setting aside the usual arguments in the anti- and pro-AI art debate and the nature of creativity itself, perhaps the negative reaction the Redditor encountered is part of a sea change in opinion among the many people who think corporate AI platforms are exploitative and extractive in nature, because their datasets rely on copyrighted material used without the original artists’ permission. And that’s without getting into AI’s negative drag on the environment.
People talk about A.I. art threatening artist jobs but everything I’ve seen created by A.I. tools is the most absolute dogshit art ever made, counting the stuff they found in Saddam Hussein’s mansions.
So, I would think the theft of IP for training models is the larger objection. No one thinks a Baldur’s Gate 3 fan was gonna commission an artist to make a drawing for them. They’re pissed their work was used without permission.
It’s not replacing artists who make beautiful art, it’s going to replace artists who work for a living. Doesn’t matter if the quality is bad when it costs nothing.
The problem is artists often make their actual living doing basic boilerplate stuff that gets forgotten quickly.
In graphics, it’s company logos, advertising, and basic graphics for businesses.
In writing, it’s copy for websites, short articles, basic stuff.
Very few artists want to do these things; they want to create the original work that might not make money at all. That work is potentially a winning lottery ticket, but most often it’s an act of self-expression that doesn’t turn into a payday.
Unfortunately, AI is taking work away from artists. It can’t seem to make very good art yet, but it can prevent artists who could make good art from getting to the point of making it.
It’s starving out the top end of the creative market by limiting the easy work artists could previously rely on to pay the bills whilst working on the big ideas.
The problem is that most artists make money from commercial clients and most clients don’t want “good”.
They want “good enough” and “cheap”.
And that’s why it is taking artists jobs.
You should check out this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.
Using things “without permission” forms the bedrock on which artistic expression and free speech as a whole are built. I am glad to see that the law aligns with these principles and protects our ability to engage openly and without fear of reprisal, which is crucial for fostering a healthy society.
I find myself at odds with the polarized argumentation about AI. If you don’t like it, that’s understandable, but don’t make it so that anyone who uses AI has to defend themselves from accusations of exploiting labor and the environment. Those accusations are oftentimes incorrect or made without substantial evidence.
I’m open to that conversation, as long as we can keep it respectful and productive. Drop a reply if you want, it’s way better than unexplained downvoting.
Yes, using existing works as reference is obviously something real human artists do all the time; there’s no arguing that. It’s how people learn to create art to begin with.
But the fact is, generative AI is not creative, nor does it understand what creativity is, nor will it ever, because all it is doing is running complex statistical analysis over its data to generate a matrix of pixels or a string of words.
I’m sorry, but the person entering the prompt to instruct the algorithm is not doing anything creative either. Do you think it is art to go through a fast-food drive-through and place an order? That’s what people are objecting to: people calling themselves artists because they put some nonsense word salad together and then think what they get out of it is some unique thing they created and can take ownership of. If not for the AI model they are using and the creative works it was trained on, they could not have created it, or likely even imagined it.
People are actively losing their livelihoods because AI tech is being oversold and overhyped as something it’s not. Execs are all jumping on the bandwagon, and because they see AI as something that will save them a bunch of money, they are laying off people they think aren’t needed anymore. So, just try to incorporate that sentiment into your understanding of why people are also upset about AI. You may not be personally affected, but there are countless people who are. In fact, over the next two years, as many as 203,000 entertainment workers in the US alone could be affected.
Generative AI Impact Study
You want to have fun creating fancy kitbashed images based off of other people’s work, go right ahead. Just don’t call it art and call yourself an artist, unless you could actually make it yourself using practical skills.
Also, good luck trying to copyright it because guess what, you can’t.
https://crsreports.congress.gov/product/pdf/LSB/LSB10922
I’d like to ask what experience you have with generative art, because I’d like to explain a bit of what I know.
There’s also a spectrum of involvement depending on what tool you’re using. I know web-based interfaces don’t allow for a lot of freedom, since they want to keep users from generating things outside their terms of use, but with open-source models based on Stable Diffusion you can get a lot more involved and have a lot more freedom. We’re in a completely different world from March 2023 as far as generative tools go. Take a quick look at how things work.
Let’s take these generation parameters for instance:
sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>
Negative prompt: (worst quality, low quality:1.4), FastNegativeV2
Steps: 21, VAE: kl-f8-anime2.ckpt, Size: 512x768, Seed: 2303584416, Model: Based64mix-V3-Pruned, Version: v1.6.0, Sampler: DPM++ 2M Karras, VAE hash: df3c506e51, CFG scale: 6, Clip skip: 2, Model hash: 98a1428d4c, Hires steps: 16, "sarasf_V2-10: 1ca692d73fb1", Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, "FastNegativeV2: a7465e7cc2a2",
ADetailer model: face_yolov8n.pt, ADetailer version: 23.11.1, Denoising strength: 0.38, ADetailer mask blur: 4, ADetailer model 2nd: Eyes.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur 2nd: 4, ADetailer confidence 2nd: 0.3, ADetailer inpaint padding: 32, ADetailer dilate erode 2nd: 4, ADetailer denoising strength: 0.42, ADetailer inpaint only masked: True, ADetailer inpaint padding 2nd: 32, ADetailer denoising strength 2nd: 0.43, ADetailer inpaint only masked 2nd: True
To break down a bit of what’s going on here, I’d like to explain some of the elements.
`sarasf` is the activation token for the character LoRA in this image, and `<lora:sarasf_V2-10:0.7>` loads that LoRA, a character LoRA for Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don’t have activation tokens, and some that do can be used without their token to get different results.
The 0.7 in `<lora:sarasf_V2-10:0.7>` is the strength at which the weights from the LoRA are applied to the output; lowering the number makes the concept manifest more weakly. You can blend styles this way with just the base model, or run multiple LoRA at different strengths at the same time. Furthermore, you can adjust the UNet and Text Encoder strengths separately by adding another colon, like so: `<lora:sarasf_V2-10:1:0.7>`, for even more varied results. Doing this lets you separate the “idea” of a LoRA from its “look”. You can even use a monochrome LoRA and take the weight negative to get some crazy colors.
The Negative Prompt is where you include things you don’t want in your image. `(worst quality, low quality:1.4)` here are quality tags with their attention set to 1.4. Attention is sort of like weight, but for tokens: LoRA bring their own weights to add onto the model, whereas attention on tokens works entirely within the weights the model already has. In this negative prompt, `FastNegativeV2` is an embedding known as a Textual Inversion. It’s sort of like a crystallized collection of tokens that tells the model something precise you want without you having to enter the tokens yourself or fiddle with the attention manually. Embeddings you put in the negative prompt are known as Negative Embeddings.
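Both mechanisms can be sketched in a few lines of Python. This is a toy illustration, not actual Stable Diffusion internals; the numbers, list sizes, and function names are all made up to show the distinction:

```python
# Toy illustration, not actual Stable Diffusion code: how LoRA strength
# and token attention differ. Real weights are large tensors; these
# short lists are stand-ins.

def apply_lora(base_weights, lora_delta, strength):
    """LoRA adds its own learned delta ONTO the model's weights, scaled
    by the strength from <lora:name:strength>. Strength 0.0 leaves the
    base model untouched; a negative value pushes the model away from
    the LoRA's concept (the monochrome trick mentioned above)."""
    return [w + strength * d for w, d in zip(base_weights, lora_delta)]

def apply_attention(token_embedding, attention):
    """Attention like (tag:1.4) scales a token's embedding instead,
    working entirely within the weights the model already has."""
    return [x * attention for x in token_embedding]

blended = apply_lora([0.5, -0.2, 1.0], [0.4, 0.4, -0.6], 0.7)
emphasized = apply_attention([0.1, -0.3, 0.2], 1.4)
```

The point of the sketch is the shape of the two operations: one mixes new weights in from outside, the other only re-weights what is already there.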
In the next part, `Steps` is how many steps you want the model to take to resolve the starting noise into an image; more steps take longer. `VAE` is the name of the Variational Autoencoder used in this generation, which decodes the latent output into the final pixels. A mismatch of VAE and model can yield blurry, desaturated images, so some models have their VAE baked in. `Size` is the dimensions in pixels the image will be generated at. `Seed` is the numeric representation of the starting noise for the image; you need it to reproduce a specific image.
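As a loose analogy for why the seed makes a generation reproducible, here’s a plain-Python sketch; ordinary pseudo-randomness stands in for the real latent noise sampling:

```python
# Loose analogy, not SD code: the seed pins down the pseudo-random
# starting noise, so the same seed plus the same settings walks the
# same path to the same image.
import random

def starting_noise(seed, n=8):
    """Stand-in for the initial latent noise: n Gaussian samples."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

assert starting_noise(2303584416) == starting_noise(2303584416)  # reproducible
assert starting_noise(2303584416) != starting_noise(2303584417)  # one off, totally different noise
```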
`Model` is the name of the model used, and `Sampler` is the name of the algorithm that resolves the noise into an image. There are a few different samplers, also known as schedulers, each with its own trade-offs for speed, quality, and memory usage. `CFG scale` is basically how closely you want the model to follow your prompt. Some models can’t handle high CFG values and flip out, giving over-exposed or nonsense output.
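For the curious, CFG comes from a technique called classifier-free guidance: at each step the model predicts the noise twice, once with your prompt and once without, and the CFG scale exaggerates the difference between the two. A minimal sketch with toy numbers (not real model outputs):

```python
# Minimal sketch of classifier-free guidance with toy numbers (not
# real model outputs). A high CFG scale over-amplifies the push toward
# the prompt, which is roughly why some models flip out at big values.

def cfg_combine(uncond, cond, scale):
    """uncond + scale * (cond - uncond), element-wise. Scale 0 ignores
    the prompt entirely; 1 follows the conditioned prediction as-is;
    higher values extrapolate past it."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.2, -0.1]  # the model's prediction given an empty prompt
cond = [0.6, 0.3]     # its prediction given your prompt
guided = cfg_combine(uncond, cond, 6)  # CFG scale 6, as in the parameters above
```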
`Hires steps` is the number of steps taken on the second pass to upscale the output; this is needed to get higher-resolution images without visual artifacts. `Hires upscaler` is the name of the model used during that upscaling step, and again there are a ton of those, each with its own trade-offs and use cases. Everything after `ADetailer` is the parameters for ADetailer, an extension that does a post-process pass to fix things like broken anatomy, faces, and hands. We’ll just leave it at that, because I don’t feel like explaining all the different settings found there.
https://youtu.be/-JQDtzSaAuA?t=97
https://youtu.be/1d_jns4W1cM
https://www.youtube.com/watch?v=HtbEuERXSqk
Not all selfies are art, but you can make art with cameras. I think the same applies here.
This EFF article by Katharine Trendacosta and Cory Doctorow touches on this. I think it’s worth a read.
This is misinformation, and not how the technology works. Here’s a quick video explanation,
This is just snobbery that people have always used to devalue the efforts of others. Punching down and gatekeeping won’t solve your problems, the people you’re really mad at are above you.
Art is about bringing your ideas into the world; anything beyond that is fetish. Spending hundreds of hours learning a skill isn’t art, it’s work. While I believe the effort invested in a work can contribute to its depth and meaning, that doesn’t make high-effort works better than ones made with less effort.
cont.
I appreciate that you’ve taken the time to explain the technical aspects of what generative AI is processing under the hood, but the reality is that no amount of programming will ever be able to recreate the uniqueness and infinite variability of human creativity, emotion, imagination, or consciousness. There is an immeasurable difference between true creativity and producing variations on a data set. I say this as both an artist and a programmer. I’m not just talking out of my ass.
I agree with you that a goal of art is to express ideas, and that there are a lot of people in the art world who fetishize art into being something more important than it is in certain contexts. But art is also a core component of and something unique to humanity (and sometimes even to other species). In that way, it’s something to be cherished and regarded, and throughout history it has been extremely culturally significant. Trying to translate these concepts into an algorithm is, in my mind, nothing but an extremely arrogant waste of effort and time. Why not spend your time automating the boring shit no one wants to do rather than the creative things people actually enjoy doing?
I am not gatekeeping; I am just stating simple facts. I find it offensive and demeaning that you are devaluing the immense amount of effort artists undergo to hone their crafts and produce art. You’re damn right it’s work: if you want to get proficient at something, that’s what it takes. I don’t care how boomer-ey that sounds. Yes, some artists have natural talent and don’t need as much effort as others, but nonetheless, effort is required to create. Anyone can create art, not just some elite select few. But not everyone can create art that is universally recognized as great or masterful, and that’s not a problem that needs to be solved by technology. Unfortunately, art is subjective, so not everything one creates is perceived the same way. That’s why some are more successful than others. You may argue that AI levels the playing field, but the fact is that it leverages the work of “successful” artists or artworks and generates results that are perceived as successful or appealing as a result. It’s a shortcut. You are bypassing the effort otherwise needed by using a tool, which allows most users to be totally ignorant of the basic knowledge required to create an artwork: shape language, color theory, composition, lighting, appeal, posing, etc.
Entering a prompt into an AI model is akin to directing, producing, or acting as a muse. It’s a very similar argument to the one about the validity or artistic merits of factory artists like Andy Warhol or Jeff Koons: while you are responsible for the idea that produces a result, you are still relying on the work and effort of not only the numerous people creating the AI model and its algorithms, but also the immeasurable man-hours and creativity involved in creating the source content for the model’s training materials.
It’s one thing to use generative AI as a tool, with intent to use the output as reference for your own work in a larger context. But to take the direct output and call it art is morally and ethically wrong. In my eyes, it makes you look like a total hack who doesn’t want to put in the effort to make things for themselves… no matter how much time you put into coming up with the prompt.
I still stand by my original arguments - coming up with a prompt or a training data set to create an image is not art, because you are not actively involved with the creation of the imagery, itself. What an AI model generates is not a creative work and it is not your creation. If that is offensive to you, there’s nothing I can do about that, because it’s apparent that your arguments only serve to make yourself feel better about using generative AI.
It’s also apparent that you have an extremely skewed view of what art is and what it means to be an artist. Art, at its base level is about expressing HUMAN creativity, not what an algorithm interprets it to be. It’s about making countless, specific choices for each step of the creative process and having complete control of the final outcome. It’s those choices that make your art truly unique and an expression of your creative vision. It doesn’t matter if it is objectively bad or good, just that it came from you, and that every detail, every color, every line, was your choice, not an interpretation of your words.
Unless you are creating your own AI model from scratch and training it purely on your own artworks, I don’t see how you can, in good conscience, claim the results to be your own.
Anyone can create art, but an artist is someone who dedicates themselves to their craft, as with any other craftsman. That passion is what separates an artisan from a hobbyist. You may view this as snobbery, but I view it as respect, honoring a tradition that spans all of humankind, back to the earliest cave paintings tens of thousands of years ago. I know my limits and what I’m capable of, and I have come to terms with those deficiencies in my work. I’m not delusional enough to think that generating an image through AI somehow makes up for those shortcomings and makes me into something I am not.
The goal here isn’t to replace human uniqueness, creativity, emotion, imagination, or consciousness, but to give people a robust tool to help them explore concepts and express themselves.
If it allows people to more effectively communicate, express themselves, learn, and come together, it’s worth the trouble. The more people can participate in these conversations, the more we can all learn.
You are putting words in my mouth. I never devalued anyone’s work, unless you read that into my saying that pieces which take more work or skill aren’t inherently worth more than those that were easier to produce. Is it even possible for there to be shortcuts in art? It’s harder to erase a line and fix it on canvas than it is to draw an incorrect one and just resize it. Let’s not even talk about things like shrinking a whole head to make it fit. Where in the gradient from canvas to AI does creating become cheating?
That reminds me of a quote from over a hundred years ago:
― Charles Baudelaire, On Photography, from The Salon of 1859
It sounds a lot like what you’re trying to say.
This can be said of many tools, from graphics rendering engines and art software to mass-produced pigments and brushes. It took us 100,000 years to get from cave drawings to Leonardo da Vinci. This is just another step for artists, like the camera obscura was in the past. It’s important to remember that early humans were as smart as we are; they just lacked the interconnectivity and tools that we have.
This is just personal opinion. And I never felt bad about using it in the first place; it feels like you’re projecting your own feelings onto me.
It is still a human making generative art, and they use their emotions and learned experiences to guide the creation of works. You should familiarize yourself with all the different forms of guidance available with generative art. I think you’ll be pleasantly surprised.
This is a no-true-Scotsman fallacy: you’re attempting to narrowly define “artist” to serve your needs, when no definition of a “true artist” has ever existed. The rest of this is personal perspective that you shouldn’t force onto others. Let them create how they want, and in time I think we’ll all come to benefit from their labors.
Did you create all the textures you put onto your 3D models? Did you use Substance Painter? Any sort of asset library? If you’re working in 2D, did you create your own brush textures?
Did you create colour and perspective theory from scratch? If not, how can you call yourself a painter?
Did Duchamp study the manufacture of ceramics before putting a factory-made urinal on a pedestal and calling it a piece of art?
Wow, nice rhetorical questions you got there, bud.
What the fuck do you think?
If you had enough reading comprehension and had read through my whole response, you would have gotten to the part where I said creating art is about the culmination of choices you make in each part of the process.
Maybe you can point it out to me, but I don’t recall the part where I said you have to reinvent the fucking wheel every time you create something.
That particular quote you pointed out was specific to generative AI, because you don’t make those same choices. The model and the training data are what produce those results for you.
But since you asked, yes, I do have the knowledge to create textures by hand without Substance Painter. I’ve been doing 3D art since 2003, before that shit even existed and we had to do it all manually in Photoshop.
No, I didn’t fucking create color and perspective theory. What do you think I am, a fucking immortal from ancient times? But I did have to learn that shit and took multiple classes dedicated to each of those topics.
Lastly, you must have skipped your art history for the last one, because the whole concept of that particular piece was that it was absurdist: an everyday object raised to the status of art by the artist. He didn’t fucking sculpt the urinal himself. So it would have been more appropriate to say he was a janitor who got lucky. Nice try, though.
And for a photographer, their surroundings are what produce many of the results, meaning they don’t make choices about those things. They focus on other things, don’t express themselves in the arrangement of leaves on a tree, and leave that stuff to chance.
The important part is not that choices are made for you, but that you make, at the very least, a choice. One single choice suffices for intent. It isn’t even necessary to make that choice during the creation of the piece: splattering five buckets of paint onto five canvases and choosing the one that sparks the right impression is a choice.
Yes, precisely. That one concept, the single choice, “yep a urinal should be both provocative and banal enough”, is what made it art.
There is no minimum level of craft necessary for art.
Ah, very interesting that you want to focus on photography as a comparison. To me, this just implies that you are not familiar with the kinds of creative choices photographers do make. Just because they have endless amounts of subject matter readily available at their disposal does not make the process any easier or different from other types of art.
Photographers still consider composition, lighting, area of focus, color, etc., along with a large number of other factors such as camera body, film back, lens, f-stop, ISO, flash, supplemental lighting, post-processing; the list goes on.
Again, all of these choices are actively made when creating the work - using one’s critical thinking, decision making, experience and knowledge to inform each choice and how it will affect the outcome.
Generative AI is not that and will never be that, no matter how much you argue otherwise. You are entering a prompt; the model is interpreting it and generating a result that it calculates to be most statistically likely. Your choice of words is not an artistic choice; your words are, at most, requests or instructions. If you iterate, you are not in control of what changes. You only find out what has changed after the result has been generated.
Again, you are totally missing the point of Fountain and using it as a false equivalence. It was made as a critique of the art world, to show the absurdity of what art critics of the time called valid art. Whereas today, generative AI is not being made as a critique of anything. It’s being made for profit, to replace skilled labor, using the work of the same people it’s trying to replace. Hopefully you can see how the two are different.
Mate, that’s not art, that’s coding. Congratulations on learning a new coding skill and how inputs can affect outputs. Frankly, it’s barely coding, it’s adding degrees of specification so a program can do all the work. I get that it took you a while to learn what all of it means and how it works, sort of, but something being hard to do doesn’t make it art.
And don’t cheapen photography by comparing it to generating an AI image. There’s physical labor involved in photography on top of composition and patience.
Part 2
This looks set to change. The US Copyright Office is proactively exploring and evolving its understanding of this topic and is actively seeking expert and public feedback. You shouldn’t expect this to be its final word on the subject.
It’s also important to remember that Copyright Office guidance isn’t law. It reflects only the office’s interpretation based on its experience; it isn’t binding on courts or other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. It is the lowest rung on the ladder for deciding what the law means.
Let’s keep it civil and productive. Jeering, dismissive language like “Also, good luck trying to copyright it because guess what, you can’t” isn’t helping your argument; it’s just mean-spirited. Let’s have a civil discussion, even if we disagree. I’m open to keep talking, but I will quit replying if you continue being disrespectful.
Removed by mod
It’s clear where you hold your stakes in the matter and where I hold mine. Whether or not you want to continue the conversation is up to you, but I’m not going to go out of my way to be polite about it, because I don’t really give a shit either way, or whether you’re offended by what I say. AI personally affects me and my livelihood, so I do have passionate opinions about its use, how companies are adopting it, and how it’s affecting other people like me.
All the article you linked shows is that they held a meeting, which doesn’t really prove anything. The government has tons of meetings that don’t amount to shit.
So, instead of arguing whether or not the meeting actually shows they are considering anything different, I will explain my personal views.
In general, I’m not against AI. It is a tool that can be effective in reducing menial tasks and increasing productivity. But, historically, such technology has done nothing but put people out of work and increase profits for executives and shareholders. At every given chance, creatives and their work are devalued and categorized as “easy”, “childish”, or not a real form of work by those who do not have the capacity to do it themselves.
If a company wants to adopt AI technology for creative use, it should be trained solely on content the company owns the copyright to. Most AI models are completely opaque, and their makers refuse to disclose the materials they were trained on. Unless they can show me exactly what images were used to generate the output, I will not trust that the output is unique and not plagiarizing other works.
Fair use has very specific cases where it actually applies: parody, satire, documentary and educational use, etc. Common people can be DMCA’ed or targeted in other ways for even small offenses, like remixes. Even sites like archive.org are constantly under threat of lawsuits. In comparison, AI companies are seemingly being given a free pass because of wide adoption, their lack of transparency, and the vagueness as to where specifically the output is derived from. A lot of AI companies are adopting opt-out schemes to cover their asses, but this is only making our perception of their scraping practices worse.
As we are starting to see with some journalism lawsuits, plaintiffs are able to point out specifically where their work is being plagiarized, so I hope more artists will speak up and file suit over models where their work is blatantly being used to mimic their styles. If someone can file a copyright suit against another person over such matters, they should certainly be able to sue a company for the same unauthorized, for-profit use of their work.
I think AI art looks neat
That isn’t art. That’s just commodity.
That won’t get you into art school but it also won’t get you kicked out.
*adjusts horn-rims* yesyes very neat do you have anything else to say but that you’re whimsical?
There are so many possibilities for AI art, to say it’s all bad is painting it all with one brush
Are the Koalas here in the room right now?
Have you just woken up from a year long coma? AI can create stunning pictures now.
stunning but uncreative af.
that still depends on the operator.
I mean, just like any other tool.
That’s not a tool. A tool is something a mind uses to make something. AI is a generator in and of itself, requiring nothing from a mind.
Of course it does. An AI generator does nothing without a prompt. Give it a bad prompt, and it looks boring and uncreative.
The idea that you can throw anything (or nothing) into a generator and get something good out is a misconception. I’ve played around with generators, and can’t get much “good” out of them. But I’ve seen amazing looking stuff created by others.
yea I’ve also seen amazing stuff created by others. But that’s not what we’re talking about here
It literally is. The person I replied to explicitly said it’s a good tool but has no creativity. I said the creativity comes from the user’s skill.
If it’s a tool requiring a user to bring it to its full potential… then again, that’s what is being talked about.
These tools do literally nothing unless a user is involved. Be it setting up auto responses to certain text, or explicitly handing it instructions and tweaking as they go.
Yeah and there are tons of angles and gestures for human subjects that AI just can’t figure out still. Any time I’ve seen a “stunning” AI render it’s some giant FOV painting with no real subject or the subject takes up a 12th of the canvas.
Actually, it’s less that it can’t draw the stuff and more that it doesn’t want to on its own, and there’s no way to ask it to do anything different with the built-in tools; you have to bring your own.
Say I ask you to draw a car. You’re probably going to do a profile or 3/4th view (is that the right terminology for car portraits?), possibly a head-on, you’re utterly unlikely to draw the car from the top, or from the perspective of a mechanic lying under it.
Combine that tendency to draw cars from a limited set of perspectives because “that’s how you draw cars” with the inability of CLIP (the language model Stable Diffusion uses) to understand pretty much, well, anything (it’s not an LLM), and you’ll have no chance getting the model to draw the car from a non-standard perspective.
Throw in some other kind of conditioning, though, like a depth map (it doesn’t even need to be accurate; it can be very rough, the information-density equivalent of me gesturing the outline of a car and a camera), and suddenly all kinds of angles are possible. Probably not under the car, as the model is unlikely to know much about that view, but everything else should work just fine.
SDXL can paint, say, a man in a tuxedo doing one-hand pullups while eating a sandwich with the other. Good luck prompting that only with text, though.