Topic: The era of AI-generated art is approaching...

Posted under Art Talk

This topic has been locked.

dubsthefox said:
That's funny. They are literally ripping all artworks from e621 with a bot.

Why? Do they want to waste bandwidth or do they think they'll get more traffic from furries when e6 exists?

dubsthefox said:
The answer to that is:

  • Morals.
  • The belief in true art. (I know that's expressed a little dramatically.)
  • Possible legal issues.
  • And they just simply don't want AI art here.

The thing is, I haven't seen a single moderator, janitor, or active forum user who said: "yes, I want to see a giant flood of AI art on e621." Or maybe I'm just blind to the other comments. Those who are active here and want to do more than just have their daily evening fap have pretty much the same opinion, somewhere between "ehh" and "no" (from my observation; prove me wrong. I have to admit I skimmed through the thread).

Would you like to discuss those issues? The moral or legal ones, I mean, since the 'belief' stuff is just personal preference, and the 'don't want to' is again personal taste, at least as long as there is no site poll that I'm aware of. There is also an obvious lack of understanding from the detractors, at least judging by the mentions of 'copyright' and so on that I've seen.

Edit: And no, a poll with a plain question like 'do you want AI art on the site' would be loaded; there needs to be a proper framework that defines what constitutes admissible art. Otherwise people will think you mean stuff like post #3544693.


ckitt said:
Why? Do they want to waste bandwidth or do they think they'll get more traffic from furries when e6 exists?

They want the traffic, and if they can have a lot of tagged artwork without tagging anything themselves, why not just take it ¯\_(ツ)_/¯. Have you seen the questionable ads on their site? I don't believe they care about their community at all.

ckitt said:
Would you like to discuss those issues?

What NotMeNotYou said: forum #343215 (looking at it again, it's not a "legal issue")

And you are right. It is an opinion thing. The opinion of the admins and a majority (by my observation) of the active users. (it comes up in the Discord quite often)

dubsthefox said:
They want the traffic, and if they can have a lot of tagged artwork without tagging anything themselves, why not just take it ¯\_(ツ)_/¯. Have you seen the questionable ads on their site? I don't believe they care about their community at all.

I don't visit r34; the amount of edits and bad art is just too much, and back when I used to visit there wasn't much furry art, or at least nothing I wouldn't find here, so I preferred e6.

dubsthefox said:
What NotMeNotYou said: forum #343215 (looking at it again, it's not a "legal issue")

And you are right. It is an opinion thing. The opinion of the admins and a majority (by my observation) of the active users. (it comes up in the Discord quite often)

I read the post and yeah, he makes some good points, but not quite. At least you understand it's not a legal issue. The thing is, while NotMeNotYou is right that pixels are approximated based on context learned from the training set, he confuses that with each pixel being copied from 'the same spot' in other pictures. That's not true. It's contextual: if a 'bicep' pixel is nearby, it's more likely the next one will be a 'shoulder' pixel. That's not copying art, that's common sense. If I were drawing, I'd put the head above the shoulders too, and I'd like the AI to do the same. That's all this is. It has nothing to do with stealing a pixel and putting it in the same place.

Sadly, since that's the opinion of an admin, it seems things won't go anywhere here, as there's no good basis for a discussion to be had. Hopefully, when more people pick up on it and start finding other venues to push their art, e6 will change its course. Draco was quite sensible in his arguments too, so I don't see how anything anyone says could change the opinion of someone who doesn't care to have it changed, only to rationalize keeping it.
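To make "contextual" concrete, here is a toy sketch in Python. The anatomy labels and sequences are made up purely for illustration; no real model works on labeled body parts like this. The point is the difference between storing statistics and storing copies: the "model" below keeps only aggregate co-occurrence counts learned across many examples, and no single training sequence can be read back out of it.

```python
from collections import Counter, defaultdict

# Toy illustration of "contextual" prediction: learn which label tends
# to follow another across many examples. The model stores aggregate
# counts (statistics), not a copy of any single training sequence.
training_sequences = [
    ["head", "shoulder", "bicep", "forearm"],
    ["head", "shoulder", "bicep", "hand"],
    ["head", "neck", "shoulder", "bicep"],
]

transitions = defaultdict(Counter)
for seq in training_sequences:
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1

# Most likely region to appear after a "shoulder":
print(transitions["shoulder"].most_common(1))  # [('bicep', 3)]
```

A diffusion model is vastly more sophisticated than this, but the analogy holds: what gets retained is "a shoulder tends to sit next to a bicep", not the pixels of any particular training image.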

watsit said:
That doesn't matter. Step A: Have access to the work, Step B: Produce something substantially similar to the work. That's copyright infringement. It doesn't matter how you go from A to B.

It does matter, because the amount of information retained from any single original work is negligible, and the original is irrecoverable unless you put that information back in from an outside source. This is provable mathematically, so long as the model is well trained, which is also provable. At most it would be like two artists creating something similar after studying the same source materials; that would not be a copyright violation, and neither is this. Any single image contributes a few bytes at most to the whole network; the information in the original image is gone. That's less data than an MD5 or SHA hash, neither of which is considered a copyright-infringing derivative.

It's less infringement than making art of Pokemon.
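To put rough numbers on the "bytes per image" claim, here is a back-of-envelope calculation. The figures are approximate public estimates, not exact values: Stable Diffusion v1's UNet is on the order of 860 million float32 parameters, trained on a LAION subset of roughly 2 billion images.

```python
# Rough public estimates (assumptions, not exact figures):
params = 860_000_000            # ~Stable Diffusion v1 UNet parameter count
bytes_per_param = 4             # float32
images = 2_000_000_000          # ~LAION training subset size

bytes_per_image = params * bytes_per_param / images
print(f"~{bytes_per_image:.2f} bytes of model capacity per training image")  # ~1.72

# Common hash digest sizes, for comparison:
md5_bytes, sha256_bytes = 16, 32
print(bytes_per_image < md5_bytes)  # True
```

Under these assumptions, the average training image accounts for under two bytes of model weights, less than even an MD5 digest of that same image would occupy.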


watsit said:
It was never accepted in the first place, so nothing's being removed that used to be allowed here.
Not every site needs to cater to everything; just as we don't accept stories, or audio work, or sculptures, or fursuits, or photos without also having artwork created by a person, we don't accept AI generated artwork without it also having artwork created by a person.

This is a disingenuous argument: Irrelevant Photographs is given its own section in the Uploading Guidelines, which describes the specific properties that make a photograph irrelevant. By contrast, AI art, *as a medium*, is listed under Low quality submissions alongside such things as "scribbles", "bad edits" and "highly visible artifacts". No criteria are listed to indicate when AI art is acceptable (even though there are permissible uses, per this quote):

NMNY said:
Using AI for supplementing regular art, like helping with backgrounds or as pose references, are fine, but we are definitely drawing the line at people either uploading unmodified AI artwork, or compositing AI works and applying mild touch ups.

So the Uploading Guidelines are not just saying "this is not an archive for unaltered AI art" the same way they do for photography: They're saying "all AI art is shit and doesn't belong on the site under any circumstances", which is both not true (per the NMNY quote above) and something that AI art advocates are obviously going to want to try to argue against.

No one is saying you can't use AI as a tool in creating artwork. But you still need to put in enough of your own work to make the artwork yours, rather than just what the system spit out with little (if any) touch-ups.

watsit said:
It's the policy:
post #3590930 is an example of a piece that started with an AI generated image, and used it for a pose reference that they redrew in their own style, changing a number of things, fixing up obvious issues, and swapping the character for a different one.

Suppose I need help with a background (which, as NMNY said, is a valid use for AI art), so I decide to use a picture full of jpeg compression artifacts as the background for a high quality original drawing. Would such an image be permissible to upload? Or would that still be a low quality submission in spite of my alterations?

Would the answer be different if instead of a heavily compressed jpeg, I used an AI generated background? Why or why not?

That doesn't matter. Step A: Have access to the work, Step B: Produce something substantially similar to the work. That's copyright infringement. It doesn't matter how you go from A to B.

Absurd. How much "substantial similarity" is there in the average piece of Pokémon/MLP/Helluva Boss/FNAF/Zootopia/<insert licensed property with furry characters here> fanart uploaded to the site on an hourly basis? Much of it is intentionally drawn in the style of the source material, and some even uses 3D character models directly ripped from the source. Is copyright infringement only wrong in your eyes when an AI does it?

dubsthefox said:
I don't understand why some people insist that e621 should host AI "art". Just use r34 for it if it's that important. They probably won't give a shit about it, as they usually don't with other stuff.

The point I'm building towards is this: We're pushing back on e621's AI art ban because the rationale cited for it is wildly inconsistent.

If you go by the Uploading Guidelines, it's because all AI art is inherently "low quality" and thus falls below the site's standards for uploads. But this is demonstrably false: you need only look at the works posted to the Furry Diffusion Discord server to see that quality AI artworks do already exist and, were they not blanket banned, would likely be considered high enough quality for the site's standards.

If you go by moderator statements, it's because AI art generators are just "a fancy tool for automated tracing and collage making from millions of newspapers for every image output" and so their output should be considered copyright infringement. This is based on a misunderstanding of how these AI art tools actually function: Training AI models on existing artworks is no more copyright infringement than a human artist looking at other artworks while learning their own craft. The model is not composed of digitized representations of pre-existing artworks: It is a distinct mathematical model for recognizing patterns which cannot be used to recreate any of the specific works it was exposed to. Thus, its outputs are unique works, not collages or traces of existing works.

If you go by different statements from the same moderator, supplementing regular art with AI by using it for backgrounds or pose references is fine, but uploading unmodified or lightly modified AI art is not. But how could this possibly be the case if AI art were inherently plagiaristic, as NMNY claims? Would an artwork drawn atop a work of plagiarism not still contain plagiarism? This directly contradicts both the Uploading Guidelines and NMNY's own previous statements.

Now, I'm actually not saying e621 is obligated to host AI art. For one thing, I can see how allowing unmodified AI images would cause a flood of uploads that might overwhelm the tagging system. If that were the reason cited for the ban, I would gladly abide by it. But instead we have a mess of contradictory rulings based on misinformation that simultaneously treat AI art as low-effort garbage, shameless art theft and, somehow, still an acceptable basis for one's own original work.

I'd prefer it not be banned, of course, but if there's going to be a ban, I just want the reason given to be consistent, honest, factually correct, and not needlessly antagonizing to advocates of a new artistic medium.

ckitt said:
I guess it's the familiarity, and since the pictures are furry-oriented OCs, I'm not even sure r34 would accept that kind of flooding. It seems like you just want to pass the buck. What I don't get is why it can't be properly tagged here so people who don't like it can just blacklist it, or given its own category so it doesn't mix with normal art. I mean, there are countless solutions; the only 'real' reason I see is a lack of will to even discuss it in a serious manner without it devolving into 'if you don't like it, leave' instead of a good rational explanation. Don't you see how that is off-putting?

I think NMNY did an adequate job explaining why AI art submission is antithetical to the purpose of e621. Consider other sites, such as DeviantArt, Pixiv, etc., which have not adopted a clear policy on AI-generated art. The most obvious effect is submission of AI-generated art at a pace and volume that dwarfs non-AI art, and this pace apparently continues to accelerate.

A far more complete argument is in this video. I certainly don't agree with all of NMNY's arguments (for example, I agree his technical understanding seems inadequate) or all of those in the video, but as far as I can see the volume problem in itself is a) clear given the evidence, and b) not solvable by any means other than total exclusion of AI-generated art.

This also specifically nullifies your blacklist argument, as a flooded page of search results might return only 10 or 20 non-blacklisted items in a page of 320.

EDIT: I want to specifically point out another argument from the video: 'trending' themes and AI flooding will interact such that the worst flooding hits the most popular themes.


Watsit


savageorange said:
This also specifically nullifies your blacklist argument, as a flooded page of search results might return only 10 or 20 non-blacklisted items in a page of 320.

Not to mention that moderators still have to look through all art posted to approve or deny them, and respond to flags, reports, etc, which would reduce the rate that non-AI generated art would be handled.

reallyjustwantpr0n said:

AIs are able to write short stories now, really well.

That interests me; I haven't found one that provoked thought in my searching or my own text-AI usage. Will you name one, please?

foxel said:
That interests me; I haven't found one that provoked thought in my searching or my own text-AI usage. Will you name one, please?

All GPT-3 models have that capability; you can try some others in AIDungeon, if I remember correctly. And then there's OpenAI: https://beta.openai.com/playground/p/default-chat


savageorange said:
I think NMNY did an adequate job explaining why AI art submission is antithetical to the purpose of e621. Consider other sites, such as DeviantArt, Pixiv, etc., which have not adopted a clear policy on AI-generated art. The most obvious effect is submission of AI-generated art at a pace and volume that dwarfs non-AI art, and this pace apparently continues to accelerate.

A far more complete argument is in this video. I certainly don't agree with all of NMNY's arguments (for example, I agree his technical understanding seems inadequate) or all of those in the video, but as far as I can see the volume problem in itself is a) clear given the evidence, and b) not solvable by any means other than total exclusion of AI-generated art.

This has already been addressed; an AI tag would fix the issue. First of all, flooding is taken care of by the fact that even for 'normal' posts on e6 you have an 'allowance' determined by a certain algorithm, so you can't flood the queue. It's not unreasonable to think you could add a separate, reduced allowance for AI-generated art. All in all, the only increase in traffic would come from more people being able to make their own art. If all those people wanted to post their own crayon art, wouldn't that also cause flooding? Would you ban crayon art? It's just an 'easy out' rather than a concrete solution.

watsit said:
Not to mention that moderators still have to look through all art posted to approve or deny them, and respond to flags, reports, etc, which would reduce the rate that non-AI generated art would be handled.

Since when has that been a problem? So, if the community got bigger and more artists came in, you'd be crying about mods having to do more work? How about getting more mods? Ever considered that?

spacies said:
I'd prefer it not be banned, of course, but if there's going to be a ban, I just want the reason given to be consistent, honest, factually correct, and not needlessly antagonizing to advocates of a new artistic medium.

That last part is what is actually the worst. The toxicity and elitism some people show, people who don't even want to engage in good faith in the discussion, is what irks me the most.

ckitt said:
I guess mine looked 'too good'?

ckitt said:
I am still a bit annoyed that the art of my characters was taken down for "low quality" when 1. it had far better quality than most pictures here, enough to fool people into promoting it to the daily popular, and 2. there are AI-generated pictures here of far worse quality that are allowed to stay.

I'm gonna be honest, this reads as the AI-generation equivalent of self-upload artists being confused about their posts getting declined.

Watsit


ckitt said:
It's not unreasonable to think you could add a separate, reduced allowance for AI-gen art.

Is it? How much do you know about the site's code? It wasn't created brand new for this site, it's based on a preexisting codebase for booru sites, and as it is, the site went through a massive overhaul a while back to basically "reset" the site's code to an updated version of that preexisting codebase, to get rid of a lot of the hacks and changes that had built up and were leading to a lot of inefficiencies. Adding a new tag category is something that would take an unreasonable amount of work, so it's not at all clear that "a separate, reduced allowance for AI-gen art" is easy to implement.

And it still wouldn't account for the fact that these AI art generators are, at best, an open legal question. Stable AI themselves give a complete non-answer:

What is the copyright for using Stable Diffusion generated images?

The area of AI-generated images and copyright is complex and will vary from jurisdiction to jurisdiction.

i.e. it's not clear-cut what the copyright status of the system's output is. They're also acknowledging that artists currently have no way to opt in to or out of having their art used for training, and are adding such options for future models, which would be a weird thing to do if "it isn't copying" and it was all completely, unquestionably legal as-is. Or consider how, when doing the same exact kind of AI generation for music, the training sets explicitly avoid copyrighted works used without permission because of legal issues, yet the training sets for images didn't bother avoiding copyrighted works. Curious, that.

ckitt said:
Since when has that been a problem? So, if the community got bigger and more artists came in, you'd be crying about mods having to do more work? How about getting more mods? Ever considered that?

How do you propose they go about "getting more mods"? The moderators are all volunteers; they're normal users whom the admins have deemed trustworthy enough with the power they've been given. There is only one person here who is paid staff: NotMeNotYou, the head admin, is employed by DragonFruit. Everyone else is a volunteer working in a hierarchy under them. "Getting more mods" is not as simple as putting up a public request for volunteers; that's a surefire way to get bad actors into positions of privilege, or careless people abusing their privileges.

watsit said:
Is it? How much do you know about the site's code? It wasn't created brand new for this site, it's based on a preexisting codebase for booru sites, and as it is, the site went through a massive overhaul a while back to basically "reset" the site's code to an updated version of that preexisting codebase, to get rid of a lot of the hacks and changes that had built up and were leading to a lot of inefficiencies. Adding a new tag category is something that would take an unreasonable amount of work, so it's not at all clear that "a separate, reduced allowance for AI-gen art" is easy to implement.

So what you are saying is that the site lacks a proper coder, then. Funny, that.

watsit said:
And it still wouldn't account for the fact that these AI art generators are, at best, an open legal question. Stable AI themselves give a complete non-answer:
i.e. it's not clear-cut what the copyright status of the system's output is. They're also acknowledging that artists currently have no way to opt in to or out of having their art used for training, and are adding such options for future models, which would be a weird thing to do if "it isn't copying" and it was all completely, unquestionably legal as-is. Or consider how, when doing the same exact kind of AI generation for music, the training sets explicitly avoid copyrighted works used without permission because of legal issues, yet the training sets for images didn't bother avoiding copyrighted works. Curious, that.

I think adding an opt-out option is common sense for people who don't want their art in the models, since it is their art. It's not a legal or 'copying' issue, just simple respect. Also, even if the courts wouldn't find them liable, if they got sued by multiple people they'd still have to pay court fees, which, needless to say, would put a stop to the project. As it stands, though, the models use freely available, public art, and as has been stated before, it's no different from an artist looking through refs. S-AI has a lot more to worry about in terms of legal ramifications than the end users of their program, since they are the ones mostly liable for the models and what goes into them. This has nothing to do with the sites that host those pictures.

watsit said:
How do you propose they go about "getting more mods"? The moderators are all volunteers; they're normal users whom the admins have deemed trustworthy enough with the power they've been given. There is only one person here who is paid staff: NotMeNotYou, the head admin, is employed by DragonFruit. Everyone else is a volunteer working in a hierarchy under them. "Getting more mods" is not as simple as putting up a public request for volunteers; that's a surefire way to get bad actors into positions of privilege, or careless people abusing their privileges.

The same way they found the people they already have: look at active members of the community with a good track record and ask if they want to moderate content; some will eventually accept. Pretending this is a non-starter is simply an excuse not to try.

ckitt said:
Since when has that been a problem? So, if the community got bigger and more artists came in, you'd be crying about mods having to do more work? How about getting more mods? Ever considered that?

Mairo mentioned it at least two times in the last month in the Discord. Over 1k uploads per day need a lot of attention, and it's not always as simple as "looks good/bad". They have to check for CDNP and DNP posts as well, and if there's no source and artist tag, it adds more work. And they do add more janitors who can handle approvals.

ckitt said:
It's not unreasonable to think you could add a separate, reduced allowance for AI-gen art.

And... lowering the standards to host AI art is a bad take, in my opinion. Besides that, the decision on how AI is treated is already clear. If this is what you mean.

savageorange said:
I think NMNY did an adequate job explaining why AI art submission is antithetical to the purpose of e621. Consider other sites, such as DeviantArt, Pixiv, etc., which have not adopted a clear policy on AI-generated art. The most obvious effect is submission of AI-generated art at a pace and volume that dwarfs non-AI art, and this pace apparently continues to accelerate...

See, what you did there was say "I agree with NMNY's arguments for banning AI art" and then cite arguments which NMNY didn't make and which are not listed on the Uploading Guidelines as the reason for AI art to be banned. NMNY did not ban AI art because it would overwhelm the site: he banned it because he believes that AI generators are just "a fancy tool for automated tracing and collage making from millions of newspapers for every image output". Which is wrong. Factually. That isn't how it works.

...I certainly don't agree with all NMNY's arguments (for example, I agree his technical understanding seems inadequate)...

Why should someone with an inadequate technical understanding of a technology be allowed to make policy decisions about that technology? Why should people attempting to correct that understanding be treated like villains for doing so?

watsit said:

And it still wouldn't account for the fact that these AI art generators are, at best, an open legal question. Stable AI themselves give a complete non-answer:
i.e. it's not clear-cut what the copyright status of the system's output is.

You seem to have ignored the questions about copyright that I asked in my previous post, so I ask again: Why is copyright infringement only wrong when an AI does it? Do you also advocate for the banning of all fan works depicting licensed IPs that are uploaded to the site? This site is awash in copyright infringement, some of which is far more explicitly dangerous to host (There is actual precedent for Nintendo threatening to litigate NSFW Pokémon fanwork, for example) but it makes up a substantial portion of the site's content. So why is an "open legal question" considered beyond the pale when more overt copyright infringement is apparently fine?

They're also acknowledging that artists currently have no way to opt in to or out of having their art used for training, and are adding such options for future models, which would be a weird thing to do if "it isn't copying" and it was all completely, unquestionably legal as-is. Or consider how, when doing the same exact kind of AI generation for music, the training sets explicitly avoid copyrighted works used without permission because of legal issues, yet the training sets for images didn't bother avoiding copyrighted works. Curious, that.

It is not proof of wrongdoing to take action in order to appease the intense social hostility which has been propagated through misinformation like that parroted by NMNY and yourself. It isn't copying; it doesn't just regurgitate pieces of pre-existing artworks; it's not just a tracing or collage machine. But people who do not understand the technology keep making those claims anyway. Of course they'll have to respond to them eventually.

dubsthefox said:
... lowering the standards to host AI art is a bad take, in my opinion.

Who said anything about lowering standards? It's only "lowering standards" if you consider all AI art to be intrinsically low quality, regardless of how it actually looks. Have you seen any of the high-quality stuff that has been posted to the Furry Diffusion Discord or r/aiyiff and r/furai? There are some works there which would absolutely exceed the site's standards if AI art were assessed the same as any other art.

Besides that, the decision how AI is treated is already clear. If this is what you mean.

The decision regarding how AI art is to be treated by the site isn't clear at all: See my earlier post above where I outline the contradictory policies given by NMNY and the Uploading Guidelines, and how they indicate fundamentally incompatible beliefs about what AI art is and why it should be banned.

watsit said:
Not to mention that moderators still have to look through all art posted to approve or deny them, and respond to flags, reports, etc, which would reduce the rate that non-AI generated art would be handled.

dubsthefox said:
Mairo mentioned it at least two times in the last month in the Discord. Over 1k uploads per day need a lot of attention, and it's not always as simple as "looks good/bad". They have to check for CDNP and DNP posts as well, and if there's no source and artist tag, it adds more work. And they do add more janitors who can handle approvals.

This is what gets me: It feels like the real reason AI art is banned is because it can be produced very quickly and so the volume of AI art uploads would overwhelm the site if unaltered AI images were permitted. If that were given as the explicit reason for the ban on the Uploading Guidelines, then I would have no issue with it.

But the rules say it's low quality and the admin says it's art theft and the admin also says that it's okay to use as a background or for reference but it needs to be significantly altered by human input to be a valid upload. Why? Why not just say "Our site and its policies are not built to handle uploads at the rate that is now possible thanks to AI art, so AI art is banned"? Why are these other, contradictory justifications offered instead? Is it not enough to make a decision based on the practical realities of the site's available resources? Do the admins also have to feel like they're a heroic bastion defending True Art™ against the barbarian hordes?

The reason it was shoehorned into low quality submissions was because a) we had already listed waifu2x there, and b) we wanted it done fast, without any major changes to the document, while we had internal discussions on how to actually handle it properly.

Sometimes you gotta hastily slap a bandage on before you can actually operate properly.

I've updated the Uploading Guidelines now to actually reflect our current stance on it, it might not be entirely final but if anything changes further it'll be minor wording stuff only.

For everyone's convenience the policy is the following:

Bad things to upload:

  • AI Generated content: No AI generated, or AI assisted artwork.
    • Exceptions are currently for backgrounds (treated like using a photo as a background, quality rules apply); for artwork that references, but does not directly use, AI generated content; and for full paintovers.

notmenotyou said:
The reason it was shoehorned into low quality submissions was because a) we had already listed waifu2x there, and b) we wanted it done fast, without any major changes to the document, while we had internal discussions on how to actually handle it properly.

Sometimes you gotta hastily slap a bandage on before you can actually operate properly.

I've updated the Uploading Guidelines now to actually reflect our current stance on it, it might not be entirely final but if anything changes further it'll be minor wording stuff only.

For everyone's convenience the policy is the following:

Thank you! That does feel more like the policy recognizes AI art as a distinct and valid medium which, like photography, is beyond the scope of the site's purpose unless it is significantly altered by a human artist. Much appreciated!

spacies said:
Who said anything about lowering standards?

That's why I put the "If this is what you mean." behind my question. I am not a native English speaker (+ other problems), and Google doesn't always help with understanding text. I wasn't sure about it. Apparently I understood it wrong, but I wanted to give my two cents in case I hadn't.


dubsthefox said:
That's why I put the "If this is what you mean." behind my question. I am not a native English speaker (+ other problems), and Google doesn't always help with understanding text. I wasn't sure about it. Apparently I understood it wrong, but I wanted to give my two cents in case I hadn't.

I see. I think I also might have misunderstood what you meant. My apologies for the confusion.

notmenotyou said:
The reason it was shoehorned into low quality submissions was because a) we had already listed waifu2x there, and b) we wanted it done fast, without any major changes to the document, while we had internal discussions on how to actually handle it properly.

Sometimes you gotta hastily slap a bandage on before you can actually operate properly.

I've updated the Uploading Guidelines now to actually reflect our current stance on it, it might not be entirely final but if anything changes further it'll be minor wording stuff only.

For everyone's convenience the policy is the following:

This is so much better. Thank you.

spacies said:
See, what you did there was say "I agree with NMNY's arguments for banning AI art" and then cite arguments which NMNY didn't make and which are not listed on the Uploading Guidelines as the reason for AI art to be banned. NMNY did not ban AI art because it would overwhelm the site: he banned it because he believes that AI generators are just "a fancy tool for automated tracing and collage making from millions of newspapers for every image output". Which is wrong. Factually. That isn't how it works.

Strawmanning.
I certainly don't agree with most of NMNY's arguments (as you can see from this thread), and I take incorrect technical understanding very seriously. Nonetheless, his argument against accepting AI-generated art, as I remember it, is simple and includes the word 'antithetical', which is why I used it.
I admit I can't find it in this thread, so perhaps it was in a previous version of the uploading guidelines.

Watsit

Privileged

ckitt said:
So, what you are saying is that the site lacks a proper coder then, funny that.

Just because they may not want to go through the hassle of adding a feature you want doesn't mean they're not "proper coder"s. They have the entire site codebase to deal with, to ensure it continues running smoothly 24/7, and if a given feature adds more to that burden than they're willing to deal with (again, we're talking about unpaid volunteers, here), it's completely understandable for them to not do it.

ckitt said:
I think adding an option to opt-out is common sense for people that don't want their art in the models since it is their art. It's not a legal or 'copying' issue, just simple respect.

So then wouldn't it be incredibly disrespectful for e6 to allow generated art built with training sets using thousands, millions, or billions of images that ignored this standard of simple respect?

ckitt said:
Also even if the courts wouldn't find them liable, if they get sued by multiple people they'd still have to pay court fees which needless to say, would put a stop to the project. As it stands though, the models use freely available, public art and as it has been stated before, it's no different than an artist looking through reffs.

So if it's the same as an artist using refs, you agree that if one of these systems produced something substantially similar to one of those refs they used, then, just like an artist, they would be liable for copyright infringement?

Though it actually is different from an artist using refs, as explained in the video. For one, these companies build training sets under non-profit research organizations, which are given special permission for handling copyrighted content (under the presumption that it's for non-profit research purposes) that you and I don't have. This training data is then used directly by for-profit companies in a way that's extremely suspect (and make no mistake, the ability of these systems to generate art as well as they do is directly tied to their training sets, and the quality of those training sets is directly tied to the amount of (copyrighted) material used to build them, all without compensating the original creators). And secondly, computers have the ability to perfectly recall data and replicate results in a way that no human can. So it is quite different from "an artist looking through reffs".

And just because it used "freely available, public art" doesn't mean they aren't still protected by copyright. That thing that allows a creator exclusive control over exploiting their work as they want. It wouldn't have been hard for them at all to say "We're only using non-copyrighted, public domain artwork, or any artwork given freely to us for the purpose of making the data for these systems that we'll make money from". But they didn't. They instead hid behind "it's for non-profit research", before using the resulting data for non-research profit.

ckitt said:
S-AI has a lot more to worry about in terms of legal ramifications than the end users using their program since they are the ones mostly liable for the models and the things that go in them. This has nothing to do with sites that host those pictures.

Except if these systems are found to be infringing, sites hosting such artwork will become liable if they don't immediately deal with it themselves once known. If someone gives you illicit goods, and a court finds they had no right to give it to you, you don't get to keep it just because you didn't know at the time.

ckitt said:
The same way they found the people they already have. See active members of the community with a good track record and ask if they want to moderate content, some will eventually accept.

Which takes time, trust building, and wouldn't at all keep pace with the added flood of content.

ckitt said:
Would you like to discuss those issues? Moral or legal ones as the 'belief' stuff are just personal preference and the 'don't want to' is again, personal taste as long as there is no poll on the site that I am aware of as well as there is an obvious lack of understanding from decedents, to it, at least from what I've seen from the mentions of 'copyright' etc.
#3544693

To allow AI on the site is unethical in some manner. At the moment, all of the AI furry art that I've seen so far was created by a robot trained on other artists' art, and I'm certain that almost none of those artists are okay with their art being used to train the AI. So to provide a platform for people to post art made by an AI that was trained on artists who don't consent to their art being used is to say that it's totally okay to do things that go against the artists' wishes/consent.
This alone is a good reason not to allow AI.

thousandfold said:
To allow AI on the site is unethical in some manner. At the moment, all of the AI furry art that I've seen so far was created by a robot trained on other artists' art, and I'm certain that almost none of those artists are okay with their art being used to train the AI. So to provide a platform for people to post art made by an AI that was trained on artists who don't consent to their art being used is to say that it's totally okay to do things that go against the artists' wishes/consent.
This alone is a good reason not to allow AI.

As an artist, I use other people's works as reference constantly to make my own. Others have also used my works as reference to make theirs.
Is that morally objectionable?

mabit said:
As an artist, I use other people's works as reference constantly to make my own. Others have also used my works as reference to make theirs.
Is that morally objectionable?

No, because artists don't have anything against you using their art as a reference for what you draw; in fact, they encourage it, because I'm sure most artists want people to be inspired by their work. This changes completely once you replace a person with a machine. Artists do not want to inspire an AI, because they care about whether or not other people put the work in themselves, as opposed to just making an AI do it for them.

thousandfold said:
No, because artists don't have anything against you using their art as a reference for what you draw; in fact, they encourage it, because I'm sure most artists want people to be inspired by their work.

That's not something you can say about every single artist that exists. And even for the ones that would like that kind of thing to happen, it's still being done without their consent or knowledge

thousandfold said:
This changes completely once you replace a person with a machine. Artists do not want to inspire an AI because they care about whether or not other people put the work in themselves, as opposed to just making an AI do it for them.

Again, you cannot make any sweeping generalizations about how every single artist feels about anything

thousandfold said:
No, because artists don't have anything against you using their art as a reference for what you draw; in fact, they encourage it, because I'm sure most artists want people to be inspired by their work. This changes completely once you replace a person with a machine. Artists do not want to inspire an AI, because they care about whether or not other people put the work in themselves, as opposed to just making an AI do it for them.

Interesting argument. Personally, I would suggest that artists are interested in transmitting experiences to other people, and AIs in their present state do not appear to be the kind of thing you can meaningfully call a person.

To me, putting in the work is more of a plausible evidence that you might be having the kind of experience that I am trying to transmit.

mabit said:
Again, you cannot make any sweeping generalizations about how every single artist feels about anything

You're right, I can't; what I'm saying is only based on an educated assumption. From what I've seen so far, the majority of artists are against their work being used for this kind of thing.

Personally, I don't think I care if someone puts my art through an AI. But as a whole, I do care about people using other artists' work to train an AI, especially when people are doing it against the artist's will. It would be better if people at least asked the artists before using their art to train an AI.
However, it is not necessary to ask permission to use someone else's artwork as inspiration for your own art, because it's easily assumed that the artist will be okay with it, since pretty much every artist has always been okay with it.

Watsit


thousandfold said:
You're right, I can't; what I'm saying is only based on an educated assumption. From what I've seen so far, the majority of artists are against their work being used for this kind of thing.

Personally, I don't think I care if someone puts my art through an AI. But as a whole, I do care about people using other artists' work to train an AI, especially when people are doing it against the artist's will. It would be better if people at least asked the artists before using their art to train an AI.

Rather pertinently, there's been at least one artist that has taken their work down from here because someone used e6 as a training source for an AI. They didn't at all mind their art being here, but they didn't want it used for training the AI, so they had it removed.

And as long as people making these training sets keep behaving as if copyright doesn't apply to them, I imagine you'll be seeing more artists pull down their work to prevent these training sets from using their art like that without permission.

watsit said:
Rather pertinently, there's been at least one artist that has taken their work down from here because someone used e6 as a training source for an AI. They didn't at all mind their art being here, but they didn't want it used for training the AI, so they had it removed.

Not that I disagree with their intent, but I'm sure the AI folks will be scraping more than just this site for references.

watsit said:
Just because they may not want to go through the hassle of adding a feature you want doesn't mean they're not "proper coder"s. They have the entire site codebase to deal with, to ensure it continues running smoothly 24/7, and if a given feature adds more to that burden than they're willing to deal with (again, we're talking about unpaid volunteers, here), it's completely understandable for them to not do it.

If going through hassles isn't something they want, maybe they shouldn't volunteer to code for a site.

watsit said:
So then wouldn't it be incredibly disrespectful for e6 to allow generated art built with training sets using thousands, millions, or billions of images that ignored this standard of simple respect?

No, e6 has nothing to do with Stability, and if a court found S-AI liable, e6 could simply do a sweep of the ai-generated tag and remove it all at once. Or a specific infringing model.

watsit said:
So if it's the same as an artist using refs, you agree that if one of these systems produced something substantially similar to one of those refs they used, then, just like an artist, they would be liable for copyright infringement?

The issue is that the system can't produce anything substantially similar unless you feed it art through img2img, art that is in and of itself stolen. Again, your lack of understanding is staggering.

watsit said:
Though it actually is different from an artist using refs, as explained in the video. For one, these companies build training sets under non-profit research organizations, which are given special permission for handling copyrighted content (under the presumption that it's for non-profit research purposes) that you and I don't have. This training data is then used directly by for-profit companies in a way that's extremely suspect (and make no mistake, the ability of these systems to generate art as well as they do is directly tied to their training sets, and the quality of those training sets is directly tied to the amount of (copyrighted) material used to build them, all without compensating the original creators). And secondly, computers have the ability to perfectly recall data and replicate results in a way that no human can. So it is quite different from "an artist looking through reffs".

Yes, that's how computers work when they load an image in Windows Explorer or in Photoshop. It's not how computers work when they generate images from neural networks. If you can't tell the difference, you are not equipped for this discussion.

watsit said:
And just because it used "freely available, public art" doesn't mean they aren't still protected by copyright. That thing that allows a creator exclusive control over exploiting their work as they want. It wouldn't have been hard for them at all to say "We're only using non-copyrighted, public domain artwork, or any artwork given freely to us for the purpose of making the data for these systems that we'll make money from". But they didn't. They instead hid behind "it's for non-profit research", before using the resulting data for non-research profit.

Their research is for non-profit purposes; how those models are used in the future is not my, your, or e6's concern until it becomes a legal issue. So, let's say in the future we get models trained on AI art itself; wouldn't that diminish or 'dilute' how much of the original artists' work is taken? Already the 'art' data in the network is a minuscule part of it, maybe less than a byte on average for each picture. If it were further diluted, maybe 5, 10, or 100 times more, would it still be 'copyright infringement' even if a similar picture were produced? You've never seen two artists produce similar pictures that aren't copies of each other? I have seen that several times, mainly from artists that used the same ref.

watsit said:
Except if these systems are found to be infringing, sites hosting such artwork will become liable if they don't immediately deal with it themselves once known. If someone gives you illicit goods, and a court finds they had no right to give it to you, you don't get to keep it just because you didn't know at the time.

Which takes time, trust building, and wouldn't at all keep pace with the added flood of content.

And it's as easy to deal with as selecting a tag and removing all art under it. Problem solved; no longer liable. But good luck with that future where people can copyright literal bytes of data.

Random artist said:
'0100 is mine and no one can have it in a file, or neural network'.

Yeah, that's gonna fly. lol
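The cleanup described above (sweeping a tag and removing everything under it) amounts to a bulk filter keyed on a tag. As a toy sketch, assuming a made-up in-memory post structure (e621's actual schema and tag names are not shown here):

```python
# Hypothetical post records; real sites would do this as a database query.
posts = [
    {"id": 1, "tags": {"ai_generated", "canine"}},
    {"id": 2, "tags": {"canine", "traditional_media"}},
    {"id": 3, "tags": {"ai_generated", "dragon"}},
]

def sweep(posts, banned_tag):
    """Return only the posts that do NOT carry the banned tag."""
    return [p for p in posts if banned_tag not in p["tags"]]

print([p["id"] for p in sweep(posts, "ai_generated")])  # [2]
```

The same filter works for a per-model tag (e.g. a hypothetical `model:xyz`), which is the "specific infringing model" variant of the sweep.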

thousandfold said:
At the moment, all of the AI furry art that i've seen so far were created by a robot that was trained using other artists art

Yes, that's how AIs create art, and it's also how human artists create art. Why is it okay in one case but not the other?

thousandfold said:
and I'm certain that almost none of these artists are okay with their art being used to train the AI.

There are people certain that the earth is flat; you can join that bandwagon too, since you seem pretty loose with your certainty. In this thread, several artists have said they don't care if their art is used to train models. At the end of the day, most people are poorly educated on the subject of AI, and some of their choices are based on that lack of education. I would still respect an artist's wish not to have their art train a model, but the reality of the situation is that it will happen anyway, whether by S-AI or by some private actor with access to their own trainable AI. The fact that this ability exists makes such requests simple wishful thinking. Instead, they should accept reality and make use of the tool being offered to them.

thousandfold said:
So to allow a platform for people to post art made by an AI that was trained using other artists who don't consent to their art being used for the AI, is to say that its totally okay to do things that go against the artists wishes/consent.
This alone is a good reason not to allow AI

Those platforms already exist, and they can be far more toxic towards original artists precisely because of the treatment AI art gets in places like here, by people like you and watsit. e6 has a chance to do it 'right', respectfully towards artists, and find a compromise, but you are literally here arguing against that.

thousandfold said:
Artists do not want to inspire an AI because they care about whether or not other people put the work in themselves, as opposed to just making an AI do it for them.

Totally asinine take. How many artists have you asked? 1? 2? 5? What percentage of the 'total artists' in the world is that? Why would they care how much effort someone puts into their pictures? If less effort is given, even in AI art, the piece will look like total shit. I went to r34 to see what the tag there was about, and it is literally a pile of wet shit all over my screen, zero standards. E6 can do better, and NotMeNotYou gave a good idea of how: by painting over the art you fix the errors and take the 'low effort' part out of the equation, and such art is allowed here for that reason, even if technically, by YOUR and watsit's standards, it's still 'art theft'. I would just argue that there will be a point where painting over won't be needed, due to the complexity prompts will be able to take in the future, but that's for when that time comes.

In the end, what is the artistic value of the picture? Is it the technique or the subject? If it's both then wouldn't AI art constitute at the very least half of what traditional art is?

When are we going to get an AI that actually does some good -- something that everyone could get behind? When are we going to get an AI that can tag art accurately?

savageorange said:
Interesting argument. Personally, I would suggest that artists are interested in transmitting experiences to other people, and AIs in their present state do not appear to be the kind of thing you can meaningfully call a person.

To me, putting in the work is more of a plausible evidence that you might be having the kind of experience that I am trying to transmit.

What about people that don't care about the meaning of your picture but enjoy the style, and want to transmit their own meaning using that style of art? Artists do take style references from one another all the time.

darryus said:
When are we going to get an AI that actually does some good -- something that everyone could get behind? When are we going to get an AI that can tag art accurately?

It already exists. For the most part, it can do Danbooru tags pretty well. But I'm sure people in the forum would find a reason to shit on it for the 0.1% error rate it might have, as if they tag properly all the time.
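Automated taggers of the kind being discussed are usually multi-label classifiers: the model emits a confidence score per tag, and tags above a threshold are kept. A minimal sketch of that last step, with invented tag names and scores (no real tagger model is invoked here):

```python
def predict_tags(scores, threshold=0.5):
    """Keep every tag whose model confidence clears the threshold.
    `scores` maps tag name -> confidence in [0, 1]."""
    return sorted(tag for tag, s in scores.items() if s >= threshold)

# Hypothetical model output for one image:
scores = {"canine": 0.97, "solo": 0.91, "outside": 0.62, "sword": 0.08}
print(predict_tags(scores))  # ['canine', 'outside', 'solo']
```

The threshold is where the "0.1% error rate" trade-off lives: raising it trades missed tags for fewer wrong ones.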

zermelane said:
NovelAI just released their paid image generation service, with an anime module and a beta for a furry module. They're pretty clearly trained on Danbooru and e621 respectively (they don't really try to hide that or anything). I think the anime model is already wireheading half of Japan, and reportedly it's making AI art discussions hard to follow at work because everyone's posting anime boobs.

The furry model is... it's better than any txt2img I've seen for furry stuff so far. It still does have the usual txt2img weaknesses, so, it can get very confused with unusual poses or anatomies, not to mention multiple characters in a scene, and getting good art out of it is a matter of knowing the random keywords it happens to associate with the "good art" direction in its latent space. But chances are that you can go there, type in the e621 tags for your fetish in the input field, and get out something you can fap to, and no other model that I've seen has done that so far.

Oh, man, if only things were going that slowly. I don't have a good feel yet for how well it works in practice, but people have been reporting success with getting these models to be pretty faithful to their sona with DreamBooth and even just textual inversion (which boils things down to an easily shareable embedding file a few kilobytes large). We might be bargaining over the exact level of convenience and faithfulness, but it's definitely not in the realm of science fiction.

Played around with it myself; funnily enough, I can generate better furry works with a model not trained on e621. The number of keywords it takes to break past the awkward furry-look barrier is absurd, but it gives better results.

ckitt said:
It already exists. For the most part, it can do Danbooru tags pretty well. But I'm sure people in the forum would find a reason to shit on it for the 0.1% error rate it might have, as if they tag properly all the time.

If it could actually manage under-tagged stuff like toe/finger counts and the more annoying-to-deal-with tag projects, I don't think anyone would complain.

darryus said:
If it could actually manage under-tagged stuff like toe/finger counts and the more annoying-to-deal-with tag projects, I don't think anyone would complain.

You'd be surprised. AI is the 21st-century boogeyman.

Watsit


ckitt said:
If going through hassles isn't something they want maybe they shouldn't volunteer for coding a site.

So because they may not want to deal with the costs/burden of adding and maintaining the particular feature you want, they shouldn't volunteer to help coding for the site at all. Gotcha.

ckitt said:
No, e6 has nothing to do with stability and if a court found S-AI liable e6 can simply do a sweep of ai-generated tag and remove it all at once. Or a specific infringing model.

Except it wouldn't be a sweep of the ai-generated tag, since it would depend on the training data set used. Results from a data set trained on legally-used images wouldn't be a legal problem, while those that had legally questionable or illegally-used images would be. Given how many such images there would end up being, that would be a hell of a lot of work to separate out. Would be better to just nuke it all and not bother with AI generated art.

ckitt said:
The issue is that the system can't produce anything substantially similar unless you feed it art through img2img, art that is in and of itself stolen. Again, your lack of understanding is staggering.

I don't know what crystal ball you're using, but I can assure you they don't work. You can't make such a claim without seeing what outputs are possible given all potential parameters. I doubt even the developers know for certain that it is physically incapable of producing a substantially similar result to its inputs.

ckitt said:
Yes, that's how computers work when they load an image in Windows Explorer or in Photoshop. It's not how computers work when they generate images from neural networks.

You think these systems are working with faulty memory modules?

Stable Diffusion doesn't have inherent randomness. Given the same inputs, it produces the same exact output. There are layers on top of the system that can introduce randomness, but you can just as well not use any randomness if you want, so you can provide the same inputs for it to produce the same output as before.

https://www.youtube.com/watch?v=ltLNYA3lWAQ is rather informative (https://youtu.be/ltLNYA3lWAQ?t=1500 explicitly mentions the one point where randomness is used, to produce the latents, and that you can avoid the randomness and provide a predefined set of latents for an image to make it output the same image again).
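The determinism point can be illustrated with a toy, stdlib-only stand-in for a seeded sampler. This is not Stable Diffusion; the update rule, sizes, and seeds are invented purely to show that once the noise source is seeded, the whole pipeline is a pure function of its inputs:

```python
import random

def generate(seed, steps=10, size=8):
    """Toy stand-in for a diffusion sampler: a seeded PRNG produces
    the initial 'latents', then a fully deterministic update loop
    refines them. Same seed and settings in -> same output out."""
    rng = random.Random(seed)                      # all randomness lives here
    latents = [rng.gauss(0.0, 1.0) for _ in range(size)]
    for _ in range(steps):                         # deterministic "denoising"
        latents = [0.8 * x + 0.1 for x in latents]
    return [round(x, 6) for x in latents]

a = generate(seed=1234)
b = generate(seed=1234)   # same seed and settings
c = generate(seed=9999)   # only the seed differs

print(a == b)  # True: identical inputs reproduce the output exactly
print(a == c)  # False: different latents, different output
```

Real implementations expose the same lever as an explicit seed or pregenerated latents; the "layers of randomness on top" are just unseeded calls into this kind of noise source.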

ckitt said:
If it was further diluted, maybe 5, 10, 100 times more, would it still be 'copyright infrigment' even if a similar picture would be produced?

It doesn't matter how much data is used. All that matters is if the system ends up producing an image substantially similar to one it had as input.

ckitt said:
Yeah, that's gonna fly. lol

Don't miss the forest for the trees. The actual process is completely irrelevant; if a copy ends up being made (or a substantially similar work to one that was used to make the system), it's copyright infringement. It doesn't matter how you transform, strip, or mangle the data, whether it's stored electronically, biologically, or chemically, whether it's measured in bits, neurons, etc, in the process... what matters is what ends up coming out from its original inputs.
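Whether a "substantially similar work" came out is ultimately a legal judgment, but a crude mechanical analogue of the test is a perceptual hash comparison: reduce each image to a compact fingerprint and compare fingerprints. A toy, stdlib-only sketch; the "images" are invented 8-pixel grayscale strips, while real perceptual hashes (e.g. average hash) apply the same idea to resized full images:

```python
def average_hash(pixels):
    """Toy perceptual hash: threshold each pixel against the image's
    mean brightness, yielding one bit per pixel.
    `pixels` is a flat list of grayscale values (0-255)."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [10, 200, 30, 220, 15, 210, 25, 205]   # made-up 'image'
near_copy = [12, 198, 33, 219, 14, 212, 24, 207]   # slight noise added
unrelated = [200, 10, 220, 30, 210, 15, 205, 25]   # inverted layout

print(hamming(average_hash(original), average_hash(near_copy)))  # 0: flagged as similar
print(hamming(average_hash(original), average_hash(unrelated))) # 8: clearly different
```

The point of the analogy is the one made in the post: the comparison only looks at what came out versus what went in, not at how the bits were transformed in between.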


watsit said:
Rather pertinently, there's been at least one artist that has taken their work down from here because someone used e6 as a training source for an AI. They didn't at all mind their art being here, but they didn't want it used for training the AI, so they had it removed.

And as long as people making these training sets keep behaving as if copyright doesn't apply to them, I imagine you'll be seeing more artists pull down their work to prevent these training sets from using their art like that without permission.

An artist should be able to take their works down for whatever reason, even no reason. But it's still not copyright infringement. Training an NN is similar to creating a hash of an image, and would be considered transformative use under U.S. copyright law, the same way that hashing functions are fair use. I mean, a decent hash will have around 256 bits of information, whereas a trained NN will see at most maybe ~32 bits of data from any one image (assuming it's a very large network; smaller ones would have even less than that).

Look at it this way: if I took a single pixel from each of 1 million images to make a 1000x1000 image, that would be transformative, as no discernible part of the original images exists. You couldn't match a single pixel in my 1000x1000 image unless I told you where each came from. A NN is similar, only instead of a single pixel, it's a handful of bits spread over the entire NN matrix. It's transformative and fair use, even without explicit permission.
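The per-image information budget in this argument can be sanity-checked with back-of-envelope arithmetic. The figures below are rough public estimates, not exact numbers: Stable Diffusion v1's UNet is usually cited at roughly 860M parameters, and the LAION-2B(en) training set at roughly 2.3B images:

```python
# Back-of-envelope: how much model capacity exists per training image?
# All figures are rough public estimates, not exact.
unet_params     = 860_000_000      # ~860M parameters in SD v1's UNet
bits_per_param  = 16               # fp16 weights
training_images = 2_300_000_000    # ~2.3B images in LAION-2B(en)

total_bits     = unet_params * bits_per_param
bits_per_image = total_bits / training_images
print(f"{bits_per_image:.1f} bits/image")  # ~6.0 bits per image on average
```

An average this small supports the post's "handful of bits" framing, though it is only an average; it says nothing by itself about memorization of images that are heavily duplicated in the training set.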

watsit said:
So because they may not want to deal with the costs/burden of adding and maintaining the particular feature you want, they shouldn't volunteer to help coding for the site at all. Gotcha.

No, it's that if they are unable to add said feature, they aren't coders.

watsit said:
Except it wouldn't be a sweep of the ai-generated tag, since it would depend on the training data set used. Results from a data set trained on legally-used images wouldn't be a legal problem, while those that had legally questionable or illegally-used images would be. Given how many such images there would end up being, that would be a hell of a lot of work to separate out. Would be better to just nuke it all and not bother with AI-generated art.

For the moment there are no legal issues or precedent for that. It's your wishful thinking that there will be, and that's okay; just don't force your opinion on others based on it, or refuse solutions because of it. Posts under the ai-generated tag could also carry a tag for which model was used; problem solved.

watsit said:
I don't know what crystal ball you're using, but I can assure you they don't work. You can't make such a claim without seeing what outputs are possible given all potential parameters. I doubt even the developers know for certain that it is physically incapable of producing a substantially similar result to its inputs.

You think these systems are working with faulty memory modules?

Stable Diffusion doesn't have inherent randomness. Given the same inputs, it produces the same exact output. There are layers on top of the system that can introduce randomness, but you can just as well not use any randomness if you want, so you can provide the same inputs for it to produce the same output as before.

Yes, if you put in the same inputs, prompts, seeds, step count, clip skip, modulations, denoising, method, etc., you will get the same picture. What does that have to do with the model recreating the original artwork that went into it? This literally has nothing to do with your point. Your point is about recalling the exact pictures put into the models; that's impossible, because the data literally isn't there. Even if it recreated a somehow dreadfully similar picture to one of the original works in the model through AI wrangling, it would look like a bad copy of it at best, or be different enough not to be subject to any sane copyright issue, aside from maybe using copyrighted characters. You can't copyright the position of 5000 pixels randomly scattered in a picture, as far as I know, so your wet dream of AI reproducing copyrighted art is just that, a dream. And I don't know what crystal ball YOU are using that makes you believe that's even possible, since you yourself stated that the people working on it don't know.

watsit said:
https://www.youtube.com/watch?v=ltLNYA3lWAQ is rather informative (https://youtu.be/ltLNYA3lWAQ?t=1500 explicitly mentions the one point where randomness is used, to produce the latents, and that you can avoid the randomness and provide a predefined set of latents for an image to make it output the same image again).

It doesn't matter how much data is used. All that matters is if the system ends up producing an image substantially similar to one it had as input.

Don't miss the forest for the trees. The actual process is completely irrelevant; if a copy ends up being made (or a substantially similar work to one that was used to make the system), it's copyright infringement. It doesn't matter how you transform, strip, or mangle the data, whether it's stored electronically, biologically, or chemically, whether it's measured in bits, neurons, etc, in the process... what matters is what ends up coming out from its original inputs.

The issue is that's not what AI generation does. That's more what img2img does, which isn't the subject of this discussion. We are talking about art GENERATION, not touch-ups. The fact that you can create a non-random model that recalls pictures better has nothing to do with S-AI, how it's used, and what it can do. Any more weaselly ways for you to make your point, or did you spend them all on this?

Watsit


reallyjustwantpr0n said:
An artist should be able to take their works down for whatever reason, even no reason. But it's still not copyright infringement. Training an NN is similar to creating a hash of an image, and would be considered transformative use under U.S. copyright law, the same way that hashing functions are fair use. I mean, a decent hash will have around 256 bits of information, whereas a trained NN will see at most maybe ~32 bits of data from any one image (assuming it's a very large network; smaller ones would have even less than that).

Look at it this way: if I took a single pixel from each of 1 million images to make a 1000x1000 image, that would be transformative, as no discernible part of the original images exists. You couldn't match a single pixel in my 1000x1000 image unless I told you where each came from. A NN is similar, only instead of a single pixel, it's a handful of bits spread over the entire NN matrix. It's transformative and fair use, even without explicit permission.

Except we're not talking about specific data like that. It's not about copying specific pixels. The image information is kept more abstracted, and the reconstructed image is derived from that information.

Moreover, a creative work is automatically copyrighted by its creator when it's made, and they retain all rights to the image and can dictate how it's used. Even when they publicly display it, it's still copyrighted, and anyone who receives a copy only has rights for personal use, unless otherwise stated. To make a comparison, a teacher couldn't trawl Twitter, grab images unbeknownst to the artists, and get paid to teach someone art using those images. If they wanted to use the images that way, they'd have to get permission, as the exclusive right to exploit the work for monetary gain remains with the copyright holder. For it to not be copyright infringement, you'd have to show that it falls under the fair use exception, for which the most pertinent test would be:

the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

- https://en.wikipedia.org/wiki/Fair_use#U.S._fair_use_factors

These companies like to act as if the images were used for non-profit research/education, but it's undeniable that they used the images to train an AI model that is currently being used for commercial purposes, and their commercial success will be heavily influenced by the images used for training. These companies are making money from other people's work without compensating them.

watsit said:
These companies like to act as if the images were used for non-profit research/education, but it's undeniable that they used the images to train an AI model that is currently being used for commercial purposes, and their commercial success will be heavily influenced by the images used for training. These companies are making money from other people's work without compensating them.

What does what S-AI does for profit have to do with random people using the models to make their own art for free, without using their service? And you have yet to provide proof that AIs 'copy' images. Your best attempt was trying to pass the img2img method off as the same as generating new images from scratch, which, although funny, at best shows your lack of understanding, which somehow is still prevalent even after a day-long discussion, or that you are simply trolling and don't care about arguing in good faith. Either way, your opinion can be disregarded here, since you don't base it on facts but rather on the outcome you want.

Watsit

Privileged

ckitt said:
No, it's because if they are unable to add said feature, it means they aren't coders.

No one's saying they're unable, I said they may not want to go through the hassle if it's not worth it.

ckitt said:
Yes, if you put in the same inputs, prompts, seeds, step count, clip skip, modulations, denoising, method, etc., you will get the same picture. What does that have to do with the model recreating the original artwork that was input into it?

Because you said that's "not how computers work when they generate images", when I said "computers have the ability to perfectly recall data and replicate results in a way that no human can". And that shows you're wrong. If a system has data in memory, it can use that exact same data again and again perfectly fine, unlike a human, whose memories fade and become warped and distorted over time. And if that system can produce an image, it can reproduce that same image again and again, unlike humans, who can't exactly replicate the same drawing they made before (the differences tend to grow with time). So if these systems contain data that can be used to recreate a copyrighted image they trained on, given some set of inputs, it wouldn't be just a one-off fluke; such a system can recreate the same copyrighted image again and again, as much as it wants.
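The determinism claim itself can be illustrated without Stable Diffusion at all; a toy sketch where `generate` is a made-up stand-in for any seeded image generator:

```python
import numpy as np

def generate(seed: int, steps: int = 4, size: int = 8) -> np.ndarray:
    """Stand-in for a seeded generator: a fixed seed plus fixed
    parameters always yield a bit-identical output array."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal((size, size))
    for _ in range(steps):
        img = np.tanh(img + 0.1 * rng.standard_normal((size, size)))
    return img

print(np.array_equal(generate(42), generate(42)))  # True
print(np.array_equal(generate(42), generate(43)))  # False
```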

ckitt said:
What does what S-AI do for profit have to do with random people using the models to make their own art for free without using their service?

Because the data set those people are using should not be publicly available, as it was trained on copyrighted material without permission, and said availability among random people allows Stability AI to profit off the copyrighted work of other people without compensation.

Updated

ckitt said:
What about people that don't care about the meaning of your picture but enjoy the style, and want to transmit their own meaning using that style of art? Artists do take style references from one another all the time.

If the style is a convincing and unhybridized fake then it's diluting the artist's visual brand. If it's not convincing then it wouldn't be acceptable to you, would it?

This isn't comparable to artists referencing, since that creates what is by default a hybrid style. Serious fakes are not produced in any real volume by the ordinary practice of referencing.

Obviously-hybridized styles would be less of an issue.

savageorange said:
If the style is a convincing and unhybridized fake then it's diluting the artist's visual brand. If it's not convincing then it wouldn't be acceptable to you, would it?

This isn't comparable to artists referencing, since that creates what is by default a hybrid style. Serious fakes are not produced in any real volume by the ordinary practice of referencing.

Obviously-hybridized styles would be less of an issue.

Well, usually, to get something worthwhile out of AI generation, you hybridize art anyway. Also, there is no such thing as a 'visual brand'; styles aren't copyrightable. Imagine if only Picasso were allowed to use cubism because he created the style.

watsit said:
No one's saying they're unable, I said they may not want to go through the hassle if it's not worth it.

Because you said that's "not how computers work when they generate images", when I said "computers have the ability to perfectly recall data and replicate results in a way that no human can". And that shows you're wrong.

You must be willfully twisting my words here; there is no other way around it. You clearly want to argue in bad faith. This was in reference to data stored on drives, not in neural networks. The video you sent about the latents not being random concerns a different kind of AI that is more like img2img, where you can take wholesale pictures and reproduce them. It can't pull whole pictures from the training set out of the trained model, which is your original point of contention. Stop trying to be a weasel.

watsit said:
If a system has data in memory, it can use that exact same data again and again perfectly fine, unlike a human where memories fade and become warped and distorted over time. And if that system can produce an image, it can reproduce that same image again and again, unlike humans who can't exactly replicate the same drawing they made before (the differences will tend to become greater with time). So if these systems contain data that can be used to recreate a copyrighted image it trained on given some set of inputs, it wouldn't be just a one-off fluke, such a system can recreate the same copyrighted image again and again as much as it wants.

Well, good thing they don't contain any data that can be used to recreate copyrighted images. As I said, with a lot of wrangling, it can recreate a similar image, much like how, in your example, a person misremembers the original. Which you seem to be fine with.

watsit said:
Because the data set those people are using should not be publicly available as they were trained on copyrighted material without permission, and said availability among random people allows Stability AI to profit off the copyrighted work of other people without compensation.

Again, wrong: what Stability does with the model and what random people who have that model do with it are two different things. S-AI can be liable for whatever it might be in the future, but the people using the model, especially after retraining it like most quality users have, are a completely separate matter, and you'd have to prove they used copyrighted art. What if I trained a model using my own art or non-copyrighted material? Could you prove otherwise, or will you take the stance of 'guilty until proven innocent'?

ckitt said:
Well, usually to get something worthwhile out of AI generation you hybridize art anyway. Also, there is no such thing as 'visual brand', styles aren't copyrightable.

What does that have to do with anything?
Things can be devalued without any viable legal recourse to punish the cause of that devaluation. That doesn't make the devaluation 'not real'.
Try saying 'visual brand isn't real' to an art director and see how long it takes them to laugh in your face.

Updated

Watsit

Privileged

ckitt said:
You must be willingly trying to twist my words here, there is no other way around it. You clearly want to argue in bad faith. This was in reference to data stored in drives, not in neural networks.

No one mentioned hard drives. It was in reference to how computers function vs humans. You may have misinterpreted what we were talking about, but what was being discussed was whether AI data training is the same as people using references, to which I said there was at least one difference: computer memory and behavior differ from human memory and behavior. And I showed that the AI is able to perfectly replicate the same image again and again, unlike a human, who will have inevitable variation each time.

ckitt said:
The video you sent about the latents not being random is used in a different kind of AI that is more like img2img where you can take wholesale pictures and reproduce them.

Watch the video. It's talking about Stable Diffusion itself, using the text-to-image functionality to produce a set of latents for an image, that can be used to generate the same image every time. He then goes on to talk about how you can perturb a set of predefined latents to make similar but different images, along with other tricks. Then after that, goes into img2img stuff at the end.

ckitt said:
Again, wrong, what stability does with the model, and what random people that have that model do with it, are two different things.

Tell me you don't understand how Stability AI is making money from this without telling me you don't understand how Stability AI is making money from this. People using Stable Diffusion, with its quality of output directly derived from its data set that was trained with copyrighted works without permission, benefits Stability AI in a way that copyright is supposed to prevent.

ckitt said:
What if I trained a model using my art or non-copyrighted material? Could you prove otherwise or will you take the stance of 'guilty until proven innocent'?

If you make a new model with art that's used with permission, then you would be free of this issue, as has been mentioned. We know the current data sets used by Stable Diffusion and Dall-E and the like included copyrighted material without permission, so they have this issue. Whether or not I can prove it isn't part of the discussion. e6 isn't accepting AI-generated art in any case, so this hypothetical is pointless.

watsit said:
Except we're not talking about specific data like that. It's not about copying specific pixels. The image information is kept more abstracted, and the reconstructed image is derived from that information.

Moreover, a creative work is automatically copyrighted by its creator when it's made, and they retain all rights to the image and can dictate how it's used. Even when they publicly display it, it's still copyrighted, and anyone who receives a copy only has rights for personal use, unless otherwise stated. To make a comparison, a teacher couldn't trawl Twitter, grab images unbeknownst to the artists, and get paid to teach someone art using those images. If they wanted to use the images in that way, they'd have to get permission, as the exclusive right to exploit the work for monetary gain remains with the copyright holder. For it to not be copyright infringement, you'd have to show that it falls under the fair use exception, for which the most pertinent test would be:

- https://en.wikipedia.org/wiki/Fair_use#U.S._fair_use_factors

These companies like to act as if the images were used for non-profit research/education, but it's undeniable that they used the images to train an AI model that is currently being used for commercial purposes, and their commercial success will be heavily influenced by the images used for training. These companies are making money from other people's work without compensating them.

For your first point, no image is reconstructed. That's what I'm trying to explain. The information in the original image is reduced to an irrecoverable amount of data within the NN. It's just not there any more; all those gradient descent operations removed it. If something similar did come out, it would be by chance, or by someone directing the NN at each step to produce a particular output in spite of its encodings (like manually tweaking the noise gradient, or constantly re-rolling the random noise till they got lucky). Again, the original image isn't in the NN; it's gone, at most only an "echo of an echo". You're literally going from millions of bytes to at most 4 bytes; that's like millions of pixels to 1 in terms of information content. You can't rebuild an image from 1 pixel (3-4 bytes of data).

Let me try one more example to help clarify this. Imagine scaling down a bitmap image of an orange cat to, say, a 2x2 pixel image, then expanding that back to the original dimensions. You'll see 4 colored blocks, each color roughly the average of its quadrant. Now, if another artist took that compressed image of those 4 blocks, and the same instructions (an orange cat), and drew on it using the colors as a very, very crude guide, smoothing out the regions and shaping them into something remotely similar to what you made, would you call that copying? The NN is doing even less than that with the original image.
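The downscale-and-expand example can be sketched directly; a toy 4x4 "image" with block-averaging standing in for the resize (the pixel values are arbitrary):

```python
import numpy as np

# A toy 4x4 grayscale "image".
img = np.arange(16, dtype=float).reshape(4, 4)

# Average each 2x2 quadrant to get a 2x2 thumbnail...
thumb = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# ...then expand back to 4x4: four flat blocks, one per quadrant average.
restored = np.repeat(np.repeat(thumb, 2, axis=0), 2, axis=1)

print(thumb)  # quadrant means: 2.5, 4.5, 10.5, 12.5
```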

In regards to copyright, you can still do transformative works, even with copyrighted source material. Hashing algorithms come to mind. The output of a hashing algorithm run on a copyrighted image would fall under fair use because it's transformative, and several businesses actually make money off of hashing, selling either the hashes directly or a service built on them. It's literally how "image 2 image" or image similarity search works. Google, for instance, will hash a copyrighted image they have no rights to beyond fair use, save that hash, and use it to provide you an image search function while making money from it, either from ads or product suggestions/click-throughs.
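Image-similarity hashing of the kind described can be sketched with a toy "average hash" (aHash); this is an illustrative stand-in, not Google's actual algorithm:

```python
import numpy as np

def average_hash(img: np.ndarray) -> int:
    """Toy perceptual hash of a 64x64 grayscale image: block-average
    down to 8x8, then set one bit per cell above the overall mean."""
    thumb = img.reshape(8, 8, 8, 8).mean(axis=(1, 3))
    bits = (thumb > thumb.mean()).ravel()
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = img + rng.normal(0.0, 0.01, img.shape)  # slightly perturbed copy

# Near-duplicates hash to nearly the same 64 bits, yet those 64 bits
# are far too little information to rebuild the 64x64 image.
distance = bin(average_hash(img) ^ average_hash(noisy)).count("1")
```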

I mean, even Blanch v. Koons is almost a perfect analog of this whole process, where Koons used a copyrighted image in a larger collage and it was found to be fair use. That's literally what the NN is doing, only instead of a roughly "50x50 pixels" compression, it's closer to 1 pixel. Why would Koons be fair use, but not this?

Also, fair use doesn't have to be free, though commercial uses are more stringently judged.

PS. Thanks for chatting, I like talking about this stuff.

savageorange said:
What does that have to do with anything?
Things can be devalued without any viable legal recourse to punish the cause of that devaluation. That doesn't make the devaluation 'not real'.
Try saying 'visual brand isn't real' to an art director and see how long it takes them to laugh in your face.

I apologize; this was my fault for not being clear enough. I didn't mean that visual brand isn't 'real', just that you can't copyright it, because it's just a 'style'. It's why knockoff brands with similar logos and name fonts, with just one of the letters being different, like 'adipas', can still function within legal grounds.

watsit said:
We know the current data set used by Stable Diffusion and Dall-E and the like were using copyrighted material without permission, so have this issue.

The model trained from the data set is using copyrighted material; the data set itself isn't. The data set is a database of publicly available urls and the tags associated with each of those urls.

This may seem an academic distinction, but it's an obvious and IMO effective way to reduce their possible legal liability.

watsit said:
No one mentioned hard drives.

watsit also said:
You think these systems are working with faulty memory modules?

Either you are talking about the saved model, which is on the hard drive, or the VRAM it runs in (which I doubt). Neither contains the images that could constitute copyright infringement. The closest you got to this is putting a copyrighted image through an img2img model and running it without any noise/randomness, which will output... yeah, THE SAME IMAGE; all you did is literally move the image from input to output, so of course the same image is reproduced. That still has nothing to do with the discussion at hand and shows your lack of understanding.

watsit said:
You may have misinterpreted what we were talking about, but what was being discussed was whether AI data training is the same as people using references, to which I said there was at least one difference with how computer memory and behavior was different from human memory and behavior, and showed that the AI is able to perfectly replicate the same image again and again, unlike a human which will have inevitable variation each time.

Sad day for you: that's not how NNs work, so that point is moot. They don't save the image in any memory and cannot replicate it by recalling it like you could by opening a picture in Photoshop. The only way they can replicate an image that THEY GENERATED FROM SCRATCH is by putting in the same parameters that generated it the first time. This doesn't mean it can regenerate content it was trained on; it has nothing to do with that. How many times, and in how many different ways, does this need to be explained to you? Because frankly, this is the last time I'm bothering with it.

watsit said:
Watch the video. It's talking about Stable Diffusion itself, using the text-to-image functionality to produce a set of latents for an image, that can be used to generate the same image every time. He then goes on to talk about how you can perturb a set of predefined latents to make similar but different images, along with other tricks. Then after that, goes into img2img stuff at the end.

I think I understand what he is discussing in the video better than you. Those latents are formed by the neural network, not by a single image. They don't encode a single image's information, and all they can do is re-generate the same image that would come from the NN, not the training set itself. The only way to do that kind of image 'copying' is to run it through img2img without any noise.

watsit said:
Tell me you don't understand how Stability AI is making money from this without telling me you don't understand how Stability AI is making money from this. People using Stable Diffusion, with its quality of output directly derived from its data set that was trained with copyrighted works without permission, benefits Stability AI in a way that copyright is supposed to prevent.

I will tell you that I don't care what S-AI does. They released free code that lets me run and train models. I don't have to use THEIR model (and I am not). And most likely, in the near future, everyone will be creating their own personalized models on a beefy graphics card right at home. That said, every artist has benefited from copyrighted work just by looking at it and seeing new things they can implement in their own artwork, which they also sell for profit.

watsit said:
If you make a new model with art that's used with permission, then you would be free of this issue, as has been mentioned. We know the current data set used by Stable Diffusion and Dall-E and the like were using copyrighted material without permission, so have this issue. Whether or not I can prove it isn't part of the discussion. e6 isn't accepted AI generated art in any case, so this hypothetical is pointless.

How do you know which dataset each individual is using? You just know 1 or 2 models and you think that's it? There are literally 20 models I can name already out, and their creation is only accelerating as more people start using them and learning the ins and outs of creating their own. Some are even selling their models, so again, you show a clear lack of understanding. Anyway, I am done discussing things you are clearly clueless about and have not even the slightest interest in learning. As you said, NotMeNotYou made the policy clear, and for now I agree with the admin; in the future it will most definitely change, and your opinion will be even more irrelevant than it is now. So have fun with that. Let's have a really fun chat in about a year, when AI art will be soaring.

savageorange said:
The model trained from the data set is using copyrighted material; the data set itself isn't. The data set is a database of publicly available urls and the tags associated with each of those urls.

This may seem an academic distinction, but it's an obvious and IMO effective way to reduce their possible legal liability.

It's not even that: those datasets don't really include tags either; tags just trigger certain nodes in the network that then produce the image. The tags themselves don't exist in the model, and neither does the image in and of itself, just mathematical aggregates of them. That's why if you type 'explicit' and 'explict' the AI won't have much of an issue and will most likely produce porn regardless of which word you use; even if the second one wasn't written anywhere in the dataset, it will trigger roughly the same nodes.
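The typo-tolerance point is about near-identical strings landing on near-identical representations. Real text encoders like CLIP's use learned subword (BPE) tokens, but a toy character-bigram overlap shows the intuition:

```python
def bigrams(word: str) -> set:
    return {word[i:i + 2] for i in range(len(word) - 1)}

def overlap(a: str, b: str) -> float:
    """Dice coefficient over character bigrams: 1.0 means identical pieces."""
    x, y = bigrams(a), bigrams(b)
    return 2 * len(x & y) / (len(x) + len(y))

print(round(overlap("explicit", "explict"), 2))  # 0.77: the typo shares most pieces
print(overlap("explicit", "landscape"))          # 0.0: no shared pieces
```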

I think people have a hard time understanding the difference between a model's training and the final resulting model.

Watsit

Privileged

reallyjustwantpr0n said:
For your first point, no image is reconstructed.

It is, actually. The aforementioned video touches on it lightly at the beginning, and here's another video that goes over the basic process.

For a broad overview, what happens is you take some input image, encode it to latent space, then systematically add noise to it one step at a time (storing information about how the latents change from the noise), until it becomes complete noise. Then you reverse the process, starting with the noise and working backward, systematically removing the noise (using the stored information) until you get back the latents, which can be decoded back to an image.

The data set is the information produced by that systematic noise application to latent images. It's doing the first half of the process ahead of time, creating a large database of this noise information from many images. When you generate an image from Stable Diffusion or Dall-E or the like, you're doing the second half of this process. You provide a set of latents (generated from an image or text prompt) and it starts as noise. Then it reconstructs an image by removing the noise in a step-wise fashion, using the training data and user inputs as guides for what the output should be, until it's fully denoised and the latents are decoded back to an image. There's a lot more to it, and a lot of details were skipped or glossed over for brevity, but these systems actually are reconstructing an image from noise, just in a very clever way that allows for a lot of adjustment on the user side.
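The add-noise-then-reverse structure can be sketched in toy form. Note the big cheat: here the exact noise added at each step is recorded and replayed, whereas a real diffusion model trains a network to predict that noise, and the schedule scales the signal rather than simply adding to it:

```python
import numpy as np

rng = np.random.default_rng(0)
steps = 10
x0 = rng.standard_normal((4, 4))   # stand-in for an image's latents

# Forward process: add noise one step at a time, recording each step.
noises = [rng.standard_normal((4, 4)) for _ in range(steps)]
x = x0.copy()
for eps in noises:
    x = x + eps                    # after all steps, x is mostly noise

# Reverse process: strip the noise step by step to get the latents back.
for eps in reversed(noises):
    x = x - eps

print(np.allclose(x, x0))  # True
```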

reallyjustwantpr0n said:
In regards to copyright, you can still still do transformative works, even with copyrighted source material.

It has to be transformative, though. I'd argue using the image to train an AI to understand details of said image so that it can make something similar at user request isn't transformative. As for the outputs, that will depend on the actual output, taken on a case-by-case basis.

reallyjustwantpr0n said:
It's literally how "image 2 image" or image similarity search work. Google, for instance, will hash a copyrighted image they have no rights to beyond fair use, save that hash and use it to provide you an image search function while making money from it. Either from ads or product suggestions/click through.

Google has been sued over its image search (and book search, and news feed, ...) for copyright infringement. But in most cases, these only provide a small thumbnail of the original image, linking to the source where you can see the full thing. They don't act as a replacement and don't have a strong impact on the market for the original work. These art generators, by contrast, are being promoted as a way for everyone to produce art on these systems, rather than going to the original art the generators depended on, or to the original artists they might otherwise have commissioned or hired to produce the art. These systems are using artists' copyrighted works, without permission, to drive people away from those very same artists and toward their systems instead. If that doesn't fall under unfair competition, I don't know what does.

reallyjustwantpr0n said:
Why would Koons be fair use, but not this?

Because Koons' collage wasn't a substitute for the original image, Koons' collage changed the purpose of the image to make a different statement than that of the original image. The collage was making a satirical commentary on the very image used in the collage. (Here are the details of the case, if anyone's interested).

This is markedly different, as these systems are acting as replacements for the original, not making any commentary or critique of the original, and is driving prospective users away from the original, depriving the original creator of the benefits from their work.

reallyjustwantpr0n said:
PS. Thanks for chatting, I like talking about this stuff.

Ditto; these discussions have pushed me to learn more about this stuff, and the underlying systems are quite interesting. It's just a shame that these prominent systems' source-image collection and training had to be done with such questionable ethics (and it'll probably be a while before the law catches up).

earlopain said:
Jesus, get a room

And with that, I'll stop. Sorry.

Updated

ckitt said:
It's not even that: those datasets don't really include tags either; tags just trigger certain nodes in the network that then produce the image. The tags themselves don't exist in the model, and neither does the image in and of itself, just mathematical aggregates of them. That's why if you type 'explicit' and 'explict' the AI won't have much of an issue and will most likely produce porn regardless of which word you use; even if the second one wasn't written anywhere in the dataset, it will trigger roughly the same nodes.

I think people have a hard time understanding the difference between a model's training and the final resulting model.

To be clear I wasn't saying that the model 'contained' copyrighted images, but it does use them in the training of the model. I personally don't see how the model could be in infringement of copyright.

I'm not really sure how to understand 'the tags don't really exist', though. Does it involve anonymizing them and getting the model to infer the most appropriate description from the collective text of the pages with that id? I'm not sure that's even a possible operation with this kind of NN.

savageorange said:
Strawmanning.
I certainly don't agree with most of NMNY's arguments (as you can see from this thread), and I take an incorrect technical understanding very seriously. Nonetheless, his argument as I remember it against accepting AI-generated art is simple, and includes the word 'antithetical', which is why I used it.
I admit I can't find it in this thread, so perhaps it was in a previous version of the uploading guidelines.

That's fair. That was rude of me and I apologize.

watsit said:

...Except if these systems are found to be infringing, sites hosting such artwork will become liable if they don't immediately deal with it themselves once known. If someone gives you illicit goods, and a court finds they had no right to give it to you, you don't get to keep it just because you didn't know at the time...

This does raise an interesting question: If we object to AI art on account of its status as Copyright Infringement, then should we not also object to the creation of fan works that use copyrighted characters? I know for a fact that every piece of explicit Pokémon fan art on the site was created without the knowledge or consent of the rights holder, and much of it is more overtly monetized than AI art tends to be (fan artists frequently upload dakimakura artworks featuring copyrighted characters which they openly intend to have printed and sold as merch, for example). Does that not also put the site at risk of liability if, say, Nintendo decided to litigate?

If so, then why should one form of copyright infringement be permitted on the site while another is disallowed?

The point I'm making here is: Even if we were to agree that AI art is definitely, 100% copyright infringement, then it still wouldn't necessarily be sufficient grounds to exclude it from the site. Lots of content on e621 is infringing.

Of course, this is kind of a moot point since the policy has been updated and the real reason for the ban is now much clearer, but it's still a question I'm interested in and I'm keen to hear other peoples' answers.

thousandfold said:
You're right, I can't, what I'm saying is only based off an educated assumption. From what I've seen so far, the majority of artists are against their work being used for this kind of thing.

Personally, I don't think I care if someone puts my art through an AI. But as a whole, I do care about people using other artists' work to train an AI, especially when people are doing it against the artist's will. It would be better if people at least asked the artists before using their art to train an AI.
However, it is not necessary to ask permission to use someone else's artwork as inspiration for your own art, because it's very easily assumed that the artist will be okay with it, since pretty much every artist has always been okay with it.

I don't think that you can always reasonably assume that artists will be okay with someone taking inspiration from their works in order to make their own. What if the person taking inspiration creates something that is damaging to the original rights holder? Someone might say "Pretty much all artists are okay with inspiring others, so I can easily assume that the artists at Game Freak will be alright with me taking inspiration from their avian Pokémon designs to draw big-breasted anthro versions of those Pokémon! Therefore, I don't need to ask their permission to do so.", but that claim would be incorrect: Nintendo will absolutely send Cease and Desist letters to people who make horny fanart of their IPs. They've done so in the past.

Similarly, most copyright holders could be easily assumed to categorically reject any requests for permission to draw horny fanart of their intellectual property. So horny fanart, despite being inspired by an artist's work, must (in the majority of cases) be created without the artist's knowledge or consent. Does that make horny fanart unethical? Or is it a valid art form in spite of this?

Updated

Watsit

Privileged

spacies said:
This does raise an interesting question: If we object to AI art on account of its status as Copyright Infringement, then should we not also object to the creation of fan works that use copyrighted characters? I know for a fact that every piece of explicit Pokémon fan art on the site was created without the knowledge or consent of the rights holder, and much of it is more overtly monetized than AI art tends to be (fan artists frequently upload dakimakura artworks featuring copyrighted characters which they openly intend to have printed and sold as merch, for example). Does that not also put the site at risk of liability if, say, Nintendo decided to litigate?

The copyrightability of characters and species isn't so straightforward. In a general sense, you can't copyright the idea underlying a given character or species, only particular expressions of that character/species fixed to a tangible medium (e.g. you can copyright a particular drawing of the thing, but that doesn't give you copyright over other drawings other people make of that thing). To be eligible for copyright, a character has to have a high degree of particularity, a much higher standard than a regular copyright, so it won't be clear whether a given character or species crosses that threshold if it hasn't already been litigated. Even so, most companies, even Nintendo/Pokemon, do allow fan art, on the condition you take it down when asked (which e6 will do; that's what the takedown system is for). As for monetized work, e6 doesn't sell anything related to the images here. If someone is selling merch of protected characters/species elsewhere without permission, that's for that other place to deal with; e6 wouldn't be breaking that rule.

ckitt said:
Those platforms already exist and they can be far more toxic towards original artists exactly because of the treatment AI art gets in places like here, by people like you and watsit. e6 has a chance to do it 'right' and respectfully towards artists and find a compromise but you are literally here arguing against that.

The only platforms I know of that allow it are places like DeviantArt and Pixiv. Can you elaborate on what you mean when you say e6 has a chance to handle this right and respectfully? What's the right way to do it?

ckitt said:
Totally asinine take, how many artists have you asked? 1? 2? 5? What percentage of the 'total artists' in the world is that? Why would they care how much effort someone gives to their pictures?

I've seen over a hundred artists' takes and reactions, and almost all of them boiled down to hating AI because of how little effort it takes to make AI art compared to the effort required to learn digital/traditional art.
Most of them have likely spent years learning how to draw in order to create what they want and have it look good; with AI, people can now do it in zero. As time goes on, this kind of thing will become even more accessible to those who have no practice in art at all. As a result, this is less of a tool for artists and more of a massive shortcut that lets people skip years of dedication to a craft. A comparison would be an item in a video game that took you 3000 hours of gameplay to obtain, and then suddenly the devs make the item almost free to get, destroying its value and making you feel like you wasted your time. It's no surprise that artists aren't happy with this.

Of course, this is only one of many reasons why artists are against AI, but its the one I've seen the most.

ckitt said:
E6 can do it better and NotMeNotYou gave a good idea of how to do it. By painting over the art you fix the errors and you can remove the 'low effort' part out of the equation

Depends on how much AI is involved in your style. Incorporate too much and you may end up doing yourself more damage than good. It's one of the reasons why I think AI isn't that useful a tool for artists, aside from inspiration and references/backgrounds.

spacies said:
What if the person taking inspiration creates something that is damaging to the original rights holder?

Never seen this happen before, and I can't imagine how this could even be possible.

spacies said:
Someone might say "Pretty much all artists are okay with inspiring others, so I can easily assume that the artists at Game Freak will be alright with me taking inspiration from their avian Pokémon designs to draw big-breasted anthro versions of those Pokémon! Therefore, I don't need to ask their permission to do so.", but that claim would be incorrect: Nintendo will absolutely send Cease and Desist letters to people who make horny fanart of their IPs. They've done so in the past.

Similarly, most copyright holders could be easily assumed to categorically reject any requests for permission to draw horny fanart of their intellectual property. So horny fanart, despite being inspired by an artist's work, must (in the majority of cases) be created without the artist's knowledge or consent. Does that make horny fanart unethical? Or is it a valid art form in spite of this?

Someone drawing horny fan art of someone's character is not the same as taking inspiration from another artist's style and applying parts of it to your own. Yes, it is unethical to draw horny fan art of someone's OC without asking the owner first, and I would argue that it's not unethical to draw horny fan art of a Pokemon, simply because it doesn't negatively affect anyone. It's not like people drawing a bunch of Gardevoir porn is gonna hurt Game Freak's feelings; they're not gonna care because there's no personal attachment, nor does it even cause any damages.

savageorange said:
To be clear I wasn't saying that the model 'contained' copyrighted images, but it does use them in the training of the model. I personally don't see how the model could be in infringement of copyright.

I'm not really sure how to understand 'the tags don't really exist', though.. does it involve anonymizing them and getting the model to infer the most appropriate description from the collective text of the pages with that id? Not sure if that is even a possible operation with this kind of NN.

So, input nodes get triggered with 'tokens': the words you put in the prompt, broken into parts that those neurons have learned to distribute through the rest of the network. The output those nodes then give corresponds to aggregates of the images they were trained on that relate to those tags. It's kind of like how your brain doesn't store letters or images but their concepts, which you can then use to remember them (roughly).
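As a rough, hypothetical illustration of that first step: the made-up vocabulary and greedy splitting rule below only sketch how a prompt might be broken into subword tokens before anything reaches the network. Real models use a learned tokenizer with tens of thousands of entries; the `VOCAB` table and `tokenize` function here are invented for demonstration.

```python
# Toy sketch (not a real model's tokenizer): break a prompt into subword
# token IDs using a tiny, made-up vocabulary and greedy longest-match.
VOCAB = {"fox": 1, "anthro": 2, "paint": 3, "ing": 4, "<unk>": 0}

def tokenize(prompt):
    """Greedily split each word into the longest known subword pieces."""
    tokens = []
    for word in prompt.lower().split():
        i = 0
        while i < len(word):
            # try the longest piece starting at position i first
            for j in range(len(word), i, -1):
                piece = word[i:j]
                if piece in VOCAB:
                    tokens.append(VOCAB[piece])
                    i = j
                    break
            else:
                # no known piece matches: emit an unknown token, advance one char
                tokens.append(VOCAB["<unk>"])
                i += 1
    return tokens

print(tokenize("anthro fox painting"))  # → [2, 1, 3, 4]
```

Note how "painting" splits into "paint" + "ing": the network never sees whole words, only these learned fragments, which is roughly what the post above means by "words broken in parts".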

thousandfold said:
The only platforms i know of that allow it are places like deviantart and pixiv. Can you elaborate on what you mean when you say E6 has a chance to handle this right and respectfully? Whats the right way to do it?

Go on r34 right now, use the ai_generated tag, and watch the shitshow. That kind of thing, where AI art is pushed without any care for quality, I don't like. e6 can do better vetting and let 'less' AI art in; maybe let people upload 2 or 3 pieces a day at most so they're forced to work more on them. NotMeNotYou has a good idea with 'painting over' the art to ensure some form of quality. I'd argue that in the future that won't be needed, but for now I support that policy.

thousandfold said:
I've seen over a hundred artists' takes and reactions, and almost all of them boiled down to hating AI because of how little effort it takes to make AI art compared to the effort required to learn digital/traditional art.
Most of them have likely spent years learning how to draw in order to create what they want and have it look good; with AI, people can now do it in zero. As time goes on, this kind of thing will become even more accessible to those who have no practice in art at all. As a result, this is less of a tool for artists and more of a massive shortcut that lets people skip years of dedication to a craft. A comparison would be an item in a video game that took you 3000 hours of gameplay to obtain, and then suddenly the devs make the item almost free to get, destroying its value and making you feel like you wasted your time. It's no surprise that artists aren't happy with this.

Of course, this is only one of many reasons why artists are against AI, but its the one I've seen the most.

I mean, what you describe here is literally gatekeeping. Older artists working in traditional media would say the same thing about artists using digital art (and they have). They joke about how artists just copy and paste stuff around instead of redrawing over and over, how software suites are 'easy mode', and how forgiving stabilization on drawing tablets is to line quality. So, if that's your point, I think those people can easily be ignored. Even if I were to believe that you know the opinion of 100-1000 artists, do you think that's representative of the millions of people producing art? Most people I know care about what THEY can make with art, not what others do.

thousandfold said:
Depends on how much AI is involved in your style. Incorporate too much and you may end up doing yourself more damage than good. It's one of the reasons why I think AI isn't that useful a tool for artists, aside from inspiration and references/backgrounds.

That would be true for someone attempting to learn digital art who then only uses AI to create pictures. Why do you care about that person, though? Are you their parent? I think you're just using this as an excuse to make a point against AI, not out of personal care for the people who face this issue.

thousandfold said:
Never seen this happen before, and I can't imagine how this could even be possible.

Someone drawing horny fan art of someone's character is not the same as taking inspiration from another artist's style and applying parts of it to your own. Yes, it is unethical to draw horny fan art of someone's OC without asking the owner first, and I would argue that it's not unethical to draw horny fan art of a Pokemon, simply because it doesn't negatively affect anyone. It's not like people drawing a bunch of Gardevoir porn is gonna hurt Game Freak's feelings; they're not gonna care because there's no personal attachment, nor does it even cause any damages.

While I'm totally against this, try sending an e-mail to Nintendo showing them e621 and telling them how your kid found porn of their characters and how annoyed you are as a parent, and see how fast NotMeNotYou gets a C&D letter from them. Nintendo has definitely sued for less, and other studios have done the same over their characters. Remember Blizzard and the Overwatch porn purge? One could argue companies do more damage to themselves going after said art forms, but ethically you have no ground to tell them not to. And if you think that's not damaging to them, you shouldn't think people creating their own art in aggregates of styles similar to other artists' would be damaging to those artists either.

watsit said:

...

It has to be transformative, though. I'd argue using the image to train an AI to understand details of said image so that it can make something similar at user request isn't transformative. As for the outputs, that will depend on the actual output, taken on a case-by-case basis.

...these art generators, they're being promoted as a way for everyone to produce art on these systems, rather than going the original art that these art generators depended on, or looking to the original artists they might otherwise have commissioned or hired to produce the art. These systems are using artists' copyrighted works, without permission, to drive people away from those very same artists, and use their systems instead. If that doesn't fall under unfair competition, I don't know what does.

...these systems are acting as replacements for the original, not making any commentary or critique of the original, and is driving prospective users away from the original, depriving the original creator of the benefits from their work.

I'm not sure they're being specifically promoted as a means of replacing artists outright. They could just as easily be seen as potent force multipliers when in the hands of professional artists.

I do see your point, but you're kind of comparing apples to orchards, here. If we're going by the principles outlined in Fair Use, which is concerned with...

"The effect of the use upon the potential market for or value of the copyrighted work",

(emphasis mine)

...then the effect on the demand for artists' labour is actually immaterial to whether or not it counts as infringement: If you consider the AI model to be an infringing work, then the appropriate point of comparison would be to consider its effect on the market value of the specific, individual copyrighted works of art which the model was trained on.

In light of this, I'd make the case that the existence of AI art doesn't lower the market value of these specific copyrighted works, on the following grounds:
1. Digital artwork is, by nature, infinitely reproducible.
2. Access to the trained works was not limited, as they were hosted on websites freely accessible to the general public. They were not scanned from physically scarce works or taken from behind paywalls.
3. Therefore, a common property of the trained works is that anyone can have as many copies of them as they want, for free.
4. In order to claim copyright infringement, you'd have to prove that the existence of the AI model reduces the market value of the specific, individual works used in the training of that model, per the wording of Fair Use.
5. What is the market value of an infinitely reproducible, freely available artwork?

Even if the market value of such works is non-zero, it cannot be proven that the existence of AI models actually reduces the market value of the original works: Even with physically scarce works, the ability to produce artworks in the style of, say, Van Gogh in no way makes Van Gogh's original works any less valuable, just as the ability to have 20 exact duplicate printouts of the Mona Lisa hanging on one's wall doesn't make the original any cheaper.

Of course, I recognize that traditional artists aren't making the case that AI models are replacing their works; but rather that they are replacing artists themselves. I just don't believe that this, in-and-of itself, is sufficient legal grounds to claim copyright infringement. In my opinion, artists who want to protect themselves against lost income due to AI art will be better served by abandoning arguments rooted in intellectual property law.


ckitt said:
I mean, what you describe here is literally gatekeeping.

Yup, and there's nothing wrong with a certain amount of gatekeeping. For instance, artists gatekeep against people who trace and I don't see any problem with that.

ckitt said:
Older artists working in traditional media would say the same thing about artists using digital art (and they have). They joke about how artists just copy and paste stuff around instead of redrawing over and over, how software suites are 'easy mode', and how forgiving stabilization on drawing tablets is to line quality. So, if that's your point, I think those people can easily be ignored. Even if I were to believe that you know the opinion of 100-1000 artists, do you think that's representative of the millions of people producing art? Most people I know care about what THEY can make with art, not what others do.

The difference is that if they say digital takes no effort compared to traditional, they're just plain wrong, because they have no clue how much effort it takes to be a digital artist. The truth is that it's not far off from traditional (aside from needing supplies and preparation).
Perhaps the artists you know may not care what others do, but what happens when it negatively affects them? Hypothetically, what would they do if all of a sudden they had nowhere left to share their work, because nobody cares about anything they make amid the infinite number of amazing artworks on the internet? All because AI is so easily accessible to everyone, as opposed to dedicating years to learning how to draw. How does an artist compete with that? How does "AI as a tool" matter anymore when, if it gets to that point, it's no longer a tool but a replacement for artists? This is one of the reasons why gatekeeping AI might be beneficial overall for artists and perhaps even the community itself. Transitioning from traditional to digital didn't cause an infinite influx of images on the internet, because digital art didn't remove the dedication required to make what artists make today; it wasn't a threat to the value of art. AI, however, has a good chance of removing the barrier that's protecting the value of art.

But hey, if most people still value handmade work over AI-assisted and AI-made art in the future, then all of what I said won't be relevant.

thousandfold said:
Yup, and there's nothing wrong with a certain amount of gatekeeping. For instance, artists gatekeep against people who trace and I don't see any problem with that.

While I agree a certain amount of gatekeeping keeps the community healthy, there has to be a good, logical, concrete reason behind it, and I don't see one with AI (when it's properly implemented).

thousandfold said:
The difference is that if they say digital takes no effort compared to traditional, they're just plain wrong, because they have no clue how much effort it takes to be a digital artist. The truth is that it's not far off from traditional (aside from needing supplies and preparation).
Perhaps the artists you know may not care what others do, but what happens when it negatively affects them? Hypothetically, what would they do if all of a sudden they had nowhere left to share their work, because nobody cares about anything they make amid the infinite number of amazing artworks on the internet? All because AI is so easily accessible to everyone, as opposed to dedicating years to learning how to draw. How does an artist compete with that? How does "AI as a tool" matter anymore when, if it gets to that point, it's no longer a tool but a replacement for artists? This is one of the reasons why gatekeeping AI might be beneficial overall for artists and perhaps even the community itself. Transitioning from traditional to digital didn't cause an infinite influx of images on the internet, because digital art didn't remove the dedication required to make what artists make today; it wasn't a threat to the value of art. AI, however, has a good chance of removing the barrier that's protecting the value of art.

But hey, if most people still value handmade work over AI-assisted and AI-made art in the future, then all of what I said won't be relevant.

I think that, much like traditional art, digital art isn't going anywhere even post-AI. Yes, people will still value that something is handmade. I also believe there are some things, like subversion, that AI might never be 'good' at unless wrangled really hard by the prompter, and at that point you can argue that AI art takes as much effort as traditional art; only instead of hand-eye coordination, it takes understanding of prompting and of how the model interprets it to push out a properly 'deep' piece of art. I'm sure you can tell the difference between AI art made from half a line of prompt and AI art made from a whole block of text. Maybe not now, but I'm sure in the future that will make a huge difference.

In the end, I'm sure we can agree that art is a medium for sharing ideas between us. What actually matters most are those ideas: the characters we want to show, that people might identify with, and so on. Art comes second to that (even if it can be stunning in and of itself). I think AI will fill the gap where people just want to push a certain concept/idea/character, while 'real' art will focus on storytelling and breaking new ground, developing new techniques. In an age where AI can copy said techniques, I think we can both agree that would make such art even more valuable than it already is. Now it's just a gimmick, but in the future artists will push for it because it makes much more money, and who knows, maybe getting your name in "in the style of X" AI prompts will be a badge of honor.

All in all, I think you're wrong about artists getting stomped by AI. If any non-artist can create 'stunning' artwork with AI, you need to take into account that artists will also have access to it and will probably be far more adept at using it.

ckitt said:
I think AI will fill the gap where people just want to push a certain concept/idea/character, while 'real' art will focus on storytelling and breaking new ground, developing new techniques. In an age where AI can copy said techniques, I think we can both agree that would make such art even more valuable than it already is. Now it's just a gimmick, but in the future artists will push for it because it makes much more money, and who knows, maybe getting your name in "in the style of X" AI prompts will be a badge of honor.

I'm a bit confused by pretty much that entire sentence, and maybe that's a result of my somewhat lacking education on AI. What new techniques could they possibly invent that would increase the value of their work? How can any of this make an artist a lot more money?

ckitt said:
All in all, I think you're wrong about artists getting stomped by AI. If any non-artist can create 'stunning' artwork with AI, you need to take into account that artists will also have access to it and will probably be far more adept at using it.

Except there are limits to what an artist can do with AI. Especially if AI evolves further, it's not like anything an experienced artist can do using AI would be all that much better than what an inexperienced artist can do with it.

thousandfold said:
I'm a bit confused by pretty much that entire sentence, and maybe that's a result of my somewhat lacking education on AI. What new techniques could they possibly invent that would increase the value of their work? How can any of this make an artist a lot more money?

AI can be used to automate functions that normally take quite a while to do manually; linework and shading come to mind. Increased speed, increased output, more money. Bingo, bongo, Bob's yer uncle: you make bank by letting a robot do the busywork. It's also useful for prototyping with full-on generation, which takes the "envision it" stage out of figuring out the drawing.

pandorasrabbithole said:
I follow AI pretty closely. There is no upper limit, and we are currently in the MS Paint phase of what will eventually become Photoshop. Only in the world of AI, years of progress are shrunk down to months or weeks.

Also, if you wanted to, you could make an imitation of anyone's style today that would fool most people, just by finetuning a model.

I think we should stop it.