Topic: How to tell if an art is AI generated

Posted under Art Talk

A list of common examples is needed; some image examples of AI-generated art have been deleted, by the way.

Visit https://e6ai.net/. Most of the stuff over there would not be accepted over here.

It can be really tough to distinguish between human and AI art, but common signs include unusual anatomy (e.g., weird facial structure, eyes, hands), impossible geometry (e.g., weird object sizes/proportions in the background, weird shadows/lighting), illegible text in writing and signatures, etc.

  • Foggy inconsistent eyes
  • Questionable anatomy (e.g., 6 fingers on one hand, multiple sets of teeth, missing limbs, etc.)
  • Stolen art styles (Some more obvious than others)
  • Inconsistent patterns and designs
  • Illegible backgrounds and text (includes signatures)

That's all that I've noticed so far

e6ai.net has plenty of examples but usually you will see things like:
- Hands and/or eyes are messed up/look wrong. Mouths can be pretty bad as well.
- Bodies can look flat instead of furry, particularly on models that weren't designed for it (pretty much all of them)
- Odd blurring and/or distorting in places
- Backgrounds not matching the generated character, art-style-wise
- Backgrounds looking very samey between art and "artists"
- "Clipping" issues with hair, clothing, ears, etc.

The number one thing to look out for is continuity of background when it passes behind a character. No model has yet learned how to draw a partially obstructed line extending from one side of a foreground object to the other. Quite often you'll see images with a character standing in the centre, and the left side of the image looks like it was taken from a completely different angle to the right.

Don't forget that a lot of A.I.s use an invisible watermark you can view.

It depends on the image; a lot of really bad anatomical issues can be hidden by flashy rendering. After looking at what e6ai has to offer, it's impressive at a distance, but up close the flaws become more and more obvious.

  • hand anatomy... just hand anatomy, arms too - AI-generated "bad hands" do not look like badly drawn hands, they're nonsensical: think strange creases, fingers melting into the palm... a lot of stubby arms, or arms that "break" when partially covered
  • eyes are almost always inconsistent (one eye will look "fine", the other will be a blob), have melty pupils and irises, big wide glassy eyes
  • lack of expressiveness (subtle smiles are 99% of furry AI images), open mouths end up looking very obviously AI generated with a lack of depth, missing teeth, etc.
  • nonsense perspective, proportions or scale
  • odd composition or framing
  • inconsistent texture/completely flat, smooth rendering, overly blurry or soft
  • clothing is rarely properly rendered, with details that clash or are incoherent, e.g. a character wearing a turtleneck collar without a sweater that turns into a gold-plated breastplate
  • melding between elements of the image (like a character's arm wraps blending into her forearm, or a character's tail and hair becoming indistinguishable behind them)
  • over-saturated colours
  • random lines, smears, blobs of colour


On top of all the visual clues, it's almost always uploaded by somebody who has a tiny number of followers on their galleries and uploads work way too fast for any real human to be producing art of that quality. That, or they have an obvious jump from MS Paint art to sudden hyper-realism.

Nobody here mentions the most important points; everybody just mentions the messed-up hands time and time again. Hands are hard to draw, and many serious artists will fuck them up at least once. Unless it's hilariously messed up, it is impossible to know whether the hands are badly drawn because of AI or just because the artist is bad at anatomy. AI doesn't care anyway; it can do hands just fine if given enough information to work with, and it doesn't change the fact that a real con man will most likely retouch the image to fix imperfections. https://www.reddit.com/r/StableDiffusion/comments/z7salo/with_the_right_prompt_stable_diffusion_20_can_do/

By the way, this sort of mindset is why we have artists accusing other artists of using AI. https://www.reddit.com/r/SubredditDrama/comments/zxse22/rart_mod_accuses_artist_of_using_ai_and_when/
If a piece looks weird, maybe it is the artist's doing and they simply have little understanding of the hows and whats of anatomy.
https://www.kenhub.com/en/success-stories/kiron-how-hard-is-anatomy

These are the only points that matter when deciding whether an artwork was made by an AI:
#1. The colors blend together unnaturally for a digital piece: You will often notice that the colors come together strangely on the borders of characters and objects; they look like they were painted with watercolor or something to that extent, despite being made digitally. When a drawing is made digitally the entire piece is 'flat'; you should not see weird lines or fuzz, smudges, drips, or bubbles on the drawing, specifically when transitioning from one color to the next.
#2. Inconsistent blurring: If something is perfectly clear on some parts of the drawing, but somewhere else it's blurry, even slightly blurry, almost unnoticeable at that, then it is AI-generated. Note that some artists and some art sites blur the artwork, so if it's equally blurry everywhere, or it's blurry in some places to clearly separate the foreground from the background, then it's fine; if it's incoherently blurry then it is AI-generated. Be mindful of this (see the sketch after this list).
#3. Strange artifacts: If the drawing has colors in odd places that should not be there, then it's AI-generated. Don't confuse it with JPEG artifacting, though. The AI artifacting is smooth and blends naturally with the piece, while JPEG or any compression-algorithm artifacting is rough and visibly pixelated.
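For what it's worth, point #2 can be roughly eyeballed in code as well. This is only a minimal sketch of the idea, not a detector: it maps per-tile sharpness using the variance of the Laplacian (OpenCV), and the file name and tile size are placeholders I made up. Depth of field produces the same pattern, so a blurry tile is a hint at most.

```python
# Rough heuristic for "inconsistent blurring": per-tile sharpness via the
# variance of the Laplacian. Low values = blurry tile. A hint, not proof.
import cv2
import numpy as np

def sharpness_map(path: str, tile: int = 64) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    rows, cols = gray.shape[0] // tile, gray.shape[1] // tile
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = gray[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            out[r, c] = cv2.Laplacian(patch, cv2.CV_64F).var()
    return out

m = sharpness_map("suspect.png")  # placeholder file name
print("sharpest tile:", m.max(), "| blurriest tile:", m.min())
```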

Obviously, you will immediately know at a glance that it was AI-generated if the painting makes no sense visually, like illegible text or unmatched, hilariously awful anatomy, but the points above are more important to learn and remember because all AI-generated paintings have them, every one of them without exception. Meanwhile, artists can suck at anatomy as much as they can suck at drawing good perspective or having generally good composition.

faucet said:
On top of all the visual clues, it's almost always uploaded by somebody who has a tiny number of followers on their galleries and uploads work way too fast for any real human to be producing art of that quality. That, or they have an obvious jump from MS Paint art to sudden hyper-realism.

Everybody has a small number of followers when they sign up for an account on any site; this should not be used to determine whether somebody is using AI or not. When I first watched Razyfoxxo on FA years ago, his drawings were already good, but he had almost no following. If an artist graduated from art school and then decided to use DeviantArt one day, their art would look fantastic, but they would have like 9 followers.


One AI artifact that I've noticed commonly enough (though not necessarily so often as to be a guarantee) is sorta bright stress marks in some corners and certain spots that kinda look like how the Source engine does bloom.

wolfmanfur said:
Nobody here mentions the most important points; everybody just mentions the messed-up hands time and time again.

Exactly this. Look for mistakes humans don't make, not ones they commonly do.

kora_viridian said:
How does one view such a watermark?

Is it the kind of thing where you stick the image in Gimp, Pornoshop, some image viewers, or similar, screw with the RGB levels, and then the watermark shows up? Or is it a meta tag (EXIF or similar) in the image file itself?

If it's a meta tag, not all of those will survive posting on every art site.
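For the meta-tag half of the question, here is a quick sketch of dumping whatever text metadata a PNG carries (some generator front-ends write their settings into PNG text chunks, which PIL can read); the file name is a placeholder, and as noted, absence proves nothing since most sites strip this on upload.

```python
# Dump any text metadata a PNG happens to carry (tEXt/iTXt chunks and EXIF).
# Absence proves nothing -- most art sites strip this on upload.
from PIL import Image

img = Image.open("suspect.png")  # placeholder file name
for key, value in getattr(img, "text", {}).items():
    print(f"{key}: {value[:200]}")
print("EXIF present:", bool(img.getexif()))
```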

It is in the image. But scaling, rotation and cropping can destroy it. At least that's the case for Stable Diffusion.

You can disable it in Stable Diffusion WebUI in the settings tab. If you are using "vanilla" Stable Diffusion, it can be disabled by removing some lines of code from the .py files in the "scripts" directory and the directories inside it. I don't know how to disable or attack the watermark used by other tools.

In my opinion, don't rely too much on watermarks; the people who lie about their artwork already know how to disable or attack them.

Also, the watermark is there for a reason, it is used to filter out AI generated images from datasets used for training AI. Some artists who don't want their artwork being used to train AI models might watermark it intentionally.
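If you want to check for the Stable Diffusion one specifically, something like this is a minimal sketch against the stock setup. I'm assuming the `invisible-watermark` package, the `dwtDct` method, and the 136-bit "StableDiffusionV1" payload that the original v1 scripts embed, so treat those details as assumptions rather than gospel.

```python
# Sketch: try to read the default Stable Diffusion invisible watermark.
# Assumes the stock payload "StableDiffusionV1" (17 bytes = 136 bits) and
# the dwtDct method; scaling, rotation, or cropping usually destroys it.
import cv2
from imwatermark import WatermarkDecoder  # pip install invisible-watermark

bgr = cv2.imread("suspect.png")  # placeholder file name
if bgr is None:
    raise FileNotFoundError("suspect.png")
payload = WatermarkDecoder("bytes", 136).decode(bgr, "dwtDct")
try:
    print("decoded payload:", payload.decode("utf-8"))
except UnicodeDecodeError:
    print("no readable watermark (destroyed, disabled, or never there)")
```

A garbage payload doesn't prove the image is human-made, for the reasons above; it just means this particular watermark isn't readable.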

wat8548 said:
The number one thing to look out for is continuity of background when it passes behind a character. No model has yet learned how to draw a partially obstructed line extending from one side of a foreground object to the other. Quite often you'll see images with a character standing in the centre, and the left side of the image looks like it was taken from a completely different angle to the right.

This could become a solved problem with Depth2Image and other advancements.

There are already images on e6AI that look perfect, with no weird eyes or limbs. As AI art and artists mature, styles will become less samey, and subtle signs, such as suspicious resolutions like 2048x2048, will disappear. Community standards will rise over time. The attempted_signature will be discouraged because it's a signature of laziness.
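On the resolution tell, for as long as it lasts: stock Stable Diffusion outputs tend to be multiples of 64 (512x512, 768x768, or upscaled squares like 1024 or 2048), so a quick dimensions check is the weakest possible hint. A minimal sketch, with a placeholder file name; plenty of human art uses these sizes too.

```python
# Weak hint only: stock diffusion outputs are usually multiples of 64,
# often perfect squares. Plenty of human art matches this as well.
from PIL import Image

w, h = Image.open("suspect.png").size  # placeholder file name
print(f"{w}x{h}",
      "| multiple of 64" if w % 64 == 0 and h % 64 == 0 else "",
      "| perfect square" if w == h else "")
```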


AI will only get better at being undetectable. It's designed to be deceitful, and most of its programming revolves around creating inauthentic interpolations.
You can try using https://www.illuminarty.ai/en/illuminate or duplicating the image in Photoshop and setting that to Divide (to find black edging patches/lines); those are "old" ways of detecting AI, though, and can guess wrong.
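Here's a rough numpy/OpenCV approximation of that Divide trick. Note that dividing an image by an exact duplicate just gives flat white, so this sketch blurs one copy slightly first (my assumption about how the layer is usually prepared) so that edge halos, smears, and odd patches stand out; file names are placeholders.

```python
# Approximate Photoshop's Divide blend: base / blend, scaled back to 8-bit.
# One copy is lightly blurred (assumption) so edge halos and smeared
# patches become visible instead of a flat white result.
import cv2
import numpy as np

base = cv2.imread("suspect.png").astype(np.float32) + 1.0  # +1 avoids /0
blend = cv2.GaussianBlur(base, (0, 0), 2.0)                # sigma = 2
divided = np.clip(base / blend * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("divide_check.png", divided)
```

Like Illuminarty, treat whatever it shows as a pointer, not a verdict.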

There's a lack of funding for detectors, but it should be possible to make them, because an AI can correlate originals in a training set as much as it interpolates; it's very hard, though.
https://arstechnica.com/information-technology/2023/02/researchers-extract-training-images-from-stable-diffusion-but-its-difficult/

But the only other way is to spam the original sites with protest imagery, apparently, as ArtStation has proven. Which I'm sure the mods don't want :P
https://i.ibb.co/5jxBQst/image-660.png

If desired, the best way you could detect site-wide is to use screeners like Illuminarty, then check the artist's social media presence for WIPs.
As Gombrich said when exposing the nazis who stole jewish art to claim it as their own, something-something about how "artist authenticity is determined by their oeuvre (aka consistency)".
Mass plagiarism isn't anything new in right-wing societies for their state-corporate merging technologies. And there'll always be those out there to leech on the efforts of others under this current system, especially if it begets the four factors of capitalism: profit, status and consumerism. Since plagiarism is literally the technology, it will only get harder to detect as AI advances if not already.

As such, the artist's oeuvre is definitely the best means of determining if the image is AI, e.g. WIPs, higher-level design, layers, consistency, streams, and time taken. You can claim someone else's art as your own without proof just as easily as you can claim AI images as your own - neither has proof that all elements were the result of artistic effort. It's a burden of proof when it comes to the artistic merit and authenticity of any produced image.

The reason I say this is that searching for unusual traces in an image of your own volition is dangerous and can lead to artists getting falsely blamed, but we should all know that all genuine art involves the artistic process, and that's the only real discriminator between AI images and creative art.

Reviving this thread. I found this post today which looks like it could have come out of an AI.
post #3880427
I would have flagged the post and left it to the discretion of the moderators, but there was no option to flag a drawing for containing or being made entirely by AI. So I'll ask about it here instead.

Not only does it hit 2 of the 3 points from my post above (to be exact, point #2, inconsistent blurring, and #3, strange artifacts), but the artist themselves puts these out very quickly on FurAffinity, at an inhuman pace even.

The very first artwork in their FA gallery was posted yesterday; the third (and latest) artwork was posted only 3 hours ago. Rendering this post-processing (assuming it is just that) with Blender or anything of the sort would take a long time, not to mention the artist would need to know what to make, which isn't decided on a whim. It takes time to get inspiration.

I'm just gonna say that looks suspicious, but maybe I'm wrong. I want an admin to check this post properly.
This was posted by Lemongrab, so there's a chance this picture just looks very ugly and that's it - that's the whole story behind it.

wolfmanfur said:
Not only does it hit 2 of the 3 points from my post above (to be exact, point #2, inconsistent blurring, and #3, strange artifacts), but the artist themselves puts these out very quickly on FurAffinity, at an inhuman pace even.

Blender "artists" are just like that. It's been a tradition since long before AI art was invented to take stock models and stock scenes and slap them together in half an hour, then upload 5 renders from different angles. There's a reason the site has higher minimum quality standards for 3D art.

Maybe I'm just overly cynical from my DeviantArt days, but I hesitate to accuse AI of inventing slapdash use of Gaussian blur and gratuitous particle effects.

inafox said:
AI will only get better at being undetectable. It's designed to be deceitful, and most of its programming revolves around creating inauthentic interpolations.
You can try using https://www.illuminarty.ai/en/illuminate or duplicating the image in Photoshop and setting that to Divide (to find black edging patches/lines); those are "old" ways of detecting AI, though, and can guess wrong.

If desired, the best way you could detect site-wide is to use screeners like Illuminarty, then check the artist's social media presence for WIPs.

I uploaded some images to that page; it said a drawing I made had a 94 percent chance of being AI-generated, while a clearly AI-generated image had a 40 percent chance of being AI-generated. I also uploaded some drawings by other artists, and many of them were over 50 percent. Bear in mind that most of those drawings were made before generating images with AI was a thing. This means not only that it is very ineffective, but also that many art styles would be de facto banned if we blindly trusted it.

inafox said:
As [REDACTED] said when exposing the [REDACTED] who stole [REDACTED] art to claim it as their own, something-something about how "artist authenticity is determined by their oeuvre (aka consistency)".
Mass plagiarism isn't anything new in right-wing societies for their state-corporate merging technologies. And there'll always be those out there to leech on the efforts of others under this current system, especially if it begets the four factors of capitalism: profit, status and consumerism. Since plagiarism is literally the technology, it will only get harder to detect as AI advances if not already.

What?

inafox said:
As such, the artist's oeuvre is definitely the best means of determining if the image is AI, e.g. WIPs, higher-level design, layers, consistency, streams, and time taken. You can claim someone else's art as your own without proof just as easily as you can claim AI images as your own - neither has proof that all elements were the result of artistic effort. It's a burden of proof when it comes to the artistic merit and authenticity of any produced image.

The reason I say this is that searching for unusual traces in an image of your own volition is dangerous and can lead to artists getting falsely blamed, but we should all know that all genuine art involves the artistic process, and that's the only real discriminator between AI images and creative art.

There are artists who will not post anything until they feel they are good enough; they may also have a lot of art they did not dare to post before. Others might decide to hold their works and post them within a short period of time to take advantage of some recommendation algorithm.

inafox said:
There's a lack of funding for detectors, but it should be possible to make them, because an AI can correlate originals in a training set as much as it interpolates; it's very hard, though.
https://arstechnica.com/information-technology/2023/02/researchers-extract-training-images-from-stable-diffusion-but-its-difficult/

You know what else memorizes images? *Cut to bust* Brains!

electricitywolf said:
You know what else memorizes images? *Cut to bust* Brains!

I highly doubt you can draw anything from memory to the extent an AI can. Humans store visual utterances and cues in their minds, not visual data that they can then transform. AIs store actual feature-relative maps, which is why we can reverse GANs. You have to understand the mechanisms of these algorithms; they do not resemble the human brain at all. Industrial artists use foundational logic, motor skill, and utterance to make art in a sort of mind-body feedback loop, something that AIs do not have; they just approximate visual data on a compositional grid without the sapience and sentience. It helps not to confuse the artistic process with dynamic superposition and estimation.

As an artist myself, I don't study other artists, and I don't visually reference them either. I know some artists reference other artists, but they do so to "understand" the image with logic and then apply it more geometrically. That is, they look for "clues" about how to execute a given subject rather than going through the estimative approximate-placement smush that GANs do.

Any utterance I personally get in art comes at most from abstract design that some artists loosely share with me, which is entirely proportional and formal rather than 2D. I share techniques of construction with other artists, but I do not share visual direction or use their visual composite data. You won't be able to reconstruct my art, or anyone's specific art, directly with GANs very easily due to the lack of interpolative bias. But popular artists' designs that are made from more common technique and mimesis will canonically dominate the general appearance of results from any given GAN. So a GAN typically only approximates the generic elements shared between the art you gave it, and you're unlikely to produce anything that resembles fine-art technique with it. At the client end of the user-model you can also reduce the range to target the qualities of a certain image, however. And if an artist has a considerable public oeuvre, you could train it entirely on that artist and just rip them off. I don't see how interpolating other people's images in any way resembles artistic technique.

GANs don't even know the depth order of an image or any of the assimilated artefacts like brush strokes; they have no sense or logic of how an image is layered yet, and GANs sort of "clone stamp", loosely speaking, from the feature-map set when decorating contours they find relevant to the feature diffused upon the grid. If you injected a single brush-stroke image classified under some word like "brushstroke" into a GAN, it would literally stipple that brush stroke along the classifier trajectory it's associated with, e.g. "eye area". It's the randomly seeded interpolation of multiple images that tries to get past the idea that it's not just doing this, since these systems were made to exploit "derivative" fair use in an exploitative way. It's also what happened with the "No AI" protest image.

As much as AI users try to dodge "proof of direct plagiaristic source" like this to abuse the uselessness of modern copyright laws, they themselves ironically have no proof of "creatorship" under the same existing basis that protects artistic merit. As such, as with all forms of plagiarism, when claiming work as your own, the lack of proof of creatorship is shared between classic plagiarists and AI users who claim the visual content to be their own handiwork; some even go as far as claiming the brushwork is of their own hand. But not all AI users make this claim, just as people who plagiarise others in general may be honest and be respected for their honesty as the non-true artworker.

Illuminarty is old, yes, but it can give you some pointers; it depends on the models and the interpolation/sampling/etc. methods.
As with all plagiarism, you don't know if anything uploaded is by whoever says it is. A classic plagiarist may even steal another person's image and just say it's AI. Yet AI users and classic plagiarists share the common trait of not having proof of work; you cannot own the work of something you have no proof of. In both law and social justice, you go on a sense of trust when someone uploads something and says it is theirs. Typically, as said, the oeuvre (an artist's collection of works taken independently) tends to point to whether an artist has always had that style, consistently and even pre-AI.

Someone who uploads AI and "admits" it's AI is not being deceitful per se, if it's clear and expected in that space that it is not directly the product of the uploader's hard work. The problem with recognising AI is generally deceit, or a lack of knowledge that something is AI.

So if an image is suspected to be AI, for now you can only "check up" on that artist, their oeuvre, their WIPs, etc., and make the best judgement you can while relying upon trust. The non-existence of WIPs and an oeuvre bears no relevance, because the same applies to classic plagiarism when it comes to proving authorship. This is nothing new whatsoever, and a distinction needs to be made between "someone claiming something as AI" and uploading it to an appropriate site for AI, versus "someone claiming something as not AI" and uploading it to an art site as if it's their art, just as "someone claiming something as their work" in general begets only an element of trust without clear proof. After all, OP's question is about determining if something is AI where the uploader has not committed to stating it is AI, which is typically done for a false sense of merit for the hard work whose consecutive path led to the creation of that image (whether algorithmically or not).

This is precisely why AI images should always be separated from artwork, especially when holding a competition: an artistry competition is judged on the basis of artistic ability and skill, while any generated image can only be competed on its visual qualities. In the past we didn't need to make such a distinction between photos, 3D art, and 2D art images, because it's typically fairly clear how the qualities of these mediums differ, but it would be equally deceitful for someone to claim a photo as a painting or a 3D render as fine art. Both diffuse and direct CNN types of GAN algorithms try to emulate other mediums visually, so when the difference lies in the creation, they can only be differentiated in terms of how they were made.
