Not an expert on this topic but I’ve read about it a fair bit and tinkered around with image generators:
You don’t post them, basically. Unfortunately nothing else will really work in the long term.
There are various tools – Glaze is the first one I can think of – that try to subtly modify the pixels in the image in a way that is imperceptible to humans but causes the computer vision part of image generator AIs (the part that, during the training process, looks at an image and produces a text description of what is in it) to freak out and become unable to understand what is in the image. This is known as an adversarial attack in the literature.
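To make the idea concrete, here is a minimal, generic sketch of that kind of perturbation: keep every pixel change within a tiny budget while pushing the image's features away from where they started, so an encoder misreads the picture even though a human sees essentially the same image. This is not Glaze's actual algorithm, and the torchvision ResNet used here is just a stand-in for whatever encoder a real pipeline uses; it only illustrates the "adversarial attack" concept.

```python
# Generic PGD-style feature-space perturbation, NOT Glaze's actual method.
# Assumes PyTorch + torchvision; ResNet-50 is a stand-in feature extractor.

import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in feature extractor: ResNet-50 without its final classification layer.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
for p in extractor.parameters():
    p.requires_grad_(False)

# ImageNet normalization expected by the backbone.
MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def features(x):
    return extractor((x - MEAN) / STD).flatten(1)

def cloak(image_path, epsilon=4 / 255, steps=40, step_size=1 / 255):
    img = TF.to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        original = features(img)

    delta = torch.zeros_like(img, requires_grad=True)
    for _ in range(steps):
        # Similarity between perturbed and original features; we minimize it.
        loss = torch.nn.functional.cosine_similarity(
            features((img + delta).clamp(0, 1)), original
        ).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # step that reduces similarity
            delta.clamp_(-epsilon, epsilon)         # keep the change imperceptible
            delta.grad.zero_()

    return (img + delta).clamp(0, 1).squeeze(0).cpu()  # perturbed image tensor
```

The epsilon budget is what keeps the change (mostly) invisible; the tension between a small budget and a strong effect is exactly where the artifact/effectiveness trade-off in the caveats below comes from.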
The intention of these tools is to make it harder to use the images for training AI models, but there are several caveats:
Though the changes aim to be imperceptible to humans, they can still produce clearly visible artifacts, especially at higher strength settings. This is most noticeable on hand-drawn illustrations and less so on photographs.
Lower strength settings produce fewer artifacts but are also less effective.
They can only target existing models, and even then won’t be equally effective against all of them.
There are ways of mitigating or removing the effect (a sketch of that kind of preprocessing follows this list), and the protection will likely not hold against future AI models (robustness to adversarial attacks is a major research interest in the field).
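On the mitigation point: the reason these protections are fragile is that very mundane preprocessing (resizing, mild blurring, lossy re-encoding) tends to disturb carefully crafted pixel perturbations without visibly harming the artwork. Here is a generic sketch of that kind of "purification" step using Pillow, with placeholder file names; it is not a claim about any specific tool, and how much it actually weakens a given protection depends on the tool and the model.

```python
# Generic purification preprocessing: resampling, a mild blur, and a lossy
# JPEG round trip disturb fine-grained perturbations while leaving the
# artwork visually intact. File names are placeholders.

from io import BytesIO
from PIL import Image, ImageFilter

def purify(in_path, out_path, scale=0.9, jpeg_quality=85):
    img = Image.open(in_path).convert("RGB")
    w, h = img.size

    # Downscale and upscale: resampling smears high-frequency pixel noise.
    img = img.resize((int(w * scale), int(h * scale)), Image.Resampling.LANCZOS)
    img = img.resize((w, h), Image.Resampling.LANCZOS)

    # A very mild blur removes more of the remaining high-frequency energy.
    img = img.filter(ImageFilter.GaussianBlur(radius=0.5))

    # Lossy JPEG round trip quantizes away much of what is left.
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    Image.open(buf).convert("RGB").save(out_path)

purify("protected_artwork.png", "purified_artwork.png")
```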
So the main thing you gain from using these is that it becomes harder, right now, for people to use your art for style transfer or fine-tuning to copy your specific art style. The protection has an inherent time limit because it relies on a flaw in the AI models, and that flaw will eventually be fixed. Other exploitable flaws will almost certainly be discovered after the current ones are patched, but the art you release now obviously cannot be protected by techniques that do not yet exist. It will be a cat-and-mouse game, and one where the protection systems play the role of the cat.
Anyway, if you want to try it, you can find the aforementioned Glaze at https://glaze.cs.uchicago.edu/. You may want to read one of their recent updates, which discusses at greater length the specific issue I bring up here, i.e. the AI models overcoming the adversarial attack and rendering the protection ineffective, and how they updated the protection to mitigate this: https://glaze.cs.uchicago.edu/update21.html
Update: Glaze is now known to be easily bypassed with trivial effort on most commercial (and most free self-hosted) diffusion models.