For example, projects trying to detect artifacts in data generated by a neural network using a “simple” algorithm, the same way compression artifacts can be spotted when data is analyzed. Anything that isn’t “our neural network detects other neural networks” and that isn’t some proprietary bullshit.
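To make the “compression artifact” analogy concrete, here is a rough sketch of what such a simple check could look like: look at how an image’s energy is spread across spatial frequencies, since upsampling layers in a lot of generators are known to leave periodic, grid-like traces there. Everything in this snippet (the file name, the cutoff, the choice of NumPy and Pillow) is my own assumption for illustration, not taken from any existing project.

```python
import numpy as np
from PIL import Image

def radial_power_spectrum(path: str, size: int = 256) -> np.ndarray:
    """Radially averaged power spectrum of a grayscale, resized copy of the image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(pixels))) ** 2

    # Bin each spectrum value by its distance from the centre (the DC component).
    y, x = np.indices(power.shape)
    radii = np.hypot(x - size // 2, y - size // 2).astype(int)
    totals = np.bincount(radii.ravel(), weights=power.ravel())
    counts = np.bincount(radii.ravel())
    return totals / np.maximum(counts, 1)

def high_frequency_ratio(path: str) -> float:
    """Share of spectral energy in the top third of frequencies."""
    spectrum = radial_power_spectrum(path)
    cutoff = 2 * len(spectrum) // 3
    return float(spectrum[cutoff:].sum() / spectrum.sum())

if __name__ == "__main__":
    # "photo.jpg" is a placeholder; compare the ratio against values measured
    # on images you know are real. An oddly high or suspiciously regular
    # high-frequency tail is a hint, nothing more.
    print(high_frequency_ratio("photo.jpg"))
```

Nothing like this is a reliable detector on its own, but it is the kind of transparent, inspectable method I mean, as opposed to a black box.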
Projects trying to block scrapers as best they can or feed them garbage data.
Some collaborative network for detecting well-known images or text that has very likely been generated by a neural network and storing it in a shared database. Only if the detection methods are explained and can be verified, of course, otherwise anybody can claim anything.
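For the shared-database idea, a minimal sketch of the lookup side, assuming a perceptual “average hash” as the fingerprint and a published set of flagged hashes. The hash scheme, the example entry and the distance threshold are all made up here, not any real project’s format.

```python
import numpy as np
from PIL import Image

def average_hash(path: str) -> int:
    """64-bit perceptual hash: 8x8 grayscale thumbnail, thresholded at its mean."""
    thumb = Image.open(path).convert("L").resize((8, 8))
    pixels = np.asarray(thumb, dtype=np.float64)
    bits = pixels.flatten() > pixels.mean()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits, so small edits or recompressions still match."""
    return bin(a ^ b).count("1")

# A collaborative database would just be a published set of such hashes,
# each with a documented, reproducible reason for being flagged.
FLAGGED_HASHES = {0x0123456789ABCDEF}  # made-up example entry

def looks_flagged(path: str, max_distance: int = 5) -> bool:
    h = average_hash(path)
    return any(hamming_distance(h, known) <= max_distance for known in FLAGGED_HASHES)
```

The point being that the whole pipeline is a few lines anyone can re-run and audit, which is what would make entries in such a database verifiable.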
It would be nice to have an updating pinned post or something with links to research or projects trying to untangle this mess.
The only project I can think of now: https://xeiaso.net/blog/2025/anubis/


https://severian-poisonous-shield-for-images.static.hf.space/index.html
I have this one saved away to check out once I have some time to look into it. Not sure how effective it is. I heard Nightshade doesn’t work anymore, or something along those lines, but don’t quote me on that.
Cool, but again, it seems proprietary, which is not ideal. Also, isn’t it a bit backwards to add artifacts instead of looking for ways to detect artifacts in generated images, so that we catch them early and avoid AI content in the first place?