For example, projects trying to detect artifacts in neural-network-generated data using a “simple” algorithm, the same way you can tell data has been compressed just by analyzing it statistically. Anything that isn’t “our neural network detects other neural networks” and isn’t some proprietary bullshit.
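
To make the compression point concrete, here’s a rough sketch in Python. The idea that generated text tends to be statistically more redundant (and so compresses a bit better) is just a heuristic, and nothing below is a real detector; the threshold you’d compare against is something you’d have to calibrate yourself:

```python
# Minimal sketch: zlib compression ratio as one crude statistical signal.
# Generated text often leans toward repetitive, low-surprise structure,
# so it can compress slightly better than comparable human text.
# Far too weak on its own; this only illustrates the idea.
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by raw size; lower = more redundant."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

sample = "The quick brown fox jumps over the lazy dog. " * 20
print(f"ratio = {compression_ratio(sample):.3f}")  # very repetitive -> low ratio
```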

Projects trying to block scrapers as best they can or feed them garbage data.
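
As a toy illustration of the garbage-feeding idea, here’s a sketch using Flask. The user-agent list is just a few known AI crawlers for illustration, and real setups would do this at the reverse-proxy layer with much better bot detection:

```python
# Minimal sketch of "feed them garbage": serve noise to suspected scrapers.
import random
from flask import Flask, request

app = Flask(__name__)
SCRAPER_UA_HINTS = ["GPTBot", "CCBot", "Bytespider"]  # illustrative, not exhaustive
WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur"]

def looks_like_scraper(ua: str) -> bool:
    return any(hint in ua for hint in SCRAPER_UA_HINTS)

@app.route("/article")
def article():
    if looks_like_scraper(request.headers.get("User-Agent", "")):
        # Plausible-looking noise instead of the real page.
        return " ".join(random.choices(WORDS, k=500))
    return "the real article content"
```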

Some collaborative network for detecting well-known data, like images or text, that has very likely been generated by a neural network, and storing it in a database. Only if the detection methods are explained and can be verified, of course; otherwise anybody can claim anything.
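
Roughly what a verifiable entry in such a database could look like; all the field names here are made up, and an exact hash like sha256 only catches byte-identical copies (images would need perceptual hashing in practice):

```python
# Minimal sketch of a "known-generated" database entry where every claim
# has to point at a public, reproducible detection method.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class FlaggedItem:
    sha256: str            # exact content hash
    media_type: str        # "image", "text", ...
    detection_method: str  # must be public and reproducible
    evidence_url: str      # writeup anyone can check
    reported_by: str

def flag(content: bytes, media_type: str, method: str,
         evidence: str, who: str) -> FlaggedItem:
    return FlaggedItem(hashlib.sha256(content).hexdigest(),
                       media_type, method, evidence, who)

item = flag(b"...some text...", "text",
            "compression-ratio + repetition stats",
            "https://example.org/writeup", "alice")
print(json.dumps(asdict(item), indent=2))
```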

It would be nice to have an updating pinned post or something with links to research or projects trying to untangle this mess.

The only project I can think of now: https://xeiaso.net/blog/2025/anubis/
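
Anubis works by making the browser solve a SHA-256 proof-of-work challenge before it gets the page: cheap once per human visitor, expensive at scraper scale. A minimal sketch of that idea (not Anubis’s actual protocol; the difficulty value is made up):

```python
# Minimal sketch of the proof-of-work idea behind tools like Anubis:
# the server hands out a random challenge, the client must find a nonce
# whose sha256(challenge + nonce) starts with N zero bits.
import hashlib
import os

DIFFICULTY = 16  # required leading zero bits; invented value

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return nonce
        nonce += 1

challenge = os.urandom(16)   # issued by the server
nonce = solve(challenge)     # work burned by the client
digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
assert leading_zero_bits(digest) >= DIFFICULTY  # verification stays cheap
print(f"solved with nonce {nonce}")
```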

  • RawHex@lemmy.ml (OP) · 16 days ago

    > attempt at “real” AI

    I’m going to argue that there’s no such thing as “real AI”. We’ll only be able to create replicas of brains once we understand them fundamentally, to the point where we can explain them the same way we know how a CPU architecture works. Right now I think we’re insanely far from that. We barely understand brain diseases or how exactly neurotransmitters work, let alone large structures of neurons.

    My argument is, we don’t even know what “real AI” means, because we don’t know what “I” means yet.

      • RawHex@lemmy.ml (OP) · 15 days ago

        What’s funny about current GPTs is how much manual adjustment is being done on them, when the whole idea behind building them was that they would “adjust themselves”, which of course was total bullshit from the start.