• FlashMobOfOne@lemmy.world
    1 month ago

    LLMs are cool and all, but you can’t use them for anything requiring real precision without allocating human work time to validate the output, unless you want to end up on the national news for producing something fraudulent.

    And making it so their image generator can generate porn isn’t going to change that.

    • funkless_eck@sh.itjust.works
      1 month ago

      I had to correct my boss this morning because they didn’t read the AI output that told our client our services were worthless.

      • ZDL@lazysoci.al
        1 month ago

        I bitched out Baidu’s LLMbecile because Baidu has lost all capacity for search in favour of the slop. It literally told me that Baidu was useless for search and recommended several of its competitors over Baidu.

        Oopsie!

    • Xerxos@lemmy.ml
      1 month ago

      Yes, currently AI isn’t reliable enough to use in place of a human. All the big AI businesses are betting that this will change, either through training on more data or some technological breakthrough.

      • FlashMobOfOne@lemmy.world
        1 month ago

        Could be they’re right.

        They tried that with Theranos because Elizabeth Holmes’ machine could correctly identify four viruses.

        Presumably LLMs have already trained on the entirety of human knowledge and communication and still produce buggy information, so I’m skeptical that it’ll work out the way the VCs expect, but we’ll see.