“A lot of generative AI stuff isn’t really working,” Gownder said. “And I mean in the enterprise; I’m not just talking about your consumer experience, which has its own gaps. The MIT study suggested that 95 percent of all generative AI projects are not yielding a tangible P&L benefit. So no actual ROI. McKinsey has something like 80-something percent that don’t.”

  • Naz@sh.itjust.works · ↑57 ↓1 · 5 days ago

    TLDR: It’s because of the brain. AI makes us less productive.

    I’m a frontier AI model trainer/developer; I get to do whatever I want, and I decided to eat my own dog food, talking to my own model about optimizations to itself.

    Initially I thought I was “working faster” because it felt faster, but I was smart about it and ran stopwatches and timers to get objective productivity measurements.

    What I found was that I spent, on average, 20–30 minutes longer on the same problems than I would have spent solving them unassisted. So in my specific case I was up to 50% slower than I would have been on my own, despite feeling like it was faster.
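    Nothing fancy on the measurement side; roughly this shape, with placeholder task functions standing in for real work sessions (this is a sketch, not my actual tooling):

    ```python
    import time

    def timed(label, task):
        # stopwatch around a single work session
        start = time.perf_counter()
        task()
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.1f}s")
        return elapsed

    def solve_with_model():   # placeholder for an AI-assisted session
        time.sleep(0.3)

    def solve_by_hand():      # placeholder for an unassisted session
        time.sleep(0.2)

    assisted = timed("assisted", solve_with_model)
    solo = timed("solo", solve_by_hand)
    print(f"slowdown: {assisted / solo - 1:.0%}")
    ```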

    To make matters worse, I had difficulty recalling which specific issues/problems I had asked the model for help with, and I suspect this is a side effect of how speculative decoding regresses toward the statistical mean: the LLM wants you to say the most expected thing so it has to do less lookup work, and your human brain (which is very lazy) agrees with this default mode and fucking turns itself off. (A toy sketch of that accept/reject loop is below.)

    Crazy, right? Whodathunk it. We have our brains for a reason.
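    For the curious, here is the sketch: a cheap draft model guesses a few tokens ahead, and the expensive model only verifies, keeping the longest agreeing prefix. The “models” here are stubs, purely illustrative:

    ```python
    import random

    VOCAB = ["the", "cat", "sat", "on", "mat"]

    def draft_model(ctx):
        # cheap, fast guesser
        return random.Random(len(ctx)).choice(VOCAB)

    def target_model(ctx):
        # slow model whose answer we actually trust
        return random.Random(len(ctx) * 7).choice(VOCAB)

    def speculative_step(ctx, k=4):
        # 1) draft proposes k tokens ahead of the target
        proposed, tmp = [], list(ctx)
        for _ in range(k):
            proposed.append(draft_model(tmp))
            tmp.append(proposed[-1])
        # 2) target verifies; keep the agreeing prefix, then
        # substitute its own token at the first mismatch and stop
        out, tmp = [], list(ctx)
        for tok in proposed:
            want = target_model(tmp)
            out.append(want)
            tmp.append(want)
            if want != tok:
                break
        return out

    print(speculative_step(["the"]))
    ```

    The speedup only materializes when the verifier agrees with the guess, which is why everything in the stack nudges toward the most-expected continuation.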

    • kboos1@lemmy.world · ↑5 · 5 days ago

      It’s AI rot. Simply put, it’s like any software tool: if the AI is doing the work and doing the thinking, then you are conforming to the tool instead of using the tool.

      I deal with it a lot, even now when AI is just starting to poke its nose into our toolset. The person uses the tool without thinking, and at the end of the day they don’t have a clue why they did any of it. So now our tools are being redesigned with that in mind, so the operator doesn’t have to think and can intentionally run on autopilot. It’s the difference between an operator and an engineer, but it’s hard for most people to see the distinction; they see it as gained efficiency, or as getting the operator to produce the same level of work. I foresee AI replacing the operators and the experts being lost to attrition; then the experts who put the work in will become super valuable and scarce.

      • shrugs@lemmy.world · ↑2 · 3 days ago

        Matches my observation. Wasted 2 1/2 days creating some script with the help of AI. After nothing worked and I realized I was just pasting shit into the prompt without thinking, I scrapped everything and did it again using just my brain.

        20 mins later I had a working solution that was only a few lines of code.

        Fuck AI

    • very_well_lost@lemmy.world · ↑4 · 4 days ago

      To make matters worse, I had difficulty recalling which specific issues/problems I had asked the model for help with

      I feel this very strongly as well. Whenever you do something yourself there’s a natural self-reinforcement mechanism; it’s how people learn (something no LLM can do). This is why people retain information better when they take handwritten notes. Having notes to refer back to is nice, sure, but the act of taking the notes is what really drives the reinforcement: you’re forcing yourself to build connections in your brain that you don’t get from passively listening.

      AI-assisted coding takes the self-reinforcement out of the process completely, and you end up with code that you might marginally understand right now… but in 3 weeks? 3 months? Yeah, not so much.

      • hector@lemmy.today · ↑3 · 4 days ago

        I notice the same thing when driving: when I use the maps feature, I can’t recall the route well enough to reliably retrace it, but when I find the way the old-fashioned way, I can.

  • OctopusNemeses@lemmy.world · ↑6 · 4 days ago

    The elephant in the room is that LLMs are brute-force machines: glorified RNGs where you keep rolling the dice until there’s some semblance of an output that looks superficially okay.

    What is the first concept you learn in CS101? Don’t brute-force. Don’t waste compute power. It’s the worst way to do things.
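    The canonical first-week example, for anyone who needs the refresher (Fibonacci is just my stand-in here):

    ```python
    from functools import lru_cache

    # Brute force recomputes the same subproblems exponentially
    # many times; a one-line cache makes the call count linear.

    def fib_bruteforce(n):
        if n < 2:
            return n
        return fib_bruteforce(n - 1) + fib_bruteforce(n - 2)  # O(2^n) calls

    @lru_cache(maxsize=None)
    def fib_cached(n):
        if n < 2:
            return n
        return fib_cached(n - 1) + fib_cached(n - 2)  # O(n) calls

    print(fib_cached(90))        # instant
    # print(fib_bruteforce(90))  # effectively never finishes
    ```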

    There’s no way the tech industry doesn’t recognize this; nobody wants to say it. As soon as the money realizes it, the funding gets pulled and the bubble bursts. The industry will keep suckering them in the meantime.

  • justsomeguy@lemmy.world · ↑8 · 5 days ago

    What else is there to do till the bubble pops? Guess we gotta wait till one of the bigger companies goes under, but since they have near-unlimited funds, that might take a while.