Just a trillion more, bro!

  • Daniel Quinn@lemmy.ca · 2 days ago

    The most upsetting thing about all of this is that it lays bare the truth that the reason we don’t fight climate change is that there’s not enough profit in it.

    Capitalism needs to die, or it’s going to kill us all.

  • Nightwatch Admin@feddit.nl · 2 days ago

    Cool, no one’s running out of imaginary money yet? Too bad we’re running out of:

    • power production capacity
    • power grid stability
    • freshwater for cooling
    • suitable sources of sand for computer chips like GPUs
    • data to train your dumb LLMs on, and probably other things.
    • npdean@lemmy.today (OP) · 2 days ago

      Data will not stop coming, because we generate so much of it every day.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 days ago

        Yeah, but so do LLMs, and the data they generate is poison to them. We’re going towards model collapse.

        Also, we’re not generating enough to keep the line going up fast enough.
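
        A toy sketch of that recursive-training effect, far below LLM scale (the “model” here is just a Gaussian refit to its own samples; all numbers are invented for illustration):

        ```python
        import numpy as np

        rng = np.random.default_rng(42)

        # Generation 0: "human" data from a standard normal distribution.
        data = rng.normal(loc=0.0, scale=1.0, size=30)

        for gen in range(1, 31):
            # Fit a model to the current data (the "model" is just mean/std)...
            mu, sigma = data.mean(), data.std()
            # ...then train the next generation only on the model's own output.
            data = rng.normal(loc=mu, scale=sigma, size=30)
            if gen % 5 == 0:
                print(f"generation {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

        # On most seeds sigma drifts toward zero: the tails of the
        # original distribution are progressively forgotten.
        ```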

          • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 days ago

            IMO the AI bubble is just a symptom; the underlying bubble, the VC system and Wall Street, will only pop together with the US itself.

            If the AI bubble pops, people like Altman will already be safely out, working on the next bubble, quantum or something. They’ll “lose billions” but will still be trillionaires, and the only discernible result of the bubble popping will somehow be even more mergers, acquisitions and layoffs.

            • npdean@lemmy.today (OP) · 2 days ago

              The collapse of the US economy is farther away than people think. It will take a few more macrocycles; many bubbles will come and go by then.

              • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 days ago

                Agreed, I’m not saying it’s going to collapse tomorrow; I’m saying the AI oligarchs will not feel the AI bubble pop until it does. That said, the USSR was eternal until it was not.

  • peoplebeproblems@midwest.social · 2 days ago

    It’s a fucking chatbot that used modern ML training methods on enormous datasets, so it’s slightly fancier than the ones that already existed.

    They just fed it so much data that it almost appears to know something, when all it does is respond to the words you give it.
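
    A minimal sketch of that point: a toy bigram “language model” that can only replay the statistics of the words it was fed (the corpus is invented for illustration):

    ```python
    import random
    from collections import defaultdict

    # Invented toy corpus; a real model does this at enormous scale.
    corpus = "the model predicts the next word the model saw in training".split()

    # "Training": count which word follows which.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    # "Inference": respond to the word you give it by sampling
    # from what followed that word in the training data.
    word = "the"
    output = [word]
    for _ in range(8):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)

    print(" ".join(output))
    # It never "knows" anything; it replays training statistics.
    ```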

    • Knock_Knock_Lemmy_In@lemmy.world · 2 days ago

      They just fed it so much data that it almost appears to know something,

      The strawberry test shows this. Ask directly and it will give the correct number of letters. Ask in a more indirect fashion (in a way unlikely to be in the training set) and it falls over like before.
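
      A plausible mechanism, assuming a BPE-style tokenizer (the segmentation and token IDs below are illustrative, not any specific model’s): the model never sees letters, only token IDs, so letter counts have to come from memorized text rather than inspection.

      ```python
      # Hypothetical BPE-style segmentation of "strawberry"; real
      # tokenizers differ, but the point is the same: the model sees
      # opaque token IDs, not individual characters.
      tokens = ["str", "aw", "berry"]

      # Counting letters is trivial with character access...
      print(sum(tok.count("r") for tok in tokens))  # 3

      # ...but the model only receives IDs like [496, 675, 15717],
      # so "how many r's?" must be answered from memorized text,
      # which works for famous examples and fails for novel phrasings.
      ```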

    • npdean@lemmy.today (OP) · 2 days ago

      People need to realise this point. The difference between previous models and new ones depends heavily on the amount of data they have eaten.

      • vrighter@discuss.tchncs.de · 2 days ago

        And the seed lottery. You can see this if you try training a simple network with two inputs to learn XOR. It can converge in multiple ways, and sometimes it converges to a really bad approximation. Sometimes it doesn’t converge at all (or converges so slowly that it might as well not). Even then, it might converge to an approximation that’s more accurate on one side of the input space than the other. Tons of ways to get an undesirable result, for a simple 2-input network.

        Imagine how unlikely it is for these models to actually converge to the optimal thing, and how often the training is for nothing.
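
        A minimal sketch of that seed lottery: a tiny 2-2-1 network trained on XOR from different random seeds (hyperparameters are arbitrary; on some seeds training typically stalls in a bad minimum):

        ```python
        import numpy as np

        # XOR truth table: two inputs, one output.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_xor(seed, hidden=2, lr=1.0, epochs=5000):
            rng = np.random.default_rng(seed)
            # The "seed lottery": random initial weights decide the outcome.
            W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
            W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
            for _ in range(epochs):
                h = sigmoid(X @ W1 + b1)              # forward pass
                out = sigmoid(h @ W2 + b2)
                d_out = (out - y) * out * (1 - out)   # backprop of squared error
                d_h = (d_out @ W2.T) * h * (1 - h)
                W2 -= lr * h.T @ d_out
                b2 -= lr * d_out.sum(axis=0)
                W1 -= lr * X.T @ d_h
                b1 -= lr * d_h.sum(axis=0)
            out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
            return float(((out - y) ** 2).mean())

        for seed in range(10):
            mse = train_xor(seed)
            print(f"seed {seed}: final MSE = {mse:.4f}"
                  + ("  <- stuck" if mse > 0.05 else ""))
        ```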

          • vrighter@discuss.tchncs.de · 2 days ago

            Yes, but the only way to weed out a bad seed is to “play the lottery”.

            By the time you discover a seed is bad, you’ve already spent a shitton on training. Money down the drain; you gotta start over.

            • npdean@lemmy.today (OP) · 2 days ago

              Just train another AI to train the AI, then train another and another, and another. Imagine the stock rally.

    • npdean@lemmy.today (OP) · 2 days ago

      Very interesting read. What I took away from it was that there is still money to be made in AI stocks before the bubble pops, violently.

        • npdean@lemmy.today (OP) · 2 days ago

          There might be another pop to the upside before the fun ends. NVDA is definitely a smart stock to hold, even if you don’t support AI.

  • GreenKnight23@lemmy.world · 2 days ago

    Sure hope the homeless are hungry, because they’re going to eat so much AI bullshit they’ll never ask for anything again.

    • npdean@lemmy.today (OP) · 2 days ago

      The worst part about tech innovation is that it widens the divide between rich and poor. Poor people don’t have access to AI like the rich do, but they are affected more by its side effects.

      • GreenKnight23@lemmy.world · 2 days ago

        If anything, I would say they’re negatively impacted far more than the wealthy benefit.

        If an entrepreneur increases their wealth by a factor of 2, the disenfranchised actually lose by a factor of 4-6, not 2. They lose the ability to compete in a job market that was already against them. They lose the ability to gain financial freedom, because they can no longer “slip through the paperwork” when a person believes they are capable. They live in an environment continuously degraded by the toll such technology takes on the world’s resources. Finally, all of the previous negatives are compounded across their support network, so they can no longer find stability or help in moving forward.

        At this point, anyone who supports AI as a company is my enemy and is working towards destroying everything and everyone that doesn’t own a piece of AI.

  • stoy@lemmy.zip · 2 days ago

    You know what?

    If AI is in any way unbiased and intelligent, it will focus on public transport over cars.

    It would tell people to build trains and metro systems, run frequent bus services, and even build high-speed rail.

    No person with the money to do it would ever follow its advice.

    But it would be funny.

    • snooggums@lemmy.world · 2 days ago

      If AI is in any way unbiased and intelligent,

      It is not. It just vomits out a slightly randomized average response based on what is fed into it, which will mostly be pro-car stuff, because that is what exists.
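
      “Slightly randomized average” is close to literal: generation samples from a probability distribution over continuations, softened by a temperature. A minimal sketch with invented scores:

      ```python
      import math
      import random

      # Invented next-token scores a model might assign after
      # "people should get around by ..."
      scores = {"car": 4.0, "bus": 2.0, "train": 1.5, "bike": 1.0}

      def sample(scores, temperature=0.8):
          # Softmax with temperature: lower T means closer to the single
          # most common (most "average") continuation in the training data.
          exps = {w: math.exp(s / temperature) for w, s in scores.items()}
          r = random.uniform(0, sum(exps.values()))
          for w, e in exps.items():
              r -= e
              if r <= 0:
                  return w
          return w  # float-rounding fallback

      print([sample(scores) for _ in range(10)])
      # "car" dominates because car content dominates the training data.
      ```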

    • npdean@lemmy.today (OP) · 2 days ago

      If AI is in any way unbiased and intelligent,

      It can only be as unbiased as the data it was trained on, which ranges from human rights promotion to support for ethnic cleansing. Another major problem is that the bias can be changed by the people training the model: for example, Grok is not allowed to call the genocide in Gaza a genocide.