cross-posted from: https://lemmy.ca/post/52046585

We make buildings install fire extinguishers for safety. Should AI plants be forced to install something that can shut them down in an instant?

  • ZDL@lazysoci.al · 9 days ago (+9/-1)

    LLM data centres should be forced to install something that shuts them down in an instant, yes.

    And then they should be shut down. In an instant.

    Like, right away.

      • ZDL@lazysoci.al · 9 days ago (+13/-2)

        False equivalence.

        Nuclear technology had obvious applications from its earliest days. LLMs have no obvious (or even non-obvious) applications after years of development and being crammed into every orifice humanity has to offer.

        Time to just shut them down and save the power for things that might actually accomplish something useful.

          • Voroxpete@sh.itjust.works · 9 days ago (+9)

            OK, but all of that potential can be researched in a lab. It doesn’t need a trillion dollars’ worth of new data centres, right?

          • ZDL@lazysoci.al · 9 days ago (+6)

            LLMs have no potential to do anything for humanity.

            AI in general might eventually be a net positive (previous waves of it are still used in the very narrow niches where they bring value), but not this wave, which is headed into the next AI winter. Indeed, this wave will likely not even survive in niches: LLMs are just too problematic, even when laser-focused on very narrow applications.

  • CarbonIceDragon@pawb.social · 9 days ago (+7/-1)

    A Skynet-style attack on civilization isn’t really a realistic example of the dangers that AI, or at least the current tech using that term, poses, and I struggle to think of a danger where the best solution is to shut down the actual data center as quickly as possible. That’d be like pointing to the problems Facebook has caused and insisting that the proper response is an instant global Facebook shutdown button. Further, a lot of AIs can run locally, which would make shutting down data centers an ineffective way to deal with them.

    And who presses this button? The AI company, which has an incentive to keep its product active if there’s any doubt? The government, which will likely take a long time to act, or which may use the button as leverage to force the AI to be biased towards the current crop of politicians? Is this button remotely accessible, thereby enabling a hacker to disable the AI and any infrastructure that has been foolishly made reliant on it? Or is it airgapped, and therefore not much more useful than just cutting the power to the site or disconnecting the data cables would be?
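
    To make the “remotely accessible” worry concrete, here is a minimal, purely hypothetical sketch of what such a button tends to reduce to: one authenticated request. The endpoint, the token header, and the shut_down_cluster() stub are invented for illustration and aren’t any real system.

    ```python
    # Hypothetical sketch only: a network-reachable "big red button".
    # The route, the token header, and shut_down_cluster() are illustrative, not a real API.
    import hmac
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SHARED_SECRET = "replace-with-a-real-secret"  # whoever holds this holds the button

    def shut_down_cluster():
        """Stub standing in for whatever would actually power down the data center."""
        print("Cluster shutdown triggered.")

    class KillSwitchHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # The whole safeguard reduces to this one check: a leaked token or a bug
            # in this handler hands exactly the same power to an attacker.
            token = self.headers.get("X-Auth-Token", "")
            if hmac.compare_digest(token, SHARED_SECRET):
                shut_down_cluster()
                self.send_response(200)
            else:
                self.send_response(403)
            self.end_headers()

    if __name__ == "__main__":
        # Exposing this on a network is the single obvious point of attack in question.
        HTTPServer(("0.0.0.0", 8080), KillSwitchHandler).serve_forever()
    ```

    The airgapped alternative removes that attack surface, but then it isn’t meaningfully different from walking over and cutting the power.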

      • CarbonIceDragon@pawb.social · 9 days ago (+6/-1)

        No, but the safeguard should be designed in response to the dangers the tech actually poses, and those tend to be more subtle than actively trying to kill everyone: perpetuating existing human biases in things like medicine and hiring without a clear way to tell that a biased decision has been made or a human in the loop to hold accountable, or providing dangerously inaccurate information. Nobody is likely to press a universal off button to deal with these kinds of “everyday” problems, and once the response is given the damage is done, so safety should focus on regulating what the AI says and does in the first place rather than responding to it afterwards.
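
        As a rough illustration of “regulate what the AI says and does in the first place”, here is a minimal hypothetical sketch: the model’s answer passes an output check before anyone can act on it, and anything flagged is held for human review. The function names and the checks themselves are made-up placeholders, not a real moderation system.

        ```python
        # Hypothetical sketch: screen the model's output *before* it is acted on,
        # rather than relying on an after-the-fact kill switch. Names are placeholders.
        from dataclasses import dataclass

        @dataclass
        class Decision:
            text: str
            approved: bool
            reason: str

        def generate_answer(prompt: str) -> str:
            """Stub standing in for whatever model actually produces the answer."""
            return f"Model answer to: {prompt}"

        def looks_risky(answer: str) -> bool:
            """Placeholder check; a real system might screen for protected-attribute
            reasoning in hiring decisions, unverifiable medical claims, and so on."""
            lowered = answer.lower()
            return "diagnosis" in lowered or "reject applicant" in lowered

        def answer_with_guardrail(prompt: str) -> Decision:
            answer = generate_answer(prompt)
            if looks_risky(answer):
                # The damage is prevented here, before the answer reaches anyone.
                return Decision(answer, approved=False, reason="held for human review")
            return Decision(answer, approved=True, reason="passed automated checks")

        if __name__ == "__main__":
            print(answer_with_guardrail("Should we hire this candidate?"))
        ```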

        • 🇾 🇪 🇿 🇿 🇪 🇾@lemmy.ca (OP) · 9 days ago (+2/-3)

          Yeah, I agree most of the risks are bias and bad decisions. But I still think we need a fire-extinguisher-style backup if something spins out of control, even if hitting it brings great pain.

          • CarbonIceDragon@pawb.social · 9 days ago (+4/-1)

            If we really need to, and recognize that need, we can already do that in a number of ways. A single big obvious button just creates a single obvious point of attack.

            • 🇾 🇪 🇿 🇿 🇪 🇾@lemmy.ca (OP) · 9 days ago (+2/-1)

              That’s fair, but it’s the same problem we already live with for nuclear weapons. The codes exist, they’re secret, and we trust a handful of people with them. Why should an AI kill switch be any different?
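
              For what it’s worth, the “handful of people with the codes” idea can be sketched in a few lines: a k-of-n rule, where the switch fires only if enough distinct keyholders sign off, so no single person (or single stolen key) is enough. The keyholders, secrets, and threshold below are invented purely for illustration.

              ```python
              # Hypothetical sketch of a k-of-n "launch codes" rule for a kill switch.
              import hashlib
              import hmac

              # Illustrative keyholders only; in reality these would be hardware tokens.
              KEYHOLDERS = {
                  "operator_a": b"secret-a",
                  "operator_b": b"secret-b",
                  "regulator": b"secret-c",
              }
              REQUIRED_APPROVALS = 2  # the "two-person rule"

              def sign(name: str, command: bytes) -> bytes:
                  """A keyholder signs the shutdown command with their secret."""
                  return hmac.new(KEYHOLDERS[name], command, hashlib.sha256).digest()

              def authorized(command: bytes, approvals: dict[str, bytes]) -> bool:
                  """Fire only if enough *distinct* keyholders produced valid signatures."""
                  valid = {
                      name for name, sig in approvals.items()
                      if name in KEYHOLDERS
                      and hmac.compare_digest(sig, sign(name, command))
                  }
                  return len(valid) >= REQUIRED_APPROVALS

              if __name__ == "__main__":
                  cmd = b"SHUTDOWN datacentre-01"
                  # One keyholder alone is not enough...
                  print(authorized(cmd, {"operator_a": sign("operator_a", cmd)}))
                  # ...but two distinct keyholders are.
                  print(authorized(cmd, {"operator_a": sign("operator_a", cmd),
                                         "regulator": sign("regulator", cmd)}))
              ```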

              • BlackJerseyGiant@lemmy.ml · 9 days ago (+3)

                Because the danger is different. The danger of AGI is that if it happens, it’s an exponential process in which a machine basically instantly becomes so much smarter than the smartest human beings that it could circumvent any safeguards we humans could possibly think of, let alone put into practice.

                What form might that take? No one can know, because it is literally beyond our ability to comprehend. Perhaps a device that was smart enough could influence human minds at a distance using the EM radiation from its own circuits. Maybe it could take over a car, a Roomba, or an attack drone. Maybe it could manifest reality through sheer willpower.

                That’s the problem with superintelligence: it’s unpredictable. Add to that that perhaps humanity doesn’t, um, have an unassailable claim to universal moral high ground, and there’s a case to be made that a superintelligent AGI might decide that we humans gotta go.

              • CarbonIceDragon@pawb.social · 9 days ago (+2)

                The utility of a nuclear stockpile is as a deterrent against a threat we know exists (hostile foreign powers). This is a deterrent or response to what, exactly? A hypothetical AI beyond what we currently have the tech to make, and one that, if built, probably would not behave the way it is fictionally portrayed, so the button is unlikely to actually be pressed even when needed. Consider that the AIs we already have can be used to persuade people of things. If we somehow managed to make a Skynet-style super-AI bent on taking over the world, rather than suddenly launching a war on humanity, its most obvious move would be to manipulate people into giving it control, such that whoever is in charge of pressing the button would be itself, or someone favorable to it, long before anyone realized pressing it was even necessary.

                • 🇾 🇪 🇿 🇿 🇪 🇾@lemmy.ca (OP) · 9 days ago (+1/-1)

                  I get what you’re saying: if AI can manipulate, it will try to make sure the button never gets pressed. But humanity isn’t dumb either. We’ve spotted and contained world-ending risks before. Why assume we wouldn’t notice this one?

          • very_well_lost@lemmy.world · 9 days ago (+2)

            If you genuinely think that the current generation of AI could “spin out of control” in a way that could do great harm to humanity (other than the real, tangible harms that it’s already doing all the time, every day), then you’ve accepted a false narrative perpetuated by people who just want to sell you shit.

            “AI”, in the sense that we’ve all come to understand it in the past five-ish years, is just advertising: advertising for a product with no actual utility and no viable business model. The only danger we should be worried about is the economic consequence of the AI bubble bursting and plunging the developed world into yet another brutal recession.