Source (Bluesky)

Transcript

recently my friend’s comics professor told her that it’s acceptable to use gen AI for script-writing but not for art, since a machine can’t generate meaningful artistic work. meanwhile, my sister’s screenwriting professor said that they can use gen AI for concept art and visualization, but that it won’t be able to generate a script that’s any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It’s only ever the jobs we’re unfamiliar with that we assume can be replaced with automation. The more attuned we are to certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don’t.

  • GreenKnight23@lemmy.world · 19 points · 5 days ago

    let’s not confuse LLMs, AI, and automation.

    AI flies planes when the pilots are unconscious.

    automation does menial repetitive tasks.

    LLMs support fascism and destroy economies, ecologies, and societies.

    • ricecake@sh.itjust.works · 6 points · 4 days ago

      I’d even go a step further and say your last point is about generative LLMs, since text classification and sentiment analysis are also pretty benign.

      It’s tricky because we’re having a social conversation about something that’s been mislabeled, and the label has been misused dozens of times as well.

      It’s like trying to talk about knife safety when you only have the word “pointy”.

      • GreenKnight23@lemmy.world · 3 points · 4 days ago

        It’s like trying to talk about knife safety when you only have the word “pointy”.

        holy shit yes! it’s almost like the corpos did it that way so they can just move the goalposts when the bubble pops.

        • ricecake@sh.itjust.works · 1 point · 4 days ago

          I generally assume the shallower intent when it explains things just as well. It’s the same reason home appliances occasionally get a burst of AI labeling: “artificial intelligence” sounds better in advertising than “interpolated multi-variable lookup table”.
          It’s a type of simple AI (measure water filth from an initial rinse, plus dry weight, soaked weight, and post-spin weight, then look up the averaged settings from preprogrammed values), but it’s still AI.
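          The “interpolated multi-variable lookup table” idea can be sketched in a few lines of Python. This is a hypothetical illustration only: the turbidity table, the sensor names, and the scaling rule are all made up for the sketch, not taken from any real appliance firmware.

```python
# Hypothetical washer controller: sensor readings index into a
# preprogrammed table, with linear interpolation between entries.
# All names and values here are illustrative assumptions.

# Preprogrammed table: water turbidity (0.0 clean .. 1.0 filthy) -> wash minutes
TURBIDITY_TABLE = [(0.0, 20), (0.5, 35), (1.0, 60)]

def interpolate(table, x):
    """Linearly interpolate a value from a sorted (key, value) table,
    clamping to the first/last entries outside the table's range."""
    if x <= table[0][0]:
        return table[0][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return table[-1][1]

def wash_minutes(turbidity, dry_kg, soaked_kg):
    """Scale the base wash time by how much water the load absorbed."""
    base = interpolate(TURBIDITY_TABLE, turbidity)
    absorbency = soaked_kg / dry_kg  # heavier soak -> longer cycle
    return base * min(absorbency / 2.0, 1.5)
```

          No learning, no model, just a pile of preprogrammed math — which is exactly the point of the comment above.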

          Biggest reason I think it’s advertising rather than something more deliberate is that this has happened before. There’s some advance in the field, AI regains its allure, and so everything gets labeled that way. Eventually people realize it’s not the be-all and end-all and decide that it’s not AI, it’s “just” a pile of math that helps you do something. Then it becomes ubiquitous, and people think the notion of calling autocorrect AI is laughable.

    • Javi@feddit.uk · 2 points · 4 days ago

      My favourite ‘will one day be pub trivia’ snippet from this whole LLM mess is that society had to create a new term for AI (AGI), because LLMs muddied the once-accurate term.

      • jj4211@lemmy.world · 2 points · 4 days ago

        To be fair, AI was still underwhelming compared to what people imagined it to be; it’s just that LLM vendors essentially swore up and down that this was the AI everyone had been waiting for, and that moved the goalposts so that ‘AGI’ had to be classified specifically.