• Smaile@lemmy.ca · 10 points · 6 days ago

      Yup, they don’t realize it will replace them, not their workers. And if you are that manager reading this, remember: their goal is no middle class.

      That means you.

      Not your grunts, who get paid dogshit and are little more than soulless husks these days.

      You.

    • Kaz@lemmy.org · 8 points · 6 days ago

      This, because all management does is communicate, so of course they think it’s amazing…

      Try to get it to do complicated or edge-case things and it struggles, but management never, ever touches the complicated stuff! They offload it.

  • LordCrom@lemmy.world · 20 points · 6 days ago

    I was asked to create a simple script… Great, I could have knocked that out in maybe 3 or 4 hours.

    Boss insisted I use A.I. … Fine, whatever.

    The code it spat out was OK, but didn’t work… So I took it and started recoding and fixing the bugs.

    It took over 3 hours to get that sloppy code to a working state.

    Boss asked why it took so long; AI works in seconds. He didn’t understand that I had to fix the crap code he forced me to use.

    Look, AI does pattern matching like a champ. But it cannot create… It doesn’t imagine…

  • Whats_your_reasoning@lemmy.world · 25 points · 7 days ago

    I don’t work with computers or coding, yet even in early childhood education/therapy some people are pushing for AI. Someone used it to make “busy scene” pictures for students to find specific things in. I hate using them. Prior to this, we used “busy scene” images that are easy to find online, full of quirky, funny details that the kids enjoy spotting.

    But I can barely look at the slop images that were generated. So many of the characters have faces that look like wax figures left in the hot summer sun. The “toys” in the scene are nonsensical shapes somewhere between unusable building blocks and poorly-formed puzzle pieces. Looking at the previous, human-made pictures brought me joy, but this AI garbage is a mess that makes me sad. There’s no direction, no fun details to find, just a chaotic, repetitive scene. I bet the kids I work with could draw something more interesting than this.

    • Hazor@lemmy.world · 15 points · 7 days ago

      I’ve never understood these use cases, pushing for generative AI in places where there’s already an abundance of human-made resources. Often for free. Is it just laziness? A case of “Why take 2 minutes for a Google search when I could take 1 minute for a generative AI prompt?”

  • gravitas_deficiency@sh.itjust.works · 25 points · 7 days ago

    We just had an all hands where they were circlejerking about how incredible “AI” is. Then they started talking about OKRs around using that shit on a regular basis.

    On the one hand, I’m more than a little peeved that none of the pointed and cogent concerns that I have raised on personal, professional, hobbyist, sustainability, environmental, public infrastructure, psychological, social, or cultural grounds - backed up with multiple articles and scientific studies that I have provided links to in previous all-hands meetings - have been met with anything more than hand-waving before being simply ignored outright.

    On the other hand, I’m just going to make a fucking cron job pointed at a script that hits the LLM API they’re logging usage on, asking it to summarize the contents, intent, capabilities, advantages, and drawbacks of random GitHub repos over a certain SLOC count. There’s a part of me that feels bad for using such a wasteful service in such a wasteful fashion. But there’s another part of me that is more than happy to waste their fucking money on LLM tokens if they’re gonna try to make me waste my time like that.
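For the curious, a rough sketch of what that token-burner could look like. Everything concrete here is hypothetical: the endpoint URL, environment variable names, model name, and repo list are placeholders, and it assumes an OpenAI-style chat-completions API, which may not match whatever service the company actually meters.

```python
# Hypothetical token-burner: pick a random repo and ask the metered LLM
# endpoint to summarize it. Placeholder names throughout; assumes an
# OpenAI-style chat-completions API.
import json
import os
import random
import urllib.request

REPOS = [  # placeholder list of repos over the imagined SLOC threshold
    "torvalds/linux",
    "python/cpython",
    "rust-lang/rust",
]

def build_prompt(repo: str) -> str:
    """Compose the summarization request for one repository."""
    return (
        f"Summarize the contents, intent, capabilities, advantages, "
        f"and drawbacks of the GitHub repository {repo}."
    )

def burn_tokens(api_url: str, api_key: str, model: str) -> str:
    """Send one summarization request; return the raw JSON response body."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": build_prompt(random.choice(REPOS))}
        ],
    }
    req = urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Only fire a real request when credentials are actually configured.
    if os.environ.get("LLM_API_KEY") and os.environ.get("LLM_API_URL"):
        print(burn_tokens(os.environ["LLM_API_URL"],
                          os.environ["LLM_API_KEY"],
                          os.environ.get("LLM_MODEL", "some-model")))
```

A crontab line such as `0 * * * * python3 burn_tokens.py` would then fire it once an hour.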

  • Itdidnttrickledown@lemmy.world · 13 points · 6 days ago

    I have a simple answer for why managers think it’s smart and workers think it’s dumb. Managers see all kinds of documentation from workers, and to them the AI slop looks the same. It looks the same because the managers never take the time to comprehend what they are reading.

    • bampop@lemmy.world · 4 points · 6 days ago

      I think it’s more that AI is a soulless bullshit generator with no imagination and no deep understanding, and managers tend to notice that it can do most of the work they do. There’s a lot of skill overlap with management there, so naturally they would be impressed with it.

      • Itdidnttrickledown@lemmy.world · 4 points · 6 days ago

        Without a doubt. The skill set for management has nothing to do with intelligence. It has to do with selfish manipulation and a lack of empathy. That way you can be cruel without losing a second of sleep.

  • TrackinDaKraken@lemmy.world · 18 points · 7 days ago

    Management never has a clue what their employees actually do day-to-day. We’re just another black box to them, tracked on a spreadsheet by accounting. Stuff goes in, stuff comes out, you can’t explain that.

    • luciferofastora@feddit.org · 4 points · 7 days ago

      I’m vaguely on the periphery of a project to create a sort of info-hub chat-bot. The project lead was really enthusiastic about getting me on board and helping me develop my skills in that direction.

      Apparently there’s a lot of people calling the wrong departments about stuff. Think along the stereotype of people calling the IT “Help Desk” for a broken light. The bot should help them find the right info, or at least the right department.

      The issue, according to management, is that information is spread all over the place. Some departments use Confluence, others maintain pages on the intranet webserver. One has their own platform for FAQ and tickets, except it’s not actually for tickets any more, which you’ll only find out when they unhelpfully close your ticket with that remark. Wanna guess what confused users do? Right, call some other department.

      The obvious solution would be getting each department to be more transparent and consistent about their information, responsibilities and ways to reach them, possibly even making them all provide their info on some shared knowledgebase with a useful search function. But that would require people to change their stuck habits.

      So instead they’re developing a bot that’s supposed to know all the knowledgebases and access them for users: answer simple queries, point them the right way for complex ones, and potentially even help them raise tickets with the relevant departments. Surely that will improve things?

      The one time I tried it, I asked it a question that would have been my area of responsibility to see if people would actually find me or at least the general department. Yeah, nah, it pointed me at someone not just unrelated to that function or department, but also responsible for a different geographical area. IDK what they trained it on, but it probably didn’t include any mentions of that topic, which is fair, given it’s still in development.

      But instead of saying “I have no information on that” or directing me to a general contact, it confidently told me to do the very thing it’s supposed to fix: bother the wrong person.

      And the project lead wonders why I didn’t immediately jump at the offer to join his department.

      • Jtotheb@lemmy.world · 5 points · 6 days ago

        My wife, who works at a college, was recently trying to locate some information from an old college newspaper that may not have been digitized yet and used their new work AI for help finding it. It directed her to the school’s archives, but provided made-up contact info for the office, and also recommended she contact herself.

    • ThomasWilliams@lemmy.world · 2 points · 6 days ago

      It’s really middle management they don’t understand, not the floor staff: the people who do all the checking and compliance, whom top management now think can be replaced by AI.

  • fibojoly@sh.itjust.works · 21 points · 7 days ago

    Our new tech lead loves fucking AI, which lets him refactor our Terraform (I was already doing that), write pipelines in GitLab, and do lots of other shiny cool things (after many, many, many attempts, if his commit history is any indication).

    Funnily, he won’t touch our legacy code. Like, he just answers “that’s outside my perimeter” when he’s clearly the one who should be helping us handle that shit. Also it’s for a mission critical part of our company. But no, outside his perimeter. Gee I wonder why.

  • Reygle@lemmy.world · 14 points · 7 days ago

    Honestly at this point AI is bad and human critical thinking is the worst I’ve ever seen in my life.

    I know people who I expect would collapse inward without AI holding their hands, and, surprisingly, I can’t wait to see it happen. I’m really holding out for the implosions and REALLY hope they happen when I’m nearby.

  • Reygle@lemmy.world · 10 points · 7 days ago

    Most of my conversations with management are spent talking them out of the heinous baloney they’re convinced of because “Gemini says…” No, boss, Gemini made some shit up. Scroll past it or stop wasting my time.

  • RBWells@lemmy.world · 5 points · edited · 6 days ago

    They are pushing it at my work. I spent half a day trying to train Copilot to build me a report from one PDF and one way-too-formatted Excel sheet. No go: the over-formatted Excel stumped it, and I had to clean it up first. I’m booking payroll, and the fucking system we use refuses to generate a report with the whole cost. There’s one report for gross-to-net, and a separate one for the employer cost that isn’t available in Excel or in any format that can be put in a spreadsheet. I need to split the total into departments and job cost codes. (ETA: the payroll system also doesn’t handle the job costing, so even after I get the total cost there’s more manual work.)

    I worked with the department who sends me this trash and, glory be, there was a CSV for the gross-to-net one. I finally wrestled Copilot into getting this right, then asked it “what do I ask next time to get this result the first time?” It now does a reliable job of this, BUT:

    All it’s doing is making a report that the payroll system really and truly ought to be capable of producing. And, I guess, letting me honestly say, “Sure boss, I use the Copilot.” It’s not adding anything at all, just making up for a glaring defect in the reporting available from the payroll company. Give me access to that system and I could build the report myself; it doesn’t need AI at all.
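For what it’s worth, the department/cost-code split itself is trivial without any AI. A minimal sketch, assuming a cleaned-up export where each row carries a department, a job cost code, gross pay, and the employer-side cost; all field names here are made up, not the real payroll format:

```python
# Hypothetical sketch of the report the payroll system won't produce:
# combine each row's gross pay with the employer-side cost, then roll
# the totals up by department and job cost code.
from collections import defaultdict

def allocate_costs(rows):
    """rows: iterable of dicts with department, job_code, gross,
    employer_cost. Returns {(department, job_code): total_cost}."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["department"], row["job_code"])] += (
            row["gross"] + row["employer_cost"]
        )
    return dict(totals)

if __name__ == "__main__":
    sample = [
        {"department": "Ops",   "job_code": "J1", "gross": 1000.0, "employer_cost": 150.0},
        {"department": "Ops",   "job_code": "J2", "gross": 500.0,  "employer_cost": 75.0},
        {"department": "Sales", "job_code": "J1", "gross": 800.0,  "employer_cost": 120.0},
    ]
    for key, cost in sorted(allocate_costs(sample).items()):
        print(key, cost)
```

Feed it the rows from the gross-to-net CSV (via the stdlib `csv` module) joined with the employer-cost figures, and the whole report is a loop and a dictionary; no Copilot required.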

    • Echo Dot@feddit.uk · 5 points · 6 days ago

      This is my problem with AI where I work. I can use it to get the result I want (eventually), although I have to do some editing.

      But I can also use the Python script that has been working fine for years, which gets me 99% of the way there in 15 seconds. It would be faster, but the script is terribly unoptimized because I’m not a programmer.

  • SpookyBogMonster@lemmy.ml · 4 points · 7 days ago

    My workplace was holding the yearly meeting where they lay out a bunch of rules that get followed for a month, and then get forgotten about.

    And one of the things in question was attendance. The boss smugly says, “We have an AI tracker that can tell us if you’ve come in late.”

    I can’t think of anything that could give me less faith in the accuracy of such a system.