• FooBarrington@lemmy.world · +4/-1 · 3 days ago

I mean, yeah, it is? This is a well-researched part of the data pipeline for any big model. Some companies even got into trouble because their models identified as the other models whose outputs they were trained on.
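
      To make that concrete: here's a minimal sketch (purely illustrative, not any lab's actual pipeline code) of the kind of provenance filter used to drop text likely generated by another model. The telltale phrases and the regex-only approach are my own assumptions; real pipelines layer classifiers, dedup, and provenance metadata on top of heuristics like this.

      ```python
      import re

      # Hypothetical telltale self-identification phrases (non-exhaustive).
      MODEL_TELLS = [
          r"\bas an ai language model\b",
          r"\bi(?: am|'m) an ai (?:assistant|model)\b",
          r"\bi don't have personal opinions\b",
      ]
      TELL_RE = re.compile("|".join(MODEL_TELLS), re.IGNORECASE)

      def looks_synthetic(doc: str) -> bool:
          """Flag documents that self-identify as model output."""
          return TELL_RE.search(doc) is not None

      corpus = [
          "As an AI language model, I cannot access real-time data.",
          "The 1997 Kyoto Protocol set binding emissions targets.",
      ]
      kept = [doc for doc in corpus if not looks_synthetic(doc)]
      print(kept)  # only the second document survives the filter
      ```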

It seems you have a specific bone to pick that you attribute to this kind of training, but flatly denying broadly understood results is a weird way to go about it…

        • FooBarrington@lemmy.world · +3/-1 · 3 days ago

          No, it doesn’t. Unless you can show me a paper showing that literally any amount of synthetic data increases hallucinations, I’ll assume you simply don’t understand what you’re talking about.

          • baines@lemmy.cafe · +2/-4 · 3 days ago

            what paper? no one in industry is gonna give you this shit, it’s literal gold

            academics are still arguing about it, but save this and we can revisit in 6 months for a fat “i told you so” if you still care

            ai is dead as shit for anything that matters until this issue is fixed

            but at least we can enjoy soulless art while we wait for the acceleration

            • FooBarrington@lemmy.world · +3/-1 · 3 days ago

              Yeah, that’s what I guessed. Try to look into the research first before making such grandiose claims.

              • baines@lemmy.cafe · +2/-1 · 3 days ago

                i know the current research, i know it’s going to eat your lunch

                • FooBarrington@lemmy.world · +1 · 3 days ago

                  Ah yes, and you can’t show us that research because it goes to another school? And all companies that train LLMs are simply too stupid to realize this fact? Their research showing the opposite (which has been replicated dozens of times over) was just a fluke?

                  • baines@lemmy.cafe · +1/-1 · 3 days ago

                    no, because this is literally in development, this isn’t some 60-year-old mature tech

                    algorithms sure, neural nets for some narrow domains yep great, but not this bullshit

                    there is already accessible academic research on LLM issues, the biggest concern being hallucinations, to the point where the word “bailout” is starting to make the rounds in the US from these very companies

                    the argument is over whether you believe this is inherent or fixable, and a big focus is on the training

                    anyone listening to any ai company right now is a damn fool, given the obvious circular vendor bullshit going on

                    but you do you, if the market could be trusted to be sane i’d be timing it right now