




  • there’s no emergent behavior in LLMs. your perception that there is, is an anthropomorphism, the same as with the idea of prediction. statistically “predicting” the next word based on the frequency of the input data isn’t an emergent property; it exists as a static feature of the algorithm from the start. at a certain level of complexity, LLMs appear to produce comprehensible text, provided you stop them in time. that’s merely a consequence of the rules of the algorithm. the illusion of intelligence comes merely from being able to select “merged buckets” from the map, which are put together mathematically. a toy sketch of that static, frequency-based selection is below.

    it is a one-trick pony that will never become anything else.
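
    a minimal sketch of the “static feature” point, assuming a hypothetical toy bigram model (the corpus and the most_likely_next helper are made up purely for illustration): the “prediction” is nothing but a frequency lookup in a table that is fixed once training ends.

```python
# toy bigram "language model": count word-pair frequencies once,
# then "predict" by looking up the most frequent successor.
# nothing emerges at generation time; the table is static after training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()  # hypothetical training text

# training: count which word follows which (this builds the static table)
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """'Predict' the next word: just return the highest-frequency bucket."""
    return successors[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> 'cat' (seen twice, vs 'mat' once)
```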



  • it doesn’t predict; it follows a weighted graph or the equivalent. it doesn’t guess; /dev/urandom input merely makes the path unpredictable. any case where it looks like it predicts or guesses is purely accidental, and entirely in the eye of the observer.

    further, it only possesses knowledge to the degree that an encyclopedia does. the prompt is just the equivalent of a hash key pulling a bucket out of a map.

    it is literally just a huge database of key-value pairs, stored so as to minimize the description length of the values. a sketch of the weighted-walk-plus-randomness point is below.
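
    a minimal sketch of that claim, assuming a hypothetical hand-written weight table (the weights graph and the walk helper are invented for illustration): generation is a seeded walk over fixed edge weights, and the only “unpredictability” is the entropy fed in at the start.

```python
# toy "weighted graph" text generator: fixed edge weights, OS-entropy seed.
# the walk varies only because of the injected randomness,
# not because anything is being predicted or guessed.
import os
import random

# hypothetical weight table (token -> {next_token: weight})
weights = {
    "the": {"cat": 2, "mat": 1},
    "cat": {"sat": 3, "ran": 1},
    "sat": {"on": 1},
    "on":  {"the": 1},
    "mat": {"the": 1},
    "ran": {"the": 1},
}

# seed from the OS entropy pool (/dev/urandom on Linux)
rng = random.Random(os.urandom(16))

def walk(start: str, steps: int) -> list[str]:
    """Follow the weighted graph; the path changes only with the seed."""
    path = [start]
    for _ in range(steps):
        choices = weights[path[-1]]
        path.append(rng.choices(list(choices), weights=choices.values())[0])
    return path

print(" ".join(walk("the", 6)))
```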










  • I was thinking this. With advances in text recognition, they can potentially filter all of that data now. Given that Five Eyes exists essentially for industrial espionage, that Google first requires access to source code to ensure compatibility, and that you can’t really turn off Chrome’s web-page sniffing (I have found the disabled Chrome app still running, with “force stop” available), all of this makes more sense than the little ad revenue they’d squeeze out of chasing people who avoid Chrome and Google Assistant. After all, it isn’t bad actors or people who already buy from Google that they are spending so much effort on; it is the tech-competent.