Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.
Picked up my calculator from my desk. “What this needs is a tab I will never use to ask questions that are barely relevant to the task I’m doing” /s
Nintendo pulled that off locally on a handheld four years ago. Microsoft is late to the game.
Edit: love the downvotes, but could i ever get a reason for what was actually objectionable in my comment? other than requiring you to maybe think differently about something you already had your mind made up about? i’m sorry if i offended defector.com for not properly framing the problem.
blind tribal reaction is the only way i guess.
– Love a good strawman in the morning.
Those feeling a chill in the air are the “scale only” peeps, who were all in on not thinking too hard about the problem. Those focused on more than selling LLMs have a very different framing.
As for why AI agents aren’t functional, we do actually have a better understanding that doesn’t seem to want to leak out of niche academic areas.
The amount we’ve learned about intelligent systems this past half decade feels like a whole new world.
Deskilling is an issue already, even without AI. To summarize: minimizing energy expenditure is very important due to our evolutionary history. Not just for individuals, but for groups, which create simplified models that don’t disturb people’s normal trajectory, because learning a new model to predict the world takes energy. They do this by making predictable heuristics that are functional enough, but allow the daily scripts to continue unhindered. These simplified heuristics are sometimes too simple, and not robust enough to survive when the environment changes. Think pandas, who got deskilled at staying alive outside of a very specific environment.
So, in the same way, humanity needs to learn to become more robust via diverse but communication-focused intelligent systems. Anyone in the social sciences knows the evidence strongly shows that diverse representation is weirdly good for any intelligent group.
Similarly, the better forms of AI currently being ignored are actually built from that end of things, rather than trying to force all perspectives and preferences to live simultaneously in a single model, which creates informational and contextual choke points.
Also wish i could run into more people who actually study intelligence in these threads.
Clickbait journalism is a scourge on science. At least the public awareness of sycophantic echo chambers enabling delusional spirals is something good for people to think about, since any idea can survive in a similarly segregated confirmation bubble that mechanically cuts off outside ideas and education to preserve the existing group world model.
And i keep saying laziness is better described by that kind of solipsism, but instead you get called lazy if you aren’t feeding your whole life to the machine.
Just wish progressive public discourse was more generally informed in this area, because it’s very important to understanding society, our bodies, and intelligent systems as a whole.
Finally someone speaking logically
I have a more than surface-level understanding of LLMs, but I am not an expert by any metric imaginable. LLMs are great for information retrieval tasks like searching through a database of documentation when the query is in natural language; they work beyond the keyword search of a traditional search engine. But since they also have synthesis capabilities, they also make shit up.
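To sketch what “beyond keyword search” means in practice, here is a minimal embedding-based retrieval example. This is just an illustration, assuming the sentence-transformers library; the model name and the toy documentation snippets are placeholders:

```python
# Minimal semantic-search sketch: embed docs and a query, rank by cosine
# similarity. Corpus and model choice are illustrative, not prescriptive.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "To rotate the API key, open Settings > Security and click Regenerate.",
    "Billing runs on the first of each month; invoices are emailed as PDFs.",
    "Pass --verbose to the CLI to get full request/response logs.",
]

# Embed the documentation once; embed each natural-language query at search time.
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(["how do I get a new access credential?"], normalize_embeddings=True)

# Cosine similarity reduces to a dot product on normalized vectors. Note the
# query shares no keywords with the winning doc; the match is on meaning.
scores = doc_vecs @ query_vec.T
print(docs[int(np.argmax(scores))])  # -> the API-key doc
```

Retrieval like this doesn’t make anything up on its own; the hallucination risk enters when a generative model is layered on top to synthesize an answer from what was retrieved.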
The issue most people have is that LLMs are nowhere near as capable as corporations are making them out to be, and the amount of money being spent is simply not in line with the value they can provide to any individual or corporation. Thus, it is a bubble, and bubbles have always made things harder for regular people.
If you want a good analysis of LLMs and their capabilities, I strongly recommend a book called “AI Snake Oil”. It is written by two computer scientists; one teaches at Princeton, and the other, I believe, was a grad student under the first.
I’m extremely familiar, and definitely agree that the corpo paperclip maximizer will use any tool dishonestly if it helps them make paperclips. a lot of my critique is in the framing of the public dialogue, and how poorly informed much of it is. like with most things, it’s definitely important to deal with the rich assholes lying, and using functional tools for evil. both are bad, but in neither situation are the tools the problem.
a good example is elon musk lying about what his cars can do, every year for the past decade, which i have heavily critiqued for as long as it’s been happening. definitely a real issue!
All i want is for people not to throw the baby out with the bathwater. Much like uninventing the loom wouldn’t help people or solve the issue, the same goes for AI.
And the “scale only” people I mentioned are the only ones exclusively focused on llms, but nobody out there is just running a basic llm.
Frankly, the AI I’m most excited about is being grown from the opposite end, as diverse distributed collaborative networks that actively adapt to the shape of the context.
Honestly I think one of the most valuable things we will get, that nobody talks about, is functional mediators that can parse complex perspectival differences, enabling communication that has become difficult for our stupid tribal monkey species.
My issue with AI critique is it’s usually ungrounded and focused on hating AI, rather than the corpos who do bad stuff with every tool that exists, and lie about everything.
Even more traditional AI models are, right now, doing amazing things, but people think they are just simple collage-style art-stealing machines, based on some confabulated interpretation of what the models are actually doing.
But that topic also gets into the history of the art industry, who currently owns the larger art market, and how people define art and artists.
But if you actually address the issues, tribal ignorance ensures angry yelling rather than an actual attempt at learning what is being discussed.
To be fair, people like jordan peterson make people think complexity is unlearnable, because they use it to obfuscate rather than elucidate. So there are definitely valid issues to talk about, but outside of “scale is everything,” the focus on critiquing llms ignores every other part of AI, because those parts are harder to make people angry about. Unless you have a bunch of ignorant people you can spur into the same stupid aggression that existed during the tumblr “you’re stealing my style!” wars, because they can’t comprehend how art is all built socially, and nobody painted anime on cave walls.
Complex issues, lot of rabbit holes, but ignorance and tribalism are currently the main shapes of actual critique.
but i think developing functional intelligent systems will hinder bad actors more than they expect. see elon musk fighting to get his shitty llm to lie for him without also losing touch with everything else. picking and choosing where to ignore dissonance is a funny thing that humans are very susceptible to abusing.
Hopefully that makes sense.
But, even granting that LLMs are useful, you have to take into account the immense amount of resources consumed by training and inference, especially at a time when we cannot afford it. Personally, I feel they are not worth that expenditure for the benefit they provide. As for the rest of the ML models, they might just get caught in the wave of a backlash that, to be fair, only LLMs deserve.
Just to be clear, I am not anti-AI, just anti-hype.
Then why aren’t we going after streaming, which currently is much worse for the environment?
Not that I don’t agree it should be salient, and caps should definitely be put on big companies for this kind of stuff generally. Only doing it for AI is not really doing much, but people don’t seem to care about actually improving things rather than beating the tribal drum.
I agree with the need for action; my issue is the direction and framing, especially the socially reified misinformation. Musk is a good target while he is ignoring current standards for generators and the like, but the general dialogue around that is full of made-up numbers and claims, and nobody cares, because it serves the tribal preference.
Like the idea that ai cuts and pastes existing work to make a new image. That’s not how it works, and artists used to attack other artists for the same kind of style “theft,” a framing that just doesn’t recognize how brains work, how human social learning works, or how art and styles develop over time.
Also the energy for conversation is being wasted on these moot points, rather than the larger systemic issues.
Also, some of AI is being developed to directly counter that problem, but I don’t see much advocating for it in these spaces. Rather, it gets downvoted for “being AI,” completely ignoring why AI was supposed to be a problem in the first place, and void of any active effort to improve things.
The same communication issues are affecting the socio-political sphere, which is why it’s good to learn generally about intelligent systems and how they work.
Like, how often do you hear people complain that AI is “just a prediction system,” completely vacant of the understanding that we are all predictive systems, and that predictive processing is the current general consensus in the neuro-psych realm now. Etc.
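As an illustration of what “prediction system” means at its most bare-bones, here is a toy bigram model; the corpus is made up for the example, and LLMs are vastly more sophisticated versions of the same idea:

```python
# Toy next-word predictor: count which word follows which, then predict the
# most frequent continuation. The minimal form of a "prediction system."
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally bigram counts: how often each word follows each previous word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Most probable continuation given only the previous word.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" most often)
```

The point isn’t that brains are bigram models; it’s that “it just predicts” describes the mechanism, not a ceiling on capability.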
Basically, like most complex things, it feels impossible to talk about because the simplified social model dominates discourse, and everyone hates scientists for trying to bring annoying reality and diverse perspectives into the conversation.
I mean, learn about epistemics and you’ll learn more about AI. After all, being familiar with diverse perspectives while appropriately untangling the dissonance of their differences is basically the core of both AI and sociology.
But instead we see artists get miffed because the Warner/Disney model of the art economy might not be compatible with reality and the positive growth of our species.
I hope we can at least both agree: eat the rich, and bring back more diverse systems that can check and balance each other.