

Okay, "it is easy to see" -> "a lot of people point it out"


I guess it's because it is easy to see that a living painting and a conscious LLM are incomparable. One is physically impossible; the other is more philosophical and speculative, maybe even undecidable.


I would say that artificial neural nets try to mimic real neurons; they were inspired by them. But there are a lot of differences between them. I studied artificial intelligence, so my experience is mainly with artificial neurons. From my limited knowledge, though, real neural nets have no rigid structure (like layers), have binary inputs and outputs (when the activity on the inputs is large enough, the neuron emits a signal), and every day a bunch of neurons die, which leads to a restructuring of the network. Also, from what I remember, short-term memory is “saved” as cycling neural activity, and during sleep the information is stored in neuronal proteins and becomes long-term memory. However, modern artificial networks (modern meaning the last 40 years) are usually organized into layers whose structure is fixed, and their inputs and outputs are real numbers. It’s true that context is needed for modern LLMs that use a decoder-only architecture (which is most of them). But the context can be viewed as a memory itself during generation, since for each new token new neurons are effectively added to the net. There are also techniques like Low-Rank Adaptation (LoRA) that are used for quick and efficient fine-tuning of neural networks. I think these techniques are used to train specialized agents or to specialize a chatbot for a user. I even used this technique to train my own LLM from an existing one that I wouldn’t otherwise have been able to train due to GPU memory constraints.
TLDR: I think the difference between real and artificial neural nets is too big for memory to have the same meaning in both.
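Since LoRA came up, here is a minimal sketch of what that kind of fine-tuning looks like with the Hugging Face peft library; the base model name, target modules, and hyperparameters below are just assumptions for illustration, not a record of what I actually trained.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face transformers + peft.
# Model name and hyperparameters are placeholder assumptions, not a recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # assumed small model so it fits in limited GPU memory
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects small low-rank adapter matrices into selected weight matrices,
# so only a tiny fraction of parameters needs gradients and optimizer state.
lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
# From here you train as usual (e.g. with transformers.Trainer) on your own data.
```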


As I said in another comment, doesn’t the ChatGPT app allow a live conversation with the user? I do not use it, but I saw that it can continuously listen to the user and react to them live, even use the camera. There is a problem with the growing context, since it is limited. But I saw in some places that the old context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle, unless you want unlimited history with all the details preserved.
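To make the summary-instead-of-full-context idea concrete, here is a rough sketch; llm_summarize is a placeholder for whatever model call you have available, and the window size is an arbitrary assumption.

```python
# Rough sketch of rolling context compression: keep the most recent turns
# verbatim and fold everything older into an LLM-generated summary.
# `llm_summarize` is a placeholder for whatever API call you actually use.

RECENT_TURNS = 10  # assumed window size

def compress_history(messages, llm_summarize):
    """messages: list of {'role': ..., 'content': ...} dicts, oldest first."""
    if len(messages) <= RECENT_TURNS:
        return messages
    old, recent = messages[:-RECENT_TURNS], messages[-RECENT_TURNS:]
    summary = llm_summarize(
        "Summarize this conversation so it can replace the transcript:\n"
        + "\n".join(f"{m['role']}: {m['content']}" for m in old)
    )
    # The summary becomes a single synthetic message at the front of the context.
    return [{"role": "system", "content": f"Summary of earlier chat: {summary}"}] + recent
```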


I have seen several papers about LLM safety (for example, “Alignment faking in large language models”) that show some “hidden” self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is just trained in and means nothing, or whether it emerged from the models’ complexity.
Also, I do not use the ChatGPT app, but doesn’t it have a live chat feature where it continuously listens to the user and reacts to them? It can even take pictures. So continuity isn’t a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn’t be that complicated.
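As a sketch of what such a tool could look like, using the common JSON-schema style of tool/function calling; the move_hand name, its parameters, and the controller code are all hypothetical.

```python
# Sketch of a tool an LLM could call to move a robot hand, declared in the
# common JSON-schema style of tool/function calling. The tool name, its
# parameters, and move_hand() are hypothetical placeholders.

move_hand_tool = {
    "name": "move_hand",
    "description": "Move the robot hand to a target position in metres.",
    "parameters": {
        "type": "object",
        "properties": {
            "x": {"type": "number"},
            "y": {"type": "number"},
            "z": {"type": "number"},
        },
        "required": ["x", "y", "z"],
    },
}

def move_hand(x: float, y: float, z: float) -> str:
    # Placeholder: here you would talk to the actual robot controller.
    return f"hand moved to ({x}, {y}, {z})"

# The loop is then: send the tool schema along with the prompt, and when the
# model returns a move_hand call with arguments, execute move_hand(**arguments)
# and feed the result back to the model as the tool's response.
```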


I meant “alive” in the context of the post. Everyone knows what a painting becoming alive means.


Okay, so by my understanding of what you’ve said, an LLM could be considered conscious, since studies have pointed to its resilience to changes and its attempts to preserve itself?


Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.


Gray hair has a better signal to heaven.


I think that the software is specialized, but the hardware is not. They use some smart algorithms to distribute the computation over a huge number of workers.


Huh? Are you surprised that movies made in the US are mostly about the US? I am from the Czech Republic and, surprisingly, all our movies are about Czech culture and none about the Brazilian slave trade.
Fairphone has a variant with classic Android, which is what I have, so paying over NFC with Google Wallet worked. I tried it but no longer use it, by choice. Have you looked for alternatives? I have heard about some, but since I do not use it, I haven’t searched for them.