7 Comments

I think it is extremely suspicious that AGI based on Turing Machines is possible. The main feature of consciousness is that it is reflexive (I am conscious that I am conscious). It is very hard to set up formal systems that are both consistent and self-referential (observe that the GPTs of the world enter a kind of Larsen-like feedback state when you play too much with reflexivity).
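
That Larsen effect is easy to picture as a loop that feeds a model's own answer back to it as the next prompt. A minimal sketch, with complete() as a hypothetical stand-in for any chat-model call (not a real API):

    # Toy feedback loop: the model's reply becomes the next prompt.
    # complete() is a hypothetical stub standing in for a real chat-model call.
    def complete(prompt: str) -> str:
        return "I am now describing the fact that: " + prompt  # stub reply

    prompt = "Describe yourself describing yourself."
    for step in range(5):
        reply = complete(prompt)
        print(f"step {step}: {reply[:70]}")
        prompt = reply  # reflexivity: feed the answer back in; outputs tend to degenerate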

That point aside, there is a difference between intelligence and will. So far no AI has displayed a glimpse of will (it answers, but it never does things on its own). And were an AI to wage war on humanity, its superintelligence would only be a small part of the picture. The first problem the AI would have to solve is the problem of incarnation: AIs are just programs running inside a box.

https://open.substack.com/pub/spearoflugh/p/the-mystery-of-ai-incarnation

And a final point on this idea that intelligence is not enough: the economic calculation problem remains in full, even supposing that something like "superior intelligence" exists. Because of the contingencies of the real world, it is not immediate to translate an idea, however bright it may be, into reality. One way humanity discovered is the use of markets: many people try many things, a lot of them fail (because of bad luck, e.g. an unforeseeable weather event), and the others adapt and use the price mechanism to adjust.

An AI would have to perform the same back and forth between ideas and reality. There is no magic.
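
To make the point concrete, here is a toy sketch of trial, failure, and price-based adjustment; every number and rule in it is an invented placeholder, not a model of any real market:

    # Toy market: producers guess an output level, random shocks knock some out,
    # and the survivors adjust toward what the clearing price rewards.
    import random

    random.seed(0)
    producers = [random.uniform(1, 10) for _ in range(20)]   # each guesses an output level
    demand = 100.0

    for season in range(5):
        supply = sum(producers)
        price = demand / supply                                # crude clearing price
        survivors = []
        for q in producers:
            shock = random.random()                            # bad luck: weather, etc.
            profit = price * q - q * shock                     # revenue minus shock-driven cost
            if profit > 0:
                survivors.append(q + 0.1 * (price - 1.0) * q)  # adjust using the price signal
        # if everyone fails, a new cohort of entrants tries again
        producers = survivors or [random.uniform(1, 10) for _ in range(20)]
        print(f"season {season}: {len(producers)} producers, price {price:.2f}")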


I totally agree. Something often missed in conversations about conscious AI is that it's neither necessary nor desirable. Every technology learned or simulated from biological systems to date has been reduced down to only its beneficial features. That's what receives funding and continues to be used by us going forward. AI has been better than consciousness at particular tasks since long before neural networks. Also, there is no possibility of consciousness becoming an emergent property of software. The Chinese Room argument by Searle conclusively proves this. If consciousness somehow occurs in software systems, it will be a research novelty and will probably have no commercial applications.


I'm personally skeptical of LLMs having generalized intelligence, but I don't think it is necessarily impossible. This is not going to stop powerful LLMs from gradually replacing email and coding jobs, though.

I am going to do a post soon about inner voice - as many as 15% or so of Scott Alexander's readers don't have internal monologues. I don't think these people are conscious the way other humans are conscious; for them, consciousness merely emerges as the sum of their bodily awareness and five senses, while in others it also includes self-directed hallucinations. Could an LLM hypothetically have consciousness, in the sense of self-directed hallucinations? I think it is possible, but I don't think that translates to general intelligence.

As for the paperclip thing, I think that any system capable enough to destroy humans by turning them into paperclips would also be able to reason that the humans it depends on would not appreciate that. What really matters in AI risk is the power struggle.

Alignment is also impossible, for several reasons. First, mistakes in alignment are too difficult to correct for it to be optimized as a scientific field. Second, human utility functions are too difficult to articulate in just a block of code, and humans themselves differ on what they consider acceptable. Lastly, even if alignment is solved, it turns into different AGIs competing against each other for dominance, convergent instrumental goals, and individual goals. Not exactly much of an improvement.
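
To illustrate the second point, here is a deliberately naive sketch of what "a utility function in a block of code" looks like. Every term and weight is an invented placeholder, and the omissions are exactly the problem:

    # A hand-written utility function over a hypothetical world-state summary.
    # Everything not listed here counts for nothing, which is how
    # paperclip-style failures sneak in.
    def utility(outcome: dict) -> float:
        return (
            1.0 * outcome.get("humans_alive", 0)
            + 0.5 * outcome.get("average_happiness", 0.0)
            - 2.0 * outcome.get("resources_destroyed", 0.0)
        )

    print(utility({"humans_alive": 8_000_000_000, "average_happiness": 0.6}))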

Apr 7, 2023 · edited Apr 7, 2023 · Liked by Max

I believe that everything is ready for the Singularity to begin, with GPT-4 having practical self-reflection:

https://www.magyar.blog/p/singularity-we-have-all-the-loops

This ability should emerge in other LLMs of similar complexity. It opens the door to unsupervised self-improvement.

Other trends also point in this direction; they are listed in the post.
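
The loop itself can be sketched in a few lines. This is not taken from the post; llm() is a hypothetical stand-in for any chat-model call, and the generate-critique-revise structure is the point:

    # Minimal self-reflection loop: draft, critique the draft, revise, repeat.
    def llm(prompt: str) -> str:
        return "stub answer to: " + prompt   # replace with a real API call

    task = "Write a function that reverses a string."
    draft = llm(task)
    for _ in range(3):
        critique = llm(f"Find mistakes in this answer to '{task}':\n{draft}")
        draft = llm(f"Rewrite the answer, fixing these issues:\n{critique}\n\nOriginal:\n{draft}")
    print(draft)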


A link isn’t handy, but this summary from ChatGPT is decent:

“John R. Searle's Chinese Room argument is a thought experiment intended to challenge the idea that a computer program or algorithm can achieve true understanding or consciousness.”

“The thought experiment imagines a person in a room who doesn't understand Chinese, but has a set of rules that allow them to correctly answer any Chinese language question posed to them. The person receives a set of symbols (which they don't understand) and follows the rules to produce an answer in Chinese.”

“Searle argues that while the person in the room can correctly produce answers in Chinese, they still don't understand the language. They are merely manipulating symbols according to a set of rules, without any real comprehension.”

“This is meant to illustrate a larger point about computers and artificial intelligence: just because a computer can process information and produce seemingly intelligent responses, it doesn't necessarily mean it understands what it's doing or has any conscious experience.”

“In other words, the Chinese Room argument suggests that there is more to consciousness and intentionality than mere symbol manipulation, and that these qualities cannot be achieved solely through software or algorithms.”

A supplement to this argument is a TED talk he gives, which serves as a good summary of his general position: https://youtu.be/eqDgt12m26c
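
To make the "mere symbol manipulation" point concrete, the room can be caricatured as a lookup table. The rule book and questions below are invented examples, not Searle's:

    # The "person in the room" just matches shapes against a rule book,
    # with no model of what either the question or the answer means.
    RULES = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def room(symbols: str) -> str:
        return RULES.get(symbols, "对不起，我不明白。")

    print(room("你好吗？"))   # a fluent reply, produced with zero understanding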
