AI Consciousness: A Reply to Schwitzgebel

Guest post by Susan Schneider

If AI outsmarts us, I hope it's conscious. It might help with the horrifying control problem – the problem of how to control superintelligent AI (SAI), given that SAI would be vastly smarter than us and could rewrite its own code. Just as some humans respect nonhuman animals because animals feel, so too, conscious SAI might respect us because they see within us the light of conscious experience.

So, will an SAI (or even a less intelligent AI) be conscious? In a recent TED talk, Nautilus and Huffington Post pieces, and some academic articles (all at my website), I've been urging that this is an important open question.

I love Schwitzgebel's reply because he sketches the best possible scenario for AI consciousness: noting that in humans, conscious states tend to be associated with slow, deliberative reasoning about novel situations, he suggests that SAIs may endlessly invent novel tasks – e.g., perhaps they posit ever more challenging mathematical proofs, or engage in an intellectual arms race with competing SAIs. So SAIs could still engage in reasoning about novel situations, and thereby be conscious.

Indeed, perhaps SAI will deliberately engineer heightened conscious experience in itself, or, in an instinct to parent, create AI mindchildren that are conscious.

Schwitzgebel gives further reason for hope: "...unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing." He also writes: "Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing."

Both of us agree that leading scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus, and that these approaches also associate consciousness with some sort of broad information sharing from a central system or global workspace (see Ch. 2 of my The Language of Thought: A New Philosophical Direction, where I mine Baars' Global Workspace Theory for a computational approach to LOT's central system).

Maybe it is just that I'm too despondent since Princess Leia died. But here are a few reasons why I still see the glass as half empty:

a. Eric's points assume that reasoning about novel situations, and centralized, deliberative thinking more generally, will be implemented in SAI in the same way they are in humans – i.e., in a way that involves conscious experience. But the space of possible minds is vast: There could be other architectural ways to get novel reasoning, central control, etc. that do not involve consciousness or a global workspace. Indeed, if we merely consider biological life on Earth we see intelligences radically unlike us (e.g., slime molds, octopuses); there will likely be radically different cognitive architectures in the age of AGI/superintelligence.

b. SAI may not have a centralized architecture in any case. A centralized architecture is a place where highly processed information comes together from the various sensory modalities (including association areas). Consider the octopus, which apparently has more neurons in its arms than in its brain. The arms can carry out activity without the brain; these activities do not need to be coordinated by a central controller or global workspace in the brain proper. Maybe a creature already exists, elsewhere in the universe, that has even less central control than the octopus.

Indeed, coordinated activity doesn't require that a brain region or brain process be a place where it all comes together, although it helps. There are all kinds of highly coordinated group activities on Earth, for instance (the internet, the stock market). And if you ask me, there are human bodies that are now led by coordinated conglomerates without a central controller. Here, I am thinking of split-brain patients, who engage in coordinated activity (i.e., the right and left limbs seem to others to be coordinated). But the brain has been split through severing of the corpus callosum, and plausibly, there are two subjects of experience there. The coordination is so convincing that even the patient's spouse doesn't realize there are two subjects there. It takes highly contrived laboratory tests to determine that the two hemispheres are separate conscious beings. How could this be? Each hemisphere examines the activity of the other hemisphere (the right hemisphere observes the behavior of the limb it doesn't control, etc.). And only one hemisphere controls the mouth.

c. But assume the SAI or AGI has a cognitive architecture similar to ours; in particular, assume it has an integrated central system or global workspace (as in Baars' Global Workspace Theory). I still think consciousness is an open question here. The problem is that only some implementations of a central system (or global workspace) may be conscious, while others may not be. Highly integrated, centralized information processing may be necessary, but not sufficient. For instance, it may be that the very properties of neurons that enable consciousness – C1-C3, say – are not ones that AI programs need to reproduce to get AI systems that do the needed work. Perhaps AI programmers can get sophisticated information processing without needing to go as far as building systems that instantiate C1-C3. Or perhaps a self-improving AI may not bother to keep consciousness in its architecture, or, lacking consciousness, it may not bother to engineer it in, as its final and instrumental goals may not require it. And who knows what their final goals will be; none of the instrumental goals Bostrom and others identify require consciousness (goal content integrity, cognitive enhancement, etc.).

Objection (Henry Shevlin and others): am I denying that it is nomologically possible to create a copy of a human brain, in silicon or some other substance, that precisely mimics the causal workings of the brain, including consciousness?

I don't deny this. I think that if you copy the precise causal workings of cells in a different medium you could get consciousness. The problem is that it may not be technologically feasible to do so. (An aside: for those who care about the nature of properties, I reject pure categorical properties; I have a two-sided view, following Heil and Martin. Categoricity and dispositionality are just different ways of viewing the same underlying property—two different modes of presentation, if you will. So consciousness properties that have all and only the same dispositions are the same type of property. You and your dispositional duplicate can't differ in your categorical properties then. Zombies aren't possible.)

It seems nomologically possible that an advanced civilization could build a gold sphere the size of Venus. What is the probability this will ever happen, though? This depends upon economics and sociology – a civilization would need to have a practical incentive to do this. I bet it will never happen.

AI is currently being built to do specific tasks better than us. This is the goal, not reproducing consciousness in machines. It may be that the substrate used to build AI is not a substrate that instantiates consciousness easily. Engineering consciousness in may be too expensive and time consuming – like building the Venus-sized gold sphere. Indeed, given the ethical problems with creating sentient beings and then having them work for us, AI programmers may aim to build systems that aren't conscious.

A response here is that once you get a sophisticated information processor, consciousness inevitably arises. Three things seem to fuel this view: (1) Tononi's integrated information theory (IIT). But it seems to have counterexamples (see Scott Aaronson's blog). (2) Panpsychism/panprotopsychism. Even if one of these views is correct, the issue of whether a given AI is conscious is about whether the AI in question has the kind of conscious experience macroscopic subjects of experience (persons, selves, nonhuman animals) have. Merely knowing whether panpsychism or panprotopsychism is true does not answer this. We need to know which structural relations between particles lead to macroexperience. (3) Neural replacement cases, i.e., thought experiments in which you are asked to envision replacing parts of your brain (at time t1) with silicon chips that function just like neurons, so that in the end (t2), your brain is made of silicon. You are then asked: intuitively, are you still conscious? Do you think the quality of your consciousness would change? These cases only go so far. The intuition is plausible that from t1 to t2, at no point would you lose consciousness or have your consciousness diminished (see Chalmers, Lowe and Plantinga for discussion of such thought experiments). This is because a dispositional duplicate of your brain is created, from t1 to t2. If the chips are dispositional duplicates of neurons, sure, I think the duplicate would be conscious. (I'm not sure this would be a situation in which you survived, though – see my NYT op-ed on uploading.) But why would an AI company build such a system from scratch, to clean your home, be a romantic partner, advise a president, etc.?

Again, it is not clear, currently, that just by creating a fast, efficient program ("IP properties") we have also copied the very same properties that give rise to consciousness in humans ("C properties"). It may require further work to get C properties, and in different substrates, it may be hard – far more difficult than building a biological system from scratch. Like creating a gold sphere the size of Venus.
