Scott Aaronson on the Relevance of Quantum Mechanics to Brain Preservation, Uploading, and Identity
Biography: Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. His research interests center on the capabilities and limits of quantum computers, and on computational complexity theory more generally. He has also written about consciousness and personal identity, and about the relevance of quantum mechanics to these issues.
Michael Cerullo: Thanks for taking the time to talk with me. Given the recent advances in brain preservation, questions of personal identity are moving from the merely academic to the extremely practical. I want to focus on your ideas about the relevance of quantum mechanics to consciousness and personal identity, which are found in your paper “The Ghost in the Quantum Turing Machine” (http://arxiv.org/abs/1306.0159), your blog post “Could a Quantum Computer Have Subjective Experience?” (http://www.scottaaronson.com/blog/?p=1951), and your book “Quantum Computing since Democritus” (http://www.scottaaronson.com/democritus/).
Before we get to your own speculations in this field, I want to review some of the prior work of Roger Penrose and Stuart Hameroff (http://www.quantumconsciousness.org/content/hameroff-penrose-review-orch-or-theory). Let me try to summarize some of the criticism of their work (including some of your own critiques of their theory). Penrose and Hameroff abandon conventional wisdom in neuroscience (i.e., that neurons are the essential computational elements in the brain) and instead posit that microtubules (which conventional neuroscience tells us are involved in nuclear and cell division, organization of intracellular structure, and intracellular transport, as well as ciliary and flagellar motility) are an essential part of the computational structure of the brain. Specifically, they claim the microtubules are quantum computers that grant a person the ability to perform non-computable computations (and Penrose claims these kinds of computations are necessary for things like mathematical understanding). The main critiques of their theory are: it relies on results in quantum gravity that do not yet exist; there is no empirical evidence that microtubules are relevant to the computational function of the brain; work on quantum decoherence makes it extremely unlikely that the brain is a quantum computer; and even if a brain could somehow compute non-computable functions, it isn’t clear what this has to do with consciousness. Would you say these are fair criticisms of their theory, and are there any other criticisms you see as relevant?
Scott Aaronson: Yes, I think all four of those are fair criticisms! I could add a fifth criticism: Penrose’s case for the brain having non-computational abilities relies on an appeal to Gödel’s Incompleteness Theorem, to the idea that no machine working within a fixed formal system can prove the system’s consistency, whereas a human can “just see” that it’s consistent. But like most mathematicians and computer scientists, I don’t agree with that argument, because I think a machine could show all the same external behavior as a human who “sees” a formal system’s consistency. So then, the argument devolves into one about indescribable inner experiences, of “just seeing” (for example) that set theory is consistent. But if we wanted to rest the case on indescribable inner experiences, then why not forget about Gödel’s Theorem, and just talk about less abstruse things like the experience of falling in love or tasting strawberries or whatever?
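For readers who want the precise statement: the result Penrose leans on is Gödel’s second incompleteness theorem, which in one standard formulation says

$$T \text{ consistent and recursively axiomatizable, } T \supseteq \mathsf{PA} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T),$$

i.e., no such formal system $T$ can prove the arithmetized statement of its own consistency. The disputed step is whether a human mathematician, unlike a machine working within $T$, can nonetheless “just see” that $\mathrm{Con}(T)$ holds.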
Michael Cerullo: Your own work in this field attempts to show the relevance of quantum mechanics to consciousness without requiring us to abandon what we know from neuroscience. You also state that the motivation for some of your speculations is to avoid the seeming paradoxes (e.g., Boltzmann brains, duplicates, degrees of identity, etc.) that would occur if personal identity could be copied as easily as any other type of information. Can you expand on this?
Scott Aaronson: To my mind, one of the central things that any account of consciousness needs to do, is to explain where your consciousness “is” in space, which physical objects are the locus of it. I mean, not just in ordinary life (where presumably we can all agree that your consciousness resides in your brain, and especially in your cerebral cortex—though which parts of your cerebral cortex?), but in all sorts of hypothetical situations that we can devise. What if we made a backup copy of all the information in your brain and ran it on a server somewhere? Knowing that, should you then expect there’s a 50% chance that “you’re” the backup copy? Or are you and your backup copy somehow tethered together as a single consciousness, no matter how far apart in space you might be? Or are you tethered together for a while, but then become untethered when your experiences start to diverge? Does it matter if your backup copy is actually “run,” and what counts as running it? Would a simulation on pen and paper (a huge amount of pen and paper, but no matter) suffice? What if the simulation of you was encrypted, and the only decryption key was stored in some other galaxy? Or, if the universe is infinite, should you assume that “your” consciousness is spread across infinitely many physical entities, namely all the brains physically indistinguishable from yours—including “Boltzmann brains” that arise purely by chance fluctuations?
It’s very easy to get disoriented, to feel a sense of vertigo, thinking about all these science-fiction puzzles. But before we tie ourselves in knots, perhaps one response is to step back and think hard about which of these scenarios are actually possible, according to the laws of physics as we currently understand them. For example, could you actually copy all the functionally-relevant information in a human brain, convert it to digital form, without an invasive scan that would kill the brain in the process? Well, the answer to that question hinges on how much information about a brain you think is “functionally relevant.” If you believe the brain has a “clean digital abstraction layer” containing all the information relevant to consciousness—say, the neurons, their wiring diagram, the approximate synapse strengths, a few other things—and that that layer “notices” the underlying molecular layer at most as a thermal noise source, then presumably the answer is yes, a sufficiently advanced civilization could upload your consciousness to a computer and thereafter make as many copies of it as it wanted. If, on the other hand, you believe that microscopic details of your brain—e.g., the exact quantum state of some sodium-ion channel, which might later get amplified to macroscopic scale and influence whether a neuron fires, etc.—are an important part of your personal identity, then the rules of quantum mechanics would generally rule out making a sufficiently precise copy of those details, so that some of these science-fiction scenarios couldn’t even get off the ground. So, I don’t have answers, but those are the sorts of questions that I’ve tried to draw attention to, because I think progress on them might actually be possible.
Michael Cerullo: Before I get into your thoughts on freebits and the arrow of time, I want to discuss the relevance of the quantum no-cloning theorem to identity. To remind our readers, the no-cloning theorem states that it is impossible to make an exact copy of an arbitrary unknown quantum state. In quantum computing the relevance of the no-cloning theorem is obvious: error correction based on simply copying qubits is impossible. Now let’s jump to the macroscopic scale. Information is copied with high fidelity all the time at the nanoscale: nature does this every time DNA is copied during cell division, and we do it whenever we copy files on a computer or burn pits onto a Blu-ray disc. The information in these systems can be completely described at the classical level, and of course this brings up the quantum measurement problem and the question of how the classical world emerges from the quantum world (which no one really understands). Neuroscience seems to tell us that identity (i.e., memory and personality) is encoded in the connections and strengths of neural synapses, which can be completely modeled with classical physics. Given this, it would seem that personal identity can be wholly described within classical physics and is more like the information on a Blu-ray than like a qubit, and thus that the no-cloning theorem isn’t relevant at this scale. Can you tell me why you disagree, and how no-cloning may be relevant to the brain, consciousness, or identity?
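To spell the theorem out for readers: the standard argument is a short consequence of unitarity. Suppose some fixed unitary $U$ could copy an arbitrary state onto a blank register,

$$U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle \quad \text{for every } |\psi\rangle.$$

Applying this to two states $|\psi\rangle$ and $|\phi\rangle$, and using the fact that unitaries preserve inner products,

$$\langle\phi|\psi\rangle \;=\; \big(\langle\phi|\langle 0|\big)\,U^\dagger U\,\big(|\psi\rangle|0\rangle\big) \;=\; \langle\phi|\psi\rangle^2,$$

so $\langle\phi|\psi\rangle$ must equal $0$ or $1$. Identical or perfectly distinguishable (orthogonal) states can be copied, which is why DNA and Blu-ray bits pose no problem, but no device can clone unknown, non-orthogonal quantum states.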
Scott Aaronson: As I said, I don’t know. On the one hand, I find Penrose and Hameroff’s speculations about quantum gravity effects in microtubules to be totally implausible. But on the other hand, even the most hardheaded neuroscientist is going to model action potentials in neurons using the Hodgkin-Huxley equations, which treat neural firings as partly stochastic events—i.e., events that are influenced by molecular details that are treated as outside neuroscience’s scope. And it’s not even particularly controversial to say that this creates a causal path for quantum indeterminism to get chaotically amplified, and eventually influence (say) the course of a human decision. The question, of course, is whether any of that matters. In my way of thinking, the question becomes: could an external observer, using far-future technology, decompose everything in your brain into (a) a “digital, classical layer” that can be scanned and copied, and (b) a “thermal noise layer” that can’t be copied, but that can safely be ignored with no effect on your personal identity? So, I dunno: is it obvious to you that the answer is yes?
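For reference, the deterministic core of the Hodgkin-Huxley model is a current-balance equation for the membrane voltage $V$,

$$C_m \frac{dV}{dt} \;=\; -\,\bar{g}_{\mathrm{Na}}\,m^3 h\,(V - E_{\mathrm{Na}}) \;-\; \bar{g}_{\mathrm{K}}\,n^4\,(V - E_{\mathrm{K}}) \;-\; g_L\,(V - E_L) \;+\; I_{\mathrm{ext}},$$

where the gating variables $m$, $h$, and $n$ each evolve as $\dot{x} = \alpha_x(V)(1 - x) - \beta_x(V)\,x$. The stochastic part enters because these gating variables are averages over a finite number of discrete ion channels whose individual openings and closings are random events; near the firing threshold, that channel noise, which is exactly where molecular (and ultimately quantum) details live, can tip the decision of whether a neuron spikes.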
Michael Cerullo: Now I want to discuss some of your thoughts about complexity, the arrow of time, and freebits, and their relation to identity. Rather than try to summarize your arguments for freebits and the arrow of time, I will refer our readers to your very readable paper “The Ghost in the Quantum Turing Machine” and one of your blog posts where you discuss these issues (http://www.scottaaronson.com/blog/?p=1951). One of the limitations of computationalism is that no one quite understands what exactly it means to implement a computation. Many people agree there is something wrong with saying that a lookup table that could pass a Turing test is conscious, even if this lookup table is implemented in the real world. I share this intuition, and you have also mentioned your doubts about this possibility. In your book “Quantum Computing since Democritus” you discuss this question and how it may be related to questions of computational complexity. Implementing the Turing-test lookup table would require resources that grow exponentially with the length of the conversation. Hence having a lookup table that could pass a Turing test doesn’t really help you make a program that can pass the Turing test in the real world. I can’t help but be reminded of Borges’ Library of Babel here. Having the Library of Babel doesn’t really give you any information, since searching it for a meaningful book would be no easier than writing the book yourself. Is this a fair summary of your current views? Any thoughts on how complexity and the arrow of time may be related to the question of implementing a computation?
Scott Aaronson: Well, the lookup table is sort of the extreme version of some of the thought experiments that we discussed earlier. If anything that passes a Turing Test is conscious, then what about a huge table that just stores your replies in every five-minute conversation I could possibly have with you? Would it even matter if anyone consulted the table, or could it just sit there, silently bringing about your consciousness? (And for that matter, why does the lookup table even need to be physically built? Why isn’t its abstract existence, as a function mapping inputs to outputs, enough to bring about your consciousness? That’s a slippery slope that Max Tegmark, for example, with his “Mathematical Universe Hypothesis,” is happy to ride all the way to the bottom!)
Now, some people point out that such a lookup table would require size that grows exponentially in the length of the conversation—so in particular, it would very quickly exceed the storage capacity of the observable universe. And some of them might go even further, and conjecture that any simulation of you that didn’t suffer such an exponential explosion would need to have memories, internal representations of concepts, etc. that might of course differ in detail from the way your brain organizes things, but would still be “vaguely brain-like”—and that would therefore, in their view, bring about consciousness for the same organizational reasons why your brain brings about consciousness (reasons that wouldn’t apply to the lookup table).
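To see how quickly that exponential explosion outruns physics, here is a back-of-envelope sketch. The parameters are rough assumptions (a typing rate of about two characters per second, a 27-symbol alphabet, and the roughly $10^{122}$-bit holographic bound on the observable universe’s storage capacity), so only the orders of magnitude matter:

```python
import math

# Rough, assumed parameters -- only orders of magnitude matter here.
chars_per_conversation = 5 * 60 * 2   # 5 minutes at ~2 typed characters/second
alphabet_size = 27                    # 26 letters plus a space, ignoring punctuation
log10_universe_bits = 122             # assumed holographic bound on universal storage

# The table needs one entry per possible conversation it might be shown.
log10_entries = chars_per_conversation * math.log10(alphabet_size)

print(f"possible five-minute inputs:  ~10^{log10_entries:.0f}")            # ~10^859
print(f"bits the universe can store:  ~10^{log10_universe_bits}")
print(f"entries per available bit:    ~10^{log10_entries - log10_universe_bits:.0f}")
```

Even at one bit per entry, the table overshoots the universe’s information capacity by some 700 orders of magnitude, which is the force of the objection described above.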
I still haven’t figured out what I think about that position, but I do find it fascinating—not only because of how it brings one of my favorite subjects (polynomial versus exponential complexity) into the discussion of consciousness, but also because of how it answers a philosophical thought experiment (would the giant lookup table be conscious, or not?) by questioning the experiment’s premises, by asking whether the lookup table, or anything like it, could exist in our universe. In that respect, it’s analogous to what I tried to do in my “Ghost in the Quantum Turing Machine” essay: namely, to take crazy philosophical thought experiments (in my case, involving perfect copies of you), but then ask different questions about them than the ones you’re “supposed” to ask—questions about whether our best current theories of physics, cosmology, computer science, and so forth predict the experiments can be done or not.
Michael Cerullo: How about freebits? Do you think they have any relationship to implementing a computation?
Scott Aaronson: “Freebits” are just a label for whatever it is that you believe in, if you think the answer to my earlier question about copyability is “no”: that is, if you think that, even with arbitrarily advanced technology, it won’t be possible to scan your brain accurately enough to make copies that are probabilistically indistinguishable from the original. Freebits are bits about the physical state of your brain (and ultimately, about your behavior) that the copying procedure would necessarily miss.
To make myself clear: unlike in Penrose and Hameroff’s model, freebits are not “oracles” that let you solve uncomputable problems, or do anything else that defies a conventional physical understanding of the brain. So for example, even if I didn’t know any of the freebits relevant to you, I see no reason at all why I couldn’t build a second brain that was extremely similar to yours, that not only passed the Turing Test but fooled a lot of people into thinking it was you, that behaved similarly to you in most situations. But by hypothesis, the copy wouldn’t behave like you in all situations, and the differences could serve as a sort of empirical certificate that your consciousness hadn’t been cleaved into two, or transferred from one physical substrate to another, or anything like that. A second consciousness might or might not have been brought into being. But at any rate, the “original” you would be inextricably bound up with microscopic, unclonable details of your brain state that aren’t magical, don’t give you any computational superpowers or anything like that, but are part of how we localize which physical entity we’re talking about when we talk about “you.”
Michael Cerullo: In your paper “The Ghost in the Quantum Turing Machine” you discuss how freebits may help to solve the problem of free will by preventing any perfect prediction of human behavior. In this paper you seem to be suggesting that free will is necessary for consciousness and therefore for any conception of personal identity. Can you expand on this?
Scott Aaronson: Look, I have no idea whether free will in the sense that interests me (that is, the sense of in-principle unpredictability) is necessary for consciousness. The one thing I’m confident about is that, if it’s not necessary, then any account of consciousness will have to solve all sorts of thorny conceptual problems that it could otherwise avoid. For in that case, one and the same intelligent being—that is, a being that responds to all possible stimuli in the same way—could be copied promiscuously all over the universe and transferred to countless physical substrates: not only to digital electronics but to pen-and-paper, even giant lookup tables, etc. And we’d then have to confront questions about which of those copies “is you,” what you should do if someone asks you to place a bet about “which one you are,” and so on. Any algorithm that took as input a description of the entire universe, and tried to locate the “you” parts of it, would then have to be much more complicated!
Michael Cerullo: In neurology, there are syndromes in which people seem to believe they have no free will, or that they have control over actions they clearly do not. Doesn’t this suggest that free will is simply one more quale: the feeling that we (whatever “we” is; on something like Baars’ theater-of-consciousness approach, for example, it needn’t involve a homunculus) have control over our actions?
Scott Aaronson: To my way of thinking, free will is special because it’s bound up with the predictability of your actions—and in particular, with the question of whether it’s possible to create a second entity that behaves indistinguishably from you, and which an empiricist like me would therefore have to say is you, is a second copy or instantiation of you that inhabits the same world. I like that framing precisely because it’s not about your subjective feeling of freedom or lack of freedom: rather, we’re asking an actual, bona fide question about the physical universe that could turn out one way or the other.
You know, when I wrote an 85-page essay about these issues, I tried as hard as I could not to rely even once on introspection or “what it feels like” to make a choice, because that strikes me as just an obvious nonstarter. I mean, introspection can’t even tell us vastly simpler, more uncontroversial things about how our minds work, like how our visual systems pick out triangles and squares, stuff like that. And given all the moral, philosophical, and theological issues with which free will is entangled, it seems obvious that people could “feel like” they had free will (or say that they felt that way, or convince themselves they did) even if they didn’t, and vice versa.
By contrast, I want a notion of “free will” that’s clear and well-defined enough that someday, in principle, we could tell people that they had no free will even if they felt sure they had it, or conversely, tell them they had it even if they felt sure they didn’t. And focusing on the in-principle predictability of our actions seems to me like a huge step in that direction.
Michael Cerullo: Thanks for taking time to talk with me, I look forward to reading more of your work on these issues.