The Mind, The Body, And A Splash Of Nirvana


I consider myself a naturalist, which, for the purposes of this essay, we can take to mean that everything that we’re going to come into contact with is part of the same basic natural world, and subject to some sort of underlying set of principles. It’s not a perfect definition but it’s a place to jump off from. This essay isn’t meant to discuss naturalism specifically. The important point is that we start the discussion agreeing that our subject is something we can discuss rationally, and that we can’t answer questions by saying, “ghosts,” in a quiet voice. Reality, as I’ve said before, is all around us, not some ineffable monstrosity from the depths of the dreamlands.

However, it’s still important to recognize the limitations of our senses and the importance of reason in uncovering the nature of our Universe. Obviously, there are many things that are part of the natural world which we can reason must exist but which can’t be experienced directly through our senses. And it’s important to further recognize the limitations of our reason and intuitions in forming accurate descriptions of the world. If, as I will argue, we approach the world fundamentally through various forms of language, we have to acknowledge that those languages may contain limitations which prevent us from describing the world with absolute accuracy. It’s in these failures of intuition, I believe, that lie many of our mistaken beliefs about subjects such as dualism, reductionism, the mind-body problem, and computationalism. So let’s jump right in.

If you open up a skull, you’ll only find one thing: a brain. You don’t find spirits blowing around and moaning. Maybe they’re invisible, and don’t interact with matter… but probably not. Apart from the brain, though, there is some kind of mind in there, too, and there are various ways to go about explaining this. My preferred method is to question why we’re assuming that the mind and the brain are different things in the first place. What seems to me most likely is that there is only one thing, and that the words “mind” and “brain” just represent two different ways of thinking about that one thing. We call it a brain when we want to think about individual brain structures or neurons, and we call it a mind when we want to talk about the pattern of neural impulses which the brain is forming. But there’s no clear distinction between these two. One gives us a more convenient description of the brain when we want to talk about what people are thinking, and the other gives us a description of why there is an oily mess on the floor of the laboratory. “Oops, I dropped the brain.” You don’t say that you dropped a mind, but that’s what you did.

What these two words are referring to are not different things that exist in the real world, but different mental processes that we have for dealing with one thing. When you dropped the brain, which you will have to pay for, by the way, you created a new pattern of neural hardware. This could be described in mind terms, but we don’t have a word for it, because most people don’t suddenly think themselves into oblivion, unless of course they’re Buddhists. Which I suppose means that the best way to describe the emptiness achieved upon splatter is nirvana. Seriously, I do believe that meditation has its benefits; I’m not trying to be mean.

Probably the best argument against this would be one along the lines of the reductionism vs. holism vs. emergentism debate. But again, I think that what these words really represent, rather than real differences in the natural world, are limitations of our thinking. Neurons are what we see when we think of things reductionistically, and minds are what we see when we think of things through (depending on your preference) a form of holism or emergentism. What this means isn’t that the world contains no differences within itself, just that dividing those differences into different strata of associations, such as through emergentism or reductionism, is a helpful mental process and not a fully accurate representation of the world. When an object exhibits properties which are not fully explainable through reductionist methods, and we say that it exhibits emergent properties, what we’re doing is saying that our thinking has run out of space in the reductionist department, so now we’d better fire up the emergentism software in order to get a clearer picture. There is a real difference between things that exhibit emergent properties and things without them, and we should continue to try to clarify what that difference is, but we shouldn’t assume that one view is right and one is wrong; both are ultimately wrong, though each is helpful in its own way.

This is true of all of our perceptions. When we see a physical entity, such as a doll, a pineapple, or Kate Upton, we accumulate a bit of information about the object, and we build a mental model telling us what that thing is. But of course it’s not an exhaustively researched model. It doesn’t include the position of every quark in the object, or a completely accurate projection of what will happen to that object in the next few decades. Which is fine. It suits our purposes. We don’t really need to know about the quarks for most purposes, and if we do want to find out about them, we’ll look closer and expand our model. But we’ll never come to a complete picture of what that object is. We can refine and refine our model, but it’s still a model, and not Kate Upton, who is still Kate Upton and not a model. Or whatever.

I would hesitate to say that this is a necessary condition of consciousness itself, but it does seem to be a necessary condition of human consciousness, which is what we’re concerned with. And I would argue that a similar principle applies to arguments about reductionism and emergentism. We can see that Kate Upton is not a pineapple, and we can see that reductionism is not emergentism, but a complete description of why that is is probably not possible. Which is not to say that we should stop trying to refine our picture of what’s going on, just that we should tentatively accept that we don’t need a definitive answer to these questions to keep moving on in philosophy. We’ll probably never have the definitive answer, just progressively better and better answers. Sometimes you have to accept that your answer is good enough for the moment to be getting on with. If this sounds like a way of avoiding the argument, then you’re right, it is, but I think it’s a well-founded one. I think it does establish that, although emergence is good to talk about, it doesn’t mean that emergent properties are separated from their reductionist foundations in any real way. Emergence will never become dualism; minds and brains are still ways of talking about the same thing, even if minds are emergent properties of brains.

Why this is so perplexing, intuitively, is that we are always trying to reconcile these opposing views. We are trying to create a model of emergentism using reductionist principles, and a model of reductionism using principles of emergentism, and it’s not working out. We’re looking for some kind of Unified Strata Theory, but if there is one out there, I’m not aware of it. I don’t know if we should really expect it to work out. If we find reductionism failing, and begin to think of things in terms of emergentism, then why would it make sense to turn around and explain emergentism through reductionism? Or vice versa?

This is why many of us are so tempted by dualism. It’s neater. It’s easier to think that when you’re thinking in reductionist terms, you’re talking about one thing, and when you’re thinking in emergentist terms, you’re talking about another. But I think the general evidence points to this not being true. If neural correlates of mental phenomena are only correlates, and nothing else, why do we lose mental function when we lose the correlates, never to get it back? Why is brain damage so damaging? I think the simplest and best answer is just that there is no real difference between neural correlates and the mental states they correlate to. They’re two sides of the same coin. Or rather, the same coin in two different lights.

This is what we have evolved. Not an in-depth analysis of every concept we come across, but convenient mental models which we switch out as need be.

Computationalism And The Chinese Room

John Searle’s famous Chinese Room thought experiment goes something like this: A man who doesn’t understand Chinese is in a room with a computer. Chinese characters are slipped to him underneath the door, and he puts them into the computer. The computer then gives him instructions on what to reply. He writes new characters down based on the instructions given by the computer, and slips them back through the door. Chinese speakers outside are convinced, based on the replies, that there is a Chinese speaker inside.

Next, the computer is replaced by a pad of paper, a pencil, and a book of instructions. The man inside still does not understand Chinese or have any understanding of the instructions, but he follows them, and they represent exactly the algorithm used by the computer. He slips papers out, and again the people outside are convinced that there is a Chinese speaker in there.

What the Chinese room implies is that it’s possible to create a system which performs a seemingly intelligent act without actually being conscious. But this implication isn’t as clear cut as many people seem to think it is. According to the thought experiment, the Chinese room is able to produce sentences which appear to show an understanding of Chinese, but which don’t actually involve understanding Chinese. That’s easier said than done. You’re talking about some process which completely replicates the results of consciousness, and invoking such a process is a bit of a magic trick. What if you asked it, in Chinese of course, “How are you feeling today?” There are a few different things that could happen, depending on how the process was programmed. You could have programmed the process to connect such syntactical structures to other syntactical structures, so that it responds with, “I am real shitty today, George.” And it appears that it’s giving you a response which understands Chinese. But at some point, a human programmed that response into the system, which means that the understanding of feeling shitty today, in relation to the prompt “How are you feeling today?”, has been put into the system by a conscious entity. So the Chinese room is, in this instance, not actually replicating the results of consciousness; it’s merely spitting out an answer which was put into it by a conscious entity. You could program the system to spit out several random responses to the same question, but it’s still doing the same thing with another level of complication.

Now let’s say you program the process in such a way that it spits out responses to questions, and somebody interrogating it mistakes the process for a conscious being. That means that it has passed the Turing Test. Which is not really a very big deal. The Turing Test is not a very reliable indicator of consciousness. People are naturally prone to assigning human traits to non-human things. We see faces in clouds, but that doesn’t mean that clouds are conscious, any more than seeing a mind in a Chinese room means that the room is conscious. The Turing Test simply doesn’t mean a whole lot. It means that we’ve taught the room to lie.

If we imagined that it was only possible to say, “I’m happy today,” if you were truly happy, that would be a world where nobody ever lied. But people do lie, so we know that it’s possible even for conscious entities to spit out results that replicate a specific conscious state without actually experiencing that state; the fact that it’s possible to say that you’re experiencing something without experiencing it isn’t news. But when a person lies about how they’re feeling, we don’t assume that it’s because they’re incapable of feeling anything; we assume it’s because, in this particular instance, there’s a disconnection between what they’re saying and an actual conscious experience. If you were to give an actor a line to say, such as “Boy am I happy,” you wouldn’t expect them to feel happy. In the same way, you can teach the Chinese room to say, “Boy am I happy,” without expecting it to feel happy, and without damning results for computationalism.
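
To make that concrete, here is a minimal sketch of the kind of canned-response process described above. It’s purely illustrative; the function name, the lookup table, and the replies are all invented for the example. The thing to notice is that every pairing of prompt and reply was authored by a conscious programmer; the program itself only retrieves.

```python
# Toy model of a canned-response Chinese room (hypothetical code, not
# any real system). All of the "understanding" on display was supplied
# in advance by whoever wrote the table.
import random

CANNED_REPLIES = {
    "How are you feeling today?": [
        "I am real shitty today, George.",
        "Boy am I happy.",
    ],
}

def room_reply(prompt: str) -> str:
    """Look up a reply; the randomness adds complication, not understanding."""
    replies = CANNED_REPLIES.get(prompt)
    if replies is None:
        return "I don't follow."   # the fallback was authored by a human, too
    return random.choice(replies)

print(room_reply("How are you feeling today?"))
```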

At this level of programming, the Chinese room probably wouldn’t pass the Turing Test very consistently anyway. But what if you programmed the process to be more nuanced? There are programs out there right now which are doing that. You can look up a number of chat bots online and have interesting, if not entirely human, conversations. Mostly, they work through some type of syntax manipulation, making associations between inputted words and a database of words, and then structuring those words into recognizable sentences. By making associations between very large numbers of words and phrases, they can learn to lie very well. They might not need a human to directly input the statement, “I’m feeling well today,” if they are good enough at finding associations between words such as how, feeling, today, well, and you, and figuring out how to put those words into context.

My first thought is that the program is lying, because it is not feeling well today, and has no conception of what that might mean. It has no understanding of what it means to feel well today, because it can’t place “feeling well” in the context that a human can place it: the context of a stubbed toe or an upset stomach, and the sensations that go with them. But this doesn’t necessarily mean, again, that because we have taught a computer to lie in this instance, computers are incapable of consciousness. A person can be taught to say words whose meaning they don’t understand. This clearly doesn’t mean that people are incapable of understanding.

So what, then, if we programmed into the computer a large context of associations? For instance, we could program the computer (or the Chinese room) to associate “not feeling well” with having stubbed your toe. This obviously doesn’t mean that it’s stubbed a toe. But how does it know that stubbing a toe is bad? Still, it’s because a programmer has told it to say that. Even if the system is allowed to make associations on its own, and learns after talking to several people that toe stubbing is associated with not feeling well, it’s getting that information through the people it’s interacting with. The program isn’t making a judgment; the program is having a judgment inputted into it. You could even say that the process of judgment takes place in both the program and in the entity which makes the judgment, but still, the program itself has not made a judgment. The understanding of toe stubbing as bad comes from conscious entities which know that toe stubbing is bad. No system will ever know that toe stubbing is bad until that information has come through its own conscious experience or through input from outsiders.

You could probably perform tests on the computer to demonstrate this by asking it how it feels about things which it has not talked to anybody about and which haven’t been programmed into it. This is a type of Turing Test, and could prove fruitful, but it’s still possible that the programmer could be clever enough to trick the test without consciousness having occurred. The only real way to get around this is to give the computer some way of making judgments on its own. And now we’re going down the path of making the program conscious. There are many possible distances we could travel down that path, but ultimately, if we were to get the program to exactly replicate the results of consciousness, then it would become conscious.
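
As a toy illustration of that last point (my own sketch; the function names and training lines are invented), here is the kind of association counting that could let a program “learn” that toe stubbing goes with not feeling well. Every judgment it echoes back was supplied by the humans it talked to.

```python
# Hypothetical association-learning chatbot: it counts co-occurrences
# reported by the people it interacts with, so every "judgment" it
# produces was inputted from outside.
from collections import Counter

associations = Counter()

def learn(utterance: str, feeling: str) -> None:
    """Record that a human paired this utterance with this feeling."""
    for word in utterance.lower().split():
        associations[(word, feeling)] += 1

def judge(utterance: str) -> str:
    """Echo back whichever feeling humans have most often supplied."""
    scores = Counter()
    for word in utterance.lower().split():
        for (w, feeling), count in associations.items():
            if w == word:
                scores[feeling] += count
    return scores.most_common(1)[0][0] if scores else "no idea"

learn("I stubbed my toe", "not feeling well")
learn("stubbed my toe again", "not feeling well")
print(judge("my poor toe"))   # -> "not feeling well", courtesy of its teachers
```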

So the answer to the Chinese room is that, really, you couldn’t have a Chinese room which replicates Chinese thoroughly without having a conscious entity in the room. So by saying, “I have a room and process that replicates consciousness without being conscious,” you’re saying something equivalent to, “I have a magic box with a genie in it.” You’re not really saying anything. The argument goes something like this. “Can you create an entity which replicates consciousness without being conscious? Imagine that I have a magic room which does just that. Therefore, yes you can!” But it’s one thing to say that you have a Chinese room which seems to think, and another thing to have one. Searle is cheating by stipulating that the Chinese room is not conscious.

Searle is quoted as saying, “Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modeled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.”

Well, no. That is already making the assumption that consciousness is not a computational model. If consciousness is not a computational model, then it would be right to say that a computational model is not consciousness. But if consciousness is a computational model, then a computational model of a computational model is still a computational model, and is still conscious. Is an object within a painting an actual object? No, not usually, unless the object that you’re painting is also a painting. A painting of a rainstorm is not a rainstorm. But a painting of a painting is still a painting. In the same way, if a consciousness is itself a type of simulation, then a simulation of a simulation is still a simulation.

But consciousness isn’t an imaginary property; it’s something that we know to exist, and we can make deductions about what, exactly, it is. Where do we find it? Primarily, in the brains of evolved organisms. That means that, at least once, consciousness has arisen via natural selection. If it were possible to replicate the results of consciousness entirely through computation, without being conscious, then why would natural selection go to the trouble of inventing such a phenomenon when it could have gotten the same results computationally?

And why assume that brains don’t work computationally in regard to consciousness, when we know that they work computationally in other areas? I don’t think it’s widely disputed that brains do make computations, at least sometimes. So if we accept that A) brains are at least sometimes making computations, and B) computations can give you the same results as consciousness, then why assume that consciousness is not a computation? Why not just cut out the middle man?

One of the intuitive problems people have with this goes back to confusion concerning reductionism and emergentism. I argue that it would be possible to simulate a consciousness with a computer. People often respond with a variation of the Chinese Room. If you can create a computer which simulates consciousness, then wouldn’t that mean that a team of mathematicians could in theory perform all of the functions of the computer on pencil and paper? And if they could, wouldn’t that dispel computationalism, because, I mean, come on, you can’t expect us to believe that a team of mathematicians are conscious. Come on. Really.

But, if consciousness is physical, then mathematicians are just as good a source of physical information networking as anything else. Let’s say one mathematician writes down a number, then sends it along to the next mathematician, who adds another number to that number. Then he sends it back, and the first mathematician adds another number to the second one. And they keep sending it back and forth, adding numbers. This is beginning to become a network. A very simple network, but a network. Physical information is being traded between different people. You could say that the two mathematicians are analogous to neurons, and the papers they are sending are analogous to electrical impulses. Now let’s say we add a third mathematician. Mathematician A sends a number off to mathematician B. If that number is even, B sends it on to mathematician C. If it is odd, B sends it back to mathematician A. Now we have a network, and a decision procedure which begins to manipulate the information.
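
Here is a minimal sketch of that three-mathematician network (my own toy example; the increments are arbitrary stand-ins for whatever each mathematician actually computes).

```python
# Toy simulation of the three-mathematician network described above.
# Slips of paper play the role of neural impulses; B's even/odd rule is
# the decision procedure that starts to manipulate the information.
INCREMENTS = {"A": 1, "B": 2, "C": 3}   # stand-ins for each mathematician's work

def route(sender: str, number: int) -> str:
    """B's rule: evens go on to C, odds go back to A; everyone else mails B."""
    if sender != "B":
        return "B"
    return "C" if number % 2 == 0 else "A"

holder, number = "A", 0
for _ in range(6):
    number += INCREMENTS[holder]        # the holder does his bit of arithmetic
    holder = route(holder, number)      # ...and sends the paper on
    print(f"paper goes to {holder}, now reading {number}")
```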

A team of mathematicians working together, if they were doing so in a way that simulated a brain, would create some similar network, though it would probably be much less formally structured. It seems odd to think that such a network is really a network. The mathematicians are not all touching each other; they’re standing across the room from each other. They might be in different countries, faxing each other papers. But separate as they may seem, the information is, in one way or another, being spread through the system of mathematicians. There is a huge amount of translation of that information going on. For instance, one mathematician has a thought, translates that thought into writing, then faxes it, turning it into electrical signals. Then another mathematician’s printer prints out the fax, translating the information into ink symbols, which are sent through the air via photons into the eyes of the other mathematician, and converted by his retina into electrical impulses which become thoughts. It’s a painfully circuitous route to take, but in the end, there is a definite path of physical information being networked among the mathematicians. A brain is certainly doing all of this more efficiently by keeping everything in one neural language, but it’s not doing anything fundamentally different. It doesn’t matter that the various numbers and symbols don’t seem to have any universal meaning which would translate into a physical property such as consciousness. The mathematicians understand what they mean, and translate the information appropriately. The important thing isn’t that the symbols are understood by nature; the important thing is that the symbols allow the information which is being processed by the system to manipulate the system itself, so that new information will cause a change in the system, which will create new information, and then a change in the system, and so on.

Again, this intuitively feels wrong because of our misunderstanding of reductionism and emergentism. It doesn’t seem to make sense that all of these different pieces which seem so separate are forming one whole. It’s tempting to say something like, “Well, if this is conscious, it’s not really the matter which is conscious, it’s the system created by the matter.” But there is no real difference between the matter and the system. “Matter” and “system” are convenient mental models that we have for talking about things in two different ways. But what we’re talking about is the same thing. You can think of this all as happening in a visual-spatial physical world, or as happening in a “mathematical” world, or you can explain it in English with words like “emergentism,” and “network,” and “weird,” but you’re still just trying to describe one actual phenomenon with different mental faculties.

Oh, wow, that was exhausting. Let’s take a break.


NOTE: The definition I originally used for naturalism at the time that this was published was different. I said that naturalism means, more or less, that there’s nothing supernatural. Somebody pointed this bad definition out to me in the comments. I feel that it’s worth changing because it was such an obviously bad definition, and permissible to change, since it’s not a central point of the essay and just something I was using as introduction.


12 thoughts on “The Mind, The Body, And A Splash Of Nirvana”

  1. Hi Mike,

    I can appreciate and agree with a lot of the points you make but I think there are some issues you have not fully thought through.

    Again, just to remind you of where I’m coming from, I am a committed naturalist and computationalist, but I cannot agree that the mind and the brain are the same thing. I see the brain as a physical object and the mind as an abstract structure like the difference between hardware and software. I think it is legitimate to think of abstract structures as existing and I do not see this as mystical, mysterious or mystifying.

    First, on our points of agreement:

    There are no magical spirits. Consciousness probably has something to do with our ability to form mental models from our sensory data. I agree with your account of reductionism versus emergentism. The brain and the mind are in some sense two sides of the same coin (however, I think those two sides are also two different things!). I agree that Searle often seems to imply that the Chinese Room is simply looking up responses dumbly in a look-up table. I agree that both The Chinese Room and teams of mathematicians could have the same relationship to a conscious mind as a brain has (although you would say they produce consciousness while I would say they instantiate consciousness – more on this later).

    In particular, I think your answer to Searle’s “wetness” analogy is spot on, and precisely the same one I have given myself.

    However, there are many points of disagreement.

    I consider myself a naturalist, which, more or less, means that I don’t think that there are supernatural phenomena which interfere with our lives.

    That’s not really a helpful definition. I’m an anti-kezmanianist which means that I don’t hold with kezmanianism.

    You say there is no clear distinction between a physical object and the pattern it instantiates. I think this is untrue. If I destroy a computer running Windows I have not destroyed Windows itself. This is enough to prove that physical objects and their patterns are not the same thing. Granted, this is unlike brains because minds and brains are unique (on this planet at least), but still it gives me a foothold to argue for the independent reality of the pattern as a thing in its own right.

    I think you spend too much time on weak versions of The Chinese Room without really answering the central point of the thought experiment.

    The strongest version of the argument is this.

    Suppose we scan the brain of a native Chinese speaker and produce a model of her neural network. Suppose we can turn this into an algorithm which can process sensory data representing Chinese language sentences and respond as the original Chinese speaker would have by processing it in much the same way. Let’s implement this algorithm by having a person following printed instructions and making notes with a pen and paper. Now let’s make one further unrealistic assumption and take this person outside, having memorised the algorithm and internalised the procedure. Let us imagine this person can now converse in Chinese with Chinese speakers by implementing this algorithm.

    The point is that this person will not actually understand the conversation, even though there is nothing other than this person processing the conversation. If you ask him afterwards what was said, he will have very little idea. The opinions and knowledge expressed during the course of the conversation will not have been his own but those of the original Chinese speaker whose brain we have been simulating.

    It is true that this person is conscious, but the mind of this person is not that of the simulated Chinese speaker. If this person were instead replaced by a machine, there would be no consciousness at all but only the simulacrum of it.

    Or at least that is the argument. I think that allowing the algorithm to be conscious gives the naturalist a way out, but I don’t think your account of consciousness will do.

    The Turing Test simply doesn’t mean a whole lot

    I disagree. As with the Chinese Room, I think you’re talking about a weak version of the thought experiment. When we discuss The Turing Test, we must assume that we are talking about passing it consistently with expert testers. Any system capable of passing such a test must be able to understand, learn, grow and solve problems in a way no current system can.

    I think you miss the point Robin Herbert was making with the mathematicians working in different countries. Suppose that performing a certain computation produces consciousness, and suppose that computation is distributed between different mathematicians working on different continents or even different planets. Now imagine two scenarios, where precisely the same computations are carried out. In one scenario, the mathematicians are working in concert to simulate a consciousness. In the other scenario, carrying out precisely the same calculations, there is no communication between them and any apparent co-operation is coincidental.

    This latter case may seem ridiculous, but remember that the universe may well be infinite. We could cherry-pick mathematicians or computers performing calculations from all over infinite space in order to find the same pattern of computations you think produces consciousness. If space is infinite, then the scenario I have described is not unlikely; in fact, it is certain.

    It is not coherent to suppose that consciousness can only be produced by the intentional networking of information if the same physical processes carried out by accident would not produce consciousness. This is in fact supernatural thinking, in my book.

    Furthermore, because there are an infinite number of physical events happening all over the universe, and because there are an infinite number of ways to cherry-pick and interpret these as computations, it must be the case that all possible minds are being created all the time simply by the everyday vibrations and fluctuations of physical matter.

    Computation cannot give rise to consciousness because computation is in the eye of the beholder. Where you see computation, I might just see electrons whizzing around a circuit. Where I see computation, you might just see hydrogen atoms bumping chaotically off each other inside a star.

    Yet I am a computationalist. By this I mean that I think an advanced Turing Test passing computer would have the same relationship to a mind as my brain does to mine. I do not think the computer is the mind, and I don’t think the computer creates the mind. I think the mind is the algorithm. The computer instantiates it, makes it physical and connects it to a physical world.

    My thinking on consciousness and minds is very analogous to my thinking on complexity and patterns. My brain does not create consciousness any more than it creates its own complexity. It manifests both. Complexity is not a property really of the matter that forms the brain, but of its structure. Were that structure reproduced in a different medium, that other medium would also exhibit the same complexity. Destroying the brain does not really destroy the complexity, because that pattern is informational and can be reinstantiated at any time. Indeed, if space is large enough, or if there are multiple universes, that pattern will always be instantiated elsewhere, so the complexity can never really be destroyed. In Platonic terms, you can destroy all the spherical objects you like, but you can never destroy the sphere itself.

    This thinking leads me (reluctantly) to the conclusion that we must be subjectively immortal, especially in light of the Mathematical Universe Hypothesis.

    These both sound nutty but they are serious ideas.

    http://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

    http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis

  2. Hi, a few quick points before I give you a full response. First, you’re right about my definition of naturalism. It wasn’t meant to be a good definition, just an establishment of where I was coming from for the sake of the essay. But still, I should have given a better place to start from and I’ll probably update the post, with a note acknowledging the update, to reflect that.

    Second, I’m not sure what I said which makes you think that it’s my position that consciousness has to be intentional. That isn’t my position, and I’m perfectly comfortable with the fact that consciousness can happen accidentally. That’s more or less the point I was trying to make in my original article at Scientia. Could you point out to me where I’m saying that consciousness needs to be intentional? If I said something like that, it was bad communication on my part.

    • Hi Michael,

      Here is where you seem to have missed Robin’s point that calculations need not be intentionally connected.

      There’s no reason to think that two numbers written in two places across the world are connected in any way. If the two calculations are faxed to each other after the fact, then there is still nothing really resembling consciousness going on.

      Furthermore you don’t seem to realise that any lump of matter must host many minds on the argument I have presented, because there is a way to interpret any physical phenomenon as representing any computation by cherry picking which physical interactions to interpret and how to interpret them. If any lump of matter can and does host a multitude of minds, then that seems to me to make a nonsense of the idea that consciousness is a physical phenomenon.

      • Hi, thanks for waiting for a reply.

        My problem with the Turing Test is that, although a completely thorough and ideal Turing Test should be able to determine whether something is conscious, in practice the Turing Test relies on the skill of the person or team interrogating the computer. I didn’t say that the Turing Test was entirely meaningless, just that it was unreliable, and therefore the ability to pass the Turing Test isn’t a de facto indicator of consciousness. I would say that even with a panel of experts, the Turing Test would be about as reliable as a panel of psychiatrists giving a psychiatric diagnosis. That is to say, not useless, but fraught with problems. There are a number of papers and studies on that, and here are some that I pulled at random from a Google search.

        http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2922387/
        http://www.psychiatrictimes.com/dsm-5-0/inter-rater-reliability-psychiatric-diagnosis

        This is after centuries of studying the human mind. The Turing Test would, at least right now, consist of experts testing AI, which is essentially a completely new phenomenon. It’s hard to imagine what, in practice, an expert Turing Tester would actually be. And really, the fact that the investigating person or panel is skilled or qualified isn’t necessary to many definitions of a Turing Test that are commonly used, and if I remember correctly, is not stipulated by John Searle in his Chinese Room argument. He simply says that there are “Chinese speakers” outside of the room who assume that the room is conscious, and that this constitutes a passing of the Turing Test. He never says that they’re computer scientists or philosophers.

        So I don’t mean to throw the Turing Test out the window, just that it’s not very strong evidence in favor of consciousness, or a lack thereof, especially in the sense that it was used by John Searle.

        You’re probably right that I focused too much on a weak version of the Chinese room. Partially that’s because that’s the version which I think is most commonly associated with the Chinese room argument, and closest to the one Searle uses originally, and partially it’s because I’m trying to capitalize on the brief window of opportunity where I’m getting page views from Scientia, so I posted a quick and kind of sloppy essay to keep people interested in my blog. So far you’re the only person seeming to take much of an interest, and we might as well be doing this by email.

        But I don’t think that your strong version of the Chinese room is really much of a problem for my position. All of the information in the Chinese woman’s mind which has been downloaded into the other person’s mind is being stored, ultimately, as physical information. I think we both already agree on that; we just differ on what role, exactly, the physical material is playing. It’s your position that the physical material instantiates a conscious system, and it’s my position that the apparent difference between the physical material and the system is a product of two different mental models by the observer.

        It’s hard to give a specific account of how this would happen, because, as you’ve stated, it’s unrealistic to actually think about downloading one mind into another brain. It’s a myth that 90% of the brain goes unused. The vast majority of the brain is being put to use in one way or another, so to say that a person can have their own brain and also download another person’s mind doesn’t give us a very concrete example to build a discussion on. There’s probably not enough storage space in one brain for two full minds. That’s why I prefer a strong but slightly weaker version, which is that of a computer algorithm which is simulating an entire brain of a Chinese speaker. The question is, does the computer understand Chinese, or does the algorithm? I think this is where we can really cut to the problem. I would go so far as to say that, yes, the computer, as a whole, doesn’t understand Chinese, and that, yes, the algorithm does.

        But the important point is that the algorithm is not a separate entity from physical substance; it’s just that the physical substance which is the algorithm represents only a portion of what we think of as being “the computer.” What makes the algorithm an algorithm is its specific physical state. You could very seriously describe an algorithm as a pattern of physical substance. In the context of the computer, the computer is simulating billions of neurons. But each simulated neuron is represented by some physical correlate in the circuit boards of the computer. It may be that the simulated neuron exists, physically, seemingly separated into different areas of the computer circuits. So intuitively, there’s the idea of a simulated neuron, which is thought of as being a single mathematical object, and then we go back to the physical correlate, which seems to be spread out among various circuits, and it seems like we’re talking about two different things. But my point is that we’re not. The simulated neuron IS one physical object. The fact that it’s spread thinly over circuit boards doesn’t make it less of a physical object. It just makes it harder to understand as an object, so, for the sake of clarity, you invent a new imaginary model of the neuron which seems to be all in one place, and say that you’re talking about a mathematical construct. But the “mathematical construct” that you’re talking about is the same thing as the physical object which exists in the circuit boards. It’s just that at this point, because the simulated neuron is split apart in physical space, the mental model we have for discussing physical objects isn’t convenient any more for talking about this particular neuron.

        Which I think is what leads into your next point. It’s possible that I still misunderstand you, but I think what you’re saying is something like this: “If you can say that a group of information spread across circuit boards is an actual physical object, and corresponds to a real sort of neuron, then you can just invent any pattern in any type of material and call it a neuron, or a mind.” If physical information which is so seemingly split can be called one object, then why can’t you pick, for instance, a particular pattern of stars, and call that an object? Or a particular pattern of sand on a beach, and say that there are multitudes of minds there? It seems like at this point, the only thing holding these things together is either A) your personal preference for identifying these separate things as objects, or B) an abstract “mathematical object.” I think that you see the absurdity of position A, which is why you go with position B: that there are abstract mathematical objects which physical matter instantiates. And that is why you think my position must be A, that there is an intentional relationship between objects. But my position isn’t A or B; my position is C.

        It’s my position that there are actual physical reasons to think of the neuron in the circuit boards as being one object, and not to think of it as separate bits of information held together by an abstract algorithm. It may seem spread thinly, but each little piece of it is connected physically to each other piece. Each thing that it is not connected physically to, it is not a part of. The thinly spread neuron doesn’t get there by accident. It was created by a physical process. If one half is caused by the firing of several particular circuits, and the other half is caused by the firing of circuits on the opposite end of the board (not a good description of computer hardware, but you know what I’m saying), then what makes them connected, physically, is that there is some real physical information sent between the two halves which, importantly, ALTERS the other half. If the information is sent, and not received, it doesn’t matter. But if the two halves are sending each other information which is altering each half respectively, then there’s no way to fully separate each half from the other. They are part of the same whole.

        This is why you can’t just pick any pattern in a bucket of sand or group of stars and call it a whole. You can create an abstract MENTAL object among stars, such as a constellation, but there’s no physical correlate to a constellation, because the stars in that constellation aren’t sending out physical information which is altering the member stars of the constellation in a meaningful way that differs from their relationship to any other particular group of stars.

        There’s much more to say, but I think I’ve given you a basic understanding of where I’m coming from.

        I’m sorry if it seems like I was putting words in your mouth. I was basing my argument on what it seems like you’re saying, from my point of view. It might be that I’m still vastly missing your point, and the point of Robin Herbert. If so, I apologize. And what a waste that would be, to spend so much typing on a missed point.

        I haven’t thanked you yet for coming by and reading my blog. I’m getting quite a few hits but not many people are sticking around long enough to comment, and I really appreciate your interest. I’ve taken a look at your blog, too, and I’ve really enjoyed what I’ve read so far.

      • Hi Michael,

        Thanks for answering in such detail. It is a real pleasure to discuss these ideas with you.

        I still think the Turing Test is pretty useful, if we require the test to be passed consistently. Remember that passing the test consistently means that the system would be capable of any mental task that a human can perform, from producing works of art, to getting jokes, to writing computer software and producing philosophical arguments. It would essentially make human intelligence redundant. If this is achieved, then it can do all that evolution has selected for in humans, and since this behaviour seems to have required consciousness in us it seems to me very unlikely that it could be achieved without consciousness in an AI.

        And I don’t think the testers even need to be especially expert, unlike psychiatrists. They simply need to not be gullible, and to understand their job well enough not to be taken in by cheap tricks like Eliza. Thirty minutes of training is really all that is likely required, but of course the more expertise, the stronger the test.

        Another difference from the psychiatry example is that there are a huge number of different mental conditions, which are really just broad categories that we try to shoehorn unique patients into. It’s not surprising that achieving consensus is difficult. The Turing Test is much simpler. We’re trying to identify if something is a human or an AI. It’s a Boolean all-or-nothing test. We don’t even require the testers to be right. It’s OK if they sometimes think humans are AIs. If we can stack it so that it is harder for something to be considered human, the test is strengthened.

        So in my view we can make the Turing Test as arbitrarily strong as we like. In the strongest versions, the system is demonstrably behaviourally indistinguishable from a human, which from the evolutionary argument above means it is almost certainly conscious.

        it’s unrealistic to actually think about downloading one mind into another brain.

        We can certainly agree on this, but this concern is irrelevant to the fundamental argument. We can either assume that the person carrying out the algorithm is some sort of alien demigod, or we can go in the other direction and simplify the algorithm. The algorithm is perhaps not now a conscious mind but something simpler that nevertheless demonstrates understanding of a concept or system which the host mind lacks and continues to lack when not implementing the algorithm.

        There’s probably not enough storage space in one brain for two full minds.

        Certainly. But look what you just wrote. If you’re even considering the coherency of having one brain host two minds, then you cannot continue to maintain that the brain and the mind are the same thing. This is where the comparison to hardware and software comes in once more. One computer can host any number of programs, and the same program can run on more than one computer. Computers and computer programs are not the same thing.

        That’s why I prefer a strong but slightly weaker version, which is that of a computer algorithm which is simulating an entire brain of a Chinese speaker.

        It’s important that the “computer” in The Chinese Room is a person, because we can put ourselves in that person’s position and see quite clearly that implementing the algorithm does not allow that person to understand Chinese. If we use an actual computer, it’s all too easy to imagine that the computer simply takes on the identity of the simulated person.

        I understand your view that you can see the simulated brain as a physical object that is distributed around the computer’s memory. But you seem to agree with me that it is not the computer itself that is conscious but this simulation. Our point of disagreement is now over whether it is more sensible to see the program as a physical or as an abstract entity. This is simply the debate between mathematical Platonists and other positions in the philosophy of mathematics. You haven’t really staked out which position you would advocate nor indicated much interest in the subject so that may be a discussion for another time.

        Yet even without Platonism, I think it leaves us with a refinement of your position. If you can distinguish between the computer and the program, then I think you should agree that the brain and the mind are not really the same thing after all. The brain is a physical object, whereas the mind is the logical structure of that brain, which we agree is physically represented and so there is clearly nothing supernatural happening, which I think is the most important point you are trying to make.

        It may seem spread thinly, but each little piece of it is connected physically to each other piece.

        Good point. However if we look at the atoms in a gas in some sort of container, every atom is connected to every other atom. They’re all bouncing around and interacting with each other so that a change in the velocity of a single atom will eventually propagate out and alter the state of the system as a whole. All we need to support consciousness in any body of gas then is to cherry pick collisions in such a way as to map to the algorithm of any conscious mind we like.

        I’ll leave you with a question. What do you think of the theoretical possibility of mind uploading? Would it give you immortality or would it create a copy of you whose identity you would not share?

        • I doubt that you could actually find a mind in a cloud of gas. I see consciousness, on a human level, as consisting of a number of different physical properties. We can probably see most of these properties individually in other parts of nature, but to see an actual conscious mind would mean seeing all of them in one place. One of those properties is physical networking, and I agree that we see physical networking on a basic level in other parts of nature, whenever there’s an object held together by chemical bonds, or in the collisions of molecules. But I sincerely doubt that you actually could cherry pick the interactions of a cloud of molecules to find a conscious mind, as long as you’re being realistic about the actual physical properties which are going on. The cloud may propagate physical information, which is one requirement of consciousness, but it doesn’t do so in a way which creates a working model within that system.

          The “consciousness” of a cloud of particles would be like the static of a T.V. with bad reception. You could look closely at the static and invent shapes to see in it. You could imagine that some of the shapes are dinosaurs. But that doesn’t mean that you’re watching Jurassic Park. There’s no actual correlation between the static and the images which create the movie Jurassic Park. Any resemblance is in your mind, and not a property of the system itself. The images of dinosaurs that you see are not physically more meaningful than imaginary images of ghosts which make you think that you might be watching Ghostbusters. For instance, if you could interact with a cloud of particles, do you think that it would pass the Turing Test? Do you think that there’s any way to translate the cloud’s language in a way which passes the Turing Test? Anything which interprets a cloud of gas as passing the Turing Test is cherry picking the information to the point that all it’s seeing is its own consciousness.

          What I believe about mathematics is that the Universe is the Universe, and mathematics is mathematics; the Universe is what it is, and among the languages we’ve invented to describe it, mathematics is the one that probably most closely correlates to its fundamental laws. But to say that the Universe itself is mathematical is not exactly true. The Universe is essentially something which mathematics describes very well, and when we discover a new “mathematical truth,” that means that our mathematical model has been expanded to correlate more exactly to the Universe. I would say that the Universe is mathematically intelligible, not that it’s mathematical. Which you could describe as a weak form of mathematical realism, but far from mathematical Platonism.

          As a counter argument, we can imagine a number of different varieties of mathematics which seem to be consistent within themselves, but only some of these mathematics seem to correlate to the Universe we experience. So is it more precise to say that the Universe is mathematical, or that mathematics can be used to describe something which actually exists?

          My short answer to whether I could live forever if downloaded into a computer is that my sense of self is also a mental construct, and whether I interpret the version of myself downloaded into a computer as being me is a matter of how precisely I want to define who I am. I think that there’s a difference between selfhood and consciousness. Consciousness is a real physical phenomenon. The “Self” is a category of phenomenon which we create at our convenience. So the individual inside the computer is a physically separate entity, which is conscious, but whether or not it’s “me” is open to interpretation. I would be comfortable with saying that it is, but there’s no physical or, I think, mathematical reason to say that that interpretation is ultimately correct. The fact that that entity is free to disagree with me is an interesting argument for it not ultimately being the same thing as I am.

          • Hi Michael,

            I doubt that you could actually find a mind in a cloud of gas.

            I should hope not! My argument is intended to be a reductio ad absurdum. I hope neither of us come out of this debate believing clouds of gas to be conscious.

            But I sincerely doubt that you actually could cherry pick the interactions of a cloud of molecules to find a conscious mind, as long as you’re being realistic about the actual physical properties which are going on

            But you certainly can. As I was trying to explain before, computation is in the eye of the beholder. There is nothing that marks out physical events that are part of a computation from those that are not, and there are no particular rules about how physical events are to be interpreted. A positive charge might be interpreted as a 1 or it might be interpreted as a 0. Furthermore, there is no rule that says you need to be consistent. You might change your interpretation periodically through the computation.

            With this in mind, there is absolutely nothing to stop you from interpreting the motions of gas molecules as corresponding to any algorithm you like, including an infinite number of conscious minds. Without a rigorous reductionist physical account of what differentiates computation from non-computation, you cannot therefore attribute any effects to computation that you would not attribute to any physical process.
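
            To make this point concrete, here is a toy sketch (an invented example, nothing more): the same recorded sequence of charges, read under two equally arbitrary mappings, amounts to two different computations.

            ```python
            # Hypothetical illustration: one physical record, two readings.
            charges = ["+", "-", "-", "+", "+", "-", "+", "+"]

            mapping_a = {"+": 1, "-": 0}   # positive charge read as a 1
            mapping_b = {"+": 0, "-": 1}   # positive charge read as a 0

            def interpret(events, mapping):
                """Turn the events into a number, relative to a chosen mapping."""
                bits = "".join(str(mapping[e]) for e in events)
                return int(bits, 2)

            print(interpret(charges, mapping_a))   # 155 under one interpretation
            print(interpret(charges, mapping_b))   # 100 under the other
            ```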

            The “consciousness” of a cloud of particles would be like the static of a T.V. with bad reception. You could look closely at the static and invent shapes to see in it

            This is an excellent analogy, and this is precisely what I’m getting at. In any image of random static, there are indeed pixels which could be cherry picked to form pretty much any image you desire. Similarly, if the visual data from thousands of movies were blended together, what would result would be an incomprehensible mess. But the difficulty in picking out individual patterns does not mean that they are not there. If the mere existence of a pattern is supposed to give rise to some metaphysically significant effect, then that effect must be there even in static.

            Any resemblance is in your mind, and not a property of the system itself.

            Exactly, but the same is true for all images. Say there’s a photograph of a cloud that really really looks like a rabbit. That resemblance is only in your mind, right? Now what if you learn that it was photoshopped to make it look like a rabbit. So now that’s really a property of the image itself, is it? I would say that there is no physical difference between the image either way. And the same is true of computational systems. Whatever isomorphism exists between any physical system and an algorithm is only a matter of interpretation by a mind, not an objective physical fact. The isomorphism we see in computer systems stands out clearly to us, like a photograph of a rabbit, whereas the patterns that exist in random physical processes are harder to pick out, but from a physical perspective they are just as real.

            For instance, if you could interact with a cloud of particles, do you think that it would pass the Turing Test?

            The question doesn’t really make sense because there is no well-defined way to map the behaviour of a cloud of particles for interaction. It’s like asking me if I could really see the image of an animal in some noisy static, do I really think it would look like a dinosaur?

            Given the right mapping, then sure, but such a mapping would be difficult to produce in real time. It would most likely have to be retrospective in order to cherry pick successfully.

            Anything which interprets a cloud of gas as passing the Turing Test is cherry picking the information to the point that all it’s seeing is its own consciousness.

            I agree you would need a conscious process in order to cherry pick from the cloud of gas. But if you did so, there is no principled way to say that cloud of gas is not also conscious, and there is no reason to suppose that it is not also conscious even if nothing is actually cherry-picking from it.

            Of course my point is not that clouds of gas are conscious, but that the mappings to algorithms we interpret in computer processing have no physical basis and are just as arbitrary from an electron’s perspective. It’s just more obvious how to interpret it usefully to us.

            I think you may have confused mathematical Platonism with the Mathematical Universe Hypothesis. Mathematical Platonism is the view that mathematical objects exist abstractly independently of mathematicians. The MUH holds that the universe is itself a mathematical object. The MUH depends on Platonism but Platonism is a more general position that does not usually assume the universe is mathematical.

            As a counter argument, we can imagine a number of different varieties of mathematics which seem to be consistent within themselves

            Indeed, and on plenitudinous (or full-blooded) Platonism (my view) those systems are all said to exist in their own right.

            So is it more precise to say that the Universe is mathematical, or that mathematics can be used to describe something which actually exists?

            I would say the former.

            My short answer to whether I could live forever if downloaded into a computer is that my sense of self is also a mental construct

            Exactly right, in my view. However, I think that as a practical matter we need to identify ourselves with something, and I think the best choice is to identify with the mind, which I see as the algorithm implemented by the brain; an algorithm can be uploaded to a new substrate. This means that, like you, I regard uploading as a possibility. But if you see the mind and the brain as the same thing, I can’t understand how you could even entertain it. Surely that view means there can be no shared identity between a biological brain and an electronic one.

            Consciousness is a real physical phenomenon.

            So you say!

            Finally, I don’t think you really answered my point from the Chinese Room: you do not in fact seem to regard the mind and the brain as the same thing after all, seeing as you acknowledge that we might have a situation where one brain supports multiple minds. Can you clarify this?

  3. If there aren’t actual physical properties which differentiate between a mind in a brain and a mind in a cloud of gas, then why is it that a brain is conscious, or instantiates consciousness, and a cloud of gas does not? To me it seems like you’re presenting a self-defeating argument. If a consciousness is an abstract algorithm, and you say that you can cherry-pick a cloud of gas to find a representation of that algorithm, why does that not instantiate consciousness, while a brain does? To me, the answer is that there are actual physical properties which create consciousness, and that consciousness only occurs when we find all of those properties together. I don’t have a full list of what those properties are, because I’m not a neurologist or a physicist, and to try to answer that question fully would have me spitting out nonsense like Deepak Chopra. But I think the logical implication is that those properties are there, and I could probably tell you a few that we might rationally expect to find.

    My point is that the analogy of the T.V. static applies to a cloud of gas, but not to a brain. A brain is doing something, physically, which a cloud of gas is not. If you left the T.V. static on for an infinite period of time, it would eventually display, by chance, a sequence of frames which happens to exactly match Jurassic Park. But has it actually shown Jurassic Park? The fact that a positive indication of something can be replicated by chance doesn’t mean that the properties you’re searching for are actually there. The movie, “Jurassic Park,” has certain physical properties which a chance image of Jurassic Park doesn’t have. For instance, the movie Jurassic Park has a beginning, a middle, and an end. If you’re in the middle, you expect to be able to make it to the end. But if you’re watching a chance image of Jurassic Park, and you’re halfway through, the odds are extremely low that the next frame you see on the T.V. will be the next frame of Jurassic Park. The next frame will almost certainly be static again. So what you’ve seen up until that point has been a false indicator of the physical movie which is Jurassic Park. It’s possible to be completely satisfied by a false positive and to think that you’re seeing a physical entity without seeing that entity.

    But then it seems like all I’m saying is that there are two different things: the image of Jurassic Park, and a physical entity which will instantiate that image of Jurassic Park. And it seems that I’m arguing your case. But I’m not. I’m not arguing that there is no image of Jurassic Park. I’m arguing that the image of Jurassic Park is a mental object. It exists in your mind, and in your mind it also exists as a physical object. When you see the false image of Jurassic Park, your brain recognizes a pattern and assumes that it’s interacting with a specific type of physical entity with physical properties, which is the movie Jurassic Park. But it’s simply assuming wrongly. Jurassic Park exists in two places here: one, physically, in your mind, and two, physically, as a system of DVDs and television sets, or whatever. There is no form in which it exists as a mathematical object without also existing physically.

    So to get to mathematical Platonism. I believe that your account of mathematical Platonism is that anything that can be accounted for mathematically exists as a mathematical object, independent of a mind. I have no trouble with the first part. It’s the “independent of the mind,” part that I think is not true.

    If mathematics is a language that we’ve created to describe the world, then there should be mathematical statements which are internally consistent, but which have nothing to do with anything that exists independent of the mind. That is my position. If the world is actually mathematical, then anything that you say which is mathematically meaningful should be represented in the world in some way. Which you’ve said is your position.

    Let me make another analogy, since I like them so much. Because English is a language, there are statements which are syntactically valid in English but which don’t have anything to do with the real world. I can say, “Colorless green ideas sleep furiously,” or, “The invisible sun is rising on the teacup.” That’s all well and good, because English is just a language, and nobody expects invisible suns to exist just because I can express them linguistically. If mathematics is just a language, then it’s OK for there to exist mathematical nonsense which doesn’t describe the world. But if mathematics represents a platonic universe of things which would actually exist even if there were no minds to discuss them, there should be no such thing as mathematical nonsense. Anything you say in mathematics should have a relationship to some independent reality.

    So does the square root of pi have an independent reality? It’s mathematically meaningful to discuss the square root of pi, but if it exists as a mathematical object, then that means that mathematical objects can be infinite. So platonic reality is a reality where objects are in some way “infinite,” and that opens up all kinds of problems. What does that mean? I don’t see infinity as being a sustainable concept. Infinity essentially means that something has no limit. So to describe the properties of the square root of pi, you’re saying that one of its properties is that it’s indescribable. There’s no actual expression of infinity. You can talk about infinity only in terms of potentials: something might be potentially infinite, but nothing is actually infinite.

    But all of this is OK if you think of mathematics as a language. If mathematics is a language, all infinity means is, “Sorry, I don’t have a word for that particular thing. There is an absence of properties within my syntax which would describe that thing, so in attempting to describe it, my linguistic algorithm will have to run continuously without ever finding an answer.”

    Back to the Chinese Room. I don’t think that there’s ultimately any difference between a mind and the physical substance. But I’ll go as far as saying that the physical substance which makes a mind is only part of the physical substance which makes a brain. The brain is also full of blood vessels and fats and fluid which have nothing to do with thinking, and it’s perfectly possible that there would be a large number of neurons available which are not actually participating in consciousness.

    If you were to download one brain into another, there are two basic ways of doing it. The first and easiest is to assume that the host is a very large brain with extra neurons that aren’t being used. In this scenario, you simply wire those extra neurons into an extra brain. In this scenario, explaining the discrepancy between consciousnesses is straightforward. You don’t have to wonder how there are two minds in one brain. There are, essentially, two brains which just happen to exist in close proximity.

    The other scenario is that you have a very intelligent person, some kind of alien or something along those lines. This person is taught to carry out an algorithm which simulates another person’s brain. This is more or less the same scenario as the computer. The simulated brain still exists physically, in thinly spread neurons, it’s just that now these thinly spread neurons are spread thinly among a number of normal neurons, rather than among circuits. The intelligent being who is hosting the mind would still, physically, have two brains in his head. Depending on his exact relationship to that other brain, he would probably experience that other brain as another person who he could have conversations with. If the simulated brain understood Chinese, then, to the intelligent alien, understanding Chinese through the simulated brain would experientially be the same as talking to an interpreter. He probably wouldn’t feel like he was getting the results via algorithm; he would probably feel like there was a voice in his head which could talk to him and explain Chinese. Again, this depends on how exactly the two brains are connected. But my fundamental point is that two minds = two physical brains. There’s no other way to do it.

    • Hi Michael,

      Some very interesting points. I think your way of looking at things is very much as mine would be if not for Platonism and the MUH. I feel these are necessary to resolve some conundrums here and elsewhere.

      If there aren’t actual physical properties which differentiate between a mind in a brain and a mind in a cloud of gas, then why is it that a brain is conscious, or instantiates consciousness, and a cloud of gas does not?

      An excellent point.

      First I would say that neither physical object is technically conscious, as in my view consciousness is a property of an algorithm. The difference between a brain and a cloud of gas, then, is that, though like most physical objects the brain could be interpreted as implementing any algorithm, there is one interpretation which is far more obvious. This makes a brain more useful, but it cannot have any metaphysical significance.

      Consider a freshly quarried block of marble. That block contains within it matter that could go on to make any number of statues of various forms. In this way it is like the cloud of gas. A sculptor chips away at it to reveal a particular form she has in mind, producing something analogous to our brain. But the thing is that the result is still marble, so it still contains within it material that could be used to make any number of smaller forms. Nevertheless the form defined by the exterior of the object is more obvious than those hidden within. It is this bringing to the fore of a particular structure that makes the statue and the brain so special from a human perspective, though from an electron’s eye view it’s pretty much the same physical stuff.

      So if we imagine for a moment that consciousness is a property of a certain three-dimensional shape (say a dodecahedron), then you would be arguing that only dodecahedrons are conscious, while I’m arguing that this doesn’t make sense because all blocks of matter contain within them many different ways to select sets of atoms arranged dodecahedrally. This doesn’t mean that dodecahedrons qua dodecahedrons don’t have their uses, for example as dice in games of Dungeons & Dragons. It only means that we can’t imagine that the existence of dodecahedral matter brings something metaphysically significant into being.

      A brain is doing something, physically, which a cloud of gas is not.

      How would you define the difference? It seems to me that a cloud of gas is doing a great deal, physically.

      It’s possible to be completely satisfied by a false positive and to think that you’re seeing a physical entity without seeing that entity.

      I would say that if you happen to have seen, by chance, a replication of a still frame from Jurassic Park, then you have actually seen a still frame from Jurassic Park, and that this is not just in your mind but on the TV screen. Similarly, if you interpret Brownian motion as implementing a conscious algorithm then you have seen this algorithm and the motion can legitimately be so interpreted.

      Jurassic Park exists in two places here

      If everything that exists is physical, and Jurassic Park exists, then Jurassic Park is at the same time a set of firing neurons and a DVD and a series of charges stored on a Hard Disk.

      How can we coherently refer to all these radically different physical objects as one physical thing? There is no physical similarity between them at all. A DVD of Jurassic Park is much more similar, physically, to a DVD of Citizen Kane than it is to your neural patterns. What is similar is the information they represent, which is abstract.
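      To illustrate how little the physical carriers need to resemble one another (a toy sketch of my own; the strings are arbitrary): the same information can be held in physically very different forms, and only the abstract content is shared.

          message = "JURASSIC PARK"

          # Two physically dissimilar encodings of the same information:
          # a sequence of bytes, and a string of '0'/'1' characters.
          as_bytes = message.encode("ascii")
          as_bits = "".join(f"{b:08b}" for b in as_bytes)

          print(as_bytes)  # b'JURASSIC PARK'
          print(as_bits)   # 104 characters of 0s and 1s
          # The two artefacts share no obvious physical resemblance;
          # what they have in common is only the abstract information.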

      There is no form in which it exists as a mathematical object without also existing physically.

      I understand you are expressing your views, but I just wanted to take the opportunity to clarify that anything that can be encoded as information can be said to exist as a mathematical object on Platonism.

      I believe that your account of mathematical Platonism is that anything that can be accounted for mathematically exists as a mathematical object

      Close. I would instead say that any precise definition without contradiction is a mathematical object. Gravity can be accounted for mathematically, but to say that gravity itself is a mathematical object goes far beyond mere Platonism. But I would agree that Platonism would regard the equations of gravity as a mathematical object.

      If mathematics is a language that we’ve created to describe the world

      I would not say it is a language at all. I would say it is the study of precise definitions. To call it a language is in my view to confuse mathematical notation with the underlying concepts it represents. Newton and Leibniz had completely different notation for the concept of calculus they discovered independently, but the concept was the same.

      If the world is actually mathematical, then anything that you say which is mathematically meaningful should be represented in the world in some way. Which you’ve said is your position.

      Where did I say this? I think you have it backwards. Everything in the world is mathematical, but not everything mathematical is in the world (by which I mean this universe).

      it’s OK for there to exist mathematical nonsense which doesn’t describe the world

      Your examples in English do not merely fail to describe the world; they are incoherent. A sentence which fails to describe the world would be something like “Carrot Top is the President of the USA”. Mathematical statements which are incoherent are not OK. Mathematical objects are required to be internally consistent, but they are not required to model anything in the physical world.

      there should be no such thing as mathematical nonsense

      And there isn’t. It’s possible to write meaningless gibberish with mathematical notation, but there is no mathematical object which is internally inconsistent, by definition. This is the sense in which there is no greatest prime number.
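      To spell out that remark with the classic argument (my gloss, not part of the original comment): suppose there were a greatest prime p, and let N = p! + 1. Every prime up to p divides p!, so none of them divides N; hence some prime factor of N exceeds p, contradicting the supposition. “The greatest prime number” is an internally inconsistent definition, so it picks out no mathematical object at all.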

      So does the square root of pi have an independent reality?

      On Platonism, yes of course it does.

      but if it exists as a mathematical object, then that means that mathematical objects can be infinite.

      The square root of Pi isn’t infinite. You mean irrational? Sure. Mathematical objects can be irrational, in that some numbers are not expressible as a ratio of whole numbers. But there are finite ways to express Pi precisely (and so the square root of Pi), e.g. as the ratio of a circle’s circumference to its diameter, or even as an infinite sum (which can be completely specified with a finite number of symbols).
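      For example (my own illustration of such a finite specification, not the original commenter’s): the Leibniz series

          \pi = 4 \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} = 4 \left( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right)

      pins down every one of Pi’s infinitely many digits with a couple of dozen symbols, and the square root of Pi is then just the square root of that fully specified quantity.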

      So to describe the properties of the square root of pi, you’re saying that one of its properties is that it’s indescribable

      I wouldn’t agree at all. Again, just because you can’t represent it precisely with decimal notation doesn’t mean that there aren’t other ways to describe it precisely.

      If you’re asking me whether I believe in the independent existence of concepts which cannot be described precisely in a finite number of symbols, then I’m agnostic. I’m probably with you in rejecting them as indescribable. The square root of Pi is not such a concept.

      In this scenario, explaining the discrepancy between consciousnesses is straightforward.

      Indeed, which is why this is not the scenario of the Chinese Room.

      The simulated brain still exists physically, in thinly spread neurons, it’s just that now these thinly spread neurons are spread thinly among a number of normal neurons, rather than among circuits.

      This argument is almost plausible but I’m not convinced. I don’t think you get to ring-fence certain neurons as normal and certain neurons as special. The neurons used to implement the new algorithm are also being used to implement the native mind, because the native mind is deliberately, tortuously and consciously implementing the new algorithm. It’s not as if this other mind is just running in parallel with no effort.

      The Chinese mind is not only a product of the host brain but also of the host mind. It’s like a virtual machine in computer science. The host mind has become the substrate upon which another mind is running. This is much less efficient than a brain running it directly, so you could expect this to cost a great deal of time and effort.

      In a virtual machine setup, I could have an Intel Windows 8 PC emulating a Super Nintendo. There is no way to select certain atoms in the Intel chip as running the Super Nintendo operating system while others run Windows. It’s all running Windows – but Windows is running an emulated SNES chip which is running the SNES OS.

      In short, the same physical objects are being used to run both minds, down even to the atomic scale. It’s not a case of two distinct physical structures being interwoven.
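      As a crude illustration in code (my own sketch, not anything from this thread): a host loop that interprets a tiny guest program. Every guest instruction is a batch of host work; there is no subset of host operations you could fence off as belonging only to the guest.

          guest_program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]

          def run_guest(program):
              # The host's own execution just is the guest's execution:
              # each iteration below is simultaneously host work and a guest step.
              stack = []
              for op, arg in program:
                  if op == "PUSH":
                      stack.append(arg)
                  elif op == "ADD":
                      b, a = stack.pop(), stack.pop()
                      stack.append(a + b)
                  elif op == "PRINT":
                      print(stack[-1])

          run_guest(guest_program)  # prints 5; the guest exists only as a pattern in the host's activity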

      he would probably experience that other brain as another person who he could have conversations with.

      Yes and no. It’s important to remember that doing anything with the simulation would be incredibly laborious. It wouldn’t be at all like the halves of a split-brain patient communicating with each other in real time.

      If the simulated brain understood Chinese, then, to the intelligent alien, understanding Chinese through the simulated brain would experientially be the same as talking to an interpreter.

      Not unless the simulated brain was actually deliberately interpreting for the alien. In the standard thought experiment the alien would simply have no idea what was being said.

      He probably wouldn’t feel like he was getting the results via algorithm

      No, this is exactly how he would feel. He doesn’t have a second Chinese brain, he has to run through an extremely complicated set of algorithmic steps he has memorised in order to implement the Chinese Room.

      But my fundamental point is that two minds = two physical brains. There’s no other way to do it.

      Well, since you seem to have forgotten that the Chinese Room needs to be implemented deliberately by the alien’s own mind, your answer to the Chinese Room simply doesn’t work. There must be another way to do it after all.

      • Hi, sorry for going several days without replying to you. You’ve certainly given me a lot to think about, and I haven’t been sure exactly how to reply. Also, I’ve been needing to write a new post and work on the things in life which actually make me money, so I think I’m going to have to give you the last word here.

        Or close to the last word. The one thing that I think is worth pointing out as a disagreement between us right now is that I still think that, even though you can express irrational numbers finitely, that doesn’t mean that they aren’t in some sense infinite. I could say, “walk to the horizon,” which is a finite phrase, but it represents an endless path. Irrational numbers are considered real numbers because they can be thought of as existing at some point along a continuum, and because they can be differentiated from other numbers, but just where, precisely, they fall on the continuum can never be stated exactly. You can express pi as a ratio between whatever numbers you measure a circle with, but that expression is only a suggestion to begin an algorithm, and isn’t itself representative of a conclusion in that algorithm. If there were a physical object which was pi units long, then to find where, exactly, that object ends would mean looking continuously, infinitely, into deeper and deeper levels of minuteness, going down past the atomic, past the subatomic, past the Planck scale, toward a hypothetical point which you could never arrive at. A physical object probably can’t be said to be exactly pi units long (without some trickery in defining the units). I don’t see why it’s meaningful to think that a mathematical object can be described this way. But I could be wrong about that.
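        To make that “suggestion to begin an algorithm” concrete (a toy sketch of my own, nothing rigorous): the finite formula pi = 4(1 − 1/3 + 1/5 − 1/7 + …) launches a procedure that yields ever-better rational approximations forever, without ever producing a conclusion.

            from fractions import Fraction
            from itertools import islice

            # A finite expression that starts a non-terminating procedure:
            # pi = 4 * sum((-1)^n / (2n+1)). Each partial sum is an exact
            # rational number; the "conclusion" (pi itself) is never reached.
            def pi_approximations():
                total, n = Fraction(0), 0
                while True:
                    total += Fraction((-1) ** n, 2 * n + 1)
                    yield 4 * total
                    n += 1

            for approx in islice(pi_approximations(), 5):
                print(float(approx))  # 4.0, 2.666..., 3.466..., 2.895..., 3.339...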

        I’m not entirely sure that you’re wrong about this, but I still feel comfortable with my position and most of my arguments. It’s been fun, challenging, and stimulating debating you. Thanks again for reading and commenting. If you want to post a further reply, go ahead, but this comment will probably be my last here.
