I consider myself a naturalist, which, for the purposes of this essay, we can take to mean that everything that we’re going to come into contact with is part of the same basic natural world, and subject to some sort of underlying set of principles. It’s not a perfect definition, but it’s a place to jump off from. This essay isn’t meant to discuss naturalism specifically. The important point is that we start the discussion agreeing that our subject is something we can discuss rationally, and that we can’t answer questions by saying, “ghosts,” in a quiet voice. Reality, as I’ve said before, is all around us, not some ineffable monstrosity from the depths of the dreamlands.
However, it’s still important to recognize the limitations of our senses and the importance of reason in uncovering the nature of our Universe. Obviously, there are many things that are part of the natural world which we can reason must exist but which can’t be experienced directly through our senses. And it’s important to further recognize the limitations of our reason and intuitions in forming accurate descriptions of the world. If, as I will argue, we approach the world fundamentally through various forms of language, we have to acknowledge that those languages may contain limitations which prevent us from describing the world with absolute accuracy. It’s in these failures of intuition, I believe, that many of our mistaken beliefs about subjects such as dualism, reductionism, the mind-body problem, and computationalism take root. So let’s jump right in.
If you open up our skulls, you’ll only find one thing: a brain. You don’t find spirits blowing around and moaning. Maybe they’re invisible, and don’t interact with matter… but probably not. Apart from the brain, though, there is some kind of mind in there, too, and there are various ways to go about explaining this. My preferred method is to question why it is that we’re assuming that the mind and the brain are different things. What seems to me most likely is that there is only one thing, and that the words “mind” and “brain” are just two different ways of thinking about that one thing. We call it a brain when we want to think about individual brain structures or neurons, and we call it a mind when we want to talk about the pattern of neural impulses which the brain is forming. But there’s no clear distinction between these two. One gives us a more convenient description of the brain when we want to talk about what people are thinking, and the other gives us a description of why there is an oily mess on the floor of the laboratory. “Oops, I dropped the brain.” You don’t say that you dropped a mind, but that’s what you did.
What these two words are referring to are not different things that exist in the real world, but different mental processes that we have for dealing with one thing. When you dropped the brain, which you will have to pay for, by the way, you created a new pattern of neural hardware. This could be described in mind terms, but we don’t have a word for it, because most people don’t suddenly think themselves into oblivion, unless of course they’re a Buddhist. Which I suppose means that the best way to describe the emptiness achieved upon splatter is nirvana. Seriously, I do believe that meditation has its benefits, I’m not trying to be mean.
Probably the best argument against this would be an argument along the lines of a reductionism vs. holism vs. emergentism debate. But again, I think that what these words really represent, rather than real differences in the natural world, are limitations of our thinking. Neurons are what we see when we think of things reductionistically, and minds are what we see when we think of things as (depending on preference) a form of holism or emergentism. What this means isn’t that the world contains no differences within itself, just that to divide those differences into different strata of associations, such as through emergentism or reductionism, is a helpful mental process and not a fully accurate representation of the world. When an object exhibits properties which are not fully explainable through reductionist methods, and we say that it exhibits emergent properties, what we’re doing is saying that our thinking has run out of space in the reductionist department, so now we’d better fire up the emergentism software in order to get a clearer picture. There is a real difference between things that exhibit emergent properties and things without them, and we should continue to try to clarify what that difference is, but we shouldn’t assume that one is right and one is wrong; both are ultimately wrong, though they are both helpful in their own way.
This is true about all of our perceptions. When we see a physical entity, such as a doll, a pineapple, or Kate Upton, we accumulate a bit of information about the object, and we build a mental model telling us what that thing is. But of course it’s not an exhaustively researched model. It doesn’t include the position of every quark in the object, and a completely accurate projection of what will happen to that object in the next few decades. Which is fine. It suits our purposes. We don’t really need to know about the quarks for most purposes, and if we do want to find out about them, we’ll look closer and expand our model. But we’ll never come to a complete picture of what that object is. We can refine and refine our model, but it’s still a model, and not Kate Upton, who is still Kate Upton and not a model. Or whatever.
I would hesitate to say that this is a necessary condition of consciousness itself, but it does seem to be the necessary condition of human consciousness, which is what we’re concerned with. And I would argue that a similar principle applies to arguments about reductionism and emergentism. We can see that Kate Upton is not a pineapple, and we can see that reductionism is not emergentism, but a complete description of why that is so is probably not possible. Which is not to say that we should stop trying to refine our picture of what’s going on, just that we should tentatively accept that we don’t have to have a definitive answer to these questions to keep moving on in philosophy. We’ll probably never have the definitive answer, just progressively better and better answers. Sometimes you have to accept that your answer is good enough for the moment to be getting on with. If this sounds like a way of avoiding the argument, then you’re right, it is, but I think it’s a well-founded one. I think it does establish that, although emergence is good to talk about, it doesn’t mean that emergent properties are separated from their reductionist foundations in any real way. Emergence will never become dualism; minds and brains are still ways of talking about the same thing, even if minds are emergent properties of brains.
Why this is so perplexing, intuitively, is that we are always trying to reconcile these opposing views. We are trying to create a model of emergentism using reductionist principles, and a model of reductionism using principles of emergentism, and it’s not working out. We’re looking for some kind of Unified Strata Theory, but if there is one out there, I’m not aware of it. I don’t know if we should really expect it to work out. If we find reductionism failing, and begin to think of things in terms of emergentism, then why would it then make sense to explain emergentism through reductionism? Or vice versa?
This is why many of us are so tempted by dualism. It’s neater. It’s easier to think that when you’re thinking in reductionist terms, you’re talking about one thing, and when you’re thinking in emergentist terms, you’re talking about another. But I think the general evidence points to this not being true. If neural correlates of mental phenomena are only correlates, and nothing else, why do we lose mental function when we lose the correlates, never to get it back? Why is brain damage so damaging? What I think is the simplest and best answer is just that there is no real difference between neural correlates and the mental states they correlate to. They’re two sides of the same coin. Or rather, the same coin in two different lights.
This is what we have evolved. Not an in-depth analysis of every concept we come across, but convenient mental models which we switch out as need be.
Computationalism And The Chinese Room
John Searle’s famous Chinese Room thought experiment goes something like this: A man who doesn’t understand Chinese is in a room with a computer. Chinese characters are slipped to him underneath the door, and he puts them into the computer. The computer then gives him instructions on what to reply. He writes new characters down based on the instructions given by the computer, and slips them back through the door. Chinese speakers outside are convinced, based on the replies, that there is a Chinese speaker inside.
Next, the computer is replaced by a pad of paper, a pencil, and a book of instructions. The man inside still does not understand Chinese or have any understanding of the instructions, but he follows them, and they represent exactly the algorithm used by the computer. He slips papers out, and again the people outside are convinced that there is a Chinese speaker in there.
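The instruction book in the thought experiment can be sketched as nothing more than a lookup table. Here’s a minimal sketch in Python; the phrases, replies, and fallback line are all invented for illustration, and any real rule book would be unimaginably larger:

```python
# A toy "Chinese Room": the operator mechanically matches the incoming
# symbols against an instruction book and copies out the listed reply.
# Nothing in this procedure requires understanding what the symbols mean.
# The entries below are invented examples, not a real rule book.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice out."
}

FALLBACK = "请再说一遍。"  # "Please say that again."

def operator(symbols: str) -> str:
    """Follow the book exactly; use the stock reply for unknown input."""
    return RULE_BOOK.get(symbols, FALLBACK)
```

The point of writing it out is how little is there: the operator is doing pure symbol-matching, and whatever “understanding” the replies display was put into the table by whoever wrote it.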
What the Chinese room implies is that it’s possible to create a system which performs a seemingly intelligent act without actually being conscious. But this implication isn’t as clear-cut as many people seem to think it is. According to the thought experiment, the Chinese room is able to produce sentences which appear to reflect an understanding of Chinese, but without actually understanding Chinese. But that’s easier said than done. You’re talking about some process which completely replicates the results of consciousness. But invoking such a process is a bit of a magic trick. What if you asked it, in Chinese of course, “How are you feeling today?” Well, there are a few different things that could happen, depending on how the process was programmed. You could have programmed the process to connect such syntactical structures to other syntactical structures, so that it responds with, “I am real shitty today, George.” And it appears that it’s giving you a response which understands Chinese. But at some point, a human programmed that response into the system. Which means that the understanding of feeling shitty today, in relation to the prompt “How are you feeling today?”, has been put into the system by a conscious entity. So the Chinese room is, in this instance, not actually replicating the results of consciousness; it’s merely spitting out an answer which has been put into the system by a conscious entity. You could program the system to spit out several random responses to the same question, but it’s still doing the same thing with another level of complication. Now let’s say you program the process in such a way that it spits out responses to questions, and somebody asking it questions mistakes the process for a conscious being. That means that it has passed the Turing test. Which is not really a very big deal. The Turing test is not a very reliable indicator of consciousness.
People are naturally prone to assigning human traits to non-human things. We see faces in clouds, but that doesn’t mean that clouds are conscious, any more than seeing a mind in a Chinese room means that the room is conscious. The Turing Test simply doesn’t mean a whole lot. It means that we’ve taught the room to lie; if we imagined that it was only possible to say, “I’m happy today,” when you were truly happy, that would be a world where nobody ever lied. But people do lie, so we know that it’s possible even for conscious entities to spit out results that replicate a specific conscious state without actually experiencing that state; the fact that it’s possible to say that you’re experiencing something without experiencing it isn’t news. But when a person lies about how they’re feeling, we don’t assume that it’s because they’re incapable of feeling anything; we assume it’s because, in this particular instance, there’s a disconnect between what they’re saying and an actual conscious experience. If you were to give an actor a line to say, such as “Boy, am I happy,” you wouldn’t expect them to feel happy. In the same way, you can teach the Chinese room to say, “Boy, am I happy,” without expecting it to feel happy, and without damning consequences for computationalism.
At this level of programming, the Chinese room probably wouldn’t pass the Turing Test very consistently anyway. But what if you programmed the process to be more nuanced? There are programs out there right now which are doing that. You can look up a number of chat bots online and have interesting, if not entirely human, conversations. Mostly, they work through some type of syntax manipulation, making associations between inputted words and a database of words, and then structuring those words into recognizable sentences. By making associations between very large numbers of words and phrases, they can learn to lie very well. They might not need a human to directly input the statement, “I’m feeling well today,” if they are good enough at finding associations between words such as how, feeling, today, well, and you, and figuring out how to put those words into context. My first thought is that the program is lying, because it is not feeling well today, and has no conception of what that might mean. It has no understanding of what it means to feel well today, because it can’t place “feeling well” in the context that a human can. A human can place “feeling well” in the context of a stubbed toe or an upset stomach, and the sensations that go with them. But this doesn’t necessarily mean, again, that because we have taught a computer to lie in this instance, computers are incapable of consciousness. A person can be taught to say words whose meaning they don’t understand. This clearly doesn’t mean that people are incapable of understanding. So what, then, if we programmed into the computer a large context of associations? For instance, we could program the computer (or the Chinese room) to associate “not feeling well” with having stubbed your toe. This obviously doesn’t mean that it’s stubbed a toe. But how does it know that stubbing a toe is bad? Still, it’s because a programmer has told it to say that.
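The word-association scheme described above can be sketched very crudely: score each canned response by how many words it shares with the prompt, and return the best match. Everything here, the response list and the scoring rule alike, is invented for illustration; real chat bots are far more elaborate, but the relevant point survives: no step of the procedure involves feeling anything.

```python
import re

# Canned responses that a programmer (or prior conversations) supplied.
# These sentences are invented for the example.
RESPONSES = [
    "I am feeling well today, thank you.",
    "I stubbed my toe and I am not feeling well.",
    "The weather is nice today.",
]

def tokens(text: str) -> set:
    """Lowercase the text and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def respond(prompt: str) -> str:
    """Return the canned response sharing the most words with the prompt."""
    words = tokens(prompt)
    return max(RESPONSES, key=lambda r: len(words & tokens(r)))
```

Asked “How are you feeling today?”, this picks the first response, simply because it shares the most words with the prompt, not because anything anywhere felt well.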
Even if the system is allowed to make associations on its own, and learns after talking to several people that toe stubbing is associated with not feeling well, it’s getting that information through the people it’s interacting with. The program isn’t making a judgment; the program is having a judgment inputted into it. You could even say that the process of judgment takes place in both the program and in the entity which makes the judgment, but still, the program itself has not made a judgment. The understanding of toe stubbing as bad comes from conscious entities which know that toe stubbing is bad. No system will ever know that toe stubbing is bad until that information has come through its own conscious experience or through input from outsiders. You could probably perform tests on the computer to demonstrate this by asking it how it feels about things which it has not talked to anybody about and which haven’t been programmed into it. This is a type of Turing Test, and could prove fruitful, but it’s still possible that a clever enough programmer could trick the test without any consciousness having occurred. The only real way to get around this is to give the computer some way of making judgments on its own. And now we’re going down the path of making the program conscious. There are many possible distances we could travel down that path, but ultimately, if we were to get the program to replicate the results of consciousness exactly, then it would become conscious.
So the answer to the Chinese room is that, really, you couldn’t have a Chinese room which replicates Chinese thoroughly without having a conscious entity in the room. So by saying, “I have a room and process that replicates consciousness without being conscious,” you’re saying something equivalent to, “I have a magic box with a genie in it.” You’re not really saying anything. The argument goes something like this. “Can you create an entity which replicates consciousness without being conscious? Imagine that I have a magic room which does just that. Therefore, yes you can!” But it’s one thing to say that you have a Chinese room which seems to think, and another thing to have one. Searle is cheating by stipulating that the Chinese room is not conscious.
Searle is quoted as saying, “Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modeled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.”
Well, no. That is already making the assumption that consciousness is not a computational model. If consciousness is not a computational model, then it would be right to say that a computational model is not consciousness. But if consciousness is a computational model, then a computational model of a computational model is still a computational model, and is still conscious. Is an object within a painting an actual object? No, not usually, unless the object that you’re painting is also a painting. A painting of a rainstorm is not a rainstorm. But a painting of a painting is still a painting. In the same way, if a consciousness is itself a type of simulation, then a simulation of a simulation is still a simulation.
But consciousness isn’t an imaginary property, it’s something that we know to exist, and we can make deductions about what it is, exactly. Where do we find it? Primarily, in the brains of evolved organisms. That means that, at least once, consciousness has occurred via natural selection. If it were possible to replicate the results of consciousness computationally entirely without being conscious, then why would natural selection go to the trouble of inventing such a phenomenon when it could get the same results computationally?
And why assume that brains don’t work computationally in regard to consciousness, when we know that they work computationally in other areas? I don’t think it’s widely disputed that brains do make computations, at least sometimes. So if we assume that A) brains are at least sometimes making computations, and B) computations can give you the same results as consciousness, then why do we assume that consciousness is not a computation? Why not just cut out the middleman?
One of the intuitive problems people have with this goes back to confusion concerning reductionism and emergentism. I argue that it would be possible to simulate a consciousness with a computer. People often respond with a variation of the Chinese Room. If you can create a computer which simulates consciousness, then wouldn’t that mean that a team of mathematicians could in theory perform all of the functions of the computer on pencil and paper? And if they could, wouldn’t that dispel computationalism, because, I mean, come on, you can’t expect us to believe that a team of mathematicians are conscious. Come on. Really.
But, if consciousness is physical, then mathematicians are just as good a source of physical information networking as anything else. Let’s say one mathematician writes down a number, then sends it along to the next mathematician, who adds another number to that number. Then he sends it back, and the first mathematician adds another number to the second one. And they keep sending it back and forth, adding numbers. This is beginning to become a network. A very simple network, but a network. Physical information is being traded between different people. You could say that the two mathematicians are analogous to neurons, and the papers they are sending are analogous to electrical impulses. Now let’s say we add a third mathematician. Mathematician A sends a number off to mathematician B. If that number is even, B sends it on to mathematician C. If it is odd, B sends it back to mathematician A. Now we have a network, and a decision procedure which begins to manipulate the information.
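The three-mathematician setup can be written down in a few lines of code, which makes it easier to see that it really is a network with a decision procedure. The update rules here (A increments the number, C doubles it) are my own additions for the sake of having something to compute; only the even/odd routing comes from the description above.

```python
def run_network(start: int, steps: int) -> list:
    """Trace the number as it moves among mathematicians A, B, and C.

    A adds one and sends the number to B; B sends it on to C if it is
    even, back to A if it is odd; C doubles it and returns it to A.
    (The arithmetic rules for A and C are invented for this sketch.)
    """
    trace = []
    holder, value = "A", start
    for _ in range(steps):
        trace.append((holder, value))
        if holder == "A":
            value += 1
            holder = "B"
        elif holder == "B":
            holder = "C" if value % 2 == 0 else "A"
        else:  # holder == "C"
            value *= 2
            holder = "A"
    return trace
```

Run with `run_network(1, 4)`, the trace shows the number bouncing from A to B to C and back: information being routed and transformed by nothing more than a simple rule.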
A team of mathematicians working together, if they were doing so in a way to simulate a brain, would create some similar network, though it would probably be much less formally structured. It seems odd to think that such a network is really a network. The mathematicians are not all touching each other; they’re standing across the room from each other. They might be in different countries, faxing each other papers. But separate as they may seem, the information is, in one way or another, being spread through the system of mathematicians. There is a huge amount of translation of that information going on: one mathematician has a thought, translates that thought into writing, then faxes the thought, turning it into electrical signals. Then another mathematician’s printer prints out the fax, translating the information into ink symbols, which are sent through the air via photons into the eyes of the other mathematician, and converted by his retina into electrical impulses which become thoughts. It’s a painfully circuitous route to take, but in the end, there is a definite path of physical information being networked among the mathematicians. A brain is doing all of this more efficiently by keeping everything in one neural language, but it’s not doing anything fundamentally different. It doesn’t matter that the various numbers and symbols don’t seem to have any universal meaning which would translate into a physical property such as consciousness. The mathematicians understand what they mean, and translate the information appropriately. The important thing isn’t that the symbols are understood by nature; the important thing is that the symbols allow the information which is being processed by the system to manipulate the system itself, so that new information will cause a change in the system, which will create new information, and then a change in the system, and so on.
Again, this intuitively feels wrong because of our misunderstanding of reductionism and emergentism. It doesn’t seem to make sense that all of these different pieces which seem so separate are forming one whole. It’s tempting to say something like, “Well, if this is conscious, it’s not really the matter which is conscious, it’s the system created by the matter.” But there is no real difference between the matter and the system. “Matter” and “system” are convenient mental models that we have for talking about things in two different ways. But what we’re talking about is the same thing. You can think of this all as happening in a visual-spatial physical world, or as happening in a “mathematical” world, or you can explain it in English with words like “emergentism,” “network,” and “weird,” but you’re still just trying to describe one actual phenomenon with different mental faculties.
Oh, wow, that was exhausting. Let’s take a break.
NOTE: The definition I originally used for naturalism at the time that this was published was different. I said that naturalism means, more or less, that there’s nothing supernatural. Somebody pointed this bad definition out to me in the comments. I feel that it’s worth changing because it was such an obviously bad definition, and permissible to change, since it’s not a central point of the essay and just something I was using as introduction.