Sometimes the lake doesn’t glide serenely forward, but boils, roils, undulates restlessly, and the waves are not soothing, not sonnets, but tangled, sweaty sheets, surging grotesquely like the chattering thoughts of an insomniac, rising without purpose. These waves don’t come in directions, or in even succession, they break tumultuously, biting and gnawing cannibalistically like rats racing selfishly toward the shore, and they express like those wakeful thoughts on sleepless nights, as if you were sleeping on a bed of snakes, or that those snakes were in your skull trying to emerge from your eyes, but accomplishing nothing but to helplessly writhe in contempt. On those nights you don’t stare across the lake in longing at the hanging moon and wish that she would return to you, but you stare at the hateful lake and wish that you could retch it from your stomach, that she would leave you, no longer haunt you, stop calling your name.


I look inward and see the empty depth of my body, and it is pitch black; not an altar but a tomb.


-William Arthur Grant

I’ve had difficulty sleeping for several nights. My typical habit, lately, has been to swim from six till dusk, and when I settle in bed, my body still seems to bob afloat the waves like a cork. I turn and turn the lamp on and off repeatedly, and my grandfather’s book, by my bed, brings me nothing good in the way of sleep or solace. I’ve also been reading Dennett’s Consciousness Explained, and the two texts intermingle like bad drink and I doze with dreams of abandoned thoughts like cascading surf. My brain seems to be a sort of generator of half baked aquatic fantasies. These weird poetic prophecies of my grandfather don’t help anything much.


There is a shadow now, through the windowpane, of a bending branch of twisted rain. It makes me think of neural nets, the way the rain drips down in rivulets of broken lines, in fractal sets like lightning, branches, or unspooling twine. Through the blinds, in threadlike slits, the silhouette of maple boughs is segmented into shaded rifts, and my room, like Plato’s cave, shows me only slices of reality.

Beyond the curtain, in the shadowland, I see shapes that melt, that meld, that dissipate like silt or sand. The semblance of those forms is mine to to mold and shape; a knight in armor, or a hero’s cape. What shapes stir thereout, in that shadowland! A whirling dress, a regent’s hand. Through that world my princess twirled, her materiality her curse, and my demand.”

-William Arthur Grant

The late, great, and little known William Arthur Grant, doctor of philosophy, and incidentally, my grandfather, took quite a different position than I do on many things. But it’s nonetheless fascinating to go through the papers he left behind him. One set in particular, bound in leather, begins with the quotation above, and unfolds a beautiful and somewhat eerie philosophy in his particular style of writing, not quite either verse or prose. It was given to me when he passed away when I was in seventh grade.

I really don’t know what to make of it, honestly. I’ve had it in my room since I was twelve years old, and have kept it with me wherever I’ve gone, even now, as a young adult living on the coast of Lake Michigan. Sometimes it seems to be a sort of narrative, and other times it’s a sort of essay on various philosophical subjects. Whatever it is, it’s probably what inspired me most to want to be a writer.

I think what I liked most at first, before I could understand its content, was the inky pen he used on the heavy yellowing paper. It was the sort of pen that geniuses always use in old sketches and journals, and he had a fluidly messy, though very orderly handwriting that looked like something adventurous or at least very interesting. I liked the book itself before I liked the book. I only barely tread into its pages far enough to glean any meaning.

As much as I would like to reprint his entire text, I think that it would somehow fail to convey what the book itself is. The fact of the book is more than the story in it, it’s the pages, smudged by his fingers and worn yellow in a dusty attic. The original intent of the physical object that would become the book was that it would be some sort of ledger. There are dotted lines to give structure to the writing and the pages are dated down every quarter. The dates are ignored, sometimes written over completely. I don’t think I could recreate the experience of reading it online.

This is an old but important question in philosophy, and one which I think has a straightforward answer which is generally overlooked. If I were to make copies of the book in computer text, and send it online, would I have spread the book, or a ghost of it? What is the important feature of a text that we label it such? Is “Great Expectations,” or “Huckleberry Finn,” a collection of physical books, or is it some abstract object, which may pop in or out of existence whenever certain books are published? If I were to burn every copy of “Les Miserables,” would that transform “Les Miserables” to dust and ash, or would it only banish it temporally from our realm, ready to be recalled whenever a new author happens to put those same words in that same order?

If we extend that line of thought to people, what is it then, that makes a person a person? If you burned my body, would the potentiality of my body die? If the atoms of my body were reconstructed, would that be a new me? If a computer program was written which exactly had my memories, my thoughts, and my voice, would that be me also?

As I said, I think that the answer is straightforward. In the case of “Les Miserables,” there are a number of distinct physical objects which exist that we call books, and when we look at some of them, we call them “Les Miserables,” and when we look at others, we call them, “Slaughterhouse V.” The relevant issue is whether these books can get a human mind to light up and call them by that particular name. If I read “Les Miserables,” online, what I’m doing is interacting with a certain physical object, in this case a computer, which has been designed to manipulate my mind in a way which evokes, “Les Miserables.” We get caught up in trying to answer questions like, “What is Les Miserables, really,” when the reality of the situation is very plain. There are a large number of separate physical objects which have certain characteristics, and we have certain categories for placing these objects in. Some of the categories have names like “Les Miserables,” and some of these categories have names like, “Huckleberry Finn,” but that doesn’t mean that anything metaphysically significant is happening when one of these objects is born. It’s not a mystery that some of these objects may exist as ink and paper, and some of these objects may exist as computer circuits and pixels on a screen. Those two things are, in actuality, two different objects. We just put them in the same category because they have the same property of being able to get us to think about John Valjean or Billy Pilgrim. “Les Miserables” isn’t an abstract object existing in some other realm, it’s an idea in our heads that we apply to certain objects which our brains interpret as being similar.

In a similar way, some disagreeable (yet very friendly) person once asked me whether destroying a computer running windows would destroy windows, or just the computer. Of course, I would have answered, if I had thought of it at the time, that it wouldn’t destroy “windows,” because windows is an idea about things which exist in our heads. It would destroy one particular object which we interpret as having the quality of “windows,” but “windows” itself is just a category that we have for putting those objects in. The real question is whether destroying every human being on the planet would destroy windows. If we were to go extinct as a species, but leave all of our books and computers behind, would “Les Miserables” still exist? Again, the straightforward answer is that there would still be a bunch of physical objects, and nobody to put them into different categories. Whether one book is “Les Miserables’ or “Huckleberry Finn” at that point doesn’t mean much of anything. They still retain the qualities which would make conscious beings call them one or the other if they were there, but in the absence of conscious beings, those prospects simply no longer apply. It’s a bit like asking whether things could still get wet if there were no liquids. No, they couldn’t. Everything would still have the properties which, when interacting with liquids, creates, “wetness,” but without liquids, “wetness” would never happen. “Les Miserables” exists in the interactions of a mind and an object, and without one half of the recipe, it’s meaningless to talk about whether it would exist.

This brushes against another topic in philosophy, namely, mathematical platonism. According to the Stanford Encyclopedia of Philosophy, it’s the predominant position taken by most working mathematicians. That may or may not be true, but I’ll take Stanford at its word on this. The Encyclopedia goes on to describe mathematical platonism as the position which holds the following to be true:

There are mathematical objects.

Mathematical objects are abstract.

Mathematical objects are independent of intelligent agents and their language, thought, and practices.

You can tell that these are very serious axioms because they’re so bold. Personally, I think that this is some of the most ridiculous philosophy I’ve ever come across, but a lot of people much smarter than myself are taking it seriously, so there’s a good chance that I don’t know what I’m talking about. That being said, I think that there’s a huge contradiction here in the definition of ‘abstract’. By my way of thinking, recorded above, there’s no such thing as an object which is ‘abstract’ and also ‘mind independent.’ To me, it seems like being ‘abstract’ is a way of saying that something falls into a category inside of our heads. Already very suspicious of abstraction, I wasn’t very much comforted by the following definition in “The Internet Encyclopedia of Philosophy,” the self titled peer -reviewed academic resource.

There is no straightforward way of addressing what it is to be an abstract object or structure, because “abstract” is a philosophical term of art. Although its primary uses share something in common—they all contrast abstract items (for example, mathematical entities, propositions, type-individuated linguistic characters, pieces of music, novels, etc.) with concrete, most importantly spatio-temporal, items (for example, electrons, planets, particular copies of novels and performances of pieces of music, etc.)

Any definition that begins with, “there is no straightforward way of addressing what it is to be [the object of this definition]” should be fired from the dictionary. Whether I’m right or wrong, I have at least the conceit that I’m being straightforward. I think this definition is representing a confused thought process more than an insight into ‘abstractness.’ It lists novels and pieces of music as ‘abstract’ entities, and individual books and electrons as ‘concrete.’ Ignoring the fact that electrons hardly exist anyway (a joke, but come on, quantum stuff just seems to come and go as it pleases) I do think that there’s no way to call a novel an abstract object in a way which is ‘mind independent.’ There are a bunch of concrete objects, such as books, and their ‘abstractness’ comes from the mental category that we’re artificially placing them into.

I think the reason that mathematicians are so prone to making what I consider to be a major mistake comes from the foundations of Zermelo Fraenkel Set theory. According to ZFC, numbers are best described in terms of sets, or abstract groups of mathematical elements which have definable relations between them. One of the founding assumptions of ZFC is the concept of the set itself. A “set” is just a defined group of elements, such as the natural numbers. But the concept of the set itself seems to float invisibly, abstractly over ZFC, in particular in two areas. The first is the “empty set,” or the idea of a set which includes no elements. In ZFC, the empty set is equated with zero, and natural numbers are defined as progressing in relation to the empty set. Each progressive natural number is a set which includes all previous naturals, but uniquely, the ’empty set’ is a set which includes nothing. The empty set is the category of observation itself. It is the possibility of existence, the foundation of all ‘abstractness.’ It is the concept of a category of things, but with no things to put in that category, and in ZFC, the empty set must be treated as a real object, because all further sets are defined in relation to it. Therefore, the concept of abstractness is woven foundationally into ZFC, and ZFC would collapse without it.

The second place where I think that ZFC assumes abstractness is in certain definitions of infinity. If there can be a set of objects which is unlimited, which has no end to the number of elements within it, such as is the case with the natural numbers, how exactly do you say that you have a ‘set’ which is a distinct object into itself? Of course, that’s precisely the sort of question which mathematicians seek to answer with concepts such as Dedekind cuts, but I do feel that it tends to beg the question, because all attempts to answer those questions within ZMC must assume that sets themselves are objects. If you’re assuming that sets are real things, and that you can put any little thing that your heart desires into that set, it doesn’t prove much about whether sets exist or not to say, “And also, this particular set has everything in it.” I’ve made a few stabs at addressing this mathematically, but an acquaintance of mine, a lecturer in mathematics at University of Colorado, Boulder, politely tells me that my attempts are so far very naïve. Anyway, I’m not really trying to talk about mathematical platonism here. Let’s get back to minds and books, which I think is better anyway. Stupid math. Wherever platonic objects exist, I hope it’s unpleasant there.

If we keep going with these ideas and apply them to people, we have to eventually come to the question of whether or not a clone of one person with that person’s memories counts as being that person. Every philosopher and science fiction writer has to get there eventually. In real life, we don’t often have to face that question, at least we haven’t had to yet, but there are some areas where it comes up. For instance, is the ‘me’ now the same ‘me’ that existed several seconds ago, or ten years ago? Or is the drunk me who punched you last night (sorry) the same ‘me’ as the sober me who doesn’t remember doing that? Following my reasoning above, I think that the best thing to do is to just calm down about it already, because the reality of the situation is pretty plain and doesn’t need to be as distressing as some philosophers want to make it. Just as we are free to call some books, “Les Miserables” and others, “Huckleberry Finn,” we are free to call some people, “Tom” and others “Tim.” There are no cosmically right answers here, just some answers which are more sensible in some situations and some which work better in others. There’s something distinct about the phenomenon which has occurred, and which I’ve called ‘myself,’ which differentiates it from other similar phenomena, so it’s convenient and sensible to think of myself as belonging to a different category than the one that other people belong in. But it’s still just a made up category, and if I wanted to say that what really defines the character of Michael Trites is that he has brown eyes, then I could go ahead and call everybody with brown eyes by my name. There’s just nothing particularly useful about doing that, and it violates the general public consensus of what makes a person an individual, so I don’t behave that way. 
If there were other phenomena which were very very similar to me, such as a clone with all of my thoughts, it would seem more justifiable to use the same category to describe both physical objects. Both separate objects, the clone and myself, could say that we belong to the same “set” of Michael Trites, but perhaps that we were slightly different permutations of that same hypothetical set. Or we could both change our names and have nothing to do with each other. It doesn’t ultimately matter, there are no metaphysical constraints here on what we should consider ourselves. There are simply different ways of looking at the situation, some of which are more useful. “Michael Trites” is not some abstract entity which exists by itself. My abstractness exists in my head, and we use that abstractness to refer to, currently, one particular category of physical phenomenon. (This will change once a good storm brews up and powers my equipment.)

Once my clone is alive, however, the question reaches its peak and pique when it come to my death. If I had an active alternate out there, would I be willing to let myself die? Would I live on, in my truest sense, if my clone lived on? Would that be enough? While maintaining that that question is ultimately a matter of interpretation, emotionally, I have to say that I wouldn’t simply let myself die just because some “me” was out there as a clone. I’m attached to my progressing, physical self, a cyclone of matter composed from the substance of reality itself. I don’t think there’s anything indefensible about that. I feel that I’m beyond being more than the sum of my parts. I’m very much emotionally bound to my parts, and that emotional connection is a deep part of the ‘self’ that my every breath and heartbeat have been struggling to maintain all my life. I love my hands, which bear my scars. What would a scar be on a clone? It would be mass of flesh, only incidentally a tribute to a wound. The physical progression of my being is among the most important properties in my definition of who I am. I could abandon that, but I don’t want to.

If I were to make digital copies of my grandfather’s book, I wouldn’t give up the original. The reality of the physical object which is that book is just as important to me as the words that it contains. Yes, if that book were to disappear, I would be glad to have a backup on file, but I wouldn’t feel that I had kept a shade of that book with anything more than a passing rezemblance to the original. There would be no way to retreive that book, no land beyond the curtain I could jump into where I could find it. Once lost, it will be lost, like my best and most honest “I” will one day be.

The Mind, The Body, And A Splash Of Nirvana


I consider myself a naturalist, which, for the purposes of this essay, we can take to mean that everything that we’re going to come into contact with is part of the same basic natural world, and subject to some sort of underlying set of principles. It’s not a perfect definition but it’s a place to jump off from. This essay isn’t meant to discuss naturalism specifically. The important point is that we start the discussion agreeing that our subject is something we can discuss rationally, and that we can’t answer questions by saying, “ghosts,” in a quiet voice. Reality, as I’ve said before, is all around us, not some ineffable monstrosity from the depths of the dreamlands.

However, it’s still important to recognize the limitations of our senses and the importance of reason in uncovering the nature of our Universe. Obviously, there are many things that are part of the natural world which we can reason must exist but which can’t be experienced directly through our senses. And it’s important to further recognize the limitations of our reason and intuitions in forming accurate descriptions of the world. If, as I will argue, we approach the world fundamentally through various forms of language, we have to acknowledge that those languages may contain limitations which prevent us from describing the world with absolute accuracy. It’s in these failures of intuition, I believe, that lie many of our mistaken beliefs about subjects such as dualism, reductionism, the mind body problem, and computationalism. So let’s jump right in.

If you open up our skulls, you’ll only find one thing. You find a brain. You don’t find spirits blowing around and moaning. Maybe they’re invisible, and don’t interact with matter… but probably not. Apart from the brain, though, there is some kind of mind in there, too, and there are various ways to go about explaining this. My preferred method is to question why it is that we’re assuming that the mind and the brain are different things. What seems to me most likely is that there is only one thing, and that the words, “mind,” and “brain,” are both just representing two different ways of thinking about that one thing. We call it a brain when we want to think about individual brain structures or neurons, and we call it a mind when we want to talk about the pattern of neural impulses which the brain is forming. But there’s no clear distinction between these two. One gives us a more convenient description of the brain when we want to talk about what people are thinking, and the other gives us a description of why there is an oily mess on the floor of the laboratory. “Oops, I dropped the brain.” You don’t say that you dropped a mind, but that’s what you did.

What these two words are referring to are not different things that exist in the real world, but different mental processes that we have for dealing with one thing. When you dropped the brain, which you will have to pay for, by the way, you created a new pattern of neural hardware. This could be described in mind terms, but we don’t have a word for it, because most people don’t suddenly think themselves into oblivion, unless of course they’re a Buddhist. Which I suppose means that the best way to describe the emptiness achieved upon splatter is nirvana. Seriously, I do believe that meditation has its benefits, I’m not trying to be mean.

Probably the best argument against this would be an argument along the lines of a reductionism vs. holism vs. emergentism debate. But again, I think that what these words really represent, rather than real differences in the natural world, are limitations of our thinking. Neurons are what we see when we think of things reductionistically, and minds are what we see when we think of things as (at preference) a form of holism or emergentism. What this means isn’t that the world contains no differences within itself, just that to divide those differences into different strata of associations, such as through emergentism or reductionism, is a helpful mental process and not a fully accurate representation of the world. When an object exhibits properties which are not fully explainable through reductionist methods, and we say that they exhibit emergent properties, what we’re doing is saying that our thinking has run out of space in the reductionist department, so now we’d better fire up the emergentism software in order to get a clearer picture. There is a real difference between things that exhibit emergent properties and things without them, and we should continue to try to clarify what that difference is, but we shouldn’t assume that one is right and one is wrong; Both are ultimately wrong, though they are both helpful in their own way.

This is true about all of our perceptions. When we see a physical entity, such as a doll, a pineapple, or Kate Upton, we accumulate a bit of information about the object, and we build a mental model telling us what that thing is. But of course it’s not an exhaustively researched model. It doesn’t include the position of every quark in the object, and a completely accurate projection of what will happen to that object in the next few decades. Which is fine. It suits our purposes. We don’t really need to know about the quarks for most purposes, and if we do want to find out about them, we’ll look closer and expand our model. But we’ll never come to a complete picture of what that object is. We can refine and refine our model, but it’s still a model, and not Kate Upton, who is still Kate Upton and not a model. Or whatever.

I would hesitate to say that this is a necessary condition of consciousness itself, but it does seem to be the necessary condition of human consciousness, which is what we’re concerned with. And I would argue that a similar principle applies to arguments about reductionism and emergentism. We can see that Kate Upton is not a pineapple, and we can see that reductionism is not emergentism, but a complete description of why that is is probably not possible. Which is not to say that we should stop trying to refine our picture of what’s going on, just that we should tentatively accept that we don’t have to have a definitive answer to these questions to keep moving on in philosophy. We’ll probably never have the definitive answer, just progressively better and better answers. Sometimes you have to accept that your answer is good enough for the moment to be getting on with. If this sounds like a way of avoiding the argument, then you’re right, it is, but I think it’s a well founded one. I think it does establish that, although emergence is good to talk about, it doesn’t mean that emergent properties are separated from their reductionist foundations in any real way. Emergency will never become dualism; Minds and brains are still ways of talking about the same thing, even if minds are emergent properties of brains.

Why this is so perplexing, intuitively, is that we are always trying to reconcile these opposing views. We are trying to create a model of emergentism using reductionist principles, and a model of reductionism using principles of emergentism, and it’s not working out. We ‘re looking for some kind of Unified Strata Theory, but if there is one out there, I’m not aware of it. I don’t know if we should really expect it to work out. If we find reductionism failing, and begin to think of things in terms of emergentism, then why would it then make sense to explain emergentism through reductionism? Or vice versa?

This is why many of us are so tempted by dualism. It’s neater. It’s easier to think that when you’re thinking in reductionist terms, you’re talking about one thing, and when you’re thinking in emergentist terms, you’re talking about another. But I think the general evidence points to this not being true. If neural correlates to mental phenomena are only correlates, and nothing else, why do we lose brain function when we lose the correlates, never to get them back? Why is brain damage so damaging? What I think is the simplest and best answer is just that there is no real difference between neural correlates and the mental states they correlate to. They’re two sides of the same coin. Or rather, the same coin in two different lights.

This is what we have evolved. Not an in-depth analysis of every concept we come across, but convenient mental models which we switch out as need be.

Computationalism And The Chinese Room

John Searle’s famous Chinese Room thought experiment goes something like this: A man who doesn’t understand Chinese is in a room with a computer. Chinese characters are slipped to him underneath the door, and he puts them into the computer. The computer then gives him instructions on what to reply. He writes new characters down based on the instructions given by the computer, and slips them back through the door. Chinese speakers outside are convinced, based on the replies, that there is a Chinese speaker inside.

Next, the computer is replaced by a pad of paper, a pencil, and a book of instructions. The man inside still does not understand Chinese or have any understanding of the instructions, but he follows them, and they represent exactly the algorithm used by the computer. He slips papers out, and again the people outside are convinced that there is a Chinese speaker in there.

What the Chinese room implies is that it’s possible to create a system which performs a seemingly intelligent act without actually being conscious. But this implication isn’t as clear cut as many people seem to think that it is. According to the thought experiment, the Chinese room is able to produce sentences which appear to have an understanding of Chinese, but which don’t actually understand Chinese. But that’s easier said than done. You’re talking about some process which completely replicates the results of consciousness. But invoking such a process is a bit of a magic trick. What if you asked it, in Chinese of course, “How are you feeling today?” Well, there are a few different things that could happen, depending on how the process was programmed. You could have programmed the process to connect such syntactical structures to other syntactical structures, so that it responds with, “I am real shitty today, George.” And it appears that it’s giving you a response which understands Chinese. But at some point, a human programmed that response into the system. Which means that the understanding of feeling shitty today in relation to the prompt, “How are you feeling today?” Has been inputted into the system by a conscious entity. So the Chinese room is, in this instance, not actually replicating the results of consciousness, it’s merely spitting out an answer which has been inputted into the system by a conscious entity. You could program the system to spit out several random responses to the same question, but it’s still doing the same thing with another level of complication. Now let’s say you program the process in such a way so that it spits out responses to questions, and somebody inputting answers mistakes the process for a conscious being. That means that it has passed the Turing test. Which is not really a very big deal. The Turing test is not a very reliable indicator of consciousness. 
People are naturally prone to assigning human traits to non-human things. We see faces in clouds, that doesn’t mean that clouds are conscious, any more than seeing a mind in a Chinese room means that the room is conscious. The Turing Test simply doesn’t mean a whole lot. It means that we’ve taught the room to lie; If we imagined that it was only possible to say, “I’m happy today,” if you were truly happy, that would be a world where nobody ever lied. But people do lie, so we know that it’s possible even for conscious entities to spit out results that replicate a specific conscious state without actually experiencing that state; the fact that it’s possible to say that you’re experiencing something without experiencing it isn’t news. But when a person lies about how they’re feeling, we don’t assume that it’s because they’re incapable of feeling anything, we assume it’s because in this particular instance there’s a disconnection between what they’re saying and an actual conscious experience. If you were to give an actor a line to say, such as “Boy am I happy,” you wouldn’t expect them to feel happy. In the same way, you can teach the Chinese room to say, “Boy am I happy,” without expecting it to feel happy, and without damning results to computationalism.

At this level of programming, the Chinese room probably wouldn’t pass the Turing Test very consistently anyway. But what if you programmed the process to to be more nuanced? There are programs out there right now which are doing that. You can look up a number of chat bots online and have interesting, if not entirely human, conversations. Mostly, they work through some type of syntax manipulation, making associations between inputted words and a database of words, and then structuring those words into recognizable sentences. By making associations between very large numbers of words and phrases, they can learn to lie very well. They might not need a human to directly input the statement, “I’m feeling well today,” if they are good enough at finding associations between words such as, how, feeling, today, well, and you, and figuring out how to put those words into context. My first thought is that the program is lying, because it is not feeling well today, and has no conception of what that might mean. It has no understanding of what it means to feel well today, because it can’t place “Feeling well” in the context that a human can place it. A human can place “feeling well,” in the context of a stubbed toe or an upset stomach, and the sensations that go with them. But this doesn’t necessarily mean, again, that because we have taught a computer to lie in this instance, that computers are incapable of consciousness. A person can be taught to say words that they don’t understand the meaning of. This clearly doesn’t mean that people are incapable of understanding. So what then, if we programmed into the computer a large context of associations? For instance, we could program the computer (or the Chinese room) to associate “not feeling well” with having stubbed your toe. This obviously doesn’t mean that it’s stubbed a toe. But how does it know that stubbing a toe is bad? Still, it’s because a programmer has told it to say that. 
Even if the system is allowed to make associations on its own, and learns after talking to several people that toe stubbing is associated with not feeling well, it’s getting that information through the people it’s interacting with. The program isn’t making a judgment; the program is having a judgment inputted into it. You could even say that the process of judgment takes place in both the program and in the entity which makes the judgment, but still, the program itself has not made a judgment. The understanding of toe stubbing as bad comes from conscious entities which know that toe stubbing is bad. No system will ever know that toe stubbing is bad until that information has come through its own conscious experience or through input from outsiders. You could probably test the computer for this by asking it how it feels about things which it has not talked to anybody about and which haven’t been programmed into it. This is a type of Turing Test, and could prove fruitful, but it’s still possible that the programmer could be clever enough to game the test without any consciousness having occurred. The only real way to get around this is to give the computer some way of making judgments on its own. And now we’re going down the path of making the program conscious. There are many possible distances we could travel down that path, but ultimately, if we were to make the program replicate consciousness exactly, it would become conscious.

So the answer to the Chinese room is that, really, you couldn’t have a Chinese room which replicates Chinese thoroughly without having a conscious entity in the room. So by saying, “I have a room and process that replicates consciousness without being conscious,” you’re saying something equivalent to, “I have a magic box with a genie in it.” You’re not really saying anything. The argument goes something like this: “Can you create an entity which replicates consciousness without being conscious? Imagine that I have a magic room which does just that. Therefore, yes you can!” But it’s one thing to say that you have a Chinese room which seems to think, and another thing to have one. Searle is cheating by stipulating that the Chinese room is not conscious.

Searle is quoted as saying, “Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modeled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.”

Well, no. That already assumes that consciousness is not a computational model. If consciousness is not a computational model, then it would be right to say that a computational model is not consciousness. But if consciousness is a computational model, then a computational model of a computational model is still a computational model, and is still conscious. Is an object within a painting an actual object? No, not usually, unless the object that you’re painting is also a painting. A painting of a rainstorm is not a rainstorm. But a painting of a painting is still a painting. In the same way, if consciousness is itself a type of simulation, then a simulation of a simulation is still a simulation.

But consciousness isn’t an imaginary property; it’s something that we know to exist, and we can make deductions about what, exactly, it is. Where do we find it? Primarily, in the brains of evolved organisms. That means that, at least once, consciousness has arisen via natural selection. If it were possible to replicate the results of consciousness entirely computationally, without being conscious, then why would natural selection go to the trouble of inventing such a phenomenon when it could get the same results computationally?

And why assume that brains don’t work computationally in regard to consciousness, when we know that they work computationally in other areas? I don’t think it’s widely disputed that brains do make computations, at least sometimes. So if we assume that (a) brains are at least sometimes making computations, and (b) computations can give you the same results as consciousness, then why do we assume that consciousness is not a computation? Why not just cut out the middle man?

One of the intuitive problems people have with this goes back to confusion concerning reductionism and emergentism. I argue that it would be possible to simulate a consciousness with a computer. People often respond with a variation of the Chinese Room. If you can create a computer which simulates consciousness, then wouldn’t that mean that a team of mathematicians could in theory perform all of the functions of the computer on pencil and paper? And if they could, wouldn’t that dispel computationalism, because, I mean, come on, you can’t expect us to believe that a team of mathematicians are conscious. Come on. Really.

But, if consciousness is physical, then mathematicians are just as good a source of physical information networking as anything else. Let’s say one mathematician writes down a number, then sends it along to the next mathematician, who adds another number to that number. Then he sends it back, and the first mathematician adds another number to the second one. And they keep sending it back and forth, adding numbers. This is beginning to become a network. A very simple network, but a network. Physical information is being traded between different people. You could say that the two mathematicians are analogous to neurons, and the papers they are sending are analogous to electrical impulses. Now let’s say we add a third mathematician. Mathematician A sends a number off to mathematician B. If that number is even, mathematician B sends it on to mathematician C. If it is odd, it goes back to mathematician A. Now we have a network, and a decision procedure which begins to manipulate the information.
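The three-mathematician network can itself be written down as a tiny program, which makes the point concrete: the routing rule is a decision procedure, regardless of whether it runs on neurons, paper, or silicon. The specific operations each mathematician performs here are invented for illustration; only the even-goes-to-C, odd-goes-back-to-A routing comes from the description above.

```python
# A sketch of the three-mathematician network: each "mathematician" is a
# function that transforms the number on the paper, and the routing rule
# decides who receives the paper next.
def mathematician_a(n): return n + 1   # illustrative operation
def mathematician_b(n): return n + 2   # illustrative operation
def mathematician_c(n): return n * 2   # illustrative operation

def pass_papers(n, steps):
    """Route a number through the network for a fixed number of hand-offs."""
    current = "A"
    for _ in range(steps):
        if current == "A":
            n = mathematician_a(n)
            current = "B"
        elif current == "B":
            n = mathematician_b(n)
            # The decision procedure: even results go on to C,
            # odd results go back to A.
            current = "C" if n % 2 == 0 else "A"
        else:  # mathematician C
            n = mathematician_c(n)
            current = "A"
    return n
```

Starting from 1 and making three hand-offs, the paper goes A (1→2), B (2→4, even, so on to C), C (4→8). The information being passed shapes where it goes next, which is the beginning of the self-manipulating system described below.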

A team of mathematicians working together, if they were doing so in a way to simulate a brain, would create some similar network, though it would probably be much less formally structured. It seems odd to think that such a network is really a network. The mathematicians are not all touching each other; they’re standing across the room from each other. They might be in different countries, faxing each other papers. But separate as they may seem, the information is, in one way or another, being spread through the system of mathematicians. There is a huge amount of translation of that information going on: one mathematician has a thought, translates that thought into writing, then faxes it, turning it into electrical signals. Then another mathematician’s printer prints out the fax, translating the information into ink symbols, which are sent through the air via photons into the eyes of the other mathematician, and converted by his retina into electrical impulses which become thoughts. It’s a painfully circuitous route to take, but in the end, there is a definite path of physical information being networked among the mathematicians. A brain is definitely doing all of this more efficiently by keeping everything in one neural language, but it’s not doing anything fundamentally different. It doesn’t matter that the various numbers and symbols don’t seem to have any universal meaning which would translate into a physical property such as consciousness. The mathematicians understand what they mean, and translate the information appropriately. The important thing isn’t that the symbols are understood by nature; the important thing is that the symbols allow the information which is being processed by the system to manipulate the system itself, so that new information will cause a change in the system, which will create new information, and then a change in the system, and so on.

Again, this intuitively feels wrong because of our misunderstanding of reductionism and emergentism. It doesn’t seem to make sense that all of these pieces, which seem so separate, are forming one whole. It’s tempting to say something like, “Well, if this is conscious, it’s not really the matter which is conscious, it’s the system created by the matter.” But there is no real difference between the matter and the system. “Matter” and “system” are convenient mental models that we have for talking about things in two different ways. But what we’re talking about is the same thing. You can think of this all as happening in a visual-spatial physical world, or as happening in a “mathematical” world, or you can explain it in English with words like “emergentism,” “network,” and “weird,” but you’re still just trying to describe one actual phenomenon with different mental faculties.

Oh, wow, that was exhausting. Let’s take a break.



NOTE: The definition I originally used for naturalism at the time that this was published was different. I said that naturalism means, more or less, that there’s nothing supernatural. Somebody pointed this bad definition out to me in the comments. I feel that it’s worth changing because it was such an obviously bad definition, and permissible to change, since it’s not a central point of the essay and just something I was using as introduction.

Starting Off

I’ve written a lot of things down in my life, and recently have become interested in trying to get other people to read them. Through various twitters and such, I came across Massimo Pigliucci’s blog Scientia Salon, and wrote an article for it, which he was charitable enough to print. It was a fun but surprisingly nerve-wracking experience. Which, I should add, had nothing to do with Massimo himself, who was very friendly and even responded to an anxious email of mine inquiring about my submission. So, deciding that I would like to do more writing along those lines, and suspecting that I could probably not get credentialed scientists/philosophers to publish every thought that passes through my head, I started a blog. (Hint: It’s the one that you’re reading.)

More writing will be coming soon; this is mostly just a placeholder so I have something to start out with. Actually, I have to get going. I have an appointment soon. And I need to get dinner.