Functionalism: Can't we just say that consciousness depends on the higher-level organization of a given system?

Functionalism, roughly, is the idea that consciousness is to be identified not with a particular physical implementation (like squishy gray brains or the particular neurons that the brains are made of), but rather with the functional organization of a system. The human brain, then, is seen by a functionalist as a particular physical implementation of a certain functional layout, but not necessarily the only possible implementation. The same functional organization could, presumably, be manifested or implemented by a computer (for example), which would then be conscious. It is not the actual physical system or what it is doing on a physical level that matters to a functionalist, but the abstract "block diagram" that it implements. The doctrine of functionalism may fairly be said to be the underlying assumption of the entire field of cognitive science.

Functionalism gained adherents within philosophy of mind as a response to what are known as type identity theories, or simply identity theories. These are theories that say that the conscious mind just is the neurology that implements it, the gray squishy stuff. Identity theories, however, exclude the possibility that non-brain-based things, like computers or aliens, could be minds. Functionalism is predicated on the notion of multiple realizability. This is the idea that there might be a variety of different realizations, or implementations, of a particular property, like consciousness. Another way of saying this is that there might be many micro states of affairs that all produce or constitute the same macro state of affairs.

I have several problems with functionalism.

In order to even have a block diagram of a given system, you have to draw blocks. Functionalists are often somewhat cavalier about how those blocks are drawn when imposing an abstract organization on a physical system. They tend to assume that Nature drew the lines for them: that there is an objective line between the system itself and the environment with which it interacts (or the data it processes) and that there is an objective proper level of granularity to use when characterizing the system. Depending on how fine a granularity you use to characterize a system, and the principles by which you carry out your abstraction of it, its functional characterization changes drastically. Functionalists tend to gloss over the arbitrariness of the way these lines are drawn.

The functionalist examines a system, chooses an appropriate level of granularity (with "appropriateness" determined pretty much solely according to the intuitions of the functionalist), and starts drawing boxes. Into those boxes the functionalist does not go, as long as the boxes themselves operate functionally in the way that they are supposed to. It is central to the doctrine of functionalism that how the functionality exhibited by the boxes is implemented simply does not matter at all to the functional characterization of the system overall. For this reason, the boxes are sometimes called "black boxes".

It is worth noting that, properly speaking, physicalism itself can be seen as a kind of functionalism. This is because at the lowest level, every single thing that physics talks about (electrons, quarks, etc.) is defined in terms of its behavior with regard to other things in physics. If it swims like an electron and quacks like an electron, it's an electron. It simply makes no sense in physics to say that something might behave exactly like an electron, but not actually be one. Because physics as a field of inquiry has no place for the idea of qualitative essences, the smallest elements of physics are characterized purely in functional terms, as black boxes in a block diagram. What a photon is, is defined exclusively in terms of what it does, and what it does is (circularly) defined exclusively in terms of the other things in physics (electrons, quarks, etc., various forces, a few constants). Physics is a closed, circularly defined system, whose most basic units are defined functionally. Physics as a science does not care about the intrinsic nature of matter, whatever it is that actually implements the functional characteristics exhibited (and described so perfectly in our laws of physics) by the lowest-level elements of matter. Thus physics itself is multiply realizable.

It could be argued that consciousness is an ad hoc concept, one of those may-be-seen-as kind of things: however I choose to draw my lines, whatever grain I use, however I gerrymander my abstract characterization of a system, if I can manage to characterize it as adhering to a certain functional layout in a way that does not actually contradict its physical implementation, it is conscious by definition - consciousness in a given system just is my ability to characterize it in that certain way. To take this approach, however, is to define away the problem of consciousness.

This may well be the crucial point of the debate. I believe that consciousness is not, can not possibly be, an ad hoc concept in the way it would have to be for functionalism to be true. I am conscious, and no reformulation of the terms in which someone analyzes the system that is me will make me not conscious. That I am conscious is an absolutely true fact of nature. Similarly, (assuming that rocks are in fact not conscious) it is an absolute fact of nature that rocks are not conscious, no matter how one may analyze them. Simply deciding that "conscious" is synonymous with "being able to be characterized as having a functional organization that conforms to the following specifications . . ." does not address why we might regard conscious systems as particularly special or worthy of consideration.

Functionalists believe that in principle, a mind could be implemented on a properly programmed computer. Put another way, functionalists believe that the human brain is such a computer. But when we speak of the abstract functional organization of a computer system (as computer systems are currently understood), we are applying an arbitrary and explanatorily unnecessary metaphysical gloss to what is really a phonograph needle-like point of execution amidst a lot of inert data.

When a computer runs, at any instant its CPU (central processing unit) is executing an individual machine code instruction. No matter what algorithm it is executing, no matter what data structures it has in memory, at any given instant the computer is executing one very simple instruction, simpler even than a single line from a program in a high level language like C or Java. In assembly language, the closest human-friendly relative of machine code, the instructions look like this: LDA, STA, JMP, etc. and they generally move a number, or a very small number of numbers, from one place to another inside the computer. Of the algorithm and data structures, no matter how fantastically complex or sublimely well constructed, the computer "knows" nothing, from the time it begins executing the program to the end. As far as the execution engine itself is concerned, everything but the current machine instruction and the current memory location or register being accessed might as well not exist - they may be considered to be external to the system at that instant. If someone (say an engineer) were to disconnect the entire rest of the computer's memory except that being accessed between instructions, the computer would not know or care - it would blithely hum along, executing its algorithm perfectly well. Note that in this scenario, both the memory containing the past and future steps in the algorithm itself as well as any data on which those instructions operate are being removed.
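
To make the phonograph-needle picture concrete, here is a minimal sketch of a toy fetch-execute loop, written in Python rather than real machine code (the opcodes and the three-instruction program are invented for illustration, not any real instruction set). The point to notice is that on each pass the loop touches only the current instruction and, at most, one memory cell; everything else just sits there as inert data.

    # A toy fetch-execute loop. At each step the "CPU" sees one instruction
    # and at most one memory cell; the rest of the program and data are inert.
    def run(program, memory):
        acc = 0                          # a single accumulator register
        pc = 0                           # program counter
        while pc < len(program):
            op, arg = program[pc]        # the one instruction that exists "now"
            if op == "LDA":              # load memory[arg] into the accumulator
                acc = memory[arg]
            elif op == "ADD":            # add memory[arg] to the accumulator
                acc += memory[arg]
            elif op == "STA":            # store the accumulator into memory[arg]
                memory[arg] = acc
            elif op == "JMP":            # jump to instruction arg
                pc = arg
                continue
            pc += 1
        return memory

    # Adds memory[0] and memory[1] and stores the sum in memory[2].
    print(run([("LDA", 0), ("ADD", 1), ("STA", 2)], [2, 3, 0]))   # [2, 3, 5]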

But could we not say that the execution engine, the CPU, is not the system we are concerned about, but the larger system taken as a whole? Couldn't we draw a big circle around the whole computer, CPU, memory, algorithm, data structures and all?

We could, I suppose, choose to look at a computer that way. Or we could choose to look at it my way, as a relatively simple, mindless execution engine amidst a sea of dead data, like an ant crawling over a huge gravel driveway. If I understand the functioning of the ant perfectly, and I have memorized the gravel or have easy access to the gravel, then I have 100% predictive power over the ant-and-driveway system. Any hard-nosed reductive materialist would have to concede that my understanding of that system, then, is complete and perfect. I am free to reject any "higher-level" interpretation of the system as an arbitrary metaphysical overlay on my complete and perfect understanding, even if it is compatible with my physical understanding. Yes, I could see the system that way, at a "higher level", but nothing compels me to do so. It is therefore highly suspect when broad laws and definitions about facts of Nature are constructed that depend solely on such high-level descriptions and metaphysical overlays, laws which concern themselves with such things as "functional states" or "predictive models".

The higher-level view of a system can not give you anything real that was not already there at the low level. The system exists at the low level. The high-level view of the same system is just a way of thinking about it, and possibly a very useful way of thinking about it for certain purposes, but the system will do whatever it is that the system does whether you think about it that way or not. The high-level view of the system is, strictly speaking, explanatorily useless (although it may well be much, much easier for us, given our limited capacities, to talk about the system in high-level terms rather than in terms of its trillions of constituent atoms, for example).

Imagine that you are presented with a computer that appears to be intelligent - a true artificial intelligence (AI). Let us also say that, like Superman, you can use X-ray vision to see right into this computer and track every last diode as it runs. You see each machine language operation as it gets loaded into the CPU, you see the contents of every register and every memory location, you understand how the machine acts upon executing each instruction, and you are smart enough to keep track of all of this in your mind. You can walk the machine through its inputs in your mind, based solely on this transistor-level pile of knowledge of its interacting parts, and thus derive its output given any input, no matter how long the computation.

You do not, however, know the high-level design of the software itself. After quite some time spent watching the machine operate, you could possibly reverse-engineer the architecture of the software. It is the block diagram of the software architecture that you would thereby derive that a functionalist would say determines the consciousness of the computer. But that diagram is something you created, a story you told yourself about the endless series of machine code operations in order to organize those operations in your mind. This story may be "correct" in the sense that it is perfectly compatible with the actual physical system, and it may in fact be the same block diagram that the computer's designers had in their minds when they built it.

This only means, however, that the designers got you to draw a picture in your mind that matched the one in theirs. If I have a picture in my mind, and I create an artifact (for example, if I write a letter), and upon examining the artifact, you draw the same (or a similar) picture in your mind, we usually say that I have communicated with you using the artifact (i.e. the letter) as a medium. So if the designers of the AI had a particular block diagram in their minds when they built the AI, and upon exhaustive examination of the AI, you eventually derived the same block diagram, all that has happened is that the machine's designers have successfully (if inefficiently) communicated with you over the medium of the physical system they created.

The main point is that before you reverse-engineered the high-level design of the system, you already had what we must concede is a complete and perfect understanding of the system in that you understood in complete detail all of its micro-functionings, and you could predict, given the current state of the system, its future state at any time. In short, there was nothing actually there in terms of the system's objective, measurable behavior that you did not know about the system. But you just saw a huge collection of parts interacting according to their causal relations. There was no block diagram.

A computer is a Rube Goldberg device, a complicated system of physical causes and effects. Parrot eats cracker, as cup spills seeds into pail, lever swings, igniting lighter, etc. In a Rube Goldberg device, where is the information? Is the cup of seeds a symbol, or is the sickle? Where is the "internal representation" or "model of self" upon which the machine operates? These are things we, as conscious observers (or designers) project into the machine: we design it with ideas of information, symbols, and internal representation in our minds, and we build it in such a way as to emulate these things functionally.

But the computer itself never "gets" the internal model, the information, the symbols. It is forever confined to an unimaginably limited ant's-eye view of what it is doing (LDA, STA, etc.). It never sees the big picture, or the intermediate picture, or even the little picture, or anything we would regard as a picture at all. By making the system more complex, we just put more links in the chain, make a larger Rube Goldberg machine. Any time we humans say that the computer understands anything at a higher level than the most micro of all possible levels, we are speaking metaphorically, anthropomorphizing the computer [1].

The block diagram itself does not, properly speaking, exist at any particular moment in a system to which it is attributed. Another way of putting this is to point out that the functional block diagram description of any system (or subsystem) is determined by an ethereal cloud of hypotheticals. You can not talk about any system's abstract functional organization without talking about what the system's components are poised to do, about their dispositions, tendencies, abilities or proclivities in certain hypothetical situations, about their purported latent potentials. What makes a given block in a functionalist's block diagram the block that it is, is not anything unique that it does at any single given moment with the inputs provided to it at that moment, but what it might do, over a range of inputs. The blocks must be defined and characterized in terms of hypotheticals.

It is all well and good to say, for example, that the Peripheral Awareness Manager takes input from the Central Executive and scans it according to certain matching criteria, and if appropriate, triggers an interrupt condition back to the Central Executive; but what does this mean? Isn't it basically saying that if the Peripheral Awareness Manager gets input X1 then it will trigger an interrupt, but if it gets input X2 then it won't? These are hypothetical situations. What makes the Peripheral Awareness Manager the Peripheral Awareness Manager is the fact that over time it will behave the way it should in all such hypothetical situations, not the way it actually behaves at any one particular moment.
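
As a sketch of what such a box amounts to, here is the hypothetical Peripheral Awareness Manager written out in Python (the module, the inputs X1 and X2, and the matching criteria are just the made-up ones from the paragraph above, not anyone's real architecture). Notice that the whole definition is a bundle of if-thens over inputs it may never actually receive.

    # A sketch of the hypothetical block described above. What makes this the
    # "Peripheral Awareness Manager" is not anything it does at one instant,
    # but the full range of if-then behavior written into it.
    MATCH_CRITERIA = {"X1"}              # inputs that should trigger an interrupt

    def peripheral_awareness_manager(input_from_central_executive):
        if input_from_central_executive in MATCH_CRITERIA:
            return "INTERRUPT"           # signal sent back to the Central Executive
        return None                      # otherwise, stay quiet

    # Its identity as this block rests on what it *would* do with every input:
    assert peripheral_awareness_manager("X1") == "INTERRUPT"
    assert peripheral_awareness_manager("X2") is None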

Otherwise, couldn't we save a lot of effort and just make a degenerate conscious functional system, one that was only conscious in a particular situation, that is, only conscious given a particular set of inputs? With the possible inputs whittled down in this way, we could make a vastly simpler conscious machine by making each functional block only capable of dealing properly with that particular system input, and the internal signals that would result from the system as a whole being given that input. The Peripheral Awareness Manager would only be given input X1, so we wouldn't have to program in any capability of dealing with input X2. Once we simplified the system in this way, it really could not be said to adhere to the functional block diagram anymore at all - it would be hardwired to do one thing, to behave consciously in only one particular situation. No one looking at the system without knowledge of how it was designed would ever be able to reverse engineer the original block diagram in all its complexity. The functionalist would say then that it is not conscious. But if we gave it the input for which it was designed, it would do exactly the same thing in exactly the same way that the "conscious" functional system would have when given the same input.
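
Continuing the same made-up sketch, the degenerate, hardwired version would look like the following. Given the one input it was built for, it does exactly what the full version does; the difference between the two lives entirely in hypotheticals that never occur.

    # The degenerate variant: hardwired for the one input it will ever get.
    def hardwired_manager(input_from_central_executive):
        # No matching criteria, no other cases: input "X1" is simply assumed.
        return "INTERRUPT"

    # In the one situation that actually occurs, the two are indistinguishable.
    assert hardwired_manager("X1") == "INTERRUPT"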

When you have a system that you want to characterize functionally, you don't care so much about what the system is actually doing right this second (let us imagine that it is adding two numbers together and storing their sum in a register, for example). You care about what the system or its components would do if given a certain input, for all states and all inputs. The Peripheral Awareness Manager, in a functionalist's block diagram, is what the functionalist says it is not by virtue of what it is doing at any particular instant, but by virtue of what it does in general, which is to say what it does over time, and over a broad range of scenarios. So two systems (say, a purported AI and a digital thermostat) could be doing the exact same thing (like adding our two numbers together), and according to a functionalist, one would be conscious and the other would not be, purely on the basis that the "conscious" system would behave in certain ways if given certain inputs besides the ones it is actually dealing with right now.
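
To put the same point in toy code (both classes below are invented for illustration, not anyone's real thermostat or AI): caught at this instant, the two systems perform the identical micro-operation, and the functionalist's distinction between them rests entirely on dispositions - on what one of them would do with inputs it is not in fact receiving.

    # Two toy systems caught doing the identical operation at the same instant.
    class Thermostat:
        def step(self, a, b):
            return a + b                 # e.g. combining two readings

    class PurportedAI:
        def step(self, a, b):
            return a + b                 # the very same operation, right now
        def would_respond(self, hypothetical_input):
            return "something suitably mind-like"   # dispositions, not occurrences

    print(Thermostat().step(2, 3), PurportedAI().step(2, 3))   # 5 5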

But there is nothing in the system itself that knows about these hypotheticals, that calculates them ahead of time, or that stands back and sees the complexity of the potential state transitions or input/output pairings. At any given instant the system is in a particular state X, and if it gets input Y it does whatever it must do when it gets input Y in state X. But it can not "know" about all the other states it could have been in when it got input Y, nor can it "know" about all the other inputs it could have gotten in state X, any more than it could know that if it were rewritten, it would be a chess program instead of an AI.

We ought to be very careful about attributing explanatory power to something based on what it is poised to do according to our analysis. Poisedness is just a way of sneaking teleology in the back door, of imbuing a physical system with a ghostly, latent purpose. A dispositional state is an empty abstraction. A rock perched high up on a hill has a dispositional state: if nudged a certain way, it will roll down. A large block of stone has a dispositional state: if chipped a certain way with a chisel, it will become Michelangelo's David. That, as the saying goes, plus fifty cents, will buy you a cup of coffee.

At any given instant, like the CPU, the system is just doing one tiny, stupid crumb of what we, as intelligent observers, see that it might do when thought of as one continuous process over time. To say that a system is conscious or not because of an airy-fairy cloud of unrealized hypothetical potentials sounds pretty spooky to me.

In contrast, I am conscious right now, and my immediate and certain experience of that is not contingent on any hypothetical speculations. My consciousness is not hypothetical - it is immediate. The term "if" does not figure into my evaluation of whether I am conscious or not.

But couldn't this argument be used to declare the concept of life off limits as well? After all, life is a quality that is characterized exclusively by an elaborate functional description, one that involves reproduction, incorporating external stuff into oneself, etc. Life is not characterized by any particular physical implementation: if we were visited by aliens tomorrow who were silicon-based instead of carbon-based, we would nevertheless not hesitate to call them alive (assuming they were capable of functions analogous to reproduction, metabolism, consumption, etc.).

But according to the above argument, I am alive right now, even though our definitions of what it means to be alive all involve functional descriptions of the processes that sustain life, and these functional descriptions, in turn, are built on an ethereal cloud of hypotheticals. There is nothing in a living system that knows about these hypotheticals, or calculates them, so how can we say that right here and now, one system is alive and another not, when they are both doing the same thing right here and now, but one conforms to the functional definition of a living thing, and one does not? Therefore, there must be some magical quality of life that can not be captured by any functional description. Yet we know this is not true of life, so why should we think it is true of consciousness?

Why should we accept a definition of life built upon ethereal hypotheticals, but not a definition of consciousness built upon them? Like so many other arguments, it comes down to intuitions about the kind of thing consciousness is. Life is, at heart, an ad hoc concept. The distinction between living and non-living things, while extremely important to us, and seemingly unambiguous, is not really a natural distinction. The universe doesn't know life from non-life. As far as the universe is concerned, it's all just atoms and molecules doing what they do.

People observe regularities and make distinctions based on what is important to them at the levels at which they commonly operate. We see a lot of things happening around us, and take a purple crayon and draw a line around a certain set of systems we observe and say, "Within this circle is life. Outside of it is non-life." Life just is conformance to a class of functional descriptions. It is a quick way of saying, "yeah, all the systems that seem more or less to conform to this functional description." It is a rough-and-ready concept, not an absolute one. Nature has not seen fit to present us with many ambiguous borderline cases, but one can, with a little imagination, come up with conceivable ones. It is useful for us to classify the things in the world into groups along these lines, so we invent this abstraction, "life", whose definition gets more elaborate and more explicitly functional as the centuries progress. We observe behaviors over time, and make distinctions based on our observations and expectations of this behavior. So life, while perfectly real as far as our need to classify things is concerned, has no absolute reality in nature, the way mass and charge do.

This is not to denigrate the concept of life, or to say that the concept is meaningless, or that any life science rests on inherently shaky foundations. The study of life and living systems, besides being fascinating, is a perfectly fine, upstanding hard science, with perfectly precise ways of dealing with its subject. I am just saying that "life" is a convenient abstraction that we create, based on distinctions that, while perfectly obvious to any five-year-old, are not built into the fabric of the universe.

To be a functionalist is to believe that consciousness is also such a concept, that it is just a handy distinction with no absolute basis in reality. I maintain, however, that our experience of consciousness (which is to say, simply our experience) has an immediacy that belies that. We did not create the notion of consciousness to broadly categorize certain systems as being distinct from other systems based on observed functional behavior over time. Consciousness just is, right now.

What's more, we can squeeze all kinds of functional descriptions out of different physical systems. Gregg Rosenberg has pointed out that the worldwide system of ocean currents, viewed at the molecular level, is hugely complex, considerably more so than Einstein's brain viewed at the neuronal level. I do not think I am going out on a limb by saying that the worldwide system of ocean currents is not conscious. What if, however, we analyzed the world's oceans in such a way that we broke them down into one inch cubes, and considered each such cube a logic component, perhaps a logic gate. Each cube not at the surface or the bottom abuts six neighbors face-to-face, and touches 20 others tangentially at the corners and edges. Now choose some physical aspect of each of these cubes of water that is likely to influence neighboring cubes, say micro-changes in temperature, or direction of water flow, or rate of change of either of them, and let this metric be considered the "signal" (0 or 1, or whatever the logic component deals with). Now suppose that for three and a half seconds in 1943, just by chance, all of the ocean's currents analyzed in just this way actually implemented exactly the functional organization that a functionalist would say is the defining characteristic of a mind. Were the oceans conscious for that three and a half seconds? What if we had used cubic centimeters instead of cubic inches? Or instead of temperature, or direction of water flow, we used some other metric as the signal, like average magnetic polarity throughout each of the cubes? If we change the units in which we are interested in these ways, our analysis of the logical machine thereby implemented changes, as does the block diagram. Would the oceans not have been conscious because of these sorts of changes of perspective on our part?
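
A toy version of this analysis, with completely fabricated numbers, might look like the following Python sketch: partition a strip of readings into cells of a chosen size, pick a metric, and threshold it into "logic" signals. Change the cell size or the metric and the very same water yields a different set of signals, and hence a different block diagram.

    # A toy version of the ocean-as-logic-gates analysis. All readings here are
    # fabricated random numbers; nothing about real oceanography is assumed.
    import random

    random.seed(1943)
    temperature = [random.uniform(0.0, 20.0) for _ in range(24)]   # fake readings
    flow        = [random.uniform(-1.0, 1.0) for _ in range(24)]   # fake readings

    def signals(readings, cell_size, threshold):
        """Average each cell of `cell_size` readings and threshold it to a bit."""
        bits = []
        for i in range(0, len(readings), cell_size):
            cell = readings[i:i + cell_size]
            bits.append(1 if sum(cell) / len(cell) > threshold else 0)
        return bits

    # The "circuit" we read off depends entirely on the analyst's choices:
    print(signals(temperature, cell_size=4, threshold=10.0))   # one machine
    print(signals(temperature, cell_size=2, threshold=10.0))   # a different one
    print(signals(flow,        cell_size=4, threshold=0.0))    # different again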

What if we gerrymander our logic components, so that instead of fixed cubes, each logic component is implemented by whatever amorphous, constantly changing shape of seawater is necessary to shoehorn the oceans into our functional description so that we can say that the oceans are right now implementing our conscious functional machine? As long as it is conceivable that we could do this, even though it would be very difficult to actually specify the constantly changing logic components, we would have to concede that the oceans are conscious right now. Is it not clear that there is an uncomfortable arbitrariness here, that a functionalist could look at any given system in certain terms and declare it to be conscious, but look at it in some other terms and declare it not conscious?

Our deciding that a system is conscious should not depend on our method of analysis in this way. I just am conscious, full stop. My consciousness is not a product of some purported functional layout of my brain, when looked at in certain terms, at some level of granularity. It does not cease to be because my brain is looked at in some other terms at some other level of granularity. That I am conscious right now is not open to debate, it is not subject to anyone's perspective when analyzing the physical makeup of my brain. It just is absolutely true. Consciousness really does exist in the Hard Problem sense, in all its spooky, mysterious, ineffable glory. But it does not exist by virtue of a purported high-level functional organization of the conscious system. The high-level functional organization of a system simply does not have the magical power to cause something like consciousness to spring into existence, beyond any power already there in the low-level picture of the same system. As soon as we start talking about things that are "realized" or "implemented" by something else, we have entered the realm of the may-be-seen-as, and we have left the realm of the absolutely-is, which is the realm to which consciousness belongs.

None of this is to say that the block diagram of a particular physical system is wrong, or random - just that it is not necessarily there in the details. The high-level view is not inherent in the implementation, and to say that it is amounts to a kind of religion. In fact, that is exactly what it is. What, after all, do we call people who go around touting grand, simplifying, high-level models of how the world is organized - models that are not necessary for a complete and perfect understanding of the world based on concrete, objective observation of its interacting parts? The doctrine of functionalism is metaphysics masquerading as hard-nosed science.

[1] Don't anthropomorphize computers. They don't like it.

