The time has come to talk about Daniel Dennett. He is the
self-proclaimed captain of the "A" team, the king of the reductive
materialists (he declared David Chalmers the captain of the "B"
team). His manifesto, 1991's "Consciousness Explained", is an
absolute must-read for anyone interested in this field. It is
extremely clearly written, persuasive, and loaded with style, a
dry wit, and fascinating facts and findings relating to the study
of the human mind.
One simply cannot discuss philosophy of mind in
any useful way without having some response to Daniel Dennett and
his arguments. At the same time, one must occasionally rise above his
characterization of his opponents as fearful, reactionary, silly people
desperately clinging to their vanities about the human soul.
It should come as no surprise at this point that I
think Dennett is wrong, at least in some of his conclusions.
It may come as something of a surprise, however, in this sharply
divided field of inquiry, that I think that nearly all of what
Dennett says in his book is right.
Dennett has no use for so-called qualophiles like myself. This is
the part where I disagree with him. But the vast bulk of the book is concerned
not with arguments against qualia themselves, but against the idea that there
is some central executive in the mind, some special module (either
anatomically or functionally defined) that
constitutes "my consciousness", such that sensory inputs are
distinctly pre-conscious on one side of the module, and memories or
motor outputs are distinctly post-conscious on the other side of
it. Instead, Dennett
proposes what he calls the Multiple Drafts Model, according to
which there are lots of modules (or agents, or, more colorfully,
demons), lots of versions or portions of
versions of sensory inputs, and it never exactly comes together in
any one place or at any one time in the brain to constitute "my
field of consciousness right now". Dennett often describes the mind as
more of a pandemonium (literally, "demons all over") than a
bureaucracy. He makes many persuasive
arguments against the idea of a single central executive in the
mind, and powerfully challenges our intuitions about our selfhood.
But then he makes an abrupt right turn, concluding that
therefore, qualia do not exist in any sense whatsoever.
According to Dennett's hypothesis, among the specialized modules in
the brain there is a verbalizer, a narrative spinner (some people
call this module or something like it
the monkey mind; I think of it as the chatterbox). The chatterbox
produces words, and words are very potent or sticky tags in memory.
They are not merely easy to grab hold of, they are downright
magnetic. They are velcro. The output of this particular
module seduces us into thinking that what it does, its narrative,
is "what I was thinking" or "what I was experiencing" because when
we wonder what we were experiencing or thinking, its report leaps
to answer. The products of this chatterbox constitute what we think of as
the "self". Dennett says we spin a self as automatically as spiders
spin webs or beavers build dams.
This very property makes this chatterbox powerful, and gives its
narrative strong influence in guiding future action, thought and
experience, but it is a mistake to therefore declare it to be the
self.
Dennett likes to say that what we call the
"self" is really just a "center of narrative gravity", and as such,
merely a useful fiction. In the same way, an automobile engine may
have a center of gravity, and that center of gravity may move
around within the engine as it runs. The center of gravity of the
engine is perfectly real in some sense - one could locate it as
precisely as one wanted to - but in another sense it does not
really exist. It performs no work. It is what I might call a
may-be-seen-as kind of thing, not a really-there kind of thing.
Dennett thinks that the self is the center of narrative gravity in
exactly this sense.
Dennett makes a great deal of the difficulty of distinguishing
clearly between experiencing something as such-and-such, and
judging it to be such-and-such. In response to an imaginary
qualophile, Dennett says, "You seem to think there's a difference
between thinking (judging, deciding, being of the heartfelt opinion
that) something seems pink to you and something really
seeming pink to you [emphasis his]. But there is no difference.
There is no such phenomenon as really seeming - over and above the
phenomenon of judging in one way or another that something is the
case." (p. 364) In a way he is right - it is a
fascinating and fruitful problem. In his book, Dennett gives many
examples that serve to undermine our faith that we really do
experience what we think we experience, and there are many others
that are not in his book.
Dennett says to imagine that you enter a room with pop art
wallpaper; specifically, a repeating pattern of portraits of
Marilyn Monroe. Now, we only have even reasonably high-resolution
vision in our fovea, the portion of our field of vision directly in
front. The fovea is surprisingly narrow. We compensate with
saccades - unnoticeably quick eye movements. Nevertheless, as
Dennett says, we could not possibly actually see all the details of
all the Marilyns in the room in the time it takes us to form the
certain impression of being in a room with hundreds of perfectly
crisp, distinct portraits of Marilyn. I'll let Dennett himself
take it from here:
Now, is it possible that the brain takes one of its high-resolution
foveal views of Marilyn and reproduces it, as if by photocopying,
across an internal mapping of the expanse of wall? That is the only
way the high-resolution details you used to identify Marilyn could
"get into the background" at all, since parafoveal vision is not
sharp enough to provide it by itself. I suppose it is possible in
principle, but the brain almost certainly does not go to the
trouble of doing that filling in! Having identified a single
Marilyn, and having received no information to the effect that the
other blobs are not Marilyns, it jumps to the conclusion that the
rest are Marilyns, and labels the whole region "more Marilyns"
without any further rendering of Marilyn at all.
Of course it does not seem that way to you. It seems to you as if
you are actually seeing hundreds of identical Marilyns. And in one
sense you are: there are, indeed, hundreds of identical Marilyns
out there on the wall, and you're seeing them. What is not the
case, however, is that there are hundreds of identical Marilyns
represented in your brain. Your brain just somehow represents
that there are hundreds of identical Marilyns, and no matter
how vivid your impression is that you see all that detail, the
detail is in the world, not in your head. And no figment [Dennett's
term for the metaphorical "paint" used to depict scenes in the
Cartesian Theater - figmentary pigment] gets used up in rendering
the seeming, for the seeming isn't rendered at all, not even as a
bit map.
The point here is that while we may think we see the Marilyns on
the wall, and we may think
that we have a qualitative experience to that effect
(just like our qualitative experience of seeing red), this is
almost certainly not the case. Instead, what is happening is that
we have inferred, or judged that there are Marilyns all over the
wall, and we have a very definite, certain feeling that we actually see these
Marilyns. Sometimes we think we directly experience things that are right
in front of our faces, but really we just conclude that we
have experienced them. Our inability to tell the difference is
intended to make qualophiles like myself uneasy. I think I am being
fair to Dennett to characterize his claim as follows: we think
that our direct experience is mysterious, but often it can be shown pretty
straightforwardly that when you think you are directly experiencing something,
really you are just holding onto one end of a string the other end
of which you presume to be tied to this mysterious experiencing.
Given this common and easily demonstrated confusion, it is most likely
that all purported "direct experience" is like this, that all
we have is a handful of strings. We never directly experience anything;
we just judge ourselves to have done so.
Dennett also discusses the blind spot in our visual field. There
are simple experiments that demonstrate that a surprisingly large
chunk of what we normally think of as
our field of vision is not actually part of our field of
vision at all. We simply cannot see with the part of the retina
where the optic nerve leaves the eyeball, since there are no
photoreceptors there. The natural, naive question is, why don't I notice the
blind spot? The equally natural, and equally naive explanation is
that the brain compensates by "filling in" the blind spot, guessing
or remembering what should be seen in that region of the visual
field, and painting (applying more figment)
that pattern or color on the stage set in the Cartesian Theater.
Dennett is quite emphatic that nothing of the sort happens. There
is no Cartesian Theater, so no filling in is necessary. There is no
such thing as seeing directly, there is only concluding, so once
you conclude (or guess, or remember) what should be in the blind
spot, you are done. There is no inner visual field, so there is no
need for inner paint (figment), or inner bit maps.
We do not notice the blindness because "since the
brain has no precedent of getting information from that gap of the
retina, it has not developed any epistemically hungry agencies
demanding to be fed from that region".
There are also easily performed experiments that demonstrate change
blindness: the phenomenon whereby you can be shown a photograph,
and then shown an altered version of the same photograph and be
unable to spot the differences. Often the differences can be
pretty dramatic, far more drastic than you might think
could possibly go unnoticed. Once again, you think you really see
the first photo in a lot more detail than you actually do, but it
turns out that instead you merely judged that you had seen it. You
nailed down a few major details, decided that you had seen it, and
that was good enough. Your confidence that you really see something
is misplaced - you only think you see.
Dennett describes experiments in which people were fitted with goggles that
turned their entire field of vision upside down. While "comically helpless" at
first, soon the subjects were able to ski and bicycle through city traffic
while wearing the goggles. Dennett says, ". . . the natural (but misguided)
question to ask is this: Have they adapted by turning their experiential
world back right side up, or by getting used to their experiential world
being upside down?" Dennett holds that this is simply a wrong question, and
in fact, the more completely adapted the subjects of the experiment were, the
more they reported that the question had no good answer. The experience of the
visual field is inseparable from your use of it, your cognitive
interpretation of it.
Imagine a room. Do not do it by reference to a real room with which
you are familiar, make up a room you have never actually seen. Do
so in as much detail as you can. Take as long as you like. Got it?
Now - is there crown molding around the ceiling? If so, what kind?
What is the millwork of the baseboards like, if it has any? What
kind of latches are there on the windows? If you are like most
people, you thought you visualized the room in pretty specific
detail, but when asked pointed questions about it, you are
distinctly aware of making up your answers on the fly. You didn't
really see the room in your mind's eye in as much detail as you
thought you did.
These are very interesting examples and point to an important
problem in our conception of the distinction between experiencing
and judging. Often when we think we perceive a whole lot of detail
directly, what is really going on is that we have cognitive access
to a whole bunch of detail on demand, if we (or any of the agents,
or as Dennett calls them, demons, that comprise us) ask for it,
accompanied by a fuzzy sense of perceiving the detail "directly".
But Dennett is wrong to jump from that to the
conclusion that we never really experience anything, that it's
all just judging. The circle of experience may be smaller than we
usually think, or it may have less distinct boundaries, but we can
not plausibly shrink it to a point, or out of existence altogether.
I agree that it is impossible to draw a clear distinction between
experience and judgment, but this is because judgment is itself a
sort of structured experience. There is no naive experience: our
interpretations are part and parcel of our perceptions.
It is interesting that Dennett never
clearly and simply defines "judgment". Computers do not know,
judge, believe, or think anything, any more than the display over
the elevator doors knows that the elevator is on the 15th floor.
All they do is push electrons
around. Even calling some electrons 0 and others 1 is projection on
our part, a sort of anthropomorphism. It seems as though I see all
the Marilyns; Dennett says no, I merely judge that I see them. He
is right to force us to ask ourselves how much we really know about
the difference. He is wrong to think that the answer makes either
one of them less mysterious, or more amenable to a reductive
explanation.
Some of this echoes my earlier point about the impossibility of
distinguishing between data and algorithm.
It is kind of like the difference between having a map showing a
place you need to drive to, and having a list of directions to that
place. You can follow the directions, turning where they say to
turn, without ever forming any overall conception of where you are
or where you are going. If the directions are sufficiently
elaborate, they can even tell you how to get back on track if you
make a wrong turn. You can simply follow them, and never "put it
all together" into any bird's eye, directional sense of where you are. Could it
not be the case that even when we do have a sense of where we are,
say, in the middle of our home town, that sense is an illusion, and
all we really have is a really good set of directions for how to
get any place we might need to go? When it comes right down to it,
is there any real difference between "directly" perceiving
something in all its detail on one hand, and having on-demand
answers to any questions you might pose about that thing on the
other? Could it be the case that we think that we have an
immediate, all-at-once conception
or perception of something, but all we really have is an
algorithmic process that is capable of answering questions about
that something really quickly, a just-in-time reality generator?
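The directions-without-a-map idea can be put in code. The following toy sketch (every street name and instruction here is invented purely for illustration) holds no spatial representation at all, only a table mapping the current location to the next instruction, including a recovery entry for a wrong turn:

```python
# A navigator with no map: just a table from "where am I" to "what next".
# All street names are invented for illustration.
directions = {
    "home":     "turn left onto Elm St",
    "Elm St":   "turn right onto Oak Ave",
    "Oak Ave":  "continue straight to the library",
    "library":  "arrived",
    # A sufficiently elaborate set of directions can even recover
    # from a wrong turn, still without any overall picture:
    "Maple St": "you went too far; turn around and return to Oak Ave",
}

def next_step(location):
    """Answer 'what do I do here?' without any bird's-eye view."""
    return directions.get(location, "no instructions for here")
```

Nothing in this little program could be called an overall conception of the town, yet it answers every "what do I do here?" question on demand.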
If I think I have a conception of something, say, a soldering iron,
could it turn out that really there is nothing but an algorithm,
a cognitive module in my head
with specific answers to any question I could have about the
soldering iron? At any point, in any situation, the algorithmic
produce the correct response to any question about the soldering
iron in that situation. How to use it, what it feels like, its
dangers, its potential misuse, its utility for scratching my name
with its tip into the enamel paint on my refrigerator. Such a
module would serve as a just-in-time reality generator with
regard to any experience I might have involving the soldering iron.
It would consist of a bundle of expectations of sensory inputs and
appropriate motor outputs regarding the soldering iron.
To use the computer terminology, as long as the soldering iron
module presented the correct API to the rest of the mind,
wouldn't the mind be "fooled" into thinking that it had a
qualitative idea of the soldering iron, when all it really had was
a long list of instructions mapping input to output?
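That suspicion can be made concrete with a deliberately crude sketch (the class name and the canned answers are my own inventions, not anything Dennett proposes): a module that merely maps questions to responses presents exactly the same interface to the rest of the mind as one that "really" harbors a holistic concept.

```python
# A "just-in-time reality generator" for soldering irons: no holistic
# inner model of the iron, only a mapping from questions to responses.
class SolderingIronModule:
    _answers = {
        "how do I use it?": "hold the handle, touch the tip to the joint",
        "what does it feel like?": "a warm handle and a dangerously hot tip",
        "what are its dangers?": "burns - never grab it by the tip",
    }

    def query(self, question):
        # The rest of the mind sees only this interface; from outside,
        # answers-on-demand look just like "having a concept".
        return self._answers.get(question, "improvise a plausible answer")
```

As long as the lookup keeps producing acceptable answers, no caller of `query` could tell that there is nothing behind it but a list.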
How do I know I have a concept of the soldering iron beyond the
ability to form a whole bunch of judgments about the soldering
iron, given the difficulty of distinguishing between being
conscious of something and merely making judgments about it?
Is it possible that after all, I simply do not have any holistic,
all-at-once conception or perception of the soldering iron?
And is there really any difference between the two ways of
characterizing our cognitions regarding soldering irons?
No, it is not possible, and yes, there is a difference.
When I see the soldering iron, I really do
see it. If I look at a white wall with three black circles painted
on it, I see them all before me. Chalmers once asked, what
is it like to see a square? What is it like to look at the
well-known Necker cube? For that matter,
what is it like to see the word "cat"?
There is judgment, inference, interpretation, and cognition here. There
are associations, memories, connotations, and all the rest of the
cognitive baggage. There is also experience. The mystery here is how
they all relate; to what extent they are really the same thing; and
to the extent that they are the same thing, what gives rise to the
intuition that they are different in the first place; how some
things that take place in the mind are more experience than
judgment or cognition, and other things are more judgment or
cognition than pure experience. What is the sense that I see all
the Marilyns if not itself a quale?
The difficulty with cleanly distinguishing between "directly"
perceiving something and merely judging it to be a certain way
(while having specialized modules for answering questions about it)
is not limited to visual perception or perception at all, in the
usual narrow sense. Nor is it limited to perception of the outside
world. The same kinds of ambiguity exist with regard to our
understanding of our own minds.
I believe the sun will rise tomorrow. Do I
really hold this single belief, or is it just a huge bundle of
expectations and algorithms, each pertaining to specific situations
or types of situations that I might find myself in? Any of the
unitary things we naturally posit in our minds (models, images,
memories, beliefs) could have some component at least of such a
bundle of algorithms, or agents. For any such thing, what is its
API to the rest of the system, really? How much can we really
say about how it implements that API? Maybe I just infer somehow
that I have a belief that the sun will rise tomorrow, but that
"belief" is not nearly the short little statement written down
somewhere that it seems to be. The articulation of the belief
could, as Dennett suggests of all of our articulations, be the
result of some kind of consensus hammered out by lots of demons or
agents. Nevertheless, the sense that I have such a belief is
real, and unitary, even if the belief itself is not. Frankly, I
don't know right now what a belief is, or what a judgment is.
Until someone convincingly gives an account of these things, it
rings a little hollow to dismiss qualia as "merely" complexes of
judgments or beliefs.
When I walk into a room I may not consciously
notice each of the fire sprinkler heads mounted on the ceiling. Do
I see them? Even after a good look around, I would likely flunk if
quizzed about their exact number or arrangement, even though I feel
as though I have seen the whole room, in all its detail. Dennett
says that this feeling is illusory. I choose to say that the
sprinkler heads do not intrude, as it were, on my consciousness
because insofar as I care, there is nothing about them that should
surprise, interest, or concern me. I've noticed them - if I had
never seen or heard of a sprinkler head before, within a very few
seconds upon entering the room they would command my full attention
- but I've written them off at a relatively low level of
perception. At some point in my life, I've noticed them, thought
about them, stared at them during dull staff meetings, convinced
myself that I more or less understand them - in effect, built a
sprinkler head recognition agent. When I enter and scan a room,
this agent is awake and alert, but quiescent. Nevertheless, it
contributes in some admittedly poorly understood way (by me at
least) to where I'm at, consciously.
I have an overall sense that I see and comprehend the room. If I
had the mind of a dog, I might still
have a sense that I see and comprehend the room, even though the
sprinkler heads never registered at all on any level whatsoever. My
dog mind has no sprinkler head recognition agents. Nor does it
have any particular curiosity about details it does not recognize
(no epistemically hungry agencies, to use Dennett's term).
My human sense that I see the room
and my satisfaction that I understand it are quite different than
the dog-mind's sense, even though in the end we are both satisfied
that we see and understand it. I see and understand insofar as I
care, have ever cared, or could imagine caring about whatever it is
I am looking at.
My own speculation is that
the epistemically hungry agencies are conscious. Some are relatively
permanent, some are constituted by constantly shifting, waxing and
waning coalitions of other agents. The sprinkler head recognition
agent feels quite clever, that it has made a really creative leap
here - it has never seen these particular sprinkler heads, in this
light, from this angle, in this context, yet it declared them to be
sprinkler heads. It is always thinking about sprinkler heads, and
always looking for them. It is always trying to see sprinkler heads.
So how do these "agents" stack? How do "lower" ones get
incorporated into "higher" ones until they all get subsumed by the
one at the top, the tip of the pyramid, the consciousness that is
me? On this last question, Dennett is right. There
probably is no tip of the pyramid.
When I look at my living room, I seem to have a certain sense that
I see it before me in all its colorful, varied entirety. What is
the connection between this "certain sense" and actually
seeing it? My sense of seeing it is not an opaque ability to answer
questions - I don't feed demands for information into a black box
and get information back. It may well be, as Dennett says, that a
pandemonium of demons (couch demon, rug demon, lots of other, more
abstract demons concerned with context and associations) in
some way contribute to my overall comprehension. Moreover, it may
well be the case that this "overall comprehension" just is the
pandemonium itself, not some master demon, or
some Central Meaner. Maybe later, if
asked what was going through my mind, the "I was comprehending my
living room" demon may be overruled by the "I was worrying about my
property taxes" demon. Maybe I was comprehending the living room,
but come to think of it, I was paying special attention to the
drapes. Or was I? Maybe any of the demons could make a good case
that they were the whole point, the
where-it-all-comes-together. From each demon's point of view, it is
right. We have lots of seats of consciousness in our minds.
If all of the demons are conscious to some degree or another,
if that term is to have any
meaning at all, then there are some consciousnesses that never
manifest themselves distinctly in any kind of a master narrative of
"what was going through my mind". Perhaps some of them are
evolutionary dead ends in the pandemonic Darwinian jungle that is
my mind. Maybe some of them don't even nudge any of the others
above the level of random noise or jitter, even though, for their
possibly quite brief existence, they were conscious. There was
something it was like to be them.
At one point (pp. 132-133) Dennett speaks of the impossibility of
nailing down what you are conscious of and when you are conscious of
it. He rightly points out that in many situations there is no good
answer to the questions of exactly what you are conscious of and
exactly when you are conscious of it:
We might classify the Multiple Drafts model, then, as
first-person operationalism, for it brusquely denies the
possibility in principle of consciousness of a stimulus in the
absence of the subject's belief in that consciousness.
Opposition to this operationalism appeals, as usual, to possible
facts beyond the ken of the operationalist's test, but now the
operationalist is the subject himself, so the objection backfires:
"Just because you can't tell, by your preferred ways, whether or
not you were conscious of x, that doesn't mean you weren't. Maybe
you were conscious of x but just can't find any evidence for it!"
Does anyone, on reflection, really want to say that? Putative facts
about consciousness that swim out of reach of both "outside" and
"inside" observers are strange facts indeed.
Yes, yes they are, but there it is. There are, in fact,
consciousnesses within my skull that swim out of reach of any demon
or collection of demons that might generate utterances or typings
about what "I" am or was conscious of at any particular time. This
should not seem odd, frankly, even to a reductive materialist.
However you define consciousness, assuming you find any use for the
term whatsoever, why is it impossible, or even unlikely, that the
submodules and sub-submodules that comprise my mind might
themselves individually qualify as conscious? And if they do
qualify as conscious, they might not all necessarily be patched
into any larger consciousness, or feed into any higher level of
consciousness. Of course the ones that do are probably more
interesting to us, and how exactly they feed in is a subject for
further speculation. And perhaps some of them spin off on their own
until asked a certain way, or until the right kind of slot opens up
for them to contribute their bit. Recall Dennett's compelling image
of constantly shifting coalitions of demons. So it should not seem
silly or bizarre that, in some sense, I was conscious of a stimulus
but didn't know it. Or perhaps the "I" that reports on such things
did not know it, or know it in the right way.
Dennett is right - the single continuous self is illusory, a virtual
machine implemented on a parallel architecture. He is wrong,
however, in thinking that this explains consciousness or dissolves
its mystery. Far from it.