Cognitive Qualia and HOT Theories
Even people who accept the Hard Problem as real still often make a
distinction between cognition on one hand and qualitative
subjective consciousness on the other. Cognition,
presumably, is amenable to analysis in terms of information
processing, and may in principle be performed perfectly well by a
computer. It encompasses Chalmers's "easy problems".
Subjective consciousness, or qualia, is the answer to
"what is it like to see red?". Qualia is the spooky mysterious
stuff that no purely informational or functional description of the
brain will ever approach.
I would like either to clarify or eliminate
the distinction. What exactly do we mean by
"cognition"? When we speak of cognition in a computer, is it really
the same thing that we are talking about when we speak of cognition
in a human being? When we speak of "cognition" and "qualia", what
are the distinguishing characteristics of each, such that we can
be sure that some event in our minds is definitely an example of one and
definitely not the other? The line between what we experience
qualitatively and what we think analytically or symbolically
is very hard, if not impossible, to draw. Even with the most purely
qualitative impression, there is a troublesome second-orderliness -
there is no gap at all between seeing red and knowing that you are seeing red.
Philosophers often talk about intentionality, which is the
property of being about something. That is, something has
intentionality to the extent that it is representational, or
symbolic. Among the things that are often cited as being
intentional are beliefs, desires, and propositions.
People who talk about intentionality do not usually talk
about qualia in the same breath, and vice versa. I believe that
this is a mistake.
My Phenomenal Zombie Twin
Recall the standard zombie thought experiment: my zombie twin is an exact physical (and
presumably cognitive) duplicate of me, but without any subjective
phenomenal experience. It walks and talks like me, but is blank
inside. There is nothing it is like for it to see red.
Horgan and Tienson (2002) suggest an interesting thought
experiment that turns the zombie thought experiment on its head.
Imagine that I have a twin whose phenomenal experiencings (i.e. qualia) are
identical to mine throughout both of our whole lives, but who is
physically different, and in different circumstances (perhaps an
alien life form, plugged into the Matrix, or having some kind of
hallucination, or a proverbial brain in a vat). The question that
screams out at me, given this scenario, but that Horgan and Tienson
do not seem to ask (at least not in so many words) is this: to what
extent could my phenomenal twin's cognitive life differ from my
own? If the what-it-is-like to be it is, at each instant, identical
to the what-it-is-like to be me, is it possible that it could have
any thoughts, beliefs, or desires that were different from mine? Now,
we may quibble over defining such things in terms of the external
reality to which they "refer" (whatever that means), and decide on
this basis that my phenomenal twin's thoughts are different from
the corresponding thoughts in my mind, but this is sidestepping the
really interesting question. Keeping the discussion confined to
what is going on in our minds (that is, my mind and that of my
phenomenal twin), is there any room at all for its cognition to
be any different from mine? Charles Siewert (2011) makes similar
points in his discussion of what he calls totally semantically
clueless phenomenal duplicates.
Think of a cognitive task, as qualia-free as you can. Calculate,
roughly, the velocity, in miles (or kilometers) per hour,
of the earth as it travels through space around the sun.
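(For the curious, the rough calculation can be sketched as follows. The figures are my own illustrative assumptions: a circular orbit of about 93 million miles, and a year of about 8,766 hours.)

```python
import math

# Back-of-the-envelope Earth orbital speed.
# Assumes a circular orbit at the mean Earth-Sun distance (~1 AU).
radius_miles = 93_000_000        # mean Earth-Sun distance, approximate
hours_per_year = 365.25 * 24     # about 8,766 hours

circumference = 2 * math.pi * radius_miles
speed_mph = circumference / hours_per_year
print(round(speed_mph))          # about 66,700 miles per hour
```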
Okay. Now remember doing that. Besides the answer you calculated,
how do you know you performed the calculation? You remember
performing it. How do you know you remember performing it? Specifically,
what was it like to perform it? There is an answer to that
question, isn't there? You do not automatically clatter through
your daily cognitive chores, with conclusions and decisions, facts
and plans spewing forth from some black box while your experiential
mind sees red and feels pain, and never the twain shall meet. You
are aware, consciously, experientially, of your cognition.
But what exactly is the qualitative nature of having an idea?
David Chalmers has asked whether you can experience a square
without experiencing the individual lines which make it up.
This question nicely underscores the blurriness of the distinction between
qualia in the seeing red sense, and cognition in the symbolic
processing sense. When you see a square, there is an immediate and
unique sense of squareness in your mind which goes beyond your
knowing about squares and your knowledge that the shape before you
is an example of one. What is it like to see a circle? How about the
famous Necker cube? When it flips for you, to what extent is that a
qualitative event, and to what extent is it cognitive? Is it not clear
that your "cognitive" interpretation of the cube (i.e. whether it
sticks out down to the left or up to the right) has its own qualitative
essence that outruns the simple pattern of black lines that you actually
see? The classic duck/rabbit image is similar. You can't merely see; you
always see as.
What is it like to see the word "cat"? Wouldn't your what-it-is-likeness
be different if you couldn't read English? Your cognitive parsing of your
visual field is inseparable from the phenomenology of vision.
The Qualia of Thought
What is it like to have a train of thought at all?
How do you know you think? What is it like to prove a
theorem? What is it like to compose a poem? In particular, how do
you know you have done so? Do you see it written in your head? If
so, in what font? Do you hear it spoken? If so, in whose voice?
You may be able to answer the font/voice questions, but only upon
reflection - when pressed, you come up with an answer, but up to
that point you simply perceived the poem in some terms whose
qualitative aspects do not fit into the ordinary seeing/hearing categories.
There is a school of thought that holds that qualia are exclusively
sensory - that any "qualia of thought" are qualitative only by inheritance.
That is, we actually "hear" our thoughts in a particular auditory voice,
or see things in our minds' eye. This is a stretch, and puts the cart
before the horse. I, for one, don't think in anyone's voice.
Moreover, the qualia of thought are not just tagging along in the
form of charged emotional states that accompany certain kinds
of thoughts. The qualia are right there, baked into the thoughts themselves,
as such. The "purely" "cognitive" "content" is itself qualitative, not
just the font it is written in, or the voice it assumes when it is spoken, or the
hope or the fear that we attach to it.
Anything we experience directly, whether it is the kind of
thing we usually associate with sensation and emotion or with
dry reasoning and remembering, is qualitative: a song, a building,
a memory, or a friend. By definition, all I ever
experience is qualia. Even when I recall the driest, most seemingly
qualia-free fact, there is still a palpable
what-it-is-like to do so.
To the extent that our cognition is manifest before us in the mind
in the form of something grasped all at once, whether in the form
of something which is obviously perceptual or something more abstract,
it is qualitative. Otherwise, how would we be as directly aware of
our thoughts as we are?
How do you know you are thinking
if you in no way express your thought physically (writing or
speaking it)? A thought in your mind is simply, ineffably, manifestly
before you, as a unitary whole, the object of experience as much as
a red tomato is.
That we are aware of our thoughts at all in the way we are is no
less spooky and mysterious than our seeing red.
If you were a philosopher who was
blind since birth, the "what is it like to see red?"
argument for the existence of qualia
would not have the same impact that it does on a sighted person.
If you were also deaf, neither would
"what is it like to hear middle C on a piano?". If you were a
severe amnesiac in a sensory deprivation tank, assuming your mind
were otherwise functioning normally, would you have any reason to
worry about these mysterious qualia that other philosophers think about
so much? I think you would, simply by virtue of noticing that you
had a train of thought at all.
"What is it like to see red" or "what is it like to hear middle C
on a piano" vividly illustrate the point of the Hard Problem
to someone approaching these topics for the first time,
but it is a mistake to stop at the redness of red. The redness of red
is the gateway drug.
The existence of qualia is most starkly highlighted by examples that are
particularly non-structured and purely sensory, but it is a mistake to think
that the mystery they point to is confined to the non-structured and purely
sensory. The paradigmatic examples of qualia are good for convincing people
that we don't have a solid basis for understanding everything that goes on
in our heads. It is tempting, however, to think that we are at least on our
way to having a basis for understanding what is going on in our heads when
we think. My point is that we don't have a good basis for understanding
that either.
Just as qualia are not just the alphabet in which we write our
thoughts, neither are they merely the raw material that is fed into
our cognitive machinery by our senses. The qualia are still there
in the experience as a whole after it has been digested, parsed,
interpreted and filtered. Qualia run all the way down to the
bottom of my mental processing, but all the way up to the top as well.
We are not, to steal an image from David Chalmers, a cognitive
machine bolted onto a qualitative base. Nor, as Daniel Dennett
says (derisively), is qualitative consciousness
a "magic spray" applied to the
surface of otherwise "purely" cognitive thought.
Each moment of consciousness has its own unique quale; new qualia
are constantly being generated in our minds.
There are qualitative sensations that accompany
particular, cognitively complex situations, but which are
nevertheless no more reducible to "mere" information processing
than seeing red is.
Once, looking down from the rim of the Grand Canyon, I saw
a hawk far below me but still quite high above the canyon floor,
soaring in large, lazy circles. I was hit with a visceral
sense of sheer volume - there is no other way to describe it. I
felt the size of that canyon in three dimensions, or at least I had
the distinct sense of feeling it, which for our purposes is the
same thing. This was definitely something I felt, above and beyond
my cognitively perceiving and comprehending intellectually the
scene before me. At the same time, the feeling is one that is not a
byproduct or reshuffling of sense data. After all, as a single
human being I only occupy a certain small amount of space, and can
have no direct sensual experience of a volume of space on the order of that
of the Grand Canyon.
Had I not experienced this feeling, I still would
have seen the canyon and the hawk, and described both to friends
back home. The feeling is ineffable - there is no way to convey it
other than to get you to imagine the same scene and hope that the
image in your mind engenders the same sensation in you that
the actual scene did in me.
Nevertheless, the feeling that the scene engendered in me only
happened because of
my parsing the scene cognitively, interpreting the visual
sensations that my retinas received, and understanding what I was
looking at as I gazed out over the safety railing.
The overall qualitative tone of a given situation
depends crucially on our cognitive, symbolic interpretation of what
is going on in that situation. Further, the individual elements of
a scene before us have qualia of their own apart from the quale of
the whole scene (e.g. there may be a red apple on a table in a room
before us - it has the "red" quale, even though it is part of and
contributes to the overall quale we are experiencing of the entire
room at that particular moment).
There are some qualia, moreover, that are inherently inseparable from
their "cognitive" interpretation, experiential phenomena that are
especially resistant to attempts to divide them into pure seeing and seeing-as.
In particular, as V. S. Ramachandran and Diane Rogers-Ramachandran
(2009) pointed out, we have
stereo vision. When we look at objects near
us with both eyes, we see depth. This is especially vivid when the
phenomenon shows up where we don't expect it, as with View Masters,
or lenticular photos (those images with the plastic ridges on them that
are sometimes sold as bookmarks, or come free inside Cracker Jack
boxes), or 3D movies. This effect is, to my satisfaction, unquestionably
a quale. It is visceral. It is basic. You could not explain it to
someone who did not experience it. At the same time, it is obviously
an example of seeing-as, part of your cognitive parsing of a scene
before you. One might possibly imagine some creature seeing red without
any seeing-as, unable to interpret the redness conceptually
in any way, but it is impossible to imagine seeing depth in the 3D way
we do without understanding depth, without thereby automatically
deriving information from that. To experience depth is to understand depth,
and to infer something factual about what you are looking at, to model
the scene in some conceptual, cognitively rich way.
Naive, or Pure Experience
What we know informs what we experience. I take it as pretty much
self-evident that it is almost impossible to
have a "pure" experience, stripped of any concepts we apply to that
experience. Everything we experience is
saturated with what we know, or think we know, what we
expect, what we assume, etc. I have come to realize, however, that
some name-brand philosophers, among them Fred Dretske, think
that there is such a thing as pure perception, a phenomenal
basis of experience, untainted by anything we might call
cognition. Dretske claims that as much as our experiences are
shaped by many layers of cognitive processing, there is some
kernel of experience that is shared by us, and, say, a
raccoon, viewing the same scene (ignoring differences in
physiology, eyesight, etc.). This is simply not true, or at
best, a mischaracterization of the situation. What I share with
the raccoon is not an experience, but the raw bitmap formed by
the pattern of stimulation of the rods and cones on our retinas.
This bitmap is fed into a lot of processing machinery, and
in that sense contributes to an experience, but itself is far
from constituting an experience.
The experience I have, and presumably that which the raccoon has,
is made of seeing-as: seeing this blob as a hydrant, that one as
a cloud, this splotch as the sun, that one as an object that I could
touch, and that will probably persist through time. However
experiences happen in minds, they are the result of lots of
feedback loops at lots of different levels, all laden with
associations and learned inferences, all stuff we might call
cognition. There is no such thing as pure seeing, separated out from seeing-as.
Through an act
of willful intelligence, I could decide to concentrate only on those
things in a scene before me that begin with the letters M, N, and G.
Alternatively, I could choose to pay special attention to those things
made of metal. In the same way, through willful, intelligent effort,
I can try to derive some "pure experience" from the scene, and come up
with something like a raw bitmap, perhaps for the purpose of painting
a picture on a canvas of the scene. But even if this effort could possibly
ever be 100% successful (I question this - could you really
discard your understanding of object permanence?),
this is further processing, more cognition, not less.
For this reason, it is not quite right to say that I am starting with
my cognition-soaked experience and working to get back to the "raw"
experience, because that presumes there was such a thing originally to
get back to.
For all but perhaps the most basic qualia (a burn, a tickle, a
pin prick), a pure experience, devoid of cognitive processing,
is an abstraction, a conceit dreamed up by philosophers that has
never actually been observed in the wild.
It is a step in exactly the wrong direction, however,
to thus conclude that knowledge and concepts can take full
responsibility for experience, and that knowledge and concepts are
among Chalmers's "easy" problems, solvable within reductive
materialism. This step entails discarding qualia altogether, and
concluding that experience is cognition all the way down.
In contrast, experience is more accurately seen as qualia all the way up.
Nevertheless, this is the step, more or less, that Dennett takes,
and it is the step that adherents of Higher Order Thought take.
Higher Order Thought (HOT) Theories
Higher Order Thought (HOT) theories, whose most prominent
proponent over the years has been David Rosenthal,
go more or less like this. We have sensory impressions, as of
a red apple, but these are just lower-order thoughts (LOT), and thus not
qualitatively conscious. It is only when we have an additional thought
about that first, lower-order thought (this additional thought being
the higher-order thought or HOT) that the whole thing becomes fully,
qualitatively conscious. It is the HOT that assumes the role of
"applying concepts" to the LOT.
I've never been entirely clear as to where the consciousness actually
happens, in the LOT upon being reflected upon by the HOT, or in the HOT
itself. Either way, the theory is not terribly satisfying. Once again,
we have magic being snuck into the model under the guise of the
never-quite-explained "aboutness" relation. Let us think like
engineers and imagine a module called LOT and another module called HOT.
Now let us connect the two with a bidirectional communications channel
of any bandwidth we want - the sky's the limit. The sense impressions
get initially fed into the LOT, then some signaling passes between the
LOT and the HOT and we get consciousness. Couldn't we swap out either the
LOT or the HOT and replace it with something else, like a dumb tape
playback of a prior run of the experiment? In this case, the other module
(the one we didn't swap out) would never know the difference, as long as the
module we swapped out kept up its end of the conversation over the
communications channel. What is it about that conversation, the signaling
over the channel, that confers consciousness upon one of the modules
on either end of the channel? And why is this any less mysterious
than the original Hard Problem, of how consciousness arises from
"signaling" over the channel provided by our sensory system?
For a brief and funny take on this sort of objection to HOT theories,
see the "overheard dialog" at the end of
There are some deep questions regarding the
feedback loop between qualia and cognition, and the way our qualia
and our concepts interact and influence each other on the fly. But
HOT theories restate the question without answering it.
The model they propose merely articulates the introspective intuition
that there is a strange interdependence between what we usually
think of as the experiential and what we usually think of as the
cognitive. Articulating the observed intuition, however, does not
answer any of the questions it presents.
Worse, HOT theories assume that we can cleanly separate out qualia from
cognition, and that this "aboutness" relation between them
is straightforward and unproblematic, when these are exactly the
assumptions we should be questioning. The really interesting question
to ask when presented with the funny interdependence between
cognition and qualia is where we ever got the idea that they
were completely different kinds of things in the first place.
Many philosophers agree that in minds, qualitative consciousness and
cognition are closely related, if not two ways of seeing the same thing,
but make the mistake of concluding that qualia must therefore be
merely information processing, which we think we understand pretty well.
"Information" is a terribly impoverished word to describe the stuff
we play with in our minds, even though much of what is in our minds
may be seen as information, or as carrying information.
Shoe-horning mind-stuff into the terms of information theory and
information processing, however, is a homomorphism, a lossy
projection. I suspect that there are no easy problems in the easy
vs. Hard Problem sense. The way the mind processes information has
a lot more in common with the way the mind sees red than it does
with the way a computer processes information.
Once again, the computer beguiles us. Of course, we built it in our own
image, so it is no surprise that it ends up being an idealized version
of our own ideas of how our minds work. We understand computers down
to the molecular level; there are no mysteries at all in computation.
And clearly, computers know things, and they represent things. I can
get some software that will allow me to map my entire house on the
computer, to facilitate some home improvement projects I have in mind.
And lo! My computer represents my couch, and seems to understand a lot
about its physical characteristics, and it does so completely
mechanically, and we can scrutinize what it is doing to achieve that
understanding of it all the way down to the logic gate level. We are thus
confident that we know exactly what is going on when we speak of
knowledge, representation, information processing, and the like. There
is nothing mysterious here, at least in the mechanics of what is going on.
Just because we understand computers, and computers seem to know,
think, remember, infer, etc. we should not therefore think that now we
understand those things.
We do not study cave paintings as clinically accurate diagrams
to learn about the human and animal physiology depicted therein.
We study them to learn how these ancient people saw themselves
and their world, to get inside their heads. The real insights to be gained
into the mind from computers come from considering that this, this
particular machine, is how we chose to idealize our own minds.
I can write "frozen peas" on a grocery list, and thereby put
(mechanical) ink on (mechanical) paper. Later, when I pull out the
list at the store, and it reminds me to put frozen peas in the cart,
this physical artifact interacts with photons in a mechanical way.
The photons then impinge upon my sensory system, and thus, in turn,
my mind. So the paper and ink system represents frozen peas; it knows
about them. Of course, most computers we use today are a bit more
complex than the paper grocery list, but the essence is the same - there is the
same level of knowledge, representation, information processing, etc.
going on in each. We can say that in a sense, the list really does
know about the peas, but not in a way that necessarily gives us
any insight at all into how we know about peas.
There is no pure cognition in the mind, at least none that we are
directly aware of. Over a century ago, philosophers did not
separate cognition and qualia the way they do now. It was only in the
early part of the 20th century, in the ascendance of behaviorism
and the advent of Information Theory and Theory of Computation that we
Anglophone philosophers started thinking that we are beginning to get
a handle on "cognition" even if this qualia stuff still presented a mystery.
When some thinkers felt forced to acknowledge
qualia, they grudgingly pushed cognition over a bit to allow qualia
some space next to it in their conception of the mind, so the two could
coexist; now they wonder how the two interact. The peaceful
coexistence of cognition and qualia is an uneasy truce. Qualia
can not be safely quarantined in the "sensation module", feeding
informational inputs into some classically cognitive machine.
We must radically recast our notions of cognition to allow for the
possibility that cognition is qualia is cognition.