The Hard Problem Of Consciousness

The very first bullet we have to bite is the Hard Problem of consciousness. In his book The Conscious Mind (1996), David Chalmers popularized the distinction between the "easy problems" of cognition (ability to reason, remember, evaluate, report on internal states, etc.) and the "Hard Problem" of subjective consciousness. Chalmers was careful to point out that "easy" and "hard" are relative terms, and that even the easy problems might take us a few centuries to work out. The easy problems are, however, tractable using the methods of modern science. Once our neuronal probes are subtle enough, and we have a higher-level framework stated in terms of information processing or some other abstraction to help us make sense of what the probes tell us, we should be able to answer any question we might have about how the brain works. Any question but one.

The Hard Problem of subjective consciousness (or, as philosophers like to call it, phenomenal consciousness) is hard because it just does not seem amenable to the sort of analysis that modern science knows how to do. The Hard Problem refers to the fact that you will never be able to tell me a story about information processing, computation, biochemistry, neurons, sodium channels, or anything else based on physics as currently construed that will come close to explaining why red looks red to me, or why middle C sounds like middle C. These basic ineffable sensations are called qualia (singular quale) in the literature of philosophy of mind. Subjective consciousness itself is sometimes characterized, at its most basic, as what it is like to be you, or to have some sensation or another.

Descartes thought that there were two kinds of stuff in the universe: physical stuff and mind stuff. For this reason, he has forever been called a dualist. In modern times, people who wonder seriously about qualia are also called dualists, even though many of them explicitly reject the idea of there being two fundamental kinds of stuff. This misleading labeling is unfortunate. Philosophy is confusing enough without calling things by incorrect names. Moreover, in recent centuries pureblooded dualists have been spotted in the wild very rarely, and the term is sometimes used somewhat pejoratively: people accuse other people of harboring dualist sympathies more than anyone embraces the term for themselves. For these reasons, I will use the term qualophile to describe Chalmers and his ilk.

The Objectivity Of The Subjective

The entire universe and everything in it is made up of atoms and molecules and photons and things like that, all interacting according to the laws of physics. The claim of the Hard Problem is that a) the redness of red as it appears to me is an absolute, objective fact of the universe, and b) no account of atoms and molecules interacting, no matter the complexity of their interactions, will predict or explain the redness of red as it appears to me.

What if someone modeled our minds artificially, and created a robot that to all appearances was intelligent, and even conscious? It might claim to see red, and it might do so in very convincing terms. It might represent red to an internal self-model in some sophisticated way that mimicked the neural or informational events in our own brains as we see red. It might compose a poem about the beauty of a sunset that would move you to tears. Nevertheless, we have no principled reason to believe that it really is experiencing red the way we do.

If this is true of a robot or AI that models our own mental processes at some appropriate level of abstraction, it is also true of our own brains themselves. Our brains, as understood by neuroscience, are, after all, biochemical computers of some kind. At the low levels, they can be modeled physically, and perform the same kind of information processing that a computer might. The distinction between squishy brains on one hand and silicon chips on the other is an implementation detail as far as this line of thinking goes.

While we might have a perfect and precise description of the causal chain between photons of a particular frequency striking our retinas in a certain pattern on one end, and our uttering "What a beautiful sunset!" on the other, this description will not explain what red actually looks like to you or me. Just as silicon, flipping bits, will never see red, we have no principled way to derive the fact of our seeing red from the bit flipping in our own neurons. If an AI can't do it, we have no good reason to think a brain could either. But of course our brains do see the red sunset. This leads the qualophile to bite the bullet of claiming that there is something big missing from the way we talk about and analyze brains, and pretty much everything else in the physical world as well.

People have come up with clever thought experiments to help sceptics arrive at the conclusion that the Hard Problem exists and that we should take it seriously. One of the most famous was invented by Frank Jackson (1986) in his essay, "What Mary Didn't Know".

Mary In Her Black And White Room

Imagine Mary, a supergenius particle physicist/neuroscientist, in a future world in which our understanding of physics and neurobiology is complete and perfect. She understands and has mapped out every single neural pathway, electro-chemical reaction, and quantum wiggle in her own brain. Mary, however, has been raised in an entirely black and white environment. She has never seen anything red, for instance. She knows exactly what the physics of red light is, and she can predict exactly how she would react behaviorally if she did see something red, but she has never actually experienced it directly. If you have ever debugged a C program with a debugger, single-stepping through your code line by line, you may get a sense of the way in which Mary understands her own predicted reaction to seeing a red apple. She can "walk through the code" perfectly, but she has never experienced red.
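To make the debugger analogy concrete, here is a minimal, purely illustrative C sketch (my own hypothetical stand-in, not anything from Jackson's essay). Imagine Mary single-stepping through something like this in her black and white room: every variable, branch, and return value is laid bare in the trace, and nothing in it is the experience of red.

    #include <stdio.h>

    /* Hypothetical stand-in for one tiny piece of Mary's complete
     * functional self-model: given a wavelength of light, predict her
     * own verbal response. */
    static const char *predicted_response(double wavelength_nm)
    {
        /* Red light falls roughly between 620 and 750 nanometers. */
        if (wavelength_nm >= 620.0 && wavelength_nm <= 750.0) {
            return "What a lovely red apple!";
        }
        return "I see nothing red.";
    }

    int main(void)
    {
        /* Mary can step through this line by line and predict the
         * output exactly, long before she ever sees red herself. */
        printf("%s\n", predicted_response(700.0));
        return 0;
    }

Stepping through such a trace tells you everything about the computation, and, if the qualophile is right, nothing about what red looks like.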

Now imagine that Mary gets let out of her black and white room, and sees a red apple. For all her functional, scientific knowledge, perfect and complete as it was (in terms of making 100% accurate predictions of her own behavior, both macro and micro), something entirely new happens in her head when she sees that apple. As the title of the article ("What Mary Didn't Know") suggests, this new thing that happens in Mary's head is usually framed in terms of knowledge, and some people counter that she does not actually gain any new knowledge upon seeing red for the first time, but (merely?) acquires a new ability. However you characterize it, something new happens to Mary, something that her schematic descriptions of her own brain never allowed her to anticipate.

The point here is that if you think of the brain as a big information processor, even being as generous as your wildest dreams will allow in terms of sheer processing capacity, future physics, and so on, you still leave something out. Such a processor counts pixel values on its retinal grid, accesses memory locations, does data smoothing and runs comparisons, takes different execution paths based on its evaluations, and invokes modules. Perhaps when thought of in a certain way, from the point of view of a certain level of abstraction (projected onto the system by the observer), the information processor may be seen as seeing red, but there is no reason to believe that it really is seeing red, objectively, the way I (and presumably you) do.

Nagel's Bat

Another illustrative example comes from Thomas Nagel's (1974) essay, "What Is It Like To Be A Bat?". Bats employ a sonar-like echolocation trick to find bugs in the air. The claim is that there is nothing you could possibly ever know about how a bat's brain, ears, and vocal system work that would let you know what it is like to sense a moth 20 feet away. Is it kind of like hearing, but not really? Kind of like touching with a long arm, but not really?

Similarly, I have read that bees see colors that we cannot see. What do those colors look like? We could know everything about bee brains and bee eyes, how the bees react to those colors and why, how the ability to see those extra colors evolved, etc., and we would still never know personally what those colors look like. If all mental activity is information processing, how is it that we could have all the explicit, articulable information about bee perception but still not know something about it? Couldn't we, with our far superior brains, crunch through the bee color perception algorithm? Couldn't we "walk through the code"? Most people would agree that such an exercise would not deliver a sense of what bee colors actually look like to the bee.

Zombies

This point is illustrated by another thought experiment: the notion of a zombie. A zombie, in this context, is basically a person who has no phenomenal consciousness, that is, who experiences no qualia, but whose brain and cognitive machinery otherwise work just fine. A zombie has the same neural connections that you do, acts and talks like a normal person, and for the same physical reasons, but is "blank inside". A zombie brain is a human brain, but considered only as an information processor. To all outward appearances a zombie is a regular person. A zombie would claim to see red, and seem to fall in love, and if so inclined, would write that poem about the sunset, and would in fact do all the things with its brain that we do with ours, producing all the same reactions, except that it would not be like anything to be the zombie.

The zombie thought experiment is controversial. By hypothesis, my zombie twin and I are functionally identical, which means physically identical, down to the last neuron. On the face of it, it's a pretty bold claim that one of us is fully conscious and the other "blank inside". Naturally, there are some people who think that the whole notion of zombies is incoherent. If something talks, thinks (if by "thinking" we mean only the sort of processing that could be modeled on a computer, the pure information processing manifested in us by our neural firings), and acts like a conscious person, then that entity is conscious, full stop. If you define "belief" in strictly information-processing terms, then the zombie believes it is conscious. To speculate about the conceivability of something that talks, thinks (in the limited way mentioned above) and acts like a person but is not conscious is like speculating on the conceivability of married bachelors. There is nothing extra about consciousness besides the functional mechanisms of information processing, and any claims to the contrary are just spooky mumbo-jumbo, the products of sloppy thinking. To critics, it is as if someone hypothesized an atom-for-atom copy of a water fountain, one that behaved exactly like the original water fountain, but just wasn't, you know, a water fountain.

Zombies make sense to me, though. Given our current understanding of brains, there is nothing inconsistent about the idea of a brain that works exactly as mine does now, producing the same output responses to the same input stimuli, and employing the same neural mechanisms, but which skips the phenomenally conscious part. As nutty as it sounds that something physically identical to me could nevertheless be so different in its mental life, nothing we know disallows it. No discovery by neuroscientists, and no new cognitive model, will connect the dots for us. We do not have any principled, theoretical way (other than brute correlation at a higher level than we generally like our brute correlations) to get from a complete description of how the parts of the brain function to the fact of phenomenal consciousness. A failure of entailment of this sort should concern, if not embarrass, us. We have work to do. With regard to the Hard Problem, this failure of entailment from the facts about brain processing to the facts about consciousness has been called the explanatory gap. As Ned Block (2002) put it, "Why couldn't there be brains functionally or physiologically just like ours…whose owners' experience was different from ours or who had no experience at all? (Note that I don't say that there could be such brains. I just want to know why not.) No one has a clue how to answer these questions." [emphasis his]

While it is often hard to draw a distinct line between qualia and cognitive, functional information processing (a fact I believe is underexplored, more on this later), there is something going on when I see red that is unexplainable by any theory of mentation that allows for minds being implemented by computers. It stands as an extra fact about the universe that demands explanation. To define consciousness as functional information processing is to define away the central mystery of consciousness.

Without going into too much detail now, I want to say that it is easy to assume that just because my zombie twin has the same physiological makeup I do, and the same internal causal dynamics, and processes information the way I do, it therefore thinks, knows, believes, etc. the way I do as well. We should be careful here, since this assumption begs an important question. It assumes that what is important to us about things like thinking, believing, knowing, etc. can be fully explained in zombie terms: that is, in terms of information processing and causal dynamics, without regard to qualia at all. I would not take that bet.

Not Everybody Likes The Hard Problem

I think it is fair to say that qualophilia is still a minority position in philosophy, and certainly in the hard sciences that touch upon these questions at all. The mainstream orthodoxy, such as it is in these circles, is…the other folks. There are a lot of people who think that all this qualia talk is nonsense, or at least misguided: even if whatever it is we call "qualia" is real, it can be explained with "normal" physics, information processing, etc. and has no broader implications for our picture of what the world is made of or how it is put together.

What should we call these people? Since I've called their opponents qualophiles, perhaps they should be qualophobes? I'm going to bow to convention in this case, though, and just call them physicalists (although I will use the term materialist somewhat interchangeably). Even this is a little misleading, or at least vague. It does justice to the idea that "it's all just physics", but it leaves open what we mean by that. Lots of qualophiles might agree that the universe is just made of physics - it's just that there might be more to physics than you think.

I do not usually put a lot of stock in sociology of science, nor do I like to emphasize the cultural aspects of scientific endeavor, but what science is, its proper aims and methods, is a lot less monolithic than most people believe. We must be open minded as we consider the kinds of methods we might have to use to explore whatever facts about the world Nature sees fit to present us with. Each scientific revolution (or, as the cool kids say, "paradigm shift") leaves us perfectly equipped to ask those questions that have just been answered.

The 20th century was especially humbling in this regard. Special relativity, then general relativity, and most dramatically, quantum mechanics forced all but the most blinkered of investigators to ask the big questions of science itself as a practice: what counts as an explanation? Is there a difference between getting a right answer and The Truth? How do we know when we are done?

The fact that we don't know how to properly frame certain questions now is not an argument that the questions themselves are wrong - quite the contrary. It is the questions that we aren't sure even how to ask (in a precise, falsifiable, quantitative way) that should interest us the most. We should beware of the hubris of thinking that even if our particular scientific theories are incomplete, our ways of framing them, our criteria for what is worthy of scientific consideration, and the form we like our answers to take are complete and perfect. We should not fall into the trap of thinking that if someone can't quite pose their question in terms our framework is designed to accommodate, their question is automatically silly.

As we look around our universe and try to make sense of it (in the loosest possible sense), we should be open minded about the kinds of things we find, and the kinds of explanations of those things that we accept. We should be a bit humble about what we have figured out so far, including our ways of figuring out themselves. This is not to say we should be indiscriminate, and entertain every cockeyed notion that we come across, but when presented with something that just does not seem to fit into our established frameworks, we should not be squeamish about poking at those frameworks and seeing if we can't extend them a bit. We should be bold, but soberly so.

Is Consciousness Like Elan Vital?

Sometimes physicalists compare the belief that the Hard Problem is hard to the vitalism of centuries past. This was the belief that there was some mysterious elan vital, a life force that animated living things beyond the mere mechanisms of locomotion, eating, reproduction, etc. The more we found out about how life worked at a molecular level, however, the less anyone believed in an elan vital. Belief in vitalism was ultimately exposed as a failure to appreciate how beautifully complex and exquisitely specific the mechanisms of life were. Once one understood the mechanisms, however, there was nothing left to explain. Similarly, argue the physicalists, once we understand enough of the cognitive mechanisms of the brain, the Hard Problem will melt away into the details.

Anil Seth (2022) put it fairly well:

Briefly, the vitalist notion that life could not be explained in terms of biophysical mechanisms was neither directly solved (by finding the elusive 'spark of life') nor eradicated (by discovering that life does not exist). It was dissolved when biologists stopped treating life as one big scary mystery, and instead started accounting for (i.e. explaining, predicting, and controlling) the properties of living systems (reproduction, homeostasis, and so on) in terms of physical and chemical processes. We still don't understand everything about life, but what seemed at one time beyond the reach of materialism no longer does. By analogy, the fact that consciousness seems hard-problem mysterious now, with the tools and concepts we have now, does not mean it will always seem hard-problem mysterious - and the best way forward is to build the sturdiest explanatory bridges that we can, and see how far we get.

Seth's admonishment is sobering, but as mysterious as life was at one time, there never was anything about it that wasn't basically functional. It always was, in principle, explainable in terms of causal mechanisms, but it strained the imagination (at the time!) that those mechanisms could possibly be that small, or that complex. I have faith that if I could talk for 45 minutes to an otherwise scientifically minded vitalist from centuries past ("Hi, don't be afraid, I'm from the future...") I could disabuse them of their vitalism. There isn't a conceptual leap, just a lot of orders of magnitude. It's all mechanism, all the way down, just really, really, really tiny. This just isn't the case with phenomenal consciousness. There isn't any toehold for naked causal bonkings to produce anything like it.

Gottfried Leibniz, in fact, made this point in his famous quote about the mind-as-mill:

Supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for.

Moreover, in terms of explanations, phenomenal consciousness (or qualia) is not something we drag into the picture to explain something or other that we observe, as elan vital was invoked to explain what we observe about life, or, to use another example physicalists like, as the luminiferous ether was invoked in the 19th century to explain the propagation of light waves through space. Consciousness is the raw data, the observed thing that needs explaining. It is the light, not the luminiferous ether.

Is Consciousness An Illusion?

Some people argue that what I call subjective consciousness is some kind of illusion. But what is an illusion? It is something that seems one way but is really another. My claims rest on the observation that red really seems red to me. The counter claim that this is an illusion boils down to, "red doesn't really seem red, it only seems that it seems red." But seeming, like multiplying by 1, is idempotent - inserting more "seeming" clauses into my claim does not change it one bit. Whether red seems red, or seems that it seems that it seems that it seems… red, the Hard Problem stands before us.

The Hard Problem consists of the fact that anything seems like anything at all. If phenomenal consciousness is an illusion, then who or what exactly is the victim of that illusion, and how can it be such a victim without the Hard Problem being a problem for it? The claim that qualitative, phenomenal consciousness is an illusion begs the question (in the pedantic sense of implicitly assuming that which it purports to prove). There is a fundamental bootstrapping problem (you can't pull yourself up by your own bootstraps). The problem of seemings is not resolved by showing that something seems like X but is really Y. The mystery is that anything seems like anything. How do you build seemings of any kind out of a universe made entirely of blind, stupid, amnesiac particles, however they are arranged?

Keith Frankish is a proponent of what he calls Illusionism, which basically says exactly this: consciousness, as I characterize it, at least insofar as it is mysterious and most interesting, is an illusion. His account of how the mind works this illusion on itself resembles a lot of higher order thought ideas. He claims that while a lot of fancy brain processing goes on under the hood (the lower order thoughts), the mind represents all this to itself as some kind of deeply mysterious, fundamental, ineffable, qualitative experience. It is this representation to itself that I think is analogous to the higher order thought, and which Frankish says is actually a misrepresentation, albeit a very convincing one, for perfectly good evolutionary reasons.

As with a lot of physicalist arguments, I think "representation" (and its variants and synonyms) is doing a lot of work here. If you define your processing and processors and modules functionally, causally, there is no representation or misrepresentation. Things just clatter along, doing what they do. To say that some part of the mind is the victim of a misrepresentation is fanciful and poetic language. Qualia do not seem ineffable to such a system, because nothing seems like anything to it. If your account of consciousness rests on A (mis)representing B, you had better have an ironclad account of representation in the first place. Personally, I don't have such an account, or at least not one that leaves "representation" with any explanatory power. But more on this later.

Is Qualophilia A Failure Of Imagination?

It is sometimes said that taking the Hard Problem seriously is a simple failure of imagination: the fact that I could not imagine traditional science (neurobiology, information theory, physics) explaining what it is like to see red says a lot more about my powers of imagination than it does about the actual limitations of traditional science. In the same way, it is argued, a vitalist's inability to imagine life being nothing more than molecular processes simply proved to be a failure on the vitalist's part to appreciate just how complex and tiny the molecular processes are. The vitalist's scepticism, however, ultimately came down to a matter of scale and complexity - the vitalists did not properly appreciate that the components of life could be quite that small or that complex. Claiming that more scale and complexity will turn ones and zeros (or their neural equivalents) into red is a non-starter.

The fundamental components of the world to a physicalist are completely blind to one another, and completely stupid, and have no memory whatsoever. The basic particles just careen in one direction, then another. Even when they attract, repel, or collide with each other, they don't really "see" or "know" about each other - they just careen (with an occasional bonk). They don't know why, or what it is that is influencing them to careen in this particular direction at this particular speed. It sounds funny even to say it this way, but I think some people do not really sense in their guts just how blind, just how stupid, just how little memory the fundamental particles must have to a committed physicalist. To get anything not blind and not stupid out of them, you must attribute a lot of power to the notion of "levels of organization". You can get a great deal of causal complexity out of such levels, and systems can be designed (or evolve) to do many tricky things. It is a real stretch, however, to claim that the redness of red is among them.

My accusation to physicalists is that they do not follow through on their own commitments in a rigorous and thorough way. They claim to be strict vegans about woo - qualitative subjective intuitions - but they help themselves to generous portions when it suits them. They frame theories of consciousness in terms that draw on a whole lot of pretheoretical notions that just aren't necessarily there in the neurons, bits, bytes, quarks, and photons. We hear, for example, about systems representing stuff (perhaps including a self-model) to themselves; we see mentions of integrated systems, the system as a whole, and the like. This shift of the explanatory burden is often accompanied by some kind of examination of the notions in question, but sometimes this is little more than a hand-wave.

The physicalist's position is an extravagant one, a point which is often overlooked simply because physicalism has been the reigning orthodoxy for several centuries now. The physicalists claim that if you get enough unconscious stuff together in a big pile, and arrange the pile in a certain special way (a complex enough way, perhaps, or a pile that conforms to a certain functional schematic), then subjective consciousness will appear.

The reasoning seems to be that because centuries of scientific advances have shown us that the reductive physicalist approach is the right framework for understanding the universe, it simply must be the case that it is adequate to explain consciousness too. This particular alchemy is a leap of faith on their part, and the onus is on them to show us the money. It is foul play to try to shift the burden of proof back onto the qualophiles, claiming that scepticism of the reductive physicalist position betrays some kind of failure of imagination.

Moreover, it is not a failure of imagination that leads me to take the Hard Problem seriously. On the contrary, it is because I can imagine a day not too far off (fifty years? One hundred?) on which we solve Chalmers's easy problems. On that day, cognitive science and neurobiology complete their intended programs and actually map every single event in the human brain, every information flow at any level of organization you please, every secretion and uptake of every neurotransmitter. On this day, it will be possible for us (like Mary in her black and white room) to detail everything that happens between photons striking my retina and my uttering, "What a beautiful sunset!". The cognitive scientists and neurobiologists will collect their Nobel prizes and go home satisfied, and nothing in their description of the brain will give the slightest hint of what it is like to see red, or why anything seems like anything at all.

Yes, it is true that I cannot imagine that day in detail, in the sense that I do not have that final theory at my fingertips down to the last synapse (otherwise I would be the one collecting the Nobel prize right now), and there's the rub, the physicalists would say. If I could see that theory in detail, they argue, it would be clear why red seems like red.

For nearly a century, mentioning consciousness was a career killer in the field of academic philosophy. In the last generation or so, however, the question of consciousness has been coming up with greater and greater urgency, and it is attracting pretty level-headed, math/science type people, not mystics, not new-agers, not religious wishful thinkers. I think this is so precisely for the reasons that I mentioned above: as science progresses, and closes in on its stated goals regarding our brains, its limitations stand out in ever sharper relief.

The physical sciences, as their boundaries of inquiry are currently construed, deal only in functional behavior, externally measurable effects. There are perfectly valid questions about Nature (what is it like to see red?) that are outside the bounds of natural science as currently practiced. That is, it is conceivable that we could have a complete and perfect understanding of physics and all the other hard sciences, and never articulate quantitatively, let alone answer, those questions. My ability to imagine this state of affairs may be incorrect in some way, but it certainly does not represent a failure of imagination on my part.

My seeing of red is not a philosophy; it is not a way of thinking about or interpreting some theory or idea; it is not a bit of linguistic sophistry; it is not an abstraction; it is not an inference I have drawn or some metaphysical gloss I have put over reality. It is a brute fact about the universe, a fact of Nature. It is really, really there. It is explanandum, not explanation. As such, it is incumbent upon our natural science to explain it. If my seeing of red is not amenable to the currently accepted methods of natural science, then so much the worse for those currently accepted methods. People who deny the existence of qualitative consciousness and its implications remind me of the church officials who refused to look through Galileo's telescope because they did not want their neat and tidy theological world upset by what they might see.

What Could There Be Besides "Normal" Physics?

Physics and physicalism are not so much wrong (except in their claims of exclusivity) as they are incomplete. This is just the way science works. Newton invented a formal basis for physics, and for a long time it seemed dead accurate. But along comes Einstein, and it turns out that while Newton's physics was perfectly consistent and accurate within its domain, it was incomplete - it is merely a special case of a more general set of laws. Then a decade later, Einstein comes out with General Relativity, and shows that his own earlier work, while perfectly applicable within its proper domain, is really just a special case of still more general laws (hence "general" vs. "special" relativity).

Science works by adding more layers to the outside of the onion. Old theories are not so often disproved by new ones as they are generalized and subsumed by them. When we finally take the Hard Problem seriously, it will usher in a true scientific revolution. This will not simply be a matter of surprising new results, like room-temperature superconductors, but a rethinking of what questions we ask and how we ask them, much as quantum mechanics forced such a rethinking.

Assuming we are willing to bite the bullet of admitting that there is something we can't explain going on when we see red (even if we can't so much as articulate the question in a scientifically respectable way), where do we go from here? What kind of layer can we add to the outside of the onion? There is no strictly conservative way out of this mess. We want the least weird description of what the universe would have to be like for beings like us to be in it. If there must be weirdness at all, we have to confront it head-on, bracket it, constrain it, characterize it in some way that allows us to keep all the wonderful stuff we've already figured out.

The way the hard sciences break the world down into the bonkings of blind, stupid, amnesiac particles just can't explain what we need to explain. We have to rethink how we talk about the lowest levels of reality in a way that doesn't throw the baby out with the bathwater, and keeps all the physics we already know intact, but adds a way for consciousness to exist. Loopy as it sounds, consciousness, or something that scales up to consciousness in certain kinds of systems, must be built in at the ground floor, as part of the fundamental furniture of the universe.

Someday, after we have pinned it down a bit, it will stand right up there with mass, charge, and spin. This view is traditionally called panpsychism, but some people prefer pan-protopsychism to emphasize that it is not consciousness as we know it that stands as a fundamental building block of the universe, but some tiny crumb or spark that, when scaled up, aggregates into full-blown human consciousness (albeit perhaps only under certain conditions or in certain types of systems). Also, to some people "panpsychism" has medieval, vitalist connotations; most contemporary panpsychists want to dissociate themselves from the belief that "rocks think". No one knows (yet) the principles according to which proto-consciousness, if such a thing exists, might aggregate into full-blown human consciousness, or what is so special about brains that they support this aggregation. In the range of potential answers to these questions there is room for different versions of panpsychism, some more outlandish than others.

So I want to reconfigure the metaphysics of physical reality in response to an intuition I have about the taste of ice cream? Well, yes, and in the chapters to come, I hope to make some version of this seemingly extravagant hypothesis seem a bit more palatable, and even inevitable.