Saturday, February 4, 2006

Lamarck, Darwin, and cultural evolution

Let's assume that cultures, in all their fractal variety, do in fact "evolve" -- that is, they change over time in ways that enhance their environmental fitness, and in a way that's analogous to (though certainly not identical with) the way biological organisms do. The question then is, in light of the previous post, which model better explains this phenomenon: the Darwinian or the Lamarckian?

Many people might be inclined, in this case, to say Lamarckian, and it's easy to see why: culture is acquired, after all. Thus, if we take Lamarckism to be simply the notion that acquired characteristics are passed on to (inherited by) later generations, it seems natural to assume that this is the model to explain cultural evolution. But, as the previous post discussed, the Lamarckian model leaves the actual mechanism of adaptive change unexplained -- it seems, instead, to require that there be an agent able to determine what constitutes environmental fitness, and then generate the required adaptation. Still, while that may have been a problem for biological evolution, such a requirement might seem entirely in keeping with cultural evolution, since cultures, after all, are made up of agents -- namely, us. And aren't we smart enough to figure out what our environment demands, and then produce our own cultural adaptations?

My answer, in short, is no. I don't say that out of any disdain for human intelligence, though I do think that we have a long-standing, if understandable, tendency to exaggerate the specialness of our situation in that regard -- in our scientific age, this tendency might be seen as the secular equivalent of the religious tendency to see human beings as having a special relationship to the divine. But our "intelligence", such as it is, and whatever it is, has no more ability to lift us out of nature than our "soul" or "spirit" had or has. That means, among many other things, that we can't simply avoid the question of how culture changes by handing it off to the mysteriously transcendent processes of agency. We need, in other words, a way of grasping how change can come about without having to postulate an agent that can step out of itself, so to speak, apprehend what sort of change the environment requires, and then step back into itself and make that change.

And once again, a Darwinian model can provide just such a mechanism, despite having to deal, in the case of cultural evolution, with acquired characteristics that are inheritable or transmissible. We can remove agency from the picture simply by assuming that there are incessant (daily, hourly) microchanges in an individual's cultural imprint, some of which are more "effective" than others and are consequently reinforced at the expense of the others. The changes themselves stem from the individual's daily, hourly, moment-by-moment encounter with the world, which is automatically incorporated into the individual's cultural imprint as fresh experiential content. The "effectiveness" of those changes refers to environmental feedback, which has two primary forms: one in communicative encounters, in which the feedback always results in some degree of increased semantic alignment (in fact, this just is the transmission of culture); the other in planning or thinking about intentional or purposeful behavior in the world, in which the feedback pertains to the degree to which the change helped attain the purpose. The key point to note is that both change and feedback -- variation and selection -- are independent of the conscious will or purpose of the individual herself, just as they are in biological evolution. No agent necessary.
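To make the agentless picture concrete, here is a toy simulation in that spirit. Everything in it is invented for illustration -- the variant labels, the encounter rule, and the `innovation_rate` parameter are hypothetical, not claims about real cultures -- but it shows how random microvariation plus alignment-producing feedback in communicative encounters yields convergence with no individual planning the outcome.

```python
import random
from collections import Counter

def cultural_drift(n_agents=30, rounds=3000, innovation_rate=0.01):
    """Toy sketch of agentless cultural convergence.

    Each agent holds one variant of a cultural token. Communicative
    encounters produce feedback (the hearer aligns with the speaker),
    and occasional random "microchanges" introduce new variants.
    No agent plans the outcome.
    """
    variants = [f"v{i}" for i in range(n_agents)]  # initially, all differ
    for _ in range(rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if random.random() < innovation_rate:
            # Blind variation: a fresh variant from ongoing experience.
            variants[speaker] = f"v{random.randrange(10000)}"
        else:
            # Feedback in a communicative encounter: alignment increases.
            variants[hearer] = variants[speaker]
    # Fraction of the population sharing the most common variant.
    return Counter(variants).most_common(1)[0][1] / n_agents

alignment = cultural_drift()
```

The steady rise of the most common variant's share stands in, very loosely, for "increased semantic alignment": selection here is nothing but differential copying in encounters.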

None of this is to deny that there is such a thing as "knowledge", of course. And that knowledge, to the extent that it's "true" (meaning, to the extent that it's effective), can certainly be used to help plan and guide cultural change, including individual change, in ways that are purposeful or agent-driven. But we can think of the knowledge contained in a culture as representing that culture's potential for change, much as the development of organs through use -- which Lamarck took to be representative of species change generally -- represents the potential inherent in a given species. And just as true biological evolution, contra Lamarck, marks a change in species potential, so real cultural evolution as such, involving the summed result of myriad microvariations and reinforcements over time and geography, pertains to changes in cultural potential, which is inherently outside of our control or agency. In the last five hundred years or so, it's true, we've developed a kind of meta-level of knowledge, called science, which (along with its derivative, technology) is a systematization of the process of acquiring new knowledge -- and which quite obviously has revolutionized our global culture, expanding its potential for change by orders of magnitude. Nevertheless, nature itself -- to give a name to that impassive, implacable, indifferent environment -- remains outside of all cultural potential, no matter how potent that potential appears to us, and nature's judgment is final.



Sunday, January 22, 2006

Explanatory examples: Darwin vs. Lamarck

Having distinguished two primary explanatory models, mechanism and agency, I want to try to apply that distinction in an example -- and the one I have in mind is one that excited much interest over a century and a half ago: the phenomenon of species change.

That the phenomenon existed at all was by no means evident, in the first place. Species appeared to be not just fixed, but admirably and ingeniously suited to their place in the world, which of course is just what one would expect from an Intelligent Designer. Thus, as with contemporary ID advocates, the explanation for the remarkable "fitness" of species was modeled on the notion of the will or purpose of a supernatural agent, and that was all one could (or needed to) say. The fossils turned up by geologists (later, paleontologists), however, began to put this neat and static picture under increasing stress, and the distasteful but unavoidable conclusion seemed to be that species that once existed no longer do, and species that now exist once did not. What could account for this?

Jean-Baptiste Lamarck, as everyone knows, tried to answer that. He thought, for example, that individual organisms tried to perform certain actions in response to changing conditions; that in doing so they changed themselves, in the way that exercise develops muscle; and that these changes were then inherited by subsequent generations, which over time resulted in a changed species. Now, as everyone also knows, Lamarck was mistaken about the heritability of acquired characteristics, and this mistake is often taken as the defining error of Lamarckism. But I think that mistake was a relatively superficial one, and even somewhat understandable -- the deeper, more profound problem with Lamarckism has to do with how change comes about in the first place. It is true, of course, that some organs (mainly muscle) are developed through use, but many changes to species -- e.g., color of fur, structure of eye -- have nothing to do with anything that an organism is or could be "trying" to do. Moreover, even if an organism could try to change the color of its fur or the structure of its eyes, how would it know that that's the right thing to do -- that is, how would it know how to design itself so as to fit its environment? The primary problem with Lamarckism, in other words, isn't the notion of the heritability of acquired characteristics, but the fact that it lacks a mechanism for adaptational change -- instead, it simply locates the agent of biological fitness within the organism itself, where it remains as mysterious as the ways of the supernatural Designer.

And then along came Darwin. By and large, he understood that acquired characteristics (which would be at least as likely to be damaging as beneficial) were not inherited, but that was not the essence of his surprisingly simple and remarkably general insight. What Darwin provided was precisely what Lamarck lacked -- a mechanism for adaptational change. That mechanism broke the process into two parts or aspects -- on the one hand, there was a blind but incessant generation of small random (i.e., not willed or agent-directed) changes; and on the other hand, there was an equally blind (again, no agent involvement), equally incessant process of culling those random changes. "Natural selection", as a phrase, emphasizes the agent-less nature of that culling process just by its contrast with the agent-based process that goes on in human or "domestic" selection. And with that two-part mechanism, what had seemed vague, obscure, and mysterious suddenly became lucid and clear -- and perhaps all the more marvelous (and controversial) just because of that clarity.
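The two-part mechanism is simple enough that it can be caricatured in a few lines of code. The sketch below is purely illustrative -- the single-number trait, the mutation size, and the `environment_optimum` parameter are all invented for the example -- but it shows the essential point: undirected variation plus agentless culling is sufficient for adaptation.

```python
import random

def evolve(environment_optimum=0.75, pop_size=50, generations=200):
    """Toy illustration of blind variation plus blind selection.

    Each "organism" is a single numeric trait. Variation is random and
    undirected; selection is just differential survival relative to an
    environment that no organism can inspect or intend anything about.
    """
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Blind variation: small random changes, not aimed anywhere.
        mutated = [trait + random.gauss(0, 0.02) for trait in population]
        # Blind selection: the environment "culls" by fitness alone.
        mutated.sort(key=lambda t: abs(t - environment_optimum))
        survivors = mutated[: pop_size // 2]
        # Survivors reproduce (here, simply by copying themselves).
        population = survivors + survivors
    return sum(population) / len(population)

mean_trait = evolve()
```

Nothing in the variation step "knows" the optimum; the population tracks it anyway, purely through differential survival -- which is the whole of Darwin's answer to the question Lamarck left hanging.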

This contrast between mechanism and agency is so important, I think, that it really has become implicit in the very meaning of evolutionary change. It's worth observing, after all, that the sort of willed or agent-directed change that Lamarck thought was at the basis of species change does indeed occur (even though he was wrong about its heritability), but is inherently limited -- we could think of it as exhibiting the potential of a species at a point in time. Evolutionary change, on the other hand, represents a change in that potential, and that is something that nature or the environment imposes, through the twin but independent mechanisms of natural variation and selection.

Friday, January 20, 2006

Explanatory models: what about "holism"?

The previous post distinguished two very general models for explaining a phenomenon: mechanism, defined by the primary role it gives to causation; and agency, defined by the parallel primary role it gives to will or purpose. What was absent was a term that a fair number of people would view as a possible alternative to, and improvement over, both -- "holism". In this note I just want to say why it was left out -- why, in other words, I think it's not really an alternative to either, much less an improvement.

The short answer is that holism is not actually an explanatory model as such at all, but rather a set of explanatory guidelines at best, and an avoidance of explanation altogether at worst. Like many "isms", holism comes in a range of flavors or versions, from what's usually called "weak" at one end of the spectrum to "strong" at the other end. In all versions, I think, we can distinguish two broad aspects or themes to holism: that the whole is greater than the sum of the parts, and that context or environment is important in understanding a system. Let's briefly look at both:

That the whole is greater than the sum of the parts:
In weak or moderate versions of holism, this notion is certainly valid, but largely a truism -- any machine, after all, is greater than the simple sum of its parts in the obvious sense that the structure or relationship of the parts is a vital aspect of its functioning -- and understanding the mechanism of any phenomenon entails an understanding of how the parts interrelate. In its mild form, admittedly, this theme can sometimes provide useful advice when dealing with particularly subtle or complex phenomena, where the "whole" may not be immediately evident. But in its stronger versions, this idea is often taken to imply that the "whole" contains some mysterious extra quality or ingredient that is impervious to analysis, and this simply becomes a way of refusing or blocking explanation altogether.

That context matters:
This theme, in moderate versions of holism, is actually quite important -- the behavior of many systems is quite dependent upon the environment in which they're situated, and may change drastically if that context is changed. It may still, of course, be a reasonable strategy to try to gain as good an understanding as possible of the system in isolation (a simpler problem), while realizing that a full or real-world understanding will need to include the larger "whole" of which the system is a part. That said, however, this theme too is frequently taken to untenable extremes in the "stronger" or more mystical versions of holism, where it's used to assert that "everything is connected" and hence nothing can be studied, understood or explained in itself.

So: "holism" in its more moderate flavors can add a helpful perspective to an explanatory project, but hardly qualifies as an explanatory model in itself; and in its stronger flavors, it tends to become a means of opposing explanation as such (sometimes, perhaps, in an effort to protect a favored belief, or sometimes just in the service of mystery itself).


Monday, January 16, 2006

Explanatory models: machine vs. agent

(Continuing reflections on the topic of "explanation"....)

The machine hasn't fared well as a cultural icon for the last while. From the image of the dehumanized, roboticized workers in the vast urban machine of Fritz Lang's Metropolis to the metallic death's-head horrors of the Terminator series, the mechanical in film and culture generally has been portrayed as something deeply antithetical to the human. No doubt the antipathy goes back at least as far as the coal-fed steam engines that powered Blake's "dark, satanic mills". It's as though there's an undercurrent of fear running throughout technological culture that we ourselves are in danger of turning into machines. Little wonder, then, given that sort of context, that there might be some serious resistance to the idea that, in fact, we already are machines.

But what exactly is a machine in the first place? Apart, that is, from a lifeless and soulless artifice, the very embodiment of the anti-human? In one sense, a machine is just a tool, as evident in that "simple machine", the lever. But there's a different, though related, sense, in which a machine is any system involving a number of parts, assembled in a particular manner, connected by determinate or causal processes, and capable of performing some function. As such, the idea of the machine or mechanism applies not just to artificial constructions but has become a general model of explanation for natural processes of any kind -- if we can provide an account for any phenomenon in terms of purely causal interactions between its parts or components, then we can say that we have provided a mechanism for that phenomenon. And with that we will have gained a metaphorical but nonetheless very effective "lever" for it, or a way of bringing it under our control. The idea of seeking out the "mechanisms" of natural phenomena lies at the heart of the technological or industrial civilization that has transformed the earth.

This idea wasn't always so prevalent, however. Just as the extension of the idea of the simple tool provided one explanatory model for natural phenomena, so an extension of a concept based upon our own social interactions provided another: this is the notion of "agency" as an explanatory model. At the basis of this notion is the idea of will or purpose, which plays an explanatory role parallel to that of cause in mechanism. Once, it was common to view such an animating will, instantiated in God or gods or spirits, as the explanation of virtually all natural phenomena of any significance, and we can still see a vestige of this in the tendency to speak of natural disasters as "acts of God", for example. Note that this notion of agency as an explanatory model offers nothing like the kind of effective control over phenomena that mechanism does -- instead, the forces behind events could only be flattered or appeased, with always uncertain results.

About four or five hundred years ago in Europe, however, this model began, slowly but implacably, in one field of phenomena after another, to be replaced by the mechanistic model. One of the last of such areas, and one which looked to be particularly resistant to the advance, was that of living organisms, which seemed, almost by definition, to exhibit the sort of will or purpose that was the essence of agency. But, of course, the last fifty years or so have witnessed perhaps the greatest success of the mechanical explanatory model yet, with the discoveries of the various mechanisms underlying the phenomenon of life itself*. (It's worth pointing out that living processes often depend upon randomness rather than strictly determinate causal sequences, but that such "stochastic" processes are nonetheless sufficiently reliable on a statistical basis to qualify as mechanisms.) So by now, the notion of agency as an explanatory model has been pushed back to the social and psychological arena in which it first arose, and is under some pressure there.

For reasons that I've already touched on -- having to do with an inherent indeterminacy in cultural self-knowledge -- I think agency is secure in this arena, finally, against further encroachment. In this more limited field, agency really serves more significantly as a moral concept than as an explanatory one in any case (though of course it can also play a different kind of explanatory role to some degree as well). In this, it addresses directly much of our fear of the machine, since what really arouses concern, in this contest between models, is not our humanity per se (except in a broad or metaphoric sense) but rather our status as moral agents. At some point in our future, after all, an expanded cultural ecumene might well include actual machines (e.g., robots, androids, etc.), not to mention aliens, speaking animals, or what have you, and they too would be simply incorporated as fellow agents into the moral realm that communication engenders. What repels us, quite rightly, about the image of ourselves as machines is the thought that it seems to turn us into mere tools or instruments (in the hands, implicitly, of another will or agent) -- but that is something that mechanistic explanation in itself is simply unable to do (though some ideologies have thought they could, and tried, with horrifying results). It's not the machine that we should fear, in other words, nor mechanism as an explanatory model, both of which have brought immense benefits and opportunities, and have, if anything, on the whole enhanced rather than diminished our humanity. What we should fear, as always, are the tragically mistaken uses to which they're put -- uses that we can, and must, refuse. Understanding that can make the enquiry into the mechanisms even of consciousness and culture not just a useful, but an entirely human undertaking.

*Including, in just the last twenty or so years, the mechanisms behind the enormously complex processes of developmental biology -- this is a plug for a fascinating book I'm in the middle of, Endless Forms Most Beautiful by Sean B. Carroll.


Saturday, January 7, 2006

The "explanatory gap" series: a summary and a Q-and-A

(The end of the series, but likely not the topic.)

In this series of posts, I've tried to look more closely at the widespread thesis that any attempt to explain conscious experience in purely physical terms -- i.e., any attempt to "reduce" experience to a physical level -- is fraught with a very fundamental problem, termed, by Chalmers (after Levine), an "explanatory gap" or "the hard problem of consciousness". Neither Chalmers nor Nagel, the two authors of seminal papers in support of this thesis that I've quoted from here, want to say that this means such a project is necessarily doomed -- in fact, towards the end of their papers, both make useful suggestions for making some headway with it -- but the thrust of both their suggestions and their texts as wholes is that, for any purely physical explanation, the "hard problem" isn't going to go away.

In this, as I've indicated, both philosophers are tapping into a very deep and very long-standing intuition -- which is that experience is an inherently non-physical phenomenon, or, in other words, that there is an absolute and fundamental gulf between the mental, whatever that may be, and the physical (whatever that may be). Such a gulf, of course, creates all kinds of problems concerning the interaction of such radically disparate realms, which is no doubt among the reasons that neither of the two wants to assert that the gap is unbridgeable in principle. But in general when we try to get down to details about mental-physical interaction, we're often met with mystification or ad-hocery or both -- e.g., the mental (aka "experience") is a new fundamental entity in the world; the mental operates in mysterious synch with the physical; the mental is just some mysterious by-product of the physical; the mental is some quantum thingy, etc. What I've wanted to propose is that, instead of such evasions, we question that deep-seated intuition of a gulf between mental and physical in the first place -- it wouldn't be the first time that our intuitions concerning ourselves and our place in things have led us astray.

The preceding posts in this series have all just been suggestions toward this end. The last one (before this) in particular puts forth an alternative to the ontological divide, and an explanation of the intuition itself, in the form of a difference in orientation toward, or perspective on, experience -- a difference between viewing it as a phenomenon, on the one hand, and being a part of the phenomenon, on the other hand. But, after all, even if we're willing to accept that the intuition of a fundamental gulf between "mental" and "physical" realms may be mistaken (with the scare quotes indicating that these terms themselves may be part of the problem), we're still left with the considerable difficulty of coming up with an actual physical explanation of the phenomenon of conscious experience. That's properly a scientific, not philosophic or speculative, task, but I thought it would be worthwhile, after a series of often critical posts, to try to answer a few of the more obvious questions about this approach in a more positive vein:

If experience is, as you say, just another phenomenon among phenomena, then why is it, after all, that we can't observe it as we can any other phenomenon?
The key point about conscious experience as we experience it is that it's actually a function of two things: a particular kind of behavior-control mechanism, on the one hand, and our situation as a component of such a mechanism, on the other hand. So we can't "observe" this experience directly in any other entity simply because the "we" component isn't in those other entities (see this post). If we could unplug the "we" from our own brains and plug it into the analogous slot in another entity, then we could indeed know what it's like to be a bat, for example. Short of that, all we can do instead is observe the effects of conscious experience in the behavior of other organisms, and infer that the underlying mode of behavior control is consciousness.

One can understand that experience, as a form of information, might have causal effects -- that is, that it makes a difference -- but why does experience have content (like red or ringing or sour)? What does such qualitative content add?
The short answer is because information, unlike the smile of the Cheshire Cat, has to have a carrier. The content itself is largely arbitrary and unimportant (in principle -- there may be technical advantages to certain content in practice) -- what carries the information, and therefore does the work, of experience is simply the difference between one token-like quale and another. Which is the reason that "inverted qualia" arguments -- sometimes used as arguments demonstrating the ineffectiveness of qualia in general -- are irrelevant here.
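The point that the work is done by the differences between tokens, not by their intrinsic content, can be shown in a small sketch (the token names and the `choose` function are invented for the example): apply any consistent relabeling -- an "inversion" -- to every token, and behavior that depends only on sameness and difference is unchanged.

```python
def choose(percepts, preferred):
    """Return the indices of percepts matching the preferred token.

    The decision uses only token identity -- which percept is the same
    as which -- never any intrinsic property of a token's "content".
    """
    return [i for i, p in enumerate(percepts) if p == preferred]

# Two "experiencers" whose qualia are systematically inverted.
invert = {"red": "green", "green": "red", "sour": "sweet", "sweet": "sour"}

scene = ["red", "green", "green"]
inverted_scene = [invert[p] for p in scene]

# Same information, same behavior: the pattern of sameness and
# difference is preserved under any consistent relabeling of tokens.
assert choose(scene, "red") == choose(inverted_scene, invert["red"])
```

This is, in miniature, why an inverted-qualia swap makes no functional difference: what the inversion preserves -- the structure of differences -- is all that the system ever uses.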

But why is the content not just voltages or magnetized regions or some such, rather than red, sour, etc.?
Because if voltages, magnetic regions, etc. are interpreted simply as causal factors (which is what we mean by invoking them), then this violates the principle of "loose connectivity" that defines consciousness as a behavior control mechanism. Regardless of how qualia are caused or instantiated themselves, they must be detected solely as distinct tokens of information -- at this stage, there is no notion of quantity as opposed to quality, and certainly no notion of "voltage", etc.; there is only arbitrary content and difference.

But then notice the difference between this desiccated language of "tokens" and what David Chalmers rightly calls our "rich inner life" -- doesn't that alone suggest that there's more to experience than mere information tokens?
Well, there's certainly a linguistic or conceptual difference, but that has to do with the different aims or objectives of phenomenological description, on the one hand, and mechanistic explanation, on the other -- and those different objectives derive, in turn, from the distinct orientations toward experience spoken of already. But it's interesting that Nagel, toward the end of his paper "What is it like to be a bat?", suggested the possibility of developing what he called an "objective phenomenology": "... its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences". Examples of concepts for such a phenomenology of qualia, abstracted across different sensory channels, might be the notion of a spectrum (linear or circular?), intensity, locality, definition, motivation, duration, etc. With concepts such as these, though we could never match the richness of experience itself in objective terms, we might begin to approach it, from the outside as it were, by deliberately trying to avoid terms dependent upon a particular kind of experience for their meaning.

In summary and conclusion:
  • The "mental" and the "physical" (in reference to consciousness) are not two different realms, nor two different aspects of things, but are just two different ways of speaking about the same thing, their difference a consequence of the simple fact that we the speakers are at the center of the thing we're speaking about.
  • In physical terms, "we" are not agents lurking in the machine, but are complex components of the machine -- a component specialized to receive standardized (tokenized) signals, and integrate such tokens into the processes of behavioral decision-making. (Among other things, this implies that the notion of "feeling" does indeed apply to certain mechanical structures.)
  • Far from being a pointless and/or mysterious after-effect of physical processes, then, phenomenal experience or feeling is just information in the form of distinct tokens, and is the linchpin in the physical explanation of consciousness -- it provides the essential, loose (i.e., informational as opposed to directly causal) connection between the two key components of consciousness as a uniquely flexible behavior-control system.

And to anyone who makes it through the whole of this post, my apologies for its length.


Thursday, January 5, 2006

Is experience a paradox?

(This is another post in a series, starting with this and continuing here and here, on the so-called "explanatory gap" in any current, or perhaps any possible, theory of consciousness and the reality of conscious experience.)

It might seem so, going by at least some of the statements made in Chalmers' paper "Facing up to the problem of consciousness". "Why," he asks, "should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does." And later: "We know that conscious experience does arise when these functions [such as visual discrimination] are performed, but the very fact that it arises is the central mystery." Similarly, Thomas Nagel, in "What is it like to be a bat?", his famous earlier paper that focused attention on the problem of phenomenal experience, frames the issue more sharply:
We appear to be faced with a general difficulty about psychophysical reduction. In other areas the process of reduction is a move in the direction of greater objectivity, toward a more accurate view of the real nature of things. This is accomplished by reducing our dependence on individual or species-specific points of view toward the object of investigation....

Experience itself however, does not seem to fit the pattern. The idea of moving from appearance to reality seems to make no sense here. What is the analogue in this case to pursuing a more objective understanding of the same phenomena by abandoning the initial subjective viewpoint toward them in favor of another that is more objective but concerns the same thing? Certainly it appears unlikely that we will get closer to the real nature of human experience by leaving behind the particularity of our human point of view and striving for a description in terms accessible to beings that could not imagine what it was like to be us. If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity—that is, less attachment to a specific viewpoint—does not take us nearer to the real nature of the phenomenon: it takes us farther away from it.
So the paradox seems clear and stark (though practical rather than logical): on the one hand, "subjectivity" is an inherent source of error while "objectivity" is the path to reality; but on the other hand, subjectivity is here the reality to be explained -- how could it ever be possible to be "objective" about the phenomenon of the subjective? As soon as we try to grasp such an elusive phenomenon in objective terms, in other words, we seem to lose the very quality that defines it.

Now, when you come up against a paradox in an explanatory project, you have a number of options (not counting the one of just moving on to another project): you can paste a big label "Mystery" over the whole thing and store it away (perhaps bringing it out occasionally to inspire awe); you can invent new entities and/or ontologies and endow those with just the right features or qualities that you hope will make the paradox go away; or you can take the appearance of the paradox as an indication that there's a flaw or problem buried somewhere in your assumptions. The first option might have its uses, but is clearly an expression of explanatory failure. The second might work, but has an ad hoc or arbitrary aspect to it that rubs off on the concocted entities/ontologies. The third option is the one I'd say has the most promise (and may help clear up some other quandaries as well).

In this case, for example, the assumptions I would question are those lurking behind the quasi-realist epistemology that seems implicit in these approaches. Among the most basic of those assumptions, as we can see from the Nagel quote above, are two: first, that "appearance" is to be distinguished from reality, and second, that explanations can and should approximate that reality (in the process, necessarily shunning appearance). Which obviously leads directly into dilemma and paradox when trying to cope with "appearance" itself. So, first, it might help to look again at that "epistemological inversion" suggested previously, in which appearance isn't a veil but is bedrock, and in which explanations don't aim at approximating a non-phenomenal reality (an impossibility), but rather at constructing more effective and comprehensive structures built out of appearance. Such "efficacious myths", as Quine called them, would still be characterized by ever greater abstraction, in which concrete experiential content is increasingly reduced, but they could no longer be seen as structures inherently alien to appearance or what we've been calling experience.

And that in turn might open the way to a more efficacious understanding of experience within a framework of two distinct perspectives or orientations. Earlier in this series, I'd distinguished those two perspectives as the view from within the phenomenon, as it were, and the view from without (in which experience as such appears only presumptively, based largely on observed behavior). As I said in the previous post, however, this doesn't quite capture the situation, and might even suggest that there is the possibility of a view (of anything) from outside of experience itself, when of course such a notion doesn't even make sense. It might help to expand a little on these different perspectives by pointing out that, on the one hand, experience is all that we are aware of, and is the ground and building material for all concepts, abstractions, and explanations of anything -- this is the first perspective. And, on the other hand, (presumptive) experience (as when we encounter other conscious entities) is but one phenomenon among others within the world that is built out of experience -- this is the second perspective. The second perspective, in other words, is contained within the first -- the situation suggests an Escher-like painting, in which the world is wrapped around and condensed into a localized object within itself. Any explanation -- any attempt at explanation -- of such an object will always be contained within our experiential situation, and can never contain that situation. And that, I think, is a sufficient explanation of the wrongly-named "explanatory gap" -- it would be better to think of it as a situational gap. By its nature, that gap is ineradicable and unbridgeable, since it simply refers to two distinct perspectives (and lies, I believe, at the origin of the many forms and varieties of dualism that have haunted cultures since long before Descartes).
But, once we abandon the realist assumption that there must be a single, "true" perspective on the phenomenon, and once we disentangle the two perspectives, it presents no obstacle in itself to the provision of a coherent, effective, and physical explanation of experience as a phenomenon within experience.

UPDATE: Here's the conclusion to this series [see "The 'explanatory gap' series: a summary and a Q and A" above].



Thursday, December 22, 2005

Disassembling "you" (or "I")

(This is another post in a series, starting with this and continuing with this, on the so-called "explanatory gap" in any current, or perhaps any possible, theory of consciousness and the reality of conscious experience.)

In the first post in this series, I used the phrase "the view you get" in referring to a particular perspective on the phenomenon of experience, namely that from within the phenomenon. This has the less than fortunate effect, however, of appearing, yet again, to support that old homunculus image of a little person inside your head monitoring a bunch of screens. So perhaps it's time to tackle that image head-on -- and ask, where exactly is this "you" (or "I")? We seem to be inside our own heads at least, correct? After all, we can see, if indistinctly, the edges of some facial features. But then, too, when we touch something it's clearly we who do the touching, so in that sense it seems as though we extend throughout our bodies as well. But we don't exactly think of ourselves as some kind of mental fluid nor do we think that if we lose a part of our body we really lose a part of our selves -- rather, it's more like spatial location, while limited to our bodies, is somehow just not a pertinent or appropriate consideration for our selves beyond that. If we have an implicit intuition about the nature of our self, it's more likely something without spatial dimension, like a point of view, or a point-source of agency. In fact, I think, this intuition is itself the source of much of the intuitive power of that notion of an "explanatory gap" -- a point-source of agency seems something inherently at odds with the very basis of mechanistic (aka "reductive") explanation of any sort.

But what if "you" were not such a point-source at all? What if in reality this "you" and "I" were an intricate assemblage of parts, components, and functions? Few people doubt that such mechanisms play a role in the self, of course, but even fewer, I think, believe that that role exhausts the self -- even most philosophers, it seems to me, cling at least implicitly to the notion of a core of selfhood, or "you"- and "I"-ness, that lurks like a ghost or homunculus in the heart of the machine. But look at what happens to the "you" if you suffer some brain damage or impairment -- unlike with bodily impairment, the "you" itself is degraded to some degree, in ways that the "you" may or may not be aware of. As illustrated in the writings of Oliver Sacks, for example, some of these damaged versions of "you" exhibit strange or bizarre impairments, and certain kinds of damage can alter the personality, character, and essence of "you". Beyond some point, "you" not only lose cognitive function, but the sense of self as subject is gone as well, and after that point consciousness itself is gone. (It's interesting, in this context, to think of the scene in 2001 in which HAL's component parts are, one by one, deactivated.) So it would seem that there really is no core or essence of "you" that is distinct from the machinery that makes up the "you".

In ordinary speech, of course, we use such pronouns casually, as simple indexicals, and can safely ignore these complications. But we should be cautious of such language habits when we come to talk about experience, where casual intuitions can become an obstacle to understanding. Taking "you" to mean a functional assemblage of component parts, for example, will significantly change the meaning of the phrase that initiated this post: "the view you get" as a part of the phenomenon of experience -- we're no longer speaking of a dimensionless agent secreted in the heart of the phenomenon, then, but rather of a complex piece of machinery in its own right, specialized to detect signals of a particular kind (e.g., qualia), and that is itself a component of a larger mechanism. For that sort of mechanism, it evidently makes sense to speak not only of the "point of view" of a machine, but of its feelings as well.

UPDATE: Here's the conclusion to this series [see "The 'explanatory gap' series: a summary and a Q and A" above].





Wednesday, December 21, 2005

Light and darkness: consciousness and reflex

(This is another post in a series, starting with this one, on the so-called "explanatory gap" in any current, or perhaps any possible, theory of consciousness and the reality of conscious experience.)

"Why is the performance of these functions [that are 'in the vicinity of experience'] accompanied by experience?" Chalmers asks, in the paper that re-introduced the idea of an "explanatory gap" in all attempts to construct an explanation of consciousness. A little later he puts the same question a bit differently: "Why doesn't all this information-processing go on 'in the dark', free of any inner feel?" It was, presumably, his inability to find an answer to such questions that lay behind his use of the "zombie" thought-experiment to argue against a materialist, and in favor of a dualist, approach to comprehending consciousness as a phenomenon. My argument here, however, is that he gave up too quickly.

In asking why there is experience, as in an "inner feel", Chalmers appears to be asking for a functional reason for experience, not necessarily for a mechanism (which may or may not be the case, but let's assume so). With that understanding, then, we can compare a conscious response to a stimulus with another type of response which really does go on "in the dark", as Chalmers puts it -- and then ask what additional functionality does consciousness or feeling supply? That other type of response is the reflex: if you accidentally touch a hot surface, for example, your hand moves away before you're able to feel anything -- a very simple instance of information-processing "in the dark", as it were. Yet, slightly later, you still do feel the pain as well -- why? Because the feeling is an essential component of a more sophisticated kind of response-generator or behavior control. It is the bearer of necessary information, about the location, type, and severity of the injury, for example, but it also provides that information in a significantly passive form -- i.e., simply as feeling, not as a direct connection to a response -- that is the key to the functional adaptability of consciousness as a control mechanism. Of course, any sort of pain is a form of feeling that, unlike sight, say, or hearing, has a built-in motivational component of varying strength, but the point of the feeling is precisely that, while it may motivate a response, it doesn't direct one, and so the motivation may be over-ridden under dire enough circumstances -- conscious experience, in other words, is a vital component of a behavior control mechanism of astonishing flexibility, without which we would be "in the dark" in a more than just literal sense.

Which is the problem with Chalmers' hypothetical zombies -- without the "light" of experience, such entities lack the form of information that provides consciousness with the free play needed for its flexibility. Now Chalmers, of course, starts out by viewing experience as something inherently different from a mechanism of any sort, and so will always return to his insistence that, given any mechanism, even one of an allegedly "conscious" kind, one can always view its processes as "dark", or without feeling, which therefore always makes the feeling (for him) appear as an addition to the mechanical, causal processes, or as an "epiphenomenon". In fact, Chalmers must insist not just that you can view any possible mechanical process as dark, but that you must so view it, since feeling and mechanism are fundamentally distinct.

And this, despite everything I've said to this point, might be considered to be half right -- it accurately reports one's intuition from one of two possible perspectives on a phenomenon. Consider, for example, a researcher studying the difference between reflex and conscious response in another organism, and who doesn't, obviously, have direct observational access to the experience itself, but must rely on proxies of one sort or another (e.g. verbal report, other behavioral signs, neural activity, etc.) -- from her perspective, even if she could trace every single neural signal involved in the two different processes, and even though one might be more complex than the other, both would appear equally "dark", since no trace of "feeling" would ever be observed. As soon as the same researcher studies her own reactions to the same stimulus, on the other hand, it's immediately evident that, while the reflex is as dark as before, the conscious response is inextricably connected with feeling -- indeed, "feeling" is the very meaning of such a response. The difference between the two cases is solely one of situation or perspective -- in the first case she was external to the phenomenon; in the second, "she" was a part of the phenomenon. What Chalmers does, to generate the intuition of an "explanatory gap", is to superimpose the two perspectives, in effect, which produces a rather odd and puzzling sort of double vision, certainly, but which has nothing in itself to do with an explanatory deficiency.

UPDATE: Here's the conclusion to this series [see "The 'explanatory gap' series: a summary and a Q and A" above].



Tuesday, December 20, 2005

That "explanatory gap" again

(This is the first of what may -- but may not -- be a series of posts on the so-called "explanatory gap": the supposed gulf between any current, and perhaps any possible, explanation of conscious experience and its reality.)

Without question, conscious experience (aka qualia, phenomenal experience, etc.) presents special problems of interpretation and explanation. On the one hand, we can't doubt that it exists (despite some philosophical quibbling over exactly what that's supposed to mean), nor does anyone seriously doubt that it's a characteristic of human beings other than oneself. Virtually all ordinary people (i.e., non-philosophers), in fact, also think it's a characteristic of a number of other animal species as well, but those same people are also likely to want to draw a line somewhere down the phylogenetic chain, and pretty certainly will wish to exclude things like rocks and machines from the class of conscious entities. So from a scientific (i.e., physicalist or naturalist) perspective, in other words, conscious experience appears to be an entirely natural phenomenon, that's evolved in certain organisms but not others, that's localized in time and space, and that's as subject to cause and effect as any other phenomenon. Yet on the other hand, out of all phenomena, and in a very odd sort of way, it seems to be alone in being inherently unobservable -- not unobservable because it's too small or too quick or too slow, in other words, but because it's not something that can be observed, even in principle. We can see the effects of the phenomenon, in the sense of the behavior it generates, and we can see something of the underlying mechanisms that themselves generate the phenomenon, but we can't see "seeing" itself ... or see "hearing", or "tasting", or "hurting", etc. Experience itself, in other words, can only be experienced, rather than observed, and each conscious entity can only experience its own experience.

Little wonder, then, that all attempts at an explanation of conscious experience have left at least an appearance of an "explanatory gap" -- as David Chalmers says, in his seminal paper on this topic:
... even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience -- perceptual discrimination, categorization, internal access, verbal report -- there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? A simple explanation of the functions leaves this question open. [emphasis his]

This sense of a gap or inadequacy, one that remains even after all attempts at explanation, is a persistent one, to an almost surprising extent, and in what follows I want to look at some of the reasons for that. A start would be to notice that not all gaps are explanatory gaps -- some "gaps" may be a sign not so much of a lack or absence but rather simply of a difference, in the same ordinary way, for example, that categories are different (failure to recognize such a difference leads to the familiar notion of the "category mistake"). Here are some examples of simple differences that might appear, wrongly, as an "explanatory gap":
  • There's a difference between an explanation of function -- which is an answer to the "why" question -- and an explanation of mechanism -- which answers the "how". In particular, it may be possible to answer why there is experience -- because experience just is the information necessary for behavior control, for example -- without, yet, being able to answer how there is experience.
  • There's also a difference between any sort of explanation of a phenomenon and the phenomenon itself. This may seem obvious, and indeed it is in almost any other instance, but there often seems to be an element of this difference implicit in complaints along the lines of: "but that explanation still doesn't give us the experience of red, say", even if both the function and the mechanism of experience had been explained. Part of the problem may be that both explanation and experience are phenomena of consciousness, and we may have an expectation that the latter should be contained somehow in the former (partly at least for reasons given in the next point).
  • And then there's a difference in perspective, between the view of a phenomenon from the outside, as it were, and the view from inside -- the view you get when you're not just a part of the phenomenon yourself, but when the "view" itself is the phenomenon. With this difference in perspective, of course, we're getting closer to the nub of the real problem here -- it's as though explanation, in the unique case of this particular phenomenon, is trying to wrap back on its own origins, and baffling itself with its own reflection.

Now, I don't expect any of this to dispel the appearance, at least, of an "explanatory gap", an appearance that has a very powerful grip on our intuitions. But what I hope these kinds of reflections might begin to suggest is that much of the philosophic problem of conscious experience is located not necessarily in the phenomenon itself but rather in the particular, and peculiar, nature of that "gap".

UPDATE: Here's the conclusion to this series [see "The 'explanatory gap' series: a summary and a Q and A" above].