From the Notebooks of Dr. Brain


Minister Faust’s SF novel, From the Notebooks of Dr. Brain, had me laughing from the first page to the last. But the book is also a mind-boggling, multi-leveled allegory of racism and corporate fascism in America today. Dr. Brain is so chock-full of references to pop culture figures and political events alike that it is virtually a roman à clef — except that the people and events it refers to inhabit the Marvel and DC universes as well as the one we actually live in. (There’s an excerpt from the novel here).

From the Notebooks of Dr. Brain presents itself as a psychological self-help-manual-cum-case-history for comic-book superheroes: Unmasked!: When Being A Superhero Can’t Save You From Yourself. The author of this self-help book, and thereby the narrator of the novel, is one Dr. Brain (or, more fully, Dr. Eva Brain-Silverman), a sort of Dr. Phil for the “extraordinary abled.” She has her hands full, dealing with superhero malaise and depression. All the major supervillains have been defeated, leaving thousands of superheroes with nothing much to do. With no target upon which to focus their crime-fighting energy, they are flailing about without any sense of direction, falling prey to petty bickering and to various forms of self-destructive behavior. It’s the superhero equivalent of post-Cold War anomie: with no Evil Empire left to fight, there is no sense of purpose, no source of morale. Francis Fukuyama’s “end of history” has left all the superheroes feeling worse than useless. Pending the invention of a new enemy (which of course will turn out to be “terrorism”), the superheroes need Dr. Brain’s help in order to attain “self-actualization.”

The superheroes signed up for Dr. Brain’s therapy include such figures as The Flying Squirrel, Omnipotent Man, Power Grrrl, and the X-Man. The Flying Squirrel could best be described as a combination of Batman and Dick Cheney; he’s a quasi-fascist vigilante with all sorts of high-tech wizardry in his “utility belt,” and also the multimillionaire head of a multinational corporation which has a lock on the media, as well as the defense and surveillance industries. Omnipotent Man is a doofus-y, and naively hyperpatriotic, version of Superman (he comes from the planet Argon — instead of Krypton). Power Grrrl is sort of like Britney Spears with superpowers (though it turns out, in the course of the book, that this is mostly an act: Power Grrrl, unlike the real Britney, is pulling her own strings). X-Man, the key figure around whom the narrative turns, is an angry black militant with the super-ability of “logogenesis”: manifesting his words as actual things.

The novel’s brilliance has much to do with its exuberant linguistic and conceptual inventiveness. Faust gleefully rings the changes on all sorts of pop culture sensations and scandals, with superheroes as the celebrity targets of paparazzi and gutter journalists. The lives of the superheroes abound in episodes of drug addiction, hidden sexual fetishes, nervous breakdowns, and bitter family disputes — not to mention miscegenation, still a matter of shock and bewilderment, shame, hysterical confusion, and disavowed fantasies in our supposedly “post-racial” society. Even aside from the main plotlines, the book abounds in throwaway allusions to superheroes run amok, and to crazed scientific experiments and neo-colonialist endeavors that leave catastrophic “collateral damage” in their wake. Faust is brilliant in seeing superhero comics as the key to understanding the construction of social reality in a world dominated by the military-entertainment complex.

Faust also mixes and matches styles and languages, with everything from groaner puns (we meet supervillains like Zee-Rox, who can imitate anything, and Sara Bellum, who has terrifying mental powers), to ridiculous dialect-speech (Omnipotent Man’s gee-gosh-Norman-Rockwellesque-cornball-middle-American lingo; or the Germanic accent of Wonder-Woman-like superhero Iron Lass, originally a goddess from the Norse pantheon), to hyperbolic racial invective, to tabloid-style excited overstatement, to hilariously convoluted psychobabble and grotesque mixed metaphors. On one page, X-Man disses another superhero of color as “a slack, slick, loose-dicked, willingly no-self-control, no-zipper tan-man who maks out his mind to convince himself he isn’t a senseless, thoughtless, shiftless, aimless, brainless, oversized pants-wearing, forty-ounce-loving, penis-fixated, self-underrated supreme champeen of galactic niggativity” (page 148, quoted again on page 331). On another, Dr. Brain confides to her readers that “unraveling the bandages covering Kareem’s and Syndi’s psychemotional wounds was exhaustive work, since their bloodied psychic linens were so crusted together they’d congealed into experiential gore” (page 298). At still another point, Dr. Brain asks her patients to consider “how many of the psychemotional barnacles attached to the ship of my consciousness am I willing to burn off in order to sail freely across the ocean of well-adjustedness?” (page 225).

And so on.

But beneath all this exuberant postmodern linguistic play, From the Notebooks of Dr. Brain is a serious socio-political novel, focusing on the continuing impact of race and racism in America today. (Predominantly in the USA, that is, although Minister Faust himself is Canadian). X-Man’s “neurosis,” for which Dr. Brain endeavors to treat him, is in fact grounded in his experience of what W.E.B. Du Bois famously called double consciousness:

this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity. One ever feels his twoness, — an American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals in one dark body, whose dogged strength alone keeps it from being torn asunder.

X-Man is divided — and therefore unable to attain what psychobabble would call an integrated selfhood, or in Dr. Brain’s terms “self-actualization” — by the fact that, on the one hand, he cannot escape or transcend the perspective of general American culture; yet, on the other hand, he can only feel alienated, excluded, and condemned by that culture. As he bitterly says at one point, he’s expected to stand for Truth, Justice, and the American Way; but this is a double bind, because the American Way is in fact incompatible with Truth or Justice.

What this means is that X-Man’s “psychemotional” (a favorite Dr. Brain word) torment and dysfunction — amply dramatized throughout the novel’s lurid, often ludicrous pulp plot twists — cannot be understood in entirely personalistic terms. Such torment and such dysfunction have a crucial (and crucially determining) social dimension. This is arguably true of all forms of so-called “neurosis” (indeed, I would make such an argument), but it is particularly evident in the case of racialized American double consciousness.

Throughout From the Notebooks of Dr. Brain, X-Man’s double consciousness is narrated to us from a point of view that is absolutely unable to discern it. Dr. Brain, with her forcedly-cheerful self-help philosophy, is an unreliable narrator — X-Man even accuses her explicitly at several points of being an unreliable narrator — to the extent that she continually misunderstands and misframes everything that X-Man says to her. She contextualizes all of X-Man’s complaints as being pathological and neurotic, a result of “insubordination and racial antagonism” (page 27) — even when they are pretty clearly rational. Above all, Dr. Brain diagnoses X-Man as suffering from RNPN (Racialized Narcissistic Projection Neurosis), whereby people of color (and superheroes of color) have a chip on their shoulder about past racism that supposedly no longer exists. According to Dr. Brain, X-Man has a pathological need to see himself as a victim, so that he can blame his own failures upon others. Unable to deal with the fact that white people accept him without racism, he has a compulsive need to act out in order to arouse their hostility towards him, so that he can “prove” that racism still exists, allowing him then to act aggrieved and to play the victim.

So the narrating voice of From the Notebooks of Dr. Brain reproduces what has become the dominant ideology of our day: the claim that “we” are “beyond racism,” and that (as Dr. Brain herself puts it) “legislation and social progress have ensured that what was only a dream on the steps of the Lincoln Memorial a few decades ago has become a reality for all” (page 149). This claim allows white people to say, in all “good conscience,” that they are not racist (Look! I watch Oprah! Look! I voted for Obama!), and that they only care about the content of someone’s character, not the color of their skin. To say this is to ignore all the ways that racism is institutional and socially embedded — it is to reduce the question of race to a matter of individual behavior, responsibility, belief, and “preference.” (This is, of course, the way that neoliberalism treats everything, since, as Margaret Thatcher said, “there is no such thing as society. There are only individuals, and families”). And the corollary of this ideology is to say that anybody who does worry about racism is simply hung up about it. In other words, black people are accused of themselves being racist (for the very reason that they perceive racism as existing), while white people get to congratulate themselves on being prejudice-free.

From the Notebooks of Dr. Brain effectively links the dominant American culture’s denial of its own racism, and self-congratulatory “multiculturalism,” with its therapeutic cult of self-help and self-responsibility. Both moves are aspects of the relentless personalization of everything that is a feature of today’s global neoliberalism as well as of a long American tradition of uplift and self-reliance. (This strain of American sensibility was already satirized by Herman Melville in his 1857 novel The Confidence-Man). Dr. Brain’s advice to X-Man is to “begin by recognizing that you are an individual, not a social abstraction. Your destiny belongs to you, not to history, and whatever successes or failures you experience are of your own making. Take responsibility for your own happiness…” and so on and so forth (pages 150-151).

The novelistic brilliance of From the Notebooks of Dr. Brain has much to do with the irony by means of which this sort of psychobabbling drivel becomes the dominant voice of the novel — much as it is the dominant voice in American public discourse generally. As the novel moves towards its action-packed, slam-bang conclusion — as any tale of superheroes must — double consciousness is raised to a vertiginous pitch, as we simultaneously get X-Man’s account of political crisis and turmoil, and Dr. Brain’s dismissal of this account as mere paranoid projection. By the final pages, X-Man is dead, and the creepy Flying Squirrel is firmly in charge. We have witnessed what is basically a fascist coup d’état combined with a racist mass lynching or pogrom; and the establishment of a new social order in which surveillance is ubiquitous, civil liberties are nonexistent, behavior is severely restricted and normalized, and multinational corporate profits are protected unconditionally. Yet this new world order is presented to the reader by the always upbeat Dr. Brain as a triumph of personal “self-actualization” and “psychemotional wellness,” as well as a set of unparalleled new marketing opportunities. In its offhanded and slyly ironic way, From the Notebooks of Dr. Brain both delivers a hilarious roller-coaster ride filled with comic book thrills and chills, and reminds us of what is really scary.

Cognitive capitalism?

I just finished reading Yann Moulier Boutang’s Le capitalisme cognitif (Cognitive Capitalism). Boutang is the editor of Multitudes, a French journal closely associated with Toni Negri. The basic thesis of his book — in accord with what Hardt and Negri say in Empire and Multitude — is that we are entering into a new phase of capitalism, the “cognitive” phase, which is as different from classical industrial capitalism as that capitalism was from the mercantile and slavery-based capitalism that preceded it. This is a thesis that, in general, I am sympathetic to. On the one hand, it recognizes the ways in which 19th-century formulations of the categories of class and property are increasingly out of date in our highly virtualized “network society”; while on the other hand, it recognizes that, for all these changes, we are still involved in what has to be called “capitalism”: a regime in which socially produced surpluses are coded financially, expropriated from the actual producers, and accumulated as capital.

Ah, but, as always, the devil is in the details. And I didn’t find the details of Boutang’s exposition particularly satisfying or convincing. To be snide about it, it would seem that Boutang, like all too many French intellectuals, has become a bit too enamored of California. He takes those Silicon Valley/libertarian ideas — about the value of continual innovation, the worthiness of the free software movement, and the possibilities of unlimited digital dissemination — more seriously, or at least to a much greater extent, than they merit. The result is a sort of yuppie view of the new capitalism, one that ignores much that is cruel and repressive about the current regime of financial accumulation.

There, I’ve said it. But let me go through Boutang’s argument a bit more carefully. His starting point, like that of Hardt and Negri, and of Paolo Virno as well, is what Marx calls “General Intellect” — a concept that only comes up briefly in Marx, in the “Fragment on Machines” which is part of that vast notebook (never published by Marx) known today as the Grundrisse; but that has become a central term for (post-)Marxist theorists trying to come to grips with the current “post-Fordist” economy. (Here’s Paolo Virno’s discussion of general intellect). Basically, “general intellect” refers to the set of knowledges, competencies, linguistic uses, and ways-of-doing-things that are embedded in society in general, and that are therefore more or less available to “everybody.” According to the argument of Virno, Maurizio Lazzarato, Hardt and Negri, Boutang, and others, post-Fordist capitalism has moved beyond just the exploitation of workers’ (ultimately physical) labor-power, and is now also involved in the appropriation, or the extraction of a surplus from, all this embodied and embedded social know-how. Rather than just drawing on the labor-power that the worker expends in the eight hours he or she spends each day in the workplace, “cognitive capitalism” also draws on the workers’ expertise and “virtuosity” (Virno) and ability to conceptualize and to make decisions: capacities that extend beyond the hours of formal labor, since they involve the entire lifespan of the workers. My verbal ability, my skill at networking, my gleanings of general knowledge which can be applied in unexpected situations in order to innovate and transform: these have been built up over my entire life; and they become, more than labor-power per se, the sources of economic value. Corporations can only profit if, in addition to raw labor power, they also appropriate this background of general intellect. General intellect necessarily involves collaboration and cooperation; it arises through, and is cultivated within, the networks that have become so important, and of such wide extent, in the years since the invention of the Internet. In this way, general intellect can be thought of as a “commons” (as Lawrence Lessig and other cybertheorists say), or as the overall framework of what defines us now as a “multitude” (rather than as a particular social class, or as a “people” confined to a single nation, as was the case in the age of industrial capitalism and the hegemony of print media).

All this is well and good, as far as it goes. While I would note that the phenomena described under the term “general intellect” have not just been invented since 1975, but have existed for a much longer time — and have been exploited by capitalism for a much longer time — I don’t doubt that they have been so expanded in recent years as to constitute (as the dialecticians would put it) “a transformation of quantity into quality.” (See my past discussion of McLuhanite Marxism). Let’s provisionally accept, then, Boutang’s assertion that enough has changed in the last 30 years or so that we are moving into a new regime of capitalist accumulation. The question is, how do we describe this new regime?

It’s the form of Boutang’s description of this transformation that I find problematic. He says that the new cognitive capitalism is concerned, not so much with the transformations of material energy (labor-power) into physical goods, as with the reproduction of affects and subjectivities, of knowledges and competencies, of everything mental (or spiritual?) that cannot be reduced to mere binarized “information.” I don’t really disagree with this, to the extent that it is a question of “in addition” rather than “instead.” But Boutang leans a little too far to the opinion that “cognitive” or virtual production (what Hardt and Negri call “affective labor,” and what Robert Reich calls “symbolic analysis”) has displaced, rather than supplemented, the production and distribution of physical goods and services. The source of wealth is no longer labor-power, he says, nor even that dead labor-power congealed into things that constitutes “capital” in the traditionally Marxist sense, but rather the “intellectual capital” that is possessed less by individuals than by networks of individuals, and that is expressed in things like capacity for innovation, institutional know-how, etc.

Boutang claims that this “intellectual capital” [a phrase I hate, because an individual’s skills, knowledge, etc. are precisely not “capital”] is not depleted daily (so that it needs to be replenished) in the way that physical labor-power is under industrial capitalism; rather, it is something that increases with use (as you do more of these things, you become better at them), so that the process of replenishment (learning more, gaining skills, improving these skills or virtuosities through practice) is itself what adds value. Also, this “intellectual capital” is an intrinsically common or social good, rather than a private or individualized one. It can only be realized through network-wide (ultimately world-wide) collaboration and cooperation. For both these reasons, the appropriation of this “general intellect” is a vastly different process from that of appropriating individual workers’ labor-power. All this is exemplified for Boutang in phenomena like online peer-to-peer file trading, and in the open source software movement — he sees collaborative production in the manner of Linux as the new economic paradigm.

Now, I am in favor, as much as anybody is, of violating copyright, and of open source (for things like academic publications as well as for software); but I do not believe that these can constitute a new economic paradigm — they still exist very much as marginal practices within a regime that is still based largely on private property “rights” and the extortion of a surplus on the basis of those “rights.” [I should say, as I have said many times before, that I am happy for my words to be disseminated in any form, without payment, as long as the attribution of the words to my authorship — to use a dubious but unavoidable word or concept — is retained]. Boutang is so excited by the “communist” aspects of networked collaboration, or general intellect, that he forgets to say anything about how all this “cognitive” power gets expropriated and transformed into (privately owned) capital — which is precisely what “cognitive capitalism” does. He optimistically asserts that the attempts of corporations to control “intellectual property,” or extract it from the commons, will necessarily fail — something that I am far less sure of. “Intellectual property” is an oxymoron, but this doesn’t mean that “intellectual property rights” cannot be successfully enforced. You can point to things like the record companies’ gradual (and only partial) retreat from insisting upon DRM for all music files; but this retreat coincides with, and is unthinkable without, a general commodification of things like ideas, songs, genetic traits, and mental abilities in the first place.

Boutang gives no real account of just how corporations, or the owners of capital, expropriate general intellect (or, as he puts it in neoliberal economistic jargon, how they capture “positive externalities”). He seems to think that the switch from mere “labor-power” to “general intellect” as the source of surplus value is basically a liberating change. I would argue precisely the opposite: that now capital is not just expropriating from us the product of the particular hours that we put in at the workplace; but that it is expropriating, or extracting surplus value from, our entire lives: our leisure time, our time when we go to the movies or watch TV, and even when we sleep. The switch to general intellect as a source of value is strictly correlative with the commodification of all aspects of human activity, far beyond the confines of the workplace. Just as the capitalist cannot exploit the worker’s labor per se, but must extract it in the form of labor power, so the capitalist cannot exploit general intellect without transforming it into something like “cognition-power” — and this is extracted from individuals just as labor-power is. When the division between physical and mental labor is made less pronounced than it was in the Fordist factory, this only means that the “mental” no less than the “physical” is transformed into a commodified “capacity” that the employer can purchase from the employee at a price that is less than, and incommensurate with, the “use” the employer gets from that power or capacity. Boutang makes much of the fact that cognition is not “used up” in the way that the physical expenditure of energy is; but I don’t think this contrast is as telling as he claims. The fatigue of expending cognitive power in an actual work situation is strictly comparable to the fatigue of expending physical power in a factory. And the stocking-up of physical power and of cognitive ability over the lifetime of the workers go together entirely, rather than being subject to opposite principles.

Boutang seems to ignore the fact that the regime of “intellectual property” leads to grotesque consequences such as the fact that an idea that a Microsoft employee might have when she is taking a bath, or even when she is asleep (consider all the stories of innovative ideas that come to people in dreams, like Kekulé’s discovery of the “ring” structure of benzene) “belongs” to the corporation, and must be left behind if and when she moves on to another job. (Let me add that it is just as absurd to assert that an idea that I come up with from a dream “belongs” to me as it is to assert that the idea belongs to my employer. All ideas come out of other ideas; nothing I do is independent of all the store of “general intellect” that I draw upon).

Boutang also seems to buy into many of the other myths of cognitive capitalism. He endorses the idea that the “flexibilization” of employment (or what in Europe is often called “precarization”) is on the whole a good and progressive thing: it “liberates” workers from the oppression of the “salariat” (I am not sure how to translate this word into English — the “regime of salary,” perhaps?). Boutang goes so far as to point to the way “new economy” corporations in the late 1990s gave out stock options in lieu of higher salary as a harbinger of the way things are being rearranged under cognitive capitalism. This seems entirely wrong to me, because it is only a subset of highly skilled programmers and executives who get these options. As far as I know, the people who wash the windows or sweep the floors at Microsoft or Google do not get stock options. (I don’t think the people who sit at the phones to answer consumer complaints do either).

Not to mention that you’d never know from Boutang’s discussion that over a billion people in the world currently live in what Mike Davis calls “global slums”. William Gibson is right to say that “the street finds its own uses for things”; and there are certainly a lot of interesting and inventive and innovative things going on in the ways that people in these slums are using mobile phones and other “trickle-down” digital technology. (See Ian McDonald’s SF novel Brasyl for a good speculative account, or extrapolation, of this). But all this goes on in an overall situation of extreme oppression and deprivation, and it can only be understood in the context of the “hegemonic” uses of these technologies in the richer parts of the world (or richer segments of the societies in which these slums are located).

Also, Boutang needs to account for the fact that WalMart, rather than Microsoft or Google, is the quintessential example of a corporation operating under the conditions of cognitive capitalism. WalMart could not exist in its present form without the new technologies of information and communication — it draws upon the resources of “general intellect” and the force of continual, collectively-improvised innovation for everything that it does. Also, and quite significantly, it focuses entirely upon circulation and distribution, rather than upon old-style manufacturing — showing that the sphere of circulation now (in contrast to Marx’s own time) plays a major role in the actual extraction of surplus value. Yet WalMart shows no signs of unleashing the “creativity of the multitude” in its workings, nor of replacing the “salariat” with things like stock options for its workers. On that front, its largest innovation consists in getting rid of the central Fordist principle of paying the workers enough so that they can afford to buy what they manufacture. Instead, WalMart has pioneered the inverse principle: paying the workers so little that they cannot afford to shop anywhere other than at WalMart. It might even be said, not too hyperbolically, that WalMart has singlehandedly preserved the American economy from total collapse, in that its lowered prices are the only thing that has allowed millions of the “working poor” to retain the status of consumers at all, rather than falling into the “black hole” of total immiseration. WalMart is part and parcel of how the “new economy” has largely been founded upon transferring wealth from the less wealthy to the already-extremely-rich. But this is a process that Boutang altogether ignores; he writes as if “neoliberalism” were some sort of rear-guard action by those who simply “don’t get” the new cognitive economy. In fact, though, neoliberalism is no mere ideology: it is the actual “cognitive” motor of cognitive capitalism’s development.

Boutang even buys into the neoliberal program, to the extent that he maintains that the role of financial speculation in the current postfordist regime is largely a benevolent one, having to do with the management of the newly impalpable sources of value in the “cognitive” economy. He denies that financial speculation increasingly drives economic processes, rather than merely reflecting them or being of use to them. He needs to think more about the functioning of derivatives in “actually existing capitalism.”

All in all, Le capitalisme cognitif buys into the current capitalist mythology of “innovation” and “creativity” way too uncritically — without thinking through what it might mean to detach these notions from their association with startups and marketing plans and advertising campaigns (and how this might be done). (As a philosophical question, this is what my work with Whitehead and Deleuze leads me to).

The book ends, however, with an excellent proposal. Boutang argues for an unconditional “social wage”: to be given to everyone, without exception, and without any of the current requirements that welfare and unemployment programs impose on their recipients (requirements like behaving properly, or having to look for work, or whatever). This social wage — he gives a provisional figure of 700 euros per month, or about $1000 per month at today’s exchange rates — would be paid in recompense for the fact that “general intellect,” from which corporations extract profit, is in fact the work of everyone — even and especially outside of formal work situations. Boutang spends a lot of energy showing how this proposal is fiscally feasible in Europe today, and how it would rejuvenate the economy (and thus lead, in the long run, to enhanced profits for the corporations whose tax payments would finance it). What he doesn’t say, however — and perhaps does not recognize — is that, even though this proposal is perfectly feasible in terms of the overall wealth of the world economy, if it were really adopted universally — that is to say, worldwide, for all human beings on the face of the planet — it would severely disrupt the regime of appropriation that he calls “cognitive capitalism.” This is yet another example of bat020’s and k-punk’s maxim that (reversing a slogan from May 1968) we must “be unrealistic, demand the possible.” The unconditional social wage is entirely possible in terms of what the world can economically afford, but it is “unrealistic” in terms of the way that “cognitive capitalism” is structured. Demanding it pushes the system to a point of paradox, a critical point — at least notionally.

The Head Trip: consciousness and affect

I’ve been reading Jeff Warren’s The Head Trip: Adventures on the Wheel of Consciousness, basically on the recommendation of Erik Davis. It’s a good pop-science-cum-therapy book, which explores basic modes of conscious experience, both nocturnal and diurnal, and combines accounts of what scientific researchers and therapists are actually doing with a narrative of Warren’s own subjective experiences with such modes of consciousness-alteration as lucid dreaming, hypnotic trances, meditation, neurofeedback, and so on. Warren maps out a whole series of conscious states (including ones during sleep), and suggests that consciousness in general (to the extent that there is such a thing “in general”) is really a continuum, a mixture of different sorts of mental activity, and different degrees of attentiveness, including those at work during sleep. These various sorts of conscious experience can be correlated with (but not necessarily reduced to) various types of brain activity (both the electric activity monitored by EEGs and the chemical activity of various neurotransmitters; all this involves both particular “modules” or areas of the brain, and systematic patterns running through the entire brain and nervous system).

The Head Trip is both an amiable and an illuminating book, and I really can’t better Erik Davis’ account of it, which I encourage you to read. Erik calls Jeff Warren “an experiential pragmatist in the William Jamesian mode,” which is both high praise and a fairly accurate description. Warren follows James in that he insists upon conscious self-observation, and looks basically at what James was the first to call the “stream of consciousness.” Like James, Warren insists upon the pragmatic aspect of such self-observation (what our minds can do, both observing and being observed, in all its messy complexity), rather than trying to isolate supposedly “pure” states of attention and intention the way that the phenomenologists do.

At one point, Warren cites Rodolfo Llinás and D. Paré, who argue that consciousness is not, as James claimed, “basically a by-product of sensory input, a tumbling ‘stream’ of point-to-point representations,” because it is ultimately more about “the generation of internal states” than about responding to stimuli (p. 138). But this revised understanding of the brain and mind does not really contradict James’ overall pragmatic style, nor his doctrine of “radical empiricism.” James’ most crucial point is to insist that everything within “experience” has its own proper reality (as opposed to the persistent dualism that distinguishes between “things” and “representations” of those things). Not the least of Warren’s accomplishments is that he is able to situate recent developments in neurobiological research within an overall Jamesian framework, as opposed to the reductive dogmas of cognitivism and neural reductionism.

Nonetheless, what I want to do here is not talk about Warren’s book, but rather speculate about what isn’t in the book: which is any account of emotion or of affect. Shouldn’t we find it surprising that in a book dedicated to consciousness in all its richness and variety, there is almost nothing about fear, or anger, or joy, or shame, or pride? (There’s also nothing about desire or passion or lust or erotic obsession: I am not sure that these can rightly be called “emotions,” but they also aren’t encompassed within what Warren calls “the wheel of consciousness”). There are some mentions of a sense of relaxation, in certain mental states; and of feeling a sort of heightened intensity, and even triumph, when Warren has a sort of breakthrough (as when he finally succeeds in having a lucid dream, or when his neurofeedback sessions are going well). Correlatively, there are also mentions of frustration (as when these practices don’t go well — when he cannot get the neurofeedback to work right, for instance). But that’s about it, as far as the emotions are concerned.

The one passage where Warren even mentions the emotions (and where he briefly cites the recent work on emotions by neurobiologists like Antonio Damasio and Joseph LeDoux) is in the middle of a discussion of meditation (pp. 309ff.). The point of this passage is basically to contrast the way Western rationalism has simply tried to repress (in a Freudian sense) the emotions with the way the Buddhist tradition has instead tried to “cultivate” them (by which he seems to mean something like what Freud called “sublimation”). Warren oddly equates any assertion of the power of the emotions with evolutionary psychology’s doctrine that we are driven (or “hardwired”) by instincts that evolved during the Pleistocene. The existence of neuroplasticity (as recognized by contemporary neurobiologists) effectively refutes the claims of the evolutionary psychologists — this is something that I entirely agree with Warren about. But Warren seems thereby to assert, as a corollary, that emotions basically do not matter to the mind (or to consciousness) at all — and this claim I find exceedingly bizarre. Warren seems to be saying that Buddhist meditation (and perhaps other technologies, like neurofeedback, as well) can indeed, as it claims, dispose of any problems with the emotions, because it effectively does “rewire” our brains and nervous systems.

What is going on here? I have said that I welcome the way that Warren rejects cognitivism, taking in its place a Jamesian stance that refuses to reject any aspect of experience. I find it salubrious, as well, that Warren gives full scope to neurobiological explanations in terms of chemical and electrical processes in the brain, without thereby accepting a Churchland-style reductionism that rejects mentalism or any other sort of explanatory language. Warren thus rightly resists what Whitehead called the “bifurcation of nature.” Nonetheless, when it comes to affect or emotion, some sort of limit is reached. The language that would describe consciousness from the “inside” is admitted, but the language that would express affective experience is not. I think that this is less a particular failing or blind spot on Warren’s part, than it is a (socially) symptomatic omission. Simply by omitting what does not seem to him to be important, Warren inadvertently testifies to how little a role affect or emotion plays in the accounts we give of ourselves today, accounts both of how our minds work (the scientific dimension) and of how we conceive ourselves to be conscious (the subjective-pragmatic dimension).

Some modes of consciousness are more expansive (or, to the contrary, more sharply focused) than others; some are more clear and distinct than others; some are more bound up with logical precision, while others give freer rein to imaginative leaps and to insights that break away from our ingrained habits of association. But in Warren’s account, none of these modes seem to be modulated by different affective tones, and none of them seem to be pushed by any sort of desire, passion, or obsession. Affects and desires would seem to be, for Warren, nothing more than genetically determined programs inherited from our reptilian ancestors (and exaggerated in importance by the likes of Steven Pinker) which our consciousness largely allows us to transcend.

Another way to put this is to say that Warren writes as if we could separate the states (or formal structures) of attentiveness, awareness, relaxation, concern, focus, self-reflection, and so on, from the contents that inhabit these states or structures. This is more or less equivalent to the idea — common in old-style AI research — that we can separate syntactics from semantics, and simply ignore the latter. Such a separation has never worked out in practice: it has entirely failed in AI research and elsewhere. And we may well say that this separation is absurd and impossible in principle. Yet we make this kind of separation implicitly, and nearly all the time; it strikes us as almost axiomatic. We may well be conscious of “having” certain emotions; but we cannot help conceiving how we have these emotions as something entirely separate from the emotions themselves.

It may be that consciousness studies and affect studies are too different as approaches to the mind (or, as I’d rather say, to experience) to be integrated at all easily. Indeed, in this discussion I have simply elided the difference between “affect” and “emotion”: the terms are sometimes used more or less interchangeably, but I think any sort of coherent explanation requires a distinction between the two. Brian Massumi uses “affect” to refer to the pre-personal aspects (both physical and mental) of feelings, the ways that these forces form and impel us; he reserves “emotion” to designate feelings to the extent that we experience them as already-constituted conscious selves or subjects. By this account, affects are the grounds of conscious experience, even though they may not themselves be conscious. Crucial here is James’ sense of how what he calls “emotions” are visceral before they are mental: my stomach doesn’t start churning because I feel afraid; rather, I feel afraid because my stomach has started churning (as a pre-conscious reaction to some encounter with the outside world, or to some internally generated apprehension). The affect is an overall neurological and bodily experience; the emotion is secondary, a result of my becoming-conscious of the affect, or focusing on it self-reflexively. This means that my affective or mental life is not centered upon consciousness, although it gives a different account of non-conscious mental life than either psychoanalysis (which sees it in terms of basic sexual drives) or cognitive theory (which sees non-conscious activity only as “computation”).

There’s more to the affect/emotion distinction than James’ account; one would want to bring in, as well, Silvan Tomkins’s post-Freudian theory of affect, Deleuze’s Spinozian theory of affect, and especially Whitehead’s “doctrine of feelings.” Rather than go through all of that here, I will conclude by saying that, different as the field of consciousness studies (as described by Jeff Warren) is from cognitivism, they both ultimately share a sense of the individual as a sort of calculating (or better, computational) entity that uses the information available to it in order to maximize its own utility, or success, or something like that. Such an account — which is also, as it happens, the basic assumption of our current neoliberal moment — updates the 18th-century idea of the human being as Homo economicus into an idea of the human being as something like Homo cyberneticus or Homo computationalis. For Warren, this is all embedded in the idea that, on the one hand, our minds are self-organizing systems, and parts of larger self-organizing systems; and on the other hand, that “we can learn to direct our own states of consciousness” (p. 326). Metaphysically speaking, we are directed by the feedback processes of an Invisible Hand; instrumentally speaking, however, we can intervene in these feedback processes, and manipulate the Hand that is manipulating us. The grounds for our decision to do this — to intervene in our own behalf — are themselves recursively generated in the course of the very processes in which we determine to intervene. The argument is circular; but, as with cybernetics, the circularity is not vicious so long as we find ourselves always-already within it. This is in many ways an enticing picture, if only because it is the default assumption that we cannot help starting out with. And Jeff Warren gives an admirably humane and expansive version of it. Still, I think we need to spend more time asking what such a picture leaves out. And for me, affect theory is a way to begin this process.

Sex + Love With Robots

David Levy’s Love + Sex With Robots aims to persuade us that, by 2050 at the latest, it will be a common thing for people to fall in love with robots, have committed relationships with them, and have sex with them. The author wants both to shock us with the extravagance of this claim and to demonstrate to us carefully that such a prospect is entirely likely, and that his extrapolation is entirely rational. And indeed, Levy’s thesis is not all that extreme, when you compare it with, for instance, Ray Kurzweil’s claim that the Singularity will overtake us by 2045.

Still, I think that predicting the future is impossible, and therefore inherently ridiculous. That doesn’t mean we shouldn’t speculate and extrapolate; what it means is that we should read futuristic predictions in the same way that we read science fiction novels. As Warren Ellis recently put it, science fiction is “a tool with which to understand the contemporary world.” More precisely, SF (and nonfiction futuristic speculation as well) is a tool with which to understand those aspects of the contemporary world that are unfinished, still in process, and therefore (as it were) redolent of futurity. SF and futurism are vital and necessary, because they make us stop and look at the changes going on all around us, breaking with the “rear-view-mirrorism” (as Marshall McLuhan called it) that otherwise characterizes the way we tend to look at the world. That’s why I find it indispensable to read people like Bruce Sterling, Jamais Cascio, Charles Stross, Warren Ellis, and so on. The line between science fiction and futurist speculation is an extremely thin one (and some of the people on my list, most notably Sterling, explicitly do both). Extrapolating the future is necessarily a fiction-making activity; but we can’t understand the present, or be ready for the future, unless we go beyond empirical fact and turn to fiction.

That said, Love + Sex With Robots struck me as more symptomatic than truly thoughtful, much less informative. There’s a certain (willed?) naivete to the book, as when Levy cites all sorts of dubious scientific studies and surveys — mostly conducted since 1985 — in order to prove that, for instance, “one of the stronger reasons for falling in love is need — the need for intimacy, for closeness, for sexual gratification, for a family” (p. 40). This is the sort of thing that gives (or at least should give) supposedly “scientific” research a bad name. Is a psychological research team really needed to verify cliches that have wide circulation throughout our culture? “Research” of this sort, which reproduces what everybody already “knows,” is entirely solipsistic: it is pretty much equivalent to telling somebody something, and then asking them to repeat what you told them back to you.

I suppose the idea that people crave intimacy, or sexual gratification for that matter, was merely “folk psychology,” with no objective status, until it was scientifically verified, by research summarized in an article published in 1989 in The Journal of Social and Personal Relationships (as mentioned on Levy’s p. 38). It’s remarkable how — if we accept Levy’s research sources and citations — we knew nothing whatsoever about human nature a mere thirty years ago, and now we know almost everything about it that there is to know; we have gotten, for instance, “a definitive answer to the question” of whether men or women have a stronger sex drive (the answer — surprise, surprise — is that men do; pp. 294-295).

Sarcasm aside, it seems obvious to me — in line with what I said above about science fiction — that one can learn a lot more about “falling in love,” and the intensity of sexual drives, and so on, from reading romance novels, for instance, than from slogging through “scientific” studies of the sort Levy cites on nearly every page of Love + Sex With Robots.

But leaving that aside — and also leaving aside the most entertaining portions of Levy’s book, such as the one where he goes through the history of vibrators and other sex toys — Love + Sex With Robots presents us (inadvertently perhaps) with an odd paradox. On the one hand, Levy argues that we will soon be able to fall in love with robots, and have sex with them, because the experience will essentially be indistinguishable from falling in love with, and having sex with, other human beings. He advocates something like the Turing test for emotions, as well as for cognition: “the robot that gives the appearance, by its behavior, of having emotions should be regarded as having emotions, the corollary of this being that if we want a robot to appear to have emotions, it is sufficient for it to behave as though it does” (p. 120). This, in itself, is unexceptionable. SF has treated the question of androids’ indistinguishability from biological human beings in numerous works, Blade Runner being the most famous but far from the only example. And Levy is not far from SF in his assertions that robots will be able to do everything that we do, only better.

Of course, that still leaves the question of how we get from here to there. Levy tends to elide the difficulty of jumping from what is possible now, to the point where robots can actually pass the Turing test. He doesn’t seem to think that this gap is such a big deal. He blithely asserts, for instance, that programming robots, not only to “imitate human sociability traits,” but also “to go further and create sociability traits of their own” is a task “possibly no more difficult to program than the task of composing Mozart’s 42nd Symphony or painting a canvas that can sell in an art gallery for thousands of dollars — tasks that have already been accomplished by AI researchers” (pp. 166-167). One may well question whether the music-writing program he cites (by David Cope of UC-Santa Cruz) really makes works that have the power and originality of Mozart. But we get this sort of assertion again and again. Levy writes that “I am convinced that by 2025 at the latest there will be artificial-emotion technologies that can not only simulate the full range of human emotions and their appropriate responses but also exhibit nonhuman emotions that are peculiar to robots”; the sole evidence he offers for this assertion is the fact that “research and development in this field is burgeoning” (p. 86).

Levy suggests, as well, that the problem of robots’ intellectual knowledge is a trivial one: “one example of a similarity that will be particularly easy to replicate is a similarity of education, since just about all of the world’s knowledge will be available for incorporation into any robot’s encyclopedic memory. If a robot discovers through conversation that its human possesses knowledge on a given subject at a given level, its own knowledge of that subject can be adjusted accordingly — it can download more knowledge if necessary, or it can deliberately ‘forget’ certain areas or levels of knowledge in order that its human will not feel intimidated by talking to a veritable brain box” (pp. 144-145). Forgive me for not sharing Levy’s faith that such a thing will be “particularly easy” to do; judging from the very limited success of programs like Cyc, we are nowhere near being able to do this.

If I find Levy’s claims extremely dubious, it is not because I think that human intelligence (or mentality) somehow inherently defies replication. But such replication is an extremely difficult problem, one that we are nowhere near resolving. It certainly isn’t just a trivial engineering issue, or a mere quantitative matter of building larger memory stores, and more powerful and more capacious computer chips, the way that Levy (and other enthusiasts, such as Ray Kurzweil) almost always tends to assume. AI research, and the research in related fields like “emotional computing,” cannot progress without some fundamental new insights or paradigm shifts. Such work isn’t anywhere near the level of sophistication that Levy and other boosters seem to think it is. Levy wildly overestimates the successes of recent research, because he underestimates what “human nature” actually entails. His models of human cognition, emotion, and behavior are unbelievably simplistic, as they rely upon the inanely reductive “scientific” studies that I mentioned earlier.

Much science fiction, of course, has simply abstracted from these difficulties, in order to think through the consequences of robots and AIs actually being able to pass the Turing test. But this is where the paradox of Levy’s argument really kicks in. For, at the same time that he asserts that robots will be able to pass the Turing test, he still continues to treat them as programmable entities that can be bent entirely to our will. There are numerous rhapsodic passages to the effect that, for instance, “another important difference [between human beings and robots] is that robots will be programmable never to fall out of love with their human” (p. 132). Or that a robot who is “better in the bedroom” than one’s “husband/wife/lover” will be “readily available for purchase for the equivalent of a hundred dollars or so” (p. 306). Or that, in the future, you “will be able to go into the robot shop and choose from a range of personalities, just as you will be able to choose from a range of heights, looks, and other physical characteristics” (pp. 136-137). Or, again, that a robot’s personality “can be adjusted to conform to whatever personality types its human finds appealing… The purchase form will ask questions about dimensions and basic physical features, such as height, weight, color of eyes and hair, whether muscular or not…” and so on and so forth (p. 145 — though interestingly, skin color is never mentioned as a variable, even though eye and hair color are mentioned a number of times). In short, Levy asserts that robots will be loved and used as sex partners not only because they are just as ‘real’ emotionally and intellectually as human beings, but also because they have no independence, and can be made to entirely conform to our fantasies. They will sell, not only because they are autonomous agents, but also because they are perfect commodities. They will be just like Tamagotchis, only more “realistic”; and just like vibrators, only better.

Actually, the weirdness goes even further than this. The imputation of agency to robots, while at the same time they remain commodities serving our own desires, leads to some very strange contortions. The book is filled with suggestions along these lines: “A robot who wants to engender feelings of love from its human might try all sorts of different strategies in an attempt to achieve this goal, such as suggesting a visit to the ballet, cooking the human’s favorite food, or making flattering comments about the human’s new haircut, then measuring the effect of each strategy by conducting an fMRI scan of the human’s brain. When the scan shows a higher measure of love from the human, the robot would know that it had hit upon a successful strategy. When the scan corresponds to a low level of love, the robot would change strategies” (pp. 36-37). I must say I find this utterly remarkable as a science-fiction scenario. For it suggests that the robot has been programmed to put its human owner under surveillance, the better to manipulate the owner’s emotions. The human being has purchased the robot, precisely in order that the robot may seduce the human being into doing whatever it (the robot) desires (leaving open the question of what it desires, and how these desires have been programmed into it in the first place). Such a scenario goes beyond anything that Philip K. Dick (or, for that matter, Michel Foucault) ever imagined; it extrapolates from today’s feeble experiments in neuromarketing, to a future in which such manipulation is not only something that we are subjected to, but something that we willingly do to ourselves.

So, the paradox of Levy’s account is that 1) he insists on the indistinguishability of human beings and (suitably technologically advanced) robots, while 2) at the same time he praises robots on the grounds that they are infinitely programmable, that they can be guaranteed never to have desires that differ from what their owners want, and that “you don’t have to buy [a robot] endless meals or drinks, take it to the movies or on vacation to romantic but expensive destinations. It will expect nothing from you, no long-term (or even short-term) emotional returns, unless you have chosen it to be programmed to do so” (p. 211).

How do we explain this curious doubleness? How can robots be both rational subjects, and infinitely manipulable objects? How can they both possess an intelligence and sensibility at least equal to that of human beings, and retain the status of commodities? Or, as Levy himself somewhat naively puts it, “today, most of us disapprove of cultures where a man can buy a bride or otherwise acquire one without taking into account her wishes. Will our children and their children similarly disapprove of marrying a robot purchased at the local store or over the Internet? Or will the fact that the robot can be set to fall in virtual love with its owner make this practice universally acceptable?” (p. 305).

I think the answer is that this doubleness is not unique to robots; it is something that applies to human beings as well, in the hypercommodified consumer society that we live in. (By “we”, I mean the privileged portion of humankind, those of us who can afford to buy computers today, and will be able to afford to buy sexbots tomorrow — but this “we” really is, in a sense, universal, since it is the model that all human beings are supposed to aspire to). We ourselves are as much commodities as we are sovereign subjects; we ourselves are (or will be) infinitely programmable (through genetic and neurobiological technologies to come), not in spite of, but precisely because of, our status as “rational utility maximizers” entering the “marketplace.” This is already implicit in the “scientific” studies about “human nature” that Levy so frequently cites. The very idea that we can name, in an enumerated list, the particular qualities that we want in a robot lover, depends upon the fact that we already conceive of ourselves as being defined by such a list of enumerable qualities. The economists’ idea that we bring a series of hierarchically organized desires into the marketplace similarly presupposes such a quantifiable bundle of discrete items.

Or, to quote Levy again: “Some would argue that robot emotions cannot be ‘real’ because they have been designed and programmed into the robots. But is this very different from how emotions work in people? We have hormones, we have neurons, and we are ‘wired’ in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable” (p. 122). This is not an argument about actual biological causation, but precisely a recipe for manipulation and control. The robots Levy imagines are made in our image, precisely because we are already in process of being made over in theirs.

Zeroville

I’ve been reading Steve Erickson for quite some time; he is one of my favorite living American writers. His new novel, his eighth, Zeroville, is one of his best ever — I am inclined to say it’s the best thing he’s written since Arc d’X (1993).

Zeroville is somewhat more linear and straightforward than most of Erickson’s other novels — though that is only a relative statement. It’s also largely focused on the movies, and almost requires a reader who is a movie freak. The novel takes place against the backdrop of Hollywood in the 1970s — the decade of the “New Hollywood,” with its promises of radical auteurism that eventually devolved into merely a new version of business as usual. One important minor character is closely modeled upon John Milius, and directors like Scorsese, De Palma, and Cassavetes, and actors like Robert DeNiro, make cameo appearances throughout the book. Indeed, much of the novel consists of rapt discussions of the movies: the main character is a film obsessive, and even the muggers and prostitutes whom he encounters turn out to be cineastes eager to argue about the relative worth of different movies in Howard Hawks’ oeuvre, or the position of Irving Rapper as an auteur. If you aren’t as enchanted by reading (or overhearing) such discussions as I am, then you probably won’t enjoy Zeroville nearly as much as I do. But if you are old enough to have participated in the cinephilia of the 1970s that Erickson channels here, or if you are now caught up in the contemporary (DVD- and Internet-fueled) second wave of cinephilia, then there’s a lot in Zeroville that will delight you.

The protagonist of Zeroville, Vikar Jerome (né Ike, “not Isaac”), is described at one point (by the Milius character) as “cineautistic.” A refugee from a horrendous fundamentalist Christian upbringing, with a father who is terrifyingly invested in the story of Abraham and Isaac, Vikar has rejected the God who slaughters his own children (not just Isaac, but Jesus too), and instead come to worship Cinema. He watches movies obsessively, promiscuously, and indiscriminately, and he knows them backwards and forwards; though he is never able to say more about how he feels about any given film than “I believe it is a very good movie.” Vikar has been described in several reviews as an analogue of Chance from Being There (played by Peter Sellers in the film version) — and that is not far wrong, at least from the outside. When Vikar tries to interact with other people, he seems unable to ‘read’ them, and they seem unable to make any sense of him (he “vexes” people). His conversation consists mostly of bizarre non sequiturs, and the verbatim repetition of quotes about the movies that he has picked up from others. He knows nothing about the extra-cinematic world: he shows up in Hollywood in 1969 barely aware that there’s a war going on in Vietnam, and at one point in 1981 or so he watches Don Siegel’s The Killers, and is then startled to see the same actor who had slapped around Angie Dickinson in that movie appearing in another one on TV: only to discover that this actor’s latter role is the extra-cinematic one of President of the United States.

Vikar presents a bizarre and menacing appearance — his head is shaved bald and adorned with a tattoo portraying Elizabeth Taylor and Montgomery Clift in George Stevens’ A Place in the Sun: “the most beautiful woman and the most beautiful man in the world.” He is also sexually obsessed (though he rejects any sort of consummation other than blow jobs from women whom he imagines to be Elizabeth Taylor or one of his other idols). And he is prone to sudden outbursts of violence: already, on the second page of the novel, he viciously attacks a hippie who misidentifies the figures on his tattoo as James Dean and Natalie Wood in Rebel Without A Cause.

Vikar is almost literally a blank slate, or a medium (in the spiritualistic sense) for the cinematic medium (in the McLuhan sense). The movies are inscribed, not just upon his skull, but upon his soul. And he does little more than let the movies pass through him. Watching the movies gives him strange dreams, and by the end of the novel his dreams have contaminated the movies themselves, so that a single frame from his most obsessive dream (which seems to present Abraham’s sacrifice of Isaac in inverted form) ends up physically incorporated into every movie. As Montgomery Clift, speaking from beyond the grave, suggests to Vikar towards the end of the novel, “maybe we’re not dreaming [Cinema]. Maybe it’s dreaming us.”

In this way, Vikar is ultimately quite different from Jerzy Kosinski’s, Hal Ashby’s, and Peter Sellers’ Chance, precisely in the way that the movies are radically different from television. Chance’s utterances could be described as random firings of the video scanning gun. Their charm and power reside in their superficiality and transitoriness. But Vikar’s utterances (and dreams) have a hidden logic, which is rooted in the depths of cinematic illusion. Vikar lives in a pre-VCR (and pre-personal computer) age, and he experiences the movies on the one hand in the form of larger-than-life figures projected, in the dark, on a giant screen, and on the other hand as reels of 35mm celluloid, which he obsessively collects even though he doesn’t own a projector, but only a Moviola allowing him to inspect (and edit, cut and splice) individual frames.

That is to say, Vikar’s “cineautism” is rooted both in the unconscious depths implied by the overwhelmingness of cinematic projection, on the one hand, and by the materiality of celluloid, handled in physical, analog form, on the other. He becomes a film editor whose motto is “fuck continuity,” and whose guiding principle is that all cinematic moments are implicated in one another, so that everything is already (even before editing) connected to everything else both in space and in time. Cinema already exists, as a kind of Platonic form, before it is instantiated in one or another film, or moment of film. It would seem, even, that only cinema has such a Platonic form, or that Plato’s entire theory of Forms was nothing but an anticipation of Cinema.

This philosophy, implicit rather than directly expressed, allows Vikar, or impels him, to edit film in such a way that, irrespective of the intentions of the director, he is able to “set free from within the false film the true film.” His approach is entirely intuitive (or unconscious), but also so innovative that he wins a special award at Cannes for “the creation of a revelatory new cinematic rhetoric,” and gets nominated as well for an Academy Award (though, of course, he doesn’t win the latter). But such recognition means nothing at all to Vikar, who is helpless to do anything but continue to pursue Cinema’s hidden logic, no matter where it takes him.

Vikar seems affectless — except perhaps in his sudden moments of violent rage — to everyone who encounters him; and to the extent that he is charismatic, it is precisely on account of this affectlessness, combined with his total devotion to Cinema. But of course, this surface (or conscious) lack of emotion is only the index of the way in which, on a deeper level, Vikar is traversed and utterly embroiled by the impersonal, or prepersonal Affects of Cinema itself. This affect would seem to take the form, finally, beneath all the moments of love and betrayal and absence and violence and despair, of a sacrificial scene of inverted Oedipalization: inverted, because it is not about the son’s fantasmatic hatred of the father, but rather the father’s (including God the Father) all-too-real hatred of the son (or the daughter). This is not so much to psychoanalyze film, as to suggest that psychoanalysis itself (just like Plato) is merely a derivative of a more ontologically fundamental Cinema.

Zeroville is thus traversed, like all of Erickson’s novels, with a certain melancholy, or sense of loss: a feeling that has directly political connotations in some of Erickson’s earlier novels, but that here is associated, rather, with the death of cinema itself, in a post-cinematic age thirty years further on than the time in which Vikar lives and in which the novel is set. I don’t mean to imply that this makes Erickson a luddite, or a paradoxical conservative. His novel’s investment in Cinema is entirely clear-eyed, and free of what Marshall McLuhan disparaged as “rear-view-mirrorism,” precisely in its identification of the movies with a (both primordial and historical) Past. Erickson evokes a Pastness which is that of the movies themselves, as well as of the passage from the movies to other, newer media forms. The movies are both past and eternal; or, they are eternal precisely in their pastness.

Or, as the black robber/mugger cineaste tells Vikar early on in the novel, and as Vikar then subsequently repeats to the assembled news reporters when he is being interviewed at Cannes after his award: “The Searchers is one wicked bad-ass movie whenever my man the Duke is on screen, evil white racist honky pigfucker though he may be.” Which sums up both the archaic limitations, the backwardness of the movies, relegated as they must be to the scrapheap of history, and their eternal truth nevertheless.

Comeuppance

William Flesch‘s Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction is, I think (by which I mean, to the best of my knowledge) the best work of Darwinian literary criticism since the writings of Morse Peckham. That may sound like faint praise, considering how lame most recent lit crit based on “evolutionary psychology” has been; but Comeuppance is a brilliant and startlingly original book, making connections that have heretofore passed unnoticed, but that seem almost self-evident once Flesch has pointed them out.

Comeuppance combines attention to cutting-edge biological theory with a set of aesthetic concerns that are, in a certain sense, so “old-fashioned” that most contemporary theorists and critics have completely forgotten even to think about them. Flesch is concerned with the question of vicarious experience: that is to say, he wants to know why we have so much interest in, and emotional attachment to, fictional characters, narratives, and worlds. He tries to account for why we are so inclined to the “suspension of disbelief” when we encounter a fiction; why we root for the good guys and hiss at the bad guys in novels and movies; why we find it so satisfying when Sherlock Holmes solves a case, or when Spiderman defeats the Green Goblin, or when Hamlet finally avenges his father’s death, or when we imagine a torrid romance between Captain Kirk and Mr. Spock.

When such pleasures are thought about at all, they are usually attributed either to our delight in mimesis, or imitation (which was Aristotle’s theory) or to our identification with the protagonist of the fiction (which was Freud’s). But Flesch suggests that both these accounts are wrong, or at least inadequate. Far from identifying with Sherlock Holmes, Spiderman, Hamlet, or Captain Kirk, we admire and love them from a spectatorial distance, and with an intense awareness of that distance. And while our engagement with narratives requires a certain degree of “verisimilitude,” neither resemblance nor plausibility is enough in itself to generate the sort of engagement and attachment with which we encounter fictions.

Flesch proposes a very different explanation for this engagement and attachment from either the Aristotelian or the Freudian one. He bases it on recent developments in evolutionary biology, and particularly 1) the use of game-theoretical simulations to explain the development of intraspecies (and even inter-species) cooperation, ever since Robert Axelrod, in the 1980s, first used the “Prisoner’s Dilemma” game to model how competition could give rise to altruism; and 2) the studies by Amotz and Avishag Zahavi of “costly signaling” and the “handicap principle.” I will not try to reproduce here the details of these studies, nor the elegant logic that Flesch uses in order to put them together, and to bring them to bear on the problematics of fiction; I only wish to summarize them briefly, in order to move on to the consequences of Flesch’s arguments.

In brief, Flesch maintains: that evolution can lead, and evidently has led, to the development (in human beings, and evidently other organisms as well) of “true altruism,” or the impulse to help others, or the group in general, even at considerable cost to oneself; that this altruism requires that we continually monitor one another for signs of selfishness or cheating (because otherwise, selfish cheaters would always prosper at the expense of those who were honestly altruistic); that, as a result of this monitoring, we get vicarious pleasure from the punishment of cheaters and (to a lesser extent) from the reward of those who enforce this by actively ferreting out and punishing the cheaters; that altruism cannot just be enforced by the punishment of individual cheaters, but needs to be signaled, and made evident to everybody (including the cheater) as well; that — given the way that everyone is continually monitoring everyone else — the best way to make evident that one is indeed an altruist rather than a cheater is to engage in “costly signaling,” or altruistic behavior that is sufficiently costly (draining of wealth or energy, involving risks) to the one engaging in it that it has to be authentic rather than a sham; and that our constant monitoring and reading of these signals, our constant emotional reaction to vicarious experience, is what gives us the predisposition to be absorbed in, or at least emotionally affected by, fictions, so that we respond to fictional characters in narratives in much the same way that we do to real people whom we do not necessarily know, but continually observe and monitor. (There’s not that great a difference, really, between my reaction to Captain Kirk, and my reaction to Bill Clinton).
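For readers who want a concrete sense of the game-theoretic machinery behind this summary, here is a minimal, illustrative sketch in Python (entirely my own toy example, not Flesch's or Axelrod's actual models) of an iterated Prisoner's Dilemma, showing how a reciprocating strategy sustains cooperation with another reciprocator, while an unconditional defector gains only a brief advantage:

    # Toy iterated Prisoner's Dilemma (hypothetical example, standard payoff values).
    # "C" = cooperate, "D" = defect.

    PAYOFFS = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=20):
        history_a, history_b = [], []   # each entry: (own move, opponent's move)
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(history_a), strategy_b(history_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            history_a.append((move_a, move_b))
            history_b.append((move_b, move_a))
        return score_a, score_b

    if __name__ == "__main__":
        print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))      # sustained mutual cooperation
        print("TFT vs ALL-D:", play(tit_for_tat, always_defect))  # defector exploits only the first round

Strategies that monitor and answer the other player's past behavior, a crude ancestor of the monitoring and punishing that Flesch describes, are what allow cooperation to survive in simulations of this kind.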

I haven’t done justice to the full subtlety and range of Flesch’s argument; nor have I conveyed an adequate sense of how plausible and convincing it is, in the detail with which he works it through. But the argument is as careful and nuanced as it is ambitious. It’s true that Flesch places his argument under the mantle of “evolutionary psychology,” something about which I remain deeply dubious. The proponents of evolutionary psychology tend to make global or universalist claims which radically underestimate the extent of human diversity and of historical and cultural differences. I am willing to accept, until shown otherwise, that in all human cultures people sing songs, and experience the physiological reactions that we know as “fear”, and have certain rituals of hospitality. But even if these are biological givens, “human nature” is radically underdetermined by them. For instance, there are enormous differences among cultures and histories as to which vocal performances count as songs, and why and when we sing, and what it means to sing, and when it is appropriate to sing and when not, and what emotions are aroused by singing, and who knows the songs and is expected to sing them, and what technologies are associated with singing, and so on almost ad infinitum.

But even as Flesch adopts the mantle of evolutionary psychology, and makes some general claims about “universal” human attributes, he is careful to avoid — and indeed, he severely criticizes — the reductiveness that often comes with such claims. For his evolutionist arguments have nothing to do with the usual twaddle about how women are supposedly genetically hardwired to prefer older, high-status men, and so on and so forth. Rather, Flesch’s arguments are directed mostly at showing how altruism and cooperation could have emerged despite the Hobbesian nature of conflict among Dawkinsian “selfish genes”; and, more broadly, at demonstrating how “biology has brought humans to a place where genetic essence does not necessarily ‘precede human existence’ ” (219 — Flesch says this after wryly noting that he is probably the first to have cited Sartre and evolutionary theorists together). Since altruism and cooperation — and for that matter cultural variability — evidently do exist among human beings, the potential for these things must have itself arisen in the course of evolution. So Flesch’s real argument is with those sociobiologists and evolutionary psychologists — like Edward O. Wilson, and especially Steven Pinker, to cite the most famous names — who argue, basically, that all these things are a sham, and that underneath appearances we “really” are still only engaged in a Hobbesian war of all against all, and a situation of Malthusian triage.

That said, the real importance of the evolutionary categories that Flesch bases his argument upon — especially game-theoretically-defined altruism and Zahavian costly signaling — resides less in how adequate an explanation they provide of human origins, than in how useful they prove to be to help us think about our own investments in narrative, and the particular (Western) tradition of narrative that is most familiar to us. (Evolutionary categories are no more and no less “universal” than, say, psychoanalytic categories; and both sorts of formulations work in many contexts, in the sense that they provide insights, and allow us to generate further insights — whether or not they are actually valid “universally”). I have also long felt that one of the problems with evolutionary accounts of complex phenomena like human culture is that they commit the elementary logical fallacy of thinking that how a certain feature or trait originated historically determines the use and meaning of that feature today. But as Gould and Lewontin’s arguments about “spandrels” pointed out long ago, this need not be the case, and probably most often is not — many traits are non-adaptive byproducts of adaptations that occurred for entirely different reasons; and even directly adaptive traits are always being hijacked or “exapted” for different uses than those on account of which they originally evolved.

Flesch, unlike most of those who have tried to apply evolutionary arguments to human cultural contexts, takes full account of these complications and multiplications. The real justification of his use of ideas about costly signaling and “strong reciprocity” (or altruism that extends to the monitoring of the altruism of others) is that, in understanding the sorts of narratives that Flesch is interested in, they prove to be very useful indeed. The concepts that Flesch draws from evolutionary theory both elucidate, and are themselves in turn elucidated by, a wide range of familiar narratives, from Shakespeare to Hitchcock, and of characters in narratives, from Achilles to Superman. Even as Flesch provides a series of dazzling close readings, he uses these readings pretty much in the same way that he uses citations of biological research, so that Prince Hal and King Lear stand alongside peacocks and cichlids as exemplars of things like costly signaling and altruistic extravagance, and as subjects of our concern and fascination.

Flesch’s argument thus reflexively provides an account of both why the content of narrative moves us as it does, and of why the narrative form, as such, should be especially suited as a focus for meaningful emotional reactions. (I should note that, although Harold Bloom, in a blurb on the book’s back cover, praises Flesch for “giving a surprisingly fresh account of the workings of high literature,” and although the great majority of Flesch’s own readings and citations do in fact come from “high literature,” one of the great virtues of Flesch’s argument is that it applies equally well to “low” narrative forms (and that he also does cite these forms). The things that interest us in reading Shakespeare’s plays, or novels by Henry James and James Joyce and Marcel Proust, are pretty much the same things that interest us in reading stories about Superman, or Conan the Barbarian).

Beyond this, Flesch’s argument is noteworthy, and important, because of how it uses the tools of (usually reductive) science for determinedly nonreductive ends. Usually, the language of game-theory payoffs and cost-benefit calculations drives me crazy, because it is a hyperbolic example of what the Frankfurt School critics denounced as “instrumental reason.” The “rational choice” theory so prevalent these days among economists and political scientists idiotically assumes, against nearly all concrete experience, that human beings (and other organisms as well) make cognitive, calculated decisions (even if not consciously) on the basis of maximizing their own utility. More recently, some social scientists have sought to incorporate into their mathematical models the empirical evidence that people in fact respond emotionally, and non-rationally, to many situations. But most of this research has remained reductive, in that the calculus of probabilities and payoffs has remained at the center — the assumptions are still essentially cognitive, and calculative, even if emotions are admitted as factors that skew the calculations. Flesch is really the only author I have read who pushes these models to the point where they flip around, so that cognition is effectively subordinated to affect, rather than the other way around (“Reason is, and ought to be, only the slave of the passions,” as Hume — one of Flesch’s favorite theoretical sources — once wrote).

This is largely because of the way that Flesch defines his central concept of “altruism.” Drawing both on Hume and Adam Smith (his “moral philosophy” rather than The Wealth of Nations), as well as on contemporary biological game theorists and on the Zahavis, Flesch defines “altruism,” basically, as any other-directed action that is not driven by “maximizing one’s own utility,” and that indeed is pursued in spite of the fact that it decreases one’s own utility. This means that things like vengefulness and vindictiveness, not to mention Achilles valuing glory more than his own life, or Bill Gates dispensing his fortune in order that he may congratulate himself for being a great philanthropist, are also examples of altruism, in that they are other-directed even at a cost to oneself, and therefore they absolutely contradict the “utility-maximization” assumptions of orthodox economics and “rational choice” theory. As Flesch puts it epigrammatically, “the satisfactions of altruism,” like Gates’ self-congratulation, “don’t undercut the altruism itself. Satisfaction in a losing act or disposition to act is itself a sign of altruism… Pleasure in altruism doesn’t mean that you’re not an altruist. It almost certainly means that you are” (35).

This is useful for the way that it undercuts both the model of Homo economicus that is the default understanding of humankind in the current neoliberal consensus, and the cynicism that sneers at the very possibility of altruism, generosity, cooperation, and collectivity on the grounds that these are “really” just expressions of egotism. Of course, egotism is involved; how could it be otherwise? But as Flesch insists, this doesn’t prevent the altruism, or concern for others at one’s own expense, from being genuine.

Altruism is by no means an unconditional good, of course; in Flesch’s account, it allows for, and can lead to, not just an insane vengefulness, but also the kind of surveillance of people by one another that enforces social conformity and involves the persecution of anybody who acts innovatively, or merely differently. Nonetheless, the important point remains that we all act and feel in a social matrix, rather than as atomized individuals, and that people’s actions are not merely determined by the considerations of personal well-being (or at most, those of one’s genetic kin), but by a much broader range of social concerns and relationships and emotions (including vicarious emotional relationships with, or feelings about, strangers).

For this reason, even though Flesch states in his introduction that his aim is “to give an account… [of] why [narrative] should be as strange, complex and intellectual — as cognitive — as it is” (6), his arguments really contribute more to an affective approach to narrative than to a cognitive one. The tricky evolutionary arguments that Flesch works his way through are used in order to show how evolutionary processes — which in a certain sense, because they are based upon a competitive weeding-out of alternate possibilities under conditions of scarcity and stress, are necessarily “rational,” even though no actual rationality is involved in their workings — can nonetheless produce an outcome that is not itself “rational,” but instead involves extravagance, waste and “expenditure” (in the sense Bataille gives to this word), and that necessitates cooperation of some sort, rather than a continual war of all against all. And once the affects that drive us in these non-rational ways have evolved, they continue to have a life of their own (they may well be reinforced even when they are counter-productive; but they also may, in evolutionary terms, aid the survival of groups that adopt them or are driven by them, in contrast to groups that don’t: here Flesch draws upon the recent attempt by Elliott Sober and David Sloan Wilson to rehabilitate the notion of “group selection”).

This brings Flesch’s arguments in line with, and makes them an important contribution to, any attempt to think about social relations (and aesthetics as well) in terms that owe more to Marcel Mauss (with his complex notions of how gift-giving involves both gain and loss, both economic calculation and an openness to loss, both power/prestige and generosity) than to the currently hegemonic assumptions of neoclassical and neoliberal economics. Rather than Derridean musings on how an absolute gift is “impossible,” because there is always some sort of return, we get something more in line with Mauss’s (and Bataille’s) sense of how expenditure and potlatch, and other forms of gift-giving (including what might be called the “gift” of narrative, though Flesch conceives this much more complexly than Lewis Hyde, for instance, does) involve the intertwining of self-aggrandizing and altruistic motives, and allow a place in practice for the openings and ambivalences that both a rational-economic calculus, and a deconstructionist negatively absolutized logic would forbid us.

I’ll conclude with one small additional comment, which is that Comeuppance is not really a book about “narrative theory,” even though it sometimes presents itself as such. Though it tries to delineate the “conditions of possibility” for us to enjoy, crave, and develop an emotional investment in fictional narratives, it is (quite properly) much more concerned with these affects and investments than it is with the structure of narrative per se. And this turns out to involve our relation to characters much more than our relation to narrative as such (even when Flesch considers the latter, he does it in the framework of the relation between the audience and the fiction’s narrator, including both the fictive narrator and the author-as-narrator). This seems to me to be in line both with Orson Welles’ insistence on the enigma of character as the center of our interest in film and other arts, and with Warren Ellis’ insistence that what he is really concerned with in the fictions he writes is the characters and the ideas; the plot is just a contrivance to convey those characters and ideas.

Rancière (2)

So… democracy.

Rancière doesn’t see democracy as a form of government, or form of State. It is something both more and less than that. States are all more or less despotic, including supposedly “democratic” ones. And non-State forms of authority tend to be based on other forms of unequal power relationships, with authority grounded in age (patriarchy), birth (aristocracy), violence and military prowess (I’m not sure of the name of this), or money and wealth (plutocracy). Our current neoliberal society combines the rule of Capital with the rule of bureaucratic States with their own levels of authority based upon expertise and guardianship of the “rights” of property or Capital. Even though we have a legislature and executive that are chosen by majority, or at least plurality, vote, our society is not very democratic by Rancière’s standards. The role of money in the electoral process, the fact that there are career politicians, the management of increasing aspects of our lives by non-political “experts” (e.g. the Federal Reserve), all militate against what Rancière considers to be even the minimal requirements for democracy.

To a great extent, Rancière uses the idea of “democracy” adjectivally (a society may be more or less democratic) rather than as a noun. For democracy is a tendency, a process, a collective action, rather than a state of affairs, much less an organized State. Democracy is an event; it happens when, for instance, people militate to change the distribution of what is public and what is private. In the US, the civil rights movement and (more recently) the alterna-globalization protests would be examples of democracy in action. Rancière rightly stresses the activity, which always needs to be renewed, rather than the result. This might be thought of, in Deleuzian terms, as a revolutionary-becoming, rather than an established “revolutionary” State, which is nearly always a disappointment (if not something worse). While I am inclined to agree with Zizek that State power often may need to be actively used in order, for instance, to break the power of Capital, I still find Zizek’s apparent worship of State forms and Party dictatorship reprehensible (it would seem that Zizek has never found an ostensibly left-wing dictator he doesn’t like — except for Tito and Milosevic). Collective processes should not be reduced to State organization, though they may include it. Chavismo is more important than Chavez (whereas Zizek seems to admire Chavez because of, rather than in spite of, his tendency to do things that allow his opponents to apply the cliche of “banana-republic dictator” to him). It is admirable that Chavez is using a certain amount of State power, as well as extra-State collective action, in order to break the power of Capital; but to identify a revolutionary process with its leader and authority figure is worse than insane.

But I digress. To value the process of revolutionary-becoming, as Deleuze does, and as Rancière does in a different way, rather than the results of such action, is not to give up on lasting change. It is rather to say that change continues to need to happen, as against the faux-utopia of a final resting place, an actually-achieved utopia (which always turns out to be something more like “actually-existing socialism,” as they used to say, precisely because it congeals when the process comes to a stop).

I need to be cautious here about assimilating Rancière too much to Deleuze and Guattari. I am only trying to say that Rancière’s notion of democracy gives substance to something that often sounds too glib and vague when Deleuze and Guattari say it. For Rancière, “democracy” means that no one person or group of people is intrinsically suited to rule, or more suited to rule than anyone else. Democracy means radical contingency, because there is no foundation for the social order. Democracy means absolute egalitarianism; there is no differential qualification that can hierarchize people, or divide rulers from ruled, the worthy from the unworthy. In a democratic situation, anybody is as worthy of respect as anybody else. This means that, for Rancière, the purest form of democracy would be selection by lot (with frequent rotation and replacement), rather than “representative” elections. Selection by chance is grounded in the idea that anyone can exercise a power-function, regardless of “qualifications” or “merit” (let alone the desire to rule or control; if anything, those who desire to have administrative or legislative power are the ones least worthy to have it — to the extent that we can make such a distinction at all).

It is unclear to me whether Rancière actually believes that a total democracy could exist in practice — as opposed to being an ideal to strive for, a kind of Kantian ethical imperative, something we must strive for to the utmost possible, regardless of the degree to which we succeed. (In my previous post, I was privileging both the political and the aesthetic at the expense of the ethical. Here I would add that Kantian morality is not ethics, but perhaps can be seen as the limit of ethics, the point at which it comes closest to politics).

But here’s the point. For Rancière, egalitarianism is not a “fact” (though we can and should continually strive to “verify” it), but an axiom and an imperative. That is to say, it has nothing to do with empirical questions of how much particular people are similar to, or different from, one another (in terms of qualities like manual dexterity or mathematical ability, or for that matter “looks” and “beauty”). Egalitarianism doesn’t deny the fact that any professional tennis player, even a low-ranked one, could effortlessly beat me at tennis, or that Rancière’s philosophical writings are far more profound than mine, or that I couldn’t pass a sophomore college math class. And egalitarianism doesn’t mean that somehow we all ought to be “the same,” whatever that might entail, genetically or experientially. What egalitarianism means, for Rancière, is that we are all intelligent speaking beings, able to communicate with one another. Our very social interaction means that we are on the same level in a very fundamental sense. The person who follows orders is equal to the person who gives orders, in the precise sense that the one who obeys is able to understand the one who commands. In this sense, Rancière says, equality is always already presupposed in any social relation of inequality. You couldn’t have hierarchies and power relations without this more fundamental, axiomatic equality lying beneath them.

This seems to me to be (though I presume Rancière wouldn’t accept these terms) a sort of Kantian radicalization of Foucault’s claim that power is largely incitative rather than repressive, that it always relies, in almost the last instance (i.e. up to the point of death) upon some sort of consent or acceptance on the part of the one being dominated. Without these fundamental relations of equality, it would not be possible for there to be elites, masters, bosses, people who tell other people what to do, and who have the backing or the authority to do this. So the question of equality is (in Kantian terms) a question of a communication which is not based upon the quantitative rankings that are imposed by the adoption of a “universal equivalent” (money as the commodity against which all other commodities are exchanged) — therefore this, too, relates to the Kantian problematic that I discussed in my previous posting on Rancière.

Of course, in our personal lives, we never treat everyone else with total equality. I love some people, and not others. I am always haunted by Jean Genet’s beautiful text on Rembrandt, where he mourns the way that Rembrandt’s revelation of the common measure, or equality, of everybody means, in a certain register, the death of his desire, the end of lusting after, and loving, and privileging, one individual in particular. But the power of Genet’s essay resides in the fact that, in the ultimate state of things, this universal equality cannot be denied any more than the singularity of desire can be. And that is why, or how, I think that the lesson Genet draws from Rembrandt is close to the lesson on equality that Rancière draws from, among others, the 19th-century French pedagogue Jacotot (the subject of Rancière’s book The Ignorant Schoolmaster).

Democracy, or egalitarianism, is not a question of singular desire; but it is very much a question of how we can, and should, live together socially, given that we are deeply social animals. Which is why I see it as a kind of imperative, and as something that we always need to recall ourselves to, amidst the atomization — and deprivation for many — enforced by the neoliberal State and the savage “law” of the “market.” To that extent, I think that Rancière is invaluable.

There is something I miss in Rancière, however, and that is a sense of political economy, as opposed to just politics. This absence may have something to do with Rancière’s rejection of his Althusserian Marxist past. He is certainly aware of the plutocratic aspects of today’s neoliberal network society. He doesn’t make the mistake of focusing all his ire on the State, while ignoring the pseudo-spontaneity of the Market and its financial instruments. But he never considers, in the course of his account of democracy, the way in which economic organization, as well as political organization, needs to be addressed. Here, again, is a place where I think that Marx remains necessary (and also, as I said in the previous post, Mauss — as expounded, for example, by Keith Hart). Exploitation cannot be reduced to domination, and the power of money cannot be reduced to the coercive power of the State or of other hierarchies. Aesthetics needs to be coupled with political economy, and not just with politics. So I still find a dimension lacking in Rancière — but he helps, as few contemporary thinkers do, in starting to get us there.

Rancière (1)

I’ve been reading Jacques Rancière these last few weeks, trying to get a grip on what he’s about. I have read four short books of his, so far: The Politics of Aesthetics, The Future of the Image, The Ignorant Schoolmaster, and The Hatred of Democracy. (All of these have been translated, though some of them I read in French, because I happened to have the French editions at hand). There was also a lengthy interview with Rancière sometime this past year in Artforum, which I finally got around to. I haven’t really sorted it all out yet, but I’m making these preliminary comments in order to get a start at it.

I first became interested in Rancière because of the way that he links politics and aesthetics. This is something that, from a different angle, I have been quite interested in. My starting premise is that the current academic (left academic?) infatuation with “ethics” is severely misplaced. I’m inclined to say — though I will not endeavor to back up this statement here — that the category of the ethical (whether understood in Levinasian/Derridean terms, or in ones derived from Spinoza and a Deleuze-inflected Nietzsche) is worse than useless: it is actively obfuscatory when it comes to thinking about actual instances of suffering, exploitation, and domination in the world today. At best, ethical thought leads to the impotent wringing of hands and to empty sympathizing (in the Derridean version), or to optimistic fantasizing (in the Spinoza/Negri version). At worst, it leads to accepting the “tragedy” of the neoliberal world order as the ineluctable Way Things Are.

As I said, I will not try to defend this argument here. I want rather to suggest an alternative: which comes down to evacuating the space of ethics, and replacing it with politics and political economy on the one hand, and aesthetics on the other. Every ethical dilemma needs to be displaced into a politico-economic problematic on the one hand, and into an aesthetic situation on the other. As Mallarmé wrote, some 130 years ago: “everything comes down to Aesthetics and Political Economy” (Tout se résume dans l’Esthétique et l’Économie politique). We need to reverse the direction of Kierkegaard’s Either/Or, and move from the ethical to the aesthetic. This involves, on the one hand, seeing the situations of exploitation and domination that lie behind every ethical dilemma or tragic situation; and on the other hand, disengaging the ways that, in our neoliberal network society (society of the post-spectacle, of the simulacrum, of the proliferation of electronic media and their saturation of the real), the distribution of percepts, affects, and concepts (to use Deleuze and Guattari’s schema) can potentially be altered.

It can be noted that the program I am outlining relies very strongly on Deleuze and Guattari, both for their analysis of Capital as Body without Organs and for their unrepentant aestheticism, while at the same time distancing itself from certain aspects of Deleuze’s — with and without Guattari — Spinozianism and Nietzscheanism. This is the point at which I vastly prefer Whitehead to Spinoza and/or Nietzsche. Though Whitehead never polemicizes about it, his subordination of ethics to aesthetics (but in an entirely un-Nietzschean way, without any of that tiresome pontification about blond beasts and breeding a master race and so on and so forth) is precisely on track with what I am trying to work out. Of course, Whitehead has nothing worthwhile to say about political economy; but in that stalled chapter I hope to get back to shortly, I am trying to work out the ways in which Whitehead’s notion of “God” is homologous to Deleuze and Guattari’s formulations about the Body without Organs (I am referring to the analysis of BwO-logic as capital-logic in Anti-Oedipus, rather than to the far less interesting “make yourself a Body without Organs” stuff in A Thousand Plateaus).

Anyway: this is where I encounter Rancière’s thesis on the “distribution of the sensible.” Rancière argues for a direct connection between politics and aesthetics (one that implicitly leaves out ethics) like this. Immediate aesthetic practices (aesthetics in the sense of Art) both establish and contest the ways in which, and the structures according to which, a given society distributes the “conditions of possibility” for what can (and what cannot) be sensed, felt, and spoken about (aesthetics in the sense of Kant’s “Transcendental Aesthetic,” which deals with time and space as forms of intuition — Rancière, like Foucault, in effect offers us a historicized version of the Kantian a priori argument — cf. The Politics of Aesthetics 13). Rancière offers, in effect, a more subtle version of McLuhan’s claim that new media produce new “ratios of the senses.” (Rancière dislikes McLuhan’s emphasis on media as determining by themselves, independently of “content”; but he rightly attributes to social arrangements that include media technologies the power to redistribute “sensibility” that McLuhan perhaps too simply attributes to the media alone).

The “distribution of the sensible,” which art addresses, and at once accepts as its condition of being, and disputes, is precisely also the ground and the stake of politics — every “distribution of the sensible” thereby also defines who is entitled to speak, and what sorts of things they are able to say. The “distribution of the sensible” defines the rules and the arena for “normal” political and social decisions. But politics, in the radical sense that Rancière champions, is a movement that does not just operate within these parameters, but actively challenges them, seeks to alter them.

In other words: Politics in the conventional sense — which would include both the US presidential election process, and the ways in which policy decisions are made by institutions like the Supreme Court and the Federal Reserve Bank — operates within the parameters of an already-given, socially sanctioned distribution of the sensible. Rancière dismisses this sort of policy-making as oligarchic even in supposedly “democratic” societies like France and the US — it is the work of the “police” rather than actual political engagement, and it always involves domination and inequality. On the other hand, what Rancière calls actual “politics,” and which he also describes as radical democracy, occurs when these background a priori rules, embodied in an official distribution of the sensible, themselves become contested.

The protestors in Seattle in 1999 were entirely Rancièrean when they chanted, “This is what democracy looks like.” And the city’s response to the protests — effectively suspending civil liberties and imposing martial law for several days — demonstrated how “policing” is the inverse of politics, how the smooth functioning of both government and capitalist commerce depends upon the suppression of democracy, or of politics proper.

I can see two major consequences that follow from this. One is to point out the way that neoliberal governance, with its two institutions of State and Market, is fundamentally and at the core anti-democratic. There is a continuity between allowing decisions to be made by the “market” or by supposedly nonpartisan “experts” (like the Fed) in order to shield these decisions from the supposedly noxious effects of political controversy, and bringing out the cops in force to protect the WTO meeting from popular discontent. (I can’t remember the author or title right now, but I remember reading some reviews of a recent book that argues that, since voters always act “irrationally,” it is better to leave as many social decisions as possible to market mechanisms instead of democratic ones. While we may question how “democratic” the opportunity to choose between Hillary Clinton and Rudy Giuliani actually is, it is clear that leaving issues to the “decisions” of the “market” is far more autocratic. The “market” is supposedly the sum of individuals’ “preferences”; but in reality, it is both the sphere of maximized inequality — since unequal income distribution is very far from one-person-one-vote — and also a vast impersonal force confronting us, against which we have no power whatsoever. Neoliberal ideology regards the “market” as an ineluctable force of nature, like gravity or the speed of light).

The second consequence of Rancière’s argument is to shed a new light on the political dimensions of art. It is no longer a question of looking at a work of art’s “ideology,” nor of asking what the artwork’s actual political “efficacy” might be. Rancière allows us to get away from both of these tired ways of looking at the politics of art. It is rather that art and political action run parallel, because both of them, against the backdrop of a socially given distribution of the sensible, both enact and contest this distribution, work to reconfigure it, and to bring out potentials within it that have not previously been realized. Art is thus already a political intervention — not in what it says, but in its very being, in its formal and aesthetic qualities.

Rancière probably wouldn’t like this assimilation, but I think that his theory of art fits well into the Kantian-Deleuzian genealogy of aesthetics that I have been trying to pursue. Kant’s aesthetics has to do with the singularizing limits and extremities of the mental faculties, with the points at which they break down or enter into discord with one another, or (as Deleuze reads Kant) find a harmony only through this discord. In other words, commonality and universality are precisely problems for “aesthetic judgment”; Kant takes commonality and universality for granted in the First and Second Critiques, but problematizes them in the Third. The problem of aesthetic judgment is the problem of communicating things (sensations) that are absolutely singular, and heterogeneous in relation to one another. In a way, therefore, the problem of aesthetic judgment is the same as the problem of the commodity in Marx (how a universal equivalent can be found for things that in themselves are heterogeneous), and also as the problem of how to find a “common” or commonality or communism that is not just a reductive quantification via translation in terms of the universal equivalent (this is the side of the Marxist problematic that is highlighted in Hardt and Negri’s discussion of “the common”; following it out would seem to involve both thinking Marx and Kant together as Karatani does, and thinking about alternative currencies and trading systems, which Karatani approaches via his interest in LETS networks, and which Keith Hart has done a lot to illuminate, referring to Mauss’ The Gift as well as to the Marxist tradition).

Now, Deleuze radicalizes Kant in this respect by the way that he rewrites, and radicalizes, Kant’s pushing of the mental faculties to their limits. Drawing on Blanchot and Klossowski, among others (and implicitly drawing as well on Foucault’s Kantian reading of Bataille in “A Preface to Transgression,” despite Deleuze’s own evident contempt for Bataille), Deleuze in Difference and Repetition and elsewhere outlines a scenario in which each of the faculties pushes to the point where it breaks down: which means that, going to the maximum extent of “what it can do,” it both uncovers the (transcendental) force or energy that impels it but that it cannot apprehend directly, and ruptures itself, thereby compelling thought to jump discontinuously to another faculty, which (precisely through this discontinuity or discord) picks up the process, pushing itself to its own limit, and so on in turn…

What I am trying to suggest is that, in his examinations of the distribution of the sensible, Rancière in effect historicizes the process that Deleuze describes in more absolute terms — just as Foucault, in his middle period (The Order of Things) historicizes the a priori conditions of thought that Kant describes in absolute terms. (Actually, this is an oversimplification; because Foucault in effect historicizes Kant’s Categories, his “Transcendental Deduction of Concepts”; whereas Deleuze radicalizes, and Rancière then historicizes, what corresponds more to Kant’s “Transcendental Aesthetic.” This is something that comes up in the Kant/Whitehead/Deleuze book, but that I eventually need to work out more carefully here).

There’s a lot more to be said on Rancière’s aesthetics — and particularly on the way that he rewrites the history of art since the Renaissance, and especially of the transition to modernism, in terms of changing distributions of the sensible. But I will defer that for now, as well as the even bigger question of the consequences of Rancière’s understanding of “democracy.” Hopefully I will now be able to start posting more frequently than I have in the last few months. To be continued…

Bad Quote of the Week

From an interview with Satoshi Kanazawa, co-author (with Alan S. Miller) of Why Beautiful People Have More Daughters, a pop intro to “evolutionary psychology.” Kanazawa has just made the claim that “our brain (and the rest of our body) are essentially frozen in time — stuck in the Stone Age,” because “when the environment undergoes rapid change within the space of a generation or two, as it has been for the last couple of millennia,” there is not enough time for evolutionary adaptation to take place.

This reference to the environment undergoing rapid change, without mention that human beings themselves are the agents and initiators of such change, is strange enough. But Kanazawa goes on to say:

“One example of this is that when we watch a scary movie, we get scared, and when we watch porn we get turned on. We cry when someone dies in a movie. Our brain cannot tell the difference between what’s simulated and what’s real, because this distinction didn’t exist in the Stone Age.”

The major claim here is entirely false and ridiculous. Because, quite evidently, our brains can and do tell the difference between what’s simulated and what’s real. Despite the legends — pretty much debunked — of people terrified by the train coming towards them at the Lumière Brothers’ very first movie screening in 1895, nearly everybody alive today can easily and effortlessly tell the difference between something happening on a movie or television screen and something happening in real life. My 2-year-old daughter understands this difference without difficulty.

“Pretend” (as my daughters call it) or simulated experience is perfectly real in its own right, of course; and we get scared from movies just as “authentically” as we get scared when something dangerous or horrible threatens us in “real life.” But not only does this have nothing to do with not being able to tell the difference, it absolutely depends upon being able to tell the difference. Vicariousness is crucial to aesthetic experience (it is the basis for what Kant called “disinterest”). I eagerly go to watch horror films. I do not eagerly go to places where there is a strong likelihood of feral monsters or chainsaw-wielding psychopaths dismembering me limb from limb. And I cry much more readily at the movies than I do in real life situations.

Probably if I said this to Kanazawa, he wouldn’t disagree with me, exactly, but rather say something about how the fear response evolved in such a way that it operates on its own, on the assumption that what is being seen is real — before some other, more highly conscious, part of our mind can remind us that, after all, “it’s only a movie.” But I don’t think this gets him off the hook. For the point of the example — and, I’d argue, the point of aesthetics (among other things) overall — is precisely that the brain, or the mind, or “human nature” in general, is massively underdetermined by the particular biological traits of which the evolutionary psychologists make so much. In the example here, the dismissal of vicariousness, together with the unexamined assumption that the physiological fear-response is meaningful in itself and enough to account for all the varied situations in which human beings can possibly feel afraid, or give meanings to being afraid, exemplifies the extreme naivete to which evolutionary psychology in general is always prone.

I am inclined to think that William James is right in saying that we feel afraid because we have a certain physiological reaction, rather than that we have the physiological reaction because we feel afraid. But this is precisely why it is a category error to think that fear can be defined in cognitive terms, which would have to happen in order for the question of whether the experience is real or simulated to even come up. A corollary of this is that, when the cognitive question does come up, it is not constrained by the physiological response in the way that Kanazawa assumes. This is the ground of possibility for the astonishing diversity, between individuals and even more among cultures, of the meanings that are assigned to fear, of the situations that give rise to fear, of the ways that fear is dealt with, and so on and so forth. Evolutionary psychology can dismiss these differences as inconsequential (just as it dismisses the question of vicariousness as inconsequential) only because it has already assumed what it claims to prove. Its cognitivist assumptions (such as the assumption that the physiological fear-response has something to do with a cognitive judgment as to whether something is real or simulated) leave it utterly incapable of dealing with the non-cognitive, affective aspects of human life, as well as (ironically enough) with the ways that “cognition” itself contains far more than it can account for.

Sound Mind

I’ve been meaning to write for a while about Tricia Sullivan’s SF novel Sound Mind. But now it is too long since I read it, and I’ve forgotten too many details, so (pending a rereading, which I don’t have time for now) I can only comment on it vaguely and briefly. The novel is a sequel to Double Vision (which I wrote about here), and is probably incomprehensible if you haven’t read the previous volume. (For an excellent account of both books together, see Timmel Duchamp’s review).

Sound Mind is hard to describe, because it is a strange visionary novel which is nonetheless rooted in the mundane: both the details of everyday life in suburban New Jersey (where the author grew up) and at Bard College in New York state (where the author went to school), and the details of television. Basically, there is a cosmic struggle between forces of integration and disintegration, or concreteness and abstraction, or system-building and system-breaking, and a set of experiences reflecting at once a sense of impending catastrophe (as a small region of upstate New York gets hit by a violent destructive force, and is then enclosed in a bubble that does not and cannot communicate with the rest of the world, or indeed the universe), and a sort of Dungeons-and-Dragons derived videogame; and (like the previous novel) a kind of scenario in which television-induced hallucinations control behavior and fulfill various corporate agendas including, but not limited to, selling consumer products. The way in which commodification and advertising feed into all other sorts of self-referential loops and psychotic-breakdown modes of feeling is of course the part of the novel of most interest to me, but it really cannot be separated from some of the other themes, involving avant-garde improvisational music as a means of “cross[ing] the boundaries between systems” (332), and synaesthesia, and lots of other things I can’t quite remember.

The point is, Sound Mind is mind-fuel, with such a density of cultural references, and of slippery almost-theories that tease and allure and never quite coalesce, that it is quite mind-blowing, even as it weaves and bobs and evades your grasp in the way martial arts (one of the subjects of Double Vision) at their best are supposed to do.