Everything Bad Is Good For You

Steven Johnson is one of my favorite non-academic, nonfiction writers. (I’m sorry that seems like such a negative description; but one of Johnson’s virtues is precisely that it is difficult to pigeonhole him positively in terms of content, as he writes on the boundary between culture, science, technology, poststructuralist theory, etc.). I don’t always agree with Johnson, but he always makes me think; and he is one of those writers who can convey complex ideas in prose of great clarity. (I wrote about his last book here). Johnson’s new book, Everything Bad Is Good For You: How Today’s Popular Culture Is Actually Making Us Smarter, is as intelligently provocative as anything he’s written.

As its title and subtitle indicate, Everything Bad Is Good For You is a polemical defense of the value of contemporary popular culture. Johnson contests the all-too-often repeated claims that American popular culture is vile and debased, that it appeals to the lowest common denominator, that it is all about sensationalistic exploitation and dumbing down. He argues, instead, that popular culture is actually making us smarter, in ways that can even be quantified by intelligence tests and the like. Johnson’s method of analysis is basically McLuhanesque; that is to say, he pays attention to the medium rather than the message; or (in the Deleuze/Guattari terms that he cites briefly in an appendix) to what works of popular culture do rather than what they mean, what connections they make rather than what symbols they deploy, or what ideologies they express. Rather than lamenting any alleged decline from print/books/literature to the various multimedia modes in vogue today, he asks the McLuhanite question of how these new media engage us, what modes of perception, action, and thought they appeal to and incite, and how this makes for a qualitative difference from print/literary sensibilities.

Johnson focuses at greatest length on computer games and television. Games involve us actively in difficult tasks of multi-stranded and multiply hierarchized problem-solving; recent TV series involve us in tracking multiple plot strands (The Sopranos), mastering multiple forms of allusion and self-reference (The Simpsons, Seinfeld), comprehending elliptical, convoluted plots (24), and making sense of dense social networks (24, The Sopranos). (He especially stresses how much more complex, rich, and rewarding these shows are than the relatively linear, slow, and stolid shows of the 1970s.) At lesser length, Johnson also discusses multitasking and mastering different software paradigms in order to use the Internet, and the effects of challengingly non-linear movies like Memento and Eternal Sunshine of the Spotless Mind. In the second half of the book, he goes on to generalize about the economic factors that have led to greater complexity in popular culture over the last 20 or 30 years, and to speculate on collateral subjects ranging from neurobiology (the dopamine reward circuits in relation to gaming) to studies of IQ tests (scores have risen steadily over the last few generations).

As a polemic, Everything Bad Is Good For You is right on target and extremely welcome, and I hope people actually listen to it and are persuaded by it. (Of course, I was already on Johnson’s side before I started reading the book, so I may not be the best person to judge its success as rhetorical persuasion). I have no use for the high culture elitism that still exists in certain intellectual quarters; and I have no use for the nostalgia of all too many people my age (the baby boomer generation) who assert that the pop culture of our youth was somehow necessarily superior to what is going on now. When the late Susan Sontag, for instance, suggested in her otherwise very fine essay on Abu Ghraib that the culture of videogames could be blamed for the soldiers’ readiness to engage in torture, this was really no more than gross ignorance, and she needed to be called on it. (I don’t mean to pick on the recently dead; but this was the best example that came to me, honest). Hopefully Johnson’s book will make it more difficult for such assertions to pass muster in the future.

All that said, there was one aspect of Johnson’s approach and argument that I found disappointing and limiting. This was his almost exclusive focus on the cognitive aspects of how the popular media he was discussing work, and his nearly complete avoidance of any discussion of how they work affectively. (I should note that this has nothing to do with the focus on form instead of content, or medium instead of message; it’s the McLuhanesque effects of the new electronic media themselves that need to be formulated in affective terms as well as cognitive ones). This is a problem I’ve discussed many times in the course of this blog, and it’s less specific to Johnson than it is a characteristic of our general contemporary scientific and intellectual culture. Despite the efforts of a few prominent neurobiologists (like Damasio and LeDoux, who to my mind raise crucial questions even if they don’t go far enough), the understanding of “the mind” in this post-Freudian age proceeds almost exclusively through the lens of “cognitive science.”

How does this work out in Everything Bad Is Good For You? For Johnson, the important thing is how computer games train us in “participatory thinking and analysis,” how they “challenge the mind to make sense of an environment” (p.61); the fact that this works through the dopamine reward system, so that we feel a rush of pleasure when we overcome our frustration by solving a puzzle, is really only a secondary matter for him (it is part of the explanation of why and how it all works, but he doesn’t find it important in itself). This seems wrong to me; the emotional states that we experience through our participation in cultural forms are as important as, and perhaps more important than, the training skills we establish and sharpen. In any case, we need to question the subordination of the affective dimension to the cognitive one, the positioning of the former as just an instrument to aid in the latter; though (or precisely because) this seems to be one of the most central and unquestioned tenets of postmodern thought and culture.

So I want to know more about how games work affectively; even though it may well be that games are so central a formation in our culture today precisely because, like the cognitive science Johnson uses to understand them, their whole point is to subordinate affect to cognition. As I’ve said before, I’m not much of a gamer; it may be that my anti-cognitive-centrism, like my boredom at the sort of problem-solving that goes on in games, is itself a symptom of my being still all-too-entrenched in print culture. But I don’t think so; I can make a stronger case for the importance of affect to other media that I know better than games, and that Johnson discusses in wholly cognitive terms: like television and movies (and comix, which Johnson says very little about, though in an endnote he quotes at length from Henry Jenkins, who rightly indicates that contemporary comix would fit very well into the argument, since in recent years they have become dazzlingly complex both narratively and visually). Buffy isn’t nearly as dense in terms of narrative threads as some of the shows Johnson discusses, but it is as dense in terms of allusiveness, self-referentiality, and the demands it makes on its audience to remember seemingly minor details from past episodes, etc. Yet — as I said in my recent post on the show — its cognitive density serves its affective richness, rather than the reverse: and the ways that Buffy explores affect, together with the sorts of affect it displays, strike me as very different from anything you would find in traditional (or modernist) print culture. And I’d say the same for Charlie Kaufman films, and for the few gaming experiences I have tried (admittedly, mostly outdated ones at this point, like Myst and online role-playing in MUDs). Whatever cognitive abilities these works of popular culture instill or assume, their “payoff” has always been (for me) affective, with a wide variety of emotions ranging from ecstasy to fear and anxiety, and with a basic temporal orientation towards the future (in its openness and unknowability) — which is something beyond the reach of cognitive skills.

All of which brings me to the most crucial omission in Johnson’s book, which is that he says almost nothing about music. Now, no commentator can be required to master all genres; I can scarcely fault him for having as little to say about hip hop as I have to say about computer games. Still, music is arguably the one field of popular culture in which affect (as opposed to cognition) is most central, most foregrounded, and most powerful; and some theorists (Jacques Attali and Kodwo Eshun, among others) would argue that music is the most future-oriented of genres as well. The only time that Johnson discusses music (in a footnote on pp. 225-226), he curiously says that his argument about how economic and technological factors have made games and television much more cognitively complex in the last twenty-five years or so doesn’t apply to music, because the analogous technological revolution happened in music much earlier, in the switch from “throwaway singles” to “albums designed to be heard hundreds of times” in the 1960s. He implies that music, unlike TV and games and movies, hasn’t gotten any more complex since that time (and may even have been dumbed down, to judge from his pejorative comments about MTV videos, whose elaborate fast editing styles he doesn’t consider to be an example of cognitively interesting complexity). He says nothing about how digital technologies (sampling, synthesizing, multiple tracks) have changed music; and he fails to consider how the rhythmic complexity of Top 40 hits today goes as far beyond the pop of the 60s and 70s, and the verbal dexterity of the Wu-Tang Clan or Jay-Z or Eminem goes as far beyond that of The Sugarhill Gang or The Furious Five, as the narrative complexity of The Sopranos goes beyond the simple-minded linearity of shows like (one of his favorite negative examples) Starsky and Hutch.

Such musical examples entirely support Johnson’s polemical point about how popular culture today is in many ways richer and denser than that of thirty years ago. But the reasons for this happening in music are much more difficult to state in cognitive terms (and, let’s face it, much more difficult to render in language altogether) than is the case with things like problem-solving skills and narrative structures. (You could discuss it in relation to musical relationality — songs sampling or otherwise alluding to previous songs — but that would only be scratching the surface). Music remains for me, therefore, a privileged instance in which McLuhanesque change resists definition in cognitive-centered terms.

Teranesia

Greg Egan is one of the finest contemporary writers of “hard” SF, which is to say science fiction that strongly emphasizes the science, trying to keep the science coherent and to extrapolate plausibly (at least) from currently existing science and technology. Most of Egan’s books involve physics and computer science, speculating about such things as artificial intelligence and quantum mechanics. Teranesia is something of an exception in his work, as it deals with biology, takes place in the very near (instead of far distant) future, stresses character development and emotion — especially guilt and shame — more than his other novels, and has some directly political themes (Egan touches on religious and ethnic strife in Indonesia, with its heritage of both colonial exploitation and military misrule and corruption; as well as on Australia’s shameful mistreatment of asylum seekers — a matter on which he expands in his online Afterword to the novel). I read Teranesia mostly because I am looking at “bioaesthetics,” and at “the biological imagination” (though I wish I had a better phrase for this); I was curious to see what Egan would do with biology.

The novel worked for the most part in terms of plot, characters, and emotion; but the biology was indeed the most interesting thing about it. The major conceit of Teranesia is the appearance of strange mutations, initially confined to one species of butterfly on one island in the Molucca Sea, but increasingly manifested across animal and plant species, and in a wider and wider area. These mutations seem to be too radical, too well-calibrated, and too quick to be explicable by chance mutations plus the winnowing effect of natural selection. In the space of twenty years, entire animal and plant species develop altered body plans that allow them to feed (or to protect themselves from predation) much more easily, to squeeze out all competitors in the ecosystem, and to propagate from island to island.

It’s almost as if Egan had set himself the task of envisioning a scenario of “biological exuberance,” a scenario that would seem to imply some evolutionary force other than Darwinian natural selection — whether Christian “intelligent design,” some variant of Lamarckianism, Bergsonian élan vital, Richard Goldschmidt’s “hopeful monsters,” or the constraints of form championed by such non-mainstream biologists as Stuart Kauffman and Brian Goodwin — and yet to explain the scenario in terms that are entirely in accord with orthodox neodarwinism and Dawkins’ selfish gene theory. How could rapid and evidently purposive evolutionary change nonetheless result from the “blind watchmaker” of natural selection? All the scientists in Teranesia take the orthodox framework for granted; and in opposition to them, Egan sets religious fundamentalists on the one hand, and “postmodern cultural theorists” who celebrate the trickster mischievousness or irrational bounty of Nature on the other (Egan’s heavy-handed, Alan Sokal-esque satire of the latter group — the book came out at around the same time as the Sokal-vs.-Social Text incident — is the lamest and most tiresome aspect of the novel).

[SPOILER ALERT] The way that Egan solves his puzzle is this. The mutations all turn out to be the result of the actions of a single gene, one that can jump from species to species, and that has the ability to rewrite/mutate the rest of the genome in which it finds itself by snipping out individual base pairs, and introducing transcription errors and replacements. Given a random DNA sequence to work with, the effect of the mutations is basically random. But given an actual genome to work with, the new gene enforces changes that are far from random, that in fact optimize the genome for survival and expansion. The new gene does this by, in effect, exploring the phase space of all possible mutations to a considerable depth. And it does this by a trick of quantum theory. Egan calls on the “many worlds” interpretation of quantum mechanics. Mutations are correlated with the collapse of the quantum wave function. All the mutations that could have happened to a given genome, but did not, have in fact occurred in parallel universes. Over the course of a genome’s history, therefore, all the alternative universes generated by every mutation constitute a phase space of all the possible changes the organism could have undergone; it is these “many universes” that the new gene is able to explore, “choosing” the changes that, statistically speaking, were the most successful ones. In this way, the new gene is able to optimize the entire genome or organism, even though it itself is purely a “selfish gene,” driven only to maximize its own reproduction. Egan wryly notes that “most processes in molecular biology had analogies in computing, but it was rarely helpful to push them too far” (256); nonetheless, he extrapolates this logic by imagining a DNA “program” that works like a “quantum supercomputer” (289).
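
Purely by way of illustration, here is a toy sketch of the difference this conceit makes (the sketch is mine, not Egan’s; the genome, the fitness function, and all the parameters are hypothetical stand-ins). A “mutator” that can survey every one-step variant of a genome, standing in for the many-worlds phase space, jumps straight to the best variant; orthodox blind mutation has to stumble forward one lucky flip at a time.

```python
# A toy sketch, not Egan's model: a "phase-space-exploring" mutator gene
# versus ordinary blind mutation. Genome, fitness, and parameters are
# hypothetical stand-ins.
import random

TARGET = [1] * 20  # a hypothetical "optimal" genome for this environment

def fitness(genome):
    # Crude stand-in for reproductive success: sites matching the optimum.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def blind_step(genome):
    # Orthodox neodarwinism in miniature: one random mutation,
    # retained only if selection does not disfavor it.
    i = random.randrange(len(genome))
    candidate = genome.copy()
    candidate[i] ^= 1
    return candidate if fitness(candidate) >= fitness(genome) else genome

def mutator_step(genome):
    # The novel's conceit: survey every one-step variant (the "phase space"
    # of possible mutations, which Egan's gene reads off from parallel
    # universes) and move directly to the most successful one.
    variants = [genome]
    for i in range(len(genome)):
        candidate = genome.copy()
        candidate[i] ^= 1
        variants.append(candidate)
    return max(variants, key=fitness)

random.seed(0)
blind = [random.randint(0, 1) for _ in range(20)]
mutator = blind.copy()
for _ in range(10):
    blind = blind_step(blind)
    mutator = mutator_step(mutator)
print("blind mutation fitness:     ", fitness(blind))
print("phase-space mutator fitness:", fitness(mutator))
```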

Egan’s solution to his own puzzle is elegant, economical, and shrewd. He’s really doing what hard SF does best: applying the rigor of scientific reasoning to an imaginary problem, and (especially) abiding by the initial conditions set forth by the problem. He successfully constructs a scenario in which even the most extreme instance of apparent design can be explained without recourse to teleology. Though Egan’s hypothesis is counterfactual and probably impossible — which is just a fancy way of saying he is writing fiction — it does in fact usefully illuminate the logic of biological explanation.

And it’s this logic to which I want to turn. Getting rid of teleology is in fact harder than it might seem. Darwin’s theory of natural selection explains how meaningful and functioning complex patterns can emerge from randomness, without there being a pre-existing plan. “Intelligent design” theory today, like the 18th-century “argument from design,” claims that structures like the eye, or the interwoven network of chemical pathways that functions in every cell, are too complex to have arisen without planning. Darwinian theory argues, to the contrary — quite convincingly and cogently — not only that “selection” processes are able to account for the formation of these structures, but that these structures’ very complexity precludes their having been made by planning and foresight, or any other way. (For the most explicit statement of this argument, see Richard Dawkins’ The Blind Watchmaker. Dawkins gives a reductionist, atomistic version of the argument. I would argue — though Dawkins himself would not agree — that this account is not inconsistent with Kauffman’s claim that natural selection rides piggyback on other sorts of spontaneous organization in natural systems).
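
Dawkins’ own demonstration of this point in The Blind Watchmaker is his famous “weasel program,” which shows that cumulative selection (random variation plus retention of the best variant, with no foresight about the intermediate steps) reaches an apparently “designed” result astronomically faster than pure chance would. Here is a minimal reconstruction; the population size and mutation rate are my assumptions, not Dawkins’ published figures.

```python
# A minimal reconstruction of Dawkins' "weasel program" from The Blind
# Watchmaker: cumulative selection reaches a "designed-looking" result
# through blind variation and retention alone.
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase):
    # Number of characters already matching the target.
    return sum(1 for a, b in zip(phrase, TARGET) if a == b)

def mutate(phrase, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=score)  # retain the best variant
print("reached the target in", generations, "generations")
```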

But none of this stops Dawkins, or other hardcore Darwinians, from using the vocabulary of purpose on nearly all occasions. The eye is a structure whose purpose is seeing; genes are “selfish” because they “want” — i.e. their “purpose” is — to create more copies of themselves. Dawkins, at least, is aware that his use of purpose-language is metaphorical; but the metaphors you use affect your argument in powerful, structurally relevant ways, even though you may intend them “only,” and quite consciously, as “mere” metaphors. As Isabelle Stengers puts it, Dawkins is still describing life by comparing it to a watch — or to a computer — even if the “watchmaker” is “blind” and not purposeful or conscious. Kant’s pre-Darwinian observation, that we cannot help seeing life as “purposive,” even though we would be wrong to attribute explicit “purpose” to it, still holds true in evolutionary theory.

This is partly a question about adaptation. Hardcore neodarwinism assumes that every feature of an organism, no matter how minor, is adaptive — which is to say that it has a reproductive purpose, for which it was selected. And evolutionary theorists go through extraordinary contortions to explain how “features” like homosexuality, which evidently do not contribute to the production of more offspring, nonetheless must be “adaptive” — or reproductively selected for — in some way. In a case like homosexuality, it seems obvious to suggest that: a) it is not a well-defined category, but one that has a lot of blurry edges and culturally variable aspects, so it’s misguided in the first place to look for a genetic correlate to it; and b) to the extent that genes do play a role in same-sex object choice, it may well be that what was “selected for” was not homosexuality per se, but something more general (the sort of sexual disposition that is extremely plastic, i.e. capable of realizing itself in multiple forms).

More generally, adaptationism is problematic because defending it soon brings you to a point of reductio ad absurdum. Many features of organisms are evidently adaptive, but when you start to assert that everything must be, a priori, you are condemning yourself to a kind of interpretive paranoia that sees meanings, intentions, and purposes everywhere. You start out aware that (in Egan’s words) “evolution is senseless: the great dumb machine, grinding out microscopic improvements one end, spitting out a few billion corpses from the other” (112). But you end up with a sort of argument from design, a paradoxical denial of contingency, chance, superfluity, and meaninglessness. Evolutionary theorists assume that every feature of every organism necessarily has a meaning and a purpose; which is what leads them to simply invent purposive explanations (what Stephen Jay Gould disparaged as “just-so stories”) when none can be discovered by empirical means.

All these difficulties crop up in the course of Teranesia. Egan’s protagonist, Prabir, is gay, and he supposes that his sexual orientation is like an “oxbow lake” produced by a river: something that’s “not part of the flow” of the river, but that the river keeps creating nonetheless (109). Conversely, he is (rightly) angered by the suggestion that homosexuality is adaptive because it has the evolutionary purpose of being “a kind of insurance policy — to look after the others if something happens to the parents” (110). Angry because such an explanation would suggest that his being as a person has no value in its own right, for itself. And this is picked up at the end of the novel, when the new gene crosses species and starts to metastasize in Prabir’s own body. As a ruthless and super-efficient machine for adaptation, it threatens to wipe out Prabir’s own “oxbow lake,” together with anything that might seem “superfluous” from the point of view of adaptive efficiency (310).

By the end of the novel, the new gene has to be contained, for it threatens to “optimize” Prabir, and through him the rest of humanity, into a monstrous reproductive machine. Teranesia suddenly turns, in its last thirty pages or so, into a horror novel; and the final plot twist that saves Prabir is (in contrast to everything that has come before) exceedingly unconvincing and unsatisfying, because it hinges on seeing the malignant gene as purpose-driven to an extent that simply (I mean in the context of Egan’s fiction itself) isn’t credible.

Teranesia thus ends up tracking and reproducing what I am tempted to call (in Kantian style) the antinomies of neodarwinian explanation. Starting from the basic assertion that “life is meaningless” (338 — the very last words of the novel), it nonetheless finds itself compelled to hypothesize a monstrous, totalizing purposiveness. The specter of biological exuberance is exorcized, but monstrosity is not thereby dispelled; it simply returns in an even more extreme form. Even Egan’s recourse to quantum mechanics is symptomatic: because quantum mechanics is so inherently paradoxical — because it is literally impossible to understand in anything like intuitive terms — it becomes the last recourse when you are trying to explain in rationalistic and reductive terms some aspect of reality (and of life especially) that turns out to be stubbornly mysterious. Quantum mechanics allows you to have it both ways: Egan’s use of it can be compared, for instance, to the way Roger Penrose has recourse to quantum effects in order to explain the mysteries of consciousness. In short, Teranesia is a good enough book that it runs up against, and inadvertently demonstrates, the aporias implicit within the scientific rationality to which Egan is committed.

Cosmopolitics

I just finished reading Isabelle Stengers’ great book Cosmopolitiques (originally published in seven brief volumes, now available in two paperbacks; unfortunately, it has not yet been translated into English). It’s a dense and rich book, of something like 650 pages, and it’s forced me to rethink a lot of things. I’ve said before that I think Stengers is our best guide to the “science wars” of the last decade or two, and more generally, to the philosophy of science. In Cosmopolitiques, she massively extends and expands upon what she wrote in earlier books like The Invention of Modern Science.

Stengers, like Bruno Latour, wants us to give up the claim to absolute supremacy that is the greatest legacy of post-Enlightenment modernity. The point is not to abandon science, nor to see it (in cultural-relativist terms) as lacking objective validity. The problem is not with science’s actual, particular positive claims; but rather with its pretensions to universality, its need to deny the validity of all claims and practices other than its own. What Stengers, rightly, wants to take down is the “mobilization” of science as a war machine, which can only make its positive claims by destroying all other discourses and points of view: science presenting itself as rational and as objectively “true,” whereas all other discourses are denounced as superstitious, irrational, grounded in mere “belief,” etc. Stengers isn’t opposing genetics research, for instance, but she is opposing the claim that somehow the “truth” of “human nature” can be found in the genome and nowhere else. She’s opposing Edward O. Wilson’s “consilience” (with its proclamation that positive science can and will replace psychology, literature, philosophy, religion, and all other “humanistic” forms of knowledge) and Steven Pinker’s reductive, naive and incredibly arrogant and pretentious account of “how the mind works”; not to mention the absurd efforts of “quantitative” social scientists (economists, political scientists, and sociologists) to imagine themselves as arriving at “truth” by writing equations that emulate those of physics.

Stengers wants to understand science in the specificity of its practices, and thereby to reject its transcendent claims, its claims to foundational status which are always made by detaching it from its actual, concrete practices. She defines her own approach as, philosophically, a “constructivist” one. Constructivism in philosophy is non-foundationalist: it denies that truth somehow comes first, denies that it is just there in the world or in the mind. Instead, constructivism looks at how truths are produced through various processes and practices. This does not mean that truth is merely a subjective, human enterprise, either: the practices and processes that produce truths are not just human ones. (Here, Stengers draws profitably upon Whitehead, about whom she has written extensively). For modern science, the constructivist question is to determine how this practice is able (unlike most other human practices, at least) to produce objects that have lives of their own, as it were, so that they remain “answerable” for their actions in the world independently of the laboratory conditions under which they were initially elucidated. This is what makes neutrinos and microbes, for instance, different from codes of justice, or from money, or from ancestral spirits that may be haunting someone. The point of the constructivist approach is to see how these differences work, without thereby asserting that scientific objects are therefore objective, and out there in the world, while all the other sorts of objects would be merely subjective or imaginary or irrational or just inside our heads. The point is not to say that scientific objects are “socially constructed” rather than “objectively true,” but precisely to get away from this binary alternative, when it comes to considering either scientific practices and objects, or (for instance) religious practices and objects.

The other pillar of Stengers’ approach is what she calls an “ecology of practices.” This means considering how particular practices — the practices of science, in particular — impinge upon and relate to other practices that simultaneously exist. This means that the question of what science discovers about the world cannot be separated from the question of how science impinges upon the world. For any particular practice — say, for genetics today — the “ecology of practices” asks what particular demands or requirements (exigences in French, which is difficult to translate precisely because the cognate English word, “exigency,” sounds kind of weird) are made by the practice, and what particular obligations the practice imposes upon those who practice it, make use of it, or get affected by it.

Constructivism and the ecology of practices allow Stengers to distinguish between science as a creative enterprise, a practice of invention and discovery, and science’s modernist claim to invalidate all other discourses. Actually, such a statement is too broad — for Stengers also distinguishes among various sciences, which are not all alike. The assumptions and criteria, and hence the demands and obligations, of theoretical physics are quite different from those of ethology (the study of animal behavior, which has to take place in the wild, where there is little possibility of controlling for “variables,” as well as under laboratory conditions). The obligations one takes on when investigating chimpanzees, and all the more so human beings, are vastly different from the obligations one takes on when investigating neutrinos or chemical reactions. The demands made by scientific practices (such as the demand that the object discovered not be just an “artifact” of a particular experimental setup) also vary from one practice to another. Constructivism and the ecology of practices allow Stengers to situate the relevance and the limits of various scientific practices, without engaging in critique: that is to say, without asserting the privilege of a transcendent(al) perspective on the basis of which the varying practices are judged.

Much of Cosmopolitiques is concerned with a history of physics, from Galileo through quantum mechanics. Stengers focuses on the question of physical “laws.” She looks especially at the notion of equilibrium, and the modeling of dynamic systems. Starting with Galileo, going through Newton and Leibniz, and then continuing throughout the 18th and especially the 19th centuries, there is a continual growth in the power of mathematical idealizations to describe physical systems. Physicists construct models that work under simplified conditions — ignoring the presence of friction, for instance, when describing spheres rolling down a plane (Galileo) or, more generally, motion through space. They then add the effects of “perturbations” like friction as minor modifications of the basic model. Gradually, more and more complex models are developed, which allow more and more factors to be incorporated within the models themselves, instead of being left outside as mere “perturbations.” These models all assume physical “states” that can be said to exist at an instant, independently of the historical development of the systems in question; and they assume a basic condition of equilibrium, often perturbed but always returned to.
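
Schematically (the notation here is mine, not Stengers’), the procedure runs: first the idealized law, then friction bolted on afterwards as a correction term,

\[ m\,\ddot{x} = F \quad\longrightarrow\quad m\,\ddot{x} = F - \gamma\,\dot{x} \]

where \(\gamma\) is a friction coefficient. The frictionless ideal retains its privileged status, and the empirical messiness figures only as a deviation from it.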

Stengers suggests that we should celebrate these accomplishments as triumphs of scientific imagination and invention. At the same time, she points up the baleful effects of these accomplishments, in terms of how they got (metaphorically) transferred to other physical and scientific realms. The success of models, expressible as physical “laws,” has to do with the particular sorts of questions 19th-century dynamics addressed (having to do with the nature of forces in finite interactions that could be treated mathematically with linear equations). The success of dynamics, however, led physicists to expect that the same procedures would be valid in answering other questions. This extension of the dynamic model beyond the field of its experimental successes, and into other realms, led to the general assumption that all physical processes could similarly be modeled in terms of instantaneous “states” and time-invariant transformations of these states. That is to say, the assumption that all physical processes follow deterministic “laws.” When the “perturbations” that deviate from the ideal cannot be eliminated empirically, this is attributed to the mere limitations of our knowledge, with the assertion that the physical world “really” operates in accordance with the idealized model, which thereby takes precedence over merely empirical observations. This is how physics moved from empirical observation to a quasi-Platonic faith in an essence underlying mere appearances.

It’s because of this underlying idealism, this illicit transference of dynamic modeling into realms that are not suited to it, that the ideology of physics as describing the ultimate nature of “reality” has taken so strong a hold on us today. Thus physicists dismiss the apparent irreversibility of time, and the increase of entropy (disorder) in any closed system, as mere artifacts of our subjectivity, which is to say our ignorance (of the fact that we do not have access to perfect and total information about the physical state of every atom). But Stengers points out the arbitrariness of the generally accepted “statistical” interpretation of entropy; she argues that it is warranted only by physicists’ underlying assumption that the ideal situation of total knowability of every individual atom’s location and path, independent of the atoms’ history of interactions, must obtain everywhere. This ideal is invoked as how nature “really” behaves, even if there is no empirical possibility of obtaining the “knowledge” that the ideal assumes.
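
For reference, the “statistical” interpretation at issue is Boltzmann’s: the entropy S of a macroscopic state measures the number W of microscopic arrangements compatible with that state,

\[ S = k_B \ln W \]

so that the growth of entropy becomes a claim about probabilities over microstates that we cannot individually track. The gloss is mine, but it points up Stengers’ objection: irreversibility can be reclassified as a function of our ignorance only if one first assumes that the microstates are, ideally, all individually knowable.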

There are similar problems in quantum mechanics. Most physicists are not content with Bohr’s injunction not to ask what is “really” going on before the collapse of quantum indeterminacy; they can’t accept that total, deterministic knowledge is an impossibility, so they have recourse to all sorts of strange hypotheses, from multiple worlds to “hidden variables.” But following Nancy Cartwright among others, Stengers suggests that the whole problem of indeterminacy and measurement in quantum mechanics is a false one. Physicists don’t like the fact that quantum mechanics forbids us in principle from having exact knowledge of every particle, as it were independently of our interaction with the particles (since we have to choose, for instance, between knowing the position of an electron and knowing its momentum — we can’t have both, and it is our interaction with the electron that determines which we do find out). But Stengers points out that the limits of our knowledge in quantum mechanics are not really any greater than, say, the limits of my knowledge as to what somebody else is really feeling and thinking. It’s only the physicists’ idealizing assumption of the world’s total knowability and total determinability in accordance with “laws” that leads them to be frustrated and dissatisfied by the limits imposed by quantum mechanics.

Now, my summary of the last two paragraphs has actually done a disservice to Stengers. Because I have restated her analyses in a Kantian manner, as a reflection upon the limits of reason. But for Stengers, such an exercise in transcendental critique is precisely what she wants to get away from; since such a critique means that once again modernist rationality is legislating against practices whose claims differ from its own. She seeks, rather, through constructivism and the ecology of practices, to offer what might be called (following Deleuze) an entirely immanent critique, one that is situated within the very field of practices that it is seeking to change. Stengers exemplifies this with a detailed account of the work of Ilya Prigogine, with whom she collaborated in the 1980s. Prigogine sought, for most of his career, to get the “arrow of time” — the irreversibility of events in time — recognized as among the fundamentals of physics. We cultural studies types tend to adopt Prigogine wholeheartedly for our own critical purposes. But Stengers emphasizes the difficulties that result from the fact that Prigogine is not critiquing physics and chemistry, but seeking to point up the “arrow of time” in such a way that the physicists themselves will be compelled to acknowledge it. To the extent that he is still regarded as a fringe figure by most mainstream scientists, it cannot be said that he succeeded. Stengers points to recent developments in studies of emergence and complexity as possibly pointing to a renovation of scientific thought, but she warns against the new-agey or high-theoretical tendency many of us outside the sciences have to proclaim a new world-view by trumpeting these scientific results as evidence: which means both translating scientific research into “theory” way too uncritically, and engaging in a kind of Kantian critique, instead of remaining within the immanence of the ecology of actual practices, with the demands they make and the obligations they impose.

The biggest question Cosmopolitiques leaves me with is precisely the one of whether it is possible to approach all these questions immanently, without bringing some sort of Kantian critique back into the picture (as I find myself unavoidably tempted to do, even when I am just trying to summarize Stengers’ arguments). One could also pose this question in reverse: whether Kantian critique (in the sense I am using it, which goes back to the Transcendental Dialectic of the First Critique, where Kant tries to use rationality to limit the pretensions of reason itself) can be rescued from Stengers’ objections to the modernist/scientific condemnation of all claims other than its own. The modernist gesture par excellence, in Stengers’ account, would be David Hume’s consignment of theology and speculative philosophy to the flames, as containing “nothing but sophistry and illusion.” Are Kant’s Antinomies and Paralogisms making essentiallly the same gesture? I regard this as a crucial question, and as an open one, something I have only begun to think about.

I have another question about Stengers’ conclusions, one that (I think) follows from that about Kantian critique. Stengers urges us (in the last section of her book) “to have done with tolerance”; because “tolerance” is precisely the condescending attitude by which “we” (scientists, secular modernists in general) make allowances for other world-views which we nonetheless refuse to take seriously. Stengers’ vision, like Latour’s, is radically democratic: science is not a transcending “truth” but one of many “interests” which constantly need to negotiate with one another. This can only happen if all the competing interests are taken seriously (not merely “tolerated”), and actively able to intervene with and against one another. To give an example that Stengers herself doesn’t use: think of the recent disputes over “Kennewick Man” — a 9,000-year-old skull discovered in 1999 near the Columbia River in Washington State. Scientists want to study the remains; Native American groups want to give the remains a proper burial. For the most part, the American press presented the dispute as one between the rational desire to increase our store of knowledge and the irrational, archaic “beliefs” of the “tribes” claiming ownership of the skull. Stengers would have us realize that such an indivious distinction is precisely an instance of scientific imperialism, and that the claims of both the scientists and the native groups — the demands they make and the obligations they feel urged to fulfill — need to be negotiated on an equal basis, that both are particular interests, and both are political: the situation cannot be described as a battle between rationality and superstition, or between “knowledge” and “belief.”

In this way, Stengers (and Latour) are criticising, not just Big Science, but also (and perhaps even more significantly) the default assumptions of post-Enlightenment secular liberalism. Their criticism is quite different from that espoused by such thinkers as Zizek and Badiou; but there is a shared rejection of the way that liberal “tolerance” (the “human face,” you might say, of multinational captial) in fact prevents substantive questions from being asked, and substantive change from happening. This is another Big Issue that I am (again) only beginning to think through, and that I will have to return to in future posts. But as regards Stengers, my real question is this: Where do Stengers’ and Latour’s anti-modernist imperatives leave us, when it comes to dealing with the fundamentalist, evangelical Christians in the United States today? Does the need to deprivilege science’s claims to exclusive truth, and to democratically recognize other social/cultural/political claims, mean, for instance, that we need to give full respect to the claims of “intelligent design” or creationism, and let them negotiate on an equal footing with the claims of evolutionary theory? To say that we shouldn’t tolerate the fundamentalists because they themselves are intolerant is no answer. And I’m not sure that to say, as I have said before, that denying the evolution of species is akin to denying the Holocaust — since both are matters of historical events, rather than of (verifiable or falsifiable) theories — I’m not sure that this answer works either. I realize I am showing my own biases here: it’s one thing to uphold the claims of disenfranchised native peoples, another to uphold the claims of a group that I think is oppressing me as much as they think I and my like are oppressing them. But this is really where the aporia comes for me; where I am genuinely uncertain as to the merits of Stengers’ arguments in comparison to the liberal “tolerance” she so powerfully despises.

I just finished reading Isabelle Stengers’ great book Cosmopolitiques (originally published in seven brief volumes, now available in two paperbacks; unfortunately, it has not yet been translated into English). It’s a dense and rich book, of something like 650 pages, and it’s forced me to rethink a lot of things. I’ve said before that I think Stengers is our best guide to the “science wars” of the last decade or two, and more generally, to the philosophy of science. In Cosmopolitiques, she massively extends and expands upon what she wrote in earlier books like The Invention of Modern Science.

Stengers, like Bruno Latour, wants us to give up the claim to absolute supremacy that is the greatest legacy of post-Enlightenment modernity. The point is not to abandon science, nor to see it (in cultural-relativist terms) as lacking objective validity. The problem is not with science’s actual, particular positive claims, but rather with its pretensions to universality, its need to deny the validity of all claims and practices other than its own. What Stengers, rightly, wants to take down is the “mobilization” of science as a war machine, which can only make its positive claims by destroying all other discourses and points of view: science presenting itself as rational and as objectively “true,” whereas all other discourses are denounced as superstitious, irrational, grounded in mere “belief,” etc. Stengers isn’t opposing genetics research, for instance, but she is opposing the claim that somehow the “truth” of “human nature” can be found in the genome and nowhere else. She’s opposing Edward O. Wilson’s “consilience” (with its proclamation that positive science can and will replace psychology, literature, philosophy, religion, and all other “humanistic” forms of knowledge) and Steven Pinker’s reductive, naive, and incredibly arrogant and pretentious account of “how the mind works”; not to mention the absurd efforts of “quantitative” social scientists (economists, political scientists, and sociologists) to imagine themselves as arriving at “truth” by writing equations that emulate those of physics.

Stengers wants to understand science in the specificity of its practices, and thereby to reject its transcendent claims, its claims to foundational status which are always made by detaching it from its actual, concrete practices. She defines her own approach as, philosophically, a “constructivist” one. Constructivism in philosophy is non-foundationalist: it denies that truth somehow comes first, denies that it is just there in the world or in the mind. Instead, constructivism looks at how truths are produced through various processes and practices. This does not mean that truth is merely a subjective, human enterprise, either: the practices and processes that produce truths are not just human ones. (Here, Stengers draws profitably upon Whitehead, about whom she has written extensively). For modern science, the constructivist question is to determine how this practice is able (unlike most other human practices, at least) to produce objects that have lives of their own, as it were, so that they remain “answerable” for their actions in the world independently of the laboratory conditions under which they were initially elucidated. This is what makes neutrinos and microbes, for instance, different from codes of justice, or from money, or from ancestral spirits that may be haunting someone. The point of the constructivist approach is to see how these differences work, without thereby asserting that scientific objects are therefore objective, and out there in the world, while all the other sorts of objects would be merely subjective or imaginary or irrational or just inside our heads. The point is not to say that scientific objects are “socially constructed” rather than “objectively true,” but precisely to get away from this binary alternative, when it comes to considering either scientific practices and objects, or (for instance) religious practices and objects.

The other pillar of Stengers’ approach is what she calls an “ecology of practices.” This means considering how particular practices — the practices of science, in particular — impinge upon and relate to other practices that simultaneously exist. It follows that the question of what science discovers about the world cannot be separated from the question of how science impinges upon the world. For any particular practice — say, for genetics today — the “ecology of practices” asks what particular demands or requirements (exigences in French, difficult to translate precisely because the cognate English word, “exigency,” sounds kind of weird) the practice makes, and what particular obligations it imposes upon those who practice it, make use of it, or get affected by it.

Constructivism and the ecology of practices allow Stengers to distinguish between science as a creative enterprise, a practice of invention and discovery, and science’s modernist claim to invalidate all other discourses. Actually, such a statement is too broad — for Stengers also distinguishes among various sciences, which are not all alike. The assumptions and criteria, and hence the demands and obligations, of theoretical physics are quite different from those of ethology (the study of animal behavior, which has to take place in the wild, where there is little possibility of controlling for “variables,” as well as under laboratory conditions). The obligations one takes on when investigating chimpanzees, and all the more so human beings, are vastly different from the obligations one takes on when investigating neutrinos or chemical reactions. The demands made by scientific practices (such as the demand that the object discovered not be just an “artifact” of a particular experimental setup) also vary from one practice to another. Constructivism and the ecology of practices allow Stengers to situate the relevance and the limits of various scientific practices, without engaging in critique: that is to say, without asserting the privilege of a transcendent(al) perspective on the basis of which the varying practices are judged.

Much of Cosmopolitiques is concerned with a history of physics, from Galileo through quantum mechanics. Stengers focuses on the question of physical “laws.” She looks especially at the notion of equilibrium, and the modeling of dynamic systems. Starting with Galileo, going through Newton and Leibniz, and then continuing throughout the 18th and especially the 19th centuries, there is a continual growth in the power of mathematical idealizations to describe physical systems. Physicists construct models that work under simplified conditions — ignoring the presence of friction, for instance, when describing spheres rolling down an inclined plane (Galileo), or, more generally, motion through space. They then add the effects of “perturbations” like friction as minor modifications of the basic model. Gradually, more and more complex models were developed, which allowed for more and more factors to be incorporated within the models themselves, instead of having to be left outside as mere “perturbations.” These models all assume physical “states” that can be said to exist at an instant, independently of the historical development of the systems in question; and they assume a basic condition of equilibrium, often perturbed but always returned to.
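
To make the pattern concrete, here is a minimal sketch (mine, not Stengers’) of the procedure she describes: an idealized, frictionless oscillator is modeled first, and friction is then added as a small corrective “perturbation” term.

```python
# A toy version of the "ideal model + perturbation" pattern: a
# harmonic oscillator integrated first without friction, then with a
# small damping term tacked on as a correction. Illustrative values only.

def simulate(damping=0.0, steps=10_000, dt=0.001):
    x, v = 1.0, 0.0           # initial position and velocity
    for _ in range(steps):
        a = -x - damping * v  # ideal restoring force, minus the "perturbation"
        v += a * dt
        x += v * dt
    return x, v

print("ideal (frictionless):", simulate(damping=0.0))
print("with friction:       ", simulate(damping=0.05))
# The frictionless run approximately conserves energy and is
# time-reversible; the damped run slowly loses energy. Friction is
# treated as a minor modification of the same underlying model.
```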

Stengers suggests that we should celebrate these accomplishments as triumphs of scientific imagination and invention. At the same time, she points up the baleful effects of these accomplishments, in terms of how they got (metaphorically) transferred to other physical and scientific realms. The success of models, expressible as physical “laws,” has to do with the particular sorts of questions 19th-century dynamics addressed (having to do with the nature of forces in finite interactions that could be treated mathematically with linear equations). The success of dynamics, however, led physicists to expect that the same procedures would be valid in answering other questions. This extension of the dynamic model beyond the field of its experimental successes, and into other realms, led to the general assumption that all physical processes could similarly be modeled in terms of instantaneous “states” and time-invariant transformations of these states. That is to say, the assumption that all physical processes follow deterministic “laws.” When the “perturbations” that deviate from the ideal cannot be eliminated empirically, this is attributed to the mere limitations of our knowledge, with the assertion that the physical world “really” operates in accordance with the idealized model, which thereby takes precedence over merely empirical observations. This is how physics moved from empirical observation to a quasi-Platonic faith in an essence underlying mere appearances.

It’s because of this underlying idealism, this illicit transference of dynamic modeling into realms that are not suited to it, that the ideology of physics as describing the ultimate nature of “reality” has taken so strong a hold on us today. Thus physicists dismiss the apparent irreversibility of time, and the increase of entropy (disorder) in any closed system, as mere artifacts of our subjectivity, which is to say of our ignorance (our lack of access to perfect and total information about the physical state of every atom). But Stengers points out the arbitrariness of the generally accepted “statistical” interpretation of entropy; she argues that it is warranted only by physicists’ underlying assumption that the ideal situation of total knowability of every individual atom’s location and path, independent of the atoms’ history of interactions, must obtain everywhere. This ideal is invoked as how nature “really” behaves, even if there is no empirical possibility of obtaining the “knowledge” that the ideal assumes.
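
The “statistical” reading of entropy that Stengers questions can be made concrete with a standard toy model (my example, not hers), the Ehrenfest urn, in which entropy’s apparent one-way drift emerges from nothing but aggregate counting over individually reversible moves.

```python
# Ehrenfest urn model: N particles in two boxes; at each step one
# particle, chosen uniformly at random, switches boxes. Every single
# move is reversible, yet the occupancy count drifts toward an even
# split and hovers there: "irreversibility" as a statistical effect
# of large numbers.
import random

N = 1000
left = N                 # start with every particle in the left box
for _ in range(20_000):
    if random.randrange(N) < left:
        left -= 1        # the chosen particle was on the left
    else:
        left += 1        # it was on the right
print(f"started at {N}, ended near N/2: {left}")
```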

There are similar problems in quantum mechanics. Most physicists are not content with Bohr’s injunction not to ask what is “really” going on before the collapse of quantum indeterminacy; they can’t accept that total, deterministic knowledge is an impossibility, so they have recourse to all sorts of strange hypotheses, from multiple worlds to “hidden variables.” But following Nancy Cartwright among others, Stengers suggests that the whole problem of indeterminacy and measurement in quantum mechanics is a false one. Physicists don’t like the fact that quantum mechanics forbids us in principle from having exact knowledge of every particle, as it were independently of our interaction with the particles (since we have to choose, for instance, between knowing the position of an electron and knowing its momentum — we can’t have both, and it is our interaction with the electron that determines which we do find out). But Stengers points out that the limits of our knowledge in quantum mechanics are not really any greater than, say, the limits of my knowledge as to what somebody else is really feeling and thinking. It’s only the physicists’ idealizing assumption of the world’s total knowability and total determinability in accordance with “laws” that leads them to be frustrated and dissatisfied by the limits imposed by quantum mechanics.
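
For reference, the position/momentum trade-off invoked here is the standard Heisenberg uncertainty relation, where σ_x and σ_p are the spreads (standard deviations) of position and momentum measurements:

```latex
\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2}
```

Narrowing one spread necessarily widens the other; the bound is structural to the theory, not a matter of clumsy instruments, which is just what makes the demand for “total” knowledge incoherent here.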

Now, my summary of the last two paragraphs has actually done a disservice to Stengers. Because I have restated her analyses in a Kantian manner, as a reflection upon the limits of reason. But for Stengers, such an exercise in transcendental critique is precisely what she wants to get away from; since such a critique means that once again modernist rationality is legislating against practices whose claims differ from its own. She seeks, rather, through constructivism and the ecology of practices, to offer what might be called (following Deleuze) an entirely immanent critique, one that is situated within the very field of practices that it is seeking to change. Stengers exemplifies this with a detailed account of the work of Ilya Prigogine, with whom she collaborated in the 1980s. Prigogine sought, for most of his career, to get the “arrow of time” — the irreversibility of events in time — recognized as among the fundamentals of physics. We cultural studies types tend to adopt Prigogine wholeheartedly for our own critical purposes. But Stengers emphasizes the difficulties that result from the fact that Prigogine is not critiquing physics and chemistry, but seeking to point up the “arrow of time” in such a way that the physicists themselves will be compelled to acknowledge it. To the extent that he is still regarded as a fringe figure by most mainstream scientists, it cannot be said that he succeeded. Stengers points to recent developments in studies of emergence and complexity as possibly pointing to a renovation of scientific thought, but she warns against the new-agey or high-theoretical tendency many of us outside the sciences have to proclaim a new world-view by trumpeting these scientific results as evidence: which means both translating scientific research into “theory” way too uncritically, and engaging in a kind of Kantian critique, instead of remaining within the immanence of the ecology of actual practices, with the demands they make and the obligations they impose.

The biggest question Cosmopolitiques leaves me with is precisely whether it is possible to approach all these questions immanently, without bringing some sort of Kantian critique back into the picture (as I find myself unavoidably tempted to do, even when I am just trying to summarize Stengers’ arguments). One could also pose this question in reverse: whether Kantian critique (in the sense I am using it, which goes back to the Transcendental Dialectic of the First Critique, where Kant tries to use rationality to limit the pretensions of reason itself) can be rescued from Stengers’ objections to the modernist/scientific condemnation of all claims other than its own. The modernist gesture par excellence, in Stengers’ account, would be David Hume’s consignment of theology and speculative philosophy to the flames, as containing “nothing but sophistry and illusion.” Are Kant’s Antinomies and Paralogisms making essentially the same gesture? I regard this as a crucial question, and as an open one, something I have only begun to think about.

I have another question about Stengers’ conclusions, one that (I think) follows from that about Kantian critique. Stengers urges us (in the last section of her book) “to have done with tolerance,” because “tolerance” is precisely the condescending attitude by which “we” (scientists, secular modernists in general) make allowances for other world-views which we nonetheless refuse to take seriously. Stengers’ vision, like Latour’s, is radically democratic: science is not a transcending “truth” but one of many “interests” which constantly need to negotiate with one another. This can only happen if all the competing interests are taken seriously (not merely “tolerated”), and actively able to intervene with and against one another. To give an example that Stengers herself doesn’t use: think of the recent disputes over “Kennewick Man” — a 9,000-year-old skull discovered in 1996 near the Columbia River in Washington State. Scientists want to study the remains; Native American groups want to give the remains a proper burial. For the most part, the American press presented the dispute as one between the rational desire to increase our store of knowledge and the irrational, archaic “beliefs” of the “tribes” claiming ownership of the skull. Stengers would have us realize that such an invidious distinction is precisely an instance of scientific imperialism, and that the claims of both the scientists and the native groups — the demands they make and the obligations they feel urged to fulfill — need to be negotiated on an equal basis, that both are particular interests, and both are political: the situation cannot be described as a battle between rationality and superstition, or between “knowledge” and “belief.”

In this way, Stengers (and Latour) are criticizing, not just Big Science, but also (and perhaps even more significantly) the default assumptions of post-Enlightenment secular liberalism. Their criticism is quite different from that espoused by such thinkers as Zizek and Badiou; but there is a shared rejection of the way that liberal “tolerance” (the “human face,” you might say, of multinational capital) in fact prevents substantive questions from being asked, and substantive change from happening. This is another Big Issue that I am (again) only beginning to think through, and that I will have to return to in future posts. But as regards Stengers, my real question is this: Where do Stengers’ and Latour’s anti-modernist imperatives leave us, when it comes to dealing with the fundamentalist, evangelical Christians in the United States today? Does the need to deprivilege science’s claims to exclusive truth, and to democratically recognize other social/cultural/political claims, mean, for instance, that we need to give full respect to the claims of “intelligent design” or creationism, and let them negotiate on an equal footing with the claims of evolutionary theory? To say that we shouldn’t tolerate the fundamentalists because they themselves are intolerant is no answer. And as for saying, as I have said before, that denying the evolution of species is akin to denying the Holocaust — since both are matters of historical events, rather than of (verifiable or falsifiable) theories — I’m not sure that this answer works either. I realize I am showing my own biases here: it’s one thing to uphold the claims of disenfranchised native peoples, another to uphold the claims of a group that I think is oppressing me as much as they think I and my like are oppressing them. But this is really where the aporia comes for me; where I am genuinely uncertain as to the merits of Stengers’ arguments in comparison to the liberal “tolerance” she so powerfully despises.

Confidence Games

Mark C. Taylor’s Confidence Games: Money and Markets in a World Without Redemption is erudite, entertaining, and intellectually wide-ranging — and it has the virtue of dealing with a subject (money and markets) that rarely gets enough attention from people deeply into pomo theory. Why, then, did I find myself so dissatisfied with the book?

Taylor is a postmodern, deconstructionist theologian — if that makes any sense, and in fact when reading him it does — who has written extensively about questions of faith and belief in a world without a center or foundations. Here he writes about the relations between religion, art, and money — or, more philosophically, between theology, aesthetics, and economics. He starts with a consideration of William Gaddis’ underrated and underdiscussed novels The Recognitions and JR (the latter of which he rightly praises as one of the most crucial and prophetic reflections on late-20th-century American culture: in a book published in 1975, Gaddis pretty much captures the entire period from the deregulation and S&L scams of the Reagan 80s through the Enron fiasco of just a few years ago: nailing down both the crazy economic turbulence and fiscal scamming, and its influence on the larger culture). From Gaddis, Taylor moves on to the history of money, together with the history of philosophical reflections upon money. He’s especially good on the ways in which theological speculation gets transmuted into 18th and 19th century aesthetics, and on how both theological and aesthetic notions get subsumed into capitalistic visions of “the market.” In particular, he traces the Calvinist (as well as aestheticist) themes that stand behind Adam Smith’s vision of the “invisible hand” that supposedly ensures the proper functioning of the market.

The second half of Taylor’s book moves towards an account of how today’s “postmodern” economic system developed, in the wake of Nixon’s abandonment of the gold standard in 1971, the Fed’s conversion from Keynesianism to monetarism in 1979, and the general adoption of “neoliberal” economics throughout the world in the 1980s and 1990s. The result of these transformations is the dematerialization of money (since it is no longer tied to gold) and the replacement of a “real” economy by a “virtual” one, in which money becomes a series of ungrounded signs that only refer to one another. Money, in Taylor’s account, has always had something uncanny about it — because, as a general equivalent or medium of exchange, it is both inside and outside the circuits of the items (commodities) being exchanged; money is a liminal substance that grounds the possibility of fixed categories and values, but precisely for that reason, doesn’t itself quite fit into any category, or have any autonomous value. But with the (re-)adoption of free-market fundamentalism in the 1980s, together with the explosive technological changes of the late 20th century — the growth of telecommunications and of computing power that allow for global and entirely ‘fictive’ monetary flows — this all kicks into much higher gear: money becomes entirely “spectral.” Taylor parallels this economic mutation to similar experiences of ungroundedness, and of signs that do not refer to anything beyond themselves, in the postmodern architecture of Venturi and after, in the poststructuralist philosophy of Derrida (at least by Taylor’s somewhat simplistic interpretation of him), and more generally in all facets of our contemporary culture of sampling, appropriation, and simulation. (Though Taylor only really seems familiar with high art, which has its own peculiar relationship to money; he mentions the Guggenheim Museum opening a space in Las Vegas, but — thankfully perhaps — is silent on hiphop, television, or anything else that might be classified as “popular culture”).

I think that Taylor’s parallels are a bit too facile and glib, and underrate the complexity and paradoxicality of our culture of advertising and simulation — but that’s not really the core of my problem with the book. My real differences are — to use Taylor’s own preferred mode of expression — theological ones. I think that Taylor is far too idolatrous in his regard for “the market” and for money, which traditional religion has seen as Mammon, but which he recasts as a sort of Hermes Trismegistus or trickster figure (though he doesn’t directly use this metaphor), as well as a Christological mediator between the human and the divine. Taylor says, convincingly, that economics cannot be disentangled from religion, because any economic system ultimately requires faith — it is finally only faith that gives money its value. But I find Taylor’s faith to be troublingly misplaced: it is at the antipodes from any form of fundamentalism, but for this very reason oddly tends to coincide with it. In postmodern society, money is the Absolute, or the closest that we mortals can come to an Absolute. (Taylor complacently endorses the Hegelian dialectic of opposites, without any of the sense of irony that a contemporary Christianophile Hegelian like Zizek brings to the dialectic). Where fundamentalists seek security, grounding, and redemption, Taylor wants to affirm uncertainty and risk “in a world without redemption.” But this means that the turbulence and ungroundedness of the market make it the locus for a quasi-religious Nietzschean affirmation (“risk, uncertainty, and insecurity, after all, are pulses of life” — 331) which is ultimately not all that far from the Calvinist faith that everything is in the hands of the Lord.

Taylor at one point works through Marx’s account of the self-valorization of capital; for Taylor, “Marx implicitly draws on Kant’s aesthetics and Hegel’s philosophy” when he describes capital’s “self-renewing circular exchange” (109). That is to say, Marx’s account of capital logic has the same structure as Kant’s organically self-validating art object, or Hegel’s entire system. (Taylor makes much of Marx’s indebtedness to Hegel). What Taylor leaves out of his account, however, is the part where Marx talks about the appropriation of surplus value, which is to say what capital does in the world in order to generate and perpetuate this process of “self-valorization.” I suggest that this omission is symptomatic. In his history of economics, Taylor moves from Adam Smith to such mid-20th-century champions of laissez faire as Milton Friedman and F. A. Hayek; but he never mentions, for instance, Ricardo, who (like Marx after him) was interested in production and consumption, rather than just circulation.
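
For readers without Capital to hand: the “self-renewing circular exchange” is Marx’s general formula for capital, and the term that Taylor’s aestheticized reading drops is the increment ΔM, the surplus value, which has to be generated somewhere outside the circuit of exchange itself:

```latex
M \rightarrow C \rightarrow M', \qquad M' = M + \Delta M
```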

Now, simply to say — as most orthodox Marxists would do — that Taylor ignores production, and the way that circulation is grounded in production, is a more “fundamentalist” move than I would wish to make. Taylor is right to call attention to the eerily ungrounded nature of contemporary finance. Stock market prices are largely disconnected from any underlying economic performance of the companies whose stocks are being traded; speculation on derivatives and other higher-order financial instruments, which have even less relation to actual economic activity, has largely displaced productive investment as the main “business” of financial markets today. But Taylor seems to celebrate this process as a refutation of Marx and Marxism (except to the extent that Marx himself unwittingly endorses the self-valorization of capital, by describing it in implicitly aesthetic and theological terms). Taylor tends to portray Marx as an old-school fundamentalist who is troubled by the way that money’s fluidity and “spectrality” undermine metaphysical identities and essences. But this is a very limited and blinkered (mis)reading of Marx. For Marx himself begins Capital with the notorious discussion of the immense abstracting power of commodities and money. And subsequently, Marx insists on the way that circuits of finance tend, in an advanced capitalist system, to float free of their “determinants” in use-value and labor. The autonomous “capital-logic” that Marx works out in Volumes 2 & 3 of Capital is much more true today than it ever was in Marx’s own time. Marx precisely explores the consequences of these developments without indulging in any “utopian-socialist” nostalgia for a time of primordial plenitude, before money matters chased us out of the Garden.

Let me try to put this in another way. The fact that postmodern financial speculation is (quite literally) ungrounded seems to mean, for Taylor, that it is therefore also free of any extraneous consequences or “collateral damage” (Taylor actually uses this phrase as the title of one section of the book, playing on the notion of “collateral” for loans but not considering any extra-financial effects of financial manipulations). Much of the latter part of Confidence Games is concerned with efforts by financiers and economists, in the 1980s and 1990s, to manage and minimize risk; and with their inability to actually do so. Taylor spends a lot of time, in particular, on the sorry story of Long-Term Capital Management (LTCM), the investment firm that went bankrupt so spectacularly in 1998. After years of mega-profits, LTCM got called on its outrageously leveraged investments, found that it couldn’t repay any of its loans, and had to be bailed out to avoid a domino effect leading to worldwide financial collapse. In Taylor’s view, there’s a kind of moral lesson in this: LTCM wanted to make hefty profits without taking the concomitant risks; but eventually the risks caught up with them, in a dramatic movement of neo-Calvinist retribution, a divine balancing of the books. Taylor doesn’t really reflect on the fact that the “risks” weren’t really all that great for the financiers of LTCM themselves: they lost their paper fortunes, but they didn’t literally lose their shirts or get relegated to the poorhouse. Indeed their losses were largely covered, in order to protect everyone else, who would have suffered from the worldwide economic collapse that they almost triggered. The same holds, more recently, for Enron. Ken Lay got some sort of comeuppance when Enron went under, and (depending on the outcome of his trial) he may even end up having to serve (like Martha Stewart) some minimum-security jail time. But Lay will never be in the destitute position of all the people who lost their life savings and old-age pensions in the fiasco. Gaddis’ JR deals with the cycles of disruption and loss that are triggered by the ungrounded speculations at the center of the novel — but this is one aspect of the text Taylor never talks about.
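
A back-of-the-envelope sketch (my figures, roughly of the right order of magnitude, not Taylor’s and not LTCM’s actual books) of why that leverage was so unforgiving:

```python
# Illustrative leverage arithmetic: with positions financed at ~25x
# equity, a small adverse move in asset prices consumes the firm's
# entire capital. Rough, made-up figures.
equity = 4.0                # billions of the firm's own capital
leverage = 25               # assets held per unit of equity
assets = equity * leverage  # 100 billion of positions

for move in (-0.01, -0.02, -0.04):
    loss = assets * -move
    print(f"{move:+.0%} move -> lose {loss:.0f}B of {equity:.0f}B equity")
# At -4% the equity is gone; from then on the losses belong to the
# lenders, which is why a bailout was brokered.
```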

Taylor sharply criticizes the founding assumptions of mainstream economists and financiers: the ideas that the market is “rational,” and that it tends toward “equilibrium.” And here Taylor is unquestionably right: these founding assumptions — which still pervade mainstream economics in the US and around the world — are indeed nonsensical, as well as noxious. Smith’s “invisible hand” actually operates to create “optimal” outcomes only under ideal, frictionless conditions that almost never exist in actuality. Marginalist and neoclassical/neoliberal economics is probably the most mystified discipline in the academy today, wedded as it is to the pseudo-rigor of mathematical models borrowed from physics, and deployed in circumstances where none of the idealizations at the basis of physics actually obtain. It’s welcome to see Taylor take on the economists’ “dream of a rationally ordered world” (301), one every bit as out of touch with reality, and as harmful in its effects when people tried to bend the real world to conform to it, as Soviet communism ever was.

But alas — Taylor only dismisses the prevalent neoclassical version of the invisible hand, in order to welcome it back in another form. If the laws of economic equilibrium, borrowed by neoclassical economics from 19th-century physical dynamics, do not work, for Taylor this is because the economy is governed instead by the laws of complex systems, which he borrows from late-20th-century physics in the form of chaos and complexity theory. There is still an invisible hand in Taylor’s account: only now it works through phase transitions and strange attractors in far-from-equilibrium conditions. Taylor thus links the physics of complexity to the free-market theories of F. A. Hayek (Margaret Thatcher’s favorite thinker), for whom the “market” was a perfect information-processing mechanism that calculated optimal outcomes as no “central planning” agency could. According to Hayek’s way of thinking, since any attempt at human intervention in the functioning of the economy — any attempt to alleviate or mitigate circumstances — will necessarily have unintended and uncontrollable consequences, we do best to let the market take its course, with no remorse or regret for the vast amount of human suffering and misery that is created thereby.
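
For readers who haven’t met the vocabulary: here is a minimal sketch of the behavior chaos theory trades in, using the textbook logistic map (my example, not Taylor’s). Two all-but-identical starting points diverge completely, which is what rules out equilibrium-style forecasting in such systems.

```python
# Logistic map at r = 4, a standard toy example of chaos: trajectories
# starting 1e-9 apart lose all resemblance within a few dozen steps
# ("sensitive dependence on initial conditions").
r = 4.0
x, y = 0.3, 0.3 + 1e-9
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print("after 60 steps:", round(x, 4), "vs", round(y, 4))
```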

Such sado-monetarist cruelty is clearly not Taylor’s intention, but it arises nevertheless from his revised version of the invisible hand, as well as from his determination to separate financial networks from their extra-financial effects. I’ll say it again: however much Taylor celebrates the way that everything is interconnected and all systems are open, he still maintains a sort of methodological solipsism, a blindness to external consequences. The fact that financial networks today (or any other sort of self-perpetuating system of nonreferential signs) are ungrounded self-affecting systems, produced in the unfolding of a “developmental process [that] neither is grounded in nor refers to anything beyond itself” (330) — this fact does not exempt these systems from having extra-systemic consequences: indeed, if anything, the system’s lack of “groundedness” or connection makes the extra-systemic effects all the more intense and virulent. To write off these effects as “coevolution,” or as the “perpetual restlessness” of desire, or as a wondrous Nietzschean affirmation of risk, is to be disingenuous at best.

There’s a larger question here, one that goes far beyond Taylor. When we think today of networks, or of chaotic systems, we think of patterns that are instantiated indifferently in the most heterogeneous sorts of matter. The same structures, the same movements, the same chaotic bifurcations and phase transitions, are supposedly at work in biological ecosystems, in the weather, and in the stock market. This is the common wisdom of the age — it certainly isn’t specific to Taylor — but it’s an assumption that I increasingly think needs to be criticized. The very fact that the same arguments from theories of chaos/complexity and “self-organization” can be cited with equal relevance by Citibank and by the alterglobalization movement, and can be used to justify both feral capitalism and communal anarchism, should give us pause. For one thing, I don’t think we know yet how well these scientific theories will hold up; they are drastic simplifications, and only time will tell how well they perform, how useful they are, in comparison to the drastic simplifications proposed by the science of, say, the nineteenth century. For another thing, we should be dubious of the idea of the same pattern instantiated indifferently in various sorts of matter, which may be just another extension — powerful in some ways, but severely limiting in others — of Western culture’s tendency to divide mind or meaning from matter, and to devalue the latter. For yet another, we should be very wary of drawing political and ethical consequences from scientific observation and theorization, for such extrapolation usually involves a great deal of arbitrariness, projecting the scientific formulations far beyond the circumstances in which they are meaningful.

Attali’s Noise

Jacques Attali’s Noise: The Political Economy of Music made something of a stir when it was published roughly a quarter-century ago (it came out in France in 1977, and in English translation in 1985). Noise comes from a time when “theory” had greater ambitions than it does today; it’s an audacious, ambitious book, linking the production, performance, and consumption of music to fundamental questions of power and order in society. I read it for the first time in many years, in order to see how well it holds up in the 21st century.

Noise presents itself as a “universal history”: it presents a schema of four historical phases, which it claims are valid for all of human history and culture (or at least for European history and culture: Attali, like so many European thinkers, consigns everything that lies outside Europe and its Near Eastern antecedents to a vague and undifferentiated ‘primitive’ category, as if there were no differences worth noting among them, and nothing that any of these other cultures could offer that was different from the European lineage). The mania for “universal history” was strong among late-20th-century Parisian thinkers; both Deleuze & Guattari, and Baudrillard, offer such grand formulations. Though I doubt that any of these schemas are “true” — they leave out too much, oversimplify, reduce the number of actual structural orders — at their best (as, I would argue, in Deleuze & Guattari, in the “Savages, Barbarians, Civilized Men” section of Anti-Oedipus, and in the chapter “On Several Regimes of Signs” in A Thousand Plateaus) they are richly suggestive, and help us at least to trace the genealogy of what we take for granted in the present, and to see the contingency of, and the possibility therefore of differing from, what we take for granted in the present. Attali’s “universal history,” however, is much weaker than Deleuze and Guattari’s; it really just consists in shunting everything that is pre-capitalist, or simply non-capitalist, into a single category.

Still, Attali offers some valuable, or at least thought-provoking, insights. Music is the organization of sound; by channelling certain sounds in certain orders, it draws a distinction between sounds that are legitimate, and those that are not: the latter are relegated to the (negative) category of “noise.” Music, like other arts, is often idealized as the imposition of form upon chaos (Wallace Stevens’ “blessed rage for order”). Attali rightly insists that there’s a politics at work here: behind the idealization, there’s an act of exclusion. The history of music can be read as a series of battles for legitimation, disputes over what is acceptable as sound, and what is only “noise” (think of the rise of dissonance in European concert music in the 19th and early 20th centuries: or the way punk in the late 1970s, like many other movements before and since, affirmed “noise” against the gentility of mainstream pop and officially sanctioned rock, or why Public Enemy wanted to “Bring the Noise,” a gesture at once aesthetic and political).

Now, the imposition of order is always a kind of violence, albeit one that claims to put an end to violence. The State has a legal monopoly of violence, and this is what allows it to provide peace and security to its citizens. This is why, as Foucault put it, “the history which bears and determines us has the form of a war rather than that of a language: relations of power, not relations of meaning.” Attali draws an analogy — actually, more than an analogy, virtually an identity — between the imposition of order in society, and the imposition of sonic order that is music. Social order and musical order don’t just formally resemble one another; since music is inherently social and communal, music as an action (rather than a product), like Orpheus’ taming of the beasts, is itself part of the imposition of order, the suppression of violence by a monopolization of violence. Music excludes the violence of noise (unwanted sound) by violently imposing order upon sound. And music is addressed to everybody — it “interpellates” us into society. Music thus plays a central role in social order — which is why Plato, for instance, was so concerned with only allowing the ‘right’ sorts of music into his Republic; and why the Nazis paid so much attention to music (favoring Wagner and patriotic songs, and banning “degenerate” music like jazz).

Attali specifies this further by assimilating music to sacrifice, as the primordial religious origin of all social order. I find this a powerful and deeply suggestive insight, even though Attali understands the logic of sacrifice in the terms set forth by René Girard, rather than in the much richer and more ambiguous formulations of Georges Bataille. (To my mind, everything Girard says can be traced back to Bataille, but Girard only offers us a reductive, normalized, idealized, and overly pious version of Bataille. The impulsion to sacrifice, the use of the scapegoat as sacrificial substitution, the creation of community by mutual implication in the sacrifice, and so on — all these can only be understood in the context of Bataille’s notion of expenditure, and in relation to Maussian gift economies; only in this way can we see how sacrifice, in its religious and erotic, as well as political dimensions, doesn’t just rescue us from “mimetic rivalry,” but also institutes a whole set of unequal power relations).

In any case: music as a sacrificial practice, and more generally as a form of “community” (a word which I leave in quotes because I don’t want to forget its ambiguous, and often obnoxious, connotations), is central to the way that order exists in a given society. Music is not a mere part of what traditional Marxists called the “superstructure”; rather, it is directly one of the arenas in which the power struggles that shape and change the society take place. (These “power struggles” might be Marxist class warfare, or Foucauldian conflicts of power and resistance seeping up from below and interfering with one another, or indeed the more peaceful contentions, governed by a “social contract,” that are noted by liberal political theory). Attali argues that music is one of the foremost spheres in which the struggles, inventions, innovations, and mutations that determine the structure of society take place; and therefore that music is in a strong sense “prophetic,” in that its changes anticipate and forecast what happens in society as a whole.

All this is background, really; though music’s “Sacrificing” role is the first of Attali’s four historical phases. Attali’s real interest (and mine as well), and the subject of his three remaining historical phases, is what happens to music under capitalism. The 19th century concert hall is the center of the phase of “Representing.” The ritual function of music in “primitive” societies, and even in Europe up to feudalism and beyond, gets dissolved as a result of the growth of mercantile, and then industrial capitalism. Music is separated from everyday life; it becomes a specialized social function, with specialized producers and performers. The musician becomes a servant of the Court in 17th and 18th century Europe; by the 19th century, with the rise to power of the bourgeoisie after the French Revolution, the musician must become an entrepreneur. Music “become[s] institutionalized as a commodity,” and “acquire[s] an autonomous status and monetary value,” for the first time in human history (51). The musical emphasis on harmony in this period is strictly correlated, according to Attali, with an economic system based upon exchange, and the equilibrium that is supposed to result from processes of orderly economic exchange. Music and money both work, in the 19th century, according to a logic of representation. Money is the representation of physical goods, in the same way that the parliament, in representative democracy, is the representation of the populace. And the resolution of harmonic conflict in the course of 19th century compositions works alongside the resolution of conflicting desires through the (supposed) equilibrium of the “free market.” In the cases both of music and the market, sacrifice is repressed and disavowed, and replaced by what is both the representation of (social and musical) harmony, and the imposition of harmony through the process of representation itself. Playing on the multiple French meanings of the word “representation,” Attali includes in all this the formal “representation” (in English, more idiomatically, the “performance”) of music in the concert hall as the main process by means of which music is disseminated. The links Attali draws here are all quite clever, and much of it might even be true.

Finally, though, however important a role representation continues to play in the ideology of late-capitalist society, the twentieth century has effectively moved beyond it. For Attali, the crucial development is the invention of the phonograph, the radio, and other means of mechanical (and now, electronic) reproduction and dissemination: this is what brings music (and society) out of the stage of “Representing” and into one grounded instead in “Repeating.” Of course, Attali is scarcely the first theorist to point out how radically these technologies have changed the ways in which we experience music. Nor is he alone in noting how these changes — with musical recordings becoming primary, rather than their being merely reproductions of ‘real’ live performances — can be correlated with the hypercommodification of music. More originally, Attali comments on the “stockpiling” of recordings: in effect, once I buy a record or CD or file, I don’t really have to listen to the music contained therein: the essence of consumption lies in purchasing and collecting, not in “using” the music through actual listening. He also makes an ingenious parallel between the pre-programmed and managed production of “pop” music, and the instrumental rationality of musical avant-gardes (both the serialists of the 50s and the minimalists of the 70s). But all in all, “Repeating” is the weakest chapter of Noise, because for the most part Attali pretty much just echoes Adorno’s notorious critique of popular music. I’d argue — as I have implicitly suggested in previous posts — that the real problem with Adorno’s and Attali’s denunciations is that they content themselves with essentially lazy and obvious criticisms of commodity culture, while failing to plumb the commodity experience to its depths, refusing to push it to its most extreme consequences. The only way out is through. The way to defend popular music against the Frankfurt School critique — not that I think it even needs to be defended — is not by taking refuge in notions of “authenticity” in order to deny its commodity status, but rather to work out how the power of this music comes out of — rather than existing in spite of — its commodity status, how it works through the logic of repetition and commodification, and pushes this further than any capitalist apologetics would find comfortable.

Such an approach is not easy to articulate; I haven’t yet succeeded in doing so, and I can’t blame Attali for not successfully doing so either. “Composing,” the brief last chapter of Noise, at least attempts just such a reinvention — in a way that Frankfurt School thinkers like Adorno would never accept. Which is why I liked this final chapter, even though in certain respects it feels quite dated. Attali here reverses the gloomy vision of his “Repeating” chapter, drawing on music from the 1960s (free jazz, as well as the usual rock icons), in order to envision a new historical stage, a liberated one entirely beyond the commodity, when music is no longer a product, but a process that is engaged in by everyone. Attali doesn’t really explain how each person can become his/her own active composer/producer of music, rather than just a passive listener; but what’s brilliant about the argument, nonetheless, is that it takes off from a hyperbolic intensification of the position of the consumer of recorded music (instead of negating this consumer as a good Hegelian Marxist would do). As the consumption of music (and of images) becomes ever more privatized and solipsistic, Attali says, it mutates into a practice of freedom:

Pleasure tied to the self-directed gaze: Narcissus after Echo… the consumer, completing the mutation that began with the tape recorder and photography, will thus become a producer and will derive at least as much of his satisfaction from the manufacturing process itself as from the object he produces. He will institute the spectacle of himself as the supreme usage. (144)

Writing before the Walkman, let alone the iPod and the new digital tools that can cut, paste, and rearrange sounds with just the click of a mouse, Attali seems to anticipate (or to find in the music of his time, which itself had a power of anticipation) our current culture of sampling, remixing, and file-trading, as well as the solipsistic enjoyment of music that Simon Reynolds finds so creepy (“those ads for ipods creep me out, the idea of people looking outwardly normal and repressed and grey-faced on the subway but inside they’re freaking out and going bliss-crazy”). And if Attali writes about these (anticipated) developments with some of the naive utopianism that has been so irritating among more recent cyber-visionaries, he has the excuse both of the time in which he was writing AND the fact that his vision makes more sense — as a project for liberation, rather than as a description of what technology all by itself is alleged to accomplish — in the context of, and counterposed to, the previous chapter’s Adornoesque rant. Despite all his irritating generalizations and dubiously overstated claims, Attali may really have been on to something here. The problem, of course, is how to follow it up.

SIPs

One of the more amusing features added to amazon.com recently is the inclusion, for many books, of SIPs: Statistically Improbable Phrases. As it is explained on the website:

Amazon.com’s Statistically Improbable Phrases, or “SIPs”, show you the interesting, distinctive, or unlikely phrases that occur in the text of books in Search Inside the Book. Our computers scan the text of all books in the Search Inside program. If they find a phrase that occurs a large number of times in a particular book relative to how many times it occurs across all Search Inside books, that phrase is a SIP in that book.
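
That description amounts to a simple ratio: a phrase’s count within one book, divided by its expected count if the phrase were spread evenly across the whole corpus. Amazon doesn’t publish its actual formula, so the following is only a guess at the general shape of the computation; the function name, the smoothing constant, and the toy data are all my own inventions:

    from collections import Counter

    def sip_scores(book_phrases, corpus_phrases, n_books, smoothing=1.0):
        """Rank phrases by how overrepresented they are in one book,
        relative to the whole corpus (a rough stand-in for SIPs)."""
        book = Counter(book_phrases)
        corpus = Counter(corpus_phrases)
        scores = {}
        for phrase, count in book.items():
            # Expected occurrences per book; the smoothing constant keeps
            # phrases unseen elsewhere from causing a division by zero.
            expected = (corpus[phrase] + smoothing) / n_books
            scores[phrase] = count / expected
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy usage: a phrase that is rare corpus-wide but repeated in one
    # book floats to the top of that book's list.
    book = ["strange attractors", "the market", "strange attractors"]
    corpus = ["the market"] * 400 + ["strange attractors"] * 2
    print(sip_scores(book, corpus, n_books=100)[:2])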

Just now I was looking at the page for an academic essay anthology called The New Economic Criticism: Studies at the Interface of Literature and Economics, and among the SIPs I found the following:

gold humbug, rentier culture, ersatz economics, scriptural money, imperial grammar, rhetorical tetrad, symbolic money, critical economists, economic genre, constitutive metaphors, ethical economy, universal equivalent, metaphorical field, novel machine, doing economics, realistic writing, economic discourse, feminist economists, general equivalent, hot pressure, beautiful shirts

Maybe I should leave this list to speak for itself. I don’t find “general equivalent” or “doing economics” or even “feminist economists” to be all that surprising… but “beautiful shirts”?

The Savage Girl

Alex Shakar’s The Savage Girl is a novel about advertising, marketing, and coolhunting. The landscape is allegorical (a purgatorial city built on the slopes of a live volcano), but the details of life are recognizably present-day American. I was less interested in the characters and plot than in the way the book (like much SF) works as a kind of social theory.

The world of The Savage Girl is dominated, not by scarcity and need, but by abundance, aesthetics, and artificially created desires. Thanks to consumer capitalism, human beings have passed from the realm of Necessity to the realm of Freedom. We stand on the verge of the “Light Age” — sometimes spelled the “Lite Age” — “a renaissance of self-creation,” when, thanks to the wonders of niche marketing, “we’ll be able to totally customize our life experience — our beliefs, our rituals, our tribes, our whole personal mythology — and we’ll choose everything that makes us who we are from a vast array of choices” (24). In such an Age, “beauty is the PR campaign of the human soul” (25), inspiring us to aspire to more and more. Virginia Postrel herself couldn’t have put it any better; only Shakar is dramatizing the ambiguities and ironies of what Postrel proclaims all too smugly and self-congratulatorily.

I just mentioned “ironies”; but Shakar suggests that this utopia of product differentiation has as its correlate a “postironic” consciousness. (All the enthusiastic theorizing in the novel is done by the various characters; which allows Shakar’s narrative voice, by contrast, to remain perfectly poker-faced and deadpan). This is something emerging on the far side of the pervasive, David Letterman-esque irony that informs advertising today. For “our culture has become so saturated with ironic doubt that it is beginning to doubt its own mode of doubting… Postironists create their own set of serviceable realities and live in them independently of any facets of the outside world that they choose to ignore… Practitioners of postironic consciousness blur the boundaries between irony and earnestness in ways we traditional ironists can scarcely understand, creating a state of consciousness wherein critical and uncritical responses are indistinguishable. Postirony seeks not to demystify, but to befuddle…” (140). This sounds a lot like the Bush White House, and its supporters in the “faith-based community.” But Shakar suggests that it is much more applicable, even for “reality-based” liberals, because it is in the process of becoming the universal mode of being of the consumer. Postirony leads to “a mystical relationship with consumption.” The commodity is sublime. In a world without scarcity or need, it is only through the products we purchase that we can maintain a relationship with the Infinite.

Shakar’s other, related crucial idea is that of the paradessence (short for “paradoxical essence”). “Every product has this paradoxical essence. Two opposing desires that it can promise to satisfy simultaneously.” The paradessence is the “schismatic core” or “broken soul” of every consumer product. Thus coffee promises both “stimulation and relaxation”; ice cream connotes both “eroticism and innocence,” or (more psychoanalytically) both “semen and mother’s milk” (60-61). The paradessence is not a dialectical contradiction; its opposing terms do not interact, conflict, or produce some higher synthesis. Nothing changes or evolves. Rather, the paradessence is a matter of “having everything both ways and every way and getting everything [one] wants” (179). This is a promise that only the commodity can make; it’s a way of being that cannot be sustained in natural, ‘unalienated’ life, but only through the artificial paradise of consumerism. I don’t know how familiar Shakar is with Deleuze and Guattari; but his analysis runs parallel to theirs, when he has his marketing-guru character declare that the pure form of postirony and paradessence is literally schizophrenia (141).

The Savage Girl centers around an advertising campaign for a product that promises everything, precisely because it is literally nothing. This product is called “diet water”: “an artificial form of water… that passes through the body completely unabsorbed. It’s completely inert, completely harmless”, and has no effect whatsoever. It doesn’t actually quench thirst; but as a result, it also doesn’t add to the drinker’s weight, doesn’t make her feel bloated. If you still feel thirsty after a drink of diet water, all you have to do is “buy more.” The consumers “can drink all they want, guilt-free” (44).

Diet water is pure exchange value, image value, and sign value. It’s the perfect product for a world beyond scarcity, as beyond guilt: for it remains scrupulously apart from any use or need. The wildly successful advertising campaign for diet water simultaneously manipulates images of schizophrenic breakdown and primitivist innocence. The ads express the paradessence of diet water; more, they underline how diet water, the perfect commodity, is postironic paradessence personified.

There’s more, like the idea of trans-temporal marketing: marketers from the future have come back in time to colonize us, so that we will purchase their not-yet-existent products, a consumer decision that will cause those products to come into being, together with the controlling marketers themselves. But I won’t summarize the book’s concepts (or its plot) any further. For the most important thing about The Savage Girl is the way it situates us (the readers/consumers) in relation to the practices it depicts. For Shakar, there’s no outside to the world of commodity culture, no escape from the paradise of marketing that the novel depicts. There’s no external point from which to launch a critique, no way to make an ironic dismissal that isn’t already compromised.

And I think this is precisely right; the market society can’t be dismantled by stepping outside of its premises. Anti-commercial activists always come off sounding puritanical and moralistic; telling people to stop shopping is no way to build an oppositional political movement. We can only change things when we begin by affirming the whole extent of our own implication in the system we say we are trying to change. We get nowhere by criticizing capitalism for its abundance, or by accusing it of lacking ‘lack.’ If consumerist capitalism is an empty utopia, as I think it is, it’s only by exploiting and expanding its utopianism, rather than rejecting it, that we can hope to move it beyond its limits, and dislocate it from itself.

Hunter Thompson RIP

I just learned (via Warren Ellis) that Hunter Thompson has killed himself.

Very sad news. Thompson hadn’t written much of interest lately — though he did turn out the occasional column accurately registering the utter vileness of the Bush regime and of America’s lurch toward xenophobia, repression, and willful ignorance — and it might even be said that in his later years he became, as a writer, a living parody of himself, his paranoid content and lurid rhetoric having become all too predictable reflexes. But at his best, and very much so in his earlier years, he definitely was a great writer. Fear and Loathing in Las Vegas remains a masterpiece, an absolutely brilliant, savage, and hilarious decoding of the American Dream, the only work of “New Journalism” that (unlike the tomes of Tom Wolfe and Norman Mailer) has outlived the times in which it was written. Much of his other journalism from the 1960s and 1970s is nearly as good. Thompson was well-nigh definitive on Richard Nixon. All in all, he was the conscience of his times: times that were more accurately represented by his “gonzo” excesses than they could have been by any more conventional, naturalistic, and restrained mode of reportage.

Of course, you can’t talk about Hunter Thompson as a writer without confronting, as well, Hunter Thompson the legend, with the beer and the pot and the drugs and the guns and the continual acting out. By all accounts, he really was outrageous and crazy and bigger than life, and his written self-dramatizations are not as wildly exaggerated as they might seem. But as narcissistic self-mythologizing monsters go, Hunter Thompson was, by all accounts, an unusually honest and decent one.

There’s no information (at least so far) about why Thompson killed himself. The news story only quotes his son as requesting that the family’s privacy be respected. I have no way of speculating, and I can only say that, whatever the reasons for his act, Hunter Thompson will be missed.

Theory of Fun for Game Design

Raph Koster’s Theory of Fun for Game Design is, as its title implies, less a “how-to” guide for game designers than it is a critical reflection on what games are (especially contemporary computer games), how they work, and why they appeal to people — with only very general pragmatic advice on how to design games, based on these reflections. Koster himself is a celebrated game designer, who has been involved in the creation of such massive multiplayer games (online worlds) as Ultima Online and Star Wars Galaxies.

I had some particular reasons for reading this book. Although I am fascinated by online virtual worlds (and spent a lot of time at one of the old text-based ones, LambdaMOO, back in the mid-1990s), I’ve never been any sort of a gamer. I don’t like either competitive games, or puzzle-solving ones. The problem is, precisely, that I never find them fun. With competitive games, I feel every bit as much humiliation and pain from losing as anybody does; but unfortunately, I get no pleasure or gratification whatsoever from winning. For me, it’s a bit like the old Groucho Marx line (“I wouldn’t join any club that would have me as a member”): anything competitive that I can do successfully seems to me trivial and stupid and not worth doing. The same goes for the solo games where you play against the machine. As for puzzles, they similarly strike me as trivial and inane if I can solve them, and unbearably tedious if I can’t. So I’m literally in a no-win situation when it comes to games. I don’t have the patience to play them, and I don’t ever get the emotional rewards most people get by mastering them. The result is that I don’t know anything about games. This bothers me, because games are indubitably where the most interesting and innovative things are happening, when it comes to new media, or even to aesthetics in the world today.

But I want to write about Koster’s book, not my own neurotic dilemmas. Koster is a smart and personable guy, who has thought long and hard about the meanings and implications of what he does as a game designer. The book is appealing, too, because it’s both intelligent and highly accessible, making its arguments with clear prose on the left-hand pages, and amusing cartoons on the right-hand ones. The cartoons are not just illustrative, but actually contribute to the ongoing argument. Since Koster is not an academic (though he is very interested in what academics have to say about gaming), he is able to make his book a multimedia experience, even though we never leave the printed page.

Koster basically sees games as “exercises for our brains” (38), artificial, abstract spaces in which we learn and practice, and (hopefully) end up mastering, various skills. (By mentioning “brains,” he is not opposing ‘mental’ skills to ‘physical’ ones; games can cover everything from abstract logical reasoning to motor skills; they involve not just ‘thinking’, but responding to sensory cues). Games are “limited formal systems,” which is part of what makes them different from real life; but they are not escapist, because they provide training which can be useful, or even vital, in real life. Games are fun, Koster says, because they provide the pleasure — the endorphin high, perhaps — that comes from “that moment of triumph when we learn something or master a task… In other words, with games, learning is the drug” (40).

Koster draws a rich and complicated series of consequences from these (seemingly simple) premises. I won’t attempt to summarize these consequences here. But Koster discusses such varied and deep matters as: what makes games boring, and how to avoid that; the relation between the underlying formal structures of games, which is where their puzzlement, challenge, and satisfaction lie, and the narratives in which games are almost always, and necessarily, embedded; the advantages and disadvantages of games in comparison with other media (like verbal fiction); and the potentialities and limits of games as works of art. Along the way, he also touches on such subjects as the moral responsibilities of game designers, and the need for games to become richer, and more emotionally complex, than they have been heretofore.

I feel I learned a lot from Theory of Fun for Game Design; Koster provoked me to think a lot more than most academic books tend to do. (I hope that doesn’t seem like too backhanded a compliment). It’s only against this background of general enthusiastic approval that I will note what seems to me to be the book’s major limitation. That is its overall reliance on assumptions drawn from cognitive psychology: which increasingly seems to be the “common wisdom” of our society today, much as Freudianism was fifty years ago. In line with this common wisdom, Koster overemphasizes cognitive skills, and gives short shrift to emotions (or, as I prefer to say, affect). In fairness, he does say that games, as abstract formal systems, are limited in comparison to novels and movies precisely because they are all about puzzle-solving skills, but are not so good at rendering the nuances of emotion. But when Koster comes to talk about the emotions, he describes them, in the standard cognitive terms, as markers of our efforts — as social primates — to attain higher social status and prestige.

As I’ve argued many times before, the problem with this sort of approach — not Koster’s in particular, but that of cognitive psychology itself, and of today’s “common wisdom” in general — is that 1) it is too narrowly functionalist; and 2) it makes the fundamental error of assuming that how something evolved or came into being is the key to understanding its meaning and usage now. But as Nietzsche said, “the cause of the origin of a thing and its eventual utility, its actual employment and place in a system of purposes, lie worlds apart.” Whatever their evolutionary origins, our emotions today are florid, ambivalent, multivalent, and often perverse, dysfunctional, or simply divorced from (positive or negative) function. Even if we really knew how they evolved (which we don’t; all we have are hypotheses that are grounded more in coherence with other dogmas than in any sort of empirical evidence), that would tell us very little about how they work, how they drive us, now. In their excess and gratuitousness, our affects are highly ludic — even when, and perhaps especially when, experiencing them isn’t much fun. And so, as cogent as I find Koster’s cognitive description of games (which includes his acknowledgement of how they often reward violence, aggression, and paranoia, at the expense of empathy and interdependence), I still think that something absolutely crucial is missing: the affect of games and gaming. Of course, if I understood that, I might have a greater degree of insight into my own aversion to games, and my preference for other, equally (or more) sterile ways of subverting utility and wasting time.

Raph Koster‘s Theory of Fun for Game Design is, as its title implies, less a “how-to”guide for game designers than it is a critical reflection on what games are (especially contemporary computer games), how they work, and why they appeal to people — with only very general pragmatic advice on how to design games, based on these reflections. Koster himself is a celebrated game designer, who has been involved in the creation of such massive multiplayer games (online worlds) as Ultima Online and Star Wars Galaxies.

I had some particular reasons for reading this book. Although I am fascinated by online virtual worlds (and spent a lot of time at one of the old text-based ones, LamdbaMOO, back in the mid-1990s), I’ve never been any sort of a gamer. I don’t like either competitive games, or puzzle-solving ones. The problem is, precisely, that I never find them fun. With competitive games, I feel every bit as much humiliation and pain from losing that anybody does; but unfortunately, I get no pleasure or gratification whatsoever from winning. For me, it’s a bit like the old Groucho Marx line (“I wouldn’t join any club that would have me as a member”): anything competitive that I can do successfully seems to me trivial and stupid and not worth doing. The same goes for the solo games where you play against the machine. As for puzzles, they similarly strike me as trivial and inane if I can solve them, and unbearably tedious if I can’t. So I’m literally in a no-win situation when it comes to games. I don’t have the patience to play them, and I don’t ever get the emotional rewards most people get by mastering them. The result is, that I don’t know anything about games. This bothers me, because games are indubitably where the most interesting and innovative things are happening, when it comes to new media, or even to aesthetics in the world today.

But I want to write about Koster’s book, not my own neurotic dilemmas. Koster is a smart and personable guy, who has thought long and hard about the meanings and implications of what he does as a game designer. The book is appealing, too, because it’s both intelligent and highly accessible, making its arguments with clear prose on the left-hand pages, and amusing cartoons on the right-hand ones. The cartoons are not just illustrative, but actually contribute to the ongoing argument. Since Koster is not an academic (though he is very interested in what academics have to say about gaming), he is able to make his book a multimedia experience, even though we never leave the printed page.

Koster basically sees games as “exercises for our brains” (38), artificial, abstract spaces in which we learn and practice, and (hopefully) end up mastering, various skills. (By mentioning “brains,” he is not opposing ‘mental’ skills to ‘physical’ ones; games can cover everything from abstract logical reasoning to motor skills; they involve not just ‘thinking’, but responding to sensory cues). Games are “limited formal systems,” which is part of what makes them different from real life; but they are not escapist, because they provide training which can be useful, or even vital, in real life. Games are fun, Koster says, because they provide the pleasure — the endorphin high, perhaps — that comes from “that moment of triumph when we learn something or master a task… In other words, with games, learning is the drug” (40).

Koster draws a rich and complicated series of consequences from these (seemingly simple) premises. I won’t attempt to summarize these consequences here. But Koster discusses such varied and deep matters as: what makes games boring, and how to avoid that; the relation between the underlying formal structures of games, which are where their puzzlement, challenge, and satisfaction lie, and the narratives in which games are almost always, and necessarily, embedded; the advantages and disadvantages of games in comparison with other media (like verbal fiction); and the potentialities and limits of games as works of art. Along the way, he also touches on such subjects as the moral responsibilities of game designers, and the need for games to become richer, and more emotionally complex, than they have been heretofore.

I feel I learned a lot from Theory of Fun for Game Design; Koster provoked me to think a lot more than most academic books tend to do. (I hope that doesn’t seem like too backhanded a compliment). It’s only against this background of general enthusiastic approval that I will note what seems to me to be the book’s major limitation. That is its reliance on the assumptions of cognitive psychology, which increasingly seems to be the “common wisdom” of our society today, much as Freudianism was fifty years ago. In line with this common wisdom, Koster overemphasizes cognitive skills, and gives short shrift to emotions (or, as I prefer to say, affect). In fairness, he does say that games, as abstract formal systems, are limited in comparison to novels and movies precisely because they are all about puzzle-solving skills, but are not so good at rendering the nuances of emotion. But when Koster comes to talk about the emotions, he describes them, in standard cognitive terms, as markers of our efforts — as social primates — to attain higher social status and prestige.

As I’ve argued many times before, the problem with this sort of approach — not Koster’s in particular, but that of cognitive psychology itself, and of today’s “common wisdom” in general — is twofold: 1) it is too narrowly functionalist; and 2) it makes the fundamental error of assuming that how something evolved or came into being is the key to understanding its meaning and usage now. But as Nietzsche said, “the cause of the origin of a thing and its eventual utility, its actual employment and place in a system of purposes, lie worlds apart.” Whatever their evolutionary origins, our emotions today are florid, ambivalent, multivalent, and often perverse, dysfunctional, or simply divorced from (positive or negative) function. Even if we really knew how they evolved (which we don’t; all we have are hypotheses that are grounded more in coherence with other dogmas than in any sort of empirical evidence), that would tell us very little about how they work, and how they drive us, now. In their excess and gratuitousness, our affects are highly ludic — even when, and perhaps especially when, experiencing them isn’t much fun. And so, as cogent as I find Koster’s cognitive description of games (which includes his acknowledgement of how they often reward violence, aggression, and paranoia, at the expense of empathy and interdependence), I still think that something absolutely crucial is missing: the affect of games and gaming. Of course, if I understood that, I might have greater insight into my own aversion to games, and my preference for other, equally (or more) sterile ways of subverting utility and wasting time.