Object-Oriented Philosophy

On his marvelous new blog, on which he manages to write more in a day than I do here in a month, and with consistent brilliance, Graham Harman makes a concession (or, I should probably rather say, a restatement) that I had been hoping to hear from him for a long time:

It’s not a matter of forgetting Kant’s exclusion from the in-itself. It’s a matter of questioning why he gives humans a monopoly on such exclusion. In a sense, I’m trying to let rocks, stones, armies, and Exxon join in the fun of being excluded from the in-itself. A sort of Kantianism for inanimate objects.

This is pretty close to one of the major theses of my own forthcoming book on Whitehead:

Whitehead rejects correlationism and anthropocentrism precisely by extending Kant’s analysis of conditions of possibility, and of the generative role of time, to all entities in the universe, rather than confining them to the privileged realm of human beings, or of rational minds. (p. 79)

Throughout his books, Harman rightly praises Whitehead for rejecting what Harman calls “the philosophy of human access,” that is to say, the philosophy that gives a privileged position to human subjectivity or to human understanding, as if the world’s very existence depended upon our ability to know it.  Rejecting the philosophy of human access means, among other things, rejecting Kant’s privileging of epistemology. As Whitehead puts it, since the 18th century, and especially since Kant, “the question, What do we know?, has been transformed into the question, What can we know?” (PR 74). What’s so energizing about Harman’s “object-oriented philosophy,” or about “speculative realism” more generally, is that it refuses to subordinate its arguments about the nature of the world (or about anything, really) to (second-order) arguments about how we can know whether such (first-order) arguments are correct. Kant endeavored to use the subordination of what we know to how we can be sure about the validity of what we know as a firm grounding for “any future metaphysics”; but of course this kind of meta-questioning inevitably leads to an infinite regress, or to an infinite argumentation that prevents one from ever making any actual arguments (this, I take it, is the witting or unwitting lesson of Derrida and of deconstruction). When we privilege epistemology, or the question of what we can know, over metaphysics, or the question of what we do know, we fall into the abyssal rabbit-hole that Hegel called the “bad infinity”. [Though in truth, I have always preferred this “bad infinity” to the sort of infinity of which Hegel approved — because the latter seems to involve a kind of fatuous self-confirmation, that would make “what we can know” into the measure of all existence. 
Kant at least insists that there are things whose existence we must affirm, even though we cannot know anything positive about them — sort of like Rumsfeld’s now famous “unknown unknowns” — whereas Hegel entirely subordinates existence to knowability. But that is a subject for another essay].

Now, I understand that Kant is the godfather of what Harman calls “the philosophy of human access,” or what Quentin Meillassoux calls “correlationism.” Seriously, for all the speculative realists, Kant is the Number One bad guy. Nonetheless, as I have already suggested, it has long bothered me that Harman was (at least until now) unwilling to say about Kant’s “things in themselves” what he says about Heidegger’s “tool-being”: that the concept is an important one, in underlining how things, or objects, cannot be reduced to our knowledge of them; that is to say, how things have a subterranean existence beyond whatever aspects of them we (or for that matter, any other entities that encounter them) are able to grasp. (Since Harman’s whole point is that there is no sense in privileging my encounter with a stone over, say, the snow’s encounter with that stone — the same problems of limited access arise in both situations). Harman argues that Heidegger makes a crucial step beyond human access with his concept of tool-being, even if he falls back into privileging human access in other aspects of his thought (like whenever he talks about Dasein). Couldn’t one make exactly the same argument vis-à-vis Kant?

Admittedly, I ask this question to a large extent for aesthetic and stylistic reasons. It is simply that (perversely, I admit) I enjoy Kant’s prose, while I do not get any pleasure at all from Heidegger’s. (As I have forgotten what little German I ever knew, I read them both only in translation, which makes the question of my likes and dislikes even more dubious and complicated). These preferences aside, however, the right question to ask is: what difference would it make to Harman’s argument if it were to be founded on Kant’s doctrine of things in themselves, and the impossibility of accessing the in-itself, instead of on Heidegger’s doctrine of tool-being (or the “subterranean reality” of things “which never comes openly to view” — Tool-Being, p. 24), and the irreducibility of things to their mere presence (or present-at-handedness)? How would Harman’s argument change, if it were to credit Kant instead of Heidegger with the discovery of a subterranean reality beyond, and irreducible to, representation and presence?

I am not sure about this, but my preliminary suspicion is that a recourse to Kant instead of Heidegger might force Harman to abandon, or at least modify, one of the most important features of his argument: his brilliant revival of the philosophical doctrines, which have been despised for most of the last several centuries, of substantialism and occasionalism. For Harman, if objects have a “subterranean reality,” beyond whatever relations they enter into, and beyond whatever qualities other objects are able to grasp of them, this means that all things or objects in the world are independent substances, entirely separate from one another. And, given that objects or substances are radically disjointed from one another, the relations between substances — which, ordinarily, we just take for granted — themselves need to be explicitly explained. As Whitehead says (and this is his criticism of substantialism; or his criticism of Harman in advance, as it were):

Such an account… renders an interconnected world of real individuals unintelligible. The universe is shivered into a multitude of disconnected substantial things, each thing in its own way exemplifying its private bundle of abstract characters which have found a common home in its own substantial individuality. But substantial thing cannot call unto substantial thing. (Adventures of Ideas, p. 133)

Harman answers this objection by recourse to occasionalism, or to what he also calls vicarious causation. An “occasion” must be posited to show how independent entities, each locked into its own subterranean existence, could encounter one another at all, even superficially. In the 17th century, occasionalism meant the intervention of God at every moment in every interaction between two or more entities. Harman argues, for the very first time, for a non-theistic occasionalism; he creatively explains how interactions between objects can occur, but can only occur, when both objects are located in the interior of some larger, or more all-encompassing object. The universe has layers of reality, and we never get either to the bottom or to the top.

Now, substantialism and occasionalism are the aspects of Harman’s thought that most perturb his readers (myself included). One would like to accept his “object-oriented,” anti-correlationist argument, his refusal to place “human access” at the center of things, or to give such access a uniquely privileged status, without thereby having to accept the radically anti-relational consequences that he draws from this argument. To think this way, however, is to do Harman an injustice: his substantialism/occasionalism is not a bug but a feature; it is precisely the creative core of his metaphysics. So what follows might well be just another attempt to evade the full audacity of Harman’s argument.

Nonetheless, I do think that reference to Kant’s “things in themselves” might really make a difference here. Heideggerian tool-being is inherently relational and “global,” as Harman explains. But by pushing Heidegger just a little bit, Harman is nonetheless able to argue that “tool-being recede[s] not just behind human awareness, but behind all relation whatsoever” (Tool-Being p. 288). For if human awareness loses its privileges, and is no different from any other sort of relation among objects, then what Heidegger says against the delusions of presence applies just as well to all other forms of relation. I want to suggest, however, that this logic might change if we see Heidegger’s argument about presence as a derivative of Kant’s argument about the relativity of phenomena. For Kant, noumena lurk inaccessibly behind phenomena, just as for Heidegger, the hidden tool-being of all entities lurks inaccessibly behind those entities’ presence-at-hand. But for Kant (unlike Heidegger?) the limitation which grasps of noumena only their reduced phenomenal profile is not only a loss or a reduction, but also a positive act, a construction, a bringing-into-relation. (This is why Whitehead, despite all his criticisms of Kant, nonetheless praises Kant as “the great philosopher who first, fully and explicitly, introduced into philosophy the conception of an act of experience as a constructive functioning” — Process and Reality, p. 156). Phenomena are generated out of the encounter between subject and object in Kant — but if one is willing “to let rocks, stones, armies, and Exxon join in the fun of being excluded from the in-itself,” then we can say that phenomena are positively generated out of all encounters between objects: this move away from human access, and toward objects indiscriminately, is precisely what Whitehead accomplishes (so that, for Whitehead, “subjectivity” is precisely the result of such a constructive process, rather than what initiates it).

Now, when Heidegger (followed by Derrida) attacks metaphysical and scientific thought for its reduction of the reality of things to mere presence, what he misses is the Kantian sense in which any such reduction is also a positive construction: it is a new event, a creation, a transformation or a “translation.” (I am thinking here of what Levi Bryant calls “Latour’s Principle”: “there is no transportation without translation.” Harman’s own book on Latour is coming soon). Heidegger’s critique of presence might be summarized as the idea that translation is always a betrayal of that which is ostensibly being translated. But Kant’s conception of constructive functioning maintains that translation is the creation of something new: a successful translation (which for Heidegger is impossible) is not a perfectly faithful reproduction of the original, but precisely (to cite the terms of Latour’s Principle in inverse order) an act of transportation, a carrying-across which, in the process, thereby makes something new. From this point of view, both Whitehead and Latour give us a Kantianism without privileging human access, a Kantianism for all entities. And seeing the constructive work of relays and transportations/translations in this manner releases us from the desperate recourse (though, of course, Harman does not see it this way) to positing a universe of occult substances that can only communicate vicariously.

To put this in another way, just briefly (since this is something I am still working on, and trying to work out): Harman’s criticism of Whitehead is that Whitehead’s vision of relationality reduces the world to an endless infinite regress, something that is “too reminiscent of a house of mirrors.” According to Harman’s summary, for Whitehead any entity “turns out to be nothing more than its perceptions of other entities. These entities, in turn, are made up of still further perceptions. The hot potato is passed on down the line, and we never reach any reality that would be able to anchor the various perceptions of it” (Guerrilla Metaphysics, p. 82). This criticism, however, is based on the assumption (precisely rejected by Whitehead) that “perceptions” are nothing positive in themselves, but just passive registrations of that which is perceived. Harman’s objection no longer holds, once we recognize that “perception” (or what Whitehead rather calls “prehension,” precisely to differentiate from the Humean, or classical empiricist, notion of perception) is itself a constructive functioning, a positive, creative and self-creative, process. And it is in all these acts of perception themselves that the “reality” already exists and “anchors” everything around it.

Harman also says that “no relational theory such as Whitehead’s is able to give a sufficient explanation of change,” because if a given entity “holds nothing in reserve beyond its current relations to all entities in the universe, if it has no currently unexpressed properties, there is no reason to see how anything new can ever emerge” (ibid.). But Whitehead doesn’t quite say this; he says, rather, that what he calls the “subjective aim,” which is the way in which an entity skews or modifies its relations to all other entities, in a process of “decision,” is precisely that which the entity holds “in reserve” in relation to the other entities that it perceives. Once again, because Harman follows Heidegger (instead of Kant), he is unable to give credit to the way that perception as constructive functioning, precisely because it is always incomplete or selective, thereby produces new properties, new twists of relation, and thereby gives us novelty without the need to have recourse to occult substances.

I will be the first to admit that my argument here is incomplete; I need to say something as well about Whitehead’s notorious “eternal objects,” which play an important role in the processes over which I am disagreeing with Harman. I probably also need to say something about Whitehead’s notion of God, and how it relates to Harman’s counter-intuitive attempt to assert an occasionalism without God. And I certainly need to spell out more fully how I see Whitehead as championing a Kantianism without privileging human access. But for now, I have run out of energy and this post is already too long.

Michael Swanwick, Wild Minds

Michael Swanwick’s 1998 short story “Wild Minds” (which I found in the collection The Best of Michael Swanwick) offers a different angle on the issues most recently raised by Scott Bakker’s Neuropath. The story is set in a future world in which “the workings of the human brain were finally and completely understood” by science. As a result, traditional “education” is no longer necessary, since everything can be “learned” by direct bioelectrochemical manipulation: “anybody could become a doctor, a lawyer, a physicist, provided they could spare the month it took to absorb the technical skills.” The complete understanding of the brain also renders traditional notions of guilt, crime, and punishment irrelevant. The narrator of the story has committed a murder; but he recalls that “a panel of neuroanalysts had found me innocent by virtue of a faulty transition function and, after minor chemical adjustments and a two-day course on anger control techniques, had released me onto the street without prejudice.”

In a world where the human brain is completely understood, there is no more learning ‘for its own sake’; nor is education a job requirement. Instead, “most corporations simply educated their workforce themselves to whatever standards were currently needed.” Isn’t this the logical next step, under our current regime of cognitive capitalism? The “valorization” of capital now takes place 24/7, in leisure time as well as in work time, in the processes of circulation and consumption, no less than in those of “production” proper. The automation of education by direct manipulation of the brain would seem to validate Rancière’s axiom of equality, his insistence upon the generic propensity to learn that is equal in all human beings, all intelligences. Yet, far from being liberating, or even resulting in a greater measure of social equality, the mobilization of this generic propensity results in a further exacerbation of corporate control, with its invidious distinctions and its incessant accumulation of capital. Corporations have learned to commodify and exploit, not just “labor power” in the classic sense, but also, and above all, that “general intellect” which (according to Negri, Lazzarato, et al) is the true source of wealth in the postmodern or post-Fordist era. General intellect has been technologized to the point where it can be installed in any given individual at will (and if you can afford the investment — as corporations can). It is therefore entirely open to be exploited for the extraction of surplus value.

In other words, neither the equalization of intelligences, nor the movement of “real subsumption” that leads from the factory floor to the common activities of humankind, is in the least bit liberatory.

Now, it’s become almost a cliche these days to warn against “the kind of remorselessly monopolist accounts of capitalism that act as a kind of intellectual and political bulldozer,” and thereby overlook real possibilities of resistance and a new sort of politics (I am quoting Nigel Thrift, Non-Representational Theory, p. 23). I am inclined to think, however, that this is one kind of criticism that needs to be inverted. There has been way too much unwarranted celebration recently of the alleged creativity of fan cultures and the like, and of “empowered” consumers. If such activities are “political,” then they only point up the irrelevance and lack of import of any sort of “politics” that focuses only on “domination” and “empowerment,” and ignores the harsh realities of political economy. Part of what I like about Swanwick’s fable is that it assumes this point as a background assumption, without calling attention to it, or didactically insisting upon it.

In any case, corporate control of the automated education process isn’t the only point of “Wild Minds.” There’s worse. “With knowledge so cheap, the only thing workers had to sell was their character: their integrity, prudence, willingness to work, and hard-headed lack of sentiment.” These are indeed the qualities of character that the new “flexible” and “innovative” capitalism requires. It needs people who will not just contribute during working hours, but devote all of themselves to whatever project is at hand; and yet with a sufficient lack of sentimental attachment that, once any given project is over, they will move on without regret or nostalgia to something entirely different. And, given the technology that results from a complete understanding of the human mind, it turns out that this kind of character formation can also be achieved by technological means: “it was discovered that a dozen spiderweb-thin wires and a neural mediator the size of a pinhead would make anybody as disciplined and thrifty as they desired. Fifty cents worth of materials and an hour on the operating table would render anybody eminently employable.”

This process is called “optimization.” It leads to a “blessed clarity that filled my being,” the narrator says; or, more objectively, it leads to an “absolute clarity of thought, even during emergencies. Freedom from prejudice and superstition. Freedom from the tyranny of emotion.” Instead, when you are optimized you have access to “information” that previously you had “ignored or repressed.” When you’re optimized, you realize that (just as Nietzsche said, or as The Argument in Bakker’s Neuropath says) there is no such thing as “free will,” and hence no responsibility. “Self is an illusion. The single unified ego that you mistake for your ‘self’ is just a fairy tale that your assemblers, sorters, and functional transients tell one another.” And so, during a brief simulation of what it is like to be optimized, the narrator finds himself “not regretting a thing. I knew it wasn’t my fault. Nothing was my fault, and if it had been that wouldn’t have bothered me either. If I’d been told that the entire human race would be killed five seconds after I died a natural death, I would’ve found it vaguely interesting, like something you see on a nature program. But it wouldn’t have troubled me.”

Optimization makes for perfect corporate employees. I would think, as well, that it makes for the sort of “bright,” rational, and illusion-free personality type so desired by rationalist crusaders like Richard Dawkins. Indeed, one of the effects of optimization is that it leads almost immediately to the rejection of any prior religious beliefs: their delusive, compensatory quality simply becomes too obvious, and is no longer required. And so, when optimization becomes possible, “the ambitious latched onto [it] as if it were a kite string that could snatch them right up into the sky… Acquiring a neural mediator was as good as a Harvard degree used to be. And — because it was new, and most people were afraid of it — optimization created a new elite.” The optimized are uber-yuppies, living in buildings that are all “shimmering planes and uncertain surfaces… buildings that could never have been designed without mental optimization, all tensengricity and interactive film.” The optimized are separate from the “obsolete people” who have not had the operation; they are virtually a new species, and indeed they “don’t claim to be human.”

The narrator of “Wild Minds,” however, is a reactionary and an ironist; he chooses not to be optimized, and he fervently embraces the illusions and consolations of religion. He clings to Catholicism’s sense of guilt, repentance, and possible redemption. “The thought,” he says, “that a silicon-dosed biochip could make me accept [the murder he committed] as an unfortunate accident of neurochemistry and nothing more, turns my stomach.” He clings to his sense of guilt precisely because he knows that after optimization he would no longer feel this way; that doing what turns his stomach would ensure that it would no longer turn his stomach. He accepts that “being human” is no longer “essential”; yet he “cling[s] to the human condition anyway, out of nostalgia perhaps but also, possibly, because it contains something of genuine value.” “Wild Minds” works as a story precisely because its defense of the “human” against the “posthuman” is so nuanced and tinged with irony; the story wouldn’t be in the least convincing if it preached the eternal verities of the human condition in the usual pompous and high-minded terms. Everything depends upon two crucial points, I think. In the first place, the posthumanity that so many of us have imagined over the last several decades is largely a corporate fantasy; it basically envisions re-engineering “human nature” in line with the demands of what David Harvey has called “flexible accumulation,” or of what Luc Boltanski and Eve Chiapello describe as “the new spirit of capitalism.” And in the second place, no humanist nostalgia about our essential inner being or spirit is able to undermine or interrupt the scientific discovery, at an ever-accelerating pace, of the actual ways that our brains work, of the material (bioelectric, biochemical, neurological) basis of thought.
What we need, instead, is to comprehend how these discoveries are pragmatic and operational, rather than essential or foundational; they are oriented to power and efficacy, to the ways that the brain can be transformed, manipulated, and controlled. In refusing optimization, the narrator of “Wild Minds” acknowledges (far from questioning) the efficacy of such a procedure. And thereby, he challenges us to imagine — even if he himself cannot — a posthuman transformation that would not merely serve the agendas of capital, and of what used to be called (how quaint this title appears today) “instrumental reason.”

Abel Ferrara’s Mary

I finally caught up with Abel Ferrara’s 2005 film Mary: it was the one Ferrara feature (excluding his pre-Driller Killer pornos) that I had never seen before. Needless to say (at least for me, since I have expressed my enthusiasm for Ferrara before, and also, long ago here), it’s amazing. It’s hard to get a total grip on Mary after just one viewing, but I will do my best.

Mary is apparently Ferrara’s response to Mel Gibson’s Passion of the Christ. It concerns a macho-asshole film director (played by frequent Ferrara alter ego Matthew Modine) who has made a film, This Is My Blood, about the life of Jesus, in which he also played the title role, and who is now trying to promote the film, in the face of protests both by Jews (who consider it anti-Semitic) and Christian fundamentalists (who consider it heretical). Strictly speaking, the fundamentalists are right, since the film emphasizes the role of Mary Magdalene as a key disciple of Jesus, drawing upon various suppressed, heretical Gospels. Mary clashes repeatedly with Peter, who seems to reject her role as a disciple largely on sexist grounds. The revisionist reading of Magdalene is supported by interview footage with Elaine Pagels and several other (real-life) scholars and theologians who have worked on early Christianity.

Though Mary does have characters and a straightforward narrative, it is also very much a collage film. We see scenes from the film Modine’s character has made, together with various other types of footage from the (fictional) world in which Modine’s character lives, together with documentary, or documentary-style footage. The scenes from This Is My Blood are gorgeous, in murky chiaroscuro, with a mobile camera that frequently stays close enough to the actors that all we can see are their faces, filling the screen, emerging out of, and returning to, the shadows. Despite the director’s egotistical stunt of playing Jesus, the weight of this film-within-the-film clearly rests with the actress playing Mary, whose feelings — from the mournfulness of witnessing Jesus’ death, to the joy of his resurrection, and the message (rejected by Peter) that she has gotten from him — are subtly, but powerfully, modulated throughout these chiaroscuro sequences.

Mary starts with the film’s final wrap, and mostly takes place a year later, in New York, as Modine is preparing for the premiere. But another plot strand involves the actress playing Mary (this character is played by the great — and woefully underappreciated in the US — Juliette Binoche). Binoche’s character has overidentified with her role; she can’t let go of Mary Magdalene — and she drops everything in order to go to Jerusalem, where she wanders the streets and jostles the crowds on a spiritual quest. The scenes involving her seem to be shot on location, with handheld camera, and bright and even natural lighting; we see documentary-ish scenes of Jewish, Christian, and Muslim prayer, together with ones of her roaming the streets. She embraces the Wailing Wall (?), takes part in a Seder that is interrupted by a terrorist attack (with the fact that the Last Supper was a Seder clearly on her, and Ferrara’s, mind), and prays at the Church of the Nativity in Bethlehem (I think). Binoche has very little dialogue, but anguish (and later, peace) are etched on her face throughout these scenes of quest. And there is an emotional continuity (beyond the stylistic differences) between her scenes in Jerusalem, and those in the film-within-the-film.

I still haven’t mentioned the most extended narrative strand in Mary, which involves an intellectual (as in Charlie Rose, or someone else on PBS) talk-show host (played by Forest Whitaker), who is doing a series of shows focusing on the actual, historical Jesus — hence the interview material with theologians and Biblical scholars. Between his preparations for the series, and his general philandering, Whitaker’s character is woefully neglecting his late-term-pregnant wife (played by Heather Graham), and generally making a mess of his life. Whitaker interviews Modine (and Binoche via telephone) on his show, which is the minimal way in which the various plot strands intersect.

The New York scenes, involving Modine and Whitaker, are mostly at night — they feature the poetry of distantly-lit office skyscrapers, bridges, and freeways, contrasting sharply with both the chiaroscuro of the film-within-the-film, and the clarity of light of the Jerusalem sequences. Whitaker is also often seen in his TV studio, surrounded by video monitors that are usually showing either interview footage, or else the news: domestic (US) riots and crime scenes, and political violence in Israel and Palestine. There are also other dissonant moments; at one point, somebody throws a rock through the window of a limousine in which Whitaker is negotiating with Modine, and the confrontation is shown in music-video style, with swish pans and jump cuts. Throughout the New York scenes, there are also lots of tracking shots down corridors (and sometimes back as well), the vertiginous camera movement accenting the increasingly unhinged emotions of the characters.

So the film is wildly disjunctive stylistically, as well as disjointedly multi-stranded narratively. It’s as if this promiscuously jarring mixture of styles and media were the only way Ferrara could express the actuality of life in the 21st century — and this, in turn, is necessary in order to make the film’s spiritual explorations vital and meaningful, instead of merely antiquarian. As the film proceeds, things become more and more unhinged. Modine confronts the protesters at his film’s premiere; when a bomb threat empties the theater, he locks himself in the projection room and rolls the film despite the absence of spectators. Meanwhile, Whitaker is not there for his wife when she goes into labor and gives birth to a baby boy whose survival is in doubt (it was unclear to me whether this was a case of birth defects or just premature birth; in any case, there’s an amazing scene of the baby, crying and crying while encased in a plastic bubble, as Whitaker tries futilely to comfort the child). By the end of the film, Binoche, surrounded by violence, seems to find a sort of inner peace, while Modine is in the throes of a full-fledged ego breakdown, and Whitaker, weeping, throws himself before the Cross.

All this echoes moments of spiritual intensity in other films by Ferrara (Harvey Keitel abjecting himself at the end of Bad Lieutenant; or the peace that Lili Taylor perhaps finds at the very end of The Addiction). Mary is, I think, the equal of those earlier films. Its greater heterogeneity or fragmentation perhaps lessens the emotional impact a bit, but it has the effect of making Ferrara’s spiritual claims more compelling than ever before. It’s useless to ask whether Ferrara is in a literal sense “religious”; I am inclined to agree, however, with Dennis Lim’s suggestion that Mary is “the rare movie that could stand as a rebuke to both The Passion of the Christ and Religulous.” Ferrara’s sensibility is, of course, deeply Catholic; but this is inflected, in Mary, both by a concern for Judaism (which Ferrara comes back to again and again, throughout the film) and by a general heretical/quasi-feminist edge. The recentering of the film’s implicit theology around Mary Magdalene is expressed through a delirious male abjection before the feminine (in terms both of the role of Binoche, and Whitaker’s hysterical-yet-moving repentance for how he has wronged Graham). One can rightly say that such an inversion of the masculine arrogance Modine’s and Whitaker’s characters both represent is not truly feminist, because it just inverts the gender stereotypes, rather than actually undoing them. But the film’s masculine hysteria is inseparable from its spiritual longings; by which I mean one cannot reduce either of these dimensions to being merely a displaced symptom of the other — they must both be accepted and taken seriously, together. And, looking at the film this way, it charts, and makes, a convulsive emotional movement that is its own evidence and justification. Ferrara’s greatness as an affective filmmaker is unparalleled, and has never (apart from Nicole Brenez’s wonderful book) gotten the recognition that it deserves. 
Ferrara breaks down the distinction between art film and exploitation film, just as he does between spirituality and sleaze. He is absolutely contemporary, and yet he pushes beyond the cheap irony and encapsulated soundbites of all too much contemporary culture.

Neuropath

Scott Bakker’s Neuropath is a science-fiction thriller about a rogue neurosurgeon who kidnaps people and grotesquely manipulates their brains, sometimes killing them in the process, and other times releasing them once their minds have been subtly but horribly deformed. It’s pretty disturbing on a visceral level. Now, the psycho-thriller with a sadistic genius as a villain is a pretty familiar genre at this point (cf., for instance, Hannibal Lecter). But Bakker’s novel offers a science-fictional twist on this genre by extrapolating neuroscience slightly into a plausible near future, so that theoretical prospects hinted at in recent neurobiological and cognitive studies have been confirmed as actual, and current cutting-edge technologies like Transcranial Magnetic Stimulation and functional Magnetic Resonance Imaging have been pushed to a level beyond their actual present capabilities. In spite of these changes, the world of the novel remains in most ways recognizably our own. So one might call Neuropath a hard-SF, near-future psycho-thriller. But even that description is inadequate. What really distinguishes Neuropath is that the book has a concerted thesis, referred to by the characters within the novel as “The Argument” (in capitals); this makes it into a philosophical novel: a contemporary version of what Voltaire called the conte philosophique, and a strong example of SF as “cognitive estrangement” (Darko Suvin’s definition of SF as a genre).

[WARNING: SPOILERS]

The Argument in Neuropath goes something like this. Consciousness is severely limited. It is a very recent evolutionary adaptation, superimposed upon a wide array of older neural processes of which it is unaware, and which it cannot possibly grasp. We are only conscious of a very thin sliver of the external world; and even less of our internal, mental world. Most of our “experience” of the inner and outer world is a neurally-based simulation that has been evolutionarily selected for its survival value, but the actual representational accuracy of which is highly dubious. We are not conscious, and we cannot be conscious, of the actual neural processes that drive us. And indeed, nearly all our explanations and understandings of other people, of the world in which we live, and above all of ourselves are delusional, self-aggrandizing fictions. It’s not just that we misunderstand our own motivations; but that such things as “motivations” and “reasons” for how we feel and what we do actually don’t exist at all. Everything that we say, think, feel, perceive, and do is really just a consequence of deterministic physical (electro-chemical) processes in our neurons. “Every thought, every experience, every element of your consciousness is a product of various neural processes” (pp. 52-53). In particular, “free will” is an illusion. We never actually decide on any of our actions; rather, our sense of choice and decision, and the reasons and motivations that we cite for what we do, are all post-hoc rationalizations of processes that happen mechanistically, through chains of electrochemical cause-and-effect. All our rationales, and all our values, are nothing more than consolatory fictions.

The Argument is close to the “eliminativist” positions of philosophers like Paul and Patricia Churchland, and Thomas Metzinger (and also perhaps Ray Brassier, who draws out the phenomenological consequences of this position in his book Nihil Unbound). Bakker says in his Author’s Note that he is not — or at least does not want to be — an eliminativist and a nihilist, but he cannot think of any valid arguments against such a position. The Argument draws on research in cognitive psychology (with its claims about non-conscious computational processes in the brain, and its studies of the delusional nature of human self-understanding), neurobiology (with its understanding of the actual physical processes that underlie various forms of thought), and (alas, also) evolutionary psychology (with its dubious claims that human values, feelings, understandings, and tendencies to act are “hardwired” adaptations from the Pleistocene). The findings of these research programs are taken as proof that nearly all speculation (philosophical, psychological, fictional, or whatever) on the nature of the mind and of humanity dating from before 1970 or so is utterly worthless, a form of self-congratulatory self-delusion and unwarranted belief. Science is distinguished from all other forms of understanding on the basis that it alone forces us to accept unwanted and dislikable conclusions, because it “doesn’t give a damn about what we want to be true” (Author’s Note, p. 306).

Of course, the fact that Neuropath is a novel, rather than a treatise by a brain scientist, or a philosophical tract by Metzinger, means that it is far more compelling than such works can ever be — and entirely for non-rational, non-cognitive, and non-scientific reasons. Indeed, the book’s most powerful effect is an entirely rhetorical (rather than rational) one. It compellingly discredits in advance any attempt to argue against its reductionist and nihilist theses: for the mere fact of claiming that subjective experience has any validity, or that meanings and values have any significance whatsoever, already convicts you of being somebody who wants desperately to evade the truth by clinging to alibis that flatter our human self-esteem. If you don’t accept The Argument, by that very fact you have discredited yourself and demonstrated the truth of its assertion that all our reasons and beliefs are self-delusions, and that we cannot intuitively grasp, much less face and accept, the gloomy truth about ourselves. Any attempt to say that things aren’t quite as horribly meaningless as The Argument makes out puts you in the category of those people who think they are living in Disney World instead of the real, actual world.

I don’t intend this observation on Neuropath’s self-confirming rhetorical strategy as a criticism; things are rather more complicated than that. Let me explain by putting it another way. The fact that Neuropath is a novel and not a scientific study or philosophical treatise means that it seeks, not to prove its theses either logically or empirically, but rather to demonstrate these theses, by putting them forth as strikingly as possible. And as a demonstration, it is brilliant; all the more so in that the novel’s narrative itself recounts the making of such a demonstration. Even as Bakker demonstrates to us the inescapable truth of The Argument, his main characters Thomas Bible (the protagonist, a Columbia psychology professor) and Neil Cassidy (sic; the antagonist, Bible’s lifelong best friend and the mad neurosurgeon whose crimes dominate the plot) demonstrate the truth of The Argument to the world they live in, and compel its acceptance, without any hope of escape. The novel narratively enacts the very process that it recounts: ironically compelling us to accept the overwhelming evidence for a thesis that we are constitutionally unable to accept, for not only is it violently counter to “common sense,” it undermines the authority of the very process by which we accept and reject ideas.

The Argument was first developed, as Thomas remembers, when he and Neil were undergraduates; they invoked it as a kind of party trick, in order to out-argue, and thereby disconcert and humiliate, English and other humanities majors. After all, if we are just puppets of neurochemical processes, then literary works have no intrinsic value apart from their ability to trigger certain neural responses and thereby pull our strings; and all the claims of literature, philosophy, and art either to insight or morality are bogus. (Bakker, in writing the novel, remains fully aware of this implication, to which his own work must be subjected as much as any other. The novel is in this sense self-consciously ironic, as so many genre narratives tend to be). But in the present time of the narrative, the demonstration reaches rather wider dimensions. Essentially, Neil sets out to set forth The Argument in the very flesh — that is to say, in the brains — of his victims. A billionaire businessman’s brain is rewired so that he is no longer capable of recognizing faces. Even in the mirror, and all the more when he looks at people around him, all he can see is the horrifying, unrecognizable visage of a stranger or an alien. (Prosopagnosia, or facial agnosia, is often discussed in scientific and pop-scientific writing). A porn star’s neural system is tweaked so that sensations of pain activate her brain’s pleasure and reward centers; she is led to compulsively drive herself to orgasm again and again, by slashing and mutilating herself until she dies. A fundamentalist preacher is subjected to neural firings that alternately lead him to feel the damnation of Hell and the joy of salvation. A politician prone to speechify about human dignity and moral responsibility is transformed into a cannibal who avidly devours a still-alive young girl, all the while pathetically protesting that he does not want to want to do this, that he cannot help wanting to do this.
Finally, Neil straps Thomas into a machine he calls Marionette (an extrapolation from actually-existing Transcranial Magnetic Stimulation technology), which makes it possible to forcibly cycle him through a whole series of mental states, ranging from utter despair to a sense of unvanquishable well-being, from gentle benevolence to misanthropic rage, and from self-disgust to exaltation and feelings of omnipotence.

Through such demonstrations, Neil is trying to get Thomas to recognize the truth of The Argument — just as Bakker is trying to get the reader to recognize this truth. That is to say, Thomas advocates The Argument intellectually: he is in fact, more than Neil, its originator. And he has articulated the main points of The Argument in books he has written, and in the classes he teaches. But Thomas doesn’t feel the truth of The Argument viscerally — which is to say that he doesn’t actually live by it. (By his own account, this truth is so uncomfortable that it is impossible to actually live by — not only because we cannot really deal with its bleak truths, but also because we are so constituted that we cannot get rid of our illusions, even if and when we recognize them as illusions). Much to Neil’s disgust, Thomas lives his personal life as if values and meanings really existed, as if free will (or making decisions) were actually possible, and as if his love for his two children actually had sense and were not just the forcible result of evolutionary “hardwiring” and neurochemical programming. Neil justifies his gruesome experiments on the grounds that it is of no consequence whether the neurochemical impulsions that determine his victims to a particular course of action are the result of his own manipulations or just of “the environment” in general — in either case, the human being is a puppet of forces that he/she can neither control nor comprehend. Of course, this also means that Neil’s attempt to demonstrate the deep truth of The Argument is itself without sense, since all human beliefs are ungrounded and without sense. By weaving this level of meta-argument into his narrative, Bakker forestalls us from invoking it as a counter-argument against the book’s demonstration. Everything is beautifully air-tight, as the novel draws into itself, and neutralizes in advance, any attempts to argue against it.

I am tempted to say, therefore, that Neuropath is a cleverly designed hall of mirrors from which there is no exit. But that would still be, I think, to sell the book short. There is more to be said about the fact that, although the novel is grounded in cognitive theory, and practices a particularly intense form of cognitive estrangement, its primary accomplishment is affective, rather than cognitive. This is really just another way of saying that the book is indeed a work of imaginative fiction, rather than a scientific or philosophical treatise. When Neil is torturing Thomas, pulling him through one emotional state after another, he remarks that Marionette has finally accomplished what art has sought to do throughout all of human history: it gives the one who undergoes it (I am not sure what noun to insert here: the viewer? the audience? the consumer? the experiencer?) a powerful, vivid, and utterly compelling and convincing vicarious experience, of total participation in feelings that are not one’s own. (Of course, the larger point is that all human experience is vicarious, or aesthetic, rather than “real” and “actual”. I experience as mine what is really happening to someone else — or better, to no one. As Metzinger puts it, there is nothing that the experience of “being a self” is like, because in fact no such things as selves exist in the world).

And this, I think, is the paradoxical key to the novel. What makes Neuropath so powerful, so memorable, and so compulsively readable, is not The Argument itself, so much as the visceral intensity and horror of the way it is demonstrated. Neil’s manipulations (and those of other neuroscientists in the novel, such as the one who implants nanomachines in the brain of Thomas’ four-year-old son that repeatedly stimulate his amygdala, or so-called “fear center,” so that the child is forcibly in a constant, unremitting state of utter terror) — these manipulations are so disturbing because they are violations of the mind as well as the body. They assault our most intimate sense of self-identity. We like to feel (wrongly) that no matter what happens to our body, our mind (or spirit, or soul) somehow can remain free and unaffected; in disproving this, Neil’s experiments wound human dignity (or human narcissism) more profoundly than either merely “physical” tortures, or doctrines like those of Freud or Metzinger, ever could. The absolute horror comes from intervening in the selfhood of the victim at such an intimate and interior level; the result is an unparalleled sense of absolute devastation. And the book’s reliance on science and technology — the fact that it only slightly extrapolates from what we already know, and what we already can do — makes it menacing in a way that the fantastic (as opposed to the more straightforwardly science-fictional) cannot attain.

This is important because Neuropath is ultimately (perhaps in spite of its author’s intentions) less about what human beings really are, than it is about what human beings can suffer, and what we can accomplish technologically. To put it otherwise, the novel is not so much about the (alleged) essence of the human mind and brain, as it is about power. What I have left out of my account of the novel so far is that Neil has long worked for the National Security Agency, and that the technologies he makes use of have all been developed, and employed, for torturing alleged “terrorists” and other prisoners. The demonstration that Neil seeks to make to Thomas, and perhaps to other people as well, is actually a national security secret. The FBI enlists Thomas to capture Neil, not on account of his actual crimes (which they do not care about, and do their best to cover up), but in order to recover the information that, in the process of going rogue, he has hidden, encrypted, or stolen.

Also, it turns out that Neil’s neurotechnology is double-edged. It is used to destroy the personal integrity of prisoners, to turn them into abject and grotesque reversals of what they previously were, in order to control them and extract information from them. But it is also used on NSA agents themselves, in order to transform them into killers and enforcers without remorse or conscience. Neil has in fact used the Marionette technology on himself, in order to dissolve any sense of obligation, gratitude, empathy, or guilt with regard to others; but also to annihilate any sense of being or having a self. At least, Neil claims that his “personal experience” or consciousness has been freed of any sense of agency or will: he just performs actions, he says, without having the feeling that he himself is an entity who wills these things, or actively does them. By cutting out portions of his neural circuitry, Neil has transformed himself into the sort of subject described by David Hume, who famously wrote that, when he looks within himself, he sees various “ideas” (desires, feelings, sense data, etc.), but never observes a “self” that would somehow “have” these ideas, or exist in addition to them. Neil is sort of a demonic version of the body/mind described by, for instance, the psychologist Susan Blackmore, who (combining cognitive psychology with Buddhism) precisely argues for a form of existence in which one has abandoned the fiction of being a self.

This leads me to several final comments about Neuropath. If The Argument has a “fallacy” that is not preempted by the book itself, this fallacy would lie, not in its positive expression of what science has discovered and what technology can do, but in its claims about what it is disqualifying or arguing against. The Argument tells me that I do not really have a “self”; and it proves its claim precisely by annihilating my very sense of “self.” This is dubious on two grounds. In the first place, in order to deny that the “self” exists, The Argument needs to substantialize, or essentialize, the very “thing” whose existence it goes on to negate. You have to first transform the fluid process of consciousness into a substantial entity, in order then to triumphantly demonstrate that such a substantial entity does not exist, and indeed cannot exist. But this has no weight against conceptions of the mind that do not reify it in the first place. The Argument works against Descartes, but not necessarily against William James. In the second place, a demonstration of power is not the same as a demonstration of essence. Modern neurotechnology is capable — or in Bakker’s SF extrapolation, may well soon be capable — of radically “rewiring” and rearranging the brain, with concomitantly radical effects upon the “mind.” This is indeed a demonstration of power — of the power of a technical and political-social apparatus — but it is not a demonstration of essence. The fact that we are capable of doing certain things to the brain does not in itself prove anything about the nature of the brain in all circumstances. As Bruno Latour or Isabelle Stengers might put it, the combination of the brain and the Marionette technology is itself an apparatus that must be constructed, and whose effects do not pre-exist its construction. 
What Bakker’s novel is really warning us of, is a drastic expansion of what intrusive brain technologies can accomplish, and therefore of what human beings can be made to suffer. (I’m reminded of Zizek’s warning, or suggestion, that virtual technologies could allow for a degree of torture that no one was previously able to inflict; and of the realization of just such a scenario in Richard K. Morgan’s “Takeshi Kovacs” SF trilogy). The self that is destroyed by The Argument is in fact perpetuated by it, precisely in order that it may be made to suffer more horribly and concertedly.

Neuropath, like The Argument invoked within it, involves (among other things) a drastic overvaluation of consciousness itself — something of which cognitive psychology is in general guilty. The fact that, as Benjamin Libet’s experiments seem to suggest, my brain has already primed me to act in a certain way, before I become conscious of making the decision to act in that way, does not mean that my sense of “decision” is illusory, but only that the “decision” in question is not made by my consciousness. It is still entirely coherent to argue that my brain/mind/organism actually does “choose,” or make decisions, with my consciousness only being a secondary feature of the process (consciousness is apparently able to nullify the decision rather than ratify it, even without consciousness being that which makes the decision in the first place). The idea that everything the brain does is strictly causally determined can also be thrown into doubt, without invoking the “ghostly” actions of consciousness that hardcore empirical materialists have so long decried. (Walter J. Freeman does so, for instance, by invoking chaotic processes, in his book How Brains Make Up Their Minds). All this is not a matter of refuting The Argument of Neuropath, but of tracing its pragmatic consequences. Neuropath is all the more remarkable a work of SF, because of how it forces us to rethink its own premises, as much as the presuppositions that it gleefully destroys.

[ADDENDUM: see a lecture by Scott Bakker, recapitulating The Argument of the novel, together with some responses, here.]

Biopolitics

Reading Roberto Esposito’s Bios has only confirmed my doubts about the whole discourse of what is today called “biopolitics.” Esposito’s book is a good one, in that it details, and clearly explains, what is meant by this term — but the effect of this has only been to strengthen my criticisms of the concept, or my sense of its inadequacy, when it comes to considering the role that “life,” or even just discourses about life, play in contemporary society.

Esposito traces both the ways that “life” — by which is meant the view of human beings as biological organisms, or the biological processes that human beings undergo, i.e. birth, growth, and death, sickness and health — has been caught up in politics (in the sense of being a subject, or object, of political practices, of political struggles, and of state power), and the ways that political theory has considered the meaning of “life.” This is a large field, as it includes, on the one hand, everything from medical interventions in the name of public health to Nazi practices of racial extermination; and on the other hand, philosophical concepts of the “body politic” and of the vitality of individuals, races, and peoples, in thought ever since the ancient Greeks, but especially in the span of time that extends from Hobbes, through Nietzsche, and on to 20th-century vitalism. It is an enormous amount of material to synthesize — and Esposito does it by tracing the lines in Western thought that lead towards and away from Nietzsche and Foucault, on the one hand, and the practices of the Nazi regime, on the other.

I’m not sure if the term “biopolitics” was invented by Foucault, but of course he did the most to make the concept thinkable. Foucault traces, in his genealogical investigations of medicine, madness, prisons, sexuality, etc., the ways that a regime of sovereignty, still prevalent in Europe in the Renaissance, was gradually displaced, or supplemented, by a regime of discipline, which was less concerned with the prohibition of certain behaviors than with the surveillance, manipulation, and management of all aspects of human life. Among other things, this involves a shift from being concerned with particular acts, and with clearly-defined hierarchies and chains of command, to being concerned with the bodies and souls of the entire populace. Foucault’s well-known account traces the links between attempts to contain disease by imposing quarantines, for instance, and attempts to regiment people in schools, factories, military barracks, and prisons. Power moves from prohibiting certain actions to actively shaping and manipulating people’s actions overall, and from drawing lines of exclusion, lines that it is forbidden to transgress, to finding ways to include everybody and everything within a grid of carefully managed alternatives and possibilities. Foucault also describes this as a shift from the power of death (the power of the sovereign to impose death as a punishment) to a right over life (the power of the state to manage, for the sake of health, growth, productivity, etc., all aspects of people’s bodily habits and tendencies). It is through this shift that “life” becomes a coherent concept, and a matter or focus of concern. “Life” gets defined conceptually, by doctors and judges as well as by philosophers, insofar as it emerges pragmatically as a target and focus of power.
As always, Foucault is saying, not that “discourse” is the sole reality, but rather that both discourses and concrete, physical practices, varying historically, constitute so many ways in which we manage and control a “real” that always exceeds them. Contrary to some foolish interpretations, Foucault always remains a materialist, and a realist (in the ontological sense). “Life” refers to a particular way that we have conceived the multiplicity of lives, living beings, and life processes that surround and include us — but these always exist beyond our conceptualizations and manipulations of them.

So far so good. Esposito is an excellent close reader. He helpfully focuses on an ambiguity in Foucault’s work: between claiming, on the one hand, that the regime of discipline and the management of life has replaced the earlier regime of sovereignty; and on the other hand, that such a disciplinary form of power is overlaid upon a sovereign power that continues to exist. Foucault proposes, precisely, that different modern regimes have been characterized by different mixtures between sovereign command over, and disciplinary positive investment of, the lives of individuals and populations. Esposito then moves backwards from Foucault to Nietzsche, in whom, he argues, “life” really emerges in its modern sense as an object and focus of both power and inquiry for the first time. For Nietzsche demystifies spirituality and the soul, presenting them as effects of physiology and neurology. Thus he allows us to understand all aspects of human culture and mentality as expressions of biological “life.” Further, there is a telling ambiguity in the way that Nietzsche regards “life” so constituted. On the one hand, there is a continual effort to judge, or evaluate, this “life” in terms of sickness and health, descent and ascent, decadence and triumph. In this respect, Nietzsche’s language is akin to that of the Social Darwinism of his time, and it clearly leads into the racist and fascist formulations of the following century. At the same time, Nietzsche affirms the mutability and metamorphosing power of “life”: in this sense, “sickness” is as vital as “health,” and is necessary in order to avoid stagnation; transgression and transformation are posed against the racist, pseudo-biological obsession (which reached its most terrifying expression in Nazism, but which was already prevalent among Nietzsche’s contemporaries) with “purity” and blood lines.

Again, Esposito’s reading is subtle, insightful, and overall unexceptionable. But at the same time, I found myself muttering, over and over again, a weary “so what?”. Whatever the historical value of reading Nietzsche, it is unclear to me that his texts have the same resonance, and the same importance, today in the 21st century that they did at the time of Nazism, or even that they did in France in the 1960s. Esposito refuses to extend his thought beyond the Nietzschean matrix, which he sees as dominating all that came since. Nietzsche remains the crucial reference point both for the “thanatopolitics” of Nazism, which he presents as the culmination of a certain kind of biopolitics, or politicization of “life” and death, and for the post-World War II emergence of a critical biopolitics, which Esposito sees exclusively as an attempt to rescue the forces of “life” from their subordination to the Nazi mythologies of the master race, of the centrality of childbirth, and of “the absolute normativization of life.” Heidegger, Arendt, Foucault, Simondon, Deleuze, 20th century French neo-Spinozianism: these are all read as efforts to liberate the forces of life from racial and familial normativization, from myths of purity and the Fatherland, etc. In this way, Esposito (much like Giorgio Agamben) sees the Holocaust as the central reference point for all biopolitical thought (and indeed, for all political thought whatsoever) today; with Nietzsche providing the crucial conceptual framework, since his thought is the source both of 20th century notions of racial “cleanliness” and “health”, and of any possible critique and overcoming of such notions.

Can I dare to suggest (without being denounced as a “self-hating Jew”) that such a focus on the Holocaust, on the Adornian lament about the difficulty (or impossibility) of poetry (or anything else) “after Auschwitz,” is at this point, 63 years after the end of World War II, an obscurantist evasion rather than a moral imperative? Not only is Esposito’s focus upon Nazi thanatopolitics blindly Eurocentric, but it also fails to take account of the many forms racism, nationalist chauvinism, etc. have taken around the world in the last half century and more. The politicization of “life” and the management of “life” have become all the more pervasive and ubiquitous in the last half century, precisely because of (rather than in spite of) the discrediting (for the most part) of Nazi racist/nationalist themes. For instance, bigotry and genocide today tend to be expressed in “cultural” and religious terms, rather than in the terms the Nazis used; but these new terms are themselves related to how we have come to reconceptualize “life”. The same could be said about national and international responses to plagues (AIDS, SARS, bird flu), about population control measures (ranging all the way from the nativist encouragement of more births, and the attempts to ban all forms of birth control, to draconian attempts, like that of the Chinese government, to restrict population growth). And questions about agriculture and food production, about access to water and other vital resources, about the patenting of genetic material, about the use of biometric data to track both individuals and populations, and so on almost ad infinitum — all these are excluded from Esposito’s purview, largely because his reductively Eurocentric and Holocaust-centric view of the biologization of politics and the politicization of biology has no room for them.

More generally, the European (perhaps I should just say, Italian and French) view of biopolitics, which Esposito summarizes so well (and variants of which are upheld by Agamben, Negri, and others) ironically seems to ignore two things: biology, and political economy. It is telling that Esposito says nothing whatsoever about the ways in which biology and life have themselves been so totally reconfigured in the (more than) half-century following Watson and Crick’s determination of the structure of DNA. Biochemistry, genetics, neuroscience, genetic engineering, etc., etc. — all of these have profoundly changed how we conceive “life”, as well as how governments and corporations seek to manage and contain it — yet Esposito writes as if none of this were relevant. You wouldn’t know, from reading his genealogies, that today we tend to conceive a life force more on the model of mindless viral replication, than as anything like Bergson’s élan vital. Nor that eugenics has been recast, in its contemporary variant, as a matter of “bad genes” rather than “bad blood” (both formulations are lying, ideological ones, but they have entirely different connotations). Nor that the alleged fatality of genetic makeup has become an alibi for all sorts of social discrimination and inequality. Nor that the goal of contemporary biotechnology has to do with the pragmatic manipulation of genetic material — and hence with a certain notion of flexibility and differential control, rather than with the old-style racial essentialism.
Although he is ostensibly concerned with how our society conceptualizes “life”, Esposito fails to consider how changes in biology have changed this conceptualization, and how things are still very much up for grabs today, as witnessed both by the continually emerging new potentials of biological research and biotechnology, and by the ways in which, on a theoretical level, the orthodox neo-Darwinian synthesis is itself under considerable challenge from other biophilosophical visions (as I have written about before).

But not only is Esposito’s account of biology incomplete; his account of politics is, as well. This is due to the fact that, like far too many contemporary theorists, he considers questions of domination and authority, and political-philosophical arguments about the nature of law and sovereignty, without giving any thought to matters of political economy (more specifically, to processes of the extraction of surplus value, and the circulation and accumulation of capital). He has no account, in other words, of the ways in which conceptualizations of, and decisions about, “life”, are today at least as overdetermined by considerations of money and economy as they are by politics and political considerations. Biological research today is an expensive proposition; it must be publicly or privately funded (cf. the race between public and private bodies to sequence the human genome). Money sets the agenda. Even as the management of “life” expands, in terms of everything from health care to biometrics in the name of “public safety,” priorities are set more by cost-benefit analyses than by strictly “political” forms of decision. “Biopolitics” today is intimately entangled with neoliberalism, alike in theory, in policy, and in practice. And this is yet another dimension that Esposito altogether ignores. It’s significant that Foucault himself, in his lectures on The Birth of Biopolitics, presciently focused his analysis mostly on the strategies and doctrines of a then (1978-1979) just emerging neoliberalism. Foucault discusses both the post-War German state-guided version of neoliberalism, and (at lesser length, but even more crucially for an understanding of the world today) the neoliberalism of the Chicago School of Milton Friedman, and especially Gary Becker. Rather than offering any judgment on neoliberal practices, Foucault discusses them with the icy objectivity of an entomologist describing the habits of parasitic wasps. 
His emphasis, nonetheless, is on “the generalization of the grid of homo oeconomicus to domains that are not immediately and directly economic” (page 268). This expansion of the “economic” (as narrowly understood by neoclassical marginalism, as a form of calculative rationality) to all forms of human activity is indeed the largest “ideological” change we have experienced in the years since Foucault’s death; it has altered our very sense of the social and the political. It is odd that, even as Foucault, at the extreme limits of his own thought, proclaimed the fundamental significance of this transformation of the modern episteme, his supposed disciples almost completely ignore it. (And I should note that the crisis we are currently undergoing does not in the least represent the “end” of neoliberalism — the state’s rescue of financial institutions, and its efforts to reboot the economy through spending and re-regulation, come out of the same economistic principles that motivated the deregulation of the 1980s and 1990s in the first place).

I don’t have any conclusion to this discussion, except to say that a biopolitics that is relevant to, let alone adequate to, the contemporary world, and that at least tries (even if not altogether successfully) to be “as radical as reality itself,” is yet to be born. Certainly none of the currently fashionable European theorists and philosophers provide anything like it — or even a starting place.

Copyright, again

Sorry I haven’t written for so long. Things have just been too busy, and too hectic, for the last several months. I hope to return to more frequent posting after the New Year.

Anyway, about a year ago I was bitching and moaning about copyright issues. This is sort of an update of that. I mentioned then how a publisher I coyly called “C” — the press in question was Continuum — had ridiculously harsh contract terms, and how I wouldn’t give them an essay for an anthology they were publishing unless they modified those terms. Basically, the contract stipulated that the press would get permanent, exclusive rights of publication in all media, specifically including electronic — this means, for instance, that, were I to put an article I gave them on my own website, I would be in violation of contract. The only exception to this is that they permit the author to reuse the article in a collection of his/her own writings — but this is not allowed until FIVE YEARS after publication in the Continuum volume.

Well, they backed down in that case a year ago, and I got a compromise I thought I could live with — I was permitted to publish my own book, which contains the text of the article in question, without having to wait five years. The anthology in question is finally out: it is called Deleuze, Guattari, and the Production of the New, it is in hardcover only, and it can be yours for a mere $95.11 from Amazon (a considerable savings from the list price of $130).

So think about it: if I had signed the contract originally offered by Continuum, my article could not be posted on my own website, nor included even in a book exclusively written by myself until 2014. It would have only appeared in an anthology so expensive that even most libraries would refuse to buy it, let alone individual readers. In return for getting a line on my academic vita, representing an officially “peer-reviewed” publication, I would have had to agree to a situation in which nobody would actually ever get a chance to read my writing.

There is clearly something wrong here. Authors are not permitted to disseminate their own work, and that work is made available by the press that controls it at an absolutely ridiculous price. Some of the best theory books of the last decade have received far less notice than they deserved, all because they have been caught in the limbo of this sort of publishing arrangement. I would cite, for instance, all from different publishers:

There are loads more examples. These are just a few books that I happen to have read, and that I can recall offhand. (I read them either by getting my hands on illicit and illegal pdfs, or through interlibrary loan.)

In any case, I was recently solicited to write an article for another anthology of essays, on a subject that interested me. So I said yes. However, it turned out that Continuum was again going to be the publisher, and they offered me the same egregious contract terms as they had previously. This time, rather than negotiate, I simply withdrew from the anthology. I suppose I could have tried to negotiate again, but I am sick of the situation in which the default is so horrible and you can only get something different by making a stink. In addition, at this point I am sufficiently fed up that I would no longer accept the compromise they agreed to last time.

I should also mention that, in addition to the lousy contract, Continuum this time also sent me advisory guidelines stating that “text (prose) extracts of more than 400 words, or a total of 800 words from the same volume if there are several shorter extracts, require permission from the copyright holder.” This represents a far more restrictive interpretation of “fair use” than has ever been the case before; its effect, I believe, is to make honest scholarship impossible. I believe that fair use guidelines extend considerably further than this, and I will simply not publish with a press that restricts fair use so harshly. Not only am I not allowed by this sort of policy to disseminate my own words, I am also not allowed to remix the words of others.

I can get more readers for anything I post on this blog than for an article published under such circumstances; so what’s the point? I realize I am in a privileged position in this regard; I already have tenure and a senior position at my university, so I am not faced with the “publish or perish” situation that forces many (junior or younger) academics to agree to publication under such horrible circumstances with regard either to price and availability, or to the right to disseminate their own work on the web and elsewhere.

There obviously needs to be some sort of open access policy for scholarship in the humanities, as there already is to a great extent in the sciences. We don’t really get paid for our writing, except very indirectly in the sense that a scholarly reputation increases your “marketability” and hence the kind of salary you can get as a professor. In these cases, the policies of presses like Continuum (which I am singling out here only because of my own dealings with them; many other academic presses are just as bad) serve the interests neither of writers nor of readers. I don’t have a blueprint for how to get there (open access) from here (restrictive copyright arrangements), but a first step would be for those academics who, like me, can afford to forgo the lines on their vitas, to refuse to publish with presses that have such policies.

Addendum

Surely k-punk is right when he criticizes the “quaint” and unjustified optimism of Gilbert Achcar, as quoted in my previous posting by way of No Useless Leniency. I will stand by my basic point that I do not think that capitalist crisis somehow leads to increased opportunity for radical change. Crisis is how the capitalist system works: as Deleuze and Guattari say, it only functions — but precisely it does in fact function — by incessantly breaking down. I do believe that some sense of general abundance is necessary for there to be a radical questioning of the way things are — and that not being allowed to share in a general abundance is one of the most important stimuli for rebellion. When abundance seems to have vanished altogether, the largest effect is demoralization. I cannot even feel Schadenfreude over banking houses going under, once I am aware that I, and non-rich people in general, are going to suffer from this more than the bankers and brokers will.

On the other hand, I do not see any possibility of an “optimism of the will” counterbalancing the necessary “pessimism of the intellect.” In fact, there is little that is more odious than the “positive thinking” and overall optimism that is a hallmark of our contemporary capitalism — as k-punk, again entirely on target, points out. As regards the current effort to save us from financial ruin and deep depression, I think this picture says it all:

Just one day of government injection of capital into the banks, and the Masters of the Universe at the Stock Exchange are back on Easy Street.

But I also don’t think that a dose of “negativity” is likely to help us in doing anything about the situation either. In fact, any optimism whatsoever seems to me unjustified. I am left, as always, in a position which could alternately be described as “Stoic” or as “petit bourgeois”: trying to observe and understand what’s happening with as much lucidity as possible, but utterly detached from any pretension of doing anything about it.

Crisis

Nobody should be all that surprised by the recent unraveling of the financial system. Crises are endemic to capitalism, as Marx argued long ago, and as generations of Marxist economists since have repeatedly demonstrated. Capitalism often has periods of dynamic growth; but these tend to turn into crises of underconsumption, or of overproduction and/or the overaccumulation of capital, because the very processes that boost productivity and profit end up increasing the imbalance between what is produced, and what workers and consumers are able to afford to buy. For a while this imbalance is alleviated by easy credit — consumers are able to buy beyond their means, and businesses are able to produce even more — but eventually the mismatch is replicated on a larger scale, and the whole house of cards tumbles down.

It is only in the fictional models of neoclassical economics that any sort of equilibrium is maintained, or that “efficiency” and “optimal” conditions are achieved. Neoclassical economics borrows its models from a 19th century physics that physicists do not accept any more (as Robert Nadeau points out). In the real world, there is no such thing as a perfect match of supply and demand in which the markets are cleared. Indeed, conditions that are far from any equilibrium, and in which (for instance) large amounts of productive capacity lie fallow and unutilized, while large numbers of people remain in a state of deprivation, can in many circumstances become self-perpetuating: this is something that Keynes understood over seventy years ago, but that was forgotten in the recent spate of “irrational exuberance.”

In 1997, in his essay “Culture and Finance Capital,” Fredric Jameson argued for the congruence between “the narrativized image fragments of a stereotyped postmodern language” without reference to anything beyond itself, and the relentless circulation of finance capital, in the ever-more-abstract form of derivatives and other arcane financial instruments. Postmodern culture seems to involve the autonomous play of stereotypes, signifiers that are “independent of the formerly real world,” precisely “because the real world has already been suffused with culture and colonized by it, so that it has no outside in terms of which it could be found lacking.” Similarly, “finance capital brings into being… a play of monetary entities that need neither production (as capital does) nor consumption (as money does), which supremely, like cyberspace, can live on their own internal metabolisms and circulate without any reference to an older type of content.” Fictitious capital and fictitious stereotypes can both circulate indefinitely, without any “grounding” or external reference. The play of media-driven simulacra that do not refer to any external reality, because they are themselves as “real” as anything else, and which largely constitute the human and material conditions to which they ostensibly refer, is the same thing as the play of arcane financial instruments that are themselves as “real”, in their effects, as (for instance) the houses whose subprime mortgages they are supposedly, at many removes, based upon — houses which, ironically, would not have been built in the first place were it not for the financial instruments in which their deferred debts could be embodied.

Jameson ended his essay with the lines:

Stereotypes are never lacking in that sense, and neither is the total flow of the circuits of financial speculation. That each of these also steers unwittingly towards a crash I leave for another essay and another time.

He was much criticized, as I recall, for the Cassandra-like prophecy of these lines. Academics didn’t like the fact that he was impugning the viability both of the novels of Don DeLillo, and of their TIAA-CREF accounts. (Me, even though I pay into my own TIAA-CREF account regularly, I take it for granted that I will never be able to afford to retire). But of course, the “crash” of which Jameson warned (and which it required no particular prophetic skill, but only a basic understanding of Marx, to be able to foresee) is precisely what we are dealing with today.

I don’t have much to add to the accounts of others. Jane Dark gives a better and more detailed account of what has actually happened than I ever could. Also, I am afraid that I share Ben’s pessimism as to whether anything good can come out of this crisis. Ben cites Gilbert Achcar to the effect that it is not in periods of crisis, but in ones of prosperity and “rising expectations”, that it becomes possible to envision radical change.

Marx got capitalism right as to its structural tendencies; his mistake was to think that the inevitable, and in the long run inevitably worsening, crises to which capitalism is prone were the points at which the system itself could be overthrown. But in point of fact, not only are these crises so demoralizing that they effectively work to block any hope of action to make things different, they are positively useful to capitalist domination — and even perhaps necessary to that domination. Capitalism will never resolve its “contradictions”; and a crisis is the point at which these “contradictions” come to a head. But for that very reason, crisis is the point at which capitalism is able to reinvent itself, and prolong the “contradictions” that are its paradoxical conditions of possibility.

In other words, orgies of destruction of capital, such as we are witnessing now, are part and parcel of the “creative destruction” (Schumpeter’s term, very much following Marx’s observations) that is the modus operandi of capitalism. Individual capitalists may suffer (though usually far less than the rest of us do), but these convulsions clear up the system, unclog it, so that new rounds of exploitation and capital accumulation may then take place. Crisis is the mechanism that transforms the abundance which capitalism produces into the condition of scarcity and deprivation which is necessary to its continued functioning. Or, crisis (as the flip side of manic speculation) is the way that Bataillean expenditure and excess can be reintroduced into the “restricted economy” of calculation and universal equivalence.

All this is why I don’t think the current crisis marks the end of neoliberalism and market fundamentalism. For the sole aim of all the government intervention that is happening now is precisely to restart (reboot) the currently clogged market. Whether it works or not is still open to question; but if it does work, this will not mean a paradigm shift of any sort, but only the restoration of corporate and financial business as usual. In times of prosperity, the best we can hope for is trickle-down (though often even that is not guaranteed; the last twenty-five years have instead involved a redistribution of wealth from everyone else to the already-rich). But in times of crisis, recession, and depression, all we can hope for is to “share the pain” that the corporate and financial sector is feeling, and thereby to restore that sector at our own expense. The game is rigged, in times of prosperity and calamity alike.

But no matter what, the worst never leads to the better. Revolution will never come from sacrifice. It is only under conditions of (relative) prosperity and abundance — which capitalism does provide, after a manner, during one part of its cycle — that we will ever find the power to imagine things differently, and that people will have the motivation and the energy to devote themselves to hopes for the future, rather than being stuck in the moment-to-moment struggle for bare survival. Abundance and non-commodified leisure are the only things that capitalism is unable to endure. Both the crazed accumulation and conspicuous consumption that characterized the financial sector over the last two decades, and the crazed destruction and disaccumulation that are overtaking that same sector today, serve the purpose of averting the threat of a generalized abundance and leisure for everybody. Abundance and leisure — which are technologically attainable, but economically unthinkable — must be revived as the basis for any sort of political struggle. Now more than ever is the time to (as Lenin’s Tomb suggested some years ago) “be unrealistic, demand the possible.”

Issue #1

Ron Silliman reports on a new publication, modestly entitled Issue 1. (I was first alerted to this by The Mumpsimus). This e-text is 3785 pages long (!); each page contains a “poem” attributed to one of 3164 writers. The names of the writers range from Silliman himself and other language poets, through a number of (now dead) poets and writers, onto various bloggers (especially ones who appear in Silliman’s blogroll, it would seem). In point of fact, none of the writers have actually written the pieces attributed to them. My name appears among the list of authors, together with the names of several people I know, including some who read (and sometimes comment on) this blog. My own “poem” appears on page 1893; for what it’s worth, it doesn’t strike me as being very good, nor is it like anything that I could ever imagine myself writing, either in style or in sentiment.

I kind of wonder how other “victims” of this hoax (if that’s what it is) respond to it. Silliman seems kind of pissed off, as do many (but not all) of the commenters on his blog entry. Matthew Cheney (of The Mumpsimus blog) seems more or less amused:

The whole thing strikes me as a stunt pulled by someone who desperately wants attention. (And now I’m giving it to ’em. So it goes.) I’m still amazed that anyone would put the time into creating something like this, but the amazement now is the sort of amazement one has when watching the totally insane rather than watching the harmlessly obsessive.

Me, I think that the stunt raises all sorts of interesting questions (or perhaps I should say, in Palin-speak, that lots of interesting questions “rear their heads”). Early-20th-century Dadaist stunts raised meta-questions about art, about what could be considered art, etc. But such meta-questions have long since been so well assimilated into our culture (both artistic culture and commercial culture) that they scarcely raise an eyebrow any longer. Today, we can only be blasé about self-referentiality, conceptual art, and so on.

In such a context, Issue 1 attempts to up the ante, by asking meta-meta-questions, as it were. Most notably, there’s the difficulty of deciding whether the publication actually is some sort of interesting conceptual art, or whether it is rather just a dumb prank, or a malicious hoax. Then there is the issue of obsessiveness that Matthew Cheney raises. Certainly a lot of modernist and post-modernist art is quite obsessive (I am thinking of everything from Yayoi Kusama’s polka dots to Henry Darger’s weather chronicles). But Issue 1 might well only be pseudo-obsessive; it seems to be something that would have required an insane amount of time and energy (if only to collect all those author names and write all those poems), but I wouldn’t be surprised to learn that it was all generated by a computer program in just a few hours. Even insanity isn’t what it used to be, in our age of digital simulation.

Finally, given all the questions about the status of the author that have been raised in the last half-century or so, it only makes sense that I should be credited with the authorship of something that I had nothing to do with writing. Remember, Roland Barthes proclaimed “the death of the author” more than forty years ago, in 1967. And even well before that, in 1940, Borges proposed a literary criticism that would “take two dissimilar works — the Tao Te Ching and the 1001 Nights, for instance — attribute them to a single author, and then in all good conscience determine the psychology of that most interesting homme de lettres…” (from “Tlön, Uqbar, Orbis Tertius”). Issue 1 is a logical outgrowth of the situation in which such ideas no longer seem new, or radical, or outrageously counterintuitive, but have instead been entirely assimilated into our “common sense.”

In short, Issue 1 makes sense to me as a conceptual art project precisely to the extent that it marks the utter banalization, routinization, and digitization of any sort of conceptualism and experimentalism in art, and of all supposedly “avant-garde” gestures. There is something melancholy in coming to this conclusion; but perhaps something liberating as well, since it suggests that the whole strain of avant-gardism that starts in the 19th century, goes through dadaism and other forms of radical modernism, and moves through conceptualism in the 1960s and 1970s to the supposedly oppositional political art of the last few decades, has finally outlived its relevance and its usefulness. We have finally reached the point where we can shake off the dead weight of the anti-traditionalist tradition, and perhaps move on to something else. This doesn’t mean rejecting all the art of the avant-garde tradition, much of which I still very much love. But it does mean seeing that art historically, just as we see the art of the Baroque historically, or as we see the science fiction of the “Golden Age” of the early-to-mid 20th century historically. It’s still there to be tapped (or looted) for clever ideas, formal approaches, and so on. 
But modernist experimentation and avant-gardism are no longer a living resource; in an age of arcane financial instruments capable at one moment of generating huge quantities of fictitious wealth, and at another moment of sending shockwaves through the entire society, wiping out retirement accounts, causing businesses to go bankrupt and jobs to disappear, etc., etc. — in such a climate, modernist avant-gardism fails to be “as radical as reality itself.” (I am fully aware that financial panics with real effects upon people’s lives are as old as capitalism itself; what’s new in the present situation comes from the way that new technologies have a multiplier effect, as well as adding additional layers of meta-referentiality and meta-feedback into the system).

I am sorely tempted to add the “poem” of mine which appears in Issue 1, and which I had absolutely nothing to do with producing, to my CV.

A Note on Evil

My comment in the previous post on how voting for McCain is evil drew a lot of negative response, both in the comments here and in those on Jodi’s blog. This led to Jodi’s own explicit comments on evil in politics, to which, I think, I need to add my own. Like Jodi, though perhaps for different reasons, I am not in general prone to use moral categories to address political issues. I think that the leap from the political to the moral register often leads to the effacement of contextual complexities, through the simplistic imposition of absolute, transcendent modes of judgment. In Deleuze’s terms, the appeal to moral categories is a way of evading the difficult work of developing immanent perspectives and immanent criteria, by simply imposing judgment from outside. It’s a policing action, short-circuiting both political economy and aesthetics.

Nonetheless, there are times when such a judgment seems necessary. At the risk of being excessively pedantic, I want to point out that my use of the term “evil” in the previous posting was quite precise in its reference to Kant — rather than just generally using it as a means of rhetorical posturing. In particular, I was referring to Kant’s essay “An Old Question Raised Again: Is the Human Race Constantly Progressing?”, which forms one part of the late (post-Critical) book The Conflict of the Faculties. I think that this essay deserves a contemporary rethinking and “updating” — in much the same spirit in which Foucault rethought and “updated” Kant’s essay “What is Enlightenment?”. Foucault rejects the way that, in the hands of Habermas and others, Kant’s Enlightenment principles have become the basis for what Foucault “like[s] to call the ‘blackmail’ of the Enlightenment.” Foucault says that it is ridiculous to demand “that one has to be ‘for’ or ‘against’ the Enlightenment.” For “the Enlightenment is an event, or a set of events and complex historical processes,” rather than a permanent set of values to be identified with “rationality” or “humanism” tout court. Indeed, for Foucault it is precisely in refusing this for-or-against “blackmail” that one can most truly remain faithful to the Kantian task of a continued “historico-critical investigation” of our own assumptions and presuppositions, including precisely and especially the ones that seem to us to be most self-evidently “rational” and “humanistic.”

With regard to “An Old Question Raised Again,” similarly, we might do well to rethink Kant’s interrogation of the possibility of “progress,” precisely because we now find ourselves in a world where nobody can believe any longer in “progress” in the sense that Kant meant it. Lyotard wrote in the 1980s that nobody could believe in “grand narratives” (like the Enlightenment and Marxist one of progressive human emancipation) any longer; Francis Fukuyama wrote in the 1990s that the perpetuity of neoliberal capitalism was the only “end of history” that we could ever hope to attain. Today, in 2008, we are if anything even more cynical, as years of booms and busts in the market — with the biggest bust of all currently looming over us — have all the more firmly established capital accumulation, with its concomitant technological improvements, as the only form of “progress” that we can at all believe in.

But it is precisely in this context that Kant’s essay speaks to us with a new relevance. “An Old Question Raised Again” makes the point that there is no empirical evidence whatsoever to maintain the proposition that the human race is progressing — by which Kant means morally progressing, to a state of emancipation instead of slavery, mutual respect (treating all human beings as ends, rather than merely as means) instead of subordination and hierarchy, and cosmopolitan peace instead of strife and war. (In other words, Kant is implicitly referring to the three watchwords of the French Revolution — Liberty, Equality, Fraternity — though we might well want to replace the last one with “cosmopolitanism,” to avoid the gendered connotations of “fraternity”). There is no empirical way to assert that humanity is progressing in these terms, rather than regressing or merely remaining at the same point. (It is worth maintaining this Kantian point against all those fatuous attempts to claim that the USA is benevolently improving the lot of the rest of the world, or somehow standing up for “freedom” and “democracy,” when in fact it is exporting the imperious demands of neoliberal capital, whether by outright war or by other forms of influence or coercion, to other parts of the world).

However — and this is the real crux of Kant’s argument — although there is no empirical evidence in favor of the proposition that “progress” has taken place, there is a reason, or an empirical ground, for us to believe in progress, to hope for it, and even to work for it — rejecting the cynicism that tells us that any such hope or belief is deluded or “utopian” (this latter word is most often used pejoratively, in the form of the claim that any attempt to make human life better, such as all the efforts of the Left in the 19th and 20th centuries, inevitably has “unintended consequences” that end up making things worse). This ground is the occurrence of certain events — for Kant, the French Revolution — whose sheer occurrence, in itself, however badly these events miscarried subsequently, “demonstrates a character of the human race at large and all at once… a moral character of humanity, at least in its predisposition, a character which not only permits people to hope for progress toward the better, but is already itself progress in so far as its capacity is sufficient for the present.” Humanity hasn’t actually gotten any better, but its active ability to imagine and project betterment, on a social and cosmopolitan scale, is itself evidence that a “predisposition” to betterment does in fact exist.

Now, I left out a couple of phrases in the citation above; the entire sentence actually reads: “Owing to its universality, this mode of thinking demonstrates a character of the human race at large and all at once; owing to its disinterestedness, a moral character of humanity, at least in its predisposition, a character which not only permits people to hope for progress toward the better, but is already itself progress in so far as its capacity is sufficient for the present.” The two key terms here are universality and disinterestedness. Kant is not merely praising enthusiasm and fervor. He is almost oppressively aware that enthusiasm and fervor guarantee nothing, and that they have propelled many of the worst happenings and the worst movements in human history — something that is all the more evident today, after the horrors of the twentieth century. Nothing that is narrowly drawn, chauvinistic, nationalistic, etc., can stand as evidence for a predisposition towards betterment.

But beyond that: Kant is not saying that the French Revolution in itself is the evidence of a human predisposition to betterment. He is saying, rather, that the “universal yet disinterested sympathy” that “spectators” from afar felt for the French Revolution is such evidence. Our “moral predisposition” for betterment is revealed in the way that “all spectators (who are not engaged in this game themselves)” feel a “sympathy,” or “a wishful participation that borders closely on enthusiasm,” for the distant revolutionary events of which they are the witnesses. Such sympathy-from-afar can be “dangerous,” Kant warns us; but it is genuine evidence for the potentiality or “predisposition” toward improvement of the human condition — at least to the extent that it is “universal” (rather than being partial, chauvinistic, or favoring one “nation” or “race” against another — as fascist enthusiasm always is), and that it is “disinterested” (not motivated by any expectation of personal gain; an aesthetic concern rather than a merely self-aggrandizing one). (I think that, for example, Foucault’s enthusiasm from afar for the Iranian revolution can be regarded in the same way as Kant’s enthusiasm from afar for the French revolution; in both cases, the bad outcomes of these revolutions do not disqualify the reasons for which Kant and Foucault found themselves in sympathy with them; and this is why such events, and such expressions of sympathy, must be radically distinguished from the enthusiasm for fascism that consumed so many early-20th-century artists and intellectuals).

I suppose that, genealogically, all this is Kant’s secular-Enlightenment updating of the old Christian virtue of hope. But it locates what is hoped for in this life, this world, rather than in an afterlife, or in some sort of post-apocalyptic recovery (in this way, it is actually more secular, and less mystical and religious, than, say, Walter Benjamin’s messianism; and although it refers, or defers, to an as-yet-unaccomplished future, it is more materially and empirically grounded than, say, Derrida’s “democracy to come.” Benjamin and Derrida must both be honored as true descendants of Kant, yet arguably they have both diminished him). The human predisposition towards betterment already exists in the here and now, even if its fulfillment does not. Quoting Kant again:

For such a phenomenon in human history is not to be forgotten, because it has revealed a tendency and faculty in human nature for improvement such that no politician, affecting wisdom, might have conjured out of the course of things hitherto existing, and one which nature and freedom alone, united in the human race in conformity with inner principles of right, could have promised. But so far as time is concerned, it can promise this only indefinitely and as a contingent event.

Human improvement depends upon happenings that have not yet taken place, and that in fact may never take place — it requires a “contingent event” in order to be realized. But nonetheless, the “phenomenon” of a capacity towards such improvement is in itself perfectly and altogether real. In Deleuze’s terms, a “predisposition” is something virtual. Our predisposition towards improvement exists virtually, even if it has not been actualized in our social, political, and economic systems. It is for this reason that the denial of our potential or predisposition towards improvement is a secular version of what the Christians call a “sin against the Holy Spirit” — in Kant’s terms, such a denial is “radical evil”, in that it negates the very potentiality that makes any sort of moral choice thinkable in the first place. (Hence, Kant insists that human beings have a predisposition towards betterment in precisely the same way, and for the same reasons, that we all also have a “propensity to evil” or depravity).

In the grander scheme of things, this means that we must reject, on Kantian grounds, all ideologies that declare that humanity is incapable of betterment because human beings are inherently limited and imperfect (such is the tenor of the anti-“utopian” rejections of anything that goes beyond the limits of contemporary predatory capitalism), all ideologies that declare that the narrow self-interested maximizing behavior of Homo oeconomicus cannot ever be transcended, as well as all ideologies that limit the prospects of emancipation to any particular group, nation, religion, etc. And in the narrow, tawdry limits of contemporary US politics — to move from great things to small — this is why the boundless cynicism of the Republican Party must be rejected as evil. The Democrats may well be playing games with our hopes for betterment, hypocritically encouraging those hopes only the better to betray them, etc., etc.; but at least they represent a world in which such hopes still exist.