Paranormal Activity Roundtable and other stuff

The new issue (#10) of the online film journal La furia umana is out, and it contains lots of interesting stuff, including a roundtable discussion, featuring Therese Grisham, Julia Leyda, Nicholas Rombes, and myself on the two (to date) Paranormal Activity films. I think this was a great discussion — my own remarks were very much stimulated by Therese’s questions, and by Julia’s and Nick’s own quite different takes on the films. I think that — whether in spite of, or more likely, precisely because of, our divergences — the discussion stands up pretty well as a whole. [Note added 2015: the roundtable is now available here.]

The journal also presents web-readable reprints of two chapters of my last book, Post-Cinematic Affect: the chapter on Gamer is here, and the Coda is here. (The introduction and the three earlier chapters were initially published here; or you can simply buy the whole book.)

SLSA 11

I’ve just spent the last three days at the SLSA (Society for Literature, Science, and the Arts) conference in Kitchener, Ontario. I saw friends, met people whom I had only read before, and heard a good number of excellent talks, plus keynotes by Isabelle Stengers and by Bernard Stiegler. I gave a paper (on which more later) in one of the Whitehead/Stengers/cosmopolitics sessions organized by Steven Meyer. I also was the respondent for a panel on “Aesthetics Beyond the Phenomenal,” with talks by Scott Richmond, Patrick Jagoda, and James Hodge. I don’t know if any of their papers are (yet) available for reading — they will all eventually be published as parts of book projects. But since my response most likely won’t be appearing anywhere else, I will post it here.
———-
These are three fascinating and highly diverse talks. I would like to approach them in a slightly oblique way, as suits the discussion of matters that are themselves oblique. What these papers all have in common is this: they all speak to experiences that are below or beyond the threshold of human perception. They all describe works of art that contrive to bring into our awareness events or processes that cannot be apprehended directly. The video games described by Patrick Jagoda work “to render global systems” — those massively distributed networks in which we find ourselves invisibly enmeshed — “cognitively, perceptually, and aesthetically accessible.” Tony Conrad’s The Flicker, as described by Scott Richmond, engages in “perceptual modulation”: that is to say, it “configures perception such that it becomes affection,” inducing us to see things that aren’t actually there on screen, and bringing into the open the ways that our bodies actively resonate in and with the world. John F. Simon’s Every Icon, discussed by Jim Hodge, operates on a time scale that is incommensurable with our own internal time sense, as it is both too fast — flipping over at a rate of 100 times a second — and too slow — taking a time to complete itself that is far longer than the actual age of the universe — for us to be able to observe it concretely.

My own oblique approach to these three talks will consist in pulling back to consider their metaphysical underpinnings. The question of limits — limits both of sensation and of thought — has long been an important concern of Western philosophy. Even without tracing this question back to medieval formations of negative theology — something that I cannot do, since I know far too little about it — we may say that the problem of limits has been approached in quite various ways over the course of the last several hundred years. Leibniz was interested in the existence of micro-perceptions, which could not be apprehended individually, but whose summation, or integration, produced sensory impressions like the sound of the crashing of waves on the seashore. At the opposite extreme, incommensurable macro-sensations were the raw material of the experience of the sublime, addressed in the 18th century by such thinkers as Burke and Kant. We can also credit Kant with linking the question of the limits of sensation and perception with that of the limits of cognition, and indeed of the limits of Reason itself. In the Analytic of the Sublime of the Third Critique, the mind recognizes its own rational power in the very act of reflecting upon the limits of the (merely finite) imagination. But in the Transcendental Dialectic of the First Critique, reason comes face to face with its own limits, in the form of unavoidable illusions: errors that are intrinsic to its very nature, and that it will never be able to shake off, once and for all.

There are limits, then, both to what we are able to perceive, and to what we are able to comprehend. In the wake of Kant, Romanticism and Modernism alike — both in art and in philosophy — were largely concerned to test and to push against these limits. In the second half of the twentieth century, we still find these concerns at the center of the reflections on aesthetics by such crucial thinkers as Jean-François Lyotard and Gilles Deleuze. Lyotard’s injunction to what he calls “postmodern” artists (though I would rather call them belated modernists) is that they must strive to “present the unpresentable.” Somewhat more subtly, Deleuze sees the task of the modern artist to be both to confront invisible forces so as to render them visible, and to release cosmic forces from the limitations of the visible forms in which they are trapped. Lyotard and Deleuze, like Kant, are concerned with the limits and deformities of representation. Although these more recent thinkers insist upon the possibility of non-representational modes of affirmation, such as Kant never conceived, they remain committed to the modernist, formalist, and ultimately Kantian project of (as Scott here describes it) “the continual reinvention of a continuous medium, in a way that worries its specificity, and by means of aesthetic production that pushes the limits of what will count as a film (or a painting or a sculpture or a piece of music), usually taking the form of the acknowledgment of the material facts of that medium.”

The question to which I am brought by the three papers that we have just heard is this. To what extent do the works that these papers discuss remain inscribed within the Kantian-Romantic-Modernist paradigm that I have outlined; and to what extent do they gesture towards a new, and different, tracing of the problem of limits? Scott’s talk approaches this question most explicitly, since he argues that the “proprioceptive aesthetics” of Conrad’s work mark a rupture with the standard modernist project. The Flicker works in the register of affectivity, rather than in that of cognition. It addresses the body, rather than assuming a notion of aesthetic experience that would be dissociated from carnality. As Scott says, it “places its faith in the perceiving body as a sensate and sensitive object.” In this way, The Flicker is perhaps no longer a modernist work. On the other hand, Scott also continues to describe the film in ways that suggest the modernist paradigm is simply being modified and expanded a bit, rather than being more radically superseded. On his account, the film entices us to perceive and feel what isn’t actually there; but in this way, it testifies to an undecidable intertwining of body and world which is the very basis of phenomenal experience. Where a more normatively modernist art leads us to cognize the very limits of our experience, Conrad’s piece rather forces us to feel those limits. But in this way it still ultimately conforms to the Kantian-Romantic-Modernist paradigm, in that it is concerned with the act of perception per se, rather than with what it is that we perceive. It deals, as Whitehead would say, with what we can know, rather than with what we do know.

In contrast, the system simulation games and alternate reality games described by Patrick offer challenges that remain largely cognitive. But they also involve a sort of experiential immersion in complex networks and widely distributed systems that are entirely real, but that cannot be grasped phenomenologically or existentially. Patrick says that these games serve as “formal equivalents” for worldwide networks and systems — a condition which is something quite different from their being representations of such networks and systems, and which requires a kind of collective or transindividual active participation, in a way that differs quite markedly from the sort of spectatorial absorption and/or critical reflection at the heart of what I have been calling the Kantian-Romantic-Modernist paradigm. Yet I am still not entirely convinced that any of these games really has the capacity, as Patrick claims, “to mediate emergent collectivities and render dynamic virtual worlds.” I remain skeptical, if only because each of these games involves, as Patrick concedes, “a particular set of political assumptions” — and also, I would add, of procedural assumptions. The problem here is that the engagement with, and reverse engineering of, underlying algorithmic procedures itself works as a sort of Kantian-reflexive validation of those procedures. I would suggest that this is not a bug, but a feature; the necessary, built-in consequence of any effort to simulate a complex system by means of abstraction. Games like PeaceMaker and Superstruct strike me as being a bit like Keynesian economics: they offer resolutions that might well alleviate suffering in real-world terms; but they are constrained by the very parameters that serve as their enabling conditions, the terms and presuppositions that allow them to function in the first place. Going beyond this horizon would require a game whose own rules and algorithms could be altered in the course of play. So I would say that these games, too, still remain within what I am calling the Kantian-Romantic-Modernist paradigm.

In his discussion of Simon’s Every Icon, Jim argues that the piece provides us with “an articulation of the technological conditions of possibility for an experience of time.” This is the case not just because the piece operates over — and forces us, therefore, to reflect upon — time scales that are incommensurable with our own capacities for phenomenal attention, but also because it demonstrates for us the gap between instruction and execution. Computer code distinguishes itself from other languages due to the fact that it is executed rather than read: that is to say, it is entirely performative, rather than semantically informative. It doesn’t mean something, but rather does something. We often assume, without really thinking about it, that performance is somehow more direct and immediate than signification: as if action were free from the detours and indecisions of hermeneutics. But Jim’s account of Every Icon shows us, to the contrary, that there is as much of an “opaque chasm” between instruction and execution as there is between inscription and interpretation. By the force of this demonstration, Every Icon induces us to reflect in a new way upon the conditions and limits of the “digital” as an aesthetic medium. As Jim notes, it radically revises the modernist figure and technique of the grid. However, while the piece provides a refreshing new version of the critical paradigm that I have been tracing throughout my response, it still concerns itself with its own conditions of possibility, and thereby doesn’t really escape this paradigm.
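A bit of back-of-the-envelope arithmetic may make these incommensurable time scales concrete. The sketch below is my own illustration, not part of any of the papers; it assumes the 32-by-32 grid of black-and-white squares that Simon’s piece enumerates, and the rate of 100 combinations per second cited above.

```python
# Rough arithmetic for John F. Simon's Every Icon: a 32x32 grid of
# black-and-white squares, stepping through every possible combination
# at roughly 100 combinations per second (assumptions noted above).

GRID_CELLS = 32 * 32                      # 1024 binary cells
RATE = 100                                # combinations shown per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25  # about 3.16e7 seconds

total_states = 2 ** GRID_CELLS            # 2^1024 possible icons (a 309-digit number)

# Integer-divide by the rate first, so the value fits in a float afterwards.
years_to_complete = (total_states // RATE) / SECONDS_PER_YEAR

AGE_OF_UNIVERSE_YEARS = 1.38e10           # approximate current estimate

print(f"possible icons:       ~10^{len(str(total_states)) - 1}")
print(f"years to complete:    ~{years_to_complete:.1e}")
print(f"ages of the universe: ~{years_to_complete / AGE_OF_UNIVERSE_YEARS:.1e}")

# Even the grid's first two rows alone (64 cells) take 2^64 / 100 seconds,
# which is on the order of 5.8 billion years.
```

Whatever the precise figures, the point stands: the piece’s completion time exceeds the age of the universe by nearly three hundred orders of magnitude, even as its flicker outruns the eye.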

I do not intend any of my remarks to suggest any disparagement, either of the brilliant and innovative works that the panelists have discussed, or of the elegant and thoughtful accounts of these works that the panelists themselves have given. I seek only to point up the contours of the problematic that has been bequeathed to us in this age of globalization and digitalization, and that we have barely begun to work through. The premise of this panel was to consider how “technical and aesthetic objects phenomenalize the non-phenomenal,” and how such a process might “inform an understanding of the non-phenomenal world.” I think that the contradiction between these two goals — of giving us phenomenal access to that which lies beyond the phenomenal on the one hand, and of opening up a radically non-phenomenal sort of experience on the other — describes the difficult aesthetic conjunction with which we are in fact faced today. The tasks of conceiving a social order beyond that of capital, on the one hand, and of dephenomenalizing ourselves, on the other, would seem to be still beyond our current powers of invention.

Speculative Realism talk

I’m back home from the Object Oriented Ontology symposium in New York. My own talk, “Pantheism And/Or Eliminativism,” is not quite finished — I had to wing it a bit there at the end. And in any case, I am now reworking it for the SLSA conference next week (where I will be delivering it instead of the entirely unwritten talk that I originally planned to give).

I will post the text of my talk online, once I have finished it and revised it to my satisfaction.

In the meantime, Tim Morton livestreamed and archived the entire symposium. So you can watch the morning session, moderated by Ken Wark, and with talks by Graham Harman, Aaron Pedinotti, and myself, here. (The other talks and sessions are also archived on Tim’s blog).

Post-Cinematic Affect symposium

This past week, there has been a symposium on my book, Post-Cinematic Affect, over at In Media Res. There were postings by Elena Del Rio, Paul Bowman, Adrian Ivakhiv, and Patricia MacCormack, plus lively discussions in the Comments sections. Today, my response to the various postings was published. I am reproducing it here:


First of all, I would like to thank Michael O’Rourke, Karin Sellberg, and Kris Cannon for setting up this theme week at In Media Res devoted to my book Post-Cinematic Affect, to the curators Elena Del Rio, Paul Bowman, Adrian Ivakhiv, and Patricia MacCormack for their postings, and also to Shane Denson for his comments. The discussion has been so rich, and it has gone in so many directions, that I scarcely know where to begin. I will try to make a few comments, at least, about each of the four curators’ postings in turn.

Elena Del Rio praises the power of affect, for the way that it “throws into disarray the system of recognition and naming.” She opposes the state of “exhaustion” and indifferent equalization that we might seem to have reached in this age of globalized finance capital to the way that “affect or vitality” remains able to energize us, to shake things up, to allow for (in the words of Deleuze) “a vital power that cannot be confined within species [or] environment.” While I remain moved by this vision — which has its roots in Spinoza, Nietzsche, and Deleuze — I am increasingly dubious as to its viability. I’m inclined to say that praising affect as a force of “resistance” is a category error. For we do not live in a world in which the forces of affective vitality are battling against the blandness and exhaustion of capitalist commodification. Rather, we live in a world in which everything is affective. What politics is more virulently affective and vital than that of the American Tea Party? Where is intensive metamorphosis more at work than in the “hyper-chaos” (as Elie Ayache characterizes it, following Quentin Meillassoux) of the global financial markets? It is not a question of a fight between affect and its “waning” or exhaustion (whether the latter is conceived as the actual negation of the former, or just as its zero degree). Rather than being on one side of a battle, affect is the terrain itself: the very battlefield on which all conflicts are played out. All economic and aesthetic events today are necessarily affective ones, both for good and for ill.

Paul Bowman is therefore not being wrongheaded when he wonders “whether approaching the world in terms of affect offers anything specific for cultural theory and the understanding of culture and politics.” Indeed, I answer this question in the affirmative, whereas Bowman seems to lean towards the negative. But my saying this is not because I think that affect offers us “anything specific”; it is rather because affect (much like Whitehead’s creativity, or Spinoza’s conatus) is an entirely generic notion, one that more or less applies to everything. Affect is not a particular quality; rather it designates the fact that every moment of experience is qualitative and qualified. Eliminativist philosophers notoriously argue that “qualia” do not exist; at the opposite extreme from this, I follow William James and Whitehead in insisting that there is nothing devoid of qualia. For this reason, I am in agreement with the commentators who suggest that the two affective readings Bowman offers of the clip from Old Boy are not in contradiction to one another, and that sensual heightening and loneliness in fact go together. Bowman’s effects are inseparable from what I am calling affects.

Adrian Ivakhiv asks “whether there remain breathing spaces and sources of transcendence outside of hypercapitalism’s ever-modulating codes.” That is to say, he worries whether my account of what Marx called the “real subsumption” of all social forces under capitalism in the contemporary world leaves room for anything else. Do I not run the risk of painting so totalizing a picture that Whitehead’s and Deleuze’s vision of an “open universe” becomes impossible? I must admit that I present a rather pessimistic view of our prospects. I fear that, under the sway of what Mark Fisher has called “capitalist realism,” we suffer today from a general paralysis, both of the will and of the imagination. I do not share Gibson-Graham’s happy vision of all sorts of wonderful utopian alternatives burgeoning under the surface of actually existing capitalism. If I instead present what seems like a totalizing picture, this is only to the extent that capitalism “itself” — however multiple and without-identity it may actually be — involves an incessant drive towards totalization. This is capital’s essential project: the ever-expanding accumulation of itself, of capital. It’s a process that is both economic (quantitative) and aesthetic (qualitative). The goal of complete subsumption is of course never entirely realized, precisely because accumulation can never come to an end. Also, we cannot see, feel, hear, or touch this project or process: in itself it is a version of what Ivakhiv calls “magic.” And to my mind, this makes the aesthetic a kind of counter-magic, a spell to force the monstrosity to reveal itself, an effort to make it visible, audible, and palpable.

Patricia MacCormack generously expands upon the aesthetic and affective stakes of what I was trying to accomplish in Post-Cinematic Affect — as opposed to the concerns over “capitalist realism” that also play a large role in the book, and that were the focus of the other posts. I thank her for calling attention to the Whiteheadian and Deleuzian themes that, as several of the other commentators noted, seemed less present in this book than in my earlier ones. Indeed, this is a tension — or a problem that I have been unable to solve — running through pretty much all of my work. Mallarmé’s maxim defines everything that I am trying to do as a critic: “Tout se résume dans l’Esthétique et l’Economie politique” (“everything comes down to Aesthetics and Political Economy”). This seems to me to be a necessary truth about the world; but I am never certain where to draw the line, how to partition the world between aesthetics and political economy, or when they are absolutely incompatible with one another, and when they are able to partially coincide.

In conclusion, I offer a media object that I hope responds to at least some of the tensions and confusions that we have been discussing this week: the music video for Janelle Monae’s song “Cold War.” The song, from Monae’s concept album The ArchAndroid, works as a kind of Afrofuturist counterpoint to Grace Jones’ “Corporate Cannibal.” It addresses the unavoidable conflicts of a world that is increasingly posthuman (as well as post-cinematic). The lyrics to “Cold War” reflect upon the demands and meanings of Emersonian self-reliance and authenticity, and of subjectivity more generally, in a world that is entirely manufactured and commodified. The Metropolis Suite, of which The ArchAndroid is a part, narrates the plight of a robot/slave — a commodity, all the more so because she is nonwhite — who has been slated for demolition because she has fallen in love. She is therefore forced, not only to flee for her life, but to invent out of whole cloth, and without models, what it might mean for her to be a “person” with a “life,” that is to say, with feelings, needs, and desires. The lyrics of “Cold War,” in particular, speak both to the absolute requirement of self-integrity and to the near-impossibility of defining what it might be. The video is a single, continuous take: we even see a time code running in the corner, and a title reading “Take One” appears near the beginning. Against a dark background, we see an extreme close-up head shot of Monae as she sings the song. But at some point, there’s a glitch: she flubs a line, looks to the side and seems to be bantering with someone off-camera. Then she clenches her face and seems to be barely holding back tears. Through all of this, her voice and the music continue to play, indicating that she has in fact been lip-synching all along. The extreme intimacy and emotionality conveyed by the close-up on Monae’s facial expressions coincide with the revelation of the video’s artifice. The video thus resonates with the “Club Silencio” sequence in David Lynch’s Mulholland Drive (which was sampled in Elena Del Rio’s video). I don’t think that the revelation of technological artifice undercuts the affective intensity of the performance (as might have been the case in some twentieth-century modernist work). Rather, the incompossibles coexist, without negation and also without synthesis or resolution.

Post-Continuity

I’ve been meaning for some time to give my own take on Matthias Stork’s video-essay, “Chaos Cinema,” which has made quite a sensation in the blogosphere. I think that what Stork is talking about is pretty much the same as what I referred to in my book Post-Cinematic Affect under the rubric of post-continuity. I find Stork’s essay very useful and illuminating for the way that it highlights and describes the stylistic changes in recent Hollywood action films; but I also think he is too monolithic in dismissing this style as an inferior (and almost necessarily exploitative) form of filmmaking. (Many of my problems with Stork’s piece have already been addressed by Matthew Cheney, who very kindly mentions my own work as a counter-example to Stork’s overall claims.) In any case, rather than write a full-fledged response to Stork at this point in time, I have decided to make my prospective answer into a proposal for a paper to be given (if it is accepted) at the next Society for Cinema and Media Studies conference.

Here is the full text of my proposal (though, as it exceeded the space limit for proposals, my actual submission is an abridgement of this):

POST-CONTINUITY

In my book Post-Cinematic Affect (2010), I argue that American commercial filmmaking has, in the last decade or so, been increasingly characterized by what I call the stylistics of post-continuity. This is a filmmaking practice in which a preoccupation with moment-to-moment excitement, and with delivering continual shocks to the audience, trumps any concern with traditional continuity, either on a shot-by-shot level or in terms of larger narrative structures.

Post-continuity stylistics is an offshoot, or an extreme development, of what David Bordwell calls intensified continuity. Bordwell demonstrates how, starting with the New Hollywood of the 1970s, commercial filmmaking in America and elsewhere has increasingly involved “more rapid editing… bipolar extremes of lens lengths… more close framings in dialogue scenes…[and] a free-ranging camera.” But although this makes for quite a different style from that of classic Hollywood, Bordwell does not see it as a truly radical shift: “far from rejecting traditional continuity in the name of fragmentation and incoherence,” he says, “the new style amounts to an intensification of established techniques.”

I argue that this situation has changed in the twenty-first century. The expansion of the techniques of intensified continuity, especially in action films and action sequences, has led to a situation where continuity itself has been fractured and devalued, or fragmented and reduced to incoherence. Bordwell himself implicitly admits as much, when he complains that, in recent years, “Hollywood action scenes became ‘impressionistic,’ rendering a combat or pursuit as a blurred confusion. We got a flurry of cuts calibrated not in relation to each other or to the action, but instead suggesting a vast busyness. Here camerawork and editing didn’t serve the specificity of the action but overwhelmed, even buried it.” In mainstream action films by Michael Bay, Tony Scott, and Paul Greengrass, as well as in lower-budget action features by directors like Mark Neveldine and Brian Taylor, continuity is no longer “intensified”; rather, it is more or less abandoned, or subordinated to the search for immediate shocks, thrills, and spectacular effects by means of all sorts of non-classical techniques. This is the situation that I refer to as post-continuity.

Recently, the question of post-continuity cinema has come to the foreground of discussion, thanks in great part to Matthias Stork’s video-essay, “Chaos Cinema,” which argues that, in recent commercial films, “we’re not just seeing an intensification of classical technique, but a perversion,” which is “marked by excess, exaggeration and overindulgence.” Stork’s essay has the great virtue of clearly defining the characteristics of these new cinematic practices, and of both showing and explaining how they differ from the more classical action sequences of directors like Sam Peckinpah, John Woo, and John McTiernan. However, it seems to me that Stork is too monolithic, and even moralistic, in his outright dismissal of nearly anything made in the post-continuity, “chaos cinema” style. Despite his grudging exception for Kathryn Bigelow’s The Hurt Locker (which, in my view, is still a film that largely observes a more classical conception of continuity), Stork largely regards post-continuity cinema as “an easy way for Hollywood movies to denote hysteria, panic and disorder,” leading to audiences “sensing the action but not truly experiencing it.”

In my talk, I will take a more nuanced look at post-continuity cinema, considering its virtues as well as its defects. I will consider the ways in which post-continuity stylistics are expressive both of technological changes (i.e. the rise of digital and Internet-based media) and of more general social, economic, and political conditions (i.e. globalized neoliberal capitalism, and the intensified financialization associated with it). I will suggest a strong affinity between what Stork calls “the woozy camera and A.D.D. editing pattern of contemporary releases,” and the minimalist and relatively static styles of recent low-budget horror films (like the Paranormal Activity series), “mumblecore” slice-of-life films, and reality television. All of these are post-continuity, in the sense that they do not altogether dispense with the concerns of classical continuity, but move ‘beyond’ it or apart from it, so that their energy and investments point elsewhere. Like any other stylistic norm, post-continuity stylistics involves films of the greatest diversity in terms of their interests, commitments, and aesthetic values. What unites them, however, is not just a bunch of techniques and formal tics, but a kind of shared episteme (Michel Foucault) or structure of feeling (Raymond Williams).

Symposium on Post-Cinematic Affect

This coming week, August 29 to September 2, the website In Media Res will be having a Theme Week devoted to my book Post-Cinematic Affect.

The week is co-curated by Michael O’Rourke and Karin Sellberg, and features a response from Steven Shaviro, so we would really appreciate it if as many people as possible would join in with the discussions on each day next week. To participate, you just need to take a moment to register at In Media Res:

http://mediacommons.futureofthebook.org/imr/user/register

The full line-up for the theme week is:

Monday August 29: Elena Del Rio (University of Alberta, Canada)

Tuesday August 30: Paul Bowman (Cardiff University, UK)

Wednesday August 31: Adrian Ivakhiv (University of Vermont, USA)

Thursday September 1: Patricia MacCormack (Anglia Ruskin University, UK)

Friday September 2: Steven Shaviro (Wayne State University, USA)

Processes and Powers

A few days ago, Ben Woodard put up a provocative and interesting post on the intersections between, as well as the differences between, process philosophy and OOO (object-oriented ontology). Ben (rightly) questioned the dismissal of process by OOO folks as “lava-lamp materialism” or as “lump ontologies.” (He could have added, as well, Bogost’s describing process philosophy as “firehose metaphysics.”) But Ben also warned that “there’s a fuzziness” in process metaphysics “that there doesn’t seem to be an urge to qualify.” The danger is that simply invoking “process” is taken to answer everything; “this allows for becoming to be utilized as an escape hatch in argumentation.” Ben expressed the need for “a rigorous account of the breaks, the actualizations, the triads or whatever it may be, that show the work of becoming without a human agent making the call, without the human carving out the individuated bits of the world.” And he ended by asking the people on various process blogs (including me) for comments.

So far there have been answers from Knowledge Ecology, from Immanence, from Footnotes2Plato, from After Nature, and from Immanent Transcendence. And Ben has responded in turn to all of them here. So I would seem to be the only one left, of the blogs from which Ben initially requested a response. So here goes.

First, I agree with Ben that the answer to lava lamp / lump / firehose criticisms needs to be better articulated. These criticisms all suggest that “becoming” or “process” is a one-size-fits-all generalization, used to answer any questions about particular objects or details. And I do think this may well be a sloppy habit that we have sometimes fallen into in the blogosphere. What needs to be emphasized, therefore, is that such over-generalization is NOT the case in the writings of Whitehead or Simondon. I am in entire sympathy with Harman’s interest in what he calls “the carpentry of things”; Bogost also speaks of the “carpentry” of objects in this sense — as when he explicitly prefers (algorithmic) “procedure” to (Whiteheadian) “process.” But it seems to me that this (metaphorical sense of) carpentry is very much alive in Simondon, especially — as when he critiques Aristotle’s hylomorphism (figured in the imprinting of form, by means of a mold, upon a supposedly otherwise shapeless lump of clay). The process by which the clay becomes in-formed, as Simondon puts it, through a whole complex series of actions and procedures, is in no way merely an indistinct and continuous, firehose-y or lumpy, flow. (I am not sure whether or not this complex process can be described as “procedural” in Bogost’s terms; I’m inclined to think that all procedures are in fact processes, contra Bogost’s opposition between them, but that not all processes are procedures. I leave this aside for future consideration.)

Whitehead writes on a much more abstract or “generic” level, of course, but part of the reason for his seemingly scholastic multiplication of terms and distinctions is precisely to prevent the use of “becoming” or “process” as an undifferentiated, catch-all term. Whitehead is worried, I think, that Bergsonian duration can all too easily become such a term, a night in which all cows are black. And this is precisely why Whitehead adopts “event epochalism” (as George Lucas calls it), in which duration (or becoming) only applies to each individual occasion taken by itself, but not to the universe as a whole, nor even to the more-or-less-stable things (“societies” of occasions, extended in time and space) that populate the universe. (As mentioned in my previous posting — for Whitehead “there is a becoming of continuity, but no continuity of becoming.”) For Whitehead, becoming or duration is what characterizes each individual Jamesian “drop of experience” — there are also (for both James and Whitehead) the (largely non-conscious) transitions between these drops.

So, for Whitehead (as for Simondon in a different way) “process” really means composition, rather than duration or becoming. There are all these atoms of becoming, which do not change or endure, but which “are what they are,” or become what they are, and then perish. And these atoms (the “actual entities” or “actual occasions”) are not themselves in time and space; rather, they generate time and space, together with generating “the real actual things that endure” in space and time and that Whitehead calls “societies” (Adventures of Ideas, page 204). Again, the point of all this is not to deny the actuality of things (or of what OOO calls “objects”), but precisely to account for their actuality, to show how they come into being, and endure in being (or have a conatus). (Whitehead, the great enemy of all theories of substance, nonetheless says that his own “notion of ‘society’ has analogies to Descartes’ notion of ‘substance’”.)

Now, Harman is perfectly right to point out that this argument distinguishes Whitehead from Bergson (and from Deleuze), for whom there is such a thing as a universal duration, or continuity of becoming, within which all the smaller and more particular becomings are nested. I just think that Harman exaggerates, or overstates, the extent and importance of this distinction. He says that it places Whitehead on the opposite side of a massive philosophical divide from Bergson and Deleuze. But I think that Whitehead lines up with Bergson and Simondon and Deleuze, and against Harman and OOO, in that all these “process” thinkers seek to account for how things come into existence, and how they endure; whereas OOO just seems to me to assume that its objects are already there.

The question of occasionalism comes into this, too. Harman requires occasional or vicarious causes to explain how objects can ever interact. But classical occasionalism, to the (limited) extent that I understand it, required a specific occasion, not only for how one object would interact with another, but also for how any object could endure at all. For the classical occasionalists, no entity could perpetuate itself unless God upheld it anew at every instant. If I think that Whitehead is not an occasionalist, this is precisely because he gives us, non-supernaturally, the “actual occasions” by means of which, and as a result of which, things are able to endure. (This also touches upon my disagreement with Harman as to the role of God in Whitehead’s system — I don’t have the time or space to go into this in greater depth here, but see my last post, and Harman’s response to it.) (I should add that my understanding of Whitehead’s God also puts me at odds with most of the other process bloggers who have jumped into the debate — but this is also something that I will need to take up at another time.)

In any case, all this is why I think that Harman’s critique of philosophies that “undermine” or “overmine” objects (see the opening chapter of The Quadruple Object) doesn’t rightly apply to Whitehead (and here Harman might partly agree with me), and also doesn’t apply to Simondon or to Iain Hamilton Grant (in The Quadruple Object, Harman explicitly lists Simondon as one of those thinkers who is guilty of undermining; he similarly calls Grant an underminer, and a philosopher of the One, in his article on Grant in The Speculative Turn.) In other words, I am largely in agreement with Ben, when he writes: “the critique seems to be there must be some underlying substance with forces and powers but I cannot see why this must be the case. In many ways it seems to be an obfuscation of the difference between the metaphysical and the non-metaphysical – why if metaphysically there are not individual things why can’t there be individual things at the physical level without needing a human mind to carve them up.”

I’m not sure I entirely grasp what Ben means here by metaphysical vs. non-metaphysical levels.  To a certain extent, I suppose that it roughly corresponds to Whitehead’s distinction between actual entities (the “really real things” that compose everything) and societies (the “real actual things” that can endure and that we experience). More immediately, though, I presume that Ben’s distinction has to do with Grant’s arguments about antecedence. The metaphysical level is antecedent both to a One that would be the Whole and to the plurality of actually existing objects. This is very different from claiming that the One alone is real, and that objects are mere epiphenomena or appearances. The same could be said, contra Harman, of the antecedence of Simondon’s pre-individual. For Simondon, a thing cannot just be given, it must have a genesis. But again, this doesn’t mean that the antecedent pre-individual is either unified, or more real than what emerges out of it. In any case, for Simondon, whenever an individual exists, there is a field of preindividuality that is both antecedent to it — since it is that out of which the individual emerges — and remains contemporary with it — because no individual ever exhausts the preindividuality out of which it arises. Now, there may well be a difference, as Harman maintains, between Whitehead’s atomism of actual occasions and Simondon’s and Grant’s sense of antecedence. But these thinkers are still in accord with one another, and with Deleuze as well, in demanding a genetic and dynamic account of everything that exists. It’s from a dynamic and genetic point of view that we can reject Harman’s claim, regarding Simondon’s preindividual “seeds of things,” that “these seeds are either distinct from one another or they are not” (The Quadruple Object, page 9). Antecedence trumps this exclusive-either-or binarism. On this level, the question is not one of substances, but — as Grant and Ben both say — of ungrounded powers. (I realize that I will need, at some point, to go far more deeply into powers metaphysics, and to consider how such a metaphysics relates to Whitehead’s cosmology — there are obvious differences here, although I take it that both positions are on the same side in opposing the claim for Aristotelian substances, as revived by OOO).

But I didn’t start this blog entry intending for it to be another screed against OOO. Rather, I wanted to talk about a way in which Graham and I — or more broadly, OOO and process thought — are actually on the same side with one another: since this is also part of the point that Ben was making, despite expressing criticisms of both. (This is also why I am not pursuing any further here the differences — which Harman has helped to point out — between Whitehead on the one hand, and thinkers like Schelling, Simondon, and Grant on the other.)

What I am starting to think about now — and which Ben’s posting gives me a new angle on — is the following. It has to do with “speculative realism” more broadly considered, rather than just with OOO. If one accepts, as I do, the general critique of correlationism (Meillassoux) or of the “philosophy of human access” (Harman), then it seems to me that one is left with a stark alternative. One must say either 1) that all entities, or things, or objects, are in their own right to some degree active, intentional, vital, possessed of powers, possessed of their own “alien phenomenology,” etc.; or else 2) that being is radically divorced from thought, that things or objects must be radically divested of their alleged anthropomorphic qualities. In other words, if you push it far enough, you are driven either to panpsychism or to eliminativism. I think that this is the biggest division among the four initial speculative realists. Both Harman and Grant approach panpsychism without entirely endorsing it (see their articles in David Skrbina’s Mind That Abides anthology); whereas both Meillassoux and Brassier reject any such ascription of mindfulness to the world (or to the entities in the world), and opt instead for some sort of mathematical (Meillassoux via Badiou) or scientistic (Brassier via Sellars) reduction. Once we abandon the notion that mind and (things in the) world must be primordially correlated, then we must either see mind everywhere or nowhere. Panpsychism sees mind as intrinsic to being, existing apart from any question of what it might be correlated with (for panpsychism, everything has a mind, but this doesn’t necessarily mean that everything is apprehended by a mind); and although the not-quite panpsychism of Grant and Harman does not see mind as originary, it regards mind as necessarily arising from the antecedence of productive powers, in Grant’s case, or from any encounter or relation, in Harman’s case. At the other extreme, eliminativism sees nothing left but brute matter, or primary substance without qualities (hence Meillassoux’s revival of the separation between primary and secondary qualities), or mathematical structure, once the correlation of mind to world has been rejected.

So here we have Harman and OOO lining up on the same side with Grant, with Whitehead, and with any powers metaphysics (I also need to say something here about “physical intentionality” in Molnar). Whereas Meillassoux and Brassier are on the other, “mathematical” side (together with “structural realists” like Ladyman and Ross, whom Harman has criticized for their ultra-relationalism).

Of course, this cannot be all of it, since I need to respect my own stricture above against ultimate metaphysical exclusive-either-ors. So I am tempted to describe Ben Woodard’s own “dark vitalist” position, and perhaps those of Reza Negarestani and Eugene Thacker as well, as combining the extremest tendencies of both the panpsychist pole and the eliminativist pole. Would it be possible to construct a fourth speculative realist position, one that rejects both panpsychist and eliminativist tendencies? So far, I cannot see how one could do this — since I am arguing that the two tendencies are potentials, or consequences, that inevitably arise from the critique of correlationism, it would have to be a position that explicitly rejected them both, rather than merely ignoring them.

I have two public presentations coming up in September, at which I hope to develop these ideas further. At the OOO symposium at the New School in New York City, I plan to talk more about the contrasting panpsychist and eliminativist poles of speculative realism. And at the SLSA conference in Kitchener, Ontario, I plan to talk about the absence of consciousness, and what it might mean to have mindedness without consciousness, as well as without correlation (see the abstract here).

The Prince and the Wolf

Today I read The Prince and the Wolf, the short book from Zer0 that transcribes a discussion between Graham Harman and Bruno Latour, held at the London School of Economics in 2008, and organized and introduced by Peter Erdelyi. I found the book very helpful in further pursuing the questions about Harman’s object-oriented ontology that I have been mulling over for several years. This is largely because of the context: we have Latour responding to Harman’s reading of him, which suggests different directions for debate than any I have thought of myself, or come upon elsewhere. I haven’t the time to think through all of the stuff I read — so this posting will just mention briefly a few of the key points that emerge from the book, before I forget them.

Basically, Latour objects to Harman’s characterization of him as a relationist, by saying that he doesn’t understand (or doesn’t accept) Harman’s entire opposition between objects/substances and relations. Where the question of whether objects can be defined by their relations, or on the contrary have hidden nonrelational cores, is crucial for Harman, Latour suggests rather that this is a both/and, not an either/or. It is precisely because things are singular, that they need mediators, relations via translation and transportation, in order to have an effect, or assert their presence in the world. So it’s not a question of whether objects are defined by intrinsic substantial natures or by merely relational qualities, but rather that it is precisely to the extent that objects are singular and irreducible to external common measures that they need to establish modes of relationality.

Latour accepts Harman’s definition of him as an occasionalist, and as the first secular occasionalist. This is because, for Latour, all alliances among things are contingent, and can always be broken or articulated differently. However, it still doesn’t seem to me that causation, or contact among entities, is as problematic for Latour as it is for Harman. Harman affirms occasionalism because, given his notion of self-subsistent objects, sealed off from one another, the fact that objects do affect one another cannot be taken for granted, but needs a special explanation. I don’t see that this is a problem for Latour — he sees objects making alliances and networks, entering into confederations or fights and oppositions, as being the usual course of things; it isn’t in need of special explanation.

This also is an issue in Harman’s reading of Whitehead, which comes up briefly in the book because of Latour’s overt Whiteheadianism. Harman says that Whitehead is also an occasionalist, and not a secular one, because Whitehead requires eternal objects mediated by God in order for things to affect one another. This seems to me to be wrong. In his doctrine of causal efficacy, Whitehead presents entities as affecting one another directly, without mediation, all the time.

This is the whole point of Whitehead’s critique of Hume. Whitehead says that, if Hume were correct in claiming that no connections among events or entities can be detected in the world, then it would be impossible for such connections to be detected in the mind either — there could be no habit or stability of mental associations. Hume in fact assumes, in the case of the mind, the very causal links that he denies to the world outside the mind. But this is unacceptable, once we reject the Cartesian dualistic notion that the mind is somehow separate from the world. Whitehead says in effect that it is impossible to actually disavow causal efficacy. I accept Harman’s brilliant observation that Hume’s scepticism is really just the flip side of Malebranche’s occasionalism — but my conclusion from this is that, if we accept Whitehead’s argument against Humean scepticism, then this is an argument against occasionalism as well. For Whitehead, an entity cannot ever exist apart from its connections, even though the entity itself is not reducible to these connections.

As for eternal objects and God in Whitehead’s cosmology, it seems to me that they are not deployed in order to answer the question of how things can influence other things. Rather, they are there in order to answer a quite different question: that of how novelty is possible, of how creativity takes place, of how things can be something other than just repetitions of previous things. Harman observes that, “for Aristotle… causation itself isn’t really a problem; there are no gaps between things.” I would claim, contra Harman, that the same is true of Whitehead. The problem for Whitehead is not the occasionalist one of how to bring unconnected things together, but rather the one of how to produce gaps, discontinuities, and changes in a world in which everything (every actual entity) has a reason, which reason is always another actual entity (or a number of them).

In other words: Harman rejects Aristotle’s belief that “there are no gaps between things,” while he seeks to revive an Aristotelian notion of substance. Whitehead, as is well known, utterly rejects Aristotelian substance, but like Aristotle he doesn’t have a problem with things touching and affecting one another. Actually, it is a bit more complicated: for Whitehead — contra Bergson — “there is a becoming of continuity, but no continuity of becoming.” Both the continuity and the gaps in continuity have to be produced, and have to be accounted for. Reality, for Whitehead, is atomistic — but this does not mean nonrelational. I think that Whitehead would probably reject Harman’s basic duality between objects and relations in much the same way that Latour does.

To get back to Latour — he says in The Prince and the Wolf that he is not as much of an actualist as Harman makes him out to be, precisely because he does not conceive things in “punctual” terms. Where Harman seeks to revive a notion of substance in order to get away from the contemporary overvaluation of relations, Latour poses the issue quite differently. Several times in the book he says that, precisely because we can no longer accept the notion of substance, the question that exercises him the most is one of subsistence. “Once substance has been excluded, subsistence comes to the fore.” For Harman, things are substances, in their basic being, regardless of whether they subsist or not. For Latour, things cannot be substances at all, and this is why the question of their subsistence is such an important one. Indeed, Latour hints that his still-unpublished exploration of different modes of being (under the influence of Souriau) is really about different ways of subsisting. There are multiple modes of being, because there are multiple ways in which entities, without being substances, nonetheless subsist over time (and also, I would suspect, through space).

Latour adds that what he now sees as the defect of his early treatise “Irreductions” (part of the Pasteur book) is that it is in fact too “punctual” — it presents as points what are really vectors. Now, “vectors” is very much a Whiteheadian term as well — Whitehead insists on the vector quality of existence — and for Latour, vectors are important because they involve both movements of translation and transportation, and processes of subsistence. Harman objects that vectors are only spatial, not temporal, a movement outward but not a movement forward in time — Whitehead’s and Latour’s vector picture has little to do with Bergsonian duration. Harman is right regarding Bergson specifically, but I don’t accept Harman’s further inference that therefore there is no real temporality in Latour: I think it is just that Latour is following Whitehead’s physics-inflected sense of spacetime, rather than Bergson’s radical duality between time and space. The movement of the vector is as irreducible to the kind of temporality of present instants that Harman describes as it is to Bergsonian continuity of becoming. For Latour (as for Whitehead, and in contrast to Harman) everything has “descendants and ascendants” [I suspect that what Latour meant by the latter word was “antecedents”].

And this, coming near the end of the volume (page 108), is perhaps the crux: Latour claims that “every single entity is expectant of a next step.” Harman responds: “Not expectant, but it becomes a possible mediator of other two entities.” Latour responds that he does intend the stronger meaning that Harman rejects: “No, but for itself, we are talking about the thing itself. It is expectant, is it not?” Harman says no, where Latour says yes. As for me, this is precisely where I side with Latour (and Whitehead) against Harman. Things are indeed “expectant,” because they feel what they prehend, and in turn set down conditions for what will prehend them, i.e. ways in which they will (expect to) be felt. Such is the vector character of experience for both Whitehead and Latour; it is also the “physical intentionality” at the heart of George Molnar’s conception of “powers.”

What is the post-cinematic?

I’m currently engaged in a round-table discussion (conducted via email) with Therese Grisham, Julia Leyda, and Nicholas Rombes, concerning the two Paranormal Activity films. The entire discussion among us will be published in the online film journal La furia umana. But I thought it might be worthwhile posting here, in advance, the first part of my contribution — since it summarizes my overall sense of what is meant by the term “post-cinematic” — as I used it in my last book, Post-Cinematic Affect.

My sense of the “post-cinematic” comes first of all from media theory. Cinema is generally regarded as the dominant medium, or aesthetic form, of the twentieth century. It evidently no longer has this position in the twenty-first. So I begin by asking, what is the role or position of cinema when it is no longer what Fredric Jameson calls a “cultural dominant,” when it has been “surpassed” by digital and computer-based media? (I leave “surpassed” in quotation marks in order to guard against giving this term a teleological meaning, as if the displacement of one medium by another were always a question of logical progression, or of advancement towards an overall goal. While André Bazin’s teleological “myth of total cinema” is certainly worth considering in this regard, there are many other factors in play as well; the situation is a complexly overdetermined one).

Of course, if we are to be entirely strict about it, cinema was only dominant for the first half of the twentieth century; in the second half, it gave way to television. But for a long time, a kind of hierarchy was still in place: the “big screen” continued to dominate the “small screen” in terms of social meanings and cultural prestige — even if the latter generated more revenue, and was watched by a far greater number of people. Already in the 1950s, movies achieved a second life on television; it wasn’t until much later that anyone had the idea of doing cinematic remakes of television shows. It’s true that television news, or live broadcast, became important pretty much right away: think of Nixon’s Checkers speech (1952), the Nixon-Kennedy debates (1960), and the coverage of the Kennedy assassination (1963). But it’s only been in the last decade or two that television drama has been seen as deeper and more relevant than cinematic drama. (In the 1970s, the Godfather films and Taxi Driver were cultural landmarks; for the past decade, the similar landmarks are shows like The Sopranos and The Wire).

The movies only gradually lost their dominant role, in the wake of a whole series of electronic, and later digital, innovations. Theorists like Anne Friedberg and Lev Manovich have written about many of these: they include the growth of massively multichannel cable television, the increasing use of the infrared remote, the development of VCRs, DVDs, and DVRs, the ubiquity of personal computers, with their facilities for capturing and editing images and sounds, the increasing popularity and sophistication of computer games, and the expansion of the Internet, allowing for all sorts of uploading and downloading, the rise of sites like Hulu and YouTube, and the availability of streaming video. These developments of video (electronic) and digital technologies entirely disrupted both the movies and traditional broadcast television. They introduced an entirely new cultural dominant, or cultural-technological regime: one whose outlines aren’t entirely clear to us as of yet. We do know that the new digital technologies have made the production, editing, distribution, sampling, and remixing of audiovisual material easier and more widespread than it has ever been before; and we know that this material is now accessible in a wider range of contexts than ever before, in multiple locations and on screens ranging in size from the tiny (mobile phones) to the gigantic (IMAX). We also know that this new media environment is instrumental to, and deeply embedded within, a complex of social, economic, and political developments: globalization, financialization, post-Fordist just-in-time production and “flexible accumulation” (as David Harvey calls it), the precarization of labor, and widespread micro-surveillance. (Many of these developments are not new, in that they are intrinsic to the logic of capitalism, and were outlined by Marx a century and a half ago; but we are experiencing them in new forms, and with new degrees of intensity).

Such is the context in which I locate the “post-cinematic.” The particular question that I am trying to answer, within this much broader field, is the following: What happens to cinema when it is no longer a cultural dominant, when its core technologies of production and reception have become obsolete, or have been subsumed within radically different forces and powers? What is the role of cinema, if we have now gone beyond what Jonathan Beller calls “the cinematic mode of production”? What is the ontology of the digital, or post-cinematic, audiovisual image, and how does it relate to Bazin’s ontology of the photographic image? How do particular movies, or audiovisual works, reinvent themselves, or discover new powers of expression, precisely in a time that is no longer cinematic or cinemacentric? As Marshall McLuhan long ago pointed out, when the media environment changes, so that we experience a different “ratio of the senses” than we did before, older media forms don’t necessarily disappear; instead, they are repurposed. We still make and watch movies, just as we still broadcast on and listen to the radio, and still write and read novels; but we produce, broadcast, and write, just as we watch, listen, and read, in different ways than we did before. 

I think that the two (so far) Paranormal Activity films are powerful in the ways that they exemplify these dilemmas, and suggest possible responses to them. They are made with recent (advanced, but low-cost) digital technologies, and they also incorporate these technologies into their narratives, and explore the new formal possibilities that are afforded by these technologies. As horror films, they modulate the affect of fear through, and with direct attention to, these digital technologies, and the larger social and economic relations within which such technologies are embedded. The Paranormal Activity films in fact work through the major tropes of twentieth-century horror. First, there is the disruption of space that comes when uncanny alien forces invade the home, manifesting in the very site of domesticity, privacy, and the bourgeois-patriarchal nuclear family. And second, there is the warping (the dilation and compression) of time that comes about through rhythms of dread, anticipation, and urgency: the empty time when the characters or the audience are waiting for something to happen, or something to arrive, and the overfull time when they are so overwhelmed by an attack or an intrusion that it becomes impossible to perceive what is happening clearly and distinctly, or to separate the otherworldly intrusion from the viscerally heightened response (or inability to adequately respond). The Paranormal Activity films take up these modulations of space and time, but in novel ways, because their new technologies correspond to, or help to instantiate, new forms of spatiotemporal construction (one might think here of David Harvey’s “space-time compression,” or of Manuel Castells’ “space of flows” and “timeless time”).