One of the central claims of object-oriented ontology is that objects are withdrawn from one another. In this way, OOO radicalizes a Kantian claim. Kant held that the in-itself is withdrawn from humans, such that we only have access to phenomena and never to things-in-themselves. OOO accepts this thesis with the caveat that it holds for all objects, whether the entities involved are humans relating to nonhuman objects, humans relating to other humans, or planets relating to stars. Each object encounters other objects as phenomena, or what Graham Harman calls “sensual objects”. It follows that the inner world of objects is essentially unknowable. We can track inputs and the outputs produced in response to those inputs (what I would call “local manifestations”), yet the inner world of objects is a black box.
I’ve been delighted to discover that Andrew Pickering articulates a very similar line of thought in The Cybernetic Brain. This is a wonderful book, so take the time to read it if you have access to it. This point and how it modifies our understanding of knowledge and the world comes out very clearly in his discussion of Ross Ashby’s famous homeostat. As recounted by Wikipedia,
The Homeostat is one of the first devices capable of adapting itself to the environment; it exhibited behaviours such as habituation, reinforcement and learning through its ability to maintain homeostasis in a changing environment. It was built by William Ross Ashby in 1948 at Barnwood House Hospital. It was an adaptive ultrastable system, consisting of four interconnected Royal Air Force bomb control units with inputs, feedback, and magnetically-driven, water-filled potentiometers. It illustrated his law of requisite variety — automatically adapting its configuration to stabilize the effects of any disturbances introduced into the system. It was the realization of what he had described in 1946 as an “Isomorphism making machine”.
What we get in the case of the homeostat is a machine where each individual homeostat evolves in response to the outputs of the others. As Pickering describes it,
…we need to think about Ashby’s modelling not of the brain but of the world. The world of the tortoise [which I discuss here] was largely static and unresponsive– a given field of light and obstacles –but the homeostat’s world was lively and dynamic: it was, as we have seen, more homeostats! If in a multiunit setup homeostat 1 could be regarded as a model brain, then homeostats 2, 3, and 4 constitute homeostat 1’s world [what I would call its “regime of attraction”]. Homeostat 1 perturbed its world dynamically, emitting currents, which the other homeostats processed through their circuits and responded to accordingly, emitting their own currents back, and so on around the loop of brain and world. (106)
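This back-and-forth can be made concrete with a minimal simulation. The sketch below is a toy under loose assumptions of my own, not Ashby’s actual electromechanical design: each unit’s state is driven by weighted currents from the others, and whenever a unit’s essential variable leaves its viable range, the unit randomly rewires its input weights (roughly Ashby’s “step function” change) until the whole assemblage settles into mutual equilibrium. All names and parameter values are illustrative.

```python
import random

def run_homeostats(n=4, bound=1.0, steps=2000, seed=0):
    """Toy model of Ashby's ultrastable homeostats. Each unit's state is
    driven by weighted currents from the other units; when a unit's state
    (its 'essential variable') leaves the viable range, the unit resets
    and randomly rewires its input weights."""
    rng = random.Random(seed)
    weights = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
    state = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    rewirings = 0
    for _ in range(steps):
        # each unit relaxes toward the weighted sum of the others' outputs
        state = [0.5 * state[i] + 0.5 * sum(weights[i][j] * state[j]
                                            for j in range(n) if j != i)
                 for i in range(n)]
        for i in range(n):
            if abs(state[i]) > bound:   # essential variable out of range:
                weights[i] = [rng.uniform(-1.0, 1.0) for _ in range(n)]
                state[i] = rng.uniform(-0.5, 0.5)  # reset and rewire
                rewirings += 1
    return state, rewirings
```

Notice that no unit “knows” the others: each only perturbs its neighbors and reacts to the currents that come back, which is the point of Pickering’s “dance of agency.”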
The conclusion to be drawn from this, says Pickering, is that “[a]s ontological theater […] a multihomeostat setup stages for us a vision of the world in which fluid and dynamic entities evolve together in a decentered fashion, exploring each other’s properties in a performative back-and-forth dance of agency” (ibid.). The key point here is that the other objects are not explored cognitively through passive representation, but through action and interaction. One homeostat discovers the properties of another object by perturbing it in particular ways. This, in turn, produces certain outputs. Thus, Pickering goes on to add that “[…] relations between homeostats were entirely noncognitive and nonrepresentational. The homeostats did not seek to know one another and predict each other’s behavior. In this sense, each homeostat was unknowable to the others, and a multihomeostat assemblage thus staged what I called before an ontology of unknowability” (ibid.).
Here there are a few points worth making. First, the properties– outputs, local manifestations –discovered in the environment of the “brain-homeostat” are a function of the brain-homeostat’s actions (inputs). As a consequence, what the brain-homeostat discovers is a function of its own action. The other homeostats can harbor all sorts of other powers– virtual proper being –that are not manifested because either 1) the other homeostats are not being perturbed in such a way as to activate them, or 2) the other homeostats do not have channels that allow them to be perturbed by the brain-homeostat. In this latter case, we get a situation in which the other homeostat is so withdrawn from the brain-homeostat that it’s as if the homeostats do not even exist for one another. This would be analogous to the neutrinos I discuss elsewhere. Second, as the dance of agency unfolds through the communication of the homeostats with one another, patterned relationships begin to emerge. This would be analogous to what takes place when fireflies flash at one another in such a way that an oscillating pattern emerges in which they all appear to switch on and off simultaneously in response to one another. Thus, third, we here begin to get the genesis of higher-order objects. Through the formation of these patterned interactions, we begin to get the emergence of an entity in its own right that can interact with other entities at higher levels of scale. This higher-scale entity draws outputs from lower-scale entities (the homeostats) so as to maintain a patterned existence and unity in the order of time. In this case, it is not really one of the homeostats that’s a brain, but rather the homeostats taken as an aggregate that form something like a brain through their ongoing interactions and communications with one another. Here we might think of the homeostats as being akin to individual neurons.
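The firefly analogy can be rendered as a minimal Kuramoto-style model (my illustration; the coupling scheme and parameter values are assumptions, not anything in Pickering): oscillators with slightly different natural frequencies pull on one another’s phases, and above a coupling threshold a collective rhythm emerges that belongs to the aggregate rather than to any single unit.

```python
import cmath
import math
import random

def firefly_sync(n=10, coupling=1.5, dt=0.05, steps=2000, seed=1):
    """Kuramoto-style toy: each 'firefly' advances its phase at its own
    natural frequency while being pulled toward the others' phases.
    Returns the order parameter r in [0, 1]: near 0 means incoherent
    flashing, near 1 means the aggregate flashes in unison."""
    rng = random.Random(seed)
    freqs = [1.0 + rng.uniform(-0.1, 0.1) for _ in range(n)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # synchronous Euler step: each phase is nudged toward the others
        phases = [p + dt * (freqs[i] + (coupling / n) *
                            sum(math.sin(q - p) for q in phases))
                  for i, p in enumerate(phases)]
    # mean of the unit phasors: the collective rhythm's coherence
    return abs(sum(cmath.exp(1j * p) for p in phases)) / n
```

With strong coupling the order parameter climbs toward 1: a patterned, higher-order unity that no individual oscillator possesses on its own.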
February 23, 2011 at 2:59 pm
nods to Heidegger aside, are you using “withdrawn” to signal an action on the part of the object being detected/interacted-with, or is this more a matter of the limits of the grasp/knowability of the detector/manipulator?
February 23, 2011 at 3:47 pm
Hi dmf,
You can find out a bit more as to how I use withdrawal in my Speculative Turn article entitled “The Ontic Principle”. A link to The Speculative Turn can be found in the sidebar, where you can read it in free .pdf form.
February 23, 2011 at 5:10 pm
hi,
same observation here. would your “withdrawn” refer to the retreat-ness mentioned by shaviro in his “the universe of things” paper?
kuja
February 23, 2011 at 5:59 pm
Hi Kuja,
I haven’t read Steve’s universe of things paper and can’t recall from the talk whether he used “retreat-ness” in the same way. Basically withdrawal means that objects never directly relate or encounter one another.
February 25, 2011 at 8:32 am
Levi’s argument needs a bit more spelling out. Here’s a first attempt (pre-breakfast, just so you know):
Assumption: Representational/semantic (RS) properties are static.
1) Agency properties (the behaviour of agents) are non-static (i.e. dynamic, fluid, etc.).
2) RS properties supervene on (depend on) Agency properties.
3) All supervenient properties have the same higher-order properties as their subvenient properties.
By (3), RS properties are non-static (contrary to the assumption).
However, this is an unsound argument because 3) is patently false. Supervenient properties don’t get all their higher-order properties from their base of subvenient properties. Aesthetic properties plausibly supervene on physical properties, but physical properties are quantifiable whereas aesthetic properties are not.
So for the argument to work we need to assert either identity between dynamic agency properties and representational/semantic ones (reductionism) at (3), so we can get to the conclusion via the indiscernibility of identicals, or eliminativism (there are no RS properties).
So if this argument supports OOO, OOO is committed to reductionism or eliminativism.
February 25, 2011 at 8:44 am
[…] to be fair, it probably isn’t, but, on the strength of this post over at Larval Subjects, Bryant might just believe that it is. The idea seems to be that […]
February 25, 2011 at 2:13 pm
David,
You’ll have to spell out your argument in more detail here. I’m not clear as to what you’re getting at.
February 25, 2011 at 10:24 pm
Mariela Szirko, 1999, Karl Jaspers Forum TA 15, in a comment entitled ‘BEFORE THE TRUTH, THE GODS PUT SWEATING’ (Hesiod).
March 1, 2011 at 3:54 pm
Hi Levi,
I’m not contesting the OOO claim regarding the epistemic impenetrability of objects or your general claim regarding the non-representational character of knowledge of objects.
However, the considerations adduced here only establish that knowledge of objects is non-representational if we make extremely deflationary assumptions about the relationship between knowledge and the dynamic processes you describe. The mere dependence of representation on dynamics does not suffice to support your claim.
For example, even if all states of a representational system S are responsive to the changing outputs of objects in S’s world, it doesn’t follow that some of those states are not also responsive to internal states of those objects. S might have a feedforward neural network with an input layer corresponding to the sensory input from the object’s outputs, while a trained-up ‘hidden’ layer flips its outputs into one state if the tracked object is behaving phototropically and into another if it is behaving photophobically. If these behaviors are caused by internal states of the object, then S would track internal states of the object. In terms of the state space of the system, the phototropic/photophobic difference would correspond to a partition of that total space by the hidden layer.
If the hidden layer state merely replicates the dynamically changing input or responds randomly (as would be the case in a network prior to training) then this presumably won’t be the case.
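[The tracking scenario described in this comment can be sketched as a toy. This is an illustration under assumed names and numbers, not anything from the original comment: the object’s latent mode (phototropic vs. photophobic) is never emitted directly, yet a single “hidden unit” that correlates light direction with movement partitions the tracker’s state space along exactly that internal difference.]

```python
import random

def observe(phototropic, steps=50, rng=None):
    """Simulate an object's observable outputs: its movement given a
    randomly placed light at each step. The latent internal state
    (phototropic vs photophobic) is never emitted directly."""
    rng = rng or random.Random()
    outputs = []
    for _ in range(steps):
        light = rng.choice([-1, 1])        # light to the left or right
        noise = rng.uniform(-0.3, 0.3)
        move = (light if phototropic else -light) + noise
        outputs.append((light, move))
    return outputs

def hidden_unit(outputs):
    """A one-unit 'hidden layer': correlates light direction with
    movement. Its sign partitions the tracker's state space into two
    regions corresponding to the object's two internal modes."""
    return sum(light * move for light, move in outputs) / len(outputs)

def classify(outputs):
    return "phototropic" if hidden_unit(outputs) > 0 else "photophobic"
```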
So if we identify knowledge states with fluidly changing states recording the passing scene, we get the reductive result that all we can know is the passing scene. We could also get to a similar position if we simply reject the claim that some objects – like homeostats – have internal states with causal roles (input-output conditions). I suppose OOO fans have to commit to some such claim to enter their enchanted black-box universe.
But while this may follow from the OOO assumptions you outline, it only follows from an assumption about cybernetic systems if we ignore the role of hypothesis-forming mechanisms such as those used in ANNs. This corresponds to a reductive or an eliminativist decision. I don’t think there is anything wrong with reductionism or eliminativism per se, but since both tend to be decried by OOO proponents, I found both the content and tenor of this piece puzzling.
March 1, 2011 at 4:46 pm
David,
I’m unable to respond to your argument because I haven’t the faintest clue as to what it is. I see that you are making the charges of eliminativism and reductivism, but am unable to see how you’re arriving at this conclusion based on the reasoning you provide (i.e., it’s Greek to me).
April 4, 2011 at 7:10 pm
[…] note: Pickering’s book has also attracted positive attention from Levi Bryant. Despite our differences (explored in many previous exchanges here), […]