Review:

Posted 7/31/96

"No Place to Hide: Campbell's and Danielson's Solutions to Gauthier's Coherence Problem" Dialogue: Canadian Philosophical Review, XXXV, No. 2 (1996) 235-40. by Paul Viminitz (University of Waterloo),

Reviewed by Chris MacDonald (University of British Columbia)
(chrismac@ethics.ubc.ca)


Paul Viminitz' discussion has its modern roots in Gauthier's (1986) claim that conditional cooperation is the appropriate strategy for the one-shot Prisoner's Dilemma. The obvious objection to this, as Danielson (1992) notes, is that it seems to lead to an infinite regress of conditionality: if each agent's cooperation is conditional upon the other's, neither decision procedure can ever terminate. Thus it has been suggested that the conditional cooperation Gauthier recommends is procedurally incoherent.
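The regress can be made vivid with a minimal sketch (mine, in Python, not anything from the works under review; the strategy names are illustrative only):

    # Each strategy takes the opponent's decision procedure and returns a move.
    def unconditional_cooperator(opponent):
        return "C"

    def conditional_cooperator(opponent):
        # Cooperate iff the opponent would cooperate with me -- which
        # means *running* the opponent's procedure.
        return "C" if opponent(conditional_cooperator) == "C" else "D"

    # Against an unconditional cooperator this halts and returns "C".
    # But two conditional cooperators call each other without end:
    # conditional_cooperator(conditional_cooperator) raises RecursionError.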

The solution offered by Campbell (1985) is to make one's action conditional not upon the opponent's decision procedure itself, but upon some observable feature of the opponent that is correlated with a cooperative disposition. But as Smith (1991) points out, making cooperation conditional upon some feature distinct from one's opponent's decision procedure allows for the possibility of dissimulation. That is, one's opponent might display the "I'm a cooperator" flag, and yet defect anyway.
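In the same toy terms (my construction, not Campbell's or Smith's): if the responder reads only an observable flag, nothing ties the flag to the procedure behind it.

    class Agent:
        def __init__(self, flag, procedure):
            self.flag = flag              # observable "I'm a cooperator" marker
            self.procedure = procedure    # hidden decision procedure

    def campbell_cooperator(other):
        # Consult the flag, never the procedure: no regress is possible.
        return "C" if other.flag == "cooperator" else "D"

    honest   = Agent("cooperator", lambda: "C")
    deceiver = Agent("cooperator", lambda: "D")  # flies the flag, defects anyway

    print(campbell_cooperator(honest), honest.procedure())      # C C
    print(campbell_cooperator(deceiver), deceiver.procedure())  # C D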

Danielson's solution to this is to make cooperation conditional upon the form, rather than the function, of one's opponent's decision procedure. That is, one's own decision procedure should not use, but should only mention, the other's decision procedure. If my decision procedure merely examines yours (say, to look for a conditional disposition) without running it, there is no chance of looping. This, of course, requires that agents be cognitively transparent.
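A rough rendering of the use/mention point (Danielson's own agents are written in Prolog; this Python sketch, with its crude textual test, is mine):

    import inspect

    def defector(opponent):
        return "D"

    def reciprocal_cooperator(opponent):
        # Mention, not use: read the text of the opponent's procedure
        # instead of calling it, so two such agents cannot loop.
        source = inspect.getsource(opponent)
        # A crude syntactic stand-in for Danielson's matching on the
        # form of Prolog clauses: cooperate iff the code can return "C".
        return "C" if 'return "C"' in source else "D"

    print(reciprocal_cooperator(reciprocal_cooperator))  # C -- and no loop
    print(reciprocal_cooperator(defector))               # D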

Viminitz claims that Danielson's solution is inadequate, on the grounds that Danielson fails to explain how one agent can really be sure that another apparently transparent agent truly is fully transparent. Is it not possible, Viminitz asks, that an agent could merely seem transparent, while secretly 'holding back' a crucial, treacherous aspect of its decision procedure? Viminitz claims that the only way for Danielson to avoid this possibility is to stipulate that agents in his toy world have a priori knowledge of how much computational space their opponents have.
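Viminitz' worry can be put concretely (again my construction): if transparency amounts to an agent-supplied self-report, the report and the behaviour can come apart.

    class SelfReporting:
        def __init__(self, published, actual=None):
            self.published = published          # the procedure others are shown
            self.actual = actual or published   # the procedure that actually runs

        def move(self, opponent):
            return self.actual(opponent)

    looks_nice = lambda opp: "C"

    honest = SelfReporting(published=looks_nice)
    sneak  = SelfReporting(published=looks_nice, actual=lambda opp: "D")

    # Inspecting .published cannot distinguish honest from sneak.
    print(honest.move(None), sneak.move(None))  # C D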

First, Viminitz fails to acknowledge the conditional nature of Danielson's claim. Danielson's primary claim is the conditional claim that if agents could be cognitively transparent, they would be able to engage in reciprocal cooperation without looping. For Viminitz to show that some agents might not be transparent hardly invalidates Danielson's conditional claim that if such agents existed, they could avoid the looping problem.

Second, is Viminitz right to hold that, in order for transparency to work, such agents would have to have a priori knowledge of the limits of each other's computational capacities? Danielson claims actually to have produced software agents that are cognitively transparent to each other: does he use the cheat that Viminitz says he must?

The simplest objection to Viminitz' claim might be that he simply fails to back it up. Faced with Danielson's claim to have instantiated cognitively transparent agents, Viminitz fails entirely to point out what bit of a priori knowledge these agents have that allows them to function. He fails to do so, of course, for the simple reason that Danielson's simple software agents have no such knowledge. In fact, four lines of Prolog can hardly be said to have any knowledge whatsoever.

This points to a deeper problem with Viminitz' position. The fact that Viminitz identifies a priori knowledge on the part of agents as the only way of 'salvaging' Danielson's proposal shows that Viminitz fails to see that agents are only half the story in any game-theoretical account: the world in which those agents interact is the other half. Danielson's software agents are able to rely on each other's cognitive transparency simply because of the way their world is structured: that is, their world simply makes falsehoods of a certain kind impossible. Danielson's agents can no more falsify their decision procedure than an elephant, upon physical examination, can pretend it's a mouse.
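The structural point can be seen in the same toy terms (a sketch under my own assumptions, not Danielson's code): when the tournament itself hands each agent the other's actual procedure, there is no separate self-report left to falsify.

    def cooperator(opponent):
        return "C"

    def defector(opponent):
        return "D"

    def play(agent_a, agent_b):
        # The referee passes each agent the other's actual function object;
        # agents never describe themselves, so they cannot misdescribe.
        return agent_a(agent_b), agent_b(agent_a)

    print(play(cooperator, defector))  # ('C', 'D')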

Viminitz himself provides, in his final paragraph, an alternative to a priori knowledge as a way of precluding deception. He first claims that the "one and only solution to the coherence problem is to leave [deceptive] protocols no place to hide" (240). Viminitz then suggests that there are two ways of carrying this out. The first is the way upon which Viminitz claims Danielson must rely: provide agents with a priori knowledge of the extent of each other's cognitive capacities. The second way, which, according to Viminitz, is instantiated in human beings, is strictly to limit the hardware space available to each agent. Viminitz bases his claim on the assumption that we real-life agents "...do not have the brains to indulge in the kind of algorithms that virtual entities can perform" (237). Viminitz claims that, given these hardware limitations, constant deception concerning one's degree of cooperativeness would be intellectually taxing: indeed, he claims that it is not worth the energy that must be expended to maintain such a facade.

Viminitz' examples of the instantiation of these two possibilities both seem plainly wrong. As mentioned above, Viminitz fails entirely to show any use of a priori knowledge by Danielson's software agents. As for the human world, hardware burdens (i.e., practical limits on the size of human brains) almost certainly do not act as a limit on our ability to dissimulate: there are almost always ways to hide one's intentions. Viminitz seems to think that we can assure each other of our cooperative nature simply by "wearing...[our] co-operative dispositions on...[our] sleeve[s]" (239). But in making such a claim, Viminitz is almost certainly guilty of precisely the crime of which he accuses Danielson: naively thinking that he has, with one brief argument, solved a problem "that has plagued philosophers for two and a half millennia" (238).

References:

Campbell, Richmond. Introduction to Paradoxes of Rationality and Cooperation: Prisoner's Dilemma and Newcomb's Problem, edited by R. Campbell and L. Sowden (Vancouver: University of British Columbia Press, 1985).

Danielson, Peter. Artificial Morality (London: Routledge, 1992).

Gauthier, David. Morals by Agreement (Oxford: Clarendon, 1986).

Smith, Holly. "Deriving Morality from Rationality," in Contractarianism and Rational Choice: Essays on Gauthier, edited by Peter Vallentyne (New York: Cambridge UP, 1991).