Summary of Chapter 4 of Robert Nozick's The Nature of Rationality

This chapter explores the notion that rationality might be a product of evolution, which shapes its capacities and imparts biases to it, but which does not prevent rationality from being used for purposes it was not evolutionarily designed for.

What makes something a "reason" for a belief? Believing that r is a reason for h will help us reach truth only if there is some sort of relation between r and the truth of h. One view holds that such relationships are the sort of thing our mind just recognizes: i.e., we have a detector in our brains. Another view is that there is a certain factual relation between r and h, and that relation makes r a reason for h whether or not we see it as such. N. points out that if we do not recognize that the factual relation holds, then r does not really seem like a reason to believe h; his point seems to be that r might still be evidence for h, but it is not a reason for us.

N. suggests that we combine the two views: r is a reason for h if the factual relation holds and it somehow strikes us that r makes h believable.

Evolutionarily, the ability to generalize from particulars to types (a fundamental aspect of forming beliefs) may have been selected for, because it would frequently yield truth. Those who could do so faster or better survived and reproduced. That those generalizations appear "self-evidently" true need not mean that they are true (an analogy [not an example]: Euclidean geometry, although technically false, is "true enough" and seems self-evidently true). Some people claim that the "self-evident" rules of logic are likely to be evolutionarily selected-for "wiring" in us that may or may not have to do with truth. N. stops short of that: he just wants to claim that they are "true enough."

There is an old problem here: if we and reality are independent of each other, how will we ever become correlated? Is it all a dream? How can we know? Kant suggested that objects in the world conform to our cognitive faculties (we know only about empirical reality, which is "shaped by our constitution," not about objects in themselves). N. suggests instead that reason conforms to reality via the mechanism of evolution. For example, being overly cautious might be selected for, even though it often leads to false beliefs (that there is danger), because it occasionally leads to more important true beliefs and is more reliable for achieving the goal of survival. Believing for reasons may be part of a reliable system and may thus be strongly favored by evolution. Although evolution has "installed" and shaped reason in us, we can use reason for purposes other than those evolution selected it for.

N. spends a good deal of time discussing evolutionary theory and how it works on pp. 114-119. The important part for us is that "Z is a function of X when Z is a consequence (effect, result, property) of X, and X's producing Z is itself the goal-state of some homeostatic mechanism M . . . and X was produced or is maintained by this homeostatic mechanism M." (p. 118) So the mechanism (rationality, say) produces X (reasons, belief), which produces Z (truth? or whatever the goal of our homeostatic mechanism is: happiness? But what evolutionary function would happiness have? Perhaps dealing with new and unforeseen things). Things that did not originally have a function may be given one (e.g. a flat rock can be maintained as a picnic table), and so reason may simply have been an evolutionary side-effect of something else that came to have a function after the fact.

N. suggests that some difficult philosophical problems (induction, other minds, the existence of external objects, justifying rationality) all rest on assumptions that evolution has installed in us. Those relatives of our ancestors who had more serious problems with these issues did not survive and reproduce. Rationality was not designed to solve those problems, so it is no surprise that it has not, but it may still do so. The assumptions need not be true, either: they need only be "true enough," or reliable.

Next, N. considers that rationality in an individual may be involved in a homeostatic mechanism within that individual, but there are other homeostatic mechanisms: society creates many of them. Perhaps society is the homeostatic mechanism that is producing rationality, and not individuals. Rationality itself is interpersonal to some degree, as Habermas says, but N. points out that that idea should not be taken to extremes (p. 125 note). Societal institutions may be propagating further institutions that evolve too.

Our genes change over eons. Societal institutions change over decades. Individuals change more quickly still. Perhaps these are all homeostatic mechanisms that serve goals of our species over different time intervals. Evolutionarily, it might make sense for us generally to follow whatever most others do (and so we might have a built-in worry about what others think), though not always: if you have no special reason to move away from societal norms, you don't, but sometimes there is such a reason. Perhaps our biases have some function due to the homeostatic mechanisms that produce them.

Having considered societal institutions as evolutionarily conditioned, and society as a whole as having a certain rationality, N. moves on to societal change. A worry is the persistence of slavery as a societal institution. Slavery obviously passed certain tests, evolutionarily speaking, but whether it should be retained need not be determined by that past; our rationality might devise a better system. The problem is getting to that system: from the local perspective of a slaveholding regime, it is hard to see the rational reason to give up slaves, and even if one does see it, it might be hard to see how to get there. N. thinks capitalism, for instance, resulted from a series of small steps away from feudalism, each one beneficial and passing tests, but that does not mean capitalism is the global optimum; it is perhaps only a local optimum. Moving from a local optimum to a global one is difficult. It can be done in small steps, each of which is attractive, but what if the only way to reach the global optimum is by one huge step?
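The local-optimum point borrows a concept from optimization, and a toy hill-climbing sketch makes it concrete. (The landscape, start point, and step size below are purely illustrative assumptions, not anything in N.'s text.) A process that only accepts individually beneficial small steps can halt on a lower peak even when a higher peak exists elsewhere:

```python
import math

def hill_climb(f, x, step=0.5, max_iters=100):
    """Greedy search: move to a neighboring point only if it improves f.
    This mirrors 'small steps, each one beneficial': once no single small
    step helps, the search stops, even at a merely local optimum."""
    for _ in range(max_iters):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):   # no small step is attractive
            return x          # stuck on a local peak
        x = best
    return x

# Toy landscape: a small peak near x = 0, a higher peak near x = 10.
def landscape(x):
    return 3 * math.exp(-x ** 2) + 5 * math.exp(-((x - 10) ** 2) / 4)

x_final = hill_climb(landscape, x=0.0)
# Starting near the low peak, every small step looks worse, so the
# climber never reaches the higher peak near x = 10.
```

Reaching the higher peak from x = 0 would require one "huge step" across the valley, which is exactly the structural difficulty N. raises about moving a society from a local to a global optimum.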