First, N. explains a few things about rationality, decision theory, and goals. Then he explores the process that leads to rational beliefs. He ends with some interesting suggestions.
Rationality is an instrument for achieving our goals; it does not determine what those goals are. Beliefs that are "rational" exhibit two characteristics: they are reliable, and they are based upon reasons. "Reliable" means that the beliefs can be depended on to achieve a goal. Without reasons, beliefs may be reliable, but they are not rational; without reliability, reasons seem to have nothing to recommend them. It is not clear, though, how much reliability is needed to make a belief rational.
"Decision-theory" is a theory of how to make decisions. Its criteria for deciding is what yields or is likely to yield maximum expected utility. Usually, rational action is held to yield maximum expected utility, but if it did not, decision theory would discard rationality.
In general terms, the goal of rational belief is often held to be truth, though N. notes that many truths are not worth knowing, and that our ultimate goal might be something other than truth, e.g. happiness. N. suggests that the ability to formulate true beliefs (or perhaps just the valuing of truth) was evolutionarily selected for: humans who formed true beliefs were more likely to survive and reproduce. But approximate truth was sufficient for that.
So should we call "rational" those beliefs that come from a procedure that achieves the goal of true belief? N. suggests that this is only one possibility. Perhaps it is better to say that "rational" beliefs are those arising from a procedure that maximizes true beliefs, even if it admits some false ones along the way (the belief that I can understand quantum mechanics might lead to more true beliefs than the true belief that I cannot understand it). Another possible account of "rationality" is that it assigns different weights, at different times, to maximizing true beliefs versus ensuring that this particular belief is true. And there are certainly situations in which believing the truth will impede a person's other goals, goals of a higher order than the instrumental goal of believing the truth (the murderer's mother: pp. 69-70).
But having even one false belief creates a ripple effect among other beliefs. If so, perhaps a principle of believing only what is true, stringently applied, is what leads to maximum expected utility: "Believing a particular truth comes to have a symbolic utility not tied to its actual consequences."
Not all of our true beliefs are rational. "Rational" beliefs must involve the use of a network of reasons, and that use of reasons must be the cause of the beliefs' reliability. For example, my belief that I am seeing blue does not seem "rational," although it is true. A "rational" belief must arise from (an attempted) consideration of all and only the relevant factors, i.e. the reasons: reasons for, reasons against, and any further reasons that bear on those. Rationality considers reasons, but is (or tries to be) aware of its own possible biases.
We are apt to think (or hope) that we will come to understand rationality by something like what philosophers do: applying logic to individual questions. But perhaps rationality is more like the emergent output of a set of parallel processors, equipped with error-correction devices and constantly reassessing the weighting given to each individual processor (see the 'target' example on p. 77, the note on p. 78, p. 79, and the note on p. 81).
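As a rough illustration of that picture (my sketch of the general idea, not a mechanism N. specifies), imagine several unreliable estimators whose verdicts are pooled, and whose weights are continually reassessed against the pooled verdict:

```python
# Toy sketch (an illustration, not N.'s own model): several noisy
# "processors" each estimate a quantity in parallel; an aggregator
# outputs a weighted verdict and keeps reassessing how much weight
# each processor deserves, based on how far it strays from consensus.
import random

TRUE_VALUE = 10.0
BIASES = [0.0, 1.5, -3.0, 0.5]     # each processor's systematic error
weights = [1.0] * len(BIASES)      # start by trusting all equally

def processor_output(i: int) -> float:
    """One fallible processor: the truth, distorted by bias and noise."""
    return TRUE_VALUE + BIASES[i] + random.gauss(0, 1)

verdict = 0.0
for _ in range(200):
    outputs = [processor_output(i) for i in range(len(BIASES))]
    verdict = sum(w * o for w, o in zip(weights, outputs)) / sum(weights)
    # Error correction: down-weight processors that disagree with the
    # weighted consensus; over time the more reliable ones dominate.
    weights = [w / (1.0 + abs(o - verdict)) for w, o in zip(weights, outputs)]
    total = sum(weights)
    weights = [w / total for w in weights]

print(f"verdict after reweighting: {verdict:.2f}")
print("final weights:", [round(w, 3) for w in weights])
```

No single processor is authoritative here; the "rational" output is the emergent, error-corrected consensus.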
Even the proposed normative requirement that one's beliefs be consistent and logically closed may not be optimal for achieving a high number of truths. We often acknowledge our fallibility (i.e. we acknowledge that we know that one or more of our beliefs is false), but we limit the damage as much as we can and continue to hold our beliefs (as physicists do with quantum mechanics).
On pages 81-85, N. describes in some technical detail a possible mechanism for deciding between various competing belief candidates, including a revised Bayesian calculation which yields a "credibility value," and he proposes rules for rational belief based on those values.
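These notes do not reproduce N.'s revised calculation; for orientation, the ordinary Bayesian update that any such revision starts from is

\[ P(h \mid e) \;=\; \frac{P(e \mid h)\, P(h)}{P(e)} \]

i.e., the probability of hypothesis h given evidence e. N.'s "credibility value" results from modifying this kind of calculation so as to rank competing belief candidates.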
The chapter ends with a series of interesting suggestions.
N. says that there is some attraction to the idea of avoiding belief altogether, a position he calls "radical Bayesianism": rather than believe one candidate, simply assign probabilities to all the candidates and decide what to do by maximizing expected utility. N. says that this won't work, because in order to make a choice we have to "believe" that we are facing a choice, with various options and various outcomes. What is more, keeping track of so many probabilities is too large a practical task: belief functions as a sifting device that makes the world manageable for us.
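One way to make the bookkeeping worry concrete (my illustration, not N.'s): a full probability distribution over n yes/no propositions has 2^n joint entries, whereas flat-out belief stores only n accept/reject verdicts.

```python
# Why full probabilistic bookkeeping explodes while belief stays cheap:
# a joint distribution over n binary propositions needs 2**n entries,
# but flat-out belief keeps just n verdicts.
for n in (10, 20, 30, 40):
    print(f"{n} propositions: {2**n:>15,} joint-probability entries "
          f"vs {n} beliefs")
```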
N. also suggests that just as one might conceive of degrees of belief, so too one might conceive of degrees of rationality. Alternatively, he suggests, belief might be tied to context: in a general context, I am willing to believe, without any special reasoning, that none of the people I know are child-abusers; but when it comes to choosing a baby-sitter, I am more careful.
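That contextual suggestion can be given a decision-theoretic gloss (my formalization, not N.'s): accept a proposition outright only when its probability clears a threshold set by the stakes in the current context.

```python
# Hedged sketch (my formalization, not N.'s): flat-out belief as a
# context-sensitive threshold on degree of belief. Accept when
#   prob * benefit_if_right > (1 - prob) * cost_if_wrong,
# i.e. when prob > cost_if_wrong / (cost_if_wrong + benefit_if_right).
def believe(prob: float, cost_if_wrong: float,
            benefit_if_right: float = 1.0) -> bool:
    threshold = cost_if_wrong / (cost_if_wrong + benefit_if_right)
    return prob > threshold

p_no_abuser = 0.99  # my credence that a given acquaintance is no abuser
print(believe(p_no_abuser, cost_if_wrong=1))     # casual context: True
print(believe(p_no_abuser, cost_if_wrong=1000))  # baby-sitter: False
```

The same degree of belief licenses outright acceptance in a low-stakes context and withholds it in a high-stakes one, which is just the baby-sitter pattern above.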
Finally, N. suggests that to be rational, one must consider and counteract sources of bias, not just first-order bias (e.g. I don't like Lower Slobobians, and so I don't hire them), but also second-order biases (e.g. no one in my firm explicitly discriminates against Lower Slobobians, and none of the firm's policies do, but the policies were formulated in such a way that no Lower Slobobian will qualify).