Can the Increasing Use of Public Opinion Polling be Justified?
by THOMAS MICHAEL NORTON-SMITH
The Midwest Quarterly, volume 36, number 1 (Autumn 1994): 97-112.
Herbert Asher characterizes the measurement of public opinion as "a growth industry in the United States" (2). The polls conducted and reported by the major communications media, such as CBS News/New York Times, ABC News/Washington Post, and NBC News/Wall Street Journal, are perhaps the most familiar. But in the mid-1980s as many as 500 to 600 news organizations were using polls and other quantitative studies as reporting devices, including 144 newspapers which were conducting their own polls in 1986. There has also been an explosion of publicly and privately commissioned polls focusing upon a diversity of national, state, and local interests, both public and private. In advance of a school-levy referendum an apprehensive school board might test the waters by commissioning a poll. Potential candidates sometimes gauge their chances of winning public office by enlisting the services of a polling organization. Congressional representatives poll their constituents, and some special interest groups use opinion surveys to garner contributions.
In 1940 George Gallup and Saul Rae asserted that "[n]ot only have the polls demonstrated by their accuracy that public opinion can be measured; there is a growing conviction that public opinion must be measured" (6). This conviction now goes largely unchallenged, for political philosophers are only beginning to consider whether the increasing use and reporting of opinion polls is justified. After a consideration of the nature of public opinion--what opinion polls purport to measure--I will argue that such a justification will not be easy to provide, because the increasing use of opinion polls may not be conducive to good public policy. I then consider whether three common justifications for opinion polling are sufficient given the influence polling has on public policy.
Even a cursory examination of the literature on public opinion reveals that while there is some agreement about its meaning, there is no clear consensus, for current debate is marked by significant disagreement about how the public should be defined, or even if it can be defined. Jerry Yeric and John Todd define public opinion as "the summary statement of the distribution of expressed attitudes on some subject at a particular moment" (30), while "the distribution of individual preferences within a population" is Alan Monroe's definition (6). V. O. Key defines "public opinion" as "those opinions held by private persons which governments find it prudent to heed" (14) and L. John Martin defines it as the opinions of a subgroup in a society--a public--which is confronted by, divided by, and actively discussing some issue (15). After spirited consultation with political scientist Jeffrey Orenstein, I believe that public opinion may be best defined as the distribution of expressed beliefs or attitudes of citizens and subjects in a political community about an object, event, policy, or course of action.
This definition assumes that there is a single public in a society, so it is open to the following challenge. Because of ignorance or indifference, not every citizen or subject will be competent to venture, or interested in venturing, a meaningful opinion on a particular issue. Not every citizen or subject can express a meaningful opinion on, say, the merits of constructing a superconducting supercollider, since not everyone is aware of the benefits which accrue from basic research. Some may not even know what a particle accelerator is, so, the challenge runs, the opinions of informed individuals ought to be weighted more heavily. This leads many political scientists to reject the proposal that there is a single, monolithic public in favor of the view that there are many publics in a society, each indexed to an issue or concern.
In light of the observation that competence and interest in particular issues are matters of degree, this proposal seems plausible. On the other hand, Herbert Blumer remarks that public opinion on an issue may well differ from the opinions held by groups in the public--even competent and interested groups. Robert Weissberg offers three additional criticisms. First, since an individual's competence and interest in an issue are a matter of degree, it will be difficult to demarcate the boundary of a particular public, for the amount of knowledge or interest necessary for membership in the public is unclear. Second, since the boundary of a particular public is determined by the competence and interest of its members, the boundary can be unscrupulously manipulated by those who control information. Finally, if publics are indexed to issues, then the boundaries of some publics extend beyond the limits of the political community. While interested Japanese investors might be competent to comment intelligently on the United States' budget deficit, they are not generally considered to be a part of the public concerned about the deficit.
There is another reason for not indexing publics to particular issues. If one narrowly defines a public as a collection of individuals who are competent to render a meaningful opinion on a given issue, then by definition their opinions ought to be relevant. On the other hand, the normative questions about the use of public opinion polling in policy formation become crucial on a broad definition of the public. And, it is important to observe that in measuring public opinion pollsters often do not distinguish between interested, competent individuals and those who are ignorant or indifferent. Nor do elections differentiate between competent and incompetent members of the electorate. Paraphrasing Alan Monroe, it is only by starting with a definition of the public as including everyone's opinion--not just the opinions of an elite few--that we can evaluate the normative relationship between public opinion polling and public policy.
Many discussions of polling recount the classic example of bias in the selection of a sample. Using names provided from telephone directories and auto registration lists, the Literary Digest distributed a huge number of presidential preference questionnaires during the 1936 presidential election. On the basis of about two million responses the Digest incorrectly predicted a Landon victory over Roosevelt. The Digest's prediction was wrong because its method of sampling was unintentionally yet grossly biased. Many Roosevelt supporters couldn't afford telephones or automobiles during the Depression, so these individuals had no chance of being included in the Digest's sample.
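The Digest episode illustrates a general point: a larger sample cannot repair a biased sampling frame. A toy simulation makes this concrete; the vote shares and ownership rates below are invented for illustration, not historical figures.

```python
import random

random.seed(42)

# Invented illustrative electorate: 60% Roosevelt, 40% Landon.
electorate = ["Roosevelt"] * 60_000 + ["Landon"] * 40_000

# Assumption for the sketch: wealthier Landon voters are far more likely
# to appear on telephone directories and auto-registration lists.
OWNERSHIP_RATE = {"Landon": 0.6, "Roosevelt": 0.2}

# The sampling frame contains only those who appear on such lists.
frame = [v for v in electorate if random.random() < OWNERSHIP_RATE[v]]

# Even an enormous sample drawn from the biased frame stays biased.
sample = random.sample(frame, 10_000)
landon_share = sample.count("Landon") / len(sample)
print(f"Landon's share in the sample: {landon_share:.0%}")  # well above his true 40%
```

Enlarging the sample reduces only sampling error; it cannot correct a frame that systematically excludes part of the population.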
This kind of experience has taught reputable pollsters the importance of rooting out sources of possible bias in sampling, so far as resources permit. However, according to Burns Roper, sampling error--the only error that can be measured mathematically--is usually the smallest source of error in a survey. More significant sources of error have been discovered. For example, when interviewers personally contact respondents, the quality of an interview can be adversely affected if the interviewer and respondent are much different in race or social class. In the case of mailed surveys, shorter mailed questionnaires draw somewhat more responses than longer questionnaires, introducing the problem of non-response bias. And as recently as 1984 it was estimated that the results of telephone surveys are 5 to 6 points more favorable to Republican than Democratic candidates because of the distribution of telephones.
The wording of survey questions can be a significant source of error. As evidence, Roper invites us to compare "Is it all right to smoke while you're praying?" and "Is it all right to pray while you're smoking?" A related source is the use of terminology which is not widely understood by the public, as in the use of "impeachment" in early surveys measuring opinions about Nixon's connection to Watergate. Pollsters have also discovered that surveys dealing with matters of fact tend to be more accurate than surveys which measure opinions--our present concern (Roper, 28-31).
No opinion survey is completely free of error, in part because the cost of formulating such a survey would be prohibitive. Nonetheless, reputable pollsters who pay careful attention to sample selection, question wording and order, interviewing techniques, and analysis of survey results attempt to avoid the more obvious sources of inaccuracy. Surveys crafted in accordance with established polling practices are called scientific opinion polls.
An opinion poll is called an unscientific opinion poll just in case the pollster has not attempted to avoid the more obvious sources of inaccuracy by ignoring established polling practices. Examples of unscientific opinion polls abound. After a 1980 Carter/Reagan presidential debate ABC News encouraged viewers to judge the victor by making a fifty cent phone call. A poll conducted by the National Right to Work Committee asked respondents:
Are you in favor of allowing construction union czars the power
to shut down an entire construction site because of a dispute with
a single contractor, thus forcing even more workers to knuckle under
to union agents? (Asher, 8)
Although political scientist Barry Orton chooses to call unscientific opinion surveys pseudo polls--denying that they are polls at all--I prefer to distinguish between scientific and unscientific opinion polls rather than distinguishing between opinion surveys which are polls and opinion surveys which are not.
Since the advertised function of public opinion polling is the measurement of the distribution of expressed beliefs or attitudes, an accurate opinion poll is one which precisely measures the actual distribution of opinions in the population. This sense of accuracy is an unattainable standard to which opinion polls aspire, so pointing out that any given poll is inaccurate is not a legitimate criticism. We are concerned with the degree to which a given opinion poll falls away from the standard. Thus, an opinion poll is inaccurate to a significant degree if the results of the poll give an overly obscured, distorted, simplified, or otherwise misleading account of the actual distribution of public opinion. I will argue that the reported results of a scientific opinion poll are at best somewhat inaccurate, and probably inaccurate to a significant degree even though the obvious sources of methodological inaccuracy are avoided.
Respondents in surveys conducted in 1978 and 1979 were asked whether or not they favored the passage of obscure pieces of legislation about which they could have known little, if anything. Despite their lack of even rudimentary knowledge about the Agricultural Trade Act (1978) and the Money Control Bill (1979), 31% of respondents ventured an opinion on the former, and 26% on the latter. On another survey respondents in the greater Cincinnati area were asked, "Some people say that the 1975 Public Affairs Act should be repealed. Do you agree or disagree with this idea?" A third of the respondents ventured an opinion on the Public Affairs Act even though it did not exist (Bishop, et al., 198-209). These two studies point up what is known as the problem of non-attitudes. A respondent to an opinion poll expresses a non-attitude about the issue treated if the respondent does not have any genuine beliefs or attitudes about the issue. Asher plausibly suggests that people often express non-attitudes during polling interviews in order to appear informed, so they express a view in response to the polling situation itself, and not because they have any well-formed opinions. The problem comes in differentiating informed responses from the uninformed, for if the pollster takes all responses at face value, then the results of the poll give a misleading account of the actual distribution of public opinion.
The use of screening or filter questions is one established way of reducing the number of uninformed responses offered as a result of the interviewing situation. However, the use of screening questions does not eliminate the problem of non-attitudes. In one version of the survey on the Public Affairs Act where a screening question was used, 10% of all respondents--a significant percentage--offered an opinion on the repeal of the non-existent act.
But the desire to appear informed is not the only factor which can skew the results of a scientific poll. Inaccuracies also arise because it is no longer socially acceptable or fashionable to express certain views. Even though informed, the holders of controversial opinions on, say, racial integration, care for the impoverished and homeless, handgun control, and women's rights sometimes respond in a socially expected manner instead of expressing their true opinions. Michael Margolis finds evidence for this phenomenon in the discrepancy between poll results and public political behavior. "In such circumstances," Margolis asserts, "actions--white flight and tolerance of cuts in food stamps, subsidized school lunches, nutrition for pregnant women at risk, rent subsidies, public housing ... and the like--truly speak louder, and with greater validity, than do words" (70). That is, public political behaviors are often a more accurate gauge of public opinion than the results of polls.
Gladys Lang and Kurt Lang consider another source of polling inaccuracy first suggested by Walter Lippmann. Regardless of the care with which surveys are crafted, they rarely capture the diversity of thought, "but catch only fleeting reactions" (131). The following illustrates the point. In the third week of the war in the Persian Gulf thirty-three young adults were asked whether or not--and why they did or did not--support the allied war effort in the Gulf. Twenty-four of the thirty-three expressed support, three expressed opposition, and six responded that they did not know. However, there were thirteen distinct reasons for supporting the war expressed by various members of the population, three distinct reasons for opposition, and six distinct reasons given for responding "Don't know." In this case, a claim that 73% of the population of thirty-three supports the war grossly oversimplifies the diversity of thought. Importantly, each of the reasons given for support or opposition would have suggested a distinct public policy option.
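The arithmetic of the Gulf-war example can be made concrete. The sketch below preserves the counts reported above (24 supporters with 13 distinct reasons, 3 opponents, 6 "don't know" responses), though the reason labels themselves are invented placeholders.

```python
from collections import Counter

# Hypothetical per-respondent data modeled on the survey described above.
# Reason labels are placeholders; only the counts match the example.
responses = (
    [("support", f"reason_{i % 13}") for i in range(24)]      # 13 distinct reasons
    + [("oppose", f"reason_{i}") for i in range(3)]           # 3 distinct reasons
    + [("don't know", f"reason_{i}") for i in range(6)]       # 6 distinct reasons
)

# The reported summary percentage: 24 of 33 supporters.
positions = Counter(position for position, _ in responses)
print(f"Support: {positions['support'] / len(responses):.0%}")

# ...but that single figure collapses this many distinct position/reason pairs:
print("Distinct views:", len(set(responses)))
```

The single "73% support" figure stands in for twenty-two distinct views, each of which, as the article notes, would suggest a different policy option.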
The intensity with which opinions are held--another element of diversity--is masked when polling summary percentages are all that are reported. On Cantril's analysis, President Bush was politically embarrassed by his 1990 push for the passage of a constitutional amendment to ban flag burning because the polls indicated wide support for such a measure, but failed to indicate that the support was not deeply held.
Finally, a difficulty which is now being widely discussed in the literature on polling is the problem of interpreting "no opinion" or "don't know" responses. For uninformed respondents a "don't know" response reflects an absence of opinion, while for informed respondents it could well express ambivalence, an inability to choose among contending positions. Some respondents are believed to protect their privacy with the "don't know" reply, while others use the response as a strategy to end the interview session quickly. The concern is that some genuine attitudes may be masked by "don't know" responses.
Probably no one of the foregoing sources of inaccuracy is alone sufficient to render the results of a scientific poll inaccurate to a significant degree. Nor will the sample error of a scientific poll, which generally ranges from 3% to 5%, alone result in a misleading account of the actual distribution of public opinion. Collectively, however, sample error, the problem of non-attitudes and socially expected responses, the oversimplification of diverse opinions and the masking of genuine attitudes with "don't know" responses give good reason to believe that scientific opinion polls give at least an oversimplified and somewhat misleading account of the actual distribution of public opinion.
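For reference, the 3% to 5% range quoted above corresponds to the standard 95%-confidence margin of error for a simple random sample. The formula below is textbook survey statistics, not something drawn from the article itself.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    p = 0.5 is the worst case: it maximizes p*(1-p) and hence the margin.
    z = 1.96 is the normal critical value for 95% confidence.
    """
    return z * sqrt(p * (1 - p) / n)

# Sample sizes roughly corresponding to the 3%-5% range cited above:
for n in (385, 600, 1067):
    print(f"n = {n:5d}  margin of error = {margin_of_error(n):.1%}")
```

A national sample of about 1,000 respondents thus yields roughly the 3% figure, and one of about 400 the 5% figure, which is why reputable polls typically fall in that size range.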
Reputable pollsters will claim that the results of scientific polls are not inaccurate to a significant degree. Moreover, they have good reason to maintain high standards in survey construction, administration, and analysis, for consistently inaccurate polling results would erode public confidence. But it is not enough to maintain high methodological standards in the construction and administration of opinion polls. It is also necessary to maintain high standards in reporting polling results. Although the media are becoming more sensitive to such a need, Cantril observes that "most reporters and editors eschew criteria of performance defined outside the terms of reference of journalism" (164). The media invoke the authority and standards of the professional pollster, but the journalist's criteria of news-worthiness are often inconsistent with the pollster's methodological standards. "The deadline-driven environment of a news organization is not always congenial to nuanced discussions about the extent to which data from a poll can be generalized" (76).
The National Council on Public Polls and the American Association for Public Opinion Research have formulated "principles of disclosure," elements of polling methodology which ought to be made public with the results of an opinion survey (Cantril, 164-70). But because of First Amendment considerations, Cantril admits that pollsters have little leverage regarding the reporting of survey results by the media except for criticism and education by professional polling organizations.
I have argued that there is reason to believe that scientific opinion polls give at least an oversimplified and somewhat misleading account of the actual distribution of public opinion. Cantril's remarks suggest that such inaccuracies are often compounded in media disclosure. That is, the reported results of any given opinion poll at best convey and at worst amplify the inherent inaccuracy of a scientific opinion poll.
The intended function of opinion polling is the measurement of public opinion. Instead of merely measuring public opinion, however, polls directly and indirectly influence public opinion. That is, polls play a role in the formation of public opinion. This ought not be surprising, since modern opinion polling is a highly successful marketing technique that has been adapted to politics, and the measure of success of a marketing technique is its ability to influence the public. According to Yeric and Todd, the use of polling "helps to select the issues or candidates, or both, for elections" by manipulating the consumers of polls--the voters--into "buying products they would not normally purchase" (240).
The results of a 1985 Roper Organization poll suggest that only 6% of the public regard polling results as seldom accurate, 76% consider pollsters to be largely honest about their polls, and 75% think that most public opinion polls work for the best interest of the general public (Cantril, 258-59). That is, the public views pollsters not only as largely honest and successful in measuring public opinion, but also as playing a beneficial role in the political process. This suggests that pollsters have attained the status of political authorities. Because of this newly endowed status, polling results "exude an authority that is difficult to ignore" (Lang and Lang, 133). Indeed, Lang and Lang maintain that the publication and broadcast of opinion poll results directly affect the formation of the opinions of citizens and subjects in the political community.
Lang and Lang also argue that opinion polls have an indirect influence upon public opinion. Citing Noelle-Neumann's spiral of silence theory, they argue that polls contribute to the creation and silencing of minority viewpoints. As a controversial viewpoint gains in popularity, the holders of the view become more outspoken. In the face of an onslaught of opinion survey results, the supporters of other viewpoints are apt to lapse into silence, rather than argue. Finally, as the opposition is gradually silenced, "the new opinion becomes the dominant view, even if not everyone is convinced" (139).
Although Cantril rejects Noelle-Neumann's theory, he discusses other ways in which polls exercise an indirect effect on public opinion through their influence on elections. First of all, the campaign contributions which candidates for House and Senate seats receive are often contingent upon their showing in the polls. Since candidates require funding to influence public opinion, anything which increases or reduces funding augments or diminishes that influence, so polls indirectly influence public opinion. Moreover, "the polls drive the press" in an important respect. The media have limited resources to cover all political candidates, especially early in a political campaign. Often the media decide to allocate personnel and resources on the basis of poll results. For example, when Richard Gephardt slipped in the polls in September 1987, one of the television networks withdrew the crew assigned to cover his campaign. Again, reduced media coverage diminishes a candidate's chances to influence public opinion, so polls have an indirect influence on public opinion. It seems reasonable to conclude, then, that public opinion is both directly and indirectly influenced by reported opinion poll results.
The results of unscientific opinion polls are inaccurate to a significant degree because the obvious sources of methodological inaccuracy are not avoided. We have seen that there is reason to believe that the results of scientific opinion polls give at least an oversimplified and somewhat misleading account of the actual distribution of public opinion even though the obvious sources of methodological inaccuracy are avoided. Importantly, the inherent inaccuracy of poll results is at best conveyed, and probably amplified, by media reporting. Because public opinion is directly and indirectly influenced by opinion polls, and the accuracy of reported poll results is suspect, public opinion is influenced by reported results of opinion polls which are at best somewhat inaccurate and at worst inaccurate to a significant degree.
"What the mass of the people thinks," asserts Gallup, "puts governments in and out of office, starts and stops wars, sets the tone of morality, makes and breaks heroes" (6). While perhaps a bit overstated, Gallup makes the point that our political system is one in which the government is supposed to be responsive to public opinion, whether in the form of polling results, direct communications, elections, or special interest group activities. That is, public policy is influenced by public opinion. But if public policy is influenced by public opinion, and public opinion is influenced by the reported results of opinion polls which are at best somewhat inaccurate and at worst inaccurate to a significant degree, then opinion polls of questionable accuracy have an influence on public policy. It seems reasonable to believe that the increasing use and reporting of opinion polls augment the influence which polls have on public policy.
Now, if public policy is influenced by opinion polls of questionable accuracy, and the increasing use of opinion polling augments the influence which polls have on public policy, then it is unclear that good public policy, or even widely acceptable public policy results. Granted that we desire the formulation of good public policy, and the increasing use of polls may not be conducive to good public policy, it is crucial for proponents of polling to provide a justification for their increasing use. However, I will argue that three of the most common justifications for the increasing use of polling are insufficient in the face of the influence that polls have on public policy.
First of all, one might appeal to John Stuart Mill in order to develop a liberal justification for opinion polling, as Gallup and Cantril seem to do. Since a true opinion is more likely to emerge in an atmosphere of free debate, the freedom of expression of an opinion is essential, especially in a democratic society. Now, a proponent of opinion polling might argue that polling contributes to this atmosphere of free debate essential to a democratic society, so the increasing use of opinion polling is thereby justified.
Yet, it is not clear that the increasing use of opinion polling is as conducive to free debate as proponents might believe. A necessary condition for free debate is that all opinions are voiced, for even false or unpopular opinions play an important role in the "clash and conflict of argument and debate." So, any activity which is detrimental to the expression of false or unpopular beliefs is detrimental to an atmosphere of free debate. If Noelle-Neumann is correct, then the increasing use of opinion polling is detrimental to the expression of false or unpopular beliefs because it bludgeons the supporters of opposing views into silence. Even if Noelle-Neumann is incorrect, Cantril's observations suggest that polling has other indirect detrimental effects upon free debate by silencing some political candidates. Moreover, since survey results give an oversimplified view of the diversity of opinion, subtle but important differences in position are not brought to light. I suggest, then, that the increasing use of opinion polling is difficult to justify on these grounds because it is unclear that it is conducive--rather than detrimental--to the atmosphere of free debate essential to a democratic society.
Even if public policy is influenced by opinion polls, and even if the increasing use of opinion polling augments that influence, it might be argued that these facts are irrelevant, since the individuals who commission polls have a right to use the material as they see fit because the survey results are their property. That is, the commissioner of a poll has an exclusive right to ownership and discretionary use of the results regardless of how that use might affect public policy. But this is certainly hasty. Individuals may have an exclusive right to own firearms, but it does not follow that they have a right to use firearms as they see fit. Likewise, even if the commissioner of a poll has an exclusive right to ownership of the results of the poll, some uses of survey results ought to be regulated if the public would be harmed by those uses. For example, some citizens are functionally disenfranchised by election exit polls, so one can make a plausible case that such a use of survey results should be regulated. So, it is not the case that the commissioner of a poll has a right to discretionary use of the results regardless of the effect that use might have on public policy.
Moreover, it is even unclear that commissioners have exclusive rights to ownership of public opinion polling results. Because public opinion is public, survey results are arguably public property, not private property. Individuals do not have exclusive ownership of public lands, so they ought not have an exclusive claim to ownership of survey results. An obvious rebuttal is that commissioners of polls do not own public opinion; rather, they own the results of a survey of public opinion, rather like owning a snapshot of public lands. However, there are times when photographers do not have an exclusive right to ownership of photographs of public property, as when the photographs are of Defense Department installations. Admittedly, the analogy is imperfect: respondents give polling organizations permission to use their answers to survey questions, while the Defense Department withholds its consent to have installations photographed because of the possible harm that could result. Still, I wonder if citizen respondents to public opinion polls would be so forthcoming if they were cognizant of the harm which could result from the increasing use of polls.
These considerations about the ownership of survey results are inconclusive at best. Even so, commissioners do not have discretionary use of survey results, so the increasing use of opinion polling cannot be entirely justified by an appeal to property.
Finally, one might argue that the expanded use of polling is justified on utilitarian grounds, that is, on balance more people benefit from the use of polling. Yet, if public policy is influenced by opinion polls which are at best somewhat inaccurate and at worst inaccurate to a significant degree, and the increasing use of opinion polling augments the influence which polls have on public policy, then it is unclear that good public policy, or even widely acceptable public policy results. If dubious public policies are sometimes a direct or indirect result of polling, and dubious policies harm the public, then it is not always the case that people benefit from the use of polls. Supporters of the utilitarian justification are charged with the difficult task of arguing that polls influence good or widely endorsed public policy more times than not. Thus, whether the increasing use of opinion polling can be justified on utilitarian grounds is an open question.
It is important to note in closing that this argument does not establish that the increasing use and reporting of public opinion poll results cannot be justified, although I believe that it makes it clear that proponents of polling owe such a justification. Nor does the argument imply that no survey research could ever be justified or beneficial. It argues for the more modest claim that the increasing use and reporting of polling is difficult to justify. Finally, this argument is not predicated on the mistaken view that there is something dubious about statistical analysis. In fact, the argument credits reputable pollsters for becoming more sophisticated in the mechanics of statistical analysis. That said, the increasing use and reporting of public opinion poll results will not be easy to justify.
Asch, Solomon. Social Psychology. Englewood Cliffs, New Jersey: Prentice-Hall, 1952.
Asher, Herbert. Polling and the Public. Washington, D.C.: Congressional Quarterly Press, 1988.
Bishop, George, R.W. Oldendick, A.J. Tuchfarber, and S.F. Bennett. "Pseudo-Opinions on Public Affairs." Public Opinion Quarterly, 44 (1980), 198-209.
Blumer, Herbert. "The Mass, the Public, and Public Opinion." Reader in Public Opinion and Communication. Ed. Bernard Berelson and Morris Janowitz. Glencoe: Free Press, 1953.
Cantril, Albert H. The Opinion Connection: Polling, Politics and the Press. Washington, D.C.: Congressional Quarterly, 1991.
Gallup, George, and Saul Forbes Rae. The Pulse of Democracy. New York: Greenwood Press, 1968.
Ismach, Arnold. "Polling as a News-Gathering Tool." Polling and the Democratic Consensus. Ed. L. John Martin. Annals of the American Academy of Political and Social Science, 472 (1984), 106-18.
Key, V. O. Public Opinion and American Democracy. New York: Alfred A. Knopf, 1964.
Lang, Kurt, and Gladys Engel Lang. "The Impact of Polling on the Mass Media." Annals of the American Academy of Political and Social Science, 472 (1984), 129-42.
Lippmann, Walter. Public Opinion. New York: Macmillan, 1949.
Margolis, Michael. "Public Opinion, Polling, and Political Behavior." Annals of the American Academy of Political and Social Science, 472 (1984), 61-71.
Martin, L. John. "The Genealogy of Public Opinion Polling." Annals of the American Academy of Political and Social Science, 472 (1984), 12-23.
Mill, John Stuart. "On Liberty." Three Essays. Oxford: Oxford University Press, 1975, 5-141.
Monroe, Alan D. Public Opinion in America. New York: Harper & Row, 1975.
Noelle-Neumann, Elisabeth. Die Schweigespirale. Munich: R. Piper and Company, 1980.
Roper, Burns W. "Are Polls Accurate?" Annals of the American Academy of Political and Social Science, 472 (1984), 24-34.
Schuman, Howard, and Stanley Presser. "Public Opinion and Public Ignorance." American Journal of Sociology, 85 (1980), 1214-25.
Weissberg, Robert. Public Opinion and Popular Government. Englewood Cliffs, New Jersey: Prentice-Hall, 1976.
Yeric, Jerry L., and John R. Todd. Public Opinion: The Visible Politics. Itasca, Illinois: F. E. Peacock Publishers, 1989.