Abstract

Surveys that require users to evaluate or make judgments about information systems and their effect on specific work activities can produce misleading results if respondents do not interpret or answer questions in the ways intended by the researcher. This paper provides a framework for understanding both the cognitive activities and the errors and biases in judgment that can result when users are asked to categorize a system, explain its effects, or predict their own future actions and preferences with respect to use of a system. Specific suggestions are offered for wording survey questions and response categories so as to elicit more precise and reliable responses. In addition, possible sources of systematic bias are discussed, using examples drawn from published IS research. Recommendations are made for further research aimed at better understanding how and to what extent judgment biases could affect the results of IS surveys.

Keywords

Categorization, Computer science, Affect (linguistics), Data science, Confirmation bias, Cognitive bias, Cognition, Response bias, Psychology, Cognitive psychology, Social psychology, Artificial intelligence

Related Publications

A law of comparative judgment.

This chapter describes a new psychophysical law, which may be called the law of comparative judgment, and shows some of its special applications in the measurement of psychological values.

1927, Psychological Review (5,142 citations)

Publication Info

Year: 1994
Type: Article
Volume: 5
Issue: 1
Pages: 48–73
Citations: 229 (OpenAlex)
Access: Closed

Cite This

Ellen M. Hufnagel, Christopher Conca (1994). User Response Data: The Potential for Errors and Biases. Information Systems Research, 5(1), 48–73. https://doi.org/10.1287/isre.5.1.48

Identifiers

DOI
10.1287/isre.5.1.48