Abstract

This paper describes the process for creating and validating an assessment test that measures the effectiveness of instruction by probing how well that instruction causes students in a class to think like experts about specific areas of science. The design principles and process are laid out, and it is shown how these align with the professional standards that have been established for educational and psychological testing and with the elements of assessment called for in a recent National Research Council study on assessment. The importance of student interviews for creating and validating the test is emphasized, and the appropriate interview procedures are presented. The relevance and use of standard psychometric statistical tests are discussed. Additionally, techniques for effective test administration are presented.
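As an illustrative aside on the "standard psychometric statistical tests" the abstract mentions, the minimal Python sketch below shows how a few item statistics commonly reported when validating such instruments, item difficulty, point-biserial discrimination, and Cronbach's alpha (equivalent to KR-20 for right/wrong items), can be computed from a students-by-items matrix of dichotomous scores. The function names and sample data are assumptions for illustration, not material taken from the paper.

import numpy as np

def item_difficulty(scores):
    # Proportion of students answering each item correctly (students x items, 0/1).
    return scores.mean(axis=0)

def point_biserial(scores):
    # Correlation of each item with the total score on the remaining items.
    n_items = scores.shape[1]
    r = np.empty(n_items)
    for j in range(n_items):
        rest = np.delete(scores, j, axis=1).sum(axis=1)
        r[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return r

def cronbach_alpha(scores):
    # Internal-consistency reliability; reduces to KR-20 for dichotomous items.
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical data: 5 students x 4 items of right/wrong responses.
scores = np.array([[1, 0, 1, 1],
                   [1, 1, 1, 0],
                   [0, 0, 1, 0],
                   [1, 1, 1, 1],
                   [0, 0, 0, 0]])
print("difficulty:", item_difficulty(scores))
print("point-biserial:", point_biserial(scores))
print("alpha:", cronbach_alpha(scores))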

Keywords

Relevance (law), Test (biology), Process (computing), Psychology, Mathematics education, Class (philosophy), Measure (data warehouse), Test validity, Educational assessment, Standards for Educational and Psychological Testing, Psychometrics, Computer science, Higher education, Artificial intelligence, Education theory, Data mining

Publication Info

Year: 2010
Type: Article
Volume: 33
Issue: 9
Pages: 1289-1312
Citations: 430
Access: Closed

Citation Metrics

430 citations (OpenAlex)

Cite This

Wendy K. Adams, Carl Wieman (2010). Development and Validation of Instruments to Measure Learning of Expert‐Like Thinking. International Journal of Science Education, 33(9), 1289-1312. https://doi.org/10.1080/09500693.2010.512369

Identifiers

DOI: 10.1080/09500693.2010.512369