What Do Reading Comprehension Tests Mainly Measure? Knowledge
By E. D. Hirsch, Jr.
I want to outline some facts about reading comprehension tests that are not widely known, yet need to be familiar to any parent, teacher, or citizen who is interested in educational improvement. Let's begin by considering the fourth-grade guidelines for teaching and testing reading comprehension, as published by two representative states (all states issue these kinds of guidelines).
Students will listen, speak, read, and write for information and understanding. As listeners and readers, students will collect data, facts, and ideas; discover relationships, concepts, and generalizations; and use knowledge generated from oral, written, and electronically produced texts.
• interpret and analyze information from textbooks and nonfiction books for young adults, as well as reference materials, audio and media presentations, oral interviews, graphs, charts, diagrams, and electronic databases intended for a general audience;
• compare and synthesize information from different sources;
• use a wide variety of strategies for selecting, organizing, and categorizing information;
• distinguish between relevant and irrelevant information and between fact and opinion;
• relate new information to prior knowledge and experience; and
• understand and use the text features that make information accessible and usable, such as format, sequence, level of diction, and relevance of details.
The student constructs meaning from a wide range of texts.
• reads text and determines the main idea or essential message, identifies relevant supporting details and facts, and arranges events in chronological order;
• identifies the author's purpose in a simple text;
• recognizes when a text is primarily intended to persuade;
• identifies specific personal preferences relative to fiction and nonfiction reading;
• reads and organizes information for a variety of purposes, including making a report, conducting interviews, taking a test, and performing an authentic task; and
• recognizes the difference between fact and opinion presented in a text.
Given such vague guidelines, consider the predicament of schools and students under the current accountability arrangements. What are educators to do? It becomes logical to think like this: The tests are coming. We don't know what topics the children will be asked to read about (because they are not identified in states' reading guidelines—or, for the most part, in states' content standards). The tests will probe reading comprehension skills, so we must teach those skills.
How does one prepare students to take this kind of test? Logic has led schools, districts, states, and companies that provide test-prep materials to believe that they must train students in the kinds of procedures elicited by the test: Clarify what the passage means, question the author, find the main idea, make inferences about the passage, study the meanings of words, consider which event in the narrative comes first, and which next.
But in fact, this preparation is not mainly what students need. Let's look at a characteristic bit of prose and a typical question from one of these reading tests.
There is a path that starts in Maine and ends in Georgia, 2,167 miles later. This path is called the Appalachian Trail. If you want, you can walk the whole way, although only some people who try to do this actually make it because it is so far, and they get tired. The idea for the trail came from a man named Benton MacKaye. In 1921, he wrote an article about how people needed a nearby place where they could enjoy nature and take a break from work. He thought the Appalachian Mountains would be perfect for this.
This article is mostly about:
• how the Appalachian Trail came to exist
• when people can visit the Appalachian Trail
• who hikes the most on the Appalachian Trail
• why people work together on the Appalachian Trail
A student's actual ability to find the main idea of a passage is not a formal ability to follow procedures that will elicit the main idea, but rather the ability to understand what the text says. No repetitions of classroom exercises will help the test-taker who does not know what hiking is, or what low, tree-covered mountains are like (they are not like the snow-covered Himalayan mountains most often pictured in books), or where Maine and Georgia are. Classroom practice in strategies cannot make up for the student's lack of the background knowledge needed to understand this passage, and no instruction in strategies is required in order to answer the questions quickly and accurately if the student knows about hiking in the Appalachians, Maine, and Georgia.1 The inferences that we make when we hear or read speech are derived from our relevant knowledge about the domain of the passage. The comprehension skills that students are supposed to learn by practicing "comprehension skills" cannot lead to high test performance because they do not lead to actual comprehension.
Conscious strategizing is also slow and cumbersome. Reading is slower and scores are lower on unfamiliar topics than on familiar ones. This is true for all readers.2 Tests are time-sensitive, as reading comprehension itself is, because slowness implies mental overload and mental overload impairs understanding. The mental speed that is bestowed by topic familiarity is important not just for completing the test on time, but also for getting the answers right. In sum, a child who already knows about the Appalachian Trail—who has heard or read about it, or seen or walked it, or read about similar trails—will process the passage much faster and more accurately than a child to whom such things are unfamiliar, even though the two children have identical decoding and strategizing skills. They have learned equally well the lessons that the classroom has taught. Yet these two students make vastly different scores on the reading test because one student possesses more general knowledge than the other.
Every highly valid and reliable reading test contains several different passages sampling several knowledge areas and kinds of writing. That fact in itself gives away the knowledge-based character of reading, since if reading comprehension were a set of all-purpose formal strategies, a single passage would test reading skill perfectly well. But because general reading skill requires broad general knowledge, a valid test must sample several genres and areas of knowledge. Take, for example, the Iowa Test of Basic Skills (ITBS). It contains nine short passages of different genres: fiction about a bird, a biography, some lyric poetry, fiction about sports, exposition about another country, fiction about a TV program, exposition about the habits of an animal, exposition about the lives of Native Americans, and exposition about a religious sect. The prose passages are short—150 to 290 words—and each is followed by roughly four multiple choice questions.
The multiple domains on any valid reading test are chosen not because they directly reflect what is taught in school, but because they reflect an ability to read passages from an unpredictable diversity of domains. In order to read a wide array of passages in different domains, a person must have a wide array of knowledge.3
Reading Tests Are Useful ... but Not for Measuring Yearly Progress in Comprehension
Like all tests, a reading comprehension test is a sampling device. It doesn't test the whole range of possible knowledge domains or kinds of text. That would make it far too long. It offers a few typical samples from a few typical domains, and students' performance on these samples is taken to estimate their reading comprehension over the whole universe of reading tasks that confront the general reader. (The best of the tests do a very good job of making that estimation. For example, scores in early grades predict scores in later years, school grades, and even job performance and income.4)
But these tests have severe shortcomings when used to measure yearly student progress in the early grades. Although imparting the background knowledge needed for general reading ability is a multiyear project (covering at least the first six years of schooling and beyond), real progress in building the background knowledge and vocabulary that underlie reading comprehension can occur in the early grades without that progress being registered on a reading comprehension test. Especially in the early grades, when children are making irregular, desultory progress in knowledge and vocabulary that cannot be sensitively measured by such tests, general reading tests can be quite inadequate gauges.
For example, if a student has just learned about the Civil War, he may not make a noticeably better grade on a short reading test that samples domains far removed from that subject. But in reality, his ability to read passages about Grant and Lee and Lincoln with comprehension has grown, even if the test does not measure that progress. He will also be able to read about events related to war and history with greater comprehension. He will know what a regiment is and what the word bloodshed means, though these are not on the test. He may have learned more about some of the words on the test and still not be able to answer correctly, because some of his gradual gains in word understanding, a slow, subliminal process requiring many exposures to a word, do not reach the measurement threshold of the test.
If schools wish to meet "adequate yearly progress" as required by No Child Left Behind (NCLB), they should systematically teach and then test for the knowledge that leads to proficient reading comprehension. This means that schools must have a specific, grade-by-grade curriculum designed to systematically build the knowledge that an educated reader needs—and a test that has been carefully aligned with that curriculum. The curriculum must be clearly laid out in literature, science, history, and the arts, for these are the large domains that constitute the background knowledge required for reading comprehension. The monitors of NCLB compliance should recognize that adequate yearly progress in early reading is in fact occurring if students show that they are not only decoding well, but also gaining knowledge, as demonstrated on these curriculum-based tests.
What Kinds of Test Preparation Will Enhance Education?
What can calm the frantic and ineffectual test preparation that has overtaken many schools as they labor to meet NCLB's adequate yearly progress requirement? Students and teachers cannot directly prepare for a reading test. (A one-time gain can typically be achieved by devoting a small amount of time to ensuring that children are familiar with the testing format and test-taking strategies.) No one should be able to predict the subject matter of the passages on such a test and specifically learn about it. That would be cheating. It would defeat the test's purpose, which is to discover how well the test-taker can be expected to read an unpredictable array of texts in and out of school. The essence of such a test is its unpredictability. But if you cannot predict the subject matter on a valid reading test, how can you prepare students for it? You can't, and, therefore, you shouldn't try. The only useful way to prepare for a reading test is indirectly—by becoming a good reader of a broad range of texts, an ability that requires broad general knowledge.
E. D. Hirsch, Jr. is the author of many books and articles, including the bestselling Cultural Literacy and The Schools We Need. He is a fellow of the Academy of Arts and Sciences and founder of the Core Knowledge Foundation.
1. Recht, D. R., and Leslie, L. (1988). "Effect of Prior Knowledge on Good and Poor Readers' Memory of Text," Journal of Educational Psychology 80(1), 16–20.
2. Hirsch, E. D., Jr. (1981). "Measuring the Communicative Effectiveness of Prose," in J. Dominic, C. Fredricksen, and M. Whiteman (eds.), Writing, Hillsdale, N.J.: Erlbaum, pp. 189–207. See also Recht and Leslie, "Effect of Prior Knowledge."
3. Carroll, J. B. (1979). "Psychometric Approaches to the Study of Language Abilities," in C. J. Fillmore, D. Kempler, and S.-Y. Wang (eds.), Individual Differences in Language Abilities and Language Behavior, New York: Academic Press.
4. Johnson, W. R., and Neal, D. (1998). "Basic Skills and the Black-White Earnings Gap," in Christopher Jencks and Meredith Phillips (eds.), The Black-White Test Score Gap, Washington, D.C.: Brookings Institution Press, pp. 480–497.