Ask the Cognitive Scientist

Allocating Student Study Time: "Massed" versus "Distributed" Practice

How does the mind work, and especially, how does it learn? Teachers make assumptions all day long about how students best comprehend, remember, and create. These assumptions, and the teaching decisions that result, are based on a mix of theories learned in teacher education, trial and error, craft knowledge, and gut instinct. Such gut knowledge often serves us well. But is there anything sturdier to rely on?

Cognitive science is an interdisciplinary field in which researchers from psychology, neuroscience, linguistics, philosophy, computer science, and anthropology seek to understand the mind. In this new American Educator column, we will consider findings from this field that are strong and clear enough to merit classroom application. This issue's question: How should students' practice time be allocated as they learn new material?

How much practice do students need to learn a given body of knowledge or group of facts? What strategies for learning different kinds of material work best? What's the most efficient way to allocate practice time? Cognitive science offers insights that can help answer these questions and thus help teachers shape their instruction in especially effective ways. In this article, we will consider one aspect of this broad topic for which the findings are especially consistent: how the "massing" or "distributing" of students' practice time influences students' long-term retention of factual knowledge. This is an important issue for an obvious reason: Knowing important factual information should be a residual effect of good schooling. In addition, in many cases, students' more advanced learning depends on their retention of previously learned material.

Let's begin. Suppose a student is going to spend one hour learning a group of multiplication facts. How should that hour be allocated? Should the teacher schedule a single, one-hour session? Ten minutes each day for six days? Ten minutes each week for six weeks? The straightforward answer that we can draw from research evidence is that distributing study time over several sessions generally leads to better memory of the information than conducting a single study session. This phenomenon is called the spacing effect.

[[{"type":"media","view_mode":"wysiwyg","fid":"970","link_text":null,"attributes":{"alt":"Keppel","height":"214","width":"200","style":"float: left; height: 214px; width: 200px;","class":"media-image media-element file-wysiwyg"}}]]The spacing effect was noted by Hermann Ebbinghaus, the psychologist usually credited with the first scientific study of memory in 1885. In a super-human feat of patience and endurance, Ebbinghaus tested his ability to learn hundreds of lists of meaningless syllables (e.g. "lum") under different conditions. Ebbinghaus noted that if he studied a 12-syllable list 68 times, he could remember the list perfectly the next day if he allowed himself a "refresher" of seven repetitions before the test. However, if he distributed his study over three days, (and again allowed seven repetitions as a refresher before the test) he needed to study the list just 38 times—meaning he could cut study time nearly in half, with the same result, by distributing the practice.

The spacing effect has held up remarkably well over the more than one hundred years that researchers have examined it. Here's another example, published about 80 years after Ebbinghaus's work: Geoffrey Keppel had college students learn pairs of nonsense syllables and adjectives (e.g., lum-happy). They were to learn the list so that when they saw the syllable, they could provide the matching adjective. All subjects studied the list eight times, but for half of the subjects, all eight trials occurred on the same day (massed practice), while the other subjects studied the list two times on each of four successive days (distributed practice). Keppel tested their memory of the list either 24 hours after the final study session or a week later. The upshot: the massed practice group did fairly well when tested the next day, but showed a considerable drop-off when tested a week later. The distributed practice group, on the other hand, showed very little forgetting, even after the delay.

Massed practice is obviously very similar to what is commonly and derisively called "cramming." These results make it look as though cramming might allow you to remember things for a test the next day, but not for the long haul.

These are interesting studies, but for teachers they should raise as many questions as they answer. Before looking at some of the questions, it's important to pause and emphasize what the spacing effect is not.

  • The spacing effect does not address the issue of "review." Reviewing refers to presenting again material that the student once knew but that has been subject to the ravages of forgetting. Review is designed to strengthen a fragile memory.
  • The spacing effect does not address the usefulness of spending additional time on a topic. It refers only to the distribution of time one has already allocated to study some material.

Let's consider several questions raised by the research.

Does this spacing effect apply to school-age children as well as college students? Does it apply to the sorts of materials students learn and not just nonsense words like "lum"?
It seems to. Kristine Bloom and Thomas Shuell (1981) taught 20 new vocabulary words to high school students enrolled in a French course. Students either studied the words for one 30-minute session (massed) or for a 10-minute session on each of three consecutive days. The groups were indistinguishable on a test administered immediately after practice, with each group remembering about 16 of the 20 words. A retest administered four days later, however, showed that the distributed practice group still remembered the words (15 words correct), whereas the massed practice group forgot much more (11 words correct).

Another study was conducted by Cornelius Rea and Vito Modigliani (1985) with third-grade students. In this experiment, one group was taught spelling words and math facts in a distributed condition and another in a massed condition. A test immediately following the training showed superior performance for the distributed group (70 percent correct) compared to the massed group (53 percent correct). These results seem to show that the spacing effect applies to school-age children and to at least some types of materials that are typically taught in school.

So spacing practice time improves the likelihood that a student will remember new facts. Does spacing work for other types of material?
John Donovan and David Radosevich (1999) conducted a meta-analysis of spacing-effect studies performed on adults. A meta-analysis is a statistical technique that reveals trends across many studies. Donovan and Radosevich noted that spacing has the biggest effect on learning simple motor skills (such as typing), but it is also present when subjects learn new facts, as in the studies above. Only a few experiments have investigated highly complex skills (e.g., running an air traffic control simulator), and in those studies the spacing effect disappeared altogether. Thus, this meta-analysis supports the idea that the spacing effect applies to some (but probably not all) of the sorts of things that children learn in school. Unfortunately, there is little laboratory data to suggest at what point along the continuum from learning facts to learning complex material the spacing effect loses its potency.

How large is the spacing effect's impact on learning?
The reality of the spacing effect is strongly supported by a good deal of data. But is its actual impact on learning large enough to justify altering our teaching plans to accommodate it? The effect could be real in statistical terms, but insignificantly small in practical terms. So just how big is it? Because different studies use different measures, it can be very difficult to compare the relative effectiveness of strategies; this, of course, is the old apples-and-oranges problem. To overcome this problem, statisticians use "effect size" measures—one of which is denoted d—that are independent of the particular measurement scale employed in a study.

According to Donovan and Radosevich's meta-analysis of spacing studies, the effect size for the spacing effect is d = .42. This means that the average person getting distributed training remembers better than about 67 percent of the people getting massed training. This effect size is nothing to sneeze at—in education research, effect sizes as low as d = .25 are considered practically significant, while effect sizes above d = 1 are rare.
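For readers who want to see where that percentage comes from, here is a brief sketch under the standard simplifying assumption that scores in both groups are normally distributed with equal spread: d expresses the difference between the group means in standard-deviation units, and the share of the massed group falling below the average distributed learner is the normal cumulative probability of d.

\[
d = \frac{M_{\text{distributed}} - M_{\text{massed}}}{SD_{\text{pooled}}}, \qquad \Phi(0.42) \approx 0.66,
\]

so the average student whose practice is distributed remembers better than roughly two-thirds of students whose practice is massed, which is the "about 67 percent" figure cited above.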

To put this effect size in perspective, consider another effect size. People who have had a heart attack are often encouraged to take an aspirin each day to help prevent future heart attacks. The effect size associated with this treatment is a puny d = .03. Why, then, is it such a well-known treatment? Partly because the stakes are so high (we're trying to prevent heart attacks) and partly because there aren't many effective alternative treatments.

By all these measures, it seems that a strategy with a d = .42 effect is worth taking very seriously.

Does the spacing effect produce long-term effects or just short-term effects?
The tests that we've described used rather short timeframes. Even the "distributed" delays were often minutes or hours, and the test was administered at most a week (and often much less) after study. In education, we hope that students will remember material for years—both because the knowledge itself is valuable and because we must build on that initial knowledge in order to reach advanced knowledge. Suppose distributing practice helps memory for a month or so, but has no effect in the long run. If that were true, it certainly wouldn't be worth worrying about.

This question has not been investigated often because of the practical difficulties of conducting studies that last a number of years. The few studies that have been done, however, suggest that distributed practice is very important in forming memories that last for years.

Harry Bahrick and Elizabeth Phelps (1987) examined the retention of 50 Spanish vocabulary words after an eight-year delay. Subjects were divided into three groups. Each practiced for seven or eight sessions, separated by a few minutes, a day, or 30 days. In each session, subjects practiced until they could produce the list perfectly one time.

Notice that in this experiment, the researchers didn't match the total amount of practice across groups. Rather, they matched the level of subjects' performance; at the end of each session, each subject could produce the list without error. Eight years later, people in the no-delay group could recall 6 percent of the words, people in the one-day delay group could remember 8 percent, and those in the 30-day group averaged 15 percent. Everyone also took a multiple choice test, and again, the spacing effect was observed. The no-delay group scored 71 percent, the one-day group scored 80 percent, and the 30-day group scored 83 percent.

This experiment, although impressive, was a bit different from those that came before it. Subjects were trained to a criterion (one perfect repetition of the list), which means that subjects in the longer delay conditions studied a bit more than those in the shorter delay conditions; they had forgotten some of the list during the delay, so they needed more practice to reach the criterion of one perfect recitation. But, clearly, the payoff for this small cost was dramatic.

Nonetheless, this difference in total practice time raises an important issue: Perhaps the improved memory eight years later was not caused by the distributed nature of the practice, but by the slight increase in the number of practice trials.

In a follow-up experiment, Bahrick and his colleagues varied both the spacing of practice and the amount of practice. Practice sessions were spaced 14, 28, or 56 days apart, and totaled 13 or 26 sessions. They tested subjects' memory one, two, three, and five years after training. Once again, it took a bit longer to reach the criterion within each session when practice sessions were spaced farther apart, but again, this small investment paid dividends years later. It didn't matter whether testing occurred at one, two, three, or five years after practice—the 56-day group always remembered the most, the 28-day group was next, and the 14-day group remembered the least. Further, the effect was quite large. If words were practiced every 14 days, you needed twice as much practice to reach the same level of performance as when words were practiced every 56 days!

To summarize what we know from the laboratory: There is a mountain of evidence suggesting that spacing study time leads to better memory of the material; the effect applies to at least some of the types of learning students do—fact learning; and it seems to hold for school-age children. Most of that work used "distributed" timeframes that were not all that distributed—a matter of minutes or perhaps a day. But the small number of experiments that have used longer delays between practice sessions, and very long delays (years) before testing for retention, indicates that the spacing effect holds—and perhaps is even more robust after these long delays.

What Could This Look Like in the Classroom?

How can this research on the spacing effect be applied in the classroom? Here are a few ways to think about applications:

1) Identify key facts and ideas for distributed study: Think about the key sets of facts and ideas that you most want your students to remember twenty years from now—and next year. In an American history class, that set of ideas might include the key principles that the Founders intended to capture in the Constitution and the Bill of Rights. In elementary science, one such idea could be how electricity works. In first-grade math it could be addition and subtraction facts. Once you’ve identified this core content, you can use the next five strategies to engage students in studying this material on a number of occasions over several weeks or even months.

2) Design homework assignments that distribute practice: In developing homework assignments, strongly consider including material that was taught in previous weeks and even months. For example, at the end of a given unit, consider assigning homework that includes questions related to the previous several units (and even units going back to the beginning of the year).

3) Discourage cramming for tests: Carefully consider how to elicit student practice of test material several times before it appears on the test (for example, it might appear in a homework assignment; be elicited as part of a class discussion; and get quizzed in a quick class "bee"). When test time arrives, students have already distributed their learning a bit; the test becomes one more in a series of practice opportunities. In addition, make it a routine to include a number of items from previous units on each test—particularly material that many students did not do well on the first time around. This way students will know that they need to keep working on material that they find challenging—and that they won’t be able to get away with just cramming on the current material.

4) Take advantage of "down time" for practice: Especially in elementary school, when children are lining up for recess or lunch or during other transitions, run down the line asking each student a question related to material that has been introduced and practiced in previous lessons.

5) Break big ideas down into small pieces that can be easily practiced: After introducing a topic and covering enough content for students to understand the key ideas, break those key ideas and their associated facts or skills into small pieces that can be practiced in a variety of ways like class discussions, short quizzes, homework assignments, and class games.

6) Let students in on the secret: By all means, explain to your students that an important part of learning is remembering—and that they’re more likely to remember material if they revisit it a number of times. In fact, students may find that they can spend less total time studying for tests if they distribute their time over several sessions.

 


 

Daniel T. Willingham is associate professor of cognitive psychology and neuroscience at the University of Virginia and author of Cognition: The Thinking Animal. His research focuses on the role of consciousness in learning. Visit his Web site at www.danielwillingham.com. Special thanks to Alice Gill, Rosalind LaRocque, and Diane Airhart of the AFT's Educational Research and Dissemination Program for their ideas in developing the classroom applications. 

Readers can pose specific questions to:
Cognitive Scientist c/o American Educator

555 New Jersey Ave., N.W.
Washington, DC 20001
or e-mail to: amered@aft.org

References

Bahrick, Harry P.; Phelps, Elizabeth. Retention of Spanish vocabulary over 8 years. Journal of Experimental Psychology: Learning, Memory, & Cognition, Vol. 13(2), Apr 1987, 344-349.

Bloom, Kristine C.; Shuell, Thomas J. Effects of massed and distributed practice on the learning and retention of second-language vocabulary. Journal of Educational Research, Vol. 74(4), Mar-Apr 1981, 245-248.

Donovan, John J.; Radosevich, David J. A meta-analytic review of the distribution of practice effect: Now you see it, now you don't. Journal of Applied Psychology, Vol. 84(5), Oct 1999, 795-805.

Ebbinghaus, H. Memory: A contribution to experimental psychology. New York: Dover, 1964 (originally published 1885).

Rea, Cornelius P.; Modigliani, Vito. The effect of expanded versus massed practice on the retention of multiplication facts and spelling lists. Human Learning: Journal of Practical Research & Applications, Vol. 4(1), Jan-Mar 1985, 11-18.

American Educator, Summer 2002