How does the mind work—and especially how does it learn? Teachers’ instructional decisions are based on a mix of theories learned in teacher education, trial and error, craft knowledge, and gut instinct. Such knowledge often serves us well, but is there anything sturdier to rely on?
Cognitive science is an interdisciplinary field of researchers from psychology, neuroscience, linguistics, philosophy, computer science, and anthropology who seek to understand the mind. In this regular American Educator column, we consider findings from this field that are strong and clear enough to merit classroom application.
Individuals vary in their views of what students should be taught, but there is little disagreement on the importance of critical thinking skills. In free societies, the ability to think critically is viewed as a cornerstone of individual civic engagement and economic success.
Despite this consensus, it’s not always clear what’s meant by “critical thinking.” I will offer a commonsensical view.1 You are thinking critically if (1) your thinking is novel—that is, you aren’t simply drawing a conclusion from a memory of a previous situation; (2) your thinking is self-directed—that is, you are not merely executing instructions given by someone else; and (3) your thinking is effective—that is, you respect certain conventions that make thinking more likely to yield useful conclusions. Such conventions include “consider both sides of an issue,” “offer evidence for claims made,” and “don’t let emotion interfere with reason.” This third characteristic will be our main concern, and as we’ll see, what constitutes effective thinking varies from domain to domain.
Critical Thinking Can Be Taught
Planning how to teach students to think critically should perhaps be our second task. Our first should be to ask whether evidence shows that explicitly teaching critical thinking brings any benefit.
There are many examples of critical thinking skills that are open to instruction.2 For example, in one experiment, researchers taught college students principles for evaluating evidence in psychology studies—principles like the difference between correlational research and true experiments, and the difference between anecdote and formal research.3 These principles were incorporated into regular instruction in a psychology class, and their application was practiced in that context. Compared to a control group that learned principles of memory, students who learned the critical thinking principles performed better on a test that required evaluation of psychology evidence.
But perhaps we should not find this result terribly surprising. You tell students, “This is a good strategy for this type of problem,” and you have them practice the strategy; later, when they encounter that type of problem, they use it.
When we think of critical thinking, we think of something bigger than its domain of training. When I teach students how to evaluate the argument in a set of newspaper editorials, I’m hoping that they will learn to evaluate arguments generally, not just the ones they read. The research literature on successful transfer of learning* to new problems is less encouraging.
Teaching Critical Thinking for General Transfer
It’s a perennial idea—teach something that requires critical thinking, and such thinking will become habitual. In the 19th century, educators suggested that Latin and geometry demanded logical thinking, which would prompt students to think logically in other contexts.4 The idea was challenged by psychologist Edward Thorndike, who compared scores from standardized tests that high school students took in autumn and spring as a function of the coursework they had taken during the year. If Latin, for example, makes you smart, students who take it should score better in the spring. They didn’t.5
In the 1960s, computer programming replaced Latin as the discipline that would lead to logical thinking.6 Studies through the 1980s showed mixed results,7 but a recent meta-analysis offered some apparently encouraging results about the general trainability of computational thinking.8 The researchers reported that learning to program a computer yielded modest positive transfer to measures of creative thinking, mathematics, metacognition, spatial skills, and reasoning. It’s sensible to think that this transfer was a consequence of conceptual overlap between programming and these skills, as no benefit was observed in measures of literacy.
Hopeful adults have tried still other activities as potential all-purpose enhancers of intelligence—for example, exposure to classical music (the so-called Mozart effect),9 learning to play a musical instrument,10 or learning to play chess.11 None have succeeded as hoped.
It’s no surprise, then, that school programs meant to teach general critical thinking skills have had limited success. Unfortunately, evaluations of these programs seldom offer a good test of transfer; the measure of success tends to feature the same sort of task that was used during training.12 When investigators have tested for transfer in such curricular programs, positive results have been absent or modest and quick to fade.13
Transfer and the Nature of Critical Thinking
We probably should have anticipated these results. Wanting students to be able to “analyze, synthesize, and evaluate” information sounds like a reasonable goal, but those terms mean different things in different disciplines. Literary criticism has its own internal logic, its norms for what constitutes good evidence and a valid argument. These norms differ from those found in mathematics, for example. Thus, our goals for student critical thinking must be domain-specific.
But wait. Surely there are some principles of thinking that apply across fields of study. Affirming the consequent is always wrong, straw-person arguments are always weak, and having a conflict of interest always makes your argument suspect.14 There are indeed principles that carry across domains of study. The problem is that people who learn these broadly applicable principles in one situation often fail to apply them in a new situation.
The law of large numbers provides an example. It states that an estimate based on a large sample will probably be closer to the true value than an estimate based on a small sample—if you want to know whether a set of dice is loaded, you’re better off seeing the results of 20 throws rather than two. People readily understand this idea in the context of evaluating randomness, but a small sample doesn’t bother them when judging academic performance; if someone receives poor grades on two math tests, observers conclude that the person is simply bad at math.15
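To make the statistical intuition concrete, here is a minimal simulation, offered as a sketch of my own rather than anything from the cited studies (the function name and sample sizes are arbitrary choices): estimates from two throws swing wildly, while estimates from 200 throws settle near the true value of one in six.

```python
import random

def estimated_rate_of_sixes(num_throws: int) -> float:
    """Roll a fair die num_throws times; return the observed fraction of sixes."""
    return sum(random.randint(1, 6) == 6 for _ in range(num_throws)) / num_throws

random.seed(1)  # fixed seed so the sketch is reproducible
# The true rate is 1/6 (about 0.17). Tiny samples scatter widely around it...
print([round(estimated_rate_of_sixes(2), 2) for _ in range(5)])
# ...while larger samples cluster close to it.
print([round(estimated_rate_of_sixes(200), 2) for _ in range(5)])
```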
In another classic experiment, researchers administered a tricky problem: a malignant tumor could be treated with a particular ray, but the ray caused a lot of collateral damage to healthy tissue. How, subjects were asked, could the ray be used to destroy the tumor? Other subjects got the same problem, but first read a story describing a military situation analogous to the medical problem. Instead of rays attacking a tumor, rebels were to attack a fortress. The military story offered a perfect analogy to the medical problem, but despite reading it moments before, subjects still couldn’t solve the medical problem. Merely mentioning that the story might help solve the problem boosted solution rates to nearly 100 percent. Thus, using the analogy was not hard; the problem was thinking to use it in the first place.16
These results offer a new perspective on critical thinking. The problem in transfer is not just that different domains have different norms for critical thinking. The problem is that previous critical thinking successes seem encapsulated in memory. We know that a student has understood an idea like the law of large numbers. But understanding it offers no guarantee that the student will recognize new situations in which that idea will be useful.
Critical Thinking as Problem Recognition
Happily, this difficulty in recognizing problems you’ve solved before disappears in the face of significant practice. If I solve a lot of problems in which the law of large numbers is relevant, I no longer focus on the particulars of the problem—that is, whether it seems to be about cars, or ratings of happiness, or savings bonds. I immediately see that the law of large numbers is relevant.17 Lots of practice is OK if you’re not in a hurry, but is there a faster way to help students “just see” that they have solved a problem before?
One technique is problem comparison: show students two solved problems that have the same structure but appear to be about different things, and ask students to compare them.18 In one experiment testing this method, business school students were asked to compare two stories, one involving international companies coping with a shipping problem, and the other concerning two college students planning a spring break trip. In each, a difficult negotiation problem was resolved through the use of a particular type of contract. Two weeks later, students who had contrasted the stories were more likely than students who had simply read them to use the solution on a novel problem.19

Richard Catrambone developed a different technique to address a slightly different transfer problem. He noted that in math and science classes, students often learn to solve standard problems via a series of fixed, lockstep procedures. That means students are stumped when confronted with a problem requiring a slight revision of the steps, even if the goal of the steps is the same. For example, a student might learn a method for solving word problems involving work, like “Nicola can paint a house in 14 hours, and Carole can do it in 8. How long would it take them to paint one house, working together?” A student who learns a sequence of steps to solve that sort of problem is often thrown by a small change: the homeowner had already painted one-fourth of the house before hiring Nicola and Carole.
Catrambone20 showed that student knowledge will be more flexible if students are taught to label the substeps of a solution with the goals they serve. For example, work problems are typically solved by calculating how much of the job each worker can do in one hour. If, during learning, that step were labeled with its goal (finding the fraction of the job completed per hour), students would understand what the calculation contributes to the solution and would know how to adapt it when only part of the house remains to be painted.
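To see what the labeled substep buys, here is the arithmetic for both versions of the painting problem (a worked version of the example above):

$$\text{fraction of the job done per hour, working together} = \frac{1}{14} + \frac{1}{8} = \frac{11}{56}$$

$$\text{whole house: } t = \frac{1}{11/56} = \frac{56}{11} \approx 5.1 \text{ hours}; \qquad \text{three-quarters remaining: } t = \frac{3/4}{11/56} = \frac{42}{11} \approx 3.8 \text{ hours}$$

A student who knows that the substep computes the fraction of the job completed per hour can see that only the numerator changes when part of the house is already painted; a student who memorized the steps without their goals has nothing to adapt.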
Open-Ended Problems and Knowledge
Students encounter standard problems that are best solved in a particular way, but many critical thinking situations are unique. There are no routine, reusable solutions for problems like designing a product or planning a strategy for a field hockey match. Nevertheless, critical thinking for open-ended problems is enabled by extensive stores of knowledge about the domain.21
First, the recognition process described above (“oh, this is that sort of problem”) can still apply to subparts of a complex, open-ended problem. Critical thinking may entail retrieving multiple simpler solutions from memory and “snapping them together” to solve the larger problem.22 For example, arithmetic is needed to calculate the best value among several vacation packages.
Second, knowledge impacts working memory. Working memory refers, colloquially, to the place in the mind where thinking happens—it’s where you hold information and manipulate it to carry out cognitive tasks. So, for example, if I said “How is a scarecrow like a blueberry?,” you would retrieve information about scarecrows (not alive, protect crops, found in fields, birds think they are alive) and blueberries (purple, used in pies, small, featured in Blueberries for Sal) from your memory, and then you’d start comparing these features, looking for overlap. But working memory has limited space; if I added three more words, you’d struggle to keep all five and their associations in mind at once.
With experience, often-associated bits of knowledge clump together and thus take up less room in working memory. In chess, a king, a rook, and three pawns in a corner of the board form a familiar defensive position, so the expert treats them as a single unit. An experienced dancer similarly chunks dance moves, freeing him to think about more subtle aspects of movement rather than crowding working memory with “what I’m to do next.”
Third, knowledge is sometimes necessary to deploy thinking strategies. As noted above, sometimes you have an effective thinking strategy in your memory (for example, apply the law of large numbers) but fail to see that it’s relevant. In other situations, the proper thinking is easily recognized. We can tell students that they should evaluate the logic of the author’s argument when they read an op-ed, and we can tell them the right method to use when conducting a scientific experiment. Students should have no trouble recognizing “Oh, this is that sort of problem,” and they may have committed to memory the right thinking strategy. They know what to do, but they may not be able to use the strategy without the right domain knowledge.
Principles of scientific reasoning seem to be content free: for example, “a control group should be identical to the experimental group, except for the treatment.” In practice, however, content knowledge is needed to use the principle. In an experiment on learning, you’d want to be sure that the experimental and control groups were comparable, so you’d make sure that the proportions of men and women in each group were the same. What characteristics besides sex should be equivalent in the experimental and control groups? Ability to concentrate? Intelligence? You can’t measure every characteristic of your subjects, so you’d focus on characteristics that you know are relevant to learning. But knowing which characteristics are “relevant to learning” means knowing the research literature in learning and memory.
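To make this concrete, here is what a group-comparability check might look like written out in code. This is a minimal sketch with invented subjects and traits, not a real study design; the point is that the check itself is mechanical, while deciding which characteristics belong on the list requires knowledge of the domain.

```python
import random
from statistics import mean

random.seed(2)

# Hypothetical subject records; the traits and their distributions are invented.
subjects = [
    {"sex": random.choice("MF"),
     "iq": random.gauss(100, 15),
     "attention_span_min": random.gauss(20, 5)}
    for _ in range(200)
]

# Randomly assign subjects to experimental and control groups.
random.shuffle(subjects)
experimental, control = subjects[:100], subjects[100:]

# Choosing which traits to check requires knowing the learning and memory
# literature; the code cannot make that choice for you.
RELEVANT_TO_LEARNING = ["iq", "attention_span_min"]

for trait in RELEVANT_TO_LEARNING:
    diff = mean(s[trait] for s in experimental) - mean(s[trait] for s in control)
    print(f"{trait}: mean difference between groups = {diff:.2f}")
```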
Experimental evidence shows that an expert doesn’t think as well outside her area of expertise, even in a closely related domain. She’s still better than a novice, but her skills don’t transfer completely. For example, knowledge of medicine transfers poorly among subspecialties (neurologists do not diagnose cardiac cases well),23 technical writers can’t write newspaper articles,24 and even professional philosophers are swayed by irrelevant features of problems like question order or wording.25
How to Teach Students to Think Critically
So what does all this mean? Is there really no such thing as a “critical thinking skill” if by “skill” we mean something generalizable? Maybe, but it’s hard to be sure. We do know that students who go to school longer score better on intelligence tests, and certainly we think of intelligence as all-purpose.26 Still, it may be that schooling boosts a collection of fairly specific thinking skills. If it increases general thinking skills, researchers have been unable to identify them.
Although existing data favor the specific skills account,27 researchers would still say it’s uncertain whether a good critical thinker is someone who has mastered lots of specific skills, or someone with a smaller set of yet-to-be-identified general skills. But educators aren’t researchers, and for educators, one fact ought to be salient. We’re not even sure the general skills exist, but we’re quite sure there’s no proven way to teach them directly. In contrast, we have a pretty good idea of how to teach students the more specific critical thinking skills. I suggest we do so. Here’s a four-step plan.
First, identify what’s meant by critical thinking in each domain. Be specific by focusing on tasks that tap skills, not skills themselves. What tasks showing critical thinking should a high school graduate be able to do in mathematics, history, and other subjects? For example, educators might decide that an important aspect of understanding history is the ability to source historical documents; that is, to interpret each document in light of its source: who wrote it, for what purpose, and for what intended audience. Educators might decide that a key critical thinking skill for science is understanding the relationship between a theory and a hypothesis. These skills should be explicitly taught and practiced; there is evidence that simple exposure to this sort of work without explicit instruction is less effective.28
Second, identify the domain content that students must know. We’ve seen that domain knowledge is a crucial driver of thinking skill. What knowledge is essential to the type of thinking you want your students to be able to do? For example, if students are to source documents, they need knowledge of the relevant source; knowing that they are reading a 1779 letter from General George Clinton to George Washington requesting supplies won’t mean much without some background knowledge of the American Revolutionary War. That background is what enables students to make sense of what they find when they look up Clinton and his activities at the time.
The prospect of someone deciding which knowledge students ought to learn—and what they won’t learn—sometimes makes people uneasy because this decision depends on one’s goals for schooling, and goals depend on values. Selection of content is a critical way that values are expressed.29 Making that choice will lead to uncomfortable tradeoffs. But not choosing is still making a choice. It’s choosing not to plan.
Third, educators must select the best sequence for students to learn the skills. It’s obvious that skills and knowledge build on one another in mathematics and history, and it’s equally true of other domains of skill and knowledge; we interpret new information in light of what we already know.
Fourth, educators must decide which skills should be revisited across years. Studies show that even if content is learned quite well over the course of half of a school year, about half will be forgotten in three years.30 That doesn’t mean there’s no value in exposing students to content just once; most students will forget much, but they’ll remember something, and for some students, an interest may be kindled. But when considering skills we hope will stick with students for the long term, we should plan on at least three to five years of practice.31
Some Practical Matters of Teaching Critical Thinking
I’ve outlined a broad, four-step plan. Let’s consider some of the pragmatic decisions educators face as they contemplate the teaching of critical thinking.
Is it all or none? I’ve suggested that critical thinking be taught in the context of a comprehensive curriculum. Does that mean an individual teacher cannot do anything on his or her own? Is there just no point in trying if the cooperation of the entire school system is not assured?
Obviously that’s not the case; a teacher can still include critical thinking content in his or her courses and students will learn, but it’s quite likely they will learn more, and learn more quickly, if their learning is coordinated across years. It has long been recognized among psychologists that an important factor influencing learning, perhaps the most important factor, is what the student already knows.32 Teaching will be more effective if the instructor is confident about what his or her students already know.
Student age: When should critical thinking instruction start? There’s not a firm, research-based answer to this question. Researchers interested in thinking skills like problem solving or evidence evaluation in young children (preschool through early elementary ages) have studied how children think in the absence of explicit instruction. They have not studied whether or how young children can be made to think more critically. Still, research over the last 30 years or so has led to an important conclusion: children are more capable than we thought.
The great developmental psychologist Jean Piaget proposed a highly influential theory in which children’s cognition moves through a series of four stages, each characterized by increasingly abstract thought and a growing ability to take multiple perspectives. In stage theories, the basic architecture of thought is unchanged for long periods of time and then rapidly reorganizes as the child moves from one developmental stage to the next.33 A key educational implication is that it’s at least pointless, and possibly damaging, to ask a child to do cognitive work appropriate for a later developmental stage. The last 30 years have shown that, contrary to Piaget’s theory, development is gradual rather than abrupt, and that what children can and cannot do varies depending on the content.
For example, in some circumstances, even toddlers can understand principles of conditional reasoning, the kind of reasoning required when the relationship of two things is contingent on a third. A child may understand that when she visits a friend’s house, she may or may not get a treat like cake or cookies for a snack. But if her friend is celebrating a birthday, the relation between those two things (a visit and getting cake) becomes very consistent. Yet when conditional reasoning problems are framed in unfamiliar contexts, they confuse even adult physicians. Much depends on the content of the problem.34
Thus, research tells us that including critical thinking in the schooling of young children is likely to be perfectly appropriate. It does not, however, provide guidance on which types of critical thinking skills to start with. That is a matter to take up with experienced educators, coordinating with colleagues who teach older children in the interest of making the curriculum seamless.
Types of students: Should everyone learn critical thinking skills? The question sounds like a setup, like an excuse for a resounding endorsement of critical thinking for all. But the truth is that, in many systems, less capable students are steered into less challenging coursework, with the hope that by reducing expectations, they will at least achieve “mastery of the basics.” These lower expectations often pervade entire schools that serve students from low-income families.35
It is worth highlighting that access to challenging content and continuing to postsecondary education is, in nearly every country, associated with socioeconomic status.36 Children from high socioeconomic status families also have more opportunities to learn at home. If school is the chief or only venue through which low socioeconomic status students are exposed to advanced vocabulary, rich content knowledge, and demands for high-level thinking, it is absolutely vital that those opportunities be enhanced, not reduced.
Assessment: Assessment of critical thinking is, needless to say, a challenge. One difficulty is expense. Despite claims to the contrary, multiple-choice items do not necessarily require critical thinking, even when items are carefully constructed and vetted, as on the National Assessment of Educational Progress (NAEP). One researcher37 administered items from the history NAEP for 12th-graders to college students who had done well on other standardized history exams. Students were asked to think aloud as they chose their answers; the researcher observed little critical thinking but a lot of “gaming” of the questions. Assessing critical thinking requires that students answer open-ended questions, and that means humans must score the responses, an expensive proposition.
On the bright side, the plan for teaching critical thinking that I’ve recommended makes some aspects of assessment more straightforward. If the skills that constitute “critical thinking” in, say, 10th-grade chemistry class are fully defined, then there is no question as to what content ought to appear on the assessment. The predictability ought to make teachers more confident that they can prepare their students for standardized assessments.
Teaching students to think critically is a universal goal of schooling, so it may be surprising that student difficulty in this area is such a common complaint; educators are often frustrated that student thinking seems shallow. This review should offer insight into why that is. The way the mind works, shallow is what you get first. Deep, critical thinking is hard-won.
That means that designers and administrators of a program to improve critical thinking among students must take the long view, both in the time frame over which the program operates and especially in the speed with which one expects to see results. Patience will be a key ingredient in any program that succeeds.
Daniel T. Willingham is a professor of cognitive psychology at the University of Virginia. He is the author of When Can You Trust the Experts? How to Tell Good Science from Bad in Education and Why Don’t Students Like School? His most recent book is The Reading Mind: A Cognitive Approach to Understanding How the Mind Reads. This article is adapted with permission from his report for the government of New South Wales, “How to Teach Critical Thinking.” Copyright 2019 by Willingham. Readers can pose questions to “Ask the Cognitive Scientist” by sending an email to ae@aft.org. Future columns will try to address readers’ questions.
*For more on the research behind transfer of learning, see “If You Learn A, Will You Be Better Able to Learn B?” in the Spring 2020 issue of American Educator.
Endnotes
1. D. T. Willingham, “Critical Thinking: Why Is It So Hard to Teach?,” American Educator 31, no. 2 (Summer 2007): 8–19.
2. P. C. Abrami et al., “Instructional Interventions Affecting Critical Thinking Skills and Dispositions: A Stage 1 Meta-Analysis,” Review of Educational Research 78, no. 4 (2008): 1102–1134; and R. L. Bangert-Drowns and E. Bankert, “Meta-Analysis of Effects of Explicit Instruction for Critical Thinking,” in Annual Meeting of the American Educational Research Association (Boston, 1990), 56–79.
3. D. A. Bensley and R. A. Spero, “Improving Critical Thinking Skills and Metacognitive Monitoring through Direct Infusion,” Thinking Skills and Creativity 12 (2014): 55–68.
4. C. F. Lewis, “A Study in Formal Discipline,” The School Review 13, no. 4 (1905): 281–292.
5. E. L. Thorndike, “The Influence of First-Year Latin upon Ability to Read English,” School and Society 17 (1923): 165–168; and C. R. Broyler, E. L. Thorndike, and E. Woodyard, “A Second Study of Mental Discipline in High School Studies,” Journal of Educational Psychology 18, no. 6 (1927): 377–404.
6. S. Papert, “Teaching Children to Be Mathematicians versus Teaching about Mathematics,” International Journal of Mathematical Education in Science and Technology 3, no. 3 (1972): 249–262; and S. Papert, Mindstorms (New York: Basic Books, 1980); see also D. H. Clements and D. F. Gullo, “Effects of Computer Programming on Young Children’s Cognition,” Journal of Educational Psychology 76, no. 6 (1984): 1051–1058; and M. C. Linn, “The Cognitive Consequences of Programming Instruction in Classrooms,” Educational Researcher 14, no. 5 (1985): 14–29.
7. Y.-K. C. Liao and G. W. Bright, “Effects of Computer Programming on Cognitive Outcomes: A Meta-Analysis,” Journal of Educational Computing Research 7, no. 3 (1991): 251–268.
8. R. Scherer, F. Siddiq, and B. S. Viveros, “The Cognitive Benefits of Learning Computer Programming: A Meta-Analysis of Transfer Effects,” Journal of Educational Psychology 111, no. 5 (2019): 764–792.
9. J. Pietschnig, M. Voracek, and A. K. Formann, “Mozart Effect-Schmozart Effect: A Meta-Analysis,” Intelligence 38, no. 3 (2010): 314–323.
10. G. Sala and F. Gobet, “When the Music’s Over: Does Music Skill Transfer to Children’s and Young Adolescents’ Cognitive and Academic Skills? A Meta-Analysis,” Educational Research Review 20 (2017): 55–67.
11. G. Sala and F. Gobet, “Do the Benefits of Chess Instruction Transfer to Academic and Cognitive Skills? A Meta-Analysis,” Educational Research Review 18 (2016): 46–57.
12. For example, A. Kozulin et al., “Cognitive Modifiability of Children with Developmental Disabilities: A Multicentre Study Using Feuerstein’s Instrumental Enrichment-Basic Program,” Research in Developmental Disabilities 31, no. 2 (2010): 551–559; D. Kuhn and A. Crowell, “Dialogic Argumentation as a Vehicle for Developing Young Adolescents’ Thinking,” Psychological Science 22, no. 4 (2011): 545–552; and A. Reznitskaya et al., “Examining Transfer Effects from Dialogic Discussions to New Tasks and Contexts,” Contemporary Educational Psychology 37, no. 4 (2012): 288–306.
13. R. Ritchhart and D. N. Perkins, “Learning to Think: The Challenges of Teaching Thinking,” in The Cambridge Handbook of Thinking and Reasoning, ed. K. J. Holyoak and R. G. Morrison (Cambridge, UK: Cambridge UP, 2005), 775–802.
14. R. H. Ennis, “Critical Thinking and the Curriculum,” in Thinking Skills Instruction: Concepts and Techniques, ed. M. Heiman and J. Slomianko (West Haven, CT: NEA Professional Library, 1987), 40–48.
15. C. Jepson, D. H. Krantz, and R. E. Nisbett, “Inductive Reasoning: Competence or Skill?,” Behavioral and Brain Sciences 6, no. 3 (1983): 494–501.
16. M. Gick and K. Holyoak, “Analogical Problem Solving,” Cognitive Psychology 12, no. 3 (1980): 306–355; and M. Gick and K. Holyoak, “Schema Induction and Analogical Transfer,” Cognitive Psychology 15, no. 1 (1983): 1–38.
17. For example, Z. Chen and L. Mo, “Schema Induction in Problem Solving: A Multidimensional Analysis,” Journal of Experimental Psychology: Learning, Memory, and Cognition 30, no. 3 (2004): 583–600.
18. K. J. Kurtz, O. Boukrina, and D. Gentner, “Comparison Promotes Learning and Transfer of Relational Categories,” Journal of Experimental Psychology: Learning, Memory, and Cognition 39, no. 4 (2013): 1303–1310.
19. J. Loewenstein, L. Thompson, and D. Gentner, “Analogical Encoding Facilitates Knowledge Transfer in Negotiation,” Psychonomic Bulletin and Review 6, no. 4 (1999): 586–597.
20. R. Catrambone, “Aiding Subgoal Learning: Effects on Transfer,” Journal of Educational Psychology 87, no. 1 (1995): 5–17; R. Catrambone, “The Subgoal Learning Model: Creating Better Examples to Improve Transfer to Novel Problems,” Journal of Experimental Psychology: General 127, no. 4 (1998): 355–376; R. Catrambone and K. Holyoak, “Learning Subgoals and Methods for Solving Probability Problems,” Memory & Cognition 18, no. 6 (1990): 593–603; and L. E. Margulieux and R. Catrambone, “Improving Problem Solving with Subgoal Labels in Expository Text and Worked Examples,” Learning and Instruction 42 (2016): 58–71.
21. J. S. North et al., “Mechanisms Underlying Skilled Anticipation and Recognition in a Dynamic and Temporally Constrained Domain,” Memory 19, no. 2 (2011): 155–168.
22. K. Koedinger, A. Corbett, and C. Perfetti, “The Knowledge-Learning-Instruction Framework: Bridging the Science-Practice Chasm to Enhance Robust Student Learning,” Cognitive Science 36, no. 5 (2012): 757–798; and N. A. Taatgen, “The Nature and Transfer of Cognitive Skills,” Psychological Review 120, no. 3 (2013): 439–471.
23. R. Rikers, H. Schmidt, and H. Boshuizen, “On the Constraints of Encapsulated Knowledge: Clinical Case Representations by Medical Experts and Subexperts,” Cognition and Instruction 20, no. 1 (2002): 27–45.
24. R. T. Kellogg, “Professional Writing Expertise,” in The Cambridge Handbook of Expertise and Expert Performance, ed. A. Ericsson et al. (Cambridge, UK: Cambridge UP, 2018).
25. E. Schwitzgebel and F. Cushman, “Philosophers’ Biased Judgments Persist Despite Training, Expertise, and Reflection,” Cognition 141 (2015): 127–137.
26. M. Carlsson et al., “The Effect of Schooling on Cognitive Skills,” Review of Economics and Statistics 97, no. 3 (2015): 533–547; S. Ritchie and E. Tucker-Drob, “How Much Does Education Improve Intelligence? A Meta-Analysis,” Psychological Science 29, no. 8 (2018): 1358–1369; and T. Strenze, “Intelligence and Socioeconomic Success: A Meta-Analytic Review of Longitudinal Research,” Intelligence 35, no. 5 (2007): 401–426.
27. S. Ritchie, T. C. Bates, and I. J. Deary, “Is Education Associated with Improvements in General Cognitive Ability, or in Specific Skills?,” Developmental Psychology 51, no. 5 (2015): 573–582.
28. Abrami et al., “Instructional Interventions”; D. F. Halpern, “Teaching Critical Thinking for Transfer across Domains: Disposition, Skills, Structure Training, and Metacognitive Monitoring,” American Psychologist 53, no. 4 (1998): 449–455; and A. Heijltjes, T. Van Gog, and F. Paas, “Improving Students’ Critical Thinking: Empirical Support for Explicit Instructions Combined with Practice,” Applied Cognitive Psychology 28, no. 4 (2014): 518–530.
29. D. T. Willingham, When Can You Trust the Experts? How to Tell Good Science from Bad in Education (San Francisco: Jossey-Bass, 2012).
30. A. Pawl et al., “What Do Seniors Remember from Freshman Physics?,” Physical Review Special Topics—Physics Education Research 8, no. 2 (2012): 020118.
31. H. P. Bahrick, “Semantic Memory Content in Permastore: Fifty Years of Memory for Spanish Learned in School,” Journal of Experimental Psychology: General 113, no. 1 (1984): 1–29; and H. P. Bahrick and L. K. Hall, “Lifetime Maintenance of High School Mathematics Content,” Journal of Experimental Psychology: General 120, no. 1 (1991): 20–33.
32. D. Ausubel, Educational Psychology: A Cognitive View (New York: Holt, Rinehart, and Winston, 1968).
33. J. Piaget, The Origins of Intelligence in Children (New York: International Universities Press, 1952).
34. D. T. Willingham, “What Is Developmentally Appropriate Practice?,” American Educator 32, no. 2 (2008): 34–39.
35. P. D. Parker et al., “A Multination Study of Socioeconomic Inequality in Expectations for Progression to Higher Education: The Role of Between-School Tracking and Ability Stratification,” American Educational Research Journal 53, no. 1 (2016): 6–32.
36. Organization for Economic Cooperation and Development, Education at a Glance 2018: OECD Indicators (Paris: OECD Publishing, 2018).
37. M. D. Smith, “Cognitive Validity: Can Multiple-Choice Items Tap Historical Thinking Processes?,” American Educational Research Journal 54 (2017): 1256–1287.