The math wars might have quieted a bit since their heyday in the mid- and late-1990s, but if you hold your ear up to the internet and listen closely it won't be long before you hear the sound of reformists and traditionalists trading fire. A recent story in Education News by Barry Garelick triggered a battle in the comment section, bringing out many of the usual suspects to fight for the ground held by the other side.
While I might find such battles interesting (in a "straw-men-knocked-down-per-minute" sort of way), they rarely accomplish anything beyond bolstering the ill will between the two camps. Occasionally there are hints at research findings, and perhaps somebody links to another story, blog, or website, but I rarely see anything that might convince either side it could be wrong. Remember, this is the world of mathematics we're talking about, and "proving" anything right or wrong requires a standard of evidence not easily found.
Part of what sustains the math wars is the vast divide in what the two sides see as quality research and research methods, how they see the nature of mathematics itself, and how they measure success in mathematics[1]. A lot of math warriors might be willing to concede defeat if they came out on the wrong end of a large-scale, randomized, longitudinal experiment with high-fidelity implementations of reform and traditional curricula and pedagogy, and multiple forms of assessment measuring a range of mathematical skills and abilities. But such experiments are very rare in social science – not because nobody wants to do them, but because randomizing and controlling people quickly veers toward the impossible and the unethical. So in place of an idealized experiment, researchers have been trying to answer the traditional-vs.-reform question using the best methods available.
One such researcher is Jo Boaler. If I were to conduct a Family Feud-style survey of 100 math teachers, asking them to name a math education researcher, I'd expect Boaler to make it on the board. (Debating who else would be on the list might make for interesting Google+ and Twitter fodder.) Since earning her PhD in mathematics education from King's College London in 1996, she has spent time in both the U.K. and the U.S. and is currently a professor at Stanford. While perhaps not as well known as her later Railside study, her three-year Open and Closed case study of two U.K. schools, one with a traditional approach and one with a reform approach, is worth examining here.
Boaler's research grew out of a concern that mathematical knowledge, when learned in a "traditional" way (which I'll define in a moment), isn't very transferable to contexts outside the classroom. Learning transfer is a slippery subject for learning scientists to pin down, partly because we have a history of viewing learning as a cognitive ("in the head") activity, while transfer requires us to consider how much our surroundings shape what we learn, a perspective often referred to as situated learning (Lave & Wenger, 1991). Boaler wanted to investigate if, and how, the way students were taught mathematics affected their future math performance in a variety of contexts.
For her study, Boaler spent three years in two U.K. high schools, observing the daily activities inside mathematics classrooms. A great deal of the work was ethnographic, but she also conducted about 25 interviews per year, collected about 300 surveys, and administered a series of assessments. While the schools were not chosen randomly, they were in the same community, were fed by the same primary schools, and served students with very similar demographic backgrounds. Average test scores for students entering the 3-year study were roughly the same at the two schools.
Math classes in "Amber Hill," the traditional school, generally consisted of a 15-20 minute lecture with worked example problems, followed by time for students to practice similar problems. Students were tracked into one of eight levels depending on their prior test scores and teachers' judgment of their abilities. Overall, the atmosphere was described as calm and the students were motivated; in a short study of time-on-task, Boaler never observed fewer than 90% of students doing their work during class. However, interview and survey data revealed that students found the work to be "boring and tedious" (Boaler, 1998, p. 45), and students described math as "rule following" (p. 46) and "cue-based" (p. 47), meaning they typically expected a task to indicate which rule to follow for solving a particular type of problem.
The other school in the study, "Phoenix Park," favored progressive education over traditional schooling. The atmosphere was very relaxed, and students were encouraged to accept responsibility for their own learning. Most of the math lessons were open-ended projects and students worked in mixed-ability groups. Boaler's description of the curriculum includes tasks like, "The volume of a shape is 216, what can it be?" When students needed math they did not know, they would get help from the teacher. When students lost interest, they were free to wander both physically and mentally in search of other work that might interest them. In the same short study of time-on-task, Boaler never recorded more than 70% of students working, and some students never appeared to do any work. When asked to describe their math lessons, the most common response from students was "noisy," followed by "good atmosphere" and "interesting" (p. 50). About a fifth of the students reported not liking having so much freedom in the classroom.
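To get a sense of just how open-ended a task like "the volume of a shape is 216" is, here's a quick illustration of my own (these particular answers aren't from Boaler's study). There are many correct responses, and choosing the shape is part of the work:

$$
\begin{aligned}
\text{a cube with edge } 6: \quad & 6 \times 6 \times 6 = 216\\
\text{a } 3 \times 8 \times 9 \text{ box}: \quad & 3 \times 8 \times 9 = 216\\
\text{a cylinder with radius } 3: \quad & \pi r^2 h = 216 \;\Rightarrow\; h = \tfrac{216}{9\pi} \approx 7.6
\end{aligned}
$$

Nothing in the task signals which formula to apply; students have to pick a shape, choose dimensions, and check their own work, which is a very different demand than the cue-based exercises described at Amber Hill.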
When comparing student attitudes in the two schools, Boaler found that Phoenix Park students reported being more interested in their lessons/projects, while Amber Hill students complained about their textbooks. At Amber Hill, boys reported being significantly more positive about mathematics than girls; at Phoenix Park there were no such differences.
One of the ways Boaler measured math performance was to give students a pre-test measuring their skills with volume and angles, then two weeks later give them an architectural activity using those same skills in context. A score of 1 represented a correct (or nearly correct) answer, while a 2 represented an incorrect answer. The percentage of students scoring a 1 on each task is shown in the table below.
| School | Pre-Test Volume | Pre-Test Angle | Architectural Task Volume | Architectural Task Angle |
|---|---|---|---|---|
| Amber Hill | 72% | 94% | 55% | 64% |
| Phoenix Park | 60% | 94% | 75% | 82% |
So while Amber Hill students scored better with decontextualized problems, Phoenix Park students did better with the tasks that more closely resembled using math in the real world. Boaler noticed a pattern in Amber Hill students' responses for the architectural angle task: many students took the word "angle" as a prompt to use trigonometry, even though none was needed.
While many might assume the traditional style of Amber Hill would result in those students receiving higher standardized test scores, Boaler suspected that transferring their knowledge from textbook to exam might be more difficult for Amber Hill students, as the exam contained questions that went beyond the simple application of rules and procedures. In examining GCSE exam scores from the end of Year 11, Boaler found that 11% of students at both Amber Hill and Phoenix Park received an A-C grade, but 88% of Phoenix Park students passed the exam compared to only 71% at Amber Hill. Boys at Amber Hill received significantly higher grades than girls (20% vs. 9%), while no such significant differences were found at Phoenix Park (13% for boys, 15% for girls).
In the discussion section of the article, Boaler returns to the survey data and examines the differences in attitudes towards math among students from the two schools. Amber Hill students admitted that they didn't see connections between their textbook exercises and the real world, while Phoenix Park students talked more about the process of solving problems and using mathematics as an adaptable tool. In her conclusion, Boaler claims that while Amber Hill students knew more mathematics, the students from Phoenix Park could apply more mathematics, because their style of learning had forced them to become more flexible in their approach and more forgiving of their environment. Boaler does criticize the open approach at Phoenix Park: despite test scores that compare favorably with Amber Hill's, it led to a great deal of wasted student time. Regardless of the curricular and pedagogical details, Boaler's final conclusion is that "a traditional textbook approach that emphasizes computation, rules, and procedures, at the expense of depth of understanding, is disadvantageous to students, primarily because it encourages learning that is inflexible, school-bound, and of limited use" (p. 60).
Are Boaler's findings enough to end the math wars? If the answer were "yes," they would have ended in 1998 when the article was published. While the article does come across as a victory for reform, I don't think we can equate the progressive, open style of Phoenix Park with what we'd expect of a "normal" reform classroom. Similarly, Amber Hill might be more traditional than a "normal" traditional classroom. Still, Boaler's methodology helps shed light on how researchers try to answer the "traditional-vs.-reform" question, and this work helps us think about the importance of how we assess and how our perspective on learning changes when we view it through the lens of learning transfer. Boaler's conclusion should still be useful information regardless of your preferred approach, even if it falls short of declaring a cessation of hostilities.
[1] In addition, the lack of free public access to high-quality mathematics education research also sustains the math wars. After all, it's largely public opinion that keeps the war going, and researchers have allowed themselves to contribute to a system that discourages the public from seeing their published results. I'm hoping these posts put a dent in that knowledge gap, but I can only do so much. For a big step towards a long-term solution to this problem, I urge you to support HR 4004, the Federal Research Public Access Act, as well as http://thecostofknowledge.com/. Finally, here's something I think both traditionalists and reformists can get behind.
References
Boaler, J. (1998). Open and closed mathematics: Student experiences and understandings. Journal for Research in Mathematics Education, 29(1), 41-62.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.