Reflections of Multicultural Education

It’s the day after the last day of what has been a very busy semester. Being busy is good, and being awash in new information every day is something I relish. But there comes a time when we must pause and reflect, and too often this semester I have not given myself that time. Admittedly, just keeping up with the flood of new information proved to be too much, and the student-to-student whispers of “You can’t read everything, you know” proved too regularly to be true. But finally, now, I can take a few hours and think about the last core course of my doctoral program: Multicultural Education (MCED), taught by Linda Mizell.

Assessing the value of this class has been difficult, as there were plenty of moments during the semester when I felt I wasn’t making much scholarly progress. One reason for that feeling – and a reason I appreciate – is that prior coursework had left me better prepared for MCED than I expected. (Or so I thought.) Rarely were the issues we explored in MCED not ones I’d considered in prior courses like Culture and Ethnography, Ethics in Education, Policy Issues, Education Research and Policy, and Perspectives on Classrooms, Teaching, and Learning. It is a credit to my institution that attention to multiculturalism and equity permeates most corners of the school, although I admit there are times when I still sense it as artificially layered onto a lesson or, even worse, uncomfortably absent. A second reason for that lack-of-progress feeling stemmed from not being able to keep up with all the reading and assignments for the semester. As I finished the last of my papers last night, I thought back to what remained unfinished, and one reading in particular stood out: Eduardo Bonilla-Silva’s Racism Without Racists.

So after submitting my last final paper, I pulled Bonilla-Silva back off the shelf and picked up where I’d left off. I had read all but the last two chapters, but it was in those last two chapters where things appeared to get most interesting. In this, the third edition of Racism Without Racists, Bonilla-Silva added a new chapter at the end addressing the “Obama Phenomenon.” I started reading and almost immediately I was taken back to what I thought made this book so interesting, engaging, and challenging to begin with: Bonilla-Silva’s outspoken criticism of a system that perpetuates racism and inequality. In general, I do not disagree with Bonilla-Silva’s message. But the style with which the message was delivered came as a bit of an uncomfortable shock.

In his detailed analysis of interviews with both white and minority students, Bonilla-Silva exposes the racism embedded in people’s language. For example, in an interview with a white girl named Jill who claimed, “One of my best friends is black” (p. 58), Bonilla-Silva asks her to go into more detail. Jill then describes her friend as “bright” but with “terrible GMAT scores,” and then says, “What he lacks in intellect he makes up for in…he works so hard and he’s always trying to improve himself.” In his analysis, Bonilla-Silva addresses the contradiction about intelligence and points out that Jill never mentions this friend by name. This example by itself might seem thin as evidence, but it is far from an isolated incident in the text. The dissection of racism in people’s speech happens on page after page. Sometimes it’s subtle, sometimes less so, and I remember feeling during my first reading that I was glad Bonilla-Silva wasn’t interviewing me, because he seemed to make everybody sound racist!

Now, reflecting exactly on that thought, I see how that thinking exposes how I largely missed Bonilla-Silva’s greater point (even though it’s the title of the book): the kind of racism we’re dealing with now is less about the individual and more about a system. Bonilla-Silva wasn’t after Jill to make her sound like a racist – at least not the kind of racist most people imagine when they hear that label. Bonilla-Silva was instead exposing how Jill, along with most of the other interviewees in the book, demonstrates the systems and structures of racism and how they exist in what we all say, do, and believe. In other words, it’s not about Jill. For the same reason, I shouldn’t have worried about Bonilla-Silva interviewing me, as the interview would have only helped me understand how my actions, behaviors, and attitudes are being affected by the subtle yet significant culture of racism that still exists in our society. And until we are forced to recognize it, there is very little we can or will do about it.

It’s also this same system that allowed much of the country to endorse President Obama, and that endorsement gives us a false sense of accomplishment that we’ve somehow reached a “post-racial” society. (We haven’t.) As an educator I wonder how we can have policies like NCLB that are so bold as to declare a school a failure when achievement gaps persist, yet our greater society and government don’t always extend that same failure judgment to the enormous gaps in achievement, income, wealth, health, etc. that we see in our society. Sure, the #Occupy protesters have their message, but it’s unfortunate that so few were shouting until the perils of inequity reached beyond minorities.

The system that Bonilla-Silva describes should not have been an “uncomfortable shock” to me. From where I now stand, I can see how other readings described much of the same system, yet somehow by using more academic or less forceful language I was led to think I understood when I didn’t. Perhaps the best example of this is Beverly Daniel Tatum’s book “Why Are All the Black Kids Sitting Together in the Cafeteria?” I remember thinking as I read it, “I really like Beverly Daniel Tatum because she’s making me feel comfortable about a difficult topic.” Where I feared an interview with Bonilla-Silva, I would have welcomed the opportunity to speak with Beverly Daniel Tatum.

But somehow, disguised by my initial affection for the authors, I didn’t immediately see how in many ways Bonilla-Silva and Tatum were largely describing the same system of racism. I’m glad I read Tatum first and then Bonilla-Silva, because now as I reflect I can see how Tatum’s message didn’t really sink in for me; if it had, I wouldn’t have been so challenged by Bonilla-Silva. The lesson for me is not that I need to keep reading more critical work (although that would certainly help), but that it’s going to take more effort to make myself feel uncomfortable about issues of culture, race, class, power, etc., before somebody else gets the chance to do it for me.

For me, the simple title to this post has a double meaning. First is the more obvious, that I’m finally taking some time to think about a class I experienced over the past semester. Second, and more importantly, is the idea that multicultural education has a reflective property like a mirror bouncing light around a corner. As an educator who had a relatively monocultural upbringing in the rural Midwest, and who apparently can still be surprised by the injustices in the world around me, I need to use what I’ve learned about multicultural education to shine some light not only around corners, but back on myself. There’s so much more for me to see, most of which is hidden by its largeness, not its smallness. As an educator this is what we do: we help students explore and understand the world around them, and our reward for doing so comes both in our students’ growth and our own.

Sorting Out the Summative: When Standards-Based Grading Meets the End of the Semester

Source: Wikipedia

Many teachers who choose to use standards-based grading eventually find themselves facing the reality of their school's grading policies and tradition: the expectation of final, summative grades that are reported as percentages and letters. So regardless of how hard you try to focus on quality feedback instead of grades all semester long (for good reason), there comes a time when, for reasons probably beyond your control, you have to turn levels and descriptions of student understanding into numbers. This is SBG's "Monday Morning Problem" that doesn't always get addressed in theory. But this week is finals week for my basic statistics students, so for me the time has come to convert standards-based formative grades into a summative grade, including calculating final exam grades. Here I'll try to describe the two steps I'll take to calculate my students' grades: (a) conversion of their formative scores into a summative score and (b) scoring and inclusion of the final exam into their semester grades.

Formative to Summative
Besides giving students a lot of written and verbal feedback about where they should try to improve, I've been using the simplest of measures to record their performance on class objectives: either students (a) "get it," (b) "sort of get it," or (c) "don't get it/haven't demonstrated it." You could think of these as "green light," "yellow light," and "red light," respectively. I've tried discerning more levels of understanding in a gradebook and it only seems to lead to confusion and indecision (both for me and students), so I'm sticking to three levels, as suggested in Her and Webb (2004). If I need more detail, I can always go back to the copies of the work students have submitted and the comments I've made.

The gradebook we have for class is pretty primitive and as far as I can tell it only accepts numbers, so I mark my three levels as either a 2, a 1, or a 0. It doesn't take much explaining for students to understand that a 1 shouldn't be viewed as "out of two" and therefore worth 50%. I do tell them, though, that in order to receive credit for the course they must average at least a 1 across all objectives. In other words, you can't pass the class without an average of at least some understanding of every objective.

Around here and in many other places, 70% seems to be the low end of passing grades. (We're not messing with Ds.) So if a student with all 1s should get at least a 70%, and a student with all 2s maxes out at 100, and we choose a linear function between the two, the "conversion formula" to percentages is simply:

percentage = 30 * objective score average + 40

If you feel a little dirty at this point because you know you just reduced all the various skills, knowledge, and abilities of your students into a single number, I say join the club. If you didn't feel that way I wouldn't have expected you to be using standards-based grading to begin with.
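The linear conversion above is easy to sketch in code. Here's a minimal Python version (the function name is my own; the post itself only gives the formula):

```python
def summative_percentage(objective_scores):
    """Convert 0/1/2 objective scores into a semester percentage.

    Linearly maps an average of 1 (all "sort of get it") to 70%
    and an average of 2 (all "get it") to 100%, per
    percentage = 30 * average + 40.
    """
    avg = sum(objective_scores) / len(objective_scores)
    return 30 * avg + 40

# A student with all 1s lands exactly on the passing line:
print(summative_percentage([1, 1, 1, 1]))  # 70.0
# Mixed performance interpolates linearly:
print(summative_percentage([2, 2, 1, 1]))  # 85.0
```

Note that the same line also assigns a student with all 0s a 40%, which is one reason the "average at least a 1" rule matters.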

A "No Surprises" Approach to Final Exam Grades
Designing a final exam is often tricky business. It can't possibly assess everything in the course, but we generally want it to include the major topics and themes for the class and be possible to complete in the time allowed. We also have to think about difficulty. Trust me, your students are!

Teachers want their finals to be challenging, but they don't want to have that sinking feeling as they grade the exams that maybe the test was too hard. For whatever reason, sometimes students perform poorly and averaging the final exam grade into their other grades will look like a disaster. But ask yourself: What am I more confident in, my careful judgments of students' ability as demonstrated over an entire semester, or a fleeting, one-time judgment of students' ability on a single assessment during the most stressful time of the year? If you're using standards-based grading, I already know how you'll answer that question. If not, consider this example: I have a student who I know can do stats. She's turned in good work. She's asked quality questions. We've had good discussions. But I also know she has seven final exams this week. I still think she'll do fine, but I'll understand if she's not at her best. And I need a grading system that reflects that understanding.

In order to free myself to still give challenging, yet reasonable, assessments, without risking any huge surprises when grades are calculated, I perform a little statistical magic that ensures that the distribution of final exam grades has the same center and spread as class grades before the final. I'm sure many of you try "curving" your exam scores some other way, such as letting the top score count as the total possible, or even having a pre-set distribution in mind of how many As, Bs, Cs, etc. you'll allow (which is not a good idea, generally, for reasons described by Krumboltz & Yeh, 1996). I prefer my method because it accounts for the distribution of grades, not just the top score, and the distribution is determined by the students, not arbitrarily by me. Allow me to demonstrate with a couple of examples.

Suppose before the final the average percentage grade is 85 and the standard deviation of those grades is 10. Then I grade my final exams and find that the average final exam grade is 60 with a standard deviation of 18. Ouch. But don't worry -- statistics will come to our rescue.

Provided you know a little basic descriptive statistics, the conversion is simple. For each student's final exam score, find out how many standard deviations above or below the mean they scored on the final (their final exam z-score), and match that with the same number of standard deviations above or below the mean they'd fall on the pre-final grade distribution (their pre-final z-score). Consider the following students and the class and exam statistics above:

  • Suppose Student A scores a 51 on the final exam. That's 0.5 standard deviations below the mean. (51 - 60 = -9, and -9/18 = -0.5.) So where is 0.5 standard deviations below the mean on the pre-final distribution? If that mean is 85 and the SD is 10, then 0.5 standard deviations below the mean is 80. So I record an 80 for that student instead of a 51.
  • Suppose Student B scores a 75 on the final exam. That's about 0.83 standard deviations above the mean. (75 - 60 = 15, and 15/18 ≈ 0.83.) So where is 0.83 standard deviations above the mean on the pre-final distribution? About 8.3 points above an 85 (0.83 × 10), so I record their exam grade as a 93.3.
  • Suppose Student C scores a 60 on the final exam. That's the same as the mean, so zero standard deviations above or below. That conversion is super-easy: their recorded final exam grade is simply the pre-final mean, an 85.
For an example of how to set up a spreadsheet to do this, see the shared spreadsheet; I recommend making a copy of it for yourself and seeing what happens as you change values.
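The same z-score matching can also be sketched in a few lines of Python (the function name is mine; it just applies the conversion described above):

```python
def curve_exam_score(raw, exam_mean, exam_sd, pre_mean, pre_sd):
    """Map a raw final exam score onto the pre-final grade distribution.

    The student's z-score is preserved: they land the same number of
    standard deviations from the pre-final mean as they were from the
    exam mean.
    """
    z = (raw - exam_mean) / exam_sd
    return pre_mean + z * pre_sd

# Using the example numbers from above (pre-final: 85 with SD 10,
# final exam: 60 with SD 18):
print(curve_exam_score(51, 60, 18, 85, 10))            # 80.0 (Student A)
print(round(curve_exam_score(75, 60, 18, 85, 10), 1))  # 93.3 (Student B)
print(curve_exam_score(60, 60, 18, 85, 10))            # 85.0 (Student C)
```

In practice you'd compute `exam_mean` and `exam_sd` from the whole class's raw scores (for example with Python's `statistics.mean` and `statistics.stdev`) before converting each student.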

This is not a perfect system (and observations about its imperfections are welcome in the comments), but it does take away the element of surprise if the final exam happens to be way too easy or too difficult, or if other circumstances prevent grades from working out the way you'd expect. Yes, this is a norm-referenced system instead of a criterion-referenced system, meaning that students' final exam grades are determined largely by how they compare to their classmates and the class average. The good news is this: both the teacher and the students have an incentive before the final to master as many objectives as possible, and that is criterion-referenced. A high pre-final average helps everyone get a high final exam average, and a small pre-final standard deviation minimizes variability in final exam scores.


Her, T., & Webb, D. C. (2004). Retracing a path to assessing for understanding. In T. A. Romberg (Ed.), Standards-based mathematics assessment in middle school: Rethinking classroom practice (pp. 200-220). New York, NY: Teachers College Press.

Krumboltz, J. D., & Yeh, C. J. (1996). Competitive grading sabotages good teaching. Phi Delta Kappan, 78(4), 324-326.

Designing a School

During my first fall of teaching in 2003, the school district I worked for passed a bond issue to build a new high school. The following spring the architects and facility committee (which I would later join) asked for staff input. So I ask you: What would your ideal school look like? What features would you prioritize? How would you maximize utility within a budget?

I was reminded of this activity by a recent post by Zac Chase. In that post, he discusses a class activity where he and other students plan the layout of a school in order to think about the environments that best support teaching and learning. In 2004, when I was faced with a similar task, I set to work sketching on paper. When sketching wasn't good enough, I went with graph paper and scale drawings. When graph paper wasn't good enough, I downloaded a free CAD program, taught myself how to use it, and set out to design an entire school. I was a bit obsessed. I had the following goals and guidelines to consider:
  1. The design needed to accommodate 750 students in classrooms and have "common spaces" (gym, locker room, cafeteria, library, etc.) to handle an expansion up to 1500 students.
  2. The school budget allowed for approximately 115,000 square feet.
  3. Hallways are an expensive use of little-used space. I wanted to minimize them.
  4. Departments would have common computer lab space and office space for teachers.
After a lot of editing and calculating, this is what I came up with on the first floor (open link in new tab and zoom for maximum detail):
First floor
And here was the plan for the second floor:
Second floor
I was (and still am) pretty proud of this creation. In fact, I still believe this school offers a better use of space than the school that was eventually built. However, with 7+ years of hindsight, I think this plan could have used the following improvements:
  1. Instead of shared "computer lab" space for each department, students would take their technology with them to the classroom.
  2. In place of the computer spaces, I'd provide more space for student group collaboration. I was thinking about the office for teacher collaboration space, but at the time I didn't think enough about student collaboration space.
  3. I didn't think enough about the cafeteria, and it shows. Dining areas should be more inviting than the rows-of-tables layout I designed.
One other lesson I learned after serving on two facility committees is this: Don't just ask teachers what they want in their classroom and school. Ask teachers what they imagine a teacher twenty years from now will want. Otherwise, it's too easy to give teachers the impression that the school is being custom-built for them, something that's not likely to happen. It might not even be desirable. Instead, ask them questions that reflect the school's long-term value to the community.

Those are important, but otherwise I think my floorplans still have something to offer. I don't know why I hadn't shared them until now, but maybe they'll be of some use to somebody looking for school building ideas. Just promise me that if you build this thing, invite me to come see it!

When Grover Went Viral

About a month ago I walked into my office and found this problem on the chalkboard next to my desk:
Do you think Ryan should have made option "C" zero percent? ;)
My officemate, Ryan Grover, had put it there to tease us with what I quickly believed was a paradox. Having dealt with paradoxes before, I knew the first rule of paradoxes: "Don't try to reason with paradoxes." Instead, I thought I'd post it to Google+ to see if some of my math-oriented followers could have some fun with it.

Ryan will be the first to admit that he is not the creator of this problem. He told me that he had seen something like it, had given it some thought, and then searched for similar problems. Ryan found variations of the problem on Reddit, got his wording just right, and wrote it on the chalkboard. He knew it was worth sharing, just as I did. But I had a different sharing mechanism in mind.

I posted on October 20th and nothing much happened the first few days. Then things really took off. You can get a sense for some of the progress thanks to Google+ Ripples. But what really got my attention was seeing Brian Brushwood share the picture on October 27th. I follow Brian because of his other work, not because I expect him to post interesting probability problems. Because the picture had made its way to Twitter, where Brian saw it, and then back to Google+, it was no longer connected to my original post. Still, Brian's post quickly got over 2000 reshares -- including one by Terence Tao. I argued to Ryan's (and my) advisor that "getting cited" by Terence Tao should satisfy Ryan's "publishable work" requirement of our PhD program, but I don't think he bought it.

The same day Brian posted the picture it hit Reddit (and again in the days following), getting over 2300 comments, probably 10 times more than in previous posts. I wonder if it was the placement of the problem on the chalkboard that added to its appeal, or if it was just the right question at the right time to engage people's interest. The next day the picture appeared again on Google+, this time in Ed Yong's stream, where it got over 1500 reshares and 300 comments. It was also quite a treat to see the picture show up at FlowingData. I'm a pretty big fan of Nathan Yau's blog and book, and in an email Nathan said the 788 comments were probably the longest thread of any post ever made on FlowingData.

There's no way to count precisely, but the picture was reshared publicly on Google+ (where an estimated two-thirds of posts are private) around 5000 times, and the comments on Google+, Reddit, and FlowingData also total several thousand. Some good things have come of this:

  1. It's been fascinating to watch people try to reason through the problems in the comments. Math teachers like to watch people who don't give up easily.
  2. I've picked up several hundred followers on Google+ from all over the world. Many of them have interests in mathematics and how it is taught and learned, and what they've shared with me is many times more valuable than even seeing the comments and shares from Brushwood, Tao, and Yau.
  3. It's bolstered my (and my colleagues') belief that if something is interesting, it should be shared. We're not in the business of keeping good ideas to ourselves.
As far as I can tell, maybe only two bad things have come from this:
  1. We've experienced a little bit of "sharer's guilt" because neither Ryan nor I deserve any credit for actually coming up with the problem. Someone who shared the problem before we did, such as in those previous Reddit posts, might be feeling justifiably peeved that they aren't getting the credit they're due. We're sorry.
  2. We've avoided erasing that portion of our chalkboard, even though we could use the space. Next time Ryan has a perplexing problem he'll have to write smaller. :)

RYSK: Alibali et al.'s A Longitudinal Examination of Middle School Students' Understanding of the Equal Sign and Equivalent Equations (2007)

This is the fifth in a series of posts describing "Research You Should Know" (RYSK).

Many math education researchers come from one of two camps: (a) math teachers who want to know more about the psychology of the student, or (b) psychologists who want to know more about how students learn math. When these groups of researchers work together, good things can happen.

The University of Wisconsin has long been known as one of the best universities anywhere for math education research. Historically this tradition has included names like Henry Van Engen, Tom Romberg, Tom Carpenter, and many others. More recently, a new generation of researchers have been making their mark. In this five-author article, you see two math educators (Eric Knuth and Ana Stephens) and three math-specializing psychologists (Martha Alibali, Shanta Hattikudur, and Nicole McNeil) teaming up to do a longitudinal study of middle school students' algebraic reasoning.

Previous research had indicated two key ideas: (a) a proper understanding of equals and equivalence is key to success in algebra, and (b) the equal sign and equivalence are misunderstood by students of all ages. While some of this previous research is very good in its own right, the longitudinal aspect of this study helps it stand out.

Commonly, and incorrectly, students hold an operational view of the equal sign. To those students, the "=" sign means "do something." When students encounter a problem like \(3+5=\) they think the equal sign is a prompt to "write the answer," which in this case is 8. Unfortunately, some of those same students will see a problem like \(3+5=x+2\) and still think \(x=8\). Not knowing what to do with the 2, they might also think \(x=10\) (because they're just adding all the numbers they see), or they think it's okay to write \(3+5=8+2=10\). Statements like this with multiple equal signs should look familiar to any math teacher who has watched students show their work for a multi-step problem, such as showing work for order of operations. This only makes sense to students who have an operational view of the equal sign, as to them it just means "the answer to this step is." But that's incorrect. Instead, we want students to understand the equal sign as a relational symbol, one that is neither prompting action nor implying a direction to that action. Without this, solving equations in algebra has very little meaning.

For this part of their study, Alibali et al. studied 81 middle school students (62% white, 25% African American, 7% Asian, 5% Hispanic) from 6th grade through 8th grade. The middle school used the Connected Mathematics curriculum and introduced solving linear equations in grade 7. The students were asked to explain what they thought the "=" sign meant, and to understand their use of that sign they were given an interesting set of tasks. For example:

     Is the value of n the same in the following two equations? Explain.

\( 2 \times n + 15 = 31 \)     and     \(2 \times n + 15 -9 = 31 - 9\)

Here the researchers apply what they call an "atypical transformation," and they look carefully at how students find n. Many students would solve by "doing the same thing to both sides" for both equations, a procedure they can follow whether they had a solid understanding of equals and equivalence or not. But by subtracting 9 from each side in the second equation -- something mathematically "legal" despite not being all that helpful in finding n -- you can more easily identify which students break with "standard" procedure and show an understanding of equivalence. Those students won't treat the second equation like a new problem and instead quickly see that whatever they found for n in the first equation must also be n in the second.
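As a quick numeric sanity check (mine, not part of the study), the "atypical transformation" really does leave the solution untouched; solving each task equation of the form a·n + b = c directly:

```python
def solve_linear(a, b, c):
    """Solve a*n + b = c for n."""
    return (c - b) / a

# Original task: 2*n + 15 = 31
n1 = solve_linear(2, 15, 31)
# Atypical transformation: subtract 9 from both sides,
# giving 2*n + 15 - 9 = 31 - 9, i.e. 2*n + 6 = 22
n2 = solve_linear(2, 15 - 9, 31 - 9)
print(n1, n2)  # 8.0 8.0 -- same n in both equations
```

A student with a relational view of equals can reach this conclusion without solving anything, which is exactly what the task is designed to reveal.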

Not surprisingly, Alibali et al. found that students' understanding of the equal sign got better over time. Also not surprisingly, students who have the correct, relational view of equals are more likely to see equivalence relations and solve equations correctly, and the earlier they understand it, the better. At the beginning of 6th grade, about 70% of students had an operational view and only 20% had a relational view. (10% of students held some other view that didn't fit in these two conceptions of equals.) By the end of 8th grade, that balance had almost flipped: only about 30% still held an operational view while 60% had a relational view. That's a lot of improvement, but that improvement took a long time (3 years) and still 40% of students didn't have a correct and meaningful understanding of the equals sign by the end of 8th grade. Also, students who showed a relational view of equals sometimes slipped back into an operational view. Almost a quarter of the students in the study used a less sophisticated strategy sometime after using a better one. Lastly, even when students consistently defined the equal sign as a relational symbol, they didn't always recognize equivalence in problems such as the one above. It's these types of caveats that make teaching equals and equivalence a tricky business.

So if you're a teacher with students having trouble with the equal sign, what can you do? More research needs to be done in this area, but one thing you can do is be more aware of your students' "compulsion to calculate" (a clever term used by Stacey & MacGregor, 1990, p. 151, as cited by Alibali et al., 2007, p. 245). Try giving students a task like the one above, ask them to evaluate the task for a minute or two without touching their pencils, and then find n. Afterwards, have students describe their strategies and solutions. Also, if you want to avoid the complications of using a variable, you can give students a number of statements and see if they can spot the ones that are equivalent. (Alibali et al. suggest statements like 9 + 5 = 14, 9 + 5 - 3 = 14 - 3, and 9 + 5 - 3 = 14 + 3.) Also, try putting the unknown to the left of the equal sign. If you ask a student to solve \( \underline{\hspace{0.25in}} = 3 + 4 \) and they tell you the problem is "backwards," then you know they're struggling with an operational view of equals. Giving those students more problems where the "answer" doesn't come "last" (to the right or at the bottom) will help them expand their understanding of what equals really means.


Alibali, M. W., Knuth, E. J., Hattikudur, S., McNeil, N. M., & Stephens, A. C. (2007). A longitudinal examination of middle school students’ understanding of the equal sign and equivalent equations. Mathematical Thinking and Learning, 9(3), 221-247.

RYSK: Freudenthal's Why To Teach Mathematics So As To Be Useful (1968)

This is the fourth in a series of posts describing "Research You Should Know" (RYSK). While the article is not actually the report of research findings, it is part of a foundation upon which a generation of mathematics education research has been based.

Starting in the late 1960s, Dutch mathematician Hans Freudenthal saw the trend of "new math" spreading from the U.S. to the world. He pushed back with a philosophy of mathematics education now known as Realistic Mathematics Education (RME). The following article by Freudenthal, Why To Teach Mathematics So As To Be Useful, provides early insight into the core principles of RME: mathematics as a human activity, mathematization from contexts, and mathematics for all students. This article is also the first article in the first-ever issue of the journal Educational Studies in Mathematics. Thankfully, instead of simply summarizing the article, I've been granted permission to reprint it here so you can read Freudenthal's words for yourself.



My first task at this moment is to welcome you who have come here from various countries to sacrifice one week of your holidays for the benefit of mathematical education all over the world. I trust this meeting will be as useful as according to the general theme of this conference mathematical education should be held to be. I trust we all will learn as much from each other's experiences and arguments as we like to do and often have done at such opportunities. With great satisfaction I remember the meeting of December 1964 at Utrecht and I hope the few among you who have participated in that conference will share my feelings of gratitude. But whenever I shall remember those pleasant days and evenings, and lively discussions, I will never forget the man whom I met first and last on that occasion, the liveliest among all of us, the much regretted Wittenberg, this fiery nature who died much too early as though he had burnt himself in his own fire. Though I admit there was none among us who shared his opinions, I am sure everybody was impressed by his honest search into the truth of our educational philosophy. To my mind, his definitive absence overclouds the bright sky of this day.

The present colloquium is an activity of the ICMI sponsored by the government of the Netherlands and by IMU. It is not the first in this new period of ICMI and in this year. In January we met in Lausanne with the physicists, in a meeting sponsored by UNESCO, which was attended by some among you. In my opinion the resolutions adopted at Lausanne are a mile-stone in the philosophy of mathematical education. If I substitute my wishes and hopes for my opinion, I would say they should be so. It is evident that the use of mathematics has been a key criterion in all arguments on mathematics at that meeting.

In this introductory address I feel I have to justify the general theme of the present conference rather than to tell you about techniques of teaching useful mathematics. This means that I will not speak about how to teach mathematics so as to be useful but about why we should teach mathematics so as to be useful, or rather about why we should teach mathematics so as to be more useful.

Of course this is a question of educational philosophy, and as such it will be answered in a different way according to which philosophy we adhere to. Yet educational philosophy is not an abstract system. It depends on the real educational system in which we live, and on our, positive or negative, attitude with respect to that system. Is the variety of national educational philosophies really a drawback to international talks on mathematical education or should I say that there is no better opportunity to test them than to have them bump against each other? Are not we too often and too readily inclined, when reading or hearing about the educational experiences in another country, under another educational system, to sigh: it is just a pity, but this does not apply to our situation? I would say whenever this happens, then something is wrong either in the one system or the other, or, most likely, in both.

It is generally admitted that there is a wide gap between the educational philosophy of the U.S.A. and the Socialist European countries on one hand and the continental Western European countries on the other hand, though this gap has been narrowing to a considerable extent. On the one side one has for long times pursued the ideal of one kind of education for all youth, on the other side one has always overstressed that part of the educational system which provides educational facilities for a small group of students selected more on social than on intellectual grounds. I have to admit, and I do it with shame and distress, that in the Western countries of continental Europe, if we speak about mathematical education, we more often than not, mean the gymnasiums and lycées, and tacitly forget about the more than 90% who do not attend this type of schools. I agree that a more balanced educational system can be as bad if its highest level is too low to do justice to the most gifted students. But instead of discussing the question which kind of justice is the least evil, I would rather try to do the most justice to all people and to the society they belong to.

I need not explain to you why mathematics can be useful though the fact itself is one of the most recent and most astonishing features of the history of civilization. It would be more difficult to tell how mathematics can be useful provided that we do not limit ourselves to counting up instances of the all-pervading influence of mathematics in our culture, but ask what happens in the individual if he applies mathematics or if he tries to. Much has been done to investigate the learning process, though it is a fact that most of this research has been rather laboratory than classroom-oriented. Very little, if anything, is known about how the individual manages to apply what he has learned, though such a knowledge would be the key to understanding why most people never succeed in putting their theoretical knowledge to practical use.

Since mathematics has proved indispensable for the understanding and the technological control not only of the physical world but also of the social structure, we can no longer keep silent about teaching mathematics so as to be useful. In educational philosophies of the past, mathematics often figures as the paragon of a disinterested science. No doubt it still is, but we can no longer afford to stress this point if this keeps our attention off the widespread use of mathematics and the fact that mathematics is needed not by a few people, but virtually by everybody.

Mathematics is distinguished from other teaching subjects by the fact that, even in its actual totality, it is a comparatively small body of knowledge, of such a generality that it applies to a richer variety of situations than any other teaching subject. Modern mathematics can be seen as an effort to reduce this body of knowledge even more and to enhance the flexibility of what remains to be taught. At the same time this fact about mathematics is the source of the principal dilemma in teaching mathematics so as to be useful. In an objective sense the most abstract mathematics is without a doubt the most flexible. In an objective sense, but not subjectively, since it is wasted on individuals who are not able to avail themselves of this flexibility. On the other hand, teaching applied mathematics is as bad, if it means mathematics in a specialized context, which does not account for the greatest virtue of mathematics, its flexibility.

Though it might look different, I am still busy with the question why mathematics has to be taught so as to be useful, after we had agreed that it is useful and that students are expected to use it. There are two extreme attitudes: to teach mathematics with no other relation to its use than the hope that students will be able to apply it whenever they need to. If anything, this hope has proved idle. The huge majority of students are not able to apply their mathematical classroom experiences, neither in the physics or chemistry school laboratory nor in the most trivial situations of daily life. The opposite attitude would be to teach useful mathematics. It has not been tried too often, and you understand that this is not what I mean when speaking about mathematics being taught to be useful. The disadvantage of useful mathematics is that it may prove useful as long as the context does not change, and not a bit longer, and this is just the contrary of what true mathematics should be. Indeed it is the marvellous power of mathematics to eliminate the context, and to put the remainder into a mathematical form in which it can be used time and again.

Between two extreme attitudes one may be inclined to try compromising. If this means teaching pure mathematics and afterwards to show how to apply it, I am afraid we are no better off. I think this is just the wrong order. I have always considered it a remarkable fact that people are able to apply simple arithmetic, but not quadratic equations or even linear functions. Do not object that arithmetic is so easy. It is not. Take such problems as:

   If I have got ten marbles and I give three away, how many are left?
   If I have got ten marbles, and John has three less, how many does he have?
   If there are ten students in the room and three are girls, how many are boys?
   If I am ten years old now, how old was I three years ago?
   If B is between A and C, B is at a distance of 7 miles from A, and C is at a distance of 10 miles from A, how far is B from C?

It is not so easy to learn that in all these and a hundred other situations the same arithmetical operation applies. It takes some time, but finally everybody succeeds in understanding it. Why? I daresay, because arithmetic starts in a concrete context and patiently returns to concrete contexts as often as needed. The counterexample is fractions. In its traditional teaching the concrete context is no more than a ceremony which is hurried through in a jiffy. If afterwards the abstract theory of fractions has to be applied, it comes too late, on too high a level, and is not connected to any previous experience on a level where fractions should have been introduced. What is the reason for this change of attitude of the teacher? Is the patience of the schoolmaster exhausted when fractions turn up? I believe the answer is rather that the schoolmaster himself does not know fractions in a concrete context, and that for this reason he is not able to teach them in a more responsible way than he is used to do.

I am afraid this answer applies to the greater part of our mathematics teaching. Even the fact that a teacher applies mathematics himself, does not necessarily imply that he knows how he is able to do so and to use such a knowledge in his teaching.

The problem, however, is still much more serious. In the past, and mostly even now, textbook writing has been dominated by quite other aims than by the goal of a mathematics that could be useful. Mathematics is a peculiar subject. Arithmetic and geometry have sprung from mathematizing part of reality. But soon, at least from the Greek antiquity onwards, mathematics itself has become the object of mathematizing. Arranging and rearranging the subject matter, turning definitions into theorems and theorems into definitions, looking for more general approaches from which all can be derived by specialization, unifying several theories into one -- this has been a most fruitful activity of the mathematician, and no doubt our students are entitled to enjoy these fruits. No doubt modern mathematics is both much more flexible and much simpler than the mathematics of fifty years ago. No doubt our students have to learn the most modern mathematics. Teachers are more and more prepared and more and more inclined to bridge the gap between school mathematics and grown-up mathematics which had become wider from year to year.

However, this is not the whole story. The problem is not what kind of mathematics, but how mathematics has to be taught. In its first principles mathematics means mathematizing reality, and for most of its users this is the final aspect of mathematics, too. For a few ones this activity extends to mathematizing mathematics itself. The result can be a paper, a treatise, a textbook. A systematic textbook is a thing of beauty, a joy for its author, who knows the secret of its architecture and who has the right to be proud of it. Look how such an author would justify his construction: Why have you defined addition on page 10 in such a circumstantial way? -- because this more general definition will prove useful on p. 110. Why have you proved this geometrical theorem in such an unnatural manner? -- because at this stage I restrict myself to affine notions which have to precede metric notions. Why do not you mention forces as an instance of vectors? -- because mechanics has to be based upon vector algebra and not the other way round.

Systematization is a great virtue of mathematics, and if possible, the student has to learn this virtue, too. But then I mean the activity of systematizing, not its result. Its result is a system, a beautiful closed system, closed, with no entrance and no exit. In its highest perfection it can even be handled by a machine. But for what can be performed by machines, we need no humans. What humans have to learn is not mathematics as a closed system, but rather as an activity, the process of mathematizing reality and if possible even that of mathematizing mathematics.

New mathematics has been met with criticism. People who apply mathematics often feel uneasy when observing that the mathematics they have been used to apply is replaced by something they judge less suited for applications. It is a fact that biologists, economists, sociologists are better prepared to apply modern mathematics than physicists who carry the burden of a longer tradition. In the universities the gap between the mathematics of mathematicians and that of physicists has become terrifying. It is a habit of physicists to treat any particular subject with that kind of mathematics which prevailed at the time when that subject turned up in the history of physics. For instance, though physicists know eigenvalues of symmetric matrices because Laplace introduced them in a physical context, they still deal with orthogonal matrices with such oddities as Eulerian angles, because Euler was not yet acquainted with eigenvalues.

It would be a disaster if this lag would become permanent, though I hope it will not. Some time ago I eavesdropped on a talk between a physics professor and his assistants, criticizing his course and particularly such a subject as Lagrange multipliers: this is not physics, one of them said, this is plain linear algebra.

Probably we will have to wait for the next generation to have physicists reconciled with modern mathematics teaching.

It is a pity that most of the criticism against modern mathematics is made with no knowledge about what modern mathematics really is. It is a pity, because there is ample reason for such criticism as long as mathematicians care so little about how people can use mathematics. We are not entitled to reproach physicists for identifying modern mathematics with a preposterous educational philosophy, since this identification is of our own making. I am convinced that, if we do not succeed in teaching mathematics so as to be useful, users of mathematics will decide that mathematics is too important a teaching matter to be taught by the mathematics teacher. Of course this would be the end of all mathematical education.

Mathematisch Instituut der
Rijksuniversiteit, Utrecht

Obligatory disclaimer: Reprinted/republished with kind permission from Springer Science+Business Media:

Freudenthal, H. (1968). Why to teach mathematics so as to be useful. Educational Studies in Mathematics, 1, 3-8. Original copyright © D. Reidel, Dordrecht-Holland.

Download a PDF of the original article

The Publication Paradox

This week is Open Access Week (follow #oaweek on Google+ and Twitter), and while I've shared a few links and talked to some of my officemates, I haven't taken (or had, really) time to expand on my thoughts more fully. But I take Open Access very seriously, and I know the status quo (researchers signing over copyright to journals who lock away the research behind paywalls) won't change unless more of us keep sharing openly to the widest audience possible. Because the antithesis of Open Access isn't copyright -- it is the unwillingness to share any ideas at all.

In the field of education, particularly in education policy, research is conducted and published in one of two ways: either by academics to submit to journals, or by think tanks and other groups who generally do non-university-based research. Academics will defend their system because their research is peer-reviewed, whereas much think tank research is not. In fact, in an effort to force a peer review process onto think tank research, the National Education Policy Center created the Think Tank Review Project, which includes reviews of think tank research and the annual Bunkum Awards. (Disclaimer: I know, work with, and take classes from various scholars at the NEPC.) If the academics are right, and their peer-reviewed research is superior, does that mean it is more influential? Hardly. According to this research by Holly Yettick (also affiliated with the NEPC), university-based education research is only cited about twice as often in major news outlets as research from think tanks, even though universities publish about 15 times more research (2009, emphasis mine). Yettick's conclusions to this report include a recommendation to education reporters, urging them to consider more sources because "Unlike think tank employees, university professors generally lack the incentives and resources to conduct public relations campaigns involving outreach to journalists" (p. 15). My question is this: How does copyright and traditional publishing affect this incentive structure, and how can open access change it?
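Putting Yettick's two ratios together makes the imbalance starker than either number alone: if universities produce roughly 15 times as many studies but earn only about twice the news citations, then a typical think tank study draws several times the media attention of a typical university study. A quick back-of-the-envelope sketch (the 15x and 2x figures are the approximate ratios from the report; the rest is just arithmetic):

```python
# Back-of-the-envelope comparison of per-study media visibility,
# using the approximate ratios Yettick (2009) reports.
university_output_ratio = 15.0    # universities publish ~15x as many studies
university_citation_ratio = 2.0   # ...but are cited only ~2x as often in the news

# News citations per study, relative to a think tank study (normalized to 1.0)
per_study_visibility = university_citation_ratio / university_output_ratio

print(f"A university study gets about {per_study_visibility:.2f}x "
      f"the media attention of a think tank study")
print(f"Equivalently, a think tank study is cited about "
      f"{1 / per_study_visibility:.1f}x as often per study")
```

In other words, on a per-study basis, a think tank report is roughly seven and a half times as visible in the news media as a university one.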

First, imagine you work at a think tank and you're proposing research. Even before writing a word, you probably have an audience in mind that you'd like to reach with your work. Once your research is approved, you go about the research process and publish a report. Because the think tank does the researching and the publishing, no transfer of rights is necessary -- the work was a work for hire and copyright belonged to the think tank from the very beginning. Now the think tank can set about promoting the results of the research to the media and other interested audiences. They have an incentive to promote because the research, the publication, and the promotion are all carried out by the think tank, an organized unit that includes you in its shared ownership of the work. This gives the think tank a collective interest in spreading their ideas.

Now imagine you're a researcher at a university. You too have an audience in mind that you'd like to reach, but when your research is finished you submit your report to a peer-reviewed journal. In order for the journal to publish (or sometimes, even to consider) your article, you must transfer to them your copyrights. The journal now owns the report, and this is where the incentive system starts to break down. The article might be read by your peers, and may help you receive tenure, but surely (I hope) your peers and tenure committee don't comprise the true scope of your target audience. If you, the researcher, are still intent on making sure your work reaches the intended audience, how effectively can you promote something you no longer own? Most efforts to share your report will violate the publisher's copyright. You could create derivative work, in the form of conference presentations, blog postings, or articles for magazines, but this actually requires extra effort to avoid a copyright violation, impedes future progress on other research, and often does not count towards tenure.

Instead of self-promotion, can you, a researcher, count on a journal to promote your work? Why would they? Do they know the scope of the audience you would like to reach? What incentives does the journal have to promote work they did not create? The journal wants subscribers, to be sure, but because they have no rights to your future research (or that of any scholar), their main incentive is to preserve a system that positions their journal as one of the few credible outlets for research. For example, the American Educational Research Association has 25,000 members and publishes six peer-reviewed journals. If you're an education researcher, you probably belong to AERA and you respect and read the scholarship in their journals. But in Holly Yettick's dissertation research, which has involved searching through "nearly forty thousand articles in hundreds of publications" (2011, par. 14), she has yet to see a single AERA-published article mentioned anywhere. So while you might hear Brian Williams start a story on the NBC Nightly News with the phrase, "A new study published in the journal Science...," you won't hear an equivalent statement mentioning an AERA journal, despite education getting plenty of attention from NBC.

Think tanks have an advantage because the shared ownership of the creation and publication of research creates a common incentive for promotion. Even if the research is lower quality, the spread of the research to a wide audience gives the research power and influence. The traditional system of university-based researchers transferring rights to publishers in exchange for publication might produce higher-quality work, but leaves us with a publication paradox: how do creators promote something they don't own, and how do owners promote something they did not create?

I see two options for improving the incentives to promote academic research: (a) publishers should own creation, or (b) creators should maintain ownership (or at least rights to open distribution). Option (a) essentially turns a publisher into a think tank, and would not fit with academia's culture of academic freedom and independence. Some universities host their own journals, but they do not do so for the purpose of sponsoring and publishing their own work. Furthermore, most university researchers don't want their work to be seen as "work for hire." Option (b), which is not without its challenges, is the better option, and the growing Open Access movement is making it a more viable option every day. But for it to be successful, researchers are going to have to support change -- not for selfish reasons, and not out of spite for publishers, but to ensure the best research is freely available to the audience for which it was intended.


Yettick, H. (2009). The research that reaches the public: Who produces the educational research mentioned in the news media? (p. 37). Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved from

Yettick, H. (2011, May). Media, think tanks, and educational research. Academe Online. Washington, D.C. Retrieved from

A Quick-and-Dirty Guide to Fighting the Math Wars

I just posted this as a reply to a post by David Wees on Google+, but I thought it might be useful to some if it had some permanence here.

I've been in and out of "Math Wars" debates for 10+ years, and I find it's helpful to examine the issue at a more granular level. Here's a quick list of questions I jotted down:

What is your definition of mathematics? (Someone who answers, "It's a subject you learn in school" may have very different views from someone who answers, "It's a human activity we undertake to solve problems relating to number and shape.")

What is your philosophy of mathematics? (A Hardyist and a Mathematical Maoist have very different views, as do a Platonist and a Formalist. And for all the consistency in mathematics, this is not something with which we as individuals are necessarily consistent.)

What is our goal for students learning mathematics? (Is it to prepare them for work? For more school? To gain an appreciation of mathematics? For mental exercise?)

How should we assess mathematics? (Often when we claim that students do or do not perform well in mathematics, we are basing those claims on an assessment that may not embrace a balanced view of the issues above. Or, failing that, we make those claims without regard to the biases of the assessment.)

What learning theories do we use, and how do we use them? (A difficulty with learning theories is that in most all cases we can design curriculum and pedagogy around them that show they work -- at least to a degree. The workings of the human brain aren't easy to study, explain, or leverage in a classroom.)

How do we perceive "failure" or "success" of practices of the past? (I fear sometimes we stereotype certain historical movements, such as "New Math" and the "Back to Basics" movement, and we falsely assume that those movements were implemented in every classroom with high fidelity. We also sometimes forget that as time has passed, we are trying to teach higher and higher levels of mathematics to more and more students.)

How do we avoid false dichotomies? (False dichotomies were addressed in that article and Zwaagstra was wise to try to avoid them. But it's such an *easy* trap to fall into! [I've probably done it here without realizing it.] For example, he cited a paper by Alfieri, et al. (2011) that claimed through meta-analysis that "unassisted discovery does not benefit learners." But why would a well-trained constructivist teacher believe discovery should be unassisted? That's the same as assuming that a traditional teacher only has students listen to lectures and work problems in isolation. No teacher or student thrives exclusively on either. Interestingly, Zwaagstra in the next sentence says learners should be "scaffolded," an idea developed by Jerome Bruner in support of learning in a social constructivist environment.)

What skills, abilities, and philosophies do we believe teachers need to be successful? (I'm not sure we fully comprehend the effects on the received curriculum when it's taught by a teacher with skills, abilities, and philosophies that run counter to those supported by the curriculum. In such cases it's easy to misplace blame for poor outcomes.)

I'm sure there are more that I could add, but I strongly recommend that anyone who is serious about this debate take on these issues one by one. Only if there is some agreement, or at least some sympathy and understanding, on these issues does it become truly productive to talk about "what works."

You can press "Enter," but think twice before pressing "="

I just had an epiphany tonight while reading an article by Alibali et al. (2007) about students' understanding of the equal sign. While some students see it properly as a relational symbol, the most common misunderstanding is that equals is operational -- a sign that indicates "get the answer" or "add them up." It is this operational conception that leads some students to believe x = 10 in a problem like 5 + 5 = x + 3. (Some students also incorrectly believe x = 13, figuring the three has to be added with the two fives somehow.)
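The three readings of 5 + 5 = x + 3 can be made concrete with a toy sketch. The function names below are mine, not Alibali et al.'s; each one encodes one way a student might interpret the equal sign:

```python
# Toy model of three ways students interpret 5 + 5 = x + 3,
# following the misconceptions described by Alibali et al. (2007).

def relational(a, b, c):
    """Correct reading: both sides must have the same value, so x = (a + b) - c."""
    return (a + b) - c

def operational_answer(a, b, c):
    """'=' means 'get the answer': x simply holds the result of a + b."""
    return a + b

def operational_add_all(a, b, c):
    """'=' means 'add them up': x is the sum of every number in sight."""
    return a + b + c

print(relational(5, 5, 3))          # 7  -- the relational (correct) answer
print(operational_answer(5, 5, 3))  # 10 -- the "get the answer" misconception
print(operational_add_all(5, 5, 3)) # 13 -- the "add them all" misconception
```

Only the relational reading treats the equation as a statement about two quantities being equal; the other two treat "=" as a command to compute.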

So here's my surprise: I had never considered that students might be using a tool every day that is reinforcing that operational conception -- their calculator. Go ahead and search Google Images for calculators. Doesn't every one use an equals button to perform the "get the answer" function? Should that button be labeled with something else? Some say "Enter" but still have an "=" sign on the button.

This is what's fun about being a researcher -- I suddenly want to do an experiment with two sets of classrooms, one that gets traditional calculators and one that gets modified calculators without "=" signs for the "Enter" button. Let them go about their business for a year without any other attention paid to the issue, then measure students' understandings of the equal sign at the end of the year and see if the treatment group has better understanding than the control. I know it sounds trivial, but it's often in these small steps where we make new knowledge.
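For what it's worth, the analysis for such a two-group design could be as simple as comparing end-of-year scores with a t statistic. Everything in this sketch is invented for illustration -- the scores are simulated and the group difference is an assumption, not a prediction:

```python
import math
import random
import statistics

# Hypothetical end-of-year scores on an equal-sign understanding measure.
# The group means and spread are made up purely to illustrate the analysis.
random.seed(11)
control = [random.gauss(5.0, 2.0) for _ in range(30)]    # standard calculators
treatment = [random.gauss(5.8, 2.0) for _ in range(30)]  # no "=" on the Enter key

diff = statistics.mean(treatment) - statistics.mean(control)

# Welch's t statistic: mean difference divided by its estimated standard error
se = math.sqrt(statistics.variance(control) / len(control)
               + statistics.variance(treatment) / len(treatment))
t = diff / se

print(f"Mean difference (treatment - control): {diff:.2f}, t = {t:.2f}")
```

A real study would of course need random assignment at the classroom level, a validated measure, and a test that accounts for students being nested within classrooms, but the core comparison is this simple.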

Alibali, M. W., Knuth, E. J., Hattikudur, S., McNeil, N. M., & Stephens, A. C. (2007). A longitudinal examination of middle school students’ understanding of the equal sign and equivalent equations. Mathematical Thinking and Learning, 9(3), 221-247.

3rd International Realistic Mathematics Education Conference (#RME11)

Starting tomorrow I'll be attending the 3rd International Realistic Mathematics Education Conference (#RME11), hosted by the Freudenthal Institute USA (FIUS) here at the University of Colorado at Boulder. The three-day conference features four keynotes, three plenaries, and only 18 breakout sessions, one of which I have the privilege of leading. I attended the previous RME Conference in 2009 before I really had a chance to become familiar with the theory and those who develop and promote it. RME is a theory of mathematics education worth knowing, but for this post I'd rather focus on some of the people who will be presenting. They're well-known in the field of math education research, even if they might not be in the math education blogosphere. I'm hoping this post helps change that.

David C. Webb is the Executive Director for the Freudenthal Institute USA and an assistant professor of mathematics education at the University of Colorado at Boulder. (He's also my advisor.) His involvement with the Freudenthal Institute goes back to his graduate school days at the University of Wisconsin, where he was advised by Tom Romberg and worked on the Mathematics in Context project, an NSF-funded curriculum that combined the goals of the NCTM Standards with the philosophies and design theory of RME. Following Romberg's retirement and David's move to CU, FIUS came to Boulder in 2005. For RME11, David will lead Friday's plenary titled, "Informed Classroom Practice: Progress and Challenges" and will co-lead Sunday's closing plenary titled, "Design, Research, and Practice: Building a Community of Designers and Practitioners."

Henk van der Kooij (pronounced "koy") is a senior staff member at the Freudenthal Institute, University of Utrecht, the Netherlands, where he conducts research and trains mathematics teachers. I had the pleasure last summer of taking a class co-taught by David and Henk, and even after ten days of going 8am to 3pm, it was not unusual to catch Henk sitting to the side of the room, molding a not-so-great mathematical task into a much better one. For this year's conference, Henk will co-lead the closing plenary with David Webb, participate in a Q & A Friday afternoon with Koeno Gravemeijer and Mieke Abels, and conduct one session called, "What Mathematics is Important for (Future) Work?"

Koeno Gravemeijer gets the honor of the opening keynote at this year's conference, titled, "Helping Students Construct More Formal Mathematics." I've read a number of his articles and book chapters (see here for a sample), and seen many more referenced, so I'm quite excited to see and hear him in person. I'm not sure what areas of math ed research Gravemeijer hasn't tackled, from design research to statistics education, and the list of articles returned in Google Scholar makes me want to just stop what I'm doing and read for about a month.

Doug Clements will deliver Saturday morning's keynote, "Learning Trajectories -- The Core of Standards, Teaching, and Learning." My introduction to Doug Clements came last spring when my advisor asked me to read Clements's chapter in the Second Handbook of Research on Mathematics Teaching and Learning. The chapter, "Early Childhood Mathematics Learning," written with Julie Sarama, is perhaps the most thorough, dense, yet well-organized and enlightening (in a near-overwhelming kind of way) reading I've done yet as a graduate student. Some suggest we know (or we're close to knowing) all there is to know about early childhood mathematics, so summarizing that knowledge is no easy task. If you ever come across this chapter, take the advice my advisor gave me: "Take your time."

There are so many more excellent people presenting at this conference. Mieke Abels. Debra Johanning. Meg Meyer. The point of this post wasn't so much to drop names, or to think you'll be star-struck by this lineup (in a nerdy math ed researcher way), but to let you put a few names and faces together of people who share a common interest -- they can't stop thinking about how we can better teach and learn mathematics. And if you can think of them that way, then the walls of the ivory tower seem to crumble.

I plan on blogging, tweeting (with hashtag #RME11), and posting to Google+ throughout the weekend, although I still have to find enough spare moments to keep up with my other classwork due next week. (And finish putting together my own presentation!) So far I don't know of any other bloggers or members of the math ed Twitter community who will be attending, but be sure to make your presence felt if you're lucky enough to attend the conference.

What Makes a Student Public? An Alternative Outcome for the Douglas County Voucher Program

Although the ruling came in about two weeks ago, lately the Douglas County voucher program hasn't been far from my mind. I credit George Will's column in Friday's Washington Post for making me rethink the case and its outcome, and finally motivating me to organize some loose thoughts that have been floating around in my head.

If you aren't familiar with the case, it basically boils down to this: The Douglas County School Board believed so strongly in school choice that it voted to give 500 "choice scholarships" (vouchers) to its own students to attend area private schools. The scholarships are worth $4,575, or 75% of the district's per pupil revenue. (The district is keeping the remaining 25%.) Several groups representing the interests of taxpayers and those worried about public funding of private religious schools filed lawsuits, and on August 12th Judge Michael Martinez ruled in their favor, saying the plan "violates both financial and religious provisions" of the Colorado Constitution. Some of the consequences of this ruling are unclear, as many students have already accepted part of their scholarships and are enrolled at private schools.
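The dollar figures in the plan follow directly from that 75% split, and they're worth working out because they matter to the argument later. A quick check (only the $4,575 voucher, the 75% share, and the 500-student cap come from the reporting; the rest is arithmetic):

```python
# The voucher arithmetic behind the Douglas County plan: the $4,575
# "choice scholarship" is 75% of the district's per pupil revenue.
voucher = 4575          # dollars per scholarship student
voucher_share = 0.75    # fraction of per pupil revenue passed along
students = 500          # number of scholarships awarded

per_pupil_revenue = voucher / voucher_share          # full per pupil amount
district_keeps = per_pupil_revenue - voucher         # retained per voucher

print(f"Per pupil revenue: ${per_pupil_revenue:,.0f}")            # $6,100
print(f"Retained by district per voucher: ${district_keeps:,.0f}")  # $1,525
print(f"Retained across all {students} vouchers: "
      f"${students * district_keeps:,.0f}")                       # $762,500
```

So the district stands to retain roughly three-quarters of a million dollars in state funding for students it is no longer directly educating.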

I was a bit surprised by Judge Martinez's ruling. Not because it came down on the side of the plaintiffs, but because it used the public funding of religious schools as a primary reason. In reaching this decision, Martinez seems at odds with Zelman v. Simmons-Harris, a 2002 case where the Supreme Court upheld an Ohio voucher program primarily because the vouchers went to parents and not directly to religious schools. Although I don't know many of the details surrounding either Zelman v. Simmons-Harris or the DougCo case, George Will's assertion that the two are "legally indistinguishable" seems to have some merit. But that got me thinking -- what if Martinez had found different reasoning for his decision, one not relying on religion at all?

I think we tend to focus the debate on public vs. private schools. Instead, let's focus on students. Previous Supreme Court decisions have upheld students' rights to receive a free appropriate public education, to attend private schools, and to receive vouchers. Students who attend public schools are public school students. Students who attend private schools (without vouchers) are private school students. But what are students who use vouchers to attend private schools? Public or private? Can we have something in-between? If so, what rights do those students have?

In Douglas County, students who receive the voucher are still required to take the CSAP, Colorado's annual standardized test. Participating private schools are required to provide information to the district about their attendance and the qualifications of their teachers, as well as be willing to waive requirements that participating students attend religious services. It's clear that these provisions are included to meet various requirements of federal education law, namely No Child Left Behind's testing and "highly qualified educator" requirements. These are requirements of public schools and public school students. And let's not forget that Douglas County is keeping 25% of each student's share of state funding. Could it do this if it didn't claim the students were, at least in some way, students of the Douglas County School District?

Instead of essentially upholding the Establishment Clause, what if Judge Martinez had declared the DougCo "scholarship" students to be public students and decided that voucher students had the right to simultaneously receive both a free and a private education? In essence, what if he had told Douglas County that the vouchers would be legal so long as they covered the entire cost of each student's education at their chosen private school? If the court decided that (a) by benefiting from public monies, the students were public school students (and there is no murky in-between), and (b) public school students have the right to a free education, then (ironically!) Douglas County would be facing a difficult choice about whether or not a voucher program was in its best interest. As proponents of school choice, district leaders would find it awkward to back away because they couldn't afford the vouchers, although the high cost is surely what keeps many families away from private schools, voucher or no voucher.

At this point I realize that my knowledge of the law is rather limited and this issue was probably dealt with a long time ago. Still, I find it an interesting perspective and it makes me want to hunt down Kevin Welner (who knows a thing or two about vouchers) in the hallways next week to ask him about it. If any of you have any knowledge or thoughts you'd like to share, I'd love to hear it in the comments below.

Modeling Dimensional Analysis

I generally ask myself two questions when I examine the design of a mathematical task:
  1. What is the context?
  2. How can we model the mathematics?
Mathematical concepts with tasks for which these two questions can be answered easily tend to be easier to learn, while teaching and learning generally becomes more difficult when one or both of those questions can't be answered. For dimensional analysis (sometimes called the unit factor method or the factor-label method), the first question is easy to answer. It doesn't take much of an imagination to design a measurement conversion task that is set in a real-world context. A model, however -- whether visual, mental, or a concrete manipulative -- is generally absent. Typical dimensional analysis problems look like this:

Q: What is 60 miles per hour in meters per second?

A: \( \frac{60 \mbox{mi}}{1 \mbox{hr}} \times \frac{5280 \mbox{ft}}{1 \mbox{mi}} \times \frac{12 \mbox{in}}{1 \mbox{ft}} \times \frac{2.54 \mbox{cm}}{1 \mbox{in}} \times \frac{1 \mbox{m}}{100 \mbox{cm}} \times \frac{1 \mbox{hr}}{60 \mbox{min}} \times \frac{1 \mbox{min}}{60 \mbox{sec}} = \frac{9656064 \mbox{m}}{360000 \mbox{sec}} = \frac{26.8224 \mbox{m}}{\mbox{sec}} \)

For those who successfully learn dimensional analysis this way, there's a certain beauty to how the units drive the problem and how the conversion factors are nothing more than cleverly written values of one, the multiplicative identity. Unfortunately, many students struggle with this method. Some are intimidated by the fractions, some can't get the labels in the right place, and some just can't get the problem started.
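For readers who think better in code, the chain of "cleverly written ones" above can be sketched as straight multiplication. A minimal sketch, using the same factors as the worked problem (the variable names are my own):

```python
# Each conversion factor is a ratio equal to 1, written so that units cancel.
# The chain mirrors the worked example: mi -> ft -> in -> cm -> m, hr -> min -> sec.
factors = [
    5280,      # ft per mi
    12,        # in per ft
    2.54,      # cm per in
    1 / 100,   # m per cm
    1 / 60,    # hr per min
    1 / 60,    # min per sec
]

speed = 60.0  # mi per hr
for f in factors:
    speed *= f

print(round(speed, 4))  # 26.8224 (m per sec)
```

Note that the code can't tell you *which* factors to use or in what order; the units drive that, which is exactly the understanding the traditional method demands up front.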

What we need is a model. Let's start with the most basic of unit conversion models, a ruler with both inches and centimeters:

(Yes, I'm still using the same ruler I got as a 7th grader in a regional MathCounts competition.)
With only simple visual inspection, students should be able to use a ruler to estimate conversions between inches and centimeters. This is an informal model, one students can literally get their hands on. We can assist the learning by making the models progressively more formal. Here we model a trivial conversion from one inch to centimeters with a double number line:
(Yes, you still have to know your conversion factors!)
Such a simple example looks almost too easy to be useful, but we can add number lines for more complex conversions. We can even abstract the model further and go beyond conversions of distance. Suppose we wanted to convert 3 gallons to liters. I could model that conversion with number lines this way:
(I could have used any number of transition units, but I knew 1 quart was roughly 946 milliliters.)
Filling in the question marks from top to bottom, I'll see that 3 gallons, 12 quarts, 11,352 milliliters, and 11.352 liters are all the same volume. It's easy to see they're the same because on each number line those values are the same distance from zero. Because we're only converting one kind of unit (volume), we only need one dimension.
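The stacked number lines can also be read as successive scalings of the same quantity. Here's a minimal sketch of the gallons-to-liters chain above, using the same rough 946 mL per quart figure from the example:

```python
# Each number line rescales the same volume into a new unit.
gallons = 3
quarts = gallons * 4          # 4 quarts per gallon
milliliters = quarts * 946    # roughly 946 mL per quart
liters = milliliters / 1000   # 1000 mL per liter

print(gallons, quarts, milliliters, liters)  # 3 12 11352 11.352
```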

In our initial example we were converting 60 miles per hour to meters per second. That's two kinds of units, distance and time, so our model needs two dimensions. Furthermore, it can help to think of 60 miles per hour as a line, not just a point. After all, we often travel at a speed of 60 miles per hour without actually traveling a distance of 60 miles in exactly one hour.
Can you guess where our double (or however many are necessary) number lines will go in this model? The following video will demonstrate what I would call the graphing model or two dimensional model for performing conversions.
With the work shown in the video, we haven't just done one conversion. In fact, we're prepared to write 60 miles per hour 15 different ways, not that we'll ever be asked to do that. If we needed 60 miles per hour in centimeters per minute or feet per second, all the work is done. Just choose the appropriate quantity from the vertical and divide by the appropriate quantity from the horizontal. Of course, if we're in a hurry, we won't find all those intermediate figures and instead just proceed from miles to meters and hours to seconds as quickly as possible. Will that be quicker than the traditional method shown above? Probably not, but the purpose of using a model is understanding, not speed. Once the understanding is established, students can move on to a formal method or use technology when appropriate.
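To see where those 15 ways come from: each pairing of a distance unit with a time unit names the same speed. A sketch of the two-dimensional model, with the unit lists being my own encoding of the conversion chain used earlier:

```python
# One hour of travel at 60 mi/hr, with the distance expressed in each unit
# along the conversion chain.
miles = 60.0
distance = {
    "mi": miles,
    "ft": miles * 5280,
    "in": miles * 5280 * 12,
    "cm": miles * 5280 * 12 * 2.54,
    "m":  miles * 5280 * 12 * 2.54 / 100,
}
duration = {"hr": 1.0, "min": 60.0, "sec": 3600.0}  # one hour, in each unit

# Every (distance unit, time unit) pair is another name for the same speed:
# choose from the vertical, divide by the horizontal.
rates = {(d, t): distance[d] / duration[t] for d in distance for t in duration}

print(len(rates))                     # 15
print(round(rates[("m", "sec")], 4))  # 26.8224
```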

RYSK: Butler's Effects on Intrinsic Motivation and Performance (1986) and Task-Involving and Ego-Involving Properties of Evaluation (1987)

This is the third in a series of posts describing "Research You Should Know" (RYSK).

As teachers, we care not only about what students learn, but why students learn. In a perfect world, we would all agree on what's important to learn and do and be self-motivated to learn and do those things. But our world isn't perfect, and students are motivated to learn and do things for many reasons. Understanding those reasons is important if we want students to be properly motivated and to perform well with the right attitude.

Ruth Butler earned her Ph.D. in developmental psychology from the Hebrew University of Jerusalem in 1982 and was a relatively new professor there when she teamed with veteran educational psychologist Mordecai Nisan, whose career includes time spent at the University of Chicago, Harvard University, The Max Planck Institute for Human Development, and Oxford University. Together, they sought to build upon studies that compared extrinsic vs. intrinsic motivation and positive vs. negative feedback, looking specifically at how different feedback conditions -- ones that can be manipulated by teachers -- affect students' intrinsic motivation.

For their 1986 paper, Effects of No Feedback, Task-Related Comments, and Grades on Intrinsic Motivation and Performance, Butler and Nisan expected that students who received feedback in the form of simple positive and negative comments (without elements of praise or grading/ranking) would remain motivated, while students who received grades or no feedback would generally become less motivated. To test this hypothesis, Butler and Nisan randomly assigned 261 sixth grade students to one of three groups. They gave the students two types of tasks: Task A was a quantitative "speed" task where students created words from the letters of a longer word, while Task B was a qualitative "power" task that encouraged problem solving and divergent thinking.

Butler and Nisan conducted three sessions with the groups:
  • Session 1: Students performed the tasks.
  • Session 2: Two days after Session 1 the tasks were returned.
    • Students in the first group got comments in the form of simple phrases such as, "Your answers were correct, but you did not write many answers," or "You wrote many answers, but not all were correct."
    • Students in the second group got numerical grades that were computed to reflect a normal distribution of scores from 30 to 100.
    • Students in the third group got their work returned with no feedback.
    After students reviewed their previous work, they were given new tasks and told to expect the same type of feedback when they returned for Session 3.
  • Session 3: Two hours after Session 2 students again reviewed their work and feedback (except for the third group, who got no feedback) from Session 2 and then got a third set of tasks. Students were asked to complete the tasks and were told that they would not get them back. The session ended with a survey of students' attitudes towards the tasks.
When Butler and Nisan compared the students' average performance on the tasks in Session 1, all three groups scored approximately the same. That changed in Session 3. On Task A, students receiving comments and grades scored about the same in Session 3 (with an edge to the comments group for the creation of long words), but students receiving no feedback did far worse. For Task B, students receiving comments did significantly better than students who received grades or no feedback, who performed about the same. The only students doing well in Session 3 -- in fact, the only students consistently scoring higher, on average, in Session 3 than in Session 1 -- were the students who received comments.

The survey also showed attitudinal benefits for the comments group, who indicated they found the tasks more interesting and were most willing to do more tasks. Furthermore, 70.5% of students who received comments attributed their effort to their interest in the tasks, compared to only 34.4% of those graded and 43.4% of those receiving no feedback. Only 9% of students receiving comments said their effort was due to a desire to avoid poor achievement, compared to 26.7% of students receiving grades and 9.6% of the no feedback group. Lastly, 86.3% of students receiving comments wanted to keep receiving comments, while only 21% of the graded group wanted to keep receiving grades. The vast majority of graded students, 78.9%, wanted comments. The no feedback group was roughly split 50/50 on wanting comments or grades. None wanted to keep receiving no feedback.

Butler modified this study for her 1987 paper Task-Involving and Ego-Involving Properties of Evaluation: Effects of Different Feedback Conditions on Motivational Perceptions, Interest, and Performance. In it, Butler adapted a theory of task motivation used by Nicholls (1979, 1983, as cited in Butler, 1987):
  • Task involvement: Activities are inherently satisfying and individuals are concerned with developing mastery in relation to the task or prior performance.
  • Ego involvement: Attention is focused on ability compared to the performance of others.
  • Extrinsic motivation: Activities are undertaken as a means to some other end, and the focus is that goal, not mastery or ability.
Butler believed comments would promote task involvement, while grades would promote ego involvement. While both of these can be seen as intrinsic motivation, a third type of feedback needed to be considered: praise. Previous research on praise had gotten mixed results, possibly because researchers hadn't considered whether the praise was task-involving or ego-involving. Butler's study would include ego-involving praise using comments designed to focus a student's attention on their self-worth and not on the task. Therefore, Butler hypothesized that praise and grades would generate similar results, results less desirable than task-involved comments.

The study was similar to the 1986 study, with 200 fifth and sixth graders split into four groups (comments, grades, praise, and no feedback) with subgroups in each for high- and low-achieving students. Tasks were administered in three sessions, with no feedback given after the third session. The tasks this time were divergent thinking tasks, used as Task B in the 1986 study. Praise would come in the form of a single phrase: "Very good." An attitude survey was given after Session 3.

As Butler expected, comments promoted task-involved attitudes while grades and praise promoted ego-involved attitudes. Students' interest in the tasks after Session 3 was higher for the comments group than for the grades, praise, and no feedback groups combined. Students who received praise showed more interest than those who received grades. As for performance, the comments group easily performed the best in Session 3, with both high and low groups improving their scores over Session 1, while all other groups performed about the same or worse compared to their Session 1 performance.

So what does this mean?

As a teacher who struggled with assessment and grading, it was Butler's work that most inspired me to start this RYSK blog series. Despite these results being 25 years old, there's not much evidence that Butler's findings have had a serious impact on the practice of most teachers. I suspect that few teachers know about Butler's work -- I certainly didn't. I was wrapped up in the scores and grades game, not fully aware of the impact those scores were having on my students. I knew it wasn't working, but I didn't have this kind of theoretical knowledge to support a significant change in my practice.

I'm not suggesting that we should suddenly demand a grade-free world. That's just not a realistic thing to expect given where we are now. What I would like to suggest is that teachers become more aware of how the feedback they give affects student motivation, and be careful to focus on task-involved comments whenever possible. Because students aren't likely to get this kind of feedback from standardized tests or computer-based learning systems (e.g., Khan Academy), it takes a teacher's touch to carefully craft the kind of feedback a student needs to sustain their motivation.


Butler, R., & Nisan, M. (1986). Effects of no feedback, task-related comments, and grades on intrinsic motivation and performance. Journal of Educational Psychology, 78(3), 210-216. doi:10.1037/0022-0663.78.3.210

Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. Journal of Educational Psychology, 79(4), 474-482.

RYSK: Erlwanger's Benny's Conception of Rules and Answers in IPI Mathematics (1973)

This is the second in a series of posts describing "Research You Should Know" (RYSK).

In 1973, Stanley Erlwanger was a doctoral student at the University of Illinois at Urbana studying under Robert Davis (who taught many of us math as an advisor for Sesame Street) and Jack Easley when he published his landmark "Benny" article in Davis's new Journal of Children's Mathematical Behavior. (Now simply the Journal of Mathematical Behavior.) This and other Erlwanger articles became known as disaster studies (Speiser & Walter, 2004, p. 33) because they painfully reveal learning gone wrong, and they continue to impact the way we think about learning math and how we do research in mathematics education.

During the back-to-basics movement of the 1970s there was a push for programs that supported individualized instruction. One such program was Individually Prescribed Instruction, or IPI. IPI was designed for students to "proceed through sequences of objectives that are arranged in a hierarchical order so that what a student studies in any given lesson is based on prerequisite abilities that he has mastered in preceding lessons" (Lindvall and Cox, as cited in Erlwanger, 1973, p. 51). To measure that mastery, IPI relied heavily on assessments that were checked by the teacher or an aide, who would then have the opportunity to conference with the student and check for understanding. Erlwanger, however, saw a conflict inherent in the program: while the goals of IPI were "pupil independence, self-direction, and self-study" (Erlwanger, 1973, p. 52), teachers were supposed to have "continuing day-by-day exposure to the study habits, the interests, the learning styles, and the relevant personal qualities of individual students" (Lindvall and Cox, as cited in Erlwanger, 1973, p. 52). So is a teacher, with a class of students each working at their own pace, supposed to continuously monitor each individual student? How? The logical way to do this is to monitor assessment results and focus attention on struggling students. After all, if a student is passing the assessments and "mastering" objectives, how much could go wrong?

Benny was a twelve-year-old boy with an IQ of 110-115 in a 6th grade IPI classroom. Benny had been in the IPI program since 2nd grade, and the teacher identified Benny as one of her best students. By sitting down and talking to Benny about the math he was learning, Erlwanger discovered that Benny's conception of math was not only very rule based, but in many cases Benny's rules yielded wrong answers. For example:

  • Benny believed that the fraction \(\frac{5}{10} = 1.5\) and \(\frac{400}{400} = 8.00\) because he believed the rule was to add the numerator and denominator and then divide by the number represented by the highest place value. Benny was consistent and confident with this rule and it led him to believe things like \(\frac{4}{11} = \frac{11}{4} = 1.5\).
  • Benny converted decimals to fractions with the inverse of his fraction-to-decimal rule. If he needed to write 0.5 as a fraction, "it will be like this ... \(\frac{3}{2}\) or \(\frac{2}{3}\) or anything as long as it comes out with the answer 5, because you're adding them" (Erlwanger, 1973, p. 50).
  • When Benny adds decimals, he adds the numbers as if they were whole numbers (ignoring the decimal points) and then moves the decimal point left by the total number of decimal places he sees in the problem. So \(0.3 + 0.4 = 0.07\) and \(0.44 + 0.44 = 0.0088\). Benny's rule for multiplication is very similar: \(0.7 \times 0.5 = 0.35\), \(0.2 \times 0.3 \times 0.4 = 0.024\), and \(8 \times 0.4 = 3.2\). Because the multiplication answers happen to be correct, they only served to reinforce Benny's rules about the addition of decimals.
  • Benny thinks different kinds of numbers should yield different answers: "2 + 3, that's 5. If I did 2 + .3, that will give me a decimal; that will be .5. If I did it in pictures [i.e., physical models] that will give me 2.3. If I did it in fractions like this [i.e., \(2 + \frac{3}{10}\)] that will give me \(2\frac{3}{10}\)" (Erlwanger, 1973, p. 53).
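Part of what makes Benny's fraction rule so striking is that it is a genuine algorithm, consistent enough to implement. Here's a sketch of the rule as Erlwanger describes it; the function name and the place-value interpretation (dividing the sum by its highest place value) are my own reading of the examples:

```python
def bennys_fraction_rule(numerator, denominator):
    """Benny's (incorrect) rule for converting a fraction to a decimal:
    add the numerator and denominator, then divide by the value of the
    sum's highest place (1, 10, 100, ...)."""
    total = numerator + denominator
    highest_place = 10 ** (len(str(total)) - 1)
    return total / highest_place

print(bennys_fraction_rule(5, 10))     # 1.5
print(bennys_fraction_rule(400, 400))  # 8.0
print(bennys_fraction_rule(4, 11))     # 1.5
print(bennys_fraction_rule(11, 4))     # 1.5 -- the same answer, just as Benny said
```

The rule never errs randomly: like Benny, it gives the same wrong answer every time, which is exactly why answer-checking alone couldn't catch it.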

As you might guess, Benny got a lot of wrong answers and sometimes failed to achieve the 80% mastery mark on his assessments. It's clear that Benny isn't simply guessing and getting wrong answers -- his methods are consistent and he can confidently explain his reasoning. When Benny is wrong, he tries to change his answers until he gets ones that match the answer key, a process he called a "wild goose chase" (Erlwanger, 1973, p. 53). Because Benny's teacher/aide is only looking for answers that match the key (and trying to do so quickly), the emphasis is on the answer, not the reasoning. It was only Benny's persistence that resulted in him mastering more objectives than most of his classmates.

This style of learning led Benny to believe that math is little more than a collection of arbitrary rules and singularly correct answers: "In fractions, we have 100 different kinds of rules" (Erlwanger, 1973, p. 54). Erlwanger asked Benny where he thought the rules came from. "By a man or someone who was very smart. ... It must have took this guy a long time ... about 50 years ... because to get the rules he had to work all of the problems out like that..." (Erlwanger, 1973, p. 54). Out of both scholarly interest and concern for Benny, Erlwanger returned to the school twice a week for 8 weeks to work with Benny one-on-one. Unfortunately, despite Benny's eagerness to learn, Erlwanger found this to be too little time to change Benny's firmly established view of mathematics, and little progress was made.

What Benny Means to Theory, Research, and to Khan Academy

(It might be helpful to read yesterday's post about constructivism and the Khan Academy before reading this section.)

Erlwanger summed up the theoretical aspect in his conclusion:
Benny's misconceptions indicate that the weakness of IPI stems from its behaviorist approach to mathematics, its mode of instruction, and its concept of individualization. The insistence in IPI that the objectives in mathematics be defined in precise behavioral terms has produced a narrowly prescribed mathematics program that rewards correct answers only regardless of how they were obtained, thus allowing undesirable concepts to develop. (1973, p. 57)
Looking back at Benny in 1994, Steffe and Kieren summarized that
Erlwanger was able to demonstrate how Benny's understanding of mathematics conflicted with any "common sense" understanding of what would be regarded as "good mathematics." This was a crucial part of Erlwanger's work, because by demonstrating what a "common sense" view of mathematics should not be, Erlwanger was able to falsify (naively) the behavioristic movement in mathematics education at that very place where behaviorism has its greatest appeal -- at the level of common sense. (p. 72)
Prior to Benny, the large majority of research in mathematics education depended on quantitative methods -- using statistics to summarize and compare the performance of treatment and control groups. Erlwanger had opened the door to qualitative research, which essentially meant that researchers could now see the value of interviews, case studies, and similar methods. In other words, Benny showed researchers that they can, and should, talk to children.

Although we're approaching the 40th anniversary of the Benny study, anyone who has been paying attention to the debates regarding Khan Academy should be able to draw parallels between it and IPI and realize we're retreading a lot of the same ground. In a recent Wired Magazine article about Khan, stories are told of students working individually, at their own pace, with their progress measured by a computer that judges answers right or wrong. The article highlights Matthew Carpenter, a fifth grader who has completed "an insane 642 inverse trig problems" (Thompson, 2011, para. 2). Carpenter has earned many Khan Academy badges, a sign of progress that pleases his teacher and amazes his classmates. Unfortunately, the article provides no evidence that Matthew Carpenter is not Benny. I, and hopefully everyone, sincerely hope he is not Benny. I hope he's developing a proper view of the nature of mathematics and developing solid mathematical reasoning and understanding. But I can't be sure, and maybe Carpenter's teacher can't be sure, either. While we sometimes can and do use behaviorist programs of instruction to learn, we can't rely on them to be sure that learning is happening the right way. That's Benny's lesson, and that's why we need to be critical (but not necessarily dismissive) of Khan Academy. People who fail to do so might be surprised by the results they get for all the wrong reasons.


Erlwanger, S. H. (1973/2004). Benny's conception of rules and answers in IPI Mathematics. In T. P. Carpenter, J. A. Dossey, & J. L. Koehler (Eds.), Classics in mathematics education research (pp. 48-58). Reston, VA: NCTM.

Speiser, B., & Walter, C. (2004). Remembering Stanley Erlwanger. For the Learning of Mathematics, 24(3), 33-39. Retrieved from

Steffe, L. P., & Kieren, T. (1994/2004). Radical constructivism and mathematics education. In T. P. Carpenter, J. A. Dossey, & J. L. Koehler (Eds.), Classics in mathematics education research (pp. 68-82). Reston, VA: NCTM.

Thompson, C. (2011, July). How Khan Academy is changing the rules of education. Wired. Retrieved from