RYSK: Ball, Thames, & Phelps's Content Knowledge for Teaching: What Makes It Special? (2008)

This is the 17th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps. I also Storified this article as I read.

My last two posts summarized the underpinnings of Shulman's pedagogical content knowledge and Deborah Ball's early work building upon and extending Shulman's theories. Now we jump from Ball's 1988 article to one she co-authored in 2008 with University of Michigan colleagues Mark Thames and Geoffrey Phelps, titled Content Knowledge for Teaching: What Makes It Special?

This article starts by looking at the 20+ years we've had to further develop Shulman's theories of pedagogical content knowledge (PCK). Despite the theory's widespread use, Ball and colleagues claim it "has lacked definition and empirical foundation, limiting its usefulness" (p. 389). (See also Bud Talbot's 2010 blog post and related efforts.) In fact, the authors found that a third of the more than 1200 articles citing Shulman's PCK

do so without direct attention to a specific content area, instead making general claims about teacher knowledge, teacher education, or policy. Scholars have used the concept of pedagogical content knowledge as though its theoretical foundations, conceptual distinctions, and empirical testing were already well defined and universally understood. (p. 394)

To build the empirical foundation that PCK needs, Ball and her research team did a careful qualitative analysis of data that documented an entire year of teaching (including video, student work, lesson plans, notes, and reflections) for several third grade teachers. Combined with their own expertise and experience, and other tools for examining mathematical and pedagogical perspectives, the authors set out to bolster PCK from the ground up:

Hence, we decided to focus on the work of teaching. What do teachers need to do in teaching mathematics -- by virtue of being responsible for the teaching and learning of content -- and how does this work demand mathematical reasoning, insight, understanding, and skill? Instead of starting with the curriculum, or with standards for student learning, we study the work that teaching entails. In other words, although we examine particular teachers and students at given moments in time, our focus is on what this actual instruction suggests for a detailed job description. (p. 395)

For Ball et al., this includes everything from lesson planning and grading to communicating with parents and dealing with administration. With all this information, the authors are able to sharpen Shulman's PCK into more clearly defined (and in some cases, new) "Domains of Mathematical Knowledge for Teaching." Under subject matter knowledge, the authors identify three domains:
  • Common content knowledge (CCK)
  • Specialized content knowledge (SCK)
  • Horizon content knowledge

And under pedagogical content knowledge, the authors identify three more domains:
  • Knowledge of content and students (KCS)
  • Knowledge of content and teaching (KCT)
  • Knowledge of content and curriculum

Ball describes each domain and uses some examples to illustrate, mostly from arithmetic. For my explanation, I'll instead use something from high school algebra and describe how each domain applied to the growth of my knowledge over my teaching career.

Common Content Knowledge (CCK)

Ball et al. describe CCK as the subject-specific knowledge needed to solve mathematics problems. The reason it's called "common" is because this knowledge is not specific to teaching -- non-teachers are likely to have it and use it. Obviously, this knowledge is critical for a teacher, because it's awfully difficult and inefficient to try to teach what you don't know yourself. As an example of CCK, my knowledge includes the understanding that \((x + y)^2 = x^2 + 2xy + y^2\). I've known this since high school, and I would have known it whether or not I became a math teacher.

Specialized Content Knowledge (SCK)

SCK is described by Ball et al. as "mathematical knowledge and skill unique to teaching" (p. 400). Not only do teachers need this knowledge to teach effectively, but it's probably not needed for any other purpose. For my example, I need to have a specialized understanding of how \((x+y)^2\) can be expanded using FOIL or modeled geometrically with a square. It may not be all that important for students to understand both the algebraic and geometric ways of representing this problem, but I need to know both so I can better understand student strategies and sources of error -- namely, the common error that \((x + y)^2 = x^2 + y^2\).
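
To make the algebraic route concrete (my own illustration, not an example from the article), the FOIL expansion treats the square as a product of two binomials:

\[
(x + y)^2 = (x + y)(x + y) = x^2 + xy + yx + y^2 = x^2 + 2xy + y^2
\]

The two middle terms are exactly what disappears in the common error above.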

Horizon Content Knowledge

This domain was provisionally included by the authors and described as, "an awareness of how mathematical topics are related over the span of mathematics included in the curriculum" (p. 403). For my example of \((x + y)^2 = x^2 + 2xy + y^2\), I need to understand how previous topics like order of operations, exponents, and the distributive property relate to this problem. Looking forward, I need to understand how this problem relates to factoring polynomials and working with rational expressions.
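
As one forward-looking illustration (mine, not the authors'), the same identity runs in reverse when factoring perfect square trinomials and shows up again when simplifying rational expressions:

\[
x^2 + 2xy + y^2 = (x + y)^2, \qquad \frac{x^2 + 2xy + y^2}{x + y} = \frac{(x + y)^2}{x + y} = x + y \quad (x \neq -y)
\]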

Knowledge of Content and Students (KCS)

This is "knowledge that combines knowing about students and knowing about mathematics" (p. 401) and helps teachers predict student thinking. KCS is what allows me to expect students to incorrectly think \((x + y)^2 = x^2 + y^2\), and to tie that to misconceptions about the distributive property and exponents. I'm not sure I had this knowledge for this example when I started teaching, but it didn't take me long to figure out that it was a very common student mistake.

Knowledge of Content and Teaching (KCT)

Ball et al. say KCT "combines knowing about teaching and knowing about mathematics" (p. 401). While KCS gave me insight about why students mistakenly think \((x + y)^2 = x^2 + y^2\), KCT is the knowledge that allows me to decide what to do about it. For me, this meant choosing a geometric representation for instruction over using FOIL, which lacks that geometric grounding and does little to address the problem if students never recognize that \((x + y)^2 = (x + y)(x + y)\).
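
The geometric representation I have in mind is the standard area model (my sketch of it, not a figure from the paper): a square with side length \(x + y\), partitioned into a large square, two congruent rectangles, and a small square whose areas sum to the whole:

\[
(x + y)^2 = \underbrace{x^2}_{\text{large square}} + \underbrace{xy + xy}_{\text{two rectangles}} + \underbrace{y^2}_{\text{small square}}
\]

Once students see the two rectangles, it's much harder to believe the total could be just \(x^2 + y^2\).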

Knowledge of Content and Curriculum

For some reason, Ball et al. include this domain in a figure in their paper but never describe it explicitly. They do, however, scatter enough comments about knowledge of content and curriculum to imply that teachers need a knowledge of the available materials they can use to support student learning. For my example, I know that CPM uses a geometric model for multiplying binomials, Algebra Tiles/Models can be used to support that model, virtual tiles are available at the National Library of Virtual Manipulatives (NLVM), and the Freudenthal Institute has an applet that allows students to interact with different combinations of constants and variables when multiplying polynomials.

Some of the above can be hard to distinguish, but thankfully Ball and colleagues clarify by saying:

In other words, recognizing a wrong answer is common content knowledge (CCK), whereas sizing up the nature of an error, especially an unfamiliar error, typically requires nimbleness in thinking about numbers, attention to patterns, and flexible thinking about meaning in ways that are distinctive of specialized content knowledge (SCK). In contrast, familiarity with common errors and deciding which of several errors students are most likely to make are examples of knowledge of content and students (KCS). (p. 401)

In their conclusion, the authors hope this theory can better fill the gap teachers know is important -- the knowledge that isn't purely about content and isn't purely about teaching. We can hope to better understand how each type of knowledge above impacts student achievement, and optimize our teacher preparation programs to reflect that understanding. Furthermore, that understanding could be used to create new and improved teaching materials and professional development, and to better understand what it takes to be an effective teacher. With this in mind, you can gain some insight into what Ball was thinking when she gave this congressional testimony:


References


Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. doi:10.1177/0022487108324554

RYSK: Ball's Unlearning to Teach Mathematics (1988)

This is the 16th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps. I also Storified this article as I read.

Dan Lortie's 1975 book Schoolteacher clarified an idea that teachers already know: how we teach is greatly influenced by the way we've been taught. Lortie called the idea apprenticeship of observation, and it specifically refers to how teachers, having spent 13,000+ hours in classrooms as students, take that experience as a lesson in how to be a teacher. What we often fail to deeply reflect on, however, is that we were only seeing the end product of teaching. We didn't see the lesson planning, the summer conferences, the professional development workshops, the study of the science of learning, or the hundreds of decisions a teacher makes every day. Just observing isn't a proper apprenticeship, even after thousands of hours watching good teachers. I think of it this way: I watch a lot of baseball, and I can tell good baseball from bad. This hardly makes me ready to play, sadly, because I'm not spending hours taking batting practice, participating in fielding drills, studying video, digesting scouting reports, and working out in the offseason. Just as watching a lot of baseball doesn't really prepare me to play baseball, watching a lot of teaching doesn't really prepare someone to teach. Still, all those hours heavily influence our beliefs, both about teaching and about subject matter.

Deborah Ball (CC BY-NC-ND House Committee on Education and the Workforce Democrats)
In 1988, the year she earned her Ph.D. at Michigan State, Deborah Ball was spending a lot of time thinking about math teachers' apprenticeship of observation. In an article called Unlearning to Teach Mathematics, she describes a project involving teaching permutations to her class of introductory preservice elementary teachers. The goal was not simply to teach her students about permutations, but also to learn more about their beliefs about the nature of mathematics and to develop strategies that might enlighten those beliefs and break the cycle of simply teaching how you were taught.

By selecting permutations as the topic, Ball hoped to expose these introductory teachers to a topic they'd never studied formally. By carefully observing how her students constructed their knowledge, Ball would be able to see how their prior understandings about mathematics influenced their learning. The unit lasted two weeks. In the first phase of the unit, Ball tried to engage the students in the sheer size and scope of permutations, like by thinking about how the 25 students could be seated in 15,511,210,043,330,985,984,000,000 different seating arrangements. Working back to the simplest cases, with 2, 3, and 4 students, students could think and talk about the patterns that emerge and understand how the number of permutations grows so quickly. For homework, Ball asked students to address two goals: increase their understanding of permutations, but also think about the role homework plays in their learning, including how they approach and feel about it and why. In the second phase of the unit, Ball had her students observe her teaching young children about permutations, paying attention to the teacher-student interactions, the selection of tasks, and what the child appeared to be thinking. In the last phase of the unit, the students became teachers themselves and tried helping someone else explore the concept of permutations. After discussing this experience, students wrote a paper reflecting on the entire unit.
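
To see the pattern Ball wanted her students to notice (my arithmetic, not a table from the article), the count of seating arrangements is the factorial of the class size, and it explodes quickly:

\[
2! = 2, \qquad 3! = 6, \qquad 4! = 24, \qquad 5! = 120, \qquad \ldots, \qquad 25! \approx 1.55 \times 10^{25}
\]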

From other research, Ball knew that teacher educators often assumed their students had mastery of content knowledge. Even more so, future elementary math teachers themselves assumed they had mastery over the mathematical content they'd be expected to teach. She knew, however, that there was something extra a teacher needed to teach that content. Citing Shulman's pedagogical content knowledge, along with numerous others, Ball describes some ways we can think about what that special content knowledge for teaching is, but admits that her permutations project was too narrow to explore how teachers construct and organize that knowledge. The project would, however, give insight into her students' ideas about mathematics, and the assumptions they make about what it means to know mathematics. For example, a student named Cindy wrote:

I have always been a good math student so not understanding this concept was very frustrating to me. One thing I realized was that in high school we never learned the theories behind our arithmetic. We just used the formulas and carried out the problem solving. For instance, the way I learned permutations was just to use the factorial of the number and carry out the multiplication ... We never had to learn the concepts, we just did the problems with a formula. If you are only multiplying to get the answer every time, permutations could appear to be very easy. If you ask yourself why do we multiply and really try to understand the concept, then it may be very confusing as it was to me. (p. 44)

Comments like this revealed that many of Ball's students relied on a procedural view of mathematics, one where the question "Why?" had been rarely asked. Ball also noticed a theme in her students' reflections about knowing math "for yourself" versus for teaching. Alison wrote:

I was trying to teach my mother permutations. But it turned out to be a disaster. I understood permutations enough for myself, but when it came time to teach it, I realized that I didn't understand it as well as I thought I did. Mom asked me questions I couldn't answer. Like the question about there being four times and four positions and why it wouldn't be 4 x 4 = 16. She threw me with that one and I think we lost it for good there.
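
The reasoning step Alison was reaching for (my note, not something spelled out in the article) is that each position filled removes one option for the next, so the count is a product of shrinking factors rather than a repeated four:

\[
4 \times 3 \times 2 \times 1 = 24, \quad \text{not} \quad 4 \times 4 = 16
\]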

From observing a young student learn about permutations in phase two, Ball noticed that some of her students started to challenge some of the assumptions they had made about themselves as learners. Both from her experience and from the literature, Ball knew that elementary preservice teachers are often the most apprehensive about teaching mathematics. In some cases, these students choose to teach elementary school in the hopes of avoiding any mathematical content they might find difficult. Changing these feelings about mathematics and about themselves is a difficult task for the teacher educator, but Ball did see progress. Christy, for example, said, "Most of all, I realized that I do have the ability to learn mathematics when it is taught in a thoughtful way" (p. 45). Unfortunately, not all shared this experience, as Mandy said she "did not enjoy the permutations activities because I was transported in time back to junior high school, where I remember mathematics as confusing and aggravating. Then as now, the explanations seemed to fly by me in a whirl of disassociated numbers and words" (p. 45).

In her conclusion, Ball says activities like the permutations project can be used by teacher educators to expose students' "knowledge, beliefs, and attitudes" (p. 46) about math and teaching math. By understanding the ideas prospective teachers bring with them, teacher educators can better develop preparation programs that address those beliefs in ways that strengthen the positive ones while changing some negative ones. Including these kinds of activities in introductory courses can also raise preservice teachers' expectations for what they will encounter later in methods classes. Summarizing, Ball concludes:

How can teacher educators productively challenge, change, and extend what teacher education students bring? Knowing more about what teachers bring and what they learn from different components of and approaches to professional preparation is one more critical piece to the puzzle of improving the impact of mathematics teacher education on what goes on in elementary mathematics classrooms. (p. 46)

References


Ball, D. L. (1988). Unlearning to teach mathematics. For the Learning of Mathematics, 8(1), 40–48. Retrieved from http://www.jstor.org/stable/40248141

RYSK: Shulman's Those Who Understand: Knowledge Growth in Teaching (1986)

This is the 15th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps. I also Storified this article as I read.

Lee Shulman. (CC BY-NC) Penn State
George Bernard Shaw once said, "He who can, does. He who cannot, teaches." For that, you could say that Lee Shulman takes offense. Shulman, a long-time faculty member at Michigan State (1963-1982) and then Stanford, explained his position and a new way of thinking about teacher knowledge in his AERA Presidential Address and the accompanying paper, Those Who Understand: Knowledge Growth in Teaching. Shulman is now an emeritus professor but stays active traveling, speaking, and occasionally blogging.

Wondering why the public often has a low opinion of teachers' knowledge and skill, Shulman first looks at the history of teacher examinations. In the latter half of the 1800s, examinations for people wishing to teach were almost entirely content-based. In 1875, for example, the California State Board gave prospective elementary teachers a day-long, 1000-point exam that covered everything from mental arithmetic to geography to vocal music. Its section on the theory and practice of teaching, however, was worth only 50 of the 1000 points and included questions like, "How do you interest lazy and careless pupils?" (p. 5)

By the 1980s, when Shulman wrote this article, teacher examinations painted almost the opposite picture. Instead of focusing on content, they focused on topics such as lesson planning, cultural awareness, and other aspects of teacher behavior. While the topics usually had roots in research, they clearly did not represent the wide spectrum of skills and knowledge a teacher would need to be a successful teacher. More specifically, by the 1980s our teacher examinations seemed to care as little about content as the examinations a century prior seemed to care about pedagogy.

Looking back even further in history, Shulman recognized that we haven't always made this distinction between content and teaching knowledge. Our highest degrees, "master" and "doctor," both have names that essentially mean "teacher," reflecting the belief that the highest form of knowing was teaching, an idea going back to at least Aristotle:

We regard master-craftsmen as superior not merely because they have a grasp of theory and know the reasons for acting as they do. Broadly speaking, what distinguishes the man who knows from the ignorant man is an ability to teach, and this is why we hold that art and not experience has the character of genuine knowledge (episteme) -- namely, that artists can teach and others (i.e., those who have not acquired an art by study but have merely picked up some skill empirically) cannot. (Wheelwright, 1951, as cited in Shulman, 1986, p. 7)

Shulman saw a blind spot in this dichotomy between content and teaching knowledge. What he saw was a special kind of knowledge that allows teachers to teach effectively. After studying secondary teachers across subject areas, Shulman and his fellow researchers looked to better understand the source of teachers' comprehension of their subject areas, how that knowledge grows, and how teachers understand and react to curriculum, reshaping it into something their students will understand.

Pedagogical Content Knowledge

To better understand this special knowledge of teaching, Shulman suggested we distinguish three different kinds of content knowledge: (a) subject matter knowledge, (b) pedagogical content knowledge, and (c) curricular knowledge. It is the second of these, pedagogical content knowledge (PCK), that Shulman is best remembered for. Shulman describes the essence of PCK:

Within the category of pedagogical content knowledge I include, for the most regularly taught topics in one's subject area, the most useful forms of representation of those ideas, the most powerful analogies, illustrations, examples, explanations, and demonstrations -- in a word, the ways of representing and formulating the subject that make it comprehensible to others. Since there are no single most powerful forms of representation, the teacher must have at hand a veritable armamentarium of alternative forms of representation, some of which derive from research whereas others originate in the wisdom of practice. (p. 9)

In addition to these three kinds of teacher knowledge, Shulman also proposed we consider three forms of teacher knowledge: (a) propositional knowledge, (b) case knowledge, and (c) strategic knowledge. These are not separate from the three kinds of knowledge named above, but rather describe different forms of each kind of teacher knowledge. Propositional knowledge consists of those things we propose teachers do, such as "planning five-step lesson plans, never smiling until Christmas, and organizing three reading groups" (p. 10). Shulman organized propositional knowledge into principles, maxims, and norms, with the first usually emerging from research, the second coming from practical experience (and generally untestable, like the suggestion to not smile before Christmas), and the third concerning things like equity and fairness. Propositions can be helpful, but they are difficult to remember and to implement as research intended.

Learning propositions out of context is difficult, so Shulman proposed case knowledge as the second form of teacher knowledge. By case, he means learning about teaching in much the same way a lawyer learns about the law by studying prior legal cases. In order to truly understand a case, a learner starts with the factual information and works towards the theoretical aspects that explain why things happened. By studying well-documented cases of teaching and learning, teachers consider prototype cases (that exemplify the theoretical), precedents (that communicate maxims), and parables (that communicate norms and values). (If you're scoring at home, Shulman has now said there are three types of cases, which itself is one of three forms of knowledge, each of which is capable of describing three different kinds of content knowledge.)

The last form of knowledge, strategic knowledge, describes how a teacher reacts when faced with contradictions of other knowledge or wisdom. Knowing when to bend the rules or go against conventional wisdom takes more than luck -- it requires a teacher to be "not only a master of procedure but also of content and rationale, and capable of explaining why something is done" (p. 13).

The value of this article goes beyond its theoretical description of pedagogical content knowledge. It also serves as a strong reminder that when we judge a teacher, we must consider a broad spectrum of skills and abilities, and not limit ourselves to only those things we think can be easily measured. As Shulman explains:

Reinforcement and conditioning guarantee behavior, and training produces predictable outcomes; knowledge guarantees only freedom, only the flexibility to judge, to weigh alternatives, to reason about both ends and means, and then to act while reflecting upon one's actions. Knowledge guarantees only grounded unpredictability, the exercise of reasoned judgment rather than the display of correct behavior. If this vision constitutes a serious challenge to those who would evaluate teaching using fixed behavioral criteria (e.g., the five-step lesson plan), so much the worse for those evaluators. The vision I hold of teaching and teacher education is a vision of professionals who are capable not only of acting, but of enacting -- of acting in a manner that is self-conscious with respect to what their act is a case of, or to what their act entails. (p. 13)

In our current era of teacher evaluation and accountability, with all its observational protocols and test score-driven value added models, this larger view of teaching presented to us by Shulman is a gift. His recommendation that teacher evaluation and examination "be defined and controlled by members of the profession, not by legislators or laypersons" (p. 13) is a wise one, no matter how politically difficult. Shulman hoped for tests of pedagogical content knowledge that truly measured those special skills teachers have -- tests that non-teaching content experts would not pass. I don't think those measurement challenges have been overcome, but continuing towards that goal should strengthen teacher education programs while also improving the perception of teaching as a profession. As Shulman concludes (p. 14):

We reject Mr. Shaw and his calumny. With Aristotle we declare that the ultimate test of understanding rests on the ability to transform one's knowledge into teaching.

Those who can, do. Those who understand, teach.

References

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. Retrieved from http://www.jstor.org/stable/3202180

OpenComps Update

With five weeks to go before beginning the written portion of my comprehensive exam, I recently met with my advisor to discuss gaps in my reading list. I think everybody has holes somewhere in their knowledge, but given my interests in research and practice we came up with additional readings focused on three areas: teacher learning, teacher beliefs, and cognitively guided instruction (CGI). I'm starting with teacher learning, which includes the following four articles:

Ball, D. L. (1988). Unlearning to teach mathematics. For the Learning of Mathematics, 8(1), 40–48. Retrieved from http://www.jstor.org/stable/40248141

Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. doi:10.1177/0022487108324554

Lampert, M. (2009). Learning teaching in, from, and for practice: What do we mean? Journal of Teacher Education, 61(1-2), 21–34. doi:10.1177/0022487109347321

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. Retrieved from http://www.jstor.org/stable/3202180

Although I have a vague understanding of pedagogical content knowledge (PCK) and mathematical knowledge for teaching (MKT), I knew I needed to dig into Shulman's and Ball's thoughts to better understand their origins. In a way, it's a pretty good sign when the gaps you perceive yourself as having are more or less the ones your advisor sees, too. There are places for life to contain wonderful surprises, but I don't think this needs to be one of them. Now, on to the reading!

RYSK: Greeno, Pearson, & Schoenfeld's Implications for NAEP of Research on Learning and Cognition (1996)

This is the 14th in a series describing "Research You Should Know" (RYSK).

You might have read my recent post about Lorrie Shepard's 2000 article The Role of Assessment in a Learning Culture and assumed she focused on classroom assessment because changing large-scale, standardized assessments was a lost cause. Think again. By that time, an effort to integrate new theories of learning and cognition into the NAEP was already underway, traceable back to a 1996 report titled Implications for NAEP of Research on Learning and Cognition written by James G. Greeno, P. David Pearson, and Alan H. Schoenfeld. For years Greeno has been recognized as one of education's foremost learning theorists, while Pearson and Schoenfeld are highly-regarded experts in language arts and mathematics education, respectively.

The National Assessment of Educational Progress, sometimes called "The Nation's Report Card," has been given to students in various forms since 1969. Unlike the high-stakes assessments given by states to all students, the NAEP is given to samples of 4th, 8th, and 12th grade students from around the country, and the use of matrix sampling means no student ever takes the entire test. The goal of the NAEP is to inform educators and policymakers about performance and trends, and details about how different NAEP exams try to achieve this are described in depth at the NAEP website.

Greeno et al. tried to answer two main questions in their report: (a) Does the NAEP inform the nation "about significant aspects of the knowing and learning" (p. 2) in math and reading, and (b) What changes in NAEP would make it a better tool for informing the nation about the performance and progress of our educational system? The authors acknowledge the tradition of what they call differential and behaviorist perspectives on learning, and focus more of their attention on the ability to assess cognitive and situative perspectives, which have strong theoretical foundations but hadn't been reflected in most large-scale assessments.

Concisely, the report says the "key features of learning in the cognitive perspective are meaningful, conceptual understanding and strategic thinking" and that the "key feature of learning in the situative perspective is engaged participation with agency" (p. 3, emphasis in original). Greeno et al. say that if students are engaged in learning activities that reflect these perspectives, then the NAEP should try to capture the effects of those experiences.

One of the main reasons I'm writing about this report is that it gives me another chance to describe current learning perspectives that go beyond the simpler "behaviorism vs. constructivism" argument I knew as a teacher and heard from others. This report does this well without burdening the reader with all the gory details that learning theorists grapple with as they try to push these theories even further. So here's my summary of their summaries of each perspective:

Differential

This perspective accepts the assumption that "Whatever exists, exists in some amount and can be measured" (p. 10). For knowledge, that "whatever" is referred to as a trait, and different people have traits in different amounts. Evidence of traits can be detected by tests, and the correlation of different tests supposedly measuring the same trait is an indication of our confidence in our ability to measure the trait. Because the person-to-person amount of a trait is assumed to be relative, it's statistically important to design tests where few people will answer all items correctly or incorrectly.

Behaviorist

Behaviorism assumes that "knowing is an organized collection of stimulus-response associations" (p. 11). To learn is to acquire skills (usually and best in small pieces) and measuring learning is seen as an analysis of behaviors which can be decomposed into responses to stimuli. Behaviorism's influence on curriculum is seen when behavioral objectives are organized as a sequence building bigger ideas out of smaller, prerequisite objectives.

Cognitive

The cognitive perspective primarily focuses "on structures of knowledge, including principles and concepts of subject-matter domains, information organized by schemata, and procedures and strategies for problem solving and reasoning" (p. 12). Learners actively construct their knowledge rather than accept it passively, and conceptual understanding is not just the sum total of facts. The early part of the cognitive revolution was reflected in the math and science reforms of the 1950s and 1960s, while Piagetian ideas and research on student understanding have pushed the perspective further. Assessments need to determine more than right and wrong answers, and research involving think-aloud protocols, student interviews, eye-tracking studies, and patterns of responses have yielded better theories about how to assess for student understanding.

Situative

The situative perspective is a social view of learning focused on "interactive processes in which people participate in practices that are organized by the societies and communities they belong to, using the technologies and natural resources in their environments" (p. 14). Knowing is no longer in the head -- instead it is seen as participation in a community, and learning is represented by increased and more effective participation. John Dewey took parts of this perspective in the early 20th century, but we owe much of the theory to Lev Vygotsky, whose work in the 20s and 30s in the Soviet Union eventually emerged and has heavily influenced learning science since the late 1970s. The situative perspective is more readily applied to interactions between people or between people and technology (which is seen as a cultural artifact with social roots), but even solitary learners can be assessed with the situative perspective if we focus on "the individual's participation in communities with practices, goals, and standards that make the individual's activity meaningful, either by the individual's adoption of or opposition to the community's perspective" (p. 14). The influence of the situative perspective on curriculum and classrooms is most easily seen in the focus on student participation, project work, small-group discussions, and authentic work in subject-area disciplines.

In summary, achievement in each perspective can be described as:
Differential/Behaviorist
- "progress a student has made in the accumulation of skills and knowledge" (p. 16)
Cognitive
- a combination of five aspects (pp. 16-18):
  1. Elementary skills, facts, and concepts
  2. Strategies and schemata
  3. Aspects of metacognition
  4. Beliefs
  5. Contextual factors
Situative
- a combination of five aspects (pp. 19-21):
  1. Basic aspects of participation
  2. Identity and membership in communities
  3. Formulating problems and goals and applying standards
  4. Constructing meaning
  5. Fluency with technical methods and representations

What Does This Mean for the NAEP?

Greeno et al. declared that the NAEP was "poorly aligned" (p. 23) with the cognitive perspective. It hadn't captured the complexity of student knowledge and they recommended a greater focus on problems set in meaningful contexts and tasks that reflected the kind of knowledge models and structures theorized in the research. As for the situative perspective, Greeno et al. went so far as to say that what the NAEP had been measuring was "of relatively minor importance in almost all activities that are significant for students to learn" (p. 27). Whereas the situative perspective focuses on participation in a particular community or knowledge domain, it's impossible to escape the reality that on the NAEP, the domain is test-taking itself, a "special kind of situation that is abstracted from the variety of situations in which students need to know how to participate" (pp. 28-29). Measuring learning from the situative perspective would require a complicated set of inferences about a student's actual participation practices in an authentic domain, and the technical limitations of the NAEP limit our ability to make those inferences.

The report continues with specific details about how we might measure learning in language arts and mathematics with the NAEP from both a cognitive and situative perspective. In the conclusion, the authors first recommended some systemic changes: First, NAEP needed more capacity for attending to the long-term continuity of the test and its design. Given how important NAEP is for measuring longitudinal trends, we can't change it without a careful study of how to compare new results to old. Second, the authors wanted a national system for evaluating changes in the educational system. The NAEP alone can't tell us everything we need to know about the effectiveness of educational reforms.

As for recommendations for the test itself, Greeno et al. emphasized the need to align the assessment with ongoing research, especially in the cognitive perspective. Instead of planning for NAEP tests one at a time and contracting out various work, the development process needed to become more continuous with particular sustained attention given to progress in the cognitive and situative dimensions. More ambitiously, the authors recommended a parallel line of test development to begin establishing new forms of assessment that might capture learning in these newer perspectives. This is a critical challenge because while we know the least about assessing from the situative perspective, the situative is often the perspective that frames our national educational goals. The NAEP can't measure progress to situative-sounding goals without better measurement of learning from a situative perspective.

It has now been 12 years since the release of this report. I don't know how Greeno et al.'s recommendations have specifically been followed, but there is good news. If you read most any of the current NAEP assessment frameworks, you can find evidence of progress. The frameworks have changed to better measure student learning, particularly from the cognitive perspective. Some frameworks honestly address the difficulty in measuring the situative perspective using an on-demand, individualized, pencil-and-paper (but increasingly computer-based) test. (See Chapter One of the science framework, for example.) Will we see any radical changes any time soon? I doubt it. The information we get about long-term trends from the NAEP requires a certain amount of stability. Given the onset of new national consortia tests based on the Common Core State Standards, I think the educational system will get its fill of radical change in the next 3-5 years. With that as the comparison, we all might contentedly appreciate the stability and attention to careful progress reflected in the NAEP.

References

Greeno, J. G., Pearson, P. D., & Schoenfeld, A. H. (1996). Implications for NAEP of research on learning and cognition (p. 84). Menlo Park, CA.

How Can Texas Instruments Adapt to Post-Tech-Monopoly Classrooms?

Bill Cosby has been right about a lot of things, but he might not have seen the future when he advertised the Texas Instruments TI-99 computer as "The One:"



I think I'm glad the TI-99 computer didn't become "The One," because when the TI-83 graphing calculator became "The One" for students, Texas Instruments showed they were all too happy to keep pushing that same basic technology for about the same price for 10+ years. Only when you have a tech monopoly can you resist that much change for so long.

Now I finally feel like TI is facing some real competition in the classroom. If I were them, I'd be developing and marketing smartphone apps that replicate the functionality of their calculators with one key feature: the ability for the user to put the app in "lock mode," which makes the device a dedicated calculator for a predetermined amount of time. I wouldn't worry about students cheating with their phones if I could see them trigger the lock mode at the beginning of a test and then prove to me at the end that the calculator app was the only app running the entire time. If TI could get that approved by the ACT and SAT, I think it's an app students would gladly pay for.

RYSK: Shepard's The Role of Assessment in a Learning Culture (2000)

This is the 13th in a series describing "Research You Should Know" (RYSK).

In her presidential address at the 2000 AERA conference, Lorrie Shepard revealed a vision for the future of educational assessment. That message turned into an article titled The Role of Assessment in a Learning Culture, and its message is still very much worth hearing today. Lorrie Shepard remains a globally-respected expert in assessment, psychometrics, and their misuses, and I'd think she was totally awesome even if she wasn't my boss.

Shepard is often present for debates about large-scale testing, but this paper focuses on classroom assessment -- the kind, says Shepard, "that can be used as a part of instruction to support and enhance learning" (p. 4). Shepard does this by first explaining a historical perspective, then describing a modern view of learning theories, then envisioning how new assessment practices could support those theories. Impressively, she does this all in just 11 well-written pages. (In fact, given that the paper is available on the web, I wouldn't blame you at all for skipping this summary and just reading the article for yourself.)

History

Shepard highlights several major themes from history that have continued to drive our assessment practices. One is the social efficiency movement, which "grew out of the belief that science could be used to solve the problems of industrialization and urbanization" (p. 4). While this movement might have helped our economic and educational systems scale rapidly (think about Ford and the assembly line), social efficiency carries with it a belief that people have a certain innate (and largely fixed) set of capabilities, and our society operates its most efficiently when we measure people and match their capabilities to appropriate education and employment. For example, students were often given IQ tests to determine if their future path should lie on a particular academic or vocational track.

The dominant learning theories of the early and mid-1900s were associationism and behaviorism, both of which promoted the idea that learning was an accumulation of knowledge that could be broken into very small pieces. Behaviorism was also tied closely to theories of motivation, as it was believed learning was promoted when knowledge was made smaller and opportunities for positive reinforcement for learning were made greater. Much of the assessment work related to these beliefs can be traced back to Edward Thorndike, considered to be the father of scientific measurement and earliest promoter of "objective" testing. It's been 100 years since Thorndike was elected president of the American Psychological Association, and decades since his ideas seriously influenced the leading edges of learning theory. Still, as most anyone who works in schools or experienced a traditional education can attest, ideas of social efficiency and behaviorism are still evident in schools -- especially in our assessment practices.

Together, the theories of social efficiency, scientific measurement, and beliefs about intelligence and learning form what Shepard sees as the dominant 20th-century paradigm. (See page 6 of the paper for a diagram.) It's important to begin our discussion here, says Shepard, because "any attempt to change the form and purpose of classroom assessment to make it more fundamentally a part of the learning process must acknowledge the power of these enduring and hidden beliefs" (p. 6).

Modern Theories

In the next section, Shepard describes a "social-constructivist" framework that guides modern thought on learning:

The cognitive revolution reintroduced the concept of mind. In contrast to past, mechanistic theories of knowledge acquisition, we now understand that learning is an active process of mental construction and sense making. From cognitive theory we have also learned that existing knowledge structures and beliefs work to enable or impede new learning, that intelligent thought involves self-monitoring and awareness about when and how to use skills, and that "expertise" develops in a field of study as a principled and coherent way of thinking and representing problems, not just as an accumulation of information. (pp. 6-7)

These ideas about cognition are complemented by Vygotskian realizations that the knowledge we construct "is socially and culturally determined" (p. 7). Unlike Piaget's view that development preceded learning, this modern view sees how development and learning interact as social processes. While academic debates remain about the details of cognitive vs. social (and vs. situative vs. sociocultural vs. social constructivist vs. ...), for practical purposes these theories can coexist and are already helping teachers view student learning in ways that improve upon behaviorism. However, Shepard says, since about the 1980s this has left us in an awkward state of using new theories to inform classroom instruction, while still depending on old theories to guide our assessments.

Improving Assessment

If we wish to make our theories of assessment compatible with our theories of learning, Shepard says we need to (a) change the form and content of assessments and (b) change the way we use and regard assessment in classrooms. Some of the potential changes in form are already familiar to most teachers, such as a greater use of open-ended performance tasks and setting assessment tasks in real-world contexts. Furthermore, Shepard suggests that classroom routines and related assessments should reflect the need to socialize students "into the discourse and practices of academic disciplines" (p. 8) as well as foster metacognition and important dispositions. Shepard does not go into much more detail here because others have already given attention to these ideas, but gives us this simple yet powerful idea (p. 8):

"Good assessment tasks are interchangeable
with good instructional tasks."

Next Shepard pays special attention to negative effects of high-stakes testing. Shepard could be called a believer in standards-based education, but recognizes how "the standards movement has been corrupted, in many instances, into a heavy-handed system of rewards and punishments without the capacity building and professional development originally proposed as part of the vision (McLaughlin & Shepard, 1995)" (p. 9). Unfortunately, Shepard's predictions have held true over the past 12 years: we've seen test scores distorted under political pressure, a corruption of "teaching to the test," and a trend towards the "de-skilling and de-professionalization of teachers" (p. 9). What's worse might be a decade of new teachers who've learned to "hate standardized testing and at the same time reproduce it faithfully in their own pre-post testing routines" (p. 10) because they've had such little exposure to better forms of assessment.

For the rest of the article, Shepard focuses on how assessment can and should be used to support student learning. First, classrooms need to support a learning culture where "students and teachers would have a shared expectation that finding out what makes sense and what doesn't is a joint and worthwhile project" (p. 10). This means assessment that is more informative and reflective of student learning, one where "students and teachers look to assessment as a source of insight and help instead of an occasion for meting out rewards and punishments" (p. 10). To do this, Shepard describes a set of specific strategies teachers should use in combination in their classrooms.

Dynamic Assessment

When Shepard wrote this article, formal ideas and theories about formative assessment were still emerging and the field had yet to settle on some of the language we now use. But if you're at all familiar with formative assessment, Shepard's description of "dynamic" assessment will sound familiar: teacher-student interactions continuing through the learning process rather than delayed until the end, with the goal of gaining insight about what students understand and can do both on their own and with assistance from classmates or the teacher.

Prior Knowledge

The idea of a pre-test to see what students know before instruction begins is not new, but Shepard says we should recognize that traditional pretests don't usually take account of social and cultural contexts. Because students are unfamiliar with a teacher's conceptualization of the content prior to instruction (and vice versa), scores might not accurately reflect students' knowledge as well as, say, a conversation or activity designed to elicit the understandings students bring to the classroom. Also, as Shepard has frequently observed, traditional pre-testing often doesn't significantly affect teachers' instruction. So why do it? Instead, why not focus on building a learning culture of assessment: "What safer time to admit what you don't know than at the start of an instructional activity?" (p. 11)

Feedback

The contrast in feedback under old, behaviorist theories and newer, social-constructivist theories is clear. Feedback under old theories generally consisted of labeling answers right or wrong. Feedback under new theories takes greater skill: teachers need to know how to ignore student errors that aren't immediately relevant to the learning at hand, while crafting questions and comments that force the student to question themselves and any false knowledge they might be constructing. (See Lepper, Drake, and O'Donnell-Johnson, 1997, for more on this.)

Transfer

While it is our hope that our students will be able to generalize the specific knowledge they have learned and apply it to other situations, our ability to accurately research and make claims about knowledge transfer turns out to be a pretty tricky business. Under a strict behaviorist perspective, it was appropriate to believe that each application of knowledge should be taught separately. Many of our current theories support an idea of transfer, and evidence shows that we can help students by giving them opportunities to see how their knowledge reliably works in multiple applications and contexts. So while some students might not agree, Shepard says teachers should not "agree to a contract with our students which says that the only fair test is one with familiar and well-rehearsed problems" (p. 11).

Explicit Criteria

If students are to perform well, they need to have clear guidance about what good performances look like. "In fact, the features of excellent performance should be so transparent that students can learn to evaluate their own work in the same way their teachers would" (p. 11). This reinforces ideas of metacognition and, perhaps more importantly, fairness.

Self-Assessment

There are cognitive reasons to have students self-assess, but other goals are to increase student self-responsibility and make teacher-student relationships more collaborative. Students who self-evaluate become more interested in feedback from others, are more aware of standards of excellence, and take more ownership over the learning process.

Evaluation of Teaching

This is another idea now heavily intertwined with formative assessment, but Shepard takes it one step further than I normally see it. Instead of just using assessment to improve one's teaching, Shepard recommends that teachers be transparent about this process and "make their investigations of teaching visible to students, for example, by discussing with them decisions to redirect instruction, stop for a mini-lesson, and so forth" (p. 12). This, Shepard says, is critical to cultural change in the classroom:

If we want to develop a community of learners -- where students naturally seek feedback and critique of their own work -- then it is reasonable that teachers would model this same commitment to using data systematically as it applies to their own role in the teaching and learning process. (p. 12)

Conclusion

Shepard admits that describing this new assessment paradigm is far easier than implementing it in practice. It relies on a great deal of teacher ability and requires confronting some long-held beliefs. Shepard recommended a program of research accompanied by a public education campaign to help citizens and policymakers understand the different goals of large-scale and classroom assessments. Neither the research nor the public education is easy, because both are built upon a history of theories and practice that a new paradigm needs to discard. Perhaps we haven't taken on this challenge with the effort and seriousness we've needed, and I worry that now we're more apt to talk about "learning in an assessment culture" rather than the other way around, as Shepard titled this article. I sometimes wonder if she's considered writing a follow-up with that title, or if she's hoping she'll never have to. I guess the next time it comes up I'll have to ask her.

Math note: This is an article about assessment and not specific to mathematics, but I'd be remiss if I didn't share Shepard's inclusion of one of my all-time favorite fraction problems:


References

Lepper, M. R., Drake, M. F., & O'Donnell-Johnson, T. (1997). Scaffolding techniques of expert human tutors. In K. Hogan & M. Pressley (Eds.), Scaffolding student learning: Instructional approaches & issues. Cambridge, MA: Brookline Books.

McLaughlin, M. W., & Shepard, L.A. (1995). Improving education through standards-based reform: A report of the National Academy of Education panel on standards-based educational reform. Stanford, CA: National Academy of Education.

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. doi:10.2307/1176145

Thompson, P. W. (1995). Notation, convention, and quantity in elementary mathematics. In J. T. Sowder & B. P. Schappelle (Eds.), Providing a foundation for teaching mathematics in the middle grades (pp. 199-221). New York: State University of New York Press.

RYSK: Gravemeijer's Local Instruction Theories as Means of Support for Teachers in Reform Mathematics Education (2004)

This is the 12th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps.

Gravemeijer (from above) at the 2011 RME Conference
I began my recent reading of the literature on learning trajectories by reading Clements & Sarama's (2004) Learning Trajectories in Mathematics Education, and then went back to where the idea formally began, Simon's (1995) Reconstructing Mathematics Pedagogy from a Constructivist Perspective. Now I'm jumping to 2004 again with Koeno Gravemeijer's Local Instruction Theories as Means of Support for Teachers in Reform Mathematics Education. Koeno Gravemeijer (pronounced Koo-no Grav-meyer) has worked at multiple institutions in the Netherlands and spent time at Vanderbilt working with Paul Cobb, but he's best known for his long-time association and leadership with the Freudenthal Institute and his advancement of Realistic Mathematics Education (RME).

When Martin Simon introduced the concept of hypothetical learning trajectories in his 1995 paper Reconstructing Mathematics Pedagogy from a Constructivist Perspective, he described them as part of a teaching cycle that was informed by the teacher's knowledge and then revised after assessment of student understanding. While much of the focus was placed on the idea of the trajectory, Simon made clear that no two trajectories will be alike, as each one is hypothesized for a unique group of students who are uniquely constructing knowledge. In other words, you can't just prescribe a trajectory and ask teachers to follow it to the letter. Instead, Simon suggested we needed to build an understanding of the knowledge teachers were using to inform and modify their trajectories:

A possible contribution that can be made by the analysis of data and the resulting model reported in this paper is to encourage other researchers to examine teachers' "theorems in action" and to make teachers' assumptions, beliefs, and emerging theories about teaching explicit. (p. 142)

This paper by Gravemeijer is, in part, a response to Simon's call to other researchers. Gravemeijer first states that in a constructivism-inspired reform mathematics, the traditional goals of instructional design must change:

What is needed for reform mathematics education is a form of instructional design supporting instruction that helps students to develop their current ways of reasoning into more sophisticated ways of mathematical reasoning. For the instructional designer this implies a change in perspective from decomposing ready-made expert knowledge as the starting point for design to imagining students elaborating, refining, and adjusting their current ways of knowing. (p. 106)

Next, Gravemeijer recognizes that while every teacher can use their knowledge to hypothesize a learning trajectory, we (researchers, teacher educators, curriculum designers) need to have some knowledge in common if we want to help teachers:

The example Simon (1995) worked out shows that designing hypothetical learning trajectories for reform mathematics is no easy task. We can, therefore, ask ourselves what kind of support can be given to teachers. It is clear that we cannot rely on fixed, ready-made, instructional sequences, because the teacher will continuously have to adapt to the actual thinking and learning of his or her students. Thus it seems more adequate to offer the teacher some framework of reference, and a set of exemplary instructional activities that can be used as a source of inspiration. (p. 107)

This is where Gravemeijer introduces the concept of a local instruction theory, which he describes as "the description of, and rationale for, the envisioned learning route as it relates to a set of instructional activities for a specific topic" (p. 107). I admit, it's difficult at first to distinguish this from a hypothetical learning trajectory, but I think the key is the relationship to instructional activities (which are more fixed/solid) instead of a trajectory's relationship to student understanding (which is more flexible/fluid). By addressing the relationship of learning to the instructional activities, Gravemeijer uses local instruction theories to describe a common foundation teachers can use for building trajectories, saying that "Externally developed local instruction theories are indispensable for reform mathematics education" and that it is "unfair to expect teachers to invent hypothetical learning trajectories without any means of support" (p. 108). (If you're still confused, I think I can safely oversimplify it like this: Simon says trajectories are about student learning, not mathematical tasks. Gravemeijer agrees, but since trajectories are unique because student learning is unique, it helps if we have some agreed-upon ideas about how mathematical tasks should be designed.) Given Gravemeijer's long association with the Freudenthal Institute, he naturally describes how design principles from Realistic Mathematics Education (RME) provide the kind of instructional design framework for creating a local instruction theory.

Design Research and RME

Some curricula and instructional strategies are developed and then tested with treatment and control groups to measure their effectiveness. That's not design research, and it's not how RME has been developed. Instead, design research consists of cyclical iterations of thought experiments, teaching experiments, and retrospective analyses. It's similar to how teachers improve their instruction as they gain experience: they plan an activity for year one, conduct that activity, then reflect on it so it will be better in year two. Of course, a team of researchers who are carefully theorizing, observing, collecting data, and analyzing the results across multiple classrooms can improve tasks and instruction more quickly and effectively than a teacher can alone.

Gravemeijer describes the design research he conducted with Paul Cobb and others around the development of mental computation strategies for addition and subtraction with elementary students. There are numerous papers and at least part of one dissertation all related to this work, so I won't describe it here. I will, however, describe the three RME design principles that Gravemeijer cites as helping form the local instruction theory that guided the design research process.

Guided Reinvention

Hans Freudenthal (1973) believed mathematics is best learned when students get to experience a process of learning that's similar to the way the mathematics was invented.

If mathematics is to be applied, applying mathematics should be taught and learned. Applying is often interpreted, as mentioned above, as substituting numerical values for parameters in general theorems and theories. This is a misleading terminology. Mathematics is applied by creating it anew each time -- I will expound this in more detail too. This activity can never be exercised by learning mathematics as a ready-made product. Drilling algorithms may be indispensable, but inventing problems to drill algorithms does not create opportunities to teach applying mathematics. This so-called applied mathematics lacks the flexibility of good mathematics. (Freudenthal, 1973, p. 118)

I've heard criticisms of this approach: "How in the world can a student reinvent mathematics that took mathematicians hundreds of years to understand?" That's a valid question, and the best answer is: "Through carefully designed curriculum and instruction." The goal is not to replicate the invention of the mathematics, but to learn from history how a mathematical idea might be constructed in the mind of a student. Of course, this takes an extensive and special knowledge of the history of mathematics, which largely explains why Freudenthal's Mathematics as an Educational Task is almost 700 pages long.

Didactical Phenomenology

The concept of didactical phenomenology relates a mathematical "thought thing" to the phenomenon it describes. This is not a theory I know well, but one I hope to study more in the future.

Mathematical concepts, structures, and ideas serve to organise phenomena -- phenomena from the concrete world as well as from mathematics -- and in the past I have illustrated this by many examples. By means of geometrical figures like triangle, parallelogram, rhombus, or square, one succeeds in organising the world of contour phenomena; numbers organise the phenomenon of quantity. On a higher level the phenomenon of geometrical figure is organised by means of geometrical constructions and proofs, the phenomenon "number" is organised by means of the decimal system. So it goes in mathematics up to the highest levels: continuing abstraction brings similar looking mathematical phenomena under one concept -- group, field, topological space, deduction, induction, and so on. (Freudenthal, 1983, p. 28)

Traditionally we teach abstract mathematics and then find examples to make the mathematics concrete for students. With didactical phenomenology, we focus on progressive mathematization, which Gravemeijer describes as "looking for phenomena that might create opportunities for the learner to constitute the mental object that is being mathematized" (p. 116). Yes, it's hard to understand without a lot of specific examples, and that's why Freudenthal wrote almost 600 pages on this topic. It's all in a book I have yet to read, so I'll forgive myself for not giving a better description here.

Emergent Modeling

I can best describe emergent modeling with an example. Imagine an elementary class learning about fractions. Instead of giving students a formal model (like a numerator and denominator), emergent modeling says we should let students reach these models informally and progressively. If a task involves sharing parts of cookies among students, they might begin by breaking apart actual cookies. Once they realize this isn't convenient, students might move to drawing cookies on paper. At some point they'll realize that drawing all the details of the cookie isn't necessary and just use a circle to represent a cookie. Up until this point, these are all models-of a cookie. The key step in this process is when students start using circles to model other contextual situations, like working with fractions of time, money, space, etc. Now the circle is a model-for a part-whole relationship, not a representation of a specific object like a cookie. These models-for have the power to generalize to other contexts, and eventually students no longer need the circle, relying instead on formal mathematics to represent and work with fractions. Gravemeijer describes a similar process in this paper, except with how bead strings, unifix cubes, and rulers can lead to marked and empty number lines as students develop ideas of cardinality, ordinality, and distance while learning mental strategies for addition and subtraction.
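
To put a little arithmetic behind that progression (my own toy example, not one from Gravemeijer's paper): suppose a task asks students to share 3 cookies fairly among 4 children. Working with drawn circles, each child ends up with \(\frac{3}{4}\) of a cookie, and at that point the partitioned circle is a model-of the cookie situation. The circle becomes a model-for part-whole reasoning when students reuse the same diagram in a completely different context, say figuring out \(\frac{3}{4}\) of an hour: \(\frac{3}{4} \times 60 = 45\) minutes. Nothing about the circle is "cookie" anymore; it now carries the general part-whole relationship.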

Conclusion

I hope by now you have some sense of what a local instruction theory is. The three RME principles above -- guided reinvention, didactical phenomenology, and emergent modeling -- do not describe a detailed instructional sequence of tasks and instructions for a teacher. They are, however, a way of theorizing how a particular instructional sequence should work, grounded in the design research conducted by Gravemeijer et al. This kind of local instruction theory is what allows teachers to design hypothetical learning trajectories that focus on the construction of student understanding, and it provides some common ground for helping teachers become better at hypothesizing trajectories.

References

Freudenthal, H. (1973). Mathematics as an educational task. Dordrecht, The Netherlands: D. Reidel.

Freudenthal, H. (1983). Didactical phenomenology of mathematical structures. Dordrecht, The Netherlands: D. Reidel.

Gravemeijer, K. (2004). Local instruction theories as means of support for teachers in reform mathematics education. Mathematical Thinking and Learning, 6(2), 105–128. doi:10.1207/s15327833mtl0602_3

Simon, M. A. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26(2), 114–145. doi:10.2307/749205

RYSK: Simon's Reconstructing Mathematics Pedagogy from a Constructivist Perspective (1995)

This is the 11th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps.

After reading Clements & Sarama's (2004) Learning Trajectories in Mathematics Education a few days ago, I wanted to go back to the origins of learning trajectories: a 1995 paper from Martin Simon that explored how mathematics can and should be taught differently with a constructivist mindset. Simon is a professor of math education at NYU and has a history of researching how students and their teachers come to understand their mathematical knowledge.

From the start, I appreciated two strengths of this article: Simon's clear writing and the relatively straightforward description of constructivism he offers. When you're still trying to sort out what constructivism is and is not (like me and many classroom teachers), it's a whole lot easier to parse this article from 1995 than, say, a heavily theoretical, mid-2000s piece by Jim Greeno. Recognizing that there are multiple (and often subtly different) ways to describe constructivism, Simon lays out his interpretation like this:

Constructivism derives from a philosophical position that we as human beings have no access to an objective reality, that is, a reality independent of our way of knowing it. Rather, we construct our knowledge of our world from our perceptions and experiences, which are themselves mediated through our previous knowledge. Learning is the process by which human beings adapt to their experiential world. (p. 115)

So when we have an idea that "works," meaning it does what we need it to do to make sense of our experiences, then we've constructed knowledge. This can be a tough sell to the mathematically-minded who say things like, "I didn't construct two plus two equals four. That's an objective fact." In response, the constructivist would disagree that we have access to objective reality, but acknowledge that we (almost?) universally construct the knowledge that \(2 + 2 = 4\) because we never experience evidence suggesting otherwise -- evidence that would create what Simon and others refer to as disequilibrium.

There's also a theoretical debate about the construction of knowledge as an individual, cognitive process versus a social process. This is an interesting debate, to be sure, and it has been pushing the leading edges of theories for learning mathematics for about the past twenty years. If you're wearing a theoretician hat, then you care deeply about how this debate might be won. But if you're wearing a researcher hat (like Simon is here), then you use both theories to help you gain whatever insights they might afford you. Simon (crediting work by Cobb, Yackel, and Wood) calls this coordination of psychological and sociological approaches "social constructivism," and compares it to how a physicist can better explain the nature of light by considering it both a particle and a wave.

Simon takes pains in this article to separate the theory of constructivism from the notion of "constructivist teaching." Conflating the two is a mistake I've seen and heard many times, and it's important to understand the differences and nuances. Simon states:

As I stated above, constructivism, as an epistemological theory, does not define a particular way of teaching. It describes knowledge development whether or not there is a teacher present or teaching is going on. ... There is no simple function that maps teaching methodology onto constructivist principles. A constructivist epistemology does not determine the appropriateness or inappropriateness of teaching strategies. ... The commonly used misnomer, "constructivist teaching," [suggests that] constructivism offers one set notion of how to teach. The question of whether teaching is "constructivist" is not a useful one and diverts attention from the more important question of how effective it is. From a theoretical perspective, the question that needs attention is, In what ways can constructivism contribute to the development of useful theoretical frameworks for mathematics pedagogy? (p. 117)

Using this perspective and a lot of theoretical support from work done in the early 1990s, Simon sets out to explore "the ongoing and inherent challenge to integrate the teacher's goals and direction for learning with the trajectory of students' mathematical thinking and learning" (p. 121, emphasis in original). Unlike a traditional perspective, where the pedagogical focus tended towards chopping mathematical content into manageable pieces to be demonstrated and practiced, Simon wished to focus on student understanding and a plan for mathematical tasks that improved that understanding.

I won't describe Simon's teaching experiment in great detail (after all, it was data-rich enough for Simon to publish multiple papers), but it involved a group of preservice elementary teachers and a set of tasks designed to elicit understandings about how multiplication related to the simple area formula \(A = l \times w\). Simon knew that his students had no trouble multiplying or using the formula. That wasn't the problem. Instead, he gave them this task:

Determine how many rectangles, of the size and shape of the rectangle that you were given, could fit on the top surface of your table. Rectangles cannot be overlapped, cannot be cut, nor can they overlap the edges of the table. Be prepared to describe to the class how you solved this problem. (p. 123)

Students used their rectangle (I'm imagining an index card) to measure the length and width of their table. A few groups questioned whether the rectangle should maintain its orientation, or if the long edge should always align with the edge of the table. This launched a class discussion, and Simon pushed students to explain how they found the area without defaulting to "I used the formula." Some students talked about rows and columns, some talked about counting rectangles, but comments about "overlapping" rectangles suggested that misunderstandings remained. Compounding the problem was that these students were not accustomed to providing this level of justification.

Simon tried varying the task to elicit better student explanations. Some students seemed to get it while others still struggled or remained silent. (The transcript excerpts in Simon's paper are very valuable here, if you can get a copy.) Simon began to worry that students were misunderstanding area itself, not just how multiplication relates to rectangle area, so he assigned a second task about finding the area of an irregular shape. This was less of a problem for students, so Simon returned to the "turned rectangle" problem and tried another activity measuring tables, both with rectangular cards and with sticks. Some students were stuck in their thinking that the area unit must be the size and shape of the card, while others began to see how using the long edge of the card to measure both the length and width of the table created new, square units not shaped like the card.
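
To make the square-unit issue concrete, here's a quick worked example with dimensions I'm making up (they're not from Simon's paper). Say the card measures 5 inches by 3 inches and the tabletop measures 60 inches by 36 inches. Keeping the card in one orientation, it fits \(60 \div 5 = 12\) times along the length and \(36 \div 3 = 12\) times along the width, so \(12 \times 12 = 144\) cards cover the table. But if a student uses the long edge of the card to measure both dimensions, they get \(60 \div 5 = 12\) and \(36 \div 5 = 7.2\), and the product \(12 \times 7.2 = 86.4\) counts 5-inch squares, not cards. The two calculations agree on the table's area -- \(144 \times 15 = 86.4 \times 25 = 2160\) square inches -- but the unit has quietly changed, which is exactly the conceptual issue Simon was pressing his students to notice.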

All these classes were observed by researchers who took notes and videotaped the classroom activity. Simon also kept a journal of his reflections after each lesson and planning session. Following the teaching experiment, Simon analyzed his role as the decision maker in the classroom activities. First, he had hypothesized that his students would otherwise be satisfied with knowing and using a formula for area, but had probably never explored why the formula worked. This hypothesis was based on prior experience with similar students, prior research, and pretesting. Simon carefully thought out what he expected to happen in his initial activity, saying this thinking

provides an example of the reflexive relationship between the teacher's design of activities and consideration of the thinking that students might engage in as they participate in those activities. The consideration of the learning goal, the learning activities, and the thinking and learning in which students might engage make up the hypothetical learning trajectory, a key part of the Mathematical Learning Cycle described in the next section. (p. 133)

The "Mathematical Learning Cycle" Simon, in a simplified way, suggests how a teacher's knowledge can be used to create a hypothetical learning trajectory (containing a learning goal, a plan for learning activities, and a hypothesis about the learning process), and how assessment of student knowledge gives the teacher new and better knowledge upon which to refine the hypothetical learning trajectory. (I can't help but wonder if Simon once thought he'd be remembered for "learning cycles," not "learning trajectories.") The trajectory as planned is always hypothetical because it is just the teacher's prediction and the true trajectory cannot be known in advance. Modification of the trajectory happens as the teacher increases his/her knowledge about what students understand, which can be during a planning session between lessons or on-the-fly during a classroom activity. Of course, the more knowledge a teacher has in advance -- about their students, about the mathematical content, and about theories of learning that content -- the better the hypothetical learning trajectory can be. Sometimes we don't have all the information we want, says Simon:

As a teacher, I often do not have a well-developed map of the mathematical conceptual area in which I am engaging my students; that is, I may not have fully articulated for myself (or found in the literature) the specific connections that constitute understanding or the nature of development of understanding in that area. ... Thus, in such cases, my operational definition of understanding is the ability to overcome these particular difficulties; I may not have unpacked the difficulties in order to understand the conceptual issues that are implicated. Thus, even if I do not have a thorough knowledge of what constitutes mathematical understanding in a particular domain, having a rich set of problem situations that challenge students and having knowledge of conceptual difficulties that they typically encounter provide me with an approximation that lets me be reasonably effective in promoting learning in the absence of more elaborated knowledge. (This is not to suggest that the more elaborated understanding would not be more powerful.) (p. 139)

In his summary, Simon reiterates some major themes:
  1. Student understanding is prioritized in the design of instruction
  2. Teachers learn as students learn
  3. Planning instruction includes the creation of a hypothetical learning trajectory
  4. Because of #2, teachers need to constantly revise #3
Lastly, Simon emphasizes the challenge of teaching using the methods and example he's described. "Teachers will need access to relevant research on children's mathematical thinking, innovative curriculum materials, and ongoing professional support in order to meet the demands of this role" (pp. 142-143). I plan on summarizing more work on learning trajectories, so hopefully I can provide a little bit of that needed support.

References

Simon, M. A. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26(2), 114–145. doi:10.2307/749205

Scholarly Reading Strategies

While I welcome greater diversity in higher education, I recognize graduate studies aren't for everybody. More specifically, I'd suggest you think twice about a PhD if you're the kind of person who doesn't like to read. The written word is the stuff on which academia survives and thrives, and as such many more scholarly words are produced than any one person could possibly read. Yet our work depends on reading huge chunks of scholarly literature.

I was only a few weeks into my first semester as a PhD student when I realized there were going to be times when I couldn't finish all of the assigned readings for class. Thankfully, the ever-kind Elizabeth Dutro addressed this problem in class and told us all that this was okay. Yes, sometimes there were things we'd need to understand in great detail, but other times it was enough to just gain familiarity with an article in case we needed to refer to it later. Some texts (for me, Foucault comes to mind) need multiple readings before they make any coherent sense.

I discussed this with my advisor at the time, Finbarr (Barry) Sloane. Knowing that he was a voracious reader with incredible retention and memory (Vicki Hand once told me she wished her internet connected directly to Barry's brain), I asked if he had any special reading strategies. This is essentially what he told me:

I read things three times. The first time I just read and get a sense for the article. The second time I read for details, take notes, and make connections. On the third reading, I read the article out-of-order. If I can read paragraphs or sections at random and understand them without having to re-read the surrounding context, then I know I understand it.

Now I understood why Barry's knowledge of the literature was so strong. Unfortunately, I also understood why he routinely got only a few hours of sleep every night -- all that reading and re-reading takes time. He wasn't shy about his love of reading; he said that while in graduate school in the mid-1980s, he read every single article in the Journal for Research in Mathematics Education since its first publication in 1970. That's intense.

Maybe I can't read every JRME article three times between now and my comprehensive exams, but I do need to make the most of my comps readings. So long as the quantity of reading doesn't overwhelm me, my three-part strategy will be (a) read, (b) read for detail and take notes, and (c) blog a summary. That's the approach I took with my last post and I felt very good about it. (It helped that the Clements & Sarama article was less than 10 pages long.) The written part of my comprehensive exam gives me a week to answer three questions with essays/reports of about 8-10 pages each. I figure the more I've written on my blog, the more prepared I'll be to write for comps. There's also the side benefit of giving my advisor a convenient way to keep up with my preparation while he's traveling during his sabbatical this semester. I'd love to blog about at least four or five readings a week, and you'll be the first to know if I can keep up that pace.

RYSK: Clements & Sarama's Learning Trajectories in Mathematics Education (2004)

This is the tenth in a series describing "Research You Should Know" (RYSK) and part of my OpenComps.

When I shared my comps reading list with my committee, Bill Penuel quickly replied with the suggestion that I read this article about learning trajectories by Doug Clements and Julie Sarama. I'd seen Clements present on this topic at last year's RME conference (which focused on learning trajectories/progressions), and I recognized the paper as something I found last spring too late in the writing of a final paper to really read and process, so I am happy to return to it now.

At their most basic, learning trajectories can be thought of as sequences of tasks and activities aimed at the progressive development of mathematical thinking and skill. This appeals to me because, quite frankly, I'm not all that great at focusing on single mathematical tasks. Even with a great task, I find myself wondering, "Where in the curriculum does this task fit? What should students know and be able to do before attempting it? Once students complete this task, what new thing are they ready for?" You could say I get a bit distracted in an effort to see the big picture, a habit of mine that's not necessarily new. Learning trajectories are one way of thinking about curriculum on a larger scale, and the better I understand them, the more organized my thinking can be.

Clements & Sarama find the roots of learning trajectories in a 1995 paper by Martin Simon titled Reconstructing Mathematics Pedagogy from a Constructivist Perspective (which I should also add to my comps reading list, I'm sure). While it's certainly possible to create a learning trajectory thinking only about instructional tasks, Clements & Sarama stress the interconnections between the instructional sequence and the psychological developmental progression of students. As teachers, sometimes we make the mistake of breaking down an instructional sequence according to the structure of the mathematics, which may or may not reflect the ways students will actually construct their mathematical knowledge. To avoid this mistake, Clements & Sarama suggest designing learning trajectories using this three-stage process:

  1. Specify a research-based learning model that describes how students construct the mathematical knowledge needed for the trajectory. I think this is a tough task for teachers, both because the specific models in the research are not widely known and understood and because there are surely many areas of mathematics for which specific learning models have not been thoroughly studied.
  2. Select key mathematical tasks to promote learning at each level of students' psychological development. Again, it takes the help of research to judge if a task truly targets a certain level of development or not.
  3. Complete the hypothetical learning trajectory by sequencing the tasks to match the students' developmental progression.

Of course, the completed learning trajectory should (a) take advantage of specific and relevant cultural knowledge and practices of your students and (b) be subjected to repeated revision and refinement. Clements & Sarama do not understate the potential of well-constructed learning trajectories:

The enactment of an effective, complete learning trajectory can actually alter developmental progressions or expectations previously established by psychological studies because it opens up new paths for learning and development. This, of course, reflects the traditional debate between Vygotsky (1934/1986) and Piaget and Szeminska (1952) regarding the priority of development over learning. We believe that learning trajectory research, along with other research corpi, suggests the Vygotskian position that, at least in some domains and some ways, learning and teaching tasks can change the course of development. (p. 84)

Finally, Clements & Sarama make two more recommendations regarding the creation of learning trajectories. First, think carefully about how a trajectory might work for an individual student (following a more cognitive theoretical approach) and also how it might work for a class, complete with student interactions and classroom discourse (a more sociocultural theoretical approach). Second, recognize that these trajectories are always hypothetical and will work best when teachers take the time to create and re-create them to fit their students.

References

Clements, D. H., & Sarama, J. (2004). Learning trajectories in mathematics education. Mathematical Thinking and Learning, 6(2), 81–89. doi:10.1207/s15327833mtl0602_1

Piaget, J. & Szeminska, A. (1952). The child's conception of number. London, UK: Routledge and Kegan Paul.

Simon, M. A. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26(2), 114–145. doi:10.2307/749205

Vygotsky, L. S. (1934/1986). Thought and language. Cambridge, MA: MIT Press.

Project OpenComps

This semester I'll be taking my comprehensive exams, or "comps." As a first-generation college student from the working-class rural Midwest, this is pretty unknown territory for me. I remember being a naive undergraduate who had to ask what masters and doctorates were, and when I started my PhD program I had to ask similarly naive questions about the mysterious and vaguely threatening-sounding comps. Quite simply, comps is my opportunity to show a committee of faculty members that I have the knowledge and skills to take on my own research -- namely, my dissertation. Yes, there are written and oral examinations, but it's the process of working with a committee of faculty to both narrow my focus and double-check that I know what I should know that makes comps valuable.

Thankfully, I've been able to watch other graduate students prepare for and take their comps (usually passing, but not always) and now it's time to prepare for mine. I'm going to share that process and preparation with you and tag things #opencomps along the way. You might consider this a step in the direction of something like Hack the Dissertation. I've learned that the entire comps process can vary from program to program, so I can only really describe what it's like for a math education student in CU-Boulder's School of Education. Let's recap how I got this far:

  • With a BA in Mathematics (Teaching) from the University of Northern Iowa and six years teaching high school math, I decided to go to grad school. Having missed the admissions deadline for a master's program, I spent a fall semester as a continuing education student and was admitted into CU-Boulder's master's program for the spring. It went remarkably smoothly, thanks to the help of my advisor David Webb.
  • I expressed an interest in the PhD program and was encouraged to apply. I got recommendations from my current professors and good (enough) GRE scores to be accepted. This meant abandoning the master's program, but thankfully many of the credits I earned transferred to the PhD program.
  • My first year in the PhD program was spent in the "core," the set of six courses every cohort of incoming doctoral students takes in the School of Ed. Those courses include two semesters of quantitative methods, two semesters of qualitative methods, a course on theoretical perspectives on social science research, and a course on education research and policy. I took a seventh core course, covering multicultural education, in the first semester of my second year.
  • I focused the rest of my second-year coursework on two areas: math education (a course on algebra and a course on theories of mathematical learning) and educational measurement (a course on survey research with an introduction to item response theory, and an advanced measurement course with more IRT and generalizability theory).

I'm required to have 56 hours of coursework (not including dissertation credits) for a PhD. In some programs you need to finish those classes before comps, but in my area it's okay to just be close to 56 so long as the coursework provides the necessary foundation. With over 50 credits under my belt, my advisor says I'm ready, so I've taken the first two steps this semester towards comps. First, I needed to choose a committee of three faculty members. My first choice was easy -- my advisor David Webb. I can trust David to make sure I'm ready in the areas of math education and classroom assessment. Also on my committee is Derek Briggs, who will surely hold me to task in the areas of quantitative methods, validity, and causal inference. Derek didn't actually teach my core quantitative classes, but I took my measurement courses from him and enjoyed working with him. Due to my wandering interests, the third choice wasn't so easy. (Someone in policy? Qualitative methods? Stats ed? Learning sciences?) I went out on a limb a bit and chose someone I've never taken a class from: Bill Penuel. I got to know Bill a bit last spring during some facilities work, and I'm working for him this semester on a project that combines many of my interests: math ed, professional development, technology, pedagogy, task design, and assessment. I like what I've seen of the project so far and think working more closely with Bill will be a very good thing.

The second step I've taken towards comps this semester was to assemble a reading list. Basically, the reading list contains what I've read for my classes and what I've cited in papers and it gives my committee a place to look for holes in my knowledge. Thanks to Mendeley and careful curation over the past two years, the list wasn't too difficult to assemble. It's long and looking at the 40+ pages of references made me not feel so bad about not reading much over the summer. Take a look at my reading list for yourself, and feel free to ask about anything there, or suggest something you think might interest me!