Come Study With Me

Have you ever thought about pursuing a PhD in mathematics education? It's application time here at CU-Boulder and I'd like to encourage you to apply. When I applied, I had no idea what to expect of a PhD program, or why I should consider one program over another. I feel fortunate to have landed a spot in a high-quality program in my now-home state, but for many of our students it's worth a move across the country to be in Boulder to work with some of the best faculty you'd find anywhere. If you apply to CU's School of Education, here's some of what you should expect.

If you're accepted, you should hear from us in February and be invited to a recruiting weekend in March. I'm guessing we may want to accept 2-4 math ed students this year, but that all depends on the quality of the applicants, both for mathematics education and for all the other programs. At the recruiting weekend you'll meet the faculty, tour the campus, and spend time talking to your future advisor. (More on those people later.)

If you choose to attend CU-Boulder and enter the program, you'll be required to take on graduate studies full-time. You'll take most of your first-year courses as part of a cohort, and you'll have an assistantship that will likely include doing research, teaching courses, or supervising student teachers. There are many PhD programs that allow for part-time study over a decade or more to complete a degree. We're not one of them. Instead, you get to make graduate school your full-time focus, and in return the university promises to support you with an assistantship and tuition credits for at least three years. No, you won't get rich in grad school, but for many students it allows them to pay the bills and avoid racking up debt.

There are five program areas in the School of Education: Ed Psych/Learning Sciences, Foundations and Policy, Equity and Cultural Diversity, Research and Evaluation Methodology, and Curriculum & Instruction. Mathematics education fits into C&I along with science education and literacy studies. During your first year you'll be exposed to people and ideas from all the program areas as you learn the research methods and foundations that will prepare you for your future studies.

We have three faculty members in math education. Over the course of your studies you'll be advised by one or more of them, you may have an assistantship with one of them, and you'll probably take a course from each of them. Here's a little bit about each:

David Webb (my advisor) has interests in mathematics curriculum, assessment, computer science, and Realistic Mathematics Education (RME). RME is a curriculum design philosophy originating with Dutch mathematician Hans Freudenthal, and when David came to CU from Wisconsin he brought the Freudenthal Institute US with him as its Executive Director. If you study math ed at CU-Boulder, you're also studying at the Freudenthal Institute, and RME gives us a common foundation for how we conceptualize the teaching of mathematics and the design of curriculum.

Vicki Hand's primary interests are in learning theory and equity. She combines the two in powerful ways to describe how math can be taught to and learned by all students. It's tough but critical work. I was fortunate to take Vicki's "Theories of Math and Science Learning" course last spring and it was one of the best courses I've had at CU. Even better, Vicki is an absolute joy to be around.

The newest addition to our math ed faculty is Edd Taylor, who will join us this spring. Not only will Edd give us the extra elementary math ed experience and knowledge we've been looking for, but his interests seem to be a natural bridge between David and Vicki's areas of expertise. Some of Edd's recent work involves working with teachers to understand and modify their curriculum to better meet the needs of culturally diverse learners. We're all excited to have him join us and I can't wait to take a class with him next semester.

While those are the main three math ed faculty, you'll encounter other faculty with knowledge or interests in math ed. For example, Kim Bunning is one of the master teachers in our CU Teach program and is herself a product of our math ed PhD program. Bill Penuel is a learning scientist with a science background, but has several projects (including one I'm on) that reach into mathematics education. Margaret Eisenhart has long studied how to interest women and minorities in STEM fields. If you have any interest in statistics and quantitative research methods, we have some of the best faculty anywhere, including Derek Briggs, Greg Camilli, Andrew Maul, and our Dean, Lorrie Shepard. So while we might not have the biggest program around, we make up for it with high-quality experiences and high-quality people school-wide.

Last but not least are the other math ed graduate students. Of those who aren't busy finishing their dissertations, you'll come to know:

  • Michael Matassa, a former middle school math teacher and instructional coach who teaches elementary methods and is interested in researching mathematics teaching;
  • Bill Campbell, a former elementary teacher with broad educational interests and knowledge whose research interests lie in RME and elementary math education;
  • Louisa Harris, a math-turned-math ed doctoral student interested in the challenges for women studying math in college and graduate school;
  • Ryan Grover, another math-turned-math ed doctoral student interested in RME approaches in undergraduate mathematics;
  • Ian Her Many Horses, a former computer science and math teacher who's pioneering a route in the emerging field of computer science education;
  • Fred Peck, a former high school math teacher who's interested in better understanding how students progress from informal to formal mathematical understanding; and
  • Vinnie Basil, a former science and math teacher who is interested in educational equity and integrated approaches to math and science curriculum.


So How Do I Apply?

The application process isn't horribly difficult, but you'll have to act quickly if you don't already have your application materials put together. You'll need things like your GRE scores, reference letters, transcripts, and a personal statement. Although the personal statement isn't very long, it's your opportunity to show that you have the kind of writing skill you can later apply to your dissertation. That's important! And of course, since you're applying in math ed, we expect your GRE math score to be pretty good. Applications are due January 1, and you should follow these instructions and not be afraid to ask questions!

OpenComps: Candidate Status Unlocked. Loading Next Level...

I'm more than a week tardy in reporting this, but my oral examination went well and I've transitioned from "PhD student" to "PhD candidate." In other words, I passed my comprehensive exams. Apparently the title isn't universal (I've heard some schools progress you from "candidate" to "Candidate," changing only the capitalization), but what it means is that my focus and responsibility shifts away from coursework and onto my own research.

Normally this means I'd be taking few, if any, classes next semester and working on a prospectus. But as luck would have it, the School of Education is chock-full of great course offerings in the spring. So I'll be taking a full slate of courses: Language Issues in Education Research, Research on Teaching and Teacher Education, and Advanced Topics in Mathematics Education. Throw in our departmental seminar and the five dissertation hours we're required to carry each semester, and it looks like I'll be scheduled for 15 credit hours. Which is a lot for a doctoral student, er, candidate.

Realistically, this means my prospectus will probably wait until summer. That shouldn't be an inconvenience. It's going to take me a while to focus in on a research question anyway, and I think a combination of working on Bill Penuel's Inquiry Hub project and taking the Research on Teaching and Teacher Education class with Dan Liston and Jennie Whitcomb will give me plenty to think about. I am very interested in issues of research to practice, which means I need to look more at Paul Cobb's latest work, Cynthia Coburn's work, and keep working with Bill on Design-Based Implementation Research. I also want to learn more about how and why teachers modify their curriculum, which means getting up-to-date with the work of people like Janine Remillard and Corey Drake. The better I understand the current boundaries of work in these areas, the better I'll know what direction my work should go.

OpenComps: Written Exam Down, Oral Exam to Go

About two weeks ago I submitted my written responses to my comprehensive exam questions. I can't go into detail about the questions, but I'll summarize them this way:
  1. Here's a dichotomy from the learning sciences. Deal with it.
  2. Somebody did a quantitative study X and now wants to do Y. Before you think Y is a good idea, what do you have to know about X?
  3. How would you help math teachers learn about X given conditions Y?
I hadn't quite anticipated Question 1, so there was some background work to do before I could address certain details. Thankfully, I was pretty well prepared to structure my argument, and it was on this question that I did my best writing. While I'd had dreams of finishing a couple of questions before the end of the weekend, my actual pace was slower than that. A lot slower. By the end of Friday, I'd written about a paragraph, and by the end of Saturday, I'd written about a page. Fortunately, that was the foothold I needed to finish the rest of the 9-page paper on Sunday.

Next I answered Question 2. In some ways this was the question that worried me the most, but my studying definitely helped. Still, my writing was slow, and it wasn't until late Wednesday that I had this question finished. When you have three questions to answer in seven days, taking six days to answer the first two is less than ideal.

That left me to answer Question 3 in a bit of a writing sprint, starting in the wee hours of Thursday morning, breaking to attend and teach class Thursday afternoon and evening, and then writing until 7am Friday to finish. Question 3 was my advisor's question and the one for which I was most prepared; in fact, a couple of pages were largely a rehash of things I'd blogged about in the past. Having that strong start certainly helped the rest of the paper take shape rather quickly.

It was a relief to reach the end of comps week, but I couldn't get too much rest because I had put off a number of things (okay, almost everything) during comps and in the weeks leading up to comps. Professors and fellow students are very understanding about it, which is great, but I wasn't entirely comfortable using comps as an excuse to not do much else during that time. In the past two weeks (including some of every day of my fall break), I've been catching up with the class I take, the class I teach, and the research project I'm on. I haven't been blogging and my social media activity has been pretty minimal during this time, but I'm starting to feel caught up.

The last hurdle to clear is the oral examination, scheduled for this Tuesday morning. I'm not too concerned about it, and thankfully, the message from my comps committee has been not to worry. But between now and then I will be going back over my responses, double-checking the literature I cited, and reading a few new things I uncovered during the comps process. My advisor hinted at some things he wants to talk about and I'll be sure to prepare for those things, too.

OpenComps: Final Preparations

By 9 am Friday, November 2nd, my advisor will email me my three comprehensive exam questions. I have exactly a week to answer them. He says I'm prepared, and I appreciate his confidence in me. I think I'm reasonably prepared, too, and I greatly appreciate that among my numerous anxieties, test-taking isn't one of them. Far from it, in fact. See, I'm one of those mythical kids that policymakers have in mind when they come up with laws like No Child Left Behind. I'm the one who actually likes taking tests and fools himself into thinking they're just a harmless yet useful snapshot of broad academic knowledge and skill. Give me a #2 pencil and bubbles to fill in and I'll happily work for hours.

There won't be any bubbles on my comprehensive exam, but there will be hours of work. Over the past week I've been making my final preparations, most of which are designed to make next week go as smoothly as possible. A summary:

Ready My References

I think my personal library will have most of the math and learning science books I might want, but I felt like some extra perspectives and guides concerning experimental design, causal inference, and statistics might come in handy. I know I can't expect to read any of these cover-to-cover in the course of the next week, but if nothing else the examples and explanations they contain could be valuable.

Having books around is a luxury, but for this level of work, it's even more important to have a way of keeping track of the hundreds of journal articles that I might want to use in my comps responses. I've been using Mendeley as my reference manager since the spring of 2010. Regardless of what tool you use -- Zotero, RefWorks, Endnote, Papers, etc. -- it's important during any writing period to have something that allows you to focus on writing, not scrambling for citation information and digging through the APA style book.

One of the best investments I've made as a grad student has been my diligent attention to the annotation, curation, and metadata accuracy of my Mendeley library. I was somewhat lax about it during my master's year and my first year of the PhD program, but then I spent most of two weeks of a summer going back through every PDF, every book, every syllabus, and every paper I wrote to make sure I had everything neatly cataloged. And I haven't relaxed since. Right now I have 890 references in my personal library, with others in group collections, and I can find or cite any of them in just seconds.

If there's one thing I can't let myself do, it's turn my comps into a massive search for new literature. I admit, I love the thrill of the hunt, and I've spent many hours digging around in Google Scholar tracking down papers that I realistically have no time to read. I need to trust that most of what I need I already have and have already read, and keep my literature hunting to a minimum.

Minimize Distractions and Get Comfortable

For my last week before comps, I actually spent very little time studying and more time minimizing potential distractions. I've been to the grocery store, I've washed dishes and laundry, and I reformatted my computer and reinstalled the operating system, virtual machine, and software because a few things had gotten flaky after a year of hard use. I've never liked studying right before a test anyway, as any attempt to "cram" is nullified by thoughts that always begin, "If I don't know it by now...." I passed my 100-hour studying mark a week or so ago and that will have to be good enough.

I'll probably work mostly at my desktop. If your computer had three monitors, 16 GB of RAM, university broadband peaking at nearly 90Mbps up and down, and a pair of Sennheiser HD 595s, you'd probably work at it, too. I might try working some in my office, and my kitchen table is nice for when a lot of open books are involved. I don't want to be stuck in my office chair for 18 hours a day, so I plan to do some heavy thinking while running and in a pinch I can even prop my laptop up on my exercise bike.

Sometimes I work in silence, but not very often. I don't want to get distracted by moving pictures, but there are a few movies I can play for background noise without losing focus, mostly because I've seen them so many times. I can get distracted by podcasts, so I'll try to listen to those selectively over the next week. I'll listen to a lot of music, and my tastes for a task like this tend toward the incredibly gifted (Tori Amos, Curtis Mayfield, Norah Jones, Sia) and music that's downtempo/trip-hop or otherwise has a likable female vocal/bass combo (Thievery Corporation, Zero 7, Garbage). Seriously, in the midst of an important exam, who wouldn't want to perform as relaxed and confident as LouLou?

[Embedded video: LouLou]

OpenComps Get Less Open

Obviously, yet unfortunately, once I get my questions I'm pretty limited in what I can say about them. I'm not to receive outside help, solicited or unsolicited, and even after the exam is over I'm only to talk about the process in general terms. (I'm assuming that's in the event my committee members want to reuse the same or similar questions in the future.) Assuming I'm not exhausted by the process, I'll try to summarize my approach and workflow, lessons learned, and hopefully some epiphanies that come in the process of working through my questions. I'm looking forward to the week of writing and then readying myself for the oral defense, scheduled for November 27th.

OpenComps: Validity and Causal Inference

With my comprehensive exams beginning in 12 days, my studying has hit the homestretch. Thankfully, my advisor has inspired some confidence by telling me that my understanding of the math education literature is solid and I won't need any more studying in that area. That's good for my studying, and something I take as a huge compliment. So now I can focus for a while on preparing myself for the exam question Derek Briggs is likely to throw my way. Typically, one of the three people on a comps committee is tasked with asking a question related to either the quantitative or qualitative research methodology we learn in our first year of the doctoral program. Derek is a top-notch quantitative researcher, and I enjoyed taking two classes from him last year: Measurement in Survey Research and Advanced Topics in Measurement. Where this gets slightly tricky is that Derek didn't actually teach either of my first-year quantitative methods courses, so there's a chance I'll be surprised by something he normally teaches in those classes that I never saw. It's a risk I was willing to take after working with Derek more recently and more closely in the two measurement courses last year.

It certainly won't be a surprise if Derek asks a question that focuses on issues of validity and causal inference. He mentioned it to me personally and put it in a study guide, so studying it now will be time well spent. I feel like I've had a tendency to read the validity literature a bit too quickly or superficially, so this is a good opportunity for me to revisit some of the papers I've looked at over the past couple of years. Here's the list I've put together for myself:

AERA/APA/NCME. (1999). Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association. [Just the first chapter, "Validity."]

Angoff, W. H. (1988). Validity: An evolving concept. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 19–32). Hillsdale, NJ: Lawrence Erlbaum.

Borsboom, D., Cramer, A. O. J., Kievit, R. A., Scholten, A. Z., & Franic, S. (2009). The end of construct validity. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions, and applications (pp. 135–170). Information Age Publishing.

Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and uses. Educational Measurement: Issues and Practice, 22(4), 5–12. doi:10.1111/j.1745-3992.2003.tb00139.x

Chatterji, M. (2003). Designing and using tools for educational assessment. Boston, MA: Allyn & Bacon. [Chapter 3, "Quality of Assessment Results: Validity, Reliability, and Utility"]

Cronbach, L. J. (1988). Five perspectives on validity argument. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 3–17). Hillsdale, NJ: Lawrence Erlbaum.

Eisenhart, M. A., & Howe, K. R. (1992). Validity in educational research. In M. LeCompte, W. Millroy, & J. Preissle (Eds.), The handbook of qualitative research in education (pp. 642–680). San Diego, CA: Academic Press.

Gorin, J. S. (2006). Test design with cognition in mind. Educational Measurement: Issues and Practice, 25(4), 21–35. doi:10.1111/j.1745-3992.2006.00076.x

Haertel, E. H., & Herman, J. L. (2005). A historical perspective on validity arguments for accountability testing. In J. L. Herman & E. H. Haertel (Eds.), Uses and misuses of data for educational accountability and improvement (NSSE 104th., pp. 1–34). Malden, MA: Wiley-Blackwell.

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960. doi:10.2307/2289069

Kane, M. T. (1992). An argument-based approach to validity. Psychological Bulletin, 112(3), 527–535. doi:10.1037/0033-2909.112.3.527

Leighton, J. P., & Gierl, M. J. (2007). Defining and evaluating models of cognition used in educational measurement to make inferences about examinees’ thinking processes. Educational Measurement: Issues and Practice, 26(2), 3–16. doi:10.1111/j.1745-3992.2007.00090.x

Linn, R. L., & Baker, E. L. (1996). Can performance-based student assessments be psychometrically sound? In Performance-based student assessment: Challenges and possibilities (pp. 84–103). Chicago, IL: The University of Chicago Press.

Messick, S. (1988). The once and future issues of validity: Assessing the meaning and consequences of measurement. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 33–45). Hillsdale, NJ: Lawrence Erlbaum.

Michell, J. (2009). Invalidity in validity. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions, and applications (pp. 111–133). Information Age Publishing.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin. [Probably Chapters 1-3 and 11, if not more.]

Shepard, L. A. (1993). Evaluating test validity. Review of Research in Education, 19(1), 405–450.

Shepard, L. A. (1997). The centrality of test use and consequences for test validity. Educational Measurement: Issues and Practice, 16(2), 5–24. doi:10.1111/j.1745-3992.1997.tb00585.x

Zumbo, B. D. (2009). Validity as contextualized and pragmatic explanation, and its implications for validation practice. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions, and applications (pp. 65–82). Information Age Publishing.

Thankfully, I've recently read some of these papers for my Advances in Assessment course, so the amount of reading I have to do is appreciably less than it might look. In my typical fashion, I'll study these in chronological order, with the hope of getting a sense for how the field has evolved its thinking and practice regarding these ideas over the past several decades.

Although I have little other graduate school experience to compare it to, I feel like this reading list is representative of what sets a PhD apart, particularly one earned at an R1 university. It's not necessarily glamorous, and its relevance to the day-to-day teaching and learning in classrooms might not be immediately obvious. But without attending to issues like validity and causal inference, we have a much more difficult time being sure about what we know and how we're using that knowledge. Issues of validity should be at the heart of any assessment or measurement, and when they're attended to properly we greatly improve our ability to advance educational theories and practice.

Jo Boaler, Standing Tall

Last night, Jo Boaler (whose work I've written about before) took to Twitter (welcome, Jo!) to share details of "harassment and persecution" regarding her research, which she has written about on Stanford's website (PDF). Those in the math community had some understanding that this had been going on, and I applaud Boaler's decision to bring it out in the open.

I'm sure much will be said about this in the coming days, but I hope at least some small part of the conversation addresses the discoverability and sharability of academic work. When I search for "boaler railside" on Google, this is what I see:

[Screenshot: Google search results for "boaler railside"]

Instead of the first result pointing me to Boaler's 2008 article in Teachers College Record, I'm instead pointed directly to the Bishop, Clopton, and Milgram paper at the heart of this controversy. As Boaler has pointed out, it has never been published in a peer-reviewed journal. But it is published, in the modern sense, with perhaps something more important than peer review: a top ranking on Google. The second link points to Boaler's faculty profile, through which a couple of clicks will take you to Boaler's self-hosted copy of the Railside article. I'm linking directly to it here not only because it's an article you should keep and read, but because it obviously needs all the Google PageRank help it can get. The third link in my search also refers to the "refutation" of Boaler's work, although the site no longer appears to exist.

Why is Boaler's original work not easier to find? Let's look at the Copyright Agreement of Teachers College Record. According to TCR, it is their policy to "acquire copyright for all of the material published on TCRecord.org" and that such a policy "is designed to promote the widest distribution of the material appearing on TCRecord.org while simultaneously protecting the rights of authors and of TCRecord.org as the publisher." For TCR, this "widest distribution" means putting the article behind a $7 paywall -- not an extravagant amount, but enough to keep most people from reading the work, which means not linking to it and not elevating its search rankings. (A search in Google Scholar, however, returns it as the top result.) Given the attacks on Boaler and her scholarship, has this copyright policy been "protecting the rights of authors"? In Boaler's case, it's obvious it hasn't. But then again, by signing over copyright, I'm not sure exactly what rights TCR says she has left to protect.

I'm glad Boaler is sharing the article on her website. If she wasn't, I'd attempt to gain the rights to share it here, and that's not cheap:

[Screenshot: Teachers College Record permissions quote for republishing the article]

Yes, republishing the article costs $500. Is it worth it for me to pay out of my own pocket? Probably not. But is it worth $500 to the greater mathematics education community to have it more discoverable, searchable, and sharable? Given what she's gone through, is it worth it to Jo Boaler? Yes, it is, and that's why I encourage all authors to publish in open access journals or otherwise negotiate their copyright agreements to ensure greater rights over their own work, including the ability to post and share in ways that improve search rankings.

OpenComps CGI

No, I don't mean "computer-generated imagery." Or the "Clinton Global Initiative." Or "Common Gateway Interface." In the world of mathematics education, CGI stands for "Cognitively Guided Instruction," one of the most robust lines of research produced in the past several decades. If you study math education, you're probably going to study CGI. If you study math education and your advisor is from the University of Wisconsin, then you're definitely going to study CGI. Here's my reading list:

Carpenter, T. P., Fennema, E., & Franke, M. L. (1996). Cognitively guided instruction: A knowledge base for reform in primary mathematics instruction. The Elementary School Journal, 97(1), 3–20. doi:10.1086/461846

Carpenter, T. P., Fennema, E., Peterson, P. L., Chiang, C.-P., & Loef, M. (1989). Using knowledge of children’s mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26(4), 499–531. doi:10.3102/00028312026004499

Carpenter, T. P., & Moser, J. M. (1984). The acquisition of addition and subtraction concepts in grades one through three. Journal for Research in Mathematics Education, 15(3), 179–202. doi:10.2307/748348

Fennema, E., Carpenter, T. P., Franke, M. L., Levi, L., Jacobs, V. R., & Empson, S. B. (1996). A longitudinal study of learning to use children’s thinking in mathematics instruction. Journal for Research in Mathematics Education, 27(4), 403–434. doi:10.2307/749875

Franke, M. L., Carpenter, T. P., Levi, L., & Fennema, E. (2001). Capturing teachers’ generative change: A follow-up study of professional development in mathematics. American Educational Research Journal, 38(3), 653–689. doi:10.3102/00028312038003653

Knapp, N. F., & Peterson, P. L. (1995). Teachers’ interpretations of “CGI” after four years: Meanings and practices. Journal for Research in Mathematics Education, 26(1), 40–65. doi:10.2307/749227

This works out nicely because CGI also happens to be a topic of discussion this week in my "Advances in Assessment" class. (Related note: Due to Erin Furtak being out of town, Lorrie Shepard will be our "substitute teacher." That leads to the natural question: Great sub, or greatest sub?) CGI was also featured prominently in Randy Philipp's NCTM Research Handbook chapter on teacher beliefs and affect. Even though my knowledge of CGI is limited, I sense that lines of research like CGI are the stuff math education researchers dream about: long-lasting, productive, well-funded areas of study that help both students and teachers in measurable and meaningful ways.

OpenComps Study of Teacher Beliefs; MathEd.net Turns Three

A month from now I'll be in the midst of the written portion of my comprehensive exam. My last #OpenComps update (and several posts since then) listed several readings about teacher learning. With those complete, now I'm moving my attention towards teacher beliefs with the following articles and chapters:

Fennema, E., & Franke, M. L. (1992). Teachers’ knowledge and its impact. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 147–164). Reston, VA: National Council of Teachers of Mathematics.

Pajares, M. F. (1992). Teachers’ beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62(3), 307–332. doi:10.3102/00346543062003307

Philipp, R. A. (2007). Mathematics teachers’ beliefs and affect. In F. K. Lester (Ed.), Second handbook of research on mathematics teaching and learning (pp. 257–315). Charlotte, NC: Information Age.

Thompson, A. G. (1992). Teachers’ beliefs and conceptions: A synthesis of the research. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 127–146). Reston, VA: National Council of Teachers of Mathematics.

Villegas, A. M. (2007). Dispositions in teacher education: A look at social justice. Journal of Teacher Education, 58(5), 370–380. doi:10.1177/0022487107308419

Wilkins, J. L. M., & Brand, B. R. (2004). Change in preservice teachers’ beliefs: An evaluation of a mathematics methods course. School Science and Mathematics, 104(5), 226–232. doi:10.1111/j.1949-8594.2004.tb18245.x

As I usually do, I'm reading these in chronological order. I just finished the Pajares article and will next move on to Alba Thompson's well-regarded chapter from the 1992 NCTM research handbook. My advisor said I probably don't need to read the entire Fennema & Franke chapter, but there's a diagram near the end, along with the context surrounding it, that I should be aware of.

MathEd.net Turns Three

Although I've been blogging my random thoughts and personal commentary since 2001, after starting graduate school I knew I'd be blogging more about education. Three years ago today, I decided it was time to split my identity: one blog and Twitter account for professional/educational content, and a separate blog and Twitter account for personal/miscellaneous content. It's been a good decision, one that has spared many of you from numerous updates about the Cubs, college wrestling, or my infrequent travels.

I'm creeping up on 40,000 page views, which I think is pretty good given how infrequently I sometimes post and how technical some of what I'm writing about has become. It reminds me of why I started this blog: as a teacher, I was willing to have my practice improved by knowledge from research, if only I could find it. The research literature was locked behind paywalls I couldn't afford, and as a lone math teacher in a rural district, I didn't have instructional coaches or curriculum staff to help me. But I knew smart people and resources existed online, and that social tools were allowing us to come together in new ways. The best ticket for admission in that social world is one's own contributions, and I'm trying to contribute something not easily found elsewhere.

I thank you all for reading, and I look forward to what the future brings -- not only for this blog and for myself, but also where this disintermediated online sharing of educational knowledge might take us.

RYSK: Ball, Thames, & Phelps's Content Knowledge for Teaching: What Makes It Special? (2008)

This is the 17th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps. I also Storified this article as I read.

My last two posts summarized the underpinnings of Shulman's pedagogical content knowledge and Deborah Ball's early work building upon and extending Shulman's theories. Now we jump from Ball's 1988 article to one she co-authored in 2008 with University of Michigan colleagues Mark Thames and Geoffrey Phelps, titled Content Knowledge for Teaching: What Makes It Special?

This article starts by looking at the 20+ years we've had to further develop Shulman's theories of pedagogical content knowledge (PCK). Despite the theory's widespread use, Ball and colleagues claim it "has lacked definition and empirical foundation, limiting its usefulness" (p. 389). (See also Bud Talbot's 2010 blog post and related efforts.) In fact, the authors found that a third of the more than 1200 articles citing Shulman's PCK

do so without direct attention to a specific content area, instead making general claims about teacher knowledge, teacher education, or policy. Scholars have used the concept of pedagogical content knowledge as though its theoretical foundations, conceptual distinctions, and empirical testing were already well defined and universally understood. (p. 394)

To build the empirical foundation that PCK needs, Ball and her research team did a careful qualitative analysis of data that documented an entire year of teaching (including video, student work, lesson plans, notes, and reflections) for several third grade teachers. Combined with their own expertise and experience, and other tools for examining mathematical and pedagogical perspectives, the authors set out to bolster PCK from the ground up:

Hence, we decided to focus on the work of teaching. What do teachers need to do in teaching mathematics -- by virtue of being responsible for the teaching and learning of content -- and how does this work demand mathematical reasoning, insight, understanding, and skill? Instead of starting with the curriculum, or with standards for student learning, we study the work that teaching entails. In other words, although we examine particular teachers and students at given moments in time, our focus is on what this actual instruction suggests for a detailed job description. (p. 395)

For Ball et al., this includes everything from lesson planning and grading to communicating with parents and dealing with administration. With all this information, the authors are able to sharpen Shulman's PCK into more clearly defined (and in some cases, new) "Domains of Mathematical Knowledge for Teaching." Under subject matter knowledge, the authors identify three domains:
  • Common content knowledge (CCK)
  • Specialized content knowledge (SCK)
  • Horizon content knowledge

And under pedagogical content knowledge, the authors identify three more domains:
  • Knowledge of content and students (KCS)
  • Knowledge of content and teaching (KCT)
  • Knowledge of content and curriculum

Ball et al. describe each domain and use some examples to illustrate, mostly from arithmetic. For my explanation, I'll instead use something from high school algebra and describe how each domain applied to the growth of my knowledge over my teaching career.

Common Content Knowledge (CCK)

Ball et al. describe CCK as the subject-specific knowledge needed to solve mathematics problems. The reason it's called "common" is because this knowledge is not specific to teaching -- non-teachers are likely to have it and use it. Obviously, this knowledge is critical for a teacher, because it's awfully difficult and inefficient to try to teach what you don't know yourself. As an example of CCK, my knowledge includes the understanding that \((x + y)^2 = x^2 + 2xy + y^2\). I've known this since high school, and I would have known it whether or not I became a math teacher.

Specialized Content Knowledge (SCK)

SCK is described by Ball et al. as "mathematical knowledge and skill unique to teaching" (p. 400). Not only do teachers need this knowledge to teach effectively, but it's probably not needed for any other purpose. For my example, I need to have a specialized understanding of how \((x+y)^2\) can be expanded using FOIL or modeled geometrically with a square. It may not be all that important for students to understand both the algebraic and geometric ways of representing this problem, but I need to know both so I can better understand student strategies and sources of error -- namely, the error that \((x + y)^2 = x^2 + y^2\).
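
To make the two representations concrete (a worked example of my own, not one from the article), FOIL tracks the four partial products that the area model displays as the four regions of a square with side length \(x + y\):

\[
(x+y)^2 = (x+y)(x+y) = \underbrace{x^2}_{\text{First}} + \underbrace{xy}_{\text{Outer}} + \underbrace{yx}_{\text{Inner}} + \underbrace{y^2}_{\text{Last}} = x^2 + 2xy + y^2
\]

The area model makes the middle term visible as the two \(xy\) rectangles, and a quick numeric check exposes the common error: with \(x = 1\) and \(y = 2\), \((1+2)^2 = 9\), while \(1^2 + 2^2 = 5\).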

Horizon Content Knowledge

This domain was provisionally included by the authors and described as, "an awareness of how mathematical topics are related over the span of mathematics included in the curriculum" (p. 403). For my example of \((x + y)^2 = x^2 + 2xy + y^2\), I need to understand how previous topics like order of operations, exponents, and the distributive property relate to this problem. Looking forward, I need to understand how this problem relates to factoring polynomials and working with rational expressions.

Knowledge of Content and Students (KCS)

This is "knowledge that combines knowing about students and knowing about mathematics" (p. 401) and helps teachers predict student thinking. KCS is what allows me to expect students to incorrectly think \((x + y)^2 = x^2 + y^2\), and to tie that to misconceptions about the distributive property and exponents. I'm not sure I had this knowledge for this example when I started teaching, but it didn't take me long to figure out that it was a very common student mistake.

Knowledge of Content and Teaching (KCT)

Ball et al. say KCT "combines knowing about teaching and knowing about mathematics" (p. 401). While KCS gave me insight about why students mistakenly think \((x + y)^2 = x^2 + y^2\), KCT is the knowledge that allows me to decide what to do about it. For me, this meant choosing a geometric representation for instruction over using FOIL, which lacks the geometric representation and does little to address the problem if students never recognize that \((x + y)^2 = (x + y)(x + y)\).

Knowledge of Content and Curriculum

For some reason, Ball et al. include this domain in a figure in their paper but never describe it explicitly. They do, however, scatter enough comments about knowledge of content and curriculum to imply that teachers need a knowledge of the available materials they can use to support student learning. For my example, I know that CPM uses a geometric model for multiplying binomials, Algebra Tiles/Models can be used to support that model, virtual tiles are available at the National Library of Virtual Manipulatives (NLVM), and the Freudenthal Institute has an applet that allows students to interact with different combinations of constants and variables when multiplying polynomials.

Some of the above can be hard to distinguish, but thankfully Ball and colleagues clarify by saying:

In other words, recognizing a wrong answer is common content knowledge (CCK), whereas sizing up the nature of an error, especially an unfamiliar error, typically requires nimbleness in thinking about numbers, attention to patterns, and flexible thinking about meaning in ways that are distinctive of specialized content knowledge (SCK). In contrast, familiarity with common errors and deciding which of several errors students are most likely to make are examples of knowledge of content and students (KCS). (p. 401)

In their conclusion, the authors hope this theory can fill the gap teachers know is important but that isn't purely about content and isn't purely about teaching. We can hope to better understand how each type of knowledge above impacts student achievement, and optimize our teacher preparation programs to reflect that understanding. Furthermore, that understanding could be used to create new and improved teaching materials and professional development, and to better understand what it takes to be an effective teacher. With this in mind, you can gain some insight into what Ball was thinking when she gave this congressional testimony:

[Video: Deborah Ball's congressional testimony]

References


Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. doi:10.1177/0022487108324554

RYSK: Ball's Unlearning to Teach Mathematics (1988)

This is the 16th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps. I also Storified this article as I read.

Dan Lortie's 1975 book Schoolteacher clarified an idea that teachers already know: how we teach is greatly influenced by the way we've been taught. Lortie called the idea apprenticeship of observation, and it specifically refers to how teachers, having spent 13,000+ hours in classrooms as students, take that experience as a lesson in how to be a teacher. What we often fail to deeply reflect on, however, is that we were only seeing the end product of teaching. We didn't see the lesson planning, go to summer conferences, attend professional development workshops, study the science of learning, or take part in the hundreds of decisions a teacher makes every day. Just observing isn't a proper apprenticeship, even after thousands of hours watching good teachers. I think of it this way: I watch a lot of baseball, and I can tell good baseball from bad. This hardly makes me ready to play, sadly, because I'm not spending hours taking batting practice, participating in fielding drills, studying video, digesting scouting reports, and working out in the offseason. Just as watching a lot of baseball doesn't really prepare me to play baseball, watching a lot of teaching doesn't really prepare someone to teach. Still, all those hours heavily influence our beliefs, both about teaching and about subject matter.

[Photo: Deborah Ball (CC BY-NC-ND, House Committee on Education and the Workforce Democrats)]
In 1988, the year she earned her Ph.D. at Michigan State, Deborah Ball was spending a lot of time thinking about math teachers' apprenticeship of observation. In an article called Unlearning to Teach Mathematics, she describes a project involving teaching permutations to her class of introductory preservice elementary teachers. The goal was not simply to teach her students about permutations, but also to learn more about their beliefs about the nature of mathematics and to develop strategies that might challenge those beliefs and break the cycle of simply teaching how you were taught.

By selecting permutations as the topic, Ball hoped to expose these introductory teachers to a topic they'd never studied formally. By carefully observing how her students constructed their knowledge, Ball would be able to see how their prior understandings about mathematics influenced their learning. The unit lasted two weeks. In the first phase of the unit, Ball tried to engage the students in the sheer size and scope of permutations, such as by thinking about how the 25 students could be seated in \(25!\), or roughly \(1.55 \times 10^{25}\), different seating arrangements. Working back to the simplest cases, with 2, 3, and 4 students, students could think and talk about the patterns that emerge and understand how quickly the number of permutations grows. For homework, Ball asked students to address two goals: increase their understanding of permutations, but also think about the role homework plays in their learning, including how they approach and feel about it and why. In the second phase of the unit, Ball had her students observe her teaching young children about permutations, paying attention to the teacher-student interactions, the selection of tasks, and what the child appeared to be thinking. In the last phase of the unit, the students became teachers and tried helping someone else explore the concept of permutations. After discussing this experience, students wrote a paper reflecting on the entire unit.
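
To see why the count explodes, it helps to write out the factorial pattern behind the seating arrangements (the arithmetic below is mine, not taken from Ball's article):

\[
2! = 2, \qquad 3! = 3 \times 2 = 6, \qquad 4! = 4 \times 3 \times 2 = 24, \qquad \ldots, \qquad 25! = 15{,}511{,}210{,}043{,}330{,}985{,}984{,}000{,}000
\]

Each new student multiplies the number of arrangements by the new class size, which is how four students' 24 orderings balloon into a 26-digit number by the time the class reaches 25.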

From other research, Ball knew that teacher educators often assumed their students had mastery of content knowledge. Even more so, future elementary math teachers themselves assumed they had mastery over the mathematical content they'd be expected to teach. She knew, however, that there was something extra a teacher needed to teach that content. Citing Shulman's pedagogical content knowledge, along with numerous others, Ball describes some ways we can think about what that special content knowledge for teaching is, but admits that her permutations project was too narrow to explore how teachers construct and organize that knowledge. The project would, however, give insight into her students' ideas about mathematics, and the assumptions they make about what it means to know mathematics. For example, a student named Cindy wrote:

I have always been a good math student so not understanding this concept was very frustrating to me. One thing I realized was that in high school we never learned the theories behind our arithmetic. We just used the formulas and carried out the problem solving. For instance, the way I learned permutations was just to use the factorial of the number and carry out the multiplication ... We never had to learn the concepts, we just did the problems with a formula. If you are only multiplying to get the answer every time, permutations could appear to be very easy. If you ask yourself why do we multiply and really try to understand the concept, then it may be very confusing as it was to me. (p. 44)

Comments like this revealed that many of Ball's students relied on a procedural view of mathematics, one where the question "Why?" had been rarely asked. Ball also noticed a theme in her students' reflections about knowing math "for yourself" versus for teaching. Alison wrote:

I was trying to teach my mother permutations. But it turned out to be a disaster. I understood permutations enough for myself, but when it came time to teach it, I realized that I didn't understand it as well as I thought I did. Mom asked me questions I couldn't answer. Like the question about there being four times and four positions and why it wouldn't be 4 x 4 = 16. She threw me with that one and I think we lost it for good there.

From observing a young student learn about permutations in phase two, Ball noticed that some of her students started to challenge the assumptions they had made about themselves as learners. Both from her experience and from the literature, Ball knew that elementary preservice teachers are often the most apprehensive about teaching mathematics. In some cases, these students choose to teach elementary school in the hopes of avoiding any mathematical content they might find difficult. Changing these feelings about mathematics and about themselves is a difficult task for the teacher educator, but Ball did see progress. Christy, for example, said, "Most of all, I realized that I do have the ability to learn mathematics when it is taught in a thoughtful way" (p. 45). Unfortunately, not all shared this experience, as Mandy said she "did not enjoy the permutations activities because I was transported in time back to junior high school, where I remember mathematics as confusing and aggravating. Then as now, the explanations seemed to fly by me in a whirl of disassociated numbers and words" (p. 45).

In her conclusion, Ball says activities like the permutations project can be used by teacher educators to expose students' "knowledge, beliefs, and attitudes" (p. 46) about math and teaching math. By understanding the ideas prospective teachers bring with them, teacher educators can better develop preparation programs that address those beliefs in ways that strengthen the positive ones while changing some negative ones. Also, by including these kinds of activities with introductory preservice teachers, this can raise their expectations for what they will encounter later in methods classes. Summarizing, Ball concludes:

How can teacher educators productively challenge, change, and extend what teacher education students bring? Knowing more about what teachers bring and what they learn from different components of and approaches to professional preparation is one more critical piece to the puzzle of improving the impact of mathematics teacher education on what goes on in elementary mathematics classrooms. (p. 46)

References


Ball, D. L. (1988). Unlearning to teach mathematics. For the Learning of Mathematics, 8(1), 40–48. Retrieved from http://www.jstor.org/stable/40248141

RYSK: Shulman's Those Who Understand: Knowledge Growth in Teaching (1986)

This is the 15th in a series describing "Research You Should Know" (RYSK) and part of my OpenComps. I also Storified this article as I read.

[Photo: Lee Shulman (CC BY-NC, Penn State)]
George Bernard Shaw once said, "He who can, does. He who cannot, teaches." For that, you could say that Lee Shulman takes offense. Shulman, a long-time faculty member at Michigan State (1963-1982) and then Stanford, explained his position and a new way of thinking about teacher knowledge in his AERA Presidential Address and the accompanying paper, Those Who Understand: Knowledge Growth in Teaching. Shulman is now an emeritus professor but stays active traveling, speaking, and occasionally blogging.

Wondering why the public often has a low opinion of teachers' knowledge and skill, Shulman first looks at the history of teacher examinations. In the latter half of the 1800s, examinations for people wishing to teach were almost entirely content-based. In 1875, for example, the California State Board gave prospective elementary teachers a day-long, 1000-point exam that covered everything from mental arithmetic to geography to vocal music. Its section on the theory and practice of teaching, however, was only worth 50 of the 1000 points and included questions like, "How do you interest lazy and careless pupils?" (p. 5)

By the 1980s, when Shulman wrote this article, teacher examinations painted almost the opposite picture. Instead of focusing on content, they focused on topics such as lesson planning, cultural awareness, and other aspects of teacher behavior. While the topics usually had roots in research, they clearly did not represent the wide spectrum of skills and knowledge a teacher would need to be a successful teacher. More specifically, by the 1980s our teacher examinations seemed to care as little about content as the examinations a century prior seemed to care about pedagogy.

Looking back even further in history, Shulman recognized that we haven't always made this distinction between content and teaching knowledge. The origins of the names of our highest degrees, "master" and "doctor," both essentially mean "teacher" and reflect the belief that the highest form of knowing is teaching, an idea going back to at least Aristotle:

We regard master-craftsmen as superior not merely because they have a grasp of theory and know the reasons for acting as they do. Broadly speaking, what distinguishes the man who knows from the ignorant man is an ability to teach, and this is why we hold that art and not experience has the character of genuine knowledge (episteme) -- namely, that artists can teach and others (i.e., those who have not acquired an art by study but have merely picked up some skill empirically) cannot. (Wheelwright, 1951, as cited in Shulman, 1986, p. 7)

Shulman saw a blind spot in this dichotomy between content and teaching knowledge. What he saw was a special kind of knowledge that allows teachers to teach effectively. After studying secondary teachers across subject areas, Shulman and his fellow researchers looked to better understand the source of teachers' comprehension of their subject areas, how that knowledge grows, and how teachers understand and react to curriculum, reshaping it into something their students will understand.

Pedagogical Content Knowledge

To better understand this special knowledge of teaching, Shulman suggested we distinguish three different kinds of content knowledge: (a) subject matter knowledge, (b) pedagogical content knowledge, and (c) curricular knowledge. It was the second of these, pedagogical content knowledge (PCK), that Shulman is best remembered for. Shulman describes the essence of PCK:

Within the category of pedagogical content knowledge I include, for the most regularly taught topics in one's subject area, the most useful forms of representation of those ideas, the most powerful analogies, illustrations, examples, explanations, and demonstrations -- in a word, the ways of representing and formulating the subject that make it comprehensible to others. Since there are no single most powerful forms of representation, the teacher must have at hand a veritable armamentarium of alternative forms of representation, some of which derive from research whereas others originate in the wisdom of practice. (p. 9)

In addition to these three kinds of teacher knowledge, Shulman also proposed we consider three forms of teacher knowledge: (a) propositional knowledge, (b) case knowledge, and (c) strategic knowledge. These are not separate from the three kinds of knowledge named above, but rather describe different forms that each kind of teacher knowledge can take. Propositional knowledge consists of those things we propose teachers do, from "planning five-step lesson plans, never smiling until Christmas, and organizing three reading groups" (p. 10). Shulman organized propositional knowledge into principles, maxims, and norms, with the first usually emerging from research, the second coming from practical experience (and generally untestable, like the suggestion to not smile before Christmas), and the third concerning things like equity and fairness. Propositions can be helpful but difficult to remember to implement as research intended.

Learning propositions out of context is difficult, so Shulman proposed case knowledge as the second form of teacher knowledge. By case, he means learning about teaching in a similar way a lawyer learns about the law: by studying prior legal cases. In order to truly understand a case, a learner starts with the factual information and works towards the theoretical aspects that explain why things happened. By studying well-documented cases of teaching and learning, teachers consider prototype cases (that exemplify the theoretical), precedents (that communicate maxims), and parables (that communicate norms and values). (If you're scoring at home, Shulman has now said there are three types of cases, cases being one of three forms of knowledge, each of which is capable of describing three different kinds of content knowledge.)

The last form of knowledge, strategic knowledge, describes how a teacher reacts when faced with contradictions of other knowledge or wisdom. Knowing when to bend the rules or go against conventional wisdom takes more than luck -- it requires a teacher to be "not only a master of procedure but also of content and rationale, and capable of explaining why something is done" (p. 13).

The value of this article by Shulman goes beyond the theoretical description of pedagogical content knowledge. Additionally, this article serves as a strong reminder that when we judge a teacher, we must consider a broad spectrum of skills and abilities, and not limit ourselves to only those things we think can be easily measured. As Shulman explains:

Reinforcement and conditioning guarantee behavior, and training produces predictable outcomes; knowledge guarantees only freedom, only the flexibility to judge, to weigh alternatives, to reason about both ends and means, and then to act while reflecting upon one's actions. Knowledge guarantees only grounded unpredictability, the exercise of reasoned judgment rather than the display of correct behavior. If this vision constitutes a serious challenge to those who would evaluate teaching using fixed behavioral criteria (e.g., the five-step lesson plan), so much the worse for those evaluators. The vision I hold of teaching and teacher education is a vision of professionals who are capable not only of acting, but of enacting -- of acting in a manner that is self-conscious with respect to what their act is a case of, or to what their act entails. (p. 13)

In our current era of teacher evaluation and accountability, with all its observational protocols and test-score-driven value-added models, this larger view of teaching presented to us by Shulman is a gift. His recommendation that teacher evaluation and examination "be defined and controlled by members of the profession, not by legislators or laypersons" (p. 13) is a wise one, no matter how politically difficult. Shulman hoped for tests of pedagogical content knowledge that truly measured those special skills that teachers have, skills that non-teaching content experts would not pass. I don't think those measurement challenges have been overcome, but continuing towards that goal should strengthen teacher education programs while also improving the perception of teaching as a profession. As Shulman concludes (p. 14):

We reject Mr. Shaw and his calumny. With Aristotle we declare that the ultimate test of understanding rests on the ability to transform one's knowledge into teaching.

Those who can, do. Those who understand, teach.

References

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. Retrieved from http://www.jstor.org/stable/3202180

OpenComps Update

With five weeks to go before beginning the written portion of my comprehensive exam, I recently met with my advisor to discuss gaps in my reading list. I think everybody has holes somewhere in their knowledge, but given my interests in research and practice we came up with additional readings focused on three areas: teacher learning, teacher beliefs, and cognitively guided instruction (CGI). I'm starting with teacher learning, which includes the following four articles:

Ball, D. L. (1988). Unlearning to teach mathematics. For the Learning of Mathematics, 8(1), 40–48. Retrieved from http://www.jstor.org/stable/40248141

Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. doi:10.1177/0022487108324554

Lampert, M. (2009). Learning teaching in, from, and for practice: What do we mean? Journal of Teacher Education, 61(1-2), 21–34. doi:10.1177/0022487109347321

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. Retrieved from http://www.jstor.org/stable/3202180

Although I have a vague understanding of pedagogical content knowledge (PCK) and mathematical knowledge for teaching (MKT), I knew I needed to dig into Shulman's and Ball's thoughts to better understand their origins. In a way, it's a pretty good sign when the gaps you perceive yourself as having are more or less the ones your advisor sees, too. Life has room for wonderful surprises, but I don't think this needs to be one of them. Now, on to the reading!

RYSK: Greeno, Pearson, & Schoenfeld's Implications for NAEP of Research on Learning and Cognition (1996)

This is the 14th in a series describing "Research You Should Know" (RYSK).

You might have read my recent post about Lorrie Shepard's 2000 article The Role of Assessment in a Learning Culture and assumed she focused on classroom assessment because changing large-scale, standardized assessments was a lost cause. Think again. By that time, an effort to integrate new theories of learning and cognition into the NAEP was already underway, traceable back to a 1996 report titled Implications for NAEP of Research on Learning and Cognition written by James G. Greeno, P. David Pearson, and Alan H. Schoenfeld. For years Greeno has been recognized as one of education's foremost learning theorists, while Pearson and Schoenfeld are highly-regarded experts in language arts and mathematics education, respectively.

The National Assessment of Educational Progress, sometimes called "The Nation's Report Card," has been given to students in various forms since 1969. Unlike the high-stakes assessments given by states to all students, the NAEP is given to samples of 4th, 8th, and 12th grade students from around the country, and the use of matrix sampling means no student ever takes the entire test. The goal of the NAEP is to inform educators and policymakers about performance and trends, and details about how different NAEP exams try to achieve this are described in depth at the NAEP website.
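
If matrix sampling is unfamiliar, here's a minimal sketch in Python of the underlying idea. The numbers are invented for illustration and don't reflect NAEP's actual block design: an item pool is split into blocks, each sampled student gets a booklet of just a couple of blocks, and yet the whole pool is covered across the sample.

```python
import random

# Illustrative only -- invented block sizes, not NAEP's actual design.
# Split a pool of 60 items into 6 blocks of 10.
ITEM_POOL = [f"item_{i:02d}" for i in range(60)]
BLOCKS = [ITEM_POOL[i:i + 10] for i in range(0, len(ITEM_POOL), 10)]

def assign_booklet(rng: random.Random) -> list[str]:
    """Give one sampled student a booklet of two randomly chosen blocks."""
    first, second = rng.sample(range(len(BLOCKS)), 2)
    return BLOCKS[first] + BLOCKS[second]

rng = random.Random(1969)  # fixed seed for a reproducible demo
booklets = [assign_booklet(rng) for _ in range(500)]

# No student sees more than 20 of the 60 items, but across the sample
# every item gets administered -- which is what lets an assessment like
# NAEP report on a whole domain without an impossibly long test.
covered = {item for booklet in booklets for item in booklet}
print(len(booklets[0]), len(covered))  # 20 60
```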

Greeno et al. tried to answer two main questions in their report: (a) Does the NAEP inform the nation "about significant aspects of the knowing and learning" (p. 2) in math and reading, and (b) What changes in NAEP would make it a better tool for informing the nation about the performance and progress of our educational system? The authors acknowledge the traditions of what they call the differential and behaviorist perspectives on learning, and focus more of their attention on assessing learning from the cognitive and situative perspectives, which have strong theoretical foundations but hadn't been reflected in most large-scale assessments.

Concisely, the report says the "key features of learning in the cognitive perspective are meaningful, conceptual understanding and strategic thinking" and that the "key feature of learning in the situative perspective is engaged participation with agency" (p. 3, emphasis in original). Greeno et al. say that if students are engaged in learning activities that reflect these perspectives, then the NAEP should try to capture the effects of those experiences.

One of the main reasons I'm writing about this report is that it gives me another chance to describe current learning perspectives that go beyond the simpler "behaviorism vs. constructivism" argument I knew as a teacher and heard from others. The report does this well without burdening the reader with all the gory details that learning theorists grapple with as they try to push these theories even further. So here's my summary of their summaries of each perspective:

Differential

This perspective accepts the assumption that "Whatever exists, exists in some amount and can be measured" (p. 10). For knowledge, that "whatever" is referred to as a trait, and different people have traits in different amounts. Evidence of traits can be detected by tests, and the correlation between different tests supposedly measuring the same trait is an indication of our confidence in our ability to measure it. Because the amount of a trait is assumed to be relative from person to person, it's statistically important to design tests on which few people answer all items correctly or incorrectly; otherwise the test can't spread people apart.
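
As a tiny worked example of that last point about correlation (the scores here are entirely invented, just to show the logic): if two tests built to measure the same trait rank students similarly, the high correlation between them is read as evidence that a stable trait is being measured.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical scores for eight students on two tests that are
# supposed to measure the same trait.
test_a = [12, 15, 9, 20, 17, 11, 14, 18]
test_b = [11, 16, 10, 19, 18, 10, 13, 17]

# Pearson correlation; in the differential perspective, a value near 1
# is taken as confidence that both tests tap the same underlying trait.
r = statistics.correlation(test_a, test_b)
print(round(r, 2))  # 0.96
```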

Behaviorist

Behaviorism assumes that "knowing is an organized collection of stimulus-response associations" (p. 11). To learn is to acquire skills (usually, and best, in small pieces), and to measure learning is to analyze behaviors, which can be decomposed into responses to stimuli. Behaviorism's influence on curriculum is seen when behavioral objectives are organized as a sequence that builds bigger ideas out of smaller, prerequisite objectives.

Cognitive

The cognitive perspective primarily focuses "on structures of knowledge, including principles and concepts of subject-matter domains, information organized by schemata, and procedures and strategies for problem solving and reasoning" (p. 12). Learners actively construct their knowledge rather than accept it passively, and conceptual understanding is not just the sum total of facts. The early part of the cognitive revolution was reflected in the math and science reforms of the 1950s and 1960s, while Piagetian ideas and research on student understanding have pushed the perspective further. Assessments need to determine more than right and wrong answers, and research involving think-aloud protocols, student interviews, eye-tracking studies, and patterns of responses has yielded better theories about how to assess for student understanding.

Situative

The situative perspective is a social view of learning focused on "interactive processes in which people participate in practices that are organized by the societies and communities they belong to, using the technologies and natural resources in their environments" (p. 14). Knowing is no longer in the head -- instead it is seen as participation in a community, and learning is represented by increased and more effective participation. John Dewey held parts of this perspective in the early 20th century, but we owe much of the theory to Lev Vygotsky, whose work in the 1920s and 1930s in the Soviet Union eventually emerged and has heavily influenced the learning sciences since the late 1970s. The situative perspective is more readily applied to interactions between people or between people and technology (which is seen as a cultural artifact with social roots), but even solitary learners can be assessed from the situative perspective if we focus on "the individual's participation in communities with practices, goals, and standards that make the individual's activity meaningful, either by the individual's adoption of or opposition to the community's perspective" (p. 14). The influence of the situative perspective on curriculum and classrooms is most easily seen in the focus on student participation, project work, small-group discussions, and authentic work in subject-area disciplines.

In summary, achievement in each perspective can be described as:
Differential/Behaviorist
- "progress a student has made in the accumulation of skills and knowledge" (p. 16)
Cognitive
- a combination of five aspects (pp. 16-18):
  1. Elementary skills, facts, and concepts
  2. Strategies and schemata
  3. Aspects of metacognition
  4. Beliefs
  5. Contextual factors
Situative
- a combination of five aspects (pp. 19-21):
  1. Basic aspects of participation
  2. Identity and membership in communities
  3. Formulating problems and goals and applying standards
  4. Constructing meaning
  5. Fluency with technical methods and representations

What Does This Mean for the NAEP?

Greeno et al. declared that the NAEP was "poorly aligned" (p. 23) with the cognitive perspective. It hadn't captured the complexity of student knowledge, so they recommended a greater focus on problems set in meaningful contexts and on tasks that reflect the kinds of knowledge models and structures theorized in the research. As for the situative perspective, Greeno et al. went so far as to say that what the NAEP had been measuring was "of relatively minor importance in almost all activities that are significant for students to learn" (p. 27). Whereas the situative perspective focuses on participation in a particular community or knowledge domain, it's impossible to escape the reality that on the NAEP, the domain is test-taking itself, a "special kind of situation that is abstracted from the variety of situations in which students need to know how to participate" (pp. 28-29). Measuring learning from the situative perspective would require a complicated set of inferences about a student's actual participation practices in an authentic domain, and the technical limitations of the NAEP limit our ability to make those inferences.

The report continues with specific details about how we might measure learning in language arts and mathematics with the NAEP from both cognitive and situative perspectives. In the conclusion, the authors recommended two systemic changes. First, NAEP needed more capacity for attending to the long-term continuity of the test and its design; given how important NAEP is for measuring longitudinal trends, we can't change it without carefully studying how to compare new results to old. Second, the authors wanted a national system for evaluating changes in the educational system, because the NAEP alone can't tell us everything we need to know about the effectiveness of educational reforms.

As for recommendations for the test itself, Greeno et al. emphasized the need to align the assessment with ongoing research, especially in the cognitive perspective. Instead of planning NAEP tests one at a time and contracting out the work piecemeal, the development process needed to become more continuous, with sustained attention given to progress on the cognitive and situative dimensions. More ambitiously, the authors recommended a parallel line of test development to begin establishing new forms of assessment that might capture learning in these newer perspectives. This is a critical challenge because while we know the least about assessing from the situative perspective, the situative is often the perspective that frames our national educational goals. The NAEP can't measure progress toward situative-sounding goals without better measurement of learning from a situative perspective.

It has now been 16 years since the release of this report. I don't know how Greeno et al.'s recommendations have specifically been followed, but there is good news. If you read most any of the current NAEP assessment frameworks, you can find evidence of progress. The frameworks have changed to better measure student learning, particularly from the cognitive perspective. Some frameworks honestly address the difficulty of measuring the situative perspective using an on-demand, individualized, pencil-and-paper (but increasingly computer-based) test. (See Chapter One of the science framework, for example.) Will we see any radical changes any time soon? I doubt it. The information we get about long-term trends from the NAEP requires a certain amount of stability. Given the onset of new national consortia tests based on the Common Core State Standards, I think the educational system will get its fill of radical change in the next 3-5 years. With that as the comparison, we all might contentedly appreciate the stability and attention to careful progress reflected in the NAEP.

References

Greeno, J. G., Pearson, P. D., & Schoenfeld, A. H. (1996). Implications for NAEP of research on learning and cognition. Menlo Park, CA: Institute for Research on Learning.

How Can Texas Instruments Adapt to Post-Tech-Monopoly Classrooms?

Bill Cosby has been right about a lot of things, but he might not have seen the future when he advertised the Texas Instruments TI-99 computer as "The One":

[video: Texas Instruments TI-99 "The One" commercial featuring Bill Cosby]

I think I'm glad the TI-99 computer didn't become "The One," because when the TI-83 graphing calculator became "The One" for students, Texas Instruments showed they were all too happy to keep pushing the same basic technology at about the same price for more than a decade. Only when you have a tech monopoly can you resist that much change for so long.

Now I finally feel like TI is facing some real competition in the classroom. If I were them, I'd be developing and marketing smartphone apps that replicate the functionality of their calculators with one key feature: the ability for the user to put the app in "lock mode," which makes the device a dedicated calculator for a predetermined amount of time. I wouldn't worry about students cheating with their phones if I could see them trigger the lock mode at the beginning of a test and then prove to me at the end that the calculator app was the only app running the entire time. If TI could get that approved by the ACT and SAT, I think it's an app students would gladly pay for.
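
To be clear, no such app exists as far as I know -- this is just a back-of-the-napkin sketch of what I imagine the verification logic might look like, with every name invented:

```python
import time

class LockModeSession:
    """Hypothetical 'lock mode' for a calculator app: the student starts
    the session before the test, the app records any interruption (e.g.,
    being sent to the background), and at the end it can attest whether
    it ran as a dedicated calculator the whole time."""

    def __init__(self, required_minutes: int):
        self.required_seconds = required_minutes * 60
        self.started_at: float | None = None
        self.interrupted = False

    def start(self) -> None:
        self.started_at = time.monotonic()

    def record_interruption(self) -> None:
        # Called if the OS reports the app lost focus or was backgrounded.
        self.interrupted = True

    def attestation(self) -> str:
        if self.started_at is None:
            return "NOT VERIFIED"
        elapsed = time.monotonic() - self.started_at
        ok = not self.interrupted and elapsed >= self.required_seconds
        return "VERIFIED" if ok else "NOT VERIFIED"

# A proctor would see something like:
#   session = LockModeSession(required_minutes=60)
#   session.start()
#   ... student takes the test ...
#   print(session.attestation())
```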

RYSK: Shepard's The Role of Assessment in a Learning Culture (2000)

This is the 13th in a series describing "Research You Should Know" (RYSK).

In her presidential address at the 2000 AERA conference, Lorrie Shepard revealed a vision for the future of educational assessment. That address became an article titled The Role of Assessment in a Learning Culture, and its message is still very much worth hearing today. Lorrie Shepard remains a globally-respected expert in assessment, psychometrics, and their misuses, and I'd think she was totally awesome even if she wasn't my boss.

Shepard is often present for debates about large-scale testing, but this paper focuses on classroom assessment -- the kind, says Shepard, "that can be used as a part of instruction to support and enhance learning" (p. 4). She builds her case by first explaining the historical perspective, then describing a modern view of learning theories, and finally envisioning how new assessment practices could support those theories. Impressively, she does this all in just 11 well-written pages. (In fact, given that the paper is available on the web, I wouldn't blame you at all for skipping this summary and just reading the article for yourself.)

History

Shepard highlights several major themes from history that have continued to drive our assessment practices. One is the social efficiency movement, which "grew out of the belief that science could be used to solve the problems of industrialization and urbanization" (p. 4). While this movement might have helped our economic and educational systems scale rapidly (think of Ford and the assembly line), social efficiency carries with it a belief that people have a certain innate (and largely fixed) set of capabilities, and that our society operates most efficiently when we measure people and match their capabilities to appropriate education and employment. For example, students were often given IQ tests to determine whether their future path should lie on a particular academic or vocational track.

The dominant learning theories of the early and mid-1900s were associationism and behaviorism, both of which promoted the idea that learning was an accumulation of knowledge that could be broken into very small pieces. Behaviorism was also tied closely to theories of motivation, as it was believed learning was promoted when knowledge was made smaller and opportunities for positive reinforcement were made greater. Much of the assessment work related to these beliefs can be traced back to Edward Thorndike, considered the father of scientific measurement and an early promoter of "objective" testing. It's been 100 years since Thorndike was elected president of the American Psychological Association, and decades since his ideas seriously influenced the leading edges of learning theory. Still, as most anyone who works in schools or experienced a traditional education can attest, ideas of social efficiency and behaviorism are still evident in schools -- especially in our assessment practices.

Together, the theories of social efficiency, scientific measurement, and beliefs about intelligence and learning form what Shepard sees as the dominant 20th-century paradigm. (See page 6 of the paper for a diagram.) It's important to begin our discussion here, says Shepard, because "any attempt to change the form and purpose of classroom assessment to make it more fundamentally a part of the learning process must acknowledge the power of these enduring and hidden beliefs" (p. 6).

Modern Theories

In the next section, Shepard describes a "social-constructivist" framework that guides modern thought on learning:

The cognitive revolution reintroduced the concept of mind. In contrast to past, mechanistic theories of knowledge acquisition, we now understand that learning is an active process of mental construction and sense making. From cognitive theory we have also learned that existing knowledge structures and beliefs work to enable or impede new learning, that intelligent thought involves self-monitoring and awareness about when and how to use skills, and that "expertise" develops in a field of study as a principled and coherent way of thinking and representing problems, not just as an accumulation of information. (pp. 6-7)

These ideas about cognition are complemented by Vygotskian realizations that the knowledge we construct "is socially and culturally determined" (p. 7). Unlike Piaget's view that development precedes learning, this modern view sees development and learning as interacting social processes. While academic debates remain about the details (cognitive vs. social vs. situative vs. sociocultural vs. social constructivist vs. ...), for practical purposes these theories can coexist and are already helping teachers view student learning in ways that improve upon behaviorism. However, Shepard says, since about the 1980s this has left us in an awkward state of using new theories to inform classroom instruction while still depending on old theories to guide our assessments.

Improving Assessment

If we wish to make our theories of assessment compatible with our theories of learning, Shepard says we need to (a) change the form and content of assessments and (b) change the way we use and regard assessment in classrooms. Some of the potential changes in form are already familiar to most teachers, such as a greater use of open-ended performance tasks and setting assessment tasks in real-world contexts. Furthermore, Shepard suggests that classroom routines and related assessments should reflect the need to socialize students "into the discourse and practices of academic disciplines" (p. 8) as well as foster metacognition and important dispositions. Shepard does not go into much more detail here because others have already given attention to these ideas, but gives us this simple yet powerful idea (p. 8):

"Good assessment tasks are interchangeable
with good instructional tasks."

Next Shepard pays special attention to the negative effects of high-stakes testing. Shepard could be called a believer in standards-based education, but she recognizes how "the standards movement has been corrupted, in many instances, into a heavy-handed system of rewards and punishments without the capacity building and professional development originally proposed as part of the vision (McLaughlin & Shepard, 1995)" (p. 9). Unfortunately, Shepard's predictions have held true over the past 12 years: we've seen test scores distorted under political pressure, a corruption of "teaching to the test," and a trend towards the "de-skilling and de-professionalization of teachers" (p. 9). What's worse might be a decade of new teachers who've learned to "hate standardized testing and at the same time reproduce it faithfully in their own pre-post testing routines" (p. 10) because they've had so little exposure to better forms of assessment.

For the rest of the article, Shepard focuses on how assessment can and should be used to support student learning. First, classrooms need to support a learning culture where "students and teachers would have a shared expectation that finding out what makes sense and what doesn't is a joint and worthwhile project" (p. 10). This means assessment that is more informative and reflective of student learning, one where "students and teachers look to assessment as a source of insight and help instead of an occasion for meting out rewards and punishments" (p. 10). To do this, Shepard describes a set of specific strategies teachers should use in combination in their classrooms.

Dynamic Assessment

When Shepard wrote this article, formal ideas and theories about formative assessment were still emerging and the field had yet to settle on some of the language we now use. But if you're at all familiar with formative assessment, Shepard's description of "dynamic" assessment will sound familiar: teacher-student interactions continuing through the learning process rather than delayed until the end, with the goal of gaining insight about what students understand and can do both on their own and with assistance from classmates or the teacher.

Prior Knowledge

The idea of a pre-test to see what students know before instruction begins is not new, but Shepard says we should recognize that traditional pretests don't usually take account of social and cultural contexts. Because students are unfamiliar with a teacher's conceptualization of the content prior to instruction (and vice versa), scores might not reflect students' knowledge as well as, say, a conversation or activity designed to elicit the understandings students bring to the classroom. Also, as Shepard has frequently observed, traditional pre-testing often doesn't significantly affect teachers' instruction. So why do it? Instead, why not focus on building a learning culture of assessment: "What safer time to admit what you don't know than at the start of an instructional activity?" (p. 11)

Feedback

The contrast in feedback under old, behaviorist theories and newer, social-constructivist theories is clear. Feedback under old theories generally consisted of labeling answers right or wrong. Feedback under new theories takes greater skill: teachers need to know how to ignore student errors that aren't immediately relevant to the learning at hand, while crafting questions and comments that force the student to question themselves and any false knowledge they might be constructing. (See Lepper, Drake, and O'Donnell-Johnson, 1997, for more on this.)

Transfer

While it is our hope that our students will be able to generalize the specific knowledge they have learned and apply it to other situations, our ability to accurately research and make claims about knowledge transfer turns out to be a pretty tricky business. Under a strict behaviorist perspective, it was appropriate to believe that each application of knowledge should be taught separately. Many of our current theories support an idea of transfer, and evidence shows that we can help students by giving them opportunities to see how their knowledge reliably works in multiple applications and contexts. So while some students might not agree, Shepard says teachers should not "agree to a contract with our students which says that the only fair test is one with familiar and well-rehearsed problems" (p. 11).

Explicit Criteria

If students are to perform well, they need to have clear guidance about what good performances look like. "In fact, the features of excellent performance should be so transparent that students can learn to evaluate their own work in the same way their teachers would" (p. 11). This reinforces ideas of metacognition and, perhaps more importantly, fairness.

Self-Assessment

There are cognitive reasons to have students self-assess, but other goals are to increase student self-responsibility and make teacher-student relationships more collaborative. Students who self-evaluate become more interested in feedback from others, are more aware of standards of excellence, and take more ownership over the learning process.

Evaluation of Teaching

This is another idea now heavily intertwined with formative assessment, but Shepard takes it one step further than I normally see it. Instead of just using assessment to improve one's teaching, Shepard recommends that teachers be transparent about this process and "make their investigations of teaching visible to students, for example, by discussing with them decisions to redirect instruction, stop for a mini-lesson, and so forth" (p. 12). This, Shepard says, is critical to cultural change in the classroom:

If we want to develop a community of learners -- where students naturally seek feedback and critique of their own work -- then it is reasonable that teachers would model this same commitment to using data systematically as it applies to their own role in the teaching and learning process. (p. 12)

Conclusion

Shepard admits that this new assessment paradigm is far easier to describe than to implement in practice. It relies on a great deal of teacher ability and requires confronting some long-held beliefs. Shepard recommended a program of research accompanied by a public education campaign to help citizens and policymakers understand the different goals of large-scale and classroom assessments. Neither the research nor the public education is easy, because both are built upon a history of theories and practices that a new paradigm needs to discard. Perhaps we haven't taken on this challenge with the effort and seriousness we've needed, and I worry that now we're more apt to talk about "learning in an assessment culture" rather than the other way around, as Shepard titled this article. I sometimes wonder if she's considered writing a follow-up with that title, or if she's hoping she'll never have to. I guess the next time it comes up I'll have to ask her.

Math note: This is an article about assessment and not specific to mathematics, but I'd be remiss if I didn't share Shepard's inclusion of one of my all-time favorite fraction problems:

[image: the fraction problem (from Thompson, 1995)]

References

Lepper, M. R., Drake, M. F., & O'Donnell-Johnson, T. (1997). Scaffolding techniques of expert human tutors. In K. Hogan & M. Pressley (Eds.), Scaffolding student learning: Instructional approaches & issues. Cambridge, MA: Brookline Books.

McLaughlin, M. W., & Shepard, L.A. (1995). Improving education through standards-based reform: A report of the National Academy of Education panel on standards-based educational reform. Stanford, CA: National Academy of Education.

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. doi:10.2307/1176145

Thompson, P. W. (1995). Notation, convention, and quantity in elementary mathematics. In J. T. Sowder & B. P. Schappelle (Eds.), Providing a foundation for teaching mathematics in the middle grades (pp. 199-221). Albany, NY: State University of New York Press.