How do I learn to teach people to use this stuff?
A Madness to Our Methods
A Few Thoughts from NCTM 2015 (#NCTMBoston)
Coherence
And now I have a lot of thinking to do about the progression and cohesion of our courses, within and between years.
— Anna Blinstein (@Borschtwithanna) April 19, 2015
I have some thinking to do about this, too. Unfortunately, that's about all I think I have. I've looked at the EQuIP rubrics and they're a step in the right direction, but the challenge of coherence needs to be met with a stronger toolset if we want to tackle this at scale. It's probably a good thing that NCTM president-elect Matt Larson is concerned about this, too.
Not talking the same language, but talking different languages similarly?
I spent the first half of the week around researchers and the second half around teachers. Parts of conversations really aren't that different. You're just as likely to hear a researcher say something like, "My research builds off an approach and findings found by Scholar X who published in Journal Y" as you are to hear a teacher-blogger say, "My teaching builds off an approach and findings found by Blogger X who published on Site Y." In this way, the research and teaching communities differ in the literature they draw upon, but are quite similar in their willingness and ability to build on others' work. I see a lot of promise here, and it's making me think that the MathEd.net Wiki needs to open up to put more blogger literature side-by-side with traditional academic literature. I don't see many good reasons why Michael Pershan's approach to giving hints shouldn't be mentioned alongside similar ideas in Stein and Smith's 5 Practices, for example.
Beyond Twitter
Thinking about how we build and bridge communities is important, but I need to balance my critiques and commentary about using Twitter with a broader and more positive message about other available tools. A couple of times during the week I heard someone ask, "Do you use Twitter?" and the response was "No" or even "I refuse." Some of those times I feared the message was, "You either use Twitter, or you don't. There's nothing else." I should work a little harder to push people to ask, "Do you use social media or math resources online?" and to build better knowledge of non-Twitter ways to engage with math ed folks online. (The lack of a network-neutral hashtag is still a nuisance, though.)
Task Analysis and Adaptation
Geoff Krall's adaptation talk was really good, even if the task analysis was limited to "likes" vs. "dislikes." Then again, maybe I'm just jealous because my rejected presentation was about some specific and useful things to look for when analyzing tasks. I'm pretty sure there was only so much Geoff could tackle in an hour.
Tricks Nixing
I knew about the book Nix the Tricks but hadn't had any direct interactions with Tina Cardone. I think she's my new math-teacher-blogger-writer-twitter-er crush, or at least one of a half dozen or so that I met in the past week. Check her out if you haven't already.
Time to board a plane.
Education, Neuroscience, and Tangled Webs We Weave
Even though there is more communication than ever about peer-reviewed brain research, a lot of that communication distorts the science and ends up spreading or creating new neuromyths (Howard-Jones, 2014). What does that distortion look like? I present two examples in which something I saw on social media about the brain linked back to research whose claims looked quite different.
Example One: "Your Brain Grew"
Yesterday +Joshua Fisher pointed out this tweet:
"when you got the problem right, your brain did nothing; when you got the problem wrong your brain GREW!" Thanks @joboaler #T3IC
— Julie Shouse Riggins (@jrigginsEFHS) March 13, 2015
Being sensitive to neuromyths, I admit I poked a little fun at this tweet-length, out-of-context claim. Rightly, +Paul Hartzer called me out and suggested I search for some context, such as this:
http://tvoparents.tvo.org/HH/making-mistakes
I immediately went for the "growing evidence" link, which took me to this:
https://www.psychologytoday.com/blog/the-science-willpower/201112/how-mistakes-can-make-you-smarter
As this was a review of two studies, I dove down to the reference section and tracked down the research. The first, by Moser et al. (2011), had this abstract:
Abstract:
How well people bounce back from mistakes depends on their beliefs about learning and intelligence. For individuals with a growth mind-set, who believe intelligence develops through effort, mistakes are seen as opportunities to learn and improve. For individuals with a fixed mind-set, who believe intelligence is a stable characteristic, mistakes indicate lack of ability. We examined performance-monitoring event-related potentials (ERPs) to probe the neural mechanisms underlying these different reactions to mistakes. Findings revealed that a growth mind-set was associated with enhancement of the error positivity component (Pe), which reflects awareness of and allocation of attention to mistakes. More growth-minded individuals also showed superior accuracy after mistakes compared with individuals endorsing a more fixed mind-set. It is critical to note that Pe amplitude mediated the relationship between mind-set and posterror accuracy. These results suggest that neural mechanisms indexing on-line awareness of and attention to mistakes are intimately involved in growth-minded individuals' ability to rebound from mistakes.
This sounds familiar to those who know things about growth vs. fixed mindsets, and shows that growth mindsets are associated with some brain activity that we don't see with fixed mindsets. So maybe brain "growth" doesn't happen to everyone. The second article, by Downar, Bhatt, and Montague (2011), is even more neuroscience-y:
Abstract:
Accurate associative learning is often hindered by confirmation bias and success-chasing, which together can conspire to produce or solidify false beliefs in the decision-maker. We performed functional magnetic resonance imaging in 35 experienced physicians, while they learned to choose between two treatments in a series of virtual patient encounters. We estimated a learning model for each subject based on their observed behavior and this model divided clearly into high performers and low performers. The high performers showed small, but equal learning rates for both successes (positive outcomes) and failures (no response to the drug). In contrast, low performers showed very large and asymmetric learning rates, learning significantly more from successes than failures; a tendency that led to sub-optimal treatment choices. Consistently with these behavioral findings, high performers showed larger, more sustained BOLD responses to failed vs. successful outcomes in the dorsolateral prefrontal cortex and inferior parietal lobule while low performers displayed the opposite response profile. Furthermore, participants' learning asymmetry correlated with anticipatory activation in the nucleus accumbens at trial onset, well before outcome presentation. Subjects with anticipatory activation in the nucleus accumbens showed more success-chasing during learning. These results suggest that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. The differential brain activations between high and low performers could potentially be developed into biomarkers to identify efficient learners on novel decision tasks, in medical or other contexts.
Now we're talking about some brain activity, but the results aren't so simple. Take-away? A group of doctors who performed well on a task had brains that appeared to respond better to failure, while low-performing doctors didn't. Also, don't overlook the last bit: This study is less about finding better teaching than it is about identifying biomarkers that indicate who might be more easily taught. That's an important difference — teachers don't get to scan kids in fMRI machines and only teach the best of the lot.
Example Two: Common Core is Bad for Your Brain
Last year Lane Walker pointed me to this claim in a post on LinkedIn:
https://www.linkedin.com/groups/Did-anyone-get-any-interesting-4204066.S.5912659047466680321
Curious (and very skeptical), I followed the link to find this:
https://peter5427.wordpress.com/2014/08/28/stanford-study-common-core-is-bad-for-the-brain/
That post was referencing this article on Fox News:
http://www.foxnews.com/health/2014/08/18/kids-brains-reorganize-when-learning-math-skills/
A search for the actual research took me to an article by Qin et al. (2014) with this abstract:
Abstract:
The importance of the hippocampal system for rapid learning and memory is well recognized, but its contributions to a cardinal feature of children's cognitive development—the transition from procedure-based to memory-based problem-solving strategies—are unknown. Here we show that the hippocampal system is pivotal to this strategic transition. Longitudinal functional magnetic resonance imaging (fMRI) in 7–9-year-old children revealed that the transition from use of counting to memory-based retrieval parallels increased hippocampal and decreased prefrontal-parietal engagement during arithmetic problem solving. Longitudinal improvements in retrieval-strategy use were predicted by increased hippocampal-neocortical functional connectivity. Beyond childhood, retrieval-strategy use continued to improve through adolescence into adulthood and was associated with decreased activation but more stable interproblem representations in the hippocampus. Our findings provide insights into the dynamic role of the hippocampus in the maturation of memory-based problem solving and establish a critical link between hippocampal-neocortical reorganization and children's cognitive development.
As I suspected, the neuroscience really had nothing to do with Common Core or how to teach math. It simply identified which parts of the brain become more active as children increase their ability to do things from memory. That should sound exciting if you're a neuroscientist, but pretty useless if you're a teacher.
Why We Have Theories of Learning
My hope for teachers is this: When you hear claims about the brain and what they mean for your teaching, be skeptical. Don't let yourself be fooled by the next big neuromyth. Realize that a lot of neuroscience relies on placing individuals in an fMRI machine and observing their brain activity while they perform a task. Is that cool science? You bet it is. Does this kind of research capture the context and complexity of your classroom? It does not.
Instead, understand and appreciate why education and related fields have theories of learning that don't rely on knowing what the brain does. In general, theories of constructivism don't go into detail about what's happening at the synapse level, nor do they need to. Cognitive theories use schemas to theorize what's going on in the head, but no fMRI machines are necessary. Situated and sociocultural theories of learning gain their usefulness not by trying to look inside the learner's head, but rather by looking outward to that learner's environment, the tools they use, the communities they participate in, and how culture and history shape their activity. So teachers, focus on that — focus on the culture of your classroom, how your students participate, and the learning community you support. Focus on how a carefully constructed curriculum, well-enacted, supports a trajectory of student learning. It will get you much further than neuromyths.
References
Downar, J., Bhatt, M., & Montague, P. R. (2011). Neural correlates of effective learning in experienced medical decision-makers. PLoS ONE, 6(11), e27768.
Howard-Jones, P. A. (2014). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience, 15(12), 817–824.
Moser, J. S., Schroder, H. S., Heeter, C., Moran, T. P., & Lee, Y.-H. (2011). Mind your errors: Evidence for a neural mechanism linking growth mind-set to adaptive posterror adjustments. Psychological Science, 22(12), 1484–1489.
Qin, S., Cho, S., Chen, T., Rosenberg-Lee, M., Geary, D. C., & Menon, V. (2014). Hippocampal-neocortical functional reorganization underlies children's cognitive development. Nature Neuroscience, 17(9), 1263–1269.
NCTM's Grand Challenges and Opportunities in Mathematics Education Research
Last summer, the NCTM Research Committee asked members to identify grand challenges in mathematics education (written about here and here), and today they've published their findings in the Journal for Research in Mathematics Education. First things first: If you're not a JRME subscriber, your access to the article is blocked by a paywall. Sadly, this feels like another case of NCTM's reluctance to move past old models of publishing and communication, leaving teachers interested in the grand challenges to feel like second-class NCTM members, begging for a handout from the privileged NCTM research community. I've written about my concerns and suggestions for NCTM's relationship with its members, so here I'll just focus on the key points found in today's report. Ready to be inspired? Slow your roll, turbo. You might want to prepare yourself to be a bit puzzled, if not disappointed.
The report begins by placing the concept of a "grand challenge" in the hands of researchers:
Mathematics education researchers seek answers to important questions that will ultimately result in the enhancement of mathematics teaching, learning, curriculum, and assessment, working toward “ensuring that all students attain mathematics proficiency and increasing the numbers of students from all racial, ethnic, gender, and socioeconomic groups who attain the highest levels of mathematics achievement” (National Council of Teachers of Mathematics [NCTM], 2014, p. 61). Although mathematics education is a relatively young field, researchers have made significant progress in advancing the discipline. As Ellerton (2014) explained in her JRME editorial, our field is like a growing tree, stable and strong in its roots yet becoming more vast and diverse because of a number of factors.
Next the report talks about the purpose of grand challenges and their development and use in other fields. In some ways, it reminded me of the spread of the standards movement: "Math has standards, we should too!", except now it's "The National Academy of Engineering has grand challenges, math ed should too!" Then the report spends four paragraphs talking about Hilbert's problems and how they influenced the last 100-plus years of research in mathematics. The report shifts back to the present, summarizing grand challenges in other disciplines. Readers at this point are likely getting anxious, sensing that their grand challenge lies just ahead.
But wait! What are the criteria for a grand challenge again? The report slows to grind away at feedback about how a "grand challenge" was defined in the initial survey. Saying a grand challenge is "doable," for example, wasn't specific enough for some concerned respondents. Okay, point taken. Nobody wants a grand challenge that can't be met. (Ahem...NCLB...100% proficiency targets. Been there, done that.) So now, we prepare ourselves for the challenge...
But first, let's talk about three themes of responses the committee got from the math ed community. Let me be clear: These aren't the challenges, just the themes describing a body of suggested challenges:
- Changing perceptions about what it means to do mathematics.
- Changing the public’s perception about the role of mathematics in society.
- Achieving equity in mathematics education.
I was hoping to have a strong, positive reaction to these, but I fear my inner cynic took over: "In a nutshell, survey respondents argued our grand challenge for the future is to finally win the math wars that we've been fighting for the past 25 years." The details that followed this list, while short, were thoughtful. My inner cynic quieted down. We do need public support for improved ways of teaching mathematics. We do need to conceive of equity and teaching in ways that go beyond simply narrowing the achievement gap. All good things. But like I said, those were just themes. So now, I stand ready for those themes to be distilled into the shape of a grand challenge. And the winner is...
Will you settle for a "hypothetical" grand challenge instead? NCTM suggests this as a mere example: "All students will be mathematically literate by the completion of eighth grade," accompanied by this disclaimer:
Our example is only meant to illustrate how a Grand Challenge could satisfy the criteria listed in the previous section; we are not suggesting that it is necessarily a Grand Challenge we should pursue.
There are then six paragraphs describing the attention and importance given to literacy (the read-and-write text kind) and how we should give the same attention and importance to mathematical literacy. But this isn't the grand challenge. It could be, but it's not. Unless we decide it is. Which we haven't.
What we need next, says the report, is to think about the process we need to draft grand challenges. The design researcher in me says, "Yes, this is how to do this. We asked for grand challenges, got input, and now we're going to make revisions to our thinking and ask for more input, and it's going to be better input the next time around." I get it. But readers expecting a call to action might think NCTM is just calling a big, frustrating "Do over!" on the process. Here's NCTM's proposed plan, which they encourage people to critique: Engage many voices. Give people opportunities to draft the grand challenges and comment on drafts written by others. Engage in conversations online (!) and at conferences. Avoid just handing this work to a committee. So expect to see the NCTM Grand Challenge Grand Tour coming to a town near you — they'll have sessions in Boston at the Research Conference and Annual Meeting, as well as at AERA, AMATYC, AMTE, the Benjamin Banneker Association, EONAS, MAA, NCSM, PME-NA, TODOS, WME, regional NCTM meetings, and online venues. (Forgive me for not spelling out all the organizations. I figure if you don't know what it is, you're probably not attending.) I found this bit interesting:
The NCTM Research Committee will also convene a diverse group with a wide variety of expertise to review all submitted challenges, write additional challenges, vet them according to the criteria set forth in the invitation, and provide opportunities for the field to comment on them.
That sounds a bit like a hand-picked committee working in conjunction with, yet parallel to, all the work described above. There's little detail, but I think NCTM had better be clear about how the work of this committee will be weighed against the suggestions of the broader community. So, are we ready? Psyched? Ready to push that boulder back up the hill? I hope not, because the last section, while probably necessary, is a bit of a downer.
The Research Committee knows that a grand challenge — if and when we have one — will have consequences for researchers:
Any time a representative group of people is given an opportunity to identify Grand Challenges for an entire field, there is a moral obligation to consider the associated risks and weigh them against the potential benefits. The risks associated with creating a document that identifies our field’s Grand Challenges could be significant, yet we hope to minimize the risks by acknowledging and addressing them throughout the process.
What are the risks? Some people's research and work will get privileged over others. Funding will get reallocated. Journals will rethink what should and should not be published. The groups we consider to be "stakeholders" in math education could change. In some cases, people's feelings might get hurt; in other cases, careers could be threatened. I know this sounds overly dramatic, but the tenure and promotion game for academic researchers can be a rough one, and the research committee knows that. It still struck me as odd to see this "inside baseball"-type discussion near the end of the report, but it might comfort some and give fair warning to others.
So that's it. NCTM's grand challenge was not, and will not be, the "we asked, you answered" kind of process that some of us might have expected. I guess you could call that the bad news. If you were ready to jump to collective action, you're going to have to wait. But there is good news: If you are looking to give your input, it looks like you'll have multiple opportunities. And now that the task ahead is defined more clearly, we can think not just of possible challenges, but also of the ways we'll organize ourselves to tackle those challenges. To me, the key to the former will be the latter.