Planning for the 2010 NCTM Regional Conference in Denver

A week from now I'll be exhausted and headed home after a couple of long days at the Denver Convention Center and the NCTM Regional Conference. I've been planning my schedule and this is what I have so far:

Thursday, October 7:
9:30 - 10:30 "Learn <-> Reflect Kickoff: Geometry Must Change! Enough Trivia!" with Johnny W. Lott (Past President, NCTM; University of Mississippi) OR "A Learning Trajectory for Fractions with Low-Achieving Students" with Maria Ables (Freudenthal Institute)
I have class Thursday morning so I'll probably miss both of these. The content of Lott's presentation sounds good and pretty much everything from the Freudenthal Institute is good.

11:00 - 12:00 "Research-Based Practices that Increase Student Achievement -- and Practical Suggestions for Implementing Them In Your Classroom" with Diane Briars (President, National Council of Supervisors of Mathematics)

12:30 - 1:30 "Beyond Common Core Standards: Deep Curriculum Needed to Link Instruction, Assessment, and Professional Development" with Lorrie Shepard (University of Colorado at Boulder) OR "From Informal to Formal with a Click: Using Technology to Facilitate Progressive Formalization in Algebra" with Fred Peck and Jenn Moeller (Boulder Valley School District)
I had class with Fred and Jenn last summer and would love to see their presentation, but I really want to see Dr. Shepard's take on the Common Core Standards and the curriculum we need to support it. In either case, I might have trouble getting to Denver in time.

2:00 - 3:00 "Students' Geometric Reasoning: Research, Assessment, Teaching, and Intervention" with Michael Battista (Ohio State University)

5:00 - 6:30 "Why Videogames Are the Perfect Medium to Learn School Mathematics" with Keith Devlin (Stanford University)

Friday, October 8:
8:00-9:00 "Engaging Middle School Students in Proving" with Eric Knuth (University of Wisconsin -- Madison) OR "Online Professional Development Collaborative for Grades 6-12 Mathematics Teachers" with Robert Mayes (University of Wyoming) OR "Interactive, Visual Tools to Support the Learning of Algebra" with Henk van der Kooij (Freudenthal Institute)
I've probably seen, at previous Freudenthal events, the visual tools Henk van der Kooij will show, but if you haven't seen them you should check them out.

9:30-10:30 "NCTM and Implementation of the Common Core State Standards" with David Masunaga (Board of Directors, NCTM) and Michael Shaughnessy (President, NCTM)

11:00 - 12:00 "Students Reasoning and Sense Making on Some Favorite Geometry Problems" with Michael Shaughnessy (President, NCTM)

12:30 - 2:00 "Accessible Assessment: Selecting and Designing Tasks That Show What Students Know" with David C. Webb (University of Colorado at Boulder)
I've taken a class on assessment with Dr. Webb, so I may look for something else I haven't seen.

2:00 - 3:00 "Classroom Conversations: The Heart of Teaching" with Gail Burrill (Past President, NCTM; Michigan State University)

I still have some gaps to fill. I'll spend some time in the exhibitor hall, confirming my (somewhat baseless) distaste for most for-profit publishers and seeking out friendly faces. Hope to see you there!

Review: Race to Nowhere

Tonight I had the pleasure of attending a screening of the documentary Race to Nowhere at the Shepherd Valley Waldorf School. Alongside other education films such as Waiting for Superman and The Lottery, Nowhere provides a very different, yet very important, perspective on American education. Check out the trailer:


People need to see this movie -- especially people who see Waiting for Superman. (Are you hearing me, Oprah?) Where Superman wants you to point fingers at a school, a teacher, or a union, Nowhere doesn't try to assign blame. Nowhere wants you to understand our educational culture and our roles in it, and use that understanding to change the way we as a society view school. Instead of over-scheduling, over-working, and over-stressing our students, Nowhere advocates for children, letting kids be kids and fostering their creativity and happiness to make them (or let them be) accomplished learners. Nowhere paints a powerful picture of accountability-based, high-stakes reform, and we begin to see how easily we fall into the traps of our system even while we understand its failings. Where Superman demands anger as the impetus for social change, Nowhere is a plea for compassion. Anger generally trumps compassion when it comes to getting people's attention, so I have doubts that Nowhere will be getting the attention it deserves alongside Superman. But we don't always make our best decisions when we're angry.

I'll admit it: for most of my life I scoffed at the idea that student self-esteem was a prerequisite for student achievement. I always thought it should be the other way around. After a year or so of grad school, with sufficient guidance and time for reflection and self-enlightenment, I now realize that my opinions were far too grounded in my own experience as a relatively stress-free, high-achieving student. This movie isn't about the warm and fuzzy student self-esteem I may have discredited in the past; the students in this movie are being harmed both psychologically and physically in ways that are hard to watch, and students who don't want that stress are giving up altogether. Nowhere seeks a balance, a reciprocity between student welfare and achievement that we must desire as an educational outcome. High standardized test scores always look good, but not if they come with a higher incidence of student sickness, headaches, sleepless nights, caffeine and stimulant abuse, eating disorders, and suicides. (The film contains a heartbreaking story of a 13-year-old girl who killed herself, essentially, because a bad test in 8th grade algebra was going to prevent her from getting straight As.) The most powerful voices in Nowhere are the students. They are bright, well-spoken, driven students who want to do well as much as or more than any parent, teacher, administrator, or policymaker wants for them. They are the stars of this movie and they deserve not only our attention, but our action. Where should you start? Go see the movie and watch for the answers to that question at the end of the film.

Tenure and Union Contracts Are Two Separate Issues

One of many teachers to speak during NBC's "Teacher Town Hall" on Sunday was this young woman, featured again on Monday's NBC Nightly News with Brian Williams:



While I don't know all the details of her particular situation or what it's like at her school, her first words concern me: "I think we don't understand tenure." She then goes on with a passionate speech about how the union contract at her school interferes with her ability to deliver the kind of education she'd like to see her students get. I'm worried that she's failed to distinguish between union contracts and tenure, and it's clear from our current national discourse that she isn't the only one.

Union contracts, sometimes called professional agreements or master agreements, define fundamental working conditions between the school board (or administration) and the teachers, including contract hours, sick days, early retirement, grievance procedures, and many other practical, often common-sense things that are found in many employer-employee relationships. These are generally negotiated at a district level and quality varies. And, as with anything territorial, these negotiations can become very politicized and what started as common sense gets twisted into spiteful actions and reactions. If this teacher works in a district that prevents her from volunteering her non-contract time to help kids who want to be helped, then there is something wrong with both her district's union contract and how it's enforced. She has every right to be upset.

Tenure is something quite different. Tenure is usually defined by state lawmakers, and to avoid the "job for life" misconception many states don't actually use the word "tenure." Colorado, for example, uses "non-probationary teacher." After teaching successfully for several years (usually 3), as shown by regular and multiple administrative evaluations, a teacher earns due process rights. While I am not a lawyer, my understanding of due process is this: a teacher with "tenure," when faced with dismissal from his/her job, has the right to be heard in front of a neutral judge. Unfortunately, given the high stakes of such decisions (for both sides) and the inefficiencies of the legal system, such proceedings are contentious, taxing, and expensive. Tenure protections were designed to protect teachers from the whims of political pressures, not poor performance. In today's heated climate, however, poor teacher performance couldn't be much more political. Few would disagree that reforms are needed, but the rights of due process are not easily negotiated or redefined.

In some ways I'm happy that this teacher doesn't think she needs tenure. Maybe we finally live in a world where parents and administrators are more supportive when their teachers try innovative teaching practices. Maybe teachers are free to assign grades to students without outside influence. Maybe communities are more tolerant of teachers who affiliate with a religion, political party, or sexual orientation that differs from their own. Maybe teachers, such as the one in the video, can now be outspoken without risking their jobs. But let's get serious -- this is not the world in which we live and the teacher in the video, whether she thinks she needs them or not, is a noble professional who deserves some protection against these kinds of pressures. I want her to be able to argue for what's best for her students and not fear for her career because her sentiments might be unpopular, whether it's with her administration or union officials. Again, this is not about her performance. Protection from the pressures I've described above can and should be handled separately from concerns about poor performance.

I admire the spirit of the young woman in the video and hope she sees the differences between her union contract and tenure. I hope she is admired for speaking out and is received in her school with understanding and respect. And I hope she and others learn to calm the discourse, understand the critical distinctions in these important issues, and work to do not only what's best for children, but what's right for their teachers as well. Reforms done "with" are destined to be more successful than reforms done "to," and that kind of cooperation is going to take a refined level of debate.

A Week for Irony and Contradictions in Education

The recent past has been fraught with irony (or at least contradictions) in all corners of education. Let's look at a list:

Read Bruce Baker's "If money doesn't matter..."  Bruce examines the argument that "we keep throwing money at education and it hasn't made a difference," and points out that (a) schools with lots of money tend to do well, and (b) people who make that argument don't mind throwing money at charter schools.

Just days after Vanderbilt University released their study finding no improvement in test scores in Nashville's merit pay system, schools around the country (including Colorado) received millions of dollars from the federal government to implement merit pay systems.

NBC's "Education Nation" summit will gather "the foremost policymakers, elected officials, thought leaders, educators, members of the business community and engaged citizens" to discuss issues in U.S. education. Unfortunately, the list of invited panelists NBC is promoting doesn't include any teachers, students, principals, or professors. The only university-affiliated participant on the list is the President of the University of Phoenix, who happens to be a major sponsor of the event.

In a post called "Does Education Pay?" the Center for College Affordability and Productivity (CCAP) criticizes the College Board for "a confusion of correlation with causation." The day before, the CCAP asked, "Should We Abolish Colleges of Education?" and used this logic:
  1. U.S. students "perform in a mediocre fashion" on international tests.
  2. Kids need remediation and/or drop out of college because of their mediocre education.
  3. Good teaching is better than mediocre teaching.
  4. Most teachers studied at a college of education.
  5. The teachers who didn't go to a college of education are as good or better than those who did, such as Teach for America teachers. (Sorry, CCAP, that's rarely true.)
  6. Colleges of education support anti-knowledge and anti-intellectual biases and make their poor students look good by inflating grades.
  7. Colleges of education don't want teachers to be rewarded for student learning because student self-esteem is more important than knowledge.
  8. While there might be some good colleges of education, most of the people who really understand education are not in education schools.
  9. Courses in education are less helpful for math teachers (for example) than advanced math courses. (This was not the finding of Floden and Meniketti (2005), who say it's not as simple as "more math is better.")
  10. THEREFORE, we should close colleges of education, which are a "blight on true 'higher education' [that] should be discouraged at all institutions depending on taxpayer funds."
I admit, that post had no trouble working its way under my skin. If you can keep track of all the assumptions and correlation/causation confusions in their argument, you're doing better than me.

If I missed anything you want to add to this week's list, feel free to add it in the comments.

References
Floden, R. E., & Meniketti, M. (2005). Research on the effects of coursework in the arts and sciences and in the foundations of education. In M. Cochran-Smith & K. M. Zeichner (Eds.), Studying teacher education: The report of the AERA Panel on Research and Teacher Education (pp. 261-308). Mahwah, NJ: Erlbaum.

Teaching as a "Moral Craft"

I recently read The Peculiar Problems of Preparing Educational Researchers by David F. Labaree, and was particularly struck by this paragraph:

The main reason for [teaching as a moral craft] is that, unlike most professionals, teachers do not apply their expertise toward ends that are set by the client. A lawyer, doctor, or accountant is a hired mind who helps clients pursue goals that they themselves establish, such as to gain a divorce, halt an infection, or minimize taxes. But teachers are in the business of instilling behaviors and skills and knowledge in students who do not ask for this intervention in their lives and who are considered too young to make that kind of choice anyway. By setting out to change people rather than to serve their wishes, teachers take on an enormous moral responsibility to make sure that the changes they introduce are truly in the best interest of the student and not merely a matter of individual whim or personal convenience. And this responsibility is exacerbated by the fact that the student's presence in the teacher's classroom is compulsory. Not only are teachers imposing a particular curriculum on students, then, but they are also denying them the liberty to do something else. The moral implications are clear: If you are going to restrict student liberty, it has to be for very good reasons; you had better be able to show that the student ultimately benefits and that these benefits are large enough to justify the coercive means used to produce them (Cohen, 1988; Fenstermacher, 1990; Tom, 1984).

I've heard other people argue that the teaching profession doesn't compare well to other professions, like being a doctor or lawyer. After reading this, I'm happy it doesn't - teachers have good reason to feel like they're doing something special.

References
Labaree, D. F. (2003). The peculiar problems of preparing educational researchers. Educational Researcher, 32(4), 13-22. Retrieved from http://edr.sagepub.com/cgi/content/abstract/32/4/13.

2004-2006: My Adventures in Standards-Based Grading (And Why I Stopped)

Strap yourself into the wayback machine, boys and girls, because we're going back in time five whole years. Life was different then: the U.S. was engaged in wars overseas, unprecedented disaster had struck our Gulf Coast, and teachers struggled to adapt to an assessment-centric school culture. It was, like, totally different than things are now.

In 2003 I began my teaching career at a medium-sized high school in Southern Colorado. My first year, as it is for many teachers, wasn't much more than a fight for survival. I spent much of my second year learning how to become something other than the teachers who taught me, and part of that meant tinkering with my assessments. By my third year, I was ready to collaborate with the school's other four math teachers in a concerted effort to improve our assessment and grading practices. I'm not sure I knew what an ideal assessment and grading system should look like, but I knew I wanted something other than the traditional quiz/test, either-you-got-it-or-you-don't system.

I think there's a reason math teachers in particular get so heavily invested in assessment and grading. Numbers are our friends. We trust them. We can sort them, scale them, manipulate them, and summarize them in ways that reveal certain truths. I was determined - obsessed, even - to find a grading system that was accurate and fair. To me "accurate" meant "students earn the grade they deserve" and "fair" meant "objective and unbiased." I think I really believed that if I could just find the right scales and weights, the math would solve my assessment problems.

While I might have been facetious in my opening paragraph, things really were different five years ago. Not many teachers were bloggers, Twitter hadn't been invented, and you couldn't search for #sbg or #sbar hashtags. Sure, standards-based grading existed, but Guskey, Marzano, Wiggins, et al. sure weren't knocking on my door to tell me about it. I didn't know about SBG and I don't think any of my colleagues or administrators knew about it, either. It's sad that so many good ideas in education struggle to find their way from theory to practice, but that's another story for another day. This story is about my attempts at standards-based grading, the successes, failures, and frustrations I had along the way, and why I reverted to a traditional grading system.

2004-2005: SBG(ish)
During my first semester of teaching I spent many hours after school designing quizzes and tests, thinking, "This is part of being a first-year teacher. Once I write these tests I'll never have to do this again." HA! Not only did I not reuse any of those assessments, in six years of teaching I hardly reused any of my assessments. Every semester I had new problems, assessment designs, and grading systems that made my old ones look horribly obsolete. As I went into my second year of teaching, I already knew that I wasn't going to be satisfied with a traditional system.

Before I go any further, I want to make this clear: I'm not claiming that I somehow independently invented or discovered standards-based grading. I was making this up as I went along and, as you'll soon read, it didn't necessarily work all that well. I did manage to reorganize my gradebook around concepts and skills instead of dates or arbitrarily-titled ("Chapter 8 Test") assessments. But while that part looked like SBG, sometimes very little else did. In fact, I wouldn't be surprised if you read this and decide I wasn't doing SBG at all.

As I said, I started making SBG-like changes to start my second year, as you can see in this passage from my fall 2004 Algebra 1 syllabus:
Each unit in the text will have several key objectives that should be your focus during that unit. Your score for an objective will usually be established by your performance on a quiz or test and is based on my perception of your understanding. Objectives are graded on a scale of 5 to 10. Think of 10/10 as an A+, 9/10 an A-, 8/10 a B-, and so forth. Objectives not genuinely attempted will be given a zero. If you are dissatisfied with one or more of your objective scores, it is highly recommended that you see me for 1-on-1 help, preferably before or after school. Because the CPM philosophy is mastery over time, you can expect to be quizzed or tested over each objective multiple times, with each time representing an opportunity to raise your objective score.

I got off to a good start by focusing on objectives, but then quickly got bogged down by the point system. SBG isn't about accumulating points. Sure, you'll need a way to record student performance, but a well-implemented SBG system will be formative and focused on feedback, not a point system. By the way, if you ever want to drive yourself into an insane asylum, try a 5-to-10 point grading scale based on perceptions of student understanding. I nearly drove myself crazy, scoring, re-scoring, and re-re-scoring every bit of work out of fear that somebody's 7/10 actually showed the same understanding as a classmate's 8/10. I was far more concerned with ranking and sorting than feedback. Even worse, I would have students who scored 7/10 or 8/10 over and over, earning passing scores for objectives even though I'd never actually seen them get any right answers.

By the following spring (with a new set of classes, like a college schedule) I decided that it would be far better for students to get most problems all right instead of all problems mostly right, so my syllabus now said this:
IT'S ALL ABOUT THE ROADMAP. The objective roadmap is a detailed list of objectives that together make up everything you should learn in the course. The objectives are organized by unit, but each objective will be assessed separately. To pass an objective you need to get 80% or better on the objective test. If you fail an objective, you will have to retake and pass that objective test before moving on to any objectives in the next unit.

(Here are my roadmaps for Algebra 1, Algebra 2, and Business Math.)

Almost every objective test consisted of five problems, and if it wasn't right, it was wrong. I was so sick of agonizing over partial credit I got rid of it entirely. Objectives were added to students' grades on a schedule, and students who fell behind schedule got zeros on objectives they hadn't yet attempted. Zeros have rarely been known to play nice with the statistical mean, so a student who averages 80% on 80% of the objectives (with 0% on the others) is only going to get a 64%. It was setting a high standard, sure, but somehow I convinced myself that I was a one-man-army who could rid the world of grade inflation. Anyhow, I asked students to focus less on the gradebook and more on objective progress charts like this one:
(Those are student numbers on the left. Yes, this class only had 11 students.)
Still, if this was SBG, it was really, really bad SBG. Three major problems stand out. First, I was telling students that there were a bunch of objectives and any one of them could stop them dead in their tracks. (I think I relented on that one not far into the semester.) Second, I was still way too focused on points and percentages, not formative assessment. Third, a student who was "mostly right" on all five test problems got a zero in the gradebook, even if the cause was a small procedural mistake that he/she happened to repeat on all five problems. How's that for demotivation?

Amidst all the problems of this system, there emerged a brilliant, shining light, and her name was Tara. She gave me hope. She was part of a pretty special Algebra 2 class, and I think previous struggles in math had left her without much confidence. She barely spoke, and when she did it was very quiet, and quite often it was apologetic, like "I'm sorry to make you stay after school so I can retake a test." Tara understood the grading system and used it to her advantage, retaking test after test with study sessions in between. I remember a marathon session in the days before the final exam, when Tara was within sight of her goal of passing every objective. I think she stayed after school for five hours until her goal was finally met. Tara, if somehow you happen to read this, I hope you realize you weren't a burden, you were an inspiration. You did, and still do, make me want to be a better teacher.

2005-2006: Professional Learning Communities (PLCs) and Common Assessments
I think every teacher knows that feeling when an administrator latches on to an idea from a conference or meeting they attended and then brings it home for their staff to use. Unfortunately, in a rural district like ours with scant resources, the promotion usually went like this:
This year we're going to implement [insert acronym or edujargon here]. I've talked to a lot of other principals who are doing it and think it's very effective. ["Effective" usually means "Since we didn't make AYP, again, I need to tell the state that we're trying something new. Again."] The principals I talked to sent their teachers to summer workshops and hired a coordinator to guide the implementation of [edujargon], but since we can't afford so much as a box of tissues for this school, I'll just tell you what I remember and we'll make the rest up as we go along.

For 2005-2006, administration was pushing Professional Learning Communities (PLCs) and common assessments. The math department PLC opted to tackle Algebra 1 in the first year, and we began by brainstorming a list of concepts we thought all Algebra 1 students should know. We shuffled our list into eight groups that aligned with our textbook and other materials, and finally scheduled eight test dates spaced evenly throughout the year.

Because of some of the success I experienced the previous year, we opted to grade each concept separately instead of amassing a single score for the whole test. Students who did poorly on a concept would be remediated and given an opportunity to retest until they showed proficiency with each concept. Perhaps more importantly, since all the Algebra 1 teachers were assessing the same objectives, using the same tests, all scored the same way, we now had a basis for comparing our students' performance, and that led to a sharing of tips and techniques for teaching particular skills. Unfortunately our time was limited so I never learned as much from my colleagues as I hoped, but there is certainly an advantage in using the same SBG system across teachers in a department, or even now across schools in our ever-connected world.

We had a schedule change that allowed freshmen to take Algebra 1 all year long. With about a month left in the school year, the progress chart for one of my classes looked like this:
You can detect a problem here: too many students started slacking at the end of the year!
Even though the system was working, it was showing new cracks. Students knew I had a finite (usually just 2 or 3) set of tests for each objective, and if they failed form A, they would carefully memorize their mistakes and the right answers, then pretty much fail form B on purpose so they could then take form A again. (Ugh!) Also, after two years of using this kind of system, I was growing tired of explaining and defending it to students, parents, and administrators. I enjoyed the extra time spent with students who came in after school, but it became increasingly hard to make up for the lost time. Lastly, the paperwork was a challenge - I had to have every form of every test ready at a moment's notice for students who needed to take or retake tests. Because I didn't have my own classroom, this meant hauling around a file box to three or more rooms each day in two different buildings.

One of my biggest dissatisfactions with SBG was the quality of the assessments I was using. During instruction my students worked in groups, delving into big problems in context, problems that required a combination of reasoning and the integration of multiple skills. Because I wanted to assess every skill in isolation, the tests I wrote were nothing like that. Too often they looked like this:
(What, I couldn't think of contexts for these?)

Goodbye, SBG
I was having another major problem at the end of the 2005-2006 school year: because of budget cuts, declining enrollment, and a shuffle of veteran teachers, I found myself out of a job. Even though I knew these to be the reasons, I was pretty hard on myself and thought if I had been a better teacher, then the district would have somehow found a way to keep me.

In the interview for my new job I talked about my assessment practices and quickly got the feeling that they wouldn't work in my new school. The school day was two hours longer and few students would be able to spend time after school. I still hadn't figured out how to measure skills in isolation while wanting students to tackle larger, more comprehensive tasks. I was now the only teacher in the district teaching any of my subjects, so I had no colleagues to work with on common assessments. Frustrated and not wanting to rock the boat, I figured a simple, traditional assessment and grading system would be the easiest and best way forward. I now know that I couldn't have been more wrong! In my bitterness and self-doubt I gave up the progress I had gained over two years of hard work.

If you take anything away from this post, I hope it's this: it's not 2005 anymore. You don't have to stumble around and make the mistakes I did. You don't have to work in isolation like I did. There's a great community of bloggers discussing standards-based and other grading practices, and I haven't met one yet that doesn't want to help their fellow teachers. You might not have access to the research, but some of us do and we enjoy sharing what we think about it. So please, whether you're new to SBG or have been doing it for years, share your experiences and don't hesitate to ask questions!

Colorado's Amendment 61: The Simple Facts

Several stories have hit the news recently about Amendment 61, which Coloradoans will vote on this November. Here's what you need to know:

  • The property taxes that Colorado school districts rely upon for their funding usually start coming in during March.
  • When Colorado school districts used to budget around a calendar year (Jan-Dec), they didn't have long to wait (~3 months) for property tax funds to arrive.
  • When Colorado school districts switched to fiscal years (Jul-Jun), they had to wait much longer (~9 months) for property tax funds. Since districts don't have an extra 6 months of cash lying around, the state has provided interest-free loans to school districts to bridge the gap.
  • A provision of Amendment 61, if passed, prohibits the state from making these loans to school districts, and many districts aren't sure what they will do (or, better put, not do) without that money.
I don't think I need to tell you how to vote, but this does sound like an Amendment worth studying carefully. Ed News Colorado has the best coverage I've seen yet, including links to sites for and against Amendment 61.

A 2-for-1 "Soft Skills" Special: The Sit-Stand Paradox and Defective Girls

Written for The Virtual Conference on Soft Skills, July 3 - July 31, 2010

Of all my courses as a pre-service math education major, I think I enjoyed educational psychology the least. When you spend much of every day deciphering the infallibility of mathematics, the "theories" of social science don't hold up well against the scrutiny of a brain hardened by the concept of rigorous proof. I now realize I should have adjusted my perspective in whatever way necessary to ensure I got more out of the class, but even if I had I don't think anything would have fully prepared me for a classroom full of independently-minded students. You just have to jump in there, year after year, class after class.

In my six years of teaching high school math I developed some wonderful relationships with my students. Without overstepping the bounds of a teacher-student relationship, my students became my friends, something I seem to remember being told I should never let happen. But I would look forward to seeing my students each day; I would try to make the most of my time with them, and I would miss them when they were gone. If that doesn't describe "friends," then I apparently don't know what a friend is. I might be in the "ivory towers" of academia now, but I honestly think of my former students from those six years every single day.

That's not to say that there weren't MANY bumps and hiccups along the way, and I regret the lack of effort and deficits in my own character that prevented me from forming stronger relationships with ALL my students. But as I reflect back, two lessons learned (one a realization, the other a piece of advice) helped strengthen that special student-teacher bond.

The Sit-Stand Paradox
Ask any teacher what period of the day is likely to be their least favorite and most will answer, "last period." It doesn't matter if your school has four, six, seven, or eight periods -- there is always a last period. By far the toughest group of kids I ever attempted to teach was a last-period business math class. It's a bad sign when, on the very first day of class, a student you've just met pulls you aside and tells you, "I don't know who decided to put this mix of students together in the same class, but it's a really, really bad idea." I'd like to think my chances with them would have been much better if I'd seen them before lunch.

So what makes last period so tough? I think the explanation is simple: after a long day at school, students are tired of sitting and teachers are tired of standing. Should you be trying to hide this fact from your students? No! They're people, not circus animals that might attack if they sense fear. If you want a class that works with you, not against you, share your motivations and frustrations. Establish common goals and understandings so you can move forward together. Maybe it's time for an out-of-seat activity, a lesson outside, or a trip to a less familiar room in the school. I know it sounds easier than it really is, but even something as simple as sending your students to the board while you sit at their desk can be just the change in perspective everybody needs. If you're worried that instruction might suffer with a little chaos, think of how much it's already suffering when all the students are watching the clock hoping to be somewhere else.

Don't Treat Boys Like They're Defective Girls
During my third year of teaching I had the privilege to share a classroom with Miss Sandra Miley, a 30+ year educator who took a distinct pleasure in teaching freshman boys' seminar and P.E. If you didn't already know, freshman boys are at that awkward age (which lasts from about 11 to 25, as far as I can tell) that can make them awfully hard to teach. Not so for Miss Miley, who passed on this hard-earned wisdom:

"Do you want to know the secret to teaching boys? It's simple. Don't treat them like they're defective girls."

Ever since I was given that Yoda-like advice, I've been trying to unravel the mysteries contained within. Certainly Miss Miley had a perspective from 30+ years in the classroom that I may never match, but I think I got the point. As teachers, our jobs are made easier (not necessarily more enjoyable or effective) when students sit at attention, take notes, raise their hands, follow rules and instructions, and hang on our every word. If you have students who fit that description, I'd bet dollars to doughnuts that the majority of them are girls. Should you want or expect every student to belong in that category? I sure hope not. So don't punish boys who don't happen to behave like those girls. Such behavior is probably not in their DNA.

If you're not convinced, here's a little anecdote to consider. A highly-respected education researcher shared this hypothesis at a conference last fall (identities have been hidden to protect the unpublished):
"I've never been brave enough to try to publish this, but I've long wondered if boys develop better problem-solving skills because they aren't paying attention in class. Girls who listen carefully to instructions and take notes always know exactly where to start because the teacher told them. Boys who goof off during instructions spend a lot more time and effort sorting out the aspects of a problem for themselves, and that practice pays off in the long run."

We're Not All Math and English Teachers

Yesterday the Des Moines Register published an editorial applauding Colorado for reforming its teacher evaluation laws. The editorial goes on to criticize Iowa's reforms, saying the state is "moving too cautiously." Iowa requires teachers to "provide multiple forms of evidence of student learning and growth," but the Register is disappointed that the inclusion of standardized test scores is not required.

When will the public and policymakers realize that not every subject is covered by a standardized test? I feel like this is one of the most overlooked aspects of this argument. We're not all math and English teachers! Colorado spends 3 hours, per subject, each school year to measure every grade 3-10 student's achievement in math, reading, and writing (plus 3 hours for science in grades 5, 8, and 10). In a state that requires a minimum of 1080 hours of student contact time, this minimum of 9 hours of testing represents less than 1% of a school year. If only it felt like so little!

But that's only for math and English. If you want to mandate teacher evaluations based on standardized tests, you need equally rigorous tests for every subject and every teacher. Can you imagine tests for P.E., music, art, or vocational courses? What if schools could only offer classes that were backed by standardized tests?

Let's also consider the extra time required. Suppose students average 7 Carnegie units (credits) per year. At 3 hours per subject and 7 subjects, we're up to 21 hours of testing per year. If you were to require that standardized tests measure student growth, you'd need to test each student both at the beginning of the year and at the end. Doubling the testing takes us to 42 hours, or almost 4% of the year. It still may not sound like much, but students aren't going to be testing 7 hours a day for 6 straight days. If students tested 2 hours daily, the testing schedule would stretch out to 21 days, or about a month of the school year (half at the beginning, half at the end). You thought finals week was bad? Try two weeks!

Unfortunately, I have yet to work in any school that could carry on with regular learning during standardized test times. We tried everything from one test per day to four, mixed with full-length classes meeting on a rotation to all classes meeting on a shortened schedule. No matter how little testing is done, that testing time affects everything else in the day. Can you imagine a month spent like that?

This little rant has come from me, a guy who has almost always been pro-test. They have their place in the educational system and are part of an effective assessment strategy. But when people want every teacher to be measured by their students' standardized test scores, we have to think about the possible ramifications. So be careful what you wish for, Des Moines Register!

I Love Good Data Visualization. This Isn't It.

Earlier this week Newsweek ran a story titled, "Classrooms or Prison Cells?" Given some of the more recent education coverage from Newsweek I wouldn't have been very surprised if the article came down in favor of prisons.

Thankfully, the article was generally informative and unbiased, and told the story of California's 30-year rise in corrections costs amidst education budget cuts. According to the article, in 1980 California spent 10% of its budget on higher education and 3% on prisons. Now, almost 11% goes to prisons while higher education spending has dropped to 7.5%. If you thought that was a tragedy, check out the graph that accompanied the article:

(Image Source: http://www.newsweek.com/content/newsweek/2010/06/28/classrooms-or-prison-cells/_jcr_content/body/inlineimage.img.png/1277695326254.png)

Do you get the feeling that somebody in the Newsweek graphics department got this assignment at 4:45pm on a Friday afternoon? I would have loved to see the graph try to predict future spending. Given the assumption that these rates are truly linear, you can predict that by the end of this century California will be spending 35% of its budget on prisons and not a single dime on higher education.
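
For what it's worth, here's the back-of-the-envelope arithmetic behind that prediction, taking the article's 1980 and 2010 figures at face value and rounding "almost 11%" to 11%:

\( \text{higher education: } 7.5\% - \frac{10\% - 7.5\%}{30 \text{ years}} \times 90 \text{ years} = 0\% \)
\( \text{prisons: } 11\% + \frac{11\% - 3\%}{30 \text{ years}} \times 90 \text{ years} = 35\% \)

Ninety years past 2010 puts both lines right at the end of the century.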

Functions of Functions

Mathematical functions are usually introduced formally to students somewhere around the end of Algebra 1 or maybe in Algebra 2. If my Algebra 2 final exam had included the question, "What do you know about functions?" I probably would have said: (a) You use \( f(x) \) instead of \( y \), and (b) if a graph fails the vertical line test, it's not a function. With all due respect to my high school math teacher, this would have been a lousy answer. I might have known some peripheral information, but not the core understanding. The concept of a function is not that difficult, yet it's crucial to the majority of the content learned in Algebra 1 and 2. Sadly, students still struggle with what makes a function a function.

My first experience trying to teach the definition in a non-traditional way was using the "Cola Machine" problem in CPM Algebra (Math 1). The problem, several days into Unit 11, describes the following:
The cola machine at your school offers several types of soda. Your favorite drink, Blast!, has two buttons dedicated to it, while the other drinks (Slurp, Lemon Twister, and Diet Slurp) each have one button.

  1. Explain how the soda machine is a relation.
  2. Describe the domain and range of this soda machine.
  3. While buying a soda, Mr. Hagen pushed the button for Lemon Twister and got a can of Lemon Twister. Later he went back to the same machine but this time pushing the Lemon Twister button got him a can of Blast! Is the machine functioning consistently? Why or why not?
  4. When Karen pushed the top button for Blast!, she received a can of Blast! Her friend, Miguel, decided to be different and pushed the second button for Blast! He, too, received a can of Blast! Is the machine functioning consistently? Why or why not?
  5. When Loufti pushed a button for Slurp, he received a can of Lemon Twister! Later, Tayeisha also pushed the Slurp button and received a can of Lemon Twister. Still later, Tayeisha noticed that everyone else who pushed the Slurp button received a Lemon Twister. Is the machine functioning consistently? Explain why or why not. (Sallee et al., 2002, p. 375)
Most textbooks define functions approximately the same way: a relation is a function if there exists no more than one output for each input. Without a context, however, that definition might not carry much meaning, and relying solely on the vertical line test in a graph may not be helpful enough for many students. The soda machine problem, with its emphasis on consistency, gives both teacher and student a very approachable context within which to discuss what a function is and is not.
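
To make the "consistency" idea concrete, you could even model the soda machine in a few lines of code. This is just my own sketch (the drink names follow the problem), treating the machine as a set of observed button-drink pairs:

    # Each pair is one observed press: (button, drink that came out).
    machine = {
        ("Blast! top button", "Blast!"),
        ("Blast! second button", "Blast!"),
        ("Slurp button", "Lemon Twister"),   # always gives the "wrong" drink
        ("Lemon Twister button", "Lemon Twister"),
        ("Diet Slurp button", "Diet Slurp"),
    }

    def is_function(relation):
        # A relation is a function if no input appears with two different outputs.
        seen = {}
        for button, drink in relation:
            if button in seen and seen[button] != drink:
                return False
            seen[button] = drink
        return True

    print(is_function(machine))  # True: consistent, even though Slurp is mislabeled

Mr. Hagen's machine in part 3, where the same button produced two different drinks on different days, is exactly the case where is_function would return False.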

Early in my teaching career I thought Algebra 1 basically boiled down to two big ideas: solving equations and graphing lines. If students could do those two things, I felt pretty good about them passing my class. Now I see the big ideas of Algebra 1 differently, in a way that basically aligns with Colorado's revised standards for high school mathematics. The six expectations listed for Colorado's high school algebra standard can be summarized as follows:
  1. Functional representations (equations, graphs, and tables)
  2. Function behavior (qualitative)
  3. Function transformations (parameters and parent graphs)
  4. Equivalent expressions, equations, and inequalities
  5. Solving equations, inequalities, and systems of equations
  6. Mathematical modeling using functions
Four of the six expectations explicitly mention functions. The last expectation, mathematical modeling using functions, represents (for many math educators) the ultimate goal for math instruction: to give students the mathematical power to describe and understand the world around them. Instead of just solving and graphing, the big idea of high school algebra is functions, with most linear function work in Algebra 1 and most non-linear function work in Algebra 2.

I think it's a mistake to delay the explanation and definition of functions until late in Algebra 1 or later. Algebra 1 and younger students can understand the cola machine problem or other, similar contexts. Suppose the class plays an "exchange" game. Student A gives the teacher three triangles in exchange for two squares. What should Student B expect to get in exchange for his three triangles? What exchange would represent a function versus a non-function? For something based more in the real-world, this conversation could be set in the context of currency exchange. Also, a helpful model for learning functions might be function machines, such as:


Function machines are helpful models for learning functions, function composition, inverse functions, and even solving equations. They help stress the input-output relationship in a way that words or equations alone might not. Students will also naturally expect a single output for each input. The real challenge with Algebra 1 or younger students is to present a variety of equations that aren't functions; otherwise we risk having students think that every two-variable equation describes a function.
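
Here's a rough sketch of the function-machine idea in code -- my own illustration, not from any particular curriculum -- showing how machines chain together and run backwards:

    # A "function machine": put a number in, get exactly one number out.
    def f(x):
        return 3 * x - 2   # the machine's rule

    def g(x):
        return x + 5

    def compose(outer, inner):
        # Chain two machines: the output of 'inner' feeds the input of 'outer'.
        return lambda x: outer(inner(x))

    def f_inverse(y):
        # Run the f machine "backwards" to solve f(x) = y.
        return (y + 2) / 3

    print(f(4))              # 10
    print(compose(g, f)(4))  # 15: f first, then g
    print(f_inverse(10))     # 4.0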

Patterns can also be used to teach functions. If students are given the sequence 2, 4, 8, …, some students are likely to predict 14 as the next term (adding 8 plus 6, the next consecutive even number), while other students might predict 16 (the fourth power of two). Because the fourth term could reasonably be two different values, we can't establish a functional relationship to describe the sequence. This could even be an example worth graphing to discuss the meaning of the vertical line test:
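
For the record, the two competing rules might be written like this (my own formulation of what students tend to propose):

\( a_n = a_{n-1} + 2(n-1) \), with \( a_1 = 2 \), which continues 2, 4, 8, 14, 22, ...
\( a_n = 2^n \), which continues 2, 4, 8, 16, 32, ...

Both rules fit the three given terms but disagree at the fourth, which is exactly the trouble: one input (the fourth position) paired with two reasonable outputs.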


Our calculators also help us distinguish functions from non-functions. If you try to graph a circle on a Texas Instruments graphing calculator, you have to enter two functions: one for the top half of the circle, and one for the bottom half. Therefore, the equation of a circle (written as \( x^2 + y^2 = 1 \) or \( y = \pm \sqrt{1-x^2} \)) can't describe a function. The graphing calculator is also a good tool for discussing the square root function, and why its graph must be only half of the inverse of a parabola if we want it to be a function.

There are two key obstacles likely to remain in a student's way of understanding functions. First, students will continue to spend the vast majority of their time and effort on functions, and there are too few real-world examples of useful non-functions to help distinguish the two. Non-functions are less powerful and less common, but without them we risk having students who casually accept that any equation with an input-output relationship is a function. The second obstacle is notation, and it's not an obstacle we can likely avoid. The change to function notation, such as using \(f(x) = 3x - 2\) instead of \(y = 3x - 2\), is not only difficult to explain as something other than an arbitrary change in symbols, but includes two aspects that are directly contrary to a student's prior knowledge. Now a letter (such as the \(f\) in the preceding example) is no longer a variable, but a name with no numerical value by itself. Function notation also uses parentheses to represent something other than multiplication, adding more work to the already overloaded duties mathematicians place on these two simple arcs. The power of function notation is that it preserves the input (\(x\)) alongside the output, but it's confusing that the output (\(f(x)\)) reuses the input variable, uses a letter that isn't a variable, and uses symbolism that looks like multiplication of variables but is no longer multiplication. Perhaps questions like these could help students realize the power of the notation:

  1. If \(y=16\), \(y=49\), and \(y=64\), what might be an equation relating an input, \(x\), to the output, \(y\)?
  2. If \(f(4)=16\), \(f(7)=49\), and \(f(-8)=64\), what is \(f(x)\)?
  3. Are the two questions above the same? Which one is easiest to understand? Why?

References
Sallee, T., Kysh, J., Kasimatis, E., & Hoey, B. (2002). College Preparatory Mathematics 1 (Algebra 1) (L. Dietiker, Ed.; 2nd ed., Vol. 2). Sacramento, CA: CPM Educational Program.

Patterns of Patterns

As a young math student I knew tons of formulas and how to use them, but when it came to counting and generalizing sequences of numbers I often had to resort to brute force, or at least guess-and-check. It frustrated me that I knew there should be an easier way to generalize number patterns, but because I could usually get the right answer (patience and accuracy were on my side, thankfully), I didn't force myself to understand patterns more deeply.

For my first three years of teaching I used the original CPM series, and in their Algebra (Math 1) text they presented students with three key number sequences: the square numbers, the rectangular numbers, and the triangular numbers. These sequences appear so frequently that knowing and understanding their generalizations can be helpful when the sequences appear explicitly or when they are embedded in the foundation of another pattern.

The square number pattern, shown below, is the simplest of the three patterns, although students who are struggling to move beyond a recursive view of the pattern are likely to describe the sequence as "adding the next consecutive odd number." Because of this, the square numbers become a great example for students to see the importance of moving beyond recursive descriptions of patterns and towards expressions that yield the number of tiles for any figure.
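
To make that contrast concrete, the recursive and explicit descriptions of the square numbers can be written as:

\( S_n = S_{n-1} + (2n - 1) \), with \( S_1 = 1 \) (the "add the next consecutive odd number" view)
\( S_n = n^2 \) (the number of tiles in figure \(n\), no matter which figure you ask about)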

The rectangular numbers are the next sequence in the progression of patterns. Instead of the expression \(n^2\) (or \(n \cdot n\)) as with square numbers, the rectangular numbers use \(n + 1\) for one dimension, giving \(n(n+1)\).
The triangular numbers look like a new pattern:

But in fact, the triangular numbers can be seen as half of each rectangular number. This means we can modify the rectangular number expression and represent triangular numbers using \( \frac{n(n+1)}{2} \).
There are real-life representations of the triangular numbers, for sure (bowling pins, stacks of cans or boxes, etc.), but the real mathematical power behind leading students through this progression is that they see two examples of how to modify a previous pattern and generalization to get a new pattern and generalization.

Our class was presented with a problem known as the "Skeleton Tower," borrowed from http://www.wcer.wisc.edu/archive/nise/Publications/Briefs/Vol_2_No_1/. I've attempted to re-illustrate the tower in the figure below:
Suddenly our figures jump from 2-dimensional to 3-dimensional, but in doing so we've opened up the strategies students might use to generalize the pattern. If a student looks at how the construction of the tower progresses from figure to figure, he or she may think of the tower as a sum of horizontal layers. If a student were only given the table, or the sequence 1, 6, 15, 28,..., he or she would most likely see the mathematics of the sequence the same way.
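
For example, the layer-by-layer view (1 block on top, then 5, then 9, and so on, reading the differences of 1, 6, 15, 28) amounts to a sum like this, in my own rendering:

\( 1 + 5 + 9 + \cdots + (4n-3) = \sum_{k=1}^{n} (4k-3) = 2n^2 - n \)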

On the other hand, if a student were familiar with the power of the square, rectangular, and triangular numbers, they may spatially reason the tower to be a single column of blocks of height \(n\), surrounded by four "wings," each in the pattern of the triangular numbers. Because the height of each triangular wing is \(n-1\), not \(n\), an adjustment to the triangular number expression must be made, giving us a new expression of \( \frac{n(n-1)}{2} \) to describe each of the tower's wings. All together, we can express the number of blocks in a skeleton tower of height \(n\) as follows:

\( n + 4 \cdot \frac{n(n-1)}{2} = n + 2n(n-1) \)
If a student were to rewrite this expression as \( n(2n-1) \), they might visualize the tower as rearranged with two opposite wings removed and each stacked on top of the remaining wings, making a rectangular shape with height \(n\) and width \(2n-1\). Remember, the triangular wings were undersized to begin with; the only reason the width is not \(2(n-1)\) is because the center column remains in the middle. Here is the pattern "flattened," which hints at solutions using either the overall dimensions (as just discussed) or the rectangular numbers plus a center column:


The progression through the square, rectangular, and triangular numbers certainly makes the skeleton tower more approachable. But what if we build a tower that is more complex? What if our three fundamental number sequences are more hidden, and spatial "flattening" strategies are less apparent? Below is one such possible tower, an extension of the skeleton tower.

Unfortunately, after several hours of trying, no generalized expression for the number of blocks in this new tower was found. No amount of spatial reasoning would allow me to rearrange the blocks into an easier shape, but fortunately there were two indications that the generalization would be a cubic.

First, unlike the skeleton tower, this shape is much closer to a true pyramid, so it was difficult to imagine it losing its 3-dimensional character no matter how many blocks were rearranged. (I tried variations of the formula for the volume of a pyramid, \( V = \frac{1}{3}Bh \), with no success, but that did make me hypothesize that thirds of a cube could be involved in the correct generalization.) My second clue that the generalized expression was cubic was seen in the differences between the terms of the sequence (which I had to extend to a 5th figure):

Blocks: 1, 6, 19, 44, 85
First differences: 5, 13, 25, 41
Second differences: 8, 12, 16
Third differences: 4, 4
Because the third differences reach a constant term (four), the generalized expression must be a cubic. This patterning of differences is a very handy tool to have in these cases, but I must admit I don't fully understand why it works, or if it can be used as an aid in determining the expression.

(UPDATE: See update below for ways I could/should have found the generalized form.)

With no real hope of deriving the generalized expression via non-mechanical means, I gave up and used my calculator to find a cubic regression. The generalized expression for this second cube stack is \( \frac{2}{3}n^3 + \frac{1}{3}n \). (The thirds are there, as I suspected, although that was far more luck than intuition.) Even with this expression I still can't imagine a physical restacking of cubes (or fractions of cubes) that would lead a student to this result, nor would knowledge of square, rectangular, and triangular numbers.

I find these five patterns intriguing for a number of reasons. First, we go from very basic to very difficult in a quick but logical way. The skeleton tower is a good choice because it incorporates the triangular and rectangular patterns in its solution, and it adds another dimension (literally and figuratively) that suggests new problem-solving approaches, including 3-dimensional spatial reasoning. As for my pyramid-like tower, I like that it re-establishes a disequilibrium for students (and myself). For Algebra 1 students that aren't ready for quadratic regression, being exposed to such a complex pattern might provide some motivation and could be revisited either later in the course or in Algebra 2. It might also be an example I give to students for a "design-your-own" pattern workshop, where students manipulate different permutations of the square, rectangular, and triangular patterns in 2-dimensional and 3-dimensional shapes.

UPDATE (2010.07.02): With great thanks to two friends/followers, I have two more methods of finding the generalization for my pyramidal stack of blocks.

@msmathaddict tweeted a link to the Dr. Math portion of the Math Forum that filled in the missing step in my pattern of differences. Now that I've seen the solution it seems obvious: we can use a system of equations to find the coefficients of an equation of the correct degree.

My third differences reached a constant, so I knew the equation describing the sequence would be third degree. Third degree (cubic) equations are generalized in this form:

\( f(n) = an^3 + bn^2 + cn + d \)
From my original figures and table I had four known pairs: (1,1), (2,6), (3,19), and (4,44). Substituting into the generic cubic form, I can create a system of four equations with four variables:

\( a(1)^3 + b(1)^2 + c(1) + d = 1 \)
\( a(2)^3 + b(2)^2 + c(2) + d = 6 \)
\( a(3)^3 + b(3)^2 + c(3) + d = 19 \)
\( a(4)^3 + b(4)^2 + c(4) + d = 44 \)
Simplified, the system becomes:

\( a + b + c + d = 1 \)
\( 8a + 4b + 2c + d = 6 \)
\( 27a + 9b + 3c + d = 19 \)
\( 64a + 16b + 4c + d = 44 \)
The system might be most easily solved using matrices (because they are easily entered and manipulated in most graphing calculators):

\( \begin{bmatrix} 1 & 1 & 1 & 1 \\ 8 & 4 & 2 & 1 \\ 27 & 9 & 3 & 1 \\ 64 & 16 & 4 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 1 \\ 6 \\ 19 \\ 44 \end{bmatrix} \)
Solving for \( \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} \), we get \( \begin{bmatrix} 2/3 \\ 0 \\ 1/3 \\ 0 \end{bmatrix} \), so the general cubic becomes \( f(n) = \frac{2}{3}n^3 + \frac{1}{3}n \).
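
If you'd rather not key the matrix into a calculator, the same computation takes only a few lines of Python with NumPy (just a sketch of the method above, not anything I had in 2010):

    import numpy as np

    # Rows come from substituting n = 1, 2, 3, 4 into a*n^3 + b*n^2 + c*n + d.
    A = np.array([[1, 1, 1, 1],
                  [8, 4, 2, 1],
                  [27, 9, 3, 1],
                  [64, 16, 4, 1]], dtype=float)
    blocks = np.array([1, 6, 19, 44], dtype=float)

    a, b, c, d = np.linalg.solve(A, blocks)
    print(a, b, c, d)  # 0.666..., 0.0, 0.333..., 0.0  ->  f(n) = (2/3)n^3 + (1/3)n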

The second method for deriving the formula for my pyramidal stack of blocks was sent to me by my friend Andrew Drenner. He opted to use a summation property to capture the recursive nature of adding successively larger layers. He explains:

In a given horizontal slice of the pyramid there are $i^2 + (i-1)^2$ cubes at height $i$ (counting down from the top). Thus, the problem of finding the total blocks ($T_n$) in a pyramid $n$ levels tall becomes

$$T_n = \sum_{i=1}^{n} \left[ i^2 + (i-1)^2 \right].$$

Given that the summation sequence for squares is

$$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6},$$

$T_n$ can be expressed as follows:

$$T_n = \sum_{i=1}^{n} i^2 + \sum_{i=1}^{n} (i-1)^2 = \frac{n(n+1)(2n+1)}{6} + \frac{(n-1)n(2n-1)}{6} = \frac{n(4n^2+2)}{6} = \frac{2}{3}n^3 + \frac{1}{3}n$$

This matches the result from the cubic regression above.
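
If you'd rather trust a quick computational check than my algebra, here's a short Python sketch of my own (assuming the layer sizes described above) that compares a direct layer-by-layer count against the closed form:

    # Count blocks layer by layer (layer i has i^2 + (i-1)^2 cubes) and
    # compare against the closed-form expression (2/3)n^3 + (1/3)n.
    def blocks_by_layers(n):
        return sum(i**2 + (i - 1)**2 for i in range(1, n + 1))

    def blocks_closed_form(n):
        return (2 * n**3 + n) // 3  # always an integer

    assert all(blocks_by_layers(n) == blocks_closed_form(n) for n in range(1, 21))
    print([blocks_closed_form(n) for n in range(1, 6)])  # [1, 6, 19, 44, 85]
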
Why Don't More Teachers Practice Proper Formative Assessment?

Research that fails to impact practice is a problem in any arena, and education is no exception. Teachers have many reasons for not implementing practices based on research. Too often the research is unknown to teachers, locked away in journals that teachers and schools cannot afford. Beyond the cost of access, much of the best research is written for academic audiences and is not easily digested by a busy, distracted practicing teacher. Transforming research into improved practice takes time, effort, and patience, and it is made easier in a professional community that shares ideas and experiences. Teachers are constantly improving their practice, but too often the improvements are driven by personal failures or anecdotal evidence, not the quality results of dedicated educational researchers. This is a crippling inefficiency in the field of education, one that is largely self-imposed and tied to traditional practices held in place by the inertia of our experience.

Research strongly suggests that teachers could improve student learning by using formative assessment. Chapter tests and final exams are summative: they summarize the knowledge and skills a student has acquired, and are generally assigned a fixed grade. In fact, any assignment or task assigned a fixed and lasting grade can be considered summative, at least in part. Formative assessment, in contrast, focuses on improvement rather than final measurement. Teachers use formative assessment to adapt their teaching, and students, equal partners in the process, use feedback and self-monitoring to improve their knowledge and skills.

For any classroom teacher, the concept of formative assessment should be easy to grasp, and there are no significant obstacles to implementing it. So why don't more teachers practice proper formative assessment? I suggest two simple reasons, both of which could be eliminated by improved understanding among teachers, administrators, and parents.

Reason 1: Teachers think they're already doing formative assessment. My early understanding of summative versus formative assessment came from my curriculum director, who simply defined the two this way: "Summative assessments are tests and quizzes you grade, and formative assessments are anything you use to guide your instruction. You are already doing formative assessment all the time." These definitions were vague and incomplete, but not necessarily incorrect. The real problem was the message that we were "already doing formative assessment." Why, then, should we, a room full of teachers feeling burdened yet open to new ideas, seek to improve our practice of something we were apparently already doing? Just as with students, there is danger in false praise. Even worse, I knew my questioning techniques in whole-class activities were lacking. Had I been properly introduced to formative assessment, I might have improved my questioning practices and sought better ways to assess student understanding. I shouldn't have been led to think I was doing something well when, in reality, I wasn't, nor denied access to information that would have helped me improve.

Reason 2: Teachers are pressured to assess performance with grades. Grading practices can influence assessment practices, and pressure to assign grades for all classroom activity can inhibit the use of true formative assessment. In a world of 24/7 access to online gradebooks, parents and students expect to see near real-time measures of progress and achievement on their computer screens. To reap the full benefits of formative assessment, students must invest themselves in the improvement process as much as teachers. Once a teacher assigns a grade to a task, the message received by the student and parent is one of summation: the task is complete, learning has been measured, and it's time to move on to the next task. The teacher might not want to send this message, but what matters is the message received by the student, not the one intended by the teacher. The sophisticated give-and-take of formative assessment is best recorded and measured outside the simple percentages and averages calculated by our technically limited gradebooks.

Formative assessment is understandable and practical, but inhibited by false assumptions. Administrators falsely assume teachers already know and use it, and teachers falsely assume students are willing and able to translate a summative grade into formative feedback. Fortunately, both of these obstacles can be overcome through a better understanding of formative assessment, improved communication, and a commitment to collaboration. Teachers, students, and parents alike should welcome an increased focus on improvement, instead of the summative and often harsh dependence on grades and percentages. Summative assessments and grades might be more familiar, but that doesn't make them easier or more beneficial.

Dividing by Fractions

In his recent post "Math is a dangerous subject to teach," Joe Bower discusses the ability to learn the procedures of math without understanding the conceptual foundations. As an example, Joe humbly admits that he has "absolutely no idea why" (emphasis his) dividing by a fraction is replaced by multiplying by the fraction's reciprocal. He can get the answer right without proper understanding, and therein lies the danger.

One of the things I've enjoyed most about teaching is finding new and deeper ways of understanding so-called "simple" math that I thought I had already mastered. Most of my mathematical upbringing focused more on procedure than understanding, so I occasionally find myself in the same position as Joe. Using Joe's post as inspiration, I've given more thought to dividing by fractions and finally have a model and a description that I hope explains what's really happening when you divide by a fraction.

First, let's look at a simple fraction:

$$\frac{2}{3}$$

The top number, the numerator, simply counts "how many." The bottom number, the denominator, tells us "how big." We read this fraction properly as "two-thirds," and usually think of that as two objects, each one-third the size of the whole (however big that is).

Now let's look at division. Some students are led astray by early beliefs that "multiplying makes bigger" and "division makes smaller." That kind of misguided number sense can be frighteningly persistent. Multiplication is better thought of as a "scaling" operation, and division can be thought of as a "grouping" operation. To see what I mean, let me explain using whole numbers:

$$8 \div 4$$

The model I imagine for this problem looks like this:

[model: eight squares arranged into four groups of two]

Using the "grouping" concept of division, I've made four groups. Because each group contains two, the answer is two. No surprise. Let's try another:

$$8 \div 2$$

The model:

[model: eight squares arranged into two groups of four]

I've made two groups. Because each group contains four, the answer is four. Still no surprises. Let's try one more with whole numbers:

$$8 \div 1$$

The model:

[model: eight squares in a single group]

I've made one group, which is as trivial as it gets. Because the group contains eight, the answer is eight. Now that we've established the pattern with a "grouping" definition, it should be easy to see why you can't divide by zero. I can't possibly make zero groups and still have the eight squares.

Okay, now let's try an easy division problem with fractions:

$$8 \div \frac{1}{2}$$

Remember, the numerator tells us "how many" and the denominator tells us "how big." The model:

[model: each of the eight squares split into halves, making one group of sixteen pieces]

It's still one group (how many), but a group of halves (how big). If you count objects, you get the answer: sixteen. But we haven't added or taken away anything -- the eight is still there. Got it? Let's try another:

$$8 \div \frac{2}{3}$$

The model:

[model: each of the eight squares split into thirds, shared into two groups of twelve pieces]

Eight divided by two thirds, translated into "grouping-speak," is "eight grouped into two groups of thirds." Because each group contains twelve, the answer is twelve, even though you can still imagine the original eight.

So is this the same as multiplying by the reciprocal? Breaking the wholes into thirds gave us three times as many pieces (24, same as 8 times 3), and grouping into two groups gave us half of the pieces in each group (12, same as 24 divided by 2). More concisely, we multiplied 8 by 3 and divided by 2. So doesn't that mean dividing by two-thirds is the same as multiplying by three-halves? Not exactly. We get the same answer, but for a different reason. To me, the model for eight times three-halves would mean scaling eight to be three times bigger (24), then scaling back down to half of that (12). It's a different picture, even if we still get the same answer.
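
If it helps to see the two stories side by side, here's a tiny Python sketch of my own (not anything from Joe's post) that walks through the grouping arithmetic with exact fractions and then checks it against multiplying by the reciprocal:

    # "Grouping" story: break the wholes into thirds, then share into two groups.
    from fractions import Fraction

    whole = 8
    divisor = Fraction(2, 3)

    pieces = whole * divisor.denominator     # 8 wholes cut into thirds -> 24 pieces
    per_group = pieces // divisor.numerator  # shared into 2 groups -> 12 per group
    print(per_group)                         # 12

    # Multiplying by the reciprocal (the "scaling" story) lands on the same number.
    print(whole * Fraction(3, 2))            # 12
    print(Fraction(whole) / divisor)         # 12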

You can choose to accept this as a complication or a convenience; either way, I hope you have a better understanding of dividing by fractions. As always, feel free to offer criticisms in the comments below. (There must be many a sixth-grade teacher who could teach me a thing or two about this topic!)

Update 4/18/2010: Gary Davis co-authored a great guide on the division of fractions. It provides more strategies, more examples, and more detail than my post did. Thanks for sharing, Gary!