Tuesday, August 31, 2010

Teaching as a "Moral Craft"

I recently read The Peculiar Problems of Preparing Educational Researchers by David F. Labaree, and was particularly struck by this paragraph:

The main reason for [teaching as a moral craft] is that, unlike most professionals, teachers do not apply their expertise toward ends that are set by the client. A lawyer, doctor, or accountant is a hired mind who helps clients pursue goals that they themselves establish, such as to gain a divorce, halt an infection, or minimize taxes. But teachers are in the business of instilling behaviors and skills and knowledge in students who do not ask for this intervention in their lives and who are considered too young to make that kind of choice anyway. By setting out to change people rather than to serve their wishes, teachers take on an enormous moral responsibility to make sure that the changes they introduce are truly in the best interest of the student and not merely a matter of individual whim or personal convenience. And this responsibility is exacerbated by the fact that the student's presence in the teacher's classroom is compulsory. Not only are teachers imposing a particular curriculum on students, then, but they are also denying them the liberty to do something else. The moral implications are clear: If you are going to restrict student liberty, it has to be for very good reasons; you had better be able to show that the student ultimately benefits and that these benefits are large enough to justify the coercive means used to produce them (Cohen, 1988; Fenstermacher, 1990; Tom, 1984).

I've heard other people argue that the teaching profession doesn't compare well to other professions, like being a doctor or lawyer. After reading this, I'm happy it doesn't - teachers have good reason to feel like they're doing something special.

References
Labaree, D. F. (2003). The peculiar problems of preparing educational researchers. Educational Researcher, 32(4), 13-22. Retrieved from http://edr.sagepub.com/cgi/content/abstract/32/4/13.

Sunday, August 22, 2010

2004-2006: My Adventures in Standards-Based Grading (And Why I Stopped)

(Image credit: NASA, CC)
Strap yourself into the wayback machine, boys and girls, because we're going back in time five whole years. Life was different then: the U.S. was engaged in wars overseas, unprecedented disaster had struck our Gulf Coast, and teachers struggled to adapt to an assessment-centric school culture. It was, like, totally different than things are now.

In 2003 I began my teaching career at a medium-sized high school in Southern Colorado. My first year, as it is for many teachers, wasn't much more than a fight for survival. I spent much of my second year learning how to become something other than the teachers who taught me, and part of that meant tinkering with my assessments. By my third year, I was ready to collaborate with the school's other four math teachers in a concerted effort to improve our assessment and grading practices. I'm not sure I knew what an ideal assessment and grading system should look like, but I knew I wanted something other than the traditional quiz/test, either-you-got-it-or-you-don't system.

I think there's a reason math teachers in particular get so heavily invested in assessment and grading. Numbers are our friends. We trust them. We can sort them, scale them, manipulate them, and summarize them in ways that reveal certain truths. I was determined - obsessed, perhaps - to find a grading system that was accurate and fair. To me "accurate" meant "students earn the grade they deserve" and "fair" meant "objective and unbiased." I think I really believed that if I could just find the right scales and weights, the math would solve my assessment problems.

While I might have been facetious in my opening paragraph, things really were different five years ago. Not many teachers were bloggers, Twitter hadn't been invented, and you couldn't search for #sbg or #sbar hashtags. Sure, standards-based grading existed, but Guskey, Marzano, Wiggins, et al. sure weren't knocking on my door to tell me about it. I didn't know about SBG and I don't think any of my colleagues or administrators knew about it, either. It's sad that so many good ideas in education struggle to find their way from theory to practice, but that's another story for another day. This story is about my attempts at standards-based grading, the successes, failures, and frustrations I had along the way, and why I reverted to a traditional grading system.

2004-2005: SBG(ish)
During my first semester of teaching I spent many hours after school designing quizzes and tests, thinking, "This is part of being a first-year teacher. Once I write these tests I'll never have to do this again." HA! Not only did I not reuse any of those assessments, in six years of teaching I hardly reused any of my assessments. Every semester I had new problems, assessment designs, and grading systems that made my old ones look horribly obsolete. As I went into my second year of teaching, I already knew that I wasn't going to be satisfied with a traditional system.

Before I go any further, I want to make this clear: I'm not claiming that I somehow independently invented or discovered standards-based grading. I was making this up as I went along and, as you'll soon read, it didn't necessarily work all that well. I did manage to reorganize my gradebook around concepts and skills instead of dates or arbitrarily-titled ("Chapter 8 Test") assessments. But while that part looked like SBG, sometimes very little else did. In fact, I wouldn't be surprised if you read this and decide I wasn't doing SBG at all.

As I said, I started making SBG-like changes to start my second year, as you can see in this passage from my fall 2004 Algebra 1 syllabus:
Each unit in the text will have several key objectives that should be your focus during that unit. Your score for an objective will usually be established by your performance on a quiz or test and is based on my perception of your understanding. Objectives are graded on a scale of 5 to 10. Think of 10/10 as an A+, 9/10 an A-, 8/10 a B-, and so forth. Objectives not genuinely attempted will be given a zero. If you are dissatisfied with one or more of your objective scores, it is highly recommended that you see me for 1-on-1 help, preferably before or after school. Because the CPM philosophy is mastery over time, you can expect to be quizzed or tested over each objective multiple times, with each time representing an opportunity to raise your objective score.
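The heart of that syllabus policy is simple: each objective keeps a single score, and a retake can only raise it. Here's a minimal sketch of that bookkeeping (the function and objective names are hypothetical, just for illustration):

```python
# A sketch of the objective-based gradebook described in the syllabus:
# one score per objective (5-10 scale, 0 if not genuinely attempted),
# where a retake keeps the better of the old and new scores.

def record_score(gradebook, objective, score):
    """Keep the best score seen so far for each objective."""
    gradebook[objective] = max(gradebook.get(objective, 0), score)

gradebook = {}
record_score(gradebook, "solve linear equations", 7)
record_score(gradebook, "solve linear equations", 9)  # retake raises the score
record_score(gradebook, "graph lines", 8)
print(gradebook)  # {'solve linear equations': 9, 'graph lines': 8}
```

Note that the gradebook is organized by objective, not by assessment date, which is the one piece of this system that genuinely resembled SBG.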

I got off to a good start by focusing on objectives, but then quickly got bogged down by the point system. SBG isn't about accumulating points. Sure, you'll need a way to record student performance, but a well-implemented SBG system will be formative and focused on feedback, not a point system. By the way, if you ever want to drive yourself into an insane asylum, try a 5-to-10 point grading scale based on perceptions of student understanding. I nearly drove myself crazy, scoring, re-scoring, and re-re-scoring every bit of work out of fear that somebody's 7/10 actually showed the same understanding as a classmate's 8/10. I was far more concerned with ranking and sorting than feedback. Even worse, I would have students who scored 7/10 or 8/10 over and over, earning passing scores for objectives even though I'd never actually seen them get any right answers.

By the following spring (with a new set of classes, like a college schedule) I decided that it would be far better for students to get most problems all right instead of all problems mostly right, so my syllabus now said this:
IT'S ALL ABOUT THE ROADMAP. The objective roadmap is a detailed list of objectives that together make up everything you should learn in the course. The objectives are organized by unit, but each objective will be assessed separately. To pass an objective you need to get 80% or better on the objective test. If you fail an objective, you will have to retake and pass that objective test before moving on to any objectives in the next unit.

(Here are my roadmaps for Algebra 1, Algebra 2, and Business Math.)

Almost every objective test consisted of five problems, and if a problem wasn't completely right, it was wrong. I was so sick of agonizing over partial credit I got rid of it entirely. Objectives were added to students' grades on a schedule, and students who fell behind schedule got zeros on objectives they hadn't yet attempted. Zeros have rarely been known to play nice with the statistical mean, so a student who averages 80% on 80% of the objectives (with 0% on the others) is only going to get a 64%. It was setting a high standard, sure, but somehow I convinced myself that I was a one-man army who could rid the world of grade inflation. Anyhow, I asked students to focus less on the gradebook and more on objective progress charts like this one:
(Those are student numbers on the left. Yes, this class only had 11 students.)
Still, if this was SBG, it was really, really bad SBG. Three major problems stand out. First, I was telling students that there were a bunch of objectives, any one of which could stop them dead in their tracks. (I think I relented on that one not far into the semester.) Second, I was still way too focused on points and percentages, not formative assessment. Third, a student who was "mostly right" on all five test problems got a zero in the gradebook, even if the cause was a small procedural mistake that he/she happened to repeat on all five problems. How's that for demotivation?
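That zero-averaging problem is easy to see with a quick calculation (a hypothetical student matching the scenario above):

```python
# How zeros drag down the mean: a student scoring 80% on 8 of 10
# objectives, with zeros on the 2 not yet attempted.
scores = [80] * 8 + [0] * 2
average = sum(scores) / len(scores)
print(average)  # 64.0 -- failing, despite solid work on most objectives
```

Two missing objectives pull a solid B- average down to a D, which is exactly the demotivation problem described above.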

Amidst all the problems of this system, there emerged a brilliant, shining light, and her name was Tara. She gave me hope. She was part of a pretty special Algebra 2 class, and I think previous struggles in math had left her without much confidence. She barely spoke, and when she did it was very quiet, and quite often it was apologetic, like "I'm sorry to make you stay after school so I can retake a test." Tara understood the grading system and used it to her advantage, retaking test after test with study sessions in between. I remember a marathon session in the days before the final exam, when Tara was within sight of her goal of passing every objective. I think she stayed after school for five hours until her goal was finally met. Tara, if somehow you happen to read this, I hope you realize you weren't a burden, you were an inspiration. You did and still make me want to be a better teacher.

2005-2006: Professional Learning Communities (PLCs) and Common Assessments
I think every teacher knows that feeling when an administrator latches on to an idea from a conference or meeting they attended and then brings it home for their staff to use. Unfortunately, in a rural district like ours with scant resources, the promotion usually went like this:
This year we're going to implement [insert acronym or edujargon here]. I've talked to a lot of other principals who are doing it and think it's very effective. ["Effective" usually means "Since we didn't make AYP, again, I need to tell the state that we're trying something new. Again."] The principals I talked to sent their teachers to summer workshops and hired a coordinator to guide the implementation of [edujargon], but since we can't afford so much as a box of tissues for this school, I'll just tell you what I remember and we'll make the rest up as we go along.

For 2005-2006, administration was pushing Professional Learning Communities (PLCs) and common assessments. The math department PLC opted to tackle Algebra 1 in the first year, and we began by brainstorming a list of concepts we thought all Algebra 1 students should know. We shuffled our list into eight groups that aligned with our textbook and other materials, and finally scheduled eight test dates spaced evenly throughout the year.

Because of some of the success I experienced the previous year, we opted to grade each concept separately instead of amassing a single score for the whole test. Students who did poorly on a concept would be remediated and given an opportunity to retest until they showed proficiency with each concept. Perhaps more importantly, since all the Algebra 1 teachers were assessing the same objectives, using the same tests, all scored the same way, we now had a basis for comparing our students' performance, and that led to a sharing of tips and techniques for teaching particular skills. Unfortunately our time was limited so I never learned as much from my colleagues as I hoped, but there is certainly an advantage in using the same SBG system across teachers in a department, or even now across schools in our ever-connected world.

We had a schedule change that allowed freshmen to take Algebra 1 all year long. With about a month left in the school year, the progress chart for one of my classes looked like this:
You can detect a problem here: too many students started slacking at the end of the year!
Even though the system was working, it was showing new cracks. Students knew I had only a few forms (usually just two or three) of each objective test, and if they failed form A, they would carefully memorize their mistakes and the right answers, then pretty much fail form B on purpose so they could then take form A again. (Ugh!) Also, after two years of using this kind of system, I was growing tired of explaining and defending it to students, parents, and administrators. I enjoyed the extra time spent with students who came in after school, but it became increasingly hard to make up for the lost time. Lastly, the paperwork was a challenge - I had to have every form of every test ready at a moment's notice for students who needed to take or retake tests. Because I didn't have my own classroom, this meant hauling around a file box to three or more rooms each day in two different buildings.

One of my biggest dissatisfactions with SBG was the quality of the assessments I was using. During instruction my students worked in groups, delving into big problems in context, problems that required a combination of reasoning and the integration of multiple skills. Because I wanted to assess every skill in isolation, the tests I wrote were nothing like that. Too often they looked like this:
(What, I couldn't think of contexts for these?)

Goodbye, SBG
I was having another major problem at the end of the 2005-2006 school year: because of budget cuts, declining enrollment, and a shuffle of veteran teachers, I found myself out of a job. Even though I knew these to be the reasons, I was pretty hard on myself and thought if I had been a better teacher, then the district would have somehow found a way to keep me.

In the interview for my new job I talked about my assessment practices and quickly got the feeling that they wouldn't work in my new school. The school day was two hours longer and few students would be able to spend time after school. I still hadn't figured out how to measure skills in isolation while wanting students to tackle larger, more comprehensive tasks. I was now the only teacher in the district teaching any of my subjects, so I had no colleagues to work with on common assessments. Frustrated and not wanting to rock the boat, I figured a simple, traditional assessment and grading system would be the easiest and best way forward. I now know that I couldn't have been more wrong! In my bitterness and self-doubt I gave up the progress I had gained over two years of hard work.

If you take anything away from this post, I hope it's this: it's not 2005 anymore. You don't have to stumble around and make the mistakes I did. You don't have to work in isolation like I did. There's a great community of bloggers discussing standards-based and other grading practices, and I haven't met one yet that doesn't want to help their fellow teachers. You might not have access to the research, but some of us do and we enjoy sharing what we think about it. So please, whether you're new to SBG or have been doing it for years, share your experiences and don't hesitate to ask questions!

Friday, August 20, 2010

Colorado's Amendment 61: The Simple Facts

Several stories have hit the news recently about Amendment 61, which Coloradans will vote on this November. Here's what you need to know:

  • The property taxes that Colorado school districts rely upon for their funding usually start coming in during March.
  • When Colorado school districts used to budget around a calendar year (Jan-Dec), they didn't have long to wait (~3 months) for property tax funds to arrive.
  • When Colorado school districts switched to fiscal years (Jul-Jun), they had to wait much longer (~9 months) for property tax funds. Since districts don't have an extra 6 months of cash lying around, the state has provided interest-free loans to school districts to bridge the gap.
  • If passed, a provision of Amendment 61 would prohibit the state from making these loans to school districts, and many districts aren't sure what they will do (or, better put, not do) without that money.
I don't think I need to tell you how to vote, but this does sound like an Amendment worth studying carefully. Ed News Colorado has the best coverage I've seen yet, including links to sites for and against Amendment 61.