How the Race (to the Top) Was Won (Part 2 of 2)

Note to self: In the future, don't go on a multi-state road trip and become otherwise distracted for more than a month between Parts 1 and 2 of a two-part blog post. Sorry, readers!

In my previous post, I set out to answer the following questions about Phase 1 of Race to the Top (RTT):
  1. Where in the RTT rubric could states score the most points?
  2. For the portions of the RTT rubric identified in #1, which states scored highest?
  3. For the states identified in #2, what did their application propose and what were the judges' comments?
The answer to #1 was Section D of the RTT application, "Great Teachers and Leaders." If you break down that section, you'll find that the highest subsection scores belonged to Delaware, Tennessee, Georgia, South Carolina, Rhode Island, Kentucky, Louisiana, and Kansas. This post will dig into the actual applications to see what proposed reforms warranted those high scores, along with some of the comments made by the judges during the scoring.

(D)(1) – Providing High-Quality Pathways for Aspiring Teachers and Principals (21 points)

The RTT scoring rubric specifies that this criterion must be judged for both teachers and principals. High points are awarded for alternative certification routes that operate independently of institutions of higher education (IHEs) and meet at least four of the following five definitional criteria: (a) programs are operated by a variety of providers, (b) candidates are admitted selectively, (c) candidates receive school-based experiences and ongoing support, (d) coursework is limited, and (e) the certifications awarded are the same as those from traditional programs (Kentucky Department of Education, 2010, p. 7; U.S. Department of Education, 2010b, p. 10).

Kentucky has a 20-year history of alternative certification programs, and in 2003 the Kentucky Legislature allocated funds for the creation and growth of such programs (Kentucky Department of Education, 2010, p. 118). Kentucky now has seven alternative programs, including specific programs for people with extensive work experience, college faculty, and veterans of the Armed Forces, as well as district-based, university-based, and institute-based options. Most of the programs are selective; the work experience route requires at least ten years of work in the area of certification and several others require Bachelor's degrees in the relevant content area (Kentucky Department of Education, 2010, p. 120). Ten percent of Kentucky's current teachers and 17 percent of new teachers in 2009-2010 completed an alternative program.

Only two aspects of this portion of Kentucky's application received criticism from the reviewers, and only two of the five reviewers deducted points for this section. The first deficiency was in the area of alternative principal licensing. Kentucky has only one program for alternative principal certification, and it does not meet all five of the definitional criteria (U.S. Department of Education, 2010c, p. 32). Only one new principal was alternatively licensed in 2009-2010, and that individual went through a university-based alternative program (Kentucky Department of Education, 2010, p. 122). The other deficiency was in Kentucky's process for identifying and acting upon teacher and principal shortages. Shortages are identified by Kentucky's LEAs, but current efforts to place teachers are limited in both number and geographic reach (U.S. Department of Education, 2010c, p. 32).

My take: Kentucky got perfect scores from 3 of 5 judges, and the top score overall in this area, despite having only one alternative program for administrators, a program that produced just one new administrator the previous year. This was the best proposal in the country.

(D)(2) – Improving Teacher and Principal Effectiveness Based on Performance (58 total points across four subsections)

This criterion, also applicable to both teachers and principals, is worth a maximum of 58 points. Rhode Island scored 94 percent of those points, leading all applicants. Unlike the previous criterion, this one is divided into four distinct subsections. Rhode Island had the top score, or tied for it, in three of the four.

(D)(2)(i) - Measuring Student Growth (5 points)

Although this subsection is worth only five points, Tennessee was the only state awarded a perfect score. To earn the points, states must "establish clear approaches to measuring student growth and measure it for each individual student" (U.S. Department of Education, 2010b, p. 11).

Tennessee has been using the Tennessee Value-Added Assessment System (TVAAS) since 1992. The state claims to track every grade 3-12 student in every subject and calls the TVAAS the largest student database in history (Tennessee Department of Education, 2010, p. 82). Despite the "every subject" claim, a brief look at the publicly viewable data on the TVAAS website (https://tvaas.sas.com/evaas/welcome.jsp) indicates that only core subjects are tested, and not at every grade. Tennessee has used the TVAAS to determine Adequate Yearly Progress, to support progressive districts, to identify strengths and weaknesses in grades or subjects, and to inform instruction, but prior to the 2009-2010 school year only 14 percent of the state's teachers had access to the database (Tennessee Department of Education, 2010, p. 82). Now that Tennessee has opened access to all teachers, it plans to train current teachers and administrators, as well as pre-service teachers, in the use of the database. Not only will this value-added data be linked to teacher and principal compensation and evaluations, the state "will monitor and report access and usage of the system at the teacher, school, and district levels" (Tennessee Department of Education, 2010, p. 82). No explanation is given in Tennessee's application for this level of monitoring and reporting, but one might assume it is meant to apply pressure to ensure the system is universally used.

None of the five reviewers of Tennessee's application made any mention of the criticisms that have been leveled against the TVAAS, even though such research is easily found (Amrein-Beardsley, 2008; Kupermintz, 2003). The statistical model employed in the TVAAS, the Education Value-Added Assessment System (EVAAS), may currently be the best value-added model available, but experts have had difficulty resolving its flaws because neither the statistical algorithms nor the full value-added data sets have been disclosed for peer review. As one researcher stated, "My own and others' attempts to access the EVAAS value-added data have consistently gone without response or been refused with the justification that the value-added data, if released to external researchers, might be misrepresented" (Amrein-Beardsley, 2008, p. 68).
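
For readers who haven't worked with these models, a deliberately simplified sketch may help. To be clear, this is not the EVAAS, whose algorithms are undisclosed; it's a toy gain-score approach with hypothetical numbers: predict each student's current score from their prior score, then credit each teacher with the average residual of their students.

    # A toy value-added estimate, NOT the EVAAS: regress current scores on
    # prior scores, then average each teacher's residuals. All numbers are
    # hypothetical.
    import numpy as np

    # Hypothetical student records: (prior_score, current_score, teacher)
    records = [
        (610, 640, "A"), (580, 600, "A"), (650, 690, "A"),
        (600, 615, "B"), (570, 575, "B"), (640, 650, "B"),
    ]

    prior = np.array([r[0] for r in records], dtype=float)
    current = np.array([r[1] for r in records], dtype=float)

    # Predict each current score from the prior score (ordinary least squares).
    slope, intercept = np.polyfit(prior, current, 1)
    residuals = current - (slope * prior + intercept)

    # A teacher's "value-added" is the mean residual across his or her students.
    for teacher in sorted({r[2] for r in records}):
        mask = np.array([r[2] == teacher for r in records])
        print(teacher, round(float(residuals[mask].mean()), 1))

Even this toy version shows why transparency matters: the "effect" a teacher is credited with depends entirely on modeling choices buried in the prediction step, choices that in the EVAAS's case cannot be examined.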

My take: Tennessee got a perfect score despite not collecting data for all students, at all grade levels, in all subjects, and despite using a system that researchers aren't allowed to inspect because the state worries they might "misrepresent" the data. This was the best proposal in the country.

(D)(2)(ii) – Developing Evaluation Systems (15 points)

This subsection encourages the development of an evaluation system that "differentiate[s] effectiveness using multiple rating categories that take into account data on student growth ... as a significant factor" (U.S. Department of Education, 2010b, p. 11). The evaluation system should be "rigorous, transparent, and fair" and be "designed and developed with teacher and principal involvement" (U.S. Department of Education, 2010b, p. 11).

Rhode Island was the top-scoring state in this subcategory, but unlike Tennessee, Rhode Island does not have a value-added assessment system currently in place. Instead, it is rushing to implement one by the 2011-2012 school year so it can be "fully operational" by 2013-2014 (Rhode Island Department of Education, 2010, p. 95). Rhode Island plans to use this system liberally in educator evaluations:
Every decision made in regard to the professional educators in Rhode Island, whether made by an LEA or the state, will be based on evidence of the respective teacher's or principal's impact on student growth and academic achievement in addition to other measures of content knowledge, instructional quality, and professional responsibility. These new RI Standards ensure that no child in Rhode Island will be taught by a teacher who has received an "ineffective" evaluation for two consecutive years. (Rhode Island Department of Education, 2010, p. 97)
Instead of mandating a single statewide evaluation system, Rhode Island will allow individual LEAs to develop their own, provided they comply with the rigorous standards specified by the state. LEAs that choose not to develop their own systems, or fail to, must adopt a state-provided evaluation system.

Reviewers of Rhode Island's application awarded the state 96 percent of the possible points for this subsection. In response to Rhode Island's "no child will be taught by an ineffective teacher" clause, one reviewer commented, "This is bold, it shows the seriousness of effort and it is an incredibly important foundation for RTT plans to get traction" (U.S. Department of Education, 2010d, p. 5). Only one reviewer seriously questioned Rhode Island's aggressive timeline for implementing the evaluation system. Even though the state forecasts a "fully operational" value-added system by 2013-2014, value-added data will account for 40 percent of a teacher's evaluation starting in 2011-2012 before rising to 45 percent in 2012-2013 and 51 percent in 2013-2014 (Rhode Island Department of Education, 2010, p. 98; U.S. Department of Education, 2010d, p. 44).
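
To make the weighting schedule concrete, here is a minimal sketch of how a composite rating might be computed under those weights. The component names and the 0-100 scores are hypothetical; Rhode Island's application sets the growth weights, not this exact formula.

    # Sketch of a weighted composite evaluation under Rhode Island's
    # growth-weight schedule. Component scores (0-100) are hypothetical.
    growth_weight = {"2011-12": 0.40, "2012-13": 0.45, "2013-14": 0.51}

    def composite(growth_score, other_score, year):
        w = growth_weight[year]
        return w * growth_score + (1 - w) * other_score

    # The same teacher, with identical component scores, under each year's weights:
    for year in growth_weight:
        print(year, composite(growth_score=55, other_score=85, year=year))

Identical performance yields a lower composite each year (73.0, then 71.5, then 69.7) simply because the growth component, the part resting on a model still under construction, counts for more.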

My take: Rhode Island outscored every other state by mandating that districts transparently and fairly evaluate teachers based on data that didn't yet exist and a growth model the state didn't yet have.

(D)(2)(iii) – Conducting Annual Evaluations (10 points)

Two states, Tennessee and Rhode Island, scored the maximum ten points on this subsection. To earn the maximum points, the RTT scoring rubric requires that states have policies requiring "annual evaluations of teachers and principals that include timely and constructive feedback" (U.S. Department of Education, 2010b, p. 11) and that those evaluations include student growth data.

Tennessee gained favor in the scoring by having recently passed its "First to the Top Act," which establishes a 15-member Teacher Evaluation Advisory Committee tasked with developing a new evaluation system. All participating Tennessee LEAs will use the new evaluation system, as described in the application:
The evaluation system may be used to publicly report data that includes, but is not limited to, differentiation of teacher and principal performance (percentage in each rating category), the LEA's ability to increase the percentage of effective teachers and principals, and percentage of compensation based on instructional effectiveness. To ensure accountability on improving performance of teachers and principals, the state will encourage LEAs to set annual improvement goals, with a minimum of 15% improvement in terms of the number of educators moving up in each rating category. (Tennessee Department of Education, 2010, p. 86)
Much appears to hinge on the application's use of "may be used" and "the state will encourage." One reviewer, despite awarding a perfect score, advised that "It would make sense to pilot some of these ideas in several districts and make any needed adjustments before adopting them statewide in July, 2011" (U.S. Department of Education, 2010e, p. 4). Meanwhile, another reviewer questioned, "With such heavy weighting on student achievement data, it is not clear what solutions the State has to evaluate teachers in non-tested subjects or grades" and "It is not clear if this new evaluation system will need to be collectively bargained, and if so, how the State intends to secure teacher buy-in" (U.S. Department of Education, 2010e, p. 12). None of the reviewers explicitly questioned whether a minimum 15 percent annual increase in the number of teachers moving up a rating category is achievable year after year. Only time will tell if this is a sustainable goal.
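
To see why a standing 15 percent goal deserved more scrutiny, consider the compounding arithmetic. The starting count below is hypothetical; the growth rate is the one in Tennessee's application.

    # Compounding a 15 percent annual improvement goal. Suppose (hypothetically)
    # 1,000 teachers moved up a rating category in year one; the goal then
    # requires 15 percent more movers every year thereafter.
    movers = 1000.0
    for year in range(1, 11):
        print(f"Year {year}: {movers:,.0f} teachers must move up a category")
        movers *= 1.15

By year ten the required number of movers has more than tripled (about 3,518), and in a workforce of fixed size, where teachers eventually hit the top rating category, the target must eventually become impossible to meet.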

Compared to Tennessee's, Rhode Island's annual evaluation proposal looks decidedly unremarkable and received few comments from reviewers. Rhode Island calls for annual evaluations at a minimum, with the state responsible for providing teachers and principals the academic growth data that constitutes the bulk of their evaluation. The evaluations must also be based on the "quality of instruction (or, for principals, quality of instructional leadership and management), demonstration of professional responsibilities, and content knowledge" (Rhode Island Department of Education, 2010, pp. 101-102). LEAs are expected to review evaluations to guide their professional development programs.

My take: Tennessee planned to evaluate everyone but had a system designed to measure teachers in just a few subjects. How they would negotiate an expansion of that system wasn't clear. How they expected endless annual 15 percent improvements wasn't clear. Still, this and Rhode Island's rather bland proposal were the best in the country.

(D)(2)(iv) – Using Evaluations to Inform Key Decisions (28 points)

By far the largest subsection of criterion (D)(2), constituting nearly half its possible points, this subsection is targeted at using evaluations to inform "key decisions." The RTT rubric specifies four such "key decisions":
(a) Developing teachers and principals, including by providing relevant coaching, induction support, and/or professional development;
(b) Compensating, promoting, and retaining teachers and principals, including by providing opportunities for highly effective teachers and principals ... to obtain additional compensation and be given additional responsibilities;
(c) [Granting] tenure and/or full certification (where applicable) to teachers and principals using rigorous standards and streamlined, transparent, and fair procedures; and
(d) Removing ineffective tenured and untenured teachers and principals after they have had ample opportunities to improve, and ensuring that such decisions are made using rigorous standards and streamlined, transparent, and fair procedures. (U.S. Department of Education, 2010b, p. 11)
South Carolina and Rhode Island tied for the top score on this subsection, each earning 93 percent of the possible points. South Carolina currently uses two data systems: the system for Assisting, Developing, and Evaluating Professional Teaching (ADEPT) and the Program for Assisting, Developing, and Evaluating Principal Performance (PADEPP). (Acronym-loving South Carolina's RTT application is named INSPIRE, short for "Innovation, Next Generation Learners, Standards & Assessments, Personalized Instruction, Input and Choice, Redesigned Schools, Effective Teachers & Leaders, and Data Systems.") South Carolina plans to tie these systems into its state-controlled certification system (which determines contract and due process rights) and statewide salary schedule (U.S. Department of Education, 2010f, p. 29). With the state handling certifications, tenure, and salaries, it will be much easier for South Carolina to implement the reforms specified in the RTT scoring rubric.

One reviewer awarded only 18 of 28 points and had particularly critical comments for this part of South Carolina's proposal:
The state proposes to provide induction support for beginning teachers and principals. There is no mention of coaching services after the induction period. The state application explains various statutory issues related to tenure and insists that tenure will be related to performance. The explanation is inadequate and does not lay out a clear plan. (U.S. Department of Education, 2010f, p. 4)
A different reviewer gave South Carolina the maximum 28 points for this subsection, saying only that "all beginning teachers and principals [will] receive induction support and mentoring" and that "Salary incentives are part of South Carolina's plan, teacher effectiveness, retention, full certification, and removal, if necessary" (U.S. Department of Education, 2010f, p. 20). This kind of variability between reviewers is a problem with the design of the RTT rubric and will be discussed in the conclusion of this post.

South Carolina might have the top-scoring proposal for using evaluations to inform decision-making, but its assessment and data systems have some glaring problems. The statewide data systems, ADEPT and PADEPP, do not use a value-added model. Some LEAs are piloting a value-added "approach," and the state plans on developing or selecting a statewide model in the near future. The data used in that eventual model will initially come from the current statewide assessment and the Measures of Academic Progress (MAP), but the state plans to abandon those assessments in favor of one aligned with the Common Core K-12 standards, whenever one becomes available (South Carolina Department of Education, 2010, p. 102).

Rhode Island equaled South Carolina's score, but did so while retaining a more traditional measure of local control. In most cases, the LEAs will be setting their policies to meet the proposed goals of Rhode Island's RTT application, and the State Department of Education will assume an enforcement role. For the compensation piece, Rhode Island proposes funding four pilot programs with RTT dollars. By 2015, LEAs will be able to choose one of the four compensation models or develop their own with the state providing guidance and support (Rhode Island Department of Education, 2010, p. 106).

As discussed previously, Rhode Island plans to use its evaluation system for promotion, retention, and certification of teachers. LEAs will have to prove to the state that they are using evaluation data in these decisions and report to the state those teachers who have earned promotions or leadership responsibilities, roles that require an "effective" or "highly effective" rating on the annual evaluation (Rhode Island Department of Education, 2010, p. 107). LEAs will also have to certify that they have removed all non-tenured ineffective teachers and any teacher rated "ineffective" two years in a row (Rhode Island Department of Education, 2010, pp. 108-109). The state will continue to manage the certification system, and existing educators will become subject to the new rules as their certificates come up for renewal.

My take: Despite the high point value of this subsection, the U.S. DoE seems unclear about whether it believes more strongly in local control or state control, in current tests or future tests, or in mentoring or induction support. These were the best proposals in the country.

(D)(3) – Ensuring Equitable Distribution of Effective Teachers and Principals (25 points across two subsections)

Louisiana led all states by taking 90 percent of the maximum 25 points for this section, but did not have the high score in either of the two subsections. Rather than reviewing Louisiana's application, we will focus on the subsection leaders: Georgia and Kansas.

(D)(3)(i) – Ensuring Equitable Distribution in High-Poverty or High-Minority Schools (15 points)

The RTT scoring rubric for this subsection requires policies that "ensure that students in high-poverty and/or high-minority schools ... have equitable access to highly effective teachers and principals ... and are not served by ineffective teachers and principals at higher rates than other students" (U.S. Department of Education, 2010b, p. 11). Georgia earned 93 percent of the 15 available points in this subsection to lead all states. Georgia's strategy divides cleanly between problems of supply and demand. On the demand side, Georgia plans to award bonuses to effective teachers and principals in high-need schools "tied to the degree of reduction made in the student achievement gap every year" (Georgia Department of Education, 2010, p. 121). To entice effective teachers to move to high-need rural areas, the state is proposing $50,000 tax-free bonuses that vest over three years and require the teacher to maintain a high rating on the state's Teacher Effectiveness Measure (TEM). Districts wanting to participate in this program must compete for the funds and prove that the teachers eligible for bonuses have an established record of high achievement. Georgia is being bold with this plan, even while declining to "[offer] these kinds of bonuses to principals, having experimented with significant bonuses for principals in the past and having found that these incentives were not effective in getting principals to relocate" (Georgia Department of Education, 2010, p. 121). To improve the supply side of equitable teacher distribution, Georgia will work with LEAs to improve professional development and partner with organizations like Teach for America and The New Teacher Project that have experience recruiting teachers for hard-to-fill positions.

Only one reviewer offered what may be the most glaring criticism of Georgia's plan: "There is also detail missing ... on the systems to ensure distribution over time" (U.S. Department of Education, 2010g, p. 40). The RTT money allocated for the bonuses is temporary, and programs like Teach for America and The New Teacher Project are not well known for placing teachers who remain in high-need areas for more than a few years.

My take: Georgia actually had a straightforward approach here -- fill difficult assignments by offering significantly more money to teachers who have shown an ability to raise scores and close achievement gaps. Will it work? No one's sure, but this proposal should be worth following up on. After all, it was the best proposal in the country.

(D)(3)(ii) – Ensuring Equitable Distribution in Hard-to-Staff Subjects and Specialty Areas (10 points)

Kansas, whose application ranked 29th overall, makes a surprise appearance at the top of the scoreboard. They introduce this section of their application with some startling statistics:

The Teaching in Kansas Commission found that:
  • 42% of Kansas teachers leave the field after seven years,
  • 36% of Kansas teachers can retire within the next 5 years,
  • 25% fewer students entered the teaching profession over the past six years,
  • An 86% decrease in Kansas teacher biology licenses will occur within 6 years,
  • A 50% decrease in chemistry licenses will occur within 6 years, and
  • A 67% decrease in physics licenses will occur within 6 years. (Kansas Department of Education, 2010, p. 81)
Kansas's plan mostly consists of expanding the UKanTeach program, both at the University of Kansas (KU) and to other institutions of higher education around the state. Kansas claims that "UKanTeach is dramatically increasing the number of math and science teachers graduating from KU, resulting in over 100 new, highly qualified science and math teachers each year" (Kansas Department of Education, 2010, p. 81). The application claims this "dramatic increase" without citing the number of graduates before the UKanTeach program existed, and it fails to address non-STEM hard-to-staff subjects such as special education and language instruction. Neither of these criticisms was mentioned by any of the five reviewers of Kansas's application. Additionally, although the application includes other plans for teacher preparation and retention in hard-to-serve areas, the reviewers almost universally fail to cite them in their comments (U.S. Department of Education, 2010h).

My take: Kansas's proposal sounds practical but lacks details. Can UKanTeach do anything for non-STEM teachers? Why did the judges find this to be the best proposal in the country without an answer to that question?

(D)(4) – Improving the Effectiveness of Teacher and Principal Preparation Programs (14 points)

This criterion asks for a high-quality plan for linking student achievement and growth data to in-state teacher and principal preparation programs and for expanding those programs identified as successful.

Tennessee, which earned 90 percent of the possible points, uses brief but strong language to sell this part of its application, proudly boasting that "The cornerstones are competition and accountability" and that "Our State Board of Education (SBE) has broken the monopoly on teacher preparation held by institutions of higher education" (Tennessee Department of Education, 2010, p. 110). Tennessee claims to publicly report its teacher preparation program quality data, but a search of the state's Department of Education website (http://tn.gov/education/) when the RTT results were announced revealed nothing. Tennessee planned in 2010 to gather stakeholders from across the state to examine how student achievement data is linked to teacher preparation programs and to develop a plan to "reward programs that are successful and support or decertify those that fail to produce effective teachers" (Tennessee Department of Education, 2010, p. 111). Most of the reviewers of Tennessee's application cited the lack of attention to principal preparation programs to match that given to teacher programs (U.S. Department of Education, 2010e).

Rhode Island doesn't use Tennessee's tough language, but claims to "[act] aggressively to close programs that do not meet its rigorous current standards and has closed two programs, including a principal preparation program, in the last 5 years" (Rhode Island Department of Education, 2010, p. 125). Every educator preparation program in the state must be re-approved every five years, and Rhode Island plans to include data from teacher and principal evaluations in the re-approval process. Specifically, Rhode Island wants to track how many educators from each preparation program earn full Professional Certification, and how each program's graduates are distributed between high- and low-poverty and high- and low-minority schools (Rhode Island Department of Education, 2010, p. 125).

My take: It's troubling to see Rhode Island tout its closing of two teacher/principal preparation programs, and more troubling to see the judges view that as a positive achievement, without any detail on the specific shortcomings that led to the closures. How were the programs not meeting Rhode Island's "rigorous standards," and what efforts had been made to improve them? It would have been far more impressive for our country's best proposals to describe a successful rebuilding of those programs rather than their simple termination.

(D)(5) – Providing Effective Support to Teachers and Principals (20 points)

This criterion is based on two goals: providing ongoing, targeted professional development and supports, and monitoring and improving the quality of those supports. The supports could include "coaching, induction, and common planning and collaboration time to teachers and principals" (U.S. Department of Education, 2010b, p. 12).

Delaware earned 95 percent of the available points by requiring all participating LEAs to adopt a comprehensive professional development plan that contains all the supports specified in the rubric. Despite the top score, one reviewer commented:
The key weakness of this plan is the lack of specificity about how LEAs will know what is a good PD model and what is not – this section seems vague and not well thought through. Compared to other plans in the Delaware application, this area is not very creative nor clear. (U.S. Department of Education, 2010i, p. 15)
Delaware does specify plans for certifying effective professional development programs and for requiring LEAs to adopt such high-quality programs by the 2010-2011 school year, but the eleven pages of description in the Delaware application didn't translate into rich commentary from the reviewers, despite the high scores.

My take: It's as if the reviewers are confident in Delaware's plan despite not being able to accurately describe what the plan contains. Somehow, this was still better than the proposals from all other states.

Discussion

Taken together, we see a policy preference for: (a) many alternative routes to certification, (b) an extensive value-added assessment system, (c) teacher and principal evaluations based on student performance and growth data, (d) annual evaluations of all teachers and principals, (e) teacher and principal compensation, promotion, and retention policies tied to evaluations, (f) incentives for teachers and principals to serve in high-need areas, (g) programs to increase the supply of teachers for hard-to-fill subjects, (h) quality, accountable teacher preparation programs, and (i) effective professional development.

This should be no surprise, because this is precisely what the RTT rubric asked for. How did this encourage a large pool of innovative and creative reforms? Is Kentucky's 20-year-old alternative licensure program creative? Is Tennessee's value-added assessment system, in use since 1992, innovative? It's very possible the RTT rubric stifled creativity and innovation as much as it encouraged them. Even worse, states may have abandoned the innovative ideas they developed for Phase 1 and simply copied the high-scoring states above in hopes of winning funding in later phases.

A very troubling aspect of many proposed policies is the dependence of so many important decisions on a value-added student performance model that is not 100 percent transparent. Regardless of one's opinion of value-added models, or any belief that they could achieve perfect accuracy and reliability, the use of a non-transparent model (such as the EVAAS) in so-called transparent evaluation systems is a significant flaw. Software is patentable and profitable while the underlying mathematics is not, so the motivation for keeping at least some parts of these growth models secret is understandable, even if undesirable. Still, the RTT process could have been strengthened significantly if the scoring rubric had required 100 percent transparency for any and all statistical operations performed on educational data.

My final criticism of this process lies in the RTT rubric itself. Why have 500 total points? Why is "providing high-quality pathways for aspiring teachers and principals" worth 21 points and "ensuring equitable distribution of teachers" worth 25? Who decided that one category should be worth four points more than the other, and why? If those four points had been allocated elsewhere, would the results have changed?

Peterson and Rothstein (2010) expose the arbitrary way in which points were allocated in the RTT rubric and show how changes in category weights could have changed the outcome of the entire RTT competition. For example, had a mere 15 points been added to any one of four criteria (improving student outcomes, using data to improve instruction, using evaluations to inform key decisions, or ensuring equitable distribution), with the other criteria each decreased by less than half a point to keep the rubric's total at 500, Georgia would have won the RTT competition (Peterson & Rothstein, 2010, p. 4). Similarly, the "demonstrating other significant reforms" criterion was allocated only one percent (5 points) of the total rubric. Given the innovation possible in this "other" category, including reforms called for in the DoE Blueprint and other federal education programs, it would have been reasonable to give that category a larger weight. If that weight had been 25 percent of the application, Pennsylvania would have been the winner (Peterson & Rothstein, 2010, p. 5).
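
Here is a minimal sketch of that sensitivity exercise; the category names, weights, and per-category fractions below are all hypothetical, whereas the real analysis used the published Phase 1 scores and the full rubric.

    # A toy version of the Peterson & Rothstein (2010) reweighting exercise.
    scores = {  # fraction of available points earned in each category
        "State X": {"teachers": 0.95, "data": 0.80, "other": 0.70},
        "State Y": {"teachers": 0.85, "data": 0.90, "other": 0.95},
    }

    def totals(weights):
        return {state: sum(frac * weights[cat] for cat, frac in cats.items())
                for state, cats in scores.items()}

    base = {"teachers": 300, "data": 150, "other": 50}      # sums to 500
    shifted = {"teachers": 250, "data": 125, "other": 125}  # still 500

    print(totals(base))     # State X edges out State Y, 440.0 to 437.5
    print(totals(shifted))  # the ranking flips: 425.0 to 443.75

Holding every state's performance constant and merely shifting weight toward the "other" category is enough to flip the winner, which is precisely Peterson and Rothstein's point.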

The design of the RTT rubric and its point allocation not only affected the outcome of Phase 1, but likely affected the following phases even more strongly. The proposal elements examined in this post rose to the top regardless of the margin of victory. Not only are slim margins statistically insignificant on a 500-point rubric, but the scoring process itself leads to some arbitrary selections. Unfortunately, when trying to play catch-up with the winners, the simplest thing to do is copy, not create. In doing so, RTT reinforces a "don't just stand there, do something" atmosphere for reform, even if the choice and effectiveness of those "somethings" is uncertain and arbitrary.

References

Amrein-Beardsley, A. (2008). Methodological Concerns About the Education Value-Added Assessment System. Educational Researcher, 37(2), 65-75. doi:10.3102/0013189X08316420

Georgia Department of Education. (2010, January 19). Race to the Top: Application for Initial Funding. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/georgia.pdf

Kansas Department of Education. (2010, January 14). Race to the Top: Application for Initial Funding. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/kansas.pdf

Kentucky Department of Education. (2010, January 14). Race to the Top: Application for Initial Funding. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/kentucky.pdf

Kupermintz, H. (2003). Teacher Effects and Teacher Effectiveness: A Validity Investigation of the Tennessee Value Added Assessment System. Educational Evaluation and Policy Analysis, 25(3), 287-298. doi:10.3102/01623737025003287

Peterson, W., & Rothstein, R. (2010). Let's do the Numbers: Department of Education's "Race to the Top" Program Offers Only a Muddled Path to the Finish Line (Briefing Paper No. 263). EPI Briefing Papers. Washington, D.C.: Economic Policy Institute. Retrieved from http://www.epi.org/page/-/BriefingPaper263.pdf

Rhode Island Department of Education. (2010, January 14). Race to the Top: Application for Initial Funding. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/rhode-island.pdf

Tennessee Department of Education. (2010, January 18). Race to the Top: Application for Initial Funding. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/tennessee.pdf

U.S. Department of Education. (2010b). Race to the Top Scoring Rubric Corrected. Washington, D.C.: U.S. Department of Education. Retrieved from http://www2.ed.gov/programs/racetothetop/scoringrubric.pdf

U.S. Department of Education. (2010c). Race to the Top: Technical Review Form - Kentucky. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/kentucky.pdf

U.S. Department of Education. (2010d). Race to the Top: Technical Review Form - Rhode Island. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/rhode-island.pdf

U.S. Department of Education. (2010e). Race to the Top: Technical Review Form - Tennessee. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/tennessee.pdf

U.S. Department of Education. (2010f). Race to the Top: Technical Review Form - South Carolina. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/south-carolina.pdf

U.S. Department of Education. (2010g). Race to the Top: Technical Review Form - Georgia. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/georgia.pdf

U.S. Department of Education. (2010h). Race to the Top: Technical Review Form - Kansas. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/kansas.pdf

U.S. Department of Education. (2010i). Race to the Top: Technical Review Form - Delaware. Retrieved from http://www2.ed.gov/programs/racetothetop/phase1-applications/comments/delaware.pdf