Having won a windfall of federal education dollars as a result of its successful Race to the Top application, Maryland is now faced with putting into practice the sweeping changes it promised to make in the way its teachers are evaluated. But the state has already fallen significantly behind its original schedule, and it looks as if the new procedures won't go into effect fully before 2014 at the earliest. Maryland school department officials need to get this right; any further delays could put at risk the $250 million in additional U.S. Department of Education funding so many people, from State School Superintendent Nancy S. Grasmick and Gov. Martin O'Malley on down, worked hard to get.
Half of the new evaluation system consists of the standard techniques school systems have used for decades to judge how effective their teachers are in the classroom: personal observations, assessments of lesson planning, instructional quality and the ability to maintain discipline. Different jurisdictions may emphasize different aspects of this traditional evaluation model, but in general they already know how it works.
Not so, however, with the most controversial change, made possible by state legislation last year, which requires that 50 percent of a teacher's evaluation be based on growth in student performance. The idea that teachers' evaluations should be based at least in part on how much academic progress their students make during the year sounds reasonable enough, but in practice it's so complicated that the state school board's flow chart depicting all its elements looks like a Rube Goldberg device on steroids.
The concept of growth, which is key to the system, was developed to describe how many academic grade levels a student progresses during any given year. That distinguishes it from the more familiar concept of achievement, which simply describes what level a student has reached by the time he or she is tested.
For example, a student who enters eighth grade reading on a seventh-grade level and then scores on an eighth-grade level at the end of that year has progressed one grade level, a gain that represents his or her "growth" in performance.
By contrast, a student who starts eighth grade reading at a sixth-grade level and ends the year reading on an eighth-grade level has actually progressed two levels during the year — twice as much as the kid who started out reading at the seventh-grade level. Similarly, a student who starts at the seventh-grade level but is reading at only the 7.5-grade level by year's end has gained just half a grade, falling half a year short of expected progress, a shortfall the system treats as a form of "negative" growth.
Applying those definitions to the evaluation process, in theory an eighth-grade teacher who brings students from the seventh- to eighth-grade level over the year would be rated "effective," as measured by growth in performance. If he or she brought students from the sixth-grade level to the eighth-grade level over the same period, that would represent twice as much "growth," so that teacher might be rated "highly effective" rather than merely "effective." By the same logic, the teacher whose student ended the year reading at only the 7.5-grade level might be rated as "ineffective," at least as measured by student growth.
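The arithmetic behind those examples is straightforward: growth is the end-of-year reading level minus the start-of-year level, and one full grade level of growth is the baseline for an "effective" rating. A minimal sketch of that logic follows; the function names and the exact rating thresholds are illustrative only, not the state's actual formula.

```python
def growth(start_level: float, end_level: float) -> float:
    """Academic growth: grade levels gained over the school year."""
    return end_level - start_level


def rating_from_growth(g: float) -> str:
    """Map growth to a rating. Thresholds are illustrative:
    one year's growth is treated as the baseline for 'effective'."""
    if g >= 2.0:
        return "highly effective"
    if g >= 1.0:
        return "effective"
    return "ineffective"


# The three students from the examples above:
print(rating_from_growth(growth(7.0, 8.0)))  # gained one level
print(rating_from_growth(growth(6.0, 8.0)))  # gained two levels
print(rating_from_growth(growth(7.0, 7.5)))  # gained half a level
```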
Though the Maryland law requires that half of a teacher's evaluation be based on student growth, it splits that portion into two parts: 30 percent based on requirements defined by the state and the remaining 20 percent left up to local school districts, which can select from a menu of state-approved assessment tools or even design their own. The idea is to encourage as much innovation and experimentation as possible.
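The weighting described above reduces to a simple weighted average: 50 percent traditional measures, 30 percent state-defined growth, 20 percent locally chosen growth measures. A hedged sketch, assuming for illustration that each component is scored on the same 0-100 scale (the component names are the author's, not the state's):

```python
def composite_score(traditional: float,
                    state_growth: float,
                    local_growth: float) -> float:
    """Weighted evaluation under the Maryland split:
    50% traditional measures, 30% state-defined student growth,
    20% locally selected growth measures."""
    return 0.50 * traditional + 0.30 * state_growth + 0.20 * local_growth


# Example: strong traditional marks, middling growth scores.
print(composite_score(90.0, 70.0, 80.0))  # 0.5*90 + 0.3*70 + 0.2*80 = 82.0
```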
A pilot program initially involving seven school districts was supposed to be up and running by the start of the current 2010-2011 school year. That deadline passed, however, when the state task force assigned to codify the new evaluation procedures couldn't complete its work in time. Now officials are aiming for a September start in six counties — Baltimore, Prince George's, Charles, Kent, Queen Anne's and St. Mary's — as well as Baltimore City. Those programs represent a cross-section of the state's large and small, urban and rural school systems, and if all goes well the new teacher evaluation system should be ready for statewide introduction in 2012-2013.
Even then, however, the full impact of the reforms won't be felt until 2013-2014, the first year teachers can be held fully accountable for student growth in the evaluation process; the 2012-2013 school year, when the system first goes statewide, will be more of a dress rehearsal. If the process seems lengthy and complicated, that's because it was designed in part to allay concerns by the teachers unions that their members' careers could be jeopardized by an evaluation system that judged their effectiveness on the basis of one or two measures that may or may not reflect their true performance in the classroom. There's enough wiggle room in the new system for those doing the evaluating to bring the objective data on student growth into line with their general impressions of how well a teacher is doing his or her job.
That's both good and bad. It would be a mistake to create a system so rigid that it doesn't allow for correction in cases where the numbers don't match what trained educators can plainly see about a teacher's performance. But ultimately, the point of this exercise is to produce better results in the classroom. As the state refines its objective criteria, subjective fudging of the outcome should become rarer and rarer, and the only way to refine the criteria is to put them into practice and see how they work.
In accepting federal Race to the Top funds, Maryland has committed itself to a lengthy and enormously complex undertaking in which there are still many unknowns — including whether the final product will eventually pass muster with the feds. Having already gotten off to a late start, the state has no more time to lose.