San Diego School Teachers Behind the Curve
Local teacher evals: #FAIL
School districts around the U.S. are changing the way they measure, nurture, and hold accountable public school teachers. San Diego Unified, however, is changing nothing.
School superintendents across America are talking tough.
The time has come, they say, to get rid of failing teachers, or at the very least to identify them so that weaker teachers can get help to become more effective. No longer should students suffer the ignominy of an educator who lacks the interest, willingness, or ability to help them learn.
For decades, schools have relied on a principal passing through a classroom once a year or every few years to eyeball how a teacher is doing. Today districts across the country say there’s another way.
They’re using reams of test score data to watch the impact each teacher has on his or her students throughout the year, learning whether students gained or lost ground under each teacher.
And they’re adding that measurement to the teacher’s evaluation. They use it to find stars, to get help for struggling teachers and, in some cases, to dispatch failing teachers like they’ve never been able to before.
In New Haven, Connecticut, the school district pushed out about 2 percent of its teachers last year, after extensive evaluations revealed those educators either couldn’t or wouldn’t improve kids’ test scores. Those evaluations had been crafted hand-in-hand with the local teachers union, which embraced reform in exchange for increases in pay and benefits.
In Houston, former San Diego Unified School District superintendent Terry Grier has overseen a radical redesign of the teacher evaluation process. Grier says there’s no place for underachieving teachers in Houston’s schools, so educators who don’t improve have been shown the door.
And in Los Angeles, superintendent John Deasy has made redesigning teacher evaluation a cornerstone of his leadership. Tackling California’s powerful teachers unions and navigating legislation that crimps his ability to dismiss bad teachers has been tough, Deasy acknowledged, but inaction’s not an option.
“This is both a moral and a legal imperative,” Deasy said.
The San Diego Unified School District, however, isn’t interested in this revolution.
Today, teachers are evaluated in San Diego in much the same way they have been for decades.
Once every year or two, with advance notice, principals pay a perfunctory visit to each classroom. After a brief, formal observation, the principal completes a three-page evaluation form. Teachers are rated on the form as Effective, Requiring Improvement, or Unsatisfactory.
The vast majority of local teachers receive an evaluation that says they’re effective, which isn’t surprising to many local principals.
“I mean, come on! If you can’t pull it off for a formal evaluation once every couple of years, for one lesson, then you really shouldn’t be a teacher,” said E. Jay Derwae, principal of Marvin Elementary School in Allied Gardens.
The cursory evaluation system in place at San Diego Unified was the norm across the United States until fairly recently. But as education reformers began to realize that a half-century of their efforts had done little or nothing to push up student achievement, attention began to focus on the sticky topic of teacher evaluation.
Districts across the country began jumping on the reform bandwagon. They’d been pushed there by the jarring success of the documentary Waiting for "Superman," which reviled school districts for doing little to weed out underperforming teachers, and pressure from the Obama administration to revamp teacher assessment tools.
As the movement picked up speed, progressive superintendents began to coalesce around an evaluation process that had long been pushed by reformers: value-added metrics.
Very few people in the education community truly understand how value-added metrics work. That opacity is a main reason evaluation systems based on value-added metrics have proven so controversial.
The basic idea is to judge teachers not on how high their students’ test scores are at the end of the year, but on how much kids’ scores have improved while they’ve been sitting in the teacher’s classroom.
The method tracks the progress of each and every student, then compares that progress to how much each student was expected to improve at the beginning of the year. By analyzing how much the students in each classroom have improved, districts can identify which teachers are consistently pushing kids’ scores up, which are keeping scores flat, and which are failing to improve.
In places like Los Angeles and New York, those scores have been made public, based on the argument that parents deserve the information.
But just as the method's popularity has boomed, an equally forceful movement against value-added metrics has emerged.
Several once-enthusiastic proponents of value-added metrics, including Linda Darling-Hammond, a former top education advisor to President Obama, now rail against the model. They argue that the margins of error are far too high for such analysis to be meaningful, even in the most complex statistical models.
Test scores are just as likely to be raised or lowered by changes in a student's socio-economic status or health, or by economic factors that affect classrooms, like swelling class sizes or shrinking budgets for materials, as by a teacher's ability, those critics argue.
There are other serious concerns, too: Critics worry that putting teachers on the hook for their students’ test results inevitably leads to “teaching to the test,” sterilizing classrooms into factories for rote learning.
These concerns have stirred a swirl of controversy, one only intensified by high-profile media reports on value-added scores. Both the Los Angeles Times and The New York Times have caused uproars by publishing teachers’ value-added scores, stories inevitably spun into clichéd headlines about each city’s “Worst Teachers.”
None of that criticism has halted the march of data.
School districts from Washington, D.C. to Tennessee have plugged value-added metrics into their evaluation systems, filtering out teachers who aren’t pushing test scores up and, in some cases, firing them for consistently poor results.
In February, New York state legislators inked a deal with teachers unions that will phase in value-added scoring until it accounts for 20 percent of teachers’ evaluations. And in Houston, Grier said 150 teachers were asked last year to take a buyout or get the sack, after evaluations based largely on value-added scores identified them as ineffective.
Grier and others say the method is just one facet of evaluation. The data is simply used to flag whether a teacher’s performance warrants further examination, he said.
“It’s like if you have a really bad fever,” Grier said. “A fever is a symptom that something’s wrong, it’s not the problem itself. If a teacher has poor value-added scores, that’s a red flag. That’s when the principal needs to be going into that classroom and doing more observations.”