December 6, 2012

Baker: New York Educator Evaluation Is Fundamentally Flawed

New post on Diane Ravitch's blog


by dianerav
Bruce Baker of Rutgers says that New York state's educator evaluation system is biased, inaccurate and unfair. Even the consultants who created the system, he writes, acknowledged the high rate of error. But the state says "full speed ahead." Baker urges educators to "just say no."
Baker writes:
"This post is a follow up on two recent previous posts in which I first criticized consultants to the State of New York for finding substantial patterns of bias in their estimates of principal (correction: School Aggregate) and teacher (correction: Classroom aggregate) median growth percentile scores but still declaring those scores to be fair and accurate, and second criticized the Chancellor of the Board of Regents for her editorial attempting to strong-arm NYC to move forward on an evaluation system adopting those flawed metrics - and declaring the metrics to be "objective" (implying both fair and accurate).
Let's review. First, the AIR report on the median growth percentiles found, among other biases:
Despite the model conditioning on prior year test scores, schools and teachers with students who had higher prior year test scores, on average, had higher MGPs. Teachers of classes with higher percentages of economically disadvantaged students had lower MGPs. (p. 1)
In other words: if you are a teacher who happens to have a group of students with higher initial scores, you are likely to get a higher rating, whether or not that difference is legitimately associated with your teaching effectiveness. And if you are a teacher with more economically disadvantaged kids, you are likely to get a lower rating. That is, the measures are modestly biased on these bases.
Despite these findings, the authors of the technical report chose to conclude:
The model selected to estimate growth scores for New York State provides a fair and accurate method for estimating individual teacher and principal effectiveness based on specific regulatory requirements for a “growth model” in the 2011-2012 school year. (p. 40)
I provide far more extensive discussion here! But even a modest bias across the system as a whole can indicate the potential for substantial bias for underlying clusters of teachers serving very high-poverty populations or students with very high or very low prior scores. In other words, THE MEASURE IS NOT ACCURATE, AND BY EXTENSION, IT IS NOT FAIR! Is this not obvious enough?
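To make concrete what a bias check of this kind looks like, and why a modest system-wide slope can still mean a meaningful gap for teachers at the poverty extremes, here is a minimal sketch in Python. The data, variable names, and effect sizes are entirely made up for illustration; this is not the AIR model and not New York's data.

# Illustrative sketch only: fabricated data and effect sizes,
# not the AIR growth model or any New York data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # hypothetical number of teachers

# Hypothetical class characteristics for each teacher
prior = rng.normal(0.0, 1.0, n)      # class mean prior-year score (standardized)
poverty = rng.uniform(0.0, 1.0, n)   # share of economically disadvantaged students

# Simulate MGPs that carry a modest bias of the reported kind:
# higher prior scores nudge MGPs up, higher poverty shares nudge them down.
mgp = 50 + rng.normal(0, 8, n) + 2.0 * prior - 5.0 * poverty

# (1) System-wide check: regress MGPs on class characteristics.
#     For an unbiased measure both slopes should be near zero.
X = np.column_stack([np.ones(n), prior, poverty])
coef, *_ = np.linalg.lstsq(X, mgp, rcond=None)
print("slope on prior scores :", round(coef[1], 2))
print("slope on poverty share:", round(coef[2], 2))

# (2) Cluster check: a "modest" overall slope still opens a visible gap
#     between the lowest- and highest-poverty classrooms.
hi_pov = mgp[poverty > np.quantile(poverty, 0.9)].mean()
lo_pov = mgp[poverty < np.quantile(poverty, 0.1)].mean()
print("mean MGP gap, lowest- vs. highest-poverty decile:", round(lo_pov - hi_pov, 2))

In this toy setup the regression recovers nonzero slopes on class characteristics that an unbiased measure should be unrelated to, and the decile comparison shows how an average bias that looks "modest" system-wide concentrates into a several-point gap for teachers in the highest-poverty classrooms.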
The authors of the technical report were wrong - technically wrong - and, I would argue, morally and ethically wrong in giving NYSED their endorsement of these measures! You simply don't declare outright, when your own analyses show otherwise, that a measure [to be used for labeling people] is fair and accurate! [setting aside the general mischaracterization that these are measures of "teacher and principal effectiveness"]
Within a few days of writing that post, I noticed that Chancellor Merryl Tisch of the NY State Board of Regents had published an op-ed in the NY Post attempting to strong-arm an agreement on a new teacher evaluation system between NYC teachers and the city. In the op-ed, the Chancellor opined:
The student-growth scores provided by the state for teacher evaluations are adjusted for factors such as students who are English Language Learners, students with disabilities and students living in poverty. When used right, growth data from student assessments provide an objective measurement of student achievement and, by extension, teacher performance.
As I noted in my post the other day, one might quibble that Chancellor Tisch has merely stated that the measures are "adjusted for" certain factors; she has not claimed that those adjustments actually work to eliminate bias - which the technical report indicates THEY DO NOT. Further, she has merely declared that the measures are "objective," not that they are accurate or precise. Personally, I don't find this deceitful propaganda at all comforting! Objective or not, if the measures are biased they are not accurate, and if they are not accurate they are, by extension, not fair.
Sadly, the story of misinformation and disinformation doesn't stop here. It only gets worse! Yesterday I received a copy of a letter from a New York school district that had its teacher evaluation plan approved by NYSED. Here is a portion of the approval letter:
Now, I assume this language to be boilerplate. Perhaps not. I've underlined the good stuff. What we have here is NYSED threatening to impose a corrective action plan on the district if the district uses any other measures of teacher or principal effectiveness that are not sufficiently correlated WITH THE STATE'S OWN BIASED MEASURES OF PRINCIPAL AND TEACHER EFFECTIVENESS!
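To see what "sufficiently correlated" boils down to in practice, a check of this kind is essentially a correlation between the district's own ratings and the state-provided growth scores. Here is a minimal, hypothetical sketch with made-up numbers and an arbitrary threshold; it is not NYSED's actual review procedure.

# Hypothetical illustration: made-up ratings and an arbitrary cutoff,
# not NYSED's actual review procedure.
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical number of teachers in one district

state_mgp = rng.normal(50, 10, n)                          # state-provided growth scores
local_obs = 0.3 * (state_mgp - 50) + rng.normal(0, 10, n)  # district's own observation ratings

# Pearson correlation between the district's ratings and the state scores
r = np.corrcoef(local_obs, state_mgp)[0, 1]
print("correlation:", round(r, 2))

# If the state demands a correlation above some threshold, a district whose
# observers legitimately disagree with a biased state measure could be flagged anyway.
THRESHOLD = 0.5  # arbitrary, for illustration only
print("flagged for corrective action?", r < THRESHOLD)

The point of the toy example: the check rewards agreement with the state's numbers as such, so a district whose observers see through a biased measure is the district most likely to fall below the cutoff.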
This is the icing on the cake! This is sick, warped, wrong! Consultants to the state find that the measures are biased, and then declare them "fair and accurate." The Chancellor spews propaganda that reliance on these measures must proceed with all deliberate speed (or ELSE!). Then the Chancellor's enforcers warn individual districts that they will be subjected to mind control - excuse me - departmental oversight - if they dare to present their own observational or other ratings of teachers or principals that don't correlate sufficiently with the state-imposed, biased measures.
I really don't even know what to say anymore.
But I think it's time to just say no!
