September 30, 2010

R.T.T.T. stress -- and more

Saw this story over at Joanne Jacobs' edu-blog:

An elementary school teacher from South Gate who mysteriously disappeared last week was found dead about 9 a.m. Sunday in the Angeles National Forest, authorities have confirmed.

The Coroner confirmed the body found by a search and rescue team near Big Tujunga Canyon Road is that of Rigoberto Ruelas, 39, a fifth grade teacher at Miramonte Elementary School.

Authorities said it is a suicide, but did not say how he killed himself. An autopsy is scheduled for Monday.

Friends and family said he was feeling stressed about work and a recent teacher evaluation report printed in the Los Angeles Times.

"He kept saying that there's stress at work," said Ruelas' brother, Alejandro.

In my opinion, Ruelas had problems that went beyond just the reporting of his teacher rating in the paper. Here's what the LA Times report entailed: the paper used a "value-added" analysis which "estimates the effectiveness of a teacher by looking at the test scores of his students."

Each student's past test performance is used to project his performance in the future. The difference between the child's actual and projected results is the estimated "value" that the teacher added or subtracted during the year. The teacher's rating reflects his average results after teaching a statistically reliable number of students.
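The arithmetic the Times describes boils down to "actual minus projected, averaged over the class." Here's a minimal sketch of that idea -- purely illustrative, with made-up numbers, and assuming the naive case where a student's projection is simply taken from his prior performance (the Times' actual statistical model is far more involved):

```python
# Illustrative only -- the real value-added models are more complex.
# Each student is a (projected_score, actual_score) pair, where the
# projection is assumed to come from the student's past test results.

def value_added(students):
    """Return the teacher's average 'value added': the mean of each
    student's actual score minus his projected score."""
    diffs = [actual - projected for projected, actual in students]
    return sum(diffs) / len(diffs)

# Three hypothetical students: projected vs. actual end-of-year scores.
scores = [(70, 75), (80, 78), (60, 68)]
print(value_added(scores))  # mean of +5, -2, +8
```

A positive average is read as the teacher "adding value"; a negative one as "subtracting" it. Everything then rides on how good the projections are.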

But then we read this under the "What are some of the limitations of the value-added approach?" section:

Scholars continue to debate the reliability of various statistical models used for value-added estimates. Each has an inherent error rate that is difficult to measure. Value-added estimates may be influenced by students not being randomly assigned to classes, or by students moving from class to class during a single year. Likewise, they could be misleading for teachers who team-teach. Even many critics of the approach, however, say value-added is a vast improvement on the current evaluation system, in which principals make subjective judgments based on brief pre-announced classroom visits every few years.

I don't know how many times I've opined here and elsewhere on the idea of basing teacher evaluations solely on student test scores; if you (the public) want that to be the way your teachers get evaluated on their "effectiveness," so be it. You pay our salaries, after all. But as the Times itself admits, this value-added method has its skeptics -- there's plenty of debate about its use -- yet the paper still thought it a good idea to publish the supposed "effectiveness" of all area 3rd, 4th and 5th grade teachers via the method. And even though, in its article FAQ, it notes the limitations of "value-added," how many people would actually take the time to comb through it? Or (more likely) will parents and others merely head for the "Find A Teacher" and "Find A School" menus and take what the results say as gospel? For me, this is essentially the same as a biased newspaper headline -- people see the headline and barely scan the actual article.

I've also opined that I have little difficulty with such assessments if they're well thought-out and fair. In Ruelas' case, I was left wondering (and perhaps I missed something in the various pages of the Times story) about the across-grade comparison. For example, say a student has truly excellent teachers in 3rd and 4th grade. But when that student reaches Ruelas in 5th grade, his test scores dip -- because, say, Ruelas is just slightly "worse" a teacher than his 3rd and 4th grade counterparts. Contrariwise, Ruelas' rating would be the opposite if his 3rd and 4th grade colleagues weren't very good; his rating would be positive since, when the students got to him, their scores went up a bit. In other words, your rating depends heavily on the teachers who precede you. Weak teachers preceding you can "mask" a bad teacher, and very good teachers can "mask" another very good one.
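The masking argument above can be put in numbers. Here's a hypothetical sketch (invented scores, and again assuming the naive case where the projection is just the student's most recent result): the very same 5th-grade teacher bringing a student to the very same level gets opposite ratings depending on where his predecessors left that student.

```python
# Hypothetical numbers only -- illustrates how predecessors shift the
# baseline, not how any real district computes ratings.

def rating(projected, actual):
    # Positive reads as "effective," negative as "ineffective."
    return actual - projected

# Our 5th-grade teacher brings every student to an actual score of 80.

# Case A: excellent 3rd/4th-grade teachers left the student projected at 85.
print(rating(projected=85, actual=80))   # negative: he looks "ineffective"

# Case B: weak predecessors left the same student projected at 70.
print(rating(projected=70, actual=80))   # positive: he looks "effective"
```

Same teacher, same end result for the student -- opposite labels, driven entirely by who taught the kid the years before.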

Delaware is moving in this direction, and trust me -- if you know anyone in education in the First State, they probably don't know much about Race to the Top (RTTT) and, specifically, how it will affect them yet. But it's here now. Don't'cha think they should know (by now)?

In my case, I teach a first-year course. What would be my baseline? There are no teachers in the pipeline before me who teach the subject. Should I assume that I'll always get an "effective" (or "highly effective") rating, since it's essentially inevitable that my students will show progress ... because they've never had the subject before me? I don't know! Apparently, we have to have a baseline test in place by next school year. What is it? I don't know. How will I be measured? I don't know. What exactly is on this test? I don't know.

And so on. Yet, this will be part of my job evaluation.

That's why I titled this post what I did. Again, ask educators across the state if they're 1) anxious, 2) uncertain, 3) stressed beyond belief, 4) scared, and 5) very worried. I bet all five will be an "affirmative." I've never seen a school year begin like this. But I will tell you that if things had been concretely laid out and teachers knew what to expect -- and how they'd be evaluated ... well, it'd be a whole different story.

Par for the course for the state? Don't get me started.

Posted by Hube at September 30, 2010 07:14 PM | TrackBack
