Papers

The Role of Gender and Tracking in Student Achievement at the Extremes


The following is the abstract of a recent paper I wrote analyzing student performance data. Here is a link to the entire paper.

http://dl.dropbox.com/u/33186846/Student%20Achievement%20at%20Extremes.pdf

Measurement of student achievement is at the heart of educational policy, and standardized testing has been both supported and contested as a genuine representation of student achievement. This study shows that the interpretation of standardized test results may have starkly contrasting meanings for different cohorts of students. Using quantile regression to estimate effects at different points of the conditional PSAT score distribution, educational factors such as gender and tracking are shown to affect students in varying ways. For the students in this study, gender was a significant indicator of performance on the PSAT, with being male accounting for more than a five-point “bump.” The conclusion from the quantile regressions is that students with extreme PSAT scores are outliers because of ability or inability, not because of their gender. Meanwhile, students who participated in the “honors” tracking system saw a larger increase in their predicted score the higher their conditional PSAT score.
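For readers unfamiliar with the method, the core idea of quantile regression is that it minimizes an asymmetric "check" (pinball) loss instead of squared error; for an intercept-only model, the minimizer is simply the empirical quantile. The sketch below is purely illustrative and not taken from the paper; the data and function names are hypothetical.

```python
def check_loss(c, ys, q):
    """Quantile regression's asymmetric 'check' (pinball) loss for a
    constant prediction c: undershoots are weighted q, overshoots (1 - q)."""
    return sum(q * (y - c) if y >= c else (1 - q) * (c - y) for y in ys)

def fit_constant_quantile(ys, q):
    """Intercept-only quantile regression: search the observed values for
    the constant minimizing the check loss; the minimizer is the
    empirical q-th quantile of ys."""
    return min(ys, key=lambda c: check_loss(c, ys, q))

scores = [1, 2, 3, 4, 5, 6, 7, 8, 9]       # hypothetical test scores
print(fit_constant_quantile(scores, 0.5))   # median -> 5
print(fit_constant_quantile(scores, 0.75))  # upper quantile -> 7
```

In the paper's setting, the same loss is minimized over a linear function of covariates such as gender and tracking, so the estimated effect of each covariate is allowed to differ at each quantile of the conditional PSAT score distribution.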

Teacher Value Added Ratings


A year ago, the LA Times published a ranking of more than 5,000 third- through fifth-grade teachers based on their value-added scores.  If you are not familiar with Value Added Ratings, they come from a statistical technique that measures how much a teacher’s students improve on a standardized test over the year the teacher works with them.  These ratings are very controversial, especially as a means of evaluating teachers.  To a parent reading these rankings in the LA Times, they seem like a be-all and end-all evaluation of their child’s teacher, prompting calls to change classes and fire teachers.  In May the LA Times released value-added ratings for more than eleven thousand additional teachers without addressing the intricacies of the data.
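As a rough illustration of the idea (not the LA Times’ actual model, which controls for student background and other factors), a bare-bones value-added score compares the average test-score gain of a teacher’s students to the average gain across all students. The data below are made up.

```python
def value_added(scores_by_teacher):
    """Toy value-added score: each teacher's average (post - pre) gain,
    centered on the average gain across all students.  scores_by_teacher
    maps a teacher's name to a list of (pre_test, post_test) pairs."""
    gains = {t: [post - pre for pre, post in pairs]
             for t, pairs in scores_by_teacher.items()}
    all_gains = [g for gs in gains.values() for g in gs]
    overall = sum(all_gains) / len(all_gains)
    return {t: sum(gs) / len(gs) - overall for t, gs in gains.items()}

# hypothetical pre/post test scores for two teachers' students
ratings = value_added({
    "Teacher A": [(50, 60), (40, 50)],   # gains of 10 and 10
    "Teacher B": [(50, 52), (60, 60)],   # gains of 2 and 0
})
print(ratings)  # Teacher A: +4.5, Teacher B: -4.5
```

Everything the real models add on top of this, such as adjustments for student demographics and prior achievement, is an attempt to separate the teacher’s contribution from factors outside the teacher’s control; whether that separation succeeds is exactly what is contested.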

Last winter, I analyzed Value-Added Ratings, specifically as a way of evaluating teachers.  Here is a link to my paper (see pages 3-6 for a literature review on the subject, pp. 6-8 for a look at the economic model, and pp. 9-17 for a critique of the method).

Here is a summary of that paper:

Economists and educational researchers studying student achievement return consistent results about teachers: they matter. But while researchers agree that educational achievement depends on a quality teacher, economists and educational researchers alike disagree about what constitutes teacher quality.  Clearly, one of a teacher’s specific duties is to improve student performance, and value-added statistical methods do measure student progress; the issue arises when that progress, or lack of progress, is attributed entirely to the teacher.

This paper concludes that if school administrators only evaluate teachers on student progress, then they may not be measuring the teacher’s total value to students.  The review shows that many factors, both observed and unobserved, can affect student achievement and that teachers have the ability to impact their students in ways that standardized tests cannot measure.

In a 2008 New Yorker essay, Malcolm Gladwell compared hiring a new teacher to drafting an NFL quarterback.  In both professions, observable characteristics do not translate to production, whether on the field or in the classroom.  In the NFL, coaches can examine statistics that their quarterback directly controls and that are easily observed (completions, yards, and touchdowns), but performance in college games does not translate to NFL success.  Meanwhile, school administrators evaluate teachers only on observable characteristics such as experience, highest degree, undergraduate university attended, and, in a growing number of recent cases, Value Added Ratings.

Just as a quarterback’s college statistics do not directly indicate NFL performance, easily measured or observable characteristics of teachers are not always correlated with student success.  Value-added scores vary significantly from year to year and only measure student improvement on standardized tests, not necessarily learning.  If, over a ten- or fifteen-year period, a teacher’s value-added score was consistently higher or lower than average, then the scores can tell us something about the teacher.  But looking at one score from one year and publishing it as an all-encompassing measure of a teacher’s value is unfair.
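The point about multi-year averages can be made concrete with a small simulation (the numbers here are invented, not drawn from any real value-added model): if a single year’s rating is the teacher’s true effect plus large year-to-year noise, averaging fifteen years of ratings shrinks the typical error by roughly the square root of fifteen.

```python
import random

random.seed(42)

TRUE_EFFECT = 2.0   # hypothetical teacher effect, in test-score points
NOISE_SD = 5.0      # hypothetical year-to-year noise in a single rating

def simulate_rating(n_years):
    """Average of n noisy yearly ratings for one teacher."""
    return sum(TRUE_EFFECT + random.gauss(0, NOISE_SD)
               for _ in range(n_years)) / n_years

def mean_abs_error(n_years, reps=2000):
    """Average distance of the rating from the true effect,
    across many simulated teachers."""
    return sum(abs(simulate_rating(n_years) - TRUE_EFFECT)
               for _ in range(reps)) / reps

err_one_year = mean_abs_error(1)
err_fifteen_years = mean_abs_error(15)
print(err_one_year, err_fifteen_years)  # the 15-year average is far closer
```

With noise this large relative to the true effect, a single year’s rating routinely ranks an average teacher as excellent or poor; only the long-run average is informative.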

Many factors, both observed and unobserved, can affect student achievement; teachers have the ability and responsibility to impact their students in ways that standardized tests cannot measure.  If school administrators only evaluate teachers on student progress on a standardized test, then they are not necessarily measuring the teacher’s total value added to their students.