Comparative Performance Analysis

Charter schools authorized by the Board of Trustees of the State University of New York include in their Accountability Plans a measure of student performance on the state English language arts and mathematics exams that compares each school to similar public schools statewide.  To determine whether schools are meeting this measure, the SUNY Charter Schools Institute (“the Institute”) conducts a regression analysis examining how schools perform given the poverty level of their student populations.  The analysis yields a predicted mean scale score in each tested grade for every New York State public school based on its economically disadvantaged statistics.[1]
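The regression described above can be illustrated with a minimal sketch.  The data below are hypothetical, not actual state results, and the simple one-variable least-squares fit is an assumption for illustration; it shows how a predicted mean scale score can be derived from a school's percent of economically disadvantaged students.

```python
# Illustrative sketch only -- hypothetical data, not the Institute's
# actual model or figures.  Ordinary least squares fit by hand:
# mean scale score regressed on percent economically disadvantaged.

def fit_ols(x, y):
    """Return (intercept, slope) for a simple least-squares regression."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical schools: (percent economically disadvantaged, mean scale score)
pct_ed = [10, 25, 40, 55, 70, 85]
scores = [315, 310, 304, 299, 293, 288]

intercept, slope = fit_ols(pct_ed, scores)

# Predicted mean scale score for a school with 50% economically
# disadvantaged students -- a point on the regression line.
predicted_at_50 = intercept + slope * 50
print(round(predicted_at_50, 1))  # prints 300.6
```

With these invented numbers the slope is negative, reflecting the pattern the analysis is built around: schools with higher percentages of economically disadvantaged students tend to have lower mean scale scores, so each school is compared against a prediction adjusted for that percentage.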

Scatter Plot Analysis

The Institute uses a scatter plot to represent the results for each grade.  The scatter plot shows all New York State public schools as dots on a graph whose axes are the mean scale score on an exam and the percent of economically disadvantaged students.  Given the distribution of schools on the graph, the analysis generates a line representing the predicted level of performance for all schools given their percent of economically disadvantaged students.  The Institute conducts a separate analysis for each tested grade in English language arts and mathematics.

Interpreting the Institute's Comparative Performance Analysis

As an example, a fourth-grade English language arts regression analysis is presented here. The scatter plot shows the distribution of all New York public schools by ELA mean scale score and percent of economically disadvantaged students.  The solid line shows schools’ predicted performance for a given percent of economically disadvantaged students.  The graph shows the example charter school performing better than predicted; the further a school is above the line, the better its performance.


The Comparative Performance Analysis Report displays a table that compares a school’s actual and predicted levels of performance in each tested grade and overall.  The difference between a school’s actual and predicted performance in each grade, relative to other schools with similar economically disadvantaged statistics, is used to produce an Effect Size.

To meet the measure in its Accountability Plan, a school’s result must show a meaningful Effect Size, defined as 0.3 or greater, which indicates a higher-than-expected level of performance to at least a small degree.[2]
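A short sketch can make the Effect Size calculation and the 0.3 threshold concrete.  The numbers are hypothetical, and standardizing the actual-minus-predicted gap by the standard deviation of that gap across schools is an assumption about the computation, consistent with the comparison to similar schools described above.

```python
# Hedged sketch of an effect-size calculation of the kind described
# above.  All values are hypothetical, not the Institute's figures.

def effect_size(actual, predicted, residual_sd):
    """Standardized gap between actual and predicted performance."""
    return (actual - predicted) / residual_sd

# Hypothetical grade-level results for one school
actual_mean = 305.0     # school's actual mean scale score
predicted_mean = 299.0  # predicted by the regression for its ED percent
residual_sd = 12.0      # SD of (actual - predicted) across all schools

es = effect_size(actual_mean, predicted_mean, residual_sd)
meets_measure = es >= 0.3  # Accountability Plan threshold
print(round(es, 2), meets_measure)  # prints: 0.5 True
```

In this invented case the school scores half a standard deviation above its prediction, so it clears the 0.3 threshold; a school exactly on the regression line would have an Effect Size of zero.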

[1] The Institute is using this statistic in place of the percent eligible for free lunch because it is SED’s primary socio-economic measure and is reported for each grade separately. The Institute also recently began using the mean scale score to measure student achievement in the effect size calculation.  Please refer to the document on changes to the Institute’s effect size analysis for more details.

[2] In interpreting the results, aside from meeting the measure, the Institute takes into account the overall pattern across the grades as well as the particular circumstances of the school’s testing program.  For example, larger positive effect sizes in successive grades may suggest the positive impact of the instructional program.  Also, a test grade that is an entry grade for the school would be taken into account in evaluating overall school performance.