
One Newark: Choosing “Great” Schools, or Merely Segregated Ones?

Full Length Brief: Weber_One NewarkSegregation-1

Mark Weber, PhD candidate, Rutgers University, Graduate School of Education

Executive Summary

This brief provides a preliminary analysis of the potentially segregative effects of One Newark, the school choice plan implemented by the Newark Public Schools (NPS) in 2014. School choices appear to be influenced by ratings assigned by NPS; however, these ratings are correlated to student population characteristics, such as race and economic status. Newark families, therefore, may be choosing schools – inadvertently or otherwise – that are more segregated.

“Popular” schools under One Newark – the ones chosen most often by families – enroll fewer students eligible for the federal free-lunch program, a proxy measure of economic disadvantage. Popular charter schools also enroll relatively large proportions of black students compared to all of the city’s publicly funded schools, even as popular district schools enroll relatively small proportions.

While popular schools show better performance on statewide assessments, their “growth” scores, which are intended to take into account differences in student populations, are more mixed. Because test scores are correlated to student population characteristics, families that choose higher-performing schools under One Newark may be selecting schools that are more segregated.

There are notable differences between popular district and popular charter schools: the popular charters have higher suspension rates and more inexperienced teachers than the popular district schools. Whether families are aware of these discrepancies is unknown.

Although the limited data released by NPS is inadequate for a full analysis, I find these results to be sufficient evidence to warrant the release of the full set of One Newark application data.

Empirical Critique of “One Newark”: First Year Update

Testimony before the Joint Committee on the Public Schools

PDF Version (Introduction below): Weber Testimony

New Jersey Legislature

Mark Weber


Good morning. My name is Mark Weber; I am a New Jersey public school teacher, a public school parent, a member of the New Jersey Education Association, and a doctoral student in Education Theory, Organization, and Policy at Rutgers University’s Graduate School of Education.

Last year, I was honored to testify before this committee regarding research I and others had conducted on One Newark, the school reorganization plan for the Newark Public Schools. Dr. Bruce Baker, my advisor at Rutgers and one of the nation’s foremost experts on school finance and policy, joined me in writing three briefs in 2014 questioning the premises of One Newark. Dr. Joseph Oluwole, a professor of education law at Montclair State University, provided a legal analysis of the plan in our second brief.

I would like to state for the record that neither I, Dr. Baker, nor Dr. Oluwole received any compensation for our efforts, and our conclusions are solely our own and do not reflect the views of our employers or any other organization.

Our research a year ago led us to conclude that there was little reason to believe One Newark would lead to better educational outcomes for students. There was little empirical evidence to support the contention that closing or reconstituting schools under One Newark’s “Renew School” plan would improve student performance. There was little reason to believe converting district schools into charter schools would help students enrolled in the Newark Public Schools (NPS). And we were concerned that the plan would have a racially disparate impact on both staff and students.

In the year since my testimony, we have seen a great public outcry against One Newark. We’ve also heard repeated claims made by State Superintendent Cami Anderson and her staff that Newark’s schools have improved under her leadership, and that One Newark will improve that city’s system of schools.

To be clear: it is far too early to make any claims, pro or con, about the effect of One Newark on academic outcomes; the plan was only implemented this past fall. Nevertheless, after an additional year of research and analysis, it remains my conclusion that there is no evidence One Newark will improve student outcomes.

Further, after having studied the effects of “renewal” on the eight schools selected by State Superintendent Anderson for interventions in 2012, it is my conclusion that the evidence suggests the reforms she and her staff have implemented have not only failed to improve student achievement in Newark; they have had a racially disparate impact on the NPS certificated teaching and support staff.

Before I begin, I’d like to make a point that will be reiterated throughout my testimony: my analysis and the analyses of others actually raise more questions than they answer. But it shouldn’t fall to independent researchers such as me or the scholars I work with to provide this committee or other stakeholders with actionable information about Newark’s schools.

Certainly, we as scholars stand ready to provide assistance and technical advice; but the organization that should be testing the claims of NPS and State Superintendent Anderson is the New Jersey Department of Education. The students and families of Newark deserve nothing less than a robust set of checks and balances to ensure that their schools are being properly managed.

One Newark can be thought of as containing four components: the expansion of charter schools; a “renewal” program for schools deemed to be underperforming; a system of consumer “choice,” where families select schools from a menu of public and charter options; and continuing state control of the district.

This last component is clearly a necessary precondition for the first three. Given the community outcry against State Superintendent Anderson and One Newark, it’s safe to say that none of the other three components would have been implemented were it not for continuing state control.

The critical questions I ask about these components are simple: do they work, are there unintended consequences from their implementation, and is One Newark being properly monitored and evaluated? Let me start by addressing the expansion of charter schools in Newark.

Buyer Beware: One Newark and the Market For Lemons

Mark Weber, PhD student, Rutgers University, Graduate School of Education

PDF of Policy Brief: Weber_OneNewarkLemonsFINAL

             The cost of dishonesty, therefore, lies not only in the amount by which the purchaser is cheated; the cost also must include the loss incurred from driving legitimate business out of existence.

– George A. Akerlof, The Market for “Lemons”: Quality Uncertainty and the Market Mechanism.

In his classic economics paper, Akerlof[1] addresses the problem of “asymmetrical information” in market systems. Using the used car market as an example, Akerlof shows that consumers who do not have good information about the quality of goods often get caught buying “lemons.” This not only hurts the individual consumer; it damages the market as a whole, as honest sellers withdraw rather than compete in a market where consumers cannot distinguish a good car from a “lemon.”

The “school choice” movement is predicated on the idea that treating students and their families as “consumers” of education will introduce market forces into America’s school systems and improve the quality of education for all.[2]

But what if those families must make their choices armed only with incomplete or faulty data? How can a market operate successfully when consumers suffer from an asymmetry of information?

This brief looks at one example of asymmetrical information in a school choice system: Newark, NJ, whose schools were recently restructured under a plan entitled One Newark.

Newark’s schools have been under state control for nearly two decades; the elected school board only serves in an advisory capacity, making rapid, large-scale transformations much easier to facilitate. Under State Superintendent Cami Anderson, the district introduced One Newark, a plan that calls for students and their families to select a list of eight schools in order of preference for enrollment in the fall of 2014.

This author, in collaboration with Dr. Bruce D. Baker of Rutgers University and Dr. Joseph Oluwole of Montclair State University, has published several briefs analyzing One Newark’s consequences.[3] Among our findings:

  • The plan affects a disproportionate number of black and low-income students, whose schools are more likely to close, to be turned over to charter management organizations (CMOs), or to be “renewed.”
  • CMOs in Newark have little experience educating student populations demographically equivalent to those of NPS schools; consequently, there is little evidence they will perform any better.
  • The statistical practices and models NPS has used to justify the classification of schools are fundamentally flawed.

This last point is of particular concern. The One Newark application[4] gives one of three ratings for each school a family may choose: “Great,” “On The Move,” or “Falling Behind.” While the district does offer its own profiles of each school, and the NJ Department of Education does offer individual school profiles, it is likely that the ratings on the application will have the most influence on families’ decisions.

If, however, these ratings suffer from the same defects we found in NPS’s previous attempts to classify schools – the lack of accounting for student characteristics, poor statistical practice, and using flawed or incomplete measures, among other problems – families may have a disadvantage when attempting to make an informed choice.

To ascertain whether the One Newark application ratings make sense, I used a statistical modeling technique embraced by NPS itself: linear regression. Only schools reporting Grade 8 test scores were included. The model here uses four covariates the district acknowledges affect test score outcomes: free lunch eligibility, Limited English Proficiency (LEP) status, special education status, and race.[5] The percentage of each student subpopulation for each school is included in this model, along with a covariate for gender, which has been shown to have an effect on test-based outcomes. This model is quite robust: over three-quarters of the difference in test-based outcomes can be statistically explained by these five student population characteristics.
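The regression step described above can be sketched as follows. This is a minimal illustration of the technique, not the brief’s actual analysis: the data are randomly generated, and the coefficients are invented. It shows the mechanics of regressing a school’s mean scale score on the five student-population covariates and reading off the share of variation they explain (R²).

```python
import numpy as np

# Illustrative sketch only -- synthetic data, NOT real Newark figures.
rng = np.random.default_rng(0)
n = 60  # hypothetical number of schools

# Five covariate shares per school: %free lunch, %LEP, %special ed,
# %black, %male (all invented for illustration).
X = rng.uniform(0, 1, size=(n, 5))
beta_true = np.array([-20.0, -5.0, -15.0, -10.0, -3.0])  # assumed effects
y = 200 + X @ beta_true + rng.normal(0, 2.0, n)  # mean ELA scale score

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# R^2: share of score variation statistically explained by demographics.
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

With real data, an R² above 0.75 would correspond to the brief’s claim that over three-quarters of the difference in test-based outcomes is statistically explained by student population characteristics.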

The outcome used here is the one preferred by NPS: scale scores on the English Language Arts (ELA) section of the NJASK, New Jersey’s yearly statewide test. NPS averages this score across grade levels; however, as we have shown in our previous reports on One Newark, this is poor practice, as scale score means and distributions vary by grade.[6] I explore this problem more fully in the Appendix; for now, however, I accede to NPS and use their preferred measure, however flawed it may be.

When all five covariates are included in this model, they create a prediction of how a school will perform (relative to the other schools in Newark). We can then compare the predicted performance of the school with its actual performance. While not all of the difference can or should be attributed to the effectiveness of the school, this technique does allow us to compare the school’s performance against prediction to how the district rated the school in the One Newark application.
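The comparison step can be illustrated with a toy sketch: take each school’s residual (actual score minus model-predicted score) and set it against the rating NPS assigned. The school names, scores, and ratings below are entirely invented; the point is only the mechanics of flagging schools whose performance against prediction contradicts their rating.

```python
# Hypothetical schools: (name, actual mean score, predicted score, rating).
# All values invented for illustration.
schools = [
    ("School A", 204.1, 198.3, "Falling Behind"),
    ("School B", 190.2, 196.8, "Great"),
    ("School C", 195.0, 194.5, "On The Move"),
]

for name, actual, predicted, rating in schools:
    residual = actual - predicted
    verdict = "above" if residual > 0 else "below"
    print(f"{name}: {residual:+.1f} ({verdict} prediction), rated '{rating}'")
```

In this invented example, “School A” performs above prediction yet is rated “Falling Behind,” while “School B” performs below prediction yet is rated “Great” – the kind of mismatch Figure 1 documents.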

Figure 1

Figure 1 shows the difference from prediction for Newark schools – both NPS and charters – and their rating under One Newark. Schools that are being closed or turned over to CMOs are included for comparison. This graph illustrates several important points:

  • While there are many schools labeled as “Falling Behind” that perform below prediction, there are several schools that perform above prediction. Miller St. and South 17th, in particular, perform especially well given their student population. Under what criteria does NPS find that these schools are “Falling Behind”?
  • Conversely, several schools that perform below prediction are rated as “On The Move” or “Great.”
  • Only one charter school in the One Newark application is rated “Falling Behind” (University Heights, which did not report Grade 8 scores and is, therefore, not included in this analysis). But two charters in the application perform below prediction (Greater Newark and Great Oaks), and all except North Star[7] perform below Miller and South 17th.
  • Two other charters that perform below prediction – Robert Treat and Maria Varisco-Rogers – are not included in the One Newark application; these schools opted not to participate in the universal enrollment process.

Certainly, no school should be judged solely on one (flawed) metric. The point here, however, is that even by NPS’s own questionable standards, the classification of schools under the One Newark rating system appears to be arbitrary and capricious.

To be fair, the One Newark application does state that the district did not use averaged scale scores as its sole measure of a school’s effectiveness (to my knowledge, however, NPS has never publicly released a white paper or other document that outlines its precise methodology for rating schools). The district has also used median Student Growth Percentiles (mSGPs) to create its ratings.

SGPs are growth measures that, ostensibly, do not penalize schools for enrolling students who start at lower absolute levels but still demonstrate progress. Supposedly, SGPs account for the differences in student population characteristics, which are correlated to test results. Former Education Commissioner Christopher Cerf[8] has stated: “You are looking at the progress students make and that fully takes into account socio-economic status… By focusing on the starting point, it equalizes for things like special education and poverty and so on.”

If this were true, we would expect to see little correlation between a school’s average scale score – its “starting point” – and its mSGP. Figure 2 plots these two measures in one graph.

Figure 2

As this scatterplot shows, there is a moderate but significant correlation between a school’s growth and its “starting point.” The bias that student characteristics create in a school’s scale score is thus at least partially reflected in its mSGP as well. In other words: SGPs are influenced by student characteristics, but NPS does not account for that bias when using SGPs to create its ratings.[9]

If a school’s student population, then, affects its mSGP, how do student characteristics affect the One Newark ratings? Figure 3 shows the differences in student populations for all three classifications.

Schools that are “Falling Behind” have significantly larger proportions of black students than schools that are “On The Move” or “Great.” Those “Great” schools also have significantly fewer students in poverty (as measured by free lunch eligibility) than “Falling Behind” and “On The Move” schools. “Great” schools also serve fewer special education students, and a slightly smaller proportion of boys.

Figure 3


Arguably, the One Newark rating is less a measure of a school’s effectiveness than it is a measure of its student population. Families that choose “Great” schools are really choosing schools with fewer black, poor, and special needs students.

There is a serious debate to be had as to whether a “choice” system of education is viable for a city like Newark. If, however, NPS has committed to One Newark, it should view its role as a “consumer advocate,” correcting the asymmetry in information and providing justifiable school ratings, rather than limiting the choices students and their families have.

Unfortunately, it appears that NPS is choosing not to be an impartial arbiter; by forcing the closure of NPS schools that, by at least some measures, outperform charters, the district is actively distorting the market forces it claims will improve education.

Under Akerlof’s theory, then, One Newark may not only leave more students stuck with lemons: it may actually drive more non-lemons out of the market.


Technical Appendix: Problems With Averaging Scale Scores


In its response to our first report on One Newark, NPS made the case that averaging scale scores across grade levels is a superior methodology to ours, which used Grade 8 proficiency rates. We acknowledged that scale scores are a limited but legitimate measure of test-based student performance; certainly no less limited than proficiency rates, but still arguably as valid and reliable.

In our response to NPS, however, we do argue that while scale scores are acceptable for this sort of analysis, averaging scale scores across grade levels creates a distortion that renders the scale scores less valid as school performance measures.

The problem with averaging scale scores across grades is that each grade level has a different mean scale score and a different distribution of scores around that mean. Table 1, originally presented in our response, shows the different mean scores for each grade level of Newark’s schools, both charter and NPS. The Grade 8 mean score differs from the Grade 4 mean score by over 16 points.

Table 1– Weighted Mean Scale Scores, NJASK LAL, 2013, Newark Only (Charter & NPS)

Test     Obs    Mean       Std. Dev.   Min     Max
LAL 8    3301   205.0583   11.06671    183     235.8
LAL 7    3154   193.2245   15.9329     170.5   227.6
LAL 6    3631   192.7007   11.03825    172.9   224.5
LAL 5    3255   189.9525   12.66214    166.1   217.0
LAL 4    3223   188.3744   14.46348    165.6   235.5
LAL 3    3680   194.5205   12.0455     173.9   235.7

Why does this matter? Consider two schools with exactly the same average scale scores in all grades; now imagine that they each scored exactly at the citywide mean in all grades. One school, however, has considerably more 8th graders than 4th graders. That school would have an advantage when compared to the other: its larger proportion of 8th graders would push up its overall average, because the mean score for 8th grade is higher than the mean score for 4th. Weighting the means by the number of students in each grade wouldn’t solve this problem; in fact, it creates the problem, because the “average” student in 8th grade gets a higher score than the “average” student in 4th. More weight is being put on the score that is arbitrarily higher.
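The arithmetic of the distortion is easy to show. The sketch below uses the rounded Grade 4 and Grade 8 citywide means from Table 1 (188.4 and 205.1) and two hypothetical schools that both score exactly at the citywide mean in every grade they serve; only their enrollment mix differs.

```python
# Citywide means from Table 1, rounded to one decimal.
g4_mean, g8_mean = 188.4, 205.1

# Two hypothetical schools, both exactly "average" in every grade.
school_x = {"g4": 100, "g8": 50}   # more 4th graders
school_y = {"g4": 50, "g8": 100}   # more 8th graders

def weighted_avg(enroll):
    """Enrollment-weighted mean scale score across the two grades."""
    total = enroll["g4"] + enroll["g8"]
    return (enroll["g4"] * g4_mean + enroll["g8"] * g8_mean) / total

print(round(weighted_avg(school_x), 1))  # 194.0
print(round(weighted_avg(school_y), 1))  # 199.5
```

Both schools are exactly average in every grade, yet the school with more 8th graders appears roughly 5.5 points “better” – purely an artifact of grade mix.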

This problem is further compounded when running a linear regression. Because the dependent variable, grade-averaged mean ELA scale scores, is distorted by grade enrollment, the independent variables do not have a consistent relationship to the dependent variable from school to school. In effect, the rules change for every player.

A more defensible technique for averaging across grades is to run a linear regression for each grade, then calculate standardized residuals, which allow for comparisons across different mean scores. Those residuals are then averaged, weighted for student enrollment.
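The per-grade technique described above can be sketched as follows. This is an illustration of the general method with invented data, not the appendix’s actual computation: fit a separate regression for each grade, standardize each grade’s residuals so they are comparable across different grade means, then average each school’s residuals weighted by its enrollment in each grade.

```python
import numpy as np

# Illustrative sketch -- synthetic data, NOT real Newark figures.
rng = np.random.default_rng(1)
n_schools = 30

def standardized_residuals(X, y):
    """OLS residuals rescaled to mean 0, std 1 within a grade."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    res = y - A @ coef
    return (res - res.mean()) / res.std()

# Separate regression per grade (grade means taken from Table 1, rounded).
results = {}
for grade, grade_mean in [("g4", 188.4), ("g8", 205.1)]:
    X = rng.uniform(0, 1, size=(n_schools, 5))          # demographic shares
    y = grade_mean + X @ rng.normal(0, 10, 5) + rng.normal(0, 2, n_schools)
    results[grade] = standardized_residuals(X, y)

# Average each school's per-grade residuals, weighted by grade enrollment.
enroll = {"g4": rng.integers(20, 80, n_schools),
          "g8": rng.integers(20, 80, n_schools)}
weights = np.column_stack([enroll["g4"], enroll["g8"]])
resids = np.column_stack([results["g4"], results["g8"]])
school_score = (resids * weights).sum(axis=1) / weights.sum(axis=1)
print(school_score.shape)
```

Because each grade’s residuals are standardized before averaging, a school’s overall score is no longer pushed up or down by the arbitrary differences in grade-level means.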

Figure 4 uses this methodology. Careful readers will notice that the relative position of many schools has shifted from Figure 1, significantly in some cases. Once again, however, there are “Great” schools that underperform relative to “Falling Behind” schools.

Even under this improved method, the classification of schools under One Newark remains arbitrary and capricious.



[1] Akerlof, G.A. (1970). The Market for “Lemons”: Quality Uncertainty and the Market Mechanism. Quarterly Journal of Economics, 84(3), 488-500. https://www.iei.liu.se/nek/730g83/artiklar/1.328833/AkerlofMarketforLemons.pdf

[2] For a classic example, see: Friedman, M. (1980) “What’s Wrong With Our Schools?” http://www.edchoice.org/the-friedmans/the-friedmans-on-school-choice/what-s-wrong-with-our-schools-.aspx

[3] – An Empirical Critique Of “One Newark”: https://njedpolicy.wordpress.com/2014/01/24/new-report-an-empirical-critique-of-one-newark/
– “One Newark’s” Racially Disparate Impact On Teachers:
– A Response to “Correcting the Facts about the One Newark Plan: A Strategic Approach To 100 Excellent Schools”: https://njedpolicy.wordpress.com/2014/03/24/a-response-to-correcting-the-facts-about-the-one-newark-plan-a-strategic-approach-to-100-excellent-schools/

[4] The paper application used for One Newark is no longer available at NPS’s website. Originally retrieved from: http://onewark.org/wp-content/uploads/2013/12/One-Newark-Enrolls-Paper-Application.pdf

[5] http://onewark.org/wp-content/uploads/2013/12/StrategicApproach.pdf

[6] See: A Response to “Correcting the Facts about the One Newark Plan: A Strategic Approach To 100 Excellent Schools,” p. 8.

[7] North Star, however, does engage in significant patterns of student cohort attrition that likely affect its student population and test scores. See: http://schoolfinance101.wordpress.com/2013/10/25/friday-story-time-deconstructing-the-cycle-of-reformy-awesomeness/



A Response to “Correcting the Facts about the One Newark Plan: A Strategic Approach To 100 Excellent Schools”

Full report here: Weber.Baker.OneNewarkResponsewithexecsum

Mark Weber & Bruce Baker


This brief is a response to the Newark Public Schools rebuttal of our analysis of the district’s schools restructuring plan, One Newark. In this response, we find:

  • The consequences of the One Newark plan are racially disparate, creating a possible legal challenge for both the families of students and staff. NPS, however, has not acknowledged this part of our analysis.
  • NPS uses scale scores from state tests, averaged across grade levels, in their rebuttal. We find these measures to be seriously flawed, and certainly no better than the measures we used in our initial report.
  • Even using these flawed measures, we still find the classifications of schools under One Newark to be arbitrary and capricious when accounting for student population characteristics.
  • Even when using scale scores, we find no evidence that the student population of Newark will do better under schools run by charter management organizations. Further, the patterns of student cohort attrition in some charter schools and other behaviors lead us to question the validity of One Newark’s charter takeover strategy.
  • The statistical models used by NPS in their rebuttal are fundamentally flawed: specifically, the author(s) did not account for collinearity within the NPS model, biasing the results towards NPS’s favored position.


On March 11, 2014, the Newark Public Schools (NPS) released a response to our policy brief of January 24, 2014: “An Empirical Critique of One Newark.”[1] Our brief examined the One Newark plan, a proposal by NPS to close, “renew,” or turn over to charter management organizations (CMOs) many of the district’s schools. Our brief reached the following conclusions:

  •  Measures of academic performance are not significant predictors of the classifications assigned to NPS schools by the district, when controlling for student population characteristics.
  • Schools assigned the consequential classifications have substantively and statistically significantly greater shares of low income and black students.
  • Further, facilities utilization is also not a predictor of assigned classifications, though utilization rates are somewhat lower for those schools slated for charter takeover.
  • Proposed charter takeovers cannot be justified on the assumption that charters will yield better outcomes with those same children. This is because the charters in question do not currently serve similar children. Rather, they serve less needy children, and when adjusting school aggregate performance measures for the children they serve, they achieve no better current outcomes on average than the schools they are slated to take over.
  • Schools slated for charter takeover or closure specifically serve higher shares of black children than do schools facing no consequential classification. Schools classified under “renew” status serve higher shares of low‐income children.

In its response[2], NPS questions both our methodology and our data sources. We are pleased to engage NPS in a thoughtful dialogue about One Newark; however, their rebuttal unfortunately confirms many of our conclusions about the plan, and refuses to even acknowledge many of our critiques.

Rather than answer NPS’s criticisms point-by-point, we take this opportunity to focus on the larger issues NPS raises about our brief, addressing specific arguments within the body of this response. It is our intention here to further the dialogue about One Newark in the hopes that NPS will move toward a position of transparency and engagement with stakeholders, both in and out of Newark.


We are pleased that “An Empirical Critique of One Newark” has generated a response from the Newark Public Schools administration. We have watched over the last few months as the topic of the One Newark plan has generated strong reactions from stakeholders both in and out of Newark. Given the changes that One Newark will bring – changes that even NPS agrees are profound and far-reaching – a measured, careful analysis of the rationale and consequences of these changes is clearly necessary.

Our conclusions are informed by public data using standard statistical methods. We labor to make our results replicable and understandable: we believe it is a testament to our work that NPS was able to respond to “An Empirical Critique” without any questions as to why we reached the conclusions that we did, even if they disagreed with those conclusions.

We believe it is time for NPS to make a similar commitment to transparency in their own formulations of policy. Despite their protestations, we are still no closer to understanding how NPS classified particular schools than we were before. We still do not know NPS’s rationale for why three particular schools are being taken over by two particular CMOs. We still do not know why staff at particular schools face an employment consequence while staff at other schools do not. We don’t know why NPS proposes to divest particular facilities to particular parties.

Backwards-engineering a rationale for One Newark does not contribute to transparency. Using flawed measures like averaged scale scores does not increase stakeholders’ faith in NPS’s ability to justify its plan. Engaging in poor statistical practice does not lead to confidence in NPS’s judgments. And failing to fulfill legal obligations to release data in a timely manner does not encourage a candid exchange of views.

We agree that the educational outcomes of Newark’s students are not acceptable, and that change is needed in the lives of Newark’s deserving children. Whether that change can come solely, or even primarily, through the policies of a state-run school district is an open question. We heartily agree, however, that school policies certainly matter, and Newark should constantly strive to make its schools better, even in the face of seemingly insurmountable problems whose solutions lie outside the purview of the public schools.

But no change can come unless and until an open dialogue about education takes place in front of a well-informed public, where all stakeholders have access to the inner working of the mechanisms that generate policies. If our briefs have compelled NPS to begin to engage in this dialogue, we will consider our time analyzing One Newark to have been well spent.

“One Newark’s” Racially Disparate Impact on Teachers

PDF of Policy Brief: Weber.Baker.Oluwole.Staffing.Report_3_10_2014_FINAL

As with our previous One Newark policy brief, this one is too long and complex to post in full as a blog. Below are the executive summary and conclusions and policy recommendations. We encourage you to read the full report at the link above.

Executive Summary

In December of 2013, State Superintendent Cami Anderson introduced a district-wide restructuring plan for the Newark Public Schools (NPS). In our last brief on “One Newark,” we analyzed the consequences for students; we found that, when controlling for student population characteristics, academic performance was not a significant predictor of the classifications assigned to schools by NPS. This results in consequences for schools and their students that are arbitrary and capricious; in addition, we found those consequences disproportionately affected black and low-income students. We also found little evidence that the interventions planned under One Newark – including takeovers of schools by charter management organizations – would lead to better student outcomes.

In this brief, we continue our examination of One Newark by analyzing its impact on NPS’s teaching staff. We find the following:

  • There is a historical context of racial discrimination against black teachers in the United States, and “choice” systems of education have previously been found to disproportionately affect the employment of these teachers. One Newark appears to continue this tradition.
  • There are significant differences in race, gender, and experience in the characteristics of NPS staff and the staff of Newark’s charter schools.
  • NPS’s black teachers are far more likely to teach black students; consequently, these black teachers are more likely to face an employment consequence as black students are more likely to attend schools sanctioned under One Newark.
  • Black and Hispanic teachers are more likely to teach at schools targeted by NJDOE for interventions – the “tougher” school assignments.
  • The schools NPS’s black and Hispanic teachers are assigned to lag behind white teachers’ schools in proficiency measures on average; however, these schools show more comparable results in “growth,” the state’s preferred measure for school and teacher accountability.
  • Because the demographics of teachers in Newark’s charter sector differ from NPS teacher demographics, turning over schools to charter management organizations may result in an overall Newark teacher corps that is more white and less experienced.

These findings are a cause for concern: to the extent that the One Newark plan disproportionately affects teachers of one race versus another, the plan may be vulnerable to legal challenge under civil rights laws.

Conclusions and Policy Implications

In our previous brief, we found that the One Newark plan imposed consequences on schools and their students that were arbitrary and capricious. We found little evidence to support the claim of NPS that One Newark would improve student outcomes, and we found that the students who would see their schools closed, turned over to CMOs, or “renewed” were more likely to be black and/or suffering from economic disadvantage.

In this brief, we turn our attention to the effects of One Newark on NPS staff. We find patterns of racial bias in the consequences to staff similar to those we found in the consequences to students, largely because the racial profiles of students and staff within the NPS schools are correlated. In other words: Newark’s black teachers tend to teach the district’s black students; therefore, because One Newark disproportionately affects those black students, black teachers are more likely to face an employment consequence.

NPS’s black teachers are also more likely to have positions in the schools that are designated by the state as needing interventions – the more challenging school assignments. The schools of NPS black teachers consequently lag in proficiency rates, but not in student growth. We do not know the dynamics that lead to more black teachers being assigned to these schools; qualitative research on this question is likely needed to understand this phenomenon.

One Newark will turn management of more NPS schools over to charter management organizations. In our previous brief, we questioned the logic of this strategy, as these CMOs currently run schools that do not teach students with similar characteristics to NPS’s neighborhood schools. Evidence suggests these charters would not achieve any better outcomes with this different student population.

This brief adds a new consideration to the shift from traditional public schools to charters: if the CMOs maintain their current teaching corps’ profile as they expand, Newark’s teachers are likely to become more white and less experienced overall. Given the importance of teacher experience, particularly in the first few years of a teacher’s career, Newark’s students would likely face a decline in teacher quality as more students enroll in charters.

The potential change in the racial composition of the Newark teaching corps under One Newark – to a staff that has a smaller proportion of teachers of color – would occur within a historical context of established patterns of discrimination against black teachers. “Choice” plans in education have previously been found to disproportionately impact the employment of black teachers; One Newark continues in this tradition. NPS may be vulnerable to a disparate impact legal challenge on the grounds that black teachers will disproportionately face employment consequences under a plan that arbitrarily targets their schools.

New Report: An Empirical Critique of “One Newark”

Our new report is too long to post in its entirety in blog form.

The report can be downloaded here: Weber.Baker_OneNewark_Jan24_2014

Below is the executive summary of the report:

Executive Summary

On December 18, 2013, State Superintendent Cami Anderson announced a wide-scale restructuring of the Newark Public Schools. This brief examines the following questions about One Newark:

  • Has NPS identified the schools that are the least effective in the system? Or has the district instead identified schools that serve more at-risk students, which would explain their lower performance on state tests?
  • Do the interventions planned under One Newark – forcing staff to reapply for jobs, turning over schools to charter operators, closure – make sense, given state performance data on NPS schools and Newark’s charter schools?
  • Is underutilization a justification for closing and divesting NPS school properties?
  • Are the One Newark sanctions, which may abrogate the rights of students, parents, and staff, applied without racial or socio-economic status bias?

We find the following:

  • Measures of academic performance are not significant predictors of the classifications assigned to NPS schools by the district, when controlling for student population characteristics.
  • Schools assigned the consequential classifications have substantively and statistically significantly greater shares of low income and black students.
  • Facilities utilization is likewise not a predictor of the assigned classifications, though utilization rates are somewhat lower for those schools slated for charter takeover.
  • Proposed charter takeovers cannot be justified on the assumption that charters will yield better outcomes with the same children, because the charters in question do not currently serve similar children. Rather, they serve less needy children, and when school aggregate performance measures are adjusted for the children they serve, they achieve no better current outcomes on average than the schools they are slated to take over.
  • Schools slated for charter takeover or closure specifically serve higher shares of black children than do schools facing no consequential classification. Schools classified under “renew” status serve higher shares of low-income children.

These findings raise serious concerns at two levels. First, they call into question the district’s own purported methodology for classifying schools: our analyses suggest the district’s classifications are arbitrary and capricious, yielding racially and economically disparate effects. Second, the choice, based on arbitrary and capricious classification, to subject disproportionate shares of low-income and minority children to substantial disruption of their schooling, shifting many to schools under private governance, may substantially alter the rights of these children, their parents, and local taxpayers.



One Newark is a program that appears to place sanctions on schools – including closure, charter takeover, and “renewal” – on the basis of student test outcomes, without regard for student background. The schools under sanction may have lower proficiency rates, but they also serve more challenging student populations: economically disadvantaged students, students with special educational needs, and students who are Limited English Proficient.

There is a statistically significant difference in the student populations of schools that face One Newark sanctions and those that do not. “Renew” schools serve more free lunch-eligible students, which undoubtedly affects their proficiency rates. Schools slated for charter takeover and closure serve larger proportions of students who are black; those students and their families may have their rights abrogated if they choose to stay at a school that will now be run by a private entity.[1]

There is a clear correlation between student characteristics and proficiency rates on state tests. When we control for student characteristics, we find that many of the schools slated for sanction under One Newark actually have higher proficiency rates than we would predict. Further, the Newark charter schools that may take over those NPS schools perform worse than prediction.

There is, therefore, no empirical justification for assuming that charter takeovers will work when, after adjusting for student populations, schools to be taken over actually outperform the charters assigned to take them over. Further, these charters have no track record of actually serving populations like those attending the schools identified for takeover.

Our analysis calls into question NPS’s methodology for classifying schools under One Newark. Without statistical justification that takes into account student characteristics, the school classifications appear to be arbitrary and capricious.

Further, our analyses herein find that the assumption that charter takeover can solve the ills of certain district schools is specious at best. The charters in question, including TEAM Academy, have never served populations like those in schools slated for takeover and have not produced superior current outcome levels relative to the populations they actually serve.

Finally, as with other similar proposals sweeping the nation arguing to shift larger and larger shares of low income and minority children into schools under private and quasi-private governance, we have significant concerns regarding the protections of the rights of the children and taxpayers in these communities.

[1] Green, P.C., Baker, B.C., Oluwole, J. (in press) Having it Both Ways: How Charter Schools try to Obtain Funding of Public Schools and the Autonomy of Private Schools. Emory Law Journal

Charter Schools in Hudson County, NJ: Skimming the Cream?

Mark Weber, Doctoral Candidate, Rutgers University, Graduate School of Education

Hudson County Charter Schools:
A Case Study

The role of charter schools within New Jersey’s education system continues to be a hotly debated topic, particularly in the state’s urban areas. News reports have detailed an ongoing controversy over the expansion of a charter school in Hoboken.[i] In Jersey City, charter school policies have been an important part of the city’s political races.[ii] These communities, both in Hudson County, provide an excellent case study for examining how student achievement, economic disadvantage, and charter enrollment interact.

A central issue in the Hudson County charter school debate is the role of cream-skimming: the practice of serving a disproportionate number of children who are not economically disadvantaged or who do not have special educational needs compared to a charter’s sending area.[iii]

The correlation between poverty and academic performance[iv] is well known, as is the influence of “peer effect”[v] on student achievement. If charter schools are serving students who largely come from higher socio-economic status groups, they will have an advantage over their neighboring traditional public schools (TPSs); further, that advantage may come at a cost to those TPSs, which must educate the economically disadvantaged students that the charter schools do not serve.

To determine if Hudson County’s charter schools enjoy a cream-skimming advantage, we first need to compare their student populations to the schools that serve students in the same community.

Calculating a “Disparity Ratio”

Drawing on 2012 data from the New Jersey Department of Education (NJDOE)[vi], the graph below shows a free lunch (FL) eligible “disparity ratio” for every public school – both charters and TPSs – in Hudson County. The ratio is calculated through this equation:

Disparity ratio = school’s percentage of free lunch eligible students / “actual” district’s percentage of free lunch students

Students who are eligible for FL are economically disadvantaged[vii]: their families have incomes at or below 130% of the federal poverty line[viii]. The “actual” district of a charter school is the school district where the charter is physically located (charter schools are, by law, considered to be their own districts[ix]); for a TPS, it is simply the school’s own district. To calculate an “actual” district’s population, add the populations of all charters located within the boundaries of that school district to the populations of all TPSs in the district; the “actual” district’s FL population is calculated the same way.

The disparity ratio then compares each school’s FL percentage with the “actual” district’s FL percentage. For example, Hoboken’s “actual” FL percentage is 48%.[x] If a school in Hoboken has a disparity ratio of “1,” its free lunch percentage would be 48%. If the school had a free lunch percentage of 96%, its disparity ratio would be “2”; a school with 24% FL eligible students would have a ratio of “0.5.”
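The calculation above can be sketched in a few lines of Python. The pooling helper follows the “actual” district definition described earlier; the only figure taken from the text is Hoboken’s 48% rate in the worked example, and the function and variable names are mine, not NJDOE’s.

```python
def actual_district_fl_pct(schools):
    """Pool every TPS and charter physically located in the district:
    total FL-eligible students over total enrollment, as a percentage."""
    total_fl = sum(s["fl_count"] for s in schools)
    total_enrollment = sum(s["enrollment"] for s in schools)
    return 100 * total_fl / total_enrollment

def disparity_ratio(school_fl_pct, district_fl_pct):
    """A school's FL-eligible percentage divided by its 'actual' district's."""
    return school_fl_pct / district_fl_pct

# Reproducing the worked example: Hoboken's "actual" FL percentage is 48%.
print(disparity_ratio(48, 48))  # 1.0
print(disparity_ratio(96, 48))  # 2.0
print(disparity_ratio(24, 48))  # 0.5
```

A ratio above 1 means a school serves a larger share of FL-eligible students than its “actual” district as a whole; a ratio below 1 means a smaller share.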

The eleven charter schools for which NJDOE had data in 2012 are shown in red.


The majority of Hudson County’s charter schools serve a significantly smaller percentage of students who are economically disadvantaged than their neighboring TPSs.

Economic Disadvantage and Student Academic Performance

How does student performance correlate to the free lunch disparity of a school?


This graph shows FL disparity plotted against aggregate grade-weighted proficiency[xi], a measure of the percentage of all test takers at a school who reached the level of “proficient” or higher on New Jersey’s state-wide standardized test, the NJASK. While the measure likely hides important differences in test scores[xii], it is still a viable and concise way to judge differences between schools.

While Hudson County’s charter schools do have relatively high rates of proficiency, it is worth noting that many of the county’s TPSs perform at the same level, even though they have less disparity in the percentage of economically disadvantaged children they enroll. In addition, the trend-lines for both types of schools show that, on average, a TPS will show a higher rate of aggregate proficiency when compared to a charter school with the same FL disparity. Of course, this is merely conjecture when it comes to the highest-performing charters, as there are no TPSs in Hudson County that have comparable levels of FL disparity.

The “r-sq” number represents how much of the difference in proficiency can be statistically “explained” by a difference in FL disparity. Here, over 70% of the variation in proficiency among the Hudson County charters can be explained by their FL disparity, a strong correlation.
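For readers unfamiliar with the statistic, the r-squared reported on the graph is the squared correlation between the two school-level measures. The sketch below shows the calculation; the data points are invented for illustration and are not the actual Hudson County figures.

```python
import numpy as np

# Illustrative (invented) school-level data: FL disparity ratios and
# aggregate grade-weighted proficiency rates for a handful of schools.
disparity = np.array([0.3, 0.5, 0.6, 0.8, 1.0])
proficiency = np.array([88.0, 80.0, 75.0, 68.0, 60.0])

# r-squared: the share of variation in proficiency that is statistically
# "explained" by variation in FL disparity.
r = np.corrcoef(disparity, proficiency)[0, 1]
r_squared = r ** 2
print(round(r_squared, 3))
```

Note that r-squared measures how tightly the two variables move together; it says nothing by itself about why they move together.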

Another way to measure student achievement is through “growth” on test scores[xiii]:


The Median Student Growth Percentile (mSGP) is calculated by NJDOE to show how much growth a student exhibited in test scores over the previous year, compared to students who started with the same score[xiv]. As with proficiency rates, many TPSs in Hudson County show just as much growth as charter schools – if not more – as measured by mSGP; however, those schools often have far less disparity in the percentage of FL students they serve than Hudson’s charters.

Of course, a legitimate reply to this analysis would be to point out that schools in low-poverty districts may not have high levels of disparity with their “actual” district; they may still, however, serve lower percentages of FL students than Hudson’s charter schools.

The graph below compares aggregate grade-weighted proficiency against the percentage of students in each Hudson County school who qualify for Free Lunch:


Even compared to low-poverty district schools, Hudson’s charters serve a small percentage of FL students. And many TPSs serving far greater percentages of FL students get comparable or better results.

Conclusions: Are Hudson County’s Charters Skimming the Cream?

Certainly, there is no evidence within the NJDOE data to show that charters in Hoboken and Jersey City are engaging in a deliberate pattern of cream-skimming. That same data, however, is quite clear: the charter schools in Hudson County that have higher rates of proficiency and/or student growth do not serve the same percentage of economically disadvantaged students as their neighboring traditional public schools.

Further, there is a clear correlation between these charter schools’ test outcomes and the relative percentage of free lunch students they serve compared to their neighboring TPSs. The correlation is much stronger for the county’s charter schools than for its TPSs.

Hudson County’s policy makers, education leaders, and citizens need to ask themselves a question:

Are cream-skimming charters a good investment if their test score outcomes correlate closely with their disparity in serving economically disadvantaged children?


[i] “Request to block Hoboken charter school’s expansion, renewal put on pause.” The Star-Ledger, 11/13/13. http://www.nj.com/hudson/index.ssf/2013/11/request_to_block_hoboken_charter_schools_expansion_renewal_put_on_pause.html

[ii] “Supporters boast after Fulop wins Jersey City mayor’s race.” The Star-Ledger, 5/15/13. http://www.nj.com/hudson/index.ssf/2013/05/supporters_boast_after_fulop_w.html

[iii] DiCarlo, M. (2012). The evidence on charter schools and test scores. The Shanker Institute. http://shankerblog.org/wp-content/uploads/2011/12/CharterReview.pdf

[iv] Baker, B. & Coley, R. (2013). Poverty and Education: Finding the Way Forward. ETS Center for Research on Human Capital and Education. http://www.ets.org/s/research/pdf/poverty_and_education_report.pdf

[v] Burke, M., Sass, T. (2008). Classroom Peer Effects and Student Achievement. The Urban Institute. http://www.urban.org/UploadedPDF/1001190_peer_effects.pdf

[vii] Free lunch eligibility, rather than free and reduced-price lunch eligibility, is a superior metric for this type of analysis; see Baker, B. (2011) “Measuring poverty in education policy research.” http://schoolfinance101.wordpress.com/2011/03/25/measuring-poverty-in-education-policy-research/

[viii] U.S. Department of Agriculture, Food and Nutrition Service; Income eligibility guidelines.  http://www.fns.usda.gov/cnd/governance/notices/iegs/iegs.htm

[x] Free lunch percentages for “actual” districts in Hudson County, 2012:

  • Bayonne: 52%
  • East Newark: 76%
  • Guttenberg: 71%
  • Harrison: 63%
  • Hoboken: 47%
  • Jersey City: 67%
  • Kearny: 23%
  • North Bergen: 55%
  • Secaucus: 19%
  • Union City: 84%
  • Weehawken: 46%
  • West New York: 72%

[xi] Starting in 2012, NJDOE no longer reports the raw number of students at each proficiency level for each school; only proficiency rates for each grade level are reported, so the data needed for a simple calculation of a school’s overall proficiency rate are not available. Further, that overall proficiency rate can’t be calculated by “averaging averages,” as different grade levels in each school may have different numbers of students.

To overcome this problem, I used the enrollment file to weight each grade’s proficiency rate relative to the entire population of test takers in grades 3 through 8; in this way, the proficiency rates for each grade can be weighted against one another. Unfortunately, there is no way to know whether the number of students in the enrollment file differs substantially from the number of students who actually took the tests.
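A minimal sketch of that weighting, using invented grade enrollments and proficiency rates for illustration:

```python
# Each grade's proficiency rate is weighted by that grade's share of the
# school's tested-grade enrollment, rather than "averaging averages."
grades = {
    3: {"enrollment": 60, "proficiency": 72.0},
    4: {"enrollment": 50, "proficiency": 68.0},
    5: {"enrollment": 40, "proficiency": 80.0},
}

total = sum(g["enrollment"] for g in grades.values())
aggregate = sum(
    (g["enrollment"] / total) * g["proficiency"] for g in grades.values()
)
print(round(aggregate, 1))  # 72.8
```

The simple average of the three rates would be 73.3, which shows why “averaging averages” across grades of different sizes gives a slightly different, and less defensible, answer.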

In addition, there is a legitimate argument to be made that showing proficiency on the NJASK Language Arts test should not be counted as equivalent to showing proficiency on the NJASK Science or Math test. Also, a school like HOLA Hoboken Dual Language Charter School, which only has test takers in 3rd grade, may pay a penalty or reap a reward compared to other schools if the NJASK-3 has higher or lower average proficiency rates than other grade levels.

Given the limitations on the data, aggregate grade-weighted proficiency is a reasonable and concise way to view differences in school-wide test achievement, and more than adequate for the purposes of this analysis. In no way, however, should it be viewed as a comprehensive metric for judging a school’s overall effectiveness.

[xii] DiCarlo, M. (2012) “How Often Do Proficiency Rates And Average Scores Move In Different Directions?” http://shankerblog.org/?p=6265

[xiii] The NJDOE 2012 performance report file did not include NJASK proficiency rates for M E T S Charter School, Jersey City; the file did, however, contain mSGP data. M E T S is, therefore, included on this graph, but not the others.

[xiv] For an explanation of SGPs, see this information from the NJDOE: http://www.state.nj.us/education/AchieveNJ/teacher/percentile.shtml