The most important question parents, teachers, staff, and trustees in DISD can ask themselves this summer is this: Did Dallas ISD perform a miracle in 2013? Because according to one of the most comprehensive performance indexes you’ll find, the answer is yes. And if that’s the case — and it is, whether you choose to acknowledge it or not — it should affect everything about how we hold the school board accountable next year in supporting more of Mike Miles’ reform efforts. It also means there is more proof that the school board should extend Miles’ contract by at least two to three years.
The data come from a Houston-based group called ERG Analytics (ERG stands for Education Resource Group). The company is well known to school boards, administrators, and legislators across Texas, to whom it pitches its data-analysis tools. (In fact, it spoke to the DISD board not long ago, but only to report on the district’s financial performance; the company was brought in by DISD’s CFO, Jim Terry, one of the smartest people in the school district.)
I talked to ERG’s Paul Haeberlen last week so he could explain what had been described to me as “the Dallas Miracle”: that a district that had been mired in sub-mediocrity (by ERG’s numbers) for a decade took a dramatic turn for the better in 2013.
Here’s what happened: ERG has a performance index that measures a school district’s performance based on seven measures of student outcomes, and THEN controls for poverty (which you must do to have any meaningful discussion about how well a school district is doing the job of educating its kids). And THEN ERG uses that index — the final score the district gets — to rank districts against each other.
You see why all this is important? First, you have to take several measurements to get a realistic picture of student achievement across the vast ecosystem of a large school district. ERG uses three difficulty levels of the STAAR assessment, two graduation measures (rate and college readiness), and two college-entrance test scores (mean SAT and ACT scores). These are weighted appropriately (not equally), and the result is a single number. That’s why, for example, graduation rates count only 10 percent toward the final number — because it’s only one metric, albeit a high-profile one. Judging a district based solely on any one metric would be like judging the effect of a storm system by counting lightning strikes.
Then, by adjusting for poverty levels, ERG provides context — whether the storm takes place over an ocean or a desert. As we’ve talked about, a school district could be doing an average job of teaching rich kids, and its test scores would still be outstanding. It could also be doing a great job teaching poor kids while its test scores remain much lower. ERG adjusts for this so everyone is graded on how well they’re teaching the kids in front of them.
Then, ERG understands that the performance score is not the end of the story. Districts aren’t trying to improve in a vacuum. They’re competing against other districts across the state that are also trying to improve. Think of it like ships in an ocean. If your ship is trailing all the others, and new information helps you figure out a way to increase its speed by 5 knots, you’re still losing ground if everyone else uses that same knowledge to increase their speed by 6 knots. (In this case, the knowledge might be new methods of instruction, the latest research, what have you.)
Now that we understand how in-depth ERG’s numbers are, how does DISD fare in the company’s performance index? By ERG’s performance measures, DISD had been in the third quartile [of the 200 largest ISDs statewide] for the past decade, Haeberlen said.
The charts and table at the top of this post show just that. But look at what happened in 2013: Dallas ISD, the second-largest ISD in the state, one that had never moved out of the third quartile, shot to mid-second quartile, placing 69th among the 200 largest districts in Texas. For comparison, Houston ISD, a current Broad Prize winner, is ranked 62nd.
“What happened in 2013?” Haeberlen asks. “The district improved across the board.” In other words, in six of the seven metrics measured, DISD showed improvement. For a district this size, that’s astonishing.
All DISD really did, Haeberlen points out, is catch up with the many outstanding districts around it. Richardson ranked 2nd by ERG measures (and has a student population almost as poor and challenging as DISD’s, but without the board dissension). HEB, Garland, Carrollton-Farmers Branch, Mesquite, Highland Park — they all show well in ERG’s data.
Why should we all care? Because the quality of leadership in a school district has a profound impact on student learning. And it’s really hard to measure the outcomes of our leaders’ decisions when so much achievement data is just noise, thrown out piecemeal, usually cited to back up a conclusion that an interest group arrived at years ago. That’s why we need to look at the smartest data analysts out there, the ones not involved in the political muck surrounding school districts, to seek clarity — so we can hold our school leaders accountable.
In this case, Mike Miles has earned more time to see his reforms through. We need to track ERG data over the next three years to see if his reforms continue to have a positive outcome in ERG’s performance index. For now, it’s clear to me that Mike Miles is doing what he was hired to do: make DISD a much better district.