Choice of school makes little difference?
March 19, 2012 3:19 PM

How much effect does a choice of high school have on a child's educational outcomes?

Some time ago on a BBC Radio programme (page down to "Do schools make a difference?") I heard an educationalist say (something like) "only 10% of the variance in a child's educational outcomes can be attributed to the high school the child goes to".

The speaker went on to say that factors such as the parents' attitude to schooling, the natural aptitude of the child, etc. made up the other 90% of the variance.

This seemed remarkable to me (and in fact hard to believe), so I'm wondering if I've misunderstood what was being said. My understanding was that if you took a random child and had them spend seven years in the "best" (whatever that means) high school in the world or, alternatively, seven years in the "worst" high school, their educational achievements would be different but not terribly different.

I'd be interested to see references (preferably for the non-academic reader) to material which supports or undermines this point of view. I'd also be grateful for some more general comments on her statement, as I wonder if I'm misunderstanding what "10% of the variance" means.
posted by southof40 to Education (14 answers total) 6 users marked this as a favorite
Freakonomics (the book) covered something similar, stating that the single most important factor in a child's educational prospects was the education level of the mother, and hinted strongly that all those helicopter parents overly involved in the minutiae of their kids' school rankings were fretting for little measurable benefit.
posted by ambrosia at 3:28 PM on March 19, 2012 [1 favorite]

Just in case you weren't aware, high school in the US is four years instead of seven, so US-based analysis is probably going to show different results than studies wherever you are (New Zealand?).
posted by desjardins at 3:38 PM on March 19, 2012

The problem with these kinds of studies is that they're based on observational data -- NOT controlled studies. I haven't listened to this particular podcast yet (but thanks for the link!), but I think it is VERY hard to draw conclusions about causation from observational data. So, even if only 10% of the variance is explained by high school, you would need to know a lot more about what other variables were included in the data, and whether there were confounding factors.

Also see: Correlation does not imply causation
posted by tinymegalo at 3:38 PM on March 19, 2012

I would look at places where local taxes determine the amount of money put into the education system. First, I know more money spent does not mean better education; a lot depends on what it is spent on. Clearly, though, if you look at the suburbs of most major cities, people signal what they think an education in a town is worth through real estate values. Here in the Northeast (I am in the county just north of NYC), real estate values are tied closely to school systems. A 3,500 sq ft house in one district can be worth more than double a similar house in a district 5 miles away.

Instinctively, I would also say that you have to look at the teachers, not just the students. I think a great teacher in a low performing district is better than a poor teacher in a high performing district.

Also, 10% better than expected versus 10% worse than expected is a 20% gap, which I think is significant. And how is the 10% measured? If your SAT score went up by 10%, that would be significant.
posted by JohnnyGunn at 3:39 PM on March 19, 2012

I've been reading J. Anthony Lukas's Common Ground, a Pulitzer Prize-winning history of three families in Boston--one Yankee, one Irish-American, and one African-American. There is significant discussion of schooling in the book, including some discussion of the 1966 "Coleman Report" on public schools in America. As I understand it, the report concluded that black and white schools were largely funded equally on a per student basis, and it was home life, rather than school funding, that had the greatest effect on student performance. That seems consistent with the radio program reportage and the Freakonomics piece.

Caveat: I am not an educator, and I have no idea whether the Coleman report--which was controversial enough in its own time--has any sustained value or was entirely discredited, etc. I have no dog in this fight.
posted by Admiral Haddock at 3:41 PM on March 19, 2012

Might help you narrow down people looking at the field a bit: Back in 2000 Alan Krueger and Stacy Berg Dale suggested that except for extreme low-income students, college in the U.S. worked primarily as a high-pass filter.

High school probably has less of a filtering property, which suggests to me that it's likely got even less impact on a student.
posted by straw at 4:01 PM on March 19, 2012

I'm actually not really tremendously surprised at this. When I think back to high school, home life created a huge difference in how much people cared about their studies and thus how hard they tried. The main reason I cared about my grades (and thus strived for them) was because it was important to my dad. My peers who tried hard and were in the top classes at my school all came from homes where education was valued (regardless of income). It was ingrained in me from an early age that it was important to work hard in school. Those core values are acquired from an early age, so it really does make a huge difference. By the time you get to high school, your core values are pretty well set, so going to a good or bad high school isn't going to make as big of a difference.
posted by DoubleLune at 4:22 PM on March 19, 2012 [1 favorite]

Ha! I *JUST* finished a multi-level regression analysis of this very question using the NELS-88 data for my Applied Data Analysis class, which does what you mention: it sorts out the average kid-level and school-level variability for the sample. The amount of attributable variance at each level really depends on which variables you put in the model (we were using a fairly simple one as the point of the assignment was getting practice developing a multi-level model), but yes, student-level variation is much larger than school-level variation, and this is borne out in the literature I've read.
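(For anyone curious what "sorting out kid-level and school-level variability" means in practice, here's a toy sketch with made-up scores for three hypothetical schools -- nothing to do with the actual NELS-88 data -- showing how total variance splits into a between-school and a within-school piece:)

```python
import statistics

# Made-up test scores for three hypothetical schools -- invented numbers,
# just to show how variance partitions into school- and student-level parts.
schools = {
    "A": [64, 65, 67, 69, 70],
    "B": [65, 66, 68, 70, 71],
    "C": [66, 67, 69, 71, 72],
}

# Between-school variance: how much the school averages differ from each other.
school_means = [statistics.mean(v) for v in schools.values()]
between = statistics.pvariance(school_means)

# Within-school variance: how much students differ inside their own school.
within = statistics.mean(statistics.pvariance(v) for v in schools.values())

# The school-level share of the variance (the intraclass correlation).
share = between / (between + within)
print(f"school-level share of variance: {share:.0%}")  # prints "11%" here
```

With these numbers the school averages differ only a little (67, 68, 69) while students within each school spread out a lot, so the school level ends up with about a tenth of the variance -- which is exactly the shape of the claim in the question.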

Some authors who address this sort of thing that come to mind are Daniel Koretz (warning: auto-play video); Richard Murnane, to some extent; and Thomas Kane. These are likely to be pretty technical though, & US-centric. And sorry that they're all Harvard--they're just who come to mind at the moment. For less technical sources, you might want to check out some education blogs, like Education Week (although most of its content is behind a paywall; your library might have access) and the Shanker blog (definitely on the Ravitch side of the US ed reform debate, but I find their data analysis and interpretation better than most). Sorry I can't be more specific with literature; this isn't the focus of my area of study.

I have, however, been playing around with various data analysis of both observational & experimental data for the past year regarding educational issues, and the variables that tend to count the most, on average for US ed data are: socioeconomic status (of child, mean SES of school, of neighborhood) and highest level of parent education (which is a piece of SES); coming into kindergarten prepared; being at least on grade level for reading by the end of the 3rd grade; previous year's academic achievement; race and gender are often significant as well.

So when we look at those variables and see how many of them are in the child's home environment (SES, parent ed, & pre-K prep), and the fact that not being able to hit reading targets by the 1st- and 3rd-grade benchmarks is one of the strongest predictors of kids dropping out of middle and high school, suddenly it doesn't sound so improbable that individual factors matter more than school factors.

Also keep in mind that although these are observational data, you can't ethically create a controlled experiment where you make some kids grow up in poverty to undereducated parents and assign others to wealthy folks, or send some kids to a purposefully horrible school vs. a (for some value of) great school. So while experimental studies should be conducted for proposed interventions, they don't work so well for testing broad social factors. Sorry for the essay and the US-centric info.

I know all of this needs citations, but I'm knackered and will post tomorrow. If any of the other social science data junkies want to play, I'm pretty sure most, if not all, of the variables I mentioned are in the NELS-88 dataset, a longitudinal study of students from 1988-2000.
posted by smirkette at 5:11 PM on March 19, 2012 [12 favorites]

I am not an education specialist; however, I have studied the sociology of education at university level and retained an interest in it in the intervening years. The statement that schools make little difference is broadly correct, and not an especially controversial one in educational research -- though of course there are a lot of nuances and caveats to a statement so broad.

Firstly, these comparisons are generally confined to single educational systems rather than every school everywhere. After all, there's no point comparing someone going to a class of 60 in Vietnam with someone going to a Swiss school with a class of 15 or whatever. The Swiss pupil will never end up in the Vietnamese school and vice versa. Comparisons need to be made where there's a realistic choice -- and broad-based research like that wouldn't prove much anyway; in my experience most education research is conducted with the home country's education system in mind.

Secondly, using the same example above, there are various factors that can influence outcomes, that are not present in every school. These include classroom size, peer effect (smart kids making other kids smarter), teaching hours, etc etc etc. However, the impact that most of these have on final outcomes is smaller than you might think. We are talking single digits to very low teens, here. Absolutely no more.

Further, even if you can prove a difference, it's very difficult to ascertain the strength of the difference outside the researched cohort - results do not translate well across different student populations, even within the same country, let alone cross-country. Every pupil, classroom, teacher, and school has subtle differences, even though education systems as a whole may be quite homogeneous. There are almost innumerable confounding factors, and it's very challenging to pin a difference on one or a couple of particular things. A great example of this is the debate around homework. It has proven surprisingly difficult to show that homework is generally beneficial. Certain types of homework may be beneficial - but only in some schools; in other schools the difference is negligible. Parental interaction with homework can alter its efficacy, etc. - there are a million different things to control for.

Having said all that: yeah, the biggest factors influencing what comes out of the classroom are the same things that influence what goes into it. Parental income is a big one.
posted by smoke at 5:17 PM on March 19, 2012 [1 favorite]

I think this is something that will depend on where the student in question lives.

My high school saved my life. Literally. There just is nothing comparable to the level of education -- or the atmosphere -- anywhere near where I grew up.

I'm pretty sure that, had I not attended the school I did, I would be a lot less happy. And almost certainly not living the life I live now, doing what I do now. Also potentially dead, since I was bullied to the point of suicidal thoughts before going to the high school I ultimately chose.

(I grew up in a very regressive and socially conservative rural part of the US, in a state which universally scores at the bottom of national surveys of educational standards.)

That said, if you're a nice middle class white kid from the suburbs of Long Island or Chicago or the Bay Area, that is probably a lot less true.

I have no idea what high school choice outside the US ultimately means for any given student, as the educational systems are so different.
posted by Sara C. at 6:33 PM on March 19, 2012 [2 favorites]

There are a lot of effects that are present in 'which high school shall we send our child to?' that might not necessarily be called 'high school'. What are the educational values or attainment of the neighborhood, what SES peer group, what SES neighborhood, etc. But that's different than saying 'this high school improves your kid's educational outcomes by X%'.

(I also suspect that 10% difference is more significant than you might think, particularly if you're looking at either the very top (Princeton, Yale, MIT, Harvard) or at the very bottom (incarceration, teen pregnancy, etc) in terms of outcomes).
posted by Lady Li at 6:38 PM on March 19, 2012

I think Sara C. makes a good point. I had the same situation. If I were put into the regular public high school, there is no way I wouldn't be a deadbeat right now. Or dead.

The school made all the difference. But it was a Catholic school in the old tradition, which purposefully accounts for lack of parental involvement, being run by a religious order whose roots are in the education of poor and unfortunate children. They were the ones who set the standards and fostered a love of learning. They were the ones who demanded excellence, and motivated the students to perform, regardless of what the parents did. If the parents were involved, great. But if not, that kid was still going to get a good education.

So maybe the variance does average out to 10%. But there is no way that it only accounts for 10% of the difference in all students. (Unless the schools in this study only have a 10% variance in quality.)
posted by gjc at 7:34 PM on March 19, 2012 [1 favorite]

Best answer: I wonder if I'm misunderstanding what "10% of the variance" means .

I doubt you are. Variance is just what it sounds like--how much variation there is in the outcome (with test scores there could be lots of variance because some kids will do really badly, a lot will do average, and some will do really well). Education researchers try to explain that variation (if you know why students score differently, you can design interventions that target those reasons and help kids do better). So the report was just saying that features of high schools don't really help us understand very much about why some kids do well and some kids don't. Instead, features of the kids themselves (their aptitude, engagement, parental support, etc.) do a much better job of explaining why some kids do well and why some don't.

And basically, yes it's true that when you compare the effects of "proximate" factors (like aptitude, engagement, family background, etc.) to "distal" factors (things 'further' from unit you're interested in explaining--the student--like features of a high school), the proximate factors do a far better job of explaining variation in the outcome. Here's a blog from Ed Week addressing the issue.
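(If it helps to see the "10% of variance" claim made concrete, here's a quick hypothetical simulation -- invented numbers, not from any real study: build a world where school quality carries 1 unit of variance and student-level factors carry 9, and school membership comes out explaining about a tenth of the total, just as the radio program described:)

```python
import random
import statistics

random.seed(0)

# Hypothetical world: school quality varies with 1 unit of variance,
# student-level factors with 9 units, so schools should be ~10% of the total.
school_effects = [random.gauss(0, 1) for _ in range(200)]

# 50 students per school: each score is school effect + student-level noise.
scores = [e + random.gauss(0, 3) for e in school_effects for _ in range(50)]

between = statistics.pvariance(school_effects)
total = statistics.pvariance(scores)
print(f"school share of variance: {between / total:.0%}")  # roughly 10%
```

Note the flip side the simulation makes visible: even here the student-level spread (standard deviation 3) dwarfs the typical gap between schools (standard deviation 1), which is why the claim is less shocking than it first sounds.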

The problem with these kinds of studies is that they're based on observational data -- NOT controlled studies.

I don't know exactly what data they used in the original report, but yes, these are usually observational studies, not controlled experiments -- it's pretty hard to randomly assign students to high schools (though google the Tennessee STAR class size experiment). Also, the data do have some things going for them in terms of suggesting causality. They're longitudinal, nationally representative (meaning they include thousands of students from all over the country, so we can generalize to the whole population), and really rich, so we follow the same students over time and can use things that happen early in life to predict outcomes later, avoiding confounds due to time. Also, we can control for a lot of things we know influence educational outcomes (there are hundreds of variables in the dataset), so it's not simple correlation. And there are pretty sophisticated analytic techniques that help us address the issue (like multilevel modeling). I do this kind of research, and I don't think we're perfectly measuring reality, but I don't think we're completely wrong either.

the variables that tend to count the most, on average for US ed data are: socioeconomic status (of child, mean SES of school, of neighborhood) and highest level of parent education (which is a piece of SES); coming into kindergarten prepared; being at least on grade level for reading by the end of the 3rd grade; previous year's academic achievement; race and gender are often significant as well.


Sara C. and gjc both point out what we can't see in the big datasets, though it's actually impossible for them to really know where they'd be if they had gone to other high schools. Maybe they'd only be 10% less happy. The data might include something like "does your school have a counselor?" but what that counselor means in the lives of a few students may not be detected.

I also suspect that 10% difference is more significant than you might think, particularly if you're looking at either the very top (Princeton, Yale, MIT, Harvard) or at the very bottom (incarceration, teen pregnancy, etc) in terms of outcomes

But that's the point of the study. If you take into account the full range of outcomes, which high school you go to only bumps you up about 10%. You don't jump from the "bottom"--say, being a delinquent teen mom--all the way to the top (Ivy League) just because you switch high schools. At best the high school you attend might bump you up from, say, the 85th percentile (selective state school) to the 95th (Ivy League).
posted by kochenta at 8:09 PM on March 19, 2012 [2 favorites]

Response by poster: Such a lot of great answers, very instructive, thanks to all of you.
posted by southof40 at 2:39 PM on March 21, 2012
