How to talk numbers and not-numbers
April 6, 2016 12:01 AM   Subscribe

Over the years as a researcher in the humanities and social sciences I have often made vague comments expressing an intuition, or perhaps a bias, that the quantitative analyses and statistical models made by, e.g., economics, quantitative political science and other math-fetishistic disciplines miss or misrepresent truths about human behaviour and society, which are messier and less easy to quantify than these disciplines assume. I would like to be able to talk about this in more detail and with more specific examples than I currently can. More below...

Essentially, I would love to hear from this wonderfully diverse community specific examples, pointers to readings, news articles and studies, or even just anecdata, in support of this vague intuition that numbers just aren't the whole picture when it comes to studying the complexity of human society, history, mentality, and culture. And furthermore that the fetishization of numbers might lead us into Bad Places, e.g. corporatization, grade-oriented education policies. However, to correct my own biases, I would also appreciate the same in support of the other side -- what quantification does in fact bring to the table, how it does enhance our understanding, and where numerical simplification of complex humanities-ish phenomena is desirable. I'm open to any disciplinary angle. Sorry if I have phrased this confusingly or too broadly - I don't really know if there is a term for what I'm looking for. Number/Data fetish?

Would appreciate any and all responses. Many thanks in advance!
posted by starcrust to Education (16 answers total) 17 users marked this as a favorite
 
Hi, I'm a statistician/data scientist, so I can speak to this in some detail. In general, quantifying decisions can be a very good thing because, done well, it makes our intuitions explicit and testable. The main problem, however, is that your model cannot take everything into account, and thus can mislead.

A famous example of big data misfiring is Google Flu Trends, which used search terms to predict real levels of flu. It proved really successful, until it wasn't: the model became disconnected from reality for (as far as I'm aware) unclear reasons. This can happen with any model.

For a practical example I've encountered: many insurance companies still rely, in at least some parts of their business, on people's experience rather than models. Individual brokers who have been working at the company for ten years or so acquire a wealth of knowledge about who they can sell policies to, and who is a "good risk". I created a model which quantified some of this, but when building it:

a) I had no idea what the most important variables would be;
b) the data I had access to wasn't necessarily comprehensive.

As a result I created something useful, but not something which could replace experience. A poorly specified model can be worse than none at all, especially if you follow it blindly.
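
To make that last point concrete, here is a minimal, hypothetical sketch (nothing to do with the actual insurance model above) of one way a poorly specified model can lose to having no model at all: a straight line is fit to the falling half of a U-shaped relationship, and extrapolating that line does worse out of sample than simply predicting the historical average.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical illustration: the true relationship is U-shaped, but we only
# observe the falling half when we fit a straight line, so extrapolating that
# line is worse than just predicting the historical average (the "no model"
# baseline).
def truth(x):
    return (x - 5.0) ** 2

x_train = rng.uniform(0, 5, size=200)            # observed regime
y_train = truth(x_train) + rng.normal(scale=1.0, size=200)

slope, intercept = np.polyfit(x_train, y_train, deg=1)   # misspecified linear model

x_new = rng.uniform(5, 10, size=200)             # the world moves outside the data
y_new = truth(x_new) + rng.normal(scale=1.0, size=200)

mse_model = np.mean((y_new - (slope * x_new + intercept)) ** 2)
mse_naive = np.mean((y_new - y_train.mean()) ** 2)

print(f"MSE, misspecified model: {mse_model:.1f}")   # much larger
print(f"MSE, 'no model' average: {mse_naive:.1f}")
```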

That said, relying on intuitions when the facts are in front of you can be a bad idea. Look at the US elections, where Nate Silver (and many other data-based models) said that, while the final result was likely to be close, an Obama victory was extremely likely. Silver predicted 49 states correctly; some models got all 50. Here the pundits were relying on gut feel rather than taking a step back and seeing what reality was saying.

But! In contrast we have UK elections, where polling data tends to be weaker and less available, and which often prove much harder to predict, especially when, as happened in 2015, the polls turn out to be disastrously wrong. That said, the exit poll, which is based on far more data and a better model, got very close to the real result.

It's also worth noting that some approaches are probably just plain wrong. A lot of country-level analyses leave much to be desired, as there are massive differences which need to be taken into account. Many people building these models do not understand the statistical methods they are using, or their consequences. An example of this being an issue is Simpson's paradox, where underlying trends can differ from the top-level trend. For instance, there were studies which seemed to show that universities were discriminating against women overall, but when you split the data by department you found that the universities were actually discriminating somewhat in women's favour! This was due to women applying (on average) for harder-to-get-into courses than men, so that despite being accepted more frequently than men within individual departments, they ended up being rejected more often overall. Such a result feels counter-intuitive, but represents a trap that real data can often hold.
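
Here's a minimal sketch of that pattern with made-up admissions numbers (not the real figures from any study), just to show how the per-department and pooled rates can point in opposite directions:

```python
import pandas as pd

# Hypothetical admissions data (not real figures), constructed so that women
# are admitted at a higher rate within each department, yet at a lower rate
# overall, because they mostly apply to the more selective department B.
df = pd.DataFrame({
    "dept":     ["A", "A", "B", "B"],
    "gender":   ["men", "women", "men", "women"],
    "applied":  [800, 100, 200, 900],
    "admitted": [480, 65, 20, 100],
})

df["rate"] = df["admitted"] / df["applied"]
print(df)  # within A: women 0.65 vs men 0.60; within B: women 0.11 vs men 0.10

pooled = df.groupby("gender")[["applied", "admitted"]].sum()
pooled["rate"] = pooled["admitted"] / pooled["applied"]
print(pooled)  # pooled: women 0.165 vs men 0.50 -- the trend reverses
```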

Worse yet, the solution to Simpson's paradox, which is to split your data into subgroups, can mislead. By chopping and changing data you can show pretty much any result you want if you do it in an unprincipled manner. I might find that people with the star sign Virgo are discriminated against by the university, or people with blue eyes, or short or tall people... I just need to keep looking until I find a pattern. This kind of behaviour is extremely unethical if engaged in deliberately, but a lot of studies end up doing this unthinkingly without realising that they've cheated themselves into spotting a pattern.
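
As an illustration of how easily that happens, here's a small hypothetical simulation: the outcome is pure noise with the same acceptance probability for everyone, but if you slice it by twenty arbitrary labels (stand-ins for star signs, eye colour, and so on) and test each slice, you should expect roughly one to come out "significant" at the usual 5% threshold by luck alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000

# Pure noise: everyone has the same 30% chance of being "admitted", and each
# person gets one of 20 arbitrary, meaningless labels.
admitted = rng.random(n) < 0.30
labels = rng.integers(0, 20, size=n)

# Test every label against the rest. With ~20 tests at alpha = 0.05, about one
# spurious "discrimination" finding is expected even though none exists.
for g in range(20):
    in_group = labels == g
    table = [[admitted[in_group].sum(), (~admitted[in_group]).sum()],
             [admitted[~in_group].sum(), (~admitted[~in_group]).sum()]]
    _, p = stats.fisher_exact(table)
    flag = "  <- 'significant' by chance alone" if p < 0.05 else ""
    print(f"group {g:2d}: p = {p:.3f}{flag}")
```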

A final point is on black swans: events outside our current understanding of the way the world works, like discovering a black swan after years of only finding white ones. These are particularly likely to trip up economic models, which have caused problems in the past by assuming that the world will continue roughly as it has so far, leaving them incapable of dealing with a surprising event (like a bank crashing, say). The consequences of these kinds of mistakes can be large financial shocks.

I don't think the answer is to automatically distrust all quantitative approaches to investigating reality, because human intuitions are often wrong. We are host to lots of biases which prevent us from seeing clearly. A data-based approach is not a panacea, however, and can be just as snared up in the biases we had in the first place. I do think there are many instances of public policy which would be better if we could have a discussion about where the evidence actually points, rather than just our individual prejudices. I do think we should be skeptical about new models of reality, and criticise them, and try to understand their shortcomings. If our intuitions are screaming that a model is wrong, then our intuitions may well be right. Blind trust in data will definitely lead us into trouble, but so will blind belief in our intuitions.
posted by Cannon Fodder at 12:52 AM on April 6, 2016 [24 favorites]


By chopping and changing data you can show pretty much any result you want if you do it in an unprincipled manner.
This tool (and article) by 538 is a good illustration of how that works: Hack Your Way To Scientific Glory. It lets you show that Democrats/Republicans are good/bad for the economy. If you don't get the results you want, you just define "economic performance" differently, or you decide to look at different subgroups of politicians, etc.
posted by blub at 1:15 AM on April 6, 2016 [2 favorites]


i am not sure how literally to take your question, but you may be interested in the market experience (lane) which tries to combine neoliberal economics with much "softer" ideas about personal development. review. it's an old book (1991) and i don't know how influential it was, but it was the first thing that sprang to mind when i read your question.

the second thought i had was that there's a big problem, currently, with the way that scientific results are presented. the probabilistic arguments for their validity are skewed by people "digging for results" and not publishing "failures". there's been a lot of discussion of this - see for example these google results.
posted by andrewcooke at 2:53 AM on April 6, 2016


ps and if extending "cold, mathematical" economics into more "human" dimensions is what interests you, also look at amartya sen - his development as freedom is a classic.
posted by andrewcooke at 3:01 AM on April 6, 2016


Another thing to keep in mind is that statistics are aggregate data and are not causal when applied to an individual. My daughter was born very sick, and the survival rate for the category of babies she was born into is about 60%. I could not function with those odds, but a doctor pointed out that that statistic, while true, is not HER statistic. She had many things going for her - her gender, her race, medical interventions before delivery, my health, the hospital where she was born - that upped her odds of survival significantly. Statistics can be true of a large population but not true for an individual, but that doesn't make them wrong.
posted by peanut_mcgillicuty at 4:45 AM on April 6, 2016 [3 favorites]


While I can't provide any hard examples, I felt very much as you do before I took some basic courses in applied quant for the social sciences so that I could be a smarter consumer of educational research. Like you, I was very frustrated with the focus on numbers alone in what I knew was a very complex, nuanced system. As an urban high school teacher, every year I was dismayed reviewing my classes' testing reports. I could look at the kids who lost ground and, for most of them, knew exactly why (never came to class, had a death in the family, etc.). Where were those factors in my testing report? Nowhere.

As I learned more about descriptive statistics and statistical modeling, I definitely gained an appreciation for what they can do. I was also gratified to hear my profs reiterate time and again that quant isn't the end-all be-all of research methods, that you need both quantitative and qualitative methods to get a fuller picture. (They also paraphrased George Box at least once a week: All models are wrong, but some are useful.)

As Cannon Fodder above says, the real issue is people applying inappropriate tests or drawing inappropriate conclusions (overstating, making causal inferences when they shouldn't). Another issue is that many people don't have a strong enough understanding of stats to challenge bad science or bad reporting (despite a couple semesters under my belt, I would still put myself in the latter category). Staying with education, I think a great use of quant is the Shanker Institute blog--they have some of the best discussion of ed research I've found, really digging into the methods of big studies and looking at where and how reports are getting their data.
posted by smirkette at 5:28 AM on April 6, 2016


I'm not sure what your discipline is, but it sounds to me like you are somewhat misunderstanding what these "math fetishistic" disciplines are actually trying to do.

I'll speak to economics because that's what my background is in. Most economists are distinctly NOT trying to use numbers to explain the "complexity of human society, history, mentality, culture". Rather, they are trying to test rather simple, refutable hypotheses of the sort:

"Under conditions X, when Y happens and everything else remains the same, then typically (or on average) Z."

Can we test this by a narrative/description? I guess. But unless you have multiple observations (which means we now have a sample and can calculate statistics/confidence intervals/etc.), it's really hard to figure out what people typically do / do most of the time / on average / what is the typical and atypical case. Because it's not like people behave according to some natural law.
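
To make that concrete, here is a minimal, hypothetical sketch of what the "sample plus confidence interval" step buys you: with made-up data for an outcome Z under and not under Y, you can state the average difference with an explicit margin of error rather than arguing from a single case.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical outcome Z for units where Y happened and units where it didn't.
# The numbers are invented; the point is that a sample supports statements like
# "on average" or "typically", with an explicit margin of error.
z_with_y    = rng.normal(loc=5.5, scale=2.0, size=120)
z_without_y = rng.normal(loc=5.0, scale=2.0, size=120)

diff = z_with_y.mean() - z_without_y.mean()
se = np.sqrt(z_with_y.var(ddof=1) / len(z_with_y)
             + z_without_y.var(ddof=1) / len(z_without_y))
ci = (diff - 1.96 * se, diff + 1.96 * se)       # approximate 95% interval
t, p = stats.ttest_ind(z_with_y, z_without_y)   # (Welch's version: equal_var=False)

print(f"average difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.3f}")
```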

I think you can criticize this method on multiple grounds, e.g., the conditions X are not that common, Y never happens on its own, the quantity of Z is so small as to be irrelevant, our measures of Z are really poor. But those are all things that a critical reader would be able to make a judgement about based on the work at hand. Building those kinds of critical reading skills is perhaps what you should focus on with your students?

Or maybe the hypotheses that are testable are not actually helpful for us to understand anything about the world. This was a criticism levelled from within the development economics field several years ago. I think that's a valid criticism, but it really overlooks a lot of genuinely useful work that is out there.

Tl;dr: how about just focusing on the critical reading skills that will enable your students to understand the uses and limitations of quantitative social studies work, rather than teaching that it's just wrong...?
posted by yonglin at 5:44 AM on April 6, 2016 [2 favorites]


Like yonglin, I'm a little confused about the premise of your question. What you report as a "vague intuition" is something that quantitative analysts appreciate and grapple with every day.

I'm in the economics field, and every good paper explores (1) the limitations of the available data, (2) the robustness of the model/methodology, and (3) the validity of the findings and interpretation. We try to capture how much of a phenomenon is explained by a given variable, and how much is left unexplained. We acknowledge that unmeasured variables are always at play.

Now, even a good paper can get bogged down if the interpretation is too bold. Maybe that's what you mean? But in general, it's not the case that quantitative researchers are simplifying observations into numerical data without also understanding the limitations of that approach.
posted by schroedingersgirl at 5:52 AM on April 6, 2016


To me, the historian who has written most thoughtfully and insightfully about this is Tim Hitchcock, in the essays collected on his Historyonics blog. See especially:

Big Data for Dead People: Digital Readings and the Conundrums of Positivism (Dec 2013)
Big Data, Small Data and Meaning (Nov 2014)

Hitchcock understands the value of 'big data' and has used it brilliantly, but he also understands its limitations:
I end up feeling that in the rush to new tools and ‘Big Data’, Humanist scholars are forgetting what they spent much of the second half of the twentieth century discovering – that language and art, cultural construction, human experience, and representation are hugely complex, but can be made to yield remarkable insight through close analysis. In other words, while the Humanities and ‘Big Data’ absolutely need to have a conversation, the subject of that conversation needs to change, and to encompass close reading and small data.
He also stresses the political dimension of all this:
If today we have a public dialogue that gives voice to the traditionally excluded and silenced – women, and minorities of ethnicity, belief and dis/ability – it is in no small part because we now have beautiful histories of small things. In other words, it has been the close and narrow reading of human experience that has done most to give voice to people excluded from ‘power’ by class, gender and race.
And he worries about the drift towards social science implicit in the use of 'big data': 'It feels to me as if our practice as humanists and historians is being driven by the technology, rather than being served by it.' Read the whole thing (including the exchange with Scott Weingart in the comments to the Nov 2014 post).
posted by verstegan at 6:19 AM on April 6, 2016 [3 favorites]


Part of what you can do is to learn more about statistics. On the blue, I often see people complaining that study X didn't control for Y, so obviously it's wrong, but a lot of the time, once you track down the actual study, the authors did in fact try to control for Y in their statistical modeling. Similar things go for the conclusions the authors are drawing: a lot of journalism will exaggerate the claims beyond what the authors actually said. See, for example, this XKCD.
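
As a rough illustration of what "controlling for Y" means in practice, here is a minimal hypothetical sketch: a confounder drives both the variable of interest and the outcome, so a naive regression wrongly credits the variable with an effect, while adding the confounder to the model removes it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical data: a confounder drives both the "treatment" x and the
# outcome y, while x itself has no effect on y at all.
confounder = rng.normal(size=n)
x = confounder + rng.normal(size=n)
y = 2.0 * confounder + rng.normal(size=n)

# Naive regression of y on x: it picks up the confounder's influence and
# wrongly attributes it to x.
X_naive = np.column_stack([np.ones(n), x])
b_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)

# "Controlling for" the confounder: include it as a regressor, and the
# coefficient on x shrinks towards its true value of zero.
X_ctrl = np.column_stack([np.ones(n), x, confounder])
b_ctrl, *_ = np.linalg.lstsq(X_ctrl, y, rcond=None)

print("coefficient on x, naive:     ", round(b_naive[1], 2))  # roughly 1.0
print("coefficient on x, controlled:", round(b_ctrl[1], 2))   # roughly 0.0
```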

And, speaking anecdotally, I'm from a field which encourages the integration of quantitative and qualitative data, and personally, I think it leads to better data and a better understanding of how the world works all around.

For example, as a linguist, I can measure, objectively, certain facts about the accent of a non-native speaker of a language. I can quantify exactly how they say their p's and s's, and thus say something about how that person talks. But I can also use more qualitative data to get an idea of, for example, how that person in particular feels about their second language (Do they like it? Do they not?) and attitudes in general about that particular accent (Is it prestigious? Is it not?). I can use both those types of data to tell a richer story about that accent in general, including (1) what exactly that accent sounds like, (2) people's attitudes and feelings about that accent, and maybe even (3), how (2) affects (1) for a particular person in a particular setting. For me at least, (3) is a much more satisfying and interesting research project than just looking at (1) or (2) alone. (Which is not to say that both (1) and (2) aren't useful and interesting in their own right!)
posted by damayanti at 6:44 AM on April 6, 2016


The usual answer here in political science would be to read King/Keohane/Verba and follow it up with the Brady/Collier reader that opposes it.

Without wanting to sound like more of a dick here than I have to: if you want to make convincing arguments about this at a professional academic level, there's really no alternative but to become substantially knowledgeable about quantitative methods. Trying to be helpful, but I will note in passing that claims about how quantitative analysis leaves out details from the messy, big world are particularly unconvincing because of course it does. The entire point of any analytic or empirical model is to leave out those details so that you can focus on the heart of the question you're examining, and because the only model that is not a gross simplification of the real social world is that real social world itself.

But I can also use more qualitative data to get an idea of, for example, how that person in particular feels about their second language (Do they like it? Do they not?) and attitudes in general about that particular accent (Is it prestigious? Is it not?)

Both of those examples are examples of quantification! Any time you're classifying something as X or not-X, you're quantifying, whether you like it or not. Any time you're comparing things with more or less of something, you're quantifying. Any time you have a simple 2 by 2 table, you've quantified. You might use very simple methods to analyze those data, you might not have very many data points, but what you have there are quantitative data. Even if you refuse to admit it (not saying that you do!).
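
For a toy illustration of the 2 by 2 point, here's a hypothetical sketch (the labels and responses are invented): the moment each interview is coded as "likes the second language: yes/no" and "sees the accent as prestigious: yes/no", you already have categorical, countable data.

```python
import pandas as pd

# Hypothetical codings of a handful of interviews; the labels are invented.
coded = pd.DataFrame({
    "likes_l2":    ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
    "prestigious": ["yes", "no",  "no", "yes", "yes", "no", "yes", "no"],
})

# Classifying is already quantifying: the 2x2 table of counts falls right out.
table = pd.crosstab(coded["likes_l2"], coded["prestigious"])
print(table)                       # counts in each of the four cells
print(table / table.values.sum())  # the same table as proportions
```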

for most of them, knew exactly why (never came to class, had a death in the family, etc.)

These are, likewise, quantitative variables.
posted by ROU_Xenophobe at 7:37 AM on April 6, 2016 [2 favorites]


Both of those examples are examples of quantification! Any time you're classifying something as X or not-X, you're quantifying, whether you like it or not. Any time you're comparing things with more or less of something, you're quantifying. Any time you have a simple 2 by 2 table, you've quantified. You might use very simple methods to analyze those data, you might not have very many data points, but what you have there are quantitative data. Even if you refuse to admit it (not saying that you do!).


Of course -- to clarify, in my field, you can take data by, say, asking people general questions about their language backgrounds and their attitudes about language, and either build that into a more qualitative narrative ("In general, language attitudes in community Z are like this. Subject A shows these attitudes, but subject B is a little bit different...") that accompanies your more quantitative one, or (as I slightly elided over) quantify it more in some way and build it into your model (e.g. trying to build some sort of "language attitude" scale based on people's remarks and include it as a predictor), or both. All of that illustrates, again, how qualitative and quantitative data can build off of each other to tell a bigger story than you could tell with only one type alone.
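
As a purely hypothetical sketch of that last option (invented numbers and variable names, not anyone's real study), here's what including an interview-derived attitude scale as a predictor of an acoustic measurement might look like:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60

# Hypothetical data: a 1-5 "language attitude" score distilled from interviews,
# and an acoustic measure per speaker (say, voice onset time of /p/ in ms),
# simulated so that attitude genuinely shifts the acoustic measure.
attitude = rng.integers(1, 6, size=n).astype(float)
vot_ms = 60 - 3.0 * attitude + rng.normal(scale=5.0, size=n)

# Simple linear model: does the qualitatively derived score predict the
# quantitative measurement?
X = np.column_stack([np.ones(n), attitude])
coef, *_ = np.linalg.lstsq(X, vot_ms, rcond=None)
print(f"estimated slope: {coef[1]:.1f} ms per attitude point (simulated truth: -3.0)")
```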

The move from qualitative data to a more quantified version of it is where things get messier, and I think that is probably where the complaints about oversimplification or "ignoring the real world" come in. But I, and a lot of other people, think that abstracting things away from reality in order to build a model buys you a lot. There will be disagreements about how much abstraction is too much, but in any analysis you need some degree of eliding the details to tell a bigger story.
posted by damayanti at 7:56 AM on April 6, 2016


I think you might find it helpful to read Stephen Stigler's new book, The Seven Pillars of Statistical Wisdom. (Stigler is a chaired professor of statistics at the University of Chicago.) The book is a very readable and useful introduction to the things that statistical analysis can usefully accomplish and the things it can't, and it may also provide you the vocabulary and further leads to investigate some of your intuitions.
posted by willbaude at 8:07 AM on April 6, 2016


While you are reading, get Nate Silver's book The Signal and The Noise.

You can get an idea of the difficulty of using statistical analysis in the social sciences by thinking about rating your wife's or SO's attributes on a scale of 1-5.
posted by SemiSalt at 1:27 PM on April 6, 2016


Perhaps I misunderstood your question, but I thought you were asking something that is a pretty deep question about the philosophy of the social sciences, about the nature of the inquiry they pursue and in what sense it is even possible or meaningful to use the methods of the natural sciences in them. But, you're getting technically oriented answers, about how researchers conduct statistical and quantitative inquiries in such a way as to produce statistically valid results.

If I'm not totally off-base, you might look into the sociologist Max Weber, who seems to have been the founder of an anti-positivist conception of the field, and also the anthropologist Clifford Geertz, especially his concept of the "thick description" of culture.
posted by thelonius at 4:38 AM on April 7, 2016 [1 favorite]


To only answer one aspect of your question, you might like to read John Quiggin's book Zombie Economics. It covers 4 or 5 economic theories which have been proven to be flawed in their mathematical models. Those theories didn't take into account enough real-world human behaviour to survive, and so they failed to predict real economic events. However, they were also disproven by gathering more data - sometimes the best way to deal with problematic data is to gather better data and try again, instead of abandoning the data-driven process.
posted by harriet vane at 7:19 AM on April 7, 2016


This thread is closed to new comments.