How to think critically about research?
May 4, 2014 7:55 PM

I am an undergraduate student frustrated with my lack of "critical thinking" towards research.

I recently went to an undergraduate neuroscience conference. As a psychology major, I have some background in basic neuroscience. I am also a research assistant for a developmental psychology lab at my campus. The conference was unique in that it was presenting research done by undergraduates. Throughout the conference, when viewing poster presentations, I had absolutely no idea how to ask the 'right' questions and to critically think about the research methodology presented. How does one "develop" the kind of skills to analyze research and ask those kind of questions? I feel embarrassed and dumbfounded.
posted by raintree to Science & Nature (16 answers total) 24 users marked this as a favorite
 
One important way is to listen to the questions other people are asking. Even back on campus, attend the next talk/presentation given by someone in the sciences, even if it's geared to the faculty, and then observe what your professors are asking--and how they're framing their questions. You'll start to get a good sense of not only what kinds of hot button issues or debates are being raised, but also the approaches people take when trying to get a handle on the ideas and dig even deeper.

(And also: it's incredibly common to feel like that at conferences! Even your profs do, sometimes. Ask me how I know!)
posted by TwoStride at 8:02 PM on May 4, 2014 [3 favorites]


A few points that aren't necessarily that related to each other:

1. Much of what you're calling critical thinking "skills" is really just knowledge. As you learn more about your field, you'll find that you are better able to make connections between the research being presented and something else you've seen/read/heard/etc. As much as anything else, critical thinking requires extensive background knowledge and the ability to recall that knowledge and compare it to the current situation. Without the knowledge, there is no critical thought. So keep reading, keep attending conferences, and keep researching, and you'll get better.

2. Half the questions asked at conferences are BS, anyway. They're either an excuse to demonstrate the superior knowledge of the questioner or an accidental admission that the questioner didn't understand the presentation.

3. One exercise I've done: force yourself to write down a question for every presentation you sit through. They will likely suck at first. That's okay. You're forcing yourself to put the presentation in the context of your body of knowledge, and that's good.

4. If you're brave, do the above, but force yourself to ask the question no matter what.

5. When in doubt, blame the speaker. Most talks are terrible, poorly targeted, and improperly use slideware to project a mind-numbing series of bullet points onto a too-small screen at the front of a darkened conference room. Most posters are ill-designed cut-and-paste jobs that are too dense to convey any clear message. It's a miracle that any communication happens in the formal sessions of most conferences.
posted by griseus at 8:35 PM on May 4, 2014 [11 favorites]


Critical thinking is a fancy phrase for "looking for the holes". Ask yourself what's good about the things you hear, and also ask what's missing, what's not right, and how it could be better.

One caveat: once you start looking for the holes you'll start to see there are many, and life gets less wonderful. Being less good at critical thinking has major happiness advantages.
posted by Murray M at 8:40 PM on May 4, 2014 [5 favorites]


If you haven't already done it, take a class in research methods.
posted by cotton dress sock at 9:54 PM on May 4, 2014 [3 favorites]


As griseus and others have said, I find questions at conferences to be more about knowledge than skills.

If I'm asking a question, it's because I genuinely want to know something, usually so I can apply it to my own research, e.g. would that analytical technique work with my system? Have you tried such-and-such an experiment? (Because if you haven't, I will!)

Being selfish naturally breeds questions, I find. Imagine your project supervisor had set you some specific research tasks for the conference, e.g. find out the latest developments in your area, find out who the big names are, see if there's a better way to do XYZ... (This is what I actually do with my students, by the way!)

Technique 3 from griseus is basically what you have to do if you're chairing a session, and it can be hard work, but good practice!
posted by firesine at 10:20 PM on May 4, 2014


The biggest secret I learned in grad school is that 95% of "thinking critically" about science is knowing what kind of nonsense other people put in their papers and try to slip past you.

When I read a paper or a conference poster, the first thing I do is look at the abstract: what does the author want me to take away from this poster? What is their agenda? Then I look at the figure panels and think about whether the panels match up with what they are saying. Very often, they don't match up. Is that because the data is presented poorly, or because it says something different than what the authors want it to say? What are the implications if everything they're saying is 100% true? Only after that do I look at the introduction, results and conclusions. If you're familiar with the methods, read them and think about how their protocols differ from yours. Are these differences meaningful? Can they alter the way the data is interpreted? What would it mean for your experiment?

If you see a poster at a conference and a paper subsequently comes out (this will probably happen more on the professional conference level than the undergrad level), look at what changed between the poster and the final manuscript and think about why.

Does your school have journal-club undergrad seminars/classes? I found these useful in undergrad, because the professors often comment on why they personally liked or disliked a paper, and you get to hear how others think about work in their field. Another useful exercise is to look at papers from rival labs and older papers from your own lab, and to see how the rival labs describe work done by your group and vice versa. A lot of the time the comments are stupid, snide, and overly personal, but they are also illuminating when it comes to the weakest points of studies from your group. Are those criticisms valid? Similarly, if someone in your research group worked on topic X in a previous lab, and you see a paper related directly to X, ask them what their perspective is.

I also don't want to presume, but I think it's really, really common for female science students to feel like they have no critical thinking skills when other people charge up and start asking questions and they just feel blank. I felt that way at the beginning of grad school. Then I realized that (1) no, I had no brilliant questions, but (2) neither did anyone else, they were just faking it, and (3) the fact that I thought about my questions at all was probably a good sign. There's nothing wrong with taking a little extra time to think about a problem, and the person you're asking will probably appreciate it.
posted by angst at 11:06 PM on May 4, 2014 [7 favorites]


No need to feel embarrassed. Asking questions at a conference is difficult and I didn't get comfortable asking them until I was well into my PhD.

The more research you do and the more you read about research methods, the easier it will be.

Things I usually think about/ask:

The sample - Why those sampling methods? Is the sample balanced? Does the researcher believe the sample is big enough? What are some interesting characteristics of the sample? How might the sample/sampling method be applied to other research? (For the sample-size question, there's a small simulation sketch after this list showing one way to gut-check it.)

Ethics - Are there ethical issues with the subject and sample, and how did the researcher deal with them (e.g. research on children or other vulnerable people)? How did ethical research issues shape the research itself?

Analysis - Why did the researcher think qualitative/quantitative methods were the best for this research? Does the analysis overlook some important considerations? If I were doing this research, would I do it the same way? (This one, obviously, I'd just think about and then come up with a question based on it)

Findings - Do the findings make sense based on other things that I've read? Are they similar/different to other famous studies? Do the findings have any implications (e.g. how might this change the way the mental health system deals with depression?)? If you could do the research again, what would you do differently? Based on this research, what would your next research project look at?
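
On that "is the sample big enough" point, one way to build intuition is a quick power simulation. This is just a sketch of my own (Python with NumPy/SciPy, a made-up 'medium' effect size, and a plain two-group t-test standing in for whatever analysis the study actually used), not anything specific to a given poster:

```python
# Rough power check for a two-group comparison (illustrative numbers only):
# simulate many experiments with a known effect and count how often an
# independent-samples t-test comes out significant at alpha = .05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size_d=0.5, alpha=0.05, n_sims=5000):
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size_d, 1.0, n_per_group)  # shifted by d SDs
        _, p = ttest_ind(treatment, control)
        hits += p < alpha
    return hits / n_sims

for n in (20, 50, 100):
    print(f"n = {n} per group -> power ~ {estimated_power(n):.2f}")
# With a 'medium' effect (d = 0.5), 20 per group detects it only roughly a third
# of the time -- worth keeping in mind when a small study reports a null result.
```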

There is, of course, another approach to asking questions at conferences which is basically, 'Let me tell you what I think about your research based on my extensive knowledge and then ask you at the end if you agree with my esteemed assessment', which should only be attempted by the most well-read, well-respected academics [and some would argue, not even by them].
posted by brambory at 11:10 PM on May 4, 2014 [3 favorites]


Question everything. Assume that all the research has been done poorly: the methodology sucks, they ignored other possible conclusions for the data, it's based on unverified assumptions. Try to figure out what biases the presenter has. This is easier to do with journal articles, because you have more detailed information and as much time as you want.

If you can, take some small seminar classes. I took a few seminar classes as an undergrad where we basically read three or four papers every week and then discussed them in class. My professor would listen to all of us students discuss a paper, and then he'd tear it apart: this author wants to prove that his previous research is right, that author deliberately picked this specific set of samples, this paper is from ten years ago and everyone was ignoring X. It wasn't that the articles were terribly written; it's just that basically every paper has flaws. I didn't necessarily learn a ton about the topic the class was theoretically about, but I got much, much better at finding problems with papers.

I think this can be a hard skill to learn, because a lot of undergrad is learning basic knowledge. I know that the first time I was seriously reading articles, it was for a research paper, and evaluating the problems with their conclusions made the paper really hard to write.
posted by raeka at 11:12 PM on May 4, 2014 [1 favorite]


When evaluating any scientific work, whether conference presentation, journal paper, or whatever, I would approach it more or less as follows.

First, try to determine what is being claimed. The work will be trying to communicate some new knowledge that has been discovered or developed. Hopefully, it is pretty obvious what this is, but you'd be surprised how often the delivery of the claim gets obscured. The first step is to figure out what the claim actually is. Ask yourself, 'what is the claim being made'?

Second, figure out the basis for them making the claim. Most likely they have done a study and have built up some evidence. Hopefully, the evidence fits the claim, i.e. the data they are presenting actually supports the conclusion they have drawn, but again you would be surprised how often this gets mangled too. Or the claim may overreach -- maybe it is overly general, and really should only hold in a specific case that they tested rather than a broader context they are discussing. Ask yourself, 'what evidence is presented, and does it support the claim?'

Third, examine the evidence itself -- not what it says (its content) but the metadata around it: does it look solid? Performed in a reputable lab by people with appropriate training and experience? Appropriate controls, etc.? As readers, we're often left to trust on faith that proper methods were followed and that the evidence is valid, but (especially with journal papers) it's on the author to convince us of this by providing enough detail about how the evidence was produced. Hopefully standard techniques were applied. If new techniques are being advanced, there should be some kind of validation, such as reproducing known results, to confirm that things are working as planned. You should also think about the strength of the experiments, the likelihood that the results are a fluke, and supporting studies that could increase confidence. Ask yourself, 'how do I know this evidence is saying what is claimed?' There's a bit of a knack to this one -- it's much easier for domain experts.

Fourth, if everything is solid, examine the novelty and significance of the claim. This one is pretty much impossible for non-experts; you need to have a conception in your head of what the frontier of knowledge looks like in your domain. But if you have that, then you can evaluate what the study has done within the broader context, e.g. what prior work it is responding to, how important or new the domain is, whether it is an incremental push or something revolutionary.

One important thing is not to lose sight of the main argument and evidence when critiquing work. It is very easy, seductively so, to trash a paper or poster for clumsy presentation or perhaps an issue with some of the details. There will always be weaknesses you can jump on if you want, and it is easy for a journal club to get stuck in this pattern; especially for novices, it feels good to point out a flaw, and we all start out insecure. But if that's the level scientific critique reaches, we will never get anywhere. It takes more effort to evaluate a paper on the merits of its arguments rather than pointing out its flaws, but it's also more valuable. Even flawed papers advance knowledge a bit at a time.
posted by PercussivePaul at 11:42 PM on May 4, 2014 [3 favorites]


Always always always read between the lines. I'll go into a few examples of "assume the methodology sucks".

Whenever someone comes up with some new statistical technique, which happens often in my corner of neuroscience, it's because the standard techniques either (a) don't address the comparison being made, (b) make bad assumptions about the distribution/reliability of the data, or (c) didn't give a result significant at p < .05. The researcher assumed a priori that there should be something there but couldn't find it with a t-test or similar. So the question is always Why?

Also (similarly) assume that the file drawer is full -- that is, if there's an obvious other question to ask or analysis to be done, but it wasn't presented, it's usually a good bet that it didn't work or give a nice result. And if 50% of what they tried is buried, the effective false-positive rate behind that p value they're so proud of roughly doubles. (Always be suspicious of borderline significant p values when they fall just under 0.05.)
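
To see why "roughly doubles" is about the right size (back-of-the-envelope numbers of my own, not from any particular study): if two independent null analyses are run and only the better one is reported, the chance that at least one clears p < .05 is 1 - 0.95^2, or about 0.0975 -- nearly twice the nominal rate, and it gets worse fast as the hidden pile grows.

```python
# Back-of-the-envelope: effective false-positive rate when only the best of
# k independent null analyses gets reported (illustrative numbers only).
alpha = 0.05
for k in (1, 2, 4, 10):
    print(k, "analyses ->", round(1 - (1 - alpha) ** k, 3))
# 1 -> 0.05, 2 -> 0.098, 4 -> 0.185, 10 -> 0.401
```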

When trials are thrown out, that's a big red flag. It was an outlier, you say? Well, justify its exclusion. "The measurement was so odd it just couldn't be right" does not count. Odd groupings of data are also a red flag -- why did you average within session and then within subject?

Finally, read as much as you can about the problem of multiple comparisons, and of failing to correct for them, because oh my god it's so incredibly common to f this one up.
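
Here's a toy demo of that last point (my own made-up setup in Python/SciPy, not taken from any real study): run 20 independent tests on pure noise and you'll "find" at least one significant result about 64% of the time; a simple Bonferroni correction pulls the familywise error rate back to the nominal 5%.

```python
# Toy demo of the multiple-comparisons problem (illustrative only):
# 20 independent t-tests on noise, with and without Bonferroni correction.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_sims, n_tests, n, alpha = 5000, 20, 30, 0.05
any_hit_uncorrected = any_hit_bonferroni = 0
for _ in range(n_sims):
    pvals = [ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(n_tests)]
    any_hit_uncorrected += min(pvals) < alpha
    any_hit_bonferroni += min(pvals) < alpha / n_tests  # Bonferroni-corrected threshold
print(any_hit_uncorrected / n_sims)  # ~0.64: at least one spurious 'finding'
print(any_hit_bonferroni / n_sims)   # ~0.05: familywise error back at the nominal rate
```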
posted by supercres at 5:11 AM on May 5, 2014


One thing that's worth noting - critical thinking about research doesn't have to mean being critical about research.

Yes, it is useful to look for the various types of holes in people's research, and yes, there will likely be some, and yes, that can be a useful way to get some practice. But, if I may get a little starry eyed here, the best part of being in science is learning cool stuff. So you can also ask yourself - do I understand what they just said? Do I know why they did A->B->C? Would I have done the same thing? Did I learn a different way to do it? Does this agree or disagree with what I thought would happen? Why? Is there a paper that had conflicting results, and who do I believe more? What experiment would I run next if this was my work? [On preview, brambory laid this out in a much better way]

Many of these lead you to critical questions for the speaker, but more as a way of improving your own science and thought process, not just critiquing theirs. Also, even though no one wants to look stupid in public, it really is okay to ask a clarifying question, particularly as an undergrad. Now is your chance to learn all the things! Be engaged enough to take it!

(Also, don't feel obligated to ask a question just to ask a question. There is nothing more annoying than someone asking a stock question like "does the effect vary by gender" because they feel they haven't filled some kind of imaginary quota of participating for the day.)
posted by synapse at 6:21 AM on May 5, 2014 [3 favorites]


The best lesson I learned in thinking critically about research is this: the most important part of a research paper is its Methods section. People flip to the results and the implications, but with research you ask a VERY specific question and then you get the answer to that specific question (assuming the experiment wasn't botched or contaminated, and the results weren't selected in a biased manner).

Who was in the study population? Do those results translate to others? How was the response measured? Does that method really answer the general question that was asked?

By these standards (and other questions you can generate) every study is limited. That is not a bad thing; we learn from cumulative and sometimes contradictory studies and results.
posted by dances_with_sneetches at 6:40 AM on May 5, 2014


My big theory of learning is that it's hard to insert information into your head when there's not a hole that you've cleared out to put it in, with context. Read over the abstract ahead of time and translate it into a one-sentence, 5th-grade-English statement. Ask yourself a question about the research. "They measured X to be 5% in Australians. I wonder if the exciting part is that it's 5%, or that they were able to do the measurement at all. Is 5% a lot or a little? Was it a new technique?" Then when you go to the talk, you've already defined gaps that you want to fill in, and you'll be able to listen more closely and understand/process the information better, because you've defined ways in which you need to know the things they're telling you. The more often you do this, the better you'll get at gauging whether the research is interesting to you and/or to the discipline as a whole, or whether they just did the experiment to practice the technique; whether it's preliminary, really solid data, or just polishing and filling holes in an older idea; whether you're more interested in the result or the technique; etc.

If you'd like to work on asking intelligent questions at talks, start by including a friend in your pre-talk prep. "Hey buddy, check this talk out, it's about [summary sentence]. Do you think [speculation]?" Then the two of you can discuss it, and you may come up with a few things you wish you knew. Go to the talk, and you'll find yourself giving each other significant looks during the presentation ("see, Australians aren't special, it's just because the researcher lives there!"). When the talk is winding up, if there's still one of your gaps that's not covered, ask your question. It's already been vetted by your friend as being an interesting question that the presenter didn't mention the answer to, so it's a great thing to ask.
posted by aimedwander at 6:42 AM on May 5, 2014 [2 favorites]


People have done a great job in this thread of giving you a sense of how you can think critically about research. If you are interested in an organized learning experience, you might enjoy looking up Critical Appraisal. This goes beyond how to interact with conference poster presenters; it's what researchers use to evaluate evidence.

There is a series of online learning modules on how to evaluate different types of research from Canada's National Collaborating Centre for Methods and Tools (catchy name, right?): http://www.nccmt.ca/learningcentre/index.php#main2.html
posted by Gor-ella at 10:17 AM on May 5, 2014


Lots of good advice above. You need to read a LOT about a subfield (and the background and basic courses that lead up to 'expertise' in the subfield) before you can seriously think critically about it.

I agree with griseus about horrible communication in papers/posters. HOWEVER, because of these communication problems, one trap to avoid is the following: when a paper is presented in an engaging style, or particularly clearly (I'm looking at you, science popularizers!), or in a way that appeals to your prejudices, it can shut down your critical thinking skills because it All Seems To Make Sense and It Looks So Good.

One quick test for whether you understand the work is as follows:

If the paper/presentation asserts "If datum X is observed then this supports conclusion Y", then ask yourself the following questions:

Would you agree with the statements below if they were presented with the same panache?

"If datum X is NOT observed, then this supports conclusion Y."
"If datum X is observed, then this supports the opposite of conclusion Y"


If you can't differentiate these three cases, then you don't understand the work well enough to critically evaluate it.

I catch myself out all the time when I apply these tests to presentations that appear to be "good"....
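
One way to make the test concrete, if it helps (the odds framing here is my own addition; the test above doesn't mention probabilities): a datum X only supports conclusion Y to the extent that X is more likely when Y is true than when it's false. If you'd nod along to both "X supports Y" and "not-X supports Y", you're implicitly saying the observation carries no information at all.

```python
# Toy odds check (my own framing of the test above, not anything from a talk):
# a datum supports a conclusion only if it's likelier when the conclusion holds.
def posterior_odds(prior_odds, p_x_given_y, p_x_given_not_y):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_x_given_y / p_x_given_not_y)

# If X is twice as likely under Y, observing X doubles the odds on Y:
print(posterior_odds(1.0, 0.6, 0.3))  # 2.0
# If X is equally likely either way, observing X tells you nothing about Y:
print(posterior_odds(1.0, 0.5, 0.5))  # 1.0 -- "X supports Y" would be an empty claim
```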
posted by lalochezia at 10:21 AM on May 5, 2014


Taking a different tack, some overlap with this thread.

In many fields, statistics are important but poorly grasped, leading to very flawed papers and "established" wisdom. Master stats.
posted by PickeringPete at 10:29 AM on May 5, 2014


This thread is closed to new comments.