Let's all be scientists!
August 11, 2014 1:17 PM

Setting aside all the political and financial pressure that dilutes and warps science: if one wants to engage in scientific inquiry, be productive, and contribute in some small way to humanity's scientific understanding of the cosmos, how does one go about it? Can we boil things down to simple generalized steps -- brief enough to put on a wall plaque -- starting with the evaluation of an experimental idea (is it a "good" experiment?) all the way through to publishing one's reproducible results?

Can we generalize these steps enough so that they would apply to science on all levels, from a primary school science class reproducing age-old results and publishing them on their classroom wall to CERN scientists publishing in Nature?
posted by strangeguitars to Science & Nature (18 answers total) 4 users marked this as a favorite
 
Are you thinking of something beyond The Scientific Method, which is usually described as exactly this?
posted by damayanti at 1:25 PM on August 11, 2014 [8 favorites]


Here's a well known monograph from 1964 that addresses your question: http://m.sciencemag.org/content/146/3642/347.citation
posted by u2604ab at 1:39 PM on August 11, 2014


A well-known mnemonic for the scientific method is:
A Fast Teacher Always Drinks Coffee

Ask Questions
Form Hypothesis
Test Hypothesis
Analyze Results
Draw Conclusions
Communicate Results
posted by Flood at 1:44 PM on August 11, 2014 [2 favorites]


Response by poster: @damayanti:
I spent a lot of time on that page before formulating my question.

1. It might be hard to fit that on a plaque (of course you could just list the bold items, but I'm not sure those items explain enough).

2. I'm not sure all of those are really required -- maybe I'm concentrating on the experiment and results aspects more than the hypothesis. (Why do we have to have a hypothesis? What if we just want to see if anything interesting happens when we mix two things together?)

3. Are there other things that need to be addressed? What about isolating variables? Or determining causation?

@Flood:
The short sentences in your answer are great, and still concise enough that they could be expanded upon (in quantity and/or quality) and still fit on a plaque.
posted by strangeguitars at 1:54 PM on August 11, 2014


Maybe I'm a little biased because I spent a couple of decades of my life studying/practicing/working at science, but each of the steps in the scientific method is there for a reason, and skipping them to suit whatever purpose you have in mind means you're no longer engaging in science as it is done today.

why do we have to have a hypothesis? What if we just want to see if anything interesting happens when we mix two things together?

The scientific method separates naturalists from scientists. Both are observers, yes, but only one has a means of systematically approaching the subject. The SM keeps us honest, so to speak, when working toward a categorical system of knowledge. Can science be done without the scientific method? Probably. Is it worth much to scientists nowadays if it is not done by this system? Probably not.

Are there other things that need to be addressed? What about isolating variables? Or determining causation?

This is all addressed by and within the steps of the scientific method.

It's a good system! No need to throw it out just yet because it's been tainted by political and financial concerns.
posted by GoLikeHellMachine at 2:30 PM on August 11, 2014 [1 favorite]


(Why do we have to have a hypothesis? What if we just want to see if anything interesting happens when we mix two things together?)

There is always a hypothesis -- in this case, your hypothesis would be "something interesting will happen when we mix X and Y together", and you'd test to see if that was true.

For it to be testable, though, it's better not to be just stabbing in the dark: you should ideally already have some sort of reason -- even if it's just a hunch -- for thinking that something will cause something else, and it's better to have a specific hypothesis. Otherwise, what are you testing for, and how do you test it? "Something interesting" could be, say, a temperature change, or a volume change, or a mass change (something gets released), or a color change, or fifteen other possible changes, and these might be too subtle to see with the naked eye. So you'll need to have the right tools available, and to do that you'll need to have some hypotheses about what will happen.

For it to be useful, you need to know why you're asking what you're asking. Science works much better if there's something you're trying to answer or resolve. You don't always end up getting there, or sometimes you find something else, but knowing why you're doing what you're doing is of paramount importance. In fact, in my opinion the hypothesis step is the most important step in the scientific method, and one that's easy to extrapolate from kindergarten to publication, and from general to specific.
posted by brainmouse at 2:31 PM on August 11, 2014 [2 favorites]


Hypothesis, prediction, testing to see if your prediction was right or wrong -- that's the boiled-down essence of it. Investigating is *good*, but the most useful thing to do when you want to make new knowledge is to take everything you know about a phenomenon, see if you can predict one or two or more things that *might* happen if you adjust the context of the phenomenon, and then figure out how you would *know* which one of those things happened, if any of them did. Predict, test, reflect.
posted by Made of Star Stuff at 2:42 PM on August 11, 2014 [1 favorite]


Test Hypothesis
Analyze Results
Draw Conclusions



This is a very Platonic ideal, and it loses much of the reality and complexity of experimental design and analysis.

The scientific process is, at its heart, about bracketing uncertainty, about knocking down the size of the error bars. That's what p-value arguments are about, and why the Bayesians think all experimentalists are fools (I kid because I love).

The testing, analysis, and concluding steps have to be understood not as binary, even Manichean, processes, but as part of a series of successive approximations. The classical hypothesis->test cycle isn't wrong, as such, but it misses the trees for the forest. It's annoying to see this formulation because it leaves the impression in the popular mind that science is a much more certain process than it really is.
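To make the "knocking down the error bars" point concrete, here's a toy sketch (Python with NumPy; the numbers are invented purely for illustration) of successive approximation at its simplest -- the error bar on a measured mean shrinking as data accumulate:

    import numpy as np

    rng = np.random.default_rng(42)
    true_value = 5.0   # the quantity we are trying to pin down
    noise_sd = 2.0     # measurement noise

    # The error bar (standard error of the mean) shrinks roughly as
    # 1/sqrt(n) as we take more measurements.
    for n in (10, 100, 1000, 10000):
        data = rng.normal(true_value, noise_sd, size=n)
        stderr = data.std(ddof=1) / np.sqrt(n)
        print(f"n={n:>5}: estimate = {data.mean():.3f} +/- {stderr:.3f}")

No single run "proves" the hypothesis; each round of data just brackets the answer more tightly.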
posted by bonehead at 2:55 PM on August 11, 2014 [5 favorites]


The thing about the scientific method is that, as has already been noted, it eventually demolishes all the petty human considerations like politics and money, and produces truth. Which is a good thing, because we are all, scientists included, petty little humans. People just need to keep turning the crank and checking to make sure the machine is oiled.

As for what you can do and how to get started: pick something that doesn't seem to get much attention, then test your hypothesis that it doesn't get much attention by doing literature research and talking to people who might know better than you. Repeat until you find a sufficiently unexplored niche. Then keep on going.
posted by Good Brain at 5:36 PM on August 11, 2014


if one wants to engage in scientific inquiry and be productive and contribute in some small way to humanity's scientific understanding of the cosmos, how does one go about things?

Mature science is all about realising that the Baconian ideal of confronting a falsifiable hypothesis with experiment is an aspiration rather than the reality.

Science is really about uncertainty and humility. These are subtle ideas to get across on a t-shirt.

For what it's worth, I try to do my bit as a scientist by telling people that I Fucking Love Science is toxic garbage, which you could put on a plaque.
posted by caek at 6:01 PM on August 11, 2014


Response by poster: I'm definitely not trying to toss out the scientific method! I'm just trying to see if it needs to be re-expressed in a way that can be "put on a plaque" for anyone looking to use it to guide them.

Flood's mnemonic seems to be a great start, but why are his steps (which he labels "well known") different from the steps on Wikipedia?

Question
Hypothesis
Prediction
Experiment
Analysis

How is it that prediction didn't end up in the mnemonic? Is prediction less important?

@GoLikeHellMachine: the WP article doesn't say much about isolating variables or causation. In which steps are these contained?
posted by strangeguitars at 8:22 PM on August 11, 2014


The scientific method is not Officially Codified anywhere. Sometimes different people break up the steps differently or name them differently, but the sentiment is the same.

the WP article doesn't say much about isolating variables or causation. In which steps are these contained?

These wouldn't be contained in individual steps; they would likely be complete passes through the method. Say you know A is correlated with B. You might hypothesize that A causes B. Then you would design an experiment to test your hypothesis, and analyze the results. Isolating variables could just be part of good experiment design regardless of the hypothesis, or again could be its own hypothesis (hypothesis: "B does not occur if we control for variable C").
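A toy simulation of that last hypothesis (Python with SciPy; the setup is invented for illustration): a hidden variable C drives both A and B, so the raw correlation between A and B is misleading until you control for C.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 10000

    # Toy world: a hidden variable C drives both A and B, so A and B
    # are correlated even though neither causes the other.
    C = rng.normal(size=n)
    A = C + rng.normal(scale=0.5, size=n)
    B = C + rng.normal(scale=0.5, size=n)

    r_raw, _ = stats.pearsonr(A, B)
    print(f"raw correlation of A and B: r = {r_raw:.2f}")

    # "Control for C" by looking only at cases where C is nearly fixed;
    # the apparent A-B relationship should vanish.
    mask = np.abs(C) < 0.05
    r_ctl, _ = stats.pearsonr(A[mask], B[mask])
    print(f"with C held fixed:          r = {r_ctl:.2f}")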
posted by brainmouse at 11:02 PM on August 11, 2014


I hate the Scientific Method. It's a completely valid description of what scientists do, and it serves its purpose for introducing people to the world of scientific research, but it's also pretty damn vague and, as bonehead wrote above, does not really reflect reality in any practical sense. And herein lies the problem with the premise of your question: the only way to describe what all the different disciplines do is to be so vague that it loses currency as a how-to guide.

The underlying idea, on the other hand -- that anyone should be able to contribute to science -- has a great deal of merit. But presuming that you're not just talking about data collection and actually want people to formulate new questions and test them themselves, they will have to engage their brains in ways that cannot be readily distilled into a simple go-to list. That being said, for narrower fields of study it might be possible to generate flow charts to guide people through some of the procedures.

If you want mnemonics to help prevent would-be scientists from wasting their time, I have a couple that spring to mind. These are things that even professional scientists forget from time to time, and as such are exactly the kind of thing I'd like to see on a plaque.

Top of my list is: "Know your assay". Know what you're measuring. Know what the assumptions are. Know what you can answer and what you cannot answer. Know what the variables are. Know what the distributions look like. Know when you're looking at a real difference and when you're looking at natural variation.
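For instance, a minimal sketch of the "real difference vs. natural variation" part (Python; the readings are made up): characterise the spread of your control measurements before deciding whether a new reading means anything.

    import numpy as np

    rng = np.random.default_rng(7)

    # Step 1: characterise the assay. Run the same control sample many
    # times to see what natural variation looks like.
    controls = rng.normal(100.0, 3.0, size=50)   # toy control readings
    mu, sd = controls.mean(), controls.std(ddof=1)
    print(f"control mean = {mu:.1f}, spread (sd) = {sd:.1f}")

    # Step 2: only then judge a new reading. A value within ~2 sd of
    # the control mean is indistinguishable from natural variation.
    new_reading = 104.0
    z = (new_reading - mu) / sd
    verdict = "looks like noise" if abs(z) < 2 else "worth a closer look"
    print(f"new reading z-score = {z:.2f} ({verdict})")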

There are others:
"Correlation is not causation"
"The world is full of questions, not all of them can be answered"
"Read the damn literature"
posted by kisch mokusch at 8:10 AM on August 12, 2014 [1 favorite]


Seconding kisch mokusch -- and "Read the damn literature" gets my vote.

Until you're doing original research, you're not contributing anything other than a general thumbs-up to the method. Thing is, you won't know if your research is useful unless you're clued into the field and have a strong understanding of what's already known/in process. That is why it's so hard to contribute to science as a non-professional: a lot of ground has been covered already.

Alternatively, you could go for 'Become a test subject'.
posted by freya_lamb at 9:54 AM on August 12, 2014


Response by poster: "Read the damn literature" can be difficult when:

-Most of it is behind paywalls
-Some of it is computer-generated bullshit
-Some of it is written extremely poorly

Maybe "Read what you can and be discerning"? (Especially for someone who isn't a tenured professor or an employed scientist with access to major research libraries and their expensive subscriptions.)

Thumbs ups (or downs) are important too.

-Remember Podkletnov?
-The results of experiments that have already been done in one community might not be available in another.
-Repeating experiments with verified results can help one develop one's experimental techniques, much as learning song covers can help an aspiring songwriter or improvisor gain ideas for song structure and technique for playing.

It's quite discouraging that the SM is so vague, and more so that there is sentiment along the lines that if you're not covering new ground, you're not contributing anything.

It seems to me that any and all of us curious creatures should be encouraged to do lots of experiments that will tell us about the world we live in. And we should be developing and disseminating guidance that helps ensure that these experiments are done well.
posted by strangeguitars at 11:59 AM on August 12, 2014


Here's how I approach the process with my students and employees:

What's your question?
Has someone already got an answer?
What new can you add?
Can you think of a hypothesis that we could give a clear answer for with an experiment?
Can you do the experiment carefully, controlling for all the possible problems and other factors?
Can you discern a result in your data? What is the null result, and how does your data differ from it? (See the sketch after this list.)
What's the quality of that result? How clearly does it test your hypothesis?
Can you write a report that gives a structure from problem statement to the refined hypothesis, through testing, then analysis and result? Don't forget to say how significant that result is.
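For the null-result step, a minimal sketch (Python with SciPy; the data are simulated for illustration): the null hypothesis is "the treatment does nothing", and the test asks how surprising the observed difference would be if that were true.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Null result: the treatment does nothing, so treated and control
    # readings come from the same distribution.
    control = rng.normal(10.0, 2.0, size=30)
    treated = rng.normal(11.5, 2.0, size=30)   # simulated real effect

    # A two-sample t-test asks how surprising the observed difference
    # would be if the null result were the truth.
    t, p = stats.ttest_ind(treated, control)
    print(f"difference of means = {treated.mean() - control.mean():.2f}")
    print(f"t = {t:.2f}, p = {p:.3g}")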

Non-specialist citizen science is a real thing. The largest single problem with non-specialist-gathered knowledge is that its quality is unknown. Note that the list above has many steps where controls on data or tests are extremely important. Traditional knowledge sources have to be treated as uncontrolled and mostly qualitative: indicative rather than conclusive. They can help point the way, but can't be used solely as evidence of causation.
posted by bonehead at 12:40 PM on August 12, 2014 [1 favorite]


For what it's worth, as a working scientist, I share some of strangeguitars' quibbles with the usual formulations of the scientific method. "Mix two things together and see if anything interesting happens" and "notice something interesting and measure it" are as much a part of actual science as the usual "form a hypothesis and test it" version presented in textbooks. Coming up with testable claims is absolutely vital, and is more or less the difference between science and many other pursuits, but discovery is also a part of science, as is building novel instruments.

But, it's also true that if the next step after discovery doesn't look a lot like the scientific method, it's unlikely the outcomes will have the qualities we associate with science.

Calling "something interesting will happen when we mix X and Y together" a hypothesis strikes me as silly. It may be necessary when filling in the blanks on a science fair registration form, but as a framework to structure inquiry it adds nothing to the process. The formal result of the test is "yes" or "no," but all the actual science happens when looking into the unanticipated details and figuring out what they mean. "Something interesting happened" isn't a result that leads anywhere, nor a result that anyone cares about. I suppose one could say, "I hypothesize that after mixing these two things together and examining the results the experimenter will notice each of the following list of all possible the observables that I can imagine." But nobody would ever say such a thing, because "let's see what happens" captures the actual goal of the experiment. While it's true that most of the time people have a pretty good idea what sort of thing they expect to happen when mixing things together, it's also true that lots of historical discoveries were complete surprises discovered while testing unrelated ideas.

If there are useful guidelines that separate science from not-science that looks like science, I'd argue they are:

1a. Always make claims that are testable
1b. Articulate and then question your assumptions
1c. Never claim something is true without being able to explain what evidence would convince you that you're wrong
2. Show your work and share your data

As kisch mokusch and freya_lamb point out, it's hard to come up with questions that are both new and soluble without dedicating years to the effort. (Answering questions that aren't new is great, and is also more or less what most scientists spend the first five or six years of their training doing.) But it's more or less impossible to do so if you're keeping secrets or enslaved by unassailable truths.

On preview - it's true that there's some bad literature out there. But, it's also true that the average peer reviewed paper is overwhelmingly better than the average self-published manuscript written by someone who hasn't read previous work on their topic. Most of the crank literature I get can be immediately falsified with well known and thoroughly documented experiments that appear in textbooks. Becoming familiar with the work that other people have done when looking at the questions that interest you really is the best way to avoid wasting years on false starts.
posted by eotvos at 1:02 PM on August 12, 2014 [1 favorite]


"Read the damn literature" can be difficult when:

-Most of it is behind paywalls
-Some of it is computer-generated bullshit
-Some of it is written extremely poorly


You're preaching to the choir there. You'll be pleased to know that the future of scientific publication is open access, what with the rise and rise of PLoS. I've even seen Elsevier papers become open access, which is truly remarkable given their position on the matter in the past.

It's quite discouraging that the SM is so vague, and moreso that there is sentiment along the lines that if you're not covering new ground, you're not contributing anything.

There is nothing to stop you from repeating somebody's experiment and putting the results onto the web or in a book or however you wish. But journals are pretty much only interested in whether the paper will be cited by other researchers, since the number of citations is the primary metric by which journals are ranked (and all journals aspire to be more prestigious). We can have a conversation about how the system stifles research but it won't change the reality of the situation.

It seems to me that any and all of us curious creatures should be encouraged into doing lots of experiments that will tell us about the world we live in. And we should be developing and disseminating guidance that helps ensure that these experiments are done well.


I agree with the sentiment, and there are in fact many ways in which non-scientists can contribute to science, most of which are at the level of data collection. Collectively, I'm pretty sure that bird watchers and other naturalists contribute more raw data to the field of ornithology and other branches of zoology than the "professional" scientists do. And the search for extraterrestrial life is aided by thousands of personal computers. Also, there are people who have solved crystal structures of proteins by outsourcing the computing to gamers' PlayStation consoles.

But without the right tools at your disposal, it is unlikely that non-professional scientists can be in the driving seat of these sorts of enterprises. Personally, I believe that there is a huge amount of untapped potential in the general public. But I would rather see their time spent productively in a coordinated initiative that breaks new ground than have everyone repeating experiments that have been done hundreds of times over.
posted by kisch mokusch at 7:26 PM on August 12, 2014

