Creating Methods of Analysis for Dummies
September 3, 2011 12:19 AM   Subscribe

When developing a method of analysis in scientific research, which rules and guidelines can you follow to make sure that the qualities by which you analyze things are valid, sound, relevant and related?
posted by Foci for Analysis to Science & Nature (15 answers total) 3 users marked this as a favorite
 
Can you be more specific?
posted by Mr. Papagiorgio at 12:25 AM on September 3, 2011


In which regard?
posted by Foci for Analysis at 12:25 AM on September 3, 2011 [1 favorite]


Let's say that I want to analyze board games. I want to create a method of analysis such that I can (a) analyze board games via several qualities (e.g. fun, replayability, luck-skill, etc) (b) understand what makes a good or bad board game.

How would I go about creating such a method? How do I know that the qualities are the right ones?
posted by Foci for Analysis at 12:30 AM on September 3, 2011


I'm sure others will come along with better answers, but my initial thought is to first follow the Scientific Method. Also, scientific controls are usually needed to avoid false positives and false negatives when it comes to testing.

My understanding is that scientists sorta "piggy back" or copy what has been done previously. That is, if certain tests are usually done in particular experiments then they do those same types of tests for their similar experiment. They sort of build off of what has been done before.

I'm sure there are research articles or something describing ways of measuring board game qualities if you're really interested.
posted by Mr. Papagiorgio at 12:40 AM on September 3, 2011


Another thought is that the qualities being tested should be objective. So how does one measure a quality such as fun? By player questionnaires, by questionnaires about the person from fellow players, or something more? Perhaps replayability could be measured by how many times a player replays the game.

My understanding is that over time the scientific method finds the "right" answer through iteration (i.e., calculated trial and error).
posted by Mr. Papagiorgio at 12:49 AM on September 3, 2011


I can think of a few answers. I'm going to guess that it depends on your epistemological stance:

1. Focus groups in which you ask "what qualities in a board game are important to you?" This has the underlying assumption that people know the answer and will be honest about it. You could have them play some games and then report---or you could record the playing and do a discourse analysis of things they say while playing to justify your conclusions about what they like.

2. You could rely on other people's published research about what makes a good board game. This assumes those other researchers did everything right and have defensible underlying assumptions.

3. You could simply state what you think makes a good board game and be transparent about it and work from there, letting others decide whether to disagree with you or accept your starting point.

You're starting from the viewpoint that there are definite, measurable qualities---that there is some objective truth. There are people who would disagree, so you're already taking a stand.

That's why there's no straightforward answer to your question.

The scientific method will create a very specific type of truth.

*I created a board game...and then had to decide whether to make it the focus of my dissertation by creating a research project around it.
posted by vitabellosi at 12:53 AM on September 3, 2011


I know your question is broader, but for fun I googled around and found this paper titled MDA: A Formal Approach to Game Design and Game Research. I think the "Aesthetics" section on pg. 2 would interest you.

I also discovered a giant Game Developers Conference (although it's geared towards online and computer gaming, the principles seem similar). For example, at their conference there is a talk titled How Metrics Are Ruining Your Game: Common Pitfalls and Uncommon Solutions, and another talk titled Designers are Human Too - Causes of Poor Design Decisions. Kinda interesting if that's your thing.
posted by Mr. Papagiorgio at 1:08 AM on September 3, 2011


There's a kind of bootstrapping approach in bioinformatics, where computer models are validated against "gold standard" experimental data. The abstract idea is that some data are not only better than others, but that you can use "known good" data to improve or condition new observations. A "gold standard" is a level of quality that is agreed upon by the researchers in the field, whatever that standard might be. In my case, this is based upon lab techniques which have been used for decades, and which are well-understood and well-characterized.

How would you apply this to games? I have no idea. You might start from first principles and try to determine what is common to various games, to determine what you can encapsulate about your game, such that you can compare it in clearly measurable qualitative and quantitative ways with others. What are the probabilistic characteristics about taking a turn? How does this affect playability? How do these features compare with other games in the same genre? That sort of thing.
posted by Blazecock Pileon at 1:11 AM on September 3, 2011


I'm not sure if your example was just an example and you're looking for an answer to the more general question, or if you really want help with board game analysis. If it's the latter, a lot of the other responses here look pretty good.

If it's the former: Development of brand new assays takes a lot of time and requires extensive testing and validation (and even then isn't truly validated until other scientists try using the same assay with similar outcomes). There is no protocol or method for coming up with a completely novel way of assessing something. It comes as a result of creativity, serendipity, ingenuity, foresight and of course necessity and hard work. Completely novel assays can also be worth a lot of money when it comes to, say, finding potential tests for cancer and disease.

Most scientists, when faced with a question, will just find a pre-existing (and validated) assay and modify it to suit their purposes (the use of focus groups, for your example). Scientists put a lot of stock in precedent.
posted by kisch mokusch at 1:45 AM on September 3, 2011 [1 favorite]


Well, if you're a hard-science person, this will seem insultingly obvious -- but if you're a non-science kind of person, maybe not, so here goes:

If I can't think of a way to disprove -- to falsify -- my hypothesis, then I need to narrow my hypothesis until I can think of a way to disprove it. Then, I try to disprove it. If I can't, maybe I've got something.

So in your example... I want to create a method of analysis such that I can (a) analyze board games via several qualities (e.g. fun, replayability, luck-skill, etc) (b) understand what makes a good or bad board game. ... you'll have to define "fun," "replayability," "luck/skill," and "good" in measurable ways in order to work up a hypothesis.

Here's a recent example from the green. A guy had a hypothesis: People who hate cats did not grow up with cats. If X (hate cats), then not Y (grew up with cats). A reasonable hypothesis, and he investigated in a reasonable way. He asked people who hate cats to tell him whether they grew up with them. If they did, then he's disproved his hypothesis. If they did not, then he's supported his hypothesis for the time being. What he got was many answers from people who liked cats, and who did and did not grow up with them -- not relevant to his hypothesis -- and some who hated cats, who did and did not grow up with them. Since he found several cat haters who did grow up with cats, his hypothesis was falsified -- the If X then not Y relationship he posited did not exist, at least not as an absolute. Could he have been hearing from the only three cat haters who grew up with cats to exist in the world? Sure. If this were cancer or public policy or something, he'd need a little more rigor. He'd want to make sure that he was getting data from random cat haters, not (a) self-selecters, aka people-with-a-beef (b) liars, or because this is the Internet, (c) dogs.
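That falsification check is mechanical once the responses are in hand: scan for any respondent where X and Y are both true. A minimal sketch with invented survey data (the field names are illustrative, not from any real dataset):

```python
# Hypothesis: if a person hates cats (X), they did not grow up with cats (Y).
# A single respondent with both X and Y true falsifies it.
responses = [
    {"hates_cats": True,  "grew_up_with_cats": False},
    {"hates_cats": False, "grew_up_with_cats": True},   # irrelevant to the hypothesis
    {"hates_cats": True,  "grew_up_with_cats": True},   # counterexample
]

counterexamples = [r for r in responses
                   if r["hates_cats"] and r["grew_up_with_cats"]]

if counterexamples:
    print(f"Hypothesis falsified by {len(counterexamples)} respondent(s)")
else:
    print("Hypothesis survives, for now")
```

Note the asymmetry: cat-likers never affect the result, exactly as described above, because the hypothesis says nothing about them.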
posted by pH Indicating Socks at 1:53 AM on September 3, 2011 [1 favorite]


and even then isn't truly validated until other scientists try using the same assay with similar outcomes

Repeatability is how scientists agree on models (hypotheses) of how the world works. If others try to repeat your testing procedure and get different answers, then there is usually something wrong with your measurement or your procedure/model.

While scientific articles tend to gloss over details, authors make supplementary materials available that go into greater and more explicit detail about their experimental process.
posted by Blazecock Pileon at 2:16 AM on September 3, 2011


Thank you all for your replies.

I feel like a klutz for not telling the following facts. I will use this method to less stringently analyze things on my blog, i.e. I'm not planning on using it for Proper Scientific Research. I would, however, like the method to share some qualities with scientific methods so that the analyses aren't completely inaccurate. So I'm thinking the most suitable approaches are vitabellosi's second and third category.

Thank you Mr. Papagiorgio for the links! Those actually seem to be very relevant.

Blazecock Pileon, you are advocating a more statistically oriented approach, yes? I think this is interesting and could be very valuable if pulled off. What I worry about is that ultimately, I'm trying to model aesthetics and I'm not sure that stats are the best approach. But I'll keep this approach in mind.

kisch mokusch, then I don't have to feel so bad about not knowing how to go about this. Maybe what I should be doing is trying to find an existing method or, failing that, to look at related fields for a method that might be relevant.

pH Indicating Socks, I'm not quite sure that falsification is relevant for my intent and purpose. Ultimately, I guess I'm trying to analyze aesthetics, design and human behavior - pretty complex concepts. Any method that would truly be stringently scientific would probably be incredibly complex and therefore near useless.
posted by Foci for Analysis at 8:58 AM on September 3, 2011


You're right, Foci -- I cracked myself up thinking about indignant dogs responding to cat threads, and I didn't tell you the other part.

In a case like yours, I'm thinking your hypothesis would be something like this: A "good" game (define good: avg purchaser rating ≥ 8.7, purchaser reports playing ≥ 20 times) with "broad appeal" (define broad appeal: stdev of player age ≥ 8 years, sales ≥ 20,000 units) fits the following model... and then you quantify the aspects of such a game (relative reliance on luck/skill, amount of time required to complete, number of players required, etc...)

So if you want to be kinda sciency about this, you need a lot of data on games, data that is known, and much of which can no doubt be pulled from some website. Then you look at that data... and I really mean, just look at it. Put it all on an excel spreadsheet, and sort by this, then sort by that, and see if you see any two categories that seem to have a relationship. If you think you see that, for example, purchaser rating goes up as time required to complete goes down, then apply a statistical test to see if the numbers really are doing that. Which one, you ask?

So if you get all this data, and analyze it, at the end you'll have a numerical representation of "a good game" (with or without broad appeal) and a numerical prediction (that's your model) of which games will be good. Then you have a nice input-output and can see if you're right.
posted by pH Indicating Socks at 2:07 PM on September 3, 2011


oh, I like the idea of clear definitions and basing this on publicly available stats, pH Indicating Socks. Thanks again for all the feedback, great stuff.
posted by Foci for Analysis at 10:07 PM on September 3, 2011


Is your research actually going to be on board games, or are you just using that as your example? If you are, then I assume you know about BGG, yes? They have a fairly extensive API that provides a lot of the data you're looking for (game info broken out into genres/mechanics, user ratings, etc.). For "my blog"-level analysis, I think you could do a lot with that and some Statistics 101.
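BGG's API returns XML, so the stdlib is enough to get ratings into a spreadsheet-ready form. A sketch parsing a canned response whose element and attribute names are illustrative of the general shape, not a spec (check the actual API docs before relying on them):

```python
import xml.etree.ElementTree as ET

# Canned stand-in for an API response; in practice you'd fetch this over HTTP.
sample = """
<items>
  <item id="13">
    <name type="primary" value="Catan"/>
    <statistics><ratings><average value="7.1"/></ratings></statistics>
  </item>
</items>
"""

root = ET.fromstring(sample)
for item in root.iter("item"):
    # ElementTree supports simple XPath predicates like [@type='primary'].
    name = item.find("./name[@type='primary']").get("value")
    avg = float(item.find(".//average").get("value"))
    print(f"{name}: average rating {avg}")
```

From there, each `item` becomes one row of the dataset for the sorting-and-eyeballing step described upthread.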
posted by mkultra at 8:46 AM on September 4, 2011

