# What statistical tests do I need to run on this data?

November 29, 2006 9:59 AM Subscribe

I need a good resource for conducting statistical tests. I've taken stats courses, but it's been a while. Something SPSS-centric would be ideal...

So I've got a dataset from a survey. I want to conduct some tests on it to confirm a basic hypothesis:

H0: Support for **A** is a force unto itself.

Ha: Support for **A** correlates with support for **x**, **y**, and/or **z**.

What I want to do is check for correlation with support for **A** as my sole dependent variable and, one by one, a bunch o' variables as independents. I'll group these independents into **x**, **y**, and **z**, then see which, if any, relate to support for **A**.

So my **A** variable is a 1-100 integer scale. Most of my independent variables are represented as binary, yes-or-no answers, but there are several Likert scales ("on a scale of 1-5, would you say you agree or disagree?"). For some of these, *agreeing* will show support for variable **x**, say; for others, *disagreeing* will show support for variable **y**.

So what I need is a walkthrough that says, "Okay, if you're trying to figure out this, and your dependent variable looks like this but your independent variables are this way, so what you need to do is run this test. Before you do that, ensure that your variables conform to this model. Once you've run your test, this number from the output is what you want."

Oh, just so we're on the same page, this is a one-credit undergrad class, not a Ph.D. dissertation. So, y'know.

Recent SPSS versions should have something basic under Help. Statistics Coach or something like that.

posted by claxton6 at 10:17 AM on November 29, 2006

Response by poster: Oh, forgot to mention that: The help file in SPSS 14 is pretty good, but I guess what I've been looking for is an initial hand-hold, so I can tell where to start. As I said, I've taken some elementary classes so I've done this very stuff in SPSS before, but it's been a few years. Once I get back into it, I'll be making liberal use of the help files.

posted by electric_counterpoint at 11:07 AM on November 29, 2006

The chapter on multiple regression in Stevens' "Applied Multivariate Statistics for the Social Sciences" (may not be exact title) is pretty good.

posted by singingfish at 12:56 PM on November 29, 2006

(2) Plain OLS is good enough for you and probably the most convenient way to do this. Run a multiple regression. I've no idea what the syntax is for SPSS.
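For what it's worth, the fit itself isn't SPSS magic. Here's a rough plain-Python sketch of what "run a multiple regression" does under the hood, solving the normal equations (X'X)b = X'y; the support scores and variable layout below are invented for illustration, and in practice you'd just use SPSS's regression menu or any stats package.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations.
    X: rows of predictors (no intercept column); y: outcomes.
    Returns [b0, b1, ...] with b0 the intercept."""
    rows = [[1.0] + list(r) for r in X]  # prepend an intercept column
    k = len(rows[0])
    # Build X'X (k x k) and X'y (length k)
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Made-up data: a binary answer (0/1) and a 1-5 Likert item predicting
# a 1-100 support score, constructed so y = 50 + 10*x1 + 3*x2 exactly.
X = [(0, 1), (0, 2), (1, 3), (1, 4), (0, 5), (1, 5)]
y = [53, 56, 69, 72, 65, 75]
b0, b1, b2 = ols(X, y)  # recovers 50, 10, 3 for this exactly-linear toy data
```

The coefficients b1 and b2 are exactly the numbers step (3) talks about: the estimated effect of each variable holding the others constant.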

(3) Look at the results. There should be something labeled coefficients. These are the estimated effects of each variable, holding all other variables constant. For binary variables, this is just the difference between the 0 group and the 1 group. For Likert-scale variables, the coefficient is the effect of moving up 1 on the scale.

(4) Also, look at the standard errors next to your coefficients. These help you figure out how confident you should be in each effect. You want to see big coefficients and small standard errors. SPSS probably also reports either t-statistics or p-values, which are ways of comparing standard errors to coefficients. P-values are the easiest: you want to see them under 0.05, which, loosely speaking, means you can be 95% confident that the coefficient really is nonzero. T-statistics are an intermediate step, and you want to see t-statistics that are bigger than 2 (or more negative than -2).
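That |t| > 2 rule of thumb is easy to sketch mechanically (the coefficient and standard-error numbers below are made up):

```python
def looks_significant(coef, se, threshold=2.0):
    """Crude large-sample rule of thumb: |t| = |coef / se| > ~2 roughly
    corresponds to p < 0.05. Use the exact p-value your package reports."""
    return abs(coef / se) > threshold

# Made-up examples: a big effect with a modest standard error passes,
# a small effect with a big standard error doesn't.
print(looks_significant(3.1, 1.2))  # True  (t is about 2.6)
print(looks_significant(0.4, 0.9))  # False (t is about 0.4)
```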

(5) Finally, the R2 (R squared) statistic does a terrible, shitty job of telling you how well the model has explained your data. The usual answer is that it's the proportion of the variation that you've successfully explained. This is a terrible way to interpret it, but probably good enough for a 1-credit undergraduate course.
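If you want to see where that proportion-of-variation reading comes from, R2 is just one minus the ratio of residual to total sum of squares (the observed and fitted numbers below are invented):

```python
def r_squared(y, y_hat):
    """R^2 = 1 - SS_residual / SS_total."""
    mean = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

y_obs = [52, 55, 68, 71]   # made-up observed support scores
y_fit = [53, 54, 67, 72]   # made-up fitted values from some model
print(r_squared(y_obs, y_fit))  # about 0.985 -- nearly all variation "explained"
```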

Easy ways to fuck this up:

(1) Not leaving a reference category for binary variables. If you have a variable for women, you can't have one for men or it goes blooey. Do this, and it won't even run, so at least you know.

(2) Having lots of variables that all boil down to just about the same thing. This is called multicollinearity, and the bad part is that the model will go ahead and run, it'll just give you fucked-up results.
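A quick way to see why (1) "goes blooey": with an intercept in the model, a female dummy and a male dummy always add up to the intercept column, so the X'X matrix is singular and the normal equations have no unique solution. A plain-Python sketch with made-up respondents:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

female = [1, 0, 1, 0, 1]            # made-up respondents
male = [1 - f for f in female]      # the complementary dummy
# Design matrix with an intercept AND both dummies: columns are dependent.
rows = [[1.0, f, m] for f, m in zip(female, male)]
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
print(det3(xtx))  # 0.0 -- singular, so the coefficients can't be estimated
```

Drop one dummy (the "reference category") and the determinant is nonzero again, which is exactly why the model runs once you leave a category out.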

posted by ROU_Xenophobe at 10:17 AM on November 29, 2006