Statistical Interaction—I know it's there, I just need directions please.
February 10, 2009 5:43 PM

How do I determine directionality in a significant statistical interaction? I'm using SPSS.

I’m pretty good at understanding statistics on a qualitative level, decent on a quantitative level, and terrible at SPSS! My advisor helped me with some archival data involving correlations and interactions. Now I’m having difficulty trying to figure out the math on my own, and understanding which numbers really matter!

Okay, let’s say I’ve got orthogonal Factors A and B, and I know that they interact significantly with respect to Dependent Variable C. But now I’m interested in finding out about directionality: is it that (high or low) levels of A and (high or low) levels of B lead to (high or low) levels of C?

I’m trying to recreate what my advisor did. We standardized the variables (so, in the parlance of SPSS, we would have ZFactorA, ZFactorB and ZVariableC). And then we calculated ZFactorA*ZFactorB as the... interaction term (?). Next, we did some type of correlation (multiple regression?) where ZFactorA, ZFactorB and ZFactorA*ZFactorB were the independent variables and ZVariableC was the dependent variable. The beta weight of ZFactorA*ZFactorB was negative (it was significant, as we knew), and I think that is how we knew that the interaction happened with low levels of both ZFactorA and ZFactorB. I’m not really sure.
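
For what it’s worth, here is a rough sketch of what I think we did, translated into Python with pandas and statsmodels instead of SPSS (the file name and column names are just placeholders for my real data):

import pandas as pd
import statsmodels.formula.api as smf

# Load the raw (unstandardized) archival data -- placeholder file name.
df = pd.read_csv("archival_data.csv")

# Standardize each variable (the "Z" versions SPSS creates).
for col in ["FactorA", "FactorB", "VariableC"]:
    df["Z" + col] = (df[col] - df[col].mean()) / df[col].std()

# The product term is the interaction term.
df["ZAxZB"] = df["ZFactorA"] * df["ZFactorB"]

# Multiple regression: C predicted from A, B, and the A*B interaction.
model = smf.ols("ZVariableC ~ ZFactorA + ZFactorB + ZAxZB", data=df).fit()
print(model.summary())  # the coefficient (beta weight) on ZAxZB is the one in question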

[We also did a whole other procedure wherein we ranked and divided the data for each of A, B and C into 3 “Ntiles”, so that we essentially had a 3x3 resolution of the data where we could see if any of the boxes were surprisingly high or low (which is the essence of an interaction, the whole 1+1=3 paradigm). The box that corresponded to low levels of both factors was indeed surprisingly low.]
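
Here is a rough pandas sketch of that same Ntile cross-tab idea, continuing from the Python above (again, not the exact SPSS steps we ran):

# Cut A and B into thirds ("Ntiles") and look at the mean of C in each 3x3 cell.
df["A_tertile"] = pd.qcut(df["FactorA"], 3, labels=["low", "mid", "high"])
df["B_tertile"] = pd.qcut(df["FactorB"], 3, labels=["low", "mid", "high"])

cell_means = df.pivot_table(values="VariableC",
                            index="A_tertile",
                            columns="B_tertile",
                            aggfunc="mean",
                            observed=True)
print(cell_means)  # the low/low cell was the surprisingly low one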

Does anyone have any helpful suggestions? What would be the process you would use to determine the directionality of these interactions? What statistics would you use? Which numbers would be important to you? Bonus points for any software that would allow me to 3D graph this stuff!
posted by No New Diamonds Please to Education (4 answers total) 2 users marked this as a favorite
 
I'm only now getting into advanced and multivariate stats, but I don't believe you can do what you're asking without developing a hypothesis about the interaction and then testing that with a second experiment.

But I'm not sure what exactly you're asking: do you want to know whether it's HIGH levels of A AND B associated with HIGH levels of C (or LOW levels of A AND B associated with LOW levels of C)? Because if THAT's the question, it's six of one and half a dozen of the other.

If I were in your situation, I'd go back to the advisor and ask for some clarification.
posted by Benjy at 8:28 PM on February 10, 2009


I think that either you're not remembering quite correctly, or your advisor isn't fully up on interactive effects -- no shame in that; they're unintuitive and fussy, and lots of competent people end up confused by them.

We standardized the variables (so, in the parlance of SPSS, we would have ZFactorA, ZFactorB and ZVariableC). And then we calculated ZFactorA*ZFactorB as the... interaction term (?).

Yes, that would be an interaction term.

Next, we did some type of correlation (multiple regression?) where ZFactorA, ZFactorB and ZFactorA*ZFactorB were the independent variables and ZVariableC was the dependent variable.

Interaction terms are commonly used in OLS.

The beta weight of ZFactorA*ZFactorB was negative (it was significant, as we knew), and I think that is how we knew that the interaction happened with low levels of both ZFactorA and ZFactorB. I'm not really sure.

This isn't right.

A negative coefficient on the interaction term means that as ZFactorA increases, the effect (slope) of ZFactorB decreases, and that as ZFactorB increases, the effect (slope) of ZFactorA decreases.

An interactive effect doesn't happen only at low levels or high levels; it operates across the full range of both constituent independent variables.

Interactive effects don't really have directionality... to the extent I can think of them as having a direction at all, they inherently work in both directions at the same time.
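
To make that concrete with some made-up numbers (an illustration in Python, not your actual estimates):

# Hypothetical standardized coefficients; the interaction is negative.
beta_a, beta_b, beta_ab = 0.30, 0.25, -0.20

# The implied slope of ZFactorA at several fixed values of ZFactorB.
for zb in (-2.0, -1.0, 0.0, 1.0, 2.0):
    slope_a = beta_a + beta_ab * zb
    print(f"slope of ZFactorA when ZFactorB = {zb:+.1f}: {slope_a:+.2f}")

# The slope shrinks (and eventually flips sign) as ZFactorB increases,
# and the same holds with the roles of A and B swapped.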
posted by ROU_Xenophobe at 10:10 PM on February 10, 2009


This book is a great resource for probing and testing interactions in regression.

I would suggest you take high and low values of ZFactorA and ZFactorB and simply put those values into the resulting regression equation to obtain predicted values of ZVariableC. For example, for low values (say, -1 for both ZA and ZB), the equation would be:

predicted ZC = intercept + betaZA*(-1) + betaZB*(-1) + betaZAZB*(-1*-1)

Do this for combinations of high and low values of ZA and ZB and you'll get an idea of how the interaction is working.
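
If it helps, here is a quick Python sketch of that plug-in approach, with placeholder values for the intercept and betas (substitute the estimates from your own output). It also draws the 3D surface you asked about:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder estimates -- replace with the intercept and betas from your regression.
intercept, beta_za, beta_zb, beta_zazb = 0.0, 0.30, 0.25, -0.20

def predicted_zc(za, zb):
    # Predicted ZC from the fitted equation above.
    return intercept + beta_za * za + beta_zb * zb + beta_zazb * za * zb

# The four high/low combinations (plus or minus 1 SD).
for za in (-1, 1):
    for zb in (-1, 1):
        print(f"ZA = {za:+d}, ZB = {zb:+d} -> predicted ZC = {predicted_zc(za, zb):+.2f}")

# Bonus: 3D surface of predicted ZC over the -2 to +2 SD range of A and B.
za_grid, zb_grid = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-2, 2, 40))
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(za_grid, zb_grid, predicted_zc(za_grid, zb_grid), cmap="viridis")
ax.set_xlabel("ZFactorA")
ax.set_ylabel("ZFactorB")
ax.set_zlabel("predicted ZVariableC")
plt.show()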
posted by naturesgreatestmiracle at 11:58 PM on February 10, 2009 [1 favorite]


An interaction term is easiest to interpret by thinking that the effect of ZFactorA changes as ZFactorB changes. This means that you can only talk about the effect of FactorA at a particular level of FactorB.

y ~ F_a*Beta_a + F_b*Beta_b + F_a*F_b*Beta_ab + Beta_0

Since F_a and F_b are both standardized, when F_b is at its mean value (0) the interaction term F_a*F_b is also zero, and you can interpret Beta_a as what moving one SD in F_a does when F_b is at its mean.

On the other hand, if you fix F_b at one SD above its mean,

y ~ F_a*Beta_a + Beta_b + F_a*Beta_ab + Beta_0
= F_a*(Beta_a + Beta_ab) + (Beta_0 + Beta_b)

So now the effect of moving up one SD in F_a is (Beta_a + Beta_ab).

Similarly, if you fix F_b at one SD below its mean, the effect of moving one SD up in F_a is (Beta_a - Beta_ab).
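
A quick numeric check of that algebra, with made-up betas (swap in your own estimates):

# Hypothetical coefficients, only to illustrate the simple slopes.
beta_0, beta_a, beta_b, beta_ab = 0.0, 0.30, 0.25, -0.20

for f_b in (-1.0, 0.0, 1.0):  # F_b fixed at -1 SD, at its mean, at +1 SD
    slope_of_a = beta_a + beta_ab * f_b   # effect of a 1 SD increase in F_a
    intercept = beta_0 + beta_b * f_b     # predicted y when F_a = 0
    print(f"F_b = {f_b:+.0f} SD: predicted y = {intercept:+.2f} + {slope_of_a:+.2f} * F_a")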
posted by a robot made out of meat at 7:38 AM on February 11, 2009


This thread is closed to new comments.