How much will SEM model fit vary if I only change the final variable?
April 11, 2013 12:33 PM

I have five structural equation models that are identical except for the final outcome variable. Should I expect the model fit statistics to vary more than negligibly?

I am working on revising a study that looks at the impact of partisanship, media use, and political discussion on five different opinion outcomes. I'm doing this with five structural equation models that are exactly the same -- partisanship in the first level, then media use, then discussion -- before getting to the opinion outcome at the end. My model fit statistics are nearly identical across the five; some statistics are identical to three decimal places, and some vary just a tiny bit at that level. A reviewer said, "Surely it cannot be the case that all five models had the same fit to the data." Is this surely not the case? I'm a technician and not a theorist when it comes to SEM, but it seems to me that most of the influence on model fit is coming from the relationships among the other eight variables, which are not going to differ at all based on one endogenous variable that comes later in the model. I can't find any papers that use multiple SEMs like this (the closest I found was one with multiple just-identified models, which have no fit stats), so I have no idea if I'm just overlooking something obvious. Can anyone provide some insight?
posted by aaronetc to Science & Nature (4 answers total)
 
I think this is a question where we'd kind of need to have the data in front of us to play with creatively in order to get a sense of what might be going on for you.
posted by Blasdelb at 2:15 PM on April 11, 2013


It may be that the "structure" of the opinion formation process is basically similar across different opinions, especially if the five opinion outcomes are in the same topic area (i.e., different versions of the same basic attitude, linked by consistency). It would be surprising, though, for the parameter estimates themselves to be virtually identical. How correlated are these outcomes? If essentially the same data goes into each estimation, then we wouldn't expect much difference.
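The point above can be sketched with a toy regression (not your actual SEM; variable names and coefficients here are made up for illustration): if two outcomes are strongly correlated with each other, fitting the same predictors to each yields fit measures that land very close together.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical stand-ins for the upstream predictors
# (partisanship, media use, discussion).
X = rng.normal(size=(n, 3))

# Two outcomes that are strongly correlated with each other:
# y2 is just y1 plus a little extra noise.
beta = np.array([0.5, 0.3, 0.2])  # arbitrary illustrative coefficients
y1 = X @ beta + rng.normal(scale=1.0, size=n)
y2 = y1 + rng.normal(scale=0.3, size=n)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid.var() / y.var()

print(np.corrcoef(y1, y2)[0, 1])           # outcomes highly correlated
print(r_squared(X, y1), r_squared(X, y2))  # fit measures nearly the same
```

The same intuition carries over to SEM: when the outcomes are nearly interchangeable, swapping one for another changes little of what the model has to explain.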

But yes, what blasdelb says. Without the data, model, and estimates, it is hard to say more.
posted by lathrop at 4:48 PM on April 11, 2013 [1 favorite]


I'm with lathrop -- the first thing that came to mind when I read your question is that your various outcomes are probably fairly strongly correlated with each other.
posted by plantbot at 8:51 AM on April 12, 2013


When predictive models produce the same fit measure, I would be concerned that some kind of coding error had taken place (such as the outcomes all being missing, or non-overlapping with the predictors). Some obvious alternative explanations: the outcomes are essentially collinear, the latent variable is unidentified (a potential coding error), the final model is saturated (a saturated model always fits perfectly -- more likely with a binary outcome), or the fit statistic (you don't specify which) is dominated by the other variables. Some fit statistics (like AIC) can have big constants attached to them; do you mean identical to the third decimal place or to the third non-zero digit (like 1.003e4)? Did you try putting nonsense (or better yet, a parametric simulation using plausible parameters) in as the outcome, just to make sure the machinery is working?
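That last sanity check can be sketched in a few lines (again a toy regression rather than your actual SEM software, with made-up predictors and coefficients): swap the real outcome for pure noise and confirm that the fit measure actually responds. If it doesn't move, something in the pipeline is ignoring the outcome.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical upstream variables (partisanship, media use, discussion stand-ins).
X = rng.normal(size=(n, 3))
real_outcome = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)
nonsense_outcome = rng.normal(size=n)  # pure noise, unrelated to X

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid.var() / y.var()

# A working pipeline should show a clear gap between the two:
print(r_squared(X, real_outcome))      # clearly above zero
print(r_squared(X, nonsense_outcome))  # close to zero
```

If your SEM fit statistics stay identical even with a nonsense outcome plugged in, that points to a coding error rather than genuinely similar models.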
posted by a robot made out of meat at 8:46 PM on April 12, 2013

