# Calculating Statistic Ratio for .02347% out of ~4.6m -- 1 for every..?

August 23, 2010 3:59 PM

StatisticsFilter: Out of 4,686,100 doctors in the US, a survey claims that "1100 doctors in the US think..", which amounts to roughly .02347% of doctors in the US (1100 / 4,686,100 ≈ .0002347, or .02347%). So that's one doctor that thinks this, for every how many doctors?

I can figure out percentages using the only Algebra I remember from the most recent Algebra class I took 13 years ago, but that's where I'm at a loss to convert that to "1-for-every" issues. It's for a forum argument about how 1,100 isn't a valid sample size for what doctors in the US think.

4686100 divided by 1100 is 4260.090909090909. One in 4260.090909090909.

posted by Rendus at 4:02 PM on August 23, 2010
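Rendus's arithmetic, as a quick Python sketch (the figures are the ones from the question):

```python
total_doctors = 4_686_100  # US doctor estimate from the question
surveyed = 1_100           # doctors in the survey

# "1 for every N" is just the total divided by the count
ratio = total_doctors / surveyed
print(f"1 in {ratio:.2f}")                # 1 in 4260.09

# The percentage the poster computed, for comparison
print(f"{surveyed / total_doctors:.5%}")  # 0.02347%
```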

Response by poster: And how did you get that, so I can know how to do it again should it come up another time?

posted by Quarter Pincher at 4:03 PM on August 23, 2010

426

posted by If only I had a penguin... at 4:04 PM on August 23, 2010

Response by poster: Thanks, from an English major who was happy with his D in College Algebra just to get the math credit!

posted by Quarter Pincher at 4:05 PM on August 23, 2010

Divide the total number of doctors by how many think that. Round to the nearest 1. Done.

posted by brainmouse at 4:05 PM on August 23, 2010

Best answer: .02347 for every 100 = 2347 for every 10,000,000

So you divide 10,000,000 by 2347 and get a result of 4260.7584 (so mhjb has the right idea but, technically, you should round up to 4261).

posted by amyms at 4:06 PM on August 23, 2010

Think of division, a/b, as putting a items into b buckets, evenly, and seeing how many items are in each bucket. In this case, you have one bucket for each of the doctors in the study, and the items you are distributing are all the doctors in the US. If you evenly distribute the doctors among the buckets, you have 4260 doctors in each bucket, with 100 left over, the "remainder" (4260*1100 + 100 = 4686100).

posted by leahwrenn at 4:08 PM on August 23, 2010
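leahwrenn's bucket picture maps directly onto Python's `divmod`, which returns the whole-bucket count and the remainder in one call (same numbers as the thread):

```python
doctors = 4_686_100
buckets = 1_100  # one bucket per surveyed doctor

per_bucket, remainder = divmod(doctors, buckets)
print(per_bucket, remainder)  # 4260 100

# Sanity check: refilling the buckets recovers the total
assert per_bucket * buckets + remainder == doctors
```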

Not to derail the question, but "It's for a forum argument about how 1,100 isn't a valid sample size for what doctors in the US think." is not what you are arguing. What you are arguing is that 1,100 is actually a very small portion of all doctors. The "sample size" is how many doctors they surveyed -- presumably not all of them, but a much smaller sample size than you probably think is valid, as long as it's correctly randomly sampled. The results from that sample were then scaled up to give the numbers for what all doctors (the "population") think.

That may have just been an offhand comment, but I wanted to point it out in case that is what you were planning on putting in your argument.

posted by brainmouse at 4:12 PM on August 23, 2010

Well, maybe my math is wrong, and that's what I get for rushing... but really I wanted to address the larger question. The question of whether or not a sample of 1100 is adequate has nothing to do with how many doctors there are in the US. The size of an adequate sample is determined by the central limit theorem. Larger populations do not require larger samples. This is a mathematically provable fact, though I don't do mathematical proofs so don't ask me to prove it.

That said, I can help you with the intuition. So here's the intuition: think about making lemonade. How much lemonade do you need to taste to determine if you've added enough sugar? Well, if you're making a glass of lemonade, you'd probably try a spoonful. And if you made a pitcher you'd try a spoonful. If you made a swimming pool full you'd still need to taste a spoonful. The amount you need to sample isn't dependent on how much lemonade you've made, just on whether it's enough to detect the taste.

Now I know what you're thinking: If you made a swimming pool full then what if you had a spoonful that wasn't sweet but there was a whole bunch of sugar hiding in the deep end? Well there's the thing: The other thing that determines how much you need to sample is how much variability there is in the variable you're trying to measure. If there's very little variability (you stirred the lemonade real good!) then you can get by without sampling very much. If there's a lot of variability, then you'll need to sample more (a spoonful from here and there all over the pool).

That said, for most purposes a sample of 1000 is considered adequate to get statistically significant results for most variables of interest. Read more about sample size here!

posted by If only I had a penguin... at 4:13 PM on August 23, 2010 [5 favorites]
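The lemonade intuition has a standard formula behind it: the standard error of a sample proportion is sqrt(p(1-p)/n), and the p(1-p) term is exactly the "variability" part -- largest at p = 0.5 (a perfectly unstirred 50/50 split), small when nearly everyone agrees. A quick illustrative sketch (n = 1100 is from the thread; the p values are arbitrary):

```python
import math

n = 1100
for p in (0.5, 0.75, 0.95):
    # Standard error of the sample proportion: biggest when opinion
    # is most evenly split, smallest when it is nearly unanimous
    se = math.sqrt(p * (1 - p) / n)
    print(f"p = {p}: standard error ~ {se:.4f}")
```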

To add on, in case I misunderstood what you were saying. If 1,100 is how many doctors they surveyed, and are giving the results from that -- that's totally a valid sample size. Voter surveys are often around 1,000, and give completely statistically valid results. Saying that just because they only asked 1 in 4260 doctors the survey isn't valid is a misunderstanding of how these statistical processes work.

posted by brainmouse at 4:14 PM on August 23, 2010 [1 favorite]

*It's for a forum argument about how 1,100 isn't a valid sample size for what doctors in the US think.*

Yeah, how do you know that? You need to know how the sample was selected in order to estimate its statistical strength. The simple calculation you're performing is neither here nor there.

Also, presumably, if 1,100 doctors agreed with option A, the sample size was greater than 1,100.

posted by mr_roboto at 4:16 PM on August 23, 2010 [1 favorite]

Oh, and any responsible reporting of any sort of study will report statistical significance. If the results are statistically significant, the sample was big enough. But you should know what statistical significance means: it means that it's improbable that a population with property X could yield the result observed. So for example, if you're testing whether something changed over time you're saying "It's improbable that a population with the same distribution as time 1 would produce the result observed at time 2"; or if you're comparing two groups it means "it's improbable that if doctors had the same distribution as nurses, you'd observe the difference observed in this sample." When you report statistical significance of *one* number (no comparison), the comparison is implicitly 0 (it's improbable that if (practically) no one thought X we would observe what we saw here).

Because 0 often isn't the counter-factual of interest, there should be confidence intervals reported: these are the "margins of error" that newspapers use. So if it's +/- 3%, that means it's improbable that in a population where 3% more (or more) people thought this, or in a population where 3% fewer (or fewer) people thought this, we would observe what this sample showed.

The question of whether a sample size is large enough is a mathematical one. It is not a matter of opinion. Whatever report you're reading should tell you this.

posted by If only I had a penguin... at 4:24 PM on August 23, 2010

Yeah, you're misunderstanding how sampling works.

It turns out that the sample size that's "good enough" doesn't depend very much on the size of the population. It matters for very small populations (4 million is not small), and even then small populations can get away with smaller samples.

If they drew a random sample of physicians, 1100 is a perfectly valid sample size for 4 million doctors. It's just as good a sample of 4 billion doctors or 4 trillion doctors or 4 million billion squillion bajillion doctors. In fact, when they were calculating the size of the margin of error, they probably assumed that the number of doctors out of which they drew their sample was, entirely literally, infinity.

The reason it works like this is mostly because increasing your sample size means that the random, unlikely fluctuations you'll see in any string (getting heads each of six times you flip a coin) do a better and better job of canceling each other out. I gave a similar if testier reply earlier.

posted by ROU_Xenophobe at 4:50 PM on August 23, 2010 [1 favorite]
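ROU_Xenophobe's point about population size can be made concrete with the finite population correction, the factor that adjusts a margin of error for sampling from a finite pool; for any population much bigger than the sample it is essentially 1. A sketch (the 1100 and 75% figures are from the thread; the population sizes are arbitrary):

```python
import math

def margin_of_error(p, n, N, z=1.96):
    """95% margin of error for a sample proportion, including the
    finite population correction sqrt((N - n) / (N - 1))."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

n, p = 1100, 0.75
for N in (10_000, 4_686_100, 10**12):
    print(f"N = {N}: +/- {margin_of_error(p, n, N):.4f}")
```

With these numbers the margin barely moves between 4.6 million and a trillion doctors, which is what treating the population as "literally infinity" amounts to in practice.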

Response by poster: The crux of the opposing argument is actually that "3 out of 4 doctors think..." when the survey size was 1,100 doctors. I was attempting to suggest that a survey of 1,100 doctors is not valid reasoning to suggest what +4m doctors think.

posted by Quarter Pincher at 5:11 PM on August 23, 2010

Response by poster: The link to the article the opposition cites is here, that "3 out of 4 doctors" believe miracles happen, as evidence that doctors believe in miracles. I'm suggesting that the scope of the survey does not accurately portray what "3 out of 4 doctors" think except within the scope of the 1,100 doctors surveyed, given that 1,100 surveyed represents one doctor out of every (roughly) 4,261 doctors in the US.

My figures for how many doctors are in the US come from this US Census stats page (for 2007, citing how many doctors per state per 100,000 population, truncated after the ones digit and added) mingled with this Census stats page (for 2009, using a flat 307m population).

posted by Quarter Pincher at 5:28 PM on August 23, 2010

That's where you're wrong. A survey of 1100 doctors pretty much *is* an adequate sample, as long as it's chosen well. Sometimes that means you have to use a "stratified sample", though.

posted by Chocolate Pickle at 5:29 PM on August 23, 2010

Statistics is all about taking a small sample size and inferring information about the population. It is totally legit (as long as the study is conducted appropriately according to statistical principles, as others above have noted).

posted by kdar at 5:31 PM on August 23, 2010

Actually the sample size is quite reasonable. Again this assumes good sampling techniques (perfect random sampling in surveys isn't really possible), which is really where surveys fail, but the margin of error at 95% confidence for that sort of statistic is about +/- 2.6%. At 99% confidence it's +/- 3.4%.

posted by drpynchon at 5:32 PM on August 23, 2010
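drpynchon's two figures follow from the usual normal-approximation formula z * sqrt(p(1-p)/n); only the z multiplier changes between confidence levels. A quick check (p = 0.75 and n = 1100 are from the thread; 1.96 and 2.576 are the standard 95% and 99% z-values):

```python
import math

p, n = 0.75, 1100
for label, z in (("95%", 1.96), ("99%", 2.576)):
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"{label} confidence: +/- {moe * 100:.1f}%")
# 95% confidence: +/- 2.6%
# 99% confidence: +/- 3.4%
```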

Yeah, you're wrong. Assuming the sample was random or was a properly done probability sample (a stratified sample is one kind of probability sample), the sample is entirely adequate to the task. Just like you can tell how sweet a pool-full of lemonade is by tasting a spoonful.

posted by If only I had a penguin... at 5:36 PM on August 23, 2010

That's the thing, though: it *is* valid reasoning. If the sample of 1100 doctors is random, then maths show that the conclusions from the study of 1100 can be safely attributed to the 4,000,000 in more than 97% of cases.

To dispute the conclusions of the study, you would have to demonstrate one of the following:

- Flaws in the randomness of the sampling (the polling organisation is biased and e.g. over-represented rural doctors.)

- The questions asked of the doctors were likely to bias their answers, either because they were leading (do you think that drug is the finest in the world, merely excellent, or completely harmful?) or because doctors would have a tendency to lie (have you ever disguised your ignorance and caused harm?)

- The polling organisation repeated the survey dozens of times, until they had a result they favoured out of pure chance.

If none of this is true, then I'm afraid you'll have to come to terms with the fact that 3/4 of all doctors think what the survey says they think, however unpleasant that may be.

posted by Spanner Nic at 5:44 PM on August 23, 2010

There are a number of ways to impeach a survey, but "1100 is only 1/4260 of the total" isn't really one of them.

Here are some others:

1. The sample may not really be representative of the whole. By far the easiest way for this to happen is "self-selection", for instance if you put a survey on the web and allow any and all visitors to participate. In that case you can't even be sure that they're doctors.

All public-opinion surveys suffer from this to some extent, because some people will hang up on the telephone surveyor or toss the mail survey into the round file, and some will participate. Are those willing to participate the same as those who aren't? There have been infamous examples where it turned out they were not.

Mail surveys of professionals usually have an extremely low return rate. 5% is considered really good. Which 5% are willing to spend the time to fill out the survey, and why?

Generally speaking, when the survey is about something controversial, it turns out to be respondents with a dog in the hunt who are more willing to answer. So political polling tends to get disproportionately more response from people who are active in the parties, for example. Questions relating to religious faith tend to get disproportionately more answers from people who think that's important.

You have to use stratified sampling to cope with that, but if you're a dishonest pollster then not doing so is one of the ways to get the result you want.

2. The questions can be slanted. There are dozens of ways for dishonest pollsters to get a biased result even if they're using a valid sample of respondents, by making the questions ambiguous for instance. An honest pollster will reproduce the exact text of the question for all to see. Dishonest pollsters announce their results and don't let people see the questions.

3. In some kinds of polls, you can't discount the possibility that a significant percentage of the respondents are deliberately lying.

A notorious example of a poll that went badly south was in 1998 when People Magazine ran an online poll to determine "the most beautiful people" in the world. It was won by Hank the Angry Drunken Dwarf (a regular guest on the Howard Stern show) as a write-in.

How did it go wrong? First, it was a self-selected sample. Second, the vast majority of the respondents were lying. No one really believed that Hank was the most beautiful person in the world; but they did think that the survey was stupid and decided to hack it.

So while a sample of 1100 is adequate for this kind of thing, *which* 1100 they are, and how they were found, and what they were asked, can make all the difference.

If you really want to understand this, and have a really good time learning about it, then you should read the classic book "How to Lie with Statistics". It's short and a very easy read, and extremely amusing.

posted by Chocolate Pickle at 5:51 PM on August 23, 2010

Ah, seeing your update: I'll bet that it's all about how you define 'miracle'. Everyone should believe in miracles, if you define it as 'statistical outlier', i.e. someone beat 1 in a million odds to recover from cancer. That will happen just by chance, 1 time in a million. 'Miracle' as 'divine intervention', that's an entirely different meaning.

posted by Spanner Nic at 5:52 PM on August 23, 2010

Response by poster: Even still, it is purely a guess that the rest do and bothers nothing with determining whether they actually do or do not in the literal sense, no?

It seems to me it is a rationalization to shortcut (take *shortcut* how you like) otherwise unfeasible amounts of research.

These, however, are likely the same mathematicians who believe in probability and the Monty Hall bit being true =)

posted by Quarter Pincher at 5:53 PM on August 23, 2010

By the way, this particular survey may be a case of ambiguity. What does "miracle" mean?

posted by Chocolate Pickle at 5:57 PM on August 23, 2010

*Even still, it is purely a guess that the rest do and bothers nothing with determining whether they actually do or do not in the literal sense, no?*

No, it is not a guess. It is an extrapolation, but it is also a valid extrapolation. Don't try to claim that the study of statistics is some kind of pseudo-science. Statistics is as valid a form of mathematics as calculus, and statistical methods are used constantly in modern science and engineering. If statistics didn't work, the modern world wouldn't look like it does.

posted by Chocolate Pickle at 5:59 PM on August 23, 2010

*It seems to me it is a rationalization to shortcut (take shortcut how you like) otherwise unfeasible amounts of research.*

It is a shortcut, but it's not a rationalization -- it's a practical reality. Moreover, that shortcut was used in almost every single advancement in the history of SCIENCE (and other, squishier fields).

The margin of error for a sample size of 1,100 means that if you did that same survey another 100 times, reselecting subjects at random, about 99 out of 100 of those surveys would yield a percentage between 72-77%. If you did the survey a 1,000 times over, the outcome of about 999 of those 1000 surveys would fall between 71-78%. In other words, 75% is a pretty good estimate for the actual total population if the sampling methods were appropriate. The exact percentage of the total population who would respond similarly isn't too far off.

posted by drpynchon at 6:10 PM on August 23, 2010
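drpynchon's "run the survey again and again" description is easy to simulate. This sketch assumes, hypothetically, that the true proportion really is 75%, then draws 1,000 independent surveys of 1,100 doctors each and looks at how the estimates spread:

```python
import random
import statistics

random.seed(42)  # reproducible illustration
TRUE_P, N, SURVEYS = 0.75, 1100, 1000

estimates = [
    sum(random.random() < TRUE_P for _ in range(N)) / N
    for _ in range(SURVEYS)
]

print(round(statistics.mean(estimates), 3))   # clusters tightly around 0.75
print(round(statistics.stdev(estimates), 4))  # near the theoretical 0.0131
```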

Response by poster: *It is a shortcut, but it's not a rationalization -- it's a practical reality.*

Practical, according to a mathematician (and probably logistically speaking with paid manpower, etc), but not actually true, even according to your own explanation --

*The margin of error for a sample size of 1,100 means that if you did that same survey another 100 times, reselecting subjects at random, about 99 out of 100 of those surveys would yield a percentage between 72-77%. If you did the survey a 1,000 times over, the outcome of about 999 of those 1000 surveys would fall between 71-78%.*

**"If we had actually done other surveys (which we didn't) we have used our advanced number churning systems to calculate that what we've just said will still be true, for the most part. We aren't actually literally certain, as in, actually asked everyone, so technically our statement that '3 out of 4 doctors in the US' bit is a lie, although actually we meant to caveat/imply that with 'according to our advanced number churning processes without actually doing the research we claim to present' then it's actually right, and lots of other number-churning specialists agree with us."**

=)

posted by Quarter Pincher at 6:19 PM on August 23, 2010

I'm guessing you routinely say things like "Most cars have four wheels" without having exhaustively sampled every single car ever made to see how many wheels it had. Without logical inferences from finite sample sizes, we can't ever say anything useful in the world.

Saying that a statistically valid survey is "purely a guess" is like saying Michael Jordan wasn't good at basketball, he was just really really lucky at making baskets.

posted by 0xFCAF at 6:20 PM on August 23, 2010

Response by poster: It seems that last paragraph was not bolded properly (the close-bold came after the seventh word, *surveys*, and showed properly in the live preview).

posted by Quarter Pincher at 6:22 PM on August 23, 2010

*The link to the article the opposition cites is here, that "3 out of 4 doctors" believe miracles happen, as evidence that doctors believe in miracles.*

That link wouldn't open for me, but if the "wnd" in the URL means that the article is at World Net Daily, then yeah, you can't trust their conclusions. They are rightwing wackadoodles (here's some info about them at Wikipedia).

Since I can't see the article, I don't know if they're even citing a real survey, but they might have asked only self-identifying evangelical christian doctors, for all we know.

Or, as someone mentioned upthread, there might be some ambiguity in interpreting the question. To doctors, who see unexpected and unexplainable things happen every day, the word "miracle" can be used for a lot of events that don't necessarily encompass any greater meaning or higher purposes (even though others with an agenda might twist it that way).

posted by amyms at 6:30 PM on August 23, 2010

Response by poster: I generally avoid inferences of that nature and instead say things like, "Generally, four-wheeled cars have four wheels." I generally attempt to avoid making inferences and try to remember to say exactly what I mean.

Michael Jordan not being good at basketball might be accurately stated if we measured the scoring marks for all games he had ever played, including the practice games during training when he was a rookie, and the games he played as a youth in small public half-courts, cumulatively. However, the remark generally implies Michael's ability versus opponents in his actual career games, and only within a certain context of those even further. It could instead be suggested, with a high probability for accuracy, that Michael Jordan *eventually became* a great basketball player.

I did not imply that *rationalization* was a bad thing, but instead that it is *something rational people make*. Rational, however, could merely be what a person, believing himself to be perfectly sensible, perhaps given that other people he estimates to be perfectly sensible agree with him, finds sensible =)

posted by Quarter Pincher at 6:37 PM on August 23, 2010

Response by poster: The WND does refer to WorldNetDaily (which upon testing again opens okay for me), and the specific passage about the survey itself and its veracity states --

*The survey was conducted by HCD Research and the Louis Finkelstein Institute for Religious and Social Studies of the Jewish Theological Seminary in New York.*

-- but no other identifying information other than the results of the survey and related stats extrapolated.

posted by Quarter Pincher at 6:48 PM on August 23, 2010

You are totally attacking the wrong thing here. Statistical sampling is on perfectly valid theoretical foundations, and it's quite ignorant to claim that a random sample cannot represent the whole.

What you should be attacking is the way the survey is worded. Usually you can force any conclusion that you want by misleading wording. For example the survey probably asked some skewed version of "have you ever seen something happen to a patient that you can't explain?" and then used a positive response as proof of belief of miracles.

posted by Rhomboid at 7:07 PM on August 23, 2010

Good luck finding the actual survey. I've been trying to find it for a bit, and it seems like it was scrubbed from the internet.

Here's an old location:

http://www.jtsa.edu/research.finkelstein/surveys/physicians.shtml

It's not on the Internet Archive or Google Cache. Again, good luck.

posted by mr_roboto at 7:17 PM on August 23, 2010

*Even still, it is purely a guess that the rest do and bothers nothing with determining whether they actually do or do not in the literal sense, no?*

It is in the narrowest sense a guess, but it is a highly informed and very conservative guess.

The way most of these work is by reasoning backwards. Presumably you think that fewer than 74% of physicians believe in miracles. I don't know how many you think do... let's say you believe half do, 'cuz then we can flip coins. If the true proportion were 50%, do you know how hard it would be to gather a random sample of 1100 that just by bad luck happened to have 74% saying yes? This would be like flipping a coin 1100 times and only having it come up heads 286 times. Almost impossible.

Doing half-assed math, less than one in a trillion.
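That "less than one in a trillion" figure can be checked with an exact binomial tail sum (a quick sketch, treating 814 as 74% of 1,100 coin flips):

```python
from math import comb

n, k = 1100, 814  # 814 "heads" is 74% of 1,100 flips
# Exact probability of at least k heads in n fair coin flips,
# i.e. a true 50/50 population producing a sample this lopsided
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(p)  # astronomically small -- far below one in a trillion
```

The actual value comes out dozens of orders of magnitude smaller than one in a trillion, so "almost impossible" undersells it.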

The way margins of error get calculated boils down to "the set of population proportions such that it would not be very difficult to get our sample from that population." In practice, they usually use a relatively loose standard of 95% -- they ask, more or less, "What is the set of possible populations that have at least a 5% chance of producing our sample?" But if you learn a little not-too-difficult math, you can figure out how to adjust what they tell you to a tougher standard.

This part is unassailable: really, honestly, refusing to believe poll results because you think the sample of 1100 was too small is poop-smearing, cat-throwing crazy. You might not have known this before, but now you do.

Where you might attack it is that all of these good things only flow from random samples. If your sample is nonrandom, bad things happen. Drawing random samples is difficult (in practice all samples are a little bit nonrandom, but there are ways to "re-create" a random sample out of them). So, look at the survey methodology.

Another tactic you might pursue is to think narrowly about what the results say. What they say is really limited to "If the sample was random, we can be highly confident that if we somehow gave this survey to every physician in the US, between 71 and 77 percent would give the answer that 74% did in the sample."
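That 71-to-77-percent range can be reproduced with the standard normal-approximation confidence interval for a proportion (a sketch, assuming a simple random sample and the usual 95% critical value of 1.96):

```python
from math import sqrt

n = 1100      # sample size
p_hat = 0.74  # sample proportion answering yes
z = 1.96      # critical value for 95% confidence

margin = z * sqrt(p_hat * (1 - p_hat) / n)  # standard error times z
print(f"{p_hat - margin:.1%} to {p_hat + margin:.1%}")  # about 71.4% to 76.6%
```

Note that the width of this interval depends on the sample size, not on the 4.7 million population size -- which is exactly why 1,100 respondents is plenty.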

*It seems to me it is a rationalization to shortcut (take shortcut how you like) otherwise unfeasible amounts of research.*

Sometimes that's true, but that's a good thing. It lets us learn about things that would otherwise be impossible to learn about.

And, sometimes, a real-world sample survey can be *better* than a real-world census that counts everyone. The basic idea is that fuckups are inevitable, and some people will never answer even under penalty of law, and other people will manage to answer twice, or you end up interviewing people who aren't actually in the target population, and so on. But it's easier to be careful and controlled about a thousand-person sample than it is a 4.7-million-person census. This is a large part of why the government creates unemployment statistics from ~50,000-respondent surveys instead of trying to assemble a census out of other data: the sample is actually more accurate.

posted by ROU_Xenophobe at 7:24 PM on August 23, 2010 [2 favorites]

*My figures for how many doctors are in the US comes from this US Census stats page (for 2007, citing how many doctors per state per 100,000 population, truncated after the ones digit and added) mingled with this Census stats page (for 2009, using a flat 307m) population.*

That method of estimating the number of physicians is not valid. You can't just add up the frequency of physicians in each state and come up with a meaningful number without knowing the population of each individual state. If you scroll down to the bottom of your link and click on the source for the data in the table you will find this which does all the hard work for you and indicates there are only 816,000 physicians total in the U.S.
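To see why per-100,000 rates can't just be added across states, here's a toy example with two made-up states (hypothetical numbers, purely illustrative):

```python
# Two made-up states with very different populations
states = [
    {"pop": 1_000_000, "per_100k": 300},  # 3,000 physicians
    {"pop": 100_000,   "per_100k": 200},  # 200 physicians
]

# Correct: weight each rate by that state's actual population
total = sum(s["pop"] / 100_000 * s["per_100k"] for s in states)
# Wrong: summing the raw rates ignores population entirely
naive = sum(s["per_100k"] for s in states)

print(total, naive)  # 3200.0 physicians vs. a meaningless 500
```

The naive sum treats every state as if it had the same population, which is why it produces a number with no real-world interpretation.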

However, this doesn't change the validity of statistical sampling as others have explained above.

posted by JackFlash at 10:35 PM on August 23, 2010

*but not actually true*

If the sample was valid, and the survey methodology above board, then it is so likely to be true that you're more likely to be struck by lightning several times than it isn't.

*poop-smearing, cat-throwing crazy*

This.

posted by obiwanwasabi at 1:14 AM on August 24, 2010

This thread is closed to new comments.

posted by mhjb at 4:01 PM on August 23, 2010