
# What is a good sample size for determining whether dice are truly random?

April 30, 2008 11:41 AM Subscribe

How many times would I have to roll a standard 6-sided die to get a statistically representative view of whether it was truly random or not?

I have a bunch of dice that I haven't used in years. The other day, I was playing with one, and I noticed that the 5 came up fairly often. I started rolling the die and writing down the results, to see if it was just a short term statistical fluke, or observer bias, or if the die really favored the 5.

I think I ended up rolling it around 200 times, and 5 definitely had a significant edge.

Now, I'd like to check my other dice. However, I'm not a statistician. I understand that, obviously, the larger your data set (the more times you roll each die and write down the results), the better your analysis of non-randomness will be. However, I know that it isn't necessary to roll the die 1 billion times to check for randomness, that there is some generally accepted statistical minimum, below which the margin of error is too large, and above which the margin of error is generally considered acceptable.

How many die rolls is that point?


*I know that it isn't necessary to roll the die 1 billion times to check for randomness*

A billion times may be overkill, or it may be insufficient - it depends on how much randomness you need. I think it might help to stop thinking of this in terms of two possible states (dice must be either "completely random" or "less than completely random"), and start thinking of all dice as not completely random, and then ask yourself "how much randomness is sufficient for my purposes? At what point does a slight tendency of a die to favour one side become a problem?"

Once you know how much randomness you need or want, you can then fairly easily work out how many rolls you need to make to check that a die meets your standard.

posted by -harlequin- at 12:05 PM on April 30, 2008 [1 favorite]
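Harlequin's point (pick a tolerable bias first, then compute the rolls required) can be sketched numerically. Below is a minimal Python sketch using a one-sided test on a single face with a normal approximation; `rolls_needed` is a hypothetical helper written for this thread, not a standard function:

```python
import math
from statistics import NormalDist

def rolls_needed(p1, p0=1/6, alpha=0.05, power=0.9):
    """Rough number of rolls for a one-sided test on a single face to
    detect a true frequency p1 instead of the fair p0, at significance
    alpha with the given power (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # significance threshold
    z_b = NormalDist().inv_cdf(power)       # power requirement
    spread = z_a * math.sqrt(p0 * (1 - p0)) + z_b * math.sqrt(p1 * (1 - p1))
    return math.ceil((spread / (p1 - p0)) ** 2)

# A die whose 5 really comes up a quarter of the time needs only a couple
# of hundred rolls to flag; a subtler bias (say p1 = 0.18) needs thousands.
```

The takeaway matches the comment above: the sample size is a function of how small a bias you insist on detecting.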

Er, to more directly answer your question: the number of rolls you'd need to get a solid inference about whether the die is loaded or not depends on how loaded it is. If 5 is coming up 1/2 the time, you'd need very few rolls to get a significant result. However, if it's loaded such that the 5 is coming up only a little more often than normal, you'd need many more rolls to be able to conclude that it's off.

posted by logicpunk at 12:09 PM on April 30, 2008

posted by logicpunk at 12:09 PM on April 30, 2008

You can't answer the question as formulated. The number of rolls you need depends on just how out of line the data are. If you rolled the die six times and it came up 5 every time, that would appear pretty significant. If you rolled the die 100 times and it came up 5 on 25 of them, it might not be.

A handy online chi-square calculator so you can check it out for yourself.

posted by Lame_username at 12:29 PM on April 30, 2008
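For readers without the linked calculator handy, the Pearson chi-square statistic it computes is easy to reproduce directly. A short sketch (`chi_square_stat` is an illustrative helper; the 11.07 cutoff is the standard 5% critical value for 5 degrees of freedom):

```python
def chi_square_stat(counts):
    """Pearson chi-square statistic for observed face counts
    against a uniform (fair-die) expectation."""
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# The example above: 100 rolls, 5 came up 25 times, every other face 15.
counts = [15, 15, 15, 15, 25, 15]
stat = chi_square_stat(counts)  # 5.0, well below the 5% cutoff of ~11.07,
# so 25 fives in 100 rolls is NOT significant evidence of bias.
```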


If I recall correctly (and I probably don't) you would expect each number to come up one time in six, plus or minus √(2n/6).

So if you throw a die 100 times and get more than 22 (or fewer than 11) sixes then something is probably wrong. If you throw a die 1000 times and get more than 185 (or fewer than 148) sixes then something is probably up.

posted by alby at 12:56 PM on April 30, 2008
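For reference, the exact standard deviation of one face's count in *n* fair rolls is √(n·(1/6)·(5/6)), which is slightly larger than the √(2n/6) recalled above, so the bands come out a little wider. A sketch of a two-sigma (roughly 95%) band using the exact binomial formula:

```python
import math

def two_sigma_band(n, faces=6):
    """Approximate 95% range for the count of one face in n fair rolls,
    using the exact binomial standard deviation sqrt(n * p * (1 - p))."""
    p = 1 / faces
    mean = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return mean - 2 * sigma, mean + 2 * sigma

# two_sigma_band(100) gives roughly (9.2, 24.1) sixes,
# two_sigma_band(1000) roughly (143, 190) -- close to the figures above.
```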


A complete answer to this question depends on the following factors:

- Your prior belief about the distribution the die's sampling from, expressed as a distribution *D* on the set of multinomials on six events (or *n* events for an *n*-sided die).
- The degree of precision with which you require the die to be "fair." (It is impossible to establish precise fairness by sampling alone.)
- The degree of confidence you need before you'll conclude that the die is fair to said precision.
- The degree of confidence you need before you'll conclude that the die is *not* fair to said precision.
- The sample you have drawn so far. (There's no fixed sample size: in principle a series can teeter on the edge of plausibility forever, although you could compute a sample size which would *probably* resolve the question, where "probably" means "with some probability less than but very close to one.")

Let's say you throw the die *m* times, and see side *i* come up *m_i* times in these throws. A convenient prior to use when working with such data is a Dirichlet. You ought to be fairly confident that the die is fair, so it makes sense to choose a prior which is fairly heavily concentrated around the uniform distribution. A Dirichlet prior with this property is the one with a pseudocount of α_i = 10 for each face *i*. The posterior distribution given the counts is the Dirichlet with pseudocount α_i = 10 + *m_i* for face *i*. The value of this density on the multinomial with probability *x_i* for face *i* is given in the first formula here.

Let's say you want to verify that the die is "fair" in the sense that each face has probability 1/6 plus or minus 0.01. This is a set of multinomials, and you can integrate the posterior density over that set to compute the posterior probability *p* that your sample was drawn from a multinomial within that set. If *p* is greater than the confidence you require in the die's "fairness," your sample is big enough, and you can conclude that it's fair. If 1 − *p* is greater than the confidence you require in the die's *un*fairness, your sample is big enough, and you can conclude that it's unfair.

But that could be more work than you want to go to. Might be more efficient just to buy a new die...

posted by Estragon at 1:27 PM on April 30, 2008
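Estragon's integral has no tidy closed form, but it is straightforward to approximate by Monte Carlo, drawing from the posterior Dirichlet via independent Gamma variates. A sketch under the same assumed prior (pseudocount 10 per face); `posterior_prob_fair` is an illustrative helper, not from the comment:

```python
import random

def posterior_prob_fair(counts, pseudocount=10, tol=0.01,
                        draws=20000, seed=0):
    """Monte Carlo estimate of the posterior probability that every
    face probability lies within 1/6 +/- tol, under a symmetric
    Dirichlet(pseudocount) prior updated with the observed counts."""
    rng = random.Random(seed)
    alphas = [pseudocount + c for c in counts]
    hits = 0
    for _ in range(draws):
        # A Dirichlet draw is normalized independent Gamma(alpha_i) draws.
        g = [rng.gammavariate(a, 1.0) for a in alphas]
        total = sum(g)
        if all(abs(x / total - 1 / 6) <= tol for x in g):
            hits += 1
    return hits / draws

# With only ~200 rolls the posterior is still wide, so the probability
# that all six faces sit within 1/6 +/- 0.01 comes out low: as Estragon
# says, tight precision demands a large sample.
```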

Thanks. Good answers, all, but special thanks have to go to logicpunk for providing a link that not only explains chi-square simply for folks like me, but does it with...an example of determining if a die is loaded. Brilliant!

posted by Bugbread at 2:01 PM on April 30, 2008


All real dice are biased, incidentally. Even the very best manufactured dice would take many thousands of rolls for statistical methods to reveal their bias with a fair degree of certainty. The stronger the bias, the easier it is to see, and the fewer rolls (i.e., the less statistical "power") are required to demonstrate it.

Interestingly, in statistics, a die can never be proven biased. You can only make a statement phrased like the following:

The chance of observing the obtained data set with 1000 rolls, if the die were truly unbiased, is 0.01%. (In other words, if you did 10,000 "thousand-roll" trials, only 1 of them would be expected to produce such extremely biased results as were observed.)

The more you roll the die, the smaller that percentage figure gets. The more heavily the die is biased, the fewer the rolls necessary to get to any given degree of certainty.

posted by ikkyu2 at 11:09 PM on April 30, 2008
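ikkyu2's "thousand-roll trials" reading of a p-value can be simulated directly: roll a genuinely fair die many times over and count how often chance alone produces data as extreme as what you observed. A sketch (`simulated_p_value` is an illustrative helper, using the chi-square statistic as the measure of extremeness):

```python
import random

def simulated_p_value(observed_counts, trials=2000, seed=1):
    """Fraction of fair-die simulations whose chi-square statistic is
    at least as extreme as the observed one -- an empirical p-value."""
    n = sum(observed_counts)
    faces = len(observed_counts)
    expected = n / faces

    def stat(counts):
        return sum((c - expected) ** 2 / expected for c in counts)

    obs = stat(observed_counts)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(trials):
        counts = [0] * faces
        for _ in range(n):
            counts[rng.randrange(faces)] += 1  # one fair roll
        if stat(counts) >= obs:
            extreme += 1
    return extreme / trials
```

A heavily lopsided sample (say 50 fives in 100 rolls) yields an empirical p-value near zero, while a near-uniform sample yields one near 1, matching the comment: more rolls or heavier bias both drive the figure down.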

