# Given P outcomes, how many different P's are chosen when N choices are made?
December 11, 2005

Basic statistics: given P possible outcomes, how many different P's are chosen on average when N choices are made?

My question assumes that each selection is made randomly and independently (there's no replacing/removing, i.e. the same option can be selected for each N), and "average" refers to the average of how many different outcomes are chosen. The order of choices is irrelevant -- foo + bar and bar + foo both count as 2 distinct choices total, and foo + foo or bar + bar are both 1 total. In the case of P=2 and N=2, for example, I would intuitively guess 1.5 on average (i.e. half the time selecting the same one both times and half the time selecting two different ones). Is my intuition correct?

Going beyond that though, if I had P=20 options and selected N=10 at random, how many different options should I expect to have actually wound up selecting, on average? (Surely less than 10 due to duplicate selections, but how much below 10?) What if I had P=100 options and I selected N=50 times at random? And so on. How would I express the answer as a formula?

Links are welcome! This is not for class or academia, so for your explanation, brownie points will be awarded for clarity over theory. Out of personal interest, I do appreciate reading and learning what's the appropriate jargon to use but only where relevant. Thanks in advance.
posted by DaShiv to Education (23 answers total)

The probability that you pick an unpicked item (out of P options) in each of N rounds is found (at each round) by:

Round 1 : 1
Round 2 : (P-1)/P
Round 3 : (P-2)/P
...
Round N : (P-N+1)/P

The cumulative probability is the product of those round-wise probabilities. So, the probability of getting through N rounds without picking any duplicates is P(P-1)(P-2)...(P-N+1)/P^N, which can also be written P!/((P-N)!·P^N). Now, the average number of distinct choices is a little more tricky. Er, tricky enough that I can't make it up off the top of my head - mostly because, if you screw up one time and double-dip the same item, it sets your probabilities back a round.
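That round-by-round product is easy to compute directly; here's a minimal Python sketch (the function name is my own, not from the thread):

```python
def prob_all_distinct(P, N):
    """Probability that N independent uniform draws from P options are all different."""
    prob = 1.0
    for k in range(N):
        # Round k+1 must avoid the k items already drawn: chance (P - k) / P
        prob *= (P - k) / P
    return prob

print(prob_all_distinct(2, 2))     # 0.5
print(prob_all_distinct(365, 23))  # about 0.49 -- the classic birthday threshold
```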

But first a question or two: Are you replacing the samples once you've drawn them, or removing them. You suggest that you're doing neither, which is rather difficult to comprehend. If I draw a blue marble (from a bag of outmoded toys, naturally), do I have a chance of drawing that exact same marble again? (My guess here is yes - there is only one 'foo' in the bag, but I might be able to draw it twice in successive rounds.)

(The exclamation point means factorial: 3! = 3*2*1 = 6 and 123! = 123*122*121 * ... * 2 * 1 = 1.2146*10^205, which is really rather bigger than I thought it would be.)
posted by metaculpa at 3:08 AM on December 11, 2005

Sorry, I meant to say that the options are always the same, so according to the "drawing of marbles" visualization of the problem, they're being replaced after each drawing.

Also, I should have said "expected value" of how many distinct options (marbles etc) are drawn with N trials, rather than "average"... but I hope that was clear in context.

Finally, I'm looking to solve a problem where P is somewhere in the 10-20 range (not sure yet) and N is 10 -- would I need a special computer program for this, or is it doable with a reasonably good calculator?
posted by DaShiv at 3:34 AM on December 11, 2005

I just spent over an hour thinking about this, writing simulations, realizing my theories were wrong, thinking some more, and now I have to hang up the internet and go to bed. If nobody's come up with something good by the time I wake up, I'll have another crack at it, but my current theory is that this problem is absurdly complicated.
posted by aubilenon at 4:10 AM on December 11, 2005

I know we did this in my intro to stats class with the "marbles" being removed from rotation. I can't remember if we did it while also putting them back. I'll check my notes and get back to you, since my final in that class is tomorrow.

by the way, I didn't use anything but a TI-80, which is not a graphing calculator, so you should be fine.

IIRC these equations involve exclamation points.
posted by bilabial at 5:06 AM on December 11, 2005

My question assumes that each selection is made randomly and independently (there's no replacing/removing, i.e. the same option can be selected for each N), and "average" refers to the average of how many different outcomes are chosen.
This simplifies things a little bit. It looks like you're specifying 10 draws out of 20, but you're really specifying 1 draw out of 20, done 10 times, giving (1/20)^10 for your chances of any particular sequence. Although "set" isn't quite the right word to describe what you want, since {1,1,1,1,1,1,1,1,1,1} isn't a valid subset of {1 .. 20}. It's for this same reason that combinations and permutations aren't really valid for your scenario, either.
posted by boo_radley at 5:45 AM on December 11, 2005

This is an excellent question. I'm still working on it, but here's how I've rephrased it in algebraic terms.

Given P variables x1, ..., xP, let x1^a1 · x2^a2 · ... · xP^aP be a monomial of degree N. This means that a1 + a2 + ... + aP = N. We want to know the average number of distinct variables in such a randomly chosen monomial. What we'll do is count how many such monomials there are with N distinct, N-1 distinct, ..., 1 distinct variable, then take a weighted average to see what the expected value is.

Some facts to start off with.
1. The total number of such monomials is the binomial coefficient C(P+N-1,N), sometimes called the multichoose function. Here the binomial coefficient C(n,k), also sometimes written nCk, is n!/(k!(n-k)!), and the factorial n! is n(n-1)(n-2)...(2)(1). C(n,k) counts the number of k-element subsets of an n-element set.
2. The number of such monomials with all variables distinct is just C(P,N) -- all we have to do is to choose N of the variables out of P.
3. The number of such monomials with N-1 distinct variables is C(P,N-1)xC(N-1,1) -- first we choose N-1 variables out of the P, then we choose which 1 of those will appear with exponent 2 (the others all getting exponent 1).
4. The number of such monomials with N-2 distinct variables is C(P,N-2)xC(N-2,2) -- first we choose the N-2 distinct variables, then we choose 2 of those to get exponent 2.

This is where I start getting confused. Maybe it will come to me over breakfast. (or maybe Aknaton will appear and make everything all better)
posted by gleuschk at 6:02 AM on December 11, 2005

Aha. 4 above is wrong. Here's the poop.

The number of such monomials with N-k distinct variables is C(P,N-k)xC(N-1,k). First we choose the N-k distinct variables that will appear. Then we need to count the number of monomials of degree N in those N-k variables that include every variable at least once. Since N-k of our exponents are thus spoken for, we have only k left to spread around. Thus it's enough to count the number of monomials of degree k in N-k variables. That's C(k+N-k-1,k) = C(N-1,k).
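These counts can be sanity-checked numerically: summed over k, they should recover the multichoose total C(P+N-1,N) from fact 1. A quick Python check (the small values of P and N are chosen arbitrarily, just for illustration):

```python
from math import comb  # binomial coefficient C(n, k); Python 3.8+

P, N = 5, 4
# comb(P, N-k) * comb(N-1, k) = number of degree-N monomials in P variables
# having exactly N-k distinct variables, for k = 0 .. N-1
counts = [comb(P, N - k) * comb(N - 1, k) for k in range(N)]
# Their sum should equal the multichoose total C(P+N-1, N)
print(counts, sum(counts), comb(P + N - 1, N))
assert sum(counts) == comb(P + N - 1, N)
```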

The expected number of distinct variables is then (Σk=0..N-1 C(P,N-k) · C(N-1,k) · (N-k)) / C(P+N,N).

That sum probably has a nicer form, but I don't see it immediately, and if all you want to do is compute it once, this might be enough.

Something like Maple will do this in a flash -- if you can figure out what numbers you want, just let me know and I'll crunch it out for you.
posted by gleuschk at 6:17 AM on December 11, 2005

Aw, nuts -- there are some problems with this. The main one has to do with order. I'm not sure whether it's a misinterpretation on my part, or an ambiguity on yours.

Let's take your example case, where N=P=2. In my terms, what we want to do is count how many monomials of degree 2 there are in 2 variables. That's easy: {x^2, xy, y^2} is three. Of these, two involve only one variable and one involves both, so it would seem that the average is 4/3. This is different from the 3/2 you expected to get. So where did your 3/2 come from? From {x^2, xy, yx, y^2}. Even though you said that you wanted foo+bar and bar+foo counted as the same, we need to be more careful about which of these scenarios is the right thing.

also, I goofed on the denominator in my displayed formula up there -- should be C(P+N-1,N).
posted by gleuschk at 6:46 AM on December 11, 2005

Don't mind me, this is my idea of a pleasantly relaxing Sunday morning.

OK, I think the error was mine -- we may not care about what order the outcomes occur in, but for counting purposes we need to take order into account. To take the N=P=2 example again, we have to make sure that we acknowledge that the "mixed" monomial xy occurs twice as often as either of the "pure" monomials x^2, y^2. Order is the way to do that.

Let's do a couple more examples with small numbers.

P=3, N=2. The (non-commutative!) monomials of degree 2 in x,y,z are {x^2, y^2, z^2, xy, yx, xz, zx, yz, zy}. Of these, three involve just one variable, while six involve two. So the average number of variables is (3×1 + 6×2)/9 = 15/9 = 5/3.

Does this match with our intuition? We're making two choices out of three possibilities. The first one is entirely arbitrary, and then we care about whether the second one is the same as the first or something different. It seems like a third of the time we should get the same thing on our two selections, while 2/3 of the time we should get different things. This would mean the average number should be (1/3)×1 + (2/3)×2 = 5/3. Sweet.

P=N=3. There are 27 = 3^3 total monomials to consider. Three of them ({x^3, y^3, z^3}) involve just one variable, 18 involve just two, and 6 involve all three (e.g., xyz, xzy, etc.). So we get (3×1 + 18×2 + 6×3)/27 = 57/27 = 19/9.

Does this match our intuition? Again, we can make the first selection arbitrarily -- let's suppose it was x. Then we have to make two more selections from a pool of three ... but that's the case above! Now it's not so hard to convince yourself from the list of monomials up there that we should get just one variable 1/9 of the time, two variables 6/9 of the time, and three distinct variables 2/9 of the time, giving an average number of variables of 19/9. Excellent.

I know how to do it now, but the counting is still eluding me. More in a while.
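Those small cases are easy to brute-force by enumerating all P^N equally likely ordered sequences; here's a short Python sketch (mine, added just to confirm the hand counts above):

```python
from itertools import product

def exact_average_distinct(P, N):
    # Average number of distinct values over all P**N ordered outcome sequences
    total = sum(len(set(seq)) for seq in product(range(P), repeat=N))
    return total / P**N

print(exact_average_distinct(2, 2))  # 1.5
print(exact_average_distinct(3, 2))  # 1.666... = 5/3
print(exact_average_distinct(3, 3))  # 2.111... = 19/9
```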
posted by gleuschk at 8:16 AM on December 11, 2005

I did this in Matlab... for your two examples (P=20, N=10 & P=100, N=50) the mean # of different results I found was 8 and 40, respectively. For P=50, N=25, the mean was about 20. So for cases where the number of choices you're making is half the number of possibilities, it looks like the expected # of different results is going to be around .8*N. There's probably a more general formulation, tho.
posted by logicpunk at 8:44 AM on December 11, 2005

I think it might be easier to ask yourself what number of outcomes will not be chosen. The probability that a given outcome will be skipped on any one choice is (1 - 1/P); after N picks, it becomes (1 - 1/P)^N. Finally, since there are P objects, and each has a probability of being ignored of (1 - 1/P)^N, we expect P * (1 - 1/P)^N objects to be ignored, and therefore P * (1 - (1 - 1/P)^N) distinct objects to have been selected.

Just checking this with a few test cases gives no objects selected for N = 0 and one object selected for N = 1, regardless of P (which is, of course, what we expect); 1.5 objects, on average, selected for P = 2 and N = 2; and 5/3 objects being selected for P = 3 and N = 2 (you can write out all the combos in this case and show that this works too.) Finally, for the larger cases you asked about, N = 10 and P = 20 gives an average of 8.025 objects selected, and N = 50 and P = 100 gives us 39.499 objects selected.

The form of this answer makes me think that you could use an inclusion-exclusion argument to derive it; and for large P, I think that this could also be answered using a Poisson distribution. If I have any further thoughts, I'll be sure to post 'em.
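That formula is one line of code; here's a sketch (function name mine) checking it against the cases discussed in the thread:

```python
def expected_distinct(P, N):
    # Each of the P outcomes is missed by a single draw with probability 1 - 1/P,
    # so it is missed by all N draws with probability (1 - 1/P)**N.
    return P * (1 - (1 - 1/P)**N)

print(expected_distinct(2, 2))     # 1.5
print(expected_distinct(20, 10))   # about 8.025
print(expected_distinct(100, 50))  # about 39.5
```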
posted by Johnny Assay at 9:05 AM on December 11, 2005

There's a simpler version of this, the birthday problem: with P=365, what's the probability of two or more people in a group of size N having the same birthday? The answer is surprisingly high: with N=25, the probability is almost 60%.
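For comparison, that birthday probability is just the complement of the all-distinct product; a quick Python sketch (function name mine) reproduces the "almost 60%" figure:

```python
def prob_shared_birthday(P, N):
    # 1 minus the probability that all N draws from P options are distinct
    all_distinct = 1.0
    for k in range(N):
        all_distinct *= (P - k) / P
    return 1 - all_distinct

print(prob_shared_birthday(365, 25))  # about 0.57
```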
posted by russilwvong at 9:25 AM on December 11, 2005

Eh, my combinatorics chops aren't up to the task, and I have to go xmas shopping. Drat. Looks like Johnny Assay has the right thing anyway. (For the record, what I was doing is essentially an inclusion-exclusion argument.)
posted by gleuschk at 9:37 AM on December 11, 2005

Ah yes, the birthday problem. With the numbers russilwvong chose, we expect to get about 24.2 distinct birthdays in the group, so that would make sense.
posted by Johnny Assay at 12:27 PM on December 11, 2005

Gleuschk, you crack me up.

Dashiv, would it be useful if we just solved the problem for you, given P = 1:50 and N = 10? That we can do, clearly. We don't seem to be able to find a general formula, though. (I can't get much further than the above, at least.)
posted by metaculpa at 12:36 PM on December 11, 2005

In answer to your question above: there probably is a way to do this with a simple calculator, but the above better-statisticians-than-I don't seem to have it. And neither do I. But here are the answers for N = 10 and P = 1-30; I just estimated them using 10,000 samples for each case. (Woo, computers are fast!) I'll <small> them so I don't waste too much space.

This was done in Matlab, but the code should port relatively easily to Octave (which is free, and runs easily on Macs if you have one). I'll put the source below as well (which actually takes up _more_ space in <pre>, so I'll leave it like it is).

I'm happy to generate any number of iterations of this for ya, if Octave seems like a little much. Just let me know.

For P = 1 and N = 10, the expected number of unique samples is 1.
For P = 2 and N = 10, the expected number of unique samples is 1.9978.
For P = 3 and N = 10, the expected number of unique samples is 2.9471.
For P = 4 and N = 10, the expected number of unique samples is 3.7741.
For P = 5 and N = 10, the expected number of unique samples is 4.4591.
For P = 6 and N = 10, the expected number of unique samples is 5.0399.
For P = 7 and N = 10, the expected number of unique samples is 5.4943.
For P = 8 and N = 10, the expected number of unique samples is 5.8978.
For P = 9 and N = 10, the expected number of unique samples is 6.2183.
For P = 10 and N = 10, the expected number of unique samples is 6.5108.
For P = 11 and N = 10, the expected number of unique samples is 6.7605.
For P = 12 and N = 10, the expected number of unique samples is 6.9792.
For P = 13 and N = 10, the expected number of unique samples is 7.1797.
For P = 14 and N = 10, the expected number of unique samples is 7.3343.
For P = 15 and N = 10, the expected number of unique samples is 7.492.
For P = 16 and N = 10, the expected number of unique samples is 7.6174.
For P = 17 and N = 10, the expected number of unique samples is 7.7365.
For P = 18 and N = 10, the expected number of unique samples is 7.8491.
For P = 19 and N = 10, the expected number of unique samples is 7.9167.
For P = 20 and N = 10, the expected number of unique samples is 8.0066.
For P = 21 and N = 10, the expected number of unique samples is 8.1002.
For P = 22 and N = 10, the expected number of unique samples is 8.1998.
For P = 23 and N = 10, the expected number of unique samples is 8.256.
For P = 24 and N = 10, the expected number of unique samples is 8.3147.
For P = 25 and N = 10, the expected number of unique samples is 8.3703.
For P = 26 and N = 10, the expected number of unique samples is 8.4307.
For P = 27 and N = 10, the expected number of unique samples is 8.4792.
For P = 28 and N = 10, the expected number of unique samples is 8.518.
For P = 29 and N = 10, the expected number of unique samples is 8.5735.
For P = 30 and N = 10, the expected number of unique samples is 8.622.

%Number of draws
Nmin = 10;
Nmax = 10;
%Number of possible choices.
Pmin = 1;
Pmax = 30;
%Number of samples for each case.
samples = 10000;

%Iterate over the range of N
for n = 1:Nmax-Nmin+1
    N = Nmin+n-1;
    %Iterate over the range of P
    for p = 1:Pmax-Pmin+1
        P = Pmin+p-1;
        %Do it a bunch of times.
        for j = 1:samples
            %Do what? The sampling, you fool.
            clear s;
            for d = 1:N,
                s(d) = ceil( P * rand );
            end
            c(n,p,j) = length(unique(s));
        end
        fprintf('For P = %g and N = %g, the expected number of unique samples is %g.\n',P,N,mean(c(n,p,:)));
    end
end
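For anyone without Matlab or Octave, the same simulation ports to a few lines of Python (this translation is mine, not part of the original post; the seed is fixed so the estimate is reproducible):

```python
import random

def simulate_expected_distinct(P, N, trials=10000, seed=0):
    # Monte Carlo estimate: draw N values uniformly from P options (with
    # replacement) and average the number of distinct values over many trials.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += len(set(rng.randrange(P) for _ in range(N)))
    return total / trials

print(simulate_expected_distinct(20, 10))  # close to 8.0
```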

posted by metaculpa at 12:58 PM on December 11, 2005

Here's a Python function that does it, based on the fact that P(x unique in a sample of c) = P(x unique in a sample of c-1 and we draw a duplicate) + P(x-1 unique in a sample of c-1 and we draw a non-duplicate).
```python
from __future__ import division  # so int/int = float

def expected_unique(N, C):
    P = [1.0, 0.0]
    for c in range(1, C+1):
        P = [0.0] + [P[x]*x/N + P[x-1]*(N-x+1)/N for x in range(1, c+1)] + [0.0]
    return sum(x*p for (x, p) in enumerate(P))

print(expected_unique(2, 2))  # prints 1.5
```
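The same recurrence can also be written iteratively; this restatement (my rewrite, with renamed variables) agrees with the closed form P * (1 - (1 - 1/P)^N) given earlier in the thread:

```python
def expected_distinct_dp(num_options, num_draws):
    # probs[x] = probability of exactly x distinct values so far; start at 0 draws
    probs = [1.0]
    for _ in range(num_draws):
        new = [0.0] * (len(probs) + 1)
        for x, p in enumerate(probs):
            new[x] += p * x / num_options                      # drew a duplicate
            new[x + 1] += p * (num_options - x) / num_options  # drew a new value
        probs = new
    return sum(x * p for x, p in enumerate(probs))

# Matches the closed form for P = 20 options and N = 10 draws:
assert abs(expected_distinct_dp(20, 10) - 20 * (1 - (1 - 1/20)**10)) < 1e-9
print(expected_distinct_dp(20, 10))  # about 8.025
```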

posted by Canard de Vasco at 2:54 PM on December 11, 2005

Johnny Assay's argument / formula seems pretty convincing to me.
posted by andrew cooke at 3:28 PM on December 11, 2005

Oops. Sorry about the wasted space. Johnny Assay's answer is right, and mine is superfluous. Well done, Mr. Best Name Ever Johnny Assay.
posted by metaculpa at 3:28 PM on December 11, 2005

Thanks everyone! Johnny Assay's formula of P * (1 - (1 - 1/P)^N) is very calculator- and spreadsheet-friendly, and I played with the numbers and they look pretty good to my (untrained) eyes. I don't know enough to really judge the answers here but since the consensus seems to agree with Johnny Assay, I'll go ahead and mark his as the best one. Thanks to everyone for delving into the theory and for doing the number-crunching with your fancy math programs.

I'm also marking russilwvong's answer since the Birthday Problem is pretty well-known (or at least even I knew about it beforehand), but it had completely escaped me that it's a salient illustration here. Thanks for bringing it up as a very interesting case of P=365 and N=25.
posted by DaShiv at 4:50 PM on December 11, 2005

To assure you that JA's answer is the correctimost, I'll say that his analytical answer jibes with my brute force answer.

What's this for, Shiv?
posted by metaculpa at 11:26 PM on December 11, 2005

As part of a game I work on in my spare time.
posted by DaShiv at 4:10 AM on December 12, 2005

uh, I can't compete with any of that. Especially since my brain is now officially fried by the final last night. 7 problems. 3 hours. Funny thing is, I really like statistics. Good thing I'm going to be an anthropologist when I grow up.
posted by bilabial at 7:10 AM on December 13, 2005
