Am I biased or random?
October 18, 2005 8:55 AM Subscribe
I'm interested in the neuroscience behind decision making: specifically, do people have "random number generators" in their brains for breaking decision stalemates?
Let's say someone offers me an apple and there are two on the table in front of me. How do I choose which one to take?
If they are different from each other, I can rely on all sorts of criteria: I prefer redder apples; I prefer larger apples; etc. But suppose they seem identical and I really want an apple. I MUST choose between them SOMEHOW or I'll be sitting there forever staring at the two apples. There's a clear survival advantage to breaking these kinds of stalemates.
If I were modeling this situation in an A.I. program, I can think of two possible solutions:
1) Include a random "coin-flip" module which is invoked when all more reasonable decision-making procedures fail.
2) Include arbitrary biases.
Expanding on 2: when we see no qualitative difference between two apples, do we just default to the one on the left (or the one on the right)? This might differ between two people -- one might always choose left and the other might always choose right -- but would a single person always make the same biased choice under similar circumstances?
If so, there must be dozens of these "defaults" built into the brain (pick the one on the left, pick the bigger one, pick the nearest one, etc.). A random-number generator seems like a simpler method.
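(If I were sketching it in code -- with the apple-scoring obviously invented -- the two options might look something like this:)

import random

def choose(apples, score):
    # score() stands in for all the "reasonable" criteria: redness, size, etc.
    best = max(score(a) for a in apples)
    tied = [a for a in apples if score(a) == best]
    if len(tied) == 1:
        return tied[0]
    return random.choice(tied)    # Option 1: the coin-flip module
    # return tied[0]              # Option 2: an arbitrary bias ("leftmost wins")

Option 2 only looks simple here because a single bias is standing in for what would really be a whole table of them.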
But if these biases do exist, are they universal (or close to universal)? Do all people tend to pick the left one? Is there a list somewhere of these biases? [This could be useful for interface design and many other disciplines!] Is it cultural? Do Americans pick the left apple while Chinese pick the right apple?
If there is a random-number generator, do we know how it works? Is the generator only invoked as a last resort (if all "better" decision-making processes fail)? Or does it sometimes trump "higher-level" processes?
I guess the same question could be asked for the biases. Do we sometimes choose an apple just because it's on the left, rather than because it looks tastier?
Response by poster: I've noticed the same thing, RustyBrooks: try coming up with 10 random numbers between 1 and 100, with only 10 seconds to do it. It's pretty hard. We yearn for some better criteria to help us choose.
But all this just shows we have a hard time CONSCIOUSLY generating a random number. Our brains do all sorts of things unconsciously that we can't do as easily consciously.
posted by grumblebee at 9:17 AM on October 18, 2005
You're looking for decision heuristics. I read the precis of the book Simple Heuristics That Make Us Smart in BBS. If you don't feel like reading the whole book, I'll email you the article.
posted by Gyan at 9:17 AM on October 18, 2005
In terms of left/right bias, I know that there's an entire industry in charge of designing product layout for stores based (at least partly) on which direction most people go after entering the door. So there are probably also marketing surveys / research articles that might help you with this question.
posted by occhiblu at 9:33 AM on October 18, 2005
rather than because it looks tastier?
We're human...greed & pleasure
posted by thomcatspike at 9:36 AM on October 18, 2005
Best answer: The identical apple problem is a digital one, and we don't think like machines. To your computer apple == apple for most values of apple, but in the analog world of natural thinking there aren't exact values to collide. It's not possible to have two truly identical apples simply by virtue of the fact that one has to be located slightly differently than the other, and things that seem fundamentally "the same" to you in all likelihood have a tiny but nonzero perceived difference that is meaningfully factored into the decision.
Allowing, for simplicity's sake, that the apples could be parsed identically in all the potentially important variables, the primary decision factor could be "arbitrary bias," in a sense, but really just low-priority bias. We start by picking the one that doesn't have a worm in it; if both are clean we go with the ripest; if they both look approximately the same shade of red, we move on down the scale until some less-crucial criterion breaks the tie. It could be something that scarcely matters at all w/r/t apple enjoyment, but it has been settled upon as the end product of a methodical process.
This is simplified by the stipulation that the apples are exactly identical in all higher-priority respects, and in reality we don't choose apples for one reason, but sum up the weighted results of all our judgements down the line.
To model this in an AI program, it would just have to know a great deal about apples and apple selection criteria and their associated priorities. Somewhere along the priority path between an aesthetic sense of the shape of the stem and something about an apple we once saw in that TV show we can't quite remember the name of, it becomes infeasible to model the exponentially growing web of analog vagaries, and a simple rand() will suffice. I suspect the human mind has no such limitation, and will continue to link ever more abstracted concepts until a decision can be made.
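(A toy version of that walk down the priority path, with every criterion invented purely for the sake of the example:)

import random

CRITERIA = [                        # highest priority first, all made up
    lambda a: not a["wormy"],
    lambda a: a["ripeness"],
    lambda a: a["redness"],
    lambda a: a["stem_aesthetics"],
]

def pick(apples):
    candidates = list(apples)
    for criterion in CRITERIA:
        best = max(criterion(c) for c in candidates)
        candidates = [c for c in candidates if criterion(c) == best]
        if len(candidates) == 1:
            return candidates[0]
    return random.choice(candidates)    # the rand() of last resort

In practice you'd rarely reach the bottom of the list; the point is only that the tie-breaker can sit arbitrarily far down it.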
posted by moift at 9:48 AM on October 18, 2005
Response by poster: Let's get away from apples. What if we force the mind to confront something unnatural? Suppose we gave the following test to 1,000 people:
please draw a red circle around ONE item from each of the following groups:
1. (A) (B) (C) (D)
2. (B) (D)
3. (C) (C) (C)
4. (D) (A) (B)
5. (A) (D)
6. (B) (B)
7. (C) (B) (A)
etc...
Most of the questions are dummies. We're only interested in the answers to 3 and 6. The dummies are there to keep people from figuring out the main purpose of the experiment. I suspect if you gave them this:
1. (A) (A)
2. (A) (A)
3. (A) (A)
4. (A) (A)
They'd start purposefully randomizing, just to make things interesting -- or they'd purposefully choose a bias, just to make things easier.
No two apples ARE exactly alike, but (A) and (A) are exactly alike, unless we're going to go down to the level of minuscule differences created by imperfect printing processes. Does the brain really care about (is it even able to detect?) such differences?
The only difference is that one (A) is on the left and one is on the right. So we MUST fall back to randomly choosing or going with a bias, right? Which one do people choose? Would they switch back and forth between the left and the right (A), or would they be consistent? If they switch, would it be 50/50? If they showed a bias, would all (or most) people show the same bias? What if one (A) was above the other, instead of to the left of the other?
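(If anyone ever ran it, checking for a bias would be simple enough; a back-of-the-envelope binomial calculation, nothing fancier:)

from math import comb

def p_at_least(k, n, p=0.5):
    # Chance of k or more "left" choices out of n if everyone were
    # really flipping a fair mental coin.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(600, 1000))    # roughly 1e-10: a real bias, not coin-flipping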
I don't think anyone here can take the test, because this thread dirties the test tube.
posted by grumblebee at 10:12 AM on October 18, 2005
There was a study on pantyhose (which I can't find; "pantyhose study" is not good for googling, trust me); or rather, on decision-making involving pantyhose. A number of women were each placed in front of a table strewn with assorted pantyhose and asked to choose one "at random". A high percentage of them chose a pair on the right. The point of the study was to show that we are bad at self-reporting; the women gave a variety of reasons for why they chose the pair they did, all of them spurious. But the fact of the matter is that our neural nets are programmed against randomness; tendencies are more often than not adaptive. Also, neural nets are not simple on/off left/right switches; there are, as you stated, a lot of factors that go into a decision, but we will most likely go with the most "comfortable" decision, the one that is most strongly coded into our brains by experience. If the last apple you had to pick out was on the right, and a blemished apple was on the left, you will likely go for the one on the right in this case as well, even if there is no observable difference this time.
posted by Eideteker at 10:15 AM on October 18, 2005
Response by poster: I suspect the human mind has no such limitation, and will continue to link ever more abstracted concepts until a decision can be made.
This may be true. When I'm trying to pick one item out of a group of highly similar items, it takes me longer than if I notice differences. Say I'm trying to pick one can of Coke from a shelf in the supermarket. They all look the same to me, and they are so closely packed that several of them seem about the same distance from me. Which do I pick?
I DO pick one, but it takes a moment. Maybe this is because my brain is searching for some sort of criteria and it must do a much longer search than it does when I'm trying to pick, say, chicken cutlets.
A random decision should be quicker, so maybe I AM eventually settling on some odd bias. OR maybe my brain IS ultimately making a random decision, but it only wants to do this as a last resort, so the length of time is created by my brain first exhausting every other possibility.
It seems like sometimes my unconscious mind CAN'T decide, so it throws the problem to my conscious mind. At which point I think, "This is SILLY. They're all the same. I should just pick one!" And somehow I do. How?
posted by grumblebee at 10:20 AM on October 18, 2005
Just a quick correction to the post by RustyBrooks: a one-time-pad encryption key should not be used repeatedly over a period of time. If you use it a single time, then the encrypted message is unbreakable for anybody without previous knowledge of the key (provided that the key really is random). However, if you use it more than once and some adversary intercepts those messages, then they will be able to decrypt all of them without much effort. Hence the name one-time pad.
It is probably true that unused keys typically are considered void after a certain time period though.
Also, in relation to the creation of one-time-pad keys: I seem to recall that they were often initially created by having secretaries bang away randomly on typewriters. But of course this turned out not to be very random at all, since they would alternate between hands when pressing keys, causing a pattern in the keys.
As to grumblebee's question: My suspicion is that the bias thing is more likely the answer than that there is some form of RNG in the brain. Historically, and still today, a human does not need to make decisions in a perfectly random manner (assuming you aren't working as a WWII secretary), so there has been no evolutionary drive towards creating an RNG in the brain. Since the situations where you would normally make a "random" decision aren't of great importance to you, the slightest (possibly unconscious) bias becomes dominant and causes a decision. This would in most normal situations be random enough.
posted by rycee at 10:27 AM on October 18, 2005
Best answer: But the fact of the matter is that our neural nets are programmed against randomness; tendencies are more often than not adaptive.
Randomness can be adaptive. I forget the specifics of the study so I can't find information about it online (I'll try to hunt it down later): at an AI lab somewhere researchers developed little puck-like robots that could scoot all over a tabletop trying to find diamonds painted on the walls. Those that did best at finding the diamonds had their programs replicated with slight random mutations introduced into the code. That is, they underwent an evolutionary process. Many generations later, the best performers were finding the diamonds more reliably than the best program that the computer scientists were able to create from scratch.
What really confused the computer scientists, though, was the fact that the winners of the evolutionary competition all had a light sensor off in the corner of their visual field enabled. Because there was a cost involved with enabling a light sensor, the best performers should have pruned away all the extraneous sensors that weren't contributing to any beneficial function. The sensor must have been contributing to the pucks' fitness. But how? Upon analysis, it turned out that the sensor was acting as a (pseudo-)random number generator! If the puck got caught in a Buridan's Ass scenario and didn't know which way to turn next, it would consult the brightness of light falling upon that sensor.
Moral: (pseudo-)randomness can be adaptive and can evolve by natural selection. Whether something like this is going on in our brains is far beyond our knowledge at this point. I suspect we must rely on some sort of pseudo-random-number generation (it just makes good engineering sense), but I have no idea how this random-number generation is achieved or whether it is tied to a sensory modality.
(Something else really cool about that study is that they trained the pucks continuously, but didn't realize that the environment was changing when they turned the lights off for the night. The pucks speciated into a diurnal species and a nocturnal species!)
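(I can't reconstruct their actual controller, and the names here are my own invention, but the trick as I remember it amounts to treating the spare sensor as a noise source whenever the scores tie:)

def decide_turn(left_score, right_score, corner_light):
    # Normal case: go with whichever side looks more promising.
    if left_score > right_score:
        return "left"
    if right_score > left_score:
        return "right"
    # Buridan's Ass case: let an otherwise useless sensor reading
    # (ambient light flicker) break the tie.
    return "left" if corner_light() % 2 == 0 else "right"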
posted by painquale at 11:34 AM on October 18, 2005
4. (A) (A), circle one
I circled on the left for the last three questions, I might as well just circle the whole column and get this over with. But then the test proctor will think I'm not taking their research seriously, so maybe I'll circle on the right. She reminds me of my ex though, and I don't need that aggravation again so I don't care what she thinks, and I'm in a hurry. Although she does have a shapely ass...
Most answerers won't put this much conscious thought into their answer, but there are always some tangent variables to consider. If the answerer hasn't sexually fixated on the proctor and there aren't such obvious distinctions between circling targets, the grunt work of distinguishing will be done by the subconscious. We typically aren't privy to this process, due to good separation of concerns by the intelligent designer maybe, but a cigar never really is just a cigar, for example, and the workings behind the curtain of conscious thought will find some bias to pick left (A) or right (A).
Randomness is one of those things like objectivity that are helpful as concepts but impossible in practice. A computer's random isn't capital R "Random," it's an algorithm that gives varying results based on a seed. A computer does math and it uses the tools at its disposal. We do concept linking, and we use the tools at our disposal. There's no reason to postulate a special randomness module; in all likelihood we make inconsequential decisions in the same way we do everything else, by examining the direct differences between our choices and the potential effects propagated to conceptual links referenced by our choices. If there are no such differences, it ceases, by definition, to be a choice.
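(The "seed" part is concrete enough to show in a few lines. This is a textbook linear congruential generator, the sort of thing many rand() implementations use under the hood; deterministic all the way down, and "random" only in that we can't easily see the pattern:)

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Same seed in, same "random" sequence out, every time.
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m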
Even if you amend your test to instruct people to "circle up to one in each group," so that the choice becomes circling or not circling (A), eliminating left and right bias, there is still a foothold for abstraction to the first handy linked concept that does introduce some distinction. Human randomness, like computer randomness, is only truly random in its functionality. Human randomness is the product of decisions made on the basis of highly abstracted conceptual links. We may not be able to say exactly by what path a random decision is made, but that doesn't mean there isn't a path.
Basically, what rycee said better and more succinctly. I'm not letting all this typing go to waste :/
posted by moift at 11:37 AM on October 18, 2005
Response by poster: Painquale, that is absolutely fascinating. Please share the reference if/when you find it!
Truthfully, I felt a human RNG was unlikely. My gut told me that moift was right, but I tried to play devil's advocate, because I want to keep an open mind about it. If your source is correct, it certainly doesn't mean moift is wrong, but it is tantalizing.
An RNG does seem like a simpler solution (when faced with nearly identical choices) than a big table of biases, so you'd think Natural Selection would find that solution -- unless it has some hidden costs (e.g., do (pseudo) RNGs require a ton of processing just to generate the random number?)
I wonder how hard it would be -- in theory -- for a biological entity to develop an organ that could read some external data and use it as a randomizing force. For instance, could an animal read cosmic ray emissions and use them to generate random choices? This sounds similar to what the robots were doing with the lights.
posted by grumblebee at 11:42 AM on October 18, 2005
Fascinating topic. Gyan, could you send me that article? My email is in my profile.
Thanks.
posted by tdismukes at 11:44 AM on October 18, 2005
There are elements in a brain that are truly random (this is where it gets Quantum). There's also enough irrelevant input at any decision point (what's in the corner of your eye, audible noise...) to seed a pseudo-random choice. Brains are fairly chaotic systems, so it seems that these tiny randomnesses can propagate into signals strong enough to tilt a decision if there truly was a stalemate.
On the other hand, as mentioned, humans suck at acting randomly on purpose.
posted by springload at 11:55 AM on October 18, 2005
Painquale, that is absolutely fascinating. Please share the reference if/when you find it!
All I remember offhand is that the project used kheperas (these little guys). I'll ask around and try to get the reference for the particular study.
posted by painquale at 11:59 AM on October 18, 2005
Best answer: tdismukes: turns out it's available for free online.
posted by Gyan at 12:25 PM on October 18, 2005
moift : "The identical apple problem is a digital one, and we don't think like machines."
Can't be sure of this. We don't know if brains are essentially the same as machines, simply operating with a massively larger database and processing capacity. Besides Penrose, do you know of any attempted proof that demonstrates your assertion?
posted by Gyan at 1:54 PM on October 18, 2005
do you know of any attempted proof that demonstrates your assertion?
Short answer: There's no proof and there probably never will be, and I don't have any better sources than google for a few relevant terms.
From what we know concretely about the brain's operation, it's either analog or it's digital on such a fine scale that actually discrete elements blur into functionally continuous operation. The prevalent theory is probably that we have a hybrid setup or that we are in fact digital at the most primitive level, but it's useful and not harmful to think of our thought process as analog at the higher levels to avoid the low-level pitfalls of digital systems (what's the floating point precision of a human being? how long between neural spikes is one operating cycle?). There isn't really a way to prove continuity because there could always be discrete elements just below the smallest measurable scale. Discreteness, while theoretically provable, hasn't been demonstrated.
My personal pov is that it is a bit hubristic to try to fit our brains into the mold of the machines we've made (no machine can generate a system more complex than itself, but they can generate plenty lesser) and the continually elusive hard problems of AI point to fundamentally different architectures. YMMV
posted by moift at 2:55 PM on October 18, 2005
No two apples ARE exactly alike, but (A) and (A) are exactly alike, unless we're going to go down to the level of minuscule differences created by imperfect printing processes.
I think moift's point about apples still holds with (A) and (A) here. One (A) is to the left of the other. One (A) is situated closer to one's dominant writing hand. One (A) is closer to the center of one's field of view because of the position from which one is viewing the piece of paper.
posted by juv3nal at 3:12 PM on October 18, 2005
it's digital on such a fine scale that actually discrete elements blur into functionally continuous operation.
This is true of computers too, which is why we can model connectionist networks with sigmoidal activation curves on a digital computer. So I very much doubt that we can draw the conclusion that "we don't think like machines" because our brains are analog. We can model continuous activation curves finely enough on a digital computer so that they behave indistinguishably from real analog processors. The distinction between analog and digital computing is a real red herring.
(This doesn't necessarily conflict with what you were saying about the apple problem, though. You were just saying that we can take in fine-grained information about differences between the apples, and that information could be used to drive a preference between the two. That's a real possibility. I also agree that AI really has to let go of its old GOFAI symbol-processing ways, but saying that is very different from saying that we don't think like machines.)
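(To put the modeling claim in concrete terms: a "continuous" unit simulated on digital hardware is just a smooth function evaluated in floating point. A minimal logistic unit, purely as illustration:)

import math

def unit(inputs, weights, bias):
    # Weighted sum passed through a smooth activation; the float grid
    # underneath is far finer than any behavioral difference we could measure.
    x = sum(w * v for w, v in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-x))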
posted by painquale at 3:22 PM on October 18, 2005
(no machine can generate a system more complex than itself, but they can generate plenty lesser)
This phrase gets tossed around like it was a theorem, yet I've never seen anything confirming it. If a machine can gather information independently, what's to say that it cannot increase in complexity beyond its creator? If our species is a spinoff from yeast, that's one prominent counterexample.
posted by springload at 3:22 PM on October 18, 2005
This phrase gets tossed around like it was a theorem
It is. Gregory Chaitin proved it in "The Limits of Mathematics". The full text is online, but it's mostly about LISP.
posted by moift at 4:13 PM on October 18, 2005
Also, re: your examples, when you gather information you increase the complexity of your system, so it can lead to incrementally more complex creations but the rule isn't broken. I don't get your meaning with the yeast thing.
posted by moift at 4:19 PM on October 18, 2005
If I saw two apples that were identical I wouldn't eat either ;-)
As for brains being machines, that sounds dangerously like the old "brains are hardware, consciousness is software" argument. Some people would argue that when it comes to the brain, the hardware and software are in fact the same thing. I know that doesn't add much to the topic, so I'll leave it there :-)
posted by ajp at 4:22 PM on October 18, 2005
I very much doubt that we can draw the conclusion that "we don't think like machines"
You're in good company on that, and I don't want to argue too much further out of my depth, but:
We can model continuous activation curves finely enough on a digital computer so that they behave indistinguishably from real analog processors
Map for territory.
A model of a continuous curve is fundamentally not a continuous curve; if it were, it wouldn't be a model anymore, more like an identity. The floating-point depth supported by the program doing the modelling necessitates internally discrete units, and the model is therefore distinguishable from a pure wave at all the infinite points not accessible beyond this precision. It's certainly "good enough" for any practical application I can think of, but there's no mistaking it for analog. We can design digital systems to be extremely fine grained, but we cannot completely obscure the seams. As of yet, no one has found the seams in our own thinking.*
* except neural spikes
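(The seams are easy to poke at in any floating-point system; a quick illustration using Python's math.ulp:)

import math

print(math.ulp(1.0))              # the gap between 1.0 and the next representable float
print(1.0 + math.ulp(1.0) / 4)    # prints 1.0: the values inside that gap simply don't exist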
posted by moift at 4:45 PM on October 18, 2005
The Nisbett & Ross study mentioned by Eideteker shows the primacy-recency effect: that yep, people usually choose the item (apple) on the right. You might choose the apple on the left if you're left-hand dominant, though: the effect's probably related to the prevalence of right-handedness rather than cultural factors like directionality of text. Also, what Rustybrooks mentioned about the Q's: people don't have anything near an RNG, but rather an idea of randomness.
For abundant evidence that (untrained) people rely on a long list of cognitive biases over logic, see Kahneman & Tversky's Judgment Under Uncertainty.
posted by ellanea at 5:32 PM on October 18, 2005
The process in question happens in the dorsolateral frontal lobes, but I'll be blessed if I know how. There certainly hasn't been evidence of RNGs in the ones I've dissected.
posted by ikkyu2 at 8:06 PM on October 18, 2005
moift: My argument re the yeast thing is: If we are considered algorithmic, we have been generated from yeast as much as an AI would have to be generated by humans. The non-increase of complexity holds in a closed system, but we don't have to or want to limit our machines to that.
posted by springload at 11:52 PM on October 18, 2005
painquale: You're way off. You're confusing random mutations in an algorithm with random generation by an algorithm.
Humans were created by random processes with natural selection, but that does not mean we use random processes to make decisions.
posted by delmoi at 3:31 PM on October 19, 2005
From what we know concretely about the brain's operation, it's either analog or it's digital on such a fine scale that actually discrete elements blur into functionally continuous operation.
The prevalent theory is probably that we have a hybrid setup or that we are in fact digital at the most primitive level, but it's useful and not harmful to think of our thought process as analog at the higher levels to avoid the low-level pitfalls of digital systems
Again, 'prevalent' betwixt whom?
Who's "we"? Everything I know about the brain points to it being a neural-network type system. And a 3-credit class on brain chemistry. Software neural networks are easy to implement on digital hardware. And from the software's 'perspective' it's analog as well.
And anyway, the same thing could be said about any computer.
Think about it:
1) Application level: AI software gives a yes or no answer.
2) Software level: Analog ANN using floating point numbers
3) CPU: floating point numbers are stored as digital information, processed by digital logic gates
4) Transistors: digital logic gates are implemented by analog transistors
5) Electrons: Electrons are either there or not, and voltage is a discrete multiple of the electron voltage.
So the question is, at what 'level' are the important things happening. What 'level' of brain activity is the 'software' level that does the thinking? My thinking is that the implementation of an ANN below level 2 won't change the result. So if you have this setup instead
1) person makes a choice, digital
2) Neural network in the brain, analog input, digital (yes/no) output
3) ions in the brain, voltages are multiples of the electron voltage. (digital)
Discreteness, while theoretically provable, hasn't been demonstrated.
I'm pretty sure the discreteness of electrical charges was proven decades ago...
----
And as I have said before, once you get to the 'noise floor' of any analog system, you're going to have to divide the signal into discrete components if you want to do further processing on it, otherwise you'll have nothing but random results.
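(In code terms that's just quantization -- snap each sample onto a grid and treat anything finer than the step as noise. A toy version:)

def quantize(sample, step):
    # Round an "analog" value onto a discrete grid.
    return round(sample / step) * step

print(quantize(0.73158, 0.01))    # 0.73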
posted by delmoi at 3:49 PM on October 19, 2005
we have been generated from yeast as much as an AI would have to be generated by humans
Yeast isn't a system and it didn't create us.
The non-increase of complexity holds in a closed system, but we don't have to or want to limit our machines to that.
If you reference outside information it becomes part of your "closed system" and its complexity increases. This includes data as well as algorithm.
'prevalent' betwixt whom?
The writers of the interesting-looking articles on the first page of my Google search, and most in this thread seem to lean towards the digital side. My guess from what I've seen is that most people agree with you, but I don't have any numbers.
And a 3-credit class on brain chemistry.
I said everything 'we' (us) know 'concretely'. You're talking about theory. Discreteness hasn't been shown on the smallest probeable scale yet, so from what we know concretely, either it isn't discrete or it is but below the threshold we can see with current technology. I realize this is just a slight restatement, but I stand by it. If you have proof of discrete cognitive function in humans, congratulations on your impending Nobel Prize.
Neural networks are a non-symbolic approach to cognition and actually represent the best hope for a continuous system. Even in computer science, they are implemented in analog, digital and hybrid frameworks. The brain's use of neurons doesn't constitute evidence of discrete operation; the complex macro-behavior of NNs like de- and hyper-polarization is thought to be analog.
Even on the level of a single neuron, if it, for example, fires 2/3 of the time when the input is higher than the threshold, it's a probability gate (a nonlinear analog circuit), simple on/off output notwithstanding.
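(Just to be concrete, something like the following -- binary output, but the behavior is a probability, not a step function:)

import random

def probability_gate(x, threshold, p_fire=2/3):
    # Fires on roughly two of every three supra-threshold inputs.
    return x > threshold and random.random() < p_fire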
I'm pretty sure the discreteness of electrical charges was proven decades ago...
Neural spikes are discrete in value, but there's no evidence they're discrete in time. Digital systems need clock cycles to get meaningful data, and we don't know if ...|...|...|... is any different from ...|||... and, if it is, where the discreteness lies. (Excuse my crappy ASCII oscilloscope.) If the distance between spikes is meaningful as a real number, we're analog.
'noise floor'
But neural networks love noise; nothing has to be rounded off for their sake. Our models are often trained with purposeful jitter from white noise to increase the chances of generalisation (they will associate new experiences within +/- noise of a trained experience with that experience, allowing anticipation). There is no need to manually correct for this.
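(The jitter trick itself is nothing exotic; assuming the training examples are plain lists of numbers, it amounts to:)

import random

def jitter(example, sigma=0.05):
    # Present a slightly perturbed copy each pass, so the net learns a
    # neighborhood around the example rather than the exact point.
    return [x + random.gauss(0.0, sigma) for x in example]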
you're going to have to divide the signal into discrete components if you want to do further processing on it in a digital system
Fixed. Analog systems can process analog data. It's what they do.
posted by moift at 12:50 AM on October 20, 2005
Yeast isn't a system and it didn't create us.
...
If you reference outside information it becomes part of your "closed system" and its complexity increases. This includes data as well as algorithm.
Yes. So this non-increase of complexity is interesting for the construction of algorithms and for information theory in general. But when we discuss how capable we can make our machines, the theorem merely states that "the universe can not generate anything more complex than itself", which isn't really a constraint. It does not fix a relationship between the complexity of humans and the complexity of computers.
And while there are likely better examples, yeast is a system. It takes garbage as input and produces more yeast. By the external modification process of natural selection, its output changed from yeast to human beings. I don't think anyone suggests that algorithms and representations in an AI should be hard coded. External interference is inherent to the concept of intelligence.
I'm also with delmoi on the analog/digital matter. Even though the firing times are not discrete, they too are subject to noise. If we decrease the time increment of an ANN sufficiently, the quantization will drown in thermal noise. Such noise can be applied to a digital system as well as an analog one.
posted by springload at 5:47 AM on October 20, 2005
the theorem merely states that "the universe can not generate anything more complex than itself"
The theorem holds in the general form that any system cannot generate anything more complex than itself. The universe is equally subject but not especially so.
stuff about yeast generating human intelligence
Yeast didn't create us any more than silicon created computers. You mention natural selection, so I guess your meaning is that yeast has a creation role in our development because it was an evolutionary root, but polymorphism is not an exemption from Chaitin's rule. If you look at human cognitive development from the primordial-ooze level and at any step you see an unattributed increase in complexity, you're missing something.
When yeast combines with other organics to form an organ or a digestion environment or whatever, its complexity is unaffected. It may be a part of a more complex system, but it isn't a creator/generator, just a constituent. The complexity of the new system is the sum of the irreducible complexities of its constituent parts.
When simple organics polymorphize into more complex systems, they are not creators. You can't get something from nothing, and you can't get a truly spontaneous complexity jump from an organism with the input of just the organism. That would be A = A + C, where A is the "base" complexity of the organism and C is the irreducible complexity of the new functionality. If C > 0 you've got a logical paradox. The universe doesn't arbitrarily bestow advances, and that's why the theorem works. The equation must balance: C[Source] + A = A + C[Dest]. Evolution is the principal C[Source], and you can't write it off as an "outside system" because as soon as it changes yeast, it is adding its own complexity to the new system.
What you're saying is like saying a mother created a more complex machine than herself when she had a child with 12 toes. There is additional complexity (especially in his footwear choices), but she was just a party to it, and didn't create it without the balancing complexity of factors like a certain vitamin deficiency or sleeping with a cousin or whatever it is that causes babies to have more than the standard number of toes.
We don't know what caused the leap from single-celled to multicellular organisms, for example. It may have been a certain combination of chemicals, lightning striking a puddle of primordial goop, or something else entirely, but we know the single cell didn't do it by itself; the new complexity came from some x-factor.
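To get a feel for that balance without doing the full algorithmic-information theory, here is a toy sketch that uses compressed size as a very rough stand-in for irreducible complexity. The zlib proxy, the repeated "yeast" string, and the sizes are all assumptions made up for this sketch: a fixed rule applied to a system leaves the estimate roughly where it was, while mixing in outside information is what raises it.

```python
# Toy illustration only: compressed size as a crude stand-in for
# irreducible complexity. The zlib proxy, the repeated "yeast" string,
# and the sizes are all assumptions made up for this sketch.
import random
import zlib

def approx_complexity(data: bytes) -> int:
    """Rough proxy: length of the zlib-compressed representation."""
    return len(zlib.compress(data, 9))

random.seed(0)

base = b"yeast " * 2000                                    # a simple, regular "organism" A
transformed = bytes((b + 1) % 256 for b in base)           # A run through a fixed, short rule
outside = bytes(random.randrange(256) for _ in range(len(base)))   # unrelated external information
enriched = bytes(a ^ b for a, b in zip(base, outside))     # A combined with that outside input

print("C(A)                 ~", approx_complexity(base))
print("C(fixed rule on A)   ~", approx_complexity(transformed))  # about the same as C(A)
print("C(A + outside input) ~", approx_complexity(enriched))     # much larger; the increase came from outside
```

Compressed length only upper-bounds Kolmogorov complexity, so this shows the flavour of the bookkeeping rather than proving the theorem.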
I'm also with delmoi on the analog/digital matter.
That's cool.
Even though the firing times are not discrete [snip]
AKA continuous, analog
If we decrease the time increment of an ANN sufficiently, the quantization will drown in thermal noise.
A(rtificial)NN is fundamentally not our brain, and its problems with "decreasing time increment" aren't relevant because the brain hasn't been shown to work on any sort of "increment", hence the time-scale continuity /\. I'm not clear on this, but are you saying it's not analog because spikes can't be too close together? It's true that they can't (neurons have a resting period before they can fire again), but that doesn't stop any larger gaps between spikes from being significant on the real-number level. I.e., the difference between a 2 second (plenty of time) pause and 2 seconds plus some surreal number > 0. Let's not do number theory; this is already far enough away from apples. I just thought it was interesting :]
posted by moift at 9:57 AM on October 20, 2005
moift: Look, I am aware of the theorem and its implications, and I'm not out to debunk it. My point is that this particular theorem does not pose a constraint on machines, because systems other than their programmers are allowed to feed them with complexity. OK, closing my case on that; I'm really here to explain what I mean by the time increment:
The surreal number >0 is not relevant if it does not exceed the noise floor. Since the brain operates at about 300 K, there is plenty of thermal noise, and that includes 'jitter', i.e. noise in the spike timing. If there is 1 ns of RMS noise, differences in firing intervals much shorter than that do not convey information. We can quantize the time in intervals of e.g. 1 ps, and the error introduced this way is insignificant compared to the 'errors' that are already present in the simulated neuron.
posted by springload at 2:19 PM on October 21, 2005
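For what it's worth, that argument is easy to check numerically. The sketch below takes the 1 ns RMS jitter and 1 ps grid quoted above at face value; the 10 ms nominal inter-spike interval is my own assumption. The quantization error comes out around 0.3 ps RMS, three orders of magnitude below the jitter.

```python
# Numerical check of the claim above, taking the quoted 1 ns RMS jitter and
# 1 ps grid at face value; the 10 ms nominal inter-spike interval is assumed.
import random
import statistics

random.seed(0)

DT = 1e-12          # 1 ps simulation grid
JITTER_RMS = 1e-9   # 1 ns RMS thermal jitter on spike timing
INTERVAL = 10e-3    # nominal inter-spike interval, 10 ms (assumed)

def quantize(t: float, dt: float = DT) -> float:
    """Snap an interval to the simulation grid."""
    return round(t / dt) * dt

jitter_err, quant_err = [], []
for _ in range(10_000):
    noisy = INTERVAL + random.gauss(0.0, JITTER_RMS)   # interval with thermal jitter
    gridded = quantize(noisy)                          # what the discrete-time model stores
    jitter_err.append(noisy - INTERVAL)
    quant_err.append(gridded - noisy)

print("RMS timing error from jitter:       %.2e s" % statistics.pstdev(jitter_err))
print("RMS timing error from quantization: %.2e s" % statistics.pstdev(quant_err))
# The quantization error (about 0.3 ps RMS) sits three orders of magnitude
# below the 1 ns jitter, which is the sense in which it "drowns" in noise.
```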
This thread is closed to new comments.