How much can my noodle handle?
March 6, 2006 2:04 PM Subscribe
What is the approximate bandwidth of the human brain?
If you could fully capture all the data coming into our brains in real time (sight, hearing, plus all the data coming from the rest of our body), how much do you think it would be in kbps?
It should be easy to make this calculation for sight and sound, but how would you quantify other senses like touch and smell?
This might be of some assistance.
So who has more processing power?
By estimation, the brain has about 100 million MIPS worth of processing power, while recent supercomputers have only a few million MIPS worth of processor speed. For now, the brain is still the winner of the race. Given the cost, enthusiasm, and effort still required, computer technology has some way to go before it matches the human brain's processing power.
posted by Mijo Bijo at 2:10 PM on March 6, 2006
Response by poster: Matteo, well, for sight and sound I think it would be pretty simple:
somewhere near HD resolution at around 30 or 40 fps, with, let's say, 24-bit/96 kHz audio. I don't know how much data that would be, but it is a fair bit for just those two.
They would probably be the big ones though.
posted by grex at 2:13 PM on March 6, 2006
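For a rough sense of scale, here is that estimate worked out as raw, uncompressed data. The exact frame size, frame rate, and channel count below are illustrative assumptions, not figures anyone in the thread measured:

```python
# Raw (uncompressed) bitrate for roughly HD video plus 24-bit/96 kHz stereo audio.
def raw_av_bitrate(width=1920, height=1080, bits_per_pixel=24, fps=30,
                   channels=2, bit_depth=24, sample_rate=96_000):
    video_bps = width * height * bits_per_pixel * fps   # bits per second of video
    audio_bps = channels * bit_depth * sample_rate       # bits per second of audio
    return video_bps, audio_bps

video, audio = raw_av_bitrate()
print(f"video: {video / 1e6:.0f} Mbit/s")            # ~1493 Mbit/s
print(f"audio: {audio / 1e6:.2f} Mbit/s")            # ~4.61 Mbit/s
print(f"total: {(video + audio) / 1e9:.2f} Gbit/s")  # ~1.50 Gbit/s
```

So roughly 1.5 Gbit/s for one "HD" visual stream plus stereo audio, before any compression.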
I don't think you could really measure it in kbps, because the information isn't digital, it's analog. That said, it is all converted into electrical data passed along organic circuits, so there must be a way to quantify and/or record it somehow.
Or maybe there's something more to it that I don't understand. It seems like if it's just electrical impulses, you could patch into, say, someone's optic nerve, record the electrical impulses, and then play them back for someone else by patching into their optic nerve, but I've never heard of any experiments like this.
posted by designbot at 2:14 PM on March 6, 2006
how would you quantify other senses like touch and smell
You'd simply take the estimated number of neurons responsible for conveying this information and multiply that by the "clock speed" of those neurons (that is, the frequency at which they can de- and repolarize).
IANANeuroscientist but I'm sure someone has done this sort of rough calculation.
posted by rxrfrx at 2:15 PM on March 6, 2006
somewhere near HD resolution
People must have higher than HD resolution vision, or it would be impossible to see individual pixels on an HDTV, even if you were close enough for it to fill your field of vision.
posted by designbot at 2:16 PM on March 6, 2006
there is no meaningful answer to this, except "astronomical".
posted by paradroid at 2:16 PM on March 6, 2006
What do you define as "the brain"? What I mean is that even on the optical level, there's all sorts of retinal pre-processing done before it's sent back through the optic nerve. Or by "the brain" do you mean the visual cortex?
I think it's an error to think of the brain as a high-speed digital camera, capturing images one at a time and storing them away. We still have a lot to learn about how the brain manages information, but I'm pretty sure that's not how it does it.
Is this an idle curiosity question or are you trying to get somewhere?
posted by vacapinta at 2:21 PM on March 6, 2006
Response by poster: designbot, I've never tried that,
so maybe double 1080p as a ballpark for the eye, or should it be quadrupled, as there is one stream for each eye?
Damn, that's a lot of data.
posted by grex at 2:25 PM on March 6, 2006
Response by poster: I think a good analogy for the eye/brain is a digital camera that captures images one at a time and then throws most of them away immediately.
posted by grex at 2:29 PM on March 6, 2006
A lot of compression in the visual information takes place before the signal even leaves the retina.
I don't think this question can be answered meaningfully. It's sort of like asking, "What is the approximate texture of justice?" or even "How high is the Empire State Building, in liters?" - throwing some words together that don't really go together.
posted by ikkyu2 at 2:32 PM on March 6, 2006
Best answer: I believe they did some experiments with artificial vision that indicated, from the spatial angle, vision requires on the order of one megapixel. That seems "too low" until you remember that your eyes are moving almost constantly and that detail is sensed almost entirely by the center of the visual field (the fovea).
posted by kindall at 2:36 PM on March 6, 2006
Response by poster: ikkyu2, I totally disagree with that.
Think of it like this:
If you were to attach your brain to a computer, how much data would that computer have to spit out in order to render a convincing analog of reality?
How is that at all similar to asking the texture of justice?
posted by grex at 2:37 PM on March 6, 2006
Best answer: According to this site, the resolution of human vision can't really be measured accurately, because we only see a small area in detail at once, but we can move our eyes freely to take in a wide area. They provide a rough functional estimate of over 576 megapixels.
You wouldn't need to transmit all that information at once, however, so it's irrelevant to bandwidth. The most relevant figure for figuring bandwidth would probably be the number of photoreceptors in each eye, which according to this site is around 120 million rods (recording gradations of light or dark) and 6-7 million cones (to capture color). If we pretend that each of these cells transmits one byte of information, representing a single 'pixel', you'd come up with about 254 MB of information per static image (if the eye actually recorded individual static images, which it doesn't).
posted by designbot at 2:39 PM on March 6, 2006
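Spelling out that arithmetic, using the figures quoted in the comment (the one-byte-per-receptor "pixel" is the commenter's simplification, not a physiological fact):

```python
# Photoreceptor counts quoted above, treated as one byte per cell per "frame".
rods = 120_000_000
cones = 6_500_000            # midpoint of the 6-7 million quoted
bytes_per_cell = 1

per_eye = (rods + cones) * bytes_per_cell
both_eyes = 2 * per_eye
print(f"per eye:   {per_eye / 1e6:.0f} MB")     # ~127 MB
print(f"both eyes: {both_eyes / 1e6:.0f} MB")   # ~253 MB
```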
I don't think you could really measure it in kbps, because the information isn't digital, it's analog. That said, it is all converted into electrical data passed along organic circuits, so there must be a way to quantify and/or record it somehow.
There's no problem with measuring analog information in bits (or rather fractions of bits). Shannon himself did it in his famous paper.
posted by sonofsamiam at 2:41 PM on March 6, 2006
throwing some words together that don't really go together.
I disagree. There is real data being sent along actual physical nerves in the human body. I don't see any reason why the characteristics of that data should be impossible to measure.
Obviously, the formatting of that data is very different from what we use in our traditional digital & analog media, but it's still just data.
posted by designbot at 2:45 PM on March 6, 2006
Since this is just a hypothetical off-the-cuff kind of question, I think some back-of-the-envelope-style calculation is in order. As an upper bound, we can say that every neuron in the brain is involved in perception (which, of course, they're not, but this is just a rough estimate anyway) - there are about 10^11 neurons in the human brain. Each can either be active or inactive at a given moment, so that's 10^11 bits of information. The fastest neurons can change state 1000 times per second, so that's (remember, this is still a huge overestimate) 10^14 bits per second, or about 11,600 gigabytes per second...
Even if only 1/10,000th of the neurons in the brain were involved in perception, that's an awfully fat pipe.
posted by wanderingmind at 2:56 PM on March 6, 2006
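That estimate written out (GB here means binary gigabytes, which is how the 11,600 figure comes out):

```python
# Deliberately generous upper bound from the comment above: every neuron is
# one bit, and every neuron can change state 1000 times per second.
neurons = 1e11          # ~10^11 neurons in the brain
max_rate_hz = 1000      # fastest state changes per second

bits_per_second = neurons * max_rate_hz                # 1e14 bit/s
gib_per_second = bits_per_second / 8 / 2**30
print(f"{bits_per_second:.0e} bit/s ~= {gib_per_second:,.0f} GB/s")   # ~11,642 GB/s
```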
In Joel Garreau's book Edge City (which is mainly about shopping malls and urban sprawl), he quotes Dr Thomas A Furness of the University of Washington as saying it would take 3 gigabits/second to transmit all the sensations of sex electronically, which is pretty close to what you're asking.
posted by cillit bang at 3:00 PM on March 6, 2006
But are neurons binary? Don't they also store (and process) information about signal strength, for example?
posted by signal at 3:02 PM on March 6, 2006
I wish I could find the link, but I just read an article about an experiment involving goggles with a flashing display. The goggles flashed a number faster than the subject could read. Later (not making this up) the subject jumped off a bungee platform wearing the goggles and was able to read the number fairly easily.
posted by the jam at 3:08 PM on March 6, 2006
there is no meaningful answer to this, except "astronomical".
That's not true at all: just add up all the neuron pathways in the spinal cord, optic nerves, and auditory nerves. However many neural pathways there are, multiply by some sampling rate for a first approximation; for something more precise, measure the information value of a neuron's signaling based on its possible firing rate and noise levels.
I would tend to think that most of the work needed to estimate this has been done.
But are neurons binary? Don't they also store (and process) information about signal strength, for example?
Well, they don't really store anything (in the classic model, anyway, although I've seen stuff about neurons changing their *gene expression*, meaning that something is going on inside the brain with DNA to store some state, which opens up a *huge* possibility for what the brain could do).
Anyway, the way neurons work is by "reading" all the input voltage and, if it sums up to a certain threshold, firing. It's not exactly digital, but it can definitely be modeled digitally.
posted by delmoi at 3:34 PM on March 6, 2006
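That "sum the inputs, fire at a threshold" picture is essentially a leaky integrate-and-fire neuron. A minimal sketch, with purely illustrative parameter values:

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-4, tau=0.02, r=1e7,
                    v_rest=-0.070, v_thresh=-0.054, v_reset=-0.075):
    """Leaky integrate-and-fire: leak toward rest, add input drive, fire at threshold."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r * i_in) / tau   # leak plus input drive
        v += dv * dt
        if v >= v_thresh:                       # threshold crossed: the cell "fires"
            spikes.append(step * dt)
            v = v_reset                         # reset after the spike
    return spikes

# Constant 2 nA input for 200 ms; a stronger input would give a higher firing rate.
current = np.full(2000, 2e-9)
print(len(lif_spike_times(current)), "spikes in 200 ms")
```

Note that the output encodes input strength as a firing rate rather than an amplitude, which is meehawl's point above about frequency versus intensity.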
Is it really all converted into electrical information? Sure about that now, are we? Final answer?
What about differential generation of mRNA transcripts before and after LTP? That's not very electrical, is it? What about receptor downregulation?
Neurons do not "fire," "store and process data," "activate," or do anything that computers do. They are specialized cells, and they do what cells do, which is more complicated than just the information theory that was designed to describe the behavior of finite state von Neumann machines. If you use that language to describe neurons, you are going to run into trouble, just as you would if you quantified dog barking by saying "My dog barks a purple number of square inches."
posted by ikkyu2 at 3:35 PM on March 6, 2006
Don't they also store (and process) information about signal strength, for example?
No, it's all or nothing. If they reach their threshold potential, they fire. If not, then they don't. The "strength" of a sensation is related on a physiological level to the frequency of impulses reaching the CNS, not to their intensity. Much preprocessing happens before this signalling. Much post-processing in the CNS and the brain happens afterwards. There is also hyperpolarisation and habituation to consider.
posted by meehawl at 3:38 PM on March 6, 2006
Of course, if you start by assuming, "Let's say the human brain is a funnel," you can eventually find out that the human brain's maple syrup funnel-throughput is, say, 3 quarts per minute.
Since the human brain is not, in fact, a funnel, you have come up with an answer that has very limited utility for any purpose.
posted by ikkyu2 at 3:38 PM on March 6, 2006 [1 favorite]
Response by poster: delmoi, any links to the neurons changing 'gene expression' stuff? That sounds interesting.
posted by grex at 3:51 PM on March 6, 2006
Neurons do not "fire,"
Gosh I just said that!
When I say "fire", of course what I mean is that when the potential difference across the lipid bilayer diminishes to a certain value, then the probability of a voltage-gated transmembraneous sodium channel protein opening to allow ingress of sodium along its concentration gradient increases in a non-linear fashion, mainly because the such proteins are sensitive to changes in their electronic environment due to a sensory helic with many basic residues. Only a tiny fraction of the available sodium channels will change conformation to enable ingress. A sufficient influx balanced against the potassium will reach a threshold potential at which time a predictable cascade of protein conformations results in a bulk exchange of ions that can be described as "firing". There's a classical equation that describes this but it's a beast.
posted by meehawl at 3:51 PM on March 6, 2006 [1 favorite]
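The classical "beast" alluded to here is presumably the Hodgkin-Huxley model: four coupled differential equations for the membrane voltage and three ion-channel gating variables. A minimal sketch with the textbook squid-axon parameters, integrated with a plain Euler step:

```python
import numpy as np

# Hodgkin-Huxley (1952) membrane model: voltage v plus gating variables m, h, n.
C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.387              # reversal potentials, mV

def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))
def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    v, m, h, n = -65.0, 0.05, 0.6, 0.32             # approximate resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na) +
                 g_k * n**4 * (v - e_k) +
                 g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / C
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        if v > 0 and not above:                      # count upward zero crossings as spikes
            spikes += 1
        above = v > 0
    return spikes

print(simulate(), "action potentials in 50 ms at 10 uA/cm^2")
```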
Neurons do not "fire," "store and process data," "activate," or do anything that computers do.
Computers don't "fire" or "activate" the same way neurons do either. I mentioned gene expression changes in the comment, but ultimately gene expression changes can be simulated via computer too.
But remember, the question isn't about whether the brain can be simulated but about how much bandwidth it has. Bandwidth can be measured in bits of data per second, but it can also be measured in bits of information per second. All signals conform to information theory. That is, all signals have an information value, and finding the information value, even of an analog signal, is possible.
You may know quite a bit about how the brain works, but not that much about information theory. Simply because the brain is analog or cellular or whatever doesn't mean that the information can't ever be encoded, any more than it means the glucose flow into the brain couldn't be measured in quarts of maple syrup.
posted by delmoi at 4:37 PM on March 6, 2006
Computers don't "fire" or "activate" the same way neurons do either. I mentioned gene expression changes in the comment, but ultimately gene expression changes can be simulated via computer too.
But remember the question isn't about wether the brain can be simulated but about how much bandwidth it has. Bandwidth can be measured in bits of data per second, but it can also be measured in bits of information per second. All signals conform to information theory. That is, all signals have an information value, finding the information value, even of an analog signal is possible.
You may know quite a bit about how the brain works, but not that much about information theory. Simply because the brain is analog or cellular or whatever doesn't mean that the information can't ever be encoded, anymore then saying the glucose flow into the brain couldn't be measured in quarts of maple syrup.
posted by delmoi at 4:37 PM on March 6, 2006
delmoi says that we can just add up "however many neural pathways there are..." But we can't, because outside of the peripheral nervous system (nerves in your body), the pathways are not clear. Sure, the general structure of the visual cortices is mapped (V1 to V5 at least, then it too gets hazy). But knowing all of the pathways in the brain? All of the common (worn) pathways, or all of the pathways that ever get used? Because that's an exponential number. And if the 10^11 neurons in the brain is right in terms of magnitude, you do not want to be finding even a small subset of the total pathways that are ever traversed by impulses.
I think that this problem is not a fool's errand, but there are a lot of open research questions here (most are under active work -- the 2000s is the "Decade of the Brain"...):
1. What's a pathway? What's a pathway worth keeping track of?
2. What counts as a signal (i.e., a bit of information)?
3. What counts as *information*? (i.e., is an animal with a big visual cortex more informed than an animal with a lot of frontal lobe activity)?
posted by zpousman at 6:13 PM on March 6, 2006
It's also worth noting, as a way of correction for the grossly inaccurate estimations of the resolution of the visual field that took place early on in this thread, most of which have already been adequately refuted, that the human flicker-fusion threshold (the rate at which flicker blends into continuous vision) is somewhere around 60 Hz, not 30 or 40. Again, this is rough, because the eye behaves very differently from the photosensory array in a digital camera, but it's a good rule of thumb if you are going to do X resolution x Y fps to try to estimate bandwidth.
It's also important to note that the brain doesn't make the same kind of redundancy/efficiency tradeoffs that computers do. Computers (at least those outside advanced research institutions) insist upon each calculation being correct, and therefore require a great deal of accuracy, and therefore a great deal of redundancy. Brains don't require much of any of that: they count on getting a lot of redundant data anyway, so they don't need to build redundancy into their transmissions, and they can make decisions based upon the general tendency of a great many less-than-perfectly-accurate calculations, while a computer has no such luxury. Sampling rate, redundancy, precision, noise: all of these considerations affect "bandwidth" as you mean it, or "channel capacity" as it is more precisely called.
See Shannon's equation.
posted by ChasFile at 6:29 PM on March 6, 2006
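"Shannon's equation" here is presumably the Shannon-Hartley channel-capacity formula, C = B log2(1 + S/N). A tiny illustration (the numbers are arbitrary, not neural measurements):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity of a noisy analog channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 kHz channel with a signal-to-noise ratio of 10:
print(f"{channel_capacity(1_000, 10):.0f} bit/s")   # ~3459 bit/s
```

Noise and precision enter through the S/N term, which is the point above: capacity is not simply samples times bits.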
Human senses aren't static. Each sense is adaptively tuned within the sense organ itself, and through many layers of conscious and unconscious thought. In this way, relatively limited bandwidth and sensitivity can effectively gather significant information from an extraordinarily vast quantity of raw data.
The ear, for example, doesn't have sensitivity anywhere close to 16-bit/44.1kHz. However, a sound recording can't possibly anticipate the instantaneous tuning of the ear. The solution for acoustic reproduction is to broadcast everything the ear could possibly tune to at any one time, which is thousands or millions (billions?) of times more data, hence 24-bit/96kHz and beyond.
posted by Chuckles at 7:37 PM on March 6, 2006
Fight! Fight! Fight!
But seriously. I think you could determine a theoretical maximum possible "bandwidth," but this could be several orders of magnitude larger than what's actually used. That is to say, when dealing with the human brain, you're dealing with a massively multi-core, symmetric processing system. The input data is only one part of the operation. It's not just "this part of the brain works the eyes, this part the ears." Instead you've got roughly a hundred billion cores, each capable of independent operation, and who knows how many possible pathways interconnecting them.
I've read stories of people losing large parts of their brains that were traditionally "dedicated" to certain senses or functions, only to have other parts "pick up the slack" (so to speak). The limiting factor in determining data transfer wouldn't be the number of neurons, then, but the pathways the information flows through.
I'm not sure how many afferent nervous system pathways exist in the human body, nor the amount of "data" that they could handle, but I imagine the answer would be close to "a shitload."
posted by Civil_Disobedient at 8:25 PM on March 6, 2006
I agree with delmoi; brains and cells are complicated systems, but they conform to the laws of information theory just as do telephone wires and hard drives. There's a lot more work involved in understanding where the signal resides in neurons, how the information is encoded, etc. than it would be for a phone cable, but it's not an impossible task.
There's been quite a lot of research about the information transmission rate of cells in the visual cortex. See, for example, this paper, or the work of William Bialek. The data seem to show that these neurons are able to transmit a few hundred bits per second or so. There's also evidence that brains (and sensory organs) are really efficient at encoding data about the outside world.
This goes a bit beyond bandwidth to computing capacity, but one way to put an upper limit on our information-processing ability is to look at the metabolic cost of information processing. In information-theoretic terms, it's erasure that costs energy, but in real metabolic terms, it seems that the cost -- at least in a fly system -- is a few tens of thousands to a few million eV of energy to transmit a bit. This is a back-of-the-envelope calculation, so take it with a grain of salt... but given that the brain consumes a few tens of watts of power, that means we're talking about 10^14 bits/second, or about 10 terabytes/sec.
I touch upon some of this in my latest book.
posted by cgs06 at 8:29 PM on March 6, 2006
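That back-of-the-envelope, written out with concrete assumed figures: ~20 W for the whole brain and 10^6 eV per transmitted bit, the upper end of the quoted fly-system range:

```python
EV_IN_JOULES = 1.602e-19

brain_power_w = 20.0                         # joules per second, assumed
cost_ev_per_bit = 1e6                        # metabolic cost per transmitted bit, assumed
cost_j_per_bit = cost_ev_per_bit * EV_IN_JOULES

bits_per_second = brain_power_w / cost_j_per_bit
print(f"{bits_per_second:.1e} bit/s "        # ~1.2e14 bit/s
      f"~= {bits_per_second / 8 / 1e12:.0f} TB/s")
```

That lands on the same order of magnitude as the ~10^14 bit/s quoted above; taking the cheaper end of the per-bit cost range pushes the bound correspondingly higher.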
This question is based on the presumption that the brain is analogous to a computer, which it's not, or that the senses work like peripheral devices connected by wires, which they don't. Much of what we "see" in fact arises from within the brain. Much of what we think and how we think it, what we perceive and how we perceive it, is unknown. The brain may work in a way that is simply alien to our metaphor, so statements like:
just add up all the neuron pathways in the spinal cord, optic nerves, and auditory nerves
Are misleading, or misled.
posted by Hildago at 9:37 PM on March 6, 2006
I really don't think that information theory is that useful in answering this question. The closest it will be able to come is to provide a vast overestimation of an upper bound.
zpousman hit on the crucial aspect of the problem: how do you define a piece of information? As far as I understand Shannon's work, information may be quantified by identifying and reducing the redundancy in a signal by taking the signal n bits at a time, forming a dictionary of probabilities for each chunk, and giving the chunks codes whose lengths are inversely related to their probability of occurring (proportional to the negative log of the probability).
Let's apply this theory to incoming visual information, using the lowest estimates of bandwidth that have appeared so far: assume a resolution of 1 megapixel, quantized at 8 bits/pixel. Your frame of information is 8 megabits (1 MB). Your dictionary needs 2^8,000,000 entries in it, and you need enough samples of each to estimate its probability within a reasonable bound of certainty. Good luck.
Of course, you may argue that so far this is an unreasonable argument, because a large number of these "chunks" will be similar enough to be pooled together with one another. Also, it might be argued that each of these retinal neurons is somewhat, but not completely, separable from its distant neighbors in terms of the redundancy (or its opposite, independence) of transmitted information. But exactly how independent they are is a really difficult question.
cgs has a really good idea to use evolutionary constraints to quantify information. However, it should be pointed out that his model does not distinguish input/output from processing, that humans probably have a more efficient nervous system than flies, and that the firing of neurons is not independent (in other words, there are many states of firing that will just not be observed -- only a subset of the exponential number of firing patterns are possible).
posted by Maxwell_Smart at 9:40 PM on March 6, 2006
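To see why that per-frame dictionary is hopeless, it is enough to count digits (the 1-megapixel, 8-bit figures are the ones assumed in the comment above):

```python
import math

bits_per_frame = 1_000_000 * 8                # 1 megapixel at 8 bits/pixel
# The number of distinct possible frames is 2**bits_per_frame; just count its digits.
digits = math.floor(bits_per_frame * math.log10(2)) + 1
print(f"2^{bits_per_frame:,} has roughly {digits:,} decimal digits")   # ~2.4 million digits
```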
Let me propose an analogy here.
Hydraulics were a big deal in 17th century France. They were the new and exciting technology du jour. Even Rene Descartes got involved. He got excited about hydraulics and started thinking about human brains. Clearly, the brain has a lot of fluid in its ventricles. Therefore, it is a fluid reservoir. Nerves are all connected to the brain, and nerves ooze fluid when cut.
Therefore, (he reasoned), the brain is the master hydraulic valve which regulates the flow of your brainal fluids hither and yon, to make the muscles work by means of hydraulic force transfers. What chooses which muscles work? Well, that must be regulated by the pineal gland, which is the little joystick that can be operated by the Immortal Soul. (Religion was also a prevalent technology in 17th century France, so it was natural that it be included.)
The same muddle-headed process has taken place with elementary topology (Aristotle's proposal that brain surface area was maximized because it's a radiator of heat), railroad yards, telephone switching networks, Volta's "animal electricity," von Neumann machines, simple oscillators, neural networks, social Darwinism, evolutionary psychology, et cetera ad nauseam. Christopher Wren, the architect of St. Paul's in London, felt his training qualified him to comment on the structure of the nervous system (and in fact he made major advances in the understanding of cerebral arterial anatomy.) Nearly every major political, scientific, engineering, and other technological advance has either been proposed as an explanation of the way the brain works or else has cited the structure and function of the brain as its own clear and necessary progenitor.
I humbly submit to you that this is wrong-headed and unlikely to further understanding. Information technology wasn't designed for this, and the brain is not, to a first approximation, solely an information technique.
Hey, it's cold up on this soapbox.
posted by ikkyu2 at 10:23 PM on March 6, 2006 [1 favorite]
I touch upon some of this in my latest book.
I normally hate plugs, but that looks like a good read.
And ikkyu2, our understanding of all sciences is a slow progression of better understanding, (with the occasional side-track). We may not yet have it exactly "right," but how do you know we aren't getting closer?
posted by Civil_Disobedient at 10:43 PM on March 6, 2006
I agree with Maxwell_Smart and ikkyu2. Of course you can use information theory and apply it to just about anything, but that doesn't mean it is a valid or meaningful thing to do. Also, I think people are confusing channel capacity and information. I can have a wide channel with no information: it's called noise.
The question needs a lot more definition. As it is, it carries with it a lot of implicit assumptions about the answer, as ikkyu2 has pointed out. These days it seems we have to model everything, including the entire universe, as a computer. This kind of thinking will seem faddish and naive in the not-too-distant future.
posted by vacapinta at 10:57 PM on March 6, 2006
Response by poster: Yeah vacapinta, I was only referring to channel capacity. How much information the brain actually receives and uses is a much more difficult question. I was just wondering what the theoretical maximum amount of data that could be carried to the brain is.
What happens after that is left as an exercise for the reader.
posted by grex at 11:35 PM on March 6, 2006
grex, as I hinted above, I bet the answer is - less capacity than you might think, but it is used with extraordinary efficiency.
(how much is 'less than you might think'? Heh...)
I'm sure there are useful measurements to be made. I remember a talk given by a prof. who had data from electrodes connected to a cat's ear. You could see how the gain of the ear's output was tuned based on the repetitiveness of the sound the cat was hearing. A very literal result, but once you start trying to understand how that simple thing relates to the cat's thinking process - hear a sound, identify it, and act - all bets are off.
Go check out the Stairway to Heaven demonstration I posted about in the TeachingHearingFilter question.
posted by Chuckles at 12:05 AM on March 7, 2006
And ikkyu2, our understanding of all sciences is a slow progression of better understanding, (with the occasional side-track). We may not yet have it exactly "right," but how do you know we aren't getting closer?
Nobody's saying we're not getting closer, the point is that we're not there yet. We're definitely not to a point where we can back-of-the-envelope it like we're doing in this thread. The slow and steady progression of science is not without fallacies or groupthink (cf. Kuhn), and the brain-as-computer idea seems so suspiciously like a recurring example of those phenomena (which ikkyu2 reminds us of) that the burden of proof is certainly on the proponents.
So, the question "how do you know we aren't getting closer" is misdirected. First, show that the premise of the original topic of this thread is meaningful (not merely interesting, which it certainly is), then we can start getting into specifics.
posted by Hildago at 8:03 AM on March 7, 2006
Serious information theorists don't claim that the brain -- or the universe -- is a computer. It's that the brain (and the universe) are subject to the laws of information theory. And as a result, information theory should allow you to learn about the brain, an organ that gathers and processes information.
Part of the profundity of information theory is that these laws -- which often are equivalent to laws of thermodynamics -- transcend the medium of the message. Once you have two things that signal each other in some manner (and I doubt that anyone here would deny that two connected neurons count as such), the laws of information theory allow you to get some insight into what's happening. Of course, it's really difficult to go from single neurons to ensembles and to decode the coding systems of neurons, but information theory definitely opens the door to a deeper understanding of the brain. (Some of the papers I linked above give a sense of this; Bialek's group has figured out some of the basic neural codes that the fly brain uses to transmit visual information from place to place.)
It's fine to be skeptical of the "everything is a computer" people (like Wolfram). But their arguments are often a caricature of information theory, and don't give a sense of the depth of the insight the theory provides.
posted by cgs06 at 8:43 AM on March 7, 2006
How is that at all similar to asking the texture of justice?
Gonna have to agree with ikkyu2 on this. It seems like we're trying to measure something that can't be measured in the way we want to. It's not exactly the most feasible question to ask how many 1's & 0's the brain transmits/receives, because the brain doesn't do that.
It seems plausible, at first, to ask about a device of similar capability, such as an artificial eye with the same detail-acquiring ability as a human eye (debatable how much that is) -- one facet of input -- but that would nonetheless not be the human eye and would not answer the question.
I also enjoyed ikkyu2's abstract list of suggestions, though:
"What is the approximate texture of justice?"
"How high is the Empire State Building, in liters?"
"My dog barks a purple number of square inches."
Suggestions:
"How many yards of love will five beiges of antimatter create?"
"How much disk space does emotion require?"
"If a fraction left the station at 2pm backpedaling at four turbans per minute, how many kilograms would it take for it to make one complete sentence?
posted by vanoakenfold at 8:47 AM on March 7, 2006
It's fine to be skeptical of the "everything is a computer" people (like Wolfram). But their arguments are often a caricature of information theory, and don't give a sense of the depth of the insight the theory provides.
Fair points in the above response. The other half of the dilemma is the brain itself, though, and do we know enough about that to say whether this idea of arriving at an approximation of its processing constraints can be accomplished by adding up internal connections and path lengths, etc.? Without any kind of formal education in neuroscience, my impression is that the state of the art in understanding how the brain works is that we have some workable theories mixed with a lot of bafflement and plenty of existential dread. We don't have a comprehensive model of what thought is, let alone a circuit diagram, do we?
posted by Hildago at 9:32 AM on March 7, 2006
I read a book by Howard Rheingold which was about Virtual Reality, or at least featured it heavily, and it had a very scientific-sounding estimate of the bandwidth of human vision. Maybe someone will have the book, sorry I can't be more specific.
posted by AmbroseChapel at 11:51 AM on March 7, 2006
I think I may have worded my comment a tad strongly. I find that cgs' book looks very interesting, and I'd like to read it and learn more about what information theory has to contribute to the understanding of the brain.
posted by ikkyu2 at 6:31 PM on March 7, 2006
I think this is an appropriate place to reference the Singularity, the point in time where computer processing power will exceed that of the human brain.
posted by blue_beetle at 5:00 PM on March 12, 2006
The theoretical point in time, anyway. So far just a science fiction motif.
posted by Hildago at 5:45 PM on March 13, 2006
Without any kind of formal education in neuroscience, my impression is that the state of the art in understanding how the brain works is that we have some workable theories mixed with a lot of bafflement and plenty of existential dread. We don't have a comprehensive model of what thought is, let alone a circuit diagram, do we?
that's not the complete point (although i'm unclear just how wrong or right you are). you don't need to know how the brain works to know that it's governed by physical laws. no matter how it works, it consumes energy, generates entropy, processes information.
maybe it's not clear, but physical laws are intimately associated with conserved quantities. the idea is that some things stay the same no matter what. this is why we can laugh at the idea of perpetual motion machines without understanding how every possible machine can work. it's also why you can make statements about these without understanding the details.
so at some level, this is physics, not engineering and it's not about details like circuit diagrams, but rather the amazingly broad statements that you can make of any system.
however, having said that, i cannot square it with the way some (all?) estimates are made here. it seems to me that you might be better looking at how much energy the brain requires. i suspect that places an upper limit on the information processing rate. it might be a very high limit, but, for the reasons i've just given, it will be much more secure than the estimates here which do, to some extent, suffer from the problems you identify.
cgs06 - if you're still reading this, does your book look at bub's work on information and qm? i can't work out how deep it goes from the amazon blurb...
posted by andrew cooke at 8:07 PM on March 14, 2006
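One way to make that energy argument concrete is the Landauer limit: erasing a bit costs at least kT ln 2 of energy, so the brain's power budget bounds its bit-erasure rate. The ~20 W and ~310 K figures below are assumptions, and the bound is absurdly loose, which is rather the point:

```python
import math

k_boltzmann = 1.380649e-23        # J/K
body_temp_k = 310.0               # ~37 C, assumed
brain_power_w = 20.0              # assumed whole-brain power budget

landauer_j_per_bit = k_boltzmann * body_temp_k * math.log(2)
upper_bound_bits_per_s = brain_power_w / landauer_j_per_bit
print(f"{upper_bound_bits_per_s:.1e} bit/s")    # ~6.7e21 bit/s
```

Any realistic estimate sits many orders of magnitude below this, but as andrew cooke says, a bound derived from energy holds no matter how the brain actually works.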
This thread is closed to new comments.
posted by matteo at 2:05 PM on March 6, 2006