Why can't they fix my camera, and yours?
March 27, 2008 2:33 PM

Why, with all the advances in photography, can't they fix my camera so it shoots what I see?

I'm talking about light. About the huge range of light that our eyes can see in a room - I can be in a room with light streaming in and see everything inside the room with perfect accuracy, but I can also see things out the window, where the light is coming from.

But my camera can't.

I understand the basics of lighting: I get why we sometimes have to add MORE light to capture what our eyes see, why people use grad filters for landscape photography, why we have to light the hell out of a movie set at night for anything to show up on camera even though the final shot "looks like night", why people use software to create HDR shots, why you might use exposure bracketing, etc.

I get all that.

What I don't get is why they haven't come up with a way to FIX this. I can fit every photograph I've ever taken, including all the crap that didn't make it, onto a single hard drive; there are wifi and digital backs for really old enormo cameras and 39-megapixel Hasselblads; I can play music, talk on the phone, scan barcodes, take pictures, write a book, read a book and balance my checkbook on a single device. So why can't they figure out how to make the camera's sensor (or lens, or whatever combination is causing this problem) grab what my eye is seeing in terms of light?

Seems like with all the advances in technology, we should be able to fix this, right? Or am I wrong? Help me to understand.
posted by twiki to Technology (18 answers total) 12 users marked this as a favorite
 
Because "how we see" is different than "what we see," and cameras only record "what we see." The ways in which wide-angle stereoscopic vision keeps proportions and geometry correct is as you note, difficult to translate to a two-dimensional recording method.
posted by rhizome at 2:36 PM on March 27, 2008 [1 favorite]


I'm with rhizome. The camera only has one "eye". Also, in the span of a minute, you can focus and refocus your eyes on a single object, a single part of an object or a group of objects a hundred times. There will be subtle lighting changes at each depth of focus, but we don't see them in a static, two-dimensional fashion.

If still cameras could capture exactly what we see, I don't think they would be still cameras, anymore.
posted by Cat Pie Hurts at 2:47 PM on March 27, 2008


Also, in the span of a minute, you can focus and refocus your eyes on a single object, a single part of an object or a group of objects a hundred times.

Yes, part of the answer is that it's an illusion that you can see the room "all at once." As you turn your eyes, they are quickly refocusing, pupils dilating to compensate for the light, and so forth.

You might say that cameras are in fact capturing reality, and it is our eyes that carefully construct illusions.
posted by vacapinta at 2:55 PM on March 27, 2008


A remarkable amount of vision is tricks your brain plays on itself. Until they develop psychic cameras, we're stuck. Check out some of these totally freaky color illusions. What these illustrate, for example, is that once your brain has decided what the white-point for a scene is, it adjusts everything else to compensate. There might be a way for a camera to obtain that information using eye-tracking and a "set this as white point" button or something, but it'd be pretty tricky, and probably still imperfect.

Consider also how your perception of light is better and color worse in your peripheral vision, but you somehow manage to assemble all that into useful pictures in your head because your eyes keep scanning, so you're building the scene based partly on what you remember you saw.
posted by adamrice at 3:00 PM on March 27, 2008


Best answer: I think you underestimate what your pair of eyes plus brain is capable of. In terms of resolution (visual acuity), field of view, and dynamic range, the combination of two eyes plus your brain completely blows any CCD/CMOS sensor out of the galaxy.

To begin with, as the above posters mentioned, you have two eyes which take in slightly different images. In effect, your eyes and brain are generating a continuous stream of HDR shots.

As for raw figures, here's a page with some discussion of the human eye based on typical photography metrics. In terms of line resolution/visual acuity, 74 megapixels would be needed for the level of detail that an eye resolves. With a field of view of 120 degrees, the full image captured would be about 576 megapixels. In terms of dynamic range, the human eye captures about 10-14 stops from dark to light. In the real world, even a top-of-the-line camera like the Canon 1Ds series only manages about 6-7 stops.
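
If you want to sanity-check those figures, here's a rough back-of-the-envelope version in Python. The 0.3 arc-minute acuity and the square 120-degree field are just assumed round numbers, not measurements:

    # Back-of-the-envelope numbers (assumed round figures, not measurements)
    acuity_arcmin = 0.3        # assumed angular resolution of the eye, in arc-minutes
    fov_deg = 120              # assumed field of view, treated as a square for simplicity

    pixels_per_side = fov_deg * 60 / acuity_arcmin        # arc-minutes across the field / arc-minutes per "pixel"
    print(f"~{pixels_per_side ** 2 / 1e6:.0f} megapixels")   # ~576 megapixels

    # Dynamic range: each stop is a doubling of light, so n stops = 2**n contrast ratio
    eye_stops, camera_stops = 12, 7      # rough midpoints of the ranges quoted above
    print(f"eye:    ~{2 ** eye_stops}:1 contrast")      # 4096:1
    print(f"camera: ~{2 ** camera_stops}:1 contrast")   # 128:1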

In other words, your vision can't be replicated by a single camera because it's essentially the result of two cameras (that are better than even the best cameras available now) taped next to each other and hooked up to a supercomputer doing real-time image interpolation and processing.
posted by junesix at 3:00 PM on March 27, 2008 [14 favorites]


To follow up on what others have been saying, the popular idea that our eyes work like a camera is true up to a point, but the brain does an amazing amount of processing to make the images you see.

To give you an example, my parents and I have pretty dark skin. The other day I tried to take some photos of us playing in snow. I could see my parents' features absolutely clearly and make out the details of the snow around us. But the photos that came out were horrible - the snow was this all-encompassing field of white, and my parents and I looked so dark that our features were indistinguishable. Our brain is amazingly good at looking at a piece of the surroundings and deciding how much light it needs from it in order to get a clear picture of the details (that's a bit simplistic, but you get the idea).

So why a camera isn't like your eyes and brain is a pretty important and profound question. It ties in closely with why optical illusions occur, like others said above. Your eyes and brain have evolved over the years to make high-probability assumptions. Interestingly, when presented with a field of Gaussian noise -- where every pixel is a randomly selected shade of gray -- the eyes and brain do no better than a camera. Not so for any natural scene -- there your highly evolved and specialized brain does far better than any camera so far produced.
posted by peacheater at 3:24 PM on March 27, 2008


"What you see" is a small percentage of the physics of light, and a large percentage of your brain filling in the rest. Memory (what you saw in the past) and desire (what you'd like to see) play a large role here.

"What the camera sees" is 100% the physics of light hitting a sensitive surface (film, digital sensor, ground glass, etc.).

Oliver Sacks has some good stories that explore this very idea - I can't recall the names off the top of my head, but they had to do with people being given back their sight via an operation or after trauma, and their brains having no idea how to process the raw information...
posted by gyusan at 3:34 PM on March 27, 2008


The answer is that while, yes, you can get a high dynamic range image, either in-camera or by combining exposure-bracketed images, the dynamic range of whatever you're viewing the image on is still low. To match the dynamic range of real life, your monitor would have to simultaneously have parts literally as bright as daylight (the view out of a window) and as dark as pitch black. Your pupils would then adjust to these brightness levels just as in real life.

In order to make up for the low dynamic range of computer screens or reflective prints, a lot of people bring in tone mapping to squash that dynamic range down. And if you don't do it right (which is what almost always happens), you get really unnatural-looking results.
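
If you're curious what "squashing the range down" actually means, here's a minimal sketch of a global, Reinhard-style tone curve (x / (1 + x)) applied to a made-up scene. It's nothing like what the dedicated HDR tools do, just the basic idea:

    import numpy as np

    # Made-up HDR scene: linear luminance values spanning roughly 12 stops
    dark_wall = np.full((100, 100), 0.5)          # shadowed interior
    bright_window = np.full((100, 100), 2000.0)   # view out the window
    scene = np.hstack([dark_wall, bright_window])

    # Single exposure: pick one gain and clip to the display's 0..1 range
    exposure = 1.0 / 200.0
    clipped = np.clip(scene * exposure, 0.0, 1.0)   # window fine, wall crushed to near-black

    # Global tone mapping, Reinhard-style: compress highlights, lift shadows into view
    mapped = scene / (1.0 + scene)                  # maps [0, inf) into [0, 1)

    print("clipped: wall =", clipped[0, 0], " window =", clipped[0, -1])                     # 0.0025, 1.0
    print("mapped:  wall =", round(mapped[0, 0], 3), " window =", round(mapped[0, -1], 4))   # 0.333, 0.9995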
posted by zsazsa at 3:40 PM on March 27, 2008


Best answer: The main reason is that each "pixel" in your eye has the ability to automatically set its own sensitivity. A camera's sensor can only take a picture with all of its pixels set to the same sensitivity.

If you're in a dark room with a brightly lit window, some of the sensors in your eyes are adjusted to the bright light, and some to the dark. (This is why, if you look out the window and then turn around and look at a dark wall, you will see a spot for a few seconds.)
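
A toy numerical version of the difference, with invented numbers and a very loose log-response model standing in for per-pixel adaptation:

    import numpy as np

    # Toy scene: dim room on the left, bright window on the right (made-up luminance units)
    room = np.array([[1.0, 2.0], [2.0, 1.0]])
    window = np.array([[1000.0, 2000.0], [2000.0, 1000.0]])
    scene = np.hstack([room, window])

    # Camera-style capture: one sensitivity for the whole frame, clipped to the sensor's range.
    # Exposing for the window crushes the room detail down near zero.
    global_gain = 1.0 / 2000.0
    camera = np.clip(scene * global_gain, 0.0, 1.0)

    # Loose stand-in for per-pixel adaptation: each photosite responds compressively
    # (log-like) to the light hitting it, so room and window both keep visible detail.
    eye = np.log1p(scene) / np.log1p(scene.max())

    print("camera:\n", camera)           # room pixels 0.0005-0.001, window 0.5-1.0
    print("eye:\n", np.round(eye, 2))    # room ~0.09-0.14, window ~0.91-1.0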

This is a difficult problem for the sensor manufacturers to solve, but it is being worked on.
posted by qvtqht at 4:18 PM on March 27, 2008


Best answer: A key difference is exposure.

Try blinking your eyes as fast as you can. Still as proud of your vision? The average photograph is taken in a tiny fraction of a second -- much faster than you can blink.

In fact, if you sit a camera on a tripod and give it a good lengthy exposure, you can often do much better than your eyes - I've taken pictures in dark cathedrals that pull out astounding details I could never see with my eyes. Similarly in the dark.

Don't forget that your eyes are always seeing. The camera is not.
posted by bonaldi at 5:40 PM on March 27, 2008 [3 favorites]


Also, your eyes don't have anything like constant resolution across their sensing (retinal) surface. The part right in the centre of your field of view is very high resolution, but you get much-worse-than-mobile-phone resolution further out. This is why you can't read text without looking straight at it, and why you can't find that little bolt that fell off the lawnmower into the grass until you run a deliberate search pattern over every square inch of the whole area it might have dropped into. The only reason you perceive sharpness on things you're not looking directly at is because your brain is putting an impression of sharpness there - your eyes simply don't deliver that.

An interesting flip side to your question, whose answer comes from the same place, is this: why can't they fix my brain so it sees what I can shoot?

If a camera designed to reproduce static images were to have a sensor that worked like your eyes, it would have to stitch the image together from a whole bunch of successive snapshots with the centre of the sensor aimed at successive items of interest, just like your brain does. Acquiring a really high-resolution photo would take quite a long time.

Cameras and human visual systems are just fundamentally different. They're each good at different things. Cameras are good at capturing a hell of a lot of information in a hell of a hurry; human visual systems are good at sorting and classifying and making sense of scenes.

You can't really expect a camera that has less processor power than a cockroach to be as good at sorting out what it's looking at as you are. On the other hand, there are things a camera can do that you just can't.
posted by flabdablet at 6:00 PM on March 27, 2008


Look at HDR and tone mapping; that's as close as you'll get for a while yet.

The way you perceive the world is not the way a camera does: your brain stitches stuff together, hides things, and even makes stuff up, and in the process it combines a far larger gamut than a camera can capture or a screen can reproduce and presents it to you as a coherent image. I'm reducing and simplifying to the point of error, but basically, what you actually see is a function of your brain and not really reproducible (although it can be emulated) with current technology.
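
For a concrete (and heavily simplified) picture of the HDR half of that, here's a sketch of merging bracketed exposures into one estimate of scene brightness. It assumes a perfectly linear sensor response and ignores noise, which real HDR software does not:

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Toy HDR merge: assumes a linear sensor response and ignores noise modeling."""
        radiance_sum = np.zeros_like(images[0], dtype=float)
        weight_sum = np.zeros_like(images[0], dtype=float)
        for img, t in zip(images, exposure_times):
            # Trust mid-tones most; near-black or clipped pixels get little or no weight
            weight = 1.0 - np.abs(img - 0.5) * 2.0
            radiance_sum += weight * (img / t)   # linear assumption: pixel value = radiance * time
            weight_sum += weight
        return radiance_sum / np.maximum(weight_sum, 1e-6)

    # Three bracketed "shots" of a two-pixel scene: a dark corner and a bright window
    shots = [np.array([0.01, 0.99]),   # short exposure: window just under clipping, corner buried
             np.array([0.10, 1.00]),   # middle exposure: window clipped
             np.array([0.80, 1.00])]   # long exposure: corner well exposed, window blown out
    times = [0.01, 0.1, 0.8]           # seconds
    print(merge_exposures(shots, times))   # recovered radiance: corner ~1, window ~99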
posted by Grod at 6:54 PM on March 27, 2008


There are in fact HDR displays, which can be used to show images with more of a range between light and dark, and there are file formats for storing such images. There are also image sensors that can capture more of a scene's dynamic range (Fuji puts one in some of its high-end SLR cameras, but it still doesn't yield much more than a single extra stop). As you might imagine, the displays are quite expensive and will likely remain so for some time, as they offer no benefit for working on Word documents and the like.
posted by kindall at 7:19 PM on March 27, 2008


Two more points that might clarify things:

#1 - consider that it is possible to go blind when there is nothing wrong at all with your eyes or the nerves connecting them to your brain

#2 - when you have some time and want to be challenged, look into the primary visual cortex section of our brains.
posted by forthright at 7:41 PM on March 27, 2008


I've written about this. The answer is basically that your visual system is dynamic. When you look out the window, your pupil adjusts to the light level. The camera has a single shutter/aperture for the entire image, while you can pan around a scene using the continually adjusting "aperture" of your pupil.

It's not just the camera's fault. The devices we use to display images are the other part of the equation... If we had displays that could simulate the entire range of brightness, from very, very dark to as bright as the sun on a summer day, you could create an image that is just like what you see. But as it is, we are limited to display devices with very small ranges compared to the world around them.
posted by knave at 7:51 PM on March 27, 2008


Best answer: There's a book called Digital Image Processing by Gonzalez and Woods, and they cover this in great depth in the first chapter, which you can download for free off of their website.
posted by spiderskull at 1:52 AM on March 28, 2008


Your eyes are tied in with your brain which uses memories and biases and various other advanced types of processing to view the scene in front of you. When you look at a friend from across a crowded room, you do not see the other people. The moment you lock eyes there is a bucketload of extraneous data that is filtered out. It's like your eye did a Photoshop lasso on the friend's face and blurred out everything else. No camera will ever do that for you. Not in my lifetime.

I like to use photography as an abstraction of what we see, because that is what it is. It actually takes a while to learn the language of an image. A person who has never before seen a 2D image cannot make sense of a photograph. The image is only a random set of colors and shades.

Not to mention that no two people will see a scene the same way. You can have two different pro photographers who know all the tricks to make a picture work; put them in front of the same subject and you will get two different images, even if they use the same camera from the same viewpoint and focal length.
posted by JJ86 at 7:56 AM on March 28, 2008


junesix is right. It's worth knowing that image processing is being done on things you see before the information even leaves the retina. Check out the Amacrine cell for more, although that article isn't very good.
posted by ikkyu2 at 11:05 AM on March 28, 2008


This thread is closed to new comments.