Why isn't there a camera that takes photos which look like what I see with my eyes?
August 24, 2005 3:04 AM

Why isn't there a camera that takes photos which look like what I see with my eyes?

I'm a point-and-shoot guy, so forgive my naivete, but why isn't there a camera that takes photos which approximate what I see when I close one eye and look at something? Is it because our visual resolution is much higher than available film sizes/image sensors? Because the rods in our eyes are so sensitive to light? So many times I've tried to photograph a striking scene and the resulting image has fallen short of my perception.
posted by symbebekos to Media & Arts (21 answers total)
 
Can you elaborate on the differences between the two? Is it just that the images are not as sharp as your perception, or what? I didn't quite understand the question.
posted by keijo at 3:10 AM on August 24, 2005


I see things that no camera can capture....

Yeah, that was an awesome flashback. K, I'm back. The brain filters and emphasizes things in your visual field, which no camera can do. If you're talking strictly about shadow/highlight detail, the range of our eyes is vastly superior to what film or digital sensors can capture. Ansel Adams created the Zone System after years of experimentation to extend or compress the luminance range of an image to match what he saw. For a simple example, you can extend the range of a digital image by taking two shots of the same scene, one exposed for shadow detail and one for highlight detail, then compositing them in Photoshop. That gives a greater range of luminance. Lots of things are possible with the right knowledge, but knowledge isn't instant; it takes time to accumulate, which is why the greatest photographers are never the amateurs shooting on automatic.
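For instance, here's a minimal sketch of that two-exposure trick in Python, assuming two already-aligned frames of the same scene (the filenames are made up); real HDR tools do something much cleverer:

    import numpy as np
    from PIL import Image

    # Hypothetical filenames; any two aligned frames of the same scene work.
    dark = np.asarray(Image.open("exposed_for_highlights.jpg"), dtype=np.float64) / 255.0
    bright = np.asarray(Image.open("exposed_for_shadows.jpg"), dtype=np.float64) / 255.0

    # Use the dark frame's luminance as the blend weight: where the scene is
    # bright, favor the dark frame (highlight detail); elsewhere, the bright one.
    weight = dark.mean(axis=2, keepdims=True)
    blended = weight * dark + (1.0 - weight) * bright

    Image.fromarray((blended * 255).astype(np.uint8)).save("blended.jpg")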
posted by JJ86 at 3:22 AM on August 24, 2005


Because your eye is a fairly small part of the equation; it's your brain that's doing most of the work. A camera sits still and statically captures an image. Your eye and brain roam dynamically across a field of view, constantly adjusting, measuring, adapting, and filling in.

Having said that, cameras and film have been lovingly tweaked over a century, so they do a remarkably good job. But consider what they don't do: they don't capture depth, they can't dynamically adjust brightness or contrast depending on where you're looking, and they can't adjust focus or detail in the same way either. Part of the photographer's art is to be aware of these limitations and find ways to compensate.

If you think about it, a photograph is really nothing at all like something you see with your eyes (it's flat, for a start), but we are so used to the conventions a photograph employs that we forgive these deficiencies.
posted by grahamwell at 3:31 AM on August 24, 2005


I'm not an expert, but seeing isn't photographic. I seem to remember only the outer ring of the eye can see colour, and the brain processes different views to fill in the gaps.

There's depth perception, which you need two eyes for, and probably other stuff going on.

Eyewitnesses are notoriously sketchy on details... the brain/mind assumes things when we see.
posted by lunkfish at 3:32 AM on August 24, 2005


I'm far from an expert on this, but is it possible that the fault lies not with the camera but with the way your eye works? That is, your eye only sees a very small fragment of what's in front of you, but it moves around so much that you have the illusion of seeing a lot. When you focus on something in particular, your breadth of vision is very limited; the object can then appear to stand out, but when shot with a camera it's less impressive.

(Very willing to stand correction).
posted by biffa at 3:33 AM on August 24, 2005


The reason is dynamic range: the eye can see a far vaster range of darks and lights than the best camera. How much vaster? I'm glad you asked... (I love it when I know the answer to something!)

Camera exposures are measured (in one sense) by aperture or sensitivity, which we'll call the f-stop. A full explanation would involve much more than that (and there was a good one called out on the Blue in the sidebar a while back), but for the purposes of our simple discussion: the best cameras have roughly a 15-stop range of exposures, while the human eye has around 32 "stops" of sensitivity. The relationship between one stop and the next is a doubling of the amount of light, so the extra 17 or so "stops" give the human eye a vastly greater ability to sense small variations in light over an enormously wider range of lighting conditions.
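To make the doubling concrete, here's the back-of-envelope arithmetic (a quick sketch using the rough figures above, which are approximations to begin with):

    # Each stop doubles the light, so N stops span a 2**N contrast ratio.
    camera_stops, eye_stops = 15, 32          # the rough figures above
    print(f"camera: {2**camera_stops:,}:1")   # 32,768:1
    print(f"eye:    {2**eye_stops:,}:1")      # 4,294,967,296:1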

To put it another way, using a compression analogy: what a camera 'sees' is a vastly 'compressed' version of what your eye sees. Detail is lost.

Another example of the wonders of the human experience.
posted by pjern at 3:56 AM on August 24, 2005


I'm not an expert, but seeing isn't photographic. I seem to remember only the outer ring of the eye can see colour, and the brain processes different views to fill in the gaps.

This is not correct. You can see color across your entire field of view. You have more receptors that are sensitive to light (and not so much to color) in the periphery of your vision. That's why, in really low light, you can see objects out of the corner of your eye that seem to disappear when you look directly at them: the color receptors are not good at seeing in low light.

Now, on to the question. The biggest reason your perception differs from what you get in a photo (in my opinion) is that the lens on most point-and-shoot cameras is wide-angle. Items that seem a normal distance away suddenly look far off in the distance. It also has the effect of making things look kinda fish-eye, so objects with straight lines come out curved and bowed.

The next problem is one of contrast. Your eyes (brain, really) can handle a very wide range of contrast, and they adjust on the fly: when you look at someone in a dark room sitting in front of a bright window, you can see the person fine, and then you can instantly shift to the scene outside. A photo can do one or the other. The only way to compensate is to use a flash to "overexpose" the person in the dark room so there's less contrast with the scene outside. Some films are better at handling a wide range from dark to light, but most cameras, and pretty much all digital ones, will have this problem.

So what it really comes down to is that your eyes have the ability to dynamically adjust to a scene over and over, and a photo takes one "slice" of that. To get the same result as your eye, you'd have to take a whole bunch of pictures in series to cover the whole range of light that your eyes can instantly adjust to.

And lastly, your eyes can quickly refocus between far and near, while a photo is usually a compromise: something in the foreground is in focus, but the distance is unfocused, or vice versa. Those previously mentioned wide-angle lenses are good at getting most things from about three feet to infinity in focus, which is one of the reasons they're used.
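That deep-focus trick has a name: the hyperfocal distance. Here's a rough sketch of the standard formula (the 28mm/f/8 numbers are just illustrative):

    # Hyperfocal distance: focus here and everything from about half this
    # distance out to infinity is acceptably sharp.
    def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
        # coc_mm: circle of confusion, roughly 0.03 mm for 35 mm film
        return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

    h = hyperfocal_mm(28, 8)  # a 28 mm wide-angle lens at f/8
    print(f"focus at {h / 1000:.1f} m; sharp from {h / 2000:.1f} m to infinity")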
posted by qwip at 4:01 AM on August 24, 2005


Let me give you an example of the difference. Recently I found some cool photo-stitching software called Autostitch that lets you create panoramic photos. So I went to the boardroom on the 28th floor at work and decided to take a picture of the view I get to enjoy during meetings. Do you know how many pictures it took to get a panorama that was more or less similar to what I saw just by looking out the window? 34 photos at 1600 x 1200, and even then it still seems a bit smaller.

I'd say the first main difference is that we have stereoscopic vision and depth perception. We also have a bigger field of view than cameras, although we don't realize how much our eyes move around for us.
posted by furtive at 4:35 AM on August 24, 2005


There's also the extent of the field of view. You need some pretty wacky lenses to get the sort of field of view the eyes have in each direction. Then the actual image would have to be circular/oval rather than the typical rectangle, and focused only in the centre.
posted by wackybrit at 4:38 AM on August 24, 2005


I'm also wondering what differences you're finding most problematic - so another way to ask the question would be "What would a photograph (that matches what the eye sees) actually look like?"

An exact match seems impossible, because (among other reasons) regardless of whether you're a painter or a photographer, paper has no peripheral vision - there is simply no way to frame an object in the scene the way we see it.

With the exception of the range-of-light/contrast problems, I think the real answer is that you can shoot what you see, provided you're talking about a much smaller frame than full peripheral vision; you just have to find the right lens, zoom, and other settings. I know the setting on my camera that will do it, but the fact that the shots are extremely similar to what I see (if I printed them on transparency, went back to the scene, and held the print between my eye and the scene, they would match well) doesn't mean the photo looks like what I thought I saw. Just the fact that it's printed on a small piece of paper makes a big difference to the perception of detail and distance.
posted by -harlequin- at 4:42 AM on August 24, 2005


Basically, what you're saying is that you're disappointed with the photos you take and want to take ones that are more pleasing to the eye.
I was in a similar situation: I would see a scene, see in my mind's eye what I thought it would look like on paper, take the picture, and the result was cock.
So I did a brief course in photography, read a lot about composition and the like, and as a result I now take better snapshots. What I learnt is that it's not so much a case of working out how to capture a scene on paper as you see it with your eye (that's literally impossible); it's more the other way round: you have to learn how to look at something with your eyes as it would appear on paper.
posted by chill at 4:58 AM on August 24, 2005


The camera's use of light is a really interesting subject. I recommend Michael Langford's "Basic Photography" if you're interested in learning more about it... his chapter on optics is an excellent overview.
posted by selfnoise at 5:05 AM on August 24, 2005


When you look at a photo, your eyes are already adjusted to the light in the room, so the photo will look the wrong colour (etc.). If you were actually where the picture was taken, you wouldn't have any other reference points for colour and brightness and could become fully immersed in the scene.

This is also why watching a movie in a darkened theatre is more immersive than on a TV in daylight.
posted by cillit bang at 5:12 AM on August 24, 2005


Yeah, what chill said. The most interesting and infuriating part of doing photography for me has been learning to see the way a camera sees--it's the foundational skill, and really the only way to learn it is lots of attentive practice. One approach I've found really helpful is to jaunt around with a digital camera and a laptop, take a photo, download it immediately, and study the difference between what you've just seen/attempted to capture, and what the photo recorded.

Once you've really started getting a good sense of how your camera is going to "see" a scene, then classes, books, etc. can help you learn how to get your photographic images closer to what you'd like them to be.
posted by Kat Allison at 5:15 AM on August 24, 2005


solopsist is right on with dynamic range. Here is a great article on high dynamic range photography. Now that digital camera sensors have pretty much taken care of the resolution problem, further improvement will come in sensitivity and dynamic range. That will bring pictures much closer to real life, which may or may not be a good thing to some people. Great photographers know how to game the system when taking and processing a shot to wring as much dynamic range as possible out of what they have to work with.
posted by zsazsa at 6:05 AM on August 24, 2005


So many times I've tried to photograph a striking scene and the resulting image has fallen short of my perception.

Two things.

First, there's the dynamic range issue, which solopsist describes quite well. Your eyes adjust so quickly that you don't notice the difference, but if your iris were locked in a fixed position, and you walked from a dark room into a bright, sunny meadow, you'd be blinded.

There are all sorts of units for light intensity: lux, lumens, foot-candles... if you've ever seen those little stickers on video cameras that say "Works down to .5 lux!", that's what they're talking about. Inside a dark room might be 1 lux. Full daylight is something like 10,000 lux.
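In stop terms, those two figures work out like this (a quick sketch; the lux values are rough):

    import math

    # Each stop is a doubling of light, so the number of stops between
    # two light levels is the base-2 log of their ratio.
    def stops_between(lux_low, lux_high):
        return math.log2(lux_high / lux_low)

    # Dark room (~1 lux) to full daylight (~10,000 lux):
    print(f"{stops_between(1, 10_000):.1f} stops")  # ~13.3 stops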

Ever had someone take your picture in a dimly-lit room with a flash? What happens to your eyes? They hurt, don't they? You think you're being blinded, right? But you're not. Just look at the resulting photograph: in it, you're exposed properly, as if you'd been outside during the day. So why did it hurt to have a flash go off in a dark environment, when your eyes can more than handle the brightness? Because your irises tried to change rapidly from very large to very small and back again in a short amount of time. They're good at that... so good that you normally can't tell.

Anyway, the second reason your photographs don't look like "real life" is that a photo is a fixed perspective. You can't move your head around to "take in" the environment. If you stitch together a hundred photos, as furtive mentions, and then blow the image up to several feet by several feet, I can guarantee you'll be impressed with the results.

The "art" part of photography is trying to capture that "wow, man" feeling of a gorgeous sunset within the limits of your minimal focal length and liliputian dynamic range.
posted by Civil_Disobedient at 6:07 AM on August 24, 2005


I saw an example of a neural-network-based camera that solved the dynamic range problem. For example, one could take a picture from inside a dark room of a sunny day outside and see detail in both the room and the outdoors, which no ordinary camera can do: either the room comes out black or the window comes out white.

The mechanism was to make each pixel suppress the response of adjacent pixels in proportion to its own brightness. So a pinpoint of light shows up bright, but a big expanse of white suppresses itself down to light gray; in a dark region, there's no suppression.
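A toy sketch of that suppression scheme as I understand it (my guess at the mechanism, not the actual thesis code):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def laterally_inhibit(img, strength=0.5, radius=5):
        # img: 2-D float array in [0, 1] (a grayscale frame).
        # Each pixel is suppressed in proportion to the average brightness
        # of its neighborhood: a pinpoint of light stays bright, a broad
        # white expanse pushes itself toward gray, and dark regions are
        # left untouched.
        neighborhood = uniform_filter(img, size=2 * radius + 1)
        return np.clip(img * (1.0 - strength * neighborhood), 0.0, 1.0)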

Such a camera is then subject to optical illusions, in particular the standard illusion of dark dots where white lines cross (the Hermann grid).

Alas, this was somebody's PhD thesis 14 years ago and I never saw it commercialized. Though maybe I just didn't know where to look.
posted by Aknaton at 9:04 AM on August 24, 2005


That's a cool idea, Aknaton; unfortunately, there are times (many, many times) when you want black to be black, even if it's not the darkest area in a scene. It seems like every time technology comes to the aid of humans, we have to spend twice as much time learning how to counteract it. See: exposure meters.
posted by Civil_Disobedient at 10:32 AM on August 24, 2005


Fascinating thread, full of useful info...thanks, folks!

A couple of points so far not mentioned that seem pertinent to me:

Depth of field doesn't really come up in our typical experience of looking around the world, presumably because our eyes automatically refocus as we redirect them.

Ever notice how hard it often is to capture in a photograph how high up you are when shooting from a dramatic elevation? Shoot down a steep hill or even over a cliff, and the photo usually comes back looking almost level, or at least nowhere near as vertiginous as it felt in person, even if there are angled references or a horizon in the picture. I'm guessing this has something to do with inner-ear and other somatic cues being missing when the photo is all you have to go on. In general, any captured image, whether a photo, a painting, or even video, grabs a very isolated and reduced portion of what our full-body experience delivers.

Seems like our constant exposure to color photos, and especially color films and TV, is recalibrating our sense of what the real world actually looks like... or OUGHT to look like. I'm a painter (so I chew on these issues a lot!), and I often find it very obvious when realistic paintings have been painted from photo references rather than from life, primarily because the reduced contrast (and even bad exposure) of the reference has been faithfully reproduced in the painting, apparently without many other viewers even noticing or being bothered. But then, painting itself has an even more reduced value range than photography, especially projected or backlit photography. It's all the more fascinating (and impressive) when you encounter paintings that give a really vivid sense of the color, values, etc. of everyday walking-around reality; they're pretty rare. But even painting from life requires endless choices, reductions, and compromises as you try to find color, value, and shape devices or symbols for an infinitely rich visual reality. It's increasingly difficult, I think, to resist the bombardment of photos, movies, etc. telling us how all these symbols ought to work to simulate the world.

Finally, it seems to me that capturing what we actually experience with our eyes is far from the main objective of most accomplished photographers, cinematographers, and even realist painters. To be even minimally productive in ordinary experience, we have to seriously ramp down our potentially endless astonishment at the visual input the world constantly provides, and most of us are so good at this ramping down that the world often looks pretty flat and boring, if not downright distasteful. Most picture-making folks are much more interested in all the ways they can push their tools into creating a dramatically enhanced image of the world to give back to us. This seems to me both basic to the impulse to make images in the first place and a very sensible response to the fundamental impossibility of EVER encompassing, in any medium, the full range of even instantaneous human experience. EVERY moment of experience involves selective attention, but capture in any medium involves not just a zillion translation issues, as we try to make tools simulate our nervous systems; it also inevitably requires freezing that selection, whereas our raw experience is a constantly shifting selectivity...
...or something like that :-)

Thanks again for the great topic.
posted by dpcoffin at 11:17 AM on August 24, 2005


Yup, dynamic range. To put it in simpler terms, from Ansel Adams's "The Negative":
"A black and white print has a maximum range of brightness of about 1:100, or occasionally more. That is, the deep blacks of a print made on glossy paper reflect about 1/100 as much light as the lightest areas. No matter how great the scale of brightness in the original subject (which can be as high as 1:10,000), we have only this range of 1:100 in the print to simulate it."

Yup: the dynamic process of seeing vs. the static image. If you're looking at a scene with a bright sky and a shady spot under a tree, as soon as you focus on the shady spot your eye/brain adjusts to let you see more detail in that location. As soon as you look up at the sky, it adjusts again for maximum detail there. A single shot from a conventional film or digital camera can't do that.

There's also an ideal relationship between the focal length of the lens used to shoot, the print size, and the viewing distance. This relationship makes the image look as "real" as possible; break it, and you're looking at something in a manner that's impossible for the human eye. I don't remember the exact ratios off-hand, but the usual rule of thumb is sketched below.
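Here's that rule of thumb as I recall it (treat the specifics as an assumption): perspective looks natural when the viewing distance equals the taking lens's focal length times the print's enlargement factor.

    # Rule of thumb (my recollection, not a formula from Adams directly).
    def natural_viewing_distance_mm(focal_mm, neg_width_mm, print_width_mm):
        return focal_mm * (print_width_mm / neg_width_mm)

    # e.g. a 50 mm shot on 35 mm film (36 mm frame width), printed 360 mm wide:
    print(natural_viewing_distance_mm(50, 36, 360), "mm")  # 500.0 mm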

A key concept in Ansel Adams's theory of photography and printing is that you're not even trying to capture reality, as dpcoffin mentioned. That's basically impossible, so instead you previsualize the reality you would like to present and go for that.
posted by Jack Karaoke at 11:31 AM on August 24, 2005


The Eye is a Minox

Minox 8×11mm "spy" cameras take pictures that look like what we see with the human eye. According to the above article, the 15mm f/3.5 Minox lens is similar to the human eye (approx. 15.9mm f/3.5-5.6).
posted by Monk at 11:48 AM on August 24, 2005

