May 16, 2009 3:07 PM

Is there an equation that defines the change in apparent size as a function of distance from the viewer? Basically, if I'm looking at a set of railroad tracks head on, if one plank is like 10 ft away, it appears to be one size. The same plank 20 ft away appears smaller. What is the relative size difference? Put another way, how big does a 1ft line appear to be at 10 ft, at 20 ft, etc.?

Furthermore, is there an angle of convergence? Just like the planks on a railroad track will converge to a single point, if i wanted two planks at different distances to appear to be in the same overlapping plane, where would I need to place them?
posted by miasma to Media & Arts (15 answers total) 2 users marked this as a favorite

The answer to the first part of your question is that it's a simple inverse relationship. If the ratio of distances is x, then the ratio of sizes is 1/x. So in your example, the plank 20 feet away appears half the size of the one 10 feet away. A coin 10 feet away appears 1/10th the size of one 1 foot away.
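In code, that simple inverse relationship looks like this (a quick Python sketch; the helper name and reference distance are mine):

```python
# Sketch of the inverse relationship: apparent size scales as 1/distance.
def apparent_size(true_size, distance, reference_distance=1.0):
    """Apparent size relative to the same object at reference_distance."""
    return true_size * reference_distance / distance

plank = 1.0  # a 1 ft plank
print(apparent_size(plank, 10))  # 0.1 -> 1/10th apparent size at 10 ft
print(apparent_size(plank, 20))  # 0.05 -> half that again at 20 ft
```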

Unfortunately I don't understand the second part of your question.

posted by FishBike at 3:22 PM on May 16, 2009

Well, first, this may be obvious already, but just in case: you can't measure this as a length; you can measure it as an angle. That is, you can say that a one foot line 10 feet away takes up the same angle of your vision as a two foot line at some other distance does.

With that said, you're looking for the angular size of the plank.

Basically, the angle that the plank subtends is two times the angle whose tangent is half the length of the plank divided by your distance from the plank (assuming you're looking at the center of the plank and that the plank is perpendicular to your line of view).
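Or, spelling that calculation out in code (a rough Python sketch; the function and variable names are mine):

```python
import math

def visual_angle(length, distance):
    """Angle subtended by an object of the given length, viewed
    perpendicular to its center, in radians."""
    return 2 * math.atan((length / 2) / distance)

# A 1 ft plank at 10 ft vs. 20 ft:
print(math.degrees(visual_angle(1, 10)))  # about 5.72 degrees
print(math.degrees(visual_angle(1, 20)))  # about 2.86 degrees
```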

posted by Flunkie at 3:22 PM on May 16, 2009 [1 favorite]

drdanger and FishBike are not correct. The relationship between visual size of the same object at different distances is *not* linear.

For example, a ten foot object at one foot away subtends an angle of about 2.75 degrees. If you back off a foot, doubling your distance to it, it now subtends an angle of about 2.38 degrees.

Again, the visual angle is 2 * arctan( L / (2 * D) ), where L is the object's length and D is your distance to it. This is *not* linear.

posted by Flunkie at 3:30 PM on May 16, 2009

I think it also depends on the focal length of the viewer you are using. The answer to the second part of your question (angle of convergence) depends on the distance your viewer is offset from the plane that the objects are lying on.

Can you explain why you are looking for this info? It might produce more useful answers.

posted by bonobothegreat at 3:41 PM on May 16, 2009

Flunkie's formula (theta = 2*arctan(L/(2*D))) is correct, but his example answers are wrong -- he didn't convert from radians to degrees (the values are actually 157° vs 136°).

Still, the relationship is very close to linear at any reasonable scale, since arctan(x)≈x for small x. There's only a 5% or so error in using that approximation for 45 degrees, and it gets much better for any smaller angles than that.
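A quick check of both points, reusing Flunkie's ten-foot-object example (my own sketch):

```python
import math

def visual_angle(length, distance):
    # Angle subtended, in radians
    return 2 * math.atan((length / 2) / distance)

# Flunkie's numbers were radians, not degrees:
print(visual_angle(10, 1))                # about 2.75 (radians)
print(math.degrees(visual_angle(10, 1)))  # about 157 degrees
print(math.degrees(visual_angle(10, 2)))  # about 136 degrees

# Small-angle approximation: 2*atan(x) vs. 2*x
x = math.tan(math.radians(22.5))  # half-angle of a 45-degree view
exact, approx = 2 * math.atan(x), 2 * x
print((approx - exact) / exact)   # roughly a 5% error at 45 degrees
```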

posted by zxcv at 4:24 PM on May 16, 2009

Thanks Flunkie, zxcv, bonobothegreat. Visual Angle is the term I was looking for!

The reason I'm asking is some friends of mine and I are looking to do some perspective-based art that has portions at multiple depths but will appear to be at a single depth when viewed from the right position.

posted by miasma at 4:43 PM on May 16, 2009

I'm going to propose a theory that it matters if the image sensor is flat (like a camera) or curved (like a retina). If we assume the apparent "size" of an object is determined by the size of its image on the sensor, we get different results for the same angular size.

With a curved sensor we get a linear relationship between angular size and image size. With a flat sensor we get a tangential relationship between angular size and image size, nicely cancelling out the arctangential relationship between object distance and angular size.

For example, taking the extreme case of a 180-degree angle of view (an infinitely big object), we get an infinitely big image on a flat sensor, but only an image that exactly fills the sensor if it is a half-sphere.

Depending on which assumption we make about the shape of the sensor, we either get the "inverse linear" answer or the "arctan" answer.
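One way to see the difference numerically (a sketch under the two projection assumptions above; the focal length and angles are made up):

```python
import math

# Image size of an object subtending angle theta (radians),
# for focal length f, under the two sensor-shape assumptions.
def image_size_flat(theta, f):
    return 2 * f * math.tan(theta / 2)  # flat sensor: rectilinear projection

def image_size_curved(theta, f):
    return f * theta                    # curved sensor: arc length on the sphere

f = 1.0
for theta_deg in (10, 45, 90, 170):
    theta = math.radians(theta_deg)
    print(theta_deg, image_size_flat(theta, f), image_size_curved(theta, f))
# The flat-sensor image size blows up as theta approaches 180 degrees;
# the curved-sensor image size grows only linearly with the angle.
```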

Any takers on this theory besides me?

posted by FishBike at 4:51 PM on May 16, 2009

@FishBike, I think what yer talking about might have something to do with the depth of field of the lens.

posted by miasma at 6:36 PM on May 16, 2009

Nope, nothing to do with depth of field (which is just a term meaning the range of distances that are in acceptably sharp focus within the image), nor to do with the focal length of the lens (which is what I think you might have meant).

It has to do with the 'fisheye' effect of having a curved image sensor (like a retina). Given that kind of imaging system, keeping an object at the same distance but halving its size wouldn't make it look half the size, either, whereas with a flat sensor it would.

But as it turns out, the distinction might not matter to you. I think your explanation of how you plan to use this has actually subtly redefined the question: what you actually want to know is not "how much smaller will X look if it's twice as far away as Y" but rather "how much larger do we have to make X so it appears the same size as Y when it's twice as far away". Or something to that effect, anyway, like determining how much to scale up the farther-away parts of this artwork to match up with the closer parts.

If that's the case, X and Y have to represent the same angle of view, and I hope we could all agree that means X must be exactly twice as large as Y if it is exactly twice as far away. Which does not make either of the two different answers you've received wrong--they just come out to exactly the same thing when you are trying to make things appear the same size at different distances!
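So for the art project, the scaling rule is just proportional (a sketch with made-up numbers; the function name is mine):

```python
def required_size(reference_size, reference_distance, target_distance):
    """Size an element must be at target_distance to subtend the same
    angle as reference_size does at reference_distance."""
    return reference_size * target_distance / reference_distance

# A 1 ft element at 10 ft is matched by...
print(required_size(1.0, 10.0, 20.0))  # a 2 ft element at 20 ft
print(required_size(1.0, 10.0, 35.0))  # a 3.5 ft element at 35 ft
```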

posted by FishBike at 7:51 PM on May 16, 2009

FishBike has it exactly right. (And he's not talking about depth of field.)

It's quite easy to see it with a little diagram -- like this one.

posted by phliar at 8:04 PM on May 16, 2009

Awesome, that is pretty much the exact diagram I have in my head right now.

posted by FishBike at 8:18 PM on May 16, 2009

This thread is closed to new comments.

posted by drdanger at 3:20 PM on May 16, 2009