what we talk about when we talk about color
August 28, 2008 7:20 PM   Subscribe

When I manipulate my digital photos what exactly am I doing?

I am using Adobe Lightroom to "develop" my digital photographs. I have a good intuitive grasp of how to change photographs the way I want, but I would like a bit of a theoretical foundation. Specifically, I want to know what is happening numerically when I use the "exposure" or "fill light" sliders, and also what happens when I use the color-specific hue, saturation, and luminance controls. As detailed as possible, please. A standard computer science reference would be a fine answer.
posted by shothotbot to Media & Arts (12 answers total) 7 users marked this as a favorite
I'm sure there are better, more in-depth answers, but basically you're doing a lot of math. Each pixel is a set of numbers that represents a discrete color. The various filters just perform math on those numbers to create an end result that approximates the analog equivalent. But you probably knew that. More broadly, it's pattern recognition and decision making: if you were to run a "soften" filter, the software runs through the pixels, and when it comes across a set of numbers that matches the "sharp edge" pattern, it performs some math to make them softer.

Quite basically, they take photographic knowledge and convert it into a step-by-step programmatic procedure.

Or something like that.
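The "soften" idea described above can be sketched as a simple neighborhood average (a box blur). This is a minimal pure-Python illustration over a grayscale grid, not what any particular product actually does:

```python
# Minimal "soften" sketch: replace each pixel with the average of its
# 3x3 neighborhood (a box blur), clamping the window at the image edges.
# img is a list of rows of grayscale values in [0, 1].
def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)  # average of the window
    return out

# A sharp vertical edge gets smeared toward the middle:
print(box_blur([[0.0, 1.0], [0.0, 1.0]]))
```

Real blur filters weight the neighborhood (e.g. a Gaussian kernel) rather than averaging it uniformly, but the structure is the same: output pixel = function of the surrounding input pixels.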
posted by gjc at 8:29 PM on August 28, 2008

At the core of it, all these operations define functions that take an input color and return an output color. These functions can be defined in a myriad of different ways and are applied to all pixels in an image to generate a new image. Some functions will depend on the local region around a pixel, some do not.

Take a grayscale image with color values between 0 and 1. You want to increase the contrast in the dark regions (sort of like increasing exposure). You could do this by making all the pixels with values from 0.8 to 1 have a value of 1, and then stretching the pixels that had values from 0 to 0.8 so they fill the range from 0 to 1. This would be defined by the function

f(x) = 1, if x is greater than 0.8
f(x) = x / 0.8, if x is less than or equal to 0.8

This is, of course, a very simple example, but it gives you an idea of the type of thing that the lightness and contrast filters are doing. Blur and sharpen and more complex filters use data from the surrounding neighborhoods of each pixel as input to their functions as well.
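That piecewise function, applied to every pixel, can be written in a few lines. A minimal sketch using NumPy arrays with values in [0, 1]:

```python
# Apply the piecewise contrast function above to a whole grayscale image.
# Values above the threshold clip to 1; the rest stretch to fill [0, 1].
import numpy as np

def stretch_darks(image, threshold=0.8):
    """Per-pixel point operation: f(x) = 1 if x > threshold, else x / threshold."""
    return np.where(image > threshold, 1.0, image / threshold)

img = np.array([[0.2, 0.4], [0.8, 0.95]])
print(stretch_darks(img))  # 0.2 -> 0.25, 0.4 -> 0.5, 0.8 -> 1.0, 0.95 -> 1.0
```

Operations like this, which look only at each pixel's own value, are called point operations; blur and sharpen are neighborhood operations because they also read the surrounding pixels.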

Any image processing text book should give you the general information you need, although the specifics of the algorithms Adobe uses in their products might be under wraps. The math background required is actually not too difficult, so don't let that stop you.
posted by demiurge at 8:37 PM on August 28, 2008

Reading about color spaces (HSL / HSV in particular) is a good start to understanding the math behind most operations.
posted by 0xFCAF at 8:47 PM on August 28, 2008

If you're looking for more substantive stuff, Real World Color Management and Digital Image Processing are two books I've heard recommended more than once (and DIP is quite impressive from experience).
posted by devilsbrigade at 9:21 PM on August 28, 2008

To clarify, color management isn't the slider stuff itself; it's about how the slider stuff relates to your actual photograph.
posted by devilsbrigade at 9:22 PM on August 28, 2008

Response by poster: Sorry if I was unclear. I know that numeric stuff happens; I want to know exactly what numeric stuff happens. For example, I think the exposure slider takes a collection of pixels, defined by average RGB, and makes them brighter by increasing the R, G, and B numbers somehow. So I would like to know which pixels are affected and exactly how the values get increased. I realize this is almost certainly beyond the scope of an AskMe post, so maybe just pointers to books or authoritative online discussions would be good.
posted by shothotbot at 10:02 PM on August 28, 2008

Your pictures are, essentially, a grid of colored pixels. So far, so good.

There are lots of different ways to mathematically represent the color of a given pixel. Some common ones are:
RGB: a measure of how much Red, how much Green, and how much Blue are mixed together to make that color. So, three separate numbers for each pixel in the image. What you're actually seeing on screen is a fairly literal representation of this: how brightly each of a pixel's red, green, and blue sub-pixels is glowing to make up a given color.
CMYK: Cyan, Magenta, Yellow, and blacK. Four numbers for each pixel this time. This one's usually used for print work -- it's how much ink of each of those colors you'd mix to get the desired color, but it's still necessary to have a way to represent those values digitally so you can work with them on a computer.
HSV: A measure of the Hue, of the Saturation, and Value -- Again, three separate numbers for a single color, but different ones than in RGB. For some tasks, this one's more convenient and intuitive to work with than RGB: want it brighter? Add V. More intense colors? Add S. But there's no physical manifestation of HSV, it's just a mathematical representation.

There are math formulas for converting colors from one system to another; from RGB to HSV for example. For the most part this makes no difference to the image itself -- they're just different numeric ways of representing the same thing: an RGB picture and an HSV picture may be identical in every way, but the numbers representing them are completely different.

(What I just said isn't really true. What's interesting is that none of these systems can represent all possible colors, and some of them can represent colors that others can't. The range of colors that a given system can represent is called its "gamut" -- you can google around to find maps that'll show you what colors get left out of each. So in converting from one system to another, some of the more extreme colors will get flattened out. But that's probably more than you need to worry about.)
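The conversion formulas mentioned above are even in Python's standard library, if you want to play with them; `colorsys` works with floats in the 0-to-1 range:

```python
# Converting a color between RGB and HSV using the standard library.
# colorsys expects and returns floats in [0, 1] for every component.
import colorsys

h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)  # pure red
print(h, s, v)                                 # 0.0 1.0 1.0
r, g, b = colorsys.hsv_to_rgb(h, s, v)         # round-trips back to red
print(r, g, b)                                 # 1.0 0.0 0.0
```

Pure red comes out as hue 0, full saturation, full value: the same color, just described with different numbers.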

So when you're working with photo software, you're basically just changing those numbers. A simple example is converting an image from color to black and white: you want to remove all the color saturation, so the software goes through each pixel in the image, figures out the HSV numbers that represent it, and sets the S value to zero. Or if you want to add a red tint to your image, the easiest way is to convert the image to RGB data, then go through and bump up the R a little bit in every pixel.
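The two operations just described (desaturating via HSV, tinting via RGB) can be sketched per pixel with the standard library's `colorsys` module; RGB values here are floats in [0, 1], and the fixed tint amount is just an illustrative choice:

```python
# Per-pixel versions of "convert to black and white" and "add a red tint".
import colorsys

def desaturate(r, g, b):
    """Go to HSV, set saturation to zero, come back: a gray of the same value."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, 0.0, v)

def red_tint(r, g, b, amount=0.1):
    """Bump the red channel a little, clamped so it can't exceed 1.0."""
    return (min(r + amount, 1.0), g, b)

print(desaturate(0.8, 0.4, 0.2))  # (0.8, 0.8, 0.8): an even gray
print(red_tint(0.5, 0.5, 0.5))    # slightly warmer gray
```

Real software would run these over every pixel in the grid (and usually vectorize the loop), but the per-pixel logic is exactly this.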

Most things you do aren't as simple as those -- often there's all sorts of complicated comparisons of nearby pixels involved when deciding what to change -- but at the end of the process you're still ending up with a bunch of numbers representing colors arranged in a grid.
posted by ook at 10:14 PM on August 28, 2008

You are asking for the exact algorithm that Lightroom uses for the exposure slider? The details are probably not published by Adobe. I suppose you could reverse engineer it, if you needed to create a filter that precisely duplicated the effects.
posted by demiurge at 10:16 PM on August 28, 2008

Missed you on preview, so I was more dumbed down than you were looking for. Here are functions that describe how to convert between many different colorspaces; the exposure slider, for example, is going to be doing something along the lines of converting to HSV, then adjusting the V -- but probably with a lot more refined details than just adding a fixed value to every V. That's going to depend on the individual program.
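That guess, sketched per pixel: convert to HSV, scale the V, convert back. The multiplicative factor is an illustrative assumption; as noted above, Lightroom's actual algorithm isn't published:

```python
# Hypothetical "brighten" in the spirit of an exposure slider:
# scale the V of each pixel's HSV representation, clamped to 1.0.
# (Illustrative only -- real exposure tools are more refined than this.)
import colorsys

def brighten(r, g, b, factor=1.2):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, min(v * factor, 1.0))

print(brighten(0.5, 0.25, 0.25, 2.0))  # a dark red, doubled in brightness
```

Note that scaling V preserves the hue and saturation, which is why this feels more like "brightness" than simply adding the same amount to R, G, and B.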
posted by ook at 10:20 PM on August 28, 2008

I want to know exactly what numeric stuff happens.

It gets more complicated if you're shooting in RAW format instead of JPG.

Most cameras save pictures as JPEGs in the camera at the moment you take the shot, and higher-end cameras (such as DSLRs) have a "raw" setting as well. Generally speaking, RAW stores the data from the sensor in your camera without converting it in any way, so you can convert it to an RGB picture later on while manipulating quite a lot of settings at that later time -- such as the exposure, which you mentioned, so my guess is that this is what you are wondering about.

The above posts already mention a lot of things you can do with pixels, and I suggest searching Google for "converting raw pictures" articles to find out more.
posted by DreamerFi at 6:01 AM on August 29, 2008

There's a strong academic literature in digital image and signal processing. The specific area you're looking for is usually called image processing, and if you look online you can find a bunch of syllabi. It's been eight years since I've done any work in this area, so I won't try to break it down any more for you.
posted by Nelson at 8:05 AM on August 29, 2008

If you're really interested in the algorithms behind a photo editor, and you're not afraid to delve into some code, Paint.NET is open source. You can get a look at the algorithms it uses to perform some of those tasks.
posted by DanW at 8:40 AM on August 29, 2008
