March 11, 2014 1:28 PM

I have a system that records transmitted light images of samples in three colors by sequential illumination with red, green, and blue LEDs. This works for acquiring color images, but the overall color balance is off. I want to write code for color balancing these images, using data acquired from a reference target (like this). I want this to be part of an open source project, so existing tools in photoshop and similar programs aren't helpful to me. So far, I haven't found a good reference for how to do this; can someone recommend one to me?

I've read the wikipedia pages on color balancing and I have a general understanding of how to do this, but I find all the different color spaces non-intuitive, and don't have a clear vision of how to get from uncalibrated to calibrated images. Naively, it seems like I should be able to take the measured RGB values from a target and use least squares to find the matrix relating them to the ideal RGB values, and then use that to calibrate my images, but everything I read makes it sound harder than that. If you know of a good explanation of this process, or open-source code that already does what I want, that would be great.
posted by pombe to Technology (6 answers total) 4 users marked this as a favorite

First, get your R & G & B images, then average each square's value and stick it in a matrix so that you have a 3 x N matrix, with the 3-long dimension being R, G, and B, and the N-long dimension having one entry for each calibration square. We'll call this matrix *g*.

Next, make a matrix of identical shape, except the RGB values for each square should be their specified values, i.e. white is 255-255-255. We'll call this matrix *f*.

What we now need to do is solve for the 3x3 matrix *A* such that *f = Ag*; this can be done by right-multiplying *f* by the pseudo-inverse of *g* (it's mathematically identical to least squares, apparently).
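In NumPy, that pseudo-inverse step might look something like this sketch (the patch values and the "true" matrix here are synthetic, just to show the shapes):

```python
import numpy as np

# Synthetic example: N = 6 calibration squares.
# g holds the measured mean RGB of each square (3 x N);
# f holds the squares' specified RGB values (3 x N).
rng = np.random.default_rng(0)
f = rng.uniform(0, 255, size=(3, 6))           # specified colors
A_true = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.90, 0.03],
                   [0.00, 0.04, 1.20]])
g = np.linalg.solve(A_true, f)                 # simulated measurements

# Solve f = A g in the least-squares sense via the pseudo-inverse of g.
A = f @ np.linalg.pinv(g)

# To correct an H x W x 3 image, apply A to every pixel, e.g.:
# corrected = (A @ img.reshape(-1, 3).T).T.reshape(img.shape)
```

With real, noisy measurements `A` won't reproduce the targets exactly; the pseudo-inverse just gives the best 3x3 fit in the least-squares sense.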

This makes the assumption of a linear color-map; you may need to introduce nonlinearities. For example, you probably want to scale each color so that the white pixel is all 255 and the black pixel is all 0 before you do the matrix-y stuff. Or potentially, do it afterwards. Heck, do it both ways and pick the way that seems best!
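The white/black scaling mentioned above could be sketched like this (the function name and patch readings are made up for illustration):

```python
import numpy as np

def scale_white_black(img, white_meas, black_meas):
    """Per-channel linear rescale so the measured black patch maps to 0
    and the measured white patch maps to 255. img is H x W x 3, float."""
    white = np.asarray(white_meas, dtype=float)
    black = np.asarray(black_meas, dtype=float)
    out = (img - black) / (white - black) * 255.0
    return np.clip(out, 0.0, 255.0)

# Illustrative patch readings:
img = np.full((2, 2, 3), 100.0)
scaled = scale_white_black(img, white_meas=[200, 190, 210],
                           black_meas=[10, 12, 8])
```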

I think you may ultimately need to go to higher-order interpolations (3rd/4th-order polynomials), which are actually fairly easy to do: you now have *f = C + A g'*, where *g'* is 3M x N and M is the order you want to go to. The actual structure of *g'* is *[g; g^2; g^3...]*, so that the calculated red value is a function of the measured R, G, and B values as well as those values squared. This necessitates more calibration squares, but should improve the fitting result.
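A sketch of that polynomial version (this simple variant omits cross terms like R*G, and all the data here is synthetic):

```python
import numpy as np

def poly_features(g, order):
    """Stack elementwise powers of the 3 x N measurement matrix:
    [g; g**2; ...; g**order] -> a (3*order) x N design matrix.
    (No cross terms such as R*G in this simple version.)"""
    return np.vstack([g ** k for k in range(1, order + 1)])

def fit_affine_poly(f, g, order):
    """Fit f ~ C + A @ poly_features(g); a row of ones supplies the offset C."""
    gp = poly_features(g, order)
    design = np.vstack([np.ones((1, gp.shape[1])), gp])  # (1 + 3*order) x N
    return f @ np.linalg.pinv(design)                    # 3 x (1 + 3*order)

rng = np.random.default_rng(1)
g = rng.uniform(0, 1, size=(3, 24))     # 24 synthetic calibration squares
f = 0.1 + 1.2 * g + 0.3 * g**2          # synthetic nonlinear "truth"
coeffs = fit_affine_poly(f, g, order=2)
design = np.vstack([np.ones((1, 24)), poly_features(g, 2)])
pred = coeffs @ design
```

Note the N = 24 squares comfortably exceed the 7 coefficients per channel being fit; with too few squares the fit is underdetermined.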

posted by Maecenas at 2:02 PM on March 11 [2 favorites]

Thanks for the comments so far. I do white balance the images first; I just take data with no sample present to normalize the LED intensities so that fully transparent corresponds to 1 in the final image and fully opaque corresponds to 0.

The idea of including non-linearities is a good one.
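For reference, the blank-image normalization described above amounts to something like this per-channel division (array values are illustrative):

```python
import numpy as np

# One channel of the raw sample image and the corresponding blank
# (no-sample) reference exposure; values are illustrative.
raw = np.array([[50.0, 100.0],
                [0.0, 200.0]])
blank = np.full_like(raw, 200.0)

eps = 1e-12  # guard against division by zero in dark corners
normalized = raw / np.maximum(blank, eps)   # 1 = transparent, 0 = opaque
```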

posted by pombe at 2:06 PM on March 11

You probably want an open source color management system.

posted by Sophont at 2:15 PM on March 11 [1 favorite]

I have no idea if this would help or not, but there is also color hug.

posted by Poldo at 5:32 PM on March 11

ArgyllCMS is probably the best maintained open source colour management system. I've used it, along with my scanner and one of the coloraid targets, to make some lovely scans under Linux. If the code doesn't have a reference, I'm sure ColorWiki will.

(And yay for Wolf Faust's targets!! They are awesome and cheap and work well.)

posted by scruss at 7:44 PM on March 11 [1 favorite]

What wavelengths are your red, green, & blue LEDs, and is the intensity balanced? They might be causing some of the color cast.

To start to fix this, I think you want to start with shooting a grey card and figuring out the transformation from your resulting image to the evenly balanced neutral grey it should be.

(For a true grey card, you know that r=g=b in the resulting output; you just have to pick/guess the intensity.)

The resulting transformation will include the responsiveness of your imager at each wavelength & the intensity of the LEDs.
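In code, the per-channel grey-card correction could be as simple as this (the measured values are hypothetical):

```python
import numpy as np

# Hypothetical mean RGB measured from the grey card:
grey_meas = np.array([180.0, 150.0, 120.0])

# Pick a target intensity (here, the green channel's value) and
# compute per-channel gains that force r = g = b on the card.
target = grey_meas[1]
gains = target / grey_meas
balanced = gains * grey_meas   # all three channels now equal the target
```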

posted by TheAdamist at 1:56 PM on March 11