Pixels, Levels and Curves, Oh My!
Part I
In which a house made of light, dark and bias frames falls on the Wicked Witch of Noise.
Sensor pixels act like little buckets that collect the electrons liberated by incoming photons of light (imagine the electrons as tiny marbles, if you want). For simplicity let’s assume that a pixel can hold 65,535 electrons. (This capacity will vary from sensor to sensor.) In binary notation, the number of electrons in this pixel can be unambiguously given by an unsigned 16-bit integer, which counts from 0 to 65,535. (In this case we say the bucket has a 16-bit depth.)
We define black as the case where there are zero electrons in the bucket; white is when the pixel is filled to capacity.
Electrons are added to the pixel through a number of effects. Let’s consider the primary cause: Photons coming from our target object that interact with the pixel, each liberating an electron.
Imagine two side-by-side pixels, both getting light from a telescope pointed at an object of uniform brightness. You might reasonably assume that both pixels will fill with electrons at the same rate, and that after letting a certain amount of time pass, both pixels would have the same number of electrons in them. You would be wrong! Let’s do a little thought experiment to find out why.
Take four typical coins and flip them. Count the number of them that land heads-up and write that down for the contents of Pixel 1. Gather up the coins and flip them again, writing the new number of heads as the contents of Pixel 2. Do this nine more times, adding the new numbers of heads to either Pixel 1 or 2 as appropriate. Knowing that the chance of a coin landing head-up is 50% it’s reasonable to assume that you should have 20 heads in each tally when you’re done. Chances are you don’t, though. In fact, it’s quite likely that the two sums aren’t equal. Why? Because a collection of four coins has other possible outcomes than landing with two heads and two tails showing. The effect of these other possibilities is to change the “perfect” outcome of exactly 20 heads into a distribution peaked near 20 heads, but also having other values.
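If you’d rather not flip coins forty times, the experiment above can be sketched in a few lines of code. (The function name and the trial counts here are just illustrative choices, not anything standard.)

```python
import random

random.seed(42)  # fixed seed so the sketch gives repeatable numbers

def pixel_tally(rounds=10, coins=4):
    """Flip `coins` coins `rounds` times and return the total number of heads."""
    return sum(
        sum(random.random() < 0.5 for _ in range(coins))
        for _ in range(rounds)
    )

pixel_1 = pixel_tally()
pixel_2 = pixel_tally()
print(pixel_1, pixel_2)  # two tallies near 20, and rarely equal

# Repeat the whole experiment many times to see the spread around the
# "perfect" outcome of exactly 20 heads.
tallies = [pixel_tally() for _ in range(10_000)]
mean = sum(tallies) / len(tallies)
print(round(mean, 1))  # the average sits close to 20, but individual runs wander
```

Run it a few times without the fixed seed and you’ll see the two tallies disagree far more often than they match.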
Light entering a pixel resembles this example. During a given time interval we expect a certain number of photons to arrive and produce electrons, but chance allows a range of counts in each interval, so by the end the total isn’t always exactly what the average rate suggests.
Now imagine a field of pixels, illuminated by light coming from a uniform source (perhaps an electroluminescent flat panel). We allow light to enter the pixels until they’re about half full and then close the shutter. Do all the pixels have the same exact number of electrons in them, providing an image of uniform intensity? No, they don’t. Some have more electrons, some less, in a very random way that results in intensity that is not uniform. So the source of uniform illumination is not imaged as uniform; instead it appears a little gritty. This grittiness is called shot noise.
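The grittiness can be simulated, too. The sketch below exposes a field of pixels to our assumed 65,535-electron capacity until they’re about half full; for counts this large, the shot-noise statistics are well approximated by a Gaussian whose variance equals the mean count. (The capacity and pixel count are assumptions for illustration.)

```python
import math
import random

random.seed(7)

FULL_WELL = 65_535            # assumed pixel capacity in electrons
MEAN_SIGNAL = FULL_WELL // 2  # expose until pixels are about half full

def expose_pixel(mean_electrons):
    """One pixel's count: mean plus shot-noise scatter of sqrt(mean)."""
    count = random.gauss(mean_electrons, math.sqrt(mean_electrons))
    return max(0, min(FULL_WELL, round(count)))  # clamp to the bucket

field = [expose_pixel(MEAN_SIGNAL) for _ in range(10_000)]

mean = sum(field) / len(field)
spread = math.sqrt(sum((c - mean) ** 2 for c in field) / len(field))
print(round(mean), round(spread))  # spread of roughly sqrt(32767) ≈ 181 electrons
```

Even though every pixel saw the same uniform source, the counts scatter by about 181 electrons: that scatter is the grit.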
Another process that introduces shot noise is the spontaneous addition of electrons to pixels by dark current. Dark current results from thermal activity in your sensor that occasionally kicks an electron into a pixel bucket. The warmer your sensor is, the faster this process works. Because dark current has nothing to do with the object you’re imaging, it’s something we want to minimize. The obvious way to do this is to keep the sensor as cool as possible.
There is another way. If we could take an image in which the electrons come only from the dark current, that image would represent a sort of dark image that could be subtracted from our image of the target object. This is what shooting dark frames is all about. Dark frames are images made with the shutter closed and are essentially images of dark current. Subtracting dark frames from images made with the shutter open (light frames) goes a long way toward removing the dark current’s contribution. In order to match the amount of dark current present in light frames, the dark and light frames must be made with the sensor at the same temperature and have the same exposure time.
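A toy version of dark-frame subtraction looks like this. The dark-current rate, exposure time, and signal level are made-up numbers for illustration, and the object signal is kept noiseless so the effect of the subtraction stands out.

```python
import random

random.seed(1)

PIXELS = 1_000
DARK_RATE = 3.0  # assumed dark-current electrons per second per pixel
EXPOSURE = 120   # seconds; must match between light and dark frames

def dark_electrons():
    """Electrons accumulated from dark current alone during the exposure."""
    mean = DARK_RATE * EXPOSURE
    return max(0, round(random.gauss(mean, mean ** 0.5)))

signal = [500] * PIXELS                         # the object, noiseless for clarity
light = [s + dark_electrons() for s in signal]  # light frame = signal + dark current
dark = [dark_electrons() for _ in range(PIXELS)]  # shutter closed

calibrated = [l - d for l, d in zip(light, dark)]
mean_cal = sum(calibrated) / PIXELS
print(round(mean_cal))  # near 500: the dark-current offset is gone
```

Note that subtracting a single dark frame removes the dark-current offset but adds that frame’s own random scatter; averaging many darks into a master dark, as discussed below, keeps that added scatter small.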
But wait, there’s more! Noise can be generated by random processes in the sensor and its electronics, and more noise can be created during the reading and reporting of pixel counts. These can be lumped together as read noise. Read noise doesn’t depend on exposure time, and is probably not very sensitive to temperature. As with dark current noise, we can image the read noise by making a very short exposure with the shutter closed. We keep it short to minimize the contribution of dark current. These images of read noise are called bias frames.
Conveniently, every dark frame image also contains the bias image, so when dark images are subtracted from light images, the bias image gets removed, too. When you can rely on the light and dark frame temperatures being the same, bias frames are not needed. Many people apply bias frames regardless.
One more bit of terminology: The application of dark and bias frames to light frames is called calibration. Calibration can also involve flat frames and flat dark frames. We can leave discussion of those to another time.
Ding-dong! The Noisy Witch is dead? Sorry, but it’s impossible to remove her from the picture completely. Using multiple dark and bias frames to better generate the dark and bias images does help to keep her down, though. And she definitely resents increases in total exposure time.
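Why does using multiple frames keep the witch down? Averaging N frames shrinks the random scatter by a factor of about the square root of N. Here’s a small sketch of that effect, using made-up frame sizes and levels:

```python
import math
import random

random.seed(3)

def noisy_frame(n_pixels=2_000, mean=100.0):
    """One frame: a true level plus shot-noise-like scatter of sqrt(mean)."""
    return [random.gauss(mean, math.sqrt(mean)) for _ in range(n_pixels)]

def stack(frames):
    """Average N frames pixel-by-pixel into one master frame."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def scatter(frame, mean=100.0):
    """Root-mean-square deviation of a frame from the true level."""
    return math.sqrt(sum((p - mean) ** 2 for p in frame) / len(frame))

one = noisy_frame()
master16 = stack([noisy_frame() for _ in range(16)])
print(round(scatter(one), 1), round(scatter(master16), 1))
# averaging 16 frames cuts the scatter by about sqrt(16) = 4
```

The same square-root law is why she resents longer total exposure time: more collected signal for the same relative scatter.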
Coming up...
Part II, In which we see that the road to the land of imaging Ahs is a Grayish-Brick Histogram
Part III, In which we learn that even without a brain we can use Levels
Part IV, In which we find that the heart of the matter is Curves
Other news... Our astronomy club has decided to go ahead with an Imaging Messier Marathon on April 5. Should be fun!