Wednesday, February 20, 2013

Pixels, Levels and Curves--Oh My!

(I've slowly come to a sort of useful understanding of how some basic tools of image processing work. This is an attempt to put into words those concepts that have been elusive to me. I hope these little essays will help a few people starting out with image processing. It's meant to be purely introductory.)

Pixels, Levels and Curves, Oh My!


Part I


In which a house made of light, dark and bias frames falls on the Wicked Witch of Noise.


Your imaging sensor (whether DSLR or CCD) is a digital device based on discrete bits of information that are either on or off. Most of what it does is an exercise in counting, not measuring.

Sensor pixels act like little buckets that fill with electrons (imagine them as tiny marbles, if you want) as photons of light strike them. For simplicity let’s assume that a pixel can hold 65,535 electrons. (This capacity, called the full-well capacity, varies from sensor to sensor.) In binary notation, the number of electrons in this pixel can be unambiguously given by an unsigned 16-bit integer. (In this case we say the bucket has a 16-bit depth.)

We define black as the case where there are zero electrons in the bucket; white is when the pixel is filled to capacity.

Electrons are added to the pixel through a number of effects. Let’s consider the primary cause: photons coming from our target object, each of which can interact with the pixel to liberate an electron.

Imagine two side-by-side pixels, both getting light from a telescope pointed at an object of uniform brightness. You might reasonably assume that both pixels will fill with electrons at the same rate, and after a certain amount of time has passed, both pixels would have the same number of electrons in them. You would be wrong! Let’s do a little thought experiment to find out why.

Take four typical coins and flip them. Count the number of them that land heads-up and write that down for the contents of Pixel 1. Gather up the coins and flip them again, writing the new number of heads as the contents of Pixel 2. Do this nine more times, adding the new numbers of heads to either Pixel 1 or 2 as appropriate. Knowing that the chance of a coin landing heads-up is 50%, it’s reasonable to assume that you should have 20 heads in each tally when you’re done. Chances are you don’t, though. In fact, it’s quite likely that the two sums aren’t even equal. Why? Because a collection of four coins has other possible outcomes than landing with two heads and two tails showing. The effect of these other possibilities is to change the “perfect” outcome of exactly 20 heads into a distribution peaked near 20 heads, but also containing other values.
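If you'd rather not flip eighty coins by hand, the experiment is easy to simulate. This is a minimal Python sketch (the function name `flip_tally` is mine, not standard terminology):

```python
import random

def flip_tally(n_coins=4, n_rounds=10, rng=None):
    """Toss n_coins fair coins n_rounds times and return the total heads."""
    rng = rng or random.Random()
    return sum(sum(rng.random() < 0.5 for _ in range(n_coins))
               for _ in range(n_rounds))

# Two "pixels" exposed to the very same 50% chance process:
pixel1 = flip_tally()
pixel2 = flip_tally()
print(pixel1, pixel2)  # both cluster near 20, but they are rarely equal
```

Run it a few times and you'll see the two tallies hover around 20 while seldom matching each other, which is exactly the point of the thought experiment.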

Light entering a pixel resembles this example. During a given time interval we expect a certain number of photons to enter and produce electrons. But during each interval a number of different counts can actually occur, and by the end we don’t always have the exact amounts suggested by chance.

Now imagine a field of pixels, illuminated by light coming from a uniform source (perhaps an electroluminescent flat panel). We allow light to enter the pixels until they’re about half full and then close the shutter. Do all the pixels have the same exact number of electrons in them, yielding an image of uniform intensity? No, they don’t. Some have more electrons, some fewer, in a random way, so the uniform source is not imaged as uniform; instead it appears a little gritty. This grittiness is called shot noise.
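We can sketch this flat-panel experiment in a few lines of Python. For a counting process like this, the pixel-to-pixel scatter grows roughly as the square root of the count; at counts this large, a Gaussian is a good stand-in for the underlying (Poisson) counting statistics, which is the shortcut taken below:

```python
import random

rng = random.Random(42)
expected = 32768          # half of a 65,535-electron well
sigma = expected ** 0.5   # shot noise scales as sqrt(count): ~181 electrons here

# A small patch of "pixels" under perfectly uniform illumination.
patch = [[round(rng.gauss(expected, sigma)) for _ in range(5)]
         for _ in range(5)]
for row in patch:
    print(row)  # every value hovers near 32768; none are forced to agree
```

Every pixel saw exactly the same light, yet the printed counts all differ slightly. That scatter is the grittiness we call shot noise.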

Another process that introduces shot noise is the spontaneous addition of electrons to pixels by dark current. Dark current results from thermal activity in your sensor that occasionally kicks an electron into a pixel bucket. The warmer your sensor is, the faster this process works. Because dark current has nothing to do with the object you’re imaging, it’s something we want to minimize.  The obvious way to do this is to keep the sensor as cool as possible.

There is another way. If we could take an image in which the electrons come only from the dark current, that image would represent a sort of dark image that could be subtracted from our image of the target object. This is what shooting dark frames is all about. Dark frames are images made with the shutter closed and are essentially images of dark current. Subtracting dark frames from images made with the shutter open (light frames) removes most of the unwanted dark current signal. In order to match the amount of dark current present in light frames, the dark and light frames must be made with the sensor at the same temperature and have the same exposure time.

But wait, there’s more! Noise can be generated by random processes in the sensor and its electronics, and more noise can be created during the reading and reporting of pixel counts. These can be lumped together as read noise. Read noise doesn’t depend on exposure time, and is probably not very sensitive to temperature. As with dark current, we can image the read noise by making a very short exposure with the shutter closed. We keep it short to minimize the contribution of dark current. These images of read noise are called bias frames.

Conveniently, every dark frame also contains the bias image, so when dark frames are subtracted from light frames, the bias image gets removed, too. When your light and dark frames are matched in temperature and exposure time, separate bias frames are not needed. Many people apply bias frames regardless.

One more bit of terminology: The application of dark and bias frames to light frames is called calibration. Calibration can also involve flat frames and flat dark frames. We can leave discussion of those to another time.
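To make the calibration step concrete, here is a toy sketch in Python. It combines several darks into a "master dark" by taking the per-pixel median (a common combining choice, since the median suppresses the random noise of any single frame) and then subtracts that master from a light frame. The frames here are tiny made-up lists, and the function names are mine:

```python
import statistics

def master_frame(frames):
    """Median-combine several frames pixel by pixel; the median beats any
    single frame because the random noise partially cancels out."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def calibrate(light, master_dark):
    """Subtract the master dark (which already includes the bias signal)
    from a light frame, clamping negative results to zero."""
    return [[max(lp - dp, 0) for lp, dp in zip(lrow, drow)]
            for lrow, drow in zip(light, master_dark)]

# Toy 2x2 frames: three darks made at the same temperature and exposure
# time as the light frame.
darks = [[[10, 12], [11, 9]], [[11, 10], [12, 10]], [[9, 11], [10, 11]]]
light = [[110, 112], [108, 60]]
print(calibrate(light, master_frame(darks)))  # → [[100, 101], [97, 50]]
```

Real calibration software works on full-size arrays, of course, but the arithmetic is exactly this simple: build a master dark, subtract it from every light frame.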

Ding-dong! The Noisy Witch is dead? Sorry, but it’s impossible to remove her from the picture completely. Using multiple dark and bias frames to better generate the dark and bias images does help to keep her down, though. And she definitely resents increases in total exposure time.

Coming up...
Part II, In which we see that the road to the land of imaging Ahs is a Grayish-Brick Histogram
Part III, In which we learn that even without a brain we can use Levels
Part IV, In which we find that the heart of the matter is Curves


Other news... Our astronomy club has decided to go ahead with an Imaging Messier Marathon on April 5. Should be fun!

Tuesday, February 19, 2013

Imaging Messier Marathon list PDF

Here's a PDF I created that is adapted from Don Machholz's Go-To list in his Observing Guide to the Messier Marathon. I have added the multi-object image opportunities for those using a medium-size sensor and 700mm focal length imaging system. Have a suggested change or find an error? Leave a comment and let me know. Here's what a portion of it looks like:

A portion of the IMM list PDF.

Sunday, February 17, 2013

Imaging Messier Marathon: Combining Targets in One Frame

So many objects, so little time. If only there were fewer objects!

Well, in a way there are. Some of the Ms are close enough together that one shot can encompass two or more objects, thus saving us time. Let's see what we can find in the way of grouped Messiers.

I'm going to list the Messier objects in the order given by Don Machholz in his book The Observing Guide to the Messier Marathon--A Handbook and Atlas. This book is available from Amazon, Barnes and Noble, and other sellers. Frugal? Look to used book stores. I paid $25 for my copy at Half Price Books.

Here are some combined fields for two different focal lengths. If I missed some, please let me know so that I can add to the list.

These lists depend on your go-to system running with at least get-it-in-the-finder accuracy. In the lists I'm leaving out all the single-target fields. I'll add a master list of all 110 objects soon.


700 mm Focal Length Combined Fields


Eleven fields saved, five go-to RA/Dec actions, four camera orientation checks.


Go-To target is indicated by bold. Tight fits may require camera rotation.

M31+32+110: Check Camera Orientation

M42+43

M95+96: Target RA 10:45:16, DEC +11:45:54

M65+66

M81+82: Target RA 9:55:39, DEC +69:23:38

M97+108: Target RA 11:13:11, DEC +55:20:23 Check Camera Orientation

M84+86

M59+60

M17+18: Target RA 18:20:15, DEC -16:38:38 Check Camera Orientation

M21+20: Target RA 18:03:23, DEC -23:07:31 Check Camera Orientation


432 mm Focal Length Combined Fields

Fourteen fields saved, four go-to RA/Dec actions, five camera orientation checks.

Go-To target is indicated by bold. Tight fits may require camera rotation.

M31+32+110

M42+43

M95+96+105: Check Camera Orientation

M65+66

M81+82

M97+108: Target RA 11:13:11, DEC +55:20:23

M84+86+87: Target RA 12:27:51, DEC +12:38:53 Check Camera Orientation

M58+59+60: Target RA 12:40:25, DEC +11:42:19 Check Camera Orientation

M17+18: Target RA 18:20:15, DEC -16:38:38 Check Camera Orientation

M21+20: Check Camera Orientation


Your choice of imaging focal length(s) may well differ from mine. At focal lengths longer than 700 mm, some of the combinations flagged for an orientation check will not fit. If your focal length is between 400 and 700 mm, you're safe following the recommendations for 700 mm.

Thursday, February 14, 2013

Imaging Messier Marathon: Imaging Focal Length


Last time I mentioned planning for an imaging Messier marathon (IMM), where the goal was to produce images that recreate the impression a visual observer would have looking through a 6" or 8" telescope. This time I'll look at the choice of a telescope for this task.

Focal Length:

Under ideal circumstances imagers try to choose an imaging focal length that best matches the target object. In the IMM that's not really possible without spending a lot of time swapping gear around. We need to use one telescope. Since this is quick and dirty imaging with little concern about "going deep," any reasonable focal ratio from f/5 to f/10 will do, with faster scopes preferred.

More important in my judgement is the ability to frame the larger of the Ms in single shots. If smaller objects end up looking small, that's fine, for that simulates the eyepiece view.

Since I'm planning this for myself, my calculations will be based on the relatively common KAF-8300 sensor (18x13.5mm) that is in my CCD camera. DSLR imagers with APS-C sensors will have somewhat larger fields of view.
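If you'd like to run the numbers for your own sensor and scope, the field of view and pixel scale come from a little trigonometry. A Python sketch (function names are mine):

```python
import math

def fov_degrees(sensor_mm, focal_mm):
    """Angular field of view covered by one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def pixel_scale_arcsec(pixel_um, focal_mm):
    """Sky coverage of one pixel; 206.265 converts (um / mm) to arcseconds."""
    return 206.265 * pixel_um / focal_mm

# KAF-8300 (18 x 13.5 mm, 5.4 um pixels) behind a 700 mm scope:
print(round(fov_degrees(18.0, 700), 2),
      round(fov_degrees(13.5, 700), 2))        # roughly 1.47 x 1.10 degrees
print(round(pixel_scale_arcsec(5.4, 700), 2))  # roughly 1.59 arcsec per pixel
```

At 700 mm the chip spans about 1.5° x 1.1° of sky, which is why the larger Ms (and some of the pairings below) just fit.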

You can follow along with what I'm doing if you have a good planetarium program. I recommend Sky Tools 3, a package that is very good for planning. Also helpful is this list of Messier objects sorted by size.

Let's get started by examining some of the Messier objects.

The Ms range from the enormous M31 (about 3°x1°) to the tiny M57 (105"x78"). 400-500mm is about right for the entirety of M31, while M57 probably looks best at 2000mm or longer. This is what M57 looks like at 400 and 700mm FL:

M57 @ 400 mm FL,  5.4um pixel size, enlarged 4X

M57 @ 700mm, 5.4 um pixel size, enlarged 4X
The first image is very pixelated, while the second is smoother. Surrounding stars are better seen in the 700mm image. (We don't care about the central star, since that isn't visible to visual observers except with very large telescopes.)

The second largest Messier object is M45 (M44 is almost as big). Here's what it looks like at 700mm:

M45 @ 700mm, 5.4 um pixels. Red box is FOV in KAF-8300
It fits very nicely. At 800 or 900mm we could still get most of the cluster in, but it would seem a little cramped. (Here we don't care about the nebulosity because it's essentially invisible to the eye of a beginner. We just want to capture the stars with enough dark space around them to make it look like a cluster.) 

We could continue, but there's a balance to be struck, and it's a very subjective balance at that. Longer focal lengths favor the more abundant smaller objects, while shorter focal lengths favor the relatively few large ones. Shorter focal lengths also give us a chance to capture several Ms in single frames, which helps us do the IMM. Longer focal lengths will eat up more time getting targets centered in the camera's field of view.

From the above I would suggest any focal length between 600 and 1000mm should serve well.

The ultimate choice depends on the telescopes you have available to you. In my case only one scope fits: My TV102, which has a FL of 700mm with its flattener attached.

Next time I'll look at which Ms can be combined on single frames given a 700mm FL and KAF-8300 sensor.


Wednesday, February 13, 2013

Imaging Messier Marathon

A plan is being hatched here for an Imaging Messier Marathon or IMM. Just as with the regular visual MM this is all about quantity, not quality.

I have a secondary motive for doing this. I am co-owner of a small company (if it were any smaller I'd be the sole owner and employee!) that creates educational software for the earth sciences. One program is an astronomy package, and I'd like to use my images to populate a photo gallery of common objects.

But this isn't about putting together a lot of images for students to ooh and aah over. Nothing I do can compare to what other imagers create, much less resemble something like a HST image.  My goal is to show them what the Ms look like through a small telescope--say a 6" or 8" Dob. I don't want them to be disappointed at the fact that faint fuzzies usually look just like that until you either use a large telescope or view from a really dark site. Sadly, most kids (and teachers) don't have access to either.

I know from experience that showing them the planets (particularly Saturn, Jupiter, crescent Venus and Mars at opposition) and the Moon can really wow them. But often it's the case that they expect galaxies to be just like those bright, colorful spirals seen on the Internet. Sometimes they can't even find a galaxy or globular in the field of view because they're looking for something much more spectacular.

So my goal is to produce a set of images that give teachers and kids an idea of what something will look like in a small telescope operating under a suburban canopy of light. A few of the images will be accompanied by better images, when I'm able to take them.

This means for most objects I would shoot a handful of light frames. Even though most galaxies and nebulae show no color visually, I would probably shoot everything with the same scheme: 30-second exposures, binned 2x2, maybe three frames each of RGB, using my ST8300 and TV102. No autoguiding, and only a simple polar align.

Detailed resolution won't be important because untrained eyes have a hard time picking that out, and colors can be muted at best. Quantity, not quality.

As an act of faith that it will eventually be clear at night, I mailed in my registration for Wisconsin Observer's Weekend. Hooray!

Wednesday, February 6, 2013

Hobby on Idle

Still waiting here for clear skies. I've been reprocessing images from last year; you can find them at my page on Astrobin. I'm slowly learning how to use Astrobin, and I think I'm going to like it.

I'll write up my experiences with it, and make a report on the Orion Mini-Guider/StarShoot Autoguider combo. You'll know the weather has improved when you see the latter appear.

It's almost mid-February, the time it starts warming up around here. This winter has been notable for the abundance of nighttime clouds. Blecch!