Showing posts with label PixInsight. Show all posts

Sunday, November 30, 2025

Using PixInsight PixelMath to Correct a Flat Frame Issue

In my last post I talked about a problem I had with flat frames collected at f/2. To correct the issue I used the processes AutomaticBackgroundExtraction and GradientCorrection. This worked very well for two of my images, but I recognized it might not with the huge nebula Sh2-264, (AKA the Lambda Orionis / Angelfish Nebula).

I tried the ABE+GC method on this and it failed spectacularly by obliterating the eastern half of the nebula. So my approach will instead be to use a method that employs PixelMath and an ad hoc model of the donut hole.

Here is the hole as seen in the green master frame from one of the other images collected the same night:


Green Master Frame with the donut at center in all its glory


This donut appears in the master green frame of Sh2-264, if not as prominently. So I'll use the above photo as the basis for the correction's structure. It's worth noting that the master frames have already been flatted, so either my system is dust-free, or the flatting took care of any dust donuts. What I'm about to do is apply a simple correction that's limited to the central area of the image.

At the very center the brightness doesn't seem diminished and no correction is needed. As distance from the center increases the distortion progressively darkens to form a ring, then quickly brightens again. It's this ring that must be brightened. Because I'm correcting the master linear frames I'll use simple multiplication of the existing pixel values.

Eyeballing this led me to a polynomial model of the needed correction. Here it is in an Excel graph.

Sixth-order polynomial fit to estimated donut darkness

The horizontal axis is scaled distance from the donut's center, with the value 1 corresponding to the radius of the ring's darkest values, about 700 pixels from center. The vertical axis is a dimensionless value for the darkness of the donut relative to the image's true background.

This is a complex shape that is fit well by a sixth-order polynomial.

So, how to make use of that polynomial? Let's look at the PixelMath script!


Script for hole correction. Click for larger version


This is a simple script. It computes each pixel's distance from the center of the hole, calculates the factor by which to increase that pixel's brightness, and multiplies the pixel's existing value by that factor. Pixels outside the donut (R > 1.38) are left unchanged.

The constants are essentially input parameters that can be played with to refine the correction. The main adjustment is Amplitude; too large a value and the donut becomes a light ring, too small and there's not enough improvement.

The multiplier is 

        1.0 + (polynomial value times Amplitude). 

I found the best result used Amplitude = 0.04, so at most the correction is a 4% increase in linear value.
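
For readers who'd rather see the logic than the screenshot, here's a NumPy sketch of what the PixelMath script does. The structure follows the post (scaled radial distance, a sixth-order polynomial model of the donut, a 1 + Amplitude × p(R) multiplier, cutoff at R = 1.38), but the polynomial coefficients below are illustrative placeholders, not the actual Excel fit.

```python
import numpy as np

# Sketch of the donut correction. The sixth-order coefficients here are
# placeholders for illustration only, NOT the fitted values from Excel.

def correct_donut(img, cx, cy, r_ring=700.0, amplitude=0.04,
                  coeffs=(0.0, 0.2, 1.8, -3.1, 1.9, -0.5, 0.04)):
    """Multiply pixels inside the donut by 1 + Amplitude * p(R)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # R = 1.0 at the ring's darkest radius (~700 px from center)
    R = np.hypot(x - cx, y - cy) / r_ring
    p = np.polynomial.polynomial.polyval(R, coeffs)  # degree 0..6 coefficients
    mult = np.where(R <= 1.38, 1.0 + amplitude * p, 1.0)
    return img * mult
```

With a zero constant term the very center is left untouched, matching the observation that no correction is needed there, and everything beyond R = 1.38 passes through unchanged.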

So what did this do? Here is the Sh2-264 Green Master before and after the script is applied:

Uncorrected


Corrected


Note that these are given the "Boosted" PixInsight stretch to emphasize the hole; it won't be this evident in the final image. The script did a fair job, I think.
 
My post-WBPP workflow for this image uses ABE to subtract first order light pollution from each master light frame; the script is then applied to each channel. After that I return to my conventional workflow: CombineChannels, BlurXTerminator to reduce chromatic aberration prior to color calibration, and so on. 

The result:


Finished Image, corrected (1/4 scale)


North is up; the Angelfish swims nicely westward! I think the best way to validate any image is to compare it with a clearly superior one by an accomplished imager, so I used Adam Block's superb image of this target. My processing does a very decent job of reproducing his in the central area of the nebula, so I'm happy!

This completes the rehabbing of the Iowa Star Party images.  Next I'll be revisiting some of my images from 2022. Also, Masters of PixInsight are doing a workshop on mosaics in December, so maybe I'll finally process that Veil Nebula mosaic data I've been sitting on.

Sunday, November 23, 2025

Iowa Star Party Images: Repaired!

To recap: I took one night's data at the October Iowa Star Party.  As was my usual (rather lazy) practice I deferred shooting flat frames until my return home. When I processed the data I got something of a shock. The stacked calibrated frames had very obvious dark "donut holes" in the middle. Here's what I mean:


Master Green after ABE


This was the result of not matching the exact focus used while gathering light frames. The remedy would be to obtain correct focus and retake the flats, so the next clear night when the temperature was close to what it had been in Iowa I set up in my back yard, got focused, shot flats, and hoped for the best. Something was off, though; the new result after recalibrating the light frames was an even larger donut hole than what you see above.

I've read that flatting at f/2 is a fiddly thing, and based on this I have to agree.

The only real solution is to collect future data and flats at a slower speed so that focus is less of an issue. And to take flats in the field at the time of data acquisition. But that's too late for the Iowa data, which I really don't want to throw away. So what can I do to salvage it?

How about creating a sort of secondary multiplicative layer of my own? One way to do this might be to use the GAME script for PixInsight, which basically would create a round mask I could use to stretch the hole area a bit, or even to create a synthetic secondary flat for recalibrating the already calibrated light frames. Nice thought, but I tried both methods and didn't get satisfactory results.

More labor-intensive would be to write a program in a high-level language that could directly adjust pixel values in the master frames. But it's been maybe 12 years since I did any serious coding, and I had none of the needed compilers installed on my computer, so no -- that would be too involved. Much simpler would be to use PixInsight's PixelMath to do the same thing. I used Microsoft Excel to model the hole as a sixth-order polynomial and a simple PixelMath script to transform the master images. This actually worked to some extent, but not well enough. I could have fiddled with it and eventually found the right polynomial, scale size, and amplitude to make a good correction, but I really didn't have the patience for that. There had to be a better way.

And in this case, there was. I wouldn't recommend this as a cure for any other flatting problem, but it seemed to work well enough for the Iowa data. The nice thing is that it used two standard PixInsight processes.

Because the data for two of the targets was low in the eastern sky over a rather conspicuous light dome, it suffered from a considerable light pollution gradient. I used PI's ABE to make a first pass at getting rid of that. This was a fairly typical application of ABE, using subtraction and a function degree of 4:

ABE settings

This essentially revealed the hole and whatever else the bad flat didn't correct. GradientCorrection was then applied with some non-default settings designed to work better on the hole's small-scale structure.  

GradientCorrection settings


Here is the result. The hole is almost completely gone, as are the edge issues left behind by ABE.





I applied this to all three channel masters and processed normally. Without the hole I could process a little more aggressively to bring out Barnard's Loop.



My image based on Iowa data


Wikipedia image (an RGBHa image by Hunter Wilson),
cropped and scaled to match my image above.


Below my image is the picture of Barnard's loop on its Wikipedia page for comparison. I've rescaled and cropped the image to match mine. Wilson's acquisition data is RGBHa, which probably accounts for the very red nebulosity.

Am I totally happy with my image? Not entirely. There's still some weak signal suppression at the very center of the image, and I wish the Running Man were bluer. Given that this is based on only half an hour of total exposure (ten minutes per color channel), I'm pleased with how much of the blue reflection nebulosity in western Orion it reveals. It's a shame the field didn't quite extend far enough to catch the Witch Head. It would be wonderful to devote hours to this area, but that's not going to happen.

Here's the other de-holed image from Iowa:


L to R, Soul Nebula, Heart Nebula,
and the Double Cluster (1/4 scale)

As with the Orion image, the hole is essentially eradicated. Too many stars, though. 

Reprocessing the third image, the huge Lambda Orionis Nebula, will be problematic as the hole is thoroughly entangled with the nebula. I may need to use my PixelMath script method for that.

Next spring I'll try to get dithering working more reliably so that I can drizzle process and get better stars. With some luck I may get my long-sought wide field image of the Polaris-area IFN after all.

But now it's time for winter hibernation and reprocessing of old data. With what I'm learning from the Masters of PixInsight folks, I may even tackle that big old Veil Nebula mosaic data I've been sitting on for over a year!

Next time: the Lambda Orionis Nebula (Sh2-264) after de-holing!

Monday, September 29, 2025

Getting The Samyang Setup Ready For Imaging

[This is Part 2 of 2 about my new Samyang 135mm f/2 lens imaging system]

What is the Samyang Setup? 

It's the same setup I use for imaging with the FSQ-106 -- with a few changes. Obviously the imaging scope is now the Samyang lens, and the Pegasus FocusCube 3 is swapped out for a ZWO EAF. The connecting hardware between the lens and my imaging camera (ASI 2600MM) is different as well because of backfocus needs.

Questions to Answer

Is the lens optically sound?  Can it provide focus at infinity? Does it have significant aberration? Will it work well at f/2, or does it need to be stopped down to f/2.4, f/2.8, or f/4? Do the lens adapters introduce significant tilt?

Does autofocus work well?

How much can I reduce the time it takes to make a single dither?

Given the cloudy nights typical at this time of year, it will take a while to get things sorted out. Because this work only requires stars, I can stay in my back yard; a dark sky isn't necessary.

Night One (10 September)

The lens would not focus at infinity. This meant autofocusing and image-quality assessment were off the agenda.

What did work was tracking. Plate solving was 100% successful despite the stars being somewhat out of focus. I was able to slew and center without any issues.

Clouds came in before I could look into dithering -- or anything else, for that matter.

--------

A letter to the M42 adapter person at Thinkable Creations got a fast reply that pointed me to this video showing how to remove the focus travel stop. This was an easy fix, and with the stop removed the lens should be able to focus stars.

--------

Night Two (22 September; summer is over!)

Really, it was almost two weeks between clear nights that I could use! Worst Summer Ever: clouds, smoke aloft, smoke at ground level with air quality alerts, rain, and the abundance of mosquitoes that the rains produced. Onward to Autumn!

First business: star focus. I set the EAF zero position at the full-out focus position, and infinity focus is near position 750. The park position will be a little larger than the backlash.

I used a standard methodology* for getting autofocus configured.  

  1. Manually** find a very good focus. 
  2. Change position** gradually until you see greater than 50% growth in star size. Set step size to the amount of position change.
  3. Run autofocus and see if the ratio of defocused:focused HFR is about 3:1 to 4:1; estimate how much backlash is in the system and enter that in the OUT field of NINA's autofocuser settings. Backlash will appear as unchanging HFR in the first few measurements. The change in position from the first measurement to the last one at the same HFR is the amount of backlash.
  4. Run autofocus again and adjust step size and backlash accordingly until a decent hyperbola emerges.
  5. Repeat Step 4 until the HFR ratio is about 3:1 to 4:1 and hyperbolic quality is close to 1.00.
  6. (optional) Reduce number of autofocus points and run autofocus to confirm it still works well 
*This is described by another fine Patriot Astro video starting at the 16:29 point, where the process is used with a ZWO EAF.

**My suggestion is to start at focuser position zero (the new "infinity" stop, or close to it) and move to best focus. Stop at a good focus and don't try for perfection; don't decrease the focuser position at any time while hunting for focus. Then continue increasing the position while determining the step size. If you happen to pass through a better focus, note its position and measure step size from it. This ensures that backlash does not factor into the step size.
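
The backlash estimate in step 3 can be sketched in a few lines of Python: backlash shows up as an initial run of autofocus positions where HFR barely changes. The measurement numbers below are invented for illustration.

```python
# Sketch of the step 3 backlash estimate. Backlash appears as unchanging HFR
# in the first few autofocus measurements; the backlash amount is the position
# change across that flat run. Sample data is invented for illustration.

def estimate_backlash(positions, hfrs, tolerance=0.05):
    """Position change across the initial run of ~constant HFR."""
    i = 0
    while i + 1 < len(hfrs) and abs(hfrs[i + 1] - hfrs[0]) <= tolerance:
        i += 1
    return abs(positions[i] - positions[0])

positions = [1000, 900, 800, 700, 600, 500]
hfrs = [6.20, 6.21, 6.19, 4.80, 3.10, 2.00]
backlash = estimate_backlash(positions, hfrs)  # first two moves are flat
```

Here the first two moves produce no HFR change, so the estimate is 200 steps, which would go in the OUT field of NINA's autofocuser settings.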

I did get AF to work reasonably well, with a focus step of 100 and backlash also at 100. However, this was with NINA's built-in AF, not Hocus Focus, so on Night 3 I'm going back to Hocus Focus.

Sample frames were also collected at f/2.0, f/2.4, f/2.8, f/3.3, and f/4.0.

This gives me hope that I can image at f/2.0. Night 3 will be tuning Hocus Focus for better focusing and seeing if I need to adjust backfocus. If this works out Night 4 might be trying to create an actual RGB image!

------------------------------

Loading the sample frames into ASTAP suggests dreadfully large tilt: 42% at f/2.0, and 16% (barely tolerable) at f/4.0. Here are the diagrams of interest at f/2.0:

f/2.0 Tilt Original

This indicates a strong bottom to top tilt.

f/2.0 Aberration Inspector Original

The bottom row has badly elongated stars, but the top row isn't bad at all. I think the tilt adds elongation in the bottom row while essentially nulling it out in the top row. If I could selectively remove the tilt I'd probably have a better idea of the aberration due to backfocus error and could possibly fix it mechanically.

Toward that end I've ordered some very thin 3D-printed tilt shims. (The hardware doesn't permit me to use the tilt plate that came with the camera.) Correcting some of the tilt might help with focus and other star-diameter calculations. An alternative is to use software to correct both tilt and any other aberrations simultaneously. The software of choice for doing this is BlurXTerminator (BXT).

Applying BXT (using its default settings) gives me this:

f/2.0 Tilt after BXT

f/2.0 Aberration Inspector after BXT


Quite an amazing improvement, isn't it? Tilt has essentially vanished and corner stars are much rounder.


Night Three (23 September)

Hocus Focus worked well with the existing values of backlash and step size. I did bump backlash up a little, to 150, after looking at a few runs. With HF running, the hyperbolic fits were much better and the luminance focus position seemed more consistent.

I ran the filter compensation calculator with mixed results. Red and green were basically parfocal with luminance, but blue was quite offset. Perhaps I need to adjust exposure times; I'll repeat this.

Night Four (25 September)

This time the best focus (smallest NINA HFR) determined manually was at focuser position 735. Blue best focus came at 835, so the offset was +100. This is essentially the same as the software-determined +93. 

I took a baseline R-G-B-Dither sequence 10 times; the target was M52. My main goal was to establish how long it takes to gather this data. It appears that a simple 60 s frame consumes about 70.4 s of wall time; a frame followed by a dither uses 101.4 s. Ignoring autofocusing, this means a single RGBD(ither) sequence uses about 242.2 s to collect 180 s of data. Roughly speaking, multiply the total exposure time by 4/3 to get the actual acquisition time. It's pretty much the same as if I were shooting LRGB.

An "adequate" data set of 40 frames per channel, suitable for drizzling, means 2 hours of data. This means acquisition time will be about 2.7 hours, plus some for refocusing. This isn't half bad, and might be bettered by adjusting settling times and optimizing the filter order.
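
The timing arithmetic above, expressed in Python using the measured values:

```python
# Measured wall-time costs from the Night Four baseline run.
FRAME = 70.4              # a plain 60 s frame, seconds of wall time
FRAME_WITH_DITHER = 101.4 # a 60 s frame followed by a dither

def rgbd_seconds():
    """One R-G-B-Dither sequence: two plain frames plus one frame + dither."""
    return 2 * FRAME + FRAME_WITH_DITHER

def session_hours(frames_per_channel):
    """Acquisition time for N RGBD sequences, ignoring refocusing."""
    return frames_per_channel * rgbd_seconds() / 3600.0
```

One sequence comes out to 242.2 s for 180 s of exposure (the 4/3 overhead factor), and 40 frames per channel lands near 2.7 hours.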

Anyway, here is the first light image for the lens:


M52 at center


This is surprisingly good, at least to me. I'm under a Bortle 7 sky and not using filters of any kind. The ability of PixInsight to remove light pollution boggles me, and how well BlurXTerminator reduces aberrations is equally amazing. At f/2.0 the lens has considerable chromatic aberration:

CA in corner star


This star elongation is almost all chromatic aberration, and amounts to about 4 pixels between blue and red. Because BXT is able to correct this I'm going to go with f/2.0 for my first "real" image. If that doesn't turn out well I may move to f/2.8.

Questions Answered?

Is the lens optically sound?  Can it provide focus at infinity? Yes, after a little surgery. Does it have significant aberration? Yes, but it appears to be correctable using BlurXTerminator. Will it work well at f/2, or does it need to be stopped down to f/2.4, f/2.8, or f/4? It's adequate at f/2.0, but might be better at f/2.8. Do the lens adapters introduce significant tilt? I suspect this is the source of the tilt I'm seeing, but I need to look closer at this issue. Maybe the shims I ordered will be the remedy, or I may revert to using a Canon to M42 adapter to see how that works.

Does autofocus work well? It seems to work well enough.

How much can I reduce the time it takes to make a single dither? I still need to play with the dither settings and find out.


------------------------

That's the last of the prep nights! Next I'm going to try to resolve an issue I've had while using two ZWO cameras (one for imaging, one for guiding) at the same time. The problem first popped up at a remote dark-sky site, where the two cameras switched roles; using the ASI2600 as the guide camera really does not work.

Another issue I need to explore is why NINA takes so long to connect to my Losmandy Gemini 2, and why it throws an error at first and then makes a good connection. Strange!




Tuesday, September 2, 2025

Finished: Integrated Flux Nebula Image

Here's the image at quarter-scale:

1/4 Scale Image

Full-Scale image at AstroBin.

Where to even start with this? How about the data?

Originally there were 13.2 hours of data, but I came across a video in which someone explained how they use PixInsight's SubframeSelector process to cull bad frames. My approach to data culling has always been to keep everything that isn't terribly bad, but for this project I thought I'd get tough. Using SFS led me to reject 3.6 hours of data! To be fair, about a third of that was because of my penchant for starting data collection before the end of twilight. There were very few visibly bad frames as viewed in Blink, so I'm going to call this approach "2 sigma" aggressive, in that it basically culls any frame whose FWHM, eccentricity, or median value is more than two standard deviations above the mean. Note that those rejected frames might be perfectly fine in and of themselves, but relative to their cohort they are of significantly lesser quality. Frames with anomalously low star counts are also culled. An example is the set collected during the session when a smoke layer moved in and began obscuring stars in the early morning hours. The star count fell markedly and I removed those frames.
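
A sketch of that culling rule in Python. The metric names are my own shorthand; SubframeSelector's actual approval expressions are written differently.

```python
import statistics

# Sketch of the "2 sigma" culling rule: reject any frame whose FWHM,
# eccentricity, or median value exceeds the cohort mean by more than
# two standard deviations (lower is better for all three metrics).

def cull_two_sigma(frames, metrics=("fwhm", "eccentricity", "median")):
    """Return the frames that pass every metric's mean + 2*sigma limit."""
    kept = list(frames)
    for m in metrics:
        values = [f[m] for f in frames]  # limits come from the full cohort
        limit = statistics.mean(values) + 2 * statistics.pstdev(values)
        kept = [f for f in kept if f[m] <= limit]
    return kept
```

A low-star-count cut would be the mirror image: mean minus two sigma, rejecting frames that fall below it.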

Worth mentioning was the need to use WBPP's Grouping Keywords to make sure that light frames and their appropriate flats were processed together. This was the first time I used it, and it worked perfectly. 

Also, I no longer use dark flats, or "flat darks," if you prefer. Only dark, flat, and bias frames are used for calibration. (Flat and dark frames are now taken for granted at Astrobin, it seems; it no longer asks if you use them.)

Now about the calibration frames, specifically the flats. It seems that most of the time my flat illumination was asymmetric for reasons I don't understand, and this gave the background modeling some problems. That big bright Polaris didn't help, either, nor did the fact that most of the image was nebulosity. My first pass used GradientCorrection, and that left the right side with a green cast. After playing with that for a while I moved on to DynamicBackgroundExtraction. That didn't clear it up, either. After thinking about it for a while I reverted to AutomaticBackgroundExtraction with a 5th-order function, and that did the job.

Next, those darn satellites. The first processing pass got most of them, but a few stuck around in weakened form. They should have been removed during light frame integration, so I looked at what WBPP was using for rejection and it was Generalized Extreme Studentized Deviate (ESD). Some hunting around took me to a PixInsight forum where it was noted that ESD (using its default settings) wasn't doing a great job with satellites. So I told WBPP to instead use Linear Fit Clipping and that seemed to work better. Not perfect, just better. I will need to find out what ESD settings work best since overall it's probably the scheme to use. It may be that satellites and an image full of nebulosity are always going to be a problem.

I also learned that my usual haphazard application of the XTerminator family has been wrong. It's a processing sin to use NoiseXT before BlurXT and NoiseXT before SPCC. For this image I only applied NXT after taking the image nonlinear.

Here's my workflow for this project with the ">" symbol meaning "creates":

WBPP  >  Cropped channel masters

ABE (color channels) > Backgrounded color channel masters

ChannelCombination > RGB master

ImageSolve > RGB master with astrometry 

SPCC > color-calibrated RGB master

ABE (luminance) > Backgrounded luminance master

BXT (luminance master and RGB master) > enhanced masters

STF and HT > nonlinear masters

NXT > de-noised masters

CurveTransformation (with gentle "S" curve) > enhanced masters

LRGBCombination > LRGB master

assorted tweaks (saturation, sharpness, contrast, etc.) > Finished image

Not shown is an additional DynamicCrop after the ABE of luminance because ABE was a little overaggressive at the left edge. Even with two crops, the final image lost only 4.2% off the short axis and 5.5% off the long axis for a 10% areal loss. The reproducibility of the image framing was impressive. Thank you, NINA. 
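
The crop arithmetic is easy to verify:

```python
# Areal loss from the two crops: 4.2% off the short axis, 5.5% off the long.
short_loss, long_loss = 0.042, 0.055
area_loss = 1 - (1 - short_loss) * (1 - long_loss)  # just under 10%
```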

Another lesson learned was that the XTerminators could be sped up quite a bit. Normally the necessary files are installed by the XTs, but on my old computer the install did not engage the GPU. My graphics card is an NVIDIA GeForce GTX 1050 Ti, circa 2018. This post explains how to upgrade a computer to use the GPU for faster XT performance. In my case it sped up the XTs by a factor of 4. I may need to repeat this every time the XTs get an upgrade.

So how did the processing work out? Mostly I was concerned that the area around Polaris was darkened by background extraction and didn't represent reality. I searched AstroBin for an image I could use as a sort of "ground truth" for what I had done. I found just what I wanted in an image by captured_mom8nts (which I'm guessing is not their real name). It appears to have been taken at a much shorter focal length and so should have suffered much less Polaris bloom, keeping the area around the star reasonably pristine. A little crop/rotate/scale/stretch and it matched my image's scale and orientation:

Comparison: Mine (top), captured_mom8nts (bottom)

I think it's fairly obvious that the dark areas on either side of Polaris in my image match those in captured_mom8nts's image, even though mine is much deeper. I'm happy!

I'm also happy with the star color. Shooting only 90 s exposures may have been the key, since it kept stars from saturating. Next time I'll be shooting at f/2, but with a smaller objective, so I may keep the exposure time as is.

-----------------------------------------

All the components of my new Samyang 135 mm f/2 imaging system have arrived or are on their way. Next time I'll have a picture of it all assembled and possibly already taken on its first test drive!


Monday, June 2, 2025

Reservations, Smoke, and One Night of Imaging

A few things from a less than fully successful week of dark-sky camping:

Reservations

The initial state park reservations I had were for three nights starting Tuesday. Clouds and rain looked very likely to wipe out the first two nights, so I cancelled that reservation and made another for three nights starting Thursday, when the forecast was much more favorable: one iffy night followed by two that were perfect. I decided to get everything set up and running that iffy night after seeing the Sun wink out as it set into a heavy smoke layer on the northwest horizon. This turned out to be a good move, as it would be the only usable night. The heavy smoke arrived by Friday morning, when two hours after sunrise the Sun was a dim red ball you could look at directly.

Lesson learned: Minnesota state parks offer same-day reservations.  Next time I'll wait until I'm sure the night will be clear to make my reservation.  All my things are very well organized and I can pack the car and be on the road in less than an hour. My preferred dark-sky camp, Lac qui Parle, is lightly used and usually has unpowered pull-ins available.

Mount Safety Limits

By the time I was ready to shut down that first night, my mount had rotated well beyond its safe travel limit. This didn't really matter, as pointing toward Polaris allows much further travel than is usually safe, and the G-11 mount tracks nicely even when it's over-rotated and the counterweight shaft is well past horizontal.

That said, what I expected was a meridian flip sometime around midnight. When that didn't happen I recycled the system and expected go-to to put the scope on the correct side. It didn't. I could see that eventually I'd run the camera against the mount, and decided to let it go right up to that point before stopping.

I got enough frames that night, but in a month when I return to shoot color frames I'll have to stop even earlier in the evening. 

Lesson learned: I need to configure NINA and my Gemini-II mount control to properly handle flips.

Here are two videos that I found useful for doing this and for setting up NINA for flips:

https://www.youtube.com/watch?v=Rk8uOikHPb4

https://www.youtube.com/watch?v=0N0U5chskCQ

There's also a very useful spreadsheet available to members of the Gemini-II user group on groups.io (See the second link above for how to use the spreadsheet.)

I've made the changes to my Gemini-II and go-to now seems to put the telescope on the correct side based on the limits. Seeing if automated meridian flips work will have to wait for a night under stars.

The Coleman Bug Shelter (Previously mentioned here.)

This was my first night out with the shelter, and it worked great--no gnats, no mosquitoes inside. I sat in the shelter linked to the scope with a 16' active USB 3 cable (which was also getting its first all-night imaging test). There wasn't a single glitch. The only awkward part of this is doing polar alignment, when I (and the laptop) need to be at the mount to make adjustments. Once that's done, it's back into the Coleman. It was so comfortable in there that I spent most of the evening relaxing with a good book.

Lesson learned: I'm ready for next year's Nebraska Star Party and its all-night supply of mosquitoes. Will the shelter, even when staked down, be able to endure the winds of Nebraska?

The Results

If the Eagle Lake Observatory setting is Bortle 4 plus a bit, then Lac qui Parle with Thursday night's smoke was Bortle 4 minus a bit: definitely darker than Eagle Lake, but certainly not the Bortle 3 I've seen there before. Despite that, I gathered 113 luminance frames. Seven were discarded for being in twilight, and one was lost to poor tracking. Adding the new 105 frames to the previously collected 72 Eagle Lake frames brought me to about 4.4 hours of total luminance exposure.

Here is the result, as produced by PixInsight's WBPP and some modest postprocessing of my own:


IFN (luminance, 4.4 hours)

This is much better than my 72-frame image, and it may be all the luminance I need to collect. Using the 3:1:1:1 "standard LRGB model" what's left to shoot is perhaps an hour and a half of each color channel. I have the new moons of June and July to collect my color frames.
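
The exposure bookkeeping above in Python form:

```python
# 90 s subs, 105 new frames plus 72 from Eagle Lake, and the 3:1:1:1
# "standard LRGB model" for planning the color channels.
SUB_SECONDS = 90
lum_hours = (105 + 72) * SUB_SECONDS / 3600   # ~4.4 hours of luminance
color_hours_per_channel = lum_hours / 3       # ~1.5 hours per color channel
```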

Lesson learned: some smoke at a Bortle 3 may be better than clear sky at a Bortle 4+ site.  Given enough good nights I'd still like to add more luminance and get to 6:1:1:1, but good nights around here seem all too rare.

Satellites Galore (with bonus Trek Humor)

These are the satellite tracks rejected by PixInsight. There are a lot of them in 4.4 hours!

"Go home, Tholians, you're drunk"


That's all for this post. In a couple of weeks the moon will go away again and I'll try to get the color data that will bring this luminance to life. 



Sunday, May 11, 2025

Reprocessed IFN Using PixInsight's WBPP

The image in the last post was really not very well processed, with the culprit being me. I suspect I twice subtracted bias or something. It was so bad that I decided to reprocess immediately, adding in some color channel data I collected. The best way I could see to avoid messing up again was to plunge right into using PixInsight's popular Weighted Batch Preprocessing script (WBPP).

Was it easy to use? Yes! If you disagree, I suggest watching the series of WBPP tutorials by Adam Block.

Did it work well with all the default settings? Yes, it did for me. The only step I skipped was Cosmetic Correction. I'll have time to learn how to incorporate CC between now and when I need to process new data collected later this month. 

Was it fast? I fed it my 72 luminance frames, 36 color frames, 100 bias frames, 30 dark frames, and 100 channel flats. WBPP made master frames, calibrated my light frames, registered and integrated the lights, and finished by cropping all four channels. All that in 51 minutes. Wow!

I know there's some sort of WBPP Fast Integration thing that can reduce this even further, but I'm saving that for the future.

The WBPP result is so much better. Here is the master luminance after post-processing:

Polaris IFN as processed by PixInsight WBPP

The full scale image is on AstroBin. Because Astrometry.net as employed by AstroBin seems to have issues with this, I'll pass along ASTAP's solution:

ASTAP solve of above image.
North is up; the celestial pole is a little beyond the top edge

This is exactly the composition I want: Polaris sitting at top center and giving the illusion of shining its light down on the nebulosity. Which it probably isn't actually doing, but artistic license is allowed, right? 😏

Not only is that ugly vertical banding gone, the stars are better shaped. ASTAP puts the tilt at only 3% ("none") compared to the previous "moderate." I continue to be amazed that so much nebulosity can be captured with less than two hours of total exposure at a Bortle 4 site under a nasty high-in-the-sky first quarter moon.

The color image was not adequate and you won't see it here. It looked as if the background flattening of the three channels had gone awry. I'll need to play with the color channels and see if I can do better.

The night I collected the color frames gives me hope for my camping trip. PHD2 guiding was almost perfect: of 36 frames, none were rejected. With dithering turned off there were no hiccups. I had retrained PHD2 beforehand, this time with the correct focal length for the guide scope, and it seemed better behaved.

Reacquiring the image area worked great. The evening was the third time I told NINA to go to the target. It seems to be doing this quite well: almost nothing has been lost due to mistargeting: 

Portion of full image removed by WBPP cropping (red)


Everything considered, it all worked as intended. That's a little scary; I have to wonder what mischief my hardware has planned for me when I take it to the dark-sky campground.

-------------------

While the tariff wars have devolved into confusion over what, when, and how much, the Rokinon 135 mm f/2 lens for Canon hangs in there at the same old $449. If you've been watching the astronomy gear dealerships, you've probably noticed that many items are no longer in stock. Buyers seem to be rushing their purchases to avoid the expected higher prices.



Wednesday, May 7, 2025

Integrated Flux Nebula Mini-Test Result

 Let's get right to the image:


Polaris IFN luminance trial


The total exposure was a scant 1.8 hours (72 x 90 s). NINA ran the acquisition and PixInsight handled the processing. Flat frames were used. The nonlinear stretch was the PI Screen Transfer function and no attempt was made to enhance contrast beyond what it provided.

This is so far beyond my expectations that I don't know what to write. It was a not-very-dark site, the moon was at first quarter high in the ecliptic between Cancer and Leo, and there was a thin layer of smoke aloft. I really didn't expect to get much if any nebulosity in the image. But there it is.

The night's goal was to fully test the imaging setup and perhaps answer a few questions:

  • Would go-to compose the image reliably? I started it once, collected a dozen frames, shut it all down, parked the scope, and did the entire startup again. Plate solving shows the center changed by 67.5 seconds in RA and 7 arcseconds in Dec. Translating the RA difference to arc on the sky at this declination, it's actually more like 27 arcseconds, for a total shift of about 28 arcseconds. The difference in image axis rotation is also tiny, about 0.11 degrees. So the answer is Yes, go-to works very well!
  • Would guiding work so close to the pole? I had made some changes in PHD2--activating multi-star guiding and predictive PEC, and using the calibration assistant to make sure that was done optimally. Through the evening it collected 72 light frames, and only one had to be rejected (when PHD2 timed out after a dither). Tracking was next to perfect. I'm not sure the ASI 2600 benefits much from dithering, so I'll disable it.
  • Some people have indicated issues with field rotation when guiding near a celestial pole. I saw no sign of that. Possibly the excellent polar alignment from PoleMaster should get credit for this.
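The pointing arithmetic in the go-to test above can be checked with a few lines of Python. The declination of the image center is my assumption here (roughly 88.5 degrees, since the pole sits just past the top edge); plug in your own plate-solve value.

```python
import math

def pointing_shift_arcsec(d_ra_sec, d_dec_arcsec, dec_deg):
    """Convert a plate-solve pointing difference into an on-sky shift.

    d_ra_sec     : RA difference in seconds of time
    d_dec_arcsec : Dec difference in arcseconds
    dec_deg      : declination of the image center in degrees
    """
    # 1 second of RA = 15 arcseconds of arc, foreshortened by cos(dec)
    d_ra_arcsec = d_ra_sec * 15.0 * math.cos(math.radians(dec_deg))
    total = math.hypot(d_ra_arcsec, d_dec_arcsec)
    return d_ra_arcsec, total

ra_arcsec, total = pointing_shift_arcsec(67.5, 7.0, 88.5)
print(f"RA shift on the sky: {ra_arcsec:.1f} arcsec, total: {total:.1f} arcsec")
```

With a center declination near 88.5 degrees this lands right around the 27 and 28 arcsecond figures quoted above.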
I do like the composition of the image, with Polaris shifted off center northward and looking as if it's shining light down onto the nebulosity. It's nice to see that the offset doesn't produce any significant internal reflection.

There are issues with this image, though. Although ASTAP reports moderate tilt, I don't see any evidence of it. (Maybe it's some sort of algorithmic issue?) There are a lot of vertical bands that snuck in during the processing. I'll have to find a way to avoid them. [EDIT 12 May: see the reprocessed image here.]

PHD2 was doing something that seemed odd. Every now and then it would make a too-large declination adjustment and then follow that with smaller corrections. This may also have been my fault as I had the wrong guide scope focal length entered. This has been corrected, so I'll see if that takes care of the issue. 

Tonight I'll be out again to test my RGB acquisition scheme. Basically, I'll try the good old 3:1:1:1 channel ratio, meaning 24 frames for each color channel. How will the colors turn out?




Friday, May 10, 2024

Mosaic Detours and a small surprise

 The mosaic is coming along, but there have been several detours along the way.

That difficulty I had with my guide camera resulted in too many bad frames in two of the panels' luminance and red frames. Why those two channels?  I think it's mainly because of how they fall in the filter sequence, but it could just be chance. These will need to be reimaged, meaning no finished mosaic until later this year.

Something was wrong with my luminance flat frame, too. It was leaving a large light circle in the calibrated images:


Lacking a time machine that could let me reshoot the flats as they were at the time the light frames were collected, I opted to create a synthetic flat of sorts by using PixInsight's ABE. This worked well enough, leaving only a few dust motes to be cleaned up by CloneStamp.

One last issue was a sort of swiss-cheese texture produced by the script StarReduction and by StarXTerminator. This was minimized using CurvesTransformation twice: first to reduce the brightness difference between the "holes" and the "cheese," followed by a mild stretch to de-emphasize the background. You could probably use a masked application of MLT to deal with it, too.

Here's a comparison between the starry original and the final reduced-star version:

Before

After

Vastly better, I think. Here is the portion of the workflow that is used to take luminance from star-filled linear integrated to nonlinear with fewer and smaller stars:

  1. Open the original calibrated, aligned and integrated image (it's still linear at this point)
  2. Delinearize the original using STF and HT, save as "NL"
  3. Open StarReduction script, set target to NL and click the "Generate starless view" button. If you have both StarNet2 and StarXTerminator installed you'll be asked which to use and what options there are for it.  (I used StarNet2 with a 2x upsample.) When that's completed, close StarReduction and save the new starless image as NL_Starless
  4. Enhance NL_Starless as you see fit. Certainly make cosmetic repairs. I sharpened it with MLT using layer biases (layer 1 = -0.2, layer 2 = -0.1, layer 3 = +0.15). Save the result as NL_Starless_Enhanced.
  5. Reopen StarReduction, set target to NL and starless view to NL_Starless_Enhanced. Choose the reduction method and any associated parameters, and write them down so that they can be used for the other panels. (I used the Transfer method with a scale factor of 0.1.) Check "Create new star reduced image" and, if you want to use PixelMath or some other means of combining the stars and starless data, check "Create 'reduced stars only' image".
  6. Click the green checkmark to apply. Save the resulting image as NL_ReducedStars
  7. If your image suffers from "Swiss cheese", deal with it now. Save the result as NL_Done. 
  8. This isn't actually "done done." It will need cropping and normalizing before it becomes part of the luminance mosaic.
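If you check "Create 'reduced stars only' image" in step 5, one common way to recombine stars and starless data is a screen blend, the PixelMath expression ~((~starless)*(~stars)). Here's a NumPy sketch of that same arithmetic on toy arrays; it's one recombination option, not necessarily what StarReduction does internally.

```python
import numpy as np

def screen_combine(starless, stars):
    """Screen-blend a starless image with a stars-only image.

    Equivalent to the PixelMath expression ~((~starless)*(~stars)),
    where ~x means 1-x for data normalized to [0, 1].
    """
    return 1.0 - (1.0 - starless) * (1.0 - stars)

# Toy 2x2 "images" normalized to [0, 1]
starless = np.array([[0.10, 0.20], [0.05, 0.30]])
stars    = np.array([[0.00, 0.80], [0.00, 0.00]])
print(screen_combine(starless, stars))
```

Where the stars-only image is zero the starless pixel passes through unchanged, and bright stars are blended back in without clipping.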

The settings you choose for MLT sharpening, StarReduction, and possible 'cheese' removal will depend on many factors, so play with them to see what works best for you. It's probably a good idea to create and save process icons once you've found settings you like.

Lessons learned: Shoot flats ASAP after imaging. Inspect light frames ASAP after imaging to see what you collected.

-----------------------

Here's the surprise in panel 6 containing the southern portion of NGC 6960:

Panel 6, luminance, starless version

Look in the lower right corner at that thing that looks like a ball on a stalk. At first I thought it was an artifact, so I looked at other images on AstroBin. I couldn't find it in any of the images there. Astrometry.net didn't ID it in a plate solve, either, so I went to NINA's Framing Assistant where I could quickly see the area in several surveys. This is what it looks like in the downloadable image files:

Panel area from NINA

And there it was. It shows up in the NASA Sky Survey and HiPS 2, so it's real and not an artifact. But what exactly is it? I processed my color frames and got this:

Panel 6 RGB composite

It's got a bluish tinge to it, so my guess is that it's a very faint reflection nebula. So far as I can find it doesn't have a designation. Is there anyone out there who can ID it?



Thursday, May 2, 2024

Mosaic Workflow

I've been working on my Veil Mosaic project and here is the first tentative result, the luminance mosaic:

Original Luminance Mosaic

The full scale version of this is 10257x9687 pixels in size! This has a number of issues, but it really was just an exercise in stitching together six panels. That part worked flawlessly. The main issue I have with this is the stars. There are just so many of them that they obscure the nebulosity. The other issue is how to extend my workflow to incorporate the chrominance channels and deliver a full LRGB mosaic.

Most people suggest building an LRGB mosaic from channel mosaics, so that's what I will do. As for the mosaic-building tools, advice is mixed, with most people indicating a preference for GradientMergeMosaic. My experience with GMM has been disappointing; many of my images include dense star fields, and GMM has had problematic issues with stars at the edge of panels. Instead, I'll use PhotometricMosaic.

The workflow might go something like this for each panel/channel combination, although the last two steps operate on channel or panel groups. It's assumed you've already created master frames for dark, bias, and flat frames.

  1. Cull bad images from light frames (Blink)
  2. Calibrate light frames (ImageCalibration)
  3. Clean up residual hot pixels (CosmeticCorrection) 
  4. Assess calibrated frames for quality and select reference frame (SubframeSelector)
  5. Align light frames (StarAlignment)
  6. Integrate light frames (ImageIntegration)
  7. Sort all the resulting frames by panel; for each panel group use DynamicCrop to ensure all the channel images for a given panel cover the same sky and have no edge artifacts from dithering. This ensures the channel mosaics have identical dimensions and won't require aligning.
  8. Background correction (ABE, DBE, or both)
  9. Reduce noise (NoiseXTerminator)
  10. When all this has been done, sort the panels by channel. If you're archiving images, this is a good time to send all the intermediate products off to storage; they're no longer needed. Only the images from step 9 will be needed.
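The sort-by-panel and sort-by-channel steps are easy to script if your filenames encode the panel and channel. A sketch, assuming a hypothetical naming convention like Veil_P1_L_integrated.xisf; adjust the pattern to whatever scheme you actually use.

```python
import re
from collections import defaultdict

# Hypothetical filename convention: Veil_P3_G_integrated.xisf
# (panel number after 'P', channel letter L/R/G/B)
PATTERN = re.compile(r"_P(?P<panel>\d+)_(?P<channel>[LRGB])_")

def group_frames(filenames):
    """Sort integrated frames into {channel: {panel: filename}}."""
    groups = defaultdict(dict)
    for name in filenames:
        m = PATTERN.search(name)
        if m:
            groups[m.group("channel")][int(m.group("panel"))] = name
    return groups

files = ["Veil_P1_L_integrated.xisf", "Veil_P2_L_integrated.xisf",
         "Veil_P1_R_integrated.xisf"]
print(group_frames(files))
```

From the resulting dictionary it's trivial to copy each channel's panels into its own working folder.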
Because the luminance images will become pseudo-masks for chrominance they need extra attention. Do these steps for each luminance panel:
  1. Create a starless version (StarXTerminator or StarNet2, both have strengths and weaknesses)
  2. Enhance the starless image (MultiscaleLinearTransform, UnsharpMask, NoiseXTerminator, etc.)
  3. Reduce star bloat (StarReduction), apply the same reduction to all luminance panels.
Care should be taken to ensure all the enhancements and applications of StarReduction are identical. This is an opportunity to learn how to use PI Containers.

Within each channel, normalize the images using LocalNormalization. The hope is that LocalNormalization will deal with background disparities and that the splining of PhotometricMosaic will make any remaining issues imperceptible. 

Next, create the channel mosaics by repeating these steps for each channel. 
  1. Plate solve each panel (ImageSolver)
  2. Register each solved panel (MosaicByCoordinates)
  3. Merge the panels (PhotometricMosaic)
  4. Reduce noise again (NoiseXterminator)

After you've done all four channels you're ready to combine them all as you would any single LRGB image. 

Taking the channel mosaics nonlinear requires you to try to stretch them in roughly the same manner, perhaps starting with the luminance mosaic and applying that same stretch to each of the chrominance channels. PI lets you do this using the STF process. Having done that you're ready to combine the channels and get on with color balancing, etc.
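The core of an STF-style stretch is PixInsight's midtones transfer function; applying the same midtones balance to every channel mosaic is what keeps the stretches consistent. A minimal sketch follows (a real HistogramTransformation also applies shadow and highlight clipping, omitted here):

```python
def mtf(m, x):
    """PixInsight's midtones transfer function.

    m is the midtones balance (0 < m < 1); x is a pixel value in [0, 1].
    mtf maps 0 -> 0, m -> 0.5, and 1 -> 1.
    """
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

# Apply the same stretch to sample pixel values from any channel
m = 0.05  # a strong stretch, as you might use for faint nebulosity
for x in (0.0, 0.01, 0.05, 0.20, 1.0):
    print(f"{x:.2f} -> {mtf(m, x):.3f}")
```

Using one value of m for luminance and all three chrominance channels is the scripted equivalent of dragging the same STF instance onto each image.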

Notice I'm not using the usual PixInsight noise reduction and deconvolution processes. I think NoiseXTerminator provides superior noise reduction, and as for the PixInsight Deconvolution process, I have never had any real luck with that thing. If your stars are round you're better off using StarReduction, which works exceedingly well and is free, too. Here is a too-quick application of StarReduction:


One pass of StarReduction

This image shows the effect of a single pass of StarReduction. There are a lot of blockish artifacts in it, resulting from StarXTerminator being applied to the mosaic rather than to the individual panels.

With this workflow now defined I can get on with the processing!



Monday, July 10, 2023

A Practice Mosaic Using Photometric Mosaic in PixInsight

During the time between first and third quarter moon I thought it might be good to look at methods used to make mosaics. I had no idea what was available for doing that in PixInsight.

I learned there are a few ways to go about making a mosaic. The one that's often mentioned is the Star Alignment (SA) process. SA is usually spoken of as a rough alignment suitable for two-tile mosaics. For merging more tiles together (my eventual Veil mosaic will mean combining six) other methods can give better results.

I'll give them a try, but first I wanted to get a baseline merge using SA. To do this I'll create two tiles from an image of the California Nebula I made last year. I'll just cut it into two pieces.

Here is the image as fully processed in its nonlinear form:


Original Finished Image

 

Mosaics are built from linear images, and fortunately I kept the linear version of this. The left tile will be a simple crop. The right will start as a crop and then get modified to resemble some of the differences one might see in images taken days or weeks apart. Different dithering or polar alignment drift might mean cropping away different poor-signal areas, so I'll change its size. Not having a rotator means it may be off by a few degrees, so I'll rotate it by three degrees. Lastly, the second tile may have a different luminance level thanks to smoke or clouds. Most of this should be removed, but I'll leave some in to see how well it gets handled. (This is done by tweaking the right tile with CurvesTransformation.) Here are the resulting left and right tiles:
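For anyone who wants to reproduce this kind of test tile, the modifications amount to a crop, a small rotation, and a brightness tweak. A rough NumPy/SciPy sketch on a synthetic array; the sizes and the 1.05 brightness factor are arbitrary stand-ins, not the values I used.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
image = rng.random((400, 600))   # stand-in for the linear source image

# Left tile: a simple crop, sharing a generous overlap zone
left = image[:, :360]

# Right tile: a slightly different crop, rotated 3 degrees, and
# brightened a touch to mimic a different night's transparency
right = image[10:390, 230:]
right = rotate(right, angle=3.0, reshape=False, mode="nearest")
right = np.clip(right * 1.05, 0.0, 1.0)  # crude stand-in for a Curves tweak

print(left.shape, right.shape)
```

The point is simply that the two tiles end up with different dimensions, orientations, and backgrounds, which is exactly what the merging tools need to cope with.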

 

Left and right "tiles"

 

You can clearly see they're not the same size. The other modifications are more subtle, but if not treated correctly they could result in a bad merge seam, misaligned tiles, and possibly other defects.

The easiest method for merging the two panels is Star Alignment (SA). I'm going to do this using the settings suggested by Kayron Mercieca of Light Vortex Astronomy on this page. I'm going to supply SA with previews because it's so easy to do for a 2-panel mosaic. My chosen overlap is about 40%, larger than the 30% I'm using for the Veil mosaic. Executing SA gives me this:


Rough mosaic using only Star Alignment

Inspection of the result shows no seams, and no odd stars. It's perfectly acceptable, and confirms that (at least in this case) SA is suitable for a two-tile mosaic.

Next up is Gradient Merge Mosaic (GMM). I again used the methodology prescribed by Mercieca, first building a synthetic star field and then registering the tiles to it using SA. Then the right tile is processed by the DNA Linear Fit script to ensure that it matches the left tile's brightness. Finally, GMM merges the two tiles.

A common problem with GMM is star pinching at the overlap edge, and my result showed severe pinching. The first fix is to adjust two parameters in GMM. Doing so helped a little, but not nearly enough. When this fails the remedy is to use Clone Stamp to remove bright stars at the offending edge. This did reduce the pinching, but it left some behind and created new artifacts of its own.

After considerable effort to deal with the pinching I came to the conclusion that GMM does not cope well with images having a lot of stars. This is the case for the California Nebula tiles; it will also be true for my planned Veil mosaic.

SA and GMM are the only methods described by Keller in his book "Inside PixInsight," so I was not happy. A little googling turned up another method: Photometric Mosaic (PM). PM looked very promising!

Here is a great tutorial for PM

I followed the tutorial with one change: I used the Mosaic Join / Combination mode Overlay rather than the Blend or Average methods the presenter recommends.

The result was excellent! There are no perceptible seams or artifacts at all, even when boosted autostretching is used. Here is the result, with a quick autostretch to be nonlinear:




If this lacks the contrast of the original image it's due to using only autostretch in the processing.

I think I'm now ready for building my mosaic, and it's a day past 3rd quarter--so let the clear evenings commence!

 -------------------------------------

Here's a callback to an earlier post titled "A Wristwatch for Astronomy?" The watch in question worked well in almost all regards---big, easy to read, and of course it kept time adequately. Where it failed was the luminescence. The watch hadn't been properly "charged" before wearing, so I was only able to read it using my red light. Next time I'll remember to feed it plenty of nice yummy photons!


Friday, June 23, 2023

The Start of a Mosaic

It's been kind of wild since the last post. We've had many days of air quality alerts, most of which have been for excessive surface ozone, a byproduct of smoke and sunlight and "normal" air pollution. Smoke at times thickened to concentrations similar to what was seen earlier on the East Coast. It wasn't healthy at all; hospitals reported a surge of people with breathing difficulty.

The air quality did improve for a bit and I was able to get out and do a little imaging. In fact, I managed to start one of my learning projects!

One item on my to-image list is a mosaic of the Veil Nebula that spans both the east and west sides. The Veil isn't immense like Barnard's Loop, but it's large enough to require something like a 250mm lens to fit it all in a single frame. My FSQ-106 has a focal length of 530mm and it really needs something like a 2x3 mosaic to encompass the Veil. That's 6 frames, and at about 2 hours exposure time for each it will make a good summertime project that could last into September.
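The 2x3 requirement follows from simple field-of-view arithmetic. A sketch, assuming an APS-C class sensor (23.5 x 15.7 mm); the camera used for this project isn't stated here, so treat the exact numbers as illustrative.

```python
import math

def fov_degrees(sensor_mm, focal_mm):
    """Angular field of view for one sensor dimension."""
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_mm)))

focal = 530.0  # FSQ-106 at its native focal length
w = fov_degrees(23.5, focal)
h = fov_degrees(15.7, focal)
print(f"Single frame: {w:.2f} x {h:.2f} degrees")

# The Veil spans roughly 3 degrees, so a ~2.5 x 1.7 degree frame
# with ~30% overlap between panels calls for a 2x3 layout.
```

Swap in your own sensor dimensions and overlap fraction to size a mosaic for any target.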

Despite the ever-present smoke I was able to collect the data for subframe 1 which includes most of the East Veil (NGC 6992) and the Network Nebula (NGC 6995): left click the image below, then right click the enlarged image and choose "Open Image in New Tab" to see the image at 1/2 scale:

 


 

For fun, here's a try at a starless version using StarNet2 in PixInsight:


 

This is LRGB with all exposures 120s, L = 20 lights, R = 11, G = 12, and B = 12.

I think I dark-clipped this a little in my processing haste, but it will get another processing eventually.  Here it is tucked into its place in the eventual mosaic:


 
NGC 6995 is in the overlap area between subframes 1 and 3. The next target will be subframe 3, to complete the Eastern Veil and give me some practice using PixInsight to create a mosaic.

Some other tidbits from this too-rare night of imaging:
  • The QHY-5II guide camera was flawless, with over two hours of guiding without a single disconnect. It really does need USB3, it seems. 
  • Not only that, but tracking errors were limited to 2 frames in 58. A rate of 1 bad frame in 29 is a lot better than the 1 in 6 that I had experienced earlier this year.
  • NINA's Advanced Sequencer finished subframe 1 and started imaging subframe 3 without any attention on my part. This was the first time I had tried this. I wasn't willing to do another two to three hours of imaging, so I reluctantly shut it down at that point.
  • More NINA: Its mosaic feature is nicely integrated into the Framing Assistant and sets up the Advanced Sequencer for all the subframes with simplicity.
  • Even More NINA: If you want to use the Framing Assistant with images while you're someplace without Internet, go to the NINA download page and grab the Offline Sky Map Cache file (2 GB). It replaces the existing cache folder AppData > Local > NINA > Framing Assistant Cache. Don't forget to change the Framing Assistant screen's Image Source setting to Offline Sky Map! Incidentally, installing this allows you to zoom out and use Framing Assistant like a (rather strange) planetarium.
  • I seem to have gotten the hang of PI Deconvolution. I don't know why it was so temperamental before, but the key seems to be in the Deringing settings. A Global dark of 0.03 to 0.02 seems to work well, with Global bright typically between zero and 1/2 of Global dark.