Monday, September 29, 2025

Getting The Samyang Setup Ready For Imaging

[This is Part 2 of 2 about my new Samyang 135mm f/2 lens imaging system]

What is the Samyang Setup? 

It's the same setup I use for imaging with the FSQ-106 -- with a few changes. Obviously the imaging scope is now the Samyang lens and the Pegasus FocusCube 3 is swapped out for a ZWO EAFN. The connecting hardware between the lens and my imaging camera (ASI 2600MM) is different as well because of backfocus needs.

Questions to Answer

Is the lens optically sound?  Can it provide focus at infinity? Does it have significant aberration? Will it work well at f/2, or does it need to be stopped down to f/2.4, f/2.8, or f/4? Do the lens adapters introduce significant tilt?

Does autofocus work well?

How much can I reduce the time it takes to make a single dither?

Given the cloudy nights typical at this time of year, it will take a while to get things sorted out. Because this testing only requires stars, I can stay in my back yard; a dark sky is not necessary.

Night One (10 September)

The lens would not focus at infinity.  This meant autofocusing and image quality assessment were off the agenda. 

What did work was tracking. Plate solving was 100% despite the stars being somewhat out of focus.  I was able to slew and center without any issues. 

Clouds came in before I could look into dithering -- or anything else, for that matter.

--------

A letter to the M42 adapter maker at Thinkable Creations got a fast reply that pointed me to this video showing how to remove the focus travel stop. This was an easy fix, and with the stop removed the lens should be able to bring stars to focus.

--------

Night Two (22 September; summer is over!)

Really, it was almost two weeks between clear nights that I could use! Worst Summer Ever: clouds, smoke aloft, smoke at ground level with air quality alerts, rain, and the abundance of mosquitoes that the rains produced. Onward to Autumn!

First business: star focus. I set the EAF zero position at full outward focus travel; infinity focus is near position 750. The park position will be a little larger than the backlash.

I used a standard methodology* for getting autofocus configured.  

  1. Manually** find a very good focus. 
  2. Change position** gradually until you see greater than 50% growth in star size. Set step size to the amount of position change.
  3. Run autofocus and see if the ratio of defocused:focused HFR is about 3:1 to 4:1; estimate how much backlash is in the system and enter that in the OUT field of NINA's autofocuser settings. Backlash will appear as unchanging HFR in the first few measurements. The change in position from the first measurement to the last one at the same HFR is the amount of backlash.
  4. Run autofocus again and adjust step size and backlash accordingly until a decent hyperbola emerges.
  5. Repeat Step 4 until HFR ratio is about 3:1 to 4:1 and hyperbolic quality is close to 1.00
  6. (optional) Reduce number of autofocus points and run autofocus to confirm it still works well 
*This is described by another fine Patriot Astro video starting at the 16:29 point, where the process is used with a ZWO EAF.

**My suggestion is to start at focuser position zero (the new "infinity" stop, or close to it) and move to best focus. Stop at a good focus and don't try for perfection; don't decrease the focus position at any time while hunting for focus. Then continue increasing the position while determining the step size. If you happen to pass through a better focus, note its position and measure step size from it. This ensures that backlash does not factor into step size.
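For the programmatically inclined, here's a little Python sketch of how I think about the backlash estimate in Step 3. The run data is invented, and NINA of course does its own thing internally; this just shows the "unchanging HFR at the start of a run" idea.

```python
def estimate_backlash(measurements, tol=0.05):
    """Estimate focuser backlash from the start of an autofocus run.

    measurements: list of (focuser_position, hfr) tuples in the order
    taken, all moving in the same (OUT) direction. Backlash shows up as
    consecutive initial points whose HFR barely changes; the position
    travelled across those points is the backlash estimate.
    """
    if len(measurements) < 2:
        return 0
    first_pos, first_hfr = measurements[0]
    last_flat_pos = first_pos
    for pos, hfr in measurements[1:]:
        # HFR still within tol (fractional) of the first reading means
        # the motor is taking up slack and the optics haven't moved yet
        if abs(hfr - first_hfr) / first_hfr <= tol:
            last_flat_pos = pos
        else:
            break
    return abs(last_flat_pos - first_pos)

# Example run with 100-step moves: three flat readings, then HFR drops
run = [(0, 6.1), (100, 6.0), (200, 6.1), (300, 4.8), (400, 3.2)]
print(estimate_backlash(run))  # -> 200
```

That 200 would go into the OUT backlash field in NINA's focuser settings.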

I did get AF to work reasonably well, with a focus step of 100 and backlash also at 100. However, this was with NINA's built-in AF, not Hocus Focus, so on Night 3 I'm going back to Hocus Focus.

Sample frames were also collected at f/2.0, f/2.4, f/2.8, f/3.3, and f/4.0.

This gives me hope that I can image at f/2.0. Night 3 will be tuning Hocus Focus for better focusing and seeing if I need to adjust backfocus. If this works out Night 4 might be trying to create an actual RGB image!

------------------------------

Loading the sample frames into ASTAP suggests dreadfully large tilt: 42% at f/2.0 and 16% (barely tolerable) at f/4.0. Here are the diagrams of interest at f/2.0:

f/2.0 Tilt Original

This indicates a strong bottom to top tilt.

f/2.0 Aberration Inspector Original

The bottom row has badly elongated stars, but the top row isn't too bad at all. I think the tilt adds elongation in the bottom row while essentially nulling it out in the top row. If I could selectively remove the tilt I'd probably have a better idea of the aberration due to backfocus error and could possibly fix it mechanically.

Toward that end I've ordered some very thin 3D-printed tilt shims. (My hardware doesn't permit me to use the tilt plate that came with the camera.) Correcting some of the tilt might help with focus and other star-diameter calculations. An alternative is to use software to correct both tilt and any other aberrations simultaneously. The software of choice for doing this is BlurXTerminator (BXT).

Applying BXT (using its default settings) gives me this:

f/2.0 Tilt after BXT

f/2.0 Aberration Inspector after BXT


Quite an amazing improvement, isn't it? Tilt has essentially vanished and corner stars are much rounder.


Night Three (23 September)

Hocus Focus worked well with the existing values of backlash and step size. I did bump backlash up a little, to 150, after looking at a few runs. With HF running, the hyperbolic fits were much better and the luminance focus position seemed more consistent.

I ran the filter compensation calculator with mixed results. Red and green were basically parfocal with luminance, but blue was quite offset. Perhaps I need to adjust exposure times? I'll repeat the run.

Night Four (25 September)

This time the best focus (smallest NINA HFR) determined manually was at focuser position 735. Blue best focus came at 835, so the offset was +100. This is essentially the same as the software-determined +93. 

I ran a baseline R-G-B-Dither sequence 10 times; the target was M52. My main goal was to get a baseline for how long it takes to gather this data. It appears that a simple 60 s frame consumes about 70.4 s, while a frame followed by a dither uses 101.4 s. Ignoring autofocusing, this means a single RGBD(ither) sequence uses about 242.2 s to collect 180 s of data. Roughly speaking, multiply the total exposure time by 4/3 to get the actual acquisition time. It's pretty much the same as if I were shooting LRGB.

An "adequate" data set of 40 frames per channel, suitable for drizzling, means 2 hours of data. This means acquisition time will be about 2.7 hours, plus some for refocusing. This isn't half bad, and might be bettered by adjusting settling times and optimizing the filter order.
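Here's the timing arithmetic as a quick Python sanity check. The 70.4 s and 101.4 s overheads are just my Night Four measurements, not constants of nature, so treat this as a sketch.

```python
# Overheads measured on Night Four (60 s subs)
FRAME_S = 70.4               # one 60 s frame plus its overhead
FRAME_PLUS_DITHER_S = 101.4  # same frame followed by a dither

def rgbd_sequence_seconds():
    """One R-G-B-Dither sequence: two plain frames plus one frame + dither."""
    return 2 * FRAME_S + FRAME_PLUS_DITHER_S

def acquisition_hours(frames_per_channel):
    """Total wall-clock acquisition time, ignoring autofocus runs."""
    return frames_per_channel * rgbd_sequence_seconds() / 3600

print(rgbd_sequence_seconds())          # 242.2 s to collect 180 s of data
print(round(acquisition_hours(40), 1))  # ~2.7 hours for 40 frames/channel
```

The 242.2 / 180 ratio is where the "multiply by 4/3" rule of thumb comes from.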

Anyway, here is the first light image for the lens:


M52 at center


This is surprisingly good, at least to me. I'm under a Bortle 7 sky and not using filters of any kind. The ability of PixInsight to remove light pollution boggles me, and how well BlurXTerminator reduces aberrations is equally amazing. At f/2.0 the lens has considerable chromatic aberration:

CA in corner star


This star elongation is almost all chromatic aberration, and amounts to about 4 pixels between blue and red. Because BXT is able to correct this I'm going to go with f/2.0 for my first "real" image. If that doesn't turn out well I may move to f/2.8.

Questions Answered?

Is the lens optically sound?  Can it provide focus at infinity? Yes, after a little surgery. Does it have significant aberration? Yes, but it appears to be correctable using BlurXTerminator. Will it work well at f/2, or does it need to be stopped down to f/2.4, f/2.8, or f/4? It's adequate at f/2.0, but might be better at f/2.8. Do the lens adapters introduce significant tilt? I suspect this is the source of the tilt I'm seeing, but I need to look closer at this issue. Maybe the shims I ordered will be the remedy, or I may revert to using a Canon to M42 adapter to see how that works.

Does autofocus work well? It seems to work well enough.

How much can I reduce the time it takes to make a single dither? I still need to play with the dither settings and find out.


------------------------

That's the last of the prep nights! Next I'm going to try to resolve an issue I've had while using two ZWO cameras (one for imaging, one for guiding) at the same time. The problem first popped up at a remote dark-sky site, where the two cameras switched roles. Using the ASI2600 as the guide camera really does not work.

Another issue I need to explore is why it takes NINA so long to connect to my Losmandy Gemini 2, and why it throws an error at first and then makes a good connection. Strange!




Tuesday, September 9, 2025

The Samyang 135mm f/2 Lens; Setting It Up and How to Use It

[This is Part 1 of 2 about my new Samyang 135mm f/2 lens imaging system]

Assembling the System

My Samyang 135 mm f/2 imaging system comprises the Canon version of the lens,  a ZWO EAFN, the M42 adapter from Thinkable Creations, and finally the Astrodymium rings. 

The rings were held up by the changing import rules and regulations about tariffs and fees because I had ordered them directly from the manufacturer in Canada. The seller was scrambling to sort out what it all meant for delivery issues and added fees, and was good enough to contact me about what was going on and suggest I cancel the order and buy from Agena Astro instead. Which is what I ended up doing.

The Astrodymium rings/cradle went together like clockwork, though I managed a couple of missteps by not following the directions and paying attention to the animations in the instructions. I don't use ASIAIR and probably never will, so I got a second accessory rail instead. I know that when one is accustomed to machined aluminum for tube rings and dovetails the 3D-printed plastic parts may seem a little suspect, but when assembled the entire thing is quite rigid. I don't expect to see any flexure at all.

Through no fault of Thinkable Creations the install of their adapter was more difficult than anticipated. Installing it involves first removing a plate on the Samyang that interfaces with a Canon DSLR and then detaching a spacer ring that's held in place by four tiny screws. The plate came off easily, but two of the spacer ring screws were crazy tight and it took some time (plus a little penetrating oil and elbow grease) to get them out. Other than that the install went well. 

One thing about this adapter that prospective buyers should know: the M42 threads are very long. When I screwed it onto my ZWO EFW it came within a mm of the filter carousel. This seemed dangerous; I imagined it snagging on the carousel and possibly damaging the EFW and filters. 

But fortunately it all worked out. The required backfocus when used with filters is 45 mm. With the adapter in place I have 12.5 (camera) + 20 (EFW) + 5.5 (adapter) for a backfocus of 38 mm. My plan was to add a 7.5 mm M42 spacer ring to bring it up to 45.5, which should have been close enough to the magic number of 45.0. Unfortunately those long threads wouldn't allow the ring to fully screw onto the adapter, and instead of 7.5 mm it added 9.5 mm. That put backfocus at 47.5 mm, much too long. Luckily, I had a 5 mm spacer ring on hand. When it was screwed on as far as possible, there was a gap of about 2 mm between it and the adapter face. This effectively made the adapter's contribution 7.5 mm, for a total of 45 mm. Plus, the spacer ring's threads don't intrude into the EFW anywhere near as far as the adapter's. So if you intend to use the adapter, buy a 5 mm spacer ring, too. The diagram below illustrates how the backfocus works.


Backfocus for ASI 2600 minus tilt ring (red),
ZWO EFW (blue), 5mm spacer (green), and M42 adapter (black);
Diagram is not to scale!
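If you'd rather check the spacing arithmetic in code, here's a tiny sketch using the numbers from the text (the dictionary labels are mine, not official part names):

```python
# Backfocus stack for the Samyang + ASI2600MM, all values in mm
REQUIRED_MM = 45.0

def stack_total(parts):
    """Sum the optical-path contributions of the parts in the stack."""
    return sum(parts.values())

# Bare stack: 7 mm short of the required 45 mm
bare = {"camera": 12.5, "efw": 20.0, "adapter": 5.5}
print(stack_total(bare))  # 38.0

# The 5 mm ring stops ~2 mm short of the adapter face, so the ring
# plus thread gap effectively adds 7 mm, landing exactly on 45 mm
with_ring = {**bare, "5mm_ring": 5.0, "thread_gap": 2.0}
print(stack_total(with_ring))  # 45.0
```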

The only thing I didn't get (but should have) in the initial round of orders was a short (150 mm) Vixen-style dovetail, but that's on order and will arrive about the time this is posted. The short length allows the camera/EFW to have full rotation and lets me do flats by resting the light atop the lens shroud.

Imaging at a focal length of 135 mm

Undersampling

Imaging with the Samyang 135 mm f/2 lens is going to be different from my usual imaging, mainly because of its short focal length. This will cause what's called undersampling, in which the pixel scale (in arcseconds per pixel) is larger than the seeing scale. When this is the case, a star's light will illuminate one pixel, but probably not the pixels around it. The star is imaged as a square the size of a pixel. (When the pixel scale is much smaller than the seeing scale, a star will illuminate many pixels, which makes for nice-looking stars at the cost of resolving detail.)

How do we know undersampling will occur before even taking an image? All we need to do is compare our imaging setup's pixel scale to the seeing expected for an imaging session. Suppose we take average seeing as 2.0 arcseconds (").

The formula for a setup's pixel scale is based on the camera's pixel size and the focal length of the imaging telescope or lens:

Pixel scale (in "/px) = 206 x camera pixel size (in microns) / focal length of lens (in mm)

If you look closely at my recent IFN image, you can see it's on the edge of being undersampled: many of the smaller stars look blocky. According to the above formula, my setup for that had a pixel scale of

Pixel scale = 206 x 3.76 microns / 387 mm = 2.0"/px

This confirms the idea that it's mildly undersampled.

Now let's repeat the calculation for the Samyang. We have

Pixel scale = 206 x 3.76 microns / 135 mm =  5.74"/px

This is much larger than 2"/px, so it's safe to assume stars will be undersampled, probably badly.
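The two calculations above fit in a few lines of Python. The 2.0" seeing is an assumption, and "IFN setup" is just my label for the 387 mm configuration:

```python
def pixel_scale(pixel_um, focal_mm):
    """Pixel scale in arcsec/px: 206 * pixel size (microns) / focal length (mm)."""
    return 206 * pixel_um / focal_mm

SEEING = 2.0  # assumed average seeing, arcseconds

# 3.76 micron pixels on the 387 mm setup vs. the Samyang at 135 mm
for name, fl in [("IFN setup (387 mm)", 387), ("Samyang (135 mm)", 135)]:
    scale = pixel_scale(3.76, fl)
    flag = "undersampled" if scale > SEEING else "well sampled"
    print(f'{name}: {scale:.2f} "/px -> {flag}')
```

The 387 mm setup lands right at 2.00 "/px (the "edge" of undersampling), while the Samyang comes out near 5.74 "/px.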

Drizzling

The way to compensate for undersampling is to drizzle during processing. Drizzling can make those blocky stars rounder and fuzzier, at the cost of extra processing time and, worse, an amplification of noise. The noise can be reduced by acquiring a large number of light frames and by using a utility like NoiseXTerminator. Drizzling thus raises the bar on how many light frames to collect and may require frequent dithering.

If you read the forums, there seem to be two common answers for how many frames you need: at least 100 or at least 40. The former comes from those who want the very best images, while the latter is for people like me who are happy with satisfactory results. I'll probably go with 40 for the color frames but closer to 100 for luminance. There's also disagreement about how often to dither: once every few frames or with every frame. I'll probably dither after each luminance frame and after every third color frame, if I can reduce the time a dither takes to something like 20 seconds or so. This might be unrealistic; only testing will tell.

Dithering

The generally accepted dither distance on the imaging camera is 10 px or so. NINA lets you set this by specifying how many pixels to move on the guide camera. Determining the value to use requires the pixel scale formula again, applied to the imaging system and then to the guiding system:

Imaging Scale = 5.74 "/px (from previously)

Guiding Scale = 206 x 3.75 microns / 130 mm = 5.94 "/px

This means moving one pixel on the guider corresponds to moving 5.94", which moves the imaging camera (5.94 / 5.74) px, or 1.04 px. In other words, the motions of the imaging camera are essentially the same as those of the guider. If I want 10 px dithering on the images, I should use 10 px for NINA's "PHD2 Dither Pixels" setting. The number to use is open to guesswork. Maybe 5 is fine? I'll have to try different values.
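Here's the conversion in code form, with a hypothetical helper name (NINA itself only wants the single "PHD2 Dither Pixels" number):

```python
# Pixel scales from the formulas above, in arcsec/px
IMAGING = 206 * 3.76 / 135  # Samyang + ASI2600MM: ~5.74
GUIDING = 206 * 3.75 / 130  # guide scope + guide camera: ~5.94

def guider_dither_px(target_imaging_px):
    """Guide-camera pixels to request so the imaging camera moves target_imaging_px."""
    return target_imaging_px * IMAGING / GUIDING

# One guider pixel moves the imaging camera by about this many of its pixels:
print(round(GUIDING / IMAGING, 2))      # 1.04
# A ~10 px dither on the imaging camera needs about this many guider pixels:
print(round(guider_dither_px(10), 1))   # 9.7
```

With the two scales nearly equal, rounding up to 10 guider pixels is the practical answer.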

Other NINA dither settings are related to the mount. "Settle Pixel Tolerance" is basically how close PHD2 has to be to the guide star before it allows the mount to start settling. You can also set the minimum and maximum times for settling. The defaults for these are 10 and 40 s, respectively. My plan is to experiment with the minimum time and pixel tolerance values to see what works fastest with my mount. 

Some people dither only in RA, but the general advice is to use random dithering.

Exposure time

This is really a guessing game with many trade-offs. For fun, I'll use the SharpCap sky background calculator for imaging with the Samyang at the Iowa Star Party. Sky brightness there is 21.60 magnitudes per square arcsecond, and the resulting sky electron rate is 6.49 e-/px/s. Read noise is a negligible 1.4 e- at gain 100, so getting the sky up to about 1/6 of full well would take 333 s (5.5 minutes). Pretty sure most of the stars in the field of view would be blown out by that. How about simply making sure that the sky signal swamps the read noise, say by a factor of 100? That would only require an exposure of about 21 s. So now I have the exposure time bracketed: roughly 20 to 330 s!
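Here's the bracketing calculation in Python. Note that the full-well value is my own guess, chosen only so the 1/6-of-full-well bound lands near the 333 s above; the sky rate and read noise are the SharpCap numbers.

```python
SKY_RATE = 6.49    # sky electrons per pixel per second (SharpCap estimate)
READ_NOISE = 1.4   # e- at gain 100
FULL_WELL = 13000  # e-; a guess, chosen to reproduce the ~333 s figure

def exposure_for_sky_target(target_e, sky_rate=SKY_RATE):
    """Seconds needed for the sky background to reach target_e electrons."""
    return target_e / sky_rate

# Lower bound: sky signal at 100x the read noise
print(round(exposure_for_sky_target(100 * READ_NOISE)))  # ~22 s
# Upper bound: sky background at 1/6 of full well
print(round(exposure_for_sky_target(FULL_WELL / 6)))     # ~334 s
```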

It's worth noting that some people shoot light frames with only 20 s exposures.

I've been using 90 s exposures and I really like the star color I got in the IFN image so I think I'll stay with that. A test image around the next new moon would be really useful.  

Next Post: Testing

Tuesday, September 2, 2025

Finished: Integrated Flux Nebula Image

Here's the image at quarter-scale:

1/4 Scale Image

Full-Scale image at AstroBin.

Where to even start with this? How about the data?

Originally there were 13.2 hours of data, but I came across a video in which someone explained how they use PixInsight's SubframeSelector process to cull bad frames. My approach to data culling has always been to keep everything that isn't terribly bad, but for this project I thought I'd get tough. Using SFS led me to reject 3.6 hours of data! To be fair, about a third of that was because of my penchant for starting data collection before the end of twilight. There were very few visibly bad frames as viewed in Blink, so I'm going to call this approach "2 sigma" aggressive, in that it basically culls any frame whose FWHM, eccentricity, or median value is more than two standard deviations above the mean. Note that those rejected frames might be perfectly fine in and of themselves, but relative to their cohort they are of significantly lesser quality. Frames with anomalously low star counts are also culled. An example is the set collected during the session when a smoke layer moved in and began obscuring stars in the late morning. Star count fell markedly and I removed those frames.
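SubframeSelector does this through its own expression syntax; the Python below is only a sketch of the "2 sigma" idea, with made-up frame measurements.

```python
import statistics

def cull_two_sigma(frames, keys=("fwhm", "eccentricity", "median")):
    """Reject any frame whose FWHM, eccentricity, or median background
    sits more than two standard deviations above the cohort mean.

    frames: list of dicts of per-frame measurements.
    Returns (keep, reject) lists.
    """
    limits = {}
    for k in keys:
        vals = [f[k] for f in frames]
        limits[k] = statistics.mean(vals) + 2 * statistics.stdev(vals)
    keep, reject = [], []
    for f in frames:
        (reject if any(f[k] > limits[k] for k in keys) else keep).append(f)
    return keep, reject

# Nine decent frames plus one bloated (smoke/twilight) frame
good = [{"fwhm": v, "eccentricity": 0.41, "median": 810.0}
        for v in (2.0, 2.1, 2.2) * 3]
bad = [{"fwhm": 4.5, "eccentricity": 0.41, "median": 810.0}]
keep, reject = cull_two_sigma(good + bad)
print(len(keep), len(reject))  # 9 1
```

Note that a frame is judged only relative to its cohort, which is exactly why an individually "fine" frame can still get culled.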

Worth mentioning was the need to use WBPP's Grouping Keywords to make sure that light frames and their appropriate flats were processed together. This was the first time I used it, and it worked perfectly. 

Also, I no longer use dark flats, or "flat darks," if you prefer. Only dark, flat, and bias frames are used for calibration. (Flat and dark frames are now taken for granted at Astrobin, it seems; it no longer asks if you use them.)

Now about the calibration frames, specifically the flats. It seems that most of the time my flat illumination was asymmetric for reasons I don't understand, and this gave the background modelization processing some problems. That big bright Polaris didn't help, either, nor did the fact that most of the image was nebulosity. My first pass used GradientCorrection and that left the right side with a green cast. After playing with that for a while I moved on to DynamicBackgroundExtraction. That didn't clear it up, either. After thinking about it for a while I reverted to AutomaticBackgroundExtractor with a 5th-order function and that did the job. 

Next, those darn satellites. The first processing pass got most of them, but a few stuck around in weakened form. They should have been removed during light frame integration, so I looked at what WBPP was using for rejection and it was Generalized Extreme Studentized Deviate (ESD). Some hunting around took me to a PixInsight forum where it was noted that ESD (using its default settings) wasn't doing a great job with satellites. So I told WBPP to instead use Linear Fit Clipping and that seemed to work better. Not perfect, just better. I will need to find out what ESD settings work best since overall it's probably the scheme to use. It may be that satellites and an image full of nebulosity are always going to be a problem.

I also learned that my usual haphazard application of the XTerminator family has been wrong. It's a processing sin to use NoiseXT before BlurXT and NoiseXT before SPCC. For this image I only applied NXT after taking the image nonlinear.

Here's my workflow for this project with the ">" symbol meaning "creates":

WBPP  >  Cropped channel masters

ABE (color channels) > Backgrounded color channel masters

ChannelCombination > RGB master

ImageSolve > RGB master with astrometry 

SPCC > color-calibrated RGB master

ABE (luminance) > Backgrounded luminance master

BXT (luminance master and RGB master) > enhanced masters

STF and HT > nonlinear masters

NXT > de-noised masters

CurveTransformation (with gentle "S" curve) > enhanced masters

LRGBCombination > LRGB master

assorted tweaks (saturation, sharpness, contrast, etc.) > Finished image

Not shown is an additional DynamicCrop after the ABE of luminance because ABE was a little overaggressive at the left edge. Even with two crops, the final image lost only 4.2% off the short axis and 5.5% off the long axis for a 10% areal loss. The reproducibility of the image framing was impressive. Thank you, NINA. 

Another lesson learned was that the XTerminators could be sped up quite a bit. Normally the necessary files are installed by the XTs, but on my old computer the install did not engage the GPU. My graphics card is an NVIDIA GeForce GTX 1050 Ti circa 2018. This post explains how to upgrade a computer to use the GPU for faster XT performance. In my case it sped up the XTs by a factor of 4. I may need to repeat this every time an XT does an upgrade.

So how did the processing work out? Mostly I was concerned that the area around Polaris was darkened by background extraction and didn't represent reality. I searched AstroBin for an image I could use as a sort of "ground truth" for what I had done. I found just what I wanted in an image by captured_mom8nts (which I'm guessing is not their real name). It appears to have been taken at a much shorter focal length and so should have suffered much less Polaris bloom, keeping the area around the star reasonably pristine. A little crop/rotate/scale/stretch and it matched my image's scale and orientation:

Comparison: Mine (top), captured_mom8nts (bottom)

I think it's fairly obvious that the dark areas on either side of Polaris in my image match those in captured_mom8nts's image, even though mine is much deeper. I'm happy!

I'm also happy with the star color. Shooting only 90 s exposures may have been the key, in that it kept stars from saturating. Next time I'll be shooting at f/2 but with a smaller objective, so I may keep the exposure time as is.

-----------------------------------------

All the components of my new Samyang 135 mm f/2 imaging system have arrived or are on their way. Next time I'll have a picture of it all assembled and possibly already taken on its first test drive!