Tuesday, September 9, 2025

The Samyang 135mm f/2 Lens: Setting It Up and How to Use It

[This is Part 1 of 2 about my new Samyang 135mm f/2 lens imaging system]

Assembling the System

My Samyang 135 mm f/2 imaging system comprises the Canon version of the lens, a ZWO EAF, the M42 adapter from Thinkable Creations, and finally the Astrodymium rings.

The rings (made in Canada) were held up by the shifting import rules about tariffs and fees, since I had ordered them directly from the manufacturer in Canada. The seller was scrambling to sort out what it all meant for delivery delays and added fees, and was good enough to contact me about what was going on and suggest I cancel the order and buy from Agena Astro instead. Which is what I ended up doing.

The Astrodymium rings/cradle went together almost like clockwork; I managed a couple of missteps by not following the directions and not paying attention to the animations in the instructions. I don't use ASIAIR and probably never will, so I got a second accessory rail instead. I know that when one is accustomed to machined aluminum tube rings and dovetails, 3D-printed plastic parts may seem a little suspect, but when assembled the entire thing is quite rigid. I don't expect to see any flexure at all.

Through no fault of Thinkable Creations, the installation of their adapter was more difficult than anticipated. Installing it involves first removing the plate on the Samyang that interfaces with a Canon DSLR, then detaching a spacer ring held in place by four tiny screws. The plate came off easily, but two of the spacer ring screws were extremely tight, and it took some time (plus a little penetrating oil and elbow grease) to get them out. Other than that, the install went well.

One thing prospective buyers should know about this adapter: the M42 threads are very long. When I screwed it onto my ZWO EFW, the threads came within a millimeter of the filter carousel. This seemed dangerous; I imagined them snagging the carousel and possibly damaging the EFW and filters.

But fortunately it all worked out. The required backfocus when used with filters is 45 mm. With the adapter in place I have 12.5 (camera) + 20 (EFW) + 5.5 (adapter) for a backfocus of 38 mm. My plan was to add a 7.5 mm M42 spacer ring to bring it up to 45.5 mm, which should have been close enough to the magic number of 45.0. Unfortunately those long threads wouldn't allow the ring to screw fully onto the adapter, and instead of 7.5 mm it added 9.5 mm. That put the backfocus at 47.5 mm, much too long. Luckily, I had a 5 mm spacer ring on hand. When it was screwed on as far as possible, there was a gap of about 2 mm between it and the adapter face, which effectively made the adapter's contribution 7.5 mm and the total 45 mm. Plus, the spacer ring's threads don't intrude into the EFW anywhere near as far as the adapter's. So if you intend to use the adapter, buy a 5 mm spacer ring, too. The diagram below illustrates how the backfocus works, and there's a quick arithmetic check in the sketch after it.


Backfocus for ASI 2600 minus tilt ring (red),
ZWO EFW (blue), 5mm spacer (green), and M42 adapter (black);
Diagram is not to scale!
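
If you like to check this sort of arithmetic in code, here's a trivial Python sketch of the stack. The component names and the ~2 mm thread-gap value are simply the numbers quoted above:

# Hypothetical backfocus check for my camera + EFW + M42 adapter stack.
# All values in mm, taken from the discussion above.
REQUIRED = 45.0   # required backfocus with filters

stack = {
    "ASI2600 (minus tilt ring)":  12.5,
    "ZWO EFW":                    20.0,
    "5 mm spacer ring":            5.0,
    "adapter + ~2 mm thread gap":  7.5,   # adapter face sits ~2 mm proud
}

total = sum(stack.values())
print(f"Total backfocus: {total:.1f} mm (target {REQUIRED:.1f} mm)")
# -> Total backfocus: 45.0 mm (target 45.0 mm)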

The only thing I didn't get (but should have) in the initial round of orders was a short (150 mm) Vixen-style dovetail, but that's on order and will arrive about the time this is posted. The short length allows the camera/EFW to have full rotation and lets me do flats by resting the light atop the lens shroud.

Imaging at a focal length of 135 mm

Undersampling

Imaging with the Samyang 135 mm f/2 lens is going to be different from my usual imaging, mainly because of its short focal length. The short focal length causes what's called undersampling, in which the pixel scale is coarser than the seeing scale. When this is the case, a star's light will illuminate a pixel, but probably not the pixels around it, so the star is imaged as a pixel-sized square. (When the pixel scale is much smaller than the seeing scale, a star illuminates many pixels, which makes for nice-looking stars without resolving any additional detail.)

How do we know undersampling will occur before even taking an image? All we need to do is compare our imaging setup's pixel scale to the seeing expected for an imaging session. Suppose we take average seeing to be 2.0 arcseconds (").

The formula for a setup's pixel scale is based on the camera's pixel size and the focal length of the imaging telescope or lens:

Pixel scale (in "/px) = 206 x camera pixel size (in microns) / focal length of lens (in mm)

If you look closely at my recent IFN image, you can see it's on the edge of being undersampled: many of the smaller stars look blocky. According to the above formula, my setup for that image had a pixel scale of

Pixel scale = 206 x 3.76 microns / 387 mm = 2.0"/px

This is right at the assumed seeing value, consistent with the image being mildly undersampled.

Now let's repeat the calculation for the Samyang. We have

Pixel scale = 206 x 3.76 microns / 135 mm = 5.74"/px

This is much larger than 2"/px, so it's safe to assume stars will be undersampled, probably badly.
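
Here's the same comparison as a small Python sketch, assuming the 2.0" seeing from above. Note that the 387 mm setup lands right at the limit, which matches those blocky smaller stars:

# Undersampling check: compare each setup's pixel scale to the seeing.
# The factor 206 converts (microns / mm) to arcseconds.

def pixel_scale(pixel_um, focal_mm):
    """Pixel scale in arcsec/px."""
    return 206.0 * pixel_um / focal_mm

SEEING = 2.0  # assumed average seeing, arcsec

for name, focal_mm in [("387 mm setup", 387.0), ("Samyang 135 mm", 135.0)]:
    scale = pixel_scale(3.76, focal_mm)
    verdict = "undersampled" if scale > SEEING else "well sampled"
    print(f'{name}: {scale:.2f}"/px -> {verdict}')
# -> 387 mm setup: 2.00"/px (right at the limit)
# -> Samyang 135 mm: 5.74"/px (badly undersampled)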

Drizzling

The way to compensate for undersampling is to drizzle during processing. Drizzling can make those blocky stars rounder and fuzzier, at the cost of extra processing time and, worse, an amplification of noise. The noise can be reduced by acquiring a large number of light frames and by using a utility like NoiseXTerminator. Drizzling thus raises the bar on how many light frames to collect and may require frequent dithering.

If you read the forums, there seem to be two common answers for how many frames you need -- at least 100 or at least 40. The former comes from those who want the very best images, while the latter is for people like me who are happy with satisfactory results. I'll probably go with 40 for the color frames, but closer to 100 for luminance. There's also disagreement about how often to dither -- once every few frames or with every frame. I'll probably dither after each luminance frame and after every third color frame, if I can get the time a dither takes down to something like 20 seconds. That might be unrealistic; only testing will tell.
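
To get a feel for what's at stake, here's a back-of-the-envelope Python sketch of the time overhead for the two dither cadences, assuming my 90 s subs and the hoped-for 20 s per dither:

# Rough dithering-overhead estimate (my assumptions: 90 s subs, 20 s/dither).
SUB_S    = 90.0   # exposure length, s
DITHER_S = 20.0   # hoped-for settle time per dither, s

for every_n in (1, 3):   # dither every frame vs. every third frame
    overhead = DITHER_S / (every_n * SUB_S)
    print(f"Dither every {every_n} frame(s): {overhead:.1%} time overhead")
# -> every frame ~22%, every third frame ~7%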

Dithering

The generally accepted dither distance on the imaging camera is 10 px or so. NINA lets you set this by specifying how many pixels to move on the guide camera. Determining the value to use requires that pixel scale formula again, applied once to the imaging system and again to the guiding system:

Imaging Scale = 5.74"/px (from above)

Guiding Scale = 206 x 3.75 microns / 130 mm = 5.94 "/px

This means moving one pixel on the guider corresponds to moving 5.94", which moves the imaging camera 5.94 / 5.74 = 1.04 px. In other words, the motions of the imaging camera are essentially the same as those of the guider. If I want 10 px dithering on the images, I should use 10 for the NINA "PHD2 Dither Pixels" setting. The right number is still open to guesswork -- maybe 5 is fine? I'll have to try different values.
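
Here's the calculation as a quick Python sketch, using the same numbers as above; desired_px is just my name for the dither I want on the imaging camera:

def pixel_scale(pixel_um, focal_mm):
    return 206.0 * pixel_um / focal_mm

imaging = pixel_scale(3.76, 135.0)   # 5.74 "/px
guiding = pixel_scale(3.75, 130.0)   # 5.94 "/px

desired_px = 10                      # desired dither on the imaging camera
guider_px  = desired_px * imaging / guiding
print(f'NINA "PHD2 Dither Pixels" setting: {guider_px:.1f}')
# -> 9.7, i.e. guider and imager move very nearly 1:1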

Other NINA dither settings are related to the mount. "Settle Pixel Tolerance" is basically how close the guide star must be to its pre-dither position before PHD2 considers the mount to be settling. You can also set the minimum and maximum times for settling; the defaults are 10 and 40 s, respectively. My plan is to experiment with the minimum time and the pixel tolerance to see what works fastest with my mount.

Some people dither only in RA, but the general advice is to use random dithering.

Exposure time

This is really a guessing game with many trade-offs. For fun, I'll use the SharpCap sky background calculator for imaging with the Samyang at the Iowa Star Party. Sky brightness there is 21.60 magnitudes per square arcsecond, and the resulting sky electron rate is 6.49 e-/px/s. Read noise is a negligible 1.4 e- at gain 100, so getting the sky up to about 1/6 of full well would take 333 s (5.5 minutes). Pretty sure most of the stars in the field of view would be blown out by that. How about simply making sure the sky signal swamps the read noise, say by a factor of 100? That would only require an exposure of about 21 s. So now I have the exposure time bracketed: roughly 20 to 330 s!
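
Here's the bracketing calculation in Python. The 13,000 e- effective full well at gain 100 is my assumption, back-figured from the 333 s number; the other values are the ones quoted above:

SKY_RATE   = 6.49     # e-/px/s at SQM 21.60 (from SharpCap's calculator)
READ_NOISE = 1.4      # e- at gain 100
FULL_WELL  = 13_000   # e-, assumed effective full well at gain 100

t_upper = (FULL_WELL / 6) / SKY_RATE      # sky reaches ~1/6 of full well
t_lower = (100 * READ_NOISE) / SKY_RATE   # sky signal = 100x read noise
print(f"Exposure bracket: {t_lower:.0f} s to {t_upper:.0f} s")
# -> Exposure bracket: 22 s to 334 s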

It's worth noting that some people shoot light frames with only 20 s exposures.

I've been using 90 s exposures, and I really like the star color I got in the IFN image, so I think I'll stay with that. A test image around the next new moon would be really useful.

Next Post: Testing

Tuesday, September 2, 2025

Finished: Integrated Flux Nebula Image

Here's the image at quarter-scale:

1/4 Scale Image

Full-Scale image at AstroBin.

Where to even start with this? How about the data?

Originally there were 13.2 hours of data, but I came across a video in which someone explained how they use PixInsight's SubframeSelector process to cull bad frames. My approach to culling has always been to keep everything that isn't terribly bad, but for this project I decided to get tough. Using SFS led me to reject 3.6 hours of data! To be fair, about a third of that was due to my penchant for starting data collection before the end of twilight. There were very few visibly bad frames as viewed in Blink, so I'm going to call this approach "2 sigma" aggressive: it culls any frame whose FWHM, eccentricity, or median value is more than two standard deviations above the mean. Note that a rejected frame might be perfectly fine in and of itself; relative to its cohort, though, it is of significantly lesser quality. Frames with anomalously low star counts are also culled. An example is the set collected during the session when a smoke layer moved in and began obscuring stars late in the morning; the star count fell markedly and I removed those frames.
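
For the curious, here's roughly what that "2 sigma" cut looks like in Python, with made-up metric values. The real numbers come from SubframeSelector's measurements table; this is a sketch of the idea, not SFS itself:

import numpy as np

def cull_two_sigma(frames):
    """Flag frames whose FWHM, eccentricity, or median exceeds mean + 2*sigma.

    `frames` maps a metric name to an array of per-frame values. Returns a
    boolean mask of frames to reject. (Low-star-count culling would be a
    similar test in the other direction.)
    """
    reject = None
    for name, values in frames.items():
        values = np.asarray(values, dtype=float)
        bad = values > values.mean() + 2.0 * values.std()
        reject = bad if reject is None else (reject | bad)
    return reject

# Example: six frames, the last with bloated FWHM, gets culled.
mask = cull_two_sigma({"FWHM":         [2.1, 2.2, 2.0, 2.3, 2.1, 3.9],
                       "Eccentricity": [0.45, 0.46, 0.44, 0.45, 0.46, 0.47]})
print(mask)   # -> [False False False False False  True]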

Worth mentioning was the need to use WBPP's Grouping Keywords to make sure that light frames and their appropriate flats were processed together. This was the first time I used it, and it worked perfectly. 

Also, I no longer use dark flats, or "flat darks," if you prefer. Only dark, flat, and bias frames are used for calibration. (Flat and dark frames are now taken for granted at AstroBin, it seems; it no longer asks if you use them.)

Now about the calibration frames, specifically the flats. Most of the time my flat illumination was asymmetric for reasons I don't understand, and this gave the background modelization some problems. Big, bright Polaris didn't help, nor did the fact that most of the image was nebulosity. My first pass used GradientCorrection, which left the right side with a green cast. After playing with that for a while I moved on to DynamicBackgroundExtraction, which didn't clear it up either. After thinking about it a while, I reverted to AutomaticBackgroundExtractor with a 5th-order function, and that did the job.

Next, those darn satellites. The first processing pass removed most of them, but a few stuck around in weakened form. They should have been removed during light-frame integration, so I looked at what WBPP was using for rejection: Generalized Extreme Studentized Deviate (ESD). Some hunting around took me to a PixInsight forum thread noting that ESD with its default settings doesn't do a great job on satellites. So I told WBPP to use Linear Fit Clipping instead, and that worked better. Not perfect, just better. I still need to find out which ESD settings work best, since overall it's probably the scheme to use. It may be that satellites plus an image full of nebulosity will always be a problem.
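
For anyone wondering why Linear Fit Clipping handles satellites well: it fits a line to each pixel's values across the stack and rejects samples that sit far above the fit, and a satellite trail is exactly a bright one-frame outlier. Here's a conceptual numpy sketch (not WBPP's actual implementation):

import numpy as np

def linear_fit_clip(stack, sigma_high=2.5):
    """Sketch of linear-fit-clipping rejection over a (N, H, W) stack.

    Fits a least-squares line per pixel across the N frames and rejects
    samples more than sigma_high residual-sigmas above the fit.
    Returns a boolean rejection mask of the same shape.
    """
    n = stack.shape[0]
    x = np.arange(n, dtype=float)
    flat = stack.reshape(n, -1)                    # (N, H*W)
    A = np.vstack([x, np.ones(n)]).T               # design matrix, (N, 2)
    coef, *_ = np.linalg.lstsq(A, flat, rcond=None)
    resid = flat - A @ coef
    sigma = resid.std(axis=0, keepdims=True)
    return (resid > sigma_high * sigma).reshape(stack.shape)

# Example: 8 frames, one with a bright "trail" across a row of pixels.
stack = np.full((8, 4, 4), 100.0)
stack += np.random.default_rng(0).normal(0, 1, stack.shape)  # noise
stack[3, 2, :] += 50.0                                       # satellite streak
print(linear_fit_clip(stack)[3, 2])   # -> [ True  True  True  True]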

I also learned that my usual haphazard application of the XTerminator family has been wrong. It's a processing sin to run NoiseXT before BlurXT or before SPCC. For this image I applied NXT only after taking the image nonlinear.

Here's my workflow for this project with the ">" symbol meaning "creates":

WBPP  >  Cropped channel masters

ABE (color channels) > Backgrounded color channel masters

ChannelCombination > RGB master

ImageSolve > RGB master with astrometry 

SPCC > color-calibrated RGB master

ABE (luminance) > Backgrounded luminance master

BXT (luminance master and RGB master) > enhanced masters

STF and HT > nonlinear masters

NXT > de-noised masters

CurveTransformation (with gentle "S" curve) > enhanced masters

LRGBCombination > LRGB master

assorted tweaks (saturation, sharpness, contrast, etc.) > Finished image

Not shown is an additional DynamicCrop after the ABE of the luminance, because ABE was a little overaggressive at the left edge. Even with two crops, the final image lost only 4.2% of the short axis and 5.5% of the long axis, for a roughly 10% loss in area (0.958 x 0.945 ≈ 0.905). The reproducibility of the image framing was impressive. Thank you, NINA.

Another lesson learned was that the XTerminators could be sped up quite a bit. Normally the necessary files are installed by the XTs, but on my old computer the install did not engage the GPU. My graphics card is an NVIDIA GeForce GTX 1050 Ti, circa 2018. This post explains how to upgrade a computer to use the GPU for faster XT performance. In my case it sped the XTs up by a factor of 4. I may need to repeat the process every time an XT is upgraded.

So how did the processing work out? Mostly I was concerned that the area around Polaris had been darkened by background extraction and didn't represent reality. I searched AstroBin for an image I could use as a sort of "ground truth" for what I had done, and found just what I wanted in one by captured_mom8nts (which I'm guessing is not their real name). It appears to have been taken at a much shorter focal length, so it should have suffered much less Polaris bloom, keeping the area around the star reasonably pristine. A little crop/rotate/scale/stretch and it matched my image's scale and orientation:

Comparison: Mine (top), captured_mom8nts (bottom)

I think it's fairly obvious that the dark areas on either side of Polaris in my image match those in captured_mom8nts's image, even though mine is much deeper. I'm happy!

I'm also happy with the star color. Shooting only 90 s exposures may have been the key, since it kept the stars from saturating. Next time I'll be shooting at f/2, but with a smaller objective, so I may keep the exposure time as is.

-----------------------------------------

All the components of my new Samyang 135 mm f/2 imaging system have arrived or are on their way. Next time I'll have a picture of it all assembled and possibly already taken on its first test drive!