Thus far with my planetary imaging I’ve used capture times and focal ratios just “by the book” without actually understanding how the answers were derived, so I decided it was about time I worked things out for myself. I’ve summarised what I’ve worked out in this article. I should point out that I have no real knowledge of optics other than what was required for O-level Physics in the 1980s, so I’ll grant that there may be errors. Feel free to point them out.
First let’s start with Rayleigh’s formula for angular resolution. Simply put, this tells you how “wide” something needs to appear in the sky for you to be able to resolve it with your telescope. The formula is:
R = λ / d
where R is the angular resolution in radians, λ is the wavelength you’re interested in measured in metres and d is the diameter of the primary lens or mirror (also in metres). Strictly speaking, Rayleigh’s criterion for a circular aperture includes a factor of 1.22 (R = 1.22λ / d); I’m dropping it here for simplicity, which only makes the figures derived below a little more demanding than they strictly need to be. If you’re maths-phobic don’t worry about the fact I’m using radians rather than degrees or arc-seconds here. It’ll all come out in the wash later.
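To make that concrete, here’s a quick Python sketch of the calculation. The 550nm wavelength and 200mm aperture are just illustrative values of my own, not anything special:

```python
# Angular resolution from the simplified Rayleigh formula, R = lambda / d.
# The 550nm wavelength and 200mm aperture below are illustrative values,
# not figures from the text.

ARCSEC_PER_RADIAN = 206265

def angular_resolution(wavelength_m, aperture_m):
    """Smallest resolvable angle in radians for the given wavelength and aperture."""
    return wavelength_m / aperture_m

r = angular_resolution(550e-9, 0.2)   # green light, 200mm aperture
print(r)                              # about 2.75e-6 radians
print(r * ARCSEC_PER_RADIAN)          # roughly 0.57 arcseconds
```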
The next thing we need to know about is “plate scale”. This isn’t something you get when your dishwasher isn’t functioning correctly, but rather it tells you the relationship between size on the image plane and the field of view. It’s fairly trivial to derive from first principles, just a little basic trigonometry, but for brevity (not my strong suit at the best of times 🙂) I’ll skip that here. It’s given by:
S = 1 / f
where S is angular size in radians per metre of image plane and f is the focal length of the telescope, also in metres. We can use this to calculate how much of the field of view is represented by a single pixel on the camera sensor. For example, if we have a telescope with a focal length of 1.2m and an SPC900 camera, which has a pixel size of 5.6μm, or 5.6×10⁻⁶m, we find that each pixel represents 5.6×10⁻⁶ / 1.2 radians of the field of view, or 4.67×10⁻⁶ radians. This is a thoroughly awkward number to work with, so it’s handy that I know there are 206265 arcseconds in a radian; multiplying by that gives us a figure of 0.96 arcseconds per pixel.
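If you’d rather let a computer do the arithmetic, the same plate scale calculation looks like this in Python (a little sketch of my own, nothing clever):

```python
# Arcseconds of sky per sensor pixel, from the plate scale S = 1/f.
# Example figures are the ones used above: 1.2m focal length and the
# SPC900's 5.6 micron pixels.

ARCSEC_PER_RADIAN = 206265

def arcsec_per_pixel(pixel_size_m, focal_length_m):
    """Field of view covered by a single pixel, in arcseconds."""
    return (pixel_size_m / focal_length_m) * ARCSEC_PER_RADIAN

print(arcsec_per_pixel(5.6e-6, 1.2))  # about 0.96 arcseconds per pixel
```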
When we’re out in the white heat of a planetary imaging session, it would be useful to know that we’re capturing as much detail as we possibly can. Ideally we want to make sure that at least one pixel on the camera sensor is dedicated to the smallest thing our telescope can resolve. If we represent the pixel size in metres by w, we can write this in terms of R and S above:
R = Sw
and expanding the terms:
λ / d = w / f
The appearance of d, the diameter of the primary, and f, the focal length here makes the rearrangement to give the focal ratio just too good an opportunity to miss:
f / d = w / λ
In other words, if we know what our pixel size is and we know what wavelength we’re imaging in, we can calculate the focal ratio we should be using. Picking blue light with a wavelength of 400nm, or 4×10⁻⁷m, and staying with our SPC900 example with a sensor element size of 5.6×10⁻⁶m, that gives us:
focal ratio = 5.6×10⁻⁶ / 4×10⁻⁷
and doing the calculation that somewhat surprisingly comes out as:
focal ratio = 14
So, to get the best data capture you can all you need to do is bump the focal length up to 14? Is that it? Does Damian Peach need his bumps felt for imaging at a focal ratio of forty-two brazilian? (I almost typed Damian Hirst there. Imagine, if Damian Hirst did astrophotography. You could guarantee that Taurus would never be the same again…)
Well, perhaps not. One reason involves more maths, and whilst I understand the results I probably can’t explain them very well, so I’ll skip that bit. It’s called Nyquist’s Sampling Theorem and, simply put, it states that if you’re sampling an analogue signal then your sample rate should be at least twice that of the highest signal frequency to accurately recreate the signal. If we allow that the same rules should apply to digital imaging (which is sampling an analogue signal, after all) then we really want two pixels per smallest resolvable detail. That effectively doubles the pixel size in the above calculation, giving us a desired focal ratio of f/28.
Another reason is that although we’ve used 5.6μm as the pixel size in this instance, that’s actually their width and height. The pixels are square, so they’re longer on the diagonal by a factor of √2, or about 1.4 (near enough for our purposes). So, perhaps we should increase the pixel size by a factor of 1.4 to allow for this, giving a final desired focal ratio of f/39.2. Since I like round numbers, call it f/40.
If I’d picked a wavelength in the green region, say 560nm, the numbers would have come out very simply as f/10, f/20 and f/28, but only because our pixel size is a nice multiple of that wavelength.
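Pulling the whole derivation together, a small Python function (again, my own sketch) gives the recommended focal ratio with the Nyquist and pixel-diagonal factors applied:

```python
# Recommended focal ratio f/d = w / lambda, scaled by the Nyquist factor of 2
# and the sqrt(2) pixel-diagonal factor discussed above.

import math

def recommended_focal_ratio(pixel_size_m, wavelength_m,
                            nyquist=2.0, diagonal=math.sqrt(2)):
    """Focal ratio at which the smallest resolvable detail is fully sampled."""
    return (pixel_size_m / wavelength_m) * nyquist * diagonal

# SPC900 pixels (5.6 micron) in blue light (400nm):
print(recommended_focal_ratio(5.6e-6, 400e-9))  # about 39.6, call it f/40
```

Setting nyquist and diagonal to 1 gives back the bare f/14 figure from earlier.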
What’s interesting about this is that it’s completely independent of your telescope, a result I certainly didn’t expect. Of course there are other factors that will come into play such as the amount of light your scope can get onto the sensor in the first place. With small diameter scopes that’s possibly more likely to be the limiting factor.
Is there any point in going beyond f/40? I don’t know, but it seems unlikely. Beyond that point it seems to me that all you’re likely to be doing is increasing the size of the image without adding more detail, and it’s entirely possible that’s easier to do in post-processing, especially given the increased difficulty of keeping the image on the sensor as the focal ratio goes up.
And so to capture times.
For this we’re going to go back to the plate scale calculation, and I’m going to stick with our 1.2m focal length SPC900 example where each pixel corresponds to 0.96 arcseconds of the field of view. During the capture run we’d ideally like all the features of the planet to stay in the same place on the camera sensor. Obviously good tracking is necessary for that, but even that won’t help with distortion of the image due to the rotation of the target planet. As the planet rotates its features will move across the image plane, and if we don’t stop imaging soon enough they’ll blur into the data already recorded at that position on the sensor by the features “ahead” of them.
We can’t stop it completely, so we just need to make an arbitrary decision about how much movement across the image plane is acceptable. I’m going to suggest half a pixel, corresponding to 0.48 arcseconds of the field of view. To make the maths simpler, I’ll call it 0.5 arcseconds. It’s not going to make a huge amount of difference.
The part of the planet moving fastest across the image plane is on the equator, right in the middle of the visible disc. So how long will it take for a point there to move 0.5 arcseconds? It depends on the period of rotation of the planet and how big it is. For our purposes, the apparent diameter in arcseconds is acceptable as a measurement of size. Simple geometry tells us that any point on the equator must travel πd arcseconds, the circumference of the planet, in one rotation (d being the planet’s apparent diameter and π being the mathematical constant pi, though it doesn’t look like it in my font), so to find out the time, t, it takes to cover 0.5 arcseconds we need:
t = 0.5p / (πd)
p being the time for a complete rotation. t will be in the same units as p.
It’s a fairly simple equation, but the fly in the ointment is that as the distances between the Earth and the other planets change so does their angular size, in some cases quite dramatically. As they’re probably the most often imaged, I’ll calculate values for Mars, Jupiter and Saturn here when they’re at their largest apparent diameter.
For Mars, with an apparent diameter between 3.5 and 25 arcseconds and a rotation period of 1477 minutes,
t = 0.5 × 1477 / (25π) = 9.4 minutes = 9 minutes 24 seconds.
For Jupiter, apparent diameter between 29 and 50 arcseconds and a rotation period of 595 minutes,
t = 0.5 × 595 / (50π) = 1.9 minutes ≈ 114 seconds.
For Saturn, apparent diameter between 15 and 20 arcseconds and a rotation period of 634 minutes,
t = 0.5 × 634 / (20π) = 5 minutes.
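The three calculations above can be reproduced with a few lines of Python (another sketch of mine):

```python
# Time for a point at the centre of the disc to drift a given number of
# arcseconds across the image plane: t = 0.5p / (pi * d) for half an arcsecond.

import math

def drift_time_minutes(period_min, diameter_arcsec, allowed_arcsec=0.5):
    """Time in minutes for an equatorial feature to drift allowed_arcsec."""
    return allowed_arcsec * period_min / (math.pi * diameter_arcsec)

for name, period, diameter in [("Mars", 1477, 25),
                               ("Jupiter", 595, 50),
                               ("Saturn", 634, 20)]:
    print(name, round(drift_time_minutes(period, diameter), 1), "minutes")
```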
Obviously if you know the angular diameter at the time you’re actually imaging then you can correct the figures accordingly.
If you have a significantly different plate scale or want to allow for a different amount of drift then the general formula would be:
t = 206265wpk / (πfd), where
w is the size of a sensor pixel in metres
p is the period of rotation of the planet
k is the fraction of a pixel allowed for rotational drift (0.5 in my example)
f is the focal length of the telescope in metres
d is the angular diameter of the planet in arcseconds
t will be in whatever units are used for p.
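Here’s that general formula as a Python function (my own sketch), checked against the Jupiter example with the SPC900 at 1.2m focal length:

```python
# General capture-time formula: t = 206265 w p k / (pi f d).

import math

ARCSEC_PER_RADIAN = 206265

def max_capture_time(pixel_size_m, period, pixel_fraction,
                     focal_length_m, diameter_arcsec):
    """Maximum capture time, in whatever units the rotation period is given in."""
    return (ARCSEC_PER_RADIAN * pixel_size_m * period * pixel_fraction
            / (math.pi * focal_length_m * diameter_arcsec))

# SPC900 (5.6 micron pixels) at 1.2m focal length, Jupiter at its largest
# (50 arcseconds across, 595-minute rotation), half a pixel of drift allowed:
t = max_capture_time(5.6e-6, 595 * 60, 0.5, 1.2, 50)  # period in seconds
print(t)  # about 109 seconds
```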
If you’re happy with my assumptions and you’re using an SPC900, but just want to allow for a different focal length then it’s perhaps easiest to start with the following figures and divide by the actual focal length in metres to get the maximum capture time:
Mars: 650 seconds
Jupiter: 130 seconds
Saturn: 350 seconds