Notes on Using the SPC900NC for Planetary Imaging

A while back I was looking for information on which codec was best for data capture from the SPC900 cameras and found several web pages suggesting that YUY2 (aka YUYV, I believe) was best, on the grounds that it generated larger capture files and therefore contained more data.

I’ve been looking for some Linux-based code that would allow me to take a still image at regular intervals without needing any kind of graphical display, and have been struggling to find anything, so I started to think about writing what I wanted myself. As part of that I wrote some code that uses the V4L2 interface to probe the available settings and parameters of a webcam.
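For what it’s worth, that sort of probing needs very little code. Here’s a minimal sketch in Python using raw V4L2 ioctls, so no GUI or third-party bindings are needed; the struct layout and ioctl number come from the kernel’s videodev2.h, and /dev/video0 is just an assumed device path:

```python
import fcntl
import struct

V4L2_BUF_TYPE_VIDEO_CAPTURE = 1

def _IOWR(type_chr, nr, size):
    # Mirror of the kernel's _IOWR macro: direction | size | type | number.
    return (3 << 30) | (size << 16) | (ord(type_chr) << 8) | nr

# struct v4l2_fmtdesc from videodev2.h: index, type, flags (u32 each),
# description[32], pixelformat (u32), reserved[4] -- 64 bytes in total.
FMTDESC = struct.Struct("<3I32sI4I")
VIDIOC_ENUM_FMT = _IOWR("V", 2, FMTDESC.size)

def fourcc_to_str(code):
    # V4L2 pixel formats are little-endian FourCC codes, e.g. 'YUYV', 'I420'.
    return bytes((code >> s) & 0xFF for s in (0, 8, 16, 24)).decode()

def list_formats(device="/dev/video0"):
    # Enumerate capture formats by calling VIDIOC_ENUM_FMT with an
    # increasing index until the driver signals the end of the list.
    formats = []
    with open(device, "rb", buffering=0) as dev:
        index = 0
        while True:
            buf = bytearray(FMTDESC.pack(index, V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                         0, b"", 0, 0, 0, 0, 0))
            try:
                fcntl.ioctl(dev, VIDIOC_ENUM_FMT, buf)
            except OSError:      # EINVAL marks the end of the format list
                break
            _, _, _, desc, pixfmt, *_ = FMTDESC.unpack(bytes(buf))
            formats.append((fourcc_to_str(pixfmt), desc.rstrip(b"\0").decode()))
            index += 1
    return formats

if __name__ == "__main__":
    try:
        for code, desc in list_formats():
            print(f"{code}: {desc}")
    except OSError:
        print("no capture device available")
```

Frame sizes and frame intervals per format can be enumerated the same way with VIDIOC_ENUM_FRAMESIZES and VIDIOC_ENUM_FRAMEINTERVALS.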

I was a little surprised to find, when I tested it against the SPC900, that not all the codecs offered by SharpCap on Windows were available, and that not all the frame rates were available at all resolutions, so I began to poke about in the driver code for the Philips cameras to see why. The driver was written by someone claiming to have an NDA with Philips (and I have no reason to disbelieve this) so I assume he knew what he was up to when he was writing the code.

The first thing I discovered was that the codec chosen for the output of programs such as SharpCap has nothing to do with the wire format of the data as it comes from the camera. There appear to be two different wire formats, and one, a raw Bayer mode, is only used for snapshots. Any other output format appears to be generated within the driver from the data pulled from the camera. Unless the driver is throwing data away for no good reason (and the Linux driver, which produces a YUV420 format, certainly doesn’t appear to be) then it looks like it makes no difference whatsoever which output codec is chosen.

The camera appears to support up to four different compression modes including “none”. The other three produce increasing reductions (I like that, “increasing reductions” 🙂) in the length of the output data for a given frame size. It looks like the driver negotiates with the camera and USB subsystem for the best possible outcome (i.e. least compression) for the amount of data to be transferred. I have no idea whether all the compression modes are lossy, but if we call them “low”, “medium” and “high” compression I think it’s safe to assume that the “medium” and “high” modes certainly are. What would be the point of having two different compression modes if neither lost any data? I could perhaps work out from the code whether the “low” mode drops data, but I’ve not tried to as yet. It occurs to me, however, that it would be odd to implement both uncompressed transfers and compressed transfers that produced an identical end result.

What initially surprised me was that for the 640×480 (VGA) modes, there is no attempt to negotiate uncompressed data. All VGA modes are compressed. This runs counter to what has been published on some web pages. My attempt to justify this runs as follows:

The image data transferred from the camera averages out at 24 bits per pixel. It’s not a simple relationship because of the way the data is encoded, but that’s the average figure. For a VGA image size that’s just over 7Mbits per frame. At the lowest frame rate, 5fps, that’s almost 37Mbits per second. But this is a USB 1.1 camera. The data rate is limited to 12Mbits/sec. Compression must therefore take place.

It also explains why the Linux driver arbitrarily refuses to allow the user to set a frame rate of more than 15fps at full resolution. At 20fps the image data would have to be compressed to less than a twelfth of its original size for transmission and the camera would be throwing away so much data that the image would be useless.
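The bandwidth arithmetic in the two paragraphs above can be checked in a few lines (decimal megabits throughout, to match the USB spec’s 12Mbit/s figure):

```python
# Back-of-envelope check of the USB 1.1 bandwidth argument.
WIDTH, HEIGHT, BITS_PER_PIXEL = 640, 480, 24   # average bits per pixel on the wire
USB11_MBITS = 12.0                             # USB 1.1 full-speed limit, Mbit/s

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL   # 7,372,800 bits (~7.4 Mbit)
mbits_per_frame = bits_per_frame / 1_000_000

def required_compression(fps):
    # Factor by which each frame must shrink to fit on the bus.
    return mbits_per_frame * fps / USB11_MBITS

print(f"5 fps:  {mbits_per_frame * 5:.1f} Mbit/s, "
      f"needs {required_compression(5):.1f}:1 compression")
print(f"20 fps: {mbits_per_frame * 20:.1f} Mbit/s, "
      f"needs {required_compression(20):.1f}:1 compression")
```

Even at 5fps the raw data is three times what the bus can carry, and in practice isochronous transfers get rather less than the full 12Mbit/s, which only strengthens the argument.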

So, all 640×480 captures use a compressed data format on the wire. Not all frame rates can negotiate the same compression modes though. They’ll all do the best they can depending on how much of the USB bus bandwidth they can claim, but if we assume the driver gets all the bandwidth it asks for initially, because your camera is the only thing on the USB bus, then it still looks to me as though each successively faster frame rate has a poorer quality image.

For the record, the “best” uncompressed frame size and rate appears to be 352×288 at 5fps, but I can’t imagine anyone using that for imaging.

I’ve no idea what the Windows driver does when you ask it for 30fps at 640×480. I assume it just offers you the same frame at 30fps until it receives the next one.

The final oddity (so far) is that the exposure settings offered by the Linux driver appear to range from 0 to 255 and default to 200. I’m not sure how those numbers relate to a specific exposure time as the frame rate changes, however, and it’s totally unclear how they relate to the exposure settings in, for example, SharpCap.

In summary then, unless the Windows camera driver is needlessly throwing away data that it doesn’t need to, it makes no difference which codec you choose for your output. The file sizes may vary, but for the same frame rate and resolution they’re generated from the same raw data. I’m also inclined to believe that increasing the frame rate in order to improve the image quality in poor seeing is counter-productive because at the same time you’re throwing away more data in the compression process.

On the rare occasions when I’ve used 5fps on Mars I’ve struggled to get the gain and exposure nicely balanced, so I think I shall be sticking to 10fps from now on, regardless of the seeing.

If you know more, or know different, please let me know.

Posted in Astro Equipment, Astroimaging, Astronomy

Observation Report 23 March 2012

I’ve not written many observation reports recently, mainly because I’ve spent my time concentrating on planetary imaging. I’m still doing that, now starting to turn my attention towards Saturn as well as Mars, but there’s time during the imaging runs to do a bit of visual work as well.

My prime target last night was to find Comet Garradd. I’ve never seen a comet before and hadn’t really considered Garradd a likely target until someone mentioned that it was visible using binoculars. So, during one imaging run on Saturn I broke out the 15x70s and hoped the fairly average seeing was good enough. In fact, the comet was very easy to find. Stellarium showed it to be amongst a cluster of stars not far to the west of Dubhe. The cluster was simple to recognise and just to the north was a fuzzy ball that resolved into a larger fuzzy ball given a bit of time. In the 200P it might have been more interesting, but I don’t care. It’s my first comet 🙂

Hercules was creeping around to the east shortly before I decided to call it a night so I also took the opportunity to have a look at M13 for the first time this year. I didn’t have an ideal selection of eyepieces with me so I couldn’t make out much detail, but it never fails to please and I’ll certainly return to it over the next few days (for a change, we’re forecast a string of sunny days and clear nights).

Posted in Astronomy

Veg Plot Ready for Planting

Last weekend it was finally dry enough to get the rotavator onto the veggie plot to get it sorted for this year. I say rotavator. I cheated a bit this year. Sadly work too often gets in the way of preparing for the vegetables, and the plot is about two hundred square metres, so I’ve adopted a rather more speedy approach:

This enabled me to get the plot turned over far more easily than in the past (and I even did it twice, just to really break the soil up nicely). I had to leave one corner that has the winter onions and broad beans that I planted last year, but I’m looking forward to starting planting now.

Posted in Smallholding, Veg plot

Xbox Live All-Sky Camera

I picked up an Xbox Live camera very cheaply in the hope that it might be comparable to the SPC900 for planetary imaging. Very quickly however that turned out not to be the case. When testing the camera I discovered that the sensor is far too noisy, even after scraping the LEDs off the PCB.

What I did discover though is that the camera can be manually configured to have a maximum exposure time of about ten seconds, which makes it possible to pick up stars. The camera is therefore a candidate for making an all-sky camera. I had a quick try-out this evening, pointing the camera at Gemini and Leo, and the stars do show up, though they’re out of focus. Focus needs to be set during the day, I think, so that’s the next step. Here’s a still from a short video sequence I captured from the camera tonight:

The brightest point is Mars, and above and below to its right at least five of the stars of the Leo asterism are visible, as well as a number of other stars in that constellation. They’re fuzzy grey blobs, but they are definitely there. I think it shows promise.

Posted in Astroimaging, Astronomy

New Images of Mars

Last night I captured what I think are possibly my best images of Mars ever (maybe even of any of my planetary images), and it may well be the best I can do with the kit I’m using at the moment. It could be time for a break to do some visual observing, or perhaps try imaging the moon or sun. Anyhow, I’m very pleased with them, and they’re linked from the Solar System Images menu item above.

Posted in Astroimaging, Astronomy | 2 Comments

Capture codecs for Solar System Imaging

Until the last few days I’ve not been aware that the various codecs offered for capture by my SPC900 webcam (and others) are not equal and some retain more data than others. Once aware of the issue I did a bit of hunting around and found that the YUY2 codec is supposed to be the best for capture. I was using I420, so from now on I shall switch. I’ve already captured some Mars images this weekend. They’re linked from the Solar System Images menu item above.

Posted in Astroimaging, Astronomy

Measuring Naked Eye Limiting Magnitude

I recently came across this posting on measuring Naked Eye Limiting Magnitude and decided to give it a try for my home. We live very close to Exmoor National Park which is now designated a Dark Sky site, so I was hopeful of a good result.

Last night the seeing was better than average. Not stunning — the Milky Way wasn’t visible, for example, which it is when the seeing is excellent. Once Kochab was higher than 60° above the horizon I gave my eyes a chance to fully dark-adapt and then had a good look at Ursa Minor. The main asterism stars were clear, as were 4 UMi and 5 UMi. Towards Polaris, star 10 was definitely visible, and I thought I could see star 12 with averted vision, but I couldn’t see star 11. It’s possible that the collection of stars around 12 was visible as a whole, but 11 on its own, between ζ UMi and η UMi, wasn’t. So I’ll settle for star 10 being the limit for yesterday evening, giving a NELM figure of 5.55.

I’ve always estimated the NELM for this location to be between 5.5 and 6.0, so I’m happy with that result. It will be interesting to repeat the experiment when the seeing is particularly good.

Posted in Astronomy | 2 Comments

Why Can’t We See The Flag On The Moon?

I answered this question for someone the other day: his work colleagues wanted to know whether he could see the flag left behind on the moon by the first lunar landing using his new 300mm telescope. I thought I’d write the answer up properly.

The answer is no, you can’t. Not even close. And here’s why. The explanation involves a small amount of maths, but it’s really not that scary.

The scientist John Strutt, Baron Rayleigh, showed in the late 1800s that the angular resolution of a telescope can be calculated as:

R = λ / d

where R is the resolution in radians, λ is the wavelength of light in metres and d is the diameter of the telescope, also in metres. (Strictly, the Rayleigh criterion for a circular aperture includes a factor of 1.22, giving R = 1.22λ/d, but dropping the 1.22 doesn’t change the conclusions here.) Simple enough so far. Now it has to temporarily get a bit more complicated, but we’ll sweep some of that under the carpet in a moment. Ignore the fact that we’re measuring angles in radians too, as we’ll treat that similarly when the time arises.

Taking the flag example, imagine the light from opposite corners of the flag which, for the sake of argument we’ll assume are a metre apart, travels all the way from the moon down to the earth and into your telescope. To be able to resolve the flag from the rest of the lunar landscape the angle separating those two beams of light must be at least the angular resolution of the telescope. If we calculate the angle between the beams of light, which we can do as we know the distance to the moon and we’ve set the size of the flag, then from Rayleigh’s equation we can work out what size telescope we’d need.

A purist might suggest at this point that the distance to the moon should be taken as the line from your eye, behind the telescope, to the middle of the flag. To calculate the angle we’re interested in we’d then construct a notional right-angled triangle with the right angle at the centre of the flag, one remaining corner at a corner of the flag and the third at your eye, work out the angle at the eye end, and double it to account for the full angle between the corners of the flag. In practice, the angle is going to be so small that it really won’t matter. In terms of the maths,

tan ( R/2 ) = w / ( 2 x D )

where w is the width of the flag and D is the distance to the moon, but where D is far bigger than w we can closely approximate this as

tan R = w / D

We can also make another approximation: when we’re dealing with very small angles (again, when D is very much larger than w), tan R ≈ R, so we get:

R = w / D

and substituting R from the Rayleigh equation we get

λ / d = w / D

and now all the awkward maths stuff has disappeared and we’re left just with multiplication and division.
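For anyone suspicious of those shortcuts, here’s a quick numerical check (a Python sketch, using the 1m flag width and the lunar distance from later in this post) that the “purist” construction and the straight w/D approximation agree to the limit of double precision:

```python
import math

w = 1.0        # flag width in metres
D = 3.564e8    # distance to the moon in metres

R_exact = 2 * math.atan(w / (2 * D))   # the "purist" right-angled-triangle version
R_approx = w / D                       # the small-angle shortcut

# Both come out at about 2.8e-9 radians; the error is of order (w/D)**3.
print(R_exact, R_approx)
```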

We’re going to need to come up with a value for the wavelength of light at some point. Humans can see light of around 400nm to 750nm, so let’s take an average value of 575nm, or 5.75 x 10⁻⁷m. Rearranging the above equation to calculate the size of telescope we’d need to see the flag we get:

d = λD / w

We’ve said w is 1m and D, the distance to the moon, is 356,400km or 3.564 x 10⁸m at its closest. So that gives us:

d = 5.75 x 10⁻⁷ x 3.564 x 10⁸ / 1

which is 204.93m. So, to be able to see the flag (assuming it was 1m wide) from the first moon landing from Earth you’d need a telescope more than 200m in diameter.

If we set ourselves an easier target, say, to see the bottom of the lander module which was about 9.5m diameter, we’d have:

d = 5.75 x 10⁻⁷ x 3.564 x 10⁸ / 9.5

which is just over 21.5m, so even to see the lander module you’d still need a 21m+ telescope.
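The two aperture figures above drop straight out of d = λD / w; a couple of lines of Python make it easy to replay the calculation for any target size (the constants are the ones used in the text):

```python
# How big a telescope you'd need to resolve a feature w metres across on
# the moon, from d = lambda * D / w.
LAMBDA = 5.75e-7    # mid-visible wavelength, metres
D_MOON = 3.564e8    # earth-moon distance at its closest, metres

def diameter_needed(w):
    # Aperture in metres needed to resolve a feature w metres wide.
    return LAMBDA * D_MOON / w

print(f"flag (1 m wide): {diameter_needed(1.0):.1f} m aperture")
print(f"lander (9.5 m):  {diameter_needed(9.5):.1f} m aperture")
```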

Let’s turn things around. Given a 300mm (0.3m) telescope, what is the smallest thing that you could resolve on the moon? This is given by:

w = λD / d

or

w = 5.75 x 10⁻⁷ x 3.564 x 10⁸ / 0.3

which works out as 683.1 metres. If you have a 100mm refractor then it’s closer to 2km.

So what about the Hubble Space Telescope (HST)? That can see some amazing stuff, can’t it? Well, yes it can. But not stuff “we” left on the moon. The HST isn’t that big. It had to fit in a Shuttle payload bay. In fact the primary optical element is only 2.4m in diameter. The HST orbits at a height of about 560km, but obviously that sometimes takes it closer to the moon and sometimes further away. At its closest point it’s going to be 3.5584 x 10⁸m from the moon, so the smallest thing it can resolve is:

w = 5.75 x 10⁻⁷ x 3.5584 x 10⁸ / 2.4

which is about 85.25m, so still not even close to resolving either the flag or the lander module.
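The reverse calculation goes into Python just as easily, covering the 300mm scope, the 100mm refractor and the HST in one go (constants as in the text):

```python
# Smallest lunar feature resolvable by a given aperture: w = lambda * D / d.
LAMBDA = 5.75e-7          # mid-visible wavelength, metres
D_MOON = 3.564e8          # earth-moon distance, metres
D_MOON_HST = 3.5584e8     # HST-moon distance at its closest, metres

def resolvable_width(d, distance=D_MOON):
    # Smallest feature (metres) a telescope of aperture d metres can resolve.
    return LAMBDA * distance / d

print(f"300 mm scope: {resolvable_width(0.3):.0f} m")
print(f"100 mm scope: {resolvable_width(0.1):.0f} m")
print(f"HST (2.4 m):  {resolvable_width(2.4, D_MOON_HST):.1f} m")
```

Since λD works out at about 205, resolvable_width(d) is just 204.93 / d, which is where the rough w = 200 / d rule further down comes from.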

So, the only way we can see things left behind by the moon missions is through photographs taken by the lunar orbiter missions, which are far closer. In those it’s possible to see which way the wheels on the abandoned lunar rover are turned and even people’s footprints.

One other thing that has come out of the maths for this problem is that, as a rough first approximation, the smallest thing that can be resolved on the moon (with w and d both in metres) is given by:

w = 200 / d

Unfortunately that isn’t the end of the story though, as atmospheric conditions will make life even harder by causing distortion of the view. Life is never easy.

Posted in Astro Equipment, Astronomy

Solar System Images

I’ve started collecting together some of my better solar system images so they have a page or two each rather than being interspersed with other posts. See the menu bar for links.

Posted in Uncategorized

Registax 6 Tutorial

I use Registax for stacking video frames for planetary imaging, but the learning curve is steep and many online tutorials are out of date because the UI seems to change quite significantly between releases. I was pleased therefore when someone provided me with a link to Paul Maxson’s Registax 6 Tutorial in which he explains his methodology for processing images. Not all of it works perfectly for me, but it gets pretty close most of the time.

In case his page disappears at some point I’ll summarise here with my own experiences soon.

Posted in Astroimaging, Astronomy