Notes on Using the SPC900NC for Planetary Imaging

A while back I was looking for information on which codec was best for data capture from the SPC900 cameras, and found several web pages suggesting that YUY2 (aka YUYV, I believe) was best on the grounds that it generates larger capture files and therefore contains more data.

I’ve been looking for some Linux-based code that would let me take a still image at regular intervals without needing any kind of graphical display, and have struggled to find anything, so I started to think about writing what I wanted myself. As part of that I wrote some code that uses the V4L2 interface to probe the available settings and parameters of a webcam.
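
For anyone curious, the probing really just boils down to the standard V4L2 enumeration ioctls. Here’s a minimal sketch of the idea (the device node is an assumption — use whatever your camera appears as — and not every driver answers the frame size/interval queries):

    /* Enumerate the pixel formats, frame sizes and frame rates a V4L2
     * device offers.  Build with: gcc -o v4l2probe v4l2probe.c */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* assumed device node */
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_fmtdesc fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
            printf("format: %s\n", (const char *)fmt.description);

            struct v4l2_frmsizeenum size;
            memset(&size, 0, sizeof(size));
            size.pixel_format = fmt.pixelformat;

            while (ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &size) == 0) {
                if (size.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                    printf("  %ux%u:", size.discrete.width, size.discrete.height);

                    struct v4l2_frmivalenum ival;
                    memset(&ival, 0, sizeof(ival));
                    ival.pixel_format = fmt.pixelformat;
                    ival.width  = size.discrete.width;
                    ival.height = size.discrete.height;

                    /* frame intervals are seconds/frame, so fps is the
                     * fraction turned upside down */
                    while (ioctl(fd, VIDIOC_ENUM_FRAMEINTERVALS, &ival) == 0) {
                        if (ival.type == V4L2_FRMIVAL_TYPE_DISCRETE)
                            printf(" %u/%u", ival.discrete.denominator,
                                   ival.discrete.numerator);
                        ival.index++;
                    }
                    printf(" fps\n");
                }
                size.index++;
            }
            fmt.index++;
        }
        close(fd);
        return 0;
    }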

I was a little surprised to find, when I tested it against the SPC900, that not all the codecs offered by SharpCap on Windows were available, and that not all the frame rates were available at all resolutions, so I began to poke about in the driver code for the Philips cameras to see why. The driver was written by someone claiming to have an NDA with Philips (and I have no reason to disbelieve this), so I assume he knew what he was up to when he wrote the code.

The first thing I discovered was that the codec chosen for the output of programs such as SharpCap has nothing to do with the wire format of the data as it comes from the camera. There appear to be two different wire formats, and one, a raw Bayer mode, is only used for snapshots. Every other output format appears to be generated within the driver from the data pulled from the camera. Unless the driver is throwing data away for no good reason (and the Linux driver, which produces a YUV420 format, certainly doesn’t appear to be), it looks like it makes no difference whatsoever which output codec is chosen.
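
Just to illustrate the point: a packed YUY2 frame can be produced from the driver’s YUV420 data purely by repeating each chroma sample across two rows, so the larger output file contains nothing the smaller one didn’t. This is only a sketch of the idea (it assumes even frame dimensions, which holds for all the SPC900 frame sizes), not anything lifted from the actual driver:

    /* Expand a planar YUV 4:2:0 frame into packed YUY2/YUYV.  No new
     * information is created: each chroma row is simply reused for two
     * luma rows, which is why the bigger file holds no extra data. */
    #include <stdint.h>
    #include <stddef.h>

    void yuv420_to_yuyv(const uint8_t *src, uint8_t *dst,
                        unsigned width, unsigned height)
    {
        const uint8_t *y = src;
        const uint8_t *u = y + (size_t)width * height;
        const uint8_t *v = u + (size_t)(width / 2) * (height / 2);

        for (unsigned row = 0; row < height; row++) {
            /* 4:2:0 stores one chroma row per two luma rows; row/2 is
             * where the "extra" 4:2:2 chroma comes from */
            const uint8_t *urow = u + (size_t)(row / 2) * (width / 2);
            const uint8_t *vrow = v + (size_t)(row / 2) * (width / 2);

            for (unsigned col = 0; col < width; col += 2) {
                *dst++ = y[(size_t)row * width + col];      /* Y0 */
                *dst++ = urow[col / 2];                      /* U  */
                *dst++ = y[(size_t)row * width + col + 1];   /* Y1 */
                *dst++ = vrow[col / 2];                      /* V  */
            }
        }
    }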

The camera appears to support up to four different compression modes, including “none”. The other three produce increasing reductions (I like that, “increasing reductions” 🙂) in the length of the output data for a given frame size. It looks like the driver negotiates with the camera and the USB subsystem for the best possible outcome (i.e. the least compression) for the amount of data to be transferred. I have no idea whether all the compression modes are lossy, but if we call them “low”, “medium” and “high” compression I think it’s safe to assume that the “medium” and “high” modes certainly are. What would be the point of having two different compression modes if neither lost any data? I could perhaps work out from the code whether the “low” mode drops data, but I’ve not tried to as yet. It occurs to me, however, that it would be odd to implement both uncompressed transfers and compressed transfers that produced an identical end result.

What initially surprised me was that for the 640×480 (VGA) modes, there is no attempt to negotiate uncompressed data. All VGA modes are compressed. This runs counter to what has been published on some web pages. My attempt to justify this runs as follows:

The image data transferred from the camera averages out at 24 bits per pixel. It’s not a simple relationship because of the way the data is encoded, but that’s the average figure. For a VGA image size that’s just over 7 Mbits per frame. At the lowest frame rate, 5fps, that’s close to 37 Mbits per second. But this is a USB 1.1 camera, and the data rate is limited to 12 Mbits/sec. Compression must therefore take place.
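
The arithmetic is easy enough to check. A trivial sketch, using the figures quoted above (24 bits per pixel on average, 12 Mbits/sec bus limit):

    /* Back-of-envelope check: raw VGA data rate at 24 bits/pixel against
     * the 12 Mbit/s USB 1.1 bus limit, for each frame rate of interest. */
    #include <stdio.h>

    int main(void)
    {
        const double bits_per_pixel = 24.0;   /* average figure quoted above */
        const double usb11_mbps = 12.0;       /* full-speed USB limit */
        const int width = 640, height = 480;
        const int rates[] = { 5, 10, 15, 20, 25, 30 };

        double frame_bits = width * height * bits_per_pixel;
        printf("one frame: %.2f Mbits\n", frame_bits / 1e6);

        for (unsigned i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
            double mbps = frame_bits * rates[i] / 1e6;
            printf("%2d fps: %6.1f Mbit/s -> needs compression of %.1fx\n",
                   rates[i], mbps, mbps / usb11_mbps);
        }
        return 0;
    }

Running that gives a compression factor of a little over three at 5fps and a little over twelve at 20fps, which is where the figure in the next paragraph comes from.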

It also explains why the Linux driver refuses to allow the user to set a frame rate of more than 15fps at full resolution. At 20fps the image data would have to be compressed to around one twelfth of its original size for transmission, and the camera would be throwing away so much data that the image would be useless.

So, all 640×480 captures use a compressed data format on the wire. Not all frame rates can negotiate the same compression modes though. They’ll all do the best they can depending on how much of the USB bus bandwidth they can claim, but even if we assume the driver gets all the bandwidth it asks for because your camera is the only thing on the USB bus, it still looks to me as though each successively faster frame rate gives a poorer-quality image.

For the record, the “best” uncompressed frame size and rate appears to be 352×288 (CIF) at 5fps, but I can’t imagine anyone using that for imaging.

I’ve no idea what the Windows driver does when you ask it for 30fps at 640×480. I assume it just keeps handing you the most recent frame until the next one arrives.

The final oddity (so far) is that the exposure settings offered by the Linux driver appear to range from 0 to 255 and default to 200. I’m not sure whether those numbers relate to a specific exposure time as the frame rate changes, however, and it’s totally unclear how they relate to the exposure settings in, for example, SharpCap.
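
One way to pin it down a little is to ask the driver what it thinks the control’s range is. The sketch below uses the standard V4L2 control ioctls; that exposure shows up as V4L2_CID_EXPOSURE (rather than some driver-private control) is an assumption on my part, hence querying before trusting it:

    /* Ask the driver for the range, default and current value of the
     * exposure control.  V4L2_CID_EXPOSURE is an assumption -- check
     * with VIDIOC_QUERYCTRL before relying on it. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* assumed device node */
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_queryctrl qc;
        memset(&qc, 0, sizeof(qc));
        qc.id = V4L2_CID_EXPOSURE;

        if (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0 &&
            !(qc.flags & V4L2_CTRL_FLAG_DISABLED)) {
            struct v4l2_control ctrl;
            memset(&ctrl, 0, sizeof(ctrl));
            ctrl.id = V4L2_CID_EXPOSURE;
            ioctl(fd, VIDIOC_G_CTRL, &ctrl);

            printf("%s: range %d..%d, default %d, current %d\n",
                   (const char *)qc.name, qc.minimum, qc.maximum,
                   qc.default_value, ctrl.value);
        } else {
            printf("no standard V4L2 exposure control reported by this driver\n");
        }
        close(fd);
        return 0;
    }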

In summary then, unless the Windows camera driver is throwing away data that it has no need to, it makes no difference which codec you choose for your output. The file sizes may vary, but for the same frame rate and resolution they’re all generated from the same raw data. I’m also inclined to believe that increasing the frame rate in order to improve the image quality in poor seeing is counter-productive, because at the same time you’re throwing away more data in the compression process.

On the rare occasions when I’ve used 5fps on Mars I’ve struggled to get the gain and exposure nicely balanced, so I think I shall be sticking to 10fps from now on, regardless of the seeing.

If you know more, or know different, please let me know.
