Emily Lakdawalla • Jan 30, 2013
Enceladus: A problem of contrast
Four times a year, the Cassini mission releases a pile of science data to NASA's Planetary Data System. I'm a little late to the January release, and still haven't worked through all the neat images I found. In fact, the very first image I downloaded presented an interesting challenge to my image processing skill. Before I maunder on for too long, here's the result of my efforts. There is so much to see in this one image. We're looking at a crescent icy moon of Saturn. Along the terminator (the boundary between day and night) the angled light picks out some impact craters. Behind it, you see most of Saturn's ring system, from F through A, and part of B, foreshortened into a skinny set of bright and dark lines. You don't see all of the B ring, though, because Saturn's shadow is being cast onto the rings here, angling incredibly shallowly across the foreshortened rings.
Below the moon, you see a few fuzzy dots: geysers. This is Enceladus, one of the four worlds in the universe where we have observed active, internally driven geology, and that's some of the froth of its internal sea or ocean, being spewed into the vacuum near Saturn.
So, why was this one challenging? I downloaded the archival data for this photo and about 300 others from the Rings Node and ran it all through IMG2PNG to convert it to a form that is easier for me to handle: a 16-bit PNG. I'll explain why the 16-bit thing is important a bit later. First, though, let me show you what the image looked like when I first opened it in Photoshop.
It's very dark. So the very first thing that anyone would do is to fix that, increasing the brightness and contrast. Essentially, you multiply every pixel in the image by a constant. It makes the whole image brighter, but as long as you don't "max out" any of the pixel values, it's a reversible change, so it remains true to the original data. Also, like nearly every single Cassini image ever, it's not "right side up" according to the conventions we have defined. So I flip it 180°, to put north up. Again, a reversible change; so far, there's been no real manipulation of the data. It makes quite a pretty picture of a crescent moon poised below a foreshortened ring system.
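If you'd rather script those two steps than click through Photoshop, here's a rough Python sketch of the same idea (this is not how I actually made the image -- I used IMG2PNG and Photoshop -- and the filenames below are just placeholders):

```python
import numpy as np
from PIL import Image

# Load the 16-bit PNG produced by IMG2PNG (placeholder filename).
img = np.asarray(Image.open("enceladus_raw_16bit.png"), dtype=np.uint32)

# Linear stretch: multiply every pixel by a constant. As long as no pixel
# reaches the 16-bit maximum of 65535, nothing saturates and the change is
# reversible -- divide by the same constant to undo it.
gain = 8
stretched = img * gain
assert stretched.max() <= 65535, "gain too large: highlights would clip"

# Rotate 180 degrees to put north up -- also perfectly reversible.
north_up = np.rot90(stretched, 2).astype(np.uint16)

Image.fromarray(north_up).save("enceladus_stretched.png")
```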
But while I was figuring out how much to stretch the contrast, I stumbled upon something. Look what happens when you stretch the contrast by an insane amount:
There are three ugly things and two beautiful things about this picture. The ugly things: (1) I have wrecked a lot of the detail visible in the original image, saturating some of the sunlit surfaces so that they are completely white. This isn't a reversible change; if I darkened it again, all those pixels would still be the same value. I couldn't get the lost detail back. (2) I have amplified the blemishes, the hot pixels and cosmic ray hits that speckle the photo. (3) I have amplified the rhythmic linear horizontal banding that appears at a low level in all Cassini photos, an artifact having to do with some interference from spacecraft electronics. The beautiful things: (1) you can see the geysers. (2) You can see Enceladus' full globe just barely picked out against the background, Enceladus' night side appearing darker than whatever is behind it.
It's kind of amazing that either one of the two beautiful things is visible in the data. We can thank the 16-bit format. So now it's time for me to explain what that means. Why 16 bits? When you look at black-and-white images on the Internet, you are almost always looking at an 8-bit photo. That means that each pixel's brightness is encoded with an 8-digit binary number, so the value of the pixel can range from 0 (black) to 2^8 - 1, or 255 (white). That's more gray values than the human eye can hope to distinguish all at once, but it's not enough to represent the full possible range of brightness and darkness in our experience. Our eyes can see well both in bright daylight and in dimly lit buildings, where the light is a factor of 1000 or more weaker. But we can't see in both kinds of lighting at once. Our eyes adjust to bright or to dim light, but when we're outdoors in bright sunlight it's hard for us to see what's going on through a window into a dimly lit building.
Many cameras can handle a wider contrast range at once than our eyes can. But that means they need more than 8 bits to record the different brightnesses and darknesses in a scene. In order to record things that are 1000 times dimmer than other things, you need to digitize the view on a scale that has more than 1000 divisions. It's common for spacecraft cameras these days to measure the universe using 12 or more data bits, meaning they can handle more than 4000 different gray levels. The operating systems on the computers that we use back on Earth like to deal in groups of 8 bits at a time, so if you want to deal with 12-bit data, you instead use a 16-bit image format.
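If the powers of two feel abstract, here is the arithmetic spelled out (just counting gray levels; nothing here is specific to Cassini):

```python
# Number of distinct gray levels at each bit depth.
for bits in (8, 12, 16):
    levels = 2 ** bits
    print(f"{bits:2d} bits -> {levels:5d} gray levels (0 to {levels - 1})")

# 8 bits  ->   256 levels: too coarse to hold features ~1000 times dimmer
#              than the brightest parts of the scene
# 12 bits ->  4096 levels: what many spacecraft cameras actually digitize
# 16 bits -> 65536 levels: the convenient two-byte container those 12-bit
#              numbers get stored in back on the ground
```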
So the information is there; but how to represent it in a way that the more limited human eye can discern? A lot of the time it's possible to stretch the contrast in a nonlinear fashion so that you can bring out details at both ends of the brightness range. But that just wasn't working for me here; the geysers were just too dim. Any attempt to bring them out washed out the detail visible on the sunlit crescent and rings, and also made that horizontal banding too obvious. I was going to have to treat the moon and the plumes separately.
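For the curious, a typical nonlinear stretch looks something like the gamma curve below. I'm not claiming this is the exact adjustment I tried; it's just a sketch of how such a stretch lifts faint pixels far more than bright ones:

```python
import numpy as np

def gamma_stretch(img16, gamma=0.4):
    """Raise normalized pixel values to a power below 1. Faint pixels get
    lifted a lot, bright pixels only a little, squeezing a wide dynamic
    range into something a monitor (and an eye) can show at once."""
    norm = img16.astype(np.float64) / 65535.0
    return (norm ** gamma * 65535.0).astype(np.uint16)

# With gamma = 0.4, a pixel at 1% of full scale climbs to about 16% of full
# scale, while a pixel at 90% only climbs to about 96%.
```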
First of all, I needed to clean up the low-contrast data. I had to do something to reduce the effects of the horizontal banding and the cosmic ray hits. Here is a really neat trick to remove the horizontal banding. I'm using Photoshop but if you don't have that, you can use the same technique in GIMP. The banding is visible in the space behind Enceladus. If I can make a layer that has just the bands, but no moons or anything else, I can subtract the values of those pixels from the original photo, canceling out the banding.
First I have to find a region in the background that doesn't have any major blemishes. I've outlined such a region below.
I'll use this to produce a sort of a model of the banding across the whole image. I copy and paste this representative rectangle into a new document. Then I change the width of the document to 1 pixel, keeping the height constant. That averages across each row. Then I change the width again to match the original width of the photo, in this case 1020 pixels. Here's the result, a model for the banding across the whole image:
I paste this model onto my original image and set its blending mode to "difference," meaning that I am subtracting the pixel values of the modeled banding from the pixel values of the original image. Here's the result. I have massively improved on the banding across the whole image. But wait -- something's wrong. You can't see the night side of Enceladus against the background anymore. What gives?
The problem is that I assumed that I needed to cancel out the background completely. But the background was actually lighter in value than the night side of Enceladus. When I subtracted out the banding, I made the background black and the night side of Enceladus blacker than black (which Photoshop can't do, so it just leaves the pixels black).
To avoid subtracting out too much from the background, I need to darken my banding model a little bit. I use the eyedropper to figure out what the darkest pixel value in my model is, and I find it to be a pixel value of 8 (out of 255). I make a new layer, color it a very dark gray with a pixel value of 8, and then set its blending mode to difference. Now my model of the banding won't cancel out the background entirely. Here's the result of applying my new model to the photo:
The difference is really subtle, and if your monitor is dark you may not notice it at all. In fact, I decided not to try to bring out the edge of Enceladus' disk against the background, because it was just too hard to see.
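If you'd rather do the whole banding-removal trick with code instead of Photoshop or GIMP, here's a rough numpy sketch of the same steps, including darkening the model by its own darkest value. The rectangle coordinates are something you pick by eye, just as I did above, and I'm approximating the "difference" blend as a subtraction clipped at zero, which matches how Photoshop behaved on Enceladus' night side:

```python
import numpy as np

def remove_banding(img16, clean_box):
    """Model the horizontal banding from a clean patch of background and
    subtract it, leaving the real background level in place.

    clean_box = (top, bottom, left, right): a background region free of
    moon, rings, and blemishes, spanning the rows you want to clean.
    """
    top, bottom, left, right = clean_box
    sample = img16[top:bottom, left:right].astype(np.float64)

    # Averaging each row of the sample is the equivalent of resizing the
    # rectangle to a width of 1 pixel: one brightness value per row.
    profile = sample.mean(axis=1)

    # Stretch that single column back out to the full width of the image.
    band_model = np.tile(profile[:, None], (1, img16.shape[1]))

    # Darken the model by its own darkest value, so subtracting it leaves
    # the background at a low level instead of pushing it (and Enceladus'
    # night side) all the way to black.
    band_model -= band_model.min()

    # Subtract the model from the matching rows, clipping at zero --
    # Photoshop can't go "blacker than black" either.
    cleaned = img16.astype(np.float64)
    cleaned[top:bottom, :] -= band_model
    return np.clip(cleaned, 0, 65535).astype(np.uint16)
```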
There's still a tiny bit of banding visible in Enceladus' plumes, which I could probably deal with by fiddling around with my model for the banding, but I didn't want to take the time here. Instead, I copied the area of the image containing the plumes and overlaid it on the version of the photo in which you can see the details of Enceladus' sunlit crescent and the rings.
Now it's a matter of blending this into the background. This is a subjective process, and it's at this point that I'd say I've crossed the line from an image that shows Enceladus as Cassini saw it to an artistic product that is representational of reality but which has been artificially enhanced. Everything else up to this point -- brightening the image and removing the effects of banding -- is justifiable from a data processing point of view, but this step definitely isn't. The other artistic step I take here is to paint out the couple of blemishes caused by cosmic ray hits.
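If you wanted to script even this artistic step, it might look something like the sketch below: paste the harshly stretched plume region onto the gently stretched image, fading the patch out toward its edges so the seam isn't obvious. The function and its parameters are hypothetical, and the feather width is purely a matter of taste:

```python
import numpy as np

def overlay_plumes(base16, hard_stretch16, box, feather=40):
    """Composite the hard-stretched plume region onto the gentle version,
    blending it out over `feather` pixels at the edges of the box."""
    top, bottom, left, right = box
    h, w = bottom - top, right - left

    # Soft-edged mask: 1 in the middle of the box, ramping down to 0
    # over `feather` pixels toward each edge.
    ramp_y = np.minimum(np.arange(h), np.arange(h)[::-1])
    ramp_x = np.minimum(np.arange(w), np.arange(w)[::-1])
    mask = np.clip(np.minimum.outer(ramp_y, ramp_x) / feather, 0.0, 1.0)

    out = base16.astype(np.float64)
    patch = hard_stretch16[top:bottom, left:right].astype(np.float64)
    out[top:bottom, left:right] = (
        mask * patch + (1.0 - mask) * out[top:bottom, left:right]
    )
    return out.astype(np.uint16)
```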
When all is said and done, here is my final version. Given all the steps it took to get here and the subjective nature of the plume-brightening part, someone else would likely produce a version of this one that looked slightly different. Maybe you should try! Here's the original image data.
One fun little postscript: I checked JPL's solar system simulator while I was trying to figure out the relative positions of Enceladus, Saturn, and Cassini while this photo was being taken. And I discovered that what I had assumed was black space in the background is not black space at all; it's the night side of Saturn! See for yourself here.