
Sunday, 18 April 2010

Ultra-high resolution photography

The "jitter" method (earlier post) can also be used for ultra-high-resolution photography.

People want higher-resolution cameras, but the output resolution of a camera is usually limited by the number of pixels in its sensor. Some digital cameras have a "digital zoom" function, but this is a bit of a cheat: it simply invents extra pixels between the real pixels by smudging the adjacent colour values together. Conventional digital zoom doesn't actually give you any additional information or detail; it just resizes a section of the original image to fill the required size.

A second problem with cameras is camera shake. If you're holding the camera in your hand, then a tiny movement of the camera can result in the image being panned across the sensor while the CCD imaging chip is doing its thing, giving a blurred photograph. The smaller the pixel elements, and the greater the optical zoom, the worse this gets. We can try clamping the camera and taking a shorter-exposure image (so that the camera doesn't have as much time to move), but shorter exposures lead to more random "noise" per pixel, due to the reduced sampling time.



But with enough processing power, we can use jitter techniques to solve both problems:
In our earlier "audio" example, we deliberately added high-frequency noise to an audio signal to shift the sampling threshold up and down with respect to the signal, and we took multiple samples and overlaid them to achieve sub-sample resolution.
With digital photography we can use "positional" noise: we vary the alignment of the camera sensor to the background image, take multiple samples, and overlay them (aligned to subpixel accuracy) to generate images with higher resolution than the camera sensor. In some ways, this is a little like the Nipkow disc approach used in early television systems, which often used a swept array of fewer than a hundred sensor elements to provide a passable image ... in this case, we're not sweeping a linear strip of sensors at right angles, but an entire grid of pixel elements, and using their random(-ish) offsets to extract real intermediate detail.
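
If the sub-pixel offsets of the individual frames are already known, the recombination step itself is fairly straightforward. Here's a minimal numpy sketch of the shift-and-add idea (the function and parameter names are my own illustrative choices, not taken from any particular camera or software package): each low-resolution frame is dropped onto a finer grid at its measured offset, and the accumulated values are normalised at the end.

    import numpy as np

    def shift_and_add(frames, offsets, upscale=4):
        """Combine low-res frames with known sub-pixel offsets onto a finer grid.

        frames  : sequence of 2-D arrays, all the same shape (the individual shots)
        offsets : one (dy, dx) per frame -- where that frame's pixel grid sits
                  relative to the reference frame, in low-res pixel units
        upscale : number of high-res pixels per low-res pixel
        """
        h, w = frames[0].shape
        acc  = np.zeros((h * upscale, w * upscale))   # summed intensity on the fine grid
        hits = np.zeros_like(acc)                     # how many samples landed in each fine cell

        for frame, (dy, dx) in zip(frames, offsets):
            # Map each low-res pixel centre to its nearest cell on the fine grid.
            yi = np.clip(np.round((np.arange(h)[:, None] + dy) * upscale).astype(int),
                         0, h * upscale - 1)
            xi = np.clip(np.round((np.arange(w)[None, :] + dx) * upscale).astype(int),
                         0, w * upscale - 1)
            np.add.at(acc,  (yi, xi), frame)
            np.add.at(hits, (yi, xi), 1)

        hits[hits == 0] = 1          # cells that no sample reached simply stay at zero
        return acc / hits

The more frames you feed it (and the better spread their offsets), the more of the fine grid gets filled in with genuinely independent measurements rather than interpolated guesses.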

Instead of camera shake being a problem, it becomes Our Friend! The individual images will be noisier, but when you recombine a second's worth of images, the end result should have noise levels comparable to a single one-second exposure – and since you might not normally try to take a one-second exposure (because of camera stability issues), static scenes might sometimes end up with reduced noise as well as enhanced resolution.
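
As a rough sanity check on the noise claim, here's a toy simulation that assumes the images are limited by photon shot noise (the photon rate and frame counts are made up for illustration; a real sensor also adds some read noise per frame, so a stack of short exposures comes out slightly noisier than one long one):

    import numpy as np

    rng = np.random.default_rng(0)
    rate = 400.0   # mean photons per pixel per second (illustrative)

    long_exposure = rng.poisson(rate, size=100_000)                       # one 1 s exposure
    short_stack   = rng.poisson(rate * 0.02, size=(50, 100_000)).sum(0)   # fifty 20 ms exposures, summed

    print(long_exposure.std())   # photon shot noise, about sqrt(400) = 20
    print(short_stack.std())     # essentially the same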

So, if we have a programmable camera, in theory it's possible to design an "ultra-resolution" mode that fires off a series of short-exposure images while we hold the camera, and then makes us wait while its processor laboriously works out the best way to fit all the shots together ... or saves the individual shots to their own directory, to be assembled later by a piece of desktop software.
If we were able to design the camera from scratch, we'd probably also want to include a gadget to deliberately nudge the CCD sensor diagonally while the component shots were being taken. If the software's smart enough, the nudging doesn't have to be particularly accurate, it just has to give the sensor a decent spread of deliberate misalignments. A cheap little piezo device might be good enough.



The problem with this approach is getting hold of the software: In theory, you can try aligning images by hand, but in practice ... it doesn't really seem sensible.
People are already writing algorithms for this sort of stuff – it's what allows the Hubble Space Telescope to take those absurdly high-resolution images of distant galaxies, and presumably the military guys also use the technique to get extreme resolution enhancements from spy satellite hardware. For analysing and aligning photos with "free-form" offsets, the necessary techniques already seem to be included in the Autostitch panoramic software, which even includes the ability to distort images to make them fit together better – it wouldn't seem to take a lot to turn Autostitch into an ultra-resolution compositor.

Amateur astronomers are now enthusiastically using the technique, and sharing resources (try using "drizzle" as a Google search keyword).
Suppose that you want to take an ultra-high resolution photograph of the full Moon – you train your camera-equipped telescope on the Moon, lock it down, and set it to keep taking ten pictures per second for an hour while the Moon gradually arcs across the sky and its corresponding image crawls across your image sensor ... and then feed the resulting thirty-six-thousand-odd images into a sub-pixel alignment program, to chew over for a few weeks and pull out the underlying detail. As long as the matching algorithm knows that it's supposed to be lining up the part of the images that contains the big round yellow thing rather than the clouds or the treetops, there wouldn't seem to be any real limit to the achievable resolution. Okay, so you have different atmospheric distortions when the Moon is in different parts of the sky, and when the air temperature drifts, but with a sufficiently smart Autostitch-type warping, even that shouldn't be a problem. If you didn't have a "rewarping" feature, you'd probably just have to decide which part of the Moon you wanted the software to use as a master-key when lining up the images.
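
For what it's worth, one standard way of estimating those frame-to-frame shifts automatically is phase correlation. Here's a rough numpy sketch (my own illustration – not how Autostitch or the astronomers' drizzle packages actually do it), which assumes the frames differ only by a small translation:

    import numpy as np

    def estimate_shift(ref, img):
        """Estimate the (dy, dx) offset of img's pixel grid relative to ref, i.e. the
        correction to add to img's pixel coordinates to land on the matching position
        in ref. Uses phase correlation plus a 3x3 centroid for sub-pixel refinement."""
        h, w = ref.shape
        R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
        R /= np.abs(R) + 1e-12                     # keep only the phase difference
        corr = np.fft.fftshift(np.real(np.fft.ifft2(R)))

        py, px = np.unravel_index(np.argmax(corr), corr.shape)
        win = corr[py - 1:py + 2, px - 1:px + 2]   # assumes the peak isn't on the border
        win = win - win.min()
        oy, ox = np.mgrid[-1:2, -1:2]
        peak_y = py + (oy * win).sum() / (win.sum() + 1e-12)
        peak_x = px + (ox * win).sum() / (win.sum() + 1e-12)
        return peak_y - h // 2, peak_x - w // 2

The estimated offsets can then be fed, one per frame, into a shift-and-add accumulator like the sketch earlier in this post.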



Techniques like this go beyond conventional photography and enter the territory of hyperphotography – we're capturing additional information that goes beyond our camera's conventional ability to take images, and doing things that, at first sight, would seem to be physically impossible with the available hardware. A bit of knowledge of quantum mechanics principles is useful here: we're not actually breaking any laws of physics, but we're shunting information between different domains to obtain results that sometimes seem impossible.

There's a whole family of hyperphotographic techniques: I'll try to run through a few others in a future post.

Saturday, 23 May 2009

Jitter

Jitter is a fascinating concept, with applications in digital imagery and quantum mechanics. The word is a corruption of the Scotticism "chitter", which is an onomatopoeic rendering of the noise that your teeth make when you shiver (another offshoot is "chatter"). So jittering is a jerky jumping between positions that surround a central averaged point, and "having the jitters" means being nervously jumpy, or having the shakes for some other reason (e.g. drug or alcohol withdrawal, see also the origins of the word jitterbug). In digital measuring systems, it's the tendency for background noise to make measurements jump about between adjacent states when the real signal value is close to a quantisation threshold.

At first sight, jitter looks like an engineering annoyance. If you feed a slowly-changing analogue signal into a digitiser you might expect the correct result to be a "simple" stepped waveform, but if the signal is noisy, and the signal level happens to be near a digital crossing-point, then that noise can make the output "jitter" back and forth between the two nearest states. In effect, a small amount of noise well below the quantisation step gets amplified into 1-bit noise on the digital data stream.

So early audio engineers would try to filter this sort of noise out of the signal before quantisation. However, they later realised that the effect was useful, and that the jittering actually carried valuable additional information. If you had a digitiser that could only output a stream of eight-bit numbers, and you needed that stream to run at a certain rate, you could run the hardware at a multiple of the required rate, and deliberately inject low-level, high-frequency noise into the signal, causing the lowest bit to dance around at the higher clockrate. If the original signal level lay exactly between two digital levels, the random jitter would tend to make the output jump between those two levels in a ratio of about 50:50. If the signal voltage was slightly higher, then the added noise would tend to make the sampling process flip to the "higher" state more often than the "lower" state. If the original input signal was lower than the 50:50 mark, the noise wouldn't reach the higher threshold quite so often, and the "jittered" datastream would have more low bits than high bits. So the ratio between "high" and "low" bit-noise told us approximately where the original signal level lay, with sub-bit accuracy.
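
Here's a toy simulation of that ratio effect (the 9.3 signal level and the roughly one-step uniform dither are my own illustrative choices, not from any particular converter):

    import numpy as np

    rng = np.random.default_rng(1)
    true_level = 9.3                                 # DC input sitting between levels 9 and 10
    dither = rng.uniform(-0.5, 0.5, size=100_000)    # about one quantisation step of injected noise
    samples = np.round(true_level + dither)          # the "jittery" quantised output

    values, counts = np.unique(samples, return_counts=True)
    print(dict(zip(values, counts)))                 # roughly 70% nines, 30% tens
    print(samples.mean())                            # ~9.3: the high/low ratio encodes the sub-bit level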

This generated the apparently paradoxical result that we could make more accurate measurements by adding random noise to the signal that we wanted to measure! Although each individual sample would tend to be less reliable than it would have been if the noise source wasn't there, when a group of adjacent samples were averaged together, they'd conspire to recreate a statistical approximation of the original signal voltage, at a higher resolution than the physical bit-resolution of the sampling device. All you had to do was to run the sampling process at a higher rate than you actually wanted, then smooth the data to create a datastream at the right frequency, and the averaging process would give you extra digits of resolution after the "point".
So if you sampled a "jittery" DC signal and measured "9, 10, 9, 10, 10, 9, 10, 10", then the average of the eight samples would be 9.625, and you'd conclude that the original signal sat somewhere just over nine-and-a-half.
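
The same arithmetic, plus the oversample-then-average decimation step, in a few illustrative lines (the 9.55 input level and the block size of eight are made up for the example):

    import numpy as np

    jittery = np.array([9, 10, 9, 10, 10, 9, 10, 10])
    print(jittery.mean())                                  # 9.625

    # Sample 8x faster than needed with ~1-step dither, then average each block of 8
    # back down to the wanted output rate.
    rng = np.random.default_rng(2)
    oversampled = np.round(9.55 + rng.uniform(-0.5, 0.5, size=8 * 1000))
    smoothed = oversampled.reshape(1000, 8).mean(axis=1)   # stream at the target rate
    print(smoothed[:4])                                    # individual outputs still wobble a bit
    print(smoothed.mean())                                 # ~9.55: sub-bit detail recovered on average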

Jitter allowed us to squeeze more data through a given quantised information gateway by using spare bandwidth, and passing the additional information as statistical trends carried on the back of an overlaid noise signal. It was transferring the additional resolution information through the gateway by shunting it out of the "resolution" domain and into a statistical domain. You didn't have to use random noise to "tickle" the sampling hardware – with more sophisticated electronics you could use a high-frequency rampwave signal to make the process a little more orderly – but noise worked, too.

So jitter lets us make measurements that at first sight appear to break the laws of physics. No laws are really being broken (because we aren't exceeding the total information bandwidth of the gateway), but there are some useful similarities here with parts of quantum mechanics – we're dealing with a counterintuitive effect, where apparently random and unpredictable individual events and fluctuations in our measurements somehow manage to combine to recreate a more classical-looking signal at larger scales. Even with a theoretically-random noise source with a polite statistical distribution tickling the detector thresholds, the resulting noise in the digitised signal still carries statistical correlations that convey real and useful information about what's happening below the quantisation threshold.

Once you know a little bit about digital audio processing tricks, some of the supposedly "spooky" aspects of quantum mechanics start to look a little more familiar.