
Sunday, 18 April 2010

Ultra-high resolution photography

The "jitter" method (earlier post) can also be used for ultra-high-resolution photography.

People want higher-resolution cameras, but the output resolution of a camera is usually limited by the number of pixels in its sensor. Some digital cameras have a "digital zoom" function, but this is a bit of a cheat: it simply invents extra pixels between the real pixels by smudging the adjacent colour values together. Conventional digital zoom doesn't actually give you any additional information or detail; it just resizes a section of the original image to fill the required size.
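(If you want to see why, here's a toy sketch of what digital zoom actually amounts to – plain bilinear interpolation, written out in numpy for a grayscale image. This is purely illustrative, not anyone's actual camera firmware.)

```python
import numpy as np

def digital_zoom(image, factor):
    """Naive "digital zoom": bilinear interpolation between existing pixels.
    Every output value is just a weighted blend of the four nearest real
    pixels -- no new information is created."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, int(h * factor))   # fractional source rows
    xs = np.linspace(0, w - 1, int(w * factor))   # fractional source columns
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = image[np.ix_(y0, x0)] * (1 - fx) + image[np.ix_(y0, x1)] * fx
    bottom = image[np.ix_(y1, x0)] * (1 - fx) + image[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bottom * fy
```

Every output value is an average of values you already had, so the apparent extra pixels carry no new detail.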

A second problem with cameras is camera shake. If you're holding the camera in your hand, then a tiny movement of the camera can result in the image being panned across the sensor while the CCD imaging chip is doing its thing, giving a blurred photograph. The smaller the pixel elements, and the greater the optical zoom, the worse this gets. We can try clamping the camera and taking a shorter-exposure image (so that the camera doesn't have as much time to move), but shorter exposures lead to more random "noise" per pixel, due to the reduced sampling time.



But with enough processing power, we can use jitter techniques to solve both problems:
In our earlier "audio" example, we deliberately added high-frequency noise to an audio signal to shift the sampling threshold up and down with respect to the signal, then took multiple samples and overlaid them to achieve sub-sample resolution.
With digital photography we can use "positional" noise: we vary the alignment of the camera sensor to the background image, take multiple samples, and overlay them (aligned to subpixel accuracy) to generate images that have higher resolution than the camera sensor. In some ways this is a little like the Nipkow disc approach used in early television systems, which often used a swept array of fewer than a hundred sensor elements to provide a passable image ... in this case, we're not sweeping a single strip of sensors at right angles, but an entire grid of pixel elements, and using their random(-ish) offsets to extract real intermediate detail.
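To make the idea concrete, here's a minimal "shift-and-add" sketch in numpy – my own illustration, not anyone's production code. It assumes the per-frame sub-pixel offsets have already been estimated (there's a sketch of how you might do that further down this post):

```python
import numpy as np

def superres_stack(frames, offsets, factor=4):
    """Combine jittered low-res frames onto a finer grid ("shift-and-add").
    frames  : list of 2-D arrays, all the same shape
    offsets : per-frame (dy, dx) shifts in input pixels (assumed known,
              e.g. from the phase-correlation sketch later in this post)
    factor  : how much finer the output grid is than the sensor grid"""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))   # accumulated brightness
    hits = np.zeros_like(acc)                  # samples landing in each cell
    yy, xx = np.mgrid[0:h, 0:w]                # sensor pixel coordinates
    for frame, (dy, dx) in zip(frames, offsets):
        # Drop each sensor pixel onto the nearest fine-grid cell at its
        # jittered position; np.add.at handles colliding indices correctly.
        fy = np.clip(np.round((yy + dy) * factor).astype(int), 0, h * factor - 1)
        fx = np.clip(np.round((xx + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (fy, fx), frame)
        np.add.at(hits, (fy, fx), 1)
    return acc / np.maximum(hits, 1)           # unhit cells stay at zero
```

The more frames you feed it, the more of the fine grid gets hit by real samples; a serious implementation would interpolate the unhit cells rather than leaving them at zero.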

Instead of camera shake being a problem, it becomes Our Friend! The individual images will be noisier, but when you recombine a second's worth of images, the end result should have noise levels comparable to a single one-second exposure – and since you might not normally try to take a one-second exposure (because of camera stability issues), static scenes might sometimes end up with reduced noise as well as enhanced resolution.
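If the noise claim sounds too good to be true, it's just √N averaging. Here's a two-minute numpy experiment with a flat grey test scene and a made-up noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 128.0)   # flat grey test scene
sigma = 20.0                         # invented per-frame noise level

one_frame = scene + rng.normal(0, sigma, scene.shape)
stack = np.mean([scene + rng.normal(0, sigma, scene.shape)
                 for _ in range(25)], axis=0)

print(one_frame.std())   # ~20 : a single short exposure
print(stack.std())       # ~4  : 25 frames -> noise down by sqrt(25)
```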

So, if we have a programmable camera, in theory it's possible to design an "ultra-resolution" mode that fires off a series of short-exposure images while we hold the camera, and then makes us wait while its processor laboriously works out the best way to fit all the shots together ... or saves the individual shots to their own directory, to be assembled later by a piece of desktop software.
If we were able to design the camera from scratch, we'd probably also want to include a gadget to deliberately nudge the CCD sensor diagonally while the component shots were being taken. If the software's smart enough, the nudging doesn't have to be particularly accurate, it just has to give the sensor a decent spread of deliberate misalignments. A cheap little piezo device might be good enough.



The problem with this approach is getting hold of the software: in theory you can try aligning images by hand, but in practice ... it doesn't really seem sensible.
People are already writing algorithms for this sort of stuff – it's what allows the Hubble Space Telescope to take those absurdly high-resolution images of distant galaxies, and presumably the military guys also use the technique to get extreme resolution enhancements from spy satellite hardware. For analysing and aligning photos with "free-form" offsets, the necessary techniques already seem to be included in the Autostitch panoramic software, which even includes the ability to distort images to make them fit together better – it wouldn't seem to take a lot to turn Autostitch into an ultra-resolution compositor.

Amateur astronomers are now enthusiastically using the technique, and sharing resources (try using "drizzle" as a Google search keyword).
Suppose that you want to take an ultra-high-resolution photograph of the full Moon – you train your camera-equipped telescope on the Moon, lock it down, and set it to keep taking ten pictures per second for an hour while the Moon gradually arcs across the sky and its corresponding image crawls across your image sensor ... and then feed the resulting thirty-six-thousand-odd images into a sub-pixel alignment program, to chew over for a few weeks and pull out the underlying detail. As long as the matching algorithm knows that it's supposed to be lining up the part of the images that contains the big round yellow thing rather than the clouds or the treetops, there wouldn't seem to be any real limit to the achievable resolution. Okay, so you have different atmospheric distortions when the Moon is in different parts of the sky, and when the air temperature drifts, but with a sufficiently smart Autostitch-type warping, even that shouldn't be a problem. If you didn't have a "rewarping" feature, you'd probably just have to decide which part of the Moon you wanted the software to use as a master key when lining up the images.
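For the curious, the heart of such a sub-pixel alignment program can be surprisingly small. Here's a bare-bones sketch of phase correlation with a parabolic peak refinement – real tools (skimage.registration.phase_cross_correlation, or the astronomers' drizzle packages) are far more robust, but the principle is this:

```python
import numpy as np

def subpixel_offset(ref, img):
    """Estimate the (dy, dx) shift between two same-sized grayscale frames
    by phase correlation, refined to sub-pixel accuracy with a parabolic
    fit around the correlation peak.  A bare-bones sketch."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    h, w = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(cm, c0, cp):
        # vertex of the parabola through three neighbouring samples
        denom = cm - 2 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    # negative indices wrap automatically; '%' handles the other edge
    dy = py + parabolic(corr[py - 1, px], corr[py, px], corr[(py + 1) % h, px])
    dx = px + parabolic(corr[py, px - 1], corr[py, px], corr[py, (px + 1) % w])
    # FFT indices wrap around: large "positive" shifts are really negative
    if dy > h / 2: dy -= h
    if dx > w / 2: dx -= w
    return dy, dx
```

Feed it two overlapping frames and it returns the fractional (dy, dx) shift you'd need for the shift-and-add step sketched earlier.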



Techniques like this go beyond conventional photography and enter the territory of hyperphotography – we're capturing additional information that goes beyond our camera's conventional ability to take images, and doing things that, at first sight, would seem to be physically impossible with the available hardware. A bit of knowledge of quantum mechanics principles is useful here: we're not actually breaking any laws of physics, but we're shunting information between different domains to obtain results that sometimes seem impossible.

There's a whole family of hyperphotographic techniques: I'll try to run through a few others in a future post.

Monday, 30 November 2009

Panoramic Photography

If you're wondering how to produce panoramic images and "360×360"-degree "bubble" images like the one used for the Airbus A380 website, there are three main methods:
  1. Use a special panorama camera. These typically mask off the lens so that only a thin vertical slit is exposed. The camera is mounted on a tripod with a clockwork mechanism that pans it across the view, and the rotation process winds the film onto the spool, past the slit. This approach was once used a lot for school group photographs.
  2. Spend a vast amount of money on a special custom-made fisheye lens. Sometime in (I think) the 1980s, a photographer made a bit of a splash using one of these to produce fish-eye cityscapes. People hadn't seen anything quite like it.
  3. Nowadays: use pretty much any digital camera with a bit of auto-compositing software. Some cameras even have a crude landscape-stitching facility built in (even my mobile phone does it!).
If you want to do this properly, and you can run Windows software, the Google search term to remember is Autostitch.

It's downloadable from the University of British Columbia's site, and the Windows demo version is free for non-commercial use. They just ask that if you upload anything to the web, you use "autostitch" as one of your tags, so that they can see what people are doing with it. The user interface is pretty much non-existent – lots of scary number-boxes for people with weird lenses – but really all you have to do is leave the defaults as they are, click "File / Open", select the pictures you want assembled, and then click the "Open" button and go make a cup of tea. When you come back, you'll probably have a perfect panoramic image. If you want a friendlier front-end, they license the library of routines to commercial software companies, who'll be happy to sell you a less clunky implementation. They've licensed it to George Lucas' Industrial Light and Magic, and there's now even a version of Autostitch for the iPhone.

Autostitch automatically works out the right order to arrange your photos, compensates for lens distortion, compensates for tilt and zoom, assigns angle values to pixels, works out how best to mesh them together, downgrades "inconsistent" parts of individual photos so they don't contribute as much to the final picture, adjusts the colour balance and exposure of each photo to merge in with its neighbours, and then smoothly crossfades everything together.
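Autostitch itself isn't scriptable, but if you'd rather drive the same family of techniques from code, OpenCV's bundled stitcher does the feature matching, camera estimation and blending in one call. A minimal sketch (the filenames here are invented):

```python
import cv2  # pip install opencv-python

# Hypothetical filenames -- substitute your own overlapping shots
images = [cv2.imread(f"pano_{i:02d}.jpg") for i in range(8)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed, status code:", status)
```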

Here's what Autostitch really generated for the above photo, before I cropped it to remove the black gaps (where I hadn't taken a photo for the program to use) ... it's not something that you'd normally want to show someone, but it shows how much mathematical cleverness is going into the fitting process – this is NOT just simple "image-tiling" software:

The authors' research paper ("Automatic Panoramic Image Stitching using Invariant Features", Matthew Brown and David Lowe) goes into this in a lot more detail, with sample pictures.

If you use a 360-degree sequence of images, the Autostitch output will rotate seamlessly from side to side in your graphics-editing software, and if you take a 100% bubble sequence, there are Flash applications that can regenerate the view you'd see looking at any horizontal or vertical angle (if you still haven't tried the Airbus demo, go now!).

If you don't need a wraparound view, and you have a tripod and a notepad and some patience, you can use a larger zoom setting on your camera than normal and take a LOT of overlapping pictures of a static scene. This lets a cheap five-megapixel camera easily generate monster hundred-megapixel images ("output image size" is one of the more understandable boxes on the "scary parameter page"). It's good to have a couple of "test goes" at this technique, so that on that one fateful day when you're confronted with a perfect picture that's too tall or wide to fit into your camera view, you can rattle off a sequence of overlapping snaps, knowing that Autostitch should be able to assemble them when you get home.
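The arithmetic is straightforward – a back-of-envelope sketch with invented numbers:

```python
# Rough output size for a grid of overlapping zoomed-in shots.
# All the numbers here are invented for illustration.
shot_mp = 5.0        # a cheap five-megapixel camera
overlap = 0.3        # 30% overlap between neighbouring shots
cols, rows = 7, 6    # grid of pictures panned across the scene

unique = (1 - overlap) ** 2              # fraction of each shot that's new
print(cols * rows * shot_mp * unique)    # ~102.9 MP of distinct pixels
```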

The one real limitation of Autostitch is that it's currently only set up for merging photos taken from a single point. There are some situations where you might want to auto-undistort and tile a sequence of photos taken from different locations: for instance, if you were photographing a mural painted on a long wall, you'd probably want to take a series of images from different positions along the wall and have them all assembled side by side to form a long strip.
Autostitch doesn't do that yet. But maybe they'll add the feature if enough people ask.