Tuesday, 27 April 2010

'Circular' Polyhedra, and the Apollonian Net

[Figure: fractal circular tiling, giving the Apollonian Net / Apollonian Gasket / Leibniz packing diagram]

This is the nice design that I used on page 2 of the book.

Annoyingly, rather a lot of other people discovered it before me:
it's indexed on Wikipedia as the Apollonian Net, after Apollonius of Perga (~262 BC – ~190 BC), and it's also referred to elsewhere as the Leibniz Packing diagram, after Gottfried Leibniz (1646-1716), Newton's rival for the invention of calculus. I've even seen it credited to the design of the floor of a Greek temple. But frankly, it's such a nice shape that I'm sure that people have been discovering and rediscovering it for millennia. Draw three touching circles, fill in the inviting gap in the middle with more circles, and when you're feeling pleased with yourself and wondering what to do next, step back and look at the whole thing, draw in a bigger circle to enclose everything (facing away from you), and repeat. That's how I got there, anyway.

There's some rather interesting geometry here to do with tangents, but I got impatient trying to get a complete derivational method, and generated the figures using a vector graphics program (CorelDraw 10), driven by an automating script, using a mix of partial derivations, testing, and brute force. If you're calculating a chain of circles that might be twenty or thirty stages long, successive rounding errors tend to screw up these diagrams when you calculate them "properly" (look at the overlap of the smaller circles in the Wikipedia vector graphics version), and my priority was to make sure that the circles really did fit. So I used a hybrid approach: trig to get each circle into the ballpark of its proper destination w.r.t. its parents, and then a successive-approximation method with error correction to tweak and nudge and jiggle everything snugly into place.
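If you're curious what that kind of nudge-and-jiggle pass can look like, here's a minimal Python sketch of the general idea – my own toy reconstruction, not the actual CorelDraw script, and the function name, gain and iteration count are just placeholders. It takes a rough guess for a new circle and repeatedly relaxes its centre and radius against the fixed parent circles until the tangency errors shrink away:

```python
import math

def nudge_circle(parents, guess, iters=2000, gain=0.3):
    """Relax a circle until it sits snugly against its fixed parent circles.

    parents: list of ((px, py), pr) circles that must be touched externally
    guess:   ((x, y), r) rough starting circle, e.g. from a trig estimate
    """
    (x, y), r = guess
    for _ in range(iters):
        for (px, py), pr in parents:
            dx, dy = x - px, y - py
            d = math.hypot(dx, dy)
            err = d - (r + pr)            # positive: gap, negative: overlap
            # split the correction between sliding the centre along the
            # line of centres and adjusting the radius
            x -= gain * 0.5 * err * dx / d
            y -= gain * 0.5 * err * dy / d
            r += gain * 0.5 * err
    return (x, y), r

# three mutually touching unit circles; the snug inner circle should
# come out with radius 1/(3 + 2*sqrt(3)), roughly 0.1547
parents = [((0.0, 0.0), 1.0), ((2.0, 0.0), 1.0), ((1.0, math.sqrt(3)), 1.0)]
print(nudge_circle(parents, ((1.0, 0.6), 0.2)))
```

With a decent starting guess from the trig stage, a relaxation pass like this should squeeze the residual gaps and overlaps well below the width of a printed line, which was the whole point of the exercise.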



The Apollonian Net makes more sense when you stretch it over the surface of a sphere, so that the four largest "primary" circles are all the same size, and are explicitly equivalent. They then form the intersection of the sphere with the four faces of a tetrahedron, giving the fractal-faceted solid that I used as a vignette on page 378.

[Figure: infinitely-truncated sphere, giving an infinite-sided solid with circular faces, whose map corresponds to an Apollonian Net]

There are two main ways to construct this solid:
1: Start with a sphere and grind four flat circular faces into it that correspond to the four faces of an intersecting tetrahedron, then keep grinding maximum-sized circular facets into the remaining curved parts, ad infinitum.

2: Start with a tetrahedron, and lop off the four points to give a shape with four regular hexagonal faces, and four new triangular faces where the tips used to be. Then continue lopping off the remaining points, ad infinitum. Each wave of cutting creates a new face at each cut, and doubles the number of sides on all the existing faces. If we cut at a depth that'll keep these polygons regular, then with an arbitrarily-high number of cuts, the faces converge toward perfect circles, and the point-mesh of the resulting peaks converges downwards to settle onto the surface of the sphere used in method 1. (A small counting sketch below shows how quickly the faces pile up.)

Either way works.
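Here's the counting sketch mentioned in method 2 – mine, not something from the book. It leans on the fact that exactly three edges meet at every corner of a tetrahedron, and that this stays true after each wave of cuts, so each truncation stage replaces every corner with a small triangular face:

```python
# Track vertex/edge/face counts as we repeatedly cut every corner off a
# solid whose vertices all have three edges meeting at them (true for the
# tetrahedron, and it stays true after each wave of cuts).

def truncate_counts(V, E, F, stages):
    """Yield (V, E, F) after each wave of corner-cutting."""
    for _ in range(stages):
        # each degree-3 corner becomes a small triangular face:
        # 3 new vertices and 3 new edges per old vertex, 1 new face per
        # old vertex; every existing face doubles its number of sides
        V, E, F = 2 * E, 3 * E, F + V
        assert V - E + F == 2   # Euler's formula as a sanity check
        yield V, E, F

for stage, (V, E, F) in enumerate(truncate_counts(4, 6, 4, 6), start=1):
    print(f"after {stage} cut(s): {V} vertices, {E} edges, {F} faces")
```

The face count runs 4, 8, 20, 56, 164, ... while every existing face doubles its number of (ever-shorter) sides per stage – which is the convergence towards circular facets described above.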



This sort of duality is common when we construct standard polyhedra – the network of relationships in a regular polyhedron tends to be another regular polyhedron, so we can usually get to a regular shape by starting from either of its two relatives. Four of the five Platonic solids pair up nicely like this, and the last – the tetrahedron – is a special case whose "dual solid" partner is another tetrahedron. But we normally only consider these sorts of dualities for solids whose faces are regular polygons with finite numbers of straight sides, and don't include the infinite-sided fractal shapes that show up when one of the parent solids is an infinitely-faceted sphere (which, in some ways, almost counts as a sixth Platonic solid).

We don't have to start with a tetrahedron: we can make these fractal solids from any regular polyhedron (cube, etc.). But the tetrahedral and icosahedral versions probably look the nicest. I find the cube-based version a bit disappointing, but I grew up with rounded-cornered dice with circular faces, so perhaps I'm just a bit blasé about the solid that corresponds to the "six-circle" version of the Apollonian net.

From here, we have three immediate ways to generate new families of solids:
(1) We can choose different starting solids,
(2) we can vary the number of cuts or cutting stages (from zero to infinity), to produce finite-sided solids that look more like cut gemstones, and
(3) we can vary how the cutting is done. If we make our cuts too shallow, then the facets are distorted away from circularity, and the overall shape isn't a sphere, but has flat-topped bulges where the original polyhedral points used to be. If we cut too deep, we get bulges in the shape of the original solid's "dual" sibling, with each bulge tipped by an edge.



Another cool thing about these nets is their topological transformability. With the "closed" version, every circle has three parents of the same size or larger, including the four primary circles (which count as each other's parents). You can transform between the different versions of the net by warping and resizing, while still keeping everything as circles.

This lets us get to tilings that don't automatically suggest standard polyhedra, such as the "two-large-enclosed-circles" version that I used for the "fractal Yin-Yang" symbol on page 145, and the asymmetrical versions on page 224. And once I'd written the scripts and code to generate these figures, I had a few more blank bits in the book to fill, so I knocked up the "triangular boundary" version on page 370 which, actually, has some other interesting proportions. The "triangle" version includes parts that represent the limiting case of the edge of the Apollonian Gasket when we zoom in so far that the outer circle tends toward a straight line. Filling these voids then gives the special-case Ford Circles tiling.
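For reference, the Ford circles mentioned above have a tidy standard description (a textbook result, not something of mine): the circle sitting on the number line at a fraction p/q in lowest terms has

\[
\text{centre}\ \left(\frac{p}{q},\ \frac{1}{2q^{2}}\right), \qquad \text{radius}\ \frac{1}{2q^{2}},
\]

and two such circles, at p/q and r/s, touch exactly when |ps − qr| = 1.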

Some serious people have worked on this subject. You can also Google Descartes' Theorem (after René Descartes, 1596-1650) and Soddy Circles. Frederick Soddy and Lester Ford only published their papers in 1936 and 1938 respectively, so the Apollonian Net involves math research that extends across more than two thousand years, and isn't finished yet.
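The key standard result here is Descartes' Circle Theorem, which relates the curvatures (reciprocal radii) of four mutually tangent circles:

\[
(k_1 + k_2 + k_3 + k_4)^2 = 2\left(k_1^2 + k_2^2 + k_3^2 + k_4^2\right),
\qquad\text{so}\qquad
k_4 = k_1 + k_2 + k_3 \pm 2\sqrt{k_1 k_2 + k_2 k_3 + k_3 k_1},
\]

with an enclosing circle counted as having negative curvature. That's the relation that lets you chain outwards from any three touching circles and fill in an Apollonian net numerically – and, as noted above, it's also where the rounding errors pile up if you rely on it alone.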

It would have been nice to meet the person who designed that floor, though.

Sunday, 18 April 2010

Ultra-high resolution photography

The "jitter" method (earlier post) can also be used for ultra-high-resolution photography.

People want higher-resolution cameras, but the output resolution of a camera is usually limited by the number of pixels in its sensor. Some digital cameras have a "digital zoom" function, but this is a bit of a cheat: it simply invents extra pixels between the real pixels by smudging the adjacent colour values together. Conventional digital zoom doesn't actually give you any additional information or detail, it just resizes a section of the original image to fill the required size.
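To make "smudging the adjacent colour values together" concrete: conventional digital zoom is essentially just interpolation, along the lines of the little sketch below (the arrays are placeholders, not any particular camera's firmware).

```python
import numpy as np
from scipy import ndimage

sensor = np.random.rand(480, 640)           # stand-in for a real sensor readout
zoomed = ndimage.zoom(sensor, 2, order=1)   # 2x "digital zoom": bilinear blending of
                                            # neighbouring pixels -- more pixels, but
                                            # no new information about the scene
```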

A second problem with cameras is camera shake. If you're holding the camera in your hand, then a tiny movement of the camera can result in the image being panned across the sensor while the CCD imaging chip is doing its thing, giving a blurred photograph. The smaller the pixel elements, and the greater the optical zoom, the worse this gets. We can try clamping the camera and taking a shorter-exposure image (so that the camera doesn't have as much time to move), but shorter exposures lead to more random "noise" per pixel, due to the reduced sampling time.



But with enough processing power, we can use jitter techniques to solve both problems:
In our earlier "audio" example, we deliberately added high-frequency noise to an audio signal to shift the sampling threshold up and down with respect to the signal, and we took multiple samples and overlaid them to achieve sub-sample resolution.
With digital photography we can use "positional" noise: we vary the alignment of the camera sensor to the background image, take multiple samples, and overlay those (aligned to subpixel accuracy) to generate images that have higher resolution than the camera sensor. In some ways, this is a little like the Nipkow disc approach used in early television systems, which often used a swept array of fewer than a hundred sensor elements to provide a passable image ... in this case, we're not sweeping a linear strip of sensors at right angles, but an entire grid of pixel elements, and using their random(-ish) offsets to extract real intermediate detail.
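Here's a rough sketch of that overlaying step in Python, assuming the per-frame offsets have already been measured to sub-pixel accuracy. The names and numbers are mine – this is a crude nearest-cell cousin of the astronomers' "drizzle" method, not anyone's production code:

```python
import numpy as np

def shift_and_add(frames, offsets, scale=4):
    """Drop each frame's pixels onto a finer grid at its measured offset.

    frames  : list of equally-sized 2-D arrays (the individual short exposures)
    offsets : list of (dy, dx) shifts in input pixels, estimated to sub-pixel accuracy
    scale   : output pixels per input pixel
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # where every input pixel lands on the fine grid, rounded to the nearest cell
        fy = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        fx = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (fy, fx), frame)
        np.add.at(hits, (fy, fx), 1)
    # average whatever landed in each cell; cells nothing landed on stay empty
    return acc / np.maximum(hits, 1)
```

The better the spread of offsets (and the more frames), the fewer fine-grid cells go unfilled, and the more genuine intermediate detail survives the averaging.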

Instead of camera shake being a problem, it becomes Our Friend! The individual images will be noisier, but when you recombine a second's worth of images, the end result should have noise levels comparable to a single one-second exposure – and since you might not normally try to take a one-second exposure (because of camera stability issues), static scenes might sometimes end up with reduced noise as well as enhanced resolution.
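The noise claim is just the usual stacking arithmetic rather than anything special to this scheme: averaging N aligned frames with independent noise knocks the noise in the result down by roughly

\[
\sigma_{\text{stack}} \approx \frac{\sigma_{\text{frame}}}{\sqrt{N}},
\]

so a hundred hundredth-of-a-second frames end up in the same ballpark as one steady one-second exposure.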

So, if we have a programmable camera, in theory it's possible to design an "ultra-resolution" mode that fires off a series of short-exposure images while we hold the camera, and then makes us wait while its processor laboriously works out the best way to fit all the shots together ... or saves the individual shots to their own directory, to be assembled later by a piece of desktop software.
If we were able to design the camera from scratch, we'd probably also want to include a gadget to deliberately nudge the CCD sensor diagonally while the component shots were being taken. If the software's smart enough, the nudging doesn't have to be particularly accurate, it just has to give the sensor a decent spread of deliberate misalignments. A cheap little piezo device might be good enough.



The problem with this approach is getting hold of the software: In theory, you can try aligning images by hand, but in practice ... it doesn't really seem sensible.
People are already writing algorithms for this sort of stuff – it's what allows the Hubble space telescope to take those absurdly high-resolution images of distant galaxies, and presumably the military guys also use the technique to get extreme resolution enhancements from spy satellite hardware. For analysing and aligning photos with "free-form" offsets, the necessary techniques already seem to be included in the Autostitch panoramic software, which even includes the ability to distort images to make them fit together better – it wouldn't seem to take a lot to turn Autostitch into an ultra-resolution compositor.
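For the "working out how the shots fit together" part, one standard trick is phase correlation: compare two frames in the Fourier domain and read the translation off the correlation peak. The sketch below is just an illustration of that idea (not how Autostitch works internally), and it returns whole-pixel offsets; fitting a small parabola around the peak refines the estimate to the sub-pixel level the stacking step needs:

```python
import numpy as np

def phase_correlation_offset(a, b):
    """Estimate the (dy, dx) translation between two same-sized greyscale frames."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)        # keep only the phase difference
    corr = np.fft.ifft2(R).real              # sharp peak at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap large positive shifts round to small negative ones
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx                            # check the sign convention against your stacker
```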

Amateur astronomers are now enthusiastically using the technique, and sharing resources (try using "drizzle" as a Google search keyword).
Suppose that you want to take an ultra-high resolution photograph of the full Moon – you train your camera-equipped telescope on the Moon, lock it down, and set it to keep taking ten pictures per second for an hour while the Moon gradually arcs across the sky and its corresponding image crawls across your image-sensor ... and then feed the resulting thirty-six-thousand-odd images into a sub-pixel alignment program, to chew over for a few weeks and pull out the underlying detail. As long as the matching algorithm knows that it's supposed to be lining up the part of the images that contains the big round yellow thing rather than the clouds or the treetops, there wouldn't seem to be any real limit to the achievable resolution. Okay, so you have different atmospheric distortions when the Moon is in different parts of the sky, and when the air temperature drifts, but with a sufficiently-smart autostitch-type warping, even that shouldn't be a problem. If you didn't have a "rewarping" feature, you'd probably just have to decide which part of the Moon you wanted the software to use as a master-key when lining up the images.



Techniques like this go beyond conventional photography and enter the territory of hyperphotography – we're capturing additional information that goes beyond our camera's conventional ability to take images, and doing things that, at first sight, would seem to be physically impossible with the available hardware. A bit of knowledge of quantum mechanics principles is useful here: we're not actually breaking any laws of physics, but we're shunting information between different domains to obtain results that sometimes seem impossible.

There's a whole family of hyperphotographic techniques: I'll try to run through a few others in a future post.

Saturday, 10 April 2010

Titanic Syndrome

[Image: RMS Titanic Memorial Plaque (detail), Eastbourne Bandstand]

On the 10th of April 1912, the RMS Titanic set out on her first passenger-carrying voyage. The Titanic and her Olympic-class sister-ships were state-of-the-art: they had a double-hulled design that meant that if one hull ruptured, the ship was still seaworthy. The ship was considered to be practically unsinkable.

Four days later it was at the bottom of the ocean with the bodies of 1517 crew and passengers. The "unsinkable" ship was arguably the most "sinky" ship in human history.
It's normally difficult to assign a "sinkiness" ranking to ships, given that each failed ship only normally manages to sink once, but by sinking before it even made it to the end of its maiden voyage, and killing so many people, the Titanic flipped straight from being supposedly one of the safest seagoing structures ever built, to one of the most dangerous.



Titanic Syndrome isn't based on any specific mechanism. "Syndromes" are recognisable convergences of trends that can sometimes associate a particular outcome with a recognisable set of starting parameters. When we notice one of these patterns, we sometimes have a good idea of how things are likely to end without having to know the mechanism that gets us there.

In the case of Titanic Syndrome, the association is pretty self-explanatory: when people tell us that nothing can possibly go wrong, that everything's perfectly safe, that a plan is foolproof ... things usually turn out badly.

Why did the Titanic disaster happen, and happen so emphatically? The obvious answer is that the ship sank because it struck an iceberg, but there are additional factors that track back to that initial belief that the ship was almost indestructible. If the ship's crew had been less confident, perhaps they'd have done a better job of keeping watch for ice, or cut their speed. If the shipyard had been less confident about the ship's hull, maybe they'd have built it with better-quality materials, rather than just assuming that if one hull failed there was a spare. And if the company hadn't been so sure that lifeboats weren't really necessary, perhaps they'd have included enough for everyone, and fewer people would have had to drown while waiting to be rescued when the ship went down.



In science, hyperbole is usually an indicator that something's wrong. Theories that are described as "pretty good" usually are, but theories that we're told are excellent, or that can't possibly be wrong, usually turn out to be already failing, unnoticed. Titanic Syndrome.

Theories that really are that good don't need to be oversold – it's usually possible to express confidence in an established model more convincingly with quiet understatement. On the other hand, if a core theory is right but the people involved are still trying to exaggerate the case for it (even though that's likely to backfire), then they're making a mistake – and if they're making that one, they've probably been making others, too. So "cheerleading" is usually a red flag that some things in the picture are likely to be dodgy, even if the fundamentals of a theory are right.

And sometimes the "cheerleading" stops people noticing that the fundamentals are wrong. And those are the times ... when everybody's invested so strongly in something that they really don't want to believe in the possibility of problems, or start thinking seriously about fallback positions or lifeboats ... that you get another "Titanic-class" event.

Friday, 2 April 2010

General Relativity is Screwed Up

With Einstein's general theory of relativity, one of the theory's harshest critics was probably Einstein himself. This was partly a matter of personal discipline, and partly – like the joke about sausages – because it's sometimes easier to like a thing if you don't know the gruesome details of how it was actually made. Einstein found it easy to be sceptical about the design decisions that had gone into his general theory, because he was the guy who'd made them. It had been the best general theory that had been possible at the time, said Einstein, but with the benefit of hindsight ... perhaps its construction wasn't entirely trustworthy.
The "iffy" aspects of C20th GR are difficult to see from within the theory, because – where the lower-level design decisions have forced a fudge or bodge – from the inside, these things seem to be completely valid, derived (and quite necessary) features. It's not until we look at the structure from the outside, with a designer's eye, that we see the arbitrary design decisions and short-term fudges that went into making the theory work the way it does.

Sure, the surface math looks pretty (with no obvious free variables or adjustable parameters), but that's because, as part of the theory's development, all the ugliness necessarily got moved down to the definitional and procedural structures that sit below the math. Change those underlying structures, and the surface mathematics break and reform into a different network that looks similarly unavoidable. So even though the current system looks like the simplest possible theory when viewed from the inside, we can't invest too much significance in this, because if the shape and structure was different, that'd look like the simplest possible theory, too.

To see how the theory might have been, we need to look at the subject's protomathematics, the bones and muscles and guts of the theory that dictate its overall shape, and which don't necessarily have a polite set of matching mathematical symbols.

Here are two interlinked examples of decisions that we made in general relativity that weren't necessarily correct:

Problem #1: Gravitational dragging, velocity-dependent gravitomagnetic effects

As Fizeau demonstrated back in ~1851 with moving water, moving bodies drag light. General relativity describes explicit gravitomagnetic dragging effects for accelerating and rotating masses, and logic pretty much then forces it to describe similar effects for relative velocity, too. When you're buffeted by the surrounding gravitational field of a passing star, the impact gives you some of the star's momentum – momentum exchange means that the interaction of the two gravitational fields acts as a sort of proxy collision, and the coupling effect speeds you up a little, and slows down the star, by a correspondingly tiny amount.
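For the record, the textbook version of Fizeau's result (the Fresnel drag coefficient) is that light travelling through a medium of refractive index n, with the medium itself moving at speed v along the light's path, is measured to travel at roughly

\[
u \approx \frac{c}{n} + v\left(1 - \frac{1}{n^{2}}\right),
\]

i.e. the moving medium hands the light a fraction (1 − 1/n²) of its own speed.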

For a rotating star, GR1915 also agrees that you're pulled preferentially to the receding side – there's an explicit velocity component to gravitomagnetism (v-gm). Even quantum mechanics seems to agree. And we can use this effect to derive the existence of the slingshot effect, which is not just theory, but established engineering.

But v-gm effects appear to conflict with Newton's First Law of Motion: If all the background stars dragged light according to their velocity, then as you moved at speed with respect to the background starfield, the receding stars would pull on you a little bit stronger than the others, slowing you down. There'd be a preferred state of rest, that'd correspond to the state in which the averaged background starfield was stationary (ish). This doesn't agree with experience.

So the v-gm effect gets edited out of current GR, and when we do slingshot calculations, we tend to use Newtonian mechanics and model them in the time domain, instead. We compartmentalise.
Summary:
Argument: The omission of v-gm effects from general relativity seems to be arbitrary and logically at odds with the rest of the theory, but it seems to be “required” to force agreement with reality … otherwise “moving” bodies would show anomalous deceleration.

I'd consider this a fairly blatant fudge, but GR people would tend to refer to it as essential derived behaviour (based on the condition that the theory has to agree with reality).

Problem #2: Gravitational Aberration

If signals move at a finite speed, the apparent positions of their sources get distorted by relative motion. We "see" a source to be pretty much in the direction it was when it emitted the signal, with a position and distance that's out of date, thanks to the signal timelag.

If gravitational and optical signals both move at about the same speed, "c", (ignoring nonlinear complications), then we expect to "feel" the gravitational signal of a body to be coming from the same position that the object is seen to occupy. Which is kinda helpful.

But it seems that under current GR, the apparent "gravitational" position of a body gets assigned to its instantaneous position, as if the speed of gravity was infinite. We say that the speed of gravity isn't actually infinite, but that moving bodies somehow "project" their field forwards and then sideways so that it looks infinite as far as the observer's measurements are concerned. In other words, it seems that under current GR, there's no such thing as gravitational aberration.

This is a bit like the sound of fingernails scratching down a blackboard. It means that there's no longer the concept of a body having a single observed position, and we get separate definitions of "apparent position" for EM and gravity. This badly weakens the theory, because it means that mismatches between the two (which we might normally look out for as a sign that we've made a mistake somewhere) are the theory's default behaviour. We lose a method of testing or falsifying the model.

So why do we do it?

We...ell, the usual argument involves planetary orbits and the apparent position of the Sun as seen by an observer on a rotating planet. But that argument's complicated and perhaps still a bit unconvincing, so … the simpler argument is that if gravitational aberration existed, it'd again seem to screw up Newton's First Law. When an astronaut travels through the universe at high speed, the background stars appear to bunch together in front of them (e.g. Scott and van Driel, Am. J. Phys. 38, 971-977 (1970)), and if the gravitational effect of all those stars was shifted to the front as well, then we'd expect the astronaut to be pulled towards the region of highest apparent mass-density … forwards … and this'd further increase their forward speed, making the aberration effect even worse, which'd then create an even stronger forward pull.
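The "bunching together" is just standard relativistic aberration: for an observer moving at speed v = βc, a star that would otherwise sit at angle θ from the direction of motion appears shifted to θ′ with

\[
\cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\,\cos\theta},
\]

which crowds the apparent star positions toward the forward direction as β grows – and the argument above is that naively attaching the gravitational pull to those shifted positions would make the forward crowding self-reinforcing.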

So again, we manually edit the effect out, say that it's known not to exist, and then do whatever we have to do with math and language to stop the theory contradicting us.

Summary:
Argument: Losing gravitational aberration seems to be arbitrary and logically at odds with the rest of the theory, but seems to be "required" to force agreement with reality … otherwise "moving" bodies would show anomalous acceleration.



Put these two arguments together, and you should immediately begin to see the problem:

If we'd resisted the "urge to fudge", it looks as if our two problems would have eventually cancelled each other out anyway, without our having to get involved. They seem to have the same character and magnitude, but opposite signs. One produces anomalous acceleration, the other anomalous deceleration. Put them together and the moving astronaut doesn't accelerate or decelerate, because the stronger rearward pull of the fewer redshifted stars behind them is balanced by the increased number of stars ahead, which are blueshifted and individually weakened. Instead of our imposing N1L-compliance on general relativity as a necessary initial condition, the theory works out N1L all by itself, as an emergent property of curved spacetime.

So in these two cases, we seem to have corrupted the "deep structure" of the current general theory of relativity not once but twice, by trying to solve problems sequentially rather than letting the geometry generate the solutions for us, organically. Both "deleted" effects turn out to be necessary for a "purist" general theory … but once we'd fudged the theory once to eliminate one of them, we had to go back and fudge the theory a second time to eliminate the second effect that would otherwise have balanced it out.

And in doing that, we didn't just "double-fudge" a few details of the theory, we broke important parts of the structure that should have allowed it to expand and blossom into a larger, more tightly integrated, more strictly falsifiable system that could have embraced quantum mechanics and dealt properly with cosmological issues. General relativity should have been a tough block of dense, totally interlocking theory, with independent multiply-redundant derivations of every feature, rather than the thing we have now.



The fudging of these two issues also changed some of the theory's physical predictions:

Losing gravitational aberration gave us a different set of observerspace definitions that altered the behaviour of horizons. Losing v-gm meant that we got different equations of motion, once again a different behaviour for black holes, and no way of applying the theory properly to cosmology without generating further cascading layers of manual corrections reminiscent of the old epicycle approach to astronomy. It also created a statistical incompatibility with quantum mechanics.

So general relativity in its current form seems to be pretty much screwed. GR1915 was fine as an initial prototype, but it should really have been replaced half a century ago – in 2010, it's an ugly, crippled, mutated, limited form of what the theory could, and should, have been by now. But because people fixate on the math rather than on the structure, they can't see the possibility of change, or the beauty of what general relativity always had the potential to become. And that's why the subject has been almost stalled for pretty much the last fifty years: Einstein died, and too many of the surviving physics people who did this stuff couldn't see past the mathematical and linguistic maze that'd developed around the subject. They didn't "get" the design principles, or the dependencies between the initial design decisions and the characteristics of the resulting model, and they didn't appreciate the design aesthetics.

And I find that sad on so many levels.