Thursday, 30 September 2010

Different Types of Zero

It took mathematicians a while to realise that infinities came in different sizes.

The problem was an inadequacy of language. All "infinities" are infinite, but some are a little more infinite than others. For instance, "infinity-squared" gives an infinite result, but it's a stronger infinity than the infinity that we started out from ... but by deciding to assign all these different infinite results the same name — "infinity" — we created an implicit assumption that this "infinity" was a thing, a single entity rather than a family. We ended up reciting things like "infinity is just infinity, by definition". Well, if so, it was a pretty bad definition, because infinity isn't so much a value as a realm, or a concept that allows multiple members, like "integer".

Our conventional language breaks down in these sorts of situations. To try to get a handle on the infinities, we can construct an "infinity-based" number system where our reference base unit [∞] is a "reference infinity" of "one divided by zero" (we can say, "1/0 = [∞]"), and we can compare other infinities to that, so that 2/0 gives 2×[∞], and 2×[∞] / [∞] = 2. It's possible to do proper math and get sane finite results by multiplying and dividing infinities together, as long as you remember to keep track of how big each individual infinity is (and/or where it originally came from).
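Here's a minimal sketch of that kind of bookkeeping, in Python. The Graded class and its (coefficient, order) representation are my own inventions for illustration: a quantity is stored as coeff × [∞]^order, so multiplying adds the orders and dividing subtracts them.

    class Graded:
        """A finite coefficient attached to a power of the reference infinity [∞]."""
        def __init__(self, coeff, order=0):
            self.coeff = coeff    # ordinary finite multiplier
            self.order = order    # power of [∞]: 0 = finite, 1 = infinity, -1 = zero

        def __mul__(self, other):
            # multiplying multiplies the coefficients and adds the orders
            return Graded(self.coeff * other.coeff, self.order + other.order)

        def __truediv__(self, other):
            # dividing divides the coefficients and subtracts the orders
            return Graded(self.coeff / other.coeff, self.order - other.order)

        def __repr__(self):
            return str(self.coeff) if self.order == 0 else f"{self.coeff}×[∞]^{self.order}"

    INF = Graded(1, 1)          # the "reference infinity", 1/0

    print(Graded(2, 1) / INF)   # 2/0 divided by 1/0 -> 2.0
    print((INF * INF) / INF)    # infinity-squared / infinity -> 1.0×[∞]^1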

We do similar things with complex numbers. These have two components, a conventional "real" component, and an "imaginary" component that's a multiple of the "impossible" square root of minus one, which we abbreviate as i. Even though the imaginary components don't exist in our default number system, we can still do useful math with these hybrid numbers ... that's actually how we generate exotic mathematical creatures like the Mandelbrot Set. The approach works. We've seen the pretty pictures.




So multiple values of infinity are okay.

But there's one last thing that we have to fix. Zero. See, it turns out that if infinities come in different sizes, then zero has to come in different sizes, too.

At first sight this seems even more crazy. We can plot a simple line going through zero, put the tip of our pencil on the crossing point, and say: there it is, right there. How can that single point have different values? Well, as with the infinities, the auxiliary values exist off the page — when different graphs all hit zero at the same position, the properties associated with those coincident points aren't automatically identical just because the points show up in the same place. Coincident points on different intersecting lines can carry different slopes and rates of change, and can have associated vectors and other baggage that gets lost when we try to break a line down into instantaneous, isolated, unconnected values.

Zero times any conventional number gives a zero, just like infinity times any conventional number gives an infinity. But not all zeroes have the same emphasis or strength, and this can become important when you have them fighting against each other. If we're only multiplying our zeroes by normal boring numbers then the auxiliary parameters don't matter, but as with the infinities, when we start multiplying or dividing different zeroes, we have to track the strengths of the zeroes, or else we tend to end up with mathematical garbage.
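Continuing the hypothetical Graded sketch from earlier: a zero of "strength one" is just order -1, and the same bookkeeping keeps 0/0 sane as long as the strengths are tracked.

    ZERO = Graded(1, -1)          # the reference zero, with strength one
    strong = Graded(3, -2)        # a "stronger" zero (think: zero-squared, times 3)

    print(ZERO / ZERO)            # matched strengths cancel: 0/0 -> 1.0
    print(strong / ZERO)          # a zero of strength one survives: 3.0×[∞]^-1
    print(Graded(2, 1) * ZERO)    # a doubled infinity times a zero -> 2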


One of the problems that theoretical physicists currently have is that they're coming up against a range of problems — black hole event horizons, Hawking radiation, gravity-wave and warpfield propagation — where clusters of values assigned to apparent physical properties have a habit of going to zero or to infinity and beyond, even though the underlying local physical properties are non-zero and non-infinite. To deal with those problems we either have to find ways of sidestepping the pathological math, or come up with a more complete mathematical vocabulary that doesn't freak out when we occasionally need to divide a known strength of infinity by an associated known incarnation of zero.
Otherwise, we're liable to come to bad conclusions about how certain things are "provably physically impossible" because they appear to break the math, when in fact, the real problem is that the math that we're trying to apply to the problem is too naive. If we go down that route, we can end up accidentally elevating the result of human error to the status of an accepted mathematical proof.

Which is bad.

Sunday, 1 August 2010

The Decline of Theoretical Physics

Progress in fundamental theoretical physics now seems to have been on hold for quite a while.

I thought that the situation was summed up quite nicely by one of the characters in "The Big Bang Theory" (an improbably funny TV sitcom about sciencey people).
Penny (cheerfully as a conversation-starter):
"So, what's new in the world of physics?"

Leonard (momentarily surprised and slightly amused that anyone would ask such a question):
"Nothing!"

Penny (taken aback):
"Really, nothing?"

Leonard:
"Well ... with the exception of string theory, not much has happened since the 1930's ... and ya can't prove string theory, at best you can say, 'Hey look, my logic has an internal consistency-y!' "

Penny:
"Ah. Well, I'm sure things will pick up."

Leonard unhappily picks his nails, broods briefly, decides that there's nothing positive he can say, and then changes the subject.

And I think that just about sums things up.

Friday, 25 June 2010

A 3D Mandelbrot

Skytopia have a great set of pages on the search for a 3D version of the Mandelbrot Set. Or at least, for an interesting 3D version of the normal Mandelbrot.

It's easy enough to produce fractal solids that have a Mandelbrot on one plane, and if you plot the correct 3D shadows of the 4D Julia Set, you can find shapes that have Mandelbrots on multiple intersecting planes. But getting a Mandelbrot on two perpendicular intersecting planes, while having the transition between them be more interesting than simply spinning or rotating the thing on its axis, is more difficult.




The "normal" Mandelbrot has one "real" component and one "imaginary" component, set on the x and y axes. If you add another imaginary component on axis z, you simply get the sort of boring "spun" shape that you might produce on a lathe. If you distinguish the two "imaginary" axes by whapping a minus sign in front of one of them, you get a hybrid Mandebrot/Tricorn solid, but one of the cross-sections is then a tricorn rather than a 'brot.

From here, you can try hypercomplex numbers, number systems that support multiple distinct imaginary components and define how they should fit together. In a simple hypercomplex system, we have four components, r, i, j and k — "r" is real, i and j are identically-acting roots of minus one, but i-times-j gives a third creature, k, and k-squared gives plus one. So we can plot r, i, j to get a 3D Mandelbrot. Trouble is, as Skytopia point out, it's a bit boring … if we look down on the 'brot's side-bulbs, they show up as simple nubbins. There are other ways to try to force Mandelbrot cross-sections, but they're a bit arbitrary, and the results tend to look like someone's cut them out of a block of wood using a Mandelbrot template.
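For the curious, here's a rough Python sketch of that hypercomplex arithmetic (i² = j² = -1, ij = k, k² = +1) driving a Mandelbrot-style escape test. The function names are mine, and I'm assuming the commutative multiplication rules described above.

    def hsquare(a, b, c, d):
        """Square of a + b·i + c·j + d·k under i² = j² = -1, ij = ji = k, k² = +1."""
        return (a*a - b*b - c*c + d*d,
                2*a*b - 2*c*d,
                2*a*c - 2*b*d,
                2*a*d + 2*b*c)

    def in_set(cr, ci, cj, max_iter=50, bailout=4.0):
        """Iterate z -> z² + c from zero, with c placed in the (r, i, j) slice."""
        a = b = c = d = 0.0
        for _ in range(max_iter):
            a, b, c, d = hsquare(a, b, c, d)
            a, b, c, d = a + cr, b + ci, c + cj, d
            if a*a + b*b + c*c + d*d > bailout:
                return False
        return True

    print(in_set(-1.0, 0.0, 0.0))   # a point on the real axis, inside -> True
    print(in_set(1.0, 1.0, 0.5))    # well outside -> False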


Paul Nylander (bugman) then started looking at higher-powered counterparts of the Mandelbrot, and realised that the boring hypercomplex solid for z^2 actually got pretty damned interesting when you jacked the power value up to eight (z^8). This gives a gorgeously intricate beast now referred to as a Mandelbulb, with bulbs that spawn bulbs all over the place. It also has Julia-set siblings. But it's not a standard Mandelbrot.
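Here's a sketch of the power-eight "triplex" iteration usually credited to White and Nylander. Renderers differ on the exact angle conventions, so treat this as one common variant rather than the definitive formula.

    from math import atan2, cos, sin, sqrt

    def triplex_pow8(x, y, z):
        """One common (x,y,z) -> (x,y,z)^8 'triplex' power, via spherical angles."""
        r = sqrt(x*x + y*y + z*z)
        if r == 0.0:
            return 0.0, 0.0, 0.0
        theta = atan2(sqrt(x*x + y*y), z)    # polar angle from the z axis
        phi = atan2(y, x)                    # azimuth in the x-y plane
        r8 = r ** 8
        return (r8 * sin(8*theta) * cos(8*phi),
                r8 * sin(8*theta) * sin(8*phi),
                r8 * cos(8*theta))

    def in_bulb(cx, cy, cz, max_iter=20, bailout=2.0):
        x = y = z = 0.0
        for _ in range(max_iter):
            x, y, z = triplex_pow8(x, y, z)
            x, y, z = x + cx, y + cy, z + cz
            if x*x + y*y + z*z > bailout * bailout:
                return False
        return True

    print(in_bulb(0.0, 0.0, 0.0))   # the origin stays put -> True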


So what else? Well, the “standard” hypercomplex number system isn't the only option. There are alternative systems that give multiple imaginary components with slightly different interrelations. There are quaternions (tried them, didn't like them), and there are other potential configurations and a larger overarching system of eight-parameter octonions. The Mandelbrot-based solid at the top of this blog was made with one of those. The internal shape is also slightly reminiscent of a Buddhabrot.

The semitransparent voxel plot above isn't really able to show the shape properly: you can see that there are some fine floating ribs connecting some of the Mandelbrot features on the two planes that aren't being adequately captured by the plot, so I'll have to run off a larger version at some point, and perhaps experiment with some colour-coding. Some of the more exotic detail, like the floating network of ribbing, might also be an artefact of a technique I used to emphasise surface structure in the plot, so I'll need to spend some time playing with the thing and working out how much of the image is "proper" 3D Mandelbrot detail, and how much is an additional fractal contribution from the enhancement code.

But meanwhile … pretty shape!

Sunday, 30 May 2010

"Tesla Turbine" Pumps

When used as a pump, the Tesla turbine is one of the simplest devices that exists. Its main component is simply a spinning disc – the disc is immersed in a fluid (like air, or water), the moving surface couples frictionally with the fluid, and makes the surface layer of fluid rotate with the disc. The fluid gets thrown outwards away from the rotation axis by centrifugal forces, and new fluid moves in to take its place. You then typically build a box around the disc, with an inlet tube and an outlet tube. The inlet feeds fresh fluid to the central axis of the disc, and the higher-pressure "centrifuged" fluid that collects around the disc edge is allowed to escape via the outlet pipe.

You spin the disc (in either direction), and fluid jets through the device.
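If you want a back-of-envelope feel for the numbers, treat the fluid layer as fully co-rotating with the disc (a "forced vortex"), which gives a pressure rise of ½ρω²(r_out² − r_in²) across the disc. This is an idealised upper bound – real Tesla pumps use stacks of discs, and the fluid slips – and the example figures are made up.

    from math import pi

    rho = 1000.0                   # water, kg/m³
    omega = 3000.0 * 2 * pi / 60   # 3000 rpm as rad/s
    r_in, r_out = 0.01, 0.05       # inlet and rim radii, m

    dp = 0.5 * rho * omega**2 * (r_out**2 - r_in**2)
    print(f"idealised pressure rise: {dp / 1000:.0f} kPa")   # ~118 kPa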

Now sure, we can do this sort of thing with a conventional bladed propeller, but those beasties have problems. The blades chop up the air or water, and create turbulence, which in turn encourages the assembly to vibrate, and small imperfections in the rotor construction can cause imbalances (and vibrations) that are different at different speeds. So bladed designs tend to be messy and noisy and juddery, and the blades' leading edges are prone to collecting buildups of dust or muck, or being damaged by collisions with any junk that happens to be caught in the fluid stream, which in turn messes up the aerodynamics of the blade and unbalances the assembly.

If you've ever built a PC to be especially quiet, you'll know that as the months pass, it gets noisier and noisier until you have to take the thing apart to clean the accumulated muck off the leading edges of the fanblades. In the case of ship's propellers these vibrations cause more extreme physical damage: cavitation momentarily creates microscopic collapsing pockets of superheated steam that can etch pits into the bronze. All this work wastes energy and causes unwanted noise and vibration, and makes for additional engineering complications.

With the Tesla turbine fan, this violent interaction with the stream doesn't happen. For conventional propellers, surface friction wastes energy; with a Tesla disc, surface friction is the useful coupling mechanism that makes the thing work.

Nowadays, if you have a tropical fish tank or an outdoor pond with an ornamental fountain, the little cylindrical pump that circulates the water or drives the fountain is probably a small centrifugal Tesla turbine. Because it's bladeless, it means that any tiny creatures that get into the pump don't risk being chopped or hit by a big nasty blade; they might have a couple of bumps on the way through, but that's it. And weeds can't snag on the propeller blades and jam the pump, because there aren't any propeller blades to snag. So it's a comparatively creature-friendly and low-maintenance type of pump, if you want something to pump water for years without requiring any attention, or mashing up the microfauna.

Recently, they've also started to consider using Tesla pumps for pumping blood. Blood includes all sorts of delicate gunge that doesn't like being disturbed too much, or it's liable to trigger a clotting reaction or an immune response. You don't want to smash up too many of the blood cells or start banging platelets together – traditional blood pumps use clear tubing that's "massaged" by rotors to push the blood through, which makes for a nice simple high-visibility sealed unit, but you're still "squashing" some of the blood every time the pinched region travels along the tube.



Perhaps the most surprising thing about Tesla pumps, apart from their simplicity, is how long it took us to realise that these things were useful. A diagram of a conventional bladed fan gives you some indication of what the device does, but a simple smooth spinning disc in a box doesn't look as if it would do anything useful. Nikola Tesla got his turbine patent as late as 1913, claiming it as a novel device; Tesla pumps apparently only started being generally manufactured in the 1970s, and a quick Google for references to radial bloodpump designs seems to only throw up results newer than 1990, most in the last five or ten years.

Sometimes we miss out on useful technologies because they require too much R&D or technical skill to get them to point where they actually work, but sometimes we also miss out on trivially-easy technologies that "work first time" because they're just too damned simple.

Friday, 14 May 2010

Rice and the Chessboard

In the story, an Emperor asks his mathematician to solve a difficult problem.

In payment, the mathematician asks for a chessboard with one grain of rice on the first square, two on the next, four on the one after that, eight on the next, and so on. The emperor agrees. Then the smart-alec mathematician points out that by the time we get to the sixty-fourth square, the number of grains of rice is astronomical: the board as a whole carries 2^64 − 1 grains, which is about 1.8 × 10^19, or 18,446,744,073,709,551,615. In binary, that's 1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111 — sixty-four ones — which is the largest number that you can express as a standard unsigned integer on a modern 64-bit processor. One more grain of rice and you get an overflow error.

If each grain of rice weighs about 25 mg, then, when we double the last-square figure to get the total number of grains on the chessboard, I think we end up with something like 460 billion metric tonnes (minus one grain).

According to the story, the Emperor's response was to point out that this created a new problem that required the mathematician's involvement. As Emperor, he couldn't go back on his word, even if the mathematician allowed him to. An Imperial Decree couldn't be rescinded. On the other hand, that much rice didn't physically exist. The solution was to point out that if the mathematician didn't exist, the debt would cease to exist, too. So the Emperor signed the mathematician's death warrant on the grounds that pulling this sort of trick on the Emperor counted as treason, and had him executed.



Here's how to work out the result in your head, without using a calculator (or even pen and paper):

Square 1 has one grain of rice. The next ten have 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024.

Every time that you advance another ten squares, you multiply the number on the square by 1024 (2^10), which is only slightly more than a thousand (1000, 10^3). As a first approximation, every ten-square move pretty much shifts the "decimal" version of the number three places to the left.

This means that when we move sixty squares, we're adding those three zeroes six times, giving us eighteen zeroes. That leaves just three more squares, so we go 2, 4, 8 … and write down a "guesstimate" figure of 8 × 10^18 for the number of grains on the last square.

This is an underestimate, but by how much? We treated 1024 as if it was 1000, so we have a missing factor of 1.024 that needs to be multiplied in six times to get to the proper answer. What's 1.024 raised to the sixth power? Eww. :(

Well, when we square "one-point-something", we get one, plus two times the "something", plus "the something-squared" ( (1+x)^2 = 1 + 2x + x^2 ).
If the something is very small, then something-squared is going to be extremely small, and hopefully so small that we can forget about it, and get away with just doubling the original small something.

So "1.024 times 1.024" gives 1.048, plus a little bit. Call it 1.049 .
Now we need "1.049 times 1.049 times 1.049", to get us up to that power of six.
A similar principle applies: cube something very close to one, and the tiny difference kinda triples (plus a little bit).
So we take 1.049, look at the part after the decimal point, 0.049, nudge it up to a nicer 0.05 , then triple that to give ~0.15 as the ratio that has to be multiplied into our original result to find the amount of undershoot correction.

"Eight" times the 0.1 is 0.8
"Eight" times the 0.05 should be half that, so 0.4 .
Adding them together, 0.8 + 0.4 is 1.2 (× 10^18).
That's our error .
Add that to the original guess of 8 (× 10^18), and we get our improved estimate, of ~9.2 ×10^18 .

… and if we check that against our calculator, which says that 2^63 = ~9.22 × 10^18, we were correct to two significant figures. Not bad for calculating something to the sixty-third power. Yayy Us!
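(And if you'd rather let a machine check the whole chain of reasoning, the arithmetic is easy to script; the figures below are just the steps from above, replayed.)

    exact = 2 ** 63
    print(exact)                        # 9223372036854775808, i.e. ~9.22e18

    guess = 8e18                        # "three zeroes per ten squares" first pass
    print(1.024 ** 6)                   # the dropped factor: ~1.1529
    print(guess * 1.024 ** 6)           # ~9.22e18, matching the ~9.2 estimate

    total = 2 ** 64 - 1                 # every square on the board, combined
    print(total * 25e-6 / 1e3 / 1e9)    # grains at 25 mg each, in billions of tonnes: ~461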

Monday, 3 May 2010

Ten Things you can't do on an Apple iPad

[Figure: Apple iPad: No Can Do]
  1. Watch broadcast TV
    The iPad has nowhere to plug in a DVB TV tuner dongle, and even if it had, the iPad doesn't decode the MPEG2 video format used for standard-format DVB digital TV broadcasts. It's MPEG4-only. So you can't use it as a personal video recorder, and if you have an existing PVR, you won't be able to copy or stream the recorded MPEG2 files to the iPad. Unless your other machine's fast enough to convert to MPEG4 in real time, you'll have to transcode your files to MPEG4 first. Oh, and not all MPEG4 transcoder software produces files that play properly on the iPhone OS, so even if you do transcode, you still might not be able to watch the files.

  2. Listen to the radio
    The iPhone chipset supposedly includes an onboard hardware FM radio, which the OS doesn't make available. In theory you can plug an FM receiver module into the iPhone/iPad docking connector, but in practice, it's cheaper to buy a separate radio (or a cheap MP3 player with a radio onboard). Apple don't make a separate snap-in radio, and third-party manufacturers have been a bit reluctant to market one in case it becomes redundant overnight, if and when Apple decide to finally enable the internal device. Apple don't want you listening to FM until they can find a way to make money from it, and with FM, it's the radio station that gets the advertising revenue, not Apple.
    If you have a good internet connection, you can listen to a stack of radio stations online … as long as they don't use Flash as a delivery medium.
    Major radio stations are often also available via DVB ... but that's not an option with the iPad because of point (1).
    Many iPhone owners get their "fix" of radio by buying a speaker dock that includes an FM radio receiver, but fitting an iPad to one of these is a bit more difficult.

  3. Watch DVDs
    Okay, so you don't expect the iPad to have a DVD drive, but netbooks at least have the option of plugging in a cheap USB-powered optical drive to play your DVD movies. Not the iPad. And even if it had a general-purpose USB port, standard DVD video is encoded in MPEG-2, so even if you find a way to get the DVD .vob files de-encrypted and onto the iPad, it won't play them. If a relative passes you a homebrew DVD with your family's home movies, you're back into Transcoding Hell. Transcoding on a Mac probably produces "Apple-friendly" MP4 files, first time, every time ... on other platforms, don't count on it.

  4. View or edit OpenOffice files
    Some organisations are trying to migrate away from using MSOffice files to more open formats, to avoid vendor lock-in. The main alternative suite is OpenOffice, which runs under Windows and Linux, can read and write all the main MS formats as well as its own "open" format, and also happens to be free. Apple don't seem to have a reader for "OOo" files. They don't seem to much approve of open formats, and would rather you used Microsoft's apps and formats than open-source – they see open-source as a bigger threat than Microsoft.

  5. Share photos.
    Jobs says that sharing photos is "a breeze" on the iPad. By "sharing", he presumably means "tilting the screen so that other people can see it". If you want to actually give someone a copy of a holiday picture, you'll probably have to do it on a different computer, rather than the iPad. There's currently no "file export" media option. Budget picture frames usually have picture sorting, import/export, and USB/SD card support functions, but the iPad doesn't; it's strictly a secondary device. Any serious file organisation is supposed to be done on a parent computer, so don't expect to be able to sort your piccy collection on the iPad while sitting comfortably on your sofa.
    There is a USB/Cardreader accessory listed for the iPad ... the Camera Connection Kit ... but Apple currently only describe it as allowing you to import files to the iPad. To get the photos out of the iPad, you're supposed to synch to the iPad's "parent" PC or Mac, and then save them from that parent device. In which case, it'd be faster to upload the files directly to the parent machine without going via the iPad. Not exactly "breezy".

  6. Use standard peripherals.
    As well as not having internal USB, the OS 3.x iPhone apparently doesn't support much in the way of bluetooth peripherals other than stereo headphones, and apparently doesn't even support Apple's own bluetooth keyboard. Apple's "official" external keyboard for the iPad is a dedicated iPad keyboard-and-stand, which only works in portrait mode. Health and safety regulations say that you aren't supposed to use keyboards in an office environment unless they're adjustable, and this looks like it probably isn't. But Apple seem to have realised that this restriction sucked too much, and the iPad's OS 4.0 now seems to be more relaxed, and supports Apple's general-purpose bluetooth keyboard (which costs the same as the dedicated iPad keyboard).
    Unless the iPad's "OS 4.0" is a radical departure from 3.x, you probably also won't be able to zap contacts or notes or files into the iPad from general bluetooth peripherals, like you can with decade-old bluetooth-equipped Palm devices. I used to carry about a pocket-sized Targus folding keyboard and an OCR pen-scanner device with my old Palm organiser. Nothing like that seems to be available for the iPad.

  7. Record stereo audio.
    Apple want you buying music, not recording it, so while the Apple dock connector has pins for stereo in, the official iPad Apple specifications don't commit to the pins doing anything. Maybe they're connected, maybe they're not. If they are, great. But it's a brave third-party manufacturer who releases a product or connector for a function that an Apple device isn't guaranteed to have – even if your gadget works now, one OS revision later it might not (see also (2), external FM radio). As a playback-only media centre, the iPad again has the problem that onboard organisation is limited – you're supposed to do all your media organising on a separate parent computer, and iTunes usually won't recognise album art originating on a PC. Often it won't recognise PC-ripped tracks and let you download replacement artwork, either. Of course, if you're sick of watching CoverFlow "flipping" blank squares, you can always buy your albums over again as Apple downloads, or rip the CDs again using a Mac ...

  8. Use unapproved software.
    Apple reserve the right to decide what software you run on your machine, and there are certain sorts of applications they really don't want you to have. You normally aren't even allowed to load your own media files onto an iPhoneOS device unless the iTunes "sentry" approves – the iPx range won't emulate a basic thumb drive.
    You can often upload these "unapproved" apps and use your iPx gadget as a file caddy, by hacking past the Apple firmware's protection to expose the internal filesystem over USB – "jailbreaking" – but jailbreaking doesn't always work on all models, and it's too early to know what eventual proportion of iPads are likely to be jailbreakable.

  9. Camera functions
    iPhone OS 4 is supposed to finally add proper support for camera functions, but the iPad doesn't actually have a camera. In theory it'd be easy to add support for a camera that snaps onto the dock connector, but AFAIK, no third-party manufacturer has yet produced one.
    It's probably easy in theory to support a swivellable webcam that can point forwards as a camera or backwards for video calls, but that'd need the device to be held upside down with the dock connector at the top. There's no technical problem with this … except that Apple's own OS 3.x applications refuse to work in upside-down mode. On OS4, the onboard applications are supposed to work in any orientation, but it's still a bit discouraging for manufacturers to know that if they launch a camera, it won't work well on v3.x devices. There's also the possibility that if Apple do decide to embrace the idea of an add-on camera, they won't make the function ready until they have a camera of their own to sell. You could buy rotatable snap-in cameras for some Palm organisers nearly ten years ago, so the iPad's still lagging behind in this respect.
    And there are some useful camera-aware apps: the Evernote notetaking apps let you snap images (memos, restaurant menus, street signs), save them with geotagging data, and apply OCR to add the text in the image to a searchable comments field. If you have an iPhone with Evernote, and someone shows you their contact details on their smartphone screen or a business card, you can snap a photo and get a text file. But without a camera, none of this cool stuff will currently work on the iPad. Evernote also has a nice voicenotes feature, but again, on the iPad ... no onboard mic.
    So, no Skype video calling.

  10. SIM-swapping.
    The iPad isn't locked-in to a particular phone provider (hooray!), but the bad news is that if you've just bought a high-capacity service plan for your iPhone, and you want to transfer it to your iPad (which you expect to be using for all your serious mobile web-browsing from now on), you can't. The SIMs are physically different sizes. The iPhone uses a standard mini-SIM, while the larger iPad uses a smaller micro-SIM. In theory, a micro-SIM in an adapter can fit into a full-size SIM slot, but the chances are that if you're an existing iPhone owner, you won't have one of those. Apple enthusiasts have gotten used to Apple engineering-in incompatibilities with other manufacturers' products, but some have gotten a bit annoyed at what looks like a deliberate incompatibility with other Apple products.



The iPad isn't really what Steve Jobs said it was. It's not a device that's designed to sit in some middle ground between netbooks and laptops, because those two types of device can do pretty much everything on the list.

The iPad's purpose is straightforward: it's designed to kill sales of the amazon Kindle, break amazon's stranglehold on ebook sales, and let Apple add ebook and magazine retailing to their existing music-and-movies portfolio. It's a conduit.
It has to be five hundred dollars in order to crush the Kindle DX. At $500, its facilities have to be limited in order to avoid undercutting Apple's own laptop range (which starts at a thousand dollars). And it has to be based on the iPod Touch (with an updated "iPhone OS" and a bigger screen), because that gives it an established sales channel, because that's the "other" OS that Apple have, because that preserves separation between the iPad and the more expensive OSX-based products, and because that makes it more difficult for people to dig out and redistribute downloaded paid-for content.

Those three things pretty much define it.

Tuesday, 27 April 2010

'Circular' Polyhedra, and the Apollonian Net

[Figure: fractal circular tiling, giving the Apollonian Net / Apollonian Gasket / Leibniz packing diagram]

This is the nice design that I used on page 2 of the book.

Annoyingly, rather a lot of other people discovered it before me:
it's indexed on Wikipedia as the Apollonian Net, after Apollonius of Perga (~262 BC – ~190 BC), and it's also referred to elsewhere as the Leibniz Packing diagram, after Gottfried Leibniz (1646-1716), Newton's rival for the invention of calculus. I've even seen it credited to the design of the floor of a Greek temple. But frankly, it's such a nice shape that I'm sure that people have been discovering and rediscovering it for millennia. Draw three touching circles, fill in the inviting gap in the middle with more circles, and when you're feeling pleased with yourself and wondering what to do next, step back and look at the whole thing, draw in a bigger circle to enclose everything (facing away from you), and repeat. That's how I got there, anyway.

There's some rather interesting geometry here to do with tangents, but I got impatient trying to get a complete derivational method, and generated the figures using a vector graphics program (CorelDraw10), driven by an automating script, using a mix of partial derivations, testing, and brute force. If you're calculating a chain of circles that might be twenty or thirty stages long, successive rounding errors tend to screw up these diagrams when you calculate them "properly" (look at the overlap of the smaller circles in the Wikipedia vector graphics version), and my priority was to make sure that the circles really did fit, so I used a hybrid approach where I used trig to get each circle into the ballpark of its proper destination w.r.t. its parents, and then a successive approximation method with error correction to tweak and nudge and jiggle everything snugly into place.
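For anyone who wants to try the "proper" route anyway, the tangent geometry can be packaged as the complex form of Descartes' Theorem (which gets a mention at the end of this post), handing you the curvature and centre of the fourth circle directly. Here's a rough sketch; the rounding-error caveat above still applies down long chains.

    import cmath

    def fourth_circle(k1, z1, k2, z2, k3, z3, sign=+1):
        """Curvature (1/radius) and centre (complex) of a fourth tangent circle."""
        k4 = k1 + k2 + k3 + sign * 2 * cmath.sqrt(k1*k2 + k2*k3 + k3*k1)
        z4 = (k1*z1 + k2*z2 + k3*z3
              + sign * 2 * cmath.sqrt(k1*k2*z1*z2 + k2*k3*z2*z3 + k3*k1*z3*z1)) / k4
        return k4, z4

    # three touching unit circles, then the small circle in the central gap:
    k, z = fourth_circle(1, 0j, 1, 2 + 0j, 1, 1 + 3**0.5 * 1j)
    print(1 / k.real, z)   # radius ≈ 0.1547, centre ≈ 1 + 0.577i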



The Apollonian Net makes more sense when you stretch it over the surface of a sphere, so that the four largest "primary" circles are all the same size, and are explicitly equivalent. They then form the intersection of the sphere with the four faces of a tetrahedron, giving the fractal-faceted solid that I used as a vignette on page 378.

[Figure: infinitely-truncated sphere, giving an infinite-sided solid with circular faces, whose map corresponds to an Apollonian Net]

There are two main ways to construct this solid:
1: Start with a sphere and grind four flat circular faces into it that correspond to the four faces of an intersecting tetrahedron, then keep grinding maximum-sized circular facets into the remaining curved parts, ad infinitum.

2: Start with a tetrahedron, and lop off the four points to give a shape with four regular hexagonal faces, and four new triangular faces where the tips used to be. Then continue lopping off the remaining points, ad infinitum. Each wave of cutting creates a new face at each cut, and doubles the number of sides on all the existing faces. If we cut at a depth that'll keep these polygons regular, then with an arbitrarily-high number of cuts, the faces converge toward perfect circles, and the point-mesh of the resulting peaks converges downwards to settle onto the surface of the sphere used in method 1.

Either way works.



This sort of duality is common when we construct standard polyhedra – the network of relationships in a regular polyhedron tends to be another regular polyhedron, so we can usually get to a regular shape by starting from either of its two relatives. Four of the five Platonic solids pair up nicely like this, and the last – the tetrahedron – is a special case whose "dual solid" partner is another tetrahedron. But we normally only consider these sorts of dualities for solids whose faces are regular polygons with finite numbers of straight sides, and don't include the infinite-sided fractal shapes that show up when one of the parent solids is an infinitely-faceted sphere (which, in some ways, almost counts as a sixth Platonic solid).

We don't have to start with a tetrahedron, we can make these fractal solids from any regular polyhedron (cube, etc.). But the tetrahedral and icosahedral versions probably look the nicest. I find the cube-based version a bit disappointing, but I grew up with rounded-cornered dice with circular faces, so perhaps I'm just a bit blasé about the solid that corresponds to the "six-circle" version of the Apollonian net.

From here, we have three immediate ways to generate new families of solids:
(1) we can choose different starting solids,
(2) we can vary the number of cuts or cutting stages (from zero to infinity), to produce finite-sided solids that look more like cut gemstones, and
(3) we can vary how the cutting is done. If we make our cuts too shallow, then the facets are distorted away from circularity, and the overall shape isn't a sphere, but has flat-topped bulges where the original polyhedral points used to be. If we cut too deep, we get bulges in the shape of the original solid's "dual" sibling, with each bulge tipped by an edge.



Another cool thing about these nets is their topological transformability. With the "closed" version, every circle has three parents of the same size or larger, including the four primary circles (which count as each other's parents). You can transform between the different versions of the net by warping and resizing, while still keeping everything as circles.

This lets us get to tilings that don't automatically suggest standard polyhedra, such as the "two-large-enclosed-circles" version that I used for the "fractal Yin-Yang" symbol on page 145, and the asymmetrical versions on page 224. And once I'd written the scripts and code to generate these figures, I had a few more blank bits in the book to fill, so I knocked up the "triangular boundary" version on page 370 which, actually, has some other interesting proportions. The "triangle" version includes parts that represent the limiting case of the edge of the Apollonian Gasket when we zoom in so far that the outer circle tends toward a straight line. Filling these voids then gives the special-case Ford Circles tiling.

Some serious people have worked on this subject. You can also Google Descartes' Theorem (after René Descartes (1596-1650)) and Soddy Circles. Frederick Soddy and Lester Ford only produced their papers in 1936 and 1938, so the Apollonian Net involves math research that extends across more than two thousand years, and isn't finished yet.

It would have been nice to meet the person who designed that floor, though.

Sunday, 18 April 2010

Ultra-high resolution photography

The "jitter" method (earlier post) can also be used for ultra-high-resolution photography.

People want higher-resolution cameras, but the output resolution of a camera is usually limited by the number of pixels in its sensor. Some digital cameras have a "digital zoom" function, but this is a bit of a cheat: it simply invents extra pixels between the real pixels by smudging the adjacent colour values together. Conventional digital zoom doesn't actually give you any additional information or detail, it just resizes a section of the original image to fill the required size.

A second problem with cameras is camera shake. If you're holding the camera in your hand, then a tiny movement of the camera can result in the image being panned across the sensor while the CCD imaging chip is doing its thing, giving a blurred photograph. The smaller the pixel elements, and the greater the optical zoom, the worse this gets. We can try clamping the camera and taking a shorter-exposure image (so that the camera doesn't have as much time to move), but shorter exposures lead to more random "noise" per pixel, due to the reduced sampling time.



But with enough processing power, we can use jitter techniques to solve both problems:
In our earlier "audio" example, we deliberately added high-frequency noise to an audio signal to shift the sampling threshold up and down with respect to the signal, and we took multiple samples and overlaid them to achieve sub-sample resolution.
With digital photography we can use "positional" noise: we vary the alignment of the camera sensor to the background image, take multiple samples, and overlay those (aligned to subpixel accuracy), to generate images that have higher resolution than the camera sensor. In some ways, this is a little like the Nipkow disc approach used in early television systems, which often used a swept array of less than a hundred sensor elements to provide a passable image ... in this case, we're not sweeping a single strip of sensors at right angles, but an entire grid of pixel elements, and using their random(-ish) offsets to extract real intermediate detail.
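Here's a toy one-dimensional sketch of the idea. I'm assuming the per-frame offsets are already known (a real pipeline would have to estimate them by registering the frames against each other), and I'm ignoring the blur contributed by each pixel's finite width.

    import numpy as np

    rng = np.random.default_rng(1)
    fine = 8                                     # sub-positions per sensor pixel
    scene = np.sin(np.linspace(0, 20, 256))**2   # the "true" high-resolution scene

    accum = np.zeros_like(scene)
    hits = np.zeros_like(scene)
    for _ in range(100):
        shift = rng.integers(0, fine)            # this frame's sub-pixel jitter
        idx = np.arange(shift, scene.size, fine)              # coarse sample positions
        frame = scene[idx] + rng.normal(0, 0.05, idx.size)    # one noisy low-res frame
        accum[idx] += frame                      # deposit at the known offsets
        hits[idx] += 1

    recon = accum / np.maximum(hits, 1)
    print(np.abs(recon - scene).mean())          # ~0.01: real detail on the fine grid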

Instead of camera shake being a problem, it becomes Our Friend! The individual images will be noisier, but when you recombine a second's worth of images, the end result should have noise levels comparable to a single one-second exposure – and since you might not normally try to take a one-second exposure (because of camera stability issues), static scenes might sometimes end up with reduced noise as well as enhanced resolution.

So, if we have a programmable camera, in theory it's possible to design an "ultra-resolution" mode that fires off a series of short-exposure images while we hold the camera, and then makes us wait while its processor laboriously works out the best way to fit all the shots together ... or saves the individual shots to their own directory, to be assembled later by a piece of desktop software.
If we were able to design the camera from scratch, we'd probably also want to include a gadget to deliberately nudge the CCD sensor diagonally while the component shots were being taken. If the software's smart enough, the nudging doesn't have to be particularly accurate, it just has to give the sensor a decent spread of deliberate misalignments. A cheap little piezo device might be good enough.



The problem with this approach is getting hold of the software: In theory, you can try aligning images by hand, but in practice ... it doesn't really seem sensible.
People are already writing algorithms for this sort of stuff – it's what allows the Hubble space telescope to take those absurdly high-resolution images of distant galaxies, and presumably the military guys also use the technique to get extreme resolution enhancements from spy satellite hardware. For analysing and aligning photos with "free-form" offsets, the necessary techniques already seem to be included in the Autostitch panoramic software, which even includes the ability to distort images to make them fit together better – it wouldn't seem to take a lot to turn Autostitch into an ultra-resolution compositor.

Amateur astronomers are now enthusiastically using the technique, and sharing resources (try using "drizzle" as a Google search keyword).
Suppose that you want to take an ultra-high resolution photograph of the full Moon – you train your camera-equipped telescope at the Moon, lock it down, and set it to keep taking ten pictures per second for an hour while the Moon gradually arcs across the sky and its corresponding image crawls across your image-sensor ... and then feed the resulting thirty-six-thousand-odd images into a sub-pixel alignment program, to chew over for a few weeks and pull out the underlying detail. As long as the matching algorithm knows that it's supposed to be lining up the part of the images that contain the big round yellow thing rather than the clouds or the treetops, there wouldn't seem to be any real limit to the achievable resolution. Okay, so you have different atmospheric distortions when the Moon is in different parts of the sky, and when the air temperature drifts, but with a sufficiently-smart autostitch-type warping, even that shouldn't be a problem. If you didn't have a "rewarping" feature, you'd probably just have to decide which part of the moon you wanted the software to use as a master-key when lining up the images.



Techniques like this go beyond conventional photography and enter the territory of hyperphotography – we're capturing additional information that goes beyond our camera's conventional ability to take images, and doing things that, at first sight, would seem to be physically impossible with the available hardware. A bit of knowledge of quantum mechanics principles is useful here: we're not actually breaking any laws of physics, but we're shunting information between different domains to obtain results that sometimes seem impossible.

There's a whole family of hyperphotographic techniques: I'll try to run through a few others in a future post.

Saturday, 10 April 2010

Titanic Syndrome

[Figure: RMS Titanic Memorial Plaque (detail), Eastbourne Bandstand]

On the 10th of April 1912, the RMS Titanic set out on her first passenger-carrying voyage. The Titanic and her Olympic-class sister-ships were state-of-the-art. They had a double-hulled design that meant that if one hull ruptured, the ship was still seaworthy. The ship was considered to be practically unsinkable.

Four days later it was at the bottom of the ocean with the bodies of 1517 crew and passengers. The "unsinkable" ship was arguably the most "sinky" ship in human history.
It's normally difficult to assign a "sinkiness" ranking to ships, given that each failed ship only normally manages to sink once, but by sinking before it even made it to the end of its maiden voyage, and killing so many people, the Titanic flipped straight from being supposedly one of the safest seagoing structures ever built, to one of the most dangerous.



Titanic Syndrome
isn't based on any specific mechanism. "Syndromes" are recognisable convergences of trends that can sometimes associate a particular outcome with a recognisable set of starting parameters. When we notice one of these patterns, we sometimes have a good idea how things are likely to end without having to know the mechanism that gets us there.

In the case of Titanic Syndrome, the association is pretty self-explanatory: when people tell us that nothing can possibly go wrong, that everything's perfectly safe, that a plan is foolproof ... things usually turn out badly.

Why did the Titanic disaster happen, and happen so emphatically? The obvious answer is that the ship sank because it struck an iceberg, but there are additional factors that track back to that initial belief that the ship was almost indestructible. If the ship's crew had been less confident, perhaps they'd have done a better job of keeping watch for ice, or cut their speed. If the shipyard had been less confident about the ship's hull, maybe they'd have built it with better-quality materials, rather than just assuming that if one hull failed there was a spare. And if the company hadn't been so sure that lifeboats weren't really necessary, perhaps they'd have included enough for everyone, and not so many people would have had to drown while waiting to be rescued when the ship went down.



In science, hyperbole is usually an indicator that something's wrong. Theories that are described as "pretty good" usually are, but theories that we're told are excellent, or that can't possibly be wrong, usually turn out to be already failing, unnoticed. Titanic Syndrome.

Theories that really are that good don't need to be oversold – it's usually possible to express confidence in an established model more convincingly with quiet understatement. On the other hand, if a core theory is right, but the people involved are still trying to exaggerate the case for it (even though their actions are likely to backfire), then if they're making that mistake, they've probably been making others, too. So "cheerleading" is usually a red flag that some things in the picture are likely to be dodgy, even if the fundamentals of a theory are right.

And sometimes the "cheerleading" stops people noticing that the fundamentals are wrong. And those are the times ... when everybody's invested so strongly in something that they really don't want to believe in the possibility of problems, or start thinking seriously about fallback positions or lifeboats ... that you get another "Titanic-class" event.

Friday, 2 April 2010

General Relativity is Screwed Up

With Einstein's general theory of relativity, one of the theory's harshest critics was probably Einstein himself. This was partly a matter of personal discipline, and partly – like the joke about sausages – because it's sometimes easier to like a thing if you don't know the gruesome details of how it was actually made. Einstein found it easy to be sceptical about the design decisions that had gone into his general theory, because he was the guy who'd made them. It had been the best general theory that had been possible at the time, said Einstein, but with the benefit of hindsight ... perhaps its construction wasn't entirely trustworthy.
The "iffy" aspects of C20th GR are difficult to see from within the theory, because – where the lower-level design decisions have forced a fudge or bodge – from the inside, these things seem to be completely valid, derived (and quite necessary) features. It's not until we look at the structure from the outside, with a designer's eye, that we see the arbitrary design decisions and short-term fudges that went into making the theory work the way it does.

Sure, the surface math looks pretty (with no obvious free variables or adjustable parameters), but that's because, as part of the theory's development, all the ugliness necessarily got moved down to the definitional and procedural structures that sit below the math. Change those underlying structures, and the surface mathematics break and reform into a different network that looks similarly unavoidable. So even though the current system looks like the simplest possible theory when viewed from the inside, we can't invest too much significance in this, because if the shape and structure was different, that'd look like the simplest possible theory, too.

To see how the theory might have been, we need to look at the subject's protomathematics, the bones and muscles and guts of the theory that dictate its overall shape, and which don't necessarily have a polite set of matching mathematical symbols.

Here are two interlinked examples of decisions that we made in general relativity that weren't necessarily correct:

Problem #1: Gravitational dragging, velocity-dependent gravitomagnetic effects

As Fizeau demonstrated back in ~1851 with moving water, moving bodies drag light. General relativity describes explicit gravitomagnetic dragging effects for accelerating and rotating masses, and logic pretty much then forces it to describe similar effects for relative velocity, too. When you're buffeted by the surrounding gravitational field of a passing star, the impact gives you some of the star's momentum – momentum exchange means that the interaction of the two gravitational fields acts as a sort of proxy collision, and the coupling effect speeds you up a little, and slows down the star, by a correspondingly tiny amount.

For a rotating star, GR1915 also agrees that you're pulled preferentially to the receding side – there's an explicit velocity component to gravitomagnetism (v-gm). Even quantum mechanics seems to agree. And we can use this effect to derive the existence of the slingshot effect, which is not just theory, but established engineering.

But v-gm effects appear to conflict with Newton's First Law of Motion: If all the background stars dragged light according to their velocity, then as you moved at speed with respect to the background starfield, the receding stars would pull on you a little bit stronger than the others, slowing you down. There'd be a preferred state of rest, that'd correspond to the state in which the averaged background starfield was stationary (ish). This doesn't agree with experience.

So the v-gm effect gets edited out of current GR, and when we do slingshot calculations, we tend to use Newtonian mechanics and model them in the time domain, instead. We compartmentalise.
Summary:
Argument: The omission of v-gm effects from general relativity seems to be arbitrary and logically at odds with the rest of the theory, but it seems to be “required” to force agreement with reality … otherwise “moving” bodies would show anomalous deceleration.

I'd consider this a fairly blatant fudge, but GR people would tend to refer to it as essential derived behaviour (based on the condition that the theory has to agree with reality).

Problem #2: Gravitational Aberration

If signals move at a finite speed, the apparent positions of their sources get distorted by relative motion. We "see" a source to be pretty much in the direction it was when it emitted the signal, with a position and distance that's out of date, thanks to the signal timelag.

If gravitational and optical signals both move at about the same speed, "c", (ignoring nonlinear complications), then we expect to "feel" the gravitational signal of a body to be coming from the same position that the object is seen to occupy. Which is kinda helpful.

But it seems that under current GR, the apparent "gravitational" position of a body gets assigned to its instantaneous position, as if the speed of gravity was infinite. We say that the speed of gravity isn't actually infinite, but that moving bodies somehow "project" their field forwards and then sideways so that it looks infinite as far as the observer's measurements are concerned. In other words, it seems that under current GR, there's no such thing as gravitational aberration.

This is a bit like the sound of fingernails scratching down a blackboard. It means that there's no longer the concept of a body having a single observed position, and we get separate definitions of "apparent position" for EM and gravity. This badly weakens the theory, because it means that mismatches between the two that we might normally look out for to show us that we've made a mistake somewhere are the theory's default behaviour. We lose a method of testing or falsifying the model.

So why do we do it?

We...ell, the usual argument involves planetary orbits and the apparent position of the Sun as seen by an observer on a rotating planet. But that argument's complicated and perhaps still a bit unconvincing, so … the simpler argument is that if gravitational aberration existed, it'd again seem to screw up Newton's First Law. When an astronaut travels through the universe at high speed, the background stars appear to bunch together in front of them (e.g. Scott and van Driel, Am.J.Phys 38 971-977 (1970) ), and if the gravitational effect of all those stars was shifted to the front as well, then we'd expect the astronaut to be pulled towards the region of highest apparent mass-density … forwards … and this'd further increase their forward speed, making the aberration effect even worse, which'd then create an even stronger forward pull.
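(As a quick sanity check of the "bunching" half of that argument, the standard special-relativity aberration formula, cos θ' = (cos θ + β)/(1 + β cos θ), is easy to test numerically. This only demonstrates the apparent crowding of the starfield, not the gravitational consequences being argued over.)

    import numpy as np

    rng = np.random.default_rng(2)
    beta = 0.9                                 # astronaut's speed as a fraction of c
    cos_th = rng.uniform(-1, 1, 100_000)       # isotropic sky: uniform in cos(theta)
    cos_apparent = (cos_th + beta) / (1 + beta * cos_th)

    print((cos_apparent > 0).mean())           # ~0.95: 95% of the stars appear ahead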

So again, we manually edit the effect out, say that it's known not to exist, and then do whatever we have to do with math and language to stop the theory contradicting us.

Summary:
Argument: Losing gravitational aberration seems to be arbitrary and logically at odds with the rest of the theory, but seems to be "required" to force agreement with reality … otherwise "moving" bodies would show anomalous acceleration.



Put these two arguments together, and you should immediately begin to see the problem:

If we'd resisted the "urge to fudge", it looks as if our two problems would have eventually cancelled each other out anyway, without our having to get involved. They seem to have the same characteristic and magnitude, but different signs. One produces anomalous acceleration, the other anomalous deceleration. Put them together and the moving astronaut doesn't accelerate or decelerate, because the stronger rearward pull of the fewer redshifted stars behind them is balanced by the increased number of stars ahead, which are blueshifted and individually weakened. Instead of our imposing N1L-compliance on general relativity as a necessary initial condition, the theory works out N1L all by itself, as an emergent property of curved spacetime.

So in these two cases, we seem to have corrupted the "deep structure" of the current general theory of relativity not once but twice, by trying to solve problems sequentially rather than letting the geometry generate the solutions for us, organically. Both "deleted" effects turn out to be necessary for a "purist" general theory … but once we'd fudged the theory once to eliminate one of them, we had to go back and fudge the theory a second time to eliminate the second effect that would otherwise have balanced it out.

And in doing that, we didn't just "double-fudge" a few details of the theory, we broke important parts of the structure that should have allowed it to expand and blossom into a larger, more tightly integrated, more strictly falsifiable system that could have embraced quantum mechanics and dealt properly with cosmological issues. General relativity should have been a tough block of dense, totally interlocking theory, with independent multiply-redundant derivations of every feature, rather than the thing we have now.



The fudging of these two issues also changed some of the theory's physical predictions:

Losing gravitational aberration gave us a different set of observerspace definitions that altered the behaviour of horizons. Losing v-gm meant that we got different equations of motion, once again a different behaviour for black holes, and no way of applying the theory properly to cosmology without generating further cascading layers of manual corrections reminiscent of the old epicycle approach to astronomy. It also created a statistical incompatibility with quantum mechanics.

So general relativity in its current form seems to be pretty much screwed. GR1915 was fine as an initial prototype, but it should really have been replaced half a century ago – in 2010, it's an ugly, crippled, mutated, limited form of what the theory could, and should, have been by now. But because people fixate on the math rather than on the structure, they can't see the possibility of change, or the beauty of what general relativity always had the potential to become. And that's why the subject's been almost stalled for pretty much the last fifty years: Einstein died, and too many of the surviving physics people who did this stuff couldn't see past the mathematical and linguistic maze that'd developed around the subject. They didn't "get" the design principles and the dependencies between the choice of initial design decisions and the characteristics of the resulting model, and they didn't appreciate the design aesthetics.

And I find that sad on so many levels.

Sunday, 28 March 2010

3D Audio, and Binaural Recording

[Figure: binaural recording – NIH 'Virtual Human' head cross-section, Neumann KU100 'dummy head' binaural microphone (inverted image), Sound Professionals in-ear microphone (left ear)]

One of the dafter things they teach in physics classes is that because humans only have two ears, we can only hear location by comparing the loudnesses of a sound in both ears, and that because of this we can only hear "lefty-rightiness", unless we start tilting our heads.

It's wrong, of course: Physics people often suck at biology, and (non-physicist) humans are actually pretty good at pinpointing the direction of sound-sources, without having to tilt our heads like sparrows, or do any other special location-finding moves.

And we don't just perceive sound with our ears. It's difficult to locate the direction of a backfiring car when it happens in the street (because the sound often reflects off buildings before it reaches us) ... but if it happens in the open, we can directionalise by identifying the patch of skin that we felt the sound on (usually chest, back, shoulder or upper arm), and a perpendicular line from that "impact" patch then points to the sound-source.
For loud low-frequency sounds, we can also feel sounds through the pressure-sensors in our joints.

But back to the ears ... while it's obviously true that we only have two of them, it's not true that we can't use them to hear height or depth or distance information. Human ears aren't just a couple of disembodied audio sensors floating in mid-air, they're embedded in your head, and your head's acoustics mangle and colour incoming sounds differently depending on direction, especially when the sound has to pass through your head to get to the other ear. The back of your skull is continuous bone, whereas the front is hollow, with eyeballs and eyesockets and nasal and sinus cavities, with Eustachian tubes linking your throat and eardrums from the inside. You have a flexible jointed spine at the back and a soft hollow cartilaginous windpipe leading to a mouth cavity at the front, and as sounds pass through all these different materials to reach both ears, they get a subtle but distinctive set of differential frequency responses and phase shifts that "fingerprint" them based on their direction and proximity.

To make the colouration even more specific, we also have two useful flappy things attached to the sides of our heads, with cartilaginous swirls that help to introduce more colourations to sounds depending on where they're coming from. Converting all these effects back into direction and distance information probably requires a lot of computation, but it's something that we learn to do instinctively when we're infants, and we do it so automatically that – like judging an object's distance by waggling our eye-focusing muscles – we're often not aware that we're doing it.
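
The full directional computation involves what audio people call head-related transfer functions, which are hard to write down compactly, but the two simplest cues are easy to sketch in code. The Python below fakes a direction using just interaural time and level differences; the head radius and the level-scaling factor are rough illustrative assumptions, not measured values, and real localisation also leans heavily on the spectral colourations described above.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m -- a rough average, assumed for illustration
SAMPLE_RATE = 44100

def crude_binaural_pan(mono, azimuth_deg):
    """Place a mono signal at an azimuth using only the two simplest cues:
    interaural time difference (ITD) and interaural level difference (ILD).
    Real localisation also uses spectral colouration, which this ignores."""
    az = np.radians(azimuth_deg)            # 0 = straight ahead, +90 = hard right
    # Woodworth's approximation for the extra path length around the head
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * SAMPLE_RATE))
    far_gain = 1.0 - 0.3 * abs(np.sin(az))  # crude shadowing of the far ear
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)

# usage: a 1 kHz tone placed 60 degrees to the right
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = crude_binaural_pan(np.sin(2 * np.pi * 1000 * t), 60)
```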

The insurance industry knows that people who lose an external ear or two often find it more difficult to directionalise sound. Even with two undamaged eardrums, simple tasks like crossing the road can become more dangerous. If you've lost an ear, you might find it more difficult working on a building site or as a traffic cop, even if your "conventional" hearing is technically fine.

Binaural, or 3D sound recording:

We're good enough at this to be able to hear multiple sound sources and pinpoint all their directions and distances simultaneously, so with the right custom hardware, a studio engineer can mimic these effects to make the listener "hear" the different sound-sources as coming from specific directions, as long as they're wearing headphones.

There are three main ways of doing this:

1: "Dummy head" recording

This literally involves building a "fake head" from a mixture of different acoustic materials to reproduce the sound-transmission properties of a real human head and neck, and embedding a couple of microphone inserts where the eardrums would be. Dummy head recording works, but building the heads is a specialist job, and they're priced accordingly. Neumann sell a dummy head with mic inserts called the KU100, but if you want one, it'll cost you around six thousand pounds.
Some studios have been known to re-record multitrack audio into 3D by surrounding a dummy head with positionable speakers, bunging it into an anechoic chamber and then routing different mono tracks to different speakers to create the effect of a 3D soundfield. But this is a bit fiddly.

2: 3D Digital Signal Processing

After DSP chips came down in price, the odd company started using them to build specialist DSP-based soundfield editors. The Roland RSS-10, for instance, was a box that let you feed in "mono" audio tracks and choose where each one ought to appear in the soundfield. You could even add an outboard control panel with alpha dials that let you sweep and swing positions around in real time.
Some cheap PC soundcards and onboard audio chips nominally let you position sounds in 3D, but the few I've tried have been a bit crap; their algorithms probably don't have the detail or the processing power to do the job properly.
At "only" a couple of thousand quid, the RSS-10 was a cheaper, more controllable option for studio 3D mixing than using a dummy head in a sound booth, and Pink Floyd supposedly bought a stack of them. There's also a company called QSound that does this sort of thing: QSound's algorithms are supposedly based more on theoretical models, Roland's more on reverse-engineering actual audio.

3: "Human head" recording

There's now a third option: a microphone manufacturer called Sound Professionals had the idea that, instead of using a dummy human head, why not use a real human head?
This doesn't require surgery: you just pop the special microphones into your ears (making sure that you have them the right way round), and the mics record the 3D positioning colouration created by your own head's acoustics.
The special microphones cost a lot less than a Neumann KU100, and they're a lot easier to use for field recording than hauling about a dummy head – it's just like wearing a pair of "earbud"-style earphones. The pair that I bought required a mic socket with DC power, but I'm guessing that most field recorders probably provide that (they certainly worked fine with a Sony MZ-N10 minidisc recorder).
Spend a day wandering around town wearing a pair of these, and when you listen to the playback afterwards with your eyes closed, it's spooky. You hear //everything//. Birds tweet above your head, supermarket trolley wheels squeak at floor level, car exhausts grumble past the backs of your ankles as you cross a road, supermarket doors --swisssh-- apart on either side of you as you enter.
"Human head" recording isn't quite free from problems. The main one is that you can't put on a pair of headphones to monitor what you're recording in real time, because that's where the microphones are: you either have to record "blind" or have a second person doing the monitoring, and you can't talk to that person or turn your head to look at them (or clear your throat) without messing up the recording. If you move your head, the sound sources in the recording swing around in sympathy. Imagine trying to record an entire symphony orchestra performance while staring determinedly at a fixed point for an hour or two. Tricky.
The other thing to remember is that although the results might sound spectacular to you (because it was your head that was used for the recording), it's difficult to judge, objectively, whether other people are likely to hear the recorded effect quite so strongly. For commercial work you'd also want some way of checking whether your "human dummy" has a reasonably "standard" head. And someone with nice clear sinuses is likely to make a better recording than someone with a cold, or with wax-clogged ears.
Another complication is that most people don't seem to have heard of "in-ear" microphones for 3D human head recording, so they can be difficult to source: I had to order mine from Canada. 

Media

For recording and replaying the results: the effect is based on high-frequency stereo colourations and phase differences, and these are exactly the sort of thing that MP3 compression tends to strip out (or that gets mangled on analogue cassette tape), so it's probably best to record binaural material as high-quality uncompressed wav files. If you find by experiment that your recorder can still capture the effect at a high-quality compressed setting, then fine. The effect is captured nicely on 44.1kHz CD audio, and at a pinch it even records onto high-quality vinyl: the Eurythmics track "I Love You Like a Ball and Chain" had a 3D instrumental break in which sound sources rotate around the listener's head, off-axis, and if you look at the vinyl LP, the cutting engineer wide-spaced the grooves for that section of the recording to make absolutely sure that it'd be cut at maximum quality.
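
If you want a rough objective check of whether a particular codec or recorder setting has preserved the timing cues, one crude test is to estimate the left-right delay by cross-correlation before and after the encode/decode round trip: if the lag estimates disagree, or the correlation peak smears out, the cues probably didn't survive. A sketch of my own crude test, not a standard tool:

```python
import numpy as np

def interchannel_lag(left, right):
    """Estimate the left-right delay (in samples) via cross-correlation --
    a crude proxy for whether the timing cues survived an encode/decode."""
    corr = np.correlate(left, right, mode="full")
    return np.argmax(corr) - (len(right) - 1)

# compare the lag measured on the original wav against the lag measured
# on the decoded mp3 of the same snippet: if they disagree, the codec
# has probably thrown away the cues the 3D effect depends on
```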

Sample recordings

I'd upload some examples, but my own test recordings are on minidisc, and I no longer have a player to do the transfer. Bah. :(
However, there's some 3D material on the web. The "Virtual Barber Shop" demo is a decent introduction to the effect, and there are some more gimmicky things online, like QSound's London Tour demo (with fake 3D positioning and a very fake British accent!). When I was looking into this a few years back, the nice people at Tower Records directed me to their spoken-word section, where they stocked a slightly odd "adult" CD that included a spectacular 3D recording of, uh, what I suppose you might refer to as an adult "multi-player game". Ahem. This one actually makes you jump, as voices appear without warning from some very disconcerting and alarming places. I'm guessing that the actors all got together on a big bed with a dummy head and then improvised the recording. There are also a couple of 3D audio sites by binaural.com and Duen Hsi Yen that might be worth checking out.
So, the subject of 3D audio isn't a con. Even if the 3D settings on your PC soundcard don't seem to do much, "pro" 3D audio is very real – with the right gear, the thing works just fine. It's also fun.

Friday, 19 March 2010

Virtual Lego


Someone's finally come up with the "killer application" for VR and computer-augmented reality.

It's buying Lego.

You walk into a participating Lego shop, pick up a box of Lego, and walk over to the big screen. A video camera shows you your image. You hold out the box in front of you, horizontally, as if you're holding a tray.

The software sees the box, recognises which product it belongs to, and calculates the exact position of the box corners in three dimensions.

It then retrieves a 3D computer model of the assembled kit from its database, and projects a virtual-reality image of the completed Lego masterpiece onto the screen, as if it's sitting on top of the box clutched in your little sticky hands.

You rotate the box, and on the screen, the 3D model rotates. Tilt the box and it tilts. Move the box around and you get to see the final Lego construction from different angles, complete with perspective effects.
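
I don't know what Lego's system actually does internally, but recovering an object's 3D orientation from the image positions of a few known corners is a textbook "pose estimation" problem, and libraries like OpenCV solve it directly. A hypothetical sketch of that step (the box dimensions here are invented):

```python
import numpy as np
import cv2

# physical coordinates of the box top's corners, in metres (box frame);
# made-up dimensions, purely for illustration
BOX_W, BOX_H = 0.38, 0.28
object_pts = np.array([[0, 0, 0], [BOX_W, 0, 0],
                       [BOX_W, BOX_H, 0], [0, BOX_H, 0]], dtype=np.float64)

def box_pose(image_corners, camera_matrix):
    """Recover the box's rotation and translation relative to the camera
    from the pixel positions of its four top corners."""
    ok, rvec, tvec = cv2.solvePnP(object_pts,
                                  image_corners.astype(np.float64),
                                  camera_matrix, None)
    return rvec, tvec   # feed these to the renderer so the virtual
                        # model sits on the box and turns with it
```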

Oh, and the computer-generated Lego image is also animated. If it's a garage, the little Lego cars scoot about; if it's a building, the little Lego people wander around doing their own thing, "Sims"-style; and if it's a tipper truck, the truck drives about the top of the box, tipping stuff.

It's very, very cool.

Sunday, 14 March 2010

The Caltech Snowflake Site

thumbnail link image to CalTech's snowflake site, www.snowcrystals.com
While I was finishing off yesterday's snowflake post, I came across Caltech's excellent snowflake site at www.snowcrystals.com (Kenneth G. Libbrecht).

Lots of photos, lots of useful information. Caltech even have their own snowflake-creation machine that, instead of electrostatically levitating the snowflakes as they grow, or using a vertical blower, applies an electric field to grow narrow ice-spikes, and then lets the snowflakes form at the spikes' tips (which means that the central mount is probably rigidly aligned to the resulting flake with atomic precision, and doesn't seem to affect the growing process).

If you're in the UK, and you've mocked train companies for blaming their electric locomotive failures on "the wrong kind of snow", well, it turns out that snow crystallisation has a slightly crazy dependency on both temperature and airborne water content, forming a range of very different shapes: from the classic branched-hexagon "Christmas card" forms, to hexagonal plates, to long hexagonal tubes (snowflake chart).
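
For the curious, here's my very rough reading of that chart as a lookup table. The temperature band edges are approximate, and this is a caricature of the real morphology diagram rather than a faithful copy:

```python
def crystal_habit(temp_c, high_humidity):
    """Rough lookup of snow-crystal habit vs temperature, after the
    standard morphology diagram. Band edges are approximate."""
    if temp_c > -4:
        return "plates"
    if temp_c > -10:
        return "needles" if high_humidity else "columns"
    if temp_c > -22:
        return "branched stars / dendrites" if high_humidity else "plates"
    return "columns"

# a flake drifting through different layers of sky changes growth mode:
for t in [-3, -6, -15, -25]:
    print(t, crystal_habit(t, high_humidity=True))
```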

The Caltech site explains the wide variety of snowflake forms by this temperature-dependence: the idea being that snowflakes form symmetrically because the conditions across the flake are the same at any given time, and that the extreme variety of shapes is a function of the varying environmental conditions that the whole snowflake experiences as it falls through different regions of sky. It might go through a "spiky dendrite" phase, then change temperature and start trying to grow plates, and then go back to "dendrite" mode, and the exact amount of time spent in these different phases then dictates the shape that emerges.

If the identical patterning of the arms is purely a result of the identical (varying) growing conditions across the whole flake, then we don't require any additional mechanism for regulating symmetry. In that case, though, we'd expect individual snowflakes to accumulate diverging asymmetries as they grow, due to gradients of temperature or water availability or light or airflow across the flake – which would seem to make the formation of extremely regular crystals a bit unlikely.
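
That argument is easy to mock up numerically: give six arms exactly the same shared growth-rate history and they stay perfectly identical, but add even a tiny per-arm perturbation (standing in for a gradient across the flake) and the arms steadily drift apart. A toy sketch – the noise level is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
steps, arms = 1000, 6

# shared environmental history: the same growth rate hits all six arms
shared = rng.uniform(0.0, 1.0, size=steps)

# tiny per-arm perturbation, standing in for a gradient across the flake
gradient = 0.01 * rng.standard_normal(size=(steps, arms))

identical = np.cumsum(np.tile(shared[:, None], (1, arms)), axis=0)
perturbed = np.cumsum(shared[:, None] + gradient, axis=0)

print(np.ptp(identical[-1]))  # 0.0 -- the arms stay perfectly matched
print(np.ptp(perturbed[-1]))  # nonzero, and growing as the flake grows
```
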
But the Caltech site argues that actually, most natural snowflakes are pretty irregular, and that people generally overestimate the degree of symmetry because the artsy folks who photograph them (presumably including Caltech!) give a misleading impression by carefully selecting the "best" (most regular) flakes to photograph and publish.

That explanation seems a bit at odds with the current suggestion of how triangular snowflakes form, though. If triangular snowflakes grow because airflow over the flake creates an asymmetrical growing environment that breaks the hex pattern, and if there's no additional internal symmetry-regulating mechanism, then there's no obvious reason why the resulting aerodynamically-disfigured flake should have 120-degree rotational symmetry. Airflow and a moisture gradient flowing across the flake in one direction might allow a bilateral left-right symmetry between the two sides of the flake that experience the same growing conditions ... but it doesn't explain why the conditions at the leading point of the falling tri-flake (falling point-first) should be identical to those at the two trailing side-points, or why positions on the sides of those two trailing spurs should be equivalent when the airflow is hitting them at different angles. If triangular flakes are due to sideways airflow, then the flake seems to be fighting to retain some sort of symmetry despite significant asymmetrical disruptive forces that ought to be destroying it. That'd increase the odds of there being a significant internal symmetry mechanism in play.

Of course, it may be that our explanation of triangular snowflakes is simply wrong: that airflow isn't disrupting the hex pattern, and that instead chemical contamination (or some other factor) is causing the alternative triangular crystal structure. But that'd still mean that something in our current understanding of snowflakes is wrong or incomplete. Even if yesterday's wacky suggestion about the quantum mirage effect is misguided, we'd still not know why snowflake formation is so sensitive to environmental conditions, or what the (non-aerodynamic) explanation of triangular snowflakes might be.


So again, more research needed.


The Caltech site's debunking of "mysterious" causes of snowflake symmetry is in the "Myths and Nonsense" section at http://www.its.caltech.edu/~atomic/snowcrystals/myths/myths.htm . The page says that there aren't any special forces at work here regulating symmetry, that most snowflakes are asymmetrical and "rather ugly", and that the published examples (including the ones on the site) are atypical, because "not many people are interested in looking at the irregular ones". In other words, if you look through the published work, you get a misleading impression due to publication bias. Well, yes ... quite possibly. But since the idea of what counts as "significant" symmetry might be a bit subjective, and since the datasets aren't available for us to look at, it's difficult to take this as a definitive answer until there's been actual experimental testing done.

Water is weird stuff, and it keeps catching us out. I remember when people used to debunk ice spikes as an obvious example of pseudoscience, and now those are understood, studied, and have their own page on the Caltech site. A lot of "crazy" ideas about water do turn out to be just as dumb as they first appear, but a few turn out to be correct. The trouble is, it's not always immediately obvious which are which.

Saturday, 13 March 2010

Snowflake Engineering, Quantum Mirages and Matter-Replicators

Julia Set
One of the most impressive things about snowflakes is that we still don't really understand how they work.

We understand how conventional crystals grow – normal crystals assemble into large, faceted, regular-looking forms because the flat facets attract new atoms more weakly than the rougher, "uncompleted" parts of the structure, which provide more friendly neighbours for a new atom to bond with. So if you have an "incomplete" conventional crystal, it'll preferentially attract atoms to the sites needed to fill in the gaps, producing a nice large-faceted shape that tries to maximise the size of its facets, as far as it can given the original random distribution of seed crystals.

But snowflakes do something different. Their range of forms makes their growth appear pretty chaotic, but they also manage to be deeply symmetrical. It'd seem that the point of greatest attraction on a region of snowflake doesn't just depend on the atoms that are nearby, but also on the arrangement of atoms on a completely different part of the crystal, which might be some way away, facing in a different direction, on a different spur. The sixfold symmetry of a snowflake suggests that when you add an atom to the point of one of the six spurs, the other five points become more attractive ... add an atom to the side of a spur, and we're dealing with twelve separate sites (twenty-four if the atom is off the plane). Add an atom to a side-branch, and a copy of the electrical-field image of that single atom is transmitted and reflected and multiplied and refocused at potentially tens of corresponding sites on the crystal surface. And that's for every atom in the crystal.
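
Those site counts just fall out of the flake's symmetry group: in-plane hexagonal symmetry with mirrors (the dihedral group D6) gives a generic point twelve equivalent images, dropping to six if the point sits on a mirror axis, and adding the out-of-plane reflection doubles those to twenty-four and twelve. A quick sketch that counts the in-plane orbit:

```python
import numpy as np

def d6_orbit(point, tol=1e-9):
    """Images of a 2D point under the snowflake's in-plane symmetry
    group D6: six rotations, each with and without a mirror flip."""
    images = []
    for k in range(6):
        a = k * np.pi / 3
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        for mirror in (1, -1):
            p = rot @ (point * np.array([1, mirror]))
            if not any(np.allclose(p, q, atol=tol) for q in images):
                images.append(p)
    return images

print(len(d6_orbit(np.array([1.0, 0.0]))))   # 6  -- point on a spur axis
print(len(d6_orbit(np.array([1.0, 0.3]))))   # 12 -- generic site on a spur side
```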

This would be beyond fibre-optics, and beyond conventional holography. It'd be multi-focus holography, and the holographically-controlled assembly of matter at atomic scales to match a source pattern – making multiple copies without destroying the original. It'd be using holographic projection to assemble multiple macroscopic structures that are atom-perfect copies of an original. And that idea should make the hairs on the back of your neck start to stand up.

The closest thing I've seen in print to this is the quantum mirage effect described in Nature, 3 Feb 2000. Researchers assembled an elliptical quantum corral of atoms on a substrate, and placed another atom at one of the ellipse's two focal points. They then examined the second focal point, and found that the atom's external field properties seemed to be projected and refocused at the second point, to give a partial "ghost" of the source atom [*][*][*]. You could interact with the ghost even though it wasn't there. Presumably your actions on the "ghost particle" copy would be transmitted back to the source, which'd be recreating the ghost behaviour by a process of electrical ventriloquism, using the elliptical reflecting wall to "throw" its voice to the ghost location.
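
The ellipse is the natural shape for this trick because of its defining property: every focus-to-wall-to-focus path has the same length (twice the semi-major axis), so waves scattered from an atom at one focus arrive at the other focus in phase and can re-assemble a coherent "image" there. A quick numerical check of the equal-path property:

```python
import numpy as np

a, b = 2.0, 1.0                    # semi-axes of the corral ellipse
c = np.sqrt(a**2 - b**2)           # focal distance from centre
f1, f2 = np.array([-c, 0]), np.array([c, 0])

theta = np.linspace(0, 2 * np.pi, 1000)
wall = np.stack([a * np.cos(theta), b * np.sin(theta)], axis=1)

# distance focus-1 -> wall point -> focus-2, for every wall point
paths = (np.linalg.norm(wall - f1, axis=1) +
         np.linalg.norm(wall - f2, axis=1))
print(paths.min(), paths.max())    # both 2a: every bounce path is equal,
                                   # so scattered waves re-converge in phase
```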

Something similar may be happening in a perfectly-symmetrical monocrystalline snowflake as it grows. Maybe the crystal's regular structure happens not just to split the image of the atom into multiples, but to refocus them, with phase coherence, at all the key symmetry points. Maybe we could try adding a few metal atoms to one part of a snowflake crystal and seeing whether matching atoms are preferentially attracted to the other corresponding sites.



A possible clue is the phenomenon of triangular-symmetry snowflakes.
It's been suggested that these form in nature when an asymmetrical snowflake falls corner-first, with the airflow disrupting regular hexagonal crystal formation (see also Wired). But since the remaining triangular symmetry is still so strong, this hints that perhaps the strongest linkage between crystal sites is in triples, with a secondary, slightly weaker triplet attraction producing the hex.

Okay, so I suppose there might be problems in attempting to use giant snowflake crystals as matter-photocopiers ... in snowflake formation, every copied pattern forms an extension of the crystal, and if you use the crystal to try to copy other things, the "irregular" matter being copied is liable to disrupt the focusing. You might only be able to copy layers an atom or two thick (at least, to start with).

But a giant atom-perfect monocrystalline snowflake would be an awfully fun thing to play with if you had a chip-fabrication lab with goodies like force-sensing tunnelling microscopes.

And to me, that was the one thing that could have justified building the International Space Station: the ability to build a giant, heavy-duty zero-gravity snowflake, hopefully one big and chunky enough to withstand eventually being brought back to Earth immersed in liquid helium for further study (what does Bose-Einstein condensate do when it's in contact with a hex crystal?). That had to be worth a few billion in research money, and it would have given the public something pretty to look at when it came time to tell them what the money had bought. We haven't done it yet, but maybe ...