Thursday, 24 September 2009

Water on the Moon

'Moondrops'
In today's Times, there's a front-page story saying that the Indian Chandrayaan-1 probe, carrying NASA's Moon Mineralogy Mapper, has now found signs of what might be significant amounts of (presumably frozen) water on our Moon.

For anyone who wants bullet points to explain why this is potentially a game-changer, here they are:

  • Water + electricity = life support
    Humans need water and air to survive (along with temperature control). With enough solar cells, the Moon's not short of electrical power – no pesky atmosphere to get in the way – but water and air are biggies. If the water's already there, we can tick one box, and using electricity to electrolyse water gives us hydrogen and oxygen. Oxygen lets us tick the second box. Normally we breathe atmospheric-pressure air, with 20% oxygen and nearly 80% nitrogen, but we can use pure oxygen at a lower pressure, if we can deal with the additional fire risk associated with pure O2. It'd be nice to have a decent local supply of nitrogen, too, but it's not strictly necessary.

  • Water + heat + rock = building materials?
    Use solar furnaces to roast moondust, or break moonrock into pulverised dust and drive off the more volatile elements, then add water ... and we might just have ourselves a form of locally-sourced readymix concrete.

    You know how, in films, moonbases are often all shiny white metal? To start with, they'd probably look more like adobe mud huts, or holes in the ground, with all the shiny stuff on the inside (apart from the solar panels). What you'd ideally want is big thick walls, at least ten or so feet thick, on all sides, to buffer the temperature changes and block some of the radiation when the Sun does annoying things with solar flares. Perhaps you'd want to maximise your protection from flare radiation without tunnelling, by building in the bottom of a deep crater, near one of the poles ... which is also where we're hoping that some of the surviving "accessible" ice might be found.

    Our building materials don't have to be incredibly strong, or even airtight: we could build a crude hollow blocky mesa as our surface structure and inflate a pressurised Mylar balloon inside or below for living quarters. But it'd be nice to be able to pour a bit of concrete around the balloon to minimise accidents, and it'd be handy to turn moondust into something more manageable. Other than that, we're stuck trying to stack up rocks and fill sandbags with dust. In a vacuum. Not good. Quite how you're supposed to work with concrete in a vacuum without the water immediately boiling off, I don't know, but I'm sure that some clever concrete technologists are working on it. Supercooling, perhaps?

    One problem with building at the bottom of a polar crater is that having a few kilometres of rock in a straight line between you and the Sun isn't so good for solar power. So you'd probably want an array of thin foil mirrors set up around part of the crater rim, redirecting and focusing concentrated sunlight down onto your generators. Luckily, your mirrors can be ultra-lightweight, there's no weather to damage them, and no intervening air to soak up the transmitted energy. Using reflectors minimises the amount of heavy power cabling, and also the number of solar generators, and depending on the shape of the ice formation that you're trying to exploit, an aimable solar furnace might also be handy for mining.

  • Hydrogen + Oxygen = rocket fuel
    Hydrogen and oxygen burn rather well together to turn back into water, giving a nice roaring flame. That's the reaction that drives the shuttle's main engines. Given a solar farm and enough time, it'd be nice to have a local fuel production plant on the Moon, making rocket fuel simply from local materials. We'd probably need a robotic refueller to pick up H2 + O2 from the plant, fly back to Earth orbit, find the satellite and fill up its tanks (or swap a standardised empty satellite launch tank with a nice pre-refilled one).

  • H2 + O2 + fuel cell = mobile power
    Fuel cells have a capacity that's only limited by the amount of hydrogen and oxygen you have to feed them. If you're building a water-splitting plant anyway, you might want to send along a spare set of empty fuel cells.

  • Water + electricity + rock + atmosphere = food
    Sure, we can set up a hydroponics lab to grow our own veggies in space, recycle biomass, and use the plants to help remove CO2 and other nasties from the air ... and in theory we can get pretty damned close to a sealed self-perpetuating system. But in practice, you need topups, and safety margins, and an awful lot of water to get the thing started (as the name "hydroponics" kinda suggests). If you're going to be growing algae or fungus or plants to eat, there's a lot of water locked up in the system while they're going through their cycle. Industrial biological reactors usually need whole tanks of the stuff, and water's actually pretty heavy. If water's costing you thousands of dollars per kilo to ship from Earth, it's not cheap stuff. It's probably not quite as expensive as gold, but with current shuttle per-kilo launch costs, it's in the ball-park.
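For anyone who wants to see the arithmetic behind the "life support" and "rocket fuel" bullets, here's a quick back-of-envelope sketch in Python. The one-tonne figure is just my own illustrative example, not anything from the Times story:

```python
# Back-of-envelope water-splitting arithmetic (illustrative numbers only).

M_H2O = 18.015   # molar masses, g/mol
M_O2  = 31.998
M_H2  = 2.016

def electrolysis_yield(water_kg):
    """Split water_kg of water: 2 H2O -> 2 H2 + O2."""
    mol_water = water_kg * 1000.0 / M_H2O
    h2_kg = mol_water * M_H2 / 1000.0
    o2_kg = (mol_water / 2.0) * M_O2 / 1000.0
    return h2_kg, o2_kg

# One tonne of lunar ice...
h2, o2 = electrolysis_yield(1000.0)
print(f"{h2:.0f} kg hydrogen + {o2:.0f} kg oxygen")   # ~112 kg + ~888 kg

# ...and if we burn it all back to water as rocket fuel, the
# stoichiometric oxidiser-to-fuel mass ratio is roughly 8:1:
print(f"O/F mass ratio: {o2 / h2:.2f}")               # ~7.94
```

So most of the mass of the propellant (and of the breathable output) is oxygen, which is handy, because oxygen is the hard-to-lift part.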

With water, the Moon becomes a solar-powered, robotically-constructed gas station and hydroponics plant, remote-controllable from the Earth, with a mild gravity penalty. It can have its own fleet of little refuelling craft, powered by locally-produced lunar rocket fuel.

Without water, it's just a big chunk of rock with some handy boulders to hide behind when there's a bad solar storm.

Anyone whose job involves thinking a decade or two ahead about future lunar, manned, or deep-space payload missions will be watching this story very carefully.


see also: Ice Splat on Mars

Friday, 18 September 2009

Black Holes, Coordinate Reversals, and r=3M

optical caustic effect
Coordinate projections sometimes have a habit of going weird when you try to project them past a gravitational horizon. Sometimes you can do it, sometimes you can't, and sometimes the attempt turns various things inside out.
A cool physical inversion that happens outside the horizon was used as the March 1993 cover story for Scientific American: Black Holes and the Centrifugal Force Paradox (by Marek Artur Abramowicz).

The effect isn't really paradoxical, but it's counter-intuitive until you think it through. Normally, if you orbit a body, you can break free of that body by firing up your spaceship's engines and going faster – too fast to be able to orbit at your current distance.
What the BHCFP says is that if you're skimming too close to a black hole event horizon, and you fire up your engines, then the faster you try to circle, the more that your trajectory is deflected inwards, towards the hole. The centrifugal forces that would normally throw you away from the body, now seem to be inverted, pointing inwards rather than outwards.

The critical threshold beyond which this effect appears is the distance r=3M, exactly one-and-a-half times the radius of the horizon surface (which is at r=2M).

It turns out that the r=3M radius is the photon orbit. It's the critical distance at which light aimed at 90 degrees to the mass will be deflected enough by gravity to perform a complete orbit and end up at its starting-point. The SciAm article has some nice computer graphics showing what a circular self-supporting scaffolding tube constructed around the hole at r=3M would look like to an observer standing inside it ... it'd appear to be straight, and if the observer pulled out a telescope and looked far enough along the tube, they'd expect to see the back of their own head.
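To put rough SI numbers on r=2M and r=3M, here's a minimal Python sketch. The choice of a solar-mass hole is my own example, not from the SciAm article:

```python
# SI values for the geometrised r=2M and r=3M surfaces (illustrative).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def horizon_radius(m_kg):
    """The r=2M horizon surface: r = 2GM/c^2 in SI units."""
    return 2 * G * m_kg / c**2

def photon_orbit_radius(m_kg):
    """The r=3M photon orbit: r = 3GM/c^2, exactly 1.5x the horizon radius."""
    return 3 * G * m_kg / c**2

print(f"horizon (r=2M):      {horizon_radius(M_sun)/1000:.2f} km")       # ~2.95 km
print(f"photon orbit (r=3M): {photon_orbit_radius(M_sun)/1000:.2f} km")  # ~4.43 km
```

The 1.5:1 ratio between the two surfaces holds for any mass, since both radii scale linearly with M.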

So r=3M is special. From the perspective of the observer at r=3M who's hovering with the aid of rocket engines, or standing in our circular tube up above the hole, the universe seems to be divided into two regions. On one side they see the black hole and its immediate surroundings, and on the other, they see the starfield that represents the outside universe. Topologically, both regions can be thought of as solid spheres, with their external parallel surfaces meeting at r=3M. Both regions are trying to impose their will on the observer's local geometry, but at r=3M, a stationary observer feels the geometrical competition between the effect of the two spheres as being in balance (although in order to maintain their position hovering above the hole, they're feeling rather a strong gravitational pull!). Spin either one of the two spheres, and the observer will be pulled towards it – spin both at exactly the same rotational rate – the effect that we'd see if we passed along the tube at high speed – and the radial gravitomagnetic effects of both spheres cancel.

So if you built an electric train to run around the interior of the tube, it'd feel the black hole's conventional gravitational attraction pulling it against one side of the tube ... but that pull would seem to be exactly the same no matter how quickly it circled the hole.

The author's moral is that if you're in a spaceship close to a black hole, and you want to escape, don't just throttle up your engines, actually point your ship away from the damned thing, or you're liable to get a nasty crashy surprise.

"Observerspace" Description:

When we think about the optics of the situation, though, perhaps the hypothetical spaceship captain wouldn't be all that surprised:

See, if we imagine standing on a suspended non-orbiting platform at r=3M, we find ourselves looking along the r=3M surface in any (perpendicular) direction. The surface appears to us to be a flat plane cutting through our location. And because our view along r=3M circles around the hole indefinitely, our view along this apparent plane repeats indefinitely, too – the plane appears to extend indefinitely far in all directions, showing us older and older views of the surface at greater distances, right back to the time that the black hole originally formed. So logically, anything that we see to one side of the plane corresponds to the interior of the r=3M sphere, and everything we see to the other corresponds to the contents of the "rest-of-the-universe" sphere.
The outside universe only seems to exist on one side of this plane. On the other, gravitational lensing effects make the black hole's r=2M surface beneath us appear to be opened out into a second indefinitely-repeating surface, at some distance below the 3M plane.

Once we're at the 3M surface, there are two ways that we can go.
If we slowly winch ourselves upwards away from the hole, then we see the flat 3M boundary of the outside universe curving itself back into a more normal-looking inward-facing enclosing sphere. But if we allow ourselves to be lowered further towards the black hole, to less than r=3M, then the 3M surface continues to distort past being a flat plane, to becoming a concave surface that curves above us, away from the hole. Instead of the universe surrounding the black hole, it now seems to us that the black hole (and the r=3M surface) is surrounding the universe!
The region that we know ought to be just above the 2M surface appears visually to us to be part of a concave shell, apparently wrapped around a ball representing the remaining universe. The abstract, "topological" idea that our location can affect the choice of which sphere is "really" on the inside or outside now appears to us, visually, to be concrete reality!

The further we descend (slowly) towards 2M, the more pronounced the effect becomes, the more sharply the 2M surface appears to be curved around us, and the more the outside starfield appears to shrink to something that looks like a little bright ball suspended in the enveloping black-holey gloom directly above us, like a tiny planet or star.

So if we're hovering too close to r=2M, (or flying past in a spaceship) we shouldn't really be surprised if increasing our forward speed results in our colliding with part of the hole, because that's exactly what our forward view tells us is directly in front of us (and on every side, and directly behind us). If we want to escape from the hole's influence and get back to normal space, then we have to aim our spaceship at the little shrunken blob of compacted blueshifted starfield directly above us. All other directions point at the black hole.

So the rule-of-thumb for navigating within r=3M would seem to be: forget about your ship's fancy gyroscopic navigation systems, just look out of your window and make sure that the ship's nose appears to be pointed approximately at the part of the universe that you want to go to. But don't take your eyes off the forward view, because the harder your engines fire on your way out, the stronger those distortion effects are going to become.

Wednesday, 16 September 2009

My Chocolate Tree is Unhappy

Dead leaf from a Theobroma cacao (chocolate tree). Including the stem, it's over a foot long.
I keep chocolate trees. They're not too difficult to grow (if you set up an incubator), but keeping the things alive as houseplants without a controlled environment can be tricky. They generally do okay until you have One Bad Day with light levels that are too bright, or too dim, or the humidity's too low, or the temperature is too hot or too cold, and the things panic and drop all their leaves and turn into ugly bare sticks. And when that happens, it seems to take about eight months to coax the things into producing more proper leaves, and get back into the swing of things. Maybe it's a way of outliving predators - if any beasties have eaten the last set of leaves, the tree waits until they and their offspring have all starved to death before growing any more. Dunno.

I had two gorgeous bushy indoor trees last year, sitting by the back window, and moved them to the front of the house where the light levels were slightly lower. One day later, all the leaves had gone sickly. A day or so later they all fell off. A couple of earlier trees got trashed by a few hours of unusually harsh UV light on one clear winter's morning.

After a number of house-moves, I'm now down to just one small tree, which is only about a year old. It had a nice cluster of healthy dark-green leaves. But after just one hour's car journey (on a fairly hot day), the thing had virtually turned albino. The leaves went almost white, apart from the veins, and it's been struggling ever since. Once a leaf loses its "green", it's one short step away from dying completely, and going brown and falling off, and when all the leaves fall off, you're in trouble.

So what I have to do now is coddle the thing so that the existing leaves hopefully last until the plant has decided to try cautiously growing some new ones.
Maybe I should switch to growing something less challenging. If I used a set of mirrors to catch and redirect daylight around the room, indoor climbing roses would be nice ...

Friday, 11 September 2009

Dark Stars and Hawking Radiation

The fictional spaceship 'Dark Star', from the 1974 movie of the same name, directed by John Carpenter
Some people have trouble getting used to the idea of Hawking radiation outside the context of strict quantum mechanics. For those people, I'd suggest that they consider the mechanics of a crusty old Eighteenth-Century “Dark Star” model.

The Dark Star was the predecessor to the modern black hole, and the basic properties of the object were worked up and published by John Michell back in 1784. Michell worked out many of the “modern” Twentieth-Century black hole properties from Newtonian principles, including the r=2M event horizon radius, gravitational spectral shifts, and a method of calculating the number of these “invisible” gravitationally-cloaked objects by finding the proportion of unseen “companion stars” in binary star systems, and then using statistics to extrapolate that proportion to the larger stellar population.

The main difference between an old “dark star” and John Archibald Wheeler's 1950's-era “black hole” was that dark stars could emit faint traces of indirect radiation. In theory, signals and particles could still migrate upstream out of the dark star's gravitational trap by using local objects as accelerational stepping-stones, whereas under GR1915, this mechanism couldn't exist – objects smaller than their r=2M event horizon radius weren't just incredibly dark, but totally black. Their signals and radiation-pressure signature weren't just absurdly faint, but entirely missing. The thing really was, as Wheeler memorably described it, a truly black "hole" in the surrounding landscape.
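Michell's critical radius is easy to reproduce numerically: plain Newtonian escape velocity reaches lightspeed at exactly r = 2GM/c². A quick Python sketch, with my own choice of a solar-mass example:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def newtonian_escape_velocity(m_kg, r_m):
    """Plain Newtonian escape velocity, v = sqrt(2GM/r)."""
    return math.sqrt(2 * G * m_kg / r_m)

def michell_radius(m_kg):
    """Radius at which the escape velocity reaches lightspeed:
    r = 2GM/c^2 -- numerically the same as the GR1915 r=2M horizon."""
    return 2 * G * m_kg / c**2

r = michell_radius(M_sun)
print(f"critical radius: {r/1000:.2f} km")                                    # ~2.95 km
print(f"escape velocity there: {newtonian_escape_velocity(M_sun, r)/c:.3f} c")  # 1.000 c
```

The coincidence that the Newtonian "dark star" radius and the GR1915 horizon radius have the same formula is why the two models are so easy to compare.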


From the perspective of the Twenty-First Century, we can describe the difference in another way: dark stars emit classical Hawking radiation and GR1915 black holes don't.
Some people will take issue with that statement. They'll say that a hypothetical dark star's radiation-pattern is about acceleration effects rather than QM, and that Hawking radiation is all about particle-pair-production, a completely different mechanism.

So here's the sanity-check exercise. Suppose that the GR1915 description of horizon behaviour was wrong, and that a more "dark-starry" description was right … but that we still believed in GR1915. More general approaches (like statistical mechanics) would have to insist that the radiation effect was real, even though GR1915 disagreed. So how would we explain the reappearance of our naughty radiation effect?

There are a number of stages we'd have to go through:
  1. In a thought-experiment, catch an escaped particle and measure its trajectory.
  2. Extrapolate that trajectory back to the originating body as a smooth ballistic trajectory. In our "dark star" scenario, this extrapolated trajectory is wrong – the particle only escaped by being "bumped" out of the gravitational pit by interactions with other bodies or radiation – but in our GR1915 description there's no self-supporting atmosphere outside the black hole to allow this sort of acceleration mechanism, so we have to (wrongly) assume an unaccelerated path.
  3. Notice that the earliest part of this (fictional!) escape-path is superluminal. In order to escape along a ballistic trajectory, a particle would have to have started out travelling at more than the speed of light (!).
  4. Apply coordinate systems. Using a distant stationary observer's coordinates, we break the fictitious trajectory into two parts, an initial superluminal section, and the later, legal, sub-lightspeed part of the calculated path. The first section appears to be off-limits in our coordinate system, and an orderly transition between the two, as the particle supposedly jumps down through the lightspeed barrier, seems impossible, but …
  5. … we then notice that in a very idealised description of a superluminally-approaching particle, the particle ends up described as time-reversed ("tachyonic" behaviour). If an (over-idealised) particle approaches at more than the speed of its own light (which shouldn't normally happen, but ...), we'd end up describing it as being seen to arrive before it was seen to set out. Our artificial coordinate system approach then describes the particle as being seen to originate at the nearest part of its path, and to be apparently moving away from us at sub-light speeds, as its earlier signals eventually arrive at our location in reverse order.
  6. Time-reversal counts as a reversal of one dimension, which flips a left-handed object into its right-handed twin, and vice versa (chiral reversal). So if our particle was an electron, this artificial approach would describe the earlier part of its supposed path as belonging to a positron, instead.
  7. Our final description would then say that a particle and its antiparticle both appeared to pop into existence together outside the horizon (from nowhere) and moved in opposite directions, with the "matter" particle escaping and being captured by our detector, and its "antimatter" twin moving towards the black hole to be swallowed.
And this is, essentially, the 1970's QM description of Hawking radiation.

Sunday, 6 September 2009

The Moon, considered as a Flat Disc

The Moon considered as a flat disc gives Lorentz relationships
Mathematics doesn't always translate directly to physics.
That statement might sound odd to a mathematician, but consider this: even if you believe that physics is nothing but mathematics, that makes physics a subset of mathematics ... which means that there'll be other mathematics that lies outside that subset, that doesn't correspond cleanly to real-world physical theory. The key (for a physicist) is to know which is which.

That's not to say that "beauty equals truth" isn't a good working assumption in mathematical physics – it is – the problem is that the aesthetics of the two subjects are different, and mathematical beauty doesn't necessarily correspond well to physical truth. The physicist's concept of beauty is often different to that of the mathematician.

The "beauty equals truth" idea is often used as an argument for special relativity. SR uses the Lorentz relationships, and to a mathematician, it can sometimes seem that these are such beautiful equations that a system of physics that incorporates them has to be correct.

But the Lorentz relationships can also appear in bad theories, as a consequence of rotten initial starting assumptions:
Our Moon is tidally locked to the rotation of the Earth, so that it always shows the same face to us, and we always see the same circular image, with the same mappable features. Now suppose that a 1600's mathematician has a funny turn and decides that it's so outrageously, statistically improbable that the Moon would just coincidentally happen to have an orbit that results in it presenting the same face to us at all times, that something else must be going on. Our hypothetical "crazy mathematician" might decide that since we always see the same disc-image of the Moon, then perhaps, (mis)applying Occam's Razor, it really IS a flat disc.

Our mathematician could start examining the features on the Moon's surface, and discover a trend whereby circular craters appear progressively more squashed towards the disc's perimeter. We'd say that this shows that we're looking at one half of a sphere, but our mathematician could analyse the shapes and come up with another explanation. It turns out that, in "disc-world", the distortion corresponds to an apparent radial coordinate-system contraction within the disc surface. For any feature placed at a distance r from the disc centre, where R is the disc radius, this radial contraction comes out as a ratio of 1 : √(1 − r²/R²).

In other words, by treating the Moon as a flat disc, we'd have derived the equivalent of the Lorentz factor as a ruler-contraction effect! :)
Our crazy mathematician could then go on and use that Lorentz relationship as the basis of a slew of good results in group theory and so on. They could argue that local physics works the same way at all points on the disc surface, because the disc's inhabitants can't "see" their own contraction – their own local reference-rulers are contracted, too. Our mathematician could arguably have made faster progress by starting with a bad theory! So "bad physics" sometimes generates "good" math, and sometimes the worse the physics is, the prettier the results.
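If you want to play with the crater-contraction rule, here's a minimal Python sketch of it. The function name and sample values are just my own choices:

```python
import math

def disc_squash(r, R=1.0):
    """Apparent radial contraction of a circular crater sitting at
    radius r on the 'flat disc' reading of a hemisphere of radius R:
    the factor sqrt(1 - r^2/R^2), which has the same form as the
    Lorentz factor with r/R standing in for v/c."""
    return math.sqrt(1.0 - (r / R) ** 2)

for r in (0.0, 0.5, 0.866, 0.99):
    print(f"r/R = {r:.3f}: radial width scaled by {disc_squash(r):.3f}")
```

Features at the centre are undistorted, a crater two-thirds of the way out is squashed to about three-quarters width, and the contraction dives towards zero right at the rim – exactly the behaviour of the Lorentz factor as v approaches c.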

The reason for this is that, sometimes, real physics is a bit ... boring. If we screw physics up, the dancing pattern of recursive error corrections sometimes generates more fascinating structures than the more mundane results that we'd have gotten if we simply got the physics right in the first place.

Sometimes these errors are self-correcting and sometimes they aren't.
If we considered the Earth as flat, then, because it's possible to map a flat surface onto a sphere (the stereographic projection used for the Riemann sphere), it'd still be theoretically possible to come up with a complete description of physics that worked correctly in the context of an infinite rescaled Flat Earth. We'd lose the inverse square law for gravity, but we'd gain some truly beautiful results, that would allow, say, a lightbeam aimed parallel to one part of the surface to appear to veer away. We'd end up with a more subtle, more sophisticated concept of gravitation than we'd tend to get using more "sane" approaches, and all of those new insights would have to be correct. In fact, studying flat-Earth gravity might be a good idea! We'd eventually end up deriving a mathematical description that was functionally identical to the physics that we'd get by assuming a spherical(ish) Earth ... it'd just take us longer. Once our description was sufficiently advanced, the decision whether to treat the Earth as "really" flat or "really" spherical would simply be a matter of convenience.
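A quick Python sketch of that plane-to-sphere mapping, for anyone who wants to see that the round trip really is lossless everywhere except the projection pole. The choice of projecting the unit sphere from its north pole is mine:

```python
def plane_to_sphere(x, y):
    """Inverse stereographic projection: map a point on the infinite
    plane onto the unit sphere, projecting from the north pole (0,0,1)."""
    d = 1.0 + x*x + y*y
    return (2*x/d, 2*y/d, (x*x + y*y - 1.0)/d)

def sphere_to_plane(X, Y, Z):
    """Forward projection; undefined only at the north pole itself."""
    return (X/(1.0 - Z), Y/(1.0 - Z))

x, y = 3.2, -1.7
X, Y, Z = plane_to_sphere(x, y)
assert abs(X*X + Y*Y + Z*Z - 1.0) < 1e-12   # the image lands on the unit sphere
x2, y2 = sphere_to_plane(X, Y, Z)
print(round(x2, 6), round(y2, 6))           # round-trips to the original (3.2, -1.7)
```

Every point of the plane gets a unique point on the sphere and vice versa (bar one pole), which is why a "rescaled Flat Earth" description could, in principle, carry exactly the same physical content as the spherical one.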

But with the "moon-disc" exercise, we don't have a 1:1 relationship between the physics and the dataset that we're working with, and as a result, although the moon-disc description gets a number of things exactly right, the model fails when we try to extend it, and we have to start applying additional layers of externally-derived theory to bring things back on track.
For instance, the "disc" description breaks down at (and towards) the Moon's apparent horizon. For the disc, the surface stops at a distance R from the centre, and there's a causal cutoff. Events beyond R can't affect the physics of the disc, because there's no more space for those events to happen in. The horizon represents an apparent causal limit to surface physics.

But in real life, if the Moon was a busier place, we'd see things happening in the visible region that were the result of events beyond the horizon, and observers wandering about near our horizon would see things that occur outside our map. So if we were to use statistical mechanics to model Moon activity, and were to say that the event-density and event-pressure have to be uniform (after normalisation) at all parts of the surface, then statistical mechanics would force us to put back the missing trans-horizon signals by giving us "virtual" events whose density increased towards the horizon, and whose mathematical purpose was to restore the original event-density equilibrium. In disc-world, we'd have to say that the near-edge observer sees events in all directions, not because information was passing through (or around) the horizon, but because of the disc-world equivalent of Hawking radiation.

So in the disc description, the telltale sign that we're dealing with a bad model is that it generates over-idealised horizon behaviour that can't describe trans-horizon effects, and which needs an additional layer of statistical theory to make things right again. In the "moon-disc" model, we don't have a default agreement with statistical mechanics, and we have to assume that SM is correct, divide physics artificially into "classical" and "quantum" systems, and retrofit the difference between the two predictions back onto the bad classical model – as a separate QM effect, as the result of particle pair-production somewhere in front of the horizon limit – to explain how information seems to appear "from nowhere" just inside the visible edge of the disc.

Clearly, in the Moon-disc exercise this extreme level of retrofitting ought to tell our hypothetical crazy mathematician that things have gone too far, and suggest that the starting assumption of a flat surface was simply bad ...
... but in our physics, based on the early assumption of flat spacetime, and generating the same basic mathematical patterns, we ran into a version of exactly the same problem: Special relativity avoided the subject of signal transfer across velocity-horizons by arguing that the amount of velocity-space within the horizon was effectively infinite (you could never reach v=c), but when we added gravitational and cosmological layers to the theory, the "incompleteness problem" with SR-based physics showed up again. GR1915 horizons were too sharp and clean, and didn't allow outward flow of information, so to force the physics to obey more general rules, we had to reinvent an observable counterpart to old-fashioned transhorizon radiation as a separate quantum-mechanical effect.

So the result of this sanity-check exercise is a little humbling. We can demonstrate to our hypothetical 1600's "crazy mathematician" that the Moon is NOT flat, no matter how much pretty Lorentz math it generates, and we can use the horizon exercise to show them that their approach is incomplete. By assuming that their model is wrong, we correctly anticipate the corrections that they'd have to make from other theories in order to fix things up. That ability to predict where a theory fails and needs outside help is the mark of a superior system, and shows that the "Flat-Moon" exercise isn't just incomplete, it generates results that are physically wrong, and that don't self-correct. It's faulty physics.

But the same characteristic failure-pattern also shows up in our own system, based on special relativity. So have we made a similar mistake?

Wednesday, 2 September 2009

On Catching Rainbows


I saw a nice rainbow yesterday.

I was out to do some shopping but took a random detour, following my feet. The detour just happened to take me to a suitable road junction, at exactly the right time. By rights, I shouldn't have been there to take the picture.

But "lucky catches" aren't just about being in the right place at the right time through nothing but dumb good luck, or about preserving a certain random element in your approach (although that certainly helps). If you want to be able to catch something that other people miss, you have to expect to spend at least some of your time in places where they aren't, and looking at things that don't always seem to be immediately necessary to the job in hand.
You also have to be prepared for the possibility of success (I try to keep a camera with me, and it had just enough juice left in the batteries to fire off a few shots for the critical sixty or seventy seconds), you have to be able to recognise the preliminary signs of something interesting (I saw a faint 'bow forming, realised what was coming, and was able to fish the camera out and find something to shield it from the rain, in time) and you also have to be prepared to look stupid (standing in the rain with a plastic folder over your camera, taking photos of the sky, at an angle where most of the people who can see you have no idea what you're doing).

But the main thing is to have your eyes open. If you're absolutely sure that nothing interesting is going to happen, then on the occasions when it does happen, you're liable to miss it.

The same thing goes for theoretical physics. If you want to catch things that have eluded other people (whether it's math, or theory, or experimental research), you don't always have to be so much smarter than everyone else, or to have better equipment. Sometimes it's enough just to be prepared for the possibility of being surprised. If you're too rigid about what you're trying to find, you miss out. In my case, I was popping out for a plank of wood for some shelving, and I came back with a plank of wood and a bloggable photograph of a rainbow. If I'd been more singleminded in my shopping, I'd have only come back with the bit of wood.

Saturday, 29 August 2009

M.C. Escher's "Relativity", Intransitivity, and the Pussycat Dolls

PCD: Gravitationally-conflicting staircases in the Pussycat Dolls' video for 'Hush, Hush'There's a nice example of intransitive geometry in the latest Pussycat Dolls video ("Hush hush").
No, really, there is. It's the bit where the girls are on four staircases attached to the sides of a cube, that each have a different local direction of "down". The "stairwell" section of the video starts at about 58 seconds in and goes on until about a minute thirty. While you're waiting for it to start you'll have to put up with the sight of Nicole Scherzinger nekked in a bathtub making "ooo, yeah" noises for nearly a minute, though. Sometimes doing research for this blog is really tough.

The video seems to be inspired by the famous "Relativity" lithograph by M. C. Escher, which had three intersecting sets of stairs and platforms set into three perpendicular walls, as a piece of "impossible" architecture (physically you could build it, but you wouldn't be able to walk on all the surfaces as the people do in the illustration).
M.C. Escher's famous lithograph, 'Relativity'
Escher's illustration was incredibly influential, and as well as the Pussycat Dolls video (!), there are some more literal tributes online, including Andrew Lipson's recreation of the scene using Lego, part of the 1986 movie Labyrinth, and a funny short video called Relativity 2.0, that has people trapped in a nightmarish Escherian shopping mall.

Andrew Lipson's rendition of Escher's 'Relativity', in Lego
Gravitationally-ambiguous staircases in tribute to M.C. Escher's 'Relativity' lithograph, appearing in the 1986 movie, 'Labyrinth'



If you know of any other especially good ones, please add them to the end of this post as a comment!

Next, we need a Beyonce video illustrating the event horizon behaviour of acoustic metrics ...

Saturday, 22 August 2009

Special Relativity is an Average

Special Relativity as an average: 'Classical Theory' (yellow block), Special Relativity (orange block), and Newtonian Optics (red block). Special relativity's numerical predictions are the 'geometric mean' average of the predictions for the other two blocks
Textbooks tend to present special relativity's physical predictions as if they're somehow "out on a limb", and totally distinct from the predictions of earlier models, but SR's numerical predictions aren't as different to those of Nineteenth-Century models as you might think.

One of the little nuggets of wisdom that the books usually forget to mention is that most of special relativity's raw predictions aren't just qualitatively unremarkable: they're actually a type of mathematical average (more exactly, the geometric mean) of two earlier major sets of predictions. So, in the diagram above, if the yellow box on the left represents the set of predictions associated with the speed of light being fixed in the observer's frame (fixed, stationary aether), and the red box on the right represents the set of physical predictions for Newtonian optics (traditionally associated with ballistic emission theory), then the box in the middle represents the corresponding (intermediate) set of predictions for special relativity.

If we know the physical predictions for a simple "linear" quantity (visible frequency, apparent length, distance, time, wavelength and so on) in the two "side" boxes, then all we normally have to do to find the corresponding central "SR" prediction is to multiply the two original "flanking" predictions together and square root the result. This can be a really useful method if you're doing SR calculations and you want an independent method of double-checking your results.


This usually works with equations as well as with individual values.
F'rinstance, if the "linear" parameter that we were working with was observed frequency, and we assumed that the speed of light was fixed in our own frame ("yellow" box), we'd normally predict a recession Doppler shift due to simple propagation effects on an object of
frequency(seen) / frequency(emitted) = c / (c+v)
, whereas if we instead believed that lightspeed was fixed with reference to the emitter's frame, we'd get the "red box" result, of
frequency(seen) / frequency(emitted) = (c-v) / c
If there was really an absolute frame for the propagation of light, we could then tell how fast we were moving with respect to it by measuring these frequency-shifts.

The "geometric mean" approach eliminated this difference by replacing the two starting predictions with a single "merged" prediction that we could get by multiplying the two "parent" results together and square-rooting. This gave
frequency(seen) / frequency(emitted) = SQRT[ (c-v) / (c+v) ]
, which is what turned up in Einstein's 1905 electrodynamics paper.

The averaging technique gave us a way of generating a new prediction that "missed" both propagation-based predictions by the same ratio. Since the numbers in the "red" and "yellow" blocks already disagreed by the ratio 1 : (1 - vv/cc), the new intermediate, "relativised" theory diverged from both of these by the square root of that ratio, SQRT[ 1 - vv/cc ]. And that's where the Fitzgerald-Lorentz factor originally came from.
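If you'd like to check the multiply-and-square-root recipe numerically, here's a minimal Python sketch (units where c = 1; the function names are just for this example, not from any textbook):

```python
import math

def doppler_fixed_observer(v):
    """'Yellow box': lightspeed fixed in the observer's frame.
    Recession shift: freq(seen)/freq(emitted) = c/(c+v)."""
    return 1 / (1 + v)

def doppler_fixed_emitter(v):
    """'Red box': lightspeed fixed in the emitter's frame
    (Newtonian optics): freq(seen)/freq(emitted) = (c-v)/c."""
    return 1 - v

def doppler_sr(v):
    """'Orange box': special relativity's prediction,
    freq(seen)/freq(emitted) = SQRT[(c-v)/(c+v)]."""
    return math.sqrt((1 - v) / (1 + v))

for v in (0.1, 0.5, 0.8):
    yellow = doppler_fixed_observer(v)
    red = doppler_fixed_emitter(v)
    orange = doppler_sr(v)
    # SR's prediction is the geometric mean of the other two ...
    assert math.isclose(orange, math.sqrt(yellow * red))
    # ... and it misses each of them by the same ratio, the
    # Fitzgerald-Lorentz factor SQRT[1 - vv/cc]:
    lorentz = math.sqrt(1 - v * v)
    assert math.isclose(orange / yellow, lorentz)
    assert math.isclose(red / orange, lorentz)
```

The assertions pass for any v between 0 and 1, which is the whole point: the "SR" column isn't an independent set of numbers, it's pinned exactly halfway (in ratio terms) between the two older columns.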

---==---

Why is it important to know this?

Well, apart from the fact that it's useful to be able to calculate the same results in different ways, the "geometric mean" approach also has important implications for how we go about testing special relativity.
Our usual approach to testing SR is to compare just the "yellow" and "orange" predictions, identify the difference, say that the resulting differential Lorentz redshift/contraction component is something unique to SR and totally separate from any propagation effects, and then set out to measure the strength of this relative redshift/contraction component, in the range "zero-to-Lorentz". Having convinced ourselves that these effects are unique to SR, we usually don't then bother to check whether the data might actually make a better match to a point somewhere to the right of the diagram.
Since the "yellow box" predictions are so awful, special relativity comes out of this comparison pretty well.

But once you know the averaging method, you'll understand that this is only half the story -- these "derivative" effects that appear under SR but not "Classical Theory" ("orange" but not "yellow") must have counterparts under Newtonian optics ("red"), and these are usually stronger than the SR versions. So any experimental procedure or calculation that appears to support the idea of time dilation or length-contraction in an object with simple constant-velocity motion under SR would also generate an apparent positive result for those effects if SR was wrong and the older "Newtonian optics" relationships were the correct set (or if some other intermediate set of relationships was in play). We can say that special relativity's concept of velocity-based time dilation didn't exist under NO, but hardware doesn't care about concepts or interpretations, only results ... and the result of performing an SR-designed test in an "NO universe" would be that the test would throw up a "false positive" result apparently supporting SR (with an overshoot that'd then have to be calibrated out).

And, actually, the situation is worse than this.
... Since the "yellow" and "red" blocks represent the two extremal predictions for theories that allow linkage between the velocity of a light-signal and the motion of a body ("yellow" = zero dependency, "red" = full dependency), they also seem to represent the cutoff-limits for a whole slew of old Nineteenth-Century "dragged aether" models, all of which would be expected to produce similar physical effects to special relativity, differing only in their scaling and strength. So typical test procedures designed to isolate the "new" SR effects should be able to generate "false positive" results with almost all of these old theories and models.

While some of special relativity's concepts might have been new, its testable numerical predictions lie right in the middle of a pre-existing range. Any time you see a claimed experimental verification of SR that forgets to take this into account, treat it with caution.

Monday, 17 August 2009

Fibonacci Kitchenware (well, almost)

I popped into Habitat yesterday, and they're selling a range of five pseudo-Fibonacci nesting trays (four smaller trays plus a bigger one to hold them). It's just a shame that they chose such an awful selection of colours for them (who the heck decided on yellow, brown and navy blue??!?).

Friday, 14 August 2009

Fun with Special Relativity

detail from Salvador Dali's 'The Disintegration of the Persistence of Memory' (oil on canvas, circa 1952-54; see http://en.wikipedia.org/wiki/The_Disintegration_of_the_Persistence_of_Memory)
This is where I surprise everyone by saying something nice about Einstein's Special Theory of Relativity for a change. Considered as a piece of abstract geometry, special relativity (aka "SR" or "STR") is prettier than even some of its proponents give it credit for. The problems only kick in when you realise that the basic principles and geometry of SR considered as physics don't correspond well to the rules that real, physical observers and objects appear to follow in real life.

Anyhow, here's some of the pretty stuff:

It's traditional to explain Einstein's special theory of relativity as a theory that says that the speed of light is fixed (globally) in our own frame of reference, and that objects moving with respect to our frame are time-dilated and length-contracted, by the famous Lorentz factor.
And that characterisation certainly generated the appropriate predictions for special relativity, just as it did for Lorentzian Ether Theory ("LET"). But we can't verify that this time-dilation effect is physically real in cases where SR applies the principle of relativity (i.e. cases that only involve simple uniform linear motion). Thanks to its application of Lorentz-factor relationships, Special Relativity doesn't allow us to physically identify the frame that lightspeed is supposed to be constant in. When we make proper, context-appropriate calculations within SR, we have the choice of assuming that lightspeed is globally constant in our frame, or in the frame of the object we're watching, or in the frame of anybody else who has a legal inertial frame – it's usually a sensible choice to use our own frame as the reference, but really, it doesn't matter which one we pick, and sometimes the math simplifies if we use someone else's frame as our reference (as Einstein did in section 7 of his 1905 paper).

Some people who've learnt special relativity through the usual educational sources have expressed a certain amount of disbelief (putting it mildly) when I mention that SR allows observers a free choice of inertial reference frame, so let's try a few examples, to get a feel of how special relativity really works when we step away from the older "LET" descriptions that spawned it.

Some Mathy Bits:

1: Physical prediction
Let's suppose that an object is receding from us at a velocity of four-fifths of the speed of light, v = 0.8c
Special relativity predicts that the frequency shift that we'll see is given by
frequency(seen)/frequency(original) = SQRT[ (c-v) / (c+v) ]
= SQRT[ (1-0.8) / (1+0.8) ]
= SQRT[ 0.2/1.8 ] = SQRT[ 1/9 ] = 1/3
, so according to SR, we should see the object's signals to have one third of their original frequency. This is special relativity's physical prediction. The object looks to us, superficially, as if it's ageing at one third of its normal rate, but we have a certain amount of freedom over how we choose to interpret this result.

2: "Motion plus time dilation"
It's usual to break this physical SR prediction into two notional components, a component due to more traditional "propagation-based" Doppler effects, calculated by assuming that lightspeed's globally constant in somebody's frame, and an additional "Lorentz factor" time dilation component based on how fast the object is moving with respect to that frame.
The "simple" recession Doppler shift that we'd calculate for v = 0.8c by assuming that lightspeed was fixed in our own frame would be
frequency(seen) / frequency(original) = c/(c+v)
= 1/(1+0.8) = 1/1.8
, and the associated SR Lorentz-factor time-dilation redshift is given by
freq'/freq = SQRT[ 1 - vv/cc ]
= SQRT[ 1 - (0.8)² ] = SQRT[ 1 - 0.64 ] = SQRT[ 0.36 ]
= 0.6
Multiplying 0.6 by 1/1.8 gives
0.6/1.8 = 6/18
= 1/3

Same answer.

3: Different frame
Or, we can do it by assuming that the selected emitter's frame is the universal reference.
This gives a different propagation Doppler shift result, of
freq'/freq = (c-v)/c
= 1 - 0.8 = 0.2

We then assume that because we're time dilated (because we're moving w.r.t. the reference frame), and that because our clocks are slow, we're seeing everything to be Lorentz-blueshifted, and appearing to age faster than we'd otherwise expect, by the Lorentz factor.
The formula for this is
freq'/freq = 1/SQRT[ 1 - vv/cc ]
= 1/0.6 = 5/3
Multiplying these two components together gives a final prediction for the apparent frequency shift of
0.2× (1/0.6) = 0.2/0.6 = 2/6
= 1/3
Same answer.

So although you sometimes see physicists saying that thanks to special relativity, we know that the speed of light is globally fixed in our own frame, and we know that particles moving at constant speed down an accelerator tube are time-dilated, actually we don't. In the best-case scenario, in which we assume that SR's physical predictions are actually correct, the theory says that we're entitled to assume these things as interpretations of the data, but according to the math of special relativity, if we stick to cases in which SR is able to obey the principle of relativity, it's physically impossible to demonstrate which frame light "really" propagates in, or to prove whether an inertially-moving body is "really" time-dilated or not. It's interpretative. Regardless of whether we decide that we're moving and time-dilated or they are, the final physical predictions are precisely the same, either way. And that's the clever feature that we get by incorporating a Lorentz factor, that George Francis Fitzgerald originally spotted back in the Nineteenth Century, that Hendrik Antoon Lorentz also noticed, and that Albert Einstein then picked up on.

4: Other frames, compound shifts, no time dilation
But we're not just limited to a choice between these two reference frames: we can use any SR-legal inertial reference frame for the theory's calculations and still get the same answer.
Let's try a more ambitious example, and select a reference-frame exactly intermediate to our frame and that of the object that we're viewing. In this description, both of us are said to be moving by precisely the same amount, and could be said to be time-dilated by the same amount ... so there's no relative time dilation at all between us and the watched object. We can then go ahead and calculate the expected frequency-shift in two stages just by using the simpler pre-SR Doppler relationships, and get exactly the same answer without invoking time dilation at all!

The "wrinkle" in these calculations is that velocities under special relativity don't add and subtract like "normal" numbers (thanks to the SR "velocity addition" formula), so if we divide our recession velocity of 0.8c into two equal parts, we don't get (0.4c+ 0.4c), but (0.5c+0.5c)
(under SR, 0.5c+0.5c=0.8c – if you don't believe me, look up the formula and try it)
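Or, if you'd rather let a machine try it, the composition formula is only a couple of lines of Python (velocities as fractions of c; the function name is mine):

```python
def sr_velocity_add(u, v):
    """Special relativity's velocity-composition formula, in units
    where c = 1: w = (u + v) / (1 + u*v)."""
    return (u + v) / (1 + u * v)

# Two equal recession legs of 0.5c compose to 0.8c, not 1.0c:
assert abs(sr_velocity_add(0.5, 0.5) - 0.8) < 1e-12

# And no two sub-lightspeed velocities ever compose past lightspeed:
assert sr_velocity_add(0.9, 0.99) < 1.0
```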

So, back to our final example. The receding object throws light into the intermediate reference frame while moving at 0.5c. The Doppler formula for this assumes "fixed-c" for the receiver, giving
freq'/freq = c/(c+v)
=1/1.5 = 2/3
Having been received in the intermediate frame with a redshift of f'/f = 2/3 (~66.7%), the signal is then forwarded on to us. We're moving away from the signal so it's another recession redshift.
The second propagation shift is calculated assuming fixed lightspeed for the emitting frame, giving
freq'/freq = (c-v)/c
= (1 - 0.5)/1 = 0.5 = 1/2
The end result of multiplying both of these propagation shift stages together is then
2/3 × 1/2
= 1/3
Again, exactly the same result.
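All four routes can be checked numerically in a few lines. Here's a short Python sketch of the v = 0.8c example (units where c = 1; variable names are just for this sketch):

```python
import math

c = 1.0
v = 0.8 * c   # recession velocity of the watched object

# 1: SR's direct physical prediction, freq(seen)/freq(emitted)
direct = math.sqrt((c - v) / (c + v))                      # 1/3

# 2: "lightspeed fixed in our frame": propagation Doppler shift,
#    times the emitter's Lorentz time-dilation redshift
route2 = (c / (c + v)) * math.sqrt(1 - v * v / (c * c))    # (1/1.8) * 0.6

# 3: "lightspeed fixed in the emitter's frame": propagation shift,
#    times our own Lorentz blueshift (because *our* clocks run slow)
route3 = ((c - v) / c) / math.sqrt(1 - v * v / (c * c))    # 0.2 / 0.6

# 4: intermediate frame: two plain propagation shifts, each leg at
#    0.5c (since 0.5c "+" 0.5c = 0.8c under SR velocity addition),
#    with no time-dilation term anywhere
w = 0.5 * c
route4 = (c / (c + w)) * ((c - w) / c)                     # (2/3) * (1/2)

for r in (route2, route3, route4):
    assert math.isclose(r, direct)   # every route gives 1/3
```

The interpretations differ wildly (who's time-dilated? is anyone?), but the number that comes out of the hardware is 1/3 every time.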

No matter which SR-legal inertial frame we use to peg lightspeed to, special relativity insists on generating precisely the same physical results, and this is the same for frequency, aberration, apparent changes in length, everything.

So when particle physicists say that thanks to special relativity we know for a physical fact that lightspeed is really fixed in our own frame, and that objects moving w.r.t. us are really time-dilated ... I'm sorry, but we don't. We really, really don't. We can't. If you don't trust the math and need to see it spelt out in black and white in print, try Box 3-4 of Taylor and Wheeler's "Spacetime Physics", ISBN 0716723271. IF special relativity has the correct relationships, and is the correct description of physics, then the structure of the theory prevents us from being able to make unambiguous measurements of these sorts of things on principle. We can try to test the overall final physical predictions (section 1), and we can choose to describe that prediction by dividing it up into different nominal components, but we can't physically isolate and measure those components individually, because the division is totally arbitrary and unphysical. If the special theory is correct, then there's no possible experiment that could show that an object moving with simple rectilinear motion is really time-dilated.

If you're a particle physicist and you can't accept this, go ask a mathematician.

Sunday, 9 August 2009

HTML5 is Coming!

The latest (8 August 2009) draft version of the HTML5 specifications has just been published.

Some of the additions are special dedicated tags for semantic labeling. These are labels that describe the logical content of a block – what it is rather than how it displays - although with Cascading Style Sheets ("CSS"), it's also possible to set associated display parameters for just about any tag type (colours, surrounding boxes, and so on).

Microsoft (who aren't on the HTML5 panel) have queried what the point of these things is, since they don't add any new layout specification tools for the benefit of the website designer. We already have the general-purpose <div> tag that lets us mark out blocks of code and assign custom class names and ID names to those blocks, so that they can be displayed in particular ways using CSS. Why duplicate the same functionality in these new tags, <article>, <nav>, <section>, <aside> and so on, if these don't give the webpage designer any new functionality for how a page appears on screen or on paper that they couldn't already achieve with <div>?

Well, even if Microsoft can't quite see the point of them, there are still a number of really good reasons why the end-users and the internet in general need at least some of these new tags.

Blogging
HTML4 came out at the end of the last century (!), and since then the blog phenomenon has pretty much exploded. Blogging software now makes it really easy for authors to produce a mass of rich, mixed, auto-updated content over tens or hundreds of pages. But search engines have to try to make sense of this mess of articles, article links, widgets and addons, and it's not easy. For instance, suppose that I write and upload a blog article about "Einstein and Fish". On Google, "Einstein and fish" currently only gives one result (if it was two words, it'd count as a "Googlewhack").
But as soon as I post the article, the title "Einstein and Fish" will appear in the "recent posts" box in the sidebar of every single page of my blogspace. Point Google's "advanced search" at my blogspace to find how many articles I've written on "Einstein and fish", and instead of one, it'll report back a list of every blog entry I've ever written as apparently containing that piece of search text. It'll also probably include all the text of every widget I've used on the site (like "NASA Photo of the Day"). And this is even though I'm using Blogger, which is Google's own blogsite company.

When webpage designers and companies like Blogger start using the new tags, general-purpose search engines should find it easier to separate out blog articles and webpage content from the surrounding mess of widgets, navigation links, slogans, adverts and general decorative junk.

Client-side reformatting
Some web designers react with outrage at the idea that a browser might display their precious page with a different layout to the one that they carefully designed (to look good on their nice 19" flat-screen monitor).
But people are increasingly looking at web pages on a range of devices including mobile phones and ebook readers, and although website designers can in theory produce separate style sheets that allow a page to be displayed with different layouts on every size of device, in practice there are an awful lot who don't bother (including me! :) ). If we use a dedicated blog site, we maybe hope that the site's engineering people will do all that for us, automatically. With CSS-based layouts, some designers tend to go for absolute pixel widths, and frankly, we don't know what devices and screen sizes might be most important a year from now.

Semantic labeling allows dedicated browsers built into these devices to have a good attempt at reformatting and reflowing pages to fit their own tiny screens, by being able to tell which blocks of HTML are the important page content, and which blocks are just there for decoration or navigation.

New Navigation Tools
One of the results of these new tags is that we can expect to see mini-browsers starting to sprout some new navigation buttons. If you have a long page with several sections that takes several sheets to print out, with a figure or two, an inset box with supplementary material, and a navigation bar, then the layout designed for a large screen is going to be hopeless on an iPhone. So what would be cool on an Android mobile phone browser or iPhone would be a function that scans for <section> tags, and then provides additional [<section][section>] buttons that let you skip forwards or backwards through a page. Inset panels with additional info that the designer has "artily" set into the side of the article could be identified by their HTML5 <aside> tag and stripped out and made available on a separate button as [info]. Similarly, if the author produced a number of figures that are referred to in the text, and marked them with the <figure> tag, it'd be handy if the browser could scan for these when the page is loaded, and provide a [figure] button if it finds one, and [<figure][figure>] navigation buttons if it finds several. And it'd also be really handy on a small screen to be able to strip out the navigation bar and put that onto a separate [nav] button, too.
In fact, if this caught on, it'd also be great to be able to jump around a page using these buttons on a conventional "full-size" browser, too.
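To give a feel for how cheap this kind of scan would be, here's a toy sketch using Python's standard html.parser module (the page and the class name are made up for the example) that counts the semantic blocks the way a mini-browser might before deciding which buttons to offer:

```python
from html.parser import HTMLParser

# The HTML5 semantic tags a mini-browser might care about:
SEMANTIC = {"article", "section", "nav", "aside", "figure"}

class SemanticScanner(HTMLParser):
    """Counts semantic blocks in a page, the way a small-screen
    browser might before deciding which navigation buttons to show."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC:
            self.counts[tag] = self.counts.get(tag, 0) + 1

# A made-up page for the example:
page = """
<article>
  <nav><a href="#top">Top</a></nav>
  <section><h2>Part one</h2><p>Main content.</p></section>
  <section><h2>Part two</h2><p>More content.</p>
    <aside>Arty inset box.</aside>
  </section>
</article>
"""

scanner = SemanticScanner()
scanner.feed(page)
# Two <section>s found, so offer [<section][section>] skip buttons;
# one <aside>, so offer a single [info] button; one <nav>, so offer
# a [nav] button, and so on.
print(scanner.counts)
```

With <div>-soup, a browser can't do this reliably; with the HTML5 tags, it's a dozen lines.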

Accessibility
Finally, if you think that it's difficult navigating a modern "fancy" webpage on a mobile phone, imagine how frustrating it must be if you're sight-impaired, and are using an automated text reader. If you're navigating a page "by ear", it could be useful to be able to find your place again by skipping backwards and forwards a section at a time, until you find a title or intro paragraph that you recognise ... or to be able to jump back and forth between a current reading position and the navigation options, no matter where the designer has put those navigation buttons on the page, or where they happen to appear in the webpage's source code.

One of the problems with CSS, wonderful though it is, is that it allows the designer to place any element in any part of the HTML file, onto any part of the page. This means that the sequential order of chunks of HTML in the file doesn't necessarily correspond to the order that they have on the screen. A navigation bar that appears at the top of the screen might appear at the bottom of the code. Labelling the sections logically, in a standardised way, gives audio navigation software a chance of finding key sections of a page and treating them appropriately. For companies and government departments that have disability access policies (and requirements!), adopting HTML5 tags and using them consistently on new projects would be a good initiative both for supporting future standards and for potentially improving long-term disability access.

Friday, 7 August 2009

Misconstructing Fibonacci

The Fibonacci sequence mesmerises people. There's something about the idea that a deterministic trail of integers can mysteriously converge on a strange, fundamental, irrational number, the infamous Golden Ratio, or Golden Section, 1.61803... , "phi" – which, like "pi", can't be expressed as any exact ratio between two whole numbers, or written down on paper as a complete series of digits using any conventional number system.

Some people get obsessed with the numbers, and seem to think that if they stare long enough at the simple sequence with its maddening simplicity, that the secret buried inside the integers might reveal itself.

I'm here to give you the answer – the numbers are empty. The secret's not in the numbers at all, it's set one layer back behind the numbers, in the process used to generate them.

If you want to understand how the Fibonacci sequence generates phi, it can be useful to throw away the integers and look at the shapes:
With a conventional square-tiled version of the Fibonacci sequence, we start with a single "fat" rectangle, of nominal side "1×1" (a square), and then we add an additional square to the longest side (in this case, they're all equal, so any side will do), which gives us a "long" rectangle, of dimensions "2×1". Adding another square to one of its longest sides produces another "fattish" rectangle of size "3×2", although this obviously can't be as fat as the 1×1 square (which was already as fat as you can get). Adding a further square to one of the new longest sides then makes the shape thin-nish again, with size "5×3", although, again, it's not quite as thin as the earlier "2×1" rectangle. As we keep adding squares we get the sequence of ratios 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, and so on.

Fibonacci Series tiling
And this alternating pattern of overshoots and undershoots repeats forever.
For a rectangle that's already precisely proportioned according to the Golden Section, these sorts of square-adding or square-subtracting processes produce rectangular offspring with precisely the same proportions as their parent. But anything more elongated than the "Golden Rectangle" always produces something "fatter" than phi, and anything more dumpy than a Golden Rectangle is guaranteed to produce a rectangle that's "skinnier" than phi.
If we apply the process by lopping off squares, then for a "non-phi" rectangle the proportions swing back and forth more and more wildly, getting further and further away from phi each time we remove a square, and if we do it by adding squares, the process takes us closer and closer to phi each time ... and this gives us the usual tiling construction for phi using the Fibonacci Series, shown above, that you should be able to find in a lot of books.

But this specific sequence of numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ... isn't required for the trick to work – the method generates alternating "fat" and "thin" rectangles that converge on phi, when we start with any two numbers whatsoever. They don't even have to be integers.
Example:
Suppose that instead of 1, 1, we start with a couple of random-ish numbers, taken from say, the date of the Moon landing, 16.07 & 1969. This gives us a very skinny rectangle (with proportions around ~123 : 1). Adding a square to the longest side gives something that's almost square (very "fat", ratio ~1.008), the next pairing will be on the "skinny" side (ratio ~1.99), and already we're looking at ratios close to those of the "1, 2" entries in the standard sequence. The process then chunters on and converges on phi as before.

16.07, 1969, 1985.07, 3954.07, 5939.14, 9893.21, 15832.35, 25725.56, 41557.91, ...

If we stop there, and divide the last number by its neighbour, we get
41557.91/25725.56 = ~1.6154

add another couple of stages and we get
108841.38/67283.47 = 1.61765..

So in just those few stages, we've already gone from a start ratio of about 123:1 to something close to the golden section value of ~1.618... , correct to three decimal places.
It really doesn't matter whether the initial ratio is 1:1, or 2:1, or a zillion to the square root of three. Any two numbers whatsoever, processed using the method, give a sequence that will lurch back and forth around the Golden Section, always overshooting and undershooting, but always getting closer and closer, guaranteed.
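Here's the claim as a few lines of Python (the helper name ratio_after is just for this sketch):

```python
def ratio_after(a, b, steps):
    """Apply the Fibonacci rule (each new term is the sum of the
    previous two) 'steps' times to the seeds a, b, and return the
    final ratio between neighbouring terms."""
    for _ in range(steps):
        a, b = b, a + b
    return b / a

PHI = (1 + 5 ** 0.5) / 2   # the Golden Section, ~1.6180339887

# The classic 1, 1 seeds converge on phi ...
assert abs(ratio_after(1, 1, 40) - PHI) < 1e-12
# ... and so does the Moon-landing date from the example above ...
assert abs(ratio_after(16.07, 1969, 60) - PHI) < 1e-12
# ... as does any other pair of positive starting numbers:
assert abs(ratio_after(3 ** 0.5, 1e6, 60) - PHI) < 1e-12
```

The convergence is quick, too: the error in the ratio shrinks by a factor of roughly 0.38 with every square added, regardless of the seeds.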

So the Fibonacci sequence, in this regard, is really nothing special. You can plug in any two start numbers, taken from anywhere, apply the Fibonacci method, and the trick will still work.

--===--

What the usual Fibonacci Series does have going for it is simplicity. It's probably the simplest integer example of this process, and it's been argued that, if we want to approximate phi with a pair of integers, then for any given number range the standard Fibonacci sequence "owns" the pair that get closest (although I haven't actually checked this for myself). We can also derive the "standard" sequence from tiling and quantisation exercises, and when it comes to dealing with sunflowers and pinecones and the like, where we're dealing with structures that branch recursively (like the core of a pinecone) or are the result of cell division in two dimensions, plus time (giving branching over time), then yes, it's not surprising that Fibonacci sequence integers are a recurring theme. Cell division and branching are quantised processes, like the graph of Fibonacci's rabbits.

But the "music" of the Fibonacci series isn't in the integers, it's in the rhythm of the underlying processes that generate them. It's those underlying processes that carry the magic, not the integers themselves.

Saturday, 1 August 2009

Fibonacci Rose, Alternative Tiling

Fibonacci Rose, alternative colour tiling
Actually, this is the same arrangement of shapes as in the "double-spiral" version of the Fibonacci Rose, but coloured differently.
As before, each triangle of a given colour has sides that are the sum of the sides of the next two triangles down, giving the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 ...

And as before, if we zoom out arbitrarily far, the figure becomes indistinguishable from the "golden section" version, in which each triangle's sides are related to the next size up or down by the ratio phi, ~1.618034 ... .

Friday, 31 July 2009

Computers Don't Work


Computers don't work.

This is because, with a few notable exceptions related to applications in scientific and mathematical number-crunching (like computer modelling and rocket science), when a computer system does work, we usually stop calling it a computer. When small numerical calculating computers became reliable and cheap and mass-produced, we stopped calling them "computers" and started calling them "pocket calculators". When personal computers became mainstream and stopped being niche toys for geeks, we started calling them "PCs" or "Macs", without really caring what "PeeCee" stood for.

In offices where the IT system works well, people tend to refer to the things on their desks as "workstations" or "terminals". These are things that you just switch on and start working at. They're functioning business tools.

So, once all the general-purpose systems that work are taken out of the equation, what we're left with is the bespoke systems, and all the systems that don't quite work, aren't quite finished yet, or need a lot of technical support and hand-holding. These are the scary, technical, sometimes-malfunctioning things that we still refer to as computers.

It's interesting to watch this change in naming happening with products as a market sector becomes mature. Home weather monitoring systems drifted from being marketed as "weather computers" to "weather stations", and in-car navigation systems shifted from initially being referred to reverentially as "in-car computers" to being casually referred to as "GPSes". Once the novelty factor has worn off, and people know that a product is reliable, useful and worthwhile, the "computer" tag gets dropped.

So as far as retail products are concerned, "computers", almost by definition, are the remaining gadgets that are either too new to be judged yet, or that don't work properly without a certain amount of expert hand-holding.

This also gives us a handy way of quickly assessing how good a company's IT infrastructure is. If you're visiting an office, and the general office staff refer to their "computers", then the chances are that either the staff aren't very computer-literate, or the office has just been undergoing a painful IT transition, or ... their IT systems simply suck. Try it.

Friday, 24 July 2009

Kew Gardens is Nice

Kew Gardens, map, thumbnail link
Visited Kew Gardens on Thursday with the family, to scatter our Mum's ashes.

Kew Gardens is cool. It's a 121-hectare site, with a collection of plants and habitats from around the world, and various public greenhouses with their own microhabitats (one of which has its own multicoloured lizard running wild). It's been going in various forms for about 250 years, and it's been a national botanical garden for about the last 170. In some ways it's the forerunner of the Eden Project, and it's the only site that I know of in the UK, other than my place, that has chocolate trees.

As well as the on-site research stations, there's now also a satellite site at Wakehurst Place, where they do more of the Millennium Seed Bank Project stuff.

Mum wanted to be a tree surgeon when she was a kid, so she was really into Kew, and was a paid-up member. She even had an old (legitimately acquired!) Kew Gardens sign in her garden.

So it was kindofa nice day.

Friday, 17 July 2009

Xenotransplantation and Swine Flu

link to New Scientist article with larger original version of photograph
Trying to solve the organ transplant shortage using pig organs was both a really good idea and a really bad idea. It was good because a pig's body is reasonably close to ours in terms of size, biology and organ-loading (and because pigs are omnivores, like us) ... and bad because of the virus problem that some people didn't like talking about.

There are three main reservoirs of "foreign" viruses that sometimes cross over into the human population and catch our immune systems unawares - other primates, livestock, and birds. Primates tend to be blamed for the origins of the AIDS virus, the 1918 "Spanish Flu" outbreak that killed between fifty and a hundred million people is sometimes reckoned to have crossed over from birds, and where mammalian livestock is concerned, the culprit is usually assumed to be pigs.

When a disease like this crosses over from a pig or a chicken, we sometimes get a bit disgruntled in the West and mutter that these poor agricultural communities really shouldn't be living in such close proximity to their animals, but for years we've been planning on going one better. Transplanting pig organs into people means that living pig tissue is in as intimate contact with human tissue as it's possible to be - actually snuggled up together subdermally and sharing a common blood supply. In Darwinian terms, if you wanted to encourage pig viruses to evolve so that they could thrive in a human environment, this is exactly how you'd do it, and if you were a genocidal mad scientist intent on "accidentally" killing millions of people in a cost-effective manner, without actually hiring weapons research specialists and running the risk of being spotted, then this'd be a great way to do it.

Now, you might think that we could breed a "special" population of guaranteed "disease-free" oinkers in laboratory conditions, to ensure that any transplant organs are kept squeaky-clean, and to minimise the risk as long as the organ recipients were then kept well away from any live pigs (to protect both the human and pig populations) – some researchers were supposed to be setting up special facilities for breeding "special" pigs, perhaps with a bit of gene-manipulation to make the immune-system rejection problems less severe.

Snag is, it turns out that you can't breed "clean" pigs.
Normally, the DNA in your cell nuclei codes for proteins that get used within the cell, and for RNA that moves out of the cell nucleus to do Very Useful Things in other parts of the cell. DNA also copies itself during cell division. Viruses are often RNA-based, and usually insert themselves into a cell, where they tell the cell to make more RNA-based viruses.
But RNA retroviruses run the cell's usual DNA-RNA mechanism backwards – they write DNA versions of themselves into the cell nucleus ("reverse transcription"), and from that point onwards, the cell's own nucleus generates new viral RNA.
If a retrovirus infects a mammalian egg cell or a sperm-producing cell, and those cells produce viable offspring, then those offspring inherit the virus as part of their genome - it's been written into the DNA of every one of their cells.

Sometimes the inherited virus isn't active, or is corrupted so that it does nothing, or ends up mutating again to do something that's actually useful to the host. If it's active, the individuals who have it will presumably have gene-repression systems and a primed immune system that can deal with it, otherwise they'd not survive long enough to be born. So pigs can carry a payload of porcine viruses in their DNA, and still be perfectly healthy. And they do – it turns out that as farm animals, pigs have been so intensively interbred that it now doesn't seem possible to find a pig that doesn't have a library of piggy viruses already written into its DNA. To encourage those viruses to learn how to infect human cells, all you'd have to do is transplant some living virus-bearing pig tissue into a human, and give that human immunosuppressant drugs to damp their immune system long enough to give the fledgeling viruses a chance to get in a good few generations of useful mutation, and – bingo! – you've got yourself a new "alien" human-compatible virus that most human immune systems won't yet recognise.

The xenotransplantation research community were always playing with fire. Getting funding for research that might eventually save thousands or tens of thousands of people's lives (including sick kiddies) is good ... but getting funding for a large-scale xenotransplantation programme that might end up being implicated years later in the deaths of tens of millions would be ... not quite so good. So the ethics watchdogs within the community said that it was important that society as a whole understood the risks and decided consensually to go for xenotransplantation, but when it came to lobbying for funds, the TV news would tend to show pictures of dying children with tubes stuck in them, and impassioned researchers saying that this was necessary to stop people dying ... but forget to mention the risk of a potential associated death toll on the scale of that of World War 2.

So the current swine flu outbreak has probably saved the xenotransplantation community from having to wake up in ten years' time and find that their work had been responsible for killing a hell of a lot of people. Their funding bodies probably now know rather more about pig viruses, and will now tend to ask the right questions when someone suggests stitching pig tissue into human recipients. Such as: "But isn't that an insanely irresponsible thing to do?". Since the 2009 outbreak, researchers can no longer pooh-pooh safety concerns by pointing out that nobody on the board has heard of anyone who's actually been hurt by swine flu. Conventional live pig-organ xenotransplantation is probably (hopefully) now a dead field.

Good work can still be done. Some researchers are now looking at taking pig hearts and dissolving away all the tissue to leave a cartilage skeleton on which human stem cells can be grown, to create a working human-tissue heart. That sounds like a much more sensible idea.

There's just one last question we need to answer. The sites where US researchers were keeping their pigs tended to be secret, to avoid protester sabotage and industrial espionage, and to try to make sure that the pigs were kept free from external contamination of pig or human pathogens. It'd be useful to have a full list of all such sites, to see if any of them had been set up conveniently across the border in Mexico. If there's genuinely not a link between xenotransplantation research and the current swine flu outbreak, then the xenotransplantation community can consider themselves lucky – they dodged a bullet.