Thursday 31 December 2009

New Year's Eve

[Image: 'THIS IS THE END OF THE BEGINNING': final image from George Pal's 'Destination Moon' (1950)]
Okay, that's it. First decade of the new century over, and we've got almost nothing good to show for it, physics-wise.
That's bad. We only get ten of these per century. One down, only another nine to go before 2100. And given the rate we're burning through resources, we can't afford to waste decades like this if we want to achieve something significant this century, before we get hit by a resources crash.

So a suggested schedule. Let's officially notice the idea of a no-floor implementation of GR by late 2010 at the latest, and see if we can get rid of dark matter and dark energy. Let's have the quantum gravity guys working on acoustic metrics as a low-velocity approximation have the guts to come out and actually suggest that this might be the basis of a real theory, and not just a toy model. Let's stop issuing press releases claiming that the current version of general relativity is the wonderfullest theory and has never ever failed us, let's acknowledge the problems and let's sit down and write a proper general theory from scratch, stealing that "acoustic metric" work.

Instead of setting a schedule that puts the next theoretical breakthroughs maybe eighty or a hundred years from now because we aren't clever enough to understand string theory, let's get off our arses and do the things that we do know how to do. Kick off with the no-floor approach, and when we're energised by the success of that, converge the acoustic metric work with a GR rewrite .. and suddenly the next generation of theory only looks about five years away. If we're very lucky, two and a half. If we can't get enough people onboard fast enough, maybe eight to ten.

Unless we take that first step of exploring the idea that change might be possible and might be a good thing, we won't get anywhere except by dumb luck and/or massive public spending on hardware. If we're not careful, and we don't change the way we do things, next thing we know it'll be 2020 and we still won't have achieved anything.

So let's write off the 00's as a big double-zero. Let's pretend that the Bush years and Iraq and the financial crash never happened. We don't need multi-billion-dollar hardware for this, we only need to be able to think, and to be a bit more adventurous than we've been for the last few decades. Let's redo general relativity properly and get a theory that we can be proud of without having to spin results, one that actually predicts new effects in advance rather than retrospectively, and has the potential to lead us into genuinely new physics territory.

Tomorrow is 2010. Let's start again.

Wednesday 30 December 2009

Differential Expansion, Dark Matter and Energy, and Voids

[Images: 2dF Galaxy Redshift Survey map; a raspberry; the Pinwheel galaxy (NASA)]
Normally with a field theory, you have some idea where to start. You start by defining the shape and other properties of your "landscape" space, and then you add your field to that context, and watch what it does when you play with it.
But in a general theory of relativity (which is forced by Mach's Principle to also be a relativistic theory of gravity), the gravitational field is space. The field doesn't sit inside a background metric, it is the background metric.
So with this sort of model, we've got no obvious starting point – no obvious starting geometry, and not even an obvious starting topology, unless we start cheating and putting in some critical parameters by hand, according to what we believe to be the correct values.

We make an exasperated noise and throw in a quick idealisation. We say that we're going to suppose that matter is pretty smoothly and evenly distributed through the universe (which sounds kinda reasonable), and then we use this assumption of a homogeneous distribution to argue that there must therefore be a fairly constant background field. That then gives us a convenient smooth, regular background shape that we can use as a backdrop, before we start adding features like individual stars and galaxies.

That background level gives us our assumed gravitational floor.

We know that this idea isn't really true, but it's convenient. Wheeler and others tried exploring different approaches that might allow us to do general relativity without these sorts of starting simplifications (e.g. the pregeometry idea), but while a "pregeometrical" approach let us play with deeper arguments that didn't rely on any particular assumed geometrical reduction, getting from first principles to new, rigorous predictions was difficult.
So while general relativity in theory has no prior geometry and is a completely free-standing system, in practice we tend to implicitly assume a default initial dimensionality and a default baseline background reference rate of timeflow, before we start populating our test regions with objects. We allow things to age more slowly than the baseline rate when they're in a more intense gravitational field, but we assume that the things can't be persuaded to age more quickly than the assumed background rate (and that signals can't travel faster than the associated background speed of light) without introducing "naughty" hypothetical negative gravitational fields (ref: Positive Energy Theorem).
This is one of the reasons why we've made almost no progress in warpdrive theory over half a century – our theorems are based on the implicit assumption of a "flat floor", and this makes any meaningful attempt to look at the problem of metric engineering almost impossible.

Now to be fair, GR textbooks are often quite open about the fact that a homogeneous background is a bit of a kludge. It's a pragmatic step – if you're going to calculate, you usually need somewhere to start, and assuming a homogeneous background (without defining exactly what degree of clumpiness counts as "homogeneous") is a handy place to start.


But when we make an arbitrary assumption in mathematical physics, we're supposed to go back at some point and sanity-check how that decision might have affected the outcome. We're meant to check the dependencies between our initial simplifying assumptions and the effects that we predicted from our model, to see if there's any linkage.
So ... what happens if we throw away our "gravitational floor" comfort-blanket and allow the universe to be a wild and crazy place with no floor? What happens if we try to "do" GR without a safety net? It's a vertigo-inducing concept, and a few "crazy" things happen:

Result 1: Different regional expansion rates, and lobing
Without the assumption of a "floor", there's no single globally-fixed expansion rate for the universe. Different regions with different "perimeter" properties can expand at different rates. If one region starts out being fractionally less densely populated than another, its rate of entropic timeflow will be fractionally greater, the expansion rate of the region (which links in to rate of change of entropy) will be fractionally faster, and the tiny initial difference gets exaggerated. It's a positive-feedback inflation effect. The faster-expanding region gets more rarefied, its massenergy-density drops, the background web of light-signals increasingly deflects around the region rather than going through it, massenergy gets expelled from the region's perimeter, and even light loses energy while trying to enter, as it fights "uphill" against the gradient and gets redshifted by the accelerated local expansion. The accelerated expansion pushes thermodynamics further in the direction of exothermic rather than endothermic reactions, and time runs faster. Faster timeflow gives faster expansion, and faster expansion gives faster timeflow.
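
To see the bare mechanics of that runaway, here's a deliberately crude toy iteration. This is just generic positive feedback with an arbitrary coupling constant, not the actual model:

    # Generic positive feedback, illustration only: if a region's density deficit
    # feeds back into its own growth rate, a tiny initial difference gets exaggerated.
    k = 0.5                              # arbitrary feedback strength per step
    deficit_a, deficit_b = 1e-6, 2e-6    # two regions, starting almost identically under-dense

    for step in range(30):
        deficit_a *= (1.0 + k)           # each region's deficit grows in proportion to itself
        deficit_b *= (1.0 + k)

    print(deficit_a, deficit_b)          # both deficits have grown by a factor of roughly 190,000,
                                         # and the absolute gap between the two regions has too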

The process is like watching the weak spot on an over-inflated bicycle inner tube – once the trend has started, the initial near-equilibrium collapses, and the less-dense region balloons out to form a lobe. Once a lobe has matured into something sufficiently larger than its connection region, it starts to look to any remaining inhabitants like its own little hyperspherical universe. Any remaining stars caught in a lobe could appear to us to be significantly older than the nominal age of the universe as seen from "here and now", because more time has elapsed in the more rarefied lobed region. The age of the universe, measured in 4-coordinates as a distance between the 3D "now-surface" and the nominal location of the big bang (the radial cosmological time coordinate, referred to as "a" in MTW's "Gravitation", §17.9), is greater at their position than it is at ours.

With a "no-floor" implementation of general relativity, the universe's shape isn't a nice sphere with surface crinkles, like an orange, it's a multiply-lobed shape rather more like a
raspberry, with most of the matter nestling in the deep creases between adjacent lobes (book, §17.11). If there was no floor, we'd expect galaxies to align in three dimensions as a network of sheets that form the boundary walls that lie between the faster-expanding voids.

And if we look at our painstakingly-plotted maps of galaxy distributions, that's pretty much what seems to be happening.

Result 2: Galactic rotation curves
If the average background field intensity drops away when we leave a galaxy, to less than the calculated "floor" level, then the region of space between galaxies is, in a sense, more "fluid". These regions end up with greater signal-transmission speeds and weaker connectivity than we'd expect by assuming a simple "floor". The inertial coupling between galaxies and their outside environments becomes weaker, and the influence of a galaxy's own matter on its other parts becomes proportionally stronger. It's difficult to get outside our own galaxy to do comparative tests, but we can watch what happens around the edges of other rotating galaxies where the transition should be starting to happen, and we can see what appears to be the effect in action.

In standard Newtonian physics (and "flat-floor" GR), this doesn't happen. A rotating galaxy obeys conventional orbital mechanics, and stars at the outer rim have to circle more slowly than those further in if they're not going to be thrown right out of the galaxy. So, if you have a rotating galaxy with persistent "arm" structures, the outer end of the arm needs to be rotating more slowly, which means that the arm's rim trails behind more and more over time. This "lagging behind" effect stretches local clumps into elongated arms, and then twists those arms into a spiral formation.
When we compare our photographs of spiral-arm galaxies with what the theory predicts, we find that ... they have the wrong spiral. The outer edges aren't wound up as much as "flat-floor" theory predicts, and the outer ends of the arms, although they're definitely lagged, seem to be circling faster than ought to be possible.
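
To put rough numbers on the conventional expectation: for mass concentrated towards the centre, Newtonian orbital speeds should fall off roughly as one over the square root of the radius once you're outside most of the mass. A quick sketch (the galaxy mass and radii below are made-up round numbers, purely for illustration):

    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30         # solar mass, kg
    KPC = 3.086e19           # metres per kiloparsec

    M_galaxy = 1e11 * M_SUN  # toy value: ~10^11 solar masses, treated as sitting well inside the radii below

    for r_kpc in (5, 10, 20, 40):
        r = r_kpc * KPC
        v = math.sqrt(G * M_galaxy / r)   # Keplerian circular-orbit speed around a central mass
        print(f"r = {r_kpc:2d} kpc   v = {v / 1000:6.1f} km/s")

    # The output falls from ~293 km/s at 5 kpc to ~104 km/s at 40 kpc; measured rotation
    # curves for real spirals tend to stay roughly flat out to large radii instead.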

So something seemed to be wrong (or missing) with "flat-floor" theory. We could try to force the theory to agree with the galaxy photographs by tinkering with the inverse square law for gravity (which is a little difficult, but there have been suggestions based on variable dimensionality and string theory, or MOND), or we could fiddle with the equations of motion, or we could try to find some way to make gravity weaker outside a galaxy, or stronger inside.

The current "popular" approach is to assume that current GR and the "background floor" approach are both correct, and to conclude that there therefore has to be something else helping a galaxy's parts to cling together – by piling on extra local gravitation, we might be able to "splint" the arms to give them enough additional internal cohesiveness to stay together.

Trouble is, this approach would require so much extra gravity that we end up having to invent a whole new substance – dark matter – to go with it.
We have no idea what this invented "dark matter" might be, or why it might be there, or what useful theoretical function it might perform, other than making our current calculations come out right. It has no theoretical basis or purpose other than to force the current GR calculations to make a better fit to the photographs. Its only real properties are that its distribution shadows that of "normal" matter, it has gravity, and ... we can't see it or measure it independently.

So it'd seem that the whole point of the "dark matter" idea is just to recreate the same results that we'd have gotten anyway by "losing the floor".

Result 3: Enhanced overall expansion
Because the voids are now expanding faster than the intervening regions, the overall expansion rate of the universe is greater, and ... as seen from within the galactic regions ... the expansion seems faster than we could explain if we extrapolated a galaxy-dweller's sense of local floor out to the vast voids between galaxies. To someone inside a galaxy, applying the "homogeneous universe" idealisation too literally, this overall expansion can't be explained unless there's some additional long-range, negatively-gravitating field pushing everything apart.

So again, the current "popular" approach is to invent another new thing to explain the disagreement between our current "flat-floor" calculations and actual observations.
This one, we call "Dark Energy", and again, it seems to be another back-door way of recreating the results we'd get by losing the assumed gravitational background floor.

So here's the funny thing. We know that the assumption of a "homogeneous" universe is iffy. Matter is not evenly spread throughout the universe as a smooth mist of individual atoms. It's clumped into stars and planets, which are clumped into star systems, which are clumped into galaxies. Galaxies are ordered into larger void-surrounding structures. There's clumpiness and gappiness everywhere. It all looks a bit fractal.

It might seem obvious that, having done the "smooth universe" calculations, we'd then go back and factor in the missing effect of clumpiness, and arrive at the above three (checkable) modifying effects: (1) lobing (showing up as "void" regions in the distribution of galaxies), (2) increased cohesion for rotating galaxies, and (3) a greater overall expansion rate. It also seems natural that, having done that exercise and made those tentative conditional predictions, the GR community would have been in a happy mood when all three effects were discovered for real.

But we didn't get around to doing it. All three effects took us by surprise, and then we ended up scrabbling around for "bolt-on" solutions (dark matter and dark energy) to force the existing, potentially flawed approach to agree with the new observational evidence.

The good news is that the "dark matter"/"dark energy" issue is probably fixable by changing our approach to general relativity, without the sort of major bottom-up reengineering work needed to fix some of the other problems. At least with the "floor" issue, the "homogeneity" assumption is already recognised as a potential problem in GR, and not everyone's happy about our recent enthusiasm for inventing new features to fix short-term problems. We might already have the expertise and the willpower to solve this one, comparatively quickly.

Getting it fixed next year would be nice.

Tuesday 29 December 2009

Black Holes are Rude (in French)

[Image: the planet Uranus, an outline of France, and a black hole, superimposed]
English-language physics textbooks from before the mid-1970's tend to give the impression that everyone had agreed that black holes couldn't radiate. It was supposed to be mathematically proved. Done deal.

But there's a slight geographical cultural bias. Not all countries' research communities adopted the idea of the perfectly-non-radiating black hole with the same enthusiasm. The French theoretical physics community in particular seemed not to like black holes very much at all.

And this was probably at least partly because in French, the term for "black hole" – "trou noir" – is slang for "anus".

Now, imagine what that must do to a serious talk on black hole theory delivered in French. To have to give a 45-minute lecture on how things that disappear into a black hole can't be retrieved, including topics like the proof that "black holes have no hair", and its relationship to the hairy ball theorem. How the heck do you teach this subject without your students snickering?

So the French approach circa 1960 seemed to be to hunker down and wait for the new fashion to blow itself out (err...), after which normality could be restored. And it happened. The Wheeler black hole got assassinated by Stephen Hawking in the 1970's with his presentation on Hawking radiation.

But the English-speaking physics community kept using the term "black hole", even though technically, horizon-bounded objects under QM were now known NOT to be black holes in the Wheeler sense of the word. They weren't black, or holes. Maybe we kept the phrase because we didn't want to admit we'd screwed up, maybe we kept it because of the historical habit of physicists to completely ignore the literal meanings of words when it suits them, and maybe ... we simply liked upsetting the French.


Thanks to Hawking radiation, if you teach black hole theory in French you now have the unenviable job of addressing a room full of students on the subject of black hole emissions, and hoping that nobody thinks it's funny to start making quiet comedic fart noises at comically appropriate moments.

Perhaps the smart thing to do is to take this opportunity to come up with a whole new name for a "QM black hole". Call it something like an "Étoile Hawking" (a "Hawking Star"). It's two extra syllables, but it solves the problem.

Sunday 27 December 2009

Nuclear Fusion and the Road to Hell

[Image: burning candle]
The running joke in the nuclear fusion community is that commercial fusion is thirty or forty years away ... and always will be. The "forty years" rule isn't based on any technical issues; it's based on politics. If you say "a hundred years", then no politician is going to fund you. They want to see results in their lifetime. If you say "twenty years", then people expect you to already have prototype plans drawn up. If you say "thirty", then you get a ten-year grace period, and THEN people expect to start seeing blueprints. "Fifty years" doesn't have enough urgency ... the economy might be different in five decades. And it sounds like a made-up number. But "forty years" conveys a sense that we need to get started NOW. It dangles the carrot just far enough away for a politician to hope that they're doing the right thing: it won't come to fruition during their career, but they'll see the results in their lifetime.
Which, of course, doesn't happen, but by then we have a new crop of politicians that we can give the "forty" schtick to.

So "forty years" is an ongoing collective collective sales pitch by the fusion community to get money for their big conventional tokamak projects from their respective governments. The guys involved sincerely believe that fusion power is the future of the human species, and that the system WILL work one day, and that it HAS to be funded for us to progress. They seem to be using the "tobacco industry" principle – that if you testify that you believe that something is correct, then as long as you can force yourself to believe it at that particular moment, it's difficult for anyone to call you to account for lying. The unattributed quote in the New Scientist editorial after the funding round in 2006, when someone was asked whether they honestly believed the estimates being given for timescales by the fusion community was: "We have to, or the politicians wouldn't give us the money".

The tokamak guys probably reckon that this doesn't technically count as scientific fraud, because they're only misleading politicians, and not other scientists. It's just gaming the system, nobody's getting hurt, right? And anything that gets additional money for science is good, yes?

And that's where the rot sets in. Because "big tokamak" research is so damned expensive, it means that once you've started, you're committed. To spend years of your life on a project and billions of dollars and THEN have it cancelled would be worse than not having started. So you find yourself in a "double or quits" situation, where you have to keep the lie going, and find yourself having to do other bad things to protect the structure you've built.
It basically has you by the balls.

There are quite a few other potential ways to do nuclear fusion, and although lots of them look flakier than the idea of using a nice big solid tokamak, they're also a hell of a lot cheaper to research. So you'd think that the sensible course of action would be to put a little money into those alternatives as a side bet, in case the "big toroidal tokamak" idea doesn't pan out, or in case one of those cheap ideas suddenly starts working.

But if you're a "BTT" guy, the side-projects can't be allowed conventional funding or credibility. That's why, when the Cold Fusion story hit, those involved were immediately being written off as con-artists or delusional incompetents by people who knew nothing about palladium-hydrogen geometries – the threat wasn't that the CF guys might successfully con a measly five or ten million in funding from the government, it was that governments might start considering a "mixed basket" approach to fusion research, and if you have five cheap fusion research programmes and one very expensive one, the temptation is to drop the one that costs so much more when funding gets tight. So once you're chasing Big Fusion, it becomes imperative for the success of your mission that there are no other options for a funding committee to look at. Ideally, you want all that other research stopped.

This is the road to damnation. You wake up one day and find that you're no longer the heroic researcher battling the corrupt political system to save a project from cancellation – you're now part of the corrupt political system suppressing other people's fusion research. And it's not the politicians at fault - it's you. Somewhere along the line, you morphed from Anakin Skywalker to Darth Vader, and became one of the Bad Guys.
The only way to justify your actions – and save your scientific soul – is to come up with the goods and save humanity. But this means that you can no longer consider even the possibility that the BTT approach might not be the right way to go, because that'd mean that you'd lose your one shot at salvation.
So what happens if one day you realise that there's something your colleagues overlooked that seems to make the entire project unworkable? Do you go public and risk being responsible for shutting down everything you and your colleagues have devoted your careers to, or do you keep quiet in the hope that you're wrong? If you decided to go public, how far might some of your colleagues go to stop you? Things get nasty.
This is why we have stories about people selling their souls to the devil, and finding out, too late, that they've killed the very thing they wanted to save. They're cautionary tales about human nature and temptation that are supposed to help us to do the right thing in these situations.


The fusion guys have been getting away with it so far, because we all hope that they'll actually be able to come up with the goods. But the public is getting increasingly sceptical about how far they can trust scientists, and the fusion community has to take its share of the blame for that.

Let's suppose that the global warming argument is correct, and that in 15-20 years' time the Earth's weather systems shift in a way that's not terribly convenient for our current city locations, or that we end up bankrupting ourselves in a last-ditch attempt to cut down on carbon emissions. Who're we going to blame?
The climate change people will blame the politicians for not listening to the scientists and planning ahead ... but the politicians will be able to say that they did take the best available scientific advice, and did plan ahead. And spent the money on the big fusion programmes. They didn't properly fund development of next-generation fission reactors that'd be more palatable to the general public than the current monsters, because they were told that fission reactors would be obsolete by now. They didn't do more to fund clean coal, because our power stations weren't still supposed to be burning fossil fuels past the end of the Twentieth Century. They didn't do more to fund wave and wind and solar power research, or try to make society more energy-efficient, because by now we were supposed to be enjoying practically unlimited energy "too cheap to meter". The concentration of strategic oil reserves in Middle-Eastern countries like Iraq and Iran wouldn't matter so much by now, certainly not so much that we'd be prepared to go to war over them. The forty-year estimates back in the 1960's meant that we simply didn't need to prioritise these things. The fusion guys had assured us that we didn't need to, all we had to do was write them a cheque.

Like most people, I hope that the guys can show us that we're wrong, and really can get this to work on a reasonable timescale.

Otherwise ... Welcome to Hell.

Monday 21 December 2009

Fibonacci Fractals

[Image: Fibonacci fractal]
This fractal's based on the Fibonacci Rose.

The original Rose has two identical interlocking spiral arms. If we delete one of them, we're left with a simple spiral chain of triangles. Each triangle has three sides – one side connects the triangle to its larger parent, one connects it to its single smaller child triangle, and the third side is unused.

Adding child triangles to two sides gives us the fractal – a characteristic cauliflower-shaped branching structure whose adjacent bunches have corners that just touch.

At larger scales, this looks just like one of the family of fractal shapes that we get by using the Golden Ratio to calculate triangle sizes, that let us zoom infinitely far in or out and always get the same shape.

[Image: Fibonacci fractal limit]
With the "Fibonacci" versions we can zoom out infinitely far, but as we zoom in, there's a range where the proportions start to shift perceptibly away from the Golden Ratio, and then, suddenly, the branching sequence hits a dead end, and stops.
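
One way to see why the "Fibonacci" version behaves like that (a numerical aside, not part of the Rose construction itself): the ratio of consecutive Fibonacci numbers only settles onto the Golden Ratio as the numbers get large, and the sequence itself bottoms out at 1, 1, so there's eventually nothing smaller to build the next triangle from:

    # Consecutive-term ratios of the Fibonacci sequence versus the Golden Ratio.
    PHI = (1 + 5 ** 0.5) / 2   # 1.6180339887...

    fib = [1, 1]
    while len(fib) < 15:
        fib.append(fib[-1] + fib[-2])

    for a, b in zip(fib, fib[1:]):
        print(f"{b:4d} / {a:<4d} = {b / a:.6f}   (phi = {PHI:.6f})")

    # The early ratios (1.0, 2.0, 1.5, 1.667, ...) wobble visibly around phi, which is
    # the range where the proportions drift; further up the sequence the ratios are
    # indistinguishable from phi at drawing resolution.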

Sunday 20 December 2009

The Tetrahedral Triple-Helix

[Image: Tetrahedral triple-helix, Eric Baird 2009]
Mathematicians playing with geometrical solids tend to concentrate on the finite ones. Those provide a nice satisfying sense of closure, and they're cheaper to build with straws and pipecleaners than the infinite ones.

This is an interesting shape that doesn't fall into that category. It's a simple rigid stack of tetrahedra that generates a "column" with a triple-helix. The odd thing is, you'd expect an architect to have already used this on a structure somewhere ... but I don't recall ever seeing it.
Maybe I missed it.

The sequence rotates through [~]120 degrees and [nearly] maps onto itself every nine tetrahedra (that is, the tenth [nearly] aligns with the first). If you want to follow one of the spiral arms through a complete [~]360-degree revolution, that takes 9×3=27 tetrahedra (#28 corresponds to #1).

Oh, and it has a hole running right down the middle.

I'll try to upload some more images in another post.

Saturday 19 December 2009

"Snowflake" Fractal

[Image: Hexagonal 'Corner-Cluster' fractal snowflake, Eric Baird 2009]
If you want something that looks more like a snowflake than the previous hexagonal carpet, you could always use the "Koch Snowflake" fractal, which is gotten by repeatedly adding triangles to the sides of other triangles.
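
For completeness, that segment-replacement rule is only a few lines of code. A minimal sketch (the depth and size here are arbitrary choices):

    import cmath, math

    def koch_segment(a, b, depth):
        """Points (excluding b) of the Koch curve from a to b, as complex numbers."""
        if depth == 0:
            return [a]
        p1 = a + (b - a) / 3
        p2 = a + 2 * (b - a) / 3
        # middle third rotated by 60 degrees to make the outward-pointing bump
        peak = p1 + (p2 - p1) * cmath.exp(-1j * math.pi / 3)
        points = []
        for start, end in ((a, p1), (p1, peak), (peak, p2), (p2, b)):
            points.extend(koch_segment(start, end, depth - 1))
        return points

    def koch_snowflake(depth=4, size=1.0):
        """Apply the segment rule to the three sides of an equilateral triangle."""
        corners = [size * cmath.exp(1j * (math.pi / 2 + 2 * math.pi * k / 3)) for k in range(3)]
        outline = []
        for k in range(3):
            outline.extend(koch_segment(corners[k], corners[(k + 1) % 3], depth))
        return outline

    print(len(koch_snowflake(depth=4)))   # 3 * 4**4 = 768 points on the outline
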
[Image: Koch snowflake fractal]
But every single general text on fractals seems to include the Koch. I mean, don't get me wrong, it's a fairly pleasant shape, but after the nth "fractals" text slavishly copying out exactly the same fractal set-pieces, you start to think ... guys, could we have a little bit of variation pleeeeaaase?

So here's a different snowflake. This one's built from hexagons. Each hexagonal corner forms a nucleation site that attracts a cluster of three smaller hexagons, and their free corners in turn attract clusters of three smaller ... you get the idea. The sample image has been drawn with about six thousand hexagons.

The resulting "snowflake" outline is really very similar to the Koch, but the internal structure's a bit more spicy. A suitable design for Christmas cards for mathematicians, perhaps.

Friday 18 December 2009

Hex "Fractal Carpet"

[Image: Hex fractal carpet, Eric Baird 2009]
It snowed here today! Wheee!

In honor of the White Hexagonal Fluffy Stuff, here's a nice fractal carpet made of hexagons that illustrates how an infinite number of copies of a shape can converge on a larger fixed-area version of the same shape.

This one's generated from about five and a half thousand hexagons, but obviously, you can keep going infinitely far.

The construction rule's simple. You start with one hexagon (with sides of length "one"), and then add half-size hexagons to any free corners. Then repeat, infinitely (with sides of length "one half", "one quarter", and so on).

What the process converges on is a larger completely-filled hexagon with sides of length "three", so the final area is exactly nine times the original.
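
A quick sanity check on that "nine times" figure, using the standard area formula for a regular hexagon:

    import math

    def hexagon_area(side):
        """Area of a regular hexagon: (3 * sqrt(3) / 2) * side^2."""
        return 3 * math.sqrt(3) / 2 * side ** 2

    print(hexagon_area(3.0) / hexagon_area(1.0))   # 9.0: area scales with the square of the side length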

[Image: Hex fractal carpet: total area]
If you wanted to get even more recursive, you could try copying the entire hexagonal shape into every hexagon that you used to draw it (and then repeat that). Which would look rather cool. But take rather longer to calculate.

Thursday 17 December 2009

Mongolia, Stochastic Quantum Mechanics, and Spacetime Curvature

[Image: reduced-quality thumbnail of a figure from 'Space-time structure near particles and its influence on particle behavior', Khavtain Namsrai, International Journal of Theoretical Physics 23, 1031-1041 (1984)]
One of the arguments in the book was that quantum mechanics can describe velocity-dependent distortions in spacetime associated with the motion of a massed particle with respect to its surroundings.

It's a simple enough idea:
For a fundamental particle, QM says that the particle's effective position has a degree of uncertainty to it. It ought to be smeared out over the surrounding region of spacetime as a probability field. Its energy and momentum are smudged. Any attempt to sample that energy and momentum gives quantised results that jump about a lot, but if we average the results of a large number of similar possible measurements to produce a smoothed, idealised map of the probability-field for the particle's massenergy, we end up with a density-field surrounding the particle's notional position. That field expresses a distribution of massenergy through space, and – functionally and definitionally – a distribution of massenergy would appear to have the properties of a gravitational field (because that's effectively what a massenergy field is). The momentum is similarly smudged, giving us a polarised field-distortion component that expresses the particle's state of motion. The shape of the field tells us the probability-weightings for the likelihood of our being able to make certain measurements at given locations.

One interpretation might be that the underlying stochastic processes are truly random, and that the shape and dimensionality of classical physics appears as an emergent feature.
Another might be that the shape represents classical physics principles operating below the quantum threshold, but being drowned out by signal and sampling noise (until we average out the noise to reveal the underlying structure).
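
As a purely statistical illustration of that second reading (nothing to do with any particular QM formalism): bury a smooth underlying profile under noise and crude quantisation on every individual sample, and the smooth shape still comes back out of the average:

    import math, random

    random.seed(0)

    def one_measurement(x):
        """A single 'sample': smooth underlying profile + noise, then crudely quantised."""
        underlying = math.exp(-x * x)               # stand-in for a smooth density profile
        noisy = underlying + random.gauss(0.0, 0.5)
        return round(noisy * 4) / 4                 # quantise to steps of 0.25

    xs = [i / 10.0 for i in range(-30, 31)]
    N = 20000
    averaged = [sum(one_measurement(x) for _ in range(N)) / N for x in xs]

    # Any single measurement jumps around in steps of 0.25, but the average settles
    # back onto exp(-x^2) to within roughly the sampling error (~0.5 / sqrt(N)).
    print(max(abs(avg - math.exp(-x * x)) for x, avg in zip(xs, averaged)))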

Trouble is, if we take these QM averaged-field descriptions seriously, they imply that the correct classical geometrical model for particle interactions isn't flat spacetime – the existence and state of motion of a particle corresponds to a deviation from flat spacetime, and the greater the relative velocity between particles, the more significant the associated gravitomagnetic curvature effects become.

With this approach to quantum mechanics, observer-dependence doesn't have to be some "spooky" effect where the same experiment has physically-diverging results depending on how the observer looks at it, due to reality having an odd, probabilistic multiple-personality disorder ... we get different predictions when we change the position and speed of our observers because the presence and properties of those observers physically alter the shape of the experiment, in ways that can cause quantised measurements of the geometry to come out differently. It's a non-Euclidean, nonlinear problem. At those scales there's no such thing as a perfect observer, so to describe how the experiment should play out in isolation, and then to repeat it with different embedded observers charging across the playing-field, is to carry out different experiments.

It's not a difficult argument, but "particle physics" people have a tendency to argue in favour of special relativity by saying that we know for a fact that curvature plays no measurable role in high-energy physics, and mathematicians have a habit of trusting physicists when they say what their experiments show ... so I didn't know of any examples of QM people discussing velocity-dependent curvature when I wrote the book.


Anyhow, earlier this year I stumbled across one. Funny story: I was looking at my Google Analytics statistics, and obsessing about how nobody from Mongolia seemed to have been visiting my website, and then I happened to visit a "citation statistics" site that included a world-map, so naturally I zoomed in to find out what fundamental theoretical research had been coming out of Mongolia. As one does. The search only gave one result, so I clicked on it.

And then I choked into my coffee as this thing came up:
"Space-time structure near particles and its influence on particle behavior"
International Journal of Theoretical Physics [23] 1031-1041 (1984)
Kh. Namsrai, Institute of Physics and Technology, Academy of Sciences, Mongolian People's Republic, Ulan-Bator, Mongolia

Abstract: "An interrelation between the properties of the space-time structure near moving particles and their dynamics is discussed. It is suggested that the space-time metric near particles becomes a curved one ... "
DOI 10.1007/BF02213415

The paper also appears as a chapter towards the end of Namsrai's rather expensive book, Nonlocal Quantum Field Theory and Stochastic Quantum Mechanics (Springer, 1985).

11.8.2:
" ... Physically this relationship means that by knowing the space-time structure near the particle we can calculate its velocity (generalized) and, on the contrary, by the value of the particle velocity one can try to build the space-time structure near the moving particle. Thus, it seems, there exists a profound connection between these two concepts and they enter as a single inseparable entity into our scheme. "
The "stochastic" approach looks at QM from a "shotgun" perspective, superimposes the result of a large number of potential measurements ... and arguably generates a spacetime that's "curved" in the vicinity of a "moving" particle in such a way as to describe the particle's velocity. The curvature generates the velocity, and the velocity generates the curvature. Which was kinda what I'd been saying. But with a lot more advanced math to back it up.

Of course, the irony here is that Namsrai's paper and book describe the emergent classical properties of apparently random processes ... and I only found the piece (which is exactly what I needed to find), by using an apparently random method.

Spooky! ;)

Tuesday 15 December 2009

ReactOS: Windows without Windows

[Image: ReactOS 0.4 logo: 'In a world without walls, who needs Windows?']
ReactOS is a free, open-source replacement for the Windows XP operating system. Okay, the ReactOS guys will probably take issue with that statement and argue that as far as the latest codebase specifications are concerned, the target platform has now moved on, and is now W2k3/Win7 ... but for most prospective users, ReactOS is effectively an XP-substitute.

Lots of people liked XP, but since Microsoft no longer sell XP retail and the official MS mainstream support for XP ended back in early 2009, it's good to have an alternative. It's also good to have an alternative supplier to Microsoft – Windows 7 is supposed to be fine, but the gap between XP sales being cut off and Win7 appearing rather undermined Microsoft's image as a vendor who could be relied on for continuity of supply. MS temporarily cut off huge numbers of users who'd invested in XP and couldn't use Vista, because it wanted to prop up Vista sales. It's difficult to commit to a single operating system if the sole supplier is in the habit of pulling stunts like that.

ReactOS aims to become the Windows that Windows never quite managed to be (to steal the old IBM OS/2 slogan, "A better Windows than Windows"). It's been written from scratch (to avoid legal issues), and the developers haven't had to compromise their code to meet pesky release deadlines, so ReactOS is very, very fast and very, very small. It should run Windows software on a netbook faster than XP, using fewer resources. It's the mythical lean-and-mean "de-bloated Windows" that Windows users spent years asking for.

The downside of this perfectionist "no-deadlines" approach is that ReactOS is also very, very late.

The ReactOS 0.3.11 release was finally finished today (Twitterlink).
ROS 0.3.x is "pre-beta" software, which translates as "It runs, but not all software, and not on all hardware, and expect the occasional crash" Lots of programs are already supposed to run on ROS just fine, others may stall in certain situations when they try to call a Windows function whose ROS counterpart hasn't quite been implemented yet, or where a subsystem is still a work in progress. These problems should become less and less common as the ROS version numbers increase and the remaining holes in ReactOS get filled in. Issues that affect the more popular programs are being fixed first.

The final goal is 100% Windows compatibility, for everything. And that's not just for software but for hardware too. So if you have a scanner or printer that already comes with a Windows driver, that driver should (eventually) install "as-is" under ROS. That's worth repeating: no ReactOS-specific drivers are required. So you won't have the problem that Linux had, of having to wait around for a company to pay someone to write a special driver to get your niche hardware to run on a "non-Windows" OS. If there's already a Windows driver, it should (eventually) install under ReactOS, with no special steps needed.

Problems? Well, the lateness issue. The ReactOS project has been going for a while now, and it only really started looking sensible this year. It would have been great if ROS could have been gotten to "retail-level" solidity before Win7 came out ... while people were still desperate for any version of Windows that wasn't Vista ... but that didn't happen. Windows has accumulated a LOT of different subsystems, from video to networking, all of whose features would need to be replicated in order for all programs to run under ROS in exactly the way they do under Windows.
Certain things have been deprioritised. ROS doesn't yet have a "fancy" desktop, and it can't write to an NTFS filesystem. But, like an amoeba collecting DNA from other organisms it eats, ReactOS is picking up tricks from other open-source projects – ROS is now sharing code with the WINE project (which lets you run Windows code in a bubble inside Linux), so further improvements to WINE help ROS, and vice versa.

Currently, one of the main issues for ROS is getting the thing to install on a disparate range of real-world PC hardware with its limited set of default bundled drivers. Once ROS is up and running you can start downloading and installing more specific Windows drivers, but if the included drivers aren't compatible with your hardware, you're stuck. Many ROS enthusiasts are running the OS in a "virtual machine" bubble under another operating system, so default support for real-world hardware hasn't been so much of a priority until recently.

Version 0.3.9 installed fine on my old (~2001) Sony laptop, but wouldn't install on an ASUS netbook, because the netbook had a serial ATA drive that the included ROS drivers didn't understand (to be fair, original-release XP doesn't install on the ASUS either, for the same reason).
At version 0.3.10, they tackled the SATA problem by bundling in a "universal" ATA driver from a different open-source project ... which turned out to have a few compatibility issues of its own, with the result that 0.3.10 refused to install on either of my test machines.
Version 0.3.11 is probably going to be the first version where the OS really starts to be functional to most people, with 0.4.x starting to develop and polish some of the user-interface components and secondary features.

If you can actually install ReactOS on your hardware, what sort of advantage is there to running programs under it instead of under XP? Well, you know how XP takes at least 45 minutes to an hour to install? Because of its small footprint, ReactOS 0.3.9 installed onto my blank laptop in five minutes flat. And by "five minutes" I really do mean five "stopwatch" minutes (as in "approximately three hundred seconds").

That's fast.

Monday 14 December 2009

Momentum and Kinetic Energy

[Image: fast car with motion blur]
Kinetic energy is a slippery subject under relativity theory, so I try to give it a wide berth wherever possible. :) Here's how it behaved under C20th theory:
  • Under "early SR", kinetic energy seemed to appear in the equations in a quite an intuitive form … if we converted a body to light, and moved past the body at velocity v, different parts of the light-complex would be redshifted and blueshifted by Doppler effects, but the overall summed energy (according to the LET/SR relationships) would always end up being increased by the Lorentz factor, 1:SQRT[1 - vv/cc]. We could interpret the increase rather nicely by arguing that, since we could use SR to describe a “moving” body as being Lorentz time-dilated, this should translate into an apparent increase in the object's inertia (and its effective inertial mass), and by applying the E=mc^2 equation to that enhanced “relativistic mass” value, we arrived at the appropriate “enhanced” value for the total energy of the light-complex under SR. This additional energy due to motion wasn't the traditional “half em vee squared”, so our original schoolbook arguments and derivations for KE weren't quite correct in the new context except as a low-velocity first approximation.

  • With modern, “Minkowskian” SR, the subject of special relativity evolved. It cast off some earlier concepts from Lorentzian electrodynamics and Lorentz Ether Theory, and ended up as the theory of the geometrical properties of Minkowski spacetime. The idea of relativistic mass got downgraded, and some of the more mathy people started to say that the concept of relativistic mass was “bad” and shouldn't be taught. What was important (they said) were just two things: (1) the rest massenergy of a particle, and (2) its path through spacetime. Everything else was a derivative of these two things, and the conserved property wasn't either momentum or kinetic energy, but a new hybrid thing, called momenergy. But they kept the extra Lorentz-factor when calculating things like SR momentum.
    So with the appearance of “modern SR”, our concept of what kinetic energy was (and what it ought to be) changed again. Taylor and Wheeler's “Spacetime Physics” (2nd ed., 1992) has a useful chapter on momenergy (§7) that works through this “Minkowskian” approach.

  • Under general relativity things got even more slippery, because we arguably moved from the concept of “simple” kinetic energy towards that of physical, recoverable kinetic energy that could be expressed as a change in shape of the metric. Defining the nominal energy contained in a region gets difficult unless we also define the properties of other neighbouring regions that it might interact with. Even if we know all the masses and velocities involved, the effective resulting energy can also depend on their distributions and arrangements.
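
Here's that first bullet's point as a short numerical sketch, using only the standard SR relations (the test mass and speeds are arbitrary):

    import math

    C = 299_792_458.0   # speed of light, m/s

    def lorentz_gamma(v):
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    def ke_relativistic(m, v):
        """Kinetic energy under standard SR: (gamma - 1) * m * c^2."""
        return (lorentz_gamma(v) - 1.0) * m * C ** 2

    def ke_newtonian(m, v):
        """Schoolbook kinetic energy: half m v squared."""
        return 0.5 * m * v ** 2

    m = 1.0   # kg, arbitrary test mass
    for fraction in (0.001, 0.1, 0.5, 0.9):
        v = fraction * C
        print(f"v = {fraction:5.3f}c   SR: {ke_relativistic(m, v):.4e} J   Newtonian: {ke_newtonian(m, v):.4e} J")

    # At 0.001c the two figures agree to about one part in a million;
    # at 0.9c the SR figure is more than triple the schoolbook one.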

If we switch to the “redder” equations suggested in the book, things change again. Because each ray is now redder than its SR counterpart by the Lorentz factor, that “nice” SR result that the totalled energy of the emitted light increases with velocity by the Lorentz factor vanishes. Now, the sum of all the energies of the rays gives exactly the same value regardless of the relative speed between the observer and the experiment. Adding the ray-energies together gives a fixed value that represents just the rest massenergy of the original body. So (as someone asked, after the Newton and E=mc^2 post) where did the kinetic energy component go?

Well, in this system, the thing that describes the original body's ability to do work due to its motion isn't just the total summed energy of the rays, but the total ray-energy multiplied by the asymmetricality of its distribution.
Suppose that we instead took an electrical charge and distorted its field – it'd now have two energy components: the default energy associated with the electric charge, and an additional energy due to the way that the effect of that charge was distorted (like the energy bound up in a stretched or squashed spring). If an electrical charge is seen to be moving, its field seems distorted due to aberration effects, and it therefore carries an additional chunk of energy, even though the "quantity of field" is the same. Similarly, in our gravitomagnetic model, we have a moving gravitational charge whose "quantity of field" is the same for all velocities, but whose velocity-dependent distortion carries an additional whack. When we then convert the body to trapped light, the total energy of all the individual rays corresponds to just the "rest field" or the "rest massenergy", and the original body's kinetic energy shows up as the apparent additional energy-imbalance across the light-complex, due to the fact that it's moving.

Thought-experiments: If a “stationary” body is converted to light, the resulting distribution of light-energy is completely symmetrical. No asymmetry means no equivalent kinetic energy. Add energy symmetrically to the light-complex and then convert it back into matter, and because the resulting body still has zero overall momentum, the added energy has to translate into additional rest mass rather than KE. Add energy to the complex asymmetrically, and the imbalance means that the resulting mass now has to be “moving” wrt the original state in order to preserve our introduced asymmetricality, and appears as kinetic energy. Remove energy from the original balanced complex, asymmetrically, and you again create motion and KE in the resulting object (although the reconstituted body now has a smaller rest mass thanks to the energy that you stole).
In this model, the kinetic energy for a simple “moving” point-particle doesn't show up in a simple calculation of summed energy values for the equivalent light-complex. It appears as the energy-differential across the light-complex, caused by the way that the rest energy gets redistributed by the original body's motion.

To find the asymmetry of energies in a light-complex, we can use a vector-summing approach, which allows ray-energies to cancel out if they're aimed in opposite directions, leaving us with a residual measure of the differential energy ... which relates to the net momentum of the light-complex.
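
Here's that vector-summing idea as a toy calculation in one plane, using the standard photon relation p = E/c for each ray (the specific ray energies are made up for illustration):

    import math

    C = 299_792_458.0   # speed of light, m/s

    def net_momentum(rays):
        """Vector-sum the momenta (p = E / c) of rays given as (energy_J, angle_rad) pairs."""
        px = sum(E / C * math.cos(angle) for E, angle in rays)
        py = sum(E / C * math.sin(angle) for E, angle in rays)
        return math.hypot(px, py)

    # Symmetrical light-complex: equal-energy rays in opposite directions.
    symmetric = [(1.0, 0.0), (1.0, math.pi)]
    # Asymmetrical one: same 2 J total, but unevenly split between the two directions.
    asymmetric = [(1.5, 0.0), (0.5, math.pi)]

    print(net_momentum(symmetric))    # 0.0 (to rounding): the ray momenta cancel, no net momentum
    print(net_momentum(asymmetric))   # ~3.3e-9 kg m/s: the residual that tracks the energy imbalance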

So under the revised model, there are two energy values to consider, the "quantity of field" associated with a particle, and the angular distribution of how that first energy-field is arranged. The first one's the rest massenergy, the second's the kinetic energy.

This isn't the usual way of doing things, but it arguably gives us a more minimalist logical structure than the one used by special relativity. Under SR, the motional energy of a particle shows up in the geometry twice – first in the gross quantity of energy, and then a second time in the redistribution of that Lorentz-increased energy due to velocity – and we have to do some odd-looking four-dimensional things to cancel one from the other. With this system, the motional energy only appears once, as the asymmetrical distribution of the fixed quantity of rest-energy.

To me this looks like it might be a more elegant way of doing physics.

Friday 11 December 2009

Academic Research

A group of scientists crash on a desert island on the way back from a conference. Quickly, they each decide to start doing something to help the group survive until they're rescued, using their own individual skills. In the group are an engineer, a biologist, a mathematician and a tenured research scientist.

"I know," says the engineer, "I'll build a shelter, and start looking for a water source. Maybe start digging a well."

"Okay," says the biologist, "I'll check whether the water's likely to be drinkable, and I'll do a quick audit of the local flora and fauna, and see what's edible and what's poisonous. I'll also keep a look out for signs of any wild pigs that we might be able to trap and eat."

"Fine," says the mathematician, "I'll stocktake our supplies and work out how long they're likely to last and how we should ration them, and I'll also try to work out where we are, what side of the island we ought to use for our signal beacon, and the best location for it, for maximum visibility."

All this time, the academic researcher has been shaking his head, until he can't contain himself any longer. "You're all crazy!", he shouts.
The others look at him, blankly, and wait for him to explain.
"You guys honestly don't have a clue, do you?", says the academic. "All of these plans are totally impractical. Let me point out a few basic facts here that you all seem to have forgotten. We're lost. We're probably hundreds of miles from the nearest shipping lane. Nobody knows we're here. We have no working communications equipment, and even if we did, we have no electricity to power it."
"So how the hell are we supposed to submit the research proposals for all this work?"

Sunday 6 December 2009

Quantum Gravity

[Image: Quantum Gravity research logo]
Quantum Gravity is one of those cool project names that doesn't necessarily have a set definition.

This is because the Quantum Gravity guys know roughly where they're trying to get to, but are still exploring different potential routes that might be able to get us there. It's not necessarily about "quantising gravity" in the graviton sense (although some people are following that path) – more generally, it's any research that tries to work out how the heck we can reconcile quantum mechanics with general relativity.

Currently, the situation with gravitational horizons is that quantum theory lets us prove, absolutely definitely, that horizons radiate (Hawking radiation), while Einstein's general theory lets us prove, absolutely definitely, that they don't. We can superimpose QM "sprinkles" onto a GR background to retrofit the desired QM effect onto a nominal GR metric, but that's really a "hand-crafted" response to the problem. It'd be better to have a single overarching theory with a single consistent set of rules that let us incorporate the best parts of both models in a single internally-consistent scheme, and that's the end result that quantum gravity research tries to take us towards.
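
For a sense of scale, the standard textbook Hawking-temperature formula (nothing specific to any of the approaches below) puts a stellar-mass horizon far below the temperature of the cosmic microwave background, which is part of why the disagreement is so hard to test directly:

    import math

    HBAR = 1.054571817e-34    # reduced Planck constant, J s
    C = 299_792_458.0         # speed of light, m/s
    G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
    K_B = 1.380649e-23        # Boltzmann constant, J/K
    M_SUN = 1.989e30          # solar mass, kg

    def hawking_temperature(mass_kg):
        """Hawking temperature of a Schwarzschild black hole: hbar c^3 / (8 pi G M k_B)."""
        return HBAR * C ** 3 / (8 * math.pi * G * mass_kg * K_B)

    print(hawking_temperature(M_SUN))   # about 6e-8 kelvin for one solar mass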

A common approach to QG research is seeing what happens when we add additional dimensions, since this increases the range of geometrical possibilities beyond those already defined for the individual models that we're trying to unify ... the hope being that we might find one special extended geometry that has a special claim to owning the sub-theories that it needs to include. The string theory guys tend to do this sort of work (although some folk feel that they may have gotten a little carried away with the extent of their attempt to catalogue all the possible solutions!).
Another is to look at ways of extending the QM approach of carrier particles as mediators for force to include gravitation, and see what happens. So we have things like Higgs Field research. But since classical gravitation is usually considered to be a curvature effect, not everyone agrees that we need a separate "curvature particle", since the idea seems to be a little self-referential.
We now also have people studying acoustic metrics as a way of generating curved-surface descriptions of Hawking radiation.

Different approaches to QM can suggest different approaches to QG. The "stochastic" approach to QM suggests that perhaps we can average the random fluctuations in a model to produce an effective classical geometry, and perhaps we might even be able to leave the dimensionality of these fluctuations unspecified, and see whether our own 3+1 dimensional universe emerges naturally over larger scales once everything's smudged together in the correct statistical proportions.

So, lots of approaches. Folk are working on a lot of potential theories, in the hope that one of them might end up being (or leading to) the winner ... or perhaps a few different methods might succeed if they turn out to be dual to each other. It's probably a bit like being back in the early days of quantum mechanics, where there were multiple approaches and nobody really knew which were good, which were bad, which were dead ends, and which were equivalent.

Meanwhile ... while we're waiting for "the theory of quantum gravity" to arrive ... we have some nice logos to look at.