Saturday, 28 March 2009

Fibonacci and the Baker's Dozen

Cover of 'The Abyss of Time: an architect's history of the Golden Section', by Martin Hutchinson
Last year I did some typesetting for a 1970s book by Martin Hutchinson, on archaeology and the Fibonacci Series.

A couple of things stuck in my head that I hadn't come across before. One was the idea that the ribbed stonework of North-European Gothic cathedrals might have been inspired by the ribbing in Northern European longboats. The other was that the Fibonacci Series may once have been used as the basis of an international prehistoric system of weights and measures (which kinda overlaps with Alexander Thom's work on the existence of a possible standardised "megalithic yard").

At first sight, this second idea looks a bit anachronistic ... surely the Fibonacci Series is a comparatively recent invention, with perhaps a few obscure older precedents in ancient texts, and could only have been of interest to a very limited number of people in ancient times?
Well, if you think of the Fibonacci Series as a mathematical thing, sure ... but if you've ever worked on a delicatessen stall, it should strike you that actually, the qualities of the Fibonacci Series make it an ideal system for quickly measuring out and bagging standardised quantities of food and other measurables, if your customers (or staff!) aren't especially numerate.

If you've ever used an old-style kitchen counterweight balance with weights that go up in powers of two, then you'll already be used to the idea of using binary for a weights and measures system. The binary system lets you measure out any integer quantity of something, but it's a bit fiddly. If you run a busy market stall, you don't want to be carefully measuring out whatever weights a customer might ask for. You want a simple set of pre-packaged sizes.
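As a quick illustration (a minimal Python sketch of my own, not anything from Hutchinson's book), here's the binary scheme in action: picking out which power-of-two counterweights add up to a requested quantity.

```python
# Greedy decomposition of a quantity into power-of-two weights,
# as on an old kitchen counterweight balance. Each weight is used
# at most once: this is just the binary representation in disguise.
def binary_weights(quantity, largest=64):
    weights = []
    w = largest
    while w >= 1:
        if quantity >= w:
            weights.append(w)
            quantity -= w
        w //= 2
    return weights

print(binary_weights(45))   # [32, 8, 4, 1]
```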

The binary series is the first member of a family of additive systems that form an Extended Fibonacci Series. But as traders, we don't want to be supplying our product only in quantities that are powers of two - that's not customer-friendly. We want a system that does 1, 2, 3 ... and then has units where each step is somewhere in the vicinity of one-and-a-half times the previous size. And that's where the next member of our Extended Fibonacci Series comes in. This second member of the family is the usual Fibonacci Series. It's the basis of an ideal weights-and-measures system for people who can't multiply or divide, and maybe don't even have a strong grasp of number. You can present them with a set of pre-set sizes that you can name, that can be created by stacking rods or blocks together, and the simplicity of the system means that they can easily check for themselves that you aren't cheating them. All they have to do is learn and recognise how the units stack together.
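Here's the Fibonacci version of the same greedy idea (again, just an illustrative sketch). The underlying guarantee is Zeckendorf's theorem: every positive integer is a sum of non-consecutive Fibonacci numbers, so a trader who stocks the sizes 1, 2, 3, 5, 8, 13, 21 ... can make up any order by stacking units, with each size roughly one-and-a-half times the one before.

```python
# Greedy decomposition of a quantity into Fibonacci-sized units
# (its Zeckendorf representation): repeatedly take the largest
# unit that still fits.
def fibonacci_weights(quantity):
    fibs = [1, 2]
    while fibs[-1] < quantity:
        fibs.append(fibs[-1] + fibs[-2])
    weights = []
    for f in reversed(fibs):
        if f <= quantity:
            weights.append(f)
            quantity -= f
    return weights

print(fibonacci_weights(45))   # [34, 8, 3]
```

The customer never needs to multiply: checking the bag just means confirming that a 34-unit, an 8-unit and a 3-unit stack match the agreed sizes.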

Traditional pre-metric weights and measures (such as the old Imperial system) tended to be based on multiples of threes and fours and sixes and twelves, with a few fives and tens thrown in for good measure. There seems to be a strong influence here from ancient Sumerian mathematics, with its emphasis on base-60 (which allows a large number of convenient integer divisions with integer results). The Sumerians get credited with the decision to use a factor of 360 for the number of degrees in a circle, and for using sixty divisions for minutes into degrees (measuring angles) or minutes into hours (measuring time).
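To see why sixty-ish numbers were so convenient for pre-calculator arithmetic, it only takes a couple of lines of Python to count divisors (an illustrative aside of mine, not something from Hutchinson's book):

```python
# Count the integer divisors of n -- a crude measure of how easily
# a quantity splits into equal whole-number shares.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))        # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(len(divisors(100)))  # 9  -- a round decimal number does worse
print(len(divisors(360)))  # 24 -- hence 360 degrees in a circle
```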

But one of the odd features of many pre-metric (ie non-decimal) systems was the appearance of "thirteen" in some of the definitions of units.
Thirteen has no right to exist in any multiplicative system of weights and measures. It's a prime number! And it's so close to twelve (which divides so nicely into 2, 3, 4 and 6), that there's no obvious reason why we'd want to use multiples of thirteen in a system instead of multiples of twelve.

Except that twelve doesn't appear in the Fibonacci Series, and thirteen does. So all those thirteens in the old archaic weights and measures systems might be leftovers from a more primitive tradition of weighing and measuring, where people created larger sizes by clumping one each of the two smaller sizes together. They might have been the last echoes of an old pre-Sumerian tradition.

Habits and traditions are sometimes passed down through human societies long after the original meanings have been lost, as a kind of behavioural fossil. If Hutchinson's hypothesis is correct, this may be one of the oldest.

Saturday, 21 March 2009

'Hyperbolic Planar Tesselations', by Don Hatch

John Baez's "This week's finds in Mathematical Physics" page often has links to math goodies. I haven't visited it for a while (where "a while" is probably measured in years), but I had a peek today, and it had a link to a site containing a whole collection of these beasties:

Thumbnail of images from 'Hyperbolic Planar Tesselations' at http://www.plunk.org, by Don Hatch
It's a page by Don Hatch called Hyperbolic Planar Tesselations, and it's full of links to larger versions of the pretty pictures. The image selected on the Baez page is especially nice, because it shows the tiling that you can achieve in negatively-curved space by replacing the usual flat-plane hexagonal tiling with heptagons. These regular heptagonal tilings don't work in a flat plane. If we extrude a flat plane in one direction, then the amount of space per unit area, as judged within the plane, is less than we'd expect. If we extrude it in two opposing directions (to produce a "saddle" or "pringle" shape), then as we draw larger shapes on the surface, they include progressively more area than we'd normally expect, thanks to all the folds and crinkles, and the resulting hyperbolic plane allows things like regular heptagonal tiling.
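If you want to check which regular tilings live in which geometry, the standard Schläfli-symbol test is one line of arithmetic: for q regular p-gons meeting at each vertex, the tiling is flat when (p-2)(q-2) = 4, spherical below that, and hyperbolic above it. A tiny sketch (mine, not from Hatch's page):

```python
# Classify the regular tiling {p, q}: q regular p-gons per vertex.
def tiling_geometry(p, q):
    k = (p - 2) * (q - 2)
    return "spherical" if k < 4 else "flat" if k == 4 else "hyperbolic"

print(tiling_geometry(6, 3))   # flat       -- the familiar hexagon tiling
print(tiling_geometry(7, 3))   # hyperbolic -- heptagons need negative curvature
```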

Okay, so I'm probably a sucker for tables of blue, black, and white geometrical figures, but even so, the "Don Hatch" page is really very nice. Some of the figures are reminiscent of Apollonian net diagrams, which I'm quite fond of as fractal tiling systems, and which in turn correspond to maps of fractal-faceted solids with an infinite number of circular faces, the sort of solid you get by continually grinding maximally-sized flat circular facets into the remaining curved surface of a truncated sphere:
Infinitely-truncated sphere, giving an infinite-faceted solid with circular faces, whose map corresponds to an Apollonian gasket
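For anyone who'd like to generate the circle sizes in one of these nets, the usual tool is Descartes' Circle Theorem: if four circles are mutually tangent and k = 1/radius, then (k1+k2+k3+k4)² = 2(k1²+k2²+k3²+k4²). A minimal sketch (my own illustration, nothing to do with the book):

```python
import math

# Descartes' Circle Theorem: given three mutually tangent circles with
# curvatures k1, k2, k3, the two circles tangent to all three have
# curvature k1 + k2 + k3 +/- 2*sqrt(k1*k2 + k2*k3 + k3*k1).
def descartes_fourth(k1, k2, k3):
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return (k1 + k2 + k3 + root, k1 + k2 + k3 - root)

# The classic integer gasket: a bounding circle of curvature -1
# (negative because it encloses the others) around two circles of
# curvature 2 yields the next circles at curvature 3.
print(descartes_fourth(-1, 2, 2))   # (3.0, 3.0)
```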
I put a quick illustrative connection map of heptagonal space on p.27 of the book ("3: Curved Space and Time"), but it was really just a crude sketch. So while my first reaction to the Hatch page was "Wow! Cool!", my second was, "Damn, I wish I'd done that".

Wednesday, 18 March 2009

They did exactly what we paid them to do

Some economics commentators have recently been getting worked up over why it is that the clever people who work at major financial institutions somehow always seem to end up creating boom-bust cycles.

Those people get paid big bonuses during a boom.
They don't have to pay those bonuses back when there's a crash.

It's that simple.

Sunday, 15 March 2009

Special Relativity is not Compulsory

Katsushika Hokusai: The Great Wave off Kanagawa
One of the foundations of Twentieth Century relativity theory was the idea that Einstein's early "special-case" theory of relativity ("Special Relativity", or "SR") had to appear as a complete subset of any larger and more sophisticated model.

At first glance, this seemed unavoidable.

Einstein's later and more sophisticated general theory was at its heart a geometrical theory of curved spacetime... it described gravitational fields in terms of how they warp lightbeam geometry, and then used the principle of equivalence to argue that the effects associated with accelerations and rotations must also follow the same set of rules. We could then model all three classes of effect as an exercise in curved-spacetime geometry, and go on to extend the model to include more sophisticated gravitomagnetic effects.

But Einstein's general theory didn't attempt to apply these new curvature principles to simpler problems involving basic relative motion, because his earlier special theory had already dealt with those cases by assuming flat spacetime. Instead of going over the same ground a second time, Einstein simply said that, just as classically-curved surfaces reduced over sufficiently small regions to apparent flatness, so the geometry and physics of general relativity, if we zoomed in sufficiently far, ought to reduce to flat spacetime and the "flat-spacetime" version of physics described by the special theory.

There were good pragmatic reasons for Einstein's adoption of special relativity as a foundation for GR, but geometrical necessity wasn't one of them. Here's why:
... It's true that if we zoom in on a GR-type model sufficiently far, we end up with effectively-flat spacetime, but this doesn't automatically mean that we then have flat-spacetime physics. It might instead mean that we've zoomed in so far that there's no longer any meaningful classical physics to be had. We have to accept at least the logical possibility that real physical particles (and their interactions) might be unavoidably associated with spacetime curvature, and in that scenario, we can't derive their relationships by presuming absolutely flat spacetime, because that condition would only be met if our particles didn't physically exist.

Allow any form of velocity-dependent curvature at all around moving particles, and SR's flat-spacetime derivations fracture and fail. This is especially unfortunate since the experimental evidence suggests that moving particles do seem to disturb the surrounding lightbeam geometry, just as we'd expect if curvature effects were a fundamental part of physics, and if the flat-spacetime basis of special relativity was wrong.

---==---

This suggestion that "all physics is curvature" was put forward in the 1870s by the mathematician William Kingdon Clifford, who's usually remembered for having his name on Clifford algebra. The critical thing about a "Cliffordian" model in this context is that when we implement the principle of relativity within it, we find that the resulting physics doesn't reduce to special relativity and the relationships of Minkowski spacetime. Instead of a Minkowski metric, it reduces in the presence of moving particles to something that looks more like a relativistic acoustic metric, and which appears to be much more compatible with quantum mechanics than our current classical models.
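For orientation (and without claiming that this is the exact form a Cliffordian model has to take), the flat Minkowski line element is

$$ds^{2} = -c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2}$$

while the acoustic metrics studied by Unruh and others have the general shape

$$ds^{2} \;\propto\; -c^{2}\,dt^{2} + \left(d\vec{x} - \vec{v}\,dt\right)^{2}$$

where v is a background flow field: signals travel at c relative to something that is itself dragged along by moving matter, rather than relative to a single fixed flat background.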

So the perfect, unbreakable geometrical proofs of SR's inevitability as physics aren't complete. In order to complete them, we have to be able to show that Cliffordian models can't work ... and that seems to be difficult, because the results of taking a Cliffordian approach seem to be pretty damned good.

To date, nobody seems to have been able to come up with a convincing disproof of this class of curvature-based solution, and until that happens we have to accept the possibility that special relativity might not be a part of our final system of physics.

Thursday, 5 March 2009

Relativity Book, Errata


There were a few issues that didn't get sorted (or spotted) before "Relativity in Curved Spacetime" went to press.
  1. The concept of universes spawning other universes via the formation of black holes (pages 241-242, Fig 17.9). I didn't manage to find out who ought to be credited with the idea in time for publication, so I had to leave the discussion and attribution a bit vague. The idea seems to have been Lee Smolin's. Sorry about that, Lee. :(

  2. I'd really wanted to track down the old textbook reference that I'd had for the electromagnetic analogue of Mach's Principle, applied to rotation. If you place an electron inside a hollow charged sphere, the field cancels, and the electron doesn't "see" the background field. But if you then spin the charged sphere, the electron is supposed to feel a radial force acting at right angles to the rotation axis, and also a sideways dragging force, analogous to the outward and sideways forces that matter feels when the mass of the outside universe is spun around it (blamed on apparent "centrifugal" and "Coriolis" fields experienced within the rotating frame). Didn't manage to find the reference in time.

  3. Missing reference. The Harwell group produced a controversial paper on centrifuge redshifts in 1960, which caused a bit of a stir. The dispute was documented in a paper by Alfred Schild, which is mentioned at the top of page 158 ("the Schild rebuttal"). Schild should have been listed in the bibliography on page 366, between the 1960 references for Hay, Schiffer et al., and L.I. Schiff:
    1960 | Alfred Schild, "Equivalence Principle and red-shift measurements", Am. J. Phys. 28, 778-780
    - rebuttal paper
    But the entry was accidentally deleted and the "rebuttal paper" comment ended up attached to the following "Schiff" reference.
    This got corrected in the hardback edition.

  4. There were also a handful of minor typesetting mistakes (typically missing or misplaced "s"-es) in the first half of the book that snuck past the spell-checker, but nothing serious. Those have been corrected for the hardback.
And as far as I know, that's it.

Sunday, 1 March 2009

Cell Fractal

Here's a nice example of a "cell" fractal that I've been staring at, off and on, for the last week or so:


I probably didn't have to zoom in quite so far to demonstrate the thing, but I thought, what the hell, let's just leave the Eee Box running until the zoom calculations hit the 32-bit floating-point limit.

The magnification doubles every second, and keeps that up for about 47 seconds.
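For the curious, a rough back-of-envelope check in Python (my own numbers, assuming the one-doubling-per-second rate described above):

```python
# Back-of-envelope check on the zoom, assuming the magnification
# doubles once per second for about 47 seconds.
import numpy as np

magnification = 2.0 ** 47
print(f"total magnification ~ {magnification:.3g}")    # ~ 1.41e+14

# A 32-bit float carries a 24-bit significand, so its relative
# precision is about 2**-23 ~ 1.2e-07. Away from the origin, pixel
# coordinates stop being distinguishable after roughly 23 doublings;
# zooms centred near zero can go deeper, because float32's exponent
# range extends down to around 1e-38.
print(np.finfo(np.float32).eps)    # 1.1920929e-07
```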

At some point there'll be a web page to go with this, but it's not quite finished yet.

Isaac Newton and E=mc²

The history of the idea of mass-energy conversion is a slightly murky one. Textbooks and lecturers find it convenient to say that Albert Einstein was the first person to suggest that mass and energy were interchangeable, but really ... he wasn't. It's a handy piece of educational fiction.

By 1905, a number of researchers were reckoned to be close to the E=mc² result. The basic argument went something like this: imagine a mirrored cavity embedded in a piece of material, containing a trapped light-complex, in equilibrium with its container. The radiation pressure of the trapped light within the container is the same in all directions. But if the container and its trapped electromagnetic (EM) energy are now viewed by a different observer who reckons that the container is "moving", then that observer will assign different Doppler-shifted energies and radiation pressures to different parts of the light-complex: The forward-aimed components now get assigned greater energy and momentum than the rearward-aimed components, and the overall momentum of the complex no longer cancels out - the container's nominal motion gives the trapped light an overall momentum that points in the direction of motion.
So the EM contents of the moving container appear to contribute additional momentum to it, as if it contained a speck of matter rather than EM energy. If we aren't allowed to look inside the container, we might not be able to tell whether it contained EM energy or real matter, and by working out how much energy it takes to reproduce the external effects associated with a given amount of mass, we end up with a very short equation for the conversion factor between rest mass and rest energy. That (if we calculate it correctly) is E=mc².
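To make the bookkeeping explicit, here's a compact version of the calculation (my own summary of the standard argument, keeping only first-order terms in v/c). Split the trapped energy E into two equal photon-like halves aimed along and against the direction of motion, give each half the usual electromagnetic momentum E/2c, and apply the first-order Doppler factors (1 ± v/c):

$$p_{\rm net} \;=\; \frac{E}{2c}\left(1+\frac{v}{c}\right) \;-\; \frac{E}{2c}\left(1-\frac{v}{c}\right) \;=\; \frac{Ev}{c^{2}}$$

Comparing this with the momentum p = mv of a slowly-moving lump of matter gives m = E/c², i.e. E=mc².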

However, it seems that Einstein's competitors either didn't calculate the conversion ratio properly, or failed to come out and suggest in print that this wasn't merely an apparent conversion of mass and energy, but The Real Thing. Einstein did both, and earned the credit.



If we want to go back further, to find an older example of the idea of "interconvertibility" in a major English-language physics text by a famous author, all we have to do is open a copy of Isaac Newton's "Opticks" [Babson archives]/[1717 edition.pdf], and flip to the "Queries" section at the back. The relevant section is Query 30:
Qu.30: Are not gross Bodies and Light convertible into one another, and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?...
The changing of Bodies into Light, and Light into Bodies, is very conformable to the Course of Nature, which seems delighted with Transmutations.
I've quoted this at the start of Chapter 2 of the "Relativity..." book ("Gravity, Energy and Mass"), which goes through some of these arguments in more detail (with the help of some pictures).

Traditionally, at this point in the discussion, a physicist will interrupt and say something like,
"Okay, perhaps Newton had the idea, but we weren't able to calculate the specific relationship until we had special relativity. Einstein used Lorentz's relationships in his calculations rather than Newtonian physics, so so E=mc² is clearly specific to Einstein's physics."
But that's not true either. It's correct that Einstein originally presented E=mc² in the context of his new "special" theory, but if he'd done the momentum calculations with the same degree of care using "olde" Newtonian emission theory, he'd have gotten the same result (with slightly less working). In fact, we can construct a continuum of hypothetical theories whose relationships differ by Lorentz-like ratios, and all of them generate E=mc². It turns out that E=mc² is a general result. I've put the details of the "Newtonian optics" argument into the book's "Appendices" section, as "Calculations 2".
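One quick way to see why the result is so robust (again, my own sketch, not a claim about Einstein's actual working): special relativity multiplies both of the first-order Doppler factors by the same Lorentz factor γ, so the container's net momentum becomes

$$p_{\rm net} \;=\; \frac{\gamma E}{2c}\left[\left(1+\frac{v}{c}\right)-\left(1-\frac{v}{c}\right)\right] \;=\; \gamma\,\frac{Ev}{c^{2}}$$

... but SR's momentum for matter is p = γmv, so the γ's cancel and we're left with m = E/c² again. Any Lorentz-like correction that multiplies the Doppler equations and the momentum law by the same ratio drops out of the comparison, which is why the whole family of theories agrees on the conversion factor.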

So, while some physics histories present Einstein's discovery of E=mc² in 1905 as a triumph of the scientific method, the reality seems to be that the equation's discovery is marked by a sequence of earlier human failures going back two hundred years.
To start with, Newton couldn't calculate E=mc² because he'd gotten the relationship between energy and frequency upside down, and assumed (reasonably but wrongly) that the "bigger", redder wavelengths of light carried more energy and momentum for a given amplitude, rather than less ("The Newtonian Catastrophe", chapter 3). Newton lived 'til 1727, and then his successors still couldn't calculate E=mc², because they trusted Newton to have gotten it right. If you were an English physicist, suggesting that Newton might have made a mistake was heresy. Towards the end of the century (1783), John Michell used Newton's arguments to calculate the gravitational wavelength-shifting of light, but he was still citing Newton's writing and using the old bad "inverted" relationships. Defending Newton from criticism was now a matter of national pride, and in 1772, Joseph Priestley's History of Optics had been cheerfully ridiculing the mental capacity of those poor benighted souls in Europe who were so behind the times that they actually still thought that light was a wave! Antagonism between the two sets of researchers meant that the Newtonian group couldn't admit the possibility of major error.

The next couple of decades saw Europe shaken up by the French Revolution, and then Continental physics really began to hit its stride. Newton's mistake had generated a bad prediction that light should travel more quickly through glass than air, and when Continental experimenters started using new technology to measure lightspeeds, they were able to show, quite conclusively (and perhaps slightly gleefully), that this wasn't the case. As we got to the mid-C19th, work by Christian Doppler and others meant that we were now quite sure how to calculate the effect of velocity on light for any given model, but instead of going back and correcting Newton's error, Newton's supporters slunk off with their tails between their legs, and did their best to rewrite physics history so that later English-speaking physics students hopefully wouldn't realise just how dumb they'd been.

The latter part of the C19th was then "lost", too. Although we now had plenty of expert wave theorists, lightwaves were now generally reckoned to propagate through some sort of aetheric medium, and there was no agreed set of governing principles defining what that medium's properties ought to be. The older Newtonian principles concerning the behaviour of light (such as the idea that the behaviour of matter and light ought to obey a single set of underlying rules) were now widely considered to be "damaged goods", and the proliferation of aether models meant that we now had a bewildering array of competing predictions for exactly how the properties of light ought to be affected by motion. There were just too many damned versions for us to be able to do these sorts of calculations confidently, and be sure that our results meant anything.

That state of affairs lasted until the early Twentieth Century.

This is where Einstein came onto the scene. Einstein had three advantages over most other contemporary theorists when it came to deriving E=mc² - he was a fan of the idea that the principle of relativity should apply to light, he was definite about the set of equations that he wanted to use, and he was (apparently) blissfully unaware of almost all of the previous two centuries of political bickering on the subject (probably helped in part by his habit, as a student, of not bothering to turn up for lectures). So Einstein was able to come to the problem "fresh", without a lot of preconceptions. He'd already tinkered with emission theory, recognised some of the problems, and had then latched onto Lorentzian electrodynamics, and decided that this was The Future.

In 1905, he published his "reimagining" of Lorentzian electrodynamics, which took the characteristics of Lorentz's relativistic aether and deleted the "physical medium" aspect as unnecessary. According to Einstein in 1905, aether was irrelevant to the problem - all that was required to generate the Lorentzian relationships was the principle of relativity and an assumption about lightspeeds. These two postulates were then sufficient to generate all of Lorentz's important math.

And then (as a very short followup paper) ... if the Lorentzian relationships in the previous paper were correct, internal energy imparted mass to bodies, according to the relationship E=mc².
At this point, Einstein was on a roll, and he was looking forwards rather than backwards ... he didn't really have much motivation to point out that, if the relationships in his earlier paper were wrong, and we reverted to the previous relativistic calculations for light, we still got E=mc². Pointing that out was a job for peer review and outside commentators, but almost no-one noticed.

We then coasted through another century, without much to suggest that anyone had connected the dots and understood the broader context for what Einstein had done and how it really related to Newton's earlier work. Right into the 1990s, students were still being told that E=mc² was unique to special relativity, and that the fact that atom bombs worked was ample evidence that no other system of equations could be right. Those claims weren't scientifically or mathematically correct, and weren't researched, but everyone seemed to believe them. Some people wrote research papers and entire books on the history of E=mc², and still somehow managed not to mention the Newtonian connection.



Not everybody missed it. The Sandman series by Neil Gaiman quotes and cites the key section in "Opticks" and points out its significance. But Sandman isn't a book on theoretical physics, it's an illustrated fantasy graphic novel. So what we appear to have here is a subject where some people who write university textbooks seem to be doing rather less background research and fact-checking than some people who write comic books.

I feel that this is an unhappy situation. But it seems to explain a lot about why theoretical physics is in its current state.