Saturday, 29 August 2009

M.C. Escher's "Relativity", Intransitivity, and the Pussycat Dolls

PCD: gravitationally-conflicting staircases in the Pussycat Dolls' video for 'Hush Hush'
There's a nice example of intransitive geometry in the latest Pussycat Dolls video ("Hush Hush").
No, really, there is. It's the bit where the girls are on four staircases attached to the sides of a cube, that each have a different local direction of "down". The "stairwell" section of the video starts at about 58 seconds in and goes on until about a minute thirty. While you're waiting for it to start you'll have to put up with the sight of Nicole Scherzinger nekked in a bathtub making "ooo, yeah" noises for nearly a minute, though. Sometimes doing research for this blog is really tough.

The video seems to be inspired by the famous "Relativity" lithograph by M. C. Escher, which had three intersecting sets of stairs and platforms set into three perpendicular walls, as a piece of "impossible" architecture (physically you could build it, but you wouldn't be able to walk on all the surfaces as the people do in the illustration).
M.C. Escher's famous lithograph, 'Relativity'
Escher's illustration was incredibly influential, and as well as the Pussycat Dolls video (!), there are some more literal tributes online, including Andrew Lipson's recreation of the scene using Lego, part of the 1986 movie Labyrinth, and a funny short video called Relativity 2.0, that has people trapped in a nightmarish Escherian shopping mall.

Andrew Lipson's rendition of Escher's 'Relativity', in Lego
Gravitationally-ambiguous staircases in tribute to M.C. Escher's 'Relativity' lithograph, appearing in the 1986 movie, 'Labyrinth'



If you know of any other especially good ones, please add them to the end of this post as a comment!

Next, we need a Beyonce video illustrating the event-horizon behaviour of acoustic metrics ...

Saturday, 22 August 2009

Special Relativity is an Average

Special Relativity as an average: 'Classical Theory' (yellow block), Special Relativity (orange block), and Newtonian Optics (red block). Special relativity's numerical predictions are the 'geometric mean' average of the predictions for the other two blocks
Textbooks tend to present special relativity's physical predictions as if they're somehow "out on a limb", and totally distinct from the predictions of earlier models, but SR's numerical predictions aren't as different to those of Nineteenth-Century models as you might think.

One of the little nuggets of wisdom that the books usually forget to mention is that not only are most of special relativity's raw predictions not particularly novel, they're actually a type of mathematical average (more exactly, the geometric mean) of two earlier major sets of predictions. So, in the diagram above, if the yellow box on the left represents the set of predictions associated with the speed of light being fixed in the observer's frame (fixed, stationary aether), and the red box on the right represents the set of physical predictions for Newtonian optics (traditionally associated with ballistic emission theory), then the box in the middle represents the corresponding (intermediate) set of predictions for special relativity.

If we know the physical predictions for a simple "linear" quantity (visible frequency, apparent length, distance, time, wavelength and so on) in the two "side" boxes, then all we normally have to do to find the corresponding central "SR" prediction is to multiply the two original "flanking" predictions together and square root the result. This can be a really useful method if you're doing SR calculations and you want an independent method of double-checking your results.


This usually works with equations as well as with individual values.
F'rinstance, if the "linear" parameter that we were working with was observed frequency, and we assumed that the speed of light was fixed in our own frame ("yellow" box), we'd normally predict a recession Doppler shift, due to simple propagation effects, of
frequency(seen) / frequency(emitted) = c / (c+v)
, whereas if we instead believed that lightspeed was fixed with reference to the emitter's frame, we'd get the "red box" result, of
frequency(seen) / frequency(emitted) = (c-v) / c
If there was really an absolute frame for the propagation of light, we could then tell how fast we were moving with respect to it by measuring these frequency-shifts.

The "geometric mean" approach eliminated this difference by replacing the two starting predictions with a single "merged" prediction that we could get by multiplying the two "parent" results together and square-rooting. This gave
frequency(seen) / frequency(emitted) = SQRT[ (c-v) / (c+v) ]
, which is what turned up in Einstein's 1905 electrodynamics paper.

The averaging technique gave us a way of generating a new prediction that "missed" both propagation-based predictions by the same ratio. Since the numbers in the "red" and "yellow" blocks already disagreed by the ratio 1: (1- vv/cc), the new intermediate, "relativised" theory diverged from both of these by the square root of that difference, SQRT[ 1 - vv/cc ]. And that's where the Fitzgerald-Lorentz factor originally came from.
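If you'd like to check the arithmetic rather than take my word for it, here's a quick Python sketch (purely illustrative; the variable names are just my labels for the three boxes):

from math import sqrt

c = 1.0   # work in units where c = 1
v = 0.6   # any recession velocity below c will do

classical   = c / (c + v)              # "yellow" box: lightspeed fixed in our frame
newtonian   = (c - v) / c              # "red" box: lightspeed fixed in the emitter's frame
special_rel = sqrt((c - v) / (c + v))  # "orange" box: SR's relativistic Doppler prediction

# SR's prediction is the geometric mean of the other two ...
assert abs(special_rel - sqrt(classical * newtonian)) < 1e-12

# ... and it "misses" each of them by the same ratio, the Lorentz factor SQRT[ 1 - vv/cc ]
lorentz = sqrt(1 - v * v / (c * c))
assert abs(special_rel / classical - lorentz) < 1e-12
assert abs(newtonian / special_rel - lorentz) < 1e-12

print(classical, special_rel, newtonian, lorentz)   # 0.625  0.5  0.4  0.8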

---==---

Why is it important to know this?

Well, apart from the fact that it's useful to be able to calculate the same results in different ways, the "geometric mean" approach also has important implications for how we go about testing special relativity.
Our usual approach to testing SR is to compare just the "yellow" and "orange" predictions, identify the difference, say that the resulting differential Lorentz redshift/contraction component is something unique to SR and totally separate from any propagation effects, and then set out to measure the strength of this relative redshift/contraction component, in the range "zero-to-Lorentz". Having convinced ourselves that these effects are unique to SR, we usually don't then bother to check whether the data might actually make a better match to a point somewhere to the right of the diagram.
Since the "yellow box" predictions are so awful, special relativity comes out of this comparison pretty well.

But once you know the averaging method, you'll understand that this is only half the story -- these "derivative" effects that appear under SR but not "Classical Theory" ("orange" but not "yellow") must have counterparts under Newtonian optics ("red"), and these are usually stronger than the SR versions. So any experimental procedure or calculation that appears to support the idea of time dilation or length-contraction in an object with simple constant-velocity motion under SR would also generate an apparent positive result for those effects if SR was wrong and the older "Newtonian optics" relationships were the correct set (or if some other intermediate set of relationships was in play). We can say that special relativity's concept of velocity-based time dilation didn't exist under NO, but hardware doesn't care about concepts or interpretations, only results ... and the result of performing an SR-designed test in an "NO universe" would be that the test would throw up a "false positive" result apparently supporting SR (with an overshoot that'd then have to be calibrated out).
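To make the "false positive" argument concrete, here's a little Python sketch (an illustration only, not a real experimental analysis) of what an SR-style data reduction would report in a universe that actually obeyed the old Newtonian-optics relationships:

from math import sqrt

c, v = 1.0, 0.6
measured    = (c - v) / c    # what the hardware would actually record in an "NO universe"
propagation = c / (c + v)    # the part the analysis attributes to simple propagation

inferred_time_dilation = measured / propagation
print(inferred_time_dilation)        # 0.64
print(sqrt(1 - v * v / (c * c)))     # 0.8, the value SR says the experiment "should" find

# The analysis still reports a positive "time dilation" signal, just a stronger one
# than SR predicts, which would then have to be calibrated away.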

And, actually, the situation is worse than this.
... Since the "yellow" and "red" blocks represent the two extremal predictions for theories that allow linkage between the velocity of a light-signal and the motion of a body ("yellow" = zero dependency, "red" = full dependency), they also seem to represent the cutoff-limits for a whole slew of old Nineteenth-Century "dragged aether" models, all of which would be expected to produce similar physical effects to special relativity, differing only in their scaling and strength. So typical test procedures designed to isolate the "new" SR effects should be able to generate "false positive" results with almost all of these old theories and models.

While some of special relativity's concepts might have been new, its testable numerical predictions lie right in the middle of a pre-existing range. Any time you see a claimed experimental verification of SR that forgets to take this into account, treat it with caution.

Monday, 17 August 2009

Fibonacci Kitchenware (well, almost)

I popped into Habitat yesterday, and they're selling a range of five pseudo-Fibonacci nesting trays (four smaller trays plus a bigger one to hold them). It's just a shame that they chose such an awful selection of colours for them (who the heck decided on yellow, brown and navy blue??!?).

Friday, 14 August 2009

Fun with Special Relativity

Detail from Salvador Dali's 'The Disintegration of the Persistence of Memory' (http://en.wikipedia.org/wiki/The_Disintegration_of_the_Persistence_of_Memory), oil on canvas, circa 1952-54
This is where I surprise everyone by saying something nice about Einstein's Special Theory of Relativity for a change. Considered as a piece of abstract geometry, special relativity (aka "SR" or "STR") is prettier than even some of its proponents give it credit for. The problems only kick in when you realise that the basic principles and geometry of SR considered as physics don't correspond well to the rules that real, physical observers and objects appear to follow in real life.

Anyhow, here's some of the pretty stuff:

It's traditional to explain Einstein's special theory of relativity as a theory that says that the speed of light is fixed (globally) in our own frame of reference, and that objects moving with respect to our frame are time-dilated and length-contracted, by the famous Lorentz factor.
And that characterisation certainly generated the appropriate predictions for special relativity, just as it did for Lorentzian Ether Theory ("LET"). But we can't verify that this time-dilation effect is physically real in cases where SR applies the principle of relativity (i.e. cases that only involve simple uniform linear motion). Thanks to its application of Lorentz-factor relationships, Special Relativity doesn't allow us to physically identify the frame that lightspeed is supposed to be constant in. When we make proper, context-appropriate calculations within SR, we have the choice of assuming that lightspeed is globally constant in our frame, or in the frame of the object we're watching, or in the frame of anybody else who has a legal inertial frame – it's usually a sensible choice to use our own frame as the reference, but really, it doesn't matter which one we pick, and sometimes the math simplifies if we use someone else's frame as our reference (as Einstein did in section 7 of his 1905 paper).

Some people who've learnt special relativity through the usual educational sources have expressed a certain amount of disbelief (putting it mildly) when I mention that SR allows observers a free choice of inertial reference frame, so let's try a few examples, to get a feel of how special relativity really works when we step away from the older "LET" descriptions that spawned it.

Some Mathy Bits:

1: Physical prediction
Let's suppose that an object is receding from us at a velocity of four-fifths of the speed of light, v = 0.8c.
Special relativity predicts that the frequency shift that we'll see is given by
frequency(seen)/frequency(original) = SQRT[ (c-v) / (c+v) ]
= SQRT[ (1-0.8) / (1+0.8) ]
= SQRT[ 0.2/1.8 ] = SQRT[ 1/9 ]

= 1/3
, so according to SR, we should see the object's signals to have one third of their original frequency. This is special relativity's physical prediction. The object looks to us, superficially, as if it's ageing at one third of its normal rate, but we have a certain amount of freedom over how we choose to interpret this result.

2: "Motion plus time dilation"
It's usual to break this physical SR prediction into two notional components, a component due to more traditional "propagation-based" Doppler effects, calculated by assuming that lightspeed's globally constant in somebody's frame, and an additional "Lorentz factor" time dilation component based on how fast the object is moving with respect to that frame.
The "simple" recession Doppler shift that we'd calculate for v = 0.8c by assuming that lightspeed was fixed in our own frame would be
frequency(seen) / frequency(original) = c/(c+v)
= 1/(1+0.8) = 1/1.8
, and the associated SR Lorentz-factor time-dilation redshift is given by
freq'/freq = SQRT[ 1 - vv/cc ]
= SQRT[ 1 - (0.8)² ] = SQRT[ 1 - 0.64 ] = SQRT[ 0.36 ]
= 0.6
Multiplying 0.6 by 1/1.8 gives
0.6/1.8 = 6/18
= 1/3

Same answer.

3: Different frame
Or, we can do it by assuming that the selected emitter's frame is the universal reference.
This gives a different propagation Doppler shift result, of
freq'/freq = (c-v)/c
= 1 - 0.8 = 0.2

We then assume that because we're moving w.r.t. the reference frame, we're the ones who are time-dilated: our clocks run slow, so we see everything else Lorentz-blueshifted, and it appears to age faster than we'd otherwise expect, by the Lorentz factor.
The formula for this is
freq'/freq = 1/SQRT[ 1 - vv/cc ]
= 1/0.6 = 5/3
Multiplying these two components together gives a final prediction for the apparent frequency shift of
0.2× (1/0.6) = 0.2/0.6 = 2/6
= 1/3
Same answer.
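Here are the three bookkeeping routes checked numerically, as a quick Python sketch (illustrative only, with c set to 1 and v to 0.8 as in the worked examples):

from math import sqrt

c, v = 1.0, 0.8

# 1: the physical prediction, straight from the relativistic Doppler formula
direct = sqrt((c - v) / (c + v))

# 2: lightspeed fixed in our frame: propagation shift times Lorentz time-dilation redshift
route2 = (c / (c + v)) * sqrt(1 - v * v / (c * c))

# 3: lightspeed fixed in the emitter's frame: propagation shift times Lorentz blueshift
route3 = ((c - v) / c) / sqrt(1 - v * v / (c * c))

print(direct, route2, route3)   # all three come out at 1/3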

So although you sometimes see physicists saying that thanks to special relativity, we know that the speed of light is globally fixed in our own frame, and we know that particles moving at constant speed down an accelerator tube are time-dilated, actually we don't. In the best-case scenario, in which we assume that SR's physical predictions are actually correct, the theory says that we're entitled to assume these things as interpretations of the data, but according to the math of special relativity, if we stick to cases in which SR is able to obey the principle of relativity, it's physically impossible to demonstrate which frame light "really" propagates in, or to prove whether an inertially-moving body is "really" time-dilated or not. It's interpretative. Regardless of whether we decide that we're moving and time-dilated or they are, the final physical predictions are precisely the same, either way. And that's the clever feature that we get by incorporating a Lorentz factor, that George Francis Fitzgerald originally spotted back in the Nineteenth Century, that Hendrik Antoon Lorentz also noticed, and that Albert Einstein then picked up on.

4: Other frames, compound shifts, no time dilation
But we're not just limited to a choice between these two reference frames: we can use any SR-legal inertial reference frame for the theory's calculations and still get the same answer.
Let's try a more ambitious example, and select a reference-frame exactly intermediate to our frame and that of the object that we're viewing. In this description, both of us are said to be moving by precisely the same amount, and could be said to be time-dilated by the same amount ... so there's no relative time dilation at all between us and the watched object. We can then go ahead and calculate the expected frequency-shift in two stages just by using the simpler pre-SR Doppler relationships, and get exactly the same answer without invoking time dilation at all!

The "wrinkle" in these calculations is that velocities under special relativity don't add and subtract like "normal" numbers (thanks to the SR "velocity addition" formula), so if we divide our recession velocity of 0.8c into two equal parts, we don't get (0.4c+ 0.4c), but (0.5c+0.5c)
(under SR, 0.5c+0.5c=0.8c – if you don't believe me, look up the formula and try it)
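Or, to save you a trip to the textbook, the SR composition rule for parallel velocities is w = (u+v)/(1+uv/c²). A quick Python check, with velocities given as fractions of c:

def add_velocities(u, w):
    # SR velocity addition for parallel velocities, in units of c
    return (u + w) / (1 + u * w)

print(add_velocities(0.5, 0.5))   # 0.8, not 1.0
print(add_velocities(0.4, 0.4))   # ~0.69, which is why 0.4c + 0.4c wouldn't be enough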

So, back to our final example. The receding object throws light into the intermediate reference frame while moving at 0.5c. The Doppler formula for this assumes "fixed-c" for the receiver, giving
freq'/freq = c/(c+v)
= 1/1.5 = 2/3
Having been received in the intermediate frame with a redshift of f'/f = 2/3 (about 66.7%), the signal is then forwarded on to us. We're moving away from the signal, so it's another recession redshift.
The second propagation shift is calculated assuming fixed lightspeed for the emitting frame, giving
freq'/freq = (c-v)/c
= (1 - 0.5)/1 = 0.5 = 1/2
The end result of multiplying both of these propagation shift stages together is then
2/3 × 1/2
= 1/3
Again, exactly the same result.
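The same two-stage calculation as a Python sketch (c = 1, each leg at v = 0.5):

c, v = 1.0, 0.5
stage1 = c / (c + v)    # emitter to intermediate frame, lightspeed fixed for the receiver
stage2 = (c - v) / c    # intermediate frame to us, lightspeed fixed for the emitter
print(stage1 * stage2)  # 1/3 again, with no time-dilation term anywhere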

No matter which SR-legal inertial frame we use to peg lightspeed to, special relativity insists on generating precisely the same physical results, and this is the same for frequency, aberration, apparent changes in length, everything.

So when particle physicists say that thanks to special relativity we know for a physical fact that lightspeed is really fixed in our own frame, and that objects moving w.r.t. us are really time-dilated ... I'm sorry, but we don't. We really, really don't. We can't. If you don't trust the math and need to see it spelt out in black and white in print, try Box 3-4 of Taylor and Wheeler's "Spacetime Physics", ISBN 0716723271. IF special relativity has the correct relationships, and is the correct description of physics, then the structure of the theory prevents us from being able to make unambiguous measurements of these sorts of things, in principle. We can try to test the overall final physical predictions (section 1), and we can choose to describe that prediction by dividing it up into different nominal components, but we can't physically isolate and measure those components individually, because the division is totally arbitrary and unphysical. If the special theory is correct, then there's no possible experiment that could show that an object moving with simple rectilinear motion is really time-dilated.

If you're a particle physicist and you can't accept this, go ask a mathematician.

Sunday, 9 August 2009

HTML5 is Coming!

The latest (8 August 2009) draft version of the HTML5 specifications has just been published.

Some of the additions are special dedicated tags for semantic labeling. These are labels that describe the logical content of a block – what it is rather than how it displays - although with Cascading Style Sheets ("CSS"), it's also possible to set associated display parameters for just about any tag type (colours, surrounding boxes, and so on).

Microsoft (who aren't on the HTML5 panel) have queried what the point of these things is, since they don't add any new layout specification tools for the benefit of the website designer. We already have the general-purpose <div> tag that lets us mark out blocks of content and assign custom class names and ID names to those blocks, so that they can be displayed in particular ways using CSS. Why duplicate the same functionality in these new tags, <article>, <nav>, <section>, <aside> and so on, if they don't give the webpage designer any new control over how a page appears on screen or on paper that they couldn't already achieve with <div>?

Well, even if Microsoft can't quite see the point of them, there are still a number of really good reasons why the end-users and the internet in general need at least some of these new tags.

Blogging
HTML4 came out at the end of the last century (!), and since then the blog phenomenon has pretty much exploded. Blogging software now makes it really easy for authors to produce a mass of rich, mixed, auto-updated content over tens or hundreds of pages. But search engines have to try to make sense of this mess of articles, article links, widgets and addons, and it's not easy. For instance, suppose that I write and upload a blog article about "Einstein and Fish". On Google, "Einstein and fish" currently only gives one result (if it was two words, it'd count as a "Googlewhack").
But as soon as I post the article, the title "Einstein and Fish" will appear in the "recent posts" box in the sidebar of every single page of my blogspace. Point Google's "advanced search" at my blogspace to find how many articles I've written on "Einstein and fish", and instead of one, it'll report back a list of every blog entry I've ever written as apparently containing that piece of search text. It'll also probably include all the text of every widget I've used on the site (like "NASA Photo of the Day"). And this is even though I'm using Blogger, which is Google's own blogsite company.

When webpage designers and companies like Blogger start using the new tags, general-purpose search engines should find it easier to separate out blog articles and webpage content from the surrounding mess of widgets, navigation links, slogans, adverts and general decorative junk.
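As a toy illustration of why that's suddenly easy (a Python sketch with a made-up page snippet; it's not anything Blogger or Google actually run), here's how little code it takes to pull out just the <article> text and ignore the navigation and widget clutter once the tags are there:

from html.parser import HTMLParser

class ArticleText(HTMLParser):
    """Collects only the text that appears inside <article> elements."""
    def __init__(self):
        super().__init__()
        self.depth = 0       # how many <article> elements we're currently inside
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "article" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

page = """
<nav><a href='/'>Home</a> Recent posts: Einstein and Fish ...</nav>
<article><h1>Einstein and Fish</h1><p>The actual post text lives here.</p></article>
<aside>NASA Photo of the Day widget</aside>
"""

parser = ArticleText()
parser.feed(page)
print(" ".join(parser.chunks))   # "Einstein and Fish The actual post text lives here."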

Client-side reformatting
Some web designers react with outrage at the idea that a browser might display their precious page with a different layout to the one that they carefully designed (to look good on their nice 19" flat-screen monitor).
But people are increasingly looking at web pages on a range of devices including mobile phones and ebook readers, and although website designers can in theory produce separate style sheets that allow a page to be displayed with different layouts on every size of device, in practice there are an awful lot of us who don't bother (including me! :) ). If we use a dedicated blog site, we maybe hope that the site's engineering people will do all that for us, automatically. With CSS-based layouts, some designers tend to go for absolute pixel widths, and frankly, we don't know what devices and screen sizes might be most important a year from now.

Semantic labeling allows dedicated browsers built into these devices to have a good attempt at reformatting and reflowing pages to fit their own tiny screens, by being able to tell which blocks of HTML are the important page content, and which blocks are just there for decoration or navigation.

New Navigation Tools
One of the results of these new tags is that we can expect to see mini-browsers starting to sprout some new navigation buttons. If you have a long page with several sections that takes several sheets to print out, with a figure or two, an inset box with supplementary material, and a navigation bar, then the layout designed for a large screen is going to be hopeless on an iPhone. So what would be cool on an Android mobile phone browser or iPhone would be a function that scans for <section> tags, and then provides additional [<section][section>] buttons that let you skip forwards or backwards through a page. Inset panels with additional info that the designer has "artily" set into the side of the article could be identified by their HTML5 <aside> tag and stripped out and made available on a separate button as [info]. Similarly, if the author produced a number of figures that are referred to in the text, and marked them with the <figure> tag, it'd be handy if the browser could scan for these when the page is loaded, and provide a [figure] button if it finds one, and [<figure][figure>] navigation buttons if it finds several. And it'd also be really handy on a small screen to be able to strip out the navigation bar and put that onto a separate [nav] button, too.
In fact, if this caught on, it'd also be great to be able to jump around a page using these buttons on a conventional "full-size" browser, too.

Accessibility
Finally, if you think that it's difficult navigating a modern "fancy" webpage on a mobile phone, imagine how frustrating it must be if you're sight-impaired, and are using an automated text reader. If you're navigating a page "by ear", it could be useful to be able to find your place again by skipping backwards and forwards a section at a time, until you find a title or intro paragraph that you recognise ... or to be able to jump back and forth between a current reading position and the navigation options, no matter where the designer has put those navigation buttons on the page, or where they happen to appear in the webpage's source code.

One of the problems with CSS, wonderful though it is, is that it allows the designer to place any element in any part of the HTML file, onto any part of the page. This means that the sequential order of chunks of HTML in the file doesn't necessarily correspond to the order that they have on the screen. A navigation bar that appears at the top of the screen might appear at the bottom of the code. Labelling the sections logically, in a standardised way, gives audio navigation software a chance of finding the key sections of a page and treating them appropriately. For companies and government departments that have disability access policies (and requirements!), adopting HTML5 tags and using them consistently on new projects would be a good initiative, both for supporting future standards and for potentially improving long-term disability access.

Friday, 7 August 2009

Misconstructing Fibonacci

The Fibonacci sequence mesmerises people. There's something about the idea that a deterministic trail of integers can mysteriously converge on a strange, fundamental, irrational number, the infamous Golden Ratio, or Golden Section, 1.61803... , "phi" – which, like "pi", can't be expressed as any exact ratio between two whole numbers, or written down on paper as a complete series of digits using any conventional number system.

Some people get obsessed with the numbers, and seem to think that if they stare long enough at the simple sequence with its maddening simplicity, that the secret buried inside the integers might reveal itself.

I'm here to give you the answer – the numbers are empty. The secret's not in the numbers at all, it's set one layer back behind the numbers, in the process used to generate them.

If you want to understand how the Fibonacci sequence generates phi, it can be useful to throw away the integers and look at the shapes:
With a conventional square-tiled version of the Fibonacci sequence, we start with a single "fat" rectangle of nominal size "1×1" (a square), and then we add an additional square to the longest side (in this case, they're all equal, so any side will do), which gives us a "long" rectangle of dimensions "2×1". Adding another square to one of its longest sides produces another "fattish" rectangle of size "3×2", although this obviously can't be as fat as the 1×1 square (which was already as fat as you can get). Adding a further square to one of the new longest sides then makes the shape thin-nish again, with size "5×3", although, again, it's not quite as thin as the earlier "2×1" rectangle. As we keep adding squares we get the sequence of side lengths 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, and so on.

Fibonacci Series tiling
And this alternating pattern of overshoots and undershoots repeats forever.
For a rectangle that's already precisely proportioned according to the Golden Section, these sorts of square-adding or square-subtracting processes produce rectangular offspring with precisely the same proportions as their parent. But anything more elongated than the "Golden Rectangle" always produces something "fatter" than phi, and anything more dumpy than a Golden Rectangle is guaranteed to produce a rectangle that's "skinnier" than phi.
If we apply the process by lopping off squares, then for a "non-phi" rectangle the proportions swing back and forth more and more wildly, getting further and further away from phi each time we remove a square, and if we do it by adding squares, the process takes us closer and closer to phi each time ... and this gives us the usual tiling construction for phi using the Fibonacci Series, shown above, that you should be able to find in a lot of books.

But this specific sequence of numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ... isn't required for the trick to work – the method generates alternating "fat" and "thin" rectangles that converge on phi when we start with pretty much any two positive numbers. They don't even have to be integers.
Example:
Suppose that instead of 1, 1, we start with a couple of random-ish numbers, taken from, say, the date that Apollo 11 launched: 16.07 & 1969. This gives us a very skinny rectangle (with proportions around ~123 : 1). Adding a square to the longest side gives something that's almost square (very "fat", ratio ~1.008), the next pairing will be on the "skinny" side (ratio ~1.99), and already we're looking at ratios close to those of the "1, 2" entries in the standard sequence. The process then chunters on and converges on phi as before.

16.07, 1969, 1985.07, 3954.07, 5939.14, 9893.21, 15832.35, 25725.56, 41557.91, ...

If we stop there, and divide the last number by its neighbour, we get
41557.91/25725.56 = ~1.6154

add another couple of stages and we get
108841.38/67283.47 = 1.61765..

So in just those few stages, we've already gone from a start ratio of about 123:1 to something close to the golden section value of ~1.618... , correct to three decimal places.
It really doesn't matter whether the initial ratio is 1:1, or 2:1, or a zillion to the square root of three. Any two positive numbers whatsoever, processed using the method, give a sequence whose ratios lurch back and forth around the Golden Section, always overshooting and undershooting, but always getting closer and closer, guaranteed.
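The claim is easy to check for yourself. Here's a quick Python sketch (the seed values are just examples):

def ratio_after(a, b, steps):
    # apply the Fibonacci rule "add the last two terms" and return the final ratio
    for _ in range(steps):
        a, b = b, a + b
    return b / a

print(ratio_after(1, 1, 12))           # the standard Fibonacci sequence: ~1.618
print(ratio_after(16.07, 1969, 12))    # the Apollo 11 date seeds from above
print(ratio_after(1e9, 3 ** 0.5, 12))  # "a zillion to the square root of three"
# all of them land within a fraction of a percent of phi = 1.6180339887...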

So the Fibonacci sequence, in this regard, is really nothing special. You can plug in any two start numbers, taken from anywhere, apply the Fibonacci method, and the trick will still work.

--===--

What the usual Fibonacci Series does have going for it is simplicity. It's probably the simplest integer example of this process, and it's been argued that, if we want to approximate phi with a pair of integers, then for any given number range the standard Fibonacci sequence "owns" the pair that get closest (although I haven't actually checked this for myself). We can also derive the "standard" sequence from tiling and quantisation exercises, and when it comes to sunflowers and pinecones and the like, where we're dealing with structures that branch recursively (like the core of a pinecone) or are the result of cell division in two dimensions, plus time (giving branching over time), then yes, it's not surprising that Fibonacci sequence integers are a recurring theme. Cell division and branching are quantised processes, like the graph of Fibonacci's rabbits.

But the "music" of the Fibonacci series isn't in the integers, its in the rhythm of the of the underlying processes that generate them. It's those underlying processes that carry the magic, not the integers themselves.

Saturday, 1 August 2009

Fibonacci Rose, Alternative Tiling

Fibonacci Rose, alternative colour tiling
Actually, this is the same arrangement of shapes as in the "double-spiral" version of the Fibonacci Rose, but coloured differently.
As before, each triangle of a given colour has sides that are the sum of the sides of the next two triangles down, giving the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 ...

And as before, if we zoom out arbitrarily far, the figure becomes indistinguishable from the "golden section" version, in which each triangle's sides are related to the next size up or down by the ratio phi, ~1.618034 ... .