
Saturday, 22 August 2009

Special Relativity is an Average

Special Relativity as an average: 'Classical Theory' (yellow block), Special Relativity (orange block), and Newtonian Optics (red block). Special relativity's numerical predictions are the 'geometric mean' average of the predictions for the other two blocks.

Textbooks tend to present special relativity's physical predictions as if they're somehow "out on a limb", totally distinct from the predictions of earlier models, but SR's numerical predictions aren't as different from those of Nineteenth-Century models as you might think.

One of the little nuggets of wisdom that the books usually forget to mention is that most of special relativity's raw predictions aren't just qualitatively unoriginal: they're actually a type of mathematical average (more exactly, the geometric mean) of two earlier major sets of predictions. So, in the diagram above, if the yellow box on the left represents the set of predictions associated with the speed of light being fixed in the observer's frame (a fixed, stationary aether), and the red box on the right represents the set of physical predictions for Newtonian optics (traditionally associated with ballistic emission theory), then the box in the middle represents the corresponding (intermediate) set of predictions for special relativity.

If we know the physical predictions for a simple "linear" quantity (visible frequency, apparent length, distance, time, wavelength and so on) in the two "side" boxes, then all we normally have to do to find the corresponding central "SR" prediction is to multiply the two original "flanking" predictions together and take the square root of the result. This can be a really useful method if you're doing SR calculations and want an independent way of double-checking your results.


This usually works with equations as well as with individual values.
F'rinstance, if the "linear" parameter we were working with was observed frequency, and we assumed that the speed of light was fixed in our own frame (the "yellow" box), we'd normally predict a recession Doppler shift, due to simple propagation effects, of
frequency(seen) / frequency(emitted) = c / (c+v)
, whereas if we instead believed that lightspeed was fixed with reference to the emitter's frame, we'd get the "red box" result, of
frequency(seen) / frequency(emitted) = (c-v) / c
If there was really an absolute frame for the propagation of light, we could then tell how fast we were moving with respect to it by measuring these frequency-shifts.

The "geometric mean" approach eliminated this difference by replacing the two starting predictions with a single "merged" prediction that we could get by multiplying the two "parent" results together and square-rooting. This gave
frequency(seen) / frequency(emitted) = SQRT[ (c-v) / (c+v) ]
, which is what turned up in Einstein's 1905 electrodynamics paper.
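This relationship is easy to check numerically. Here's a minimal sketch (mine, not from the original derivation) in plain Python, writing the recession velocity as a fraction of c:

```python
import math

def yellow_shift(beta):
    # lightspeed fixed in the observer's frame: f_seen/f_emitted = c/(c+v)
    return 1.0 / (1.0 + beta)

def red_shift(beta):
    # lightspeed fixed in the emitter's frame: f_seen/f_emitted = (c-v)/c
    return 1.0 - beta

def sr_shift(beta):
    # special relativity's recession prediction: SQRT[ (c-v) / (c+v) ]
    return math.sqrt((1.0 - beta) / (1.0 + beta))

beta = 0.6  # recession velocity, as a fraction of c
geometric_mean = math.sqrt(yellow_shift(beta) * red_shift(beta))

# the geometric mean of the two "classical" predictions matches SR's
assert math.isclose(geometric_mean, sr_shift(beta))
```

At beta = 0.6 the "yellow" prediction is 0.625, the "red" prediction is 0.4, and both the geometric mean and the SR formula give 0.5.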

The averaging technique gave us a way of generating a new prediction that "missed" both propagation-based predictions by the same ratio. Since the numbers in the "red" and "yellow" blocks already disagreed by the ratio 1 : (1 - vv/cc), the new intermediate, "relativised" theory diverged from both of these by the square root of that difference, SQRT[ 1 - vv/cc ]. And that's where the FitzGerald-Lorentz factor originally came from.
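Those divergence ratios can be checked numerically too. In this sketch (again with v written as a fraction of c, variable names mine), the "red" and "yellow" recession-shift predictions differ by the full factor (1 - vv/cc), while the SR prediction sits off each of them by its square root:

```python
import math

beta = 0.8  # velocity as a fraction of c
yellow = 1.0 / (1.0 + beta)                  # c fixed in observer's frame
red = 1.0 - beta                             # c fixed in emitter's frame
sr = math.sqrt((1.0 - beta) / (1.0 + beta))  # special relativity

# "red" and "yellow" disagree by the full ratio 1 : (1 - vv/cc) ...
assert math.isclose(red / yellow, 1.0 - beta ** 2)

# ... and SR "misses" each of them by SQRT[ 1 - vv/cc ],
# the FitzGerald-Lorentz factor
lorentz = math.sqrt(1.0 - beta ** 2)
assert math.isclose(sr / yellow, lorentz)
assert math.isclose(red / sr, lorentz)
```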

---==---

Why is it important to know this?

Well, apart from the fact that it's useful to be able to calculate the same results in different ways, the "geometric mean" approach also has important implications for how we go about testing special relativity.
Our usual approach to testing SR is to compare just the "yellow" and "orange" predictions, identify the difference, declare that the resulting differential Lorentz redshift/contraction component is something unique to SR and totally separate from any propagation effects, and then set out to measure the strength of this relative redshift/contraction component in the range "zero-to-Lorentz". Having convinced ourselves that these effects are unique to SR, we usually don't then bother to check whether the data might actually make a better match to a point somewhere to the right of the diagram.
Since the "yellow box" predictions are so awful, special relativity comes out of this comparison pretty well.

But once you know the averaging method, you'll understand that this is only half the story: these "derivative" effects that appear under SR but not under "Classical Theory" ("orange" but not "yellow") must have counterparts under Newtonian optics ("red"), and those counterparts are usually stronger than the SR versions. So any experimental procedure or calculation that appears to support the idea of time dilation or length-contraction in an object with simple constant-velocity motion under SR would also generate an apparent positive result if SR was wrong and the older "Newtonian optics" relationships were the correct set (or if some other intermediate set of relationships was in play). We can say that special relativity's concept of velocity-based time dilation didn't exist under NO, but hardware doesn't care about concepts or interpretations, only results ... and the result of performing an SR-designed test in an "NO universe" would be a "false positive" apparently supporting SR (with an overshoot that would then have to be calibrated out).

And, actually, the situation is worse than this.
... Since the "yellow" and "red" blocks represent the two extremal predictions for theories that allow linkage between the velocity of a light-signal and the motion of a body ("yellow" = zero dependency, "red" = full dependency), they also seem to represent the cutoff-limits for a whole slew of old Nineteenth-Century "dragged aether" models, all of which would be expected to produce similar physical effects to special relativity, differing only in their scaling and strength. So typical test procedures designed to isolate the "new" SR effects should be able to generate "false positive" results with almost all of these old theories and models.

While some of special relativity's concepts might have been new, its testable numerical predictions lie right in the middle of a pre-existing range. Any time you see a claimed experimental verification of SR that forgets to take this into account, treat it with caution.

4 comments:

  1. None of the experimental confirmations of special relativity fail to take into account how the predictions of special relativity differ from those of other theories. Also, the circumstances in which the relativistic prediction is the geometric mean of two non-relativistic predictions are well known, as are the circumstances in which the relativistic prediction is NOT the geometric mean. It goes without saying that, for many common phenomena, all theories that have ever been seriously entertained give extremely close predictions, since otherwise they would never have been seriously entertained. But of course there are other circumstances in which the various theories give widely differing predictions. In all cases, the outcomes are perfectly consistent with special relativity, and in the aggregate they are inconsistent with any theory that is not observationally equivalent to special relativity. So, your blog comments are rather silly.

  2. Anonymous: FYI, some physicists were still getting special relativity's physical predictions badly wrong as late as the 1950's, before James Terrell and Roger Penrose came along in 1959 and tried to set the record straight.
    The mistake (regarding apparent optically-viewed length-changes) is almost impossible to make once you know the averaging approach, so I have to assume that the "geometric mean" method wasn't sufficiently widely known or understood in the SR community in the 1950's, despite the fact that by this time, SR had already been around for about half a century.

    I also know from personal experience that most of the physicists that I was meeting up with in 1994 were still using the bad old "pre-Terrell" "educational" version of SR that had supposedly been totally debunked and discarded forty years earlier, and again, if these guys had understood the averaging approach, they'd have been able to see for themselves that the version of SR that they were insisting was validated by all the evidence was actually faulty. So I have to assume that they didn't know the method either.

    Two of those guys actually had professorships in the physics departments of major "physics-y" universities and taught relativity theory, so they might have been responsible for teaching a faulty 1950's version of the SR predictions to another generation or two of newbie physicists who, again, probably still don't know about the averaging technique that could have set them straight.

    If the "geometric mean" approach is really widely understood and appreciated, then you should be able to give me mainstream textbook references to it, and we shouldn't find those bad statements in the current books, saying that the physical effects that SR predicts are qualitatively unique to special relativity (an idea that seems to be one of the cornerstones of SR testing).

    So really, you need to provide some sort of supporting argument or reference for your idea that this is all widely understood ... and I'd prefer a reference more than 20 years old, to rule out the possibility that the author might have learnt about the method from me! :)

    Oh, by the way, if you think it's also well known in which cases the "geometric mean" approach to SR doesn't give the right answers, then it'd be nice if you could provide an example or two.

    An explanation of just how bad current SR testing really is will have to wait for another blog post (or two).
    Because it's really really bad. Baaaaaaaad! :)

  3. Your historical account of the Terrell/Penrose paper on relativistic optics is pure fiction. It is not true that physicists were “getting special relativity’s physical predictions badly wrong” prior to that paper. None of the implications of special relativity that were described by physicists going back to 1905 changed in 1959. Terrell and Penrose merely pointed out yet another implication that hadn’t previously gotten much attention. Also, your notion that what you call “the averaging technique” was unfamiliar to physicists is blatantly wrong. You asked for a mainstream textbook, preferably more than 20 years old, that covers this. Well, how about the standard freshman physics text by Halliday and Resnick, published in 1960, in which the relativistic Doppler effect is presented as being bracketed by the two classical cases, and in which the 1937 experiment of Ives and Stilwell (based on Stark’s earlier attempts, which were based on Einstein’s 1907 suggestion for one way of testing special relativity) is described, comparing the actual observed frequency shift both with the classical predictions and with the relativistic prediction (mid-way between them). This rather conclusively refutes your mythology about how physicists never had any idea of how the relativistic prediction for Doppler shift compares with the bracketing classical predictions.

    Your next fallacy is in the statement “and we shouldn't find those bad statements in the current books, saying that the physical effects that SR predicts are qualitatively unique to special relativity (an idea that seems to be one of the cornerstones of SR testing).” This is absurd. The statements made in reputable text books are accurate in describing the quantitative predictions of special relativity, and these are the predictions that are quantitatively tested and confirmed (to incredible levels of precision) on a daily basis. Your belief that special relativity is confirmed only “qualitatively” could not be further from the truth. How you could get the idea that qualitative assessments are a “cornerstone of SR testing” is a complete mystery, with no basis in reality. Of course, time dilation is an effect that distinguishes special relativity from certain classical theories, but the confirmations of the relativistic predictions involving time dilation are by no means merely qualitative. They are strictly quantitative, carried out to incredible levels of precision, more than adequate to distinguish and rule out all the alternatives. (Needless to say, your peculiar misguided ideas about transverse Doppler shift are simply based on lack of understanding, and would be dispelled if you ever took a few minutes to read a good explanation.)

    Lastly, you asked for examples of cases where the special relativistic prediction is NOT the geometric mean of two classical predictions. Well, such things are abundant. The Thomas precession is not the geometric mean of two classical predictions for the precession of a gyroscope. The Sagnac effect predicted by special relativity is not the geometric mean of two classical predictions: the relativistic prediction is actually equal to the prediction for a stationary ether theory, whereas a classical ballistic theory predicts no effect at all. In general, the Lorentz transformation is not the geometric mean of the Galilean transformation with some other classical transformation. The half-lives of radioactive particles are not the geometric means of the half-lives predicted by two classical theories. The list is endless.

  4. Anon: "It is not true that physicists were “getting special relativity’s physical predictions badly wrong” prior to that paper."

    Yes, some of them were. And I met some of the guys face to face in 1994 who were still getting the SR predictions for photographable lengths wrong thirty years after Terrell and the ensuing floodlet of papers had supposedly set things straight.

    I said:
    " The mistake (regarding apparent optically-viewed length-changes) is almost impossible to make once you know the averaging approach, so I have to assume that the "geometric mean" method wasn't sufficiently widely known or understood in the SR community in the 1950's ..."

    Your examples' descriptions suggest that they deal with frequency rather than length, so my statement would seem to stand: if the root-product relationship had been better and more widely understood, the mistakes over observable lengths wouldn't have happened.

    The averaging relationship also tells us that SR's transverse redshift predictions are in the middle of the earlier range ... so when sources present transverse redshift results as something that couldn't be explained by other theories, they're misleading at best.

    The "bad" version of SR's supposed predictions for visible length used to be a science cliché - it was what got taught in schools and unis, and TV science programmes used to "explain" SR with pictures of observers' supposed views of uniformly-contracted rockets and tramcars.

    AFAIK, the earliest paper to present the averaging approach for colinearly-photographed lengths was probably Roy Weinstein in 1960, although a bunch of other 1960 papers following Terrell might have a partial claim. That was the one that you were supposed to quote back at me as a legitimate example, but didn't.

    Anon: "They are strictly quantitative, carried out to incredible levels of precision, more than adequate to distinguish and rule out all the alternatives."

    I think you're being a bit naive, here! :)

    Thanks for actually providing some relevant suggestions for cases where the root-product approach doesn't hold. In general the rule holds for simple linear quantities involving simple motion in a straight line ("core" SR), so apparent length, wavelength and frequency are good examples, but anything involving g-forces, acceleration or rotation isn't.
    Gyroscopic precession and the Sagnac effect are rotation-based effects, so the simple averaging procedure for linear properties won't automatically be valid for them.

    Half-lives for radioactive particles? ... tricky. It probably depends on whether the particles are accelerated or not, and on whether we're trying to measure the life of a fast-moving particle by measuring its tracklength. In the latter case, the tracklength for a fast-moving particle with an agreed amount of momentum before decay turns out to be the same for Newtonian mechanics as for SR (so when particle physicists cite tracklength from a straight track as proving that SR time dilation is real, they're kinda off-base).

    Curved accelerator tracks involve more complicated physics, because we can calculate the time dilation effect as the end-product of acceleration rather than velocity. We're taught to assume that velocity effects are real and that the alternative "acceleration" interpretation is wrong, but both approaches seem to be compatible with the accelerator data.

    Anon: "(Needless to say, your peculiar misguided ideas about transverse Doppler shift are simply based on lack of understanding, and would be dispelled if you ever took a few minutes to read a good explanation.)"

    (sigh) I think that this relationship may well have broken down irretrievably.
    You may well want to take the opportunity to archive your comments while I consider whether to disable and/or delete anonymous commenting on the blog.

    The best of luck with your future endeavours.


Please sign your comments - an alias or pen-name is fine.