Showing posts with label special relativity. Show all posts

Monday, 23 November 2009

The Relativistic Ellipse

Relativistic Ellipse, v=0.8c
This is an especially cool diagram for relativity theory, but it's rather hard to find in print. There's a limited version of it in Moreau's 1994 "Wave front relativity" paper, and I put it in the book (chapter 8), but I can't think offhand of anywhere else you're liable to find it.

It's simply an ellipse with lines radiating from one focus and converging on the other.

Imagine that you have a point-source of light giving off pulses. Surrounding the point-source is a spherical mirror, which catches the outgoing spherical EM wavefront and bounces it directly back to the source. All parts of the reflected wavefront arrive back at the source at the exact same moment.
This tells us (a) that all parts of the surface are at 90 degrees to the source, and (b) that all parts of the surface are at the same distance from the source.


=Relativistic Aberration=

Now let's replay the same situation, but imagine how it would have looked to us if we were whizzing past the experiment in a spaceship (but not so close that we actually disturbed the light in any significant way).

Now, the geometry seems to be different. We're forced to agree that the reflected wavefront still converges on the emitter (because nothing within the experimental region has physically changed), but since the light takes a finite time to go out and come back again, as far as we're concerned, the experimental hardware has been moving while the light was out doing its thing.
For us, the light was being emitted from one position and refocused at another.

And the shape that does that is an ellipse.

If we look at the shape of the relativistic ellipse, we find that the outgoing rays are angled forwards ... they have to be in order for them to be able to keep up with the "moving" source. And if we measure the angles of these rays on the diagram, it gives us the textbook relativistic aberration formula used by special relativity (and also by Newtonian optics, old ballistic emission theory, and any other relativistic model).
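The aberration angles can be checked numerically without drawing anything. Here's a quick Python sketch of the textbook formula (the function name is mine, purely for illustration), showing how a ray emitted sideways in the source frame ends up tilted forwards for the external observer:

```python
import math

def aberrated_angle(theta_src_deg, beta):
    """Textbook relativistic aberration: the angle (from the direction of
    motion) at which a ray appears in the 'lab' frame, given the emission
    angle in the source frame and the proportional velocity beta = v/c.
    cos(theta_lab) = (cos(theta_src) + beta) / (1 + beta*cos(theta_src))
    """
    c = math.cos(math.radians(theta_src_deg))
    return math.degrees(math.acos((c + beta) / (1 + beta * c)))

# A ray emitted sideways (90 degrees) in the source frame is angled
# forwards for the lab observer: cos(theta_lab) = beta.
print(aberrated_angle(90, 0.8))  # ~36.87 degrees
```

Measuring the ray angles on a correctly drawn 0.8c ellipse with a protractor should give the same numbers.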


=Velocity-rescaling, distance and time under Special Relativity=

The thing that's slightly counter-intuitive about the diagram is that if the radius of the sphere is half a light-second, and if it's supposed to take exactly one second for the light to return to its starting point (so that the bouncing light makes a clock that supposedly ticks every second), you might expect the distance "v" that the object moves in one second to simply be the distance between the two points. Slightly perversely, under SR, it isn't. The relative proportional velocity v/c (velocity quoted as a fraction of the speed of light) has to be the ratio between the focal point distance and the stretched, longest dimension of the ellipse. So if the distance between the foci is half the length of the ellipse, we can say that the velocity is half lightspeed (in the diagram above, it's 0.8c).
But since the ellipse is stretched, the distance between the points (if v is defined as a particular fraction of the speed of light) is stretched, too. If we're to follow SR and say that lightspeed is a fixed global reference, then the distance between bounce-points is somewhat more than v metres.

Under special relativity, the width of the ellipse is assumed to be constant regardless of velocity, the ellipse is stretched by the Lorentz factor (calculated from our proportional velocity), and the "point-to-point" distance ends up elongated by the Lorentz factor, too.
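The numbers work out exactly. Here's a small Python sketch (my own construction, just following the proportions described above) that builds the SR ellipse for a given v/c and confirms both claims: the focus-to-focus distance divided by the full length equals v/c, and the bounce-point separation is Lorentz-elongated:

```python
import math

def ellipse_for_beta(R, beta):
    """Build the SR 'relativistic ellipse' from a rest sphere of radius R:
    the width (semi-minor axis) stays R, the length is stretched by gamma.
    Returns (semi_major, semi_minor, centre_to_focus_distance)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    a = gamma * R              # semi-major axis, stretched by the Lorentz factor
    b = R                      # semi-minor axis, unchanged width
    f = math.sqrt(a*a - b*b)   # centre-to-focus distance
    return a, b, f

# Sphere of radius half a light-second, moving at 0.8c:
a, b, f = ellipse_for_beta(R=0.5, beta=0.8)
print(2*f / (2*a))   # focal separation / full length = 0.8, i.e. v/c
print(2*f)           # ~1.333 light-seconds = 0.8 * gamma: Lorentz-elongated
```

So the "point-to-point" distance really does come out longer than the naive v metres by exactly the gamma factor.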

Under special relativity we explain the extra distance by invoking Lorentz time dilation. We suggest that the particle travels further than expected in our coordinate system in one of its own seconds, for a given nominal velocity, because its clock is running slow (so for us, it travels for more than a second, and crosses more than v metres). Or we can argue that if an observer moving with the experiment sees a piece of paper with the diagram drawn on it passing by at the same proportional velocity v, then for them the distance between the marks is v metres, because their measurements indicate that the moving paper is Lorentz length-contracted. The ellipse looks like a giveaway that lightspeed isn't globally fixed, but if we assume that it is, and need to explain why the ellipse somehow doesn't really count as an ellipse, we end up with the traditional SR length-contraction and time-dilation explanations.

Contract the elongated ellipsoid by the magical gamma factor, and its outline turns neatly back into the original sphere.


=Doppler shifts=

The next thing that we can do is to look at the length of the lines. Turns out that, if we're doing the SR version of the exercise, each ray elongates or shrinks by precisely the right ratio for special relativity's relativistic Doppler effect. The forward and rearward distances are stretched and squashed by the ratio SQRT[(c-v)/(c+v)], and the 90-degree-aimed ray is stretched in length by SQRT[1 - vv/cc].
That's the Lorentz transverse redshift prediction of special relativity.
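The same construction reproduces the ray lengths. In this Python sketch (again my own numerical check, not taken from the diagram's sources), the shortest and longest focal rays, and the ray perpendicular to the motion (the semi-latus rectum, b²/a), come out at exactly the quoted Doppler ratios:

```python
import math

R, beta = 0.5, 0.8                   # rest radius and v/c
gamma = 1.0 / math.sqrt(1.0 - beta**2)
a, b = gamma * R, R                  # semi-axes of the relativistic ellipse
f = math.sqrt(a*a - b*b)             # centre-to-focus distance

r_min = a - f                        # forward ray, squashed
r_max = a + f                        # rearward ray, stretched
r_perp = b*b / a                     # 90-degree ray (semi-latus rectum)

print(r_min / R, math.sqrt((1-beta)/(1+beta)))   # both ~0.333
print(r_max / R, math.sqrt((1+beta)/(1-beta)))   # both ~3.0
print(r_perp / R, math.sqrt(1 - beta**2))        # both ~0.6
```

At 0.8c the forward ray shrinks to a third, the rearward ray triples, and the transverse ray picks up the Lorentz factor, matching the SR relativistic Doppler formulae line for line.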


=Ellipses are Cool=

So this one little diagram tells you almost everything that you need to know about special relativity. Once you've drawn it with the appropriate proportions for a given velocity, all you have to do is read off the angles and distances with a protractor and ruler to find SR's physical predictions about the appearance of a moving body, as seen from any angle.

If you'd prefer not to rely on any "odd" theory-specific definitions of velocity, distance or time when building the ellipse, all you have to do is draw in two rays from a focus, with lengths rescaled by the theory's particular Doppler shift predictions, and the rest of the diagram constructs itself. Along with the Minkowski lightcone diagram, it's probably one of the most powerful diagrams in special relativity.

So why isn't it in the books?

We-ell, perhaps the problem with the diagram is that it makes people think. Which leads to troubling ponderings, because it turns out that the diagram doesn't have to be used with special relativity. It'll compute the SR relationships if we deliberately stretch the point-to-point distance by the Lorentz factor, or if we use the SR "relativistic Doppler" relationships to define the reference wavelength-distances, or if we decide that lightspeed has to be defined as globally constant for all participants ... but if we're only interested in the principle of relativity, and we're not prepared to commit to these extra SR-specific things, the ellipse also lets us plug in other assumptions, and lets us see their consequences.

For instance, we know that old Newtonian optics was technically a "relativistic" theory (although nobody could get NO to work properly with wave theory). We know the forward and rearward wavelength changes associated with that theory, so we can draw in these two wave-distances from one of the focal points, and construct the rest of the ellipse around these maximum and minimum radii. What we end up with is an exact duplicate of the SR ellipse, with the same proportions and aberration angles, but with an additional Lorentz magnification. All the NO wavelengths are longer than their SR counterparts by a Lorentz ratio. So transverse redshifts aren't unique to special relativity.

And then you notice some other things. The SR ellipse can be compacted back into its original circular outline just by contracting it on one axis. This is analogous to tilting the diagram off the page to produce a contracted "shadow", which gets us into the subject of Minkowski spacetime, tilted planes of simultaneity, and other cool things. The SR family of ellipses actually represents constant-width tilted cross-sections through a constant Minkowski lightcone and can be visualised as projected conic sections.

The SR version of the constructed ellipse is the only one that has this special property.
This tells us that if we require spacetime to be "flat" in moving-body problems, the SR relationships are the only ones that work. We're still free to argue about the correct philosophical interpretation and presentation of the theory, and about whether the interpreted contractions and clock-changes are physically real or not (and about what we mean by "physically real" in the context of SR), but the defining Doppler characteristics of the theory – the things that dictate the final physical predictions and equations of motion, regardless of interpretation – are set, locked and non-negotiable once we've decided that we won't be implementing curvature as part of the model. According to the ellipse, Relativity (limited to simple inertial motion) plus flat spacetime gives SR. It's airtight.

If we now go back to the enlarged Newtonian version of the ellipse, we find that the rules are different. The enlarged NO wavelengths can't be fitted back into the original sphere without distorting the centre of the ellipse out of the page. Instead of a tilted-and-rescaled cross-section through a fixed geometry (Minkowski spacetime) we end up with a geometry whose shape dynamically changes when there's relative motion between physical masses. Instead of a purely "projective" tilt, we have a real physical change of shape. The causal structure of the metric now depends on the presence and motion of physical bodies embedded with it. We end up with a gravitomagnetic theory, with a different form of lightspeed constancy to SR. And that's why nobody, including Einstein, could put together a sane-looking reference model for Newtonian optics that didn't go crazy when you tried to treat it as wave theory. Newtonian optics simply doesn't work in flat spacetime. The wavelengths don't fit.

I still think that it's a shame that they don't teach the relativistic ellipse in physics classes. It's a powerful tool, and a really handy device for demystifying special relativity. But perhaps it's too powerful, and perhaps if you're trying to convince a class that SR is the only possible answer, a tool that suggests the existence of alternative approaches spoils the narrative.

Monday, 26 October 2009

Cosmological Hawking Radiation, and the failure of Einstein's General Theory

The Earth's Horizon, E. Baird 2009
Cosmological horizons are rather arbitrary. The cosmological limit to direct observation is at different places for different observers, and if you change position, your horizon position changes to match. In that respect, a cosmological horizon is a little bit like a planetary horizon - it's different for everyone, and every physical location can be considered as being at a horizon boundary for someone.

With a cosmological horizon, we can mark out a region of space that we reckon should be directly visible, and another region beyond that shouldn't be, and try to draw a dividing line between the two that represents the horizon. The unseen region doesn't even exist as space in an observerspace map: space itself (in an observerspace projection) seems to fizzle out and come to a stop at the horizon limit.
As we try to look at regions further and further away, we're seeing larger and larger cosmological redshifts, and seeing further and further back in time, until we approach a theoretical limit where the redshift is total, time doesn't appear to have moved on at all since the Big Bang, and events apparently frozen into the horizon correspond to those in the vicinity of Time Zero.
In an idealised model, trying to see any further away than this means that we'd be expecting to see spacetime events that originated before the Big Bang, which – in our usual models – don't exist. So the cosmological horizon is the rough analogue of a censoring surface surrounding a notional black hole singularity under general relativity. It kinda ties into the cosmic censorship hypothesis: if any physical singularities do exist anywhere in Nature, Nature will always make physics work nicely and politely, helpfully hiding the nasty singularities from view.

HOWEVER ... with a cosmological horizon, there are logical arguments that insist that we can receive signals through it.

Suppose that we have two star systems, A and B, whose spatial positions are on different sides of our drawn cosmological horizon, a couple of hundred lightyears away from each other. Let's say that B's the closer star to us – 100 ly inside our nominal horizon – and A's 100 ly outside. In an observerspace projection, we'll eventually be able to see the formation of the nearer star B (if we wait a few bazillion years) but A is off-limits.

But the nearer star B is quite capable of seeing events generated by A, and then helpfully relaying their information on to us. If A goes supernova, we should (eventually) be able to see a cloud of gas near B being illuminated by the flash. B can pass A's signals on, just as an observer at a planetary horizon can see things beyond our horizon and describe them to us, or hold up a carefully-angled mirror to let us see for ourselves.

So technically, Star A, under QM definitions, is a virtual object. It doesn't exist for us according to direct observation, but it's real for nearby observers and we can see the secondary result of those observations. B radiates indirectly through the horizon, so not only does the supposed Big Bang singularity have a masking horizon, the horizon emits Hawking radiation. If we'd been a bit brighter back in the 1950s, we'd have been able to predict Hawking radiation by taking the "cosmological horizon" case and generalising over to the gravitational case. What stopped us from doing this was an incompatibility with the way that GR1915 was constructed.

The cosmological horizon is an acoustic horizon. It fluctuates and jumps about in response to events both in front of it and behind it. If someone near star A lobs a baseball at star B, we'll eventually see that baseball appear, apparently from nowhere, as a Hawking radiation event. And depending on how close the thrower is to the horizon, and how hard they throw the ball, we might even get a glimpse of their shoulder, as the physical acceleration of their arm warps spacetime (accelerative gravitomagnetism, Einstein 1921) making the nominal horizon position jump backwards.

For this sort of acoustic horizon to work, the acceleration and velocity of an object has to affect local optics (if the ball had been thrown in the opposite direction, we'd never have seen it).
If the local physics at a cosmological horizon generates an acoustic horizon, then that physics is going to correspond to that of an acoustic metric. NOT a static Minkowski metric. The presence, velocity and acceleration of objects must change the local signal-carrying properties of a region. Since the operating characteristics of an acoustic metric are different to those of the Minkowski metric that defines the relationships of special relativity, the local physics then has to operate according to a different set of laws to those of special relativity – the velocity-dependent geometry of an acoustic metric makes the basic equations of motion come out differently. For cosmological horizons to work as we expect, the local light-geometry for a patch of horizon has to be something other than simple SR flat spacetime, and the local physics has to obey a different set of rules to those of special relativity.

Now, the punchline: Since our own region of spacetime will in turn lie on the horizon of some distant far-future observer, this means that if we buy into the previous arguments, our own local "baseball physics", here on Earth, shouldn't be that of special relativity either.


The good news
is that if we eliminate special relativity from GR, to force cosmological horizons to make sense, GR's predictions for gravitational horizons would also change. The revised general theory would predict indirect radiation effects through gravitational horizons, bringing the theory in line with quantum mechanics. Which would be a Good Thing, because we've been trying to solve THAT problem for most of the last 35 years.

The bad news
is that there doesn't seem to be any polite way to do it. Disassembling and reconstructing general relativity to address its major architectural problems involves going back to basics and starting from scratch, questioning every assumption and decision that was made the first time around, and being pretty ruthless about which parts get to stay on in the final theory.

I find this sort of work kinda fun, but apparently I'm in a minority.

Sunday, 6 September 2009

The Moon, considered as a Flat Disc

The Moon considered as a flat disc gives Lorentz relationships
Mathematics doesn't always translate directly to physics.
That statement might sound odd to a mathematician, but consider this: even if you believe that physics is nothing but mathematics, that makes physics a subset of mathematics ... which means that there'll be other mathematics that lies outside that subset, that doesn't correspond cleanly to real-world physical theory. The key (for a physicist) is to know which is which.

That's not to say that "beauty equals truth" isn't a good working assumption in mathematical physics – it is – the problem is that the aesthetics of the two subjects are different, and mathematical beauty doesn't necessarily correspond well to physical truth. The physicist's concept of beauty is often different to that of the mathematician.

The "beauty equals truth" idea is often used as an argument for special relativity. SR uses the Lorentz relationships, and to a mathematician, it can sometimes seem that these are such beautiful equations that a system of physics that incorporates them has to be correct.

But the Lorentz relationships can also appear in bad theories, as a consequence of rotten initial starting assumptions:
Our Moon is tidally locked to the rotation of the Earth, so that it always shows the same face to us, and we always see the same circular image, with the same mappable features. Now suppose that a 1600's mathematician has a funny turn and decides that it's so outrageously statistically improbable that the Moon would just coincidentally happen to have an orbit that results in it presenting the same face to us at all times, that something else must be going on. Our hypothetical "crazy mathematician" might decide that since we always see the same disc-image of the Moon, then perhaps, (mis)applying Occam's Razor, it really IS a flat disc.

Our mathematician could start examining the features on the Moon's surface, and discover a trend whereby circular craters appear progressively more squashed towards the disc's perimeter. We'd say that this shows that we're looking at one half of a sphere, but our mathematician could analyse the shapes and come up with another explanation. It turns out that, in "disc-world" the distortion corresponds to an apparent radial coordinate-system contraction within the disc surface. For any feature placed at a distance r from the disc centre, where R is the disc radius, this radial contraction comes out as a ratio of 1 : SQRT[1 - rr/RR ] .
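The squashing factor is just the geometry of projecting a sphere onto a disc. A short Python sketch (the function name is mine, for illustration only):

```python
import math

def radial_squash(r, R):
    """Line-of-sight projection of a sphere of radius R onto a disc:
    a small circular crater whose centre lands at radius r on the disc
    keeps its tangential size, but has its radial size scaled by
    cos(phi) = sqrt(1 - r^2/R^2), where phi is the crater's angular
    position on the sphere."""
    return math.sqrt(1.0 - (r / R)**2)

# Same algebraic form as the Lorentz factor sqrt(1 - v^2/c^2),
# with r/R playing the role of v/c:
for r in (0.0, 0.5, 0.8, 0.99):
    print(f"r/R = {r:.2f}  radial squash = {radial_squash(r, 1.0):.3f}")
```

Swap r/R for v/c and the table of "crater contractions" becomes a table of Lorentz factors.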

In other words, by treating the Moon as a flat disc, we'd have derived the equivalent of the Lorentz factor as a ruler-contraction effect! :)
Our crazy mathematician could then go on and use that Lorentz relationship as the basis of a slew of good results in group theory and so on. They could argue that local physics works the same way at all points on the disc surface, because the disc's inhabitants can't "see" their own contraction, because their own local reference-rulers are contracted, too. Our mathematician could arguably have advanced faster and made better progress by starting with a bad theory! So "bad physics" sometimes generates "good" math, and sometimes the worse the physics is, the prettier the results.

The reason for this is that, sometimes, real physics is a bit ... boring. If we screw physics up, the dancing pattern of recursive error corrections sometimes generates more fascinating structures than the more mundane results that we'd have gotten if we simply got the physics right in the first place.

Sometimes these errors are self-correcting and sometimes they aren't.
If we considered the Earth as flat, then, because it's possible to map a flat surface onto a sphere (the Riemann projection), it'd still be theoretically possible to come up with a complete description of physics that worked correctly in the context of an infinite rescaled Flat Earth. We'd lose the inverse square law for gravity, but we'd gain some truly beautiful results, that would allow, say, a lightbeam aimed parallel to one part of the surface to appear to veer away. We'd end up with a more subtle, more sophisticated concept of gravitation than we'd tend to get using more "sane" approaches, and all of those new insights would have to be correct. In fact, studying flat-Earth gravity might be a good idea! We'd eventually end up deriving a mathematical description that was functionally identical to the physics that we'd get by assuming a spherical(ish) Earth ... it'd just take us longer. Once our description was sufficiently advanced, the decision whether to treat the Earth as "really" flat or "really" spherical would simply be a matter of convenience.

But with the "moon-disc" exercise, we don't have a 1:1 relationship between the physics and the dataset that we're working with, and as a result, although the moon-disc description gets a number of things exactly right, the model fails when we try to extend it, and we have to start applying additional layers of externally-derived theory to bring things back on track.
For instance, the "disc" description breaks down at (and towards) the Moon's apparent horizon. For the disc, the surface stops at a distance R from the centre, and there's a causal cutoff. Events beyond R can't affect the physics of the disk, because there's no more space for those events to happen in. The horizon represents an apparent causal limit to surface physics. But in real life, if the Moon was a busier place, we'd see things happening in the visible region that were the result of events beyond the horizon, and observers wandering about near our horizon would see things that occur outside our map. So if we were to use statistical mechanics to model Moon activity, and were to say that the event-density and event-pressure have to be uniform (after normalisation) at all parts of the surface, then statistical mechanics would force us to put back the missing trans-horizon signals by giving us "virtual" events whose density increased towards the horizon, and whose mathematical purpose was to restore the original event-density equilibrium. In disc-world, we'd have to say that the near-edge observer sees events in all directions, not because information was passing through (or around) the horizon, but because of the disc-world equivalent of Hawking radiation.

So in the disc description, the telltale sign that we're dealing with a bad model is that it generates over-idealised horizon behaviour that can't describe trans-horizon effects, and which needs an additional layer of statistical theory to make things right again. In the "moon-disc" model, we don't have a default agreement with statistical mechanics, and we have to assume that SM is correct, divide physics artificially into "classical" and "quantum" systems, and retrofit the difference between the two predictions back onto the bad classical model – as a separate QM effect, as the result of particle pair-production somewhere in front of the horizon limit – to explain how information seems to appear "from nowhere" just inside the visible edge of the disc.

Clearly, in the Moon-disc exercise this extreme level of retrofitting ought to tell our hypothetical crazy mathematician that things have gone too far, and suggest that the starting assumption of a flat surface was simply bad ...
... but in our physics, based on the early assumption of flat spacetime, and generating the same basic mathematical patterns, we ran into a version of exactly the same problem: Special relativity avoided the subject of signal transfer across velocity-horizons by arguing that the amount of velocity-space within the horizon was effectively infinite (you could never reach v=c), but when we added gravitational and cosmological layers to the theory, the "incompleteness problem" with SR-based physics showed up again. GR1915 horizons were too sharp and clean, and didn't allow outward flow of information, so to force the physics to obey more general rules, we had to reinvent an observable counterpart to old-fashioned transhorizon radiation as a separate quantum-mechanical effect.

So the result of this sanity-check exercise is a little humbling. We can demonstrate to our hypothetical 1600's "crazy mathematician" that the Moon is NOT flat, no matter how much pretty Lorentz math that generates, and we can use the horizon exercise to show them that their approach is incomplete. By assuming that their model is wrong, we correctly anticipate the corrections that they'd have to make from other theories in order to fix things up. That ability to predict where a theory fails and needs outside help is the mark of a superior system, and shows that the "Flat-Moon" exercise isn't just incomplete, it generates results that are physically wrong, and that don't self-correct. It's faulty physics.

But the same characteristic failure-pattern also shows up in our own system, based on special relativity. So have we made a similar mistake?

Saturday, 22 August 2009

Special Relativity is an Average

Special Relativity as an average: 'Classical Theory' (yellow block), Special Relativity (orange block), and Newtonian Optics (red block). Special relativity's numerical predictions are the 'geometric mean' average of the predictions for the other two blocks
Textbooks tend to present special relativity's physical predictions as if they're somehow "out on a limb", and totally distinct from the predictions of earlier models, but SR's numerical predictions aren't as different to those of Nineteenth-Century models as you might think.

One of the little nuggets of wisdom that the books usually forget to mention is that most of special relativity's raw predictions aren't just qualitatively not particularly novel, they're actually a type of mathematical average (more exactly, the geometric mean) of two earlier major sets of predictions. So, in the diagram above, if the yellow box on the left represents the set of predictions associated with the speed of light being fixed in the observer's frame (fixed, stationary aether), and the red box on the right represents the set of physical predictions for Newtonian optics (traditionally associated with ballistic emission theory), then the box in the middle represents the corresponding (intermediate) set of predictions for special relativity.

If we know the physical predictions for a simple "linear" quantity (visible frequency, apparent length, distance, time, wavelength and so on) in the two "side" boxes, then all we normally have to do to find the corresponding central "SR" prediction is to multiply the two original "flanking" predictions together and square root the result. This can be a really useful method if you're doing SR calculations and you want an independent method of double-checking your results.


This usually works with equations as well as with individual values.
F'rinstance, if the "linear" parameter that we were working with was observed frequency, and we assumed that the speed of light was fixed in our own frame ("yellow" box), we'd normally predict a recession Doppler shift due to simple propagation effects on an object of
frequency(seen) / frequency(emitted) = c / (c+v)
, whereas if we instead believed that lightspeed was fixed with reference to the emitter's frame, we'd get the "red box" result, of
frequency(seen) / frequency(emitted) = (c-v) / c
If there was really an absolute frame for the propagation of light, we could then tell how fast we were moving with respect to it by measuring these frequency-shifts.

The "geometric mean" approach eliminated this difference by replacing the two starting predictions with a single "merged" prediction that we could get by multiplying the two "parent" results together and square-rooting. This gave
frequency(seen) / frequency(emitted) = SQRT[ (c-v) / (c+v) ]
, which is what turned up in Einstein's 1905 electrodynamics paper.

The averaging technique gave us a way of generating a new prediction that "missed" both propagation-based predictions by the same ratio. Since the numbers in the "red" and "yellow" blocks already disagreed by the ratio 1: (1- vv/cc), the new intermediate, "relativised" theory diverged from both of these by the square root of that difference, SQRT[ 1 - vv/cc ]. And that's where the Fitzgerald-Lorentz factor originally came from.
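The whole averaging argument fits in a few lines of Python (the model names follow the colour-coding above; this is just a numerical restatement of the text, not anyone's official code):

```python
import math

def doppler_yellow(beta):
    """Recession Doppler shift if lightspeed is fixed in the observer's frame:
    f_seen / f_emitted = c / (c + v)."""
    return 1.0 / (1.0 + beta)

def doppler_red(beta):
    """Recession Doppler shift if lightspeed is fixed in the emitter's frame:
    f_seen / f_emitted = (c - v) / c."""
    return 1.0 - beta

def doppler_sr(beta):
    """SR's prediction: the geometric mean of the other two."""
    return math.sqrt(doppler_yellow(beta) * doppler_red(beta))

beta = 0.6
print(doppler_sr(beta))                          # ~0.5
print(math.sqrt((1-beta)/(1+beta)))              # same: sqrt[(c-v)/(c+v)]
# The two 'parent' predictions disagree by the ratio 1 : (1 - vv/cc) ...
print(doppler_red(beta) / doppler_yellow(beta))  # ~0.64 = 1 - 0.36
# ... so SR misses each of them by the square root of that, the Lorentz factor:
print(doppler_sr(beta) / doppler_yellow(beta))   # ~0.8 = sqrt(1 - vv/cc)
```

Handy as an independent cross-check when you're doing SR calculations by hand.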

---==---

Why is it important to know this?

Well, apart from the fact that it's useful to be able to calculate the same results in different ways, the "geometric mean" approach also has important implications for how we go about testing special relativity.
Our usual approach to testing SR is to compare just the "yellow" and "orange" predictions, identify the difference, say that the resulting differential Lorentz redshift/contraction component is something unique to SR and totally separate from any propagation effects, and then set out to measure the strength of this relative redshift/contraction component, in the range "zero-to-Lorentz". Having convinced ourselves that these effects are unique to SR, we usually don't then bother to check whether the data might actually make a better match to a point somewhere to the right of the diagram.
Since the "yellow box" predictions are so awful, special relativity comes out of this comparison pretty well.

But once you know the averaging method, you'll understand that this is only half the story -- these "derivative" effects that appear under SR but not "Classical Theory" ("orange" but not "yellow") must have counterparts under Newtonian optics ("red"), and these are usually stronger than the SR versions. So any experimental procedure or calculation that appears to support the idea of time dilation or length-contraction in an object with simple constant-velocity motion under SR would also generate an apparent positive result for those effects if SR was wrong and the older "Newtonian optics" relationships were the correct set (or if some other intermediate set of relationships was in play). We can say that special relativity's concept of velocity-based time dilation didn't exist under NO, but hardware doesn't care about concepts or interpretations, only results ... and the result of performing an SR-designed test in an "NO universe" would be that the test would throw up a "false positive" result apparently supporting SR (with an overshoot that'd then have to be calibrated out).
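As a toy illustration of the "false positive" point (using the geometric-mean rule above to fill in the transverse predictions; the function name and the choice of beta are mine, purely for illustration):

```python
import math

def transverse_frequency(beta, model):
    """Toy comparison of predicted transverse frequency ratios, following the
    text's averaging rule: 'yellow' classical theory predicts no transverse
    shift, SR predicts the Lorentz redshift, and Newtonian optics ('red')
    sits a further Lorentz factor beyond SR."""
    lorentz = math.sqrt(1.0 - beta**2)
    return {"yellow": 1.0, "sr": lorentz, "red": lorentz**2}[model]

beta = 0.8
# An experiment calibrated to detect the SR transverse shift (~0.6 here)
# would also register a positive 'time dilation' signal in an NO universe --
# just a stronger one (~0.36), an overshoot that would then need calibrating out:
print(transverse_frequency(beta, "sr"))   # ~0.6
print(transverse_frequency(beta, "red"))  # ~0.36
```

The test can tell you that *some* transverse effect exists, but on its own it doesn't single out SR's scaling from the rest of the range.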

And, actually, the situation is worse than this.
... Since the "yellow" and "red" blocks represent the two extremal predictions for theories that allow linkage between the velocity of a light-signal and the motion of a body ("yellow" = zero dependency, "red" = full dependency), they also seem to represent the cutoff-limits for a whole slew of old Nineteenth-Century "dragged aether" models, all of which would be expected to produce similar physical effects to special relativity, differing only in their scaling and strength. So typical test procedures designed to isolate the "new" SR effects should be able to generate "false positive" results with almost all of these old theories and models.

While some of special relativity's concepts might have been new, its testable numerical predictions lie right in the middle of a pre-existing range. Any time you see a claimed experimental verification of SR that forgets to take this into account, treat it with caution.

Friday, 14 August 2009

Fun with Special Relativity

detail from Salvador Dali's 'The Disintegration of the Persistence of Memory' (http://en.wikipedia.org/wiki/The_Disintegration_of_the_Persistence_of_Memory), oil on canvas, circa 1952-54. This is where I surprise everyone by saying something nice about Einstein's Special Theory of Relativity for a change. Considered as a piece of abstract geometry, special relativity (aka "SR" or "STR") is prettier than even some of its proponents give it credit for. The problems only kick in when you realise that the basic principles and geometry of SR considered as physics don't correspond well to the rules that real, physical observers and objects appear to follow in real life.

Anyhow, here's some of the pretty stuff:

It's traditional to explain Einstein's special theory of relativity as a theory that says that the speed of light is fixed (globally) in our own frame of reference, and that objects moving with respect to our frame are time-dilated and length-contracted, by the famous Lorentz factor.
And that characterisation certainly generated the appropriate predictions for special relativity, just as it did for Lorentzian Ether Theory ("LET"). But we can't verify that this time-dilation effect is physically real in cases where SR applies the principle of relativity (i.e. cases that only involve simple uniform linear motion). Thanks to its application of Lorentz-factor relationships, Special Relativity doesn't allow us to physically identify the frame that lightspeed is supposed to be constant in. When we make proper, context-appropriate calculations within SR, we have the choice of assuming that lightspeed is globally constant in our frame, or in the frame of the object we're watching, or in the frame of anybody else who has a legal inertial frame – it's usually a sensible choice to use our own frame as the reference, but really, it doesn't matter which one we pick, and sometimes the math simplifies if we use someone else's frame as our reference (as Einstein did in section 7 of his 1905 paper).

Some people who've learnt special relativity through the usual educational sources have expressed a certain amount of disbelief (putting it mildly) when I mention that SR allows observers a free choice of inertial reference frame, so let's try a few examples, to get a feel of how special relativity really works when we step away from the older "LET" descriptions that spawned it.

Some Mathy Bits:

1: Physical prediction
Let's suppose that an object is receding from us at a velocity of four-fifths of the speed of light, v = 0.8c.
Special relativity predicts that the frequency shift that we'll see is given by
frequency(seen)/frequency(original) = SQRT[ (c-v) / (c+v) ]
= SQRT[ (1-0.8) / (1+0.8) ]
= SQRT[ 0.2/1.8 ] = SQRT[ 1/9 ]
= 1/3
, so according to SR, we should see the object's signals to have one third of their original frequency. This is special relativity's physical prediction. The object looks to us, superficially, as if it's ageing at one third of its normal rate, but we have a certain amount of freedom over how we choose to interpret this result.
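If you want to check the arithmetic, here's a minimal Python sketch of the relativistic Doppler formula above (the function name is mine, not standard):

```python
import math

def sr_doppler(beta):
    """Relativistic Doppler shift f(seen)/f(original) for an object
    receding at v = beta * c, i.e. SQRT[ (c-v) / (c+v) ]."""
    return math.sqrt((1 - beta) / (1 + beta))

print(sr_doppler(0.8))  # ~0.3333, i.e. one third of the original frequency
```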

2: "Motion plus time dilation"
It's usual to break this physical SR prediction into two notional components, a component due to more traditional "propagation-based" Doppler effects, calculated by assuming that lightspeed's globally constant in somebody's frame, and an additional "Lorentz factor" time dilation component based on how fast the object is moving with respect to that frame.
The "simple" recession Doppler shift that we'd calculate for v = 0.8c by assuming that lightspeed was fixed in our own frame would be
frequency(seen) / frequency(original) = c/(c+v)
= 1/(1+0.8) = 1/1.8
, and the associated SR Lorentz-factor time-dilation redshift is given by
freq'/freq = SQRT[ 1 - vv/cc ]
= SQRT[ 1 - (0.8)² ] = SQRT[ 1 - 0.64 ] = SQRT[ 0.36 ]
= 0.6
Multiplying 0.6 by 1/1.8 gives
0.6/1.8 = 6/18
= 1/3

Same answer.

3: Different frame
Or, we can do it by assuming that the selected emitter's frame is the universal reference.
This gives a different propagation Doppler shift result, of
freq'/freq = (c-v)/c
= 1 - 0.8 = 0.2

We then assume that because we're time dilated (because we're moving w.r.t. the reference frame), and that because our clocks are slow, we're seeing everything to be Lorentz-blueshifted, and appearing to age faster than we'd otherwise expect, by the Lorentz factor.
The formula for this is
freq'/freq = 1/SQRT[ 1 - vv/cc ]
= 1/0.6 = 5/3
Multiplying these two components together gives a final prediction for the apparent frequency shift of
0.2× (1/0.6) = 0.2/0.6 = 2/6
= 1/3
Same answer.
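Both of these decompositions can be checked numerically. A minimal Python sketch (variable names are my own), confirming that sections 2 and 3 multiply out to the same 1/3:

```python
import math

beta = 0.8                                  # recession velocity as a fraction of c
lorentz = math.sqrt(1 - beta**2)            # Lorentz factor term, SQRT[1 - vv/cc] = 0.6

# Section 2: lightspeed fixed in OUR frame:
# classical recession Doppler, times a time-dilation redshift on the emitter
our_frame = (1 / (1 + beta)) * lorentz      # (1/1.8) * 0.6

# Section 3: lightspeed fixed in the EMITTER's frame:
# classical recession Doppler, times a time-dilation blueshift on us
their_frame = (1 - beta) * (1 / lorentz)    # 0.2 * (1/0.6)

print(our_frame, their_frame)               # both ~0.3333 = 1/3
```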

So although you sometimes see physicists saying that thanks to special relativity, we know that the speed of light is globally fixed in our own frame, and we know that particles moving at constant speed down an accelerator tube are time-dilated, actually we don't. In the best-case scenario, in which we assume that SR's physical predictions are actually correct, the theory says that we're entitled to assume these things as interpretations of the data, but according to the math of special relativity, if we stick to cases in which SR is able to obey the principle of relativity, it's physically impossible to demonstrate which frame light "really" propagates in, or to prove whether an inertially-moving body is "really" time-dilated or not. It's interpretative. Regardless of whether we decide that we're moving and time-dilated or they are, the final physical predictions are precisely the same, either way. And that's the clever feature that we get by incorporating a Lorentz factor, that George Francis Fitzgerald originally spotted back in the Nineteenth Century, that Hendrik Antoon Lorentz also noticed, and that Albert Einstein then picked up on.

4: Other frames, compound shifts, no time dilation
But we're not just limited to a choice between these two reference frames: we can use any SR-legal inertial reference frame for the theory's calculations and still get the same answer.
Let's try a more ambitious example, and select a reference-frame exactly intermediate to our frame and that of the object that we're viewing. In this description, both of us are said to be moving by precisely the same amount, and could be said to be time-dilated by the same amount ... so there's no relative time dilation at all between us and the watched object. We can then go ahead and calculate the expected frequency-shift in two stages just by using the simpler pre-SR Doppler relationships, and get exactly the same answer without invoking time dilation at all!

The "wrinkle" in these calculations is that velocities under special relativity don't add and subtract like "normal" numbers (thanks to the SR "velocity addition" formula), so if we divide our recession velocity of 0.8c into two equal parts, we don't get (0.4c + 0.4c), but (0.5c + 0.5c).
(under SR, 0.5c+0.5c=0.8c – if you don't believe me, look up the formula and try it)
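For the sceptical, here's the SR velocity-addition formula as a quick Python sketch (the function name is mine):

```python
def sr_add(u, v):
    """SR velocity addition for two colinear velocities, each expressed
    as a fraction of c: w = (u + v) / (1 + uv/c^2)."""
    return (u + v) / (1 + u * v)

print(sr_add(0.5, 0.5))  # 0.8 -- under SR, 0.5c "plus" 0.5c really is 0.8c
```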

So, back to our final example. The receding object throws light into the intermediate reference frame while moving at 0.5c. The Doppler formula for this assumes "fixed-c" for the receiver, giving
freq'/freq = c/(c+v)
= 1/1.5 = 2/3
Having been received in the intermediate frame with a redshift of f'/f = 2/3 (about 66.7%), the signal is then forwarded on to us. We're moving away from the signal, so it's another recession redshift.
The second propagation shift is calculated assuming fixed lightspeed for the emitting frame, giving
freq'/freq = (c-v)/c
= 1 - 0.5 = 1/2
The end result of multiplying both of these propagation shift stages together is then
2/3 × 1/2
= 1/3
Again, exactly the same result.
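The two-stage, no-time-dilation version is just as easy to check. A minimal Python sketch of the intermediate-frame calculation above (variable names are mine):

```python
beta = 0.5   # each half of the 0.8c recession, after SR velocity-splitting

# Stage 1: emitter -> intermediate frame, lightspeed fixed for the receiver
stage1 = 1 / (1 + beta)   # c/(c+v) = 2/3

# Stage 2: intermediate frame -> us, lightspeed fixed for the emitter
stage2 = 1 - beta         # (c-v)/c = 1/2

print(stage1 * stage2)    # ~0.3333 = 1/3, with no time-dilation term anywhere
```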

No matter which SR-legal inertial frame we use to peg lightspeed to, special relativity insists on generating precisely the same physical results, and this is the same for frequency, aberration, apparent changes in length, everything.

So when particle physicists say that thanks to special relativity we know for a physical fact that lightspeed is really fixed in our own frame, and that objects moving w.r.t. us are really time-dilated ... I'm sorry, but we don't. We really, really don't. We can't. If you don't trust the math and need to see it spelt out in black and white in print, try Box 3-4 of Taylor and Wheeler's "Spacetime Physics", ISBN 0716723271. IF special relativity has the correct relationships, and is the correct description of physics, then the structure of the theory prevents us from being able to make unambiguous measurements of these sorts of things on principle. We can try to test the overall final physical predictions (section 1), and we can choose to describe that prediction by dividing it up into different nominal components, but we can't physically isolate and measure those components individually, because the division is totally arbitrary and unphysical. If the special theory is correct, then there's no possible experiment that could show that an object moving with simple rectilinear motion is really time-dilated.

If you're a particle physicist and you can't accept this, go ask a mathematician.