One of the little nuggets of wisdom that the books usually forget to mention is that most of special relativity's raw predictions aren't just qualitatively unoriginal: quantitatively, they're a type of mathematical average (more exactly, the geometric mean) of two earlier major sets of predictions. So, in the diagram above, if the yellow box on the left represents the set of predictions associated with the speed of light being fixed in the observer's frame (a fixed, stationary aether), and the red box on the right represents the set of physical predictions for Newtonian optics (traditionally associated with ballistic emission theory), then the box in the middle represents the corresponding, intermediate set of predictions for special relativity.
If we know the physical predictions for a simple "linear" quantity (visible frequency, apparent length, distance, time, wavelength and so on) in the two "side" boxes, then all we normally have to do to find the corresponding central "SR" prediction is to multiply the two original "flanking" predictions together and square-root the result. This can be a really useful method if you're doing SR calculations and you want an independent way of double-checking your results.
This usually works with equations as well as with individual values.
F'rinstance, if the "linear" parameter that we were working with was observed frequency, and we assumed that the speed of light was fixed in our own frame (the "yellow" box), we'd normally predict a recession Doppler shift on an object, due to simple propagation effects, of

frequency(seen) / frequency(emitted) = c / (c+v)

whereas if we instead believed that lightspeed was fixed with reference to the emitter's frame, we'd get the "red box" result of

frequency(seen) / frequency(emitted) = (c-v) / c

If there really was an absolute frame for the propagation of light, we could then tell how fast we were moving with respect to it by measuring these frequency-shifts.
The "geometric mean" approach eliminated this difference by replacing the two starting predictions with a single "merged" prediction that we could get by multiplying the two "parent" results together and square-rooting. This gave
frequency(seen) / frequency(emitted) = SQRT[ (c-v) / (c+v) ], which is what turned up in Einstein's 1905 electrodynamics paper.
The averaging technique gave us a way of generating a new prediction that "missed" both propagation-based predictions by the same ratio. Since the numbers in the "red" and "yellow" blocks already disagreed by the ratio 1 : (1 - vv/cc), the new intermediate, "relativised" theory diverged from both of these by the square root of that difference, SQRT[ 1 - vv/cc ]. And that's where the FitzGerald-Lorentz factor originally came from.
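If you want to see the numbers work through, here's a minimal Python sketch of the cross-check -- the variable names and the choice of v = 0.6c are purely illustrative:

    from math import sqrt

    # Recession Doppler predictions for frequency(seen) / frequency(emitted),
    # using an illustrative recession velocity of v = 0.6c
    c, v = 5.0, 3.0
    yellow = c / (c + v)        # lightspeed fixed in the observer's frame: 0.625
    red = (c - v) / c           # lightspeed fixed in the emitter's frame:  0.4

    # geometric mean of the two "flanking" predictions ...
    sr = sqrt(yellow * red)
    # ... agrees with special relativity's own Doppler prediction:
    print(sr, sqrt((c - v) / (c + v)))                    # 0.5  0.5

    # and it "misses" each flanking prediction by the same factor,
    # the FitzGerald-Lorentz term SQRT[ 1 - vv/cc ]:
    print(sr / yellow, red / sr, sqrt(1 - v**2 / c**2))   # ~0.8  ~0.8  ~0.8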
---==---

Why is it important to know this?
Well, apart from the fact that it's useful to be able to calculate the same results in different ways, the "geometric mean" approach also has important implications for how we go about testing special relativity.
Our usual approach to testing SR is to compare just the "yellow" and "orange" (SR) predictions, identify the difference, declare that the resulting differential Lorentz redshift/contraction component is something unique to SR and totally separate from any propagation effects, and then set out to measure the strength of this relative redshift/contraction component, in the range "zero-to-Lorentz". Having convinced ourselves that these effects are unique to SR, we usually don't then bother to check whether the data might actually make a better match to a point somewhere to the right of the diagram.
Since the "yellow box" predictions are so awful, special relativity comes out of this comparison pretty well.
But once you know the averaging method, you'll understand that this is only half the story -- these "derivative" effects that appear under SR but not under "Classical Theory" ("orange" but not "yellow") must have counterparts under Newtonian optics ("red"), and those counterparts are usually stronger than the SR versions. So any experimental procedure or calculation that appears to support the idea of time dilation or length-contraction in an object with simple constant-velocity motion under SR would also generate an apparent positive result for those effects if SR was wrong and the older "Newtonian optics" (NO) relationships were the correct set (or if some other intermediate set of relationships was in play). We can say that special relativity's concept of velocity-based time dilation didn't exist under NO, but hardware doesn't care about concepts or interpretations, only results ... and the result of performing an SR-designed test in an "NO universe" would be that the test would throw up a "false positive" result apparently supporting SR (with an overshoot that would then have to be calibrated out).
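To put rough numbers on that, here's a small Python sketch (again with an illustrative v = 0.6c): imagine a test that measures the redshift of a receding source, divides out the "yellow" propagation-only prediction, and attributes whatever is left over to velocity-based time dilation.

    from math import sqrt

    c, v = 5.0, 3.0                     # illustrative recession velocity, v = 0.6c
    yellow = c / (c + v)                # propagation-only ("classical") prediction
    sr = sqrt((c - v) / (c + v))        # special relativity
    red = (c - v) / c                   # Newtonian optics / emission theory

    # "leftover" redshift after dividing out the propagation-only part --
    # the component an SR-designed test would read as time dilation:
    print(sr / yellow)     # ~0.80 : the Lorentz factor, as SR expects
    print(red / yellow)    # ~0.64 : a stronger, "false positive" effect under Newtonian optics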
And, actually, the situation is worse than this.
... Since the "yellow" and "red" blocks represent the two extremal predictions for theories that allow linkage between the velocity of a light-signal and the motion of a body ("yellow" = zero dependency, "red" = full dependency), they also seem to represent the cutoff-limits for a whole slew of old Nineteenth-Century "dragged aether" models, all of which would be expected to produce similar physical effects to special relativity, differing only in their scaling and strength. So typical test procedures designed to isolate the "new" SR effects should be able to generate "false positive" results with almost all of these old theories and models.
While some of special relativity's concepts might have been new, its testable numerical predictions lie right in the middle of a pre-existing range. Any time you see a claimed experimental verification of SR that forgets to take this into account, treat it with caution.