
Sunday, 6 September 2009

The Moon, considered as a Flat Disc

The Moon considered as a flat disc gives Lorentz relationships
Mathematics doesn't always translate directly to physics.
That statement might sound odd to a mathematician, but consider this: even if you believe that physics is nothing but mathematics, that makes physics a subset of mathematics ... which means that there'll be other mathematics that lies outside that subset, that doesn't correspond cleanly to real-world physical theory. The key (for a physicist) is to know which is which.

That's not to say that "beauty equals truth" isn't a good working assumption in mathematical physics – it is – the problem is that the aesthetics of the two subjects are different: mathematical beauty doesn't necessarily correspond to physical truth, and the physicist's concept of beauty is often not the mathematician's.

The "beauty equals truth" idea is often used as an argument for special relativity. SR uses the Lorentz relationships, and to a mathematician, it can sometimes seem that these are such beautiful equations that a system of physics that incorporates them has to be correct.

But the Lorentz relationships can also appear in bad theories, as a consequence of rotten initial starting assumptions:
Our Moon is tidally locked to the rotation of the Earth, so that it always shows the same face to us, and we always see the same circular image, with the same mappable features. Now suppose that a 1600's mathematician has a funny turn and decides that it's so outrageously statistically improbable that the Moon would just coincidentally happen to have an orbit that results in it presenting the same face to us at all times, that something else must be going on. Our hypothetical "crazy mathematician" might decide that, since we always see the same disc-image of the Moon, perhaps, (mis)applying Occam's Razor, it really IS a flat disc.

Our mathematician could start examining the features on the Moon's surface, and discover a trend whereby circular craters appear progressively more squashed towards the disc's perimeter. We'd say that this shows that we're looking at one half of a sphere, but our mathematician could analyse the shapes and come up with another explanation. It turns out that, in "disc-world", the distortion corresponds to an apparent radial coordinate-system contraction within the disc surface. For any feature placed at a distance r from the disc centre, where R is the disc radius, this radial contraction comes out as a ratio of 1 : √(1 − r²/R²).
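As a quick check of that ratio (my own working, assuming a simple face-on view of a hemisphere of radius R): a surface point at angle θ from the line of sight projects to disc radius r = R sin θ, so a small radial strip of true length R dθ projects to

\[ dr = R\cos\theta \, d\theta = \sqrt{1 - r^2/R^2}\,\bigl(R\,d\theta\bigr), \]

i.e. radial lengths appear shortened by exactly the factor √(1 − r²/R²), while lengths running "around" the disc at constant r are unaffected.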

In other words, by treating the Moon as a flat disc, we'd have derived the equivalent of the Lorentz factor as a ruler-contraction effect! :)
Our crazy mathematician could then go on and use that Lorentz relationship as the basis of a slew of good results in group theory and so on. They could argue that local physics works the same way at all points on the disc surface, because the disc's inhabitants can't "see" their own contraction: their own local reference-rulers are contracted, too. Our mathematician could arguably have made faster and better progress by starting with a bad theory! So "bad physics" sometimes generates "good" math, and sometimes the worse the physics is, the prettier the results.
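To spell out the "can't see their own contraction" step (my own notation, not the original's): if an object of true length L and the local reference-ruler of true length L₀ at radius r are both squashed radially by the same factor, the number the inhabitants actually measure is

\[ \frac{L\,\sqrt{1 - r^2/R^2}}{L_0\,\sqrt{1 - r^2/R^2}} = \frac{L}{L_0}, \]

which is independent of r, so every local measurement comes out exactly as it would without the contraction.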

The reason for this is that, sometimes, real physics is a bit ... boring. If we screw physics up, the dancing pattern of recursive error corrections sometimes generates more fascinating structures than the more mundane results that we'd have gotten if we'd simply got the physics right in the first place.

Sometimes these errors are self-correcting and sometimes they aren't.
If we considered the Earth as flat, then, because it's possible to map a flat surface onto a sphere (the Riemann projection), it'd still be theoretically possible to come up with a complete description of physics that worked correctly in the context of an infinite rescaled Flat Earth. We'd lose the inverse square law for gravity, but we'd gain some truly beautiful results, that would allow, say, a lightbeam aimed parallel to one part of the surface to appear to veer away. We'd end up with a more subtle, more sophisticated concept of gravitation than we'd tend to get using more "sane" approaches, and all of those new insights would have to be correct. In fact, studying flat-Earth gravity might be a good idea! We'd eventually end up deriving a mathematical description that was functionally identical to the physics that we'd get by assuming a spherical(ish) Earth ... it'd just take us longer. Once our description was sufficiently advanced, the decision whether to treat the Earth as "really" flat or "really" spherical would simply be a matter of convenience.
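For concreteness (my own gloss, not part of the original argument): the "Riemann projection" here is what's usually called the stereographic projection of the Riemann sphere, which sends a point (x, y, z) on a unit sphere, projected from the north pole, to the plane point

\[ (X, Y) = \left(\frac{x}{1 - z}, \ \frac{y}{1 - z}\right). \]

Great circles that miss the north pole map to circles in the plane, so a path that runs "straight" along the real curved surface traces a curved path on the flat map, which is the sort of apparent veering-away described above.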

But with the "moon-disc" exercise, we don't have a 1:1 relationship between the physics and the dataset that we're working with, and as a result, although the moon-disc description gets a number of things exactly right, the model fails when we try to extend it, and we have to start applying additional layers of externally-derived theory to bring things back on track.
For instance, the "disc" description breaks down at (and towards) the Moon's apparent horizon. For the disc, the surface stops at a distance R from the centre, and there's a causal cutoff. Events beyond R can't affect the physics of the disc, because there's no more space for those events to happen in. The horizon represents an apparent causal limit to surface physics. But in real life, if the Moon were a busier place, we'd see things happening in the visible region that were the result of events beyond the horizon, and observers wandering about near our horizon would see things that occur outside our map. So if we were to use statistical mechanics to model Moon activity, and were to say that the event-density and event-pressure have to be uniform (after normalisation) at all parts of the surface, then statistical mechanics would force us to put back the missing trans-horizon signals by giving us "virtual" events whose density increased towards the horizon, and whose mathematical purpose was to restore the original event-density equilibrium (see the sketch below). In disc-world, we'd have to say that the near-edge observer sees events in all directions, not because information was passing through (or around) the horizon, but because of the disc-world equivalent of Hawking radiation.
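To put a rough number on how sharply that "virtual" event density would have to climb, here's a small Monte Carlo sketch (my own illustration, in Python with numpy, using R = 1; the variable names are made up): events spread uniformly over the real hemisphere show up, in disc coordinates, with a surface density proportional to 1/√(1 − r²/R²), diverging at the rim.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Uniform sampling over the near hemisphere of a unit sphere:
# the height z is uniform on [0, 1] (Archimedes), and each event
# projects onto the disc at radius r = sqrt(1 - z^2).
z = rng.uniform(0.0, 1.0, n)
r = np.sqrt(1.0 - z**2)

# Bin the projected events into thin annuli and compute the
# (normalised) number of events per unit disc area in each one.
edges = np.linspace(0.0, 1.0, 21)
counts, _ = np.histogram(r, bins=edges)
areas = np.pi * (edges[1:]**2 - edges[:-1]**2)
measured = counts / (areas * n)

# Compare with the analytic density 1 / (2 * pi * sqrt(1 - r^2)).
mid_r = 0.5 * (edges[1:] + edges[:-1])
predicted = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - mid_r**2))

for rr, m, p in zip(mid_r, measured, predicted):
    print(f"r = {rr:4.2f}   measured {m:6.3f}   predicted {p:6.3f}")

The measured and predicted columns should agree to within sampling noise, and both blow up as r approaches 1: even a perfectly uniform real-world event distribution already demands, in disc terms, an event density that diverges at the horizon.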

So in the disc description, the telltale sign that we're dealing with a bad model is that it generates over-idealised horizon behaviour that can't describe trans-horizon effects, and that needs an additional layer of statistical theory to make things right again. In the "moon-disc" model, we don't have a default agreement with statistical mechanics, and we have to assume that SM is correct, divide physics artificially into "classical" and "quantum" systems, and retrofit the difference between the two predictions back onto the bad classical model – as a separate QM effect, as the result of particle pair-production somewhere in front of the horizon limit – to explain how information seems to appear "from nowhere" just inside the visible edge of the disc.

Clearly, in the Moon-disc exercise this extreme level of retrofitting ought to tell our hypothetical crazy mathematician that things have gone too far, and suggest that the starting assumption of a flat surface was simply bad ...
... but in our physics, based on the early assumption of flat spacetime, and generating the same basic mathematical patterns, we ran into a version of exactly the same problem: Special relativity avoided the subject of signal transfer across velocity-horizons by arguing that the amount of velocity-space within the horizon was effectively infinite (you could never reach v=c), but when we added gravitational and cosmological layers to the theory, the "incompleteness problem" with SR-based physics showed up again. GR1915 horizons were too sharp and clean, and didn't allow outward flow of information, so to force the physics to obey more general rules, we had to reinvent an observable counterpart to old-fashioned trans-horizon radiation as a separate quantum-mechanical effect.
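To unpack the "you could never reach v=c" remark using the standard formulas (my addition, not the original text): under special relativity's velocity-addition law, combining any two sub-light speeds still gives a sub-light speed, and the natural additive velocity coordinate (the rapidity) runs off to infinity as v approaches c,

\[ w = \frac{u + v}{1 + uv/c^2} < c \quad (u, v < c), \qquad \phi = \tanh^{-1}\!\left(\frac{v}{c}\right) \to \infty \ \text{as } v \to c, \]

so the velocity-space inside the horizon really is "effectively infinite", and SR never has to say what happens at or across v = c.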

So the result of this sanity-check exercise is a little humbling. We can demonstrate to our hypothetical 1600's "crazy mathematician" that the Moon is NOT flat, no matter how much pretty Lorentz math the flat-disc assumption generates, and we can use the horizon exercise to show them that their approach is incomplete. By assuming that their model is wrong, we correctly anticipate the corrections that they'd have to make from other theories in order to fix things up. That ability to predict where a theory fails and needs outside help is the mark of a superior system, and shows that the "Flat-Moon" exercise isn't just incomplete, it generates results that are physically wrong, and that don't self-correct. It's faulty physics.

But the same characteristic failure-pattern also shows up in our own system, based on special relativity. So have we made a similar mistake?

Saturday, 27 June 2009

Physics Fraud, and the Impossible Diamond


Physicists used to tell me that physics was a special subject, because you never had to worry about the possibility of fraud. Their reasoning was that You Can't Fake Physics. If you make up an experimental result that isn't right, you're doomed to be found out when other people try the same experiment and can't replicate your result. It's a dumb thing to do, and no physicist would ever be stupid enough to try.

However, it might be more accurate to say that perhaps no sane physicist would try to fake a result that they believed to be wrong. Faking a correct result may be cheating, but doesn't carry the same risk. It's much more difficult to spot a fake result when it agrees with everyone else's results and with what everybody expects to happen.

We can sometimes spot a "false positive" when a theoretical prediction that was supposedly verified later turns out to be wrong, or when an experimental technique later turns out to be impossible, or impossible to conduct to the claimed accuracy. When this happens in an experiment that contradicts current theory, we usually rip the person responsible to shreds, and accusations of scientific fraud start flying. When it happens in an experiment that agrees with current theory, we're usually more charitable, and tend to say that perhaps the experimenter was simply mistaken, or overcome with a little too much enthusiasm. There's such a large grey area for honest mistakes, or the unconscious selection of "good" data (or simple wishful thinking), that a certain amount of bad science probably slips under the radar without being spotted, and it's not often that we find a "bad" result supporting a "good" outcome that's so profoundly impossible that people are forced to consider using the "f" word.



One candidate case happened in 1955.
Researchers had been wanting to create artificial diamonds since at least as far back as the Nineteenth Century. When H.G. Wells published his short story "The Diamond Maker" in 1894, a number of researchers had already been trying approaches with varying degrees of optimism and claiming positive results, including James Ballantyne Hannay in 1880, and Nobel Prize-winner Henri Moissan (also in 1894). One of the wildest attempts to create artificial diamond was carried out by John Logie Baird, who briefly blacked out part of Glasgow when he deliberately short-circuited an electricity substation's power terminals across a graphite rod embedded in reinforced concrete (the story goes that he couldn't work out how to get the thing open afterwards, and it ended up at the bottom of a river, unexamined).

The potential financial payoff for anyone able to create artificial diamonds on demand was obvious, and by the 1950's there had been more reported (but often disputed) successes, and competing researchers were trying desperately hard to be the first people to produce a proper, replicable, accepted process that definitely did produce diamonds. One team in particular figured that they were on the edge of actually achieving it. They had the theory right, they had the equipment right ... the only problem was that their pressure-vessel obstinately refused to cough up any diamonds.
It was desperately unfair. They'd done all the work correctly, and the experiment refused to come out the way it was supposed to. They needed a diamond to get further funding. From their perspective, they probably reckoned that they deserved a diamond. It was necessary for their future research. Science needed a diamond!

And a diamond dutifully appeared. They got new funding, bought new equipment and replicated the result, others managed the same thing, and everyone was happy.

Except that ... someone went back and checked the calibration on the original pressure reactor and found that its readings had been significantly "off". The pressure-vessel had been running at too low a pressure for diamond to form. With hindsight, their original artificial diamond seemed to have been a physical impossibility. So how did it get there?

Three of the four original team members put their names to a letter to Nature in 1993, explaining that spectral analysis of the "run 151" diamond, carried out years later, had shown that it appeared to have the characteristics of a natural gemstone rather than those of an artificial rock. The experimenters had carried a small stock of natural diamonds for research purposes, and it seemed that one of those had somehow found its way into the pressure vessel during setup, and been "fortuitously" discovered after the experiment.

It's quite a nicely- and elegantly-written letter, but the authors must have been acutely aware of how it looks to most people: "accidentally" losing a real diamond inside an apparatus designed to create artificial diamond, in such a way that it could then be rediscovered and used to get further desperately-needed money ... if this happened in any other field, we'd tend to assume deliberate fraud.



Another thing that might surprise some outsiders is that although the announcement that the experiment had been a success was made in 1955, the retraction didn't happen until 1993, nearly forty years later. For Twentieth-Century experimental physics, this wasn't actually all that unusual – there seemed to be an unspoken "gentlemen's agreement" that if someone had claimed a "correct" result that they shouldn't have, the community would hold off making too many pointed suggestions in print until some time after the person concerned was safely dead. This was probably a great way of avoiding public controversies, but it also meant that we never really got to the bottom of what had happened in many of these cases. If you weren't supposed to go public while someone was still alive, but you couldn't suggest fraud after they were dead (because it was unfair to level that sort of accusation at someone when they couldn't defend themselves), then anyone who did get up to no good had a decent chance of not being publicly outed, in print, ever. By the time a critical report could be written, the people with first-hand knowledge of what had really happened might have all died off.

By avoiding investigating these cases until after it was too late to reach a conclusion, the physics community probably did manage to achieve a nominal "no confirmed mainstream fraud" result. But that result was itself not especially honest.

Things are now looking up. Lawrence Berkeley National Laboratory went public very quickly about problems with the work of Victor Ninov, and Bell Labs did the same over Jan Hendrik Schön – two separate cases of physicists who seemed to have been almost routinely fabricating data to get their "world-class" results – and there've now been a few more speedy "outings" of scientists caught misbehaving. So from now onwards, the more temptation-prone members of the physics community know that if they gain fame and fortune by faking data, universities and commissioning bodies won't necessarily hush the thing up for them.

But for research published before 2000 (or perhaps before ~2005) ... be more careful. A certain number of the "jewels" in physics history aren't quite what they appear to be.