Friday, 31 July 2009

Computers Don't Work


Computers don't work.

This is because, with a few notable exceptions related to applications in scientific and mathematical number-crunching (like computer modelling and rocket science), when a computer system does work, we usually stop calling it a computer. When small numerical calculating computers became reliable and cheap and mass-produced, we stopped calling them "computers" and started calling them "pocket calculators". When personal computers became mainstream and stopped being niche toys for geeks, we started calling them "PCs" or "Macs", without really caring what "PeeCee" stood for.

In offices where the IT system works well, people tend to refer to the things on their desks as "workstations" or "terminals". These are things that you just switch on and start working at. They're functioning business tools.

So, once all the general-purpose systems that work are taken out of the equation, what we're left with is the bespoke systems, and all the systems that don't quite work, aren't quite finished yet, or need a lot of technical support and hand-holding. These are the scary, technical, sometimes-malfunctioning things that we still refer to as computers.

It's interesting to watch this change in naming happening with products as a market sector becomes mature. Home weather monitoring systems drifted from being marketed as "weather computers" to "weather stations", and in-car navigation systems shifted from initially being referred to reverentially as "in-car computers" to being casually referred to as "GPSes". Once the novelty factor has worn off, and people know that a product is reliable, useful and worthwhile, the "computer" tag gets dropped.

So as far as retail products are concerned, "computers", almost by definition, are the remaining gadgets that are either too new to be judged yet, or that don't work properly without a certain amount of expert hand-holding.

This also gives us a handy way of quickly assessing how good a company's IT infrastructure is. If you're visiting an office, and the general office staff refer to their "computers", then the chances are that either the staff aren't very computer-literate, or the office has just been undergoing a painful IT transition, or ... their IT systems simply suck. Try it.

Friday, 24 July 2009

Kew Gardens is Nice

[Thumbnail link: map of Kew Gardens]

Visited Kew Gardens on Thursday with the family, to scatter our Mum's ashes.

Kew Gardens is cool. It's a 121-hectare site, with a collection of plants and habitats from around the world, and various public greenhouses with their own microhabitats (one of which has its own multicoloured lizard running wild). It's been going in various forms for about 250 years, and it's been a national botanical garden for about the last 170. In some ways it's the forerunner of the Eden Project, and it's the only site that I know of in the UK, other than my place, that has chocolate trees.

As well as the on-site research stations, there's now also a satellite site at Wakehurst Place, where they do more of the Millennium Seed Bank Project stuff.

Mum wanted to be a tree surgeon when she was a kid, so she was really into Kew, and was a paid-up member. She even had an old (legitimately acquired!) Kew Gardens sign in her garden.

So it was kind of a nice day.

Friday, 17 July 2009

Xenotransplantation and Swine Flu

[Link to the New Scientist article with a larger original version of the photograph]
Trying to solve the organ transplant shortage using pig organs was both a really good idea and a really bad idea. It was good because a pig's body is reasonably close to ours in terms of size, biology and organ-loading (and because pigs are omnivores, like us) ... and bad because of the virus problem that some people didn't like talking about.

There are three main reservoirs of "foreign" viruses that sometimes cross over into the human population and catch our immune systems unawares - other primates, livestock, and birds. Primates tend to be blamed for the origins of the AIDS virus, the 1918 "Spanish Flu" outbreak that killed between fifty and a hundred million people is sometimes reckoned to have crossed over from birds, and where mammalian livestock is concerned, the culprit is usually assumed to be pigs.

When a disease like this crosses over from a pig or a chicken, we sometimes get a bit disgruntled in the West and mutter that these poor agricultural communities really shouldn't be living in such close proximity to their animals, but for years we've been planning on going one better. Transplanting pig organs into people means that living pig tissue is in as intimate contact with human tissue as it's possible to be - actually snuggled up together subdermally and sharing a common blood supply. In Darwinian terms, if you wanted to encourage pig viruses to evolve so that they could thrive in a human environment, this is exactly how you'd do it, and if you were a genocidal mad scientist intent on "accidentally" killing millions of people in a cost-effective manner, without actually hiring weapons research specialists and running the risk of being spotted, then this'd be a great way to do it.

Now, you might think that we could breed a "special" population of guaranteed "disease-free" oinkers in laboratory conditions, to ensure that any transplant organs were kept squeaky-clean and to minimise the risk, as long as the organ recipients were then kept well away from any live pigs (to protect both the human and pig populations). Some researchers were indeed supposed to be setting up special facilities for breeding these "special" pigs, perhaps with a bit of gene-manipulation to make the immune-system rejection problems less severe.

Snag is, it turns out that you can't breed "clean" pigs.
Normally, the DNA in your cell nuclei codes for proteins that get used within the cell, and for RNA that moves out of the cell nucleus to do Very Useful Things in other parts of the cell. DNA also copies itself during cell division. Viruses are often RNA-based, and usually insert themselves into a cell, where they tell the cell to make more RNA-based viruses.
But RNA retroviruses run the cell's usual DNA-RNA mechanism backwards – they write DNA versions of themselves into the cell nucleus ("reverse transcription"), and from that point onwards, the cell's own nucleus generates new viral RNA.
If a retrovirus infects a mammalian egg cell or a sperm-producing cell, and those cells produce viable offspring, then those offspring inherit the virus as part of their genome - it's been written into the DNA of every one of their cells.
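
If it helps to see the direction of information flow spelled out, here's a toy sketch in Python - just complementary base-pairing, with all the real biochemical machinery (polymerases, integration into the host genome, and so on) waved away. The sequences are made-up examples, not real genes.

    # Toy model of the two directions of "transcription" described above.
    # Normal flow:    DNA template strand  ->  messenger RNA
    # Retroviral flow ("reverse transcription"):  viral RNA  ->  complementary DNA
    # (Hypothetical sequences; real retroviral integration involves much more machinery.)

    DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}   # template base -> RNA base
    RNA_TO_DNA = {"A": "T", "U": "A", "C": "G", "G": "C"}   # RNA base -> DNA base

    def transcribe(dna_template: str) -> str:
        """Normal cell machinery: read a DNA template strand, produce RNA."""
        return "".join(DNA_TO_RNA[base] for base in dna_template)

    def reverse_transcribe(viral_rna: str) -> str:
        """Retrovirus trick: read RNA, produce DNA that can end up written into the host genome."""
        return "".join(RNA_TO_DNA[base] for base in viral_rna)

    host_template = "TACGGGCAT"              # hypothetical DNA template strand
    print(transcribe(host_template))         # -> AUGCCCGUA
    viral_rna = "AUGGCAUAA"                  # hypothetical retroviral RNA
    print(reverse_transcribe(viral_rna))     # -> TACCGTATT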

Sometimes the inherited virus isn't active, or is corrupted so that it does nothing, or ends up mutating again to do something that's actually useful to the host. If it's active, the individuals who have it will presumably have gene-repression systems and a primed immune system that can deal with it, otherwise they'd not survive long enough to be born. So pigs can carry a payload of porcine viruses in their DNA, and still be perfectly healthy. And they do – it turns out that as farm animals, pigs have been so intensively interbred that it now doesn't seem possible to find a pig that doesn't have a library of piggy viruses already written into its DNA. To encourage those viruses to learn how to infect human cells, all you'd have to do is transplant some living virus-bearing pig tissue into a human, and give that human immunosuppressant drugs to damp down their immune system long enough to give the fledgeling viruses a chance to get in a good few generations of useful mutation, and – bingo! – you've got yourself a new "alien" human-compatible virus that most human immune systems won't yet recognise.

The xenotransplantation research community were always playing with fire. Getting funding for research that might eventually save thousands or tens of thousands of people's lives (including sick kiddies) is good ... but getting funding for a large-scale xenotransplantation programme that might end up being implicated years later in the deaths of tens of millions would be ... not quite so good. So the ethics watchdogs within the community said that it was important that society as a whole understood the risks and decided consensually to go for xenotransplantation, but when it came to lobbying for funds, the TV news would tend to show pictures of dying children with tubes stuck in them, and impassioned researchers saying that this was necessary to stop people dying ... and forget to mention the risk of a potential associated death toll on the scale of that of World War 2.

So the current swine flu outbreak has probably saved the xenotransplantation community from having to wake up in ten years' time and find that their work had been responsible for killing a hell of a lot of people. Their funding bodies probably now know rather more about pig viruses, and will tend to ask the right questions when someone suggests stitching pig tissue into human recipients. Such as: "But isn't that an insanely irresponsible thing to do?". Since the 2009 outbreak, researchers can no longer pooh-pooh safety concerns by pointing out that nobody on the board has heard of anyone who's actually been hurt by swine flu. Conventional live pig-organ xenotransplantation is probably (hopefully) now a dead field.

Good work can still be done. There are some people now looking at taking pig hearts and dissolving away all the cells to leave a collagen scaffold on which human stem cells can be grown, to create a working human-tissue heart. That sounds like a much more sensible idea.

There's just one last question we need to answer. The sites where US researchers were keeping their pigs tended to be secret, to avoid protester sabotage and industrial espionage, and to try to make sure that the pigs were kept free from external contamination of pig or human pathogens. It'd be useful to have a full list of all such sites, to see if any of them had been set up conveniently across the border in Mexico. If there's genuinely not a link between xenotransplantation research and the current swine flu outbreak, then the xenotransplantation community can consider themselves lucky – they dodged a bullet.

Sunday, 12 July 2009

Remembering Emil Rupp

In the "impossible diamond" post, when I was talking about the impression given by C20th physicists had that fraud didn't happen in their profession, I forgot about Emil Rupp. Then again, almost everyone tends to forget about Emil Rupp.

Emil Rupp (1898-1979) studied under Nobel-prizewinning experimenter Philipp Lenard, and was considered by some to be one of the most exciting experimenters of his time. He did a series of experiments related to effects like electron diffraction that caught the imaginations of a number of key theoretical physicists, and his work was sometimes credited with being one of the most important influences on the development of quantum mechanics.

Rupp's work was central to some key questions in quantum mechanics. What is reality? Is light really a wave or a particle? Is it emitted continuously or instantaneously? Can a state that is said not to exist still influence the outcome of an experiment?

Ironically, it then turned out that Rupp's own experiments, which had been so influential, didn't seem to have existed either. The fraud supposedly came to light when some of his colleagues visited the lab where Rupp was working and confronted him – he'd been describing experiments with 500kV electrons, but wasn't in possession of an accelerator that went up to 500kV. He'd been making up his experimental results.

Why did Rupp do it? Well, like Bernie Madoff, for a while he was getting away with it, and was having a very, very good time. He was identifying problems that the physics community wanted solving, and solving them (albeit with fake experimental writeups). He was an enabler, and people (other than the fellow experimenters that he kept leapfrogging) liked him for it. Great names in theoretical physics would seek him out and cite him. Einstein spent quality time corresponding with Rupp in 1926, working through issues with wave-particle duality, and trying to work out what should happen in certain experiments ... and trying to come up with explanations for how it was that some of Rupp's experiments had come out so well, given some of the difficulties that he should have come up against. The collaboration was reasonably well-known, and people started referring to the "Einstein-Rupp experiments".

When the game was up, Rupp found that he'd now given the physics community a new headache. He'd shown that peer review didn't work as an efficient way of identifying "friendly" fraud within the system. If you had the right background, and you worked out which results people wanted and published those results, your paper tended to pass peer review unless the referees were so convinced that you couldn't possibly have gotten those results that they called you on it. And if an experiment produced the expected result, it was difficult for a referee to insist that it had been too successful. Results that don't agree with current thinking can be summarily rejected by peer review on the grounds that getting a "wrong" answer amounts to apparent evidence of error, but rejecting results that give the "right" answer is more awkward.
The lesson seemed to be that if you wanted a career as a scientific fraudster, the way to succeed was to agree with whichever theories were currently in vogue. So the physics community was now facing a potential upheaval – how would they assess how many other key papers by respected researchers might have been unreliable, or even outright fakes?

Rupp solved that problem for them with another piece of documentation. He sent a retraction of his five key papers, along with a letter from his doctor stating that Rupp had been in a "dreamlike" mental state when he'd written them.
It was a tidy conclusion – Rupp exited physics without there having to be a nasty inquiry, the community got to draw a line under the affair, quickly, and thanks to Rupp's explanation, they got to write off the matter not as an extended period of fraud lasting nine or ten years, but as the unfortunate actions of a guy who was having some mental health issues. That let the community off the hook – if Rupp hadn't been completely sane at the time, then we could still tell ourselves that physics was a special "fraud-free" field of science, and that no sane physicist would ever commit fraud. So everything was okay again.

Was Rupp's doctor's letter genuine? We didn't really care. We had the result that we wanted.



Monday, 6 July 2009

Projective Cosmology, and the topological failure of Einstein's General Theory

[Figure: "farside black hole" projection, topological cosmology – 'Relativity in Curved Spacetime', figure 12.4]
The graphic above is from my old, defunct, 1990s website, and I also borrowed it for chapter 12 of the book.

It shows a rather fun observerspace projection: if we assume that the universe is (hyper-) spherical, but we colour it in as it's seen to be rather than how we deduce it to be, expansion and Hubble shift result in a description in which things are more redshifted towards the universe's farside. Free-falling objects recede from us faster towards the apparent farside-point, as if they were falling towards some hugely massive object at the opposite end of the universe, and as if there was a corresponding gravitational field centred on the farside. At a certain distance between us and where this (apparent) gravitational field would be expected to go singular, there's a horizon (the cosmological horizon) censoring the extrapolated Big Bang singularity from view, and that looks gravitational, too.

And, funnily enough, this "warped" worldview turns out to be defensible (as an observer-specific description) using the available optical evidence. Since we reckon that the universe is expanding, and we're seeing older epochs of the universe's history as we look further away, we're seeing those distant objects as they were in the distant past, when the universe was smaller and denser and the background gravitational field-density was greater than it is now.
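
To put rough numbers on "smaller and denser": in the standard picture (these are the conventional textbook relations, not anything specific to the projection being argued for here), an object seen at redshift z is being seen at an epoch when distances were smaller by a factor of 1+z, and the matter density was correspondingly higher:

\[
\frac{a(t_{\mathrm{emit}})}{a(t_{\mathrm{obs}})} = \frac{1}{1+z},
\qquad
\rho_{\mathrm{matter}} \propto a^{-3}
\;\;\Rightarrow\;\;
\rho_{\mathrm{matter}}(z) = \rho_{0}\,(1+z)^{3}
\]

So something seen at z = 1 is being seen at a time when the matter density of the universe was around eight times its present value.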

Our perspective view is showing us an angled slice through space and time that really does include a gravitational gradient – between "there-and-then" and "here-and-now". The apparent gravitational differential is physically real within our observerspace projection, and viewed end-on, the projection describes a globular universe with a great big black hole at the opposite end to wherever the observer happens to be.

This projection is fascinating: it means that we end up describing cosmological-curvature effects with gravitational-curvature language, and it cuts down on the number of separate things that our universe model has to contain. If we take this topological projection seriously, some physics descriptions need to be unified. If we can agree on a single definition of relative velocity, the projection means that cosmological shifts (as a function of cosmological recession velocity) have to follow the same law as gravitational shifts (as a function of gravitational terminal velocity) ... and then, since gravitational shifts can be calculated from their associated terminal velocities as conventional motion shifts, we have three different effects (cosmological, gravitational and velocity shifts) all demanding to be topologically transformed into one another, and all needing to obey the same laws.


This all sounds great, and at this point someone who hasn't done advanced gravitational physics will probably be anticipating the punchline – that when we work out what this unified set of laws would have to be, we find that they're the set given by Einstein's special and general theories, QED.

Except that they aren't. We don't believe that cosmological shifts obey the relationship between recession velocity and redshift supplied by special relativity.
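
For reference, here are the three shift laws as they're usually quoted in the textbooks (standard conventional forms only - this isn't the unified law that the projection argument is asking for):

\[
1 + z_{\mathrm{SR}} = \sqrt{\frac{1 + v/c}{1 - v/c}}
\qquad\text{(special relativity, recession velocity } v\text{)}
\]
\[
1 + z_{\mathrm{grav}} = \left(1 - \frac{2GM}{rc^{2}}\right)^{-1/2} = \left(1 - \frac{v_{\mathrm{esc}}^{2}}{c^{2}}\right)^{-1/2}
\qquad\text{(light climbing out from radius } r\text{, with } v_{\mathrm{esc}} = \sqrt{2GM/r}\text{)}
\]
\[
1 + z_{\mathrm{cosm}} = \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})}
\qquad\text{(cosmological expansion)}
\]

Written as functions of their respective "velocities", the first two are already different functions, and the third isn't a fixed function of recession velocity at all until you plug in a particular expansion history - which is the mismatch being referred to here.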

We dealt with this by ignoring the offending geometry. Since cosmological horizons had to be leaky, and GR1915 told us (wrongly) that gravitational horizons had to give off zero radiation, we figured that these had to be two physically-irreconcilable cases, and that any approach that unified the two descriptions was therefore misguided. Since a topological re-projection couldn't be "wrong", it had to be "inappropriate". Instead of listening to the geometry and going for unification, we stuck with the current implementation of general relativity, and suspended the usual rules of topology to force a fit.

But then Stephen Hawking used quantum mechanics to argue that gravitational horizons should emit indirect radiation after all, as the projection predicts. So we'd broken geometrical laws (in a geometrical theory!) to protect an unverified physical outcome that turned out to be wrong. Where we should have been able to predict Hawking radiation across a gravitational horizon from simple topological arguments in maybe the 1930s, by using the closed-universe model and topology, we instead stuck with existing theory and had to wait until the 1970s for QM to tap us on the shoulder and point out that statistical mechanics said that we'd screwed up somewhere.

If we look at this projection, and consider the consequences, it suggests that the structure of current general relativity theory, when applied to a closed universe, doesn't give a geometrically consistent theory ... or at least, that the current theory is only "consistent" if we use the condition of internal consistency to demand that any logical or geometrical arguments that would otherwise crash the theory be suspended (making the concept almost worthless).
It basically tells us that current classical theory is a screw-up. And that's why you probably won't see this projection given in a C20th textbook on general relativity.