Saturday 30 May 2009

Materials that Learn


Suppose that we have long electrically-conductive particles (such as metal filings or buckytubes) suspended in an insulating liquid resin. If we then try to force the liquid to conduct electricity in a particular direction, the particles will tend to self-organise to make that outcome achievable more efficiently.

A physicist will say that what actually happens is that when we apply a high voltage across the material in an attempt to force it to conduct, the particles become charge-polarised, and line up "lengthwise" in the electric field ... then the oppositely-charged "heads" and "tails" of adjacent particles tend to link up, and pretty soon you have lines of conducting threads running through the material linking the two electrical contact points. If your insulating resin's electrical resistance breaks down over small distances (above a given threshold voltage between a pair of particles), and if the sides of your particles can be persuaded to repel each other, to prevent the formation of additional conductive paths at right angles to your applied voltage, then, if you allow the resin to set, you should have a new type of material whose electrical conductivity depends on direction.
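
(For anyone who wants to play with the idea numerically, here's a minimal Python sketch of just the first stage – polarisable rod-like particles relaxing to line up "lengthwise" with an applied field. The torque law and all of the constants are illustrative assumptions, and the head-to-tail chain-linking stage isn't modelled at all.)

    # Minimal sketch: rod-like polarisable particles relaxing to line up with an
    # applied field. Overdamped rotation with torque ~ sin(2*(theta - field angle)),
    # the usual form for an induced dipole in a uniform field. All constants are
    # illustrative assumptions; head-to-tail chain formation is not modelled.
    import numpy as np

    rng = np.random.default_rng(0)

    n_particles = 1000
    field_angle = 0.0             # direction of the applied field (radians)
    coupling    = 5.0             # torque strength / rotational drag (assumed)
    dt, steps   = 0.01, 500

    # Start with the rods pointing in random directions.
    theta = rng.uniform(-np.pi, np.pi, n_particles)

    def alignment(angles):
        """Order parameter: mean of cos(2*(theta - field)); 0 = random, 1 = fully lined up."""
        return np.cos(2.0 * (angles - field_angle)).mean()

    print(f"initial alignment: {alignment(theta):+.3f}")

    for _ in range(steps):
        # Overdamped rotation: the induced-dipole torque pulls each rod "lengthwise".
        theta -= dt * coupling * np.sin(2.0 * (theta - field_angle))

    print(f"final alignment:   {alignment(theta):+.3f}")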

In itself, this doesn't sound particularly interesting: after all, we can already produce a solid directionally-conducting block by mechanically glueing or fusing a stack of insulated wires together and then machining the block to the desired shape. The advantage of using self-organising materials is that we can use them to build conduction patterns into films or coatings, or to build more exotic structures into solid blocks. You might want to tailor the electrical response of the paint on an aircraft or satellite to produce certain effects when it's hit by an incoming EM wave (say, to deflect radar or focus an incoming signal), or you might want to produce solid waveguides or field guides for electrical engineering, without laboriously building them from layers of laminated conductors, or winding them as coils.

The idea of self-organising materials isn't new. We use the idea dynamically with liquid crystal displays, and we've recently spent a lot of R&D money coming up with a "freezable" counterpart to LCDs, "electronic paper" (as used in the Amazon Kindle). But the idea of being able to "print" field structures into or onto materials, in a way that automatically self-corrects for any structural defects or variations in the material, is rather interesting. You could use superconducting grains to build exotic superconducting structures in two or three dimensions, or you could use a resin that's conductive when liquid, and freeze it from one end while the applied field is varied, to grow field structures that would be impossible to achieve by other means. You could even try coupling the process with 3D printer technology, independently setting the conduction-alignment of each point within a structure, to produce extremely ornate conductor structures. Then we have the interesting idea of field holography: if we create a complex external field around a device, and "freeze" the critical regions into superconducting blocks, then when those blocks are milled and reassembled, will Nature tend to recreate the original field by "joining the dots" between the separated blocks? What if we have a containment field with complex external field junctions that tend to destabilise under load – if we could freeze that field junction topology into a set of surrounding superconducting blocks, would they tend to stabilise the field?

We might be able to use expensive hardware to set up, say, a toroidal containment field, place a container of "smart resin" in the field, and "freeze" the external EM image of the device into the external block. If this was a useful component to have, we'd have a method of mass-producing them for use as field guides or field stabilising devices.
With a number of interconnected and energised surrounding blocks, and the original device removed and replaced with a container of "smart resin", you might also be able to use the process in reverse, to recreate a rough electromagnetic approximation of the internal structure of the original device (crudely analogous to the old stereotype process originally used by printers to preserve and recreate the shape of blocks of moveable type).

Admittedly most of the potential applications for this sort of process don't exist yet. We're not mass-producing cage-confinement fusion reactors, and the LHC's magnets don't need miniaturisation. Fusion-powered vehicles are still some way off. But it's nice to know that there are still some fabrication tricks that we might be able to use that don't require laborious hand-tooling and impossible levels of molecular-level precision.

Saturday 23 May 2009

Jitter

Jitter is a fascinating concept, with applications in digital imagery and quantum mechanics. The word is a corruption of the Scotticism "chitter", which is an onomatopoeic rendering of the noise that your teeth make when you shiver (another offshoot is "chatter"). So jittering is a jerky jumping between positions that surround a central averaged point, and "having the jitters" means being nervously jumpy, or having the shakes for some other reason (e.g. drug or alcohol withdrawal – see also the origins of the word "jitterbug"). In digital measuring systems, it's the tendency for background noise to make measurements jump about between adjacent states when the real signal value is close to a quantisation threshold.

At first sight, jitter looks like an engineering annoyance. If you feed a slowly-changing analogue signal into a digitiser you might expect the correct result to be a "simple" stepped waveform, but if the signal is noisy, and the signal level happens to be near a digital crossing-point, then that noise can make the output "jitter" back and forth between the two nearest states. In this way, a small amount of noise well below the quantisation step size gets amplified into 1-bit noise on the digital data stream.

So early audio engineers would try to filter this sort of noise out of the signal before quantisation. However, they later realised that the effect was useful, and that the jittering actually carried valuable additional information. If you had a digitiser that could only output a stream of eight-bit numbers, and you needed that stream to run at a certain rate, you could run the hardware at a multiple of the required rate, and deliberately inject low-level, high-frequency noise into the signal, causing the lowest bit to dance around at the higher clockrate. If the original signal level lay exactly between two digital levels, the random jitter would tend to make the output jump between those two levels with a ratio of about 50:50. If the signal voltage was slightly higher, then the added noise would tend to make the sampling process flip to the "higher" state more often than the "lower" state. If the original input signal was lower than the 50:50 mark, the noise wouldn't reach the higher threshold quite so often, and the "jittered" datastream would have more low bits than high bits. So the ratio between "high" and "low" bit-noise told us approximately where the original signal level lay, with sub-bit accuracy.

This generated the apparently paradoxical result that we could make more accurate measurements by adding random noise to the signal that we wanted to measure! Although each individual sample would tend to be less reliable than it would have been if the noise source wasn't there, when a group of adjacent samples were averaged together, they'd conspire to recreate a statistical approximation of the original signal voltage, at a higher resolution than the physical bit-resolution of the sampling device. All you had to do was to run the sampling process at a higher rate than you actually wanted, then smooth the data to create a datastream at the right frequency, and the averaging process would give you extra digits of resolution after the "point".
So if you sampled a "jittery" DC signal, and measured "9, 10, 9, 10, 10, 9, 10, 10", then your averaged value for the eight samples would be 9.625, and you'd evaluate the original signal to have had a value of somewhere just over nine-and-a-half.
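
(Here's a minimal numerical sketch of that averaging trick, in Python, using made-up numbers rather than real converter data: quantise a level that sits between two integer codes, with and without added noise, and compare what the block average recovers.)

    # Minimal sketch of dither + averaging: recovering a sub-LSB signal level
    # by adding noise before quantisation and averaging the quantised output.
    # The signal level and noise amplitude are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    true_level = 9.6        # the "real" analogue level, sitting between two integer codes
    n_samples  = 8192       # length of one oversampled block

    # Without dither, every sample quantises to the same nearest code.
    undithered = np.round(np.full(n_samples, true_level))

    # With dither, add noise of about one quantisation step peak-to-peak before
    # quantising. Each individual sample becomes *less* reliable ...
    dither   = rng.uniform(-0.5, 0.5, n_samples)
    dithered = np.round(true_level + dither)

    # ... but the block average recovers the level with sub-bit resolution.
    print(f"true level         : {true_level}")
    print(f"undithered average : {undithered.mean():.4f}")   # stuck at 10.0
    print(f"dithered average   : {dithered.mean():.4f}")     # close to 9.6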

Jitter allowed us to squeeze more data through a given quantised information gateway by using spare bandwidth, and passing the additional information as statistical trends carried on the back of an overlaid noise signal. It was transferring the additional resolution information through the gateway by shunting it out of the "resolution" domain and into a statistical domain. You didn't have to use random noise to "tickle" the sampling hardware – with more sophisticated electronics you could use a high-frequency rampwave signal to make the process a little more orderly – but noise worked, too.
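
(And a companion sketch of the more "orderly" version, again in Python and with assumed numbers: sweep a repeating ramp through one quantisation step, then average over each ramp period.)

    # Sketch of "orderly" dither: sweep a repeating ramp through one quantisation
    # step instead of injecting random noise, then average each ramp period.
    import numpy as np

    true_level = 9.6
    period     = 16                                  # ramp samples per output value

    # One quantisation step's worth of offsets, evenly spaced across (-0.5, +0.5).
    ramp = -0.5 + (np.arange(period) + 0.5) / period

    quantised = np.round(true_level + ramp)          # the coarse integer samples
    print(f"ramp-dithered average: {quantised.mean():.4f}")   # ~9.6, to within 1/(2*period)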

So jitter lets us make measurements that at first sight appear to break the laws of physics. No laws are really being broken (because we aren't exceeding the total information bandwidth of the gateway), but there are some useful similarities here with parts of quantum mechanics – we're dealing with a counterintuitive effect, where apparently random and unpredictable individual events and fluctuations in our measurements somehow manage to combine to recreate a more classical-looking signal at larger scales. Even with a theoretically-random noise source with a polite statistical distribution tickling the detector thresholds, the resulting noise in the digitised signal still manages to carry statistical correlations that encode real and useful information about what's happening below the quantisation threshold.

Once you know a little bit about digital audio processing tricks, some of the supposedly "spooky" aspects of quantum mechanics start to look a little more familiar.

Saturday 16 May 2009

General Relativity and Nonlinearity


One of the difficulties set up by the structure of Einstein's general theory of relativity is the tension between GR's requirement that there be no prior geometry, and the assumption that the geometry must necessarily reduce to the flat fixed geometry of special relativity's Minkowski metric as a limiting case over small regions.

Although it's not news that GR shouldn't presume a prior geometry (GR's fields are not superimposed on a background metric; they define the metric), this is one of those irritating principles that's easier to agree with in principle than it is to actually implement.
It's only human nature when attacking a problem to want to start off with some sort of fixed point or known property that everything else can be defined in relation to. It's like starting a jigsaw by identifying the four corner pieces first. We tend to start off by imagining the shape of the environment and then imagining placing a test object within it ... but the act of placing an observer itself modifies the shape and characteristics of the metric, and means that the signals that the observer intercepts might have different characteristics to those that we might otherwise expect to have passed through the particle's track, if the particle hadn't actually been there, or if it had been moving differently. Although the basic concept of a perfect "test particle" isn't especially valid under relativity theory, we like to assume that the shape of spacetime is largely fixed by large background masses, and that the tiny contribution of our observer-particle won't change things all that much (we like to assume that our solutions are insensitive to small linear "perturbations" of the background field).

Unfortunately, this assumption isn't always valid. Even though the distortion caused by adding (say) a single atom with a particular state of motion to a solar system may well be vanishingly small, and limited to a vanishingly-tiny region of spacetime compared to the larger region being looked at, every observation that the atom and solar system make of each other will be based on the properties of exchanged signals that all have to pass through that teensy-weensy distorted region. So if we build a theory on mutual observation and the principle of relativity, even a particle-distortion or gravitomagnetic distortion that's only significant over a vanishingly small speck of spacetime surrounding the atom still has the potential to dramatically change what the atom sees, and how outsiders see the atom. It changes the properties of how they interact, and by doing that, it also changes the characteristics of the physics. Although a star isn't going to care much whether an individual distant atom makes a tiny distortion in spacetime or not, our decision as to whether to model that distortion or not can change the functional characteristics of our theory, and change the way that we end up modelling the star, and some of the predictions that we make for it. It also has the potential to wreck the validity of the frame-based approach that people often use with general relativity – if we take nonlinearity seriously, we should probably be talking about the relativity of object views, rather than the relativity of "frames".

Field components aren't always guaranteed to combine linearly; they can twist and impact and writhe around each other in fascinating ways, and generate new classes of effect that didn't exist in any of the individual components. For instance, if we take a bowling ball and a trampoline, and place the ball on the trampoline, their combined height is less than the sum of the two individual heights, and the trampoline geometry has some new properties that aren't compatible with its original Euclidean surface. The surface distorts and the rules change. [Ball + Trampoline] <> [Ball] + [Trampoline].
Or, place a single bowling ball on an infinite trampoline surface and it settles down and then stays put. But place two bowling balls on the surface, reasonably near to each other, and the elastic surface will push them towards each other in an attempt to minimise its stresses and surface area, producing relative motion. A one-ball model is static, a two-ball model is dynamic, so the rules just changed again.
The result of assuming a background field and simply overlaying particles isn't guaranteed to be the same as a more realistic model in which the particles are intrinsically part of the background field. Nonlinear behaviour generates effects that often can't be generated by simple overlay superimpositions.
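
(Here's a toy numerical illustration of that last point, in Python. The one-dimensional field equation and all of its constants are invented purely to show the effect – this isn't a model of gravity. We solve the equation for two sources separately and together, and compare the combined solution with the sum of the individual solutions: with the nonlinear term switched off the two agree, with it switched on they don't.)

    # Toy illustration of superposition failing for a nonlinear field equation.
    # We solve u'' = f(x) + alpha*u^2 on [0,1] with u(0)=u(1)=0 by fixed-point
    # iteration on a finite-difference grid. With alpha = 0 (linear) the solution
    # for f1+f2 equals the sum of the separate solutions; with alpha > 0 it doesn't.
    # The equation and all constants are invented purely for illustration.
    import numpy as np

    n = 201
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    # Second-difference operator acting on the interior points (u = 0 at both ends).
    D2 = (np.diag(np.full(n - 3, 1.0), -1)
          - 2.0 * np.eye(n - 2)
          + np.diag(np.full(n - 3, 1.0), 1)) / h**2

    def solve(f, alpha, iterations=200):
        """Solve u'' = f + alpha*u^2 with u(0) = u(1) = 0, by fixed-point iteration."""
        u = np.zeros(n)
        for _ in range(iterations):
            rhs = f[1:-1] + alpha * u[1:-1] ** 2
            u[1:-1] = np.linalg.solve(D2, rhs)
        return u

    # Two localised "loads" (narrow Gaussian bumps).
    f1 = -20.0 * np.exp(-((x - 0.3) / 0.05) ** 2)
    f2 = -20.0 * np.exp(-((x - 0.7) / 0.05) ** 2)

    for alpha in (0.0, 0.3):
        combined = solve(f1 + f2, alpha)
        summed   = solve(f1, alpha) + solve(f2, alpha)
        gap      = np.max(np.abs(combined - summed))
        print(f"alpha = {alpha}:  max |combined - (sum of parts)| = {gap:.6f}")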

Einstein's special theory of relativity rejects the idea of any such interaction between a particle and its surrounding spacetime, so this class of nonlinear effect is incompatible at the particle level with our current general theory of relativity (which is engineered to reduce to SR). While we understand that perhaps a fully integrated model of physics can't be broken up into self-consistent self-contained pieces that can be modelled individually and then assembled into a whole, we try it anyway, because it's easier to tackle smaller bite-size theories than to try to create the full Theory of Everything from scratch. And when we work on these isolated theories, and try to make them internally consistent without taking into account external factors, we end up with a series of theoretical building blocks built on different principles that don't fit together properly.

For Einstein's general theory of relativity, we say that the theory must reduce to the flat-spacetime physics of special relativity over small regions, which makes the theory pretty much incompatible with attempts to model particle-particle interactions as curvature effects. But if what we understand as "physics" is the result of particle-observers communicating through an intermediate medium, and the geometrical imprint of those particles on the metric is an intrinsic part of how they interact – if physics is about nonlinear interactions between geometrical features – then by committing to special relativity as a full subset of GR, we might have guaranteed that our general theory can never describe the problem correctly, because any solution with a chance of being right will be ruled out for being in conflict with special relativity. Since deep nonlinearity (which GR1915 doesn't have) seems to be the key to reproducing QM behaviour in a classically-based model, it's not surprising that serious attempts to try to find a way to combine GR and QM have tended to run into the nonlinearity issue:

Albert Einstein, 1954:
at the present time the opinion prevails that a field theory must first, by "quantization", be transformed into a statistical theory of field probabilities ... I see in this method only an attempt to describe relationships of an essentially nonlinear character by linear methods.
Roger Penrose, 1976, quoted by Ashtekar:
... if we remove life from Einstein's beautiful theory by steam-rollering it first to flatness and linearity, then we shall learn nothing from attempting to wave the magic wand of quantum theory over the resulting corpse.
Some GR researchers did try to move general relativity beyond a reliance on a fixed initial geometry and dimensionality (see John Wheeler's work on pregeometry), but the QM guys were better at analysing where their "perturbative" and "nonperturbative" approaches differed than the GR guys were at identifying the artefacts that special relativity might have introduced into their model.

In order to work out what parts of current GR might be artefacts of our approach, it's helpful to look at non-SR solutions to the general principle of relativity, and compare the results with those of the usual SR-based version.
The two approaches give two different sorts of metric. If we embrace nonlinearity, we get a relativistic acoustic metric and a general theory that supports Hawking radiation, classically. The second approach (where we start by assuming that a particle's own distortion is negligible and doesn't play a role in what the particle sees) gives us standard classical theory, Minkowski spacetime, the current version of general relativity, and a deep incompatibility with Hawking radiation and quantum mechanics.

So I'd suggest that perhaps we shouldn't be trying to reconcile "current GR" with quantum theory ... we should instead be trying to replace our current crippled version of general relativity with something more serious that doesn't rely on that additional SR layer. There seem to have been two different routes available to us to construct a general theory of relativity, and it's possible that we might have chosen the wrong one.

Saturday 9 May 2009

The Principle of Relativity

[mediaeval illustration: spherical Earth, with walkers simultaneously in front of and behind each other]

The principle of relativity is pretty straightforward: it's essentially that "nothing is nailed down". The locations and properties of our universe's contents are defined by their relationships to other things in the same universe: there is no absolute sheet of "universal graph-paper" that's overlaid on the universe from outside that defines where everything "really" is, and which dictates the laws of physics in some occult manner.

If we think about the problem logically, we find that there's another aspect to the idea: if there were such a sheet of universal graph-paper, and that sheet did force physics to operate in such a way that we could identify an object's absolute motion relative to it, then that hypothetical sheet of graph-paper would (in a sense) have to exist within our universe, and the motion of bodies could once again be described using the principle of relativity, by treating our absolute frame as another (rather special) physical "thing". But it's perhaps slightly perverse to decide that the universe contains exactly one of these special things, with nothing else like it, so Occam's Razor pretty much demands that we reject the idea of a single absolute reference frame, unless there's compelling supporting evidence for it.

A more serious problem with the idea of an absolute, inviolable aetheric medium is that such a thing would appear to break some basic principles concerning cause and effect. Normally we assume that when a thing acts, it knows that it's acted ... that is, that there is a back-reaction for every reaction. We assume that if Object A exerts power over Object B, that A's ability to influence is somehow reduced, or at least altered in some way. There is no “something from nothing”, no expenditure of influence without a corresponding lessening of the bank account, and no free lunch. If A's ability to affect B was absolute and without consequence for A, then we could say that A's stock of influence appeared to be infinitely large. And if we are talking about an identifiable physical and quantifiable influence, it leads to some nasty mathematical results if we say that anything has an infinite quantity of a real physical thing. A further problem with these infinities is that they break accounting rules and the chain of causality. When asked where this influence comes from, we can't reverse the sequence of events and extrapolate any further back than the dictatorial rulings of our infinitely-strong metric, which then acts as a limit for any further logical analysis. It becomes a prior cause, a thing that can't be politely incorporated into a larger, fluid, mutually self-contained logical structure, but has its own separate anchor-point that doesn't relate to anything else inside the structure, and allows it to dictate terms to everything else without retribution.

This sort of “absolute aether” is a way of saying that things simply happen in a certain way because they do, with no further analysis possible, and from a theoretical-analytical point of view, it's a dead end.

It was partly Einstein's appreciation of this problem that led him to the conviction that spacetime itself had to be a stressable, flexible, malleable thing. The “medium” of Einstein's general theory was the background gravitational field (which also defined distances and times), but the assumed properties of this field were no longer absolute: they were affected by the properties of the physics that played out within it. Spacetime was an interactive, integrated part of physics. The “fabric of spacetime” deformed gravitomagnetically as objects passed through it, and spacetime itself was the medium by which masses communicated with and connected causally to other masses. There was an interplay between the properties of spacetime and the properties of matter and energy – as John Wheeler put it, “Matter tells space how to curve, space tells matter how to move”.

The more static, "fixed" spacetime of special relativity, Einstein later decided, was a somewhat distasteful creature. Certainly special relativity had done away with the idea of there being any absolute reference for location, and even for absolute independent values of distance and time, but the overall spacetime structure still had an “absolute” quality to it, in that the geometry of Minkowski spacetime was meant to control and define inertial physics, without its own properties being in any way affected (a slightly abstract version of "action without reaction"). Minkowski spacetime was still "absolute" in the geometrical sense.

To quote Einstein ("The Meaning of Relativity", Princeton University Press):

... from the standpoint of the special theory of relativity we must say, continuum spatii et temporis est absolutum. In this latter statement absolutum means not only "physically real", but also "independent in its physical properties, having a physical effect, but not itself influenced by physical conditions".
...

It is contrary to the mode of thinking in science to conceive of a thing (the space-time continuum) which acts itself, but which cannot be acted upon. This is the reason why E. Mach was led to make the attempt to eliminate space as an active cause in the system of mechanics. According to him, a material particle does not move in unaccelerated motion relatively to space, but relatively to the centre of all the other masses in the universe; in this way the series of causes of mechanical phenomena was closed, in contrast to the mechanics of Newton and Galileo. In order to develop this idea within the limits of the modern theory of action through a medium, the properties of the space-time continuum which determine inertia must be regarded as field properties of space, analogous to the electromagnetic field.
...
... the gravitational field influences and even determines the metrical laws of the space-time continuum.

Because the word "relativity" is often equated with the predictions of specific theoretical implementations of the principle, it comes with a certain amount of historical baggage that isn't always useful when one wants to discuss a problem more generally. Sometimes it's more convenient to start from scratch and use a different form of words when trying to explain a relativistic principle without getting bogged down in historical implementational specifics. John Wheeler used the term "democratic principle" to refer to the idea that there's no single overriding cause that determines the forces on a particle, and another way of describing it might be to refer to the principle of mutuality, in that everything in the universe might be expected to not only have a vote in influencing anything that happens (subject to signal-propagation times), but also to be influenced itself in return.

So really, the principle of relativity in its broadest sense is just about going back to classical first principles: there's no action without origin and/or consequences, causality is A Good Idea, and nothing happens for no reason. These are somewhat pragmatic assumptions if we want to analyse the pattern of rules that the universe obeys – the first step is to assume that there IS a pattern.

There are, of course, more specific definitions of what the principle of relativity "says", which are tailored to the contexts of specific theories (usually Einstein's special and general theories). But we aren't obliged to use those existing definitions, and if we want a chance of discovering broader and deeper theories, we probably shouldn't.

Friday 1 May 2009

All Physics as Curvature?

William Kingdon Clifford (1845-1879) was a Nineteenth-Century mathematician and geometer commemorated by modern mathematicians by having Clifford algebra named after him. He was also a fellow of the Royal Society and The Metaphysical Society, wrote a children's book, and made the occasional cutting remark about the inadvisability of trusting the opinions of groups of experts (unless one knew for a fact that at least one of the group had personal first-hand knowledge of the thing that they were talking about).

Amongst relativists, Clifford is remembered as having been one of the first people to come out unambiguously in favour of the idea that physics could (and should) be modelled as a problem involving curved space.

In 1870, Clifford addressed the Cambridge Philosophical Society ("On the Space-theory of Matter" *), declaring:
"...
I hold in fact,
  1. That small portions of space are in fact of a nature analogous to little hills on a surface which is on the average flat; namely, that the ordinary laws of geometry are not valid in them.
  2. That this property of being curved or distorted is continually being passed on from one portion of space to another after the manner of a wave.
  3. That this variation of the curvature of space is what really happens in that phenomenon which we call the motion of matter, whether ponderable or etherial.
  4. That in the physical world nothing else takes place but this variation subject (possibly) to the law of continuity.
... "
In other words, according to Clifford, matter was simply a persistent local curvature in space. While some other well-known theorists of the time (such as Oliver Lodge) were interested in the idea of describing matter as a sort of condensation of a presumed aetherial medium, and using ideas from fluid dynamics as a shorthand for the properties of space, Clifford considered the mathematical curvature-based descriptions as more than just a means of expressing the variation in field-effect properties associated with density-variations and distortions of an underlying medium: for Clifford, the physics was simply the geometrical curvature itself.

Clifford was one of a number of C19th mathematicians working on geometrical descriptions of physics considered as a curved-space problem, a loose association of broadly similar-minded researchers whose presentations were sometimes propagated in lectures rather than in published journal papers (and who were memorably referred to by James Clerk Maxwell as the "space-crumplers").

Clifford's view was influential, but his vision arguably wasn't quite implemented by Einstein's general theory of relativity – although GR1915 implemented curvature-based descriptions of gravitation, rotation and acceleration effects, it still fell back on an underlying flat-spacetime layer when it came to describing inertial mechanics (that layer being special relativity).

This seems to be fixable, but we're not there yet.

* "William Kingdon Clifford, Mathematical Papers", (1882) pp.21-22