
Friday, 22 January 2010

Einstein's Cosmological Constant

Back in 1916, Einstein was still working on the assumption that the universe should be neat and tidy, and since he was now using a more mathematical approach, this meant "infinite and unchanging".
If you were solving the equations of general relativity, and getting solutions in which the universe appeared to be unstable, then you could throw those away. Chaos was bad. Order was good. Stability was good. Static solutions were better than dynamic ones.

Since gravitational mass seems to be always positive, gravitational effects are cumulative, and over a large enough region the combined background curvature should be enough to curve space right back on itself. The combined attraction also ought to be trying to make the universe contract, so we've appreciated for a while that unless there was some other effect in play, the universe should either be expanding and slowing, or collapsing in on itself (see: Erasmus Darwin, 1791).

Einstein wanted his universe to be pretty much flat at very large scales, so he got rid of the effects caused by cumulative curvature by adding an additional squiggle to the equations: an invented long-range repulsive effect whose purpose was to counteract the cumulative long-range effects of gravitation, allowing a tidy, constant, unchanging, static universe. If the rest of the equation generated long-range curvature effects and evolution over time, the upper-case Greek letter Lambda (Λ) represented the compensating term needed to cancel them exactly.

Einstein referred to this as the Cosmological Constant.
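For reference, in modern notation the field equations with this extra term are usually written in the standard textbook form (quoted here only for orientation, not as part of the argument):

```latex
% Einstein field equations with the cosmological term
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}
```

Setting Λ to just the right single value was what allowed the static model to balance.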

Unfortunately, Einstein had made his model too tidy. A few years later, Edwin Hubble successfully measured a distance-dependent trend in the spectral shifts of light from a range of galaxies (Hubble shift), and we realised that the complicating large-scale effects that Einstein thought he'd eliminated with his Cosmological Constant seemed to be physically real. After taking some time to think the matter over, Einstein agreed that an expanding Friedmann-type solution (without Lambda) gave a cleaner and more natural implementation of General Relativity. He later described his early decision to invent the Constant to force large-scale flatness onto GR as "the biggest blunder of my life".

End of story.



However, the subject seemed to kick off again in the 1990s, when a lot of headlines started appearing in the popular science press (and in scientific papers) to do with the idea of dark energy, and the idea that the universe seemed to be expanding faster than GR1915 predicted – these articles usually declared that "Einstein's Cosmological Constant" was back, and had excited-sounding researchers competing to see who could give the best quote about Einstein having been "right all along".

This wasn't really true: Einstein's Cosmological Constant had been a mathematically-derived thing that only had one allowable value, and whose justification was to set the strengths of a range of effects in the model (large-scale curvature, distance-dependent redshifts, change in size over time) to zero. It had been there for purely logical reasons, in the context of a static universe, because a static universe seemed to need it. It existed to explain an assumed physical equilibrium that turned out not to exist, in a universe that wasn't ours. It was derived from bad assumptions, but at least it was derived.

The modern counterpart was almost the opposite. The antigravitational "dark energy" cosmological constant applied to an expanding universe that seemed to be expanding too fast for GR1915, and the effect initially had no fundamental logical, mathematical, geometrical or theoretical basis. It was, essentially, a parameter describing the extent to which the result of our GR predictions "missed" the actual data.
More recently, some researchers have tried to put the dark energy idea onto a more "theoretical" footing by arguing that perhaps the constant might not have a fixed arbitrary value, but might be a measure of the universe's expansion. That'd make the "modern" CC less fudgey, but it'd also mean that, as well as the thing not being Einstein's, it wouldn't be a constant, either.
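For orientation, the usual bookkeeping that ties the modern constant to an effective "dark energy" density and to the expansion rate H₀ is the standard set of relations below (textbook definitions, quoted only to show how the pieces connect):

```latex
% Effective (mass) density assigned to a cosmological constant,
% and its share of the critical density
\rho_{\Lambda} = \frac{\Lambda c^{2}}{8\pi G},
\qquad
\Omega_{\Lambda} = \frac{\rho_{\Lambda}}{\rho_{\mathrm{crit}}},
\qquad
\rho_{\mathrm{crit}} = \frac{3 H_{0}^{2}}{8\pi G}
```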

So why did we initially get all those news stories announcing things like: "Eighty years later, it turns out that Einstein may have been right ... So he was smarter than he gave himself credit for." [*] ?

Putting it brutally, it was about PR. Attaching Einstein's name gave a false sense of historical provenance and a false sense of respectability. It let researchers use Einstein's name as a shield to deflect awkward questions about the apparent arbitrariness of their new expansion effect, and it turned a fairly boring and slightly negative story about GR failing to agree with the evidence into a snappy human-interest story about the scientific process coming out right in the end, and Einstein being right, and GR being right.

The "Einstein's Cosmological Constant returns: Einstein was right after all!" stories generated a lot of news headlines, and let researchers give interviews to magazines and appear on the telly and improve their departments' media profiles. Suddenly there were a lot of editors and journalists wanting quotes on the cosmological constant, because they wanted to print the same reader-grabbing "Einsteiney" headline, but didn't want to put their name on the claim, as reporters, because it was dodgy. So they rang round the universities and found a bunch of cosmologists happy to give the right quote if it meant getting their name in a magazine or getting onto the telly.

The story was junk. It was researchers collectively gaming the news media, and manufacturing and repeating a story that they knew would work, in order to get more media exposure. And unfortunately, that's the sort of behaviour that makes the general public more inclined to distrust scientists.

Wednesday, 30 December 2009

Differential Expansion, Dark Matter and Energy, and Voids

[Images: 2dF Galaxy Redshift Survey; a raspberry; Pinwheel Galaxy (NASA)]
Normally with a field theory, you have some idea where to start. You start by defining the shape and other properties of your "landscape" space, and then you add your field to that context, and watch what it does when you play with it.
But in a general theory of relativity (which is forced by Mach's Principle to also be a relativistic theory of gravity), the gravitational field is space. The field doesn't sit inside a background metric, it is the background metric.
So with this sort of model, we've got no obvious starting point – no obvious starting geometry, and not even an obvious starting topology, unless we start cheating and putting in some critical parameters by hand, according to what we believe to be the correct values.

We make an exasperated noise and throw in a quick idealisation. We say that we're going to suppose that matter is pretty smoothly and evenly distributed through the universe (which sounds kinda reasonable), and then we use this assumption of a homogeneous distribution to argue that there must therefore be a fairly constant background field. That then gives us a convenient smooth, regular background shape that we can use as a backdrop, before we start adding features like individual stars and galaxies.
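For concreteness, the smooth, regular background that this homogeneity assumption (plus isotropy) buys us is the standard FLRW line element (textbook form; a(t) is the scale factor and k the curvature index):

```latex
% Friedmann-Lemaitre-Robertson-Walker metric for a homogeneous, isotropic universe
ds^{2} = -c^{2}\, dt^{2}
       + a(t)^{2} \left[ \frac{dr^{2}}{1 - k r^{2}}
       + r^{2} \left( d\theta^{2} + \sin^{2}\theta \, d\phi^{2} \right) \right]
```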

That background level gives us our assumed gravitational floor.

We know that this idea isn't really true, but it's convenient. Wheeler and others tried exploring different approaches that might allow us to do general relativity without these sorts of starting simplifications (e.g. the pregeometry idea), but while a "pregeometrical" approach let us play with deeper arguments that didn't rely on any particular assumed geometrical reduction, getting from first principles to new, rigorous predictions was difficult.
So while general relativity in theory has no prior geometry and is a completely free-standing system, in practice we tend to implicitly assume a default initial dimensionality and a default baseline background reference rate of timeflow, before we start populating our test regions with objects. We allow things to age more slowly than the baseline rate when they're in a more intense gravitational field, but we assume that things can't be persuaded to age more quickly than the assumed background rate (and that signals can't travel faster than the associated background speed of light) without introducing "naughty" hypothetical negative gravitational fields (ref: Positive Energy Theorem).
This is one of the reasons why we've made almost no progress in warpdrive theory over half a century – our theorems are based on the implicit assumption of a "flat floor", and this makes any meaningful attempt to look at the problem of metric engineering almost impossible.

Now to be fair, GR textbooks are often quite open about the fact that a homogeneous background is a bit of a kludge. It's a pragmatic step – if you're going to calculate, you usually need somewhere to start, and assuming a homogeneous background (without defining exactly what degree of clumpiness counts as "homogeneous") is a handy place to start.


But when we make an arbitrary assumption in mathematical physics, we're supposed to go back at some point and sanity-check how that decision might have affected the outcome. We're meant to check the dependencies between our initial simplifying assumptions and the effects that we predicted from our model, to see if there's any linkage.
So ... what happens if we throw away our "gravitational floor" comfort-blanket and allow the universe to be a wild and crazy place with no floor? What happens if we try to "do" GR without a safety net? It's a vertigo-inducing concept, and a few "crazy" things happen:

Result 1: Different regional expansion rates, and lobing
Without the assumption of a "floor", there's no single globally-fixed expansion rate for the universe. Different regions with different "perimeter" properties can expand at different rates. If one region starts out being fractionally less densely populated than another, its rate of entropic timeflow will be fractionally greater, the expansion rate of the region (which links to the rate of change of entropy) will be fractionally faster, and the tiny initial difference gets exaggerated. It's a positive-feedback inflation effect. The faster-expanding region gets more rarefied, its massenergy-density drops, the background web of light-signals increasingly deflects around the region rather than going through it, massenergy gets expelled from the region's perimeter, and even light loses energy while trying to enter, as it fights "uphill" against the gradient and gets redshifted by the accelerated local expansion. The accelerated expansion pushes thermodynamics further in the direction of exothermic rather than endothermic reactions, and time runs faster. Faster timeflow gives faster expansion, and faster expansion gives faster timeflow.
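To see how quickly that kind of coupling runs away, here's a purely schematic toy – not derived from general relativity, just two regions whose expansion rate is assumed to rise as their density falls, so that a 1% initial difference keeps growing:

```python
# Schematic positive-feedback toy: expansion rate is *assumed* to rise as a
# region's density falls, so an initially slightly emptier region pulls ahead.
# Illustrative only - the coupling law here is invented, not derived from GR.
def evolve(density, steps=12, k=0.1):
    history = [density]
    for _ in range(steps):
        rate = k / density                  # assumed: emptier => faster expansion
        density = density / (1.0 + rate)    # expansion dilutes the region further
        history.append(density)
    return history

dense  = evolve(1.00)   # reference region
sparse = evolve(0.99)   # starts just 1% emptier

for step in (0, 4, 8, 12):
    ratio = sparse[step] / dense[step]
    print(f"step {step:2d}: sparse/dense density ratio = {ratio:.3f}")
```

The ratio drifts steadily away from 1, and the drift accelerates – the "tiny initial difference gets exaggerated" behaviour described above.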

The process is like watching the weak spot on an over-inflated bicycle inner tube – once the trend has started, the initial near-equilibrium collapses, and the less-dense region balloons out to form a lobe. Once a lobe has matured into something sufficiently larger than its connection region, it starts to look to any remaining inhabitants like its own little hyperspherical universe. Any remaining stars caught in a lobe could appear to us to be significantly older than the nominal age of the universe as seen from "here and now", because more time has elapsed in the more rarefied lobed region. The age of the universe, measured in 4-coordinates as a distance between the 3D "now-surface" and the nominal location of the big bang (the radial cosmological time coordinate, referred to as "a" in MTW's "Gravitation", §17.9), is greater at their position than it is at ours.

With a "no-floor" implementation of general relativity, the universe's shape isn't a nice sphere with surface crinkles, like an orange – it's a multiply-lobed shape rather more like a raspberry, with most of the matter nestling in the deep creases between adjacent lobes (book, §17.11). If there was no floor, we'd expect galaxies to align in three dimensions as a network of sheets that form the boundary walls that lie between the faster-expanding voids.

And if we look at our painstakingly-plotted maps of galaxy distributions, that's pretty much what seems to be happening.

Result 2: Galactic rotation curves
If the average background field intensity drops away when we leave a galaxy, to less than the calculated "floor" level, then the region of space between galaxies is, in a sense, more "fluid". These regions end up with greater signal-transmission speeds and weaker connectivity than we'd expect by assuming a simple "floor". The inertial coupling between galaxies and their outside environments becomes weaker, and the influence of a galaxy's own matter on its other parts becomes proportionally stronger. It's difficult to get outside our own galaxy to do comparative tests, but we can watch what happens around the edges of other rotating galaxies where the transition should be starting to happen, and we can see what appears to be the effect in action.

In standard Newtonian physics (and "flat-floor" GR), this doesn't happen. A rotating galaxy obeys conventional orbital mechanics, and stars at the outer rim have to circle more slowly than those further in if they're not going to be thrown right out of the galaxy. So, if you have a rotating galaxy with persistent "arm" structures, the outer end of the arm needs to be rotating more slowly, which means that the arm's rim trails behind more and more over time. This "lagging behind" effect stretches local clumps into elongated arms, and then twists those arms into a spiral formation.
When we compare our photographs of spiral-arm galaxies with what the theory predicts, we find that ... they have the wrong spiral. The outer edges aren't wound up as much as "flat-floor" theory predicts, and the outer ends of the arms, although they're definitely lagged, seem to be circling faster than ought to be possible.
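To put rough numbers on the mismatch, here's a minimal sketch (assumed round figures, treating the galaxy's luminous mass as a single central point, which is only even roughly fair well outside the visible disc):

```python
# Minimal sketch: Keplerian orbital speed v = sqrt(G*M/r) for an assumed
# central point mass of 1e11 solar masses.  All figures are illustrative
# round numbers, not a fit to any particular galaxy.
from math import sqrt

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # one kiloparsec, m

M_galaxy = 1e11 * M_SUN  # assumed luminous mass, treated as a central point

for r_kpc in (2, 5, 10, 20, 30):
    r = r_kpc * KPC
    v_kms = sqrt(G * M_galaxy / r) / 1e3   # orbital speed in km/s
    print(f"r = {r_kpc:2d} kpc -> Keplerian v ~ {v_kms:3.0f} km/s")
```

With these made-up numbers the Keplerian speeds fall away roughly as 1/√r (from about 460 km/s at 2 kpc to about 120 km/s at 30 kpc), whereas measured rotation curves for real spirals tend to stay roughly flat out to large radii – that's the gap that dark matter (or losing the floor) is being asked to close.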

So something seemed to be wrong (or missing) with "flat-floor" theory. We could try to force the theory to agree with the galaxy photographs by tinkering with the inverse square law for gravity (which is a little difficult, but there have been suggestions based on variable dimensionality and string theory, or MOND), or we could fiddle with the equations of motion, or we could try to find some way to make gravity weaker outside a galaxy, or stronger inside.

The current "popular" approach is to assume that current GR and the "background floor" approach are both correct, and to conclude that there therefore has to be something else helping a galaxy's parts to cling together – by piling on extra local gravitation, we might be able to "splint" the arms to give them enough additional internal cohesiveness to stay together.

Trouble is, this approach would require so much extra gravity that we end up having to invent a whole new substance – dark matter – to go with it.
We have no idea what this invented "dark matter" might be, or why it might be there, or what useful theoretical function it might perform, other than making our current calculations come out right. It has no theoretical basis or purpose other than to force the current GR calculations to make a better fit to the photographs. Its only real properties are that its distribution shadows that of "normal" matter, it has gravity, and ... we can't see it or measure it independently.

So it'd seem that the whole point of the "dark matter" idea is just to recreate the same results that we'd have gotten anyway by "losing the floor".

Result 3: Enhanced overall expansion
Because the voids are now expanding faster than the intervening regions, the overall expansion rate of the universe is greater, and ... as seen from within the galactic regions ... the expansion seems faster than we could explain if we extrapolated a galaxy-dweller's sense of local floor out to the vast voids between galaxies. To someone inside a galaxy, applying the "homogeneous universe" idealisation too literally, this overall expansion can't be explained unless there's some additional long-range, negatively-gravitating field pushing everything apart.

So again, the current "popular" approach is to invent another new thing to explain the disagreement between our current "flat-floor" calculations and actual observations.
This one, we call "Dark Energy", and again, it seems to be another back-door way of recreating the results we'd get by losing the assumed gravitational background floor.

So here's the funny thing. We know that the assumption of a "homogeneous" universe is iffy. Matter is not evenly spread throughout the universe as a smooth mist of individual atoms. It's clumped into stars and planets, which are clumped into star systems, which are clumped into galaxies. Galaxies are ordered into larger void-surrounding structures. There's clumpiness and gappiness everywhere. It all looks a bit fractal.

It might seem obvious that, having done the "smooth universe" calculations, we'd then go back and factor in the missing effect of clumpiness, and arrive at the above three (checkable) modifying effects: (1) lobing (showing up as "void" regions in the distribution of galaxies), (2) increased cohesion for rotating galaxies, and (3) a greater overall expansion rate. It also seems natural that, having done that exercise and made those tentative conditional predictions, the GR community would have been in a happy mood when all three effects were discovered for real.

But we didn't get around to doing it. All three effects took us by surprise, and then we ended up scrabbling around for "bolt-on" solutions (dark matter and dark energy) to force the existing, potentially flawed approach to agree with the new observational evidence.

The good news is that the "dark matter"/"dark energy" issue is probably fixable by changing our approach to general relativity, without the sort of major bottom-up reengineering work needed to fix some of the other problems. At least with the "floor" issue, the "homogeneity" assumption is already recognised as a potential problem in GR, and not everyone's happy about our recent enthusiasm for inventing new features to fix short-term problems. We might already have the expertise and the willpower to solve this one, comparatively quickly.

Getting it fixed next year would be nice.

Monday, 26 October 2009

Cosmological Hawking Radiation, and the failure of Einstein's General Theory

[Image: The Earth's Horizon, E. Baird 2009]
Cosmological horizons are rather arbitrary. The cosmological limit to direct observation is at different places for different observers, and if you change position, your horizon position changes to match. In that respect, a cosmological horizon is a little bit like a planetary horizon – it's different for everyone, and every physical location can be considered as being at a horizon boundary for someone.

With a cosmological horizon, we can mark out a region of space that we reckon should be directly visible, and another region beyond that shouldn't be, and try to draw a dividing line between the two that represents the horizon. The unseen region doesn't even exist as space in an observerspace map – in an observerspace projection, space seems to fizzle out and come to a stop at the horizon limit.
As we try to look at regions further and further away, we're seeing larger and larger cosmological redshifts, and seeing further and further back in time, until we approach a theoretical limit where the redshift is total, time doesn't appear to have moved on at all since the Big Bang, and events apparently frozen into the horizon correspond to those in the vicinity of Time Zero.
In an idealised model, trying to see any further away than this means that we'd be expecting to see spacetime events that originated before the Big Bang, which – in our usual models – don't exist. So the cosmological horizon is the rough analogue of a censoring surface surrounding a notional black hole singularity under general relativity. It kinda ties into the cosmic censorship hypothesis: if any physical singularities do exist anywhere in Nature, Nature will always make physics work nicely, politely hiding the nasty singularities from view.

HOWEVER ... with a cosmological horizon, there are logical arguments that insist that we can receive signals through it.

Suppose that we have two star systems, A and B, whose spatial positions are on different sides of our drawn cosmological horizon, a couple of hundred lightyears away from each other. Let's say that B's the closer star to us – 100 ly inside our nominal horizon – and A's 100 ly outside. In an observerspace projection, we'll eventually be able to see the formation of the nearer star B (if we wait a few bazillion years) but A is off-limits.

But the nearer star B is quite capable of seeing events generated by A, and then helpfully relaying their information on to us. If A goes supernova, we should (eventually) be able to see a cloud of gas near B being illuminated by the flash. B can pass A's signals on, just as an observer at a planetary horizon can see things beyond our horizon and describe them to us, or hold up a carefully-angled mirror to let us see for ourselves.

So technically, Star A, under QM definitions, is a virtual object. It doesn't exist for us according to direct observation, but it's real for nearby observers and we can see the secondary results of those observations. Star A effectively radiates to us indirectly through the horizon (via B), so not only does the supposed Big Bang singularity have a masking horizon, the horizon emits Hawking radiation. If we'd been a bit brighter back in the 1950s, we'd have been able to predict Hawking radiation by taking the "cosmological horizon" case and generalising over to the gravitational case. What stopped us from doing this was an incompatibility with the way that GR1915 was constructed.

The cosmological horizon is an acoustic horizon. It fluctuates and jumps about in response to events both in front of it and behind it. If someone near star A lobs a baseball at star B, we'll eventually see that baseball appear, apparently from nowhere, as a Hawking radiation event. And depending on how close the thrower is to the horizon, and how hard they throw the ball, we might even get a glimpse of their shoulder, as the physical acceleration of their arm warps spacetime (accelerative gravitomagnetism, Einstein 1921) making the nominal horizon position jump backwards.

For this sort of acoustic horizon to work, the acceleration and velocity of an object has to affect local optics (if the ball had been thrown in the opposite direction, we'd never have seen it).
If the local physics at a cosmological horizon generates an acoustic horizon, then that physics has to correspond to an acoustic metric – NOT a static Minkowski metric. The presence, velocity and acceleration of objects must change the local signal-carrying properties of a region. Since the operating characteristics of an acoustic metric are different to those of the Minkowski metric that defines the relationships of special relativity, the velocity-dependent geometry makes the basic equations of motion come out differently. For cosmological horizons to work as we expect, the local light-geometry for a patch of horizon has to be something other than simple SR flat spacetime, and the local physics has to obey a different set of rules to those of special relativity.

Now, the punchline: Since our own region of spacetime will in turn lie on the horizon of some distant far-future observer, this means that if we buy into the previous arguments, our own local "baseball physics", here on Earth, shouldn't be that of special relativity either.


The good news is that if we eliminate special relativity from GR, to force cosmological horizons to make sense, GR's predictions for gravitational horizons would also change. The revised general theory would predict indirect radiation effects through gravitational horizons, bringing the theory in line with quantum mechanics. Which would be a Good Thing, because we've been trying to solve THAT problem for most of the last 35 years.

The bad news is that there doesn't seem to be any polite way to do it. Disassembling and reconstructing general relativity to address its major architectural problems involves going back to basics and starting from scratch, questioning every assumption and decision that was made the first time around, and being pretty ruthless about which parts get to stay on in the final theory.

I find this sort of work kinda fun, but apparently I'm in a minority.

Monday, 6 July 2009

Projective Cosmology, and the topological failure of Einstein's General Theory

[Image: 'farside black hole' projection, topological cosmology – 'Relativity in Curved Spacetime', figure 12.4]
The graphic above is from my old, defunct, 1990s website, and I also borrowed it for chapter 12 of the book.

It shows a rather fun observerspace projection: if we assume that the universe is (hyper-)spherical, but we colour it in as it's seen to be rather than as we deduce it to be, expansion and Hubble shift result in a description in which things are more redshifted towards the universe's farside. Free-falling objects recede from us faster towards the apparent farside-point, as if they were falling towards some hugely massive object at the opposite end of the universe, and as if there was a corresponding gravitational field centred on the farside. At a certain distance between us and where this (apparent) gravitational field would be expected to go singular, there's a horizon (the cosmological horizon) censoring the extrapolated Big Bang singularity from view, and that looks gravitational, too.

And, funnily enough, this "warped" worldview turns out to be defensible (as an observer-specific description) using the available optical evidence. Since we reckon that the universe is expanding, and we're seeing older epochs of the universe's history as we look further away, we're seeing those distant objects as they were in the distant past, when the universe was smaller and denser and the background gravitational field-density was greater than it is now.

Our perspective view is showing us an angled slice through space and time that really does include a gravitational gradient – between "there-and-then" and "here-and-now". The apparent gravitational differential is physically real within our observerspace projection, and viewed end-on, the projection describes a globular universe with a great big black hole at the opposite end to wherever the observer happens to be.

This projection is fascinating: it means that we end up describing cosmological-curvature effects with gravitational-curvature language, and it cuts down on the number of separate things that our universe model has to contain. If we take this topological projection seriously, some physics descriptions need to be unified. If we can agree on a single definition of relative velocity, the projection means that cosmological shifts (as a function of cosmological recession velocity) have to follow the same law as gravitational shifts (as a function of gravitational terminal velocity) ... and then, since gravitational shifts can be calculated from their associated terminal velocities as conventional motion shifts, we have three different effects (cosmological, gravitational and velocity shifts) all demanding to be topologically transformed into one another, and all needing to obey the same laws.
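For reference, the two shift laws being asked to merge are usually quoted in these standard textbook forms (nothing below is specific to the projection argument):

```latex
% Special-relativistic Doppler shift for radial recession speed v:
1 + z = \sqrt{\frac{1 + v/c}{1 - v/c}}

% Gravitational shift seen by a distant observer for light emitted from
% radius r around mass M, written in terms of the escape ("terminal")
% velocity  v_esc = sqrt(2GM/r):
1 + z = \frac{1}{\sqrt{1 - 2GM/(r c^{2})}} = \frac{1}{\sqrt{1 - v_{\mathrm{esc}}^{2}/c^{2}}}
```

Asking these to be two views of the same relationship is the unification the projection seems to demand.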


This all sounds great, and at this point someone who hasn't done advanced gravitational physics will probably be anticipating the punchline – that when we work out what this unified set of laws would have to be, we find that they're the set given by Einstein's special and general theories, QED.

Except that they aren't. We don't believe that cosmological shifts obey the relationship between recession velocity and redshift supplied by special relativity.

We dealt with this by ignoring the offending geometry. Since cosmological horizons had to be leaky, and GR1915 told us (wrongly) that gravitational horizons had to give off zero radiation, we figured that these had to be two physically-irreconcilable cases, and that any approach that unified the two descriptions was therefore misguided. Since a topological re-projection couldn't be "wrong", it had to be "inappropriate". Instead of listening to the geometry and going for unification, we stuck with the current implementation of general relativity, and suspended the usual rules of topology to force a fit.

But then Stephen Hawking used quantum mechanics to argue that gravitational horizons should emit indirect radiation after all, as the projection predicts. So we'd broken geometrical laws (in a geometrical theory!) to protect an unverified physical outcome that turned out to be wrong. Where we should have been able to predict Hawking radiation across a gravitational horizon from simple topological arguments in maybe the 1930s, by using the closed-universe model and topology, we instead stuck with existing theory and had to wait until the 1970s for QM to tap us on the shoulder and point out that statistical mechanics said that we'd screwed up somewhere.

If we look at this projection, and consider the consequences, it suggests that the structure of current general relativity theory, when applied to a closed universe, doesn't give a geometrically consistent theory ... or at least, that the current theory is only "consistent" if we use the condition of internal consistency to demand that any logical or geometrical arguments that would otherwise crash the theory be suspended (making the concept almost worthless).
It basically tells us that current classical theory is a screw-up. And that's why you probably won't see this projection given in a C20th textbook on general relativity.

Tuesday, 30 June 2009

The Riemann Projection and General Relativity

The Riemann projection is associated with the mathematician Bernhard Riemann (1826-1866), and gives a method of projecting the contents of a finite spherical surface onto an infinite flat plane.

We place the sphere onto the plane, so that its South Pole is touching the surface, and then we draw lines from the North Pole to the plane. After leaving N, each line intersects one (and only one) point on the spherical surface, and one (and only one) point on the flat plane. Every point on one of the two surfaces has a corresponding point on the other. As long as we don't mind making a vanishingly-small pinprick in the spherical surface at its North Pole, the two surfaces are topologically identical … we can take our pin-mark, stretch it to a finite-sized hole, and then stretch the resulting bowl-shaped surface to cover the full infinite plane.

We can also imagine this as a simple optical projection – if the sphere is a hollow transparent surface and we place a lightsource at N, then anything drawn on the sphere will project shadows onto the plane.
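In coordinates, with a sphere of radius R resting on the plane (S at the plane's origin, N directly above it at height 2R), the line-through-N construction gives the map below – a straightforward bit of similar-triangle geometry for the set-up just described:

```latex
% Projection from N = (0, 0, 2R): a sphere point (x, y, z) maps to the
% plane point where the line from N through (x, y, z) meets z = 0
(x,\, y,\, z) \;\longmapsto\; (X,\, Y) = \left( \frac{2R\,x}{2R - z},\; \frac{2R\,y}{2R - z} \right)
```

As z approaches 2R (the point nears N), the image point runs off to infinity, which is the pinprick that gets stretched to cover the rest of the plane.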

Einstein used the projection in his "Geometry and Experience" lecture, as an aid to visualising the idea of a closed finite universe:
[Image: Riemann Sphere, Einstein, "Geometry and Experience" lecture, 1921]
There's also a nice Riemann Sphere animation on YouTube, courtesy of the American Mathematical Society, and a nice image at Encyclopaedia Britannica.




Now although we don't usually want to make this sort of projection (unless we're working on something a bit abstract, like Moebius transformations), the "Riemann Sphere" projection was psychologically important for physics, because the thing was fairly easy to visualise, and because it had such far-reaching implications for geometrical physics.

Thanks to the projection, we know that any physics described in sphereland has to have an exact counterpart description in flatland, as long as we scale all our definitions to match. When we lay rulers over the surface of the sphere, rulers near the North Pole have projections onto the plane that tend towards becoming infinitely large, so the plane's surface appears (to its occupants) to be finite, just like the sphere. Similarly, a constant-speed light-pulse travelling around the sphere has a "shadow" on the plane whose speed tends to infinity as the corresponding position on the sphere approaches N. If we take objects and structures whose internal equilibrium is maintained by signals travelling at the speed of light, then as we move these objects away from S, they enlarge. So it takes us the same number of tiles to pave the infinite plane as it does the sphere. And to the plane's inhabitants, there's no obvious way of telling which tile is the central tile – the internal physics of the plane and sphere are precisely the same.
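The enlargement being described is just the projection's conformal scale factor. At plane radius ρ from the image of S, a small ruler of length ds on the sphere projects to an image of length dL, with the standard result for this construction being:

```latex
% Conformal scale factor of the projection (sphere radius R,
% plane radius rho measured from the image of S):
dL = \left( 1 + \frac{\rho^{2}}{4R^{2}} \right) ds,
\qquad \rho^{2} = X^{2} + Y^{2}
```

The factor grows without limit as the corresponding sphere point approaches N, which is why the projected rulers (and projected light-speeds) blow up there.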

But the intrinsic geometry of a blank plane, on its own, is not the same as that of a sphere. We need to add something – a density-map. In order to recreate the sphere's properties, we need to either project a helpful scaling grid from the sphere onto the plane to describe how scalings need to vary across the plane's surface, or attach a value to each point on the plane to describe the local scaling. This "density" parameter varies smoothly over the surface, so we're entitled to describe it as a field. We can then say that it's this density-field that deflects light and matter in the plane towards the region of highest density (S), by Huygens' principle. But as Newton and Einstein both pointed out, a variation in the density of an underlying medium, and the associated variation in the speed of light, can both be considered as expressions of the action of a gravitational field.

As a crude first approximation, we can say that the unscaled plane description includes a gravitational field that doesn't exist in the sphere description – and yet both descriptions are equivalent.



So ... the implication of the Riemann projection is that gravitational fields aren't absolute. We can take a physical description that works, and stretch and squash our reference-grid in weird and silly ways, and as long as we invent compensating gravitational fields that vary in sympathy with our fictitious distortions (causing space's contents to nominally stretch and squash and distort to fill exactly the same region as before), the final predictions should be identical, regardless of which grid we use.
Within a space defined by that grid, these fields are physically real. And, said Einstein, we could also run the process backwards. We can place an observer in a genuine gravitational field, and allow them freefall acceleration, and for them, that field will no longer exist in their local physics ("a freefalling observer feels no gravity"). If Eötvös' Principle (that everything falls at the same rate in a gravitational field) was right, and gravity affected everything equally, then we had to be able to produce a geometrical description of gravitational effects ... and by allowing space to be warped, we could then eliminate gravitational fields from our description as a separate effect. The background gravitational field was simply space(-time), and what we normally thought of as conventional gravity was simply the result of curvature, and of curvature-related variations in projected density.

In practice, things were a little more difficult than this: Riemann and co couldn't get their curved-space models to work using curvature in just three dimensions, so a geometrical theory of gravity had to wait until Einstein had noticed the argument for gravitational time dilation, and that it led to curvature in four dimensions.
Einstein also decided to use a "frame-based" approach, which led to some simplified geometries being cross-mapped and projected that sometimes didn't correspond to actual physics, or to the shapes that more general principles said ought to be there.

I'll deal with the topological failure of the current default version of the general theory of relativity in a future post (or two). If anyone can't wait, it's in the book.