tag:blogger.com,1999:blog-4805553531325801002024-03-05T04:14:20.173+00:00ErkDemon(Eric Baird) - The Other Side of ScienceErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.comBlogger100125tag:blogger.com,1999:blog-480555353132580100.post-31258475120979840922015-12-22T09:00:00.000+00:002015-12-22T09:00:22.526+00:00General Relativity and MTW's Three Tests for Viability<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeZTqnxFY4WX5H9ZUNPP4Op0oW4RE98n4LQBs-K2zchlHjBiN__t7eAqCb9BZtlCOSIyDpxX_d3umk_NadkmlogNVJiCnNfvM6QzN1LRXdnUyiZmq9Lihmj8cjQT5QO4rY7G15IxVPyW0/s1600/MTW+cross+three.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeZTqnxFY4WX5H9ZUNPP4Op0oW4RE98n4LQBs-K2zchlHjBiN__t7eAqCb9BZtlCOSIyDpxX_d3umk_NadkmlogNVJiCnNfvM6QzN1LRXdnUyiZmq9Lihmj8cjQT5QO4rY7G15IxVPyW0/s400/MTW+cross+three.jpg" width="400" /></a></div>
At well over twelve hundred pages, "<b>Gravitation</b>" (1973, ISBN 9780716703440) by <b>Charles Misner</b>, <b>Kip Thorne</b> and <b>John Wheeler</b>, collectively known as "<b>MTW</b>" for short, is one of the thicker textbooks on general relativity.<br />
<br />
MTW's section on gravitational testing (~p1066) suggests that there's not really much doubt over the correctness of general relativity, but that we're supposed to put up alternatives for the sake of scientific procedure, so that we can make comparisons and conclude <i>objectively</i> that the theory is wonderful. MTW then give three conditions that a theory has to meet in order to be considered credible enough to warrant being compared with Einstein's wonderful theory.<br />
<br />
There are a couple of problems here: the first is that the main competitor class of theory to GR1916 seems to be the "Cliffordian" acoustic metric class, which doesn't reduce to flat spacetime and special relativity, and is therefore not included in MTW's scheme before we even look at the three conditions. This means that the main class of theory that could hypothetically whup GR's arse is already excluded from the comparison, because its metric is too sophisticated to fit the "clunky" SR-based definitions that act as a foundation for a lot of work on GR. All we're <i>supposed </i>to compare with GR is other similar SR-reducing theories, which means, essentially, other variations on the existing GR1916/60 theme.<br />
The assessment is basically "skewed and screwed" before we even begin.<br />
<br />
The second problem is that even though the three critical tests seem to have been designed to make the 1916 theory look good, GR1916 still manages to fail at least one out of the three; from the perspective of someone in 2015 it fails at least two out of the three; and when we look at the original theory before its 1960 reboot, <i>that</i> version arguably fails all three tests.<br />
<br />This is not good.<br />
<br />
<br />
<br />
MTW, page 1066: <br />
<blockquote class="tr_bq">
<span style="color: #990000;">" Not all theories of gravitation are created equal. Very few, among the multitude in the literature, are sufficiently viable to be worth comparison with general relativity or with future experiments. The "worthy" theories are those which satisfy <i>three criteria for viability: self-consistency, completeness, and agreement with past experiment. </i>"</span></blockquote>
Let's examine these criteria:<br />
<br />
<h3>
1: SELF-CONSISTENCY</h3>
We know (and MTW presumably also knew in 1973) that Einstein's 1916 theory had already been found in 1960 to have failed the test of internal self-consistency. That's when the theory had a crisis in which it was discovered that the principle of equivalence, arguably the foundation of the theory, appeared to be fundamentally irreconcilable with special relativity. Einstein had warned in 1950 about potential issues related to his original "pragmatic" decision to include SR as a limiting case in 1916, subsequently arguing that this wasn't obviously a legitimate feature of a general theory. He died in 1955 before the 1960 crisis vindicated his concerns: <br />
<br />
<span style="font-size: x-small;"><a href="http://scitation.aip.org/content/aapt/journal/ajp/28/9/10.1119/1.1936000"><b><i>Alfred Schild, "Equivalence Principle and Red-Shift Measurements" Am. J. Phys. 28, 778 (1960):</i></b></a></span><br />
<blockquote class="tr_bq">
<span style="color: #990000;"><span style="font-family: Georgia, 'Times New Roman', serif;">" ... special relativity and the equivalence principle do not form a consistent theoretical system. "</span></span></blockquote>
If the principles of general relativity ruled out the inclusion of SR, we'd lose both major theories of relativity and have to write a new single-stage general theory to replace both previous layers. Since the "total rewrite" option was considered unacceptable, we instead rejected the very<i> idea </i>that SR could be wrong, and declared that SR was an unavoidable part of any credible gravitational model. The principle of equivalence therefore had to be suspended every time it was about to collide with SR and crash the theory, because any process that crashed the theory was, by definition, not a correct process under that theory.<br />
<br />
So GR1916(original) never was a self-consistent theory, and the 1960 "reimagining" that tried to fix this cannot be said to be <i>meaningfully </i>self-consistent, because it only achieves a more limited form of consistency by setting up protocols for coping with failures in an orderly and consistent way. This is like the difference between a self-driving car that never crashes, and a car that crashes repeatedly, but comes with seatbelts and airbags and crumple zones, and instructions for when to override the autopilot. GR1960 does failure management rather than failure avoidance.<br />
<br />
<h3>
2: COMPLETENESS</h3>
MTW's second criterion is that<br />
<blockquote class="tr_bq">
<span style="color: #990000;"><span style="font-family: Georgia, 'Times New Roman', serif;">" ... it must mesh with and incorporate a consistent set of laws for electromagnetism, quantum mechanics, and all other laws of physics. ... "</span></span></blockquote>
At the time those words were written, it probably seemed that GR1916/60 met all those requirements, but since the theoretical discovery of black hole radiation in the Seventies, we've realised that textbook GR very much does <b>NOT</b> mesh with quantum mechanics.<br />
<br />
<b><i><span style="font-size: x-small;">Kip Thorne, "Black Holes and Timewarps" (1994) , p237:</span></i></b><br />
<blockquote class="tr_bq">
<span style="color: #990000;"><span style="font-family: Georgia, 'Times New Roman', serif;">" ... looking at the laws of general relativity and the laws of quantum mechanics, it was obvious that one or the other or both must be changed to make them mesh logically. "</span></span></blockquote>
So, assuming that current QM is basically correct, GR (past and present) also currently fails MTW test number 2. We can try to hypothesise a larger theoretical structure – a theory of <b>quantum gravity</b> – that somehow contains both theories, and this is what the classical physics guys have been holding out for ... but forty years later, nobody's managed to come up with one that works without modifying GR. Current GR and QM seem to have fundamentally incompatible causal structures and definitions that would make their predictions <i>irreconcilable</i>, so if QM is right then we'd seem to be using the wrong general theory of relativity.<br /><h3>
3: AGREEMENT WITH PAST EXPERIMENT</h3>
<br />
This is an odd one. It's natural to ask that a theory agree with experiment (at least <i>reasonably </i>well) because we need it to agree with reality. So why use the word "past"? Aren't all experiments past? Do current experiments not matter?<br />
<br />
Setting aside that one ambiguous word, GR doesn't obviously agree with all <i>current</i>,<i> recent </i>gravitational experimental data. MTW mentions "the expansion of the universe" as one of the things that gravitational theory has to get right, and textbook GR gets expansion characteristics wrong unless we invent a new thing, "dark energy", specifically to make up the shortfall between what GR predicts and what our hardware reports.<br />
<br />
Similarly, GR currently "under-predicts" the cohesiveness of large systems such as galaxies – the predicted rotation curve of spiral galaxies seems to be wrong, suggesting that either the gravitational attraction within a galaxy is somehow greater than predicted, or the interaction across intergalactic voids is weaker. We'd get the second effect (and a stronger expansion characteristic) if GR was the wrong theory, and the real theory was more aggressively nonlinear, so both these things are arguably "warning flags" for the theory being faulty. Instead, we prefer to explain the extra cohesiveness by inventing a whole new form of matter ("dark matter") which obeys different rules to the rest of the universe, is conveniently invisible and unreactive (except gravitationally), and which has no theoretical basis or reason to exist other than to help us balance the books.<br />
<br />
With dark matter and dark energy, we can't currently say that GR agrees well with experiment, because we can't yet demonstrate that these things are real and not just ad-hoc devices that let us write "blank cheques" for any mismatch between GR and reality. GR enthusiasts might interpret the situation as meaning that once dark matter and energy are added, the theory matches the data <i>excellently </i>... but they can't dispute the fact that textbook GR is currently <i>functionally indistinguishable</i> from a theory that fails to agree with the available data. General relativity cannot be shown to pass test #3 without making so many "creative" external <i>ad hoc</i> adjustments that nominally "passing" the test carries no real significance.<br />
<br />
<br />
<b><i>MTW 1973:</i></b><br />
<blockquote class="tr_bq">
<span style="color: #990000;"><span style="font-family: Georgia, 'Times New Roman', serif;">" Among all bodies of physical law none has ever been found that is simpler or more beautiful than Einstein's geometric theory of gravity ... as experiment after experiment has been performed, Einstein's theory has stood firm. No purported inconsistency between experiment and Einstein's laws of gravity has ever surmounted the tests of time. "</span></span></blockquote>
We're no longer in a position to make this statement.<br />
<h3>
CONCLUSIONS</h3>
According to MTW, failing <i>any one of these conditions</i> means that a theory is to be regarded as not worth pursuing and not worth testing. A sceptic could argue that the 1916 theory technically appears to fail<i> all three</i> tests, and with even the most optimistic and most generous interpretation of MTW's criteria, where the theory "only" fails test #2, we'd still be obliged to write off the current general theory as not being a credible theory of gravitation. <br />
<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-34410199665281965562012-02-08T21:46:00.007+00:002018-05-17T13:14:11.522+01:00Hexagonal Diamond - The "other" form of diamond<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKL7SrxkNAHJJzlpZyyDvYH4qg8I-t-Evf-TkAj1j_gv52PDNOuUvLoVkfqSHth3TZLe_2o9e6aw6HkCVRfmqN3y0ixgIUAgxTGAY4e-EWznbeT9UdvhFtVUMVlDm7knOG_Ev-CGOAs2I/s1600/Lonsdaleite+%2528Hexagonal+Diamond%2529.jpg"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5706886210195664850" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKL7SrxkNAHJJzlpZyyDvYH4qg8I-t-Evf-TkAj1j_gv52PDNOuUvLoVkfqSHth3TZLe_2o9e6aw6HkCVRfmqN3y0ixgIUAgxTGAY4e-EWznbeT9UdvhFtVUMVlDm7knOG_Ev-CGOAs2I/s400/Lonsdaleite+%2528Hexagonal+Diamond%2529.jpg" style="cursor: hand; cursor: pointer; display: block; height: 234px; margin: 0px auto 10px; text-align: center; width: 400px;" /></a><span style="font-weight: bold;">Here's another geometrical object that, if you believed basic school textbooks, should be impossible.</span> You know how they taught you that carbon only comes in three forms, diamond, graphite and soot, and that other configurations were geometrically impossible? Before the penny dropped regarding <a href="http://en.wikipedia.org/wiki/Fullerene">Buckyballs and Buckytubes</a>?<br />
<br />
Well, it turns out that even diamond has (at least) two possible versions.<br />
<br />
This one is known as <a href="http://en.wikipedia.org/wiki/Lonsdaleite"><span style="font-weight: bold;">hexagonal diamond</span>, or <span style="font-weight: bold;">Lonsdaleite</span></a>, after the crystallographer <a href="http://en.wikipedia.org/wiki/Kathleen_Lonsdale"><span style="font-weight: bold;">Kathleen Lonsdale (1903-1971)</span></a>.<br />
<br />
The reason why most people haven't heard of it is that it's not normally naturally occurring, at least, not in situations that are easily accessible to us (although teeny-tiny specks of it are supposed to have been isolated from meteorites). Its nominal bond angles and lengths would seem to be the same as normal diamond, and it still has a tetrahedral aspect to the way that it has four bonds surrounding each individual atom, but the configuration is, nevertheless, different. People have computer-modelled Lonsdaleite, but I only know of two physical models of the structure, and they're both in my bedroom. <span style="color: rgb(255 , 0 , 0); font-weight: bold;">*</span><br />
<br />
Hexagonal diamond is a bit of a wildcard, in that although we can try to calculate and model the properties of the bulk material, we don't really know for certain what they are, exactly. We <span style="font-style: italic;">expect</span> pure Lonsdaleite to be harder than standard diamond (which is interesting), and it might well have useful semiconductor properties when we dope it (as with normal diamond), but until we can find or make a decent-sized chunk of the stuff to test, we don't know for sure.<br />
<br />
What we <span style="font-style: italic;">could</span> do is try to make hexagonal diamond using conventional <a href="http://en.wikipedia.org/wiki/Chemical_vapor_deposition_of_diamond" style="font-weight: bold;">chemical vapour deposition (CVD)</a>, but to use some sort of crystal seed surface that has bumps and hollows in the right places to get the Lonsdaleite structure started, after which the deposited film will <span style="font-style: italic;">hopefully</span> continue growing in the new "HexD" configuration. But it's maybe not immediately obvious why Lonsdaleite doesn't usually get noticed in normal diamond-bearing rock. Does it have some form of instability that makes it a less viable end-material than conventional "cubic" diamond? Dunno.<br />
<br />
One potential clue is Lonsdaleite's structural affinity to <a href="http://en.wikipedia.org/wiki/Graphite">graphite</a>. You can (notionally) make Lonsdaleite by taking stacked and aligned sheets of graphite and cross-linking them. Graphite's two-dimensional sheets only make three out of the four potential bonds per atom, the missing fourth bond being shared as a pair of fuzzy electron clouds that hover on both sides of each individual graphene sheet, acting as a repulsive lubricant that lets the individual <a href="http://en.wikipedia.org/wiki/Graphene">graphene sheets</a> slide across each other. Looking at the hexagonal structure of a single graphene sheet, we can select alternating carbon atoms (three per hexagon), and force them down out of the plane to make bonds with the corresponding atoms on the sheet below … and then take the other 50% of the atoms in the sheet and make them form similar shared bonds with the atoms directly <span style="font-style: italic;">above</span> them, in the next sheet up. The sheet then crinkles so that it's no longer flat, and hopefully, in the right set of circumstances, the sheets on either side will start to crinkle to fit, and their spare three-bond atoms will be pushed out of the plane to be closer to the next sheets, and will start to make bonds of their own.<br />
<br />
So, <span style="font-style: italic;">perhaps</span> we could try making Lonsdaleite by clamping the edges of a block of graphite to compress the constituent graphene sheets and encourage them to "crinkle", while ... er ... heating? Or repeatedly hitting the thing with a hammer? That might create disordered Lonsdaleite, which might have pockets or regions of "the good stuff", which might then be extractable. And even if it doesn't work, it might produce <span style="font-style: italic;">something</span> interesting, maybe. What the heck, why not go for the whole "Frankenstein laboratory" approach and try zapping a current across the sheets at the same time, to see if you can encourage something interesting to "grow". :)<br />
<br />
But perhaps chemical vapour deposition is the way to go, if we can find a suitable seed substrate.<br />
<br />
<br />
<span style="font-size: 85%; font-style: italic;"><span style="color: rgb(255 , 0 , 0);">*</span> Okay, somebody's now bought one through my Shapeways shop, so that makes three. :) There must be other physical models of this thing out there, in chemistry labs somewhere ... I've just not seen one photographed. Then again, I'm not a chemist or a crystallographer.</span><div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com1tag:blogger.com,1999:blog-480555353132580100.post-19772190674684393252010-09-30T23:35:00.001+01:002010-09-30T23:39:28.211+01:00Different Types of Zero<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ0yAGUywxjRNpqaWCyjDq2JhvzpcrgmMtw8Ix9ZIMCo7S6C0OHjcZyan4-xFPH-iBxyCzGrpdVrqLggCE6nVNUNRoM7HJWUhAw92wBJUe-c2IhPUcb9XPU3wvg3FcrTcMWuz_YqaB3qs/s1600/BlogZeroes.gif"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 146px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ0yAGUywxjRNpqaWCyjDq2JhvzpcrgmMtw8Ix9ZIMCo7S6C0OHjcZyan4-xFPH-iBxyCzGrpdVrqLggCE6nVNUNRoM7HJWUhAw92wBJUe-c2IhPUcb9XPU3wvg3FcrTcMWuz_YqaB3qs/s400/BlogZeroes.gif" alt="" id="BLOGGER_PHOTO_ID_5522833551657098706" border="0" /></a><p style="margin-bottom: 0cm; font-weight: bold;">It took mathematicians a while to realise that infinities came in different sizes.</p> <p style="margin-bottom: 0cm;">The problem was an inadequacy of language. All "infinities" are <span style="font-style: italic;">infinite</span>, but some are a little more infinite than others. For instance, "infinity-squared" gives an infinite result, but it's a <span style="font-style: italic;">stronger</span> infinity than the infinity that we started out from ... 
but by deciding to assign all these different infinite results the same name — "infinity" — we created an implicit assumption that this "infinity" was a <span style="font-style: italic;">thing</span>, a single entity rather than a family. We ended up reciting things like "infinity is just infinity, <span style="font-style: italic;">by definition</span>". Well, if so, it was a pretty bad definition, because infinity isn't so much a <span>value</span> as a realm, or a concept that allows multiple members, like "integer".<br /></p><p style="margin-bottom: 0cm;">Our conventional language breaks down in these sorts of situations. To try to get a handle on the infinities, we can construct an "infinity-based" number system where our reference base unit [∞] is a "reference infinity" of "one divided by zero" (we can say, "1/0 =[∞]"), and we can compare other infinities to that, so that 2/0 gives 2×[∞], and 2×[∞] / [∞] = 2. It's possible to do proper math and get sane finite results by multiplying and dividing infinities together, as long as you remember to keep track of how big each individual infinity is (and/or where it originally came from).</p> <p style="margin-bottom: 0cm;">We do similar things with <a href="http://en.wikipedia.org/wiki/Complex_number"><span style="font-weight: bold;">complex numbers</span></a>. These have two components, a conventional "real" component, and an "imaginary" component that's a multiple of the "impossible" square root of minus one, which we abbreviate as <span style="font-style: italic;">i</span>. Even though the imaginary components don't exist in our default number system, we can still do useful math with these hybrid numbers ... that's actually how we generate exotic mathematical creatures like the Mandelbrot Set. The approach works. We've seen the pretty pictures. 
</p><br /><hr align="left" width="25%"><br />So multiple values of infinity are okay.<br /><p style="margin-bottom: 0cm;"><span style="font-weight: bold;">But there's one last thing that we have to fix. Zero. </span>See, it turns out that if infinities come in different sizes, then <span style="font-style: italic;">zero</span> has to come in different sizes, too.<br /></p><p style="margin-bottom: 0cm;">At first sight this seems even more crazy. We can plot a simple line going through zero, and put the tip of our pencil on the crossing point, and say <span style="font-style: italic;">there</span> it is, right <span style="font-style: italic;">there</span>. How can that single point have different values? Well, as with the infinities, the auxiliary values exist off the page — when different graphs all hit zero at the same position, the properties associated with the coincident points on those different graphs aren't automatically completely identical even though they show up as being at the same position. Coincident points on different intersecting lines can carry different slopes and rates of change, and can have associated vectors and other associated baggage that gets lost when we try to break a line down into instantaneous isolated unconnected values. </p><p style="margin-bottom: 0cm;">Zero times any <span style="font-style: italic;">conventional</span> number gives a zero, just like infinity times any conventional number gives an infinity. But not all zeroes have the same emphasis or strength, and this can become important when you have them fighting against each other. 
If we're only multiplying our zeroes by normal boring numbers then the auxiliary parameters don't matter, but as with the infinities, when we start multiplying or dividing different zeroes, we have to track the strengths of the zeroes, or else we tend to end up with mathematical garbage.<br /></p><br />One of the problems that theoretical physicists currently have is that they're coming up against a range of problems — black hole event horizons, Hawking radiation, gravity-wave and warpfield propagation — where clusters of values assigned to apparent physical properties have a habit of going to zero or to infinity and beyond, even though the underlying local physical properties are non-zero and non-infinite. To deal with those problems we either have to find ways of sidestepping the pathological math, or come up with a more complete mathematical vocabulary that doesn't freak out when we occasionally need to divide a known strength of infinity by an associated known incarnation of zero.<br />Otherwise, we're liable to come to bad conclusions about how certain things are "provably physically impossible" because they appear to break the math, when in fact, the real problem is that the math that we're trying to apply to the problem is too naive. 
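As a toy illustration of that bookkeeping, here's a little Python sketch. The class and its names are my own inventions for this post, not standard mathematical machinery: every quantity is stored as a coefficient times [∞] raised to some power, with positive powers being infinities of a given strength and negative powers being the matching zeroes.

```python
from fractions import Fraction

class Graded:
    """A toy value c x [inf]^n. n > 0: an infinity of strength n;
    n < 0: a zero of strength -n; n == 0: an ordinary finite number."""
    def __init__(self, coeff, order=0):
        self.coeff = Fraction(coeff)
        self.order = order

    def __mul__(self, other):
        # Multiply coefficients, add the infinity-orders.
        return Graded(self.coeff * other.coeff, self.order + other.order)

    def __truediv__(self, other):
        # Divide coefficients, subtract the infinity-orders.
        return Graded(self.coeff / other.coeff, self.order - other.order)

    def __repr__(self):
        return str(self.coeff) if self.order == 0 else f"{self.coeff}x[inf]^{self.order}"

INF = Graded(1, 1)    # the "reference infinity" [inf] = 1/0
ZERO = Graded(1, -1)  # the matching reference zero

print((Graded(2) * INF) / INF)   # 2x[inf] / [inf] -> 2
print(INF * INF)                 # a stronger infinity: 1x[inf]^2
print(Graded(6) * INF * ZERO)    # strength-matched infinity x zero -> 6
```

Once each zero and infinity carries its strength around with it, dividing a known infinity by its associated zero gives a sane finite answer instead of garbage.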
If we go down that route, we can end up accidentally elevating the result of human error to the status of an accepted mathematical proof.<br /><br />Which is bad.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com2tag:blogger.com,1999:blog-480555353132580100.post-85624677161559863002010-08-01T23:19:00.002+01:002010-08-02T00:32:04.728+01:00The Decline of Theoretical Physics<span style="font-weight: bold;">Progress in fundamental theoretical physics</span> now seems to have been on hold for quite a while.<br /><br />I thought that the situation was summed up quite nicely by one of the characters in "<a href="http://en.wikipedia.org/wiki/The_Big_Bang_Theory">The Big Bang Theory</a>" (an improbably funny TV sitcom about sciencey people).<br /> <blockquote><dl><dt><span style="font-style: italic;">Penny (cheerfully as a conversation-starter):</span><br /></dt><dd>"So, what's new in the world of physics?"</dd><br /><dt><span style="font-style: italic;">Leonard (momentarily surprised and slightly amused that anyone would ask such a question):</span><br /></dt><dd>"Nothing!"<br /></dd><br /><dt><span style="font-style: italic;">Penny (taken aback):</span><br /></dt><dd>"Really, nothing?"<br /><br /></dd><dt><span style="font-style: italic;">Leonard:</span></dt><dd>"Well ... with the exception of string theory, not much has happened since the 1930's ... and ya can't prove string theory, at best you can say, 'Hey look, my logic has an internal consistency!' "<br /></dd><br /><dt><span style="font-style: italic;">Penny:</span></dt><dd>"Ah. 
Well, I'm sure things will pick up."</dd></dl></blockquote><br />Leonard unhappily picks his nails, broods briefly, decides that there's nothing positive he can say, and then changes the subject.<br /><br />And I think that just about sums things up.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com3tag:blogger.com,1999:blog-480555353132580100.post-24295004527163696882010-06-25T00:27:00.000+01:002010-06-25T00:27:33.048+01:00A 3D Mandelbrot<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiua9L-f9xLKRdUV6qAR_B7_VlPRmtsrgjaRf-ykw7n6KpdPwZDLbq-m0YoKAfOFl6SZeuI6JoNVivI4ZY38MkzMg4seZo2qipmAOlh2b8zxow46TEkAj1_Y9mSjKPMk-m5YFXVpJ7dpt8/s1600/3D_Mandelbrot.jpg"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 387px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiua9L-f9xLKRdUV6qAR_B7_VlPRmtsrgjaRf-ykw7n6KpdPwZDLbq-m0YoKAfOFl6SZeuI6JoNVivI4ZY38MkzMg4seZo2qipmAOlh2b8zxow46TEkAj1_Y9mSjKPMk-m5YFXVpJ7dpt8/s400/3D_Mandelbrot.jpg" alt="" id="BLOGGER_PHOTO_ID_5486483540726201698" border="0" /></a><a style="font-weight: bold;" href="http://www.skytopia.com/project/fractal/mandelbrot.html">Skytopia have a great set of pages</a><span style="font-weight: bold;"> on the search for a 3D version of the Mandelbrot Set.</span> Or at least, for an <span style="font-style: italic;">interesting</span> 3D version of the normal Mandelbrot. 
<p></p> <p style="margin-bottom: 0cm;">It's easy enough to produce fractal solids that have a Mandelbrot on one plane, and if you plot <a href="http://www.relativitybook.com/CoolStuff/julia_set_4d.html">the correct 3D shadows of the 4D Julia Set</a>, you can find shapes that have Mandelbrots on multiple intersecting planes. But getting a Mandelbrot on two <span style="font-style: italic;">perpendicular</span> intersecting planes, while making the transition between them more interesting than simply spinning or rotating the thing on its axis, is more difficult.</p><br /><hr align="left" width="25%"><span style="font-weight: bold;"><br /></span><span style="font-weight: bold;">The "normal" Mandelbrot</span><span style="font-weight: bold;"> has one "real" component and one "imaginary" component</span>, set on the <span style="font-style: italic;">x</span> and <span style="font-style: italic;">y</span> axes. If you add another imaginary component on axis <span style="font-style: italic;">z</span>, you simply get the sort of boring "spun" shape that you might produce on a lathe. If you distinguish the two "imaginary" axes by whapping a minus sign in front of one of them, you get a <a href="http://www.relativitybook.com/CoolStuff/erkfractals_3d.html"><span style="font-weight: bold;">hybrid Mandelbrot/Tricorn solid</span></a>, but one of the cross-sections is then a <a href="http://en.wikipedia.org/wiki/Tricorn_%28mathematics%29"><span style="font-weight: bold;">tricorn</span></a> rather than a 'brot. <p>From here, you can try <a href="http://en.wikipedia.org/wiki/Hypercomplex_number"><span style="font-weight: bold;">hypercomplex numbers</span></a>, number systems that support multiple distinct imaginary components and define how they should fit together. 
In a simple hypercomplex system, we have <span style="font-style: italic;">four</span> components, <span style="font-style: italic;">r</span>, <span style="font-style: italic;">i</span>, <span style="font-style: italic;">j</span> and <span style="font-style: italic;">k</span> — “<span style="font-style: italic;">r</span>” is real, <span style="font-style: italic;">i</span> and <span style="font-style: italic;">j</span> are identically-acting roots of minus one, but <span style="font-style: italic;">i</span>-times-<span style="font-style: italic;">j</span> gives a third creature, <span style="font-style: italic;">k</span>, and <span style="font-style: italic;">k</span>-squared gives <span style="font-style: italic;">plus</span> one. So we can plot <span style="font-style: italic;">r</span>, <span style="font-style: italic;">i</span>, <span style="font-style: italic;">j</span> to get a 3D Mandelbrot. Trouble is, as Skytopia point out, it's a bit boring … if we look down on the 'brot's side-bulbs, they show up as simple nubbins. There are other ways to try to force Mandelbrot cross-sections, but they're a bit arbitrary, and the results tend to look like someone's cut them out of a block of wood using a Mandelbrot template.</p><hr align="left" width="25%"><span><p><a style="font-weight: bold;" href="http://www.bugman123.com/Hypercomplex/index.html">Paul Nylander (bugman)</a><span style="font-weight: bold;"> then started looking at higher-powered counterparts of the Mandelbrot</span>, and realised that the boring hypercomplex solid for z^2 actually got pretty damned interesting when you jacked the power value up to eight (<a href="http://www.relativitybook.com/CoolStuff/erkfractals_powers.html">z^8</a>). This gives a gorgeously intricate beast now referred to as a <a href="http://www.skytopia.com/project/fractal/mandelbulb.html"><span>Mandelbulb</span></a>, with bulbs that spawn bulbs all over the place. It also has Julia-set siblings. 
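For anyone who wants to play along at home, here's a minimal Python sketch of that "simple hypercomplex" multiplication rule (i^2 = j^2 = -1, i*j = k, k^2 = +1) plugged into the usual z -> z^2 + c escape-time test. This is my own illustrative code, not the program used for the renders on this page, and the function names are mine.

```python
def hypercomplex_mul(x, y):
    """Product of two (r, i, j, k) quadruples under the rules
    i*i = j*j = -1, i*j = j*i = k, k*k = +1 (a commutative system,
    sometimes called the bicomplex numbers)."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 + d1*d2,
            a1*b2 + b1*a2 - c1*d2 - d1*c2,
            a1*c2 + c1*a2 - b1*d2 - d1*b2,
            a1*d2 + d1*a2 + b1*c2 + c1*b2)

def in_mandelbrot(c, max_iter=50, bailout=4.0):
    """Iterate z -> z^2 + c from zero; True if the squared length
    of z never exceeds the bailout value."""
    z = (0.0, 0.0, 0.0, 0.0)
    for _ in range(max_iter):
        z = tuple(zt + ct for zt, ct in zip(hypercomplex_mul(z, z), c))
        if sum(t * t for t in z) > bailout:
            return False
    return True

# The (r, i) plane behaves like the ordinary Mandelbrot set:
print(in_mandelbrot((-1.0, 0.0, 0.0, 0.0)))  # True: c = -1 cycles forever
print(in_mandelbrot((1.0, 0.0, 0.0, 0.0)))   # False: c = +1 escapes
```

Zeroing the j and k components reduces the product to ordinary complex multiplication, which is why the (r, i) plane of the resulting solid is a standard Mandelbrot.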
But it's not a standard Mandelbrot.</p><br /><p style="margin-bottom: 0cm;">So what else? Well, the “standard” hypercomplex number system isn't the only option. There are alternative systems that give multiple imaginary components with slightly different interrelations. There are <span style="font-weight: bold;">quaternions</span> (tried them, didn't like them), and there are other potential configurations and a larger overarching system of eight-parameter <span style="font-weight: bold;">octonions</span>. The Mandelbrot-based solid at the top of this blog was made with one of those. The internal shape is also slightly reminiscent of a <a href="http://www.complexification.net/gallery/machines/buddhabrot/"><span>Buddhabrot</span></a>.</p> <p style="margin-bottom: 0cm;">The semitransparent voxel plot above isn't really able to show the shape properly: you can see that there are some fine floating ribs that connect some of the Mandelbrot features on the two planes that aren't being adequately captured by the plot, so I'll have to run off a larger version at some point, and perhaps experiment with some colour-coding. 
Some of the more exotic detail, like the floating network of ribbing, might also be an artefact of a technique I used to emphasise surface structure in the plot, so I'll need to spend some time playing with the thing and working out how much of the image is “proper” 3D Mandelbrot detail, and how much is an additional fractal contribution from the enhancement code.<br /></p> <p style="margin-bottom: 0cm;">But meanwhile … pretty shape!</p> </span><div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-39471946123699082982010-05-30T22:06:00.000+01:002010-06-01T06:39:31.453+01:00"Tesla Turbine" Pumps<span style="font-weight: bold;">When used as a pump, the Tesla turbine is one of the simplest devices that exists.</span> Its main component is simply a spinning disc – the disc is immersed in a fluid (like air, or water), the moving surface couples frictionally with the surface of the fluid, and makes the surface layer of fluid rotate with the disc. The fluid gets thrown outwards away from the rotation axis by centrifugal forces, and new fluid moves in to take its place. You then typically build a box around the container, with an inlet tube and outlet tube. The inlet feeds fresh fluid to the central axis of the disc, and the higher-pressure "centrifuged" fluid that collects around the disk edge is collected and allowed to escape via the outlet pipe.<br /><br />You spin the disc (in either direction), and fluid jets through the device.<br /><br />Now sure, we can do this sort of thing with a conventional bladed propeller, but those beasties have problems. 
The blades chop up the air or water, and create turbulence, which in turn encourages the assembly to vibrate, and small imperfections in the rotor construction can cause imbalances (and vibrations) that are different at different speeds. So bladed designs tend to be messy and noisy and juddery, and the blades' leading edges are prone to collecting buildups of dust or muck, or being damaged by collisions with any junk that happens to be caught in the fluid stream, which in turn messes up the aerodynamics of the blade and unbalances the assembly.<br /><br />If you've ever built a PC to be especially quiet, you'll know that as the months pass, it gets noisier and noisier until you have to take the thing apart to clean the accumulated muck off the leading edges of the fanblades. In the case of ships' propellers, these vibrations cause more extreme physical damage: <a href="http://en.wikipedia.org/wiki/Sonoluminescence">sonoluminescence</a> momentarily creates microscopic pockets of superheated steam that can etch pits into the bronze. All this work wastes energy and causes unwanted noise and vibration, and makes for additional engineering complications.<br /><br />With the Tesla turbine fan, this violent interaction with the stream doesn't happen. For conventional propellers, surface friction wastes energy; with a Tesla disc, surface friction is the useful coupling mechanism that makes the thing work.<br /><br />Nowadays, if you have a tropical fish tank or an outdoor pond with an ornamental fountain, the little cylindrical pump that circulates the water or drives the fountain is probably a small centrifugal Tesla turbine. Because it's bladeless, any tiny creatures that get into the pump don't risk being chopped or hit by a big nasty blade; they might have a couple of bumps on the way through, but that's it. And weeds can't snag on the propeller blades and jam the pump, because there aren't any propeller blades to snag. 
So it's a comparatively creature-friendly and low-maintenance type of pump, if you want something to pump water for years without requiring any attention, or mashing up the microfauna.<br /><br />Recently, they've also started to consider using Tesla pumps for pumping blood. Blood includes all sorts of delicate gunge that doesn't like being disturbed too much, or it's liable to trigger a clotting reaction or an immune response. You don't want to smash up too many of the blood cells or start banging platelets together -- traditional blood pumps use clear tubing that's "massaged" by rotors to push the blood through, which makes for a nice simple high-visibility sealed unit, but you're still "squashing" some of the blood every time the pinched region travels along the tube.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">Perhaps the most surprising thing about Tesla pumps</span>, apart from their simplicity, is how many years it took us to realise that these things were useful. A diagram of a conventional bladed fan gives you some indication of what a device does, but a simple smooth spinning disc in a box doesn't <span style="font-style: italic;">look</span> as if it would do anything useful. 
Nikola Tesla got his turbine patent as late as 1913, claiming it as a novel device; Tesla pumps apparently only started being generally manufactured in the 1970s; and a quick Google for references to radial bloodpump designs seems to only throw up results newer than 1990, most in the last five or ten years.<br /><br />Sometimes we miss out on useful technologies because they require too much R&D or technical skill to get them to the point where they actually work, but sometimes we also miss out on trivially-easy technologies that "work first time" because they're just too damned simple.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com1tag:blogger.com,1999:blog-480555353132580100.post-88494894116172851612010-05-14T23:25:00.002+01:002010-06-01T06:40:27.039+01:00Rice and the Chessboard<span style="font-weight: bold;">In the story, an Emperor asks his mathematician to solve a difficult problem.</span><br /><br />In payment, the mathematician asks for a chessboard with one grain of rice on the first square, two on the next, four on the one after that, eight on the next, and so on. The emperor agrees. Then the smart-alec mathematician points out that by the time we get to the sixty-fourth square, the number of grains of rice is astronomical. It's about 10^19, or 10,000,000,000,000,000,000. Double that to get the total for the whole board, subtract one grain, and in binary the result is 1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111,1111, which is the largest number that you can express as a standard unsigned integer on a modern 64-bit processor running a specialist 64-bit version of Windows. 
One more grain of rice and you probably get an overflow error.<br /><br />If each grain of rice weighs about 25 mg, then, when we double the last-square figure to get the total number of grains on the chessboard, I think we end up with something like 460 billion metric tonnes (minus one grain).<br /><br />According to the story, the Emperor's response was to point out that this created a new problem that required the mathematician's involvement. As Emperor, he couldn't go back on his word, even if the mathematician allowed him to. An Imperial Decree couldn't be rescinded. On the other hand, that much rice didn't physically exist. The solution was to point out that if the <span style="font-style: italic;">mathematician</span> didn't exist, the debt would cease to exist, too. So the Emperor signed the mathematician's death warrant on the grounds that pulling this sort of trick on the Emperor counted as treason, and had him executed.<br /><br /><hr align="left" width="25%"><br />Here's how to work out the result in your head, without using a calculator (or even pen and paper):<br /><br />Square 1 has one grain of rice. The next ten have 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024.<br /><br />Every time that you advance another ten squares, you multiply the number on the square by <span style="font-weight: bold;">1024</span> (2^10), which is only <span style="font-style: italic;">slightly</span> more than a thousand (<span style="font-weight: bold;">1000</span>, 10^3). As a first approximation, every ten-square move pretty much shifts the "decimal" version of the number <span style="font-weight: bold;">three places to the left</span>.<br /><br />This means that when we move <span style="font-style: italic;">sixty </span>squares, we're adding those three zeroes <span style="font-style: italic;">six</span> times, giving us <span style="font-style: italic;">eighteen</span> zeroes. 
That leaves just three more squares, so we go 2, 4, 8 … and write down a "guesstimate" figure of <span style="font-weight: bold;">8 ×10^18</span> for the number of grains on the last square.<br /><br />This is an underestimate, but by how much? We treated 1024 as if it was 1000, so we have a missing factor of <span style="font-weight: bold;">1.024</span> that needs to be multiplied in six times to get to the <span style="font-style: italic;">proper</span> answer. What's 1.024 raised to the sixth power? Eww. :(<br /><br />Well, when we square "one-point-something", we get one, plus two times the "something", plus "the something-squared" ( (1+x)^2 = 1 + 2x + x^2 ).<br />If the something is <span style="font-style: italic;">very</span> small, then something-squared is going to be <span style="font-style: italic;">extremely</span> small, and hopefully so small that we can forget about it, and get away with just doubling the original small something.<br /><br />So "1.024 times 1.024" gives 1.048, plus a little bit. 
Call it 1.049.<br />Now we need "1.049 times 1.049 times 1.049", to get us up to that power of six.<br />A similar principle applies: <span style="font-style: italic;">cube</span> something very close to one, and the tiny difference kinda triples (plus a little bit).<br />So we take 1.049, look at the part after the decimal point, 0.049, nudge it up to a nicer 0.05, then triple <span style="font-style: italic;">that</span> to give <span style="font-weight: bold;">~0.15</span> as the ratio that has to be multiplied into our original result to find the amount of undershoot correction.<br /><br />"Eight" times the 0.1 is 0.8.<br />"Eight" times the 0.05 should be half that, so 0.4.<br />Adding them together, 0.8 + 0.4 is 1.2 (× 10^18).<br />That's our error.<br />Add that to the original guess of 8 (× 10^18), and we get our improved estimate, of <span style="font-weight: bold;">~9.2 × 10^18</span>.<br /><br />… and if we check that against our calculator, which says that 2^63 = <span style="font-weight: bold;">~9.22 × 10^18</span>, we were correct to two significant figures. Not bad for calculating something to the sixty-third power. 
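For anyone who'd rather let a machine do the checking, the whole estimate chain above fits in a few lines (a quick sketch; the 25 mg grain weight is the figure assumed earlier in the post):

```python
# Double-checking the chessboard mental arithmetic.
exact = 2 ** 63                  # grains on the sixty-fourth square
guess = 8 * 10 ** 18             # "three zeroes, six times over, then 2, 4, 8"
improved = guess + guess * 0.15  # apply the ~0.15 undershoot correction

print(exact)                     # 9223372036854775808, i.e. ~9.22 * 10^18
print(improved)                  # 9.2e+18 -- right to two significant figures

# Total for the whole board: double the last square, minus one grain,
# at ~25 mg (25e-6 kg) per grain, converted to metric tonnes.
total_grains = 2 ** 64 - 1
tonnes = total_grains * 25e-6 / 1000
print(round(tonnes / 1e9))       # 461 -- about 460 billion tonnes
```

The 0.15 correction factor stands in for (1.024^6 − 1), which a calculator puts at about 0.153, so the "double the small bit, then triple it" shortcut holds up.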
Yayy Us!<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com1tag:blogger.com,1999:blog-480555353132580100.post-75463231964531417592010-05-03T04:35:00.001+01:002010-05-04T18:49:50.521+01:00Ten Things you can't do on an Apple iPad<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFoYIr9hyphenhyphentVe15U8cQASuDaCrK4fcDJsP8k7QjxmZth6FKQE2okIgXZnTafFv3q062vdmoQ9zEUtTg_J9_lEhbZKhJk2pNrvaoW_dqJJ9VPdceoFlf9xRhyphenhyphen6FYhFG0c_EGyh1I7IcU_8M/s1600/iPad_No.jpg"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 315px;" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFoYIr9hyphenhyphentVe15U8cQASuDaCrK4fcDJsP8k7QjxmZth6FKQE2okIgXZnTafFv3q062vdmoQ9zEUtTg_J9_lEhbZKhJk2pNrvaoW_dqJJ9VPdceoFlf9xRhyphenhyphen6FYhFG0c_EGyh1I7IcU_8M/s400/iPad_No.jpg" alt="Apple iPad: No Can Do" id="BLOGGER_PHOTO_ID_5466868591143424818" border="0" /></a><br /><span style="font-weight: bold;">Ten Things you can't do on an Apple iPad:</span><br /><ol><li style="margin: 10px;"><span style="font-weight: bold;">Watch broadcast TV</span><br />The iPad has nowhere to plug in a DVB <a href="http://www.pcw.co.uk/personal-computer-world/compare/2153973/miniature-usb-tv-tuners">TV tuner dongle</a>, and even if it had, the iPad doesn't decode the <a href="http://en.wikipedia.org/wiki/MPEG-2">MPEG2 video</a> format used for <a href="http://www.dvb.org/index.xml">standard-format DVB digital TV broadcasts</a>. 
<a href="http://www.apple.com/ipad/specs/">It's MPEG4-only.</a> So you can't use it as a <a href="http://en.wikipedia.org/wiki/Digital_video_recorder">personal video recorder</a>, and if you have an existing PVR, you won't be able to copy or stream the recorded MPEG2 files to the iPad. Unless your other machine's fast enough to convert to <a href="http://en.wikipedia.org/wiki/Mpeg4">MPEG4</a> in real time, you'll have to transcode your files to MPEG4 first. Oh, and not all MPEG4 transcoder software produces files that play properly on the iPhone OS, so even if you <span style="font-style: italic;">do</span> transcode, you still might not be able to watch the files.<br /><br /></li><li style="margin: 10px;"><span style="font-weight: bold;">Listen to the radio</span><br />The iPhone chipset supposedly includes <a href="http://www.theregister.co.uk/2009/10/14/apple_fm_radio/">an onboard hardware FM radio</a>, which the OS doesn't make available. In theory you can plug an FM receiver module into the iPhone/iPad docking connector, but in practice, it's cheaper to buy a separate radio (or a cheap MP3 player with a radio onboard). Apple don't make a separate snap-in radio, and third-party manufacturers have been a bit reluctant to market one in case it becomes redundant overnight, if and when Apple decide to finally enable the internal device. Apple don't <span style="font-style: italic;">want</span> you listening to FM until they can find a way to make money from it, and with FM, it's the radio station that gets the advertising revenue, not Apple.<br />If you have a good internet connection, you <span style="font-style: italic;">can </span>listen to a stack of radio stations online … as long as they don't use <a href="http://en.wikipedia.org/wiki/Adobe_Flash">Flash</a> as a delivery medium.<br />Major radio stations are often also available via DVB ... 
but that's not an option with the iPad because of point (1).<br />Many iPhone owners get their "fix" of radio by buying a speaker dock that includes an FM radio receiver, but fitting an iPad to one of these is a bit more difficult.<br /><br /></li><li><span style="font-weight: bold;">Watch DVDs</span><br />Okay, so you don't <span style="font-style: italic;">expect</span> the iPad to have a DVD drive, but netbooks at least have the option of plugging in a cheap USB-powered optical drive to play your DVD movies. Not the iPad. And even if it had a general-purpose USB port, <a href="http://en.wikipedia.org/wiki/DVD#DVD_Video">standard DVD video is encoded in MPEG-2</a>, so even if you find a way to get the DVD .vob files de-encrypted and onto the iPad, it won't play them. If a relative passes you a homebrew DVD with your family's home movies, you're back into Transcoding Hell. Transcoding on a <span style="font-style: italic;">mac</span> probably produces "Apple-friendly" MP4 files, first time, every time ... on other platforms, don't count on it.<br /><br /></li><li style="margin: 10px;"><span style="font-weight: bold;">View or edit OpenOffice files</span><br />Some organisations are trying to migrate away from using MSOffice files to more open formats, to avoid vendor lock-in. The main alternative suite is <a style="font-weight: bold;" href="http://www.openoffice.org/">OpenOffice</a>, which runs under Windows and Linux, can read and write all the main MS formats as well as its own "open" format, and also happens to be free. Apple don't seem to have a reader for "OOo" files. They don't seem to much approve of open formats, and would rather you used Microsoft's apps and formats than open-source – they see open-source as a bigger threat than Microsoft.<br /><br /></li><li><span style="font-weight: bold;">Share photos. </span><br />Jobs says that sharing photos is "a breeze" on the iPad. 
By "sharing", he presumably means "tilting the screen so that other people can see it". If you want to actually <span style="font-style: italic;">give someone a </span><span style="font-style: italic;">copy </span>of a holiday picture, you'll probably have to do it on a different computer, rather than the iPad. There's currently no "file export" media option. <a href="http://www.google.co.uk/products?q=%22photo+frame%22+usb">Budget picture frames</a> usually have picture sorting, import/export, and USB/SD card support functions, but the iPad doesn't; it's strictly a secondary device. Any serious file organisation is supposed to be done on a parent computer, so don't expect to be able to sort your piccy collection on the iPad while sitting comfortably on your sofa.<br />There <span style="font-style: italic;">is</span> a USB/Cardreader accessory listed for the iPad ... the <a href="http://store.apple.com/us/product/MC531ZM/A">Camera Connection Kit</a> ... but Apple currently only describe it as allowing you to import files <span style="font-style: italic;">to</span> the iPad. To get the photos <span style="font-style: italic;">out </span>of the iPad, you're supposed to synch to the iPad's "parent" PC or Mac, and then save them from that parent device. In which case, it'd be faster to upload the files directly to the parent machine without going via the iPad. Not exactly "breezy".<br /><br /></li><li><span style="font-weight: bold;">Use standard peripherals.</span><br />As well as not having internal USB, the OS 3.x iPhone apparently doesn't support much in the way of bluetooth peripherals other than stereo headphones, and apparently doesn't even support <a href="http://www.apple.com/keyboard/">Apple's own bluetooth keyboard</a>. <a href="http://store.apple.com/us/product/MC533LL/A">Apple's "official" external keyboard for the iPad</a> is a dedicated iPad keyboard-and-stand, which only works in portrait mode. 
Health and safety regulations say that you aren't supposed to use keyboards in an office environment unless they're adjustable, and this looks like it probably isn't. But Apple seem to have realised that this restriction sucked <span style="font-style: italic;">too</span> much, and the iPad's OS 4.0 now seems to be more relaxed, and supports <a href="http://en.wikipedia.org/wiki/Apple_Wireless_Keyboard">Apple's general-purpose bluetooth keyboard</a> (which costs the same as the dedicated iPad keyboard).<br />Unless the iPad's "OS 4.0" is a radical departure from 3.x, you probably also won't be able to zap contacts or notes or files into the iPad from general bluetooth peripherals, like you can with decade-old bluetooth-equipped Palm devices. I used to carry about a pocket-sized Targus folding keyboard and an OCR pen-scanner device with my old Palm organiser. Nothing like that seems to be available for the iPad.<br /><br /></li><li><span style="font-weight: bold;">Record stereo audio. </span><br />Apple want you buying music, not recording it, so while the Apple dock connector has pins for stereo in, the official iPad Apple specifications don't <a href="http://www.apple.com/ipad/specs/">commit to the pins doing anything</a>. Maybe they're connected, maybe they're not. If they are, great. But it's a brave third-party manufacturer who releases a product or connector for a function that an Apple device isn't guaranteed to have – even if your gadget works <span style="font-style: italic;">now</span>, one OS revision later it might not (see also <span style="font-style: italic;">(2) external FM radio</span>). As a playback-only media centre, the iPad again has the problem that onboard organisation is limited – you're supposed to do all your media organising on a separate parent computer, and iTunes usually won't recognise album art originating on a PC. Often it won't recognise PC-ripped tracks and let you download replacement artwork, either. 
Of course, if you're sick of watching CoverFlow "flipping" blank squares, you can always buy your albums over again as Apple downloads, or rip the CDs again using a mac ...<br /><br /></li><li><span style="font-weight: bold;">Use unapproved software. </span><br />Apple reserve the right to decide what software you run on your machine, and there are certain sorts of applications they really don't want you to have. You normally aren't even allowed to load your own media files onto an iPhoneOS device unless the iTunes "sentry" approves – the iPx range won't emulate a basic thumb drive.<br />You can often upload these "unapproved" apps and use your iPx gadget as a file caddy, by hacking past the Apple firmware's protection to expose the internal filesystem over USB – "<a href="http://en.wikipedia.org/wiki/Jailbreaking_for_iPhone_OS">jailbreaking</a>" – but jailbreaking doesn't always work on all models, and it's too early to know what eventual proportion of iPads are likely to be jailbreakable.<br /><br /></li><li><span style="font-weight: bold;">Camera functions </span><br />iPhone OS 4 is supposed to finally add proper support for camera functions, but the iPad doesn't actually have a camera. In theory it'd be easy to add support for a camera that snaps onto the dock connector, but AFAIK, no third-party manufacturer has yet produced one.<br />It's probably easy <span style="font-style: italic;">in theory</span> to support a swivellable webcam that can point forwards as a camera or backwards for video calls, but that'd need the device to be held upside down with the dock connector at the top. There's no technical problem with this … except that Apple's own OS 3.x applications refuse to work in upside-down mode. On OS4, the onboard applications <span style="font-style: italic;">are</span> supposed to work in any orientation, but it's still a bit discouraging for manufacturers to know that if they launch a camera, it won't work well on v3.x devices. 
There's also the possibility that if Apple <span style="font-style: italic;">do</span> decide to embrace the idea of an add-on camera, they won't make the function ready until they have a camera of their own to sell. You could buy rotatable snap-in cameras for some Palm organisers <a href="http://www.dpreview.com/news/article_print.asp?date=0110&article=01102202sonypegamsc1">nearly ten years ago</a>, so the iPad's still lagging behind in this respect.<br />And there are some useful camera-aware apps: the <a href="http://www.evernote.com/">Evernote notetaking app</a>s let you snap images (memos, restaurant menus, street signs), save them with geotagging data, and apply OCR to add the text in the image to a searchable comments field. If you have an iPhone with Evernote, and someone shows you their contact details on their smartphone screen or a business card, you can snap a photo and get a text file. But without a camera, none of this cool stuff will currently work on the iPad. Evernote also has a nice voicenotes feature, but again, on the iPad ... no onboard mic.<br />So, no <a href="http://en.wikipedia.org/wiki/Skype">Skype</a> video calling.<br /><br /></li><li><span style="font-weight: bold;">SIM-swapping.<br /></span>The iPad isn't locked-in to a particular phone provider (hooray!), but the bad news is that if you've just bought a high-capacity service plan for your iPhone, and you want to transfer it to your iPad (which you expect to be using for all your serious mobile web-browsing from now on), you can't. <a href="http://www.wired.com/epicenter/2010/01/ipad-mini-sim/">The SIMs are physically different sizes.</a> The iPhone uses a standard-sized SIM; the larger iPad uses a smaller micro-SIM. In theory, a micro-SIM with a holder can fit into a full-size SIM slot, but the chances are that if you're an existing iPhone owner, you won't have one of those. 
Apple enthusiasts have gotten used to Apple engineering-in incompatibilities with other manufacturers' products, but some have gotten a bit annoyed at what looks like a deliberate incompatibility with other <span style="font-style: italic;">Apple</span> products.<br /></li></ol><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">The iPad isn't really what Steve Jobs said it was.</span> It's not a device that's designed to sit in some middle ground between netbooks and laptops, because those two types of device can do pretty much everything on the list.<br /><br />The iPad's purpose is straightforward: it's designed to kill sales of the <a href="http://en.wikipedia.org/wiki/Amazon_Kindle">amazon Kindle</a>, break <a href="http://www.amazon.com/">amazon's</a> stranglehold on ebook sales, and let Apple add ebook and magazine retailing to their existing music-and-movies portfolio. It's a conduit.<br />It has to be five hundred dollars in order to crush the <a href="http://www.amazon.com/Kindle-Wireless-Reading-Display-Generation/dp/B0015TG12Q">Kindle DX</a>; at $500, its facilities have to be limited in order to avoid undercutting <a href="http://www.apple.com/mac/whichmacbook/compare.html">Apple's own laptop range</a> (which starts at a thousand dollars); and it has to be based on the iPod Touch (with an updated "iPhone OS" and a bigger screen) to give it an established sales channel, because that's the "other" OS that Apple have, because that preserves separation between the iPad and the more expensive OSX-based products, and because that makes it more difficult for people to dig out and redistribute downloaded paid-for content.<br /><br />Those three things pretty much define it.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric 
Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com1tag:blogger.com,1999:blog-480555353132580100.post-50882560425665849212010-04-27T01:53:00.001+01:002010-04-27T03:03:47.524+01:00'Circular' Polyhedra, and the Apollonian Net<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUbkgBpoWcRfF6-ffH3biu6nqogjfVLVFZYJm9c36Cixym_u1yehu90azjPLmF4PWmGTlTabtgVD5UgtFmv4QHCMIcMCtQL5g76wgxGCUAnzfS3DDyIrZVHDz5Hp8z3xmmVF-eFVw53RU/s1600/Apollonian_Net_RiCS_P002.jpg"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUbkgBpoWcRfF6-ffH3biu6nqogjfVLVFZYJm9c36Cixym_u1yehu90azjPLmF4PWmGTlTabtgVD5UgtFmv4QHCMIcMCtQL5g76wgxGCUAnzfS3DDyIrZVHDz5Hp8z3xmmVF-eFVw53RU/s400/Apollonian_Net_RiCS_P002.jpg" alt="Fractal circular tiling, giving the Apollonian Net / Apollonian Gasket / Liebniz packing diagram" id="BLOGGER_PHOTO_ID_5464601576630065298" border="0" /></a>This is the nice design that I used on page 2 of <a href="http://www.scribd.com/doc/30090609/Relativity-in-Curved-Spacetime">the book</a>.<br /><span style="font-weight: bold;"><br />Annoyingly, rather a lot of other people discovered it before me:</span> it's indexed on Wikipedia as the <a href="http://en.wikipedia.org/wiki/Apollonian_gasket"><span style="font-weight: bold;">Apollonian Net</span></a>, after <a href="http://en.wikipedia.org/wiki/Apollonius_of_Perga">Apollonius of Perga</a><a href="http://en.wikipedia.org/wiki/Apollonius_of_Perga"> (~262 BC – ~190 BC)</a>, and it's also referred to elsewhere as the <span style="font-weight: bold;">Leibniz Packing</span> diagram, after <a href="http://en.wikipedia.org/wiki/Gottfried_Leibniz">Gottfried Leibniz (1646-1716)</a>, Newton's rival for <a href="http://en.wikipedia.org/wiki/Leibniz_and_Newton_calculus_controversy">the invention of calculus</a>. 
I've even seen it credited to the design of the floor of a Greek temple. But frankly, it's such a nice shape that I'm sure that people have been discovering and rediscovering it for millennia. Draw three touching circles, fill in the inviting gap in the middle with more circles, and when you're feeling pleased with yourself and wondering what to do next, step back and look at the whole thing, draw in a bigger circle to enclose everything (facing away from you), and repeat. That's how I got there, anyway.<br /><br />There's some rather interesting geometry here to do with tangents, but I got impatient trying to get a complete derivational method, and generated the figures using a vector graphics program (<a href="http://en.wikipedia.org/wiki/CorelDRAW">CorelDraw10</a>), driven by an automating script, using a mix of partial derivations, testing, and brute force. If you're calculating a chain of circles that might be twenty or thirty stages long, successive rounding errors tend to screw up these diagrams when you calculate them "properly" (look at the overlap of the smaller circles in the Wikipedia vector graphics version), and my priority was to make sure that the circles <span style="font-style: italic;">really did</span> fit, so I used a hybrid approach where I used trig to get each circle into the ballpark of its proper destination w.r.t. its parents, and then a successive approximation method with error correction to tweak and nudge and jiggle everything snugly into place.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">The Apollonian Net makes more sense when you stretch it over the surface of a sphere</span>, so that the four largest "primary" circles are all the same size, and are explicitly equivalent. 
They then form the intersection of the sphere with the four faces of a tetrahedron, giving the fractal-faceted solid that I used as a vignette <a href="http://www.scribd.com/doc/30090609/Relativity-in-Curved-Spacetime"><span style="font-weight: bold;">on page 378</span></a>.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://erkdemon.blogspot.com/2009/03/hyperbolic-planar-tesselations-by-don.html"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 256px; height: 256px;" src="http://www.relativitybook.com/CoolStuff/erkfractals/sphere_cut.gif" alt="Infinitely-truncated sphere, giving an infinite-sided polygon with circular faces, whose map corresponds to an Apollonian Net" /></a>There are two main ways to construct this solid:<br /><span style="font-weight: bold;">1: Start with a sphere</span> and grind four flat circular faces into it that correspond to the four faces of an intersecting <a href="http://en.wikipedia.org/wiki/Tetrahedron">tetrahedron</a>, then keep grinding maximum-sized circular facets into the remaining curved parts, <span style="font-style: italic;">ad infinitum</span>.<br /><br /><span style="font-weight: bold;">2: Start with a tetrahedron</span>, and lop off the four points to give a shape with four regular hexagonal faces, and four new triangular faces where the tips used to be. Then continue lopping off the remaining points, <span style="font-style: italic;">ad infinitum</span>. Each wave of cutting creates a new face at each cut, and doubles the number of sides on all the existing faces. 
If we cut at a depth that'll keep these polygons regular, then with an arbitrarily-high number of cuts, the faces converge toward perfect circles, and the point-mesh of the resulting peaks converges downwards to settle onto the surface of the sphere used in method 1.<br /><br />Either way works.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">This sort of </span><a style="font-weight: bold;" href="http://mathworld.wolfram.com/PlatonicSolid.html">duality is common when we construct standard polyhedra</a> – the network of relationships in a regular polyhedron tends to be <span style="font-style: italic;">another</span> regular polyhedron, so we can usually get to a regular shape by starting from either of its two relatives. Four of the five <a href="http://en.wikipedia.org/wiki/Platonic_solid">Platonic solids</a> pair up nicely like this, and the last – the tetrahedron – is a special case whose "dual solid" partner is another tetrahedron. But we normally only consider these sorts of dualities when considering combinations of regular polygons with finite numbers of rectilinear sides <span style="font-style: italic;">with each other</span>, and don't include the infinite-sided fractal shapes that show up when one of the parent solids is an infinitely-faceted sphere (which, in some ways, <span style="font-style: italic;">almost</span> counts as a sixth Platonic solid).<br /><br />We don't have to start with a tetrahedron; we can make these fractal solids from any regular polyhedron (cube, etc.). But the tetrahedral and icosahedral versions probably look the nicest. 
I find the cube-based version a bit disappointing, but I grew up with rounded-cornered dice with circular faces, so perhaps I'm just a bit <em>blasé</em> about the solid that corresponds to the "six-circle" version of the Apollonian net.<br /><br />From here, we have three immediate ways to generate new families of solids:<br /><span style="font-weight: bold;">(1)</span> We can choose different starting solids, <span style="font-weight: bold;"><br />(2)</span> we can vary the <span style="font-style: italic;">number</span> of cuts or cutting stages (from zero to infinity), to produce finite-sided solids that look more like cut gemstones, and <span style="font-weight: bold;"><br />(3)</span> we can vary how the cutting is done. If we make our cuts too shallow, then the facets are distorted away from circularity, and the overall shape isn't a sphere, but has flat-topped bulges where the original polyhedral points used to be. If we cut too deep, we get bulges in the shape of the original solid's "dual" sibling, with each bulge tipped by an edge.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">Another cool thing about these nets</span> is their topological transformability. With the "closed" version, every circle has three parents of the same size or larger, including the four primary circles (who count as each other's parents). You can transform between the different versions of the net by warping and resizing, while still keeping everything as circles.<br /><br />This lets us get to tilings that don't automatically suggest standard polyhedra, such as the "two-large-enclosed-circles" version that I used for the "fractal Yin-Yang" symbol <a style="font-weight: bold;" href="http://www.scribd.com/doc/30090609/Relativity-in-Curved-Spacetime">on page 145</a>, and the asymmetrical versions <a href="http://www.scribd.com/doc/30090609/Relativity-in-Curved-Spacetime"><span style="font-weight: bold;">on page 224</span></a>. 
And once I'd written the scripts and code to generate these figures, I had a few more blank bits in the book to fill, so I knocked up the "triangular boundary" version <a href="http://www.scribd.com/doc/30090609/Relativity-in-Curved-Spacetime"><span style="font-weight: bold;">on page 370</span></a> which, actually, has some other interesting proportions. The "triangle" version includes parts that represent the limiting case of the edge of the Apollonian Gasket when we zoom in so far that the outer circle tends toward a straight line. Filling these voids then gives the special-case <a href="http://en.wikipedia.org/wiki/Ford_circles">Ford Circles</a> tiling.<br /><br />Some serious people have worked on this subject. You can also Google <a href="http://en.wikipedia.org/wiki/Descartes%27_theorem">Descartes' Theorem</a> (after <a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Descartes_theorem">René Descartes (1596-1650)</a>), and <a href="http://mathworld.wolfram.com/SoddyCircles.html">Soddy Circles</a>. 
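For readers who want to play with the numbers, Descartes' Theorem is easy to code up. The sketch below is my own illustration (not taken from the book or the blog's scripts), using the usual signed-curvature convention, where a circle that encloses the other three gets a negative curvature:

```python
import math

def descartes_fourth(k1, k2, k3):
    """Descartes' Circle Theorem: given the signed curvatures (k = 1/r,
    negative for an enclosing circle) of three mutually tangent circles,
    return the two possible curvatures of a fourth tangent circle:
        k4 = k1 + k2 + k3 +/- 2*sqrt(k1*k2 + k2*k3 + k3*k1)
    """
    s = k1 + k2 + k3
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root  # (small inner circle, large outer circle)

# Three mutually tangent unit circles: the "+" solution is the small
# circle nestling between them (k = 3 + 2*sqrt(3)); the "-" solution is
# negative, i.e. the big circle that encloses all three.
k_inner, k_outer = descartes_fourth(1.0, 1.0, 1.0)
```

Applying the "+" solution recursively to every triple of tangent circles is what fills in the ever-smaller circles of the Apollonian net.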
<a href="http://en.wikipedia.org/wiki/Lester_R._Ford">Lester Ford</a> and <a href="http://en.wikipedia.org/wiki/Frederick_Soddy">Frederick Soddy</a> only produced <span style="font-style: italic;">their</span> papers in 1936 and 1938, so the Apollonian Net involves math research that extends across more than two thousand years, and isn't finished yet.<br /><br />It would have been nice to meet the person who designed that floor, though.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-23707720852711250072010-04-18T23:48:00.000+01:002010-04-19T01:46:10.815+01:00Ultra-high resolution photography<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW0aphOGJjmIXE7haxWFsWf3LnTbMQFKJRODd7EMnIl_RLwPY7H7TtEPdBlkU74eZlg73OdEhanava_2nXypGOUT68I6MuBi4gfF7dHl42bInRcLQ1EoUra0Kvw94sVQ__DX18CVscjKo/s1600/SmileyTest.gif"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 215px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW0aphOGJjmIXE7haxWFsWf3LnTbMQFKJRODd7EMnIl_RLwPY7H7TtEPdBlkU74eZlg73OdEhanava_2nXypGOUT68I6MuBi4gfF7dHl42bInRcLQ1EoUra0Kvw94sVQ__DX18CVscjKo/s400/SmileyTest.gif" alt="" id="BLOGGER_PHOTO_ID_5461641182497208146" border="0" /></a><span style="font-weight: bold;"><a href="http://erkdemon.blogspot.com/2009/05/jitter.html">The "jitter" method (earlier post)</a> can also be used for <span style="font-style: italic;">ultra-high-resolution photography</span>. </span><br /><br />People want higher-resolution cameras, but the output resolution of a camera is usually limited by the number of pixels in its sensor. 
Some digital cameras have a "<span style="font-weight: bold;">digital zoom</span>" function, but this is a bit of a cheat: it simply invents extra pixels between the real pixels by smudging the adjacent colour values together. Conventional digital zoom doesn't actually give you any additional information or detail, it just resizes a section of the original image to fill the required size.<br /><br />A second problem with cameras is <span style="font-weight: bold;">camera shake</span>. If you're holding the camera in your hand, then a tiny movement of the camera can result in the image being panned across the sensor while the <a href="http://en.wikipedia.org/wiki/Charge-coupled_device">CCD imaging chip</a> is doing its thing, giving a blurred photograph. The smaller the pixel elements, and the greater the optical zoom, the worse this gets. We can try clamping the camera and taking a shorter-exposure image (so that the camera doesn't have as much time to move), but shorter exposures lead to more random "noise" per pixel, due to the reduced sampling time.<br /><br /><hr align="left" width="25%"><br />But with enough processing power, we can use jitter techniques to solve both problems:<br /><span style="font-weight: bold; font-style: italic;">In our <a href="http://erkdemon.blogspot.com/2009/05/jitter.html">earlier "audio" example</a></span>, we deliberately added high-frequency noise to an audio signal to shift the sampling threshold up and down with respect to the signal, and we took multiple samples and overlaid them to achieve sub-sample resolution.<br /><span style="font-style: italic; font-weight: bold;">With digital photography</span> we can use "positional" noise: we vary the alignment of the camera sensor to the background image, take multiple samples, and overlay <span style="font-style: italic;">those</span> (aligned to subpixel accuracy), to generate images that have higher resolution than the camera sensor. 
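As a toy numerical sketch of that overlay idea (my own illustration; real "drizzle"-style software is considerably more sophisticated): if we know each frame's sub-pixel offset, we can deposit its pixels onto a grid finer than the sensor and average whatever lands in each cell.

```python
def shift_and_add(frames, offsets, scale):
    """Naive 'shift-and-add' super-resolution: each low-res frame was
    sampled at a known sub-pixel offset (dy, dx), in low-res pixel units.
    Deposit its pixels onto a grid `scale` times finer than the sensor,
    then average each fine-grid cell over all the frames that hit it."""
    h, w = len(frames[0]), len(frames[0][0])
    H, W = h * scale, w * scale
    acc = [[0.0] * W for _ in range(H)]   # accumulated intensity
    cnt = [[0] * W for _ in range(H)]     # number of samples per cell
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                Y = (y * scale + round(dy * scale)) % H
                X = (x * scale + round(dx * scale)) % W
                acc[Y][X] += frame[y][x]
                cnt[Y][X] += 1
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(W)] for y in range(H)]
```

With a good spread of offsets, the fine grid fills in and the result carries genuinely more detail than any single frame, which is the whole point of the jitter trick.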
In some ways, this is a little like the <a href="http://en.wikipedia.org/wiki/Nipkow_disk">Nipkow disc</a> approach used in early television systems, which often used a swept array of fewer than a hundred sensor elements to provide a passable image ... in <span style="font-style: italic;">this</span> case, we're not sweeping a linear strip of sensors at right angles, but an entire <span style="font-style: italic;">grid</span> of pixel elements, and using their random(-ish) offsets to extract real intermediate detail.<br /><br />Instead of camera shake being a problem, it becomes Our Friend! The individual images will be noisier, but when you recombine a second's worth of images, the end result should have noise levels comparable to a single one-second exposure – and since you might not normally <span style="font-style: italic;">try to take</span> a one-second exposure (because of camera stability issues), static scenes might sometimes end up with reduced noise as well as enhanced resolution.<br /><br />So, if we have a <a href="http://www-isl.stanford.edu/%7Eabbas/group/imaging.shtml">programmable camera</a>, in theory it's possible to design an "ultra-resolution" mode that fires off a series of short-exposure images while we hold the camera, and then makes us wait while its processor laboriously works out the best way to fit all the shots together ... or saves the individual shots to their own directory, to be assembled later by a piece of desktop software.<br />If we were able to design the camera from scratch, we'd probably also want to include a gadget to deliberately nudge the CCD sensor diagonally while the component shots were being taken. If the software's smart enough, the nudging doesn't have to be particularly accurate, it just has to give the sensor a decent spread of deliberate misalignments. 
A cheap little <a href="http://en.wikipedia.org/wiki/Piezoelectricity">piezo</a> device might be good enough.<br /><br /><hr align="left" width="25%"><br />The problem with this approach is getting hold of the software: in theory, you can try aligning images by hand, but in practice ... it doesn't really seem sensible.<br />People are already writing algorithms for this sort of stuff – it's what allows the <a href="http://hubblesite.org/">Hubble space telescope</a> to take those absurdly high-resolution images of distant galaxies, and presumably the military guys also use the technique to get extreme resolution enhancements from spy satellite hardware. For analysing and aligning photos with "free-form" offsets, the necessary techniques already seem to be included in the <a href="http://cvlab.epfl.ch/%7Ebrown/autostitch/autostitch.html">Autostitch</a> <a href="http://erkdemon.blogspot.com/search/label/panoramic%20images">panoramic software</a>, which even includes the ability to distort images to make them fit together better – it wouldn't seem to take a lot to turn Autostitch into an ultra-resolution compositor.<br /><br /><a href="http://www.stsci.edu/hst/wfpc2/analysis/drizzle.html">Amateur astronomers are now enthusiastically using the technique, and sharing resources</a> (try using "<a href="http://en.wikipedia.org/wiki/Drizzle_%28image_processing%29">drizzle</a>" as a Google search keyword).<br />Suppose that you want to take an ultra-high resolution photograph of the full Moon – you train your camera-equipped telescope at the Moon, lock it down, and set it to keep taking ten pictures per second for an hour while the Moon gradually arcs across the sky and its corresponding image crawls across your image-sensor ... and then feed the resulting thirty-six-thousand-odd images into a sub-pixel alignment program, to chew over for a few weeks and pull out the underlying detail. 
As long as the matching algorithm knows that it's supposed to be lining up the part of the images that contain the big round yellow thing rather than the clouds or the treetops, there wouldn't seem to be any real limit to the achievable resolution. Okay, so you have different atmospheric distortions when the Moon is in different parts of the sky, and when the air temperature drifts, but with a sufficiently-smart autostitch-type warping, even that shouldn't be a problem. If you didn't have a "rewarping" feature, you'd probably just have to decide which <span style="font-style: italic;">part </span>of the moon you wanted the software to use as a master-key when lining up the images.<br /><br /><hr align="left" width="25%"><br />Techniques like this go beyond conventional photography and enter the territory of <span style="font-weight: bold;">hyperphotography</span> – we're capturing additional information that goes beyond our camera's conventional ability to take images, and doing things that, at first sight, would seem to be physically impossible with the available hardware. 
A bit of knowledge of <span style="font-weight: bold;">quantum mechanics</span> principles is useful here: we're not actually breaking any laws of physics, but we're shunting information between different domains to obtain results that sometimes <span style="font-style: italic;">seem</span> impossible.<br /><br />There's a whole family of hyperphotographic techniques: I'll try to run through a few others in a future post.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-11791278152875394192010-04-10T02:53:00.001+01:002010-04-12T18:43:14.187+01:00Titanic Syndrome<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC28ZVWicjS-ECq9yujOREp24hEYzm9npqGmWvNyJhqNXVC497vhaCTbUjhTUpo_wzyRtptmrICid3kgLopfSpBv8ePBl9AYzdv45aVuw1m_X-hGnHPc8nl-JEaFX_5IvC9RDFRz5IQNY/s1600-h/titanic_plaque.jpg" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="RMS Titanic Memorial Plaque, detail, Eastbourne Bandstand" id="BLOGGER_PHOTO_ID_5420454845722120962" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC28ZVWicjS-ECq9yujOREp24hEYzm9npqGmWvNyJhqNXVC497vhaCTbUjhTUpo_wzyRtptmrICid3kgLopfSpBv8ePBl9AYzdv45aVuw1m_X-hGnHPc8nl-JEaFX_5IvC9RDFRz5IQNY/s400/titanic_plaque.jpg" style="cursor: pointer; display: block; height: 300px; margin: 0px auto 10px; text-align: center; width: 400px;" border="0" /></a><span style="font-weight: bold;">On the 10th of April 1912</span>, the <a href="http://www.encyclopedia-titanica.org/"><b>RMS Titanic</b></a> set out on her first passenger-carrying voyage. The Titanic (and her <a href="http://en.wikipedia.org/wiki/Olympic_class_ocean_liner">Olympic-class sister-ships</a>) were state-of-the-art. 
They had a double-hulled design that meant that if one hull ruptured, the ship was still seaworthy. The ship was considered to be practically unsinkable.<br /><br /><a href="http://www.nmm.ac.uk/researchers/library/research-guides/rms-titanic/research-guide-d1-rms-titanic-fact-sheet">Four days later</a> it was at the bottom of the ocean with the bodies of 1517 crew and passengers. The "unsinkable" ship was arguably the most "sinky" ship in human history.<br />It's normally difficult to assign a "sinkiness" ranking to ships, given that each failed ship only normally manages to sink <i>once</i>, but by sinking <i>before it even made it to the end of its maiden voyage</i>, and killing <i>so many</i> people, the Titanic flipped straight from being supposedly one of the safest seagoing structures ever built, to one of the most dangerous.<br /><br /><hr align="left" width="25%"><span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&rfr_id=info%3Asid%2Focoins.info%3Agenerator&rft.genre=book&rft.btitle=Relativity+in+Curved+Spacetime&rft.title=Relativity+in+Curved+Spacetime&rft.atitle=Titanic+Syndrome&rft.isbn=0955706807&rft.aulast=Baird&rft.aufirst=Eric&rft.au=Eric+Baird&rft.date=2007&rft.pub=Chocolate+Tree+Books&rft.place=Eastbourne&rft.edition=1&rft.spage=327&rft.epage=327" style="font-weight: bold;"><br />Titanic Syndrome</span> isn't based on any specific mechanism. <a href="http://en.wikipedia.org/wiki/Syndrome">"Syndromes"</a> are recognisable convergences of trends, that can sometimes associate a particular outcome with a recognisable set of starting parameters. When we notice one of these patterns, we sometimes have a good idea how things are likely to end without having to know the mechanism that gets us there.<br /><br />In the case of Titanic Syndrome, the association is pretty self-explanatory: when people tell us that nothing can possibly go wrong, that everything's perfectly safe, that a plan is foolproof ... 
things usually turn out badly.<br /><br />Why did the Titanic disaster happen, and happen so emphatically? The obvious answer is that the ship sank because it struck an iceberg, but there are additional factors that track back to that initial belief that the ship was almost indestructible. If the ship's crew had been less confident, perhaps they'd have done a better job of keeping watch for ice, or cut their speed. If the shipyard had been less confident about the ship's hull, maybe they'd have built it with <a href="http://www.independent.co.uk/news/world/americas/cheap-rivets-blamed-for-massive-loss-of-life-as-titanic-sank-809622.html">better-quality materials</a>, rather than just assuming that if one hull failed there was a spare. And if the company hadn't been so sure that <a href="http://www.historyonthenet.com/Titanic/lifeboats.htm">lifeboats weren't really necessary</a>, perhaps they'd have included enough for everyone, and not so many people would have had to drown when the ship went down, while they were waiting to be rescued.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">In science</span>, <a href="http://en.wikipedia.org/wiki/Hyperbole">hyperbole</a> is usually an indicator that something's wrong. Theories that are described as "pretty good" usually are, but theories that we're told are <i>excellent</i>, or that <i>can't possibly</i> be wrong, usually turn out to be already failing, unnoticed. Titanic Syndrome.<br /><br />Theories that really <i>are</i> that good don't <i>need</i> to be oversold – it's usually possible to express confidence in an established model more convincingly with quiet understatement. On the other hand, if a core theory is right, but the people involved are still trying to exaggerate the case for it (even though their actions are likely to backfire), then if they're making <i>that</i> mistake, they've probably been making others, too. 
So "cheerleading" is usually a <a href="http://en.wikipedia.org/wiki/Red_flag_%28signal%29">red flag</a> that some things in the picture are likely to be dodgy, even if the fundamentals of a theory are right.<br /><br />And sometimes the "cheerleading" stops people noticing that the fundamentals are wrong. And those are the times ... when everybody's invested so strongly in something that they really don't want to believe in the possibility of problems, or start thinking seriously about fallback positions or lifeboats ... that you get another "Titanic-class" event.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com4tag:blogger.com,1999:blog-480555353132580100.post-22896235853461010362010-04-02T23:27:00.003+01:002010-04-11T03:00:16.865+01:00General Relativity is Screwed Up<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho3UZToEzYV8JY5OXhRVUQexUUb0Bm-UIbCkXEQDACLoIAF9VEWtiodKiY9juzX87XJ8fnxx4zE1Ibx5aODUZmFl350Y8N8dTejlZxqfeH3J7LpxLn3LY4go1QfHjfob0Zd5xBqKBjo6g/s1600/general_relativity_notgood.gif"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 114px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho3UZToEzYV8JY5OXhRVUQexUUb0Bm-UIbCkXEQDACLoIAF9VEWtiodKiY9juzX87XJ8fnxx4zE1Ibx5aODUZmFl350Y8N8dTejlZxqfeH3J7LpxLn3LY4go1QfHjfob0Zd5xBqKBjo6g/s400/general_relativity_notgood.gif" alt="" id="BLOGGER_PHOTO_ID_5458687725907751570" border="0" /></a><span style="font-weight: bold;">With Einstein's general theory of relativity</span>, one of the theory's harshest critics was probably Einstein himself. 
This was partly a matter of personal discipline, and partly – like the joke about sausages – because it's sometimes easier to like a thing if you don't know the gruesome details of how it was actually made. Einstein found it easy to be sceptical about the design decisions that had gone into his general theory, because he was the guy who'd made them. It had been the best general theory that had been possible at the time, said Einstein, but with the benefit of hindsight ... perhaps its construction wasn't entirely trustworthy.<br />The "iffy" aspects of C20th GR are difficult to see from <i>within</i> the theory, because – where the lower-level design decisions have forced a fudge or bodge – from the <i>inside</i>, these things seem to be completely valid, derived (and quite necessary) features. It's not until we look at the structure from the <i>outside</i>, with a designer's eye, that we see the arbitrary design decisions and short-term fudges that went into making the theory work the way it does.<br /><br />Sure, the <i>surface math</i> looks pretty (with no obvious free variables or adjustable parameters), but that's because, as part of the theory's development, all the ugliness necessarily got moved down to the definitional and procedural structures that sit below the math. Change those underlying structures, and the surface mathematics break and reform into a different network that looks similarly unavoidable. 
So even though the current system <i>looks</i> like the simplest possible theory when viewed from the inside, we can't invest too much significance in this, because if the shape and structure was different, <i>that'd</i> look like the simplest possible theory, too.<br /><br />To see how the theory might have been, we need to look at the subject's <i>protomathematics</i>, the bones and muscles and guts of the theory that dictate its overall shape, and which don't necessarily have a polite set of matching mathematical symbols.<br /><br />Here are two interlinked examples of decisions that we made in general relativity that weren't necessarily correct:<br /><h4>Problem #1: Gravitational dragging, velocity-dependent gravitomagnetic effects<br /></h4><blockquote><div style="font-size: small; color: rgb(51, 51, 51);"><p>As <a href="http://en.wikipedia.org/wiki/Hippolyte_Fizeau">Fizeau</a> demonstrated back in ~1849 with water molecules, moving bodies drag light. General relativity describes explicit <b>gravitomagnetic</b> dragging effects for accelerating and rotating masses, and logic pretty much then forces it to describe similar effects for relative velocity, too. When you're buffeted by the surrounding gravitational field of a passing star, the impact gives you some of the star's momentum – momentum exchange means that the interaction of the two gravitational fields acts as a sort of proxy collision, and the coupling effect speeds you up a little, and slows down the star, by a correspondingly tiny amount.<br /></p>For a <i>rotating</i> star, GR1915 also agrees that <a href="http://science.nasa.gov/headlines/y2004/19apr_gravitomagnetism.htm">you're pulled preferentially to the receding side</a> – there's an explicit velocity component to gravitomagnetism (v-gm). Even quantum mechanics seems to agree. 
And we can use this effect to calculate the existence of the <a href="http://www.scientificamerican.com/article.cfm?id=how-does-the-slingshot-ef">slingshot effect</a>, which is not just <i>theory</i>, but established engineering.<br /><p>But v-gm effects appear to conflict with <b><a href="http://csep10.phys.utk.edu/astr161/lect/history/newton3laws.html">Newton's First Law of Motion</a></b>: If all the background stars dragged light according to their velocity, then as you moved at speed with respect to the background starfield, the receding stars would pull on you a little bit stronger than the others, slowing you down. There'd be a preferred state of rest, that'd correspond to the state in which the averaged background starfield was stationary (ish). This doesn't agree with experience.<br /></p>So the v-gm effect gets edited out of current GR, and when we do slingshot calculations, we tend to <a href="http://maths.dur.ac.uk/%7Edma0rcj/Psling/sling.pdf">use Newtonian mechanics and model them in the time domain, instead</a>. We compartmentalise.<br /><h5 style="color: red;"><i>Summary:</i><br /></h5><span style="color:red;"><i>Argument: The omission of v-gm effects from general relativity seems to be arbitrary and logically at odds with the rest of the theory, but it seems to be “required” to force agreement with reality … otherwise “moving” bodies would show anomalous deceleration. </i></span><br /><p>I'd consider this a fairly blatant fudge, but GR people would tend to refer to it as essential derived behaviour (based on the condition that the theory has to agree with reality).<br /></p></div></blockquote><h4>Problem #2: Gravitational Aberration</h4><blockquote><div style="font-size: small; color: rgb(51, 51, 51);"><p>If signals move at a finite speed, the apparent positions of their sources get distorted by relative motion. 
We "see" a source to be pretty much in the direction it was when it emitted the signal, with a position and distance that's out of date, thanks to the signal timelag.<br /></p>If gravitational and optical signals both move at about the same speed, "<i>c</i>", (ignoring nonlinear complications), then we expect to "feel" the gravitational signal of a body to be coming from the same position that the object is seen to occupy. Which is kinda helpful.<br /><p>But it seems that under current GR, the apparent "gravitational" position of a body gets assigned to its instantaneous position, as if the speed of gravity was infinite. We say that the speed of gravity isn't <i>actually</i> infinite, but that moving bodies somehow "project" their field forwards and then sideways so that it <i>looks</i> infinite as far as the observer's measurements are concerned. In other words, it seems that under current GR, <a href="http://www.math.ucr.edu/home/baez/physics/Relativity/GR/grav_speed.html">there's no such thing as gravitational aberration</a>.<br /></p>This is a bit like the sound of fingernails scratching down a blackboard. It means that there's no longer the concept of a body having a single observed position, and we get separate definitions of "apparent position" for EM and gravity. This badly weakens the theory, because it means that mismatches between the two that we might normally look out for to show us that we've made a mistake somewhere, are the theory's default behaviour. We lose a method of testing or falsifying the model.<br /><p>So why do we do it?<br /></p>We...ell, the usual argument involves <a href="http://www.math.ucr.edu/home/baez/physics/Relativity/GR/grav_speed.html">planetary orbits and the apparent position of the Sun as seen by an observer on a rotating planet</a>. But that argument's complicated and perhaps still a bit unconvincing, so … the simpler argument is that if gravitational aberration existed, it'd again seem to screw up Newton's First Law. 
When an astronaut travels through the universe at high speed, the background stars appear to bunch together in front of them (<i>e.g.</i> <a href="http://128.112.100.2/%7Ekirkmcd/examples/mechanics/scott_ajp_38_971_70.pdf">Scott and van Driel, Am.J.Phys <b>38</b> 971-977 (1970)</a> ), and if the gravitational effect of all those stars was shifted to the front as well, then we'd expect the astronaut to be pulled towards the region of highest apparent mass-density … forwards … and this'd further increase their forward speed, making the aberration effect even worse, which'd then create an even stronger forward pull.<br /><p>So again, we manually edit the effect out, <a href="http://xxx.lanl.gov/abs/gr-qc/9909087">say that it's known not to exist</a>, and then do whatever we have to do with math and language to stop the theory contradicting us.<br /></p><h5 style="color: red;">Summary:</h5><span style="color:red;"><i>Argument: Losing gravitational aberration seems to be arbitrary and logically at odds with the rest of the theory, but seems to be "required" to force agreement with reality … otherwise "moving" bodies would show anomalous acceleration. </i></span></div></blockquote><br /><hr align="left" width="25%"><br />Put these two arguments together, and you should immediately begin to see the problem:<br /><br />If we'd resisted the "urge to fudge", it looks as if our two problems would have eventually canceled each other out anyway, without our having to get involved. They seem to have the same characteristic and magnitude, but different signs. One produces anomalous acceleration, the other anomalous deceleration. Put them together and the moving astronaut doesn't accelerate <i>or</i> decelerate, because the stronger rearward pull of the fewer redshifted stars behind them is balanced by the increased number of stars ahead, which are blueshifted and individually weakened. 
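The forward "bunching" that Scott and van Driel describe is just standard special-relativistic aberration, which is easy to evaluate numerically. This is a textbook-SR sketch added for illustration, not part of the original argument:

```python
import math

def aberrated_angle(theta, beta):
    """Special-relativistic aberration: a source at angle `theta` (radians,
    measured from the observer's direction of motion) is seen by an
    observer moving at beta = v/c at the smaller angle theta', where
        cos(theta') = (cos(theta) + beta) / (1 + beta*cos(theta)).
    """
    return math.acos((math.cos(theta) + beta) / (1.0 + beta * math.cos(theta)))

# At half lightspeed, a star that is "really" directly abeam (90 degrees)
# appears dragged forward to 60 degrees; as beta approaches 1, the whole
# starfield crowds into a shrinking patch dead ahead.
```

The aberration formula supplies the "increased number of stars ahead" half of the cancellation argument above; the Doppler weighting of each star's pull supplies the other half.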
Instead of our imposing N1L-compliance on general relativity as a necessary initial condition, the theory works out N1L all by itself, as an emergent property of curved spacetime.<br /><br />So in these two cases, we seem to have corrupted the "deep structure" of the current general theory of relativity not once but <i><b>twice</b></i>, by trying to solve problems sequentially rather than letting the geometry generate the solutions for us, organically. Both "deleted" effects turn out to be necessary for a "purist" general theory … but once we'd fudged the theory <i>once</i> to eliminate <i>one</i> of them, we had to go back and fudge the theory a <i>second</i> time to eliminate the second effect that would otherwise have balanced it out.<br /><br />And in doing that, we didn't just "double-fudge" a few details of the theory, we broke important parts of the structure that should have allowed it to expand and blossom into a larger, more tightly integrated, more strictly falsifiable system that could have embraced quantum mechanics and dealt properly with cosmological issues. General relativity <i>should</i> have been a tough block of dense, totally interlocking theory, with independent multiply-redundant derivations of every feature, rather than the thing we have now.<br /><br /><hr align="left" width="25%"><br />The fudging of these two issues also changed some of the theory's physical predictions:<br /><br />Losing gravitational aberration gave us a different set of observerspace definitions that altered the behaviour of horizons. Losing v-gm meant that we got different equations of motion, once again a different behaviour for black holes, and no way of applying the theory properly to cosmology without generating further cascading layers of manual corrections reminiscent of the old <a href="http://en.wikipedia.org/wiki/Deferent_and_epicycle#Epicycles_on_epicycles">epicycle</a> approach to astronomy. 
It also created a statistical incompatibility with quantum mechanics.<br /><br />So general relativity in its current form seems to be pretty much screwed. GR1915 was fine as an initial prototype, but it should really have been replaced half a century ago – in 2010, it's an ugly, crippled, mutated, limited form of what the theory <i>could</i>, and <i>should</i> have been by now. But because people fixate on the math rather than on the structure, they can't see the possibility of change, or the beauty of what general relativity always had the potential to become. And that's why the subject's been almost stalled for pretty much the last fifty years, it's because Einstein died, and too many of the surviving physics people who did this stuff couldn't see past the mathematical and linguistic maze that'd developed around the subject, they didn't "get" the design principles and the dependencies between the choice of initial design decisions and the characteristics of the resulting model, and they didn't appreciate the design aesthetics.<br /><br />And I find that sad on so many levels.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-57276714906027906502010-03-28T03:48:00.001+01:002010-03-28T17:45:29.543+01:003D Audio, and Binaural Recording<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKZzo5Qqa1H6ufm4yK1493bDi9OXkZoUMX6HSvkBMeZZk0EDQbDRI2BJ6HDOLGuRAyaDr8PFP2bvzmYTZ8rvkDpFclQTB0VOLd6jy9N-UQ7_dtXYRna6NkC39lD1WKHMa_1OBn4xGthB0/s1600/dummyhead.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Binaural recording: NIH 'Virtual Human' head cross-section, Neuman KU100 'dummy head' binaural microphone (inverted image), Sound 
Professionals in-ear microphone (left ear)" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKZzo5Qqa1H6ufm4yK1493bDi9OXkZoUMX6HSvkBMeZZk0EDQbDRI2BJ6HDOLGuRAyaDr8PFP2bvzmYTZ8rvkDpFclQTB0VOLd6jy9N-UQ7_dtXYRna6NkC39lD1WKHMa_1OBn4xGthB0/s320/dummyhead.jpg" /></a></div><br />
One of the dafter things they teach in physics classes is that because humans only have two ears, we can only hear location by comparing the loudnesses of a sound in both ears, and that because of this we can only hear "lefty-rightiness", unless we start tilting our heads.<br />
<br />
It's wrong, of course: Physics people often suck at biology, and (non-physicist) humans are actually pretty good at pinpointing the direction of sound-sources, without having to tilt our heads like sparrows, or do any other special location-finding moves. <br />
<br />
And we don't just perceive sound with our ears. It's difficult to locate the direction of a backfiring car when it happens in the street (because the sound often reflects off buildings before it reaches us) ... but if it happens in the open, we can directionalise by identifying the patch of skin that we felt the sound on (usually chest, back, shoulder or upper arm), and a perpendicular line from that "impact" patch then points to the sound-source.<br />
For loud low-frequency sounds, we can also feel sounds through the pressure-sensors in our joints.<br />
<br />
But back to the ears ... while it's obviously true that we only have two of them, it's <i>not</i> true that we can't use them to hear height or depth or distance information. Human ears aren't just a couple of disembodied audio sensors floating in mid-air; they're embedded <i>in your head</i>, and your head's acoustics mangle and colour incoming sounds differently depending on direction, especially when the sound has to pass <i>through</i> your head to get to the other ear. The back of your skull is continuous bone, whereas the front is hollow, with eyeballs and eyesockets and <a href="http://en.wikipedia.org/wiki/Paranasal_sinuses">naso-sinal cavities</a>, with <a href="http://en.wikipedia.org/wiki/Eustachian_tube">Eustachian tubes</a> linking your throat and eardrums from the inside. You have a flexible jointed spine at the back and a soft hollow cartilaginous windpipe leading to a mouth cavity at the front, and as sounds pass through all these different materials to reach both ears, they get a subtle but distinctive set of differential frequency responses and phase shifts that "fingerprint" them based on their direction and proximity. <br />
<br />
To make the colouration even more specific, we also have two useful flappy things attached to the sides of our heads, with cartilaginous swirls that help to introduce more colourations to sounds depending on where they're coming from. Converting all these effects back into direction and distance information probably requires a lot of computation, but it's something that we learn to do instinctively when we're infants, and we do it <i>so</i> automatically that – like judging an object's distance by waggling our eye-focusing muscles – we're often not aware that we're doing it. <br />
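To get a feel for just the two crudest of these cues – the microsecond-scale arrival-time difference and the small level difference between the ears – here's a toy Python sketch that positions a mono signal using nothing else. It uses Woodworth's classic spherical-head approximation for the time difference; the head radius, the ~3 dB maximum level drop, and the function names are my own illustrative assumptions, and a real head-related transfer function is a measured, frequency-dependent filter rather than a single delay-plus-gain:<br />

```python
import numpy as np

SPEED_OF_SOUND = 343.0    # m/s, dry air at roughly room temperature
HEAD_RADIUS = 0.0875      # m, a commonly assumed "average" head radius

def itd_seconds(azimuth_rad):
    # Woodworth's spherical-head approximation of the interaural time difference
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))

def pan_binaural(mono, azimuth_deg, sample_rate=44100):
    """Crudely position a mono signal: positive azimuth = source to the right.
    Returns a (left, right) pair of channels."""
    az = np.radians(azimuth_deg)
    delay = int(round(abs(itd_seconds(az)) * sample_rate))   # whole samples
    ild_gain = 10.0 ** (-3.0 * abs(np.sin(az)) / 20.0)       # toy level drop, ~3 dB max
    near = np.asarray(mono, dtype=float)
    # Far ear gets the signal slightly later and slightly quieter
    far = np.concatenate([np.zeros(delay), near])[:len(near)] * ild_gain
    return (far, near) if az > 0 else (near, far)
```

Even this crude delay-and-gain model produces a convincing left-right image over headphones – a source at 90° works out to roughly 0.65 ms of interaural delay, about 29 samples at 44.1 kHz. What it <i>can't</i> do is distinguish front from back or up from down: for that, you need the direction-dependent colouration described above.<br />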
<br />
The insurance industry knows that people who lose an external ear or two often find it more difficult to directionalise sound. Even with two undamaged eardrums, simple tasks like crossing the road can become more dangerous. If you've lost an ear, you might find it more difficult working on a building site or as a traffic cop, even if your "conventional" hearing is technically fine. <br />
<h3><b>Binaural</b>, or <b>3D sound recording</b>:</h3>We're good enough at this to be able to hear multiple sound sources and pinpoint all their directions and distances simultaneously, so with the right custom hardware, a studio engineer can mimic these effects to make the listener "hear" the different sound-sources as coming from specific directions, as long as they're wearing headphones.<br />
<br />
There are three main ways of doing this:<br />
<h4>1: "Dummy head" recording</h4><blockquote>This literally involves building a "fake head" from a mixture of different acoustic materials to reproduce the sound-transmission properties of a real human head and neck, and embedding a couple of microphone inserts where the eardrums would be. Dummy head recording <i>works</i>, but building the heads is a specialist job, and they're priced accordingly. <a href="http://www.neumann.com/">Neumann</a> sell a dummy head with mic inserts called the <a href="http://www.neumann.com/?lang=en&id=current_microphones&cid=ku100_description">KU100</a>, but if you want one, it'll cost you around six thousand pounds.</blockquote><blockquote>Some studios have been known to re-record multitrack audio into 3D by surrounding a dummy head with positionable speakers, bunging it into an <a href="http://images.google.co.uk/images?q=anechoic+chamber">anechoic chamber</a> and then routing different mono tracks to different speakers to create the effect of a 3D soundfield. But this is a bit fiddly.</blockquote><h4>2: 3D Digital Signal Processing</h4><blockquote>After <a href="http://en.wikipedia.org/wiki/Digital_signal_processor">DSP</a> chips came down in price, the odd company started using them to build specialist DSP-based soundfield editors. So for instance, the <a href="http://www.soundonsound.com/sos/1996_articles/mar96/rolandrss10.html">Roland RSS-10</a> was a box that let you feed in "mono" audio tracks and choose where they ought to appear in the soundfield. You could even add an outboard control panel with <b>alpha dials</b> that let you sweep and swing positions around in real time.</blockquote><blockquote>Some cheap PC soundcards and onboard audio chips have systems that <i>nominally</i> let you position sounds in 3D, but the few I've tried have been a bit crap: their algorithms probably don't have the detail or processing power to do this properly. 
</blockquote><blockquote>At "only" a couple of thousand quid, the Roland RSS10 was a cheaper, more controllable option for studio 3D mixing than using a dummy head in a sound booth, and <a href="http://en.wikipedia.org/wiki/Pink_Floyd">Pink Floyd</a> supposedly bought a stack of them. There's also a company called <a href="http://en.wikipedia.org/wiki/Qsound">QSound</a> that do this sort of thing: QSound's algorithms are supposedly based more on theoretical models, Roland's more on reverse-engineering actual audio. </blockquote><h4>3: "Human head" recording</h4><blockquote>There's now a third option: a microphone manufacturer called <a href="http://www.soundprofessionals.com/cgi-bin/gold/category/110/mics">Sound Professionals</a> had the idea that, instead of using a <i>dummy</i> human head, why not use a <i>real</i> human head?</blockquote><blockquote>This doesn't require surgery: you just pop the special microphones into your ears (making sure that you have them the right way round), and the mics record the 3D positioning colouration created by your own head's acoustics.</blockquote><blockquote>The special microphones cost a <i>lot</i> less than a Neumann KU100, and they're a lot easier to use for field recording than hauling about a dummy head – it's just like wearing a pair of "earbud"-style earphones. The pair that I bought required a mic socket with DC power, but I'm guessing that most field recorders probably provide that (they certainly worked fine with a <b>Sony MZ-N10</b> minidisc recorder).</blockquote><blockquote>Spend a day wandering around town wearing a pair of these, and when you listen to the playback afterwards with your eyes closed, it's spooky. You hear <i>everything</i>. 
Birds tweet above your head, supermarket trolley wheels squeak at floor level, car exhausts grumble past the backs of your ankles as you cross a road, supermarket doors <i>swisssh</i> apart on either side of you as you enter.</blockquote><blockquote>"Human head" recording isn't quite free from problems. The main one is that you can't put on a pair of headphones to monitor what you're recording, real-time, because that's where the microphones are: you either have to record “blind” or have a second person doing the monitoring, and you can't talk to that person or turn your head to look at them (or clear your throat) without messing up the recording. If you move your head, the sound sources in the recording swing around in sympathy. Imagine trying to record an entire symphony orchestra performance while staring determinedly at a fixed point for an hour or two. Tricky. </blockquote><blockquote>The other thing to remember is that although the results might sound spectacular to <i>you</i> (because it was <i>your</i> head that was used for the recording), it's difficult to judge, objectively, whether other people are likely to hear the recorded effect quite so strongly. For commercial work you'd also want to find some way of checking whether your “human dummy” has a reasonably "standard" head. And someone with nice clear sinuses is likely to make a better recording than someone with a cold, or with wax-clogged ears.</blockquote><blockquote>Another complication is that most people don't seem to have heard of "in-ear" microphones for 3D human head recording, so they can be difficult to source: I had to order mine from Canada. 
</blockquote><h3>Media</h3><blockquote>For recording and replaying the results: since the effect is based on high-frequency stereo colourations and phase differences, and since these are exactly the sort of thing that MP3 compression tends to strip out (or that gets mangled on analogue cassette tape), it's probably best to try recording binaural material as high-quality uncompressed wav files. If you find by experiment that your recorder can still capture the effect using a high-quality compressed setting, then fine. The effect's captured nicely on 44.1kHz CD audio, and at a pinch, it even records onto high-quality vinyl: the <a href="http://en.wikipedia.org/wiki/Eurythmics">Eurythmics</a> album track "Love you like a Ball and Chain" had a 3D instrumental break in which sound sources rotate around the listener's head, off-axis: if you look at the vinyl LP, the cutting engineer has wide-spaced the tracks for that section of recording to make absolutely sure that it'd be cut with maximum quality. </blockquote><h3>Sample recordings</h3><blockquote>I'd upload some examples, but my own test recordings are on minidisc, and I no longer have a player to do the transfer. Bah. :(</blockquote><blockquote>However, there's some 3d material on the web. <a href="http://www.trendhunter.com/trends/3d-sound">The "Virtual Barber Shop" demo</a> is a decent introduction to the effect, and there are some more gimmicky things online, like <a href="http://www.qsound.com/demos/london-tour_wmv.htm">Qsound's London Tour demo</a> (with fake 3D positioning and a very fake British accent!). When I was looking into this a few years back, the nice people at Tower Records directed me to their spoken word section where they stocked <a href="http://www.wired.com/wired/archive/1.04/streetcred.html">a slightly odd "adult" CD</a> that included a spectacular 3D recording of, uh, what I suppose you might refer to as an adult "multi-player game". Ahem. 
This one actually makes you jump, as voices appear without warning from some <i>very</i> disconcerting and alarming places. I'm guessing that the actors all got together on a big bed with a dummy head and then improvised the recording. There are also a couple of 3D audio sites by <a href="http://www.binaural.com/bindemos.html">binaural.com</a> and <a href="http://www.noogenesis.com/binaural/binaural.html">Duen Hsi Yen</a> that might be worth checking out.</blockquote>So, the subject of 3D audio isn't a con. Even if the 3D settings on your PC soundcard don't seem to do much, "pro" 3D audio is very real – with the right gear, the thing works just fine. It's also fun.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com2tag:blogger.com,1999:blog-480555353132580100.post-14028746093266889522010-03-19T18:30:00.002+00:002010-04-12T18:41:20.959+01:00Virtual Lego<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4BeB8WA5mxZxd1MygVQr6uHuLvdAzezn87Ezub385awKF0nIrWzh7oZiGb2NK7LcAol8oI8JOuRPTcEqoO0M0PKCDmExuwe__1htrlyodzIENSTdD9d-ztsa_xBzNZUop6Am4nAe2p8g/s1600-h/Lego_digital_box_kiosk.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4BeB8WA5mxZxd1MygVQr6uHuLvdAzezn87Ezub385awKF0nIrWzh7oZiGb2NK7LcAol8oI8JOuRPTcEqoO0M0PKCDmExuwe__1htrlyodzIENSTdD9d-ztsa_xBzNZUop6Am4nAe2p8g/s320/Lego_digital_box_kiosk.jpg" alt="Lego augmented reality digital box kiosk" border="0" /></a></div><br /><span style="font-weight: bold;">Someone's finally come up with the "</span><a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Killer_application">killer application</a><span style="font-weight: bold;">" for </span><a 
style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Virtual_reality">VR</a><span style="font-weight: bold;"> and </span><a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Augmented_reality">computer-augmented reality</a><span style="font-weight: bold;">. </span><br /><br />It's <b><a href="http://www.lego.com/">buying Lego</a></b>.<br /><br />You walk into a participating <a href="http://maps.google.com/?q=lego">Lego shop</a>, pick up a box of <a href="http://en.wikipedia.org/wiki/Lego">Lego</a>, and walk over to the big screen. A video camera shows you your image. You hold out the box in front of you, horizontally, as if you're holding a tray.<br /><br />The software sees the box, recognises which product it belongs to, and calculates the exact position of the box corners in three dimensions.<br /><br />It then retrieves a 3D computer model of the assembled Lego model from its database, and projects a virtual reality image of the completed masterpiece onto the screen as if it's sitting on top of the box clutched in your little sticky hands.<br /><br />You rotate the box, and on the screen, <a href="http://www.youtube.com/watch?v=PGu0N3eL2D0">the 3D model rotates</a>. Tilt the box and it tilts. Move the box around and you get to see the final Lego construction from different angles, complete with perspective effects.<br /><br />Oh, and the computer-generated Lego image is also animated. 
If it's a garage, <a href="http://www.youtube.com/watch?v=L587qNCmYnU">the little Lego cars scoot about</a>, if it's a building, the little Lego people are wandering about doing their own thing, <a href="http://en.wikipedia.org/wiki/The_Sims">"Sims"-style</a>, and if it's a tipper truck, the truck drives about the top of the box, tipping stuff.<br /><br />It's very, very cool.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-69086087166017565322010-03-14T23:17:00.001+00:002010-04-12T18:40:46.805+01:00The Caltech Snowflake Site<div class="separator" style="clear: both; text-align: center;"><a href="http://www.snowcrystals.com/" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="thumbnail link image to CalTech's snowflake site, www.snowcrystals.com" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjYn3JE-X-TSYJNt9JW7T-GYx8w2t5nbkp3gA_UVzrX2T8OYnnuLCcbiVQki_gx_0bGl9Qv5kw0BzeUXwKUpx1UPXtZTI0CNMl3-V72K5_P6YIqXhHuDA8xXd6NUPaQtBoBxUB7HQQhK0/s320/CalTech_snowflakes.jpg" border="0" /></a></div><span style="font-weight: bold;">While I was finishing off yesterday's snowflake post</span>, I came across <a href="http://www.snowcrystals.com/">Caltech's excellent snowflake site at www.snowcrystals.com</a> (<span style="font-size:x-small;"><i><a href="http://www.its.caltech.edu/%7Eatomic/">Kenneth G. Libbrecht</a></i></span>).<br /><br />Lots of photos, lots of useful information. 
<a href="http://www.its.caltech.edu/%7Eatomic/snowcrystals/designer1/designer1.htm">Caltech even have their own snowflake creation machine</a> that, instead of electrostatically levitating the snowflakes as they grow, or using a vertical blower, applies an electric field to grow narrow ice-spikes, and then lets the snowflakes form at the spikes' tips (which means that the central mount is probably rigidly aligned to the resulting flake with atomic precision, and doesn't seem to affect the growing process).<br /><br />If you're in the UK, and you've mocked train companies for blaming their electrical locomotive failures on "<a href="http://www.google.co.uk/search?q=%22the+wrong+kind+of+snow%22">the wrong kind of snow</a>", well, it turns out that snow crystallisation has a slightly crazy dependency on both temperature and airborne water content, forming a range of very different shapes, from the classic branched hexagon "Christmas card" forms, to hexagonal plates or long hexagonal tubes (<a href="http://www.its.caltech.edu/%7Eatomic/snowcrystals/primer/morphologydiagram.jpg">snowflake chart</a>).<br /><br />The CalTech site explains the wide variety of snowflake forms by this temperature-dependence: the idea being that snowflakes form symmetrically because the conditions across the flake are the same at any given time, and that the extreme variety of shapes is a function of the varying environmental conditions that the whole snowflake experiences as it falls through different regions of sky. It might go through a "spiky dendrite" phase, then change temperature and start trying to grow plates, and then go back to "dendrite" mode, and the exact amount of time spent in these different phases then dictates the shape that emerges.<br /><br />If the identical patterning of the arms is purely a result of the identical (varying) growing conditions across the whole flake, then we don't require any additional mechanism for regulating symmetry. 
In that case, we'll expect individual snowflakes to accumulate diverging asymmetries as they grow, due to gradients of temperature or water availability or light or airflow across the flake. This'd seem to make the formation of extremely regular crystals a bit unlikely.<br />But the CalTech site argues that actually, most natural snowflakes <i>are</i> pretty irregular, and that people generally overestimate the degree of symmetry because the artsy folks who photograph them (presumably including CalTech!) give a misleading impression by carefully selecting out the "best" (most regular) flakes to photograph and publish.<br /><br />That explanation seems to be a bit at odds with the current suggestion of <a href="http://arxiv.org/abs/0911.4267">how triangular snowflakes form</a>, though: if triangular snowflakes grow because of airflow over the flake creating an asymmetrical growing environment, breaking the hex pattern, then if there <i>wasn't</i> an additional internal regulating symmetry-mechanism, there'd be no obvious reason why the resulting aerodynamically-disfigured flake should have 120-degree rotational symmetry. Airflow and a moisture gradient flowing across the flake in one direction might allow a bilateral <i>left-right</i> symmetry for the two sides of the flake that are experiencing the same growing conditions ... but it doesn't explain why the conditions at the leading point of the falling tri-flake (falling point-first) should be identical to those at the two trailing side-points, or why points on the <i>sides</i> of those two trailing spurs should be equivalent, when the airflow is hitting them at different angles. If triangular flakes <i>are</i> due to sideways airflow, then it means that the flake seems to be fighting to retain some sort of symmetry despite significant asymmetrical disruptive forces that ought to be destroying it. 
That'd increase the odds of there being a significant internal symmetry mechanism in play.<br /><br />Of course, it may be that <i>our explanation of triangular snowflakes</i> is simply wrong, that airflow <i>isn't</i> disrupting the hex pattern, and that instead chemical contamination (or some other factor) is causing the alternative triangular crystal structure. But that'd still mean that something in our current understanding of snowflakes is wrong or incomplete. Even if <a href="http://erkdemon.blogspot.com/2010/03/snowflake-engineering-quantum-ghosts.html">yesterday's wacky suggestion</a> about the <a href="http://images.google.co.uk/images?q=quantum+mirage">quantum mirage effect</a> is misguided, we'd still not know why snowflake formation is so sensitive to environmental conditions, or what the (non-aerodynamic) explanation of triangular snowflakes might be.<br /><br /><br />So again, more research needed.<br /><br /><hr /><div style="color: rgb(53, 28, 117);"><span style="font-size:x-small;"><i><span style="font-weight: bold;">The Caltech site's debunking of "mysterious" causes of snowflake symmetry</span> is in the "Myths and Nonsense" section at <a href="http://www.its.caltech.edu/%7Eatomic/snowcrystals/myths/myths.htm">http://www.its.caltech.edu/~atomic/snowcrystals/myths/myths.htm</a>. The page says that there aren't any special forces at work here regulating symmetry, that most snowflakes are asymmetrical and "rather ugly", and that the published examples (including the ones on the site) are atypical, because "not many people are interested in looking at the irregular ones". In other words, if you look through the published work, you get a misleading impression due to <a href="http://en.wikipedia.org/wiki/Publication_bias">publication bias</a>. Well, yes ... quite possibly. 
But since the idea of what counts as "significant" symmetry might be a bit subjective, and since the datasets aren't available for us to look at, it's difficult to take this as a definitive answer until there's been actual experimental testing done. </i></span></div><div style="color: rgb(53, 28, 117);"><span style="font-size:x-small;"><i><br /></i></span></div><div style="color: rgb(53, 28, 117);"><span style="font-size:x-small;"><i>Water is weird stuff, and it keeps catching us out. I remember when people used to debunk <b>ice spikes</b> as an obvious example of pseudoscience, and now those are understood, studied, and have <a href="http://www.its.caltech.edu/%7Eatomic/snowcrystals/icespikes/icespikes.htm">their own page on the CalTech site</a>. A lot of "crazy" ideas about water <b>do</b> turn out to be just as dumb as they first appear, but a few turn out to be correct. The trouble is, it's not always immediately obvious which are which.</i></span></div><div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-8881266328842012322010-03-13T23:56:00.002+00:002010-04-12T18:38:33.355+01:00Snowflake Engineering, Quantum Mirages and Matter-Replicators<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXYqUcHzOcgR2J3oUcv2_TVIs91wUBdRXI3dNXDY2iAcFK1_D5w-kBqIB7H9-as8HWR2yTxlJvPfzXOXZtz92eIoHtn_lqI1DvWAj9iaQNxocmYgw8ULlYCXhWVZuT1bKqsxfsPwU2MYk/s1600-h/JuliaArray_snowflakes_blue.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXYqUcHzOcgR2J3oUcv2_TVIs91wUBdRXI3dNXDY2iAcFK1_D5w-kBqIB7H9-as8HWR2yTxlJvPfzXOXZtz92eIoHtn_lqI1DvWAj9iaQNxocmYgw8ULlYCXhWVZuT1bKqsxfsPwU2MYk/s320/JuliaArray_snowflakes_blue.jpg" alt="Julia Set snowflakes, Eric Baird 2009" border="0" /></a></div><span style="font-weight: bold;">One of the most impressive things about </span><a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Snowflake">snowflakes</a> is that we still don't really understand how they work.<br /><br />We understand how <i>conventional</i> crystals grow – normal crystals assemble into large, faceted, regular-looking forms because the flat facets attract new atoms more weakly than the rougher, "uncompleted" parts of the structure, which provide more friendly neighbours for a new atom to bond with. So if you have an "incomplete" conventional crystal, it'll preferentially attract atoms to the sites needed to fill in the gaps, to produce a nice large-faceted shape that tries to maximise the size of its facets, as far as it can, bearing in mind the original random initial distribution of seed crystals.<br /><br />But <i>snowflakes</i> do something different. Their range of forms makes their growth appear pretty chaotic, but they also manage to be deeply symmetrical. It'd <i>seem</i> that the point of greatest attraction on a region of snowflake doesn't just depend on the atoms that are nearby, but also on the arrangement of atoms on a completely different part of the crystal, which might be some way away, and facing in a different direction, on a different spur. The sixfold symmetry of a snowflake <i>suggests</i> that when you add an atom to the point of one of the six spurs, the other five points become more attractive ... add an atom to the side of a spur, and we're dealing with twelve separate sites (twenty-four if the atom is off the plane). 
Add an atom to a side-branch, and a copy of the electrical-field image of that single atom is transmitted and reflected and multiplied and refocused at potentially tens of corresponding sites on the crystal surface. And that's for every atom in the crystal.<br /><br />This would be beyond fibre-optics, and beyond conventional holography. It'd be multi-focus holography, and the holographically-controlled assembly of matter at atomic scales to match a source pattern – making multiple copies without destroying the original. It'd be using holographic projection to assemble multiple macroscopic structures that are atom-perfect copies of an original. And that idea should make the hairs on the back of your neck start to stand up.<br /><br />The closest thing I've seen in print to this is the <a href="http://en.wikipedia.org/wiki/Quantum_mirage"><b>quantum mirage effect</b></a> described in <a href="http://www.nature.com/nature/journal/v403/n6769/abs/403512a0.html">Nature, 3 Feb 2000</a>. Researchers assembled an elliptical <a href="http://www.aip.org/png/html/mirage.html">quantum corral</a> of atoms on a substrate, and placed another atom at one of the ellipse's two focal points. They then examined the second focal point, and found that the atom's external field properties seemed to be projected and refocused at the second point, to give a partial "ghost" of the source atom [<a href="http://www.wisegeek.com/what-is-a-quantum-mirage.htm">*</a>][<a href="http://philipball.blogspot.com/2009_11_01_archive.html">*</a>][<a href="http://mota.stanford.edu/press.php">*</a>]. You could interact with the ghost even though it wasn't there. 
Presumably your actions on the "ghost particle" copy would be transmitted back to the source, which'd be recreating the ghost behaviour by a process of electrical ventriloquism, using the elliptical reflecting wall to "throw" its voice to the ghost location.<br /><br />Something similar may be happening in a perfectly-symmetrical monocrystalline snowflake as it grows. Maybe the crystal's regular structure happens not only to <i>split</i> the image of the atom into multiples, but also to refocus them with phase coherence at all the key symmetry points. Maybe we could try adding a few metal atoms to one part of a snowflake crystal and seeing if matching atoms are preferentially attracted to the other corresponding sites.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">A possible clue is the phenomenon of </span><a style="font-weight: bold;" href="http://arxiv.org/abs/0911.4267">triangular-symmetry snowflakes</a>. <br />It's been suggested that these form in nature when an asymmetrical snowflake falls corner-first, with the airflow disrupting regular hexagonal crystal formation (see also <a href="http://www.wired.com/wiredscience/2009/12/triangular-snowflakes/">Wired</a>). But since the remaining triangular symmetry is still so strong, this hints that perhaps the strongest linkage between crystal sites is in triples, with a secondary slightly weaker triplet attraction producing the hex.<br /><br />Okay, so I suppose there might be problems in attempting to use giant snowflake crystals as matter-photocopiers ... 
for snowflake formation, every copied pattern forms an extension of the crystal; if you use the crystal to try to copy other things, then the "irregular" matter being copied is liable to disrupt the focusing. You might only be able to copy layers an atom or two thick (at least, to start with).<br /><br />But a giant atom-perfect monocrystalline snowflake would be an awfully fun thing to play with if you had a chip-fabrication lab with goodies like force-sensing tunnelling microscopes.<br /><br />And to me, that was the one thing that could have justified building the <a href="http://en.wikipedia.org/wiki/International_space_station">International Space Station</a>. The ability to build a giant, heavy-duty <b>zero-gravity snowflake</b>, hopefully one big and chunky enough to withstand eventually being brought back to Earth immersed in liquid helium for further study (what does <a href="http://en.wikipedia.org/wiki/Bose-Einstein_condensate">Bose-Einstein condensate</a> do when it's in contact with a hex crystal?). <i>That</i> had to be worth a few billion in research money, and would have given the public something pretty to look at when it came time to tell them what the money had bought. 
We haven't done it yet, but maybe ...<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-20009436691698339692010-03-05T21:39:00.005+00:002010-04-12T18:39:27.255+01:00Kylie Minogue and the Gorilla Experiment<div class="separator" style="clear: both; text-align: center;"><a href="http://www.kylie.com/"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifRnkpQ9HuR-IBkxLHhWBcshl7Z6sbsFfZ_EHGQG6U1Rt7ubeCgqZtQuvaSOJrkQBoxXabRqRsFer7Abu0YBoU2GA4g1F1cZZTBkyHex-HSDFi_0LEymvrFqd0WnkKHJHjfxY9k-KWOT0/s320/Kylie_gorilla.jpg" alt="Kylie, gorilla" border="0" /></a></div><span style="font-weight: bold;">To a large extent, we see and hear what we </span><span style="font-style: italic; font-weight: bold;">expect</span><span style="font-weight: bold;"> to see and hear.</span> As newborns we're hit with a tidal wave of experiential data, a screaming torrent of raw sensory information that we have to learn how to deal with, and our brain's main coping strategy is to scrunch itself up until it's found ways of shutting out most of the din.<br /><br />As infants, we initially lose neurons at an alarming rate until the remaining pathways can mimic (and to some extent synchronise with and predict) external data patterns. We construct progressively more complex predictive mental models for how the outside world works, and increasingly live within our own models. We experience what we expect to experience, unless there's such a glaring mismatch that it can't be ignored.<br /><br />It's a matter of data-reduction and enhanced reaction-times. We coast along, our experience being <i>steered</i> by sensory data but not dictated by it. If you're sitting on a chair, you don't suddenly jolt every few seconds and exclaim, "Chair!" 
– once the chair's been accepted you assume that it's still there until you're told otherwise. This internal secondary reality also compensates for the significant processing delays that happen in our brains – so that we <i>think</i> that we experience the world in real-time – by starting to react unconsciously to our internal models' predictions, before we're consciously aware of what we've seen. We live our lives from moment to moment in a state of continual anticipation.<br /><br />Sometimes random data tickles our expectation-engine – when a black bin-bag blowing in the wind in the corner of an alley momentarily triggers an expectation of seeing a black cat, we don't just interpret the movement as <span style="font-style: italic;">possibly</span> belonging to a cat, we actually <i>see and remember</i> the cat (until we look a second time and realise that it's just a refuse bag, and the rogue memory gets shredded).<br /><br />These models act as perception filters and error-correction filters for what our brains allow us to register as reality. Information that's not compatible with the model (or not relevant) simply doesn't register on our consciousnesses; it gets stripped out as anomalous data and jettisoned before we have a chance to become fully aware of it.<br /><br />The usual example for this is <span style="font-weight: bold;">the basketball experiment</span>, conducted by <span class="style1"><a href="http://viscog.beckman.illinois.edu/media/dailytelegraph.html">Daniel Simons</a> and Christopher Chabris</span> in the 1990s, but unfortunately, if I explain what the experiment <span style="font-style: italic;">is</span>, it'll spoil it for you. If you don't already know about it, don't read anything else about it until you've <a href="http://viscog.beckman.illinois.edu/flashmovie/15.php">watched this video and tried to count just the number of basketball passes made by the people in the white shirts</a>. 
<i>Then</i> read <a href="http://www.telegraph.co.uk/science/science-news/3322642/Did-you-see-the-gorilla.html">the analysis</a>.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">The </span><a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Inattentional_blindness">Gorilla Effect</a><span style="font-weight: bold;"> is now considered a classic</span>, but what most psychologists might not realise is that in 1991, someone had already done a large-scale version of the experiment, using the UK's music broadcasting networks.<br /><br />In '91, <a href="http://en.wikipedia.org/wiki/Kylie_Minogue">Kylie Minogue</a> was still widely seen as a squeaky-clean pop songstress, freshly out of <a href="http://en.wikipedia.org/wiki/Neighbours">Neighbours</a>, warbling heavily-processed <a href="http://en.wikipedia.org/wiki/Stock_Aitken_Waterman">Stock Aitken and Waterman</a> lyrics over generic (and slightly cheesy) SAW chunka-chunka backing tracks. And that's when someone on the Minogue team decided to slip the f-word into one of the singles, three times, to see who noticed. Nobody did.<br /><br />The single was called "<a href="http://en.wikipedia.org/wiki/Shocked">Shocked</a>" and charted at number 6.<br /><br /><blockquote>" <i>Shocked by the power, ooh-ohh, shocked by the power of love.</i><br /><i>You got me fucked to my very foundations, shocked by the power, shocked by the power ...</i>"</blockquote><br />Whattt???<br /><br />Uncharacteristically for SAW lyrics, “fucked to my very foundations” was actually a pretty great line for a pop song. Alliterative an' everything. I'd have been proud of it. 
And maybe that's why someone decided to leave it in.<br /><br />Whether it was an ad-lib, like <a href="http://en.wikipedia.org/wiki/Atomic_Kitten">Atomic Kitten</a>'s alternative “<a href="http://www.monochrom.at/cracked/news/news1_01.htm#OLD%20NEWS%202002"><span lang="DE">You can lick my hole again</span></a>” soundcheck version of <i>their</i> single, I don't know. But that's the version of "Shocked" that actually got broadcast, over and over again, on TV and on the radio. In a country that was obsessed with the F-word being used on music programmes, in which the <a href="http://www.sex-pistols.net/">Sex Pistols</a> had made their careers by effing on <a href="http://en.wikipedia.org/wiki/Bill_Grundy">Bill Grundy</a>'s show, and <a href="http://en.wikipedia.org/wiki/Jools_Holland">Jools Holland</a> was suspended for accidentally letting it slip on a live trailer for "<a href="http://en.wikipedia.org/wiki/The_Tube_%28TV_series%29">The Tube</a>" in 1987, and every <a href="http://en.wikipedia.org/wiki/Madonna_%28entertainer%29">Madonna</a> single was eagerly being pored over by the UK press for possible naughty words or double-entendres that people could declare themselves outraged by, <span style="font-style: italic;">la</span> Minogue got away with repeatedly standing up on <a href="http://en.wikipedia.org/wiki/Top_of_the_Pops">Top of the Pops</a> [a bit after ~7pm], and apparently singing her little heart out about how she was "fucked to my very foundations", three or four times per appearance, without anyone hearing it.<br /><br />If you get hold of the more recent "<a href="http://en.wikipedia.org/wiki/Ultimate_Kylie">Ultimate Kylie</a>" compilation, the audio's different. They've either changed the recording or used a different version in which The Kylie is <i>definitely</i> singing "rrucked", with a pronounced "rr" rather than "fucked", with an "ff". 
But go back to contemporary broadcast recordings of the single ( <a href="http://www.youtube.com/watch?v=ZLsmZctoKQk">thanks, YouTube!</a> ), and yep – it's different.<br /><br />The "Kylie" version of the gorilla experiment might be one of the biggest mass-media psychological experiments ever to take place, but unless you can get hold of contemporary recordings of radio and TV broadcasts, you might be forgiven for thinking that it never happened.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-44224289295909221072010-02-26T15:38:00.003+00:002010-04-12T18:35:15.868+01:00The Magic of Richard Feynman<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdBbHCedaj3aUYqDzciusuDR-g3GdyzOKf70B7HMgTUKP7GkK_jzIQk0uN2u2pOyr-86reL05Jexy1Fo7_m9K7pwfujU6efRZ0oeIRfpp5pCdWReiHTjGOWbp4p9A3iKFwSBtFWSeUu8Y/s1600-h/Feynman_diagrams.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 100px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdBbHCedaj3aUYqDzciusuDR-g3GdyzOKf70B7HMgTUKP7GkK_jzIQk0uN2u2pOyr-86reL05Jexy1Fo7_m9K7pwfujU6efRZ0oeIRfpp5pCdWReiHTjGOWbp4p9A3iKFwSBtFWSeUu8Y/s400/Feynman_diagrams.png" alt="Feynman Diagrams" id="BLOGGER_PHOTO_ID_5443283238514932162" border="0" /></a><a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Richard_Feynman">Richard Feynman (1918-1988)</a><span style="font-weight: bold;"> was one of the more colourful and charismatic characters in US physics.</span><br />He's remembered as one of the greatest physics minds of the Twentieth Century, which sometimes leaves non-physicists wondering exactly what it was (apart from <a 
href="http://en.wikipedia.org/wiki/Feynman_diagram"><span>Feynman diagrams</span></a>) <span style="font-style: italic;">that he actually </span><span style="font-style: italic;">did</span> to get that reputation. How did he end up being regarded as some sort of god amongst physicists, when he never actually discovered anything that most people will have heard of?<br /><br /><a href="http://www.youtube.com/watch?v=lDvu6wz9qF4">One of Feynman's hobbies was stage magic</a>. He was a keen practical joker, and was fascinated by the way that people are led to believe certain things, or why they end up acting in certain predictable ways. He was fascinated by fallibility, and <span style="font-style: italic;">predictable</span> fallibility, which is one of the reasons why he was such a great choice when they were picking people for the <a href="http://en.wikipedia.org/wiki/Rogers_Commission">Rogers Commission</a>, to investigate the reasons for <a href="http://history.nasa.gov/sts51l.html">the 1986 "Challenger" space shuttle disaster</a>. Feynman understood the concept of system failure, both at the organisational and personal level, and he liked to play with people, including other physicists.<br /><br />Stage magic often works through a process of <span style="font-weight: bold;">misdirection</span>. The practitioner demands with every element of their voice, facial gestures and body language that the audience look over <i>here</i>, to the extent that we find it almost impossible not to look at their selected spot – perhaps an inch or so away from their extended, waggling fingertips – while with their other hand over <i>there</i>, they perform the mechanics of the actual trick.<br /><br />A magician might announce before performing their stunt: "Look at this table. It's a perfectly ordinary table. It really, <span style="font-style: italic;">really</span><span style="font-style: italic;"> is</span>." 
And they bang on the table with their fist, and walk around it, and hit it with a stick, and mark an X on it with white chalk ... and you're concentrating so hard on the table to try to find why it's NOT an ordinary table, that you fail to notice the large black velvety cloth hanging above it, or the trap door behind it. The table is, in fact, completely ordinary. It's a double-bluff.<br /><br />That's misdirection. You don't necessarily tell the audience something that's untrue or misleading, you give them a series of false clues, and let them work out the wrong story for themselves.<br /><br />Another factor that makes stage magic effective is the way that people apply <a href="http://www.skepdic.com/occam.html"><span>Occam's Razor</span></a>. Technical stage tricks often require ludicrous amounts of preparation, absurd amounts of technical expertise or physical dexterity, and improbable investments in custom hardware. The assistant just happens to be double-jointed, or has an identical twin sister, or a false leg. At some subconscious level, the watcher's mind runs through a set of absurdly complicated and tortuous conspiracy theories that might explain what they're seeing and gives up, deciding that it's simpler to assume that the magician really <span style="font-style: italic;">can</span> fly or make tigers disappear. The audience <span style="font-style: italic;">reasons</span> that this isn't true (it's "only a trick"), but at a gut level they've already suspended disbelief enough to enjoy the show.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">And so, to Feynman's magic trick.</span><br />One of the recurring stories about Richard Feynman goes something like this:<br />A physicist is working on a difficult problem. The physicist contacts Feynman. Feynman's secretary replies that Feynman is very busy, but could maybe schedule a meeting at some nebulous future date.<br />Several months pass. 
The physicist is contacted unexpectedly by the secretary to say that the secretary has just spotted that Mr Feynman now has a gap in his schedule, at quite short notice, and would the physicist still like to make use of it? The physicist eagerly agrees.<br /><br />The physicist walks into Feynman's office.<br />"So," says Feynman, "My secretary's just told me that you're working on some sort of interesting problem, but you'll have to forgive me, I've been really busy for the last couple of days, and haven't had the chance to look into it. Could you explain it to me? Oh, and could you start from scratch and make it simple, because, you know, this really isn't my field, and I'm not really up to speed with this subject. Start from the beginning."<br /><br />The visitor is flattered and walks up to the board and starts explaining the nature of the problem. He pauses.<br /><br />"So," says Feynman, "Let me see if I've got this right ..."<br /><br />Feynman stares at the board and frantically marks up symbols while talking through what he's doing, until he has an equation.<br /><br />"So your starting point would be something like that, yes? Okay, now tell me what you did next."<br /><br />The visiting physicist is dumbfounded. What Feynman has just written on the board is the solution. And it's not <span style="font-style: italic;">just</span> the solution, it's the solution to a <span style="font-style: italic;">more general</span> version of the problem than the one that the visitor has been struggling with for months, or years. 
And Feynman's just done it in about three minutes flat.<br /><br />The physicist leaves, ego totally destroyed, knowing that RF is in a totally different league to lesser mortals like himself.<br /><br />Now, the "reveal".<br /><br />If you were a suspicious stage-magician type, what you might <span style="font-style: italic;">suspect</span> happened would be something like this: Physicist contacts RF's secretary, mentioning something about the problem. Secretary tells Feynman. RF researches the problem and all the relevant papers on the subject, and finds out how far the physicist has gotten. The secretary sends a stalling letter. Feynman adds the problem to his stack of other outstanding problems, playing them off each other, trying to cross-fertilise the different issues and bounce ideas between them, considering it a break from the problems he's actually trying to work on for himself. Finally, he works out the solution, and at <span style="font-style: italic;">this</span> point, his secretary sends out the letter saying that RF now has an unexpected gap in his schedule.<br /><br />Physicist arrives, RF plays dumb and asks them to outline the problem, RF "solves" it in three minutes flat, apparently using only the tools that the visitor has just provided.<br /><br />Of course, this scenario still required RF to have been a damned good theoretical physicist. 
It also required RF to have had a wicked sense of humour, and to have done an awful lot of tough background work each time he pulled his stunt, just to create a few brief minutes of surprise for his "audience".<br /><br />But that's exactly what stage magicians do.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-44841566584530160312010-02-24T19:38:00.002+00:002010-04-27T01:58:51.725+01:00"Relativity in Curved Spacetime", PDF eBook<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3CAzBgn9xYA_CJ07A-75Fz48QhQOoUlsWN-oGBM1byxCBubaC8C2xjO-xpIKdSz3Nc9EHsINl0CczJgKSjdVjqsVomD9bv0CCQphBZaBJhTvue3ekqCieXZOemv0LgH1TmMsyby5FX_4/s1600-h/RiCS_PDF_medium.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 303px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3CAzBgn9xYA_CJ07A-75Fz48QhQOoUlsWN-oGBM1byxCBubaC8C2xjO-xpIKdSz3Nc9EHsINl0CczJgKSjdVjqsVomD9bv0CCQphBZaBJhTvue3ekqCieXZOemv0LgH1TmMsyby5FX_4/s400/RiCS_PDF_medium.jpg" alt="'Relativity in Curved Spacetime', PDF ebook version, screenshot" id="BLOGGER_PHOTO_ID_5441890629892136466" border="0" /></a><span style="font-weight: bold;">I've just provisionally put </span><a style="font-weight: bold;" href="http://store.payloadz.com/details/787839-eBooks-Science-Relativity-in-Curved-Spacetime-book-.html">Relativity in Curved Spacetime online as an eBook</a><span style="font-weight: bold;"></span>, to see what happens. It's the full fixed-layout PDF file for the book, with an added "bookmark pane" PDF index and some annotations. 
If you're curious about the <a href="http://www.relativitybook.com/book_contents.html">page layouts</a> or you'd like <a href="http://www.relativitybook.com/0955706807_contents.pdf">a single-sheet PDF listing of the book's contents</a>, click on the links. <p style="margin-bottom: 0cm;">I've initially priced the thing at USD $4-99, which comes out as about three quid in British Pounds. That's about a third of what Apple are going to be charging for ebooks. </p><p>If you want a nicely-bound hardcopy, and don't fancy printing off nearly 400 sides of paper, you can still buy the paperback and hardback. Otherwise, the PDF version's on <a href="http://www.payloadz.com/go/sip?id=1200994"><span>Payloadz.com</span></a> .<br /></p><div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com2tag:blogger.com,1999:blog-480555353132580100.post-81549116695352687182010-01-29T12:00:00.004+00:002010-04-25T13:45:16.664+01:00My Website Sucks<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1s-rTQcXLhjNsthJRDqB4eN80dz9QxW-NfjGck7Z8ri4-87-H1zhPxKhtyTdveS65KeaPQflV0t4JddZL6DbeIAQw6R3ONxDxBT5D0u7adBb0vQzSASXxc1H82yI1CDWrCx0uasOiwvE/s1600-h/mainsadaptors.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="The result of adding haphazardly to a system, illustrated with a stack of mains power adaptors. Don't try this at home." 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1s-rTQcXLhjNsthJRDqB4eN80dz9QxW-NfjGck7Z8ri4-87-H1zhPxKhtyTdveS65KeaPQflV0t4JddZL6DbeIAQw6R3ONxDxBT5D0u7adBb0vQzSASXxc1H82yI1CDWrCx0uasOiwvE/s320/mainsadaptors.jpg" border="0" /></a></div><span style="font-weight: bold;">I know </span><span style="font-style: italic; font-weight: bold;">why </span><span style="font-weight: bold;">it sucks ...</span> it's not because I don't know how to write a proper website ... I do ... it's because it's a personal site, and I kinda tinker with it and add things from time to time, and experiment ... and because I've been using HTML for too long.<br /><br />If I was designing the site for someone else, I'd be less indulgent and more brutal with it. I'd insist that the owners had a clear brief of exactly what they wanted the site to do, and how to judge success. It'd be focused and lean and mean. I'd decide on a visual theme, and a hierarchy, and apply it strictly. But when it's your <i>own</i> site, the tendency is to drift and add things and sections and use the pages as a sandbox for playing with different techniques until you end up with an indulgent hodge-podge of themes and style ideas that don't really gel.<br /><br />If it was someone else's site, I'd tell them to delete the whole thing and start again. Don't just fiddle with the layout, start with a blank sheet of paper and a pen, doodle a brand new layout based on CSS, set up some default templates and rebuild the site from the ground up.<br /><br /><hr align="left" width="25%"><br />When you drift and add bits and pieces haphazardly, you end up slipping into old habits. I started writing webpages before we even had HTML tables. My first site (<b>Erk's Relativity Pages</b>) was a 300-page monster written entirely in Windows Notepad, and back then, a site designer had to learn all sorts of odd layout tricks (like using invisible GIFs as spacers) to produce efficient layouts. 
When tables were implemented by Netscape (and then by MS), we redesigned our pages to suit, with nice orderly auto-resizing panels – they were a pain to begin with, but the quirks and incompatibilities smoothed out with time, and we ended up using them everywhere. Tables became the answer to everything, from navigation panels to equation-setting. Then there was a craze for breaking a page up into sections and writing those sections as separate webpages embedded in <span style="font-weight: bold;">frames</span>. I managed to avoid that one (since I could see the long-term search-engine problems), but for a few years, using frames everywhere was supposed to be the mark of a "pro" designer. And then a couple of years later, the importance of search-engine optimisation became obvious, the fashion swung into reverse, and any frame-based sites began to look terribly dated.<br /><br />Back in the 1980s and 1990s, the way to produce a flashy (but legible) site was to use a dark background with light text. The old CRT monitors tended to be strongly curved, with a display area that didn't extend quite to the edges, so a dark background made your page appear larger. With low-res CRT displays, "inverted" light-on-dark text was often easier to read, because the outward blurring of light from the letters produced a sort of natural antialiasing effect. With dark text on pale backgrounds, the surrounding light tended to bleed over the characters, making them more difficult to read. Adding background patterning made the pages look more exciting, made the screen defects less distracting, and helped the user forget that they were staring at a fairly nasty little computer screen.<br /><br />In 2010, things have flipped. Legibility isn't a problem on modern LCD displays, and because the screens are now flat, stark white rectangles actually look <i>good</i>. 
The monitor glass is thinner, so "snow blindness" due to light-scattering from large bright areas isn't so much of a problem, and you no longer need to add a faint background texture to pale or white backgrounds to disguise the "bitty" red, green and blue phosphor dots of a low-res CRT screen.<br /><br />Nowadays, we practically squander space. On large screens, we use column layouts that waste most of the screen display, so that the central vertical column corresponds to what the user sees if they try to view the site from an iPhone. The web in the 1990s was content-starved, and you'd try to impress visitors with how much you had on your site and how much you could cram onto a small screen. In 2010 the visitor is spoilt for choice, so now designers try to keep things minimal and direct their visitors as quickly as possible to the information they actually want; otherwise they'll just click back to Google and try somewhere else.<br /><br />After tables and frames, we now have <span style="font-weight: bold;">Cascading Style Sheets</span>. CSS is genuinely cool, and I really ought to rip up the existing pages and redo all their elderly table-based layout completely using CSS. Trouble is, it'll require a certain amount of work, and the immediate result will be that certain existing things (like same-height panels) won't work so well. There are bodges and workarounds; CSS isn't quite perfect yet.<br /><br />The site's "look" also badly needs an overhaul. It was originally going to just be a few pages supporting the book, with a navy blue block across the top and down the left side referring to the book cover art (front and spine). On the subsequent pages, that morphed into a "program window" theme, with a title bar and an icon in the top left corner. I never quite worked out what to do with the spine. 
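To make the tables-versus-CSS point concrete, here's the kind of markup swap involved – a minimal sketch, not taken from the actual site, with made-up class names and a placeholder `sidebar-tile.png` image. The "same-height panels" bodge mentioned above is the classic "faux columns" trick: the equal-height look comes from a vertically-tiled background on the wrapper, not from the floated columns themselves.

```html
<!-- Hypothetical CSS replacement for a two-cell layout table. -->
<style>
  /* Wrapper carries a repeat-y background so both "columns"
     appear to run the full height of the taller one. */
  .wrap  { width: 760px; margin: 0 auto;
           background: url(sidebar-tile.png) repeat-y left top; }
  .nav   { float: left;  width: 180px; }  /* navigation panel */
  .main  { float: right; width: 560px; }  /* page content */
  .clear { clear: both; }                 /* restore normal flow */
</style>
<div class="wrap">
  <div class="nav">navigation links</div>
  <div class="main">page content</div>
  <div class="clear"></div>
</div>
```

With a table, the browser equalised the cell heights for free; with floats you fake it, which is exactly the sort of workaround that makes the migration feel like a step backwards at first.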
It's now an inconsistent mess, with pages on almost unrelated subjects like fractals, and should really be torn down and rebuilt.<br /><br /><hr align="left" width="25%"><span style="font-weight: bold;">Relativity theory</span> is in a similar mess. A number of themes have come and gone, and left their mark on the subject. There are artefacts and traditions in the way that theory is presented that don't really make sense in the new context, and older methods that aren't compatible with newer principles. We teach special relativity as having destroyed <span style="font-weight: bold;">aether theory</span>, but we still teach SR using the length-contraction idea, which was an old aether theory concept borrowed from Lorentzian electrodynamics.<br /><br />In theoretical physics, we probably have a feeling deep down that we know that we really <span style="font-style: italic;">ought</span> to be tearing up the current system and starting again. But it'd require a lot of work without an immediate payoff, and some of the things we currently do would stop working for a while as the new system found its feet. The current system is bodgy and patched and held together with string and duct tape, but we know how to use it, and over time the bugs and fudges have started to feel like old friends. We invested a lot of time in special relativity (like website designers spent a lot of time learning the quirks of HTML tables), and now that we know that system, we tend to use it everywhere. 
With special relativity, we've gone further and actually <span style="font-style: italic;">redefined</span> some key parts of relativity theory in such a way as to make SR inevitable and unavoidable, and this lock-in frees us from having to make awkward upgrade decisions.<br /><br />So while it may seem that I'm sometimes a bit harsh on the theoretical physics community for being welded to obsolete and archaic systems that don't really make sense in the C21st, I <span style="font-style: italic;">do</span> actually sympathise and empathise with their problem. They ought to rip up their SR-based structure and redesign, just as I ought to rip up my table-based webpage layout and redesign. But there's a difference between knowing that you ought to do something, and actually rolling up your sleeves and starting work, especially when there's no external deadline forcing your hand, and you always seem to have other more pressing things demanding your time.<br /><br />So to help the theoretical physics community, here's a time-point. <a href="http://www.scribd.com/doc/30090609/Relativity-in-Curved-Spacetime"><b>The book</b></a> came out in late 2007, and sketches out the principles and the rough shape of the suggested next-generation replacement for our current general theory of relativity. This is early 2010, and the book's now been out for two years. That book is the roadmap to what comes next. 
So perhaps we can have a concerted start on plotting out at least a rough preliminary <span style="font-style: italic;">schedule </span>for a replacement to general relativity, some time in 2010?<br /><br />Meanwhile, I'll try to think of a way of cleaning up the website.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-87674289736186741442010-01-22T22:14:00.001+00:002010-04-12T18:24:50.854+01:00Einstein's Cosmological Constant<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZXQNECfwxrvpIdKKe2af6TqvZ_e5bqUoq0i7L8-apR9tLyuk-oiexfT2Je0scxF9g6zEEB8z4MobRZe7ZfuLQdeGt07IFnZJv0IZSzxwB9KxZRU9NmNZBA7s38PYWHvtnZ1NjziQTCl8/s1600-h/lambda.gif" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="Lambda" id="BLOGGER_PHOTO_ID_5420739062086810274" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZXQNECfwxrvpIdKKe2af6TqvZ_e5bqUoq0i7L8-apR9tLyuk-oiexfT2Je0scxF9g6zEEB8z4MobRZe7ZfuLQdeGt07IFnZJv0IZSzxwB9KxZRU9NmNZBA7s38PYWHvtnZ1NjziQTCl8/s400/lambda.gif" style="cursor: pointer; display: block; height: 129px; margin: 0px auto 10px; text-align: center; width: 400px;" border="0" /></a><span style="font-weight: bold;">Back in 1916, Einstein was still working to the assumption that the universe should be neat and tidy</span>, and since he was now using a more mathematical approach, this meant "infinite and unchanging".<br />If you were solving the equations of general relativity, and getting solutions in which the universe appeared to be unstable, then you could throw those away. Chaos was bad. Order was good. Stability was good. 
Static solutions were better than dynamic ones.<br /><br />Since it seems that gravitational mass is always positive, gravitational effects are cumulative, and over a large enough region, the combined background curvature should be enough to curve space right back on itself. The combined attraction also ought to be trying to make the universe contract, so we've appreciated for a while that unless there was some other effect in play, the universe should either be expanding and slowing, or collapsing in on itself (<i>see:</i> <b>Erasmus Darwin</b>, 1791).<br /><br />Einstein wanted <i>his</i> universe to be pretty much flat at very large scales, so he got rid of the effects caused by cumulative curvature by adding an additional squiggle to the equations: an invented long-range repulsive effect whose purpose was to counteract the cumulative long-range effects of gravitation, allowing a tidy, constant, unchanging, static universe. If the rest of the equation generated long-range curvature effects and evolution over time, the upper-case Greek letter <a href="http://en.wikipedia.org/wiki/Lambda"><span style="font-weight: bold;">Lambda</span> (<span style="font-weight: bold;">Λ</span>)</a> represented the necessary compensating effect that might exactly cancel these effects.<br /><br />Einstein referred to this as the <b><a href="http://en.wikipedia.org/wiki/Cosmological_constant">Cosmological Constant</a></b>.<br /><br />Unfortunately, Einstein had made his model <span style="font-style: italic;">too</span> tidy. A few years later, <a href="http://en.wikipedia.org/wiki/Edwin_Hubble">Edwin Hubble</a> successfully measured a distance-dependent trend in the spectral shifts of light from a range of galaxies (<a href="http://hyperphysics.phy-astr.gsu.edu/hbase/astro/hubble.html">Hubble shift</a>), and we realised that the complicating large-scale effects that Einstein thought he'd eliminated with his Cosmological Constant seemed to be physically real. 
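For reference, the "additional squiggle" survives in the modern form of the field equations as an extra term proportional to the metric (sign conventions vary between textbooks; this is the common one):

```latex
% Einstein field equations with the cosmological term:
R_{\mu\nu} \;-\; \tfrac{1}{2} R \, g_{\mu\nu} \;+\; \Lambda \, g_{\mu\nu}
  \;=\; \frac{8 \pi G}{c^{4}} \, T_{\mu\nu}
% For Einstein's 1917 static, matter-filled model, only one specific
% positive value of Lambda (tied to the average matter density) balances
% the books; any other value gives a universe that expands or contracts.
```

The point of the 1917 exercise was that the balance only works for one particular value of Λ, and (as Eddington later showed) even that equilibrium is unstable.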
After taking some time to think the matter over, Einstein agreed that a <span style="font-weight: bold;">Riemann</span>-type solution (without Lambda) gave a cleaner and more natural implementation of General Relativity. He later described his early decision to invent the Constant to force large-scale flatness onto GR as "<span style="font-weight: bold;">The biggest blunder of my career</span>".<br /><br />End of story.<br /><br /><hr align="left" width="25%"><br /><span style="font-weight: bold;">However, the subject seemed to kick off again in the 1990s</span> when a lot of headlines started appearing in the popular science press (and in scientific papers) to do with the idea of <a href="http://en.wikipedia.org/wiki/Dark_energy">dark energy</a>, and the idea that the universe seemed to be expanding faster than GR1915 predicted – these articles usually declared that "<b>Einstein's Cosmological Constant</b>" was back, and had excited-sounding researchers competing to see who could give the best quote about Einstein having been "<a href="http://www.google.co.uk/search?q=%22cosmological+constant%22+%22right+all+along%22">right all along</a>".<br /><br />This wasn't really true: <span style="font-style: italic;">Einstein's</span> Cosmological Constant had been a mathematically-derived thing that only had one allowable value, and whose justification was to set the strengths of a range of effects in the model (large-scale curvature, distance-dependent redshifts, change in size over time) to zero. It had been there for purely <i>logical</i> reasons, in the context of a static universe, because a static universe seemed to need it. It existed to explain an assumed physical equilibrium that turned out not to exist, in a universe that wasn't ours. It was derived from bad assumptions, but at least it was <i>derived</i>.<br /><br />The modern counterpart was almost the opposite. 
The antigravitational "dark energy" cosmological constant applied to an expanding universe that seemed to be expanding too fast for GR1915, and the effect initially had no fundamental logical, mathematical, geometrical or theoretical basis. It was, essentially, a parameter describing the extent to which the result of our GR predictions "missed" the actual data.<br />More recently, some researchers have tried to put the dark energy idea onto a more "theoretical" footing by arguing that perhaps the constant might not have a fixed arbitrary value, but might be a measure of the universe's expansion. That'd make the "modern" CC less fudgey, but it'd also mean that, as well as the thing not being Einstein's, it wouldn't be a constant, either.<br /><br />So why did we initially get <a href="http://www.sciencedaily.com/releases/2007/11/071127142128.htm">all those news stories</a> announcing things like: "<i>Eighty years later, it turns out that Einstein may have been right ... So he was smarter than he gave himself credit for.</i>" [<a href="http://www.sciencedaily.com/releases/2007/11/071127142128.htm">*</a>] ?<br /><br />Putting it brutally, it was about PR. Attaching Einstein's name gave a false sense of historical provenance and a false sense of respectability. It let researchers use Einstein's name as a shield to deflect awkward questions about the apparent arbitrariness of their new expansion effect, and it turned a fairly boring and slightly negative story about GR failing to agree with the evidence into a snappy human-interest story about the throes of the scientific process coming out right in the end, and Einstein being right, and GR being right.<br /><br />The "<i>Einstein's Cosmological Constant returns: Einstein was right after all!</i>" stories generated a lot of news headlines, and let researchers give interviews to magazines and appear on the telly and improve their departments' media profiles. 
Suddenly there were a lot of editors and journalists wanting quotes on the cosmological constant, because they wanted to print the same reader-grabbing "Einsteiney" headline, but didn't want to put <i>their</i> name on the claim, as reporters, because it was dodgy. So they rang round the universities and found a bunch of cosmologists happy to give the right quote if it meant getting their name in a magazine or getting onto the telly.<br /><br />The story was junk. It was researchers collectively gaming the news media, and manufacturing and repeating a story that they knew would work, in order to get more media exposure. And unfortunately, that's the sort of behaviour that makes the general public more inclined to distrust scientists.<div class="blogger-post-footer">from <b>ErkDemon: The Other Side of Science</b> <a href="http://erkdemon.blogspot.com">http://erkdemon.blogspot.com</a></div>ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-45601528213524302722010-01-15T10:22:00.001+00:002010-04-12T18:23:23.832+01:00Clever, Bright, and Smart<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGhyphenhyphen-PDakLYH0cYHm79JvO3WUUBiSrZN5EcW3qu4qkxApBvE9hzlqa3e2neAjuHj1XAfwrzw9yhE77nljzderHuI9j8xmSoydkeQ7HdyNBeUPFAgU-3gWUgnrJmG5thWDdtRZdCQAv03c/s1600-h/lightbulb.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGhyphenhyphen-PDakLYH0cYHm79JvO3WUUBiSrZN5EcW3qu4qkxApBvE9hzlqa3e2neAjuHj1XAfwrzw9yhE77nljzderHuI9j8xmSoydkeQ7HdyNBeUPFAgU-3gWUgnrJmG5thWDdtRZdCQAv03c/s400/lightbulb.jpg" border="0" width="400" height="300" /></a></div><span style="font-weight: bold;">There's no single scale that adequately describes someone's abilities.</span> People can excel at some types of task and be hopeless at others, and we have a range 
of different words for different types of aptitude.<br /><br />Three of the most popular ones are <b>clever</b>, <b>bright</b> and <b>smart</b>.<br /><br /><span style="font-weight: bold;">Cleverness</span> is about tool manipulation. It's about having a library of information and methods at your disposal that you can call upon to attack a problem. It's about the toolset. "Clever" researchers tend to be great at solving well-known types of problems, or well-defined problems that are attackable with existing approaches. It's a matter of going through the toolset until you find something that works. Clever people tend to be good at technical subjects that involve absorbing a lot of jargon and detail. They're not always so good at solving or understanding problems that aren't well defined, or seeing the bigger picture, or starting with a blank page.<br /><br /><span style="font-weight: bold;">Brightness</span> is about being able to appreciate larger patterns and relationships that don't necessarily conform to an existing approach or definition. Bright people tend not to be so dependent on clearly-defined goals or methodology, and can take a more "free-form" approach to work, where the project parameters and characteristics emerge as the project progresses.<br />A <i>computer programmer</i> needs to be clever, but a <i>software designer</i> needs to be bright.<br /><br />"<b>Clever vs. Bright</b>" is like comparing soldier ants with butterflies. The soldier ant, working with other soldier ants, manages to overcome a lot of problems even if each individual ant doesn't really know where they fit into the larger scheme of things. The butterfly arguably has the better world-view, but can't always do very much with it.<br /><br /><span style="font-weight: bold;">Smartness</span> is about being able to understand and exploit opportunities to gain advantage and achieve goals. 
It's possible to be clever and bright without being smart. Having "smarts" means that you learn from experience and think ahead strategically, to plan how the workings of a system can allow you to achieve your desired outcome.<br /><br />Smart people are often also bright and clever, but they're also smart enough to realise that their success doesn't depend on cultivating those other skills to the same extent, because once they've become moderately successful, they can "hire in" clever and bright people to do that part of the work, and delegate. Successful entrepreneurs tend to be smart, and bright, and clever, but their focus is on being smart.<br /><br /><i>Military R+D</i> usually wants researchers who are <i>extremely</i> clever, but not necessarily <i>too</i> bright or smart. A "bright" employee might query what their work is to be used for, notice how their research fits together with others to produce a device that they aren't supposed to know about, or query the legality or ethics or consequences of the project they're involved in. A smart researcher might realise that the market value of their work is more than their current employer is paying, leave to take a better job when they realise that the project is in trouble, or try to wrest control of the project from the existing managers.<br /><br /><hr align="left" width="25%"><br />Now, this is where it gets complicated:<br /><br />People who <i>describe</i> themselves as smart (outside a limited peer group) usually aren't.<br />Smart people tend not to publicly <i>identify</i> themselves as smart, because it's usually not a smart thing to do. <i>Clever</i> people sometimes describe themselves as smart, because nobody's actually told them what the words mean, and they're not bright enough to work it out for themselves. They follow the lead of the other clever people in their group that they've heard describing themselves as smart. 
The <i>bright</i> people also don't normally describe themselves as smart, because they only hear the word being used self-referentially by people with poor social skills who are "clever-only", and they decide that they don't want to be lumped in with <i>them</i>.<br /><br />So if you're studying monkeys in a zoo that are picking grubs out of a log that have been put there by the zookeeper, the clever monkey will become adept at using a stick to extract the grubs, the bright monkey will watch the zookeeper and only go grub-hunting when the log's just been refilled, and the smart monkey will congratulate the other two on their cleverness, assume a management position and a share of the grubs, and then patent the stick.<br /><br />Different skills.ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-75384591898165665982010-01-07T16:30:00.002+00:002010-04-12T18:21:43.754+01:00Relativity Four Point Zero<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.fourpointzero.org/"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 360px; height: 360px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhDNce32eJ3CIlppbvr61S4A3-mPjroEfYWlxAJ5Hlw2sfFaf51czGWbH8jO-AaeB277nfhIgyGAvEHAhqn8QVaP72EIseLjeIIhdsY4iGr-Kd3evFljQFJr9L-H4Dit3TSbrkht56FsE/s400/squarelogo_fourpointzero_large.gif" alt="'4.0' logo and icon for the 'Relativity four point zero' website (www.fourpointzero.org)" id="BLOGGER_PHOTO_ID_5421197670170665042" border="0" /></a><span style="font-weight: bold;">Okay, here starts a new decade.</span> I've started a simple site sketching out the basic principles of the suggested revised general theory:<br /><div style="text-align: 
center;"><a href="http://www.fourpointzero.org/">http://www.fourpointzero.org</a><br /></div><br />I figured that if the work of <a href="http://en.wikipedia.org/wiki/Galileo_Galilei">Galileo</a> & <a href="http://en.wikipedia.org/wiki/Isaac_Newton">Newton</a> counts as "<span style="font-weight: bold;">Relativity v1.0</span>", <a href="http://en.wikipedia.org/wiki/Special_relativity">special relativity</a> changed some key equations and counts as <span style="font-weight: bold;">v2.0</span>, <a href="http://en.wikipedia.org/wiki/General_relativity">general relativity</a> altered and added some fundamental principles and did away with SR's concept of global lightspeed constancy, and therefore counts as <span style="font-weight: bold;">v3.0</span>, then since <span style="font-style: italic;">this</span> isn't compatible with the current textbook definitions of GR (because it eliminates the "compulsory" SR component), it counts as another "discontinuous" iteration and earns a further major version number, <span style="font-weight: bold;">4.0</span>.<br /><br />You can't get to 4.0 without breaking a few eggs. That's what makes it 4.0.<br /><br />I <span style="font-style: italic;">was</span> thinking of giving the new site ~twenty-six sections listed in alphabetical order, one page per letter, but I think I might just stop at five or six (the current pages A-E seem to work quite well as a logical progression). I'm trying not to fall into my usual trap of writing reams of material that most people won't want to read, and keeping things pretty minimalist, so there's a lot of the more juicy stuff left out. 
I think this sort of "skeleton" overview probably serves a useful purpose, so don't expect a lot of updates to the "<span style="font-style: italic;">4.0 org</span>" site, unless other people get involved.ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com0tag:blogger.com,1999:blog-480555353132580100.post-29646676811578847212009-12-31T20:44:00.001+00:002009-12-31T20:44:00.488+00:00New Year's Eve<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEF2Jbm9u2zyQg4YpeQqakYpyuGM6y5Zmm4prjtsHiZtF1DZmTMqORjBag4XJbw3MVNiePpNJnGPCYDQ3kMuaHFHfnG5vm_3t6FWKN2ZQRfjnTjF5z0xXgRXsCYzx1gm202OzdgbDZwVw/s1600-h/end_of_the_beginning.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 267px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEF2Jbm9u2zyQg4YpeQqakYpyuGM6y5Zmm4prjtsHiZtF1DZmTMqORjBag4XJbw3MVNiePpNJnGPCYDQ3kMuaHFHfnG5vm_3t6FWKN2ZQRfjnTjF5z0xXgRXsCYzx1gm202OzdgbDZwVw/s400/end_of_the_beginning.jpg" alt="'THIS IS THE END OF THE BEGINNING': Final image from George Pal's 'Destination Moon' (1950)" id="BLOGGER_PHOTO_ID_5421180225925708642" border="0" /></a>Okay, that's it. First decade of the new century over, and we've got almost nothing good to show for it, physics-wise.<br />That's <span style="font-style: italic;">bad</span>. We only get ten of these per century. One down, only another nine to go before 2100. If we're burning through resources at the current rate, we can't afford to waste decades like this if we want to actually achieve something significant this century before we get hit by a resources crash.<br /><br />So a suggested schedule. 
Let's officially notice the idea of a <a href="http://erkdemon.blogspot.com/2009/12/differential-expansion-dark-matter-and.html">no-floor implementation of GR</a> by at least late 2010, and see if we can get rid of dark matter and dark energy. Let's have the quantum gravity guys working on acoustic metrics as a low-velocity approximation have the guts to come out and actually suggest that this might be the basis of a real theory, and not just a toy model. Let's stop issuing press releases claiming that the current version of general relativity is the wonderfullest theory and has never ever failed us, let's acknowledge the problems and let's sit down and write a proper general theory from scratch, stealing that "acoustic metric" work.<br /><br />Instead of setting a schedule that puts the next theoretical breakthroughs maybe eighty or a hundred years from now because we aren't clever enough to understand string theory, let's get off our arses and do the things that we <span style="font-style: italic;">do</span> know how to do. Kick off with the no-floor approach, and when we're energised by the success of <span style="font-style: italic;">that</span>, converge the acoustic metric work with a GR rewrite ... and suddenly the next generation of theory only looks about five years away. If we're very lucky, two and a half. If we can't get enough people onboard fast enough, maybe eight to ten.<br /><br />Unless we take that first step of exploring the idea that change might be possible and might be a good thing, we won't get anywhere except by dumb luck and/or massive public spending on hardware. If we're not careful, and we don't change the way we do things, next thing we know it'll be 2020 and we <span style="font-style: italic;">still</span> won't have achieved anything.<br /><br />So let's write off the 00's as a big double-zero. Let's pretend that the Bush years and Iraq and the financial crash never happened. 
We don't need multi-billion-dollar hardware for this, we only need to be able to think, and to be a bit more adventurous than we've been for the last few decades. Let's redo general relativity <span style="font-style: italic;">properly</span><span> and get a theory that we can be proud of without having to spin results</span>, one that actually predicts new effects <span style="font-style: italic;">in advance</span> rather than retrospectively, and has the potential to lead us into genuinely new physics territory.<br /><br />Tomorrow is 2010. Let's start again.ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com1tag:blogger.com,1999:blog-480555353132580100.post-71802819766483976662009-12-30T22:10:00.010+00:002009-12-31T20:01:55.850+00:00Differential Expansion, Dark Matter and Energy, and Voids<div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www2.aao.gov.au/2dFGRS/"><img style="margin: 0px auto 10px; display: inline; text-align: center; cursor: pointer; width: 133px; height: 133px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5xBHsz5ZCS5aBi9O828vLJa8g4tlb2NQa9XX03s92uPR-QuVsg5J6k4eL4rDZXs7VdNmSPLPOIumsYqtuLLyUnlMqJwSMEx95-_0zVCjjhTDovUBXPvQrf8nJzOhIOyueor6WCbM_0fU/s400/greatwall_thumbnail.jpg" alt="2df Galaxy Redshift Survey" id="BLOGGER_PHOTO_ID_5421462614602357522" border="0" /></a><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_x54HB4a2HNSHqZ5h4B-F_DufaUkFeGWG5NSAUr6YbkYbPws0njZFWyy4kpXXcviErSoo9C32zmzdplyJL2V_HS5tmugqkDKLMQ5T4XgCWOZnON0-XDBiU9cQ7MWB0cEFsaSuTKkngsk/s1600-h/Raspberry.jpg"><img style="margin: 0px auto 10px; display: inline; text-align: center; cursor: pointer; width: 133px; 
height: 133px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_x54HB4a2HNSHqZ5h4B-F_DufaUkFeGWG5NSAUr6YbkYbPws0njZFWyy4kpXXcviErSoo9C32zmzdplyJL2V_HS5tmugqkDKLMQ5T4XgCWOZnON0-XDBiU9cQ7MWB0cEFsaSuTKkngsk/s400/Raspberry.jpg" alt="A raspberry ('Relativity in Curved Spacetime' section)" id="BLOGGER_PHOTO_ID_5421473082754372802" border="0" /></a><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.jpl.nasa.gov/news/news.cfm?release=2008-138"><img style="margin: 0px auto 10px; display: inline; text-align: center; cursor: pointer; width: 133px; height: 133px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijYPCZE0JjFvv2rSLrs4shkRg2itzXkHr_-DqLNXLcSqmc-b-t5zu2Ox7EXHih2tzwoGCbgFrthyEXT7Ats71mccwtezDyJa2_IWoVxE0gy4uULUevTFc7GcC0MmEFhdg40RZL3NVd9Ug/s400/pinwheelgalaxy_thumbnail.jpg" alt="NASA: Pinwheel galaxy" id="BLOGGER_PHOTO_ID_5421462265474891906" border="0" /></a></div>Normally with a field theory, you have some idea where to start. You start by defining the shape and other properties of your "landscape" space, and then you add your field to that context, and watch what it does when you play with it.<br />But in a general theory of relativity (which is forced by <a href="http://www.bun.kyoto-u.ac.jp/%7Esuchii/mach.pr.html"><span>Mach's Principle</span></a> to also be a relativistic theory of <span style="font-style: italic;">gravity</span>), <a href="http://www.relativitybook.com/resources/Einstein_space.html">the gravitational field <span style="font-style: italic;">is </span>space</a>. 
The field doesn't sit <span style="font-style: italic;">inside</span> a background metric, it <span style="font-style: italic;">is</span> the background metric.<br />So with this sort of model, we've got no obvious starting point – no obvious starting geometry, and not even an obvious starting <span style="font-style: italic;">topology</span>, unless we start cheating and putting in some critical parameters by hand, according to what we believe to be the correct values.<br /><br />We make an exasperated noise and throw in a quick idealisation. We say that we're going to suppose that matter is pretty smoothly and evenly distributed through the universe (which sounds kinda reasonable), and then we use this assumption of a <span style="font-weight: bold;">homogeneous distribution</span> to argue that there must therefore be a fairly constant background field. That then gives us a convenient smooth, regular background shape that we can use as a backdrop, before we start adding features like individual stars, and galaxies.<br /><br />That background level gives us our assumed <span style="font-weight: bold;">gravitational floor</span>.<br /><br />We know that this idea isn't really true, but it's convenient. <span style="font-weight: bold;">Wheeler</span> and others tried exploring different approaches that might allow us to do general relativity without these sorts of starting simplifications (e.g. 
the <a style="font-weight: bold;" href="http://en.wikipedia.org/wiki/Pregeometry_%28physics%29"><span>pregeometry</span></a> idea), but while a "pregeometrical" approach let us play with deeper arguments that didn't rely on any particular assumed geometrical reduction, getting from first principles to new, rigorous predictions was difficult.<br />So while general relativity <span style="font-style: italic;">in theory</span> has no prior geometry and is a completely free-standing system, <span style="font-style: italic;">in practice</span> we tend to implicitly assume a default initial dimensionality and a default baseline background reference rate of timeflow, before we start populating our test regions with objects. We allow things to age more slowly than the baseline rate when they're in a more intense gravitational field, but we assume that the things can't be persuaded to age <span style="font-style: italic;">more</span><span style="font-style: italic;"> quickly</span> than the assumed background rate (and that signals can't travel faster than the associated background speed of light) without introducing "naughty" hypothetical negative gravitational fields (ref: <a href="http://en.wikipedia.org/wiki/Positive_energy_theorem"><span>Positive Energy Theorem</span></a>).<br />This is one of the reasons why we've made almost no progress in warpdrive theory over half a century – our theorems are based on the implicit assumption of a "flat floor", and this makes any meaningful attempt to look at the problem of <span style="font-weight: bold;">metric engineering</span> almost impossible.<br /><br />Now to be fair, GR textbooks are often quite open about the fact that a homogeneous background is a bit of a kludge. 
It's a pragmatic step – if you're going to calculate, you usually need somewhere to start, and assuming a homogeneous background (without defining exactly what degree of clumpiness counts as "homogeneous") is a handy place to start.<br /><br /><br />But when we make an arbitrary assumption in mathematical physics, we're <span style="font-style: italic;">supposed</span> to go back at some point and <span style="font-weight: bold;">sanity-check</span> how that decision might have affected the outcome. We're meant to check the dependencies between our initial simplifying assumptions and the effects that we predicted from our model, to see if there's any linkage.<br />So ... what happens if we throw away our "gravitational floor" comfort-blanket and allow the universe to be a wild and crazy place with no floor? What happens if we try to "do" GR without a safety net? It's a vertigo-inducing concept, and a few "crazy" things happen:<br /><br /><span style="font-weight: bold;">Result 1: Different regional expansion rates</span><span style="font-weight: bold;">, and lobing</span><br /><blockquote><span style="font-size:85%;">Without the assumption of a "floor", there's no single globally-fixed expansion rate for the universe. Different regions with different "perimeter" properties can expand at different rates. If one region starts out being fractionally less densely populated than another, its rate of entropic timeflow will be fractionally greater, the expansion rate of the region (which links in to rate of change of entropy) will be fractionally faster, and the tiny initial difference gets exaggerated. It's a positive-feedback inflation effect. 
The faster-expanding region gets more rarefied, its massenergy-density drops, the background web of light-signals increasingly deflects around the region rather than going through it, massenergy gets expelled from the region's perimeter, and even light loses energy while trying to enter, as it fights "uphill" against the gradient and gets redshifted by the accelerated local expansion. The accelerated expansion pushes thermodynamics further in the direction of exothermic rather than endothermic reactions, and time runs faster. Faster timeflow gives faster expansion, and faster expansion gives faster timeflow.<br /><br />The process is like watching the weak spot on an over-inflated bicycle inner tube – once the trend has started, the initial near-equilibrium collapses, and the less-dense region balloons out to form a lobe. Once a lobe has matured into something sufficiently larger than its connection region, it starts to look to any remaining inhabitants like its own little hyperspherical universe. Any remaining stars caught in a lobe could appear to us to be significantly older than the nominal age of the universe as seen from "here and now", because more time has elapsed in the more rarefied lobed region. 
The age of the universe, measured in 4-coordinates as a distance between the 3D "now-surface" and the nominal location of the big bang (the radial cosmological time coordinate, referred to as "</span><span style="font-style: italic;font-size:85%;" >a</span><span style="font-size:85%;">" in MTW's "Gravitation",§17.9), is greater at their position than it is at ours.<br /><br />With a "no-floor" implementation of general relativity, the universe's shape isn't a nice sphere with surface crinkles, like an orange, it's a multiply-lobed shape rather more like a </span><span style="font-weight: bold;font-size:85%;" >raspberry</span><span style="font-size:85%;">, with most of the matter nestling in the deep creases between adjacent lobes (</span><span style="font-weight: bold;font-size:85%;" >book, §17.11</span><span style="font-size:85%;">). If there was no floor, we'd expect galaxies to align in three dimensions as a network of sheets that form the boundary walls that lie between the faster-expanding voids.<br /><br />And if we look at our painstakingly-plotted maps of galaxy distributions, that's pretty much <a href="http://en.wikipedia.org/wiki/Void_%28astronomy%29">what seems to be happening</a>.</span></blockquote><br /><span style="font-weight: bold;">Result 2: Galactic rotation curves</span><br /><blockquote><span style="font-size:85%;">If the average background field intensity drops away when we leave a galaxy, to less than the calculated "floor" level, then the region of space between galaxies is, in a sense, more "fluid". These regions end up with greater signal-transmission speeds and weaker connectivity than we'd expect by assuming a simple "floor". The inertial coupling between galaxies and their outside environments becomes weaker, and the influence of a galaxy's own matter on its other parts becomes proportionally stronger. 
It's difficult to get outside our own galaxy to do comparative tests, but we can watch what happens around the edges of other rotating galaxies where the transition should be starting to happen, and we can see what </span><span style="font-style: italic;font-size:85%;" >appears</span><span style="font-size:85%;"> to be the effect in action.<br /><br />In standard Newtonian physics (and "flat-floor" GR), this doesn't happen. A rotating galaxy obeys conventional orbital mechanics, and stars at the outer rim have to circle more slowly than those further in if they're not going to be thrown right out of the galaxy. So, if you have a rotating galaxy with persistent "arm" structures, the outer end of the arm needs to be rotating more slowly, which means that the arm's rim trails behind more and more over time. This "lagging behind" effect stretches local clumps into elongated arms, and then twists those arms into a spiral formation.<br />When we compare our photographs of spiral-arm galaxies with what the theory predicts, we find that ... they have the wrong spiral. The outer edges aren't wound up as much as "flat-floor" theory predicts, and the outer ends of the arms, although they're definitely lagged, seem to be circling faster than ought to be possible.<br /><br />So something seemed to be wrong (or missing) with "flat-floor" theory. 
We could try to force the theory to agree with the galaxy photographs by tinkering with the inverse square law for gravity (which is a little difficult, but there have been suggestions based on variable dimensionality and string theory, or <a href="http://en.wikipedia.org/wiki/Modified_Newtonian_dynamics"><span>MOND</span></a>), or we could fiddle with the equations of motion, or we could try to find some way to make gravity weaker outside a galaxy, or stronger inside.<br /><br />The current "popular" approach is to assume that current GR and the "background floor" approach are both correct, and to conclude that there therefore has to be something else helping a galaxy's parts to cling together – by piling on extra local gravitation, we might be able to "splint" the arms to give them enough additional internal cohesiveness to stay together.<br /><br />Trouble is, this approach would require so </span><span style="font-style: italic;font-size:85%;" >much</span><span style="font-size:85%;"> extra gravity that we end up having to invent a whole new substance – </span><span style="font-weight: bold;font-size:85%;" >dark matter</span><span style="font-size:85%;"> – to go with it.<br />We have no idea what this invented "dark matter" might be, or why it might be there, or what useful theoretical function it might perform, other than making our current calculations come out right. It has no theoretical basis or purpose other than to force the current GR calculations to make a better fit to the photographs. Its only real properties are that its distribution shadows that of "normal" matter, it has gravity, and ... 
we can't see it or measure it independently.<br /><br />So it'd </span><span style="font-style: italic;font-size:85%;" >seem</span><span style="font-size:85%;"> that the whole point of the "dark matter" idea is just to recreate the same results that we'd have gotten anyway by "losing the floor".</span><br /><br /></blockquote><span style="font-weight: bold;">Result 3: Enhanced overall expansion</span><br /><blockquote><span style="font-size:85%;">Because the voids are now expanding faster than the intervening regions, the overall expansion rate of the universe is greater, and ... as seen from within the galactic regions ... the expansion seems faster than we could explain if we extrapolated a galaxy-dweller's sense of local floor out to the vast voids between galaxies. To someone inside a galaxy, applying the "homogeneous universe" idealisation too literally, this overall expansion can't be explained unless there's some additional long-range, negatively-gravitating field pushing everything apart.<br /><br />So again, the current "popular" approach is to invent another new thing to explain the disagreement between our current "flat-floor" calculations and actual observations. </span><span style="font-style: italic;font-size:85%;" >This</span><span style="font-size:85%;"> one, we call "<a href="http://nasascience.nasa.gov/astrophysics/what-is-dark-energy"><span style="font-weight: bold;">Dark Energy</span></a>", and again, it seems to be another back-door way of recreating the results we'd get by losing the assumed gravitational background floor.</span></blockquote><br />So here's the funny thing. We <span style="font-style: italic;">know</span> that the assumption of a "homogeneous" universe is iffy. Matter is <span style="font-style: italic;">not</span> evenly spread throughout the universe as a smooth mist of individual atoms. It's clumped into stars and planets, which are clumped into star systems, which are clumped into galaxies. 
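The orbital-mechanics claim in Result 2 above (stars at the outer rim have to circle more slowly) can be checked with a few lines of arithmetic. Here is a minimal numerical sketch – my illustration, using round, roughly galaxy-scale numbers rather than anything from the post: if essentially all of a galaxy's visible mass sits inside a star's orbit, Newton gives a circular-orbit speed v = √(GM/r), which falls off as 1/√r.

```python
import math

G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # one kiloparsec, m

def keplerian_speed(enclosed_mass_kg, radius_m):
    """Circular-orbit speed v = sqrt(G*M/r), valid when the mass
    enclosed by the orbit dominates and is roughly spherical."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

# Toy galaxy: 1e11 solar masses of visible matter, all assumed to sit
# well inside the orbits we sample (an illustrative simplification).
M_galaxy = 1e11 * M_SUN

for r_kpc in (5, 10, 20, 40):
    v_kms = keplerian_speed(M_galaxy, r_kpc * KPC) / 1000.0
    print(f"r = {r_kpc:2d} kpc  ->  Keplerian v = {v_kms:5.1f} km/s")
```

Each doubling of r cuts the predicted speed by √2, while observed spiral-galaxy rotation curves stay close to flat out to large radii – that gap is what dark matter, MOND, or the "no-floor" picture described here are each trying to close.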
Galaxies are ordered into larger void-surrounding structures. There's clumpiness and gappiness everywhere. It all looks a bit <a href="http://en.wikipedia.org/wiki/Fractal_cosmology"><span>fractal</span></a>.<br /><br />It might seem obvious that, having done the "smooth universe" calculations, we'd then go back and factor in the missing effect of clumpiness, and arrive at the above three (checkable) modifying effects, <span style="font-weight: bold;">(1)</span> lobing (showing up as "void" regions in the distribution of galaxies), <span style="font-weight: bold;">(2)</span> increased cohesion for rotating galaxies, and <span style="font-weight: bold;">(3)</span> a greater overall expansion rate. It also seems natural that, having done that exercise and made those tentative conditional predictions, when all three effects were discovered for real, the GR community would be in a happy mood.<br /><br />But we didn't get around to doing it. All three effects took us by surprise, and then we ended up scrabbling around for "bolt-on" solutions (<a href="http://imagine.gsfc.nasa.gov/docs/science/mysteries_l1/dark_energy.html">dark matter and dark energy</a>) to force the existing, potentially flawed approach to agree with the new observational evidence.<br /><br />The good news is that the "dark matter"/"dark energy" issue is probably fixable by changing our <span style="font-style: italic;">approach</span> to general relativity, without the sort of major bottom-up reengineering work needed to fix some of the other problems. At least with the "floor" issue, the "homogeneity" assumption is already recognised as a potential problem in GR, and not everyone's happy about our recent enthusiasm for inventing new features to fix short-term problems. 
We might already have the expertise <span style="font-style: italic;">and</span> the willpower to solve this one, comparatively quickly.<br /><br />Getting it fixed next year would be nice.ErkDemon (Eric Baird)http://www.blogger.com/profile/00430413494529535159noreply@blogger.com1