Yesterday was the start of the new year 2012… in the current version of the Gregorian calendar used in much of the world. In the old Julian calendar, yesterday was December 19, 2011. In the Hebrew calendar, we’re about three months from the end of the year 5772. In the Indian civil calendar, we’re a similar distance from the end of 1933 Saka Era. In the Islamic calendar, we’re early in the second month of 1433 AH. To astronomers, yesterday was the Julian day 2455927. In the traditional Chinese calendar, we’re a few weeks from the end of the year 28 (Year of the Metal Hare) of Cycle 78 (or Cycle 79 depending on whether you date from the beginning or the end of the reign of Emperor Huangdi).
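All of these calendars are just different counting schemes laid over the same unbroken sequence of days, so converting between them is mechanical arithmetic. As a small illustration (my own sketch, not drawn from any particular source), here's the standard conversion from a Gregorian date to the astronomers' Julian day number mentioned above:

```python
def gregorian_to_jdn(year, month, day):
    """Julian day number (the continuous noon-to-noon day count
    astronomers use) for a date in the Gregorian calendar."""
    a = (14 - month) // 12        # 1 for January/February, else 0
    y = year + 4800 - a
    m = month + 12 * a - 3        # March = 0, ..., February = 11
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

# Julian days run noon to noon, so the calendar date 1 January 2012 spans
# JD 2455927.5 (midnight UT) to 2455928.5; its noon-based day number:
print(gregorian_to_jdn(2012, 1, 1))   # 2455928
```

The half-day offset is why a single calendar date straddles two Julian day numbers: JD 2455927 ran from noon UT on 31 December to noon on 1 January.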
The point is, calendrical divisions are arbitrary human constructs. The Earth just moves continuously through its rotation and orbit from one moment to the next. Some people make a huge deal out of the start of a new year in their particular calendar. That’s fine, but it’s just an excuse for a party, not something that has any objective significance where the universe — or even the entirety of the human race — is concerned. Worth keeping that in mind just for a sense of proportion.
Now, about this silly Mayan meme that’s going around… I wish we would stop blaming the Maya for this dumb apocalyptic prophecy. The Maya (a more proper term than “Mayans”) had no apocalyptic tradition of any kind. What they had was a calendar that worked very well, and that they used to date events in their own lives and up to a few generations in the future. They probably never gave much thought to the year we call 2012, although they had a couple of written predictions/prophecies about events taking place long after that date. A lot of the jokes and cartoons you see circulating posit that the “Mayans” actually manufactured calendars going up to this year and then stopped. That’s not so. They didn’t bother making calendars more than a few centuries ahead. But the thing about calendars is, they’re cyclical. They can easily be extrapolated forward indefinitely. And the Maya calendar was profoundly cyclical, reflecting their view of time as a series of nested cycles repeating without end.
So what happened was this: the actual Maya/Mesoamerican calendar fell into disuse many centuries ago. More recently, in the mid-20th century, Western researchers reconstructed it and projected it forward into their own era, something the Maya themselves didn’t really bother to do, since why would they need to? Those modern researchers discovered that one of the longer cycles of the Mesoamerican calendar, the 13th baktun cycle, would come to an end within their lifetimes, in the year 2012. In 1966, a guy named Michael Coe wrote a book wherein he interpreted that as a Maya prophecy of what he called “Armageddon.” But of course Armageddon is a Biblical concept, and later scholars have consistently debunked Coe’s interpretation. There was nothing in Maya writings to suggest they saw the end of the 13th baktun as the end of the world — merely as the end of one cycle and the beginning of the next. In other words, New Year’s Day writ large, an excuse for a huge party but nothing more. Coe had imposed his own Western religious traditions onto his interpretation of a foreign culture — an elementary mistake for an archaeologist. But that didn’t stop other 20th-century Westerners who believed in a looming Apocalypse from latching onto Coe’s mistake and building a whole eschatological cottage industry out of it.
So there is no “Mayan” prediction of the end of the world. They don’t deserve the blame. End-times theology is a distinctly Christian belief system, and has been for some two thousand years. Many Christians since Biblical times have been convinced the world would end in their lifetimes, and they’ve always looked for “evidence” to justify the latest eschatological forecast. The Maya calendar just had the bad luck of getting co-opted by that process and misinterpreted to fit it. So please, let’s stop blaming the Maya for our own Western preoccupations.
After my earlier post on quantum teleportation, I’ve been wondering whether I want to include it in my science fiction in some way, but first I wanted to get some handle on whether it would be feasible in practice to teleport a macroscopic object rather than just individual particles or a Bose-Einstein condensate in a single coherent quantum state. Finding detailed discussions online is a bit tricky just using Google, but I found a discussion thread that goes into a fair amount of depth:
Granted, BBS threads, even on science forums, aren’t the best way to get information about the subject, but I’m in no mood to try to wade through a bunch of technical papers, and I’m only looking at it from the perspective of a fiction writer, so for now I’m content to let other people do the interpreting for me, though I still have to try to filter out the informed posts from the less informed ones.
It sounds like there may be some fundamental limitations that would prohibit teleporting humans or the like. Apparently what gets teleported are discrete properties like spin or charge. Teleporting a continuous variable, like the relative positions of multiple atoms or their momentum, would require infinite amounts of data, or so one of the posters says. Another poster countered that some measurement of continuous states was possible, citing a paper on arXiv.org, but the first replied that it was a limited, classical-resolution measurement and not precise enough to allow replicating a macroscopic object accurately. (Kind of like how Star Trek replicators have only “molecular resolution” and not “quantum resolution,” so they can recreate nonliving matter but not living beings, because the error rate would be fatally high.)
Then there’s this thread at the Bad Astronomy and Universe Today Forum (which seems to have been started by the same poster under a different username), in which it’s pointed out that a macroscopic object can never be truly isolated from its environment, which again would suggest that the amount of information you’d need to define its state exactly would be effectively unbounded.
And this thread from the same forum (which is definitely by the same poster since it has the same original post as the Physics Forums thread above) clarifies that thermal effects in the body would interfere with getting a precise scan; ideally you’d need to freeze the subject to extremely near absolute zero, which isn’t exactly conducive to survivable teleportation.
Another factor raised in this article from Null Hypothesis: The Journal of Unlikely Science is a simple matter of bandwidth: even if you didn’t need infinite information to transmit continuous states, you’d still need to transmit so much data to replicate a human body that it would take a great deal of time and energy to send — billions of years at our highest current transmission rate. And if you could get a much higher transmission rate, according to the link in the previous paragraph, you’d need to send such intense energy that it would become unfeasible — you’d basically be firing a very powerful beam of gamma radiation at the receiving station, and that’s more a death ray than a transporter beam. At the very least, in most instances it would take less time and energy just to travel physically than to send a teleport signal.
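To get a feel for the scale of the problem, here's a back-of-envelope sketch. Both input figures are my own illustrative guesses, not numbers from the article: suppose a merely classical description of a human body took around 10^31 bits, and you had a 100-gigabit-per-second link.

```python
# Back-of-envelope only: both input figures below are illustrative
# assumptions, not numbers taken from the linked article.
SECONDS_PER_YEAR = 3.156e7

def transmission_years(bits, bits_per_second):
    """Years needed to send `bits` over a link of the given rate."""
    return bits / bits_per_second / SECONDS_PER_YEAR

# ~1e31 bits (on the order of 1e27 atoms times a few hundred bits of
# state each -- a guess), sent over a 100 Gb/s link:
years = transmission_years(1e31, 1e11)
print(f"{years:.1e} years")   # ~3.2e12 years, far longer than the age of the universe
```

However you tweak the assumed inputs, the answer stays absurd by many orders of magnitude, which is the article's point.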
So the question this raises for me is: how “exact” do you actually need to get? It could be feasible using advanced nanofabrication technology to “print out” a human body that’s a good molecular-level match for the original person. As long as you recreated the DNA and RNA in the cells accurately, you could probably settle for just knowing how many of which type of cell the body had, and where they were located, so you could reduce the amount of data that needed to be sent by using these “generic” substitutions. You could even improve on the body, say, write out excess fat or burgeoning tumors, or rewrite defunct hair follicles as functioning ones, or add extra muscle, or even make more radical changes. (See Wil McCarthy’s The Queendom of Sol tetralogy for an illustration of this.)
Aside from matching (or refining) the genetic and epigenetic data, then, the key information you’d need to transmit a person with their identity intact would be an accurate brain scan. Otherwise you’ve just created an identical twin rather than duplicated the original person. So the question is, just how accurate would it have to be? As far as science is able to determine, thought and memory are classical-scale processes. According to this page which I used as a reference for quantum theory in DTI: Watching the Clock:
In quantum terms each neuron is an essentially classical object. Consequently quantum noise in the brain is at such a low level that it probably doesn’t often alter, except very rarely, the critical mechanistic behaviour of sufficient neurons to cause a decision to be different than we might otherwise expect. The consensus view amongst experts is that free-will is the consequence of the mechanistic operation of our brains, the firing of neurons, discharging across synapses etc. and fully compatible with the determinism of classical physics.
Sure, there are some theorists who argue that consciousness is based on quantum processes, and you hear a lot of talk about “microtubules” in the neurons operating on a quantum level, but there’s no experimental support yet, and the general consensus is that quantum effects in the brain would decohere well before they reached the scale at which the neurons’ activity occurs. So it might be possible to faithfully duplicate the entire mental state of a human brain using classical-level accuracy, so that mechanism in the research paper mentioned above might be applicable.
So the key issue that remains is the one that was the focus of my previous post: Is there continuity of consciousness between the original and the duplicate? What I reasoned there is that what creates our perception of ourselves as continuous beings is the ongoing interaction, and thus the quantum entanglement/correlation, among the particles of our brains. The specific particles may be expended and replaced, but the correlations within the entire overall structure give us our sense of continuity. So if the original subject and the teleported replica are quantum-entangled, that would make them the same continuous entity on a fundamental level even if separated in space and time. The question is, would that same principle apply even if the entanglement were between the original and a body that was not an exact quantum duplicate? I.e. if you used classical-level fabrication to synthesize a duplicate of a person and only quantum-teleported partial information about the state of the brain? You’d synthesize a brain and body that were almost exact replicas, and then transmit enough quantum data about the brain to essentially cancel out the discrepancies and make it effectively the same brain, with the entanglement providing the continuity. Thus you have a replica of the body but preserve a single continuous consciousness.
So the original body would not need to be scanned to destruction but the brain would. Remember, teleporting quantum state information requires changing the original state. You’d essentially be teleporting just the brain/mind into a new, possibly modified body, and leaving the old body behind as a corpse with a destroyed brain. Ickier than the ideal situation. But it still precludes the possibility of creating a viable “transporter duplicate.”
But the question is, how much “cheating” can you get away with? How small a percentage of the information defining you needs to be quantum-teleported rather than classically copied in order to ensure that your consciousness survives intact? How could science measure the difference between a synthesized replica that thinks it’s you and one that actually contains your original consciousness? How much entanglement, how much equivalence, is enough for continuity? Even if we assume the teleportation of the brain states alone is enough to make it the same brain, we run into the mind/body problem: the two are more linked than we have traditionally tended to think, and it may be premature to define consciousness as something that resides solely in the brain. The entire nervous and hormonal systems may play a role in it too. Still, if you were to have your legs amputated and replaced with prosthetics, that wouldn’t destroy your consciousness. So maybe teleporting just the brain states is enough.
But then there’s a simple mathematical question: does that really reduce the amount of data significantly? The mass of the brain is about 2 percent of the total mass of the body, so that only cuts the amount of data by a factor of about 50 (not quite two orders of magnitude). So it would take 2 billion years to transmit instead of 100 billion, say. To make it feasible, you’d have to “compress” the data still further — and we’d need a much deeper understanding of how the brain works before we could estimate how little of its structure we could get away with teleporting at a quantum level versus substituting with “generic” cellular/structural equivalents. (Of course it’s a total myth that “we use only 10% of our brains”; fMRI scans show conclusively that we make use of just about all the brain’s volume over the course of a day. But on a cellular level, a lot of that may be underlying substructure that could be “generically” replicated. Or maybe not. I don’t know enough about neurology to be sure.) Even so, I doubt the threshold percentage would be low enough to reduce the amount of data by even one further order of magnitude, let alone many.
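The scaling itself is trivial to check, using the round numbers from above:

```python
body_years = 100e9        # the round number above for transmitting a whole body
brain_fraction = 0.02     # brain is ~2% of total body mass
brain_years = body_years * brain_fraction
print(f"{brain_years:.1e} years")   # 2.0e+09 -- still two billion years
```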
And there’s still the thermal problem. There’s a lot of molecular motion in the brain, not just from its temperature but from the constant chemical exchange among neurons. You might not be able to get a detailed quantum scan of a living, active brain as opposed to a deep-frozen corpse, and I don’t have enough confidence in cryonics to believe a person could be frozen to near absolute zero and then revived.
Still… depending on what fictional universe I’m in and how much I’m willing to bend the rules, I might be willing to fudge things enough to include quantum teleportation if I have a good enough story reason for it, using the ideas discussed above to make it relatively more plausible. Maybe there are ways to transmit data at far higher rates than we can now conceive, and with less energy expenditure. And come to think of it, requiring that a subject be frozen solid before teleportation adds an interesting twist. Though it would rule out teleportation as the routine commute it is in Star Trek or Niven’s Known Space.
The big science news today is the announcement of the latest results from the Large Hadron Collider’s search for the elusive Higgs boson, and while the results are far from conclusive so far, they’re actually mildly encouraging. Two independent detectors got pretty much consistent results suggesting the possibility of a particle with a mass somewhere around 125 GeV. (That’s gigaelectronvolts — since E=mc^2, physicists measure particle mass in units of energy.)
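For a sense of scale (a quick conversion of my own, not from the articles linked below): 125 GeV works out to about 2.2×10^-25 kilograms, roughly 133 times the mass of a proton.

```python
# Converting a particle mass quoted in GeV to kilograms via E = mc^2.
GEV_IN_JOULES = 1.602176634e-10   # 1 GeV = 1e9 eV; 1 eV = 1.602176634e-19 J
C = 299_792_458.0                 # speed of light in m/s
PROTON_KG = 1.6726e-27            # proton mass, for comparison

def gev_to_kg(gev):
    """Mass in kilograms of a particle whose rest energy is `gev` GeV."""
    return gev * GEV_IN_JOULES / C**2

m = gev_to_kg(125)
print(f"{m:.3e} kg, about {m / PROTON_KG:.0f} proton masses")
```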
Here’s the New York Times piece on the news, including links to the raw data published on a site called TWiki (whose motto is probably not “bidi-bidi-bidi”). Here’s a more detailed article from New Scientist. However, the most useful link I’ve come across is this Higgs FAQ from the blog of particle physicist Matt Strassler. I’ve never quite understood what all this Higgs field/particle business was all about until recently, but I’m starting to get a handle on it now. The FAQ does a good job of explaining the Higgs field and its role, and why not finding the Higgs particle would be just as intriguing and useful a result as finding it. (Because the particle isn’t the key, it’s just the simplest and only known way of detecting the Higgs field, which is the thing that’s actually important. And if there were no particle, it would just mean the field is different than the simplest model suggests, or that it works in a different way, not that it didn’t exist.)
So nothing conclusive today, but interesting hints that will be pursued further. Apparently they’ll be able to confirm or deny this evidence by next summer, and if it doesn’t pan out, they’ll try something else when the LHC reaches full power in 2015.
I’ve been mulling over another subject that was suggested by the recent NOVA miniseries “The Fabric of the Cosmos,” hosted by physicist Brian Greene based on his book of the same name. I felt some of the ideas it put across were too fanciful, putting sensationalism over plausibility or clarity, and one of them was the topic of its concluding episode, “Universe or Multiverse?”
The premise of that episode was that, if the Big Bang happened as the result of localized symmetry-breaking in an ever-inflating realm of spacetime, then our universe could be just one “bubble” in a perpetually expanding cosmic foam, with other universes being separate “bubbles” with their own distinct physics and conditions, forever out of reach because the space (how many dimensions?) between us and them is forever expanding. Now, that’s okay as far as it goes. It’s a somewhat plausible, if untestable, notion given what we currently know. But what Greene chose to focus on was a rather outré ramification of this: the idea that if the multiverse is infinite, if there’s an infinite number of other universes alongside ours, then probability demands that some of them will be exact duplicates of our universe, just happening by random chance to have the exact same combination of particles and thus producing the same galaxies, stars, planets, species, individuals, etc. — kinda like how the famous infinite number of monkeys banging on an infinite number of typewriters will inevitably produce all great literature by chance. Thus, so the claim went, there could be other universes out there that are essentially parallels to our own with duplicates of ourselves, except maybe for some minor variations. (Or maybe universes where duplicate Earths and humans exist in different galaxies, or where a duplicate Milky Way coexists with a different configuration of galaxies, or all of the above.)
Note that this is entirely different from the concept of parallel timelines, the usual way of generating alternate Earths in science fiction. Parallel timelines aren’t separate universes, despite the erroneous tendency of SF to use the terms interchangeably. They’re coexisting quantum states of our own universe. The idea is that just as a single particle can exist in two or more quantum states at the same time, so can the entire universe. These alternate histories would branch off from a common origin, and thus it’s perfectly reasonable that they’d have their own Earths and human beings and the same individuals, at least if they diverged after those individuals were born. And there’s at least the remote possibility of communication or travel between them if nonlinear quantum mechanics could exist. What we’re talking about here is something else altogether, literal other universes that just happen by random chance to duplicate ours because it’s inevitable if there’s an infinite number of universes. While parallel timelines would be facets of the same physical universe we occupy, and would thus essentially be overlapping each other in the same place, these duplicate universes would be unreachably far away, except maybe by some kind of FTL or wormhole technology if such a thing could ever exist. And they might predate or postdate our own universe by billions of years.
But I think it was a flawed conceit to dwell on that aspect of the multiverse idea, and I have my problems with the reasoning employed. For one thing, it’s purely an ad hoc assumption that the multiverse is infinite rather than finite. If it’s finite, then there’s no guarantee that there would be other universes that exactly duplicate ours. Certainly there could be ones with compatible physical laws, with their own stars and galaxies and planets and life forms, but odds are they’d be different planets, different species, different individuals. No duplicate Earth, no duplicate Lincoln or Kennedy or Jet Li.
And if the multiverse is infinite, then sure, you could argue that with an infinite number of tries, it’s inevitable that our universe would be exactly duplicated somewhere. But the flip side to that argument is that if there’s an infinite number of universes, then the odds that any given universe would duplicate ours would be n divided by infinity, or effectively zero. In practical terms, if we found a way to visit other universes via wormholes or something, then we could search for an infinite amount of time before finding one that had its own Earth and human race and history duplicating ours except for having more goatees or whatever. Thus, by any realistic standard, such duplicates would be effectively nonexistent. (This is the problem with infinity as a concept in science — it tends to lead to absurdities and singularities. Physicists generally try to avoid infinities.) So while that result (the existence of duplicate universes) might be a logically sound consequence of the premise of an infinite multiverse, it’s also a trivial result, one that has no practical meaning and can’t be proven or falsified. So it’s not science, just sophistry. It’s angels dancing on the head of a pin. And that makes it a waste of time to focus on in a program that’s supposed to be about science.
Besides, it’s boring. The show presented us with the prospect that there could be an infinite number of possible forms for universes to take, whole other sets of physical laws, an unlimited range of possibilities… and all they wanted to talk about was duplicates of the world we already know? What a staggering failure of imagination — or what a staggering triumph of self-absorption. I would’ve been far more interested in hearing about the endless variety of universes that weren’t just like ours. Why not dazzle the viewers with some discussion about what physics would be like in a universe with more than three spatial dimensions? Or one with a higher or lower speed of light? That would’ve been so much cooler and more enlightening than the silly, dumbed-down examples they gave, like Earth with a ring around it or Brian Greene with four arms.
I suppose the one appeal of the infinite-monkeys premise is metafictional: You can use it to argue that if every remotely possible combination or interaction of particles is inevitable, then every fictional universe really happens somewhere. So, for instance, I could claim that my various fictional universes — my default/Only Superhuman universe, the Hub universe, the “No Dominion” universe, whatever else I might eventually get published — all coexist in the greater multiverse, and their different physical rules, different principles of FTL and whatever, could be explained by subtle variations in the laws of physics of their distinct universes (and yet somehow don’t prevent the fundamental interactions, dark energy, and so forth from having the exact same values so that stars and planets and life can form the same way). And it’s handy for fans who want to believe that, say, a crossover between Star Trek and Transformers, or Star Wars and Firefly, or whatever might be possible despite the huge differences in those universes’ histories and physics. But I’m not sure I find it desirable. To me, if there’s some planet in some unreachably distant universe that exactly duplicates Earth’s evolution and history, and has a duplicate of myself who’s writing this post at this equivalent point in his Earth’s orbit (which might be billions of years in the past or future relative to my “now,” if such a thing could even be meaningfully measured), I wouldn’t really think of him as me, or his Earth as being my Earth. So it wouldn’t really feel to me that those other fictional universes connected to my world’s history, and that would make them less meaningful.
Or would it? I mean, just going in, I know these fictional universes don’t have the same physical laws as our universe, that the specific characters or alien races or whatever that exist in them don’t exist in our world. So I know going in that they’re already separate realities from my own. Their versions of Earth and its history may correspond almost exactly to ours, yet they’re still separate entities. So maybe it’s no worse to think of my various written worlds (blog name drop!) as coexisting realms in an infinite multiverse than it is to think of them simply as independent fictional constructs.
And sure, sometimes I think it would be nice to have some sort of grand unified theory linking my universes together. I already tend to think of “No Dominion” as being in a parallel quantum timeline of my Default universe, because it has no visible discrepancies in physics or cosmology and has a lot of similar technological and social developments; it’s just that some technologies develop decades too early to be compatible with my published or soon-to-be-published Default-universe fiction. That won’t work for something like the Hub, though, since it has distinct differences in physical law. And yeah, I admit I’ve tried to think of a way to fit my universes together into a unified multiverse, at least in passing. I suppose the “infinite monkeys” idea could give me a means to do that.
But I don’t think I find it appealing, because it just multiplies the variables to such an insane degree. If these universes are just infinitely separated samples of an infinitely expanding metacosmos, then that doesn’t really unify them in any way, does it? They’re so far apart, so mutually unreachable, that the “connection” doesn’t really count as a connection at all. (After all, given the underlying physical premise, there’s no realistic chance of any kind of wormhole link or inter-universe crossover anyway.) It’s a trivial and useless result fictionally for the same reasons it is physically. And if they’re specks in an infinite sea of universes, it makes them all feel kind of irrelevant anyway. So why even bother? It’s simpler just to treat them as distinct fictional constructs and not bother trying to unify them. Besides, even if I know intellectually that the humanity and Earth and Milky Way of my fictional universes aren’t the same as my own, it’s more satisfying to pretend they are, to construct a satisfying illusion for the readers that they’re reading about an outgrowth of our own reality, than to pretend that they’re some totally separate duplicates in universes unreachably distant from ours. No point going out of my way to create a premise that alienates me and my audience from the universes they’re reading about. Granted, judging from some conversations I’ve had in the past, there are some people out there who wouldn’t have a problem with that. But it doesn’t really work for me.
A recent episode of NOVA’s miniseries The Fabric of the Cosmos, based on Brian Greene’s book of that name and hosted by Greene, featured a discussion of quantum teleportation, and it got me thinking about the subject again. I’ve considered it as a possible far-future technology for my science fiction, but I’ve been resistant to the idea of using it as a means for teleporting sentient beings, because of the question of whether the self survives. The question is, if your body is destroyed and an exact duplicate is created elsewhere, is that really the same you? The being who steps out of the teleporter on the other end has all your memories and personality and considers herself to be the same person who stepped into the transmitting station, and nobody else might be able to tell the difference, but it might still be that the original person’s awareness ceased forever the moment she was destroyed by the teleporter to create her duplicate. Nobody else could tell the difference, but she could tell — or rather, she could if she hadn’t ceased to exist. So is there an actual scientific way of resolving this dilemma, or is it doomed to remain a matter of philosophy and personal belief forever? Because I’m not stepping in one of those things — or having one of my beloved characters step into one, at any rate — unless I can be persuaded that there’s a continuity of self-awareness from one end to the other.
But I’ve done a lot of reading about quantum theory over the past couple of years as research for my Star Trek: Department of Temporal Investigations books, and just other reading in general before that about the idea of decoherence as an explanation for how the quantum world becomes “classical” at a macroscopic scale. And upon reconsidering quantum teleportation within that context, I’ve realized there’s a possible way to resolve this question, one that actually allows defining the self and its continuity in scientific rather than metaphysical terms and thus allows getting a more concrete handle on the problem.
The key is to look within the question itself and evaluate its unexamined assumptions. The question is, will my own sense of myself as a continuous, self-aware consciousness be carried through the teleportation process? To answer that, we must ask the deeper question, what is that sense of self-continuity in the first place? How does it come about? A brain is actually an ensemble of trillions of quantum particles, each in its own superposition of multiple states. What makes all those separate particles function as a continuous whole? Current quantum theory suggests that it’s their interaction. By interacting with one another, they become quantum-entangled: that is, as their multiple overlapping states interact, some of those states will correlate with one another. As those particles interact with still more particles, their states will correlate as well, and you’ll eventually get a whole ensemble of entangled particles whose quantum states are correlated with one another, functioning as a single whole. (Those other states that don’t correlate either fizzle out to insignificance under the Quantum Darwinism model or branch off into alternate timelines under the Many-Worlds model, but that’s irrelevant here.) It’s that collective, mutual correlation among all the different particles that creates what we think of as a macroscopic, classical reality. (This is why it appears to us that measuring a quantum particle collapses its wavefunction into a single state. “Measurement” is simply a process by which we correlate the states of our own quantum particles with a particular state of the measured particle, so that the ensemble of particles that we call ourselves reacts only to that state and doesn’t perceive the others.)
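The kind of correlation being described can be made concrete with a toy simulation. This is my own illustrative sketch, using an idealized two-qubit Bell state rather than anything brain-sized: each individual measurement result is random, but the two entangled qubits, measured in the same basis, always agree.

```python
import numpy as np

rng = np.random.default_rng(42)

# The Bell state (|00> + |11>)/sqrt(2): two maximally entangled qubits.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def measure_both(state, rng):
    """Projective measurement of both qubits in the computational basis."""
    probs = np.abs(state) ** 2            # Born rule
    outcome = rng.choice(4, p=probs)
    return outcome >> 1, outcome & 1      # (qubit A result, qubit B result)

results = [measure_both(bell, rng) for _ in range(10_000)]
agree = sum(a == b for a, b in results)
print(f"results agreed in {agree} of {len(results)} trials")   # agrees every time
```

The agreement is perfect because the state has no amplitude on the |01> and |10> outcomes; that correlated structure, scaled up to trillions of particles, is the "functioning as a single whole" described above.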
So when I perceive my mind as a continuous whole, an entity with an unbroken existence in time and space (at least the space within my skull) rather than just a collection of quarks and leptons, that sense of continuity exists because of the interaction and entanglement among my brain’s particles creating a correlation among their states. Even when I’m not being teleported, I rely on quantum entanglement to give me a continuous sense of existence.
So what happens when someone is quantum teleported? Her particles are thoroughly scanned along with the particles of a reference object, both of their states measured precisely and defined in relation to one another. The Heisenberg Uncertainty Principle doesn’t let you measure the position and momentum of her particles exactly at the same time, but the reference object lets you get around that: you don’t have to know the specific positions and momenta, you just have to know how they differ from those of the reference particles. (In my Star Trek fiction, I’ve explained the “Heisenberg compensator” of the transporter as being based on this principle.) Now, presumably that interaction between our subject and the reference object creates a correlation, an entanglement, between them. (I’m actually not entirely sure of that part, but I think it stands to reason.) Now, the reference object is already pre-entangled with a matter supply at the receiving end, and the quantum part of our subject’s state information is conveyed through that entanglement link, while the measurement results needed to make sense of it are transmitted “classically” (i.e. without direct entanglement) via a speed-of-light signal — so nothing usable actually travels faster than light, and the process can’t be completed until the classical signal arrives (although theoretically it could be stored and delivered on a hard drive, I imagine, if the storage capacity were high enough). Then that information — the exact quantum information of the original subject, who’s now been disintegrated (since you can’t record quantum states exactly without changing them) — is superimposed on the receiver station’s matter supply, transforming it into that exact, indistinguishable duplicate of the original.
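The bare-bones version of this protocol for a single qubit can actually be simulated directly. What follows is my own sketch of standard textbook quantum teleportation (not the reference-object, continuous-variable scheme a transporter would need; all names and structure are mine): an unknown qubit state is destroyed by a Bell measurement at the sender, two classical bits are sent, and the receiver's half of a pre-shared entangled pair is corrected into an exact copy of the original state.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, control, target):
    """CNOT on an n-qubit register (qubit 0 is the leftmost factor)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[out, basis] = 1
    return U

# A random, unknown state a|0> + b|1> to teleport (qubit 0, at the sender)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
psi = np.array([a / norm, b / norm])

# Qubits 1 and 2 start in the Bell pair (|00> + |11>)/sqrt(2): the
# pre-shared entanglement between sender and receiver.
bell = np.zeros(4, dtype=complex)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)
state = np.kron(psi, bell)                # 3-qubit register

# Bell measurement on qubits 0 and 1: CNOT(0->1), then H on qubit 0
state = cnot(3, 0, 1) @ state
state = kron(H, I2, I2) @ state

# Simulated projective measurement of qubits 0 and 1
probs = np.abs(state) ** 2
p = np.array([probs[4 * m0 + 2 * m1] + probs[4 * m0 + 2 * m1 + 1]
              for m0 in (0, 1) for m1 in (0, 1)])
outcome = rng.choice(4, p=p)
m0, m1 = outcome >> 1, outcome & 1

# Collapse: keep the receiver-qubit branch consistent with the outcome
remaining = state[[4 * m0 + 2 * m1, 4 * m0 + 2 * m1 + 1]]
remaining = remaining / np.linalg.norm(remaining)

# Classical correction at the receiver: X if m1 = 1, then Z if m0 = 1
if m1:
    remaining = X @ remaining
if m0:
    remaining = Z @ remaining

overlap = abs(np.vdot(psi, remaining))    # 1.0 up to rounding error
print(f"outcome bits: {m0}{m1}, fidelity |<psi|out>| = {overlap:.6f}")
```

Note that nobody ever learns the amplitudes a and b; the measurement yields only two random bits, and without those classically transmitted bits the receiver's qubit is useless, which is why the protocol can't outrun the light-speed signal.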
So what this means, or so it seems to me, is that the subject who steps into the teleporter station to be disintegrated and the one who steps out after being assembled are not only exact, indistinguishable duplicates, but share a quantum entanglement with one another. Just like the different parts of her brain and body were entangled and correlated with each other all along, thereby giving her a sense of herself as a continuous being. So if entanglement/correlation is what creates our perception of continuous classical existence, then it stands to reason that by any meaningful definition, the subject who steps out of the receiving station is continuous with the subject who stepped into the transmitting station. Even though there’s a separation in space and time, they’re still the same unbroken whole, because it’s the quantum entanglement that creates a “whole” out of a bunch of individual particles in the first place, and entanglement is independent of separation in spacetime. Two entangled particles separated by 50 light years will still be just as directly linked as they would be if they were physically adjacent. So the classical view that the teleported person ceases to exist for a time and is then reconstructed somewhere else is irrelevant on the quantum level that underlies reality. It’s still the same continuous self for the same reason that the self exists as a continuous thing in the first place.
So for my purposes, at least as it applies to fiction, I’m now reasonably convinced that quantum teleportation would preserve the continuity of self — that your own self-awareness, your consciousness, would be preserved and unbroken through the process even if you were physically nonexistent for years between transmission and reception (say, if you were beamed across an interstellar distance, or if the hard drive containing your classical data got lost for a while). It may seem unlikely, but only until you realize that what we perceive as a continuous reality is something of a quantum illusion to begin with, simply a coherent, large-scale correlation among certain states of a bunch of interacting particles.
Although there are still some massive obstacles to consider. Like, how does the teleporter actually work? How does it measure all 10-to-the-29th-power or so particles in a human body at once? Since doing so means completely disintegrating the subject, it would take a lot of energy, essentially a lot of heat; it would basically be vaporization. If it weren’t done instantaneously, wouldn’t that heat and energy rather radically affect the states of the adjacent particles you hadn’t gotten around to scanning yet? You’d need to scan fast enough to capture the complete information of an intact person. Conversely, if you did it more gradually, say, by using nanotech or pinpoint lasers to disintegrate a body bit by bit (a la TRON), there’s a question of timing. If you scan different parts of the brain at different times, then there will be a time lag between the different parts when you reassemble them, and that will alter their relation to one another. It would be like taking a blurred photo of a moving object because you left the shutter open too long. (Or more like using slit-scan photography to stretch out an image.) So even if the brain that came out the other end contained the same continuous mind that went in, the transition would still alter its mental state. Would that just be a minor hiccup in awareness, or would it actually alter your mind somehow? Probably the brain’s activity is dynamic enough that the effect would be temporary, but what about the physiological changes from one part of the body to another if the scan, or the reassembly, takes too long? The human body is a jellylike, wibbly-wobbly thing on the inside. If the process is too slow, then the different ends of a neural connection or a capillary might not line up right, because they moved during the scan or the reconstruction stage.
Still, I suppose if you had a powerful enough quantum computer, it might be able to record that amount of state information pretty quickly, and be able to employ corrective algorithms to compensate for any delays in the scanning or reconstruction process. It might be necessary to be sedated before transmission in order to avoid the discomfort of being disintegrated/reassembled or the sudden alterations of mental state that might occur as a result of scanning lag.
So I’m not going to jump right into using QT in my fiction without trying to get some handle on these practical questions first. It might not turn out to be better than the alternatives available in a given universe. It might be more practical or cost-efficient for teleporting small and simple objects or raw materials rather than sentient beings. But at least I no longer consider the continuity-of-self question to be insurmountable, and that was the most important objection I had. So the door is open now.
I’ve just read a very interesting paper reassessing the idea of a Galactic Habitable Zone. This notion came along about a decade ago, and it was based on the idea that there were limiting factors on potential habitability based on a star system’s location in the galaxy. The idea was that if you were too close to the galactic center, there’d be too many supernovae happening near your planet and they’d repeatedly sterilize it before complex life could form; whereas if you were too far from the galactic center, the metallicity of stars in that region would be too low for the formation of terrestrial planets large enough to support life. Some scientists concluded that habitable worlds were only likely to be found in a narrow band a couple of thousand parsecs wide, with the Sun being just about in the middle of that range.
Now, I was always skeptical of this. It seemed to me that all the science really showed was that habitable planets would be less likely or less common outside the “GHZ,” not that they’d be completely nonexistent. I figured it would make more sense to call it a Galactic Temperate Zone rather than a Habitable Zone.
So I’m pleased with the findings of this new paper, “A Model of Habitability Within the Milky Way Galaxy” by Michael G. Gowanlock, David R. Patton, and Sabine M. McConnell (arXiv:1107.1286v1). The abstract reads:
We present a model of the Galactic Habitable Zone (GHZ), described in terms of the spatial and temporal dimensions of the Galaxy that may favour the development of complex life. The Milky Way galaxy is modelled using a computational approach by populating stars and their planetary systems on an individual basis using Monte-Carlo methods. We begin with well-established properties of the disk of the Milky Way, such as the stellar number density distribution, the initial mass function, the star formation history, and the metallicity gradient as a function of radial position and time. We vary some of these properties, creating four models to test the sensitivity of our assumptions. To assess habitability on the Galactic scale, we model supernova rates, planet formation, and the time required for complex life to evolve. Our study improves on other literature on the GHZ by populating stars on an individual basis and by modelling SNII and SNIa sterilizations by selecting their progenitors from within this preexisting stellar population. Furthermore, we consider habitability on tidally locked and non-tidally locked planets separately, and study habitability as a function of height above and below the Galactic midplane. In the model that most accurately reproduces the properties of the Galaxy, the results indicate that an individual SNIa is ~5.6× more lethal than an individual SNII on average. In addition, we predict that ~1.2% of all stars host a planet that may have been capable of supporting complex life at some point in the history of the Galaxy. Of those stars with a habitable planet, ~75% of planets are predicted to be in a tidally locked configuration with their host star. The majority of these planets that may support complex life are found towards the inner Galaxy, distributed within, and significantly above and below, the Galactic midplane.
Basically, it’s saying two things: One, that the sterilizing effects of supernovae in the inner galaxy aren’t as pervasive as formerly claimed (this study actually modelled individual stars rather than going for a statistical aggregate), and two, that the sheer number of stars in the inner galaxy is so great in proportion to the outer galaxy that it more than cancels out the supernova effect. That is, it’s true that a much higher percentage of habitable planets get sterilized the closer you get to the center of the galactic disk (and it only covers the disk, not the central bulge), but the number of stars is so much greater that said percentage still comes out to a much larger amount. For instance, at a galactic radius of 2.5 thousand parsecs (kpc) you might have 10% of 7.5 million candidates = 750,000 habitable worlds, while at 12 kpc you might have more like 70% of 1 million candidates = 700,000 habitable worlds. Although the numbers actually add up so that maybe half the life-bearing worlds in the model are between 2.5-4 kpc, partly because star formation began earlier in the inner galaxy so there’s been time for more habitable worlds to develop life (assuming a uniform rate of life emergence comparable to Earth’s, which is a huge and rather arbitrary assumption, but we don’t have any other examples yet).
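To make the tradeoff concrete, here’s that back-of-envelope arithmetic in code form. These numbers are my own illustration from the sentence above, not figures taken from the paper:

```python
# Toy version of the inner-vs-outer galaxy tradeoff: a higher
# sterilization rate near the center can still yield comparable
# absolute numbers, because there are so many more candidate stars.
regions = {
    # radius (kpc): (candidate stars, fraction NOT sterilized)
    2.5: (7_500_000, 0.10),
    12.0: (1_000_000, 0.70),
}

for radius_kpc, (candidates, surviving_fraction) in regions.items():
    habitable = round(candidates * surviving_fraction)
    print(f"{radius_kpc} kpc: {habitable:,} habitable worlds")
# → 2.5 kpc: 750,000 habitable worlds
# → 12.0 kpc: 700,000 habitable worlds
```

So even with seven times the sterilization losses, the crowded inner region ends up with slightly more survivors in this toy example, which is the paper’s qualitative point.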
Anyway, the exact numbers aren’t very meaningful due to all the assumptions and conjectures (and the paper shows the differing results of four separate galactic models it used), but the overall trend is what’s significant. It suggests that inhabited worlds could be found just about anywhere in the galactic disk, much more broadly than past studies have suggested. So the “Galactic Habitable Zone” is pretty much the whole disk.
So to get speculative here, what might it be like for intelligent life in the inner disk? Well, neighboring intelligences would probably be closer together than out here. They’d occupy a smaller percentage of the stars, but the stars would be packed more tightly. And that might make it easier for them to expand and colonize, to hopscotch across the star systems and reach one another. (After all, this paper only considers worlds where life evolved independently. Once you throw colonization into the mix, all bets are off.) So it might be easier to form a robust interstellar community in the inner galaxy. Particularly since there’d be a much higher number of uninhabited worlds for every inhabited world — hence more resources and territory to go around, reducing the pressure for conflict.
Conversely, maybe the earliest species to develop would’ve had an easier time colonizing the more densely packed inner galaxy, settling other habitable worlds before they could develop their own indigenous sentience, or finding indigenous intelligences so far behind them that they could be easily dominated and crowded out. So maybe the inner galaxy would be characterized by regions that were dominated by the descendants of a single species each. No telling what might happen when two such regions impinged on one another.
In either case, though, the paper estimates that life in the inner disk has maybe a 2-billion-year head start on us, on average. So why haven’t they made their way out here? That’s a trickier question to speculate about. I think I’ll let it go for now.
From Centauri Dreams:
A new paper has revised our estimates of how wide the habitable zone would be around a red dwarf star. This comes from taking into account the difference in spectrum between a red dwarf and a Sunlike star. Compared to the Sun, red dwarfs give off a higher percentage of their EM radiation in longer wavelengths like red and infrared, wavelengths that aren’t reflected as much by snow and ice as shorter wavelengths are. So while in our system, a planet covered in snow and ice would reflect a lot of heat back into space and thus reinforce its own freezing (runaway glaciation), in a red-dwarf system it would retain a higher percentage of the star’s heat and could stay unfrozen at a larger proportional distance than we’d thought. This could make red-dwarf habitable zones 10-30 percent wider than we thought, and increase the potential number of habitable planets around them. Sure, those HZs are pretty narrow to begin with, but red dwarfs constitute maybe 70-80 percent of the galaxy’s stars, so it adds up.
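The widening follows from simple radiative balance: a planet’s equilibrium temperature scales as d^(-1/2) · (1−A)^(1/4), where d is orbital distance and A is the Bond albedo, so at a fixed temperature the distance scales as (1−A)^(1/2). A quick sketch with made-up albedo values (the real numbers depend on the star’s spectrum and the ice’s composition, so treat these as placeholders):

```python
import math

def matched_distance(d_ref, albedo_ref, albedo):
    """Orbital distance giving the same equilibrium temperature after a
    change in Bond albedo. From T_eq ∝ d**-0.5 * (1 - A)**0.25, holding
    T_eq fixed gives d ∝ (1 - A)**0.5."""
    return d_ref * math.sqrt((1 - albedo) / (1 - albedo_ref))

# Illustrative (made-up) albedos: ice reflects red/IR light less well
# than sunlight, so an ice-covered planet keeps more of a red dwarf's heat.
d_outer_sunlike = 1.0    # reference outer HZ edge, arbitrary units
stretch = matched_distance(d_outer_sunlike, albedo_ref=0.60, albedo=0.45)
print(f"outer edge moves out by ~{(stretch - 1) * 100:.0f}%")
# → outer edge moves out by ~17%
```

With these particular placeholder albedos the outer edge moves out by roughly a sixth, which lands in the 10-30 percent range the article describes.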
Be sure to read the comments. There’s a lot of interesting discussion about other factors that would go into habitability around a red dwarf. It’s a lively and ever-evolving subject in astrophysics today.
Here’s a fascinating article about octopus intelligence and the ways in which it’s profoundly different from ours:
As a science fiction fan and author, I’m always fascinated by research revealing that other sapient species probably exist right here on Earth — be it apes, dolphins, elephants, whatever. And the “smarter than we ever imagined” club keeps broadening. Now it’s grown to encompass birds and cephalopods like octopus and squid. (And yes, the plural of octopus is octopus, octopuses, or octopodes, not “octopi.” It’s from Greek, not Latin; -pus means “foot” and its plural is -podes.) This article is particularly interesting to me as an alien-builder in its discussion of how radically different the octopus’s senses and perceptions are, and how different are the reasons behind its evolution of intelligence. It might be premature to read consciousness into the octopus’s behavior, but it might be so alien that it’s hard to define what is or isn’t conscious.
Although it’s kind of heartening that, even across such a gulf, the article describes such a bond of affection between human and octopus. Even though octopus aren’t particularly social, and often fight with rival octopus, some of them do seem to show affinity or at least interest toward certain humans. We often assume in SF that the gulfs between different sapient species might be too great to surmount if they’re different enough (see Orson Scott Card’s Ender novels, for instance). But I tend to think maybe the opposite might be true. We often get along better with other species than we do with our own kind. Like the way dolphins are famously benevolent and protective toward humans even though they’re often quite aggressive toward other dolphins. I think it’s because members of other species are rarely our rivals in the same way that members of our own species can be, so there’s less incentive for fear and hostility. And I think it’s because intelligent minds have a natural tendency to reach out to other intelligent minds. At least that’s true with mammalian and avian species, whose intelligence arises from the need for complex social interaction and communication. But this article says that octopodan intelligence didn’t come from social needs, but may instead have come from the need to be adaptable in strategies for pursuing various forms of prey, fleeing various forms of predator, and dealing with the changing environment of the sea. So what, in that case, could be the incentive driving this form of intelligence to connect with others? Perhaps simple curiosity. Perhaps an intelligent mind can recognize that another intelligent mind, particularly an alien one, is something it can learn new things from. And if a species’ intelligence arises from the need to adapt and innovate in order to survive, then surely there would be a survival imperative to seek new knowledge, new insight. 
Even if we have nothing else in common with another intelligence, we may have curiosity and the willingness to learn in common. That could be the basis for understanding with even the most alien intelligences.
In any case, this article makes me rethink my assumption that all intelligent species would be social species. Certainly many would be, and those would be the aliens that we could probably get along with the most easily, the ones most likely to join into interstellar federations and commonwealths and leagues and whatnot. But there could be others as well, species that evolved a less social form of intelligence. It’s doubtful that they’d have much in the way of civilization, though, if they couldn’t cooperate and organize. But maybe they’d find a way. It’s certainly interesting to think about.
I’ve read a couple of articles lately about a new book by Harvard social scientist Steven Pinker, who’s done research showing that, contrary to what a lot of people and a lot of dystopian science fiction tends to assume, human society has actually become less violent over time. Here are some excerpts from an article in New Scientist:
I was struck by a graph I saw of homicide rates in British towns and cities going back to the 14th century. The rates had plummeted by between 30 and 100-fold. That stuck with me, because you tend to have an image of medieval times with happy peasants coexisting in close-knit communities, whereas we think of the present as filled with school shootings and mugging and terrorist attacks.
Then in Lawrence Keeley’s 1996 book War Before Civilization I read that modern states at their worst, such as Germany in the 20th century or France in the 19th century, had rates of death in warfare that were dwarfed by those of hunter-gatherer and hunter-horticultural societies. That too, is of profound significance in terms of our understanding of the costs and benefits of civilisation.
…How do you explain the decline in violence?
I don’t think there is a single answer. One cause is government, that is, third-party dispute resolution: courts and police with a monopoly on the legitimate use of force. Everywhere you look for comparisons of life under anarchy and life under government, life under government is less violent. The evidence includes transitions such as the European homicide decline since the Middle Ages, which coincided with the expansion and consolidation of kingdoms; the transition from tribal anarchy to the first states. Watching the movie in reverse, in today’s failed states violence goes through the roof.
Do you think commerce helps too?
Commerce, trade and exchange make other people more valuable alive than dead, and mean that people try to anticipate what the other guy needs and wants. It engages the mechanisms of reciprocal altruism, as the evolutionary biologists call it, as opposed to raw dominance.
What else has contributed to the decline?
The expansion of literacy, journalism, history, science – all of the ways in which we see the world from the other guy’s point of view. Feminisation is another reason for the decline. As women are empowered, violence can come down, for a number of reasons.
I’m not entirely sure about all his points. I’ve gathered that pre-agrarian, hunter-gatherer societies tend to be fairly peaceful on the whole, the idea being that it’s when we settle down and don’t need to hunt for food as much that our predatory instincts go unfulfilled and get misdirected into war, conquest, oppression, rape, etc. Also I wonder if he’s reading too much into percentages — the percentage of the population touched by violence may be declining simply because there are so many more people around.
Still, I think there’s a lot of merit to his position, at least with regard to the period since civilization began. I’ve had the same impression myself for a long time: that states and societies in the past were far more prone to resort to killing, torture, and the like, and that these things are a lot less acceptable now as we’ve developed better alternatives, more ethical justice systems and bodies of law. When people in the past would resolve disputes with duels to the death, today they’d resolve them with litigation or smearing their opponents in the media. Either of which can get ugly, but it’s better than the alternative.
What’s nice about this model is that it’s nonpartisan. It says government helps reduce violence, which should make lefties happy, but it also says commerce and business do the same, which should satisfy those on the right. Which fits what I believe, that it’s a healthy balance between the two institutions, government and business cooperating and curbing each other’s excesses, that works the best. Both can certainly be abused and mishandled, but both have the potential to do great good with the right approach.
So why does it seem to us that the world is so much more violent? Pinker says that as violence becomes more uncommon, those acts of violence that do occur stand out more and are more shocking. They don’t blend into the noise the way they once would have.
Pinker says the changes are more likely social/environmental than evolutionary, which makes sense, since it’s only been a few thousand years since civilization began, hardly a blink of an eye in evolutionary terms. Still, it seems to me that if a society has come to see violence as unacceptable, then that could influence evolution, because people more prone to violent behavior could come to be seen as less desirable mates. Not to mention that people leading more violent lives might be more likely to get themselves killed off, or locked away without access to prospective mates, and thus would make less of a contribution to the gene pool. So it might not be a factor now, but if civilization continues along these lines for long enough, then it could affect our evolution over the next few millennia or so. (Which also says something about alien civilizations in science fiction. If they’ve been civilized for tens or hundreds of millennia, they could also have evolved to be less aggressive.)
This is heartening for me to read about, since it reinforces what I’ve always believed about humanity’s potential to improve. It makes the optimistic future seen in Star Trek, and the one I seek to depict in my original fiction, more credible. Maybe it will persuade other SF writers to explore more optimistic futures as well, or at least not be so quick to default to dystopias. And who knows? Maybe if people in general can recognize that civilization is heading in a positive direction, it will inspire them to work for further improvement.
We finally got a news conference from the Dawn team reporting on their preliminary findings from the survey orbits of Vesta. Here’s a link to the video (I don’t seem to be able to embed it properly):
Summary of the findings that stood out to me:
- The huge impact basin that takes up most of the southern hemisphere of Vesta is actually two huge impact basins. The main one with the big mountain at the center, which is called Rheasilvia Basin, is overlapping another, slightly smaller and somewhat older impact basin. So Vesta’s southern hemisphere was struck by a huge impactor not just once, but on two separate occasions. The older basin and the mountain in Rheasilvia haven’t been named yet; the mountain is currently just called the “Central Complex” of Rheasilvia.
- The mysterious grooves that gird most of Vesta’s equatorial region are some kind of “ripples” resulting from the impacts. There are two sets of grooves, the main one that circles 2/3 of the planetoid, and an older set in the northern part of the other 1/3. The older set seems to be roughly centered on the older southern impact basin, and the younger equatorial ridges roughly center on Rheasilvia.
- There’s a wide variety of different materials visible on the surface, suggesting a complex geologic history, but no hard evidence yet that Vesta’s mantle is exposed in the impact basins. However, it could be buried under looser regolith (i.e. “soil”). But the geologists are excited at how well Vesta’s surface has retained a record of its complicated history.
So how does this affect what I wrote about Vesta in Only Superhuman? Well, I didn’t really say that much about the planetoid itself, but there is one sentence in Chapter 4 that I’ll definitely have to reword in copyedits, a description of the southern polar “crater” that’s no longer accurate. Hopefully they’ll at least coin a name for the big mountain before the text gets locked down.
What’s surprising to me is how little attention the Vesta mission is getting in the news. It’s been hours since the press conference ended, and it’s very hard to find coverage of it, even on the science news sites. And at the conference itself, even though it went out live over the Internet, there were only a few questions and a lot of dead air during the Q&A period. I mean, this is exciting stuff! Vesta is one of the weirdest, coolest worlds we’ve seen, with all sorts of fascinating features.
As weird as it may seem, one model describes almost all mammal coloration patterns. All it takes to make an animal a certain color is the interaction of a couple of chemicals with the skin. One chemical stimulates melanin, causing darker coloration in the skin and fur of mammals, while another keeps melanin from being produced. These spread outwards through the body of the animal in the same way in every mammal….
The key to the differences in coloration is the fact that the chemicals spread outward in waves at different phases during the gestation period. Some start their move when the embryo is still tiny. Some start when it’s nearly fully grown. If the animal is tiny, no pattern will form, which is why there aren’t a lot of tiger-striped mice out there. If it’s huge, the chemicals jumble outwards and back, interfering with each other until they form a uniform color. This is why there aren’t any tiger-striped elephants.
And tigers? Their chemical waves move out at just the right time to form a series of peaks and valleys that lead to striped patterns on their fur. Leopards, though smaller than tigers, get hit with the waves at an embryonic stage at which they’re a little bigger than the tigers. The waves interfere enough to form spots on their bodies. Giraffes get hit at a bigger stage and form the large brown patches that we see on them.
Also, it depends on the shape and size of the body part, which is why spotted cats have striped tails. The original report is here:
This is really cool to know: that something as beautiful as the stripes on my beloved cat Tasha is an expression of math and physics, an interference pattern between chemical waves. And it might explain something about her brother Shadow. When he was a very small kitten, he had faint stripes of lighter and darker gray, but as he got older, the stripes vanished and he became a solid (and totally gorgeous) dark gray. Maybe the interference process was still ongoing.
It’s also useful to me as an SF writer and alien-creator to know this. This specific formula only applies to mammals, but the original article says that similar math can explain butterfly wings and striped fish. So maybe if I create some giant alien creature in a future novel or story, I’ll take care not to give it stripes or spots.
I seem to go through phases in my writing — sometimes I get in a groove and the words just pour out of me, but then I inevitably lose it and end up at a point where it’s a struggle to figure out a single sentence and I just slog forward, and I keep getting distracted by other things. I’ve been in the latter phase lately, though I’ve been making a bit more progress since the start of the month (I have to, since my deadline’s the end of the month). I’ve had a couple of decent-to-good days so far, but today wasn’t shaping up to be one of them. I was at a point where I wasn’t sure what came next, since the outline was vague on it. I managed to piece together a roughly 900-word scene, but I didn’t know how to move forward from it, and my mind resisted focusing. So I’d reached the point where I was ready to just give up for the day and watch more Mission: Impossible.
But I decided to wash the dishes first, since I had two days’ worth accumulated in the sink. And while I was doing that, which took quite a while, I started to think about what I would do in the next scene, presumably tomorrow. And I had an idea or two, and with those seeds it started to coalesce a little. So I decided I’d sit down and write down at least those bits I’d come up with. So I did that, but once I had the beginning, it gave me a foundation to build on and I kept going. And so I finished the whole scene. And then I was going to stop but I already had in mind how to start the next scene, so I wrote the start of it, and then I kept going. And by the time I was done, an hour and a half had passed and I’d added another 1600 words to the count. And I’m through that whole vague portion of the outline and ready to get on with the next bit.
The only downside is that it’s now 11:30 PM, later than I wanted to stay up. I’ve been having trouble getting to sleep the past few days, and I was just starting to get over it last night. Now I might’ve thrown off my sleep cycle again. Which means I should stop typing and go to bed.
In the library the other day, I happened upon a DVD of a movie called Agora, which caught my interest because it was about Hypatia of Alexandria, the philosopher/astronomer who’s the most (and probably only) famous female scholar of the Hellenistic age, and whose murder has been cited by some as the downfall of that age and the beginning of the Dark Ages. I’ve been interested in Hypatia’s story ever since I learned about her from Carl Sagan’s Cosmos, and apparently so was the film’s director, Alejandro Amenábar. He started out wanting to make a film about the history of astronomy from Hypatia to Einstein, but ended up focusing specifically on Hypatia and the Alexandria she lived in, but as a microcosm representing a much larger story.
The film is a Spanish production, but has a multinational cast and crew and is in English. It stars Rachel Weisz as Hypatia, beginning in her youth as a lecturer at the Library of Alexandria (or rather, the Serapeum, where the surviving texts were brought when the original Library was burned in Caesar’s time), where she lectured and taught disciples of both pagan and Christian faiths, and was caught in the middle as tensions between the faiths erupted into violence, ultimately leading to the trashing of the Library by decree of the (Christian) Roman emperor. It then jumps forward to the final years of her life, in which she was a chief advisor to the city’s prefect Orestes (Oscar Isaac) and thus came under challenge by Bishop Cyril (Sammy Samir), who objected to a pagan (and according to the film, a woman) advising the now-Christian city’s ruler. Ultimately Hypatia was killed by a mob, and her teachings, like the contents of the Library, were lost to posterity.
The film does a marvelously researched and detailed job recreating 4th/5th-century Alexandria, as is fascinatingly discussed in the DVD’s hourlong making-of feature, and is made with considerable naturalism and verisimilitude. But it should not be mistaken for an accurate account of Hypatia’s life and death. As I said, the film uses these events as a microcosm or symbol of the history of science and the transition from the Hellenistic era to the Dark Ages, so a lot of historical liberties are taken. The timeline is compressed, the main characters not significantly aging even though the two halves of the film represent events that came some 24 years apart. Hypatia’s friend Synesius (Rupert Evans), whose letters represent the principal historical source for Hypatia’s life, is present in the film until the end even though the real Synesius died two years before her. Orestes is consolidated with another, unnamed figure from history, a suitor whom Hypatia rejected in a rather infamous way that’s depicted in the film. The second lead, Max Minghella, plays a fictional character, Davus, who starts out as Hypatia’s adoring slave and then joins the Christians and pretty much goes through the film not being sure what side he’s on; if anything, he’s sort of a personification of the zeitgeist of Alexandria itself, a viewpoint character through whom the audience can follow the shifting political and cultural forces pulling the city in multiple directions. Most of all, Hypatia herself is shown anticipating millennia of scientific progress, starting out a firm believer in the Ptolemaic geocentric model of the universe but gradually coming around to Aristarchus’s heliocentric model and even making Kepler’s breakthrough, realizing that the orbits of the planets are ellipses rather than circles, only to be killed before she can pass the insight on to posterity. 
(The film’s POV periodically rises into space, looking down on the Earth, reminding us of the cosmic truths that Hypatia seeks and the other characters in the film remain ignorant of.)
Is this too great a break from reality? Not necessarily, though it’s a stretch. Hypatia’s work and writings were lost to history, as many insights of the ancient Greeks were lost to the Western world for thousands of years (though the Muslim world retained and expanded on a lot of it — it’s important to remember that the Dark Ages were a phenomenon of Western Europe, not the entire world). Hypatia’s expertise in astronomy and geometry gave her the necessary grounding, so if anyone in the period could’ve figured it out, it’s believable that she could have. What gives me the most pause is the scene showing her conducting an experiment to prove the existence of inertia, supporting the idea that the Earth could move and we couldn’t detect the motion because our frame of reference moves with it. The thing is, the main reason Hellenistic science fell short of the modern breakthroughs it was on the verge of reaching was that it was a slavery-based society. The intellectuals who did the thinking considered actually doing things, building things and performing work and so forth, to be the business of slaves, beneath their notice, so they didn’t have the mindset to consider practical experimentation useful as a means of testing their hypotheses. And the movie does portray Hypatia as taking the institution of slavery for granted. Then again, the crux of its portrayal of the character is her ability to question her preconceptions, so maybe she could’ve overcome that prejudice along with her Ptolemaic prejudices.
And that’s pivotal. The film’s portrayal of Hypatia anticipating Kepler isn’t meant to be historically accurate, but symbolic. For one thing, it symbolizes all the unknowable works, writings, and insights of the ancients that were tragically lost when the Library of Alexandria was sacked, when Hellenistic knowledge was condemned as paganism and destroyed, when great minds like Hypatia were persecuted and killed. With so much of their work lost to posterity, who knows what they might have discovered? More fundamentally, it symbolizes the theme of the film. All around Hypatia, throughout Alexandria, characters of all faiths are preoccupied with the purity and perfection of their beliefs to the degree that they feel driven to persecute, expel, or murder those who believe differently, and thereby completely miss the point of their beliefs. And yet Hypatia’s scientific quest leads her to realize that the surface we stand on may not be fixed, that our perceptions may be relative to our frame of reference, and that there may be more than one center of all things. Her discoveries in the film may be anachronistic, but they symbolize the film’s message about the folly of dogmatism and intolerance, and they echo her own willingness, even need, to question and move beyond what she believes.
Apparently some have denounced the film as anti-Christian, but I don’t think that’s true. Pagans, Christians, and Jews are all shown persecuting and attacking people of other faiths, and there are decent characters of all faiths who are defined by their willingness to look beyond religious categories and accept those who disagree. So it’s not against any one religion, it’s against the abuse of religion as an excuse for intolerance and persecution. Indeed, the film is full of morally ambiguous characters, people who are capable of the best and worst of humanity. A key example is Ammonius (Ashraf Barhom), leader of a Christian group called the Parabalani, who goes from standing by while his followers toss a pagan into a firepit in one scene to encouraging Davus to give food to the starving beggars in another. Even Hypatia, whose portrayal is largely hagiographic (Amenábar even admits to painting her as a secular Christ figure), has the flaw of accepting slavery, and even though she’s mostly kind to her slave Davus, it’s her casual condescension that ultimately drives him to join the Christian mob. The one character who isn’t ambiguous is Bishop Cyril, the main antagonist of the film’s second half. He comes off as menacing, manipulative, and hateful, a man who lives to persecute those who don’t fit his standards of purity and keeps narrowing those standards until he’s turning against fellow Christians for not being pure enough. Indeed, he’s even given a speech wherein he condemns a Jewish attack on the Parabalani by declaring, a bit metatextually and heavy-handedly, that the Jews must be exiled and condemned until the end of time. Now, what I’ve read about Cyril and his role in the expulsion of the Jews from Alexandria and the murder of Hypatia doesn’t exactly incline me positively toward him, but there are plenty of characters in this film who commit horrendous acts and still have a sympathetic side, while even the idealized Hypatia is given one or two flaws. 
This isn’t a film about good vs. evil, but about human fallibility, about people struggling to figure out the right thing to do and often screwing it up massively. So the one-dimensionally villainous portrayal of Cyril seems out of place.
Overall, though, I think Agora is an excellent film, so long as you don’t take it as accurate history but make the effort to do the research and listen to the commentary (which is in Spanish with subtitles — I just turned off the commentary audio and read the subtitles along with the film’s regular audio track) and understand where and why the film diverges from known history. Rachel Weisz is superb as Hypatia, a woman who’s cool and logical as would befit a woman filling a traditionally male role in 4th-century Alexandria, but who still has a restrained passion and joy about science that captivates the viewer. At least, I found it exciting to watch her discovering inertia and figuring out elliptical orbits, but I enjoy watching the scientific process. (And they did mostly avoid the House, MD school of random epiphany substituting for deductive reasoning; there were a couple of scenes where Hypatia had a sudden insight based on something she or someone else said in a conversation, but at least the conversations were about reasoning through the problems she was trying to crack.) The director said he saw her story as a love story between Hypatia and the universe, and Weisz plays that well (albeit to the dismay of her merely human suitors who are unable to compete with the stars). The rest of the cast is strong as well, particularly Ashraf Barhom, who makes Ammonius a lively and funny character who’s appealing despite the awful acts he performs (though really, I’d think someone with such a rich sense of humor about his faith wouldn’t be so dogmatic about it). The production values are excellent, particularly the recreation of classical Alexandria on a scale that’s remarkable for a modestly-budgeted film (I was surprised to learn from the making-of feature how much of the city was actually built rather than digitally created).
And basically it’s cool to see a film that’s about both science and history, two of my primary interests, and particularly to see a film about Hypatia, to get a conjectural glimpse of what this pivotal yet little-known figure from history might have been like.
The latest news from the Dawn probe at Vesta: NASA has posted a video showing a full rotation of the protoplanet, compiled from Dawn photographs. I don’t seem to be able to get the embed code to work, so here’s a link:
It’s fascinating to watch, with so many complex features. I’m really wondering what caused those parallel striations around the equator. I’m also wondering if that big mountain in the southern hemisphere is really what scientists have assumed it was, the central bulge of a crater so big it pretty much flattened out the rest of the hemisphere. It doesn’t really look like there’s a crater there. Maybe it’s so old that the edges have been worn away, but maybe the mountain is something else. And that would mean I’d have to do a bit of rewriting in Only Superhuman.
Yesterday, July 16, 2011, NASA’s Dawn space probe entered orbit around the asteroid (or more properly, protoplanet) Vesta, the second-most massive object in the Main Asteroid Belt. This is a mission I’ve been following with interest, and I made a previous post about it back in April. But now I can reveal why I’m particularly interested in this mission — because my upcoming novel Only Superhuman is set in the Asteroid Belt, and much of its action takes place on habitats around Vesta (or around Ceres, which Dawn will visit in 2015). The novel mentions little enough about Vesta itself that I hope I won’t have to do any rewrites as a result of Dawn’s findings, but I’m going to keep my eye on this just in case, and who knows — maybe I’ll get to write more about Vesta in a sequel.
Here’s the NASA press release:
And here’s the clearest photo of Vesta to date, taken on July 9:
Today I was catching up on the status of the Dawn space probe, which is mere months away from a rendezvous with Vesta, one of the largest asteroids in the Main Asteroid Belt, so much so that the Dawn mission scientists prefer to call it a protoplanet. Dawn is a really cool mission, an ion-powered spacecraft maneuverable enough to rendezvous with Vesta, spend a year in orbit of it, then thrust its way to a rendezvous with Ceres, the full-on dwarf planet member of the Main Belt, in 2015. We’ll finally get detailed images and surveys of these sub-planetary bodies, which are very different from each other: Vesta is dry and rocky and differentiated like a planet, Ceres carbonaceous and probably covered in a thick layer of ice that contains more fresh water than exists on Earth. These are of particular interest to me because of the spec novel I’ve written that’s set in the asteroid belt, and which I’ve alluded to before on this blog.
Anyway, another article I looked at today was this one from the JPL Dawn journal page, and I noted the following paragraph in it:
In December, we saw that by sensing the irregularities in the gravity field, Dawn will reveal the nature of Vesta’s internal structure. Until those detailed measurements have been made and accounted for in the design of the flight plan, however, the subtle effects of the gravity field will cause deviations from the planned trajectory. Therefore, as the spacecraft travels from one science orbit to another, it will thrust for a few days and then stop to allow navigators to get a new fix on its position. As it points its main antenna to Earth, the Doppler shift of its radio signal will reveal its speed, and the time for radio signals (traveling, as all readers know so well, at the universal limit of the speed of light) to make the round trip will yield its distance. Combining those results with other data, mission controllers will update the plan for where to point the thruster at each instant during the next phase of the spiral travel, check it, double check it, and transmit it to the distant explorer which will put it into action. This intensive process will be repeated every few days as Dawn maneuvers between science orbits.
I think that “science orbit” is the most awesome phrase I’ve seen all week. Everything is made cooler by putting “science” in front of it. Can’t you just hear it? “Helmsman! Prepare to enter… science orbit!”
Dawn will enter science orbit! of Vesta in the science month of Science July, and will surely do much science to it.
I was using my microwave to thaw some frozen chopped green peppers in a small Pyrex dish (not much, just enough to top a couple of Italian sausages on buns along with tomato and onion), and to my surprise, there was an electric arc within the dish, even though there was no metal within. I wondered how this might happen, so I did some Googling, and I came upon this page that explains the physics:
That’s about grapes sparking in the microwave, but the principles seem about the same — small objects close in size to the microwaves’ wavelength, with small gaps between them filled with steam as the water inside evaporates. The pepper pieces were rectangular instead of round, but then, grapes aren’t perfect spheres anyway, so the model presented there is just an approximation.
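Just to put rough numbers on that size comparison (a back-of-the-envelope sketch, assuming the standard 2.45 GHz home-oven frequency and a refractive index of roughly 9 for water at that frequency):

```python
# Back-of-the-envelope: how big is a microwave oven's wavelength inside
# a water-rich object like a grape or a chunk of green pepper?
c = 2.998e8      # speed of light, m/s
f = 2.45e9       # typical home microwave oven frequency, Hz
n_water = 9.0    # approximate refractive index of water at 2.45 GHz

wavelength_air = c / f                          # wavelength in air, m
wavelength_in_water = wavelength_air / n_water  # wavelength inside the food, m

print(f"wavelength in air:   {wavelength_air * 100:.1f} cm")
print(f"wavelength in water: {wavelength_in_water * 100:.1f} cm")
```

A wavelength of around a centimeter and a half inside the food is right in the size range of a grape or a pepper piece, which is why objects that size are the troublemakers.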
And I’ll repeat what the link says — this is not something you want to try at home, since you could damage your microwave. Luckily I caught it and hit the stop button within a second. But in the future, I should consider finding another way to thaw my green peppers — or maybe just start buying them fresh (although I don’t know if I use them often enough for that — how well do they keep?).
Lately, I’ve been doing a lot of reading up on quantum physics, time travel, and so forth as preparation for Star Trek DTI. I want to ground my treatment of Trek temporal physics in as much real science as possible. So I’ve been reading articles and papers as well as hard-SF novels on time travel.
Today I finally got around to visiting the index of John G. Cramer’s “Alternate View” science columns for Analog Science Fiction and Fact. One of the columns I read there, “Quantum Telephones to Other Universes, to Times Past”, involved the concept of nonlinear quantum mechanics, an idea that, if true, might allow communication between alternate timelines, something that the conventional linear model of QM deems impossible. I figured this idea might serve a particular purpose in my book, though I hadn’t worked out the details yet and needed to do some more reading on the subject.
Then, just a little while ago, I did some reading in a book I’ve been working through all week, The Time Ships by Stephen Baxter, which is a 1995 sequel to H. G. Wells’ The Time Machine, written in the same style as if by the same narrator, but based on modern notions of physics including Many-Worlds quantum mechanics. And the part I was reading, in the “White Earth” portion late in the book, involved the main characters having a discussion of — get this — nonlinear quantum mechanics and how it might allow communication between timelines! And when I read about it in the terms used by Baxter, it clarified the idea further and let me realize that this tied into a key element of my time-travel model for DTI and might be the answer to tying some rather fundamental things together in a coherent way.
Then the fact sank in that I came across something so potentially important twice in the span of five hours. That was a “whoa” moment. Of course, I know coincidences happen. And given that I’ve been aggressively researching the theoretical physics of time travel from various angles for weeks, it was inevitable that I’d come across this idea eventually. After all, the number of ideas in theoretical physics pertaining to time travel is finite and the same ideas will be discussed in various different places. And since I’m reading up on so many of them in various sources within a short amount of time, it’s not really that unlikely that I might come upon one in two different sources on the same day. So I know there’s nothing special about what happened. My own choices created a selection pressure that increased the probability of such an event to a moderately high level.
But for such a moment of synchronicity to occur specifically when I’m looking into matters of quantum probability and causality violation… and for it to be something that might be an answer I’ve been seeking for weeks… well, it’s certainly a weird sensation. And the more fanciful side of my nature wouldn’t mind taking it as a good omen.
Well, what do you know. Less than two weeks after I found out about the concept of quantum Darwinism for the first time, there’s a report of an experiment that’s actually found evidence for the process, and apparently not for the first time:
Since quantum Darwinism was first proposed in 2003 by Wojciech Zurek of Los Alamos National Laboratory, several studies have found evidence to support the idea. Most recently, a team of physicists and engineers from Arizona State University and the Naval Research Laboratory in Washington, D.C., has performed experiments using scanning gate microscopy to image scar structures in an open quantum dot. Their results have revealed the existence of periodic scar offspring states that evolve and eventually contribute to a robust state, much in the way that the derivation of pointer states is predicted by quantum Darwinism.
The “scars,” as the researchers explained, are actually scarring on the quantum wave functions, which cause the wave functions’ amplitudes to be highly concentrated along classical trajectories. Scars are traditionally thought to be unstable, where any small perturbation could break up the connection to the classical trajectory. However, when scar states replicate and evolve through quantum Darwinism, becoming a family of mother-daughter states, they can become coherent and eventually stabilize into multiple pointer states.
To detect this scar replication, the researchers used scanning gate microscopy to scan a conductive tip over the scar structures at a constant height. The tip acts as a local perturbation by causing a change in electrical conductance proportional to the sample’s electron density at that location. By measuring the change in conductance at different locations, the technique revealed that the scar structures have a periodic magnetic field that fits well with the idea of periodic offspring states.
So it sounds like this is quickly becoming more than an abstract interpretation. There’s real evidence that this is actually going on. This is a major step forward in explaining something that’s been a point of contention in quantum physics for generations: how the wavefunction “collapses” (or appears to) to produce our “classical” world.
I do wish they weren’t calling it “quantum Darwinism,” though. “Darwinism” is a loaded term in a lot of ways. It’s misapplied when used for evolutionary biology, often used by creationists to paint evolutionary theory as just blind faith in one man’s dogma, when in fact modern evolutionary biology is as far beyond Darwin’s tentative, incomplete, and often inaccurate ideas as modern physics is beyond Newton’s pre-relativistic, pre-quantum model of physics. Darwin was just the beginning of the process. Also, more broadly, the term “Darwinism” has long been co-opted for philosophies like social Darwinism that distort and misuse “survival of the fittest” notions (a term actually coined by Herbert Spencer) into something rather different from what Darwin had in mind. A term like “quantum evolution” or “quantum selection” might be more appropriate, though admittedly less evocative.
On the other hand, Zurek isn’t proposing a comprehensive analogy with evolutionary biology, merely evoking the specific idea of the environment selecting for entities better adapted to thrive in that environment, which is essentially the concept of natural selection that Darwin introduced in On the Origin of Species. So I guess this is one case where the term “Darwinism” isn’t really inaccurate, as long as you ignore the unfortunate ideological baggage attached to the term by its many abusers.
Starting yesterday afternoon, I decided to begin doing some research into quantum physics and the Many-Worlds Interpretation thereof in preparation for Star Trek: DTI — since a book about time travel in the Trek universe has to deal with parallel universes to some extent. True, the portrayal of temporal physics and alternate realities in ST is quite fanciful, but as my readers know, I like to do what I can to ground ST’s fanciful science in principles from real science. I’m often surprised at how feasible it is.
I took quantum mechanics in my final year as a physics major in college, but it never really clicked with me. Part of it was that the calculus was just too complicated for me, but I think part was that it was all so abstract. I was interested in the concepts, the meaning, but all the class covered was the math. I recall describing it to a friend as “variables performing unnatural acts on each other.” Nowadays, I still can’t follow the math in any detail, but I find it easier to study the subject now that I’m motivated to use it in fiction. In fact, I got so caught up in it last night that I barely got any sleep. It helps that I’ve found some good articles, starting on Wikipedia and including sites linked to in its articles on quantum theory.
So anyway, the basic idea of quantum physics is that a particle or other system can exist in multiple states at once — a superposition, it’s called. The question is, how does that produce the classical world we observe where everything appears to have one definite state? The old idea, the Copenhagen interpretation, was that the wavefunction “collapses” into a single state when it’s measured. Copenhagen didn’t explain how or why this happened; it was an ad hoc postulate that nobody was really happy with. (The Schroedinger’s Cat thought experiment, often cited as an illustration of this phenomenon, was actually a critique of it, because it argued that there could be a scenario wherein a macroscopic object like a cat was forced into an impossible dual state because of the superposition of a single atom — the decay or nondecay of that atom determines whether or not the poison is released.)
These days, we understand it as a process called quantum decoherence, which is nicely discussed at this site. The key is that we don’t really observe a particle directly; we observe its effect on the environment it interacts with. If you measure the state of a particle, you’re actually observing the state it’s induced in the measuring device. So the wavefunction’s multiple states never actually collapse into one; rather, they’re all still there, but we, as part of the environment interacting with the particle, only observe one state at a time.
So the idea that Schroedinger’s Cat is both alive and dead until the box is opened by an observer is wrong. When the atom’s two states (decayed and undecayed) interact with the poison trigger, the interactions with the billions of particles in that trigger cause decoherence, isolating the states’ effects from each other, so the trigger will only react to one state at a time. So the behavior becomes “classical” through the interaction with the trigger, and when the trigger interacts with the poison and the poison interacts with the cat, they all become part of that same combined quantum system and share the same single state. The cat survives or dies before the observer opens the box, because the decoherence happens at the trigger.
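For anyone who likes to see this concretely, here’s a minimal numerical sketch (a toy model of my own, not anything from the sources above): one system qubit in an equal superposition entangles with a growing number of environment qubits, each of which picks up an imperfect record of its state. The interference (off-diagonal) term of the system’s reduced density matrix shrinks as the records pile up.

```python
import numpy as np

# Toy decoherence: a system qubit in the superposition (|0> + |1>)/sqrt(2)
# entangles with N environment qubits. Each environment qubit ends up in
# |e0> if the system is |0>, or |e1> if it's |1>, with <e0|e1> = overlap
# (an imperfect record). Tracing out the environment leaves the system's
# reduced density matrix, whose off-diagonal term fades as N grows.

def coherence_after_environment(n_env, overlap=0.8):
    """Magnitude of the off-diagonal (interference) term of the system's
    reduced density matrix after entangling with n_env environment qubits."""
    e0 = np.array([1.0, 0.0])
    e1 = np.array([overlap, np.sqrt(1.0 - overlap**2)])  # <e0|e1> = overlap
    branch0 = np.array([1.0])   # environment state riding along with |0>
    branch1 = np.array([1.0])   # environment state riding along with |1>
    for _ in range(n_env):
        branch0 = np.kron(branch0, e0)
        branch1 = np.kron(branch1, e1)
    # Off-diagonal of the reduced matrix = (1/2) * <E0|E1> = (1/2) * overlap**n
    return abs(0.5 * float(branch0 @ branch1))

for n in (0, 1, 5, 20):
    print(f"{n:2d} environment qubits -> coherence {coherence_after_environment(n):.5f}")
```

With no environment the interference term is at its maximum of 0.5; after twenty imperfect records it’s effectively gone, which is the sense in which the superposition never collapses but simply stops being observable.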
This is why it’s a mistake to do what so many laypeople do and interpret wavefunction collapse mystically, as the result of observation by a thinking being. In fact, observation is just one type of interaction between the local system and its environment. Any such interaction causes decoherence, once the effects of the interaction propagating through the environment become thermodynamically irreversible (like a glass shattering — it’s all but impossible to make the atoms revert to being an intact glass and jump off the floor onto the table).
This idea was proposed in 1957 by Hugh Everett, in what he called the relative state formulation: when a system interacts with its environment, its state can no longer be described in isolation, but only as it exists relative to the environment it interacts with. So the environment becomes correlated — or quantum entangled, in modern terminology — with the state of the particle. But each of the multiple states of the particle is separately entangled with its environment, describing a separate system. Instead of just the particle having a wavefunction that’s a superposition of states, the combined system of the particle and environment has a superposed wavefunction, the whole schmeer existing in two or more states at once. And once the difference between those states becomes irreversible, they stop interfering and have no more interaction. From that point on, the system has multiple independent histories.
This has become known as the Many Worlds Interpretation, and that’s often taken literally: each independent measurement history represents a parallel reality. One world where the cat lives, another where the cat dies. Naturally, this is the interpretation that applies in the Star Trek universe, with all its parallel realities. Here’s a really good discussion of MWI, what it means, and what it doesn’t mean. But although MWI is increasingly accepted among physicists as a mathematically valid approach to quantum physics, not too many of them believe that the other “worlds” literally exist as parallel universes; rather, they consider the alternative measurement histories to be simply alternative possibilities that are present but swamped within our singular reality, states that exist in a mathematical sense but don’t really split off into alternate universes. This is the view I favor when I’m not writing a work of fiction that requires using MWI.
But either way, there are still questions about the actual physical mechanism behind decoherence. What causes one state to dominate in the macroscopic system while the others fade away (or get shunted off into other realities)? This is where Wojciech Zurek and his theory of Quantum Darwinism come in. It’s pretty much just what it sounds like. A Darwinian evolutionary process can happen — indeed, must happen — in any system that meets three conditions: 1) It has reproducing entities; 2) the reproductions are not exact; and 3) the environment favors the reproduction of some traits over others. In such a system, some entities will have traits that let them reproduce more successfully than others. That means there are more and more of them with each generation, until they inevitably overwhelm the competition. (This is one of the many reasons why creationism is such BS. The basic mechanism of evolution is so simple as to be inevitable. Not only does it happen, but there’s no way to prevent it from happening in any system that meets those three simple conditions.)
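Those three conditions are simple enough to demonstrate in a toy simulation (all the parameters here, the population size, the mutation rate, the ten-percent fitness edge, are arbitrary choices of mine, just for illustration):

```python
import random

# Toy demonstration of the three conditions for Darwinian evolution:
# (1) entities reproduce, (2) reproduction is imperfect, and (3) the
# environment favors some traits over others. Given all three, the
# favored trait inevitably comes to dominate the population.

random.seed(42)  # fixed seed so the run is repeatable

def simulate(generations=200, pop_size=200, mutation_rate=0.01):
    pop = [0] * pop_size           # nobody starts with the favored trait
    fitness = {0: 1.0, 1: 1.1}     # condition 3: trait 1 has a slight edge
    for _ in range(generations):
        weights = [fitness[t] for t in pop]
        # condition 1: offspring are copies of (fitness-weighted) parents
        offspring = random.choices(pop, weights=weights, k=pop_size)
        # condition 2: copying errors occasionally flip the trait
        pop = [t ^ (random.random() < mutation_rate) for t in offspring]
    return sum(pop) / pop_size     # fraction now carrying the favored trait

print(f"favored trait frequency after 200 generations: {simulate():.0%}")
```

Run it and the trait that started at zero ends up dominating the population. No planning required; just the three conditions.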
In Quantum Darwinism, what’s being “reproduced” is information about quantum states. The information about the original particle is encoded in the states it induces in the other particles/systems it interacts with, so each particle’s state becomes an “offspring” of the original state. Basically, the process selects for states that can survive the decoherence process. The states that survive are ones that are stable enough to survive interaction with other particles and thus get “copied” over and over by multiple interactions, so that they’re encoded redundantly throughout the environment. Unstable states may be copied once or a few times before being destroyed, so their information doesn’t propagate as far. The larger the environment becomes (i.e. the further the information spreads out into the universe), the more dominant the redundant information gets, swamping out the alternate states.
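And the redundancy part can be sketched the same way (again a toy model of my own, with an arbitrary overlap of 0.8 between the two possible record states): the more imperfect “copies” an observer samples from the environment, the more reliably the branch can be identified. The success probability here uses the standard Helstrom bound for distinguishing two pure states.

```python
import numpy as np

# Quantum Darwinism in miniature: each environment qubit holds an imperfect
# record of the system's state (record states |e0> and |e1>, with overlap
# 0.8, an arbitrary choice). An observer reading only a fragment of m such
# records sees branch states whose overlap is 0.8**m, so even a modest
# fragment identifies the branch almost perfectly.

overlap = 0.8
for m in (1, 2, 5, 10, 20):
    branch_overlap = overlap ** m
    # Helstrom bound: the best success probability for distinguishing two
    # pure states |a> and |b> is (1 + sqrt(1 - |<a|b>|^2)) / 2
    p_identify = 0.5 * (1.0 + np.sqrt(1.0 - branch_overlap**2))
    print(f"fragment of {m:2d} records -> P(identify branch) = {p_identify:.4f}")
```

That’s the redundancy doing the work: many independent observers, each sampling a different handful of records, all come away agreeing on the same outcome.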
This is why reality looks classical. We measure an object by measuring the environment it’s interacted with, and different observers measuring different parts of the environment will see the same redundant information and agree on the reality they observe. The original particle is still in multiple states, but the information about those other states has been swamped because it didn’t get reproduced redundantly enough.
So instead of a vast number of parallel realities, all carrying equal weight, what you have instead is a “signal” of classical reality on top of a faint “background noise” consisting of the unfulfilled potentials of all the other possible outcomes. Which is how the universe can appear classical even while being entirely quantum-mechanical. It’s not perfectly classical, which would require infinite redundancy, but it’s close enough to look that way for the most part.
Which doesn’t mean that Quantum Darwinism is incompatible with Many-Worlds. It’s actually derived directly from Everett’s assumptions. But as I said, it’s an open question whether MWI can be taken literally, whether the “worlds” are objectively real or just mathematical constructs. Zurek doesn’t take sides on the question, and he says the following:
Objective existence can be acquired (via quantum Darwinism) only by a relatively small fraction of all degrees of freedom within the quantum Universe: The rest is needed to “keep records”. Clearly, there is only a limited (if large) memory space available for this at any time. This limitation on the total memory available means that not all quantum states that exist or quantum events that happen now “really happens” — only a small fraction of what occurs will be still in the records in the future.
If I’m reading this right, Zurek seems to be saying that Quantum Darwinism allows for there to be more than one “real” history to the universe, but rules out the interpretation of MWI stating that every possible outcome must be equally real. So there could be a finite number of parallel timelines — maybe just those robust enough to stand out from the noise. That fits the Darwinian paradigm, since a successful species can split apart into multiple coexisting species. Not every genetic variation or mutation spawns a whole new species, but the most reproductively successful ones generally do.
That’s an interpretation I think I can live with. The idea that every possible reality is real, that I’m splitting off at every instant into thousands of alternate selves, is one I find inelegant and kind of disturbing. And from a dramatic standpoint it’s undesirable; if every decision actually happens more than one way, then any story’s outcome is arbitrary and meaningless. However, if the number of realities is greater than one but still limited, it’s not so bad. The number could still be quite large, though, large enough to accommodate the myriad universes (to coin a phrase) seen in Star Trek.
Note also that Zurek seems to be saying that a “real” alternate history won’t necessarily remain “real” forever. The information could be lost, that state of the universe wiped from the cosmic memory. That has interesting ramifications from a fictional perspective, particularly where a book about time travel is concerned. And that’s all I’m going to say on the subject for now.