Archive for the ‘Science’ Category

Midair holograms! Who knew?

November 13, 2019

Sometimes it’s cool to be wrong.

One thing that’s been a longtime pet peeve of mine in science fiction film and television is free-floating midair “holograms” — volumetric (3-dimensional) images made of light that just miraculously appear in midair. Star Wars holograms are a familiar example, but they’re a common trope throughout SF media. But they annoy me because they make no sense. Light can’t just appear in midair. It has to be emitted by something or reflected off of something. Actual holograms, things that literally use the phenomenon called holography, are flat sheets of photographic film encoded with laser-light interference patterns in such a way that you see a different angle on the photographed subject depending on the angle at which you view the sheet, so that a 2-dimensional film image contains 3-D information. But the image is “inside” the sheet rather than floating in midair. And the things sometimes used in the entertainment industry or museums that are called “holograms” are really just flat film images reflected off of half-silvered glass positioned in such a way that they look like ghostly images hovering behind the glass, but are still just flat projections, so the label is a total misnomer. So the sci-fi conceit of a 3-D shape made of light hovering in midair has always seemed silly to me.

But just now, I read about a prototype system that comes pretty close. It’s called the Multimodal Acoustic Trap Display, and you can see it in action here:

Pretty impressive, huh? Now, the light in this display is still reflecting off a solid object, but it’s a small white bead that’s levitated and moved through the projection volume by precision sound waves, so fast that it blurs out and creates persistence of vision, and is illuminated by multicolored laser light as it moves. Together, the moving bead and the shifting colors function sort of like an old cathode-ray TV screen with scan lines, except it can actually create 3-D shapes that hover in midair. The abstract published in Nature cites movie/TV-style “holograms” as the inventors’ inspiration, and they’ve actually come pretty close to duplicating them, allowing for the fact that the image still has to be contained inside a sort of C-shaped box so that it isn’t quite floating free. (The scan lines make it seem very Star Wars-y.) But it’s just the prototype, so who knows how it can be refined over time?

Of course, there still is a physical object (or several, since it can levitate multiple beads) that the light is reflecting off of, but because it’s just one or a few tiny beads swooping around really fast, most of the volume actually is empty space, with the perception of a continuous shape resulting from persistence of vision. So this is probably just about as close to the standard intangible, free-floating sci-fi “hologram” as we’re likely to get, allowing for further refinements like maybe a system that uses more and smaller beads. I’ve read before that there are some volumetric displays that project light off of a mist of fine particles, but that doesn’t seem to have the same degree of control as this, though maybe it and the acoustic-trap technology could be merged somehow. Anyway, because the beads are constantly moving around and their positions are controlled by the acoustic waves, someone could wave their hand through such a hologram or walk through it, and as long as they didn’t knock out the beads directly, they could just pass through the image without doing more than briefly disrupting it, as often shown in fiction.

So I now have to rethink my contempt for floaty midair holograms as a sci-fi trope. There would still have to be some physical object there for the light to bounce off, and it would still probably have to be confined within some kind of projector stage rather than moving freely through an area like the holograms in a lot of sci-fi (including Star Trek: Discovery). But to an extent, many of the floaty holos in sci-fi are at least somewhat more credible now. Who knows? Ryuji Hirayama and the other developers of this device have solved a number of the engineering problems that I was skeptical could be solved, so maybe they can solve others. So we may see more realistic and versatile volumetric projections in the future (and I guess we’re stuck with them being called “holograms” even though they’re nothing of the kind).

Which means that maybe I should be more open to incorporating translucent midair holos into my own SF writing, rather than going for alternatives like soligrams (shapeshifting smart-matter gel that morphs into solid lifelike objects) or the anamorphic projections I featured in “Murder on the Cislunar Railroad.” Although I rather like avoiding the standard cliches in my writing. But if science makes those cliches real, then continuing to avoid them would be…

(puts on sunglasses)

…a holo gesture.

Science news: a “new,” safe, clean nuclear tech that’s actually decades old!

February 27, 2019

It’s been a while since I did a science-themed post around here, partly because of generally neglecting my blog but partly because I’ve fallen out of the habit of reading science magazines online — something that I fear has been affecting my professional writing as well, since I’ve been having trouble thinking up new story ideas in recent years, and maybe the lack of inspiration from science articles is part of the problem. But recently, when the Firefox browser discontinued its inbuilt support for RSS feeds, I found a separate add-on that worked even better, in that it notified me of new posts and made it easier to keep current. So I decided to take advantage of that to subscribe to some science sites’ feeds so I could stay more current with the news.

Anyway, Discover Magazine just posted the following article, which is quite interesting:

Nuclear Technology Abandoned Decades Ago Might Give Us Safer, Smaller Reactors

It’s a long article rather than the short columns the feed usually gives me, so I’m not sure if it’ll stay permanently available or go behind a paywall at some point. So I’ll summarize here.

It turns out that, in the early days of nuclear research, scientists had examined various options for generating power from atomic fission, including a system called a molten salt reactor. Per the article:

Every other reactor design in history had used fuel that’s solid, not liquid. This thing was basically a pot of hot nuclear soup. The recipe called for taking a mix of salts — compounds whose molecules are held together electrostatically, the way sodium and chloride ions are in table salt — and heating them up until they melted. This gave you a clear, hot liquid that was about the consistency of water. Then you stirred in a salt such as uranium tetrafluoride, which produced a lovely green tint, and let the uranium undergo nuclear fission right there in the melt — a reaction that would not only keep the salts nice and hot, but could power a city or two besides.

Weird or not, molten salt technology was viable; the Oak Ridge National Laboratory in Tennessee had successfully operated a demonstration reactor back in the 1960s. And more to the point…, the liquid nature of the fuel meant that they could potentially build molten salt reactors that were cheap enough for poor countries to buy; compact enough to deliver on a flatbed truck; green enough to burn our existing stockpiles of nuclear waste instead of generating more — and safe enough to put in cities and factories. That’s because Fukushima-style meltdowns would be physically impossible in a mix that’s molten already. Better still, these reactors would be proliferation resistant, because their hot, liquid contents would be very hard for rogue states or terrorists to hijack for making nuclear weapons.

Molten salt reactors might just turn nuclear power into the greenest energy source on the planet.

It sounds paradoxical — they’re safe from meltdowns because they’re already molten? But the thing is, they’re designed to contain material at that temperature to begin with, and since it’s already liquid, any temperature runaway would just make it expand until the reaction shut down. Plus the coolant wouldn’t need to be under pressure so there’d be no risk of a steam explosion, and there’s a failsafe built in that would drain the molten salts into an underground tank so they wouldn’t be released into the environment. The one real problem, it seems, was finding a sufficiently corrosion-resistant material to make the tanks and pipes from.

Better yet, the liquid nature of the nuclear fuel means that it could be continuously filtered, purified, and cycled back into use like the liver cleansing the bloodstream, so eventually all the nuclear material would be used up and there’d be no nuclear waste — or rather, what little waste there was would have a short enough half-life to be safe after about 300 years rather than a quarter of a million. What’s more, it could use some of our existing nuclear waste as fuel and help reduce that problem too.
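To put rough numbers on that comparison (this is my own back-of-the-envelope math, not something from the article): the long-lived headache in conventional spent fuel is transuranics like plutonium-239, with a half-life around 24,100 years, while the leftovers from a reactor that actually burns up its fuel would mostly be fission products like cesium-137 and strontium-90, with half-lives around 30 years. Ten half-lives cut a radioisotope down to roughly a thousandth of its starting amount, which is about where the “300 years versus a quarter of a million” figures come from:

```python
# Rough decay math (my own illustration, not from the Discover article):
# after an elapsed time t, the fraction of a radioisotope remaining is 2 ** -(t / half_life).

def remaining_fraction(elapsed_years, half_life_years):
    return 0.5 ** (elapsed_years / half_life_years)

# Short-lived fission products (Cs-137, Sr-90, half-lives ~30 years):
print(remaining_fraction(300, 30))          # ~0.001 -- a thousandfold drop in ~300 years

# Long-lived transuranics (Pu-239, half-life ~24,100 years):
print(remaining_fraction(250_000, 24_100))  # ~0.0008 -- the same drop takes ~a quarter million years
```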

So why was this superior technology abandoned decades ago in favor of the riskier water-cooled, solid-fuel nuclear plants? Largely just industrial and political inertia, it seems. The solid-fuel design was already in use on nuclear subs when the effort to build civilian nuclear power plants got underway, and the molten salt design was still experimental. So by the time molten salt technology was experimentally proven viable, the industry was already fully committed to solid-fuel reactors, with a big infrastructure built up to support them and deal with their fuel. And there were big plans to recycle their fuel in breeder reactors and create more and more plutonium to power future reactors, which seemed like a great idea until it turned out you could build bombs from the spent fuel. So the recycling plan was shut down, and we were stuck with a bunch of nuclear waste we didn’t know what to do with. That problem, plus Three Mile Island and Chernobyl, soured people on any nuclear-fission research, even something like molten salt reactors that would be far safer and cleaner and have none of the drawbacks that made people so afraid of fission power. But now, people (at least those who aren’t in denial) are more afraid of climate change and are looking for green energy sources, and this might be one of the best.

Then again, MSRs are not a perfect technology. I looked around and found another site talking about the tech:

Molten Salt Reactors

This article is more cynical about the downsides of the tech than the Discover article, asserting that it could be used to create weapons after all, and that there are a number of unknowns yet to be addressed.

And here’s the World Nuclear Association’s assessment, which mentions that MSR research is already pretty big in China, something the Discover article doesn’t mention:

Molten Salt Reactors – World Nuclear Association

It doesn’t seem to agree with the previous article about the weapons risk, though, barely mentioning the issue in its discussion, and it suggests that the early research into the technology was specifically focused on finding a form of nuclear power that would minimize the proliferation risk. So evidently there are differing points of view on this, which is why it’s always a good idea to look beyond a single source.

This is informative stuff for a science fiction writer like me. For decades, SF writers have assumed that the future of clean nuclear power would be fusion rather than fission. I’ve long been a believer in the aneutronic form of fusion that would fuse deuterium with helium-3 (which is abundant on the Moon due to being deposited by the solar wind) without producing neutron radiation. But it turns out there’s been a viable, safe, fairly compact fission technology that’s been known about this whole time and largely ignored — one already pretty much proven, while fusion has remained just out of reach (they’ve been predicting it was 30-40 years away for the past 50-60 years now). I mean, sure, a reactor based on what’s essentially a pit of radioactive lava sounds scary, but no more so than a starship engine based on constantly annihilating matter and antimatter.

It’s also a good reminder that technology doesn’t always develop in a straight line — that viable advances can be sidelined for a generation or more because industries choose to concentrate all their attention elsewhere, or because the political will to explore them is lacking. Of course, there’s no shortage of SF stories about scientists (often of the mad persuasion) trying to prove to Those Fools at the Institute that a discredited fringe idea is viable after all, but it might also be worth exploring what comes after that, when the fringe idea finally starts to get acceptance — or when it was never really discredited to begin with, just overshadowed and forgotten until the hero of the story tried digging into old research and turned up an overlooked gem.

By the way, it’s amusing to read that the molten uranium-salt mixture has “a lovely green tint,” given that the public has long associated radioactivity with a green glow. That myth arose as a result of the glow-in-the-dark radium clock and watch faces that were common back in the days before it was understood how dangerous radioactivity was. The green glow wasn’t from the radium itself, whose emissions (like those of all radioactive isotopes) are invisible; rather, the radioactivity excited luminescence in the phosphor dyes the radium was mixed with. But since such items were common in the early 20th century, people assumed that anything radioactive would glow green, which is part of why the Incredible Hulk is that color (although it’s largely because his original gray hue was hard to reproduce consistently with cheap 1960s printing methods), along with various vintage monsters like those in The Green Slime and Doctor Who‘s “The Green Death,” and why the nuclear rod prominently featured in the titles of The Simpsons glows green. It’s also probably why kryptonite is green. So anyway, given that I’ve grown used to thinking of “green radiation” as a total myth, it’s ironic that the molten salt fuel in this case actually is green in color (though presumably not glowing except thermally) — not to mention that it’s a “green” power source in the environmental sense!

Memory RNA after all?

Today I’m experiencing that common occupational hazard for the science fiction writer: Learning that a new scientific discovery has rendered something I wrote obsolete.

I’ll let Tamara Craig, the narrator of my 2010 story “No Dominion” from DayBreak Magazine, explain:

Nearly a century ago, an experiment with flatworms seemed to show that memory was stored in RNA and could be transferred from one organism to another. But the experiment had been an unrepeatable fluke — pardon the pun — and later research showed that memory worked in a completely different way, unfortunately for the science fiction writers who’d embraced memory RNA as a plot device.

(This passage is trimmed down a bit in the version soon to be reprinted in Among the Wild Cybers: Tales Beyond the Superhuman, since that collection’s editor thought the references to SF writers were a bit too meta and distracting.)

What I wrote there was based on memory and was roughly correct. In the late 1950s and early ’60s (“No Dominion” is set in 2059), a researcher named James V. McConnell spent years experimenting with memory in planaria (flatworms), doing things like cutting them up and testing if their regenerated tails retained the memories of their original heads, and — most famously — grinding them up and feeding them to other flatworms. McConnell’s research did seem to show that some learned behavior was passed on by what he proposed to be a form of RNA storing memories created in the flatworm’s brain. It’s true that there was never enough reliable confirmation of his result to establish it, and the scientific establishment dismissed McConnell’s findings, although they did inspire a lot of science fiction about RNA memory drips or memory pills as a technique for learning overnight what would normally take months or years. However, it seems that there were some experiments that did appear to replicate the results. There just wasn’t enough consistency to make it definitive.

Apparently, there’s been some renewed experimentation with McConnell’s theory in the past few years, showing promising but uncertain results. What I read about today was a new result, involving snails rather than flatworms:

http://www.sfn.org/Press-Room/News-Release-Archives/2018/Memory-Transferred-Between-Snails

Memories can be transferred between organisms by extracting ribonucleic acid (RNA) from a trained animal and injecting it into an untrained animal, as demonstrated in a study of sea snails published in eNeuro. The research provides new clues in the search for the physical basis of memory.

Long-term memory is thought to be housed within modified connections between brain cells. Recent evidence, however, suggests an alternative explanation: Memory storage may involve changes in gene expression induced by non-coding RNAs.

A more thorough article about the result can be found at the BBC:

‘Memory transplant’ achieved in snails

Now, this doesn’t mean the original memory RNA idea was altogether right. This experiment involved injecting the RNA into the blood of the snails rather than feeding them ground-up snails. And the result probably needs to be repeated more times and studied more fully before it can be definitive. But it does suggest that I was wrong to insist that memory “worked in a completely different way.” It’s possible that memories are stored, not in patterns in the synapses of nerve cells, but in RNA in their nuclei, which has an epigenetic effect on the neurons’ gene expression and therefore their behavior and structure.

Of course, all these results show is that very simple reactions to stimuli can be transferred. There’s no evidence that it would work for something as elaborate as the kind of declarative memory and knowledge that the passage in the story was discussing, or the kind of procedural memory and skills often transferred by memory RNA in fiction (e.g. foreign languages or fighting techniques). Perhaps those kinds of memory are partly synaptic, partly epigenetic. Maybe there’s something else involved. So Tamara’s lines in the story may not be entirely obsolete, just a little inaccurate (forgivable, since she’s a cop, not a scientist).

So I guess it could be worse. It was a minor part of the story anyway. And the actual research itself suggests some interesting possibilities. The articles say that learning more about memory creation and storage — and perhaps memory modification and transfer — could help treat conditions like Alzheimer’s and PTSD. If so, then it’s unfortunate that McConnell’s results weren’t taken more seriously half a century ago.

Quantum teleportation revisit: Now with wormholes!

December 12, 2017

Six years ago, I wrote a couple of posts on this blog musing about the physics behind quantum teleportation — first proposing a model in which quantum entanglement could resolve the philosophical conundrum of whether continuity of self could be maintained, then getting into some of the practical limitations that made quantum teleportation of macroscopic objects or people unlikely to be feasible. I recently came upon an article that offers a potential new angle, basically combining the idea of quantum teleportation with the idea of a wormhole.

The article, “Newfound Wormhole Allows Information to Escape Black Holes” by Natalie Wolchover, was published in Quanta Magazine on October 23, 2017. It’s talking about a theoretical model devised by Ping Gao, Daniel Jafferis, and Aron C. Wall, a way that a stable wormhole could exist without needing some kind of exotic matter with arbitrary and probably physically unattainable properties in order to keep it open. Normally, a wormhole’s interior “walls” would attract each other gravitationally, causing it to instantly pinch off into two black holes, unless you could line them with some kind of magic substance that generated negative energy or antigravity, like shoring up a tunnel in the dirt. That’s fine for theory and science fiction, but in practical terms it’s probably impossible.

The new model is based on a theory that’s been around in physics for a few years now, known in short as “ER = EPR” — namely, that wormholes, aka Einstein-Rosen bridges, are effectively equivalent to quantum entanglement between widely separated particles, or Einstein-Podolsky-Rosen pairs. (Podolsky, by the way, is Boris Podolsky, who lived and taught here in Cincinnati from 1935 until his death, and was the graduate advisor to my Uncle Harry. I was really impressed when I learned my uncle was only two degrees of separation from Einstein.) The EPR paradox, which Einstein nicknamed “spooky action at a distance,” is the way that two entangled particles can affect each other’s states instantaneously over any distance — although in a way that can’t be measured until a light signal is exchanged between them, so it can’t be used to send information faster than light. Anyway, it’s been theorized that there might be some sort of microscopic wormhole or the equivalent between the entangled particles, explaining their connection. Conversely, the two mouths of a wormhole of any size could be treated as entangled particles in a sense. What the authors of this new paper found was that if the mouths of a wormhole were created in a way that caused them to be quantum-entangled — for instance, if one of them were a black hole that was created out of Hawking radiation emitted from another black hole (it’s complicated), so that one was a direct outgrowth of the other on a quantum level — then the entanglement of the two black holes/mouths would create, in the words of the paper’s abstract, “a quantum matter stress tensor with negative average null energy, whose gravitational backreaction renders the Einstein-Rosen bridge traversable.” In other words, you don’t need exotic matter to shore up the wormhole interior, you just need a quantum feedback loop between the two ends.

Now, the reason for all this theoretical work isn’t actually about inventing teleportation or interstellar travel. It’s more driven by a strictly theoretical concern, the effort to explain the black hole information paradox. Just as conservation of energy says the total amount of energy in a closed system can’t be increased or decreased, quantum mechanics requires that information never be truly destroyed, so the total amount of information in the universe should be conserved. But if information that falls into a black hole is lost forever, then that principle is violated. So for decades, physicists (notably Stephen Hawking) have been exploring the question of whether it’s possible to get information back out of a black hole, and if so, how. This paper was an attempt to resolve that question. A traversable wormhole spinning off from a black hole provides a way for information to leave the interior of the black hole, resolving the paradox.

I only skimmed the actual paper, whose physics and math are way beyond me, but it says that this kind of entangled wormhole would only be open for a very brief time before collapsing. Still, in theory, it could be traversable at least once, which is better than previous models where the collapse was instantaneous. And if that much progress has been made, maybe there’s a way to refine the theory to keep the wormhole open longer.

There’s a catch, though. Physical law still precludes information from traveling faster than light. As with quantum teleportation, there is an instantaneous exchange of information between the two ends, but that information remains in a latent, unmeasurable state until a lightspeed signal can travel from the transmitting end to the receiving end. So a wormhole like this, if one could be created and extended over interstellar distances, would not allow instantaneous travel. A ship flying into one end of the wormhole would essentially cease to exist until the lightspeed signal could reach the other end, whereupon it would emerge at long last.

However — and this is the part that I thought of myself as an interesting possibility for fiction — this does mean that the ship would be effectively traveling at the speed of light. That in itself is a really big deal. In a physically realistic SF universe, it would take an infinite amount of energy and time to accelerate to the speed of light, and once you got fairly close to the speed of light, the hazards from oncoming space dust and blueshifted radiation would get more and more deadly. So as a rule, starships would have to stay at sublight speeds. In my original fiction I’ve posited starships hitting 80 or 90 percent of c, but even that is overly optimistic. So in a universe where starships would otherwise be limited to, say, 30 to 50 percent of lightspeed, imagine how remarkable it would be to have a wormhole transit system that would let a starship travel at exactly the speed of light. Moreover, the trip would be instantaneous from the traveler’s perspective, since they’d basically be suspended in nonexistence until the lightspeed signal arrived to “unlock” the wormhole exit. It’s not FTL, but it’s L, and that alone would be a damned useful stardrive. You could get from Earth to Alpha Centauri in just 4.3 years, and the trip would take no time at all from your perspective, except for travel time between planet and wormhole mouth. You’d be nearly 9 years younger than your peers when you got home — assuming the wormhole could be kept open or a second temporary wormhole could be generated the other way — but that’s better than being 2 or 3 decades younger. Short of FTL, it’s the most convenient, no-fuss means of interstellar travel I can think of.
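Here’s a quick sanity check on those numbers, using the standard time-dilation formula and taking the Earth-to-Alpha Centauri distance as roughly 4.37 light-years (my own arithmetic, just to illustrate the comparison):

```python
from math import sqrt

D_LY = 4.37  # Earth-to-Alpha Centauri distance in light-years (approximate)

def one_way_trip(beta):
    """Earth-frame duration and shipboard proper time for a one-way trip at speed beta * c."""
    t_earth = D_LY / beta                    # years elapsed in Earth's frame
    t_ship = t_earth * sqrt(1 - beta ** 2)   # years experienced by the traveler
    return t_earth, t_ship

# The wormhole transit is effectively a trip at c: about 4.4 years pass on Earth while
# essentially none passes for the traveler, so a round trip leaves you ~8.7 years "younger."
print(one_way_trip(0.999999))  # (~4.37, ~0.006)

# Compare realistic sublight cruising speeds, where the one-way leg alone takes far longer:
print(one_way_trip(0.5))       # (~8.7 years on Earth, ~7.6 aboard)
print(one_way_trip(0.3))       # (~14.6 years on Earth, ~13.9 aboard)
```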

Or, looked at another way, it’s a method for interstellar quantum teleportation that avoids all the scanning/transmission obstacles and impracticalities I talked about in my second 2011 post on the subject. No need to use a technological device to scan a body with a level of detail that would destroy it, then transmit a prohibitively huge amount of data that might take millennia to send in full. You just pop someone into one end of a wormhole and make sure the handshake signal is transmitted strongly enough to reach the other end. I’ve long felt that wormhole-based teleportation would be a more sensible approach than the disintegration-based kind anyway. Although we’re technically talking about black holes, so it wouldn’t be the sort of thing where you could just stand on a platform in your shirtsleeves and end up somewhere else. Also, there might be a little problem with getting torn apart by tidal stresses at either end. I’m not sure the paper addresses that.

This idea could be very useful for a hard-SF universe. My problem is that the universes I have established are a little less hard than that, though, since I tend to like working in universes with FTL travel of one sort or another. But maybe some idea will come to me for a future story. And maybe some other writer will read this and get an idea. We’re all in this together, and any worthwhile SF concept can inspire multiple very different stories.

Ars Technica interviewed me on STAR TREK transporters

September 23, 2017

You may recall that last year, Xaq Rzetelny of the science site Ars Technica interviewed me about Star Trek temporal physics. Well, Xaq recently came across my 2011 post “On quantum teleportation and continuity of self,” and sought my input for an article tackling the same basic question for Star Trek transporters — whether or not the person who comes out of the transporter is the same one who went in. It’s a detailed and well-researched piece that also contains comments from folks like Michael Okuda and Lawrence Krauss, and you can read it here:

Is beaming down in Star Trek a death sentence?

Thoughts on LIFE (the 2017 film, not, y’know, the general state of existence) (spoilers)

After growing up with countless sci-fi films and TV shows that totally ignored the fact that the “sci” was short for “science,” I’ve been quite pleased with the trend in recent years to make more movies that are grounded in plausible science, such as Gravity, Europa Report, Interstellar, and The Martian. The movie Life, directed by Daniel Espinosa and written by Rhett Reese and Paul Wernick, is the latest entry in the hard-science movie trend, and is mostly quite impressive. It’s set on the International Space Station in the near future (very near, since a character played by 36-year-old Jake Gyllenhaal reminisces about being taken out of school on the day of the Challenger disaster 31 years ago), with its 6-person international crew studying a single-celled life form brought back by a Mars sample probe. Dubbed “Calvin,” the Martian organism quickly grows into a multicellular colony creature of great adaptability, and when things inevitably go wrong, the creature breaks out and it becomes a horror movie.

The science and realism in Life are top-notch. Espinosa and his team consulted with scientists and space experts to make the ISS environment as realistic as possible. It’s quite remarkable — like Gravity, it’s set almost entirely in free fall, but with six actors instead of two and with much more time spent in shirtsleeve environments within the ISS rather than in spacesuits. And the simulation of free fall is quite good. There are a couple of moments here and there where body parts or worn/held items sag downward, but mostly it’s very convincing. The filmmakers studied real ISS footage and consulted with astronauts, and the stunt team and actors worked out a very convincing replication of the real thing, more casual and natural than the stock “move very slowly” approach to weightlessness we’ve seen in countless movies before. It makes for a very novel and engaging viewing experience. The Calvin creature is also quite a creative design, convincingly unlike anything on Earth (well, almost anything — apparently the designers were inspired by slime mold colonies to an extent). And for the most part, it doesn’t really feel like a horror movie with a fanciful monster. It’s so grounded that it just feels like a drama about scientists dealing with an animal (albeit an alien one) that’s gotten out of control. The main scientist who studies the creature (Hugh, played by Ariyon Bakare) points out, even after being badly injured by Calvin, that it’s just following its instinct to survive and bears no malice.

Character-wise, I think the movie does a good job. The characters have a good mix of personalities, but they’re all played as professionals who know how to stay calm under pressure. There are some moments when they give in to fear or anger, but then they get it together and work the problem. Ryan Reynolds is maybe a bit exaggerated as the standard cocky, wiseass space guy, not unlike George Clooney’s Gravity character, but he has some good moments — especially one where he’s in the lab with the escaped creature and Gyllenhaal’s character slams the hatch shut with him inside. Reynolds meets his eyes for a moment, then just nods and says “Yeah,” a quiet, almost casual acknowledgment that he did the right thing and is forgiven. Rebecca Ferguson is pretty solid as the “planetary protection officer,” the designer of the “firewalls” meant to prevent contamination between the humans and any alien life. She’s the one who bears the most responsibility for the steps that must be taken when the creature escapes, steps that the crew members know they might not survive, and Ferguson bears that weight with convincing professionalism. Hiroyuki Sanada and Olga Dihovichnaya round out the cast effectively, though they didn’t make too strong an impression on me. I do wish the cast had been a bit more diverse, and though they faked us out and nicely averted the “black guy dies first” cliche, we did still end up with two white actors, Ferguson and Gyllenhaal, as the last survivors. Still, it does better on the diversity front than Interstellar did.

But what damaged the film for me was its very ending. Major spoilers here: In the climax, we’re made to believe that the final plan to keep the creature from reaching Earth is succeeding, but enough deliberate ambiguity is created that it could go either way, and it isn’t until the final minute that we get the shock reveal that, no, the plan failed and the creature made it to Earth, implicitly dooming humanity. That downer ending left me with a very disheartened feeling. Okay, having the good guys lose is often what defines a horror movie, but I didn’t care for it at all here. This wasn’t the kind of horror movie where the characters are idiot teenagers making stupid decisions so you can feel they deserved what they got. This was a movie where good people made smart and brave decisions that should’ve worked, where they were heroically willing to sacrifice themselves in order to protect humanity as a whole, so having them ultimately fail to defend the Earth feels nihilistic, like it invalidates all their skill and sacrifice and renders everything we’ve seen pointless. It also plays into an anti-science mentality, the old Luddite idea that exploration can only bring ruin. I’ve never cared for that. One thing I liked about Europa Report was that, even though the outcome was tragic, the crew’s efforts still achieved something positive by advancing human knowledge, that their sacrifice served a noble purpose. By comparison, this ending left me with a very hollow and bitter feeling.

Also, in retrospect, Calvin was too superpowerful, too smart and too capable of overcoming everything the characters did to contain or kill it. As believable as the first two acts of the film were, it started to push the limits of credibility in the third act, both where Calvin’s abilities were concerned and in the contrivances necessary to create the climactic situation. There’s even a point where Calvin actively tries to stop Gyllenhaal from doing something that would keep it from reaching Earth, even though there’s no possible way the creature could’ve known enough about orbital physics to know the danger it was in or enough about spacecraft engineering to know how to avert it. Up to then, most everything Calvin managed to do was reasonably credible, but this broke the logic of the story and gave the creature magical omniscience in order to force a shock ending, and I just don’t buy it. The movie should not have ended this way, not just from an optimism standpoint, but from a basic plot logic standpoint. I guess that’s part of why it feels so wrong and frustrating to me — because it was forced rather than earned.

In sum, Life is mostly a very good, smart, believable movie with a sense of wonder (though with a terribly dull title), but the ending really hurts it.

Eclipse walk

I just got back from a long walk I took to watch the eclipse, which was not total here in Cincinnati but pretty darn close (91%). I decided to walk over the University of Cincinnati campus, figuring there would be a lot of other eclipse watchers there, and I ended up watching the watchers more than the eclipse itself. I did have some NASA-approved eclipse glasses, courtesy of the folks at the Shore Leave Convention, who handed them out for free with the convention packets last month. But even with the glasses, I didn’t feel comfortable looking at the Sun more than a few times or for more than a few moments at a time. I think maybe I got a couple of glints of direct sunlight around the edges while orienting myself the first couple of times, so I learned to keep my eyes closed until I could see enough glow through my eyelids to know I was looking the right way.

Still, once you’ve seen a crescent Sun once or twice, you’ve got the general idea. It was more interesting watching the environment and the people. It didn’t get dark enough here for the crickets to chirp or the animals to think it was night or whatever. But the light level softened to a degree I’d call comfortable. Ever since I got surgery for a retinal melanoma in high school, my eyes have been extremely sensitive to sunlight. This afternoon was the first time in ages that I’ve been comfortable without sunglasses while outdoors on a clear, sunny day. I heard some people around me say it was dark, but it looked more than bright enough to me, still definitely sunny, just not glaringly so. Maybe it was darker in shaded areas, though.  And the sky did turn a dimmer shade of blue as the eclipse neared maximum.

As for the people, there were a bunch of students and faculty members milling around watching, many with eclipse glasses, others with handheld filters, quite a few with homemade cereal-box pinhole cameras, at least one with a welder’s mask. A bunch were trying to take cell phone pictures through their eclipse glasses, which didn’t seem like a particularly wise idea to me. A few minutes before maximum, I happened across a group with a telescope that was projecting an image of the Sun on a plate, which gave me a clearer image than my eclipse glasses, so that was handy. (It’s surprising how small the Sun is in your field of view when you can actually look at it. Of course, by an accident of nature, it’s almost exactly the same apparent size as the Moon, which is why total eclipses work.) The group seemed pretty upbeat and engaged with the whole thing, although maybe that was partly since it was an excuse to get out of class. When maximum coverage was reached at 2:29 PM, a round of applause went through the crowd. In what other context would people applaud something just for blocking their view of something else?
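If you’re curious just how close that coincidence is, here’s a rough check using approximate mean sizes and distances (my own figures; both apparent sizes vary a bit over the elliptical orbits, which is why some solar eclipses end up annular rather than total):

```python
from math import atan, degrees

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent size of a disk in degrees, given its physical diameter and its distance."""
    return degrees(2 * atan((diameter_km / 2) / distance_km))

# Approximate mean values:
print(angular_diameter_deg(1_391_000, 149_600_000))  # Sun:  ~0.53 degrees
print(angular_diameter_deg(3_474, 384_400))          # Moon: ~0.52 degrees
```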

It is impressive how close we came to totality, and yet how bright it still was even with just 9% of the Sun still visible. I guess it shows how well the eye can adjust to different light levels. Still, now I have a crick in my neck from looking up so much. And I’m probably one of several million people asking, “So now what do I do with these eclipse glasses?”

Ars Technica interviewed me on STAR TREK time travel

February 12, 2016

Ars Technica, a science and technology news site that also covers SF and media, has posted a lengthy, in-depth article by Xaq Rzetelny exploring the science of time travel in Star Trek and discussing my attempts to reconcile and rationalize it in my Department of Temporal Investigations books. I was interviewed for the article, and there are some quotes from me toward the end — and even a quote from an actual physicist reacting to my quotes. You can read the whole piece here:

Trek at 50: The quest for a unifying theory of time travel in Star Trek

Dawn probe reaches Ceres orbit!

Or as I like to call it, a Ceres circuit. Ba-dum-bum!

But Ceres-ly, folks…

This morning, at about 1239 GMT (or 7:39 AM where I am), the Dawn space probe successfully entered orbit around the dwarf planet Ceres. The NASA press release is here:

Nasa Spacecraft Becomes First to Orbit a Dwarf Planet

Unfortunately, Dawn is currently on the dark side of Ceres, and is orbiting slowly enough that it won’t come around to the light side until mid-April. So the best we get for a photo at the moment is this one from March 1:

Ceres March 1 2015

Image Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

 

This is historic as the first orbit of a dwarf planet (the New Horizons probe later this year will only fly by Pluto, I believe) and the first time a probe has orbited two different bodies. And it’s significant to me since it means Dawn has now visited both of the Main Belt protoplanets featured in Only Superhuman, first Vesta back in 2011 and now Ceres. With Vesta, the timing was right to let me incorporate a bit of what Dawn discovered into the novel during the revision process — but with Ceres I just have to hope nothing contradicts what I wrote. My main description of Ceres in the book was as follows:

The sunlit side of the dwarf planet was a dusty gray, except for the bright glints where craters or mining operations had exposed fresh ice beneath.

So far, so good, I’d say, given the other photo we got recently:

Ceres bright spots

Image Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Scientists are speculating that those bright spots might be exposed ice, or maybe salt. Although you know what they kinda look like to me?

The on switch.

More news as it develops…

But what I really want to talk about is INTERSTELLAR (Spoiler review)

December 3, 2014

I finally went to see Christopher Nolan’s Interstellar yesterday. I didn’t get the full IMAX experience; only one theater in reach is showing it that way, and I wanted to go to a different theater so that I could visit a couple of stores nearby. But it was still an impressive experience. There is a lot I love about the film, although it has some significant flaws. I tend to agree with a lot of the reviews I’ve seen that certain ideas in the climax really stretch credibility and take one out of the film, which is a problem for a movie that, for the most part, is very heavily grounded in credible science.

The premise of the film is one that feels familiar from a lot of science fiction I’ve read — which is a good thing, given how rarely cinematic sci-fi feels like it engages the same kind of ideas as science fiction literature. The world is dying, and the heroes of the film are scientists and explorers trying to save the human race. The drama comes largely from the clash between the commitment of the protagonists — including Cooper (Matthew McConaughey), Brand (Anne Hathaway), and her father Dr. Brand (the inevitable Michael Caine) — to exploration and human survival and the more intimate, personal concerns of the people they leave behind, notably Cooper’s daughter Murph (Mackenzie Foy as a child, Jessica Chastain as an adult). There are also conflicts arising from the dispute over whether to place rescuing a loved one over the greater needs of the mission, with the conflict generated by the physical and engineering constraints of the situation, and from the profound isolation that drives the film’s main antagonist Dr. Mann (Matt Damon, who seemed to eschew major billing and whom I was surprised to see in the film) to his desperate actions. So even the character drama is mostly (mostly) placed in the context of thoughtful, plausible scientific scenarios, and that was good to see.

There was so much science here that was awesome to see onscreen at last. I loved the portrayal of the wormhole and the dialogue explaining why it has a spherical mouth instead of the cliched funnel shape. It was amazing to see an accurate version of a wormhole portrayed onscreen for once (although the sequence of passage through the wormhole seemed more visually fanciful). I loved the realistic treatment of the spaceships and their physics, and — as in Gravity — I loved, loved, loved the lack of sound in the space scenes. I’ve come to realize that silence can make things feel more real, and not only in space. Even here on Earth, we often see news footage or surveillance-camera footage that’s soundless, or observe something live from a great enough distance that we can’t hear it. So seeing something without hearing it, without a carefully honed accompaniment of clearly audible sound effects, can make it feel more like a real event and less like a constructed artifice. I had the same reaction to the shot of the Endurance passing by Saturn, visible only as a distant point of light. I gasped in awe at that, because the very absence of clarity and detail made it feel like I was looking at the real thing rather than a constructed special effect.

The reason there’s so much good science is because Nolan made the film in cooperation with Kip Thorne, the physicist whose work on wormholes for Carl Sagan’s novel Contact led him to a whole new field of wormhole physics that improved our understanding of general relativity and the way it could apply in extreme situations. This is taking that kind of collaboration to the next level, since Thorne was actually an executive producer on the film and was involved in every level of building the story. So there is so much good science and effective science exposition — naturally a bit simplified for movie audiences, but nothing that really felt badly wrong or misunderstood by the screenwriters. Even areas other than physics were well-handled. I gather that the filmmakers met with a team of biologists to work out a plausible mechanism for the blight that’s killing all the crops on Earth. And it was so refreshing to see cryogenic pods that didn’t have big windows that would let tons of heat in, that were more like realistic deep-freeze units.

But all that good science made it harder to tolerate the more fanciful moments, the parts that Nolan apparently considered non-negotiable and that Thorne had to compromise on as best he could. The severity of the time dilation on the ocean planet near the black hole was hard to justify, although apparently Thorne found an equation that made it just barely believable. The second planet they visited was just plain weird… so, it’s… made of clouds of solid ice, and has no surface? It’s just some kind of spongy ball of ice? And yet it has 80 percent of Earth’s gravity? There’s just no way that works. Even if such a body could form, if it were so low in density, it would never have gravity anywhere near that high. And it’s more likely that it would condense into a more solid ball of ice. This was just weird. Thorne has said it’s the part he’s most unhappy with.
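To put a rough number on why that bugged me (my own back-of-the-envelope estimate, not anything from the film or from Thorne): for a uniform sphere, surface gravity scales with density times radius, so even a ball of solid ice, never mind a spongy cloud of it, would have to be enormous to pull 80 percent of Earth’s gravity.

```python
from math import pi

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
g_target = 0.8 * 9.81  # 80 percent of Earth surface gravity, m/s^2
rho_ice = 920          # density of solid water ice, kg/m^3 (a spongy cloud would be far lower)

# For a uniform sphere, g = (4/3) * pi * G * rho * R, so the radius needed is:
radius_m = g_target / ((4 / 3) * pi * G * rho_ice)
print(radius_m / 1000)       # ~30,500 km
print(radius_m / 6_371_000)  # ~4.8 Earth radii
```

And a body that big would have long since compacted itself under its own gravity into something far denser and more solid than frozen clouds.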

Also, I’m disappointed that a movie nominally about the wonders of exploration doesn’t give us more interesting environments. We get a bunch of ocean and a bunch of ice, and that’s about it. So monochrome! Apparently the earlier draft by screenwriter Jonathan Nolan had more planetary exploration and even aliens, but director Nolan stripped most of it out.

Oh, and how was that NASA facility supposed to work as a centrifuge if it ever launched? All those vertical columns next to the walls would become big horizontal obstructions at chest level.

But the climax of the film is what really pulled me out of the story, and here’s where we get into the heavy spoilers. So Cooper falls into the black hole — okay, there was Thorne-guided dialogue explaining reasonably why it was the kind of black hole that could allow a survivable entry — and ends up in a tesseract spacetime manifold constructed by the 5-dimensional “bulk beings” that are actually the far-future evolved descendants of humanity reaching back to help us save ourselves. Okay, I can buy that conceit. And I can buy the premise that only gravity can cross the dimensions and transcend time, which is why the bulk beings could only send Cooper and the robot TARS back to Sol System in the relative present and could only send a message back in time. (String theory says that most kinds of particle/string are attached to the 4-dimensional brane of our universe, but gravitons are detached from it and can leak through to other universes, which may be why gravity is so weak.) But still, that’s something I had to reason out after the fact. As it was presented — Cooper just magically turning out to be the “ghost” and sending cryptic messages to Murph through “gravity” — it felt silly and fanciful. Thorne did his best to ground Nolan’s idea in some kind of plausible context, but it’s hard to believe that a force as weak as gravity could be focused tightly enough to have the fine-scale effects shown in Murph’s room. More to the point, even if it can be justified physically in terms of Sufficiently Advanced Technology for 5-dimensional gravity control and spacetime manipulation, there’s the deeper question of why. Why employ such convoluted methods to send the quantum data to Murph? Couldn’t the bulk beings just send the message directly instead of setting up this contrived father-and-daughter-connecting-across-time situation? The only excuse I could think of as I walked back to my car after the movie was that maybe they were so far in the posthuman future that they no longer remembered our languages and communication methods and thus needed a human intermediary to interpret for them. But then, how were they able to communicate to TARS sufficiently that he could explain the situation to Coop? And why couldn’t TARS transmit the data? It felt like Nolan’s intent was to build on Brand’s earlier speech about love being a force that could transcend time and space, that it was only Coop’s love for his daughter that let him connect. But as a number of other critics have said, that’s sentimental silliness in the context of such a hard-science film. It’s a maudlin, corny scenario that just doesn’t feel right, and it’s a shaky foundation for an otherwise mostly solid film.

On top of which, how the hell did adult Murph figure out that it was her father communicating with her? There was no evidence presented to her that would’ve let her make that deduction. She just magically knew it because the timing of the montage demanded that she recognize it at the same time the audience did. It’s the one part where there wasn’t even an attempt to assert some kind of rational justification for the sentimental situation, and the worst part of the sequence. Heck, it wasn’t even justified from a character standpoint. For all these years, she’s felt that her father abandoned her. Why would she suddenly, based on nothing but the Morse-code “STAY” that she’d already known about at age 10, do a total about-face in her perceptions and suddenly believe that her father had been sending her messages from the future all along? Where the hell does that come from, either as an intellectual leap or an emotional epiphany? The only reason she got there was because the script made her do it. That’s as dishonest from a character standpoint as it is from a plausibility standpoint.

And that’s a shame, because the visual portrayal of the tesseract is brilliant. It’s unlike anything I’ve seen onscreen before, and it’s a marvelous visualization of the idea of time as a traversable dimension, although I could quibble about the details.

There’s one other area where the film’s realism failed badly, and it’s more disturbing. This film is set in the United States sometime in the future, probably the latter half of the 21st century. By then, demographic trends suggest that the US is going to be a white-minority nation. I’m sure that the current pool of physicists, engineers, and astronauts working for or with NASA is already highly diverse today. And yet the cast of this film was overwhelmingly white. There were only two black characters in the film, a school principal who appeared in a single scene and a token member of the expedition who stayed behind on the ship on the first landing and then got killed off at the midpoint. The only vaguely positive thing that can be said is that at least the black guy was the second one killed off instead of the first. Other than that, there were only a couple of uncredited bit players in the background. And the only Asian face I noticed in the film was a photo of one of the missing astronauts, one they didn’t bother to rescue. I don’t think there were any Hispanic characters in the film at all. The robots got more screen time in this movie than anyone nonwhite. This is a story about the survival of all humanity, yet virtually the only humans given any agency or participation in the story are white people with Anglo-Saxon names. In a film that strives for realism on so many levels, this is a gross failure of plausibility and common sense. I’m sick of the Hollywood establishment being so out of step with reality when it comes to inclusion in feature films. Television is increasingly catching up to reality as executives realize that their audience is diverse and they can make more profit by appealing to that diversity. But movie executives still apparently haven’t caught on.

On a more positive note, I wanted to commend Hans Zimmer’s score. As I’ve remarked before, I find Zimmer a chameleonic composer that I have a mixed response to; he’s good at adapting to what different directors want, so sometimes I find his work brilliant and fascinating, yet on other films I really don’t like it at all. I really disliked his work on Nolan’s Batman films, Inception, and the Nolan-produced Man of Steel, finding those scores ponderous and blaring, so I wasn’t expecting to like his score for Interstellar. But it’s actually very good. It’s in kind of a Philip Glass-y, minimalist vein, but it works well for the film. I’m glad that it ends up being another tick in the plus column for the film rather than adding another minus.

All in all, then, Interstellar is a film that mostly works as an installment in the all too small but growing category of hard-science fiction motion pictures. It’s more successful than Gravity at being believable, and hopefully it will add momentum to the trend of SF films getting more grounded in real science. In many ways, it’s a refreshing treat for fans of physics and hard SF. But it has a couple of major flaws that are hard to get past, especially for fans of physics and hard SF.

Green Blaze powers addendum: The high jump

February 5, 2014

I’ve added a new paragraph to my earlier post “ONLY SUPERHUMAN reader question: Measuring the Green Blaze’s powers,” since I realized there was one aspect of Emerald Blair’s superstrength that I forgot to address, one that occurred to me as a result of watching The Six Million Dollar Man on DVD. Here’s what I added:

It’s occurred to me to wonder: How high could Emry jump? Of course, that depends on the gravity, so let’s assume a 1g baseline. According to my physics textbook, the maximum height of a projectile is proportional to the square of its initial velocity (specifically, the velocity squared times the square of the sine of the launch angle, divided by twice the gravity). So if we use my earlier, very rough assumption that Emry’s speed relative to an unenhanced athlete goes as the square root of her relative strength, that would cancel out the square, and thus jumping height (for the same gravity and angle) would increase linearly with strength. If she’s four times stronger than the strongest human athlete today, then it follows that she could jump roughly four times the world record for the high jump. Except it’s more complicated than that, since we’re dealing with the trajectory of her center of mass. The current world record is 2.45 meters by Javier Sotomayor. But that’s the height of the bar he cleared, not the height of his center of mass. He used a technique called the Fosbury flop, in which the body arcs over the bar in a way that keeps the center of mass below it. So his CoM was probably no more than about 2.15 meters off the ground, give or take. And he was pretty much fully upright when he made the jump. Since he’s 1.95 meters tall to start with, and the average man’s CoM height is 0.56 of his total height (or about 1.09 m in this case), that would mean the world-record high jump entailed an increase in center-of-mass altitude of slightly over one meter. So if we assume that Emry is doing more of a “bionic”-style jump, keeping her body vertical and landing on her feet on whatever she’s jumping up to, then she might possibly be able to raise her center of mass up to four meters in Earthlike gravity. Which means she could jump to the roof of a one-story building or clear a typical security fence — comparable to the jumping ability of Steve Austin or Jaime Sommers.
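And here’s that arithmetic worked through explicitly, using the same rough approximations as above (a 1g vertical launch, the 0.56 center-of-mass ratio, and jumping speed scaling as the square root of strength):

```python
from math import sqrt

G = 9.81  # m/s^2, the 1g baseline assumed above

# Peak rise of the center of mass for a vertical launch: h = v**2 / (2 * g),
# so the launch speed needed for a given rise is v = sqrt(2 * g * h).

sotomayor_height = 1.95                   # meters
com_standing = 0.56 * sotomayor_height    # ~1.09 m standing center-of-mass height
com_peak = 2.15                           # rough CoM height while clearing the 2.45 m bar
record_rise = com_peak - com_standing     # ~1.06 m

record_speed = sqrt(2 * G * record_rise)  # ~4.6 m/s vertical launch speed

# If speed scales as the square root of strength, then at 4x strength the speed
# doubles and the rise (proportional to speed squared) quadruples:
emry_speed = sqrt(4) * record_speed
emry_rise = emry_speed ** 2 / (2 * G)

print(round(record_rise, 2), round(emry_rise, 2))  # ~1.06 m and ~4.2 m
```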

And just a reminder: I’m open to more reader questions about Only Superhuman or my other writing.

We’ve found water on Ceres!

This just in from Space.com:

Water Found on Dwarf Planet Ceres

Astronomers have discovered direct evidence of water on the dwarf planet Ceres in the form of vapor plumes erupting into space, possibly from volcano-like ice geysers on its surface.

Using European Space Agency’s Herschel Space Observatory, scientists detected water vapor escaping from two regions on Ceres, a dwarf planet that is also the largest asteroid in the solar system. The water is likely erupting from icy volcanoes or sublimation of ice into clouds of vapor.

This is big news. It’s a major scientific breakthrough, proof of something that’s only been suspected about Ceres up to now, and it comes a year earlier than I expected, since the Dawn probe won’t reach Ceres until early 2015. It also has important ramifications for our future in space. In Only Superhuman, I established Ceres as the primary source of water and organic molecules for space habitats throughout the Main Asteroid Belt and inner system. This was based on astronomers’ estimates that Ceres might potentially have more fresh water on it than Earth does (since most of ours is salt water). Now we have verification for that, and it confirms (or at least makes it far more likely) that future space colonists and asteroid miners will have access to abundant sources of water without needing to lug it up out of Earth’s gravity well or go clear out to the moons of Jupiter and Saturn.

It’s also nice to get confirmation that what I put in my novel wasn’t wrong. Although it never occurred to me to mention a water-vapor atmosphere or cryovolcanoes in my descriptions of Ceres. Just as well, I suppose, since the volcanoes are unconfirmed. If and when I get to do a sequel, hopefully the timing will be right to work in Dawn’s findings. Hmm, the article says it’s more likely just sublimation, but I’m hoping for icecanoes (to use the Doctor Who term). Those would be cooler to write about. (Literally…)

Categories: My Fiction, Science

ONLY SUPERHUMAN reader question: Measuring the Green Blaze’s powers

December 30, 2013 4 comments

I recently received a few questions about Only Superhuman from Brandt Anderson via a Facebook message, and I thought I’d address them here. Brandt wrote:

I enjoy most super hero novels such as Ex-Heroes, Paranormals, Devil’s Cape, etc., and one of the things that is always forefront on my mind is stats. I like knowing exactly how strong or how fast the super-powered character is. So, I was hoping you wouldn’t mind giving an approximation on how enhanced Emerald Blair is. Her strength, speed, reflexes, senses, healing factor and durability if you don’t mind. Also, I apologize for this amount of nitpicking, would you able to tell me what her superhuman attributes be at without any of the enhancements she has? And lastly, in your world, how strong is the average super-being and what is the normal human level at?

Those are interesting questions, though to be honest, I haven’t really worked out that many of the details. It’s worth thinking about if I get to do further novels, though, so I’ll try to offer some answers. I did address Emerald Blair’s strength level in the novel when I had Eliot Thorne mention that she could “bench-press a tonne in one gee,” i.e. standard Earth surface gravity. That led me to the following analysis from my novel annotations:

From what I can find, the current world record for an unassisted or “raw” bench press (without the use of a bench shirt, a rigid garment that supports the muscles and augments the amount they can lift) for a woman in Emry’s weight class seems to be held by Vicky Steenrod at 275 lb/125 kg. Assuming Thorne was referring to what Emry could lift raw, that would make her 8 times stronger than Ms. Steenrod, at least where those particular muscles are concerned. And Emry’s training isn’t specialized for powerlifting but is more general, so that would probably make her even stronger overall. Not to mention that Thorne seemed to be talking about her typical performance, not a personal record. So as an adult Troubleshooter, with bionic upgrades on top of her Vanguardian mods, Emry might be at least 10 times the strength of an unenhanced female athlete of her size and build. That may be conservative, given some of what I’ve read about the possibilities of artificial muscle fibers. On the other hand, there are limits to how much stress the organs of even an enhanced body could endure.

By the way, the all-time raw bench-press world record is 323.4 kg by Scot Mendelson, who’s 6’1″ and over twice Emry’s weight. The assisted world record (with a bench shirt) is 487.6 kg by Ryan Kennelly, who’s about the same size and whose unassisted record is much lower. So going by the “at least 10 times” figure I arrived at above (call it 1,250 kg), that would make Emry nearly 4 times as strong as the strongest human beings alive today, and that’s without the added assistance her light armor would provide her (though she’d need to add sleeves to her armor to get the full effect). And that’s the lower limit. In any case, given all the bionic enhancements she’s added to her native strength, she might well be the strongest person in Solsys in proportion to her weight class, or at least right up there with the record-holders of her day.
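
Just as a sanity check on the arithmetic, here’s a minimal back-of-the-envelope sketch in Python. The record figures are the ones quoted above, and the 1,250 kg value is simply my “at least 10 times Steenrod” assumption from the previous paragraph, not an established number.

    # Back-of-the-envelope strength ratios; every figure here comes from the post above.
    emry_typical_bench_kg = 1000.0    # "bench-press a tonne in one gee" (typical performance)
    steenrod_raw_kg = 125.0           # women's raw record in Emry's weight class, as quoted
    mendelson_raw_kg = 323.4          # all-time men's raw record, as quoted

    print(emry_typical_bench_kg / steenrod_raw_kg)   # -> 8.0 times the women's record

    # Assuming her true maximum is at least 10 times Steenrod's lift:
    emry_max_kg = 10 * steenrod_raw_kg               # 1250 kg
    print(emry_max_kg / mendelson_raw_kg)            # -> ~3.87, i.e. "nearly 4 times" the strongest human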

Only Superhuman cover art by Raymond Swanland

According to my character profiles, by the way, Emerald’s height is 168 cm (5’6″) — at least in one gee or thereabouts, since people gain a bit of height in low or microgravity due to their skeletons being less compressed — and her mass is 69 kg (153 lb), which is a bit heavy for her size, but that’s because of the added weight of her bionics and reinforcements, as well as her dense musculature. So that’s strength.

What about speed? Well, I established in Chapter 6 that Javon Moremba, who’s specialized for running, could run at 60 km/h, which is just one and a third times the top speed Usain Bolt reached during his world-record sprint in 2009. I’m not really sure how much it’s possible to increase human running speed without substantially restructuring human anatomy, since we’re already kind of specialized for running by evolution — although we’re specialized more for endurance running than speed, which was how our ancestors were able to be successful hunters and trackers. Javon’s anatomy is altered from the human norm, with atypically long legs and powerful joints and enlarged lungs. Emerald’s proportions are more normal, and her legs aren’t especially long; plus she’s not exactly lean. She’s built more for strength than speed. On the other hand, the athlete I modeled her physique after, tennis star Serena Williams, can be an astonishingly fast mover on the tennis court due to her sheer strength — though not as fast as her leggier sister Venus. Okay, so we can safely assume that the teenage Emry couldn’t run as fast as Javon. She’s only 84% his height and less of it is legs, so let’s say she has 75% of his stride. I actually have her down as only 72% of his mass, though; I think I based Javon’s statistics on the aforementioned Mr. Bolt. I guess the question is the ratio of muscle mass doing the pushing to total body mass being pushed. I think I’ll avoid any complicated math and just go with visual intuition, which tells me that Emry has proportionally more excess bulk to deal with; but once she’s bionically enhanced, that might compensate. So let’s say that with just her raw muscle, no cyborg upgrades, she’s got a minimum of 75% of his running speed, which would be just about equal to Bolt’s peak speed.

But how much do her upgrades boost her strength? Well, we know that she was always strong enough in her adolescent years to match or overpower any man, and judging by those weightlifting figures above, a man’s maximum strength might be something around 2.5 times a woman’s, all else being equal. But many of those men would be mods themselves, so we’d need to up that. Still, I don’t want too much of her strength to be innate, since the bionics should contribute a lot. So let’s say that she started out roughly 3.3 times the normal strength of a woman of her build and had it tripled by her Troubleshooter bionics.

How does that apply to her running speed? This is probably oversimplifying like hell, but it seems to me that if you exert three times the force on the same mass, then by Newton’s second law you get three times the acceleration. Now, for a given distance, the time needed to cover it goes as one over the square root of the acceleration; and the rate is the distance over the time. So that would suggest, unless I’m doing something very wrong, that if she has three times the acceleration for each thrust of her leg muscles pushing her forward, then her speed would be increased by roughly the square root of three, or 1.73. So if her running speed without bionics was 45 km/h, then with bionics it’d be nearly 80 km/h (50 mph). Though she’s probably capable of bursts of even faster speed when she supercharges her nanofiber implants, as we saw when she made her skyscraper jump in Chapter 11. This would make her about 5/6 the typical speed of Steve Austin, the Six Million Dollar Man, and half the top recorded speed of Jaime Sommers, the Bionic Woman. But let’s call that her sprinting speed. For endurance running, she’d probably average out a bit slower — let’s say 64 km/h (40 mph), which translates to a 1.5-minute mile — which would put her at better than two and a half times the female world record for the mile. It would also put her slightly above Javon’s indicated speed, but that was for a Javon who was out of training. (Oh, and keep in mind that this is assuming she’s in a full Earth gravity or close to it.)
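
Laying that chain of assumptions out as a quick sketch (purely illustrative: the square-root scaling is the simplistic Newtonian argument above, not a real biomechanical model):

    import math

    # Rough speed-estimate chain from the paragraphs above; just the stated assumptions strung together.
    javon_kmh = 60.0                     # Javon's established running speed
    stride_factor = 0.75                 # my guess for Emry's stride relative to his
    emry_unaugmented_kmh = stride_factor * javon_kmh       # 45 km/h, about Bolt's peak speed

    force_factor = 3.0                   # bionics roughly tripling her muscle force
    # Triple the force on the same mass -> triple the acceleration; over a fixed distance
    # the time goes as 1/sqrt(a), so the speed goes as sqrt(a):
    emry_sprint_kmh = emry_unaugmented_kmh * math.sqrt(force_factor)
    print(round(emry_sprint_kmh))        # -> 78, i.e. "nearly 80 km/h (50 mph)"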

There are other ways of measuring speed, though. How fast can she dodge a blow or throw a punch? That gets us into the next question, reflexes. Well, at one point in chapter 16 (p. 284 in the paperback), I say “Her enhanced reflexes made her dodge the shockdart before she was consciously aware of it, but her mind quickly caught up.” So her reaction time is certainly accelerated considerably beyond the norm, so much that it outpaces her conscious thought at times. And while her foot speed is not too much above normal, her dodging speed can be literally faster than a speeding bullet. Well, a speeding dart. If we assume the dart had a speed of around 300 m/s, comparable to an air rifle pellet and close to a 9mm bullet, and if she was maybe 15 meters away from the shooter, that would give her a twentieth of a second to react, or 50 milliseconds. That’s maybe twice as fast as the quickest recorded human reaction for movement, and nearly four times faster than the typical reaction to a visual stimulus. And that’s just the reaction time she’d need to begin moving to dodge that particular dart. Add in the time it would take to move far enough to miss and she’d have to be even faster. I found a factoid somewhere saying that Usain Bolt moves a foot every 29-odd milliseconds, which is about one centimeter per millisecond, so if we draw on the above comparisons to Bolt’s running speed (which is a horribly rough comparison, but it’s all I’ve got), the Green Blaze might be able to move 1.5-2 cm per millisecond. Her torso is maybe c. 32 cm at its widest point, so to dodge a dart fired at center mass she’d need to shift roughly half that width, say 16 cm, which would take maybe 8-11 ms. So she’d need to start moving within about 40 ms, roughly one-fifth of the average human reaction time, or in other words five times faster. Just for a margin of safety and round numbers, let’s say she reacts six times faster than the average person and three times faster than the quickest recorded human.
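
Here’s that dodging arithmetic as a quick sketch; the dart speed, the distance, the movement rate, and the torso width are all just the rough guesses above, not established figures.

    # Dodging the shockdart: rough figures from the paragraph above, not canon specs.
    dart_speed_m_s = 300.0        # assumed dart velocity
    distance_m = 15.0             # assumed distance to the shooter
    flight_time_ms = distance_m / dart_speed_m_s * 1000      # 50 ms of total warning

    move_rate_cm_per_ms = 1.5     # conservative guess for how fast she can displace herself
    dodge_distance_cm = 16.0      # roughly half of a ~32 cm torso width
    move_time_ms = dodge_distance_cm / move_rate_cm_per_ms   # ~11 ms to get clear

    reaction_budget_ms = flight_time_ms - move_time_ms
    print(flight_time_ms, round(move_time_ms, 1), round(reaction_budget_ms, 1))
    # -> 50.0 10.7 39.3: she has to start moving within about 40 ms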

Edited to add: It’s occurred to me to wonder: How high could Emry jump? Of course, that depends on the gravity, so let’s assume a 1g baseline. According to my physics textbook, the maximum height of a projectile is proportional to the square of its initial velocity (specifically, the velocity squared times the square of the sine of the launch angle, divided by twice the gravity). So if we use my earlier, very rough assumption that Emry’s speed relative to an unenhanced athlete goes as the square root of her relative strength, that would cancel out the square, and thus jumping height (for the same gravity and angle) would increase linearly with strength. If she’s four times stronger than the strongest human athlete today, it follows that she could jump roughly four times the world record for the high jump. Except it’s more complicated than that, since we’re dealing with the trajectory of her center of mass. The current world record is 2.45 meters by Javier Sotomayor. But that’s the height of the bar he cleared, not the height of his center of mass. He used a technique called the Fosbury flop, in which the body arcs over the bar in a way that keeps the center of mass below it. So his CoM was probably no more than about 2.15 meters off the ground, give or take. And he was pretty much fully upright when he made the jump. Since he’s 1.95 meters tall to start with, and the average man’s CoM height is 0.56 of his total height (or about 1.09 m in this case), that would mean the world-record high jump entailed an increase in center-of-mass altitude of slightly over one meter. So if we assume that Emry is doing more of a “bionic”-style jump, keeping her body vertical and landing on her feet on whatever she’s jumping up to, then she might possibly be able to raise her center of mass up to four meters in Earthlike gravity. Which means she could jump to the roof of a one-story building or clear a typical security fence — comparable to the jumping ability of Steve Austin or Jaime Sommers.
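
Here’s that jump estimate as a quick sketch, using the textbook projectile formula with a straight vertical launch, plus my hand-wavy “speed goes as the square root of strength” assumption:

    import math

    # Vertical jump estimate: h = v^2 * sin^2(theta) / (2g), with theta = 90 degrees.
    g = 9.81                          # 1g baseline, in m/s^2
    record_com_rise_m = 1.06          # 2.15 m peak minus 1.09 m standing CoM, from the figures above

    v_record = math.sqrt(2 * g * record_com_rise_m)    # launch speed implied by that rise, ~4.6 m/s
    v_emry = math.sqrt(4.0) * v_record                 # speed ~ sqrt(strength), with strength 4x the record holder
    emry_com_rise_m = v_emry ** 2 / (2 * g)            # back through the same formula
    print(round(v_record, 1), round(emry_com_rise_m, 1))   # -> 4.6 m/s and ~4.2 m of CoM rise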

So let’s move on to senses. We know from Ch. 3 that 13-year-old Emry’s “enhanced vision” let her make out the movements of the townspeople of Greenwood from some distance away, far enough that the curve of the habitat gave her an overhead view. Now, Greenwood is a Bernal sphere meant to simulate a rural environment with farmland and presumably forest. It should have a fairly low population density, and my notes give it a population around 3000 people. If we set the population density at maybe 30 people per square kilometer, that gives a surface area of 100 square km, for a radius of about 5 km and a circumference of 31.4 km. Just eyeballing it with a compass-drawn circle and a ruler, I’d say she’d need to be 1.5 to 2 km away to get the kind of raised angle described in the text. Being an assiduous researcher, I went out and braved the cold to visit my local overlook park to see if I could spot human figures at anything resembling that range. The farthest I was able to spot a human being was at a place I estimate was about a mile/1.6 km away; the park’s elevation, though a respectable 300 feet or so, was too small to add significantly to the distance. But I just saw the faintest speck of movement. The scene indicates that Emry could see enough detail to make out body language and attitude. I’d say her resolution would have to be at least 3-4 times greater than mine (with glasses). Although we’re not talking about bionic eyes with zoom lenses, so it’s probably more a matter of perception of detail. Assuming my prescription is still good enough to give me 20/20 vision in at least one eye (which I probably shouldn’t assume), that would make Emry’s visual acuity something like 20/7 or 20/5 if not better; the acuity limit of the unaided eye is 20/10 to 20/8 according to Wikipedia. (20/n means the ability to see at 20 feet what an average person would need to be n feet away to see.) Hawks are estimated to have 20/2 vision. Emry isn’t specialized for eyesight, so let’s not go to that extreme. Let’s give her a baseline visual acuity of 20/5, say, about twice the human maximum.
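
For what it’s worth, a crude angular-resolution check backs up that 3-4x figure. This sketch assumes that 20/20 vision resolves roughly one arcminute and that reading posture and body language needs detail on the order of 10-15 cm; both of those are my assumptions, not anything established in the book.

    import math

    # Angular-resolution check on the Greenwood scene (assumptions mine, not canon).
    one_arcmin_rad = math.radians(1 / 60)      # rough resolving power of 20/20 vision
    distance_m = 1600.0                        # about the 1.6 km sightline mentioned above

    resolvable_m = distance_m * one_arcmin_rad
    print(round(resolvable_m, 2))              # -> ~0.47 m: enough to spot a person, not to read them

    needed_detail_m = 0.13                     # assumed feature size for reading body language
    print(round(resolvable_m / needed_detail_m, 1))   # -> ~3.6x sharper than 20/20, i.e. roughly 20/6 or better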

So what do her bionics add? For one thing, they broaden her visual spectrum to the infrared. This is apparently something she can turn on and off. Now, it should be remembered that TV and movies tend to misrepresent infrared vision as being able to see through walls. Actually that usually wouldn’t work, since walls are generally designed to insulate, so heat — and thus IR light — doesn’t pass through them easily. And as I said in the book, glass is generally opaque to IR. So this wouldn’t be the equivalent of “x-ray vision,” except when dealing with less well-insulated things like human bodies. It could enable her to read people’s emotional states through their blood flow, though, or to track recent footprints and the like. She also has an inbuilt data buffer that’s shown recording images from her eyes and letting her replay, analyze, and enhance them later, projected on the heads-up display built into her retina. So that might give her sort of a “digital zoom” ability, to enlarge part of a recorded image, but not to increase its resolution beyond what her eyes could detect. And her implants might up her acuity to maybe 20/4.

As for her hearing, I haven’t established anything beyond the fact that it’s better than normal. She can probably hear a somewhat larger dynamic range than most people and has somewhat more sensitivity, but I’m not sure it could be enhanced too much without a substantial alteration to the anatomy of the ears. But she could have bionic auditory sensors that could allow her to amplify sounds further as needed.

As for scent, I establish in Ch. 20 that Emry can track by it, though not as well as someone more specialized for the task like Bast or Psyche. So her sense of smell is, again, somewhat above normal but not massively so. Which would enhance her sense of taste accordingly as well. It’s possible she’s a supertaster, like a lot of real-life people. (In fact, looking over the list of foods that supertasters dislike, I think I might be one!)

As for her sense of touch, it’s no doubt unusually sensitive, which is why she’s so hedonistic and easily stimulated. Although her pain sense is no doubt diminished in comparison to her other tactile perceptions. I gather that redheads are normally more sensitive to pain than most, but it stands to reason that her nociception would have been somewhat suppressed.

“Healing factor” is a tricky one; I’m not sure how to codify it. But she does have a fast metabolism and thus probably heals a bit faster than normal, and her bionics include a “nanotech immune-boosting and injury repair system,” as stated in her character bio. She can’t heal nearly instantly like Wolverine in the movies, since there would be physical and metabolic limits on how fast repairs could realistically be done, but she could probably heal, at a guess, 2-4 times faster than normal depending on the type of injury and whether she’s able to rest and replenish or has to heal on the run. (That’s complete guesswork, since I’m not sure where to find information on human healing rates, what the recorded maximums are, or what mechanisms could enhance them.) She’s also got an augmented immune system, both inborn and nanotech-enhanced; she’s probably got little or no experience with being sick, though she might be susceptible to a sufficiently potent bioweapon. She has toxin filters to protect her from poisoning and drugs. Alcohol would probably have little effect on her, but if she is a supertaster, she wouldn’t like the taste of it anyway. And it’s not like she needs help relaxing her inhibitions, since she hardly has any to begin with.

Durability, though, is something the Green Blaze has in abundance, thanks to her “dense Vanguardian bone” and the nanofiber reinforcements to her skeleton and skin. She’s not easy to hurt. She takes a good deal of pounding in Only Superhuman, but the only skeletal injury she suffers is a hairline wrist fracture which is compensated for by her nanofiber bracing. She rarely sustains more than cuts, bruises, and strains. She’s not exactly bulletproof — she needs her light armor for that — but there are enough reinforcements around her skull and vital organs that it would take a pretty high-powered rifle to inflict a life-threatening injury. Her skull reinforcements are probably comparable to a military ballistic helmet, so shooting her in the head would probably cause surface bleeding and a moderate concussion at worst, and more likely just make her mad. And of course her light-armor uniform gives even more protection, strength enhancement, and the like. (Note that this is as much a matter of micrometeorite protection as bullet protection.)

One power Brandt didn’t ask about is intelligence. Emerald Blair embraces her physical side more than her intellectual side, but her intelligence is easily at genius level. She’s definitely smarter than I am, since I have plenty of time to figure out the solutions that she comes up with on the fly. She’s far more brilliant than she realizes yet, and when and if she catches on and begins developing that potential, she could be a superbly gifted detective and problem-solver.

Brandt’s final question is, “And lastly, in your world, how strong is the average super-being and what is the normal human level at?” Well, the normal, unmodified human level is the same as it would be in real life, although people living in lower-gravity conditions would be less strong than Earth-dwelling humans. As for mods, I’m not sure there’s such a thing as an average one, since they’ve specialized in diverse directions. Only some are augmented for physical strength, like Vanguardians, many Neogaians, and Mars Martialis… ans… whatever. Honestly, I’m not sure to what extent physical strength would be needed as a human enhancement in Strider civilization. Combat in the future will be mostly the purview of drones and robots, or soldiers in strength-enhancing exoskeletons. So enhancing individual strength would be more a choice than a necessity, probably more likely to be done for athletics than anything else.

Still, in a setting like Strider civilization, where mods have embraced superhero lore as a sort of foundational mythology, there would be an element of sports and celebrity to crimefighting or civil defense. More fundamentally, in a culture where human modification is embraced, physical strength could be seen as a desirable “biohack” just for the sense of power it provides. It’s like how some people like high-performance muscle cars even though they don’t need that much power to commute to work. I think in the Vanguardians’ case it was about exploring the limits of the human animal, finding how far we could be augmented in every possible way, both as a matter of scientific curiosity and as a symbolic statement to inspire others to mod themselves — and thumb their noses at Earth’s resistance to such things. The average Vanguardian might be somewhere around Emry’s pre-bionic strength level, maybe 3-3.5 times normal muscle strength for a given build, though they vary widely in their builds. Still, I think that Beltwide, only a certain percentage of mods would be tailored for strength, with others emphasizing endurance, senses, reaction time, intelligence, adaptation to particular environments, etc. (on top of the radiation resistance that everyone living in space would need).

So, to sum up the Green Blaze’s powers:

  • Strength: c. 10x normal for a woman of her build or 4x baseline-human maximum
  • Speed: up to 80 km/h (50 mph) in a c. 1g environment
  • Reaction time: c. 6x faster than average, 3x faster than the recorded human maximum
  • Vision: c. 20/4 visual acuity, infrared vision, visual recording and enhancement
  • Other senses: Moderately above human maximum range and sensitivity
  • Healing: Somewhat accelerated healing and cellular repair; very strong immune system and toxin resistance
  • Durability: Considerable resistance to abrasion, contusion, laceration, and broken bones; bullet resistance comparable to a human in moderate body armor
  • Intelligence: Genius-level but underdeveloped

Note, however, that these are her innate power levels, not counting the further enhancements her light armor would provide to her strength, durability, and speed. But figuring those out would be a whole other essay.

Wow, I got a pretty long essay out of this. It was fun, and it could be useful for future adventures, hopefully. So I’ll open the door. If anyone has further questions about Only Superhuman that haven’t been addressed already on my website, or more generally about my work, feel free to post them in the comments or on Facebook. Easy questions asked in the comments will probably be answered there, while more involved questions may spawn more essays. We’ll see how it goes.

Musings on quantum gravity

Recently I came across this article about an experiment to reconcile quantum physics with gravity, the one fundamental force that hasn’t yet been explained in quantum terms:

New Experiments to Pit Quantum Mechanics Against General Relativity

The problem with reconciling gravity (which is explained by Einstein’s General Theory of Relativity) and quantum physics is that they seem to follow incompatible laws. Quantum particles can exist in superpositions of more than one state at a time, while gravitational phenomena remain resolutely “classical,” displaying only one state. Our modern interpretation suggests that what we observe as classical physics is actually the result of the quantum states of interacting particles correlating with each other. A particle may be in multiple states at once, but everything it interacts with — including a measuring device or the human observer reading its output — becomes correlated with only one of those states, and thus the whole ensemble behaves classically. This “decoherence” effect makes it hard to detect quantum superpositions in any macroscopic ensemble, like, say, a mass large enough to have a measurable gravitational effect. Thus it’s hard to see quantum effects in gravitational interactions. As the article puts it:

At the quantum scale, rather than being “here” or “there” as balls tend to be, elementary particles have a certain probability of existing in each of the locations. These probabilities are like the peaks of a wave that often extends through space. When a photon encounters two adjacent slits on a screen, for example, it has a 50-50 chance of passing through either of them. The probability peaks associated with its two paths meet on the far side of the screen, creating interference fringes of light and dark. These fringes prove that the photon existed in a superposition of both trajectories.

But quantum superpositions are delicate. The moment a particle in a superposition interacts with the environment, it appears to collapse into a definite state of “here” or “there.” Modern theory and experiments suggest that this effect, called environmental decoherence, occurs because the superposition leaks out and envelops whatever the particle encountered. Once leaked, the superposition quickly expands to include the physicist trying to study it, or the engineer attempting to harness it to build a quantum computer. From the inside, only one of the many superimposed versions of reality is perceptible.

A single photon is easy to keep in a superposition. Massive objects like a ball on a spring, however, “become exponentially sensitive to environmental disturbances,” explained Gerard Milburn, director of the Center for Engineered Quantum Systems at the University of Queensland in Australia. “The chances of any one of their particles getting disturbed by a random kick from the environment is extremely high.”

The article is about devising an experiment to get around this and observe a superposition (potentially) in a “ball on a spring” type of apparatus. What interests me, though, is a more abstract discussion toward the end of the article.

Inspired by the possibility of experimental tests, Milburn and other theorists are expanding on Diósi and Penrose’s basic idea. In a July paper in Physical Review Letters, Blencowe derived an equation for the rate of gravitational decoherence by modeling gravity as a kind of ambient radiation. His equation contains a quantity called the Planck energy, which equals the mass of the smallest possible black hole. “When we see the Planck energy we think quantum gravity,” he said. “So it may be that this calculation is touching on elements of this undiscovered theory of quantum gravity, and if we had one, it would show us that gravity is fundamentally different than other forms of decoherence.”

Stamp is developing what he calls a “correlated path theory” of quantum gravity that pinpoints a possible mathematical mechanism for gravitational decoherence. In traditional quantum mechanics, probabilities of future outcomes are calculated by independently summing the various paths a particle can take, such as its simultaneous trajectories through both slits on a screen. Stamp found that when gravity is included in the calculations, the paths connect. “Gravity basically is the interaction that allows communication between the different paths,” he said. The correlation between paths results once more in decoherence. “No adjustable parameters,” he said. “No wiggle room. These predictions are absolutely definite.”

Now, this got me thinking. Every particle with mass interacts gravitationally with every other particle with mass, so there would be no way to completely isolate them from interacting. For that matter, gravity affects light too. So if gravity is an irreducible “background noise” that prevents stable superpositions, that would explain why quantum effects don’t seem to manifest with gravitational phenomena.

And that does sort of reconcile the two. The decoherence model, that classical states are what we get when quantum states interact and correlate with each other, basically means that classical physics is simply a subset of quantum physics, the behavior of quantum particles that are in a correlated state. So the “classical” behavior of gravity would also be a subset of quantum physics — meaning that relativistic gravity is quantum gravity already, in a manner of speaking. We just didn’t realize they were two aspects of the same overarching whole.
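
To make that “interaction and correlation wash out the superposition” idea concrete, here’s a toy sketch of my own (not the article’s math): a single qubit in an equal superposition, before and after it becomes entangled with a one-qubit “environment.”

    import numpy as np

    # An isolated qubit in an equal superposition has off-diagonal density-matrix terms,
    # which is what allows interference.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho_isolated = np.outer(plus, plus)
    print(rho_isolated)            # off-diagonal 0.5 entries: coherence is present

    # Now let an "environment" qubit become correlated with it, in the state (|00> + |11>) / sqrt(2),
    # and look at the system qubit alone by tracing out the environment.
    entangled = np.zeros(4)
    entangled[0] = entangled[3] = 1.0 / np.sqrt(2)     # basis order: |00>, |01>, |10>, |11>
    rho_pair = np.outer(entangled, entangled)
    rho_system = rho_pair.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(rho_system)              # off-diagonals are now zero: the qubit by itself looks classical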

Now, this reminds me of another thing I heard about once, a theory that gravity didn’t really exist. It might have been the entropic gravity theory of Erik Verlinde, which states that gravity is, more or less, just a statistical artifact of particles tending toward maximum entropy. Now, what I recall reading somewhere, though I’m not finding a source for it today, is that this — or whatever similar theory I’m recalling — means that particles tend toward the most probable quantum state. And statistically speaking, for any particle in an ensemble, its most probable position is toward the center of that ensemble, i.e. the center of mass. So I had the thought that maybe what we perceive as gravity is more just some sort of probability pressure as particles tend toward their most likely states.

Now, if Stamp’s theory is right, then Verlinde’s is wrong; there must be an actual force of gravity, or rather, an interaction that correlates the paths of different particles. But it occurs to me that there may be some basis to the probabilistic view of gravity if we look at it more as a quantum correlation than an attraction. To explain my thinking, we have to bring in another idea I’ve talked about before on this blog, quantum Darwinism. The idea there is that the way decoherence works is that the various states of a quantum particle “compete” as they spread out through interaction with other particles, and it’s the more robust, stable states that prevail. Now, what I’m thinking is that as a rule, the most stable states would be the most probable ones. And again, those would tend to be the positions closest to the center of mass, or as close as feasible when competing with other particles.

So if we look at gravitation not as an attractive force per se, but as a sort of “correlational field” that promotes interaction/entanglement among quantum particles, then we can still get its attractive effect arising as a side effect of the decoherence of the correlated particles into their most probable states. Thus, gravity does exist, but its attractive effect is fundamentally a quantum phenomenon. So you have quantum gravity after all.

But how to reconcile this with the geometric view of General Relativity, that gravity is actually a manifestation of the effect that mass and energy have on the geometry of spacetime? Well, that apparent geometry, that spatial relationship between objects and their motions, could be seen as a manifestation of the probabilistic relationships among their position and movement states. I.e. a particle follows a certain path within a gravitational field because that’s the most probable path for it to take in the context of its correlation with other particles. Even extreme spacetime geometries like wormholes or warp fields could be explained in this way; an object could pass through a wormhole and show up in a distant part of space because the distribution of mass and energy that creates the wormhole produces a probability distribution that means the object is most likely to be somewhere else in space. Which is analogous to the quantum tunneling that results because part of a particle’s probability distribution extends to the other side of a potential barrier. And for that matter, it has often been conjectured that quantum entanglement between correlated particles could be caused by microscopic wormholes linking them. Maybe it’s the other way around: wormholes are just quantum tunneling effects.

One other thought I’ve had that has a science-fictional impact: if gravitation is a “correlational quantum field” that helps the most probable state propagate out through the universe, that might argue against the Many-Worlds Interpretation of quantum decoherence. After all, gravity is kind of universal in its effect, and the correlation it creates produces what we see as classical physics, a singular state. It could be that coherent superpositions would only happen on very small, microscopic scales, and quantum Darwinism and gravitational correlation would cause a single consensus state to dominate on a larger scale. So instead of the whole macroscopic realm splitting into multiple reality-states (timelines), it could be that such splitting is only possible on the very small scale, and maybe the simmering of microscale alternate realities is what we observe as the quantum foam. It could be that the MWI is a consequence of an incomplete quantum theory that doesn’t include gravity, and once you fold in gravity as a correlating effect, it imposes a single quantum reality on the macroscopic universe.

Which would be kind of a bummer from an SF perspective, since alternate realities are useful story concepts. I’d just about come around to believing that at least some alternate realities might be stable enough to spread macroscopically, as I explained in my quantum Darwinism essay linked above. Now, I’m not so sure. The “background noise” effect of gravity might swamp any stable superpositions before they could spread macroscopically and create divergent timelines.

However, these thoughts might be applicable to future writings in my Hub universe (and as I’ve discussed before, I’ve already given up on the idea of trying to reconcile that with my other universes as alternate timelines). The Hub is a point at the center of mass of the greater galaxy — i.e. the system that includes the Milky Way proper, its satellite galaxies, and its dark-matter halo — that allows instantaneous travel to any point within that halo. I hadn’t really worked out how it did so, but maybe this quantum-gravity idea provides an answer. If gravity is quantum correlation, and all particles’ probability distributions tend toward the center of mass, then maybe the center of mass is the one point that allows quantum tunneling to the position of every other particle. Or something like that. It also provides some insight into the key McGuffin of the series, the fact that nobody can predict the relationship between Hub vectors (the angle and velocity at which the Hub is entered) and arrival destinations, meaning that finding new destinations must be a matter of trial and error. If the Hub works through quantum gravity and correlation with all the masses within the halo, then predicting vectors would require a complete, exact measurement of the quantum state of every particle within the halo, and that would be prohibitively difficult. It’s analogous to how quantum theory says that every event in the universe is already part of its wave function, but we can’t perfectly predict the future because we’d need to know the entire wave function, the behavior of every single particle, and that would take an eternity to measure. So it’s something that’s theoretically deterministic but functionally impossible to determine. The same could be true of Hub vectors.

Although… we’re only talking about one galaxy’s worth of particles, which is a tiny fraction of the whole universe. So maybe it’s not completely impossible…

Anyway, those are the musings I’ve had while lying awake in bed over the past couple of early mornings, so maybe they don’t make much sense. But I think they’re interesting.

Realism in space: EUROPA REPORT and GRAVITY (spoiler reviews)

November 12, 2013 11 comments

In the past few days I’ve seen two recent movies that took an unusually realistic approach to portraying spaceflight: Sebastián Cordero’s Europa Report (which I watched on my computer via Netflix) and Alfonso Cuarón’s Gravity (which I watched in the theater). It’s very rare to get two movies in such close succession that make an attempt to portray space realistically, and I hope it’s the beginning of a trend. Although both movies did compromise their realism in different ways.

Europa Report is a “found-footage” movie presented as a documentary about the first crewed expedition to Jupiter’s moon Europa to investigate hints of life. It’s rare among such movies in that not only is the found-footage format well-justified and plausibly presented, but it’s actually thematically important to the film. On the surface, the plot follows the beats of a fairly standard horror movie: characters come to an unfamiliar place, start to suspect there’s something out there in the dark, and fall prey to something unseen one by one. But what’s fascinating about it is that it doesn’t feel like horror, because these characters want to be there, are willing to risk or sacrifice their lives for the sake of knowledge, and see the discovery of something unknown in the dark as a triumph rather than a terror. And that elevates it above the formula it superficially follows. It’s really a nifty work of science fiction in that it celebrates the importance of the scientific process itself, and the value of human exploration in space even when it comes at the cost of human lives.

The depiction of the ship, its flight, the onboard procedures, and the behavior of the astronauts is all handled very believably, with a well-designed and realistic spaceship relying on rotation to create artificial gravity. The actors, including Sharlto Copley, Daniel Wu, Anamaria Marinca, Christian Camargo, House‘s Karolina Wydra, and Mission: Impossible — Ghost Protocol‘s Michael Nyqvist, are effectively naturalistic and nuanced. The film’s low budget means they can only manage a limited number of microgravity or spacewalking shots, but what we get is reasonably believable. I do have some quibbles about procedures, though, like the lack of spacesuit maneuvering units during the spacewalk, and the decision later on Europa to send one crewperson out on the surface alone without backup. Really, a lot of the bad things that happened seemed to be avoidable. But I’m willing to excuse it since this was portrayed as a private space venture and the first of its kind. Now, I’m a big supporter of private enterprise getting into the space business, since history shows that development and settlement of a frontier doesn’t really take off until private enterprise gets involved and starts making a profit from it. And I’m sure that private space ventures in real life take every safety precaution they can. But for the sake of the fiction, it’s plausible that a novice organization might let a few safety procedures slide here and there.

The one thing about the film that really bugged me is one that’s pervasive in film and TV set in space and largely unavoidable: namely, once the crew landed on Europa, they were moving around in what was clearly full Earth gravity. Europa’s gravity is 13.4 percent of Earth’s, a few percent less than the Moon’s gravity, so they should’ve been moving around like the Apollo astronauts. Unfortunately, it seems to be much harder for Hollywood to simulate low gravity than microgravity. I’ve rarely seen it done well, and all too often filmmakers or TV producers are content to assume that all surface gravity is equal. In this case I suppose it’s a forgivable break from reality given the film’s small budget, but it’s the one big disappointment in an otherwise very believable and well-researched portrayal of spaceflight. Still, it’s a minor glitch in a really excellent movie.

Gravity is a very different film, much more about visual spectacle and action. Indeed, I’d read that it definitely needed to be seen in 3D to get the full impact, so I decided to take a chance. See, nearly 30 years ago I had some laser surgery for a melanoma in my left eye, and that left my vision in that eye distorted, on top of my congenitally blurry vision in that eye. So normally my depth perception isn’t all that great, and I tend to be unable to perceive 3D images like those Magic Eye pictures that were a fad not long after my surgery. So I’ve always assumed that I wouldn’t be able to experience 3D movies. But a few years back, I talked to a friend who had similar eye problems, and he said he could occasionally get some sense of depth from a 3D movie. So for this case, I decided to give it a try. And lo and behold, it worked! I could actually perceive depth fairly normally, though mainly just when there was a considerable difference in range, like when something passed really close to the camera, or in the shots of Sandra Bullock receding into the infinite depths of space (which were the key shots where you pretty much need 3D to get the full impact). I’m not sure if someone with normal vision could perceive more than I did, but it worked pretty well, considering that I wasn’t sure if it would work at all. There were occasionally some shots where I got a double image when something bright was against black space, but the double image persisted when I closed one eye, so I think it was a matter of the glasses filtering out the second image imperfectly. Anyway, it’s nice to know I can see 3D movies (and I didn’t get a headache or nausea either), though it costs a few bucks extra, so I’ll probably use this newfound freedom judiciously — for movies where the 3D is really done well and serves a purpose, rather than just capitalizing on a fad or being sloppily tacked on.

Anyway, as for the movie itself, it’s a technical tour de force, one big ongoing special effect that uses remarkably realistic CGI to create the illusion of minutes-long unbroken shots of George Clooney and Sandra Bullock floating in space and interacting seamlessly with each other and their environs. The technical aspects of NASA procedures and equipment and so forth seem to be very realistically handled as well. And best of all, the movie states right up front in the opening text that in space there’s nothing to carry sound, and it sticks by that religiously, never giving in to the temptation to use sound effects in vacuum no matter how cataclysmic things get and how many things crash or blow up. The only sounds we hear when the viewpoint astronauts are in vacuum are those that they could hear over their radios or through the fabric of their suits when they touch something. It’s utterly glorious. Every science-fiction sound designer in Hollywood needs to study this film closely.

The behavior of objects and fluids in microgravity is moderately well-handled too, although I’m not convinced the fire in the ISS would spread as quickly as shown, since fires in microgravity tend to snuff themselves out, with no buoyant convection to carry away combustion products and draw in fresh oxygen. But there were glimpses of what seemed like ruptured gas canisters spewing blue flame, so maybe they were oxygen canisters feeding the fire? I also wasn’t convinced by the scene where Bullock’s character wept and the tears sort of rolled away from her eyes and drifted off. I think surface tension would cause the tears to cling around her eyes unless she brushed them away.

One thing that both films handle quite realistically is the coolness of trained professionals in a crisis. In both Europa Report and Gravity, for the most part the astronauts keep a calm and level tone of voice as they report their crises. In real life, professionals generally don’t get all shouty and dramatic when bad things happen, but they fall back on procedure and training and discipline and rely on those things to see them through. And that’s what we mostly get in both these movies, although Sandra Bullock’s character in Gravity has more panicky moments because she’s not as well-trained as the other astronauts. I’m not sure it’s entirely plausible that they would’ve let her go into space without sufficient training to accustom her to it, but it’s balanced by Clooney’s calm under pressure.

However… all that realism of detail in Gravity masks the fact that the basic premise of the movie requires fudging quite a bit about the physics, dimensions, and probabilities of orbital spaceflight. The crisis begins when the destruction of a satellite sets off a chain reaction that knocks out all the other satellites and creates a huge debris storm that tears apart the space shuttle and later endangers the ISS. Now, yes, true, orbital debris poses a serious risk of impact, but we’re still talking about small bits spread out over a vast volume. In all probability a shuttle or station would be hit by maybe one large piece of debris at most, not this huge oncoming swarm tearing the whole thing to pieces. And the probability of the same thing happening to two structures as a result of the same debris swarm? Much, much tinier. Not to mention that I really, really doubt the fragments as shown could impart enough kinetic energy to these spacecraft to knock them into the kind of spins we see. It’s all very exaggerated for the sake of spectacle. And by the climactic minutes of the film it’s starting to feel a bit repetitive and ridiculous that everything just keeps going so consistently wrong over and over. (The film also simplifies orbital mechanics a great deal, suggesting you can catch up with another orbiting craft just by pointing directly at it and thrusting forward. Since you and it are already moving very fast on curved paths, it’s really not that simple.)

Gravity has a huge edge over Europa Report in its budget and thus its ability to portray microgravity; I wish ER had been able to use this level of technology to simulate Europa’s 0.134g in its surface scenes. But as impressive as Gravity‘s commitment to realism is in some respects, it’s ultimately a far shallower film than ER and cheats the physics in much bigger ways for the sake of contrived action and danger. It’s essentially a big dumb disaster movie disguised with a brilliantly executed veneer of naturalism. Gravity has the style, while Europa Report has the substance.

Now what we need is for someone to put the two together, and we could really be onto something.

Enter the (collagen) matrix

Nearly a year ago, I posted about the minor periodontal surgery I had to deal with the receding gumline on my lower front teeth. I said there might be a second procedure to graft gum tissue from my palate into the receded area (to protect the roots of the front teeth), but I was hoping that wouldn’t be necessary. It soon enough became evident that it would be necessary, but I put it off as long as I could, until finally the periodontist’s office called me last month to schedule the procedure, which was done yesterday.

Turns out that putting it off worked out well, though, because in the interim, the doctor began doing a new version of the procedure which he offered me as an alternative. Instead of taking gum tissue from my upper palate, he could implant an artificial graft, basically just a scaffold of collagen (porcine in origin — I guess from pig hooves or something, but carefully purified and sterilized) that my own fibroblasts would grow into, forming new gum tissue to replace what was lost, with the collagen eventually breaking down and being “resorbed” into my body. So instead of taking existing gum tissue and moving it elsewhere, it’s enabling me to grow new gum tissue where the old tissue was lost.

The high-tech nature of the procedure appealed to me, as did the fact that it would simplify the operation and let me avoid the cutting into my upper palate. But I still took care to ask questions and read the documentation about the graft. There didn’t seem to be any significant drawbacks and there were definite advantages, so I agreed to the new procedure. It wasn’t very pleasant getting Novocaine stuck into my gums (though he used the sonic wand that temporarily numbed my nerves to ease the pain of the needle going in) and having him go in and pull things back and stitch things in, but it was easier than it would’ve been before the new grafts became available.

And now it’s the same drill as last time — ice packs and ibuprofen, soft foods and nothing hot for the first day or so, then no biting with the front teeth for about a month. Last time I found I was able to get by with a pretty normal diet so long as I cut things into small pieces, but I think I’ll still be having fewer sandwiches and more pasta salad for a while.

And within a couple of months or so, as long as I’m careful to avoid putting too much pressure on the area and crushing the collagen matrix so the cells can’t grow into it, I’ll have a nice new intact gumline there. I wish there were other parts of my body I could regenerate like that. Hopefully, by the time I need to, medical science will have made it possible.

Categories: Science, Uncategorized

Just to put things in perspective…

Yesterday was the start of the new year 2012… in the current version of the Gregorian calendar used in much of the world.  In the old Julian calendar, yesterday was December 19, 2011.  In the Hebrew calendar, we’re about three months into the year 5772.  In the Indian civil calendar, we’re about three months from the end of 1933 Saka Era.  In the Islamic calendar, we’re early in the second month of 1433 AH.  To astronomers, yesterday was the Julian day 2455927.  In the traditional Chinese calendar, we’re a few weeks from the end of the year 28 (Year of the Metal Hare) of Cycle 78 (or Cycle 79 depending on whether you date from the beginning or the end of the reign of Emperor Huangdi).

The point is, calendrical divisions are arbitrary human constructs.  The Earth just moves continuously through its rotation and orbit from one moment to the next.  Some people make a huge deal out of the start of a new year in their particular calendar.  That’s fine, but it’s just an excuse for a party, not something that has any objective significance where the universe — or even the entirety of the human race — is concerned.  Worth keeping that in mind just for a sense of proportion.

Now, about this silly Mayan meme that’s going around… I wish we would stop blaming the Maya for this dumb apocalyptic prophecy.  The Maya (a more proper term than “Mayans”) had no apocalyptic tradition of any kind.  What they had was a calendar that worked very well, and that they used to date events in their own lives and up to a few generations in the future.  They probably never gave much thought to the year we call 2012, although they had a couple of written predictions/prophecies about events taking place long after that date.  A lot of the jokes and cartoons you see circulating posit that the “Mayans” actually manufactured calendars going up to this year and then stopped.  That’s not so.  They didn’t bother making calendars more than a few centuries ahead.  But the thing about calendars is, they’re cyclical.  They can easily be extrapolated forward indefinitely.  And the Maya calendar was really cyclical, since that was their view of time, that everything was a series of nested cycles that continued forward indefinitely.

So what happened was this: the actual Maya/Mesoamerican calendar fell into disuse many centuries ago.  More recently, in the mid-20th century, Western researchers reconstructed it and projected it forward into their own era, something the Maya themselves didn’t really bother to do, since why would they need to?  Those modern researchers discovered that one of the longer cycles of the Mesoamerican calendar, the 13th baktun cycle, would come to an end within their lifetimes, in the year 2012.  In 1966, a guy named Michael Coe wrote a book wherein he interpreted that as a Maya prophecy of what he called “Armageddon.”  But of course Armageddon is a Biblical concept, and later scholars have consistently debunked Coe’s interpretation.  There was nothing in Maya writings to suggest they saw the end of the 13th baktun as the end of the world — merely as the end of one cycle and the beginning of the next.  In other words, New Year’s Day writ large, an excuse for a huge party but nothing more.  Coe had imposed his own Western religious traditions onto his interpretation of a foreign culture — an elementary mistake for an archaeologist.  But that didn’t stop other 20th-century Westerners who believed in a looming Apocalypse from latching onto Coe’s mistake and building a whole eschatological cottage industry out of it.

So there is no “Mayan” prediction of the end of the world.  They don’t deserve the blame.  End-times theology is a distinctly Christian belief system and has been for nearly two thousand years.  Many Christians since Biblical times have been convinced the world would end in their lifetimes, and they’ve always looked for “evidence” to justify the latest eschatological forecast.  The Maya calendar just had the bad luck of getting co-opted by that process and misinterpreted to fit it.  So please, let’s stop blaming the Maya for our own Western preoccupations.

Categories: Science

Quantum teleportation: maybe not?

December 18, 2011 3 comments

After my earlier post on quantum teleportation, I’ve been wondering about whether I wanted to include it in my science fiction in some way, but first I wanted to get some handle on whether it was feasible in practice to teleport a macroscopic object rather than just individual particles or a Bose-Einstein condensate in a single coherent quantum state.  Finding detailed discussions online is a bit tricky just using Google, but I found a discussion thread that goes into a fair amount of depth:

“Quantum teleportation of macroscopic objects” at Physics Forums

Granted, BBS threads, even on science forums, aren’t the best way to get information about the subject, but I’m in no mood to try to wade through a bunch of technical papers, and I’m only looking at it from the perspective of a fiction writer, so for now I’m content to let other people do the interpreting for me, though I still have to try to filter out the informed posts from the less informed ones.

It sounds like there may be some fundamental limitations that would prohibit teleporting humans or the like.  Apparently what gets teleported are discrete properties like spin or charge.  Teleporting a continuous variable, like the relative positions of multiple atoms or their momentum, would require infinite amounts of data, so one of the posters says.  Another poster countered that some measurement of continuous states was possible, citing a paper on Arxiv.org, but the first replied that it was a limited, classical-resolution measurement and not precise enough to allow replicating a macroscopic object accurately.  (Kind of like how Star Trek replicators have only “molecular resolution” and not “quantum resolution” so they can recreate nonliving matter but not living beings, because the error rate would be fatally high.)

Then there’s this thread at the Bad Astronomy and Universe Today Forum (which seems to have been started by the same poster under a different username), in which it’s pointed out that a macroscopic object can never be truly isolated from its environment, which again would suggest that the amount of information you’d need to define its state exactly would be effectively unbounded.

And this thread from the same forum (which is definitely by the same poster since it has the same original post as the Physics Forums thread above) clarifies that thermal effects in the body would interfere with getting a precise scan; ideally you’d need to freeze the subject to extremely near absolute zero, which isn’t exactly conducive to survivable teleportation.

Another factor raised in this article from Null Hypothesis: The Journal of Unlikely Science is a simple matter of bandwidth: even if you didn’t need infinite information to transmit continuous states, you’d still need to transmit so much data to replicate a human body that it would take a great deal of time and energy to send — billions of years at our highest current transmission rate.  And if you could get a much higher transmission rate, according to the link in the previous paragraph, you’d need to send such intense energy that it would become unfeasible — you’d basically be firing a very powerful beam of gamma radiation at the receiving station, and that’s more a death ray than a transporter beam.  At the very least, in most instances it would take less time and energy just to travel physically than to send a teleport signal.
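
Just to see why the numbers get so absurd, here’s a crude order-of-magnitude sketch. The atom count is the commonly cited rough figure for a human body; the bits-per-atom and the link speed are placeholder assumptions of mine, not anything from the article.

    # Crude order-of-magnitude check on the bandwidth problem (assumptions mine, not the article's).
    atoms_in_body = 7e27          # rough atom count for a ~70 kg human
    bits_per_atom = 100           # assume ~100 bits to encode species, position, bonding, etc.
    total_bits = atoms_in_body * bits_per_atom

    link_bits_per_second = 1e12   # a generous terabit-per-second classical link
    seconds = total_bits / link_bits_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.1e} years")   # -> ~2.2e+10, i.e. tens of billions of years even at that rate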

So the question this raises for me is: how “exact” do you actually need to get?  It could be feasible using advanced nanofabrication technology to “print out” a human body that’s a good molecular-level match for the original person.  As long as you recreated the DNA and RNA in the cells accurately, you could probably settle for just knowing how many of which type of cell the body had, and where they were located, so you could reduce the amount of data that needed to be sent by using these “generic” substitutions.  You could even improve on the body, say, write out excess fat or burgeoning tumors, or rewrite defunct hair follicles as functioning ones, or add extra muscle, or even make more radical changes.  (See Wil McCarthy’s The Queendom of Sol tetralogy for an illustration of this.)

Aside from matching (or refining) the genetic and epigenetic data, then, the key information you’d need in order to transmit a person with their identity intact would be an accurate brain scan.  Otherwise you’ve just created an identical twin rather than duplicated the original person.  So the question is, just how accurate would it have to be?  As far as science is able to determine, thought and memory are classical-scale processes.  According to this page, which I used as a reference for quantum theory in DTI: Watching the Clock:

 In quantum terms each neuron is an essentially classical object. Consequently quantum noise in the brain is at such a low level that it probably doesn’t often alter, except very rarely, the critical mechanistic behaviour of sufficient neurons to cause a decision to be different than we might otherwise expect. The consensus view amongst experts is that free-will is the consequence of the mechanistic operation of our brains, the firing of neurons, discharging across synapses etc. and fully compatible with the determinism of classical physics.

Sure, there are some theorists who argue that consciousness is based on quantum processes, and you hear a lot of talk about “microtubules” in the neurons operating on a quantum level, but there’s no experimental support yet, and the general consensus is that quantum effects in the brain would decohere well before they reached the scale at which the neurons’ activity occurs.  So it might be possible to faithfully duplicate the entire mental state of a human brain at classical-level accuracy, in which case the classical-resolution measurement described in the arXiv paper mentioned above might be applicable.

So the key issue that remains is the one that was the focus of my previous post: Is there continuity of consciousness between the original and the duplicate?  What I reasoned there is that what creates our perception of ourselves as continuous beings is the ongoing interaction, and thus the quantum entanglement/correlation, among the particles of our brains.  The specific particles may be expended and replaced, but the correlations within the entire overall structure give us our sense of continuity.  So if the original subject and the teleported replica are quantum-entangled, that would make them the same continuous entity on a fundamental level even if separated in space and time.  The question is, would that same principle apply even if the entanglement were between the original and a body that was not an exact quantum duplicate?  I.e. if you used classical-level fabrication to synthesize a duplicate of a person and only quantum-teleported partial information about the state of the brain?  You’d synthesize a brain and body that were almost exact replicas, and then transmit enough quantum data about the brain to essentially cancel out the discrepancies and make it effectively the same brain, with the entanglement providing the continuity.  Thus you have a replica of the body but preserve a single continuous consciousness.

So the original body would not need to be scanned to destruction, but the brain would.  Remember, teleporting quantum state information requires changing the original state.  You’d essentially be teleporting just the brain/mind into a new, possibly modified body, and leaving the old body behind as a corpse with a destroyed brain.  Ickier than the ideal situation.  But at least it precludes the possibility of creating a viable “transporter duplicate.”

But the question is, how much “cheating” can you get away with?  How small a percentage of the information defining you needs to be quantum-teleported rather than classically copied in order to ensure that your consciousness survives intact?  How could science measure the difference between a synthesized replica that thinks it’s you and one that actually contains your original consciousness?  How much entanglement, how much equivalence, is enough for continuity?  Even if we assume the teleportation of the brain states alone is enough to make it the same brain, we run into the mind/body problem: the two are more linked than we have traditionally tended to think, and it may be premature to define consciousness as something that resides solely in the brain.  The entire nervous and hormonal systems may play a role in it too.  Still, if you were to have your legs amputated and replaced with prosthetics, that wouldn’t destroy your consciousness.  So maybe teleporting just the brain states is enough.

But then there’s a simple mathematical question: does that really reduce the amount of data by a significant amount?  The mass of the brain is about 2 percent of the total mass of the body, so that only cuts the amount of data by a factor of about fifty, which is less than two orders of magnitude.  So it would take 2 billion years to transmit instead of 100 billion, say.  To make it feasible, you’d have to “compress” the data still further — and we’d need a much deeper understanding of how the brain works before we could estimate how little of its structure we could get away with teleporting at a quantum level versus substituting with “generic” cellular/structural equivalents.  (Of course it’s a total myth that “we use only 10% of our brains”; fMRI scans show conclusively that we make use of just about all the brain’s volume over the course of a day.  But on a cellular level, a lot of that may be underlying substructure that could be “generically” replicated.  Or maybe not.  I don’t know enough about neurology to be sure.)  Even so, I doubt the threshold percentage would be low enough to reduce the amount of data by even one order of magnitude, let alone many.
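
Sticking with the same illustrative numbers as the sketch above, the brain-only savings works out like this:

```python
import math

# Same rough illustration as before -- assumed figures, not measurements.
whole_body_years = 1e11       # the ~100-billion-year whole-body transmission estimate
brain_mass_fraction = 0.02    # brain is roughly 2% of body mass

brain_only_years = whole_body_years * brain_mass_fraction
saving_in_orders = math.log10(1 / brain_mass_fraction)

print(f"about {brain_only_years:.0e} years, a saving of {saving_in_orders:.1f} orders of magnitude")
# ~2e9 years, a saving of about 1.7 orders of magnitude -- still hopeless without more compression.
```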

And there’s still the thermal problem.  There’s a lot of molecular motion in the brain, not just from its temperature but from the constant chemical exchange among neurons.  You might not be able to get a detailed quantum scan of a living, active brain as opposed to a deep-frozen corpse, and I don’t have enough confidence in cryonics to believe a person could be frozen to near absolute zero and then revived.

Still… depending on what fictional universe I’m in and how much I’m willing to bend the rules, I might be willing to fudge things enough to include quantum teleportation if I have a good enough story reason for it, using the ideas discussed above to make it relatively more plausible.  Maybe there are ways to transmit data at far higher rates than we can now conceive, and with less energy expenditure.  And come to think of it, requiring that a subject be frozen solid before teleportation adds an interesting twist, though it would rule out teleportation as the kind of routine commute it is in Star Trek or Niven’s Known Space.

Hints of Higgs? Maybe…

The big science news today is the announcement of the latest results from the Large Hadron Collider’s search for the elusive Higgs boson, and while the results are far from conclusive so far, they’re actually mildly encouraging.  Two independent detectors got pretty much consistent results suggesting the possibility of a particle with a mass somewhere around 125 GeV.  (That’s giga-electron-volts — since E=mc^2, physicists measure particle mass in units of energy.)
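
For anyone who wants that in everyday units: a mass quoted as 125 GeV really means 125 GeV/c^2, and converting it with standard constants gives something like the following (the proton comparison is my own aside, not from the coverage):

```python
# Converting a particle mass quoted in GeV to kilograms, using standard constants.
eV_in_joules = 1.602e-19      # one electron volt in joules
c = 3.0e8                     # speed of light in m/s
proton_mass_kg = 1.67e-27

higgs_energy_joules = 125e9 * eV_in_joules
higgs_mass_kg = higgs_energy_joules / c**2

print(f"{higgs_mass_kg:.2e} kg, about {higgs_mass_kg / proton_mass_kg:.0f} proton masses")
# Roughly 2.2e-25 kg, i.e. on the order of 130 times the mass of a proton.
```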

Here’s the New York Times piece on the news, including links to the raw data published on a site called TWiki (whose motto is probably not “bidi-bidi-bidi”).  Here’s a more detailed article from New Scientist.  However, the most useful link I’ve come across is this Higgs FAQ from the blog of particle physicist Matt Strassler.  I’d never quite understood what all this Higgs field/particle business was about until recently, but I’m starting to get a handle on it now.  The FAQ does a good job of explaining the Higgs field and its role, and why not finding the Higgs particle would be just as intriguing and useful a result as finding it.  (Because the particle isn’t the key; it’s just the simplest and only known way of detecting the Higgs field, which is the thing that’s actually important.  And if there were no particle, it would just mean the field is different from what the simplest model suggests, or that it works in a different way, not that it doesn’t exist.)

So nothing conclusive today, but interesting hints that will be pursued further.  Apparently they’ll be able to confirm or rule out this evidence by next summer, and if it doesn’t pan out, they’ll try something else when the LHC reaches full power in 2015.


Thinking about other universes (or, the trouble with infinities)

December 8, 2011 8 comments

I’ve been mulling over another subject that was suggested by the recent NOVA miniseries “The Fabric of the Cosmos,” hosted by physicist Brian Greene based on his book of the same name.  I felt some of the ideas it put across were too fanciful, putting sensationalism over plausibility or clarity, and one of them was the topic of its concluding episode, “Universe or Multiverse?”

The premise of that episode was that, if the Big Bang happened as the result of localized symmetry-breaking in an ever-inflating realm of spacetime, then our universe could be just one “bubble” in a perpetually expanding cosmic foam, with other universes being separate “bubbles” with their own distinct physics and conditions, forever out of reach because the space (how many dimensions?) between us and them is forever expanding.  Now, that’s okay as far as it goes.  It’s a somewhat plausible, if untestable, notion given what we currently know.  But what Greene chose to focus on was a rather outré ramification of this: the idea that if the multiverse is infinite, if there’s an infinite number of other universes alongside ours, then probability demands that some of them will be exact duplicates of our universe, just happening by random chance to have the exact same combination of particles and thus producing the same galaxies, stars, planets, species, individuals, etc. — kinda like how the famous infinite number of monkeys banging on an infinite number of typewriters will inevitably produce all great literature by chance.  Thus, so the claim went, there could be other universes out there that are essentially parallels to our own with duplicates of ourselves, except maybe for some minor variations.  (Or maybe universes where duplicate Earths and humans exist in different galaxies, or where a duplicate Milky Way coexists with a different configuration of galaxies, or all of the above.)

Note that this is entirely different from the concept of parallel timelines, the usual way of generating alternate Earths in science fiction.  Parallel timelines aren’t separate universes, despite the erroneous tendency of SF to use the terms interchangeably.  They’re coexisting quantum states of our own universe.  The idea is that just as a single particle can exist in two or more quantum states at the same time, so can the entire universe.  These alternate histories would branch off from a common origin, and thus it’s perfectly reasonable that they’d have their own Earths and human beings and the same individuals, at least if they diverged after those individuals were born.  And there’s at least the remote possibility of communication or travel between them if nonlinear quantum mechanics could exist.  What we’re talking about here is something else altogether, literal other universes that just happen by random chance to duplicate ours because it’s inevitable if there’s an infinite number of universes.  While parallel timelines would be facets of the same physical universe we occupy, and would thus essentially be overlapping each other in the same place, these duplicate universes would be unreachably far away, except maybe by some kind of FTL or wormhole technology if such a thing could ever exist.  And they might predate or postdate our own universe by billions of years.

But I think it was a flawed conceit to dwell on that aspect of the multiverse idea, and I have my problems with the reasoning employed.  For one thing, it’s purely an ad hoc assumption that the multiverse is infinite rather than finite.  If it’s finite, then there’s no guarantee that there would be other universes that exactly duplicate ours.  Certainly there could be ones with compatible physical laws, with their own stars and galaxies and planets and life forms, but odds are they’d be different planets, different species, different individuals.  No duplicate Earth, no duplicate Lincoln or Kennedy or Jet Li.

And if the multiverse is infinite, then sure, you could argue that with an infinite number of tries, it’s inevitable that our universe would be exactly duplicated somewhere.  But the flip side to that argument is that if there’s an infinite number of universes, then the odds that any given universe would duplicate ours would be some finite number divided by infinity, or effectively zero.  In practical terms, if we found a way to visit other universes via wormholes or something, then we could search for an infinite amount of time before finding one that had its own Earth and human race and history duplicating ours except for having more goatees or whatever.  Thus, by any realistic standard, such duplicates would be effectively nonexistent.  (This is the problem with infinity as a concept in science — it tends to lead to absurdities and singularities.  Physicists generally try to avoid infinities.)  So while that result (the existence of duplicate universes) might be a logically sound consequence of the premise of an infinite multiverse, it’s also a trivial result, one that has no practical meaning and can’t be proven or falsified.  So it’s not science, just sophistry.  It’s angels dancing on the head of a pin.  And that makes it a waste of time to focus on in a program that’s supposed to be about science.

Besides, it’s boring.  The show presented us with the prospect that there could be an infinite number of possible forms for universes to take, whole other sets of physical laws, an unlimited range of possibilities… and all they wanted to talk about was duplicates of the world we already know?  What a staggering failure of imagination — or what a staggering triumph of self-absorption.  I would’ve been far more interested in hearing about the endless variety of universes that weren’t just like ours.  Why not dazzle the viewers with some discussion about what physics would be like in a universe with more than three spatial dimensions?  Or one with a higher or lower speed of light?  That would’ve been so much cooler and more enlightening than the silly, dumbed-down examples they gave, like Earth with a ring around it or Brian Greene with four arms.

I suppose the one appeal of the infinite-monkeys premise is metafictional: You can use it to argue that if every remotely possible combination or interaction of particles is inevitable, then every fictional universe really happens somewhere.  So, for instance, I could claim that my various fictional universes — my default/Only Superhuman universe, the Hub universe, the “No Dominion” universe, whatever else I might eventually get published — all coexist in the greater multiverse, and their different physical rules, different principles of FTL and whatever, could be explained by subtle variations in the laws of physics of their distinct universes (and yet somehow don’t prevent the fundamental interactions, dark energy, and so forth from having the exact same values so that stars and planets and life can form the same way).  And it’s handy for fans who want to believe that, say, a crossover between Star Trek and Transformers, or Star Wars and Firefly, or whatever might be possible despite the huge differences in those universes’ histories and physics.  But I’m not sure I find it desirable.  To me, if there’s some planet in some unreachably distant universe that exactly duplicates Earth’s evolution and history, and has a duplicate of myself who’s writing this post at this equivalent point in his Earth’s orbit (which might be billions of years in the past or future relative to my “now,” if such a thing could even be meaningfully measured), I wouldn’t really think of him as me, or his Earth as being my Earth.  So it wouldn’t really feel to me that those other fictional universes connected to my world’s history, and that would make them less meaningful.

Or would it?  I mean, going in, I already know these fictional universes don’t have the same physical laws as our universe, that the specific characters or alien races or whatever that exist in them don’t exist in our world.  So I know from the start that they’re already separate realities from my own.  Their versions of Earth and its history may correspond almost exactly to ours, yet they’re still separate entities.  So maybe it’s no worse to think of my various written worlds (blog name drop!) as coexisting realms in an infinite multiverse than it is to think of them simply as independent fictional constructs.

And sure, sometimes I think it would be nice to have some sort of grand unified theory linking my universes together.  I already tend to think of “No Dominion” as being in a parallel quantum timeline of my Default universe, because it has no visible discrepancies in physics or cosmology and has a lot of similar technological and social developments; it’s just that some technologies develop decades too early to be compatible with my published or soon-to-be-published Default-universe fiction.  That won’t work for something like the Hub, though, since it has distinct differences in physical law.  And yeah, I admit I’ve tried to think of a way to fit my universes together into a unified multiverse, at least in passing.  I suppose the “infinite monkeys” idea could give me a means to do that.

But I don’t think I find it appealing, because it just multiplies the variables to such an insane degree.  If these universes are just infinitely separated samples of an infinitely expanding metacosmos, then that doesn’t really unify them in any way, does it?  They’re so far apart, so mutually unreachable, that the “connection” doesn’t really count as a connection at all.  (After all, given the underlying physical premise, there’s no realistic chance of any kind of wormhole link or inter-universe crossover anyway.)  It’s a trivial and useless result fictionally for the same reasons it is physically.  And if they’re specks in an infinite sea of universes, it makes them all feel kind of irrelevant anyway.  So why even bother?  It’s simpler just to treat them as distinct fictional constructs and not bother trying to unify them.  Besides, even if I know intellectually that the humanity and Earth and Milky Way of my fictional universes aren’t the same as my own, it’s more satisfying to pretend they are, to construct a satisfying illusion for the readers that they’re reading about an outgrowth of our own reality, than to pretend that they’re some totally separate duplicates in universes unreachably distant from ours.  No point going out of my way to create a premise that alienates me and my audience from the universes they’re reading about.  Granted, judging from some conversations I’ve had in the past, there are some people out there who wouldn’t have a problem with that.  But it doesn’t really work for me.
