
Pondering Fermi

The Fermi Paradox -- if there's other intelligent life in the galaxy, given how long the galaxy's been here, how come we haven't seen any indication of it? -- is an important puzzle for those of us who like to think ahead. Setting aside the mystical (we're all that was created by a higher being) and the fundamentally unprovable (we're all living in a simulation), we're left with two unpalatable options: we're the first intelligent species to arise; or no civilization ever makes it long enough. The first is unpalatable because it suggests that our understanding of the biochemical and physical processes underlying the development of life has a massive gap, since all signs point to the emergence of organic life under appropriate conditions being readily replicable. The second is unpalatable for a more personal reason: if no civilization ever survives long enough to head out into the stars, what makes us think we'd be special?

But I think there might be a third option.

(Warning: the rest hidden in the extended entry due to extreme geekitude.)


My colleague at the IEET Nick Bostrom offers a provocative version of the consequences of the Fermi Paradox in the latest Technology Review. In "Where Are They?" Nick (the director of the Future of Humanity Institute at Oxford University) suggests the existence of a metaphorical "Great Filter," some phenomenon (or set of phenomena) beyond which it's nearly impossible to pass. If the Great Filter is in the past, such as a biochemical hurdle making the emergence of complex life wildly improbable, then we may have a grand future ahead of us. If the Great Filter is still to come, conversely, we're likely doomed. For this reason, Nick hopes that we don't find signs of life elsewhere in the solar system.

It's not hard to imagine what a future "Great Filter" might be -- the list of potential sources of extinction is diverse. It could easily be a natural event, such as a global plague or a massive asteroid strike; perhaps more likely, it could be a human-caused event, such as catastrophic environmental collapse or global war with ultra-high-tech weapons, wiping us out past recovery. Either way, it's a depressing end, but (in this scenario) a common one.

But I suspect that the "where are they?" query has a serious flaw: it makes assumptions about the behavior of an interstellar-capable culture based on what we, a pre-interstellar society, might do. Take this bit from Bostrom's article, about self-replicating "Von Neumann machine" probes:

If a probe were capable of traveling at one-tenth the speed of light, every planet in the galaxy could thus be colonized within a couple of million years (allowing some time for each probe that lands on a resource site to set up the necessary infrastructure and produce daughter probes).

A clear argument, and one would be forgiven for missing the key word in that sentence: colonized.
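Bostrom's "couple of million years" timing is easy to check with back-of-the-envelope numbers. The figures below (galactic diameter, hop distance, per-site setup time) are illustrative assumptions of mine, not taken from his article:

```python
# Rough check of the "couple of million years" colonization figure.
# All constants here are illustrative assumptions, not Bostrom's numbers.

GALAXY_DIAMETER_LY = 100_000   # assumed galactic diameter, in light-years
PROBE_SPEED_C = 0.1            # probe speed as a fraction of lightspeed
HOP_LY = 50                    # assumed distance between resource sites
SETUP_YEARS = 500              # assumed infrastructure build time per site

travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # pure flight time
hops = GALAXY_DIAMETER_LY / HOP_LY                  # replication stops en route
setup_years = hops * SETUP_YEARS                    # cumulative build-out time

total_years = travel_years + setup_years
print(f"{total_years:,.0f} years")  # 2,000,000 years
```

Even doubling or halving the assumed hop distance and setup time keeps the answer in the low millions of years -- a blink, galactically speaking.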

The Singularity is Near? The Singularity is Calling from INSIDE THE HOUSE!

It's a reasonable assumption that a civilization capable of building self-replicating probes that can travel at 10% of the speed of light (or even 1%) would be well past the point of developing machines able to behave as sentient beings. Throw molecular nanotechnology into the mix, along with an unthinkably more advanced science's understanding of how the local equivalent of a brain works, and it's clear that an interstellar-capable civilization must also be a post-Singularity civilization, no matter how narrowly, broadly, or dismissively one defines the concept.

So why would members of a culture so advanced want to deal with colonization? It's a very human/biological concept, not one that would readily apply to a post-biological civilization. Colonization would be important if you were spreading people, but not intelligent Von Neumann machines. Gravity wells take energy to get out of. Planets can have their own replicators to deal with (organic or otherwise). A static position makes you a sitting duck for natural disasters. All of those could be dealt with, but why bother, when there's so much more out there?

An Oort cloud, the shell of comets that surrounds a solar system at the outer reaches of its star's influence, would be much more appealing -- lots of fun molecules to work with, in abundance. Even the Kuiper belt, the ring of rocks and asteroids and occasional dwarf planets at the extreme edge of a solar system, would be more interesting in terms of readily-accessible masses of material. Getting solar power is a non-issue, as an interstellar-capable civilization able to spread at 1% or 10% of the speed of light clearly has access to much more significant (and more readily portable) sources of energy.

To be clear, this isn't an argument that these interstellar-capable civs just sit at home. They could and would likely spread, and certainly explore. But the notion that they'd hop from solar system to solar system planting colonies strikes me as terribly unimaginative, and definitely a pre-Singularity perspective.

The core of the Singularity argument is that those of us on the "left of the boom" side of one simply can't understand what life is like on the "right of the boom." The demands and concerns and requirements of a post-Singularity civilization wouldn't be based on a pre-Singularity pattern. That would apply to choices made for interstellar spread, too.

Interstellar Risk

This is, to me, an arguable possibility as to why we haven't encountered extraterrestrial intelligence. It's not dead certain, however -- there could still be an interstellar culture that managed to avoid a Singularity, or one that still opted for colonization (or to turn every bit of non-stellar mass into computronium). But those scenarios have their own complexities, mostly revolving around the speed of light, evolution, and politics(!).

As far as we can tell, the speed of light is an absolute limit. As a result, the further out a civilization spreads from its original home, the greater the time required for the edges to speak to/trade with/learn from the center, or each other. After a few thousand light years (if not well before), the edges would be so disconnected that they'd effectively be in isolation.

What we know about groups in isolation, from both biological and sociological evolutionary models, is that they diverge. Various local conditions and particular histories set these groups along novel pathways. There's no reason why these patterns wouldn't also apply to interstellar spread. What would these variations look like? Who knows? But one thing we know about this imagined interstellar species is that it has a strong drive to spread and colonize.

So there you have a diverse (and diversifying) set of (sub-) cultures/species, all interested in spreading and colonization. Looking out into the deep dark, they'd see untouched systems they'd have to fiddle with for a few centuries/millennia to get set up right; looking back toward home, they'd see lots of systems already set up to be perfectly suited to this particular original species, or at worst easily modified. For some of these diverging groups, turning back on the settled worlds would be far easier than pioneering new ones. Not all of them have to turn on the interior to disrupt the entire endeavor: some become victims, some become defenders, and some -- possibly many -- try to keep a very low profile, not wanting to become the next victim. After digesting the "old worlds," the super-colonizing culture might start to move out again, setting off another cycle. Eventually, they'd figure out that there's a limit to how far a civilization can spread before it falls apart.

One might argue that this is simply taking human history (clearly pre-Singularity) and trying to apply it to a post-Singularity culture. One would be wrong -- I'm taking a pattern repeated in evolution, the flip side of a species spreading across an environment. That it happens in politics as well as biology simply points to its universality.


There's one last flaw to the "where are they?" argument: it assumes that we could see them if they were there. I don't mean anything magical, just that we may not be looking in the right place, signal-wise. Advanced extraterrestrial civilizations could be using an entirely new medium for communication, one that we don't now see as possible, having made only a brief stop at radio along the way. That's possible, although given that it's dependent upon something we don't now know about, it's really just special pleading.

A more tangible problem is that SETI and its related efforts can really only detect high-powered beacons. "Radio fossils," the signal leaks from radio-capable civilizations, are far too weak for us to detect right now. Even our largest radio receivers are nowhere close to being able to pick up alien TV signals -- one estimate holds that we'd need a current-technology receiver larger than the diameter of the Earth to pick up UHF television signals from the nearest star system, Alpha Centauri. And once you add spread-spectrum and encryption technology, even a strong signal would likely look like noise.
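The inverse-square falloff behind estimates like that is easy to sketch. The 1 MW effective transmitter power below is an illustrative assumption of mine, not the figure behind the Earth-diameter-receiver estimate:

```python
import math

def flux(power_w: float, distance_m: float) -> float:
    """Received power flux (W/m^2) from an omnidirectional source:
    the power spreads over a sphere of area 4*pi*d^2 (inverse-square law)."""
    return power_w / (4 * math.pi * distance_m ** 2)

LY_M = 9.461e15                   # metres per light-year
d_alpha_cen = 4.37 * LY_M         # distance to Alpha Centauri

# Assume a 1 MW effective radiated power for a strong UHF TV station
# (illustrative only).
tv_flux = flux(1e6, d_alpha_cen)
print(f"{tv_flux:.1e} W/m^2")     # ~4.7e-29 W/m^2
```

Since a dish collects power in proportion to its area, recovering a usable signal from a flux that faint is what drives the required receiver diameter into the absurd.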

To sum up:

* Current SETI couldn't detect the kinds of signals we're putting out, so may be missing abundant radio fossil traffic.
* We have no way of knowing if a post-radio communication method is in use.
* An interstellar-capable civilization would certainly be post-Singularity, and therefore have very different needs and motives for expansion.
* Interstellar-capable civilizations that somehow remain wedded to colonization would inevitably fall into internal conflict because of speed-of-light communication/travel lag and divergent evolution (social or biological).

All of this is to argue that just because we don't see them doesn't mean (a) they're not out there, or (b) we're doomed. Whew.


Since we're throwing singularities, computronium and other commonplace objects around, here's another possibility: the universe is a nursery that advanced civilisations opt to leave behind.

In this SEED article, Geoffrey Miller suggests that The Great Filter is . . .

. . . Civilization.

The game, that is. Along with other involving simulations that give you the satisfaction of achievement without actually accomplishing anything.


Another geeky option is that they are already here at the micro or smaller scale. SETI is looking in the wrong direction ;-)

Or building on the eternal conflict thread. Life could be a by-product of the struggle. Or maybe just artwork...

Another possibility: intelligent life is abundant in the universe, but none significantly more intelligent than us. There could be millions of civilizations, many winking out all the time, none yet past any "Singularity."

My guess is that the most likely cause is that intelligent life (and probably any life) is so extraordinarily unlikely to get going that we just don't have any anywhere near us, maybe anywhere in the visible universe.

We have almost no idea what happened to get from hot barren rocks to DNA-based life here, so we have no idea how likely or unlikely it is.

That relies on the anthropic principle to explain why we find ourselves existing at all: in a universe/multiverse where very many sets of rules and very many initial conditions occur and evolve, as long as there is some possible path to human life our existence is explicable. But if it requires something like a 1:10^500 coincidence to get going then we're never going to meet any aliens. (Now, even if that were true, universes exist where that coincidence happened next door on Mars as well as here and we have neighbours we can exchange rocket mail with. But the chance of us being in one of those would be so vanishingly tiny as to be irrelevant.)

But the point about colonization is well-made too and another very plausible explanation, and I don't think we need to resort to Singularity references to understand it. (I think the Singularity is an interesting idea, but I think there's a lot to be said about possible futures involving many of the same elements that do not fit the Singularity stereotype.)

For one thing, the most complicated thing we've found in existence is the brain. Ours contain roughly the same number of neurons as there are stars in our galaxy - on the order of hundreds of billions - and the number of connections between them is comparable to the number of stars in the Virgo supercluster, on the order of hundreds of trillions.

It's true that everyday objects contain even more gigantic numbers of atoms, but the behaviour of atoms at a human scale is fairly predictable if we leave life out of the equation. Atoms make stars, stars burn, stars explode, the junk makes more stars and more planets.

Individually, their behaviour is not "predictable," because their interactions are so chaotic, but we can see that the outcomes of those interactions don't end up with very varied behaviour. We seem to have cataloged many of the things that non-living/non-designed collections of atoms can do at a whole range of scales.

And unlike those between stars and between galaxies, the connections between neurons are extremely fast and close together, and they demonstrably lead to very interesting behaviours like, say, human consciousness. And perhaps a lot of other interesting things too that we're not really aware of yet. So there's probably plenty of room to explore down here in our own brains (or each other's) without needing to rush off to other star systems to find something novel (and likely be disappointed).

And honestly even if we don't disappear into our own brains, I really can't see Malthusian-type population expansion ever happening again once we stabilize this century. Rich people barely make enough babies to replace themselves. So the urge to colonize probably cuts off right there.

Another take is that it's about taste: nobody is so crass as to actually convert the entire universe into computronium, for the same reason we don't chop down the entire Amazon rainforest tomorrow and turn it into lawn chairs: we don't need all those lawn chairs, and it would destroy something unique, complex, and largely unexplored. We may have enough material in our own planet to make as much computronium as we can figure out what to do with.

One more possibility is just that there's no need to colonize for "raw materials" or "living space"; we might find a way to create those things out of nothing, in which case vandalizing even just this one galaxy would be pretty crass when we could have just made as much space & dumb rock as we wanted out of nothingness. Once we figured out such a trick, only a philistine would suggest eating the galaxy just because we can...

Hmmm. The main thing to worry about in this discussion is assuming some sort of uniformity in ETs. If they exist, they would likely be very, very diverse, and have all sorts of diverse motivations and interests.

We should expect not just variation between civilizations, but also significant variation within civilizations. A post-singularity civilization may have entities with smarts ranging from hyper-intelligences right down to dumb viruses. And these entities should vary in many more dimensions than just intelligence.

Given all this room for variation, even a small subset of aliens would probably do something we'd recognize as "colonizing" the galaxy. Another subset would probably do something flamboyant with matter and energy that we could see (see this short paper).

Since we haven't seen them, and haven't seen anything at all besides dumb energy and matter, I still think the Fermi Paradox is a cause for pause.

But yes, it's way too early to say that this issue is resolved.

Thanks for these great comments, folks.

Tony, the transcension argument puts a nice spin on it, but isn't it essentially the same as a "great filter" argument, except here the filter is attractive instead of destructive?

Stefan, that's all too plausible.

John, the idea that ETIs are here, but operating at a micro/nano-scale, fits nicely with the notion that uber-tech aliens, masters of nano, are more interested in exploration than colonization.

David, I've considered that one, too -- essentially, that the conditions in the galaxy, from the percentage of metals in planets to the decline in gamma-ray-burster activity, have only been amenable to the development of complex life for the past few hundred million years, and that, simply put, nobody has gotten to the point of being able to do full interstellar activity yet.

Jacob, you're right that the details of abiogenesis remain unknown, but the mechanisms of how something like that could have started are (supposedly) relatively well-understood. By all signs from our single data point, once started, life proved to be extremely resilient. I'd be really surprised if it turned out that successful life emergence was so wildly improbable that we were the first ones around.

As for the Singularity aspect, I'll post a follow-up.

I do hope your taste argument is right, though.

Eric, I do agree that ETIs would have diverse and potentially unknowable motivations, but I hope that I demonstrated that the combination of time scales, isolation, and divergent evolution (cultural and biological/post-biological) would make full-blown colonial spread a tenuous proposition.

As for doing something flamboyant with matter and energy, I can think of a couple of reasons why we may not see that happening. The first is that we might be seeing it, but not recognizing it (and having to come up with elaborate explanations to describe it as a natural phenomenon) -- not likely, but not impossible. The second, more likely one, is that flamboyant displays get you killed by aggressive ETIs looking to eliminate competitors (and who might even be your own progeny).

I posted this on The Well a long time ago, but it is germane to this discussion. The Fermi paradox is actually not so incredible when you consider the actual signal strengths involved.

Astronomy FAQ for signal strength vs. receiver size

I love statements like:

(4) A well-designed 12 ft diameter amateur radio telescope could detect narrowband signals from 1 to 100 light-years distance assuming the transmitting power of the transmitter is in the terawatt range.

Examples of power in the TW range (from wikipedia):

  • 1.7 TW - Geo: average electrical power consumption of the world in 2001
  • 2 TW - Astro: Approximate power generated between the surfaces of Jupiter and its moon Io due to Jupiter's tremendous magnetic field.
  • 3.34 TW - Geo: average total (gas, electricity, etc) power consumption of the U.S. in 2005
  • 15 TW - Geo: average total power consumption of the human world in 2004
  • 44 TW - Geo: average total heat flux from earth's interior

In other words, astronomers may have a shot at detecting ET Armageddon. They sum it up:

It should be apparent then from these results that the detection of AM radio, FM radio, or TV pictures much beyond the orbit of Pluto will be extremely difficult even for an Arecibo-like 305 meter diameter radio telescope! Even a 3000 meter diameter radio telescope could not detect the "I Love Lucy" TV show (re-runs) at a distance of 0.01 Light-Years!

I'm wondering what effect a terawatt-range microwave transmitter would have on life on earth. I am thinking: global warming would no longer be an issue.

And note that the signal strength numbers in the FAQ are for narrowband emissions.

The Fermi Paradox ceased to surprise me once I really started looking into it more.

Link got eaten, sorry about that. I found it at:


But I suspect the link you have in the blog post body is similar anyway.

just throwing these out...

comment regarding..
>> As far as we can tell, the speed of light is an absolute limit...

inflation after the big bang supposedly operated faster than light, before the four forces separated.
See History Channel The Universe S01E14 - Beyond the Big Bang

Then there are the supposed tachyons to consider...

also late last year there were reports of experiments where light appears to go faster than light.

also while you're at it, explain why quantum-entangled subatomic particles work over large distances within the context of saying the speed of light is an unbreakable absolute limit? Einstein's spooky effect..

And, not to claim undue credit, the above post is a couple of my responses to a post where someone else shared that link.

Jamais, I wish I could be as optimistic as you appear. What bothers me with your reasoning is that every advanced ETI would have to choose, without exception, to remain invisible. That assumption seems far more shaky than does the likelihood of a Great Filter (or, as I call it, a Cosmic Roadblock).

See more here.

Howard is right about the signal power, but a civilization that could build self-replicating robots capable of colonizing the entire galaxy would presumably already have robots here that ought to be waking up and saying hello, a la 2001, even if it'll take a while to get messages back and forth from the general net. But if that's not the case, a civilization at our tech level couldn't do any kind of interstellar communication at significant distances, and there probably won't be broadcast messages we can pick up. And nobody further than about 150 light years can even have found out about our technological revolution.

Jamais, life is resilient once it's going, but that doesn't say anything about the difficulty of getting it started. My car is pretty resilient too, but that doesn't mean it spontaneously self-assembled. So, even though the first Darwinian replicators were probably much simpler than any life we see now, they still might have required an extremely unlikely (but physically possible) chance configuration of common chemicals to get started. The anthropic principle just needs three things to hold. A) The universe is infinite or "very large" in extent (i.e., much larger than the improbability of the right chemicals happening to wind up in contact to replicate) -- which seems likely to be true. B) It contains varied starting conditions (so that it's not just a repeat of the same stuff over and over, or completely uniform outside our visible bubble) -- which also seems likely: even though the constituents of matter come from a pretty limited set, and the ways they naturally group up tend toward common things like stars, planets, and galaxies, we still see considerable variety in the arrangement of those elements at even the largest scales. C) It has the potential to contain life -- which I think we can take as fact. If those hold, then even though something on the critical path to life is extraordinarily unlikely, it is not ruled out as an explanation if the data support it and no simpler alternative with the same explanatory power is offered.

Of course if something comes along to contradict one of those points - like life being fairly easy to start up, or the universe being finite in extent and fairly small, or that it repeats itself exactly at some scale rather than having a truly variable set of starting conditions - then another explanation becomes more likely. But right now, I think the evidence from attempts to understand primitive life and the Fermi paradox evidence of aloneness make the idea of an extremely unlikely chance configuration quite plausible. The number of possible configurations of the few fairly-simple molecules involved in starting life probably exceeds the number of atoms in the visible universe, and if only a minute percentage of those turn into replicators, and there is no way that they will spontaneously group into the life configurations except by chance, that would fully explain the lack of other life in the visible universe. I find it quite likely that our nearest neighbours might be many times further away than the size of our visible universe. We might meet them one day, but not anytime soon.

This is unlike the case with most phenomena we try to explain, where positing an extremely unlikely chance configuration will pretty much always be wrong, even if no data to support a simpler theory is given. If an elephant - better yet, a sperm whale - falls from the sky in front of us and explodes, the explanation that it spontaneously self-assembled in the sky is simply wrong, even if we have no specific evidence for the alternative explanation that it fell from a plane (or was created by an alien spaceship).

But back to speaking of tech revolutions - another interesting thing is that humans of nearly identical physical form to us have existed for 100k years or so, and lived for perhaps 90k of those without cities, writing, or significant technology, nor much apparent sign of movement towards those things. And even for the 10k or so after cities and writing got going, things stayed pretty much the same for very long periods. So even if intelligent life gets started it demonstrably doesn't always have to experience the kind of self-sustaining science & technology take-off that we've seen in the past few hundred years. Both pre-technological and primitive-technological societies seem to be able to exist in a roughly steady state for a long time. That seems unlikely to us from the modern era, but it really happened and the people doing it were not anatomically different to us, or even sociologically different in most ways. That's the kind of future Dune portrays, although in that world technology is strictly controlled; but I think it's just as likely that social change that eliminated the scientific mindset and disparaged curiosity could lead to a technological stasis without explicit controls.

As to variety within a single civilization - I do buy the idea that some sub-groups might want to colonize the galaxy, but I think it's unlikely that they'd be given free rein by the rest of the society. As an analogy, it would be completely feasible for a relatively small group of committed scientists and technologists to build nuclear bombs and use them to destroy every major city. But no matter what our tolerance for diversity, we wouldn't let them do that, and we would have the technological means to stop them. Similarly a group that wanted to colonize the galaxy or turn it into computronium wouldn't be able to do so even in the most diversity-tolerant civilization. Mind you, I do think that understating the amount of diversity in the future is a big problem in futurism in all its forms.

(I hadn't actually read the linked article until now, but I see he also brought up the origin-of-life part.)

Whether or not we agree, I think it's very interesting that we finally seem to have a framework in the anthropic principle for understanding those things that are "special" about the conditions around us, which helps us distinguish "common" from "unlikely" and those from "impossible"; and that some things are simply "required, no matter how improbable".

At the same time, looking at the Fermi paradox is an extension of relativism to time- and location- and species-independence. That's been very hard for humans to do, but I think we're getting better at it.

The xkcd take on the Drake equation:
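Since the Drake equation has come up: it's nothing more than a product of seven factors. A minimal sketch, with every parameter value purely illustrative (each one is hotly contested):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of currently-detectable
    civilizations in the galaxy, as a product of seven factors."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative values only -- not a claim about the real numbers.
N = drake(R_star=10,   # star formation rate, stars per year
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # habitable planets per such system
          f_l=0.1,     # fraction of those where life arises
          f_i=0.01,    # fraction of those that develop intelligence
          f_c=0.1,     # fraction of those that emit detectable signals
          L=10_000)    # years a civilization remains detectable
print(N)  # about 10 civilizations
```

The point of the joke, and of the whole debate here, is that the last few factors are pure guesswork: tweak f_i or L by an order of magnitude and N swings from "crowded galaxy" to "alone."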

The Seed article makes the good point against my "unlikely life" anthropic theory that life developed quite soon after the surface of the earth cooled enough. Unless there was only a brief window of time when it could have developed, under the anthropic principle you would expect the "unlikely coincidence" to occur at a random point in the period when the Earth's surface held the potential for life. If it happened soon after conditions made it possible, then it probably wasn't very unlikely; if it happened after 3 billion years of conditions very similar to those in which life eventually developed, that would be more convincing.

On the other hand if we needed a few billion years of life to evolve to where we are now, it could still be true that the coincidence required for life was very unlikely - and the fact that it happened early in the period when it was possible would be just one more coincidence required for our existence. That means untold numbers of Earths had life that developed too late to become intelligent before the sun ran out of fuel.

But the more coincidences you have to pull in, the shakier your foundation gets with this argument.

The other thing about the signal power is that of course the inverse square law only applies to point-source/omnidirectional signals. If someone were to hit us with, say, a targeting radar or a laser, it would be detectable from much farther away.
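That scaling can be sketched directly: for a fixed receiver sensitivity, detection range grows with the square root of the effective radiated power, so a high-gain beam buys a lot of distance. The baseline range and the gain figure below are purely illustrative:

```python
import math

def detection_range(base_range: float, gain: float) -> float:
    """A transmitter with antenna gain G concentrates the same power into
    a narrower beam, so (inside that beam) received flux rises by G and
    the detectable range grows by sqrt(G)."""
    return base_range * math.sqrt(gain)

# Illustrative numbers only: suppose an omnidirectional leak is detectable
# to 0.01 light-years (the FAQ's "I Love Lucy" figure), and the same power
# is instead fed through a dish with a gain of one million (60 dBi).
print(detection_range(0.01, 1e6))  # ~10 light-years
```

The catch, of course, is that a beam that narrow has to be pointed at us -- which is why beacons and targeting radars are detectable where broadcast leakage isn't.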

I've just written about this from a different angle:

Virtual Reality Could Explain the Fermi Paradox.

Someone has probably already said this, or I might be crazy, but here goes.

Post-singularity, a species has one certainty -- immortality within the bounds of this universe -- and one goal -- surviving the death of this universe. Everything else is just wanking.

To achieve that goal, a good place to start is by building a universe-sized brain. That means converting all available matter into distributed computing clusters. That means "colonizing" in a very technical sense.

If entanglement permits faster than light communication, then within mere billions of years you can have a computer spanning at least millions of galaxies and (wishfully) thinking in realtime. Now you've just started working on the problem. The downside to all of this is that there might be no solution, of course. Other than 42, and that never did anyone any good.

On the other hand, the final anthropic principle might be true, but that's even less likely.

Lots of good discussion.

Howard, thank you for details on the signal strength issue.

every advanced ETI would have to choose, without exception, to remain invisible

Mike, I don't think that's what I said. Some would choose to remain invisible; some would choose to undertake projects that, as a side effect, remain hard-to-detect from an extreme distance; some would choose to undertake projects that could be detected if we knew what we were looking for; and others would do things that would be easily detected.

However -- and this is the crux of the argument -- making themselves easily detected means providing a ready target for the small (but non-zero) number of hostile civilizations (hostile due to ideology, biology, or competition). Those visible civilizations would have a limited lifespan. Camouflage is a survival strategy.

So it's not that every ETI chooses to remain invisible, it's that those who don't remain invisible don't last.

Honestly, I don't know if this is more appealing than a universe where we're the only intelligent life.

Jacob, I think you make a strong argument in general, but bear in mind that 90K or 100K years is, in evolutionary or astrophysical terms, by no means a "long time." As for existing without any significant advances in technology or social organization, bear in mind that if the tech relied on wood, plant, or soft animal tissues (e.g., hides), it wouldn't last to be discovered today. Upshot is, we really can't make that claim.

Tim, that's certainly one scenario. However, even setting aside any dependence upon superluminal computing, it (a) presupposes patterns of behavior that we're not qualified to predict, and (b) ignores the issue of competition and divergence.

One more issue: we shouldn't fall into the trap of looking for a single explanation for why we don't see ETIs and/or why they don't exist. A diversity of individually-insufficient explanations can, in aggregate, make for a persuasive case.

Actually, I've got a problem with Tipler's von Neumann machine argument, which is that we can't falsify the possibility that it's already happened.

Fact is, we're made out of replicators. And we don't know exactly how the ancestral replicators got here, either. But we do, already, have sufficiently developed propulsion technology that if we wanted to we could send a few million biological payloads out of the solar system per year on an ongoing basis.

They don't have to be big, or fast, they just have to be viable at the far end: and I suspect our understanding of minimal organisms and bacterial spore encapsulation mechanisms is within a couple of decades of allowing us to engineer a bug that can survive tens or hundreds of kiloyears in a hard vacuum/high radiation environment. We're also within a couple of years of being able to seriously start cataloging exoplanets that would be suitable targets for bacterial/algal panspermia.

Nor is speed essential. You're talking about 10% of lightspeed -- but what about 1%? Or 0.1%? Or 0.01%? If the goal is to seed the galaxy with life-forms, and the tool is ruggedized algal cells in spore form, then at 300 km/sec (achievable with a plasma sail or similar) you can cross the galaxy in only about 100 million years.
All this leaves out of the picture is motivation, and I've yet to be convinced that the whole space colonization shtick isn't essentially religious in nature.
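The travel-time arithmetic in that comment is easy to sanity-check. A minimal sketch, with my own assumptions: a galaxy roughly 100,000 light-years across, and a constant probe speed expressed as a fraction of lightspeed (the function name and constants are mine, for illustration):

```python
# Back-of-the-envelope check of the panspermia crossing times quoted above.
# Assumed: Milky Way diameter ~100,000 light-years; constant speed, no acceleration.

C_KM_S = 299_792.458          # speed of light in km/s
GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way in light-years

def crossing_time_years(speed_km_s: float,
                        distance_ly: float = GALAXY_DIAMETER_LY) -> float:
    """Years to cover distance_ly light-years at a constant speed in km/s."""
    fraction_of_c = speed_km_s / C_KM_S
    return distance_ly / fraction_of_c

# 300 km/s is about 0.1% of lightspeed, so the crossing takes ~100 million years.
print(f"{crossing_time_years(300):.2e} years")
```

Dropping the speed by another factor of ten (30 km/s, roughly Earth's orbital speed) only pushes the crossing to around a billion years, which is still short compared to the age of the galaxy.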

So ...

I conclude that panspermia -- as opposed to actual exploration using vNMs -- is actually cheap, and achievable within another 20 years from our current tech base if we actually want to do it. Which has interesting implications, doesn't it?

Robert Charles Wilson has an interesting take on that in Spin.

There is, of course, not the slightest indication that Artificial Intelligence (GOFAI) or Drexlerian-style molecular assembler nanomachines or super-high-resolution virtual realities indistinguishable from reality are scientifically possible, and a great deal of evidence that they're impossible. We do have evidence concerning the alleged Singularity: the rate of progress in technology has been slowing down for some time.

It's fascinating to observe the Singularity lemmings as they rush to drink Ray Kurzweil's crackpot Kool-Aid, since even a non-scholar unaware of scientific arguments like Nobel chemist Smalley's debunking of Drexler's molecular assemblers nonetheless realizes that progress has ground to a halt. Just look around. Is Windows Vista an improvement? For that matter, is compiz in Ubuntu linux a meaningful improvement over Doug Engelbart's original windowing and hypertext demo in 1968?
Everyone who owns a personal computer realizes that Moore's Law has broken down, and the Hail Mary pass the industry has made toward parallel multi-core CPUs isn't working out:
"I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law by giving us machines that work faster only on a few key benchmarks! I won't be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific--until it turned out that the wished-for compilers were basically impossible to write."

["Interview with Donald Knuth," 2008]

As progress has ground to a halt, from AI to CPUs, the prospect before us is not of accelerating technological change leading to a mythical Singularity, but Western civilization falling off the Olduvai Cliff as we run out of oil.

As Bruce Sterling put it: "`Artificial Intelligence' is so far from the ground-reality of computation that it ought to be dismissed like the term `phlogiston.'"

Of course subsequent commenters will convulse with ecstasies of contempt as they ridicule Don Knuth, Nobelist in chemistry Smalley, Rodney Brooks, Marvin Minsky, and all the other experts who have admitted we've hit a brick wall in the failed and futile efforts to produce AI, faster CPUs, Drexlerian molecular assemblers, etc. Standard operating procedure for the internet: like the delusion misnamed "the wisdom of crowds," the reality distortion field called the Singularity remains a crackpot fantasy confected for the purpose of deluding gullible dupes.
As Charles Mackay reminded us 160 years ago, "Men go mad in herds, while they only recover their senses slowly, and one by one." [Mackay, Charles, Extraordinary Popular Delusions and the Madness of Crowds, 1841]

My take (perhaps not 100% serious) on both "The Singularity is imminent!" and "The Singularity is impossible!" is that they're both wrong: the Singularity is here, it's inside our heads, and it may have been here for billions of years in the brains of other animals. In fact, we have an inverse Fermi paradox of our very own, which is: why are our minds so limited when the hardware & firmware are so capable?

Of course, I'm a crackpot. And probably not the first person to formulate it this way. But the human brain already contains one Turing-compliant AI. That's you. It also contains a virtual-reality suite capable of simulating anything you can imagine. That's your mind. At present, it's simulating (as best it can) the world around you, but it's capable of a lot more than that, as dreaming or hallucinations can demonstrate. It can even do both at once, as with daydreaming. Similarly your AI suite is capable of quite-accurately modelling other personalities, which is what it does when you think about what your mother would like for her birthday or whether your boss is about to fire you for reading blogs at work.

In fact, as experience with computers will tell you, there doesn't seem much innate reason why this exceptionally flexible system is limited to running one personality and one model of reality. There are obvious and significant differences between brains and computers, but for capacity I think this kind of analogy is fair: computers usually have a lot of wasted capacity, and if they can do one thing, they can usually do that thing ten times at once, or a thousand. At the very least, they can do it twice, each at half the speed.

Not to mention, there are long periods when your brain is doing virtually nothing, just ticking over. So why don't we have access to a controllable VR suite? (The closest is perhaps lucid dreaming.) Why are we not able to imagine other personalities to converse with? (Except a small section of the population, for whom this phenomenon seems to cause extreme distress.) Why can we only remember 7 things? Why do some people lack the empathy required not to hurt other people? Why are some people smarter than other people when their minds are so similar? Why are other animals not able to communicate with us, or even with each other at more than a very basic level? Why do our brains start out so apparently blank, when other animals can function at birth?

To me these are mysteries of just the same kind as the Fermi paradox. What you would expect - given our existence and experience - seems not to be the case.

Natural selection probably provides most of these answers, but they're not yet clear, and how the limits are enforced is even less clear. A mind that could simulate paradise might not bother reproducing or eating. A mind that could simulate other personalities might not bother with the social contact required to find a mate or rear children. But despite that, we appear to be perched right at the edge of a huge expanse of cognitive possibilities, so this mechanism must be very subtle. The dangers posed by unlimited cognitive and simulative capacity must be quite severe in the natural state. Of course, we're far from the natural state, so my bet is that we can fairly safely tap into this enormous potential in the next few decades, well before we know how to simulate a mind or have the capacity to run one even if we did.

Some possibilities (they hardly rise to the level of predictions), probably not original to me but I'm a terrible one for remembering sources:

  • Whale brains contain entire societies of simulated personas, and their evolution to fairly-safe ocean life is something like the idea of simulation liners drifting through space.
  • The human mind proves to be enormously more powerful than the surface gloss that we have access to; our experience and our awareness of our own cognition are the Cliff Notes of the Cliff Notes of the real processes of cognition, but for some reason that's all we can cope with.
  • We already had a full-blown society-wide Singularity, but our super-smart ancestors decided to implement cognitive limits and let things roll on along this human-level-intelligence path we find ourselves on.
  • Or in fact running a full-blown virtual reality simulation all the time is way beyond the capacity of our minds, and we're just fooled into thinking it works that way by a few sharp still frames, a bunch of stick figures in motion, and a hypnotic instruction not to notice the crudeness of the animation.

Well, it's a thought, anyway. I told you I was a crackpot.

I generally think AI as envisioned in pop culture is a complete dead end. I mean, Turing has a lot to say about the theoretical limits of computation; the Halting Problem says enough to rule out meaningful machine "intelligence", in a lot of ways anyway.

That said, traditional AI is not the only way machine learning can bring on a "singularity". I'd argue that we are *already* at a social Singularity; the vast amounts of data to mine about every one of us, given just a *little* more in the way of machine learning, will lead to social issues in our generation that we may not be equipped to solve without a lot of upheaval.

The Fermi Paradox is quite easily explained: technological civilisations are doomed to failure due to the finite energy resources available and the time scales required for the evolution of intelligent life.

Our own civilisation, though measured in the thousands of years since we became agricultural creatures, has accelerated with the availability of cheap and abundant energy - energy that had been stored in fossil remnants by a series of fortuitous events. Those events occurred 70 million years ago and the finite resource of fossil fuels is now being depleted to the point of collapse.

The industrial revolution that has allowed technological advance has been powered by wood, coal, and then oil and gas. The minuscule amount of energy from other sources such as uranium and solar is actually a byproduct of oil-powered technology and machinery.

We are now at or close to peak oil, with peaks in nearly every other important commodity that drives our civilisation. As can be seen in the collapse of previous societies (Easter Island leaps to mind), when an essential energy resource is depleted, the civilisation collapses.

There must have been countless alien cultures that reached a similar technological pinnacle before declining in the face of energy depletion. It must be remembered that our industrial age has lasted a mere two hundred years; with very few years left before our eventual decline, being able to glimpse another civilisation's radio signals would be a monumental coincidence.

I feel as though a combination of many different comments on this page are right. I think that while we will colonize the Solar System, we will also colonize ourselves. We will be constantly improving ourselves with better and better nanotechnology, and looking in more than looking out.

That being said, I have no idea how ETs are going to operate, but I feel that, if there are any, they will be either very similar to or very different from our society when we reach that level, depending on their physical bodies.

Finally, I feel compelled to say that I have no idea at all, and that this is just a guess.

I'm thinking that the Great Filter may really be a series of events, some behind us and some ahead.

It seems to me that there are at least two types of roadblocks on the way to a spacefaring species:

- Evolution stuck in local maxima
- Unstable dynamics leading to collapse of the aspiring species

For example, the dinosaurs may have been a local maximum, and an asteroid of the right size had to hit the Earth in order to jog the ecosystem out of it.

An example of unstable dynamics was the nuclear arms race. It could certainly have gone a different way, with pretty high probability.

Even with a small number of local maxima and unstable dynamics, you can get to pretty strong Filters, because the event probabilities are multiplicative.

For example, if you have 30 such events, and the chance of successfully navigating each event is 50%, that's a 1 in 10^9 Filter. Some of the Filters may be much stronger (I suspect that the nuclear one was 1 in 1000).
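The multiplicative arithmetic in that example is easy to demonstrate: assuming the events are independent, the survival probabilities simply multiply, so many modest hurdles compound into a very strong Filter. A small sketch (the function name is mine, for illustration):

```python
# Independent filter events: the overall survival probability is the
# product of the per-event survival probabilities.

from functools import reduce
from operator import mul

def filter_strength(survival_probs):
    """Probability of surviving every event in the list, assuming independence."""
    return reduce(mul, survival_probs, 1.0)

# 30 coin-flip events, as in the comment above: roughly a 1-in-10^9 Filter.
p = filter_strength([0.5] * 30)
print(f"1 in {1 / p:.2e}")

# One much stronger event (say, a hypothetical 1-in-1000 nuclear bottleneck)
# dominates the product on its own.
p2 = filter_strength([0.5] * 30 + [0.001])
print(f"1 in {1 / p2:.2e}")
```

The takeaway matches the comment: no single event needs to be improbable for the aggregate Filter to be enormous.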

I think we're out of the local maxima jungle, but could still have a pretty severe Filter component with nanotech and rogue AI unstable dynamics.

The only way to get some real numbers is to survive and do a galaxy scale survey. I'm up for it!

I'd like to do a survey! I just think it would be rather inelegant to dismantle solar systems to do it even if you had the capability. Sort of like showing up in the wilderness with a fleet of Hummers and bulldozers and building a subdivision and shopping mall and office park in the name of studying the local fauna and flora. You know. Crass.

A related question (for me) is why the universe is so complicated and yet so functional. So far there doesn't seem to be a reason why self-aware life couldn't exist in a much simpler physical system, and because the permutations of a simpler system are vastly fewer in number, you would think it most likely that we would find ourselves in one. And yet here we are, with all kinds of interesting things going on in the universe at all scales that have nothing to do with us and do not seem to have even an anthropic explanation. I was reading about superconductivity findings this morning and thinking about how odd it is that our physical system has these emergent properties that are not involved in life (so far as we know) and yet are quite useful and available to us and happen in controllable ways.

Obviously, anything that might occur in nature needs to be pretty stable otherwise we'd never have time to develop, but it does seem awfully convenient to me that so many stable forces and particles exist with such interesting and diverse emergent behaviours when surely a simpler system could also produce intelligence.


This weblog is licensed under a Creative Commons License.