« "BoingBoing Censored" - The Game (!?!) | Main | Playing the News - A Chat with Asi Burak »

Singular Sensations

The Singularity concept remains inescapable these days, although rarely well-understood. Both are unfortunate developments, for essentially the same reason: the popularity of the term "Singularity" has undermined its narrative value. Its use in a discussion is almost guaranteed to become the focus of a debate, one that rarely changes minds. This is especially unfortunate because the underlying idea is, in my view, a useful tool for thinking about how we'll face the challenges of the 21st century.

For many of its detractors -- and more than a few of its proponents -- the Singularity refers only to the rise of godlike AIs, able to reshape the world as they see fit. Sometimes this means making the world a paradise for humanity, sometimes it means eliminating us, and sometimes it means "uploading" mere human minds into their ever-expanding digital world. That this isn't all that close to Vinge's original argument is largely irrelevant -- by all observations, this appears to be the most commonplace definition.

It's not hard to see why this gets parodied as a "rapture for nerds." It's not that it's a religious argument per se, but that it has narrative beats that map closely to eschatological arguments of all kinds: Specialists (with seemingly hermetic knowledge) [Premillennial Dispensationalists, Singularitarians, Marxist Revolutionaries] predict an imminent transformative moment in history [Rapture, Singularity, Withering Away of the State] that will create a world unlike anything before possible in human history, a transformation mandated by the intrinsic shape of history [The Book of Revelation, the Law of Accelerating Returns, Historical Materialism]. The details of the various eschatological stories vary considerably, of course, and this general framework matches each version imperfectly. Nonetheless, this pattern -- a predicted transformation creates a new world due to forces beyond our ken -- recurs.

This comparison drives many Singularity adherents to distraction, as they see it as the intentional demeaning of what they believe to be a scientifically-grounded argument.

The thing is, the Singularity story, broadly conceived, is actually pretty compelling. What Vinge and the better of the current Singularity adherents argue is that we have a set of technological pathways that, in both parallel and combination, stand to increase our intelligence considerably. Yes, artificial intelligence is one such pathway, but so is bioengineering, and so is cybernetic augmentation (I'll argue in a subsequent post that there's yet another path to be considered, one that Vinge missed).

The version of the Singularity story that I think is well-worth holding onto says this: due to more detailed understandings of how the brain works, more powerful information and bio technologies, and more sophisticated methods of applying these improvements, we are increasingly able to make ourselves smarter, both as individuals and as a society. Such increased intelligence has been happening slowly, but measurably. But as we get smarter, our aggregate capacity to further improve the relevant sciences and technologies also gets better; in short, we start to make ourselves smarter, faster. At a certain point in the future, probably within the next few decades, the smarter, faster, smarter, faster cycle will have allowed us to remake aspects of our world -- and, potentially, ourselves -- in ways that would astonish, confuse, and maybe even frighten earlier generations. To those of us imagining this point in the future, it's a dramatic transformation; to those folks living through that future point, it's the banality of the everyday.
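Here's a crude toy model of that loop (the numbers are entirely arbitrary; this is an illustration of the shape of the curve, not a forecast):

```python
# A toy numerical sketch of the "smarter, faster" feedback loop described
# above -- my own illustration, not anything from Vinge or the Singularity
# literature. Each year's rate of improvement is proportional to the
# capability already achieved; starting level, feedback coefficient, and
# horizon are all arbitrary.

def project(capability=1.0, feedback=0.02, years=60):
    """Yield (year, capability) under a self-reinforcing improvement cycle."""
    for year in range(years + 1):
        yield year, capability
        growth_rate = feedback * capability   # smarter -> faster improvement
        capability *= 1.0 + growth_rate

for year, level in project():
    if year % 10 == 0:
        print(f"year {year:2d}: roughly {level:.4g}x baseline capability")
```

With a feedback coefficient of a couple of percent, the first few decades look like ordinary incremental growth and the last one looks like a discontinuity -- the same smooth process that feels like the banality of the everyday from inside looks like a rupture from the vantage point of earlier generations.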

Regardless of what one thinks of the prospects for strong AI, it's hard to look at the state of biotechnology, cognitive science, and augmentation technologies without seeing this scenario as distinctly plausible.

What I'm less convinced of is the continuing value of the term "Singularity." It made for a good hook for an idea, but increasingly seems like a stand-in for an argument (for both proponents and detractors). Discussions of the Singularity quickly devolve into debates between those who argue that godlike AI is surely imminent because we have all of these smart people working on software that might at some point give us a hint as to how we could start to look at making something approaching an intelligent machine, which would then of course know immediately how to make itself smarter and then WHOOSH it's the Singularity... and those who argue that AI is impossible because AI is impossible, QED. And we know this because we haven't built it, except for the things we called AI until they worked, and then we called them something else, because those weren't real AI, because they worked. Since AI is impossible.

In his snarky piece on the Singularity from a few weeks ago, Warren Ellis suggested replacing "the Singularity" with "the Flying Spaghetti Monster," and seeing if that actually changed the argument much. Here's a parallel test: replace "the Singularity" with "increasing intelligence," too. If it still reads like eschatology, it's probably not very good -- but if it starts to make real sense, then it might be worth thinking about.

Comments

I like to think of the Singularity as a concept at its most literal possible meaning, namely the point at which the rate of change of a system will exceed our ability to cope with it or even fully understand it.

In that sense I feel we have reached a social singularity already, regarding privacy and transparency in the increasingly indexed and datamined online world. I think there are ramifications of this that we not only can't see yet, but that we don't even know the right questions to ask about yet.

As for the traditional AI-emergent Singularity idiom, it seems like a geek wankfest to me at this point, and is rightly ridiculed. It made for interesting speculative fiction but that's about it.

I also agree with you that augcog and other forms of "enhancement" are more likely paths; note that these do not need to be invasive but can simply manifest as being able to have instant access to more and more indexed information. Sort of the "display and presentation" side of the argument I make in the second paragraph above.

If you want to look at farther-future SF as a guide for the possible in this regard, perhaps the Transenlightened of Alastair Reynolds' "Revelation Space" world are a reasonable strawman.

I thought you were going to mention the idea of abrupt change that becomes overwhelming. Even before the potential intelligence explosion, a super-fast rate of change *is* a serious issue. Already I'm feeling the strain of keeping up with my RSS feed and email, and that's just one person's understanding. I'd imagine it'd be a lot worse for anybody trying to plan something in 10, 15, 20 more years...

On the relationship between sustainability futurism and Singularity futurism:

I think it's fair to say that climate change is the sustainability issue that encompasses and contextualizes all others. Since last month's IEA report, I think of it in terms of 550, 450, and 350 ppm CO2. 550+ is where we're going at present (IEA Baseline scenario for 2050), and the debate over stabilization targets is between 450 and 350. The IEA provides a detailed scenario (codenamed "BLUE") to reach 450, which is what the "Cool Earth 50" goal being discussed by the G8 in Japan this week is about, but that still involves a two-degree long-term rise. 350 ppm is the goal advocated by James Hansen and by 350.org, but we don't have a blueprint, complete with costings, for how to realize it.

It does look very much as if 450 is going to be the target favored in Copenhagen 2009, but if and when the temperature ratchets up again as it did in 1998, I'm sure the 350 advocates will suddenly get a lot more attention.

In second place we have a host of issues like development, population, food security, and peak oil, but it certainly looks as if the 450 target will be an immutable constraint on any plan for the future. If your plan for a better world involves exceeding that concentration, you need a better plan. (350 advocates, substitute accordingly.)

On the Singularity side, though there is a similar diversity of topics, I think increasing intelligence is indeed the axial issue, with radical life extension and nanotechnology in joint second place. The essential transhumanist agenda at present is something like this: The foremost pre-Singularity priority should be the fight against ageing and death, with cryonics for those who die too soon and de Grey's SENS for the rest of us. Nanotechnology is an extinction risk (I prefer that expression to "existential risk", it's more straightforward) and presents a distinctive intrinsic challenge that must be met; and finally, Friendly AI is the best approach we have to the all-or-nothing challenge of superhuman intelligence which defines the Singularity proper.

So how does the other futurism, the one that's becoming mainstream, look from the Singularity perspective? Certainly the projections of 2100 look unbelievable. It is impossible for a transhumanist to believe that the main question regarding Earth in 2100 is whether or not cities and cars still run on fossil fuels. By then, the expectation is, we're either mostly posthuman and in space, or we're all dead. So there is a big question mark over these extrapolations regarding the second half of the 21st century.

Also, the big challenges of sustainability don't look so big when you have God-in-a-box technologies; the really big challenge is simply not killing yourself with all that concentrated power. Need to suck carbon out of the air in a hurry? Use some aerovores! But just make sure they stop when they're supposed to.

The Worldchanging school of thought is, in part, an approach to sustainability futurism which is willing to listen to ideas derived from Singularity futurism; but the values of sustainability futurism remain paramount. I look at things from the other side and ask: what is a rational attitude for a Singularity futurist to adopt regarding sustainability issues? Transhumanists will never be at the center of that debate; their agenda is fundamentally different. As the new scarcity bites harder, it will be tempting for people who do care about the future, and who have the ability to contribute, to do it the Worldchanging way -- e.g. advocating nanotechnology in the context of solar energy or geoengineering.

I'm certainly not opposed to that, but I do want to warn that such an approach must not become paramount among Singularity futurists, because (in the extreme) it would simply lead to green technophilia and a neglect of the cautious side of Singularity futurism (a la Drexler and Yudkowsky) which is one of its essential ingredients. The aerovores provide a simple example: if, gripped by a further temperature spike, the world somehow embraced accelerated carbon capture via nanotechnology as the only answer and invested massively in that goal, it would risk creating a globally distributed technological base capable of destroying the world not long after it had saved it. There need to be people working directly on the risks associated with very high technology, because even massive human dieoffs aren't going to stop it from turning up somewhere in the world; and if one is trying to create a sort of futurist popular front which collectively addresses every major issue confronting the world, you need such people in your coalition, or at least you need to leave room in your worldview for their relevance.

From the other side, Singularity futurism certainly has problems with sustainability concerns. There is a tendency to dismiss them as irrelevant (because 2100 will not be like that), unimportant (because a Friendly Singularity will fix them easily enough), or even as pernicious distractions, and I dare say they really are pernicious distractions for a handful of people, namely those who are genuinely best employed on core transhumanist concerns like antiageing, nano safety, and Friendly AI, rather than on the complexities of energy economics.

Nonetheless, because the timing of breakthroughs in those areas simply cannot be forecast reliably, it's very hard to approach those topics in the same way as climate, population, food, or even economic growth. The view I have therefore been pushing is a segregation of the quantitative and the qualitative. The processes pertaining to sustainability futurism generally are quantifiable, and therefore form a nexus which can be approached much as economic governance is - as a problem in optimization or constraint satisfaction. There are a variety of levers we can pull or push, and we want to know the actions which will steer us in the best direction.
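As a minimal sketch of that "levers under a constraint" framing -- with lever names, costs, and abatement figures invented purely for illustration, not drawn from the IEA or anywhere else -- the problem can be posed as a toy cost-minimization:

```python
# Hypothetical illustration of sustainability policy as constraint satisfaction:
# choose a mix of "levers" that meets an emissions-abatement target at least
# cost. All names and numbers below are made up for the sketch.
from itertools import product

LEVERS = {                      # lever: (annual cost, emissions abated), arbitrary units
    "efficiency":     (1.0, 3.0),
    "renewables":     (4.0, 6.0),
    "reforestation":  (2.0, 2.0),
    "carbon_capture": (6.0, 5.0),
}
TARGET_ABATEMENT = 10.0         # the constraint every acceptable plan must satisfy

best_mix, best_cost = None, float("inf")
# Brute-force every on/off combination of levers (fine for a handful of them).
for choice in product([0, 1], repeat=len(LEVERS)):
    cost   = sum(on * LEVERS[name][0] for on, name in zip(choice, LEVERS))
    abated = sum(on * LEVERS[name][1] for on, name in zip(choice, LEVERS))
    if abated >= TARGET_ABATEMENT and cost < best_cost:
        best_mix = [name for on, name in zip(choice, LEVERS) if on]
        best_cost = cost

print("cheapest mix meeting the target:", best_mix, "at cost", best_cost)
```

Real integrated-assessment models are vastly more elaborate, but the structure is the same: a feasible region defined by the constraint, and a search for the best point inside it.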

But Singularity issues exist on a different plane and need to be approached differently; they are qualitative ones where we are still working out the basics of how to deal with them. They can be "quantified", but only in the way of scenarios, in which assumptions are made by hypothesis: e.g. assuming that a cheap molecular therapeutic regimen that doubled human lifespan was made available on the Internet in the year 200X, one could then plausibly calculate the demographic consequences. But that is just deductive reasoning about hypotheticals. By contrast, the quantification of climate change processes has empirical and physical-first-principles theoretical bases; epistemically it's on a completely different plane.
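For instance, a deliberately crude version of that calculation might look like the following sketch, in which every rate, date, and starting figure is an invented assumption rather than a forecast:

```python
# A toy sketch of the lifespan hypothetical above: model the therapy as halving
# the crude death rate from some arrival year onward, and project total
# population forward. All numbers here are invented assumptions.

def project_population(pop=6.7e9, birth_rate=0.020, death_rate=0.008,
                       therapy_year=20, horizon=50):
    """Return [(year, population)] under the hypothetical intervention."""
    series = []
    for year in range(horizon + 1):
        series.append((year, pop))
        d = death_rate / 2 if year >= therapy_year else death_rate
        pop *= 1 + birth_rate - d
    return series

for year, pop in project_population():
    if year % 10 == 0:
        print(f"year {year:2d}: {pop / 1e9:5.2f} billion")
```

The output is only as good as the hypotheses fed into it, which is exactly the epistemic contrast with climate projections grounded in measurement and physical first principles.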

My overall prescription then is that you make quantitative policy about the quantifiable, and qualitative policy about the unquantifiable. Sustainability futurism is generally quantifiable. Singularity futurism is generally only quantifiable through the introduction of arbitrary hypotheses. These futurisms can coexist and have to coexist, but the terms of their coexistence need to be thought about and understood.

Interesting link to the Flynn effect above, by the way. However, it does ignore one obvious possibility: that the population isn't getting smarter, but that the tests (and by proxy their creators) are getting dumber.

Smart is very different from wise. Some very smart people can be most unwise.

Amplified intelligence, by any means, does not necessarily mean more wisdom. If history is any measure, it will probably mean bigger and more terrible mistakes.

This is not meant to be a pessimistic statement but a way to point out that we may not be looking at what is essential.

