
If I Can't Dance, I Don't Want to be Part of Your Singularity

All of the details have been worked out, so now I can talk about it: I will be speaking in New York City on October 3, at the New York Futures Salon. The subject?

Singularity Salon:
Putting the Human Back in the Post-Human Condition

aka If I Can't Dance, I Don't Want to be Part of Your Singularity

I'm very happy to announce that acclaimed futurist Jamais Cascio will be coming to lead our discussion of the Singularity, and what we should be doing about it. He's going to kick us off with a provocative call-to-arms:

With their unwavering focus on computing power and digital technology, leading Singularity proponents increasingly define the future in language devoid of politics and culture—thereby missing two of the factors most likely to shape the direction of any technology-driven intelligence explosion. Even if the final result is a "post-human" era, leaving out human elements when describing what leads up to a Singularity isn't just mistaken, it's potentially quite dangerous. It's time to set aside algorithms and avatars, and talk about the truly important issues surrounding the possibility of a Singularity: political power, social responsibility, and the role of human agency.

This should provide more than enough fodder for a lively discussion. I'm looking forward to a very special evening.

This is, in essence, counter-programming for the Singularity Summit, happening that same weekend (I'm not attending the Summit, fwiw). The 7pm start time for my event gives Summit attendees a chance to come on over after the last Saturday talk.

This is the first time I've given a talk on futures in New York, and it's open to the public (via registration with the Future Salon group). Hope to see you there.

Comments

Amen.

I imagine a lot of folks think the whole point of the singularity is to do away with politics, social concerns, and culture, and concentrate on the really important business of turning the moon into something useful.

Sounds like an interesting talk. I agree that it is important to take political, ideological, and social factors into account, and I don't really think that anyone has done that. I'll try to make it.

What luck for me! That's probably the only weekend this year that I'll be in the NYC area and the conference I'm attending wraps up at 3:15 on Saturday afternoon. Looking forward to it.

Politics and culture are relevant insofar as they influence the source code or motivations of the first superintelligence, be it enhanced human or AI.

Sorry we can't be in NYC for this--I know it will be fun.

Beginning with a few things that Michael (above) has also said on his blog, in the past:

Singularity can be defined broadly or narrowly. Broadly defined, it can designate any number of imagined changes, and indeed these days the word is often appropriated metaphorically to designate a big change or rapid transition.

The narrow definition which interests me most is Singularity as rise of superhuman intelligence, partly because this turns out to give us some conceptual leverage. A sort of Singularity Syllogism is possible along these lines:

1. Suppose that intelligent entities can be classified by their goals, and by their intellectual ability to achieve those goals. Define these two qualities as the value system and the intelligence of an entity.
2. Let us also suppose that intelligence can be ranked: Some entities are definitely more capable than other entities at achieving their goals. In any conflict of goals, one should expect the more intelligent to defeat the less intelligent.
3. Then: the value system of the first superhuman intelligence or intelligences will determine the subsequent future of the world, because lesser, merely human intelligences will be unable to realize goals which are in opposition to those of the superintelligences (should such opposition exist).

If we accept this conception of Singularity for the moment, then we can distinguish two further subcases. In one, superhuman intelligence comes about in an environment not paying any attention to part 3 of the syllogism (e.g. due to an enthusiastic quest by AI programmers to create something cool); in the other subcase, the people who are at risk (so to speak) of creating superintelligence are aware of the likely consequences, and make their choices accordingly.

One can similarly distinguish two ways in which politics and culture affect the outcome of such a Singularity. The first way in which they might do so is simply by preventing an understanding of the actual situation from arising (that's culture), or by preventing such an understanding from having any power (that's politics). (I emphasize again that this whole analysis is predicated on the "syllogism" above being accurate.) Under these circumstances, the Singularity becomes at best a gamble, from the perspective of humanity: one is reduced to hoping that those first superintelligences have human-friendly value systems.

The second way for politics and culture to affect the nature of a Singularity is in the context of a conscious effort to achieve a human-friendly Singularity. And here we get into unavoidably technical territory, including technical culture and politics, though it will remain in interaction with non-technical culture and politics until the threshold of superintelligence has been crossed.

The reason that the technical element now unavoidably comes to the fore is that any proposed specification of human-friendliness - say, one exact enough to serve as the blueprint for an artificial intelligence - will be both technically posed and technically justified. For example, here is a formulation that I have used (really just a variation on the model of Friendliness espoused by the institute that Michael works for): we should want the first superintelligence to be an ideal moral agent, where the criterion for the moral ideal is derived (somehow) from the (hypothesized) human utility function, the species-universal component of the human decision-making process.

Now regardless of the merits of that recipe, it can serve as a concrete test-case for this sort of anticipatory sociology of the Singularity. Imagine we have a research project and a group of programmers who think they finally have a recipe for safe superintelligence. For the public they have a verbal formula like the one I just used; and then they have a technical understanding among themselves of what "human utility function" and "human-relative ideal moral agent" mean. If an attempt to create a friendly Singularity is not simply carried out in secret, then I think this dual public-private, nontechnical-technical presentation of its nature must exist.

So, to recapitulate. If one accepts a notion of Singularity like the one in my "syllogism", and if one is not that interested in the sociology of an irresponsibly out-of-control Singularity (because that's just Russian roulette), then I think the political and cultural issues have to focus on the intellectual and social viability of projects like the one I describe, which seek, deliberately and with understanding, to initiate a human-friendly Singularity.

In its full multidimensional complexity, the situation is rather more complicated than I describe. There is the luddite option; and there are going to be rather different technical understandings of Singularity (which in some way violate the premises of my syllogism). But this is where I'd start, in trying to understand the situation.

Mitchell, great analysis. I agree that you've raised important points.

I am curious about what Jamais will have to say on this, though, despite the fact that I probably won't be able to make it to his talk. I'm not sure I've ever heard Jamais say that he thinks an intelligence explosion is possible, so I wonder why he mentions it here.

Fwiw, several talks on the second day of our program focus on the human element of the Singularity. For instance, "Collaborative Networks In Scientific Discovery" by Michael Nielsen, "How Does Society Identify Experts and When Does It Work?" by Robin Hanson, and a Future of Scientific Method panel.

My general impression here is that Jamais may simply be uncomfortable with the political ramifications of an intelligence explosion. An intelligence explosion means that one entity could be elevated to godhood very quickly. This is not democratic, but seems to proceed from the fact that an entity that can copy itself endlessly and learn nearly instantly is going to be more powerful than humans. These potential advantages available to a human-level AI are coded into the rules of the universe and there is nothing we can do about them.

The question is, bearing in mind that an intelligence explosion inherently grants tremendous power to a single agent, how do we go about encouraging that agent not to kill us wholesale? That is mostly a matter of programming, not politics.

Michael

I'm not sure I've ever heard Jamais say that he thinks an intelligence explosion is possible

Possible? Sure. I don't think I've ever indicated otherwise. The question isn't whether it's possible, it's whether it's likely -- and definitely whether it occurs in the way that folks today expect (I'd wager that it won't). I do think that an intelligence explosion is far more likely to happen via augmentation than by AI.

I'm not sure that the "human element" items in SS09 push anything further than in SS08 or SS07. Moreover, talks about scientific networking, scientific methods, and the role of experts in society seem rather inward-looking. It's not about how singularity issues will affect the world; it's more about how to do singularity research.

No real critics of the concept on the program, of course, but you do have Ray K *talking about* critics. I am happy to see that there are more bio talks on the docket than in prior summits, however.

... may simply be uncomfortable...

You're misreading my objections about the singularity argument. That the existence of a super-empowered individual entity would be undemocratic is somewhat beside the point; my concerns about democracy are more about the process than the result (as in, how we get to the point of making something disproportionately powerful).

My discomfort comes more from the unsupported assumptions that go into much of the argument, exemplified by what you've claimed here, along with the expressly religious thinking that seems to structure it. (If you don't want people saying that you're making a religious argument, you might want to steer clear of phrases like "one entity could be elevated to godhood very quickly" and "coded into the rules of the universe.")

Assumptions like:

  • ..."one entity"... (actually pretty unlikely, even limiting ourselves to AIs)
  • ..."can copy itself endlessly"... (lots of reasons why this is not likely, from limitations on available hardware to the potential reliance on unique characteristics of hardware materials -- plenty of examples of evolutionary/emergent software taking advantage of material flaws and characteristics, resulting in non-replicable software/hardware combos)
  • ..."learn nearly instantly"... (that one is veering awfully close to pure faith)
  • ...and a metric buttload of unstated yet present assumptions about behavior and the nature of intelligence.
  • Finally, the biggest unsupported assumption of them all, "...an intelligence explosion inherently grants tremendous power to a single agent." Inherently? As the saying goes, I do not think that word means what you think it means.

Please note that I didn't say that this was all impossible. Of course it's possible, in the broad sense -- there's nothing here that violates our understanding of how the universe works. Again, the question is how likely the various elements are.

What's frustrating for me is that I honestly *do* think that the set of technologies of enhanced functional intelligence will very likely be quite disruptive (at best) and potentially quite dangerous, and that it's something worth examining -- but the singularitarian argument, as it has come to be expressed, is utterly disconnected from history, social behavior, evolutionary processes, and just about every other non-digital line of study.

Especially politics: when you say that something is a "matter of programming, not politics," I just have to shake my head.

This was one of the better talks at the NYC Future Salon, especially for those of us who couldn't make it to the Singularity Summit. I blogged about it here: http://blogs.journalism.cuny.edu/interactive2010/2009/10/05/department-of-intelligence-and-robot-services/
