
Putting the Human Back Into the Post-Human -- The Motion Picture

The talk I gave at the New York Future Salon is now available!

The entire video runs about 98 minutes; my talk starts after a couple of minutes of intro, and I finish up right at the one-hour mark. The remainder of the video is the Q&A period, which has some good stuff, too. When I get a chance, I hope to pull out some short clips as stand-alone videos.

The sound quality is surprisingly good, considering that I wasn't mic'd. The lighting is such that some of the slide images are a bit hard to see; if you're curious, the entire deck (sans nifty Keynote transition effects) is available at SlideShare.

You can get a high-quality MPEG (.m4v) version at the Internet Archive page for the video, if you're eager to download just under a gigabyte...

My thanks to Kevin Keck and Ella Grapp for inviting me to give the talk, and to Robert Wald for dealing with the video stuff.

As always, please let me know what you think of the talk.

City in the Clouds


Good talk. In my blog posts I tend to focus on responding to things I disagreed with, but I found many of the points interesting and meaningful.

I don't think the public or government will believe that AGI is possible until it actually happens. It's just too radical. Even when we have powerful infrahuman AIs, people will still view them as fancy machines, not agents.

I am surprised that in your talk you seem to object to the very notion that there will ever be strong superintelligences that make decisions more effectively, or take more consequential actions, than all of humanity. You seem uncomfortable with the idea of humans not always being #1. Am I wrong about this? Isn't one of the fundamental ideas of transhumanism that we'll eventually be surpassed by our creations and by future selves radically different from our present selves, not just superficially but in terms of fundamental cognitive design?

I'm not terribly freaked out by a world where humans are surpassed, as long as the greater agents explicitly value our existence and don't run us over, so to speak. If that happens, doesn't it make sense that they might want to help us with some of our deeper problems, and that they'd be able to solve those problems more effectively than we can? Is that so bad, or so unlikely in the long run?

In his recent paper and talk, Aubrey predicts that a friendly superintelligence would probably fade into the background, because it would know that we wouldn't want it pestering us or intervening in our lives in obtrusive ways. This also seems like a reasonable idea, and one not quite covered by your four scenarios.

Also, I wanted to mention that I really wanted to make this talk, but I'd had so little sleep over the preceding few days that I practically fell on my face after getting out of the first day of the Summit. I hadn't secured any modafinil beforehand, either, and honestly I haven't tried it yet.

Post a comment

All comments go through moderation, so if your comment doesn't show up immediately, it just means I haven't been available to click the "okiedoke" button. Comments telling me that global warming isn't real, that evolution isn't real, that I really need to follow [insert religion here], that the world is flat, or similar bits of inanity are more likely to be deleted than approved. Yes, it's unfair. Deal. It's my blog, I make the rules, and I really don't have time to hand-hold people unwilling to face reality.

