New FC: Singularity Scenarios
My latest Fast Company essay goes up today. It looks at the different scenarios for a "Singularity" that arise when you take into account the different cultural and political drivers at work both before and after the development of greater-than-human intelligence.
Three of the four scenarios (leaving aside "Out of Control") assume that human social intelligence, augmentation technology, and competition continue to develop. And in all three, human civilization -- with its resulting conflicts and mistakes, communities and arts, and, yes, politics -- remains a vital force even after a Singularity has begun.
One key aspect of the three is that they're not necessarily end states. Each could, given the right drivers, eventually evolve into one of the others. Moreover, all three could in principle exist side-by-side.
I noted earlier that I differ from many Singularity enthusiasts in my take on what happens before and after a Singularity. I suppose I differ in my take on what happens during one, as well. I don't think a Singularity would be visible to those going through it. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the reactions and dilemmas of those who first encountered the disruptive change.
Ultimately, I think the "Singularity" language has outlived its usefulness. By positing that the culmination of certain technological changes is simply Beyond the Minds of Mortal Men, the concept both dismisses (or greatly downplays) the potential of human action to shape the evolution of these technologies, and undermines the stated desire of many Singularity proponents to avoid disastrous outcomes. "If it's completely out of our hands, then why worry?" is not exactly the mantra of a responsible, safe, globally beneficial future.