Futurist Matrix Revisited (Again)
David Brin wrote a provocative and thoughtful response to my futurist matrix idea, and posted it over at his blog. Unfortunately, the system he uses -- Blogger -- has once again broken its comment system. Rather than wait to reply, I've decided to post my response to his response here. (David -- this is an updated version of the email I sent.)
The futurist matrix is clearly a work in progress, and the changes have been slow and evolutionary. The main difference between the first and second versions of the matrix is in the terminology, not the concept -- I dropped the word "realist," and replaced it with "pragmatist." More importantly, I tried to make the sub-headings less normative, less apt to appear biased towards one particular option along an axis.
I suspect I'll need to do something similar with "optimist" and "pessimist." The danger of using commonplace terms in a setup like this is that readers' interpretations of the words may not match my use. The present sub-headings of "inclusive success" and "exclusive success or failure" are more accurate than optimist/pessimist, and I'll likely make them the axis labels.
These more expressive terms help to illustrate a seemingly-illogical aspect of the matrix: the combination of ideologically opposed groups in the same philosophical box, such as Marxists and Dispensationalists in the lower-right quadrant. But the matrix is less concerned with a group's ideology than with its eschatology: how does each philosophy see the future unfolding? As Brin points out, neither Marxists nor Dispensationalists would see themselves as particularly pessimistic. But while they may see a happy future world, it's a world limited to the true believers. They may want everyone to become a true believer, but people outside of the circle cannot achieve a successful future.
There is a bigger problem with putting exclusive success and failure in the same box, though, one that Brin gets at with his Paul Ehrlich example: it's a pejorative combination, implying that the two are equivalent. I certainly wouldn't be happy in a Left Behind world (in fact, I'd probably be hunted down by the Tribulation Commandos), but few Dispensationalists would see their own success as a form of failure -- while they would likely see the upper-left world as indicative of one where they've lost. Failure becomes an issue of perspective, not objective reality.
For many pragmatists, exclusive success and failure may in fact be equivalent concepts; many (most?) people willing to accept different pathways to positive change would see the success of a limited group of people at the expense of everyone else as a form of failure. Even the doomiest doom-sayers among the peak oil and civilization collapse crowd (e.g., James Howard Kunstler) wouldn't see being right as a form of success, even if pockets of well-prepared survivalists carried on (although they may get a bit of schadenfreude out of saying "I told you so" as the boat sinks).
So perhaps it's better to drop "failure" as a hard term, recognizing that each of the four quadrants would likely be seen as a "failure" outcome for somebody.
Regarding some particular points Brin raises:
[As a (very) crude example, imagine a spectrum of "singularity technologies are inevitable and all-powerful" versus "singularity technologies will be haphazard and only marginally transformative." On such a spectrum, one would put both Ray Kurzweil and Bill Joy at the same end, even though they have radically different visions of what these technologies would actually do.]
One last item: with regards to this:
I feel we have to get smarter. Maybe a LOT smarter, before we will be able to deal with AI and immortality and molecular manufacturing and nanotech and bioengineering. Effective intelligence is where we really should be investing research and development. Because if we do get smarter, or make a next generation that is, then the rest of it could be much easier.
Frankly, when I look at Aubrey de Grey and Ray Kurzweil... and when I look in a mirror... I see jumped up cavemen who want to live forever and get all pushy with the universe and quite frankly, I am not at all sure that cavemen are ready to leap into the role of gods.
I agree that we need to get smarter and that we need to focus attention on effective intelligence. I disagree, however, that this means we need to pull back. Intelligence evolves with the environment, broadly conceived, and (if William Calvin is right, and I think he is) we get smarter faster when the environmental pressures are the most extreme. Calvin argues, for example, that the measurable improvements in hominid and early human cognitive skills closely correlated with rapid climate shifts.
In other words, we may not get the intelligence we need if we don't put ourselves in the position of needing it.