
Futurist Matrix Revisited (Again)

David Brin wrote a provocative and thoughtful response to my futurist matrix idea, and posted it over at his blog. Unfortunately, the system he uses -- Blogger -- has once again broken its comment system. Rather than wait to reply, I've decided to post my response to his response here. (David -- this is an updated version of the email I sent.)

The futurist matrix is clearly a work in progress, and the changes have been slow and evolutionary. The main difference between the first and second versions of the matrix is in the terminology, not the concept -- I dropped the word "realist," and replaced it with "pragmatist." More importantly, I tried to make the sub-headings less normative, less apt to appear biased towards one particular option along an axis.

I suspect I'll need to do something similar with "optimist" and "pessimist." The danger of using commonplace terms in a setup like this is that readers' interpretations of the words may not match my use. The present sub-headings of "inclusive success" and "exclusive success or failure" are more accurate than optimist/pessimist, and I'll likely make them the axis labels.

These more expressive terms help to illustrate a seemingly-illogical aspect of the matrix: the combination of ideologically opposed groups in the same philosophical box, such as Marxists and Dispensationalists in the lower-right quadrant. But the matrix is less concerned with a group's ideology than with its eschatology: how do the philosophies see the future unfolding? As Brin points out, neither Marxists nor Dispensationalists would see themselves as particularly pessimistic. But while they may see a happy future world, it's a world limited to the true believers. They may want everyone to become a true believer, but people outside of the circle cannot achieve a successful future.

There is a bigger problem with putting exclusive success and failure in the same box, though, one that Brin gets at with his Paul Ehrlich example: it's a pejorative combination, implying that the two are equivalent. I certainly wouldn't be happy in a Left Behind world (in fact, I'd probably be hunted down by the Tribulation Commandos), but few Dispensationalists would see their own success as a form of failure -- while they would likely see the upper-left world as indicative of one where they've lost. Failure becomes an issue of perspective, not objective reality.

For many pragmatists, exclusive success and failure may in fact be equivalent concepts; many (most?) people willing to accept different pathways to positive change would see the success of a limited group of people at the expense of everyone else as a form of failure. Even the doomiest doom-sayers among the peak oil and civilization collapse crowd (e.g., James Howard Kunstler) wouldn't see being right as a form of success, even if pockets of well-prepared survivalists carried on (although they may get a bit of schadenfreude out of saying "I told you so" as the boat sinks).

So perhaps it's better to drop "failure" as a hard term, recognizing that each of the four quadrants would likely be seen as a "failure" outcome for somebody.

Regarding some particular points Brin raises:

  • I do agree with Brin's list of What To Avoid for ideological matrices; in fact, those are pretty much identical to the What To Avoid list elements for making dual-axis scenario sets, too.

  • It's not an accident that the various examples in each box are all folks who "care about the future" -- it *is* a matrix of futurist perspectives, after all.

    I disagree with the argument that groups that dislike or oppose each other shouldn't end up in the same box. If the point of opposition is unrelated to the dynamics of the axes, while the issue arguably connecting them is fundamental to the matrix, it's a completely appropriate structure.

    [As a (very) crude example: on a spectrum running from "singularity technologies are inevitable and all-powerful" to "singularity technologies will be haphazard and only marginally transformative," one would put both Ray Kurzweil and Bill Joy at the same end, even though they have radically different visions of what these technologies would actually do.]

    One last item, with regard to this:

    I feel we have to get smarter. Maybe a LOT smarter, before we will be able to deal with AI and immortality and molecular manufacturing and nanotech and bioengineering. Effective intelligence is where we really should be investing research and development. Because if we do get smarter, or make a next generation that is, then the rest of it could be much easier.
    Frankly, when I look at Aubrey de Grey and Ray Kurzweil... and when I look in a mirror... I see jumped up cavemen who want to live forever and get all pushy with the universe and quite frankly, I am not at all sure that cavemen are ready to leap into the role of gods.

    I agree that we need to get smarter and that we need to focus attention on effective intelligence. I disagree, however, that this means we need to pull back. Intelligence evolves with the environment, broadly conceived, and (if William Calvin is right, and I think he is) we get smarter faster when the environmental pressures are the most extreme. Calvin argues, for example, that the measurable improvements in hominid and early human cognitive skills closely correlated with rapid climate shifts.

    In other words, we may not get the intelligence we need if we don't put ourselves in the position of needing it.

Comments

    Jamais, you know I support you in nearly all ways as a flaming Neo-Modernist and radical, militant moderate, like me. I approve of your ruminations about "attitudes toward the future" and only offered a few helpful comments ;-)

    Please, my comments about "needing to be smarter" should in no way be misconstrued as Bill Joy/Kaczynski-style rejectionism! I feel we need to charge into a tech-competent future ASAP!

    Nevertheless, my own central theme has been all about finding ways to do this that are least-stupid. Least unwise.

    Increasing aggregate intelligence, through improved capabilities of civilization - from markets to science to dispute resolution - should be top priority, above even life extension or cool toys.

    Above all, I think it is time to reconsider another long discredited term (as long as we are resurrecting "modernism"). That term is "sanity." It was formerly used as a cudgel against eccentricity, enforcing homogeneity. We modern eccentrics consider that kind of repression to be evil...

    ...so? Cannot a society of smart and tolerant eccentrics have its OWN definition of sanity? Worthy of a debate, some time.

    Finally, my own thoughts about 2D and 3D political axes can be seen at:

    Thrive and persevere.


    Thanks, David. I figured you didn't really buy into the rejectionist philosophy, but the "are we smart enough to handle this newfangled tech?" question all too often has the presumptive answer of "no." I wanted to push back pre-emptively against that assertion (no matter who makes it).

    I agree with you 100% that we need to find ways to build our future that are least-stupid. I increasingly suspect, however, that the least-stupid path will not be an obvious one, and in fact might be quite a surprise.

    More good points; glad I followed you here, Jamais! (Who knows, maybe I'll get smart enough to check your feed before commenting on old posts, eh?)

    Seems to me that what Mr. Brin is saying is that we need to encourage people to think for themselves, and for the world and the human race in general, which is something I wholeheartedly agree with.

    But how to achieve this, in a country (using the UK where I live as an example) that blatantly rewards and cossets the wilfully ignorant, through such vectors as litigation, a means-tested-but-easily-played welfare system, and an education system that, instead of encouraging progress, rewards the lazy by putting them in progressively less challenging streams, before finally signing them off of all education completely (using any one of a number of 'learning disabilities' and 'disadvantages' as the excuse) to a lifetime of bitter resentment against a system that has provided them with no motivation to better themselves or the world they live in?

    (Just as a footnote to that, I am *not* denying the existence of learning disabilities, but I *am* claiming that they are used as a convenient cop-out (by harassed teachers and parents alike) in an education system that is so obsessed with scoring and tests that it utterly fails to actually teach anyone to think and learn for themselves, unless they are already naturally motivated in that respect... which I acknowledge fully is far more a failing of the system than of the people entrapped within it; people are like electricity, they take the path of least resistance.)

    Whoa. Rant over.

    This is all well & good, but you cannot tell what is *least unwise*.
    Markets are try-everything-and-see-what-works. That's blind and crude. Science isn't much better. Since the demise of positivism, its starting point is "anything goes".
    Perhaps you should classify -isms or -ists by characteristics, say either inclusive or exclusive.
    As for education, most of its ills pale when compared to how unprepared it is for a life-extension world. Of course the same applies to pensions...

    Buddhists make a distinction between the "great vehicle," which brings enlightenment for (nearly) all elements of society, and the "lesser vehicle," which brings enlightenment for individuals. That may be a more neutral equivalent to your optimist/pessimist axis.

    Thanks for continuing this conversation with a third post, Jamais. Very helpful to be able to read thru your exploration of not-fully-fleshed-out ideas, especially for novice practitioners, of any generation.

    Perhaps informative to survey other personality-type characterizations? In Local Politics of Global Sustainability, Prugh, Costanza and Daly label Alfred E. Neumans, Technocratic Optimists, and Jeremiahs, before seeking a new grouping for themselves. And in The Great Wager, Alex similarly labels four groups and then adds a fifth (WC) model. A comment there by Joseph Willemssen lists several others.

    I've occasionally followed a method similar to David's in Models, Maps and Visions of Tomorrow: ask questions or pose statements (e.g., "Do you believe in the improvability of humankind?") that elucidate core belief systems.

    Some that I find useful:
    The extent to which one separates fact from value.
    The extent to which one believes that cooperation among individuals, peoples, and nations is possible.
    The extent to which one holds that there will always be a highly elastic substitutability of labor and capital for land (natural resources) as factors of production.

    I'm still a long way from a 2x2 matrix, tho...


This weblog is licensed under a Creative Commons License.