
March 4, 2015

Usefully Wrong

It's a line I've used quite a bit in my talks: "The point of futurism [foresight, scenarios] isn't to make accurate predictions. We know that in details large and small, our forecasts will usually be wrong. The goal is to be usefully wrong." I'm not just pre-apologizing for my own errors (although I do hope that it leaves people less annoyed by them). I'm trying to get at a larger point -- forecasts and futurism can still be powerful tools even without being 100% on-target.

Forecasts, especially of the multiple-future scenario style, force you (the reader or recipient of said futurism) to re-examine the assumptions you make about where things go from here. If your response to a given forecast is "that's bullshit!" you need to be able to ask yourself why you think so. Even if the futurist behind the scenarios leaves out something important, she or he may just as easily have included something that you had ignored. To push this thinking, it's often productive to ask:

  • What would have to happen to make this forecast plausible?
  • What would have to happen to make this forecast impossible (not simply unlikely)?
  • What in this forecast feels both surprising and uncomfortably true?

Thinking deeply about forecasts and futurism can change your perception. Events and developments that you might once have ignored or reflexively categorized take on new meanings and (critically) new implications. You start to think in terms of consequences, not just results. Here you ask:

  • Did I expect that event or development? Why or why not?
  • What should I now be prepared to see happen next?
  • What expected consequences or results did we manage to avoid?

Unfortunately, if you really embrace this kind of thinking, you begin to see on a daily basis just how close we as a planet keep coming to disaster. "Dodging bullets" is the top characteristic of human civilization, apparently. Welcome to my world.

July 16, 2013

Hackers, Griefers, and Futurists

Apropos of griefing Glass, my most recent piece for Co.Exist is now up: "To [Forecast] The Future Of Technology, Figure Out How People Will Use It Illegally". (The actual title uses "predict," not "forecast," of course.) It's a quick look at stuff I've talked about before: why illicit and unexpected uses of new systems give a better vision of the future than do such systems' intended purposes.

New technologies don’t exist in a vacuum: they interact with both technological and non-technological systems as well as a variety of human wants and needs. This allows for the emergence of surprising combinations of goals and uses, many of which may be completely outside of the expectations of the designers. In short, as the patron saint of futurism William Gibson once said, “the street finds its own uses for things.”

As a futurist, I try to think beyond the designers’ notes when it comes to the impacts of emerging technologies. I find that it’s often useful to imagine the unintended, seedy, improper, or illicit uses of new tools and systems.

Not a new argument from me, but a concise articulation of it.

May 28, 2013

The End of the World As We Know It (and I'm rather annoyed)

Fast Company's Co.Exist just put up my latest piece for them: "The End Of The World Isn’t As Likely As Humans Fighting Back." It's the latest in my series of short essays under the working title "Stop Complaining About the End of the World and Do Something About It." Here's how it starts:

While it’s certainly true that one can tell a compelling dramatic story about the end of the world, as a mechanism of foresight, apocaphilia is trite at best, counter-productive at worst. Yet world-ending scenarios are easy to find, especially coming from advocates for various social-economic-global changes. As one of those advocates, I’m well aware of the need to avoid taking the easy route of wearing a figurative sign reading The End Is Nigh. We want people to take the risks we describe seriously, so there is an understandable temptation to stretch a challenging forecast to its horrific extremes--but ultimately, it’s a bad idea.

In all seriousness, dystopias are boring and, as a tool of foresight, counter-productive. Enough, already.

May 20, 2013

Imagination Experiment: Visualizing Transformative Tech

Time for another thought experiment. Or, rather, a puzzle without a good answer yet.

We're getting pretty good at building extremely powerful telescopes. The planet-hunting Kepler space telescope may have gone functionally offline, but Hubble keeps plugging along, and the James Webb infrared telescope is on the calendar. And when we look out into the universe, we're seeing some pretty amazing stuff.

But what if the stuff we're seeing is even more amazing than we think?

Imagine, if you will, a very high technology non-human civilization living in two star systems (reasonably close to one another, say half of a light year, to make colonizing moderately feasible; that's close enough to share an Oort Cloud) about 10,000 light years from us. About 10,000 years ago, they split into three factions:

The first wants to go full upload, transcend into post-Singularity bit-liness. They've decided to disassemble their entire planetary system into Computronium, creating a web around their home star to absorb energy to support their digital lifestyle. (Charlie Stross describes this process in the later chapters of Accelerando, required reading for anyone who follows this blog.)

The second likes the idea of tearing things apart, but is less enthusiastic about the whole "turn ourselves into software" upload thing. They make use of similar tools to disassemble their own planetary system to create a Dyson Sphere. (The Star Trek: The Next Generation episode "Relics" is probably the best visualization of a Dyson Sphere around.)

The third faction says "the hell with this noise" and wants to bug out. They build a fleet of modified Alcubierre warp-drive ships to zip around the galaxy. This theoretically plausible scheme uses exotic matter to warp a bubble of space-time around the ship -- contracting space ahead of it and expanding space behind it -- allowing the ship to travel effectively faster than light, even though within the bubble it is still moving at a reasonable sub-light speed. None of these ships heads to Earth, but some of them head roughly in our direction, such that photons from the ships arrive along with those from the Computronium conversion and the Dyson Sphere construction.

Okay, got it? Three groups, each doing something different, with the light from their ultra-tech activities just now getting to Earth.

What do we see?

What would the disassembly of planetary systems look like? A Dyson Sphere, by definition, blocks out the home star; what would it look like as the Sphere came together? A Computronium web, conversely, need not block the entire star, but would consume quite a bit of energy; would that radiate differently than a "normal" star?

And just what would a warp-bubble-drive ship look like in action? It may only require a ton or two of "exotic matter," but that still translates into enormous amounts of energy being used to push around spacetime like a middle-school bully.

How would we know we're seeing something artificial, rather than a bizarre natural phenomenon?
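To make the "what would we see" question slightly more concrete, here's a rough back-of-the-envelope sketch using Wien's displacement law. The temperatures are illustrative assumptions (a Sun-like star at ~5800 K, a completed shell re-radiating its waste heat at an assumed ~300 K, a patchier Computronium web guessed at ~1000 K), not anything derived from the scenario above:

```python
# Back-of-the-envelope: how might a star wrapped in a Dyson Sphere or a
# Computronium web look different from a "normal" star? Assumption: the
# structure absorbs the starlight and re-radiates roughly the same total
# power as waste heat at a much lower temperature. All temperatures below
# are illustrative guesses.

WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_m(temp_k: float) -> float:
    """Peak of blackbody emission via Wien's law: lambda_peak = b / T."""
    return WIEN_B / temp_k

scenarios = {
    "bare Sun-like star": 5800.0,                  # K, roughly the Sun's surface
    "completed Dyson Sphere (waste heat)": 300.0,  # K, assumed shell temperature
    "partial Computronium web": 1000.0,            # K, assumed hotter, patchier structure
}

for name, temp in scenarios.items():
    peak_um = peak_wavelength_m(temp) * 1e6  # metres -> microns
    print(f"{name:36s} T ~ {temp:6.0f} K, peak emission ~ {peak_um:6.2f} microns")

# A bare Sun-like star peaks near 0.5 microns (visible light); a ~300 K shell
# re-radiating the same luminosity peaks near 10 microns. The signature would
# be an object dim or absent in visible light but anomalously bright in the
# mid-infrared -- which is roughly the excess that infrared Dyson Sphere
# searches look for.
```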

May 3, 2013

The Fuzzy Now

Thought experiment: imagine you've been taken, somehow, and dropped into a big city in another place, with comparable technological and economic development, somewhere you don't speak the language. Here's the twist: it's also time travel. How long would it take you to notice that you've been shifted in time as well as space?

I've been thinking more lately about how it is we (as a collection of societies) respond to the world evolving around us. I've written before about the banality of the future -- the idea that changes that seem mind-boggling and transformative from the perspective of today would seem utterly boring to people who have lived through the development and slow deployment of those particular changes. There's also William Gibson's famous line, "the future is here, it's just not evenly distributed." I'm fascinated by the idea that our perception of "the future" is contingent upon where and when we live.

At the Institute for the Future's 2013 Ten-Year Forecast event, I offered the concept of the "fuzzy now" -- the stretch of time before and after the present day in which there seems to be little if any significant change. The length of the fuzzy now corresponds to how much disruptive, dislocative change is taking place. Which brings us back to the thought experiment: if you land within the "fuzzy now," you may not realize for days that you've traveled in time.

Dropped into a new place, your first clues that you're in a different year would come from the gross physical environment: transportation types, building size/materials/designs, clothing design. You'd also be looking at what people are doing as they go about their business -- if they are fiddling with mobile phones, for example. Are there cues in terms of social behavior around ethnicity, gender, or sexual orientation? (Of course, if you spot an abundance of Zeppelins in the sky, you know immediately that you've moved to an alternate universe.)

Clues would come in two broad categories: things that should be there, but aren't; and things that shouldn't be there, but are.

If you were to be sent back ten years (2003), for example, you might not immediately recognize that you were in a different year. Clothing, building, and automobile designs would be familiar enough, and the lack of the most recent items wouldn't be instantly apparent (especially if you factor in being in a different country, where such differences would be masked by cultural/market variations). One set of clues you might notice soon: far fewer people using mobile devices, the complete lack of any kind of "tablet," and mobile phones that are essentially all old-style "feature phones" with buttons and tiny screens. Nobody has an Android or the like -- the iPhone wouldn't be coming out for another four years. Depending upon where you were, you might also see more public telephones and newspaper boxes. And once you saw that, you'd likely start picking up all sorts of other clues, especially about technology and media.

In short, we can say that ten years back is probably just beyond what we'd consider the "fuzzy now" -- you wouldn't notice immediately (as you would if you were bounced back a hundred years, or probably even 25), but you'd very likely pick up on it within an hour or two. Five years, conversely, would almost certainly be well within the "fuzzy now;" you'd eventually pick up on the shift, but it might take a day or more.

What about if you were shifted forward in time ten years, not back? I'd hazard a guess that you'd notice much more swiftly that something was very, very wrong. Why? Because while the physical objects, designs, and media of ten years ago might seem dated, they would also seem familiar; decade-old stuff is often still in active use. New stuff would be a surprise, especially if its overall appearance was distinct from anything back in your home time. Some of it you might discount as being in another country, but seeing big signs for electric vehicle rapid-charging stations, or bunches of people walking along the street wearing the descendants of Google Glass, or just about everyone wearing hats for sun protection, these would quickly stand out, especially in combination.

A five year forward jump probably wouldn't be detected as quickly, but -- depending upon what kinds of developments we see -- could start to feel weird and wrong within an hour or two. This parallels the depiction of ten years back: the changes may not immediately be noticeable, but would not remain hidden for very long. This could actually be more dizzying than a jump in time that's immediately visible -- your sense of safety, already compromised by the unexpected shift in place, gets steadily undermined by the gnawing sense of wrongness. A bigger shift in time, conversely, is like ripping a bandage off -- shocking, but all at once.

The observation that a five year forward jump might parallel the effects of a ten year backwards shift suggests that a "fuzzy now" might extend twice as far back as it does forward. The you from 2013 would likely feel at home anywhere from (say) 2008 to 2015/2016, perhaps going for days without realizing that you've moved in time as well as space.

There's a futurist adage that to get a sense of the changes we face, you need to look back twice as far as you look ahead. My suggestion of the structure of the fuzzy now seems to align with that, at least superficially. But to be clear: I'm not saying that we'll change twice as much over the next ten years as we did in the last. Rather, it's that we are more sensitive to the emergence of the new than to the persistence of the old.

This has a few implications for foresight work.

It's a useful way of explaining the "banality of the future" idea. It's all about perspective. We may think of developments happening eight or ten years from now as being wildly disruptive, but for people living eight or ten years from now, today (2013) will seem only marginally different at best.

It also offers a language for thinking about how different parts of the world experience change. A stable part of the developing world may have a broader fuzzy now than a place going through conflict or environmental destruction. Similarly, it's a way of articulating the disruption arising from different kinds of changes or events -- do they (temporarily?) shrink the fuzzy now period? Does a global economic downturn make the fuzzy now period expand?

Ultimately, it's a way of articulating the shock that can accompany big disruptions. We rely on the comforting knowledge that tomorrow will be pretty much like today. That seeming stability -- the spread of the fuzzy now -- actually allows us to think about the future. We don't have to look at our feet when we walk, figuratively speaking. But if you're accustomed to the present feeling like the last five or six years, and the next few years likely to seem like more of the same, suddenly having that perception of the present reduced from years to weeks, even days, can be enormously debilitating. Suddenly, we have to watch our feet.

A disruptive, cataclysmic future doesn't goad us into action, it eviscerates our ability to look ahead.


March 29, 2013

A Dragon, a Black Swan, and a Mule Walk into a Future...

My latest piece at Fast Company's Co.Exist site is now up. I gave it the title "A Futurist Bestiary", but they went with the more informative title of "3 Reasons Why Your Predictions Of The Future Will Go Wrong." (I've really got to get them to stop using the "P" word.)

Futurism is a richly metaphorical body of thought. It has to be; much of what we talk about is on the verge of the unimaginable, so we have to resort to metaphors for it to make any kind of sense. Not all of the metaphors we use are complex: it struck me recently that there are several common futurist metaphors that take a relatively simple animal shape: the Dragon, the Black Swan, and the Mule.

[...] These days, “here be dragons” is a broadly-understood metaphor for something both dangerous and uncertain. And it seems that the future is full of dragons, considering how frequently I’ve heard the term.

Dragons are things we should know about, but don't want to -- questions that we should ask, but whose answers we're afraid to hear. Black Swans are, as you probably know, things we could know about, if we asked the right questions -- but we probably won't. And Mules are... well, if you've read the Foundation trilogy, you know who the Mule was.

If you haven’t, here’s a quick recap (and a spoiler for a set of novels published in the early 1950s): a brilliant “psychohistorian” named Hari Seldon--essentially a futurist with above-average math skills--successfully plots out a way for the dying galactic empire to get through a dark age much more quickly than it otherwise would. But after a couple of hundred years, Seldon’s predictions, which all along had been completely accurate, suddenly start going wrong. The reason? The emergence of a mutant able to control human minds, a mutant who called himself the Mule.

In this short essay I've made the Mule a metaphor. Fear me.


Full text in extended entry.

Continue reading "A Dragon, a Black Swan, and a Mule Walk into a Future..." »

March 20, 2013

Futures of Human Cultures

My friend Annalee Newitz, editor at io9.com, asked me a short while ago for some thoughts on the possible futures of human cultures. The piece (which also includes observations from folks like Denise Caruso, Maureen McHugh, and Natasha Vita-More) is now up, and is a fun read. And while it captures the flavor of what I said, here's the (slightly edited to fix typos) full text of my reply to Annalee:

A hundred years, hmm.

I think that for many futurists the default vision of social existence a century hence is one of expanded rights (poly marriage, human-robot romance, that sort of thing), acceptance of cultural experimentation, and the dominance of the leisure society (robots doing all of the work, humans get to play/make art/take drugs/have sex). Call it the "Burning Man Future." With sufficiently-advanced biotech, people can alter or invent genders & genital arrangements (think KSR's 2312); with sufficiently-advanced infotech, people can run instant simulations of social and personal evolution (think the last chapter or two of Stross' Accelerando); with sufficiently-advanced robo/nanotech, class and work-related identities are of dwindling or no importance. Social divisions likely to still be around are those around politics (power still matters), art (aesthetics still matters), and the legitimacy of choices (the Mac/PC religious war writ large).

A more nuanced version of the Burning Man Future would allow for the establishment of sub-communities with radically different norms, able to isolate themselves either physically or informationally. Systems of abundance mean that any kind of social configuration is at least plausibly sustainable, while the kinds of interfaces we'd be using (engineered/upgraded brains, etc.) would mean that any level of filtering or reality manipulation is possible, too. Imagine a city street where not one of the hundred people around you sees the same version of reality, the interface systems translating the physical and social environment into something interesting and/or culturally acceptable. (This would also be a remarkable tool for mind control in a totalitarian regime.)

The more extreme version of that would be one where all experiences are market-driven, where everything -- the music playing in a building, the appearance of a designer outfit -- would require a micro-transaction to hear or observe.

There's also the question of how pervasive Gossip/Reputation Networks will be; my gut sense is that they'll be all over the place by mid-century, but seen as ridiculous and dated by the early 22nd century*.

That raises a larger point: it's not just that by 2113 we'll have gone through another three or four human generations (depending on how you count them); by 2113 we'll have gone through a dozen or so technosocial-fashion generations. Smartphones give way to tablets to phablets to wearables to implantables to swallowables to replaceable eyeballs to neo-sinus body-nanofab systems (using mucus as a raw material) to brainwebs to body-rentals... and those are increasingly considered "so 2110." And with all of these (or whatever really emerges), there are shifting behavioral norms. Don't look at your phone at the dinner table. Don't replace your eyeball in public. Don't reboot your neo-sinus in church.

At the same time, many of the Big Socio-cultural Fights we're having now will seem as ridiculous in 2050 as the cultural angst in the 1960s over hair length, or the performance of an expressionist orchestral concert in 1913 leading to a riot in Vienna. Gay? Bi? Trans? Cis? What does it even matter? What *really* pisses people off these days is the use of real meat instead of fleshfabbers... Barbarians.

All of this strikes me as plausible assuming that we don't run into major catastrophic downturns, which tend to push us towards more tribal behaviors and demand strict adherence to norms (where threatening community stability also threatens community survival). So there's your choice: Burning Man or Walking Dead.

[And that's the extent of my "Walking Dead" reference, btw. No zombies here. :) ]

*Thanks, Adam!

June 18, 2012

New Legacies

One of my favorite posts from my time here at Open the Future has to be Legacy Futures, from late 2008. The concept of a legacy future is simple: it's a persistent but outdated vision of the future that distorts present-day futures thinking. As I suggest in the original piece, the "jet pack" is the canonical legacy future -- just about every futurist you'll ever meet could tell you about being asked when we'll get our personal jet packs. Nine times out of ten, the person doing the asking thinks that they're being clever.

Some of the other legacy futures I brought up in the 2008 essay include Second Life, hydrogen fuel cell vehicles, and population projections that don't account for technological (especially healthcare) changes. I'm less-inclined to include the third one at this point, though -- one important characteristic of a legacy future is that it can conjure up a vision of tomorrow with a simple, usually two-word, phrase. "Jet packs!" "Second Life!" "Fuel cells!" Maybe the future population vision of "overpopulation crisis!" would count.

It's important to remember that the problem with legacy futures isn't that they're actually impossible, but that they're wrong in a fundamental way. The reason we don't have jet packs instead of cars today isn't that jet packs don't exist, nor that there's a grand conspiracy -- it's that there are basic, practical problems with jet packs that remain too difficult to solve.

But of increasing interest to me is the question of what present-day "plausible" visions of the future will be the legacy futures of the next generation. What are the scenarios and assumptions about the world of tomorrow that seem almost inevitable -- or at least common-sense -- today, but will in a few years (or a few decades) be seen as hopelessly off-base, but still shaping how people view The Future?

One obvious candidate is a perspective I found myself immersed in during my trip to Astana: the idea that fossil fuels will remain the primary source of energy for at least the next 40 years. The notion that coal, oil, and natural gas will still be the dominant energy sources in 2050 is utterly conventional wisdom in the energy industry -- unsurprising, perhaps, but it's a perspective that shapes how many political and economic figures think about energy. The only challenge to that view that is considered acceptable is the notion of "peak oil," and then only to dismiss it. The idea that climate disruption would cause a radical shift in energy consumption patterns gets laughed off with comments along the lines of "it would take too long/be too expensive/be simply impossible to replace fossil energy with 'alternative' energy." But you don't have to be a hardcore green to see that the trajectory of energy and industrial technologies is moving quite decisively away from reliance on fossil fuels; even the current spike in natural gas has some big underlying problems (not the least of which is the fact that fracking can and does trigger earthquakes).

Those who are familiar with my work will find this particular argument unsurprising at best. However, I suspect that an equally problematic vision of the future can be found in the scenarios that posit a total civilization collapse due to global warming. While the "global apocalypse" future is pretty commonplace, I'm not sure the global warming variant counts as a potential legacy future -- it's not a concept that's widely-embraced by the public at large, at least not yet. Zombie Apocalypse futures are more popular (but are not true legacy futures, as the idea of a Zombie Apocalypse doesn't really change our behavior).

This doesn't mean that I don't think that global warming will be a problem. But it seems likely to me that the 2012 vision of what a global climate disaster looks like wouldn't really match the reality. It seems more likely, for example, that we'd see parts of the world able to adapt more readily than others -- so an overwhelming catastrophe in (say) India is only a series of manageable disasters in (say) the US. Or a series of rapid-fire improvements and setbacks, unstable but never quite tipping into collapse. Or maybe, just maybe, a catalyst for profound beneficial change.

Reality is always more complex than popular visions of the future would have us think.

But, as I said, what I want to figure out are the legacy futures yet to come. Some candidates:

  • The Singularity
  • End of Scarcity
  • Functional Immortality
  • Everyone on Facebook
  • Robot wars

    All of these are presumably possible in some way, but strike me as very likely to come about in ways that differ considerably from present-day visions.

The first three of these tend to be more popular with the Wired/io9/futurist crowd than with popular culture in the West. The "Facebook is everywhere" future and the robot/drone-warfare future are more widespread. I can't, off the top of my head, think of other proto-legacy futures that are commonplace in pop culture -- the majority of future visions that you find in the mass culture tend to already be legacy concepts.

    I don't have a grand conclusion at this point, but wanted to toss this out there for the massmind to consider. I'll very likely come back to this topic again soon.

May 1, 2012

    The Pink Collar Future


    The claim that robots are taking our jobs has become so commonplace of late that it's a bit of a cliché. Nonetheless, it has a strong element of truth to it. Not only are machines taking "blue collar" factory jobs -- a process that's been underway for years, and no longer much of a surprise except when a company like Foxconn announces it's going to bring in a million robots (which are less likely to commit suicide, apparently) -- but now mechanized/digital systems are quickly working their way up the employment value chain. "Grey collar" service workers have been under pressure for awhile, especially those jobs (like travel agent) that involve pattern-matching; now jobs involving the composition of structured reports (such as basic journalism) have digital competition, and Google's self-driving car portends a future of driverless taxicabs. But even "white collar" jobs, managerial and supervisory in particular, are being threatened -- in part due to replacement, and in part due to declining necessity. After all, if the line workers have been replaced by machines, there's little need for direct human oversight of the kind required by human workers, no? Stories of digital lawyers and surgeons simply accelerate the perception that robots really are taking over the workplace, and online education systems like the Khan Academy demonstrate how readily university-level learning can be conducted without direct human contact.

    With advanced 3D printers and more adaptive robotic and computer systems on the near horizon, it's easy to see that this trend will only continue.

    Except for one arena, that is, and it's a pretty interesting one. Jobs where empathy and "emotional intelligence" can be considered requirements, often personal service and "high touch" interactive positions, have by and large been immune to the creeping mechanization of the workplace. And here's the twist: most of these empathy-driven jobs are performed by women.

    Nursing, primary school teaching, personal grooming -- these jobs require varying levels of education and knowledge, but all have a strong caretaker component, and demand the ability to understand the unspoken or non-obvious needs of patients/students/clients/etc. We're years -- perhaps even decades -- away from a machine system that can effectively take on these roles; a computer able to demonstrate sufficient empathy to take care of a crying kindergartener is clearly approaching True AI status. As a result, we appear to be heading into a future where these "pink collar" jobs -- empathy-driven, largely performed by women -- are the most significant set of careers without any real machine substitute, and therefore without the downward wage pressure that mechanization usually produces.

This raises some big questions, of course, not the least of which is how this will affect the social and economic status of these professions. Nurses may be more valued than surgeons; kindergarten teachers paid better than university professors. Would this lead to a shift in the gender composition of these jobs? In a culture that remains beholden to the concept that men are the "breadwinners," might we see efforts to "masculinize" these roles? Recall that in the United States after World War II, there was a great deal of pressure on women to give up the "Rosie the Riveter"-type jobs they held during the war.

    Conversely, if accelerating mechanization of jobs triggers the emergence of large-scale social support systems (like the Basic Income Guarantee) paid for by "robot taxes," does this mean that outside-the-home jobs are largely performed by women, while men stay at home?

What I'm saying is this: there is a terrible habit that many of us in the futures game seem to have of generalizing potential disruptions. That is, if robots are taking our jobs, then they're taking all of our jobs (except, ideally, for the jobs of futurists) and we start thinking through the implications from there. But disruptions aren't so easily flattened; when Gibson said that "the future's here, it's just not evenly distributed," he wasn't just talking about geography, or even class. Big sociotechnoeconomic shifts don't just appear and redraw the landscape, they have to adapt to the existing conditions, and will themselves be disrupted by deeply-rooted cultural forces. We also have a habit of expecting that the most financially well-off are the most likely to resist big changes -- but what happens when the underlying notions of value themselves are changing?

    February 16, 2012

    Forensic Futurism

If there's a common trope about "futurism," it's that it gets everything wrong.

From jetpacks to vacations on the Moon, any discussion of futurism in broader culture very quickly turns into a listing of the various crazy things that "futurists" (whether or not they'd call themselves that) have said over the past century. Sometimes it's an easy one-off article, sometimes it's an entire book or blog devoted to the topic. Done well, it's a kind of indulgent ridicule: those futurists sure are whacky, but charmingly whacky.

    Anyone who has read my stuff will know that I'm not really fond of being called a "futurist," although it's the most widely-recognized name for what I do. I don't make predictions, and I don't talk in certainties; I'm all about trying to illuminate surprising implications of present-day processes. I don't expect that the scenarios I offer will be right, but I do want them to be usefully provocative.

But that doesn't mean that I'm irritated by the focus on futurists being wrong (although I will admit to being tired of the "jetpack" trope; can't we come up with another stereotyped prediction?). I wrote a piece a while back about "legacy futures," and pay close attention to the responsibility foresight professionals have to acknowledge when they get things wrong.

    So when the term "forensic futurism" showed up today (see the extended entry for how & why), it hit me as something both useful and meaningful.

    It's not enough simply to point and ridicule about whacky futurists. Those of us in the discipline really need to examine why serious forecasts can turn out to be terribly wrong. This takes two related forms:

  • Understanding why forecast X didn't happen as expected. Maybe we thought that certain drivers would continue to be important, or that other drivers wouldn't be important, or perhaps simply never expected a "Black Swan" event. This is a useful practice for all foresight professionals, in order to better understand (and ultimately to communicate) how reasonable expectations can go terribly wrong.

  • Understanding why X was forecast in the first place. This is the more difficult process, as it requires engaging in an objective, dispassionate look at how futurists came to their conclusions. Not simply what they looked at, the lines of evidence they selected as important, but why they chose those lines of evidence in the first place.

    "Forensics" is a process involved in criminology, and I don't want to imply that futurists who get things wrong are doing something of dubious morality or legality. Instead, I'm riffing on the more popularized concept of the process, that of a strictly-evidence-based examination of a mysterious result. Leaping to conclusions, going only by hunches, and other subjective approaches are to be frowned upon; what we want to do is take a serious look at how we think about the future, in order to do so more usefully in the time to come.

    Continue reading "Forensic Futurism" »

January 17, 2012

    The Future Isn't What It Used to Be


    Foresight is not about making predictions. Rather, it's a tool for identifying dynamics of change, in part by exploring the implications of those changes. This is a point I've made often enough that even I'm sick of it -- but it remains an idea that not enough people understand. It's next to useless to say "X will happen;" it's much more valuable to say "here's why X could happen."

    One of the trickier aspects of this formulation of foresight is the need to keep an eye on how the dynamics of change themselves are evolving. It's easy to get locked into a particular idiom of futurism, calling upon standard examples and well-known drivers as we work through what a turbulent decade or three might hold. It's comforting to be able to go back to the old standbys, confident that the audience can sing along.

    Nowhere is this more visible than in the role technological change plays in futurism. The big picture visions of what the next 20-50 years could hold in terms of technologies haven't changed considerably since the beginning of the century, and (for the most part) since the early 1990s. Moreover, what we've seen in terms of real-world, actual technological change has been largely evolutionary, not revolutionary. Or, more to the point, the revolutions that have occurred have not been in the world of technologies.

    Here's what I mean: if you were to grab a future-oriented text from the early part of the last decade, you'd find discussions of technological concepts that radical futurists and "hard science" science fiction writers were seeing as being on the horizon, developments like:

  • Molecular nanotechnology
  • Artificial intelligence and robots galore
  • 3D printers
  • Augmented reality
  • Ultra-high speed mobile networks
  • Synthetic biology
  • Life extension
  • Space colonies

I could go on, but you get the picture. All of those technologies appeared in the "hard science" science fiction game series Transhuman Space, which I worked on from 2001 to 2003. Most could easily be found in various "what the future will look like" articles and books from the late 1990s.

    Since then, some of those concepts have turned into reality, while others remain on the horizon. But pin down a futurist today and ask what technologies they expect to see over the next few decades, and you'll get a remarkably similar list -- often an identical one. As a telling example, the list above could serve as a rough guide to the current curriculum of the Singularity University, minus the investment advice.

    There hasn't been a ground-breaking new vision of technological futures in at least 10 years, probably closer to 15; nearly all of the technological scenarios talked about at present derive in an incremental, evolutionary way from the scenarios of more than a decade ago. The closest thing to an emerging paradigm of technological futures concerns the role of sensors and mobile cameras in terms of privacy, surveillance, and power. It's still fairly evolutionary (again, I could cite examples from Transhuman Space), but more importantly, it's much more about the social uses of technologies than about the technologies themselves.

    For me, that's an interesting signal. In many ways, we can argue that the major drivers of The Future, over the past decade and very likely to continue for some time, are primarily socio-cultural. Unfortunately, for a variety of reasons futurists often are uncomfortable with this line of foresight thinking, and most do it rather poorly. But while those of us in the futures world have been talking about nanotechnology, fast mobile networks, bioengineering and such over the past decade, very few of us even came close to imagining back in the late 1990s/early 2000s that by the early 2010s we'd see:

  • The effective collapse of American hegemony.
  • The inability/unwillingness of world leaders to respond to global warming.
  • The death spiral of the European Union.
  • Accelerating economic inequality.
  • Major changes to global demographics, especially population forecasts.
  • The unregulated expansion of financial instruments based on little more than betting on other financial instruments.
  • That the Koreas would remain divided.
  • That there hasn't been a major biological, radiological, or nuclear terror event.
  • The speed of urbanization, especially in the developing world.
  • The Arab Spring, Occupy, Tea Party, and similar bottom-up political movements.

    And on and on. If futurists have become almost too good at technological foresight, we remain woefully primitive in our abilities to examine and forecast changes to cultural, political, and social dynamics.

    Why is this? There isn't a single cause.

    Some of it comes from a long-standing habit in the world of futurism to focus on technologies. Tech is easy to describe, generally follows widely-understood physical laws, offers a bit of spectacle (people don't ask about "jet packs" because they think they're a practical transit option!), and -- most importantly -- is a subject about which businesses are willing to pay for insights. Most foresight work is done as a commercial function, even if done by non-profit organizations. Futurists have to pay the rent and buy groceries like everyone else. If technology forecasts are what the clients want to buy, technology forecasts will be what the foresight consultants are going to sell.

    Another big reason is that, simply put, cultural/political/social futures are messy, extremely unpredictable, and partisan in ways that make both practitioners and clients extremely vulnerable to accusations of bias. We're far more likely to make someone angry or unhappy talking about changing political dynamics or cultural norms than we are talking about new mobile phone technologies; we're far more likely to be influenced by our own political or cultural beliefs than by our preferences for operating systems. One standard motto for foresight workers (I believe IFTF's Bob Johansen first said this, but I could be wrong) is that we should have "strong opinions, weakly held" -- that is, we should not be locked into unchanging perspectives on the future. Again, this is relatively easy to abide by when it comes to technological paradigms, and much harder when it comes to issues around human rights, economic justice, and environmental risks.

Lastly, there's a strong argument to be made that futurism as practiced (both in the West and, from what I've seen, in Asia) has a strong connection to the topics of interest to politically-dominant males. It would be too easy to caricature this as "boys with toys," but we have to recognize that much of mainstream futures work over the past fifty years (certainly since Herman Kahn's "thinking the unthinkable") has focused on tools of expressing power, and has been performed by men. This is changing; the Institute for the Future employs more women than men, for example. In many respects, futurism in the early 21st century seems very similar to historiography in the post-WW2 era: still dominated by traditional stories of power, but slowly beginning to realize that there's more to the world.

    Howard Zinn was a highly controversial historian, but even those who hate his work can admit that he popularized a perspective on history that simply hadn't received much attention beforehand. History can be about more than what Great Leaders did and said, which Great Wars were fought, and how Great Events Turned the Tide of History; history can be about how regular people lived, slowly-changing shifts in belief, and the complicated aftermath of the Great Moments. Similarly, futurism can be -- needs to be -- about more than transformative, transcendental technologies.

    There's no doubt that social futurism is significantly more difficult than techno futurism. Without a clear model for socio-cultural change, and absent the appearance of a Hari Seldon complete with almost infallible mathematics of social behavior*, we have to go by experience, gut instinct, and the intentional misapplication of training in History, Anthropology, Sociology. But that doesn't mean that good social futurism is impossible; it just means we have to be careful, conscious of the pitfalls, and transparent about our own biases.

    Easier said than done, of course.

    * Void in the case of the Mule.

January 3, 2012

    Our tools don’t make us who we are. We make tools because of who we are.

Cyberculture legend RU Sirius, editor at the Acceler8or webzine, interviewed Joel Garreau and me about the Prevail project. (Short summary for those who missed the earlier post: Prevail is an Arizona State University-sponsored non-profit organization looking to build collaborative knowledge about transformative technologies and culture.) In a series of back-and-forth emails among the three of us, we discussed everything from the logic of transhumanism to the power of the Occupy movement.

    In one of his comments, Joel gives one of the best summaries of the Prevail perspective I've yet seen:

    The heart of Prevail is: perhaps there are two curves of change, not one. If our technological challenges are heading up on a curve, but our responses are more or less flat (like we’re waiting for House Judiciary to solve our problems), the species is clearly toast. The gap just keeps on getting wider and wider.

    But suppose we are seeing an increase almost as rapid in our unexpected, bottom-up, flock-like social adaptations. Then you’d be looking at high-speed human-controlled co-evolution.

    There are reasons for guarded optimism about this.

    In other words, we can't wait for someone else to give us the future; we have to make it ourselves.

    The title of this post is one of my comments from the interview.

    It comes down to humanism.

    One bit of snark I’ve used before is that transhumanists focus too much on the “trans” and not enough on the “humanist.” As I said earlier, I’m more adamant in my anti-Singularitarianism than in my anti-Transhumanism, but in both cases it’s not because I reject the notion that our technologies are changing rapidly. It’s because I firmly believe that it’s not a one-way process. Technologies change us, but we change the technologies, too. Technology is not an external force emerging from the very fabric of the universe (and, as you know, there are some Singularitypes out there who seriously believe that Moore’s Law is woven into the laws of nature); our technologies (plural, lower-case T) are cultural constructs. They are artifacts of our minds, our norms and values, our societies.

    Our tools do not make us who we are. We make tools because of who we are.

    It was a good conversation. Thank you to RU for inviting me along, and thank you to Joel for tolerating my presence!

    December 12, 2011

    The Future is a Virus (my Swedish Twitter University "talk")

    Not literally, of course. But if we think about the future as something that infects us, we gain a new perspective on our world.

    Human civilization has a weak immune system when it comes to futures. We can sometimes recognize when something big is imminent, and act. We rely on clumsy, inefficient tools like finance, religion, even "look before you leap" to make us look forward and consider our choices. So more often than not, we're taken by surprise, shocked when something big happens "out of the blue." We haven't prepared for big changes. Our immune system needs to be strengthened. But how do we do something like that? (I suspect you know the answer.)

    First, a digression: a biological immune system works by encountering a pathogen, then generating antibodies to fight that pathogen. The body now recognizes that pathogen, so if it's encountered again, the body is ready to fight it off. That's roughly how it all works. Now, some pathogens can be deadly, and getting infected the first time doesn't help the immune system if you're dead! But there's a trick. We figured out that infecting the body with a weakened form of a pathogen still triggers the body's immune response, generating antibodies. A vaccination makes the body sensitive to the appearance of a pathogen, and ready to fight--even if you never actually encounter that bug!

    In my view, futurism ("strategic foresight," "scenario planning") is a vaccination for our civilization's immune system. It strengthens us. By introducing us to different possible futures, we become sensitive to those potential outcomes, and able to recognize their early signs. We can think about how we would respond to different futures, and argue about what would be desirable *before* it happens... if it happens. That "if" is important. Most of the forecast futures *won't* happen, and even the "real" future won't look exactly like our scenarios. It will have bits and pieces from multiple forecast futures, and some items that we didn't catch. We'll still be surprised by some things.

    But it turns out that planning for a set of different possible futures is a good way to prepare, even if the real future is different. There's usually enough overlap, enough "economies of scope" allowing plans and solutions built for one issue to be effective for another. And even when reality takes us by surprise, the very act of thinking about, preparing for different futures gives us a better perspective. We're more attuned to how seemingly unrelated factors can combine, leading to novel outcomes. We're sensitive to the power of contingency. Diversity of ideas strengthens us; we're more flexible and adaptive. We can't let ourselves get trapped by thinking about just one future.

    Sadly, many of our world's business, government, and cultural leaders see thinking about the future as silly, or unprofitable, or dangerous. Forecasts that violate dogma or ideology are ignored. Scenarios that demand big changes to head off disaster are rejected as "impossible." Our civilization's body is rejecting its own immune system. We're making ourselves vulnerable because we don't like what we see. But as Bruce Sterling said, "The future is a process, not a destination." We can change this. We have to act to build the future that we want.

    November 29, 2011

    "To Prevail"

    The following is my essay for Joel Garreau's Prevail Project.

I have in front of me a late 1960s advertisement from the Burroughs Corporation. It shows a sketch of a guy — in a snappy suit and crisp haircut — sitting at what one must assume is a Burroughs business computer. A large genie-like figure billows from the machine, and the caption reads “MAN plus a Computer equals a GIANT!”


    I love this image, despite the outdated sexism. It’s a healthy reminder that the notion of computers making humans something supremely powerful (and distinctly no longer human) isn’t just an idea dreamt up in the heady days of the 1990s, as Moore’s Law seemed to be really taking off. It’s been woven into the fabric of our relationship with “thinking machines” for decades. While there may have been no Mad Men-era Singularitarians fantasizing about being uploaded into a B6500 mainframe, it was clear even then that there was something about these devices that went beyond mere tool. They were extensions not of our bodies, but of our minds.

    Of course, anyone sitting down at a 1960s Burroughs business machine right now expecting to become a figurative “giant” is in for a surprise. It may be something of a cliché at this point to note that a cheap mobile phone has far more computing power than a mainframe of a generation or two ago, but it’s true. Yet instead of making us all “giants,” our information technologies played something of a trick: they made us more human. All of the things that humanize us — love, sex, despair, creativity, sociality, storytelling, art, outrage, humor, and on and on — have been strengthened, given new power and new reach by the march of technology, not discarded.

    That’s not the conventional wisdom. Western intellectual culture is in the midst of a civil war between two superficially distinct viewpoints: a claim that transformative information technologies are set to sweep away human civilization, eliminating our humanity even if they don’t simply destroy us, versus a claim that transformative information technologies are set to sweep away human civilization and replace it (and eventually us) with something better. We’re on the verge of disaster or the verge of transcendence, and in both cases, the only way to hang onto a shred of our humanity is to disavow what we have made.

    But these two ideas ultimately tell the same story: by positing these changes as massive forces beyond our control, they tell us that we have no say in the future of the world, that we may not even have the right to a say in the future of the world. We have no agency; we are hapless victims of techno-destiny. We have no responsibility for outcomes, have no influence on the ethical choices embodied by these tools. The only choice we might be given is whether or not to slam on the brakes and put a halt to technological development — and there’s no guarantee that the brakes will work. There’s no possible future other than loss of control or stagnation.

    Such perspectives aren’t just wrong, they’re dangerous. They’re right to see that our information technologies are increasingly powerful — but because our tools are so powerful, the last thing we should do is abdicate our responsibility to shape them. When we give up, we’re simply opening the door to those who would use these powerful tools to manipulate us, or worse. But when we embrace our responsibility, we embrace the Prevail scenario.

    To Prevail is to accept that our technological tools are changing how our humanity expresses itself, but not changing who we are. It is to know that such changes are choices we make, not destinies we submit to. It is to recognize that our technologies are manifestations of our culture and our politics, and embed the unconscious biases, hopes, and fears we all carry — and that this is something to make transparent and self-evident, not kept hidden. We can make far better choices about our futures when we have a clearer view of our present.

    To Prevail is to see something subtle and important that both critics and cheerleaders of technological evolution often miss: our technologies will, as they always have, make us who we are.

    Human plus a Computer equals a Human.

    The Prevail Project

    Joel Garreau has one of the most sensitive radars for big changes of anyone that I know. I first met him back at GBN, and I quickly came to realize that I should pay very close attention to whatever he's thinking about or working on -- and what he's working on now is definitely worth the time to check out.

    The "Prevail Project" (named for one of the scenarios in his book Radical Evolution) at the Sandra Day O'Connor College of Law at Arizona State University is an attempt to draw together people thinking about -- and building -- a livable human future, one that uses (but is not dominated by) transformative technologies.

    Joel's statement in the press release sums up his perspective:

    "Prevailproject.org will be a place for everybody from my mother to technologists inventing the future to grapple with some of the most pressing questions of our time: How are the genetics, robotics, information and nano revolutions changing human nature, and how can we shape our own futures, toward our own ends, rather than being the pawns of these explosively powerful technologies?” said Joel Garreau, the Lincoln Professor of Law, Culture and Values at the Sandra Day O’Connor College of Law at Arizona State University, and director of The Prevail Project: Wise Governance for Challenging Futures.

    “The Prevail Project is a collaborative effort, worldwide, to see if we can help accelerate this social response to match or exceed the pace of technological change,” Garreau said. “The fate of human nature hangs in the balance.”

    I'll set aside my resistance to the traditional "social response to technological change" model to celebrate the placement of this project in the Law School, and not as part of the school of engineering or some technical discipline. It's far too common to see these issues dominated by technologists (and technology-fetishists) with little understanding of law and culture; it's vital to get a more sophisticated understanding of society into the conversation.

    As the Prevail Project kicks off its public unveiling, it has invited a set of writers to offer up their thoughts on what it means to "prevail" in a transformative future. Bruce Sterling's essay went up yesterday; mine went up today.

    November 17, 2011

    Pantheon

    "We are as gods and might as well get good at it." -- Stewart Brand, the Whole Earth Catalog, 1968.

Stewart Brand's observation has simultaneously enchanted, terrified, and driven me ever since I first heard it (probably some 20-25 years after he wrote it). It's both an admonition (we're not very good at being gods) and an encouragement (...but we could be!); Brand saw that our capabilities as humans (when using the tools devised by human minds) equaled or exceeded most of the capabilities of the gods of myth, and even those abilities not yet in our toolkit would likely be right over the horizon. Brand also saw that our sense of ourselves, and our responsibility to the world, remained firmly rooted in simple humanity.

    "We have more power than we think we do," he seemed to be saying, "and we can't use it wisely until we acknowledge that fact."

    The statement can be critiqued from a number of perspectives, and has been. (My own push-back against it these days is that it has the equation exactly backwards. Gods are just people who can truly see the extent of their power.) But there's one observation about the "We are as gods..." line that I haven't seen elsewhere -- and it requires a little digression.

    Matt Jones at BERG London asked me to participate in the "Tomorrow's World" event they were putting on for Internet Week Europe. "A night of drinks and ten minute talks" was the capsule description, and everyone who spoke had been asked to talk about the "near-future of..." some idea. Matt asked me to talk about the near-future of redesigning the planet.

    I'm sure Matt expected that I'd do a quick geoengineering song-and-dance, and that was my original plan. But the more I thought about the topic, lying in bed at 4am cursing jet lag, the more I realized that I needed a different direction. And then I remembered the Brand line, and was struck by something I hadn't heard anyone else say.

    "We are as gods --" okay, but which gods? In our generally monotheistic age, we tend to lump all "gods" and "godlike powers" into a bucket of Almighty Power. But that's not the way humans have thought of gods until relatively recently; for much of human civilization, gods were seen as individuals, with their own personalities, domains, and entries in an AD&D manual.

    We are gods, but we're the gods of an earlier age. Powerful, yes, but petulant; wise yet warlike; arrogant and utterly capricious... and also able to create sublime beauty. The Greek gods were the ones that came to mind last week, but really nearly every mythic pantheon followed a similar pattern.

    We are as gods, but we have gotten pretty good at it -- as long as we remember that this means we are as likely to be Loki as Athena.

    September 28, 2011

    The Foresight Paradox

    In every foresight or forecasting exercise, there are two overarching tensions:

    • The more certain and detailed the forecast, the more people will accept it and believe it to be useful.
    • The more certain and detailed the forecast, the less likely it is to happen.

    This is the foresight paradox: you can be completely accurate, or you can be completely engaging, but you can't be both. As a result, every forecast (or scenario, or prediction) has to find the right balance between the two, trading off likelihood for believability.

    As a simple example: a forecast that says "the next decade will see continued economic disruption" is very likely to be true, but of limited utility and almost no capacity to inspire innovative thought; conversely, a forecast that says "the Eurozone will collapse in the Summer of 2013, leaving EU countries scrambling to find usable currencies, with many temporarily adopting the dollar" is almost certainly not going to happen as described, but offers clear guidance for action, and can inspire novel business and political strategies. If the latter forecast is given by someone in a suit and tie, with a very serious sounding title from a very serious sounding institution, many people will accept it as being much more than informed conjecture -- and will reject the more general forecast as being useless.

    This shouldn't come as a surprise. Precise and detailed forecasts offer structure for thinking, giving the listener a framework upon which to build strategies or make concrete rebuttals. Moreover, there appears to be a psychology of belief that makes people more likely to listen to detailed predictions, offered with certainty and clarity, than to listen to general forecasts, or those offered with plenty of hedges and caveats -- even though the detailed predictions are almost always wrong. (This isn't helped by a media culture that favors the spectacular over the thoughtful, and the adamant over the hesitant.)

    It's not hard to find pundits and self-described futurists who will gladly accept the visibility and attention that comes from making detailed, spectacular predictions, no matter the eventual accuracy. If confronted, they'll mumble something about timing or unpredictable events; such confrontations are vanishingly rare, however, especially for high-profile pundits. It doesn't matter how wrong you are if you get good ratings.

Ethical futurists have a bit more of a dilemma here, however. A forecast needs to be vivid and engaging enough to trigger action, yet general and cautious enough to engender restraint. Or, as I put it in one interview, it should be wrong in useful ways.

    The simplest approach is to keep forecasts as general as possible, using detail only when well-supported by evidence. With this method, the emphasis is on the present-day and near-term drivers that lead towards the (more general) future. There is a temptation to over-emphasize the visible, and not leave enough space for wild cards and "black swans," however. The core quandary remains knowing how general and cautious one can be while still offering useful insights, and how specific and detailed one can be while still not leading the audience astray.

    Another fairly straightforward method is to use a more detailed forecast, but emphasize the uncertainty from the outset, being clear to the audience that the real outcomes will vary. The given forecast should be considered an example, not a certainty, a possible future that fits within a broader framework. Audiences don't always respond well to that approach, however; in some cases, they'll still take the example future to be the "real prediction," and in others can interpret an emphasis on caution to mean that the futurist really doesn't know what she or he is talking about.

    My preferred approach is to use scenarios, essentially giving multiple examples within the general framework. This illustrates the shape of the broader framework better, and makes clear that no one specific forecast is the "real prediction." Yet the problems with this approach are manifold: coming up with three to five internally consistent forecasts is significantly harder than just coming up with one; audiences will gravitate towards preferred scenarios, sometimes ignoring those that don't turn out in ways they like; and it's difficult to encapsulate multiple scenarios into a short presentation or statement without rendering them meaningless.

    This last problem is one that I've encountered quite a bit recently. There seems to be a trend in conferences right now (especially in Europe) to limit presentations to 15 minutes. Although there are definite benefits to this approach (most notably in maintaining audience interest), it means that any foresight-based presentation is crippled. A speaker simply doesn't have the time to offer multiple scenarios in anything other than a bullet point/headline format, surrounded by lots of big idea framing to give the scenario headlines some context (the talk I gave at the Guardian Activate Summit in London last year is probably my best effort at doing this).

    Unfortunately, audiences don't respond as well to multiple scenarios as they do to single, detailed forecasts, even when they know the detailed forecasts will inevitably be wrong. Moreover, appearances limited by time (such as, in particular, television) make even the headline scenario approach difficult. The best one can do -- in my experience, at least, and I'd love to hear better suggestions -- is to be sure to offer caveats and use cautious language such as "appears to," "likely," and especially "one possibility" (or similar statements underlining that different outcomes are possible).

    The modern spectacle-driven media loathes uncertainty, and will almost always give more attention to aggressive certitude (no matter the accuracy) than caution. Many business audiences feel the same way. Sadly, the foresight paradox boils down to this:

    The futurists who get the most attention are usually the least accurate.

    August 30, 2011

    Living in a Scenario

There's something of a rule-of-thumb among professional futurey-types: scenario elements that sound plausible are almost certainly wrong, while scenario elements that sound utterly implausible are very likely on-target. That's generally true, although it applies more to the disruptive aspects of a scenario than to the everyday aspects. (That said, a scenario stating that "most people in the West continue to live quiet lives, using their barely-sufficient income to pay for disposable commodity goods and overly-processed food," while both plausible and very likely on-target for the next decade or three, is more depressing than illuminating.) Good scenario disruption points should be things that, in the here-and-now, would make you say "oh, crap" if you heard them in the news.

    Oh, crap.

    Nanotechnology researchers in Mexico, France, Spain, and Chile have been targeted by a terror group calling itself "Individuals Tending Towards Savagery," and claiming to be inspired by the Unabomber.

    Unabomber-copycat terror cell hits nanotech researchers in the developing world and Europe -- I'm not sure anything could sound more like a headline from a scenario exercise.

You can find the manifesto of the group (in Spanish) here (this is not the group's website, but a site that republishes relevant material); a Google translate version in English is here. The translation is a bit spotty in places, but gets the message across. For me, the most unsettling part is that (a) I know several of the people they mention as villains, and (b) I fit their criteria for potential targets.

Reading the piece is like a checklist for a scenario's anti-technology movement: beyond the approving Unabomber citations, they have quotes from Bill Joy's "Why the Future Doesn't Need Us," misunderstandings of what nanotechnology is and isn't, and intimations of further violence against researchers, along with (now trendy!) attacks on Facebook for destroying the ability of young people to think. For the record, I don't believe that Joy or any of the other non-Unabomber folks whose writing they cite approvingly (explicitly or implicitly) would in any way support this group.

    But this is why I keep writing pieces like "Not Giving Up" and "Sanity" -- reminders (especially to myself) that the way forward is going to be filled with danger, but we can't let danger -- and chaos, and despair, and the relentless demands that we just give up -- be the only option.

    I've been thinking, recently, that one way to define "progress" is "when the future turns out better than we expect it to be." Given how grim things seem to be, and how many signals of disruption we seem to be getting, I can only hope that we'll be seeing a bit of progress any time now.

    August 9, 2011

    About Foresight (a minor rant)

    Why worry about tomorrow? After all, according to one of our most respected thinkers, "always in motion is the future."

    It's a reasonable question. Consistently accurate predictions about interconnected complex systems are functionally impossible, at least at any real level of specificity. It's long been known that even people paid far too much money to make predictions about a constrained system (such as the stock market) usually do no better -- and typically worse -- than a chimpanzee flinging darts (or whatever else the chimp feels like flinging). One of the best-selling books about foresight in recent years -- The Black Swan -- essentially argued that trying to glimpse the future was worse-than-useless, because it would get you locked onto the understandable (but actually unlikely) and make you miss the seemingly impossible (but actually inevitable). Failed predictions and futurism go hand-in-hand, to the point where the first thing that someone identifying himself/herself as a futurist is typically asked is some variant of "where's my jetpack?"

The conventional image of a "futurist" is that of someone who speaks with certainty about the yet-to-come, making bold predictions of headline-generating changes... and never really being held to account when those predictions fail to be realized. (In fact, there's a weird pathology at work in the traditional media and political worlds: the only way to be taken seriously is to be repeatedly wrong, but in acceptable ways. Being right, when the conventional wisdom was wrong, will get you ignored.) J. Random Futurist gets quoted on CNBC one day saying that Facebook is undervalued, and will soon be rich enough to buy a small country, and quoted on FBN the next day saying that Facebook is doomed, DOOOOOMED, because of what Google just unveiled. This isn't informative, and it isn't illuminating; at best, it's infotainment.

    Conventional futurists are the Michael Bays of the intellectual world: what they produce can be spectacular and amusing, but is ultimately hollow and depressing.

    July 26, 2011

    Sanity

    Yesterday, on Twitter, I posted this:

    When the present is filled with tragedy and idiocy, focusing on the future is my way of staying sane.

    That was my oblique reaction to the terrorism in Norway and the disastrous efforts to deal with the debt ceiling in the United States. With the first, you have a hardcore Christian terrorist attempting to kick-start a war against Muslims and Secularists in Europe by attacking "race traitors." With the second, you have a hardcore Teahadist movement in the U.S. House of Representatives refusing to act to avoid what will end up being a major economic catastrophe, seeing it instead as an engine of political change. The first was a spike of murderous violence, while the second would likely kill more people over time. That both come from a right wing perspective is secondary to the fact that both embrace the idea that the only way forward is through intentional chaos.

    (And I don't buy that the two perspectives are inherently linked. I've met enough self-identified conservatives who don't want to destroy the world, and enough self-identified progressives who see disaster as a mechanism for changing climate policy, to know better.)

    My post resulted in some welcome, and very thoughtful, replies. The most salient boiled down to the observation that the future is a result of the present, and that what we do now shapes what we can do in the days and years to come. Such observations are, of course, quite correct.

    But what I was trying to express was something a bit different. What focusing on the future does, here, is provide context. When tragedy and idiocy are so visible, it's easy to forget that these aren't permanent conditions, nor are they all that's out there. But these stories so quickly become a smothering shroud, blocking out all else and making any thoughts about the future seem pointless. It's a trap, a distraction at best and a pit of despair at worst.

    Focusing on the future is a way of reminding myself that these aren't stopping points, and that--awful as they are now--they will, in time, be largely forgotten. Not that they're not serious, not that they're not important... but that they are, ultimately, a part of history, not the end of history.


    Nothing ever ends

    June 28, 2011

    Not Giving Up

About ten years ago, I found myself sitting on the floor of my San Francisco Bay Area apartment, hoping that the call I was on wasn’t going to drop yet again. At the other end of the line was a Seattle public radio station, hosting a live debate/conversation between me and computer scientist Bill Joy on the question of whether our technologies were going to kill us; at that point, my main concern was whether our technologies would even work. Joy had recently published his infamous “Why the Future Doesn’t Need Us” essay in Wired, and was still charged with the fiery nihilism of his argument that we are less than a generation away from nano-, bio-, and information technologies that would fundamentally transform — in a bad way — human society and the human species. Joy was convinced that these emerging technologies would cause our extinction, and that the only hope for humanity was to give up entirely on these innovations.

    Joy was suffering from the same repeated disconnection problem I was wrestling with, but didn’t seem to appreciate the irony of the situation: here he was arguing that all-powerful technologies on the near horizon would inevitably destroy us, even while a ubiquitous and more-than-a-century-old technology remained stubbornly unreliable.

    It’s a theme that would recur in countless arguments and debates I’d find myself in over the years. Usually, my sparring partner would claim (like Joy) that transformative technologies were about to sweep away human civilization, eliminating our humanity if they don’t destroy us completely. The only weak hope we might have would be to get rid of them — call this the Rejectionist perspective. Occasionally, however, the claim would be that transformative technologies were about to sweep away human civilization and replace it (and eventually us) with something better. This future was being driven by forces beyond our understanding, let alone control — call this one the Posthumanist argument. Each claim is a funhouse mirror of the other: We are on the verge of disaster or on the verge of transcendence, and the only way to hold on to our humanity in either case would be to disavow what we have made.

    And they’re both wrong. More importantly, they’re both dangerous.

    Our technologies are not going to rob us (or relieve us) of our humanity. Our technologies are part of what makes us human, and are the clear expression of our uniquely human minds. They both manifest and enable human culture; we co-evolve with them, and have done so for hundreds of thousands of years. The technologies of the future will make us neither inhuman nor posthuman, no matter how much they change our sense of place and identity.

    The Rejectionist and Posthumanist arguments are dangerous because they aren’t just dueling abstractions. They have increasing cultural weight, and are becoming more pervasive than ever. And while they superficially take opposite views on technology and change, they both lead to the same result: they tell us to give up.

    By positing these changes as massive forces beyond our control, these arguments tell us that we have no say in the future of the world, that we may not even have the right to a say in the future of the world. We have no agency; we are hapless victims of techno-destiny. We have no responsibility for outcomes, have no influence on the ethical choices embodied by these tools. The only choice we might be given is whether or not to slam on the brakes and put a halt to technological development — and there’s no guarantee that the brakes will work. There’s no possible future other than loss of control or stagnation.

    Today, Rejectionists like writer Nicholas Carr and MIT social scientist Sherry Turkle argue passionately that a new wave of digital technologies is crippling our minds and breaking our social ties. Their solution is to (paraphrasing the words of William F. Buckley) “stand athwart history yelling Stop!” While their visions are less apocalyptic than Joy’s tirade, they’re more directly relevant for many people, and ultimately have the same ends.

    The Posthumanist side is no less active. The godfather of the concept, technologist Ray Kurzweil, continues to churn out books and interviews telling us that the Singularity is near, a claim that seems to have special attraction for many tech-savvy young men. But like the Rejectionist perspective, Posthumanist arguments have mutated into new forms linked to current debates. Venture capitalist Peter Thiel (co-founder of PayPal and currently on the board of directors at Facebook), for example, insists that his investments in Singularity technologies will allow him to create a future devoid of politics — and arguing, infamously, that true freedom is incompatible with democracy.

    Technology is part of who we are. What both critics and cheerleaders of technological evolution miss is something both subtle and important: our technologies will, as they always have, make us who we are—make us human. The definition of Human is no more fixed by our ancestors’ first use of tools, than it is by using a mouse to control a computer. What it means to be Human is flexible, and we change it every day by changing our technology. And it is this, more than the demands for abandonment or the invocations of a secular nirvana, that will give us enormous challenges in the years to come.

    I'm looking forward to it.

    May 12, 2011

    Sent this in Email Today

    There's really only been one social institution that's been able to get people to work hard on changes/solutions that they'll never see come about: religion.

    That leaves us with two real choices:

    • We figure out what it is about religion that has managed to do this, and try to replicate it in a non-religious arena -- something that political and military institutions have been trying to do for a very long time, without much success.
    • We try to embed sustainability/innovation/foresight discourse into existing religious institutions. A lot of us secular humanist types are going to be awfully uncomfortable with that.

    The latter will happen without much intervention on our part post-disaster, but I'd rather not take that course.

    So the big question, then, is how we can reverse-engineer religion such that we can make use of the persuasive aspects without having to bring over the mythical aspects...

    April 26, 2011

    Listening to Foresight

When people learn that I'm a professional futurist*, almost invariably the immediate response is the question "what predictions have you gotten right?"

My usual answer is to argue that prediction isn't what futurists do these days -- we're all about illuminating possible implications of choices, and so forth. It's not a terribly satisfying answer, but it's better than the alternative: it doesn't matter.

    It doesn't matter what predictions I may have gotten right because, when futurists make detailed predictions, the intended audiences rarely ever listen. There are all sorts of reasons for this, and it's worth exploring.

    When you have a spare 15 minutes, watch this video:

    For those of you unable/unwilling to watch, here's the summary: It's a 1994 video from Knight-Ridder's Information Design Lab, talking about the potential development of a "tablet newspaper." Knight-Ridder was a big American newspaper publisher, and in 1992 it established the Information Design Lab as a way to visualize and even build future newspaper technologies. This video illustrated what its proposed newspaper tablet would look like, and how it would work. It's worth watching just for the details they present.

    It's a remarkable bit of future artifact creation, as much of the forecast ended up playing out in the subsequent 17 years much as the IDL described. As predictions go, it was usefully on-target.

    At least, it could have been useful had anyone been paying attention. The IDL was closed the next year, its forecasts essentially forgotten. Knight-Ridder itself was bought out in 2006.

    Much of the attention the video has received in the last week has focused on how closely the imagined tablet looks like an iPad. It does, I suppose, but I'm a bit less excited about that -- the iPad is a flat, thin slab in black trim, which is hardly a radical departure from what a tablet computer had been imagined looking like since the late 1960s.

What leapt out at me, instead, was the video's prescience about how a digital newspaper would function: the use of the conventional newspaper form as a recognized interface; the seamless leap from headline to full story; the use of animation and video integrated with the text; the lack of limits on space; even the need to pay for the news via advertising. It was clear that the designers of the tablet newspaper in the video had given careful thought to the evolution not just of digital hardware, but of user interfaces. Remember, in 1994 the web still looked like the image to the right.

    What IDL described was a world where people could access their preferred newspapers from anywhere, where readers could copy and share articles as desired, where targeted advertising was a necessary component of newspaper economics... essentially, a world where people grappled with news media in ways very much like today's reality. If you substitute notebook computers for tablets, even their otherwise-optimistic timeline (with the big developments hitting around 2000) was more-or-less on target.

They didn't get everything right. Some of what they got wrong was minor, if amusing, such as the use of a stylus or PCMCIA-sized memory cards. Some errors, however, were more critical. Pronouncements that people like advertising nearly as much as they like the news, or that people don't want generic news items but rather a branded news source, suggest an unwillingness to examine basic assumptions about the behavior of newspaper readers.

    But such a re-examination of assumptions could have happened, eventually, had the newspaper industry -- or even Knight-Ridder itself -- taken seriously the forecast in this video.

    So why didn't they?

I think the answer lies in a combination of three reactions I've seen time and again: a perception that the forecast or prediction is impossible, unacceptable, or scary.

  • The forecast future is impossible: what is described is so outside how we understand the world that we can't see how we get from here to there. Therefore, we can ignore it.

    This is a reasonable filter to have regarding forecasts; if the prediction is flat-out impossible, it's not worth the time and focus required to engage with it. The typical inaccuracy of narrow predictions -- almost always wrong, and in big ways -- further helps to feed this response. Why pay attention if it's not going to happen this way anyway? This has been one of the big drivers for futurism moving away from prediction, and towards scenario-based, implication-driven approaches.

  • The forecast future is unacceptable: what is described, while technically believable, is outside of what we deem "right" for the industry/society, or has elements that don't fit our knowledge of how the industry/society works. Therefore, we can dismiss it.

    This often is translated into "you just don't understand our industry" when outside futurists propose unacceptable forecasts. And that's sometimes true -- but it's also often the case that an outside perspective is able to catch inconsistencies and oddities that for insiders have essentially become invisible.

  • The forecast future is scary: what is described, while both believable and plausible, would be devastating to us or to our industry/society. Therefore, our only choice is to reject it.

    This is precisely the situation that foresight is supposed to help an organization deal with. Sadly, this reaction is more common than you might expect.

    All three of these reactions may have come into play with the "tablet newspaper" video. The world described projected technological developments that might have seemed silly to people accustomed to giant CRT monitors, clumsy PDAs, and 16Kbps dial-up modems (and, in fact, the IDL timeline for the development of the tablet technology was overly aggressive). For some viewers in the industry, this future might have seemed so unlikely as to be impossible. The scenario presented also shifted the locus of power between the newspaper, advertisers, and readers -- readers would have much more control over what they accessed, and advertisers would have a more direct relationship with the readers, with less mediation by the newspaper. For some in the newspaper industry, this would have been unacceptable and readily dismissed. And for the viewers who could imagine some of the implications -- advertisers no longer needing newspapers at all, or readers pulling together diverse news sources instead of remaining loyal to a local paper -- the conclusions would have been potentially terrifying.

    (There's also the likelihood that most people in the industry didn't even know about the video, or didn't pay attention to that future stuff because they were too busy dealing with day to day crises.)

    This leaves foresight professionals in a bit of a quandary. How do you respond to that kind of refusal to acknowledge, let alone think through the implications of, provocative forecasts?

    Each of the reactions has, arguably, a counter.

  • Finding supporting evidence can be a counter for the "impossible" reaction. The IDL video points at a few items that support the technological underpinnings, but a stronger case could have been made that this was a compelling vision of what the future could hold.

  • Bringing in diverse perspectives can be a counter for the "unacceptable" reaction. A single outside voice saying "look out" may be readily ignored, but a diverse set of outside voices, from a variety of disciplines, saying "look out" might be taken more seriously. This could be compounded if some of the outside voices come from parallel fields that have experienced similar changes. The IDL video relied solely on people within the journalism community.

  • Stimulating competitive instincts can be a counter for the "scary" reaction. Just because one audience finds the narrative too scary to contemplate doesn't mean that all audiences will. Pushing the argument that the group that figures out how to deal with the scary scenario first is likely to have big advantages over its competitors is one way to get over the "scary" barrier. In the case of the newspaper industry, unfortunately, nobody seemed willing to take that step.

    Everyone who has done futures work has had a "newspaper tablet" moment of their own, to one degree or another. In many ways, the hard part of this discipline isn't in the process of coming up with useful and provocative scenarios. The hard part is making sure that the people who need to pay attention to them do so.


    * As I've noted, I don't like the term "futurist," but there's not much in the way of an alternative term that is easily recognized.
March 29, 2011

    Evolution

    At the Institute for the Future's 2011 Ten Year Forecast event in late March, I presented a long talk on ways in which evolutionary and ecological metaphors could inform our understanding of systemic change. The head of the Ten Year Forecast team, IFTF Distinguished Fellow Kathi Vian, thought that the ideas it contained should get a wider viewing, and asked me to put the talk on my blog. Here it is. It's lightly edited, and only contains a fraction of the slides I used; let me know what you think.

    We’ve now reached the part of the day where I’ve been asked to make your brains hurt. Don’t worry, there will be alcohol afterwards.

    The first thing I’m going to do, of course, is talk about dinosaurs.

    Everybody knows about dinosaurs, right? Giant, lumbering lizards that were killed off by an asteroid just when the smarter, more nimble mammals were starting to take over anyway. And everyone knows what dinosaur means as a metaphor: big, stupid, and about to be wiped out. Nobody wants to be a dinosaur.

    What if I told you that all of that – all of it – was wrong?

    Here’s another dinosaur:

    It turns out that most dinosaurs were actually pretty small and fast, and far more closely related to today’s birds than to lizards.

    Some dinosaurs we might envision as scaly monsters from the movies were likely actually feathered. It’s widely accepted, in fact, that dinosaurs didn’t all die off when that asteroid struck 65.5 million years ago — they stuck around as birds.

    Oh, and one other thing.

    The “age of dinosaurs” lasted 185 million years, not counting the 65 million years of dino-birds. And mammals first emerged about halfway through the “age of dinosaurs,” and were stuck scurrying around between dinosaur legs, trying to avoid being eaten.

    Dinosaurs have been around, including as birds, for 250 million years. Humans, conversely, have been around in a form recognizable as Homo sapiens for only about 250 thousand years. Dinosaurs have had a thousand times more history than has Homo sapiens.

    And they survived – arguably eventually thrived after – one of the biggest mass extinctions in Earth’s history. Maybe being a dinosaur wouldn’t be such a bad thing.

    The story of dinosaurs is a particularly vivid example of what happens after complex systems face traumatic shocks. It’s a story of change and adaptation. And it’s one that we can learn from.

    This will come as a surprise to precisely none of you, but one of the areas that I studied academically was evolutionary biology. Although I didn’t follow that path professionally, I’ve always kept my eyes open for ways in which bioscience can illuminate dilemmas we face in other areas.

There’s one concept from biology that I’ve been mulling over for a while, and I think it has quite a bit to say about our current global situation.

    It’s an element of the concept of “ecological succession,” the term for how ecosystems respond to disruptive change. A fundamental part of that process is the “r/K selection model,” with a little r and a big K, which is a way of thinking about the reproductive strategy that living species employ within a changing environment.

Biologists Robert MacArthur and E.O. Wilson came up with this concept over 40 years ago, and it’s proven to be a useful lens through which to understand ecosystems.

    Species that use the “r” strategy tend to have lots of offspring, but devote little time or energy to their care. Even though most will die, the ones that survive will have a bunch of offspring of their own, hopefully carrying the same advantages that let their progenitors survive. Because these species are optimized to reproduce and spread quickly, we humans often think of them as weeds, “vermin,” and other pests.

    Species that use the “K” strategy tend to have very few offspring, and devote quite a bit of time and energy to their care. Survival rates for the progeny are much higher, but the loss of an offspring is correspondingly more devastating. K species are optimized to compete for established resources, and tend to be larger and longer-lived than r species. Humans are on the K side of the spectrum, which might be why we often tend to sympathize with other K species.

    As I said, Ecological Succession is what happens when an ecosystem has been hit with a major disturbance. When various species come to re-inhabit the area, it follows a pretty standard pattern.

    While conditions are still unstable, r species dominate. Although they may not be ideally suited for the changing environment, they reproduce quickly. r strategies promote rapid iteration, diversification, and a willingness to sacrifice unsuccessful offspring.

    As an ecosystem returns to stability, K species start to take over. Sometimes they’ll evolve from r species, sometimes they’ll come in from other locations. Species that employ K strategies evolve to fit their environmental niche as optimally as possible, seeking out the last bit of advantage over ecological competitors.

    This is the typical pattern, then: disruption, r dominance, increased stability, K dominance.

    But in periods when the volatility itself is sporadic, things get weird. Think of it as “unstable instability:” disruptions happen unpredictably, with long enough periods of stasis for the normal ecological succession pattern to start to take hold – then wham! A spike of instability. But this doesn’t eliminate the K strategists; they can reemerge once stability returns.

    In this kind of environment, K approaches and r approaches trade off, neither gaining dominance. This is the kind of setting that accelerates change. Arguably, extended periods of unstable instability have been engines of radical evolution. They appear at numerous points in our planet’s history, and nearly always have a major impact.
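    (A quick aside, not part of the original talk: the succession pattern is easy to see in a toy simulation. Every number below is arbitrary, chosen only to make the dynamics visible -- with rare shocks, the K-strategists end up holding most of the niche, while frequent, unpredictable shocks keep the r-strategists clawing their way back. The function names and parameters are mine, purely illustrative.)

```python
import random

def step(r_pop: float, k_pop: float, capacity: float = 1000.0):
    """One year of growth and competition for a shared niche (toy numbers)."""
    crowding = max(0.0, 1.0 - (r_pop + k_pop) / capacity)  # how much open space remains
    r_pop += 0.8 * r_pop * crowding        # r: many offspring, fast spread into open space
    k_pop += 0.1 * k_pop * crowding        # K: few offspring, slow spread
    displaced = min(0.5 * r_pop, 0.03 * k_pop)   # K: steady competitive edge once things settle
    return r_pop - displaced, k_pop + displaced

def simulate(years: int, disturbance_chance: float, seed: int = 1) -> float:
    """Return the average share of the ecosystem held by r-strategists."""
    random.seed(seed)
    r_pop, k_pop, r_share_total = 50.0, 50.0, 0.0
    for _ in range(years):
        if random.random() < disturbance_chance:  # a shock hits...
            r_pop *= 0.5                          # ...r loses many of its cheap offspring
            k_pop *= 0.1                          # ...K loses its few, costly offspring badly
        r_pop, k_pop = step(r_pop, k_pop)
        r_pop, k_pop = max(r_pop, 1.0), max(k_pop, 1.0)  # recolonization from outside the area
        r_share_total += r_pop / (r_pop + k_pop)
    return r_share_total / years

# Compare a long, mostly stable run with a run full of unpredictable shocks.
print("average r-share, rare shocks:    ", round(simulate(500, 0.01), 2))
print("average r-share, frequent shocks:", round(simulate(500, 0.15), 2))
```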

In this kind of setting, imagine the impact of a species able to shift rapidly between r and K, optimizing when possible, rapidly iterating when necessary. Such as, to pick a random example, Homo sapiens, us. We’ve been able to adapt to changing environmental conditions through technological innovation, using rapid iteration of tools to enable biological stability.

    The appropriate question, now, is “so what?”

    The language we employ when we do foresight work is often intensely metaphorical. We focus on events yet to fully unfold, new processes overshadowed by legacies, and weak signals amidst the noise of the now. And because of this, we often find ourselves reaching for familiar concepts that parallel the story we’re trying to tell.

As I suggested earlier, the r/K selection and ecological succession models offer us some insights into what’s happening in the broader global political economy.

    Human enterprises, whether business or government, civil or military, aren’t precisely biological species, but you can see numerous historical parallels to the ecological succession logic. The traumatic chaos of World War Two, with its vicious, rapidly-evolving competition between myriad nations, becomes the long stability of the Cold War era, dominated by two superpowers. Or, less bloody, think of the rapid churn of Silicon Valley startups trying to take advantage of a new innovation, and eventually settling into a small number of dominant players.

    Time and again, disruptive events lead to periods of experimentation and diversity, which over time crystallize into more stable institutions. But if that was the extent of the metaphor, it might be of passing interest, and not of much value.

It’s the periods of unstable instability, where r and K make strange bedfellows, that pull us in.

A useful example is the era of the late 1800s to early 1900s. There was instability, to be sure, matched by periods of recovery and growth – but none of it proved able to last.

    And alongside these economic and political convulsions came an enormously fruitful era of technological and social innovation. Much of the technology that so dominates our day-to-day lives that we even sometimes stop thinking of it as technology – airplanes, automobiles, air conditioning, electricity to the home, the incandescent light bulb, and so on – emerged in the period between 1880 and 1920.

    So, too, did social transformations like the labor movement, the progressive model of governance, and, in the US, women’s suffrage.

    I believe that there’s a good case to be made that we’re now in a similar era of unstable instability. Disruptions, when they hit, are intense, but there’s an equally powerful drive to stabilize. Innovations arise across a spectrum of technologies, but quickly become old news. There are major conflicts over social change. Both recoveries and chaos have strong regional and sectoral dimensions, and can flare up and die off seemingly without notice.

    It would be dangerous to rely on strategies – reproductive or otherwise – that assume the continuation of either stability or instability. This period of unstable instability has been with us for at least the last decade, and will very likely continue for at least another decade more.

    And this suggests that neither relying upon scale and incumbency nor relying upon rapid-fire iteration will succeed as fully and as dependably as we might wish. We can’t depend on either the garage hacker or the global corporation to push us to a new phase of history. It’s going to have to be something that manages to combine elements of both flexible experimentation and long-term strategy. Something that puts r in service of K.

    This doesn’t mean that r is less important than K; we could just as easily call it “K enabling r.” Either way, it comes down to strategies that take advantage of scale and diversity, that allow both long-shot experimentation and quick adoption of innovation. Decentralized, but collaborative.

    Strategies, in other words, that are resilient.

    Resilience is a concept we think about quite a bit at IFTF, and if you’ve been involved in engagements with us over the past couple of years, you have probably heard us talk about it. It’s the ability of a system to withstand shocks and to rebuild and thrive afterwards. We believe that it will be a fundamental characteristic of success in the present decade, and Kathi will talk more about it tomorrow.

But from the ecological perspective, resilience interweaves r and K, containing elements that we might consider to be “r” in nature, as well as elements we’d consider to be “K.”

    Resilience is the goal. r in service of K is the path.

    Now, I said a moment ago that this “unstable instability” is likely to last for at least another decade. I’m sure we could all spend the next hour coming up with reasons why that might be so, but one that I want to focus on for a bit is climate disruption. In many respects, climate disruption is the ultimate unstable instability system.

    Climate disruption is something that comes up in nearly all of our gatherings these days, and I don’t think I need to reiterate to this audience the challenges to health, prosperity, and peace that it creates.

    We’ve spent quite a bit of time over the last few Ten Year Forecasts looking at different ways we might mitigate or stall global warming. Last year, we talked about carbon economies; the year before that, social innovation through “superstructures.” In 2008, geoengineering. This year, I want to take yet another approach. I want to talk about climate adaptation.

    I say that with some trepidation. Adaptation is a concept that many climate change specialists have been hesitant to talk about, because it seems to imply that we can or will do nothing to prevent worsening climate disruption, and instead should just get ready for it. But the fact of the matter is that our global efforts at mitigation have been far too slow and too hesitant to have a near-term impact, and we will see more substantial climate disruptions in the years to come no matter how hard we try to reduce carbon emissions. This doesn’t mean we should stop trying to cut carbon; what it does mean is that cutting carbon won’t be enough.

    But adaptation won’t be easy. It’s going to require us to make both large and small changes to our economy and society in order to endure climate disruption more readily. That said, simply running down a checklist of possible adaptation methods wouldn’t really illuminate just how big of a deal adaptation would be. We decided instead that it would be more useful to think through a systematic framework for adaptation.

    Our first cut was to think about adaptations in terms of whether they simplify systems – reducing dependencies and thereby hopefully reducing system “brittleness” – or make systems more complex, introducing new dependencies but hopefully increasing system capacity.

    Simplified systems, on the whole, tend to be fairly local in scale. But reducing dependencies can also reduce influence. Simplification asks us to sacrifice some measure of capability in order to gain a greater degree of robustness. It’s a popular strategy for dealing with climate disruption and energy uncertainty; the environmental mantra of “reduce, reuse, recycle” is a celebration of adaptive simplification.

Adaptation through complexity creates or alters interconnected systems to better fit a changing environment. This usually requires operating at a regional or global scale, in order to take advantage of diverse material and intellectual resources. Complex systems may have increased dependencies, and therefore increased vulnerabilities, but they will be able to do things that simpler systems cannot.

    So that’s the first pass: when we think about adaptation, are we thinking about changes that make our systems simpler, or more complex?

    But here’s the twist: the effectiveness of these adaptive changes and the forms that they take will really depend upon the broader conditions under which they’re applied. We have to understand the context.

    At last year’s Ten-Year Forecast, we introduced a tool for examining how choices vary under different conditions. It’s the “alternate scenario archetype” approach, and it offers us a framework here for teasing apart the implications of different adaptive strategies. If you were here last year, you’ll recall that the four archetypes are Growth, Constraint, Collapse, and Transformation. These four archetypes give us a basic framework to understand the different paths the future might take.


    But let’s also apply the ecosystem thinking I was talking about earlier. With this in mind, we can see Growth and Collapse as aspects of the standard ecological succession model: Growth supports K strategy dominance, until we get a major disruption leading to Collapse, which supports r strategy dominance until we return to Growth. As it happens, while they may not use this exact language, many of the long-term cycle theories in economist-land map to this model.

    Constraint and Transformation, however, seem more like unstable instability scenarios.

Constraint and Transformation have quite a bit in common. Both can be seen as being on the precipice of either growth or collapse, and needing just the right push to head down one path or the other. At the same time, both will contain pockets of growth and collapse, side by side, emerging and disappearing quickly. In both, previously well-understood processes no longer seem to work as well, yet there’s enough that remains functional and understandable that the world doesn’t simply spin apart. For both, the underlying systems are in flux.

    With Constraint, the result is a reduced set of options. The uncertainty and churn limit what you can do.

    With Transformation, the result is the emergence of new models and new opportunities.

    So we have two adaptive strategies – simplify and complexify – and two conditions of “unstable instability” – constraint and transformation. What do you do when you have two variables? You make a matrix!

    Ah, the good old two-by-two matrix. So let’s put up the conditions, and the strategies. What happens when we combine them?


    In many ways, Constraint and Simplification go hand-in-hand, giving us a world of doing more with less. Smaller scale, fewer resources, and a need for cheap experimentation: this is very much an “r” world.

    Similarly, Transformation and Complexification are also common partners, resulting in a world focused on big ideas and long-term results. The potential is here for major changes, but failures can be catastrophic: it’s a classic “K” world.

    The less-common combinations, however, prove pretty interesting.

When you link Constraint and Complexification, you get a world of deep interconnection: lots of small components in dense networks. There’s quite a bit of interdependence, but no one element is a potential “single point of failure.” This is an “r in service of K” world.

And when Transformation and Simplification come together, you get a world of fast iteration and slow strategy: numerous projects and experiments functioning independently, with loose connections but a long-range perspective. This, too, is an “r in service of K” world.
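    (For readers who think better in code than in quadrants, here is a minimal sketch of the same two-by-two as a lookup table. The structure and the "r"/"K" labels come straight from the matrix above; the one-line summaries are my paraphrases, and the names ADAPTATION_MATRIX and world_for are purely illustrative.)

```python
# The two-by-two matrix as a lookup table: (condition, adaptive strategy) -> the
# kind of world described above. Summaries paraphrase the four combinations.
ADAPTATION_MATRIX = {
    ("Constraint", "Simplify"):
        "Doing more with less: small scale, few resources, cheap experiments (an 'r' world)",
    ("Transformation", "Complexify"):
        "Big ideas and long-term results, with potentially catastrophic failures (a 'K' world)",
    ("Constraint", "Complexify"):
        "Deep interconnection: many small components in dense networks ('r in service of K')",
    ("Transformation", "Simplify"):
        "Fast iteration, slow strategy: independent experiments, loose connections ('r in service of K')",
}

def world_for(condition: str, strategy: str) -> str:
    """Look up the kind of world a given condition/strategy pairing produces."""
    return ADAPTATION_MATRIX[(condition, strategy)]

print(world_for("Constraint", "Complexify"))
```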

    Okay.

    Remember, I said earlier that foresight relies on intensely metaphorical language. You might not have expected the metaphors to be quite that intense, however. So here’s the takeaway:

    Adaptation can take multiple forms, but more importantly, the value of an adaptation depends upon the conditions in which it is tried. Just because an adaptive process worked in the past doesn’t mean that it will be just as effective next time. But there are larger patterns at work, too. If you can see them early enough, you can shape your adaptive strategies in ways that take advantage of conditions, rather than struggle against them.

    But here’s the crucial element: it looks very likely that we’re in a period where the large patterns we’ve seen before aren’t working right.

    Instead, we’re in an environment that will force swift and sometimes frightening evolution. Businesses, communities, social institutions of all kinds, will find themselves facing a need to simultaneously experiment rapidly and keep hold of a longer-term perspective. You simply can’t expect that the world to which you’ve become adapted will look in any way the same – economically, environmentally, politically – in another decade.

    As a result, you simply can’t expect that you will look in any way the same, either.

    The asteroid strikes. The era of evolution is upon us. It’s now time to watch the dinosaurs take flight.

    February 23, 2011

    Is the Alphabet Making Us Stupid?

Socrates: “...The story goes that Thamus said many things to Thoth in praise or blame of the various arts, which it would take too long to repeat; but when they came to the letters, “This invention, O king,” said Thoth, “will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.”

    But Thamus replied, “Most ingenious Thoth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess. For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

    "You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

         –[Plato, Phaedrus, 274e-275b]


    January 26, 2011

    "Win the Future"

If you watched the American State of the Union address last night, in part or in total, you couldn't have escaped noticing one particular phrase: "win the future." President Obama used it (or "winning the future") nine times in the speech; he used "future" 15 times, in total. You might think that, as a futures guy, I would be thrilled at the Presidential shout-out, but I'm not.

    When thinking about the future, "winning" is a terrible metaphor. It's not just that "winner" implies "loser;" it's not just that "win" demands competition. For me, the fundamental problem with the metaphor is that "win" means that the competition is over. Okay, we've won the future... now what? Everybody goes to Disneyland? Or if "win the future" means the future is a prize, once we've won it, what do we do with it? I don't think my office bookcase is big enough to hold the whole future. I might have to get a storage locker.

In reality, there's always more future yet to come*: that's why my favorite thing that Bruce Sterling has ever said is simply "the future is a process, not a destination." If we think of the future as a thing, or a goal, we are limiting it -- and, by extension, limiting ourselves. Obama, by encouraging us to "win the future," is not just asking that we do something that simply cannot be done, he's asking us to accept a meager, ephemeral sense of triumph, when we could do so much more.

    Embrace the future. Create the future. Become the future.

    All more meaningful and forward-looking than a transient victory. Harder to articulate than "win," undoubtedly -- but the future isn't easy.


    * Statement not valid post-Eschaton, or outside of standard model physics.

    December 14, 2010

    Neodicy

    Warren Ellis did me the great honor of asking me to write a piece for his website, on whatever topic was on my mind. This is what resulted. You can see the posting at Warren's place here; I've reproduced it below for my archives.

Technology will save us. Technology will destroy us.

    The Future will save us. The Future will destroy us.

The tension between the myriad ways our tools — our technologies — affect us is often at the core of futurological discussions. Do they weaken us, destroying our memories (as Socrates argued) or our ability to think deeply (as Nicholas Carr argues), or do they enhance us? Do our technologies rob us of our humanity, or are they what make us human? While I lean towards the latter view, I recognize that our tools (and how we use them) can damage our planet and our civilization. But for a surprisingly large number of people, such discussions of technology aren't just part of futurism, they are futurism. From this perspective, the question of whether our technologies will destroy us is essentially the same as asking if our futures will destroy us.

    This deep fear that what we have built will both give us heretofore unimagined power and ultimately lay us to waste has been with us for centuries, from the story of Icarus to the story of Frankenstein to the story of the Singularity. But because of its mythical roots, few foresight professionals give this fear sufficient credence. Not in the particulars of each story (I don't think we have much cause to worry about the risks associated with wax-and-feather personal flight), but in the recognition that for many people, a desire to embrace "the future" is entangled with a real, visceral fear of what the future holds for us.

In religious study, an explanation of how an all-powerful deity that claims to love us can allow evil is known as a "theodicy." The term was coined in 1710 by Gottfried Leibniz -- a German natural philosopher who, among his many inventions and ideas, came up with calculus (independently of Newton, who is usually credited) and the binary number system. A theodicy is not merely a "mysterious ways" or "free will" defense; it's an attempt to craft a consistent, plausible justification for evil in a universe created by an intrinsically good deity. Theodicies are inherently controversial; some philosophers claim that without full knowledge of good, no theodicy can be sufficient. Nonetheless, theodicies have allowed believers to think through and discuss in relatively sophisticated ways the existence of evil.

    The practice of foresight needs within its philosophical underpinnings a similar discourse that treats the fear of dangerous outcomes as a real and meaningful concern, one that can neither be waved away as pessimism nor treated as the sole truth — a "neodicy," if you will. Neodicies would grapple with the very real question of how we can justifiably believe in better futures while still acknowledging the risks that will inevitably arise as our futures unfold. Such a discourse may even allow the rehabilitation of the concept of progress, the idea that as a civilization we do learn from our mistakes, and have the capacity to make our futures better than our past.

    For those outside the practice of futurism, neodicies could be sources of comfort, allowing a measure of grace and calm within a dynamic and turbulent environment; neodicies give future dangers meaningful context. For futurists, the construction of neodicies would demand that we base our forecasts in more than just passing trends and a desire to catch the Next Big Thing; neodicies require complexity. For all of us, neodicies would force an abandonment of both optimism- and (more often) pessimism-dominated filters. Neodicies would reveal the risks inherent to a Panglossian future, and the beauty and hope contained within an apocaphile's lament.

    What I'm seeking here is ultimately an articulation of futurology (futurism, foresight, etc.) as a philosophical approach, not simply a tool for business or political strategy. I want those of us in the discipline to think more about the "why" of the futures we anticipate than about the "what." Arguing neodicies would allow us to construct sophisticated, complex paradigms of how futures emerge, and what they mean (I'd call them "futurosophies," but I'm on a strict one-neologism-at-a-time diet). Different paradigms need not agree with each other; in fact, it's probably better if they don't, encouraging greater intellectual ferment, competition and evolution. And while these paradigms would be abstractions, they could still have practical value: when applied to particular time frames, technologies, or regions, these paradigms could offer distinct perspectives on issues such as why some outcomes are more likely than others, why risks and innovation coevolve, and how tomorrow can be simultaneously within our grasp and out of control.

    But the real value of a neodicy is not in the utility it provides, but the understanding. For too many of us, "the future" is a bizarre and overwhelming concept, where danger looms large amidst a shimmering assortment of gadgets and temptations. We imagine that, at best, the shiny toys will give us solace while the dangers unfold, and thoughts of the enormous consequences about to fall upon us are themselves buried beneath the desire for immediate (personal, economic, political) gratification. Under such conditions, it's easy to lose both caution and hope.

    A world where futurology embraces the concept of neodicy won't make those conditions go away, but it would give us a means of pushing back. Neodicies could provide the necessary support for caution and hope, together. Theodicy is often defined simply as an explanation of why the existence of evil in the world doesn't rule out a just and omnipotent God; we can define neodicy, then, as an explanation of why a future that contains dangers and terrible risks can still be worth building — and worth fighting for.

    June 24, 2010

    A Dilemma

    Is something still meaningful and true, even when it's been turned into a marketing slogan?

    (Spotted in London, in the window of a brand marketing agency.)

    February 9, 2010

    Translating Opacity

    Andrew Revkin asked what I thought about his arguments for greater development and use of automated language translation technologies. In his piece "The No(w)osphere," Revkin writes:

    As the human population heads toward nine billion and simultaneously becomes ever more interlaced via mobility, commerce and communication links, the potential to shape the human journey — for better or worse — through the sharing of ideas and experiences has never been greater. [...]

    But language remains a barrier to having a truly global conversation...

    Automated translation remains clumsy, at best, these days. (One perfect illustration is the website "Translation Party," which translates an English phrase into Japanese, then translates it back to English, then back to Japanese, until it reaches "equilibrium" -- a point where the English and the Japanese auto-translate back and forth precisely.) Linguistic accuracy is a much harder problem than technology pundits of a few decades ago had expected. Nonetheless, as Revkin points out, there are a number of projects out there that suggest that a future of relatively useful automated translation is probably fairly near.
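    (For the code-minded, the Translation Party mechanic is just a fixed-point loop: translate, translate back, and repeat until a round trip stops changing the text. Here's a minimal sketch in Python, assuming a purely hypothetical translate(text, source, target) function standing in for whatever machine-translation service is at hand.)

        # Toy version of the Translation Party loop: bounce a phrase between
        # English and Japanese until the English stops changing (the
        # "equilibrium"), or give up after a fixed number of round trips.

        def translate(text, source, target):
            # Hypothetical stand-in for a real machine-translation call.
            raise NotImplementedError

        def find_equilibrium(phrase, max_rounds=20):
            english = phrase
            for _ in range(max_rounds):
                japanese = translate(english, "en", "ja")
                round_trip = translate(japanese, "ja", "en")
                if round_trip == english:   # a round trip no longer changes the text
                    return english, japanese
                english = round_trip
            return None                     # no equilibrium within the round limit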

    Here's the twist: I suspect that a less-than-perfect system would be better than an idealized perfect translation. Why? Because an imperfect system would require us to speak more simply and in a more straightforward fashion, with fewer culture-specific idioms and convoluted sentences, as we do today with our current tools. Working with people for whom English is not their primary language, I know that I need to speak and write in a way that doesn't lend itself to unintended ambiguity or confusion. If I knew that an automated system could be tripped up by overly-complex language, I'd be as careful and precise as possible.

    But in everyday conversation, we don't tend to speak carefully and precisely. Correspondingly, an effectively perfect system would let us slip into the kinds of discussion and writing patterns that we use with other native speakers. I suspect that, counter-intuitively, this would lead to more confusion and friction, as meaning is culturally-rooted. A perfect translation of the denotation of a word or phrase may not carry the correct connotation; moreover, the translated word or phrase may have a very different connotation in a different culture.

    In other words, translation technology that offers results that make sense linguistically, and carry the proper surface meaning of the words and phrases used, could well be close at hand. But translation technology that offers results that have the same meaning in both languages, especially with complex or idiomatic phrasing, probably awaits the arrival of relatively strong machine intelligence. Simply put, it would require software that understood what you meant, not just what you said.

    We should be careful not to get these two outcomes confused. The more that we expect our translation tools to convert meaning, not just phrasing, the more likely we are to be unhappy with the results.

    November 30, 2009

    New Fast Company: Futures Thinking: Scanning the World

    ...And just now my latest Fast Company piece popped up on the site. "Futures Thinking: Scanning the World" is the third in the occasional series on thinking like a futurist.

    In my opinion, it may actually be the hardest step of all, because you have to navigate two seemingly contradictory demands:
    • You need to expand the horizons of your exploration, because the factors shaping how the future of the dilemma in question will manifest go far beyond the narrow confines of that issue.
    • You need to focus your attention on the elements critical to the dilemma, and not get lost in the overwhelming amount of information out there.

    You should recognize up front that the first few times you do this, you'll miss quite a few of the key drivers; even experienced futurists end up missing some important aspects of a dilemma. It's the nature of the endeavor: We can't predict the future, but we can try to spot important signifiers of changes that will affect the future. We won't spot them all, but the more we catch, the more useful our forecasts.

    It boils down to this: keep reading, keep asking questions, keep looking for outliers... and if you think you have enough, you don't.

    October 21, 2009

    New FC: Futures Thinking: Asking the Question

    My latest Fast Company essay is up, and with it I return to the "Futures Thinking" series. This one, "Asking the Question," looks at how to craft a question for a foresight exercise that's most likely to generate useful results.

    It's a subtle point, but I tend to find it useful to talk about strategic questions in terms of dilemmas, not problems. Problem implies solution--a fix that resolves the question. Dilemmas are more difficult, typically situations where there are no clearly preferable outcomes (or where each likely outcome carries with it some difficult contingent elements). Futures thinking is less useful when trying to come up with a clear single answer to a particular problem, but can be extremely helpful when trying to determine the best response to a dilemma. The difference is that the "best response" may vary depending upon still-unresolved circumstances; futures thinking helps to illuminate possible trigger points for making a decision.

    As always, let me know what you think.

    October 13, 2009

    All Money is Fantasy

    My friend Stowe Boyd, consultant and provocateur, interviewed me recently for his Future of Money project. The video of that interview is now available at Stowe's blog, /Message.

    It's a good conversation, although I clearly haven't learned the blogger video conversation practice of simply talking over the person I'm conversing with. I'm far too polite.

    I start with the observation that all money is fantasy. I laugh/sigh when I see "gold bugs" going on and on about how money should be tied to gold, because gold has "real value." The only intrinsic value that gold has relates to how we can use it (in electronics, mostly, or as meal garnish); its utility as money is just as imaginary, just as "fiat," as post-Bretton Woods currency. It's a mutually-agreed upon fantasy. A "consensual hallucination," to steal from Gibson.

    August 3, 2009

    Expiration Date

    Slate's Josh Levin kicks off a series of articles on the possible future dissolution of the United States today with a piece about how a few different "futurologists" see the possibility. In "How Is America Going To End?", he talks to Peter Schwartz and Stewart Brand (for GBN), and to me, covering the Fifty Year Scenarios I did for IFTF.

    Cascio clearly believes that humanity has the ingenuity and the smarts to beat back threats to its continued existence. He doesn't, however, assume that the persistence of the United States is necessarily the most-desirable outcome. It's possible America will collapse as we try desperately to save it—or perhaps the country will shrivel up and go away when its time has come and gone. "It's not necessarily how America will survive," Cascio says, "but how do the values we hold dear … survive even if some of the institutions don't?"

    I have to say, it's fairly rewarding to be held up shoulder-to-shoulder with Peter and Stewart.

    Amusingly, the piece also includes a short -- six minute or so -- video interview. Embedded below, it's notable for me as a dire warning that I really shouldn't wear white.

    The video embed sometimes forces a 15-second advertisement for (of all things) Amway at the beginning, so if you're ad-averse, but have to have your Jamais-on-video fix, you can watch it at the Slate page.

    Levin's series also includes a make-your-own Apocalypse game!

    April 2, 2009

    Never Mind. Not Doomed Yet.

    Sorry, don't know what came over me.

    March 3, 2009

    The End of Long-Term Thinking

    My intent, from this point forward, is to stop talking about the "long-term." No more long-term problems, long-term solutions, long-term changes. No more long-term perspectives.

    In its place, I'm going to start talking about "multigenerational" issues. Multigenerational problems, solutions, changes. Multigenerational perspectives.

    The advantage of the term "multigenerational" is threefold.

    Firstly, it returns a sense of perspective that's often absent from purportedly "long-term" thinking. In a culture that has tended to operate on the "worry about tomorrow, tomorrow" model, looking at the next year can seem daring, and looking ahead five years can seem outrageous. But five years out isn't very long for long-term thinking; even ten years is better thought of as mid-range. "Multigenerational," conversely, suggests that whatever we're thinking about may require us to think ahead 20+ years.

    Secondly, it reinforces the notion that choices we make today don't just impact some distant future person (subject to discounting), but can and will directly affect our physical and cultural offspring. (Even those of us without kids of our own recognize that we have a role in shaping subsequent generations.) That is to say, "multigenerational" carries with it a greater implied responsibility than does "long-term."

    Finally, it doesn't let us skip over the journey from today to the future. "Multigenerational" demands that we include generations along the way -- and while the core meaning of the term refers to human populations, one could stretch the concept to include other systems that show generational cycles.

    This is a key difference between "long-term" and "multigenerational," but it's a subtle one. When we talk about the long-term, the corresponding structure of language -- and thinking -- tends to bias us towards a kind of punctuated futurism, pushing us to look ahead to the end of the era in question while leaping over the intervening years. This skews our perspective. "In the long run, we are all dead," John Maynard Keynes famously said -- but over that same long run, we will all have lived our lives, too.

    I'm increasingly convinced that, when looking ahead, the focus should be less on the destination than on how we get there. Yet that's not how we discuss long-term issues. When we describe climate change as a long-term problem, for example, we inevitably end up talking about what it would look like down the road, after some "tipping point" perhaps, or at a particular calendar demarcation (2050 or 2100). Although there's no explicit denial that climate change is something with implications for every year between now and then, our attention -- our foresight gaze, as we might think of it -- is drawn to that distant end-point, not to the path.

    My thoughts about "long-run" vs. "long-lag" problems cover a similar issue, looking at how our articulations of the future shape our thoughts of it. But this is a deeper problem, one that the "long-lag" concept only hints at.

    "Multigenerational" has two drawbacks, however. The first is that, simply put, it's a bear of a word. Multi-syllabic, 17 letters in length, it requires a bit more effort than "long-term" to write or say. While not an insurmountable barrier, this does mean that sheer laziness will bias me towards "long-term."

    The second is a bit more serious. As noted above, multigenerational implies looking ahead twenty or more years. If we consider a ten-year horizon to be the outer edge of medium-term, there's still the "near-long-term" range between ten and twenty years out to worry about. It's definitely not multigenerational -- hell, it's really not even generational. Yet it's still well beyond the comfortable "foresight window" for most people (which, in my experience, tends to be about five years). At this point, I'm likely to just roll that time range into multigenerational, but the inherent inaccuracy leaves me wanting a better solution.

    I first started thinking about the multigenerational vs. long-term language a month or so ago, while talking with colleagues working on a new foresight-driven non-profit. Its utility was solidified, however, when Emily Gertz pointed me to this essay by science fiction writer and green futurist Kim Stanley Robinson, "Time to end the multigenerational Ponzi scheme," which looks out at what's needed to develop a postcapitalism perspective. KSR is one of the best world-builder science fiction writers out there, in my opinion, and he has an excellent sense of historical patterns. If he's taken to using "multigenerational," then I feel confident of its value.

    Language matters, especially when considering something that's intrinsically conceptual rather than physical. "Long-term" has a lengthy (!) history and deep cultural roots; I expect that I'll find myself using the phrase for some time, even as I try to shift to "multigenerational." But right now we're facing a century of what could easily be the greatest overlapping set of crises our civilization has ever seen. If we're to get through this era intact, we'll need all the tools at our disposal -- and to be thinking about the consequences of our actions with as much acuity and clarity as humanly possible.

    February 26, 2009

    John Henry was an Audiobook-Readin' Man

    You might remember the story of old John Henry. He built rail lines, and could work harder and faster than any man alive. When the company brought in a steam-driven rail driving machine, though, they announced that they were going to fire all of the human rail workers. John Henry stepped up and challenged that machine.

    Challenged it, and beat it.

    And then dropped over dead.

    Keep that in mind as you read this.

    Roy Blount, Jr., the president of the Authors' Guild, wrote an editorial in the New York Times on February 25th, arguing that the text-to-speech feature of Amazon's new Kindle 2 electronic book reading device actually violates the intellectual property rights of the authors he represents, as it provides the functional equivalent of an audiobook, without paying for audiobook rights.

    The crux of Blount's argument is that it's critical to set a precedent now, because the text-to-speech is an audio performance of the book, and even if the digital vocalization is now lousy, it won't always be.

    Not surprisingly, authors who have more willingly entered the 21st century, such as Cory Doctorow, John Scalzi, Neil Gaiman, and Wil Wheaton, have attacked Blount's argument with gusto. Wil even provides an amusing side-by-side audio comparison (MP3) of himself and the Mac's "Alex" voice reading a section of his new book Sunken Treasure.

    For Scalzi, Gaiman, and Wheaton, the crux of the argument is that Blount's concerns are worse than silly, because nobody would mistake the text-to-speech for real voice acting. (Doctorow, as is his practice, focuses on the legal aspect of Blount's argument, finding it more than wanting.)

    My take on this? They're all wrong (well, probably not Cory)... and they're all right, too. That is, Blount is right about the technology, but wrong in his conclusions, while Scalzi/Gaiman/Wheaton/et al are wrong about the problem, but right about the proper response. The reason that Blount's wrong is that he's just trying to hold back the tide, fighting a battle that was lost long ago. The reason that the 21st century digital writers are wrong is that they've forgotten the Space Invaders rule: Aim at where your target will be, not at where it is.

    Text-to-speech is laughably bad now for reading books aloud.

    Text-to-speech could very well be the primary way people consume audiobooks within a decade.

    At present, text-to-speech systems that go from ASCII to audio follow a few pronunciation conventions, but otherwise have no way of interpreting what is read for proper emphasis. For the kinds of uses that current text-to-speech systems typically see, that's good enough. For reading books, especially fiction, that's not.

    But it's not hard to imagine what would be needed to make text-to-speech good enough for books, too. In order to give the right vocalization to the words it's reading, an "AutoAudio Book" would have to have one of three characteristics:

    • It could have been told in detail how to emphasize certain words and phrases, probably through some kind of XML-based markup standard. Call it DRML, or Dramatic Reading Markup Language. Given the existence of other kinds of voice control systems (such as speech synthesis markup language and pronunciation lexicon specification), such a standard isn't hard to imagine. It would take some pre-processing of the text files, though, to really make it work.

    • At the other end of the spectrum, it could actually understand what it's reading, and be able to provide emphasis based on what is going on in the story (basically, what you or I would do).

    • Somewhere in the middle would be a system that had a number of standard emphasis heuristics, and is able to take a raw text file and, after a little just-in-time processing, offer an audio version that would by no means be as good as a real voice actor, but would, for most people, be good enough.

    The DRML version is possible now -- hell, I had DOS apps back in the 1990s that would let me add markers to a text file to tell primitive text-to-speech software how to read it. The "understand what it's reading" version, conversely, remains some time off; frankly, that's pretty close to a real AI, and if those are available for something as prosaic as an ebook reader, we have bigger disruptions to worry about.

    But the "emphasis heuristics" scenario strikes me as just on the edge of possible. There would have to be some level of demand -- such as would arguably be demonstrated by the success of the Kindle 2 and its offspring. More importantly, it would require a dedicated effort to create the necessary heuristics; amusingly, Blount's editorial has probably done more than anything else to make irritated geeks want to figure out how to do just that. It would probably also need a more powerful processor in the ebook reader; that's the kind of incentive that might make Intel want to underwrite the aforementioned irritated geeks.

    One can easily imagine a scenario in which we see a kind of "wiki-emphasis" editing, allowing tech-attuned readers, upon encountering a poorly-read section of an AutoAudio Book, to update it and upload the bugfix, thereby improving the heuristics. (Of course, that would undoubtedly result in orthographic edit-wars and dialect forking. But I digress.)

    Ultimately, Blount's fears that a super text-to-speech system could undermine the market for professional audiobooks really have more to do with economic choices than technical ones. The requisite technologies are either here but expensive or just on the horizon, and the combination of technological pathways and legal precedent (as Doctorow describes) makes the scenario of good-enough book reading systems all but certain. But that doesn't guarantee that the market for audio books goes away. The history of online music is illustrative here, I think: when the music companies were ignorant or stubborn, music sharing proliferated; when music companies finally figured out that it was smart to sell the music online at a low price, music sharing dropped off considerably.

    The more that the book industry tries to fight book-reading systems, the more likely it is that these systems (whether for Kindles, or iPhones, or Googlephones, or whatever) will start to crowd out commercial audiobooks. The more that the book industry sees this as an opportunity -- keeping audiobook prices low, for example, or maybe providing ebooks with DRML "hinting" for a dollar more than the plain ebook -- the more likely it is that book reading systems will be seen as a curiosity, not a competitor.

    None of these scenarios may be very heartening for authors, unfortunately. Sorry about that.

    At least you're not likely to keel over and die competing with an automated audiobook.

    February 16, 2009

    Futurist Scaffolding

    This Thursday, I'll be delivering the morning keynote at the Art Center College of Design Sustainable Mobility Summit. My talk will cover the big picture context for the kinds of debates and discussions swirling around the event. There will be the usual assortment of drivers -- along with a cheeseburger or two -- but I thought I'd offer a preview of where the talk ends up.

    After going through an exploration of fundamental catalysts, I list three different lenses through which to view what tomorrow holds:

      Participatory Future
      Bottom-up drivers enable greater collaboration and participation, but also greater instability. This is a future of Open Source Design and Global Guerillas. This is a world where power comes from the Commons.

      Interconnected Future
      Technology-driven changes enable more sharing of information and ideas, but abandon the remnants of old intellectual property and privacy rules. This is a future of the Participatory Panopticon and Augmented Reality. This is a world where power comes from Relationships.

      Leapfrog Future
      Catastrophe and Opportunity combine to drive the creation of new economic, political, and social models. This is a future of Massive Disruptions and Unanticipated Consequences. This is a world where power comes from Creativity.

    To unpack this a bit: these are not mutually-exclusive scenarios, but different ways of thinking about how anticipation, response, and resilience manifest in an era of crisis. By "power," I don't mean the "...flows from the barrel of a gun" sense, but the "social engine of change" sense -- that is, how we enable our anticipation, response, and resilience. Although I don't discuss a set timeline, I think of these scenarios as operating in the fifteen-to-twenty-year horizon.

    I intentionally gave them all reasonably appealing names. I wanted to avoid any sense that I was pushing towards one or away from another, and especially wanted to avoid any intimation that this was a "good-medium-bad" set of linear scenarios.

    There's very little narrative to these futures -- so little that I actually hesitate to call them "scenarios" -- but they do provide structure. They're scaffolds, frameworks upon which to build stories of tomorrow. I have a fairly limited time to give my presentation, so I won't be able to do much building myself.

    My hope is that these three scaffolds will give the Summit participants a useful way of thinking about the various challenges and surprises they encounter at the event.

    As always, I look forward to seeing what kind of responses these ideas generate.

    January 1, 2009

    Aspirational Futurism

    One of the secondary effects of the latest set of crises to grip the world is the rise of essays and articles from various insightful folks, laying out scenarios of what the future will look like in an era of limited resources, energy, money, and so forth. Most of these follow a similar pattern: a list of reasonable depictions of a more limited future, and at least one item that seems completely out of the blue.

    The best example has to come from James Kunstler's description of the world to come in his "non-fiction" The Long Emergency and his explicitly fictional World Made By Hand. Along with his schadenfreude-soaked claims about the end of suburbia, automobiles, and all things superficial, he comes in with stark assertions that we'll all be making our own music and acting on stage for each other, instead of listening to that damnable recorded "rock-roll" music and the disco and suchlike.

    Yeah, I'm no big fan of JHK's reactionary futurism, but this points to a bigger trend, one that I'm seeing across a variety of political spectra: the vision of an apocalyptic near-future as a catalyst for making the kinds of social/economic/political/technological/religious/etc. changes that the ignorant or deceived masses wouldn't have otherwise made.

    This isn't just Rapturism, where a glorious transformation happens, which may or may not have nasty results for some; in that kind of scenario, an apocalypse isn't a trigger so much as a possible side-effect. In this kind of scenario -- "aspirational apocaphilia" -- the global disaster is a requisite enabler.

    It's a notable trend in that it's something that those of us who consider ourselves ethical futurists need to pay close attention to in our own work. I'd love to see the current crises result in a variety of more sustainable social patterns -- but I have to be careful not to mistake my desire for what would be a useful forecast.

    December 19, 2008

    Cycles of History

    A new economic superpower undermines established economic leaders. The collapse of complex financial instruments turns a boom into a bust. Banks fail in waves. Unemployment reaches up to 25% in some areas. A global depression holds on for more than two decades. Class warfare breaks out. Transportation networks stall -- along with industries dependent upon them -- as the main "fuel" for transportation disappears. Pandemic disease exacts a terrible toll. Religious fundamentalism skyrockets. Totalitarianism rises around the world.

    I'm describing the 1870s-1890s. Hopefully, I'm not also describing the next couple of decades.

    Historian Scott Reynolds Nelson argues in the Chronicle of Higher Education that today's financial crisis bears a much closer resemblance to the Panic of 1873 and the resulting Long Depression than to the more familiar Great Depression of 1929. He writes:

    But the economic fundamentals were shaky. Wheat exporters from Russia and Central Europe faced a new international competitor who drastically undersold them. The 19th-century version of containers manufactured in China and bound for Wal-Mart consisted of produce from farmers in the American Midwest. [...] The crash came in Central Europe in May 1873, as it became clear that the region's assumptions about continual economic growth were too optimistic. [...]

    As continental banks tumbled, British banks held back their capital, unsure of which institutions were most involved in the mortgage crisis. The cost to borrow money from another bank — the interbank lending rate — reached impossibly high rates. This banking crisis hit the United States in the fall of 1873. Railroad companies tumbled first. They had crafted complex financial instruments that promised a fixed return, though few understood the underlying object that was guaranteed to investors in case of default. (Answer: nothing).

    Among the results of the 1873 Panic & Long Depression (lasting until 1896) were the labor movement and religious fundamentalism in the US, modern anti-semitism in Europe, and (according to Hannah Arendt) the origins of totalitarianism.

    As for transportation networks and pandemic, they were actually connected issues. In 1872, equine influenza took hold in the US, infecting close to 100% of all horses, with a mortality rate ranging from 1-2% to 10%. The "Great Epizootic of 1872" froze horse-drawn transportation (even leaving the US cavalry on foot), which in turn stalled trains because of the lack of coal transport.

    As a preview of peak oil it's admittedly shallow, but the similarities are there. The damage to transportation and industry in 1872 was a significant multiplier to the financial crisis; a modern collapse of transportation -- even if equally temporary -- would be potentially even more devastating.

    Our understanding of and tools for managing the global economy are better now than in the 1870s, and there are enough divergent drivers to make the overall parallel more instructive than spooky. But while we may be missing some of the factors that made the Long Depression so bad, we have plenty new elements that threaten to make the current situation even worse: climate disaster, networked terrorism, and much more deeply-linked economic interdependence between states.

    If we generalize a bit from the 1870s-1890s, a handful of key issues emerge as likely to have echoes today:

  • Aggressive self-interest on the part of states, despite clear potential to damage the overall economic/political structure;
  • Desperate need to find scapegoats;
  • Embrace of religious extremism as a way of finding support and solidarity;
  • Heightened conflict between economic classes and political movements.

    None of these will be particularly surprising to observers of our present condition. Those of us in the foresight game have included some or all of these in many of our more unpleasant scenarios. Nonetheless, it's sobering to see stark evidence that a previous, similar economic crisis had these exact kinds of results.

    December 18, 2008

    Overton, Warren, and Re-Making the Middle

    Why Obama's selection of Rick Warren to give an opening prayer at the inauguration is a lesson for environmental activists -- and poses a troubling question about the future.

    If you follow political news in the US, you're probably aware that President-Elect Barack Obama has asked conservative Pastor Rick Warren to give the opening invocation for the inauguration ceremony (Joseph Lowery, co-founder of the Southern Christian Leadership Conference and open supporter of gay marriage, will give the closing benediction). Given that Warren is known for some fairly un-Obama-like statements (explicitly comparing gay marriage to pedophilia, calling for the assassination of Iran's leaders), this selection has been a smidge controversial, with quite a few liberals seeing this as Obama having "peed in the ol' cornflakes" of gay and progressive supporters. The uproar about this choice, however, has in turn been met with dismissive or angry replies from other Obama supporters, who say that having Warren speak for two minutes is pretty close to meaningless, and it's a good move by Obama to be willing to reach out to communities that didn't vote for him in November. Some even argue that it's smart politics for Obama to attack liberals to show his independence.

    It struck me, reading the debate online (and having a mild debate of my own over Twitter with Howard Rheingold), that not only are both sides right in this, it's actually very useful to have this kind of debate be so public, even if it gets caricatured as "the Left vs Obama."

    To begin to see why, imagine this: John McCain won, asked Warren to give the closing benediction, and asked Joseph Lowery to give the opening invocation as a way of reaching out to the communities that didn't vote for McCain in November. How would the conservative "base" respond to that choice? With anger. And it would certainly be seen as a problem for McCain to have so upset his supporters, not as a sign of strength.

    This is a well-known process: as radical positions are voiced loudly and persistently, the perceived "center" shifts towards the fringe. This is known as the Overton Window, and as I noted back in January, it has the potential to be a decisive tool for shifting perspectives about the environment.

    If the selection of Warren had been met only with "ho hum, it doesn't matter much, and it's useful politics for Obama," the conventional wisdom that Warren represents some kind of moderate position would be further solidified -- "see, even the crazy lefties think he's a moderate!" -- and might even give a subtle push to the idea that Warren is actually kind of liberal.

    But with this immediate and loud turmoil over the choice, the conventional wisdom that Warren is a moderate gets eroded, and a new mainstream notion starts to emerge: Warren's views are actually pretty conservative, and Obama is being nice to the right wing in this, not simply embracing the center.

    It's a quiet game, and not one that will be shifted by a single event. But what the reaction to the Warren choice helps to demonstrate -- and here's where this becomes useful for people thinking about changing the politics around global warming -- is that loud, angry voices can reshape the nature of the mainstream. These voices don't become the mainstream, at least not initially, but push what's considered to be the "moderate center" a bit more towards the desired position. We can see that now with gay marriage, as the "civil unions with the same rights" concept has become something of the cautious, centrist view, not something seen as radical and weird.

    The question all of this raises, however, is what happens when countervailing groups both decide to operate as Overton Window drivers?

    Remember, the basic concept is that by vocally espousing a truly radical position, what gets considered to be moderate shifts towards you by looking like a reasonable contrast. So when differing sides of an argument both start to use this process, do we simply remain at the status quo "center"? Or do we have a bifurcated center, and further fragmentation? Or does it become an opening for an entirely new position to take hold?

    I'm also curious about what happens when you can identify an Overton process starting up. Take nanotech -- if the reasonable center was somewhere between "go fast, but pay close attention to problems" and "go slow," one could imagine that strident calls for (say) the arrest of anyone working on nanotechnology (as people engaged in crimes against the planet/humanity) to be considered on the unacceptable fringe. But as those calls persisted, even from a small minority, the reasonable center might start to shift to become something between "go slow" and "stop research," with more encouraging positions increasingly seen as radical.

    So if we saw something like that, and didn't want to see the reasonable center shift, what could we do?

    There's one of your Jobs of Tomorrow: Overton Window Engineer.

    December 8, 2008

    Legacy Futures

    Reading a talk given by science fiction author Ken Macleod, I came across this bit:

    I used the term 'legacy code' in one of my novels, and Farah Mendlesohn, a science-fiction critic who read it thought it was a term I had made up, and she promptly adapted it for critical use as 'legacy text'. Legacy text is all the other science fiction stories that influence the story you're trying to write, and that generally clutter up your head even if you never read, let alone write, the stuff. Most of us have default images of the future that come from Star Trek or 2001 or 1984 or Dr Who or disaster movies or computer games. These in turn interact with the tendency to project trends straightforwardly into the future.

    What immediately struck me is that we all have this kind of cognitive "legacy code" in our thinking about the future, not just science fiction writers, and it comes from more than just pop-culture media. We get legacy futures in business from old strategies and plans, legacy futures in politics from old budgets and forecasts, and legacy futures in environmentalism from earlier bits of analysis. Legacy futures are rarely still useful, but have so thoroughly colonized our minds that even new scenarios and futures models may end up making explicit or implicit references to them.

    In some respects, the jet pack is the canonical legacy future, especially given how the formulation (originally from Calvin & Hobbes, I believe), of "where's my jet pack?" has become a widely-used phrase representing disappointment with the future instantiated in the present.

    People who follow my Twitter stream may recognize another example of a legacy future: Second Life. While the jet pack never really became part of anything other than Disneyfied visions of Tomorrowland, over the past five years or so Second Life came to represent for professional forecasters and futurists the vision of the Metaverse. Even though Second Life has yet to live up to any of the expectations thrust upon it by people outside of the online game industry, it has doggedly maintained its presence as a legacy future.

    Just like legacy code makes life difficult for programmers, legacy futures can make life difficult for futures thinkers. Not only do we have to describe a plausibly surreal future that fits with current thinking, we have to figure out how to deal with the leftover visions of the future that still colonize our minds. If I describe a scenario of online interaction and immersive virtual worlds, for example, I know that the resulting discussion will almost certainly include people trying to map that scenario onto their existing concept of how Second Life represents The Future.

    Sure, Second Life futurism may be a particular irritant for me, but the legacy futures concept can have much more troubling implications.

    We can see it in discussions of post-petroleum transportation that continue to elevate hydrogen fuel cells as The Answer, even though most eco-futurists and green automotive thinkers now regard that technology as something of a dead end. We can see it in population projections that don't account for healthcare technologies extending both productive lives and overall lifespans. We can see it in both visions of a sustainable future reminiscent of 1970s commune life, and visions of a viable future that don't include dealing with massive environmental disruption.

    All of these were once legitimate scenarios for what tomorrow might hold -- not predictions, but challenges to how we think and plan. For a variety of reasons, their legitimacy has faded, but their hold on many of us remains.

    This leaves us with two big questions:

  • How do we deal with legacy futures without discouraging people from thinking about the future at all?
  • What scenarios considered legitimate today will be the legacy futures of tomorrow?

    October 2, 2008

    Long-Run vs. Long-Lag

    All distant problems are not created equally.

    By definition, distant (long-term) problems are those that show their real impact at some point in the not-near future; arbitrarily, we can say five or more years, but many of them won't have significant effects for decades. Our habit, and the institutions we've built, tend to look at long-term problems as more-or-less identical: Something big will happen later. For the most part, we simply wait until the long-term becomes the near-term before we act.

    This practice can be effective for some distant problems: Let's call them "long-run problems." With a long-run problem, a solution can be applied any time between now and when the problem manifests; the "solution window," if you will, is open up to the moment of the problem. While the costs will vary, it's possible for a solution applied at any time to work. It doesn't hurt to plan ahead, but taking action now instead of waiting until the problem looms closer isn't necessarily the best strategy. Sometimes, the environment changes enough that the problem is moot; sometimes, a new solution (costing much less) becomes available. By and large, long-run problems can be addressed with common-sense solutions.

    Here's a simple example of a long-run problem: You're driving a car in a straight line, and the map indicates a cliff in the distance. You can change direction now, or you can change direction as the cliff looms, and either way you avoid the cliff. If you know that there's a turn-off ahead, you may keep driving towards the cliff until you get to your preferred exit.

    The practice of waiting until the long-term becomes the near-term is less effective, however, for the other kind of distant problem: Let's call them "long-lag problems." With long-lag problems, there's a significant distance between cause and effect, for both the problem and any attempted solution. The available time to head-off the problem doesn't stretch from now until when the problem manifests; the "solution window" may be considerably briefer. Such problems can be harder to comprehend, since the connection between cause and effect may be subtle, or the lag time simply too enormous. Common-sense answers won't likely work.

    A simple, generic example of a long-lag problem is difficult to construct, since we don't tend to recognize them in our day-to-day lives. Events that may have been set in motion years ago can simply seem like accidents or coincidences, or even be assigned a false proximate trigger in order for them to "make sense."

    But a real-world example of a long-lag problem should make the concept clear.

    Global warming is, for me, the canonical example of a long-lag problem, as geophysical systems don't operate on human cause-and-effect time frames. Because of atmospheric and ocean heat cycles (the "thermal inertia" I keep going on about), we're now facing the climate impacts of carbon pumped into the atmosphere decades ago. Similarly, if we were to stop emitting any greenhouse gases right this very second, we'd still see another two to three decades of warming, with all of the corresponding problems. Say we're still three degrees below a climate disaster point, but have another two degrees of warming already locked in by thermal inertia, regardless of what we do. We can't wait until we're just shy of that disaster point to act; the warming already in the pipeline would carry us past it, so the real window for action closes after only one more degree of observed warming. If we wait longer, we're hosed.

    With long-lag problems, you simply can't wait until the problem is imminent before you act. You have to act long in advance in order to solve the problem. In other words, the solution window closes long before the problem hits.
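    (A toy version of the arithmetic behind that climate example, using the made-up numbers above -- a disaster threshold three degrees away, two degrees already committed. The numbers and the linear framing are purely illustrative, not a climate model.)

        # "Solution window" arithmetic for a long-lag problem: how much more
        # warming can we observe before the already-committed lag carries us
        # past the threshold?

        def remaining_margin(threshold, committed_lag, observed_so_far=0.0):
            """Warming we can still observe before action comes too late."""
            return threshold - committed_lag - observed_so_far

        THRESHOLD = 3.0   # degrees to the hypothetical disaster point
        COMMITTED = 2.0   # degrees already locked in by thermal inertia

        print(remaining_margin(THRESHOLD, COMMITTED))        # 1.0 -- not 3.0
        print(remaining_margin(THRESHOLD, COMMITTED, 1.5))   # -0.5 -- window already closed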

    We have a number of institutions, from government to religions to community organizations, with the potential to deal with long-run problems. We may not do well with them individually, but as a civilization, we've developed decent tools. However, we don't have many -- perhaps any -- institutions with the inherent potential to deal with long-lag problems. Moreover, too many people think all long-term problems are long-run problems.

    (This argument emerged from a mailing list discussion of the Copenhagen Consensus. Smart people, with lots of good ideas, but clearly convinced that we can address global warming as a long-run problem.)

    Sadly, recognizing the difference between long-run and long-lag problems simply isn't a common (or common-sense) way of thinking about the world. We evolved to engage in near-term foresight (and I mean that literally; look at the work of University of Washington neuroscientist William Calvin for details), and (as noted) we have developed institutions to engage in long-run foresight. Long-lag is a hard problem because it combines the insight requirements of long-run foresight (e.g., being able to make a reasonable projection for long-range issues) with the limited-knowledge-action requirements of near-term foresight (e.g., being able to act decisively and effectively before all information about a problem has been settled). Both are already difficult tasks; in combination, they can seem overwhelming.

    A salient characteristic of long-lag problems is that they're often not amenable to brief, intense interactions as solutions. Dealing with such problems can take a long period, during which time it may be unclear whether the problem has been solved. Politically, this can be a dangerous time -- the investment of money, time and expertise has already happened, but nothing yet can be shown for it.

    Another long-lag problem that shows this dilemma clearly is the risk of asteroid impact. It turns out that nuking the rock (as in Armageddon) doesn't work, but a small, steady force on the rock for a period of years, years ahead of the potential impact, does. Pushing the rock moves the point of impact slowly, and it may take a decade or more before we can be certain that the asteroid will now miss us. That's why the slim possibility of a 2036 impact of 99942 Apophis frightens many asteroid watchers: if we don't get a good read on the trajectory of the rock long before its near-approach in 2029, we simply won't have time to make a big enough change to its path to avoid disaster.
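    (A back-of-envelope illustration, ignoring real orbital mechanics: a small velocity change applied with lots of lead time shifts the arrival point by roughly the nudge times the remaining time. The numbers below are invented to show the scaling with lead time, not to model Apophis.)

        # Rough deflection arithmetic: miss distance ~ delta_v * lead_time.
        # The point is the scaling -- the same tiny nudge buys far more miss
        # distance when applied years earlier.

        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        def rough_shift_km(delta_v_mm_per_s, lead_time_years):
            metres = (delta_v_mm_per_s / 1000.0) * (lead_time_years * SECONDS_PER_YEAR)
            return metres / 1000.0

        for years in (1, 5, 10, 20):
            print(f"{years:2d} years of lead time -> roughly {rough_shift_km(1.0, years):,.0f} km of shift")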

    But tell people in power that we need to be worrying now about something that won't even potentially hurt us until 2036, and the best you'll get is a blank look.

    My interest, at this point, is to try to identify other long-lag problems, and to see what kinds of general conditions separate long-run and long-lag problems. With both global warming and asteroid impacts, the lag comes from physics; with peak oil (and other resource collapse problems), conversely, the lag comes from the need for wholesale infrastructure replacement. What else is out there?

    September 11, 2008

    Read This Now

    Adam Greenfield on America's rejection of the future.

    For a long, long time thereafter, I’d sit in idle moments and wonder just when future shock was going to happen. In my childish conception, it was something that would happen all at once, be precipitated by some obvious event - the proverbial straw - and stand out just as vividly and obviously as an outbreak of the flu when it did roll across the land. It took me years to understand the words as pointing toward something more poetic and metaphoric than clinically diagnostic. It’s a thought I’ve had occasion to dig up and reconsider this last week. Because this is what I’ve come to understand: Here we are. This is it.

    Must read.

    September 8, 2008

    This Changes Everything

    You have my permission to slap the next futurist (foresight thinker, scenario strategist, or trend-spotter) who uses the expression "this changes everything" seriously. Slap them hard. Maybe a shin-kick, too, if you're into it.

    The notion that some new development -- usually a technology, but not always -- "changes everything" manages to combine the most uselessly banal and the most pointlessly wrong observations in the field.

    At the top end, it's part of what I'm starting to call the "cinematic bias" in futurism: the need to describe future developments in ways that startle, titillate, and would probably look pretty cool on-screen. Quite often, the items that fall into this category are simply impossible, or so implausible as to make me struggle to avoid lashing out with Dean Venture's infamous "I dare you to make less sense!" I'm not shocked when people from client companies offer up suggestions like these -- cinematic science fiction is the common language of futurism right now -- but I'm boggled when I see people who get paid to do this for a living coming up with misfires like "teleportation eases traffic problems!" or "population pressure solved by Moon colonies!"

    Sometimes, it's not just implausibility, it's an unwillingness to deviate from The One True Future. Logic is irrelevant, except for the narrow conjectural pathway that leads the futurist from Point A to Point Stupid. Complexity goes right out the window, as do any notions of co-evolution, competing drivers, mistakes, or push-back. This is the kind of thinking that tells us that we don't need to worry about global warming/hunger/poverty/ocean acidification/resource depletion because NewTechnology will fix all of our problems, for ever and ever amen.

    I'm not saying this out of pessimism, or even realism. It's I'm-not-trapped-with-my-head-up-my-posterior-ism.

    At the opposite end of the "this changes everything" spectrum are those people who use this cognitive abortion of a phrase to describe something that might merit a page 14 mention in Widget Fancy. No, a new form of text messaging does not change everything. A new teen language trend does not change everything. And the latest update to an MP3 player most decidedly does not change everything.

    You might think that the people offering up such exaggerated praise for minor developments are novice marketeers, trying on their big hyperbole pants for the first time. You'd be wrong. More often, such an utterance comes from someone who should be paying attention to such things, upon discovering a new toy or trend that half the people sitting around the table already knew about (most likely the underpaid under-30 interns & employees). Simply put, saying that a new widget will "change everything" is just one step more articulate than holding up a napkin drawing and saying "ZOOM! WHOOSH! PEW PEW!"

    What frustrates me most about the ascendance of the "this changes everything" meme is that its implicit opposite is "this changes nothing." Left out are the changes that really matter: the widgets and methods and practices and ideas that change the little parts of our lives, the everyday decisions, offering us new perspectives on old problems -- not solving them with a wave of the hand, but letting us see new ways to grapple with old dilemmas. This doesn't change everything -- in the real world, like it or not, we change everything. The longer we wait for magical technology or new MP3 players to do it for us, the sorrier we'll be.

    August 22, 2008

    Thinking About Thinking

    Here's the opening of a work in progress....


    Seventy-four thousand years ago, humanity nearly went extinct. A super-volcano at what's now Sumatra's Lake Toba erupted with a strength more than a thousand times greater than that of Mount St. Helens in 1980. Over 800 cubic kilometers of ash filled the skies of the northern hemisphere, lowering global temperatures and pushing a climate already on the verge of an ice age over the edge. Genetic evidence shows that at this time – many anthropologists say as a result – the population of Homo sapiens dropped to as low as a few thousand families.

    It seems to have been a recurring pattern: Severe changes to the global environment put enormous stresses on our ancestors. From about 2.3 million years ago, up until about 10,000 years ago, the Earth went through a convulsion of glacial events, some (like the post-Toba period) coming on in as little as a few decades.

    How did we survive? By getting smarter. Neurophysiologist William Calvin argues persuasively that modern human cognition – including sophisticated language and the capacity to plan ahead – evolved due to the demands of this succession of rapid environmental changes. Neither as strong, nor as swift, nor as stealthy as our competitors, the hominid advantage was versatility. We know that the complexity of our tools increased dramatically over the course of this period. But in such harsh conditions, tools weren't enough – survival required cooperation, and that meant improved communication and planning. According to Calvin, over this relentless series of whiplash climate changes, simple language developed syntax and formal structure, and a rough capacity to target a moving animal with a thrown rock evolved into brain structures sensitized to looking ahead at possible risks around the corner.

    Our present century may not be quite as perilous as an ice age in the aftermath of a super-volcano, but it is abundantly clear that the next few decades will pose enormous challenges to human civilization. It's not simply climate disruption, although that's certainly a massive threat. The end of the fossil fuel era, global food web fragility, population density and pandemic disease, as well as the emergence of radically transformative bio- and nanotechnologies – all of these offer ample opportunity for broad social and economic disruption, even devastation. And as good as the human brain has become at planning ahead, we're still biased by evolution to look for near-term, simple threats. Subtle, long-term risks, particularly those involving complex, global processes, remain devilishly hard to manage.

    But here's an optimistic scenario for you: if the next several decades are as bad as some of us fear they could be, we can respond, and survive, the way our species has done time and again: By getting smarter. Only this time, we don't have to rely solely on natural evolutionary processes to boost intelligence. We can do it ourselves. Indeed, the process is already underway.

    July 27, 2008

    Robomotors

    Brad Templeton wants you to stop driving.

    Templeton (Chairman of the Electronic Frontier Foundation, programmer, dot-com entrepreneur, inventor of the "dot com" domain name structure -- no kidding! -- and more) laments the tens of thousands of people killed every year in traffic accidents, the waste of urban space for parking garages and gas stations, and the various institutional roadblocks to moving to renewable energy systems. But he doesn't suggest that you go get a bicycle, you lazy bum, or spend hours on packed public transit. He wants you to get a robot.

    A robot car, to be precise.

    Brad Templeton's set of essays, under the collective title "Where Robot Cars (Robocars) Will Really Take Us," explains exactly why robot (autonomous-driver) cars are possible, likely, safer, cleaner, and all-around a good idea. This isn't meant as a nuanced thought experiment; Templeton lays out page after page of statistics, arguments, and data. This is a massively detailed piece. If you think of an objection, chances are he's already covered it.

    (Disclosure: Brad sent me a link to an earlier version of this piece, and I sent back numerous comments.)

    Templeton doesn't make any claims that this would be easy, or that it could be done soon. As a professional programmer, he's well-acquainted with both the risks arising from relying on computer controls, and the difficulty of putting autonomous systems on the road alongside human drivers. He sees these as solvable issues, though, and points to present-day examples of extremely reliable coding and the "Darpa Grand Challenge" for automated drivers as reasons why. The social (particularly the legal-liability) issues are less-easily solved.

    Probably the most provocative aspect of this piece is Templeton's effort to play out some of the consequences of a shift to robotic vehicles. Not only would autonomous vehicles allow for major changes to urban design (don't need downtown parking if your car can come when you call) and a major reduction of accident rates (crash-avoidance would be the first form that car automation would take, potentially eliminating tens of thousands of crashes per year and saving hundreds of millions of dollars), we'd likely see the end of mass transit (with a few long-haul exceptions).

    (His data on the overall energy efficiency of mass transit, versus standard, hybrid, and ultra-light automobiles, is startling.)

    I suspect that both technophile and envirophile readers will find aspects of Templeton's piece to argue with, but I think you'll be surprised at how strong and reasonably well-supported most of his claims are. This is the kind of piece you go into thinking that it's all crazy, and come out thinking it's all quite plausible.

    Do I believe him? I think he lays out a pretty compelling scenario. I do think he still under-estimates the social, cultural, and legal inertia likely to slow the rate of acceptance of such systems. This strikes me as almost certainly a generation-change issue -- that is, the rate of acceptance will map to the maturation of kids growing up riding in semi-autonomous vehicles. Lots of resistance for longer than expected, then boom, a phase shift.

    But I doubt it will happen first in the US. Singapore, maybe Scandinavia, Japan almost certainly... but I expect USians to be watching this from afar.

    July 22, 2008

    For When the Metal Ones Decide to Come For You

    (From Saturday Night Live, some years ago.)

    Be afraid.

    July 14, 2008

    The Big Picture: Collapse, Transcendence, or Muddling Through

    I'll start this essay by leading with my conclusion: do we make it through this century? Yeah, but not all of us, and it's neither as spectacular nor as horrific as many people imagine.

    Techno-utopianism is heady and seductive. Looking at the proliferation of powerful catalytic technologies, and the potential for truly transformative innovations just beyond our present grasp, makes scenarios of transcendence wiping away the terrible legacies of 20th century industrialism seem easy. If we're just patient, and don't shy away from the scale of the potential change, all that we fear today could be as relevant as 19th century tales of crowded city streets overwhelmed by horse droppings.

    But if you don't trust the technological scenarios, it's not hard to see just how thoroughly we're doomed. There are myriad drivers: depleting resources, rapid environmental degradation, global warming, international political instability, just to name a few. Any of these forms of "collapse" would pose a considerable challenge; in combination, they're simply terrifying. Most importantly, we seem to be unwilling to acknowledge the significance of the challenge. We're evolutionarily set to look for nearby, near-term problems and ignore deeper, distributed threats.

    But here's the twist: the impacts of these broader drivers for collapse and of technosocial innovation aren't and won't be evenly distributed globally. Some places will be able to last longer in the face of resource and environmental collapse than will others -- and (not coincidentally) such places may be at the forefront of technosocial development, as well. The combination of collapse and innovation will lead to profoundly divergent results around the world.

    One disturbing aspect is that the slowly-developing/late-leapfrog world may not be hurt nearly as badly as the recent-leapfrog nations -- it may be worse to be China or Brazil than Indonesia or Nigeria, for example, because rapid industrialization based on carbon-age technologies still leaves you more dependent upon the collapsing resources than you had been, but not yet in a good position to leap past the collapse itself. The key example here would be China and India's growing dependence on coal (and, to a lesser extent, old-style massively-centralized nuclear power). In order to support their rapid economic development, they're stuck using energy technologies that are devastating both locally (through pollution) and globally (through carbon footprint). Add to this that China's economic and demographic situation is more unstable than many people think, and that India faces significant political threats -- including terrorism -- both internally and along its border.

So the dilemma here is how to construct a global policy that can take into account the sheer complexity of the onrushing collapse. If it were "just" resource depletion, it would be tricky but doable; but it's resource collapse plus global warming plus pandemic disease plus post-hegemonic disorder plus the myriad other issues we're grappling with. It's going to be difficult to see our way through this. Not impossible, but difficult.

    The aspects that are on our side:

• We do have the technology to deal with a lot of this stuff, but not the political will. Still, we know that we can change politics and society, arguably better than we know we can build some new technologies, and a major disaster or three will change the politics quickly.
• To a certain extent, the crises can cross-mitigate -- for example, skyrocketing petroleum prices have measurably reduced travel miles and are pushing people to buy more fuel-efficient cars, thereby reducing overall carbon outputs. Economic slow-downs also reduce the pace of carbon output. These aren't a solution, by any means, but they are a mitigating factor.
• We have a lot of people thinking about this, working on fixes and solutions and ideas. Not top-down directed, but a massively multi-participant quest, across thousands of communities and hundreds of countries, bringing in literally millions of minds. The very description reeks of innovation potential.

    Here's my best guess, for now:

Over the next forty years, we'll see a small but measurable dieback of human population, due to starvation, disease, and war (one local nuclear war in South Asia or the Middle East, scaring the hell out of everyone about nukes for another couple of generations). Much of the death will be in the advanced developing nations, such as China and India. There will be pretty significant economic slowdowns globally, and the US/EU/Japan will see significant unrest. Border closings between the developed and the developing nations will likely spike, probably along with brushfire skirmishes.

    The post-industrial world will see a burst of localization and "made by hand" production, but even at its worst it is more reminiscent of World War II-era restrictions than of a Mad Max-style apocalypse. In much of the developed world, limitations serve as a driver for innovation, both social and technological. It's not a comfortable period, by any means, but the Chinese experience and the aftermath of the Middle East/South Asian nuclear exchange sobers everybody up.

    Imperial overreach, economic crises, and the various global environmental and resource threats put an end to American dominance, but nobody else can step up as global hegemon. Europe is trying to deal with its own social and environmental problems, while China is struggling to avoid full-on collapse. The result isn't so much isolationism as distractionism -- the potential global players are all far too distracted by their own problems to do much overseas.

    Refugees and "displaced persons" are ubiquitous.

    I'm near-certain that we'll see a significant geoengineering effort by the middle of the next decade, the one major global cooperative project of the era. The global economic crises, near-collapse of China, and faster-than-expected shift to non-petroleum travel will slow the projected rate of warming, limiting the necessary climate hacking. Solar shading works reasonably well and reasonably cheaply, so the clear focus of global warming worries and new geoengineering efforts by the late 2020s is on ocean acidification.

A mix of nuclear, wind, solar, and a few others (OTEC, hydrokinetic) overtakes fossil fuels in the West by the 2020s, but China & India retain coal-fired power plants longer than anyone else; this may end up being a driver for significant global tension.

Technological innovation continues, though, with molecular nanotechnology fabrication emerging by 2030 -- not as a deus ex machina but as a significant boost to productive capacities. The West (including Japan) stabilizes around the same time, and finally starts to focus on helping the rest of the world recover.

    Then the Singularity happens in 2048 and we're all uploaded by force.

    (I'm kidding about that last one. I think.)

May 21, 2008

    Fifteen Minutes into the Future

    One of the hardest things to grapple with as a futurist is the sheer banality of tomorrow.

    We live our lives, dealing with everyday issues and minor problems. Changes rarely shock; more often, they startle or titillate, and very quickly get folded into the existing cultural momentum. Even when big events happen, even in the worst of moments, we cope, and adapt. This is, in many ways, a quiet strength of the human mind, and a reason for hope when facing the dismal prospects ahead of us.

    But futurism, at least as it's currently presented, is rarely about the everyday. More often, futurists tell stories about how some new technology (or political event, or environmental/resource crisis, etc.) will Change Your Life Forever. From the telescopic perspective of looking at the future in the distance, they're right. There's no doubt that if you were to jump from 2008 to 2028, your experience of the future would be jarring and disruptive.

    But we don't jump into the future -- what we think of now as the Future is just an incipient present, very soon to become the past. We have the time to cope and adapt. If you go from 2008 to 2028 by living every minute, the changes around you would not be jarring; instead, they'd largely be incremental, and the occasional surprises would quickly blend into the flow of inevitability.

    There is a tendency in futurism to treat the discipline as a form of science fiction (and I don't leave myself out of that criticism). We construct a scenario of tomorrow, with people wearing web-connected contact lenses, driving semi-autonomous electric cars to their jobs at the cultured meat factories, and imagine how cool and odd and dislocating it must be to live in such a world. But futurism isn't science fiction, it's history turned on its head. The folks in that scenario don't just wake up one day to find their lives transformed; they live their lives to that point. They hear about new developments long before they encounter them, and know somebody who bought an Apple iLens or package of NuBacon before doing so themselves. The future creeps up on them, and infiltrates their lives; it becomes, for the people living there, the banal present.

    William Gibson's widely-quoted saying, "the future is here, it's just not well-distributed yet" is suggestive of this. The future spreads, almost like an infection. The distribution of the future is less an endeavor of conscious advancement than it is an epidemiological process -- a pandemic of tomorrows, if you will.

    If futurism is more history inverted than science fiction, perhaps it can learn from the changes that the study of history has seen. One of the cornerstone revolutions in the academic discipline of history was the rejection of the "Great Men" model, where history was the study of the acts of larger-than-life people, the wars fought by more-powerful-than-most nations, and the ideas of the brilliant shapers of culture. Historians have come to recognize that history includes the lives of regular people; some of the most meaningful and powerful historical studies of the past few decades, from Howard Zinn's A People's History of the United States to Ken Burns' popular "Civil War" documentary, focused as much or more on the everyday citizens as they did the "Great Men," and as much on everyday moments as on the "turning points" and revolutionary events.

    What might a "people's history of the future" look like?

    May 16, 2008

    How Many Earths?

    It's a standard trope in environmental commentary: we would need more than one Earth to support the planet's population, especially if everyone lived like Americans. The number of Earths needed can vary greatly, depending upon who's doing the counting. 1.2? Two? Three? Five? Ten? It's a very fuzzy form of ecological accounting, much harder to calculate in any consistent and plausible way than (for example) carbon footprints. But the "N Earths" concept is dubious for reasons beyond simple accounting imprecision. Simply put, it's adding together the wrong things.

    Assertions that we'd need three (or five, or ten) Earths to support our now-unsustainable lifestyles may make for nice graphics, but miss a more important story. The key to sustainability isn't just reducing consumption. The key to sustainability is shifting consumption from limited sources to the functionally limitless.

    Broadly put, there are three different kinds of resources:

    LIMITED-SUBTRACTIVE
These are resources that have a finite limit, and once used, would be difficult or impossible to reuse. The most visible example would be fossil fuels, but most extractive resources would also fit this category. For some resources, the supply may be stretched through recycling, but that has limits as well. As a resource dwindles, the resulting high prices may make otherwise expensive extraction methods feasible, but eventually the resource will just be gone. In the language of economics, these are both rivalrous and excludable resources.

    The implication for the "N Earths" model: given enough time, we'd never have enough Earths. Oil will run out, whether in a decade or a millennium, as long as someone continues to use it.

    LIMITED-RENEWABLE
    These are resources that renew over time, but face a limit to total concurrent availability. These are largely (but not exclusively) organic resources: food, fish, topsoil, people. Water arguably could be included here, as well. These resources can be over-used or abused, but absent catastrophe, will eventually recover. Economically, these are considered rivalrous but non-excludable -- that is, they're the "commons."

    This is probably the closest fit for the "N Earths" concept, but misses two very important aspects: use management (encompassing conservation, efficiency, and recycling), which can alter the calculus of how much of a given resource may be considered "in use" in a sustainable environment; and substitution, which can cut or eliminate ongoing demand for a given resource (the classic example being guano as fertilizer).

    UNLIMITED-RENEWABLE
    These are resources that renew over time, but where the limits to availability are so far beyond what we could possibly capture as to make them effectively limitless. These run the gamut from energy (solar and wind) to materials (environmental carbon) to abstract phenomena (ideas). No current or foreseeable mechanisms could fully use the total output of these resources. Economically, they're both non-rivalrous and non-excludable.

Whereas limited-subtractive resources make any use unsustainable given enough time, with unlimited-renewable resources all uses are inherently sustainable.
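One way to make the framework concrete is to treat the categories as a function of the two economic axes used above (rivalrous/excludable) plus renewability. Here's a minimal sketch along those lines; the example resources and field names are chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    name: str
    rivalrous: bool    # does one person's use subtract from another's?
    excludable: bool   # can access be restricted (ownership, wells, fences)?
    renews: bool       # does the stock replenish on human timescales?

def category(r: Resource) -> str:
    """Map a resource onto the three classes described above."""
    if r.rivalrous and r.excludable and not r.renews:
        return "limited-subtractive"      # e.g., oil, copper
    if r.rivalrous and not r.excludable and r.renews:
        return "limited-renewable"        # e.g., fisheries, topsoil ("the commons")
    if not r.rivalrous and not r.excludable:
        return "unlimited-renewable"      # e.g., sunlight, ideas
    return "unclassified"

for r in [Resource("crude oil", True, True, False),
          Resource("ocean fisheries", True, False, True),
          Resource("sunlight", False, False, True)]:
    print(f"{r.name}: {category(r)}")
```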

    The argument behind the "N Earths" model is that we -- the global we, but especially the West -- need to reduce our consumption to the point where we no longer use more resources than the planet can provide. The argument behind this alternative model -- call it the "Smarter Earth" model -- is that we need to shift our consumption away from limited resources, especially limited-subtractive resources, as much as possible. It's not a question of consuming less (or more, for that matter), but a question of consuming smarter.

    The immediate rejoinder to this notion is that "we can't eat ideas or solar energy." That's superficially true; however, plants are embodiments of solar energy, and ideas can allow us to use limited resources more efficiently. It's not possible with current or foreseeable technologies to shift entirely to unlimited-renewable resources, but every step along the way improves our sustainability.

    Another response to this model is that it's essentially an argument for a techno-fix. Despite appearances, it's not. What I'm arguing for is more of a design framework, a guide for decision-making. Yes, that may often mean technological design, but it also encompasses community design (as John Robb has engaged in with his "Resilient Communities" work), economic design (do tax and regulation patterns promote a shift from limited-subtractive to unlimited-renewable consumption?), and especially memetic design (how do we construct a coherent narrative of what's happening around us?).

    The goal of shifting consumption boils down to this: moving from a "never enough Earths" model for society, to an "all the Earth we need" model.

    May 9, 2008

    The Suburban Question

    How do you green the suburbs?

    The bright green mantra, when it comes to the built environment, is that cities rule, suburbs drool. Cities are more (energy) sustainable, resilient, cultural, diverse, better for your waistline, surprise you with presents on your birthday, and so forth. Suburbs, conversely, are bastions of excessive consumption and insufficient sophistication, filled with McMansions and McDonalds, and are probably hitting on your spouse behind your back. My preferences actually align with that sentiment, but I've become troubled with the green urbanization push. The issue of the future of suburbia isn't as easy as simply telling people to move to cities.

    Gentle question: when you convince the masses of people living in the ring suburbs to move back downtown, what happens?

    (a) Everybody gets a place in the city, and a pony.
    (b) Prices for places in the city shoot up, even in "down and out" areas, driving out low- and moderate-income current residents, and stopping all but the higher-income suburbanites from returning. Without any ponies at all.

    Encouraging people to move from the suburbs closer to their place of work in the city because it's actually cheaper (when you include transportation) only works when nobody else does it. Once everybody -- or even a lot of people -- gets that bright (green) idea, the combination of increased demand and limited availability drives up prices. As big as cities may be, there are lots of people in the 'burbs. It may be possible to build more housing within the urban core, but you have one guess as to which neighborhoods are likely to be the ones knocked down to make way for new high-rise condos.

    We're already seeing the reverse of the old "white flight" trope, where middle-class whites abandoned cities for the suburbs. Gentrification (with the artists as the "shock troops," we're told), re-urbanization, even "black flight" to the suburbs upset the conceptual models of the built environment that remained dominant in the US for the last few decades. Cities are back... and the suburbs may be abandoned to the low-income.

    Everywhere? No. Overnight? No. An important trend? Very much so.

    Why? Because figuring out how to make suburbs sustainable is increasingly an act of environmental justice. The displaced urban poor and middle-income will be even less able to afford the energy, transportation, and health costs of environmental decline.

    We need to figure out how to upcycle the suburbs. It may involve traditional green ideas such as mass transit and bicycles; it may involve something a bit more complex, like a specialized version of LEED for neighborhoods.

But we need more innovation than that. Not just technology -- while cheap solar building materials wouldn't be bad at all, the real innovations in resilience and sustainability will come in the realm of policy and behavior. Society and culture. Not just the physical infrastructure, but the connective sinews of communities. Metaphorical language is all we have now to describe it, because it hasn't yet been invented.

    But here's the golden hope: the first one(s) to figure out how to do this, how to make suburbia sustainable and to do so at a breathtakingly low cost, will win the world. Because, as much as China and India and South Africa and Brazil are hot to get their hands on their local iterations of the 1950s American Dream -- a house, two giant cars, and a TV in every pot -- they'll be desperate to figure out how to afford it pretty damn soon. They'll be looking for this same elusive model, and will pay well for it.

    May 5, 2008

    Pondering Fermi

The Fermi Paradox -- if there's other intelligent life in the galaxy, given how long the galaxy's been here, how come we haven't seen any indication of it? -- is an important puzzle for those of us who like to think ahead. Setting aside the mystical (we're all that was created by a higher being) and fundamentally unprovable (we're all living in a simulation), we're left with two unpalatable options: we're the first intelligent species to arise; or no civilization ever makes it long enough. The first one is unpalatable because it suggests that our understanding of the biochemical and physical processes underlying the development of life has a massive gap, since all signs point to the emergence of organic life under appropriate conditions being readily replicable. The second one is unpalatable for a more personal reason: if no civilization ever survives long enough to head out into the stars, what makes us think we'd be special?

    But I think there might be a third option.

    (Warning: the rest hidden in the extended entry due to extreme geekitude.)


    April 7, 2008

    The Big Picture: Resource Collapse

(The Big Picture is my series on the major driving forces likely to shape the next 20 years. The first post, on Climate Change, went up in early February.)

    Truism #1: Human society's continued existence depends on the sustained flows of a variety of natural resources.
    Truism #2: What that set of natural resources comprises can change over time.

    We (the human we) have pushed the limits of many of the resources our civilization has come to depend upon. Oil is the most talked-about example, but from topsoil to fisheries, water to wheat, many of the resources underpinning life and society as we know it face significant threat. In many cases, this threat comes from simple over-consumption; in others, it comes from ecosystem damage (often, but not always, made worse by over-consumption).

    The most obvious cause of over-consumption is population. Long a contentious issue for environmentalists, the argument that "we have too many people," logical in theory, faces serious ethical questions when turned to practice. One example: how do we decide who gets to continue living? Over-consumption is compounded by rising standards-of-living allowing more people to consume even more than before, and by a historically-rooted assumption that the Earth is big and can always provide.

But some resources simply have limits -- there's a maximum amount of oil to be extracted, or copper to be dug up. Some resources (topsoil, fisheries) can renew themselves, but at a rate far slower than our use. Unfortunately, what we've seen from other dwindling resources is that humans have a tendency to try to grab the last bits for themselves, even at the expense of others. This is the so-called "tragedy of the commons," and its most visible present-day manifestation has to be ocean fisheries. Many seafood species are on the verge of total collapse, perhaps even extinction; official efforts to limit or halt fishing of certain species face desperate communities dependent upon the industry.

    The other driver for resource collapse, ecosystem damage, is somewhat more complex. In some cases, such as honeybees, we still have little certainty as to why the resource is in such danger. In the case of wheat, the risk comes from a combination of human and natural activity.

    If you hadn't heard that wheat is threatened, you're not alone. It's a relatively recent problem: a fungus known as Ug99. Emerging in Uganda in 1999 (hence the name), this black stem rust fungus seemed to be slowly moving north into the Middle East, not yet hitting locations dependent upon wheat as a primary food crop; this slow movement seemed to offer biologists time to come up with effective counters and to breed resistant strains of wheat, a time-consuming process. But that luck didn't hold.

    ...on 8 June 2007, Cyclone Gonu hit the Arabian peninsula, the worst storm there for 30 years.

    "We know it changed the winds," says Wafa Khoury of the UN Food and Agriculture Organization in Rome, because desert locusts the FAO had been tracking in Yemen blew north towards Iran instead of north-west as expected [...]. "We think it may have done that to the rust spores." This means, she says, that Ug99 has reached Iran a year or two earlier than predicted. The fear is that the same winds could have blown the spores into Pakistan, which is also north of Yemen, and where surveillance of the fungus is limited.

In Iran, the spore will encounter barberry bushes, which trigger explosive reproduction of Ug99 (and more potential for mutation). From Iran to Pakistan, and then to India (much more dependent upon wheat) and to China. From China, it can blow to North America (as dust and soot do already). The fungus defeats the resistance bred into current strains of wheat, because it initially faced monocultures of wheat with single markers for resistance, allowing for easy mutation and replication.

    I'm just glad the Norwegian seed vault is now up and operating. But as disturbing as the potential for collapse may be, the second truism listed above offers cause for hope.

Ecosystem services is the term to remember this time around. It's tempting to think of ourselves as dependent upon the resources we currently use, but that's not quite right. What we depend upon are the services the various resources provide -- the energy, for example, or the protein. In principle, if we can receive those services a different way, we may avoid the repercussions of the collapse of a particular resource. It's true that, in some cases (like water), the resources effectively are the services, but even here, we have to be careful not to think of a particular source (e.g., aquifers) as being the only possibility.

    Bird poop provides an instructive example. In the 19th century, guano from birds native to Peru offered the world's best form of fertilizer -- so good that guano became the subject of imperial ambitions, national laws, and international tension. In "When guano imperialists ruled the earth," Salon's Andrew Leonard quotes from President Millard Fillmore's 1850 state of the union address:

    Peruvian guano has become so desirable an article to the agricultural interest of the United States that it is the duty of the Government to employ all the means properly in its power for the purpose of causing that article to be imported into the country at a reasonable price.

    But by the end of the century, the market for guano had collapsed, along with Peru's economy, because of the development of industrial "superphosphate" fertilizer. It's worth noting that, even if superphosphate hadn't been developed, Peru would have been in trouble -- the supplies of guano were just about depleted by the time the market collapsed. That's right: The world was facing "Peak Guano," only to be saved by catalytic innovation.

    Resource Collapse and... Climate Change
I addressed this in The Big Picture: Climate Change, but as I noted a week or so ago, a recent article by NASA's James Hansen points to another intersection. In "Implications of 'peak oil' for atmospheric CO2 and climate" (PDF), Hansen and colleague Pushker A. Kharecha argue that the effort to keep atmospheric carbon levels below 450ppm (widely considered the seriously-bad-news tipping point) may be greatly helped by limitations on the amount of available oil. With a reasonable phase-out of coal, active measures to reduce non-CO2 forcings (including methane and black soot), and draw-down of CO2 through reforestation, limiting CO2 to 450ppm can be readily accomplished because of the limits on oil reserves. This doesn't require the most aggressive peak oil scenarios, either -- simply using the US Energy Information Administration's estimates of oil reserves is enough. Using more aggressive numbers, atmospheric CO2 peaks at 422ppm.

    We may end up avoiding catastrophic climate disruption despite our own best efforts.
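To get a back-of-the-envelope sense of why reserve limits matter to that arithmetic, here's a minimal sketch. The reserve and airborne-fraction figures below are rough illustrative assumptions (not the EIA's or Hansen and Kharecha's numbers); the one near-constant is that adding roughly 2.13 gigatonnes of carbon to the atmosphere raises CO2 concentration by about 1 ppm.

```python
# Back-of-the-envelope: how much can a fossil carbon pool add to atmospheric CO2?
# Illustrative inputs only -- not the EIA or Hansen/Kharecha figures.
GTC_PER_PPM = 2.13          # ~2.13 GtC in the atmosphere corresponds to ~1 ppm CO2
AIRBORNE_FRACTION = 0.45    # rough share of emitted CO2 that stays in the air

def ppm_added(carbon_gtc: float) -> float:
    """Approximate ppm increase from burning a given stock of carbon (in GtC)."""
    return carbon_gtc * AIRBORNE_FRACTION / GTC_PER_PPM

baseline_ppm = 385                       # roughly the 2008 level
remaining_oil_gtc = 140                  # illustrative stand-in for conventional oil
print(f"Oil alone adds ~{ppm_added(remaining_oil_gtc):.0f} ppm "
      f"on top of {baseline_ppm} ppm")                       # ~30 ppm: under 450
print(f"Oil plus 300 GtC of coal adds ~{ppm_added(remaining_oil_gtc + 300):.0f} ppm")
# ~93 ppm: over 450 -- which is why the coal phase-out carries the argument
```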

    Resource Collapse and... Catalytic Innovation
    The clearest connection between resource collapse and catalytic innovation is in the realm of substitution services. Nobody wants oil, for example, people want what can be done with oil. That can mean other forms of energy, such as electricity (for transportation), or it may mean other sources of hydrocarbons, such as thermal polymerization (for plastics), and so forth. The big concern: will the substitute technologies be ready by the time the resource is (effectively) gone?

    Often, the issue really isn't technology, but expense and willingness to change. Driving the cost of alternatives down to make them competitive with the depleting resource can be difficult; even more difficult can be getting people to accept a substitution service that isn't exactly like the old one (even if it's objectively "better"). Cultured meat would be far and away better than today's meat processing industry -- environmentally, ethically, health-wise -- but, even if the product looked, tasted and felt just like "real" meat, a substantial number of people would likely avoid it simply because it was weird.

    More important may be questions of culture and "ways of life." Substitutions rarely mean the same workforce providing one resource shifts seamlessly over to its replacement; more often, the substitute comes from an entirely different region, or may require different kinds or numbers of workers.

    It also means a change in mindset or interpretations of the world around us. I've commented before about the imminent emergence of photovoltaic technologies allowing us to make nearly any surface a point of power generation. To an extent, this seems superficially obvious, but try taking a walk or drive with your mind's eye set on what would be different with a solar world. What rationale would we have, for example, for not giving any outside surface a photovoltaic layer? How would we design the material world differently? What would disappear -- and what would suddenly become ubiquitous?

    Or there may be larger issues of infrastructure delaying an otherwise "easy" transition. Take alternative power vehicles: in many ways, making the cars & trucks run on clean energy will be the easy part. Think of all of the gas stations that would have to change or go out of business; think of all of the jobs lost when old skills become less valuable; think of the thousands of car repair places needing to retrain and retool. If you take the scenario I posited in The Problem of Cars last year, imagine all of the elements of the present day that would have to change in order for it to become possible.

    Resource Collapse and... Ubiquitous Transparency
    As with the climate, the role of ubiquitous transparency is to keep a close eye on the flows of production and consumption that might otherwise be invisible (at least until it's too late).

    The scientific benefits would likely be the proximate driver. Whether the ultimate users are regulatory officials or participating panopticoneers depends on the balance of top-down vs. bottom-up power. Ultimately, it won't just be the points of production being watched, it will be the points of consumption, as well.

    Resource Collapse and... New Models of Development
    This is both harsh and simple.

    If the newly-developing nations persist in trying to follow a Western path of development, then the competition for dwindling resources will end up as a critical point of tension and, likely, warfare. The more powerful nations will scrape by, while the ones less-able to throw their weight around will suffer. The more that the newly developing nations emulate Western consumption, the more that they're likely to face famine, economic collapse, and millions of casualties.

    Conversely, if the newly-developing nations take a leapfrog-alternatives path, with a strong emphasis on efficiency and experimentation, they could find themselves the eventual winners of the century. The leapfrog concept is straightforward -- the areas with less legacy infrastructure can adopt new systems and models faster -- and emerging catalytic technologies and economic models seem custom-made for new adopters. But this isn't without risk; the new systems and models are intrinsically unproven, and may not work as well as expected. Leapfrogging nations may find themselves facing famine, economic collapse, and mass deaths anyway, and probably compounded by the expenditure of resources needed by the leapfrog systems and the loss or weakening of the old systems.

    Resource Collapse and... The Rise of the Post-Hegemonic World
    Resource collapse isn't the cause of the rise of the post-hegemonic world, but it's an important driver. It weakens the powerful, and opens up new niches of influence. It triggers conflict, setting the mighty against the mighty. It reveals vulnerabilities.

    Most importantly, it sets up the conditions for the emergence of new models of power, as ultimately the most effective responses to resource collapse will come from revolutions in technology and socio-economic behavior. Those actors adopting the new successful models will find themselves disproportionately powerful.

    Right now, none of the leading great power nations seem well-suited to discover and adopt such new models. The same can be said of the leading global corporate powers. The climate and resource crises of the 2010s and 2020s will be compounded by a vacuum of global leadership.

    Ultimately, I suspect that the identity of the pre-eminent global actors of the mid-21st century will surprise us all.

    April 1, 2008

    Yeats Signals

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;
    Mere anarchy is loosed upon the world,
    The blood-dimmed tide is loosed, and everywhere
    The ceremony of innocence is drowned;
    The best lack all conviction, while the worst
    Are full of passionate intensity.

    -William Butler Yeats, The Second Coming

    Setting aside its religious imagery, the opening stanza of The Second Coming remains one of my favorite go-to sources for "uh oh" language in my writing.

    In conversation at IFTF this morning, a reference to a profound oddity in crop markets led to the coining of the phrase "Yeats Signals," a play on the IFTF term "weak signals" (referring to subtle indicators of big changes). The profound oddity is this:

    Whatever the reason, the price for a bushel of grain set in the derivatives markets has been substantially higher than the simultaneous price in the cash market.

    When that happens, no one can be exactly sure which is the accurate price in these crucial commodity markets, an uncertainty that can influence food prices and production decisions around the world. [...]

    Market regulators say they have ruled out deliberate market manipulation. But they, too, are baffled. The Commodity Futures Trading Commission, which regulates the exchanges where these grain derivatives trade, has scheduled a forum on April 22 where market participants will discuss these anomalies and other pressure points arising in the agricultural markets.

    This simply should not be happening, and yet it is. As an indicator of major instabilities in what had been structurally stable (if not always predictable) markets, it's a big one. Big enough that it wouldn't take much to imagine this as a sign of a major financial crisis in the global food market -- something with profound economic and health implications for everyone, including the rich countries.
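For those who want to operationalize the term, here's a minimal sketch of the kind of check that would flag this particular anomaly: futures and cash prices are supposed to converge as a contract approaches delivery, so a large, persistent basis (futures minus cash) near expiry is the weak signal in question. The prices and threshold below are invented for illustration.

```python
def basis(futures_price: float, cash_price: float) -> float:
    """Basis, defined here as futures price minus cash (spot) price."""
    return futures_price - cash_price

def yeats_signal(futures_price: float, cash_price: float,
                 days_to_expiry: int, threshold: float = 0.10) -> bool:
    """Flag a failure of convergence: close to delivery, the basis should be
    shrinking toward zero; a large persistent gap is the anomaly described above."""
    if days_to_expiry > 30:          # only worry when delivery is near
        return False
    return abs(basis(futures_price, cash_price)) / cash_price > threshold

# Made-up numbers, dollars per bushel:
print(yeats_signal(futures_price=9.80, cash_price=8.50, days_to_expiry=5))   # True
print(yeats_signal(futures_price=8.55, cash_price=8.50, days_to_expiry=5))   # False
```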

    It seems to me that we've been seeing more than our fair share of Yeats Signals lately.

    January 11, 2008

    Paul Saffo on Forecasting

Paul Saffo gave the Long Now Seminar tonight. Here are some of his more telling observations:

    The biggest mistake a forecaster can make is to be more certain than the facts suggest.

    When changes cluster at the extremes, it's a certain bet that more fundamental change lies ahead.

    The future constantly arrives late and in unexpected ways.

    Good "backsight" is necessary for good foresight.

    Cherish failure -- we fail our way into the future.

    October 27, 2007

    The Second Uncanny Valley


    The "Uncanny Valley" is the evocative name for the commonplace reaction to realistic-but-not-quite-right simulated humans, robotic or animated. Most of us, when encountering such a simulacrum, have an instinctive "it's creepy" response, one that is enhanced when the sim is moving. Invented by roboticist Masahiro Mori, the Uncanny Valley concept is typically applied to beings (broadly conceived) as they become increasingly similar to humans in appearance and action.

    But what about beings as they become less similar to humans -- following the path of transhumans and, eventually, posthumans?

    An article in the latest issue of New Scientist (subscription required) prompted this question. Thierry Chaminade and Ayse Saygin of University College London began to investigate how the Uncanny Valley phenomenon worked, and performed brain scans on people encountering simulacra of varying degrees of human likeness. They found spikes of activity in the parietal cortex.

    This area of the brain is known to contain "mirror neurons", which are active when someone imagines performing an action they are observing. While watching all three videos, people imagine picking up the cup themselves. Chaminade says the extra mirror neuron activity when viewing the lifelike robot might be due to the way it moves, which jars with its appearance. This "breach of expectation" could trigger extra brain activity and produce the uncanny feelings.

    The response may stem from an ability to identify - and avoid - people suffering from an infectious disease. Very lifelike robots seem almost human but, like people with a visible disease, aspects of their appearance jar.

    Clearly, such a reaction does not require that the observed "human" actually be sick, only that its behavior and/or physiological characteristics seem a bit off. This could, conceivably, include human beings with "enhanced" characteristics -- "H+" in the current jargon.

    Science fiction visions of space-adapted posthumans with hands for feet or wings for low-gravity flight would obviously seem at least "a bit off," but the enhancements need not be that radical. In fact, it's possible -- even likely -- that the less-radical changes would end up being more disturbing. Enhancements to optical capabilities might change the appearance of the eye. Improved neuromuscular systems might make everyday actions -- grabbing a coffee cup, picking up a child, even walking along the street -- look unnatural. Accelerated cognition might make verbal interactions disjointed, even bizarre.

    As long as these changes fall into the broad ranges of current human variety, we'd be unlikely to see an unusually negative response. But if they are clearly outside the realm of the "expanded normal," and if they have external manifestations that are readily identifiable, it may very well be that the reactions of unmodified people -- and perhaps even the reactions of other "H+" individuals! -- are significantly more negative than one might expect. In this scenario, the enhanced person wouldn't just seem weird, he or she would seem wrong.

If this is possible, then it has profound social and political implications for the agendas of transhumanists and other H+ advocates of human enhancement technologies.

    For example, if the typical reaction of unmodified people to enhanced humans is "that guy really creeps me out," it may be easy for opponents of these technologies to generate a legal and cultural backlash.

    Similarly, if the gut reaction to a moderately modified human is to see him or her as no longer human, political struggles could get very ugly very quickly.

    It's unlikely that the first generations of human enhancement technologies -- which would most likely just be adaptations of therapeutic medical technologies -- would engender this kind of response. But if we follow the logic of the human enhancement model, we will at some point over this century start to introduce changes to the human physiological and behavior model that will fall well outside the realm of human variability. It's possible that we'll have enough other kinds of simulacra and non-human persons in our midst that we'll take such modifications in stride, and have no qualms about keeping the transhumans in the human family.

    But it's also possible -- arguably, more possible -- that the emergence of significant modifications to humanity will trigger deep responses in the human brain, ones that we may very well not like.

    October 15, 2007

    The Deep Beyond

    Oh, and my contribution to Blog Action Day? Simply this:

[Image: Saturn eclipsing the Sun, as seen by the Cassini probe]

    It's a picture of Saturn, taken by the Cassini probe. It's a shot of Saturn eclipsing the Sun -- a view that we could never get from Earth. Cassini was launched a decade ago, and has given us incredible science and beautiful images of our solar system's second most awe-inspiring planet. But look closely at the picture, just above the rings on the left side.

[Image: zoomed-in detail of the same shot, showing a tiny blue dot]

    That little blue smudge visible above Saturn's ring, barely 2-3 pixels across?

    That's us.

    Everything we have done, every life lived, everything we are, is little more than a tiny dot. Our world is far more fragile than we might wish, but there's nothing else like it that we've yet found. We abuse it at our peril.

    September 27, 2007

    Security through Ubiquity

    Another idea I want to get out and into at least my working lexicon.

Security through Ubiquity refers to the reduced vulnerability to attack that can come from being part of a transcendently common multitude; in this context "attack" includes social opprobrium and the deleterious effects of a loss of privacy.

    This apparent security comes from several sources:

  • An abundance of identical items/behaviors can make it proportionately less likely that one's own item/behavior gets targeted. ("Weak" security through ubiquity.)
  • An abundance of identical items/behaviors can lessen the desire to attack the item/behavior -- the item/behavior is not scarce, unusual or out-of-place. ("Moderate" security through ubiquity.)
  • An abundance of identical items/behaviors can mean that many, many people know how to recognize and potentially resolve or mitigate damage from misuses or abuses of that item or behavior. ("Strong" security through ubiquity, overlaps with open source security argument.)

    The example of this that comes to mind is the increasingly commonplace appearance of "inappropriate" pictures and personal stories on publicly-visible social networking sites, websites, and chat logs. In an era when such appearances were unusual and/or out-of-place, participants could be easily targeted and social norms readily enforced. In an era when such appearances are commonplace, it becomes harder to generate ongoing interest or opprobrium absent another factor that makes the appearance scarce or unusual (e.g., celebrity status).

    This is why I don't believe that the up-and-coming network generation will be particularly harmed professionally or socially in the future by "wild" behavior documented online today.

Molecular Rights Management

    I'll have more to say about this soon, but I just want to toss the idea out to the noösphere and make it visible.

    Molecular Rights Management refers to the panoply of technologies employed to prevent the unrestricted reproduction of the products of molecular scale (atomically-precise, nano-fabricated) manufacturing technologies. The source concept for the term is digital rights management, technologies employed to prevent the unrestricted reproduction of digital products. As of yet, no actual molecular rights management technologies exist.

MRM is likely to emerge for two primary reasons: the continued need for intellectual property controls, so as to prevent a wave of "Napster fabbing"; and the need for security to prevent the production of controlled goods ("assault rifles," figuratively or literally).

    MRM could reside in the design media (the CAD files and the like), such as with single-execution licenses, digital watermarks, and so forth.

    MRM could reside in the production hardware (the "nanofactory"), such as with systems that "store" all designs online (no local storage), blacklist systems that a nanofactory would check an input design against, smart systems that recognize disallowed designs as they are being made, even in disconnected parts, and so forth.

    MRM could reside in the network, with agents that check the designs loaded in a nanofactory for proper licensing information.
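To make that architecture a little more concrete, here's a purely hypothetical sketch of the hardware-resident flavor: a nanofactory controller that checks a design's hash against a blacklist and a license table before fabricating. Every name and data structure here is invented for illustration; as noted above, no actual MRM technologies exist.

```python
import hashlib

def design_hash(design_file: bytes) -> str:
    """Fingerprint a design file so it can be matched against policy lists."""
    return hashlib.sha256(design_file).hexdigest()

# Hypothetical policy data a nanofactory might hold locally or fetch from the network.
BLACKLIST = {design_hash(b"hypothetical controlled-weapon CAD data")}      # disallowed
LICENSES  = {design_hash(b"hypothetical licensed water-filter CAD data"): 3}  # runs left

def may_fabricate(design_file: bytes) -> bool:
    """Allow fabrication only if the design isn't blacklisted and has runs remaining."""
    h = design_hash(design_file)
    if h in BLACKLIST:                 # security side: controlled goods
        return False
    runs_left = LICENSES.get(h, 0)     # IP side: limited-execution license
    if runs_left <= 0:
        return False
    LICENSES[h] = runs_left - 1        # consume one licensed run
    return True

print(may_fabricate(b"hypothetical licensed water-filter CAD data"))  # True
print(may_fabricate(b"hypothetical controlled-weapon CAD data"))      # False
```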

    Given that the final results of a nanomanufactured product can, in principle, be used without any need to connect back to the original fabber or design, the impact of MRM on end-users is likely to be less onerous than the impact of DRM has been on the users of digital media. Couple that to the safety/security aspects, and it seems to me that MRM is likely to be broadly tolerated, and potentially even accepted.

    July 13, 2007

    The Futures Meme

    Okay, this is one everyone can play with, and hopefully won't lead to veiled recriminations and bitter feuds in the comments. It's also perfect for a lazy summer weekend.

    This one nicely riffs on a few recurring themes here at OtF: open source scenarios, human agency (that is, the future is something we do, not something done to us), and the possibility of achieving a positive future. It's one of those "web meme" things that kids today are all talking about; I'll take the traditional path and tag five people, and encourage them to tag five of their own (etc.). Please feel free to play along in the comments or on your own blogs. Here we go:

    Fifteen years is a useful time period for thinking about the Future. It's long enough that we'll go through a couple of major political cycles, see noticeable improvement in common technologies, and undoubtedly experience a radical breakthrough or two. At the same time, it's near enough that most of us will expect to still be around, living lives that might not be too different from today's.

    So here's the task: Think about the world of fifteen years hence (2022, if you're counting along at home). Think about how technology might change, how fashions and pop culture might evolve, how the environment might grab our attention, and so forth. Now, take a sentence or two and answer...

  • What do you fear we'll likely see in fifteen years?
  • What do you hope we'll likely see in fifteen years?
  • What do you think you'll be doing in fifteen years?

    There are no wrong answers here -- only opportunities to surprise, provoke and amuse.

    Here are mine:

• Fear: I'm afraid that we'll have hit a climate tipping point much sooner than anticipated: storms, floods, drought, disease, and more, all leading to millions upon millions of refugees, drawing upon dwindling resources and wondering what disasters await them.
  • Hope: I expect that we'll have working cures for most forms of cancer by 2022, probably sooner. Most of the treatments will involve inert IR-sensitive nanospheres, some types of which tend to accumulate in tumorous growths -- and when illuminated with an IR laser (which passes harmlessly through tissue) heat up enough to burn away cancer. Animal trials in 2005 saw a 100% success rate with some cancer types.
  • Doing: I figure that, by 2022, I'll be well-established in the US government's Department of Foresight (based on the UK's Foresight Directorate, which exists now), started as an Executive Office group during President Gore's first term (2017-2020), and expanded into a full Department in his second.

    Tags:

    Let's see....

    I'd like Jon Lebkowsky, David Brin, Dale Carrico, Siel, and Rebecca Blood to give this one a whirl. Don't forget to tag five more of your own, and link back here in the comments when you're done.

    (And if you're a regular and I didn't tag you, I'm sorry, I'm a bad person, but please don't let that stop you from giving it a shot anyway, either in the comments here or at your own site.)

July 1, 2007

    An Insufficient Present

    I've had three particular web pages open in my browser for a couple of weeks now. I knew that they were saying something to me, but I wasn't quite sure what. I think I may now have finally figured it out.

    The future belongs to those who find the present insufficient.

    The phrase is a deliberate variation of something that Clay Shirky argued recently, that the future belongs to those who take the present for granted. By this, Clay means that people who can accept the (technological) conditions of the present are better-able to see what's next than people who are still wrestling with whether those conditions of the present make sense. He cites Freebase and Wikipedia in this: while some people still argue about whether Wikipedia is a good thing, folks at Metaweb are already building a next-generation collaborative knowledge base.

    Look at these two graphs, generated by Forrester Research for the New York Times and for Business Week.

The New York Times graph shows the comparative value of mobile phones, computers and television across five different generational cohorts*. For Gen Y, computers and phones are more important than TV, in that order; for Gen X, phones and television swap rank, with computers still on top; for the remainder, TV is the most personally valuable technology of the three. The Business Week graph splits similar cohorts (Gen Y has "Youth" split out at the bottom end, and a "Young Teen" group is added below that) along six different online usage patterns. What's notable is that, although these are all ostensibly computer-based activities, some of the activities map nicely to abstracted uses of TVs and phones. The same cohorts that put TV above computers and phones predominantly engage in passive consumption of online content; the same cohorts that put phones above the others predominantly engage in social networking. (Gen X'ers seem to do a little bit of everything.)

    Now, from the "takes the present for granted" perspective, these graphs can be interpreted to mean something along the lines of Boomers are still trying to figure out if social networking tools are a good thing, even while younger generations are just going ahead and using them as if they've always been there. That maps to the moral panic we've seen about MySpace and the like. As older generations say "wait, it can do *that*?" the leading edge says "of *course* it can."

    But taking the present for granted is not enough. Saying "of *course* it can do that" isn't a catalyst for change, it's a symptom of complacency; it's looking back with a sneer at what has gone before, forgetting that the present that one takes for granted will be just as ridiculous soon enough. Transformation comes from saying "...but why can't it do *this*?"

And this is about more than technology. The exact same set of reactions -- "wait," "of course," and "but why" -- applies equally well to social and political phenomena. We could apply the reasoning to global warming, for example:

  • As the entrenched economic and political leaders fight over whether or not we should do anything about it...
  • ...up and coming cohorts have already gotten past that debate, and take it for granted that action is required...
  • ...even as the people who will take charge of tomorrow are asking not just how to stop global warming, but how to use the effort to make the world a better place.

    Dissatisfaction with the present, not simply acceptance of it, drives change.

*Note: The age splits for those cohorts are inaccurate: "Boomers" skews too young for both start and end years, and "Seniors" is not a generational cohort description but a chronological age description -- it should be "Silent Generation."

June 17, 2007

    Long-Term Deposits

Failure happens. Strategic plans that don't take into account the possibility of failure -- and don't propose pathways to adaptation or recovery -- are at best irresponsible, at worst immoral. The war in Iraq offers an obvious example, but the potential for failure in our attempts to confront global warming* may prove to be an even greater crisis. This is why I'm so adamant about the need to study the potential for geoengineering: we need to have a backup plan. And if that fails to head off global disaster, or (if done without sufficient study and preparation) exacerbates problems further, we need a last-ditch plan for recovery.

    What does it mean to prepare for recovery? I've described it before: a civilization backup, holding a full record of who we are as a civilization, built in a way to facilitate recovery after a global disaster. This is something of an ambitious plan, however, and is not likely to even be considered for decades. In the meantime, a smaller-scale project would be entirely feasible -- and it turns out that such a smaller-scale backup appears to now be underway: the Book & Seed Vault.

    So we pose this question: Are we as a civilization to be knocked back to a hunter-gather stage, or is there a way we can leave a legacy that provides for the future of mankind? [...] The Book and Seed Vault, Inc. has been formed for this purpose— gather and safely maintain long term storage of our civilization’s knowledge, plant seeds and medicinal seeds.

    Much like the seed storage facility in Norway, the Book & Seed Vault would maintain supplies of seeds for key edible and medicinal plants; unlike the Norwegian effort, the Vault would also include an assortment of books, mixing academic, instructional (including a full selection of MAKE magazine, I hope!), and cultural. Long-term plans include underground concrete bunkers dotting the continent, but for now, the initial vault will be in rural Oregon.

    It's clear that the Book & Seed Vault is a very new organization, with great ambitions but limited resources. They just started up in the last few months, and their expertise seems a bit uneven -- lots of detailed info about preserving books, but more general plans for handling the seeds. I suspect that they'll drop the plans to archive CDs and DVDs in short order, when they look at the infrastructure involved for handling electronic media.

    In fact, the Book & Seed Vault may prove to function better as a model and instructions than as an actual vault. We'd need more than one site for any kind of disaster recovery system to be truly useful; we have to assume that many of the eventual locations will be unavailable, so the more the better. The right scale for something like this is probably the "community" -- a bit bigger than your neighborhood, but smaller than a city.

    Think of it as open-source disaster prep -- a site and set of resources offering detailed instructions (which can be updated by the users, of course) showing you how to build a recovery vault for your community. What are the physical specs for the facility? Which seeds are appropriate for your regional climate? What are the key instruction manuals and guidebooks to include? How best to store and protect the vault's contents? I could see this done as a wiki and mailing list, probably with some YouTube videos demonstrating various techniques for proper seed and book storage.

    This kind of idea isn't simply updated survivalism, it's part of a larger effort to develop greater social resilience.

    Now there's a sequel to Mad Max I'd go see: a post-disaster society run by farmers and librarians!


    *[If global warming isn't a sufficiently compelling threat for you, substitute the existential problem of your choice: asteroid strike; zoonotic pandemic; biowarfare; molecular manufacturing-based warfare; unfriendly AI. As long as the disaster remains limited to a Class 1 or Class 2 Catastrophe (i.e., some humans left alive to try to recover), human civilization would have a chance to return.]

    June 13, 2007

    Warning!

My colleague at the Institute for the Future, David Pescovitz, stuck my face on BoingBoing in the link to the Accidental Cyborg article (yikes!), but he also told me about this terrific sticker made for Gareth Branwyn's hip replacement.

    May 24, 2007

    City Planet

    Wednesday, May 23, 2007. Remember that date. It's the day the Earth became an urban planet.

    Working with United Nations estimates that predict the world will be 51.3 percent urban by 2010, the researchers [demographers from North Carolina State University and the University of Georgia] projected the May 23, 2007, transition day based on the average daily rural and urban population increases from 2005 to 2010. On that day, a predicted global urban population of 3,303,992,253 will exceed that of 3,303,866,404 rural people.
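The projection method itself is simple linear extrapolation; a minimal sketch follows. The starting populations and daily increments are rough, made-up stand-ins rather than the researchers' actual inputs, though they land in the same general neighborhood.

```python
from datetime import date, timedelta

# Rough, illustrative inputs (not the NCSU/UGA figures): world urban and rural
# populations on Jan 1, 2005, plus average daily changes projected through 2010.
urban, rural = 3_150_000_000, 3_330_000_000
urban_per_day, rural_per_day = 180_000, -20_000

day = date(2005, 1, 1)
while urban <= rural:                  # step forward until urban passes rural
    urban += urban_per_day
    rural += rural_per_day
    day += timedelta(days=1)

print(f"Projected urban/rural crossover: {day}")   # lands in mid-2007
```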

    For the first time in history, more people live in cities than in rural areas. This is, in many ways, the single most important indicator of whether we'll survive this century. Here's why:

    Urban centers support people more efficiently than do small towns, villages, and the countryside. This isn't just true environmentally or economically; it's arguably also the case when it comes to the kind of intellectual ferment that drives innovation. New ideas are the sparks coming from the friction between minds -- and you get a lot more friction in the city. Urban growth, over time, makes us all stronger.

    Cities require complex support systems, however. Complex infrastructure offers plenty of opportunities for failure, whether via natural disasters or human causation. Isolated failures will happen, and not pose a systemic threat. But repeated -- or un-repaired -- system failures would inevitably drive people out of the cities, by choice or by necessity.

    As long as the overall proportion of urban dwellers to rural denizens continues to grow, we can reasonably conclude that human civilization is doing a decent job of maintaining its overall system integrity. If that pattern reverses -- if we start to see the proportion of urban to rural edge back towards rural dominance -- it's time to look for signs that civilization's systems are collapsing.

    April 13, 2007

    The Sin of Worldbuilding

    Forgive me, Warren, but I must disagree.

    Every moment of a science fiction story must represent the triumph of writing over worldbuilding.

    Worldbuilding is dull. Worldbuilding literalises the urge to invent. Worldbuilding gives an unnecessary permission for acts of writing (indeed, for acts of reading). Worldbuilding numbs the reader’s ability to fulfil their part of the bargain, because it believes that it has to do everything around here if anything is going to get done.

    Above all, worldbuilding is not technically necessary. It is the great clomping foot of nerdism. It is the attempt to exhaustively survey a place that isn’t there. A good writer would never try to do that, even with a place that is there. It isn’t possible, & if it was the results wouldn’t be readable: they would constitute not a book but the biggest library ever built, a hallowed place of dedication & lifelong study. This gives us a clue to the psychological type of the worldbuilder & the worldbuilder’s victim, & makes us very afraid.

    See, what he misses here is that Worldbuilding is its own form of art, and very much its own kind of business. Worldbuilding is what I do on pretty much every gig for the Institute for the Future, for Global Business Network, for Monitor Institute, and for essentially every corporate, government, or non-profit client I've worked with over the last decade. That great clomping foot of nerdism is what the clients want to see, because they can then use that as a backdrop for their own stories about their organizations.

    The art of Worldbuilding comes from knowing what to omit, from knowing what needs to be surveyed and what can be tacked up as a Potemkin Future. It becomes an intensely detailed game, figuring out what the readers want to know, covering what they need to know, teasing them with the implications of a fuller vision, and creating an effective illusion of paradigmatic completeness.

    Harrison has it wrong: it's not the biggest library ever built, it's a painting of a library that seems to go on and on, with some prop books on a table in the foreground. Make sure those prop books are interesting enough, and the reader will never try to explore the rest of the library.

    March 18, 2007

    Information, Context and Change

    I've long been a proponent of the core Viridian argument that "making the invisible visible" (MTIV) -- illuminating the processes and systems that are normally too subtle, complex or elusive to apprehend -- is a fundamental tool for enabling behavioral change. When you can see the results of your actions, you're better able to change your actions to achieve the results you'd prefer. I've come to understand, however, that it's not enough to make the invisible visible; you also have to make it meaningful.

    The canonical example of how MTIV works is the mileage readout standard in hybrid cars. Almost invariably, hybrid owners see a gradual but noticeable improvement in miles-per-gallon over the first month or so of hybrid vehicle ownership. This isn't so much the car being "broken in," but the driver: because of the mileage readout, the hybrid driver can see what driving patterns achieve the best results.

    A growing number of non-hybrid cars now include miles-per-gallon readouts; will we see similar improvements in driver behavior as a result?

    Possibly, but not likely. The hybrid miles-per-gallon readout comes in two forms: an average mileage, whether calculated for the current tank or the total vehicle miles; and a real-time, current mileage display, which will fluctuate significantly while one drives. As far as I have found, the non-hybrids with mileage readouts only include the average mileage display, not the real-time display. (Update: Howard notes in the comments a few makes of non-hybrid cars that do have both the average and real-time displays. I would be very interested in an examination of driver behavior -- and possible changes in behavior -- for those cars.)

    This is useful information, to be sure; it's good to know what kind of mileage a vehicle gets in real-world use. But as a means of MTIV, it's not terribly helpful, because it breaks the connection between the action and the result. After the first few dozen miles of a given tank of gas, the average mileage readout changes very slowly, and only with sustained greater-than-average or less-than-average mileage driving. Small variations get lost in the noise. This means that minor changes in driving behavior can't be mapped to minor changes in miles-per-gallon. Without that connection between "I did this" and "I got that," drivers can't as easily learn to drive in a more efficient way. The driver needs to be able to compare behaviors and results to learn what works best. Both forms of display are necessary. The average mileage is the context for the momentary changes, and it's the comparison between the two that provides meaning.
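
    (To put a toy number on the "lost in the noise" problem: the sketch below -- with made-up mileage figures, not data from any actual vehicle -- shows how a modest, sustained improvement that's obvious on a real-time readout barely registers on a tank-average display once a few hundred miles are already banked.)

```python
# Made-up mileage figures for illustration only.
miles_before, mpg_before = 300, 30.0   # miles driven on this tank before the change
miles_after,  mpg_after  = 100, 32.0   # miles driven after improving driving habits

gallons_used = miles_before / mpg_before + miles_after / mpg_after
tank_average = (miles_before + miles_after) / gallons_used

print(f"real-time readout: jumps from {mpg_before:.0f} to {mpg_after:.0f} mpg")
print(f"tank-average readout: creeps from {mpg_before:.0f} to {tank_average:.2f} mpg")
```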

    This dilemma isn't just an issue for cars.

    Late last month, the UK's environment secretary, David Miliband, proposed putting ecological impact labels on all food products sold in UK stores. These labels would focus on the amount of carbon emitted as the result of the production of the food item. In this, the UK government is playing catch-up with some of its businesses, as the grocery chain Tesco announced in late January that it would be adding carbon labels to the products it sold. And now the Carbon Trust, a UK non-profit that works with businesses to reduce their greenhouse impacts, has embarked on an effort to build a labeling standard for adoption across industries. (It should come as no surprise that I'm very much in favor of this sort of labeling!)

    So let's say this works out, and soon every bag of crisps you buy has a little label on it showing how many grams of carbon resulted from that bag's production. Now you can compare it to other snacks, and try to eat only the goodies with smaller numbers in the label. But while that level of comparison is helpful, it doesn't offer the larger context necessary to make the comparison meaningful. You still don't know whether both the (e.g.) 100g of carbon resulting from the production of a bag of crisps and the (e.g.) 50g of carbon resulting from the production of a bag of carrots are outrageously high, ridiculously low, or vanishingly irrelevant.

    In order for any carbon labeling endeavor to work -- in order for it to usefully make the invisible visible -- it needs to offer a way for people to understand the impact of their choices. This could be as simple as a "recommended daily allowance" of food-related carbon, a target amount that a good green consumer should try to treat as a ceiling. This daily allowance doesn't need to be a mandatory quota, just a point of comparison, making individual food choices more meaningful.
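
    (Here's a minimal sketch of that kind of comparison. The 100g and 50g figures echo the hypothetical labels above; the 3,000g daily allowance is an equally made-up number, standing in for whatever target a real scheme might set.)

```python
# Hypothetical numbers only; neither the labels nor the allowance reflect
# any real labeling scheme.
DAILY_FOOD_CARBON_ALLOWANCE_G = 3_000   # assumed "recommended daily allowance"

labeled_items = {
    "bag of crisps": 100,    # grams of CO2 from production (example label)
    "bag of carrots": 50,
}

for item, grams in labeled_items.items():
    share = grams / DAILY_FOOD_CARBON_ALLOWANCE_G
    print(f"{item}: {grams} g CO2 -- {share:.1%} of the daily allowance")
```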

    The food carbon labels without the recommended amounts is roughly like the real-time mileage readout on a hybrid: useful data about one's immediate actions, but without any way of measuring overall results. Similarly, the recommended allowances without abundant carbon labels is akin to the average mileage display: a way of seeing overall goals, without any way of directly connecting action and result. Both the individual data and the broader context are necessary.

    This is a pattern we're likely to see again and again as we move into the new world of carbon footprint awareness. We'll need to know the granular results of actions, in as immediate a form as possible, as well as our own broader, longer-term targets and averages. This is certainly not a surprising observation. We're still early enough in the carbon awareness era, however, that even the obvious steps are useful to note.

    March 1, 2007

    Obsolescent Heresies

    I like Stewart Brand, and he and I seem to get along pretty well. I first met him at GBN a decade ago, and I run into him fairly often at a variety of SF-area futures-oriented events.

    But I found myself grumpy and frustrated after reading "An Early Environmentalist, Embracing New ‘Heresies’" in Sunday's New York Times, a profile of Stewart and what he calls his "environmental heresies."

    Stewart Brand has become a heretic to environmentalism, a movement he helped found, but he doesn’t plan to be isolated for long. He expects that environmentalists will soon share his affection for nuclear power. They’ll lose their fear of population growth and start appreciating sprawling megacities. They’ll stop worrying about “frankenfoods” and embrace genetic engineering.

    Brand seems to retain an image of environmentalism that may have been appropriate in the 1970s, but has diminishing credibility today: the anti-technology, back-to-nature hippie. Today's environmental movement is urban, techie, and far less likely to refer to any assertion as "heresy" (although, in the case of the handful of people who still try to deny the existence of global warming, we're happy to use the term "stupidity"). Stewart Brand is nailing his environmental heresies on the door of a church that was long ago abandoned... or, at the very least, taken over by Unitarians.

    This isn't to say that the Bright Green types have fully embraced Stewart's views. There's little support for aggressive nuclear power production among the new environmentalists, and the various positions concerning biotech are complex, to say the least. There's little disagreement with his love of cities, but in this case, Brand is almost a latecomer. Ultimately, the positions that Stewart stakes out appear more to be arguments against his own past beliefs than against the claims of modern eco-advocates.

    David Roberts, over at Gristmill, dissects the nuclear argument exceedingly well, and rather than reiterate what he wrote, I'll just point you to it. The short version, in my phrasing: the Bright Green reluctance about nuclear power has far more to do with it being centralized infrastructure and dated technology than with any fear or loathing of atoms. The environmental situation in which we find ourselves demands a fast-learning, fast-iterating, distributed and collaborative technological capacity, not a system that bleeds out investment dollars and leaves us stuck with technologies already on the verge of obsolescence.

    If we're looking for resilience, flexibility and innovation, the nuclear industry is not a good place to start.

    With regards to biotechnology, resilience, flexibility and innovation are definitely possible, at least in the years to come. Brand argues that genetic engineering has the potential to be a major tool for dealing with global warming's effects, and he's not the only one making claims of the sort. There's no consensus Bright Green position on environmental biotech, but there are plenty of voices calling for the responsible use of biotech (and nanotech) as a way of combatting climate and ecosystem disruption; moreover, most people arguing for holding off on bioengineering do so out of concern that we still have more to learn before we can undertake such solutions responsibly, not out of a flat opposition to the technology.

    Stewart asks, "where are the green biotech hackers?" Rob Carlson -- one of the original open-source bio thinkers, now a leading expert in synthetic biology -- has an answer: they're here, but they're still working under the radar.

    We're coming, Stewart. It's just that we're still on the slow part of the curves. [...] At the moment, synthesis of a long gene takes about four weeks at a commercial DNA foundry, with a bacterial genome still requiring many months at best, though the longest reported contiguous synthesis job to date is still less than 50 kilobases. And at a buck a base, hacking any kind of interesting new circuit is still expensive. [...] So, Mr. Brand, it will be a few years before green hackers, at least those who aren't supported by Vinod Khosla or Kleiner Perkins, really start to have an impact.

    Green biotech hacking is still in the punch-card era, and as Stewart himself could tell you, computer hacker culture really didn't take off until you got past punch-cards into time-sharing, where the cost in time and money was low enough that mistakes were something to learn from, not dread.
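
    (Taking Carlson's figures at face value -- roughly a dollar per base, with month-scale turnaround for a single long gene -- the arithmetic below shows why this is still punch-card territory. The gene and genome lengths are rough, round numbers of my own choosing, used only for scale.)

```python
COST_PER_BASE_USD = 1.0   # "a buck a base," per Carlson's estimate
# (He also cites roughly four weeks of turnaround for one long gene.)

# Rough, round sizes chosen purely for scale:
projects = {
    "typical gene (~3 kb)": 3_000,
    "longest reported contiguous synthesis (<50 kb)": 50_000,
    "small bacterial genome (~1 Mb)": 1_000_000,
}

for label, bases in projects.items():
    print(f"{label}: ~${bases * COST_PER_BASE_USD:,.0f} at $1/base")
```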

    As for cities, I'm not sure I could find many modern enviros still clinging to the notion that, on the whole, rural life is intrinsically better than urban life. There are plenty of individual examples of terrific rural homes and awful urban homes, of course, but in the aggregate, there's no question that communities in dense, urban settings have a smaller footprint than communities of the same size in suburban and rural settings. And the notion that population size is still at the top of the environmental hit list is seriously out of date; all signs point to a global population peak of below 10 billion, and possibly no more than 8 billion -- of concern to the extent that more people means more consumption, but by no means a panic-inducing Malthusian threat.

    The conventional meaning of "heretic" is one who goes against dogma, and the positions that Stewart takes here just don't meet that requirement. There's no doubt that it would be possible to find self-described environmentalists who fit the stereotype that Stewart is responding to, but one of the hallmarks of the modern environmental movement -- and the reason why the "heresy" model is arguably obsolete -- is that, when it comes to solutions, nothing is a priori off the table. All solution options can be considered, but they must be able to stand up to competing ideas. Even if some of us believe that some of the solutions he advocates don't stand up to the competition, we aren't going to try to claim that Stewart Brand somehow isn't an environmentalist. As long as he recognizes that the Earth's geophysical systems are under extraordinary duress, and that business-as-usual is driving us headlong into disaster, he's one of us -- even if the ways we want to avoid that disaster vary.

    February 23, 2007

    The Resilient World

    Environmental architect William McDonough is said to have asked, "If a person described her relationship with her spouse as merely 'sustainable' wouldn’t you feel sorry for both of them?"

    The word "sustainability" has come to dominate environmental discourse, employed to mean a condition in which we take no more from our environment than the environment is able to restore. It's a reasonably goal, but a limited one. Sustainability is a static concept: it says nothing about change, or improvement. McDonough's point is that "sustainable" is hardly a condition worth celebrating; at best, it's the maintenance of the status quo.

    It seems to me that what we should be striving for is an environment -- and a civilization -- able to handle dynamic, unexpected changes without threatening to collapse. This is more than simply sustainable, it's regenerative and diverse, relying on both a capacity to absorb shocks and to co-evolve with them. In a word, it's resilient.

    If we're to survive the 21st century, we need to be striving for environmental and civilizational resiliency.

    In a "sustainable" environment, we live in constant fear of greed, accident or malice tipping the balance away from sustainability, returning us to the spiral of over-consumption and environmental depletion. Ironically, the goal of environmental sustainability is highly likely to put us on the path of ongoing environmental management. To an extent, this is already true -- ecologist Daniel Janzen argues that we're better off thinking of the environment as a garden to be tended than as wilds to be preserved -- but sustainability as a goal means constant vigilance. It's not simply that the environment can no longer be considered "wild;" in the sustainability paradigm, the environment can only be considered a subject. A sustainable world is one that manages to avoid imminent disaster, but remains perpetually on the precipice.

    The underlying problem with the concept of "sustainability" is that it's inherently static. It presumes that there's a special point at which we can maintain ourselves and maintain the world, and once we find the right combination of behavior and technology that allows us to reduce our environmental footprint to a "one planet" world, we should stay there. For some sustainability advocates, this can include limiting ourselves technologically, as suggested by the frequency with which such advocates dismiss "techno-fixes" as simply allowing us to continue to behave badly. More broadly, as a strategic goal, sustainability pushes us towards striving to achieve success within boundaries; the primary emphasis of the concept is on stability.

    "Resiliency," conversely, admits that change is inevitable and in many cases out of our hands, so the environment -- and our relationship with it -- needs to be able to withstand unexpected shocks. Greed, accident or malice may have harmful results, but (barring something likely to lead to a Class 2 or Class 3 Apocalypse), such results can be absorbed without threat to the overall health of the planet's ecosystem. If we talk about "environmental resiliency," then, we mean a goal of supporting the planet's ability to withstand and regenerate in the event of local or even widespread disruption.

    Like sustainability, resiliency is a strategic concept, intended to guide how choices are made. But resiliency doesn't presuppose limitations; rather, it encourages the diversification of capacities, in order to be responsive to uncertain future problems. We can think of this as "strategic flexibility" or "maintaining our options," but it comes down to avoiding being trapped on a losing path.

    When applied directly to environmental strategies, resiliency may appear similar to sustainability in superficial ways. Both sustainability and resilience would encourage aggressive moves to greater energy efficiency, for example. The similarity of tactics belies a divergence of intent, however; for sustainability the purpose is to reduce our impact to below a certain threshold, while for resilience, it's to increase the resources available to meet future problems. We see overlap like this because resiliency embraces the near-term goal of sustainability, inasmuch as resiliency recognizes that the depletion of planetary resources and ecosystem diversity is a self-destructive process.

    For me, environmental resilience is a much more satisfying philosophy than environmental sustainability because of its emphasis on increasing our (our planet's) ability to withstand crises. Sustainability is a brittle state: unexpected changes (natural or otherwise) can easily cause its collapse. Resilience is all about being able to handle the unexpected. It does not ignore the need to be "sustainable" in the most general sense, but does not see that as a goal or end-point in and of itself. Sustainability is about survival. The goal of resilience is to thrive.

    February 8, 2007

    Good Ancestors... But Who Are Our Descendants?

    The "Good Ancestor Principle" is based on a challenge posed by Jonas Salk:

    ...the most important question we must ask ourselves is, “Are we being good ancestors?” Given the rapidly changing discoveries and conditions of the times, this opens up a crucial conversation – just what will it take for our descendants to look back at our decisions today and judge us good ancestors?

    The two-day Good Ancestor Principle workshop focused primarily upon teasing out just what it would mean to be a good ancestor, and a bit upon exploring various ways of making sure the Earth inherited by our descendants is better than the Earth we inherited. But a surprisingly large part of the conversation covered a question that is at once unexpected and entirely relevant: just who will our descendants be?

    The baseline assumption, not unreasonably, was that our descendants will be people like us, individuals living deep within the "human condition" of pain, love, family, death, and so forth; as a result, the "better ancestors" question inevitably focuses upon the external world of politics, warfare, the global environment, poverty, and so forth (essentially, the WorldChanging arena). Some participants suggested a more radical vision, of populations with genetic enhancements including extreme longevity. Sadly, this part of the conversation never managed to get much past the tired "how will the Enhanced abuse the Normals" tropes, so we never really got to the "...and how can we be good ancestors to them?" question, other than to point out that we ourselves may be filling in the role of "descendants" if we end up living for centuries.

    Instead, we ran right past the "human++" scenario and into the Singularity -- and with Vernor Vinge in attendance, this is hardly surprising. (Not that Vinge is dead-certain that the Singularity is on its way; when he speaks next week at the Long Now seminar in San Francisco, he'll be covering what change looks like in a world where a Singularity doesn't happen.) This group of philosophers and writers really takes the Singularity concept seriously, and not for Kurzweilian "let's all get uploaded into Heaven 2.0" reasons. Their recurring question had a strong evolutionary theme: what niche is left for humans if machines become ascendant?

    The conversation about the Singularity touched on more than science fiction stories, because of the attendance of Ben Goertzel, a cognitive science/computer science specialist who runs a company called "Novamente" -- a company with the express goal of creating the first Artificial General Intelligence (AGI). He has a working theory of how to do it, some early prototypes (that for now exist solely in virtual environments), and a small number of employees in the US and Brazil. He says that with the right funding, his team would be able to produce a working AGI system within ten years. With his current funding, it might take a bit longer.

    According to Goertzel, the Singularity would happen fairly shortly after his AGI wakes up.

    It was a surreal moment for me. I've been writing about the Singularity and related issues for years, and have spoken to a number of people who were working on related technologies or were major enthusiasts of the concept (the self-described "Singularitarians"). This was the first time I sat down with someone who was both. Goertzel is confident of his vision, and quite clear on the potential outcomes, many of which would be unpleasant for humankind. When I spoke to my wife mid-way through the first day, I semi-jokingly told her that I'd just met the man who was going to destroy the world.

    Ben doesn't actually want that to happen, as far as I can tell, and he has made a point, from the very beginning of his work, of considering the problem of giving super-intelligent machines a sense of ethics that would preclude them from making choices harmful to humankind.

    In 2002, he wrote:

    ...I would like an AGI to consider human beings as having a great deal of value. I would prefer, for instance, if the Earth did not become densely populated with AGI’s that feel about humans as most humans feel about cows and sheep – let alone as most humans feel about ants or bacteria, or instances of Microsoft Word. To see the potential problem here, consider the possibility of a future AGI whose intelligence is as much greater than ours, as ours is greater than that of a sheep or an ant or even a bacterium. Why should it value us particularly? Perhaps it can create creatures of our measly intelligence and complexity level without hardly any effort at all. In that case, can we really expect it to value us significantly? This is not an easy question.

    Beyond my attachment to my own species, there are many general values that I hold, that I would like future AGI’s to hold. For example, I would like future AGI’s to place a significant value on:

    1. Diversity
    2. Life: relatively unintelligent life like trees and protozoa and bunnies, as well as intelligent life like humans and dolphins and other AGI’s.
    3. The generation of new pattern (on “creation” and “creativity” broadly conceived)
    4. The preservation of existing structures and systems
    5. The happiness of other intelligent or living systems (“compassion”)
    6. The happiness and continued existence of humans

    (From his essay "Thoughts on AI Morality," in which he quotes both Ray Kurzweil and Jello Biafra.)

    The issue of how to give AGIs a sense of empathy towards humans consumed a major part of the Good Ancestor Principle workshop discussion. The participants recognized quickly that what this technology meant was the creation of a parallel line of descendants of humankind. In essence, the answer to the question of "how can we be better ancestors for our descendants" is answered in part by "making sure our other descendants are helpful, not harmful."

    Ultimately, the notion of being good ancestors by reducing the chances that our descendants will be harmed appeared in nearly every attempt to answer Jonas Salk's challenge. It's a point that's both obvious and subtle. Of course we want to reduce the chances that our descendants will be harmed; the real challenge is figuring out just what we are doing today that runs counter to that desire. We don't always recognize the longer-term harm emerging from a short-term benefit. This goes back to an argument I've made time and again: the real problems we're facing in the 21st century are the long, slow threats. We need our solutions to have a long term consciousness, too.

    That strikes me as an important value for any intelligent being to hold, organic or otherwise.

    January 23, 2007

    Eschatological Taxonomy -- Now Suitable for Framing

    [Image: Apocalypse Scale chart]

    (click for larger version, of course)

    December 31, 2006

    Must-Know Concepts for the 21st Century

    My colleague at IEET, George Dvorsky, posted a list of concepts about the future that he sees as vital for people who consider themselves intelligent to know and understand. His goal is admirable: too much of what passes for public discourse (in the United States, at least, but from what I can see, also in much of the rest of the West) is deeply focused on the past, and much too narrow. Moreover, it's not simply that we've become a culture of niche thinkers; it's that the niche thinkers who dominate public discourse have seemingly decided that their particular set of niches (largely issues of domestic politics and economics) are the only important ones.

    George's list is, by and large, a good one. I'd quibble about a couple of items he includes, but nothing strikes me as outrageously out-of-place. (I do wish he'd add links to the terms to help people who don't recognize various entries get up to speed, however.) He covers, for the most part, terms concerning advances in human engineering and in information and material technologies, with particular emphasis on various manifestations and implications of non-human intelligence(s).

    George asks for additions, so in that spirit, here's a list of 10 more terms and concepts intelligent participants in the 21st century should understand. Mine has links. :)

    I'm not entirely satisfied with this list; it remains a bit too tech-focused. Still, in combination with George's list, this looks like the beginnings of a good primer for dealing with the key issues of the new century.

    December 28, 2006

    How to Read an End-of-Year Forecast

    It seems to be common practice among bloggers, columnists and other species of pundit to offer in the closing days of December a few predictions about the year to come. These usually include some brief sentences about how well or how poorly the predictions from last year fared, and the best include a tongue-in-cheek undercurrent, a subtle implication that the author knows as well as the reader just how ridiculous this whole thing really is. Aside from the blatantly satirical offerings, however, most of these year-end predictions are meant to be taken seriously to at least some degree, and provide a tangible sense of where the author thinks the world may be heading in the months to come.

    As someone who thinks and writes about the months (and years) to come on a professional basis, I find these efforts a kick to read. I won't add my own, in part because it would be redundant (I write about the future all the time), and in part because the real fun comes from seeing people who don't spend a lot of time thinking about much beyond the next quarter, next project or next release pulling on their Futurist Pants™.

    I enjoy reading them in large part because they often fall into the same traps that can snare the pros, but do so in much more obvious ways. The real value of the myriad forecasts for 2007 emerges not from what they predict, but from how they predict it. These predictions are a terrific training field for critical analysis and skeptical reading of futurist pronouncements of all kinds.

    In that spirit, here are eight guidelines for how to read predictions (and scenarios, and forecasts):

    Cui Bono?

    • Are they just parroting recent headlines? Are the forecasts and predictions simply rehashes of news items from the last couple of months? These subjects are rarely as important in the medium or long term as they seem in the here and now, but are the current triggers for blog links and Slashdot debates.

    • Poked in the eye by the invisible hand? Would the predictor be likely to benefit professionally if the "hot trend for the new year" actually manages to take off? While this doesn't necessarily mean that they're pushing the idea deceptively, it does mean that they're less likely to be on the lookout for competing ideas and serious roadblocks.

    • Are they just reading their own marketing? Many of the end-of-year predictions come from advertising agencies, trade organizations, and other groups trying to get a bit of press. When the forecasts include buzzwords that don't buzz and "consumers" making radical changes to their behaviors because of some swoopy new gadget, chances are you're seeing an effort to predict the future by marketing it.

    Less Than Meets the Eye

    • Shock and Awe? At the other end of the prediction spectrum are those forecasts that are so disruptive and radical that they simply beg for argument. While they may have some tenuous technological or social justification, they're the kinds of assertions that often get added to lists to make them appear less conventional.

    • Why? Next-year forecasts that simply offer up bulleted lists of terse sentences (e.g., "• Foobar defeats Google.") may be amusing, but offer little insight. Predictions that don't include even a cursory effort to explain the reasoning or offer a justification all too often include forecast items that have few reasons or justifications to begin with.

    Positive Signs

    • Have you heard of this before? Somewhere between the items that everybody knows about already because they've been in the headlines, and the items that nobody knows about because they're internal marketing jargon, are those items that specialists are starting to pay attention to, but few others have picked up on yet. If you encounter a prediction that refers to something you haven't heard about, but you find hundreds of sites digging into its implications when you google it, there's a good chance that you've found a useful forecast.

    • Greater than the sum of its parts? Do the authors make connections between the predictions, or do they toss each out as unrelated phenomena? No technological or social development happens in isolation, and very often changes in one arena can profoundly alter the course of other trends and practices. Forecasts that show interconnections have a sense of a bigger picture.

    Lastly...

    • What did they miss? Have the "future" predictions already happened, but just haven't been widely noticed? Are there other known factors at work that would prevent or substantially alter the predictions? Does one prediction cancel out another, without explanation? Are there alternative outcomes that are just as likely, and equally if not more interesting? Do the predictions miss an obvious connection or combination that could end up being far more influential than any of its component changes?

    End-of-year forecasts make for a fun read, and are usually done in a spirit of play and camaraderie. Even the ones that are blatant marketing efforts can provide some surprises and (very occasionally) insights. This set of guidelines should not by any means be read as a condemnation of the practice. In fact, I'd like to see more people making lists of predictions and forecasts, as at the very least, it would provide more chances to practice skeptical futurism. Besides, with enough minds, all tomorrows are visible -- the more of us playing in this space, the better chance we have of spotting surprises before they happen.

    December 20, 2006

    End-User License Agreement, StuffStation Deluxe

    BY CLICKING "I AGREE" YOU ACCEPT THE PROVISIONS OF THIS LICENSE.

  • I will not use this product (STUFFSTATION DELUXE) to build, repair, or in any way constitute weapons of mass destruction;
  • I will not use this product (STUFFSTATION DELUXE) to produce tools or systems with the express purpose of undermining the duly-elected government;
  • I will not use this product (STUFFSTATION DELUXE) to produce self-replicating automata, including (but not limited to):
       - Gray Goo
       - Green Goo
       - Red Goo
       - Artificial Retroviruses
       - "Blood Music" Plagues
       - "Brain Goo" Neurotropic Substances
       - Spam

  • I will not use this product (STUFFSTATION DELUXE) to produce information processing devices that meet the conditions for self-awareness spelled out in the Phoenix Protocols of 2017 (UN DOC041202017.42);
  • I will not use this product (STUFFSTATION DELUXE) to produce derivative versions of the product (STUFFSTATION DELUXE), or devices that would allow for disassembly and reverse-engineering of said product;
  • I recognize that I provide an open-ended, uncompensated license to the manufacturer of this product (STUFFSTATION DELUXE) for any and all original designs used in this product, applicable throughout the known universe. (VOID in Nebraska, Saskatchewan, and Algeria.)
  • I will use this product (STUFFSTATION DELUXE) in accordance with all local, regional, national and transnational laws, regulations and treaties.

    COPYBOT-FABBERS, THE MANUFACTURER OF THIS PRODUCT (STUFFSTATION DELUXE), HEREBY DISCLAIMS ALL RESPONSIBILITY FOR ANY AND ALL NEGATIVE OUTCOMES FROM ANY USE OF THIS PRODUCT (STUFFSTATION DELUXE), INCLUDING (BUT NOT LIMITED TO) THE RESULTS OF INTENTIONAL MISUSE, ACCIDENTAL MISUSE, THIRD-PARTY MISUSE ("HACKING"), AND PROPER USE WITH UNFORESEEN CONSEQUENCES.

    December 18, 2006

    The One-Sentence Challenge

    Rebecca Blood listed me as one of the folks to take a shot at the One-Sentence Challenge, as offered by Paul Kedrosky:

    Physicist Richard Feynman once said that if all knowledge about physics was about to expire the one sentence he would tell the future is that "Everything is made of atoms". What one sentence would you tell the future about your own area, whether it's entrepreneurship, hedge funds, venture capital, or something else?

    Examples: An economist might say that "People respond to incentives". I had an engineering professor years ago who said all of that field could be reduced to "F=MA and you can't push on a rope".

    A couple of good ones come immediately to mind: the GBN motto, "the future is uncertain, and yet we must act;" Bruce Sterling's "the future is a process, not a destination;" Yogi Berra's "prediction is very hard, especially about the future." But this really should be one of my own. So here's my try:

    The future is built by the curious -- the people who take things apart and figure out how they work, figure out better ways of using a system, and explore how to make new things fit together in unexpected ways.

    How's that?

    Passing this along, I'd like to see this challenge answered by:

    Green LA Girl [Siel responds here];
    Mike Treder [Mike responds here];
    Bruce Sterling;
    Kim Allen [Kim responds here];
    Violet Blue;
    Eric Townsend [JET responds here];
    Stuart Candy.

    And, of course, anyone who wants to chime in here in the comments.

    (Thanks to everyone who has participated!)

    December 12, 2006

    Life and Love in the Uncanny Valley

    There's a story I've seen about a philosopher who bet an engineer that he could make a robot that the engineer couldn't destroy. What the philosopher produced was a tiny little thing, covered in fur, that would squeak when touched -- and when threatened, would roll onto its back and look at the attacker with its big, glistening eyes. When the engineer lifted his hammer to smash the robot, he found that he couldn't. He paid the wager.*

    Evolution has programmed us, for good reasons, to be responsive to "cute" creatures. Even the coldest heart melts at the sight of kittens playing or puppies sleeping, and while parents respond most quickly to their own children, we all have at least some positive response to the sight of a child. Given all of this, it wouldn't be surprising if our biological imperatives could be hijacked by things that are decidedly not puppies and babies -- but approximated their look and behavior. Like, for example, a robot.

    Sociologist Sherry Turkle has studied the effects of technology on society for years. Recently, she brought a collection of realistic robotic dolls called "My Real Baby" to nursing homes. Much to her surprise -- and dismay -- the seniors responded to these artificial dependents in ways that mirrored how they would interact with real living beings. They weren't fooled by the robots; they knew that these were devices. But the artificial beings' look and behavior elicited strong, generally positive, emotions for the elderly recipients. Turkle describes it thusly:

    In bringing My Real Babies into nursing homes, it was not unusual for seniors to use the doll to re-enact scenes from their children’s youth or important moments in their relationships with spouses. Indeed, seniors were more comfortable playing out family scenes with robotic dolls than with traditional ones. Seniors felt social “permission” to be with the robots, presented as a highly valued and “grownup” activity. Additionally, the robots provided the elders something to talk about, a seed for a sense of community.

    Turkle is bothered by the emotions these dolls -- and similar "therapeutic" robots, such as the Japanese Paro seal -- trigger in the adults interacting with them. She argues:

    Relationships with computational creatures may be deeply compelling, perhaps educational, but they do not put us in touch with the complexity, contradiction, and limitations of the human life cycle. They do not teach us what we need to know about empathy, ambivalence, and life lived in shades of gray.

    Turkle is particularly concerned with the issue of the "human life cycle." She worries about emotional bonds with beings that can't understand death, or themselves die. "What can something that does not have a life cycle know about your death, or about your pain?" she asks. She fears the disconnection with the reality of life when children and adults alike bond with machines that can't die. But this machine immortality may be a benefit, not a problem.

    Many, likely most, of the seniors who embraced the robotic children were seriously depressed. Aging is often painful, physically and emotionally, and life in a nursing home -- even a good one -- can seem like the demoralizing final stop on one's journey. Seniors aren't the only ones who are depressed, of course. According to a recent World Health Organization study published in the Public Library of Science ("Projections of Global Mortality and Burden of Disease from 2002 to 2030"), depressive disorders are currently the fourth most common "burden of disease" globally, ranking right behind HIV/AIDS; moreover, the research group projects that depressive disorders will become the second most common burden of disease by 2030, above even heart disease. Depression is debilitating, saps productivity and creativity, and is all too often fatal. Medical and social researchers are only now starting to see the immensity of the problem of depression.

    The ability of the therapeutic robots to reduce the effects of depression, therefore, should not be ignored. The seniors themselves describe how interacting with the robots makes them feel less depressed, either because they can talk about problems with a completely trustable partner, or because the seniors see the robots as depressed as well, and seek to comfort and care for them. Concerns about whether or not the robots are really feeling depressed, or recognize (let alone care about) the human's feelings, appear to be secondary or non-existent. Of far greater importance are the benefits for helping someone in the depths of depression to recover a sense of purpose and self.

    If you were to look for a My Real Baby doll today, you'd be hard-pressed to find one. They were a flop as commercial toys, with a common reaction (at least among adults) being that they were "creepy." That kind of response -- "it's creepy" -- is a sign that the doll has fallen into the "Uncanny Valley," the point along the realism curve where the object looks alive enough to trigger biologically-programmed responses, but not quite alive enough to pass for human -- and as a result, can be unsettling or even repulsive. First suggested by Japanese robotics researcher Masahiro Mori in 1970, the Uncanny Valley concept may help to explain why games, toys and animations with cartoony, exaggerated characters often are more successful than their "realistic" counterparts. Nobody would ever mistake a human character from World of Warcraft for a photograph, for example, but the human figures in EverQuest 2, conversely, look close enough to right to appear oddly wrong.
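
    (Mori's idea is usually drawn as a curve of emotional response against human-likeness, with a sharp dip just short of full realism. The function below is a toy illustration of that shape only -- not Mori's data or any measured response -- but it captures why "obviously cartoony" can score better than "almost right.")

```python
import math

def toy_affinity(realism):
    """Toy curve only: affinity rises with realism, dips sharply just short
    of full human-likeness, then recovers. An invented function that mimics
    the shape of Mori's argument, not his data.
    realism runs from 0.0 (abstract) to 1.0 (indistinguishable from human)."""
    rising = realism
    valley = 0.9 * math.exp(-((realism - 0.85) ** 2) / 0.005)
    return rising - valley

for r in [0.2, 0.5, 0.7, 0.85, 0.95, 1.0]:
    print(f"realism {r:.2f} -> affinity {toy_affinity(r):+.2f}")
```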

    As work on robotics and interactive systems progresses, we'll find ourselves facing Creatures from the Uncanny Valley increasingly often. It's a subjective response, and the empathetic/creepy threshold seems to vary considerably from person to person. It's notable, and clearly worth more study, that the nursing home residents who received the My Real Baby dolls didn't have as strong an "Uncanny Valley" response as the greater public seemed to have. Regardless, it's important to remember that the Uncanny Valley isn't a bottomless pit; eventually, as the realism is further improved, the sense of a robot being "wrong" fades, and what's left is a simulacrum that just seems like another person.

    The notion of human-looking robots made for love has a long history, but -- perhaps unsurprisingly -- by far the dominant emphasis has been on erotic love. And while it's true that many emerging technologies get their first serious use in the world of sexual entertainment, it's by no means clear that there's a real market for realistic interactive sex dolls. The social norms around sex, and the biological and social need for bonding beyond physical play, may well relegate realistic sex dolls to the tasks of therapy and of assistance for those who, for whatever reason, are unable to ever find a partner.

    But that doesn't mean we won't see love dolls. Instead of sex-bots driving the industry, emotional companions for the aged and depressed may end up being the leading edge of the field of personal robotics. These would not be care-givers in the robot nurse sense; instead, they'd serve as recipients of care provided by the human partner, as it is increasingly clear that the task of taking care of someone else can be a way out of the depths of depression. In this scenario, the robot's needs would be appropriate to the capabilities of the human, and the robot may in some cases serve as a health monitoring system, able to alert medical or emergency response personnel if needed. In an interesting counter-point to Turkle's fear of humans building bonds with objects that cannot understand pain and death, these robots may well develop abundant, detailed knowledge of their partner's health conditions.

    Turkle is also concerned about the robot's inability to get sick and die, as she believes that it teaches inappropriate lessons to the young and removes any acknowledgment of either the cycle of life or the meaning of loss and death. Regardless of one's views on whether death gives life meaning, it's clear that the sick, the dying, and the deeply depressed are already well-acquainted with loss. The knowledge that this being isn't going to disappear from their lives forever is for them a benefit, not a flaw.

    We're accustomed to thinking about computers and robots as forms of augmentation: technologies that allow us to do more than our un-augmented minds and bodies could otherwise accomplish. But in our wonder at enhanced mental feats and physical efforts, we may have missed out on another important form of augmentation these technologies might provide. Emotional support isn't as exciting or as awe-inspiring as the more commonplace tasks we assign to machines, but it's a role that could very well help people who are at the lowest point of their lives. Sherry Turkle is worried that emotional bonds with machines can diminish our sense of love and connection with other people; it may well be, however, that such bonds can help rebuild what has already been lost, making us more human, not less.

    -=-=-=-=-


    *(If anyone has the source of this story, I'd love a direct reference.)

    December 11, 2006

    Nano-Health, Nano-War

    Lots of nano-news over the past week or two -- and most of it good!

    Clean Bill of Health: One of the big questions about nanomaterials arising in recent months concerns the toxicity of nanoparticles, particularly carbon nanotubes. Since carbon nanotubes have applications ranging from solar power to artificial muscles (see below), their almost-magical potential would be blunted by confirmation of nasty effects on living tissues. Rice University is one of the leading institutions studying the biological effects of nanomaterials, so it was welcome news that a Rice University group (working with the University of Texas) has found through in-vivo tests that single-wall carbon nanotubes have no immediate harmful effects, and that they are flushed from the bloodstream within 24 hours -- long enough to be useful for medical procedures, but not long enough to trigger potential longer-term effects.

    Obviously these tests need to be replicated and built upon, but still -- good news!

    Muscles Made of Yarn: One potential application of carbon nanotubes in the body may be in artificial muscle fibers. University of Texas at Dallas researchers have come up with a way to use carbon nanotubes, wound together like yarn, as electro-chemical actuators acting essentially like muscles. According to Technology Review:

    By spinning carbon nanotubes into yarn a fraction of the width of a human hair, researchers have developed artificial muscles that exert 100 times the force, per area, of natural muscle. [...] The yarns are created by first growing densely packed nanotubes, each about 100 micrometers long. The carbon nanotubes are then gathered from a portion of this field and spun together into long, thin threads. The nanotube yarn can be just 2 percent of the width of a hair--not even visible--but upwards of a meter long.

    There's still much work to do to make nanotube yarn a full replacement for muscles, but their potential is clear. Among the many issues surrounding powered prosthetic limbs and walking robots is the insufficiency of current artificial muscle/muscle replication technologies. At present, mechanical muscles are far weaker than biological muscles, gram-for-gram. If this line of research is successful, the situation may end up reversed.
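
    (Quick arithmetic on the Technology Review numbers, assuming a human hair roughly 100 micrometers across -- my assumption; hair width varies quite a bit:)

```python
HAIR_WIDTH_UM = 100.0          # assumed width of a human hair, in micrometers
YARN_FRACTION_OF_HAIR = 0.02   # "just 2 percent of the width of a hair"
NANOTUBE_LENGTH_UM = 100.0     # each nanotube is about 100 micrometers long
YARN_LENGTH_M = 1.0            # "upwards of a meter long"

yarn_width_um = HAIR_WIDTH_UM * YARN_FRACTION_OF_HAIR
tubes_end_to_end = (YARN_LENGTH_M * 1_000_000) / NANOTUBE_LENGTH_UM

print(f"yarn width: ~{yarn_width_um:.0f} micrometers")
print(f"nanotubes needed end-to-end along one meter of yarn: ~{tubes_end_to_end:,.0f}")
```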

    Viva!: A biotech company with a comic-book name, StarPharma, has come up with a novel nano-material-based gel designed to block the activity of HIV and Herpes viruses. VivaGel™ is a "vaginal microbicide," made to be self-applied by women. It contains dendrimers -- synthetic polymer molecules shaped like the branches of a tree -- structured to stick to the linking surfaces on the virus in question, effectively making it impossible for the viruses to attach to the binding points on their cellular targets. The viruses can't harm the cells (or the host) because their molecular latches are clogged.

    This kind of physical attack on a pathogen is less apt to result in the kind of rapid evolutionary adaptation that is seen with traditional antibiotic and antiviral medicines. The virus has to be able to connect to the right spot on a cell to take it over, so there's a very limited assortment of molecular structures it can have on its binding sites -- evolving away from the dendrimer being able to clog the site means evolving away from the site being able to link to the target cell. Adaptation remains possible, of course, but much less likely.

    Dendrimers are interesting molecules. Because of their branching structure, it's actually possible to design dendrimers that can target different viruses simultaneously. In principle, VivaGel™ could be an all-purpose viral STD blocker. StarPharma (not a wholly-owned operation of LexCorp) has begun safety trials with UC San Francisco.

    Nano-War, Uh, What is it Nano-Good For? Moving away from nano-materials, fellow futurist Michael Anissimov spotted the publication of the academic work Military Nanotechnology, written by Dr. Jurgen Altmann. The book covers the application of nanomaterials as weapons, the use of nanoscale devices as sensors and the like, and the use of nanofabrication technologies to create novel systems. Altmann even looks at the policy implications of the use of human augmentation technologies for military purposes. The answers to how to respond to the development of these technologies won't come easily, but will be even harder to devise if we wait until the technologies are already available.

    Unfortunately, as Michael notes, the people who need to take these issues seriously are likely to dismiss this as way off in the future, if they even give it that much thought.

    Urgency Noted: That doesn't mean that nobody is paying attention. The National Materials Advisory Board has just released a congressionally-mandated review of US nanotechnology policy. Although it looks chiefly at policies around nano-materials and current research into nano-scale devices, it does take a few pages to consider some of the implications of nano-fabrication. My colleagues at the Center for Responsible Nanotechnology have studied the report in detail, and have offered their own take on its findings.

    The Center for Responsible Nanotechnology (CRN) expects that the NMAB report will accelerate research toward the development of molecularly-precise manufacturing. However, without adequate understanding and preparation, exponential atom-by-atom construction of advanced products could have catastrophic results. Conclusions published in this report should create a new level of urgency in preparing for molecular manufacturing.

    Most of the risks arising from all forms of nanotechnology are familiar, at least on their face. What nano-scale engineering, particularly molecular manufacturing, does is to make those risks happen much more swiftly, more cheaply, more easily, and in greater abundance. It's not that we don't know how to deal with toxic particles or readily-obtained weapons; it's that we've never lived in a world in which the particles could result from such a wide variety of common products, and the weapons could be so hard to detect and yet so powerful. Some of the risks associated with molecular technologies are novel, to be sure, but the core lesson we need to learn has less to do with how to respond to individual threats than with how to grapple with an environment in which the threats arise orders of magnitude more quickly than ever before.

    October 24, 2006

    CMOs

    An offhand comment at the Institute for the Future workshop yesterday sent me spiraling off in a new direction. Tom Arnold, Chief Environmental Officer of Terrapass, made reference to "CMOs," and I didn't catch the particular context of that abbreviation (he meant "Chief Marketing Officers," as it turned out). But divorced from its intended meaning, the term "CMO" took on a new definition:

    Cognitively Modified Organism

    Much to my surprise, nobody has used that term before (at least nobody that Google knows about, and that's all that counts these days). But it's a term with a clear application, most probably used to refer to living beings with intentionally-altered mental (and emotional) characteristics. In this usage, a cognitively modified organism, or CMO, has had its brain wiring altered in an essentially permanent way to induce a particular behavior or mental state -- a hardwired version of Pavlov's Dogs. It could also refer to organisms modified in a way to induce mental/emotional changes when consumed, such as with the fruiticeutical as imagined by IFTF's Jason Tester.

    We already live in a world in which we know enough about brain chemistry and behavior to be able to make fairly replicable modifications via drugs; as we learn more about the genetics underlying brain chemistry, we'll be able to experiment with the concept of making more-or-less permanent modifications to behavior in these ways. It won't happen to human beings right off the bat, of course -- we'll be monkeying around with the brains of non-human animals first. We'll probably even find useful results from the ongoing manipulation of non-human animal behavior through the modification of cognitive structures and chemistry.

    If we're lucky, it will only go as far as needed to perform useful neurotherapies. If we're less lucky, we'll find these technologies as the near future equivalent of steroids, superficially therapeutic systems used for clumsy augmentation. If we're entirely unlucky, this will be a dangerous new tool for advertising and marketing -- memetics with teeth, as it were.

    Oops, there was the bell! Time for dinner.

    August 29, 2006

    Hawaii 2050

    It's the classic dilemma of both foresight and environmental consulting: how do you get the people with the power to act to pay attention? Political leaders rarely pay sufficient attention to issues of systemic sustainability and planning for long-term processes, at least before events reach a crisis. There are numerous reasons why this might be, ranging from election cycles to crisis "triage" to politicians not wanting to institute programs for which they won't be around to take credit. It's nearly as difficult to get leaders to pay attention to complex systems, with superficially different but deeply-connected issue areas. If you were to try to bring together political, business and community leaders for a day-long discussion of, say, what life might be like at the midpoint of this century, with a focus on environmental sustainability coupled with economic, cultural and demographic demands, how much support do you think you'd get?

    In Hawaii, over 500 leaders showed up on Saturday the 26th for just such an event, including numerous state legislators and former Hawaii governor George Ariyoshi. Legislative support for the Hawaii 2050 Sustainability project was so great, in fact, that funding for the project received a near-unanimous override of the current governor's veto. The meeting hall was filled to capacity, and the buzz of excitement from the participants grew throughout the day. They could tell: this was the start of something transformative.

    The Hawaii 2050 Sustainability project is remarkably ambitious, seeking to create, over the course of the next 18 months, an entirely new planning strategy for the state's next half-century. This strategy will shape how the state handles a tourist economy, a swelling population, friction between cultures and, most importantly, an increasingly dangerous climate and environment.

    Saturday's event kicked off the process, mixing a variety of traditional presentations on Hawaii's major dilemmas with four immersive scenarios created by Dr. Jim Dator, Jake Dunagan and Stuart Candy at the University of Hawaii's Graduate Research Center for Future Studies. (Jake and Stuart, of course, invited me to Hawaii this last week to talk to some of the grad students and to attend the Hawaii 2050 event; I got a chance to meet and converse with Dr. Dator, as well.) The four scenarios represented a diverse array of possible futures for the state, and included a high-growth world, a limited-growth outcome, a collapse scenario, and a near-Singularity possibility. Participants each stepped into two of the four, and had an opportunity to discuss and evaluate one of the two they saw.

    Details of the four scenarios, including links to relevant resources, can be found in this PDF.

    The goal of the scenario presentations was to illustrate different possible outcomes, giving the participants a context in which to think about their present-day issues around sustainability. This can be a powerful technique, as it reminds us that choices have consequences, but that sometimes events outside of our control can shape how our choices play out. Scenarios remind us of the complexity of history, by showing how that complexity can evolve in the days and years to come.

    The two scenarios I encountered were the near-Singularity world and the collapse world. In the first, nanotechnology, biotechnology and a broad enthusiasm for human and social enhancement technologies allowed widespread radical longevity, thriving colonies on the Moon and Mars, and near-complete management of geophysical processes on Earth. With one minor exception (the existence of point-to-point teleportation), this was, if anything, a fairly conservative take on the Singularity scenario, but the near-universal reaction I witnessed from participants was fear and displeasure. Few of the participants wanted the kinds of enhancement technologies offered in the scenario dramatization, and all lamented the decline of the "natural" world and local culture. I noted at the time that I was the youngest person in my sub-group(!), and easily in the youngest 10% of the conference as a whole; I do wonder what the reaction to this scenario would have been from a larger younger-person contingent.

    The near-Singularity scenario was presented in a fairly tongue-in-cheek fashion, and even those who found the world unsettling left the room in relatively good humor. This carried over to the second world my group saw, the collapse scenario, positing an independent, militarized, and resurgent royalist Hawaii struggling to deal with a peak-oil energy collapse, climate disaster, and global economic meltdown. One person stated quite vocally that he found the conceit offensive, but most participants accepted the scenario's elements -- it may have been a dangerous, depressing world, but it was more familiar than one with rejuvenation biotechnology, nanofabbers and Mars colonies!

    I'm told, however, that those who entered the collapse scenario first were fairly traumatized by the presentation (attendees were treated as newly-arrived refugees), and this shock carried through when they swapped over to the near-Singularity world.

    The main caution I have about the set of scenarios is the translation from "this is a world of tomorrow" to "these are choices you'll have to make about tomorrow." The collapse world had a clearer pathway from the present than did the near-Singularity world -- and in some ways, that makes sense -- but all would have been better-served with a minimal set of bullet-point-style summaries outlining which choices and dilemmas today lead to or militate against the various scenarios. It's too easy for participants, when confronted by future stories that are too disturbing, to wave them off as impossible or "silly" if they don't have explicit links to the present.

    But even without the easy-mode handouts, this was a remarkable event. Think about it: community, political and economic leaders of an American state spending a day living in different futures, all with the goal of figuring out sustainable pathways. Imagine doing the same thing for California or New York, or even a national government. What would it take for leaders outside of Hawaii to start thinking about the future in terms of systems and sustainability?

    Hawaii had a secret advantage. 36 years ago, the state convened the Hawaii 2000 project (PDF), helping the decision-makers of 1970 to think about their choices and planning strategies. Futurists from Alvin Toffler to Arthur C. Clarke attended, as well as some of the people -- such as Jim Dator -- still working on Hawaiian futures. The set of scenarios about the state's condition in the distant future of 2000 ranged from paradise to commercial near-disaster. Dator tells me that the general consensus, unfortunately, is that the subsequent legislatures ignored the project's recommendations, and that the real world Hawaii of today best matches the near-disaster world feared in 1970.

    Such a combination of accurate projection and dismally wrong choices arguably made the Hawaii 2050 project possible, as the earlier project demonstrated both how relevant foresight workshops can be and what happens when their results are discarded. Hawaii 2050 is the state's chance to make up for what happened to Hawaii 2000.

    I'm cautiously optimistic about this process. The argument that Hawaii ignored the last scenario project to its own detriment dovetails nicely with the growing prominence of the "Inconvenient Truth" memeplex. More and more people in positions of civic responsibility are realizing the existential risks associated with climate collapse, but in Hawaii, they've had the tools for figuring out strategies for success in their kit for over three decades. I have no doubt that more than one attendee at Saturday's conference realized that, if Hawaii becomes a leader in the field of local and regional environmental response, it has the potential to be an economic dynamo in the years to come.

    I hope that Hawaii's project becomes more visible. If Hawaii hadn't experimented with a futurist project 36 years ago, it's unlikely that the state would have even considered such an oddity today. If Hawaii is successful with the 2050 Sustainability endeavor, however, it could in turn serve as a role model for other political entities looking for a proven set of techniques for grappling with uncertainty.

    A great deal is riding on the shoulders of this project, even more than its supporters might suspect.

    August 22, 2006

    Future of the Future

    The next five days will see a potentially interesting -- at least to me -- intersection of a variety of important dynamics I've been following closely.

    Global guerillas, or the reaction to them. What should be an hour wait for the flight will be several hours as Janice & I wrangle with security. This habit we in the West seem to have of responding to the most recent security brouhaha, no matter whether the threat was actually new or persistent, is just one of the ways the bad guys win. Frankly, I suspect that "foiled" plans are more disruptive than "successful" attacks. If a plane blows up, we all freak out, but eventually get back to normal. If a terror cell is arrested preparing for an underwear bomb, suddenly we'll all be subject to even more intrusive inspections for years to come.

    The stickiness of virtual communities. This trip will be the longest I've gone in quite some time without at least poking my head into my current preferred metaverse, World of Warcraft. It's not that I'll miss the raids and battlegrounds and whatnot all that much, but I'll really miss the camaraderie of my friends and colleagues.

    Climate awareness. Weather in Hawaii is close to perfect -- a balmy mid-80s, with occasional passing rain showers. But lurking over the horizon is what could be the strongest Pacific storm season in quite a while. No tropical storms are predicted for this stay, but it's inevitable that Hawaii will get hit in the near future. What happens to a city under weather siege when there's no place to run? The Hawaii 2050 Sustainability project will have to confront the question of what conditions like that would do to the state.

    Immersive futurism. My talk on Thursday night will address the changing face of futurism, with the emergence of "experiential futurism," whether using role-playing, immersive environments or artifacts. I see this as part of a larger trend towards the democratization of futurism: no longer will we be content with experts telling us what the future will hold, now we want to be able to experience it -- and to change it, through our own choices.

    See you on the beach.

    July 5, 2006

    Shorter Version of Below

    The future is an ongoing conversation.

    Our futures are words yet unsaid.

    The Unspoken Word

    "But that the dread of something after death,
    The undiscovered country, from whose bourn
    No traveller returns, puzzles the will,
    And makes us rather bear those ills we have
    Than fly to others that we know not of?"

         —Wm. Shakespeare, Hamlet

    The idea that tomorrow is a destination, an "undiscovered country," is the lifeblood of classic futurism. We wish to see where we are headed; we want to know what hidden shoals to avoid, and which strong currents to follow. It's this idea of the future as a place just over the horizon that allows us to imagine the "end of history," to fear getting to the future as a race to be lost, to see tomorrow as a land we have yet to conquer.

    But what if we instead imagine tomorrow in wholly different terms? What if tomorrow is a word we have yet to speak? The future can be an ongoing conversation, filled with phrases and pauses, debates and soliloquies, a conversation in which all of our voices can be heard. A conversation is larger than any single sentence, although each word is important. It has a narrative and flow, but can head off in surprising directions (although often quite predictable, in retrospect) as new ideas occur to us and new participants enter the scene. A conversation may have had a beginning, but it need not have an ending, as long as we have something to say.

    If the future is an undiscovered country, it belongs to none of us (except, perhaps, those who we might displace when we take possession); if the future is an unfinished conversation, it belongs to all of us, as it only matters as long as there are voices to be heard.

    The notion of tomorrow as a land just out of reach is an artifact of an age long past, when those who sought to change the world did so by seeking out its most distant edges, whether for trade, treasure or empire. The concept of the future as conversation, however, resonates with today's world, where changes come through mutual creation, collaborative innovation, and the growth of our networks. Inspiration is far more meaningful than exploration in today's world; anticipation -- of the next word, of the next moment -- far more powerful than expectation of what's over the horizon.

    An undiscovered country could be found and given name by a lone explorer; conversations, by definition, require more than a single voice. Some speakers will stand out, to be sure, and individual voices may guide the course of the discussion, for a time. But a conversation is not owned by any single person, no matter how vocal; the words move on, the subjects shift, and in due course the conversation bears little resemblance to past debates.

    This isn't simply philosophical mumbling. How we speak shapes how we think. As long as we speak of the future in geographic language, we'll continue to look at our choices for tomorrow in terms of ownership, demarcation and, ultimately, limits. Where is the future when there are no more lands left to discover?

    A conversational metaphor for tomorrow has neither the history nor the breadth of the geographical metaphor, and we will likely speak of horizon-scanning and frontiers and such for some time to come. But it is to our benefit to pay attention to the words we use, and what they truly mean, rather than allow the language of exploration and conquest to remain as unexamined jargon, words that unknowingly shape our vision. It's more important now than ever before that we as a civilization learn how to build an understanding of how the future is shaped into our present-day decisions. We shouldn't let that understanding be created through language with diminishing relevance to our lives, our ideas and our tomorrows.

    June 22, 2006

    Stephen Hawking, Global Warming, and Moving Out

    Last week, at a talk in Hong Kong, Stephen Hawking made what struck me at the time as being such a reasonable and obvious observation that I didn't think it needed commentary:

    "It is important for the human race to spread out into space for the survival of the species," Hawking said. "Life on Earth is at the ever-increasing risk of being wiped out by a disaster, such as sudden global warming, nuclear war, a genetically engineered virus or other dangers we have not yet thought of."

    To my surprise, Hawking's comments have been taken by otherwise intelligent people to mean that Hawking believes that the Earth is, or should be, "disposable," and that moving into space would be a way to escape global warming rather than mitigate or reverse it.

    I'm 99% certain that this is not what Hawking meant (I can't find a transcript of the speech, so I'll leave that remaining fraction of a possibility open for now). It's pretty clear to me that what Hawking is talking about instead is the fragility of the planet, and the recognition that, for human civilization to survive over the long run, we can't keep ourselves limited to a single home. As Hawking suggests, we face a multitude of existential risks, and the best efforts to eliminate one won't come close to eliminating them all. Even if we manage to avoid a "tipping point" threshold for global warming, for example, we would still face threats from pandemic disease, nuclear war, or the classic asteroid impact.

    In the face of such risks, the wise approach is to do what we can to prevent the problems from arising, as well as to do what we can to make certain we can recover if the problem happens too swiftly, too aggressively, or too unexpectedly to be countered. In short, to borrow from the familiar realm of computer tech support, we need to perform both planetary maintenance and civilization backups. Programs and projects to head off global warming, to shift incoming asteroids so that they miss Earth, to improve global health and development, and so forth -- the kinds of good, incredibly important efforts described every day at places like Gristmill, Treehugger, and WorldChanging -- exemplify what I mean by planetary maintenance; looking to a future where humans live on more than one world, what Hawking is talking about, exemplifies what I mean by civilization backups.

    I've talked about other kinds of civilization backups before, starting with Norwegian seed archive vaults and musing about information access in a post-disaster world. These are massive projects, and could take decades to complete, but they would let us rebuild after planetary-scale disasters. Off-Earth colonies are just another variation -- not because they'd let us leave Earth behind, but because they'd help Earth recover.

    But backups are not substitutes for maintenance. Dealing with disasters after the fact is always far more costly, time-consuming and frustrating -- and, on the scale we're talking about, life-threatening -- than performing regular maintenance. Maintenance projects (fighting global warming, eliminating global poverty, eradication of pandemic diseases) reduce our need to use backups; backup projects are our last hope when maintenance fails.

    Hawking's comments weren't calling on us to abandon efforts to keep the Earth healthy, or to plan to abandon the Earth, period. They were a reminder that sometimes maintenance fails, and that if human civilization is worth keeping around, we need to think big.

    June 16, 2006

    Responding to Bruce

    Bruce Sterling did me the honor of devoting an entire Beyond the Beyond blog post to my Twelve Things... item from a couple of days ago. He provided an additional service by disagreeing with part of my post, and explaining precisely why. I figured I should pay close attention.

    Bruce, while stating that the "draft of a list of twelve principles here is pretty good," grabs onto the apparent contradiction between my point #1 ("Nobody can predict the future") and my point #2 ("Not everyone is surprised by surprises"). If someone has successfully identified an upcoming change before it happens, haven't they predicted the future? He writes:

    ((((If I frame an obvious truism as a "prediction" and you feel any genuine surprise, then prediction, as a social act, has taken place. I'm like an Egyptian priest with some elementary understanding of astronomy, who can and will win awestruck admiration when he foretells an eclipse. If somebody foretells that the sun will go dark and nobody else expects the sun goes dark, that is a major revelation. That's not a measure of the absolute unlikelihood of the predicted event. It's a measure of the social distance between specialized insight and general incredulity.)))
    (((It makes no pragmatic difference how the predictor found these astounding things out. Frankly, nobody much wants to know that. Generally a futurist spots future trends by spending a lot of time closeted with obscure geeks. He does some groundwork and he scrapes up some poorly distributed future. That's not second-sight. It's kind of a lot of work, and for most people it's rather boring. The whole point of hanging out with futurists is that they will do that kind of thing for you. They can also generally talk about it in some persuasive, jazzy way that eases your native incredulity.)))

    Bruce is largely correct, of course, and his point here about "the social distance between specialized insight and general incredulity" is worth emphasizing. Futurists are, in some ways, a different species; for better and for worse, most people don't think in the same ways or about the same things that futurists do. But remember that what I wrote wasn't a Field Guide to Futurism (although, now that I mention it...); it was a set of reminders for journalists approaching futurists for the purposes of reportage. The purpose of point #1 derives from the very same social distance between specialized insight and general incredulity that Bruce describes.

    When journalists report on people who describe themselves as futurists, they may not understand why a futurist would make a given observation; what we often get as a result are assertions of certainty. I doubt there are many professional foresight workers out there claiming perfect predictive knowledge, so I have to assume that this comes from how some journalists believe futurists operate. Point #1 was meant to inoculate reporters against such beliefs.

    The kind of reportage prompting point #1 is most visible in the generally superficial articles about emerging trends and upcoming technologies. But as I say later on, Gadgets are not Futurism. Bruce reminds me that the more important kinds of foresight work are heavily science-based, and can make accurate predictions of future events based on existing research. We shouldn't treat a climate scientist (as a pointed example) with the kind of jaded skepticism that we might have for a pop culture trend guru.

    So here's how a reworked point #1 should look, taking into account this diversity:

    1. "Prediction is very hard, especially when it's about the future." -- Yogi Berra Completely accurate foresight is a rare thing; most of the time, good futurism means getting key elements right, even if the superficial details are wrong. Predictions based on physical principles and scientific knowledge tend to do better than those based on "trendspotting" and "cool hunting," and are more likely to be corroborated by other specialists. In every case, however, the most important question to ask is "why?" Why would the suggested change happen? Why would people make the predicted choice? Why would we see this particular outcome?

    What do you think?

    (BTW, the picture of Saturn and Enceladus at the top of the post is a callback to Bruce's own Saturn/Enceladus post earlier today.)

    June 14, 2006

    Twelve Things Journalists Need To Know to be Good Futurist/Foresight Reporters

    J. Bradford DeLong is a professor of economics at UC Berkeley, and was an economic advisor to President Clinton; Susan Rasky is a senior lecturer in journalism at UC Berkeley, and was an award-winning reporter for the New York Times. Together, they have compiled for the Nieman Foundation for Journalism at Harvard lists of what economists need to know about journalists, and what journalists need to know about economists, in order to produce useful and accurate economic reporting. The lists are straightforward, and if followed would make a world of difference.

    This is a remarkably good idea, one with direct application in a number of disciplines that are important for society but prone to obfuscation and confusion in the press: environmental science; bioscience; computer science (pretty much all sciences, in fact); developments on the Internet; and, of particular focus here, futurism and foresight. It's too easy for poorly-informed journalists to skim off unrepresentative (but sound-bite-friendly) examples and concepts, furthering public confusion instead of clearing it up.

    This isn't because journalists are corrupt or stupid or anything like that: by and large, they're generalists talking about fields that they probably didn't study, under time and financial pressure from editors and publishers who almost certainly know even less. It's a wonder that reportage about science, technology and the future isn't worse than it already is.

    Although I think the "12 Things Journalists Need To Know" model has broad application, I'm only going to look at the futurist/foresight area here, and am only going to compile the list for journalists writing about futurists. Fortunately, the instructions for economists about journalists are quite applicable to academics and specialists across disciplines.

    Here's my initial draft of 12 -- what would you change?

    1. Nobody can predict the future. This should go without saying, but too often, reports about trends or emerging science and technology tell us what will happen instead of what could happen. In fact, most futurists and foresight consultants will avoid making any predictive claims, and you should take them at their word; any futurist who tells you that something is inevitable probably has something to sell.

    2. Not everyone is surprised by surprises. The corollary to #1: be on the lookout for people who saw early indicators of surprises before they happened. Just like an "overnight success" worked for years to get there, the vast majority of wildcards and "bolt from the blue" changes have been on someone's foresight radar for quite a while. When something happens that "nobody expected," look for the people who actually did expect it -- chances are, they'll be able to tell you quite a bit about why and how it took place.

    3. Even when it's fast, change feels slow. It's tempting to assume that, because a possible change would make the world a decade from now very different from the world today, that the people ten years hence will feel "shocked" or "overwhelmed." In reality, the people living in our future are living in their own present. That is, they weren't thrust from today to the future in one leap, they lived through the increments and dead-ends and passing surprises. Their present will feel normal to them, just as our present feels normal to us. Be skeptical of claims of imminent future shock.

    4. Most trends die out. Just because something is popular or ubiquitous today doesn't mean it will be so in a few years. Be cautious about pronouncements that a given fashion or gadget is here to stay. There's every chance that it will be overtaken by something new all too soon -- and this includes trends and technologies that have had some staying power.

    5. The future is usually the present, only moreso. Conversely, don't expect changes to happen quickly and universally. The details will vary, but most of the time, the underlying behaviors and practices will remain consistent. Most people (in the US, at least) watch TV, drive a car, and go to work -- even if the TV is high definition satellite, the car is a hybrid, and work is web programming.

    6. There are always options. We may not like the choices we have, but the future is not written in stone. Don't let a futurist get away with solemn pronouncements of doom without pressing for ways to avoid disaster, or get away with enthusiastic claims of nirvana without asking about what might prevent it from happening.

    7. Dinosaurs lived for over 200 million years. A favorite pundit cliche is the "dinosaurs vs. mammals" comparison, where dinosaurs are big, lumbering and doomed, while mammals are small, clever and poised for success. In reality, dinosaurs ruled the world for much, much longer than have mammals, and even managed to survive a planetary disaster by evolving into birds. When a futurist uses the dinosaurs/mammals cliche, that's your sign to investigate why the "dinosaur" company/ organization/ institution may have far greater resources and flexibility than you're being led to believe.

    8. Gadgets are not futurism. Don't get too enamored of "technology" as the sole driver of change. What's important is how we use technology to engage in other (social, political, cultural, economic) activities. Don't be hypnotized by blinking lights and shiny displays -- ask why people would want it and what they'd do with it.

    9. "Sports scores and stock quotes" was 1990s futurist-ese for "I have no idea;" "social networking and tagging" looks to be the 2000s version. Technology developers, industry analysts and foresight consultants rarely want to tell you that they don't know how or why a new invention will be used. As a result, they'll often fall back on claims about utility that are easily understood, familiar to the journalist, and almost certainly wrong.

    10. "Technology" is anything invented since you turned 13. What seems weird and confusing will become familiar and obvious, especially to people who grow up with it. This means that, very often, the real utility of a new technology won't emerge for a few years after it's introduced, once people get used to its existence, and it stops being thought of as a "new technology." Those real uses will often surprise -- and sometimes upset -- the creators of the technology.

    11. The future belongs to the curious. If you want to find out why a new development is important, don't just ask the people who brought it about; their agenda is to emphasize the benefits and ignore the drawbacks. Don't just ask their competitors; their agenda is the opposite. Always ask the hackers, the people who love to take things apart and figure out how they work, love to figure out better ways of using a system, love to look for how to make new things fit together in unexpected ways.

    12. "The future is process, not a destination." -- Bruce Sterling The future is not the end of the story -- people won't reach the "future" and declare victory. Ten years from now has its own ten years out, and so on; people of tomorrow will be looking at their own tomorrows. The picture of the future offered by foresight consultants, scenario planners, and futurists of all stripes should never be a snapshot, but a frame from a movie, with connections to the present and pathways to the days and years to come.

    When talking with a futurist, then, don't just ask what could happen. The right question is always "...and what happens then?"

    May 31, 2006

    Futurist Matrix Revisited (Again)

    David Brin wrote a provocative and thoughtful response to my futurist matrix idea, and posted it over at his blog. Unfortunately, the system he uses -- Blogger -- has once again broken its comment system. Rather than wait to reply, I've decided to post my response to his response here. (David -- this is an updated version of the email I sent.)

    The futurist matrix is clearly a work in progress, and the changes have been slow and evolutionary. The main difference between the first and second versions of the matrix is in the terminology, not the concept -- I dropped the word "realist," and replaced it with "pragmatist." More importantly, I tried to make the sub-headings less normative, less apt to appear biased towards one particular option along an axis.

    I suspect I'll need to do something similar with "optimist" and "pessimist." The danger of using commonplace terms in a setup like this is that readers' interpretations of the words may not match my use. The present sub-headings of "inclusive success" and "exclusive success or failure" are more accurate than optimist/pessimist, and I'll likely make them the axis labels.

    These more expressive terms help to illustrate a seemingly-illogical aspect of the matrix: the combination of ideologically opposed groups in the same philosophical box, such as Marxists and Dispensationalists in the lower-right quadrant. But the matrix is less concerned with a group's ideology than with its eschatology: how do the philosophies see the future unfolding? As Brin points out, neither Marxists nor Dispensationalists would see themselves as particularly pessimistic. But while they may see a happy future world, it's a world limited to the true believers. They may want everyone to become a true believer, but people outside of the circle cannot achieve a successful future.

    There is a bigger problem with putting exclusive success and failure in the same box, though, one that Brin gets at with his Paul Ehrlich example: it's a pejorative combination, implying that the two are equivalent. I certainly wouldn't be happy in a Left Behind world (in fact, I'd probably be hunted down by the Tribulation Commandos), but few Dispensationalists would see their own success as a form of failure -- while they would likely see the upper left world as indicative of one where they've lost. Failure becomes an issue of perspective, not objective reality.

    For many pragmatists, exclusive success and failure may in fact be equivalent concepts; many (most?) people willing to accept different pathways to positive change would see the success of a limited group of people at the expense of everyone else as a form of failure. Even the doomiest doom-sayers among the peak oil and civilization collapse crowd (e.g., James Howard Kunstler) wouldn't see being right as a form of success, even if pockets of well-prepared survivalists carried on (although they may get a bit of schadenfreude out of saying "I told you so" as the boat sinks).

    So perhaps it's better to drop "failure" as a hard term, recognizing that each of the four quadrants would likely be seen as a "failure" outcome for somebody.

    Regarding some particular points Brin raises:

  • I do agree with Brin's list of What To Avoid for ideological matrices; in fact, those are pretty much identical to the What To Avoid list elements for making dual-axis scenario sets, too.

  • It's not an accident that the various examples in each box are all folks who "care about the future" -- it *is* a matrix of futurist perspectives, after all.

  • I disagree with the argument that groups that dislike or oppose each other shouldn't end up in the same box. If the point of opposition is unrelated to the dynamics of the axes, while the issue arguably connecting them is fundamental to the matrix, it's a completely appropriate structure.

    [As a (very) crude example, imagine a spectrum running from "singularity technologies are inevitable and all-powerful" to "singularity technologies will be haphazard and only marginally transformative"; one would put both Ray Kurzweil and Bill Joy at the same end of that spectrum, even though they have radically different visions of what these technologies would actually do.]

    One last item: with regards to this:

    I feel we have to get smarter. Maybe a LOT smarter, before we will be able to deal with AI and immortality and molecular manufacturing and nanotech and bioengineering. Effective intelligence is where we really should be investing research and development. Because if we do get smarter, or make a next generation that is, then the rest of it could be much easier.
    Frankly, when I look at Aubrey de Gray and Ray Kurzweil... and when I look in a mirror... I see jumped up cavemen who want to live forever and get all pushy with the universe and quite frankly, I am not at all sure that cavemen are ready to leap into the role of gods.

    I agree that we need to get smarter and that we need to focus attention on effective intelligence. I disagree, however, that this means we need to pull back. Intelligence evolves with the environment, broadly conceived, and (if William Calvin is right, and I think he is) we get smarter faster when the environmental pressures are the most extreme. Calvin argues, for example, that the measurable improvements in hominid and early human cognitive skills closely correlated with rapid climate shifts.

    In other words, we may not get the intelligence we need if we don't put ourselves in the position of needing it.

    May 29, 2006

    Memorial Day

    Andrew Jackson Wickline, my grandfather, the man I was named for, died three years ago, shortly before Memorial Day; a veteran of World War II, he was given a military service on Memorial Day itself, 2003.

    A short while before he died, Grandpa Jack gave me a box of old photos from the war. Over 500 pictures, taken by the company chaplain for the 80th Field Hospital, and offered to the men afterwards; Jack was one of very few who took copies of the pictures. I've scanned a small handful of them, and put them up on the web, but I really need to scan them all.

    The photos are yellowed and clearly showing their age, but they are intact. Will the same be said in sixty years for the pictures we take today? My hard drive is full of images, taken by all manner of digital cameras -- but few have been printed out, and while I have multiple backups, digital media is inherently ephemeral. Formats change; people get sloppy. I have disks with essays from graduate school in formats that I can no longer read. How long until I can no longer read the image files found on some old CD I burned years ago?

    Physical objects are not permanent, and I couldn't share the photos from the middle of the last century so easily without converting them first to digital form. I know the value and power of electronic media. I simply wonder how much of our future's past will be lost when locked into long-discarded formats and devices.

    It is especially incumbent upon those of us who think about the future to remember what has gone before. The future doesn't just happen; events don't emerge fully-formed, like Athena from Zeus' head. The world in which we live is the result of myriad victories and mistakes, chances taken and decisions regretted, paths followed and options ignored, people loved and people forgotten. Too often, we pay attention solely to prominent names, the leaders and celebrities, and give them credit for creating the present. Artifacts like a box of old photos from a long-ago war remind us of how today's world was truly shaped, and the roles that everyday people played in making it come about.

    I look at the people in my grandfather's photos, and wonder: did they know they were remaking the world? Were these simply snapshots to them, vacation photos with an edge, or did they recognize that they were documenting their roles in a monumental political transformation? How would our understanding of the second world war differ if everyone had carried a camera, not just one person out of hundreds, or thousands?

    Under Mars, a site archiving soldiers' photos from the present Iraq War, gives us a hint. For some soldiers, the pictures are simply snapshots, a way to hang onto a moment with friends. For others, they are historical records, filtered not through the eyes of a journalist or through the official accounts, but anchored to their own perspective, their fear and elation and wonder and horror. These are the artifacts of a citizens' history of the world -- if we can remember how to view them.

    Memories are imperfect, and photos -- digital or physical -- have an aura of authority, but are no less subjective. But in the gathering of myriad subjective stories and images, a collaborative truth emerges. The more memories that get added to this collection, the more powerful the truth; beware histories that are written solely by victorious leaders.

    My grandfather, Andrew Jackson Wickline, gave me many gifts over the years, but this box of photos is an incredible legacy. Every time I look at them, I sense their gravity and power. I don't know what I'll do with them -- I'm very happy to listen to suggestions -- but I do know that I'll treasure them. They're tangible evidence that history comprises the lives of all of us, not just the great and the famous, and that all of our actions help to shape the world to come.

    May 24, 2006

    Future Matrix, Updated

    Yesterday's post What's Your Future has gotten a bit of attention, and much of the commentary (especially the discussion following the post itself) has been quite useful and interesting.

    Upon reflection, I think the use of "Realist" to denote the top of the vertical axis is somewhat confusing. I use the term to mean a position/ideology that welcomes compromise and embraces ambiguity; unfortunately, I noticed that a few people seemed to take it to mean "realistic" (or, better yet, "reality-based"). Given what that implies about the opposite end of the spectrum, people who might otherwise feel some sympathy for, say, the Optimist-Idealist box would reject that position.

    I'd like to replace the term Realist with Pragmatist.

    To further clarify, by Pragmatist I mean "open to multiple methodologies," and by Idealist I mean "strong preference for a particular methodology." In both cases, "methodologies" is intentionally broad.

    So, as a revised matrix:

    [Image: futurist_map_rev.jpg]

    May 22, 2006

    What's Your Future?

    How do you envision the future? Are we on the verge of dystopia? Soon to be transformed by accelerating change? Ready to strap on the jet packs to pick up our food pills? Settling in for a long struggle?

    It struck me recently, while talking with my friend Jacob Davies, that the relative success of WorldChanging and similar projects could be linked to the re-invigoration of a worldview combining optimism (a belief that success is possible, and can be broadly achieved) and realism (a belief that global processes are imperfect and cannot be perfected, and change happens through compromise and evolution). Jacob gave some further thought to this idea, and elaborated a bit on its implications in a comment at the Making Light weblog. The combination of belief sets -- optimism vs. pessimism, realism vs. idealism -- offers us a matrix for describing divergent ways of looking at the future.

    [Image: futurist_map.jpg]

    It's important to note first off that there isn't a strict correlation here between politics and foresight worldview. Both premillennial dispensationalists (the Left Behind, "rapture ready" types) and traditional revolutionary Marxists would be situated in the lower-right Idealist-Pessimist box, for example. It wouldn't be hard to find similar pairs of contrasting ideologies for the other boxes.

    Instead, let's populate the matrix with examples of differing approaches to understanding a changing world.

    In the upper left, Optimist-Realist, we can put WorldChanging and its fellow-travelers -- success is possible, but requires a clear understanding of problems and a willingness to adapt to meet changing conditions (use new tools, work with new allies, etc.). I put myself in this category, too (unsurprisingly), and I suspect that a large portion of the new generation of people doing foresight work would call this box home.

    In the upper right, Pessimist-Realist, probably the most familiar manifestation would be the cyberpunk sub-genre of science fiction, where the world is complex, change is messy, and the best we can hope for is staving off the worst of it for our own (likely small) group. As Jacob noted, many traditional environmentalists fall into this box; I'd also put various critics of technology such as Neil Postman or Bill McKibben in this category.

    In the lower right, Pessimist-Idealist, we can find (as noted) the religious revolutionaries, be they Left Behind-type Christians, Caliphate-fixated Muslims, or Third Temple-building Jews, all ready to wash away the unbelievers and enemies in order to transform the world. I would also put the "back to the Pleistocene" Deep Ecologists here, too, the folks who think that the only way to save the planet is to wipe out 9/10ths of the population.

    Finally, in the lower left, Optimist-Idealist, are those who see a transcendent, transformative future available to all. The most visible manifestation of this worldview can be found in those who see the advent of a technological Singularity fixing the world's problems and giving us all near-infinite knowledge and power. I don't put all Transhumanist-type folks here; James Hughes is an excellent example of someone who sees both a potential for technology-driven transformation and the need to work to make sure the benefits extend beyond a small group of elites. But anyone who has read Ray Kurzweil's books The Age of Spiritual Machines and The Singularity is Near knows how readily the Singularitarians can slip into millennialist language.

    For now, this matrix gives us a taxonomy of futurism, but it may prove to be a useful tool for understanding heretofore unexpected alliances (such as the growing anti-technology coalition between some environmentalists and some religious conservatives).
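    For readers who find it easier to see structure in code, here's a minimal sketch in Python of the matrix as a simple lookup table. To be clear, this is just an illustrative framing of the taxonomy above: the enum names, the quadrant() helper, and the axis wording are my own shorthand (borrowing the "Pragmatist" relabeling from the update above), not a formal methodology.

    # A toy representation of the futurist matrix, purely illustrative.
    # The quadrant occupants come from the examples discussed above;
    # the data structure and names are shorthand, not canon.
    from enum import Enum

    class Outcome(Enum):
        OPTIMIST = "inclusive success is possible"
        PESSIMIST = "exclusive success, or failure"

    class Method(Enum):
        PRAGMATIST = "open to multiple methodologies"   # "Realist" in the original matrix
        IDEALIST = "strong preference for one methodology"

    FUTURIST_MATRIX = {
        (Outcome.OPTIMIST, Method.PRAGMATIST): ["WorldChanging and fellow-travelers"],
        (Outcome.PESSIMIST, Method.PRAGMATIST): ["cyberpunk SF", "traditional environmentalists"],
        (Outcome.PESSIMIST, Method.IDEALIST): ["premillennial dispensationalists", "revolutionary Marxists"],
        (Outcome.OPTIMIST, Method.IDEALIST): ["Singularitarians"],
    }

    def quadrant(outcome: Outcome, method: Method) -> list:
        """Return the example worldviews that occupy a given quadrant."""
        return FUTURIST_MATRIX.get((outcome, method), [])

    # Example: opposed ideologies can still land in the same box.
    print(quadrant(Outcome.PESSIMIST, Method.IDEALIST))

    Laid out this way, the strange-bedfellows point is easy to see: Marxists and dispensationalists share a quadrant not because their ideologies agree, but because their eschatologies do.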

    Where would you put yourself? What does this matrix miss?

    May 19, 2006

    The Spacer Tabi

    David Brin keeps a running tab of the "predictions" he got right in his 1991 novel Earth. He didn't write the book as a piece of forecasting, but has managed to get a variety of things right about how the early 21st century would look.

    It may be time for me to start my own list.

    In 2003's Transhuman Space: Toxic Memes, I wrote about the "Spacer Tabi:"

    TRANSHUMAN STYLE: THE SPACER TABI
    Ever since humans moved into space full-time, the quest for comfortable, useful, and attractive clothing for zero gee has been unending. A variety of outfit designs have come and gone over the decades, but one item has stuck around: the tabi. Based on the Japanese split-toe slipper, the so-called "spacer tabi" allows for both comfort when walking in positive-gee environments and the ability to use the crude gripping ability of one's toes in zero gee.
    [...] Spacer tabis come in a wider variety of color and fabric on Earth than they do in space, and have become popular in most urban settings. Most adults in Fourth and Fifth Wave countries have at least one pair of spacer tabis in the closet.

    Today's boingboing brings us this bit of news:

    Space-sneakers like a Japanese toe-sock

    These "space-sneakers," manufactured by Japan's Asics, were designed in response to a Russian cosmonaut's complaint that the space-shoes he'd worn had hurt his feet. These shoes are more like Japanese tabi, a sock with a split toe, and they weigh a mere 130g. The slightly inclined toe is meant to keep the calf-muscle taut in low gravity. The company hopes that Japan's astronaut Takao Doi will beta-test them on his Space Shuttle/ISS mission in 2007.

    I don't know about you, but I'm totally ready to buy a pair.

    It's actually pretty unusual for futurists to get their scenaric elements right. That's not to say that the projections/forecasts are useless. Even "wrong" pieces of foresight are usually wrong in illustrative, useful ways, and get us to keep our eyes open for changes to culture (or technology or politics) that we may otherwise have ignored. Futurist work isn't really about telling people what will happen, but about getting people to anticipate change from a new perspective.

    May 4, 2006

    What's the Opposite of Triage?

    I've been thinking quite a bit lately about how we make long-term decisions. The trite reply of "poorly" is perhaps correct, but only underscores the necessity of coming up with reliable (or, at least, trustable) mechanisms for thinking about the very long tomorrow. Many of the biggest crises likely to face human civilization in the 21st century have important long-term characteristics, and our relative inability to think in both complex and actionable ways about slow processes may be our fundamental problem.

    Whether we're talking about asteroid impact, global warming, introduction of engineered self-replicating devices (biotech or nanotech) into the environment, or radical longevity, we seem stuck in the mindset that says "if it's not a squeaky wheel, it gets no grease." It's a triage mentality -- we're dealing with bloody, awful problems right here and right now, and something that won't affect us for decades is something we can ignore for the moment. The thing is, these aren't the kinds of problems where the cause and the effect happen close together, and they're not the kinds of problems that can be dealt with quickly. If we wait until they're the bloody, awful problems of right here and right now, it's far too late. So why is it so hard to think in the long term?

    Our brains evolved in conditions where individuals would likely live just a few decades, and some of the explanation for why it's so hard for us to think long-term comes from that. We may not be wired to do so easily, and teaching ourselves to think creatively about the future might be as difficult as training any other kind of behavior that runs against biological pressures. If this is so, it would suggest that long-term thinkers may end up a kind of "monk," disconnected from the everyday world, potentially given respect and support but rarely completely understood by society at large.

    It could also be a function of the relatively rapid pace of technological innovation. This would have two big repercussions. The first is that we become accustomed to thinking of present-day problems as simply being a matter of engineering -- we may not be able to do X now, but surely we'll come up with a way to do it cheaply and easily in The Future, so why worry? The second is that we are often burned by attempts to "predict" the future of technology, and find the pace of change a bit overwhelming. If so, this suggests that better thinking about longer-term problems is a process issue, and a better methodology would potentially work well.

    A lot to mull on here, and I don't have good answers yet.