September 21, 2015

Uncertainty, Complexity, and Taking Action (revisited)

I stumbled across this transcript of a talk I gave way back in late 2008, at the "Global Catastrophic Risks" conference. I was asked to provide some closing thoughts, based on what had gone before in the meeting, so it's more off-the-cuff than a prepared statement. The site hosting the transcript seems to have gone dark, though, so I wanted to make sure that it was preserved. There was some pretty decent thinking there -- apparently, I had a functioning brain back then.

Uncertainty, Complexity, and Taking Action

Jamais Cascio gave the closing talk at GCR08, a Mountain View conference on Global Catastrophic Risks. Titled “Uncertainty, Complexity and Taking Action,” the discussion focused on the challenges inherent in planning to prevent future disasters emerging as the result of global-scale change.

The following transcript of Jamais Cascio’s GCR08 presentation “Uncertainty, Complexity, and Taking Action” has been corrected and approved by the speaker. Video and audio are also available.

Anders Sandberg: Did you know that Nick [Bostrom] usually says that there have been more papers about the reproductive habits of dung beetles than about human extinction?  I checked the number for him, and it’s about two orders of magnitude more papers.

Jamais Cascio:  There is an interesting question there—why is that?  Is it because human extinction is just too depressing?  Is it because human extinction is unimaginable?  There is so much uncertainty around these issues that we are encapsulating under “global catastrophic risk.”

There is an underlying question in all of this.  Can we afford a catastrophe? I think the consensus answer and the reason we are here is that we can’t.  If we can’t afford a catastrophe, or a series of catastrophes, the question then is, what do we do that won’t increase the likelihood of catastrophe?  That actually is a hard question to answer.  We have heard a number of different potential solutions—everything from global governance in some confederated form to very active businesses.  We didn’t quite get the hardcore libertarian position today—that’s not a surprise at an IEET meeting—and I’m not complaining.  We have a variety of answers that haven’t satisfied.

I think it really comes down to unintended consequences. We recognize that these are complex fucking systems.  Pardon my language about using “complex,” but these are incredibly difficult, twisty passages all leading to being devoured by a grue.  This is a global environment in which simple answers are not just limited, they are usually dangerous.  Yet, simple answers are what our current institutions tend to come up with—that’s a problem.

One way this problem manifests is with silo thinking.  This notion of “I’m going to focus on this particular kind of risk, this particular kind of technology, and don’t talk to me about anything else.”  That is a dangerous thought, not in the politically incorrect sense, but in the sense that the kinds of solutions that you might develop in response to that kind of silo thinking are likely to be counterproductive when applied to the real world, which is, as you recall, a complex fucking system.

There is also, you’ve noticed here, an assumption of efficiency.  I mean by that an assumption that all of these things work.  That is not necessarily a good assumption to make.  We are going to have a lot of dead ends with these technologies.  Those dead ends, in and of themselves, may be dangerous, but the assumption that all the pieces work together and that we can get the global weather system up and running in less than a week…

With a sufficiently advanced, tested, reliable system, no doubt.  If we are in that kind of world of global competition where I have to get this up before the Chinese do, we’re not going to spend a lot of time testing the system.  I’m not going to be doing all the various kinds of safety checks and longitudinal testing to make sure the whole thing is going to work as a complex fucking system.  There is an assumption that all of these things are going to work just fine, when in actuality: one, they may not—they may just fall flat.  Two, the kinds of failure states that emerge may end up being even worse, or at least nastier in a previously unpredictable way, than what you thought you were confronting with this new system/technology/behavior, etc.

This is where I come back to this notion of unintended consequences—uncertainty.  Everything that we need to do when looking at global catastrophic risks has to come back to developing a capacity to respond effectively to global complex uncertainty.  That’s not an easy thing.  I’m not standing up here and saying all we need is to get a grant request going and we’ll be fine.

This may end up being, contrary to what George was saying about the catastrophes being the focus—it’s the uncertainty that may end up being the defining focus of politics in the 21st century.  I wrote recently on the difference between long-run and long-lag. We are kind of used to thinking about long-run problems: we know this thing is going to hit us in fifty years, and we’ll wait a bit because we will have developed better systems by the time it hits.  We are not so good at thinking about long-lag systems: it’s going to hit us in fifty years, but the cause and proximate sources are actually right now, and if we don’t make a change right now, that fifty years out is going to hit us regardless.

Climate is kind of the big example of that.  Things like ocean thermal inertia, carbon commitment, all of these kinds of fiddly forces make it so that the big impacts of climate change may not hit us for another thirty years, but we’d damn well better do something now because we can’t wait thirty years.  With ocean thermal inertia, there are actually two decades of warming guaranteed, no matter what we do.  We could stop putting out any carbon right this very second and we would still have two more decades of warming, probably another good degree to degree and a half centigrade.

That’s scary, because we are already close to a tipping point.  We’re not really good at thinking about long-lag problems.  We are not really good at thinking about some of these complex systems, so we need to develop better institutions for doing that.  Those institutions may be narrow—transnational coordinating institutions focusing on asteroids or geoengineering.  They may end up being a good initial step, the training wheels, for the bigger-picture transnational cooperation.

We might start thinking about the transnational cooperation not in terms of states, but in terms of communities.  I mentioned in response to George earlier that a lot of the super-powered angry individuals, terrorist groups, etc. in the modern world actually tend to come not from anarchic states or economically dislocated areas but in fact from community-dislocated areas.  Rethinking the notion of non-geographic community—“translocal community” is a term we are starting to use at the Institute for the Future—ends up requiring a different model of governance.

You talk about getting away from wars and thinking about police actions, but police actions are 20th century… so very twen-cen. Thomas Barnett, a military thinker, has a concept that I think works reasonably well as a jumping-off point.  He talks about combined military-civilian intervention groups as sys admin forces—system administration forces.  I’m kind of a geek at heart, so I appreciate it from that regard, but also the notion that these kinds of groups go in, not to police or enforce, but to administrate the complex fucking system.


Cascio:  Exactly.

One last question that I think plays into all of this popped into my mind during Alan’s talk.  I’m not asking this because I know the answer ahead of time—I’m actually curious.  When have we managed to construct speculative regulation?  That is, regulatory rules aimed at developments that have not yet manifested.  We know this technology is coming, so let’s make the rules now and get them all working before the problem hits.  Have we managed to do that?  Because if so, that becomes a really useful model for dealing with some of these big catastrophic risks.

Goldstein:  The first Asilomar Conference on Recombinant DNA.

Cascio:  Were the proposals coming out of Asilomar ever actually turned into regulatory rules?

Hughes:  No, they were voluntary.

Cascio:  I’m not trying to dismiss that.  What would be a Bretton Woods, not around the economy but around technology?  Technology is political behavior.  Technology is social.  We can talk about all of the wonderful gadgets, all of the wonderful prizes and powers, but ultimately the choices that we make around those technologies (what to create, what to deploy, how those deployments manifest, what kinds of capacities we add to the technologies) are political decisions.

The more that we try to divorce technology from politics, the more we try to say that technology is neutral, the more we run the risk of falling into the trap of unintended consequences.  No one here did today, but it’s not hard to find people who talk about technology as neutral.  I think that is a common response in the broader Western discourse.

I want to finish my observations here by saying that ultimately the choices that we make in thinking about these technologies, these choices matter.  We can’t let ourselves slip into the pretense that we are just playing with ourselves socially.  We are actually making choices that could decide the fate of billions of people.  That’s a heavy responsibility, but this is a pretty good group of people to start on that.

March 20, 2014

Mirror, Mirror -- Science Fiction and Futurism

Futurism -- scenario-based foresight, in particular -- has many parallels to science fiction literature, enough that the two can sometimes be conflated. It's no coincidence that there's quite a bit of overlap between the science fiction writer and futurist communities, and (as a science fiction reader since I was old enough to read) I count myself as extremely fortunate to be able to call many science fiction writers friends. But science fiction and futurism are not the same thing, and it's worth a moment's exploration to show why.

The similarities between the two are obvious. Broadly speaking, both science fiction and futurism involve the development of internally-consistent, plausible future worlds extrapolating from the present. Science fiction and many (but not all) scenario-based forms of futurism both rely on narrative to explore their respective future worlds. Futurist works and many (but not all) science fiction stories have as an underlying motive a desire to illuminate the present (and the dilemmas we now face) by showing ways in which the existing world may evolve.

But here's the twist, and the reason that science fiction and futurism are not identical, but instead are mirror-opposites:

In science fiction, the author(s) build their internally-consistent, plausible future worlds to support a character narrative (taking "character" in the broadest sense -- in science fiction, it's entirely possible for the main character to be a space ship, a computer network, a city, even a planet). In short, a story. Conversely, futurists develop a story or character narrative (found primarily in scenario-based futurism) to support the depiction of internally-consistent, plausible future worlds.

Science fiction writers need to build out their worlds with enough detail and system knowledge to provide consistent scaffolding for character behavior, allowing the reader (and the author) to understand the flow of the story logic. It's often the case that a good portion of the world-building happens behind the scenes -- written for the author's own use, but never showing up directly on the page. But there's little need for science fiction writers to build their worlds beyond that scaffolding.

Futurists need to make as much of their world-building explicitly visible as possible (and here the primary constraint is usually the intersection of limits to report length and limits to reader/client attention); any "behind the scenes" world-building risks leaving out critical insights, as often the most important ideas to emerge from foresight work concern those basic technology drivers and societal dynamics. When a futurist narrative includes a story (with or without a main character), that story serves primarily to illuminate key elements of the internally-consistent, plausible future worlds. (The plural "worlds" is intentional; as anyone who follows my work knows, one important aspect of futures work is often the creation of parallel alternative scenarios.)

In science fiction, the imagined world supports the story; in futurism, the story supports the imagined world.

It's a simple but crucial difference, and one that too many casual followers of foresight work miss. If a futurist scenario reads like bad science fiction, it's because it is bad science fiction, in the sense that it's not offering the narrative arc that most good pieces of literature rely upon. And if the future presented in a science fiction story is weak futurism, that's not a surprise either -- as long as the future history helps to make the story compelling, it's done its job.

Futurists and science fiction writers often "talk shop" when they get together -- but fundamentally, their jobs are very, very different.

July 29, 2013

Call for Papers: The Ethics of Geoengineering

I've been asked to serve as guest editor for an upcoming edition of the Journal of Evolution and Technology, a peer-reviewed electronic journal published by the Institute for Ethics and Emerging Technologies. (Full disclosure: I've been a senior fellow at IEET for seven years.) The topic of the edition is, as the title of this post suggests, the ethics of geoengineering. Link to the full call for papers.

Here's a bit about what we're looking for:

For this issue of JET we would like to solicit papers exploring both the proposed geoengineering methods, and ethical, social and political questions that must be considered before they are explored and undertaken. Which methods make sense to explore? How can we keep the pressure on to shift to renewable and sustainable forms of energy, agriculture and manufacturing if we avail ourselves of this techno-fix? What agencies should be empowered to research and undertake these initiatives? What risks and benefits should be considered? What kinds of evidence and modeling should be required before they are undertaken, and at what point should they be deployed?

And the relevant info:

Important dates

Submission deadline: Nov 1, 2013
Notification of acceptance/rejection: Feb 1, 2014
Final revision deadline: March 1, 2014
Publication: Spring/Summer 2014


Length and Style

We anticipate that this issue will contain around 10 papers and, as a working guide, the papers should be between 4000 and 12,000 words in length. Instructions on format and style are here:

Submission procedure

Manuscripts must be submitted electronically in Microsoft Word to

Review process

Each submission will ideally receive two reviews. Completed reviews will be forwarded to the corresponding authors. Please suggest up to three external reviewers to facilitate the review process.

Here's what I'll be looking for: arguments and discussions that directly address the underlying dilemma driving the consideration of geoengineering, namely, the growing possibility that dire effects from climate disruption will happen faster than any carbon emission cuts could stop. Papers that just assert that geoengineering is bad and we should feel bad for talking about it, or that geoengineering is great because it will mean we don't have to waste money on cutting carbon, will very likely find themselves stuck in a spam filter.

I've written quite a bit about the politics and ethics of geoengineering, but I know that I'm (a) not the only one thinking about it, and (b) not in possession of a monopoly on good ideas. I'd really love to see submissions of pieces that change my mind.

June 22, 2011

Summer Reading (Had Me A Blast)


What to read, what to read, as one takes a summer holiday...

Here are some books that you might not have heard of (I've talked up stuff like the Mars Trilogy and Transmetropolitan before). They're all science fiction or fantasy, and one's a graphic novel, but I'm not feeling like putting up a list of really depressing non-fiction books right now.

Anyway, I've read all of these, and liked them:

The Epic of Gilgamesh: An English Version with an Introduction (Penguin Classics) by Some Mesopotamian Guy ~5000 years ago (paperback, Kindle)
No, really. This is one of the very first epic stories ever written, influencing storytelling for millennia.

The Lifecycle of Software Objects by Ted Chiang 2010 (hardcover) Free HTML version
Novella-length (hardcover runs 150 short pages), but utterly captivating. AI story with a heart.

Phonogram: Rue Britannia by Kieron Gillen and Jamie McKelvie 2007 (paperback)
Music is magic, and somebody is trying to resurrect the goddess of Britpop. Uh oh. Has a sequel, Phonogram Volume 2: The Singles Club, which is if anything even more brilliant.

River of Gods by Ian McDonald 2007 (hardcover, paperback, Kindle)
Compelling exploration of identity, AI, and power set in late 21st century India.

Sandman Slim by Richard Kadrey 2009 (hardcover, paperback, Kindle)
James Stark spent 11 years in Hell, and now he's living in Los Angeles. You make the jokes. Urban magic noir. Has a sequel (Kill the Dead: A Sandman Slim Novel ), and another out soon.

Spin by Robert Charles Wilson 2006 (paperback, Kindle)
Aliens put a shell around the Earth, slowing time -- a million years pass outside the shell for every year passing on Earth. This has, as you might expect, some troubling implications... Has a sequel (Axis ).

Vast (The Nanotech Succession) by Linda Nagata 1998 (Kindle Only)
Hard science fiction story of survivors of an interstellar war trying to escape an enemy warship, each traveling at near light-speed. Some of the survivors are still human. Actually has three very good novels leading into it (Tech-Heaven, The Bohr Maker, and Deception Well), but stands alone nicely.

When Gravity Fails by George Alec Effinger 1987 (paperback, Kindle)
Cyberpunk novel (with all that implies) set in a future Middle East. Yes, it's an old book (not Gilgamesh old, but still). Read it anyway. Has two sequels, A Fire in the Sun and The Exile Kiss.

May 18, 2010

OtF Core: Ethical Futurism (from 2006)

(This is the original Ethical Futurism piece I wrote for Futurismic in 2006; I intend to update and build on it, but I wanted to make sure the original could be found in its entirety here.)

What does it mean to be an “ethical futurist?”

I don’t mean just the basics of being an ethical human being, or even the particular ethical guidelines one might see for any kind of professional — disclosure of conflicts of interest, for example, or honesty in transactions. I mean the ethical conventions that would be essentially unique to futurists. What kinds of rules should apply to those of us who make a living (or a life’s goal) out of thinking about what may come?

Futurists — including scenario planners, trend-spotters, foresight specialists, paradigm engineers, and the myriad other labels we use — have something of an odd professional role. We are akin to reporters, but we’re reporters of events that have not yet happened — and may not happen. We are analysts, but analysts of possibilities, not histories. We’re science fiction storytellers, but the stories we tell are less for entertainment than for enlightenment. And, much to our surprise, we may be much more influential than we expect.

It’s not that no futurists have considered ethical issues before. Foresight professionals regularly grapple with the question of what kinds of ethical guidelines should govern futurism, in mailing lists, organizational debates, and academic papers. But — to my surprise — neither of the two main professional organizations for futurists, the World Future Society and the Association of Professional Futurists, has any lists, documents or debates on the subject available to the public. This doesn’t mean that futurists are inclined to behave unethically or amorally, but simply that there seems to be no overarching set of principles for the field, at least none open to the broader community in which futurists act.

As I gave this some thought, it struck me that futurists are not alone in thinking about tomorrow professionally. Most business consultant types also concern themselves with what may come, with the results of corporate decisions and organizational choices. But the difference between that sort of business consulting and foresight consulting comes down to the difference between outcomes and consequences. Outcomes are the (immediate or longer-term) results of actions; consequences are how those actions connect to the choices and actions of others, and to the larger context of society, the environment, and the future itself.

As I see it, then, where business professionals are responsible to the client and their various stakeholders, foresight professionals are responsible to the future.

Here’s what I think that means:

It means that the first duty of an ethical futurist is to act in the interests of the stakeholders yet to come — those who would suffer harm in the future from choices made in the present. This harm could come (in my view) in the form of fewer options or possibilities for development, less ecological diversity and environmental stability, and greater risks to the health and well-being of people and other species on the planet. Futurists, as those people who have chosen to become navigators for society — responsible for watching the path ahead — have a particular responsibility to safeguard that path, and to ensure that the people making strategic choices about actions and policies have the opportunity to do so wisely.

From this, I would argue for the following set of ethical guidelines:

An ethical futurist has a responsibility not to let the desires of a client (or audience, or collaborator) for a particular outcome blind him or her to the consequences of that goal, and will always inform the client of both the risks and rewards.

An ethical futurist has the responsibility to understand, as fully as possible, the range of issues and systems connected to the question under consideration, to avoid missing critical potential consequences.

An ethical futurist has the responsibility to acknowledge and make her or his client (audience, collaborators) cognizant of the uncertainty of forecasts, and to explain why some outcomes and consequences are more or less likely than others.

An ethical futurist has the responsibility to offer unbiased analysis, based on an honest appraisal of sources, with as much transparency of process as possible.

An ethical futurist has the responsibility to recognize the difference between short-term results and long-term processes, and to always keep an eye on the more distant possibilities.

Futurists perform a quirky, but necessary, task in modern society: we function as the long-range scanners for a species evolved to pay close attention to short-range horizons. Some neurophysiologists argue that this comes from the simple act of throwing an object to hit a moving target. Chimpanzees and bonobos, even with DNA 98% identical to our own, are simply unable to do so, while most humans can (at least with a bit of experience). It turns out that the same cognitive structures that let us understand where a moving target will be may also help us recognize the broader relationship between action and result — or, more simply, how “if” becomes “then.”

I’m not sure how many futurists recognize the weight of responsibility that rests on their shoulders; this is an occupation in which attention-deficit disorder is something of a professional requirement. But when we do our jobs well, we can play a pretty damn important role in shaping the course of human history. It’s incumbent upon us, then, to do our jobs with a sense of purpose and ethics.

September 10, 2009

....and another FC: APIs Are Not A Substitute for Ethics

Building on a Twitter post from the other day, my latest Fast Company essay looks at what happens when we try to limit misbehavior through tools, not rules.

The best kind of rules are those we apply to ourselves, those we believe in. Ethics--sometimes thought of as "how you behave when no-one is looking"--have the advantage of being readily applied to novel situations, and able to guide responses fitting the spirit of the law. People in positions of social power (such as doctors and lawyers) often receive training in ethics as part of their educations. What I'd like to see is the introduction of ethics training in these new catalytic disciplines.

Computer programmers, biotechnologists, environmental scientists, neuroscientists, nanotech engineers--all of these fields, and more, should have at least a course in ethics as part of their degree requirements. Ideally, it should be a recurring element in every class, so that it's not seen as just another hoop to jump through (check off the "is this ethical? Y/N" box), but instead as a consideration woven into every professional decision.

Along the way, I take a slap at a couple of my usual targets, too.

March 11, 2009

Living in the Green Future

Popped into Costco today to pick up a couple of items, and what did I see?

Cheap Home Solar

Just in case you can't read that too well, it's a 60W solar panel setup, with inverter (allowing it to power 110V devices), junction box to hook the four panels together, cabling, and frame... for under $300.

Stacked like tires at Costco.

This is a beautiful example of why I talk about the banality of the future. Cheap solar power systems readily available to the unwashed masses was once something out of science fiction; today, it barely elicits a glance from shoppers stocking up on cases of pickles and TVs by the six-pack.

The future isn't here. The future was here awhile ago, ate all your donuts, and took off to get some beer.

February 23, 2009

Scaffolding, Redux

Mike Flynn of Opportunity Green attended the Art Center College summit, and took these shots of the three futures presented in my talk. Thanks, Mike!

Also: very, very cool to discover the Mobility Vision Integration Process, a nifty way of quickly generating scenaric futures. The focus here is on mobility, but the process -- which Art Center College has chosen not to patent or otherwise restrict -- can be applied much more broadly. While the basic card version offers a fun way of playing scenario design (I so want to come up with the Collectible Card Game rules for it!), the site also offers a Flash version that does a decent job of replicating the experience.


What's particularly nice about the process is that it provides not just scenario bullet points, but the "design context" -- that is, the various constraints and demands that go into shaping the strategies operating in this scenario.
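The basic mechanic of a card-based scenario process like this (drawing one driver from each of several categories and treating the combination as a scenario seed) is easy to sketch in code. To be clear, the card categories and entries below are my own invented stand-ins, not the actual MVIP deck; this is just a minimal illustration of the draw-and-combine approach.

```python
import random

# Hypothetical card categories: these are NOT the real MVIP cards, just
# stand-in examples of the kinds of drivers such a deck might contain.
CARDS = {
    "society": ["aging population", "urban densification", "telepresence work"],
    "technology": ["cheap sensors", "autonomous vehicles", "grid-scale storage"],
    "constraint": ["carbon pricing", "water scarcity", "supply-chain fragility"],
}

def draw_scenario(seed=None):
    """Draw one card per category to produce a quick scenario seed."""
    rng = random.Random(seed)
    return {category: rng.choice(cards) for category, cards in CARDS.items()}

# Three quick scenario seeds -- each combination becomes a prompt
# for fleshing out a fuller "design context."
for i in range(3):
    print(draw_scenario(seed=i))
```

The point of the exercise isn't the random draw itself, of course; it's that forcing unlike drivers into one frame pushes you past the obvious first-choice scenario.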

December 15, 2008

Value Ecologies

I have to admit something: I've been a business consultant.

Not just in the consulting futurist sense, but also in the "let me help you innovate your product cycle, grow your stakeholders, and immanentize your eschaton" sense. Although I don't really do that any more, I'm still somewhat attuned to that language. So when this past weekend I attended the "2008 Venture Showcase" for the Presidio School of Management -- which specializes in sustainability MBAs -- a phrase used in passing by one of the presenters triggered an idea that, upon reflection, might be worth sharing.

What popped into my head during the presentation was the term value ecologies. With the phrase in mind, a rough definition started to spill out: the collection of interdependent producers, suppliers, customers, shippers, competitors, supporters, creators of add-ons, and so forth, all contributing to the perceived value of a product or service, for better and worse.

Unlike "value chains," which focus on how a product or service gets made, or "value networks," which focus on the web of buyers, makers, users, etc., that support a given product or service, a value ecology demands that we consider more than a single "species," makes no assumptions about mutual benefit, and offers no implications of stability.

A given value ecology may include parasites (e.g., companies selling knock-off versions), predators (e.g., big companies looking to buy a smaller product/service provider in order to gain access to the employees), and diseases (e.g., poor performance, usually temporary but potentially fatal), all in a context of a changing environment (e.g., technology, global economics, and so forth).

While this may seem like consultant-ese, this model could actually be useful for foresight practitioners looking to understand the potential for change in a given economic (or technological, or social) niche. You can't just look at how a product or service is made (value chain) or its complement of users and suppliers (value network), you have to think about the whole range of actors and institutions dependent upon and competing with that product or service.

What's happening right now with the US automotive industry is a good illustration of the concept. If auto manufacturing in the US dies off, there are obvious concerns about the workers, suppliers, and buyers. But when you start to play out the larger web of interactions, you start to run into surprises. The death of US automakers could be a deadly blow to network television, for example; watch a few hours of prime time TV (without TiVoing over the ads), and you'll start to see just how dependent commercial television is on car advertisements. Large-scale sporting events are likely hurt, too, for similar reasons. A shift to higher-mileage vehicles (likely faster without the Big Three than with them) would reduce state incomes from gas taxes, and likely accelerate pressure to implement some kind of per-mile-traveled fee in order to pay for infrastructure. The demise of the big US automakers could also open up a niche for unexpected players to enter the market -- the Apple iCar has become something of a cliche, but one could imagine ExxonMobil (flush with cash) setting up a side-operation making cheap gas-only autos. Or Swatch cars making a come-back. Or even IKEA starting to make quick-assembly vehicles (perfect for the DIY crowd).

A more positive version might be seen in something like the iPhone. Its value ecology would include the hackers who "jailbreak" the system to allow new kinds of applications, the competitors scrambling to come up with appealing alternatives (as well as the increased demand for similar devices), the users of other 3G phones on the same network as the iPhone finding the data tubes overloaded with iPhone surfers, web designers having to decide whether to make a site iPhone-friendly, perhaps even the makers of fingerless gloves -- remember that the iPhone touchscreen requires your uncovered fingertip. Any chance of a slight uptick in frostbitten fingers?
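Since a value ecology is essentially a typed web of actors around a focal product or service, it can be sketched as a small relationship graph. This is a loose illustration only: the relation labels follow the taxonomy above (supplier, parasite, predator, complement), while the class name and the specific actors are hypothetical examples of my own.

```python
from collections import defaultdict

class ValueEcology:
    """A minimal sketch: actors grouped by their relation to a focal offering."""

    def __init__(self, focal):
        self.focal = focal
        self.relations = defaultdict(list)  # relation type -> list of actors

    def add(self, relation, actor):
        self.relations[relation].append(actor)

    def actors(self, relation=None):
        """All actors, or just those with a given relation to the focal offering."""
        if relation is not None:
            return list(self.relations[relation])
        return [a for actors in self.relations.values() for a in actors]

# Hypothetical example actors, echoing the taxonomy in the post
eco = ValueEcology("smartphone platform")
eco.add("supplier", "chip fabricator")
eco.add("parasite", "knock-off maker")
eco.add("predator", "acquisition-minded conglomerate")
eco.add("complement", "app developers")

print(eco.actors("parasite"))  # ['knock-off maker']
print(len(eco.actors()))       # 4
```

The useful analytical move is the `actors()` query across all relation types at once: unlike a value chain or value network, the ecology view deliberately mixes beneficial and hostile species in one picture.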

I coughed up pretty much all of this in a moment's insight during the Venture Showcase, and later discovered -- much to my surprise -- that nobody had used the term "value ecology" for anything even remotely similar to this notion (nearly all of the links found on the All-Seeing Eye of Mountain View simply had the two terms adjacent to each other).

So, for all of you out there doing consulting and management-analysis type work: is this useful?

September 20, 2008

Tomorrow Matters

(Every now and again, it's useful to remind readers -- and myself -- just why structured thinking about the future should matter to people intensely concerned about today's problems. Long-time readers will find much of this familiar, but I hope you will also appreciate a straightforward encapsulation of the argument.)

When the world seems to be falling down all around us, can we afford to spend our time thinking about the future?

In the midst of ongoing wars, accelerating economic collapse, and cascading environmental ruin, it's easy to dismiss futurism as self-indulgence, a superficial pastime devoted to spotting the next hot gizmo or telling us all how some coming development changes everything. What really matters is the here-and-now. Serious people know that thinking about the future is frivolous; anyone (or any business) not focusing laser-like on the problems of today is wasting time and money. Right?

Wrong.

Thinking about the future is fundamentally important to dealing with the challenges of today. In order to confront these problems successfully, we have to think carefully about the implications and results of the steps we might take, not just in the immediate moment, but as conditions continue to evolve. As we've seen time and again, it's all too easy for actions that seem reflexively correct to lead to far greater crises down the road.

Futurism -- or, as I prefer to articulate it, structured thinking about the future -- is a means of putting both the problems we face today and the solutions we might try in a larger context. It does so in three key ways:

  • It expands our understanding of the scope of the situation. How do these various problems connect to each other? Are there underlying similarities? How would the outcomes that we fear would arise from problem X affect the course of problem Z? Would the steps we want to take in one arena positively or negatively affect outcomes in another situation?

Now, to be sure, good present-focused analysis will give you much of this, too. And doing this sort of thinking about a problem is far, far better than the "ooh shiny!/ooh scary!" model we seem to reflexively use, especially in major crises. But futurism does more.

  • It expands our understanding of the horizon of the situation. Not just how does this affect us now, but how would this affect us over time? In parallel, it allows us to think through what happens with different kinds of solutions we may want to use to deal with a problem. What's the potential for undesirable consequences? What kind of conditions result after this "solves" the problem?

Again, you might say, "this isn't futurism, it's simply responsible thinking" -- again, sorely lacking in much of our current discourse. But you might notice that conventional analysis that looks at horizon issues (implications, blowback, and the like) rarely gets combined with conventional analysis that looks at scope issues (relationships, reinforcement, interdependencies). Carrying off that kind of combination is hard to do, and especially hard to do well.

That's why few of the discussions of (for example) the current global financial meltdown will include more than a cursory reference to energy (and even there, will almost entirely focus on oil), a glance at demographics (and only in regards to pensions and, in the US, Social Security), or anything at all about climate disruption, migration patterns, and the role of participatory technologies. Yet all of these issues both helped to create the conditions that made the financial panic possible, and will shape both the kinds of responses we can undertake and how well those responses will work.

But futurism has one more, critical, trick up its sleeve:

  • It expands our understanding of the kind of world we want. By bringing into focus both the scope of connections among issues, and the potential impacts and implications on the horizon, futures thinking allows us to begin to see the path we'd need to take to get to a better world -- or, at minimum, the paths we need to avoid in order to forestall a worsening situation. Futurism, structured thinking about the future, clarifies the responsibility and capacity we have to create a tomorrow worth living in.

Heady stuff. And a bit presumptuous, too -- how can we think that we can see the future?

We can't. We can only see possibilities. But that's okay. We're not trying to predict what will happen tomorrow; we're trying to understand possible consequences. We're trying to lay out maps of the landscape ahead, in order to chart a better course. These maps won't always be accurate -- sometimes they'll be completely wrong. But the process of creating the maps will give us a more detailed look at, and a clearer perspective on, where we are today. Even being completely wrong has value: figuring out why we were wrong, what we missed, can sometimes be even more illuminating than being right.

There's a rapidly-growing variety of methods available to us, from scenario planning to simulations to futures-mapping to so-called "prediction markets." Perhaps the most exciting is something new: massively-collaborative forecasting. I have the good fortune to be part of the Superstruct project, a "massively-multiplayer forecasting game;" Superstruct will begin in early October, and thousands of people will work together to explore what the future could hold.

With all of these tools, the goal is to examine tomorrow to give us a better understanding of how to deal with today.

I've sometimes called futures thinking a "wind-tunnel," a way of testing plans and ideas. Now I think that's a bit limited. Futures thinking is perhaps better understood as an immune system for our civilization. By examining and testing different possible outcomes -- potential threats, emerging ideas, exciting opportunities -- we strengthen our collective capacity to deal with what really does transpire. Thinking about the future, and doing so in a careful, structured, open and collaborative way, makes us a stronger civilization.

Focusing only on the challenges of the present may seem imperative, especially when those challenges are massive and frightening. But without a sense of what's next, a capacity for understanding connections and horizons, and a vision of what kind of world we want, our efforts to deal with today's problems will inevitably leave us weakened, vulnerable, and blind to challenges to come.

By ignoring tomorrow, we undermine today.

April 9, 2008


Neologisms coming to mind during the Institute for the Future Ten-Year Forecast event (Updated):

  • "Mesh-to-Mesh" -- social network applications, like Twitter, structured as overlapping peer networks. Living in the space between one-to-one and many-to-many, mesh-to-mesh networks serve as a medium for discovering & creating new network connections, and bridging otherwise distinct communities. This one emerged as I was thinking about Twitter.

    In brief, questions and responses to someone on my Twitter who's part of one community (say, eco-bloggers) are visible to everyone on my Twitter list, across the full array of represented communities. If they aren't already linked, they'll only see my half of the conversation, but (in my experience) speaking directly to someone often leads to some folks on my network becoming part of theirs. Mesh-to-mesh networks are likely to be strongest when there's moderate overlap: too much overlap and they become functionally identical networks; too little overlap and call-outs and links to the alternative networks happen too infrequently. Mesh-to-mesh can have the intimacy of personal links and the diversity of a mass discussion.

  • "Planet-to-Peer" -- an interactive environmental information network allowing for both monitoring and (when appropriate) manipulation. A green sousveillance system with feedback. This one emerged during a small group session led by David Pescovitz, covering eco-monitoring technologies; he'd asked me to describe how some of these networks might work, and by way of explanation I offered "they're planet-to-peer systems."


  • "Adaptive Optics" -- not a new term, but a new use. Optical metaphors are commonplace in consulting, with talk about "lenses" and "prisms" almost a requirement. In thinking about cognitive or cultural lenses for understanding a rapidly changing environment, the term "adaptive optics" came to mind. In reality a technology for dealing with a rapidly changing visual environment (such as turbulence in the atmosphere), the metaphorical version would be systems for dealing with a rapidly changing foresight environment.

If and when more new phrases bubble up during the event, I'll add to this post.

(Photo by Alex Pang)

March 24, 2008

Super-Empowered Hopeful Individuals

This is my column for the latest edition of Nanotechnology Now. Mike Treder reposted it over at CRN's blog, so I thought I'd go ahead and repost it here, too. Feedback, as always, is more than welcome.

Most discussions of the benefits of technologies like molecular manufacturing tend to focus either on broad social advances (engineered by helpful governments, NGOs, or businesses) or individual desires that transformative technologies may be able to satisfy. These are surely useful ways of thinking about a nanotech-enabled world. But what if this model misses another category, one that may be less noticeable precisely because we pay so much attention to its opposite?

A leading fear for those of us looking at the longer-term implications of molecular manufacturing is the technology's capacity to give small groups -- or even individuals -- enormous destructive capacity. This isn't unique to advanced nanotechnology; similar worries swirl around all manner of catalytic technologies. In fact, some analysts consider this a problem we currently face, and give it the forbidding label of "super-empowered angry individuals."

Thinking about it for a moment, the question arises: Where are the "super-empowered hopeful individuals?"

The core of the "super-empowered angry individual" (SEAI) argument is that some technologies may enable individuals or small groups to carry out attacks, on infrastructure or people, at a scale that would have required the resources of an army in decades past. This is not an outlandish concern by any means; many proponents of the SEAI concept cite the September 11 attacks as a crude example of how vulnerable modern society can be to these kinds of threats. It's not hard to imagine what a similar band of terrorists, or groups like Aum Shinrikyo, might try to do with access to molecular manufacturing or advanced bioengineering tools.

But angry people aren't the only ones who could be empowered by these technologies.

As a parallel, the core of the "super-empowered hopeful individual" (SEHI) argument is that these technologies may also enable individuals or small groups to carry out socially beneficial actions at a scale that would have required the resources of a large NGO or business in decades past. They would rebuild towns or villages after a natural disaster, or provide health care to refugees; they would clean up environmental toxins, or build renewable energy systems. The Millennium Development Goals would be their checklist. They would carry out the kinds of projects that humanitarian organizations do today, but be able to do so with smaller numbers, greater speed, and a far larger impact.

To an extent, these are tasks we might expect governments, NGOs or businesses would seek to accomplish, and they'd be welcome to do so. But catalytic technologies like molecular manufacturing could so enhance the capabilities of individuals that, just as we have to account for SEAIs in our nano-era policies and strategies, we should pay attention to the beneficial role SEHIs could play. They change the structure of the game.

In my work at Worldchanging, I became acquainted with numerous individuals and small organizations who would jump at the chance to become SEHIs. There's a tremendous desire out there for tools and ideas to build a better world. In addition, if molecular manufacturing proves as economically disruptive as some have argued, there may also be large numbers of people looking for something to do with their lives after their previous jobs disappear; it's in our collective interest to make sure that more of them become SEHIs than SEAIs.

Some readers may be wondering why we should care. It's obvious that we need to be concerned about SEAIs -- they can kill us -- but if SEHIs want to go out and make the world a better place, hooray for them (and the world). So why worry?

One answer is that there would be debate over just how beneficial some of the SEHI plans would actually be. Clean water, rebuilt homes? Fine. But what about building churches or mosques or other religious centers? Or think of the controversy surrounding the One Laptop per Child project; now picture thousands of One Laptop per Child-scale projects, run by passionate (but quirky) individuals. Worse yet, imagine the havoc that could ensue if well-intended but misguided SEHIs decide to solve global warming on their own and embark on massive geoengineering projects with disastrous side-effects.

Still, the outlook is not all bad. Far from it. The amount of good that can be done by future super-empowered hopeful individuals may prove to be far greater than the damage produced by their angry counterparts.

The lesson I took from Worldchanging was that it is precisely when the risks and challenges are greatest that we see just how many of us are willing to act to build a better world. There are millions of people out there right now, looking for ways to do exactly that. Perhaps you’re one of them. As Pierre Teilhard de Chardin said, "The future belongs to those who give the next generation reason for hope."

December 11, 2007


The Center for Responsible Nanotechnology today published eight scenarios exploring differing drivers for the advent of molecular manufacturing. This was the "virtual workshop" series I led earlier this year, and the scenarios reflect the work of over 50 people across six different countries. I wrote the initial drafts of five of the eight narratives.

I'm particularly happy with a couple of them:

"Negative Drivers"

Thirty million dead from the Rot in the US. Today, everyone knows at least one person who died a horrible death during the pandemic, and most of us know a lot more than that.

As soon as it was clear that the Rot was showing up in cargo, collapse was unavoidable. All nations called a quarantine on goods shipped from China. China, suddenly losing its export dollars, called in trillions of dollars in debt from the USA. The US dollar crashed. The credit rating of the United States went through the floor.

If you think about the money, it makes it easier not to think about the corpses.


"Breaking the Fever"

Refugees from ecological disaster zones, surging towards those countries seemingly less-affected by global warming, were met by armed force; nations hit by drought or agricultural collapse no longer regarded it as a temporary problem, and some grabbed the water supplies and farmland of weaker neighbors; those places still producing abundant levels of greenhouse gases came under verbal attack at the UN and in the global media, and the world was treated to the surreal spectacle of the United States (greatest per-capita greenhouse output) and China (greatest total greenhouse output) on the verge of coming to blows over which one was the worst carbon offender.

Those tensions came to a boil in 2015 when coordinated acts of sabotage took nearly a hundred Chinese coal-fired power plants offline. The Chinese government blamed the U.S. and put its military on high alert; the American government responded in kind. Fortunately, before either side could launch a preemptive attack, a rural Chinese movement took credit for the sabotage. Beijing was taken by surprise when the resulting crackdown backfired, with some regiments refusing to attack Chinese citizens and others actively joining the movement. A smuggled camphone clip of renegade Chinese military aircraft bombing the nation's largest coal-fired power plant was the top-rated video on YouTube that year.

Believe it or not, both of these have happy endings.

September 23, 2007

Give an XO, Get an XO

I don't think the One Laptop Per Child project knows what it is about to unleash.

On November 12, and for an unspecified (but brief) period following, the OLPC project will offer the "Give 1, Get 1" special:

For $399, you will be purchasing two XO laptops—one that will be sent to empower a child to learn in a developing nation, and one that will be sent to your child at home.

(Heh, yeah, "your child at home.")

But that's it: for $399, you'll get an XO laptop of your own, and fund an XO for a child in the developing world.

Considering the hype and the enthusiasm surrounding the XO, and considering that, as far as gadgets go, $400 isn't really a huge investment, I expect the demand for this to be huge. The question, then, is whether the OLPC project is ready to meet that demand.

(Update: Ethan Zuckerman has further observations, well worth reading.)

An Unexpected Engine for Innovation

Could universal health insurance be an engine for entrepreneurial innovation?

I don't mean innovation in the healthcare space in particular, although that's possible. I mean more generally, as an unanticipated benefit, an "economy of scope," if you will, of universal health coverage. It may well be that a shift to broad health coverage could trigger a period of surprising economic growth. This may actually be an argument that would win support for single-payer insurance among those not persuaded by the moral or social aspects.

I came at this thought in a somewhat roundabout way. It will come as no surprise to anyone who has done a rapid succession of talks and travel that, a couple of days after getting back from Zürich, my immune system went on strike and I was hammered by one of those colds that served as a reminder of just how much we take our health for granted. My current health insurance situation is a bit complicated, as it is for most freelancers, and although this situation wasn't enough to warrant going to a doctor, I began once again (in my waking, lucid moments) to think about whether I needed to find a "real" job that would come with benefits such as health coverage.

Today, it struck me: I can't be the only person facing this kind of choice.

How many people want to be out there, trying new professional experiments, working for themselves, but are held back by the thought that doing so would mean a lack of real health insurance?

It's not uncommon to see paeans to the entrepreneurial spirit of US citizens*, and read consultant-ese observations that the one success skill in a rapidly-changing economy and society is flexibility, a willingness to try new things. This latter argument makes sense, from the "economic resilience" perspective. In a period of turmoil, successful adaptation demands the ability to iterate, rapidly and in parallel, multiple different models. With product design, it may be sad but ultimately of little consequence to toss out the less-adaptive concepts; the same cannot be said for human lives.

This is the health care risk at the heart of entrepreneurialism: if you or someone in your family gets sick or injured, you could easily lose everything. And if you have a "pre-existing condition" (such as my palindromic rheumatism), you're really out of luck. If you're youthful and willing to take a chance, this may be an acceptable trade-off; but remember, this is an aging population, and innovation is not just a sport for the young. If you have a spouse with health benefits, you may be okay, but that puts enormous responsibility on the shoulders of one's partner to keep the job s/he's in, no matter how unhappy or unfulfilled it might be. COBRA works for awhile, if you can get it, but it has its own limitations. So too with the variety of packages for freelancers (if you can get them). The handful of remaining options -- including just going without -- can be amazingly expensive.

I don't think that there is necessarily a massive population of proto-entrepreneurs just waiting for universal health coverage in order to go out and change the world. I do think that there's a small number, however, which would then provide a model for people who might have long-ago discarded the idea of working for themselves. The lack of universal healthcare in the United States may well be a brake to the kinds of innovation and individual experimentation that will be necessary to adapt to a rapidly-changing economic -- and geophysical -- environment.

Just some thoughts on a Sunday afternoon, still in the midst of recovery.

(*The European experience provides neither strong support nor contradiction of this premise, given the substantial cultural and, often, legal differences regarding entrepreneurialism between the US and Europe.)

May 13, 2007

Open Source with a Bullet: John Robb's Brave New War

The U.S. is Microsoft. Al Qaeda is Linux.

That, at least, is the grossly-oversimplified version of John Robb's new book, Brave New War. Such a parallel has nothing to do with politics, but with position. The United States, and other centralized, conventionally powerful global actors, fill a role in the geopolitical ecosystem akin to Microsoft: big and slow to respond; wealthy and wasteful; hierarchical and ossified. Al Qaeda, and other distributed, guerrilla insurgency and terrorist movements, fill a geopolitical role more akin to Linux: decentralized and nimble; open to new entrants; innovative out of necessity. It's for good reason that Robb refers to the conflicts now underway as "open source warfare," and the distributed participants, "global guerrillas."

I'll leave it to others to address the military implications of Robb's argument; it's enough to say that I found his ideas compelling (this should come as no surprise, given how often I link to his site when I write about global politics). I'd like to focus, instead, on what he calls out as the proper response that those opposed to the global guerrillas should adopt.

Robb makes it clear that the tactics the United States (and, to a lesser extent, Europe and other post-industrial nations) now employs are bad, bad ideas. "Knee-jerk police states" and "preemptive war" fall into a category Robb borrows from security specialist Bruce Schneier: "brittle security." The big problem with brittle security is that, when it fails, it fails catastrophically; moreover, by employing these tactics, the U.S. (etc.) undermines the very moral suasion and memetic influence that are among the most important tools to fight empowered extremism.

He proposes instead the adoption of "dynamic decentralized resilience:"

It is simply the ability to dynamically mitigate and dampen system shocks. Specifically, it is those things we (and our state) can do to change the configuration of our networks to ensure that intentional or naturally occurring attacks on our society don't do much damage or spiral out of control.

This is a welcome argument. The concept of resilience is useful as a response to a spectrum of threats, as it emphasizes not the specific counters to a particular challenge, but the broader ability of a society or network to survive and thrive even when faced with major threats. Robb uses it here as a way of dealing with open source warfare; a few months ago, I used it as a way of dealing with environmental disruption:

"Resiliency," conversely, admits that change is inevitable and in many cases out of our hands, so the environment -- and our relationship with it -- needs to be able to withstand unexpected shocks. Greed, accident or malice may have harmful results, but [...] such results can be absorbed without threat to the overall health of the planet's ecosystem. If we talk about "environmental resiliency," then, we mean a goal of supporting the planet's ability to withstand and regenerate in the event of local or even widespread disruption.

Robb and I are not alone in the use of resilience as a fundamental part of surviving the 21st century. The Resilience Alliance greatly expands on the notion of environmental resilience, and links it to concepts such as adaptive cycles and Panarchy. (I'd love to see how Robb would make use of the Panarchy argument in his own work -- there are definite connections.)

This isn't simply a coincidental use of the same word. The overlaps between social resilience and ecological resilience are quite profound. A small example of this can be seen when Robb leads us through reconfiguring an existing system to make it more resilient. He argues that the power grid could be made much more resilient -- that is, much better able to absorb and mitigate threats -- by becoming much more decentralized, with individual buildings becoming power generators as well as power consumers. To be clear, this isn't a call for energy isolationism -- he doesn't want to go "off-grid." It's a call for a much more deeply-networked grid. And it happens to be an argument very familiar to those of us looking at ways to deal with environmental crises, not simply because it supports greater use of renewable energy, but because of its resiliency under stress.
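Robb's grid example can be illustrated with a toy model. The sketch below (my own construction, not from the book) compares how much demand survives a single generator failure in a centralized versus a decentralized grid:

```python
import random

def served_fraction(generators, demand, n_failures, trials=10_000, seed=42):
    """Average fraction of demand met when n_failures randomly chosen
    generators drop off the grid."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        surviving = rng.sample(generators, len(generators) - n_failures)
        results.append(min(sum(surviving), demand) / demand)
    return sum(results) / len(results)

demand = 100
centralized = [100]        # one big plant
distributed = [1] * 120    # 120 building-scale generators, with some slack

# Losing a single unit is catastrophic for the centralized grid
# but invisible to the decentralized one.
print(served_fraction(centralized, demand, n_failures=1))  # 0.0
print(served_fraction(distributed, demand, n_failures=1))  # 1.0
```

The numbers are invented, but the shape of the result is the point: distributed generation absorbs the same shock that takes the centralized system down entirely.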

Looking more broadly, Robb lists three rules for successful "platforms," or sets of services, operating under his resiliency model: transparency (so all participants can see and understand what's happening); two-way (so all participants can act as both providers and consumers of the services); and openness (so the number and kind of participants isn't artificially limited). Again, these rules should sound very familiar to readers of (among other sites) Open the Future and WorldChanging.

I make a point of highlighting these similarities in order to demonstrate that the concepts that Robb discusses as a way of dealing with a particular kind of challenge actually have far broader applicability. An open, transparent, distributed and resilient system is precisely what's needed to successfully survive threats from:

  • Natural disasters, such as tsunamis, earthquakes, and pandemic disease.
  • Environmental collapse, especially (but not solely) global warming.
  • Emerging transformative technologies, such as molecular manufacturing, cheap biotechnology and artificial general intelligence.
  • Open source warfare.
  • Even (should it happen) the Singularity.

John Robb addresses some of these when referring to "naturally occurring attacks" or the value of sustainability as a way of supporting resilience. Because he focuses on the military/security manifestations, however, he doesn't make a strong connection to the broader utility of the concept. I hope that he starts to look more closely at these other arenas as sources of innovation and even alliance.

The one element that Brave New War lacked, and that would have been welcome, is some exploration of what kinds of counter-global guerrilla strategies might be in the offing. He's clear that the current approach is disastrous, and the resilience argument does a good job of showing how post-industrial nations can better survive the threat of global guerrillas without surrendering their values. But I found myself wondering what kinds of tactics and technologies will emerge as a way of meeting the open source warfare threat head-on. Is it something as obvious as re-tooling conventional militaries to adopt more "open source" style techniques? Is it something as surprising as a shift in focus towards what might be thought of as an "open source peace corps"? Maybe it will require a major technological leap, where we find that the best counter to open source guerrillas is ultra-high-tech swarming bots, or nano-weapons, or something even more startling.

The question I have for John Robb is, then: if we build the open future, how do we defend it?

May 7, 2007

I Own This Number

24 EB 93 14 E0 4B C0 BD 99 44 65 AD 86 CC DE 92

...and I'd better not catch any of you using it!

(See here for explanation... and to get your own 128-bit number!)
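For anyone who wants to mint a 128-bit number of their own, here's a minimal sketch in Python. The `secrets` module draws from the OS's cryptographic random source, so a collision with anyone else's number is vanishingly unlikely:

```python
import secrets

# 16 random bytes = one 128-bit number, printed as hex pairs
# in the same style as the number above.
n = secrets.token_bytes(16)
print(" ".join(f"{b:02X}" for b in n))
```

Each run produces a different number; with 2^128 possibilities, you can safely claim yours is unique.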

April 12, 2007

One Revolution Per Child

I wish that Nicholas Negroponte had never referred to it as the "one hundred dollar computer."

Yes, yes, it's an attention-grabbing name, but noting with a smirk that the first ones will actually cost $150 has become a game for reporters. I'm particularly aghast when technology journalists do it, because they of all people should know that information technology prices always fall -- the OLPC laptop won't remain $150 for long.

All of this comes to mind because of a new article from IEEE Spectrum magazine, "The Laptop Crusade." For the first time, I've become really excited about the potential this project holds, and not solely because of its leapfrogging possibilities.

(Some people I really respect, like Lee Felsenstein and my friend Ethan Zuckerman, show up in the article with some astute comments; I was interviewed for the piece as well, with the usual result that a couple of my throwaway comments got used, and the main point I tried to make is nowhere to be found. So it goes.)

I'm excited about the OLPC machine's potential because it's so clearly a revolutionary device, both in the sense of it having capabilities that nobody has ever before seen in a laptop, and in the sense of it being a catalyst for out-of-control social transformation. The OLPC project will drop millions of powerful, deeply networked information technology devices into the hands of precisely the population (children and teens) most likely to want to figure out the unanticipated uses.

From the startlingly long-range wifi mesh networking to the "Sugar" social interface, these devices were built to treat hierarchies as damage, and route around them.

Bletsas says his design will provide node-to-node connectivity over 600 meters. Over a flat area without buildings and with low radio noise, that connection can stretch to 1.2 km. Students can put their computers on the mesh network simply by flipping the antennas up. This turns on the Wi-Fi subsystem of the machine without waking the CPU, allowing the laptop to route packets while consuming just 350 milliwatts of power. [...]

The mesh network feature lets students in the same classroom share a virtual whiteboard with a teacher, chat (okay, gossip) during class, or collaborate on assignments. [...]

The OLPC team also constructed a completely new user environment, code-named Sugar, designed to break down the isolation that students might experience from staring at laptops all day. It introduces the concept of “presence”—the idea behind instant-messaging buddy lists. The user interface is aware of other students in the classroom, showing their pictures or icons on the screen, allowing students to chat or share work with others in the class.

The system shares new tasks, like a drawing or a document, with the other students by default, though students can choose to make them private. Sugar creates a “blog” for each child—a record of the activities they engaged in during the day—which lets them add public or private diary entries.

This is a participatory culture dream device. Using entirely open source software, the laptops are enormously friendly to "hacking" (in the exploration sense, not the criminal sense), yet can be returned to a safe configuration at the push of a button. Moreover, they're extraordinarily, wonderfully energy-efficient: at normal use, an OLPC laptop draws 3 watts, compared to 30 watts for a typical lower-end conventional laptop; and a full charge lasts for over six hours at maximum power use, 25 hours in power conservation mode.
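Those power figures imply a rough battery size, which is easy to check. This back-of-the-envelope sketch treats the quoted 3-watt "normal use" draw as the maximum-use figure, so the numbers are illustrative, not a spec:

```python
# Rough arithmetic from the figures quoted in the post (assumed,
# not from an official spec sheet).
full_power_draw_w = 3.0    # claimed draw at normal use
full_power_hours  = 6.0    # claimed runtime at maximum power use
conserve_hours    = 25.0   # claimed runtime in conservation mode

battery_wh = full_power_draw_w * full_power_hours   # implied pack size
conserve_draw_w = battery_wh / conserve_hours       # implied draw

print(f"implied battery: {battery_wh:.0f} Wh")              # 18 Wh
print(f"implied conservation draw: {conserve_draw_w:.2f} W")  # 0.72 W
```

Even taken loosely, a sub-watt conservation mode is consistent with the 350-milliwatt mesh-routing figure quoted from the Spectrum article.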

Felsenstein notes that teachers will (rightly) see these laptops as a direct assault on their authority, and many will be banned from classrooms, leaving the kids to use the machines unsupervised.

I sure hope so.

A generation growing up believing in their capability to hack the system, work collaboratively, and make information a tool is probably one of the best things that could happen to a developing nation. Possibly not in the short run -- backlash from fearful authorities will be nasty -- but certainly in the longer term, as the first wave of OLPC children reaches adulthood.

The revolution begins in 2008.

    February 21, 2007


    As the result of a casual conversation at the Good Ancestor Principle workshop, I've added SocialForge to the list of domains in my care. The name is a reference to SourceForge and BioForge, websites that offer resources for open source programming and open source bioengineering, respectively. I bought it because it seemed like a good name and concept, but I didn't really have an agenda for what I'd do with it.

    Any suggestions as to what would be the best approach to the use of SocialForge?

    January 28, 2007

    GRM Warfare

    Denise Caruso's new column at the New York Times kicks off with an essay on patents in the world of biotechnology. Most of the piece looks at how to build an intellectual property regime for biotechnology that serves the interests of society, not just a handful of companies. She cites a troubling, if not surprising, statistic: more than 20% of the human genome has already been patented, mostly by corporate biotech.

    She also mentions the case of genetically-modified potatoes from biotech firm Syngenta. Not only are the GM spuds patented, they've been modified to be sterile without the application of a particular chemical. Potato farmers can't "copy" the crop without paying a fee.

    The combination of these two facts is frightening. She doesn't use the term, but it's very clear what's going on here:

    Genetic Rights Management.

    Genetic Rights Management (GRM) is copy-protection for genes, a direct parallel to Digital Rights Management for CDs, DVDs, and other media. It's a term I came up with in 2002, as I was writing Transhuman Space: Broken Dreams. It's a way of preventing the duplication of patented genetic modifications by preventing unlicensed reproduction of individuals bearing those genes. It was an idea that struck me as the nearly-absurd but utterly plausible extension of trends in both biotech and intellectual property law; it now appears to be another case of successfully predicting the present.

    Biotech companies are unlikely to successfully put GRM onto naturally-occurring human genes that they have patented. They'll try, but it seems likely to be a legal loser: even though biotech companies currently hold a strong monopoly on the genes they patent, ownership of one's own naturally-occurring genes is a sufficiently common-sense notion that, even if the courts upheld the patent rights, legislatures would likely jump in to fix the laws. Biotech companies will be on firmer ground if they GRM-protect genemods that do not naturally occur in human beings, but can be used as a genetic treatment or enhancement.

    The tools to make this possible already exist. One way would be through the use of Human Artificial Chromosomes (HACs). Bacterial genetic research often uses artificial chromosomes inserted into a bacterial nucleus, allowing researchers precise control over the placement and replication of the new genes. The same is possible with human biology: a cell that would normally have 46 chromosomes can be given an extra 47th micro-chromosome carrying a small number of DNA base pairs. Case Western scientists reported the development of HACs in 1997, but the technique is not known to be in common use at this point. HACs would make the application of genetic rights management simple, either by applying the genemod directly via the artificial chromosome, or by putting the control mechanism in the HAC.

    The notion of introducing sterility in a genetic modification recipient to prevent unlicensed duplication is a staggeringly awful idea, yet is the logical result of current practices. Human genes are, as Denise Caruso describes, already subject to strong patent rights.

    As Tim Hubbard, a Human Genome Project researcher, noted at a 2001 conference: “If you have a patent on a mousetrap, rivals can still make a better mousetrap. This isn’t true in the case of genomics. If someone patents a gene, they have a real monopoly.”

    This monopoly gives patent holders total control over patented genetic materials for any use whatsoever — whether for basic research, a diagnostic test, as a test for the efficacy of a drug or the production of therapies.

    And biotech companies are already employing crude forms of GRM on genetically-modified plants and animals: so-called "terminator technology," blocking the reproduction of modified crops, has been around for years; and the recently-introduced hypoallergenic cats developed by Allerca are delivered to their new owners pre-neutered or spayed.

    It may be that GRM goes too far, and that any attempt to roll out such a system will result in backlash against the underlying notion of genetic patents. I hope so, at least; already, too much of what had been in the commons has been locked up as private intellectual property. But as we work to raise awareness of and resistance to overreaching by big bio, we need to recognize that things are not nearly as bad as they might yet be.

    (Update: Be sure to look at the first comment, from Rob Carlson.)

    January 11, 2007

    Beauty and the Beast

    Damn, that iPhone is pretty.

    I am primarily a Mac user, so I follow the annual announcements at Macworld fairly closely. This year, most folks expected Steve Jobs to unveil a phone, so when he announced it, few people were terribly surprised. But when he demonstrated it... geek lust heaven (or, as Brent at PvP put it, "Jesus has come back and he's a phone now.") The gestural interface, the Jonathan Ive design, the way it gets the little things right (like shutting off the touchscreen when you lift it to your ear), all of these inspired a near frenzy among a broad array of Mac geeks, tech geeks and design geeks. It was just that cool.

    Then I discovered something that turned this beauty into a nasty little beast.

    The iPhone is a closed device. Users cannot install any applications on it, not even the little mobile Java apps that run on pretty much every phone with a color screen. This may not sound like a big deal; after all, the iPhone will do everything you need it to do already, right? And even if it doesn't, look how pretty it is!

    Here are a couple of reasons why this is a big deal, from an Open the Future perspective:

  • It runs counter to one of the most important trends in the online and offline world right now: DIY culture. This is becoming a fairly common observation, so I won't spend too much time on it. In brief, the mixed and mashed contributions and creations of individuals drive innovation, and we're increasingly building a world (both online and off) that enables and encourages these contributions. From open source to "Web 2.0," Second Life digital LEGOs to Wikipedia, the future is being built by collaborative creation. A locked down system prevents the iPhone from being a part of that world, to its detriment, and to the detriment of its users.

  • It's dangerous to Apple and Cingular (really!) One of the reasons why Steve Jobs doesn't want to allow outside applications is that he doesn't want poorly-written (or malicious) programs to cause problems. As he says in Newsweek:

    “You don’t want your phone to be an open platform,” meaning that anyone can write applications for it and potentially gum up the provider's network, says Jobs. “You need it to work when you need it to work. Cingular doesn’t want to see their West Coast network go down because some application messed up.”

    Here's the problem, Steve: keeping the system closed won't stop that from happening -- in fact, it makes it more likely. It's a dead certainty that the iPhone will be cracked, will be turned into a de facto open platform, whether through taking advantage of system or application flaws -- as was done with the just-as-"closed" Playstation Portable -- or through simply turning it into a Linux box -- as has been done with the original iPod. A security plan cannot be based on the concept that nobody will do the obvious.

    Once the iPhone is cracked open, people inclined to use it to do nasty things to the Cingular network (or to other users on that network) will still be able to do so -- and regular users will have no tools at their disposal to counter or circumvent that threat, other than those from Apple and Cingular. Which company takes the blame (and possible lawsuits)? That's a situation absolutely ripe for finger-pointing, instead of solutions.

    At the same time, the potential for accidental damage to the network is greater in this scenario, as non-malicious hardware hackers and garage programmers poke around, trying to figure out what the different components and programming interfaces do. A home-brewed application that should be just fine might in fact be disastrous, simply because of a hidden undocumented feature. Realistically, while the chances of this happening are pretty slim, they're still greater than if the iPhone was open to developers, with officially documented interfaces and commands.

    In short, a closed iPhone will be no less subject to malice, and probably more subject to accident, than an open iPhone.

    The iPhone is still six months away, so there's still time for policies and technologies to change. Given the way Jobs talks in interviews in both Newsweek and the New York Times, however, he may be digging in his heels on the matter, holding his position even when it's no longer tenable. In that case, Apple may be in for a painful lesson in the dynamics of the new world.

    December 31, 2006

    An Eschatological Taxonomy

    Eschatology: (noun) The study of the end of the world.
    Taxonomy: (noun) A classification in a hierarchical system.

    What do we mean when we talk about the "end of the world?"

    It's a term that gets thrown around a bit too often among a variety of futurist-types, whether talking about global warming, nanofabrication, or non-friendly artificial intelligence. "Existential risks" is the lingo du jour, referring to the broad panoply of processes, technologies and events that put our existence at risk. But, still, what does that mean? The destruction of the Earth? The end of humankind? A "Mad Max" world of leather-clad warriors, feral kids, and armed fashion models? All are frightening and horrific, but some are more so than others. How do we tell them apart?

    Here, then, is a first pass at a classification system for the varying types of "end of the world" scenarios.

    Class 0: Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
    Global civilization not eliminated, but regional civilizations effectively destroyed; millions to hundreds of millions dead, but large parts of humankind retain current social and technological conditions. Chance of humankind recovery: excellent. Species local to the catastrophe likely die off, and post-catastrophe effects (refugees, fallout, etc.) may kill more. Chance of biosphere recovery: excellent.

    Class 1: Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
    Global civilization set back to pre- or low-industrial conditions; several billion or more dead, but human species as a whole survives, in pockets of varying technological and social conditions. Chance of humankind recovery: moderate. Most non-human species on brink of extinction die off, but most other plant and animal species remain and, eventually, flourish. Chance of biosphere recovery: excellent.

    Class 2: Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
    Global civilization destroyed; millions (at most) remain alive, in isolated locations, with ongoing death rate likely exceeding birth rate. Chance of humankind recovery: slim. Many non-human species die off, but some remain and, over time, begin to expand and diverge. Chance of biosphere recovery: good.

    Class 3a: Human Extinction - Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
    Global civilization destroyed; all humans dead. Conditions triggering this are human-specific, so other species are, for the most part, unaffected. Chance of humankind recovery: nil. Chance of biosphere recovery: excellent.

    Class 3b: Human Extinction - Natural (examples: major asteroid impact, methane clathrates melt)
    Global civilization destroyed; all humans dead. Conditions triggering this are general and global, so other species are greatly affected as well. Chance of humankind recovery: nil. Chance of biosphere recovery: moderate.

    Class 4: Biosphere Extinction (examples: massive asteroid impact, "iceball Earth" reemergence, late-era molecular nanotech warfare)
    Global civilization destroyed; all humans dead. Biosphere massively disrupted, with the wholesale elimination of many niches. Chance of humankind recovery: nil. Chance of biosphere recovery: slim. Chance of eventual re-emergence of organic life: good.

    Class 5: Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
    Global civilization destroyed; all humans dead. Biosphere effectively destroyed; all species extinct. Geophysical disruption sufficient to prevent or greatly hinder re-emergence of organic life.

    Class X: Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)
    Global civilization destroyed; all humans dead. Biosphere destroyed; all species extinct. Planet itself destroyed.
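
    For convenience, the classification can also be restated as a small machine-readable lookup table. The labels and recovery estimates below are copied from the entries above; None marks chances the post doesn't state explicitly, and the helper function is just an illustrative way of querying the table:

```python
# The taxonomy above as a lookup table:
# class label -> (name, chance of humankind recovery,
#                 chance of biosphere recovery).
# None marks a chance the post does not state explicitly.

TAXONOMY = {
    "0":  ("Regional Catastrophe", "excellent", "excellent"),
    "1":  ("Human Die-Back", "moderate", "excellent"),
    "2":  ("Civilization Extinction", "slim", "good"),
    "3a": ("Human Extinction - Engineered", "nil", "excellent"),
    "3b": ("Human Extinction - Natural", "nil", "moderate"),
    "4":  ("Biosphere Extinction", "nil", "slim"),
    "5":  ("Planetary Extinction", None, None),
    "X":  ("Planetary Elimination", None, None),
}

def survivable_by_humans(taxonomy):
    # Classes where the post gives humankind any stated recovery
    # chance better than "nil".
    return [c for c, (_, human, _) in taxonomy.items()
            if human not in ("nil", None)]

print(survivable_by_humans(TAXONOMY))  # ['0', '1', '2']
```

    Encoding it this way makes the hierarchy's main property explicit: past Class 2, humankind's stated recovery chance drops to nil and the remaining classes differ only in what happens to the biosphere and the planet.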

    Suggestions, additions, changes all welcome.

    And, on that note, Happy New Year! See you in 2007.

    (Updated 1/5 to change "ecosystem" to "biosphere" -- thanks, Mitch!)

    December 30, 2006

    Making the Future Yours

    As a species, Homo sapiens isn't particularly good at thinking about the future. It's not really what we evolved to do. Our cognitive tools developed in a world where rapid and just-accurate-enough pattern recognition and situation analysis meant the difference between finding enough tubers & termites to munch on for the evening and ending up as dinner for the friendly neighborhood predator. In a world of constant, imminent existential threats, the ability to recognize subtle, long-term processes and multi-generational changes wasn't a particularly important adaptive advantage.

    But what we haven't evolved to do, we can learn to do. And now, more than at any previous point in human history, our survival depends on our capacity to think beyond the immediate future. The existential threats we face today are, in nearly every case, slow, subtle, and seemingly -- but deceptively -- remote. We no longer live in a world of obvious cause and easily-connected effect, and choices based on these sorts of expectations are apt to cause us vastly more harm than benefit.

    Unfortunately, thinking in the language of the long term isn't a habit most of us have cultivated. So the development I'd like to see happen in 2007 is something that all of us can do: try to imagine tomorrow. Not in a gauzy, indeterminate "what if..." kind of way, and not in a cyber-chrome & nano-goo science fiction kind of way. I'd like us to start with something concrete and personal.

    On January 1st, as we recover from the previous night's celebrations, rather than making out a list of resolutions we know we're unlikely to keep, I'd like us each to imagine, with as much plausibility and detail as we can muster, what our lives will be like in just one year, at the beginning of 2008. What has the last year been like? What has changed? What has surprised us? What are we (the "we" of a year hence) thinking about? Regretting? Looking forward to?

    Then, after we've exercised our future-thinking muscles a bit, try this: do the same thing, only for ten years hence. What are our lives like in 2017? If possible, we should try to give this as much detail as we gave 2008. Not because this will make it more accurate -- it won't. But it can make it more real, more anchored in our lives of the present.

    We should write down what we've come up with, and save it (or if we're feeling a bit adventurous, blog it).

    That's it; just for a little while, let's think about our future.

    We create our tomorrows with every choice we make, but too few of us take even a moment to consider the consequences of our decisions. Every now and again, we need to think beyond the present, and recognize that we are as connected to our future as we are to our past. It's a good habit to get into; as our choices become ever more complex, it's the kind of habit that can even be worldchanging.

    (This was my contribution to WorldChanging's "What's Next:2007" series, posted today.)

    November 16, 2006

    OLPC Laptops Arrive (Updated)

    I've had my doubts in the past as to whether the One Laptop Per Child project (aka the "$100 Laptop") was taking the right course. After all, mobile phones have far greater penetration in the developing world, and a system that piggybacked on the mobile phone networks -- and used a device that could double as such a phone -- seemed a safer, and more likely to succeed, plan. Nonetheless, the OLPC group has done a good deal that's right with this project, and if it succeeds, it would certainly have a greater positive impact than would the fancy cell phone approach. The OLPC system is much more open than a phone would be, from the underlying Linux kernel/GNU OS (hi, Glyn!) to the use of WiFi mesh networking instead of a proprietary cellular network.

    The OLPC headquarters received their first shipment of working computers today, and have posted a series of "unboxing" photos to the web. A few things stand out: these are cute machines, from the bright green highlights to the "rabbit ear" antennae; they have some features I'd love to have on my Macbook (the twisting screen and ebook mode, in particular); and these suckers are small -- that's a 12" notebook used as a size reference!

    Congratulations to the OLPC team!

    (Update: Greetings, WorldChanging readers! I see that Alex linked here in reference to the mobile phone alternative argument, so it might be useful for me to go into a bit more detail.

    Information tools evolved from mobile phones have several key advantages over the OLPC model. These include: very low power requirements, easily met by cheap power generation and/or storage technologies; an existing infrastructure across much of the developing world, requiring very little in the way of new routing hardware; near-ubiquitous usage, reducing the likelihood of theft -- potentially a huge problem for the OLPC project; broad utility, so that the device can serve more than the education market (mobile phones are a major economic driver in the developing world); portability, for near-constant information and communication access.

    There are disadvantages in comparison to the OLPC unit, of course. The most glaring is the usability of the interface: a typical mobile phone screen is just tiny, and even a PDA-sized phone (akin to a Palm device or Simputer) is less useful for reading and graphics than the OLPC laptop; and the lack of a real keyboard imposes significant limits on composing anything longer than short text-message-length prose.

    I do think it would be useful to offer phone/PDA type information tools as learning devices in the developing world, simply to allow a real comparison. I'm not sure it will happen through a dedicated, OLPC-type effort -- chances are, it will be an accidental result of the increasing dependence upon and power of mobile phones.)

    November 15, 2006

    The New World: the Rise of the New Culture of Participation

    The following is the text of the talk I gave this morning at the International Association for Public Participation conference in Montreal, Canada. Where useful or necessary, I've added the relevant slide images. Updated: added links.

    My name is Jamais Cascio, and I'm a foresight specialist by trade -- that's a fancy way of saying "futurist." Now, when most of you hear the word "futurist," you probably imagine the guys telling us about personal jetpacks and honeymoons in orbit, or maybe the marketing types eager to identify new trends and fads. I like to think that I fall into a third category, however: futurists who take seriously the call to serve as society's radar, giving us all early warnings of big changes ahead.

    I know you've heard a bit already about the increasingly critical role that Internet technologies play in the world of public participation. Whether we think of this new world as "Citizens 2.0" or some less catchy phrase, it's clear that the emergence of these network-empowered tools is serving as a catalyst for some important changes in how we relate to each other, our governments and -- most importantly -- our civil societies.

    Think of it as the emergence of a new participatory culture.

    Continue reading "The New World: the Rise of the New Culture of Participation" »

    April 18, 2006

    OtF Core: The Open Future

    To get a sense of how this perspective has evolved over the past couple of years, here's "The Open Future," the essay that kicked off a series I produced for WorldChanging in my final month. The most important improvement, in my view, is the recognition of the larger connections of this approach -- it's not just about emerging technologies. Still a bit too solemn, though.

    The future is not written in stone, but neither is it unbounded. Our actions, our choices shape the options we'll have in the days and years to come. We can, with all too little difficulty, make decisions that call into being an inescapable chain of events. But if we try, we can also make decisions that expand our opportunities, and push out the boundaries of tomorrow.

    If there is a common theme across our work at WorldChanging, it is that we are far better served as a global civilization by actions and ideas that increase our ability to respond effectively, knowledgeably, and sustainably to challenges that arise. In particular, I've focused on the value of openness as a means of worldchanging transformation: open as in free, transparent and diverse; open as in participatory and collaborative; open as in broadly accessible; and open as in choice and flexibility, as with the kind of future worth building -- the open future.

    Continue reading "OtF Core: The Open Future" »

    OtF Core: Open the Future

    I wrote nearly 2,000 articles for WorldChanging, and I am very happy to have them there. Nonetheless, some of the pieces I wrote are fundamental parts of my worldview, and it's useful to have them here, too.

    "Open the Future," written in mid-2003, was originally scheduled to appear in the Whole Earth magazine. Unfortunately, that issue turned out to be the final, never actually published, appearance of the magazine. I posted the essay on WorldChanging in February, 2004. In retrospect, it's a bit wordy and solemn, and focuses too much on the "singularity" concept, but it still gets the core idea across: openness is our best defense.

    Very soon, sooner than we may wish, we will see the onset of a process of social and technological transformation that will utterly reshape our lives -- a process that some have termed a "singularity." While some embrace this possibility, others fear its potential. Aggressive steps to deflect this wave of global transformation will not save us, and would likely make things worse. Powerful interests desire drastic technological change; powerful cultural forces drive it. The seemingly common-sense approach of limiting access to emerging technologies simply further concentrates power in the hands of a few, while leaving us ultimately no safer. If we want a future that benefits us all, we'll have to try something radically different.

    Continue reading "OtF Core: Open the Future" »