
The Open Future: The Reversibility Principle

Two philosophies dominate the broad debates about the development of potentially-worldchanging technologies. The Precautionary Principle tells us that we should err on the side of caution when it comes to developments with uncertain or potentially negative repercussions, even when those developments have demonstrable benefits, too. The Proactionary Principle, conversely, tells us that we should err on the side of action in those same circumstances, unless the potential for harm can be clearly demonstrated and is clearly worse than the benefits of the action. In recent months, however, I've been thinking about a third approach. Not a middle-of-the-road compromise, but a useful alternative: the Reversibility Principle.

It's very much a work-in-progress, but read on to see what this could entail, and please feel free to add comments and critiques.

The Precautionary Principle, which took shape in environmental policy debates during the 1970s and 1980s, argues that uncertainty should be a trigger for caution when it comes to technological advances. The most widely-accepted version of the principle comes from the 1998 Wingspread Statement:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties.

Transhumanist advocates Max More and Natasha Vita-More created the Proactionary Principle in 2004 as a direct counter to the Precautionary Principle. It argues that only probable and serious negative outcomes should be sufficient grounds to block the development of potentially-useful technologies. The current version of the statement can be found on Max More's website:

People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies a range of responsibilities for those considering whether and how to develop, deploy, or restrict new technologies. Assess risks and opportunities using an objective, open, and comprehensive, yet simple decision process based on science rather than collective emotional reactions. Account for the costs of restrictions and lost opportunities as fully as direct effects. Favor measures that are proportionate to the probability and magnitude of impacts, and that have the highest payoff relative to their costs. Give a high priority to people’s freedom to learn, innovate, and advance.

There's room for debate in each of these philosophies, of course. Many worldchangers and WorldChanging allies subscribe to a version of the Precautionary Principle that focuses on taking responsibility for possible negative outcomes rather than simply avoiding any action that might lead to problems; our friends at the Center for Responsible Nanotechnology characterize this as the "active" form of the Precautionary Principle. The Proactionary Principle doesn't yet have multiple strongly-articulated versions, but the principle's authors have modified its wording in response to ongoing discussion; it's currently at version 1.2, although an earlier phrasing can be found in the Wikipedia article.

Critics of the Precautionary Principle claim that it focuses too much on worst-case scenarios, and gives insufficient weight to likely benefits of disputed technologies. Critics of the Proactionary Principle claim that it focuses too much on simple cause-and-effect logic, and ignores both complex results arising from interactions with other developments, and the potential for significant-but-not-inevitable problems. In my view, both of these arguments are largely correct.

We live in a world of rapid technological advances and tremendous global problems. Ideally, the first can help ameliorate the second; unfortunately, given the power of many of these advances, we run a strong risk that the first could make the second even worse. A binary "do it"/"don't do it" argument isn't well-suited to the degree of uncertainty that accompanies technological advances, nor to the combinatorial, mutually-reinforcing aspects of global problems (such as climate disruption making conditions of poverty worse in the developing world, driving people towards survival strategies that degrade the environment). I propose, instead, that we think not in terms of "caution" or "action," but in terms of "reversibility."

A word of warning: this idea isn't yet fully-baked, and I hope to see serious critiques coming from both precautionary and proactionary advocates. I welcome the criticism, as it will help me work out the details of the argument.

The Reversibility Principle

This is my first effort to articulate the Reversibility Principle:

When considering the development or deployment of beneficial technologies with uncertain, but potentially significant, negative results, any decision should be made with a strong bias towards the ability to step back and reverse the decision should harmful outcomes become more likely. The determination of possible harmful results must be grounded in science but recognize the potential for people to use the technology in unintended ways, must include a consideration of benefits lost by choosing not to move forward with the technology, and must address the possibility of serious problems coming from the interaction of the new technology with existing systems and conditions. This consideration of reversibility should not cease upon the initial decision to go forward or to hold back, but should be revisited as additional relevant information emerges.

Let's look at this in more detail.

"... development or deployment..." Ideally, the Reversibility approach would take hold in the early stages of the research and development process. The goal isn't necessarily to shut down research the moment potential problems are discovered, but to make certain to design the technology or process with reversibility in mind. We can assume that responsible technological development includes a desire to avoid harm; the Reversibility Principle would add to that a desire to include an "off switch" if harm is later identified.

"...technologies..." By this I mean any human-constructed tool, whether mechanical, biological or social.

"...uncertain, but potentially significant, negative results..." This encompasses two key issues: the negative results need not be guaranteed or inevitable; they should, however, be demonstrably serious. How "significant" is defined is likely to be a point of debate, but to start, I would look at the possibility of death, the difficulty of mitigation or amelioration, and the potential to make other, existing problems worse.

"...strong bias..." The potential for reversibility should be a critical issue as to whether to develop or deploy a technology, but shouldn't be the sole determinant. Other issues, such as the need to avert an even greater problem, will always come into play.

"...reverse the decision..." This is the cornerstone of the principle. Ideally, we would be able to recall the technology and undo the damage it has done should an unexpected negative result emerge. This will not necessarily be easy or even possible -- but the difficulty of reversing the effects of an action arises, in part, from not taking reversibility into account during the design process.

"...grounded in science..." Misunderstandings, rumors or myths -- even popular ones -- should not be sufficient to cause a decision to hold off the development or deployment of useful technologies. At the same time, we must recognize that all science is contingent upon better information, and the inherent uncertainties of scientific study should not be cause to dismiss concerns as not "grounded in science."

"...the potential for people to use the technology in unintended ways..." Saying that something is safe if used correctly isn't the same as it being safe. If "the street finds its own uses for things," those uses will often be contrary to the manufacturer's instructions. In short, consideration of possible harmful results must include possible misuses and abuses of the technology.

"...consideration of benefits lost..." The strongest argument against the strict form of the Precautionary Principle is that it fails to account for the harm that could result from the lack of the new technology in the same way as it accounts for the harm that could result from its deployment. In a world of large-scale problems requiring innovative solutions, this is dangerously short-sighted. The potential for irreversible negative results coming from the use of the technology must be weighed against the irreversible negative results coming from its relinquishment.

"...interaction of the new technology with existing systems and conditions..." This will be the most difficult to measure part of the Reversibility Principle. New technologies do not exist in a vacuum. When deployed, they immediately become part of a larger technological ecosystem, and effects that, in isolation, may be essentially harmless can, in combination with other parts of the ecosystem, lead to serious problems. An example would be a biofuel plan that leads many food farmers to shift to fuel crops, at the expense of the availability of food for poverty-stricken regions.

"...should not cease..." Once a decision has been made to deploy or not to deploy a given technology, questions about the technology should not be forgotten. New discoveries and analysis may change the balance of issues around the decision, and what was once the right choice may in time become the wrong one. In short, the decision as to whether a technology is sufficiently reversible should itself be reversible.

Why Reversibility?

Reversibility is something that would be useful for everyone to think about as they decide whether or not to adopt a particular tool or system, but the concept is particularly important for designers and planners.

From the design perspective, reversibility is something that should be part of the overall design process, much like sustainability. Just as it's easier to undertake a sustainable or "cradle-to-cradle" project by including the concept from the beginning, technology deployments are more likely to be reversible if the concept is inherent to the design, not simply an afterthought. For designers, then, the Reversibility Principle would advocate the question "how can we make this technology in a way that gives us the best ability to shut it off and undo any harm it might cause?" There may not be a perfect answer to the question, but it's almost inevitable that designs that take this issue into account will be more reversible than those that do not.

For planners, reversibility becomes an issue to take into account as technology development turns into deployment. By "planners," I mean anyone with responsibility for how a technological system gets into common use. For manufacturers, Reversibility Principle planning could be a hedge against lawsuits; for governments, Reversibility Principle planning could be a part of both economic and political strategy. If the reversibility concept were to take hold, I would imagine that insurance companies would be among its most strident advocates.

So how would the Reversibility Principle play out in practice?

One obvious candidate for reversibility analysis is biotechnology. A Precautionary approach says that we don't know the long-term effects of introducing genetically modified organisms into the ecosystem, as they are self-replicating technologies subject to evolutionary pressures; we should, therefore, avoid their deployment. Proactionary advocates argue that the benefits of the use of GMOs can be substantial, particularly in parts of the world that (for political or environmental reasons) are unable to grow enough food for local populations; we should, therefore, encourage their development. As before, both of these positions are, in my view, more or less correct.

A Reversibility Principle approach to biotechnology in general would argue that GMOs should be engineered in a way to make it possible to remove them from the environment if unexpected or low-probability problems emerge. Issues of human consumption of GMOs would be handled on a case-by-case basis, with a bias towards holding off on products that demonstrate a possibility of serious or irreversible problems.

Another candidate for the reversibility approach is the response to global warming. The Precautionary Principle and the Proactionary Principle could each be used to justify both rapid action to reduce carbon and a "wait for better methods" approach. From a Reversibility Principle perspective, however, the choice is clear. The potential problems arising from immediate action to cut carbon emissions are largely economic, and while in the worst case scenario they are serious, they are more easily mitigated than those that would come from a slow response, which in even a moderate-case scenario would harm hundreds of millions of people in irreversible ways.

The Reversibility Principle would also apply in the case of geo-engineering or "terraforming Earth" projects to stop globally catastrophic climate outcomes. It's likely that, should we be forced to consider such global-scale engineering to respond to climate disaster, few of the options will be reversible. The question then becomes which option -- including the option of doing nothing -- would in the worst reasonable scenarios result in the least amount of death and destruction, and which would give us the greatest opportunity for gradual mitigation of harm. Underlying the choices will be the need to make the options as reversible as possible, even if full reversibility isn't plausible.

There are two major questions that come to mind about the Reversibility Principle.

To be blunt, the first is whether "reversibility" is even possible. From a purely physical perspective, it's not; even the act of stepping back and brushing over one's footprints still shifts the sand. But there's a difference between being unable to return the world exactly to how it once was and being unable to head off a disaster before it becomes inevitable. Some of the difference arises from how soon we decide that a choice needs to be reversed; even gradual changes can become irreversible if given enough time to accumulate.

We should see "reversibility," then, not as an attempt to go back to precisely how the world once looked, but as an attempt to stop further harm at its source, and to ameliorate the harm that has already occurred.

But the bigger issue for the Reversibility Principle perspective is just how readily we can predict the various possible outcomes, both good and bad. The quick answer is that we can't, not fully, but that has never stopped us from planning for the future; we often need to act in situations of limited information. This doesn't mean our choices must be ill-informed.

This is a situation where Scenario Planning methodology could be of value. The scenario approach intentionally avoids coming up with a single "most likely" future. Instead, scenario planners come up with multiple contingent futures, none of them meant to be a prediction. Rather, the collection of scenarios functions as a set of environments in which to test plans -- strategic wind tunnels, if you will. In Reversibility analysis, planners would come up with multiple contingent futures in which to think about outcomes if the given technology is or is not deployed.
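To make the "wind tunnel" idea a bit more concrete, here is a minimal sketch of how such an exercise might be structured as a simple program (in Python). Everything in it -- the scenario names, the options, the scores -- is a hypothetical placeholder rather than real analysis; the point is simply the habit of rating each option's reversibility under every scenario, and paying particular attention to its worst case.

# A minimal sketch of scenario-based reversibility screening.
# All scenario names, options, and scores below are hypothetical placeholders.

SCENARIOS = ["rapid climate disruption", "gradual change", "economic shock"]

# Rough 0-10 estimates of how reversible each option would remain
# if that scenario came to pass.
REVERSIBILITY = {
    "deploy the technology now": {
        "rapid climate disruption": 5,
        "gradual change": 7,
        "economic shock": 4,
    },
    "hold back for further study": {
        "rapid climate disruption": 2,
        "gradual change": 8,
        "economic shock": 6,
    },
}

def worst_case(option):
    """Lowest reversibility score the option receives across all scenarios."""
    return min(REVERSIBILITY[option][s] for s in SCENARIOS)

# Favor the option whose *worst* scenario still leaves the most room to step back.
for option in sorted(REVERSIBILITY, key=worst_case, reverse=True):
    print(option, "-- worst-case reversibility:", worst_case(option))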

There's also the possibility of increasingly sophisticated models and simulations. I have enough experience with the use of computer models for political and social analysis to know that, for now, simulations are most reliable when they stick to physical systems, but it may be possible in time to develop decision-making aids using computer models that help human decision-makers better understand both the physical and social dynamics at work. In situations where harmful outcomes are highly contingent but potentially very serious, good simulations could help answer the "what happens if..." questions in ways that can better be applied to questions of reversibility.

Reversibility and the Open Future

A cornerstone of the open future concept is that we should be striving towards a world that maximizes our flexibility in response to challenges. We will never have perfectly free choices when problems arise, but we are more likely to come up with good solutions under less-constrained conditions than we would if we were limited to a handful of options. The choice to pull back and say "let's try something different" is an option that we should strive to maintain.

Ultimately, the Reversibility Principle should be a heuristic, a prism through which we look at the world and make our decisions. We may not always choose the path with the simplest way back -- it may not always be the right choice -- but it would encourage us to consider the issue for all of our options. Asking ourselves, "if we do this, how readily can it be undone if we discover problems?" forces us to think in terms of more than immediate gratification, and to consider how the choice connects to other choices we and the people around us have made and will make. In the end, it may even be a good first-order approximation of wisdom.

TrackBack

Listed below are links to weblogs that reference The Open Future: The Reversibility Principle:

» Cascio's Reversibility Principle from George
Unsatisfied with both the Precautionary Principle and its bipolar cousin, the Proactionary... [Read More]

Comments (15)

Leif:

This is a nice exploration of Change Management, as practiced by many large companies. Establish a backout plan as part of your plan to change, so that if something catastrophic and unpredicted occurs despite thorough testing, you can gracefully revert to the prior state. When the Reversibility Principle cannot be adequately applied to a specific change, then ample and well-researched evidence in favor of the nonreversible (read: permanent) change must be presented, examined and accepted. Furthermore, a system to allow emergency change should be researched and formulated -- do we allow certain changes to occur with limited review in dire circumstances, or do we inhibit changes of any sort even if the payoff far outweighs the potential risk but the change has to be made rapidly? I live in a world of flowcharts and policies, and this neatly tucks into the niche.

I find what you're saying echoes my feelings when reading a WC article on, say, the use of genetically modified methane fixing bacteria to keep the greenhouse gas levels down.

Ultimately, it all comes down to risk management, and providing as complete a set of conceptual tools for assessing risk as we can. Thus:
- The Proactionary Principle is fine so long as doing something entails no risk.
- The Precautionary Principle is fine so long as doing nothing entails no risk.
- The Reversibility Principle is fine so long as getting back to the start entails no risk greater than setting out in the first place.

Of the three, the Reversibility Principle offers the greatest level of moderation in approach. I suspect it is the tactic employed by most 'precautionary' thinkers.

To offer a couple of concrete examples on the perennial topic of global warming:
- doing nothing in case we do more harm than we heal is looking increasingly unattractive.
- methane fixing bacteria would be fairly quick to do, but just about impossible to remove from the biosphere if they proved to have unpleasant side effects. (Cane toads were bad enough!)
- OTOH it would be relatively simple to adjust the amount of solar radiation blocked out by a parasol stationed at the Earth-Sun Lagrange point. Just a little more effort...

No genetic engineering on anything smaller than a cow: that way, if one escapes, you can at least track it down fairly easily.

;-)

Seriously, though, I like this a lot. It provides a really good conceptual framework to consider a lot of different ideas: fighting global warming with giant space mirrors is OK because you can pull them down again if they don't work right: reversible.

Populating the seas with carbon-sequestering triffids: not OK, non-reversible.

I like it. A lot.

matt:

What about the importance of making decisions with permanent consequences? In other words, isn't there value in teaching youth and society to live with an understanding that the choices they make will have consequences that are not reversible, and therefore one must attempt to make the 'best' choice under the given circumstances?

I think there are serious problems in communities where nobody puts their nose to the grindstone and makes a decision to change something because they are too afraid of possible risky results. Granted, these communities may become more interested in trying change if they know it can be reversed; but constant backpedaling may stifle community decision-making structures, as well as participation that is productive and facilitates progress.

If everyone grows up with a system of choice-making that allows them to press the back button on a simulation--or even on real-life--how will that alter the types of assumptions people make during planning and design processes, as well as prior to entering those "reversible" processes?

Is this feasible on a human level? People hate to be told they are wrong, or that something they've just invested X amount of time in is actually going to be torn down and replaced with the orange version.

Jamais,
one of the main endeavors of The Natural Step has been to respond to the questions you pose.
In essence, new technologies should be developed within "system boundaries," also known as the "4 System Conditions," about which there is strong agreement within the scientific community, and hundreds of cases of application.

In the sustainable society, nature is not subject to systematically increasing:
1. concentrations of substances extracted from the Earth's crust,
2. concentrations of substances produced by society,
3. degradation by physical means
and, in that society . . .
4. people are not subject to conditions that systematically undermine their capacity to meet their needs.

The systematic violation of any of these 4 principles leads to the destruction of the complex system in which we live.

I suggest you get in touch directly with the person who first developed this concept, Karl-Henrik Robèrt.
Best
Eric Ezechieli

wimbi:

Which one of these principles are we using right now as we modify the earth with coal, oil, nuclear and worst of all, obscene waste of everything?????

I nominate solar thermal power as one quick and predictable way to feel our way forward reversibly.

One thing I would build into this is sustainable ways of obtaining feedback and learning. I think that the reversibility principle has lots of merit, but it needs attention to how one gauges the results of the work.

In a world of massively decentralized infrastructure, it seems to me that there is a potential to use social systems for feedback and learning that would give good, timely information in a way that would allow the deployment to work better, or back off where it needs to back off.

Thinking of the Open Source method of learning and improvement. Networked conversations, gift economy, knowledge sharing and rapid response.

The problem with the Proactionary Principle is all the unknown unknowns (to steal Donald Rumsfeld's rhetoric). I certainly hope your surgeon doesn't follow the Proactionary Principle.

Adam Scott:

I wonder if someone might have made a strong argument before the industrial revolution that the resulting climate change would be reversible. Then I wonder if they even had the capacity to evaluate the possible impacts that might need reversing.

Ultimately, in a situation where there is a decent chance of unforeseen impacts, we should exercise wisdom and prevent that development. The key here is that the most dangerous impacts are almost always undefined at the time the decision is made.

I see the argument for the reversibility principle, but I think we should use only the precautionary approach in circumstances when we cannot identify that ALL potential impacts are reversible. I am sad to say that I think this is 95% of the time.

Who says we have to restrict ourselves to just one principle? They all have problems, especially the precautionary principle, but, there can be very specific cases where any one of them could apply.

For example, some actions and consequences may initially appear irreversible and thus would be excluded if we only follow the reversibility principle. But what if scientific knowledge advances and we figure out a way to make something thought irreversible entirely reversible?

The problem is a fiendishly complicated world where there are always unexpected consequences, both beneficial and harmful, for action, inaction, or actions that initially appear reversible.

This is why I am suspicious of blanket use of the precautionary principle. Inaction, or avoiding entire classes of technology because they are worrisome, also produces unforeseen and worrisome consequences -- damned if you do, damned if you don't.

Hm. Not necessarily the most coherent of comments. Oh well.

My institute is running a conference in Oxford on this topic next week. We are webcasting the plenary sessions, so if you would like to watch at the time or view the archives later, visit:
www.martininstitute.ox.ac.uk/jmi/forum2006/Forum+2006+Webcast.htm

I think I object to 'proactionary' just on the basis that it is an assault on the English language! On the other hand, I think the reversibility principle may be more sensible than both alternatives, since it gives a more central role to uncertainty. It is hard to be precautionary when it comes to the unknown unknowns; there are many decisions, including the manufacture of CFCs, where a technology was first lauded for its inertness and benign chemical characteristics. Precaution is hard to operationalise in complex systems and can collapse into an anti-technology position. Reversibility is a tough design criterion, and it requires that the designers think through all the possible effects, but it may be more effective than either of the alternatives.

The transhumanist embrace of a new techno-optimism is also difficult. All new technological regimes are accompanied by these grand rhetorical flourishes. Few transhumanists think about the subversive or illegal uses of technologies and medicines or about whether past technological leaps have ultimately improved people's lives.

The problem with even reversible planetary-scale engineering projects, I suspect, is not primarily technical, but political.

If a space mirror costs multiple billions, how much political inertia will adhere to its use? We have situations where the science is comparatively unequivocal -- burning oil heats the planet; certain chemicals harm human health -- but where sustained lobbying/PR efforts on behalf of industries which would be impacted by change have prevented the science from being acknowledged as a basis for policy change.

Engineering the planet would demand not only more advanced scientific knowledge than we are yet capable of, but also a completely different relationship between science and politics. I think the latter is far more difficult than the former, and to ignore it is to be living in a dreamworld.

Which is not to say that making use of our better understanding of the planet to intervene at strategic points is a bad idea, only to say that we should be doubly cautious when we think about what "reversibility" actually means in practice.

Many serious problems occur in matters where moral values come immediately to the foreground. For example, war is obviously counter-productive in every way, and is morally unjustifiable, especially with weapons of mass destruction increasing in power and sophistication. Yet no country will turn away from them, and once having engaged in war, they will be used even though the damage done is beyond repair. One wonders how a reversibility policy could be adopted in midstream, so to speak. Ideally, yes. But practically, I doubt it. It seems to me that in this case, the demand must be for complete renunciation of violence as a means to any end, coupled of course with the intensive development of techniques (many and various) for institutionalizing alternative behaviors. We need it badly and we need it now. Can it be done? We won't know till we try.

What I like about the precautionary principle is that it is actually forward thinking. So many technological inventions seem "present" driven, and I wonder whether their inventors would have pushed them had they really projected the potential implications and consequences. I can hardly believe that 19th century Europe really stopped to evaluate whether the emergence of industrialization would adversely affect their environment. In this way the precautionary principle -- or the active PP -- encourages people to responsibly think through the innovation.
