
Monthly Archives

January 28, 2009

New Geoengineering Study, Part II

LentonandVaughn.png

The article "The radiative forcing potential of different climate geoengineering options" is now out and available for download and discussion. As expected, it offers one of the first useful comparisons of different geoengineering techniques.

(It should be noted that the accuracy of the measurements and predicted effects of the various proposals is likely to be moderate at best; the value comes from having a clear comparison using the same modeling standards for each approach.)

In the paper, Tim Lenton and his student Naomi Vaughan, of the Tyndall Centre for Climate Change Research and the University of East Anglia, UK, focus strictly on the radiative impact of geoengineering -- that is, how much heat absorption is prevented -- and don't examine costs or risks. The goal here is to help figure out the "benefit" half of the cost-benefit ratio. Lenton and Vaughan have another paper (to be published later this year) taking a look at the cost side, and that will be just as important as this one.

Lenton and Vaughan split geoengineering proposals into two categories:

  • Shortwave options either increase the reflectivity (or albedo) of the Earth or block some percentage of incoming sunlight. These include megascale projects like orbiting mirrors and stratospheric sulphate injection, as well as more localized and prosaic methods like white rooftops and planting brighter (i.e., more reflective) plants.
  • Longwave options attempt to pull CO2 out of the atmosphere in order to slow warming. These include massive reforestation projects, "bio-char" production and storage, various air capture and filtering plans, and manipulation of the ocean biosphere with iron or phosphorus fertilization.

Lenton and Vaughan run the numbers on the likely maximum results from each of the methods, working under the assumption of simultaneous aggressive carbon-cutting efforts. One thing that becomes immediately clear is that no form of geoengineering would be enough to avert catastrophe if emissions aren't cut quickly. Unfortunately, they also argue that even aggressive carbon emission cuts won't be enough to forestall disaster.

So, what works?

Here's the first-cut analysis in chart form:

geo-comparison-chart.png

Most effective (again, strictly in terms of radiative impact) over this century would be space shields, stratospheric aerosol injection, or increasing cloud albedo with seawater spray. Any one of these, alone, could be enough to counteract global warming when combined with aggressive carbon emission reductions.

Next would be increasing desert albedo (essentially putting massive reflective sheets across the deserts of the world) or direct carbon capture and storage (ideally capturing the carbon from burning biofuels). These would slow a global warming disaster, but wouldn't necessarily be enough to stop it. Biochar, reforestation, and increasing cropland & grassland albedo come in third, roughly half as effective as the previous proposals; the remaining methods would be even less effective, in some cases by multiple orders of magnitude.

And all of these proposals have drawbacks. Space shields would be ridiculously expensive absent later-stage nanofabrication techniques. Stratospheric injection alters rainfall patterns, and any abrupt cessation of albedo manipulation would trigger warming worse than what had been prevented. Laying thousands upon thousands of square kilometers of reflective sheets across the desert is an ecosystem nightmare, while reforestation at sufficient levels to have an impact -- and any kind of biofuel or cropland/grassland modification -- would be incompatible with feeding the Earth's people. Carbon capture has the fewest potential drawbacks, other than cost -- and the fact that it alone wouldn't be sufficient to stop disaster, only delay it.

With the various drawbacks (which Lenton and Vaughan will examine in more detail later this year), why even consider geoengineering?

The explanation comes as an extension of the "bathtub model" Andy Revkin talks about today in the New York Times.

Imagine the climate as a bathtub with both a running faucet and an open drain. As long as the amount of water coming from the faucet matches (on average) the capacity of the drain, the water level in the tub (that is, the carbon level in the atmosphere) remains stable. Over the course of the last couple of centuries, however, we've been turning up the water flow -- increasing atmospheric carbon concentrations -- first slowly, then more rapidly. At the same time, one consequence of our actions is that the drain itself is starting to get clogged -- that is, the various environmental carbon sinks and natural carbon cycle mechanisms are starting to fail. With more water coming into the tub, and a clogging drain, the inevitable result will be water spilling over the sides of the bathtub, a simple analogy for an environmental tipping point catastrophe.

With this model, we can see that simply slowing emissions to where they were (say) a couple of decades ago won't necessarily be enough to stop spillover, if the carbon input is still faster than the carbon sinks can handle.

That said, our efforts at stopping this catastrophe have -- rightly -- focused on reducing the water flowing from the faucet (cutting carbon emissions) as much as possible. But the flow of the water is still filling the tub faster than we can turn the faucet knob (we're far from getting carbon emissions to below carbon sink capacity). Without something big happening, we're still going to see a disaster.

The shortwave geoengineering proposals, by blocking some of the incoming heat from the Sun, are the equivalent here to building up the sides of the tub with plastic sheets. The tub will be able to hold more water, although if the sheeting fails, the resulting spillover will be even worse than what would have happened absent geoengineering.

The longwave geoengineering proposals, by increasing carbon capture, are the equivalent here to clearing out the drain, or even drilling a few holes in the bottom of the tub (let's assume that just goes to the drain, too). The water will leave the tub faster, but you may have to drill a lot of holes to have the impact you need -- and drilling too many holes could itself be ruinous.
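To make the bathtub reasoning concrete, here's a toy simulation in Python. Every number in it is invented purely for illustration -- none comes from Lenton and Vaughan's paper -- but it shows how raising the walls (shortwave) and unclogging the drain (longwave) interact with the inflow:

```python
# Toy bathtub-model simulation -- all numbers are invented for
# illustration and bear no relation to the paper's actual figures.

def simulate(years=100, inflow=10.0, drain=8.0, capacity=500.0,
             wall_boost=0.0, drain_boost=0.0):
    """Return the year the tub overflows, or None if it never does.

    inflow      -- water added per year (carbon emissions)
    drain       -- water removed per year (carbon sinks), slowly clogging
    capacity    -- tub volume before spillover (the tipping point)
    wall_boost  -- extra capacity from shortwave geoengineering (higher walls)
    drain_boost -- extra drain rate from longwave geoengineering (more holes)
    """
    level = 400.0  # the tub starts mostly full
    for year in range(years):
        sinks = max(drain * (1 - 0.005 * year), 0.0) + drain_boost  # clogging drain
        level = max(level + inflow - sinks, 0.0)
        if level > capacity + wall_boost:
            return year
    return None

print(simulate())                        # business as usual: overflows in a few decades
print(simulate(wall_boost=100))          # higher walls: buys time, still overflows
print(simulate(drain_boost=3))           # more holes: fills far more slowly, stays below the rim
print(simulate(inflow=7, wall_boost=100, drain_boost=3))  # combined, plus cuts: no spillover
```

The qualitative result tracks the argument above: shortwave options delay the overflow, longwave options slow the fill, and only the combination -- with the faucet turned down -- keeps the floor dry.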

According to Lenton and Vaughan's study, the longwave geoengineering proposals would be much more effective in the long run -- at millennial scales -- than shortwave, but the shortwave options would have a more immediate impact. It's clear that a combination of the two approaches, coupled of course with aggressive carbon emission reductions, would be best. Build up the sides of the tub and drill a few holes, in other words.

Of all of the proposals, air capture seems to be closest to a winner here, but the costs (and technology) remain a bit unclear, and it will take some time to get up and running in any event. That delay will mean pressure to use one of the shortwave approaches, too. My guess is that stratospheric sulphate injection will be cheaper at the outset than cloud albedo manipulation with seawater, but the latter seems likely to carry fewer potential risks; we'll likely try both, but probably transition solely to cloud manipulation (at least until molecular nanofabrication allows us to do space-based shielding). The various minor proposals -- reforestation, urban rooftop albedo, and the like -- certainly won't hurt to do, and every little bit helps, but alone they are massively insufficient.

Lenton and Vaughan's study is precisely the kind of research that is needed to better understand what the geoengineering options are. As I emphasize here at every turn, this doesn't obviate the need for aggressive reductions in carbon emissions. But it's looking more and more like simply changing our light bulbs, boosting building efficiency, and taking a bike instead of a car, while clearly helpful, will still be insufficient to avert disaster, and even a global shift away from fossil fuels wouldn't come in time to stop the water from spilling over the edges of the tub.

Nature stopped being natural centuries ago. It's been in our hands, under our influence, for much longer than we've been willing to admit. We've got to get smart about how we're reshaping the environment -- and do so before it's too late.

January 27, 2009

New Geoengineering Study

A number of people have sent me links to reports about a new geoengineering study to be released tomorrow (the 28th) in Atmospheric Chemistry and Physics, an open-access science journal from the European Geosciences Union. I haven't seen the study itself yet -- I'll download it as soon as it becomes available -- but from what's been reported, it looks like a good attempt to grapple with the diverse effects from different geoengineering strategies.

In the meantime, this piece at Grist gives you a flavor of environmentalist reaction to the study, while this post by Oliver Morton offers some of the scientific details.

The quick conclusion from reading the various reports about the study is that no single approach to geoengineering is a magic bullet. A combination of programs, with some insolation-blocking (likely with stratospheric sulphates) and some carbon sequestration (through buried charcoal, or "biochar"), seems the most reasonable partner to aggressive emissions reductions.

It's increasingly hard for me to see a climate survival strategy that doesn't involve some geoengineering. Reports like this one are important tools in helping us figure out what we can do, carefully, and what's not worth the risk.

Topsight: January 27, 2009

Okay, I gotta close these tabs...

• Robots!: BoingBoing points to a chart at IEEE Spectrum showing the number of industrial robots per manufacturing worker. Top of the list: Japan, naturally, at 295 robots per 10,000 workers. Singapore is second at 169, and the US has a meagre 85 per 10,000. All interesting stuff, to be sure, but I'd love to see a more general robots-per-person figure, including little devices like Roombas and Pleos. That'll be a fun number to watch over the next decade or two.

• Filter This: One of the posters at Near Future Laboratory -- I believe it was Julian Bleecker -- just eviscerates the prospectus for an upcoming conference on "Pervasive Advertising." Beautifully done.

There’s really not much more of an end game for pervasive advertising than that of the extrapolation of today’s conditions as in the remarkable design fiction of Spielberg’s visual rendering of P.K. Dick’s “Minority Report”. The assemblage of participants in the world of advertising is optimized for itself, which is well-greased linkages between me, my “interests” (to the extent these translate into commerce) and those who have something to gain in economic terms from selling me my interests. It’s optimized to leverage the pervasively networked, databased world and this can only lead to an intensely uninspired, technically awesome, intrusive and annoying world.

He also offers a hundred bucks to the first person to come up with a compelling version of an economically vibrant world without advertising.

• Crooked Charlie: Charlie Stross is now engaged in an extended conversation over at Crooked Timber, chatting about his books -- and the ideas and scenarios they present -- with folks like Ken MacLeod, Brad DeLong, and Paul Krugman. Yeah, that Paul Krugman.

So far, we have discussions of development economics, the rights of robots, and the question of what the onset of a singularity would really look like...

January 22, 2009

Boosting Your Brain for Fun and Profit

A diverse assortment of legal, bioscience, psychology, and ethics academics argue in the pages of Nature for

  • ...a presumption that mentally competent adults should be able to engage in cognitive enhancement using drugs.
  • ...an evidence-based approach to the evaluation of the risks and benefits of cognitive enhancement.
  • ...enforceable policies concerning the use of cognitive-enhancing drugs to support fairness, protect individuals from coercion and minimize enhancement-related socioeconomic disparities.
  • ...a programme of research into the use and impacts of cognitive-enhancing drugs by healthy individuals.
  • ...physicians, educators, regulators and others to collaborate in developing policies that address the use of cognitive-enhancing drugs by healthy individuals.
  • ...information to be broadly disseminated concerning the risks, benefits and alternatives to pharmaceutical cognitive enhancement.
  • ...careful and limited legislative action to channel cognitive-enhancement technologies into useful paths.

You might not think this is a terribly controversial idea, but it is -- remember, drugs are bad, m'kay? As far as I can tell, that's the core of the argument against the use of enhancement biochemistry. If the cognitive enhancement came about through education, through computer use, or even through some less-conventional methods like meditation and yoga, the arguments would be about how to increase access, not prevent it.

The notable element here is that this argument is appearing in the pages of Nature, pretty much the biggest name in science journals. That doesn't mean that such proposals are likely to be adopted any time soon, but it does mean that they're starting to receive mainstream attention -- or, to be precise, more mainstream attention. Recall that TechCrunch reported that cognitive enhancement drugs were becoming all the rage in Silicon Valley. I can't imagine that, in a rougher economic environment, these executives and programmers are going to rely less on such assistance.

Here's a bit of what I wrote about the phenomenon in the latest draft of the Atlantic article (which now looks to have a summer publication date, which means it will go through yet another round of big edits and rewrites).

    This is one way a world of intelligence augmentation emerges. Little by little, people who don't know about drugs like modafinil (or don’t want/can't afford to use them) will find themselves facing greater competition from the people who do. [...]

    But these are primitive enhancements. As the science improves, we could see other kinds of cognitive modification drugs, boosting recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but would end up being used to make users "better than normal." Eventually, some of these may end up as over-the-counter products, for sale at your local pharmacy, or on the juice and snack aisle at the supermarket. Spam email would be full of offers to make your brain bigger, and your idea production more powerful.

    Such a future would bear little resemblance to "Brave New World" or similar narcomantic nightmares; we may fear the idea of a population kept doped and placated, but we're more likely to see a populace stuck on overdrive, searching out the last bit of competitive advantage, business insight, and radical innovation. No small amount of that innovation would be directed towards inventing the next, more powerful, cognitive enhancement technology.

    Cognitive enhancement drugs may be primitive for now, but they're here -- and in increasing use. It would be painfully irresponsible to think that it's a fringe issue, and to continue to pretend that prohibition is a reasonable response.

    The series of proposals in the Nature article strike me as eminently reasonable, cautious, and forward-looking. I'm trying hard not to be cynical about their likelihood of implementation. Maybe they should start working on optimism-enhancement technologies, too.

    January 21, 2009

    TED Talk, 2006

    So, yeah. My talk from TED 2006 is now up on the TED site. (There's a higher-res downloadable MP4 version, as well.) The subject matter is ostensibly Worldchanging, but I use the platform to talk about the environmental participatory panopticon concept, too.

gorewatches2.png

It's an interesting historical artifact. It was the first big presentation I'd ever given, and I had to give it in front of a thousand people, including some fairly high-profile folks. I was incredibly nervous, and it shows (I really needed to stop leaning on the little podium, and would somebody please give me a glass of water!). Moreover, I read the talk, rather than just speak extemporaneously, in part because the time limit was drilled into my head, but mostly because I didn't have the confidence that I'd be able to carry off the presentation without a script. I don't do that any more.

As exciting as it was to have a chance to speak at TED, I almost feel sad about it now. Not because it was a thrown-into-the-deep-end introduction to giving big talks to big audiences -- on balance, given the situation, I did okay -- but because TED generally has a given speaker show up only once. From what I'm told, I give much more engaged, engaging presentations these days, and I have a lot more interesting things to say -- it's just too bad I won't have a chance to do so on the TED stage.

    January 19, 2009

    Grim Meathead Future

So with the (welcome) return of a Democrat to the White House, we get the (not so welcome) return of militia loons. This time around, the right-wing gun clubs seem to be organizing around something called "Molon labe" (usually written in the Greek, "Μολών λαβέ"). It means "Come and take them!", generally interpreted as "over my dead body," and comes from the supposed response of the Spartans to Persia's demand that they surrender their weapons. These guys are organizing and buying up weapons in fear that Obama's going to take their guns away (pure fantasy on their part, of course -- President O has much bigger issues to wrestle with).

    The last time we had this kind of thing, we got bombings of the Olympics and federal buildings, deadly attacks on doctors and radio commentators, and myriad attempts to intimidate government officials.

    This time around, we have the Iraqi resistance as a model for really engaging in some "system disruption."

    Oh, goody.

    Dark Clouds

    clouds2.png

    Cloud computing: Threat or Menace?

    I did some sustainability consulting recently for a major computer company. We focused for the day on building a better understanding of their energy and material footprint and strategies; during the latter part of the afternoon, we zeroed in on testing the sustainability of their current business strategies. It turned out that, like many big computer industry players, this company is making its play in the "cloud computing" field.

    ("Cloud computing," for those of you not up on industry jargon, refers to a "a style of computing in which resources are provided “as a service” over the Internet to users who need not have knowledge of, expertise in, or control over the technology infrastructure." The canonical example would be Google Docs, fully-functional office apps delivered entirely via one's web browser.)

    Lots of big companies are hot for cloud computing right now, in order to sell more servers, capture more customers, or outsource more support. But there's a problem. As the company I was working with started to detail their (public) cloud computing ideas, I was struck by the degree to which cloud computing represents a technical strategy that's the very opposite of resilient, dangerously so. I'll explain why in the extended entry.

    But before I do so, I should say this: A resilient cloud is certainly possible, but would mean setting aside some of the cherished elements of the cloud vision. Distributed, individual systems would remain the primary tool of interaction with one's information. Data would live both locally and on the cloud, with updates happening in real-time if possible, delayed if necessary, but always invisibly. All cloud content should be in open formats, so that alternative tools can be used as desired or needed. Ideally, a personal system should be able to replicate data to multiple distinct clouds, to avoid monoculture and single-point-of-failure problems. This version of the cloud is less a primary source for computing services, and more a fail-safe repository. If my personal system fails, all of my data remains available and accessible via the cloud; if the cloud fails, all of my data remains available and accessible via my personal system.

    This version of cloud computing is certainly possible, but is not where the industry is heading. And that's a problem.
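For the technically inclined, here's a minimal sketch of that fail-safe pattern in Python. The cloud backends are hypothetical stand-ins -- any object with put()/get() methods that raise OSError on network failure would do -- not any real provider's API:

```python
import json
import os

class ResilientStore:
    """Sketch of the 'fail-safe repository' model: the local copy is
    authoritative, and every write is replicated to several independent
    clouds on a best-effort basis. The backends are hypothetical."""

    def __init__(self, local_dir, clouds):
        self.local_dir = local_dir
        self.clouds = clouds   # several distinct providers: no monoculture
        self.pending = []      # failed replications, retried invisibly later
        os.makedirs(local_dir, exist_ok=True)

    def save(self, key, document):
        # 1. Local first: the user can always keep working, even offline.
        with open(os.path.join(self.local_dir, key), "w") as f:
            json.dump(document, f)
        # 2. Replicate everywhere, in an open format (plain JSON).
        for cloud in self.clouds:
            try:
                cloud.put(key, json.dumps(document))
            except OSError:
                self.pending.append((cloud, key))  # degrade gracefully

    def load(self, key):
        # Local copy first; fall back to any cloud that answers.
        path = os.path.join(self.local_dir, key)
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)
        for cloud in self.clouds:
            try:
                return json.loads(cloud.get(key))
            except OSError:
                continue       # that cloud is down; try the next one
        raise KeyError(key)    # every copy unreachable -- true failure
```

The ordering is the whole point: local writes never wait on the network, and no single provider's outage -- or lock-in scheme -- can take your data with it.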

    For big computer companies, the cloud computing model breathes new life into the centralized server markets that were once their bread-and-butter, as they offer high profits on sales and service contracts. Cloud computing doesn't just use a server to store and transfer files, it uses the servers to do the hard computing work, too, in principle making your personal machine little more than a fancy dumb terminal. Companies that already have significant server and bandwidth space, such as Amazon and Google, love the idea because it offers them more ways to lock users in to proprietary formats and utilities. For many of the corporate users looking at cloud services, that's a worthwhile trade-off to avoid having to deal with continuously expanding IT expenditures. Let the cloud companies worry about the software and hardware upgrades; all we need to handle are the dumb terminals.

    Cost-effective, perhaps. But by no means resilient.

    Recall that the core premise of a resilience strategy is that failure happens, and that the precise mode of failure can't necessarily be predicted. Resilience demands that we prepare for unexpected problems so as to minimize actual disruption -- minimize in terms of time, but particularly in terms of how widespread the disruption may be.

    Resilience design principles include: Diversity (or avoidance of monocultures); Redundancy; Decentralization; Transparency; Collaboration; Graceful Failure; Minimal Footprint; Flexibility; Openness; Reversibility; and Foresight. As per Jim Moore's comments on this post, we should add "Spare Capacity" to the list.

    How does cloud computing match up?

On the positive side, the standard (Google Apps) model for cloud computing does well with collaboration, reversibility, and (arguably) spare capacity. Collaboration and reversibility could likely be replicated with standard desktop software, but they're intrinsic to the cloud approach, and fundamental to its appeal.

    Conversely, cloud computing clearly falls well short in terms of diversity, decentralization, graceful failure, and flexibility; one might also include redundancy, transparency, and openness on the negative list.

    Here's where we get to the heart of the problem. Centralization is the core of the cloud computing model, meaning that anything that takes down the centralized service -- network failures, massive malware hit, denial-of-service attack, and so forth -- affects everyone who uses that service. When the documents and the tools both live in the cloud, there's no way for someone to continue working in this failure state. If users don't have their own personal backups (and alternative apps), they're stuck.

    Similarly, if a bug affects the cloud application, everyone who uses that application is hurt by it. As the cloud applications and services become more sophisticated (well beyond word processors and spreadsheets), the ability to pull up an alternative system to manipulate the same data becomes far more difficult -- especially if the failed cloud application limits access to stored content.

    Flexibility suffers when one is limited to just the applications available on the cloud. That's not much of a worry right now, when most cloud computing takes place via normal laptops and desktop computers, able to load and run any kind of application. It's a greater problem in the future envisioned by many cloud proponents, where people carry systems that provide little more than cloud access.

    There's also the issue of how well it fares when network access is spotty or degraded.

    In short, the cloud computing model envisioned by many tech pundits (and tech companies) is a wonderful system when it works, and a nightmare when it fails. And the more people who come to depend upon it, the bigger the nightmare. For an individual, a crashed laptop and a crashed cloud may be initially indistinguishable, but the former only afflicts one person and one point of access to information. If a cloud system locks up, potentially millions of people lose access.

    So what does all of this mean?

My take is that cloud computing, for all of its apparent (and supposed) benefits, stands to lose legitimacy and support (financial and otherwise) when the first big, millions-of-people-affecting failure hits. Companies that tie themselves too closely to this particular model, as either service providers or customers, could be in real trouble. Conversely, if the big failure hits before the cloud has swept up lots of users and visibility, the failure could be a signal to shift towards a more resilient model.

    I would love to use the resilient cloud described above, and I suspect I'm not alone. But who's going to provide it?

    January 15, 2009

    Life on Mars? Why It Matters

    PSP_010219_2020.jpg

    News today from NASA that they've confirmed the presence of methane in the Martian atmosphere, concentrated in three areas (one of the major sources, Nili Fossae, is shown here). For a variety of reasons, this offers the strongest evidence yet that Mars may have an active biology under the surface.

While both geology and biology can produce methane on Earth, inorganic production of methane is generally associated with volcanic and tectonic activity, neither of which has been witnessed on Mars (it's clear that Mars was once geologically active, but there's little or no evidence of current volcanism). In addition, the three source areas each have very different geologies, further complicating the argument that the methane comes from geological activity. Finally, the "serpentinization" process on Earth tends to plug up sources of methane. Indiana University's Lisa Pratt, one of the scientists at today's NASA press conference, argues that while this isn't positive proof that the methane comes from biological activity, it does make the geological argument harder to sustain and the biological argument "more plausible."

An additional bit of complexity is that the methane seems to be leaving the Martian atmosphere faster than the planet's atmospheric chemistry would suggest. A biological process -- with the methane being consumed by microbial life -- would fit the evidence. Follow-up research, unfortunately not possible with the satellites and robots currently working on Mars, should be able to find more definitive (positive or negative) evidence.
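The shape of that inference is simple enough to sketch numerically. The figures below are invented placeholders, not actual Martian measurements -- the point is just how a gap between the predicted and observed lifetimes implies an extra sink:

```python
import math

def remaining(t_years, lifetime):
    """Fraction of methane left after t_years, given an e-folding lifetime."""
    return math.exp(-t_years / lifetime)

chemistry_only = 300.0  # hypothetical lifetime if photochemistry were the only sink
observed = 4.0          # hypothetical much-shorter lifetime actually seen

t = 10.0
print(remaining(t, chemistry_only))  # ~0.97 -- chemistry alone barely dents it
print(remaining(t, observed))        # ~0.08 -- something else is eating the methane
```

On Earth, methane-consuming microbes are one process that produces exactly that kind of extra sink.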

    So what would we have if we determined that there were microbes making methane, and other microbes consuming methane? An ecosystem -- the first ecosystem found someplace other than Earth.

This would be amazingly important for a variety of reasons, not the least of which is that we'd finally have a chance to do comparative ecology.

Everything we know about how ecosystems work, how biology works, comes from a single data point: Earth. And while there's quite a bit of diversity within the Earth's ecology, it's all based on more-or-less the same basic biological stuff. What would Martian microbes have as their equivalent of DNA? Genes? Would there be elements of their biochemistry that would be genuinely surprising?

Then there's the possibility that said Martian microbes would have a biology essentially identical to that found on Earth. The most plausible explanation for that would be that Earth life actually started on Mars (which cooled faster than Earth, so would have started its biology sooner) and was exported via Martian rocks ejected by massive impacts, eventually hitting Earth as meteorites. We've discovered Mars-origin meteorites on Earth, so we know this is plausible.

    So many questions. Hopefully, NASA will get the funding it needs to look for the answers.

    Nanobama

Mike Treder, director of the Center for Responsible Nanotechnology, has offered up an entry to the "Citizen's Briefing Book" section of the Obama transition's Change.gov website. In "Advanced Nanotechnology - What, When, Why" Mike argues that an investment in the development of molecular manufacturing should be seen as part of a larger strategy for dealing with global climate disruption. He lays out a series of suggestions that confront issues around both disruptive technological change and disruptive climate change head-on.

    Set aside an equivalent amount of funding to study the implications of advanced nanotechnology, and to develop and ultimately implement comprehensive strategies for maximizing safety, security, and responsible use on a cooperative international basis.

    Prepare for disaster mitigation. Given that time is rapidly growing shorter for us to slow global warming before irreversible carbon cycle feedbacks kick in, it is essential that we begin preparing soon for the likely impacts of climate change. [...] We may have a decade or two to make ready for what's coming -- how well we use that time to prevent and/or alleviate suffering of our fellow humans (and other species) will show just how humane we truly are.

    I'm often frustrated by people who blithely dismiss crises like global warming with the suggestion that some fantastic future technology will solve our problem, so we don't need to worry now. That's not what Mike argues here -- he's clear that while a nanotech-based solution to global warming would be wonderful to see, we can't depend on it. We need to act now, but be ready to take responsible advantage of new tools as they emerge.

The Change.gov site allows registered visitors to vote up (and vote down) the myriad proposals weaving their way through the system. It takes just a moment to register, and I would encourage you to do so -- and to give Mike's proposal your thumbs-up.

    January 12, 2009

    Hurt Feelings and China

    Even if China isn't likely to be a drop-in replacement for US hegemony in the 21st century, it will certainly be a key player on the international stage. It's useful, then, for futurists to pay attention to the interesting details of how China interacts with other nations.

    Last month, the Atlantic's James Fallows posted a fascinating set of items about the use of the term "hurt the feelings of the Chinese people" in diplomatic communiques from the Beijing government.

    Ah, it "hurt the feelings of the Chinese people." This is the phrase I wait for in every Chinese government statement on matters of international disagreement.

    Yes, there is a real concept buried beneath this boilerplate slogan. The concept might be expressed other places as "an insult to the dignity of our nation," or "disrespect for our people and their principles" or something. But it is generally used quite sparingly in other nations' pronunciamentos, because in the end listeners don't find it that persuasive.

    Joel Martinsen at the Danwei blog lists the number of times the term has popped up in various diplomatic disputes, and with which countries. The biggest inflictor of hurt to the Chinese people? Japan, unsurprisingly, with "hurt the feelings..." used 47 times in official statements. The US came in second at less than half that number, 23 times. NATO, the Vatican, and the Nobel Committee have all hurt the feelings of the Chinese people more than once, as well.

It's a term that apparently has some resonance in Chinese language and culture, but as Fallows notes, it's less persuasive outside China's borders. Outsiders are unlikely to take the phrase with the intended level of anger; the phrase has great potential for massive miscommunication, and it will be interesting to see whether China learns to speak diplomat-ese, or whether the rest of the world has to learn what China means when it says something odd.

    The potential for "hurt feelings" is a two-way street, however.

    Shanghaist links to a video of Chinese elementary students reciting a poem; the video is apparently whipping its way around the Chinese Internet, gaining quite a bit of attention and play in China. The poem includes the following lines:

    Lead: Earthquakes, shifting back and forth like the positions of Sarkozy, with his dirty tricks, trying to shake the great China
    Lead: Did China retreat?
    All: No. The Shenzhou-7 launched. We are victorious!
    Lead: Pathetic Europe will never stop the insurmountable force of our great dynasty
    All: Just the aftershocks from the earthquake would destroy France!

    [...]

    Lead: Do not waver, do not slow down, do not make big changes

    Lead: Do not change the flag, Do not turn back

    All: Step ruthlessly over all anti-China forces

    China and the West both have a lot to learn about diplomatic engagement with each other, it seems.

    January 6, 2009

    Uncertainty, Complexity, and Taking Action

    Here's the video of the second talk I gave at the Global Catastrophic Risks event last November. It's only 15 minutes long -- they just wanted a quick discussion of the day -- but I think it actually turned out okay. I just need to stop doing that weird thing with my hands.


    Uncertainty, Complexity, and Taking Action by Jamais Cascio, posted by Jeriaska on Vimeo.

    This was essentially an extemporaneous talk about what had just happened, so no slides. There is, however, a transcript.

    What would be a Bretton Woods, not around the economy but around technology? Technology is political behavior. Technology is social. We can talk about all of the wonderful gadgets, all of the wonderful prizes and powers, but ultimately the choices that we make around those technologies (what to create, what to deploy, how those deployments manifest, what kinds of capacities we add to the technologies) are political decisions.

    The more that we try to divorce technology from politics, the more we try to say that technology is neutral, the more we run the risk of falling into the trap of unintended consequences.

    (Warning for the sensitive: I drop the f-bomb a few times in this talk.)

    Upcoming Stuff

    My 2009 is already filling up!

    Here's what the calendar holds so far:

    Yeah, that May-June period's gonna be rough.

    I also have something fun tentatively scheduled for July, but I can't talk about it yet.

    January 2, 2009

    Uncertainty and Resilience

Ecotrust has launched People and Place, a webzine looking at the relationship between humankind and its environment. P&P's inaugural issue features Resilience Thinking, an article on resilience by Brian Walker of the Resilience Alliance. The editor at P&P asked me to write a companion essay -- Uncertainty and Resilience -- and it's now available on the site.

    In my work as a futurist, focusing on the intersection of environment, technology and culture, the concept of resilience has come to play a fundamental role. We face a present and a future of extraordinary change, and whether that change manifests as threat or opportunity depends on our capacity to adapt and remake ourselves and our civilization -- that is, depends upon our resilience.

    My piece looks at how defaulting to least harm (or graceful failure, as I've called it elsewhere) and foresight are useful additions to the model of resilience that Walker proposes.

    Resilience seems to be my theme of the moment. It's appropriate for the times, I suppose. When things seem to be falling apart, it's helpful to remind ourselves that we have ways to endure.

    January 1, 2009

    Aspirational Futurism

    One of the secondary effects of the latest set of crises to grip the world is the rise of essays and articles from various insightful folks, laying out scenarios of what the future will look like in an era of limited resources, energy, money, and so forth. Most of these follow a similar pattern: a list of reasonable depictions of a more limited future, and at least one item that seems completely out of the blue.

The best example has to come from James Kunstler's description of the world to come in his "non-fiction" The Long Emergency and his explicitly fictional World Made by Hand. Along with his schadenfreude-soaked claims about the end of suburbia, automobiles, and all things superficial, he comes in with stark assertions that we'll all be making our own music and acting on stage for each other, instead of listening to that damnable recorded "rock-roll" music and the disco and suchlike.

    Yeah, I'm no big fan of JHK's reactionary futurism, but this points to a bigger trend, one that I'm seeing across a variety of political spectra: the vision of an apocalyptic near-future as a catalyst for making the kinds of social/economic/political/technological/religious/etc. changes that the ignorant or deceived masses wouldn't have otherwise made.

    This isn't just Rapturism, where a glorious transformation happens, which may or may not have nasty results for some; in that kind of scenario, an apocalypse isn't a trigger so much as a possible side-effect. In this kind of scenario -- "aspirational apocaphilia" -- the global disaster is a requisite enabler.

It's a notable trend, and one that those of us who consider ourselves ethical futurists need to pay close attention to in our own work. I'd love to see the current crises result in a variety of more sustainable social patterns -- but I have to be careful not to mistake my desire for a useful forecast.

    Jamais Cascio

Contact Jamais • Bio

    Co-Founder, WorldChanging.com

    Director of Impacts Analysis, Center for Responsible Nanotechnology

    Fellow, Institute for Ethics and Emerging Technologies

    Affiliate, Institute for the Future
