May 8, 2010


Keep Calm, Civilization


January 21, 2010

The Return of El Niño

I've been awakened several times this week at 4am by 30+ mile-per-hour winds ripping through the bushes in the backyard, pushing the soaked metal table around on the stone patio. The rain is loud, but the wind somehow more disturbing, foreboding. And there's at least another week more of this to come.

California (and the western US as a whole) needs the rainfall, to be sure, but the intensity of the inundation in an El Niño cycle can itself be destructive -- flooding, mudslides, trees and power lines blown down, and so forth. California natives (like me) often joke about local news turning half an inch of rainfall into an OMGSTORMWATCH'010!!! environoia event, but when we're looking at getting close to a half-season's worth of rain over the course of a couple of weeks, the hyperbole is almost warranted. And rainfall arriving in torrential bursts doesn't soak in and store up as readily as slower, more spread-out showers.

And so our weather becomes a metaphor: we need the rain; the rain arrives, but it does so in a way that doesn't actually help much, and undermines other aspects of our lives. Sound like anything else going on these days?

Maintaining optimism when the storm is approaching its peak is difficult, at best. It's easy to fall victim to the 4am darkness. And, just maybe, it's good to let ourselves have that moment of despair. It's the despair, the fear, the sorrow that lets us truly appreciate the opportunities to act that will eventually come. The calm, clearing skies never look so good as they do after a terrifying storm; the tree limbs and broken fences littering the streets confirm the power of the wind and the rain, but in the breaking sunlight seem less like a nightmare made real, and more like a challenge to be cleared.

November 2, 2009

Resilience Fail (updated)

Quick question: where does this URL go?

How about this one?

Would you have guessed that the first goes to a Computerworld article about business-appropriate avatars, and the second goes to the previous post on Open the Future?

The use of URL-shortening services is a classic example of short-term need trumping long-term resilience. Shortened URLs:

  • are not human-readable, and even the versions with user-generated mnemonics are little better than crude tags;
  • provide no contextual clues that would offer a way to find the information later (if the article has expired, for example) by looking up relevant keywords or related concepts;
  • rely on the continued presence of the particular shortener -- any downtime or disappearance kills potentially millions of links.

That is, URL-shorteners violate three key principles of resilient design: they offer no transparency, no redundancy, and no decentralization. They're classic single points of failure.

As a result, shortened URLs have little or no reference or archival value. A dead short URL is far worse than a dead standard URL, in fact, because (a) you have no way of getting contextual meaning, and (b) you can't even go look up the address on the Internet Archive. This is a real problem for those of us who think of the Internet as a tool for building knowledge. For better or for worse, services such as Twitter have gone from being ephemeral conversation media to being used as tools of collaborative awareness about the world. We can no longer assume that a link in a short message is of only transient value.

Yet many of us (including me) rely heavily on shorteners when using URLs "conversationally," such as on Twitter or in an instant message chat. They take far fewer characters than a typical URL; in length-limited media such as Twitter, that's a critical advantage.

So, in the immortal phrase, what is to be done?

Given that the need for URL shortening will remain as long as we use character-limit media such as Twitter or SMS, I can think of a few steps that would help to return some of the information resilience to the system:

  • Embed shortening "behind the scenes" in Twitter and the like, so that senders just enter a full URL, and recipients see the full URL whenever possible. The full URL should show up on the web version, so that the real address gets archived.
  • Google, Bing, Yahoo, and the other search engines should auto-translate any shortened URLs they stumble upon when indexing pages, so that at the very least the cached version contains the full address. The Internet Archive should definitely be doing this.
  • All URL-shortening services should agree to make the records of short URL -> full URL links available to search and archival sites, under appropriate privacy conditions (e.g., all names/IP addresses of users stripped out, data only available if the company goes under, data only available after five years, users can choose to allow the URL link to expire).

Any of these would be an enormous step forward, and the combination would make for a much more resilient system. Admittedly, all of these steps require a bit of coding work, and aren't going to be implemented overnight. However, nobody said resilience was easy -- just necessary.
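To make the third step concrete, here's a minimal sketch of what an archive could do with a published short-to-full mapping. The domain, short codes, and export format are all hypothetical, invented for illustration; the point is only that once the mapping is public (with user identifiers stripped), expanding dead short links in archived text becomes trivial:

```python
# Hypothetical sketch: a shortener exports its short -> full mapping
# (user names/IP addresses already stripped), and an archive uses it
# to expand shortened links found in stored pages.
import re

# Invented example data -- the domain and code are not a real service.
PUBLISHED_MAP = {
    "http://short.example/a1b2": "http://www.example.com/2009/11/resilience-fail.html",
}

SHORT_URL = re.compile(r"http://short\.example/\w+")

def expand_links(text, mapping):
    """Replace any known shortened URLs in text with their full targets.
    Unknown short URLs are left untouched."""
    return SHORT_URL.sub(lambda m: mapping.get(m.group(0), m.group(0)), text)
```

With that in place, an archived tweet reading "see http://short.example/a1b2" can be restored to carry the full, contextual address even after the shortener itself disappears.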

October 12, 2009

Danger, Danger!

Microsoft/Danger/T-Mobile to millions of Sidekick users: Whoops.

Short version: Microsoft (who now owns Danger, the makers of the Sidekick) decided to migrate data from one storage network to another. That migration failed, and corrupted the data. Okay, annoying, so restore from the backup, right?

Wrong. No backups. None. Zero. El zilcho.

So millions of Sidekick users awoke this past weekend to find that all of their data are gone -- or, in the best scenario, the only data they have are the most recent stuff on the Sidekick itself, and if they let the device power down, they'll lose that, too.

You can't say I didn't warn you.

January 19, 2009 -- "Dark Clouds":

    Here's where we get to the heart of the problem. Centralization is the core of the cloud computing model, meaning that anything that takes down the centralized service -- network failures, massive malware hit, denial-of-service attack, and so forth -- affects everyone who uses that service. When the documents and the tools both live in the cloud, there's no way for someone to continue working in this failure state. If users don't have their own personal backups (and alternative apps), they're stuck.

    Similarly, if a bug affects the cloud application, everyone who uses that application is hurt by it. [...]

    In short, the cloud computing model envisioned by many tech pundits (and tech companies) is a wonderful system when it works, and a nightmare when it fails. And the more people who come to depend upon it, the bigger the nightmare. For an individual, a crashed laptop and a crashed cloud may be initially indistinguishable, but the former only afflicts one person and one point of access to information. If a cloud system locks up, potentially millions of people lose access.

So what does all of this mean?

My take is that cloud computing, for all of its apparent (and supposed) benefits, stands to lose legitimacy and support (financial and otherwise) when the first big, millions-of-people-affecting failure hits. Companies that tie themselves too closely to this particular model, as either service providers or customers, could be in real trouble.

And what do we see now? "Microsoft's Danger Sidekick data loss casts dark cloud on cloud computing." "Microsoft's Sidekick data catastrophe." "Cloud Goes Boom, T-Mo Sidekick Users Lose All Data."

Okay, it's easy to blame the failure to make backups for this disaster. But the point of resilience models is that failure happens. A complex system should not be so brittle that a single mistake can destroy it. Here's what I wrote back in January about what a resilient cloud could look like:

    Distributed, individual systems would remain the primary tool of interaction with one's information. Data would live both locally and on the cloud, with updates happening in real-time if possible, delayed if necessary, but always invisibly. All cloud content should be in open formats, so that alternative tools can be used as desired or needed. Ideally, a personal system should be able to replicate data to multiple distinct clouds, to avoid monoculture and single-point-of-failure problems. This version of the cloud is less a primary source for computing services, and more a fail-safe repository. If my personal system fails, all of my data remains available and accessible via the cloud; if the cloud fails, all of my data remains available and accessible via my personal system.

It may not be as sexy as everything-on-the-cloud models, and undoubtedly not as profitable, but a failure like this past weekend's Microsoft/Danger fiasco -- or the myriad cloud failures yet to happen (and they will happen) -- simply wouldn't have been possible.

April 22, 2009

Scale-Based Antitrust

Crypto-blogging in a meeting, but...

One of the questions that came up after my "Resilience Economy Model" post was precisely how we could prevent businesses from becoming "too big to fail." A report on NPR's Marketplace offers one suggestion: scale-based antitrust. Bob Moon interviewed Zephyr Teachout:

    MOON: So how do you augment these antitrust laws to apply to the banks?

    TEACHOUT: You could pass a new act, which would join the other antitrust law acts -- Clayton and Sherman acts. This new law would look at size as an independent variable. That could be a combination of looking at profit, assets or market value but would have a default rule that says no company can become larger than a certain size depending on the industry.

Teachout wrote a piece for The Nation, "Trustbusting 2.?" that spells out this argument in more detail.

I haven't had a chance yet to think this through, but it strikes me as a promising direction.

April 20, 2009

Next Big Thing: Resilience

A few months ago, the editors at Foreign Policy magazine asked me to contribute to a section on the "Next Big Thing." My piece, on resilience, is now on the FP website -- and will appear in the May/June edition of the print magazine. [Link updated to local PDF copy.]

Again, it'll be familiar to regular readers -- I think we're still at the point where it's important to introduce new audiences to the concept.

    How can we live within our means when those very means can change, swiftly and unexpectedly, beneath us? We need a new paradigm. As we look ahead, we need to strive for an environment, and a civilization, able to handle unexpected changes without threatening to collapse. Such a world would be more than simply sustainable; it would be regenerative and diverse, relying on the capacity not only to absorb shocks like the popped housing bubble or rising sea levels, but to evolve with them. In a word, it would be resilient.

I'm particularly happy to discover that the other contributors to this issue include Juan Enriquez (Next Big Thing: A New You), Martin van Creveld (Next Big Thing: Anger Management), and Alvin Toffler (Next Big Thing: A Bigger Big Bang?).

That will likely be the last general, intro-to-resilience piece I do. Time to focus on what it means.

April 10, 2009

Dark Optimism

Shaun Chamberlin has written a book that, in my view, absolutely needs to be read by anyone who follows this blog.

The Transition Timeline: For a Local, Resilient Future combines a scenario-based look at how we as a global society can respond to the combination of global warming and peak oil with a practical manual for building the kind of world that can successfully manage such a crisis.

I saw a late draft of the work, and Shaun asked me for my reaction. Here's what I wrote, and I'm happy to see that it's included in the book's lengthy list of endorsements:

    It's been said that pessimism is a luxury of good times; in bad times, pessimism is a death sentence. But optimism is hard to maintain when facing the very real possibility of planetary catastrophe. What's needed is a kind of hopeful realism -- or, as Shaun Chamberlin puts it, a dark optimism.

    In The Transition Timeline, Chamberlin offers his dark optimism in the form of a complex vision of what's to come. He imagines not just a single future, or a binary "good tomorrow/bad tomorrow" pairing, but four scenarios set in the late 2020s, each emerging from the tension between two critical questions: can we recognize what's happening to us, and can we escape the choices and designs that have led us to this state? Chamberlin demonstrates that only an affirmative answer to both questions will allow us to avoid disaster -- and that's where the story he tells starts to get good. The Transition Timeline isn't another climate jeremiad, but a map of the course we'll need to take over the coming decade if we are to save our planet, and ourselves.

    The Transition Timeline is a book of hopeful realism, making clear that the future we want remains in our grasp -- but only for a short while longer.

Buy this book.

January 19, 2009

Dark Clouds

Cloud computing: Threat or Menace?

I did some sustainability consulting recently for a major computer company. We focused for the day on building a better understanding of their energy and material footprint and strategies; during the latter part of the afternoon, we zeroed in on testing the sustainability of their current business strategies. It turned out that, like many big computer industry players, this company is making its play in the "cloud computing" field.

("Cloud computing," for those of you not up on industry jargon, refers to "a style of computing in which resources are provided 'as a service' over the Internet to users who need not have knowledge of, expertise in, or control over the technology infrastructure." The canonical example would be Google Docs, fully-functional office apps delivered entirely via one's web browser.)

Lots of big companies are hot for cloud computing right now, in order to sell more servers, capture more customers, or outsource more support. But there's a problem. As the company I was working with started to detail their (public) cloud computing ideas, I was struck by the degree to which cloud computing represents a technical strategy that's the very opposite of resilient, dangerously so. I'll explain why in the extended entry.

But before I do so, I should say this: A resilient cloud is certainly possible, but would mean setting aside some of the cherished elements of the cloud vision. Distributed, individual systems would remain the primary tool of interaction with one's information. Data would live both locally and on the cloud, with updates happening in real-time if possible, delayed if necessary, but always invisibly. All cloud content should be in open formats, so that alternative tools can be used as desired or needed. Ideally, a personal system should be able to replicate data to multiple distinct clouds, to avoid monoculture and single-point-of-failure problems. This version of the cloud is less a primary source for computing services, and more a fail-safe repository. If my personal system fails, all of my data remains available and accessible via the cloud; if the cloud fails, all of my data remains available and accessible via my personal system.
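The architecture described above -- local copy as primary, writes replicated to several independent clouds, reads falling back to any surviving replica -- can be sketched in a few lines. This is purely illustrative; the class and its dict-like "cloud" backends are invented here, not any real service's API:

```python
# Illustrative sketch of the fail-safe cloud model: the local system is
# primary, every write replicates to multiple independent backends, and
# reads survive the loss of either the local copy or any given cloud.

class ResilientStore:
    def __init__(self, clouds):
        self.local = {}       # primary, always-available local copy
        self.clouds = clouds  # several independent dict-like backends

    def write(self, key, value):
        self.local[key] = value
        for cloud in self.clouds:
            try:
                cloud[key] = value  # replicate (delayed sync/retry elided)
            except Exception:
                pass                # a dead cloud must not block local work

    def read(self, key):
        if key in self.local:       # personal system healthy: read locally
            return self.local[key]
        for cloud in self.clouds:   # local copy lost: any replica will do
            try:
                return cloud[key]
            except Exception:
                continue
        raise KeyError(key)
```

Because the data lives in open formats in every replica, losing any single component -- including the local machine -- degrades nothing but latency.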

This version of cloud computing is certainly possible, but is not where the industry is heading. And that's a problem.

Continue reading "Dark Clouds" »

January 2, 2009

Uncertainty and Resilience

Ecotrust has launched People and Place, a webzine looking at the relationship between humankind and its environment. P&P's inaugural issue features an article on resilience by Brian Walker of the Resilience Alliance, "Resilience Thinking." The editor at P&P asked me to write a companion essay -- "Uncertainty and Resilience" -- and it's now available on the site.

    In my work as a futurist, focusing on the intersection of environment, technology and culture, the concept of resilience has come to play a fundamental role. We face a present and a future of extraordinary change, and whether that change manifests as threat or opportunity depends on our capacity to adapt and remake ourselves and our civilization -- that is, depends upon our resilience.

My piece looks at how defaulting to least harm (or graceful failure, as I've called it elsewhere) and foresight are useful additions to the model of resilience that Walker proposes.

Resilience seems to be my theme of the moment. It's appropriate for the times, I suppose. When things seem to be falling apart, it's helpful to remind ourselves that we have ways to endure.

December 6, 2008

Catastrophic Risks Presentation


...In case anyone is curious, here are the slides I used at the Global Catastrophic Risks conference in Mountain View, California, last month. Enjoy.

I don't tend to use many slides with text, so this is more to get a sense of how I give a talk than what I say. (Updated: Replaced slideshare file with one that doesn't require two clicks for each slide.)