
Devolution

I'm posting this via a computer I haven't used for a few months. My current machine, a 2.0 GHz MacBook, began this morning to exhibit the "random shutdown syndrome" that apparently afflicts most of the units made prior to July or August. I've now sent it off to the mothership for a brain transplant.

I had a current backup, so this is at worst a serious annoyance, not an infocalypse. Still, it got me thinking about typical futurist discourse around technology. It's not impossible to find discussions of (for example) nanofactories or everyware sensor networks that assume the systems will be buggy and prone to surprising, sometimes baffling failures, but such discussions are not at all common. Admittedly, it's awfully hard to talk about the failure states of vaporware. Paradigm shifts in the technologies of material fabrication, communication and awareness will undoubtedly be accompanied by significant shifts in what broken or buggy systems look like. All too often, while in the middle of a technological revolution, we'll find ourselves forced to go backwards, forced into technological devolution, simply because the new stuff is broken.

It is entirely possible that the technologies underlying nanofabrication (again, for example) simply will not, cannot break in the ways we're accustomed to with our current high-tech gear. This doesn't mean that they won't manifest their own quirks and failures. In fact, I'd go so far as to say that if the technologies offer such a radical leap that they cannot fail in familiar ways, unexpected and potentially significant new failure modes are inevitable, simply because we will have an imperfect understanding of how these new systems interact with each other and (more importantly) with the remaining, and likely abundant, old-style systems.

Proponents of paradigm-shift technologies are so accustomed to having to demonstrate why the new invention will be utterly transformative that they often (in my experience, at least) neglect to consider how the system will behave in the midst of existing technical, legal and social systems. This leads to technologies that work perfectly well in the lab but fail spectacularly in the dirty, crowded environment of the real world.

The biggest danger with this sort of thinking is that it leads designers to neglect fail-safe and graceful-degradation modes. When we have convinced ourselves that there's no possibility of failure, any failure that does (almost inevitably) occur presents a far, far greater problem than it would have had we planned for the possibility. Instead, technologies should, in Adam Greenfield's words, "default to harmlessness": systems fail, and when they do, they can fail gracefully or they can fail catastrophically. When a system fails, it should do so in a way that does not itself make problems worse.
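
To make that concrete, here is a minimal sketch in Python of what defaulting to harmlessness can look like in code. The fabricator object and all of its methods are hypothetical, invented purely for illustration; the point is the shape of the error handling. Failures the designer anticipated get a graceful-degradation path, and failures nobody anticipated drive the machine to its most passive state instead of letting it keep acting on bad assumptions.

class FabricatorError(Exception):
    """A fault the controller knows how to recognize and recover from."""

def run_job(fab, job):
    try:
        fab.start(job)
        while not fab.is_done():
            fab.step()
    except FabricatorError as err:
        # Known failure mode: degrade gracefully. Pause, report, and wait
        # for a human to decide what happens next.
        fab.pause()
        fab.report(err)
    except Exception:
        # Unknown failure mode: don't improvise. Cut feedstock and power so
        # the machine does nothing at all rather than the wrong thing.
        fab.emergency_stop()
        raise
    finally:
        fab.log_state()

What matters isn't the particular calls, but that the unknown-failure path exists at all, and that it ends in "do nothing" rather than "keep going and hope."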

The belief that successful outcomes are possible does not require us to ignore or wish away failure. Basing plans on perfection adds a great deal of risk with little added reward. Instead, success demands that we address failures directly: preventing them when possible, mitigating them when necessary, adapting to them if we must.

Comments

I agree: in some circles, among the more utopian nano-futurists, this doesn't get discussed enough. Gray goo is very unlikely (rogue, omnivorous replicators are unlikely to arise by chance; they would have to be intentionally designed), but what about the much more believable scenario of a kernel panic in your nanofactory? Not especially dangerous, but damned annoying.

And what of nanofactory vendors who, like Microsoft, favor consumer convenience over good security? Or fabricator vendors who, like Apple, simply seal the unit and hide all failure messages behind a sad Mac icon?

Granted, mission-critical stuff will probably be mostly open source and attended by highly trained technicians, but can we really dismiss the possibility of a nanofactory hooked to the Internet with bad security, churning out "fab spam" and filling your spare room or office with junk until the feedstocks run out?

Current systems are so complex they're essentially untestable.

Fail-safe design is something you'll find only in extremely expensive, life-critical designs and situations... and even then not often, because it can't really be accomplished with COTS components and hardware.

Time-to-market pressures guarantee that the products and systems we get are rife with problems. People want the "latest stuff", and the market delivers it... along with all the latest problems.

For example, the last Intel CPU that worked virtually exactly as advertised was the E-step 80286. Everything since then has been chock-full of nasty errata.

"Default to harmlessness"...maybe we should be looking to make technologies as passive as possible, so that if they do go wrong they simply stop doing anything at all, rather than doing the wrong thing? Or do I have the wrong end of the stick entirely?

"so that if they do go wrong they simply stop doing anything at all"

That would cost more. We do, however, already see the results of not doing it: exploding and burning aftermarket cell phone batteries, for example. The big players are going to do some engineering to put internal fusing, overcharge protection and the like in their units, but the no-name Chinese knockoffs won't care about trifles like that.
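
A software analogue of that kind of protection, sketched below in Python with an invented battery interface, shows the same fail-stop idea: any abnormal condition, including a sensor that won't answer, resolves to the passive state (charger off), never to "keep charging."

import time

MAX_VOLTS = 4.2
MAX_TEMP_C = 45.0

def charge(battery):
    try:
        while not battery.is_full():
            volts = battery.read_voltage()   # may raise if the sensor fails
            temp = battery.read_temperature()
            if volts > MAX_VOLTS or temp > MAX_TEMP_C:
                break                        # out of limits: stop, don't compensate
            battery.enable_charging()
            time.sleep(1.0)
    finally:
        # Every exit path, whether the battery is full, a limit was exceeded,
        # or a sensor raised an exception, ends in the same passive state.
        battery.disable_charging()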

Another example is the ubiquitous six-outlet power strip found near almost any computer. The CPSC has a long list of units that fail in dangerous ways: bursting into flame, overheating, internal breakers that don't work, and so on. Some are simply wired wrong at the factory, with the ground prongs and neutral prongs tied together.

The real question: is the public willing to pay more for a well-engineered, quality unit that is actually safe?

I'm afraid of what the answer will be ;->
