
Monthly Archives

October 30, 2009

New Fast Company: 350

My latest Fast Company piece is up. 350 takes a look at the global movement to limit CO2 concentrations in the atmosphere to 350 parts per million.

If this sounds like I think the 350 movement is a bad idea... I don't. I rather like the simplicity of the meme, and the target is--if difficult--smart. It's not saying "let's keep things from getting too much worse," it's saying "let's make things better." That's the kind of goal I like.

But getting back to 350ppm requires more than a rapid cessation of anthropogenic sources of atmospheric carbon. It requires an acceleration of the processes that cycle atmospheric CO2. Planting trees is an obvious step, but it's slow and actually doesn't do enough alone. We'll also need to bring in more advanced carbon sequestration techniques, such as bio-char. The combination of the two would likely bring down atmospheric carbon levels, given enough time.
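To make the scale of that task concrete, here's a rough back-of-the-envelope sketch. The numbers are assumptions for illustration: roughly 387 ppm as the late-2009 CO2 concentration, and the commonly cited approximation of ~2.13 gigatons of carbon per ppm of atmospheric CO2.

```python
# Rough estimate of the drawdown needed to get from ~387 ppm
# (the approximate 2009 level) back to 350 ppm.
PPM_2009 = 387.0        # assumed 2009 concentration, ppm CO2
TARGET_PPM = 350.0      # the 350.org target
GTC_PER_PPM = 2.13      # ~gigatons of carbon per ppm CO2 (approximation)
CO2_PER_C = 44.0 / 12.0 # molecular-weight ratio of CO2 to carbon

excess_ppm = PPM_2009 - TARGET_PPM
carbon_gt = excess_ppm * GTC_PER_PPM   # gigatons of carbon to remove
co2_gt = carbon_gt * CO2_PER_C         # same amount expressed as CO2

print(f"~{carbon_gt:.0f} GtC (~{co2_gt:.0f} Gt CO2) to draw down")
```

Even setting aside ongoing emissions, that's on the order of hundreds of gigatons of CO2 to pull out of the atmosphere, which is why reforestation alone can't carry the load.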

Unfortunately, we may not have enough time.

I have a habit (good or bad, your call) of trying to tease out the unexpected, and often unwanted, implications of big ideas. It can be frustrating for allies, because it sounds like I'm being critical. What I'm doing is trying to get people to recognize that choices, even good ones, have consequences, and the more we think through the consequences ahead of time, the better off we'll be.

October 26, 2009

Well, You Can Tell By the Way I Use My Walk...

...I've got robot legs, but no mouth to talk.

And again! With the shoving!

Boston Dynamics really likes to abuse its robots.

(For the whippersnappers in the audience who don't get the title reference, here. Yes, the usage is ironic. And get offa my lawn.)

October 21, 2009

Biopolitics of Pop Culture

Join me and a pretty nifty selection of speakers on December 4 at the Biopolitics of Popular Culture event in HOLLYW--er, IRVINE, California.

Popular culture is full of tropes and cliches that shape our debates about emerging technologies. Our most transcendent expectations for technology come from pop culture, and the most common objections to emerging technologies come from science fiction and horror, from Frankenstein and Brave New World to Gattaca and the Terminator.

Why is it that almost every person in fiction who wants to live a longer than normal life is evil or pays some terrible price? What does it say about attitudes towards posthuman possibilities when mutants in Heroes or the X-Men, or cyborgs in Battlestar Galactica or Iron Man, or vampires in True Blood or Twilight are depicted as capable of responsible citizenship?

Is Hollywood reflecting a transhuman turn in popular culture, helping us imagine a day when magical and muggle can live together in a peaceful Star Trek federation? Will the merging of pop culture, social networking and virtual reality into a heightened augmented reality encourage us all to make our lives a form of participative fiction?

During this day long seminar we will engage with culture critics, artists, writers, and filmmakers to explore the biopolitics that are implicit in depictions of emerging technology in literature, film and television.

On the roster are Annalee Newitz (the first time we'll be speaking on the same program!) and my friend and comic book/superhero fiction historian Jess Nevins, along with:

Natasha Vita-More
Kristi Scott
J. Hughes
Mike Treder
Michael LaTorra
RJ Eskow
PJ Manney
Matthew Patrick
Alex Lightman
Edward Miller

(Still not gender parity, but a speaker list that's one-third women is a significant improvement over nearly every other future-focused event I've been to. Good work!)

New FC: Futures Thinking: Asking the Question

My latest Fast Company essay is up, and with it I return to the "Futures Thinking" series. This one, "Asking the Question," looks at how to craft a question for a foresight exercise that's most likely to generate useful results.

It's a subtle point, but I tend to find it useful to talk about strategic questions in terms of dilemmas, not problems. Problem implies solution--a fix that resolves the question. Dilemmas are more difficult, typically situations where there are no clearly preferable outcomes (or where each likely outcome carries with it some difficult contingent elements). Futures thinking is less useful when trying to come up with a clear single answer to a particular problem, but can be extremely helpful when trying to determine the best response to a dilemma. The difference is that the "best response" may vary depending upon still-unresolved circumstances; futures thinking helps to illuminate possible trigger points for making a decision.

As always, let me know what you think.

October 13, 2009

All Money is Fantasy

My friend Stowe Boyd, consultant and provocateur, interviewed me recently for his Future of Money project. The video of that interview is now available at Stowe's blog, /Message.

It's a good conversation, although I clearly haven't learned the blogger video conversation practice of simply talking over the person I'm conversing with. I'm far too polite.

I start with the observation that all money is fantasy. I laugh/sigh when I see "gold bugs" going on and on about how money should be tied to gold, because gold has "real value." The only intrinsic value that gold has relates to how we can use it (in electronics, mostly, or as meal garnish); its utility as money is just as imaginary, just as "fiat," as post-Bretton Woods currency. It's a mutually-agreed upon fantasy. A "consensual hallucination," to steal from Gibson.

Atlantic: Filtering Reality

My second article for the Atlantic Monthly hits the shelves this week, and can now be found online. "Filtering Reality" looks at the political implications of augmented reality. It's a theme I've explored before, but the Atlantic editors asked me specifically to do this topic.

You don’t want to see anybody who has donated to the Palin 2012 campaign? Gone, their faces covered up by black circles. You want to know who exactly gave money to the 2014 ban on SUVs? Easy—they now have green arrows pointing at their heads.

You want to block out any indication of viewpoints other than your own? Done.

This will not be a world conducive to political moderation, nor one where differing perspectives get along comfortably. It won’t take a majority of people using these filters to poison public discourse; imagine this summer’s town-hall screamers on constant alert, wherever they go. Yet this world will be the unintended consequence of otherwise desirable developments—spam filters, facial recognition, augmented reality—that many of us will find useful.

It's a much shorter piece than my previous Atlantic essay, but hopefully readers will find it just as provocative.

(Top Image: by "Gluekit" as illustration for the article; it's a variant of my original artifact image, below.)

Handheld Augmented Reality

October 12, 2009

Danger, Danger!

Microsoft/Danger/T-Mobile to millions of Sidekick users: Whoops.

Short version: Microsoft (which now owns Danger, the maker of the Sidekick) decided to migrate data from one storage network to another. That migration failed and corrupted the data. Okay, annoying, so restore from the backup, right?

Wrong. No backups. None. Zero. El zilcho.

So millions of Sidekick users awoke this past weekend to find that all of their data are gone -- or, in the best scenario, the only data they have are the most recent stuff on the Sidekick itself, and if they let the device power down, they'll lose that, too.

You can't say I didn't warn you.

January 19, 2009 - "Dark Clouds":

Here's where we get to the heart of the problem. Centralization is the core of the cloud computing model, meaning that anything that takes down the centralized service -- network failures, massive malware hit, denial-of-service attack, and so forth -- affects everyone who uses that service. When the documents and the tools both live in the cloud, there's no way for someone to continue working in this failure state. If users don't have their own personal backups (and alternative apps), they're stuck.

Similarly, if a bug affects the cloud application, everyone who uses that application is hurt by it. [...]

In short, the cloud computing model envisioned by many tech pundits (and tech companies) is a wonderful system when it works, and a nightmare when it fails. And the more people who come to depend upon it, the bigger the nightmare. For an individual, a crashed laptop and a crashed cloud may be initially indistinguishable, but the former only afflicts one person and one point of access to information. If a cloud system locks up, potentially millions of people lose access.

So what does all of this mean?

My take is that cloud computing, for all of its apparent (and supposed) benefits, stands to lose legitimacy and support (financial and otherwise) when the first big, millions-of-people-affecting, failure hits. Companies that tie themselves too closely to this particular model, as either service providers or customers, could be in real trouble.

And what do we see now? "Microsoft's Danger Sidekick data loss casts dark cloud on cloud computing." "Microsoft's Sidekick data catastrophe." "Cloud Goes Boom, T-Mo Sidekick Users Lose All Data."

Okay, it's easy to blame the failure to make backups for this disaster. But the point of resilience models is that failure happens. A complex system should not be so brittle that a single mistake can destroy it. Here's what I wrote back in January about what a resilient cloud could look like:

Distributed, individual systems would remain the primary tool of interaction with one's information. Data would live both locally and on the cloud, with updates happening in real-time if possible, delayed if necessary, but always invisibly. All cloud content should be in open formats, so that alternative tools can be used as desired or needed. Ideally, a personal system should be able to replicate data to multiple distinct clouds, to avoid monoculture and single-point-of-failure problems. This version of the cloud is less a primary source for computing services, and more a fail-safe repository. If my personal system fails, all of my data remains available and accessible via the cloud; if the cloud fails, all of my data remains available and accessible via my personal system.
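As a toy illustration of that local-plus-multiple-clouds pattern (not an implementation of any real service; plain directories stand in for cloud providers here, and the class and file names are hypothetical):

```python
import json
import pathlib
import tempfile

class ReplicatedStore:
    """Sketch of the resilient model described above: the local copy
    is authoritative, data is stored in an open format (JSON), and
    every record is mirrored to several independent 'clouds' so no
    single failure destroys it."""

    def __init__(self, local_dir, cloud_dirs):
        self.local = pathlib.Path(local_dir)
        self.clouds = [pathlib.Path(d) for d in cloud_dirs]
        for d in [self.local, *self.clouds]:
            d.mkdir(parents=True, exist_ok=True)

    def save(self, name, record):
        payload = json.dumps(record)  # open format, readable by other tools
        # Local copy first: it stays usable even if every cloud is down.
        (self.local / name).write_text(payload)
        # Then mirror to each cloud; a failed mirror can be retried later.
        for cloud in self.clouds:
            try:
                (cloud / name).write_text(payload)
            except OSError:
                pass  # delayed replication; local copy remains authoritative

    def load(self, name):
        # Prefer the local copy, fall back to any surviving replica.
        for d in [self.local, *self.clouds]:
            path = d / name
            if path.exists():
                return json.loads(path.read_text())
        raise FileNotFoundError(name)

# Demo: lose the "device" entirely, recover from a replica.
root = tempfile.mkdtemp()
store = ReplicatedStore(root + "/local", [root + "/cloud_a", root + "/cloud_b"])
store.save("contacts.json", {"alice": "555-0100"})
(pathlib.Path(root) / "local" / "contacts.json").unlink()  # device dies
recovered = store.load("contacts.json")
```

The point of the sketch is the failure math: with one local copy and two independent replicas, losing any single store (or even two of the three) leaves the data intact, whereas the Sidekick design had exactly one copy in exactly one place.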

It may not be as sexy as everything-on-the-cloud models, and undoubtedly not as profitable, but a failure like this past weekend's Microsoft/Danger fiasco -- or the myriad cloud failures yet to happen (and they will happen) -- simply wouldn't have been possible.

New FC: Singularity Scenarios


My latest Fast Company essay goes up today, talking about the different scenarios for a "Singularity" that arise when you take into account different cultural and political drivers for both before and after the development of greater-than-human intelligence.

Three of the four scenarios (leaving aside "Out of Control") assume that human social intelligence, augmentation technology, and competition continue to develop. And in all three, human civilization -- with its resulting conflicts and mistakes, communities and arts, and, yes, politics -- remains a vital force even after a Singularity has begun.

One key aspect of the three is that they're not necessarily end states. Each could, given the right drivers, eventually evolve into one of the others. Moreover, all three could in principle exist side-by-side.

I noted earlier that I differ from many of the Singularity enthusiasts in my take on what happens before and what happens after a Singularity. I suppose I differ in my take on what happens during one, as well. I don't think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late-followers learn from the reactions and dilemmas of those who had initially encountered the disruptive change.

Ultimately, I think the "singularity" language has outlived its usefulness. By positing that the culmination of certain technological changes is simply Beyond the Minds of Mortal Men, the concept both dismisses (or greatly downplays) the potential of human action to modify the evolution of the technologies, and undermines the stated desire of many Singularity proponents to avoid disastrous outcomes. "If it's completely out of our hands, then why worry?" is not exactly the mantra of a responsible, safe, globally beneficial future.

October 4, 2009

"Singularity Salon" Talk

Here's my slide deck from my talk at last night's New York Futures Salon. This is the raw Slideshare conversion, so a few of the transitions end up as blank slides (and you lose all of the nifty Keynote effects).
The talk was videotaped, and the recording will be available on the net Real Soon Now. I'll post a link when it's available. Overall, the talk went well. Good questions, good crowd (it ended up being considerably more crowded than the early gathering shown below). I'll have more to say in this week's Fast Company piece.
Waiting to begin my talk

Jamais Cascio

Contact Jamais • Bio

Co-Founder, WorldChanging.com

Director of Impacts Analysis, Center for Responsible Nanotechnology

Fellow, Institute for Ethics and Emerging Technologies

Affiliate, Institute for the Future


Creative Commons License
This weblog is licensed under a Creative Commons License.
Powered By MovableType 4.37