
Monthly Archives

May 31, 2006

Joining the Late 1990s

Okay, so I'm the last kid on the block to do this, and it has long been considered kind of silly... but I went ahead and set up Open the Future shirts and mugs (and a couple of other items) over at CafePress.

At the very least, I can get a new coffee mug to replace my sadly broken BBC cup.

And consider it a social experiment: I know I get a handful of readers -- is anybody interested in swag?

Or: is there a better place to create these kinds of items?

Edit: the very first comment reminds me to say: I've done a bare minimum ($1) zero mark-up on the items, so this is *not* a money-making venture on my part.

Why Am I Doing This?

Readers (and there are a few of you out there, I've seen the server logs) who are familiar with my writing at WorldChanging may be thinking to themselves right about now, "when's he gonna start writing the kinds of stuff he used to?"

Soon. Probably.

Open the Future is a very different beast from WorldChanging. WC has a distinct revolutionary purpose: to change the minds of thousands (and, eventually, millions) of people and to open their eyes to the fact that, while the challenges are great, solutions are possible. OtF, conversely, is more evolutionary in its goals: to give me a place to explore not-fully-fleshed-out ideas, the kinds of subjects and concepts that may turn into something more important and powerful down the road. That I'm doing it in public is simultaneously a bit of exhibitionism -- after blogging at WC for two and a half years, it's now a bit difficult to think in private -- and a chance to get feedback from the clever folks who have found me here.

I do miss some of how I wrote at WC, though, and I can feel the itch to post some interesting links and (hopefully) pithy observations at any moment.

Futurist Matrix Revisited (Again)

David Brin wrote a provocative and thoughtful response to my futurist matrix idea, and posted it over at his blog. Unfortunately, the system he uses -- Blogger -- has once again broken its comment system. Rather than wait to reply, I've decided to post my response to his response here. (David -- this is an updated version of the email I sent.)

The futurist matrix is clearly a work in progress, and the changes have been slow and evolutionary. The main difference between the first and second versions of the matrix is in the terminology, not the concept -- I dropped the word "realist," and replaced it with "pragmatist." More importantly, I tried to make the sub-headings less normative, less apt to appear biased towards one particular option along an axis.

I suspect I'll need to do something similar with "optimist" and "pessimist." The danger of using commonplace terms in a setup like this is that readers' interpretations of the words may not match my use. The present sub-headings of "inclusive success" and "exclusive success or failure" are more accurate than optimist/pessimist, and I'll likely make them the axis labels.

These more expressive terms help to illustrate a seemingly-illogical aspect of the matrix: the combination of ideologically opposed groups in the same philosophical box, such as Marxists and Dispensationalists in the lower-right quadrant. But the matrix is less concerned with a group's ideology than with its eschatology: how do the philosophies see the future unfolding? As Brin points out, neither Marxists nor Dispensationalists would see themselves as particularly pessimistic. But while they may see a happy future world, it's a world limited to the true believers. They may want everyone to become a true believer, but people outside of the circle cannot achieve a successful future.

There is a bigger problem with putting exclusive success and failure in the same box, though, one that Brin gets at with his Paul Ehrlich example: it's a pejorative combination, implying that the two are equivalent. I certainly wouldn't be happy in a Left Behind world (in fact, I'd probably be hunted down by the Tribulation Commandos), but few Dispensationalists would see their own success as a form of failure -- while they would likely see the upper left world as indicative of one where they've lost. Failure becomes an issue of perspective, not objective reality.

For many pragmatists, exclusive success and failure may in fact be equivalent concepts; many (most?) people willing to accept different pathways to positive change would see the success of a limited group of people at the expense of everyone else as a form of failure. Even the doomiest doom-sayers among the peak oil and civilization collapse crowd (e.g., James Howard Kunstler) wouldn't see being right as a form of success, even if pockets of well-prepared survivalists carried on (although they may get a bit of schadenfreude out of saying "I told you so" as the boat sinks).

So perhaps it's better to drop "failure" as a hard term, recognizing that each of the four quadrants would likely be seen as a "failure" outcome for somebody.

Regarding some particular points Brin raises:

  • I do agree with Brin's list of What To Avoid for ideological matrices; in fact, those are pretty much identical to the What To Avoid list elements for making dual-axis scenario sets, too.

  • It's not an accident that the various examples in each box are all folks who "care about the future" -- it *is* a matrix of futurist perspectives, after all.

    I disagree with the argument that groups that dislike or oppose each other shouldn't end up in the same box. If the point of opposition is unrelated to the dynamics of the axes, while the issue arguably connecting them is fundamental to the matrix, it's a completely appropriate structure.

    [As a (very) crude example: imagine a spectrum running from "singularity technologies are inevitable and all-powerful" to "singularity technologies will be haphazard and only marginally transformative." One would put both Ray Kurzweil and Bill Joy at the same end of that spectrum, even though they have radically different visions of what these technologies would actually do.]

    One last item: with regards to this:

    I feel we have to get smarter. Maybe a LOT smarter, before we will be able to deal with AI and immortality and molecular manufacturing and nanotech and bioengineering. Effective intelligence is where we really should be investing research and development. Because if we do get smarter, or make a next generation that is, then the rest of it could be much easier.
    Frankly, when I look at Aubrey de Grey and Ray Kurzweil... and when I look in a mirror... I see jumped up cavemen who want to live forever and get all pushy with the universe and quite frankly, I am not at all sure that cavemen are ready to leap into the role of gods.

    I agree that we need to get smarter and that we need to focus attention on effective intelligence. I disagree, however, that this means we need to pull back. Intelligence evolves with the environment, broadly conceived, and (if William Calvin is right, and I think he is) we get smarter faster when the environmental pressures are the most extreme. Calvin argues, for example, that the measurable improvements in hominid and early human cognitive skills closely correlated with rapid climate shifts.

    In other words, we may not get the intelligence we need if we don't put ourselves in the position of needing it.

May 29, 2006

    Memorial Day

    Andrew Jackson Wickline, my grandfather, the man I was named for, died three years ago, shortly before Memorial Day; a veteran of World War II, he was given a military service on Memorial Day itself, 2003.

    A short while before he died, Grandpa Jack gave me a box of old photos from the war. Over 500 pictures, taken by the company chaplain for the 80th Field Hospital, and offered to the men afterwards; Jack was one of very few who took copies of the pictures. I've scanned a small handful of them, and put them up on the web, but I really need to scan them all.

    The photos are yellowed and clearly showing their age, but they are intact. Will the same be said in sixty years for the pictures we take today? My hard drive is full of images, taken by all manner of digital cameras -- but few have been printed out, and while I have multiple backups, digital media is inherently ephemeral. Formats change; people get sloppy. I have disks with essays from graduate school in formats that I can no longer read. How long until I can no longer read the image files found on some old CD I burned years ago?

    Physical objects are not permanent, and I couldn't share the photos from the middle of the last century so easily without converting them first to digital form. I know the value and power of electronic media. I simply wonder how much of our future's past will be lost when locked into long-discarded formats and devices.

    It is especially incumbent upon those of us who think about the future to remember what has gone before. The future doesn't just happen; events don't emerge fully-formed, like Athena from Zeus' head. The world in which we live is the result of myriad victories and mistakes, chances taken and decisions regretted, paths followed and options ignored, people loved and people forgotten. Too often, we pay attention solely to prominent names, the leaders and celebrities, and give them credit for creating the present. Artifacts like a box of old photos from a long-ago war remind us of how today's world was truly shaped, and the roles that everyday people played in making it come about.

    I look at the people in my grandfather's photos, and wonder: did they know they were remaking the world? Were these simply snapshots to them, vacation photos with an edge, or did they recognize that they were documenting their roles in a monumental political transformation? How would our understanding of the second world war differ if everyone had carried a camera, not just one person out of hundreds, or thousands?

    Under Mars, a site archiving soldiers' photos from the present Iraq War, gives us a hint. For some soldiers, the pictures are simply snapshots, a way to hang onto a moment with friends. For others, they are historical records, filtered not through the eyes of a journalist or through the official accounts, but anchored to their own perspective, their fear and elation and wonder and horror. These are the artifacts of a citizens' history of the world -- if we can remember how to view them.

    Memories are imperfect, and photos -- digital or physical -- have an aura of authority, but are no less subjective. But in the gathering of myriad subjective stories and images, a collaborative truth emerges. The more memories that get added to this collection, the more powerful the truth; beware histories that are written solely by victorious leaders.

    My grandfather, Andrew Jackson Wickline, gave me many gifts over the years, but this box of photos is an incredible legacy. Every time I look at them, I sense their gravity and power. I don't know what I'll do with them -- I'm very happy to listen to suggestions -- but I do know that I'll treasure them. They're tangible evidence that history comprises the lives of all of us, not just the great and the famous, and that all of our actions help to shape the world to come.

    May 26, 2006

    Compute Green (or, Even Web Journalism Isn't Fast Enough)

    In mid-April, PC World asked me to write an article on green computing for their online version; by late April, the article was done. By early May, the piece had been edited; and on May 22, the final version (which isn't identical to my final copy, but close enough to be familiar) appeared at the PC World website:

    Green PC

    It's a lightweight piece on using less power and avoiding toxic components, and while it's a bit more "here's what you can buy" than I would have otherwise wanted (and a pithy paragraph on what's coming down the road is nowhere to be found), it's not bad.

    Problem is... it's obsolete.

    EPEAT is the just-announced new standard for green computing and other electronic equipment. Joel Makower gave us the rundown last Sunday at WorldChanging:

    IEEE 1680, as the standard is known, is the first U.S. standard to supply environmental guidelines for institutional purchasing decisions involving desktop and laptop computers and monitors. It offers criteria in eight categories -- materials selection, environmentally sensitive materials, design for end of life, end-of-life management, energy conservation, product longevity and life-cycle extension, packaging, and corporate performance. (Download the standard here in PDF.) The new standard will encourage manufacturers to design their products to be used longer, be more energy efficient, easier to upgrade and recycle, and contain fewer hazardous materials.

    This is good news all around, even if it does make my just-published article look a bit slow. Of course, if the article had gone up when I was done writing it -- on a blog, say -- it would have been timely, and easily updated when the new standard was announced. Back in the print days, a less-than-a-month turn-around from author to reader would have been impressive for a tech monthly. Today, even the compressed publishing model of online magazines can be ponderously slow.

    May 25, 2006

    Synthetic Biology

    The Synthetic Biology 2.0 conference just ended, and Rob Carlson (of open biology fame) and Oliver Morton (author of the terrific and under-appreciated Mapping Mars) attended and blogged the event. Carlson is working on his book on open biology (Learning to Fly: The past, present, and future of Biological Technology), and used his comments about the event to offer up a sample of his book-in-progress. Morton's notes are more extensive (unsurprising, given his day job as a writer/blogger for Nature), and look in some detail at the question of just how the synthetic biology tools would be used.

    So what is synthetic biology? I wrote about it a few times at WorldChanging, and the following description still works:

    [Synthetic biology] is the application of mathematically-driven engineering principles to the construction of novel genetic structures; in contrast, genetic engineering is often a trial-and-error process, with numerous opportunities for and examples of unanticipated results. Many of the reasonable concerns about GMO foods and animals come from this hit or miss aspect of biotech. Biological Engineers have a more systematic approach, and use an increasingly deep understanding of how DNA works to then make microorganisms perform narrowly specified tasks.

    The engineering model underlying synthetic biology goes so far as to include the use of "bio-bricks" as construction elements.

    Synthetic biology specialists (it seems a bit off to call them "synthetic biologists") have managed to create both re-engineered versions of existing single-cell organisms and entirely novel "vesicle bioreactors," objects which display most of the characteristics of life.

    As Carlson notes in his blog, the difference between the Synthetic Biology 1 conference and the Synthetic Biology 2 conference was that the first was all about the science, and the second was increasingly about the money. Synthetic biology is getting awfully close to commercial and potentially practical applications; this means it's getting awfully close to needing some kinds of regulation and scrupulous oversight.

    It could, however, eventually become the organic equivalent of Lego, a way to build bio-objects quickly and safely, for experimentation, education, and occasional practical use. The use of pre-designed modules would go a long way towards keeping the whole process relatively safe. If these bio-Lego came with some kind of Creative Commons or GPL-style license allowing for the distribution of products, you could even imagine a kind of open source synthetic bio movement.

    What's particularly interesting to me about Synthetic Biology, however, is that it's a first draft of what we'll see at the advent of molecular nanotechnology: a simpler, less-capable, model, perhaps, but offering many of the same regulatory and access questions that will emerge when nanofabbers become possible. If we can work out reasonable rules, we're almost certain to apply them to similar future technologies; if we can't, that foreshadows even more difficulty for complex future technologies. How the scientific, engineering, marketing and policy-making communities work together to figure out how to manage the commercial use of synthetic bio will likely have a great deal of influence over how molecular nanotech is regulated. CRN and other interested parties, take note.

    May 24, 2006

    Future Matrix, Updated

    Yesterday's post What's Your Future has gotten a bit of attention, and much of the commentary (especially the discussion following the post itself) has been quite useful and interesting.

    Upon reflection, I think the use of "Realist" to denote the top of the vertical axis is somewhat confusing. I use the term to mean a position/ideology that welcomes compromise and embraces ambiguity; unfortunately, I noticed that a few people seemed to take it to mean "realistic" (or, better yet, "reality-based"). Given what that suggests about the opposite end of that spectrum, people who might feel some sympathy for, e.g., the Optimist-Idealist box would reject that position.

    I'd like to replace the term Realist with Pragmatist.

    To further clarify, by Pragmatist I mean "open to multiple methodologies," and by Idealist I mean "strong preference for a particular methodology." In both cases, "methodologies" is intentionally broad.

    So, as a revised matrix:


    May 22, 2006

    What's Your Future?

    How do you envision the future? Are we on the verge of dystopia? Soon to be transformed by accelerating change? Ready to strap on the jet packs to pick up our food pills? Settling in for a long struggle?

    It struck me recently, while talking with my friend Jacob Davies, that the relative success of WorldChanging and similar projects could be linked to the re-invigoration of a worldview combining optimism (a belief that success is possible, and can be broadly achieved) and realism (a belief that global processes are imperfect and cannot be perfected, and change happens through compromise and evolution). Jacob gave some further thought to this idea, and elaborated a bit on its implications in a comment at the Making Light weblog. The combination of belief sets -- optimism vs. pessimism, realism vs. idealism -- offers us a matrix for describing divergent ways of looking at the future.


    It's important to note first off that there isn't a strict correlation here between politics and foresight worldview. Both premillennial dispensationalists (the Left Behind, "rapture ready" types) and traditional revolutionary Marxists would be situated in the lower-right Idealist-Pessimist box, for example. It wouldn't be hard to find similar pairs of contrasting ideologies for the other boxes.

    Instead, let's populate the matrix with examples of differing approaches to understanding a changing world.

    In the upper left, Optimist-Realist, we can put WorldChanging and its fellow-travelers -- success is possible, but requires a clear understanding of problems and a willingness to adapt to meet changing conditions (use new tools, work with new allies, etc.). I put myself in this category, too (unsurprisingly), and I suspect that a large portion of the new generation of people doing foresight work would call this box home.

    In the upper right, Pessimist-Realist, probably the most familiar manifestation would be the cyberpunk sub-genre of science fiction, where the world is complex, change is messy, and the best we can hope for is staving off the worst of it for our own (likely small) group. As Jacob noted, many traditional environmentalists fall into this box; I'd also put various critics of technology such as Neil Postman or Bill McKibben in this category.

    In the lower right, Pessimist-Idealist, we can find (as noted) the religious revolutionaries, be they Left Behind-type Christians, Caliphate-fixated Muslims, or Third Temple-building Jews, all ready to wash away the unbelievers and enemies in order to transform the world. I would also put the "back to the Pleistocene" Deep Ecologists here, too, the folks who think that the only way to save the planet is to wipe out 9/10ths of the population.

    Finally, in the lower left, Optimist-Idealist, are those who see a transcendent, transformative future available to all. The most visible manifestation of this worldview can be found in those who see the advent of a technological Singularity fixing the world's problems and giving us all near-infinite knowledge and power. I don't put all Transhumanist-type folks here; James Hughes is an excellent example of someone who sees both a potential for technology-driven transformation and the need to work to make sure the benefits extend beyond a small group of elites. But anyone who has read Ray Kurzweil's books The Age of Spiritual Machines and The Singularity is Near knows how readily the Singularitarians can slip into millennialist language.
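    Since the matrix image itself didn't survive, the four quadrants described above can be sketched as a simple lookup table. This is just one convenient representation of the taxonomy, populated with the examples from the preceding paragraphs:

    ```python
    # The 2x2 futurist matrix as a plain dict, keyed by (outlook, method).
    # Quadrant contents are taken from the post's descriptions above.
    FUTURIST_MATRIX = {
        ("optimist", "realist"): [
            "WorldChanging and fellow-travelers",
        ],
        ("pessimist", "realist"): [
            "cyberpunk science fiction",
            "traditional environmentalists",
            "technology critics (Postman, McKibben)",
        ],
        ("pessimist", "idealist"): [
            "premillennial dispensationalists",
            "revolutionary Marxists",
            "back-to-the-Pleistocene Deep Ecologists",
        ],
        ("optimist", "idealist"): [
            "Singularitarians",
        ],
    }

    def quadrant(outlook, method):
        """Return the example groups occupying a given quadrant."""
        return FUTURIST_MATRIX[(outlook, method)]
    ```

    Note how ideological opposites (dispensationalists and Marxists) share a key: the lookup is by eschatology, not politics.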

    For now, this matrix gives us a taxonomy of futurism, but it may prove to be a useful tool for understanding heretofore unexpected alliances (such as the growing anti-technology coalition between some environmentalists and some religious conservatives).

    Where would you put yourself? What does this matrix miss?

    May 19, 2006

    The Spacer Tabi

    David Brin keeps a running tab of the "predictions" he got right in his 1991 novel Earth. He didn't write the book as a piece of forecasting, but has managed to get a variety of things right about how the early 21st century would look.

    It may be time for me to start my own list.

    In 2003's Transhuman Space: Toxic Memes, I wrote about the "Spacer Tabi:"

    Ever since humans moved into space full-time, the quest for comfortable, useful, and attractive clothing for zero gee has been unending. A variety of outfit designs have come and gone over the decades, but one item has stuck around: the tabi. Based on the Japanese split-toe slipper, the so-called "spacer tabi" allows for both comfort when walking in positive-gee environments and the ability to use the crude gripping ability of one's toes in zero gee.
    [...] Spacer tabis come in a wider variety of color and fabric on Earth than they do in space, and have become popular in most urban settings. Most adults in Fourth and Fifth Wave countries have at least one pair of spacer tabis in the closet.

    Today's boingboing brings us this bit of news:

    Space-sneakers like a Japanese toe-sock

    These "space-sneakers," manufactured by Japan's Asics, were designed in response to a Russian cosmonaut's complaint that the space-shoes he'd worn had hurt his feet. These shoes are more like Japanese tabi, a sock with a split toe, and they weigh a mere 130g. The slightly inclined toe is meant to keep the calf-muscle taut in low gravity. The company hopes that Japan's astronaut Takao Doi will beta-test them on his Space Shuttle/ISS mission in 2007.

    I don't know about you, but I'm totally ready to buy a pair.

    It's actually pretty unusual for futurists to get their scenaric elements right. That's not to say that the projections/forecasts are useless. Even "wrong" pieces of foresight are usually wrong in illustrative, useful ways, and get us to keep our eyes open for changes to culture (or technology or politics) that we may otherwise have ignored. Futurist work isn't really about telling people what will happen, but about getting people to anticipate change from a new perspective.

    May 16, 2006

    Alpha-Testing the Participatory Panopticon

    It looks like the first draft version of the participatory panopticon -- the set of technologies allowing individuals to record everything that happens around them, for later playback, analysis, and archiving -- will come not from mobile phones on steroids, but from efforts by the US Defense Advanced Research Projects Agency (DARPA) to increase the information-recall capacity of soldiers in the field.

    The Defense Advanced Research Projects Agency (DARPA) is exploring the use of soldier-worn sensors and recorders to augment a soldier's recall and reporting capability. The National Institute of Standards and Technology (NIST) is acting as an independent evaluator for the "Advanced Soldier Sensor Information System and Technology" (ASSIST) project. NIST researchers are designing tests to measure the technical capability of such information gathering devices.
    [...] The sensors are expected to capture, classify and store such data as the sound of acceleration and deceleration of vehicles, images of people (including suspicious movements that might not be seen by the soldiers), speech and specific types of weapon fire.
    A capacity to give GPS locations, an ability to translate Arabic signs and text into English, as well as on-command video recording also are being demonstrated in Aberdeen. Sensor system software is expected to extract keywords and create an indexed multimedia representation of information collected by different soldiers. For comparison purposes, the soldiers wearing the sensors will make an after-action report based on memory and then supplement that after-action report with information learned from the sensor data.

    Let's see... recording of images and sounds the wearer may not have noticed, but later prove useful? Check. Integration with location-based systems for greater situational awareness? Check. Depiction of the system as a memory assistant? Check.

    The original DARPA proposal goes into more detail about what ASSIST will be trying to accomplish, and it's appropriately ambitious. They clearly recognize that the challenge isn't the hardware -- as the illustration shows, you can cobble together something right now with off-the-shelf cameras and recorders -- but the software that makes sense of the recorded data. Many of the goals described in the DARPA item (check the section starting with "Task 2: Advanced Technology Research") parallel the issues being confronted by Microsoft in its MyLifeBits project and Nokia with its Lifeblog project: interpretation of images; assignment of metadata; ontologies for location, objects and activities; and interfaces for access to and editing of recorded material.

    I wonder if Microsoft is working with DARPA on this; they certainly could be of use to each other.

    It strikes me that we'll probably see the emergence of this kind of technology first in the work of the military and (possibly more likely) the work of first responders. Many police vehicles already have automatic recorders; insisting that officers wear recorders as well isn't a big leap. Firefighters and other emergency-response personnel could wear them for after-action analysis and investigation, as well as for liability reasons (proof that a responder behaved professionally, or that s/he violated protocol).

    If ASSIST works well enough to do a "real" version, I wonder how many soldiers returning from duty will want to have something like that for their regular lives?

    May 13, 2006


    Says security guru Bruce Schneier:

    "The NSA would like to remind everyone to call their mothers this Sunday. They need to calibrate their system."

    That is all.

    May 12, 2006

    Development Intensity: First Draft

    In trying to figure out an answer to yesterday's question -- what's the relationship between quality of life and energy efficiency? -- I discovered that getting a good answer may be more difficult than I had hoped. Although metrics such as Gross National Happiness and the Genuine Progress Indicator offer tantalizing perspectives on the measurement of quality of life and non-economic development factors, their application has been (as far as I could find) fairly inconsistent and far from global. As a fall-back, I took a look at the United Nations Development Program's Human Development Index, part of the annual Human Development Report. The HDI combines measurements of literacy/education enrollment and life expectancy along with purchasing-power-parity-corrected GDP; it doesn't include any measurements of pollution or environmental sustainability, "happiness" or comfort, or creative or innovative output. Although it's a bit more complex than GDP alone, I'm afraid that it still puts too much emphasis on classical economic activity.

    The most recent UN HDI came out in 2005, and covers the year 2003; as a fortunate coincidence, the most recent data on international energy use from the US Department of Energy also goes up to 2003. I ran a direct comparison between the 2003 HDI value and the gross energy consumption per capita figure from the DOE, then plotted the results. Here's the 2003 chart, grouped by the UN's broad development ranking (high, medium and low):



    As a broad rule, energy use per capita increases along with development, hardly a surprise. The handful of countries that fall well outside of this pattern are, for the most part, oil-rich nations (UAE, Saudi Arabia, etc.) that have very high per-capita energy use along with mediocre development scores. This tells us, essentially, that countries with more active economies tend to have better health and education levels, along with higher energy use.

    Looking at a few select countries over a multi-year period, the story gets slightly more complex. Australia, Canada, Sweden, the US, Japan, the UK, Mexico, Brazil, China and India are familiar figures from the various efficiency and intensity explorations I assembled at WorldChanging. The following chart shows the HDI to energy comparison for 1997, 2000 and 2003, with the spot size based on how much "development" each state got per unit of per-capita energy.



    For the low and medium developed countries, the relationship is pretty straightforward: more energy = more development, with higher development levels more "costly" in the energy required.

    The highly developed nations are a bit more of a jumble. All six of the selected countries rank very close to one another in terms of HDI. Japan and the UK (#11 and #15 on the HDI list, respectively), along with Canada (#5), seem to follow the general rule that a higher HDI means more per-capita energy use. Australia (#3 on the HDI list), Sweden (#6) and the US (#10) tell a different story; Australia and Sweden both use less energy per person than the US, while still ranking higher. Moreover, Australia and Canada are both trending up in per capita consumption, while Sweden and the US have sharp downward trends.
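    The "development per unit of per-capita energy" figure behind the spot sizes is just a ratio; here's a minimal sketch of the computation. The country names and numbers are illustrative placeholders, not the actual 2003 UNDP/DOE figures:

    ```python
    def development_per_energy(hdi, energy_per_capita):
        """HDI points obtained per unit of per-capita energy use.

        hdi: Human Development Index value (0 to 1).
        energy_per_capita: gross energy consumption per person,
            in any consistent unit (e.g. million BTU per person).
        """
        return hdi / energy_per_capita

    # Illustrative values only -- not real country data.
    sample = {
        "Country A": (0.95, 350.0),  # high HDI, high energy use
        "Country B": (0.94, 220.0),  # similar HDI, much less energy
        "Country C": (0.60, 30.0),   # medium development, low energy
    }

    for name, (hdi, energy) in sample.items():
        ratio = development_per_energy(hdi, energy)
        print(f"{name}: {ratio:.4f} HDI per unit of per-capita energy")
    ```

    On this measure a country like "B" -- nearly the same HDI as "A" on far less energy -- gets a larger spot, which is exactly the Australia/Sweden-versus-US pattern described above.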

    What does all of this mean?

    In and of itself, not a lot. Aside from confirming some already-expected results, the figures at the high end are too closely tied to GDP -- and too noisy -- to tell us much that's new. Still, it's a good baseline to work from, and as I bring in metrics around use efficiency (energy/GDP) and other quality of life metrics, we can refer back to these charts to help us find the surprises.

    May 11, 2006

    Lifestyle Efficiency

    This post over at WorldChanging, along with this article at New Scientist, got me thinking -- not about the presentation of information, but about energy.

    By and large, I think most of us would agree that a simple total BTU consumed measurement by nation (the image used by New Scientist as an example) is only superficially useful; big countries will easily use more energy in total than smaller countries, even if the smaller countries are more wasteful.

    The next, more complex version is energy use per capita. This is better, but still misses quite a bit: what does each country actually do with that energy?

    "Energy intensity" seems to answer that by comparing energy use not to population, but to GDP. In these posts at WorldChanging, I called this value the "use efficiency of energy," as it tries to show how much use-value you get out of a given amount of power. I know that some environmentalists dislike intensity/use efficiency as a metric, as it makes the US position a bit more ambiguous -- yeah, the US uses a lot of power, but the US does a lot with it, too.

    But GDP sucks as a metric, for a variety of reasons. I've played around a bit with using slight modifications (such as purchasing-power-parity valuations), but it struck me today that it's simply not the right category to examine. We should, instead, look at standardized quality-of-life metrics as the point of comparison to energy use. Lifespan, healthiness, availability of health care, leisure time, Internet access, creative work publications, education levels, voting percentages -- the variety of factors that tell us not whether a society is wealthy, but whether the society is thriving. Most of those metrics wouldn't aggregate in a population-linear way (as GDP more-or-less does), so we'd probably need to compare not to raw energy consumption, but energy consumption per capita.

    There's undoubtedly quite a bit of room for debate about how to measure these various quality-of-life factors, but the goal is uniformity and standardization: even if the metrics combine in somewhat weird ways, the results are usable as long as the combination is applied consistently across the countries being studied.
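One way the proposed comparison could work in practice: min-max normalize each quality-of-life metric to a 0-1 range, average them into a composite "thriving" score, then divide by per-capita energy use. Everything below is a sketch with invented placeholder numbers; the metric names just echo the examples in the post:

```python
# Sketch of a "lifestyle efficiency" metric: composite quality-of-life
# score per unit of per-capita energy. All values are invented
# placeholders for three hypothetical countries A, B, and C.
def normalize(values):
    """Min-max scale a dict of country -> value to the [0, 1] range."""
    lo, hi = min(values.values()), max(values.values())
    return {c: (v - lo) / (hi - lo) for c, v in values.items()}

metrics = {
    "lifespan":       {"A": 80, "B": 72, "C": 65},   # years
    "leisure_hours":  {"A": 30, "B": 25, "C": 20},   # per week
    "net_access_pct": {"A": 70, "B": 40, "C": 10},   # % of population
}
energy_per_capita = {"A": 6000, "B": 2500, "C": 800}  # kgoe/person

scaled = {name: normalize(vals) for name, vals in metrics.items()}
for country in energy_per_capita:
    composite = sum(s[country] for s in scaled.values()) / len(scaled)
    # "Lifestyle efficiency": thriving per 1,000 kgoe per person.
    efficiency = composite / (energy_per_capita[country] / 1000.0)
    print(f"{country}: composite={composite:.2f}  efficiency={efficiency:.3f}")
```

Note what the division does: with these placeholder numbers, country B ends up more "lifestyle efficient" than country A despite a lower composite score, because it gets most of A's quality of life on well under half the energy. That's exactly the kind of surprise this metric is meant to surface.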

    I haven't actually started this study; it just occurred to me today. But I'm wondering if any of you know whether (a) it's already been done, or (b) there's some Very Good Reason why it wouldn't work...

    May 9, 2006

    Days of Futurism Past

    Those who cannot remember the futurist predictions of the past are condemned to repeat them, usually at conferences. That was the mantra running through my head, at least, during the Metaverse Roadmap Project event last Friday and Saturday. This is not to say that the conference, which included technologists, pundits, academics, journalists, and assorted cross-subject thinkers, wasn't worth the time. It was extremely interesting, in fact, and I'm very happy to have been a part of it. But throughout the discussions, I had this eerie sense of being back in 1996, when the web and the popular Internet began to really show promise -- and technologists, pundits, academics, journalists, and assorted cross-subject thinkers all wanted to be the first to proclaim that the revolution was at hand.

    The purpose of the Metaverse Roadmap Project (hereafter MVR) was to begin to sketch out the possible evolution of the broad collection of technologies subsumed under the label of the "3D Web." Most of the discussion centered on the 3D virtual world technologies found in games like World of Warcraft and avatar chat environments like Second Life, but the MVR crew quite rightly included people who work on "geospatial web" technologies, too -- location-aware, information-dense systems that layer onto the visible, "physical" world. These are 3D technologies, too, even if they don't use cartoon people and fantasy places.

    This inclusion of geospatial (or "augmented reality") systems in the metaverse concept allowed the participants to construct a spectrum of scenarios, ranging from the cautiously incremental to the fantastically radical. (I can sum up the latter end of the spectrum in two words: brain implants.) Curiously, the group that fell into the "futurist" affinity group -- me, Esther Dyson, Helen Cheng, Janna Anderson and Randy Moss -- had a strong bias towards the cautious and incremental. I suspect that a great deal of that caution came from having heard technology-drenched proclamations of social revolution before. Fool me once, shame on you; fool me... can't get fooled again. Or something like that.

    Despite our caution, however, we did manage to catch a glimpse of a truly transformative vision. Open Croquet is an open source, peer-to-peer 3D environment system that everyone who got a chance to see it declared to be shockingly cool. Microsoft's Robert Scoble (who was at MVR for the second day) describes it thusly:

    We have just seen a new world. [...]
    This is rough, early-adopterish, but once you see this you realize a new kind of computing experience is coming.
    ...All running P2P. No centralized servers needed. It's remarkable. They showed how you could just "step into" a new virtual world. Just move toward something that looks like a window and you "dive into" that Window and are instantly in a new world. In that new world there would be new people, new things to see.
    Sometimes I pinch myself at what I get to be among the first human beings to experience.

    Scoble isn't exaggerating -- it was simply that cool. You can download the Open Croquet SDK right now; it runs on Mac, Windows and Linux.

    Open Croquet wasn't the only technology demo at MVR, just the flashiest. The variety of tools and ideas kicked around this last weekend in Menlo Park made it very clear that the next decade will see an increasing integration of our virtual existence and our physical lives. In the nearly-certain scenario, this will mean an immersive information environment, accessible wherever and whenever, augmenting and enhancing -- but not replacing -- our day to day experiences. In the more-adventurous version, 3D spaces become a common interface for communication and interaction, putting more of our daily lives into virtual settings, but for largely functional reasons (e.g., working from home).

    I'm really hesitant to go as far as many of my colleagues at MVR; I asked There.com's Betsy Book whether the vision she articulated was meant to portray virtual life as augmentation for physical life, or the physical world as augmentation for our virtual worlds. She answered, "Both," and suggested that a large part of the population will see these synthetic worlds as their real homes. But even if the technology is up to it -- likely, but not certain -- it's hard for me to see the cultural transformation required to make this a reality happen in just a decade.

    Two aspects of virtual/synthetic/metaversal spaces seemed conspicuous by their relative absence. The first was the distributed awareness technologies of "everyware," "spimes," "things that think" and the like; these aren't directly part of the 3D web, but to the degree that the geospatial and augmented reality components are important, these systems will be seen as part of the package. The second was the fabrication and material production technologies exemplified by 3D printers; as Rebang's Sven Johnson has demonstrated, the connection between the physical and virtual worlds isn't simply a matter of creating digital analogues of material goods -- sometimes, we're going to want physical instantiations of virtual products. To the degree that we shift to just-in-time/local-fabrication economies, the use of synthetic environments to design and test prototype goods could become extremely common.

    I may not be ready to buy a homestead in Second Life, but it's pretty clear that, at the Metaverse Roadmap event, I got a glimpse of tomorrow's digital world.

    Added bonus: I got a chance to have a good, long conversation with WorldChanging board of directors Chair (and Global Voices conductor) Ethan Zuckerman -- and event photographer John Swords managed to get a decent shot of the two of us.

    May 5, 2006

    Metaverse Roadmap Underway

    The first day of the Metaverse Roadmap Project is hurtling to its conclusion, and it's been a mixed bag of small group discussions and plenary lectures, all playing blind men around an elephant, groping out what the "metaverse" future could look like. Much of the discussion has been predicated on the concept of a metaverse as a separate place, akin to the original Neal Stephenson concept; I'm not so sure that works, in part because of the uncomfortable echoes of the decade-old concepts of how the Internet would evolve, and in part because of my own bias towards the intersection of location-related virtual information and physical space.

    To that end, one of today's best presentations came from IFTF's Mike Liebhold, discussing the concept of the geospatial web, and how it could evolve. I won't try to describe it here, because Ethan Zuckerman has already done a masterful job of it: Michael Liebhold on building a tricorder - the geographic web. Ethan's semi-live-blogging the event; if you're interested in what's happening, hit his site, ...My Heart's in Accra.

    May 4, 2006

    What's the Opposite of Triage?

    I've been thinking quite a bit lately about how we make long-term decisions. The trite reply of "poorly" is perhaps correct, but only underscores the necessity of coming up with reliable (or, at least, trustable) mechanisms for thinking about the very long tomorrow. Many of the biggest crises likely to face human civilization in the 21st century have important long-term characteristics, and our relative inability to think in both complex and actionable ways about slow processes may be our fundamental problem.

    Whether we're talking about asteroid impact, global warming, introduction of engineered self-replicating devices (biotech or nanotech) into the environment, or radical longevity, we seem stuck in the mindset that says "if it's not a squeaky wheel, it gets no grease." It's a triage mentality -- we're dealing with bloody, awful problems right here and right now, and something that won't affect us for decades is something we can ignore for the moment. The thing is, these aren't the kinds of problems where the cause and the effect happen close together, and they're not the kinds of problems that can be dealt with quickly. If we wait until they're the bloody, awful problems of right here and right now, it's far too late. So why is it so hard to think in the long term?

    Our brains evolved in conditions where individuals would likely live just a few decades, and some of the explanation for why it's so hard for us to think long-term comes from that. We may not be wired to do so easily, and teaching ourselves to think creatively about the future might be as difficult as training any other kind of behavior that runs against biological pressures. If this is so, it would suggest that long-term thinkers may end up a kind of "monk," disconnected from the everyday world, potentially given respect and support but rarely completely understood by society at large.

    It could also be a function of the relatively rapid pace of technological innovation. This would have two big repercussions: the first is that we become accustomed to thinking of present-day problems as simply being a matter of engineering -- we may not be able to do X now, but surely we'll come up with a way to do it cheaply and easily in The Future, so why worry?; the second is that we are often burned by attempts to "predict" the future of technology, and find the pace of change a bit overwhelming. If so, this suggests that better thinking about longer-term problems is a process issue, and a better methodology would potentially work well.

    A lot to mull on here, and I don't have good answers yet.

    May 2, 2006

    Climate, Cancer and Changing Minds

    Can smoking cause lung cancer? Yes. Is any given case of lung cancer caused by smoking? No way to know. The complexity of cause-and-effect is such that, while we can be certain of a strong connection between smoking and lung cancer, we can't be certain that this connection will be true of individual cases. There are plenty of people who smoke who never develop lung cancer; there are numerous cases of lung cancer in people who never smoked and never lived or worked with smokers. These examples don't undermine the scientific conclusions; they only reinforce the difficulty of charting precise causal relationships in a complex environment.

    The same can be said of the relationship between climate disruption and weather disasters such as strong hurricanes or the massive floods in Europe over the last week or so. Can global warming cause weather disasters? Yes. Is any given disaster caused by global warming? No way to know. This parallel between the smoking-cancer connection and the global warming-weather disaster connection is worth keeping in mind as we look for ways to communicate the dangers the planet faces to broad audiences.

    It's not hard to find thoughtful observers lamenting the difficulty of getting people to understand what's happening to the climate when the cause-and-effect relationships are complex and slow-moving, and when scientists are so cautious. You'll find few if any reputable scientists who will say that global warming caused Hurricane Katrina last year. Carbon industry lobbyists and their dupes pounce on that scientific caution about a given example as a sign that the broader connection between global warming and weather disasters is uncertain.

    But it wasn't too long ago that cigarette lobbyists and the pseudo-skeptic crowd made the same kinds of claims about smoking and cancer. For a while, that worked, and it wasn't hard to find politicians and citizens willing to accept the industry's perspective. But as the public grew more comfortable with the idea of a complex, long-term result from current behavior, and the evidence grew for the big-picture smoking-cancer connection -- even while the cause-and-effect for a given example could be no more certain -- the culture (in the US) shifted, and the cigarette industry lobbyists stopped trying to undermine the science and started trying to hold off lawsuits.

    The public response to global warming isn't quite at that point yet, but we're moving in that direction. The carbon industry voices trying to plant doubt about climate science are dying down, replaced by voices arguing, in effect, that global warming's not that big of a deal, can be adapted to more readily than stopped, and that we should, in effect, just lie back and enjoy it. They are still fighting any suggestion that weather disasters are linked to global warming, however, as they need to hold that line as long as possible. Once it falls -- once the public becomes willing to accept that global warming can cause weather disasters, even if any single disaster can't be definitively traced to atmospheric carbon overload -- the gates are open to lawsuits and economic ruin for the companies that enabled the environmental ruin.

    The people at the forefront of the effort to build a public consensus around fighting global warming should study the history of the anti-smoking fight. Somehow, the anti-smoking movement managed to convince a broad majority of the American public that a complex problem, without certainty in individual cases, and with a cure still a long way off, needed to be stopped as rapidly and as aggressively as possible. What did the smoking crusaders do right, what did they do wrong, and what could we do better in the new media environment? How did they trigger the necessary cultural shift? Was there a catalytic moment, or was this an avalanche of pebbles, an overwhelming multitude of small, personal changes?

    Bruce Sterling -- among many others -- has long compared the carbon industries to the smoking industry, in terms of how the public mood can change. One year, doctors are happy to advertise for your product; the next, you're reviled as a source of misery and decay. Oil companies aren't quite there yet, but it's not far off. The broad disgust leveled at the outgoing ExxonMobil CEO's retirement package -- which began before the recent run-up of gas prices -- is just one example of how the public mood is shifting to see these industries as criminal and dangerous. It may well be that the avalanche is already underway.

    May 1, 2006

    Remaking the World

    My friend J. Eric Townsend posted a truly thought-provoking essay on his design blog, All Art Burns. In "On the Path to a Spime-full Future," Eric talks from a designer's perspective on what it would take to transition to a world of everyware (or spimes, in Bruce Sterling's pithier but less euphonious phrasing). He focuses on the concept of "spime retrofit modules," a kind of proto-spime that would give everyware-like functionality to previously dumb objects.

    The arbitrary line I draw between a proto-spime and a spime is that of design intent. A proto-spime was not intended to have spimelike behavior when it was initially conceived and designed; a real spime has intent in the initial conception and design. Compare this to early portable personal computers and modern laptops: Early portable computers were PC-ATs smushed into portable cases while modern laptops are not only designed and built on the plan of portability but often contain features unique to portable devices or lack those found in non-portable devices. [...]

    Initially, SRM’s can be easily attached to or installed in existing items that their humans want to know more about (or will soon discover they want to know more about). Some of these items might not be worth redesigning as proper spimes while others might be more than useful with an embedded SRM.

    Once we’ve learned a few lessons with proto-spimes we’ll be able to include the other side of spimes — data collection and management — in the iterative development process of spimes and SRMs.

    Eric then goes on to discuss the kinds of users who would be most likely to adopt SRMs. This is an incredibly important question, but is one that can easily be swept aside in discussions of signalling protocols and hardware formats. Adam Greenfield gets at it too in Everyware, and the fact that this discussion of a distributed awareness scenario is focusing on user requirements and concerns is a strong indicator that we're on the right track with this.

    Adam is currently winding down a conversation at the Well, over at the Inkwell free-to-the-public conference. I was enormously pleased to see that Adam responded in detail to my first iteration of the distributed awareness quadrants in the previous post; I will bow to his argument that "everyware" would encompass all four of the quadrants, although I do think the focus in the book is primarily on the extimate/watching us category.

    Jamais Cascio

    Contact Jamais  •  Bio

    Co-Founder, WorldChanging.com

    Director of Impacts Analysis, Center for Responsible Nanotechnology

    Fellow, Institute for Ethics and Emerging Technologies

    Affiliate, Institute for the Future


    Creative Commons License
    This weblog is licensed under a Creative Commons License.