
Monthly Archives

April 28, 2006

Everyware, Blogjects and the Participatory Panopticon

I love to watch the future take shape.

For the past few years, I've closely watched the emergence of a set of technologies that make possible constant, widespread, and inexpensive observation and annotation of ourselves and the world. Cheap processors, low-power sensors, and ubiquitous wireless networks are critical elements of a scenario in which we can more readily know and, in the better versions, access the world around us. The key drivers for this emergence are our need to connect with each other and, increasingly, our need to monitor the changes taking place to our environment.

The version of this world that I've followed most closely is what I've called the "participatory panopticon" -- a scenario in which our personal mobile devices watch the world around us, acting both as a back-up memory and as a medium for the capture of our experiences of the world. These are intimate devices, carried by or worn on ourselves, made to serve as adjuncts to our own capacity to observe the space around us. The political version of the participatory panopticon (hereafter PP -- and I know, the need to abbreviate it is a sign that this isn't the right language for the concept) has been around longest, in the form of "sousveillance;" the digital Witness project is its latest example. At the TED 2006 conference, I described an imagined environmental version of the PP, monitoring not social behavior but ecological conditions. The PP could take on numerous forms, but all with the same core element: the technology is an interface between ourselves and the world that focuses on what's around us.

USC's Julian Bleecker, in A Manifesto for Networked Objects — Cohabiting with Pigeons, Arphids and Aibos in the Internet of Things, describes a clearly related but not identical manifestation of this technology. He refers to them as "blogjects," objects that create an ever-expanding record of themselves, accessible over the net -- objects that tell their own stories.

[Blogjects tell us] about their conditions of manufacture, including labor contexts, costs and profit margins; materials used and consumed in the manufacturing process; patent trackbacks; and, perhaps most significantly, rules, protocols and techniques for retiring and recycling [them].

In my WorldChanging discussion of the essay, I note that Bleecker's vision gives us something akin to an "augmented world." Like the technologies of the PP, blogjects provide an interface between ourselves and the world, focused upon the world -- except here the technologies are not intimate, but are instead extimate, spread around the environment, augmenting our sense of the world at a distance.

The third, and most recent, manifestation of this "distributed attention" technology can be found in the pages of Adam Greenfield's Everyware, subtitled "the dawning age of ubiquitous computing." Greenfield's everyware model is in some respects the polar opposite of the participatory panopticon: rather than intimate devices watching the world, Everyware posits a world of extimate devices watching each of us.

That sounds more Orwellian than I think Greenfield would intend. Although it's clear he's very concerned about the social, cultural and legal implications of devices that pay attention to our behavior, Greenfield is also able to explain why the capabilities inherent to ubiquitous computing make its arrival essentially inevitable. This isn't techno-determinism, it's (for lack of a better phrase) utility-determinism. When a technology, or behavior, or idea can let people do significantly more with less effort or cost, or do useful things they could never do in the past, the likelihood of widespread adoption of that technology/behavior/idea is increased. Reading through Greenfield's examples of proto-everyware already in use, it's easy to see just how attractive aspects of this scenario will be.

Even if getting around town were the only thing Octopus [Hong Kong's smart transit card system] could be used for, that would be useful enough. But of course that's not all you can do with it, not nearly. The cards are anonymous, as good as cash at an ever-growing number of businesses, from Starbucks to local fashion retailer Bossini. You can use Octopus at vending machines, libraries, parking lots and public swimming pools. It's quickly replacing keys, card and otherwise, as the primary means of access to a wide variety of private spaces, from apartment and office buildings to university dorms. Cards can be refilled at just about any convenience store or ATM. And, of course, you can get a mobile with Octopus functionality built right into it, ideal for a place as phone-happy as Hong Kong.

Notably, Greenfield is able to avoid both ascriptions of "good" or "evil" qualities to technology and bland assertions of technology's "neutrality." All artifacts are biased, because they embed the assumptions and priorities of their creators. Sometimes the biases are sufficiently universal or inconsequential that we don't perceive them as biases, but ask any left-handed person about living in a right-handed world and you can begin to understand how pervasive subtle bias can be.

The way I've described these three manifestations of this technology suggests a larger structure at play. Here's the inevitable four-box:

Is the technology intimate (carried or worn -- or implanted -- on ourselves) or extimate (extant in the world around us)? Is the technology focused upon us (individual humans or human behavior) or the world (everything else)? In this structure, an obvious fourth niche presents itself, devices that are both intimate and self-focused. Medical monitors are a clear candidate for this box, but I wonder if there's something else that would be a more likely fit. Is this where personal augmentation technology slots in?
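As an illustration only (the quadrant labels come from the discussion above, but the mapping itself is my own sketch, not anything from the essay), the four-box can be expressed as a simple lookup keyed on the two axes:

```python
# A sketch of the four-box: the locus of the technology (intimate vs.
# extimate) crossed with its focus (ourselves vs. the world). Quadrant
# labels are drawn from the surrounding discussion; the structure is
# purely illustrative.
FOUR_BOX = {
    ("intimate", "world"): "participatory panopticon (personal devices watching our surroundings)",
    ("extimate", "world"): "blogjects (networked objects telling their own stories)",
    ("extimate", "self"): "everyware (ubiquitous computing watching each of us)",
    ("intimate", "self"): "medical monitors / personal augmentation (?)",
}

def classify(locus: str, focus: str) -> str:
    """Return the manifestation occupying a given quadrant."""
    return FOUR_BOX[(locus, focus)]

print(classify("intimate", "world"))
```

The open question in the text is exactly the fourth key: whether the (intimate, self) quadrant belongs to medical monitoring or to something broader like personal augmentation.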

May Day

The month of May looks to be filled with events -- many of them asking variations on the same question: who am I?

On May 1, I'll be at the Internet Identity Workshop, put together by Kaliya Hamlin, Doc Searls and Phil Windley. The event runs through May 3, but I'll only be able to make the first day. The agenda looks good -- and the logo is brilliant.

As I mentioned earlier, May 5 and 6 finds me at the Metaverse Roadmap Project. The timing looks to be perfect for an event on this subject, as virtual worlds have hit the mainstream zeitgeist. I got a chance to talk for a bit with Robin Harper of Linden Lab (makers of Second Life) at a project I worked on for the Institute for the Future this week; I really should take the time to download and investigate Second Life. Its current participation numbers -- around 200,000 people, according to Robin -- pale in comparison to the 8 million people in World of Warcraft, but we Mac (and Linux) users know that numbers aren't everything.

Coincidentally, my friend James Hughes (director of the Institute for Ethics and Emerging Technologies, among his many affiliations and titles) sent me a link showing that the participation rate in virtual worlds is on a classic exponential curve. He tossed out a term I'll make certain to use at MVRP: the MMOGularity.

But no rest for the wicked: Meshforum 2006 starts on May 7, running through May 9. I had the distinct pleasure of speaking at the first Meshforum, which took place last year in Chicago. Luckily, this year's meeting is in San Francisco. I'll be at this one, too, on a panel with Howard Greenstein and Christopher Allen, talking about breaking networks. Jon Lebkowsky will be there, as well, along with Robert Scoble and Karen Stephenson. Should be fun.

Spaces are still available.

The subsequent Saturday, May 13, will find me at the Singularity Summit, held at Stanford University. It's a single-day conference (thankfully) where I'll be a guest, not a speaker (thankfully), and the line-up for the shindig includes both long-time Singularitarians and long-time skeptics and critics. I'm particularly interested in seeing what the reaction is to Bill McKibben; I expect this to be a largely Singularity-friendly crowd. Will they give him a fair hearing?

May 20th is the wedding of a dear friend, Iona Mara-Drita, and her delightful partner, Tom Berger. Congratulations are in order, of course, and I expect few of the reception conversations to include the phrases "metaverse," "singularity" or "nanoengineering."

Finally (!), May 27 and 28 will find me at the Human Enhancement Technologies and Human Rights conference, once again at Stanford. Put together in part by the IEET, the meeting will explore many of the social, cultural and legal implications of various scenarios -- current and projected -- of technologically enhanced human capabilities. I will probably be speaking on the second day. Registration is still open for the conference, and there's a free public reception on the evening of Friday, May 26.

April 25, 2006


No posts until Friday, I'm on the road.

April 22, 2006

Still A Work In Progress

Warren Ellis graciously linked over to Open the Future last night; for those of you who peek in as a result of that post, bear in mind that I'm still unpacking, figuratively speaking, and there's meatier content to come.

April 21, 2006


Bad, bad arthritis flare-up the last couple of days, leading to very little sleep (with a resulting loss of higher cognitive functions). I did manage to update the Writing page, including some links to articles I wrote seven years ago. It's interesting to look back at that late-'90s content -- it's clear that I've been thinking about some of my pet issues for a very long time.

Also, because I could never do this at WorldChanging: Friday Cat Blogging!


I won't be making a habit of it, but I needed at least to get it out of my system. I blame the aforementioned degradation of higher cognitive functions.

April 19, 2006

Metaverse Roadmap

I've been fascinated for many years by the emergence of virtual worlds. Their attractiveness is obvious to anyone who has read a work of fiction and imagined themselves in that world, either alongside the heroes or off exploring new spaces. Paper and dice role-playing games (such as D&D or Transhuman Space) offered an approximation of virtual existence, but did so through descriptive language (and, often, little lead wizards, goblins and the like). As personal computers grew to have powerful visual capacities and global network connections, however, the opportunity arose to create immersive alternative worlds that could be experienced by anyone, regardless of imagination.

Today's virtual worlds (whether massively-multiplayer games, like World of Warcraft, or online communities, like Second Life) are sophisticated enough to have developed a variety of emergent behaviors, from unexpected game environment events to nuanced social and economic behavior. Future virtual worlds promise more complex experiences and expressive interfaces -- but what are we going to do with all of that power? Will tomorrow's virtual worlds still be about killing things and taking their stuff?

The Acceleration Studies Foundation wanted to find out, and has set up what promises to be a pretty remarkable event: the Metaverse Roadmap Project. Taking its name from the online world in Neal Stephenson's novel Snow Crash, the Metaverse Roadmap Project will explore the potential development of both virtual world technology and virtual world society over the next ten years. The conference will take place May 5th and 6th -- and I am one of the participants. Other attendees include people like Raph Koster, Edward Castronova and Joi Ito, along with folks from the game industry, academia, and the media. I'm particularly looking forward to seeing my old WorldChanging colleague Ethan Zuckerman at the event.

My own thoughts on where the metaverse is heading are still evolving, but some early indicators of what I'm thinking about can be found in a few of my later posts at WC. In The Open Future: Living in Multiple Worlds, I look at the combination of virtual environments, augmented reality and simulations as decision-support tools for an increasingly complex world. In Making the Virtual Real, Virtual Complementary Currencies, and The Open Future: Spirits in the Material World, I talk about the potential overlap of virtual environments and material fabrication technologies.

In short, I increasingly see virtual worlds not as alternative environments but as augmentation to our physical existence. We'll soon be living in a "mash-up" of the real world and virtual worlds; arguably, some of us already are. Like so many of the 1990s predictions about the evolution of the Internet, the idea that multi-user environments lead to isolation and detachment from reality turned out to be 180° wrong.

The risk is that we'll back into this augmentation, and will be stuck with the virtual world equivalent of QWERTY -- technological standards and habituated user behaviors that were once functional, but now serve more as roadblocks to efficient use of new technologies. Incompatible identity rules, interfaces built more for killing dragons than interacting with colleagues, and closed, proprietary systems requiring the reinvention of the wheel over and over again all threaten to lead us to a world of virtual/augmented life that is far clumsier and harder to use than it needs to be.

My goal for the Metaverse Roadmap Project meeting, then, is not to identify the winning technologies and companies for the next ten years, but to identify the kinds of approaches and strategies for building virtual environments that stand the greatest chance of making us all winners.

April 18, 2006

OtF Core: The Open Future

To get a sense of how this perspective has evolved over the past couple of years, here's "The Open Future," the essay that kicked off a series I produced for WorldChanging in my final month. The most important improvement, in my view, is the recognition of the larger connections of this approach -- it's not just about emerging technologies. Still a bit too solemn, though.

The future is not written in stone, but neither is it unbounded. Our actions, our choices shape the options we'll have in the days and years to come. We can, with all too little difficulty, make decisions that call into being an inescapable chain of events. But if we try, we can also make decisions that expand our opportunities, and push out the boundaries of tomorrow.

If there is a common theme across our work at WorldChanging, it is that we are far better served as a global civilization by actions and ideas that increase our ability to respond effectively, knowledgeably, and sustainably to challenges that arise. In particular, I've focused on the value of openness as a means of worldchanging transformation: open as in free, transparent and diverse; open as in participatory and collaborative; open as in broadly accessible; and open as in choice and flexibility, as with the kind of future worth building -- the open future.

Creating an open future requires foresight, to be sure, but it also requires that we embrace a way of looking at the world that emphasizes responsibility, caution and (perhaps paradoxically) a willingness to experiment. It requires that we recognize that the status quo is contingent, and that we can never be in full control of our environment. Even the most powerful among us live at the sufferance of the universe.

The tools that we depend upon to enable effective, knowledgeable and sustainable responses are neither surprising nor obscure: information about the planet, its people and its systems; collaboration and cooperation among the world's citizens; access to the means by which we expand our knowledge, feed our people, and cure our illnesses. Actions taken to restrict information, hinder collaboration, and centralize power in the hands of the few will, almost invariably, cut off our options. Actions we take that expand what we know, how well we work together, and how readily the people of the world can build their future, conversely, almost invariably increase the options we have for a better tomorrow.

As a planet, we face a handful of truly profound dilemmas taking shape in the first part of this century. It's no exaggeration to say that the decisions we make about how to handle these dilemmas will make the difference between a flourishing of global civilization and a fate akin to extinction. And while there is a small variety of world-ending challenges that could emerge at any moment -- from an asteroid impact to a naturally-emerging pandemic -- the key dilemmas of this century are entirely in our hands.

The first, and most certain, is the threat from global climate disruption. The more we learn about the changes now taking place in our planet's climate systems, the greater the challenge appears. We are unaccustomed to thinking about slow-moving problems with long lag times between actions and reactions; there is a real risk that the first serious efforts to cut carbon emissions will coincide with an acceleration of problems arising from decades-old changes to the atmosphere. Successful response to this challenge will require us to think in terms of big systems and long cycles far outside our everyday experience.

The second, and as yet still incipient, is the impact of molecular nanotechnology. I've followed the development of this discipline for well over a decade, and our understanding of how self-replicating molecular machines could be built is moving at a startlingly rapid pace. This may seem like an obscure concern, and it's true that molecular nanotechnology is not nearly as immediate an issue as the other two challenges. But molecular nanotech is an enabling technology that can create enormous differences in economic, technological and military power between the haves and have-nots. I don't fear a bolt-from-the-blue catastrophe like "grey goo" nearly as much as I worry about the race among nations to be the first to wield this technology and, as we've discussed here numerous times, there's no reason why focused work in developing nations can't come up with the necessary engineering breakthroughs. Students of political history know that periods where the balance of power shifts are often the most violent and dangerous.

The third, and most painful, is the growing difference between the hyperdeveloped and the most poverty-stricken parts of the world. It's not simply the moral crisis that a fraction of the planet swims in abundance while a larger fraction drowns in misery; the greater the number of people who take desperate measures for survival, the greater the number of societies rendered powerless by the status quo, the harder it will be to navigate the other global problems successfully. Starving people do not have the luxury of being thoughtful planetary guardians; weakened societies will not hesitate to take advantage of the immediate power arising from a new technological paradigm. To be blunt: unless we solve the problem of global poverty, we will not be able to solve the other two world-ending challenges.

But thinking of these solely in terms of the problems they present is not the WorldChanging way. It's clear that the steps necessary to meet each challenge can enable better solutions for the rest. The innovations in technology and lifestyle required to avoid climate disaster could dramatically reduce the resource competition that drives a significant part of the global zero-sum political game, scaling back the threat of conflict over nanotechnologies and enabling the kinds of energy and agricultural infrastructure that can lift up poverty-stricken societies. The emergence of a responsible model for molecular manufacturing could enable multiple orders-of-magnitude leaps in efficiency of production and energy use, even while enabling the poorest societies to start building a universally high quality of life. And the efforts needed to solve problems of famine, unclean water, disease and privation will shape the course of research in energy and material technologies; the more we grapple with global poverty, the more we'll see the potential for solutions emerging from our technological choices.

Across all of these issues, the fundamental tools of information, collaboration and access will be our best hope for turning world-ending problems into worldchanging solutions. If we're willing to try, we can create a future that's knowledgeable, democratic and sustainable -- a future that's open. Open as in transparent. Open as in participatory. Open as in available to all. Open as in filled with an abundance of options. Few other choices will see us through the century.

We can have an open future, or we might have no future at all.

OtF Core: Open the Future

I wrote nearly 2,000 articles for WorldChanging, and I am very happy to have them there. Nonetheless, some of the pieces I wrote are fundamental parts of my worldview, and it's useful to have them here, too.

"Open the Future," written in mid-2003, was originally scheduled to appear in Whole Earth magazine. Unfortunately, that issue turned out to be the magazine's last, and it was never actually published. I posted the essay on WorldChanging in February, 2004. In retrospect, it's a bit wordy and solemn, and focuses too much on the "singularity" concept, but it still gets the core idea across: openness is our best defense.

Very soon, sooner than we may wish, we will see the onset of a process of social and technological transformation that will utterly reshape our lives -- a process that some have termed a "singularity." While some embrace this possibility, others fear its potential. Aggressive steps to deflect this wave of global transformation will not save us, and would likely make things worse. Powerful interests desire drastic technological change; powerful cultural forces drive it. The seemingly common-sense approach of limiting access to emerging technologies simply further concentrates power in the hands of a few, while leaving us ultimately no safer. If we want a future that benefits us all, we'll have to try something radically different.

Many argue that we are powerless in the face of such massive change. Much of the memetic baggage accompanying the singularity concept emphasizes the inevitability of technological change and our utter impotence in confronting it. Some proponents would have you believe that the universe itself is structured to produce a singularity, so that resistance is not simply futile, it's ridiculous. Fortunately, they say, the post-singularity era will be one of abundance and freedom, an opportunity for all that is possible to become real. Those who try to divert or prevent a singularity aren't just fighting the inevitable, they're trying to deny themselves Heaven.

Others who take the idea of a singularity seriously are terrified. For many deep ecologists, religious fundamentalists, and other "rebels against the future," the technologies underlying a singularity -- artificial intelligence, nanotechnology, and especially biotechnology -- are inherently cataclysmic and should be controlled (at least) or eliminated (at best), as soon as possible. The potential benefits (long healthy lives, the end of material scarcity) do not outweigh the potential drawbacks (the possible destruction of humanity). Although they do not claim that resistance is useless, they too presume that we are merely victims of momentous change.

Proponents and opponents alike forget that technological development isn't a law of the universe, nor are we slaves to its relentless power. The future doesn't just happen to us. We can -- and should -- choose the future we want, and work to make it so.

Direct attempts to prevent a singularity are mistaken, even dangerous. History has shown time and again that well-meaning efforts to control the creation of powerful technologies serve only to drive their development underground into the hands of secretive government agencies, sophisticated political movements, and global-scale corporations, few of whom have demonstrated a consistent willingness to act in the best interests of the planet as a whole. These organizations -- whether the National Security Agency, Aum Shinrikyo, or Monsanto -- act without concern for popular oversight, discussion, or guidance. When the process is secret and the goal is power, the public is the victim. The world is not a safer place with treaty-defying surreptitious bioweapons labs and hidden nuclear proliferation. Those who would save humanity by restricting transformative technologies to a power elite make a tragic mistake.

But centralized control isn't our only option. In April of 2000, I had an opportunity to debate Bill Joy, the Sun Microsystems engineer and author of the provocative Wired article, "Why The Future Doesn't Need Us." He argued that these transformative technologies could doom civilization, and we need to do everything possible to prevent such a disaster. He called for a top-down, international regime controlling the development of what he termed "knowledge-enabled" technologies like AI and nanotech, drawing explicit parallels between these emerging systems and nuclear weapon technology. I strongly disagreed. Rather than looking to some rusty models of world order for solutions, I suggested we seek a more dynamic, if counter-intuitive, approach: openness.

This is no longer the Cold War world of massive military-industrial complexes girding for battle; this is a world where even moderately sophisticated groups can and do engage in cutting-edge development. Those who wish to do harm will get their hands on these technologies, no matter how much we try to restrict access. But if the dangerous uses are "knowledge-enabled," so too are the defenses. Opening the books on emerging technologies, making the information about how they work widely available and easily accessible, in turn creates the possibility of a global defense against accidents or the inevitable depredations of a few. Openness speaks to our long traditions of democracy, free expression, and the scientific method, even as it harnesses one of the newest and best forces in our culture: the power of networks and the distributed-collaboration tools they evolve.

Broad access to singularity-related tools and knowledge would help millions of people examine and analyze emerging information, nano- and biotechnologies, looking for errors and flaws that could lead to dangerous or unintended results. This concept has precedent: it already works in the world of software, with the "free software" or "open source" movement. A multitude of developers, each interested in making sure the software is as reliable and secure as possible, do a demonstrably better job at making hard-to-attack software than an office park's worth of programmers whose main concerns are market share, liability, and maintaining trade secrets.

Even non-programmers can help out with distributed efforts. The emerging world of highly-distributed network technologies (so-called "grid" or "swarm" systems) makes it possible for nearly anyone to become a part-time researcher or analyst. The "Folding@Home" project, for example, enlists the aid of tens of thousands of computer users from around the world to investigate intricate models of protein folding, critical for the development of effective treatments for complex diseases. Anyone on the Internet can participate: just download the software, and boom, you're part of the search for a cure. It turns out that thousands of cheap, consumer PCs running the analysis as a background task can process the information far faster than expensive high-end mainframes or even supercomputers.
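The "@Home" pattern described above can be sketched in a few lines: a coordinator splits a big job into independent work units, farms them out to workers, and merges the partial results. This is a toy illustration only -- the analyze() function is an invented stand-in for the real per-unit computation, and real projects like Folding@Home ship work units over the network to volunteers' machines rather than to local threads.

```python
# Toy sketch of volunteer/grid computing: split a job into independent
# work units, process them in parallel, merge the partial results.
from concurrent.futures import ThreadPoolExecutor

def analyze(work_unit):
    """Stand-in for the expensive per-unit computation (here: sum of squares)."""
    return sum(n * n for n in work_unit)

def run_distributed(total, units, workers=4):
    """Split the range [0, total) into ~`units` work units and fan them out."""
    chunk = max(1, total // units)
    work = [range(start, min(start + chunk, total))
            for start in range(0, total, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(analyze, work)   # workers pick up units as they free up
    return sum(partials)                     # the coordinator merges results

print(run_distributed(1_000_000, units=100))
```

Because the work units are independent, adding more workers speeds things up almost linearly, which is exactly why thousands of idle consumer PCs can outpace a single supercomputer on this class of problem.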

The more people participate, even in small ways, the better we get at building up our knowledge and defenses. And this openness has another, not insubstantial, benefit: transparency. It is far more difficult to obscure the implications of new technologies (or, conversely, to oversell their possibilities) when people around the world can read the plans. Monopolies are less likely to form when everyone understands the products companies make. Even cheaters and criminals have a tougher time, as any system that gets used can be checked against known archives of source code.
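The "checked against known archives" idea boils down to verifying that what's deployed matches what was published. A minimal sketch, with an invented artifact name and manifest (real projects publish SHA-256 sums alongside their releases):

```python
# Sketch of verifying an artifact against a public record of known-good
# checksums. The manifest entry here is fabricated for illustration.
import hashlib

KNOWN_GOOD = {
    "voting-module-1.0.tar.gz": hashlib.sha256(b"trusted release bytes").hexdigest(),
}

def verify(name: str, data: bytes) -> bool:
    """Return True only if the artifact's hash matches the published checksum."""
    expected = KNOWN_GOOD.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

print(verify("voting-module-1.0.tar.gz", b"trusted release bytes"))
print(verify("voting-module-1.0.tar.gz", b"tampered bytes"))
```

Anyone holding the public manifest can run this check, which is the transparency point: cheating requires fooling every independent verifier, not just one gatekeeper.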

Opponents of this idea will claim that terrorists and dictators will take advantage of this permissive information sharing to create weapons. They will -- but this also happens now, under the old model of global restrictions and control, as has become depressingly clear. There will always be those few who wish to do others harm. But with an open approach, you also get millions of people who know how dangerous technologies work and are committed to helping to detect, defend and respond. That these are "knowledge-enabled" technologies means that knowledge also enables their control; knowledge, in turn, grows faster as it becomes more widespread.

These technologies may have the potential to be as dangerous as nuclear weapons, but they emerge like computer viruses. General ignorance of how software works does not stop viruses from spreading. Quite the opposite -- it gives a knowledgeable few the power to wreak havoc in the lives of everyone else, and gives companies that keep the information private a chance to deny that virus-enabling defects exist. Letting everyone read the source code may help a few minor sociopaths develop new viruses, but also enables a multitude of users to find and repair flaws, stopping many viruses dead in their tracks. The same applies to knowledge-enabled transformative technologies: when (not if) a terrorist figures out how to do bad things with biotechnology, a swarm of people around the world looking for defenses will make us safer far faster than would an official bureaucracy worried about being blamed.

Consider: in early 2003, the U.S. Environmental Protection Agency announced that it was upgrading its nationwide system of air-quality monitoring stations to detect certain bioterror pathogens. Now, analyzing air samples quickly requires enormous processing power; this limits both the number of monitors and the variety of germs they can detect. If one of the monitors picks up evidence of a biological agent in the air, under current conditions it will take time to analyze and confirm the finding, and then more time to determine what treatments and preventive measures -- if any -- will be effective.

Imagine, conversely, a more open system's response. EPA monitoring devices could send data not to a handful of overloaded supercomputers but to millions of PCs around the nation -- a sort of "CivilDefense@Home" project. Working faster, this distributed network could spot a wider array of potential threats. When a potential threat is found, the swarm of computers could quickly confirm the evidence, and any and all biotech researchers with the time and resources to work on countermeasures could download the data to see what we're up against. Even if there are duplicated efforts, the wider array of knowledge and experience brought to bear on the problem has a greater chance of resulting in a better solution faster -- and of course answers could still be sought in the restricted labs of the CDC, FBI and USAMRIID. The more participants, the better.
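The confirmation step in a hypothetical "CivilDefense@Home" network might look like redundant analysis with a majority vote: the same sensor reading goes to several independent nodes, and a detection counts only when most of them agree, guarding against any single node's error. This sketch is entirely invented -- the detector is a noisy stand-in, not a real pathogen-analysis routine.

```python
# Sketch of redundant confirmation via majority vote across volunteer nodes.
import random

def node_detects(sample: float, noise: float = 0.05) -> bool:
    """One node's (noisy) judgment: does this reading show the pathogen signature?"""
    return sample + random.uniform(-noise, noise) > 0.5

def confirmed(sample: float, nodes: int = 9) -> bool:
    """Fan the same reading out to many nodes; confirm only on a majority vote."""
    votes = sum(node_detects(sample) for _ in range(nodes))
    return votes > nodes // 2

print(confirmed(0.9), confirmed(0.1))  # strong signal vs. background noise
```

A strong signal survives the vote even though each node is individually unreliable; a weak one is filtered out, which is the basic argument for why "more participants" makes the system more trustworthy, not less.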

Both the threat and the response could happen with today's technology. Tomorrow's developments will only produce more threats and challenges more quickly. Do we really think government bureaucracies can handle them alone?

A move to a society that embraces open technology access is possible, but it will take time. This is not a tragedy; even half-measures are an improvement over a total clampdown. The basic logic of an open approach -- we're safer when more of us can see what's going on -- will still help as we take cautious steps forward. A number of small measures suggest themselves as good places to start:

• Re-open academic discourse. Post-9/11 security fears have led to restrictions on what scholarly researchers are allowed to discuss or publish. Our ability to understand and react to significant events -- disasters or otherwise -- is hampered by these controls.

• Public institutions should use open software. The philosophy of open source software matches our traditions of free discourse. Moreover, the public should have the right to see the code underlying government activities. In some cases, such as with word processing software, the public value may be slight; in other cases, such as with electronic voting software, the public value couldn't be greater. That nearly all open source software is free means public institutions will also save serious money over time.

• Research and development paid for by the public should be placed in the public domain. Proprietary interests should not be able to use government research grants for private gain. We all should benefit from research done with our resources.

• Reform intellectual property law. Our founding fathers intended intellectual property (IP) laws to promote invention, but IP laws today are used to shore up cultural monopolies. Copyright and other IP restrictions should reward innovation, not provide hundred-plus-year financial windfalls for companies long after the innovators have died.

• Those who currently develop these powerful technologies should open their research to the public. Already there is a growing "open source biotechnology" movement, and key figures in the world of nanotechnology and artificial intelligence have spoken of their desire to keep their work open. This is, in many ways, the critical step. It could become a virtuous cycle, with early successes in turn bringing in more participants, more research, and greater influence over policy.

While these steps would not result in the fully-open world that would be our best hope for a safe future, they would let in much-needed cracks of light.

Fortunately, the forces of openness are gaining a greater voice around the world. The notion that self-assembling, bottom-up networks are powerful methods of adapting to ever-changing conditions has moved from the realm of academic theory into the toolbox of management consultants, military planners, and free-floating swarms of teenagers alike. Increasingly, people are coming to realize that openness is a key engine for innovation.

Centralized, hierarchical control is an effective management technique in a world of slow change and limited information -- the world in which Henry Ford built the Model T, say. In such a world, where tomorrow will look pretty much the same as today, that's a reasonable system. In a world where each tomorrow could see a fundamental transformation of how we work, communicate, and live, it's a fatal mistake.

A fully open, fully distributed system won't spring forth easily or quickly. Nor will the path of a singularity be smooth. There is a feedback loop between society and technology -- changes in one drive changes in the other, and vice-versa -- but there is also a pace disconnect between them. Tools change faster than culture, and there is a tension between the desire to build new devices and systems and the need for the existing technologies to be integrated into people's lives. As a singularity gets closer, this disconnect will worsen; it's not just that society lags technology, it's that technology keeps lapping society, which is unable to settle on new paradigms before having to start anew. People trying to live in fairly modern ways find themselves cheek-by-jowl with people desperately trying to hang on to well-understood traditions, and both are still confronted by surprising new concepts and systems.

Change lags and lurches, as rapid improvements and innovations in technology are haphazardly integrated into other aspects of our culture(s). Technologies cross-breed, and advances in one realm spur a flurry of breakthroughs in others. These new discoveries and inventions, in turn, open up new worlds of opportunities and innovation.

If there is a key driving force pushing towards a singularity, it's international competition for power. This ongoing struggle for power and security is why, in my view, attempts to prevent a singularity simply by international fiat are doomed. The potential capabilities of transformative technologies are simply staggering. No nation will risk falling behind its competitors, regardless of treaties or UN resolutions banning intelligent machines or molecular-scale tools. The uncontrolled global transformation these technologies may spark is, in strategic terms, far less of a threat than an opponent having a decided advantage in their development -- a "singularity gap," if you will. The "missile gap" that drove the early days of the nuclear arms race would pale in comparison.

Technology doesn't make the singularity inevitable; the need for power does. One of the great lessons of the 20th century was that openness -- democracy, free expression, and transparency -- is the surest way to rein in the worst excesses of power, and to spread its benefits to the greatest number of us. Time and again, we have learned that the best way to get decisions that serve us all is to let us all decide.

The greatest danger we face comes not from a singularity itself, but from those who wish us to be impotent at its arrival, those who wish to keep its power for themselves, and those who would hide its secrets from the public. Those who see the possibility of a revolutionary future of abundance and freedom are right, as are those who fear the possibility of catastrophe and extinction. But where they are both wrong is in believing that the future is out of our hands, and should be kept out of our hands. We need an open singularity, one that we can all be a part of. That kind of future is within our reach; we need to take hold of it now.

What's This?

New along the right side bar is a selection of some of the resources I used while writing WorldChanging, and continue to read for insights and early warnings. This represents about a quarter of the sites I would skim through on a daily basis.

Also included is a small group of blogs belonging to friends of mine. If you think your blog should be on this list, you know what to do.

April 17, 2006

Moving Slowly

After blogging pretty much every day for two years -- including vacations -- having some time away is a bit surreal. In some ways, it's like I'm missing a part of me; in other ways, it's like I finally have time to breathe. I did just under 2,000 posts at WorldChanging, 1,100 in 2005 alone. It's no wonder I'm tired.

That said, I will be adding new material (and links to old) on this site this week. I don't know how many people are reading this (if you are, say hi) -- I didn't expect there to be much traffic here yet, but my site logs show a steady trickle of activity. I don't intend to let you get bored.

But as a special, shiny present for anyone who does wander by, here's a picture from TED2006 of me and Alex having a chat with a Mr. Al Gore:


(Photo by Robert Leslie)

April 13, 2006

My Talk at Yahoo!

My friend Jeffrey McManus, who is responsible for bringing me to Yahoo! last Friday, took this picture of me as the talk began:

speaking at Yahoo!

(Thanks for letting me use the picture...)

Upcoming Events

I'll be attending the Jimmy Wales talk for the Long Now Foundation tomorrow night in San Francisco.

I'll be in Princeton, New Jersey, from the evening of April 25 through the afternoon of April 27, working on a project for the Robert Wood Johnson Foundation for the Institute for the Future.

I'll be in Palo Alto, California, from May 5 through May 6 for the "Metaverse Roadmap" project for the Acceleration Studies Foundation.

April 12, 2006

Coming Up For Air

I've been startlingly busy since leaving WorldChanging. I just finished a draft of an article for PC World, spoke at Yahoo!, and attended the Institute for the Future's "10 Year Forecast" event. More stuff going on today, too.

Jamais Cascio

Contact Jamais • Bio


Director of Impacts Analysis, Center for Responsible Nanotechnology

Fellow, Institute for Ethics and Emerging Technologies

Affiliate, Institute for the Future


Creative Commons License
This weblog is licensed under a Creative Commons License.
Powered By MovableType 4.37