
Monthly Archives

September 29, 2007

Budapest

I'm heading off to Budapest for the next week. I'm not sure what kind of Internet access I'll have while there, so posting may be sporadic or worse until next weekend. If I can, I'll post pictures from the trip to my Flickr page along the way.

September 28, 2007

iForgetaboutit

I've been a Mac user for years, and (generally) happily so. I'm not an Apple fanboy, but I do appreciate the combination of good hardware and software design found in Macs. When the iPhone came out, some people I knew assumed that I'd get one for myself -- and I admit, I was tempted. But ultimately I chose not to, and I'm glad I did.

My initial reason for not getting an iPhone concerned the carrier. AT&T is hardly a bastion of respect for privacy and civil rights, and I had no desire to give them any more money than I have to. The various SIM-card unlocks would render that moot, except...

Anyone who thought that Apple -- with an iPhone business model that gets a huge chunk of the subscription fees from its carrier partners -- wouldn't re-lock the iPhone wasn't paying attention. And once the iPod Touch came out with an as-yet-unbreakable lockdown for applications, the writing was on the wall for the various third-party apps that clever hackers had figured out how to install on the iPhone. In short, the period in which the iPhone was relatively free and open (if not by Apple's doing) was always likely to be brief, and may never be repeated.

I'm utterly disgusted with the wireless telecom business models that actively prevent customers from actually making use of the technologies built into the hardware. Some carriers disable useful features, only to re-enable them for a fee; some simply disallow the use of given capabilities altogether. By barring the installation of any outside iPhone applications, Apple is actually among the most offensive vendors in this regard. Claims that "most people" would never use the ability to add applications are irrelevant, and likely wrong: one of the distinctly appealing aspects of the iPhone was its potential to shift the mobile phone world away from appliances and towards platforms -- i.e., to a world in which people think of their phones as they do their computers, as devices that can always be made to do more.

The alternatives are limited, but intriguing.

My next phone is very likely going to be a Linux-based OpenMoko Neo1973 phase 2, due out in December. A completely open platform, the OpenMoko operates on the global GSM standard and includes WiFi. It's not a perfect device -- no camera and no 3G make it decidedly sub-optimal -- but it's a project I want to give my whole-hearted support.

In the longer term, if Google wins the 700MHz auction and goes ahead with its plans for an open-hardware model for the spectrum, the wireless companies may find themselves in a real scramble. And Sprint's plans for WiMax actually appear to be relatively openness-friendly: among the first devices to take advantage of the high-speed wireless system will be a version of the Nokia N800, a Linux-based internet tablet with voice-over-IP capabilities.

It may well be that the next couple of years will be the last stand of the overly locked-down, paranoid, and arguably corrupt wireless networks. It's too bad that Apple has chosen to stand with them instead of with the future.

September 27, 2007

Security through Ubiquity

Another idea I want to get out and into at least my working lexicon.

Security through Ubiquity refers to the reduced vulnerability to attack that comes from being part of a transcendently common multitude; in this context, "attack" includes social opprobrium and the deleterious effects of a loss of privacy.

This apparent security comes from several sources:

  • An abundance of identical items/behaviors can make it proportionately less likely that one's own item/behavior gets targeted. ("Weak" security through ubiquity.)
  • An abundance of identical items/behaviors can lessen the desire to attack the item/behavior -- the item/behavior is not scarce, unusual or out-of-place. ("Moderate" security through ubiquity.)
  • An abundance of identical items/behaviors can mean that many, many people know how to recognize and potentially resolve or mitigate damage from misuses or abuses of that item or behavior. ("Strong" security through ubiquity, overlaps with open source security argument.)

    The example of this that comes to mind is the increasingly commonplace appearance of "inappropriate" pictures and personal stories on publicly-visible social networking sites, websites, and chat logs. In an era when such appearances were unusual and/or out-of-place, participants could be easily targeted and social norms readily enforced. In an era when such appearances are commonplace, it becomes harder to generate ongoing interest or opprobrium absent another factor that makes the appearance scarce or unusual (e.g., celebrity status).

    This is why I don't believe that the up-and-coming network generation will be particularly harmed professionally or socially in the future by "wild" behavior documented online today.

    Molecular Rights Management

    I'll have more to say about this soon, but I just want to toss the idea out to the noösphere and make it visible.

    Molecular Rights Management refers to the panoply of technologies employed to prevent the unrestricted reproduction of the products of molecular scale (atomically-precise, nano-fabricated) manufacturing technologies. The source concept for the term is digital rights management, technologies employed to prevent the unrestricted reproduction of digital products. As yet, no actual molecular rights management technologies exist.

    MRM is likely to emerge for two primary reasons: the continued need for intellectual property controls, so as to prevent a wave of "Napster fabbing"; and the need for security to prevent the production of controlled goods ("assault rifles," figuratively or literally).

    MRM could reside in the design media (the CAD files and the like), such as with single-execution licenses, digital watermarks, and so forth.

    MRM could reside in the production hardware (the "nanofactory"), such as with systems that "store" all designs online (no local storage), blacklist systems that a nanofactory would check an input design against, smart systems that recognize disallowed designs as they are being made, even in disconnected parts, and so forth.

    MRM could reside in the network, with agents that check the designs loaded in a nanofactory for proper licensing information.
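    To make the nanofactory-side variant concrete, here is a toy sketch in Python. It is entirely hypothetical -- no such systems exist, and the function names, the use of a cryptographic hash as a design fingerprint, and the license registry are all invented for illustration. It combines two of the mechanisms above: a blacklist of disallowed designs and a licensing check.

```python
import hashlib

def design_id(design_bytes):
    """Fingerprint a design file (a stand-in for real design identity)."""
    return hashlib.sha256(design_bytes).hexdigest()

def may_fabricate(design_bytes, blacklist, licensed):
    """Allow fabrication only if the design is neither blacklisted
    nor unlicensed (here, 'licensed' means pre-registered)."""
    digest = design_id(design_bytes)
    if digest in blacklist:
        return False            # recognized as a controlled design
    return digest in licensed   # require a matching license entry

# Hypothetical designs:
rifle = b"controlled-weapon-design"
mug = b"licensed-coffee-mug-design"

blacklist = {design_id(rifle)}
licensed = {design_id(mug)}

print(may_fabricate(rifle, blacklist, licensed))  # False: blacklisted
print(may_fabricate(mug, blacklist, licensed))    # True: licensed
print(may_fabricate(b"anything-else", blacklist, licensed))  # False: no license
```

    A real system would face the obvious problem this sketch ignores: trivially modified designs produce different fingerprints, which is why the "smart systems that recognize disallowed designs as they are being made" approach would matter.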

    Given that the final results of a nanomanufactured product can, in principle, be used without any need to connect back to the original fabber or design, the impact of MRM on end-users is likely to be less onerous than the impact of DRM has been on the users of digital media. Couple that to the safety/security aspects, and it seems to me that MRM is likely to be broadly tolerated, and potentially even accepted.

    September 26, 2007

    Political Relationships and Technological Futures

    The night before the Singularity Summit, a team from the Singularity Institute for Artificial Intelligence sat me down and interviewed me, asking my thoughts on AI and what was to come. That interview is now available at the SIAI site, both as a downloadable high-quality QuickTime movie (.MOV, ~65MB) and as an embedded Flash video... as seen above.

    Whether we talk about AI or molecular manufacturing... we may talk about them as gadgets, nuts and bolts, we may be fascinated by the underlying circuitry, but the choices that we make about what we pursue and what we abandon, the decisions that we make about what goes into the code, and ultimately the policies that we develop around how to integrate this into society have political origins. The more that we can make explicit the political aspects of these technologies, the better we will be able to handle the repercussions when they do eventually emerge.

    September 25, 2007

    Turning the Body Against Itself

    Is the most effective form of warfare akin to an auto-immune disease?

    System disruption, attacks upon infrastructure and the other basic networks allowing a society to function, is a core goal of the "open source warfare" model. Not system destruction -- disruption, or partial damage and degradation, which reduces legitimacy and undermines the ability of the state to fight. Normally, we think of such damage to infrastructure coming from the direct action of attackers: blowing up power plants, attacking food shipments, etc.

    But it seems that a potentially more effective form of system disruption happens as the result of actions taken by the state itself in response to a threat (or perceived threat) from insurgents. The disruption to critical networks happens not as a direct result of attacks, but as the (usually unintended) result of defensive measures taken to head off an attack.

    This post from Bruce Schneier today illustrates this idea. He points to a blog post by Eric Umansky about the emergence of cholera in Iraq. The cause of the cholera outbreak is already known: the lack of chlorine to purify water.

    "We are suffering from a shortage of chlorine, which is sometimes zero," Dr. Ameer said in an interview on Al Hurra, an American-financed television network in the Middle East. "Chlorine is essential to disinfect the water."

    Chlorine is hard to come by because of a series of unsuccessful "chlorine bomb" attacks a few months ago; chlorine is now under tight restriction. The intended result of the restriction was to make it harder for insurgents to use chlorine to create improvised chemical weapons, even though the various attempts to do so resulted in no actual fatalities. The actual result was to disrupt the water infrastructure by putting a stranglehold on the ability to purify water, in turn leading to cholera outbreaks. As Umansky puts it:

    In other words, the biggest damage from chlorine bombs -- as with so many terrorist attacks -- has come from overreaction to it. Fear operates as a "force multiplier" for terrorists, and in this case has helped them cut off Iraq's clean water. Pretty impressive feat for some bombs that turned out to be close to duds.

    To be clear: the chlorine bombs, while scary, had no serious military impact. But they were exactly the kind of weapon that could trigger an overreaction. The same can be said for the various threats against airplanes that served as catalysts for security measures that slow air travel (although obviously with less dire consequences).

    It struck me that a minor attack triggering a defensive response which continues long after the attack, and causes much more damage than the original attack ever could have, is pretty much a description of an auto-immune disorder. In a wide variety of diseases, the body has turned against itself, with the immune system attacking what should be seen as healthy, normal tissue. Lupus, multiple sclerosis, and (most personal to me) rheumatoid arthritis are common examples of auto-immune diseases.

    Interestingly, recent research suggests that a low level of auto-immunity is useful as a way of developing and testing the rapid immune response; the Wikipedia entry suggests that this is akin to "play fighting" in animals that need to learn how to hunt.

    Similarly, it's likely that many of the useful steps that can be taken to block or create resilience towards system disruption attacks may engender a bit of "auto-immune" disruption, such as requiring that more time be taken to examine cargo containers at shipping ports. What's needed is an ability to recognize when an "unhealthy" auto-immune disruption is underway -- or, better still, when it's a likely result of a tactical or strategic choice. This, in turn, requires a greater willingness to admit to bad decisions, and to rescind mistakes. Unsurprisingly, it all boils down to greater transparency about the decision-making process, and more efficient channels of communication between the people who determine strategy and the people who have to live with the results.

    September 24, 2007

    CRN Leadership Team Expands

    Press Release

    The Center for Responsible Nanotechnology (CRN) is adding two new members to its leadership team. Jamais Cascio will become CRN’s Director of Impacts Analysis, and Jessica Margolin will take on the role of Director of Research Communities, effective October 1, 2007. CRN co-founder Chris Phoenix will begin his scheduled sabbatical in October. Co-founder Mike Treder will continue to serve as Executive Director of CRN.

    Since its inception in December 2002, CRN has significantly contributed to better public understanding about molecular manufacturing, a specialty area of nanotechnology associated with extremely high risks and returns. CRN promotes awareness and education, and the development of effective recommendations to maximize benefits and reduce dangers.

    “I’ve been looking forward to this opportunity for some time,” said Phoenix. “With growing recognition about the importance of molecular manufacturing, with Jamais and Jessica, two extremely talented people, coming on board, and with Mike’s ongoing leadership, I feel comfortable taking a sabbatical.”

    Jamais Cascio is a writer, blogger and futurist covering the intersection of emerging technologies and cultural transformation. He speaks about future scenarios around the world and his essays about technology and society have appeared in a variety of print and online publications. He is a fellow at the Institute for Ethics and Emerging Technologies, as well as a research affiliate at the Institute for the Future. He also works on a variety of independent projects including serving as a lead author of the recent Metaverse Roadmap Overview report.

    “I’ve admired CRN’s work for a long time,” said Cascio, “and in recent months I’ve become more actively involved. Now I’m extremely pleased to be joining the team in a leadership capacity.”

    In 2003, Cascio co-founded WorldChanging.com, a Web site dedicated to finding and calling attention to models, tools, and ideas for building a ‘bright green’ future. Cascio authored nearly 2,000 articles during his time at WorldChanging, looking at topics such as energy and the environment, global development, open-source technologies, and catalysts for social change. In 2006, he started OpenTheFuture.com as his online home.

    “My understanding of technology development and societal change leads me to conclude that molecular manufacturing will be hugely disruptive,” added Cascio. “I’ve said before that if we manage to get through this century with our civilization intact, CRN's work will bear much of the credit. I hope I can make a worthwhile contribution to that effort.”

    Jessica Margolin is an entrepreneur who consults in the area of purposeful conversations and messaging systems. Her professional background includes industry roles in financial analysis, business development, organizational design, and marketing strategy and communications; her education includes an MS in Materials Science in the area of nanotechnology, and an MBA.

    “It's important to ensure all voices are heard during periods of profoundly rapid scientific innovation,” said Margolin. “Many nanoscale technologies are poised to be disruptive, and CRN focuses on what is potentially the most disruptive of all. I look forward to accelerating the development of the community surrounding CRN's work.”

    Currently a research affiliate at the Institute for the Future, Margolin synthesizes her professional experience in the financial and internet industries as well as her philanthropic work to address problems concerning the design of organizations, institutions, and communities.

    “I’m ecstatic about the opportunity to work closely with both Jamais and Jessica as we move forward in the important cause of ensuring safe development and responsible use of advanced nanotechnology,” said Treder.

    The Center for Responsible Nanotechnology is a research and advocacy organization concerned with the major societal and environmental implications of advanced nanotechnology. CRN is an affiliate of World Care, an international, non-profit, 501(c)(3) organization. The opinions of CRN do not necessarily represent those of World Care.

    My Talk at the Singularity Summit

    Anyone who wants to hear the presentation, here you go:

    MP3 of my talk (~30 minutes)

    Let me know what you think.

    BTW, the first third or so just covers the metaverse roadmap; the real fun part starts when I offer my "second disclaimer" (at about 8:24).

    September 23, 2007

    Give an XO, Get an XO

    Correction -- *tiny* laptop.

    I don't think the One Laptop Per Child project knows what it is about to unleash.

    On November 12, and for an unspecified (but brief) period following, the OLPC project will offer the "Give 1, Get 1" special:

    For $399, you will be purchasing two XO laptops—one that will be sent to empower a child to learn in a developing nation, and one that will be sent to your child at home.

    (Heh, yeah, "your child at home.")

    But that's it: for $399, you'll get an XO laptop of your own, and fund an XO for a child in the developing world.

    Considering the hype and the enthusiasm surrounding the XO, and considering that, as far as gadgets go, $400 isn't really a huge investment, I expect the demand for this to be enormous. The question, then: is the OLPC project ready to meet it?

    (Update: Ethan Zuckerman has further observations, well worth reading.)

    An Unexpected Engine for Innovation

    Could universal health insurance be an engine for entrepreneurial innovation?

    I don't mean innovation in the healthcare space in particular, although that's possible. I mean more generally, as an unanticipated benefit, an "economy of scope," if you will, of universal health coverage. It may well be that a shift to broad health coverage could trigger a period of surprising economic growth. This may actually be an argument that would win support for single-payer insurance among those not persuaded by the moral or social aspects.

    I came at this thought in a somewhat roundabout way. It will come as no surprise to anyone who has done a rapid succession of talks and travel that, a couple of days after getting back from Zürich, my immune system went on strike and I was hammered by one of those colds that served as a reminder of just how much we take our health for granted. My current health insurance situation is a bit complicated, as it is for most freelancers, and although this situation wasn't enough to warrant going to a doctor, I began once again (in my waking, lucid moments) to think about whether I needed to find a "real" job that would come with benefits such as health coverage.

    Today, it struck me: I can't be the only person facing this kind of choice.

    How many people want to be out there, trying new professional experiments, working for themselves, but are held back by the thought that doing so would mean a lack of real health insurance?

    It's not uncommon to see paeans to the entrepreneurial spirit of US citizens*, and read consultant-ese observations that the one success skill in a rapidly-changing economy and society is flexibility, a willingness to try new things. This latter argument makes sense, from the "economic resilience" perspective. In a period of turmoil, successful adaptation demands the ability to iterate, rapidly and in parallel, multiple different models. With product design, it may be sad but ultimately of little consequence to toss out the less-adaptive concepts; the same cannot be said for human lives.

    This is the health care risk at the heart of entrepreneurialism: if you or someone in your family gets sick or injured, you could easily lose everything. And if you have a "pre-existing condition" (such as my palindromic rheumatism), you're really out of luck. If you're youthful and willing to take a chance, this may be an acceptable trade-off; but remember, this is an aging population, and innovation is not just a sport for the young. If you have a spouse with health benefits, you may be okay, but that puts enormous responsibility on the shoulders of one's partner to keep the job s/he's in, no matter how unhappy or unfulfilled it might be. COBRA works for a while, if you can get it, but it has its own limitations. So too with the variety of packages for freelancers (if you can get them). The handful of remaining options -- including just going without -- can be amazingly expensive.

    I don't think that there is necessarily a massive population of proto-entrepreneurs just waiting for universal health coverage in order to go out and change the world. I do think that there's a small number, however, who would then provide a model for people who might have long ago discarded the idea of working for themselves. The lack of universal healthcare in the United States may well be a brake on the kinds of innovation and individual experimentation that will be necessary to adapt to a rapidly-changing economic -- and geophysical -- environment.

    Just some thoughts on a Sunday afternoon, still in the midst of recovery.

    (*The European experience provides neither strong support nor contradiction of this premise, given the substantial cultural and, often, legal differences regarding entrepreneurialism between the US and Europe.)

    September 20, 2007

    She's Geeky

    Mark your calendars: the first "She's Geeky" unconference is now set for October 22-23, at the Computer History Museum in Mountain View, California. Organized by my colleague Kaliya Hamlin (the so-called "Identity Woman"), She's Geeky will be an opportunity for the growing number of women tech specialists to network and collaborate.

    We have three simple goals with the event.
    • Exchange skills and learning from women from diverse fields of technology.
    • Discuss topics about women and technology.
    • Connect the diverse range of women in technology, computing, entrepreneurship, funding, hardware, open source, nonprofit and any other technical geeky fields.

    What is the value of coming? It should be a great networking opportunity to meet other interesting women who you or your company might do business with. In this format you will get to learn more than you would just having interesting meetings in a hallway like you do at typical conferences that cost a lot more.

    Not being female, I'm not attending, but this is the kind of event I'm happy to give my whole-hearted support, and I'm pleased that Kaliya asked me to blog about it.

    Looking out over the audience (and the speaker list) at the Singularity Summit earlier this month, I was reminded just how narrow the perspectives seem to be in the world of tech-centered futurism. Ideas are not determined by the color of one's skin or the shape of the bits in one's pants, but they are shaped by experience. Diversity -- a cognitive and social polyculture, if you will -- gives us strength.

    September 17, 2007

    Monday Topsight, September 17, 2007

    Frankfurt Nuclear Plant 2

    Back in the US now, at least for the next two weeks -- then off to Budapest, to speak at a conference entitled "Visions of the Future. Technology and Society: Global and Local Challenges."

    Trying to get back into the blogging practice.

    • Fast Lane to the Uncanny Valley: Motion Portrait is a new Japanese company offering a novel service: it can take a single 2D image of a person and turn it into a believable 3D animation. The website has a couple of examples of the process at work. Start by mousing over the woman in the box in the upper right of the page -- notice that she'll start following your mouse pointer around. Click on the bell and she'll talk (she'll also chastise you if you mouse around in a circle too quickly, making her "dizzy").

    The company wants to use the technology (which they claim will run on a low-end computer or even mobile phone) to provide personalized avatars in 3D environments, as well as animations for entertainment. Other, less-friendly, applications are also quite possible. As this gets more realistic (and, arguably, it's pretty spookily realistic now), how difficult would it be to make a believable animation of someone saying something they never said just by using a single quick snapshot?

    For a real sense of just how weird this technology can be, click here. It's entirely safe for work, but arguably NSFS (not safe for sanity).

    • Word of the Day: Anthropocene -- the current geological era, marked by the accelerating human impact on the Earth. The term was first used in 2000 by Paul Crutzen, a scientist who popped up again last year as an advocate of looking at what would and wouldn't work in geoengineering.

    The question that comes to my mind, of course, is "what follows the Anthropocene?"

    If our civilization is destroyed, there won't be anyone to name the era, so let's set that scenario aside.

    If we suffer a significant die-back, and the planet starts to revert to pre-human influence conditions, then we'd probably end up calling it something like the "Rehabilicene."

    My bet, though, would be a world in which our information sensing, communication and analysis tools are so pervasive that they change every aspect of how we understand and manage the planet around us. A world so fully enriched by knowledge could only be called one thing:

    The Noöcene.

    • It's Future Conference Season: The Singularity Summit wasn't the only future-focused conference underway this month. Aubrey de Gray's Strategies for Engineered Negligible Senescence foundation assembled the third annual SENS Conference, in Cambridge UK, over September 6-10 (thus overlapping the Singularity Summit entirely), and the Center for Responsible Nanotechnology put together its own three-day event in Tucson, Arizona, from September 10-12. I particularly regret having to miss the latter, as the CRN Scenarios were unveiled there for the first time (more on that later).

    Fortunately, all was not lost: OtF friend Michael Anissimov live-blogged all three days of the CRN conference, providing in rich detail the proceedings of the various talks and conversations. This is a long, long blog entry, complete with some of Michael's own pictures. I look forward to his upcoming entry talking about his own reaction to the proceedings.

    One big agreement emerged from all of this, however: next time, the three big transformative technologies conferences won't all be scheduled for the same damn week.

    • Dollar Auctions, War and the Future: Oliver R. Goodenough, professor of law at Vermont Law School and a faculty fellow at the Berkman Center for Internet and Society at Harvard Law School, has a short, straightforward commentary in the Rutland, Vermont, Herald discussing how rational decisions can, in the aggregate, lead to disastrously undesirable results. He uses a classroom game called a "dollar auction," where students bid on a dollar; the twist is that the top bidder may win the dollar, but the #2 bidder has to pay up, as well.

    The problem surfaces when the bidders get up close to a dollar. After 99 cents the last vestige of profitability disappears, but the bidding continues between the two highest players. They now realize that they stand to lose no matter what, but that they can still buffer their losses by winning the dollar. They just have to outlast the other player. Following this strategy, the two hapless students usually run the bid up several dollars, turning the apparent shot at easy money into a ghastly battle of spiraling disaster.
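    The escalation logic is mechanical enough to simulate. The following minimal sketch is my own construction, not Goodenough's: two myopic bidders (in cents) each top the other whenever winning at the new bid would lose them less than forfeiting their standing bid. With small increments that condition holds at every step, so only an outside budget cap ends the bidding -- which is exactly the trap.

```python
def dollar_auction(prize=100, increment=5, budget=300):
    """Simulate two myopic bidders in a dollar auction (all values in cents).

    A bidder who is behind tops the rival's bid whenever winning at the
    new bid would cost less than conceding and forfeiting their own
    standing bid. Returns each player's final (forfeited or winning) bid.
    """
    bids = [0, 0]   # each player's highest bid so far
    turn = 0        # player 0 opens
    while True:
        rival = 1 - turn
        next_bid = bids[rival] + increment
        # Stop only if over budget, or if topping now would lose
        # more than conceding does:
        if next_bid > budget or next_bid - prize >= bids[turn]:
            break
        bids[turn] = next_bid
        turn = rival
    return bids

final = dollar_auction()
print(final, "total paid:", sum(final))  # [295, 300] total paid: 595
```

    With a 100-cent prize and a 300-cent budget, the two players together pay $5.95 for the dollar; raise the budget and the total loss rises with it, since the stopping condition is never triggered by the bids themselves.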

    Goodenough applies this concept to the Iraq war, but it strikes me that it's an interesting example of what commons theorist Peter Kollock, in The Anatomy of Cooperation, refers to as a "social trap," where rational near-term benefits can create nearly unavoidable long-term costs -- but where the consequences of changing behavior can be nearly as costly as continuing, and will continue to increase.

    One of the drivers of a social trap/dollar auction is the perception that, by bowing out of the competition, someone else will be benefitting from the result, offering a superior strategic (or economic) position. It's not just that I don't get the benefit myself, the logic goes, but my competitor gets it instead. This kind of trap is sadly commonplace in the world of environmental policy, where one can see it in the interactions between the US and China over signing onto carbon reduction measures.

    No grand conclusions, yet, on what can be done about this kind of engagement, but it's helpful to have a mental model for what's going on.

    • I'm Just Innocently Sousveilling the Nuclear Reactor, Officer: I took the picture at the top of the page from the airplane window, flying into Frankfurt on my way to Zürich. I have to admit, I felt a little suspicious snapping pictures of a nuclear plant from the air, and I know I got at least one odd look. No arrests have been made, however.

    September 13, 2007

    Greetings from Rüschlikon


    I write this gazing out over Lake Zurich, in a hotel room that seems quintessentially European: spare, vaguely futuristic, extremely stylish. I'm here as a guest of Swiss Reinsurance, the second-largest reinsurance company in the world, and a long-time leader in grappling with the implications of climate disruption on global systems. This is Swiss Re's "Centre for Global Dialogue," and I go on stage in just about two hours.

    I'll be delivering a talk tonight, and two more tomorrow, in my guise as an affiliate of the Institute for the Future, but I was asked to do this more because of the breadth of work I've done outside IFTF. And when asked to speak at Swiss Re, I jumped at the chance.

    Thinking about my presentation got me musing about the difficulty of imagining a future that's neither identical to the present, nor on the verge of apocalypse. Not a utopia, per se, but a future that gives us a bit more to hope for than to fear.

    I think it's because, to reverse Tolstoy, all unhappy futures are identical, but every happy future is happy in a different way. Unhappy futures, no matter their provenance -- environmental disaster, technological doom, bird flu, peak oil, civilizational suicide-by-spam -- are really about three basic fears: deprivation, pain and death. The relative balance of the three will vary, as will the proximate causes, but for the starving masses, it ultimately doesn't make much difference whether their demise was at the hands of a global climate collapse or a super-empowered high-tech terrorist.

    We know all too well, conversely, that definitions of happiness vary considerably between cultures and between individuals. A bucolic life of growing my own food and living amidst nature doesn't work as a "happy future" for me, but would be idyllic for some of you; neither of us, however, would likely welcome a future that would be a happy one for a religious zealot.

    Or, to put it in a more considered (and less pointed) fashion, we tend to recognize that happiness is contingent, and because we can so easily imagine how any given happy future could become less happy -- and have trouble imagining how a disastrous future, once underway, could become less apocalyptic -- it's far harder to accept that we might succeed (at avoiding doom, at improving our society, at changing our values, etc.) than that we might fail. It's my job to make those happier, or at least less-apocalyptic, futures easier to accept.

    Sometimes, being a futurist isn't about making forecasts or spotting trends.

    Sometimes, being a futurist means acting as a civilizational therapist.

    September 9, 2007

    Reactions to the Singularity Summit Talk

    A few bloggers -- and a couple of photographers -- took some notes on my talk at the Singularity Summit yesterday. Most simply recapped some of my lines (and one simply reprinted the whole talk), but I'll put the ones with commentary at the top:

    Bruce Sterling: "(((I'm really enjoying this, even though I believe that "Artificial Intelligence" is so far from the ground-reality of computation that it ought to be dismissed like the term "phlogiston.")))"

    Dan Farber, at ZDNet: "How a democratic, open process can be applied to a complex idea like Singularity, and the right choices made, remains a mystery."

    Mike Linksvayer: "My unwarranted extrapolation: the ideal of free software has some potential to substitute for the dominant ideal (representative democracy), but cannot compete directly, yet."

    Insider Chatter by Donna Bogatin: "...what does personal, direct experience become when observation and archiving of experience is the ultimate end game, rather than the activity itself? In other words, whatever happened to the joy of serendipitously living in the moment?"

    Singularity News

    David Orban

    Renee Blodgett, who includes some photos (one of which graces the top of this post).

    Frontier Channel

    And a special shout-out to a commentary at ZDNet by Chris Matyszczyk, who manages to get an entire article snarking on the event out of making fun of my name.

    Seriously.

    September 8, 2007

    Singularity Summit Talk: Openness and the Metaverse Singularity

    The following is the text of the presentation I'm giving today at the Singularity Summit. I've set the post to go live at the same time I go onto the stage. Update: this is now the corrected version, with the updated language of the talk I actually gave (last-minute edits hand-written in my notes for the win!).

    I was reminded, earlier this year, of an observation made by polio vaccine pioneer Dr. Jonas Salk. He said that the most important question we can ask of ourselves is, "are we being good ancestors?"

    This is a particularly relevant question for those of us here at the Summit. In our work, in our policies, in our choices, in the alternatives that we open and those that we close, are we being good ancestors? Our actions, our lives have consequences, and we must realize that it is incumbent upon us to ask if the consequences we're bringing about are desirable.

    It's not an easy question to answer, in part because it can be an uncomfortable examination. But this question becomes especially challenging when we recognize that even small choices matter. It's not just the multi-billion dollar projects and unmistakably world-altering ideas that will change the lives of our descendants. Sometimes, perhaps most of the time, profound consequences can arise from the most prosaic of topics.

    Which is why I'm going to talk a bit about video games.

    Well, not just video games, but video games and cameraphones and Google Earth and the myriad day-to-day technologies that, individually, may attract momentary notice, but in combination, may actually offer us a new way of grappling with the world. And just might, along the way, help to shape the potential for a safe Singularity.


    Earlier this year, I co-authored a document that I know some of you in the audience have seen: the Metaverse Roadmap Overview. In this work, along with my colleagues John Smart and Jerry Paffendorf, I sketch out four scenarios of how a combination of forces driving the development of immersive, richly connected information technologies may play out over the next decade. But what has struck me more recently about the Roadmap scenarios is that the four worlds could also represent four pathways to a Singularity. Not just in terms of the technologies, but -- more importantly -- in terms of the social and cultural choices we make while building those technologies.

    The four metaverse worlds emerged from a relatively commonplace scenario structure. We arrayed two spectra of possibility against each other, thereby offering four outcomes. Specialists sometimes refer to this as the "four-box" method, and it's a simple way of forcing yourself to think through different possibilities.

    This is probably the right spot to insert my first disclaimer: scenarios are not predictions, they're provocations. They're ways of describing different future possibilities not to demonstrate what will happen, but to suggest what could happen. They offer a way to test out strategies and assumptions—what would the world look like if we undertook a given action in these four futures?

    To construct our scenario set we selected two themes likely to shape the ways in which the Metaverse unfolds: the spectrum of technologies and applications ranging from augmentation tools that add new capabilities to simulation systems that model new worlds; and the spectrum ranging from intimate technologies, those that focus on identity and the individual, to external technologies, those that provide information about and control over the world around you. These two spectra collide and contrast to produce four scenarios.
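    The "four-box" method described above is simple enough to sketch in code. The following is a minimal illustration (the variable and function names are my own, purely for illustration) of how crossing the two spectra yields the four Metaverse scenarios named in the talk:

    ```python
    # A sketch of the "four-box" scenario method: two spectra of possibility
    # are arrayed against each other, and each combination names one scenario.
    # The pairings below are the ones given in the talk itself.
    from itertools import product

    technology_axis = ["Augmentation", "Simulation"]   # what the tech does
    focus_axis = ["Intimate", "External"]              # where the tech points

    scenario_names = {
        ("Simulation", "Intimate"): "Virtual Worlds",
        ("Simulation", "External"): "Mirror Worlds",
        ("Augmentation", "External"): "Augmented Reality",
        ("Augmentation", "Intimate"): "Lifelogging",
    }

    def four_box(axis_a, axis_b):
        """Cross two spectra to produce the four scenario cells."""
        return {pair: scenario_names[pair] for pair in product(axis_a, axis_b)}

    for (tech, focus), name in four_box(technology_axis, focus_axis).items():
        print(f"{tech} x {focus} -> {name}")
    ```

    The point of the exercise isn't the code, of course; it's that the method mechanically forces you to consider the cell you wouldn't have thought of on your own.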

    The first, Virtual Worlds, emerges from the combination of Simulation and Intimate technologies. These are immersive representations of an environment, one where the user has a presence within that reality, typically as an avatar of some sort. Today, this means World of Warcraft, Second Life, Sony Home and the like.

    Over the course of the Virtual Worlds scenario, we'd see the continued growth and increased sophistication of immersive networked environments, allowing more and more people to spend substantial amounts of time engaged in meaningful ways online. The ultimate manifestation of this scenario would be a world in which the vast majority of people spend essentially all of their work and play time in virtual settings, whether because the digital worlds are supremely compelling and seductive, or because the real world has suffered widespread environmental and economic collapse.

    The next scenario, Mirror Worlds, comes from the intersection of Simulation and Externally-focused technologies. These are information-enhanced virtual models or “reflections” of the physical world, usually embracing maps and geo-locative sensors. Google Earth is probably the canonical present-day version of an early Mirror World.

    While undoubtedly appealing to many individuals, in my view, the real power of the Mirror World setting falls to institutions and organizations seeking to have a more complete, accurate and nuanced understanding of the world's transactions and underlying systems. The capabilities of Mirror World systems are enhanced by a proliferation of sensors and remote data gathering, giving these distributed information platforms a global context. Geospatial, environmental and economic patterns could be easily represented and analyzed. Undoubtedly, political debates would arise over just who does, and does not, get access to these models and databases.

    Thirdly, Augmented Reality looks at the collision of Augmentation and External technologies. Such tools would enhance the external physical world for the individual, through the use of location-aware systems and interfaces that process and layer networked information on top of our everyday perceptions.

    Augmented Reality makes use of the same kinds of distributed information and sensory systems as Mirror Worlds, but does so in a much more granular, personal way. The AR world is much more interested in depth than in flows: the history of a given product on a store shelf; the name of the person waving at you down the street (along with her social network connections and reputation score); the comments and recommendations left by friends at a particular coffee shop, or bar, or bookstore. This world is almost vibrating with information, and is likely to spawn as many efforts to produce viable filtering tools as there are projects to assign and recognize new data sources.

    Lastly, we have Lifelogging, which brings together Augmentation and Intimate technologies. Here, the systems record and report the states and life histories of objects and users, enhancing observation, recall, and communication. I've sometimes talked about one version of this as the "participatory panopticon."

    Here, the observation tools of an Augmented Reality world get turned inward, serving as an adjunct memory. Lifelogging systems are less apt to be attuned to the digital comments left at a bar than to the spoken words of the person at the table next to you. These tools would be used to capture both the practical and the ephemeral, like where you left your car in the lot and what it was that made your spouse laugh so much. Such systems have obvious political implications, such as catching a candidate's gaffe or a bureaucrat's corruption. But they also have significant personal implications: what does the world look like when we know that everything we say or do is likely to be recorded?

    This underscores a deep concern that crosses the boundaries of all four scenarios: trust.

    "Trust" encompasses a variety of key issues: protecting privacy and being safely visible; information and transaction security; and, critically, honesty and transparency. It wouldn't take much effort to turn all four of these scenarios into dystopias. The common element of the malevolent versions of these societies would be easy to spot: widely divergent levels of control over and access to information, especially personal information. The ultimate importance of these scenarios isn't just the technologies they describe, but the societies that they create.

    So what do these tell us about a Singularity?

    Second disclaimer time: although I worked with John and Jerry on the original Metaverse scenarios, they should not be blamed for any of what follows.

    Across the four Metaverse scenarios, we can see a variety of ways in which the addition of an intelligent system would enhance the user's experience. Dumb non-player characters and repetitive bots in virtual worlds, for example, might be replaced by virtual people essentially indistinguishable from characters controlled by human users. Efforts to make sense of the massive flows of information in a Mirror World setting would be enormously enhanced with the assistance of a sophisticated machine analyst. Augmented Reality environments would thrive with truly intelligent agent systems, knowing what to filter and what to emphasize. In a lifelogging world, an intelligent companion in one's mobile or wearable system would be needed in order to figure out how to index and catalog memories in a personally meaningful way; it's likely that such a system would need to learn how to emulate your own thought processes, becoming a virtual shadow.

    None of these systems would truly need to be self-aware, self-modifying intelligent machines -- but in time, each could lead to that point.

    But if the potential benefits of these scenaric worlds would be enhanced with intelligent information technology, so too would the dangers. Unfortunately, avoiding dystopian outcomes is a challenge that may be trickier than some may expect -- and is one with direct implications for all of our hopes and efforts for bringing about a future that would benefit human civilization, not end it.

    It starts with a basic premise: software is a human construction. That's obvious when considering code written by hand over empty pizza boxes and stacks of paper coffee cups. But even the closest process we have to entirely computer-crafted software -- emergent, evolutionary code -- still betrays the presence of a human maker: evolutionary algorithms may have produced the final software, and may even have done so in ways that remain opaque to human observers, but the goals of the evolutionary process, and the selection mechanism that drives the digital evolution towards these goals, are quite clearly of human origin.

    To put it bluntly, software, like all technologies, is inherently political. Even the most disruptive technologies, the innovations and ideas that can utterly transform society, carry with them the legacies of past decisions, the culture and history of the societies that spawned them. Code inevitably reflects the choices, biases and desires of its creators.

    This will often be unambiguous and visible, as with digital rights management. It can also be subtle, as with operating system routines written to benefit one application over its competitors (I know some of you in this audience are old enough to remember "DOS isn't done 'til Lotus won't run"). Sometimes, code may be written to reflect an even more dubious bias, as with the allegations of voting machines intentionally designed to make election-hacking easy for those in the know. Much of the time, however, the inclusion of software elements reflecting the choices, biases and desires of its creators will be utterly unconscious, the result of what the coders deem obviously right.

    We can imagine parallel examples of the ways in which metaverse technologies could be shaped by deeply-embedded cultural and political forces: the obvious, such as lifelogging systems that know to not record digitally-watermarked background music and television; the subtle, such as augmented reality filters that give added visibility to sponsors, and make competitors harder to see; the malicious, such as mirror world networks that accelerate the rupture between the information haves and have-nots -- or, perhaps more correctly, between the users and the used; and, again and again, the unintended-but-consequential, such as virtual world environments that make it impossible to build an avatar that reflects your real or desired appearance, offering only virtual bodies sprung from the fevered imagination of perpetual adolescents.

    So too with what we today talk about as a "singularity." The degree to which human software engineers actually get their hands dirty with the nuts & bolts of AI code is secondary to the basic condition that humans will guide the technology's development, making the choices as to which characteristics should be encouraged, which should be suppressed or ignored, and which ones signify that "progress" has been made. Whatever the degree to which post-singularity intelligences would be able to reshape their own minds, we have to remember that the first generation will be our creations, built with interests and abilities based upon our choices, biases and desires.

    This isn't intrinsically bad; emerging digital minds that reflect the interests of their human creators is a lever that gives us a real chance to make sure that a "singularity" ultimately benefits us. But it holds a real risk. Not that people won't know that there's a bias: we've lived long enough with software bugs and so-called "computer errors" to know not to put complete trust in the pronouncements of what may seem to be digital oracles. The risk comes from not being able to see what that bias might be.

    Many of us rightly worry about what might happen with "Metaverse" systems that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society -- or best possible spam. Imagine the risks associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do.

    For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness.

    We need an Open Singularity.

    At minimum, this means expanding the conversation about the shape that a singularity might take beyond a self-selected group of technologists and philosophers. An "open access" singularity, if you will. Dr. Kurzweil's books are a solid first step, but the public discourse around the singularity concept needs to reflect a wider diversity of opinion and perspective.

    If the singularity is as likely and as globally, utterly transformative as many here believe, it would be profoundly unethical to make it happen without including all of the stakeholders in the process -- and we are all stakeholders in the future.

    World-altering decisions made without taking our vast array of interests into account are intrinsically flawed, likely fatally so. They would become catalysts for conflicts, potentially even the triggers for some of the "existential threats" that may arise from transformative technologies. Moreover, working to bring in diverse interests has to happen as early in the process as possible. Balancing and managing a global diversity of needs won't be easy, but it will be impossible if democratization is thought of as a bolt-on addition at the end.

    Democracy is a messy process. It requires give-and-take, and an acknowledgement that efficiency is less important than participation.

    We may not have an answer now as to how to do this, how to democratize the singularity. If this is the case -- and I suspect that it is -- then we have added work ahead of us. The people who have embraced the possibility of a singularity should be working at least as hard on making possible a global inclusion of interests as they do on making the singularity itself happen. All of the talk of "friendly AI" and "positive singularities" will be meaningless if the only people who get to decide what that means are the few hundred of us in this room.

    My preferred pathway would be to "open source" the singularity, to bring in the eyes and minds of millions of collaborators to examine and co-create the relevant software and models, seeking out flaws and making the code more broadly reflective of a variety of interests. Such a proposal is not without risks. Accidents will happen, and there will always be those few who wish to do others harm. But the same is true in a world of proprietary interests and abundant secrecy, and those are precisely the conditions that can make effective responses to looming disasters difficult. With an open approach, you have millions of people who know how dangerous technologies work, know the risks that they hold, and are committed to helping to detect, defend and respond to crises. That these are, in Bill Joy's term, "knowledge-enabled" dangers means that knowledge also enables our defense; knowledge, in turn, grows faster as it becomes more widespread. This is not simply speculation; we've seen time and again, from digital security to the global response to SARS, that open access to information-laden risks ultimately makes them more manageable.

    The metaverse roadmap offers a glimpse of what the next decade might hold, but does so recognizing that the futures it describes are not end-points, but transitions. The choices we make today about commonplace tools and everyday technologies will shape what's possible, and what's imaginable, with the generations of technologies to come. If the singularity is in fact near, the fundamental tools of information, collaboration and access will be our best hope for making it happen in a way that spreads its benefits and minimizes its dangers -- in short, making it happen in a way that lets us be good ancestors.

    If we're willing to try, we can create a future, a singularity, that's wise, democratic and sustainable -- a future that's open. Open as in transparent. Open as in participatory. Open as in available to all. Open as in filled with an abundance of options.

    The shape of tomorrow remains in our grasp, and will be determined by the choices we make today. Choose wisely.

    September 6, 2007

    Opportunity Green

    On Saturday, November 17, I'll be speaking in Los Angeles at the first annual Opportunity Green conference, looking at the future of "sustainable business." Opportunity Green is being produced in cooperation with UCLA's Sustainable Resource Center.

    I'm the least business-focused of the four "keynote" speakers, but it looks like it will be an interesting mix of personalities (albeit in the form of four middle-aged white guys... sigh).

    Hope to see you there!

    September 4, 2007

    Visionary (?)

    "Don't be deceived when they tell you things are better now. Even if there's no poverty to be seen because the poverty's been hidden. Even if you ever got more wages and could afford to buy more of these new and useless goods which industries foist on you and even if it seems to you that you never had so much, that is only the slogan of those who still have much more than you.

    Don't be taken in when they paternally pat you on the shoulder and say that there's no inequality worth speaking of and no more reason to fight because if you believe them they will be completely in charge in their marble homes and granite banks from which they rob the people of the world under the pretence of bringing them culture.

    Watch out, for as soon as it pleases them they'll send you out to protect their gold in wars whose weapons, rapidly developed by servile scientists, will become more and more deadly until they can with a flick of the finger tear a million of you to pieces."

      – Attributed to Jean Paul Marat (May 24, 1743 – July 13, 1793), but likely from Peter Weiss's play Marat/Sade

    (BTW, if anyone has a direct source for this quote, I'd love the precise reference.)

    (Please check the comments for discussion of attribution.)

    Jamais Cascio

    Contact Jamais • Bio

    Co-Founder, WorldChanging.com

    Director of Impacts Analysis, Center for Responsible Nanotechnology

    Fellow, Institute for Ethics and Emerging Technologies

    Affiliate, Institute for the Future


    Creative Commons License
    This weblog is licensed under a Creative Commons License.