
July 28, 2015

High-Frequency Combat

Science and technology luminaries Stephen Hawking, Elon Musk, and Steve Wozniak count among the hundreds of researchers pledging support for a proposed ban on the use of artificial intelligence technologies in warfare. In "Autonomous Weapons: an Open Letter from AI & Robotics Researchers", the researchers (along with thousands of citizens not directly involved in AI research) call on the global community to ban "offensive autonomous weapons beyond meaningful human control." They argue that the ability to deploy fully-autonomous weapons is imminent, and that the potential dangers of a "military AI arms race" are enormous -- not just in the "blow everything up" sense (we've been able to do that quite nicely for decades) but in the "cause havoc" sense. They call out:

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

They don't specify in the open letter (which is surprisingly brief), but the likely rationale for why autonomous weapons would be particularly useful for assassinations, population control, and genocide is that they wouldn't say "no." Despite the ease with which human beings can be goaded into perpetrating atrocities, there are lines that some of us will never cross, no matter the provocation. During World War II, only 15-20 percent of U.S. soldiers in combat actually fired upon enemy troops, at least according to Brigadier General S.L.A. Marshall; while some debate his numbers, it's clear that a significant fraction of soldiers will say "no" even to lawful orders. Certainly a higher percentage of troops will refuse to carry out unlawful and inhumane orders.

Autonomous weapons wouldn't say no.

There's another problematic aspect, alluded to in the title of this piece: autonomous military systems will make decisions far faster than the human mind can follow, sometimes for reasons that will elude researchers studying the aftermath. The parallel here is to "high-frequency trading" systems, operating in the stock market at a speed and with a sophistication that human traders simply can't match. The problem here is manifold:

  • High-speed decision-making will push against any attempt by human leaders to think through consequences -- not by making that consideration impossible, but by making it inefficient or even dangerous. If your opponent is using "high-frequency" military AI (HFMAI), a slow response may be detrimental to your future. (A toy illustration of this speed mismatch follows this list.)
  • HFMAI can make opaque decisions, again potentially undermining longer-term strategic thinking. Note that "autonomous weapons" and "high-frequency military AI" do not mean fully self-aware, Singularity-style super-intelligent machines able to consider long-term consequences. HFMAI in the near term will be complex software designed to make specific kinds of decisions on the spot. If you've ever watched a game AI do something that gains a quick benefit but weakens its long-term position, or is simply utterly inscrutable, you'll understand what I mean.
  • Worst of all, just as with high-frequency trading systems, opponents will figure out how to spoof, confuse, or otherwise game the HFMAI software. Think of zero-day exploits tricking your weapons into making bad decisions.
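The speed mismatch in the first point is easy to make concrete. Here's a toy sketch in Python -- every constant and rule is an invented assumption, not a model of any real system; the point is only the ratio between machine and human decision loops:

```python
HUMAN_REACTION_MS = 2_000   # an optimistic human decision time, in milliseconds
AI_REACTION_MS = 5          # one automated decision loop, in milliseconds

def respond(level: float) -> float:
    """Each side's rule of thumb: answer the last move slightly harder."""
    return level * 1.1

threat_level, rounds = 1.0, 0
for _ in range(0, HUMAN_REACTION_MS, AI_REACTION_MS):
    threat_level = respond(threat_level)   # one tit-for-tat exchange
    rounds += 1

print(f"{rounds} automated exchanges before the first human decision")
print(f"escalation factor by then: {threat_level:.2e}")
```

With these made-up numbers, two systems trade four hundred escalating moves before the first human judgment even arrives.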

Although I signed the open letter, I do think that fully-autonomous weapon systems aren't quite as likely as some fear. I'm frankly more concerned about semi-autonomous weapon systems: technologies that give human operators the illusion of control while restricting options to pre-programmed limits. If your software is picking out bombing targets for you, tapping the "bomb now" button on the screen may technically give you the final say, but ultimately the computer code is deciding what to attack. Or, conversely, computer systems that decide when to fire after you pull the trigger -- giving even untrained shooters uncanny accuracy -- distance the human action from the violent result.

With semi-autonomous weapons, the human bears responsibility for the outcome, but retains less and less agency to actually control it -- whether or not he or she recognizes this. That's a more subtle, but potentially more dangerous, problem. One that's already here.

    December 12, 2012

    I'm Just a Love Machine

    [Image: Maria, the robot from Metropolis]

    Artifice and Consent in the Age of Robotics

    The notion of robot love has a long history, and by far the dominant emphasis has been on its erotic manifestation. After all, the reasoning goes, a sufficiently advanced robot would offer all of the physical pleasure of a real partner with no emotional entanglements, personal judgments, or dissipating affections, in an un-aging body that can be sculpted to look exactly as one desires. Famous movie actors and actresses might even set up a lucrative side business licensing their bodily images to robot manufacturers, long after time and nature have taken their toll.

    In this scenario, physical beauty wouldn’t be the only attraction. A robotic lover would never say no, and would willingly embrace one’s darkest fantasies without revulsion. Curiosities, kinks, and perversions could be explored safely, without the potential to harm or exploit any other person.

    Given all of this, it seems that sex with robots is almost over-determined. It’s a cliché to assert that sex is a prime driver of digital innovation, but that has certainly been true for many Internet-related technologies. It’s unclear how readily that would translate to robotics, but one indicator is the abundance of the “sex bot” trope (in both male and female forms) in popular fiction, from “Lucy LiuBot” in Futurama to “Gigolo Joe” in A.I.

    Such scenarios remain, for now, deeply embedded in the world of fiction, but it’s not hard to imagine that we’re already halfway there. A quick visit to a present-day sex toy website will find hundreds of life-like devices, for both men and women, available for physical enjoyment (although it’s interesting to note that the vast majority of life-like sex toys are built for women, not men). For those customers with deeper pockets, full-size sex dolls, with internal articulated skeletons and life-like silicone bodies—and all necessary orifices and/or protuberances—can be had for around $5,000. Admittedly, these sexual tools are only marginally robot-like; at best, some offer limited motions, or make triggered noises. Sex bots that actively participate in the encounter remain fevered dreams.

    Unsurprisingly, there are plenty of critics of the very idea of a sex robot. Most focus on sexualized gynoids in fiction, arguing (fairly convincingly) that most non-parody uses of female-appearing sex bots embody larger social biases about women’s roles. But some critiques attack the potential reality of sex bots, not just their use as metaphor. Here, the fears focus on the possible disruption to social norms arising from the availability of artificially “perfect” sexual partners.

    At minimum, critics claim, the presence of sex bots would begin to alter expectations for how members of the appropriate sex should look and behave. This follows from similar arguments about how present-day popular culture shapes desires, often through images manipulated to portray an almost inhuman level of attractiveness—only now, this once-unattainable beauty has an entirely attainable physical form. Even more troubling for critics, sex bots are inherently willing to do whatever a person may want; real mates would never be as agreeable and as submissive to one’s desires as a machine you programmed yourself.

    In these fearful scenarios, the appeal of human sexual partners can do nothing but wither in comparison to the lust-made-”flesh” of a sex bot. The inevitable result of people foregoing real relationships in favor of perfect (but non-reproducing) partners is, of course, the End of Civilization. It’s as if these critics see sex as the only driver for human relationships, and are all-too-ready to abandon any other form of intimate connection. Fortunately, there are strong drivers for bonding that go beyond physical coupling.

    But even if the critics exaggerate the possibility of a “sex bot apocalypse,” there is a more subtle cultural complication that would arise along with LoveMakerBots. Our fundamental laws and norms around sex come down to consent: entities that are incapable of giving true consent are off-limits. A robot can be programmed to be constantly willing, but—absent the emergence of self-aware artificial intelligence—cannot be programmed to give true consent. This isn’t something many of us worry about when it comes to, say, vibrators, but when the design of a robot elicits an empathic, emotional reaction, intentionally or otherwise, an inability to give consent may, for some, move unexpectedly from irrelevant to deeply disturbing.

    As the robotic devices we build trigger our emotional sensitivities in more and more complex ways, some of us will find it difficult to simply dismiss sex bots as nothing more than advanced models of sex toys. Sex play with a lifeless device is one thing; sex play with something that acts as if it has feelings (no matter how artificial), but inherently cannot say “no,” is quite another. And the more that these artificial feelings replicate and generate human responses, the more difficult this problem will become.

    This is where we may see the first signs of a real dispute over the ethics of how robots will be treated. Sex bots offer a dilemma that overlaps issues of sexual norms, non-human rights, gender, technology’s social role, religion, even economics (for example: if inexpensive sex robots exist, what would happen to women who had been working as prostitutes for economic survival?); as such, it will be a conflict that will swiftly escalate in intensity and rancor.

    Early debates on the treatment of robots may be driven, at least in part, by a sense of “wrongness” about the treatment of something that looks—and increasingly feels—human. What does it do to us, as humans, to treat something that looks and acts as if it is human in every important way as little more than a toy to be shoved under the bed? This argument may end up being the first shot in a larger battle over where autonomous devices fit in our society.

    It’s an ironic scenario: the sex bot, conceived of as little more than a vibrator that talks, may end up being the catalyst for the fight for true robot rights.

    August 23, 2012

    CRUSH ALL HU-MANS (aka, the Robot Economy)

    The Huffington Post, a well-known news/blogging site, has just started a new thing: HuffPost Live, a 12-hour-a-day streaming video program. It seems to cover a fairly wide array of topics, but with the kind of pop-politics, pop-technology slant found on the main site. With in-person hosts, video chats via Google Hangout, and abundant viewer commentary, it seems to live comfortably in that interzone between niche cable channel and YouTube. Of course, the reason I know about this is that I was asked yesterday morning to appear on a 20-minute segment last evening. The topic? Robots taking our jobs!

    Unfortunately, HuffPost Live doesn't seem to allow embedding their videos, so if you're interested you'll have to follow this link.

    The conversation was better than I expected, and I got a chance to bring up the "pink collar future" idea that I've been exploring of late. My fellow talking heads, "Buster Brown" and Wayne Caswell, offered good alternative perspectives, and the discussion occasionally got lively. The format, however, left a bit to be desired -- Google Hangout video seems to do a lousy job of synchronizing audio and video feeds. This made my jazz-hands-and-hyperactivity discussion mode even more distracting than usual (one commenter wished that he could break my arms to stop me), but I offer no apologies. Regardless, it's clear that this is a topic needing greater discussion; as the HPL folks said they found me engaging, I may be back...

    November 29, 2011

    The Prevail Project

    Joel Garreau has one of the most sensitive radars for big changes of anyone I know. I first met him back at GBN, and I quickly came to realize that I should pay very close attention to whatever he's thinking about or working on -- and what he's working on now is definitely worth the time to check out.

    The "Prevail Project" (named for one of the scenarios in his book Radical Evolution) at the Sandra Day O'Connor College of Law at Arizona State University is an attempt to draw together people thinking about -- and building -- a livable human future, one that uses (but is not dominated by) transformative technologies.

    Joel's statement in the press release sums up his perspective:

    "Prevailproject.org will be a place for everybody from my mother to technologists inventing the future to grapple with some of the most pressing questions of our time: How are the genetics, robotics, information and nano revolutions changing human nature, and how can we shape our own futures, toward our own ends, rather than being the pawns of these explosively powerful technologies?” said Joel Garreau, the Lincoln Professor of Law, Culture and Values at the Sandra Day O’Connor College of Law at Arizona State University, and director of The Prevail Project: Wise Governance for Challenging Futures.

    “The Prevail Project is a collaborative effort, worldwide, to see if we can help accelerate this social response to match or exceed the pace of technological change,” Garreau said. “The fate of human nature hangs in the balance.”

    I'll set aside my resistance to the traditional "social response to technological change" model to celebrate the placement of this project in the Law School, and not as part of the school of engineering or some technical discipline. It's far too common to see these issues dominated by technologists (and technology-fetishists) with little understanding of law and culture; it's vital to get a more sophisticated understanding of society into the conversation.

    As the Prevail Project kicks off its public unveiling, it has invited a set of writers to offer up their thoughts on what it means to "prevail" in a transformative future. Bruce Sterling's essay went up yesterday; mine went up today.

    February 24, 2011

    Homesteading a Society of Mind

    Scientific American reports on research done at Cornell's Computational Synthesis Laboratory intended to give robot minds a degree of "self-awareness." The initial version gave the robot a way of watching and analyzing its own body, so that it could more readily adapt to new conditions (such as losing a limb). The next version, however, was much more ambitious:

    Now, instead of having robots modeling their own bodies, Lipson and Juan Zagal, now at the University of Chile in Santiago, have developed ones that essentially reflect on their own thoughts. They achieve such thinking about thinking, or metacognition, by placing two minds in one bot. [...] By reflecting on the first controller's actions, the second one could make changes to adapt to failures... In this way the robot could adapt after just four to 10 physical experiments instead of the thousands it would take using traditional evolutionary robotic techniques.

    They refer to this system of having one controller analyze another as "metacognition," but what immediately came to mind for me was Marvin Minsky's description of a "Society of Mind" -- the idea that the conscious mind is an emergent process resulting from multiple independent sub-cognitive processes working in parallel. This piece at MIT gives a better overview of the Society of Mind argument than the Wikipedia stub, including this quote from a Minsky essay on the concept:

    The mind is a community of "agents." Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. [...] In our picture of the mind we will imagine many "sub-persons", or "internal agents", interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or "censoring" others from thinking forbidden thoughts.

    Clearly, a two-"agent" robot mind isn't quite a real "society of mind" -- it's more like a "neighborly acquaintance of mind." Nonetheless, it shows an obvious direction for further research, as well as offering interesting support for Minsky's idea.
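    As a minimal sketch of what "one controller reflecting on another" can look like in practice, here's a toy version in Python. It's my own illustrative construction, not the Cornell group's code: a primary controller acts on a body whose response has changed, and a "meta" controller watches the gap between intended and actual outcomes and retunes the first.

```python
class Controller:
    """Acts via a simple internal model: command = gain * desired movement."""
    def __init__(self, gain: float = 1.0):
        self.gain = gain

    def command(self, target: float) -> float:
        return self.gain * target


class MetaController:
    """Reflects on the first controller's results and corrects its model."""
    def adjust(self, ctrl: Controller, target: float, outcome: float,
               lr: float = 0.8) -> None:
        # Nudge the gain in proportion to how far the outcome missed the goal.
        ctrl.gain += lr * (target - outcome) / max(abs(target), 1e-9)


BODY_RESPONSE = 0.4   # the limb now responds at 40% strength (say, damage)
bot, meta = Controller(), MetaController()

for trial in range(8):                  # a handful of experiments, not thousands
    target = 1.0                        # desired movement size
    outcome = BODY_RESPONSE * bot.command(target)
    meta.adjust(bot, target, outcome)

print(f"learned gain: {bot.gain:.2f} "
      f"(full compensation would be {1 / BODY_RESPONSE:.2f})")
```

    After eight trials the toy controller has recovered most of the lost response without the meta-layer ever being told what broke -- which is, in miniature, the appeal of the approach.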

    It also echoes something I wrote in 2003, for the Transhuman Space: Toxic Memes game book. In discussing why AI "infomorphs" weren't significantly smarter than humans, I offered up this:

    Despite their different material base, human minds and AI minds are remarkably similar in form. Both display consciousness as an emergent amalgam of subconscious processes. For humans, this was first suggested well over a century ago, most famously in the work of Marvin Minsky and Daniel Dennett, and proven by the notorious Jiap Singh “consciousness plasticity” experiments of the 2030s. [...] In the same way, nearly all present-day AI infomorphs use an emergent-mind structure made up of thousands of subminds, each focused on different tasks. There is no single “consciousness” system; thought, awareness, and even sapience emerge from the complex interactions of these subprocesses. Increased intellect... is the result of increasingly complex subsystems.

    We're still a ways away from declaring this a successful predictive hit, but it's amusing nonetheless.


    November 19, 2009

    New Fast Company: The Meowtrix

    [Illustration: I CAN HAS SINGULARITY?]

    My new Fast Company essay is now up, looking at the news that IBM researchers have produced a cortical computing system with the connection complexity of a cat's brain. (My original title is shown here on the illustration; the replacement title is a bit inaccurate, and I've suggested a fix, so let's just move along.) It's a follow-up to the research from a couple of years ago on a mouse-scale brain simulation; we're still on target for a human-level brain connection simulation by 2020.

    All of the stories about this, including my own, have emphasized the cat brain aspect, but in reality the truly nifty development is the improved ability to map brain structures using advanced MRI and supercomputer modeling.

    Ultimately, this is a very interesting development, both for the obvious reasons (an artificial cat brain!) and because of its associated "Blue Matter" project, which uses supercomputers and magnetic resonance to non-invasively map out brain structures and connections. The cortical sim is intended, in large part, to serve as a test-bed for the maps gleaned by the Blue Matter analysis. The combination could mean taking a reading of a brain and running the shadow mind in a box.
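    To make "running the shadow mind in a box" concrete, here's an illustrative-only sketch (mine, using NumPy -- not IBM's system; a random matrix stands in for a scanned connectome, and simple rate dynamics stand in for a real cortical simulator) of the basic recipe: treat a mapped connectivity matrix as the weights of a simulated network and let it run.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000                                      # toy size; a cat brain has roughly 7.6e8 neurons
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # stand-in for "measured" connections

activity = rng.random(n)                       # initial neural activity
for step in range(100):                        # discrete-time rate dynamics
    activity = np.tanh(W @ activity)           # each unit sums its weighted inputs

print("mean |activity| after 100 steps:", float(np.abs(activity).mean()))
```

    Everything hard about the real project lives in getting W right -- which is exactly why the Blue Matter mapping work is the truly nifty part.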

    Science fiction writers will have a field day with this, especially if they develop a way to "write" neural connections, and not just read them. Brain back-ups? Shadow minds in a box, used to extract secret knowledge? Hypercats, with brains operating at a thousand times normal speed? The mind reels.

    The phrase "shadow minds" should be familiar to anyone who read the Transhuman Space game books -- this is almost exactly what the game talked about, and on an even more aggressive schedule!

    October 26, 2009

    Well, You Can Tell By the Way I Use My Walk...

    ...I've got robot legs, but no mouth to talk.

    And again! With the shoving!

    Boston Dynamics really likes to abuse its robots.

    (For the whippersnappers in the audience who don't get the title reference, here. Yes, the usage is ironic. And get offa my lawn.)

    October 12, 2009

    New FC: Singularity Scenarios

    Singularity Scenarios

    My latest Fast Company essay goes up today, talking about the different scenarios for a "Singularity" that arise when you take into account different cultural and political drivers for both before and after the development of greater-than-human intelligence.

    Three of the four scenarios (leaving aside "Out of Control") assume that human social intelligence, augmentation technology, and competition continue to develop. And in all three, human civilization -- with its resulting conflicts and mistakes, communities and arts, and, yes, politics -- remains a vital force even after a Singularity has begun.

    One key aspect of the three is that they're not necessarily end states. Each could, given the right drivers, eventually evolve into one of the others. Moreover, all three could in principle exist side-by-side.

    I noted earlier that I differ from many of the Singularity enthusiasts in my take on what happens before and what happens after a Singularity. I suppose I differ in my take on what happens during one, as well. I don't think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late-followers learn from the reactions and dilemmas of those who had initially encountered the disruptive change.

    Ultimately, I think the "singularity" language has outlived its usefulness. By positing that the culmination of certain technological changes is simply Beyond the Minds of Mortal Men, the concept both dismisses (or greatly downplays) the potential of human action to modify the evolution of the technologies, and undermines the stated desire of many Singularity proponents to avoid disastrous outcomes. "If it's completely out of our hands, then why worry?" is not exactly the mantra of a responsible, safe, globally beneficial future.

    October 4, 2009

    "Singularity Salon" Talk

    Here's my slide deck from my talk at last night's New York Futures Salon. This is the raw Slideshare conversion, so a few of the transitions end up as blank slides (and you lose all of the nifty Keynote effects).

    The talk was videotaped, and the recording will be available on the net Real Soon Now. I'll post a link when it's available. Overall, the talk went well: good questions, good crowd (it ended up being considerably more crowded than the early gathering shown below). I'll have more to say in this week's Fast Company piece.

    [Photo: waiting to begin my talk]

    September 29, 2009

    New FC: The Singularity and Society

    My Fast Company essay this week is a long one, offering up an overview of the Singularity concept for people who haven't been following it closely -- as well as some thoughts about what might be missing.

    Despite the presence of the Singularity concept within various (largely online) sub-cultures, it remains on the edges of common discussion. That's hardly a surprise; the Singularity concept doesn't sit well with most people's visions of what tomorrow will hold (it's the classic "the future is weirder than I expect" scenario). Moreover, many of the loudest voices discussing the topic do so in a manner that's uncomfortably messianic. Assertions of certainty, claims of inevitability, and the dismissal of the notion that humankind has any choice in the matter--all for something that cannot be proven, and is built upon a nest of assumptions--do tend to drive away people who might otherwise find the idea intriguing.

    And that's a problem, as the core of the Singularity argument is actually pretty interesting, and worth thinking about. Increasing functional intelligence--whether through smarter machines or smarter people--will almost certainly disrupt how we live in pretty substantial ways, for better and for worse. And there have been periods in our history where the combination of technological change and social change has resulted in quite radical shifts in how we live our lives--so radical that the expectations, norms, and behaviors of pre-transformation societies soon become out of place in the post-transformation world.

    The essay ends with an invitation to join me for the Singularity Salon in New York this Saturday. Cross-marketing, people!

    August 2, 2009

    Cascio's Laws of Robotics: The Motion Picture

    Last March, I gave a talk in Menlo Park entitled "Cascio's Laws of Robotics." I've already posted a link to the slides I used, and to essays and interviews covering related topics. Now -- finally -- the video of the talk is available.

    It was shot in HD, and looks pretty good if you make it full-screen. It runs just under 70 minutes, but is -- if I do say so myself -- a fairly interesting talk.

    Thanks to Monica Anderson for organizing the event, and for the terrific job she did editing the video.

    February 4, 2009

    Flunking Out

    So, Singularity University is now up and running (and has evidently fixed its web hosting problem). I've had a few people already ask me what I think of it. Based on what I've seen so far, I can just say:

    This is about as close to getting it wrong as I could imagine.

    I find the name and slogan annoying, but let's set those aside. I'm mostly astounded -- and not in a good way -- by the academic tracks. For those of you who haven't yet ventured into SU's ivy-covered marble halls, they are:

    1. Future Studies & Forecasting
    2. Networks & Computing Systems
    3. Biotechnology & Bioinformatics
    4. Nanotechnology
    5. Medicine, Neuroscience & Human Enhancement
    6. AI, Robotics, & Cognitive Computing
    7. Energy & Ecological Systems
    8. Space & Physical Sciences
    9. Policy, Law & Ethics
    10. Finance & Entrepreneurship

    The message here? People don't matter.

    The first track is just Singularitarianism 101. The next seven cover technology-based industries -- the mix of "here's what you can invest in now!" with "here's something that we can imagine" still to be determined. The last one, on "Finance & Entrepreneurship," gives away the game with its introduction: "...how can we monetize this new knowledge of future technologies?"

    The only one that gives a glance at social forces? The catch-all on "Policy, Law & Ethics." Nice that they can fit all of those issues, which have consumed the human mind for millennia, into a single theme. Too bad they couldn't have found room for politics (which is not the same as policy), economics (sorry, finance isn't the same thing, either), demographics, history, cities and urban planning, trade and resources, or war, let alone art, media, psychology, or cultural studies, too.

    For an institution that claims to be "preparing humanity for accelerating technological change," it sure seems to be spending a lot more time talking about nifty gadgets than about the connection between technology and society.

    To put it another way: this is all about the symptoms of "accelerating technological change," and almost nothing about the consequences.

    For a trade show or a business workshop, that's fine. For something calling itself a university, it's amazingly short-sighted. Given the nature of the subject matter, that's especially ironic/tragic.

    Of course, constructive criticism is always more useful than ranty carping, so -- having noticed that they say that their academic tracks are still being created -- here's what I think they should have as their areas of study (limiting myself to ten, as well, albeit by cheating a bit):

      [Intro:] Future Studies & Forecasting:
      With Ray K as the chancellor, you're not going to get away without a Singularity 101 session -- but this doesn't need to be a full track.

    1. Remaking Our Bodies:
      Understanding biotech, radical longevity, and enhancement.

    2. Remaking Our World:
      Understanding energy, ecological systems, and nanotechnologies.

    3. Remaking Our Minds:
      Understanding neurotech, cognitive systems, and AI.

    4. Power and Conflict:
      Emphasizing the role that political choices have in shaping technology.

    5. Scarcity, Trade, and Economics:
      How does scarcity manifest in an accelerating tech world? How do you deal with mass unemployment, technology diffusion, leapfrogging?

    6. Demography, Aging, and Human Mobility:
      Shifts in population and cultural identity; understanding impact of extending life.

    7. Human Identity and Communication:
      Understanding the changing nature of identity in a densely-linked world, looking at how different forms of identity clash.

    8. Governance and Law:
      How does governance emerge? How are laws about technology shaped?

    9. Ethics, Morality, and Unintended Consequences:
      How ethics emerges in a swiftly-changing environment; morality and technology; precautionary/proactionary principles.

    10. Openness, Resilience, and Models for Dealing with Rapid Transformation:
      Open source, open access, open governance; understanding resilience.

    That is: three tracks on emerging techs, two tracks on political/economic impacts, two tracks on human/culture impacts, and three on the processes and institutions that grapple with large-scale change. These kinds of classes would be much harder to put together than "This Tech Will Change Everything! 101", but they'd be correspondingly much more powerful.

    A useful Singularity University (or whatever it would be called) would be one that dove deeply into the nature of disruption, how society and technology co-evolve, and how we deal with unintended and unanticipated results of our choices. As sorry as I am to say it -- there are some very good people, folks I admire and respect, who are on the faculty & advisor list -- this institution isn't what we need in an era of uncertainty, crisis, and potential transformation.

    February 3, 2009

    We're Sorry, Due to Unforeseen Circumstances, the Singularity Has Been Postponed

    [Image: the Singularity University website, offline]

    September 2, 2008

    Singularity Summit 2008

    So, the official announcement for the 2008 Singularity Summit is now up, and for folks looking to get their fill of conversations about the transcendent, here's your chance to sign up. This time around, the Summit will take place on Saturday, October 25, at the Montgomery Theater in San Jose. Seating is limited to 500 attendees, so it's a bit smaller than last year (I think).

    There's a bit of a usual-suspects element to the speaker list this time around, with a few Singularity Institute for Artificial Intelligence-associated names on the stage as always, and a mix of reasonably well-known tech pundits and lesser-known (but probably more provocative) thinkers. I do give the Singularity Institute credit for including a skeptic or two in the mix. I'm not sure if the Singularity concept is yet mainstream enough to get a really wide mix of perspectives, but I hold out hope that at some point, we'll have more non-technologists than technologists on stage at one of these.

    Since I spoke last year, I won't be on stage this time around; however, I will be giving the closing keynote for the Singularity Institute/SciVestor Emerging Technologies Workshop happening at the San Jose Tech Museum on Friday, October 24. Seats are limited to 50 for this. I don't know yet what I'm going to talk about, but I suspect it will involve some mix of environmental futurism, take-responsibility encouragement, and a panoply of new terminology.

    July 3, 2008

    Singular Sensations

    The Singularity concept remains inescapable these days, although rarely well-understood. Both facts are unfortunate, for essentially the same reason: the popularity of the term "Singularity" has undermined its narrative value. Its use in a discussion is almost guaranteed to become the focus of a debate, one that rarely changes minds. This is especially unfortunate because the underlying idea is, in my view, a useful tool for thinking about how we'll face the challenges of the 21st century.

    For many of its detractors -- and more than a few of its proponents -- the Singularity refers only to the rise of godlike AIs, able to reshape the world as they see fit. Sometimes this means making the world a paradise for humanity, sometimes it means eliminating us, and sometimes it means "uploading" mere human minds into its ever-expanding digital world. That this isn't all that close to Vinge's original argument is really irrelevant -- by all observations this appears to be the most commonplace definition.

    It's not hard to see why this gets parodied as a "rapture for nerds." It's not that it's a religious argument per se, but that it has narrative beats that map closely to eschatological arguments of all kinds: Specialists (with seemingly hermetic knowledge) [Premillennial Dispensationalists, Singularitarians, Marxist Revolutionaries] predict an imminent transformative moment in history [Rapture, Singularity, Withering Away of the State] that will create a world unlike anything before possible in human history, a transformation mandated by the intrinsic shape of history [The Book of Revelation, the Law of Accelerating Returns, Historical Materialism]. The details of the various eschatological stories vary considerably, of course, and this general framework matches each version imperfectly. Nonetheless, this pattern -- a predicted transformation creates a new world due to forces beyond our ken -- recurs.

    This comparison drives many Singularity adherents to distraction, as they see it as the intentional demeaning of what they believe to be a scientifically-grounded argument.

    The thing is, the Singularity story, broadly conceived, is actually pretty compelling. What Vinge and the better of the current Singularity adherents argue is that we have a set of technological pathways that, in both parallel and combination, stand to increase our intelligence considerably. Yes, artificial intelligence is one such pathway, but so is bioengineering, and so is cybernetic augmentation (I'll argue in a subsequent post that there's yet another path to be considered, one that Vinge missed).

    The version of the Singularity story that I think is well-worth holding onto says this: due to more detailed understandings of how the brain works, more powerful information and bio technologies, and more sophisticated methods of applying these improvements, we are increasingly able to make ourselves smarter, both as individuals and as a society. Such increased intelligence has been happening slowly, but measurably. But as we get smarter, our aggregate capacity to further improve the relevant sciences and technologies also gets better; in short, we start to make ourselves smarter, faster. At a certain point in the future, probably within the next few decades, the smarter, faster, smarter, faster cycle will have allowed us to remake aspects of our world -- and, potentially, ourselves -- in ways that would astonish, confuse, and maybe even frighten earlier generations. To those of us imagining this point in the future, it's a dramatic transformation; to those folks living through that future point, it's the banality of the everyday.
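    The shape of that "smarter, faster" loop is easy to see with a back-of-the-envelope model. All of the numbers below are invented for illustration -- this is an argument about the curve's shape, not a forecast:

```python
capability = 1.0    # arbitrary baseline for today's problem-solving capacity
k = 0.03            # assumed yearly improvement per unit of capability
year = 2008

while capability < 1_000:    # "would astonish earlier generations" threshold
    capability *= 1 + k * capability   # better tools accelerate the next round
    year += 1

print(f"toy threshold crossed around {year}")
```

    Growth under this rule looks almost flat for decades and then turns sharply upward -- which is exactly why the same transformation reads as sudden from the outside and as the banality of the everyday from within.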

    Regardless of what one thinks of the prospects for strong AI, it's hard to look at the state of biotechnology, cognitive science, and augmentation technologies without seeing this scenario as distinctly plausible.

    What I'm less convinced of is the continuing value of the term "Singularity." It made for a good hook for an idea, but increasingly seems like a stand-in for an argument (for both proponents and detractors). Discussions of the Singularity quickly devolve into debates between those who argue that godlike AI is surely imminent because we have all of these smart people working on software that might at some point give us a hint as to how we could start to look at making something approaching an intelligent machine, which would then of course know immediately how to make itself smarter and then WHOOSH it's the Singularity... and those who argue that AI is impossible because AI is impossible, QED. And we know this because we haven't built it, except for the things we called AI until they worked, and then we called them something else, because those weren't real AI, because they worked. Since AI is impossible.

    In Warren Ellis' snarky piece on the Singularity from a few weeks ago, he suggested replacing "the Singularity" with "the Flying Spaghetti Monster," and seeing if that actually changed the argument much. My suggestion runs parallel: replace "the Singularity" with "increasing intelligence," too. If it still reads like eschatology, it's probably not very good -- but if it starts to make real sense, then it might be worth thinking about.

    June 28, 2008

    Singularities Enough, and Time

    A few people have asked me what I thought of Karl Schroeder's recent article at Worldchanging, "No Time for the Singularity." Karl argues that we can't count on super-intelligent AIs to save us from environmental disaster, since by the time they're possible (assuming that they're possible), things will have gotten so bad that they won't matter (and/or won't have any resources available to act, or even persist). It's a pretty straightforward argument, and echoes pieces I've written on parallel themes. In short, my initial reaction was "yeah, of course."

    But giving it a bit more thought, I see that Karl's argument has a couple of subtle, but important, flaws.

    The first is that he makes the same assumption that nearly every casual discussion of the Singularity concept makes, defining it as "...within about 25 years, computers will exceed human intelligence and rapidly bootstrap themselves to godlike status." But if you go back to Vinge's original piece, you'll see that he actually suggests four different pathways to a Singularity, only two of which arguably include super-intelligent AI. His four pathways are:

    • There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
    • Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
    • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
    • Biological science may provide means to improve natural human intellect.

    The first two depend upon computers gaining self-awareness and bootstrapping themselves into super-intelligence through some handwaved process. People don't talk much about the Internet "waking up" these days, but talk of artificially intelligent systems remains quite popular. And while the details of how we might get from here to a seemingly intelligent machine grow more sophisticated, there's still quite a bit of handwaving about how that bootstrapping to super-intelligence would actually take place.

    The second two -- computer/human interfaces and biological enhancement -- fall into the category of "intelligence augmentation," or IA. Here, the notion is that the human brain remains the smartest thing around, but has either cybernetic or biotechnological turbochargers. It's important to note that the cyber version of this concept does not require that the embedded/connected computer be anything other than a fancy dumb system -- you wouldn't necessarily have to put up with an AI in your head.

    So when Karl says that the Singularity, if it's even possible, wouldn't arrive in nearly enough time to deal with global environmental disasters, he's really only talking about one kind of Singularity. It's this narrowing of terms that leads to the second flaw in his argument.

    Karl seems to suggest that only super-intelligent AIs would be able to figure out what to do about an eco-pocalypse. But there's still quite a bit of advancement to be had between the present level of intelligence-related technologies, and Singularity-scale technologies -- and that pathway of advancement will almost certainly be of tremendous value to figuring out how to avoid disaster.

    This pathway is especially clear when it comes to the two non-AI versions of the Singularity concept. With bio-enhancement, it's easy to find stories about how Ritalin or Adderall or Provigil have become productivity tools in school and in the workplace. To the degree that our sense of "intelligence" depends on a capacity to learn and process new information, these drugs are simple intelligence boosters (ones with potential risks, as the linked articles suggest). While they're simple, they're also indicative of where things are going: our increasing understanding of how the brain functions will very likely lead to more powerful cognitive modifications.

    The intelligence-boosting through human-computer connections is even easier to see -- just look in front of you. We're already offloading certain cognitive functions to our computing systems, functions such as memory, math, and increasingly, information analysis. Powerful simulations and petabyte-scale datasets allow us to do things with our brains that would once have been literally unimaginable. That the interface between our brains and our computers requires typing and/or pointing, rather than just thinking, is arguably a benefit rather than a drawback: upgrading is much simpler when there's no surgery involved.

    You don't have to believe in godlike super-AIs to see that this kind of intelligence enhancement can lead to some pretty significant results as the systems get more complex, datasets get bigger, connections get faster, and interfaces become ever more useable.

    So we have intelligence augmentation through both biochemistry and human-computer interface well underway and increasingly powerful, with artificial intelligence on some possible horizon. Let's cast aside the loaded term "Singularity" and just talk about getting smarter. This is happening now, and will under nearly any plausible scenario keep happening for at least the next decade and a half. Enhanced intelligence alone won't solve global warming and other environmental threats, but it will almost certainly make the solutions we come up with more effective. We could deal with these crises without getting any smarter, to be sure, and we shouldn't depend on getting smarter later as a way of avoiding hard work today. But we should certainly take advantage of whatever new capacities or advantages may emerge.

    I still say that the Singularity is not a sustainability strategy, and agree with Karl that it's ludicrous to consider future advances in technology as our only hope. But we should at the same time be ready to embrace such advances if they do, in fact, emerge. The situation we face, particularly with regards to climate disruption, is so potentially devastating that we have to be willing to accept new strategies based on new conditions and opportunities. In the end, the best tool we have for dealing with potential catastrophe is our ability to innovate.

    January 21, 2008

    Singularity Summit talk: the video

    The video of my talk at last year's Singularity Summit is finally available. As always, feedback welcome.

    December 3, 2007

    Talking About the Metaverse & the Singularity

    sl-talk.pngJust a few updates for those of you who like to hear these things:

    My talk at the Metaverse Meetup the other night went splendidly, and the video should be available real soon now. For those of you who can't wait, the folks at Ugo Trade did a terrific job of capturing the content and the spirit of my talk -- not a transcription, but a thoughtful depiction.

    The picture of me giving the talk was taken in Second Life by Lisa Rein. Thanks, Lisa!

    The entire recording of my interview for Spark! on CBC radio is now available for downloading & listening. It actually holds together pretty well, and nicely covers (in a conversational way, not a lecture) many of what I think are the key issues surrounding the rise of the metaverse concept.

    This past weekend's edition of, um, Weekend Edition included a story about the Singularity Summit (you remember, back in September... I guess this worked best as a filler story). Many of the notables show up in interview snippets, and I make a guest appearance, too. The odd thing is that it was at least a 30-minute conversation that ended up being cut down to two sentences. Welcome to radio.

    September 24, 2007

    My Talk at the Singularity Summit

    Anyone who wants to hear the presentation, here you go:

    MP3 of my talk (~30 minutes)

    Let me know what you think.

    BTW, the first third or so just covers the metaverse roadmap; the real fun part starts when I offer my "second disclaimer" (at about 8:24).

    September 9, 2007

    Reactions to the Singularity Summit Talk

    A few bloggers -- and a couple of photographers -- took some notes on my talk at the Singularity Summit yesterday. Most simply recapped some of my lines (and one reprinted the whole talk), but I'll put the ones with commentary at the top:

    Bruce Sterling: "(((I'm really enjoying this, even though I believe that "Artificial Intelligence" is so far from the ground-reality of computation that it ought to be dismissed like the term "phlogiston.")))"

    Dan Farber, at ZDNet: "How a democratic, open process can be applied to a complex idea like Singularity, and the right choices made, remains a mystery."

    Mike Linksvayer: "My unwarranted extrapolation: the ideal of free software has some potential to substitute for the dominant ideal (representative democracy), but cannot compete directly, yet."

    Insider Chatter by Donna Bogatin: "...what does personal, direct experience become when observation and archiving of experience is the ultimate end game, rather than the activity itself? In other words, whatever happened to the joy of serendipitously living in the moment?"

    Singularity News

    David Orban

    Renee Blodgett, who includes some photos (one of which graces the top of this post).

    Frontier Channel

    And a special shout-out to a commentary at ZDNet by Chris Matyszczyk, who manages to wring an entire article's worth of snark out of making fun of my name.

    Seriously.

    September 8, 2007

    Singularity Summit Talk: Openness and the Metaverse Singularity

    The following is the text of the presentation I'm giving today at the Singularity Summit. I've set the post to go live at the same time I go onto the stage. Update: this is now the corrected version, with the updated language of the talk I actually gave (last-minute edits hand-written in my notes for the win!).

    I was reminded, earlier this year, of an observation made by polio vaccine pioneer Dr. Jonas Salk. He said that the most important question we can ask of ourselves is, "are we being good ancestors?"

    This is a particularly relevant question for those of us here at the Summit. In our work, in our policies, in our choices, in the alternatives that we open and those that we close, are we being good ancestors? Our actions, our lives have consequences, and we must realize that it is incumbent upon us to ask if the consequences we're bringing about are desirable.

    It's not an easy question to answer, in part because it can be an uncomfortable examination. But this question becomes especially challenging when we recognize that even small choices matter. It's not just the multi-billion dollar projects and unmistakably world-altering ideas that will change the lives of our descendants. Sometimes, perhaps most of the time, profound consequences can arise from the most prosaic of topics.

    Which is why I'm going to talk a bit about video games.

    Well, not just video games, but video games and cameraphones and Google Earth and the myriad day-to-day technologies that, individually, may attract momentary notice, but in combination, may actually offer us a new way of grappling with the world. And just might, along the way, help to shape the potential for a safe Singularity.


    Continue reading "Singularity Summit Talk: Openness and the Metaverse Singularity" »