
Monthly Archives

February 28, 2011

Surviving the Future

I just found out that Surviving the Future, the documentary produced by the Canadian Broadcasting Corporation and featuring yours truly, is being shown on the American cable network CNBC this week. (It does not appear to be listed for CNBC Europe.)

I discovered this after two of the three showings (Friday Feb 25 and last night, Feb 27) had been completed -- fortunately, there's one more showing left.

On THURSDAY March 3, at 8PM Eastern (5PM Pacific), CNBC will once again be showing Surviving the Future. Set your TiVos to stun.

February 24, 2011

Homesteading a Society of Mind

Scientific American reports about research done at Cornell's Computational Synthesis Laboratory intended to give robot minds a degree of "self-awareness." The initial version gave the robot a way of watching and analyzing its own body, so that it could more readily adapt to new conditions (such as losing a limb). The next version, however, was much more ambitious:

Now, instead of having robots modeling their own bodies Lipson and Juan Zagal, now at the University of Chile in Santiago, have developed ones that essentially reflect on their own thoughts. They achieve such thinking about thinking, or metacognition, by placing two minds in one bot. [...] By reflecting on the first controller's actions, the second one could make changes to adapt to failures... In this way the robot could adapt after just four to 10 physical experiments instead of the thousands it would take using traditional evolutionary robotic techniques.

They refer to this system of having one controller analyze another as "metacognition," but what immediately came to mind for me was Marvin Minsky's description of a "Society of Mind" -- the idea that the conscious mind is an emergent process resulting from multiple independent sub-cognitive processes working in parallel. This piece at MIT gives a better overview of the Society of Mind argument than the Wikipedia stub, including this quote from a Minsky essay on the concept:

The mind is a community of "agents." Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. [...] In our picture of the mind we will imagine many "sub-persons", or "internal agents", interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or "censoring" others from thinking forbidden thoughts.

Clearly, a two-"agent" robot mind isn't quite a real "society of mind" -- it's more like a "neighborly acquaintance of mind." Nonetheless, it shows an obvious direction for further research, as well as offering interesting support for Minsky's idea.
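The two-controller arrangement can be sketched in miniature. To be clear, this is not Lipson and Zagal's actual architecture (their work uses evolved controllers on physical robots); it's a toy illustration, with every class and name invented for the purpose, of one "mind" watching another and correcting its body model after damage:

```python
class Actuator:
    """A body part whose true gain can change (simulating damage)."""
    def __init__(self, gain=1.0):
        self.gain = gain

    def move(self, command):
        return command * self.gain


class PrimaryController:
    """First 'mind': issues motor commands using an internal body model."""
    def __init__(self):
        self.assumed_gain = 1.0

    def command_for(self, target):
        return target / self.assumed_gain


class MetaController:
    """Second 'mind': watches the first controller's outcomes and
    corrects its body model when prediction and reality diverge."""
    def observe(self, primary, target, actual):
        if target:
            primary.assumed_gain *= actual / target


# Simulate damage: the actuator now delivers half the expected motion.
body = Actuator(gain=0.5)
mind = PrimaryController()
meta = MetaController()

for trial in range(4):
    actual = body.move(mind.command_for(10.0))
    meta.observe(mind, 10.0, actual)

# After a handful of trials, the primary's model matches the damaged body.
print(round(mind.assumed_gain, 3))  # 0.5
```

The point of the sketch is the division of labor: the first controller never examines itself; all the adaptation lives in the second one, which is the (very rough) sense in which the setup counts as "thinking about thinking."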

It also echoes something I wrote in 2003, for the Transhuman Space: Toxic Memes game book. In discussing why AI "infomorphs" weren't significantly smarter than humans, I offered up this:

Despite their different material base, human minds and AI minds are remarkably similar in form. Both display consciousness as an emergent amalgam of subconscious processes. For humans, this was first suggested well over a century ago, most famously in the work of Marvin Minsky and Daniel Dennett, and proven by the notorious Jiap Singh “consciousness plasticity” experiments of the 2030s. [...] In the same way, nearly all present-day AI infomorphs use an emergent-mind structure made up of thousands of subminds, each focused on different tasks. There is no single “consciousness” system; thought, awareness, and even sapience emerge from the complex interactions of these subprocesses. Increased intellect... is the result of increasingly complex subsystems.

We're still a ways away from declaring this a successful predictive hit, but it's amusing nonetheless.

February 23, 2011

Is the Alphabet Making Us Stupid?

Socrates: “...The story goes that Thamus said many things to Thoth in praise or blame of the various arts, which it would take too long to repeat; but when they came to the letters, “This invention, O king,” said Thoth, “will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.”

But Thamus replied, “Most ingenious Thoth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess. For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

"You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

     –[Plato, Phaedrus, 274e-275b]

February 17, 2011

Fear of Teratocracy

What is a democracy?

I've been thinking about the nature of democracy over the past few weeks, for both obvious (Egypt) and less-obvious (potential for social change under conditions of disruption) reasons. The definition of democracy that most people are familiar with is something along the lines of "rule by the people through voting, where the recipient of a majority of the vote wins." That's a decent description of the mechanism of democracy, I suppose, but I don't think it captures the important part.

Democracy is defined by how you lose, not (just) how you win.

The real test of whether a society that uses a plebiscite to determine leadership is really a democracy is whether the losing party accepts the loss and the legitimacy of their opponent's victory. This is especially true when the losing party previously held power. Do they give up power willingly, confident that they'll have a chance to regain power again in the next election? Or do they take up arms against the winners, refuse to relinquish power, and/or do everything they can to undermine the legitimacy of the opposition's rule?

The last bit is possibly the most important. It's easy to see that a political faction unleashing civil conflict or refusing to give up power after an election loss is anti-democratic. The line between "appropriately tough attacks on an opponent's policies" and "attacks on the legitimacy of the opponent," however, can be somewhat more difficult to recognize. One key element is where the attacks come from: are they vitriol from the fire-breathers in the streets, trying to shift the Overton Window? Or are they coming from duly-elected, theoretically responsible figures? The former is part-and-parcel of a spectacle-driven media culture; the latter is a much more serious problem, as it's not a disagreement based on policies (and subject to negotiation), but one based on identity. The current leadership is bad not because the policies are bad, but because they have no right to lead.

(This is all made more complex by the possibility that a seemingly legitimately-elected leader may in fact be illegitimate due to corruption of the process.)

All of this matters from a futures perspective because in times of disruption there is likely to be substantial disagreement over the correct strategies needed to deal with big/dangerous changes. If the political discourse in a democracy is such that policy disputes get overwhelmed by (or become triggers for) arguments over legitimacy, then the potential to come up with approaches to the world that embrace long-term thinking is dramatically reduced. Difficult decisions get pushed off in order to avoid (or to focus on dealing with) fights over whether the in-office leadership has the right to lead.

Unfortunately, it appears that attacking the in-power opposition's legitimacy may be an increasingly effective way to derail policy initiatives. When a substantial portion (at least 30%, perhaps up to 50%) of the Republican party, for example, believes not only that Obama has bad policies, but that he has no legitimate right to be President, compromise and negotiation become difficult at best. Republican leaders willing to negotiate aren't just compromising principles, they're aiding and abetting a violation of the Constitution. And while this is currently a Republican problem, there's nothing to say that Democrats -- the political leaders, not just the activists -- won't learn the lesson that this is an effective way to fight once Republicans retake the Presidency. This is also a situation just begging for a Participatory Decepticon moment.

The question, then, is (as always) what is to be done? My answer is (also as always) more transparency, but that isn't enough. We also need to see a shift in the larger culture away from spectacle and attention-grabbing stimulation, and towards illumination and empathy-building consideration (watch this video for what I'm referring to). But that shift doesn't seem like it will happen any time soon.

In the meantime, then, we watch the initial signs of emerging democracies around the world, ignoring the signs of fading democracy at home.

[Teratocracy: Rule by Monsters]

February 16, 2011

Speculative Gaming

In the past 24 hours, I've received two different pings from my Respected Elders asking about games as a mechanism for articulating disruptive scenarios. Both inquiries mentioned the wizard-queen of persuasive games, Jane McGonigal, of course. It's kind of odd when someone you know and have worked with hits the bigtime; fortunately, there's no doubt that Jane deserves the attention.

For me, the ultimate "serious game" has long been the fictional WorldRun, from Bruce Sterling's 1988 novel Islands in the Net. A massive, global simulation of the world, WorldRun was described as a way for people to examine different strategies for dealing with complex problems. Any real-world version of WorldRun would suffer from the problem that such a simulation is just too damn complex, unfortunately.

Fate of the World, however, comes closer than anything else so far to an honest-to-goodness world sim. It has quite a bit of what one would want -- global politics, environmental crises, resource limits, technological breakthroughs, biodiversity dilemmas, and more. I'm really excited to try it out -- it's currently in beta, but purchasing it now gives access to the beta versions as well as the final version upon release.

My big question about FotW is whether it's a (for lack of a better term) first-order simulation, where events happen as direct results of the rules embedded in the code, or a second-order sim, where events happen because of the interaction of basic environmental conditions and player actions. First-order sims are straightforward and fairly robust -- the player can't do anything the game doesn't explicitly allow. Second-order sims are much more complex, and as a result can be much more prone to "breaking" by producing nonsensical emergent results -- but also much more open to innovative solutions.

A basic example of a first-order game would be a classic text adventure, where there's usually only one way to achieve a result, even if your character holds multiple objects that would, in reality, also work. For example, if you had to press a button to open your cell by throwing your shoe to hit it, only your shoe would work -- even if you had a brick, a book, or a frying pan (all typical text adventure supplies).

In a second-order version of the same situation, the game would "know" that shoes, bricks, books, and frying pans were all smaller objects, and that one thing a character could do with a smaller object was throw it. It might also allow you to throw the pillow from the cot in the cell (also a smaller object), but may have basic world rules that say that pillows are "soft" and don't hit hard when thrown. Conversely, it may also know that a brick is "very hard" and could damage what it hits.

Much more complex to program, much more prone to weird outcomes, but much more open to novel strategies (e.g., throwing the pillow over the button, then throwing the brick onto the pillow).
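The contrast can be made concrete with a toy sketch. All the object names, properties, and "world rules" here are invented for illustration; the point is only the structural difference between a scripted solution and one derived from general rules:

```python
# First-order: only the explicitly scripted action works.
def press_button_first_order(thrown_object):
    # The designer hard-coded exactly one solution.
    return thrown_object == "shoe"


# Second-order: outcomes emerge from general world rules applied
# to object properties, not from a scripted list of solutions.
PROPERTIES = {
    "shoe":       {"small": True,  "hard": True},
    "brick":      {"small": True,  "hard": True},
    "book":       {"small": True,  "hard": True},
    "frying pan": {"small": True,  "hard": True},
    "pillow":     {"small": True,  "hard": False},
    "cot":        {"small": False, "hard": True},
}


def press_button_second_order(thrown_object):
    props = PROPERTIES[thrown_object]
    # World rules: small objects can be thrown, and only hard
    # objects hit with enough force to press the button.
    return props["small"] and props["hard"]


print(press_button_first_order("brick"))    # False: not the scripted answer
print(press_button_second_order("brick"))   # True: emerges from the rules
print(press_button_second_order("pillow"))  # False: soft things bounce off
```

The second version is also where the fragility comes from: one mistagged property (a pillow marked "hard," say) and the world quietly starts producing nonsense, which is exactly the "breaking" risk described above.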

We're still a ways away from being able to build fully second-order global simulations. It's not just going to require a lot more processing power and much more data to pull from, it's going to require much better models of underlying systems, models that can interact without leading to weird emergent results.

The worry I have about this surge of interest in games is that people who aren't familiar with the reality of games and simulations, only the Hollywood-esque version where every computer has a Do-What-I-Mean interface and every simulation perfectly captures reality, are going to expect much more than they get. Disappointment with the mundane limits of real games may mean that interest in games crashes just as quickly as it arose. I hope not, but it's incumbent upon us who do understand what games and sims can and cannot do to make sure we explain this clearly to our new audiences.

Jamais Cascio

Contact Jamais • Bio

Co-Founder, WorldChanging.com

Director of Impacts Analysis, Center for Responsible Nanotechnology

Fellow, Institute for Ethics and Emerging Technologies

Affiliate, Institute for the Future


Creative Commons License
This weblog is licensed under a Creative Commons License.
Powered By MovableType 4.37