June 6, 2019

Participatory Panopticon: 2019


In April of 2004--just a bit over 15 years ago--I posted this question to Worldchanging:

"What happens when you combine mobile communications, always-on cameras, and commonplace wireless networks?" I called the answer the Participatory Panopticon.

Remember: at that point in time, Blackberry messaging was the height of mobile communication, cameras in phones were rare and of extremely poor quality, and EDGE was the most common form of wireless data network. Few people had truly considered what the world might look like as all of these systems continued to advance.

The core of the Participatory Panopticon concept was that functionally ubiquitous personal cameras with constant network connections would transform much of how we live, from law enforcement to politics to interpersonal relationships. We'd be able to--even expected to--document everything around us. No longer could we assume that a quiet comment or private conversation would forever remain quiet or private.

The Participatory Panopticon didn't simply describe a jump in technology; it envisioned the myriad ways in which our culture and relationships would change with the advent of a world of unrelenting peer-to-peer observation.

Here's the canonical version of the original argument: the text of the talk I gave at Meshforum in 2005. It's long, but it really captures what I was thinking about at the time. As with any kind of old forecast, it's interesting to look for the elements that were spot-on, the ones that were way off, and the ones that weren't quite right but hinted at a change that may still be coming. I like to think about this as engaging in "forecast forensics," and there's a lot to dig through in this talk.

A surprising amount of what I imagined 15 years ago about the Participatory Panopticon has borne out: The interweaving of social networks and real time commentary, the explosion of "unflattering pictures and insulting observations," the potential for citizens to monitor the behavior (and misbehavior) of public servants, even (bizarrely) the aggressiveness of agents of copyright going after personal videos that happen to include music or TV in the background. The core of the Participatory Panopticon idea is ubiquity, and that aspect of the forecast has succeeded beyond my wildest expectations.

If you were around and aware of the world in 2005, you may remember what digital cameras were like back then. We were just at the beginning of the age of digital cameras able to come close to the functionality and image quality of film cameras, if you could afford a $2,000 digital SLR. Even then, the vast majority of digital cameras in the hands of regular people were, in a word, crap. The cameras that could be found on mobile phones were even worse--marginally better than nothing. Marginally. The idea of a world of people constantly taking pictures and video on their personal devices (not just phones, but laptops, home appliances, and cars) seemed a real leap from the world we lived in at the time. A world where all of these pictures and videos would then matter to our politics, our laws, and our lives, seemed an even greater leap.

But the techno-cultural jolt I termed the Participatory Panopticon has profoundly changed our societies to a degree that it's sometimes hard to remember what life was like beforehand. Today's "Gen Z" youth have never been conscious of a world without the Participatory Panopticon. They're a generation that has been constantly surrounded by cameras held by family, friends, and most importantly, themselves.

We may not always recognize just how disorienting this has been, how much it has changed our sense of normal. One bit from the essay that still resonates today concerns the repercussions of never letting go of the documented past:

Relationships--business, casual or personal--are very often built on the consensual mis-rememberings of slights. Memories fade. Emotional wounds heal. The insult that seemed so important one day is soon gone. But personal memory assistants will allow people to play back what you really said, time and again, allow people to obsess over a momentary sneer or distracted gaze. Reputation networks will allow people to share those recordings, showing their friends (and their friends' friends, and so on) just how much of a cad you really are.

(Okay, forget the "personal memory assistants" and "reputation networks" jargon, and substitute "YouTube" and "Instagram" or something.)

Think about what happens today when someone's offensive photo or intentionally insulting joke from a decade or two ago bubbles back up into public attention. Whether or not the past infraction was sufficiently awful as to be worthy of present-day punishment is beside the point: one of the most important side-effects of the Participatory Panopticon (and its many connected and related technologies and behaviors) is that we've lost the ability to forget. This may be a good thing; it may be a tragedy; it is most assuredly consequential.

But if that aspect of the Participatory Panopticon idea was prescient, other parts of the forecast were excruciatingly off-target.

Although there were abundant inaccuracies with the technological scenarios, the one element that stands out for me as being the most profoundly wrong is the evident--and painfully naive--trust that transparency is itself enough to force behavioral changes. That having documentation of misbehavior would, in and of itself, be sufficient to shame and bring down bad actors, whether they were forgetful spouses, aggressive cops, or corrupt politicians. As we've found far too many times as the real-world version of the Participatory Panopticon has unfolded, transparency means nothing if the potential perpetrators can turn off the cameras, push back on the investigators, or even straight up deny reality.

Transparency without accountability is little more than voyeurism.

There are elements of the Participatory Panopticon concept that haven't emerged, but also can't be dismissed as impossible. Two in particular stand out as having the greatest potential for eventual real-world consequences.

The first is the more remote of the two, but probably more insidious. We're on the cusp of the common adoption of wearable systems that can record what's around us, systems that are increasingly indistinguishable from older, "dumb" versions of the technology. Many of the privacy issues already extant around mutual snooping will be magnified, and new rounds of intellectual property crises will emerge, when the observation device can't be distinctly identified. If my glasses have a camera, and I need the glasses to see, will I be allowed to watch a movie? If I can tap my watch and have it record a private conversation or talk without anyone around me noticing, will we even be allowed to wear our wearables anywhere?

(By the way, I can already do that with my watch. Be warned.)

The second might be the most important element of the Participatory Panopticon story, even if it received little elaboration in the 2005 talk. It's included there almost as a throwaway idea, in a brief aside about the facility with which pictures can be altered:

It's easy to alter images from a single camera. Somewhat less simple, but still quite possible, is the alteration of images from a few cameras, owned by different photographers or media outlets.

But when you have images from dozens or hundreds or thousands of digital cameras and cameraphones, in the hands of citizen witnesses? At that point, I start siding with the pictures being real.

The power of the Participatory Panopticon comes not just from a single person being able to take a picture or record a video, but from the reinforcement of objective reality that can come from dozens, hundreds, thousands of people independently documenting something. A mass of observers, each with their own perspectives, angles, and biases, can firmly establish the reality of an event or an action or a moment in a way that no one official story could ever do.

It's common to ask what we can do about the rise of "deep fakes" and other forms of indistinguishable-from-reality digital deceptions. Here's one answer. The visual and audio testimony of masses of independent observers may be an effective counter to a convincing lie.
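The corroboration argument can be made quantitative with a toy model: if each witness independently captures an event accurately with some probability, the chance that a simple majority of them is wrong collapses as the crowd grows. A minimal sketch (the witness counts and accuracy figure are illustrative assumptions, not data from the essay):

```python
from math import comb

def consensus_confidence(n_witnesses: int, p_accurate: float) -> float:
    """Probability that a simple majority of independent witnesses is
    accurate, assuming each is accurate with probability p_accurate."""
    majority = n_witnesses // 2 + 1
    return sum(
        comb(n_witnesses, k) * p_accurate**k * (1 - p_accurate) ** (n_witnesses - k)
        for k in range(majority, n_witnesses + 1)
    )

# Even modestly reliable witnesses become collectively trustworthy at scale.
for n in (1, 11, 101):
    print(n, round(consensus_confidence(n, 0.7), 4))
```

The assumption of independence is doing the real work here, just as it does in the essay: a thousand copies of one doctored clip prove nothing, but a thousand separate vantage points are very hard to fake consistently.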

In a recent talk, I argued that "selfies" and other forms of digital reflection aren't frivolous acts of narcissism, but are in fact a form of self-defense--an articulation that I am here, I am doing this, I can claim this moment of my life.

In 2009, The Onion offered a satiric twist on this concept, in yet another example of dystopian humor predicting the future.

As this suggests, my images aren't just documentation of myself, they're documentation of everyone around me. My verification of my reality also verifies the reality of those around me, and vice versa... whether we like it or not. Like so many of the consequences of the Participatory Panopticon, its manifestation in the real world can occasionally be brutal. With "Instagram Reality" mockery, for example, the editing and "improvement" of images of social network influencers is called out by other people's pictures showing their real appearance.

It's harsh and more than a little misogynist. But that's the ugly reality of the Participatory Panopticon: it was never going to change who we are. It was really only going to make it harder to hide it.

Foresight (forecasts, scenarios, futurism, etc.) is most useful when it alerts us to emerging possible developments that we had not otherwise imagined. Not just as a "distant early warning," but as a vaccination. A way to become sensitive to changes that we may have missed. A way to start to be prepared for a disruption that is not guaranteed to happen, but would be enormously impactful if it did. I've had the good fortune of talking with people who heard my Participatory Panopticon forecast and could see its application to their own work in human rights, in environmentalism, and in politics. The concept opened their eyes to new ways of operating, new channels of communication, and new threats to manage, and allowed them to act. The vaccination succeeded.

It's good to know that, sometimes, the work I do can matter.

August 20, 2013


One of the first rules one is taught as a futurist-in-training is to avoid "normative scenarios" -- forecasts that describe what you want to see, even when the signals and evidence at hand make the scenario highly unlikely. This is much more of a challenge than non-futurists may think, as a good scenarist can usually come up with a plausible set of early indicators and distant early warnings to support just about any forecast. If one's work focuses on issues that have a strong ethical component (around human rights, for example, or the global environment) the problem is further multiplied.

One of the reasons I've been running silent over the past month or so has been the explosion of news around government (and corporate) surveillance of the Internet. Not that I'm especially worried about my own stuff -- I have a fairly public life, and have few secrets worth knowing. But the implications for the futures of privacy, security, commerce, communications, big data, and so forth are so enormous that I'm still trying to wrap my mind around where this is all going. And the desire to imagine normative scenarios about the potential outcomes is almost overwhelming.

Reality has a bad habit of undermining desired futures. Here's a non-privacy example: If you have a moral stance that says that individual access to guns should be strictly controlled or prohibited in the U.S., you may wish to imagine future outcomes where such restrictions are possible and widely accepted. But the evolution of 3D printers has made that kind of future highly implausible, as designs for 3D-printer-friendly firearms have now emerged and spread. As long as 3D printers are available, it will be extremely difficult to eliminate or control access to firearms, and as 3D printers become more capable, we'll see increasingly diverse and powerful printable weapons. Any discussions of "gun control" that don't acknowledge this are doomed to imminent irrelevance.

So when we think about the future of privacy, surveillance, and related concepts, one of the first questions we need to ask is "what real-world conditions constrain our possible futures?" What are the technical aspects of privacy and surveillance, and what kinds of changes would have to happen to shift the balance between the two? What are the political barriers? For example, if a leader took positive steps to reduce government surveillance, and subsequently a major terrorist attack happened, how likely is it that the public (and certainly the political opponents of said leader) would link the two? If the technological standards underlying the present-day Internet make full privacy essentially impossible -- not just vis-a-vis government snoops but also corporate "big data" behavior analysis -- who would actually have the capability to construct a more secure alternative?

I'm still thinking.

July 16, 2013

Google Glass Ten Second Review

I got a chance to play briefly with IFTF's new Google Glass device (see accompanying photo, captioned "Glass Pains"). Some quick notes:

  • As the photo illustrates (and as the manual apparently states), Google Glass devices do not work with regular eyeglasses. Unfortunately, because of my nearsightedness, the text on the screen for the Glass display is set in a way that left it illegible for me without my eyeglasses. (I suspect that the image is projected in a way to be legible while your eyes are focused at a distance.)
  • Since it uses a bone-conduction mic & headphone, it wouldn't be a simple task to just stick a Glass unit on regular glasses.
  • The voice control works reasonably well, and I was able to instruct it to take a photo (I'll link to it when I get access to it).
  • The voice control doesn't work perfectly, and was confused by terms more complex than "take picture."
  • The voice control is speech recognition, not speaker recognition. It responded to accidental commands from the person I was speaking with while I tested the device.

    This last is the biggest risk factor for abuse. Saying "OK Glass Google [something shocking]" when someone is using Google Glass in near proximity will make it search Google for whatever startling content you've given it.

    Not that I would suggest any such thing. Nosiree.

May 22, 2013

Unwittingly Participating in the Panopticon

    This does not seem like a good combination:

    1. The FBI wants (has?) backdoors to monitor Skype (along with other Internet telephony apps); tellingly, there's already concern that such backdoors are open to non-FBI intruders.

    2. The soon-to-arrive XBox One has Skype built-in, making use of the video camera on the Kinect.

    3. Said Kinect is required, and cannot be disconnected.

    4. And the XBox One will need to have regular (daily) Internet access, meaning that most people will just leave it connected to their home broadband.

Anyone want to place a wager on how many months it will be after the XBox One is released before there's an "XBox Spying" scandal?

    May 14, 2013

    Getting It (Almost) Right

    Ask any reputable modern futurist to make a prediction, and you'll nearly always get the same general reply: futurists don't make predictions, we talk about scenarios, implications, and forecasts -- structured narratives about future possibilities that make clear the uncertainty and contingency of outcomes.

    But push a little harder, and you might hear something a little different: it's always fun to get one right.

    So it's with all due humility that I quote the opening of this CNN/Fortune article:

    As Wall Street predictions go, Jamais Cascio had a good one. A little less than a year ago, Cascio, a distinguished fellow at think tank Institute for the Future, in a blog post, predicted that retweeting Twitter bots combined with a fake news story posted by hackers on a major media website would cause a market crash. That's pretty close to what happened.

The post in question was "Lies, Damn Lies, and Twitter Bots" from last August. It argued that it would likely take a bunch of Twitter bots and hacks acting in concert to shift stock market activity, but it turned out that it only took the temporary hijacking of the Associated Press Twitter feed. I guess I overestimated how risk-averse high-frequency trading systems would be.

    So was the point of the hack to get the stock market to undergo a brief crash, allowing someone to make a bunch of money? It's unclear, but the utility of the twitter-driven-flash-crash is now abundantly clear. This won't be the last time something like this happens.
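The mechanism behind that flash crash can be caricatured in a few lines: an automated trader that whitelists a handful of feeds and reacts to alarm keywords will fire on a single hijacked account. A deliberately naive sketch (the watchlist, source handle, and headline are illustrative, not the actual wording or any real trading logic):

```python
ALARM_WORDS = {"explosion", "attack", "crash"}  # illustrative watchlist

def naive_trading_signal(headline: str, trusted_sources: set, source: str) -> str:
    """A deliberately naive news-reaction rule of the kind the post warns
    about: it trusts any single feed on the whitelist, with no corroboration."""
    words = set(headline.lower().split())
    if source in trusted_sources and words & ALARM_WORDS:
        return "SELL"
    return "HOLD"

# A single hijacked-but-trusted feed is enough to trigger the rule.
signal = naive_trading_signal("Explosion reported downtown", {"@AP"}, "@AP")
```

The fix the original post implicitly assumed--requiring many independent sources before acting--is exactly the corroboration step this toy rule omits.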

    February 22, 2013


#ifihadglass

Google Glass: a wearable heads-up display and camera, linked to your mobile device, able to do live recording, searches, route guidance, and more. Available soon for about $1500, and in "explorer" testing now. (The title hashtag -- #ifihadglass -- is how Google is picking testers.) Joshua Topolsky at The Verge got an extended try-out with the device, and wrote about his experience. In short, he found it useful and awkward and very much the possible start of something big.

    But I walked away convinced that this wasn’t just one of Google’s weird flights of fancy. The more I used Glass the more it made sense to me; the more I wanted it. If the team had told me I could sign up to have my current glasses augmented with Glass technology, I would have put pen to paper (and money in their hands) right then and there. And it’s that kind of stuff that will make the difference between this being a niche device for geeks and a product that everyone wants to experience.

    After a few hours with Glass, I’ve decided that the question is no longer ‘if,’ but ‘when?’

    You'll forgive me if I'm not terribly surprised by all of this. This is pretty much a spot-on manifestation of the next phase of the Participatory Panopticon. The first phase used cameraphones -- ubiquitous and useful, to be sure, but reactive: you had to take it out and do something to make it record. A cameraphone isn't a tool of a panopticon in your pocket. But a wearable system, particularly something that looks stylish and not "tech," leads to very different kinds of outcomes.

    Here's a bit of something I wrote in 2005 ("personal memory assistant" was my term for a Google Glass-like device):

    But the world of the participatory panopticon is not as interested in privacy, or even secrecy, as it is in lies. A police officer lying about hitting a protestor, a politician lying about human rights abuses, a potential new partner lying about past indiscretions -- all of these are harder in a world where everything might be on the record. The participatory panopticon is a world where accusations can easily be documented, where corporations will become more transparent to stakeholders as a matter of course, where officials may even be required to wear a recorder while on duty, simply to avoid situations where they are discovered to have been lying. It's a world where we can all be witnesses with perfect recall. Ironically, it's a world where trust is easy, because lying is hard.

    But ask yourself: what would it really be like to have perfect memory? Relationships -- business, casual or personal -- are very often built on the consensual misrememberings of slights. Memories fade. Emotional wounds heal. The insult that seemed so important one day is soon gone. But personal memory assistants will allow people to play back what you really said, time and again, allow people to obsess over a momentary sneer or distracted gaze. Reputation networks will allow people to share those recordings, showing their friends (and their friends' friends, and so on) just how much of a cad you really are.

    In the world of the Participatory Panopticon, it's not just politicians concerned about inadvertent gestures, quick glances or private frowns.

    And avoiding it won't be as easy as simply agreeing to shut off the recorders. Unless you schedule your arguments, it's inevitable that something will be caught and archived. And if you leave your assistant off as a matter of course, you lose its value as an aid to recalling details that pass in an instant or didn't seem important at the time.

    Moreover, if you turn your recorder off while those around you are still archiving their lives, you place yourself at a disadvantage -- it's not knowledge that's power, it's recall of and access to knowledge that's power.

    The recently-posted video interview includes some of my more recent thinking on the topic.

    It's a really big deal. There are enormous intellectual property implications here, and undoubtedly issues around distracted driving and whatnot. But for me, the truly important aspect is how it changes relationships. And as this becomes more commonplace, it will change relationships -- between business partners, spouses, parents and children, everyone.

    And that's with the relatively simple technology of something like Google Glass. When we add things like active visual filtering and face recognition -- just look at someone and get their Twitter stream or Facebook page in front of you -- we get the third phase of the Participatory Panopticon. All of that's still ahead of us -- but the advent of Google Glass makes it much more likely to happen.

And, okay, I admit it. Even though we very modern futurists (who pooh-pooh "predictions" as the stuff of astrologers and TV pundits) are loath to admit it, getting it right is a thrill. Laying out a forecast that, in the subsequent years, maps to an emerging reality is neat stuff, especially when the forecast includes various social components yet to show up. Add a catchy name and... well, you have the makings of a nice bullet point for the always-inevitable "hey Mr. Futurist, what predictions of yours have come true?" question.

    September 5, 2012

    Future is Now, Part 58

    It's always a bit unsettling when reality has the temerity to confirm a speculative scenario. It's rarely a 100% match; more typically, it's a parallel event that reinforces the underlying logic of said forecast. Better still, this one, as it turns out, is a two-fer.

    In India last week, as-yet unidentified individuals sent mass text messages to Hindus in the north-east of the country, sparking a panicked evacuation of thousands from the state of Assam. The text messages -- which were entirely false -- told people that Muslims were attacking Hindus in retaliation for violence against Muslims earlier in the year. According to New Scientist, one typical message read:

    "Madam, do not get out of your house. There is a lot of trouble. People from your caste are being beaten. Seven women have been killed in Yelahanka [a suburb of Bangalore]."

    As of recent reports, the refugees are returning to their homes -- but slowly.

    This story underscores the power of networked social media as a medium for political rumors, one of the key points from my previous post Lies, Damn Lies, and Twitter Bots. Although in this case the specific medium was text messaging rather than Twitter, the larger argument fits: in a social environment primed to treat rumor as fact, properly coded and targeted messages can prompt a mass upheaval.

    It also fits with an argument from a few years ago, in my Fast Company article "The Dark Side of Twittering a Revolution." The genocide in Rwanda was driven, in part, by the use of local pirate radio stations targeting particular ethnic communities. The broadcasts reported tales of rival communities killing helpless individuals of the target ethnicity, encouraging (in this instance) people to rise up and kill their neighbors while they still had the chance (the ambiguity of my language here reflects the fact that both Hutu and Tutsi ethnic communities used this method, apparently).

    I wrote:

    This shouldn't be read as an indictment of social networking technologies in general, or of Twitter in particular. As I said at the outset, I'm thrilled at how critical this technology has been to the viability and potential success of the pro-democracy demonstrations. […] What I'm arguing, however, is that we shouldn't see the positive political successes of emerging social tools as being the sole model. We should be aware that, as these tools proliferate, they will inevitably be used for far more deadly goals.

    In India, the text messages prompted an evacuation; next time, the results may be much worse.

    May 1, 2012

    New Pollution

    I spoke last month at the Swissnex office in San Francisco (Swissnex is kind of the Swiss embassy for technology issues), at an event entitled "Data is (sic) the New Oil." The focus of the event was the tension between privacy and "publicy" (Stowe Boyd's term for the intentional revelation of aspects of one's life, the opposite of privacy). A video of the entire event is now online, and below you'll find the 15 minutes or so of my talk.

    Jamais Cascio on Polluting the Data Stream from Jamais Cascio on Vimeo.

    This talk covers what I wrote about in "Opaque Projections," but this is a moving image, with sound.

    April 11, 2012

    Opaque Projections

    Last night (April 10, 2012), I spoke at the San Francisco Swissnex office on a panel entitled "Data is* the New Oil." When I was told the title of the panel, it struck me as an odd metaphor. Oh, I understand the intent: oil was the fuel for the 20th century industrial economy, and information is the fuel for the 21st. But oil has a key characteristic that simply isn't true for data.

    Oil is limited -- we have a declining stock. Whether you think peak oil happened a few years ago, will happen soon, or is still a ways off, the truth is the same: there is a finite supply of oil, and unless we stop using it there will at some point be no more to extract. Nearly all of the other social, economic, and political aspects of oil derive from this fact. Its importance is inextricably linked to its scarcity.

    Data (or information), conversely, is growing in availability. One study claims that we'll go from a global digital data footprint of 800 billion gigabytes (0.8 zettabytes) in 2009 to 35 trillion gigabytes (35 zettabytes) in 2020. If you need an energy metaphor, you could say that information works more like a "breeder reactor" -- processing the information we create allows us to create even more information. A service called "Dataminr" uses the 340 million-odd daily public Twitter posts to algorithmically monitor the world, and claims that it was able to tell its subscribers of Osama bin Laden's death 20 minutes before big media because of its ongoing analysis. It's not the first example of that kind of processing, by the way; Canadian epidemiologists at the Global Public Health Information Network scan newspaper articles around the world, spotting early indicators of emerging disease outbreaks.
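The kind of monitoring Dataminr and GPHIN perform can be boiled down to a caricature: track a rolling baseline of message volume for a topic and flag sudden bursts. A minimal sketch (the window size and threshold are arbitrary assumptions, not anything from either service):

```python
from collections import deque

def spike_detector(window: int = 60, threshold: float = 3.0):
    """Flag an interval when its message count exceeds `threshold` times
    the rolling average -- a crude version of the burst detection that
    services like Dataminr are built on."""
    history = deque(maxlen=window)

    def observe(count: int) -> bool:
        baseline = sum(history) / len(history) if history else 0.0
        history.append(count)
        return baseline > 0 and count > threshold * baseline

    return observe

detect = spike_detector(window=5, threshold=3.0)
quiet = [detect(c) for c in (10, 12, 9, 11, 10)]  # normal chatter
burst = detect(120)                               # sudden surge
```

Processing the stream this way is itself a "breeder reactor" in miniature: the flagged bursts are new information generated from existing information.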

    In other words, information and data aren't scarce, they're increasing rapidly and dramatically.

    But a related phenomenon is scarce, is declining in availability and increasing in value: opacity. Being hidden. Privacy.

    Information isn't the new oil; opacity is the new oil. The ability to be opaque -- the opposite of transparent -- is increasingly rare, valuable, and in many cases worth fighting for. It's also potentially quite dangerous, often dirty, and can be a catalyst for trouble. In short, it's just like oil. (Which makes me wonder when we'll have a new OPEC -- Organization of Privacy Enabling Companies.)

    Opacity isn't inherently good or bad -- it's both. To people who need privacy and secrecy to survive, opacity is immensely, critically valuable; for people who want privacy and secrecy to hide misbehavior, opacity is also rather important. But for individuals and organizations alike, opacity is becoming harder to maintain.

    Some people have argued that privacy is dead. Typically, those making this argument are wealthy white guys, able to buy as much privacy as they want (and likely to get extremely annoyed when their privacy is violated). And for folks like these, opacity will always be easier to come by than for the rest of us.

    This does not make people happy, unsurprisingly, and there have been a few different approaches to trying to hold onto a shred of opacity; as the technologies of observation and transparency continue to evolve, these approaches will have to evolve, too.

    The first, and most common, is Regulation. Top-down and reactive, regulation says "don't violate privacy" and then punishes those who do. The only way that regulation stops the violation of opacity is through deterrence -- if I think I'll get caught and charged with a crime, I'm ostensibly less likely to want to do the bad thing in the first place. Ostensibly. In and of itself, regulation seems to be of declining value.

It's better when coupled with the second approach, Protection. Privacy protections tend to be bottom-up -- that is, undertaken directly by those who wish to keep things private -- and proactive. If nobody can see what I'm doing, my privacy is secure. Increasingly, this method of holding onto opacity requires strong crypto and smart technologies, but the real problem is economic: being able to stop others from seeing your personal information increasingly eats into their profits. And the kinds of tools that can protect me from Mark Zuckerberg can also protect a criminal from the government, so there isn't a lot of official encouragement to use individual privacy protection.

    It's the last approach that really interests me: Pollution. Poisoning the data stream. Putting out enough false information that the real information becomes unreliable. At that point, anyone wishing to know the truth about me has to come to me directly, allowing me to control access. It's hardly a perfect option -- the untrue things can be permanently connected to you, and it does kind of make you hard to trust online -- but it's the one approach to opacity that's purely social and extremely difficult to stop.
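As a toy illustration of the pollution strategy (every value here is invented), the idea is simply to bury the one true record among plausible decoys, so that anyone scraping the profile cannot tell which entry is real:

```python
import random

def pollute(real_value: str, decoys: list, k: int = 4) -> list:
    """Return the real value hidden among k randomly chosen decoys,
    in shuffled order -- poisoning the data stream in miniature."""
    mixed = random.sample(decoys, k) + [real_value]
    random.shuffle(mixed)
    return mixed

# A fake-birthday mix for a profile field; all dates are made up.
birthdays = pollute(
    "1966-05-01",
    ["1970-02-14", "1981-09-09", "1975-12-25", "1990-04-01", "1968-07-04"],
)
```

Only the people who already know you can pick out the true entry -- which is exactly the access-control property the pollution approach is after.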

    Quick question: for those of you on Facebook, did you provide your real birthday? If so, why?

    Part of the reason why commercial entities are able to run roughshod over our personal privacies is that we've become programmed to give them our information. They'll say in BIG SCARY LETTERS that you must provide truthful personal info, but seriously -- if you give Facebook a fake date of birth, how are they going to know? If you check in from fake locations, how can they prove you're not where you say you are? Your actual friends and family will know the truth.

    And here's the fun part: if lots of people start lying about themselves on social media, even the truth becomes unreliable.

    I think somebody should start selling T-Shirts that say, in big block letters, I LIE TO FACEBOOK. That may or may not be true for me -- but how would Facebook (or Google Plus, or Friendster, or whatever) know for sure?

    So here's the big problem: we've become accustomed to the assumption that the status quo of deteriorating privacy is the only possible world. That's unlikely -- but the alternatives are going to be problematic in their own ways. Is a world of people lying about themselves preferable to a world of asymmetric transparency, where those with money and power can hide themselves but know whatever they want about you?

    We're not likely to have a perfect future of (as David Brin says) privacy for me and accountability for everybody else. It's going to be a choice between various imperfect options. Wish us luck.

    * Yes, yes -- grammatically, it should be "Data are the new oil."

    February 14, 2012

    Scenarios of Ill Repute

    A new volume on the evolving role of digital reputation, The Reputation Society: How Online Opinions Are Reshaping the Offline World is now out (also in Kindle format). Edited by my former Worldchanging colleague Hassan Masum (along with his colleague at the University of Waterloo, Mark Tovey), The Reputation Society includes essays by a wide array of writers, including Craig Newmark, Cory Doctorow, Alex Steffen, and me. My contribution, the cleverly-titled "The Future of Reputation Networks," is a set of scenarios of how online reputation systems might evolve over the next 10-20 years.

    I use a classic two-dynamic scenario structure (whether the reputation networks are broad or narrow, and whether the reputation scores are directly assigned by users or "emergent"), resulting in four fairly different worlds.

    In the extended entry you'll find one of the four scenarios, "Augmented Relationships."

    Continue reading "Scenarios of Ill Repute" »

    July 13, 2011

    Sight Licenses

    BERG's Matt Jones asked if I'd be willing to contribute a short essay to a print item he was designing, a little something called SVK. Written by Warren Ellis, drawn by D'Israeli, foreword by William Gibson. Yeah, let me think about that and get back to you.

    Because the plot of SVK concerns an unusual form of augmented reality technology, Matt asked if I'd do a little exploration of some of the other impacts of AR. Here's what I came up with:

    Sight Licenses

    With early prototypes already in the labs, augmented reality (AR) contact lenses promise to be a commonplace tool by sometime in the next decade. Most AR enthusiasts talk about environmental information or social networking as key uses of the technology, but they’re missing the larger vision: AR lenses will allow real-time control -- and pricing -- of what we see.

    AR lenses would work by putting a visual layer over what you’re looking at, as identified by a combination of location information and local transponders. That visual layer can be anything from mapping info to text bubbles to animations; as the technology improves, limits such as data rates, graphic resolution, and image placement precision will become non-issues. By the time AR lenses become commonplace, the experience could be seamless.

    Right now, we control what can be seen by putting walls -- physical or technological -- around that which we want to limit. These mediated experiences are exceptions to the normal rule that, if it’s in public, you can see it. But with AR lenses, all visual experiences can be mediated, no matter the place or the format. By adding a transponder or locative data, anything we look at -- buildings, scenic vistas, people, clothing, anything -- can have an augmentation overlay. That means the visual experience of anything we look at can be controlled. And if I can control access, I can make you pay for access.

    In many cases, this will simply lead to the further expansion and sophistication of visual advertisements. Changing, targeted ads will blanket walls, roads, even clothing. Reality becomes a sponsored app.

    As annoying as this would be, however, it pales in significance compared to the ability of commercial and/or governmental gatekeepers to charge for visual access to everyday experiences.

    Architecture, fashion, art, all of this and more could be declared protected design, for reason of copyright or security, and the ability to see it limited by license or law. You want to witness Lady Gaga’s latest get-up? Pay for it. You want to gaze at the majesty of a new mile-high tower? Buy a license. Some designs may be impossible to use otherwise, the interface and affordances apparent only to those who have topped up their accounts.

    What about the “analog hole,” removing the AR lenses to see unfiltered reality? The likelihood that the full extent of the design would only be visible with augmented vision is one limit; the hassle of taking contact lenses out in public is another. Not using AR lenses at all would be unthinkable -- if the utility of AR is great enough to lead to widespread adoption of the technology, going without would be as socially, economically, and even politically crippling as going without a mobile phone today.

    This means that the current technology fights -- between jailbreakers and phone makers, between DRM advocates and open source activists -- aren’t going away any time soon. The battlefield, however, will shift to our eyeballs. As a result, any hope of a shared vision of the future should be set aside. Instead, we’ll be fighting over an increasingly fragmented, splintered view of the world in front of us... and paying for the privilege.
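The gatekeeping model the essay describes -- overlays keyed to transponders, with full fidelity reserved for paying viewers -- can be illustrated with a toy sketch. Everything here (the transponder IDs, the overlay tiers, the `render` function) is invented for illustration, not a description of any real AR system:

```python
# A speculative sketch of licensed sight: each tagged object maps to an
# overlay, and the full version is only served to viewers whose account
# holds a license for it. All names and data here are invented.

OVERLAYS = {
    "tower-9137": {"free": "generic facade", "licensed": "full mile-high design"},
    "gown-feb-show": {"free": "plain beige shift", "licensed": "diamond gown"},
}

def render(transponder_id: str, licenses: set) -> str:
    """Return the overlay a viewer sees for a tagged object."""
    overlay = OVERLAYS.get(transponder_id)
    if overlay is None:
        return "unaugmented reality"  # no tag, no mediation
    tier = "licensed" if transponder_id in licenses else "free"
    return overlay[tier]

paid_view = render("gown-feb-show", licenses={"gown-feb-show"})
free_view = render("gown-feb-show", licenses=set())
# The paying viewer sees the diamond gown; everyone else sees the beige shift.
```

The point of the sketch is how little machinery the business model needs: once every visual experience passes through a lookup like this, the license check is one line.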

    March 3, 2010

    New Fast Company: Augmented (Fashion) Reality

    My latest Fast Company piece is up: Augmented (Fashion) Reality takes a look at what happens when the world of fashion gets ahold of AR technology.

    It starts out with a scenario. Here's a bit of it:

    I remember the first time I saw an AR outfit. I did a double-take, because I could have sworn that the woman had been wearing a fairly bland dress when I saw her at a distance, but suddenly she was wearing a sparkling gown that I could swear was made of diamonds. A few minutes later, I took off my arglasses to get something out of my eye, and *poof* her dress was back to the simple beige shift. That bland outfit was actually carrying a half-dozen or so specialized smart tags, providing abundant 3D data that my arglasses--and the AR systems of everyone else around her--translated into that diamond dress.

    I note late in the essay that fashion may end up being the "killer app" for wearable AR. The more I think about it, the more it rings true -- AR can't just be about finding the nearest Starbucks or getting a read on local environmental conditions. It has to be playful, too.

    March 2, 2010

    Participatory Panopticon On Its Way (Maybe)

    Picturephoning gives a heads-up on "Recognizr" (you know it's cutting-edge when they leave out the "e"), an iPhone app that will supposedly recognize faces seen by the camera. Here's the promo video:

    It's a prototype from the Swedish group The Astonishing Tribe. Apparently, a photo taken in Recognizr (sigh) gets compared to pictures in various social networking platforms, including Flickr (see? no "e"!!!!), Facebook, and the like.

    Picturephoning links to a hysterical Daily Mail article, which plays up the STALKRS WILL STEAL UR VIRTUE angle, not really looking at the more interesting -- and potentially more troubling -- aspects. Popular Science is a little more sober, but ultimately not hugely more informative.

    Until I see something more than just the one video, I'm going to call this one Plausible, but not at all difficult to hoax. Anybody know better?

    January 30, 2010

    New Fast Company: iWorry

    (Well, "new" in the sense of it's the most recent; it actually went up earlier this week, I just didn't get around to linking to it here. Ahem.)

    "iWorry" is my foray into the iPad discussion, focusing less on the product and more on its support infrastructure:

    But the iPad isn't a phone; it is a general purpose computer. It does email and Web and documents and presentations and games and all of the other kinds of things we do with our "regular" computers. Yet it will suffer under the same restrictions as the iPhone--prohibition of any application that Apple doesn't like, for whatever reason. Sometimes that means the application uses undocumented features, but startlingly often it just means "duplication of features"--the application does something that Apple's own software does, but does it differently. (This raises the uncomfortable question as to whether the Kindle app for the iPhone--which works quite nicely, actually--will run on the iPad.)

    These restrictions aren't going to hurt Apple's bottom line, and admittedly will probably make for a more comfortable user experience on the device itself. But the risk -- and the source of my worry -- is that the locked-down app model moves from these kind of appliance systems to the kinds of devices that have historically been open. If the next version of the MacOS insists that you use a "MacOS App Store" to get the software you want, I'll be moving to another platform.

    I brought up a similar point in a conversation with Annalee Newitz, who wrote about her own concerns about the iPad for io9, Why the iPad is Crap Futurism. I think her summary of my point following the quote gets it exactly right.

    As futurist Jamais Cascio told io9:
    This is Apple's big push of its top-down control over applications into the general-purpose computing world. The only applications that will work with the iPad are those approved by Apple, under very opaque conditions. On a phone, that's borderline acceptable, but it's not for something that is positioned to overlap with regular computers.

    The iPad has all the problems of television, with none of the benefits of computers.

    If I get one, it will be for the hands-on experience of seeing what kinds of uses I would have for a device that sits between a smart pocket device and a notebook computer. But I promise not to like it.

    October 13, 2009

    Atlantic: Filtering Reality

    My second article for the Atlantic Monthly hits the shelves this week, and can now be found online. "Filtering Reality" looks at the political implications of augmented reality. It's a theme I've explored before, but the Atlantic editors asked me specifically to do this topic.

    You don’t want to see anybody who has donated to the Palin 2012 campaign? Gone, their faces covered up by black circles. You want to know who exactly gave money to the 2014 ban on SUVs? Easy—they now have green arrows pointing at their heads.

    You want to block out any indication of viewpoints other than your own? Done.

    This will not be a world conducive to political moderation, nor one where differing perspectives get along comfortably. It won’t take a majority of people using these filters to poison public discourse; imagine this summer’s town-hall screamers on constant alert, wherever they go. Yet this world will be the unintended consequence of otherwise desirable developments—spam filters, facial recognition, augmented reality—that many of us will find useful.

    It's a much shorter piece than my previous Atlantic essay, but hopefully the readers will find it just as provocative.

    (Top Image: by "Gluekit" as illustration for the article; it's a variant of my original artifact image, below.)

    Handheld Augmented Reality

    October 12, 2009

    Danger, Danger!

    Microsoft/Danger/T-Mobile to millions of Sidekick users: Whoops.

    Short version: Microsoft (who now owns Danger, the makers of the Sidekick) decided to migrate data from one storage network to another. That migration failed, and corrupted the data. Okay, annoying, so restore from the backup, right?

    Wrong. No backups. None. Zero. El zilcho.

    So millions of Sidekick users awake this past weekend to find that all of their data are gone -- or, in the best scenario, the only data they have are the most recent stuff on the Sidekick itself, and if they let the device power down, they'll lose that, too.

    You can't say I didn't warn you.

    January 19, 2009 - "Dark Clouds":

    Here's where we get to the heart of the problem. Centralization is the core of the cloud computing model, meaning that anything that takes down the centralized service -- network failures, massive malware hit, denial-of-service attack, and so forth -- affects everyone who uses that service. When the documents and the tools both live in the cloud, there's no way for someone to continue working in this failure state. If users don't have their own personal backups (and alternative apps), they're stuck.

    Similarly, if a bug affects the cloud application, everyone who uses that application is hurt by it. [...]

    In short, the cloud computing model envisioned by many tech pundits (and tech companies) is a wonderful system when it works, and a nightmare when it fails. And the more people who come to depend upon it, the bigger the nightmare. For an individual, a crashed laptop and a crashed cloud may be initially indistinguishable, but the former only afflicts one person and one point of access to information. If a cloud system locks up, potentially millions of people lose access.

    So what does all of this mean?

    My take is that cloud computing, for all of its apparent (and supposed) benefits, stands to lose legitimacy and support (financial and otherwise) when the first big, millions-of-people-affecting, failure hits. Companies that tie themselves too closely to this particular model, as either service providers or customers, could be in real trouble.

    And what do we see now? "Microsoft's Danger Sidekick data loss casts dark cloud on cloud computing." "Microsoft's Sidekick data catastrophe." "Cloud Goes Boom, T-Mo Sidekick Users Lose All Data."

    Okay, it's easy to blame the failure to make backups for this disaster. But the point of resilience models is that failure happens. A complex system should not be so brittle that a single mistake can destroy it. Here's what I wrote back in January about what a resilient cloud could look like:

    Distributed, individual systems would remain the primary tool of interaction with one's information. Data would live both locally and on the cloud, with updates happening in real-time if possible, delayed if necessary, but always invisibly. All cloud content should be in open formats, so that alternative tools can be used as desired or needed. Ideally, a personal system should be able to replicate data to multiple distinct clouds, to avoid monoculture and single-point-of-failure problems. This version of the cloud is less a primary source for computing services, and more a fail-safe repository. If my personal system fails, all of my data remains available and accessible via the cloud; if the cloud fails, all of my data remains available and accessible via my personal system.

    It may not be as sexy as everything-on-the-cloud models, and undoubtedly not as profitable, but a failure like this past weekend's Microsoft/Danger fiasco -- or the myriad cloud failures yet to happen (and they will happen) -- simply wouldn't have been possible.
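The fail-safe repository model sketched in that January excerpt -- local-first data, invisibly replicated to multiple independent clouds -- is simple enough to show in miniature. This is a toy illustration, with plain dictionaries standing in for real storage services; the class and method names are mine, not any actual sync product's:

```python
from typing import Optional

# A toy sketch of the "fail-safe repository" model: data lives locally
# first, and is replicated to several independent remote stores, so the
# failure of any single store (local or remote) loses nothing.

class ReplicatedStore:
    def __init__(self, local: dict, clouds: list):
        self.local = local
        self.clouds = clouds  # multiple distinct clouds: no monoculture

    def save(self, key: str, value: str) -> None:
        """Write locally, then best-effort replicate to every cloud."""
        self.local[key] = value
        for cloud in self.clouds:
            try:
                cloud[key] = value   # a real backend call could fail here
            except Exception:
                pass                 # degraded, but the local copy survives

    def load(self, key: str) -> Optional[str]:
        """Prefer the local copy; fall back to any cloud that has it."""
        if key in self.local:
            return self.local[key]
        for cloud in self.clouds:
            if key in cloud:
                return cloud[key]
        return None

store = ReplicatedStore(local={}, clouds=[{}, {}])
store.save("notes", "draft v1")
store.local.clear()              # simulate a crashed personal machine
recovered = store.load("notes")  # the data comes back from a cloud replica
```

The design choice worth noticing: neither the local machine nor any single cloud is a single point of failure, which is precisely what the Sidekick architecture lacked.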

    September 9, 2009

    New Fast Company: Awareness is Everything

    I'm a bit late in noting this, but last week's Fast Company article is indeed available. "Awareness is Everything" looks at what happens as we keep adding sensory awareness to our personal devices.

    Imagine a desktop with a camera that knows to shut down the screen and eventually go to sleep when you walk away (but stays awake when you're sitting there reading something or thinking), and will wake up when you sit down in front of it (no mouse-jiggling required).

    Or a system with a microphone that listens for the combination of a phone ringing (sudden loud noise) followed by a nearby voice saying "hello" (or similar greeting), and will mute the system automatically. [...]

    ... the question isn't "can this happen?," it's "will we want it?"
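The camera-aware desktop in that excerpt boils down to a small state machine: track how long it's been since a face was seen, dim, then sleep, and wake instantly when one reappears. Here's a minimal sketch, with invented thresholds and state names standing in for whatever a real sensor-fusion system would use:

```python
# A toy sketch of the presence-aware display described above: dim and
# sleep when the camera stops seeing a face, wake the moment one returns.
# Thresholds, state names, and the tick-based sampling are all invented.

AWAKE, DIMMED, ASLEEP = "awake", "dimmed", "asleep"

class PresenceDisplay:
    def __init__(self, dim_after: int = 3, sleep_after: int = 10):
        self.dim_after = dim_after      # samples without a face before dimming
        self.sleep_after = sleep_after  # samples without a face before sleeping
        self.absent_ticks = 0
        self.state = AWAKE

    def tick(self, face_detected: bool) -> str:
        """Feed one camera sample; return the resulting display state."""
        if face_detected:
            self.absent_ticks = 0
            self.state = AWAKE          # wake instantly, no mouse-jiggling
        else:
            self.absent_ticks += 1
            if self.absent_ticks >= self.sleep_after:
                self.state = ASLEEP
            elif self.absent_ticks >= self.dim_after:
                self.state = DIMMED
        return self.state

display = PresenceDisplay()
for _ in range(4):
    display.tick(face_detected=False)   # user walks away
away_state = display.state              # dimmed, on its way to sleep
display.tick(face_detected=True)        # user sits back down
back_state = display.state              # awake again, instantly
```

Note the asymmetry, which is the whole user-experience point: going to sleep requires sustained absence, but waking takes a single positive sample.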


    August 6, 2009

    New Fast Company: New Rules for the Photoshop Era

    My new Fast Company essay, "Five New Rules for the Photoshop Era," takes on the participatory decepticon, and discovers that it was apparently born in Kenya.

    If you're annoyed by the "birther" churn, get used to it--this kind of political hack is here to stay. It's easy and effective. Cheap digital tools make the work of faking official documents, "candid" images, and behind-the-scenes videos readily possible, even for rough amateurs.

    Moreover, the hacks don't have to convince skeptics--they only need to strengthen believers. Faked materials just need to be convincing enough to cause doubt in the minds of people already inclined to believe a lie. For people trying to undermine political opponents, uncertainty is both easy and useful. Imagine if the hoax Obama birth certificate had been produced in October of 2008, instead of August of 2009: it's all too likely that the chaos surrounding the document could have cut his percentage in closely-contested states.

    By the way, you, too, can make your own Totally Official and Not At All Hoaxed Kenyan Birth Certificate!

    July 15, 2009

    Human Interfaces

    Warren Ellis:

    Clay Shirky’s line about how anything that ships without a mouse is broken — that’s her [his daughter's] generation. (I still think he was just one foot behind the time — I understand he was working from an anecdote, but I can’t help thinking the word he should have used is "touchscreen.")

    Yes. This.

    I've had the Amazon Kindle 2 for a few months now (that's it on the left in the picture, next to my ancient Newton MessagePad), and it's been a great device for the far-too-abundant travel I've been doing lately. Much of that travel has been overseas, and since the Kindle isn't available outside of the US (and Canada, I think), I've been running into a lot of people who are curious and want to check it out.

    And what's the first thing they try to do?

    They try to "turn the page" by flicking a finger across the screen. But the Kindle doesn't have a touch screen. The "e-paper" display it uses is easy to read (at least in good lighting) and extremely low-power, but it is not touch sensitive. Which means that the second thing that people checking out my Kindle do is get a funny confused look -- why doesn't it work? -- before having that moment of realization that this device doesn't have that seemingly obvious functionality. That it's "broken."

    What's particularly notable here is that the vast majority of people who have gone through this "Ooh! Oh." experience aren't teens or young adults; they're people across a wide range of ages, including people who are older than I am.

    A handheld device's screen should be touch-sensitive. It took us a while to figure that out, requiring a smart user interface team (at Apple, in this case) to turn the annoying (stylus-based touch screens are usability insults) into the obvious. But now that the kinetic-memetics have taken root, anything that works otherwise is incomplete.

    Or, for all intents and purposes, broken.

    June 11, 2009

    New Fast Company Column: iPhone Augmented Reality

    Because it's actually a blogging requirement to write something about new Apple hardware when it's announced.

    One of the more important features of the new iPhone may be the least-widely heralded by the tech punditry: it has a compass.

    This matters not because now you'll always know which way is North with the iPhone, or even because you can make a quick-and-dirty metal detector with it. It matters because it finally opens up the iPhone to real augmented reality. In that august position, it joins the ranks of a handful of other smartphones, including (in particular) the Android G1 and the Nokia N97.

    Of course, I say that about the Nokia N97 simply due to tech specs and early reviews, not because I have my hands on one. Just saying.

    May 28, 2009

    Participatory Panopticon: The Official Version

    The Institute for the Future's 2007 Ten-Year Forecast included, as one of the forecast items, the Participatory Panopticon. IFTF is now making past Ten-Year Forecast materials more readily accessible to the public, and I was pleased to see that the Participatory Panopticon document (including a discussion between David Brin and myself) is now available for download (PDF).

    A highlight from the Brin-Cascio conversation:

    Jamais: Historically, we haven’t done a very good job at making village communities that allow their members to do and become the things that they want. Overwhelming observation has, by and large, been more often used to suppress outside-the-mainstream behavior than to go after the powerful and corrupt. How do you see this emerging world differing?

    David: You and I are examples of the sort of people who were burned at the stake in almost any other culture. Yet, in this one, we are paid well to poke at the boundaries of the “box.” I’m pretty grateful for that, and for the millions of others like us, who are allowed and encouraged to bicker and compete and criticize. It is a noisy, noisome civilization and its imperfections may yet kill us all. But it so vastly beats all of the neat and tidy ones that came before.

    Now we’re entering a new era when the village seems about to return. With our senses and memories enhanced prodigiously by new prostheses, suddenly we can “know” the reputations of millions, soon to be billions, of fellow Earth citizens. A tap of your VR eyeglasses will identify any person, along with profiles and alerts, almost as if you had been gossiping about him and her for years.

    It’s a seriously scary prospect and one that is utterly unavoidable. The cities we grew up in were semi-anonymous only because they were primitive. The village is returning. And with it serious, lifelong worry about the state of our reputations. Kids who do not know this are playing with fire. They had better hope that the village will be a nice one. A village that shrugs a lot, and forgives.

    I have to say, that last line may be my favorite thing that David has ever said or written.

    Fast Company: The Transparency Dilemma

    Last week's and this week's "Open Future" columns for Fast Company make up a two-part examination of the dilemmas surrounding transparency.

    In "I Can See You," I wrote:

    We leave digital footprints everywhere we go, and those footprints are becoming easier and easier to track. Although many of us believe that sunlight is the best disinfectant, and that transparency is generally a good thing for a society, the lack of control over what you reveal about yourself is often troubling. The ease with which abundant personal info can be used for (e.g.) identity theft creates a situation where we have many of the dilemmas of transparency without enough of the benefit. [...]

    We live in a world of unrelenting transparency. What can we do about it?

    I do believe that transparency is, on balance, a social good. But it would be naïve at best to believe that this social good is unalloyed. Greater transparency -- particularly a kind of transparency that's both incomplete and hard-to-control -- can create enormous problems for individuals, without offering reliable solutions.

    Now, in "Managing Transparency," I continue:

    What are the strategies we can use to deal with unrelenting transparency? Fight it. Accept it. Deceive it. [...]

    The last strategy, deception, boils down to this: we may be able to watch each other, but that doesn't mean what we show is real.

    Call it "polluting the datastream"--introducing false and misleading bits of personal information (about location, about one's history, about interests and work) into the body of public data about you. It could be as targeted as adding lies to your Wikipedia entry (should you have one) or other public bios; it could be as random as putting enough junk info about yourself onto Google-indexed websites and message boards. Many of us do this already, at least to a minor degree: at a recent conference, I asked the audience how many give false date-of-birth info on website sign-ups; over half the audience raised their hands.

    The goal here isn't to construct a consistent alternate history for yourself, but to make the public information sufficiently inconsistent that none of it could be considered entirely reliable.

    This is actually a point I explored in a bit of depth at Futuresonic earlier this month. In a world of partial transparency, where both total privacy and symmetric transparency are effectively impossible, it may be that deception is the most workable method of protecting one's privacy.

    I didn't mention this in the FC piece -- it runs long as it is -- but the technologies of the "participatory decepticon" have an interesting role here. Rather than using the various means of creating false images, videos, recordings and such to manipulate perceptions of political figures and other public targets, those tools could be used to easily create false histories for ourselves.
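What "polluting the datastream" might look like in practice is easy enough to sketch: generate a stream of mutually inconsistent profile fragments, so that no single public record about you can be trusted. Everything below -- the field names, the value lists, the `pollute` function -- is invented for illustration:

```python
import random
from datetime import date, timedelta

# A toy illustration of datastream pollution: emit mutually inconsistent
# profile snippets so no single public record is reliable. All field
# names and candidate values are invented for this sketch.

CITIES = ["Oakland", "Portland", "Austin", "Reykjavik", "Osaka"]
JOBS = ["cartographer", "beekeeper", "actuary", "set designer", "futurist"]

def fake_birthday(rng):
    """Return a plausible but randomly chosen date of birth."""
    start = date(1950, 1, 1)
    return str(start + timedelta(days=rng.randrange(40 * 365)))

def pollute(n, seed=0):
    """Generate n deliberately inconsistent profile records."""
    rng = random.Random(seed)
    return [
        {
            "birthday": fake_birthday(rng),
            "city": rng.choice(CITIES),
            "job": rng.choice(JOBS),
        }
        for _ in range(n)
    ]

profiles = pollute(5)
# The goal is inconsistency, not a coherent alternate identity: with
# several conflicting records in circulation, none reads as "the real one."
distinct_birthdays = len({p["birthday"] for p in profiles})
```

The design mirrors the argument in the quoted passage: you aren't building a consistent cover story, you're manufacturing enough contradiction that the aggregate becomes unreliable.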


    May 21, 2009

    New Fast Company Column: I Can See You

    My new Fast Company column is now up. I Can See You looks at the dilemmas surrounding mass transparency and the "culture of documentation."

    With the rise of cheap, networked recording devices--aka, cameraphones--we're seeing the emergence of a culture of documentation, where individuals use their cameraphones to record and share unusual and often problematic moments. From events as amusingly scandalous as South Korea's "dog poop girl" to those as shocking and tragic as the New Year's Eve killing by an Oakland transit cop, citizens are using cameraphones to catch misbehavior and make it undeniable. What's particularly notable (although not especially surprising) is the availability of multiple perspectives on the same event, as personal documentation with a cameraphone becomes almost second-nature for many of us.

    (Here's a tip for aspiring filmmakers: one way for an audience to see a spectacular event as "real" is for any crowd scenes surrounding the event to include at least 10% of the people there recording the moment with their phones. Disaster or science fiction movies set in the present day that don't include such mass documentation will increasingly look weird and dated.)

    That last bit was inspired by seeing the preview trailer for a new science fiction TV show (the remake of "V", I think) that had crowds all over the world gazing in wonder at the big giant space ships floating over their cities.

    And not a one of them was holding up a cameraphone.


    April 8, 2009

    Topsight, April 8, 2009

    Participatory Panopticon edition!

    I've been pounded with work, and haven't been keeping up with my bloggy duties. Here are some of the issues I've been following:

    • Sigh, Eyeborg: Yeah, "eyeborg" -- a guy in Canada (not the UK, as first reported -- thanks @clothbot) has built a micro-camera into his vacant eye socket.

    The eye will include a 1.5mm CMOS camera, an RF transmitter “smaller than the tip of a pencil eraser” and a lithium-polymer battery. Footage will probably be sent to recording equipment in a rucksack, which will presumably be worn by Spence.

    His aim, aside from breaking technological boundaries, is to raise awareness of the issues surrounding surveillance in our society.

    I have to say, there are ways to raise awareness of this issue without implanting a camera in your eye socket, but that's just how he rolls, apparently. (Via Futurismic)

    • Hope It Doesn't Conveniently "Break": The company behind the taser stun weapon -- Taser, appropriately enough -- is set to release a wearable digital camera and recording system for use by law enforcement officers. The Axon system (PDF) provides real-time recording, from the officer's perspective, of everything that happens on duty. The recording, which can't be altered in-system and gets uploaded to a secure off-site location at the end of a shift, can then provide documentary evidence of precisely what happened in every policing encounter.

    This actually sounds pretty good, although I'd love for it to have a streaming upload mode so that the evidence gets locked up as it happens, instead of at the end of the day. Still, this is exactly the kind of thing that should be a mandatory part of the police uniform, for the protection of both the police officers and the citizens.

    Just one problem, though: "One-Touch “Privacy Mode” temporarily suspends recording"

    Sigh. Yes, I know that the cops don't want to be recorded while they go to the bathroom, but this just screams "abuse me" -- both to the cops & prosecutors and to defense attorneys trying to find a way to dispute a recorded encounter. I would hope, at the very least, that the GPS and time tracking don't get suspended in "privacy mode."

    • Plausibly Surreal: This iPhone application, described on the "TidBITS" website, is, unfortunately, just an April Fool's joke. That said, there's no reason why "Invisibility" couldn't happen -- and, I suspect, there are quite a few people who would want it.

    Invisibility works by creating a profile of each person you want to avoid, using a variety of inputs. [...] The tracking screen uses Google Maps to show you the current location (if known) of anyone you've profiled, along with a circle of probability and a timestamp. This is useful when you're taking a stroll and want to make sure the coast is clear.

    Invisibility can also use Bluetooth and Wi-Fi signals to identify someone's cell phone within a range of 30 to 100 feet. [...] The program can also tap into Facebook messages, Flickr geotagging information, Skyhook Wireless location updates, Twitter, Dopplr travel logging, Blogger posts, and all kinds of other public and private (once you've connected it to your accounts) social media and buddy services.

    The best part? The description of the app as "Asocial Networking" -- a way to avoid constant availability. This is so inevitable, it's not even funny.

    March 24, 2009

    New Column Kicks Off

    Starting today, I have a ~weekly column at Fast Company, covering technology, ethics and the environment, and innovation, all from a futures perspective. My editor, Noah Robischon, asked me to kick off the column with a topic near-and-dear to my heart: what happens to social relationships when we live in the era of immersive visible data.

    When 'Mad Men' Meets Augmented Reality

    ...The more top-down control there is in the digital world, the less spam and malware we'll see -- but we'll also lose the opportunity to do disruptive, creative things. Consider Apple's iPhone App Store: Apple's vetting and remote-disable process may minimize the number of harmful applications, but it also eliminates programs that do things outside of what the iPhone designers intended.

    Blended-reality technology could play in a limited, walled-garden world, but history suggests that it won't really take off until it offers broad freedom of use. This means, unfortunately, that ads, spam, and malware are probably inevitable in a blended-reality world. We're likely to deal with these problems the same way we do now: Good system design to resist malware, and filters to limit the volume of unwanted ads. All useful and necessary, but there's a twist: Filtering systems for blended-reality technologies may allow us to construct our own visions of reality.

    All familiar stuff to long-time readers of Open the Future, but hopefully a nice bit of provocation for the Fast Company audience.

    March 23, 2009

    A Thin Slab of Book

    The Kindle is one of those devices that tends to elicit one of two responses: "waste of money" or "must have it now." For quite awhile, I had the first reaction. Then, after what one might call a "context shift," I found myself squarely embracing the second.

    When the second-generation Kindle came out last month, I found its updated styling a bit more appealing than the first generation version, but I couldn't get past the idea of paying $359 for a book reader. I'm not a one-a-day reader by any means; I do most of my reading online these days. That $359 would easily pay for a year or two's book purchases. Feh, I said, and went back to ignoring it.

    The recent flap over whether the Kindle's text-to-speech feature should be considered the equivalent of an audiobook brought it back to my attention. In the course of looking around for various relevant arguments, I stumbled across a single blog entry that changed my mind about the device. Randall Munroe, the creator of the genius XKCD webcomic, blogged about his recent purchase of a Kindle 2. He wrote:

    I’m surprised at the talk of the cost being too high. For me, the comparison is to a laptop with a cellular broadband internet card — $1440 for a standard two-year contract. The Kindle 2 doesn’t have a full web browser, but if you’re favoring text-heavy websites (news, blogs, mail, wikis), it’s perfectly sufficient. Plus, it’s a nice screen and has many-day battery life. All in all I think it’s a more-than-reasonable price for something that lets me read reddit on the street corner so as to better shout at sheeple about government conspiracies.

    Shifting context: if I compare it to a stack of books, the cost is high; if I compare it, instead, to a web device -- one with a full-time 3G wireless connection and no monthly fee -- suddenly the price looks almost like a bargain.

    (More geekery follows.)

    Continue reading "A Thin Slab of Book" »

    March 6, 2009

    Rise of the Participatory Panopticon

    I'm at a future of video workshop at the Institute for the Future today, and the topic of the participatory panopticon has come up. For people who are new to the concept, here's the original discussion of the participatory panopticon, the text of a talk I gave in May of 2005. I'd been talking about the PP since early 2004, but this was the best summary of the argument (at least as it stood in 2005).

    The Rise of the Participatory Panopticon

    Soon -- probably within the next decade, certainly within the next two -- we'll be living in a world where what we see, what we hear, what we experience will be recorded wherever we go. There will be few statements or scenes that will go unnoticed, or unremembered. Our day to day lives will be archived and saved. What’s more, these archives will be available over the net for recollection, analysis, even sharing.

    And we will be doing it to ourselves.

    This won't simply be a world of a single, governmental Big Brother watching over your shoulder, nor will it be a world of a handful of corporate siblings training their ever-vigilant security cameras and tags on you. Such monitoring may well exist, probably will, in fact, but it will be overwhelmed by the millions of cameras and recorders in the hands of millions of Little Brothers and Little Sisters. We will carry with us the tools of our own transparency, and many, perhaps most, will do so willingly, even happily.

    I call this world the Participatory Panopticon.

    October 23, 2008

    This May Not Be the Droid You're Looking For

    So, through a series of unlikely events, I have a T-Mobile G1 "Google phone" on my desk right now. It arrived yesterday; beyond the jump are my 24-hours-later observations.

    New G1

    (More pictures can be found here.)

    Short version: it's not even close to perfect, but it's a viable alternative to the iPhone. The combination of camera, GPS, good screen, and open source makes it a likely first platform for early participatory panopticon development.

    Tech geekitude ahoy -- follow the link at your peril.

    Continue reading "This May Not Be the Droid You're Looking For" »

    September 12, 2008

    Massively-Multiplayer Decepticon

    A new pandemic is sweeping the planet. Police fired on secessionist demonstrators in Oregon. The Chinese government is trying (unsuccessfully) to suppress news of eco-terrorists bombing multiple coal-fired power plants. We're looking at climate refugees numbering in the tens of millions. The human race will go extinct by 2042.

    None of these are true. All of these are draft plot elements of Superstruct, the "massively-multiplayer forecasting game" I'm working on with Jane McGonigal and the Institute for the Future. The game -- which Jane describes as "real play, not role play" -- asks participants to imagine themselves in 2019, and to tell us (and the world) about the kinds of challenges they face, and the choices they make with their lives. We're asking participants to use a variety of media, from YouTube videos to Twitter posts, to document their future lives.

    Here's the dilemma: some people are going to believe that it's real. We're going to be playing at the edge of the Participatory Decepticon.

    The use of plausible-but-fake media has a long history, but increasingly, we live in a media-saturated culture that makes it hard to distinguish between the real and the realistic. And this has consequences.

    Earlier this week, Google News posted as current a six-year-old article from the South Florida Sun-Sentinel, reporting on the 2002 bankruptcy filing of United Airlines (UAL). The article in the Sun-Sentinel archive didn't carry a date record, and the Google algorithm decided (not unreasonably) that it was new. Although United isn't going into bankruptcy, it -- like all airlines -- faces a decidedly tough market, so a seemingly new announcement of bankruptcy proceedings seemed just reasonable enough to send United's stock price plummeting by 75%. After traders realized the mistake, the stock price regained most of its value by the end of the day.

    As anyone who knows the story of Orson Welles' War of the Worlds radio broadcast can attest, a story doesn't have to be true to cause a panic. But Welles' radio show had a limited audience; with the Internet, and with various news-pushing tools (from email to RSS to Twitter to texting to...) that emphasize short headlines, the reaction is both orders of magnitude faster and orders of magnitude stronger. The story does have to come from a reasonably-trusted source, though; it can't be just some random email or Twitter post (although the current spam trend of using plausible-but-provocative headlines to get you to open the message plays on this tendency, too).

    A quirky misalignment of the Sun archives and the Google News spider? Probably. But think about this for a moment: the sudden appearance of something easily proven to be untrue, but just plausible enough to be believed, was enough to cut three-quarters of the market value of a major corporation. Anyone who bought UAL at $3 -- the bottom of the drop -- made quite a tidy profit when the stock bounced back to around $11. (Bloomberg's mistaken publication of an obituary for Steve Jobs on August 28 would likely have had a similar impact, had it occurred during trading hours.)
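    The market arithmetic here is worth making explicit. A minimal sketch, using the post's own figures (a $3 bottom, a rebound to ~$11, a 75% drop); the implied pre-drop price of ~$12 is an inference from the 75% figure, not a quoted number:

```python
# Toy illustration of the UAL incident described above.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old price to new price."""
    return (new - old) / old * 100

pre_drop = 12.00  # assumed pre-panic price implied by a 75% drop to $3
bottom = 3.00     # bottom of the drop, per the post
rebound = 11.00   # approximate end-of-day price, per the post

drop = pct_change(pre_drop, bottom)  # -75.0
gain = pct_change(bottom, rebound)   # ~266.7 -- the "tidy profit"
```

    A 75% loss requires a ~267% gain to recover, which is why buying at the bottom of an obviously mistaken panic was so profitable.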

    The UAL event appears to have been entirely accidental. The next time probably won't be. My only question is whether it will happen as a plot for a crime drama episode before it happens in real life.

    It's remarkably easy for false-but-plausible images, video, and stories to be used to muddy the waters of economic, social, and political conflicts. I'm honestly a bit surprised that we haven't yet seen clear examples of this happening in the current US presidential election (photoshopped images of a vice-presidential candidate in a bikini notwithstanding); as the campaign grows more rancorous, though, I expect to see faked recordings showing up at any moment. Palin's relative obscurity would make it easy to create plausible-enough media of her doing or saying ridiculous or offensive things; the 20-30% of the American public willing to believe just about anything bad about Obama would make it easy to do the same thing to him. McCain and Biden could get their turns, too.

    It doesn't have to be anything close to real -- it just has to be realistic, and sufficiently believable to cause a quick "market" collapse. Even after the market recovers, the meme has been planted.

    What does this mean for Superstruct? Hopefully, we won't have too many people taking the various posts and videos to be real. But expect the Participatory Decepticon to have a prominent place in the world of 2019 -- and don't believe everything you read.

    August 13, 2008

    ...And Lest You Think I Was Just Kidding...

    Here's a very early version of an augmented reality system for the iPhone from ARToolworks.

    (Soundtrack Warning: The 1990s wants its rave music back.)

    August 12, 2008

    Making the Visible Invisible

    The Metaverse Roadmap Overview, an exploration of imminent 3D technologies, posited a number of different scenarios of what a future "metaverse" could look like. The four scenarios -- augmented reality, life-logging, virtual worlds, and mirror worlds -- each offered a different manifestation of an immersive 3D world. Of the four, I suspect that augmented reality is most likely to be widespread soon; moreover, when it hits, it's going to have a surprisingly big impact. Not just in terms of "making the invisible visible" -- showing us flows and information that we otherwise wouldn't recognize -- but also in terms of the opposite: making the visible invisible.

    Augmented reality (AR) can be thought of as a combination of widely-accessible sensors (including cameras), lightweight computing technologies, and near-ubiquitous high-speed wireless networks -- a combination that's well underway -- along with a sophisticated form of visualization that layers information over the physical world. The common vision of AR technology includes some kind of wearable display, although that technology isn't as far along as the other components. For that reason, at the outset, the most common interface for AR will likely be a handheld device, probably something evolved from a mobile phone. Imagine holding up an iPhone-like device, scanning what's around you, seeing various pop-up items and data links on your screen.

    Handheld Augmented Reality

    That's something like what an early AR system might look like (click on the image for much larger version).

    I have what I think is a healthy, albeit a bit perverse, response when I think about new technologies: I wonder how they can be used in ways that the designers never intended. Some of those uses may be beneficial (think of them as "off-label" uses), while others will be malign. William Gibson's classic line that "the street finds its own uses for things" captures the ambiguity of this question.

    The "maker society" argument that has so swept up many in the free/open source world is a positive manifestation of the notion that you don't have to be limited to what the manufacturer says are the uses of a given product. A philosophy that "you only own something if you can open it up" pervades this world. There's certainly much that appeals about this philosophy, and it's clear that hackability can serve as a catalyst for innovation.

    You're probably a bit more familiar with a basic example of the negative manifestation: spam and malware.

    (continued after the jump, with lots more images)

    Continue reading "Making the Visible Invisible" »

    August 6, 2008

    Mozilla Scenarios


    Last year, I mentioned obliquely that I had been asked to work on something very, very cool, but couldn't talk about it. Finally, I can: I joined with Adaptive Path to create a set of scenarios of the future of the Internet, used to build a model of what a future version of the web browser could look like. Adaptive Path and Mozilla have now announced that model, dubbed Aurora, with a series of videos demonstrating its use.

    Today, Adaptive Path chief Jesse James Garrett put up the original scenarios, and described a bit of the thinking.

    Jamais called on a whole lot of smart people and led them (and a bunch more from both Adaptive Path and Mozilla) through a two-day workshop to forecast one possible future for browsers and the Web. Through a series of group exercises, we identified three major trends that we thought would have the biggest impact on the web:
    • Augmented Reality: The gap is closing between the Web and the world. Services that know where you are and adapt accordingly will become commonplace. The web becomes fully integrated into every physical environment.
    • Data Abundance: There’s more data available to us all the time — both the data we produce intentionally and the data we throw off as a by-product of other activities. The web will play a key role in how people access, manage, and make sense of all that data.
    • Virtual Identity: People are increasingly expected to have a digital presence as well as a physical one. We inhabit spaces online, but we also create them through our personal expression and participation in the digital realm.

    You can read the scenarios here.

    They've been released under a Creative Commons license (Non-Commercial/Attribution/Share-Alike), so if the mood strikes you to play with these stories a bit, feel free.

    I'll be on a panel with Jesse next week at the UX Week conference, talking about the Aurora project and the future of the web.

    [Updated 10/25/11 to new location for scenarios.]

    June 10, 2008

    The Participatory Decepticon

    What happens when not only the tools for documenting the world but also the tools for manipulating our interpretations of reality have become democratized?

    The rise of technologies of ubiquitous personal observation -- what I've termed the "participatory panopticon" -- has already begun to transform how we relate to each other socially and politically. The acceleration of mobile media creation capabilities maps to a growing desire by individuals of all ages and backgrounds to have greater control over their personal media technologies. These tools move quickly from dubious to ubiquitous, and streaming video from cameraphones offers the best example.

    I've argued before that this kind of live streaming video from phones will likely be abundant and potentially quite important during the 2008 general election campaign in the US. We saw in 2004 how "video vigilantes" could demonstrate that the NY police had edited their arrest videos, resulting in a near-90% dismissal rate for protestor arrests during the Republican national convention. In 2008, anyone with a cheap Internet-enabled cameraphone will be able to serve the same "watching the watchmen" function.

    To get a sense of the potential scale of this phenomenon, take a look at these fantastic photos by Scout Tufankjian, who has followed the Obama campaign since well before the Iowa caucuses. Ignore for a moment the political context, and look at the crowds. In nearly every shot involving masses of people, you'll see cameraphones held up to record the moment. Most are likely to have been used for still photos, but a significant -- and growing -- percentage will have been used to record video (here's an example of what they get).

    We are flush with video documentation of our political world, and have become increasingly comfortable with checking out YouTube or Google Video links for political content.

    But just as the tools for recording the world have come down in price (sometimes in dramatic fashion), so too have the tools for editing and reshaping video recordings. Both MacOS and Windows come with decent-to-good free applications for movie editing, and the commercial packages offer even more power. It's entirely possible for a professional video production to be crafted on the same kinds of hardware you might use for playing games or blogging.

    This progression of technological capacities coincides with the increasing polarization and visibility of personal political discourse. People rant daily on blogs, produce fist-pounding videocasts, record angry podcasts. Political videos become viral hits, sometimes spawning parodies.

    The initial result of the combination of easy video documentation and political polarization can be summed up in two words: "Macaca Moment."

    But add easy video manipulation to the mix, and another possibility emerges: the crafting of political videos documenting candidate insults and errors that never happened. Not in a clumsy, easily-detected form, but as a sufficiently-believable web video. There are more than enough audio recordings out there of most major political candidates to allow political pranksters/"dirty tricksters" to make that candidate say just about anything; the cameraphone and flash video media offer insufficient clarity to be able to see if a candidate's mouth is truly saying the words he or she seems to be saying.

    Such a deception wouldn't stand for very long, but would almost certainly last long enough to set off a wave of furious blog posts and mainstream media attention. Initially, claims that the video was fake would be characterized as "campaign denials," and only after a bit of forensics (and people coming forward with alternate videos of the same events, but with different words) would it be clear that the video was a fake. Call it three days of chaos.

    Then it happens again. And again. Against other candidates. The returns would diminish rather quickly, but the percentage of Americans who believe firmly that Barack Obama is a Muslim suggests that the effects of faked videos would linger. The right "wrong" message, unleashed at the right time, could shift an election.

    Moreover, a proliferation of faked political videos would undermine the legitimacy of the YouTube/web video medium for political purposes. Any video showing a candidate -- or, just as easily, police officers, or neighbors, or musicians, or anyone else -- saying or doing something offensive could be dismissed as "just another Internet video hoax."

    Is there a way to counter this kind of participatory deception? The answer that comes initially to mind is labor-intensive, but very amenable to a bottom-up approach: constant monitoring of new additions to video sites, looking for claims that a video shows a candidate (or a candidate's spouse) doing something untoward. If the campaign can jump on and discredit the video before it takes hold, it might be able to head off the three days of chaos.

    I suspect that, once we see a faked video score a hit on a candidate, we'll see myriad counter-attacks and follow-ups. Some will be so ridiculous as to be easily dismissed; others will be so close to reality that they'll be hard to refute. Some will even be real mistakes or insults, but ignored by the press as yet another hoax.

    But don't worry: things will be even crazier for the 2012 election.

    ("Participatory Decepticon" phrase suggested by my friend and colleague Matt Chwierut)

    May 21, 2008


    As powerful as the images of people dealing with the immense disaster of Sichuan's magnitude-7.9 earthquake have been, none have struck me as much as this series. It was a wedding, and the photographer was starting to do his set of shots of the bride & groom.


    More here. Apparently, 33 guests remain missing in the collapsed church. (Update: All 33 made it out okay.)

    With every snapshot, every recording, every blog entry, we're documenting our world.

    April 14, 2008

    On the Record

    Whenever I talk about the participatory panopticon, one issue grabs an audience more often than anything else -- privacy. But the more I dig into the subject, the more it becomes clear that the real target of the panopticon technologies isn't privacy, but deception. We're starting to see the onset of a variety of technologies allowing the user to determine with some degree of accuracy whether or not the subject is lying. The most promising of these technologies use functional magnetic resonance imaging -- handy if you're conducting a police interview, perhaps, but not likely to be built into a cell phone any time soon. But it turns out that there's another emerging system for discovering deception, one that's not just potentially portable, but also offers the tantalizing possibility of determining if someone lied long after the fact.

    Ron Brinkmann is a visual technology expert, author of The Art and Science of Digital Compositing, and an occasional Open the Future reader. He recently blogged about a set of emerging, very experimental lie-detection technologies relying on images. One takes advantage of observations of so-called "microexpressions," a real phenomenon where split-second changes in our facial expressions correlate to our feelings about what we are saying. The other takes advantage of changes in skin temperature around the eyes, looking for a brief flare-up of heat that correlates with stress. Rather than reiterate Ron's post, I suggest you go read it.

    I want to call particular attention to an observation he makes late in the piece, however, because I think it's worth careful consideration:

    But enough about the future. Let’s talk about now. Because those last few video/audio analysis techniques I mentioned raise a particularly interesting scenario: Even though we may not have the technology yet to accurately and consistently detect when someone is lying, we will eventually be able to look back at the video/audio that is being captured today and determine, after the fact, whether or not the speaker was being truthful. In other words, even though we may not be able to accurately analyze the data immediately, we can definitely start collecting it. Infrared cameras are readily available, and microexpressions (which may occur over a span of less than 1/25th of a second) should be something that even standard video (at 30fps) would be able to catch. And today’s cameras should have plenty of resolution to grab the details needed, particularly if you zoom in on the subject [...].

    Which brings us to the real point of this post. Is it possible that we’ve gotten to the point where certain peoples - I’m thinking specifically of politicians both foreign and domestic - should be made aware that anything they say in public will eventually be subject to retroactive truth-checking… Because it seems to me that someone needs to start recording all the Presidential debates NOW with a nice array of infrared and high-definition cameras. And they need to do it in a public fashion so that every one of these candidates is very aware of it and of why it is being done.

    (emphasis in original)
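    The frame-rate claim in the quoted passage holds up to simple arithmetic. A quick sketch -- the durations are the ones Brinkmann cites; the comparison logic is my own framing:

```python
# Can standard 30fps video catch a microexpression lasting up to
# 1/25th of a second? If the expression outlasts the gap between
# successive frames, at least one frame must land inside it.

expression_span = 1 / 25  # cited upper bound: 0.04 s (40 ms)
frame_interval = 1 / 30   # gap between frames at 30 fps (~33.3 ms)

catchable = expression_span > frame_interval  # True: 40 ms > 33.3 ms
```

    The margin is thin, though: a briefer microexpression could fall entirely between two frames, which is presumably why higher frame rates would make retroactive analysis more reliable.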

    There's no question in my mind that, when these lie-detection systems become seen as good enough (which does not mean 100% accurate, of course), people will start using them to go back through video recordings looking for microexpressions. Politicians offer an obvious set of initial subjects, but I suspect our attention would shift quickly to celebrities. I wouldn't be surprised to see the technologies adopted by activists, especially if we're in an age of going after environmental or economic criminals. Finally, once the systems have come down in price and increased in portability, we'll start pointing them at friends and lovers.

    What then? It's hard to believe that cheap, easy-to-use, after-the-fact applicable lie-detection systems won't be snapped up. But do we really want to know that sometimes when spouses or parents say "I love you," their microexpressions and facial heat say "...but not right now..."? Imagine the market for facial analysis apps as add-ons to video conferencing systems for businesses or the home. Video iChat, now with iTruth!

    Arguably, the only thing worse than this kind of technology getting into everybody's hands would be if it only got into the hands of people already in power.

    Information is power, but so is misinformation. People who lie to achieve some outcome have very real power over the people they've lied to. The capacity to identify those lies, even after-the-fact, can undermine that power. This won't be an easy transition; the technological rebalancing of the political system is already underway (as shown with blogs, YouTube, and the like). Any efforts to pull back from this shift will be met with resistance, anger, and worse. And they will undoubtedly be on the record, like it or not.

    March 21, 2008

    Exit the Machine

    Cameron Reilly, voice of "G'Day World" on Australia's Podcast Network, listened to "The Chorus" -- the scenario I had constructed for the Futurist's Sandbox panel at SXSW -- and was thoroughly disturbed by the story it told. Disturbed enough, it turns out, to ping me and ask to do an interview for his podcast on where we seem to be going with social media technologies, and just what it might mean to opt out.

    G'Day World #320: approximately 50 minutes, ~100MB.

    Oh, and anyone who wants to see how long I've been mulling some of these ideas should check out Howard Rheingold's archive of Electric Minds, his 1996 website bringing together a variety of writers to talk about cutting-edge subjects. I wrote the "Future Surf" column (all six entries), and it's somewhat amusing to look back and see early iterations of my obsessions.


    January 10, 2008

    The Medical Panopticon

    Web-enabled personal medical information technologies have been a standard item in the futurist's scrapbook for a few years now. It's one of those concepts that's hard to imagine not happening: the demographic, technological, and market pressures for Internet-mediated health technologies aimed at the elderly have terrific momentum.

    So it comes as little surprise to see this post in Medgadget, describing the HealthPoint Home Telemetry system. The only thing it's missing is smart implants doing direct somatic monitoring:

    The recommended starter kit for the IL service includes the Home HealthPoint, three motion detectors, and an emergency pendant. The motion detectors are strategically placed around the home during the professional installation in the bedroom, at the entrance to the primary bathroom, and in the main trafficked area such as a foyer or living room. Additional sensor devices such as additional motion detectors, access contacts on the refrigerator or doors, a smart pillbox, or IP cameras can be utilized to supplement the monitoring data sets being produced within the home. Safety, comfort, and energy saving devices for the senior can be added such as a networked thermostat, safety lighting in or outside the home, appliance and lighting control accessories, gas leak detectors, air quality & fire detectors, or an IP-based intercom system. 4HM, a member of the Continua Alliance, is a strong advocate of open standards in medical devices and its ControlPoint ™ in-home software is able to support a wide variety of medical diagnostic devices to further supplement the health and well-being information for the senior, including a digital weight scale, a blood-pressure cuff, a glucose meter, or a pulse oximeter, depending on the needs of that particular monitored senior. Lastly, to battle psychological duress and the frequent isolation of a senior living alone– a common difficulty among the elder population that has proven negative health repercussions—4HM has integrated into the solution set friends-and-family photo sharing, interactive health surveys, and health and wellness video education.

    It's like Facebook, but with your family and your doctor always looking over your shoulder!

    The one big question about home health monitoring that too few people ask is whether the people being monitored want to give family, doctors, and random packet sniffers personalized Total Information Awareness about their every trip to the refrigerator or bathroom. This may end up being a catalyst for health-care robots ("Roomba, MD") -- a system that can pay attention to the patient 24/7 without being judgmental, distractible, or far too personal.

    December 22, 2007

    Touchy, Touchy

    Although I tend to focus on social impacts in my discussions of the participatory panopticon, and the related Metaverse concepts of augmented reality and lifelogging, I'm not immune to the siren song of gadgets. If I'm going to argue that technology and culture co-evolve, I shouldn't focus on only one side of that pairing. That said, I don't do the full-on gadget geek thing very often, but it's the end of the year, and I'm going to indulge.

    It's not just an academic subject for me, of course. As I noted in May, I have a Nokia N800 Internet tablet, a Linux-based device that offers full computery goodness in a platform a bit bigger than a mobile phone. I'm no stranger to the field -- I still have my first-generation Newton. But as the picture to the right suggests, my menagerie of touch-screen toys has recently expanded.

    Pocket Internet tablets represent a digital niche that has yet to reach its full potential. None of the current devices are anywhere close to perfect. But the more pervasive the wireless Internet becomes -- in terms of both its presence in our lives and its presence in the air around us -- the more we'll come to depend on mobile tools to give us rich access to our networks and data. By checking out these tablets, I'm not just satisfying my gadget urges, I'm beta testing one scenario of the future. At least, that's my justification.

    Let me note that none of these devices are phones. My misgivings about the iPhone have not abated (that's an iPod Touch in the photo), and the alternatives I've considered all have serious failings. For now, pairing a dedicated Internet tablet with a 3G phone gives me the greatest flexibility. Since May, that's meant the N800, and I've been fairly happy with it. But after doing a short bit of casual consulting for Nokia, they generously offered to send me a new N810, the follow-up to the N800.

    It arrived yesterday. Hit the extended entry for some gadgetry observations:

    Continue reading "Touchy, Touchy" »

    November 30, 2007

    Misinformation, Identity, and Power in the Internet Age

    In a world of networked transparency, misinformation is increasingly more powerful than privacy.

    During my presentation at the Metaverse Meetup last night (video available soon), I got into a discussion about what happens to control over one's own information in a world of information saturation. If privacy is effectively unattainable, or the institutions to protect privacy are too weak to withstand the relentless expansion of Internet observation, what recourse would those wishing to maintain some control over their external visibility have available?

    One possible alternative: intentional misinformation about oneself, reducing the "signal to noise" ratio of networked transparency.

    This misinformation would need to be widespread, and at least match the "real" information in abundance. The various automated tools for gathering personal data would be hampered by this approach, and if the false information became sufficiently abundant, it might render the real data effectively invisible. Such a technique probably wouldn't work well for those of us who have long-standing Internet histories (a quick check of the Internet Archive would confirm when new stories first appeared), but might work beautifully for people just starting to leave a footprint. That is, misinformation could be a very effective defense for everyday folks who would prefer not to have their life stories available to anyone with access to Google.
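    As a toy illustration of the signal-to-noise idea: if fake records about a person at least match the real ones in abundance, the chance that any single record an observer samples is genuine drops accordingly. The record counts below are invented for illustration, not data:

```python
# Toy model: probability that a uniformly sampled record about a
# person is genuine, given counts of real and fabricated records.

def chance_genuine(real: int, fake: int) -> float:
    """Fraction of the total record pool that is genuine."""
    return real / (real + fake)

even_match = chance_genuine(10, 10)   # 0.5  -- fakes merely match the real data
flooded = chance_genuine(10, 990)     # 0.01 -- a flood of generated misinformation
```

    Real aggregation tools are smarter than uniform sampling, of course, but the basic dynamic -- more noise, less recoverable signal -- is the one the defense relies on.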

    One can readily imagine small service providers appearing, offering to produce a wealth of garbage info to spoof one's online identity. Powerful digital engines like the "Storm Worm" network of zombie PCs might be pressed into service, spewing out misinformation by the gigabyte. If the false data made its way into trusted repositories, it might be nearly impossible to eliminate.

    The flip side of this, however, is that misinformation appearing in trusted locations can be quite damaging to the people who have built careers online. I learned this for myself just today.

    Someone, a few months ago, changed the Wikipedia entry for Worldchanging to completely eliminate references to my having co-founded the site. Gone. Down the memory hole. I have no idea who would do this, but (judging by the page history) it was clearly intentional, and was not an accidental over-zealous edit. In the intervening several months, numerous stories have been written in print and online media about Worldchanging; to the degree that journalists would check Wikipedia for "objective" info, my contributions to the site's development would have been invisible.

    As annoying as this might be, it was easily corrected -- at least on Wikipedia. I have no way of knowing where this misinformation might have spread, or whether any of the places using it as a reference could in turn be used as a reference for others. I'm hopeful that it was an isolated event, and will have no lasting impact. But that's the thing -- as I said, I have no way of knowing.

    Increasingly, the balance of power for information and identity is not the clash between transparency and privacy, but between transparency and misinformation. Some might find this a useful way to protect themselves; others might find it a real threat to their livelihoods. But as Internet information and identity become more important, the creation of misinformation about individuals is likely to become an intentional, strategic act, another way of asserting power in the Internet age.

    November 1, 2007

    Make It So


    How soon until we see one of these? The "artifact from the future" shown above is my visualization of a Bluetooth headset with an embedded cameraphone-style camera, able to send the video to one's handheld for recording and display. Given that fairly decent cameras can be put into the very small, low-power space of a phone, it stands to reason that -- very soon, if not today -- clever designers could successfully build one into a headset.

    The vision of the "Lifelogging," Participatory Panopticon future assumes that network-enabled personal cameras will be used to capture images and video of one's life serendipitously, without the few seconds of fumbling with a camera or phone to get it ready to shoot a picture. Current test versions of such technologies use medallion cameras (such as Microsoft's SenseCam or ExisTech's WearCam), offering all of the style of wearing a big piece of weird technology around your neck, and all of the social appeal of an accessory that absolutely demands that people look at your chest. The canonical non-goofy medium for future always-enabled cameras would be camera-enabled eyeglasses, offering both a view of the world equivalent to what one already sees, and a potential avenue for display.

    But this medium isn't perfect, either. The necessary technologies remain some ways away, but more importantly, the social role of eyeglasses is changing. The increasing popularity of laser eye surgery is steadily reducing the number of people in the hyperdeveloped world who have to wear corrective lenses, and for those people who choose to continue to wear eyeglasses, the frames have become something of a fashion item. It's not unusual to find people who have a variety of eyeglasses to match different outfits and situations. In short, the idea of eyeglasses-based cameras seems to run counter to current trends.

    Conversely, the use of Bluetooth headsets for mobile phones seems to be on an upswing. They're still far too ungainly to be considered fashion items, but it's getting to be difficult to find a public setting in which there aren't people appearing to suffer from the early stages of Borganism. The calls for laws banning the use of handheld phones while driving will only accelerate this trend.

    Headset-mounted cameras for Lifelogging and the Participatory Panopticon would have many of the advantages of the eyeglasses versions, but would require simpler technology to produce. The processing and recording of images would still take place in the phone, minimizing the power demands of the headset cam. A device like this would be an ideal partner for a Nokia N800 tablet or one of the myriad iPhone-copy touch phones on the market.

    So, who makes the first Bluetooth headcam? Nokia? Apple? One of you?

    September 28, 2007


    I've been a Mac user for years, and (generally) happily so. I'm not an Apple fanboy, but I do appreciate the combination of good hardware and software design found in Macs. When the iPhone came out, some people I knew assumed that I'd get one for myself -- and I admit, I was tempted. But ultimately I chose not to, and I'm glad I did.

    My initial reason for not getting an iPhone concerned the carrier. AT&T is hardly a bastion of respect for privacy and civil rights, and I had no desire to give them any more money than I have to. The various sim-card unlocks would render that moot, except...

    Anyone who thought that Apple -- with an iPhone business model that gets a huge chunk of the subscription fees from its carrier partners -- wouldn't re-lock the iPhone wasn't paying attention. And once the iPod Touch came out with an as-yet-unbreakable lockdown for applications, the writing was on the wall for the various third-party apps that clever hackers had figured out how to install on the iPhone. In short, the period in which the iPhone was relatively free and open (if not by Apple's doing) was always likely to be brief, and may never be repeated.

    I'm utterly disgusted with the wireless telecom business models that actively prevent customers from actually making use of the technologies built into the hardware. Some will disable useful features, only to re-enable them at a fee; some simply disallow the use of given capabilities altogether. By barring the installation of any outside iPhone applications, Apple is actually among the most offensive vendors in this regard. Claims that "most people" wouldn't ever use the ability to add applications are irrelevant, and likely wrong: one of the distinctly appealing aspects of the iPhone technology was its potential to shift the mobile phone world away from appliances and towards platforms -- i.e., to a world in which people think of their phones as they do their computers, as devices that can always be made to do more.

    The alternatives are limited, but intriguing.

    My next phone is very likely going to be a Linux-based OpenMoko Neo1973 phase 2, due out in December. A completely open platform, the OpenMoko operates on the global GSM standard, and includes WiFi. It's not a perfect device -- no camera and no 3G make it definitely sub-optimal -- but it's a project I want to give my whole-hearted support.

    In the longer term, if Google wins the 700MHz auction and goes ahead with its plans for an open-hardware model for the spectrum, the wireless companies may find themselves in a real scramble. And Sprint's plans for WiMax actually appear to be relatively openness-friendly: among the first devices to take advantage of the high-speed wireless system will be a version of the Nokia N800, a Linux-based internet tablet with voice-over-IP capabilities.

    It may well be that the next couple of years will be the last stand of the overly locked-down, paranoid and arguably corrupt wireless networks. It's too bad that Apple has chosen to stand with them instead of with the future.

    July 17, 2007

    Technology as Political Catalyst

    It's become almost a cliché to observe that the Internet is changing the face of electoral politics at the national scale. The use of the web for fundraising (and to observe fundraising) is an obvious example, but for me a more interesting phenomenon is the way in which an existing Internet technology (one that had previously not been considered inherently political) can suddenly emerge as a major force in making and breaking a candidate. There's no reason to expect that to change -- and that adds a wildcard to the 2008 political races in the US.

    In 2004, the big story was the use of MeetUp to make Howard Dean a frontrunner (and eventually the chairman of the Democratic National Committee); social networking apps had been around for a while, but suddenly people realized that they had power. In 2006, the big story was the use of YouTube to post damning video of George "Macaca" Allen; again, YouTube wasn't a new site, but suddenly it was able to bring down a leading candidate. For 2008, candidates across the political spectrum already have their social networking and web video strategies in order, and neither technology will have the same kind of "out of nowhere" transformative impact again.

    For the 2008 campaign, we've yet to see which Internet technology will shake up the political world. What kinds of characteristics would such a technology possess? Let's see...

    • It's likely to be already commonplace, but without a real political footprint. One or two candidates trying to figure out how to make it work for them is fine, but there shouldn't be any coherent strategy for its use -- yet. (So MySpace and Facebook are out.)

    • It's likely to be something that obeys Metcalfe's Law, drawing power not from the number of users, but the connections between the users. (So Google Docs are out -- not that I really expected Google Docs to be a king-making app.)

    • It doesn't necessarily have to have an immediate impact, but it should be something that can be easily explained and understood. (So wikis are probably out, sad to say.)
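    The Metcalfe's Law criterion in the second point is easy to make concrete: the value of a network scales with the number of possible connections among its users, n(n-1)/2, rather than with the user count n itself. A quick sketch:

```python
def potential_connections(users: int) -> int:
    """Pairwise links possible among n users -- the quantity
    Metcalfe's Law says drives a network's value."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the connection count:
for n in (1_000, 2_000, 4_000):
    print(n, potential_connections(n))
```

    That quadratic growth is why a technology can sit quietly below the radar and then, past some adoption threshold, abruptly become a political force.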

    What we're looking for is a technology that has the potential to make a dark horse candidate an unexpected contender, or make a leading candidate stumble and possibly fall.

    Here are the technologies that I think might fit this role -- and, as always, I'm more than happy to entertain counter-arguments, alternative suggestions, and private insults.

    • Microblogging apps, like Twitter and Pownce. A couple of the candidates have presences in the microblogs, but nobody has quite figured out how to use the technology really well. Could Twitter (etc.) become engines for political flash mobs, or ways to spread information/disinformation more effectively?

    • Geolocative technologies, like Google Maps and cheap GPS. This is likely to manifest as a map mashup, connecting candidate/campaign-relevant information to location. I could see something like this used for pinpoint targeting of donors and visits by campaigners (e.g., "Mr. Smith donated to my opponent last time around by this time, but hasn't done so yet this campaign -- he may be a possible conversion."), or to create "open source intelligence" about the appearances of a candidate and his/her team.

    • Photo-sharing sites, like Flickr and Zoomr. Using these sites for open source intel or counter-campaigning seems the most likely possibility, but there may be some kind of application that really charges up a campaign.

    • Participatory Panopticon/Sousveillance. Okay, not technically an Internet technology per se, but clearly dependent upon Internet tools. As I think about it, this may end up being less its own candidate, and more a variant for each of the previous three suggestions. In either case, the value comes in large part from the swarm possibilities: not just a cameraphone video recording of a macaca-style gaffe, but a mass of recordings, from different positions, capturing a scene in greater detail than any single regular camera could.

    (An early signal of the last becoming a real possibility would be a steadily-increasing use of cameraphones to record speeches at rallies and campaign stops.)

    Given my work with the Metaverse Roadmap, some readers might be curious as to why I didn't include virtual worlds on my list. They certainly fit the listed criteria (the third a bit shakily, but close enough), and we're already seeing some initial efforts at campaigning in Second Life. My sense is that the technology isn't quite mature enough to make the big political splash this time around -- but virtual worlds have the potential to be catalytic in 2010 or 2012.

    Of course, the 2008 campaigns may be run largely on TV, with the Internet used for fundraising and for organizing supporters, without any disruption from unexpectedly useful technologies... but I doubt it. A more real possibility is that the leading campaigns will have become sufficiently Internet-savvy that emerging technologies with disruptive potential get identified and co-opted before they have a chance to change the game. I hope that's not the case; political innovation in the use of social technologies remains one of the few democratizing elements in an electoral system that seems less and less responsive to the will of the people.

    (Photo adapted from Creative Commons-licensed image by Unsure Shot on Flickr.)

    May 23, 2007

    Web Heaven

    Okay, this is kinda cool.

    I'm posting this entry via my new Nokia N800 interwebtube tablet (tubelet?). As much as I've long been fascinated by mobile devices, most tend to be better-suited to information consumption than creation. The N800 is the first one I've tried that makes posting to OtF at least a reasonable option.

    What makes this device particularly appealing is that it uses Linux as its OS, not Windows Mobile. It will be my first everyday Linux box.

    I'm entering this post via the touch screen keyboard. It's not perfect, but it's far better for text creation than a phone's number pad. I wouldn't want to write a novel this way, but the occasional blog post won't be too bad. When I get a bluetooth keyboard to go with it, I'll be set.

    What is it missing? A decent camera, for one thing. The little pop-out camera is cute, but very low rez. I'd also like to see it be able to sync with my laptop, pulling over bookmarks and contacts automatically.

    Nevertheless, I'm looking forward to seeing what I'll be able to do with this.

    May 13, 2007

    Participatory Panopticon in Action

    Justin of Justin.tv was the guest at today's recording of the RU Sirius podcast. A pretty genial guy, he seems reasonably conscious of the implications of his ongoing project. For those of you unfamiliar with Justin.tv, he wears a live-streaming wireless camera on his hat all day, every day, recording everything he sees. These recordings are available as archives.

    You can see the archive of today's RU Sirius interview here -- scan ahead to 2:45 to see his arrival.

    (Yes, I'm walking with a cane. It's not a two-bit Warren Ellis impression, I'm having an arthritis flare-up. Yes, arthritis. Yes, it sucks.)

    The conversation is lively, and worth listening to. As pictured, I have the honor of being the very first person ever to try to spam the video feed -- unsuccessfully, as the resolution on his camera is pretty lousy. Fortunately, he was nice enough to read out what I wrote: the URL for Open the Future.

    I'm sure the money will start rolling in any second now.

    April 1, 2007

    Augmented Fluid Intelligence

    Can we survive the multitasking era?

    Okay, multitasking is hardly up there with global warming, pandemic disease and asteroid strikes as a civilizational threat, but it's becoming increasingly clear that multitasking reduces overall effectiveness and accuracy. Yet we're forced to juggle more and more simultaneous activities in our work, in our social networks, even in our play. As a result, simple tasks take longer, and we're far more likely to make errors. In short, as our world gets more complex and we face greater challenges, we're becoming less able to respond successfully.

    Theorist Linda Stone calls this overtaxed ability to focus "Continuous Partial Attention" -- a name that's much cooler than multitasking, you have to admit -- and she describes it as an "artificial sense of constant crisis." But in many ways, the world we're moving into is even worse than this, because we're becoming so accustomed to the constant interruption that we're starting to find it hard to focus even when we've achieved a bit of quiet. It's an induced form of ADD -- a "Continuous Partial Attention Deficit Disorder," if you will, ADD via CPA.

    Our ability to handle simultaneous complexity is governed by what cognitive scientists call "fluid intelligence," commonly defined as the ability to find meaning in confusion and to solve new problems. Fluid intelligence can be exercised, and in fact appears to be increasing. If Steven Johnson's argument in Everything Bad is Good For You is right, we're seeing this gradual increase in intelligence precisely because our cultural and social expressions are increasingly taking forms that are stimulating to our fluid intelligence.

    But this process will inevitably have limits. Eventually, we'll hit a ceiling in the ways in which we can improve our fluid intelligence naturally. At that point, we'll face a hard choice: make major changes to our work and social cultures, so as to reduce the degree of simultaneous attention-grabbing activity; or develop augmentation systems that enhance our natural fluid intelligence by recognizing, from moment to moment, what needs our actual focus, and what can be handled by proxies. The wise choice would be the first one. It should come as no surprise, then, that I suspect that we'll do the second.

    As it happens, we're already working on devices that will do just this. The problem is, these systems aren't quite done -- and at present, actually tend to make matters worse.

    If you haven't heard of Twitter, count yourself lucky. It's an application that lives somewhere in the interzone between blogging and text-messaging, and went from being nearly invisible to nearly ubiquitous in less than a week in early March. [This explosion was likely due to the combination of overlapping tech-fests (TED, SXSW, GDC) with concentrated early-adopter attendance and Twitter's complete dependence upon network effects for utility (i.e., the more people have it, the more useful it becomes). Expect other network-dependent apps to try to artificially reproduce this perfect storm next March.]

    Twitter allows you to send quick and easy messages about your various activities to people who have selected to receive them; the current joke is that where regular blogging let you give daily reports on your cat, Twitter lets you give minute-by-minute updates. During busy periods, it's quite easy to be overwhelmed by the volume of incoming messages, the vast majority of which will be of only passing, mild interest at best.

    But that leaves the tiny minority of truly useful and interesting posts, ones which have particular value due to their timely arrival. At present, finding those requires wading through the mass of "my kitty sneezed!" or "I hate this taco" messages; of course, this is exactly the kind of low-complexity activity that we'd habitually perform via Continuous Partial Attention.

    Imagine, however, if Twitter had a bot that could learn what kinds of messages you pay attention to, and which ones you discard. Perhaps some kind of Bayesian system, more complex than current spam filters, but not outrageously so. Over time, the messages that you don't really care about would start to fade out in the display, while the ones that you do want to see would get brighter -- an adaptation of the "ambient technology" concept. These bright headlines would stand out against the field of gray, drawing your attention only when you would desire it. If this worked reasonably well, you'd have reduced the overall demands on your fluid intelligence by outsourcing some of the rote filtering to a device.
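    A minimal sketch of such a filter, assuming a naive-Bayes-style word score learned from which messages the user opened versus ignored -- the training examples and the brightness mapping here are invented purely for illustration:

```python
import math
from collections import Counter

class AttentionFilter:
    """Naive-Bayes-style scorer: learns which words mark messages
    the user actually reads, and maps that to a display brightness."""

    def __init__(self):
        self.read = Counter()      # word counts from messages the user opened
        self.skipped = Counter()   # word counts from messages the user ignored

    def train(self, message: str, was_read: bool):
        bucket = self.read if was_read else self.skipped
        bucket.update(message.lower().split())

    def brightness(self, message: str) -> float:
        """0.0 = fade into the gray background, 1.0 = full brightness."""
        log_odds = 0.0
        for word in message.lower().split():
            # Laplace-smoothed per-word likelihood ratio.
            p_read = (self.read[word] + 1) / (sum(self.read.values()) + 2)
            p_skip = (self.skipped[word] + 1) / (sum(self.skipped.values()) + 2)
            log_odds += math.log(p_read / p_skip)
        return 1 / (1 + math.exp(-log_odds))   # squash to [0, 1]

f = AttentionFilter()
f.train("server outage in the data center", was_read=True)
f.train("my kitty sneezed again", was_read=False)
print(f.brightness("another server outage"))   # > 0.5: surfaced
print(f.brightness("kitty photos"))            # < 0.5: faded out
```

    A real filter would need far more training data and some notion of sender and timing, but the shape of the idea -- outsource the rote triage, keep the judgment -- is captured even at this scale.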

    These kinds of bots -- attention filters, perhaps, or focus assistants -- are likely to become important parts of how we handle our daily lives. We don't want to have the information streams we've embraced taken away from us, and every decision to scale back how frequently we check email or stock tickers or combat results or the like raises the spectre of our competitors choosing to tough it out. As these information streams become more and more important to our professional and personal lives, the harder it will be to pull away. So rather than disconnect, we'll get help.

    We'll be moving from a world of Continuous Partial Attention to one of Continuous Augmented Attention.

    January 18, 2007

    Co-opting the Participatory Panopticon?

    Is it still "sousveillance" -- watching from below -- if it's going straight to The Man?

    The city of New York, in a rather clever move, has decided to equip its 911 (emergency) and 311 (non-emergency) call centers with the ability to receive cameraphone pictures and videos. In his State of the City address, Mayor Michael Bloomberg declared:

    To build stronger trust and cooperation between the public and the police, we're also going to empower more New Yorkers to step forward and join the fight against crime.

    This year, we'll begin a revolutionary innovation in crime-fighting: Equipping "911" call centers to receive digital images and videos New Yorkers send from cell phones and computers -- something no other city in the world is doing.

    If you see a crime in progress or a dangerous building condition, you'll be able to transmit images to 911, or online to NYC.GOV. And we'll start extending the same technology to 311 to allow New Yorkers to step forward and document non-emergency quality of life concerns -- holding City agencies accountable for correcting them quickly and efficiently.

    This is one of those developments that makes so much sense, it's a wonder that nobody made it happen earlier. I have no doubt that we'll see other cities adopt this approach in the months to come, both in the US and internationally. As much as it has the potential for frivolous or malicious use -- just as regular 911 calls do -- it has the potential to give first responders a better idea of an emergency situation, allowing the professionals and the civilians to work together to evaluate conditions.

    It's also an example of how a participatory panopticon society can be embraced by traditional channels of authority and social control. This will undoubtedly have some benefits, but it also raises uncomfortable questions. Will the photo/video 911 calls be given greater priority than the voice-only calls? Conversely, will the police respond as quickly to a situation where they can see the color of the victim (the NYC police are known for having issues in this regard)? And for me, the big question: will the existence of an "official" channel for using cell phones to capture images and videos of emergency and non-emergency problems eliminate non-official versions?

    If the participatory 911/311 panopticon stands alongside other emerging community response networks, then this is, on balance, likely a positive development, as the citizens will continue to have channels to report problems that the city personnel might neglect. If the program results in pressure to shut down or block non-official networks, these citizen systems won't go away, of course, they'll just be driven underground, making them less reliable and pervasive.

    This could be a moment for civic empowerment -- or a moment where an early version of the participatory panopticon is smothered by bureaucracy. Let's hope they don't screw it up.

    (Thanks, Anthony Townsend!)

    December 4, 2006

    December Futurismic Column Now Up

    This month's Futurismic column is now up (my fault that it's late). It's an update on what's happening with the participatory panopticon. This time, I look at what Michael Richards, UCLA cops, and George "Macaca" Allen have in common, and the lessons they have for the rest of us.

    The proliferation of cameras this scenario suggests is undoubtedly troubling for many civil libertarians and privacy advocates. The problem is, these cameras have already proliferated -- the majority of mobile phones sold around the world have a camera, and more cameraphones were sold in 2005 than any other kind of camera, digital or film. We will have more examples of the participatory panopticon in action in the coming weeks and months. Similarly, surveillance cameras have become a commonplace part of urban policing, whether mounted on buildings, street lights, or police car dashboards. What we need are rules and practices that make the use of these tools more responsible and transparent.

    Forecast that might seem obvious in retrospect: during the 2007-2008 runup to the US election, we'll see a rash of hoax videos on YouTube (and similar sites) impugning the credibility and character of numerous political candidates. As a result, some candidates will start recording every moment they're in the public (or semi-public) eye, as self-defense. By the 2010 US elections, every candidate will do so.

    October 12, 2006

    I Want My Google Data Privacy

    The Hawk Wings blog points us to a site called Fred's House, wherein a writer proposes something new and, as far as I'm concerned, absolutely brilliant: Google Data Privacy.

    I'm feeling increasingly uneasy about my dependence on Google services. [...] I look around my desktop and I see Google Reader, Google Mail, Google Talk, Google Toolbar, Google Maps, Google Calendar, Google News, Google Analytics, Google Earth, and of course Google Google. [...]

    I think I need a new Google product to drop into beta. That would be, let's see, Google Data Privacy. GDP would allow me to review all of the information that Google retains on me across all services, from all devices, and from all sources. GDP would allow me to determine the maximum data retention period for each of my services. GDP would allow me to selectively opt out of cross-service data mining & correlation, even if it reduced the quality of the services I receive. GDP would allow me to correct any inaccurate data in my profile. And GDP would log and alert me when my data was queried by other services.

    I want my Google Data Privacy.

    So do I. This is exactly the kind of thing that Google could do, should do, to maintain its "Don't Be Evil" motto, while compiling better -- more accurate and more useful -- information.

    This is the best web idea I've seen in a long time, and it deserves wider discussion.
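    The feature list in the quoted proposal maps naturally onto a small service interface. What follows is purely hypothetical -- no such Google product exists, and every name and method here is invented to show how compact the idea really is:

```python
from dataclasses import dataclass, field

@dataclass
class DataPrivacyDashboard:
    """Hypothetical sketch of the proposed 'Google Data Privacy' service.
    All names and methods are invented for illustration."""
    records: dict = field(default_factory=dict)         # service -> stored data
    retention_days: dict = field(default_factory=dict)  # service -> max retention
    correlation_opt_outs: set = field(default_factory=set)
    access_log: list = field(default_factory=list)      # who queried what, when

    def review(self, service: str) -> dict:
        """See everything retained about you by one service."""
        return self.records.get(service, {})

    def set_retention(self, service: str, days: int):
        """Cap how long one service may keep your data."""
        self.retention_days[service] = days

    def opt_out_of_correlation(self, service: str):
        """Exclude a service from cross-service data mining."""
        self.correlation_opt_outs.add(service)

    def correct(self, service: str, key: str, value):
        """Fix inaccurate data in your profile."""
        self.records.setdefault(service, {})[key] = value

d = DataPrivacyDashboard()
d.correct("maps", "home_city", "Oakland")
d.set_retention("mail", 90)
d.opt_out_of_correlation("reader")
print(d.review("maps"))   # {'home_city': 'Oakland'}
```

    The hard part, of course, is not the interface but the incentive: each of these methods cuts directly against the value of the data to the company holding it.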

    October 5, 2006

    Participatory Panopticon Draws Ever Closer

    Just a couple of quick items on the participatory panopticon front:

    Life Caching has the current lead for the pronunciation-friendly name for the participatory panopticon -- and it's the term used by Waymarkr, the first public software with an explicitly PP purpose.

    The Waymarkr system allows you to effortlessly document and share your life with others. Just install our software on your mobile device... . Once the WayMarkr software is enabled, your phone will continuously take photographs of your events and perspectives. All photographs are sent to the Waymarkr web site so your phone never runs out of room. You can then login to the Waymarkr web site, annotate and share your photos, see stop motion movies of your captured event and map out where your photos were taken. You can also see other users' photos that were taken at the same time and place as your photos, giving you an alternate perspective on your experience.

    Right now, the program only supports the Nokia Series 60 phones (which, interestingly enough, aren't just made by Nokia). You do have to wear your phone around your neck -- but being on the cutting edge is worth a little public embarrassment, no?

    (More details can be found here and here. Hat tip Picturephoning for the lead.)
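    The capture-and-upload loop Waymarkr describes has a simple outline: snap on a timer, push each frame off the device immediately, and local storage never fills. A schematic sketch -- the `capture` and `upload` callables are stand-ins, not Waymarkr's actual API:

```python
import time

def lifelog_loop(capture, upload, interval_sec=30, max_shots=None):
    """Continuously photograph and upload, so local storage never fills.
    `capture` and `upload` are stand-in callables, not a real phone API."""
    shots = 0
    while max_shots is None or shots < max_shots:
        photo = capture()                      # grab a frame from the camera
        upload(photo, taken_at=time.time())    # ship it off-device immediately
        shots += 1
        time.sleep(interval_sec)
    return shots

# Dry run with stand-ins:
uploaded = []
count = lifelog_loop(capture=lambda: b"frame",
                     upload=lambda photo, taken_at: uploaded.append(photo),
                     interval_sec=0, max_shots=3)
print(count, len(uploaded))
```

    The interesting engineering is all in what's elided here: battery budget, retry on dropped connections, and deciding which of the thousands of daily frames are worth a human's attention.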


    Although Participatory... er, Life Caching discussions typically focus on the use of mobile phones, most of us already carry a powerful computing system with significant storage capacity in the form of an iPod. Now we're starting to see add-ons for the iPod that do more than just make it easier to play music.

    The iBreath is a fully-functioning iPod-based breathalyzer. It also serves as an FM transmitter, but that's not the interesting part. As far as I can tell, it's the first non-sound-related sensor device for attachment to the iPod -- and there's absolutely no reason it would be the last.

    I'd love to see environmental sensor add-ons for the iPod, letting you store abundant data and upload when you sync.

    (Via Infinite Loop.)

    August 30, 2006

    Continuous Partial Social Attention

    Working on the big IFTF project today, I discovered that a phrase I'd been playing with did not exist anywhere in Googlespace (and if you can't Google it, it doesn't exist, right?). I thought I'd go ahead and stake a claim now, in case the term has any legs.

    Continuous Partial Social Attention: the maintenance of multiple constant social connections through networked tools so as to maintain ongoing relationships, with links on the "awareness periphery" but always accessible.

    Continuous Partial Attention (CPA) is a concept originated by cybertheoretician Linda Stone back in 1998, describing the modern phenomenon of having multiple activities and connections underway simultaneously, dividing one's time between them as opportunities arise. Here's how Stone defines it on the CPA wiki:

    Continuous partial attention describes how many of us use our attention today. It is different from multi-tasking. The two are differentiated by the impulse that motivates them. When we multi-task, we are motivated by a desire to be more productive and more efficient. We're often doing things that are automatic, that require very little cognitive processing. We give the same priority to much of what we do when we multi-task -- we file and copy papers, talk on the phone, eat lunch -- we get as many things done at one time as we possibly can in order to make more time for ourselves and in order to be more efficient and more productive.

    To pay continuous partial attention is to pay partial attention -- CONTINUOUSLY. It is motivated by a desire to be a LIVE node on the network. Another way of saying this is that we want to connect and be connected. We want to effectively scan for opportunity and optimize for the best opportunities, activities, and contacts, in any given moment. To be busy, to be connected, is to be alive, to be recognized, and to matter.

    We pay continuous partial attention in an effort NOT TO MISS ANYTHING. It is an always-on, anywhere, anytime, any place behavior that involves an artificial sense of constant crisis. We are always in high alert when we pay continuous partial attention. This artificial sense of constant crisis is more typical of continuous partial attention than it is of multi-tasking.

    Continuous Partial Social Attention (CPSA) plays off of this concept, describing the smart mob social world in which many of us -- especially younger people -- live. With active buddy lists, real time location tags indicating who's nearby or in town, virtual world chat, a near-constant flow of text messages (and, less often, email or voice), and even webcams, many of us maintain an ongoing set of multiple connections, paying just enough attention to maintain a link. The connections remain on our awareness periphery, but can easily float to the surface when they need more complete attention.

    The purpose of CPSA connections is not to pursue constant conversation; indeed, more often than not the other people on the network remain in the background of one's activity flow. The purpose is to maintain a social relationship that could otherwise wither if left only to transient links like email, phone calls or in-person visits. CPSA is, in essence, a way of saying "I'm thinking about you" to a wider variety of people than one could engage with otherwise.

    The difference between CPSA connections and more traditional email-type connections roughly parallels the difference between using RSS feeds to follow a weblog and visiting a weblog via a web browser. The RSS link allows the connection between blogger and reader to remain viable, even if the blogger (or reader, for that matter) is temporarily unavailable; people who visit weblogs solely via a browser tend to be less tolerant of extended periods of bloggers not blogging.

    If CPA "involves an artificial sense of constant crisis," however, CPSA involves an artificial sense of constant intimacy. Keeping Skype open in order to allow buddies to call or text any time maintains a continuous connection, but is arguably far less personal than devoting one's attention to someone in conversation. Nonetheless, if someone who has had you on a buddy list suddenly drops you, or no longer pops up as being available, you can feel almost unreasonably injured. The intimacy may be somewhat contrived, but it is real.

    As more of the MySpace generation moves into the adult world, CPSA will become as commonplace as CPA is now, and those of us unaccustomed to that kind of Internet intimacy could well find ourselves at a competitive disadvantage as significant as the one that faced the generation unable to deal with email and mobile phones.

    May 16, 2006

    Alpha-Testing the Participatory Panopticon

    It looks like the first draft version of the participatory panopticon -- the set of technologies allowing individuals to record everything that happens around them, for later playback, analysis, and archiving -- will come not from mobile phones on steroids, but as part of an effort by the US Defense Advanced Research Projects Agency (DARPA) to increase the information-recall capacity of soldiers in the field.

    The Defense Advanced Research Projects Agency (DARPA) is exploring the use of soldier-worn sensors and recorders to augment a soldier's recall and reporting capability. The National Institute of Standards and Technology (NIST) is acting as an independent evaluator for the "Advanced Soldier Sensor Information System and Technology" (ASSIST) project. NIST researchers are designing tests to measure the technical capability of such information gathering devices.
    [...] The sensors are expected to capture, classify and store such data as the sound of acceleration and deceleration of vehicles, images of people (including suspicious movements that might not be seen by the soldiers), speech and specific types of weapon fire.
    A capacity to give GPS locations, an ability to translate Arabic signs and text into English, as well as on-command video recording also are being demonstrated in Aberdeen. Sensor system software is expected to extract keywords and create an indexed multimedia representation of information collected by different soldiers. For comparison purposes, the soldiers wearing the sensors will make an after-action report based on memory and then supplement that after-action report with information learned from the sensor data.

    Let's see... recording of images and sounds the wearer may not have noticed, but later prove useful? Check. Integration with location-based systems for greater situational awareness? Check. Depiction of the system as a memory assistant? Check.

    The original DARPA proposal goes into more detail about what ASSIST will be trying to accomplish, and it's appropriately ambitious. They clearly recognize that the challenge isn't the hardware -- as the illustration shows, you can cobble together something right now with off-the-shelf cameras and recorders -- but the software that makes sense of the recorded data. Many of the goals described in the DARPA item (check the section starting with "Task 2: Advanced Technology Research") parallel the issues being confronted by Microsoft in its MyLifeBits project and Nokia with its Lifeblog project: interpretation of images; assignment of metadata; ontologies for location, objects and activities; and interfaces for access to and editing of recorded material.

    I wonder if Microsoft is working with DARPA on this; they certainly could be of use to each other.

    It strikes me that we'll probably see the emergence of this kind of technology first in the work of the military and (possibly more likely) the work of first responders. Many police vehicles already have automatic recorders; insisting that officers wear recorders as well isn't a big leap. Firefighters and other emergency-response personnel could wear them for after-action analysis and investigation, as well as for liability reasons (proof that a responder behaved professionally, or that s/he violated protocol).

    If ASSIST works well enough to do a "real" version, I wonder how many soldiers returning from duty will want to have something like that for their regular lives?

    May 13, 2006


    Says security guru Bruce Schneier:

    "The NSA would like to remind everyone to call their mothers this Sunday. They need to calibrate their system."

    That is all.

    May 5, 2006

    Metaverse Roadmap Underway

    The first day of the Metaverse Roadmap Project is hurtling to its conclusion, and it's been a mixed bag of small group discussions and plenary lectures, all playing blind men around an elephant, groping out what the "metaverse" future could look like. Much of the discussion has been predicated on the concept of a metaverse as a separate place, akin to the original Neal Stephenson concept; I'm not so sure that works, in part because of the uncomfortable echoes of the decade-old concepts of how the Internet would evolve, and in part because of my own bias towards the intersection of location-related virtual information and physical space.

    To that end, one of today's best presentations came from IFTF's Mike Liebhold, discussing the concept of the geospatial web, and how it could evolve. I won't try to describe it here, because Ethan Zuckerman has already done a masterful job of it: Michael Leibhold on building a tricorder - the geographic web. Ethan's semi-live-blogging the event; if you're interested in what's happening, hit his site, ...My Heart's in Accra.

    May 1, 2006

    Remaking the World

    My friend J. Eric Townsend posted a truly thought-provoking essay on his design blog, All Art Burns. In "On the Path to a Spime-full Future," Eric writes from a designer's perspective about what it would take to transition to a world of everyware (or spimes, in Bruce Sterling's pithier but less euphonious phrasing). He focuses on the concept of "spime retrofit modules," a kind of proto-spime that would give everyware-like functionality to previously dumb objects.

    The arbitrary line I draw between a proto-spime and a spime is that of design intent. A proto-spime was not intended to have spimelike behavior when it was initially conceived and designed; a real spime has intent in the initial conception and design. Compare this to early portable personal computers and modern laptops: Early portable computers were PC-ATs smushed into portable cases while modern laptops are not only designed and built on the plan of portability but often contain features unique to portable devices or lack those found in non-portable devices. [...]

    Initially, SRM’s can be easily attached to or installed in existing items that their humans want to know more about (or will soon discover they want to know more about). Some of these items might not be worth redesigning as proper spimes while others might be more than useful with an embedded SRM.

    Once we’ve learned a few lessons with proto-spimes we’ll be able to include the other side of spimes — data collection and management — in the iterative development process of spimes and SRMs.

    Eric then goes on to discuss the kinds of users who would be most likely to adopt SRMs. This is an incredibly important question, but is one that can easily be swept aside in discussions of signalling protocols and hardware formats. Adam Greenfield gets at it too in Everyware, and the fact that this discussion of a distributed awareness scenario is focusing on user requirements and concerns is a strong indicator that we're on the right track with this.

    Adam is currently winding down a conversation at the Well, over at the Inkwell free-to-the-public conference. I was enormously pleased to see that Adam responded in detail to my first iteration of the distributed awareness quadrants in the previous post; I will bow to his argument that "everyware" would encompass all four of the quadrants, although I do think the focus in the book is primarily on the extimate/watching us category.

    April 28, 2006

    Everyware, Blogjects and the Participatory Panopticon

    I love to watch the future take shape.

    For the past few years, I've closely watched the emergence of a set of technologies that make possible constant, widespread, and inexpensive observation and annotation of ourselves and the world. Cheap processors, low-power sensors, and ubiquitous wireless networks are critical elements of a scenario in which we can more readily know and, in the better versions, access the world around us. The key drivers for this emergence are our need to connect with each other and, increasingly, our need to monitor the changes taking place to our environment.

    The version of this world that I've followed most closely is what I've called the "participatory panopticon" -- a scenario in which our personal mobile devices watch the world around us, acting both as a back-up memory and as a medium for the capture of our experiences of the world. These are intimate devices, carried by or worn on ourselves, made to serve as adjuncts to our own capacity to observe the space around us. The political version of the participatory panopticon (hereafter PP -- and I know, the need to abbreviate it is a sign that this isn't the right language for the concept) has been around longest, in the form of "sousveillance;" the digital Witness project is its latest example. At the TED 2006 conference, I described an imagined environmental version of the PP, monitoring not social behavior but ecological conditions. The PP could take on numerous forms, but all with the same core element: the technology is an interface between ourselves and the world that focuses on what's around us.

    USC's Julian Bleecker, in A Manifesto for Networked Objects — Cohabiting with Pigeons, Arphids and Aibos in the Internet of Things, describes a clearly related but not identical manifestation of this technology. He refers to them as "blogjects," objects that create an ever-expanding record of themselves, accessible over the net -- objects that tell their own stories.

    [Blogjects tell us] about their conditions of manufacture, including labor contexts, costs and profit margins; materials used and consumed in the manufacturing process; patent trackbacks; and, perhaps most significantly, rules, protocols and techniques for retiring and recycling [them].

    In my WorldChanging discussion of the essay, I note that Bleecker's vision gives us something akin to an "augmented world." Like the technologies of the PP, blogjects provide an interface between ourselves and the world, focused upon the world -- except here the technologies are not intimate, but are instead extimate, spread around the environment, augmenting our sense of the world at a distance.

    The third, and most recent, manifestation of this "distributed attention" technology can be found in the pages of Adam Greenfield's Everyware, subtitled "the dawning age of ubiquitous computing." Greenfield's everyware model is in some respects the polar opposite of the participatory panopticon: rather than intimate devices watching the world, Everyware posits a world of extimate devices watching each of us.

    That sounds more Orwellian than I think Greenfield would intend. Although it's clear he's very concerned about the social, cultural and legal implications of devices that pay attention to our behavior, Greenfield is also able to explain why the capabilities inherent to ubiquitous computing make its arrival essentially inevitable. This isn't techno-determinism, it's (for lack of a better phrase) utility-determinism. When a technology, or behavior, or idea can let people do significantly more with less effort or cost, or do useful things they could never do in the past, the likelihood of widespread adoption of that technology/behavior/idea is increased. Reading through Greenfield's examples of proto-everyware already in use, it's easy to see just how attractive aspects of this scenario will be.

    Even if getting around town were the only thing Octopus [Hong Kong's smart transit card system] could be used for, that would be useful enough. But of course that's not all you can do with it, not nearly. The cards are anonymous, as good as cash at an ever-growing number of businesses, from Starbucks to local fashion retailer Bossini. You can use Octopus at vending machines, libraries, parking lots and public swimming pools. It's quickly replacing keys, card and otherwise, as the primary means of access to a wide variety of private spaces, from apartment and office buildings to university dorms. Cards can be refilled at just about any convenience store or ATM. And, of course, you can get a mobile with Octopus functionality built right into it, ideal for a place as phone-happy as Hong Kong.

    Notably, Greenfield is able to avoid both ascriptions of "good" or "evil" qualities to technology and bland assertions of technology's "neutrality." All artifacts are biased, because they embed the assumptions and priorities of their creators. Sometimes the biases are sufficiently universal or inconsequential that we don't perceive them as biases, but ask any left-handed person about living in a right-handed world and you can begin to understand how pervasive subtle bias can be.

    The way I've described these three manifestations of this technology suggests a larger structure at play. Here's the inevitable four-box:

    Is the technology intimate (carried or worn -- or implanted -- on ourselves) or extimate (extant in the world around us)? Is the technology focused upon us (individual humans or human behavior) or the world (everything else)? In this structure, an obvious fourth niche presents itself, devices that are both intimate and self-focused. Medical monitors are a clear candidate for this box, but I wonder if there's something else that would be a more likely fit. Is this where personal augmentation technology slots in?
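    The two axes above make the four-box easy to lay out explicitly. As a minimal sketch (the enum and function names here are my own illustrative choices, not anything from the posts), the taxonomy might look like this:

```python
from enum import Enum

class Locus(Enum):
    """Where the technology lives."""
    INTIMATE = "carried or worn on ourselves"
    EXTIMATE = "extant in the world around us"

class Focus(Enum):
    """What the technology pays attention to."""
    US = "individual humans or human behavior"
    WORLD = "everything else"

# The four quadrants, populated with the manifestations discussed above.
# The intimate/us entry is only the text's tentative candidate.
QUADRANTS = {
    (Locus.INTIMATE, Focus.WORLD): "participatory panopticon",
    (Locus.EXTIMATE, Focus.WORLD): "blogjects",
    (Locus.EXTIMATE, Focus.US): "everyware",
    (Locus.INTIMATE, Focus.US): "medical monitors (?)",
}

def classify(locus: Locus, focus: Focus) -> str:
    """Return the named manifestation for a given quadrant."""
    return QUADRANTS[(locus, focus)]

print(classify(Locus.INTIMATE, Focus.WORLD))  # participatory panopticon
```

    Laying it out this way makes the open question concrete: three of the four cells already have well-developed names and communities, while the intimate/us cell is the one still waiting for its defining technology.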