
Fast Company: The Transparency Dilemma

Last week's and this week's "Open Future" columns for Fast Company make up a two-part examination of the dilemmas surrounding transparency.

In "I Can See You," I wrote:

We leave digital footprints everywhere we go, and those footprints are becoming easier and easier to track. Although many of us believe that sunlight is the best disinfectant, and that transparency is generally a good thing for a society, the lack of control over what you reveal about yourself is often troubling. The ease with which abundant personal info can be used for (e.g.) identity theft creates a situation where we have many of the dilemmas of transparency without enough of the benefit. [...]

We live in a world of unrelenting transparency. What can we do about it?

I do believe that transparency is, on balance, a social good. But it would be naïve at best to believe that this social good is unalloyed. Greater transparency -- particularly a kind of transparency that's both incomplete and hard to control -- can create enormous problems for individuals, without offering reliable solutions.

Now, in "Managing Transparency," I continue:

What are the strategies we can use to deal with unrelenting transparency? Fight it. Accept it. Deceive it. [...]

The last strategy, deception, boils down to this: we may be able to watch each other, but that doesn't mean what we show is real.

Call it "polluting the datastream"--introducing false and misleading bits of personal information (about location, about one's history, about interests and work) into the body of public data about you. It could be as targeted as adding lies to your Wikipedia entry (should you have one) or other public bios; it could be as random as putting enough junk info about yourself onto Google-indexed websites and message boards. Many of us do this already, at least to a minor degree: at a recent conference, I asked the audience how many give false date-of-birth info on website sign-ups; over half the audience raised their hands.

The goal here isn't to construct a consistent alternate history for yourself, but to make the public information sufficiently inconsistent that none of it could be considered entirely reliable.
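The tactic described above can be sketched in a few lines of code. This is purely an illustrative toy -- nothing from the post itself -- and the field names and value lists are invented for the example: the point is only that each sign-up gets a different, mutually inconsistent set of personal details, so no aggregated copy of the data can be treated as authoritative.

```python
import random

# Toy sketch of "polluting the datastream": emit a different,
# mutually inconsistent decoy profile for each site sign-up.
# All field names and values here are invented for illustration.
BIRTH_YEARS = range(1950, 1996)
HOMETOWNS = ["Portland", "Austin", "Tucson", "Albany"]
OCCUPATIONS = ["teacher", "analyst", "designer", "courier"]

def decoy_profile(rng: random.Random) -> dict:
    """Return one decoy profile; repeated calls yield conflicting values."""
    return {
        "birth_year": rng.choice(list(BIRTH_YEARS)),
        "hometown": rng.choice(HOMETOWNS),
        "occupation": rng.choice(OCCUPATIONS),
    }

if __name__ == "__main__":
    rng = random.Random()
    # Each site gets its own contradictory version of "you".
    for site in ["site-a", "site-b", "site-c"]:
        print(site, decoy_profile(rng))
```

Note that, as the post says, the profiles deliberately do not form a consistent alternate identity -- the inconsistency itself is what undermines the reliability of the aggregate.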

This is actually a point I explored in a bit of depth at Futuresonic earlier this month. In a world of partial transparency, where both total privacy and symmetric transparency are effectively impossible, it may be that deception is the most workable method of protecting one's privacy.

I didn't mention this in the FC piece -- it runs long as it is -- but the technologies of the "participatory decepticon" have an interesting role here. Rather than using the various means of creating false images, videos, recordings and the like to manipulate perceptions of political figures and other public targets, those tools could be used to easily create false histories for ourselves.

Fun.

Comments

Vernor Vinge had the information-pollution strategy in "Rainbows End". In the novel, the Friends of Privacy essentially dump disinformation about everyone onto the Web, making it a huge challenge to sort out what is true and what is false.

I've introduced non-factual data into my public datastream in the past, as part of specific debates about whether someone posting something under my nickname meant that "I had said it" personally. Given that in most places online anyone can post anything and claim to be whomever they want, it's trivially easy to spoof bad data into the record.

Oddly, even the fact that some posts made as "me" were written while I was in federal prison isn't enough to make this obvious dynamic clear to those whose simplistic understanding of "facts" is "whatever the computer says to me when I type in search queries."

There is an entire science behind this in the zoophile community - we live and die based on this sort of question so it's not purely academic to us. There are classes of being "outed" in one's sexuality, for example. One can be "outed" in the sense that "everyone knows" - but nobody can actually prove it. And there's outing in the sense of self-outing. And there's outing via overwhelming, legitimate data that is enough to be proof whether someone wanted to acknowledge it or not. All these gradations of just how much a fact is "known to be true" become far more relevant online - the "nudge nudge wink wink" approach to highly sensitive fact patterns will become a form of art, as time goes by.

Disinformation has always been a disproportionately effective strategy, insofar as our basic cognitive approach to "facts" militates against accepting that some "facts" are intentionally fed into the system not to convince us of a lie, but to throw us off the truth. I've been subject to government surveillance myself, and later had the pleasure of discussing what was learned with the agents involved (in handcuffs, no less, and facing life in prison) - they were simply unable to comprehend that I'd used simple disinformation tools. Changing a pronoun from 'he' to 'she' is enough to turn an entire surveillance effort upside-down. Arrange a false "meeting location" in the middle of nowhere and agents will spend weeks combing the woods looking for evidence of the rendezvous.

The funny thing is, today it's not illegal to spread false breadcrumbs like this - I have no doubt it will fall under criminal conspiracy laws in future statutory updates - the strategy is too effective as a form of asymmetric information warfare for the hegemonic authorities to allow its use without threat of punishment. Just as knowingly selling 100% fake narcotics is a crime, so it will soon enough be a crime to spread false information about oneself online - mark my words. Vinge got it right in "Rainbows End" (along with much else - indeed I now own "affiliance.org" thanks to the book!), but he failed to see that the FoP would be prosecuted for "felony conspiracy to pollute the datastream" in a politically realistic future.

Fausty | www.cultureghost.org

Post a comment

All comments go through moderation, so if your comment doesn't show up immediately, it's because I'm not available to click the "okiedoke" button. Comments telling me that global warming isn't real, that evolution isn't real, that I really need to follow [insert religion here], that the world is flat, or similar bits of inanity are more likely to be deleted than approved. Yes, it's unfair. Deal. It's my blog, I make the rules, and I really don't have time to hand-hold people unwilling to face reality.


Creative Commons License
This weblog is licensed under a Creative Commons License.