Fast Company: The Transparency Dilemma
Last week's and this week's "Open Future" columns for Fast Company make up a two-part examination of the dilemmas surrounding transparency.
In "I Can See You," I wrote:
We leave digital footprints everywhere we go, and those footprints are becoming easier and easier to track. Although many of us believe that sunlight is the best disinfectant, and that transparency is generally a good thing for a society, the lack of control over what you reveal about yourself is often troubling. The ease with which abundant personal information can be used for, say, identity theft creates a situation where we have many of the dilemmas of transparency without enough of the benefits. [...]
We live in a world of unrelenting transparency. What can we do about it?
I do believe that transparency is, on balance, a social good. But it would be naïve at best to believe that this social good is unalloyed. Greater transparency -- particularly a kind of transparency that's both incomplete and hard to control -- can create enormous problems for individuals without offering reliable solutions.
Now, in "Managing Transparency," I continue:
What are the strategies we can use to deal with unrelenting transparency? Fight it. Accept it. Deceive it. [...]
The last strategy, deception, boils down to this: we may be able to watch each other, but that doesn't mean what we show is real.
Call it "polluting the datastream"--introducing false and misleading bits of personal information (about location, about one's history, about interests and work) into the body of public data about you. It could be as targeted as adding lies to your Wikipedia entry (should you have one) or other public bios; it could be as random as putting enough junk info about yourself onto Google-indexed websites and message boards. Many of us do this already, at least to a minor degree: at a recent conference, I asked the audience how many give false date-of-birth info on website sign-ups; over half the audience raised their hands.
The goal here isn't to construct a consistent alternate history for yourself, but to make the public information sufficiently inconsistent that none of it could be considered entirely reliable.
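To make the date-of-birth trick concrete, here's a minimal sketch of what "polluting the datastream" might look like in practice. This is purely illustrative -- every field, city, employer, and interest below is invented for the example -- but it captures the core move: give a different, individually plausible answer on every sign-up, so that no two public records about you agree.

```python
import random
from datetime import date, timedelta

# Illustrative field values -- all invented for this sketch.
CITIES = ["Portland", "Austin", "Madrid", "Osaka", "Cape Town"]
EMPLOYERS = ["Acme Analytics", "Bluefin Labs", "Nordlicht GmbH"]
INTERESTS = ["falconry", "numismatics", "orbital mechanics", "bonsai"]

def random_birthdate(min_age=21, max_age=65):
    """Return a plausible but randomly chosen date of birth."""
    today = date.today()
    days_old = random.randint(min_age * 365, max_age * 365)
    return today - timedelta(days=days_old)

def decoy_profile():
    """Generate one internally plausible decoy profile.

    Each call produces a fresh set of answers; across many
    sign-ups, the profiles are mutually inconsistent by design.
    """
    return {
        "birthdate": random_birthdate().isoformat(),
        "hometown": random.choice(CITIES),
        "employer": random.choice(EMPLOYERS),
        "interest": random.choice(INTERESTS),
    }

if __name__ == "__main__":
    # Three sign-ups, three different stories -- together they agree on nothing.
    for _ in range(3):
        print(decoy_profile())
```

Note that the generator deliberately does *not* remember its previous answers: the inconsistency is the feature, not a bug to be engineered away.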
This is actually a point I explored in a bit of depth at Futuresonic earlier this month. In a world of partial transparency, where both total privacy and symmetric transparency are effectively impossible, it may be that deception is the most workable method of protecting one's privacy.
I didn't mention this in the FC piece -- it runs long as it is -- but the technologies of the "participatory decepticon" have an interesting role here. Rather than using the various means of creating false images, videos, recordings and such to manipulate perceptions of political figures and other public targets, those tools could just as easily be used to create false histories for ourselves.