Thursday, June 21, 2012

Social transparency (or hyper-transparency): why privacy is not the (complete) answer…

We are not far from the ‘transparent society’ described by David Brin in his excellent book of the same title.
He warned of the dangers to freedom when surveillance technologies are used by a few people rather than by many. Privacy will be lost in the ‘transparent society’ of tomorrow.
Hence it would be better for society as a whole that surveillance be equal for all, and that the public have the same information access as those in power (everybody should watch everybody, regardless of social or political status). With today's ubiquitous camera surveillance, we are very close to fulfilling Brin's prophecy.
But new developments within the cyber world create a more insidious 'transparency' (a sort of ‘hyper-transparency’).
In advertising, new technical instruments (recommendation engines) try to connect potential consumers with products they are interested in buying. The goal is to match each buyer with the particular items he would want. This requires understanding, ‘personalizing’, or ‘profiling’ individuals' buying behavior.
ICT profiling tools are the best solution to this problem. This ‘automatic profiling’ is different from the forensic activity seen in popular culture. It is much more, since it is done on a large scale, everywhere and all the time. It is also less, since this profiling focuses (for now) only on aspects of ‘homo oeconomicus’: the individual is seen as a consumer with his tastes, habits, etc.
The most interesting variety is the ‘recommendation engine’ based on ‘collaborative filtering’ - making automatic predictions (filtering) about a user's interests by collecting taste information from many other users (collaborating).
An individual asks his friends for advice about how to choose newspapers, records, books, movies, or other items in everyday life. He can figure out which of his friends has tastes similar to his own and which are different. 
‘Collaborative filtering’ automates this process, based on the idea that whenever two similar people like the same item, there are probably several other items both would also find interesting. These systems discover connections between people's interests through data mining, pattern recognition, and other sophisticated techniques.
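The idea can be sketched in a few lines. The following is a minimal, purely illustrative user-based collaborative filter; all user names, items, and ratings are fabricated for the example, and real systems use far larger data and more sophisticated models.

```python
from math import sqrt

# Hypothetical user -> {item: rating} data, fabricated for illustration.
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "movie_x": 4},
    "bob":   {"book_a": 4, "book_b": 2, "movie_x": 5, "movie_y": 3},
    "carol": {"book_b": 5, "movie_y": 4},
}

def cosine_similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user, ratings, top_n=3):
    """Rank unseen items by the similarity-weighted ratings of other users."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        if sim <= 0:
            continue
        for item, rating in their_ratings.items():
            if item in ratings[user]:
                continue  # only recommend items the user has not rated
            scores[item] = scores.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    ranked = {item: scores[item] / weights[item] for item in scores}
    return sorted(ranked, key=ranked.get, reverse=True)[:top_n]

print(recommend("alice", ratings))  # items alice hasn't seen, e.g. movie_y
```

The automation replaces the manual step of asking friends: similarity between rating vectors stands in for knowing whose tastes match your own.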
There are some distinct advantages to these techniques since ‘profiling’ ensures the adaptation of our technological environment to the user through a sort of intimate ‘personal experience.’ But there are equally some trade-offs since profiling technologies make possible a far-reaching monitoring of an individual's behavior and preferences.
Individuals need some sort of protection.
-The first step was to adopt rules regarding privacy and data protection. Under these rules, data can be processed freely as long as they are not personal data (either from the start, or after 'anonymization'). Therefore, many profiling systems are built on the assumption that processing personal data after anonymization would fall outside the scope of data protection legislation.
However, with new knowledge inference techniques, the frontier between anonymous data and identifying data tends to blur and evolve. Data can be considered anonymous at a given time and in a given context; but later on, because new, seemingly unrelated data has been released, generated, or forwarded to a third party, they may allow “re-identification.”
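The classic form of this is a linkage attack: joining an 'anonymized' dataset with a public one on shared quasi-identifiers. The sketch below is purely illustrative; all records, names, and field choices are fabricated, and the field combination (zip, birth year, sex) is just one common example of quasi-identifiers.

```python
# Fabricated 'anonymized' records: names removed, purchases kept.
anonymized_purchases = [
    {"zip": "02139", "birth_year": 1980, "sex": "F", "item": "self-help book"},
    {"zip": "90210", "birth_year": 1975, "sex": "M", "item": "guitar strings"},
]

# Fabricated public register that still carries names.
public_register = [
    {"name": "Jane Roe", "zip": "02139", "birth_year": 1980, "sex": "F"},
    {"name": "John Doe", "zip": "90210", "birth_year": 1975, "sex": "M"},
]

def link(anon_rows, public_rows, keys=("zip", "birth_year", "sex")):
    """Join two datasets on quasi-identifiers; a unique match re-identifies."""
    matches = []
    for a in anon_rows:
        candidates = [p for p in public_rows
                      if all(p[k] == a[k] for k in keys)]
        if len(candidates) == 1:  # exactly one person fits => re-identified
            matches.append((candidates[0]["name"], a["item"]))
    return matches

print(link(anonymized_purchases, public_register))
```

Removing names was not enough: the remaining attributes, combined with an outside dataset, restored the identities, which is exactly why 'anonymous' is a property of a moment and a context rather than of the data itself.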

-And much more seems to be at stake. There is an information imbalance among the parties involved—where the firms “know” the consumers better than the consumers know themselves. This is what one might call ‘hyper-transparency.’
The profiling process is mostly unknown to the individual, who might never imagine the logic behind the decisions taken about him. Therefore it becomes hard, if not impossible, to contest the application of a particular group profile.
The information could also be used to discriminate among users based on a variety of factors and strategies.
-Finally, such knowledge could potentially allow the targeting firms to use their insights into the individuals’ preferences and previous actions to unfairly manipulate them (with a subsequent loss of personhood and autonomy). 
Should we search for new, possibly legal, remedies? Or should we think about a new social paradigm where 'social transparency' or 'hyper-transparency' through profiling becomes general and accessible to everybody?
Interesting questions…