Thursday, June 21, 2012

Social transparency (or hyper-transparency): why privacy is not the (complete) answer…

We are not far from the ‘transparent society’ described by David Brin in his excellent book of the same title.
He warned of the dangers to freedom when surveillance technologies are used by a few people rather than by many. Privacy will be lost in the ‘transparent society’ of tomorrow.
Hence it would be better for society as a whole that surveillance be equal for all and that the public have the same access to information as those in power (everybody should be able to watch everybody, regardless of social or political status). With today's ubiquitous camera surveillance, we are very close to fulfilling Brin's prophecy.
But new developments within the cyber world create a more insidious 'transparency' (a sort of ‘hyper-transparency’).
In advertising, new technical instruments (recommendation engines) try to connect potential consumers with the products they are interested in buying. One has to match each buyer with the particular items he would be interested in. This requires understanding, ‘personalizing’, or ‘profiling’ individuals' buying behavior.
ICT profiling tools are the best available solution to this problem. This ‘automatic profiling’ is different from the forensic profiling seen in popular culture. It is much more, since it is carried out on a large scale, everywhere and all the time. It is also less, since this profiling focuses (for now) only on aspects of ‘homo oeconomicus’: the individual is seen as a consumer with his tastes, habits, and so on.
The most interesting variety is the ‘recommendation engine’ based on ‘collaborative filtering’: making automatic predictions (filtering) about a user's interests by collecting taste information from many other users (collaborating).
An individual asks his friends for advice about how to choose newspapers, records, books, movies, or other items of everyday life. He can figure out which of his friends have tastes similar to his own and which do not.
Collaborative filtering automates this process, based on the idea that whenever two people share similar tastes on one item, there are probably several other items both would find interesting. These systems try to discover connections between people's interests in a data-intensive way, relying on data mining, pattern recognition, and other sophisticated techniques.
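To make the mechanism concrete, here is a minimal, purely illustrative sketch of user-based collaborative filtering in Python (the data and function names are invented for this example; real recommendation engines operate on millions of users with far more sophisticated models):

import math

# Ratings given by users to items (1-5); missing entries mean "not rated".
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "film_x": 4},
    "bob":   {"book_a": 4, "book_b": 3, "film_y": 5},
    "carol": {"book_b": 2, "film_x": 5, "film_y": 4},
}

def cosine_similarity(u, v):
    # Similarity of two users, computed over the items they both rated.
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(target, ratings, top_n=3):
    # Score items the target has not rated, weighted by user similarity.
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine_similarity(ratings[target], their_ratings)
        for item, rating in their_ratings.items():
            if item in ratings[target]:
                continue  # the target already rated this item
            scores[item] = scores.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    ranked = {i: scores[i] / weights[i] for i in scores if weights[i] > 0}
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend("alice", ratings))  # predicts, e.g., how much Alice might like film_y

The prediction is simply a similarity-weighted average of what like-minded users thought of the items the target has not yet seen.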
There are some distinct advantages to these techniques, since ‘profiling’ adapts our technological environment to the user through a sort of intimate ‘personal experience.’ But there are equally some trade-offs, since profiling technologies make possible a far-reaching monitoring of an individual's behavior and preferences.
Individuals need some sort of protection.
-The first step was to adopt rules regarding privacy and data protection. Under these rules, data can be processed freely as long as they are not personal data (either by origin or after ‘anonymization’). Therefore, many profiling systems are built on the assumption that processing personal data after anonymization falls outside the scope of data protection legislation.
However, with new knowledge-inference techniques, the frontier between anonymous data and identifying data tends to blur and evolve. Data can be considered anonymous at a given time and in a given context. But later on, because new, seemingly unrelated data has been released, generated, or forwarded to a third party, ‘re-identification’ may become possible, as the sketch below illustrates.
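Here is a toy sketch of such re-identification by linkage (all data invented for illustration): an ‘anonymized’ purchase log still carries quasi-identifiers (ZIP code, birth date, sex) that can be joined with a separate, named dataset.

anonymized_purchases = [
    {"zip": "75011", "birth": "1980-04-02", "sex": "F", "item": "pregnancy test"},
    {"zip": "31000", "birth": "1975-09-17", "sex": "M", "item": "sci-fi novel"},
]

public_register = [  # e.g. an electoral roll or a scraped social-network profile list
    {"name": "Jane Doe", "zip": "75011", "birth": "1980-04-02", "sex": "F"},
    {"name": "John Roe", "zip": "31000", "birth": "1975-09-17", "sex": "M"},
]

def reidentify(anonymous_rows, named_rows, keys=("zip", "birth", "sex")):
    # Link rows that share the same quasi-identifier values.
    index = {tuple(r[k] for k in keys): r["name"] for r in named_rows}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anonymous_rows
        if tuple(row[k] for k in keys) in index
    ]

for match in reidentify(anonymized_purchases, public_register):
    print(match["name"], "->", match["item"])

The ‘anonymous’ dataset contains no names at all; it is its combination with the second dataset that re-identifies the individuals.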

-And much more seems to be at stake. There is an information imbalance among the parties involved: the firms ‘know’ the consumers better than the consumers know themselves. This is what one might call ‘hyper-transparency.’
The profiling process is mostly unknown to the individual, who might never imagine the logic behind the decisions taken about him. It therefore becomes hard, if not impossible, to contest the application of a particular group profile.
The information could also be used to discriminate among users, relying on a variety of factors and strategies.
-Finally, such knowledge could allow the targeting firms to use their insights into individuals' preferences and previous actions to manipulate them unfairly (with a subsequent loss of personhood and autonomy).
Should we search for new, possibly legal, remedies? Or should we think about a new social paradigm where ‘social transparency’ or ‘hyper-transparency’ through profiling becomes general and accessible to everybody?
Interesting questions…

Wednesday, May 23, 2012

Cyberwar? ... attaining one hundred victories in one hundred battles is not the pinnacle of excellence. Subjugating the enemy’s army without fighting is the true pinnacle of excellence. ~ Sun Tzu, The Art of War

War is a violent continuation of interstate politics (according to von Clausewitz). Cyber war should enter the same paradigm.

From a legal perspective, international doctrine has examined whether a cyber attack could be qualified as a use of force (under Article 2(4) of the UN Charter) or as an armed attack (under Article 51 of the UN Charter). The criterion for bringing a cyber attack under these notions was its degree of physical destructiveness. A small degree of physical destruction places a cyber attack instead under non-military international concepts such as economic coercion, reprisals, international responsibility, etc.

This analysis seemed pertinent. It tries to see cyber warfare (a form of interstate cyber attack) as an analogical extension of classical warfare; cyber warfare would thus be just another step within an unchanged framework. But the real evolution of cyber attacks by state actors shows the limits of this vision. No state is willing to escalate a cyber attack to the point of producing the huge destruction that may trigger the classical forms of war or armed conflict. States prefer to act unnoticed while pursuing their political aims with the new tool.

We can imagine an even more challenging perspective. The question of destructiveness seems to be the core issue. The key concept would then be ‘informational (virtual) destructiveness’, which may relate to physical destructiveness in the same way that ‘intellectual property rights’ relate to ordinary ‘property rights’. Without dead and wounded, without casualties, this ‘virtualized’ (invisible but no less severe) destruction might be an essential aspect of a cyber warfare undermining the base of a knowledge economy, knowledge society, or knowledge state. Finally, we get a hint of cyber attacks as cyber warfare (or cyber war) in itself, as a new genus, and not as part of the classical armed-conflict paradigm.

Monday, May 7, 2012

The hidden collapse: Lucio Russo and the scientific collapse of the first century BC

NB. A downloadable version of this post may be found at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1804297.
The decline and collapse of human societies is nowadays a popular subject. Environmental questions, resource shortages, the nuclear menace, the demographic explosion, and other “apocalyptic” dangers have taught us that our civilization might have a limited life.
From Gibbon[1] to Tainter[2] and, more recently, Jared Diamond[3], the decline and collapse of societies like the Roman or Maya empires have been studied at length. Nobody ever mentioned a similar breakdown of the Hellenistic cultures. However, Lucio Russo[4] uncovered a different kind of collapse, almost invisible and maybe even more important: a scientific collapse.
Russo's main idea is that a scientific revolution took place during Hellenistic times and that it was forgotten once science, as a systematic inquiry, was abandoned in Antiquity.
The recovery of science was accomplished only seventeen centuries later.
Russo's contributions cover in detail the birth, the place, and the decline and fall of Hellenistic science and technology in domains like mathematics, mechanics of solids and fluids, topography and geodesy, optics, astronomy, and anatomy. He obtained impressive results, among them the claim that the inverse square law of gravitation was discovered by some Hellenistic authors. Such statements might be challenged, but that is not the point here, since we would like to inquire into Russo's supporting hypotheses and research methodology. Only the results of this inquiry might offer material for deeper reflections or future studies[5].

1. The timing of the first scientific revolution and Hellenism

It is now generally accepted that the Hellenistic age started in 323 B.C., with the death of Alexander the Great, and finished by 30 B.C., with the death of Cleopatra and the annexation of Egypt by Rome.
Russo agrees with this starting point of the Hellenistic age. For him, the end of that age was linked to the end of a scientific revolution, and it came about from the second century B.C. onward, when scientific studies declined rapidly.
For Russo, the most severe adverse effect on scientific activity lay in the long wars between Rome and the Hellenistic states, from the plunder of Syracuse and the death of Archimedes in 212 B.C. to 146 B.C., when Carthage and Corinth were razed to the ground. Russo considers that the Roman world of the third and second centuries B.C. was much more brutal than the world of Virgil and Horace. As a matter of fact, the refined culture later acquired by Roman intellectuals was the result of continuing contact with Hellenistic civilization, mainly through Greeks taken as slaves and through the plundering of Greek works of art.
According to him, “Alexandria's scientific activity, in particular, stopped in 145-144 B.C., when the king Ptolemy VIII initiated a policy of brutal persecution against the Greek ruling class. For example, Polybius acknowledged that the Greek population of Alexandria was almost entirely destroyed at that time”[6].

2. Arguments for a scientific discontinuity followed by an unstoppable decline

The feeling of decay was generally shared in Antiquity. As an example, Seneca[7] thought that "... far from advance being made toward the discovery of what the older generations left insufficiently investigated, many of their discoveries are being lost".
The interruption of oral transmission made ancient works incomprehensible. For example, Russo mentions the case of Epictetus, regarded at the beginning of the second century A.D. as the “greatest luminary of Stoicism.” Epictetus confessed to being unable to understand Chrysippus, his Hellenistic predecessor.
Russo also challenges the common opinion that the Almagest rendered earlier astronomical treatises obsolete. To him, such a vision is incompatible with the overlooked reality that “whereas astronomy enjoyed an uninterrupted tradition down to Hipparchus (and especially in the period since Eudoxus), the subsequent period, lasting almost until Ptolemy's generation, witnessed no scientific activity.” There was, in that period, a profound cultural discontinuity. This break, attested in different ways, is clearly illustrated by the astronomical observations mentioned in the Almagest. “They are spread over a few centuries, from 720 B.C. to 150 A.D., but leaving a major gap of 218 years: from 126 B.C., the date of the last observation attributed to Hipparchus, to 92 A.D., corresponding to a lunar observation made by Agrippa”[8].
The author mentions the relationship between the Almagest's star catalog and the star coordinates of Hipparchus, citing the work of Grasshoff, according to whom Ptolemy, although he included some coordinates measured by himself, also made extensive use of the Hipparchian data from three centuries before.

3. A partial recovery based on the reproduction and selection of some scientific results (drawback: the simplest, not the best, results were preserved)

Hellenistic culture ‘survived’ during the Imperial Roman age. The former Hellenistic kingdoms were not assimilated linguistically or culturally, and from a technological or economic point of view there was absolute continuity with the preceding period.
After the interruption produced by the wars with Rome, the ‘Pax Romana’ allowed a partial recovery of scientific research during the first and second centuries A.D. (in the time of Heron, Ptolemy, and Galen).
However, after that moment the decline was inexorable. For some centuries, “Alexandria remained the center of whatever scientific activity there was to be. The last scientist worthy of mention may have been Diophantus, if he really lived in the third century A.D. The activity documented in the fourth century A.D. is limited to compilations, commentaries, and rehashing of older works; among the commentators and editors of that time, we will be particularly interested in Pappus, whose Collection brings together many mathematical results”[9].
The extent of the destruction of Hellenistic works has usually been underestimated, owing to the assumption that it was the best material that survived. Russo considers groundless the optimistic view that ‘classical civilization’ handed down certain major works incorporating the knowledge of the lost writings. In fact, in the face of a general regression in the level of civilization, “it's never the best works that will be saved through an automatic process of selection”[10].
According to Russo, even among the real scientific works preserved by the Byzantines and the Arabs, two selection criteria seem to have been at work. “The first was to give preference to authors of the imperial period, whose writings are, in general, methodologically inferior but easier to use: we have, for example, Heron's work on mirrors, but not the treatise that, according to some testimonies, Archimedes wrote on the same subject. Next, among the works of an author, the ones selected are generally the more accessible, and of these often only the initial chapters. We have the Greek text of the first four, more elementary, books of Apollonius' Conics, but not the next four (of which three survived in Arabic); we have Latin and Arabic translations of the work of Philo of Byzantium on experiments in pneumatics, but none of his works on theoretical principles.”
Is this vision of Russo confirmed by other research? We might say yes since there are similar discontinuities and decays in technologies closely related to scientific activities.
In this respect, Derek de Solla Price[11] considered that “The existence of [...] Antikythera mechanism necessarily changes all our ideas about the nature of Greek high technology. [...] Hero and Vitruvius should be looked upon as chance survivors that may not by any means be as representative as hitherto assumed.”
Price[12] also stated that “Judging from the texts of Heron, Philon, and Ctesibius… from the tradition of automatic globes and planetarium made by Archimedes and from the few extant objects (...) we may say that the technology of astronomical automata underwent a period of intense development. The first major advances seem to have been made by Ctesibius and Archimedes, and the subsequent improvement must have been prodigious indeed. In the first century B.C., those facts made possible the building of the Antikythera mechanism with its extraordinarily complex astronomical gearing. From this, we must suppose that the writings of Heron and Vitruvius preserve for us only a small and incidental portion of the corpus of mechanical skill that existed in Hellenistic and Roman times.”

4. The ‘fossilization of knowledge’ as a means for reconstructing ancient scientific achievements

Russo considers that the Latin and Greek authors of the imperial period cite the Hellenistic authors without understanding the ancient scientific methodology. Science became ‘fossilized’[13], crystallized, a dead fragment of an ancient living organism.
Is this vision of a ‘fossilized science’ consistent? We think so. One can give just one example of such ‘fossilized astronomical knowledge’ transmitted through oral communication.
In this respect, Neugebauer[14] cites the book Kâla Sankalita, published in Madras in 1825 by Warren. Warren had traveled extensively in Southern India and recorded the Tamil natives' astronomical teachings for the computation of the lunar motion. “His informants no longer had any idea about the reasons for the single steps which they performed according to their rules. The numbers themselves were not written down but were represented by groups of shells placed on the ground. (...)
Nevertheless, they carried out long computations to determine the magnitude, duration, beginning, and end of an eclipse, with numbers that run into the billions in their integral part and with several sexagesimal places for their fractions. Simultaneously, they used memorized tables for the daily motion of the sun and moon involving many thousands of numbers.”
For Neugebauer, it is “evident that the methods found by Warren still in existence in the 19th century are the last witness of procedures which go back through the medium of Hellenistic astronomy…”.
This kind of Hellenistic ‘fossilized knowledge’ is, for Russo, the starting point for the recovery of science in the sixteenth century. The ‘fossilized science’ is also the ground on which he builds a spectacular and highly controversial reconstruction of some Hellenistic scientific theories. To accomplish this, Russo introduces a methodological novelty in the interpretation of the original sources: he focuses on the second-hand information (‘fossilized knowledge’) spread throughout the literary sources, and not just on the scientific ones. This close examination of a wider range of sources than the traditional ones allows him to deepen the historical perspective and makes his spectacular reconstructions possible.

5. Conclusions and implications

A. The relevance of Russo's study
Such research seems, at first, to be without practical significance. However, Russo's final interrogation concerns us all.
The author asks whether the reduction of a general and unified scientific theory to fragmented and ‘fossilized’ knowledge, unable to produce new results, may occur again in the coming future or is just a matter of the ancient past. His answer to that question is definitely affirmative. Russo thinks that the vital substance of knowledge is now reserved for smaller and smaller groups of specialists, which may endanger science's future survival. Knowing what produced the ancient decay may allow us to avoid the same fate in the future.

B. The testing of some primary hypotheses 
In our description of Russo's results, we have seen that his theory and conclusions were endorsed by other scholars. In this way, one can accept that such phenomena were possible, without knowing anything about their probability.
However, some of Russo's hypotheses may be tested. We might verify, for example, whether the transmission of scientific knowledge from the more advanced society (the Hellenistic one) to the less advanced one (Rome) was, in fact, based on reproducing the most accessible rather than the most advanced works. One can therefore imagine an actual sociological test, finely tuned to match the real conditions of Hellenistic and Roman times.

C. The opening up of other research (some questions and tentative answers) 
We may underline several other issues raised by Russo's constructions, such as:
-How can one measure an ancient society's scientific and technological creativity, in the absence of today's patent systems?
-What made the Greek Hellenistic world the first (and the last) scientifically developed culture before the modern one? Is it linked to the plurality of scientific centers in the competing Hellenistic kingdoms? What other factors were relevant?
-Why was this advanced Hellenistic culture so fragile? Was it because of the small number of scientists, the lack of printing facilities, the spread of illiteracy, or the absence of institutions like modern scientific academies?
-What kind of sociology of science characterized the Hellenistic period? What makes the transfer of ancient scientific knowledge different from the transmission of ancient technology?
-Did Hellenistic science play an inevitable role in the emergence of modern science? Could the development of science have followed a different pathway?
-Finally, what role did this scientific decay play in the fall of the Western Roman Empire? If the Romans, as successors of the Hellenistic states, lived in a scientifically impoverished society, was the path to the ‘Decline and Fall of the Empire’ unavoidable? Was the disappearance of the scientific method the mortal illness of the Roman Empire? In any case, we may assume that a society without real technological and scientific creativity has a very dark future.
All these questions open up a new domain for future research. This sort of inquiry makes the history of science and technology, along Russo's pathway, such a captivating subject.




[1] Edward Gibbon, The Decline and Fall of the Roman Empire (1776-88).
[2] Tainter, Joseph (1990), The Collapse of Complex Societies (1st paperback ed.), Cambridge University Press.
[3] Diamond, Jared (2005), Collapse: How Societies Choose to Fail or Succeed.
[4] Lucio Russo, an Italian physicist, mathematician, and historian of science, is a professor at the University of Rome Tor Vergata. He reconstructed some contributions of the Hellenistic astronomer Hipparchus ([1] "The Astronomy of Hipparchus and his Time: a Study Based on Pre-Ptolemaic Sources," Vistas in Astronomy, 1994, Vol. 38, pp. 207-248), reconstructed the proof of heliocentrism attributed by Plutarch to Seleucus of Seleucia ([2] The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn, Berlin, Springer, 2004, ISBN 978-3-540-20396-4), and later studied the history of theories of the tides, from the Hellenistic to the modern age ([3] Flussi e riflussi: indagine sull'origine di una teoria scientifica, Feltrinelli, 2003).
[5] Such a glimpse is developed at the end of this presentation.
[6] Russo, [2], p. 11.
[7] Seneca, Naturales Quaestiones, VII, xxix, 4, apud L. Sprague de Camp, The Ancient Engineers, The MIT Press Paperback Edition, March 1970.
[8] Russo, [2], p. 282.
[9] Russo, [2], p. 240.
[10] Russo, [2], p. 8.
[11] Derek de Solla Price, Science since Babylon, Enlarged Edition, New Haven and London, Yale University Press, Third printing, 1978, p. 44.
[12] Idem, pp. 56-57.
[13] Russo, [3], p. 13.
[14] O. Neugebauer, "Origin and Transmission of Hellenistic Science," Chapter VI, p. 165, in The Exact Sciences in Antiquity, Second Edition, Dover Publications.

  

Tuesday, April 17, 2012

Reviewing “What Would Google Do?: Reverse-Engineering the Fastest Growing Company in the History of the World” by Jeff Jarvis

This is a fantastic analysis...     
Its author, Jeff Jarvis, is a trained journalist who covered New Media and business stories; he is now a journalism professor at the CUNY Graduate School of Journalism and also works as a consultant and business speaker.
"What Would Google Do?" is a book about seeing the world through Google's glasses. The book is organized into two parts.
In the first part, Jarvis translates Google's way of doing business into a set of rules (about 30 in total). Some of the most important include:
- Give the people trust, and we will use it; don't, and you will lose it: The dominant players (companies, institutions, and governments) used to be in charge because of their control, but the world has changed. They can win trust back simply by being more transparent and by listening to their customers.
- Your customer is your advertising agency: Google spends almost nothing on advertising; people spread the word for it (the buzz effect). Let your customers do that for you.
- Join the open-source, gift economy: Your customers will help you if you ask them. People like to be generous (look at Wikipedia, for example).
- The masses are dead, long live the niches: Aggregation of the long tail replaces the mass market (Anderson's ideas about the long tail). For example, no single online video may hit the ratings of "Terminator," but together they capture a vast audience.
- Free is a business model: Google finds ways to make money by offering free services; charging money costs money.
- Make mistakes well: Making mistakes can be a good thing, but it depends on how you handle them. Corrections enhance credibility. You don't need to launch a perfect product; your customers can (and will) help you improve it. Google is always working with beta versions of its applications.

In the second part of the book, Jarvis applies these aggregated rules (the Google business model) to many different industries.
These industries will be forced to change: they can become winners by changing faster than the competition, or lose everything if they believe that their current business model will survive.
Here the rule from the first part, "the middlemen are doomed," becomes essential. Middlemen should disappear because we don't need them anymore (an Internet effect).
For example, Jarvis examines the way Google would run a newspaper. Keep in mind that Jarvis wrote these things well before large American newspapers fell into severe financial distress.
The same applies to real estate agents, who are now challenged in the US by Craigslist's advertisements.
From media to advertising, from retail to manufacturing, from the service industry to banking, health, schooling, and so on, Google's model would have a huge impact.

The final part of the book, "Generation G," is about Google's impact on anyone's personal life. Google will keep people connected: young people will stay linked, likely for the rest of their lives. Past mistakes will be visible forever, but having made mistakes is not a big issue, because everybody makes them.
This age of transparency will be an age of forgiveness ("Life is a beta"). Privacy is no longer an issue. The new generations are putting their lives online because sharing information is the basis of connections, and that sharing brings social benefits that outweigh the risks.

This book confirms some of my intuitions.
Google is a media company: enormous, excellent, and providing a highly demanded asset (information). Its business model is based on revenue streams from advertising.
This model can apply only to businesses like newspapers, magazines, professional sports teams, film producers, and TV stations. The "Google way" will affect them, and only a few will survive the coming "transformation." It will also apply to the middlemen, who will disappear (eventually surviving around another kind of service).
The model therefore applies to companies whose core business is processing information. It generally concerns tertiary-sector companies, and not even all of them. Businesses that need "atoms to be displaced or transformed," that is, real processing of matter (and not only of information), are not touched. Industrial or primary-sector businesses are not concerned either. In the future knowledge economy, these industries will still exist.
Equally, Google's advertising model has its limits. Can this advertising model be transposed everywhere in the information-processing field? Simply put, can it grow beyond the roughly 500 billion of today's world advertising market? The answer is clearly negative.
But despite these limits, Jarvis's predictions may materialize sooner than we think. Jarvis makes us aware that we are living in exciting times...

Sunday, October 16, 2011

About "La comparaison, technique essentielle du juge européen", my book published by L'Harmattan, Paris

-Why such a subject?
This research is the culmination of a long-standing interest in the interpretative methods of the judge.
-Why speak of a comparative technique and not of a comparative method (in the sense of an interpretative method)?
Because comparison is a multiform technique, containing an interpretative method but also a procedure for elaborating a source of law: the common principles.
-What is comparison as a technique?
Comparison as a judge's legal technique is the judge's voluntary use of comparison as a means of determining the legal rule applicable to the concrete case, by resorting to legal solutions belonging to (at least) one legal order other than the judge's own.
Recourse to foreign law that is imposed on the judge is therefore beside the point here: in that case the use of comparison is not voluntary. The application of foreign law as the result of a renvoi operated by a rule of law of a State is not comparison as a judge's technique. These situations may be qualified as ‘necessary comparison’, since the application is ordered by a rule of the judge's own legal system and the voluntary element (free will) in looking toward foreign law, which characterizes a legal technique (interpretative in the broad sense), is missing.
-Where can the relevant manifestations of comparison be found, and how can they be identified?
A first selection was necessary in order to isolate only the manifestations of comparison as a technique. Had we limited ourselves to the judgments in which recourse to comparison as a technique was visible, the harvest would have been too meager (a few dozen judgments over fifty years). It was therefore necessary to take into account all the comparative judgments and to add to them the comparative opinions of the Advocates General[1]. This yielded a body of approximately 200 documents (judgments with comparative reasoning and/or opinions of Advocates General with comparative reasoning).
In the subsequent analysis, the relationship between the comparative argumentation of the judge and that of the Advocates General was of primary importance. We thus also considered the legal solutions founded on the comparative opinions of the Advocates General and silently taken up by the judge.

-The structure of the research
A first chapter was devoted to methodology, with one section concerning the definition and orientation of the analysis and another section on the evaluation of the comparative documents.
The research was then organized into two large parts: a methodological analysis of comparison as a technique, followed by a functional analysis.
1. The methodological analysis of comparison as a technique
Comparison could have been treated as a method alongside the judge's other traditional methods, as some former judges at the Court have already done. That approach extends the vision of Zweigert (the famous German comparatist), who spoke of comparison as a universal method of interpretation. In domestic law this vision had been contested by authors who considered it better to try to enlarge the classical methodological canon so as to include comparison.
‘Mutatis mutandis’, this latter approach had to be attempted in Community law. The analysis had to be pushed even further, through a veritable deconstruction of the judge's legal methodologies.
A. The first title of this part
We adopted this approach and devoted a first title to the analysis and ‘translation’ of interpretative comparison into classical methodological concepts.
Starting from the theory of the reception of foreign law in domestic law and then in international law, and examining the dialogue between doctrine and the practice of the Community judge, we were able to establish the emergence of a ‘standard’ interpretative comparison: a multilateral comparative inquiry into the national meanings in order to find their ‘common core’.
Finally, we were able to make explicit and ‘translate’ into classical methodological concepts this ‘standard’ interpretative comparison as an objective grammatical interpretation of indeterminate concepts, occupying a place in the classical objective canon (alongside the systematic and teleological methods)[4].
But this analysis of interpretative comparison left a remainder: a body of judgments relating to the Brussels Convention that did not fit the paradigm just mentioned. At first glance the procedure seemed identical to the ‘standard’ interpretative comparison, but in reality its foundation and its articulation with the methods of the classical interpretative canon were different. This is why we called it ‘unifying’ interpretative comparison: its aim is to give texts of unified law (here the Brussels Convention) an interpretation in accordance with the greatest possible number of particular national conceptions.
B. The second title of this part
-Normative comparison
Another title of the research was devoted to comparison as a technique allowing the judge to develop the law through the principles common to the national legal systems.
We called this manifestation of the comparative technique ‘normative’ comparison.
These are the ‘principles recognized by civilized nations’, a veritable directive addressed to the international judge by Article 38 of the Statute of the PCIJ and later of the ICJ.
By confronting the significant comparative situations in Community law with the theory and practice of comparatively founded principles in international law, we were able to draw the ‘identikit portrait’ of this manifestation of ‘normative’ comparison.
-‘Diversity’ comparison
At this point in the analysis we found yet another body of comparative judgments (relating to the Brussels Convention and to preliminary rulings) that fit neither the paradigm of ‘standard’ or ‘unifying’ interpretative comparison nor that of ‘normative’ comparison.
These were situations in which the judge, on the borderline between Community law (or para-Community law, as in the case of the Brussels Convention) and national law, weighed the need to leave the national laws as they stand (in their specific diversity) against the interest in establishing a common (Community or para-Community) solution.
When the judge lets unity prevail over diversity, he employs a reasoning by effects, recognized as such by the methodological literature.
This comparison, which we have called ‘diversity’ comparison, is only a sub-argument, a sub-reasoning embedded in a broader reasoning (at a first level, a reasoning by effects). By associating the two sides of the reasoning, positive and negative, one understands the role played by the judge and, above all, the true mechanism at work.
As a sub-reasoning, ‘diversity’ comparison is embedded in a reasoning that belongs to the sphere of judicial development of the law. Thus ‘diversity’ comparison does not, by itself, develop the law. It is a subset of a reasoning (by weighing of interests) that expresses the development of the law by the judge. This is the main reason for placing ‘diversity’ comparison in the title devoted to comparison as a technique for developing the law.
2. The functional analysis of comparison as a technique
Having completed the methodological characterization of the comparative technique in the service of the Community judge, a second part had to be built showing what the judge accomplishes through this technique. The real angle of attack of this second part is based on the functions performed by comparison.
-In a first title, we analyzed comparison and its ‘administrative functions’.
-In a second title, we examined the role of comparison in the construction and ‘constitutionalization’ of the Community legal order.
On the one hand, this concerns the comparatively founded emergence of the structuring (quasi-federal) principles governing the relations between Community law and national laws. On the other hand, it concerns the advent, again founded comparatively, of a protection of the Community national, granting him genuine public subjective rights and, finally, fundamental rights and freedoms.
3. The final problem addressed, and the conclusion of the research, focus on the deep (essential) justification of recourse to the comparative technique, correlated with the nature of the judge and of the Community legal order.
We looked for a federal analogue of the comparative forms already mentioned, only to find that federal courts never resort to a similar instrument (comparison as a technique). In other words, there are no federal solutions built on the solutions of the federated states. Nor is there any kind of ‘diversity’ comparison in the relations between federal law and the laws of the federated states.
This contrast between the extensive comparative practice of the European judge and the nonexistent ‘comparative’ practice of a federal judge must be explained by the features that distinguish the Community system from a federal system.
The Community and the federal State differ as to the legal basis of their existence. The Community is founded on an international treaty, whereas the federal State is created by a constitution. From this follows the main argument invoked against assimilating the Community to a federal State: that of the ‘competence over competences’ (Kompetenz-Kompetenz). What matters above all is the dynamic aspect of competences. Unlike the central authority of a federal State, the Community does not hold the power to extend its competences autonomously, that is, without a revision of the founding treaty.
In short, even if the ‘apparent structure’ may sometimes seem federal, the sovereignty retained by the Member States shows that the deep structure of the Community is not of a federal type. Indeed, despite appearances, the States remain the masters of the Community.
Given the essential role of the Member States, the Community judge must pay attention to the balance between unity and diversity (a concern less important for a federal supreme court) in order to secure the (at least implicit) agreement of the Member States with the decisions he adopts. The Court is obliged to appeal to political considerations, and a genuine weighing of interests between the Community and the States is therefore always present (thus justifying the ‘diversity’ comparison).
The irreducible, international-type sovereignty of the States and their very powerful position ultimately justify recourse to ‘standard’ interpretative comparison and to ‘normative’ comparison.
These fundamental relations between the Community and the States, as managed by the Court, sketch the deep link between the nature (the justification) of comparison, the nature of the Community judge and, above all, the nature of the Community legal order. Thus comparison, beyond being a specific technique of the judge, is also a revealer of the deep nature of the Communities.





Wednesday, September 28, 2011

Cloud computing as new IT paradigm: examining Nicholas Carr’s ideas


The idea behind cloud computing is the sharing of computing resources across many activities. For example, an operator may build a massive data center with tens of thousands of servers and then sell computing power to interested customers. Customers send their tasks through the Internet and retrieve the results of the computations in the same way. At the end of the month, the operator bills the clients only for the computing power they have actually used. The same applies to data storage, giving access to companies that need capacities of several thousand terabytes.
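As a rough, back-of-the-envelope illustration of this ‘pay only for what you use’ logic (all figures below are invented for the example, not actual provider prices):

HOURS_PER_MONTH = 730

def owned_cost(servers, price_per_server, lifetime_months, monthly_ops):
    # Amortized monthly cost of buying and operating your own servers.
    return servers * (price_per_server / lifetime_months + monthly_ops)

def cloud_cost(avg_servers_used, hourly_rate):
    # Monthly bill when renting capacity by the hour, as it is used.
    return avg_servers_used * hourly_rate * HOURS_PER_MONTH

# A firm sized for its peak load (20 servers) but averaging only 4 servers in use:
print("own :", owned_cost(20, 3000, 36, 50))   # pays for idle capacity
print("rent:", cloud_cost(4, 0.10))            # pays only for what it uses

The owner pays for peak capacity that sits idle most of the time; the cloud customer pays only for average use, which is the economic core of the utility model.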

The general public became aware of the new technology through Nicholas Carr's bestselling book, “The Big Switch: Rewiring the World, from Edison to Google.” The ‘switch’ stands here both for the electric switch that came with the arrival of electric power and for a change of computing paradigm that replays, for the knowledge economy, the earlier shift to electricity in the industrial world. Hence, the author's main argument is grounded in an analogy between the early development of the electrical grid and today's applications rooted in the Internet and cloud computing.

The first part of the book sets the basis of the whole argument: an analysis of the development of electric power some 100 years ago and of its tremendous social outcome.
The first chapter, “Burden's Wheel,” lays out the earlier role of water power, the precursor to electricity, and explains what such technologies mean economically. Carr underlines the exceptional economic impact of general-purpose technologies, those that serve as the basis for many other commercial activities.
In the second chapter, “The Inventor and His Clerk,” Carr retraces the emergence, development, and adoption of electric power. He emphasizes that electric power had a false start with Edison's push for local direct-current generators. Edison saw himself as a provider of local (per-building) electrical generators and of the associated services (installation, maintenance, etc.).
The next chapter, “Digital Millwork,” examines the modern history of computing. Here we see the analogy between the old client-server model of corporate computing and Edison's local direct-current generators. Tesla's alternating-current generators eventually replaced Edison's, and the electrical grid they made possible finally displaced the local generators. Carr considers that the Internet will do to computing what Tesla's dynamo (integrated into a grid) did to electricity, transforming local electric plants into huge power companies.
In this chapter, Carr also complains that current IT costs are too high for what IT delivers. The answer is cloud computing, which will bring computing power on demand exactly as the electric grid delivers electric power when and where it is needed.
In chapter 4, “Goodbye, Mr. Gates,” Carr paints the future: a world of virtual computing in which physical location and device-bound software licensing no longer matter (hence undermining Microsoft's core business). The comments here are broadly in favor of Google and its business model of services delivered via the Internet.
In chapter 5, “The White City,” the author moves back to a historical discussion of how electricity changed people's lives and societies.

The second part of the book underlines the hopes and the dangers of such a paradigm shift.
Electricity drove many revolutions in the years that followed the end of the nineteenth century: transportation, the rise of the middle class, and the new mass culture. Carr considers that the switch from local to cloud computing will bring equally seismic shifts in business, technology, and learning, and that the change is already underway.
The chapter “World Wide Computer” underlines the enormous possibilities of a programmable Internet. It focuses on how astonishing a world it will be for individuals with near-unlimited information and computing power available to them.
Carr equally discusses the future of corporate computing. The basic idea is that today's corporate IT will disappear. Until now, anyone who needed computing power had to buy servers and install software on them. Cloud computing, on the contrary, makes the hardware and software transparent. Computing power becomes a commodity, paid for as it is used, like electric current. The equipment that generates it becomes invisible to the end user.
The equipment becomes abstract when the infrastructure consists of thousands of servers; more than that, the failure of one server does not cause any break in service.
That explains, according to experts, the real reason for Google's superiority over its competitors. Google does not necessarily have the best search algorithms, but its intelligent infrastructure software gives it a tremendous competitive advantage. Many smaller companies will gain access to tools once reserved for enormous undertakings. Productivity and creativity will therefore increase.
Cloud computing is a step closer to a post-industrial society, because a client will no longer buy servers from a manufacturer but will buy a service.
Manufacturers like Dell, HP, or IBM will therefore face clients that buy servers by the thousands and have enormous bargaining power. A price war will ensue, and computing power will become incredibly cheap and accessible.
Tens of thousands of IT maintenance jobs will be destroyed, but the software development market will expand. Equally, the range of applications that can use this unexpected power will explode. Software will move to the SaaS model (Software as a Service), in which the software is installed in the cloud and used through the Internet. This transition is already underway (via offerings such as SalesForce.com and many other software packages) but will accelerate.
The chapter “From the Many to the Few” discusses the social impact of a programmable Internet. Here Carr systematically unveils the negative consequences. Fewer and fewer people will be needed to run the businesses of a globally programmable Internet, and the utopia of equality and of new local web industries will never arrive. The author also underlines the erosion of the middle class.
The chapter “The Great Unbundling” describes the move from mass markets to markets of niches. It also debates the web's tendency to create a tribal and increasingly fragmented world rather than a unified one. The Internet reinforces the user's existing ideas because it allows him to find only others with the same views. The World Wide Web is increasingly becoming a zone of niches rather than the universal space that the Internet was assumed to bring about.
In the chapter “Fighting the Net,” Carr discusses the weaknesses and vulnerabilities of the free flow of information and of the net's structural integrity.
In “A Spider's Web,” he addresses the personal privacy issues associated with the web and the realization that "we live in a world without secrets." This chapter is equally a warning about what it means to do business where everything is recorded and tracked: "Computer systems are not, at their core, technologies of emancipation. They are technologies of control." He points out that even a decentralized cloud network can be programmed to monitor and control.
The last chapter, “iGod,” discusses a futuristic vision of the fusion between humans and machines. What will be possible when the human brain can immediately access unlimited information and our tools gain artificial intelligence (apparently, this is Google's program)?
These questions are raised but left unanswered. As a matter of fact, the book ends by saying that we will not know where IT is going until our children, the first generation to be “wired” from the beginning, become adults.

General evaluation of the book
The main argument is that desktop software will be displaced by Web 2.0 (the peer-to-peer, participative Internet) and cloud computing. When electricity began its development, many businesses used local sources of power, such as a waterwheel or a windmill on their own property. As the electrical grid developed, companies were able to get power delivered from somewhere else; they didn't know or really care where it came from, as long as it arrived.
In the computer industry, the same transition is going on. Instead of running programs on their own PCs, more and more businesses are using Web 2.0 technology to host their mission-critical software somewhere else, and they don't really know where, or even care. The advantages of such a transition would be at least economic, and the author cites one source as saying that Google's computing infrastructure (probably the largest user of cloud computing through its distributed data centers) can do a given task at one-tenth of the cost.
While some aspects are never mentioned (for example, copyright issues), the author does not fall into a triumphalist or utopian vision. Some of his conclusions underline the shortcomings of the new paradigm: a further decline in print publishing, a reduction of the middle class, and the continuing erosion of privacy.
But can his core analogy with the electric grid be maintained? The electrical power system is a highly regulated market (from a strategic perspective, this is quite normal). Can the knowledge economy and the new paradigm of cloud computing be envisioned in the same way? This is a fascinating point that the author never addresses.
How should we understand the current trend toward decentralized electric power generation (for example, solar panels powering electrically autonomous houses)? Such tendencies may affect the pertinence of a similar paradigm shift toward cloud computing.
The issue of privacy can also undermine the whole intellectual edifice. Inevitably, the computing power plants of the future will hold their clients' data. And these privacy intrusions might concern not just lay individuals (who may eventually come to terms with them) but also the economic data, business plans, trade secrets, secret inventions, or know-how of various undertakings.
Is it possible to hand all this highly sensitive information to a giant, almost unique player like Google? Can we trade independence for efficiency? The answer seems to be negative. But these issues are likely to spark a debate about the challenges and the possible technical or legal remedies. The future will decide the balance between efficiency on one side and privacy, independence, and freedom from monopolistic control on the other.
However, one can reasonably agree with the author that ‘cloud computing’ and the pervasive network will be essential in shaping the future landscape of our knowledge economy and society.