
Thursday, June 21, 2012

Social transparency (or hyper-transparency): why privacy is not the (complete) answer…

We are not far from the ‘transparent society’ described by David Brin in his excellent book of the same name.
He warned of the dangers to freedom when surveillance technologies are used by a few people rather than by many. Privacy will be lost in the ‘transparent society’ of tomorrow.
Hence it would be better for society as a whole if surveillance were equal for all, with the public having the same access to information as those in power (everybody watching everybody, regardless of social or political status). With today's ubiquitous camera surveillance, we are very close to fulfilling Brin's prophecy.
But new developments within the cyber world create a more insidious 'transparency' (a sort of ‘hyper-transparency’).
In advertising, new technical instruments (recommendation engines) try to connect potential consumers with the products they are interested in buying. One has to match each buyer with the particular items he would be interested in. It is therefore necessary to understand, ‘personalize’, or ‘profile’ individuals' buying behavior.
ICT profiling tools are the best solution to this problem. This ‘automatic profiling’ is different from the forensic activity seen in popular culture. It represents much more, since it is done on a large scale, everywhere and all the time. It is also less, since this profiling focuses (for now) only on aspects of ‘homo oeconomicus’: the individual is seen as a consumer with his tastes, habits, etc.
The most interesting variety is the ‘recommendation engine’ based on ‘collaborative filtering’ - making automatic predictions (filtering) about users’ interests by collecting taste information from many other users (collaborating).
In everyday life, an individual asks his friends for advice about how to choose newspapers, records, books, movies, or other items. He can figure out which of his friends have tastes similar to his own and which do not.
‘Collaborative filtering’ automates this process, based on the idea that whenever two similar people like the same item, there are probably several other items both would also find interesting. These systems try to discover connections between people's interests automatically, relying on data mining, pattern recognition, and other sophisticated techniques.
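To make the idea concrete, here is a minimal, hypothetical sketch of user-based collaborative filtering (not the code of any real recommendation engine): it measures how similar two users' ratings are and then suggests items liked by the most similar users. All names and ratings are invented for illustration.

```python
from math import sqrt

# Toy ratings: user -> {item: rating}. Purely illustrative data.
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "movie_x": 4},
    "bob":   {"book_a": 4, "book_b": 2, "movie_x": 5, "movie_y": 3},
    "carol": {"book_b": 5, "movie_y": 4},
}

def cosine_similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = sqrt(sum(u[i] ** 2 for i in common))
    norm_v = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(user, k=2):
    """Suggest items the user has not rated, weighted by similar users' ratings."""
    others = [(cosine_similarity(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    others.sort(reverse=True)
    scores = {}
    for sim, other in others[:k]:
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # e.g. ['movie_y']
```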
There are some distinct advantages to these techniques, since ‘profiling’ ensures the adaptation of our technological environment to the user through a sort of intimate ‘personal experience.’ But there are equally some trade-offs, since profiling technologies make possible far-reaching monitoring of an individual's behavior and preferences.
Individuals need some sort of protection.
-The first step was to adopt rules regarding privacy and data protection. According to these rules, data can be processed freely as long as they are not personal data (either from the origin or after 'anonymization'). Therefore, many profiling systems are built on the assumption that processing personal data after anonymization falls outside the scope of data protection legislation.
However, with new knowledge-inference techniques, the frontier between anonymous data and identifying data tends to blur and evolve. Data can be considered anonymous at a given time and in a given context. But later on, because new, seemingly unrelated data have been released, generated, or forwarded to a third party, “re-identification” may become possible.
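A minimal sketch of how such re-identification can happen, with invented data: an 'anonymized' dataset that keeps quasi-identifiers (ZIP code, birth year, gender) is linked with a seemingly unrelated public list that contains names and the same attributes.

```python
# "Anonymized" profiles: names removed, but quasi-identifiers kept.
anonymized = [
    {"zip": "10001", "birth_year": 1980, "gender": "F", "purchases": ["diet book", "running shoes"]},
    {"zip": "94105", "birth_year": 1975, "gender": "M", "purchases": ["crime novel"]},
]

# A seemingly unrelated public list (e.g. a leaked membership roster).
public_list = [
    {"name": "Jane Doe", "zip": "10001", "birth_year": 1980, "gender": "F"},
    {"name": "John Roe", "zip": "94105", "birth_year": 1975, "gender": "M"},
]

def reidentify(anon_rows, known_rows, keys=("zip", "birth_year", "gender")):
    """Link records whose quasi-identifiers match exactly."""
    matches = []
    for anon in anon_rows:
        candidates = [k for k in known_rows
                      if all(k[key] == anon[key] for key in keys)]
        if len(candidates) == 1:  # a unique match means re-identification
            matches.append((candidates[0]["name"], anon["purchases"]))
    return matches

print(reidentify(anonymized, public_list))
# [('Jane Doe', ['diet book', 'running shoes']), ('John Roe', ['crime novel'])]
```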

-And much more seems to be at stake. There is an information imbalance among the parties involved: the firms “know” the consumers better than the consumers know themselves. This is what one might call ‘hyper-transparency.’
The profiling process is mostly unknown to the individual, who might never imagine the logic behind the decisions taken about him. It therefore becomes hard, if not impossible, to contest the application of a particular group profile.
The information could also be used to discriminate among users, relying on a variety of factors and strategies.
-Finally, such knowledge could potentially allow the targeting firms to use their insights into the individuals’ preferences and previous actions to unfairly manipulate them (with a subsequent loss of personhood and autonomy). 
Should we search for new, possibly legal, remedies? Or should we think about a new social paradigm where 'social transparency' or 'hyper-transparency' through profiling becomes general and accessible to everybody?
Interesting questions…

Tuesday, May 22, 2012

Cyberwar? ... attaining one hundred victories in one hundred battles is not the pinnacle of excellence. Subjugating the enemy’s army without fighting is the true pinnacle of excellence. ~ Sun Tzu, The Art of War

War is a violent continuation of interstate politics (according to von Clausewitz). Cyber war should enter the same paradigm.

From a legal perspective, international doctrine has examined whether a cyber attack could qualify as a use of force (under Article 2(4) of the UN Charter) or as an armed attack (under Article 51 of the UN Charter). The criterion for bringing a cyber attack under these notions was its degree of physical destructiveness. A cyber attack causing only a small degree of physical destruction would instead fall under non-military international concepts such as economic coercion, reprisals, international responsibility, etc.

This analysis seems pertinent. It tries to see cyber warfare (a form of interstate cyber attack) as an analogical extension of classical warfare. As such, cyber warfare would be just another step within an unchanged framework. But the real evolution of cyber attacks by state actors shows the limits of this vision. No state is willing to escalate a cyber attack and produce the huge destruction that may trigger the classical forms of war or armed conflict. States prefer to act unnoticed while pursuing their political aims with the new tool.

We can envision an even more challenging perspective. The question of destructiveness seems to be the core issue. The main concept would then be ‘information (virtual) destructiveness’, which may relate to physical destructiveness in the same way that ‘intellectual property rights’ relate to ordinary ‘property rights’. Without dead and wounded, without casualties, this concept of ‘virtualized’ (invisible but no less severe) destruction might be an essential aspect of a cyber warfare that undermines the base of a knowledge economy, knowledge society, or knowledge state. Finally, we get a hint of cyber attacks as cyber warfare (or cyber war) in itself, as a new genus, and not as part of the classical armed conflict paradigm.

Tuesday, April 17, 2012

Reviewing “What Would Google Do?: Reverse-Engineering the Fastest Growing Company in the History of the World” by Jeff Jarvis

This is a fantastic analysis...     
Its author, Jeff Jarvis, is a trained journalist who covered New Media stories in business, now teaches at the CUNY Graduate School of Journalism, and also works as a consultant and business speaker.
"What Would Google Do?" is a book about seeing the world through Google's glasses. The book is organized into two parts.
In the first part, Jarvis translates Google's way of doing business into a set of rules (about 30 in total). Some of the most critical include:
- Give the people trust, and we will use it. Don't, and you will lose it. The dominant (companies, institutions, and governments) used to be in charge because of their control, but the world has changed. They can get it back by simply being more transparent and listening to their customers.
- Your customer is your advertising agency: Google spends almost nothing on advertising; people spread the word for them (the buzz effect). Let your customers do that for you.
- Join the Open Source, Gift Economy: Your customers will help you if you ask them. People like to be generous (look at Wikipedia, for example).
- The masses are dead, long live the niches: Aggregation of the long tail replaces the mass market (Chris Anderson's long-tail idea). For example, no online video may hit the ratings of "Terminator," but together they will capture a vast audience.
- Free is a business model: Google will find ways to make money by offering free services. Charging money costs money.
- Make mistakes well: It can be a good thing to make mistakes, but it depends on how you handle them. Corrections enhance credibility. You don't need to launch the perfect product. Your customers can (and will) help you to improve it. Google is always working with beta versions of its applications.

In the second part of the book, Jarvis applies these aggregated rules (the Google business model) to many different industries.
These industries will be forced to change; they can become winners by changing faster than the competition, or lose everything if they believe that their current business model will survive.
Here, one rule from the first part becomes essential: "the middlemen are doomed." The middlemen should disappear because we don't need them anymore (an Internet effect).
For example, Jarvis examines the way Google would run a newspaper. Keep in mind that Jarvis wrote these things well before large American newspapers fell into severe financial distress.
The same applies to real estate agents, who are now challenged in the US by Craigslist's advertisements.
From media to advertising, from retail to manufacturing, from the service industry to banking, health, and school, and so on, Google's model would have a huge impact.

The final part of the book, Generation G, is about Google's impact on people's personal lives. Google will keep people connected: young people will stay linked, likely for the rest of their lives. Past mistakes will be visible forever, but if you made mistakes, it is not a big issue, because everybody makes them.
This age of transparency will be an age of forgiveness ("Life is a beta"). Privacy is no longer an issue. The new generations are putting their lives online because sharing information is the basis of connections. And the sharing brings social benefits that outweigh the risks.

This book confirms some of my intuitions.
Google is a media company: enormous, excellent, and providing a highly demanded asset (information). Its business model is based on revenue streams from advertising.
This model can apply only to companies like newspapers, magazines, professional sports teams, film producers, and TV stations. The `Google way' will apply to them, and only a few will survive the next `transformation'. It will also apply to the middlemen, who will disappear (possibly surviving by reinventing themselves around another kind of service).
Therefore the model applies to companies whose core business is processing information. It generally concerns tertiary-sector companies, and not even all of them. Businesses that need "atoms to be displaced or transformed," that is, real processing of matter (and not only information), are not touched. The industrial and primary sectors are not concerned either. In the future knowledge economy, these industries will still exist.
Equally, Google's advertising model has its limits. Can this advertising model be transposed everywhere in the information-processing field? Simply put, can it surpass 500 billion (roughly the size of today's world advertising market)? The answer is clearly negative.
But within these limits, Jarvis's predictions will materialize sooner than we think. Jarvis makes us aware that we are living in exciting times...

Tuesday, September 27, 2011

Cloud computing as new IT paradigm: examining Nicholas Carr’s ideas


The idea behind cloud computing is the sharing of computing resources across many activities. For example, an operator may build a massive data center with tens of thousands of servers and then sell computing power to interested customers. Customers send their tasks through the Internet and retrieve the results of the calculations the same way. At the end of the month, the operator charges each client only for the computing power actually used. The same applies to data storage, giving access to companies that need capacities of several thousand terabytes.
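As a rough illustration of this pay-per-use idea (the prices and figures below are invented, not any real operator's rates), the operator meters consumption and bills only what was actually used:

```python
# Hypothetical per-unit prices (not any real provider's rates).
PRICE_PER_CPU_HOUR = 0.10   # currency units per CPU-hour
PRICE_PER_GB_MONTH = 0.05   # currency units per GB stored per month

def monthly_bill(cpu_hours_used, gb_stored):
    """Charge only for the computing power and storage actually consumed."""
    return cpu_hours_used * PRICE_PER_CPU_HOUR + gb_stored * PRICE_PER_GB_MONTH

# A client that ran 1,200 CPU-hours of tasks and kept 500 GB in storage.
print(f"Monthly bill: {monthly_bill(1200, 500):.2f}")  # Monthly bill: 145.00
```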

The general public became aware of the new technology through Nicholas Carr's bestselling book, “The Big Switch: Rewiring the World from Edison to Google.” The 'switch' here stands both for the electric switch that came with the arrival of electric power and for a change of computing paradigm that does for the knowledge economy what the shift to electricity did for the industrial world. Hence, the author's main argument is grounded in an analogy between the early development of the electrical grid and today's applications rooted in the Internet and cloud computing.

The first part of the book sets the basis of the whole argument: an analysis of the development of electric power some 100 years ago and its tremendous social impact.
In the first chapter, “Burden's Wheel,” Carr lays out the past role of water power, the precursor to electricity, and explains what these technologies meant economically. Carr underlines the exceptional economic impact of general-purpose technologies, those that serve as the basis for many other commercial activities.
In the second chapter, “The Inventor and His Clerk,” Carr reconstructs the emergence, development, and adoption of electric power. He emphasizes that electric power had a false start with Edison's insistence on local direct-current generators. Edison saw himself as a provider of local (per-building) electrical generators and associated services (installation, maintenance, etc.).
The next chapter, “Digital Millwork,” examines the modern history of computing. Here we see the analogy between old client-server computing and Edison's old direct-current generators. However, Tesla's alternating-current generators replaced Edison's, and the electrical grid made possible by alternating current finally displaced the local direct-current generators. Carr considers that the Internet will do to computing what Tesla's dynamo (integrated into a grid) did to electricity: transform local electric plants into huge electric power companies.
In this chapter, Carr also complains that current IT costs are too high for what IT delivers. The answer is cloud computing, which will bring computing power on demand, exactly as the electric grid delivers electric power when and where it is needed.
In chapter 4, “Goodbye, Mr. Gates,” Carr paints the future: a world of virtual computing where physical location and device-based software licensing no longer matter (hence undermining Microsoft's core business). The comments here clearly favor Google and its business model of services delivered via the Internet.
In chapter 5: “The White City,” the author moves back to a historical discussion of how electricity changed people's lives and societies.

The second part of the book underlines the hopes and the dangers of such a paradigm shift.
Electricity drove many revolutions in the years that followed the end of the nineteenth century, in transportation, the development of the middle class, and the new mass culture. Carr considers that the switch from local to cloud computing will also bring seismic shifts in business, technology, and learning, and that the change is already underway.
The chapter “World Wide Computer” underlines the vast possibilities of a programmable Internet. It focuses on how astonishing a world it will be for individuals with near-infinite information and computing power available to them.
Carr equally discusses the future of corporate computing. The basic idea is that today's IT will disappear. Until now, if someone needed computing power, he had to buy servers and install software on them. By contrast, cloud computing makes the hardware and software transparent. Computing power becomes a commodity, used like the electric current we pay for. The equipment that generates it becomes invisible to the end user.
The equipment becomes abstract when the infrastructure consists of thousands of servers. More than that, the failure of one server does not cause any break in service.
That explains, according to experts, the real reason for Google's superiority over its competitors. Google does not necessarily have the best search algorithms, but its intelligent infrastructure software gives it a tremendous competitive advantage. Many smaller companies will have access to tools once reserved for enormous undertakings. Productivity and creativity will, therefore, increase.
Cloud computing is a step closer to post-industrial society because a client will not buy servers anymore from a manufacturer but will buy a service. 
Manufacturers like Dell, HP, or IBM will, therefore, face clients that buy servers by the thousands and have enormous bargaining power. A price war will then ensue, and computing power will become incredibly cheap and accessible.
Tens of thousands of IT maintenance jobs will be destroyed. But the software development market will expand. Equally, the market for application software that can use this unprecedented power will explode. Software will move to the SaaS model (Software as a Service), where applications are installed in the cloud and used through the Internet. This transition is already underway (via software such as Salesforce.com and many other packages) but will accelerate.
The chapter “From Many to the Few” discusses the social impacts of a programmable Internet. Here Carr systematically unveils the negative consequences. Fewer and fewer people will need to work in a global world of the programmable Internet, and the promised utopia of equality and of new local web industries will never arrive. The author also underlines the erosion of the middle class.
The chapter “The Great Unbundling” describes the move from mass markets to markets of niches. It also debates the web's tendency to create a tribal and increasingly fragmented world rather than a unified one. The Internet reinforces users' existing ideas because it allows them to find only others with the same views. The World Wide Web is increasingly becoming a zone of niches rather than the universal space the Internet was assumed to bring about.
In the chapter “Fighting the Net,” Carr discusses the weaknesses and vulnerabilities of the free flow of information and of the net's structural integrity.
In “A Spider's Web,” he addresses the personal privacy issues associated with the web and the realization that "we live in a world without secrets." This chapter is equally a warning about what it means to do business where everything is recorded and tracked: "Computer systems are not, at their core, technologies of emancipation. They are technologies of control." He points out that even a decentralized cloud network can be programmed to monitor and control.
The last chapter, “iGod,” discusses a futuristic vision of the fusion between humans and machines. What will be possible when the human brain can immediately access infinite information and its tools gain artificial intelligence (apparently, this is Google's program)?
These are the questions raised but unanswered in the chapter. As a matter of fact, the book ends by saying that we will not know where IT is going until our children, the first generation to be “wired” from the beginning, become adults.

General evaluation of the book
The main argument is that desktop software will be displaced by Web 2.0 (the peer-to-peer, participative Internet) and cloud computing. When electricity began its development, many businesses used local sources of power, such as a waterwheel or a windmill on their own property. As the electrical grid developed, companies were able to get power delivered from somewhere else. They didn't know or really care where it came from, as long as it came in.
In the computer industry, the same transition is going on. Instead of running programs on their own PCs, more and more businesses are using Web 2.0 technology to host their mission-critical software somewhere else, and they don't really know where or even care. The advantages of such a transition are at least economic, and the author cites one source as saying that Google's computing (probably the largest user of cloud computing through its distributed data centers) can do a task at one-tenth of the cost.
While some aspects are never mentioned (for example, copyright issues), the author does not fall into a triumphalist or utopian vision. Some of his conclusions underline the shortcomings of the new paradigm: a further decline in print publishing, a reduction of the middle class, and the continuing erosion of privacy.
But can his core analogy with the electric grid be maintained? The electrical power system is a highly regulated market system (from a strategic perspective, this is quite normal). Can the knowledge economy and the new paradigm of cloud computing be envisioned in the same way? This is a fascinating point which the author never addresses. 
How should we understand the current trend toward decentralized electric power generation (for example, solar panels powering electrically autonomous houses)? These new tendencies may affect the pertinence of the analogous paradigm shift linked to cloud computing.
The issue of privacy can also undermine the whole intellectual edifice. Inevitably, the large computing 'power plants' of the future will control their clients' data. And these privacy intrusions might concern not just lay individuals (who may eventually come to terms with them) but also the economic data, business plans, commercial secrets, inventions, or know-how of various undertakings.
Is it possible to hand all this highly sensitive information to a giant and almost unique player like Google? Can we trade independence for efficiency? The answer seems to be negative. But these issues are likely to incite a debate about the challenges and the possible technical or legal remedies. The future will decide the balance between efficiency on one side and privacy, independence, and freedom from monopolistic control on the other.
However, one can reasonably agree with the author that ‘cloud computing’ and the pervasive network will be essential in shaping the future landscape of our knowledge economy and society.

Future of wars, wars of the future...

Predicting the future of war is a challenging but fascinating endeavor. 
The pop culture we are now submerged in has familiarized us with images of a gritty mix of weapons, robots, and hi-tech knights in a sort of mega computer game. Will the war of tomorrow be a purely technological competition in advanced weaponry? My belief is that this colorful image is only part of the answer.
To foresee the future of war as in a “crystal ball,” we must question the past. In the history of mankind, wars were competitions between societies for resources, wealth, prestige, and domination. The winners took it all, and their way of fighting became the model, the key to success, for other societies.

History shows the importance of geographical (or geopolitical) factors in war.
But equally important was 'brainpower': the creativity or inventiveness of leading groups or whole societies at the strategic, tactical, or technological level.
Take, for example, the Greco-Persian wars, where a league of city-states with divergent interests but common values fought and defeated the biggest empire of the time. Some of the reasons are still to be discovered, but the Greeks certainly valued open-mindedness, technology, and the art of war ('strategy' and 'tactics' are Ancient Greek concepts). From that moment on, war became a confrontation between the ‘intelligence’ of different cultures (along the path of Sun Tzu’s ‘Art of War’).
Particular case studies may hint at this ‘social intelligence’ (or ‘social creativity’) with significant military implications. 

For example, if one compares Germany and Great Britain during the Second World War, one can identify higher British strategic creativity (including espionage superiority, like the Enigma code-breaking, propaganda superiority, and ‘soft power’ superiority), while Germany had, at least for a while, superior tactical creativity (the invention of the blitzkrieg through learning and extrapolation from the First World War). Technological creativity was apparently equal between the two states.
We can also identify another critical factor: creative management. For example, the naval convoy organization in the Battle of the Atlantic resulted from operational research, which was far better in England[2] than in Germany. The same type of case study applied to the US versus Germany in the Second World War also identifies superior creative management in the US.
The US equally had superior technological creativity (the atomic bomb and the computer were real breakthroughs, much more than any of Hitler's ‘wonder weapons’). At the strategic level, the USA was likewise superior to Germany (soft power, strategic bombing, etc.).

To foresee the future of war, it is crucial to understand the ‘character’ of the coming society. The social paradigm is already shifting from matter and energy to information. The production of goods and energy will always play a role, but the primary asset will be information retrieval, processing, and, most of all, creation.
I firmly believe that we are in the middle of a rapid expansion that will soon (by 2015) establish a first ‘knowledge (or creative) society’ within the US. Future applications (based on the semantic web, data mining, grid and cloud computing, virtualization, simulations, social networks, etc.) will act as creative ‘accelerators’ with a direct impact on the military factors identified above.
As a result, several scenarios seem plausible.

In the first scenario, ‘social (enhanced) creativity’ will change war's technological ‘environment’. Some classical weaponry will apparently survive by becoming ‘smarter’ and communicating with one another or with military personnel.
Real-time interconnectivity will allow optimization by reducing casualties and destruction. Equally important will be the development of new artifacts: war robots. They will assist and enhance the abilities of human warriors[3]. Well adapted to asymmetric wars (against pre-industrial or industrial societies), they will pursue the same trend of warfare optimization.

A more extreme scenario is related to the emergence of ‘ICT weapons’, controlling the enemy's information flow or information systems (industrial or post-industrial societies). 
The Israeli attack on some (assumed) nuclear Syrian facilities in Operation Orchard seems to be a step in that direction[4]. Apparently, the Israelis may have used a technology similar to America’s Suter airborne network attack system to allow their planes to pass undetected by Syrian radars. Such a system can identify what the enemy’s radars see, process the radar signals in real time, and feed back (using high-energy antennas) the desired signals (in this case, erasing any signature of the incoming air attack).
Massive real-time processing power and microwave antennas might thus neutralize, behind the scenes and without destruction, an essential piece of the enemy's arsenal[5]. Here we can see another model of a future confrontation between a ‘knowledge (network-oriented) society’ and an industrial one.
This kind of ‘ICT armament’ might be the ultimate precision weapon. There will be almost no casualties; no need to destroy cities, they will only be 'shut down'; no need for the enemy’s armies to be taken out, they will only be neutralized, and in each case not for very long. Without casualties and destruction, there is no need for post-war reconstruction or nation (re)building. War, as we know it, dematerializes.

The last scenario, related to the former, concerns future conflicts between equally developed ‘knowledge societies’[6]. The main asset is the emergence of “tools” allowing one to guess and eventually control the enemy’s ‘thinking’.
In the Second World War (around 1943), US political leaders asked prominent psychologists from Harvard University to draw up Hitler's ‘profile‘. They concluded that he would fight to the very end and finally commit suicide. Therefore, when dealing with a dictatorship, profiling the leader may give insight, a sort of ‘profile' of the whole society (because he is the only decider).
For a democratic country, as any ‘knowledge society’ of today or tomorrow, the situation is different. But the ‘information revolution’ underway might provide an answer.
Google can ‘profile’ all of us, as users, and aggregate our behavior at a social level. Some time ago, for example, Google announced, based on a statistical analysis of search terms, a flu epidemic days before the American health authorities did.
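As a rough, hypothetical illustration of that kind of aggregate analysis (not Google's actual method), one could flag an epidemic signal when the weekly volume of flu-related queries rises well above its recent baseline; all numbers below are invented:

```python
from statistics import mean

# Invented weekly counts of flu-related search queries in one region.
weekly_queries = [1020, 980, 1050, 1010, 990, 1040, 1800, 2600]

def detect_spike(counts, window=5, threshold=1.5):
    """Flag weeks whose query volume exceeds the recent average by a factor."""
    alerts = []
    for week in range(window, len(counts)):
        baseline = mean(counts[week - window:week])
        if counts[week] > threshold * baseline:
            alerts.append(week)
    return alerts

print(detect_spike(weekly_queries))  # [6, 7] -> weeks with an unusual surge
```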
In the future, these kinds of instruments might be applied in a military context. Knowledge societies will become ‘transparent’ to themselves and to potential enemies. No surprises, no hidden attacks will be possible, since the new technologies will allow the ‘profiling’ of an army staff, an entire army, or a whole society. Therefore, each side may compute its chances, with a deterrent effect on both adversaries[7].
In this situation, future warfare will become a mega high-tech covert operation, resembling today's espionage or propaganda operations.

Let’s hope that future war will change its present nature and render obsolete the suffering, death, and immense destruction we have experienced since the beginning of history.




[1] Occam's razor may be used to ‘cut off’ specific scenarios by understanding the causes of success or failure in past predictions.
[2] One can distinguish four closely connected aspects of ‘social creativity’ with a direct impact on war: strategic creativity (real understanding of one's own strengths, deep 'soft power', innovative policy decision-making, profiling and understanding of the enemy, etc.), creative management (efficient, flexible, and creative organizations and logistics), tactical creativity (intelligent development of tactics), and technological creativity (the invention of new weapons). To ensure victory, all these factors (or at least most of them) had to be stronger than those of the enemy. All of them will play a role in shaping future wars.
[3] It will remain impossible for some years to come (at least 20) to build real android robots. Comparing the whole Internet with a human brain (the brain seems far more complex), we understand the difficulty of building a robot able to act, even at a basic psychological level, like a human.
[4] The Israelis carried out this airstrike on a target in the Deir ez-Zor region of Syria on 6 September 2007.
[5] Possibly, this was a high-tech deception against a state-of-the-art Russian air defense system: two radars, reckoned to be Tor-M1 launchers carrying a payload of eight missiles, as well as two Pechora-2A systems. The same model may be applied in the future against the strategic command infrastructures of an enemy.
[6] Maybe future relations between the USA and China will approach such a situation.
[7] Similar to satellite surveillance of the ’Cold War’ era.