Author:
(1) Salvatore Iaconesi, ISIA Design Florence and corresponding author (salvatore.iaconesi@artisopensource.net).
Table of Links
4. Interface and Data Biopolitics
5. Conclusions: Implications for Design and References
3. Bubbles, Guinea Pigs
Evidence of this occurrence is the emergence of knowledge and information "bubbles".
In the age of Hyperconnectivity (Wellman, 2001), information abundance quickly turns into information overload (O’Reilly, 1980). Relevance therefore becomes an invaluable competitive advantage, and attention a precious currency (Davenport, Beck, 2013).
This is why large operators (from social media services to search engines, news and media operators, and services that extract information from devices and appliances) use specific algorithms to interpret users’ behaviors and infer which content might be most relevant to them, filtering out the rest (or giving it lower priority, visually or hierarchically) while, at the same time, ensuring that the content which generates more revenue for them is granted a higher share of our attention space, to maximize earnings.
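To make this mechanism concrete, the following is a minimal, hypothetical sketch of such a ranking step in Python. Real systems use far more complex, proprietary signals; the item names, weights, and threshold here are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    relevance: float   # predicted from the user's past behavior, in [0, 1]
    revenue: float     # expected earnings for the operator, in [0, 1]

def rank_feed(items, revenue_weight=0.4, visibility_threshold=0.3):
    """Order a feed by a blend of predicted relevance and expected revenue.

    Items scoring below the threshold are not deleted, only pushed out of
    the user's effective attention space -- the "bubble" effect.
    """
    def score(item):
        return (1 - revenue_weight) * item.relevance + revenue_weight * item.revenue

    ranked = sorted(items, key=score, reverse=True)
    visible = [i for i in ranked if score(i) >= visibility_threshold]
    demoted = [i for i in ranked if score(i) < visibility_threshold]
    return visible, demoted

# Example: the low-relevance, low-revenue item falls out of view.
feed = [
    Item("ad-friendly post", relevance=0.6, revenue=0.9),
    Item("friend's photo", relevance=0.8, revenue=0.2),
    Item("dissenting opinion", relevance=0.3, revenue=0.1),
]
visible, demoted = rank_feed(feed)
print([i.id for i in visible])  # ['ad-friendly post', "friend's photo"]
print([i.id for i in demoted])  # ['dissenting opinion']
```

Note that nothing in this sketch censors content outright: the "dissenting opinion" still exists, it simply never reaches the user's attention, which is precisely what makes the bubble hard to perceive from the inside.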
These algorithms and software agents thus also tend to exclude everything else, enclosing us in "bubbles" in which what lies outside is not perceived at all, or only with great difficulty (Pariser, 2011).
Information spectacularization (for example through data and information visualization) further compounds these processes. Bratton (2008) describes how spectacularized information visualizations (also called “data smog”) “distance people—now ‘audiences’ for data—even further from their abilities and responsibilities to understand relationships between the multiple ecologies in which they live, and the possibilities for action that they have.”
These elements – bubbles, the algorithmic governance of information, and information spectacularization – thus raise the possibility that individuals progressively inhabit a controlled infosphere, in which a limited number of subjects is able to determine what is accessible, usable and, most important of all, knowable.
This power asymmetry also implies that users can be systematically and unknowingly exposed to experiments intended to influence their sphere of perception and drive them to adopt certain behaviors over others.
This is exactly what happened with Facebook in 2014 (Rushe, 2014; Booth, 2014). In an experiment (Kramer et al., 2014), Facebook manipulated the information appearing on 689,000 users’ homepages to study the phenomenon of “emotional contagion”, answering the question: how do users’ emotional expressions change when they are exposed to content which is emotionally characterized in specific ways? By algorithmically filtering in or out content with specific characteristics, the researchers were able to induce particular expressions. The study (Kramer et al., 2014) concluded: “Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks.”
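The filtering logic behind such an experiment can be sketched in a few lines. This is a deliberately simplified illustration: the actual study classified posts with the LIWC word-count software, whereas the tiny lexicon, function names, and probability below are hypothetical.

```python
import random

# Hypothetical, simplified sentiment lexicon; the actual study used LIWC.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def classify(post):
    """Assign a coarse emotional polarity to a post's text."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filter_feed(posts, condition, omit_probability=0.5):
    """Probabilistically omit posts of one emotional polarity.

    condition: 'reduce_positive' or 'reduce_negative' -- the two
    treatment arms of an emotional-contagion style experiment.
    """
    target = "positive" if condition == "reduce_positive" else "negative"
    return [p for p in posts
            if classify(p) != target or random.random() > omit_probability]

# A user in the 'reduce_positive' arm sees, on average, half as many
# positive posts -- without any visible indication that filtering occurred.
feed = ["I love this!", "What an awful day", "Meeting at 5pm"]
print(filter_feed(feed, condition="reduce_positive"))
```

The crucial point, from a biopolitical perspective, is that the treated user has no way to distinguish this manipulated feed from an unfiltered one.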
This was not an isolated case: dozens of other experiments (Hill, 2014) involving hundreds of thousands of unknowing users included A/B tests, content filtering for specific purposes, comment and interaction analysis for predictions, the spreading of rumors and manufactured information, self-censorship, social influence in advertising, and more.
In 2014, Jonathan Zittrain described an experiment in which Facebook attempted a civic-engineering feat, asking: “Could a social network get otherwise-indolent people to cast a ballot in that day’s congressional midterm elections?” (Zittrain, 2014). The answer was positive. The 2016 elections further demonstrated ways in which massive, algorithmically controlled social media interactions can influence the outcome of major events.
In her article describing the effects of computational agency during the Ferguson protests, Zeynep Tufekci observed:
“Computation is increasingly being used to either directly make, or fundamentally assist, in gatekeeping decisions outside of online platforms. [...] Computational agency is expanding into more and more spheres. Complex, opaque and proprietary algorithms are increasingly being deployed in many areas of life, often to make decisions that are subjective in nature, and hence with no anchors or correct answers to check with. Lack of external anchors in the form of agreed-upon 'right' answers makes their deployment especially fraught. They are armed with our data, and can even divine private information we have not disclosed. They are interactive, act with agency in the world, and are often answerable only to the major corporations that own them. As the internet of things and connected, 'smart' devices become more widespread, the data available to them, and their opportunities to act in the world will only increase. And as more and more corporations deploy them in many processes from healthcare to hiring, their relevance and legal, political and policy importance will also rise.” (Tufekci, 2015)
This paper is available on arXiv under a CC BY 4.0 DEED license.