Wednesday, April 27. 2016
Note: we blogged last week about automation and, funnily, the Jacquard process was mentioned as one of the early stages of automation and computing during an exhibition in Vienna. The "Métiers Jacquard" were an inspiration to Charles Babbage when he started to design his Analytical Engine, one of the early mechanical autonomous and programmable computers (in the sense of a calculator). We should also not forget that, in reality, "computers" were once real persons doing calculations -- often women (in particular during the World Wars), who then became the first operators of automatic computers (see the ENIAC girls) -- until the middle of the 20th century.
So to speak, digital computers have already replaced "human computers", automating and by far quickening their activities... The first purpose of the computer as we know it was automation. It is part of its DNA.
Now, as a wink to this history but also as a possible "return of the repressed", Google literally enters the textile business and brings computing (back) to fabrics! It is obviously not by chance that they picked the name "Jacquard".
More about it on MIT Technology Review.
Thursday, April 21. 2016
Note: the idea of automation is very present again these days. And it is more and more tied to the related idea of a society without work, or without sufficient work for everyone --which is already the case in the liberal way of thinking, by the way--, as most of it would be taken over by autonomous machines, AIs, etc.
Many people are warning about this (Bill Gates among them, talking precisely about "software substitution"), some think about a "universal income" as a possible response, some say we shouldn't accept this and should use our consumer power to reject such products (we spoke passionately about it with my good old friend Eric Sadin last week during a meal at the Palais de Tokyo in Paris, while drinking --almost automatically as well-- some good wine), some say it is almost too late and we should plan and have visions for what is coming upon us...
Now comes an exhibition on the same subject at Kunsthalle Wien that tries to articulate these questions: "Technical devices that were originally designed to serve and assist us and are now getting smarter and harder to control and comprehend. Does their growing autonomy mean that the machines will one day overpower us? Or will they remain our subservient little helpers, our gateway to greater knowledge and sovereignty?"
Installation view The Promise of Total Automation. Image Kunsthalle Wien
The word ‘automation’ is appearing in places that would have seemed unlikely to most people less than a decade ago: journalism, art, design or law. Robots and algorithms are becoming increasingly convincing at doing things just like humans do. And sometimes even better.
The Promise of Total Automation, an exhibition recently opened at Kunsthalle Wien in Vienna, looks at our troubled relationship with machines. Technical devices that were originally designed to serve and assist us are now getting smarter and harder to control and comprehend. Does their growing autonomy mean that the machines will one day overpower us? Or will they remain our subservient little helpers, our gateway to greater knowledge and sovereignty?
The Promise of Total Automation is an intelligent, inquisitive and engrossing exhibition. Its investigation into the tensions and dilemmas of the human/machine relationship explores themes that range from artificial intelligence to industrial aesthetics, from bio-politics to conspiracy theories, from e-waste to resistance to innovation, from the archaeology of digital communication to utopias that won’t die.
The show is dense in information and invitations to ponder, so don’t forget to pick up one of the free information booklets at the entrance of the show. You’re going to need it!
A not-so-quick walk around the show:
James Benning, Stemple Pass, 2012
James Benning‘s film Stemple Pass is made of four static shots, each from the same angle and each 30 minutes long, showing a cabin in the middle of a forest in spring, fall, winter and summer. The modest building is a replica of the hideout of anti-technology terrorist Ted Kaczynski. The soundtrack alternates between the ambient sound of the forest and Benning reading from the Unabomber’s journals, encrypted documents and manifesto.
Kaczynski’s texts hover between his love for nature and his intention to destroy and murder. Between his daily life in the woods and his fears that technology is going to turn into an instrument that enables the powerful elite to take control over society. What is shocking is not so much the violence of his words because you expect them. It’s when he gets it right that you get upset. When he expresses his distrust of the merciless rise of technology, his doubts regarding the promises of innovation and it somehow makes sense to you.
Konrad Klapheck, Der Chef, 1965. Photo: © Museum Kunstpalast – ARTOTHEK
Konrad Klapheck’s paintings ‘portray’ devices that were becoming mainstream in 1960s households: vacuum cleaners, typewriters, sewing machines, telephones, etc. In his works, the objects are abstracted from any context, glorified and personified. In the typewriter series, he even assigns roles to the objects. They are Herrscher (ruler), Diktator, Gesetzgeber (lawgiver) or Chef (boss). These titles allude to the important role that the instruments have taken in administrative and economic systems.
Tyler Coburn, Sabots, 2016, courtesy of the artist, photo: David Avazzadeh
This unassuming small pair of 3D-printed clogs alludes to the workers’ struggles of the Industrial Revolution. The title of the piece, Sabots, means clogs in French. The word sabotage allegedly derives from it. The story goes that when French farmers left the countryside to come and work in factories, they kept on wearing their peasant clogs. These shoes were not suited to factory work and, as a consequence, the word ‘saboter’ came to mean ‘to work clumsily or incompetently’ or ‘to make a mess of things.’ Another apocryphal story says that disgruntled workers blamed the clogs when they damaged or tampered with machinery. Yet another version has the workers throwing their clogs into the machines to destroy them.
In the early 20th century, labor unions such as the Industrial Workers of the World (IWW) advocated withdrawal of efficiency as a means of self-defense against unfair working conditions. They called it sabotage.
Tyler Coburn contributed another work to the show. Waste Management looks like a pair of natural stones, but the rocks are actually made out of electronic waste, more precisely the glass from old computer monitors and fiber powder from printed circuit boards, which were mixed with epoxy and then molded in an electronic recycling factory in Taiwan. The country is not only a leader in the export of electronics, but also in the development of e-waste processing technologies that turn electronic trash into architectural bricks, gold potassium cyanide, precious metals—and even artworks such as these rocks. Coburn bought them there as a readymade. They evoke the Chinese scholar’s rocks: by the early Song dynasty (960–1279), the Chinese had started collecting small ornamental rocks, especially those sculpted naturally by processes of erosion.
Accompanying these objects is a printed broadsheet which narrates the circulation and transformation of a CRT monitor into the stone artworks. The story follows the model of the “it-narrative” or novel of circulation, a sub-genre of 18th-century literature in which currencies and commodities narrated their circulation within a then-emerging global economy.
Osborne & Felsenstein, Personal Computer Osborne 1a and Monitor NEC, 1981, Loan Vienna Technical Museum, photo: David Avazzadeh
Adam Osborne and Lee Felsenstein, Personal Computer Osborne 1a, 1981, Courtesy Technisches Museum, Wien
Several artifacts ground the exhibition in the technological and cultural history of automation: a mechanical Jacquard loom, often regarded as a key step in the history of computing hardware because of the way it used punched cards to control operations; a mysterious-looking arithmometer, the first digital mechanical calculator reliable enough to be used in an office to automate mathematical calculations; a Morse code telegraph, the first invention to effectively exploit electromagnetism for long-distance communication and thus a pioneer of digital communication; a cybernetic model from 1956 (see further below); and the first ‘portable’ computer.
Released in 1981 by Osborne Computer Corporation, the Osborne 1 was the first commercially successful portable microcomputer. It weighed 10.7 kg (23.5 lb), cost $1,795 USD, had a tiny screen (5-inch/13 cm) and no battery.
At the peak of demand, Osborne was shipping over 10,000 units a month. However, Osborne Computer Corporation shot itself in the foot when it prematurely announced the release of its next-generation models. The news stalled sales of the current unit, contributing to throwing the company into bankruptcy. This has come to be known as the Osborne effect.
Kybernetisches Modell Eier: Die Maus im Labyrinth (Cybernetics Model Eier: The Mouse in the Maze), 1956. Image Kunsthalle Wien
In the 1950s, scientists started to build cybernetic machines in order to study artificial intelligence. One of these machines was a maze-solving mouse built by Claude E. Shannon to study the labyrinthine path that a call made through telephone switching systems should take to reach its destination. The device contained a maze that could be rearranged to create various paths. The system followed the idea of Ariadne’s thread, the mouse marking each field with path information, like the Greek mythological figure did when she helped Theseus find his way out of the Minotaur’s labyrinth. Richard Eier later rebuilt the maze-solving mouse and improved Shannon’s method by replacing the thread with two-bit memory units.
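The marking idea described above can be sketched in a few lines of code. This is a loose illustration, not Shannon's or Eier's actual relay logic: each visited cell stores only a two-bit direction code (four directions, so two bits suffice), and revisiting a cell rotates that code, so the mouse eventually finds any reachable goal by trial and error.

```python
# N, E, S, W as row/column offsets; their indices 0..3 are the 2-bit codes
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def solve(maze, start, goal):
    """maze: list of strings, '#' = wall, '.' = open cell."""
    memory = {}            # cell -> 2-bit direction code (the only state kept)
    pos, path = start, [start]
    while pos != goal:
        # Try directions starting just after the stored code, so each
        # revisit rotates the choice and the walk cannot loop forever.
        first = (memory.get(pos, 3) + 1) % 4
        for k in range(4):
            d = (first + k) % 4
            r, c = pos[0] + DIRS[d][0], pos[1] + DIRS[d][1]
            if 0 <= r < len(maze) and 0 <= c < len(maze[0]) and maze[r][c] != '#':
                memory[pos] = d    # mark the field: two bits per cell
                pos = (r, c)
                path.append(pos)
                break
    return path
```

This rotation rule makes the walk a so-called rotor-router walk, which is guaranteed to cover every reachable cell of a finite maze; the `memory` dictionary plays the role of Eier's per-field two-bit store.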
Régis Mayot, JEANNE & CIE, 2015. Image Kunsthalle Wien
In 2011, the CIAV (the international center for studio glass in Meisenthal, France) invited Régis Mayot to work in its studios. The designer decided to explore the moulds themselves, rather than the objects that were produced with them. Through a process of sand moulding, he revealed the mechanical beauty of some of these historical tools, producing imprints of a selection of moulds, which craftsmen then blew in glass.
Jeanne et Cie (named after one of the moulds chosen by the designer) highlights how the aesthetics of objects are the result of the industrial instruments and processes that enter into their manufacturing.
Bureau d’études, ME, 2013, © Léonore Bonaccini and Xavier Fourt
Bureau d’Etudes, Electromagnetic Propaganda, 2010
The exhibition also presented a selection of Bureau d’Études‘ intricate and compelling cartographies that visualize covert connections between actors and interests in contemporary political, social and economic systems. Because knowledge is power, the maps are meant as instruments that can be used as part of social movements. The ones displayed at Kunsthalle Wien included the maps Electro-Magnetic Propaganda, Government of the Agro-Industrial System and the 8th Sphere.
I fell in love with Mark Leckey‘s video. So much so that I’ll have to dedicate another post to his work. One day.
David Jourdan’s poster alludes to an ad in which newspaper Der Standard announced that its digital format was ‘almost as good as paper.’
More images from the exhibition:
Magali Reus, Leaves, 2015
Thomas Bayrle, Kleiner koreanischer Wiper
Juan Downey, Nostalgic Item, 1967, Estate of Juan Downey courtesy of Marilys B. Downey, photo: David Avazzadeh
Judith Fegerl, still, 2013, © Judith Fegerl, Courtesy Galerie Hubert Winter, Wien
Installation view The Promise of Total Automation. Image Kunsthalle Wien
Installation view. Image Kunsthalle Wien
Installation view. Image Kunsthalle Wien
More images on my flickr album.
Also in the exhibition: Prototype II (after US patent no 6545444 B2) or the quest for free energy.
The Promise of Total Automation was curated by Anne Faucheret. The exhibition is open until 29 May at Kunsthalle Wien in Vienna. Don’t miss it if you’re in the area.
Wednesday, April 06. 2016
If not "algorithmic communism" then "algorithmic liberalism"? > At Uber, the Algorithm Is More Controlling Than the Real Boss | #algorithms #politics #economy
Note: to continue fueling this resource and reflection about what kind of social, political and economic rules are implemented within the algorithms that will then become the foundation layer of the so-called "world without work"... (and to see how far current political parties are from addressing these stakes), here comes an interesting study exemplifying what algorithmic rules can be(come) and how they implement a "way of thinking", in this case at Uber. By extension, think of course of Amazon's automated or crowdsourced services, Airbnb, etc.
Obviously, these algorithms are already being written right "under our noses" (think of algorithmic trading, smart cities, smart "things" & "stuff", autonomous cars and drones, etc.), largely under the radar, but certainly not on an "algorithmic communism" basis. Not that we know of...
Keith Bedford for The Wall Street Journal
In defending his company against assertions that Uber drivers should be classified as employees, Uber CEO Travis Kalanick often wields the algorithm. Uber isn’t a boss, he argues. It’s a software platform that balances supply and demand to connect entrepreneurs with customers.
A new academic paper pokes holes in that argument.
Researchers Alex Rosenblat and Luke Stark at the Data and Society Research Institute and New York University point out that Uber uses software to exert similar control over workers that a human manager would. The company’s algorithm uses performance metrics, scheduling prompts, behavioral suggestions, dynamic prices, and information asymmetry “as a substitute for direct managerial power and control,” they wrote.
Uber did not immediately respond to a request for comment.
The researchers, who conducted in-depth interviews with Uber drivers and studied posts in drivers-only online forums, situate Uber and similar sharing-economy platforms in a wider conversation about the trend toward managing employees through so-called on-demand or predictive scheduling software. Starbucks, for instance, hasn’t replaced traditional managers, but it’s among a growing group of companies that increasingly rely on software to manage worker schedules and behavior.
Bottom line: robots aren’t stealing your job – at least in this instance – but they’re becoming your boss. And the level of control and surveillance they exert is often far greater than human management would exert, the authors found.
Rather than undertaking a human-driven performance review process, Uber evaluates employees according to an automated rating system. Riders enter scores into the Uber app to rate drivers with one to five stars. Back-end software tallies the scores and sends drivers regular summaries of their performance and how they stack up to their peers.
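The mechanics described here are simple to picture. As a purely hypothetical sketch (Uber's actual metrics and back-end are not public, and the function name and fields below are invented for illustration), a driver summary could be little more than an average plus a peer comparison:

```python
from statistics import mean

def driver_summary(ratings, peer_averages):
    """ratings: this driver's 1-5 star scores from riders.
    peer_averages: the average rating of each other driver."""
    avg = mean(ratings)
    # Share of peers this driver outperforms -- the hypothetical
    # "how you stack up" figure sent in the periodic summary.
    percentile = 100 * sum(1 for p in peer_averages if p < avg) / len(peer_averages)
    return {"average": round(avg, 2), "percentile": round(percentile)}
```

The point the researchers make survives even in this toy form: the rider-entered scores, not a manager, produce the performance review.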
The system, the researchers wrote, empowers Uber customers to serve as “middle managers,” essentially outsourcing management. It lets Uber “achieve an organization where the workforce behaves relatively homogeneously” without needing a manager to bark orders.
Uber’s software also exerts control over when and where drivers work, the researchers noted. The company never orders workers to drive, but its software does prod them. It alerts them when the software predicts that surge pricing is due to kick in, boosting the fare by up to four times and increasing the driver’s fee.
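The article does not disclose Uber's pricing formula, so the following is only a hypothetical sketch of how a capped surge multiplier tied to a demand/supply ratio would behave, matching the "up to four times" boost mentioned above; the function and its parameters are assumptions, not Uber's API:

```python
def apply_surge(base_fare, demand, supply, cap=4.0):
    """Hypothetical surge rule: the multiplier grows with the
    demand/supply ratio, floored at 1x and capped (here at 4x)."""
    multiplier = min(cap, max(1.0, demand / max(supply, 1)))
    return round(base_fare * multiplier, 2)
```

Such a rule also illustrates the drivers' complaint in the next paragraph: once enough drivers converge on a predicted surge zone, the supply term rises and the multiplier collapses back toward 1x.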
But drivers reported that it was difficult to tell the difference between the company’s predictions and an actual surge. They often showed up at a surge location to find the area saturated with drivers and the company no longer offering to reward them with higher payments.
Thus Uber’s software is not passive but manipulates the supply of labor and shapes the marketplace as a whole, the authors argued.
Drivers told the researchers they resisted Uber by failing to reply to company emails inquiring about their whereabouts and by posting on message boards to advise other drivers to “resist the surge.” They said they didn’t want Uber to know where they planned to be for fear the company would trick them into driving elsewhere without delivering the benefit of higher fees. Essentially, the authors said, Uber drivers resist the algorithm-boss by trying to trick him – perhaps not unlike the decisions traditional employees make about what information to share with a human boss.
Despite Uber’s depiction of drivers as entrepreneurs who control their own labor, an environment in which Uber has all the information makes it harder for drivers to make decisions that are in their interest, the authors said. Uber drivers are discouraged from turning down a fare – in some cities, drivers are prodded to pick up 90% of passengers who request a pickup – and they aren’t given fare information in advance. Drivers complained that this asymmetry resulted in sometimes losing money, since some rides are too short to be worthwhile, and they have no way to know how much they could expect to earn.
Of course, many of the practices that benefit Uber and annoy its drivers also benefit customers. And like Starbucks employees and other workers whose lives are made unpredictable by such predictive scheduling software, Uber drivers are free to quit. In that sense, they are the self-determined entrepreneurs that Uber describes. But in other ways, they clearly aren’t.
Monday, April 04. 2016
On Algorithmic Communism - Ian Lowrie on Inventing the Future : Postcapitalism and a World Without Work | #algorithms #future #postcapitalism
Note: at a time when we will soon have, for the first time, a national vote in Switzerland about the Revenu de Base Inconditionnel ("Universal Basic Income") --next June, with a low chance of success this time, let's face it--, when people start to argue that they should be paid for fueling global corporations with digital data and content of all sorts, and when some new technologies could modify the current digital deal, this is a manifesto that is certainly more than interesting to consider. So is its criticism in this paper, as the two appear truly complementary.
More generally, thinking about the Future in terms other than liberalism is an absolute necessity. Especially in a context where, as also stated, "Automation and unemployment are the future, regardless of any human intervention".
By Ian Lowrie
January 8th, 2016
IN THE NEXT FEW DECADES, your job is likely to be automated out of existence. If things keep going at this pace, it will be great news for capitalism. You’ll join the floating global surplus population, used as a threat and cudgel against those “lucky” enough to still be working in one of the few increasingly low-paying roles requiring human input. Existing racial and geographical disparities in standards of living will intensify as high-skill, high-wage, low-control jobs become more rarified and centralized, while the global financial class shrinks and consolidates its power. National borders will continue to be used to control the flow of populations and place migrant workers outside of the law. The environment will continue to be the object of vicious extraction and the dumping ground for the negative externalities of capitalist modes of production.
It doesn’t have to be this way, though. While neoliberal capitalism has been remarkably successful at laying claim to the future, it used to belong to the left — to the party of utopia. Nick Srnicek and Alex Williams’s Inventing the Future argues that the contemporary left must revive its historically central mission of imaginative engagement with futurity. It must refuse the all-too-easy trap of dismissing visions of technological and social progress as neoliberal fantasies. It must seize the contemporary moment of increasing technological sophistication to demand a post-scarcity future where people are no longer obliged to be workers; where production and distribution are democratically delegated to a largely automated infrastructure; where people are free to fish in the afternoon and criticize after dinner. It must combine a utopian imagination with the patient organizational work necessary to wrest the future from the clutches of hegemonic neoliberalism.
Strategies and Tactics
In making such claims, Srnicek and Williams are definitely preaching to the leftist choir, rather than trying to convert the masses. However, this choir is not just the audience for, but also the object of, their most vituperative criticism. Indeed, they spend a great deal of the book arguing that the contemporary left has abandoned strategy, universalism, abstraction, and the hard work of building workable, global alternatives to capitalism. Somewhat condescendingly, they group together the highly variegated field of contemporary leftist tactics and organizational forms under the rubric of “folk politics,” which they argue is characterized by a commitment to local, horizontal, and immediate actions. The essentially affective, gestural, and experimental politics of movements such as Occupy, for them, are a retreat from the tradition of serious militant politics, to something like “politics-as-drug-experience.”
Whatever their problems with the psychodynamics of such actions, Srnicek and Williams argue convincingly that localism and small-scale, prefigurative politics are simply inadequate to challenging the ideological dominance of neoliberalism — they are out of step with the actualities of the global capitalist system. While they admire the contemporary left’s commitment to self-interrogation, and its micropolitical dedication to the “complete removal of all forms of oppression,” Srnicek and Williams are ultimately neo-Marxists, committed to the view that “[t]he reality of complex, globalised capitalism is that small interventions consisting of relatively non-scalable actions are highly unlikely to ever be able to reorganise our socioeconomic system.” The antidote to this slow localism, however, is decidedly not fast revolution.
Instead, Inventing the Future insists that the left must learn from the strategies that ushered in the currently ascendant neoliberal hegemony. Inventing the Future doesn’t spend a great deal of time luxuriating in pathos, preferring to learn from their enemies’ successes rather than lament their excesses. Indeed, the most empirically interesting chunk of their book is its careful chronicle of the gradual, stepwise movement of neoliberalism from the “fringe theory” of a small group of radicals to the dominant ideological consensus of contemporary capitalism. They trace the roots of the “neoliberal thought collective” to a diverse range of trends in pre–World War II economic thought, which came together in the establishment of a broad publishing and advocacy network in the 1950s, with the explicit strategic aim of winning the hearts and minds of economists, politicians, and journalists. Ultimately, this strategy paid off in the bloodless neoliberal revolutions during the international crises of Keynesianism that emerged in the 1980s.
What made these putsches successful was not just the neoliberal thought collective’s ability to represent political centrism, rational universalism, and scientific abstraction, but also its commitment to organizational hierarchy, internal secrecy, strategic planning, and the establishment of an infrastructure for ideological diffusion. Indeed, the former is in large part an effect of the latter: by the 1980s, neoliberals had already spent decades engaged in the “long-term redefinition of the possible,” ensuring that the institutional and ideological architecture of neoliberalism was already well in place when the economic crises opened the space for swift, expedient action.
Srnicek and Williams argue that the left must abandon its naïve-Marxist hopes that, somehow, crisis itself will provide the space for direct action to seize the hegemonic position. Instead, it must learn to play the long game as well. It must concentrate on building institutional frameworks and strategic vision, cultivating its own populist universalism to oppose the elite universalism of neoliberal capital. It must also abandon, in so doing, its fear of organizational closure, hierarchy, and rationality, learning instead to embrace them as critical tactical components of universal politics.
There’s nothing particularly new about Srnicek and Williams’s analysis here, however new the problems they identify with the collapse of the left into particularism and localism may be. For the most part, in their vituperations, they are acting as rather straightforward, if somewhat vernacular, followers of the Italian politician and Marxist theorist Antonio Gramsci. As was Gramsci’s, their political vision is one of slow, organizationally sophisticated, passive revolution against the ideological, political, and economic hegemony of capitalism. The gradual war against neoliberalism they envision involves critique and direct action, but will ultimately be won by the establishment of a post-work counterhegemony.
In putting forward their vision of this organization, they strive to articulate demands that would allow for the integration of a wide range of leftist orientations under one populist framework. Most explicitly, they call for the automation of production and the provision of a basic universal income that would provide each person the opportunity to decide how they want to spend their free time: in short, they are calling for the end of work, and for the ideological architecture that supports it. This demand is both utopian and practical; they more or less convincingly argue that a populist, anti-work, pro-automation platform might allow feminist, antiracist, anticapitalist, environmental, anarchist, and postcolonial struggles to become organized together and reinforce one another. Their demands are universal, but designed to reflect a rational universalism that “integrates difference rather than erasing it.” The universal struggle for the future is a struggle for and around “an empty placeholder that is impossible to fill definitively” or finally: the beginning, not the end, of a conversation.
In demanding full automation of production and a universal basic income, Srnicek and Williams are not being millenarian, not calling for a complete rupture with the present, for a complete dismantling and reconfiguration of contemporary political economy. On the contrary, they argue that “it is imperative […] that [the left’s] vision of a new future be grounded upon actually existing tendencies.” Automation and unemployment are the future, regardless of any human intervention; the momentum may be too great to stop the train, but they argue that we can change tracks, can change the meaning of a future without work. In demanding something like fully automated luxury communism, Srnicek and Williams are ultimately asserting the rights of humanity as a whole to share in the spoils of capitalism.
Inventing the Future emerged to a relatively high level of fanfare from leftist social media. Given the publicity, it is unsurprising that other more “engagé” readers have already advanced trenchant and substantive critiques of the future imagined by Srnicek and Williams. More than a few of these critics have pointed out that, despite their repeated insistence that their post-work future is an ecologically sound one, Srnicek and Williams evince roughly zero self-reflection with respect either to the imbrication of microelectronics with brutally extractive regimes of production, or to their own decidedly antiquated, doctrinaire Marxist understanding of humanity’s relationship towards the nonhuman world. Similarly, the question of what the future might mean in the Anthropocene goes largely unexamined.
More damningly, however, others have pointed out that despite the acknowledged counterintuitiveness of their insistence that we must reclaim European universalism against the proliferation of leftist particularisms, their discussions of postcolonial struggle and critique are incredibly shallow. They are keen to insist that their universalism will embrace rather than flatten difference, that it will be somehow less brutal and oppressive than other forms of European universalism, but do little of the hard argumentative work necessary to support these claims. While we see the start of an answer in their assertion that the rejection of universal access to discourses of science, progress, and rationality might actually function to cement certain subject-positions’ particularity, this — unfortunately — remains only an assertion. At best, they are being uncharitable to potential allies in refusing to take their arguments seriously; at worst, they are unreflexively replicating the form if not the content of patriarchal, racist, and neocolonial capitalist rationality.
For my part, while I find their aggressive and unapologetic presentation of their universalism somewhat off-putting, their project is rather harder to criticize than their book — especially as someone acutely aware of the need for more serious forms of organized thinking about the future if we’re trying to push beyond the horizons offered by the neoliberal consensus.
However, as an anthropologist of the computer and data sciences, it’s hard for me to ignore a curious and rather serious lacuna in their thinking about automaticity, algorithms, and computation. Beyond the automation of work itself, they are keen to argue that with contemporary advances in machine intelligence, the time has come to revisit the planned economy. However, in so doing, they curiously seem to ignore how this form of planning threatens to hive off economic activity from political intervention. Instead of fearing a repeat of the privations that poor planning produced in earlier decades, the left should be more concerned with the forms of control and dispossession successful planning produced. The past decade has seen a wealth of social-theoretical research into contemporary forms of algorithmic rationality and control, which has rather convincingly demonstrated the inescapable partiality of such systems and their tendency to be employed as decidedly undemocratic forms of technocratic management.
Srnicek and Williams, however, seem more or less unaware of, or perhaps uninterested in, such research. At the very least, they are extremely overoptimistic about the democratization and diffusion of expertise that would be required for informed mass control over an economy planned by machine intelligence. I agree with their assertion that “any future left must be as technically fluent as it is politically fluent.” However, their definition of technical fluency is exceptionally narrow, confined to an understanding of the affordances and internal dynamics of technical systems rather than a comprehensive analysis of their ramifications within other social structures and processes. I do not mean to suggest that the democratic application of machine learning and complex systems management is somehow a priori impossible, but rather that Srnicek and Williams do not even seem to see how such systems might pose a challenge to human control over the means of production.
In a very real sense, though, my criticisms should be viewed as a part of the very project proposed in the book. Inventing the Future is unapologetically a manifesto, and a much-overdue clarion call to a seriously disorganized metropolitan left to get its shit together, to start thinking — and arguing — seriously about what is to be done. Manifestos, like demands, need to be pointed enough to inspire, while being vague enough to promote dialogue, argument, dissent, and ultimately action. It’s a hard tightrope to walk, and Srnicek and Williams are not always successful. However, Inventing the Future points towards an altogether more coherent and mature project than does their #ACCELERATE MANIFESTO. It is hard to deny the persuasiveness with which the book puts forward the positive contents of a new and vigorous populism; in demanding full automation and universal basic income from the world system, they also demand the return of utopian thinking and serious organization from the left.
Wednesday, March 23. 2016
The Birth And Death Of Privacy: 3,000 Years of History Told Through 46 Images | #privacy #transparency #history
Note: to put things in perspective, especially in the private-public data debate, it is interesting to start digging into the history of privacy, or how, where and when it possibly came from... As well as how, where and when it may possibly vanish...
The following article was found on Medium, written by journalist Greg Ferenstein. It has some flaws (or rather some quick shortcuts due to its format) and seems driven by the assumption that the natural state of human beings is "transparency/no privacy", but it also doesn't pretend to be a definitive scientific account of privacy. It is instead a point of view and a relatively brief introduction by a writer, considering the large period addressed (a starting point to dig deeper, then). It is not a detailed paper by a historian specialized in this topic.
The article should therefore be taken with a bit of critical distance. In particular, to my reader's point of view, it misses arguments regarding the fact that "privacy" is obviously not mainly "physical privacy" (walls, curtains, etc.), or hasn't been for a long time. It certainly started as physical privacy -- as the author demonstrates well -- but at a critical point, this gained privacy, this evolution from a state of "no privacy", helped guarantee a certain freedom of thinking that became highly related to the foundation of our "democratic" political system, and to the "Enlightenment" period as well.
And this is the main element regarding this question, according to me. Losing one could also clearly mean losing the other... (if it's not already lost... a subject that could be debated as well, not to speculate further on whether a different system could emerge from this or not, maybe even an "egalitarian" one).
Also, to state as a conclusion (last 7 lines) that our "natural state" is to be "transparent" (a state of no privacy) and that the current move toward "transparency" is just a way of going "naturally" back to what it always was is a bit intellectually lazy: the present "transparency", pushed mainly by big corporations and also by States for security reasons --as stated, "law enforcement" of many sorts-- is not the old "passive" transparency but a highly "dynamic", computed, processed, and often merchandised one.
It has nothing to do with the old "all the family lives naked with their nurturing animals in the same room" sort of argument then... It is a different system, not democratic anymore but a mix of ultra-liberalism and monitored surveillance. Not a funny thing at all...
So, all in all, the arguments in the article remain very interesting and related to many contemporary issues, and there are several useful resources in there as well. But you should definitely keep your brain "switched on" while reading it!
Jean-Leon Gerome, The Large Pool Of Bursa
2-Minute Summary ...
- Privacy, as we understand it, is only about 150 years old.
- Humans do have an instinctual desire for privacy. However, for 3,000 years, cultures have nearly always prioritized convenience and wealth over privacy.
- Section II will show how cutting edge health technology will force people to choose between an early, costly death and a world without any semblance of privacy. Given historical trends, the most likely outcome is that we will forgo privacy and return to our traditional, transparent existence.
*This post is part of an online book about Silicon Valley’s Political endgame. See all available chapters here.
How privacy was invented slowly over 3,000 years
“Privacy may actually be an anomaly” ~ Vinton Cerf, Co-creator of the military’s early Internet prototype and Google executive.
Cerf suffered a torrent of criticism in the media for suggesting that privacy is unnatural. Though he was simply opining at what he believed was an under-the-radar gathering at the Federal Trade Commission in 2013, historically speaking, Cerf is right.
Privacy, as it is conventionally understood, is only about 150 years old. Most humans living throughout history had little concept of privacy in their tiny communities. Sex, breastfeeding, and bathing were shamelessly performed in front of friends and family.
The lesson from 3,000 years of history is that privacy has almost always been a back-burner priority. Humans have invariably chosen money, prestige or convenience whenever these conflicted with a desire for solitude.
Tribal Life (~200,000 B.C. to 6,000 B.C.)
Flickr user Rod Waddington
"Because hunter-gatherer children sleep with their parents, either in the same bed or in the same hut, there is no privacy. Children see their parents having sex. In the Trobriand Islands, Malinowski was told that parents took no special precautions to prevent their children from watching them having sex: they just scolded the child and told it to cover its head with a mat" - UCLA Anthropologist, Jared Diamond
While extremely rare in tribal societies, privacy may, in fact, be instinctive. Evidence from tribal societies suggests that humans prefer to make love in solitude (In 9 of 12 societies where homes have separate bedrooms for parents, people prefer to have sex indoors. In those cultures without homes with separate rooms, sex is more often preferred outdoors).
However, in practice, the need for survival often eclipses the desire for privacy. For instance, among the modern North American Utku, a desire for solitude can seem profoundly rude:
Inuit family. Source: Wikipedia Ansgar Walk
"It dawned on me how forlorn I would be in the wildness if they forsook me. Far, far better to suffer loss of privacy” - Anthropologist Jean Briggs, on being ostracized by her host Utku family, after daring to explore the wilderness alone for a day.
The big question: if privacy isn’t the norm, where did it start? Let’s start from the first cities:
Ancient Cities (6th Century B.C. — 4th Century AD)
Image: Roman citizens engaged in conversation in a public restroom. Credit: A Day In The Life Of Ancient Rome
Like their tribal ancestors, the Greeks displayed some preference for privacy. And, unlike their primitive ancestors, the Greeks had the means to do something about it. The University of Leicester's Samantha Burke found that the Greeks used their sophisticated understanding of geometry to create housing with the mathematically minimum exposure to public view while maximizing available light.
Sightline analysis of maximum viewable space from the street. Burke (2000)
“For where men conceal their ways from one another in darkness rather than light, there no man will ever rightly gain either his due honour or office or the justice that is befitting” - Socrates
Athenian philosophy proved far more popular than Athenian architecture. In Greece's far less egalitarian successor, Rome, the landed gentry built their homes with wide open gardens. Turning one's house into a public museum was an ostentatious display of wealth. Though, the rich seemed self-aware of their unfortunate trade-off:
“Great fortune has this characteristic, that it allows nothing to be concealed, nothing hidden; it opens up the homes of princes, and not only that but their bedrooms and intimate retreats, and it opens up and exposes to talk all the arcane secrets” ~ Pliny the Elder, ‘The Natural History’, circa 77 A.D
The majority of Romans lived in crowded apartments, with walls thin enough to hear every noise. “Think of Ancient Rome as a giant campground,” writes Alberto Angela in A Day in the Life of Ancient Rome.
And, thanks to Rome's embrace of public sex, there was less motivation to make it taboo—especially considering the benefits.
Sex art, Pompeii
“Baths, drink and sex corrupt our bodies, but baths, drink and sex make life worth living” - graffiti — Roman bath
Early Middle Ages (4th century AD-1,200 AD): Privacy As Isolation
Early Christian saints pioneered the modern concept of privacy: seclusion. The Christian Bible popularized the idea that morality was not just the outcome of an evil deed, but the intent to cause harm; this novel coupling of intent and morality led the most devout followers (monks) to remove themselves from society and focus obsessively on battling their inner demons free from the distractions of civilization.
“Just as fish die if they stay too long out of water, so the monks who loiter outside their cells or pass their time with men of the world lose the intensity of inner peace. So like a fish going towards the sea, we must hurry to reach our cell, for fear that if we delay outside we will lose our interior watchfulness” - St Antony of Egypt
It is rumored that on the island monastery of Nitria, a monk died and was not found until 4 days later. Monks meditated in isolation in stone cubicles, known as “beehive” huts.
Even before the collapse of ancient Rome in the 5th century A.D., humanity was mostly a rural species.
A stylized blueprint of the Lord Of The Rings-looking shire longhouses, which were popular for 1,000 years, shows animals and humans sleeping in the same room—because there was only one room.
Photo credit: Georges Duby, A History Of Private Life: Revelations of the Medieval World
“There was no classical or medieval latin word equivalent to ‘privacy’. privatio meant ‘a taking away’” - Georges Duby, author, ‘A History Of Private Life: Revelations of the Medieval World’
Late Medieval/Early Renaissance (1300–1600) — The Foundation Of Privacy Is Built
“Privacy — the ultimate achievement of the renaissance” - Historian Peter Smith
In 1215, the influential Fourth Council Of Lateran (the “Great Council”) declared that confessions should be mandatory for the masses. This mighty stroke of Catholic power instantly extended the concept of internal morality to much of Europe.
“The apparatus of moral governance was shifted inward, to a private space that no longer had anything to do with the community,” explained religious author, Peter Loy. Solitude had a powerful ally.
Fortunately for the church, a new technology would make quiet contemplation much less expensive: Gutenberg's printing press.
Thanks to the printing press's invention after the Great Council's decree, personal reading supercharged European individualism. Poets, artists, and theologians were encouraged in their pursuit of “abandoning the world in order to turn one's heart with greater intensity toward God,” as recommended by the influential canon of The Brethren of the Common Life.
To be sure, up until the 18th century, public readings were still commonplace, a tradition that persisted until book ownership became nearly universal. Quiet study was an elite luxury for many centuries.
Citizens enjoy a public reading.
The Architecture of privacy
Individual beds are a modern invention. As one of the most expensive items in the home, a single large bed became a place for social gatherings, where guests were invited to sleep with the entire family and some servants.
People gather around a large bed.
But the uncleanness of urbanized life quickly caught up with the Europeans, when infectious diseases wiped out large swaths of newly crowded cities. The Black Death alone killed over 100 million people.
This profoundly changed hygiene attitudes, especially in hospitals, where it was once common for patients to sleep as close together as houseguests were accustomed to.
"Little children, both girls and boys, together in dangerous beds, upon which other patients died of contagious diseases, because there is no order and no private bed for the children, [who must] sleep six, eight, nine, ten, and twelve to a bed, at both head and foot" - notes of a nurse (circa 1500), lamenting the lack of modern medical procedures.
Though, just because individual beds in hospitals were coming into vogue, it did not mean that sex was any more private. Witnessing the consummation of marriage was common for both spiritual and logistical reasons:
“Newlyweds climbed into bed before the eyes of family and friends and the next day exhibited the sheets as proof that the marriage had been consummated” - Georges Duby, Editor, "A History of Private Life"
Few people demanded privacy while they slept because even separate beds wouldn’t have afforded them the luxury. Most homes only had one room. Architectural historians trace the origins of internal walls to the more basic human desire to be warm.
Below, in the video, is a Hollywood re-enactment of couples sleeping around the burning embers of a central fire pit, from the film Beowulf. It's a solid illustration of the open grand-hall architecture that was pervasive before the popularization of internal walls circa 1400 A.D.
https://www.youtube.com/watch?v=UT4gELLPPzs > Couples sleep around the warmth of a fire (clip from Beowulf)
“Firstly, I propose that there be a room common to all in the middle, and at its centre there shall be a fire, so that many more people can get round it and everyone can see the others' faces when engaging in their amusements and storytelling” - 15th-century Italian architect Sebastiano Serlio.
To disperse heat more efficiently without choking houseguests to death, fire-resistant chimney-like structures were built around central fire pits to reroute smoke outside. Below is an image of a “transitional” house during the 16th century period when back-to-back fireplaces broke up the traditional open hall architecture.
Source: Housing Culture: Traditional Architecture In An English Landscape (p. 78).
“A profound change in the very blueprint of the living space” - historian Raffaella Sarti, on the introduction of the chimney.
Pre-industrial revolution (1600–1840) — The home becomes private, which isn’t very private
The first recorded daily diary was composed by Lady Margaret Hoby, who lived just past the 16th century. On February 4th, 1600, she writes that she retired “to my Closit, wher I praid and Writt some thinge for mine owne priuat Conscience's”.
By the Renaissance, it was quite common for at least the wealthy to shelter themselves away in the home. Yet, even for those who could afford separate spaces, it was often more logistically convenient to live in close quarters with servants and family.
“Having served in the capacity of manservant to his Excellency Marquis Francesco Albergati for the period of about eleven years, that I can say and give account that on three or four occasions I saw the said marquis getting out of bed with a perfect erection of the male organ” - 1751, Servant of Albergati Capacelli, testifying in court that his master did not suffer from incontinence, thus rebutting his wife’s legal suit for annulment.
It was just prior to the industrial revolution that citizens, for the first time, demanded that the law begin to keep pace with the evolving need for secret activities.
In this early handwritten note on August 20th, 1770, revolutionist and future President of the United States, John Adams, voiced his support for the concept of privacy.
“I am under no moral or other Obligation…to publish to the World how much my Expences or my Incomes amount to yearly.”
Despite some high-profile opposition, the first American Census was posted publicly, for logistical reasons more than anything else. Transparency was the best way to ensure every citizen could inspect it for accuracy.
Privacy-conscious citizens did find more traction with what would become perhaps America's first privacy law, the 1710 Post Office Act, which banned the sorting through of mail by postal employees.
“I’ll say no more on this head, but When I have the Pleasure to See you again, shall Inform you of many Things too tedious for a Letter and which perhaps may fall into Ill hands, for I know there are many at Boston who dont Scruple to Open any Persons letters, but they are well known here.” - Dr. Oliver Noyes, lamenting the well-known fact that mail was often read.
This fact did not stop the mail's popularity.
Gilded Age: 1840–1950 — Privacy Becomes The Expectation
“Privacy is a distinctly modern product” - E.L. Godkin, 1890
By the time the industrial revolution began serving up material wealth to the masses, officials began recognizing privacy as the default setting of human life.
Source: Wikipedia user MattWade
“The material and moral well-being of workers, the health of the public, and the security of society depend on each family's living in a separate, healthy, and convenient home, which it may purchase” - speaker at the 1876 international hygiene congress in Brussels.
For the poor, however, life was still very much on display. The famous 20th-century existentialist philosopher Jean-Paul Sartre observed the poor streets of Naples:
Crowded apartment dwellers spill on to the streets
“The ground floor of every building contains a host of tiny rooms that open directly onto the street and each of these tiny rooms contains a family…they drag tables and chairs out into the street or leave them on the threshold, half outside, half inside…outside is organically linked to inside…yesterday i saw a mother and a father dining outdoors, while their baby slept in a crib next to the parents’ bed and an older daughter did her homework at another table by the light of a kerosene lantern…if a woman falls ill and stays in bed all day, it’s open knowledge and everyone can see her.”
Insides of houses were no less cramped:
The “Right To Privacy” is born
“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.” - “The Right To Privacy” ~ December 15, 1890, Harvard Law Review.
Interestingly enough, the right to privacy was justified on the very grounds for which it is now so popular: technology’s encroachment on personal information.
However, the father of the right to privacy and future Supreme Court Justice, Louis Brandeis, was ahead of his time. His seminal article did not get much press—and the press it did get wasn’t all that glowing.
"The feelings of these thin-skinned Americans are doubtless at the bottom of an article in the December number of the Harvard Law Review, in which two members of the Boston bar have recorded the results of certain researches into the question whether Americans do not possess a common-law right of privacy which can be successfully defended in the courts." - Galveston Daily News on ‘The Right To Privacy’
Privacy had not helped America up to this point in history. Brazen invasions into the public’s personal communications had been instrumental in winning the Civil War.
A request for wiretapping.
This is a letter from the Secretary of War, Edwin Stanton, requesting broad authority over telegraph lines; Lincoln simply scribbled on the back “The Secretary of War has my authority to exercise his discretion in the matter within mentioned. A. LINCOLN.”
It wasn't until the advertising industry provoked the ire of a different president that information privacy was codified into law. President Grover Cleveland had a wife who was easy on the eyes. And easy access to her likeness made it ideal for commercial purposes.
The rampant use of President Grover Cleveland's wife, Frances, in product advertisements eventually led to one of the nation's first privacy laws: in 1903, the New York legislature made the unauthorized commercial use of someone's likeness punishable by a fine of up to $1,000.
Indeed, for most of the 19th century, privacy was practically upheld as a way of maintaining a man’s ownership over his wife’s public and private life — including physical abuse.
“We will not inflict upon society the greater evil of raising the curtain upon domestic privacy, to punish the lesser evil of trifling violence” - 1868, State v. Rhodes, wherein the court decided that the social cost of invading domestic privacy outweighed that of wife beating.
The Technology of Individualism
The first 150 years of American life saw an explosion of information technology, from the postcard to the telephone. As each new communication method gave a chance to peek at the private lives of strangers and neighbors, Americans often (reluctantly) chose whichever technology was either cheaper or more convenient.
Privacy was a secondary concern.
"There is a lady who conducts her entire correspondence through this channel. She reveals secrets supposed to be the most profound, relates misdemeanors and indiscretions with a reckless disregard of the consequences. Her confidence is unbounded in the integrity of postmen and bell-boys, while the latter may be seen any morning, sitting on the doorsteps of apartment houses, making merry over the post-card correspondence.” - Editor, the Atlantic Monthly, on America's love of postcards, 1905.
Even though postcards were far less private, they were convenient. More than 200,000 postcards were ordered in the first two hours they were offered in New York City, on May 15, 1873.
Source: American Privacy: The 400-year History of Our Most Contested Right
The next big advance in information technology, the telephone, was a wild success in the early 20th century. However, individual telephone lines were prohibitively expensive; instead, neighbors shared one line, known as “party lines.” Commercial ads urged neighbors to use the shared technology with “courtesy”.
But, as this comic shows, it was common to eavesdrop.
“Party lines could destroy relationships…if you were dating someone on the party line and got a call from another girl, well, the jig was up. Five minutes after you hung up, everybody in the neighborhood — including your girlfriend — knew about the call. In fact, there were times when the girlfriend butted in and chewed both the caller and the callee out. Watch what you say.” - Author, Donnie Johnson.
Where convenience and privacy found a happy co-existence, individualized gadgets flourished. Listening, for instance, was not always an individual act: the sheer fact that audio was a form of broadcast made listening to conversations and music a social activity.
This all changed with the invention of the headphone.
“The triumph of headphones is that they create, in a public space, an oasis of privacy”- The Atlantic’s libertarian columnist, Derek Thompson.
Late 20th Century — Fear of a World Without Privacy
By the ’60s, individualized phones, rooms, and homes had become the norm. A hundred years earlier, when Lincoln tapped all telegraph lines, few raised any questions. In the new century, invasive surveillance would bring down Lincoln’s distant successor, even though his spying was far less pervasive.
Upon entering office, the former Vice-President assured the American people that their privacy was safe.
“As Vice President, I addressed myself to the individual rights of Americans in the area of privacy…There will be no illegal tappings, eavesdropping, bugging, or break-ins in my administration. There will be hot pursuit of tough laws to prevent illegal invasions of privacy in both government and private activities.” - Gerald Ford.
Justice Brandeis had finally won.
2000 A.D. and beyond — a grand reversal
In the early 2000s, young consumers were willing to purchase a location-tracking feature that was once the stuff of 1984 nightmares.
“The magic age is people born after 1981…That’s the cut-off for us where we see a big change in privacy settings and user acceptance.” - Loopt Co-Founder Sam Altman, who pioneered paid geo-location features.
The older generations’ fear of transparency became a subject of mockery.
“My grandma always reminds me to turn my GPS off a few blocks before I get home ‘so that the man giving me directions doesn’t know where I live.’” - a letter to the editor of College Humor’s “Parents Just Don’t Understand” series.
Increased urban density and skyrocketing rents in major cities have pushed people back toward communal living.
A co-living space in San Francisco / Source: Sarah Buhr, TechCrunch
“We’re seeing a shift in consciousness from hyper-individualistic to more cooperative spaces…We have a vision to raise our families together.” - Jordan Aleja Grader, San Francisco resident
At the more extreme ends, a new crop of so-called “life bloggers” publicize intimate details about their days:
Blogger Robert Scoble takes a picture with Google Glass in the shower
At the edges of transparency and pornography, anonymous exhibitionism finds a home on the web, at the wildly popular content aggregator, Reddit, in the aptly titled community “Gone Wild”.
How privacy will again fade away
For 3,000 years, most people have been perfectly willing to trade privacy for convenience, wealth or fame. It appears this is still true today.
AT&T recently rolled out a discounted broadband internet service, where customers could pay a mere $30 more a month to not have their browsing behavior tracked online for ad targeting.
“Since we began offering the service more than a year ago the vast majority have elected to opt-in to the ad-supported model.” - AT&T spokeswoman Gretchen Schultz (personal communication)
Performance artist Risa Puno managed to get almost half the attendees at a Brooklyn arts festival to trade their private data (image, fingerprints, or social security number) for a delicious cinnamon cookie. Some even proudly tweeted it.
twitter.com/kskobac/status/515956363793821696/photo/1 > "Traded all my personal data for a social media cookie at #PleaseEnableCookies by @risapuno #DAF14" @kskobac
Tourists on Hollywood Blvd willingly gave away their passwords on live TV for a split-second of fame on Jimmy Kimmel Live. > https://www.youtube.com/watch?v=opRMrEfAIiI
Even for holdouts, the costs of privacy may be too great to bear. With the advance of cutting-edge health technologies, withholding sensitive data may mean a painful, early death.
For instance, researchers have already discovered that if patients taking the deadly drug Vioxx had shared their health information publicly, statisticians could have detected its side effects early enough to save 25,000 lives.
As a result, Google’s Larry Page has embarked on a project to get more users to share their private health information with the academic research community. While Page told a crowd at the TED conference in 2013 that he believed such information could remain anonymous, statisticians are doubtful.
"We have been pretending that by removing enough information from databases that we can make people anonymous. We have been promising privacy, and this paper demonstrates that for a certain percent of a population, those promises are empty”- John Wilbanks of Sage Bionetworks, on a new academic paper that identified anonymous donors to a genetics database, based on public information
Speaking as a statistician: it is quite easy to identify people in anonymous datasets. There are only so many 5'4" Jews living in San Francisco with chronic back pain. Every bit of information we reveal about ourselves will be one more disease that we can track, and another life saved.
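The point about sparse quasi-identifiers can be illustrated with a tiny, entirely hypothetical sketch (the records and attribute values below are invented for illustration, not real data): once even a few attributes are combined, most profiles in a small "anonymous" dataset occur exactly once.

```python
# Hypothetical illustration of how few quasi-identifiers it takes
# to single people out of an "anonymous" dataset.
from collections import Counter

# Invented records: (height, city, chronic condition) -- no names attached.
records = [
    ("5'4\"", "San Francisco", "back pain"),
    ("5'9\"", "San Francisco", "none"),
    ("5'4\"", "San Francisco", "back pain"),
    ("6'1\"", "Oakland", "asthma"),
    ("5'4\"", "Oakland", "back pain"),
    ("5'9\"", "San Francisco", "migraine"),
]

# Count how often each combination of attributes occurs.
counts = Counter(records)

# A record is re-identifiable in practice when its combination of
# quasi-identifiers is rare: anyone whose combination occurs exactly
# once is effectively unique in the dataset.
unique = [r for r, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(counts)} distinct profiles are unique")  # -> 4 of 5
```

Even in this six-row toy dataset, four of the five distinct profiles point to a single person; real linkage attacks work the same way, just with external databases supplying the matching attributes.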
If I want to know whether I will suffer a heart attack, I will have to release my data for public research. In the end, privacy will be an early death sentence.
Already, health insurers are beginning to offer discounts for people who wear health trackers and let others analyze their personal movements. Many, if not most, consumers in the next generation will choose cash and a longer life in exchange for publicizing their most intimate details.
What can we tell with basic health information, such as calories burned throughout the day? Pretty much everything.
With a rudimentary step and calorie counter, I was able to distinguish whether I was having sex or at the gym, since the minute-by-minute calorie burn profile of sex is quite distinct. (The image below from my health tracker shows lots of energy expended at the beginning and end, with few steps taken. Few activities besides sex have this distinct shape.)
My late-night horizontal romp, as measured by calories burned per minute
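As a rough sketch of how such an inference could work, here is a toy heuristic with entirely invented thresholds and traces (not the author's actual tracker data or method): sustained calorie burn accompanied by many steps reads as a gym session, while high burn with almost no steps matches the profile described above.

```python
# Toy heuristic with invented thresholds: guess an activity from a
# minute-by-minute (calories_burned, steps_taken) trace.
def classify(trace):
    """trace: list of (calories_per_minute, steps_per_minute) tuples."""
    high_burn_minutes = [cal for cal, _ in trace if cal > 5]
    total_steps = sum(steps for _, steps in trace)
    if not high_burn_minutes:
        return "rest"
    # High energy expenditure while almost stationary is the telltale shape.
    if total_steps < 10 * len(trace):
        return "high burn, few steps"
    return "gym"

gym_trace = [(8, 120)] * 30                               # steady burn, lots of steps
stationary_trace = [(9, 2)] * 5 + [(3, 1)] * 10 + [(9, 2)] * 5  # bursts, no steps

print(classify(gym_trace))
print(classify(stationary_trace))
```

The real point is not the thresholds but the shape: once a tracker logs both energy and movement per minute, activities separate with almost no statistical machinery at all.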
More advanced health monitors used by insurers are coming, like sensors embedded in skin and clothes that detect stress and concentration. The markers of an early heart attack or dementia will be the same ones that reveal an argument with a spouse or an employee dozing off at work.
No behavior will escape categorization—which will give us unprecedented superpowers to extend healthy life. Opting out of this tracking—if it is even possible—will mean an early death and extremely pricey health insurance for many.
If history is a guide, the costs and convenience of radical transparency will once again take us back to our roots as a species that could not even conceive of a world with privacy.
It’s hard to know whether complete and utter transparency will realize a techno-utopia of a more honest and innovative future. But, given that privacy has only existed for a sliver of human history, its disappearance is unlikely to doom mankind. Indeed, transparency is humanity’s natural state.
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.