Tuesday, January 28. 2014
Djibouti migrants | #communication
-----
Twitter / pourmecoffee: “Djibouti migrants try to capture cheap cell signals from Somalia to call relatives” (Photo: John Stanmeyer/@NatGeo)
Update: original source at National Geographic, via Timo. The photo was taken as part of the "Out of Eden Walk" series.
Snowden’s Leaks Have Finally Forced Companies to Enhance Their Security | #surveillance #encryption
-----
Revelations about NSA surveillance have prompted Yahoo, Microsoft, and other companies to deploy long-overdue security improvements.
Last week, Google, Microsoft, and five other leading Web companies formally requested that the U.S. government rein in its use of dragnet surveillance. These companies don’t have to wait for the government to act, though. Encryption technology can protect the privacy of innocent users from indiscriminate surveillance, but only if tech companies deploy it. In the wake of the Snowden disclosures, they are starting to do so. It shouldn’t have taken them this long.
In October of 2010, security researcher Eric Butler released an easy-to-use tool designed to hack into the webmail accounts of people using public Wi-Fi networks. Butler’s Firesheep wasn’t the first technology to make Wi-Fi sniffing possible, but it made it easy to intercept e-mails and documents, and even to capture authentication cookies that could be used at a later time to log in to a victim’s account. Firesheep exploited the fact that most webmail and social networking sites at the time did not use HTTPS encryption to protect their customers’ information, or provided such encryption only to users who enabled an obscure configuration option most people were unaware of.
Google embraced encryption by default for its Gmail service a few months before Firesheep was released. Other major Web companies ignored calls from Pamela Jones Harbour, a commissioner with the Federal Trade Commission, for them to follow suit. One year later (soon after Firesheep was written about in the New York Times), Senator Chuck Schumer wrote a letter to Yahoo, Amazon, and Twitter urging them to enable HTTPS by default.
Twitter, Facebook, and Microsoft’s e-mail service eventually did switch to HTTPS encryption by default. However, Yahoo continued to expose its customers’ private information not only to hackers using tools like Firesheep, but also to governments around the world that are capable of intercepting the communications of their own citizens. In January of this year the company finally announced an opt-in encryption setting, which few users were likely to use.
Yahoo ignored not just strong words from an FTC commissioner and a letter from a U.S. senator, but also a public plea from human rights groups. What made the company finally decide to use HTTPS by default was a Washington Post story revealing that the NSA was intercepting nearly half a million Yahoo users’ unencrypted webmail address books per day.
Shortly after the news broke, Yahoo CEO Marissa Mayer proclaimed that “there is nothing more important to us than protecting our users’ privacy.” If that’s the case, why did it take the disclosures of Edward Snowden for the company to finally deliver industry-standard Web encryption? Why didn’t the company protect its customers from hackers using tools like Firesheep, or from the deep packet inspection equipment that we have long known governments around the world are using? The answer is that it didn’t care—until its utter failure to deploy basic Web security was featured on the front page of the Washington Post.
Yahoo isn’t the only company to up its game in response to the Snowden disclosures. Indeed, many of the big cloud computing companies—including Google, Facebook, Yahoo, and Microsoft—have started to encrypt information as it travels between their data centers. They have also increased the size of their encryption keys and switched to encryption algorithms that offer “perfect forward secrecy.” The EFF’s “Encrypt the Web” report reflects the rapid embrace of security technologies by major companies.
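As a rough illustration of what "HTTPS by default" entails on the server side, here is a minimal sketch using Python's standard WSGI machinery. The host, port, and cookie value are placeholders, and real deployments (Yahoo's or Google's included) make these changes at the load-balancer and TLS layers rather than in application code like this; the sketch simply shows the two pieces that defeat Firesheep-style snooping: redirecting plain-HTTP requests to HTTPS and keeping session cookies off unencrypted connections, plus an HSTS header so browsers keep using HTTPS.

```python
# Minimal, hypothetical sketch: force HTTPS and protect session cookies.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A stand-in for the real webmail application.
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        # "Secure" keeps the cookie off plain HTTP, which is exactly what
        # Firesheep relied on; "HttpOnly" keeps it away from page scripts.
        ("Set-Cookie", "session=opaque-token; Secure; HttpOnly"),
    ])
    return [b"hello over HTTPS\n"]

def https_by_default(wrapped):
    def middleware(environ, start_response):
        # In production the scheme is set by the TLS terminator; here we just
        # redirect anything that is still plain HTTP.
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "mail.example.org")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]

        def add_hsts(status, headers, exc_info=None):
            # Ask browsers to use HTTPS for a year, including subdomains.
            headers.append(("Strict-Transport-Security",
                            "max-age=31536000; includeSubDomains"))
            return start_response(status, headers, exc_info)

        return wrapped(environ, add_hsts)
    return middleware

if __name__ == "__main__":
    make_server("", 8000, https_by_default(app)).serve_forever()
```

Perfect forward secrecy, mentioned above, is configured one layer further down, in the TLS cipher suites, not in application code.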
However, were it not for Snowden’s whistle-blowing and the brave decision by journalists to reveal technical details about some of the NSA’s activities, it’s doubtful that many companies would have made these security improvements. For that reason alone, we owe Edward Snowden our thanks.
Christopher Soghoian is principal technologist with the American Civil Liberties Union’s Speech, Privacy, and Technology Project.
Monday, January 27. 2014
The Real Privacy Problem | #monitoring #surveillance
-----
As Web companies and government agencies analyze ever more information about our lives, it’s tempting to respond by passing new privacy laws or creating mechanisms that pay us for our data. Instead, we need a civic solution, because democracy is at risk.
In 1967, The Public Interest, then a leading venue for highbrow policy debate, published a provocative essay by Paul Baran, one of the fathers of the data transmission method known as packet switching. Titled “The Future Computer Utility,” the essay speculated that someday a few big, centralized computers would provide “information processing … the same way one now buys electricity.”
It took decades for cloud computing to fulfill Baran’s vision. But he was prescient enough to worry that utility computing would need its own regulatory model. Here was an employee of the RAND Corporation—hardly a redoubt of Marxist thought—fretting about the concentration of market power in the hands of large computer utilities and demanding state intervention. Baran also wanted policies that could “offer maximum protection to the preservation of the rights of privacy of information.”
Sharp, bullshit-free analysis: techno-futurism has been in decline ever since.
All the privacy solutions you hear about are on the wrong track.
To read Baran’s essay (just one of the many on utility computing published at the time) is to realize that our contemporary privacy problem is not contemporary. It’s not just a consequence of Mark Zuckerberg’s selling his soul and our profiles to the NSA. The problem was recognized early on, and little was done about it.
Almost all of Baran’s envisioned uses for “utility computing” are purely commercial. Ordering shirts, paying bills, looking for entertainment, conquering forgetfulness: this is not the Internet of “virtual communities” and “netizens.” Baran simply imagined that networked computing would allow us to do things that we already do without networked computing: shopping, entertainment, research. But also: espionage, surveillance, and voyeurism.
If Baran’s “computer revolution” doesn’t sound very revolutionary, it’s in part because he did not imagine that it would upend the foundations of capitalism and bureaucratic administration that had been in place for centuries. By the 1990s, however, many digital enthusiasts believed otherwise; they were convinced that the spread of digital networks and the rapid decline in communication costs represented a genuinely new stage in human development. For them, the surveillance triggered in the 2000s by 9/11 and the colonization of these pristine digital spaces by Google, Facebook, and big data were aberrations that could be resisted or at least reversed. If only we could now erase the decade we lost and return to the utopia of the 1980s and 1990s by passing stricter laws, giving users more control, and building better encryption tools!
A different reading of recent history would yield a different agenda for the future. The widespread feeling of emancipation through information that many people still attribute to the 1990s was probably just a prolonged hallucination. Both capitalism and bureaucratic administration easily accommodated themselves to the new digital regime; both thrive on information flows, the more automated the better. Laws, markets, or technologies won’t stymie or redirect that demand for data, as all three play a role in sustaining capitalism and bureaucratic administration in the first place. Something else is needed: politics.
Even programs that seem innocuous can undermine democracy.
First, let’s address the symptoms of our current malaise. Yes, the commercial interests of technology companies and the policy interests of government agencies have converged: both are interested in the collection and rapid analysis of user data. Google and Facebook are compelled to collect ever more data to boost the effectiveness of the ads they sell. Government agencies need the same data—they can collect it either on their own or in coöperation with technology companies—to pursue their own programs.
Many of those programs deal with national security. But such data can be used in many other ways that also undermine privacy. The Italian government, for example, is using a tool called the redditometro, or income meter, which analyzes receipts and spending patterns to flag people who spend more than they claim in income as potential tax cheaters. Once mobile payments replace a large percentage of cash transactions—with Google and Facebook as intermediaries—the data collected by these companies will be indispensable to tax collectors. Likewise, legal academics are busy exploring how data mining can be used to craft contracts or wills tailored to the personalities, characteristics, and past behavior of individual citizens, boosting efficiency and reducing malpractice.
On another front, technocrats like Cass Sunstein, the former administrator of the Office of Information and Regulatory Affairs at the White House and a leading proponent of “nanny statecraft” that nudges citizens to do certain things, hope that the collection and instant analysis of data about individuals can help solve problems like obesity, climate change, and drunk driving by steering our behavior. A new book by three British academics—Changing Behaviours: On the Rise of the Psychological State—features a long list of such schemes at work in the U.K., where the government’s nudging unit, inspired by Sunstein, has been so successful that it’s about to become a for-profit operation.
Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy, or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return?
This logic of preëmption is not different from that of the NSA in its fight against terror: let’s prevent problems rather than deal with their consequences. Even if we tie the hands of the NSA—by some combination of better oversight, stricter rules on data access, or stronger and friendlier encryption technologies—the data hunger of other state institutions would remain. They will justify it. On issues like obesity or climate change—where the policy makers are quick to add that we are facing a ticking-bomb scenario—they will say a little deficit of democracy can go a long way.
Here’s what that deficit would look like: the new digital infrastructure, thriving as it does on real-time data contributed by citizens, allows the technocrats to take politics, with all its noise, friction, and discontent, out of the political process. It replaces the messy stuff of coalition-building, bargaining, and deliberation with the cleanliness and efficiency of data-powered administration.
This phenomenon has a meme-friendly name: “algorithmic regulation,” as Silicon Valley publisher Tim O’Reilly calls it. In essence, information-rich democracies have reached a point where they want to try to solve public problems without having to explain or justify themselves to citizens. Instead, they can simply appeal to our own self-interest—and they know enough about us to engineer a perfect, highly personalized, irresistible nudge.
Privacy is a means to democracy, not an end in itself.
Another warning from the past. The year was 1985, and Spiros Simitis, Germany’s leading privacy scholar and practitioner—at the time the data protection commissioner of the German state of Hesse—was addressing the University of Pennsylvania Law School. His lecture explored the very same issue that preoccupied Baran: the automation of data processing. But Simitis didn’t lose sight of the history of capitalism and democracy, so he saw technological changes in a far more ambiguous light. He also recognized that privacy is not an end in itself. It’s a means of achieving a certain ideal of democratic politics, where citizens are trusted to be more than just self-contented suppliers of information to all-seeing and all-optimizing technocrats. “Where privacy is dismantled,” warned Simitis, “both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.”
Three technological trends underpinned Simitis’s analysis. First, he noted, even back then, every sphere of social interaction was mediated by information technology—he warned of “the intensive retrieval of personal data of virtually every employee, taxpayer, patient, bank customer, welfare recipient, or car driver.” As a result, privacy was no longer solely a problem of some unlucky fellow caught off-guard in an awkward situation; it had become everyone’s problem. Second, new technologies like smart cards and videotex not only were making it possible to “record and reconstruct individual activities in minute detail” but also were normalizing surveillance, weaving it into our everyday life. Third, the personal information recorded by these new technologies was allowing social institutions to enforce standards of behavior, triggering “long-term strategies of manipulation intended to mold and adjust individual conduct.”
Modern institutions certainly stood to gain from all this. Insurance companies could tailor cost-saving programs to the needs and demands of patients, hospitals, and the pharmaceutical industry. Police could use newly available databases and various “mobility profiles” to identify potential criminals and locate suspects. Welfare agencies could suddenly unearth fraudulent behavior.
But how would these technologies affect us as citizens—as subjects who participate in understanding and reforming the world around us, not just as consumers or customers who merely benefit from it? In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation.
As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”
If you think Simitis was describing a future that never came to pass, consider a recent paper on the transparency of automated prediction systems by Tal Zarsky, one of the world’s leading experts on the politics and ethics of data mining. He notes that “data mining might point to individuals and events, indicating elevated risk, without telling us why they were selected.” As it happens, the degree of interpretability is one of the most consequential policy decisions to be made in designing data-mining systems. Zarsky sees vast implications for democracy here.
This is the future we are sleepwalking into. Everything seems to work, and things might even be getting better—it’s just that we don’t know exactly why or how.
Too little privacy can endanger democracy. But so can too much privacy.
Simitis got the trends right. Free from dubious assumptions about “the Internet age,” he arrived at an original but cautious defense of privacy as a vital feature of a self-critical democracy—not the democracy of some abstract political theory but the messy, noisy democracy we inhabit, with its never-ending contradictions. In particular, Simitis’s most crucial insight is that privacy can both support and undermine democracy.
Traditionally, our response to changes in automated information processing has been to view them as a personal problem for the affected individuals. A case in point is the seminal article “The Right to Privacy,” by Louis Brandeis and Samuel Warren. Writing in 1890, they sought a “right to be let alone”—to live an undisturbed life, away from intruders. According to Simitis, they expressed a desire, common to many self-made individuals at the time, “to enjoy, strictly for themselves and under conditions they determined, the fruits of their economic and social activity.”
A laudable goal: without extending such legal cover to entrepreneurs, modern American capitalism might have never become so robust. But this right, disconnected from any matching responsibilities, could also sanction an excessive level of withdrawal that shields us from the outside world and undermines the foundations of the very democratic regime that made the right possible. If all citizens were to fully exercise their right to privacy, society would be deprived of the transparent and readily available data that’s needed not only for the technocrats’ sake but—even more—so that citizens can evaluate issues, form opinions, and debate (and, occasionally, fire the technocrats).
This is not a problem specific to the right to privacy. For some contemporary thinkers, such as the French historian and philosopher Marcel Gauchet, democracies risk falling victim to their own success: having instituted a legal regime of rights that allow citizens to pursue their own private interests without any reference to what’s good for the public, they stand to exhaust the very resources that have allowed them to flourish. When all citizens demand their rights but are unaware of their responsibilities, the political questions that have defined democratic life over centuries—How should we live together? What is in the public interest, and how do I balance my own interest with it?—are subsumed into legal, economic, or administrative domains. “The political” and “the public” no longer register as domains at all; laws, markets, and technologies displace debate and contestation as preferred, less messy solutions.
But a democracy without engaged citizens doesn’t sound much like a democracy—and might not survive as one. This was obvious to Thomas Jefferson, who, while wanting every citizen to be “a participator in the government of affairs,” also believed that civic participation involves a constant tension between public and private life. A society that believes, as Simitis put it, that the citizen’s access to information “ends where the bourgeois’ claim for privacy begins” won’t last as a well-functioning democracy.
Thus the balance between privacy and transparency is especially in need of adjustment in times of rapid technological change. That balance itself is a political issue par excellence, to be settled through public debate and always left open for negotiation.
It can’t be settled once and for all by some combination of theories, markets, and technologies. As Simitis said: “Far from being considered a constitutive element of a democratic society, privacy appears as a tolerated contradiction, the implications of which must be continuously reconsidered.”
Laws and market mechanisms are insufficient solutions.
In the last few decades, as we began to generate more data, our institutions became addicted. If you withheld the data and severed the feedback loops, it’s not clear whether they could continue at all. We, as citizens, are caught in an odd position: our reason for disclosing the data is not that we feel deep concern for the public good. No, we release data out of self-interest, on Google or via self-tracking apps. We are too cheap not to use free services subsidized by advertising. Or we want to track our fitness and diet, and then we sell the data.
Simitis knew even in 1985 that this would inevitably lead to the “algorithmic regulation” taking shape today, as politics becomes “public administration” that runs on autopilot so that citizens can relax and enjoy themselves, only to be nudged, occasionally, whenever they are about to forget to buy broccoli.
What Simitis is describing here is the construction of what I call “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance, imposes severe constraints on how we mature politically and socially. The German philosopher Jürgen Habermas was right to warn—in 1963—that “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes—the social engineers and the inmates of closed social institutions.”
The invisible barbed wire of big data limits our lives to a space that might look quiet and enticing enough but is not of our own choosing and that we cannot rebuild or expand. The worst part is that we do not see it as such. Because we believe that we are free to go anywhere, the barbed wire remains invisible. Worse, there’s no one to blame: certainly not Google, Dick Cheney, or the NSA. It’s the result of many different logics and systems—of modern capitalism, of bureaucratic governance, of risk management—that get supercharged by the automation of information processing and by the depoliticization of politics.
The more information we reveal about ourselves, the denser but more invisible this barbed wire becomes. We gradually lose our capacity to reason and debate; we no longer understand why things happen to us.
But all is not lost. We could learn to perceive ourselves as trapped within this barbed wire and even cut through it. Privacy is the resource that allows us to do that and, should we be so lucky, even to plan our escape route. This is where Simitis expressed a truly revolutionary insight that is lost in contemporary privacy debates: no progress can be achieved, he said, as long as privacy protection is “more or less equated with an individual’s right to decide when and which data are to be accessible.” The trap that many well-meaning privacy advocates fall into is thinking that if only they could provide the individual with more control over his or her data—through stronger laws or a robust property regime—then the invisible barbed wire would become visible and fray. It won’t—not if that data is eventually returned to the very institutions that are erecting the wire around us.
Think of privacy in ethical terms.
If we accept privacy as a problem of and for democracy, then popular fixes are inadequate. For example, in his book Who Owns the Future?, Jaron Lanier proposes that we disregard one pole of privacy—the legal one—and focus on the economic one instead. “Commercial rights are better suited for the multitude of quirky little situations that will come up in real life than new kinds of civil rights along the lines of digital privacy,” he writes. On this logic, by turning our data into an asset that we might sell, we accomplish two things. First, we can control who has access to it, and second, we can make up for some of the economic losses caused by the disruption of everything analog.
Lanier’s proposal is not original. In Code and Other Laws of Cyberspace (first published in 1999), Lawrence Lessig enthused about building a property regime around private data. Lessig wanted an “electronic butler” that could negotiate with websites: “The user sets her preferences once—specifies how she would negotiate privacy and what she is willing to give up—and from that moment on, when she enters a site, the site and her machine negotiate. Only if the machines can agree will the site be able to obtain her personal data.”
It’s easy to see where such reasoning could take us. We’d all have customized smartphone apps that would continually incorporate the latest information about the people we meet, the places we visit, and the information we possess in order to update the price of our personal data portfolio. It would be extremely dynamic: if you are walking by a fancy store selling jewelry, the store might be willing to pay more to know your spouse’s birthday than it is when you are sitting at home watching TV.
The property regime can, indeed, strengthen privacy: if consumers want a good return on their data portfolio, they need to ensure that their data is not already available elsewhere. Thus they either “rent” it the way Netflix rents movies or sell it on the condition that it can be used or resold only under tightly controlled conditions. Some companies already offer “data lockers” to facilitate such secure exchanges.
So if you want to defend the “right to privacy” for its own sake, turning data into a tradable asset could resolve your misgivings. The NSA would still get what it wanted; but if you’re worried that our private information has become too liquid and that we’ve lost control over its movements, a smart business model, coupled with a strong digital-rights-management regime, could fix that. Meanwhile, government agencies committed to “nanny statecraft” would want this data as well. Perhaps they might pay a small fee or promise a tax credit for the privilege of nudging you later on—with the help of the data from your smartphone. Consumers win, entrepreneurs win, technocrats win. Privacy, in one way or another, is preserved also.
So who, exactly, loses here? If you’ve read your Simitis, you know the answer: democracy does. It’s not just because the invisible barbed wire would remain. We also should worry about the implications for justice and equality. For example, my decision to disclose personal information, even if I disclose it only to my insurance company, will inevitably have implications for other people, many of them less well off. People who say that tracking their fitness or location is merely an affirmative choice from which they can opt out have little knowledge of how institutions think.
Once there are enough early adopters who self-track—and most of them are likely to gain something from it—those who refuse will no longer be seen as just quirky individuals exercising their autonomy. No, they will be considered deviants with something to hide. Their insurance will be more expensive. If we never lose sight of this fact, our decision to self-track won’t be as easy to reduce to pure economic self-interest; at some point, moral considerations might kick in. Do I really want to share my data and get a coupon I do not need if it means that someone else who is already working three jobs may ultimately have to pay more? Such moral concerns are rendered moot if we delegate decision-making to “electronic butlers.”
Few of us have had moral pangs about data-sharing schemes, but that could change. Before the environment became a global concern, few of us thought twice about taking public transport if we could drive. Before ethical consumption became a global concern, no one would have paid more for coffee that tasted the same but promised “fair trade.”
Consider a cheap T-shirt you see in a store. It might be perfectly legal to buy it, but after decades of hard work by activist groups, a “Made in Bangladesh” label makes us think twice about doing so. Perhaps we fear that it was made by children or exploited adults. Or, having thought about it, maybe we actually do want to buy the T-shirt because we hope it might support the work of a child who would otherwise be forced into prostitution. What is the right thing to do here? We don’t know—so we do some research. Such scrutiny can’t apply to everything we buy, or we’d never leave the store. But exchanges of information—the oxygen of democratic life—should fall into the category of “Apply more thought, not less.” It’s not something to be delegated to an “electronic butler”—not if we don’t want to cleanse our life of its political dimension.
Sabotage the system. Provoke more questions.
We should also be troubled by the suggestion that we can reduce the privacy problem to the legal dimension. The question we’ve been asking for the last two decades—How can we make sure that we have more control over our personal information?—cannot be the only question to ask. Unless we learn and continuously relearn how automated information processing promotes and impedes democratic life, an answer to this question might prove worthless, especially if the democratic regime needed to implement whatever answer we come up with unravels in the meantime.
Intellectually, at least, it’s clear what needs to be done: we must confront the question not only in the economic and legal dimensions but also in a political one, linking the future of privacy with the future of democracy in a way that refuses to reduce privacy either to markets or to laws. What does this philosophical insight mean in practice?
First, we must politicize the debate about privacy and information sharing. Articulating the existence—and the profound political consequences—of the invisible barbed wire would be a good start. We must scrutinize data-intensive problem solving and expose its occasionally antidemocratic character. At times we should accept more risk, imperfection, improvisation, and inefficiency in the name of keeping the democratic spirit alive.
Second, we must learn how to sabotage the system—perhaps by refusing to self-track at all. If refusing to record our calorie intake or our whereabouts is the only way to get policy makers to address the structural causes of problems like obesity or climate change—and not just tinker with their symptoms through nudging—information boycotts might be justifiable. Refusing to make money off your own data might be as political an act as refusing to drive a car or eat meat. Privacy can then reëmerge as a political instrument for keeping the spirit of democracy alive: we want private spaces because we still believe in our ability to reflect on what ails the world and find a way to fix it, and we’d rather not surrender this capacity to algorithms and feedback loops.
Third, we need more provocative digital services. It’s not enough for a website to prompt us to decide who should see our data. Instead it should reawaken our own imaginations. Designed right, sites would not nudge citizens to either guard or share their private information but would reveal the hidden political dimensions to various acts of information sharing. We don’t want an electronic butler—we want an electronic provocateur. Instead of yet another app that could tell us how much money we can save by monitoring our exercise routine, we need an app that can tell us how many people are likely to lose health insurance if the insurance industry has as much data as the NSA, most of it contributed by consumers like us. Eventually we might discern such dimensions on our own, without any technological prompts.
Finally, we have to abandon fixed preconceptions about how our digital services work and interconnect. Otherwise, we’ll fall victim to the same logic that has constrained the imagination of so many well-meaning privacy advocates who think that defending the “right to privacy”—not fighting to preserve democracy—is what should drive public policy. While many Internet activists would surely argue otherwise, what happens to the Internet is of only secondary importance. Just as with privacy, it’s the fate of democracy itself that should be our primary goal.
After all, back in 1967 Paul Baran was lucky enough not to know what the Internet would become. That didn’t stop him from seeing the benefits of utility computing and its dangers. Abandon the idea that the Internet fell from grace over the last decade. Liberating ourselves from that misreading of history could help us address the antidemocratic threats of the digital future.
Evgeny Morozov is the author of The Net Delusion: The Dark Side of Internet Freedom and To Save Everything, Click Here: The Folly of Technological Solutionism.
Posted by Patrick Keller
in Culture & society, Science & technology
at 09:29
Defined tags for this entry: computing, culture & society, data, history, mining, profiling, science & technology, theory
Friday, January 24. 2014
Bracket [takes action] | #call
A new call by the very interesting Bracket magazine/books!
Via Bracket
-----
Dear Bracket friends,
We are happy to announce the call for submissions (CFS) for Bracket [Takes Action].
We hope you consider submitting. Please also pass this along to anyone you think might be interested.
The deadline is quickly approaching — February 28th!
Best wishes,
Neeraj & Mason
Bracket [Takes Action] Editors
Bracket [takes action]
“When humans assemble, spatial conflicts arise. Spatial planning is often considered the management of spatial conflicts.” —Markus Miessen
Call for submissions
Hannah Arendt’s 1958 treatise The Human Condition cites “action” as one of the three tenets, along with labor and work, of the vita activa (active life). Action, she writes, is a necessary catalyst for the human condition of plurality, which is an expression of both the common public and distinct individuals. This reading of action requires unique and free individuals to act toward a collective project and is therefore simultaneously ‘bottom-up’ and ‘top-down’.
In the more than fifty years since Arendt’s claims, the public realm in which action materializes, and the means by which action is expressed, have dramatically transformed. Further, spatial practice’s role in anticipating, planning, or absorbing action(s) has been challenged, yielding difficulty in the design of the ‘space of appearance,’ Arendt’s public realm. Our young century has already seen contested claims about design’s role in the public realm by George Baird, Lieven De Cauter, Markus Miessen, and Jan Gehl, among others. Perhaps we could characterize these tensions as a ‘design deficit’, or a sense that design does not incite ‘action’ in the Arendtian sense. Amongst other things, this feeling is linked to the rise of neo-liberal pluralism, which marks the transition from public to publics, often making a collective agenda in the public realm illegible.
Bracket [takes action] explores the complex relationship between spatial design and the public(s) and action(s) it contains. How can design catalyze a public and incite platforms for action? Consider two images indicative of contemporary action within the public realm of our present century: (i) the June 2009 opening of the High Line Park in New York City, and (ii) the January 2011 occupation of Tahrir Square in Cairo. These two spaces and their respective contemporary publics embody the range within today’s space of appearance. At the High Line, the urban public is now choreographed in a top-down manner along a designed former infrastructure with an endless supply of vistas into an urban private realm. In Tahrir Square, an assembled, swirling public occupies, and therefore re-designs, an infrastructural plaza, overwhelming a government and its communication networks. This example reveals a bottom-up, self-assembling public. But what role did spatial practice play in each of these scenarios, and who were the spatial practitioners and public(s)? The contrast of two positions on action in a public realm offers an opening for wider investigations into spatial practice’s role and impact on today’s public(s) and their action(s).
Bracket [takes action] asks: What are the collective projects in the public realm to act on? How have recent design projects incited political or social action? How can design catalyze a public, as well as forums for that public to act? What is the role of spatial practice in instigating or resisting public actions?
Bracket 4 probes spatial practice’s potential to incite and respond to action today. The fourth edition of Bracket invites design work and papers that offer contemporary models of spatial design that are conscious of their public intent and actively engaged in socio-political conditions. It is encouraged, although not mandatory, that submitted projects be realized. Positional papers should be projective and speculative, or revelatory if historical.
Suggested subthemes include:
Participatory ACTION – interactive, crowd-sourced, scripted
Disputed PUBLICS – inconsistent, erratic, agonized
Deviant ACTION – subversive, loopholes, reactive
Distributed PUBLICS – broadcasted, networked, diffused
Occupy ACTION – defiant, resistant, upheaval
Mob PUBLICS – temporary, forceful, performative
Market ACTION – abandoning, asserting, selecting
The editorial board and jury for Bracket 4 includes Pier Vittorio Aureli, Vishaan Chakrabarti, Adam Greenfield, Belinda Tato, and Yoshiharu Tsukamoto, as well as co-editors Neeraj Bhatia and Mason White.
Deadline for Submissions: February 28, 2014
Please visit www.brkt.org for more info.
Posted by Patrick Keller
in Architecture, Culture & society, Design, Interaction design
at 10:25
Defined tags for this entry: architecture, culture & society, design, design (interactions), interaction design, politics, research, speculation, thinking
Friday, January 17. 2014
Sync Your Files without Trusting the Cloud | #data #cloud
-----
The company behind the BitTorrent protocol is working on software that can replicate most features of file-syncing services without handing your data to cloud servers.
By Tom Simonite on January 17, 2014
Data dump: New software from BitTorrent can synchronize files between computers and mobile devices without ever storing them in a data center like this one.
The debate over how much we should trust cloud companies with our data (see “NSA Spying Is Making Us Less Safe”) was reawakened last year after revelations that the National Security Agency routinely harvests data from Internet companies including Google, Microsoft, Yahoo, and Facebook. BitTorrent, the company behind the sometimes controversial file-sharing protocol of the same name, is hoping that this debate will drive adoption of its new file-syncing technology this year.
Called BitTorrent Sync, it synchronizes folders and files on different computers and mobile devices in a way that’s similar to what services like Dropbox offer, but without ever copying data to a central cloud server. Cloud-based file-syncing services like Dropbox and Microsoft’s SkyDrive route all data via their own servers and keep a copy of it there. The BitTorrent software instead has devices contact one another directly over the Internet to update files as they are added or changed.
That difference in design means that people using BitTorrent Sync don’t have to worry about whether the cloud company hosting their data is properly securing it against rogue employees or other threats. Forgoing the cloud also means that data shared using BitTorrent Sync could be harvested by the NSA or another agency only by going directly to the person or company controlling the synced devices. Synced data does travel over the public Internet, where it might be intercepted by a surveillance agency such as the NSA, which is known to collect data directly from the Internet backbone, but it travels in a strongly encrypted form. One drawback of BitTorrent Sync’s design is that two devices must both be online at the same time for them to synchronize, since there’s no intermediary server to act as an always-on source.
BitTorrent Sync is available now as a free download for PCs and mobile devices, but in a beta version that lacks the polish and ease of use of many consumer applications. BitTorrent CEO Eric Klinker says the next version, due this spring, will feature major upgrades to the interface that will make the software more user-friendly and in line with its established cloud-based competitors.
Klinker says BitTorrent Sync shows how popular applications of the Internet can be designed in a way that gives people control of their own data, despite prevailing trends. “Pick any app on the Web today, it could be Twitter, e-mail, search, and it has been developed in a very centralized way—those businesses are built around centralizing information on their servers,” he says. “I’m trying to put more power in the hands of the end user and less in the hands of these companies and other centralizing authorities.”
Anonymous data sent back to BitTorrent by its software indicates that more than two million people are already using it each month. Some of those people have found uses that go beyond just managing files. For example, the company says one author in Beijing uses BitTorrent Sync to distribute blog posts on topics sensitive to Chinese authorities. And one U.S. programmer built a secure, decentralized messaging system on top of the software. Klinker says that companies are also starting to use BitTorrent Sync to keep data inside their own systems or to avoid the costs of cloud-based solutions. He plans to eventually make BitTorrent Sync pay for itself by finding a way to sell extra services to corporate users of the software.
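As a rough illustration of the serverless approach described above (and not of BitTorrent Sync's actual, unpublished protocol), here is a minimal sketch of the core idea: each device summarizes its folder as a map from file paths to content hashes, the two peers exchange these manifests directly, and each fetches only what differs. The folder names below are placeholders, and the exchange and transfer steps, which a real system would run over an encrypted peer-to-peer connection, are left out.

```python
# Sketch of serverless file sync: peers compare folder manifests and plan
# what to fetch from each other, with no cloud server holding a copy.
import hashlib
import os

def manifest(root):
    """Map each file under `root` to a SHA-256 digest of its contents."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                result[rel] = hashlib.sha256(f.read()).hexdigest()
    return result

def plan_sync(mine, theirs):
    """Return the relative paths this peer should fetch from the other peer."""
    return sorted(
        path for path, digest in theirs.items()
        if mine.get(path) != digest
    )

if __name__ == "__main__":
    # Two local folders stand in for two devices reachable over the network.
    laptop = manifest("demo/laptop")
    phone = manifest("demo/phone")
    print("laptop needs:", plan_sync(laptop, phone))
    print("phone needs:", plan_sync(phone, laptop))
```

Because the manifests are exchanged directly between devices, both peers have to be reachable at the same time, which is exactly the drawback noted above about the lack of an always-on intermediary.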
Given its emphasis on transparency and data ownership, BitTorrent has been criticized by some for not releasing the source code for its application. Some in the tech- and privacy-savvy crowd attracted by BitTorrent Sync’s decentralized design say this step is necessary if people are to be sure that no privacy-compromising bugs or backdoors are hiding in the software.
Klinker says he understands those concerns and may yet decide to release the source code for the software. “It’s a fair point, and we understand that transparency is good, but it opens up vulnerabilities, too,” he says. For now the company prefers to keep the code private and perform security audits behind closed doors, says Klinker.
Jacob Williams, a digital forensic scientist with CSR Group, says that stance is defensible, although he generally considers open-source programs to be more secure than those that aren’t. “Open source is a double-edged sword,” says Williams, because finding subtly placed vulnerabilities is very challenging, and because open-source projects can be split off into different versions, which dilutes the number of people looking at any one version.
Williams’s own research has shown how Dropbox and similar services could be used to slip malicious software through corporate firewalls because they are configured to use the same route as Web traffic, which usually gets a free pass (see “Dropbox Can Sync Malware”). BitTorrent Sync is configured slightly differently, he says, and so likely doesn’t automatically open up a channel to the Internet. However, “BitTorrent Sync will likely require changes to the firewall in any moderately secure network,” he notes.
Posted by Patrick Keller
in Culture & society, Science & technology
at 15:06
Defined tags for this entry: computing, culture & society, data, internet, mobility, privacy, science & technology
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research. We curate and reblog articles, research, writings, exhibitions, and projects that we notice and find interesting in our everyday practice and readings. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking, and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.