As this blog still lacks a decent search engine and we don't use a "tag cloud" ... this post can help you navigate the updated content of | rblg (as of 09.2023), via all its tags!
FIND BELOW ALL THE TAGS THAT CAN BE USED TO NAVIGATE IN THE CONTENTS OF | RBLG BLOG:
(to be seen just below if you're browsing the blog's html pages, or here for rss readers)
--
Note that we had to hit the "pause" button on our reblogging activities a while ago, mainly because we ran out of time, but also because we received complaints from a major stock-image company about some images displayed on | rblg (an activity we still considered "fair use": we've never made any money or run advertising on this site).
Nevertheless, we continue to publish, from time to time, information about the activities of fabric | ch, or content directly related to its work (documentation).
Note: speaking about time, not in time, out of time, etc., and as a late tribute to Stephen Hawking, here is a mischievous experiment of his regarding the possibilities of time travel.
Who among us has never fantasized about traveling through time? But then, who among us hasn't traveled through time? Every single one of us is a time traveler, technically speaking, moving as we do through one second per second, one hour per hour, one day per day. Though I never personally heard the late Stephen Hawking point out that fact, I feel almost certain that he did, especially in light of one particular piece of scientific performance art he pulled off in 2009: throwing a cocktail party for time travelers — the proper kind, who come from the future.
"Hawking’s party was actually an experiment on the possibility of time travel," writes Atlas Obscura's Anne Ewbank. "Along with many physicists, Hawking had mused about whether going forward and back in time was possible. And what time traveler could resist sipping champagne with Stephen Hawking himself?" "
By publishing the party invitation in his mini-series Into the Universe With Stephen Hawking, Hawking hoped to lure futuristic time travelers. You are cordially invited to a reception for Time Travellers, the invitation read, along with the the date, time, and coordinates for the event. The theory, Hawking explained, was that only someone from the future would be able to attend."
Alas, no time travelers turned up. Since someone possessed of that technology at any point in the future would theoretically be able to attend, does Hawking's lonely party, which you can see in the clip above, prove that time travel will never become possible? Maybe — or maybe the potential time-travelers of the future know something about the space-time-continuum-threatening risks of the practice that we don't. As for Dr. Hawking, I have to imagine that he came away satisfied from the shindig, even though his hoped-for Ms. Universe from the future never walked through the door. “I like simple experiments… and champagne,” he said, and this champagne-laden simple experiment will continue to remind the rest of us to enjoy our time on Earth, wherever in that time we may find ourselves.
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
Note: I'll move this afternoon to Grandhotel Giessbach (sounds like a Wes Anderson movie) to present later tonight the temporary results of the research I'm jointly leading with Nicolas Nova for ECAL & HEAD - Genève, in partnership with EPFL-ECAL Lab & EPFL: Inhabiting and Interfacing the Cloud(s). Looking forward to meeting the Swiss design research community (mainly) at the hotel...
Christophe Guignard and I will have the pleasure of presenting the temporary results of the design research Inhabiting & Interfacing the Cloud(s) next Thursday (28.01.2016) at the Swiss Design Network conference.
The conference will take place at Grandhotel Giessbach, above Lake Brienz, where we'll focus on the research process, fully articulated around the practice of design (with the participation of students, in the case of I&IC), and on the process of the project.
This will apparently happen between "dinner" and "bar", as we'll present a "Fireside Talk" at 9pm. Can't wait to do and see that...
The full program and proceedings (pdf) of the conference can be accessed HERE.
As with previous events, we'll try to post a short "follow up" on this documentary blog after the event.
Long introductory note: we all know how important data have become, and how much we currently need open tools to declare and use static or dynamic data ...
There was once a community data service named Pachube, but it was sold and its community commodified... There have been initiatives by designers, like Berg's, around the idea of electronic tools, cloud and data services (Berg Cloud), but it was funded by venture capitalists and went bankrupt, unfortunately bringing down the design studio as well. There are some good, simple and interesting online services too, like Dweet.io, but these are companies that will eventually need to make money out of your data (either by targeted advertising or by later commodification of the community), as this is one of their main products ...
So we needed a tool for our own work at fabric | ch that would remain just what it is supposed to be: a tool... As we use a lot of dynamic and static data (any kind of data) in our own architectural & interaction works, we needed something simple to use, that we could manage ourselves, and that would hopefully not cost much to keep running ...
Following what we already did for many previous projects, for which we designed soft technologies and then publicly released them (Rhizoreality, I-Weather v. 2001, I-Weather v. 2009 and related apps, Deterritorialized Living), and which, we should stress, we never tried to sell in any manner, we've designed our own data service: Datadroppers (http://www.datadroppers.org). We built it first for our own needs, then released it online as well. Free to use ...
We thought of it as a data commune... trying to keep it as "socially flat" as possible: there is no login, no password, no terms of service, no community, no profiles, no "friends", almost no rules, etc. ... only one statement: "We are the data droppers / Open inputs-outputs performers / We drop off and we pick up / Migrant citizens of the data commune", which also becomes the interface of the service ...
It is a data commune, but not a "community". From a "market product" point of view it is "unsocial", almost uninteresting to commodify later. Yet there is still one single rule (to keep the service simple and costless to run): once you publish your data on the site, they become public (for everybody, including third-party services that won't necessarily follow the same open rules) and you won't be able to erase them, as they'll be part of the commune and may be used by other "data communards" as well. They'll stay online as long as the service does (for example, I-Weather has been online for 14 years now). So only declare on Datadroppers raw data that you consider public ...
The service, directly developed on the basis of previous projects we did, was first published and used last June, for an exhibition at the Haus der elektronischen Künste in Basel (Switzerland). It is hosted in Lausanne, Switzerland, under strict data laws. There are very few data on the site at this time, only the ones we published from the exhibition (as a test, you can for example try a data search using "Raspberry Pi" as a string in the Search data section, which will return live sensor data). We will certainly continue to use the service for future works at fabric | ch; maybe it will be useful for you too? ...
The tool is fully functional at this time, but not entirely complete yet. We expect to release Javascript and Processing libraries later on, to ease the use of the service when developing applications ...
The "communal service" is in fact a statement, the statement becomes the navigation interface. The two main sections of the website are composed by the parts in which you can play with or search for data.
We drop off and we pick up is the area where one can see what can be achieved with data. It is possible either to declare (drop off) data and tag them, or to retrieve them (pick up) (image above). You can also Search data following different criteria (below).
Usual data will certainly be live feeds from sensors, like the one in the top image (i.e. value: lumen). But you could certainly do more interesting things, either when you create data or when you use them. The two images above concern "curiosity" data. They were captured within an exhibition (see below) and are already partially interpreted data (i.e. if you leave a connected button with no explanation in the exhibition space and people press it, well... they are curious). As another example, we also recorded data about "transgression" in the same exhibition: a small digital screen blinks "don't touch" in red, while an attached sensor, obviously connected to the screen, can indeed be touched. Childish transgression, and slightly meaningless I must admit... It was just a test.
But you could also declare other types of data, any type at all, using complementary tools. You could for example declare each new image or file within an open cloud service and start cascading things. Or you could start thinking about data as "built" artifacts... like we did in a recent project (see below, Deterritorialized Living) that is delivered in the form of data. Or, of course, you could also drop off static data that you would like to store and make accessible to a larger community.
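As an illustration, a script talking to such a data service over HTTP might look like the following minimal Python sketch (using the `requests` package). The endpoint paths and field names here are hypothetical stand-ins, not the documented Datadroppers API; check datadroppers.org before relying on them.

```python
# Minimal sketch of dropping off and picking up data over HTTP.
# The endpoint paths and field names below are hypothetical.
import requests

BASE = "http://www.datadroppers.org"

# Drop off: declare a tagged value, e.g. a light-sensor reading.
# Remember the single rule: once dropped, the data are public for good.
requests.post(BASE + "/dropoff", data={
    "tags": "lausanne,sensor,lumen",   # hypothetical field names
    "value": 412,
})

# Pick up: search the commune's data, as with "Raspberry Pi" above.
result = requests.get(BASE + "/pickup", params={"search": "Raspberry Pi"})
print(result.text)
```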
The possibilities seem, in fact, quite large.
Datadroppers as a commune could even be considered a micro-society or nation. It comes with a downloadable "flag", if you wish to manifest your attachment to its philosophy or plant it in your datacenter!
Finally, I must mention the project that initiated Datadroppers, both because we developed the rules of the data-sharing service during that project (Link > follow "Access to open data feeds") and because it is probably one of the most interesting uses of Datadroppers so far...
Deterritorialized Living is an artificial yet livable troposphere that is delivered in the form of data. It is just as if we had installed atmospheric sensors in a real environment, except that the environment doesn't exist (yet): the environment is the project. The process is therefore reversed within this almost geo-engineered climate, which follows different rules than our earth/cosmos-driven everyday atmosphere. The open data feed comes first; the environment can be set up from it later, by fabric | ch or by another designer, as the feed is open. We plan to use this feed and materialize it through different installations, as we have already started to do.
So, for now, this fictive data flow of a designed atmosphere is also delivered as a feed (again: Search data > Deterritorialized), among others (some "real", some not), within the web service offered by Datadroppers.
"In some ways, there’s a bug in the open source ecosystem. Projects start when developers need to fix a particular problem, and when they open source their solution, it’s instantly available to everyone. If the problem they address is common, the software can become wildly popular in a flash — whether there is someone in place to maintain the project or not. So some projects never get the full attention from developers they deserve. “I think that is because people see and touch Linux, and they see and touch their browsers, but users never see and touch a cryptographic library,” says Steve Marquess, one of the OpenSSL foundation’s partners."
How Heartbleed Broke the Internet — And Why It Can Happen Again
Illustration: Ross Patton/WIRED
Stephen Henson is responsible for the tiny piece of software code that rocked the internet earlier this week (note: early last month).
The key moment arrived at about 11 o’clock on New Year’s Eve, 2011. With 2012 just minutes away, Henson received the code from Robin Seggelmann, a respected academic who’s an expert in internet protocols. Henson reviewed the code — an update for a critical internet security protocol called OpenSSL — and by the time his fellow Britons were ringing in the New Year, he had added it to a software repository used by sites across the web.
Two years would pass before the rest of the world discovered this, but this tiny piece of code contained a bug that would cause massive headaches for internet companies worldwide, give conspiracy theorists a field day, and, well, undermine our trust in the internet. The bug is called Heartbleed, and it's bad. People have used it to steal passwords and usernames from Yahoo. It could let a criminal slip into your online bank account. And in theory, it could even help the NSA or China with their surveillance efforts.
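For the technically curious, the flaw itself was small: OpenSSL's heartbeat handler echoed back as many bytes as the request claimed to contain, without checking the claim against the payload actually sent. The following Python fragment is a simplified illustration of that logic, not the actual OpenSSL C code.

```python
# Simplified illustration of the Heartbleed logic (Python, not the real
# OpenSSL C code). Process memory is simulated by one bytearray: the
# heartbeat payload sits next to unrelated secrets.
memory = bytearray(b"hi" + b"|secret-password=hunter2|private-key=MIIEpAIB...")

def heartbeat_response(claimed_length: int) -> bytes:
    # The bug: trust the length the sender *claims*, instead of the
    # two bytes actually sent, and echo that many bytes back.
    return bytes(memory[0:claimed_length])

print(heartbeat_response(2))   # b'hi'  -- an honest request
print(heartbeat_response(40))  # leaks the adjacent "memory" as well
```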
It’s no surprise that a small bug would cause such huge problems. What’s amazing, however, is that the code that contained this bug was written by a team of four coders that has only one person contributing to it full-time. And yet Henson’s situation isn’t an unusual one. It points to a much larger problem with the design of the internet. Some of its most important pieces are controlled by just a handful of people, many of whom aren’t paid well — or aren’t paid at all. And that needs to change. Heartbleed has shown — so very clearly — that we must add more oversight to the internet’s underlying infrastructure. We need a dedicated and well-funded engineering task force overseeing not just online encryption but many other parts of the net.
The sad truth is that open source software — which underpins vast swathes of the net — has a serious sustainability problem. While well-known projects such as Linux, Mozilla, and the Apache web server enjoy hundreds of millions of dollars of funding, there are many other important projects that just don’t have the necessary money — or people — behind them. Mozilla, maker of the Firefox browser, reported revenues of more than $300 million in 2012. But the OpenSSL Software Foundation, which raises money for the project’s software development, has never raised more than $1 million in a year; its developers have never all been in the same room. And it’s just one example.
In some ways, there’s a bug in the open source ecosystem. Projects start when developers need to fix a particular problem, and when they open source their solution, it’s instantly available to everyone. If the problem they address is common, the software can become wildly popular in a flash — whether there is someone in place to maintain the project or not. So some projects never get the full attention from developers they deserve. “I think that is because people see and touch Linux, and they see and touch their browsers, but users never see and touch a cryptographic library,” says Steve Marquess, one of the OpenSSL foundation’s partners.
Another Popular, Unfunded Project
Take another piece of software you’ve probably never heard of called Dnsmasq. It was kicked off in the late 1990s by a British systems administrator named Simon Kelley. He was looking for a way for his Netscape browser to tell him whenever his dial-up modem had become disconnected from the internet. Scroll forward 15 years and 30,000 lines of code, and now Dnsmasq is a critical piece of network software found in hundreds of millions of Android mobile phones and consumer routers.
Kelley quit his day job only last year when he got a nine-month contract to do work for Comcast, one of several gigantic internet service providers that ships his code in its consumer routers. He doesn’t know where his paycheck will come from in 2015, and he says he has sympathy for the OpenSSL team, developing critical and widely used software with only minimal resources. “There is some responsibility to be had in writing software that is running as root or being exposed to raw network traffic in hundreds of millions of systems,” he says. Fifteen years ago, if there was a bug in his code, he’d have been the only person affected. Today, it would be felt by hundreds of millions. “With each release, I get more nervous,” he says.
Money doesn’t necessarily buy good code, but it pays for software audits and face-to-face meetings, and it can free up open-source coders from their day jobs. All of this would be welcome at the OpenSSL project, which has never had a security audit, Marquess says. Most of the Foundation’s money comes from companies asking for support or specific development work. Last year, only $2,000 worth of donations came in with no strings attached. “Because we have to produce specific deliverables that doesn’t leave us the latitude to do code audits, security reviews, refactoring: the unsexy activities that lead to a quality code base,” he says.
The problem is also preventing some critical technologies from being added to the internet. Jim Gettys says that a flaw in the way many routers are interacting with core internet protocols is causing a lot of them to choke on traffic. Gettys and a developer named Dave Taht know how to fix the issue — known as Bufferbloat — and they’ve started work on the solution. But they can’t get funding. “This is a project that has fallen through the cracks,” he says, “and a lot of the software that we depend on falls through the cracks one way or another.”
Earlier this year, the OpenBSD operating system, used by security-conscious folks on the internet, nearly shut down after being hit by a $20,000 power bill. Another important project, a Linux distribution for routers called OpenWrt, is also "badly underfunded," Gettys says.
Gettys should know. He helped standardize the protocols that underpin the web and build core components of the Unix operating system, which now serves as the basis for everything from the iPhone to the servers that drive the net. He says there’s no easy answer to the problem. “I think there are ways to put money into the ecosystem,” he says, “but getting people to understand the need has been difficult.”
Eric Raymond, a coder and founder of the Open Source Initiative, agrees. “The internet needs a dedicated civil-engineering brigade to be actively hunting for vulnerabilities like Heartbleed and Bufferbloat, so they can be nailed before they become serious problems,” he said via email. After this week, it’s hard to argue with him.
On Thursday, April 10, NASA is set to reveal its enormous database of software from more than 1,000 of its projects (probably including rocket guidance, robotic control and/or climate simulators).
In the nineteen-seventies, the Internet was a small, decentralized collective of computers. The personal-computer revolution that followed built upon that foundation, stoking optimism encapsulated by John Perry Barlow’s 1996 manifesto “A Declaration of the Independence of Cyberspace.” Barlow described a chaotic digital utopia, where “netizens” self-govern and the institutions of old hold no sway. “On behalf of the future, I ask you of the past to leave us alone,” he writes. “You are not welcome among us. You have no sovereignty where we gather.”
This is not the Internet we know today. Nearly two decades later, a staggering percentage of communications flow through a small set of corporations—and thus, under the profound influence of those companies and other institutions. Google, for instance, now comprises twenty-five per cent of all North American Internet traffic; an outage last August caused worldwide traffic to plummet by around forty per cent.
Engineers anticipated this convergence. As early as 1967, one of the key architects of the system for exchanging small packets of data that gave birth to the Internet, Paul Baran, predicted the rise of a centralized “computer utility” that would offer computing much the same way that power companies provide electricity. Today, that model is largely embodied by the information empires of Amazon, Google, and other cloud-computing companies. Like Baran anticipated, they offer us convenience at the expense of privacy.
Internet users now regularly submit to terms-of-service agreements that give companies license to share their personal data with other institutions, from advertisers to governments. In the U.S., the Electronic Communications Privacy Act, a law that predates the Web, allows law enforcement to obtain without a warrant private data that citizens entrust to third parties—including location data passively gathered from cell phones and the contents of e-mails that have either been opened or left unattended for a hundred and eighty days. As Edward Snowden’s leaks have shown, these vast troves of information allow intelligence agencies to focus on just a few key targets in order to monitor large portions of the world’s population.
One of those leaks, reported by the Washington Post in late October (2013), revealed that the National Security Agency secretly wiretapped the connections between data centers owned by Google and Yahoo, allowing the agency to collect users’ data as it flowed across the companies’ networks. Google engineers bristled at the news, and responded by encrypting those connections to prevent future intrusions; Yahoo has said it plans to do so by next year. More recently, Microsoft announced it would do the same, as well as open “transparency centers” that will allow some of its software’s source code to be inspected for hidden back doors. (However, that privilege appears to only extend to “government customers.”) On Monday, eight major tech firms, many of them competitors, united to demand an overhaul of government transparency and surveillance laws.
Still, an air of distrust surrounds the U.S. cloud industry. The N.S.A. collects data through formal arrangements with tech companies; ingests Web traffic as it enters and leaves the U.S.; and deliberately weakens cryptographic standards. A recently revealed document detailing the agency’s strategy specifically notes its mission to “influence the global commercial encryption market through commercial relationships” with companies developing and deploying security products.
One solution, espoused by some programmers, is to make the Internet more like it used to be—less centralized and more distributed. Jacob Cook, a twenty-three-year-old student, is the brains behind ArkOS, a lightweight version of the free Linux operating system. It runs on the credit-card-sized Raspberry Pi, a thirty-five dollar microcomputer adored by teachers and tinkerers. It’s designed so that average users can create personal clouds to store data that they can access anywhere, without relying on a distant data center owned by Dropbox or Amazon. It’s sort of like buying and maintaining your own car to get around, rather than relying on privately owned taxis. Cook’s mission is to “make hosting a server as easy as using a desktop P.C. or a smartphone,” he said.
Like other privacy advocates, Cook’s goal isn’t to end surveillance, but to make it harder to do en masse. “When you couple a secure, self-hosted platform with properly implemented cryptography, you can make N.S.A.-style spying and network intrusion extremely difficult and expensive,” he told me in an e-mail.
Persuading consumers to ditch the convenience of the cloud has never been an easy sell, however. In 2010, a team of young programmers announced Diaspora, a privacy-centric social network, to challenge Facebook’s centralized dominance. A year later, Eben Moglen, a law professor and champion of the Free Software movement, proposed a similar solution called the Freedom Box. The device he envisioned was to be a small computer that plugs into your home network, hosting files, enabling secure communication, and connecting to other boxes when needed. It was considered a call to arms—you alone would control your data.
But, while both projects met their fund-raising goals and drummed up a good deal of hype, neither came to fruition. Diaspora’s team fell into disarray after a disappointing beta launch, personal drama, and the appearance of new competitors such as Google+; apart from some privacy software released last year, Moglen’s Freedom Box has yet to materialize at all.
“There is a bigger problem with why so many of these efforts have failed” to achieve mass adoption, said Brennan Novak, a user-interface designer who works on privacy tools. The challenge, Novak said, is to make decentralized alternatives that are as secure, convenient, and seductive as a Google account. “It’s a tricky thing to pin down,” he told me in an encrypted online chat. “But I believe the problem exists somewhere between the barrier to entry (user-interface design, technical difficulty to set up, and over-all user experience) versus the perceived value of the tool, as seen by Joe Public and Joe Amateur Techie.”
One of Novak’s projects, Mailpile, is a crowd-funded e-mail application with built-in security tools that are normally too onerous for average people to set up and use—namely, Phil Zimmermann’s revolutionary but never widely adopted Pretty Good Privacy. “It’s a hard thing to explain…. A lot of peoples’ eyes glaze over,” he said. Instead, Mailpile is being designed in a way that gives users a sense of their level of privacy, without knowing about encryption keys or other complicated technology. Just as important, the app will allow users to self-host their e-mail accounts on a machine they control, so it can run on platforms like ArkOS.
“There already exist deep and geeky communities in cryptology or self-hosting or free software, but the message is rarely aimed at non-technical people,” said Irina Bolychevsky, an organizer for Redecentralize.org, an advocacy group that provides support for projects that aim to make the Web less centralized.
Several of those projects have been inspired by Bitcoin, the math-based e-money created by the mysterious Satoshi Nakamoto. While the peer-to-peer technology that Bitcoin employs isn’t novel, many engineers consider its implementation an enormous technical achievement. The network’s “nodes”—users running the Bitcoin software on their computers—collectively check the integrity of other nodes to ensure that no one spends the same coins twice. All transactions are published on a shared public ledger, called the “block chain,” and verified by “miners,” users whose powerful computers solve difficult math problems in exchange for freshly minted bitcoins. The system’s elegance has led some to wonder: if money can be decentralized and, to some extent, anonymized, can’t the same model be applied to other things, like e-mail?
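As a rough illustration of what those "difficult math problems" look like, here is a toy proof-of-work miner in Python; real Bitcoin mining uses double SHA-256 over block headers and a vastly higher difficulty.

```python
# Toy proof-of-work "miner": find a nonce such that the SHA-256 hash of
# (block data + nonce) falls below a target. Real Bitcoin uses double
# SHA-256 over block headers and a far higher difficulty.
import hashlib

def mine(block_data: str, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)   # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                    # hard to find...
        nonce += 1

nonce = mine("prev-block-hash|transactions")
print(nonce)  # ...but trivial for every other node to verify
```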
Bitmessage is an e-mail replacement proposed last year that has been called "the Bitcoin of online communication." Instead of talking to a central mail server, Bitmessage distributes messages across a network of peers running the Bitmessage software. Unlike both Bitcoin and e-mail, Bitmessage "addresses" are cryptographically derived sequences that help encrypt a message's contents automatically. That means that many parties help store and deliver the message, but only the intended recipient can read it. Another option obscures the sender's identity; an alternate address sends the message on her behalf, similar to the anonymous "re-mailers" that arose from the cypherpunk movement of the nineteen-nineties.
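The principle can be sketched with off-the-shelf cryptography: an address-like identifier derives from a public key, and anyone can relay the ciphertext, but only the key holder can read it. The sketch below uses the PyNaCl library as an analogy; it is not Bitmessage's actual address or wire format.

```python
# Analogy for Bitmessage-style addressing using PyNaCl (pip install pynacl).
# Not Bitmessage's actual address or wire format.
import hashlib
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()
# An address-like string derived from the recipient's public key:
address = hashlib.sha256(bytes(recipient_key.public_key)).hexdigest()[:24]
print(address)

ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet at noon")
# Peers store and forward `ciphertext` without being able to read it;
# only the recipient's private key recovers the plaintext:
print(SealedBox(recipient_key).decrypt(ciphertext))  # b'meet at noon'
```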
Another ambitious project, Namecoin, is a P2P system almost identical to Bitcoin. But instead of currency, it functions as a decentralized replacement for the Internet’s Domain Name System. The D.N.S. is the essential “phone book” that translates a Web site’s typed address (www.newyorker.com) to the corresponding computer’s numerical I.P. address (192.168.1.1). The directory is decentralized by design, but it still has central points of authority: domain registrars, which buy and lease Web addresses to site owners, and the U.S.-based Internet Corporation for Assigned Names and Numbers, or I.C.A.N.N., which controls the distribution of domains.
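That "phone book" lookup is a one-liner from Python's standard library, which makes plain what Namecoin proposes to decentralize:

```python
# The D.N.S. lookup described above, via Python's standard library:
import socket

print(socket.gethostbyname("www.newyorker.com"))  # the site's current numerical address
```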
The infrastructure does allow for large-scale takedowns, like in 2010, when the Department of Justice tried to seize ten domains it believed to be hosting child pornography, but accidentally took down eighty-four thousand innocent Web sites in the process. Instead of centralized registrars, Namecoin uses cryptographic tokens similar to bitcoins to authenticate ownership of “.bit” domains. In theory, these domain names can’t be hijacked by criminals or blocked by governments; no one except the owner can surrender them.
Solutions like these follow a path different from Mailpile and ArkOS. Their peer-to-peer architecture holds the potential for greatly improved privacy and security on the Internet. But existing apart from commonly used protocols and standards can also preclude any possibility of widespread adoption. Still, Novak said, the transition to an Internet that relies more extensively on decentralized, P2P technology is “an absolutely essential development,” since it would make many attacks by malicious actors—criminals and intelligence agencies alike—impractical.
Though Snowden has raised the profile of privacy technology, it will be up to engineers and their allies to make that technology viable for the masses. “Decentralization must become a viable alternative,” said Cook, the ArkOS developer, “not just to give options to users that can self-host, but also to put pressure on the political and corporate institutions.”
“Discussions about innovation, resilience, open protocols, data ownership and the numerous surrounding issues,” said Redecentralize’s Bolychevsky, “need to become mainstream if we want the Internet to stay free, democratic, and engaging.”
The project, which consists of an "artificial troposphere" that reverses our causal relationship to the rhythms of day and night, air, seasons and time (based on real-time global network activity by both humans and robots, and delivered in the form of open data feeds, fictional data in some ways), was displayed accompanied by videos of former projects by fabric | ch.
Specifically, we took the occasion to complete an electromagnetic sample of Deterritorialized Daylight, based on its data feed.
The simple spatialization took the form of two strong controllable projectors and two light reflectors. These were the only sources of light in the exhibition space, accompanied by five screens that displayed the different data feeds and the interactive version of Deterritorialized Daylight (a controllable intensity over the last 13 hours). Two small but intense "suns", an "eclipse" and a "waning moon" seemed to appear in the space at the same time.
The variable intensity of the light defined a pattern of illumination within the exhibition room. The display tables were placed within this pattern in an apparently random manner, yet actually followed it, according to their own reflection potential and their exhibition program.
Exhibition after exhibition, we plan to develop physical samples of the data feeds and materialize the "geoengineered" troposphere. We will also look into architectural explorations of this "geoengineered" climate: architectural environments that will locate themselves within, or simply use, this deterritorialized atmosphere.
Shapeoko was the little milling machine that could. It surpassed its Kickstarter goal and went into production with the goal of supplying CNC mill fans with an easy-to-use and inexpensive ($300) CNC machine.
Two years after the Kickstarter campaign concluded, creator Edward Ford has joined forces with Inventables to build the Shapeoko 2, which goes on pre-sale today. The second version features a completely redesigned Z-axis, dual Y-axis steppers, as well as Inventables’ MakerSlide linear bearing system.
If you're in Chicago today (note: last Monday), Inventables will be holding a Shapeoko 2 launch event where you'll get the opportunity to see the machine in action. You can also pre-order the kit. The price is $300 for just the mechanics (add your own electronics), or $650 for a full kit.
arkOS is an open source project designed to let its users take control of their personal data and make running a home server as easy as using a PC.
At the start of this year, analyst firm Gartner predicted that over the next four years a total of US$677 billion would be spent on cloud services. The growth of 'things-as-a-service' is upending enterprise IT and creating entirely new, innovative business models. At the same time, social networks such as Facebook and Twitter have built massive user bases, and created databases that are home to enormous amounts of information about account holders.
Collectively, all of this means that people's data, and the services they use with it, are more likely than ever to be found outside of home PCs and other personal devices, housed in servers that they will probably never see, let alone touch. But, when everything is delivered as a service, people's control and even ownership of their data gets hazy to say the least.
Earlier this year NSA whistle-blower Edward Snowden offered some insight – in revelations that probably surprised few but still outraged many – into the massive level of data collection and analysis carried out by state actors.
arkOS is not a solution to the surveillance state, but it does offer an alternative to those who would rather exercise some measure of control over their data and, at the very least, not lock away their information in online services where its retrieval and use is at the whim of a corporation, not the user.
arkOS is a Linux-based operating system currently in alpha, created by Jacob Cook and the CitizenWeb Project. It's designed to run on a Raspberry Pi – a super-low-cost single board computer – and ultimately will let users, even of the non-technical variety, run from within their homes the email, social networking, storage and other services that are increasingly getting shunted out into the cloud.
CitizenWeb Project
Cook is the founder of the CitizenWeb Project, whose goal is to promote a more decentralised and democratic Internet.
"It does this by encouraging developers that work on tools to these ends, offering an 'umbrella' to aid with management and publicity for these projects," Cook says.
"Decentralisation rarely gets any attention, even within the tech community, and it was even more obscure before the NSA scandal broke a few months ago," he adds.
The best way to promote decentralisation "is to provide great platforms with great experiences that can compete with those larger providers," Cook says.
"This may seem like an impossible task for the open source development community, especially without the head start that the platforms have, but I believe it is entirely doable.
"We produce the best tools in the world – far better than any proprietary solutions can give – but there is a huge gap with these tools that the majority of the population cannot cross.
"When we tell them, 'oh, using this tool is as easy as installing a Python module on your computer,' for us geeks that is incredibly easy, but for most people, you lost them at the word Python and you will never get them back.
"So the momentum toward using centralised platforms will not relent until developers start making tools for a wider audience. Experience and usability is every bit as important as features or functionality."
arkOS is the CitizenWeb Project's first major initiative but more are on the way. "There are quite a few planned that have nothing to do with arkOS," Cook says.
"I've been working on arkOS since about February of this year, which was a few months before the [NSA] revelations," Cook says.
The birth of arkOS
There were two things that spurred work on arkOS.
"The first was my decision to set up my own home server to host all of my data a few years ago," Cook explains.
"I had a good deal of experience with Linux and system administration, but it still took a huge amount of time and research to get the services I wanted set up, and secured properly.
"This experience made me realise, if I have background in these things and it takes me so long to do it, it must be impossible for individuals who don't have the expertise and the time that I do to work things out."
The second was the push by corporations "to own every aspect of one's online life."
"Regardless of your personal feelings about Google, Facebook, etc., there have been countless examples of these services closing themselves off from each other, creating those 'walled gardens' that give them supreme control over your data," Cook says.
"This might not bother people, until we find out what we did from Snowden, that this data doesn't always rest with them and that as long as there is a single point of failure, you always have to rely on 'trusting' your provider.
"I don't know about you, but I wouldn't trust a company that is tasked to sell me things to act in my best interest."
"All that being said, the NSA revelations have really provided a great deal of interest to the project. In all of the networks and communities that I have been through since the scandal broke, people are clamouring for an easy way to self-host things at home. It shouldn't have to be rocket science. I hope that arkOS can represent part of the solution for them."
The aim of the project is an easy-to-use server operating system that can let people self-host their own services with the ease that someone might install a regular desktop application.
"Hosting one's own websites, email, cloud data, etc. from home can be a very time-consuming and occasionally expensive endeavour," Cook says.
"Not to mention the fact that it takes a good amount of knowledge and practice to do properly and securely. arkOS lets you set up these systems just like you do on your home computer or your smartphone, when you install something from an app store. It 'just works' with minimal configuration.
"There is no good reason why server software shouldn't be able to have the same experience."
Making servers simple
The OS is "all about simplicity" straight out of the box, Cook says.
"For example, on the Raspberry Pi, hosting server software that routinely writes to log files can quickly wear out your SD card. So arkOS caches them in memory to make as few writes as possible, and it does this from its first boot."
The team is building a range of tools that make it easy to manage an arkOS server. These include Beacon, which lets users find other arkOS servers on a local network, and Genesis, a GUI management system for arkOS.
Genesis is the "most important part" of the OS, Cook says. "It's the tool that does all the heavy lifting for you – installing new apps and software with one click, automatically configuring security settings, giving wizards for navigating through lengthy setup [processes].
"The goal with Genesis is to allow you to do anything you want with your server in an easy and straightforward way, without even having to think about touching the command line. It runs locally on the arkOS server, accessible through the browser of your home computer."
There are more tools for arkOS on the way, Cook says.
"Any one of these tools can be made to work with other distros; the key is that they are available in the default working environment with no additional setup or bother on the user's part."
At the moment the system is still very much in alpha. "It is minimally stable and still getting most of its major features piled in," Cook says. Despite it being early days, the reception so far has been "very positive".
"It's been downloaded several hundred times, ostensibly by intrepid people willing to try out the framework and see if they can produce bugs," he says.
At the moment, Cook is leading the arkOS project and also doing the bulk of the development work on Genesis.
"Aside from myself, there are other individuals who contribute features when they are able, like working on Deluge or putting together plugins to use with Genesis," he says.
He is interested in finding more people to help out with the components of arkOS, particularly people with Python and Golang experience, as both are used extensively. He's also interested in sysadmins or Linux veterans to help manage repositories, with an eye to expanding the operating system to other architectures.
"Web design is also a big one, both for the Genesis front-end as well as our Web properties and outreach efforts. Even non-tech people can lend a hand with outreach, community support and the like. No offer of help will be refused so people can be in touch confidently," he adds.
Looking beyond alpha
arkOS is under active development but the OS is still at a "very experimental" stage. Most of Cook's time is spent working on frameworks for Genesis, with a goal of completing its major frameworks by the end of this year and releasing a beta of arkOS.
A major sub-project the team is working on is called Deluge: a dynamic DNS service and port proxy for users who don't have access to their own domain name or static IPs.
"This would make putting your services online truly simple and hassle-free," Cook says.
"I am working on the security framework right now, allowing users to easily segment their services based on the zone that they should be available to. For example, you can set your ownCloud site that you run with arkOS to only be available on your home network, while your Jeykll blog should be available to everyone.
"Then comes the certificates system, easily making SSL certs available to your different applications."
"Beyond that, most of what I will be working on is plugins that do certain things. Email is a really big thing, something that nearly everyone who asks about arkOS is interested in self-hosting. With the NSA revelations it isn't hard to see why."
Other features to be included in arkOS include XMPP chat server hosting, Radicale (calendar/contacts hosting), automatic backups, internationalisation, Tor integration, "and much, much more."
Contact Rohan Pearce at rohan_pearce at idg.com.au or follow him on Twitter: @rohan_p
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in the course of our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a set of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.