Thursday, July 03. 2014
Note: I'm happy to learn that I'm not a "social capitalist"! I am not a "regular capitalist" either...
Via MIT Technology Review
Social capitalists on Twitter are inadvertently ruining the network for ordinary users, say network scientists.
A couple of years ago, network scientists began to study the phenomenon of “link farming” on Twitter and other social networks. This is the process in which spammers gather as many links or followers as possible to help spread their messages.
What these researchers discovered on Twitter was curious. They found that link farming was common among spammers. However, most of the people who followed the spam accounts came from a relatively small pool of human users on Twitter.
These people turn out to be individuals who are themselves trying to amass social capital by gathering as many followers as possible. The researchers called these people social capitalists.
That raises an interesting question: how do social capitalists emerge and what kind of influence do they have on the network? Today we get an answer of sorts, thanks to the work of Vincent Labatut at Galatasaray University in Turkey and a couple of pals who have carried out the first detailed study of social capitalists and how they behave.
These guys say that social capitalists fall into at least two different categories that reflect their success and the roles they play in linking together diverse communities. But they warn that social capitalists have a dark side too.
First, a bit of background. Twitter has around 600 million users who send 60 million tweets every day. On average, each Twitter user has around 200 followers and follows a similar number, creating a dynamic social network in which messages percolate through the network of links.
Many of these people use Twitter to connect with friends, family, news organizations, and so on. But a few, the social capitalists, use the network purely to maximize their own number of followers.
Social capitalists essentially rely on two kinds of reciprocity to amass followers. The first is to reassure other users that if they follow this user, then he or she will follow them back, a process called Follow Me and I Follow You or FMIFY. The second is to follow anybody and hope they follow back, a process called I Follow You, Follow Me or IFYFM.
This process takes place regardless of the content of messages, which is how they get mixed up with spammers, a point that turns out to be significant later.
Clearly, social capitalists are different from Twitter users who choose to follow people based on the content they tweet. The question that Labatut and co set out to answer is how to automatically identify social capitalists in Twitter and to work out how they sit within the Twitter network.
A clear feature of the reciprocity mechanism is that there will be a large overlap between the friends and followers of social capitalists. It’s possible to measure this overlap and categorize users accordingly. Social capitalists tend to have an overlap much closer to 100 percent than ordinary users.
One final way to categorize them is by their level of success. Here, Labatut and co set an arbitrary threshold of 10,000 followers. Social capitalists with more followers than this are obviously more successful than those with fewer.
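To make the classification concrete, here is a minimal Python sketch of how such a detector might work. The 10,000-follower threshold comes from the paper, but the Jaccard-style overlap index and the 0.8 cutoff are illustrative assumptions; Labatut and co may use a different overlap measure.

```python
def overlap_ratio(friends: set, followers: set) -> float:
    """Fraction of reciprocated links; social capitalists approach 1.0.

    Jaccard-style index chosen for illustration; the paper's exact
    overlap measure may differ.
    """
    if not friends and not followers:
        return 0.0
    return len(friends & followers) / len(friends | followers)


def classify(friends: set, followers: set, cutoff: float = 0.8) -> str:
    """Rough split into the categories described in the article."""
    if overlap_ratio(friends, followers) < cutoff:   # low reciprocity:
        return "ordinary user"                       # content-driven follower
    if len(followers) > 10_000:                      # the paper's threshold
        return "successful social capitalist"
    return "aspiring social capitalist"
```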
To study these groups, Labatut and co analyze an anonymized dataset of 55 million Twitter users with two billion links between them. And they find some 160,000 users who fit the description of social capitalist.
In particular, the team is interested in how social capitalists are linked to communities within Twitter, that is groups of users who are more strongly interlinked than average.
It turns out that there is a surprisingly large variety of social capitalists playing different roles. “We find out the different kinds of social capitalists occupy very specific roles,” say Labatut and co.
For example, social capitalists with fewer than 10,000 followers tend not to have large numbers of links within a single community but links to lots of different communities. By contrast, those with more than 10,000 followers can have a strong presence in single communities as well as link disparate communities together. In both cases, social capitalists are significant because their messages travel widely across the entire Twitter network.
That has important consequences for the Twitter network. Labatut and co say there is a clear dark side to the role of social capitalists. “Because of this lack of interest in the content produced by the users they follow, social capitalists are not healthy for a service such as Twitter,” they say.
That’s because they provide an indiscriminate conduit for spammers to peddle their wares. “[Social capitalists’] behavior helps spammers gain influence, and more generally makes the task of finding relevant information harder for regular users,” say Labatut and co.
That’s an interesting insight that raises a tricky question for Twitter and other social networks. Finding social capitalists should be straightforward now that Labatut and co have found a way to spot them automatically. But if social capitalists are detrimental, should their activities be restricted?
Ref: http://arxiv.org/abs/1406.6611 : Identifying the Community Roles of Social Capitalists in the Twitter Network
Ref: http://www.mpi-sws.org/~farshad/TwitterLinkfarming.pdf : Understanding and Combating Link Farming in the Twitter Social Network
Wednesday, June 18. 2014
Learning doesn’t necessarily need to be formal – or expensive, for that matter. Thanks to the Internet and some generous benefactors, you can further your education for free from the comfort of your own home. Top schools such as MIT and Harvard University are affiliated with free online learning resources, allowing people from all over the globe to connect and audit courses at their own pace. In some cases, these services even provide self-educators with proof of having completed courses. Keep reading after the break to check out our round-up of four free online learning resources.
In 2003, MIT officially launched OpenCourseWare – an online platform through which absolutely anyone can access the same course content as paying students – for free. The architecture section boasts over 100 undergraduate and graduate level courses, complete with downloadable lecture notes, assignments, reading lists, and in many cases, examples of past student work. Even though you won’t receive feedback from professors or certification for completing coursework, free access to the teachings of the oldest architecture department in the United States is nevertheless an amazing resource. Two of the MIT OpenCourseWare architecture courses are described below.
Architectural Construction and Computation is for architecture students interested in how computers can facilitate design and construction. The course begins with a pre-prepared computer model, which is used for testing and investigating the construction process. The construction process is explored in terms of detail design and structural design, taking legal and computational issues into consideration.
Theory of City Form is one of the handful of architecture courses offered in audio and video format through MIT OpenCourseWare. The title is pretty self-explanatory – the course presents students with historical and modern theories of city form along with appropriate case studies, helping them build an understanding of urbanism and architecture for future educational and professional pursuits.
Just like MIT, TU Delft also has an OpenCourseWare platform – albeit a less extensive one. Even though the website does not have a designated architecture section, designers can still make use of the prestigious school’s science and technical offerings. Available material for the majority of courses includes audio and video lecture recordings, readings, assignments, and practice exams.
Bio Inspired Design ”gives an overview of non-conventional mechanical approaches in nature and shows how this knowledge can lead to more creativity in mechanical design and to better (simpler, smaller, more robust) solutions than with conventional technology. It discusses a large number of biological organisms with smart constructions, unusual mechanisms or clever sensing and processing methods and presents a number of technical examples and designs of bio-inspired instruments and machines.”
Wastewater Treatment looks at the development of wastewater treatment technologies and their application. “High-tech and low-tech systems, which are applicable in both industrialized and developing countries, are discussed.” Specific examination topics include technologies for nutrient removal and recovery, such as anaerobic treatment systems and membrane filtration techniques.
EdX, a non-profit online initiative founded by MIT and Harvard University, offers free interactive classes from some of the world’s top universities. If you decide to take a course, you can try for a certificate of achievement – or you can simply audit it, choosing what and how much you want to do. It’s up to you. A huge benefit is being able to connect with like-minded classmates all over the world using the website’s peer-to-peer social learning tools. In addition to categories like computer science, music, and economics, they have a dedicated architecture section. Two of their architecture courses, described below, are currently open to fall registration.
The Search for Vernacular Architecture of Asia ”is a comprehensive, dialogue-based course providing an in-depth exploration of the vernacular concept and its applications to the culture and built environments of the past, present, and future. Designed to promote discussion and dialogue while contributing to the discourse surrounding the concept of the vernacular, this five-week course will challenge the perception of tradition and stimulate a deeper analysis of one’s local environment.” As suggested in the title, the course will focus specifically on the vernacular in Asia.
“While the development of cities in different parts of the world is moving in diverse directions, all estimations show that cities worldwide will change and grow strongly in the coming years” – especially in the tropics, where “it is expected that the number of new urban residents will increase by 3 times the population of Europe today.” With a specific focus on Asia, Future Cities will explore design and management methods over the course of nine weeks to increase the sustainable performance of cities and therefore, their resiliency.
Iversity is a platform similar to edX, offering a wide range of interactive courses in collaboration with independent instructors, universities, and knowledge-based companies. Dr. Ivan Shumkov, one of the website’s educators, is a New York-based architect, curator, and professor. He has taught at Harvard GSD, the Pratt Institute‘s School of Architecture, and Parsons The New School for Design – just to name a few. So far, he has offered two free architecture courses via Iversity, described below. Be sure to keep an eye out for his offerings in the future and take a look to see if any of the other courses appeal to you.
Contemporary Architecture analyzed “major contemporary architectural ideas, ideologies, and projects in the context of both globalization and specific local contexts” over a 12-week period. Students studied material from the 1990s onwards, submitting weekly assignments and sitting in on virtual classes and tours. Shumkov hopes to offer the course again after nearly 20,000 people from across the globe participated in its first iteration.
Designing Resilient Schools was taught by both Shumkov and Illac Diaz, the man behind the Liter of Light project in the Philippines, which won the Curry Stone Design Prize in 2012. The seven-week course asked students to collaborate on resilient school design proposals for the victims of Typhoon Haiyan, which hit the Philippines on November 8th, 2013. At the end of the course, which was essentially an online version of a design studio, an international jury – including Kenneth Frampton and Giancarlo Mazzanti – selected the best design proposals for future implementation.
Wednesday, February 26. 2014
Three years ago we published a post by Nicolas Nova about Salvador Allende's project Cybersyn, an attempt to build a cybernetic society (including feedback from the Chilean population) back in the early 1970s.
Here is another article and picture piece about this amazing project on Frieze. You'll need to buy the magazine to see the pictures, though!
Photograph of Cybersyn, Salvador Allende's attempt to create a 'socialist internet, decades ahead of its time'
This is a tantalizing glimpse of a world that could have been our world. What we are looking at is the heart of the Cybersyn system, created for Salvador Allende’s socialist Chilean government by the British cybernetician Stafford Beer. Beer’s ambition was to ‘implant an electronic nervous system’ into Chile. With its network of telex machines and other communication devices, Cybersyn was to be – in the words of Andy Beckett, author of Pinochet in Piccadilly (2003) – a ‘socialist internet, decades ahead of its time’.
Capitalist propagandists claimed that this was a Big Brother-style surveillance system, but the aim was exactly the opposite: Beer and Allende wanted a network that would allow workers unprecedented levels of control over their own lives. Instead of commanding from on high, the government would be able to respond to up-to-the-minute information coming from factories. Yet Cybersyn was envisaged as much more than a system for relaying economic data: it was also hoped that it would eventually allow the population to instantaneously communicate its feelings about decisions the government had taken.
In 1973, General Pinochet’s CIA-backed military coup brutally overthrew Allende’s government. The stakes couldn’t have been higher. It wasn’t only that a new model of socialism was defeated in Chile; the defeat immediately cleared the ground for Chile to become the testing ground for the neoliberal version of capitalism. The military takeover was swiftly followed by the widespread torture and terrorization of Allende’s supporters, alongside a massive programme of privatization and de-regulation. One world was destroyed before it could really be born; another world – the world in which there is no alternative to capitalism, our world, the world of capitalist realism – started to emerge.
There’s an aching poignancy in this image of Cybersyn now, when the pathological effects of communicative capitalism’s always-on cyberblitz are becoming increasingly apparent. Cloaked in a rhetoric of inclusion and participation, semio-capitalism keeps us in a state of permanent anxiety. But Cybersyn reminds us that this is not an inherent feature of communications technology. A whole other use of cybernetic systems is possible. Perhaps, rather than being some fragment of a lost world, Cybersyn is a glimpse of a future that can still happen.
Tuesday, November 05. 2013
Twitter data reveals the cities that set trends and those that follow. And the difference may lie in the way air passengers carry information across the country, bypassing the Internet, say network scientists.
One of the defining properties of social networks is the ease with which information can spread across them. This flow leads to information avalanches in which videos or photographs or other content becomes viral across entire countries, continents and even the globe.
It’s easy to imagine that these trends are simply the result of the properties of the network. Indeed, there are plenty of studies that seem to show this.
But in recent years, researchers have become increasingly interested in the relationship between a network and the geography it is superimposed on. What role does geography play in the emergence and spread of trends? And which areas are trend setters and which are trend followers?
Today we get an answer of sorts thanks to the work of Emilio Ferrara and pals at Indiana University in Bloomington. These guys have examined the way trends emerge in cities across the US and how they spread to other cities and beyond.
Their research allows them to classify US cities as sources, which lead the way in setting trends, or as what the team calls sinks, which follow them.
Their research also leads to a curious conclusion: that air travel plays a crucial role in the spread of information around the country. This implies that trends spread from one part of the country to another not over the Internet but via air passengers, just as diseases do.
The method these guys use is straightforward. Twitter publishes a continuously updated list of the top ten most popular phrases or hashtags on its webpage. It also has webpages showing the trending topics for each of 63 US cities.
To capture the way these trends emerge and spread, Ferrara and co set up a web crawler to check each list every ten minutes between 12 April and 30 May 2013. In this way they collected over 11,000 different phrases and hashtags that became popular during these 50 days.
They then plotted the evolution of these trends in each US city over time. This allowed them to study how trends spread from one city to another and to look for clusters of cities in which the same topics trend together.
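As a rough illustration of the data-collection step, the sketch below polls a per-city trending list at the paper's ten-minute interval and records when each phrase first appears in each city; `fetch_top10` is a hypothetical stand-in for whatever endpoint serves the per-city lists (in the study, Twitter's trending-topics pages).

```python
import time
from datetime import datetime, timezone

def poll_trends(fetch_top10, cities, rounds, interval_s=600):
    """Record the first time each phrase trends in each city.

    fetch_top10(city) -> list[str] is a hypothetical stand-in for the
    per-city trending-topics source the study crawled every 10 minutes.
    """
    first_seen = {}   # (city, phrase) -> datetime of first appearance
    for _ in range(rounds):
        now = datetime.now(timezone.utc)
        for city in cities:
            for phrase in fetch_top10(city):
                first_seen.setdefault((city, phrase), now)
        time.sleep(interval_s)
    return first_seen
```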
The results are revealing. They say most trends die away quickly–around 70 per cent of trends last only 20 minutes and only 0.3 per cent last more than a day.
Ferrara and co say they can see three distinct geographical regions that share similar trends–the East Coast, the Midwest and Southwest. It’s easy to imagine how trends arise at a low level and spread through the region through local links such as friends.
But these guys say there is also a fourth cluster of influential cities in which the emergence of trends is linked. However, these places are not geographically related. They are metropolitan areas such as Los Angeles, New York, Atlanta, Chicago and so on.
What links these places is not geography but airports, say Ferrara and co. Their hypothesis is that topics trend in these places because of the influence of air passengers. In other words, trending topics spread just like diseases.
Ferrara and co have created a list of the cities that act as trend setters and those that act as trend followers.
The top five sources of trends are: Los Angeles, Cincinnati, Washington, Seattle and New York.
The top five trend followers (or sinks) are: Oklahoma City, Albuquerque, El Paso, Omaha and Kansas City.
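The paper infers these roles from the structure of a directed spread network; as a much cruder proxy, one could simply rank cities by how early trends tend to reach them, using the `first_seen` timestamps collected by the crawler sketched above. This ranking rule is an assumption for illustration, not the authors' method.

```python
from collections import defaultdict
from statistics import mean

def cities_by_lead(first_seen):
    """Order cities from likely trend sources (early) to sinks (late).

    first_seen: dict mapping (city, phrase) -> datetime of first trending,
    as produced by the polling sketch above. For each trend, cities are
    ranked by arrival time; a low mean rank marks a likely source.
    """
    arrivals = defaultdict(list)          # phrase -> [(time, city), ...]
    for (city, phrase), t in first_seen.items():
        arrivals[phrase].append((t, city))
    ranks = defaultdict(list)             # city -> ranks across all trends
    for hits in arrivals.values():
        for rank, (_, city) in enumerate(sorted(hits)):
            ranks[city].append(rank)
    return sorted(ranks, key=lambda city: mean(ranks[city]))
```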
That’s a fascinating result. In a sense it’s obvious that the large-scale movement of people will influence the spread of information. However, it’s not obvious that this should happen at a rate that is comparable to the spread of trends across the Internet itself.
And it raises an interesting question that Ferrara and co hope to answer in future work. “Does information travel faster by airplane than over the Internet?” they ask.
We’ll be watching for when they reveal the answer.
Ref: arxiv.org/abs/1310.2671 : Traveling Trends: Social Butterflies or Frequent Fliers?
Interesting results (information travels by plane too, like diseases...). Yet when only 0.3% of "trends" last more than a day, one can wonder whether it even matters to worry about the other 99.7% (just some lolcat crap?)... or whether, on the contrary, they could reveal remarkable short-lived pulses of "things/memes/subjects" within and between cities?
Monday, October 21. 2013
Note: will the communication industry be the one to finally build the Instant City?
A rapidly-deployable airborne communications network could transform communications during disasters, say researchers
Most people will have had the experience of being unable to get a mobile phone signal at a major sporting event, music festival or just in a crowded railway station. The problem becomes even more acute in emergency situations, such as in earthquake disaster zones, where the telecommunications infrastructure has been damaged.
So the ability to set up a new infrastructure quickly and easily is surely of great use.
Today, Alvaro Valcarce at TRiaGnoSys, a mobile communications R&D company in Germany, and a few pals unveil a system that could make this easier. These guys have developed a rapidly deployable wireless network in the form of airborne base stations carried aloft by kite-shaped balloons called Helikites. A Helikite has a lifting capacity of 10 kg and can remain airborne at an altitude of up to 4 km for several days, provided the weather conditions allow.
Valcarce and co say the system can be quickly deployed and provides large local mobile phone coverage thanks to a combination of multiple airborne nodes that link in to terrestrial and satellite telecommunications systems.
Their idea is that these systems could be deployed by network companies during temporary events such as the Olympic Games, or by first responders to an emergency event to set up the vital communications infrastructure necessary to coordinate emergency services.
One of the key challenges is to get the new equipment to work seamlessly with existing terrestrial networks. And to that end, Valcarce and co have been testing their airborne Helikite.
The team has a number of challenges to overcome in its ongoing work. For example the altitude of the Helikite determines its coverage but also influences the network capacity and delays. Evaluating these effects is one part of their future goals.
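For a feel of the altitude-coverage geometry, the standard radio-horizon approximation d ≈ √(2Rh) bounds the line-of-sight footprint. This back-of-the-envelope estimate is not from the paper, and real coverage would be far smaller once transmit power, antenna patterns and link budgets are taken into account.

```python
from math import sqrt

EARTH_RADIUS_KM = 6371.0

def radio_horizon_km(altitude_km: float) -> float:
    """Geometric line-of-sight horizon for an airborne antenna.

    An upper bound only: actual cell coverage is limited by transmit
    power, antenna pattern and capacity, not just by geometry.
    """
    return sqrt(2 * EARTH_RADIUS_KM * altitude_km)

print(round(radio_horizon_km(4.0)))   # ~226 km at the Helikite's 4 km ceiling
```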
Once these kinds of operational problems have been ironed out, such a system will surely be valuable in a wide range of situations where reliable communication is not just a useful bonus but a life-saving necessity.
Ref: arxiv.org/abs/1307.3158 : Airborne Base Stations for Emergency and Temporary Events
Friday, September 20. 2013
Sentiment analysis on the social web depends on how a person’s state of mind is expressed in words. Now a new database of the links between words and emotions could provide a better foundation for this kind of analysis.
One of the buzzphrases associated with the social web is sentiment analysis. This is the ability to determine a person’s opinion or state of mind by analysing the words they post on Twitter, Facebook or some other medium.
Much has been promised with this method—the ability to measure satisfaction with politicians, movies and products; the ability to better manage customer relations; the ability to create dialogue for emotion-aware games; the ability to measure the flow of emotion in novels; and so on.
The idea is to entirely automate this process—to analyse the firehose of words produced by social websites using advanced data mining techniques to gauge sentiment on a vast scale.
But all this depends on how well we understand the emotion and polarity (whether negative or positive) that people associate with each word or combinations of words.
Today, Saif Mohammad and Peter Turney at the National Research Council Canada in Ottawa unveil a huge database of words and their associated emotions and polarity, which they have assembled quickly and inexpensively using Amazon’s crowdsourcing Mechanical Turk website. They say this crowdsourcing mechanism makes it possible to increase the size and quality of the database quickly and easily.
Most psychologists believe that there are essentially six basic emotions – joy, sadness, anger, fear, disgust, and surprise – or at most eight if you include trust and anticipation. So the task of any word-emotion lexicon is to determine how strongly a word is associated with each of these emotions.
One way to do this is to use a small group of experts to associate emotions with a set of words. One of the most famous databases, created in the 1960s and known as the General Inquirer database, has over 11,000 words labelled with 182 different tags, including some of the emotions that psychologists now think are the most basic.
A more modern database is the WordNet Affect Lexicon, which has a few hundred words tagged in this way. This used a small group of experts to manually tag a set of seed words with the basic emotions. The size of this database was then dramatically increased by automatically associating the same emotions with all the synonyms of these words.
One of the problems with these approaches is the sheer time it takes to compile a large database so Mohammad and Turney tried a different approach.
These guys selected about 10,000 words from an existing thesaurus and the lexicons described above and then created a set of five questions to ask about each word that would reveal the emotions and polarity associated with it. That’s a total of over 50,000 questions.
They then put these questions to over 2,000 people, or Turkers, on Amazon’s Mechanical Turk website, paying 4 cents for each set of properly answered questions.
The result is a comprehensive word-emotion lexicon for over 10,000 words or two-word phrases which they call EmoLex.
One important factor in this research is the quality of the answers that crowdsourcing gives. For example, some Turkers might answer at random or even deliberately enter wrong answers.
Mohammad and Turney have tackled this by inserting test questions that they use to judge whether or not the Turker is answering well. If not, all the data from that person is ignored.
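In code, that quality filter might look like the sketch below; the 80 per cent accuracy cutoff is an assumption, since the paper does not publish its exact rejection rule.

```python
def filter_by_gold(answers, gold, min_accuracy=0.8):
    """Drop every response from Turkers who fail the embedded test questions.

    answers: dict worker_id -> dict question_id -> answer
    gold:    dict question_id -> correct answer (the hidden check items)
    The 0.8 cutoff is an assumed value for illustration.
    """
    kept = {}
    for worker, responses in answers.items():
        graded = [responses[q] == a for q, a in gold.items() if q in responses]
        if graded and sum(graded) / len(graded) >= min_accuracy:
            kept[worker] = responses
    return kept
```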
They tested the quality of their database by comparing it to earlier ones created by experts and say it compares well. “We compared a subset of our lexicon with existing gold standard data to show that the annotations obtained are indeed of high quality,” they say.
This approach has significant potential for the future. Mohammad and Turney say it should be straightforward to increase the size of the database, and that the same technique can easily be adapted to create similar lexicons in other languages. And all this can be done very cheaply—they spent $2,100 on Mechanical Turk in this work.
The bottom line is that sentiment analysis can only ever be as good as the database on which it relies. With EmoLex, analysts have a new tool for their box of tricks.
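To see why the lexicon is the limiting factor, here is the simplest possible use of an EmoLex-style resource: a bag-of-words polarity sum. The entries are toy values for illustration; real pipelines must also handle negation, multi-word phrases and word senses.

```python
def score_text(text: str, lexicon: dict) -> int:
    """Naive lexicon-based scoring: sum per-word polarity (+1, -1 or 0)."""
    return sum(lexicon.get(word, 0) for word in text.lower().split())

lexicon = {"love": 1, "great": 1, "hate": -1, "awful": -1}  # toy entries
print(score_text("i love this great movie", lexicon))        # 2
print(score_text("what an awful plot", lexicon))             # -1
```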
Ref: arxiv.org/abs/1308.6297: Crowdsourcing a Word-Emotion Association Lexicon.
Wednesday, May 23. 2012
by Bruce Sterling
*In contemporary practice, I guess this Vurb verbiage from FutureEverywhere boils down to “my pocket keeps beeping all the time,” but, well, of course in a network society you can take everyone you know and everyone you own, and scatter them across the planet’s surface. Especially if they already did that with you.
“In the background there are at the same time deeper, more systemic developments taking place: high-speed internet access, ubicomp, cloud computing, sensor networks, big data, etc.. And out of these, some weird, boutique threads that are relevant to spatial practice, like the 3D printing of rooms, robots weaving buildings, self-driving cars, domestic drones, urban operating systems and nonhuman cities.
“A few weeks ago, my dear friend Ben Cerveny stopped over in Amsterdam for a weekend on his way to Geneva. For a few years, Ben had been living in Amsterdam for some months a year, traveling back to San Francisco and Los Angeles after summer and returning to Amsterdam after winter. (((No wonder I keep running into that Cerveny guy all the time.)))
“It had almost been two years since we last saw each other, but because we have constantly been in touch via Twitter, Facebook, Foursquare, Instagram and iChat, I felt like it had been only yesterday. When I explained this to Ben, he immediately said, without stopping to think about what he was saying, ‘oh of course: the continuous partial everywhere.’
“And that is exactly it. The continuous partial everywhere is the aspatial experience of simultaneity in immediate media. I am in the city where my friends are at the same time as the one where I am myself. The city for me is no longer only a city in space, but now also a city in time. An aspatial city, without distances, in a kind of aspace….”
Thursday, May 03. 2012
While most camera innovations are aimed at higher megapixel counts or new image-capturing techniques, Matt Richardson is taking an entirely different route with the Descriptive Camera: creating a device that turns your captured imagery into words. Designed as part of a class for New York University's Interactive Telecommunications Program, the camera consists of a USB webcam, a shutter button, a small thermal printer, and an ethernet connection. When a picture is "snapped," it's sent off to humans for analysis via Amazon's Mechanical Turk API. The human on the other end then creates a written description of the image, which is sent back to the camera. The resulting text is printed with the thermal printer, framed by a Polaroid-style photo outline (an example Richardson provides reads "It's a dark room with a window. The image is quite pixelated.").
According to Richardson's post about the project, the Amazon Human Intelligence Task — or HIT — cost is about $1.25 for each image, with results usually taking between three and six minutes to return. An "accomplice mode" actually lets the camera send out links to the image via instant messenger, providing a cheaper option for human interpretation. While the device currently requires external power from a 5-volt source, Richardson does hope to make a version at some point that runs off self-contained batteries and can use wireless data. It's certainly an interesting project, and we won't deny that we're smitten with the idea of taking images out and about in the world, and seeing them perceived through someone else's eyes.
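For a sense of the moving parts, here is a hedged re-sketch of the camera's request-and-fetch flow against today's boto3 MTurk client. The 2012 original used a different-era API, and the question XML, timing values and omitted answer parsing below are assumptions, not Richardson's code.

```python
import boto3

# Assumed setup: AWS credentials configured and an MTurk requester account.
mturk = boto3.client("mturk", region_name="us-east-1")

QUESTION_XML = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>description</QuestionIdentifier>
    <QuestionContent><Text>Describe this photo in a sentence or two: {url}</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

def request_description(image_url: str) -> str:
    """Post one describe-this-image HIT and return its id."""
    hit = mturk.create_hit(
        Title="Describe a photo",
        Description="Write a short text description of the image.",
        Reward="1.25",                       # the per-image cost Richardson reports
        MaxAssignments=1,
        LifetimeInSeconds=3600,              # assumed values, not from the project
        AssignmentDurationInSeconds=600,
        Question=QUESTION_XML.format(url=image_url),
    )
    return hit["HIT"]["HITId"]

def fetch_description(hit_id: str):
    """Poll for the worker's answer; returns the raw answer XML or None."""
    result = mturk.list_assignments_for_hit(HITId=hit_id)
    for assignment in result["Assignments"]:
        return assignment["Answer"]          # XML; parse out the FreeText field
    return None
```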
Wednesday, May 02. 2012
Archinect has opened a curated page on Kickstarter.
Check it out to see whether you'd like to help build an eco-pool in NYC, support Raumlabor in building an inflatable, or have David Lynch documented! Or else...
fabric | rblg
fabric | rblg is the survey website of fabric | ch -- studio for architecture, interaction and research. We curate and re-blog articles, research, exhibitions and projects that we notice in the course of our everyday practice.