Tuesday, August 02. 2016
By fabric | ch
As we still lack a decent search engine on this blog and don't use a "tag cloud" ... this post may help you navigate the updated content on | rblg (as of 07.2016), via all its tags!
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG:
(to be seen just below if you're navigating on the blog's page or here for rss readers)
Posted by Patrick Keller in fabric | ch at 16:58
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, citizen, city, climate, clips, code, cognition, collaboration, commodification, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, experience, experimentation, fabric | ch, farming, fashion, fiction, films, food, form, franchised, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, kinetic, knowledge, landscape, language, law, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, monography, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, resources, responsive, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, 
services, sharing, shopping, signage, smart, social, society, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, textile, theory, thinkers, thinking, time, tools, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, voice, vr, war, weather, web, wireless, writing
Tuesday, July 05. 2016
Public Platform of Future-Past, extended study phase. Bots, "Ar.I." & tools? | #data #monitoring #architecture
Note: in the continuity of my previous post/documentation concerning the project Platform of Future-Past (fabric | ch's recent winning competition proposal), I'm publishing several additional images and explanations about the second phase of the Platform project, for which we were mandated by Canton de Vaud (SiPAL).
The first part of this article gives complementary explanations about the project, but I also take the opportunity to post related works and research we've done in parallel on particular implications of the platform proposal. This will hopefully bring a clearer understanding of the way we try to combine experimentations-exhibitions, the creation of "tools" and the design of larger proposals in our open process of work.
Notably, these related works concerned the approach to data, the breaking of the environment into computable elements and the inevitable questions raised by their use as part of a public architecture project.
The information pavilion was potentially a slow, analog and digital "shape/experience shifter", as it was planned to be built in several successive steps over the years and could possibly "reconfigure" itself to sense and observe its transforming surroundings.
The pavilion therefore kept an unfinished flavour as part of its DNA, inspired by old kinds of meshed constructions (bamboo scaffoldings), almost sketched. This construction principle was meant to help it "shift" if and when necessary.
In a general sense, the pavilion answered the conventional public program of an observation deck overlooking a construction site. It also served the purpose of documenting the ongoing building process that often comes with it. By doing so, we turned the "monitoring dimension" (the production of data) of such a program into a base element of our proposal. That's where a former experimental installation helped us: Heterochrony.
As can be noticed, the word "Public" was added to the title of the project between the two phases, to become Public Platform of Future-Past (PPoFP), which we believe was an important addition. This is because it was envisioned that the PPoFP would monitor and use environmental data concerning the direct surroundings of the information pavilion (but NO DATA about uses/users). These data we declared, in this case, public, while the treatment of the monitored data would itself become part of the project, i.e. "architectural" (more about this below).
For these monitored data to stay public, as for the space of the pavilion itself, which would be part of the public domain and physically extend it, we had to ensure that they wouldn't be used by a third-party private service. We needed to keep an eye on the algorithms that would treat the spatial data, or better, write them according to our design goals (more about this below).
That's where architecture meets code and data (again), obviously...
By fabric | ch
The Public Platform of Future-Past is a structure (an information and sightseeing pavilion), a Platform that overlooks an existing Public site while basically taking it as it is, in a similar way to an archeological platform over an excavation site.
The asphalt ground floor remains virtually untouched, with traces of former uses kept as they are, some quite old (a train platform linked to an early 20th-century locomotive hall), some less so (painted parking spaces). The surrounding environment will move and change considerably over the years as new constructions go up. The pavilion will monitor and document these changes, hence the last part of its name: "Future-Past".
By nonetheless touching the site in a few points, the pavilion slightly reorganizes the area and creates spaces for a small new outdoor cafe and a bike parking area. This enhanced ground-floor program can work by itself, separated from the upper floors.
Several areas are linked to monitoring activities (input devices) and/or displays (in red, top), which concern points of interest and views from the platform or elsewhere. These areas consist of localized devices on the platform itself (5 locations), satellite devices directly implemented on the three construction sites, or even devices in distant cities of the larger political area concerned by the new constructions (three museums, two new large public squares, a new railway station and a new metro); these last are rather output devices. Inspired by the prior similar installation in a public park during a festival, Heterochrony (bottom image), the raw data can be of different natures: visual, audio, integers from sensors (%, °C, ppm, db, lm, mb, etc.), ...
Input and output devices remain low-cost and simple in their expression: several input devices / sensors are placed outside the pavilion, in the structural elements, and point toward areas of interest (construction sites or more specific parts of them). Directly related to these sensors and the sightseeing spots, but on the inside, are output devices with their recognizable blue screens. These are mainly voice interfaces: voice outputs driven by one bot according to architectural "scores" or algorithmic rules (middle image). Once the rules are designed, the "architectural system" runs on its own. That's why we've also named the system based on automated bots "Ar.I.": it could stand for "Architectural Intelligence", as it is entirely part of the architectural project.
The coding of "Ar.I." and the use of data could easily become something more experimental, transformative and performative over the life of the PPoFP.
Observers (users) and their natural "curiosity" play a central role: the preliminary observations and monitoring are indeed the ones they produce in an analog way (eyes and ears), at each of the 5 points of interest and through their wanderings. Extending this natural interest, a simple cord in front of each "output device" can be pulled, which then triggers a set of new measures by all the related sensors outside. This new set of data enters the database and becomes readable by the "Ar.I."
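The cord-pull interaction could be sketched roughly as follows; the sensor names, their fixed return values and the in-memory "database" are all placeholders standing in for the pavilion's real devices and data store, not the project's actual code.

```python
import time

# Hypothetical sensor readers; on the pavilion these would wrap real
# devices (°C, %, ppm, db, lm, mb, ...). Here they return fixed values.
SENSORS = {
    "temperature_c": lambda: 21.5,
    "humidity_pct": lambda: 48.0,
    "co2_ppm": lambda: 415.0,
}

DATABASE = []  # stand-in for the platform's shared data store

def on_cord_pull(point_of_interest):
    """Triggered when a visitor pulls the cord at one of the 5 points:
    take one reading from every related sensor and record it."""
    record = {
        "point": point_of_interest,
        "time": time.time(),
        "values": {name: read() for name, read in SENSORS.items()},
    }
    DATABASE.append(record)
    return record

reading = on_cord_pull(point_of_interest=3)
```

Each pull thus appends one timestamped record to the shared store, which the "Ar.I." bot can then read and voice back.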
The whole part of the project regarding interaction and data treatment has been the subject of a dedicated short study (a document about this study can be accessed here, in French only). Its main design implication is that the "Ar.I." takes part in the process of "filtering" that happens between the "outside" and the "inside", contributing to the creation of a variable but specific "inside atmosphere" ("artificial artificial", as the outside has been artificial as well since the Anthropocene, hasn't it?). By doing so, the "Ar.I." bot fully plays its part in the architecture's main program: triggering the perception of an inside, proposing patterns of occupation.
"Ar.I." computes spatial elements and mixes times. It can organize configurations for the pavilion (data, displays, recorded sounds, lighting, clocks), setting it to a past, a present, but also an estimated future disposition. "Ar.I." is mainly a set of open rules and a vocal interface, with the exception of the common access and conference space, which is equipped with visual displays as well. At times "Ar.I." simply spells out data, while at others, more intriguingly, it starts giving "spatial advice" about the environmental data configuration.
In parallel to Public Platform of Future-Past and in the frame of various research or experimental projects, scientists and designers at fabric | ch have been working to set up their own platform for declaring and retrieving data (more about this project, Datadroppers, here). A simple platform, but one adequate to our needs, on which we can develop as desired and where we know what happens to the data. To further guarantee the nature of the project, a "data commune" was created out of it and we plan to release the code on GitHub.
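The "data commune" pattern behind such a platform — anyone can drop tagged values, anyone can pick them back by tag — can be illustrated with a toy in-memory model. The names (`drop`, `pick`) and behaviour are illustrative only, not Datadroppers' actual API.

```python
# A minimal in-memory model of a "data commune": tagged values are
# declared ("dropped") and retrieved ("picked") by tag intersection.
class DataCommune:
    def __init__(self):
        self._drops = []

    def drop(self, tags, value):
        """Declare one datum under a set of tags."""
        self._drops.append({"tags": set(tags), "value": value})

    def pick(self, *tags):
        """Retrieve every value carrying all the requested tags."""
        wanted = set(tags)
        return [d["tags"] <= wanted or d["value"]
                for d in self._drops if wanted <= d["tags"]]

commune = DataCommune()
commune.drop(["pavilion", "temperature"], 21.5)
commune.drop(["pavilion", "humidity"], 48.0)
temps = commune.pick("pavilion", "temperature")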
In this context, we are also turning our own office into a test tube for various monitoring systems, so that we can assess the reliability and handling of different setups. It is also the occasion to further "hack" some basic domestic equipment and turn it into sensors, and to try new functions, with the help of our 3d printer in this case (middle image). Again, this experimental activity has been turned into a side project, Studio Station (ongoing, with Pierre-Xavier Puissant), while keeping the general background goal of "concept-proofing" the different elements of the main project.
A common room (conference room) in the pavilion hosts and displays the various data: 5 small screen devices, 5 voice interfaces for the 5 areas of interest, and a semi-transparent data screen. Inspired again by what was experimented and realized back in 2012 during Heterochrony (top image).
----- ----- -----
PPoFP, several images. Day, night configurations & few comments
Public Platform of Future-Past, axonometric views day/night.
An elevated walkway overlooks the almost archeological site (past-present-future). The circulations and views define and articulate the architecture and the five main "points of interest". These main points concentrate spatial events, infrastructures and monitoring technologies. Layer by layer, the surroundings are filtered by various means and become enclosed spaces.
Walks, views over transforming sites, ...
Data treatment, bots, voice and minimal visual outputs.
Night views, circulations, points of view.
Night views, ground.
Random yet controllable lights at night. Underlined areas of interest, points of "spatial densities".
Project: fabric | ch
Team: Patrick Keller, Christophe Guignard, Christian Babski, Sinan Mansuroglu, Yves Staub, Nicolas Besson.
Monday, June 13. 2016
Note: after a few weeks posting about the Universal Income, here comes the "Universal data accumulator for devices, sensors, programs, humans & more" by Wolfram (best known for the Wolfram Alpha computational engine and the former Mathematica libraries, on which most of their other services seem to be built).
Funnily, we picked a very similar name for a very similar data service we set up for ourselves and our friends last year, during an exhibition at H3K: Datadroppers (!), with a different set of references in mind (Drop City? --from which we borrowed the colors-- "Turn on, tune in, drop out"?). Even if our service is logically much more grassroots and less developed, it is therefore quite light to use as well.
We developed this project around data dropping/picking with another architectural project in mind that I'll speak about in the coming days: Public Platform of Future-Past. The two were clearly and closely linked.
"Universal" is therefore back in the loop as a keyword... (I would rather adopt a different word for myself and the work we are doing though: "Diversal" --a word I've been using for two years now and naively thought I had "invented", but no...)
"The Wolfram Data Drop is an open service that makes it easy to accumulate data of any kind, from anywhere—setting it up for immediate computation, visualization, analysis, querying, or other operations." - which looks more oriented towards data analysis than use in third party designs and projects.
"Datadroppers is a public and open data commune, it is a tool dedicated to data collection and sharing that tries to remain as simple, minimal and easy to use as possible." Direct and light data tool for designers, belonging to designers (fabric | ch) that use it for their own projects...
Friday, March 13. 2015
Sunday, December 14. 2014
The third workshop we ran in the frame of I&IC with our guest researcher Matthew Plummer-Fernandez (Goldsmiths University) and the 2nd & 3rd year students (Ba) in Media & Interaction Design (ECAL) ended last Friday (| rblg note: on the 21st of Nov.) with interesting results. The workshop focused on small situated computing technologies that could collect, aggregate and/or “manipulate” data in automated ways (bots) and which would certainly need to rely heavily on cloud technologies due to their low storage and computing capacities. So to speak, “networked data objects” that will soon become very common, thanks to cheap new small computing devices (i.e. Raspberry Pis for diy applications) or sensors (i.e. Arduino, etc.). The title of the workshop was “Botcave”, whose objective was explained by Matthew in a previous post.
The choice of this context of work was defined according to our overall research objective, even though we knew that it wouldn't directly address the “cloud computing” apparatus (something we learned to be a difficult approach during the second workshop), but that it would nonetheless question its interfaces and the way we experience the whole service, especially the evolution of this apparatus through new types of everyday interactions and data generation.
Matthew Plummer-Fernandez (#Algopop) during the final presentation at the end of the research workshop.
Through this workshop, Matthew and the students definitely raised the following points and questions:
1° Small situated technologies that will soon spread everywhere will become heavy users of cloud-based computing and data storage, as they have low storage and computing capacities of their own. While they might just use and manipulate existing data (like some of the workshop projects, i.e. #Good vs. #Evil or Moody Printer), altogether they will mainly contribute to producing extra large additional quantities of data (i.e. Robinson Miner). Yet the amount of meaningful data to be “pushed” and “treated” in the cloud remains a big question mark; as there will be (too) huge amounts of such data (Lucien will probably post something later about this subject: “fog computing”), this might end up with the need for interdisciplinary teams to rethink cloud architectures.
2° Stored data become “alive” or significant only when “manipulated”. This can be done by “analog users” of course, but in general it is now rather operated by rules and algorithms of different sorts (in the frame of this workshop: automated bots). Are these rules “situated” as well, and possibly context-aware (context-intelligent), i.e. Robinson Miner? Or are they somehow more abstract and located anywhere in the cloud? Both?
3° These “Networked Data Objects” (and soon “Networked Data Everything”) will contribute to “babelize” users' interactions and interfaces in all directions, paving the way for new types of combinations and experiences (creolization processes); i.e. The Beast, The Like Hotline, Simon Coins and The Wifi Cracker could be considered starting phases of such processes. Cloud interfaces and computing will then become everyday “things” and, when at home, new domestic objects with which we'll have totally different interactions (this last point must still be discussed though, as domesticity might not exist anymore according to Space Caviar).
Moody Printer – (Alexia Léchot, Benjamin Botros)
Moody Printer remains a basic conceptual proposal at this stage: a hacked printer, connected to a Raspberry Pi that stays hidden (it would be located inside the printer), has access to weather information. Similarly to human beings, its “mood” can be affected by such inputs following some basic rules (good - bad, hot - cold, sunny - cloudy - rainy, etc.). The automated process then searches Google Images according to its defined “mood” (a direct link between “mood”, weather conditions and an exhaustive list of words) and autonomously starts to print them.
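The weather-to-mood rule set could look something like the sketch below. The mappings and search terms are invented for illustration; the students' actual rules and word lists are not published in this post.

```python
# Illustrative "mood" rules for Moody Printer: a weather condition maps
# to a mood and to image search terms that would feed the printer.
MOODS = {
    "sunny": ("happy", ["beach", "flowers", "smile"]),
    "cloudy": ("melancholic", ["fog", "grey sky", "empty street"]),
    "rainy": ("sad", ["rain window", "umbrella", "puddle"]),
}

def mood_for_weather(condition):
    """Map a weather condition to a (mood, search terms) pair."""
    return MOODS.get(condition, ("neutral", ["paper", "ink"]))

def print_job(condition):
    mood, terms = mood_for_weather(condition)
    # In the installation, each term would run an image search and the
    # results would be sent straight to the hacked printer.
    return [f"[{mood}] print image for: {t}" for t in terms]

jobs = print_job("rainy")
```

Unknown conditions fall back to a "neutral" mood, so the printer always has something to print.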
A different kind of printer combined with weather monitoring.
The Beast – (Nicolas Nahornyj)
Top: Nicolas Nahornyj is presenting his project to the assembly. Bottom: the laptop and “the beast”.
The Beast is a device that asks to be fed with money at random times… It is your new laptop companion. To calm it down for a while, you must insert a coin into the slot provided for that purpose. If you don’t comply, not only will it continue to ask for money on a more frequent basis, but it will also randomly pick an image that lies around on your hard drive, post it on a popular social network (i.e. Facebook, Pinterest, etc.) and then erase the image from your local disk. Slowly, The Beast will remove all images from your hard drive and post them online…
A different kind of slot machine combined with private files stealing.
Robinson – (Anne-Sophie Bazard, Jonas Lacôte, Pierre-Xavier Puissant)
Top: Pierre-Xavier Puissant is looking at the autonomous “minecrafting” of his bot. Bottom: the proposed bot container, which takes up the idea of cubic construction. It could be placed in your garden, in one of your rooms, then in your fridge, etc.
Robinson automates the procedural construction of MineCraft environments. To do so, the bot uses local weather information monitored by a weather sensor located inside the cubic box, attached to a Raspberry Pi within the box as well. This sensor looks for changes in temperature, humidity, etc., which then serve to change the building blocks and rules of construction inside MineCraft (put your cube inside your fridge and it will start to build icy blocks, put it in a wet environment and it will construct with grass, etc.).
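Robinson's core rule, reduced to its simplest form, is a mapping from sensor readings to a block type. The thresholds and block names below are guesses for illustration, not the students' actual values.

```python
# Hypothetical version of Robinson's rule: pick a MineCraft building
# block from the temperature and humidity read inside the cubic box.
def block_for(temperature_c, humidity_pct):
    if temperature_c < 0:       # in the fridge -> icy blocks
        return "ice"
    if humidity_pct > 70:       # wet environment -> grass
        return "grass"
    if temperature_c > 30:      # hot and dry -> sand
        return "sand"
    return "stone"              # default building material

block = block_for(-5, 40)       # cube placed inside the fridge
```

The bot would call such a function on every new reading and hand the result to its MineCraft building routine.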
A different kind of thermometer combined with a construction game.
Note: Matthew Plummer-Fernandez also produced two (auto)MineCraft bots during the week of the workshop. The first one builds an environment according to fluctuations in different market indexes, while the second one tries to build “shapes” to escape this first environment. These two bots are downloadable from the GitHub repository that was set up during the workshop.
#Good vs. #Evil – (Maxime Castelli)
Top: a transformed car racing game. Bottom: a race is going on between two Twitter hashtags, materialized by two cars.
#Good vs. #Evil is a quite straightforward project: a hack of an existing two-car racing game. In this case, the bot counts iterations of two hashtags on Twitter: #Good and #Evil. At each new iteration of one or the other word, the device gives an electric input to its associated car. The result is a slow and perpetual race between “good” and “evil” through the iterations of their online hashtags.
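The race logic itself is tiny: each new occurrence of a hashtag gives one "pulse" to its car. In this sketch the Twitter stream is simulated with a list of strings and the electric pulse is replaced by a step counter; wiring it to the real stream and to GPIO outputs is left out.

```python
# One "pulse" per hashtag occurrence; positions stand in for how far
# each car has advanced on the track.
positions = {"#Good": 0, "#Evil": 0}

def on_tweet(text):
    """Advance a car each time its hashtag appears in a tweet."""
    for tag in positions:
        if tag.lower() in text.lower():
            positions[tag] += 1

# Simulated stream in place of the live Twitter feed.
stream = [
    "what a #good day",
    "the root of all #evil",
    "#Good vibes only",
]
for tweet in stream:
    on_tweet(tweet)
```

After these three tweets, #Good leads the race 2 to 1.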
A different kind of data visualization combined with racing cars.
The “Like” Hotline – (Mylène Dreyer, Caroline Buttet, Guillaume Cerdeira)
Top: Caroline Buttet and Mylène Dreyer are explaining their project. The screen of the laptop, showing a Facebook account, is beamed on the left outer part of the image. Bottom: Caroline Buttet is using a hacked phone to “like” pages.
The “Like” Hotline proposes to hack a regular phone and install a hotline bot on it. Connected to its own Facebook account, which follows a few personalities and the posts they make, the bot asks questions of the caller, which can be answered using the phone’s keypad. After navigating through a few choices, the hotline bot helps you “like” a post on the social network.
A different kind of hotline combined with a social network.
Simoncoin – (Romain Cazier)
Top: Romain Cazier introducing his “coin” project. Bottom: the device combines an old “Simon” memory game with the production of digital coins.
Simoncoin was unfortunately not finished by the end of the workshop week, but it was thought out in great detail that would take too long to explain in this short presentation. The main idea was to use the game logic of Simon to generate coins. In a parallel to Bitcoins, which are harder and harder to mine, Simon Coins also become more and more difficult to generate due to the game logic.
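One free interpretation of that idea: minting the n-th coin requires correctly replaying a Simon sequence of length n, so each coin costs more effort than the last (the nod to Bitcoin mining). This is a speculative sketch, not the unfinished project's actual logic.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def new_sequence(length, seed=0):
    """Deterministic Simon challenge of the given length."""
    rng = random.Random(seed)
    return [rng.choice(COLORS) for _ in range(length)]

def mint(coins_already_minted, replay, seed=0):
    """A coin is minted only if the player replays the full sequence;
    the required sequence grows by one step per coin already minted."""
    target = new_sequence(coins_already_minted + 1, seed)
    return replay == target

seq = new_sequence(3)   # peek at the current challenge
ok = mint(2, seq)       # the third coin needs a length-3 replay
```

A perfect replay mints the coin; any mistake, or an incomplete replay, fails, and the next coin's sequence is one step longer again.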
Another different kind of money combined with a memory game.
The Wifi Cracker – (Bastien Girshig, Martin Hertig)
Top: Bastien Girshig and Martin Hertig (left of Matthew Plummer-Fernandez) presenting. Middle and bottom: the wifi password cracker slowly displays the letters of the wifi password.
The Wifi Cracker is an object that you can independently leave in a space. It furtively looks a little bit like a clock, but it won’t display the time. Instead, it looks for available wifi networks in the area and starts trying to find their passwords (Bastien and Martin found a ready-made process for that). The bot tests all possible combinations, which takes time. Once the device has found the working password, it uses its round display to transmit it, letter by letter, and slowly as well.
A different kind of cuckoo clock combined with a password cracker.
Lots of thanks to Matthew Plummer-Fernandez for his involvement and great workshop direction; Lucien Langton for his involvement, technical digging into Raspberry Pis, pictures and documentation; Nicolas Nova and Charles Chalas (from HEAD), as well as Christophe Guignard, Christian Babski and Alain Bellet for taking part or helping during the final presentation. A special thanks to the students from ECAL involved in the project and the energy they’ve put into it: Anne-Sophie Bazard, Benjamin Botros, Maxime Castelli, Romain Cazier, Guillaume Cerdeira, Mylène Dreyer, Bastien Girshig, Jonas Lacôte, Alexia Léchot, Nicolas Nahornyj, Pierre-Xavier Puissant.
From left to right: Bastien Girshig, Martin Hertig (The Wifi Cracker project), Nicolas Nova, Matthew Plummer-Fernandez (#Algopop), a “mystery girl”, Christian Babski (in the background), Patrick Keller, Sebastian Vargas, Pierre Xavier-Puissant (Robinson Miner), Alain Bellet and Lucien Langton (taking the pictures…) during the final presentation on Friday.
Friday, November 21. 2014
Note: the workshop continues and should finish today. We'll document and publish results next week. As the workshop is all about small size and situated computing, Lucien Langton (assistant on the project) made a short tutorial about the way to set up your Pi. I'll also publish the Github repository that Matthew Plummer-Fernandez has set up.
The Bots are running! The second workshop of I&IC’s research study started yesterday with Matthew’s presentation to the students. A video of the presentation might be included in the post later on, but for now here’s the [pdf]: Botcaves
First prototypes set up by the students include bots playing Minecraft, bots cracking wifi passwords, and bots triggered by onboard IR cameras. So far, some groups worked directly with Python scripts deployed via SSH onto the Pis, others established a client-server connection between their Mac and their Pi by installing Processing on the Raspberry, and some decided to start by hacking hardware to connect to their bots later.
The research process will be continuously documented during the week.
Connecting to Pi via Proce55ing
Wednesday, October 08. 2014
Note: a few of our recent works and exhibitions are included in this promising young publication related to architectural thinking, Desierto, edited by Paper - Architectural Histamine in Madrid. At the editorial team's invitation, I had the occasion to write a paper about Deterritorialized Living and one of its physical installations last year in Pau (France), during Pau Acces(s). We also took the occasion of the publication to give a glimpse of a related research project called Algorithmic Atomized Functioning.
By fabric | ch
From the editorial team:
"The temperature of the invisible and the desacralization of the air.
28° Celsius is the temperature at which protection becomes superfluous. It is also the temperature at which swimming pools are acclimatised. Within the limits of this hygrothermal comfort zone, we do not require the intervention of our body's thermoregulatory mechanisms nor that of any external artificial thermal controls in order to feel pleasantly comfortable while carrying out a sedentary activity without clothing. 28° Celsius is thus the temperature at which clothing can disappear, just as architecture could."
Authors are Gabriel Ruiz-Larrea, Sean Lally, Philippe Rahm, Nerea Calvillo, myself, Helen Mallinson, Antonio Cobo, José Vella Castillo and Pauly Garcia-Masedo.
Editorial by Gabriel Ruiz-Larrea (editor in chief). Editorial team composed of Natalia David, Nuria Úrculo, María Buey, Daniel Lacasta Fitzsimmons.
Inhabiting Deterritorialization, by Patrick Keller, with images of Deterritorialized Living website, Deterritorialized Daylight installation (Pau, France) and Algorithmic Atomized Functioning.
Desierto #3 and past issues can be ordered online on Paper bookstore.
Thursday, July 24. 2014
Besides health tracking, contact lens technology under development could enable drug delivery, night vision, and augmented reality.
Last week Google and Novartis announced that they’re teaming up to develop contact lenses that monitor glucose levels and automatically adjust their focus. But these could be just the start of a clever new product category. From cancer detection and drug delivery to reality augmentation and night vision, our eyes offer unique opportunities for both health monitoring and enhancement.
“Now is the time to put a little computer and a lot of miniaturized technologies in the contact lens,” says Franck Leveiller, head of research and development in the Novartis eye care division.
One of the Novartis-Google prototype lenses contains a device about the size of a speck of glitter that measures glucose in tears. A wireless antenna then transmits the measurements to an external device. It’s designed to ease the burden of diabetics who otherwise have to prick their fingers to test their blood sugar levels.
“I have many patients that are managing diabetes, and they described it as having a part-time job. It’s so arduous to monitor,” says Thomas Quinn, who is head of the American Optometric Association’s contact lens and cornea section. “To have a way that patients can do that more easily and get some of their life back is really exciting.”
Glucose isn’t the only thing that can be measured from tears rather than a blood sample, says Quinn. Tears also contain a chemical called lacryglobin that serves as a biomarker for breast, colon, lung, prostate, and ovarian cancers. Monitoring lacryglobin levels could be particularly useful for cancer patients who are in remission, Quinn says.
Quinn also believes that drug delivery may be another use for future contact lenses. If a lens could dispense medication slowly over long periods of time, it would be better for patients than the short, concentrated doses provided by eye drops, he says. Such a lens is not easy to make, though (see “A Drug-Dispensing Lens”).
The autofocusing lens is in an earlier stage of development, but the goal is for it to adjust its shape depending on where the eye is looking, which would be especially helpful for people who need reading glasses. A current prototype of the lens uses photodiodes to detect light hitting the eye and determine whether the eye is directed downward. Leveiller says the team is also looking at other possible techniques.
Google and Novartis are far from the only ones interested in upgrading the contact lens with such new capabilities. In Sweden, a company called Sensimed is working on a contact lens that measures the intraocular pressure that results from the liquid buildup in the eyes of glaucoma patients (see “Glaucoma Test in a Contact Lens”). And researchers at the University of Michigan are using graphene to make infrared-sensitive contact lenses—the vision, as it were, is that these might one day provide some form of night vision without the bulky headgear.
A Seattle-based company, Innovega, meanwhile, has developed a contact lens with a small area that filters specific bands of red, green, and blue light, giving users the ability to focus on a very small, high resolution display less than an inch away from their eyes without interfering with normal vision. That makes tiny displays attached to glasses look more like IMAX movie screens, says the company’s CEO, Steve Willey. Together, the lens and display are called iOptik.
Plenty of challenges still remain before we’re all walking around with glucose-monitoring, cancer-detecting, drug-delivering super night vision. Some prototypes out there are unusually thick, Quinn says, and some use traditional, rigid electronics where clear, flexible alternatives would be preferable. And, of course, all will have to pass regulatory approval to show they are safe and effective.
Jeff George, the head of the Novartis eye care division, is certainly optimistic about Google’s smart lens. “Google X’s team refers to themselves as a ‘moon shot factory.’ I’d view this as better than a moon shot given what we’ve seen,” he says.
Wednesday, July 23. 2014
Could “Force Illusions” Help Wearables Catch On?
By John Pavlus
What if the compass app in your phone didn’t just visually point north but actually seemed to pull your hand in that direction?
Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.
Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz, with the crankshaft accelerating sharply in one direction and then easing back more slowly, a distinctive pulling sensation emerged in the direction of the acceleration. With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).
The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says. “It’s a strong sensation of force.”
Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.
Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”
Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch). The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”
Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.
Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz—with the crankshaft accelerating sharply in one direction and then easing back more slowly—a distinctive pulling sensation emerged in the direction of the acceleration.
With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).
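The asymmetric-vibration trick described above can be sketched numerically: a short, strong acceleration pulse one way, then a longer, gentler return. The frequency, duty cycle, and amplitude below are illustrative values for the sketch, not the devices' actual drive parameters:

```python
import math

def asymmetric_accel(t, freq=40.0, duty=0.25, strong=8.0):
    """Toy asymmetric-vibration profile: a brief, strong acceleration
    pulse followed by a longer, weaker pulse in the opposite direction.
    The brief high peak is what the skin reads as a directional 'pull';
    the integral over one cycle is zero, so the actuator does not drift."""
    phase = (t * freq) % 1.0              # position within one cycle, 0..1
    if phase < duty:                       # short, sharp push one way
        return strong
    # longer, gentler return keeps the net impulse at zero
    return -strong * duty / (1.0 - duty)

# numerically check that the net impulse over one 40 Hz cycle is ~zero
dt = 1e-5
impulse = sum(asymmetric_accel(i * dt) * dt for i in range(int(1 / 40.0 / dt)))
print(abs(impulse) < 1e-6)  # True: zero mean, but asymmetric peaks
```

The asymmetry, not the vibration itself, carries the illusion: a symmetric sine wave with the same zero mean produces no perceived direction.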
The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says. “It’s a strong sensation of force.”
Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.
Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”
Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch).
The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”
Wednesday, February 26. 2014
Seen everywhere online these days and now on | rblg too... Yet another "trojan horse" by Google to turn you into a mobile, indoor sensor for their own sake (data collection, that is). And soon we'll be able to visit your flat, or those of your friends, through Google Maps/Earth, or through a constellation of other applications. After clicking at the door, of course.
But also, as is often the case with such devices, an interesting tool... On top of which disruptive apps will be built that will further mix material and immaterial experiences, and further relocate parts of your "home" into "clouds".
As this is an open call for ideas, ahead of the 200 dev kits they'll give away, don't hesitate to send them a line if you have an unpredictable one (this promises to be very competitive...)!
Link to the project and call HERE.
*An Android unit with registration.
“What is it?
“Our current prototype is a 5” phone containing customized hardware and software designed to track the full 3D motion of the device, while simultaneously creating a map of the environment. These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.
“It runs Android and includes development APIs to provide position, orientation, and depth data to standard Android applications written in Java, C/C++, as well as the Unity Game Engine. These early prototypes, algorithms, and APIs are still in active development. So, these experimental devices are intended only for the adventurous and are not a final shipping product….”
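The core of what the prototype does — fusing its own pose with depth measurements into one 3D model — can be illustrated with a toy transform. This is a deliberate simplification, not Tango's actual API: `pose_to_world`, the yaw-only rotation, and the sample numbers are all hypothetical; the real device uses full quaternion orientations.

```python
import math

def pose_to_world(point_cam, yaw, position):
    """Map a camera-frame depth point into world coordinates by
    rotating it by the device's heading (yaw) and adding the device's
    position. A single yaw angle stands in for a full 3D rotation
    to keep the sketch short."""
    x, y, z = point_cam
    c, s = math.cos(yaw), math.sin(yaw)
    xw = c * x - s * z + position[0]   # rotate in the horizontal plane
    zw = s * x + c * z + position[2]
    return (xw, y + position[1], zw)

# A depth point 2 m straight ahead, seen by a device that has turned
# 90 degrees and moved 1 m along the world x-axis:
p = pose_to_world((0.0, 0.0, 2.0), math.pi / 2, (1.0, 0.0, 0.0))
print(p)
```

Doing this for a quarter million points per second, while simultaneously estimating `yaw` and `position` from the sensors themselves, is the hard part that the tracking hardware and software handle.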
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a pool of references and resources. It is shared with everyone interested in the same topics as we are, in the hope that they too will find valuable references and content in it.