Monday, April 06, 2009
The Immersive Edge
Starting just a few hours from now down at SCI-Arc, on a cloudless 73º day, "seven distinguished architects and theorists" whose designs straddle "the intersection of physical and virtual worlds" will be presenting their work at the Mediascapes Symposium, led by Ed Keller. -----
Via BLDGBLOG
Personal comment: Some names... active in the LA area around themes that are not unfamiliar to us! Some "usual suspects" (like Marcos Novak or Eric Owen Moss) and some new suspects (Ed Keller, Benjamin Bratton, etc.)
Posted by Patrick Keller
in Architecture, Interaction design, Science & technology
at 10:47
Defined tags for this entry: architecture, conferences, interaction design, research, science & technology, theory
Patrick Keller / Fabric (Postopolis! LA)
Via City of Sound ----- By Dan Hill

Patrick Keller's talk was so up my strasse it's ridiculous. Keller is from the Swiss architecture practice Fabric, originally from Geneva and now in Lausanne (I'm not sure why he was in LA.) He's interested in the collaboration between information designers and architects, and works in the space between these disciplines, interested in creating responsive architectures. (As am I.) He notes wryly that most of his projects are as yet un-built, "like most architecture". (As are mine.)

He runs through a few projects in quick succession, each a progression of the other. There's a lot more to his projects than we were able to get through in 30 minutes. All concern the interplay between the environmental characteristics of spaces, in terms of the idea of conditioned environments (somewhat apposite, given Banham's ideas in The Architecture of the Well-Tempered Environment (see my own spin on that), and as LA must be home to as much air conditioning as anywhere.)

The first project is from a few years back, as evidenced by the title. 'Real Rooms' (2005) takes its name from RealPlayer, then one of the de rigueur streaming media players. The idea being that instead of streaming media, you could stream actual environments into spaces. This project was situated in the world HQ of Swiss giant Nestlé, a late-modern building with large areas of transparent glass on the outside but an almost hermetically sealed series of interior spaces (corridors, offices, cubicles, boxes etc.) which had precisely controlled environments in terms of internal artificial climates, artificial lighting etc.
So, given the building had lost its relationship with climate, at least on the interior, Fabric proposed a series of container-like interior 'boxes' which could be altered in terms of temperature and lighting to 'stream' artificial climates into those spaces, representative of climates and time zones from across the world (playing up Nestlé's global reach, perhaps) - that you could "invite a climate into the box", in Keller's words. So you could make one environment always 8am, staying on the same latitude. Some could be "always dark, some always daytime - some very cold and some very hot". The building looked particularly beautiful at night, when only some part of it is lit, forming the shape of daylight across the globe. This shape would then change slightly with the seasons.

The second project Keller showed also concerns internal atmospheres, but instead of controlling them, this is about "letting climate go by itself": a competition entry for a tower in San Jose containing a huge volume of air. This building would also communicate climate around the globe, but using the natural processes of convection, the albedo effect and so on within the volume of air. As opposed to the former project, this one would just enable a climate within the structure, articulated by the volume of air, and then communicate the performance of that air volume, via boxes with sensors on the exterior, linking it to climates worldwide. Keller says it's a sort of test-tube environment for climatic variation: "In the morning you'd have cold air at the bottom and warmer air at the top remaining from the day before … During the day, it gets warmer, and goes to the top, then cooler air is present at the bottom. Humidity collects at the top, condenses on the facade and runs down. Pollution particles are more dense at the bottom than at the top (as they're heavier)". And so on … So the form triggers a natural atmospheric variation in the building, which is then captured by the sensors.
These sensors then feed information displays on part of the building (essentially on each floor), enabling a comparison, at each moment, of the performance of the building itself with "related atmospheres" in other places on the globe. (A fascinating idea, and really close to my own ideas of the building as urban dashboard.)

The third and final project was 'Environ(ne)ment' for the CCA, working with Philippe Rahm. This was both a form of "test environment and representation" concerning an 'artificial sun' (as per Eliasson's Weather Project?) triggering differences in heat, light etc. within the gallery space. These phenomena are then sampled by sensors, which feed projections about what's going on in the space. Moving the ideas forward, Keller described how this exhibit then offered suggestions about how you could inhabit these spaces with differing climates - "propositions generated through software", such as cooking on the floor, sleeping on the carpet, wearing clothes or not, with furniture, activity and relationships all changing function in response to the different environmental characteristics. (This is an intriguing step forward for the ideas, moving quite directly into behavioural relationships as a result of changing environments, rather than the previous two projects' more subtle but potentially ignorable approach of prompting.)

So Keller sums up by noting how the first project concerned a "conditioned environment" which is then questioned and reopened. The second was a "let go" environment, which is "more natural and costs nothing to trigger variation". The third concerns "some new way of inhabiting those climates". He believes this third is interesting as neither totally conditioned nor totally free, but somewhere in between … Asked whether the approach could scale up, Keller responds that it could theoretically "go large scale, though it is hard to trigger it at this scale. At urban scale you have to deal with natural conditions as well".
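As a thought experiment, the "related atmospheres" comparison the sensors enable could be sketched as a simple nearest-neighbour lookup: match the building's sensed interior climate against a table of climates elsewhere on the globe. Everything below (the city list, readings, and weighting) is invented for illustration and is not drawn from the project itself.

```python
# Hypothetical sketch: match a building's sensed interior climate to the
# closest "related atmosphere" elsewhere on the globe. Data is illustrative.

WORLD_CLIMATES = {
    "Reykjavik": {"temp_c": 4.0, "humidity": 0.78},
    "Singapore": {"temp_c": 31.0, "humidity": 0.84},
    "Cairo": {"temp_c": 28.0, "humidity": 0.30},
    "Lausanne": {"temp_c": 12.0, "humidity": 0.65},
}

def related_atmosphere(temp_c, humidity, climates=WORLD_CLIMATES):
    """Return the city whose climate is nearest to the sensed values."""
    def distance(city):
        c = climates[city]
        # Scale humidity so a 1% change weighs roughly like 0.25 degrees C.
        return ((c["temp_c"] - temp_c) ** 2
                + (25 * (c["humidity"] - humidity)) ** 2) ** 0.5
    return min(climates, key=distance)

# A floor-level display might then read something like:
# "Floor 12 currently feels like: " + related_atmosphere(11.5, 0.6)
```

A real deployment would of course compare live weather feeds rather than a static table, but the principle - the building continuously locating itself among the world's climates - is the same.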
Having said that, in a sense cities have already changed our climate at this planetary scale, of course - and we could do so again, he suggests, say by "painting the cities all white and replace polar caps" accordingly. (There are many more fascinating projects at Fabric's website, as well as details of the projects noted above, and it'll be worth watching the work of this practice. This kind of work sits at the most interesting junction of architecture and urbanism, to my mind, and is perhaps the most potentially fruitful.)
Related Links:
Personal comment: This was a really great and challenging event, with sharp interventions. I hope we'll be able to organize one in Europe sooner or later (but rather sooner).
Posted by Patrick Keller
in fabric | ch, Architecture, Interaction design
at 10:33
Defined tags for this entry: architecture, conferences, fabric | ch, interaction design, publications, publications-fbrc, research, talks-fbrc, theory
Virtual Electronic Poem (1958, Brussels)
The Poème électronique was a unique experience, originating from the request made by Philips to Le Corbusier for the design of the company's pavilion at the Brussels World Fair in 1958. The whole project was initiated and directed by Le Corbusier, who also created and/or selected the images for the audiovisual show, with the organized sound composed by Edgard Varèse and the stunning surfaces of the building designed by Iannis Xenakis. The result was a groundbreaking immersive environment, since the space of the Pavilion hosted the audio and the visual materials as integral parts of the architectural design.

Unfortunately, such a visionary synthesis of innovative ideas could not survive its time, and the paradigm was never repeated, or even attempted, again: the Pavilion, notwithstanding its incredible number of spectators (2 million), was torn down a few months after its inauguration, at the end of the Exposition. The disappearance of the Pavilion makes the Poème électronique a destroyed masterpiece. What we still have today are only fragments of the various components (i.e. photos and drafts of the architecture, the projected video in videotape from the Philips archives, a stereo reduction of Varèse's and Xenakis' musical pieces).

Virtual Electronic Poem (VEP) is a project realized as a virtual reality (VR) environment that reproduces the experience of the dismantled masterpiece through an accurate philological reconstruction of the original installation. The website looks a bit out of date, but the first of the two films in this post shows the results of the work. The second shows the Poème électronique as a film rather than in its architectural context. Perhaps someone out there would be good enough to bring the building into a public setting in Second Life?
Entry Filed under: Devices, Events, Furniture, New Materials, Sculpture/Installation, Visual ----- Personal comment: Le Corbusier (also) at the origin of immersive, "media" and interactive architectures? ;)

The Best Computer Interfaces: Past, Present, and Future
Say goodbye to the mouse and hello to augmented reality, voice recognition, and geospatial tracking.
By Duncan Graham-Rowe
The Command Line
The granddaddy of all computer interfaces is the command line, which surfaced as a more effective way to control computers in the 1950s. Previously, commands had to be fed into a computer in batches, usually via a punch card or paper tape. Teletype machines, which were normally used for telegraph transmissions, were adapted as a way for users to change commands partway through a process, and receive feedback from a computer in near real time. Video display units allowed command line information to be displayed more rapidly. The VT100, a video terminal released by Digital Equipment Corporation (DEC) in 1978, is still emulated by some modern operating systems as a way to display the command line. Graphical user interfaces, which emerged commercially in the 1980s, made computers much easier for most people to use, but the command line still offers substantial power and flexibility for expert users.

The Mouse
Developed 41 years ago by Douglas Engelbart at the Stanford Research Institute, in California, the mouse is inextricably linked to the development of the modern computer and also played a crucial role in the rise of the graphical user interface. Engelbart demonstrated the mouse, along with several other key innovations, including hypertext and shared-screen collaboration, at an event in San Francisco in 1968. Early computer mouses came in a variety of shapes and forms, many of which would be almost unrecognizable today. However, by the time mouses became commercially available in the 1980s, the mold was set. Three decades on, and despite a few modifications (including the loss of its tail), the mouse remains relatively unchanged. That's not to say that companies haven't tried adding all manner of enhancements, including a mini joystick and an air ventilator to keep your hand sweat-free and cool. Logitech alone has now sold more than a billion of these devices, but some believe that the mouse is on its last legs.
The rise of other, more intuitive interfaces may finally loosen the mouse's grip on us.

The Touchpad
With most touchpads, a user's finger is sensed by detecting disruptions to an electric field caused by the finger's natural capacitance. It's a principle that was employed as far back as 1953 by the Canadian pioneer of electronic music Hugh Le Caine, to control the timbre of the sounds produced by his early synthesizer, dubbed the Sackbut. The touchpad is also important as a precursor to the touch-screen interface. And many touchpads now feature multitouch capabilities, expanding the range of possible uses. The first multitouch touchpad for a computer was demonstrated back in 1984, by Bill Buxton, then a professor of computer design and interaction at the University of Toronto and now also a principal researcher at Microsoft.

The Multitouch Screen
However, it's fair to say that Apple's iPhone has helped revive the potential of the approach with its multitouch screen. Several cell-phone manufacturers now offer multitouch devices, and both Windows 7 and future versions of Apple's MacBook are expected to do the same. Various techniques can enable multitouch screens: capacitive sensing, infrared, surface acoustic waves and, more recently, pressure sensing. With this renaissance, we can expect a whole new lexicon of gestures designed to make it easier to manipulate data and call up commands. In fact, one challenge may be finding ways to reproduce existing commands in an intuitive way, says August de los Reyes, a user-experience researcher who works on Microsoft's Surface.

Gesture Sensing
New mobile applications are also starting to tap into this trend. Shut Up, for example, lets Nokia users silence their phone by simply turning it face down. Another app, called nAlertMe, uses a 3-D gestural passcode to prevent the device from being stolen. The handset will sound a shrill alarm if the user doesn't move the device in a predefined pattern in midair to switch it on.
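As a rough illustration of what a 3-D gestural passcode like nAlertMe's might involve under the hood, one can imagine comparing a fresh accelerometer trace against an enrolled template. A real implementation would more likely use dynamic time warping or a trained classifier; the traces and threshold below are invented for the sketch.

```python
# Toy sketch of a 3-D gestural passcode: compare a fresh accelerometer trace
# against an enrolled template, and treat a close match as "unlock".
# All numbers here are invented; this is not nAlertMe's actual algorithm.

def trace_distance(a, b):
    """Mean Euclidean distance between two equal-length (x, y, z) traces."""
    assert len(a) == len(b)
    total = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(a, b):
        total += ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
    return total / len(a)

def unlock(template, attempt, tolerance=0.5):
    """True if the attempted gesture matches the enrolled one closely enough."""
    return trace_distance(template, attempt) <= tolerance

enrolled = [(0.0, 0.0, 1.0), (0.5, 0.0, 0.8), (1.0, 0.0, 0.2)]
good_try = [(0.1, 0.0, 0.9), (0.5, 0.1, 0.8), (0.9, 0.0, 0.3)]
bad_try  = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
```

In practice the traces would differ in length and timing from one attempt to the next, which is exactly why elastic matching techniques such as dynamic time warping tend to be used instead of a point-by-point comparison.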
The next step in gesture recognition is to enable computers to better recognize hand and body movements visually. Sony's Eye showed that simple movements can be recognized relatively easily. Tracking more complicated 3-D movements in irregular lighting is more difficult, however. Startups, including Xtr3D, based in Israel, and SoftKinetic, based in Belgium, are developing computer-vision software that uses infrared for whole-body-sensing gaming applications. Oblong, a startup based in Los Angeles, has developed a "spatial operating system" that recognizes gestural commands, provided the user wears a pair of special gloves.

Force Feedback
More specialized haptic controllers include the PHANTOM, made by SensAble, based in Woburn, MA. These devices are already used for 3-D design and medical training - for example, allowing a surgeon to practice a complex procedure using a simulation that not only looks, but also feels, realistic. Haptics could soon add another dimension to touch screens too, by better simulating the feeling of clicking a button when an icon is touched. Vincent Hayward, a leading expert in the field at McGill University, in Montreal, Canada, has demonstrated how to generate different sensations associated with different icons on a "haptic button". In the long term, Hayward believes that it will even be possible to use haptics to simulate the sensation of textures on a screen.

Voice Recognition
This is now changing. As computers become more powerful and parsing algorithms smarter, speech recognition will continue to improve, says Robert Weidmen, VP of marketing for Nuance, the firm that makes Dragon NaturallySpeaking. Last year, Google launched a voice search app for the iPhone, allowing users to search without pressing any buttons.
Another iPhone application, called Vlingo, can be used to control the device in other ways: in addition to searching, a user can dictate text messages and e-mails, or update his or her status on Facebook with a few simple commands. In the past, the challenge has been squeezing enough processing power into a cell phone. Now, however, faster data-transfer speeds mean that it's possible to use remote servers to seamlessly handle the number crunching required.

Augmented Reality
The earliest augmented-reality interfaces required complex and bulky motion-sensing and computer-graphics equipment. More recently, cell phones featuring powerful processing chips and sensors have brought the technology within the reach of ordinary users. Examples of mobile augmented reality include Nokia's Mobile Augmented Reality Application (MARA) and Wikitude, an application developed for Google's Android phone operating system. Both allow a user to view the real world through a camera screen with virtual annotations and tags overlaid on top. With MARA, this virtual data is harvested from the points of interest stored in the NavTeq satellite navigation application. Wikitude, as the name implies, gleans its data from Wikipedia. These applications work by monitoring data from an arsenal of sensors: GPS receivers provide precise positioning information, digital compasses determine which way the device is pointing, and magnetometers or accelerometers calculate its orientation. A project called Nokia Image Space takes this a step further by allowing people to store experiences - images, video, sounds - in a particular place so that other people can retrieve them at the same spot.

Spatial Interfaces
Google's Latitude, for example, lets users show their position on a map by installing software on a GPS-enabled cell phone. As of October 2008, some 3,000 iPhone apps were already location aware.
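The sensor fusion described under Augmented Reality above - a GPS fix plus a compass heading deciding which tags to draw on the camera screen - can be sketched roughly as follows. The field of view and coordinates are illustrative assumptions, not details taken from MARA or Wikitude.

```python
# Sketch of the core AR-browser decision: given the phone's GPS fix and
# compass heading, is a point of interest inside the camera's field of view?
# Coordinates and the 60-degree field of view are illustrative assumptions.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def in_view(heading_deg, poi_bearing_deg, fov_deg=60):
    """True if the POI's bearing lies within the camera's horizontal FOV."""
    diff = (poi_bearing_deg - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# A POI due east of the user appears only while the phone points roughly east:
poi = bearing_deg(46.5, 6.6, 46.5, 6.7)  # ~90 degrees (east)
```

The accelerometer/magnetometer readings the article mentions would additionally supply the device's tilt, so that tags can be placed vertically on screen as well; that part is omitted here.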
One such iPhone application is iNap, which is designed to monitor a person's position and wake her up before she misses her train or bus stop. The idea for it came after Jelle Prins, of Dutch software development company Moop, was worried about missing his stop on the way to the airport. The app can connect to a popular train-scheduling program used in the Netherlands and automatically identify your stops based on your previous travel routines. SafetyNet, a location-aware application developed for Google's Android platform, lets users define parts of town that they deem to be generally unsafe. If they accidentally wander into one of these no-go areas, the program becomes active and will sound an alarm and automatically call 911 on speakerphone in response to a quick shake.

Brain-Computer Interfaces
Surgical implants or electroencephalogram (EEG) sensors can be used to monitor the brain activity of people with severe forms of paralysis. With training, this technology can allow "locked-in" patients to control a computer cursor to spell out messages or steer a wheelchair. Some companies hope to bring the same kind of brain-computer interface (BCI) technology to the mainstream. Last month, NeuroSky, based in San Jose, CA, announced the launch of its Bluetooth gaming headset, designed to monitor simple EEG activity. The idea is that gamers can gain extra powers depending on how calm they are. Beyond gaming, BCI technology could perhaps be used to help relieve stress and information overload. A BCI project called the Cognitive Cockpit (CogPit) uses EEG information in an attempt to reduce the information overload experienced by jet pilots. The project, which was formerly funded by the U.S. government's Defense Advanced Research Projects Agency (DARPA), is designed to discern when the pilot is being overloaded and manage the way that information is fed to him.
For example, if he is already verbally communicating with base, it may be more appropriate to warn him of an incoming threat using visual means rather than through an audible alert. "By estimating their cognitive state from one moment to the next, we should be able to optimize the flow of information to them," says Blair Dickson, a researcher on the project with U.K. defense-technology company QinetiQ. Copyright Technology Review 2009. -----

Related Links:
Personal comment: A little overview of the "coolest interfaces", past, present, or still on their way (for Augmented Reality, or Brain-Computer Interfaces).
Posted by Patrick Keller
in Interaction design, Science & technology
at 09:03
Defined tags for this entry: design (interactions), devices, history, interaction design, interface, science & technology
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research. We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as an archive, reference and resource. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.