In case you missed it, Sony's got a thing for 3D, with big plans to push the technology into your living room next year. While the first application will be the flat-screen TV, Sony's obviously thinking about other displays, judging by this tiny prototype set for reveal at Tokyo's Digital Content EXPO 2009 on Thursday. The 13 × 27-cm device packs a stereoscopic, 24-bit color image measuring just 96 × 128 pixels, viewable from any angle through 360 degrees without special glasses. If the prototype ever hits the assembly line, Sony envisions commercial uses in digital signage or medical imaging -- or as a 3D photo frame, television, house for your virtual pet, or visualizer to assist with web shopping at home. We'll be on hand for the unveiling on Thursday with live coverage and hands-on impressions; check back then for more.
The Sydney Morning Herald further reports that these “new tools were introduced in partnership with the Danish Government ahead of the United Nations Climate Change Convention in December.” And as HuffingtonPost reports, “Al Gore stars” in the Google Earth climate simulator video:
Researchers plan to offer more than just directions with innovations in software and hardware.
By Kristina Grifantini
Augmented games: In this game, developed by researchers at Columbia University, a player holds a flat board and sees three-dimensional objects projected onto it through a head-worn display. The player tilts the game board to control a virtual ball.
Credit: Ohan Oda and Steve Feiner, Columbia University
Augmented reality (AR), which involves superimposing virtual objects and information on top of the real world, may be coming to a phone near you. As mobile phones become packed with more sensors, better video capabilities, and faster processing power, many experts predict that AR will become increasingly common. But in a panel discussion today at EmTech@MIT in Cambridge, MA, panelists will admit that several obstacles still remain and that the "killer app" for augmented reality has yet to emerge.
Several AR apps have already been released for cell phones with positioning sensors. For example, PresseLite's Metro Paris app and Acrossair's Nearest Tube both provide iPhone users with augmented directions to nearby subway stops. AR apps are also available for phones powered by Google's Android platform. Layar, developed by SPRXmobile, based in the Netherlands, overlays information from Twitter, Flickr, and Wikipedia on real-world locations, while Wikitude, from Austria-based Mobilizy, displays tourist information collected from Wikipedia.
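Apps like Nearest Tube work by combining the phone's GPS fix with its compass heading: compute the distance and bearing from the user to each point of interest, then draw a label only when its bearing falls inside the camera's field of view. A minimal sketch of that idea (the function names and the 60-degree field of view are illustrative assumptions, not taken from either app):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees)
    from the user's position to a point of interest."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # haversine formula for distance
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    d = 2 * R * math.asin(math.sqrt(a))
    # initial bearing, normalized to [0, 360)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return d, bearing

def visible(bearing, heading, fov=60.0):
    """True if a POI at `bearing` falls inside the camera's field of
    view when the phone points toward `heading` (all in degrees)."""
    diff = (bearing - heading + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov / 2
```

A POI one degree of longitude due east on the equator, for instance, sits about 111 km away at a bearing of 90 degrees, and would be drawn only when the phone faces roughly east.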
Some researchers believe that AR represents a fundamentally new way to organize and interact with information. "In the future, we see augmented reality as a component of any kind of digital media interaction," says Mobilizy's CEO, Alexander Igelsboeck, who will speak at the EmTech@MIT session.
This week Mobilizy released a new language for AR called Augmented Reality Mark-up Language (ARML). With ARML, Mobilizy hopes to make it easier for programmers to create location-based content for AR applications. The company envisions ARML as equivalent to HTML for the Web, and Igelsboeck emphasizes the importance of open content and standardization for AR to take off. "We want to open those standards to be available for developer communities that can create innovative applications around this augmented experience," he says.
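The article doesn't reproduce the ARML syntax itself, so here is a purely hypothetical sketch of the underlying idea: markup that anchors content to coordinates the way HTML anchors content to a page. Every element and attribute name below is invented for illustration and is not the real ARML vocabulary:

```python
import xml.etree.ElementTree as ET

def poi_markup(name, lat, lon, description):
    """Build a hypothetical location-anchored markup element.
    Tag names here are illustrative, NOT the actual ARML schema."""
    poi = ET.Element("poi", attrib={"lat": str(lat), "lon": str(lon)})
    ET.SubElement(poi, "name").text = name
    ET.SubElement(poi, "description").text = description
    return ET.tostring(poi, encoding="unicode")

# A browser like Wikitude would render such an entry as a floating
# label when the camera points at those coordinates.
snippet = poi_markup("Cafe Example", 48.2082, 16.3738,
                     "Reviews overlaid in the camera view")
```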
But many challenges still remain. For instance, the positioning technology currently available in cell phones falls short for sophisticated AR applications. The GPSs built into smart phones "were really not designed for AR," says panelist Steven Feiner, a professor of computer science at Columbia University. "They were designed for simpler applications."
Feiner, who has worked on AR for over a decade, notes that early examples of AR required wearing a computer backpack and using cumbersome head-mounted displays. "[But] the tracking that we used [in 2001] was much, much better," he says.
Feiner is focusing on less-mainstream applications for AR--he has developed one program that shows levels of carbon monoxide in Manhattan (see image above), and another that shows virtual labels for engineers--for example, a floating tag that says, "Remove this bolt using a 1/4-inch socket wrench." He adds that better object recognition and posture tracking, as well as a way to deal with direct sunlight, will help AR become more practical.
Another potential obstacle for AR is social acceptance. While people already text or check e-mail while they walk, looking through a phone can be awkward. Feiner suggests that well-designed goggles could help. "There's a very high bar of what people are willing to wear on their heads," he says.
Pollution visualized: Another application developed at Columbia shows carbon monoxide levels projected over New York City. The height of each ball reflects concentrations of the pollutant.
Credit: Sean White and Steve Feiner, Columbia University
Last spring, a group at the MIT Media Lab demoed an interface that avoids the need to look at a display altogether. Graduate student Pranav Mistry, a 2009 TR35 winner, developed SixthSense, a device that combines a webcam and a projector worn around the neck, along with colored markers on the fingers, to recognize a user's gestures and project information onto surfaces. (See a TR video of SixthSense in action here.)
"Your world can be augmented without you having to change your behavior and do anything extra [like] taking out your cell phone and starting an application," says MIT professor Pattie Maes, who heads the SixthSense project. Maes's group is also exploring technical applications for AR. "If my car stops working, I might open the hood and an expert might remotely see what I see and [then] project information in front of the engine, saying things like, 'Open this valve,'" explains Maes.
Nokia's Mobile Augmented Reality Applications and Mixed Reality Experiences projects aim to combine different kinds of hardware in AR applications. Ville-Veikko Mattila, a principal researcher at Nokia Research Center, believes that combining visual and audio information could be most practical. "I think it's clear that people won't be walking and holding a device upright. Therefore, the use of audio may be more intuitive," he says.
Mattila adds that AR could potentially combine social information and location-based services to give user-tailored recommendations. For example, an application could show what your friends think of a particular restaurant, instead of providing a guidebook's reviews.
"There's a lot of hype obviously," Feiner says. But ultimately he agrees that AR may be able to help people with their daily lives. "Like being able to get somewhere, find information, or recognize a face of a person you know, but can't remember the name of," he says.
You know what's absolutely useless? A video of Wipeout HD being played in 3D, with some schmuck wearing 3D glasses and babbling on about how much fun he's having. Well, that schmuck is this Engadget editor, the video can be found after the break, and we've gotta say: we loved it. Especially for something like Wipeout HD, whose neon-infused tracks make for an almost too-convenient example of rapidly approaching vanishing points, we'd say 3D could really be a quasi-"killer app" for consoles going forward -- especially if those fancy new motion controllers don't catch on for Microsoft and Sony. In many ways, 3D just seems to make more sense in a video game than in a movie, and the whole problem of finding content to deliver in the format has already been solved: a software update for the PS3 sometime in 2010 will enable it to provide a 3D viewing experience for "all" existing games on the system. We're sure there will be some exceptions, but it sounds very promising. The console itself pumps out a perfectly ordinary signal over HDMI, which the TV syncs up with your 3D glasses. A 200Hz TV, for instance, alternates 1080p frames, with 100Hz for each eye. Of course, you'll need a brand new TV, but at least it won't be restricted to just Sony televisions.
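The frame-sequential scheme described above is simple arithmetic: the panel alternates left-eye and right-eye frames, so each eye effectively sees half the refresh rate. A trivial sketch using the article's numbers (the function names are just illustrative):

```python
def per_eye_rate(panel_hz):
    """Frame-sequential 3D alternates left- and right-eye frames,
    so each eye sees half the panel's refresh rate."""
    return panel_hz / 2.0

def eye_frame_ms(panel_hz):
    """How long each single-eye frame stays on screen, in milliseconds."""
    return 1000.0 / panel_hz

# The article's example: a 200 Hz panel delivers 100 Hz per eye,
# with each eye's frame shown for 5 ms while the glasses shutter.
```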
If content producers start using the new 3D screens (which have existed for several years now without much success), it may push the 3D technologies used in TVs to converge on a common standard that is reliable for games... but also for 3D movies... unless it instead triggers a new standards war...
Renzo Piano’s latest project, the Shard, has recently moved into the construction phase. The 1,016-ft-high skyscraper will be the tallest building in Western Europe and will provide amazing views of London. The mixed-use tower, complete with offices, apartments, a hotel and spa, retail areas, restaurants and a 15-storey public viewing gallery, will sit adjacent to London Bridge station as part of a new development called London Bridge Quarter. Replacing the 1970s Southwark Towers on London Bridge Street, the Shard is a welcome addition to the London skyline, and its central location near major transportation nodes will play a key role in allowing London to expand.
More about the tower after the break.
Known for elegant, light and detail-oriented buildings, Piano has composed the Shard of several glass facets that incline inwards but do not meet at the top. Inspired by the towering church spires and the masts of ships that once anchored on the Thames, the Shard’s form was generated by the irregular site plan, and its top is left open to the sky to allow the building to breathe naturally.
Planned as a “vertical city” to address London’s growing population and the need to maximize space, the Shard’s program is varied, providing a functional central structure for the city. The ground level will include a public piazza with restaurants and cafes, in addition to areas for art installations. The 50,000 sqm of office space includes naturally ventilated winter gardens, while the 195 hotel rooms and the exclusive apartments on the upper floors showcase beautiful views. While the Shard offers luxurious spaces sure to be coveted by companies and residents, the building also caters to the public with viewing platforms on floors 68-72. Accessed directly from an entrance on the ground level, these viewing galleries are expected to attract over half a million visitors each year.
The mixed program is attractive to many and will let the Shard contribute to London’s future development. The tower is due for completion in 2012.
I am republishing this Renzo Piano project (published in ArchDaily) not so much because the project itself interests me, but because there is a slippage in the production of architectural images toward science fiction (same tools = same images?).
Honestly, doesn't the first image immediately remind you of another image, one already seen at the movies (which I have therefore allowed myself to add to the article)? For me it was instant: the planet-city of Coruscant in Star Wars.
Come on... A few more buildings and some flying cars and we'll be there!
Could there be a "becoming Star Wars" in London's urbanism? After the retro "becoming Hong Kong" of the hyper-postmodern Los Angeles of Blade Runner?
Andy Huntington / Drew Allan: Cylinder (“Seahorses”, “Designed”, “Market”)
Cylinder, by Andy Huntington and Drew Allan, is an elegant series of data sculptures based on sound analysis. A mapping of the frequency and time domains produces cylindrical forms representing the spatial characteristics of the sound input. Physical versions of the digital 3D models are then 3D-printed using stereolithography.
The idea of mapping sound to space is not unfamiliar. The Cylinder project shows strategies similar to those used in the exhibition Frozen, which presented sound as a continuous space rather than as a one-dimensional signal. However, Cylinder dates from 2003, predating Frozen and making it an early example of the data-sculpture genre.
There is a tangential similarity between Huntington’s pristine objects and Booshan & Widrig’s Binaural object. But in fact the spiky geometries of both works result from the numeric data underlying the form. Any data set will yield inherent patterns, and in the case of digital sound two “defaults” present themselves: the waveform (a 1-D graph) and the spectral map found through FFT analysis, a 2-D map of spectral energies over time. Any translation of these numeric representations into visual form must grapple with the fact that, while they may be faithful representations of the data, they rarely give a good idea of how the sound is experienced by a human listener.
The Cylinder series shows a range of different waveforms, some displaying an apparently orderly structure, others suggesting a noisier sound input. Titles like “Seahorse”, “Design” and “Breath” imply the source sounds used to produce the forms. Their success as aesthetic objects derives from their complexity as well as from the clean quality given by the 3D printing process.
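The time-and-frequency-to-space mapping described above can be sketched in a few lines: take short-time FFT magnitudes, treat frame index as height along the cylinder's axis, frequency bins as angular positions, and spectral energy as radial displacement. This is a generic sketch of the idea, with illustrative frame sizes and scaling, not Huntington and Allan's actual pipeline:

```python
import math

import numpy as np

def cylinder_points(signal, frame=256, hop=128):
    """Map a 1-D signal to 3-D points on a deformed cylinder:
    frame index -> height (z), frequency bin -> angle around the
    axis, spectral magnitude -> radial displacement."""
    points = []
    n_bins = frame // 2
    for t, start in enumerate(range(0, len(signal) - frame, hop)):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))[:n_bins]
        peak = float(spectrum.max()) or 1.0  # avoid division by zero on silence
        for k, mag in enumerate(spectrum):
            theta = 2.0 * math.pi * k / n_bins
            r = 1.0 + float(mag) / peak      # unit cylinder deformed by energy
            points.append((r * math.cos(theta), r * math.sin(theta), t * 0.1))
    return points

# A pure tone produces a ridge at one angular position on every ring;
# noisier input would spread the deformation around the circumference.
tone = np.sin(2.0 * np.pi * np.arange(2048) / 16.0)
pts = cylinder_points(tone)
```

The resulting point cloud could then be meshed and handed to a stereolithography printer, which is roughly the digital-to-physical step the project describes.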
Coinciding with the 40th anniversary of the first Moon landing, Google has introduced a new feature in Google Earth, adding Earth’s most faithful follower to the popular geo application.
Google has been diligently adding data to Google Earth, expanding the geo-centric app to cover the sky, the ocean’s depths and the Red Planet. You can now explore the Moon from the same icon in the top toolbar that holds Sky, Mars and Earth. Fire it up, and you can explore lunar imagery, historical data, images and videos from the Apollo missions, panoramic images of the moon, 3D models of lunar modules and more.
Check out a brief introduction to this new feature in the video below. You can download the latest version of Google Earth 5.0 here.
It’s a conceptual piece, but alarmingly plausible. Be careful you don’t venture out to the Republican fundraiser with your Gay Disco Party Animal ID turned on.
The piece is a Cunningham dance work reconstructed from textual deconstructions of other Cunningham dance works. Each finger has an associated excerpt from an article, review, or essay on Cunningham from the last five decades. These texts become the "ink" with which each finger manifests its movements. Each text is dynamically typeset in three-dimensional space along the curves traced by his fingertips.
The software keeps track of various movement parameters which it uses to modulate aspects of the visualization such as letter size, camera position, angle, and zoom. Merce not only dances the dance, but becomes typesetter and cinematographer, conducting the audience's view of the dance.
What, from the outside, appear to be subtle manipulations of the hands become a beautiful tangle of diving flocks and waterfalls of letters. Presenting dance in this way, we hope to get closer to the experience of the dance from the inside out.
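The mechanism described above, text laid out along fingertip paths with letter size driven by movement, can be sketched as follows. The sampling scheme, the scaling constant, and the stand-in helix trajectory are all illustrative assumptions, not the piece's actual code:

```python
import math

def typeset_along_path(text, path):
    """Place one character at each sampled fingertip position,
    scaling letter size with instantaneous speed (faster -> larger)."""
    placed = []
    for i, ch in enumerate(text):
        x, y, z = path[i % len(path)]
        if i == 0:
            speed = 0.0
        else:
            px, py, pz = path[(i - 1) % len(path)]
            speed = math.sqrt((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2)
        size = 1.0 + 4.0 * speed   # illustrative modulation constant
        placed.append({"char": ch, "pos": (x, y, z), "size": size})
    return placed

# An illustrative helix standing in for a tracked fingertip trajectory.
helix = [(math.cos(t / 5.0), math.sin(t / 5.0), t / 20.0) for t in range(100)]
letters = typeset_along_path("Cunningham", helix)
```

In the actual work the same movement parameters would also steer the virtual camera, so the dancer "conducts" both the typesetting and the audience's viewpoint at once.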
The best way to see if something works or not is to try it out for yourself. FOXTEL Australia has sent us an email about their new website, called I Am Unique, which lets users create personalized 3D portraits that incorporate text, videos and images from social sites such as Facebook and Twitter. Sounds vague; luckily, they’ve also created one for us, and I have to admit, it’s pretty neat.
The portraits are really in 3D, and you can pan, zoom and rotate them as you like; the entire experience and the interface is somewhat similar to Microsoft’s Photosynth. Each portrait consists of several “fragments”; click on one, and it’ll flip over, revealing content - a tweet or an image, perhaps - drawn from one social network or another.
You can do all sorts of cool things with the fragments; for example, you can browse through them with the arrow keys, or separate them all with the spacebar (hit “controls” on the right side of the screen to see all the options). You can create your own portrait for free, or contribute to other portraits by adding photos, text, stories, videos or blog entries that are relevant to the portrayed entity.
Now, while the entire site is primarily a visual gimmick and a showcase of FOXTEL’s technology, used in their iQ2 set-top unit, it’s done very well and I can actually see people using it; it’s free, simple, easy, and the results are undeniably cool.
The return of the avatar... Here, an avatar built from fragments found on social networks. Something that would be both dynamic and more representative of one's online identity. It obviously reminds me of two projects we did (a long time ago): the reconstructed characters for Parisienne in 2000 (fragments of face images captured at different moments in time) and Knowscape_mobile (see link), between 2003 and 2005, where the avatar, built in real time, displayed the websites the person had visited (clickable on the avatar itself).
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.