Thursday, July 31. 2014
(Un)natural Architectures: Scorpion Design Temperature-Controlled Burrows | #habitat
Via Archinect -----
Scientists have discovered that scorpions design their burrows to include both hot and cold spots. A long platform provides a sunny place to warm up before they hunt, whilst a humid chamber acts as a cool refuge during the heat of the day.
This recent discovery of scorpion architecture adds to a sizeable list of impressive non-human architecture.
Anthills consist of a complex network of paths. Relative to the size of an individual ant, these structures are mega-skyscrapers.
Likewise, termites build huge structures that have been dubbed "cathedrals." Reaching up to 6m high or more, termite cathedrals are clustered in large arrays that cover whole landscapes.
This complex web of branches was built by the Vogelkop gardener bowerbird. In direct refutation of the "less is more" aesthetic exemplified by both ants and Ludwig Mies van der Rohe, these birds embellish their structures with any bright things they can find.
Primates, including humans, are probably the most avid builders. For example, from an early age, orangutans learn to design and construct elaborately woven nests high in trees.
Far from trivial – and humor aside – studying animal architectures helps destabilize the normative understanding of architecture as a strictly human domain of activity. Certain studios – like Animal Architecture – both draw inspiration from non-human design and develop collaborative practices with non-humans. Decentering the human in architectural thinking is a necessary step in fostering a deeper understanding of the complex mesh of interconnectedness that is ecology. Without this step, humans will continue to practice architecture without regard for this larger context – a context in which the built environment already accounts for nearly half of US carbon emissions.
Posted by Patrick Keller in Architecture, Territory at 07:40
Defined tags for this entry: architecture, bioinspired, design (environments), landscape, life, territory, vernacular
Friday, July 25. 2014
Algorithmic creationism | #code #games
This algorithmic creationism game reminds me, to some extent, of the research led by philosophers, mathematicians, and physicists to prove that our own everyday world would be (or wouldn't be) the result of an extra-large simulation... Yet, funnily, even though this game world is announced as "algorithmically generated", planets populated by dinosaurs or similar creatures are still present, as is the dark emperor's cosmic fleet! There's probably some commercial determinism within their creationism rules... At some point, though, we could make the following comment: what is the fundamental difference between the use of algorithms to carve a digital world for a game (a computer-generated simulation) and the practice of many contemporary architects who use similar (generative) algorithms to carve physical buildings to live in – not to mention all the other algorithms that structure our everyday life? If not by somebody else, we are creating our own simulation, so to speak.
-----
No Man’s Sky: A Vast Game Crafted by Algorithms
A new computer game, No Man’s Sky, demonstrates a new way to build computer games filled with diverse flora and fauna. By Simon Parkin
The quality of the light on any one particular planet will depend on the color of its solar system’s sun.
Sean Murray, one of the creators of the computer game No Man’s Sky, can’t guarantee that the virtual universe he is building is infinite, but he’s certain that, if it isn’t, nobody will ever find out. “If you were to visit one virtual planet every second,” he says, “then our own sun will have died before you’d have seen them all.”

No Man’s Sky is a video game quite unlike any other. Developed for Sony’s PlayStation 4 by an improbably small team (the original four-person crew has grown only to 10 in recent months) at Hello Games, an independent studio in the south of England, it’s a game that presents a traversable universe in which every rock, flower, tree, creature, and planet has been “procedurally generated” to create a vast and diverse play area. “We are attempting to do things that haven’t been done before,” says Murray. “No game has made it possible to fly down to a planet, and for it to be planet-sized, and feature life, ecology, lakes, caves, waterfalls, and canyons, then seamlessly fly up through the stratosphere and take to space again. It’s a tremendous challenge.”

Procedural generation, whereby a game’s landscape is generated not by an artist’s pen but by an algorithm, is increasingly prevalent in video games. Most famously, Minecraft creates a unique world for each of its players, randomly arranging rocks and lakes from a limited palette of bricks whenever someone begins a new game (see “The Secret to a Video Game Phenomenon”). But No Man’s Sky is far more complex and sophisticated. The tens of millions of planets that comprise the universe are all unique. Each is generated when a player discovers it, and is subject to the laws of its respective solar system and vulnerable to natural erosion. The multitude of creatures that inhabit the universe dynamically breed and genetically mutate as time progresses. This is virtual world building on an unprecedented scale (see video below).
This presents numerous technological challenges, not least of which is how to test a universe of such scale during its development. The team is currently using virtual testers – automated bots that wander around taking screenshots, which are then sent back to the team for viewing.

Additionally, while No Man’s Sky might have an infinite-sized universe, there aren’t an infinite number of players. To avoid the problem of a kind of virtual loneliness, where a player might never encounter another person on his or her travels, the game starts every new player in the same galaxy (albeit on his or her own planet) with a shared initial goal of traveling to its center. Later in the game, players can meet up, fight, trade, mine, and explore. “Ultimately we don’t know whether people will work, congregate, or disperse,” Murray says. “I know players don’t like to be told that we don’t know what will happen, but that’s what is exciting to us: the game is a vast experiment.”

The game also bears the weight of unrivaled expectation. At the E3 video game conference in Los Angeles in June, no other game met with such applause. It is the game of many childhood science fiction dreams. For Murray, that is truer than for most. He was born in Ireland, but the family lived on a farm in the Australian outback, away from civilization. “At night you could see the vastness of space,” he says. “Meanwhile, we were responsible for our own electricity and survival. We were completely cut off. It had an impact on me that I carry through life.”

Murray formed Hello Games in 2009 with three friends, all of whom had previously worked at major studios. Hello Games’ first title, Joe Danger, let players control a stuntman. The game was, according to Murray, “annoyingly successful” in the sense that it locked him and his friends into a cycle of sequels that they had formed the company to escape. During the next few years the team made four Joe Danger games for seven different platforms.
“Then I had a midlife game development crisis,” says Murray. “It changes your mindset when a single game’s development represents a significant chunk of life.” Murray decided it was time to embark upon the game he’d imagined as a child, a game about frontiership and existence on the edge of the unexplored. “We talked about the feeling of landing on a planet and effectively being the first person to discover it, not knowing what was out there,” he says. “In this era in which footage of every game is recorded and uploaded to YouTube, we wanted a game where, even if you watched every video, it still wouldn’t be spoiled for you.”

When players discover a new planet, climb that planet’s tallest peak, or identify a new species of plant or animal, they are able to upload the discovery to the game’s servers, their name forever associated with the location, like a digital Christopher Columbus or Neil Armstrong. “Players will even be able to mark the planet as toxic or radioactive, or indicate what kind of life is there, and then that appears on everyone’s map,” says Murray.

Experimentation has been a watchword throughout the game’s production. Originally the game was entirely randomly generated. “Only around 1 percent of the time would it create something that looked natural, interesting, and pleasing to the eye; the rest of the time it was a mess and, in some cases where the sky, the water, and the terrain were all the same color, unplayable,” Murray says. So the team began to create simple rules, “such as the distance from a sun at which it is likely that there will be moisture,” he explains. “From that we decide there will be rivers, lakes, erosion, and weather, all of which is dependent on what the liquid is made from.
The color of the water and the atmosphere will derive from what the liquid is; we model the refractions to give you a modeled atmosphere.” Similarly, the quality of light will depend on whether the solar system has a yellow sun or, for example, a red giant or red dwarf. “These are simple rules, but combined they produce something that seems natural, recognizable to our eyes. We have come from a place where everything was random and messy to something which is procedural and emergent, but still pleasingly chaotic in the mathematical sense. Things happen with cause and effect, but they are unpredictable for us.”

At the blockbuster studios in which he once worked, 300-person teams would have to build content from scratch. Now, thanks to the increased power of PCs and video game consoles, a relatively tiny team is able to create games of unimaginable scope. In this sense, Hello Games may be on the cusp not only of a new universe, but also of an entirely new way of creating games. “When I look at game development in general I think the cost of creating content is the real problem,” he says. “The sheer amount of assets that artists must build to furnish a world is what forces so many safe creative bets. Likewise, you can’t have 300 people working experimentally. Game development is often more like building a skyscraper that has form and definition but is ultimately quite similar to what is around it. It never sat right with me to be in a huge warehouse with hundreds of people making a game. That is not the way it should be – and now it doesn’t have to be.”
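The rule-driven approach Murray describes – deterministic generation from a seed plus simple physical rules, rather than stored content or pure randomness – can be sketched in miniature. This is a toy illustration only; the rules, thresholds, and trait names below are invented for the example and are not Hello Games' actual code:

```python
import hashlib

def planet(seed: int) -> dict:
    """Derive a planet's traits deterministically from a seed.

    Toy rule-based procedural generation: the same seed always
    yields the same planet, and traits follow from a few simple
    rules rather than from independent random choices.
    """
    h = hashlib.sha256(str(seed).encode()).digest()
    # Rule: each solar system has a fixed type of sun.
    sun = ("yellow", "red giant", "red dwarf")[h[0] % 3]
    # Rule: the quality of light derives from the sun's color.
    light = {"yellow": "white", "red giant": "amber", "red dwarf": "dim red"}[sun]
    # Rule: moisture is likely only in a middle band of orbital distances.
    distance = 0.5 + (h[1] / 255) * 2.0        # arbitrary units
    has_liquid = 0.8 < distance < 1.6
    # Rule: terrain follows from the presence of liquid (rivers, erosion).
    terrain = "rivers, lakes, and erosion" if has_liquid else "dry canyons"
    return {"sun": sun, "light": light,
            "distance": round(distance, 2), "terrain": terrain}

# A planet is regenerated, identically, each time it is visited:
# nothing needs to be stored, only the seed.
assert planet(42) == planet(42)
```

Minecraft's seeded worlds rest on the same principle: a seed plus deterministic rules replaces hand-authored content, which is how a handful of developers can ship a universe far too large to build by hand.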
Thursday, July 24. 2014
New material that makes objects invisible to touch | #material
Via Sploid -----
You're looking at an awesome new nano-material that does the seemingly impossible: it hides things from touch. Just a thin layer of this amazing polymer will hide anything under it from being perceived by your sense of touch. In this photo you can see how it "absorbs" a metal cylinder.
How is this magic possible? According to the scientists at the Karlsruhe Institute of Technology, this "crystalline material structured with sub-micrometer accuracy [...] consists of needle-shaped cones, whose tips meet." It perfectly adapts to and absorbs the shape of anything under it. The metamaterial structure directs the forces of the touching finger such that the cylinder is hidden completely. Not only will your finger be unable to detect it, but a force-feedback measurement instrument will fail too. According to Tiemo Bückmann, the lead scientist on the project, "it is like in Hans Christian Andersen's fairy tale about the princess and the pea. The princess feels the pea in spite of the mattresses. When using our new material, however, one mattress would be sufficient for the princess to sleep well."
What does this mean in real life? The Karlsruhe Institute of Technology claims that the material was developed for purely experimental purposes, "but might open up the door to interesting applications in a few years from now, as it allows for producing materials with freely selectable mechanical properties. Examples are very thin, light, and still comfortable camping mattresses or carpets hiding cables and pipelines below." I like that. Carpets that can perfectly hide cables is something I'd pay money for. And I'd love a camping blanket that perfectly absorbs any rock and twig on the ground, leaving a smooth surface to sleep on.
Posted by Patrick Keller in Science & technology at 07:54
Defined tags for this entry: artificial reality, materials, nanotech, research, ressources, science & technology
What Else Could Smart Contact Lenses Do? | #vision
-----
Besides health tracking, contact lens technology under development could enable drug delivery, night vision, and augmented reality.
Last week Google and Novartis announced that they’re teaming up to develop contact lenses that monitor glucose levels and automatically adjust their focus. But these could be just the start of a clever new product category. From cancer detection and drug delivery to reality augmentation and night vision, our eyes offer unique opportunities for both health monitoring and enhancement. “Now is the time to put a little computer and a lot of miniaturized technologies in the contact lens,” says Franck Leveiller, head of research and development in the Novartis eye care division.

One of the Novartis-Google prototype lenses contains a device about the size of a speck of glitter that measures glucose in tears. A wireless antenna then transmits the measurements to an external device. It’s designed to ease the burden of diabetics who otherwise have to prick their fingers to test their blood sugar levels. “I have many patients that are managing diabetes, and they described it as having a part-time job. It’s so arduous to monitor,” says Thomas Quinn, who is head of the American Optometric Association’s contact lens and cornea section. “To have a way that patients can do that more easily and get some of their life back is really exciting.”

Glucose isn’t the only thing that can be measured from tears rather than a blood sample, says Quinn. Tears also contain a chemical called lacryglobin that serves as a biomarker for breast, colon, lung, prostate, and ovarian cancers. Monitoring lacryglobin levels could be particularly useful for cancer patients who are in remission, Quinn says.

Quinn also believes that drug delivery may be another use for future contact lenses. If a lens could dispense medication slowly over long periods of time, it would be better for patients than the short, concentrated doses provided by eye drops, he says. Such a lens is not easy to make, though (see “A Drug-Dispensing Lens”).
The autofocusing lens is in an earlier stage of development, but the goal is for it to adjust its shape depending on where the eye is looking, which would be especially helpful for people who need reading glasses. A current prototype of the lens uses photodiodes to detect light hitting the eye and determine whether the eye is directed downward. Leveiller says the team is also looking at other possible techniques.

Google and Novartis are far from the only ones interested in upgrading the contact lens with such new capabilities. In Sweden, a company called Sensimed is working on a contact lens that measures the intraocular pressure that results from the liquid buildup in the eyes of glaucoma patients (see “Glaucoma Test in a Contact Lens”). And researchers at the University of Michigan are using graphene to make infrared-sensitive contact lenses – the vision, as it were, is that these might one day provide some form of night vision without the bulky headgear.

A Seattle-based company, Innovega, meanwhile, has developed a contact lens with a small area that filters specific bands of red, green, and blue light, giving users the ability to focus on a very small, high-resolution display less than an inch away from their eyes without interfering with normal vision. That makes tiny displays attached to glasses look more like IMAX movie screens, says the company’s CEO, Steve Willey. Together, the lens and display are called iOptik.

Plenty of challenges still remain before we’re all walking around with glucose-monitoring, cancer-detecting, drug-delivering super night vision. Some prototypes out there are unusually thick, Quinn says, and some use traditional, rigid electronics where clear, flexible alternatives would be preferable. And, of course, all will have to pass regulatory approval to show they are safe and effective. Jeff George, the head of the Novartis eye care division, is certainly optimistic about Google’s smart lens.
“Google X’s team refers to themselves as a ‘moon shot factory.’ I’d view this as better than a moon shot given what we’ve seen,” he says.
Posted by Patrick Keller in Design, Science & technology at 07:50
Defined tags for this entry: artificial reality, design, design (interactions), design (products), devices, science & technology, screen, smart, vision, visualization
Wednesday, July 23. 2014
“Force Illusions” | #perception
-----
Could “Force Illusions” Help Wearables Catch On? By John Pavlus
What if the compass app in your phone didn’t just visually point north but actually seemed to pull your hand in that direction?
Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.

Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz—with the crankshaft accelerating sharply in one direction and then easing back more slowly—a distinctive pulling sensation emerged in the direction of the acceleration. With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).

The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says.
“It’s a strong sensation of force.” Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.

Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”

Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch). The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”
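The asymmetric-vibration principle Amemiya describes can be sketched numerically: over each cycle the actuator's acceleration averages to zero, so there is no net force, yet a brief, sharp spike in one direction dominates perception and reads as a steady pull. The waveform below is a generic illustration with made-up values, not the actual Buru-Navi3 or Traxion drive signal:

```python
def asymmetric_accel(phase: float, duty: float = 0.1) -> float:
    """Acceleration at a given phase (0..1) of one vibration cycle.

    For a fraction `duty` of the cycle the mass accelerates sharply
    in the "pull" direction; for the rest it eases back the other
    way. The magnitudes are chosen so the cycle averages to zero.
    """
    if phase < duty:
        return (1.0 - duty) / duty   # brief, sharp spike one way
    return -1.0                      # long, gentle return stroke

# Sample one cycle (e.g. one period of a 40 Hz actuator) at 100 points.
samples = [asymmetric_accel(i / 100) for i in range(100)]
assert abs(sum(samples) / len(samples)) < 1e-9   # zero mean: no net force...
assert max(samples) > 8.0 > abs(min(samples))    # ...but sharply one-sided
```

Because the spike is short and strong while the return stroke is long and weak, the skin registers the spike far more than the return, which is the illusion both devices exploit.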
Posted by Patrick Keller in Interaction design, Science & technology at 09:59
Defined tags for this entry: artificial reality, cognition, devices, interaction design, research, science & technology
fabric | rblg
This blog is the survey website of fabric | ch – studio for architecture, interaction and research. We curate and reblog articles, research, writings, exhibitions, and projects that we notice and find interesting during our everyday practice and reading. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking, and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.