Friday, March 25. 2011
Via Creative Applications
-----
FABRICATE is an International Peer Reviewed Conference with supporting publication and exhibition to be held at The Bartlett School of Architecture in London from 15-16 April 2011. Discussing the progressive integration of digital design with manufacturing processes, FABRICATE will bring together pioneers in design and making within architecture, construction, engineering, manufacturing, materials technology and computation.
Part of the exhibition is the work of ScanLAB, a research group run by Matthew Shaw and William Trossell at the Bartlett School of Architecture that explores the potential role of 3D scanning in Architecture, Design and Making. In 2010, 48 hours of scanning produced 64 scans of the Slade school’s entire exhibition space. These have been compiled to form a complete 3D replica of the temporary show which has been distilled into a navigable animation and a series of ‘standard’ architectural drawings.
The work becomes a confused collage of hours of delicately created lines and forms set within a feature-perfect representation of the exhibition space. Sometimes a model or image stands out as identifiable; more often a sketch merges into a model and an exhibition stand, creating a blurred hybrid of designs and authors. These drawings are the closest thing to an as-built drawing set for the entire exhibition, and an ‘as was’ representation of the Bartlett’s year.
The 3D model was produced using a Faro Photon 120 laser scanner ($40k). Navigation is handled by Pointools, generic point-cloud software capable of working with some of the largest point-cloud models: multi-billion-point datasets.
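Making a multi-billion-point dataset navigable typically relies on level-of-detail thinning. As a rough illustration (a toy sketch of one common technique, voxel-grid downsampling, not Pointools' actual, proprietary method):

```python
# Toy voxel-grid downsample: thin a point cloud by keeping the
# centroid of all points that fall into each cubic voxel.
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """points: iterable of (x, y, z) tuples; returns one centroid per occupied voxel."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    # Average the points in each voxel to get a single representative.
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]

# A dense strip of 1000 points collapses to a handful of representatives.
cloud = [(i * 0.01, (i * 7 % 10) * 0.01, 0.0) for i in range(1000)]
thinned = voxel_downsample(cloud, voxel_size=1.0)
print(len(cloud), "->", len(thinned))  # -> 1000 -> 10
```

Real viewers do this hierarchically (an octree of voxel levels), streaming finer detail only for the region the camera is looking at.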
For more information on FABRICATE, see http://www.fabricate2011.org
Exhibition Private View
6pm – 14th April 2011
Bartlett School of Architecture Gallery
Wates House, 22 Gordon Street
London WC1H 0QB
For tickets, see fabricate2011.org/registration/
(Thanks Ruairi)
See also Fragments of time and space recorded with Kinect+SLR on NYC Subway … and CITY OF HOLES on bldgblog
Personal comment:
Usually I'm not a big fan of realistic 3D architecture, but I find this "in between reality" of an incomplete or imperfect scan quite interesting (camera movements excepted...). As if the architecture were half appearing, or half disappearing, in an "in between time zone".
Via MIT Technology Review
-----
With a few snapshots, you can build a detailed virtual replica.
By Tom Simonite
Getting all the angles: Microsoft researcher Johannes Kopf demonstrates a cell phone app that can capture objects in 3-D.
Credit: Microsoft
Capturing an object in three dimensions needn't require the budget of Avatar. A new cell phone app developed by Microsoft researchers can be sufficient. The software uses overlapping snapshots to build a photo-realistic 3-D model that can be spun around and viewed from any angle.
"We want everybody with a cell phone or regular digital camera to be able to capture 3-D objects," says Eric Stollnitz, one of the Microsoft researchers who worked on the project.
To capture a car in 3-D, for example, a person needs to take a handful of photos from different viewpoints around it. The photos can be sent instantly to a cloud server for processing. The app then downloads a photo-realistic model of the object that can be smoothly navigated by sliding a finger over the screen. A detailed 360-degree view of a car-sized object needs around 40 photos; a smaller object like a birthday cake would need 25 or fewer.
If captured with a conventional camera instead of a cell phone, the photos have to be uploaded onto a computer for processing in order to view the results. The researchers have also developed a Web browser plug-in that can be used to view the 3-D models, enabling them to be shared online. "You could be selling an item online, taking a picture of a friend for fun, or recording something for insurance purposes," says Stollnitz. "These 3-D scans take up less bandwidth than a video because they are based on only a few images, and are also interactive."
To make a model from the initial snapshots, the software first compares the photos to work out where in 3-D space they were taken from. The same technology was used in a previous Microsoft research project, PhotoSynth, that gave a sense of a 3-D scene by jumping between different views (see video). However, PhotoSynth doesn't directly capture the 3-D information inside photos.
"We also have to calculate the actual depth of objects from the stereo effect," says Stollnitz, "comparing how they appear in different photos." His software uses what it learns through that process to break each image apart and spread what it captures through virtual 3-D space (see video, below). The pieces from different photos are stitched together on the fly as a person navigates around the virtual space to generate his current viewpoint, creating the same view that would be seen if he were walking around the object in physical space.
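The depth-from-stereo step Stollnitz describes rests on simple triangulation: a feature's apparent shift (disparity) between two views taken a known distance apart determines how far away it is. A minimal sketch of the classic rectified-stereo relation, with illustrative values (not Microsoft's implementation):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo relation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal shift of the same feature between the two images."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between views to triangulate depth")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 50 px between photos taken 0.5 m apart,
# with an 800 px focal length, sits 8 m from the camera.
print(depth_from_disparity(800, 0.5, 50))  # -> 8.0
```

Nearby objects shift more between views (larger disparity, smaller Z), which is why a handful of overlapping snapshots from different positions is enough to recover a rough 3-D layout.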
"This is an interesting piece of software," says Jason Hurst, a product manager with 3DMedia, which makes software that combines pairs of photos to capture a single 3-D view of a scene. However, using still photos does have its limitations, he points out. "Their method, like ours, is effectively time-lapse, so it can't deal with objects that are moving," he says.
3DMedia's technology is targeted at displays like 3-D TVs or Nintendo's new glasses-free 3-D handheld gaming device. But the 3-D information built up by the Microsoft software could be modified to display on such devices, too, says Hurst, because the models it builds contain enough information to create the different viewpoints for a person's eyes.
Hurst says that as more 3-D-capable hardware appears, people will need more tools that let them make 3-D content. "The push of 3-D to consumers has come from TV and computer device makers, but the content is lagging," says Hurst. "Enabling people to make their own is a good complement."
Copyright Technology Review 2011.
Wednesday, March 23. 2011
Via GOOD
-----
by Nicola Twilley
The photo above is, according to the BBC, an extremely rare photo of Barack Obama inside his top secret tent. The tent is an example of a mobile secure area also known as a Sensitive Compartmented Information Facility, or SCIF, "designed to allow officials to have top secret discussions on the move." In fact, the BBC reports, "they are one of the safest places in the world to have a conversation."
This particular SCIF has been set up in the middle of a hotel room in Brazil—you can see the carpet pattern on the floor. Obama was on a pre-arranged trip to Brazil when airstrikes in Libya began on Saturday, and needed a secure facility from which to talk to his Secretaries of State and Defense, as well as fellow coalition leaders.
While the tent material looks like fairly standard blue tarpaulin, it is actually completely soundproof, windowless, and "made from a secret material which is designed to keep emissions in and listening devices out." The BBC quotes Phil Lago, whose company, Command Consulting Group, regularly supplies SCIFs to government agencies, who explains that a "ring of electronic waves" ensures that only signals from an encrypted satellite phone can get in and out.
Apparently, the President never travels without his SCIF, which is surprisingly portable. According to Lago, "You can usually fit them into two large foot lockers and that's most of the equipment you need."
The exact specifications of these mobile security pods are top secret, and for most of us, this photo will be the closest we ever get to a SCIF. James Bond, eat your heart out!
Image: Barack Obama and advisers inside his SCIF, via the BBC; story via @bldgblog
Thursday, March 17. 2011
Via MIT Technology Review
-----
Device maps the chemistry of the whole brain in moving animals.
By Katherine Bourzac
Wearable PET: A rat’s head fits in the circular opening of this device, which is surrounded by miniaturized detectors and electronics.
Credit: Brookhaven National Laboratory
A tiny wearable scanner has been used to track chemical activity in the brains of unrestrained animals for the first time. By revealing neurological circuitry as the subjects perform normal tasks, researchers say, the technology could greatly broaden the understanding of learning, addiction, depression, and other conditions.
The device was designed to be used with rats—the main animal model used by behavioral neuroscientists. But the researchers who developed the device, at Brookhaven National Laboratory, say it would be straightforward to engineer a similar device for people.
Positron emission tomography, or PET, is already broadly used in neuroscience research and in clinical treatment. It allows researchers to track the location of radioactively labeled neurotransmitters (the chemicals that carry signals between neurons) or drugs within the brain. Images of the way neurotransmitters and drugs move through the brain can reveal the processes that underpin normal behavior such as learning as well as pathologies including addiction. PET has been used to map drug-binding sites in the brains of addicts and healthy people, and to study how those sites change over time and with therapy.
A conventional PET scanner is so large that these studies have to be performed with the subject lying inside a large tube. Large photomultiplier tubes amplify signals from gamma rays emitted by labeled chemicals in the brain. The signals then pass through a desk-sized rack of electronics that process them and map them to a particular region of the brain. To get good readings during animal studies, the subjects are typically anaesthetized or restrained. What's being measured is not normal waking behavior.
"We have very limited data about what brains do in the real world," says Paul Glimcher, professor of neuroscience, economics, and psychology at New York University. Glimcher was not involved with the work.
The new portable scanner is designed to provide the same information about brain chemistry while an animal behaves naturally. It is small and lightweight enough that a rat can carry it around on its head. "[The rat] can move freely, interact with other animals, and at the same time we can make a 3-D map of, for example, dopamine receptors throughout the brain," says David Schlyer, a senior scientist at Brookhaven who led the work.
Schlyer's group worked for years to engineer a miniature PET scanner that could be worn by a moving subject. The device consists of a metal ring suspended from a structure that bears its weight and allows the rat to move around. The rat's head goes inside the ring, which contains both detectors and electronics.
The key to miniaturizing the device, Schlyer says, was integrating all the electronics for each detector in the ring on a single, specialized chip. An avalanche photodiode also replaces the large photomultiplier tubes of conventional PET, amplifying the signals emitted by the labeled chemicals in the brain. "The rats take about an hour to acclimate, then begin behaving normally," says Schlyer. The Brookhaven device is described this week in the journal Nature Methods.
The Brookhaven group used the scanner to map the dopamine receptors throughout the entire brains of moving rats for the first time. Other groups, including Glimcher's, have previously used invasive probes to study dopamine levels in cubic-millimeter-sized portions of the brain in unrestrained animals, but have not been able to look at the entire brain.
Glimcher describes one of several experiments that could be done with the portable device. Researchers know that addicts who have successfully completed rehab are at great risk of relapse if they visit the places they associate with the drug, probably because their brain has been chemically rewired to respond to these associations. Glimcher imagines studies in rats that map brain chemistry when the animals are allowed to decide whether or not to take a drug, and when they wander into a location they have learned to associate with the drug.
"We don't really understand that well how circuits in [different parts of the brain] interact in addiction," says Glimcher. "To even get to a place where I can give you a clinical hypothesis, we have got to get more basic information. This is the breakthrough that could make that possible."
PET is not as broadly used in studies involving people as other neuroimaging methods because of the small but significant exposure to radiation that's necessary. Still, the Brookhaven researchers say it would be possible to make a wearable PET scanner that fits inside something resembling a football helmet. Joseph Huston, chair of the Center for Behavioral Neurosciences at the University of Düsseldorf, says the Brookhaven group has done "an incredible service" to the neuroscience community in developing the device. "The rat is the most important model for the brain—everything basic [we know] about learning, feeding, fear, sex, is based on work in the rat."
Schlyer says his group has talked with a few companies about licensing a commercial version of the device. But for now, they are mainly planning further behavioral studies in their lab. Mapping dopamine in waking animals could provide insights into a wide range of normal and pathological conditions such as the movement problems associated with Parkinson's disease. But dopamine is just one of the many brain chemicals the group can map. Schlyer says they will also study the sexual behavior of rats.
The group is also working on another instrument that combines PET with magnetic resonance imaging to provide richer information about tissue structure and function. They will start a clinical trial of this device in breast cancer patients next month.
Copyright Technology Review 2011.
-----
You can also read on the same subject: An On-Off Switch for Anxiety (MIT Tech Review as well)
Personal comment:
I blog this article from the MIT Technology Review because I increasingly believe that research in neuroscience will have a huge influence on how we understand the way humans (and animals, and possibly artificial intelligences) interact with each other, with their environment, with situations, etc., and how these behaviours can trigger specific patterns in the brain (neural, chemical and electrical activity, hormone secretions, etc.) and/or in the body, which in return certainly condition what we "feel" about the situation. This would also mean that a "feeling" is somehow very material (brain pattern, hormones, etc.).
So to say, I believe that spatial conditions in architecture or environments trigger certain brain states that could in fact be the direct "way" we experience an environment (comfortable, aggressive, "nice", "ugly", hot, cold, ...).
One step further, and we could possibly sometimes replace the space itself (or its experience) with the brain pattern (drugs?) triggering the same feeling.
For my part, I will keep a curious eye on the results of research in neuroscience ...
Wednesday, March 09. 2011
Via GOOD
-----
by Nicola Twilley
Wired just posted this incredible photo, taken by astrophotographer Alan Friedman, of the International Space Station, with the space shuttle Discovery attached, crossing the sun.
According to NASA, the International Space Station is about the width and length of an American football field. It took just 0.2 seconds to transit the sun, and Friedman nearly missed it:
The transit would be visible at 2:39 on March 1 [...]. Friedman was scheduled to give a talk about astrophotography from 12:30 to 1:30 pm. As soon as his talk was over, Friedman jumped in the car with fellow astrophotographers Brian Shelton and Mark Beale and raced after the sun.
"We got set up just in time to catch it," Friedman wrote. "I underestimated the narrowness of this event…another 500 feet and we would have missed it entirely. Lucky day!"
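The narrowness of the event is easy to sanity-check: the Sun spans about 0.53° of sky, and the ISS at roughly 400 km altitude moving at about 7.7 km/s sweeps through that angle in well under a second. A back-of-envelope estimate under these assumed round-number values (the exact duration depends on altitude, viewing angle, and where the chord crosses the disc):

```python
import math

# Rough transit-time estimate for an overhead pass, central chord.
sun_angular_diameter = math.radians(0.53)   # the Sun spans ~0.53 degrees of sky
iss_altitude_m = 400_000                    # assumed ~400 km altitude
iss_speed_m_s = 7_700                       # assumed ~7.7 km/s orbital speed

angular_speed = iss_speed_m_s / iss_altitude_m       # rad/s as seen from the ground
transit_time = sun_angular_diameter / angular_speed  # seconds to cross the disc
print(round(transit_time, 2))  # -> 0.48
```

An off-centre chord across the disc is shorter still, which pushes the window toward the 0.2 seconds Friedman caught.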
Photo: Alan Friedman, via Wired.