Saturday, August 02. 2014
Note: in the end... it's time for me (too) to turn off my screens for a couple of weeks and maybe go look for this swimming pool! "Subjective collections of ..." and I will be back in early September.
The piece was completed last Friday and it consists of a single, diminutive swimming pool located somewhere in the southern Mojave Desert between Joshua Tree and Apple Valley. The public is allowed to use the pool, but in order to do so visitors need the key that unlocks it (it is kept covered) as well as the GPS coordinates. Only once you have the key, which is kept at the MAK Center, are you given the coordinates.
Friday, August 01. 2014
By fabric | ch
As we continue to lack a decent search engine on this blog, and as we don't have a "tag cloud" either... this post could help you navigate through all the content on | rblg, via its tags.
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG:
(to be seen just below if you're navigating on the blog's page or here for rss readers)
Posted by Patrick Keller in fabric | ch at 21:19
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, city, climate, clips, code, cognition, collaboration, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, fabric | ch, farming, fashion, fiction, films, food, form, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, internet, kinetic, knowledge, landscape, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, sharing, shopping, signage, smart, social, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, theory, thinkers, thinking, time, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, vr, war, weather, web, wireless, writing
By fabric | ch
As we lack a decent search engine on this blog and don't have a "tag cloud" either... (and as Summer is also a period when there might still be a bit of time left to dig into content)
HERE ARE ALL THE CURRENT CATEGORIES TO NAVIGATE ON | RBLG BLOG:
(to be seen below if you're navigating on the blog's page or here for rss readers)
Posted by Patrick Keller in fabric | ch, Architecture, Art, Culture & society, Design, Interaction design, Science & technology, Sustainability, Territory at 21:18
Defined tags for this entry: architecture, art, culture & society, design, fabric | ch, interaction design, science & technology, sustainability, territory, toorop
Thursday, July 31. 2014
Scientists have discovered that scorpions design their burrows to include both hot and cold spots. A long platform provides a sunny place to warm up before they hunt, whilst a humid chamber acts as a cool refuge during the heat of the day.
This recent discovery of scorpion architecture adds to a sizeable list of impressive non-human architecture.
Anthills consist of a complex network of paths. Relative to the size of an individual ant, these structures are mega-skyscrapers.
Likewise, termites build huge structures that have been dubbed "cathedrals." Reaching up to 6m high or more, termite cathedrals are clustered in large arrays that cover whole landscapes.
This complex web of branches was built by the Vogelkop gardener bowerbird. In direct refutation of the "less is more" aesthetic exemplified by both ants and Ludwig Mies van der Rohe, these birds embellish their structures with any bright things they can find.
Primates, including humans, are probably the most avid builders. For example, from an early age, orangutans learn to design and construct elaborately woven nests high in trees.
Far from trivial (and humor aside), studying animal architectures helps destabilize the normative understanding of architecture as a strictly human domain of activity. Certain studios, like Animal Architecture, both draw inspiration from non-human design and develop collaborative practices with non-humans. Decentering the human in architectural thinking is a necessary step in fostering a deeper understanding of the complex mesh of interconnectedness that is ecology. Without this step, humans will continue to practice architecture without regard for a larger context, which is why buildings already account for nearly half of US carbon emissions.
Friday, July 25. 2014
This algorithmic creationism game makes me think, to some extent, of the research led by philosophers, mathematicians, and physicists trying to prove that our own everyday world is (or isn't) the result of an extra-large simulation... Yet, funnily, even though this game world is announced as "algorithmically generated", planets populated by dinosaurs or similar creatures are still present, as is the dark emperor's cosmic fleet! There's probably some commercial determinism within their creationism rules...
At some point, though, we could make the following comment: what is the fundamental difference between the use of algorithms to carve a digital world for a game (a computer-generated simulation) and the practice of many contemporary architects who use similar (generative) algorithms to carve physical buildings to live in, not to mention all the other algorithms that structure our everyday life? If not by somebody else, we are creating our own simulation, so to say.
No Man’s Sky: A Vast Game Crafted by Algorithms
A new computer game, No Man’s Sky, demonstrates a new way to build computer games filled with diverse flora and fauna.
By Simon Parkin
The quality of the light on any one particular planet will depend on the color of its solar system’s sun.
Sean Murray, one of the creators of the computer game No Man’s Sky, can’t guarantee that the virtual universe he is building is infinite, but he’s certain that, if it isn’t, nobody will ever find out. “If you were to visit one virtual planet every second,” he says, “then our own sun will have died before you’d have seen them all.”
No Man’s Sky is a video game quite unlike any other. Developed for Sony’s PlayStation 4 by an improbably small team (the original four-person crew has grown only to 10 in recent months) at Hello Games, an independent studio in the south of England, it’s a game that presents a traversable universe in which every rock, flower, tree, creature, and planet has been “procedurally generated” to create a vast and diverse play area.
“We are attempting to do things that haven’t been done before,” says Murray. “No game has made it possible to fly down to a planet, and for it to be planet-sized, and feature life, ecology, lakes, caves, waterfalls, and canyons, then seamlessly fly up through the stratosphere and take to space again. It’s a tremendous challenge.”
Procedural generation, whereby a game’s landscape is generated not by an artist’s pen but by an algorithm, is increasingly prevalent in video games. Most famously, Minecraft creates a unique world for each of its players, randomly arranging rocks and lakes from a limited palette of bricks whenever someone begins a new game (see “The Secret to a Video Game Phenomenon”). But No Man’s Sky is far more complex and sophisticated. The tens of millions of planets that make up the universe are all unique. Each is generated when a player discovers it, is subject to the laws of its respective solar system, and is vulnerable to natural erosion. The multitude of creatures that inhabit the universe dynamically breed and genetically mutate as time progresses. This is virtual world building on an unprecedented scale (see video below).
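Hello Games hasn't published its code, but the core trick behind this kind of on-demand generation is straightforward to sketch: derive each planet's attributes deterministically from its coordinates and a shared seed, so that any player who reaches the same spot sees the same world without anything having to be stored on a server. A minimal Python illustration follows; the seed string and attribute ranges are invented for the example.

```python
import hashlib
import random

GLOBAL_SEED = "rblg-demo"  # hypothetical shared seed, not Hello Games' actual value

def planet_seed(galaxy: int, system: int, planet: int) -> int:
    """Derive a reproducible integer seed from a planet's coordinates."""
    key = f"{GLOBAL_SEED}:{galaxy}:{system}:{planet}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def generate_planet(galaxy: int, system: int, planet: int) -> dict:
    """Generate the same planet attributes for every player, only when visited."""
    rng = random.Random(planet_seed(galaxy, system, planet))
    return {
        "radius_km": round(rng.uniform(2_000, 9_000)),
        "sun_type": rng.choice(["yellow", "red giant", "red dwarf"]),
        "has_liquid": rng.random() < 0.4,
        "terrain_roughness": round(rng.random(), 2),
    }

# Identical coordinates always yield an identical planet, so nothing is stored.
assert generate_planet(1, 42, 3) == generate_planet(1, 42, 3)
print(generate_planet(1, 42, 3))
```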
This presents numerous technological challenges, not least of which is how to test a universe of such scale during its development. The team is currently using virtual testers: automated bots that wander around taking screenshots, which are then sent back to the team for viewing. Additionally, while No Man’s Sky might have an infinite-sized universe, there aren’t an infinite number of players. To avoid the problem of a kind of virtual loneliness, where a player might never encounter another person on his or her travels, the game starts every new player in the same galaxy (albeit on his or her own planet) with a shared initial goal of traveling to its center. Later in the game, players can meet up, fight, trade, mine, and explore. “Ultimately we don’t know whether people will work, congregate, or disperse,” Murray says. “I know players don’t like to be told that we don’t know what will happen, but that’s what is exciting to us: the game is a vast experiment.”
The game also bears the weight of unrivaled expectation. At the E3 video game conference in Los Angeles in June, no other game met with such applause. It is the game of many childhood science fiction dreams. For Murray, that is truer than for most. He was born in Ireland, but the family lived on a farm in the Australian outback, away from civilization. “At night you could see the vastness of space,” he says. “Meanwhile, we were responsible for our own electricity and survival. We were completely cut off. It had an impact on me that I carry through life.”
Murray formed Hello Games in 2009 with three friends, all of whom had previously worked at major studios. Hello Games’ first title, Joe Danger, let players control a stuntman. The game was, according to Murray, “annoyingly successful” in the sense that it locked him and his friends into a cycle of sequels that they had formed the company to escape. During the next few years the team made four Joe Danger games for seven different platforms. “Then I had a midlife game development crisis,” says Murray. “It changes your mindset when a single game’s development represents a significant chunk of life.”
Murray decided it was time to embark upon the game he’d imagined as a child, a game about frontiership and existence on the edge of the unexplored. “We talked about the feeling of landing on a planet and effectively being the first person to discover it, not knowing what was out there,” he says. “In this era in which footage of every game is recorded and uploaded to YouTube, we wanted a game where, even if you watched every video, it still wouldn’t be spoiled for you.”
When players discover a new planet, climb that planet’s tallest peak, or identify a new species of plant or animal, they are able to upload the discovery to the game’s servers, their name forever associated with the location, like a digital Christopher Columbus or Neil Armstrong. “Players will even be able to mark the planet as toxic or radioactive, or indicate what kind of life is there, and that then appears on everyone’s map,” says Murray.
Experimentation has been a watchword throughout the game’s production. Originally the game was entirely randomly generated. “Only around 1 percent of the time would it create something that looked natural, interesting, and pleasing to the eye; the rest of the time it was a mess and, in some cases where the sky, the water, and the terrain were all the same color, unplayable,” Murray says. So the team began to create simple rules, “such as the distance from a sun at which it is likely that there will be moisture,” he explains. “From that we decide there will be rivers, lakes, erosion, and weather, all of which is dependent on what the liquid is made from. The color of the water in the atmosphere will derive from what the liquid is; we model the refractions to give you a modeled atmosphere.”
Similarly, the quality of light will depend on whether the solar system has a yellow sun or, for example, a red giant or red dwarf. “These are simple rules, but combined they produce something that seems natural, recognizable to our eyes. We have come from a place where everything was random and messy to something which is procedural and emergent, but still pleasingly chaotic in the mathematical sense. Things happen with cause and effect, but they are unpredictable for us.”
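A toy version of that rule cascade is easy to write down. The thresholds and mappings below are invented for illustration, not Hello Games' values, but they show how a few hand-written rules layered on top of seeded randomness can produce a coherent-looking planet description.

```python
def derive_climate(distance_au: float, sun_type: str) -> dict:
    """Toy rule cascade: simple hand-tuned rules combine into a planet description.
    All thresholds and colour mappings here are illustrative guesses."""
    # Rule 1: liquid is likely only within a band of distances from the sun,
    # and that band shifts with the sun's type.
    habitable = {"yellow": (0.7, 1.6), "red giant": (3.0, 8.0), "red dwarf": (0.1, 0.4)}
    lo, hi = habitable[sun_type]
    moist = lo < distance_au < hi

    # Rule 2: liquid implies rivers, lakes, erosion, and weather.
    features = ["rivers", "lakes", "erosion", "weather"] if moist else ["dust", "craters"]

    # Rule 3: light quality follows the sun's colour.
    light = {"yellow": "white", "red giant": "warm red", "red dwarf": "dim red"}[sun_type]

    return {"moist": moist, "features": features, "light": light}

print(derive_climate(1.0, "yellow"))     # wet, eroded, white light
print(derive_climate(1.0, "red dwarf"))  # dry, cratered, dim red light
```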
At the blockbuster studios in which he once worked, 300-person teams would have to build content from scratch. Now, thanks to the increased power of PCs and video game consoles, a relatively tiny team is able to create games of unimaginable scope. In this sense, Hello Games may be on the cusp not only of a new universe, but also of an entirely new way of creating games. “When I look at game development in general I think the cost of creating content is the real problem,” he says. “The sheer amount of assets that artists must build to furnish a world is what forces so many safe creative bets. Likewise, you can’t have 300 people working experimentally. Game development is often more like building a skyscraper that has form and definition but is ultimately quite similar to what is around it. It never sat right with me to be in a huge warehouse with hundreds of people making a game. That is not the way it should be—and now it doesn’t have to be.”
Thursday, July 24. 2014
You're looking at an awesome new nano-material that does the seemingly impossible: it hides things from touch. Just a thin layer of this amazing polymer will hide anything under it from being perceived by your sense of touch. In this photo you can see how it "absorbs" a metal cylinder.
How is this magic possible?
According to the scientists at the Karlsruhe Institute of Technology, this "crystalline material structured with sub-micrometer accuracy [...] consists of needle-shaped cones, whose tips meet." It perfectly adapts to and absorbs the shape of anything under it.
The metamaterial structure directs the forces of the touching finger such that the cylinder is hidden completely.
Not only will your finger be unable to detect it; a force-feedback measurement instrument will fail too. According to Tiemo Bückmann, the lead scientist on the project, "it is like in Hans-Christian Andersen's fairy tale about the princess and the pea. The princess feels the pea in spite of the mattresses. When using our new material, however, one mattress would be sufficient for the princess to sleep well."
What does this mean in real life?
The Karlsruhe Institute of Technology claims that the material was developed for purely experimental purposes, "but might open up the door to interesting applications in a few years from now, as it allows for producing materials with freely selectable mechanical properties. Examples are very thin, light, and still comfortable camping mattresses or carpets hiding cables and pipelines below."
I like that. Carpets that can perfectly hide cables is something I'd pay money for. And I'd love a camping blanket that perfectly absorbs any rock and twig on the ground, leaving a smooth surface to sleep on.
Besides health tracking, contact lens technology under development could enable drug delivery, night vision, and augmented reality.
Last week Google and Novartis announced that they’re teaming up to develop contact lenses that monitor glucose levels and automatically adjust their focus. But these could be just the start of a clever new product category. From cancer detection and drug delivery to reality augmentation and night vision, our eyes offer unique opportunities for both health monitoring and enhancement.
“Now is the time to put a little computer and a lot of miniaturized technologies in the contact lens,” says Franck Leveiller, head of research and development in the Novartis eye care division.
One of the Novartis-Google prototype lenses contains a device about the size of a speck of glitter that measures glucose in tears. A wireless antenna then transmits the measurements to an external device. It’s designed to ease the burden of diabetics who otherwise have to prick their fingers to test their blood sugar levels.
“I have many patients that are managing diabetes, and they described it as having a part-time job. It’s so arduous to monitor,” says Thomas Quinn, who is head of the American Optometric Association’s contact lens and cornea section. “To have a way that patients can do that more easily and get some of their life back is really exciting.”
Glucose isn’t the only thing that can be measured from tears rather than a blood sample, says Quinn. Tears also contain a chemical called lacryglobin that serves as a biomarker for breast, colon, lung, prostate, and ovarian cancers. Monitoring lacryglobin levels could be particularly useful for cancer patients who are in remission, Quinn says.
Quinn also believes that drug delivery may be another use for future contact lenses. If a lens could dispense medication slowly over long periods of time, it would be better for patients than the short, concentrated doses provided by eye drops, he says. Such a lens is not easy to make, though (see “A Drug-Dispensing Lens”).
The autofocusing lens is in an earlier stage of development, but the goal is for it to adjust its shape depending on where the eye is looking, which would be especially helpful for people who need reading glasses. A current prototype of the lens uses photodiodes to detect light hitting the eye and determine whether the eye is directed downward. Leveiller says the team is also looking at other possible techniques.
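The article doesn't say how the photodiode readings are interpreted, but the general idea of inferring a downward gaze can be sketched very crudely: compare the light reaching the upper and lower parts of the lens and switch to reading focus when the lower region is shaded. The threshold below is purely illustrative, not from the Novartis-Google prototype.

```python
def looking_down(upper_lux: float, lower_lux: float, ratio: float = 2.0) -> bool:
    """Crude, illustrative inference: when the gaze drops, the lower part of the
    lens tends to be shaded relative to the upper part. 'ratio' is an invented
    threshold used only for this sketch."""
    if lower_lux <= 0:
        return True
    return upper_lux / lower_lux >= ratio

print(looking_down(upper_lux=480.0, lower_lux=150.0))  # True  -> switch to near focus
print(looking_down(upper_lux=300.0, lower_lux=290.0))  # False -> keep distance focus
```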
Google and Novartis are far from the only ones interested in upgrading the contact lens with such new capabilities. In Switzerland, a company called Sensimed is working on a contact lens that measures the intraocular pressure that results from the liquid buildup in the eyes of glaucoma patients (see “Glaucoma Test in a Contact Lens”). And researchers at the University of Michigan are using graphene to make infrared-sensitive contact lenses—the vision, as it were, is that these might one day provide some form of night vision without the bulky headgear.
A Seattle-based company, Innovega, meanwhile, has developed a contact lens with a small area that filters specific bands of red, green, and blue light, giving users the ability to focus on a very small, high resolution display less than an inch away from their eyes without interfering with normal vision. That makes tiny displays attached to glasses look more like IMAX movie screens, says the company’s CEO, Steve Willey. Together, the lens and display are called iOptik.
Plenty of challenges still remain before we’re all walking around with glucose-monitoring, cancer-detecting, drug-delivering super night vision. Some prototypes out there are unusually thick, Quinn says, and some use traditional, rigid electronics where clear, flexible alternatives would be preferable. And, of course, all will have to pass regulatory approval to show they are safe and effective.
Jeff George, the head of the Novartis eye care division, is certainly optimistic about Google’s smart lens. “Google X’s team refers to themselves as a ‘moon shot factory.’ I’d view this as better than a moon shot given what we’ve seen,” he says.
Wednesday, July 23. 2014
Could “Force Illusions” Help Wearables Catch On?
By John Pavlus
What if the compass app in your phone didn’t just visually point north but actually seemed to pull your hand in that direction?
Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.
Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz—with the crankshaft accelerating sharply in one direction and then easing back more slowly—a distinctive pulling sensation emerged in the direction of the acceleration.
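The ingredients reported here (asymmetric acceleration at roughly 10 hertz, a sharp spike one way and a slow return the other) are enough to sketch a toy drive waveform. The pulse shape and the 25 percent "sharp" fraction below are illustrative choices, not values taken from the published devices.

```python
import math

def asymmetric_acceleration(t: float, freq_hz: float = 10.0, sharp: float = 0.25) -> float:
    """Acceleration command for one vibration cycle: a brief, strong push in the
    'pull' direction followed by a longer, gentler return. The profile integrates
    to zero over a cycle (no net velocity change), but the brief spike is what
    the skin reads as a directional tug. 'sharp' is the fraction of the cycle
    spent on the strong push (illustrative value)."""
    phase = (t * freq_hz) % 1.0  # position within the current cycle, 0..1
    if phase < sharp:
        return math.sin(math.pi * phase / sharp)       # short, intense push
    return -sharp / (1.0 - sharp) * math.sin(          # long, weak return
        math.pi * (phase - sharp) / (1.0 - sharp)
    )

# Sample one cycle at 1 kHz, e.g. to drive or plot an actuator command.
cycle = [asymmetric_acceleration(i / 1000.0) for i in range(100)]
print(max(cycle), min(cycle))  # ~1.0 forward peak vs ~-0.33 return peak
```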
With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).
The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says. “It’s a strong sensation of force.”
Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.
Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”
Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch).
The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”
Friday, July 18. 2014
Have you ever tried to imagine how a fish soup would taste if its recipe were based on publicly available local fishing data? Or what a pizza would be like if it were based on Helsinki's population mix? Data Cuisine explores food as a means of data expression, or, if you like, edible diagrams. More about it on WMMNA.
The first time someone lays a 3D-printed piece of candy in your hand, you almost feel bad about eating it. The virtuosity of these pieces confuses the senses: stunning hexagonal structures cluster together like a complex chemical construction, full-color starburst patterns curve as if made from fabric, and neon geometrical shapes interlock without a single seam. On first glance, you think each one is a piece of art and meant to be consumed only by the eyes. But then you taste it and realize this is a whole new recipe.
Sugar 3D printing is a relatively new development and a fun sense-oriented detour under the “additive manufacturing” umbrella, which has often been largely about function. Not to mention this is a huge development in 3D printing materials alone, especially considering that they’re all edible. No chemicals allowed. If we can 3D print with sugar, you have to wonder how many more materials are out there that we haven’t even considered yet.
Most importantly, food 3D printing empowers us to build upon the culinary traditions that are so deeply imprinted on our cultural psyche. Food, as we can all attest, occupies a prominent space in the human experience. After all, we always seem to gravitate toward the kitchen as a gathering place, and one of the greatest pleasures of being human is making and enjoying a meal with someone else, whether it’s to catch up, celebrate, remember, or imagine the future. As culinary practices shift, so too do the experiences that surround them: they become heightened, enriched. This is exactly the kind of progression that food 3D printing will catalyze, as bakers, chefs, and confectioners take hold of capabilities never before realized, giving new shape to the moments of life that revolve around our food culture.
The Sugar Lab
The Sugar Lab at 3D Systems is the birthplace of sugar 3D printing. Think of it as our bakery and the place where all the amazing, sweet creations you see here come to life. Liz and Kyle von Hasseln, who began developing 3D printed food out of their small apartment while they were architecture graduate students, founded the Sugar Lab. For this husband-and-wife team, it started as a simple experiment with unusual 3D printing materials. They first attempted to print in wood, using sawdust, and later ceramics and concrete. Those all produced mixed results. But next, motivated by the need for a special birthday decoration, they tried sugar. After a few months spent perfecting the recipe, they realized they were onto something. A bit later, The Sugar Lab took form as a full-fledged business, with Kyle and Liz using a 3D Systems 3D printer that they’d retrofitted to be food safe.
Now as part of the 3D Systems family, their amazing invention has taken the next step with the introduction of the ChefJet 3D printer, the first sugar 3D printer available for restaurants, bakeries, catering companies, and more. We first revealed the ChefJet at International CES 2014, and the excitement has rightfully been through the roof. Since then, candy giant Hershey’s has joined our efforts to find delectable and captivating new ways to print candy.
As Kyle and Liz put it at CES, the ChefJet presents a fantastic new outlet for 3D printing to spread throughout mainstream culture. Food being such an integral part of our social interactions, our family gatherings, and our time at home, these edibles have the chance to open a lot of eyes to the personal power of 3D printing and its myriad uses.
How It Works
For those familiar with the different methods of 3D printing, sugar 3D printing is similar in principle to other technologies like ColorJet or Selective Laser Sintering (SLS). It uses a bed of powdered material (in this case sugar), flavoring, and sometimes cocoa powder. A stream of water bonds the sugar together within the material bed to form a single layer; then the build platform lowers, a new layer of sugar is spread over the build area, and the machine builds the next layer. It goes on like this, layer by layer, until the sculpture is finished.
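In software terms the process is just a loop over horizontal slices. The small simulation below uses a made-up voxel slab and no real hardware control, but it mirrors the steps described above: spread powder, bind only this layer's cross-section with water, lower the platform, repeat, then brush away the loose sugar.

```python
def cross_section(z: int) -> set:
    """Toy model to 'print': a 4 x 4 slab occupying layers 0-2 (voxel coordinates)."""
    return {(x, y) for x in range(4) for y in range(4)} if z < 3 else set()

def print_model(num_layers: int) -> set:
    part = set()  # voxels that have been bonded so far
    for z in range(num_layers):
        # 1. A fresh layer of powdered sugar is spread across the whole bed.
        # 2. A fine stream of water bonds only the voxels inside this layer's outline.
        part |= {(x, y, z) for (x, y) in cross_section(z)}
        # 3. The build platform lowers by one layer height; continue with the next slice.
    # 4. After the last layer, the unbound powder is brushed away, leaving the part.
    return part

print(len(print_model(num_layers=5)))  # 48 bonded voxels: the 4 x 4 x 3 slab
```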
The results, as you can see here, are just as magnificent as printing with plastic or metal. The ChefJet is virtually unlimited by the geometry or the complexity of the model you want to print. You can create interlocking pieces, perfectly straight lines, and smooth curves, all in full color if you desire. Considering the sugar sculptures that it creates, it makes sense that architects thought it up.
To date, The Sugar Lab and the ChefJet have created everything from customized sugar cubes and structural cake decorations to premium cocktail decorations and exact scale Ford Mustang replicas. Flavor choices are equally delicious with mint, cherry, sour apple, milk chocolate, and others.
But what I love about the ChefJet and other 3D printers is that they provide yet another tool and a multitude of other options when it comes to artistic applications. I discussed this in last month’s blog: 3D printing in this respect can supplement the traditional methods, and recipes, that we’ve developed over years and years. In this case, it’s about building on tradition, not overpowering or replacing it. So now bakers and confectioners can match their delectable flavors with never-before-seen visual aesthetics. They can have their cake and eat it too.
fabric | rblg
fabric | rblg is the survey website of fabric | ch -- studio for architecture, interaction and research. We curate and re-blog articles, research, exhibitions and projects that we notice during our everyday practice.