Friday, March 13. 2015
Electronic Inks Make 3-D Printing More Promising | #print
----- A startup called Voxel8 is using materials expertise to extend the capabilities of 3-D printing. By Kevin Bullis
The quadcopter printed by Voxel8.
Three cofounders of Voxel8, a Harvard spinoff, are showing me a toy they’ve made. At the company’s lab space—a couple of cluttered work benches in a big warehouse it shares with other startups—a bright-orange quadcopter takes flight and hovers above tangles of wires, computer equipment, coffee mugs, and spare parts.
Voxel8 isn’t trying to get into the toy business. The hand-sized drone serves to show off the capabilities of the company’s new 3-D printing technology. Voxel8 has developed a machine that can print highly conductive ink for circuits alongside plastic. This makes it possible to do away with conventional circuit boards, whose size and shape constrain designs and add extra bulk to devices.
Conductive ink is just one of many new materials Voxel8 is planning to use to transform 3-D printing.
The new ink is not only highly conductive and printable at room temperature; it also stays where it’s put. Voxel8 uses the ink to connect conventional components—like computer chips and motors—and to fabricate some electronic components, such as antennas.
The company made the quadcopter by printing its plastic body layer by layer, periodically switching to printing conductive lines that became embedded by successive layers of plastic. At the appropriate points in the process, the Voxel8 team would stop, manually add a component, such as an LED, and then start the printer again.
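The interleaved process described above can be sketched in code. Voxel8 has not published its actual job format, so everything here — the step names, the layer spec, the function — is a hypothetical illustration of the workflow the article describes: plastic layers, conductive traces, and manual pauses to place components.

```python
# Hypothetical sketch of the interleaved print process described in the
# article: plastic layers, conductive traces embedded between them, and
# pauses where an operator places a component (e.g., an LED) by hand.

def build_print_plan(layers):
    """Flatten a per-layer spec into an ordered list of printer actions."""
    plan = []
    for i, layer in enumerate(layers):
        plan.append(f"print plastic layer {i}")
        if layer.get("traces"):
            plan.append(f"print conductive traces on layer {i}")
        if layer.get("insert"):
            # The printer pauses so an operator can place the part;
            # successive plastic layers then embed it.
            plan.append(f"pause: manually insert {layer['insert']}")
    return plan

# A toy version of the quadcopter job: four layers, one manual insertion.
quadcopter = [
    {"traces": False},
    {"traces": True, "insert": "LED"},
    {"traces": True},
    {"traces": False},
]

for step in build_print_plan(quadcopter):
    print(step)
```

The point of the sketch is only the ordering: conductive lines and manual insertions happen between plastic layers, so the finished part carries its wiring internally rather than on a separate board.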
The toy looks like something that could be made with conventional techniques. The real goal is to work with customers to discover new applications that can only be produced via 3-D printing. A video the company made to show off its technology starts by asking: “What would you do if you could 3-D print electronics?” While the founders have some ideas, they really don’t know what the technology is going to be particularly useful for.
Voxel8’s business plan is to start by selling the conductive ink and a desktop 3-D printer. The machine is designed primarily to produce prototypes, not to manufacture large quantities of finished product. The company’s long-term goal, however, is to create industrial manufacturing equipment that can print large numbers of specialized materials simultaneously, which will enable new kinds of devices.
The founders will draw on a large collection of novel materials—and strategies for designing new ones—developed over the last decade by cofounder Jennifer Lewis, a professor of biologically inspired engineering at Harvard (see “Microscale 3-D Printing”).
One of Lewis’s key insights has been how to design materials that flow under pressure—such as in a printer-head nozzle—but immediately solidify when the pressure is removed. This is done by engineering microscopic particles to spontaneously form networks that hold the material in place. Those particles can be made of various materials: strong structural ones that can survive high temperatures, as well as epoxies, ceramics, and materials for resistors, capacitors, batteries, motors, and electromagnets, among many other things (see “Printing Batteries”). “The long-term possibility is almost endless numbers of materials being coprinted together with superfine resolution,” says cofounder and hardware lead Michael Bell. “That’s far more interesting than printing a single material.”
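Lewis’s flow-under-pressure, hold-when-still behavior is characteristic of what rheologists call yield-stress fluids, often modeled with the Herschel–Bulkley relation: the material deforms only when shear stress exceeds a yield threshold. The sketch below illustrates that model with made-up parameter values; it is not Voxel8’s materials data.

```python
# Illustrative only: printable inks of this kind are commonly modeled as
# yield-stress (Herschel-Bulkley) fluids. They flow when the applied shear
# stress exceeds a yield stress tau_y, and hold their shape below it.
# All parameter values here are invented for illustration.

def herschel_bulkley_stress(shear_rate, tau_y=100.0, K=50.0, n=0.5):
    """Shear stress (Pa) at a given shear rate (1/s): tau = tau_y + K * rate^n."""
    if shear_rate <= 0:
        return 0.0
    return tau_y + K * shear_rate ** n

def flows(applied_stress, tau_y=100.0):
    """The ink deforms only above its yield stress; below it, it stays put."""
    return applied_stress > tau_y

# In the nozzle, pressure drives the stress well above the yield point:
print(flows(500.0))  # True  -> the ink extrudes
# Once deposited, gravity alone stays below the yield stress:
print(flows(20.0))   # False -> the printed line holds its shape
```

This is exactly the property the article describes: the same material behaves like a liquid inside the printer head and like a solid the instant it leaves the nozzle.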
Acoustic cloaking device hides objects from sound | #material
Via Eurekalert -----
DURHAM, N.C. -- Using little more than a few perforated sheets of plastic and a staggering amount of number crunching, Duke engineers have demonstrated the world's first three-dimensional acoustic cloak. The new device reroutes sound waves to create the impression that both the cloak and anything beneath it are not there. The acoustic cloaking device works in all three dimensions, no matter which direction the sound is coming from or where the observer is located, and holds potential for future applications such as sonar avoidance and architectural acoustics. The study appears online in Nature Materials.

"The particular trick we're performing is hiding an object from sound waves," said Steven Cummer, professor of electrical and computer engineering at Duke University. "By placing this cloak around an object, the sound waves behave like there is nothing more than a flat surface in their path."

To achieve this new trick, Cummer and his colleagues turned to the developing field of metamaterials -- the combination of natural materials in repeating patterns to achieve unnatural properties. In the case of the new acoustic cloak, the materials manipulating the behavior of sound waves are simply plastic and air. Once constructed, the device looks like several plastic plates with a repeating pattern of holes poked through them, stacked on top of one another to form a sort of pyramid.

To give the illusion that it isn't there, the cloak must alter the waves' trajectory to match what they would look like had they reflected off a flat surface. Because the sound is not reaching the surface beneath, it is traveling a shorter distance, and its speed must be slowed to compensate.

"The structure that we built might look really simple," said Cummer. "But I promise you that it's a lot more difficult and interesting than it looks. We put a lot of energy into calculating how sound waves would interact with it. We didn't come up with this overnight."
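The timing argument — a shorter path means the wave must be slowed so it arrives in step with a flat-surface echo — can be made concrete with a back-of-the-envelope sketch. The geometry and numbers below are invented for illustration; they are not from the Duke experiment.

```python
# Back-of-the-envelope sketch of the cloak's timing argument: a wave
# reflecting off the cloak's sloped surface travels a shorter path than one
# reflecting off the flat ground, so the cloak must slow the wave enough
# that both echoes arrive at the same moment. Numbers are illustrative.

C_AIR = 343.0  # speed of sound in air, m/s, at roughly room temperature

def required_speed(path_over_flat, path_over_cloak):
    """Effective sound speed inside the cloak so the travel times match."""
    t_flat = path_over_flat / C_AIR   # time the flat-surface echo takes
    return path_over_cloak / t_flat   # slower speed over the shorter path

# Example: an echo path of 2.0 m off flat ground vs. 1.8 m over the cloak.
v = required_speed(2.0, 1.8)
print(round(v, 1))  # the cloak must slow sound to about 308.7 m/s
```

The perforated plastic plates play exactly this role: the pattern of holes sets an effective sound speed in each region so that, from outside, the reflected field matches a flat surface.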
To test the cloaking device, researchers covered a small sphere with the cloak and "pinged" it with short bursts of sound from various angles. Using a microphone, they mapped how the waves responded and produced videos of them traveling through the air. Cummer and his team then compared the videos to those created with both an unobstructed flat surface and an uncloaked sphere blocking the way. The results clearly show that the cloaking device makes it appear as though the sound waves reflected off an empty surface.

Although the experiment is a simple demonstration that the technology is possible, and concealing an evil super-genius' underwater lair is still a long way off, Cummer believes that the technique has several potential commercial applications.

"We conducted our tests in the air, but sound waves behave similarly underwater, so one obvious potential use is sonar avoidance," said Cummer. "But there's also the design of auditoriums or concert halls -- any space where you need to control the acoustics. If you had to put a beam somewhere for structural reasons that was going to mess up the sound, perhaps you could fix the acoustics by cloaking it."
###
This research was supported by Multidisciplinary University Research Initiative grants from the Office of Naval Research (N00014-13-1-0631) and from the Army Research Office (W911NF-09-1-00539). "Three-dimensional broadband omnidirectional acoustic ground cloak," Zigoneanu L., Popa, B., Cummer, S.A. Nature Materials, March 9, 2014. DOI: 10.1038/NMAT3901
Personal Cloud? | #data #participant
Via Rhizome -----
Posted by Patrick Keller
in Culture & society, Science & technology
at 09:53
Defined tags for this entry: computing, culture & society, devices, make, participative, science & technology, users
Tuesday, March 03. 2015
Outing A.I.: Beyond the Turing Test | #A.I.
-----
Artificial Intelligence (A.I.) is having a moment, albeit one marked by crucial ambiguities. Cognoscenti including Stephen Hawking, Elon Musk and Bill Gates, among others, have recently weighed in on its potential and perils. After reading Nick Bostrom’s book “Superintelligence,” Musk even wondered aloud if A.I. may be “our biggest existential threat.”

Positions on A.I. are split, and not just on its dangers. Some insist that “hard A.I.” (with human-level intelligence) can never exist, while others conclude that it is inevitable. But in many cases these debates may be missing the real point of what it means to live and think with forms of synthetic intelligence very different from our own. That point, in short, is that a mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits.

This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life — part of how our tools work, how our cities move and how our economy builds and trades things.

Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for. This anthropocentric fallacy may contradict the implications of contemporary A.I. research, but it is still a prism through which much of our culture views an encounter with advanced synthetic cognition. The little boy robot in Steven Spielberg’s 2001 film “A.I. Artificial Intelligence” wants to be a real boy with all his little metal heart, while Skynet in the “Terminator” movies is obsessed with the genocide of humans. We automatically presume that the Monoliths in Stanley Kubrick and Arthur C. Clarke’s 1968 film, “2001: A Space Odyssey,” want to talk to the human protagonist Dave, and not to his spaceship’s A.I., HAL 9000.

I argue that we should abandon the conceit that a “true” Artificial Intelligence must care deeply about humanity — us specifically — as its focus and motivation. Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.

Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous. We need a popular culture of A.I. that is less parochial and narcissistic, one that is based on more than simply looking for a machine version of our own reflection. As a basis for staging encounters between various A.I.s and humans, that would be a deeply flawed precondition for communication. Needless to say, our historical track record with “first contacts,” even among ourselves, does not provide clear comfort that we are well-prepared.

II.

The idea of measuring A.I. by its ability to “pass” as a human – dramatized in countless sci-fi films, from Ridley Scott’s “Blade Runner” to Spike Jonze’s “Her” – is actually as old as modern A.I. research itself.
It is traceable at least to 1950, when the British mathematician Alan Turing published “Computing Machinery and Intelligence,” a paper in which he described what we now call the “Turing Test,” and which he referred to as the “imitation game.” There are different versions of the test, all of which are revealing as to why our approach to the culture and ethics of A.I. is what it is, for good and bad. In the most familiar version, a human interrogator asks questions of two hidden contestants, one a human and the other a computer. Turing suggests that if the interrogator usually cannot tell which is which, and if the computer can successfully pass as human, then can we not conclude, for practical purposes, that the computer is “intelligent”?

More people “know” Turing’s foundational text than have actually read it. This is unfortunate, because the text is marvelous, strange and surprising. Turing introduces his test as a variation on a popular parlor game in which two hidden contestants, a woman (player A) and a man (player B), try to convince a third that he or she is a woman by their written responses to leading questions. To win, one of the players must convincingly be who they really are, whereas the other must try to pass as another gender. Turing describes his own variation as one where “a computer takes the place of player A,” and so a literal reading would suggest that in his version the computer is not just pretending to be a human, but pretending to be a woman. It must pass as a she.

Other versions had it that player B could be either a man or a woman. It would seem a very different kind of game if only one player is faking, or if both are, or if neither of them is. Now that we give the computer a seat, we may have it pretending to be a woman along with a man pretending to be a woman, both trying to trick the interrogator into figuring out which is a man and which is a woman.
Or perhaps a computer pretending to be a man pretending to be a woman, along with a man pretending to be a woman, or even a computer pretending to be a woman pretending to be a man pretending to be a woman! In the real world, of course, we already have all of the above.

“The Imitation Game,” Morten Tyldum’s Oscar-winning 2014 film about Turing, reminds us that the mathematician himself also had to “pass” — in his case, as a straight man in a society that criminalized homosexuality. Upon discovery that he was not what he appeared to be, he was forced to undergo horrific medical treatments known as “chemical castration.” Ultimately the physical and emotional pain was too great, and he committed suicide. The episode was a grotesque tribute to a man whose contribution to defeating Hitler’s military was still at that time a state secret. Turing was only recently given a posthumous pardon, but the tens of thousands of other British men sentenced under similar laws have not been.

One notes the sour ironic correspondence between asking an A.I. to “pass” the test in order to qualify as intelligent — to “pass” as a human intelligence — and Turing’s own need to hide his homosexuality and to “pass” as a straight man. The demands of both bluffs are unnecessary and profoundly unfair.

Passing as a person, as a white or black person, or as a man or woman, for example, comes down to what others see and interpret. Because everyone else is already willing to read others according to conventional cues (of race, sex, gender, species, etc.), the complicity between whoever (or whatever) is passing and those among whom he or she or it performs is what allows passing to succeed. Whether an A.I. is trying to pass as a human or is merely in drag as a human is another matter. Is the ruse all just a game or, as for some people who are compelled to pass in their daily lives, an essential camouflage? Either way, “passing” may say more about the audience than about the performers.
We would do better to presume that in our universe, “thinking” is much more diverse, even alien, than our own particular case. The real philosophical lessons of A.I. will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be (and, for that matter, what being human can be).

III.

That we would wish to define the very existence of A.I. in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism. The legacy of that conceit helped to steer some older A.I. research down disappointingly fruitless paths, hoping to recreate human minds from available parts. It just doesn’t work that way. Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be “intelligent” doesn’t have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology. Airplanes don’t fly like birds fly, and we certainly don’t try to trick birds into thinking that airplanes are birds in order to test whether those planes “really” are flying machines. Why do it for A.I., then?

Today’s serious A.I. research does not focus on the Turing Test as an objective criterion of success, and yet in our popular culture of A.I. the test’s anthropocentrism retains durable conceptual importance. Like the animals who talk like teenagers in a Disney movie, other minds are conceivable mostly by way of puerile ventriloquism.

Where is the real injury in this? If we want everyday A.I. to be congenial in a humane sort of way, so what? The answer is that we have much to gain from a more sincere and disenchanted relationship to synthetic intelligences, and much to lose by keeping illusions on life support.
Some philosophers write about the possible ethical “rights” of A.I. as sentient entities, but that’s not my point here. Rather, the truer perspective is also the better one for us as thinking technical creatures.

Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear they were largely misunderstood by many readers. Relying on efforts to program A.I. not to “harm humans” (inspired by Isaac Asimov’s “three laws” of robotics from 1942) makes sense only when an A.I. knows what humans are and what harming them might mean. There are many ways that an A.I. might harm us that have nothing to do with its malevolence toward us, and chief among these is exactly following our well-meaning instructions to an idiotic and catastrophic extreme. Instead of mechanical failure or a transgression of moral code, the A.I. may pose an existential risk because it is both powerfully intelligent and disinterested in humans. To the extent that we recognize A.I. by its anthropomorphic qualities, or presume its preoccupation with us, we are vulnerable to those eventualities.

Whether or not “hard A.I.” ever appears, the harm is also in the loss of all that we prevent ourselves from discovering and understanding when we insist on protecting beliefs we know to be false. In his 1950 essay, Turing offers several rebuttals to objections against his speculative A.I., including a striking comparison with earlier objections to Copernican astronomy. Copernican traumas that abolish the false centrality and absolute specialness of human thought and species-being are priceless accomplishments. They allow for a human culture based on how the world actually is, more than on how it appears to us from our limited vantage point. Turing referred to these as “theological objections,” but one could argue that the anthropomorphic precondition for A.I. is a “pre-Copernican” attitude as well, however secular it may appear. The advent of robust inhuman A.I. may let us achieve another disenchantment, one that should enable a more reality-based understanding of ourselves, our situation, and a fuller and more complex understanding of what “intelligence” is and is not. From there we can hopefully make our world with greater confidence that our models are good approximations of what’s out there (always a helpful thing).

Lastly, the harm is in perpetuating a relationship to technology that has brought us to the precipice of a Sixth Great Extinction. Arguably, the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity. If you think so, Google “pig decapitating machine” (actually, just don’t) and then let’s talk about inventing worlds in which machines are wholly subservient to humans’ wishes. One wonders whether it is only from a society that once gave theological and legislative comfort to chattel slavery that this particular affirmation could still be offered in 2015 with such satisfied naïveté. This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I. It is time to move on. This pretentious folklore is too expensive.
Benjamin H. Bratton (@bratton) is an associate professor of visual arts at the University of California, San Diego. His next book, “The Stack: On Software and Sovereignty,” will be published this fall by the MIT Press.
---
Another text that can be read on the same topic, and that details the different positions (Musk, Hawking, etc.), is “Our Fear of Artificial Intelligence” by Paul Ford, in MIT Technology Review.
Posted by Patrick Keller
in Culture & society, Science & technology
at 09:44
Defined tags for this entry: artificial reality, culture & society, presence, science & technology, thinking
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research. We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as an archive of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.