Friday, March 13. 2015
A startup called Voxel8 is using materials expertise to extend the capabilities of 3-D printing.
By Kevin Bullis
The quadcopter printed by Voxel8.
Three cofounders of Voxel8, a Harvard spinoff, are showing me a toy they’ve made. At the company’s lab space—a couple of cluttered workbenches in a big warehouse it shares with other startups—a bright-orange quadcopter takes flight and hovers above tangles of wires, computer equipment, coffee mugs, and spare parts.
Voxel8 isn’t trying to get into the toy business. The hand-sized drone serves to show off the capabilities of the company’s new 3-D printing technology. Voxel8 has developed a machine that can print highly conductive inks for circuits alongside plastic. This makes it possible to do away with conventional circuit boards, whose size and shape constrain designs and add extra bulk to devices.
Conductive ink is just one of many new materials Voxel8 is planning to use to transform 3-D printing.
The new ink is not only highly conductive and printable at room temperature; it also stays where it’s put. Voxel8 uses the ink to connect conventional components—like computer chips and motors—and to fabricate some electronic components, such as antennas.
The company made the quadcopter by printing its plastic body layer by layer, periodically switching to printing conductive lines that became embedded by successive layers of plastic. At the appropriate points in the process, the Voxel8 team would stop, manually add a component, such as an LED, and then start the printer again.
The toy looks like something that could be made with conventional techniques. The real goal is to work with customers to discover new applications that can only be produced via 3-D printing. A video the company made to show off its technology starts by asking: “What would you do if you could 3-D print electronics?” While the founders have some ideas, they really don’t know what the technology is going to be particularly useful for.
Voxel8’s business plan is to start by selling the conductive ink and a desktop 3-D printer. The machine is designed primarily to produce prototypes, not to manufacture large quantities of finished product. The company’s long-term goal, however, is to create industrial manufacturing equipment that can print large numbers of specialized materials simultaneously, which will enable new kinds of devices.
The founders will draw on a large collection of novel materials—and strategies for designing new ones—developed over the last decade by cofounder Jennifer Lewis, a professor of biologically inspired engineering at Harvard (see “Microscale 3-D Printing”).
One of Lewis’s key insights has been how to design materials that flow under pressure—such as in a printer-head nozzle—but immediately solidify when the pressure is removed. This is done by engineering microscopic particles to spontaneously form networks that hold the material in place. Those particles can be made of various materials: strong structural ones that can survive high temperatures, as well as epoxies, ceramics, and materials for resistors, capacitors, batteries, motors, and electromagnets, among many other things (see “Printing Batteries”).
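The flow-under-pressure, solid-at-rest behavior described above can be sketched with a standard rheological model. Below is a hedged illustration using the Herschel-Bulkley equation, a common way to describe yield-stress fluids; all parameter values are illustrative assumptions, not properties of Voxel8’s or Lewis’s actual inks.

```python
# Hedged sketch: yield-stress ("Herschel-Bulkley") behavior, often used to
# model printable inks that flow in a nozzle but hold their shape at rest.
# tau_y, k and n below are made-up illustrative values.

def shear_rate(stress_pa, tau_y=100.0, k=10.0, n=0.5):
    """Shear rate (1/s) produced by an applied shear stress (Pa).

    Below the yield stress tau_y, the network formed by the engineered
    particles holds the material in place: it does not flow at all.
    """
    if stress_pa <= tau_y:
        return 0.0
    return ((stress_pa - tau_y) / k) ** (1.0 / n)

print(shear_rate(50.0))   # at rest after deposition: 0.0, the ink stays put
print(shear_rate(500.0))  # under nozzle pressure: flows readily (1600.0 1/s)
```

The key property is the hard threshold: any stress below `tau_y` yields exactly zero flow, which is what lets a printed conductive trace stay where it is put while successive plastic layers are added.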
“The long-term possibility is almost endless numbers of materials being coprinted together with superfine resolution,” says cofounder and hardware lead Michael Bell. “That’s far more interesting than printing a single material.”
DURHAM, N.C. -- Using little more than a few perforated sheets of plastic and a staggering amount of number crunching, Duke engineers have demonstrated the world's first three-dimensional acoustic cloak. The new device reroutes sound waves to create the impression that both the cloak and anything beneath it are not there.
The acoustic cloaking device works in all three dimensions, no matter which direction the sound is coming from or where the observer is located, and holds potential for future applications such as sonar avoidance and architectural acoustics.
The study appears online in Nature Materials.
"The particular trick we're performing is hiding an object from sound waves," said Steven Cummer, professor of electrical and computer engineering at Duke University. "By placing this cloak around an object, the sound waves behave like there is nothing more than a flat surface in their path."
To achieve this new trick, Cummer and his colleagues turned to the developing field of metamaterials--the combination of natural materials in repeating patterns to achieve unnatural properties. In the case of the new acoustic cloak, the materials manipulating the behavior of sound waves are simply plastic and air. Once constructed, the device looks like several plastic plates with a repeating pattern of holes poked through them stacked on top of one another to form a sort of pyramid.
To give the illusion that it isn’t there, the cloak must alter the waves’ trajectory to match what they would look like had they reflected off a flat surface. Because the sound is not reaching the surface beneath, it is traveling a shorter distance, and its speed must be slowed to compensate.
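The compensation described above is simple kinematics: the echo off the cloak must arrive at the same time as the echo off the floor would have. A hedged sketch of that timing argument follows; the path lengths are illustrative numbers, not measurements from the Duke experiment.

```python
# Hedged sketch of the cloak's timing constraint: a wave reflecting off the
# cloak's sloped surface travels a shorter path than one reflecting off the
# floor, so its effective speed inside the cloak must be reduced to keep
# the arrival time (and thus the apparent reflection) unchanged.

C_AIR = 343.0  # approximate speed of sound in air, m/s

def required_speed(path_flat_m, path_cloak_m):
    """Effective sound speed along the cloaked path that preserves the
    travel time of the flat-surface reflection."""
    t_flat = path_flat_m / C_AIR   # time the echo *should* take
    return path_cloak_m / t_flat   # slower speed over the shorter path

print(required_speed(1.00, 0.80))  # ~274.4 m/s: the cloak must slow the wave
```

This is the role the perforated plastic plays: the repeating pattern of holes and air lowers the effective sound speed by just the right amount at each point of the pyramid.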
"The structure that we built might look really simple," said Cummer. "But I promise you that it's a lot more difficult and interesting than it looks. We put a lot of energy into calculating how sound waves would interact with it. We didn't come up with this overnight."
To test the cloaking device, researchers covered a small sphere with the cloak and "pinged" it with short bursts of sound from various angles. Using a microphone, they mapped how the waves responded and produced videos of them traveling through the air.
Cummer and his team then compared the videos to those created with both an unobstructed flat surface and an uncloaked sphere blocking the way. The results clearly show that the cloaking device makes it appear as though the sound waves reflected off an empty surface.
Although the experiment is a simple demonstration showing that the technology is possible and concealing an evil super-genius' underwater lair is a long ways away, Cummer believes that the technique has several potential commercial applications.
"We conducted our tests in the air, but sound waves behave similarly underwater, so one obvious potential use is sonar avoidance," said Cummer. "But there's also the design of auditoriums or concert halls--any space where you need to control the acoustics. If you had to put a beam somewhere for structural reasons that was going to mess up the sound, perhaps you could fix the acoustics by cloaking it."
This research was supported by Multidisciplinary University Research Initiative grants from the Office of Naval Research (N00014-13-1-0631) and from the Army Research Office (W911NF-09-1-00539).
"Three-dimensional broadband omnidirectional acoustic ground cloak," Zigoneanu L., Popa, B., Cummer, S.A. Nature Materials, March 9, 2014. DOI: 10.1038/NMAT3901
Tuesday, March 03. 2015
Artificial Intelligence (A.I.) is having a moment, albeit one marked by crucial ambiguities.
Cognoscenti including Stephen Hawking, Elon Musk and Bill Gates, among others, have recently weighed in on its potential and perils. After reading Nick Bostrom’s book “Superintelligence,” Musk even wondered aloud if A.I. may be “our biggest existential threat.”
Positions on A.I. are split, and not just on its dangers. Some insist that “hard A.I.” (with human-level intelligence) can never exist, while others conclude that it is inevitable. But in many cases these debates may be missing the real point of what it means to live and think with forms of synthetic intelligence very different from our own.
That point, in short, is that a mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits.
This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life — part of how our tools work, how our cities move and how our economy builds and trades things.
Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for. This anthropocentric fallacy may contradict the implications of contemporary A.I. research, but it is still a prism through which much of our culture views an encounter with advanced synthetic cognition.
The little boy robot in Steven Spielberg’s 2001 film “A.I. Artificial Intelligence” wants to be a real boy with all his little metal heart, while Skynet in the “Terminator” movies is obsessed with the genocide of humans. We automatically presume that the Monoliths in Stanley Kubrick and Arthur C. Clarke’s 1968 film, “2001: A Space Odyssey,” want to talk to the human protagonist Dave, and not to his spaceship’s A.I., HAL 9000.
I argue that we should abandon the conceit that a “true” Artificial Intelligence must care deeply about humanity — us specifically — as its focus and motivation. Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.
Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.
We need a popular culture of A.I. that is less parochial and narcissistic, one that is based on more than simply looking for a machine version of our own reflection. As a basis for staging encounters between various A.I.s and humans, that would be a deeply flawed precondition for communication. Needless to say, our historical track record with “first contacts,” even among ourselves, does not provide clear comfort that we are well-prepared.
The idea of measuring A.I. by its ability to “pass” as a human – dramatized in countless sci-fi films, from Ridley Scott’s “Blade Runner” to Spike Jonze’s “Her” – is actually as old as modern A.I. research itself. It is traceable at least to 1950 when the British mathematician Alan Turing published “Computing Machinery and Intelligence,” a paper in which he described what we now call the “Turing Test,” and which he referred to as the “imitation game.” There are different versions of the test, all of which are revealing as to why our approach to the culture and ethics of A.I. is what it is, for good and bad. For the most familiar version, a human interrogator asks questions of two hidden contestants, one a human and the other a computer. Turing suggests that if the interrogator usually cannot tell which is which, and if the computer can successfully pass as human, then can we not conclude, for practical purposes, that the computer is “intelligent”?
More people “know” Turing’s foundational text than have actually read it. This is unfortunate because the text is marvelous, strange and surprising. Turing introduces his test as a variation on a popular parlor game in which two hidden contestants, a woman (player A) and a man (player B) try to convince a third that he or she is a woman by their written responses to leading questions. To win, one of the players must convincingly be who they really are, whereas the other must try to pass as another gender. Turing describes his own variation as one where “a computer takes the place of player A,” and so a literal reading would suggest that in his version the computer is not just pretending to be a human, but pretending to be a woman. It must pass as a she.
Other versions had it that player B could be either a man or a woman. It would seem a very different kind of game if only one player is faking, or if both are, or if neither of them are. Now that we give the computer a seat, we may have it pretending to be a woman along with a man pretending to be a woman, both trying to trick the interrogator into figuring out which is a man and which is a woman. Or perhaps a computer pretending to be a man pretending to be a woman, along with a man pretending to be a woman, or even a computer pretending to be a woman pretending to be a man pretending to be a woman! In the real world, of course, we already have all of the above.
“The Imitation Game,” Morten Tyldum’s Oscar-winning 2014 film about Turing, reminds us that the mathematician himself also had to “pass” — in his case as a straight man in a society that criminalized homosexuality. Upon discovery that he was not what he appeared to be, he was forced to undergo horrific medical treatment known as “chemical castration.” Ultimately the physical and emotional pain was too great, and he committed suicide. The episode was a grotesque tribute to a man whose contribution to defeating Hitler’s military was still at that time a state secret. Turing was only recently given a posthumous pardon, but the tens of thousands of other British men convicted under similar laws have not been.
One notes the sour ironic correspondence between asking an A.I. to “pass” the test in order to qualify as intelligent — to “pass” as a human intelligence — with Turing’s own need to hide his homosexuality and to “pass” as a straight man. The demands of both bluffs are unnecessary and profoundly unfair.
Passing as a person, as a white or black person, or as a man or woman, for example, comes down to what others see and interpret. Because everyone else is already willing to read others according to conventional cues (of race, sex, gender, species, etc.) the complicity between whoever (or whatever) is passing and those among which he or she or it performs is what allows passing to succeed. Whether or not an A.I. is trying to pass as a human or is merely in drag as a human is another matter. Is the ruse all just a game or, as for some people who are compelled to pass in their daily lives, an essential camouflage? Either way, “passing” may say more about the audience than about the performers.
We would do better to presume that in our universe, “thinking” is much more diverse, even alien, than our own particular case. The real philosophical lessons of A.I. will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be (and for that matter, what being human can be).
That we would wish to define the very existence of A.I. in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism. The legacy of that conceit helped steer some older A.I. research down disappointingly fruitless paths, hoping to recreate human minds from available parts. It just doesn’t work that way. Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be “intelligent” doesn’t have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology. Airplanes don’t fly like birds fly, and we certainly don’t try to trick birds into thinking that airplanes are birds in order to test whether those planes “really” are flying machines. Why do it for A.I., then? Today’s serious A.I. research does not focus on the Turing Test as an objective criterion of success, and yet in our popular culture of A.I. the test’s anthropocentrism retains durable conceptual importance. Like the animals who talk like teenagers in a Disney movie, other minds are conceivable mostly by way of puerile ventriloquism.
Where is the real injury in this? If we want everyday A.I. to be congenial in a humane sort of way, so what? The answer is that we have much to gain from a more sincere and disenchanted relationship to synthetic intelligences, and much to lose by keeping illusions on life support. Some philosophers write about the possible ethical “rights” of A.I. as sentient entities, but that’s not my point here. Rather, the truer perspective is also the better one for us as thinking technical creatures.
Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers. Relying on efforts to program A.I. not to “harm humans” (inspired by Isaac Asimov’s “three laws” of robotics from 1942) makes sense only when an A.I. knows what humans are and what harming them might mean. There are many ways that an A.I. might harm us that have nothing to do with its malevolence toward us, and chief among these is exactly following our well-meaning instructions to an idiotic and catastrophic extreme. Instead of mechanical failure or a transgression of moral code, the A.I. may pose an existential risk because it is both powerfully intelligent and disinterested in humans. To the extent that we recognize A.I. by its anthropomorphic qualities, or presume its preoccupation with us, we are vulnerable to those eventualities.
Whether or not “hard A.I.” ever appears, the harm is also in the loss of all that we prevent ourselves from discovering and understanding when we insist on protecting beliefs we know to be false. In the 1950 essay, Turing offers rebuttals to several objections to his speculative A.I., including a striking comparison with earlier objections to Copernican astronomy. Copernican traumas that abolish the false centrality and absolute specialness of human thought and species-being are priceless accomplishments. They allow for a human culture based on how the world actually is, more than on how it appears to us from our limited vantage point. Turing referred to these as “theological objections,” but one could argue that the anthropomorphic precondition for A.I. is a “pre-Copernican” attitude as well, however secular it may appear. The advent of robust inhuman A.I. may let us achieve another disenchantment, one that should enable a more reality-based understanding of ourselves, our situation, and a fuller and more complex understanding of what “intelligence” is and is not. From there we can hopefully make our world with greater confidence that our models are good approximations of what’s out there (always a helpful thing).
Lastly, the harm is in perpetuating a relationship to technology that has brought us to the precipice of a Sixth Great Extinction. Arguably the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity. If you think so, Google “pig decapitating machine” (actually, just don’t) and then let’s talk about inventing worlds in which machines are wholly subservient to humans’ wishes.
One wonders whether it is only from a society that once gave theological and legislative comfort to chattel slavery that this particular affirmation could still be offered in 2015 with such satisfied naïveté? This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I. It is time to move on. This pretentious folklore is too expensive.
Benjamin H. Bratton (@bratton) is an associate professor of visual arts at the University of California, San Diego. His next book, “The Stack: On Software and Sovereignty,” will be published this fall by the MIT Press.
Wednesday, February 18. 2015
Via iiclouds.org (Nicolas Nova)
“World Brain” by Stéphane Degoutin and Gwenola Wagon (2015):
Sunday, February 01. 2015
By fabric | ch
Alongside the different projects we are undertaking at fabric | ch, we continue to work on self-initiated research and experiments (slowly, far too slowly... time is, of course, lacking). Deterritorialized House is one of them, introduced below.
Some of these experimental works concern the mutating "home" program (considered as "inhabited housing"), which is obviously a historical one for architecture but is also rapidly changing "(...) under pressure of multiple forces --financial, environmental, technological, geopolitical. What we used to call home may not even exist anymore, having transmuted into a financial commodity measured in sqm (square meters)", following Joseph Grima's statement in sqm: the quantified home, "Home is the answer, but what is the question?"
In a different line of work, we are looking to build physical materializations in the form of small pavilions for projects such as Satellite Daylight, 46°28'N, while other research concerns functions: based on live data feeds, how would you inhabit a transformed --almost geo-engineered-- atmospheric/environmental condition? Like the one of Deterritorialized Living (night doesn't exist in this fictional climate, which consists of only one day, with no years, no months, no seasons), the physiological environment of I-Weather, or the one of Perpetual Tropical Sunshine, etc.?
We are therefore very interested in exploring further the ways you would inhabit such singular and "creolized" environments composed of combined dimensions, like some of the ones we've designed for installations -- considering these environments as proto-architecture (architectured/mediated atmospheres) and as conditions to inhabit, each looking for its own logic.
We look forward to publishing the results of these different projects over the year -- some as early sketches, some as results, or both. Below are early sketches of one such experiment, Deterritorialized House, linked to the "home/house" line of research. It is about symbiotically inhabiting the data center... Whether we like it or not, we already de facto inhabit it, as it is a globally spread program and infrastructure that surrounds us; but we are thinking here of physically inhabiting it, possibly making it a "home", sharing it with the machines...
What happens when you combine a fully deterritorialized program (super- or hyper-modern, a "non-lieu", ...) with that of the home? What might it say about contemporary living? Could the symbiotic relation take advantage of the heat the machines generate --directly tied to the amount of processing power used--, the quality of the air, the fact that the center must be up and running, possibly lit 24/7, etc.?
As we'll run a workshop next week in the context of another research project (Inhabiting and Interfacing the Cloud(s), an academic program between ECAL, HEAD, EPFL-ECAL Lab and EPFL) linked to this idea of questioning the data center --its paradoxically centralized program, its location, its size, its functionalism, etc.--, it seems useful to publish these drawings, even in their early phase (they date back to early 2014; the project has gone back and forth since then and we are still working on it).
1) The data center level (level -1 or level +1) serves as a speculative territory and environment to inhabit (each circle in this drawing is a fresh-air pipe surrounded by a number of computer cabinets --between 3 and 9).
A potential and idealistic new "infinite monument" (global)? It remains to be decided whether it should be underground, cut off from natural light, or fragmented into many pieces located at altitude (likely, according to our other scenarios, which look toward decentralization and collaboration), etc. Both?
Fresh air comes from the outside through the pipes surrounded by the servers and their cabinets (the incoming air could be cooled underground, or be the air found at altitude, in the Swiss Alps --triggering scenarios like cities in the mountains? Mountain data farming? Likely too, as we are also looking to bring data centers back into small or large urban environments). The computing and data storage units are organized like a "landscape", trying to trigger different atmospheric qualities (some areas are hotter than others with the amount of hot air coming out of the data servers' cabinets, some areas are charged with positive ions, air connectivity is obviously everywhere, etc.)
Artificial lighting follows a similar organization as the servers' cabinets need to be well lit. Therefore a light pattern emerges as well in the data center level. Running 24/7, with the need to be always lit, the data center uses a very specific programmed lighting system: Deterritorialized Daylight linked to global online data flows.
2) Linked to the special atmospheric conditions found in this "geo-data engineered atmosphere" (the one of the data center itself, level -1 or +1), freely organized functions can be located according to their best matching location. There are no thick walls, as the "cabinet islands" act as semi-open partitions.
A program starts to appear that combines the needs of a data center with those of a small housing program immersed in this "climate" (dense connectivity, always artificially lit, a permanent 24°C). "Houses" start to appear as "plugs" into a larger data center.
3) A detailed view (data center, level -1 or +1) of the "housing plug" that combines programs. At this level, the combination of an office-administration unit for a small data center starts to emerge, together with a kind of "small office - home office" immersed in this perpetually lit data space. This specific small housing space (a studio, or a "small office - home office") becomes a "deterritorialized" room within a larger housing program found on the upper level(s), likely the ground floor or level +2 of the overall compound.
4) Using the patterns emerging from different spatial components (heat, light, air quality --dried, charged with positive ions--, wifi connectivity), a map is traced and "moiré" patterns of spatial configurations ("moiré spaces") start to appear. These define spatial qualities. Functions are "structurelessly" placed accordingly, on a "best matching location" basis (needs in heat, humidity, light, connectivity), which connects this approach to that of Philippe Rahm, initiated in a former research project, Form & Function Follow Climate (2006) -- or also to that of Walter Henn, Bürolandschaft (1963), if not to Junya Ishigami's Kanagawa Institute.
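The "best matching location" placement can be sketched as a simple minimization: each spot on the mapped level has measured qualities, each function has preferred qualities, and a function lands where the total mismatch is smallest. The location names, quality scales and values below are all hypothetical illustrations, not data from the project.

```python
# Hedged sketch of "structureless" placement by best matching location.
# Qualities are normalized 0..1; all names and numbers are invented.

locations = {
    "near hot cabinets": {"heat": 0.9, "light": 0.7, "connectivity": 0.8},
    "fresh-air pipe":    {"heat": 0.2, "light": 0.5, "connectivity": 0.6},
    "cabinet island":    {"heat": 0.6, "light": 0.9, "connectivity": 0.9},
}

def best_location(needs):
    """Return the location minimizing the summed absolute mismatch
    between its qualities and a function's needs."""
    def mismatch(qualities):
        return sum(abs(qualities[k] - needs[k]) for k in needs)
    return min(locations, key=lambda name: mismatch(locations[name]))

# A function wanting warmth lands near the hot cabinets...
print(best_location({"heat": 0.8, "light": 0.6, "connectivity": 0.7}))
# ...while a well-lit, well-connected workspace lands on a cabinet island.
print(best_location({"heat": 0.3, "light": 0.9, "connectivity": 0.9}))
```

Overlaying several such quality maps (heat, light, ions, wifi) is what produces the moiré-like zones the drawings describe: regions where multiple gradients coincide become natural candidates for particular functions.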
Note also that this is a line of work we are following in another experimental project at fabric | ch, which we also hope to publish during the year: Algorithmic Atomized Functioning --a glimpse of which can be seen in Desierto Issue #3, 28° Celsius.
5) On the ground level or on level +2, the rest of the larger house program and the few parts of the data center that emerge. There are no other heating or artificial lighting devices besides those provided by the data center program itself. The energy spent by the data center must serve the house and somehow be spared by it. Fresh and hot zones, artificial light and connectivity, etc. are provided by the data center emergences in the house, as well as by the open "small office - home office" located one floor below. Again, a map is traced, moiré patterns of specific locations and spatial configurations emerge, and functions are placed accordingly (hot, cold, lit, connected zones).
What starts, or tries, to appear is a "creolized" housing object, somewhere between a symbiotic fragmented data center and a house, possibly sustaining or triggering new inhabiting patterns...
Project (ongoing): fabric | ch
Team: Patrick Keller, Christophe Guignard, Christian Babski, Sinan Mansuroglu
Friday, January 23. 2015
Note: Following my recent posts about the research project "Inhabiting & Interfacing the Cloud(s)" that I'm leading for ECAL, Nicolas Nova and I will be present at the next Lift Conference in Geneva (Feb. 4-6 2015) for a talk combined with a workshop and a Skype session with EPFL (a workshop related to the I&IC research project will be on the finish line at EPFL --Prof. Dieter Dietz's ALICE Laboratory-- on the day we present in Geneva). If you plan to take part in Lift 15, please come say "hello" and talk with us about the project.
Inhabiting and Interfacing the Cloud(s)
Curated by Lift
Fri, Feb. 06 2015 – 10:30 to 12:30
Room 7+8 (Level 2)
Architect (EPFL), founding member of fabric | ch and Professor at ECAL
Principal at Near Future Laboratory and Professor at HEAD Geneva
Workshop description : Since the end of the 20th century, we have been seeing the rapid emergence of “Cloud Computing”, a new constructed entity that extensively combines information technologies, massive storage of individual or collective data, distributed computational power, distributed access interfaces, security and functionalism.
In a joint design research project that connects the work of interaction designers from ECAL & HEAD with the spatial and territorial approaches of architects from EPFL, we’re interested in exploring the creation of alternatives to the current expression of “Cloud Computing”, particularly in its forms intended for private individuals and end users (“Personal Cloud”). The aim is to offer a critical appraisal of this “iconic” infrastructure of our modern age and its user interfaces, because to date their implementation has followed a logic chiefly of technical development, governed by the commercial interests of large corporations, and continues to be seen partly as a purely functional, centralized setup. However, the Personal Cloud holds a potential that is largely untapped in terms of design, novel uses and territorial strategies.
The workshop will be an opportunity to discuss these alternatives and work on potential scenarios for the near future. More specifically, we will address the following topics:
The joint design research Inhabiting & Interfacing the Cloud(s) is supported by HES-SO, ECAL & HEAD.
Interactivity : The workshop will start with a general introduction about the project, then move to a discussion of its implications, opportunities and limits. A series of activities will then enable break-out groups to sketch potential solutions.
Friday, January 16. 2015
Note: we didn't find enough time last December to document an interview with fabric | ch that was published in the French design magazine Étapes. So let's do it in early 2015... The magazine itself has recently been revamped under the direction of a new editorial board. It is now quite an exciting magazine, interested in transversal approaches to design questions, including interaction design, architecture, etc., even though its main and historical focus remains graphic design.
The interview, between Christophe Guignard (fabric | ch) and Isabelle Moisy (editor in chief, Étapes), concerns the specific approach to architectural design that fabric | ch has adopted over time. Since our foundation in 1997, this approach has taken into account the networked and digital natures of contemporary spaces and territories (landscapes), combined with their physical one -- a point particularly evident in the fact that, from the start, our group was composed of architects and computer scientists. Our work has of course evolved since 1997, but this "coded/data dimension" of space has gained importance both in our work and in general, and has proved itself a major element in the conceptualization of spaces in our still-early century.
By fabric | ch
From the "Édito":
"(...). In the absence of a clear horizon, communication media pile up, and designers spill unapologetically beyond the restrictive practices for which they were trained. The labels multiply: plural designer, transdisciplinary designer. (...)". Isabelle Moisy
Thursday, January 15. 2015
Wednesday, December 24. 2014
Note: Google Earth or literally and progressively Google's Earth? It could also be considered as the start of the privatization of the lower stratosphere, where up to now, no artifacts were permanently present.
Google X research lab boss Astro Teller says experimental wireless balloons will test delivering Internet access throughout the Southern Hemisphere by next year.
By Tom Simonite
Astro Teller & a Project Loon prototype sails skyward.
Within a year, Google is aiming to have a continuous ring of high-altitude balloons in the Southern Hemisphere capable of providing wireless Internet service to cell phones on the ground.
That’s according to Astro Teller, head of the Google X lab, the company established with the purpose of working on “moon shot” research projects. He spoke at MIT Technology Review’s EmTech conference in Cambridge today.
Teller said that the balloon project, known as Project Loon, was on track to meet the goal of demonstrating a practical way to get wireless Internet access to billions of people who don’t have it today, mostly in poor parts of the globe.
For that to work, Google would need a large fleet of balloons constantly circling the globe so that people on the ground could always get a signal. Teller said Google should soon have enough balloons aloft to prove that the idea is workable. “In the next year or so we should have a semi-permanent ring of balloons somewhere in the Southern Hemisphere,” he said.
Google first revealed the existence of Project Loon in June 2013 and has tested Loon Balloons, as they are known, in the U.S., New Zealand, and Brazil. The balloons fly at 60,000 feet and can stay aloft for as long as 100 days, their electronics powered by solar panels. Google’s balloons have now traveled more than two million kilometers, said Teller.
The balloons provide wireless Internet using the same LTE protocol used by cellular devices. Google has said that the balloons can serve data at rates of 22 megabits per second to fixed antennas, and five megabits per second to mobile handsets.
Google’s trials in New Zealand and Brazil are being conducted in partnership with local cellular providers. Google isn’t currently in the Internet service provider business—despite dabbling in wired services in the U.S. (see “Google Fiber’s Ripple Effect”)—but Teller said Project Loon would generate profits if it worked out. “We haven’t taken a dime of revenue, but if we can figure out a way to take the Internet to five billion people, that’s very valuable,” he said.
fabric | rblg
fabric | rblg is the survey website of fabric | ch -- studio for architecture, interaction and research. We curate and re-blog articles, researches, exhibitions and projects that we notice during our everyday practice.