Tuesday, July 19. 2011
Via Numerama
-----
By Guillaume Champeau - published Monday, July 18, 2011 at 10:39 AM - posted in Telecoms
The telecom industry chiefs convened by Brussels to work on short-term financing models for very-high-speed broadband in the European Union have, unsurprisingly, delivered a series of proposals that bury net neutrality in favor of access to several internets, more or less rich depending on the subscription paid for. Proposals that break down a door the European Commission had already opened.
Remember last March. Without much fanfare, the European Commission gathered nearly forty top telecom executives in Brussels: Steve Jobs (Apple), Xavier Niel (Free), Stéphane Richard (Orange), Jean-Philippe Courtois (Microsoft), Jean-Bernard Lévy (Vivendi), Stephen Elop (Nokia), and others.
The purpose of the meeting was to ask the industry how, in its view, to "best secure the very substantial private-sector investment needed to deploy next-generation broadband networks and sustain the growth of the internet." The Commission wants to make achievable the goal set by Europe's Digital Agenda, which calls for all Europeans to have internet access at a minimum of 30 Mbps by 2020, and at least half of them at 100 Mbps.
To draft the proposals, a steering group had been appointed, made up of Jean-Bernard Lévy of Vivendi, Ben Verwaayen of Alcatel-Lucent, and René Obermann of Deutsche Telekom. "Proof that, from the very composition of the group, there is a desire to strike a balance between funding the networks and funding the content, which is never a good sign for net neutrality," we predicted at the time.
The result is even worse than our fears back then, and it confirms the direction the European Commission signaled last month, when it said it would rather favor the free market than defend net neutrality.
11 proposals to bury net neutrality
At a second meeting on July 13, the three partners delivered a series of 11 proposals that net neutrality advocates will find untenable. Seizing on the European goal as a pretext to claim that very-high-speed broadband cannot be deployed in the short term on the same basis as before, the group concludes that Europe "must encourage differentiation in traffic management to promote innovation and new services, and to meet demand for different levels of quality." The idea, in other words, is to charge more to those who want unthrottled access to services that require more bandwidth, or "lower latency, which is crucial in video gaming," as Vivendi's CEO explained to La Tribune.
The proposals also clearly envision letting service publishers buy privileged access to internet subscribers, so that their service runs faster than that of competitors who refuse to pay the toll. "Harnessing the potential of two-sided markets will bring more innovation, more efficiency, and faster deployment of next-generation networks, to the benefit of consumers and the creative industries," the working group sees fit to claim.
The group also justifies the absence of citizen and consumer organizations among the forty or so executives consulted by Brussels: "The long-term interests of consumers coincide with the promotion of innovation and investment." Consumers will just have to endure the death of net neutrality; in the end it is in their own interest, the telecom bosses assure us.
In La Tribune, Jean-Bernard Lévy recounts that the July 13 meeting went "tremendously better than anyone expected," and that it revealed "a remarkable and unexpected degree of consensus among these players from across the value chain: operators, manufacturers, aggregators, channel publishers, and so on." Forgetting, along the way, that internet users are the foremost players in that value chain. Not only because they pay for their internet access, but also because they are today, by far, the leading producers of the content that circulates on it.
Personal comment:
This can't be good... (Thanks Nicolas for the link)
Tuesday, July 12. 2011
Via BLDGBLOG
-----
By Geoff Manaugh
[Image: The infrastructure of bullet time].
A digital image-processing system under development since 2007 will allow photographers "to artificially create photos taken from a perspective where there was no photographer." It uses "a computer-vision technique called view synthesis to combine two or more photographs to create another very realistic-looking one that looks like it was taken from an arbitrary viewpoint," as New Scientist explains.
One expert quoted refers to this as "anonymizing the photographer."
The images can come from more than one source: what's important is that they are taken at around the same time of a reasonably static scene from different viewing angles. Software then examines the pictures and generates a 3D "depth map" of the scene. Next, the user chooses an arbitrary viewing angle for a photo they want to post online.
The photo then goes through a "dewarping" stage, in which straight lines like walls and kerb angles are corrected for the new point of view, and "hole filling," in which nearby pixels are copied to fill in gaps in the image created because some original elements were obscured.
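The "hole filling" step described above can be sketched in a few lines. This is an illustrative toy, not the actual system's algorithm (which New Scientist does not detail): it simply copies the value of the nearest known pixel into each gap, via a breadth-first flood outward from the valid pixels.

```python
import numpy as np
from collections import deque

def fill_holes(image, mask):
    """Fill unknown pixels by copying the nearest known pixel's value.
    `image` is a 2D array; `mask` is True where the pixel is a hole
    (e.g. scene elements occluded in all source photos)."""
    filled = image.copy()
    known = ~mask
    visited = known.copy()
    # Seed the search with every known pixel, then grow outward.
    queue = deque(zip(*np.nonzero(known)))
    h, w = image.shape
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                filled[ny, nx] = filled[y, x]  # copy a nearby pixel into the gap
                visited[ny, nx] = True
                queue.append((ny, nx))
    return filled
```

Real view-synthesis systems use considerably smarter inpainting (patch-based or depth-aware), but the principle is the same: gaps opened up by the new viewpoint are plugged with plausible nearby texture.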
While the article rightly emphasizes the political implications of this—writing that the technology "could help protestors in repressive regimes escape arrest—and give journalists 'plausible deniability' over the provenance of leaked photos"—there are, of course, other possibilities inherent in the technique that seem worth exploring. These include virtualizing photographs taken of a landscape, building, person, or city, producing views, angles, and perspectives never actually seen by human beings; this would be like something out of the work of Piranesi, specifically as interpreted by Manfredo Tafuri in The Sphere and the Labyrinth, in which impossible scenes overlap to produce a single, yet far from comprehensive, spatial reality.
Perhaps some editor somewhere could send Iwan Baan and Fernando Guerra out to shoot a new building together, then "hole fill" their images to create a virtual, third photographer. Every image published in the resulting article would document a viewpoint that neither photographer actually experienced or saw. It is the building as seen by no one, virtually extruded from otherwise real-world photographs.
To throw another gratuitous theory reference out there, it's like Foucault's analysis of "Las Meninas" in The Order of Things, where we read that the painter may or may not have included an obscured vantage point from which his painting was supposedly painted. To translate Foucault's hypothesis into New Scientist's terms, this would be "location privacy," that is, "a way of disguising the photographer's viewpoint."
[Image: "Las Meninas" by Diego Velázquez].
Or, imagine, for instance, an entire film assembled from "dewarped" images—intermediary, falsified frames precipitated out from between the cameras—creating an uncanny motion picture of interstitial imagery. Virtual films between films; films recombined to create a third cinema of gaps; virtual still images taken from virtual films, overlaid and dewarped to form fourth and fifth and sixth films generationally removed from the original, in an infinite splintering of derivative film stills. We won't document the world as everyone sees it; we'll document it from places where no one's ever been.
(Thanks to Luke Fidler for the tip).
Thursday, July 07. 2011
-----
Is there such a thing as DIY internet? An amazing open-source project in Afghanistan proves you don’t need millions to get connected.
While visiting family last week, the topic of conversation turned to the internet, net neutrality, and both corporate and government attempts to police the online world. A family member remarked that if they wanted to, the U.S. government could simply turn off the internet and the entire world would be screwed.
Having read this inspiring article by Douglas Rushkoff on Shareable.net, I surprised the room by disagreeing. I said that we didn't need the corporate-built internet, and that if we had to, the people could build their own. Of course, not being that technically minded, I couldn't offer a concrete idea of how this could be achieved. Until now.
A recent Fast Company article shines a spotlight on the Afghan city of Jalalabad which has a high-speed Internet network whose main components are built out of trash found locally. Aid workers, mostly from the United States, are using the provincial city in Afghanistan’s far east as a pilot site for a project called FabFi.
FabFi is an open-source, FabLab-grown system using common building materials and off-the-shelf electronics to transmit wireless Ethernet signals across distances of up to several miles. With FabFi, communities can build their own wireless networks to gain high-speed internet connectivity—thus enabling them to access online educational, medical, and other resources.
Residents who desire an internet connection can build a FabFi node out of approximately $60 worth of everyday items such as boards, wires, plastic tubs, and cans that will serve a whole community at once.
Jalalabad’s longest link is currently 2.41 miles, between the FabLab and the water tower at the public hospital in Jalalabad, transmitting with a real throughput of 11.5 Mbps (compared to 22 Mbps ideal-case for a standards-compliant off-the-shelf 802.11g router transmitting at a distance of only a few feet). The system works consistently through heavy rain, smog, and a couple of good-sized trees.
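To get a feel for why a 2.41-mile 802.11g link is impressive, here is a back-of-envelope free-space path loss calculation using the standard Friis formula. The distance comes from the article; the exact channel frequency (2.437 GHz, channel 6) is an assumption for illustration, since 802.11g operates in the 2.4 GHz band.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss (Friis formula) in dB for a
    line-of-sight radio link."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Jalalabad's longest link: 2.41 miles ~= 3.88 km,
# assuming 802.11g channel 6 at 2437 MHz.
loss = free_space_path_loss_db(2.41 * 1.609344, 2437.0)
print(f"{loss:.1f} dB")  # about 112 dB of free-space loss
```

Roughly 112 dB of loss is far beyond what stock router antennas can bridge, which is why FabFi's home-built reflectors (the boards, cans, and wire mesh) matter: they add the antenna gain needed to close the link budget.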
With millions of people still living without access to high-speed internet, including much of rural America, an open-source concept like FabFi could have profound ramifications for education and political progress.
Because FabFi is fundamentally a technological and sociological research endeavor, it is constantly growing and changing. Over the coming months, expect infrastructure upgrades to increase stability and reduce cost, plus added features such as meshing and bandwidth aggregation to support a growing user base.
In addition to network improvements, there are plans to leverage the provided connectivity to build online communities and locally hosted resources for users in addition to MIT OpenCourseWare, making the system much more valuable than the sum of its uplink bandwidth. Follow the developments on the FabFi Blog.
Wednesday, July 06. 2011
Via ArchDaily
-----
By Sebastian J.
Departing from the simple question "Why do we heat and cool buildings with air?", this book focuses on the technique of thermally active surfaces. This technique uses water in building surfaces to heat and cool bodies – a method that is at once more efficient, comfortable, and healthy. This technique thus imbues the fabric of the building with a more poignant role: its structure is also its primary heating and cooling system. In doing so, this approach triggers a cascading set of possibilities for how well buildings are built, how well they perform, and how long they will last: pointing the way toward multiple forms of sustainability. -Princeton Architectural Press
More after the break.
The first section of the book contrasts the parallel histories of thermally active surfaces and air conditioning. These histories explain the material, social, marketing, and technical unfolding of building technology in the twentieth century as a means to explain why we build the way we do and why that will change in the new century. The next section of the book covers the physiological and thermodynamic basis of thermally active surfaces. This section is designed for engineers and architects to grasp the logic and advantages of this technique. This section also includes a chapter on the de-fragmentation of buildings and design practice that is inherent in building with thermally active surfaces. The final section covers a series of contemporary case studies that demonstrate the efficacy of this technique. The project list currently includes Kunsthaus in Bregenz by Peter Zumthor, Zollverein School of Management in Essen, Germany by SANAA, and Linked Hybrid in Beijing by Steven Holl, amongst others.
-Princeton Architectural Press
Author: Kiel Moe
Publisher: Princeton Architectural Press
Editorial: Lauren Nelson Packard, Dan Simon
Design: Paul Wagner
Language: English
Cover: Hardcover
Pages: 240
Illustrations: 250 color
Dimensions: 11.2 x 8.8 x 1 inches
ISBN: 978-1568988801
Index
Foreword / D. Michelle Addington
Foreword / Tradition, Comfort, and Conservation / Matthias Schuler
Preface
Approaches to Technology and Human Comfort
Theories, Techniques, and Technologies
Conditioned Air
Thermally Active Surfaces
Principles and Practices of Thermally Active Surfaces
What Your Body Already Knows
Batiso (Constant Temperature Building) / Building Design Guide by Geoff McDonnell
De-fragmentation of Buildings and Practices
Thermodynamic Figures in Architecture
Thermally Active Surface Case Studies
Kunsthaus Bregenz / Peter Zumthor
Zollverein School of Management and Design / SANAA
Südwestmetall Regional Headquarters / Dominik Dreiner Architekt
Linked Hybrid / Steven Holl Architects
Charles Hostler Student Recreation Center / VJAA
Housing for Kripalu Center for Yoga and Health / Peter Rose and Partners
The Fred Kaiser Building, University of British Columbia / architectsAlliance
Terrence Donnelly Centre for Cellular and Biomolecular Research / architectsAlliance, Behnisch Architekten
Klarchek Information Commons, Loyola University / Solomon Cordwell Buenz
The Graham Resource Center, Crown Hall, Illinois Institute of Technology / Tom Brock Architect, P.C.
Acknowledgments
Sources
Illustration Credits