Friday, April 08. 2011
Via MIT Technology Review
-----
The social network breaks an unwritten rule by giving away plans to its new data center—an action it hopes will make the Web more efficient.
By Tom Simonite
The new data center, in Prineville, Oregon, covers 147,000 square feet and is one of the most energy-efficient computing warehouses ever built.
Credit: Jason Madera
Just weeks before switching on a massive, super-efficient data center in rural Oregon, Facebook is giving away the complete designs and specifications for the facility online. In doing so, the company is breaking a long-established unwritten rule for Web companies: don't share the secrets of your server-stuffed data warehouses.
Ironically, most of those secret servers rely heavily on free and open-source software, such as the Linux operating system and the Apache Web server. Facebook's move—dubbed the Open Compute Project—aims to kick-start a similar trend with hardware.
"Mark [Zuckerberg] was able to start Facebook in his dorm room because PHP and Apache and other free and open-source software existed," says David Recordon, who helps coordinate Facebook's use of, and contribution to, open-source software. "We wanted to encourage that for hardware, and release enough information about our data center and servers that someone else could go and actually build them."
The attitude of other large technology firms couldn't be more different, says Ricardo Bianchini, who researches energy-efficient computing infrastructure at Rutgers University. "Typically, companies like Google or Microsoft won't tell you anything about their designs," he says. A more open approach could help the Web as a whole become more efficient, he adds: "Opening up the building like this will help researchers a lot, and also other industry players. It's opening up new opportunities to share and collaborate."
The open hardware designs are for a new data center in Prineville, Oregon, that will be switched on later this month. The 147,000-square-foot building will increase Facebook's overall computing capacity by around half; the social network already processes some 100 million new photos every day, and its user base of over 500 million is growing fast.
The material being made available on a new website includes detailed specifications of the building's electrical and cooling systems, as well as the custom designs of the servers inside. Facebook is dubbing the approach "open" rather than "open-source" because its designs won't be subject to a true open-source legal license, which requires anyone modifying them to share any changes they make.
The plans reveal the fruits of Facebook's efforts to create one of the most energy-efficient data centers ever built. Unlike almost every other data center, Facebook's new building doesn't use chillers to cool the air flowing past the servers. Instead, air from the outside flows over foam pads moistened by water sprays to cool by evaporation. The building is carefully oriented so that prevailing winds direct outside air into the building in both winter and summer.
Facebook's engineers also created a novel electrical design that cuts the number of times that the electricity from the grid is run through a transformer to reduce its voltage en route to the servers inside. Most data centers use transformers to reduce the 480 volts from the nearest substation down to 208 volts, but Facebook's design skips that step. "We run 480 volts right up to the server," says Jay Park, Facebook's director of data-center engineering. "That eliminates the need for a transformer that wastes energy."
To make this possible, Park and colleagues created a new type of server power supply that takes 277 volts and can be split off from the 480-volt supply without the need for a transformer. The 480 volts is delivered using a method known as "three-phase power": three wires carry three alternating currents offset in phase from one another. Tapping one of those wires, measured against neutral, extracts a 277-volt supply.
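A quick check of the arithmetic, which the article doesn't spell out: in a three-phase system, the voltage from any one wire to neutral equals the wire-to-wire voltage divided by the square root of three, which is exactly where the 277-volt figure comes from.

$$V_{\text{phase}} = \frac{V_{\text{line}}}{\sqrt{3}} = \frac{480\ \text{V}}{\sqrt{3}} \approx 277\ \text{V}$$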
Park and colleagues also came up with a new design for the backup batteries that keep servers running during power outages before backup generators kick in—a period of about 90 seconds. Instead of building one huge battery store in a dedicated room, many cabinet-sized battery packs are spread among the servers. This is more efficient because the batteries share electrical connections with the computers around them, eliminating the dedicated connections and transformers needed for one large store. Park calculates that his new electrical design wastes about 7 percent of the power fed into it, compared to around 23 percent for a more conventional design.
According to the standard measure of data-center efficiency—the power usage effectiveness (PUE) score—Facebook's tweaks have created one of the most efficient data centers ever. A PUE is calculated by dividing a building's total power draw by the power used by its computers; a perfect data center would score 1. "Our tests show that Prineville has a PUE of 1.07," says Park. Google, which invests heavily in data-center efficiency, reported an average PUE of 1.13 across all its locations for the last quarter of 2010 (when winter temperatures make data centers most efficient), with the most efficient scoring 1.1.
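For concreteness, here is a minimal sketch of the calculation; the load figures are hypothetical, chosen only to reproduce the scores quoted above.

```python
# Minimal sketch: how a PUE score is computed.
# PUE = total facility power / power delivered to the IT equipment,
# so a perfect data center would score exactly 1.0.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical loads chosen to reproduce the figures quoted above.
print(round(pue(10_700, 10_000), 2))  # 1.07 -- Prineville's reported score
print(round(pue(11_300, 10_000), 2))  # 1.13 -- Google's reported average
```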
Google and others will now be able to cherry-pick elements from Facebook's designs, but that poses no threat to Facebook's real business, says Frank Frankovsky, the company's director of hardware design. "Facebook is successful because of the great social product, not [because] we can build low-cost infrastructure," he says. "There's no reason we shouldn't help others out with this."
Copyright Technology Review 2011.
Personal comment:
Will efficient and sustainable ways of managing building climate and energy use become a by-product of data centers? Might be.
Via MIT Technology Review blog
-----
By Steve Hsu
Brian Christian, author of The Most Human Human, tells interviewer Leonard Lopate what it's like to be a participant in the Loebner Prize competition, an annual version of the Turing Test. See also Christian's article, excerpted below.
Atlantic Monthly: ... The first Loebner Prize competition was held on November 8, 1991, at the Boston Computer Museum. In its first few years, the contest required each program and human confederate to choose a topic, as a means of limiting the conversation. One of the confederates in 1991 was the Shakespeare expert Cynthia Clay, who was, famously, deemed a computer by three different judges after a conversation about the playwright. The consensus seemed to be: “No one knows that much about Shakespeare.” (For this reason, Clay took her misclassifications as a compliment.)
... Philosophers, psychologists, and scientists have been puzzling over the essential definition of human uniqueness since the beginning of recorded history. The Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of what he calls “The Sentence.” Specifically, The Sentence reads like this:
The human being is the only animal that ____.
The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about.
We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.
We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?
“Sometimes it seems,” says Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, like a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything that we thought hinged on thinking turns out to not involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion.
Where is the keep of our selfhood?
The story of the 21st century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked by beast and machine, pinned between meat and math.
... In May 1989, Mark Humphrys, a 21-year-old University College Dublin undergraduate, put online an Eliza-style program he’d written, called “MGonz,” and left the building for the day. A user (screen name “Someone”) at Drake University in Iowa tentatively sent the message “finger” to Humphrys’s account—an early-Internet command that acted as a request for basic information about a user. To Someone’s surprise, a response came back immediately: “cut this cryptic shit speak in full sentences.” This began an argument between Someone and MGonz that lasted almost an hour and a half. (The best part was undoubtedly when Someone said, “you sound like a goddamn robot that repeats everything.”)
Returning to the lab the next morning, Humphrys was stunned to find the log, and felt a strange, ambivalent emotion. His program might have just shown how to pass the Turing Test, he thought—but the evidence was so profane that he was afraid to publish it. ...
Thursday, April 07. 2011
Via MIT Technology Review
-----
A novel approach to design and construction could save materials and energy, and create unusually beautiful structures.
By Kevin Bullis
Model maker: Neri Oxman works on “Cartesian Wax: Prototype for a Responsive Skin,” a model that is now part of the permanent collection at the Museum of Modern Art in New York.
Credit: Mikey Siegel
In conventional construction, workers piece together buildings from mass-produced, prefabricated bricks, I-beams, concrete columns, plates of glass, and so on. Neri Oxman, an architect and a professor at MIT's Media Lab, intends to print them instead—essentially using concrete, polymers, and other materials in place of ink. Oxman is developing a new way of designing buildings to take advantage of the flexibility that printing can provide. If she's successful, her approach could lead to designs that are impossible with today's construction methods.
Existing 3-D printers, also called rapid prototyping machines, build structures layer by layer. So far these machines have been used mainly to make detailed plastic models based on computer designs. But as such printers improve and become capable of using more durable materials, including metals, they are becoming a potentially interesting way to make working products.
Oxman is working to extend the capabilities of these machines—making it possible to change the elasticity of a polymer or the porosity of concrete as it's printed, for example—and mounting print heads on flexible robot arms that have greater freedom of movement than current printers.
She's also drawing inspiration from nature to develop new design strategies that take advantage of these capabilities. For example, the density of wood in a palm tree trunk varies, depending on the load it must support. The densest wood is on the outside, where bending stress is the greatest, while the center is porous and weighs less. Oxman estimates that making concrete columns this way—with low-density porous concrete in the center—could reduce the amount of concrete needed by more than 10 percent, a significant savings on the scale of a construction project.
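To get a feel for the scale of that saving, consider an illustrative calculation (the numbers are assumptions, not from the article): if the inner 60 percent of a column's radius were printed at 70 percent of full density, the material saved would be

$$0.6^2 \times (1 - 0.7) \approx 0.11,$$

about 11 percent of a solid column's concrete, consistent with the "more than 10 percent" estimate.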
Oxman is developing software to realize her design strategy. She inputs data about the physical stresses on a structure, as well as design constraints such as size, overall shape, and the need to let light into certain areas of a building. Based on this information, the software applies algorithms that specify how the material properties need to change throughout a structure. Then she prints out small models based on these specifications.
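To make that pipeline tangible, here is a minimal sketch of the kind of mapping such software might perform. The article gives no implementation details, so the function, the linear mapping, and the toy stress profile below are all assumptions for illustration.

```python
# Illustrative sketch only: maps a computed stress field onto a graded
# material density before printing, as described in the text above.

import numpy as np

def density_map(stress: np.ndarray,
                min_density: float = 0.3,
                max_density: float = 1.0) -> np.ndarray:
    """Assign higher material density where mechanical stress is higher."""
    span = stress.max() - stress.min()
    normalized = (stress - stress.min()) / (span + 1e-12)
    return min_density + normalized * (max_density - min_density)

# Toy example: bending stress across a column's cross-section is highest
# at the perimeter and lowest at the center, like the palm trunk above.
radii = np.linspace(0.0, 1.0, 5)   # 0 = center of the column, 1 = outer edge
stress = radii ** 2                # crude bending-stress profile
print(density_map(stress))         # densest material at the outer edge
```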
The early results of her work are so beautiful and intriguing that they've been featured at the Museum of Modern Art in New York and the Museum of Science in Boston. One example, which she calls Beast, is a chair whose design is based on the shape of a human body (her own) and the predicted distribution of pressure on the chair. The resulting 3-D model features a complex network of cells and branching structures that are soft where needed to relieve pressure and stiff where needed for support.
The work is at an early stage, but the new approach to construction and design suggests many new possibilities. A load-bearing wall, for instance, could be printed in elaborate patterns that correspond to the stresses it will experience, whether from the load it supports or from wind and earthquakes.
The pattern could also account for the need to allow light into a building. Some areas would have strong, dense concrete, but in areas of low stress, the concrete could be extremely porous and light, serving only as a barrier to the elements while saving material and reducing the weight of the structure. In these non-load-bearing areas, it could also be possible to print concrete that's so porous that light can penetrate, or to mix the concrete gradually with transparent materials. Such designs could save energy by increasing the amount of daylight inside a building and reducing the need for artificial lighting. Eventually, it may be possible to print efficient insulation and ventilation at the same time. The structure can be complex, since it costs no more to print elaborate patterns than simple ones.
Other researchers are developing technology to print walls and other large structures. Behrokh Khoshnevis, a professor of industrial and systems engineering and civil and environmental engineering at the University of Southern California, has built a system that can deposit concrete walls without the need for forms to contain the concrete. Oxman's work would take this another step, adding the ability to vary the properties of the concrete, and eventually work with multiple materials.
The first applications of Oxman's approach will likely be on a relatively small scale, in consumer products and medical devices. She's used her principles to design and print wrist braces for carpal tunnel syndrome. They're customized based on the pain that a particular patient experiences. The approach could also improve the performance of prosthetics.
Oxman, 35, is developing her techniques in partnership with a range of specialists, such as Craig Carter, a professor of materials science at MIT. While he says her approach faces challenges in controlling the properties of materials, he's impressed with her ideas: "There's no doubt that the results are strikingly beautiful."
Copyright Technology Review 2011.
Via MIT Technology Review
-----
New device offers distinct advantages over previous attempts to improve upon natural photosynthesis.
By Phil McKenna
Photosynthesis, nature's way of converting sunlight to fuel, happens all around us, from leaves on a tree to the smallest blade of grass. But finding a way to mimic the ability cheaply and efficiently has confounded engineers for decades.
Leaves that are not green: Silicon coated with inexpensive catalysts splits water into hydrogen and oxygen inside an illuminated container of water.
Credit: Daniel Nocera, MIT
Now researchers have taken a step toward this elusive feat, with a device that is even more efficient than natural photosynthesis and relies on low-cost, abundant materials.
Conventional solar cells produce electricity when a photovoltaic material is exposed to light. The new device goes a step further, using the resulting electricity to split water into hydrogen and oxygen. Because hydrogen stores energy at a higher density than any battery, it is a promising method for inexpensive energy storage.
The new device is still in early laboratory development, and significant challenges remain. But if the technology can be commercialized, it could provide an inexpensive way of producing solar power and storing it until needed.
Daniel Nocera, a professor at MIT, revealed preliminary details of the device, which he calls the first practical "artificial leaf," at the national meeting of the American Chemical Society in California on March 27. The device combines a commercially available solar cell with a pair of inexpensive catalysts made of cobalt and nickel that split water into oxygen and hydrogen. Using this approach, a solar panel roughly one square meter bathed in water could produce enough hydrogen to supply a house in a developing country with electricity for both day and night, Nocera says.
Using a thin-film silicon solar cell that converts the energy in light with 7 percent efficiency, Nocera says his group achieved 5 percent efficiency for the conversion of sunlight to hydrogen. Natural photosynthesis is less than 1 percent efficient at converting sunlight to energy.
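Those two figures imply, by simple division (arithmetic the article doesn't spell out), that roughly 71 percent of the cell's electrical output ends up stored as hydrogen:

$$\frac{5\%}{7\%} \approx 0.71$$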
The device is not the first to attempt to improve upon natural photosynthesis. It does, however, offer distinct advantages over previous devices, which either used costly precious-metal catalysts to split water into hydrogen and oxygen, or performed the water splitting indirectly with a separate device, which is less efficient and more expensive.
Nocera's device is the first to use inexpensive and abundant catalyst materials that are incorporated into the solar cell. "You just have a piece of silicon coated with catalysts that you can put in a glass of water, and it starts splitting the water into hydrogen and oxygen," he says.
The device is made possible by several recent advances. Nocera first developed a cobalt catalyst capable of splitting oxygen from water in 2008, but the catalyst couldn't be applied directly to silicon because it would block incoming sunlight. For his new device, Nocera applied a thin film of cobalt to the silicon that blocks only 2 to 3 percent of incoming light. Prior to applying the catalyst, he coated the silicon with a thin membrane that protects it from oxidation but allows electrical current to pass through.
A novel nickel-based catalyst, also recently developed by Nocera, is applied to the other side of the silicon to split hydrogen from water. The nickel catalysts already used in other water-splitting devices, known as electrolyzers, would quickly be rendered useless by the phosphate and borate present in the water. The results of initial tests of the device have been submitted for publication; they show that it can operate for at least six days without a drop in efficiency, Nocera says.
John Turner, a research fellow at the National Renewable Energy Laboratory in Golden, Colorado, says the ability to use a virtually transparent cobalt catalyst is a key advance, and the reported efficiency is promising. "He is getting most of the efficiency out of the cell," Turner says. "If he [starts with] an 11 or 12 percent cell, which is commercially available, he should be able to do much better. But we would need to see what he can do once he gets a better cell."
However, Turner says, Nocera will have to demonstrate significantly longer run times—tens of thousands of hours. "They may have the durability, but they need to continue to show it," he says.
Sun Catalytix, a company founded by Nocera, will now work with the Indian industrial giant Tata to commercialize the technology for residential use in developing countries. The companies are already working together on an earlier artificial-photosynthesis device of Nocera's. This initial device will be based on a 100-watt solar panel and will require a separate electrolyzer connected to the panel by wires. It should sell for around $100.
The device would also have to be paired with a fuel cell to convert stored hydrogen to electricity. Nocera says he hopes to deliver this initial solar-powered hydrogen-producing device to Tata by the end of 2011. The new artificial leaf should be less expensive, but it could be another two and a half years before a commercial prototype would be ready, Nocera says.
James Stevens, a research fellow at Dow Chemical, says the technology still has a long way to go. "There is a lot that has to be done before this could be practical," he says. "The efficiency is low and the capital costs of these things are very high."
Other practical issues, such as safely storing hydrogen gas and preventing the system from freezing in subzero temperatures, are also significant challenges, Stevens says. "We're not really interested in the state of the art as it now stands," he says.
Copyright Technology Review 2011.
Friday, April 01. 2011
Via OWNI
-----
by Andréa Fradin
"Who am I? Where am I going? Which shelf do I belong on?" For some specialists in neuroscience, it seems this quest for meaning, along with all our existential anxieties, can be traced back to the "brain" aisle. And the same would go for THE question: where does God fit into all this?
For a decade or so, a small number of researchers, mainly American and Canadian, have been actively searching for traces of the Great Clockmaker, and more generally for the source of spirituality, in the folds of the brain. The practice, dubbed "neurotheology," remains marginal today and is often presented as a somewhat odd field of study with shaky legitimacy. Yet not all of its practitioners are hunting for proof of the existence (or, for that matter, the nonexistence) of God in the synaptic jumble, even if some do not hesitate to proclaim loudly that they have located a cerebral ecstasy zone, a kind of push-button activator of faith.
Others, more moderate, retort that this is beside the point. Neither saintly demonstrators nor slayers of deities, these researchers try to observe a lived and reported reality: altered states of consciousness, so-called "mystical" experiences, meditation, or the feeling of oneness with the world. Explaining spirituality by scrutinizing the human brain: it is hard to imagine a more perilous undertaking, given how strongly objections from both camps, science and religion, bear on and call into question the experimental conditions of the "neurotheologians."
Neuro-locating God
Le "casque de Dieu" du chercheur Michaël Persinger (extrait de la série "Through the Wormhole" de Science Channel)
With his "God Helmet," Michael Persinger [ENG] is a pioneer in the field of neurotheology.
This American researcher's goal is to reproduce the mystical experience by stimulating certain areas of the brain, such as the temporal lobe, which plays a decisive role in producing emotions, using magnetic fields emitted by his famous yellow helmet.
While the electroencephalogram goes wild during each experiment, the reports from the various test subjects are far from unanimous: some claim to have sensed an "entity" beside them, while others say they simply felt nothing at all.
Before Persinger, Vilayanur Ramachandran, a researcher at the University of California, San Diego, was already looking for a neurological basis for spiritual experiences. His work focused on certain forms of epilepsy that affect this same temporal lobe and can trigger intense mystical delusions during seizures. That observation earned this region of the brain, which contains the hippocampus and the amygdala, the nickname "God module."
For Carol Albright, author of numerous books on neurotheology and an editor at Zygon: Journal of Religion and Science, the materialist approach of these two researchers, who try to prove "that all religious experience or faith is an aberration or artifact," reflects a "limited conception of what religion encompasses." She told OWNI:
Personally, I think religious experience is far more varied than what such scientists claim or report. It can include mystical experiences of the presence of God, but it also involves intellectual doctrine, participation in ritual, and a general orientation of the personality, among other parameters.
In other words, religion is not limited to mystical ecstasy, the product of spiritual experience; it also includes contextual elements that arise well upstream of that manifestation and extend beyond the brain alone. Of course, Albright concedes, every human affect has a cerebral signature, but reducing spirituality to that reality alone, and, even more, treating it as the sole explanation for God, is a simplistic shortcut.
Beyond reductionist materialism
From there, it is only a short step to equating all of neuroscience with a materialist approach to religion. Yet, the American analyst points out, some work stands out for a less reductionist approach.
The Americans Andrew Newberg and Eugene d'Aquili, authors of the bestseller Why God Won't Go Away: Brain Science and the Biology of Belief and coiners of the term "neurotheology," and the Canadian Mario Beauregard of the Université de Montréal seek less to neuro-locate God than to observe how altered states of consciousness register in the brain. "Each time, we have no idea what we are going to find," one of Mario Beauregard's assistants confides to the camera crew filming the experiments of this team of neurobiologists for the 2006 documentary Le Cerveau mystique (viewable in full below).
By studying the meditative states of Carmelite nuns, they are trying to understand the "spiritual brain." But here again, the task is difficult from one end of the experiment to the other: persuading the nuns, pinpointing the ecstatic moment without being able to interrupt the meditation, and, above all, interpreting the data without knowing precisely what to look for. Neurotheology thus advances by feeling its way, and with an aim that is less philosophical than practical: for these researchers, the objective is to improve people's well-being far more than to play ontological guessing games.
///
With that in mind, many neurotheologians have concentrated their efforts on the study of meditation, seeking to understand both its mechanics and its effects on the brain and the body.
One of the first to tackle the subject was Professor Richard Davidson, who studied Buddhist monks who had devoted tens of thousands of hours to meditation. "Studying their brains," he explains in Le Cerveau mystique, "is a bit like watching chess masters."
And it appears that the brain activity of a person in meditation differs quite considerably from that of an ordinary individual: "there is a dramatic change between novices and practitioners," explains Antoine Lutz, who works closely with the monks, among them the Dalai Lama's French interpreter, Matthieu Ricard, who follows advances in neuroscience closely.
And Ricard is not the only one: the movement's spiritual leader is himself deeply invested in the field. The Dalai Lama is a co-founder and honorary chairman of the Mind and Life Institute, whose mission is to "build a scientific understanding of the mind to reduce suffering and promote well-being."
A commitment that is anything but trivial, for, as a Cistercian monk attending a Mind and Life conference observed:
For a thousand years, religion and science have been at each other's throats whenever they got the chance.
A way of restoring the truce, even if a few die-hards refuse to leave the front. "There are 'fundamentalists' on each side of the debate, those who swear only by science or, conversely, only by religion, and who try to undermine the other camp," explains Carol Albright. A Chicago resident, she describes how five schools of theology coexist with the scientific approach:
I live in the Hyde Park neighborhood of Chicago, home to the University of Chicago as well as five theological schools: Roman Catholic, Lutheran, Unitarian, and two others closer to Calvinism. On top of that, the University of Chicago has a Divinity School that takes a very academic approach. I have friends from each of these denominations who work on understanding the interaction of science and religion. They do not seek to deny scientific conclusions; rather, they try to integrate them into their thinking, the better to assess the current state of knowledge.
What about France?
There, such an approach seems hampered less by pious reserve than by ostracism from the scientific community. Guillaume Dumas, a doctoral student in cognitive neuroscience and the founder of Arthemoc, the first French scientific association devoted to the study of altered states of consciousness, recounts:
In France, we are really lagging behind on all these subjects; it is hard to stray from the beaten path. As soon as you mention religion in a research topic, you come close to being insulted. It even happened to me during a presentation for Arthemoc when I was merely discussing the effects of meditation. Just consider the case of Francisco Varela to understand the French situation. He was a brilliant researcher, a founder of the Mind and Life Institute, yet an article on meditation nearly cost him the accreditation he needed for a research-director position.
Hard for neuroscientists to overcome, this denial of legitimacy feeds, in a supreme irony, a brain drain across the Atlantic. As Guillaume Dumas bitterly observes:
The only solution is to leave for the United States. That is what Antoine Lutz did, a former doctoral student of Varela's who wanted precisely to study meditation using neuroimaging.