Collaboration and the Evolution of Disciplines

Robert Axelrod [7.1.19]

The questions that I’ve been interested in more recently are about collaboration and what can make it succeed, and also about the evolution of disciplines themselves. The part of collaboration that is well understood is that if a team has a diversity of tools and backgrounds available to them—they come from different cultures, they come from different knowledge sets—then that allows them to search a space and come up with solutions more effectively. Diversity is very good for teamwork, but the problem is that there are clearly barriers to people from diverse backgrounds working together. That part of it is not well understood. The way people usually talk about it is that they have to learn each other’s language and each other’s terminology. So, if you talk to somebody from a different field, they’re likely to use a different word for the same concept.

ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Evolution of Cooperation. Robert Axelrod's Edge Bio Page

[ED. NOTE:] As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was at Winvian Farm in Morris, CT. Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, transcripts.

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. Project participants in absentia: George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan, Andy Clark.


COLLABORATION AND THE EVOLUTION OF DISCIPLINES

ROBERT AXELROD: Let me start with what’s new in the world of cooperation. There's the problem of international relations in which an established power, the United States, is dealing with a rising power, China. The ancient Greek historian Thucydides said that Athens and Sparta fought because Athens was a rising power, Sparta was the established power, and they couldn’t work it out. More recently, Graham Allison at Harvard looked at the last 500 years for all the cases in which an established power was dealing with a rising power. He found sixteen of them, twelve of which led to war. Those are not good odds.

One of the ways of dealing with this is to try to develop norms and rules of the road for understanding what’s proper behavior. I’m working with Chinese and American delegations who are meeting regularly to discuss things like cyber conflict. For example, the large-scale use of cyber weapons looks unstable in a way that nuclear weapons are not. So, we’re dealing with how to develop norms for understanding cyber tools and cyber weapons. That’s one area where cooperation is important.

The Geometry of Thought

Barbara Tversky [6.25.19]

Slowly, the significance of spatial thinking is being recognized, of reasoning with the body acting in space, of reasoning with the world as given, but even more with the things that we create in the world. Babies and other animals accomplish amazing feats of thought, without explicit language. So do we chatterers. Still, spatial thinking is often marginalized, a special interest, like music or smell, not a central one. Yet change seems to be in the zeitgeist, not just in cognitive science, but in philosophy and neuroscience and biology and computer science and mathematics and history and more, boosted by the 2014 Nobel Prize awarded to John O’Keefe and Edvard and May-Britt Moser for the remarkable discoveries of place cells, single cells in the hippocampus that code places in the world, and grid cells next door, one synapse away in the entorhinal cortex, that map the place cells topographically on a neural grid. If it’s in the brain, it must be real. Even more remarkably, it turns out that place cells code events and ideas and that temporal and social and conceptual relations are mapped onto grid cells. Voilà: spatial thinking is the foundation of thought. Not the entire edifice, but the foundation.

The mind simplifies and abstracts. We move from place to place along paths just as our thoughts move from idea to idea along relations. We talk about actions on thoughts the way we talk about actions on objects: we place them on the table, turn them upside down, tear them apart, and pull them together. Our gestures convey those actions on thought directly. We build structures to organize ideas in our minds and things in the world, the categories and hierarchies and one-to-one correspondences and symmetries and recursions.

BARBARA TVERSKY is Professor Emerita of Psychology, Stanford University, and Professor of Psychology and Education, Columbia Teachers College. She is the author of Mind in Motion: How Action Shapes Thought. Barbara Tversky's Edge Bio Page

Questioning the Cranial Paradigm

Caroline A. Jones [6.19.19]

Part of the definition of intelligence is always this representation model. . . . I’m pushing this idea of distribution—homeostatic surfing on worldly engagements that the body is always not only a part of but enabled by and symbiotic on. Also, the idea of adaptation as not necessarily defined by the consciousness that we like to fetishize. Are there other forms of consciousness? Here’s where the gut-brain axis comes in. Are there forms that we describe as visceral gut feelings that are a form of human consciousness that we’re getting through this immune brain?

CAROLINE A. JONES is a professor of art history in the Department of Architecture at MIT and author, most recently, of The Global Work of Art. Caroline Jones's Edge Bio Page

The Brain Is Full of Maps

Freeman Dyson [6.11.19]

I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page

Perception As Controlled Hallucination

Predictive Processing and the Nature of Conscious Experience
Andy Clark [6.6.19]

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 

ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page

Mining the Computational Universe

Stephen Wolfram [5.30.19]

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page

REMEMBERING MURRAY

Murray Gell-Mann [5.28.19]
Introduction by Geoffrey West

MURRAY GELL-MANN
September 15, 1929 – May 24, 2019

[ED. NOTE: Upon learning of the death of long-time friend and colleague Murray Gell-Mann, I posed the question below to the Edgies who knew and/or worked with him. —JB]

Can you tell us a personal story about Murray and yourself (about physics, or not)?  


THE REALITY CLUB
Leonard Susskind, George Dyson, Stuart Kauffman, John Brockman, Julian Barbour, Freeman Dyson, Neil Gershenfeld, Paul Davies, Virginia Louise Trimble, Alan Guth, Gino Segre, Sara Lippincott, Emanuel Derman, Jeremy Bernstein, George Johnson, Seth Lloyd, W. Brian Arthur, W. Daniel Hillis, Frank Tipler, Karl Sabbagh, Daniel C. Dennett


[ED. NOTE: For starters, here's a story Murray told about himself when I spent time with him in Santa Fe over Christmas vacation in 2003, excerpted from "The Making of a Physicist," Edge, June 3, 2003. —JB]

Uncharacteristically, I discussed my application to Yale with my father, who asked, "What were you thinking of putting down?" I said, "Whatever would be appropriate for archaeology or linguistics, or both, because those are the things I'm most enthusiastic about. I'm also interested in natural history and exploration."

He said, "You'll starve!"

After all, this was 1944 and his experiences with the Depression were still quite fresh in his mind; we were still living in genteel poverty. He could have quit his job as the vault custodian in a bank and taken a position during the war that would have utilized his talents — his skill in mathematics, for example — but he didn't want to take the risk of changing jobs. He felt that after the war he would regret it, so he stayed where he was. This meant that we really didn't have any spare money at all.

I asked him, "What would you suggest?"

He mentioned engineering, to which I replied, "I'd rather starve. If I designed anything it would fall apart." And sure enough, when I took an aptitude test a year later, I was advised to take up nearly anything but engineering.

Then my father suggested, "Why don't we compromise — on physics?"


Introduction
By Geoffrey West

Murray Gell-Mann was one of the great scientists of the 20th century, one of its few renaissance people and a true polymath. He is best known for his seminal contributions to fundamental physics, for helping to bring order and symmetry to the apparently chaotic world of the elementary particles and the fundamental forces of nature. He dominated the field from the early ‘50s, when he was still in his twenties, up through the late ‘70s. Basically, he ran the show. By modern standards he didn’t publish a lot, but when he did we all hung on every word. It is an amazing litany of accomplishments: strangeness, the renormalization group, color and quantum chromodynamics, and of course, quarks and SU(3), for which he won the Nobel prize in 1969.

He was the Robert Andrews Millikan Professor Emeritus of Theoretical Physics at the California Institute of Technology, a cofounder of the Santa Fe Institute, where he was a Distinguished Fellow; a former director of the J.D. and C.T. MacArthur Foundation; one of the Global Five Hundred honored by the U.N. Environment Program; a former Citizen Regent of the Smithsonian Institution; a former member of the President's Committee of Advisors on Science and Technology; and the author of The Quark and the Jaguar: Adventures in the Simple and the Complex.

Despite his extraordinary contributions to high-energy physics, Murray maintained throughout his life an enduring passion for understanding how the messy world of culture, economies, ecologies and human interaction, and especially language, evolved from the beautifully ordered world of the fundamental laws of nature. How did complexity evolve from simplicity? Can we develop a generic science of complex adaptive systems? In the ‘80s he helped found the Santa Fe Institute as a hub on the academic landscape for addressing such vexing questions in a radically transdisciplinary environment.

Murray Gell-Mann knew, understood and was interested in everything, spoke every language on the planet, and probably those on other planets too, and was not shy in letting you know that he did. He was infamous not just for correcting your facts or your logic, but most annoyingly to some, for correcting how you should pronounce your name, your place of birth, or whatever. Luckily my name is West but that never stopped him from lecturing me many times on the Somerset dialect that I spoke as a young child.

Although he decidedly did not suffer fools and would harshly, sometimes almost cruelly, criticize sloppy thinking or incorrect factual statements, he would intensely engage with anyone regardless of their status or standing if he felt they had something to contribute. I rarely felt comfortable when discussing anything with him, whether a question of physics or lending him money, expecting to be clobbered at any moment because I had made some stupid comment or pronounced something wrong.

Murray could be a very difficult man…but what a mind! However, he loved to collaborate, to discuss ideas, and was amazingly open and inclusive even if he did dominate the proceedings. By the time we had become colleagues at SFI, I had become less and less sensitive to the master’s anticipated criticism or even to his occasional praise; the potential trepidation had pretty much disappeared and our relationship had evolved into friendship and collegiality, just in time for me to become his boss. Negotiating with Murray over a perplexing physics question is one thing, but try negotiating with him over salary and secretarial support, then you’ll really see him in action. To quote Hamlet: "He was a man. Take him for all in all. I shall not look upon his like again."

GEOFFREY WEST is a theoretical physicist; Shannan Distinguished Professor and Past President, Santa Fe Institute; Author, Scale. Geoffrey West's Edge Bio Page

On Edge

Foreword to "The Last Unknowns" Daniel Kahneman [5.22.19]

Introduction

On June 4th, HarperCollins is publishing the final book in the Edge Annual Question series, entitled The Last Unknowns: Deep, Elegant, Profound Unanswered Questions About the Universe, the Mind, the Future of Civilization, and the Meaning of Life. I am pleased to publish the foreword to the book by Nobel Laureate Daniel Kahneman, author of Thinking, Fast and Slow, and a frequent participant in Edge events (presenter of the first Edge Master Class on "Thinking About Thinking" in 2007; co-presenter, with colleagues Richard Thaler and Sendhil Mullainathan, of the second Master Class, "A Short Course in Behavioral Economics," in 2008). Below, please find Daniel Kahneman's foreword to The Last Unknowns and the table of contents listing the 282 contributors. Thanks to all for your support and attention in this interesting and continuing group endeavor.

John Brockman
Editor, Edge


ON EDGE
by Daniel Kahneman

It seems like yesterday, but Edge has been up and running for twenty-two years. Twenty-two years in which it has channeled a fast-flowing river of ideas from the academic world to the intellectually curious public. The range of topics runs from the cosmos to the mind and every piece allows the reader at least a glimpse and often a serious look at the intellectual world of a thought leader in a dynamic field of science. Presenting challenging thoughts and facts in jargon-free language has also globalized the trade of ideas across scientific disciplines. Edge is a site where anyone can learn, and no one can be bored.

The statistics are awesome: The Edge conversation is a "manuscript" of close to 10 million words, with nearly 1,000 contributors whose work and ideas are presented in more than 350 hours of video, 750 transcribed conversations, and thousands of brief essays. And these activities have resulted in the publication of 19 printed volumes of short essays and lectures in English and in foreign language editions throughout the world.

The public response has been equally impressive: Edge's influence is evident in its Google Page Rank of "8," the same as The Atlantic, The Economist, The New Yorker, The Wall Street Journal, and The Washington Post; in the enthusiastic reviews in major general-interest outlets; and in the more than 700,000 books sold.

Of course, none of this would have been possible without the increasingly eager participation of scientists in the Edge enterprise. And a surprise: brilliant scientists can also write brilliantly! Answering the Edge question evidently became part of the annual schedule of many major figures in diverse fields of research, and the steadily growing number of responses is another measure of the growing influence of the Edge phenomenon. Is now the right time to stop? Many readers and writers will miss further installments of the annual Edge question—they should be on the lookout for the next form in which the Edge spirit will manifest itself.

The Cul-de-Sac of the Computational Metaphor

Rodney A. Brooks [5.13.19]

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page

Machines Like Me

Ian McEwan [4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge.

A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics' Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page


MACHINES LIKE ME

IAN MCEWAN: I feel something like an imposter here amongst so much technical expertise. I’m the breakfast equivalent of an after-dinner mint.

What’s been preoccupying me the last two or three years is what it would be like to live with a fully embodied artificial consciousness, which means leaping over every difficulty that we’ve heard described this morning by Rod Brooks. The building of such a thing is probably scientifically useless, much like putting a man on the moon when you could put a machine there, but it has an ancient history.
