Why I’m still a (non-card-carrying) Skeptic

I just came back from Las Vegas, where I had a lovely time at the annual CSICon event, organized by the folks that bring you Skeptical Inquirer magazine, among other things. As I’ve done almost since the beginning of my involvement with the skeptic movement, back in, gasp, 1997, I delivered a bit of a gadfly talk. This one was about scientism, reminding my fellow skeptics that they have a tendency to overdo it with the science thing, at times coming across as nearly as evangelical and obtuse as their usual targets, from creationists to UFO believers. After asking the audience to be patient with me and not to serve me hemlock for lunch, I minced no words and criticized by name some of the big shots in the field, from Neil deGrasse Tyson to Richard Dawkins, from Sam Harris to Steven Pinker. And of course several of those people were giving talks at the same conference, either right before or right after me.


No hemlock was served, and I got less resistance to my chastising than usual from the audience. Some people even approached me later on telling me how much they appreciated my reminder that our community is not perfect and we need to do better. It was all very congenial, set against the perfect backdrop of the ultimate fake city in the world, and accompanied by the occasional dirty martini.


On my way back to New York I then got a tweet from a follower linking to yet another “I resign from the skeptic movement and hand in my skeptic card” article, written by a prominent (former) skeptic. It doesn’t matter who. The list of complaints by that author is familiar: a tendency toward scientism, a certain degree of sexism within the movement, and a public failure to lead by some of the de facto leaders. These are the same issues I have been complaining about for years (for instance, here). But I have not quit, and do not intend to quit. Why?


The uncharitable answer would be because I’m part of the privileged elite. I doubt anyone would seriously consider me a “leader” in the movement, but I have certainly been prominent enough. And I am a male. White. Heterosexual. The problem is, uncharitable views are highly unhelpful, and I’m on record advocating on behalf of diversity in the movement, speaking out against sexual harassment, and – as I mentioned above – making a mini-career of stinging the big shots every time I think they deserve it, which is rather often. So I’m afraid a casual dismissal based on my gender, sexual preference and ethnicity will not do. Quite apart from the fact that it would be obviously hypocritical on the part of anyone who claims that gender, sexual preference and ethnicity should not be grounds for blanket statements of any kind.


No, I stay because I believe in the fundamental soundness of the ideas that define modern skepticism, and also because I think quitting to create another group is an example of an all too common fallacy: the notion that, despite all historical evidence to the contrary, next time we’ll definitely get it right and finally create utopia on earth. Let me elaborate on each point in turn.


“Skepticism,” of course, has a long history in philosophy and science. The original Skeptics of ancient Greece and Rome were philosophers who maintained that human knowledge is either highly fallible or downright impossible (depending on which teacher of the school you refer to). Consequently, they figured that the reasonable thing to do was to either abstain entirely from any opinion, or at least to hold on to such opinions as lightly as possible. Theirs wasn’t just an epistemological stance: they turned this into a style of life, whereby they sought serenity of mind by way of detaching themselves emotionally from those opinions (political, religious) that others held so strongly and often died for. Not my cup of tea, but if you think about it, it’s not a bad approach to good living at all.


The philosopher who most closely embodies modern skepticism, however, is the Scottish Enlightenment figure par excellence, David Hume. He held an attitude of open inquiry, considering every notion worth investigating and leaving the (provisional) verdict of such investigations to the empirical evidence. He famously said that a reasonable person proportions his beliefs to the available facts, a phrase later turned by Carl Sagan into his hallmark motto: extraordinary claims require extraordinary evidence.


The contemporary skeptic movement was the brainchild of people like philosopher Paul Kurtz (the founder of the organizations that preceded CSI, as well as of Skeptical Inquirer), magician James “the Amazing” Randi (organizer of the long-running conference that preceded CSICon, known as TAM, The Amazing Meeting), Carl Sagan himself, and a number of others. Initially, the movement was rather narrowly devoted to the debunking of pseudoscientific claims ranging from UFOs to telepathy, and from Bigfoot to astrology.


More recently, mainly through the efforts of a new generation of leaders – including but not limited to Steve Novella and his group, Michael Shermer, Barry Karr, and so forth – the scope of skeptical analysis has broadened to include modern challenges like those posed by the anti-vax movement and, of course, climate change. Even more recently, young people from a more diverse crowd, finally including several women like Rebecca Watson, Susan Gerbic, Kavin Senapathy, Julia Galef, and many others, have further expanded the discourse to include an evidence-based treatment of political issues, such as gender rights and racism.


The values of the skeptic movement, therefore, encompass a broad set that I am definitely on board with. At its best, the community is about reason broadly construed, critical but open-minded analysis of extraordinary claims, support for science-based education and critical thinking, and welcoming diversity within its ranks.


Of course, the reality is, shall we say, more complex. There have been plenty of sexual harassment scandals involving high-profile members of the community. There is that pesky tendency toward closing one’s mind and dismissing rather than investigating claims of the paranormal. And there is a new, annoying vogue of rejecting philosophy, despite the fact that a skepticism (or even a science) without philosophical foundations is simply impossible.


But this leads me to the second point: I think it far more sensible to stay and fight for reform and improvement rather than to “hand in my skeptic card” (there is no such thing, of course) and walk away. Because those who have walked away have, quite frankly, gone nowhere. Some have attempted to create a better version of what they left behind, like the thankfully short-lived “Atheism+” experiment of a few years ago.


The problem with leaving and creating an alternative is that the new group will soon enough inevitably be characterized by the same or similar issues, because people are people. They diverge in their opinions, they get vehemently attached to those opinions, and they fight tooth and nail for them. Moreover, people are also fallible, so they will in turn engage in the same or similar behaviors as the ones that led to the splintering of the group in the first place, including discrimination and harassment. So the whole “I’m leaving and creating a new church over there” approach ends up being self-defeating, dispersing resources and energy that could far better be used to improve our own household from within while we keep fighting the good fights we inherited from the likes of Kurtz and Sagan.


So, no, I’m not leaving the skeptic movement. I will keep going to CSICon, NECSS, the CICAP Fest, and wherever else they’ll invite me. I will keep up my self-assigned role of gadfly, annoying enough people and hopefully energizing a larger number, so that we keep getting things more and more right. After all, this is about making the world into an at least slightly better place, not into our personal utopia tailored to our favorite political ideology.


They’ve done it again: another embarrassing moment for the skeptic movement

In a few days I will be in Las Vegas. No, it’s not what you may be thinking. I’ll be the token skeptic at one of the largest conferences of skeptics: CSICon, courtesy of the same people who publish Skeptical Inquirer magazine, for which I wrote a column on the nature of science for a decade. I say “token skeptic” because I have been invited by the organizers to talk about scientism, the notion that sometimes science itself is adopted as an ideology, applied everywhere even where it doesn’t belong or is not particularly useful (here is a video about this).


I have been both a member and a friendly internal critic of the skeptic community since the late ‘90s, and I have been reminded of the value of such a gadfly-like role very recently, with the publication of yet another “skeptical” hoax co-authored by philosopher Peter Boghossian and author James Lindsay, this time accompanied by Areo magazine’s Helen Pluckrose. The hoax purports to demonstrate once and for all that what the authors disdainfully refer to as “grievance studies” (i.e., black studies, race studies, women’s studies, gender studies, and allied fields) is a sham hopelessly marred by leftist ideological bias. The hoax doesn’t do any such thing, although those fields are, in fact, problematic. What the stunt accomplishes instead is to reveal the authors’ own ideological bias, as well as the poverty of critical thinking by major exponents of the self-professed skeptic community. But let’s proceed in order.


Boghossian and Lindsay made a first, awkward attempt at this last year, by submitting a single fake paper entitled “The Conceptual Penis as a Social Construct.” It was a disaster: the paper was, in fact, rejected by the first (very low-ranking) journal they submitted it to, and only got published in an unranked, pay-per-publish journal later on. Here is my commentary on why Boghossian and Lindsay’s achievement was simply to shine a negative light on the skeptic movement, and here is a panel discussion about their failure at the North East Conference on Science and Skepticism later on in the year. That did not stop major exponents of the skeptic movement, from Michael Shermer to Steven Pinker, from Richard Dawkins to Sam Harris and Jerry Coyne, from praising Boghossian and Lindsay, which is why I maintain the episode was an embarrassment for the whole community.


The hoax, of course, was modeled after the famous one perpetrated by NYU physicist Alan Sokal at the expense of the (non-peer-reviewed) postmodernist journal Social Text, back in the ‘90s, at the height of the so-called science wars. Sokal, however, is far more cautious and reasonable than Boghossian & co., writing about his own stunt:


From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty.


In fact, Sokal himself published some good criticisms of the conceptual penis hoax.


Not having learned their lesson at all, Boghossian & co. engaged in a larger project of the same kind, this time sending out 21 fake papers to a number of journals, mostly in women and gender studies. Two thirds of the papers were rejected. Of the seven accepted papers, one was a collection of (bad) poetry, and thus really irrelevant to the objective at hand; two were simply boring and confusing, like a lot of academic papers; one was a self-referential piece on academic hoaxes that one independent commentator actually judged to be making “somewhat plausible arguments”; and three more included fake empirical evidence. As Daniel Engber says in Slate:


One can point to lots of silly-sounding published data from many other fields of study, including strictly scientific ones. Are those emblematic of ‘corruption’ too?


Indeed, there are several examples of this in the literature, like a 2013 hoax that saw a scientific paper about anti-cancer properties in a chemical extracted from a fictional lichen published in several hundred journals. Hundreds, not just half a dozen!


It’s well worth reading the entirety of Engber’s commentary, which exposes several problematic aspects of Boghossian et al.’s stunt. The major issues, as I see them, are the following:


1. Hoaxes are ethically problematic, and I honestly think Portland State University should start an academic investigation of the practices of Peter Boghossian. In the first place, I doubt the study (which was published in Areo magazine, not in a peer-reviewed journal!) obtained the standard clearance required for research on human subjects. Second, the whole enterprise of academic publishing assumes that one is not faking things, particularly data. So tricking reviewers in that fashion at the very least breaches the ethical norms of any field of scholarship.


2. The authors make a big deal of the ideological slant of the fields they target, apparently entirely oblivious to their own ideological agenda, which explicitly targeted mostly women and gender studies. Both Boghossian and Lindsay have published a series of tweets (see Engber’s essay) that nakedly display their bias. Is the pot calling the kettle black?


3. While we can certainly agree that it is disturbing that academic journals publish papers that are more or less obviously fake, this is not a good criticism of the target fields. You know what a good criticism would look like? It would take the form of a serious, in-depth analysis of arguments proposed by scholars in those fields. But Boghossian & co. actually proudly proclaimed, after their first hoax, that they had never read a paper in “X studies,” which means that – literally – they don’t know what they are talking about. Here is one example of how to do it.


4. What Boghossian et al. really want to convey is that “X studies” are intellectually bankrupt, unlike other academic disciplines, particularly scientific ones. But as the example of the anti-cancer hoax mentioned above, and several others, show, this is simply not the case. Corruption of academic culture, resulting either from ideological bias or from financial interests (pharmaceutical companies are well known to establish entire fake journals to push their products) is not limited to certain small corners of the humanities.


5. In a related fashion – and surprisingly given that Boghossian actually teaches critical thinking – while the first hoax fatally suffered from a sample size of n=1, the new one is plagued by the simple fact that it has no control! Without a similar systematic attempt being directed at journals in other fields (particularly scientific ones) we can conclude precious little about the specific state of “X studies.”
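
To make the design point concrete, here is what even a minimal controlled comparison would look like. This is only a sketch: the hoax arm uses the reported 7-of-21 acceptances, but the science-journal control arm never existed, so its numbers below are purely hypothetical placeholders.

```python
# Sketch of the control comparison Boghossian et al. never ran.
# The "X studies" arm uses their reported numbers (7 accepted, 14 rejected);
# the science-journal arm is HYPOTHETICAL, since no control was attempted.
from scipy.stats import fisher_exact

accepted_x, rejected_x = 7, 14      # reported hoax outcome
accepted_sci, rejected_sci = 5, 16  # made-up control numbers

oddsratio, pvalue = fisher_exact([[accepted_x, rejected_x],
                                  [accepted_sci, rejected_sci]])
print(f"odds ratio = {oddsratio:.2f}, p = {pvalue:.3f}")
# With arms this small, even a sizable difference in acceptance rates is
# statistically indistinguishable from noise -- which is precisely why a
# hoax with no control arm licenses no conclusion specific to "X studies".
```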


That said, do I think that the fields targeted by Boghossian & co. are problematic? Yes, as I’ve written before. Here the most useful commentary on the hoax has been published in the New York Times by William Eggington. As he puts it:


The problem is not that philosophers, historians or English professors are interested in, say, questions of how gender or racial identity or bias is expressed in culture or thought. Gender and racial identity are universally present and vitally important across all the areas that the humanities study and hence should be central concerns. The problem, rather, is that scholars who study these questions have been driven into sub-specializations that are not always seen as integral to larger fields or to the humanities as a whole. Sometimes they have been driven there by departments that are reluctant to accept them; sometimes they have been driven there by their own conviction that they alone have the standing to investigate these topics.


That strikes me as exactly right. “X studies” programs should be integrated within a university, either (ideally) in broad multidisciplinary programs, or within the most suitable departments, such as History, Philosophy, Sociology, and the like.


Eggington blames academic hyperspecialization for the current sorry state of affairs in these fields, as well as the “publish or perish” attitude that has plagued academia for decades now. But guess what? “X studies” are most definitely not the only ones to suffer from these problems. They are endemic to the whole of modern academia, including the natural sciences. Indeed, we should be far more worried about the influence of ideology and big money on scientific fields than on small areas of the humanities. After all, it is in the name of science that we spend billions annually, and it is from science that we expect miracles of medicine and technology.


As Engber writes in the Slate commentary, notwithstanding the dire warnings of Boghossian, Pinker, Harris, Dawkins and all the others:


Surprise, surprise: Civilization hasn’t yet collapsed. In spite of Derrida and Social Text, we somehow found a means of treating AIDS, and if we’re still at loggerheads about the need to deal with global warming, one can’t really blame the queer and gender theorists or imagine that the problem started with the Academic Left. (Hey, I wonder if those dang sociologists might have something interesting to say about climate change denial?)


The new Boghossian-led hoax is another example of a badly executed, ideologically driven stunt that targets narrow fields with little impact while leaving alone the big elephants in the room. It is, in the end, yet another embarrassment for the skeptical community, as well as a reflection of the authors’ own biases and narrow-mindedness.

The techno-optimists are at it again

(the atomic explosion that destroyed Hiroshima)


Techno-optimism (a form of applied scientism, if you will) is the attitude that no matter how dire humanity’s problems, science and technology will surely come to the rescue. It tends to conveniently neglect that some of humanity’s biggest contemporary problems (say, climate change, or the risk of nuclear annihilation) are, in fact, caused by the willful misuse of science and technology. It seems odd to firmly believe that more of the same thing that caused the disease in the first place will surely cure the disease, because, you know, this time we’ll get it right.


A good example of techno-optimism is a recent article in Slate by Phil Torres, based on his new book, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. It’s actually a somewhat puzzling article, because Torres is both critical and supportive of what he calls radical human enhancement as a way to solve what he perceives as humanity’s existential risks. My take is that he mostly focuses on the wrong risks, some of which are not actually existential at all, and that his proposed solution is more likely to make things worse than better. I think of myself as a realist about technology – one who both appreciates its advantages (I’m writing this on a wonderfully advanced tablet computer!) and is aware of its dark side. But if after reading this you want to class me as a techno-pessimist, I’ll take it. Just don’t dismiss me as a Luddite, okay?


Torres begins by correctly pointing out that the current century is a bit special, in the context both of human evolution and, for that matter, of the evolution of life on our planet. For the first time since life emerged 3.5 billion years ago, a single sentient species has developed the capacity to profoundly alter Earth’s bio- and geo-spheres. As my favorite philosopher, Spider-Man, warned us, with great power comes great responsibility, but we just don’t seem to be willing to accept that responsibility.


Torres then introduces the concepts of cognitive and moral enhancements, though the word “moral” appears only near the beginning of the piece, with “cognitive” replacing it throughout the rest of the article. That, as we shall see, is a crucial mistake. There are two classes of enhancement, conventional and radical. You are surely familiar with the conventional class (hence the name!): it includes things like education, meditation, and the absorption of caffeine. Okay, it’s an odd mix, but you get the point: anything that improves our cognitive abilities without permanently altering them in a heritable fashion, that is, across generations.


Radical enhancements are a whole different story, and while still at the borderlands between science and science fiction, surely some of them will become available within years or decades. Torres focuses his essay on radical enhancements, since he thinks these are the ones that will be necessary to stave off the existential risks faced by humanity.


One such radical enhancement is embryo selection, a process by which scientists – the wisest of all people, as we all know – pick a subset of embryos generated by a given combination of sperm and eggs, and do so repeatedly in order to improve whatever human characteristic is deemed desirable. Torres is perfectly aware that this is eugenics, but he deems it to be of a benign type, because it doesn’t violate people’s autonomy. I guess he hasn’t seen the film Gattaca. And yes, it is perfectly acceptable to object to sci-fi scenarios by using sci-fi philosophical thought experiments. Torres comments:


If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. … According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points – a promising method for creating superbrainy offspring in a relatively short period of time. … As Bostrom puts it … ‘a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.’
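
It is worth seeing where a number like “up to 130 points” comes from. Here is a toy Monte Carlo of the iterated-selection scheme, run under the scheme’s own charitable assumptions: a perfectly measurable, purely additive genetic score, and a sibling standard deviation (the SIB_SD parameter) that is my illustrative invention, not an empirical estimate.

```python
# Toy Monte Carlo of the Bostrom-Shulman iterated embryo selection scheme.
# Assumes a perfectly known, purely additive "genetic IQ score" -- exactly
# the assumption criticized below. SIB_SD is an illustrative parameter.
import numpy as np

rng = np.random.default_rng(42)

SIB_SD = 8.5     # assumed SD of the selectable score among sibling embryos
N_EMBRYOS = 10   # pick the top embryo out of 10...
N_ROUNDS = 10    # ...and repeat the process 10 times

def iterated_selection():
    """Total gain in the genetic score after N_ROUNDS of top-1-of-10 selection."""
    mean = 0.0
    for _ in range(N_ROUNDS):
        embryos = rng.normal(mean, SIB_SD, N_EMBRYOS)
        mean = embryos.max()  # the selected embryo seeds the next round
    return mean

gains = [iterated_selection() for _ in range(1000)]
print(f"mean cumulative gain: {np.mean(gains):.0f} points")
# Prints a gain on the order of 130 points: E[max of 10 std normals] ~ 1.54,
# times 10 rounds, times SIB_SD. The headline number simply restates the
# assumptions (no shrinking variance, no pleiotropy, perfect prediction).
```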


I’m afraid that Bostrom and Shulman don’t know what they are talking about, and no such strong prima facie case has been made. At all. First off, we actually don’t understand the genetic basis of intelligence. We know that IQ (which is not at all the same thing as “intelligence,” whatever that is) is heritable in humans. But “heritable” simply means that there is – other things being equal – a statistical correlation between intelligence and genetic makeup. Nothing more, and that ain’t even remotely close enough to what one would need in order to do embryo selection on intelligence, even setting aside the ethical issues, which would be far more thorny than Torres lets on.
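
For reference, the standard quantitative-genetics definition being gestured at here is the following (my gloss, not Torres’s):

```latex
% Narrow-sense heritability: the fraction of a population's phenotypic
% variance V_P attributable to additive genetic variance V_A.
h^2 = \frac{V_A}{V_P}, \qquad 0 \le h^2 \le 1
% The breeder's equation then links a selection differential S applied
% to a population to its expected response R:
R = h^2 S
```

Both quantities are statements about variances in a particular population in a particular environment; neither licenses reading an individual’s trait off a genotype, which is what selecting embryos for “intelligence” would actually require.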


Second, and this will become a recurring theme of my critique, “superbrainy” doesn’t seem to have a lot to do with what is really needed: wisdom, or a good moral compass. I seriously doubt that there is any correlation at all between intelligence and morality, and if I’m right, creating a super-race of hyper-intelligent beings with the same highly imperfect moral compass as Homo sapiens sapiens is a sure recipe to accelerate and magnify whatever existential threat Torres, Bostrom and Shulman may be concerned about.


Speaking of which: what does Torres consider to be an existential threat to humanity? At the top of his list he puts “apocalyptic terrorism,” the possibility that someone inspired by a “Manichean belief system” will blow all of us to smithereens with a stolen atomic weapon, in the name of ridding the world of apostates and assorted infidels, thus establishing the kingdom of God on earth.


While surely there is a risk of one such attack, notice a few important caveats. To begin with, there is no credible scenario under which a nuclear terrorist attack would be civilization-ending. Yes, someone may be able to sneak a low-grade nuclear weapon into a major city and kill hundreds of thousands, millions even. That would be an unprecedented and horrifying catastrophe. But an existential threat to civilization? No. You know what really constitutes such a threat? The fact that the codes for thousands of nuclear missiles are currently in the hands of an incompetent narcissist sitting in the White House. But, curiously, there is no mention of government-based threats in Torres’ piece. Lastly, please keep in mind that this specific threat is made possible by, you guessed it, science and technology! It’s the very existence of very smart scientists and unscrupulous politicians – none of whom seems to be equipped with even a barely functioning moral compass – that has put us into this situation in the first place. And you think giving more leeway to the same folks is going to save humanity?


More generally speaking, Steven Pinker’s ‘Escalator of Reason’ hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the ‘Flynn effect.’ The most important concept here is that of ‘abstract reasoning,’ which Pinker identifies as being ‘highly correlated’ with IQ. In his words, ‘abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.’


With all due respect to Steven Pinker, I’m going to call bullshit on this one as well. As a start, “violence” has indeed declined since the second half of the 20th century (though even this conclusion comes with huge caveats about what exactly counts as violence), but there is a reason Pinker picked that particular time frame: two world wars had just taken place in the previous half century, killing millions of people, thanks to science and technology. The culmination of that period was the only nuclear attack on civilians in the history of humanity (so far), perpetrated by a government, not a Manichean terrorist: the US government, to be specific.


Moreover, there is no causal model (correlation, as Pinker knows, is not the same as causation) that actually links the Flynn effect (which is probably due to “conventional enhancement techniques,” such as better nutrition and education) and moral improvement. Indeed, I see no reason to believe that humanity at large has improved morally since the times of Socrates and Confucius. And “abstraction from the concrete particulars of immediate experience” is also the sort of thing that makes possible killing at a distance by pushing a button, or that allows many of us to reconcile the otherwise irreconcilable fact that the top 10% of the human population lives by standards historically reserved for Kings and Queens while the rest is below or barely above poverty, subject to preventable disease, or killed by violence rendered particularly effective by technologically advanced weaponry in the hands of unscrupulous governments.


Torres does acknowledge some of the limitations of the approach proposed by techno-optimists like Pinker. After writing that perhaps “idiosyncratic actors” (i.e., terrorists) would suffer from less empathy if they had a higher IQ, he remembers that some real life examples of such actors, like the Unabomber Ted Kaczynski, actually do have high IQs, and yet they are still deficient in empathy. So let me state this clearly: there is no reason whatsoever to think that IQ and empathy are correlated, which throws a big wrench in Pinker’s, Bostrom’s and similar programs of enhancement. Torres continues:


Another major concern: cognitive enhancements would likely increase the rate of technological development, thereby shortening the segment of time between the present and when large numbers of people could have access to a doomsday button.


Right. But, again, he and his colleagues insist on worrying about the least likely threats, which, once more, are not actually existential. No Unabomber can end the world. But Donald Trump (just to pick on the current occupant of the WH; it’s not that I trust others a hell of a lot more) can come pretty darn close. But Torres insists:


Although cognitive enhancements could worsen some types of terror agents, the evidence – albeit indirect – suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


Ahem, no, the evidence suggests no such thing, and in fact the production of a population of “cognitively enhanced cyborgs” is a nightmare that only naive techno-optimists could possibly wish on the rest of us. Don’t these people watch any sci-fi at all? And there is more nonsense on stilts:


It seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in ‘all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.’ … Superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.


Bostrom’s calculations are based on thin air, to be charitable. It isn’t even clear what “all-round cognitive performance” means, let alone how to improve it so efficiently, and it is even more doubtful that such an improvement would actually be an improvement. Also, what sort of model of the brain is Bostrom working with, that allows him to simply sum small percentage increases across different individuals as if they were equivalent to a gigantic increase in a single person? Moreover, look at the list of disasters: most of them are extremely unlikely, and it is just as unlikely that we would be able to do much about them (I wonder why a nearby nova explosion isn’t part of the mix), but the most worrisome ones (climate change, biodiversity loss, emerging technologies, and agential risks) are all made possible by the very same thing that is supposed to save us: more intelligent technology.
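
Spelled out, the aggregation Bostrom relies on is just linear addition (my reconstruction of the arithmetic, not his derivation):

```latex
\underbrace{10^{7}}_{\text{scientists}}
\times
\underbrace{0.01}_{\text{per-capita gain}}
\;=\;
10^{5}\ \text{``scientist-equivalents''}
```

That equality holds only if scientific output is a separable linear sum of individual cognitive performances (total output = c1 + c2 + … + cn, so a 1% gain in each term is a 1% gain in the total). That hidden linearity is exactly the unargued model of brains and communities the calculation needs.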


Toward the end of the essay we reach truly Pindaric flights of imagination:
There could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we can only know this if we understand a Theory T. The problem is that understanding Theory T requires us to grasp a single Concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.


Sure. Now how about coming down to Earth, our only planet, which we are the ones currently destroying, and talk some sense? One of the problems with techno-optimism is that it captures the imagination with talk of supervolcanoes and “great unknown danger” X, offering us the reassuring but extremely dangerous feeling that all we need to do to get out of the trouble we have stubbornly dug ourselves into is more science. It almost sounds like magic. Because it almost is.


Only at the very end does Torres manage to slip in the crucial word that has been missing from the entire exercise: wisdom. The problem is not that we are not smart enough, but rather that we are not wise enough. Science and technology have advanced by giant leaps since the time of Socrates and Confucius, and yet these two (as well as a number of other ancient sages) have remained unsurpassed in wisdom by even the most cognitively advanced human beings of the intervening two and a half millennia.


I know it sounds far more pedestrian and unexciting, but what if – for a change – we actually got serious about using the sort of conventional enhancements that are proven to work? First and foremost, education. I don’t mean the sort of STEM-oriented technical tripe that produces more barely functional human drones apt for the use of large corporations. I mean serious education, what the Greeks called paideia, the rearing of good citizens of the polis. And yes, some meditation or other kinds of spiritual exercises, to calm our mind down and center ourselves so that we can live a better and more meaningful life, instead of depending on the constant acquisition of consumer goods for our so-called happiness. And caffeine, of course, that’s always helpful.

Neil deGrasse Tyson “debunks” Spider-Man. And that’s just wrong

I’ve spent a significant part of my academic and public careers investigating and opposing pseudoscience. One of my role models in this quest has always been astronomer Carl Sagan, the original host of the landmark PBS series Cosmos. I have met and interviewed the new host, Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History. Despite our differences about the value of philosophy (he’s dead wrong on that one), Neil too got into the debunking business. But – unlike Sagan – he does it with more than a whiff of scientism, and occasionally in a spectacularly wrongheaded fashion.


Take, for instance, last week’s mini-appearance on The Late Show with Stephen Colbert, one of my favorite programs for laughing at the crap currently affecting the planet (as we all know, a sense of humor is the best defense against the universe). On September 14th, Tyson was featured in a one-minute video entitled “Superpowers debunked, with Neil deGrasse Tyson.” What? Why do we need to “debunk” superpowers? Does anyone actually think there exists a god of thunder named Thor, who comes from a mythical place known as Asgard? But apparently the “problem” is pressing enough for our debunker-in-chief to use a popular nationally televised show to tackle it. Here is, in part, what Neil said (and no, this isn’t a joke, he was serious):


Let’s tackle Spider-Man.


No, let’s not! Spider-Man is one of my favorite superheroes, a (fictional) role model, motivated by a more than decent philosophy of life: with great power comes great responsibility (he got that from Uncle Ben). Something Tyson has, apparently, not learned. He goes on:


He’s bitten by a radioactive spider. Don’t we know from experience that radioactivity gives your organs cancer? So, he would just be a dead kid, not one with superpowers.


No kidding, Sherlock. Do we really need the awesome reasoning powers of a star national science popularizer to figure out that Spider-Man’s origin story doesn’t stand up to even casual scrutiny? Doesn’t Neil realize that this is fiction, for crying out loud? Well, apparently, he does, sort of:


Of course it’s fiction, so I don’t have a problem with fiction, but if you think you are going to do this experiment, and try to make that happen to you, I’ve got news for you: it’s not gonna work.


Well, Neil, apparently you do have a problem with fiction. I still remember that on my podcast, years ago, you complained about the aliens in Avatar, because the females had breasts, which are – obviously – a mammalian trait. Really? That’s what bothered you in that movie? Never heard of suspending disbelief and just enjoying a nice story?


Also, who on earth is going to be tempted to repeat in real life the “experiment” that generated Spider-Man? And even if an enterprising and badly informed kid wanted to, where would he get a radioactive spider? Lastly:


I’ve got news for you: it’s not gonna work.


You think?


All right, end of my anti-Tyson rant in defense of Spider-Man. The more serious issue here is: why did he feel the need to do such a silly thing in the first place? I suspect that’s because Neil, like a number of “skeptics” I know, is affected by two maladies: the above mentioned scientism and a strong sense of intellectual superiority to the common rabble.


Scientism is defined by the Merriam-Webster as “an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation.” I don’t know whether commentaries on comic book superheroes qualify as an area of investigation, but clearly Tyson felt it necessary to bring the awesome power of science and critical thinking to debunking the dangerous notion that being bitten by a radioactive spider will give you magical powers.


I really think the skeptic community should stay as far away as possible from the whole notion of debunking (and yes, I’ve been guilty of using that word myself, in the past). For one thing, it conveys a sense of preconceived outcome: you know a priori that the object of your debunking is nonsense, which isn’t exactly in line with the ideal scientific spirit of open inquiry. That’s why my favorite actual skeptic is philosopher David Hume, who famously said that a reasonable person’s beliefs should be proportionate to the evidence, a phrase later turned by Sagan into his famous “extraordinary claims require extraordinary evidence.” Sagan, like Hume, was open to a serious consideration of phenomena like UFOs and telepathy, even though he did not believe in them. At one point he risked his career and reputation in order to organize a scientific conference on UFO sightings. I simply cannot imagine a similar attitude being sported by Neil deGrasse Tyson.
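
Hume’s maxim has a natural formal reading in Bayesian terms (my gloss; neither Hume nor Sagan put it this way):

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{Bayes factor}}
\times
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
```

An extraordinary claim is one whose prior odds are tiny, so only an enormous Bayes factor – extraordinary evidence – can raise the posterior appreciably. Notice, though, that setting the prior to exactly zero, the debunker’s a priori stance, guarantees that no evidence whatsoever can move you. That is the difference between Hume’s skepticism and mere dismissal.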


For another thing, “debunking” strongly conveys the impression that one thinks that the people who believe in the notion to be debunked are simpletons barely worth consideration. Perhaps some are, but I’ve met plenty of really smart creationists, for instance, a notion that would sound to Tyson like the quintessential oxymoron. Which brings me to his second malady (one, again, from which I have suffered myself, and that I’m trying really hard to overcome): intellectual snobbism. People like Tyson (or, say, Richard Dawkins) exude the attitude at every turn, as on display in the short Colbert video that got me started with this post. The problem (other than that it’s simply not nice) is that snobbism isn’t going to get you converts. It only plays well with your own faithful crowd.


This is because of something that Aristotle realized 23 centuries ago, and which he explained at great length in his book on rhetoric. Presumably, Neil, Dawkins, and others want the same thing that Sagan, Stephen Gould (another one of my role models), and I want: to engage a broader public on the nature of science, and to widen the appreciation and practice of critical thinking. But Aristotle realized that this goal requires the deployment of three concepts: Logos, Ethos, and Pathos.


Logos refers to the idea that our first priority should be to get our facts and our reasoning right. In the case of Neil’s “debunking” of Spider-Man, yeah, he got the biological facts straight, though that isn’t going to do anyone any good.


Ethos means character: you need to establish your credentials with your audience. And by credentials Aristotle didn’t mean the fact that you have a PhD (Tyson has one, from Columbia University), but that you are a good, trustworthy person. I can’t comment on the degree to which Neil fits this description, because I don’t know him well enough; but he certainly comes across as condescending in this video and on many other occasions, a character trait that Aristotle would not have approved of. (One more time: I have been guilty of the same before, and I’ve been actively working on improving the situation.)


Pathos refers to the establishment of an emotional connection with your audience. This is something that scientists are actively trained not to do, under the mistaken impression that emotional connection is the same thing as emotional manipulation. But this is the case only if the agent is unscrupulous and manipulative, not if he’s acting as a genuine human being. We humans need emotional connections, without which we are prone to distrust whoever is talking to us. In the video Tyson makes absolutely no effort to connect with his audience. Indeed, it isn’t even clear who his audience is, exactly (certainly not fans of Spider-Man!), and therefore what the point of the whole exercise actually was.


So, by all means let us nurture good science communicators, which Neil deGrasse Tyson most certainly is. We do need them. But they really ought to read a bit of Aristotle (oh no, philosophy!), and also relax about the questionable science of movies like Avatar or comic books like Spider-Man.


Speaking of which, let me leave you with the delightfully corny original animated series soundtrack. Try to enjoy it without feeling the urge to “debunk” it, okay?

Darwinism in the modern era: more on the evolution of evolutionary theory – part II

The many conceptual and empirical advances in evolutionary biology during the second half of the twentieth century that I briefly sketched in part I of this essay naturally led to a broader theoretical turmoil. More and more people felt that the Modern Synthesis (MS) was becoming too restrictive a view of evolution to keep playing the role of biology’s “standard model.” This group included Carl Schlichting and myself, Mary Jane West-Eberhard (2003), Eva Jablonka, and others. But arguably none made a more concerted, if partial, effort than Stephen Jay Gould in his magnum opus, The Structure of Evolutionary Theory, published in 2002.


The Structure comprises two parts, the first tracing the history of evolutionary ideas, both pre- and post-Darwin, and the second presenting Gould’s view of contemporary theoretical debates within the field. While the constructive part of the book focuses too much on paleontology and multilevel selection, Gould correctly identified three conceptual pillars of Darwinism that got imported wholesale into the Modern Synthesis:

1. Agency: the locus of action of natural selection. For Darwin, this was the individual organism, while within the MS the focus expanded to include the gene, thus increasing the overall range of agency. Gould advocated further expansion, to include multiple levels of selection, from the gene to the individual to kin groups to species. This suggestion is perfectly in line with that of other authors advocating an Extended Evolutionary Synthesis (EES).


2. Efficacy: the causal power of natural selection relative to other evolutionary mechanisms. According to Darwin, natural selection is the chief mechanism of evolutionary change, and certainly the only one capable of producing adaptation. The MS formally described—by means of population genetic theory—four additional mechanisms: mutation, recombination, migration, and genetic drift (a minimal simulation of these classical mechanisms appears after this list). Gould adds a positive role for developmental constraints to the picture, and advocates of the EES further expand on this theme, including concepts such as those of evolvability (i.e., change over time of evolutionary mechanisms themselves), facilitated variation (from developmental biology), and niche construction (from ecology), among others.


3. Scope: the degree to which natural selection can be extrapolated from micro- to macro-evolutionary outcomes. As we saw last time, this was controversial early on, with the MS settling for the same basic picture proposed by Darwin: so-called macro-evolutionary processes are simply micro-evolutionary ones writ large. Gould, of course, questions this, on the basis of the already discussed theory of punctuated equilibria. Proponents of the EES also doubt the received view, suggesting that species selection and group-level ecological characteristics may partially, though not entirely, decouple micro- from macro-evolution.
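
As promised above, here is a minimal single-locus sketch of the classical MS toolkit in action: selection, mutation, migration, and drift acting on one allele frequency. All parameter values are illustrative, chosen only to make the dynamics visible.

```python
# Minimal one-locus Wright-Fisher sketch of the Modern Synthesis mechanisms:
# selection, mutation, migration, and genetic drift acting on the frequency
# p of allele A. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

N = 500          # diploid population size (drift strength scales as 1/2N)
s = 0.02         # selection coefficient favoring allele A
mu = 1e-4        # symmetric mutation rate between A and a
m = 0.01         # migration rate from a source population fixed for a
GENERATIONS = 300

p = 0.05  # initial frequency of allele A
for _ in range(GENERATIONS):
    p = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection
    p = p * (1 - mu) + (1 - p) * mu             # mutation
    p = (1 - m) * p                             # migration (immigrants carry a)
    p = rng.binomial(2 * N, p) / (2 * N)        # drift: sample 2N gametes

print(f"frequency of A after {GENERATIONS} generations: {p:.2f}")
```

Selection pushes A up, migration pulls it down, and drift jitters the trajectory around their balance, which is precisely the interplay the population genetics of the MS formalized.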


If Gould’s general take is right, then, evolutionary theory has changed over time and the process can best be tracked conceptually by keeping tabs on changes in the agency, efficacy, and scope of natural selection within the theory. This, incidentally, makes natural selection the fundamental idea in biological evolution, and rightly so. No other concept, not even that of common descent, has had such a complex and convoluted history within the field. Moreover, what the EES is attempting to do can also be understood within Gould’s framework.


Now, as we have seen so far, the latter part of the twentieth century and the beginning of the twenty-first century have seen a renewed debate about the status of contemporary evolutionary theory, with a number of calls for an expansion of the Modern Synthesis into an Extended Evolutionary Synthesis. But what does the latter look like, at the current state of the discussion?


I provided an early sketch of it in a paper published in Evolution back in 2007 (available to Socratic level subscribers from my archives), and an updated and expanded version of that sketch has been put out by Laland and collaborators in 2015. My early analysis began by noting that philosopher Karl Popper famously interpreted the MS as a theory of genes, lacking a comparable theory of forms (i.e., phenotypes). The field got started, however, as a theory of forms in Darwin’s days, with genetics taking on a fundamental role only after the rediscovery of Mendel’s work at the turn of the twentieth century. Consequently, I suggested, a major goal that an EES aims for is an improvement and unification of our theories of genes and of forms. This, it seems to me, may best be achieved through an organic grafting of novel concepts onto the foundational structure of the MS, particularly evolvability, phenotypic plasticity (i.e., the ability of a single genotype to produce different phenotypes in response to environmental variation), epigenetic inheritance, complexity theory (from mathematics), and the theory of evolution in high-dimensional adaptive landscapes (from population genetics).


Laland et al.’s paper from 2015 is the most focused and systematic attempt to articulate the EES, explicitly aiming at clearing away inconsistencies in previous works. They begin with a comparison of core assumptions of the MS versus the EES. To give you an idea of what they are getting at, here are the entries for inheritance:


Genetic inheritance (MS): Genes constitute the only general inheritance system. Acquired characters are not inherited.


Inclusive inheritance (EES): Inheritance extends beyond genes to encompass (transgenerational) epigenetic inheritance, physiological inheritance, ecological inheritance, social (behavioural) transmission and cultural inheritance. Acquired characters can play evolutionary roles by biasing phenotypic variants subject to selection, modifying environments and contributing to heritability.


They then run through a series of alternative interpretations of important evolutionary phenomena according to the two frameworks. For instance, in the case of developmental plasticity:


MS: conceptualized as a genetically specified feature of individuals that can evolve under selection and drift. Focus is on the conditions that promote adaptive evolution of plastic versus non-plastic phenotypes. The primary evolutionary role of plasticity is to adjust phenotypes adaptively to variable environments. Plastic responses regarded as pre-filtered by past selection.


EES: considers reducing plasticity to a genetic feature to be explanatorily insufficient. Retains an interest in adaptive evolution of plasticity, but also focuses on how plasticity contributes to the origin of functional variation under genetic or environmental change, and how the mechanisms of plasticity limit or enhance evolvability, and initiate evolutionary responses. Many plastic responses viewed as reliant on open-ended (e.g., exploratory) developmental processes, and hence capable of introducing phenotypic novelty.


Moreover, Laland et al. provide readers with a comparison of different predictions originating from the competing frameworks. For instance, in the case of the relationship between genetic and phenotypic change:


MS: genetic change causes, and logically precedes, phenotypic change, in adaptive evolution.


EES: phenotypic accommodation (a non-genetic process) can precede, rather than follow, genetic change, in adaptive evolution.


Laland et al. also present a graphical outline of the structure of the Extended Evolutionary Synthesis, as they see it. It is instructive to comment on a number of features of their model. Phenotypic evolution—the target of explanation of the entire framework, just as it was for Darwin—is assumed to be affected by three classes of processes: those that generate novel variation, those that bias selection, and those that modify the frequency of heritable variation.


Beginning with the first class, these processes include classical ones like mutation, recombination, gene expression, and developmental regulatory processes, but also EES-specific ones like environmental induction (of developmental processes), niche construction, phenotypic accommodation, and facilitated variation. The second class (processes that bias selection) includes only EES-related entries: developmental bias and niche construction. The third class (processes that affect heritable variation) comprises only classical ones (mutation pressure, selection, drift, and gene flow), though these are in turn affected by the previous class.


The resulting picture is one of a complete and, it seems to me, highly coherent meshing of the MS and the EES perspectives, where the latter adds to but does not really replace any of the previously recognized mechanisms. Which brings me to the next question I wish to address concerning the most recent developments of the now more than 150-year-old Darwinian tradition: is the proposed shift from the MS to the EES akin to a Kuhnian paradigm shift?


One of the most controversial aspects of the discussion surrounding the MS versus EES debate is the extent to which the new framework is claimed to be distinct from the old one. At one extreme, there are scientists who simply reject the idea that the EES presents much that is new, claiming that whatever new concepts are being advanced were in fact already part of the MS, either implicitly or explicitly. At the opposite extreme, some supporters of the EES have been making statements to the effect that the new framework somehow amounts to a rejection of fundamental aspects of Darwinism, akin to what philosopher Thomas Kuhn famously termed a “paradigm shift” within the discipline, thus aligning themselves with a tradition that can be fairly characterized as anti-Darwinian. My own position has always been that the truth lies somewhere in the middle (in this case!): the EES is significantly different from the MS, and yet the change does not reflect any kind of scientific revolution within modern biology, but rather more of the same process that has led us from the original Darwinism to neo-Darwinism to the MS itself.


Kuhn famously argued—on the basis, crucially, of examples drawn exclusively from physics—that science goes through an alternation of two phases: during “normal” or “puzzle solving” science, practitioners are focused on addressing specific issues from within a given theoretical framework and set of methods (the “paradigm”), which itself is not the target of empirical testing or conceptual revision. From time to time, however, a sufficient number of “anomalies,” or unresolved puzzles, accumulate and precipitate a crisis within the field. At that point scientists look for a new paradigm, better suited to take into account the thus far unresolved issues. If they find it, the new framework is quickly adopted and deployed in turn to guide a new phase of normal science.


Kuhn suggested a number of approaches to tell whether a paradigm shift has occurred (or, in our case, is in the process of occurring). These include five criteria for theory comparison, as well as three classes of potential incommensurability between theories. Let’s begin by examining the five criteria: (1) accuracy, (2) consistency (internal and with other theories), (3) explanatory scope, (4) simplicity, and (5) fruitfulness of the accompanying research program. Here is how the MS and EES compare, in my mind, according to the Kuhnian criteria:


Accuracy, MS: building on the original Darwinism, it has produced quantitative accounts of the change over time of the genetic makeup of natural populations.


Accuracy, EES: incorporates the same methods and results of both the original Darwinism and the MS, adding the explanation of developmental and other self-organizing biological phenomena.


Consistency, MS: as internally consistent as any major scientific theory, features explicit external links to genetics, molecular biology, and ecology.


Consistency, EES: same degree of internal and external consistency as the MS, with the addition of external links to developmental biology, genomics, and complexity theory, among others.


Scope, MS: new facts about the biological world that are explained have been consistently uncovered for the past several decades.


Scope, EES: further expands the scope of the MS by explicitly including questions about the origin of evolutionary novelties, the generation of biological form, and the problem of genotype–phenotype mapping.


Simplicity, MS: uses a limited number of mechanisms (natural selection, genetic drift, mutation, migration, assortative mating) to account for evolutionary change over time.


Simplicity, EES: makes use of all the mechanisms of the MS, adding a number of others such as epigenetic inheritance, evolvability, facilitated (i.e., self-emergent) variation, etc.


Fruitfulness, MS: has a history of more than 70 years of vigorous research programs, building on the previous fruits of the original Darwinism.


Fruitfulness, EES: builds on the ongoing research program of the MS but has also already led to empirical (e.g., emergent properties of gene networks and of cell assemblages) and conceptual (e.g., evolvability, phenotypic plasticity) discoveries, though of course it is very much a work in progress as of the moment of this writing.


Even this brief survey ought to make it clear that the MS => EES transition is not a paradigm shift, but rather an organic expansion. Then there is the second test proposed by Kuhn to consider, a test in a sense more stringent: that of incommensurability. If two theories are incommensurable in even one of the three classes, a good argument can be made that a paradigm shift is occurring. The classes in question are methodological, observational, and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, as well as to the idea that scientists then develop distinct approaches to the solution of those puzzles. The EES takes on board the same puzzles, and the same set of approaches, of the MS, but it also adds new puzzles (such as the appearance of so-called evolutionary novelties, like eyes, feathers, spines, and so forth), which were largely untouched, or dealt with only superficially, by the MS. It further adds new approaches, like interpretations of evolutionary changes in terms of niche construction, developmental plasticity, or epigenetic inheritance.


Observational incommensurability is tightly linked to the idea that observations are theory dependent: what is considered a “fact” within one theoretical context may not be such in a different theoretical context. For instance, in pre-relativity physics there was a (supposed) fact of the matter that some kind of substance, referred to as ether, had to be present in space in order for light to travel through it. After the famous Michelson–Morley experiment demonstrating that there was no such thing as ether, the relevant fact became the constancy of the speed of light and therefore the relativity of frames of reference. Nothing like that seems to be happening in evolutionary biology at the moment: the very same facts that have been catalogued and explained by the MS enter into the empirical corpus of the EES, to be further expanded with new facts that come to the forefront because of the additional conceptual advancements.


Semantic incommensurability has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being that of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity. Again, I do not discern any analogous shift in the terminology used by proponents of the MS versus EES. Key biological concepts, such as species, genes, phenotypes, niche, and so forth, retain similar and perfectly commensurable meanings, even though our understanding of their referents becomes increasingly sharp.


It seems, therefore, that Darwinism after the Modern Synthesis has proceeded along similar lines to those followed by Darwinism before the MS: a continuous expansion of both empirical knowledge and conceptual understanding, an expansion that is likely to continue for the remainder of the current century and beyond.


This discussion is in part an opportunity to call for a bit of house cleaning, so to speak, on the part of evolutionary biologists and philosophers of science. For instance, it is truly astounding that in France the Modern Synthesis, and in particular population genetics, was not included in standardized university curricula, or addressed within main research programs until the 1970s. Against the Darwinian picture that was developing abroad, French life scientists supported various forms of Lamarckism throughout the twentieth century, and some of that attitude still lingers. There is no good scientific reason for that, and it is hard not to pin such an attitude on sheer nationalism and the cultural worship of Lamarck. Needless to say, that sort of thing has no place in a mature science. The French are not the only culprits here, and the fact that there are “German,” “Russian,” and other “traditions” within evolutionary biology is more than a little bizarre.


It’s also somewhat surprising that behavioral biologists are still clinging to simplistic notions from sociobiology and evolutionary psychology, which have long since been debunked. It’s not the basic idea that behaviors, and especially human behaviors, evolve by natural selection and other means that is problematic. The problem, rather, lies with some of the specific claims made, and methods used, by evolutionary psychologists.


It is also both surprising and problematic that some researchers are still pursuing non-“mechanistic” or non-“physicalist” research programs, whatever that means. Indeed, a major point of the EES is to help bring the focus back on the organism and even the ecosystem, and yet—as I just argued above—this does not require a wholly alternative synthesis at all.


Over time, Darwinism has advanced its own agenda by incorporating a variety of themes proposed by its critics, including “saltationism” (punctuated equilibrium) and “Lamarckism” (epigenetic inheritance, phenotypic plasticity, and niche construction). This is fine, so long as we keep in mind that the terms within scare quotes above are to be understood in a modern, radically updated sense, and not along the lines of what biologists were thinking decades or even centuries ago. It’s this inherent flexibility of Darwinism that has allowed people with views as divergent as Stephen Jay Gould and Richard Dawkins to (rightly) claim the Darwinian mantle.


This ability to incorporate critical ideas is neither just a rhetorical move nor somehow indicative of serious problems inherent in the Darwinian approach. In the end, the various Darwinian traditions in evolutionary biology are best understood as a wide-ranging family of conceptual and research approaches, always in dialectic dialogue with each other, always in a constructive tension that transcends the agendas and (sometimes strong) personalities of the many individual scientists who recognize themselves as intellectual descendants of Charles Darwin. More than a century and a half later, evolutionary theory keeps evolving.

Darwinism in the modern era: more on the evolution of evolutionary theory – part I

Scientific theories are always provisional accounts of how the world works, intrinsically incomplete, and expected to be replaced by better accounts as science progresses. The theory of evolution, colloquially referred to as “Darwinism,” is, of course, no exception. It began in 1858 with joint papers presented to the Linnaean Society by Charles Darwin and Alfred Russel Wallace and was formalized shortly thereafter in On the Origin of Species. The original theory featured two conceptual pillars: the idea of common descent (which was accepted by a number of scholars even before Darwin), and that of natural selection as the chief mechanism of evolution, and the only one capable of generating adaptation.


The first bit of tinkering took place shortly thereafter, when Wallace himself, together with August Weismann, proposed to drop any reference to Lamarckian theories of heredity because of the newly proposed notion of the separation between sexual and somatic cellular lines, thus generating what is properly known as neo-Darwinism. After the theory underwent a temporary crisis, as a result of increasing skepticism from paleontologists and developmental biologists, there followed the two phases of the so-called Modern Synthesis, the biological equivalent of the Standard Model in physics: the first phase consisted in the reconciliation between Mendelism (i.e., genetics) and Darwinism (i.e., the theory of natural selection), leading to the birth of population genetics; the second phase consisted in an expansion of the theory to include fields like natural history, population biology, paleontology, and botany.


What happened to “Darwinism” after 1950? The Modern Synthesis (MS) reigned as the dominant paradigm in the field, rather unchallenged until the late 1980s and early 1990s, at which point a number of authors, coming from a variety of disciplines, began to question not so much the foundations but the accepted structure of the MS. By the very late twentieth and early twenty-first century, calls to replace the MS with an Extended Evolutionary Synthesis (EES) had begun to grow loud, and to be countered by equally loud voices raised in defense of the MS. How did this happen, and what does it mean for the current status and future of evolutionary theory? To understand this we need to step back for a moment and take a broad view of conceptual developments in the biological sciences during the second half of the twentieth century.


The second half of the twentieth century has been an incredibly exciting time for biology, a period that has put the discipline on the map at least at the same level of interest as physics, the alleged queen of sciences, and arguably even more so. Let me remind you of some of the major developments that have made this possible, because they all—directly or indirectly—eventually fed into the current discussion over the MS versus the EES as dominant conceptual frameworks in evolutionary biology.


A major breakthrough in one of the foundational fields of the Modern Synthesis, population genetics, came with the invention of a technique called gel electrophoresis, which for the first time made it possible to directly assess protein and gene frequencies in large samples drawn from natural populations. While research on electrophoresis began as early as the 1930s, it was the breakthrough work of Richard Lewontin and John Hubby in 1966 that set population genetics on fire. The unexpected discovery was, as the authors put it, that “there is a considerable amount of genic variation segregating in all of the populations studied …[it is not] clear what balance of forces is responsible for the genetic variation observed, but [it is] clear the kind and amount of variation at the genic level that we need to explain.” This new problem posed by a much larger degree of genetic variation than expected in natural populations eventually led to a revolution in population genetics, and also directly to the origination of the impactful neutral theory of molecular evolution first proposed in 1968 by Motoo Kimura.


The neutral theory was a landmark conceptual development because for the first time since Darwin it challenged the primacy of natural selection as an agent of evolutionary change. To be sure, Kimura and colleagues didn’t think that phenotypic evolution (i.e., the evolution of complex traits, like eyes, hearts, etc.) occurred in a largely neutral fashion, but if much of what goes on at the molecular level is independent of selective processes, then the obvious question becomes how largely neutral molecular variation can give rise to non-neutral phenotypic outcomes. Eventually, the debate about the neutral theory—which raged on intensely for a number of years—was settled with a sensible and empirically consistent compromise: a lot of molecular variation is “near-neutral,” which means that the role of stochastic processes such as genetic drift at the molecular level is significantly higher than might have been expected on the basis of a face-value reading of the tenets of the Modern Synthesis.
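

To get a concrete feel for what “stochastic processes such as genetic drift” means here, consider a minimal simulation sketch, assuming a basic Wright-Fisher model (the model choice and all parameter values are my own illustration, not anything drawn from Kimura’s work): it tracks the frequency of a single allele in a finite population, with an optional selection coefficient s, where s = 0 is the strictly neutral case.

```python
import random

def wright_fisher(pop_size, generations, freq=0.5, s=0.0):
    """Track an allele's frequency under drift plus optional selection s.

    Each generation, the next pool of 2N gene copies is formed by sampling
    from the current one, with the focal allele weighted by (1 + s).
    With s = 0 the allele is strictly neutral and only drift operates.
    """
    trajectory = [freq]
    for _ in range(generations):
        # Selection-weighted probability of sampling the focal allele
        p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
        # Binomial sampling of 2N copies: the source of genetic drift
        count = sum(random.random() < p for _ in range(2 * pop_size))
        freq = count / (2 * pop_size)
        trajectory.append(freq)
        if freq in (0.0, 1.0):  # allele lost or fixed
            break
    return trajectory

# A neutral allele in a small population: chance alone decides its fate
print(wright_fisher(pop_size=100, generations=2000)[-1])
```

Run the neutral case repeatedly and the allele fixes or disappears purely by chance, at rates that depend only on population size; give s even a small positive value and selection begins to dominate. That interplay between chance and selection is what the “near-neutral” compromise is about.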


What could possibly connect the near-neutral molecular level with the obviously functional and therefore likely selected phenotypic level? The obvious answer was: development. The only problem was that developmental biology had famously been left out of the Modern Synthesis. It looked like something was seriously amiss with modern evolutionary theory.


Things began to change as an offshoot of yet another revolution in biology: the rapid advances made in molecular biology after the discovery of the structure of DNA in 1953. While molecular biology kept accelerating its pace independently of organismal biology for several decades—until their confluence in the era of evolutionary genomics—in the late 1970s the existence of homeotic genes regulating embryonic patterns of development in Drosophila was discovered. It soon turned out that this and similar classes of regulatory genes are both widespread and evolutionarily conserved (i.e., they don’t change much over time), so that they are one of the major keys to the understanding of the complex interplay among genotype, development, and phenotype.


This new approach eventually flourished into a new field, known as evolutionary developmental biology, or evo-devo for short, and one of its major contributions so far has been a marked shift of emphasis in the study of morphology and development, from the sort of classical population genetic studies focused on structural genes to an emphasis on regulatory genes and their potential to help us build a credible theory of the origin of evolutionary novelties (i.e., new structures like wings or flowers). As Prud’homme and colleagues put it in 2007:


Because most animals share a conserved repertoire of body-building and -patterning genes, morphological diversity appears to evolve primarily through changes in the deployment of these genes during development. … Morphological evolution relies predominantly on changes in the architecture of gene regulatory networks and in particular on functional changes within [individual] regulatory elements. … Regulatory evolution: (i) uses available genetic components in the form of preexisting and active transcription factors and regulatory elements to generate novelty; (ii) minimizes the penalty to overall fitness by introducing discrete changes in gene expression; and (iii) allows interactions to arise among any transcription factor and [regulatory genes].


The picture that emerges from this and many other studies is not incompatible with the simple mathematical models that were incorporated into the Modern Synthesis, but it does present us with a much more complex and nuanced understanding of genetic, developmental, and phenotypic evolution, so much so that it is little wonder that people have been increasingly referring to the current, very much in flux, version of evolutionary theory as the Extended Synthesis.


I have already mentioned the molecular biology revolution initiated in the 1950s, which eventually led to the genomic revolution. Both these radical developments initially affected evolutionary biology only indirectly, by providing increasingly powerful new analytical tools, such as gel electrophoresis, and later on gene sequencing. But inevitably genomics itself became an evolutionary science, once technical developments made it possible to sequence entire genomes more quickly and cheaply, and molecular biologists fully internalized, as Theodosius Dobzhansky famously put it, that nothing in biology makes sense except in the light of evolution. The structure and function, as well as the sheer diversity, of genomes are themselves not understandable if not through evolutionary lenses, so that genomics and evolutionary biology currently represent a rare example of synergism between scientific disciplines: the first provides tools for the second to advance, while the second allows for a theoretical understanding of the data that the first accumulates at such a heady pace.


While of course other disciplines within biology have made progress during the second part of the twentieth century—ecology, for instance—the next bit of this panoramic view I wish to briefly comment on concerns yet another area of inquiry that had played only a secondary role during the Modern Synthesis: paleontology. The field had always been a thorn in the side of Darwinism, since many paleontologists early on had rejected the Darwinian insight, proposing instead the idea that macro-evolutionary change was qualitatively distinct from the sort of micro-evolution that Darwin famously modeled on the basis of plant and animal breeding (and of course, notoriously, creationists have always made a big deal of the distinction between micro- and macro-evolution, often without understanding it). Indeed, it was this very rejection, together with the apparent incompatibility of Mendelism and Darwinism, that led to the above mentioned period of “eclipse” of the Darwinian theory at the turn of the twentieth century.


Paleontology’s early alternative to Darwinism took the shape of orthogenetic theory (according to which organisms change in the same direction over millions of years), which in turn was essentially a scaled-up version of Lamarckism, since it postulated an inner vital force responsible for long-term evolutionary trends, which many paleontologists saw as otherwise inexplicable within the Darwinian framework. It was George Gaylord Simpson’s magisterial role within the Modern Synthesis that cleared away any remnants of orthogenesis from paleontology, doing for that field what Fisher, Haldane, and Sewall Wright had done for Mendelian genetics: he convincingly argued that the sort of so-called “micro”-evolutionary processes accounted for by Darwinism could be extrapolated to geological timescales, thus yielding the appearance of macro-evolutionary changes of a qualitatively different nature. In reality, Simpson argued, the latter is simply a scaled-up version of the former.


Simpson, however, was arguably too successful, essentially making paleontology a second-rate handmaiden to population genetics while overlooking the potential for its original contributions—theoretical as well as empirical—to the overall structure of evolutionary theory. Eventually, Simpson’s “conservatism,” so to speak, led to a backlash: Niles Eldredge and Stephen Jay Gould, the enfants terribles of modern paleontology, published in 1972 a landmark paper proposing the theory of punctuated equilibria, according to which evolution, when seen at the macroscopic scale, works by fits and starts: long periods of stasis during which not much appears to be happening in a given lineage, interrupted by sudden “bursts” of phenotypic change. The theory was immediately misunderstood by many population geneticists, who thought that Eldredge and Gould were attempting to revive an old notion known as “hopeful monsters,” i.e., of instantaneous evolutionary change resulting from genome-wide restructuring.


To be fair, at some point Gould’s own anti-establishment rhetoric, and the fact that creationists often mentioned him in their support, contributed to the confusion. But in fact, the sort of punctuations that Eldredge and Gould saw in the fossil record take place over tens of thousands of generations, thus leaving plenty of time for standard Darwinian processes to do their work. As they pointed out later on in the debate, the real novel issue is that of prolonged stasis, over millions of years, not the allegedly (but not really) “instantaneous” change. A major class of explanations proposed especially by Gould for this observed stasis had to do with developmental processes and constraints, which nicely connects the new paleontology with the emerging field of evo-devo mentioned above, making both of them into pillars of the ensuing Extended Synthesis in evolutionary biology.
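

To see why a geologically “sudden” burst poses no problem for ordinary Darwinian processes, a bit of back-of-the-envelope arithmetic helps (the figures below are illustrative assumptions of mine, not numbers taken from Eldredge and Gould):

```python
# A punctuation event that looks instantaneous in the rock record,
# expressed in generations. Both values are illustrative assumptions.
duration_years = 50_000   # a "burst" as resolved in typical fossil strata
generation_time = 5       # years per generation, plausible for many vertebrates

generations = duration_years // generation_time
print(f"{generations:,} generations")  # -> 10,000 generations
```

Ten thousand generations is an eyeblink to a stratigrapher but an enormous span of time for selection and drift, which is exactly the point Eldredge and Gould kept making against the “hopeful monsters” misreading.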


(next time: the Stephen Jay Gould conceptual revolution and the birth of the Extended Evolutionary Synthesis)

The impossible conundrum: science as a (perennial?) candle in the dark

(left: Carl Sagan; right: Richard Lewontin)


When I was a kid I wanted to be an astronomer. One of my role models was Carl Sagan, the charming original host of the television series Cosmos and author of countless books on astronomy and the nature of science. Later on I decided that biology was really my calling, and my entire career was the result of reading a single, incredibly powerful paper: The analysis of variance and the analysis of causes, by Richard Lewontin. I never had the pleasure of meeting Sagan, but I did have an hour-long chat with Lewontin when I was a graduate student at the University of Connecticut and he was visiting our lab. It was one of the highlights of my life.


Both Sagan and Lewontin had far more impact on me than just their science. Sagan made me sensitive to the importance of communicating with a broader public, to share the wonders of the scientific worldview, as well as to fight the irrationality of pseudoscience. Lewontin made me sensitive to the ideological underpinnings of science and even science popularizing, and therefore, ironically, somewhat skeptical of Sagan’s own approach.


Recently, one of my readers suggested that I take a fresh look at a classic within this context: Lewontin’s review of one of Sagan’s best known books, and one that has influenced me for two decades: The Demon-Haunted World, subtitled Science as a Candle in the Dark. The review, entitled Billions and Billions of Demons (a playful, perhaps somewhat sarcastic, take on Sagan’s famous tagline about a universe with billions and billions of stars) is well worth pondering again today.


Lewontin opens with a recounting of when he met Sagan for the first time, on the occasion of a public debate about creationism vs evolution in Little Rock, Arkansas, in 1964. The experience was formative for both, but they came away from it with radically different messages:


“Sagan and I drew different conclusions from our experience. For me the confrontation between creationism and the science of evolution was an example of historical, regional, and class differences in culture that could only be understood in the context of American social history. For Carl it was a struggle between ignorance and knowledge.”


I can sympathize. When, in 1997, I first debated a creationist, Duane Gish of the Institute for Creation Research (no kidding), I was squarely looking at things through Sagan’s filter: obviously creation “science” is no such thing; obviously evolutionary theory is solid science; and obviously anyone disagreeing with these two propositions is a hillbilly ignoramus. More than two decades after that debate I think that position was incredibly naive, and I find myself far closer to Lewontin’s, though not entirely on board just yet.


As Lewontin aptly puts it:


“The primary problem is not to provide the public with the knowledge of how far it is to the nearest star and what genes are made of, for that vast project is, in its entirety, hopeless. Rather, the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth. The reason that people do not have a correct view of nature is not that they are ignorant of this or that fact about the material world, but that they look to the wrong sources in their attempt to understand.”


In other words, and contra Sagan, it isn’t a question of educating people about facts, it’s a question of convincing them to trust the better authority. Think of it this way. You probably “know” that atomic nuclei are made of quarks, right? But do you? Really? Unless you are a physicist, or at any rate someone whose grasp of physics is far better than average, you don’t actually know how science arrived at this basic fact about the structure of the world. Instead, you are simply repeating a statement that you read in a book or heard from a prominent physicist, or your college physics professor. You don’t know. You trust.


That’s why rejection of evolution in favor of creationism — while wrong (I actually know this, I’m a biologist) — is not irrational. It simply means that many people in the United States would rather trust their preachers, who they think speak on behalf of God, than Profs. Sagan, Lewontin, or Pigliucci. That’s also why Lewontin, correctly, says that to understand why creationism is such an issue in the US of A but not in pretty much any other Western country (though it is very much an issue in a lot of Islamic countries), we should look not at the quality of science education but at the specific cultural history of the United States versus that of European countries.


Sagan did not get it. Here is Lewontin again:


“The only explanation that [Sagan] offers for the dogged resistance of the masses to the obvious virtues of the scientific way of knowing is that ‘through indifference, inattention, incompetence, or fear of skepticism, we discourage children from science.’ He does not tell us how he used the scientific method to discover the ‘embedded’ human proclivity for science, or the cause of its frustration. Perhaps we ought to add to the menu of Saganic demonology, just after spoon-bending, ten-second seat-of-the-pants explanations of social realities.”


You hear similar ex cathedra pronouncements from the contemporary heirs of Sagan’s approach, for instance Neil deGrasse Tyson (who has taken over the helm of the new Cosmos series). Their analysis of the hows and whys of widespread beliefs in parapsychology, UFOs, astrology and so forth is just as unempirical and “seat-of-the-pants” as Sagan’s. One would expect better from people who loudly insist on the absolute necessity of systematic empirical data before making any pronouncement.


Lewontin then proceeds to chastise another common Sagan-Tyson-et-al argument in defense of science: that it “delivers the goods.” Well, yes, sometimes. At times, though, those “goods” are anything but (atomic weapons, biological weapons, Facebook), and in other cases there is no delivery at all (the “war on cancer,” or the over-hyped promises of the human genome project). Meanwhile billions and billions — of dollars — are spent at taxpayers’ expense. Referring to the repeated promises of scientists to deliver cures for diseases if they were only given money to sequence the genes associated with them, followed by inevitable failure since a DNA sequence by itself doesn’t provide a cure for anything, Lewontin writes:


“Scientists apparently do not realize that the repeated promises of benefits yet to come, with no likelihood that those promises will be fulfilled, can only produce a widespread cynicism about the claims for the scientific method. Sagan, trying to explain the success of Carlos, a telepathic charlatan, muses on ‘how little it takes to tamper with our beliefs, how readily we are led, how easy it is to fool the public when people are lonely and starved for something to believe in.’
Not to mention when they are sick and dying.”


Ouch, but on the mark. And there is more where that came from:


“Sagan’s suggestion that only demonologists engage in ‘special pleading, often to rescue a proposition in deep rhetorical trouble,’ is certainly not one that accords with my reading of the scientific literature. … As to assertions without adequate evidence, the literature of science is filled with them, especially the literature of popular science writing.”


I must say that my own experience as a scientist first, and now as a philosopher of science, is far more in sync with Lewontin’s cynicism than with Sagan’s optimism.


And here is another gem from the review:


“When, at the time of the moon landing, a woman in rural Texas was interviewed about the event, she very sensibly refused to believe that the television pictures she had seen had come all the way from the moon, on the grounds that with her antenna she couldn’t even get Dallas. What seems absurd depends on one’s prejudice. Carl Sagan accepts, as I do, the duality of light, which is at the same time wave and particle, but he thinks that the consubstantiality of Father, Son, and Holy Ghost puts the mystery of the Holy Trinity ‘in deep trouble.’ Two’s company, but three’s a crowd.”


Just in case your blood is boiling and you begin to think Lewontin to be a postmodern deconstructionist, think again (and try to breathe deeply). He is an atheist, and he certainly does believe that we landed on the moon. His point is about cautioning scientists and science popularizers against dismissing others on the ground that their views are “obviously” irrational. Rationality is a great tool, but its deployment depends on one’s axioms or, as Lewontin puts it, one’s prejudices.


Here is where I partially, but only partially, part company with Lewontin:


“We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism.”


Well, yes, sort of. I would say that materialism itself is a philosophical position that many have arrived at because it is the one that makes the most sense of the world as we understand it. But wait, isn’t our understanding of the world based on the assumption of materialism? In a sense, but I think it is a mistake to see one as definitely preceding the other. Materialism and science co-evolved for centuries, and there was plenty of time when many prominent scientists were definitely not materialists, or at least not thoroughgoing materialists — from Newton to Alfred Wallace (the co-discoverer of natural selection). But the more the metaphysical leanings of natural philosophers (as scientists were once called) approached full-fledged materialism, the more their science became successful at explaining and manipulating the world. This is, in a sense, a beautiful, centuries-long example of why one’s metaphysics should never be far from one’s epistemology (as it is, by contrast, with religion). The problem is that it’s really hard to imagine how to trigger that same sort of shift in a general public that hardly thinks either philosophically or scientifically. And no, more courses along the lines of Biology or Physics 101 ain’t gonna do it.


Lewontin, again, is far more perceptive than Sagan:


“The struggle for possession of public consciousness between material and mystical explanations of the world is one aspect of the history of the confrontation between elite culture and popular culture. … Evolution, for example, was not part of the regular biology curriculum when I was a student in 1946 in the New York City high schools, nor was it discussed in school textbooks. In consequence there was no organized creationist movement. Then, in the late 1950s, a national project was begun to bring school science curricula up to date. … The elite culture was now extending its domination by attacking the control that families had maintained over the ideological formation of their children. The result was a fundamentalist revolt, the invention of ‘Creation Science,’ and successful popular pressure on local school boards and state textbook purchasing agencies to revise subversive curricula and boycott blasphemous textbooks.”


Lewontin is absolutely right here. But the problem is, and he would be the first one to admit it, that there is no solution in sight. Are we supposed not to teach one of the most important scientific theories of all time because teaching it is going to be taken as yet another affront perpetrated on the working class by the moneyed elite? I doubt it. But the only other path I can see just ain’t gonna happen: establish a society where there is no such thing as the moneyed elite, where everyone has access to free education, and where consequently a lot of the cultural and economic factors that Lewontin correctly pinpoints will be erased or at least greatly diminished. I’m not holding my breath, are you?


The review concludes with a quote from the Gorgias, one of Plato’s dialogues (which Sagan would have appreciated, though I’m pretty confident that a lot of contemporary science popularizers have no idea why anyone would quote a philosopher who’s been dead more than two millennia. After all, isn’t philosophy useless?). Gorgias, a sophist, and Socrates are debating the relative virtues of rhetoric and technical expertise in public life. We are meant, of course, to sympathize with Socrates, but see if you can appreciate Gorgias’ point, in light of the preceding discussion:


Gorgias: “I mean [by the art of rhetoric] the ability to convince by means of speech a jury in a court of justice, members of the Council in their Chamber, voters at a meeting of the Assembly, and any other gathering of citizens, whatever it may be.”


Socrates: “When the citizens hold a meeting to appoint medical officers or shipbuilders or any other professional class of person, surely it won’t be the orator who advises them then. Obviously in every such election the choice ought to fall on the most expert.”


Obviously it ought, but equally obviously it doesn’t. And that, two and a half millennia later, is still the problem, and the reason why we are in the mess we are in.

No, science does not provide all the answers to the big questions

From time to time a famous scientist allows himself (in my experience it’s always a man) to write nonchalantly about something of which he demonstrably has only a superficial grasp: philosophy. The list of offenders is a long one, and it includes Lawrence Krauss, Neil deGrasse Tyson, and Stephen Hawking, among several others. (Fortunately, there are also exceptions, scientists who value a constructive intercourse with the humanities, like Sean Carroll.) The latest entry in this dubious pantheon is Peter Atkins, who recently published a sloppy essay in the otherwise excellent Aeon magazine entitled “Why it’s only science that can answer all the big questions.” Oh boy.


Atkins begins by telling us that there are two fundamental kinds of “big questions”:


“One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. … Most questions of this class are a waste of time; and because they are not open to rational discourse, at worst they are resolved only by resort to the sword, the bomb or the flame. … The second class of big questions concerns features of the Universe for which there is evidence other than wish-fulfilling speculation and the stimulation provided by the study of sacred texts. … These are all real big questions and, in my view, are open to scientific elucidation.”


This is not news at all, of course. David Hume — one of my favorite philosophers — made essentially the same argument back in the 18th century, in his case rejecting what he saw as the waste of time associated with the Scholastic metaphysics that had prevailed throughout the Middle Ages:


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” (An Enquiry Concerning Human Understanding)


With all due respect to Hume, it’s a good thing people didn’t follow his advice, or we would have lost his very own Enquiry Concerning Human Understanding, since that book doesn’t contain any abstract reasoning concerning quantity or number, nor does it contain any experimental reasoning concerning matter of fact. And yet, it is — justly — considered to be one of the most important works of modern philosophy.


Atkins apparently realizes that he may come across as a bit too facile, since he acknowledges that he is defining the big questions precisely as those that science can answer, then turning around to “demonstrate” that science is the only discipline equipped to answer such questions. As he drily puts it when considering the obvious charge of circularity: “that might be so.” Which doesn’t stop him from proceeding as if it were not so.


Atkins tells us that science is getting ready to tackle what he considers the next three big questions: How did the Universe begin? How did matter in the Universe become alive? and How did living matter become self-conscious?


I have no doubt, as a scientist, that those are, indeed, scientific questions. I’m slightly more skeptical, as a philosopher, that science will actually be able to come up with answers. Fundamental physics, after more than a century of uninterrupted success, seems to have entered a period of navel gazing where speculation (admittedly mathematically informed speculation) is poised to replace empirical evidence. So we shall see if and when we’ll actually get a “theory of everything,” and whether that theory will in fact be able to tell us how the universe began from “nothing” (there is some doubt that it will).


Regarding the second question, the origin of life, theories have been piling up for several centuries now, and yet we don’t seem to be particularly close to a resolution just yet. I’m certainly not arguing that it isn’t possible, but it’s a very, very difficult problem, for the simple reason that a lot of the historical traces have been lost. No geological strata survive from the time when the primordial earth was home to the first living organisms, meaning that researchers on the origin of life are like detectives who already know the smoking gun isn’t going to be found. At best, they’ll have to rely on circumstantial evidence. Even should we be able to produce life artificially in the laboratory, that would not solve the problem, since it wouldn’t mean that life on our planet actually followed anything like that particular causal path.


As for consciousness, I remain convinced that the problem is indeed biological in nature, and that therefore developmental, evolutionary, and neuro-biology are the disciplines best positioned to find a solution. But at the moment nobody seems to have much of a clue, and common talk of the brain being a computer is finally beginning to be understood as the shaky and very likely misleading analogy that it is.


So, yes, if any of those three big questions are going to be answered, the answer will be a scientific one. But what about other questions that are arguably just as big (or, for most of us, even bigger)? Here Atkins shifts into full scientistic mode:


“I see no reason why the scientific method cannot be used to answer, or at least illuminate, Socrates’ question ‘How should we live?’ by appealing to those currently semi-sciences (the social sciences) including anthropology, ethology, psychology and economics.”


Please notice a number of interesting and revealing things about this sentence. First, Atkins is making the time-honored argument from personal incredulity: “I see no reason why…” Which, of course, is not an argument at all, but an elementary logical fallacy. Second, he is seriously hedging his bets when he immediately qualifies his initial statement: “or at least illuminate…” Ah, well, but philosophers since the Pre-Socratics have understood that empirical evidence (i.e., “science”) can illuminate philosophical questions. However, that’s a far more modest claim than the notion that science can actually answer those questions. Third, Atkins can’t help but deliver a contemptuous dig at the “semi-sciences.” This attitude, common among physicists, reflects a naive understanding of the philosophy of science, according to which physics is the (self-professed) “queen” of the sciences, and every other field will achieve full scientific status only when it finally evolves into something that looks like physics. But an increasingly common view in philosophy is that there actually is a fundamental disunity of science, that “science” is only a loosely defined family resemblance term, reflecting the fact that each science has its own goals, methods, and internal standards, and that there is no universal yardstick to be appealed to in order to make comparative judgments of quality.


Going back to philosophy, the question of “how should I live?” admits of a large number of reasonable (and a lot of unreasonable!) answers, given the very same facts about the universe and human nature. It isn’t so much a question to be answered, as to be explored and clarified. Indeed, this is arguably what most fundamentally distinguishes science from philosophy.


One of my recent morning meditations is pertinent here. It begins with a quote by the Stoic philosopher Epictetus, who says in Discourses II, 11.13:


“Here you have philosophy’s starting point: we find that people cannot agree among themselves, and we go in search of the source of their disagreement.”


As I argue in the podcast episode, there are two broad sources of disagreement among human beings: factual and conceptual. If you and I disagree about, say, the number of moons orbiting around the planet Saturn, one of us is wrong, possibly both. There is a matter of fact about the issue, and we can find out the answer by asking an astronomer. Or more simply by doing a web search. If disagreement remains after that, then one of us is more than a bit obtuse.


The second kind of disagreement concerns how to think about facts, actions, and values. Here the facts are relevant, but insufficient to settle the dispute. Let’s say we have different opinions about the permissibility of assisted suicide. Certain empirical facts are going to be pertinent to the discussion, like information about how the procedure is going to be implemented, what safeguards there may be to avoid abuses, and so forth. But even if we agree on the facts, we may still disagree on the crucial issue: is assisted suicide morally permissible?


That’s the difference between science and philosophy, and why Epictetus says that philosophy begins with the search for why people disagree on things. Notoriously, philosophy does not necessarily settle such disagreements. The joke in philosophy departments is that our profession’s slogan is: “Philosophy: we have all the questions!” But what philosophy does, by means of careful analysis and reasoned argument, is to help us clarify why, exactly, we disagree. That is of huge help to people of good will who wish to honestly pursue discussions in search of better ways to conduct their lives. Atkins may want to take notice.

Is philosophy a profession? (Yes, it’s a serious question)

You would think that the question in the title of this essay is one of those silly questions that only a philosopher would pose. And yet, a few months ago I unwittingly caused a Twitterstorm when I suggested that philosophy is, indeed, a profession, and that it comes with credentials (in the form of an awarded PhD, job titles and so forth) and even (gasp!) expertise.


I will start by presenting my arguments for why philosophy is indeed a profession that marks a certain kind of expertise; then we’ll talk about why this matters; and finally we’ll explore why, I think, so many people got positively upset at the mere suggestion that there can be professional philosophers, and even more so that they deserve a bit of respect when they talk about their own subject matter. I will also address some common objections to the idea of professional philosophy, as they were put to me during said Twitterstorm.


Is philosophy a profession?


Modern philosophy, meaning — approximately — philosophy as it has been practiced since the 20th century, is a profession in the same sense that, say, psychology or dentistry are professions. If you want to become a psychologist, or a dentist, you go to specialized schools, you take specific courses, you demonstrate your ability as a practitioner, and you get awarded a certificate that says that yup, you are indeed a psychologist, dentist, or philosopher. You then look for a job in your chosen profession, and if you are capable and lucky you land one. You then practice said profession, drawing a salary or other form of income. And eventually you cease practicing in order to enjoy a more or less well-deserved retirement.


Typically, in order to become a professional philosopher one needs an undergraduate degree in that field (in the United States, four years) and a PhD from an accredited university (4-6 years on average, but it can be more). The PhD requires taking advanced courses (in my case, for instance, on Plato, ethics, Descartes, Kant, and a number of others), and the writing of a dissertation that must be of publication quality and advance the field by way of proposing original ideas (here is mine). After this, a young philosopher may find temporary employment as a postdoctoral associate, or as a lecturer, and eventually, maybe, land a tenure-track position (the whole institution of tenure has been under relentless attack by conservative political forces, but that’s another discussion). If you do get such a position, you then have six years to prove to your colleagues that you are worth retaining and being promoted from assistant to associate professor, a promotion that comes with some benefits (beginning with tenure itself) and usually a very modest increase in salary. If you are good, a number of years later (usually around five) you get another promotion, to full professor, which comes with few additional benefits (except that now you can serve on more university committees!) and with an equally modest increase in salary.


What I have just described, of course, is the academic path. It used to be pretty much the only game in town, but now the American Philosophical Association has a whole booklet on career paths beyond academia, if you are so inclined. Nevertheless, the academy is still where you will find most professional philosophers these days.


So, since becoming a philosopher requires studying and getting a degree, and is often associated with belonging to a professional society and getting a regular salary from an employer (usually a university), it seems pretty obvious that philosophy is, indeed, a profession as succinctly defined by Merriam-Webster: “a type of job that requires special education, training, or skill.”


Why does this matter?


Why did I bother engaging in the above elucidation of the obvious? Because ever since I switched my own career from that of a scientist (evolutionary biology) to that of a philosopher, I noticed an incredible amount of hostility and dismissal toward philosophy, including — unbelievably — by some philosophers!


I think it is important to correct public misperceptions of philosophy in particular, and of the humanities in general, not because these disciplines are difficult to practice and therefore deserving of respect, but because they are vital to the functioning of an open society. Far too often these days we hear administrators and politicians (usually, but not only, conservatives) saying that a college degree should prepare students to find well-paying jobs. That is simply not the case. That definition applies to trade schools, not universities. Yes, of course you want to find a well-paying job, especially given the insane amount of money you will have to shell out for the privilege of a higher education in the increasingly unequal United States of America (and elsewhere). But the point of a liberal arts education (as it used to be called before “liberal” somehow became a dirty word) is first and foremost to help create mature adults and responsible citizens. You know, the sort of people who can think for themselves about what to do with their lives, instead of being brainwashed by corporate ads. Or the sort of people who believe that voting is both a right and a privilege, and who exercise such right/privilege by doing their homework on different candidates, instead of falling for blatant propaganda and conspiracy theories. That, and not to create an obedient army of drones for the corporate world and an increasingly illiberal government, is what education is for. No wonder so many in power have tried so hard to undermine that mission.


And make no mistake about it, that mission requires a substantial involvement in the humanities, not just the STEM fields. Everyone these days claims to be teaching “critical thinking,” but trust me, you ain’t gonna learn that in a biology class, or in chemistry, or in engineering. You will learn all sorts of interesting things in those classes, some of which may even be useful for getting you a job. But you won’t acquire the sort of ability at critical analysis and writing that philosophy will give you. You will also not be able to familiarize yourself with art, literature and music, some of the main reasons why human life is so interesting and varied. And you will not learn about the stupid things we have repeatedly done in the course of history — which is just as well from the point of view of politicians who prefer to keep selling you propaganda according to which you live (of course!) in the best nation that has ever blessed planet earth, handpicked by God himself to be a shining light for the rest of the world. You see, if you read Plato and Shakespeare and Haruki Murakami, or learn about the American bombing of Dresden at the end of WWII, or appreciate just how and why inequality, racism, and sexism are still pervasive in the 21st century, you might start questioning what the hell is going on and how to change it. As one of my favorite comedians, George Carlin, once put it: “it’s called the American dream because you must be asleep to believe it.” Philosophy, and the rest of the humanities, are a major way for you to wake up.


Why do people have a problem?


Once more, I would not have thought that any of the above was controversial. But it was! I got a surprising amount of pushback on social media. Okay, fine, it’s social media, where one gets pushback and worse for saying the most mundane things. But still. Studying those responses, it seems to me they fall into the following broad categories:


(i) People who believe that I’m telling them that only professional philosophers can think. What? No, and if you believe that’s the implicature of the above position, you may benefit from taking a philosophy class or two! Snarky comments aside (sorry, this sort of exercise is exhausting!), of course philosophers aren’t the only people who can think, or even think well. Nor does thinking require a license or accreditation of any sort. But the job description of the philosopher is not “thinker,” but rather thinker of a particular kind, using particular tools, applying them to particular subject matters. Similarly, a psychotherapist, say, isn’t just someone who talks to you about your problems. Your friend can do that over a beer at the local pub. But your friend is not professionally trained, is not aware of psychological theories of human behavior, and is not familiar with psychotherapeutic techniques. That’s why so many people pay professional therapists to talk about their problems, instead (or on top) of having a beer with their friends.


That is why it is bizarre that when someone disagrees with me on Twitter or Facebook they often say something along the lines of “you should be aware of logical fallacies,” or “you should study philosophy of science” (actual phrases, and please notice that I teach a course on — among other things — logical fallacies, have written technical papers on the topic, and my specialty is, you guessed it, philosophy of science). This isn’t to say that a professional is always right and an amateur always wrong. Sometimes your intuitions about what’s wrong with your car may trump those of your mechanic. But, as a general rule, it is far more likely that the expert got it right and that you have a superficial or incomplete understanding of the matter. There is no shame in this, of course. We can’t all be experts on everything.


(ii) Which brings me to the second cause of irritation among some commenters: a good number of people seem not to recognize that philosophy is a field of expertise. On the one hand, this is understandable, but on the other hand it is downright bizarre. It’s understandable because philosophy is, indeed, a rather peculiar field, even within the academy. While biologists study the living world, physicists study the fundamentals of matter and energy, psychologists study human behavior, and historians study human history, what do philosophers study, exactly? The answer is: everything.


Which doesn’t mean they are experts on everything. Here is how it works. First off, the very comparison between philosophy and, say, biology, is misleading. “Philosophy,” if anything, is comparable to “science,” not to a sub-discipline of science. Second, philosophers are interested in broad vistas and the connections among fields, hence the various “philosophies of” (mind, biology, physics, social science, language, history, and so forth). This doesn’t make it easier, but more difficult, to be a philosopher. Take my own case: I am a philosopher of science, and in particular a philosopher of evolutionary biology. This means that I need to be very familiar with not one, but two areas of scholarship: evolutionary biology and philosophy of science. I need to understand both the biology and the epistemology, for instance, in order to apply a philosophical lens to the science and ask questions like: what is the logic and structure of a particular scientific theory, how do unstated assumptions and unrecognized biases interfere with scientific research, and what exactly is the relationship between a scientific theory and the evidence that is invoked to back it up (i.e., what’s the “epistemic warrant” of the theory)?


Surely this sort of work requires expertise. Equally surely, someone without background in both science and philosophy of science is unlikely to just waltz in and come up with a novel idea that will stun the pros. It’s possible, of course, but very, very unlikely.


(iii) A third group of responses threw back at me the apparent incongruity that I have spent years encouraging people to practice philosophy (Stoicism, specifically) in their everyday life, and yet I’m now telling them that they don’t understand it. But there is a big difference between philosophy as an academic field of scholarship and philosophy understood as a daily practice in life. The first one is the province of professionals; the second one can (and, I think, should) be accessible to anyone willing to spend a modicum of time reading about it.


Again, the difference that I’m drawing here should not be surprising, as it finds lots of parallels. Everyone should exercise to maintain good health. That doesn’t mean everyone suddenly is a professional trainer or athlete. Anyone is capable of driving a car. But we are not a planet of car mechanics. Every Christian is able to read the Gospels, but few are theologians of the level of Thomas Aquinas. And so on, the examples are endless.


So, no, there is no contradiction at all between the notion that philosophy is a specialized academic profession requiring a lot of training and the idea that anyone can read up enough about Stoicism, or Buddhism, or any other philosophical or religious practice and incorporate it into their lives.


Possible objections


Finally, let me do some pre-emptive addressing of likely criticisms (another useful habit that I picked up as a professional philosopher!):


(1) But dentists (say) produce something, what do philosophers produce?


The outcome of the profession of dentistry is that your teeth will be in better order and more healthy than they would have been had you not gone to the dentist. The outcome of the profession of philosophy is twofold: (a) our students develop a better sense for complex ideas and how to evaluate them; and (b) we publish papers and books that contain new insights into the problems we are interested in. (The latter is, of course, what every scholar does, both in the humanities and in the sciences.)


(2) But Socrates did not have a PhD!

 

True. Neither did Darwin. Or Galileo. But today it’s really, really hard to become a professional biologist or physicist without proper, standardized, and rigorous training, usually certified by the award of a PhD. Philosophy has changed exactly in the same way in which all other fields of inquiry have, and for similar reasons (increased specialization, consequent division of labor, institutionalization, etc.).


(3) But someone can make novel contributions to philosophy even without a degree.

 

Yes. Just like someone can make a novel contribution to biology, or physics, and so forth. Such cases exist, but they are rare. Indeed, they are increasingly hard to find, across fields, precisely because both humanistic and scientific knowledge are getting more and more sophisticated and specialized, thus requiring extensive professional training.


(4) But plenty of professional philosophers don’t make interesting contributions to the field.

 

True. And the same goes for plenty of professional biologists (believe me, I’ve seen it) and, I assume, professional physicists, mathematicians, and so forth. Even so, your average philosopher (or biologist, or physicist) will still have a far more sophisticated command of her field than someone who has never studied it systematically.


(5) But there are serious problems with academia.

 

Indeed there are. This is something often pointed out, among others, by my friend Nigel Warburton. That said, Nigel himself has a PhD in philosophy and was an academic before going freelance. And for his spectacularly successful podcast, Philosophy Bites, he tends to interview… you guessed it! Professional philosophers! (Including yours truly.) Because they have knowledge of their field, and interesting things to say about it.


The bottom line


So, can we please get over this strange combination of defensiveness and disdain, and admit that philosophy is — among other things — a serious profession carried out by people with expertise? As I argued above, there is far more at stake here than a petty turf war or wounded egos. Taking philosophy (and the humanities) seriously may be what ultimately will save us from the forces of obscurantism and tyranny.

Biology’s last paradigm shift and the evolution of evolutionary theory – part II

Last time we saw how evolutionary theory has evolved over the past century and a half, why so many contemporary biologists are calling for what they refer to as the Extended Evolutionary Synthesis (see here and here), and how Darwin, building on David Hume, definitively rebutted the intelligent design argument advanced by William Paley. All as part of a discussion of a paper I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism” (full text here). In this second part we are going to look at whether the transition between natural theology and Darwinism constituted a paradigm shift, according to criteria laid out by philosopher of science Thomas Kuhn. As I mentioned last time, in the paper I also apply the same analysis to what happened after Darwinism, to more and more recent incarnations of evolutionary theory, but I will not discuss that section here.


According to Kuhn, change in science consists of two distinct and alternating phases: during “normal science,” scientists use the dominant theoretical and methodological tools within a field of inquiry to solve “puzzles,” i.e., problems arising within a particular theory. From time to time, however, the number of problems that cannot be resolved within the adopted framework (“anomalies”) becomes large enough to trigger a crisis, which is resolved when a new “paradigm” replaces the old framework and provides fresh guidance for further normal, puzzle-solving science.


Notoriously, one of the problems with the Kuhnian approach is that Kuhn never defined exactly what he meant by “paradigm,” which means that it is not entirely clear what would constitute a paradigm shift. For the purposes of my argument, I will use the commonly accepted interpretation of paradigms as encompassing the “disciplinary matrix”: not just the dominant theory or theories within a given field, but also the accompanying methodologies, the training strategies for the next generation of scientists, and – no less important – the pertinent metaphysical and epistemological assumptions.


Kuhn suggested five criteria for comparing competing paradigms and guiding theory choice: 1) Accuracy; 2) Consistency, both internal and with other theories; 3) Scope, in terms of how widely a theory’s explanatory reach extends; 4) Simplicity; and 5) Fruitfulness, in terms of further research. Judged by these criteria, the comparison between the two paradigms of natural theology and Darwinism is striking. Let’s go through it criterion by criterion.


Accuracy


Natural theology: all explanations are ad hoc, since God’s will is inscrutable.


Darwinism: it can explain some surprising facts about the biological world, like the complexities of the flower structure in some orchid species, or the intricacies of the life cycles of some parasites.


Consistency


Natural theology: internally inconsistent, given the commitment to an all-powerful, all-good God (the problem of natural evil).


Darwinism: as internally consistent as any major scientific theory; externally consistent with other sciences, particularly via Darwin’s prediction that the age of the earth had to be greater than was commonly thought by the geologists and physicists of his time (it turns out he was right).


Scope


Natural theology: allegedly all-encompassing, but supernatural “explanations” are epistemologically empty. That is, to say “God did it” sounds like an explanation, but it really doesn’t explain anything.


Darwinism: new facts about the biological world that are explained by the theory have been consistently uncovered for more than one and a half centuries.


Simplicity


Natural theology: deceptively simple, if one neglects the obvious question of the origin and makeup of the Creator.


Darwinism: in its original form invokes a small number of mechanisms to explain biological history and complexity; more recent versions invoke more mechanisms, but still a relatively limited number.


Fruitfulness


Natural theology: did not lead to any research program or discovery.


Darwinism: has maintained a vigorous research program for more than one and a half centuries.


According to the above summary, then, the Darwinian paradigm is definitely preferable to Paley’s natural theology – not surprisingly. More interestingly for our purposes here, these are all clear signs of a paradigm shift – the only one, I argue in the rest of the original paper, ever to have occurred in evolutionary biology.


Kuhn’s theory of paradigm shifts famously included another controversial notion: incommensurability, the idea that crucial concepts within a given paradigm are simply not comparable to what superficially appear to be equivalent concepts within another paradigm. Kuhn identified three distinct types of incommensurability: methodological, observational, and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, and then to develop distinct approaches to the solution of those puzzles. Obviously, natural theology and Darwinism are methodologically incommensurable: while both rely on observation and comparative analyses, their goals are entirely different. For Paley, the focus is on the intricate complexity of living organisms, invariably interpreted as an obvious indication of the will and omnipotence of the Creator. Darwin, instead, pays particular attention to precisely those biological phenomena that are troubling for the notion of intelligent design, as in this famous passage:


“I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of Caterpillars.” (letter collected by Francis Darwin, 1887).


More broadly, the sorts of “puzzles,” to use Kuhn’s terminology, that Darwinists began to pay attention to concern the historical relationships between different species of organisms (something that is defined out of existence within the natural theological paradigm, since species are specially created), as well as the kinds of ecological settings that bring about different adaptations (again, a problem ruled out within natural theology, where adaptations are the direct result of an intelligent act).


Observational incommensurability is tightly linked to the idea that observations are theory-dependent: what is considered a “fact” within one theoretical context may not be such in a different one. This is perhaps one of the most controversial of Kuhn’s notions, famously illustrated with images from Gestalt psychology, where the same pattern of lines on paper can be interpreted in dramatically different fashions (a vase or two faces, an old or a young woman, a rabbit or a duck, and so on).


The problem, of course, is that if we take the Gestalt metaphor seriously, we are led to the position that there is no true, or even better, way to interpret the data, which in turn leads to the constructivist temptation: any theory is just as good as any other, and there really is no way to measure progress in science. Kuhn strongly disavowed such an extreme interpretation of his ideas; the more moderate notion that observations are theory-dependent, however, is now commonly accepted in philosophy of science and embedded in textbook treatments of the subject.


Be that as it may, it is hard to imagine examples of observational incommensurability between natural theology and Darwinism, in part, no doubt, because no sophisticated way of gathering data – beyond direct observation and rudimentary experiments – was accessible to proponents of either paradigm.


Finally we get to semantic incommensurability. This has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being the concept of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity.


For the purposes of our discussion, one could make the argument that a similar situation holds for the shifting concept of species between natural theology and Darwinism. Both paradigms refer to “species,” but the meaning of the term is entirely different. For Paley, species were fixed entities set in place by the action of the Creator – in that sense not far from Newton’s own conception of the physical world, and particularly of the laws governing it. For Darwin, by contrast, species are ever-changing entities with no sharp boundaries, altered by evolutionary processes in a continuous, gradualistic fashion.


All in all, then, it appears that whether we use the first set of Kuhnian criteria or the various notions of incommensurability, there are very strong reasons to conclude that the shift from natural theology to Darwinism was, in fact, a paradigm shift. It was also, in a very important sense, a shift from a proto-scientific to a scientific view of biology: Darwin and Wallace abandoned any reference to supernatural forces, thereby establishing a whole new field of science – one that keeps, ahem, evolving even today.