GNXP status update


This blog has been inactive for a while. I’ve got the domain, and perhaps one day there will be regular contributors. The internet has changed a lot since we started GNXP in June of 2002, so I don’t know.

* This domain hosts almost all of the gnxp.com archives, including many comments, going back to 2002.

* My own content from this weblog, ScienceBlogs, and Discover can be found at the Unz Review, where I’m now posting.

Sexual selection and economic growth


I’m not sure how much drive-by traffic gnxp is continuing to receive, but figured it worthwhile to post a note about my latest working paper, which explores whether male signalling may have a role in driving economic progress. The abstract:

Sexual Selection, Conspicuous Consumption and Economic Growth

The evolution by sexual selection of the male propensity to engage in conspicuous consumption contributed to the emergence of modern rates of economic growth. We develop a model in which males engage in conspicuous consumption to send an honest signal of their quality to females. Males who engage in conspicuous consumption have higher reproductive success than those who do not, as females respond to the costly and honest signal, increasing the prevalence of signalling males in the population over time. As males fund conspicuous consumption through participation in the labour force, the increase in the prevalence of signalling males who engage in conspicuous consumption gives rise to an increase in economic activity that leads to economic growth.
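The abstract compresses a dynamic argument into a few sentences. As a rough illustration of the selection step only, here is a toy replicator-style sketch; the fitness parameters are my own assumptions for illustration and this is not the model in the paper.

```python
# Toy sketch (not the paper's model): a "signalling" male type pays a
# consumption cost funded by extra labour but gains a mating advantage,
# so its share of the male population grows over generations.

def next_share(p, b=0.15, c=0.05):
    """One generation of selection on the signalling type.

    p : current share of signalling males
    b : assumed reproductive advantage from female response to the signal
    c : assumed cost of funding conspicuous consumption
    """
    w_signal = 1.0 + b - c      # fitness of signallers (net benefit assumed)
    w_plain = 1.0               # baseline fitness of non-signallers
    mean_w = p * w_signal + (1 - p) * w_plain
    return p * w_signal / mean_w

p = 0.01                        # start with 1% signallers
for _ in range(200):
    p = next_share(p)
print(f"share of signalling males after 200 generations: {p:.2f}")
# As the share of signallers rises, so does the labour supplied to fund the
# signal, which is the channel the paper links to economic growth.
```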

I’ve posted some background to the paper over at Evolving Economics. I’ve also received some interesting feedback, including this post on The Conversation by Rob Brooks.

Finally, I have posted on SSRN an update to my paper examining the Galor-Moav model in which economic growth is triggered by the interplay between technological progress and an inherited preference for quality or quantity of children. I posted about it on gnxp mid-last year. The revision carries the same story as the original paper, but is tighter and cuts out some of the flotsam.

10 years of Gene Expression


Just thought I would mention that a few days ago the weblog Gene Expression passed the 10-year mark. I won’t say much more at this point because of time constraints. But I wanted to enter it into the record, as well as admit two minor points. I often used to say in the early days that my foray into blogging was rather a coincidence. I was playing around with the JSP/Servlet platform and wrote some primitive blog software, which I decided to test with my own weblog…and somehow one thing led to another. But I’m 99% sure now that I would have started a weblog at some point, and not long after 2002. Second, of late I notice that Gawker is occasionally mentioned in the media as the locus for various politically correct outrages. If you had told me 10 years ago that Gawker would become such a banal and conventional website, I would have been surprised. The founding editor of Gawker was an occasional contributor to the first incarnation of GNXP in 2002. People tend to idealize the early blogosphere too much; there was a lot of stupid Iraq warblogging going on (I was part of it to some extent), but there definitely was a real strain of heterodoxy. Today the blogosphere reflects the mainstream media by and large.

The genetic architecture of economic and political preferences


*This is a cross post from Evolving Economics.

Evidence from twin studies implies that economic and political traits have a significant heritable component. That is, some of the variation between people is attributable to genetic variation.

Despite this, there has been a failure to demonstrate that the heritability can be attributed to specific genes. Candidate gene studies, in which a single gene (or SNP) is examined for its potential influence on a trait, have long failed to identify effects beyond a fraction of one per cent. Further, many of the candidate gene results fail to be replicated in studies with new samples.

An alternative approach to genetic analysis is now starting to address this issue. Genomic-relatedness-matrix restricted maximum likelihood (GREML – the term used by the authors of the paper discussed below) is a technique that examines how much of the variance in a trait can be explained by all of the SNPs simultaneously. This approach has been used to examine height, intelligence, personality and several diseases, and has generally shown that half of the heritability estimated in twin studies can be attributed to the sampled SNPs.
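These studies use dedicated mixed-model software, but the core idea can be sketched in a few lines. Below is a minimal, illustrative version with toy sample sizes and a crude grid search over the likelihood, not the estimator actually used in the literature: build a genomic relatedness matrix from standardised SNPs, then find the SNP-heritability that best explains the phenotypic covariance.

```python
import numpy as np

# Minimal sketch of the idea behind GREML (illustrative only, not the
# GCTA-style REML software used in these studies): y ~ N(0, h2*A + (1-h2)*I),
# where A is the genomic relatedness matrix built from standardised SNPs.
rng = np.random.default_rng(0)
n, m = 500, 2000                          # individuals, SNPs (toy sizes)
freqs = rng.uniform(0.1, 0.9, m)
geno = rng.binomial(2, freqs, size=(n, m)).astype(float)
Z = (geno - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))   # standardise SNPs
A = Z @ Z.T / m                           # genomic relatedness matrix

# Simulate a phenotype with true SNP-heritability of 0.5
beta = rng.normal(0, np.sqrt(0.5 / m), m)
y = Z @ beta + rng.normal(0, np.sqrt(0.5), n)
y = (y - y.mean()) / y.std()

def log_lik(h2):
    V = h2 * A + (1 - h2) * np.eye(n)
    sign, logdet = np.linalg.slogdet(V)
    return -0.5 * (logdet + y @ np.linalg.solve(V, y))

grid = np.linspace(0.01, 0.99, 99)
h2_hat = grid[np.argmax([log_lik(h) for h in grid])]
print(f"estimated SNP-heritability: {h2_hat:.2f}")   # typically near 0.5
```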

A new paper released in PNAS seeks to apply this approach to economic and political phenotypes. The paper by Benjamin and colleagues shows that around half the heritability in economic and political behaviour observed in behavioural studies could be explained by the array of SNPs.

The authors used the results of recent surveys of subjects from the Swedish Twin Registry, who had their educational attainment, four economic preferences (risk, patience, fairness and trust) and five political preferences (immigration/crime, foreign policy, environmentalism, feminism and equality, and economic policy) measured. The GREML analysis found that for one economic preference, trust, the level of variance explained by the SNPs was statistically significant, with an estimate of narrow heritability of over 0.2. Two of the political preferences, economic policy and foreign policy, had narrow heritability that was statistically significant, with heritability estimates above 0.3 for each of these. The authors noted that as the estimates are noisy and GREML provides a lower bound, the results are consistent with low to moderate heritability for these traits.

Educational attainment was also found to have a statistically significant result, although the more precise measurement of educational attainment and the availability of this data across all subjects made that result more likely.

This result is corroboration of the evidence from twin studies and provides a basis for believing that molecular genetic data could be used to predict phenotypic traits. However, one interesting feature of the GREML method is that, after the analysis has been conducted on one sample, the estimates obtained do not help in predicting the traits of anyone outside that sample. The technique demonstrates the potential of molecular genetic data without directly realising that potential.

As a comparison, the authors examined whether any individual SNPs might predict economic or political preferences, but found none that met the genome-wide significance threshold of 5×10⁻⁸. Such a stringent threshold is required because of the huge number of SNPs being tested.
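The arithmetic behind that number is just a family-wise correction, as the short calculation below shows; the one-million figure is the conventional assumption for the effective number of independent common SNPs, not an exact count.

```python
# The 5x10^-8 threshold is essentially a Bonferroni correction: a family-wise
# error rate of 0.05 divided by roughly one million effectively independent
# common SNPs (a conventional assumption).
family_wise_alpha = 0.05
effective_independent_tests = 1_000_000
print(family_wise_alpha / effective_independent_tests)   # 5e-08
```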

The authors also conducted the standard comparison between monozygotic (identical) and dizygotic (fraternal) twins, which resulted in heritability estimates consistent with the existing literature, although with a much larger sample than typically used. Looking through the supplementary materials, the major surprise to me was that the twin analysis suggests that patience has low heritability, with a very low correlation between twins and almost no difference between monozygotic and dizygotic twins (in fact, for males, dizygotic twins were more similar).
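For reference, the standard twin comparison reduces to Falconer's formula, sketched below; the correlations used are made-up numbers for illustration, not the paper's estimates.

```python
# Falconer's formula: broad heritability is approximately twice the gap
# between MZ and DZ twin correlations. Illustrative numbers only.
def falconer_h2(r_mz, r_dz):
    return 2 * (r_mz - r_dz)

print(falconer_h2(r_mz=0.45, r_dz=0.25))   # 0.4
# If r_mz and r_dz are nearly equal, as for the patience measure, the estimate
# is driven towards zero (or below it, as for the male DZ twins noted above).
```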

The authors draw a few conclusions from their work, many of which reflect the argument in a Journal of Economic Perspectives article from late last year. The first and most obvious is that we should treat all candidate gene studies with caution. Hopefully some journals that insist on publishing low-sample-size candidate gene studies will pay attention to this. Where such studies are going to be conducted, they need very large samples, significantly larger than those used in most published studies.

Meanwhile, they are still hopeful that there can be a contribution from genetic research, particularly if the biological pathways between the gene and trait can be determined. This might include using genes as instrumental variables or as control variables in non-genetic empirical work. The use of genes as instrumental variables does require, however, some understanding of the pathways through which the gene acts, as it may have multiple roles (that is, it may be pleiotropic). They also suggest that the focus be turned to SNPs for which there are known large effects and the results have been replicated.

One element of analyses of political and economic preferences that makes me slightly uncomfortable is the loose nature of these preferences. For one, the manner in which they are elicited from subjects can vary substantially, as can the nature of the measurement. Take the 2005 paper by Alford and colleagues on political preferences, which canvassed 28 political preferences. Many of the views are likely to change over time and be highly correlated with each other. And why stop at 28?

As a result, it may be preferable to take a step back and ensure that data on higher level traits are collected. I generally consider that IQ and the big five personality traits (openness, conscientiousness, agreeableness, extraversion and stability) are a good starting point and are likely to capture much of the variation in political and economic preferences. For example, preferences such as patience are likely to be reflected in IQ, while openness captures much of the liberal-conservative spectrum of political leaning. Starting from a basis such as this may also give greater scope for working back to the biological pathways.

The Social Science Genetics Association Consortium is doing some work in harmonising phenotypes across large samples. Hopefully their work will lead in this direction.

Robustness and fragility in neural development


So many things can go wrong in the development of the human brain it is amazing that it ever goes right. The fact that it usually does – that the majority of people do not suffer from a neurodevelopmental disorder – is due to the property engineers call robustness. This property has important implications for understanding the genetic architecture of neurodevelopmental disorders – what kinds of insults will the system be able to tolerate and what kind will it be vulnerable to?

The development of the brain involves many thousands of different gene products acting in hundreds of distinct molecular and cellular processes, all tightly coordinated in space and time – from patterning and proliferation to cell migration, axon guidance, synapse formation and many others. Large numbers of proteins are involved in the biochemical pathways and networks underlying each cell biological process. Each of these systems has evolved not just to do a particular job, but to do it robustly – to make sure this process happens even in the face of diverse challenges.

Robustness is an emergent and highly adaptive property of complex systems that can be selected for in response to particular pressures. These include extrinsic factors, such as variability in temperature, supply of nutrients, etc., but also intrinsic factors. A major source of intrinsic variation is noise in gene expression – random fluctuations in the levels of all proteins in all cells. These fluctuations arise due to the probabilistic nature of gene transcription – whether a messenger RNA is actively being made from a gene at any particular moment. The system must be able to deal with these fluctuations and it can be argued that the noise in the system actually acts as a buffer. If the system only worked within a narrow operating range for each component then it would be very vulnerable to failure of any single part.
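To make the idea of expression noise concrete, here is a minimal birth-death (Gillespie-style) simulation of mRNA copy number; the rates are arbitrary illustrative values, not measurements from any real gene.

```python
import random

# Minimal birth-death simulation of transcriptional noise: mRNAs are produced
# at rate k and each is degraded at rate d, so the copy number fluctuates
# around k/d = 10 rather than sitting at a fixed value. Illustrative rates.
def simulate_mrna(k=10.0, d=1.0, t_end=200.0, seed=1):
    random.seed(seed)
    t, n, trace = 0.0, 0, []
    while t < t_end:
        birth, death = k, d * n
        total_rate = birth + death
        t += random.expovariate(total_rate)       # time to the next event
        if random.random() < birth / total_rate:
            n += 1                                # a transcription event
        else:
            n -= 1                                # a degradation event
        trace.append(n)
    return trace

trace = simulate_mrna()
settled = trace[len(trace) // 2:]                 # ignore the initial rise
print(min(settled), max(settled))                 # wanders well away from 10
```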

Natural selection will therefore favour system architectures that are more robust to environmental and intrinsic variation. In the process, such systems also indirectly become robust to the other major source of variation – mutations.

Many individual components can be deleted entirely with no discernible effect on the system (which is why looking exhaustively for a phenotype in mouse mutants can be so frustrating – many gene knockouts are irritatingly normal). You could say that if the knockout of a gene does not affect a particular process, then the gene product is not actually involved in that process, but that is not always the case. One can often show that a protein is involved biochemically and even that the system is sensitive to changes in the level of that protein – increased expression can often cause a phenotype even when loss-of-function manipulations do not.

Direct evidence for robustness of neurodevelopmental systems comes from examples of genetic background effects on phenotypes caused by specific mutations. While many components of the system can be deleted without effect, others do cause a clear phenotype when mutated. However, such phenotypes are often modified by the genetic background. This is commonly seen in mouse experiments, for example, where the effect of a mutation may vary widely when it is crossed into various inbred strains. The implication is that there are some genetic differences between strains that by themselves have no effect on the phenotype, but that are clearly involved in the system or process, as they strongly modify the effect of another mutation.

How is this relevant to understanding so-called complex disorders? There are two schools of thought on the genetic architecture of these conditions. One considers the symptoms of, say, autism or schizophrenia or epilepsy as the consequence of mutation in any one of a very large number of distinct genes. This is the scenario for intellectual disability, for example, and also for many other conditions like inherited blindness or deafness. There are hundreds of distinct mutations that can result in these symptoms. The mutations in these cases are almost always ones that have a dramatic effect on the level or function of the encoded protein.

The other model is that complex disorders arise, in many cases, due to the combined effects of a very large number of common polymorphisms – these are bases in the genome where the sequence is variable in the population (e.g., there might be an “A” in some people but a “G” in others). The human genome contains millions of such sites and many consider the specific combination of variants that each person inherits at these sites to be the most important determinant of their phenotype. (I disagree, especially when it comes to disease). The idea for disorders such as schizophrenia is that at many of these sites (perhaps thousands of them), one of the variants may predispose slightly to the illness. Each one has an almost negligible effect alone, but if you are unlucky enough to inherit a lot of them, then the system might be pushed over the level of burden that it can tolerate, into a pathogenic state.
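A toy simulation of that architecture, with invented parameters, shows what is being proposed: many tiny contributions summing to a roughly normal burden, with disease appearing only above a threshold.

```python
import numpy as np

# Toy version of the common-variant model described above, with invented
# parameters: 1,000 sites each nudge risk slightly, the summed burden is
# approximately normal, and only individuals whose burden crosses a
# threshold are counted as affected.
rng = np.random.default_rng(42)
n_people, n_sites = 5_000, 1_000
risk_allele_freq = 0.3                 # assumed frequency at every site
per_allele_effect = 0.01               # assumed tiny effect of each risk allele
alleles = rng.binomial(2, risk_allele_freq, size=(n_people, n_sites))
burden = alleles.sum(axis=1) * per_allele_effect + rng.normal(0, 0.5, n_people)
threshold = burden.mean() + 2.33 * burden.std()   # only the top ~1% cross it
print(f"fraction affected: {(burden > threshold).mean():.3f}")
# No single site explains more than a sliver of the variance on its own, which
# is why single-SNP association tests struggle under this architecture.
```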

These are the two most extreme positions – there are also many models that incorporate effects of both rare mutations and common polymorphisms. Models incorporating common variants as modifiers of the effects of rare mutations make a lot of biological sense. What I want to consider here is the model that the disease is caused in some individuals purely by the combined effects of hundreds or thousands of common variants (without what I call a “proper mutation”).

Ironically, robustness has been invoked by both proponents and opponents of this idea. I have argued that neurodevelopmental systems should be robust to the combined effects of many variants that have only very tiny effects on protein expression or function (which is the case for most common variants). This is precisely because the system has evolved to buffer fluctuations in many components all the time. In addition to being an intrinsic, passive property of the architecture of developmental networks, robustness is also actively promoted through homeostatic feedback loops, which can maintain optimal performance in the face of variations, by regulating the levels of other components to compensate. The effects of such variants should therefore NOT be cumulative – they should be absorbed by the system. (In fact, you could argue that a certain level of noise in the system is a “design feature” because it enables this buffering).

Others have argued precisely the opposite – that robustness permits cryptic genetic variation to accumulate in populations. Cryptic genetic variation has no effect in the context in which it arises (allowing it to escape selection) but, in another context – say in a different environment, or a different genetic background – can have a large effect. This is exactly what robustness allows to happen – indeed, the fact that cryptic genetic variation exists provides some of the best evidence that we have that the systems are robust as it shows directly that mutations in some components are tolerated in most contexts. But is there any evidence that such cryptic variation comprises hundreds or thousands of common variants?

To be fair, proving that is the case would be very difficult. You could argue from animal breeding experiments that the continuing response to selection of many traits means that there must be a vast pool of genetic variation that can affect them, which can be cumulatively enriched by selective breeding, almost ad infinitum. However, new mutations are known to make at least some contribution to this continued response to selection. In addition, in most cases where the genetics of such continuously distributed traits have been unpicked (by identifying the specific factors contributing to strain differences for example) they come down to perhaps tens of loci showing very strong and complex epistatic interactions (1, 2, 3). Thus, just because variation in a trait is multigenic does not mean it is affected by many mutations of small individual effect – an effectively continuous distribution can emerge due to very complex epistatic interactions between a fairly small number of mutations which have surprisingly large effects in isolation.

(I would be keen to hear of any examples showing real polygenicity on the level of hundreds or thousands of variants).

In the case of genetic modifiers of specific mutations – say, where a mutation causes a very different phenotype in different mouse strains – most of the effects that have been identified have been mapped to one or a small number of mutations which have no effect by themselves, but which strongly modify the phenotype caused by another mutation.

These and other findings suggest that (i) cryptic genetic variation relevant to disease is certainly likely to exist and to have important effects on phenotype, but that (ii) such genetic background effects can most likely be ascribed to one, several, or perhaps tens of mutations, as opposed to hundreds or thousands of common polymorphisms.

This is already too long, but it raises the question: if neurodevelopmental systems are so robust, then why do we ever get neurodevelopmental disease? The paradox of systems that are generally robust is that they may be quite vulnerable to large variation in a specific subset of components. Why specific types of genes are in this set, while others can be completely deleted without effect, is the big question. More on that in a subsequent post…

De novo mutations in autism


A trio of papers in this week’s Nature identifies mutations causing autism in four new genes, demonstrates the importance of de novo mutations in the etiology of this disorder and suggests that there may be 1,000 or more genes in which high-risk, autism-causing mutations can occur.

These studies provide an explanation for what seems like a paradox: on the one hand, twin studies show that autism is very strongly genetic (identical twins are much more likely to share a diagnosis than fraternal twins) – on the other, many cases are sporadic, with no one else in the family affected. How can the condition be “genetic” but not always run in the family? The explanation is that many cases are caused by new mutations – ones that arise in the germline of the parents. (This is similar to conditions like Down syndrome). The studies reported in Nature are trying to find those mutations and see which genes are affected.

They are only possible because of the tremendous advances in our ability to sequence DNA. The first genome cost three billion dollars to sequence and took ten years – we can do one now for a couple thousand dollars in a few days. That means you can scan through the entire genome in any affected individual for mutated genes. The problem is we each carry hundreds of such mutations, making it difficult to recognise the ones that are really causing disease.

The solution is to sequence the DNA of large numbers of people with the same condition and see if the same genes pop up multiple times. That is what these studies aimed to do, with samples of a couple of hundred patients each. They also concentrated on families where autism was present in only one child and looked specifically for mutations in that child that were not carried by either parent – so-called de novo mutations, which arise in the generation of sperm or eggs. These are the easiest to detect because they are likely to be the most severe. (Mutations with very severe effects are unlikely to be passed on because the people who carry them are far less likely to have children).
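In code, the core of the trio design is just a set difference, as the sketch below shows; the variant identifiers are placeholders rather than real calls, and real pipelines add extensive quality filtering on top of this.

```python
# Keep variants called in the child but in neither parent. The variant
# identifiers below are illustrative placeholders, not real calls.
child_variants  = {"chr1:10523A>G", "chr7:55210C>T", "chr12:9042G>A"}
mother_variants = {"chr1:10523A>G"}
father_variants = {"chr12:9042G>A"}

de_novo_candidates = child_variants - mother_variants - father_variants
print(de_novo_candidates)   # {'chr7:55210C>T'}
```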

There is already strong evidence that de novo mutations play an important role in the etiology of autism – first, de novo copy number variants (deletions or duplications of chunks of chromosomes) appear at a significantly higher rate in autism patients compared to controls (in 8% of patients compared to 2% of controls). Second, it has been known for a while that the risk of autism increases with paternal age – that is, older fathers are more likely to have a child with autism. (Initial studies suggested the risk was up to five-fold greater in fathers over forty – these figures have been revised downwards with increasing sample sizes, but the effect remains very significant, with risk increasing monotonically with paternal age). This is also true of schizophrenia and, in fact, of dominant Mendelian disorders in general (those caused by single mutations). The reason is that the germ cells generating sperm in men continue to divide throughout their lifetime, leading to an increased chance of a mutation having happened as time goes on.

The three studies in Nature were looking for a different class of mutation – point mutations or changes in single DNA bases. They each provide a list of genes with de novo mutations found in specific patients. Several of these showed a mutation in more than one (unrelated) patient, providing strong evidence that these mutations are likely to be causing autism in those patients. The genes with multiple hits include CHD8, SCN2A, KATNAL2 and NTNG1. Mutations in the last of these, NTNG1, were only found in two patients but have been previously implicated as a rare cause of Rett syndrome. This gene encodes the protein Netrin-G1, which is involved in the guidance of growing nerves and the specification of neuronal connections. CHD8 is a chromatin-remodeling factor and is involved in Wnt signaling, a major neurodevelopmental pathway, as well as interacting with p53, which controls cell growth and division. SCN2A encodes a sodium channel subunit; mutations in this gene are involved in a variety of epilepsies. Not much is known about KATNAL2, except by homology – it is related to proteins katanin and spastin, which sever microtubules – mutations in spastin are associated with hereditary spastic paraplegia. How the specific mutations observed in these genes cause the symptoms of autism in these patients (or contribute to them) is not clear – these discoveries are just a starting point, but they will greatly aid the quest to understand the biological basis of this disorder.

The fact that these studies only got a few repeat hits also means that there are probably many hundreds or even thousands of genes that can cause autism when mutated (if there were only a small number, we would see more repeat hits). Some of these will be among the other genes on the lists provided by these studies and will no doubt be recognisable as more patients are sequenced. Interestingly, many of the genes on the lists are involved in aspects of nervous system development or function and encode proteins that interact closely with each other – this makes it more likely that they are really involved.
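The logic of that inference can be checked with a small simulation; the numbers of hits and genes below are assumptions for illustration, not the studies' figures. The fewer genes there are to hit, the more often the same gene should recur.

```python
import random

# Back-of-the-envelope version of the recurrence argument: scatter a fixed
# number of causal de novo hits across G candidate genes and count how many
# genes are hit more than once. Illustrative numbers only.
def genes_hit_twice(n_hits=60, n_genes=1000, seed=0):
    random.seed(seed)
    counts = {}
    for _ in range(n_hits):
        gene = random.randrange(n_genes)
        counts[gene] = counts.get(gene, 0) + 1
    return sum(1 for c in counts.values() if c >= 2)

for n_genes in (100, 1000, 5000):
    print(n_genes, genes_hit_twice(n_genes=n_genes))
# With 60 hits spread over ~100 genes many genes recur; over thousands of
# genes only a handful do, which is much closer to what these studies report.
```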

These studies reinforce the fact that autism is not one disorder – not clinically and not genetically either. Like intellectual disability or epilepsy or many other conditions, it can be caused by mutations in any of a very large number of genes. The ones we know about so far make up around 30% of cases – these new studies add to that list and also show how far we have to go to complete it.

We should recognise too that the picture will also get more complex – in many cases there may be more than one mutation involved in causing the disease. De novo mutations are likely to be the most severe class and thus most likely to cause disease with high penetrance themselves. But many inherited mutations may cause autism only in combination with one or a few other mutations.

These complexities will emerge over time, but for now we can aim to recognise the simpler cases where a mutation in a particular gene is clearly implicated. Each new gene discovered means that the fraction of cases we can assign to a specific cause increases. As we learn more about the biology of each case, those genetic diagnoses will have important implications for prognosis, treatment and reproductive decisions. We can aim to diagnose and treat the underlying cause in each patient and not just the symptoms.

An economics and evolutionary biology reading list


I have added a new page over at Evolving Economics with a suggested reading list for those interested in the intersection of economics and evolutionary biology. The list is here.

The list is a work in progress, and I plan to update it as new sources emerge or are suggested (or when I realise what oversights I have made). I also intend to constrain it to the best sources, rather than making it a complete catalogue of every thought on the topic.

I am interested in suggestions from gnxp readers, so please let me know if you have any thoughts. Comments can also be made at the bottom of the reading list page.

Nerves of a feather, wire together


Finding your soulmate, for a neuron, is a daunting task. With so many opportunities for casual hook-ups, how do you know when you find “the one”?

In the early 1960s Roger Sperry proposed his famous “chemoaffinity theory” to explain how neural connectivity arises. This was based on observations of remarkable specificity in the projections of nerves regenerating from the eye of frogs to their targets in the brain. His first version of this theory proposed that each neuron found its target by the expression of matching labels on their respective surfaces. He quickly realised, however, that with ~200,000 neurons in the retina, the genome was not large enough to encode separate connectivity molecules for each one. This led him to the insight that a regular array of connections of one field of neurons (like the retina) across a target field (the optic tectum in this case) could be readily achieved by gradients of only one or a few molecules.

The molecules in question, Ephrins and Eph receptors, were discovered thirty-some years later. They are now known to control topographic projections of sets of neurons to other sets of neurons across many areas of the brain, such that nearest-neighbour relationships are maintained (e.g., neurons next to each other in the retina connect to neurons next to each other in the tectum). In this way, the map of the visual world that is generated in the retina is transmitted intact to its targets. Actually, maintenance of nearest-neighbour topography seems to be a general property of projections between any two areas, even ones that do not obviously map some external property across them.

But the idea of matching labels was not wrong – they do exist and they play a very important part in an earlier step of wiring – finding the correct target region in the first place. This is nicely illustrated by a beautiful paper studying projections of retinal neurons in the mouse, which implicates proteins in the Cadherin family in this process.

In the retina, photoreceptor cells sense light and transmit this information, through a couple of relays, to retinal ganglion cells (RGCs). These are the cells that send their projections out of the retina, through the optic nerve, to the brain. But the tectum is not the only target of these neurons. There are, in fact, at least 20 different types of RGCs with distinct functions that project from the retina to various parts of the brain.

In mammals, “seeing” is mediated by projections to the visual centre of the thalamus, which projects in turn to the primary visual cortex. But conscious vision is only one thing we use our eyes for. The equivalent of the tectum, called the superior colliculus in mammals, is also a target for RGCs, and mediates reflexive eye movements, head turns and shifts of attention. (It might even be responsible for blindsight – subconscious visual responsiveness in consciously blind patients). Other RGCs send messages to regions controlling circadian rhythms (the suprachiasmatic nuclei) or pupillary reflexes (areas of the midbrain called the olivary pretectal nuclei).

These RGCs express a photoresponsive pigment (melanopsin) and respond to light directly. This likely reflects the fact that early eyes contained both ciliated photoreceptors (like current rods and cones) and rhabdomeric photoreceptors (possibly the ancestors of RGCs and other retinal cells).

So how do these various RGCs know which part of the brain to project to? This was the question investigated by Andrew Huberman and colleagues, who looked for inspiration to the fly eye. It had previously been shown that a member of the Cadherin family of proteins was involved in fly photoreceptor axons choosing the right layer to project to in the optic lobe. Cadherins are “homophilic” adhesion molecules – they are expressed on the surface of cells and like to bind to themselves. Two cells expressing the same Cadherin protein will therefore stick to each other. This stickiness may be used as a signal to make a synaptic connection between a neuron and its target.

The protein implicated in flies, N-Cadherin, is widely expressed in mammals and thus unlikely to specify connections to different targets of the retina. But Cadherins comprise a large family of proteins, suggesting that other members might play more specific roles. This turns out to be the case – a screen of these proteins revealed several expressed in distinct regions of the brain receiving inputs from subtypes of RGCs. One in particular, Cadherin-6, is expressed in non-image-forming brain regions that receive retinal inputs – those controlling eye movements and pupillary reflexes, for example. The protein is also expressed in a very discrete subset of RGCs – specifically those that project to the Cadherin-6-expressing targets in the brain.

The obvious hypothesis was that this matching protein expression allowed those RGCs to recognise their correct targets by literally sticking to them. To test this, they analysed these projections in mice lacking the Cadherin-6 molecule. Sure enough, the projections to those targets were severely affected – the axons spread out over the general area of the brain but failed to zero in on the specific subregions that they normally targeted.

These results illustrate a general principle likely to be repeated using different Cadherins in different RGC subsets and also in other parts of the brain. Indeed, a paper published at the same time shows that Cadherin-9 may play a similar function in the developing hippocampus. In addition, other families of molecules, such as Leucine-Rich Repeat (LRR) proteins, may play a similar role as synaptic matchmakers by promoting homophilic adhesion between neurons and their targets. (Both Cadherins and LRR proteins also have important “heterophilic” interactions with other proteins).

The expansion of these families in vertebrates could conceivably be linked to the greater complexity of the nervous system, which presumably requires more such labels to specify it. But these molecules may be of more than just academic interest in understanding the molecular logic and evolution of the genetic program that specifies brain wiring. Mutations in various members of the Cadherin (and related protocadherin) and LRR gene families have also been implicated in neurodevelopmental disorders, including autism, schizophrenia, Tourette’s syndrome and others. Defining the molecules and mechanisms involved in normal development may thus be crucial to understanding the roots of neurodevelopmental disease.

Osterhout, J., Josten, N., Yamada, J., Pan, F., Wu, S., Nguyen, P., Panagiotakos, G., Inoue, Y., Egusa, S., Volgyi, B., Inoue, T., Bloomfield, S., Barres, B., Berson, D., Feldheim, D., & Huberman, A. (2011). Cadherin-6 Mediates Axon-Target Matching in a Non-Image-Forming Visual Circuit Neuron, 71 (4), 632-639 DOI: 10.1016/j.neuron.2011.07.006

Williams, M., Wilke, S., Daggett, A., Davis, E., Otto, S., Ravi, D., Ripley, B., Bushong, E., Ellisman, M., Klein, G., & Ghosh, A. (2011). Cadherin-9 Regulates Synapse-Specific Differentiation in the Developing Hippocampus Neuron, 71 (4), 640-655 DOI: 10.1016/j.neuron.2011.06.019

I’ve got your missing heritability right here…


A debate is raging in human genetics these days as to why the massive genome-wide association studies (GWAS) that have been carried out for every trait and disorder imaginable over the last several years have not explained more of the underlying heritability. This is especially true for many of the so-called complex disorders that have been investigated, where results have been far less than hoped for. A good deal of effort has gone into quantifying exactly how much of the genetic variance has been “explained” and how much remains “missing”.

The problem with this question is that it limits the search space for the solution. It forces our thinking further and further along a certain path, when what we really need is to draw back and question the assumptions on which the whole approach is founded. Rather than asking what is the right answer to this question, we should be asking: what is the right question?

The idea of performing genome-wide association studies for complex disorders rests on a number of very fundamental and very big assumptions. These are explored in a recent article I wrote for Genome Biology (referenced below; reprints available on request). They are:

1) That what we call complex disorders are unitary conditions. That is, clinical categories like schizophrenia or diabetes or asthma are each a single disease and it is appropriate to investigate them by lumping together everyone in the population who has such a diagnosis – allowing us to calculate things like heritability and relative risks. Such population-based figures are only informative if all patients with these symptoms really have a common etiology.

2) That the underlying genetic architecture is polygenic – i.e., the disease arises in each individual due to toxic combinations of many genetic variants that are individually segregating at high frequency in the population (i.e., “common variants”).

3) That, despite the observed dramatic discontinuities in actual risk for the disease across the population, there is some underlying quantitative trait called “liability” that is normally distributed in the population. If a person’s load of risk variants exceeds some threshold of liability, then disease arises.

All of these assumptions typically go unquestioned – often unmentioned, in fact – yet there is no evidence that any of them is valid. In fact, the more you step back and look at them with an objective eye, the more outlandish they seem, even from first principles.

First, what reason is there to think that there is only one route to the symptoms observed in any particular complex disorder? We know there are lots of ways, genetically speaking, to cause mental retardation or blindness or deafness – why should this not also be the case for psychosis or seizures or poor blood sugar regulation? If the clinical diagnosis of a specific disorder is based on superficial criteria, as is especially the case for psychiatric disorders, then this assumption is unlikely to hold.

Second, the idea that common variants could contribute significantly to disease runs up against the effects of natural selection pretty quickly – variants that cause disease get selected against and are therefore rare. You can propose models of balancing selection (where a specific variant is beneficial in some genomic contexts and harmful in others), but there is no evidence that this mechanism is widespread. In general, the more arcane your model has to become to accommodate contradictory evidence, the more inclined you should be to question the initial premise.

Third, the idea that common disorders (where people either are or are not affected) really can be treated as quantitative traits (with a smooth distribution in the population, as with height) is really, truly bizarre. The history of this idea can be traced back to early geneticists, but it was popularised by Douglas Falconer, the godfather of quantitative genetics (he literally wrote the book).

In an attempt to demonstrate the relevance of quantitative genetics to the study of human disease, Falconer came up with a nifty solution. Even though disease states are typically all-or-nothing, and even though the actual risk of disease is clearly very discontinuously distributed in the population (dramatically higher in relatives of affecteds, for example), he claimed that it was reasonable to assume that there was something called the underlying liability to the disorder that was actually continuously distributed. This could be converted to a discontinuous distribution by further assuming that only individuals whose burden of genetic variants passed an imagined threshold actually got the disease. To transform discontinuous incidence data (mean rates of disease in various groups, such as people with different levels of genetic relatedness to affected individuals) into mean liability on a continuous scale, it was necessary to further assume that this liability was normally distributed in the population. The corollary is that liability is affected by many genetic variants, each of small effect. Q.E.D.
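Concretely, the transformation works as in the sketch below; the prevalence figures are invented for illustration and are not from any particular study.

```python
from statistics import NormalDist

# Falconer liability transformation: all-or-nothing incidence data are mapped
# onto an assumed standard-normal liability scale. Illustrative figures only.
norm = NormalDist()

def threshold_and_mean_liability(prevalence):
    """Liability threshold and mean liability of affecteds for a given rate."""
    t = norm.inv_cdf(1 - prevalence)   # threshold above which disease occurs
    a = norm.pdf(t) / prevalence       # mean liability of affected individuals
    return t, a

K = 0.01                               # assumed population prevalence
K_rel = 0.10                           # assumed prevalence in first-degree relatives
t, a = threshold_and_mean_liability(K)
t_rel, _ = threshold_and_mean_liability(K_rel)

# Falconer's approximation: the regression of relatives' liability on
# probands' liability equals r * h2, with r = 0.5 for first-degree relatives.
h2 = (t - t_rel) / (a * 0.5)
print(f"implied heritability of liability: {h2:.2f}")   # ~0.78 for these figures
```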

This model – simply declared by fiat – forms the mathematical basis for most GWAS analyses and for simulations regarding proportions of heritability explained by combinations of genetic variants (e.g., the recent paper from Eric Lander’s group). To me, it is an extraordinary claim, which you would think would require extraordinary evidence to be accepted. Despite the fact that it has no evidence to support it and fundamentally makes no biological sense (see Genome Biology article for more on that), it goes largely unquestioned and unchallenged.

In the cold light of day, the most fundamental assumptions underlying population-based approaches to investigate the genetics of “complex disorders” can be seen to be flawed, unsupported and, in my opinion, clearly invalid. More importantly, there is now lots of direct evidence that complex disorders like schizophrenia or autism or epilepsy are really umbrella terms, reflecting common symptoms associated with large numbers of distinct genetic conditions. More and more mutations causing such conditions are being identified all the time, thanks to genomic array and next generation sequencing approaches.

Different individuals and families will have very rare, sometimes even unique mutations. In some cases, it will be possible to identify specific single mutations as clearly causal; in others, it may require a combination of two or three. There is clear evidence for a very wide range of genetic etiologies leading to the same symptoms. It is time for the field to assimilate this paradigm shift and stop analysing the data in population-based terms. Rather than asking how much of the genetic variance across the population can be currently explained (a question that is nonsensical if the disorder is not a unitary condition), we should be asking about causes of disease in individuals:

- How many cases can currently be explained (by the mutations so far identified)?

- Why are the mutations not completely penetrant?

- What factors contribute to the variable phenotypic expression in different individuals carrying the same mutation?

- What are the biological functions of the genes involved and what are the consequences of their disruption?

- Why do so many different mutations give rise to the same phenotypes?

- Why are specific symptoms like psychosis or seizures or social withdrawal such common outcomes?

These are the questions that will get us to the underlying biology.

Mitchell, K. (2012). What is complex about complex disorders? Genome Biology, 13 (1) DOI: 10.1186/gb-2012-13-1-237

Manolio, T., Collins, F., Cox, N., Goldstein, D., Hindorff, L., Hunter, D., McCarthy, M., Ramos, E., Cardon, L., Chakravarti, A., Cho, J., Guttmacher, A., Kong, A., Kruglyak, L., Mardis, E., Rotimi, C., Slatkin, M., Valle, D., Whittemore, A., Boehnke, M., Clark, A., Eichler, E., Gibson, G., Haines, J., Mackay, T., McCarroll, S., & Visscher, P. (2009). Finding the missing heritability of complex diseases Nature, 461 (7265), 747-753 DOI: 10.1038/nature08494

Zuk, O., Hechter, E., Sunyaev, S., & Lander, E. (2012). The mystery of missing heritability: Genetic interactions create phantom heritability Proceedings of the National Academy of Sciences, 109 (4), 1193-1198 DOI: 10.1073/pnas.1119675109

From miswired brain to psychopathology – modelling neurodevelopmental disorders in mice


It takes a lot of genes to wire the human brain. Billions of cells of myriad different types have to be specified, directed to migrate to the right position, organised in clusters or layers, and finally connected to their appropriate targets. When the genes that specify these neurodevelopmental processes are mutated, the result can be severe impairment in function, which can manifest as neurological or psychiatric disease.

How those kinds of neurodevelopmental defects actually lead to the emergence of particular pathological states – like psychosis or seizures or social withdrawal – is a mystery, however. Many researchers are trying to tackle this problem using mouse models – animals carrying mutations known to cause autism or schizophrenia in humans, for example. A recent study from my own lab (open access in PLoS One) adds to this effort by examining the consequences of mutation of an important neurodevelopmental gene and providing evidence that the mice end up in a state resembling psychosis. In this case, we start with a discovery in mice as an entry point to the underlying neurodevelopmental processes.

In just the past few years, over a hundred different mutations have been discovered that are believed to cause disorders like autism or schizophrenia. In many cases, particular mutations can actually predispose to many different disorders, having been linked in different patients to ADHD, epilepsy, mental retardation or intellectual disability, Tourette’s syndrome, depression, bipolar disorder and others. These clinical categories may thus represent more or less distinct endpoints that can arise from common neurodevelopmental origins.

For a condition like schizophrenia, the genetic overlap with other conditions does not invalidate the clinical category. There is still something distinctive about the symptoms of this disorder that needs to be explained. I have argued that schizophrenia can clearly be caused by single mutations in any of a very large number of different genes, many with roles in neurodevelopment. If that model is correct, then the big question is: how do these presumably diverse neurodevelopmental insults ultimately converge on that specific phenotype? It is, after all, a highly unusual condition. The positive symptoms of psychosis – hallucinations and delusions, for example – especially require an explanation. If we view the brain from an engineering perspective, then we can say that the system is not just not working well – it is failing in a particular and peculiar manner.

To try to address how this kind of state can arise we have been investigating a particular mouse – one with a mutation in a gene called Semaphorin-6A. This gene encodes a protein that spans the membranes of nerve cells, acting in some contexts as a signal to other cells and in other contexts as a receptor of information. It has been implicated in controlling cell migration, the guidance of growing axons, the specification of synaptic connectivity and other processes. It is deployed in many parts of the developing brain and required for proper development in the cerebral cortex, hippocampus, thalamus, cerebellum, retina, spinal cord, and probably other areas we don’t yet know about.

Despite widespread cellular disorganisation and miswiring in their brains, Sema6A mutant mice seem overtly pretty normal. They are quite healthy and fertile and a casual inspection would not pick them out as different from their littermates. However, more detailed investigation revealed electrophysiological and behavioural differences that piqued our interest.

Because these animals have a subtly malformed hippocampus, which looks superficially like the kind of neuropathology observed in many cases of temporal lobe epilepsy, we wanted to test if they had seizures. To do this we attached electrodes to their scalp and recorded their electroencephalogram (or EEG). This technique measures patterned electrical activity in the underlying parts of the brain and showed quite clearly that these animals do not have seizures. But it did show something else – a generally elevated amount of activity in these animals all the time.


What was particularly interesting about this is that the pattern of change (a specific increase in alpha frequency oscillations) was very similar to that reported in animals that are sensitised to amphetamine – a well-used model of psychosis in rodents. High doses of amphetamine can acutely induce psychosis in humans and a suite of behavioural responses in rodents. In addition, a regimen of repeated low doses of amphetamine over an extended time period can induce sensitisation to the effects of this drug in rodents, characterised by behavioural differences, like hyperlocomotion, as well as the EEG differences mentioned above. Amphetamine is believed to cause these effects by inducing increases in dopaminergic signaling, either chronically or in response to acute stimuli.

This was of particular interest to us, as that kind of hyperdopaminergic state is thought to be a final common pathway underlying psychosis in humans. Alterations in dopamine signaling are observed in schizophrenia patients (using PET imaging) and also in all relevant animal models so far studied.

To explore possible further parallels to these effects in Sema6A mutants we examined their behaviour and found a very similar profile to many known animal models of psychosis, namely hyperlocomotion and a hyper-exploratory phenotype (in addition to various other phenotypes, like a defect in working memory). The positive symptoms of psychosis can be ameliorated in humans with a number of different antipsychotic drugs, which have in common a blocking action on dopamine receptors. Administering such drugs to the Sema6A mutants normalised both their activity levels and the EEG (at a dose that had no effect on wild-type animals).

These data are at least consistent with (though they by no means prove) the hypothesis that Sema6A mutants end up in a hyperdopaminergic state. But how do they end up in that state? There does not seem to be a direct effect on the development of the dopaminergic system – Sema6A is at least not required to direct these axons to their normal targets.

Our working hypothesis is that the changes to the dopaminergic system emerge over time, as a secondary response to the primary neurodevelopmental defects seen in these animals.

It is well documented that early alterations, for example to the hippocampus, can have cascading effects over subsequent activity-dependent development and maturation of brain circuits. In particular, it can alter the excitatory drive to the part of the midbrain where dopamine neurons are located, in turn altering dopaminergic tone in the forebrain. This can induce compensatory changes that ultimately, in this context, may prove maladaptive, pushing the system into a pathological state, which may be self-reinforcing.

For now, this is just a hypothesis and one that we (and many other researchers working on other models) are working to test. The important thing is that it provides a possible explanation for why so many different mutations can result in this strange phenotype, which manifests in humans as psychosis. If this emerges as a secondary response to a range of primary insults then that reactive process provides a common pathway of convergence on a final phenotype. Importantly, it also provides a possible point of early intervention – it may not be possible to “correct” early differences in brain wiring but it may be possible to prevent them causing transition to a state of florid psychopathology.

Rünker AE, O’Tuathaigh C, Dunleavy M, Morris DW, Little GE, Corvin AP, Gill M, Henshall DC, Waddington JL, & Mitchell KJ (2011). Mutation of Semaphorin-6A disrupts limbic and cortical connectivity and models neurodevelopmental psychopathology. PloS one, 6 (11) PMID: 22132072

Mitchell, K., Huang, Z., Moghaddam, B., & Sawa, A. (2011). Following the genes: a framework for animal modeling of psychiatric disorders BMC Biology, 9 (1) DOI: 10.1186/1741-7007-9-76

Mitchell, K. (2011). The genetics of neurodevelopmental disease Current Opinion in Neurobiology, 21 (1), 197-203 DOI: 10.1016/j.conb.2010.08.009

Howes, O., & Kapur, S. (2009). The Dopamine Hypothesis of Schizophrenia: Version III–The Final Common Pathway Schizophrenia Bulletin, 35 (3), 549-562 DOI: 10.1093/schbul/sbp006

Jump-starting regeneration of injured nerves



Unlike in many other animals, injured nerve fibres in the mammalian central nervous system do not regenerate – at least not spontaneously. A lot of research has gone into finding ways to coax them to do so, unfortunately with only modest success. The main problem is that there are many reasons why central nerve fibres don’t regenerate after an injury – tackling them singly is not sufficient. A new study takes a combined approach to hit two distinct molecular pathways in injured nerves and achieves substantial regrowth in an animal model.

Many lower vertebrates, like frogs and salamanders, for example, can regrow damaged nerves quite readily. And even in mammals, nerves in the periphery will regenerate and reconnect, given enough time. But nerve fibres in the brain and spinal cord do not regenerate after an injury. Researchers trying to solve this problem focused initially on figuring out what is different about the environment in the central versus the peripheral nervous system in mammals.

It was discovered early on that the myelin – the fatty sheath of insulation surrounding nerve fibres – in the central nervous system is different from that in the periphery. In particular, it inhibits nerve growth. A number of groups have tried to figure out what components of central myelin are responsible for this activity. Myelin is composed of a large number of proteins, as well as lipid membranes. One of these, subsequently named Nogo, was discovered to block nerve growth. This discovery prompted understandable excitement, especially because an antibody that binds that protein was found to promote regrowth of injured spinal nerves in the rat. (It even prompted a film, Extreme Measures, with Gene Hackman and Hugh Grant – an under-rated thriller with some surprisingly accurate science and some very serious medical malfeasance).

Unfortunately, the regrowth in rats that is promoted by blocking the Nogo protein is very limited. Similarly, mice that are mutant for this protein or its receptor show very minor regeneration. What is observed in some cases is extra sprouting of uninjured axons downstream of the spinal injury site. This can lead to some minor recovery of function but it’s really remodelling, rather than regeneration.

But it does suggest an answer to the question: why would we have evolved a system that seems actively harmful, that prevents regeneration after an injury? Well, first, the selective pressure in mammals to be able to regenerate damaged nerves is probably not very great, simply because injured animals would not typically get the chance to regenerate in the wild. And second, it suggests that the function of proteins like Nogo may not be to prevent regeneration but to prevent sprouting of nerve fibres after they have already made their appropriate connections. A lot of effort goes into wiring the nervous system, with exquisite specificity – once that wiring pattern is established, it probably pays to actively keep it that way.

There are a number of reasons why blocking the Nogo protein does not allow nerves to fully regenerate. First, it is not the only protein in myelin that blocks growth – there are many others. Second, the injury itself can give rise to scarring and inflammation that generates a secondary barrier. And third, neurons in the mature nervous system may simply not be inclined to grow. (Not only that – the distances they may have to travel in the fully grown adult may be orders of magnitude longer than those required to wire the nervous system up during development. There are nerves in an adult human that are almost a metre long but these connections were first formed in the embryo when the distance was measured in millimetres.)

This last problem has been addressed more recently, by researchers asking if there is something in the neurons themselves that changes over time – after all, neurons in the developing nervous system grow like crazy. That propensity for growth seems to be dampened down in the adult nervous system – again, once the nervous system is wired up, it is important to restrict further growth.

Researchers have therefore looked for biochemical differences between young (developing) neurons and mature neurons that have already formed connections. The hope is that if we understand the molecular pathways that differ we might be able to target them to “rejuvenate” damaged neurons, restoring their internal urge to grow. The lab of Zhigang He at Harvard Medical School has been one of the leaders in this area and has previously found that targeting either of two biochemical pathways allowed some modest regeneration of injured neurons. (They study the optic nerve as a more accessible model of central nerve regrowth than the spinal cord).

In a new study recently published in Nature, they show that simultaneously blocking both these proteins leads to remarkably impressive regrowth – far greater than simply an additive effect of blocking the two proteins alone. The two proteins are called PTEN and SOCS3 – they are both intracellular regulators of cell growth, including the ability to respond to extracellular growth factors. The authors used a genetic approach to delete these genes two weeks prior to an injury and found that regrowth was hugely promoted. That is obviously not a very medically useful approach, however; more important is to show that deleting them after the injury can permit regeneration, and indeed, this is what they found. Presumably, neurons in this “grow, grow, grow!” state are either insensitive to the inhibitory factors in myelin or the instructions for growth can override these factors.

They went on to characterise the changes that occur in the neurons when these genes are deleted and observed that many other proteins associated with active growth states are upregulated, including ones that get repressed in response to the injury itself. The hope now is that drugs may be developed to target the PTEN and SOCS3 pathways in human patients, especially those with devastating spinal cord injuries, to encourage damaged nerves to regrow. As with all such discoveries, translation to the clinic will be a difficult and lengthy process, likely to take years and there is no guarantee of success. But compared to previous benchmarks of regeneration in animal models, this study shows what looks like real progress.

Sun F, Park KK, Belin S, Wang D, Lu T, Chen G, Zhang K, Yeung C, Feng G, Yankner BA, & He Z (2011). Sustained axon regeneration induced by co-deletion of PTEN and SOCS3. Nature, 480 (7377), 372-5 PMID: 22056987

The use of heritability in policy development

4 Comments

A cross post from Evolving Economics:

The heritability straw man has copped another bashing, this time in the Journal of Economic Perspectives. In it, Charles Manski picks up an old line of argument by Goldberger from 1979 and argues that heritability research is uninformative for the analysis of policy.

Manski starts by arguing that heritability estimates are based on the assumption that there is no gene-environment correlation. Manski writes:

The assumption that g and e are uncorrelated is at odds with the reasonable conjecture that persons who inherit relatively strong genetic endowments tend to grow up in families with more favorable environments for child development.

Any review of discussions of heritability, whether in the peer-reviewed literature or the blogosphere, will show that his claim is generally false. The proviso that the heritability estimate is only relevant to the existing environment is usually threaded through any discussion of heritability.

It is true that gene-environment covariance can affect estimates of heritability. Yet this does not mean that existing estimates have no value, nor that there are not methods that seek to account for the covariance. For example, the use of comparisons between misdiagnosed identical twins and actual identical twins allows for bounded estimates of heritability to be developed (pdf).

Manski’s broader claim, adopted directly from Goldberger, is that even if you knew the heritability of a trait, it tells you nothing about social policy. Manski uses Goldberger’s eyeglasses example as an illustration:

Consider Goldberger’s use of distribution of eyeglasses as the intervention. For simplicity, suppose that nearsightedness derives entirely from the presence of a particular allele of a specific gene. Suppose that this gene is observable, taking the value g = 0 if a person has the allele for nearsightedness and g = 1 if he has the one that yields normal sight.

Let the outcome of interest be effective quality of sight, where “effective” means sight when augmented by eyeglasses, should they be available. A person has effective normal sight either if he has the allele for normal sight or if eyeglasses are available. A person is effectively nearsighted if that person has the allele for nearsightedness and eyeglasses are unavailable.

Now suppose that the entire population lacks eyeglasses. Then the heritability of effective quality of sight is one. What does this imply about the usefulness of distributing eyeglasses as a treatment for nearsightedness? Nothing, of course. The policy question of interest concerns effective quality of sight in a conjectured environment where eyeglasses are available. However, the available data only reveal what happens when eyeglasses are unavailable.

Manski and Goldberger may be correct that the heritability estimate is uninformative as to the efficacy of distributing eyeglasses, but it is useful in assessing other policy responses to the problem and the trade-offs between them. Is it possible to prevent the eyesight loss in the first place? Is that policy cheaper and more effective than eyeglasses? If the heritability estimate was zero, you would look to the environmental causes and ask whether the eyesight problem is more appropriately dealt with by addressing the cause rather than by distribution of eyeglasses.

There is no shortage of other areas where heritability estimates might add value. Heritability estimates can inform whether it is an effective use of resources to make sure that everyone has a university degree or is over six feet tall. Is everyone putty in the hands of the policy maker, or are there some constraints? On a personal level, Bryan Caplan’s use of heritability in Selfish Reasons to Have More Kids is a useful input to his parenting strategy.

For me, the most salient example of the usefulness of heritability research comes from examination of the heritability of IQ among children. Among high socioeconomic status families, the heritability tends to be high. Among low socioeconomic status families, it is significantly lower. This suggests that there is significant room to improve the outcomes of the children at the bottom of the socioeconomic ladder in the early years of their life (assuming those changes have effects that persist into adulthood). Increasing heritability of IQ might be evidence that environmental disadvantages are being ameliorated and opportunity equalised.

The latter part of Manski’s paper turns to the use of genes as covariates in statistical regressions. Regression identifies statistical association and not causation, which appears to be an important point in attracting Manski to this use. Noting the wealth of data being created and the possibility of observing changes in the effect of genes as the environment changes, Manski considers that these regression exercises may assist in examining how genes and environment interact.

I don’t disagree with Manski, but at present, genome association studies have plenty of issues. First, there is the missing heritability problem. To date, the identified genetic variants for most traits account for a minuscule proportion of those traits’ heritability. This points to the important role played by heritability research in providing direction to research on genes as covariates. It also indicates that until these genes are found, heritability estimates will be more informative for social policy.

A second issue is that with 30,000-odd genes and the ability to test so many of them for correlation with traits, many are found to have a statistically significant relationship through chance alone. As blogged about recently by Razib, this is shown when people seek to replicate earlier results – such as the finding that most reported genetic associations with general intelligence are probably false positives (pdf).

Finally, genome based research is now feeding back into estimates of heritability. From a recent paper:

We conducted a genome-wide analysis of 3511 unrelated adults with data on 549 692 single nucleotide polymorphisms (SNPs) and detailed phenotypes on cognitive traits. We estimate that 40% of the variation in crystallized-type intelligence and 51% of the variation in fluid-type intelligence between individuals is accounted for by linkage disequilibrium between genotyped common SNP markers and unknown causal variants. These estimates provide lower bounds for the narrow-sense heritability of the traits.

Despite all the critiques about methodology, most new studies confirm that the old “methodologically poor” heritability estimates were in the right ballpark. The problem is not that the estimates are not useful, but rather that they are not used.

Manski, C. (2011). Genes, Eyeglasses, and Social Policy. Journal of Economic Perspectives, 25 (4), 83-94. DOI: 10.1257/jep.25.4.83

Why Eurasians aren’t very pale

1 Comment

A few years ago I wondered offhand why Eurasians weren’t very pale, since East Asians and Europeans developed light skin at different loci over the past few tens of thousands of years. In hindsight the answer seems pretty obvious. I realized the solution when looking at the skin pigmentation loci in my parents’ genotypes. They’re both homozygous for the derived “light” variant of SLC24A5, but interestingly my father has more “light” alleles overall than my mother. This is peculiar because my mother is notably lighter complected than my father. Then I realized that my mother likely carried an East Asian allele which conferred light complexion, since she’s ~15% East Asian. So of course the reason that East Asian-European hybrids aren’t exceedingly pale is that pigmentation is predominantly additive in its effect on trait value, and hybrids are heterozygous at many loci where their parental populations are homozygous.

On the other hand, the F2 generation might potentially be very light indeed (or very dark)…

What is a gene “for”?

11 Comments

“Scientists discover gene for autism” (or ovarian cancer, or depression, cocaine addiction, obesity, happiness, height, schizophrenia… and whatever you’re having yourself). These are typical newspaper headlines (all from the last year) and all use the popular shorthand of “a gene for” something. In my view, this phrase is both lazy and deeply misleading and has caused widespread confusion about what genes are and do and about their influences on human traits and disease.

The problem with this phrase stems from the ambiguity in what we mean by a “gene” and what we mean by “for”. These can mean different things at different levels and unfortunately these meanings are easily conflated. First, a gene can be defined in several different ways. From a molecular perspective, it is a segment of DNA that codes for a protein, along with the instructions for when and where and in what amounts this protein should be made. (Some genes encode RNA molecules, rather than proteins, but the general point is the same). The function of the gene on a cellular level is thus to store the information that allows this protein to be made and its production to be regulated. So, you have a gene for haemoglobin and a gene for insulin and a gene for rhodopsin, etc., etc. (around 25,000 such genes in the human genome). The question of what the gene is for then becomes a biochemical question – what does the encoded protein do?

But that is not the only way or probably even the main way that people think about what genes do – it is certainly not how geneticists think about it. The function of a gene is commonly defined (indeed often discovered) by looking at what happens when it is mutated – when the sequence of DNA bases that make up the gene is altered in some way which affects the production or activity of the encoded protein. The visible manifestation of the effect of such a mutation (the phenotype) is usually defined at the organismal level – altered anatomy or physiology or behaviour, or often the presence of disease. From this perspective, the gene is defined as a separable unit of heredity – something that can be passed on from generation to generation that affects a particular trait. This is much closer to the popular concept of a gene, such as a gene for blue eyes or a gene for breast cancer. What this really means is a mutation for blue eyes or a mutation for breast cancer.

The challenge is in relating the function of a gene at a cellular level to the effects of variation in that gene, which are most commonly observed at the organismal level. The function at a cellular level can be defined pretty directly (make protein X) but the effect at the organismal level is much more indirect and context-dependent, involving interaction with many other genes that also contribute to the phenotype in question, often in highly complex and dynamic systems.

If you are talking about a simple trait like blue eyes, then the function of the gene at a molecular level can actually be related to the mutant phenotype fairly easily – the gene encodes an enzyme that makes a brown pigment. When that enzyme is not made or does not work properly, the pigment is not made and the eyes are blue. Easy-peasy.

But what if the phenotype is in some complex physiological trait, or even worse, a psychological or behavioural trait? These traits are often defined at a very superficial level, far removed from the possible molecular origins of individual differences. The neural systems underlying such traits may be incredibly complex – they may break down due to very indirect consequences of mutations in any of a large number of genes.

For example, mutations in the genes encoding two related proteins, neuroligin-3 and neuroligin-4, have been found in patients with autism, and there is good evidence that these mutations are responsible for the condition in those patients. Does this make them “genes for autism”? That phrase really makes no sense – the function of these genes is certainly not to cause autism, nor is it to prevent autism. The real link between these genes and autism is extremely indirect. The neuroligin proteins are involved in the formation of synaptic connections between neurons in the developing brain. If they are mutated, then the connections that form between specific types of neurons are altered. This changes the function of local circuits in the brain, affecting their information-processing parameters and changing how different regions of the brain communicate. Ultimately, this impacts on neural systems controlling things like social behaviour, communication and behavioural flexibility, leading to the symptoms that define autism at the behavioural level.

So, mutations in these genes can cause autism, but these are not genes for autism. They are not even usefully or accurately thought of as genes for social behaviour or for cognitive flexibility – they are required, along with the products of thousands of other genes, for those faculties to develop.

But perhaps there are other genetic variants in the population that affect the various traits underlying these faculties – not in such a severe way as to result in a clinical disorder, but enough to cause the observed variation across the general population. It is certainly true that traits like extraversion are moderately heritable – i.e., a fair proportion of the differences between people in this trait are attributable to genetic differences. When someone asks “are there genes for extraversion?”, the answer is yes if they mean “are differences in extraversion partly due to genetic differences?”. If they mean the function of some genetic variant is to make people more or less extroverted, then they have suddenly (often unknowingly) gone from talking about the activity of a gene or the effect of mutation of that gene to considering the utility of a specific variant.

This suggests a deeper meaning – not just that the gene has a function, but that it has a purpose – in biological terms, this means that a particular version of the gene was selected for on the basis of its effect on some trait. This can be applied to the specific sequence of a gene in humans (as distinct from other animals) or to variants within humans (which may be specific to sub-populations or polymorphic within populations).

While geneticists may know what they mean by the shorthand of “genes for” various traits, it is too easily taken in different, unintended ways. In particular, if there are genes “for” something, then many people infer that the something in question is also “for” something. For example, if there are “genes for homosexuality”, the inference is that homosexuality must somehow have been selected for, either currently or under some ancestral conditions. Even sophisticated thinkers like Richard Dawkins fall foul of this confusion – the apparent need to explain why a condition like homosexual orientation persists. Similar arguments are often advanced for depression or schizophrenia or autism – that maybe in ancestral environments, these conditions conferred some kind of selective advantage. That is one supposed explanation for why “genes for schizophrenia or autism” persist in the population.

Natural selection is a powerful force, but that does not mean every genetic variation we see in humans was selected for, nor does it mean every condition affecting human psychology confers some selective advantage. In fact, mutations like those in the neuroligin genes are rapidly selected against in the population, due to the much lower average number of offspring of people carrying them. The problem is that new ones keep arising – in those genes and in thousands of other genes required to build the brain. By analogy, it is not beneficial for my car to break down – this fact does not require some teleological explanation. Breaking down occasionally in various ways is not a design feature – it is just that highly complex systems carry a higher risk of failure because so many components can go wrong.

So, just because the conditions persist at some level does not mean that the individual variants causing them do. Most of the mutations causing disease are probably very recent and will be rapidly selected against – they are not “for” anything.


Jamain S, Quach H, Betancur C, Råstam M, Colineaux C, Gillberg IC, Soderstrom H, Giros B, Leboyer M, Gillberg C, Bourgeron T, & Paris Autism Research International Sibpair Study (2003). Mutations of the X-linked genes encoding neuroligins NLGN3 and NLGN4 are associated with autism. Nature genetics, 34 (1), 27-9 PMID: 12669065

Does brain plasticity trump innateness?

19 Comments

The fact that the adult brain is very plastic is often held up as evidence against the idea that many psychological, cognitive or behavioural traits are innately determined. At first glance, there does indeed appear to be a paradox. On the one hand, behavioural genetic studies show that many human psychological traits are strongly heritable and thus likely determined, at least in part, by innate biological differences. On the other, it is very clear that even the adult brain is highly plastic and changes itself in response to experience.

The evidence on both sides is very strong. In general, for traits like intelligence and personality characteristics such as extraversion, neuroticism or conscientiousness, among many others, the findings from genetic studies are remarkably consistent. Just as for physical traits, people who are more closely related resemble each other for psychological traits more than people with a more distant relationship. Twin study designs get around the obvious objection that such similarities might be due to having been raised together. Identical twins tend to be far more like each other for these traits than fraternal twins, though the family environment is shared in both cases. Even more telling, identical twins who are raised apart tend to be pretty much as similar to each other as pairs who are raised together. Clearly, we come fairly strongly pre-wired and the family environment has little effect on these kinds of traits.

Yet we know the brain can “change itself”. You could say that is one of its main jobs in fact – altering itself in response to experience to better adapt to the conditions in which it finds itself. For example, as children learn a language, their auditory system specialises to recognise the typical sounds of that language. Their brains become highly expert at distinguishing those sounds and, in the process, lose the ability to distinguish sounds they hear less often. (This is why many Japanese people cannot distinguish between the sounds of the letters “l” and “r”, for example, and why many Westerners have difficulty hearing the crucial tonal variations in languages like Cantonese). Learning motor skills similarly improves performance and induces structural changes in the relevant brain circuits. In fact, most circuits in the brain develop in an experience-dependent fashion, summed up by two adages: “cells that fire together, wire together” and “use it or lose it”.

Given the clear evidence for brain plasticity, the implication would seem to be that even if our brains come pre-wired with some particular tendencies, experience, especially early experience, should be able to override them.

I would argue that the effect of experience-dependent development is typically exactly the opposite – that while the right kind of experience can, in principle, act to overcome innate tendencies, in practice, the effect is reversed. The reason is that our innate tendencies shape the experiences we have, leading us to select ones that tend instead to reinforce or even amplify these tendencies. Our environment does not just shape us – we shape it.

A child who is naturally shy – due to innate differences in the brain circuits mediating social behaviour, general anxiety, risk-aversion and other parameters – will tend to have less varied and less intense social experience. As a result, they will not develop the social skills that might make social interaction more enjoyable for them. A vicious circle emerges – perhaps intense practice in social situations would alter the preconfigured settings of a shy child’s social brain circuits but they tend not to get that experience, precisely because of those settings. In contrast, their extroverted classmates may, by constantly seeking out social interactions, continue to develop this innate faculty.

This circle may be most vicious in children with autism, most of whom have a reduced level of innate interest in other people. They tend, for example, not to find faces as intrinsically fascinating as other infants. This may contribute to a delay in language acquisition, as they miss out on interpersonal cues that strongly facilitate learning to speak.

A similar situation may hold for children who have difficulties in reading or with mathematics. Dyslexia seems to be caused by an innate difficulty in associating the sounds and shapes of letters. This can be traced to genetic effects during early development of the brain, which may cause interruptions in long-range connections between brain areas. This innate disadvantage is cruelly amplified by the typical experience of many dyslexics. Learning to read is hard enough and requires years of practice and active instruction. For children who have basic difficulties in recognising letters and words, reading remains effortful for far longer and they will therefore tend to read less, missing out on the intensive practice that would help their brain circuitry specialise for reading.

Though less widely known, dyscalculia (a selective difficulty in mathematics) is equally common and shares many characteristics with dyslexia. The initial problem is in innate number sense – the ability to estimate and compare small numbers of objects. This faculty is present in very young infants and even shared with many other animal species, notably crows. Formal mathematical instruction is required to build on this innate number sense but also crucially relies on it. As with reading, mathematics requires hard work to learn and if numbers are inherently mysterious then this will change the nature of the child’s experience, lessen interest and reduce practice. At the other end of the spectrum, those with strong mathematical talent may gravitate towards the subject, further amplifying the differences between these two groups.

Thus, while a certain type of experience can alter the innate tendency, the innate tendency makes getting that experience far less likely. Brain plasticity tends instead to amplify initial differences.

That sounds rather fatalistic, but the good news is that this vicious circle can be broken if innate difficulties are recognised early enough – by actively changing the nature of early experience. There is good evidence that intense early intervention in children with autism (such as Applied Behaviour Analysis) allows them to compensate for innate deficits and leads to improvements in cognitive, communication and adaptive skills. Similarly intense intervention in children with dyslexia has also proven effective. Thus, even if it is not possible to reverse whatever neurodevelopmental differences lead to these kinds of deficits, it should at least be possible to prevent their being amplified by subsequent experience.

Duff FJ, & Clarke PJ (2011). Practitioner Review: Reading disorders: what are the effective interventions and how should they be implemented and evaluated? Journal of child psychology and psychiatry, and allied disciplines, 52 (1), 3-12 PMID: 21039483

Vismara, L., & Rogers, S. (2010). Behavioral Treatments in Autism Spectrum Disorder: What Do We Know? Annual Review of Clinical Psychology, 6 (1), 447-468 DOI: 10.1146/annurev.clinpsy.121208.131151

Human nature and libertarianism

20 Comments

A cross-post from Evolving Economics:

There is another interesting topic in this month’s Cato Unbound, with Michael Shermer arguing in the lead essay that human nature is best represented by the libertarian political philosophy.

Shermer (rightly) spends most of the essay shooting down the blank slate vision of humans that underpins many policies on the left, and suggests that moderates on both the left and right should accept a “Realistic vision” of human nature. He then simply states that the libertarian philosophy best represents this vision. Unfortunately, Shermer provides no explanation of why that might be the case and, in particular, does not detail why libertarianism might better reflect human nature than conservatism.

In the first response to Shermer’s essay, Eliezer Yudkowsky puts Shermer’s argument as such:

[B]ecause variance in IQ seems to be around 50% genetic and 50% environmental, the Soviets were half right. And that this, in turn, makes libertarianism the wise, mature compromise path between liberalism and conservatism.

Yudkowsky’s response to this argument is spot on:

In every known culture, humans experience joy, sadness, disgust, anger, fear, and surprise. In every known culture, these emotions are indicated by the same facial expressions. …

Complex adaptations like “being a little selfish” and “not being willing to work without reward” are human universals. The strength might vary a bit from person to person, but everyone’s got the same machinery under the hood, we’re just painted different colors.

Which means that trying to raise perfect unselfish communists isn’t like reading Childcraft books to your kid, it’s like trying to read Childcraft books to your puppy.

The Soviets were not 50% right, they were entirely wrong. They weren’t quantitatively wrong about the amount of variance due to the environment, they were qualitatively wrong about what environmental manipulations could do in the face of built-in universal human machinery.

Shermer’s argument was a change from the line of reasoning that I have heard from him before, which is that if the left understood that capitalism is an emergent system like evolution, they would be more accepting of it. I find that argument even less convincing. My understanding of evolution provides one of the strongest challenges to my libertarian leanings – evolution is full of wasteful competition for relative status and what is good for the individual is often not good for the group.

The weakness of these arguments is probably reflected in the deeper rationale for Shermer’s libertarianism. As Yudkowsky questions, is human nature the real reason for Shermer’s libertarianism?

Would Michael Shermer change his mind and become a liberal, if these traits were shown to be 10% hereditary?

… Before you stake your argument on a point, ask yourself in advance what you would say if that point were decisively refuted. Would you relinquish your previous conclusion? Would you actually change your mind? If not, maybe that point isn’t really the key issue.

Yudkowsky’s answer to the question of why he is a libertarian is similar to mine:

When I ask myself this question, I think my actual political views would change primarily with my beliefs about how likely government interventions are in practice to do more harm than good. I think my libertarianism rests chiefly on the empirical proposition—a factual belief which is either false or true, depending on how the universe actually works—that 90% of the time you have a bright idea like “offer government mortgage guarantees so that more people can own houses,” someone will somehow manage to screw it up, or there’ll be side effects you didn’t think about, and most of the time you’ll end up doing more harm than good, and the next time won’t be much different from the last time.

A human nature thread could underlie some of this explanation, with the nature of individuals in government and bureaucracy shaping the outcomes from government intervention. However, an understanding of human nature, in itself, does not settle the case for libertarianism. It may provide some support, but it provides just as many challenges.

10 Questions for Charles C. Mann

No comments

Over at Discover Blogs. Mostly about 1493: Uncovering the New World Columbus Created.

Analysis of a Tutsi genotype

1 Comment

With this post, “Tutsi probably differ genetically from the Hutu,” I hope to tamp down all the talk about how the Belgians invented the Tutsi-Hutu division. After putting the call out, it took two months for me to get my hands on a genotype, and less than 24 hours to post some results.

Thoughts on the BGI IQ study

4 Comments

I’ve been following the development of the BGI study on IQ pretty closely. I wanted to note two main caveats people should be aware of with regard to its methodology.

First, as with any case-control study, volunteer bias will be an issue. If the cases are a certain class of very smart people, rather than a representative sample, then genes peculiar to that class of smart people will show up as hits. The BGI study is choosing people who are more math-oriented than verbal-oriented; will math-specific genes show up as general intelligence genes? Other confounds along these lines are possible – PhD genes, Ashkenazi genes, curiosity-about-new-studies genes, etc.

Second, because the study doesn’t completely control for family environments (possible only by comparing siblings to each other), gene-environment correlations and interactions can cause problems as well. For example, suppose that high IQ parents also confer better environments for their children. Then the IQ gene effects will get an extra “boost” from that environment.

None of this is to downgrade the awesomeness of the BGI study. It should be viewed as an important step in resolving the nature vs nurture controversy. Overeager journalists and bloggers are urged to wait a few more years before we finally resolve the IQ debate.

Looking for a few good 145+ I.Q. individuals

5 Comments

Cross-posted from Discover

My friend Steve Hsu gave a talk at Google today. Here are the details:

I’ll be giving a talk at Google tomorrow (Thursday August 18) at 5 pm. The slides are here. The video will probably be available on Google’s TechTalk channel on YouTube.

The Cognitive Genomics Lab at BGI is using this talk to kick off the drive for US participants in our intelligence GWAS. More information at www.cog-genomics.org, including automatic qualifying standards for the study, which are set just above +3 SD. Participants will receive free genotyping and help with interpreting the results. (The functional part of the site should be live after August 18.)

Title: Genetics and Intelligence

Abstract: How do genes affect cognitive ability? I begin with a brief review of psychometric measurements of intelligence, introducing the idea of a “general factor” or IQ score. The main results concern the stability, validity (predictive power), and heritability of adult IQ. Next, I discuss ongoing Genome Wide Association Studies which investigate the genetic basis of intelligence. Due mainly to the rapidly decreasing cost of sequencing, it is likely that within the next 5-10 years we will identify genes which account for a significant fraction of total IQ variation.

We are currently seeking volunteers for a study of high cognitive ability. Participants will receive free genotyping.

From what I recall of my discussion with Steve, the aim here is to fish in the extreme tail of the distribution to see if that allows for an easier catchment of alleles that increment I.Q. upward. Three standard deviations above the mean in I.Q. corresponds to about 1 out of every 750 individuals.
