I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.
Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.
Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See NZ Blog Rankings FAQ.
This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.
Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.
Here are the rankings of New Zealand blogs with publicly available statistics for December 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.
You can see data for previous months at Blog Ranks
Uncritical use of science to support a preconceived position is widespread – and it really gets up my nose. I have no respect for the person, often an activist, who uncritically cites a scientific report. Often they will cite a report which they have read only the abstract of – or not even that. Sometimes commenters will support their claims by producing “scientific evidence” which are simply lists of citations obtained from PubMed or Google Scholar.
[Yes, readers will recognise this is a common behaviour with anti-fluoride activists]
Unfortunately, this problem is not restricted to activists. Too often I read scientific papers with discussions where authors have simply cited studies that support, or they interpret as supporting, their own preconceived ideas or hypotheses. Compounding this scientific “sin” is the habit of some authors who completely refuse to cite, or even discuss, studies producing evidence that doesn’t fit their scientific prejudices.
Publication does not magically make scientific findings or ideas “true” – far from it. The serious reader of scientific literature must constantly remember that the chances are high that published conclusions or findings are false. John Ioannidis makes this point in his article Why most published research findings are false. Ioannidis concentrates on the poor use, or misuse, of statistics. This is a constant problem in scientific writing – and it certainly underlines the fact that even scientists will consciously or unconsciously manipulate their data to confirm their biases. They use statistical analysis the way a drunk uses a lamppost – for support rather than illumination.
Poor studies often used to fool policymakers
These problems are often not easily understood by scientists themselves, but the situation is much worse for policymakers. They are not trained in science and don’t have the scientific or statistical experience required for a proper critical analysis of claims made to them by activists. Yet they are often called on to make decisions which rely on the acceptance, or rejection, of scientific claims (or claims about the science).
These authors take an anti-fluoride activist position and are campaigning against community water fluoridation (CWF). Their paper uses their own studies – which report only weak, and often statistically non-significant, relationships of child IQ with fluoride intake – as “proof” of causation strong enough to advocate for regulatory guidelines. Unsurprisingly, their recommended guidelines are very low – much lower than the fluoride exposure levels common with CWF.
Sadly, their sciencey sounding advocacy may convince some policymakers. It is important that policymakers be exposed to a critical analysis of these studies and their arguments. The authors will obviously not do this – they are selling their own biases. I hope that any regulator or policymaker required to make decisions on these recommendations have the sense to call for an independent, objective and critical analysis of the paper’s claims.
[Note: The purpose of the medRxiv preprints of non-peer-reviewed articles is to enable and invite discussion and comments that will help in revising the article. I submitted comments on the draft article over a month ago (Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”) and have had no response from the authors. This lack of response to constructive critiques is, unfortunately, common for this group. I guess one can only comment that scientists are human.]
A big problem with published science today is that many studies are nothing more than observational exploratory studies using existing databases which, by their nature, cannot be used to derive causes. Yet they can easily be used to derive statistically significant links or relationships. These can be used to write scientific papers, but they are simply not evidence of causes.
Well-designed studies, with proper controls and randomised populations representative of the different groups, may provide reasonable evidence of causal relationships – but most reported studies are not like this. Most observational studies use existing databases with non-random populations, where selection and confounding by other factors are huge problems. Authors are often silent about selection problems and may claim to control for important confounding factors, but it is impossible to include all confounders. The databases used may not include data for relevant confounders, and authors themselves may not properly select all relevant confounders for inclusion.
This sort of situation makes some degree of data mining likely. This occurs when a number of different variables and measures of outcomes are considered in the search for statistically significant relationships. Jim Frost illustrated the problems with this approach. Using a set of completely fictitious random data he was able to obtain a statistically significant relationship with very low p-values and an R-squared value showing 61% of the variance explained (see Jim Frost – Regression Analysis: An Intuitive Guide).
That is the problem with observational studies where some degree of data mining is often involved. It is possible to find relationships which look good, have low p-values and relatively high R-squared values, but are entirely meaningless. They represent nothing.
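The scale of this effect is easy to demonstrate with a simulation. The Python sketch below is purely illustrative – all the data are random noise and the function names are my own. It tests 100 noise “predictors” against a noise outcome; at the conventional p<0.05 cut-off, a handful will typically look “statistically significant” by chance alone.

```python
import random

random.seed(1)

def corr(x, y):
    """Pearson correlation coefficient, plain-Python implementation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def perm_pvalue(x, y, n_perm=200):
    """Two-sided permutation p-value for the correlation of x and y."""
    observed = abs(corr(x, y))
    y_shuffled = list(y)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(y_shuffled)
        if abs(corr(x, y_shuffled)) >= observed:
            hits += 1
    return hits / n_perm

n = 50                                                # "study" size
outcome = [random.gauss(0, 1) for _ in range(n)]      # pure noise
predictors = [[random.gauss(0, 1) for _ in range(n)]  # 100 unrelated
              for _ in range(100)]                    # noise variables

pvals = [perm_pvalue(p, outcome) for p in predictors]
significant = sum(1 for p in pvals if p < 0.05)
print(f"{significant} of 100 pure-noise predictors look 'significant' at p<0.05")
```

This is, in miniature, what happens when many variables and outcome measures are screened for relationships: the 5% false-positive rate is applied over and over until something “significant” turns up.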
So readers and users of science should beware. The findings they are given may be completely false or contradictory, or at least meaningless in quantitative terms (as is the case with the relationships produced by the Grandjean et al 2020 group discussed above).
A recent scientific article provides a practical example of this problem. Different authors used the same surgical database but produced completely opposite findings (see Childers et al (2020) Same Data, Opposite Results? A Call to Improve Surgical Database Research). By itself, each study may have looked convincing. Both used the same large database from the same year, both analysed over 10,000 samples, and both were published in the same journal within a few months of each other. However, the inclusion and exclusion criteria used were different. Large numbers of possible covariates were considered, but these differed between the studies, as did the outcome measures.
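The mechanics of this are easy to reproduce. The sketch below uses entirely invented data – the case types, effect sizes and sample size are assumptions for illustration. It builds one dataset in which a treatment helps elective cases but harms emergency ones; two “studies” that apply different exclusion criteria to the same records then reach opposite conclusions.

```python
import random

random.seed(3)

# Hypothetical surgical records: (case_type, treated, outcome_score).
# The true treatment effect differs in sign between the two case types.
records = []
for _ in range(5000):
    case_type = random.choice(["elective", "emergency"])
    treated = random.random() < 0.5
    base = 70 if case_type == "elective" else 50
    effect = 5 if case_type == "elective" else -5  # opposite true effects
    outcome = base + (effect if treated else 0) + random.gauss(0, 10)
    records.append((case_type, treated, outcome))

def treatment_effect(rows):
    """Mean outcome difference: treated minus untreated."""
    t = [o for _, tr, o in rows if tr]
    u = [o for _, tr, o in rows if not tr]
    return sum(t) / len(t) - sum(u) / len(u)

# "Study 1" excludes emergency cases; "Study 2" excludes elective ones.
study1 = [r for r in records if r[0] == "elective"]
study2 = [r for r in records if r[0] == "emergency"]

print("Study 1 (elective only):  effect =", round(treatment_effect(study1), 1))
print("Study 2 (emergency only): effect =", round(treatment_effect(study2), 1))
```

Neither analysis is dishonest – each simply carved a different population out of the same database, and the choice of inclusion criteria determined the conclusion.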
Readers interested in the details can read the original studies or the Sceptical Scalpel blog article Dangerous pitfalls of database research. However, Childers et al (2020) describe how the number of this sort of observational study “has exploded over the past decade.” As they say:
“The reasons for this growth are clear: these sources are easily accessible, can be imported into statistical programs within minutes, and offer opportunities to answer a diverse breadth of questions.”
However:
“With increased use of database research, greater caution must be exercised in terms of how it is performed and documented.”
“. . . because the data are observational, they may be prone to bias from selection or confounding.”
Problems for policymakers and regulators
Given that many scientists do not have the statistical expertise to properly assess published scientific findings it is understandable for policymakers or regulators to be at a loss unless they have proper expert advice. However, it is important that policymakers obtain objective, critical advice and not simply rely on the advocates who may well have scientific degrees. Qualifications by themselves are not evidence of objectivity and, undoubtedly, we often do face situations where scientists become advocates for a cause.
I think policymakers should consciously seek out a range of scientific expert advice, recognising that not all scientists are objective. Given the nature of current observational research, its use of existing databases and the ease with which researchers can obtain statistically significant relationships I also think policymakers should consciously seek the input of statisticians when they seek help in interpreting the science.
Surely they owe this to the people they represent.
Here are the rankings of New Zealand blogs with publicly available statistics for November 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.
You can see data for previous months at Blog Ranks
In my time as a scientific researcher, honest scientists used to condemn colleagues who over-hyped their science. To our mind there should have been a special place in hell for scientists who misrepresented their findings or dishonestly described their significance.
That sort of self-promotional behaviour is probably understandable for reasons of ambition – or even the attempt to secure future funding. And these self-promoting scientists usually moved on rather quickly into higher-paid administrative jobs. Not exactly to that special place in hell – but maybe their promotion away from active research reduced the damage their personal self-hyping could do to science in the long run (although I did often wonder about the damage they did to science with their administrative decisions).
A recent article (Hype isn’t just annoying, it’s harmful to science and innovation) got me thinking of this problem again – and to realise we are facing a classic case of this self-promotional over-hyping in recent science related to community water fluoridation (CWF).
Readers may pick up that I am referring to the behaviour of a North American research group which has been indulging in a wave of self-promotion – promotion which involves misrepresenting their own findings and the significance of those findings. I have discussed the research findings of this group in a number of posts – including the following:
More recently they have produced a video promoting and misrepresenting the significance of their work – a video which is being gleefully used by anti-fluoride activists in their propaganda. (There has been a bit of a dance over this video, which has been roundly criticised scientifically and taken down or moved several times, so links often don’t work. A recent appearance was on the New Zealand anti-fluoride Facebook page.)
Group members have also attacked, in a very unprofessional way, fellow scientists who have critiqued their work (see for example When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics). On social media, they have attempted to close down any critical discussion of their work – and in a similar manner, they purposely ignore, or even attempt to hide, studies that do not support their claims. (At the personal level, I have had a member of this group refuse to fulfil their prior undertaking to peer review a draft paper of mine – presumably because on reading it she became aware that my paper discussed flaws in their work.)
In support of my contention that this group is over-hyping their findings, and unprofessionally using this misrepresentation to give support to anti-fluoride activists, I will briefly list below what their findings were.
The table below lists their data together with that of Broadbent et al (2015).
I think it unprofessional for this group to ignore their own data while at the same time lending support to activists who claim that CWF harms children’s brains. Perhaps they assume that this finding could not be hyped to promote their standing and ambitions. So, instead, they have diverted attention to another part of their work – the relationship between child IQ and measures of fluoride consumption.
Occasional weak relationships of child IQ with fluoride intake
While ignoring some other data – which is unprofessional in itself – they have devoted their promotional material to just one part of their findings: the few cases where they are able to demonstrate a relationship, albeit only a weak one, of child IQ with fluoride intake as measured by drinking-water fluoride content, estimated fluoride intake or urinary fluoride levels.
I have discussed problems with this approach in my articles listed above but will stress here that the relationships are usually not statistically significant, or very weak when significant (explaining only a few per cent of the variance in IQ), and suffer from inadequate consideration of possible important confounders or other risk-modifying factors. This is a common problem with the sort of “fishing expedition” that involves statistically searching existing databases in an attempt to confirm a bias.
The figure below shows the relationships considered in the two studies. Most are simply not statistically significant. In a recent article (see Perrott, K. W. (2020). Health effects of fluoridation on IQ are unproven. New Zealand Medical Journal, 133(1522), 177–179) I describe it this way:
“Multiple measures of both cognitive factors and fluoride exposure are used, producing many relationships. Only four of the ten relationships reported by Green et al were statistically significant (p<0.05). Similarly, only three of the twelve relationships reported by Till et al were statistically significant. There is a danger that reported relationships could be misleading – as the proverb says, “If you torture your data long enough, they will tell you whatever you want to hear.””
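The arithmetic behind that proverb is straightforward. If each test is run at the 5% significance level and the tests were independent (a simplifying assumption – correlated measures change the exact numbers), the chance of at least one spurious “significant” result grows quickly with the number of relationships examined:

```python
# Family-wise false-positive probability for k independent tests,
# each run at the p < 0.05 significance level
for k in (1, 4, 10, 12, 22):
    print(f"{k:2d} tests: {1 - 0.95 ** k:.0%} chance of at least one false positive")
```

With the ten relationships of Green et al and the twelve of Till et al, a small number of “significant” results is close to what chance alone would be expected to deliver.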
Even if the reported relationships correctly reflected reality (being a “fishing expedition,” the chances are they don’t), their concentration on such weak relationships (explaining only a few per cent of the variance) could be actively diverting attention away from more important factors. Although this group has been very shy about making their data available for other researchers to check, the data they have published indicate that regional and ethnic differences may be making a much bigger contribution to child IQ.
A big problem (always glossed over by those promoting this work) is that the studies are exploratory, using existing databases rather than experiments specifically designed to answer the relevant questions. Reported relationships may support preconceived beliefs, but it is easy to ignore important confounders or risk-modifying factors (which properly designed experiments would attempt to minimize).
I highlighted the problem of inadequate consideration of other factors in my article critiquing an early paper from this group (see Perrott, K. W. (2018). Fluoridation and attention deficit hyperactivity disorder – a critique of Malin and Till (2015). British Dental Journal, 223(11), 819–822). In this case, I showed that when regional factors (in that case elevation) were included in the statistical analysis, the relationship of ADHD prevalence with the extent of fluoridation that Malin & Till (2015) reported simply disappeared.
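The way an omitted regional confounder can manufacture a relationship is easy to simulate. In the sketch below all the data are made up (500 hypothetical districts, with assumed effect sizes): elevation lowers both ADHD prevalence and, for unrelated geographic reasons, the extent of fluoridation. A naive regression then shows a strong ADHD–fluoridation association which largely disappears once elevation is partialled out of both variables (the Frisch–Waugh approach).

```python
import random

random.seed(2)

def slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def resid(y, x):
    """Residuals of y after a simple regression on x."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Made-up data for 500 hypothetical districts: elevation reduces both
# ADHD prevalence and (for unrelated geographic reasons) fluoridation.
elevation = [random.uniform(0, 2000) for _ in range(500)]
fluoridation = [0.8 - 0.0003 * e + random.gauss(0, 0.1) for e in elevation]
adhd = [11.0 - 0.002 * e + random.gauss(0, 0.8) for e in elevation]

naive = slope(fluoridation, adhd)
# Frisch-Waugh: partial elevation out of both variables, then regress
adjusted = slope(resid(fluoridation, elevation), resid(adhd, elevation))

print(f"naive ADHD~fluoridation slope:       {naive:.2f}")
print(f"slope after adjusting for elevation: {adjusted:.2f}")
```

The naive slope here is pure confounding: fluoridation and ADHD are linked only through their shared dependence on elevation.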
It is worth adding that in subsequent reports from this group my critique has been completely ignored and they still cite the flawed Malin & Till (2015) study as reliable. I think that is very unprofessional, but it does align with their tactics of self-promotion and over-hyping of their work.
“Acting this way has a cost. It’s not just about allowing people to feel awe: it’s about empowering those who are not professional scientists or technologists to be able to participate, instead of being spoon-fed a whizz-bang watered-down version of science as cheap entertainment. Hype doesn’t just obscure the reality of what’s going on in science and technology – it makes it less interesting. It’s time we start to look past it and delight in what lies beyond.”
So as honest working researchers we were right to resent self-promotional hype and, perhaps, to wish that a place in hell was reserved for these ambitious self-promoters.
But, looking back, I can recognize that scientists are human and, like everyone else, fallible. It is easy to see how people will place ambition over the truth and why they resort to hyping their science for ambitious reasons. I can also recognise, as Ioannidis (2005) reported, that “Most Published Research Findings Are False.” I believe Ioannidis is basically correct and there are big problems with the scientific literature, which contains reports from so many studies based on the sort of exploratory statistical analysis indulged in by this North American group.
It’s inevitable that such poor science will be seized on by those with political, commercial and ideological agendas to support their claims. This has been done by the anti-fluoride activist groups. For the rest of us it is a matter of reading the scientific literature intelligently and critically. And I mean all the literature, not just that related to fluoridation, vaccination and similar “hot topics.”
And, in the end, the truth will out. Poor science, self-promoting ambition and hype do get exposed, as do the faults in the promotional messages. And new research and data usually provide the context for a proper evaluation of the claims made by those who currently hype their work.
Here are the rankings of New Zealand blogs with publicly available statistics for September 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.
You can see data for previous months at Blog Ranks
Here are the rankings of New Zealand blogs with publicly available statistics for August 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.
You can see data for previous months at Blog Ranks
Finally, I have got back to these rankings. A long period in hospital and computer problems meant the June 2020 rankings were abandoned. Now, with a new personal defibrillator implanted in my chest and a new hard drive in my computer it is back to business.
Here are the rankings of New Zealand blogs with publicly available statistics for July 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.
You can see data for previous months at Blog Ranks
Anti-fluoride propagandists continually cite studies from areas of endemic fluorosis in their arguments against community water fluoridation (CWF). But if they critically looked at the data in those papers they might get a shock. Invariably the published data, even from areas of endemic fluorosis, shows fluoride is safe at the concentrations relevant to CWF.
I have completed a detailed analysis of all 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.
In this article, I discuss the studies in the FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which report relationships between child IQ and fluoride exposure in areas of endemic fluorosis. There are eleven such studies in the FAN list but only six of them provide sufficient data to enable independent statistical analysis.
While those six studies do show a statistically significant (p<0.05) negative relationship of IQ with fluoride intake, those results are not relevant to CWF because the fluoride exposure levels are much higher than ever occur with CWF.
However, it is possible to investigate whether the relationships are significant at the lower concentrations more relevant to CWF. I have done this with these six studies and illustrate the results with the graphs below, using the data extracted from Xiang et al (2003). (This study is often used by anti-fluoride campaigners.)
The red data points in the figures below are for lower concentrations of urinary F or creatinine-adjusted urinary F. The range for the red points is still quite a bit larger than the urinary F levels measured for children in areas where CWF is used. However, we can see that the relationships at these lower ranges are not statistically significant (results from the regression analyses are cited in the figures).
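The approach is straightforward to sketch. The Python code below uses made-up data shaped like the Xiang et al pattern (IQ declining only at high urinary fluoride) – all the numbers are illustrative assumptions, not Xiang’s actual data. A regression over the full range is strongly significant, while refitting only the low-exposure subset relevant to CWF typically shows no detectable relationship.

```python
import random

random.seed(0)

def slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def perm_pvalue(x, y, n_perm=1000):
    """Two-sided permutation p-value for the slope of y on x."""
    observed = abs(slope(x, y))
    y_shuffled = list(y)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(y_shuffled)
        if abs(slope(x, y_shuffled)) >= observed:
            hits += 1
    return hits / n_perm

# Illustrative data: IQ is flat below ~1.5 mg/L urinary F and
# declines only above that threshold (endemic-fluorosis territory).
urinary_f = [random.uniform(0.2, 4.0) for _ in range(300)]
iq = [100 - 3.0 * max(0.0, f - 1.5) + random.gauss(0, 8) for f in urinary_f]

print("full range: slope =", round(slope(urinary_f, iq), 2),
      " p =", perm_pvalue(urinary_f, iq))

# Restrict to the low range relevant to CWF (urinary F < 1.0 mg/L)
low = [(f, q) for f, q in zip(urinary_f, iq) if f < 1.0]
xl = [f for f, _ in low]
yl = [q for _, q in low]
print("low range:  slope =", round(slope(xl, yl), 2),
      " p =", perm_pvalue(xl, yl))
```

The point of the restriction is that the full-range relationship is driven entirely by the high-exposure observations, which say nothing about safety at CWF-relevant concentrations.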
This was also the case with the other studies from FAN’s list which provided sufficient data for regression analyses. I summarise the results obtained for five of these studies in the figure below.
This shows that none of the studies found statistically significant relationships with fluoride exposure at the low fluoride concentrations relevant to CWF. The situation is basically the same for the sixth study, Mustafa et al (2018), which reports average school subject performances for a range of subjects for children in Khartoum state, Sudan. However, it is hard to know what the safe limit for fluoride exposure is in that climate (for climatic reasons the upper permissible F level in drinking water is set at 0.33 ppm for Khartoum state) and the sample numbers are low. Interested readers should consult my report – Analysis of FAN’s 65 brain-fluoride studies.
In Child IQ in countries with endemic fluorosis imply fluoridation is safe I showed that while IQ and other health problems may occur where fluoride exposure is very high in areas of endemic fluorosis, the reports themselves implicitly assume that the low fluoride exposure in the “low fluoride” areas is safe. It is the data from these areas, not the “high fluoride” areas, that are relevant to CWF. So, despite the heavy use of these articles by FAN and anti-fluoride activists, these studies do not prove what they claim. If anything, they show CWF is safe.
In this article, I considered a few of these studies which included data relevant to low fluoride exposure. When the low fluoride exposure data (relevant to CWF) from these studies were statistically analysed none of them showed significant relationships of child IQ to fluoride exposure. That confirms the implicit assumption from these studies that there is no negative effect of fluoride exposure on child IQ at these low levels.
Finally, in Canadian studies confirm findings of Broadbent et al (2015) – fluoridation has no effect on child IQ, I summarise results from the only three studies comparing IQ for children living in fluoridated and unfluoridated areas. These studies were made in New Zealand and Canada, and the results were the same: no statistically significant differences in child IQ were found.
However, the authors of the Canadian studies ignored this result and instead used questionable statistical methods to search for possible relationships between fluoride exposure and child IQ. Most of the relationships they report were not statistically significant but, nevertheless, they and their supporters have simply ignored this and concentrated on the few statistically significant relationships.
Anti-fluoride activists currently rely strongly on these studies and heavily promote them. I will discuss these few studies further in my next article.
Readers may remember the scathing reaction of anti-fluoride campaigners to the paper of Broadbent et al (2015). This was the first paper to compare child and adult IQ levels for people living in fluoridated and unfluoridated areas.
The anti-fluoride campaigners were extremely rude in their reaction – accusing the authors of fraud and claiming the paper was “fatally flawed.” Interestingly, several scientists known for their anti-fluoride bias also launched attacks – but more respectably, as letters to the editor of the journal. For example, see articles by Osmunson et al (2016), Grandjean (2015), and Menkes et al (2014).
And why? Simply because Broadbent et al (2015) showed there was no difference in the IQ of people living in fluoridated and unfluoridated areas, and that the studies from areas of endemic fluorosis used by anti-fluoride activists to argue against CWF were just not relevant (see Child IQ in countries with endemic fluorosis imply fluoridation is safe).
But isn’t it strange? Two more recent papers (Green et al 2019 & Till et al 2020) have effectively repeated the work of Broadbent et al (2015). They found the same result – no difference in IQ of children living in fluoridated and unfluoridated areas. And simply no reaction, no condemnation from anti-fluoride activists or the anti-fluoride scientists.
No condemnation because these anti-fluoride critics promote these papers for other reasons. But this underlines how biased the critics of the Broadbent et al (2015) paper were.
I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.
In this article, I discuss the studies in FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which compare child IQ in “fluoridated” and “unfluoridated” areas in Canada. There are only two such studies, but I include that of Broadbent et al (2015) (which FAN’s list ignores) for completeness. All three studies found no difference in the IQ of children living in fluoridated and unfluoridated areas.
Comparing IQ of children in fluoridated and unfluoridated areas
The table below summarises the results reported by all three studies – Broadbent et al (2015), Green et al (2019), and Till et al (2020).
Table 1: Results from studies comparing IQ of children and adults from fluoridated and unfluoridated areas
Notes: Data from Green et al (2019) for children whose mothers lived in fluoridated or unfluoridated areas during pregnancy. Data from Till et al (2020) for children either breastfed or formula-fed as babies while living in fluoridated or unfluoridated areas.
There is absolutely no difference in IQ due to fluoridation. Remember, the standard deviations of the values in the table are about 13 to 16 IQ points.
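To see why differences of a fraction of an IQ point are meaningless against standard deviations of 13 to 16 points, it helps to express the difference as an effect size. The sketch below uses hypothetical numbers (not values taken from any of the studies) purely to illustrate the calculation:

```python
def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    """Cohen's d: difference between group means relative to the pooled SD."""
    return (mean_a - mean_b) / pooled_sd

# Hypothetical example: a 1-point mean IQ difference between fluoridated
# and unfluoridated groups, against a typical IQ standard deviation of 15.
d = cohens_d(100.0, 99.0, 15.0)
print(round(d, 3))  # 0.067
```

An effect size of 0.067 is far below the ~0.2 conventionally described as even a “small” effect, which is why such differences are indistinguishable from noise.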
I have presented all the results from these papers graphically below. FSIQ (Full Scale IQ) is the standard overall IQ measure. VIQ (Verbal IQ) and PIQ (Performance IQ) are components of FSIQ.
The only statistically significant differences between fluoridated and unfluoridated areas were for VIQ of breastfed babies (VIQ higher for fluoridated areas) and PIQ of formula-fed babies (PIQ lower for fluoridated areas).
Anti-fluoride campaigners (and biased scientists like Grandjean) love the Green et al (2019) and Till et al (2020) papers because they reported (very weak) negative relationships of some child cognitive measures with fluoride intake (I discuss this in separate articles). This is largely a result of the statistical methods used – particularly resorting to several different cognitive measures and measures of fluoride exposure, as well as the separation of results according to gender. It reminds me of the old saying that one can always get the results one requires by torturing the data hard enough.
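This “data torture” problem has a simple arithmetic illustration: the more separate tests run (different cognitive measures, different exposure measures, gender subgroups), the more likely at least one comes up “significant” by chance alone. A minimal sketch, with a hypothetical test count and assuming independent tests:

```python
def familywise_error(alpha: float, n_tests: int) -> float:
    """Chance of at least one false positive across n independent tests,
    each run at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** n_tests

# Hypothetical: 3 cognitive measures x 2 exposure measures x 2 genders = 12 tests
print(round(familywise_error(0.05, 12), 2))  # 0.46
```

So a dozen tests at the usual 0.05 level give nearly even odds of a spurious “significant” result somewhere, even when no real effect exists. (Real tests on the same data are correlated rather than independent, but the inflation is real either way.)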
I will return to the statistical problems of these and similar papers in a separate article.
Misrepresentation by anti-fluoride activists
Anti-fluoride campaigners have latched on to the two Canadian studies – often making claims that simply are not supported, while always ignoring the data shown above.
For example – this propaganda poster from FAN promoting the Green et al (2019) study.
This completely misrepresents the results of the study. No difference was found in the IQs of children from fluoridated and unfluoridated areas. These people completely ignore that result while placing unwarranted faith in the weak relationships reported elsewhere in that paper. (In fact, Green et al (2019) found a weak significant relationship only for boys – the relationships for all children and for girls were not significant. See my articles about this statistical torture).
And this FAN propaganda poster promoting the Till et al (2020) study.
Again – completely wrong. There was no difference in IQ of formula-fed babies in fluoridated and unfluoridated areas (see Table 1 above). Even worse – FAN is misrepresenting the statistical relationships reported in this paper, as there was no statistically significant relationship between child IQ and fluoride exposure for formula-fed or breastfed babies once the influence of outliers and/or confounders was considered.
Misrepresentation by anti-fluoride scientists
It is understandable, I guess, that the authors of the two Canadian papers make a lot of the poor statistical relationships they reported and ignore the fact that they did not see any effect of fluoridation. Perhaps they can be excused some bias due to professional ambition. But this underlines why sensible readers should always critically and intelligently read the papers in this controversial area. One should never rely on the public relations claims of authors and their institutes. But it is sad to see how scientific biases and ambitions can lead scientists to support the claims of political activists, or worse, to attack honest scientists who do post-publication peer review of the studies (see, for example, When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics).
I am also very critical of scientific supporters of these studies who have their own anti-fluoride motivations. Philippe Grandjean, for example, was one of the authors very critical of the Broadbent et al (2015) paper and ignored completely the fact that the Green et al (2019) and Till et al (2020) papers report exactly the same result – no effect of fluoridation on child IQ. Grandjean often makes public comments supporting the claims of anti-fluoride campaigners like FAN. He also behaved in a scientifically unethical way when he refused to allow my critique of the flawed paper by Malin & Till (2015) to be published in Environmental Health – the journal he acts as the chief editor of (see Fluoridation not associated with ADHD – a myth put to rest).
I am repeating myself but it is a matter of “reader beware.” Readers should not simply rely on the scientific “standing” of authors who are only human and suffer from the same biases as others. They should read these papers for themselves and make up their own mind about what the data actually says.
Anti-fluoride activists love to point out that people living in endemic fluorosis areas in countries like China suffer all sorts of health problems, including lower IQ. But studies of these areas show no lowering of IQ in the low fluoride areas relevant to community water fluoridation.
I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.
In this article, I discuss the studies in the FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which compare child IQ in areas of “low” and “high” fluoride in countries like China, Mexico, Iran, Egypt, and India where fluorosis is endemic. In fact, all these studies either assume or provide evidence that fluoride at the concentrations used for CWF is harmless.
IQ differences for “high” and “low” fluoride areas
FAN was really dredging through very poor research to find these studies. In fact, FAN had to go to the trouble of translating many of these studies because they were obscure and not available in English.
Of their 65 studies, 17 do not provide data for fluoride intake or for drinking water fluoride concentrations. Instead, they simply describe the “high” areas as endemic fluorosis areas or areas where people suffer severe dental or skeletal fluorosis. Several of the studies used “control” groups from areas of “slight” fluorosis or dental fluorosis in contrast to skeletal fluorosis.
Another 29 studies did provide water fluoride concentrations for the “low” fluoride and “high” fluoride areas. This data is useful as it enables us to consider how relevant the results are to CWF. I have summarised the data in Figure 1.
The take-home message from Figure 1 is that while these 29 studies do show a decrease in child IQ in areas of “high” fluoride, those areas are not relevant to CWF. In fact, the only data relevant to CWF come from the “low” fluoride areas, where there is the implicit assumption that child IQ is not affected. We can also assume this is the case for the 17 studies which do not provide details of fluoride exposure.
Figure 1: Comparison of water fluoride levels in “high” and “low” fluoride areas of 29 of the FAN studies and in areas where CWF is used.
So these 46 studies heavily promoted by FAN over recent years do not show any harm from CWF – in fact, all these studies implicitly assume there is no negative effect on child IQ at the “low” fluoride levels studied – and these are the areas most relevant to CWF. A simple consideration of the health problems faced by people living in areas of endemic fluorosis should have made it obvious that the data for high fluoride areas is simply not relevant. Consider these figures from Das et al (2016) – one never sees people like this in areas where CWF is used:
Dental fluorosis case found in the study area (age: 12, sex: male). Das et al (2016)
Skeletal fluorosis case found in the study area (age: 17, sex: male). Das et al (2016)
FAN is simply silly to suggest these studies, and especially the results for the “high” fluoride areas, are at all relevant to CWF.
Mind you, Paul Connett, FAN Director, likes to draw attention to one of these studies where he claims the “high” fluoride area has a drinking-water fluoride concentration of 0.81 mg/L, which is similar to that for CWF. He is simply dredging the data (and ignoring all the other studies he cites) to make this claim. The study he refers to was made in an area of iodine deficiency and is extremely weak – simply a page and a half in a Chinese newsletter. Have a look for yourself – Lin et al (1991).
In a future article, I will discuss the studies in FAN’s list which compare IQ for children from fluoridated and unfluoridated areas.