Cell Phones and The Brain

Wireless telephones, including both cellular phones and cordless home phones, emit electromagnetic radiation in the radiofrequency range. It’s been suggested in recent years that using a wireless phone on a regular basis could expose the brain to large doses of radiofrequency radiation, the risks of which are currently unknown. Particularly because children have rapidly developing brains, could using cell phones or cordless home phones increase the risk of cancer or have other negative health effects?

Radio waves are a type of electromagnetic radiation, which (though the word “radiation” makes them sound scary and dangerous) are very low in energy. As such, they are incapable of doing the kind of damage that, say, x-rays and nuclear radiation can do, as explained in a previous article. However, while radio waves can’t break chemical bonds like certain other types of radiation can, they are nevertheless a type of energy. In fact, cell phone use has been shown to increase the temperature of the skin with which the phone is in contact by more than 2°C over a period of less than 10 minutes (see, for instance, Anderson et al, Straume et al), though very little of this increased temperature is likely due to the radiation itself. It’s also not likely that much of the heat actually makes it through the skull into the brain. Still, because little is known about the potential effects of routine exposure of the brain to radiofrequency radiation, scientists continue to investigate the safety of wireless phones and similar devices.

Radiofrequency energy isn’t very penetrating; it’s absorbed by the head and hand of a cell phone user, but it can’t travel very far into the head. Therefore, if cell phones increase the risk of tumors, the tumors should appear in the regions of the brain nearest the ear. A recently published study with a very large number of participants (more than 350,000) examined the relationship between brain tumors and cell phone use. The authors found no correlation whatsoever, leading them to conclude that cell phone use does not increase the risk of brain tumors (Frei et al).

An even more recent study found, however, that cell phone use does alter the metabolism of glucose in the brain; specifically, using a cell phone increases the extent to which the regions of the brain nearest the phone antenna burn sugar (Volkow et al). Increased glucose metabolism (burning of sugar) is a sign that cells are working harder, so the results of this study suggest that cell phone use alters the operation of brain cells. The authors did not attempt to discern, nor did they propose, a mechanism for this effect. It remains to be determined why radiofrequency radiation would increase brain cell activity, and what ultimate effects that increased activity might have. An animal study, however, suggests that radiofrequency radiation might change certain functional parameters of brain cells (how easily excited they are, for instance), and might alter the release of neurotransmitters, which are brain cell communication molecules (Hyland). The potential involvement of neurotransmitters is a particularly distressing possibility when it comes to a child’s brain, which is still developing and which is quite sensitive to neurotransmitter concentrations (though it’s worth bearing in mind that, as of yet, the involvement of neurotransmitters is purely hypothetical).

Science Bottom Line:* There’s no evidence that cell phones cause cancer, but there is evidence that they affect brain activity, and there’s not much yet known about how they do so, or what the long-term effects might be. A reasonable course of action in situations like this, in which the risks are poorly defined, is to proceed with caution. Using a cell phone for short periods during the day and/or infrequently for longer periods isn’t likely to be a problem, but you may wish to invest in a headset if you (or your child) use a cell phone frequently or for long periods of time on a regular basis.

Do you worry about the long-term health effects of cell phone use?

References:

Anderson et al. Measurements of skin surface temperature during mobile phone use. Bioelectromagnetics. 2007 Feb;28(2):159-62.

Frei et al. Use of mobile phones and risk of brain tumours: update of Danish cohort study. BMJ. 2011 Oct 19;343:d6387. doi: 10.1136/bmj.d6387.

Hyland, G. Physics and biology of mobile telephony. Lancet. 2000 Nov 25;356(9244):1833-6.

Straume et al. Skin temperature increase caused by a mobile phone: a methodological infrared camera study. Bioelectromagnetics. 2005 Sep;26(6):510-9.

Volkow et al. Effects of cell phone radiofrequency signal exposure on brain glucose metabolism. JAMA. 2011 Feb 23;305(8):808-13.

34 Comments

  1. Lorie
    Jan 13, 2012 @ 09:16:21

    I have no scientific background to speak with any expertise on this topic, but I am getting pretty good at recognizing credible sources. So, no, after reading and reading and critically evaluating the situation to the best of my ability, I don’t worry at all about cell phone or tower/antenna microwaves or radio waves. (I’m even less worried now than I used to be about microwave ovens, now that I understand more about how they work.)

    You might be interested in what critics of that glucose study had to say: http://spectrum.ieee.org/tech-talk/biomedical/imaging/jama-cell-phone-study-may-be-flawed
    E.g., the researchers hadn’t blinded themselves to prevent inadvertent bias, and more importantly, they didn’t control for subjects being able to sense heat on their face from the phone.

    Before I read these criticisms of that study, I had read that the glucose increase they detected was relatively minuscule, the same as one might experience by simply talking. The only point I see in their study’s results is to suggest that holding an activated but muted cell phone to your head activates your brain to the same degree as speaking. Who knows, maybe this could have a positive effect on people 😉

    Dr. Christopher Davis was one of the critics of this study, and I found a talk he did on YouTube. He is quite an expert in this field. I found all five parts interesting, but in part 4 he specifically addresses RF and children’s brains, at around the 11:00 mark, here: http://www.youtube.com/watch?v=HMP3TvRPbx0&feature=related

    That said, thanks for writing your critical analysis of this important topic! Personally I would have kept going with my critical analysis rather than begin to get concerned over the glucose study.

    • SquintMom
      Jan 13, 2012 @ 14:59:23

      I agree that the glucose study isn’t stand-alone evidence of much of anything; it was a sort of preliminary investigation which warrants further research. You mention that the glucose use increase was small (same as that experienced by simply talking), but what’s important is that the increase WASN’T in the areas of the brain used in talking, which means it wasn’t CAUSED by talking. Instead, it was in the areas of the brain closest to the antenna, and wasn’t found in a discrete group of functional regions, meaning that it wasn’t a brain activity that was causing the increased glucose use. Rather, it was an outside influence. THAT, to me, is interesting.

      • Lorie
        Jan 14, 2012 @ 02:01:16

        Sorry, I didn’t mean to imply that it was the same brain region involved in speaking, but rather was trying to put it in perspective; as the author herself said, the increased glucose response was close to the level reached when talking. She also reported the difference in brain activation was less than that involved in lifting a finger.

        To me, the fact that we literally don’t even have to lift a finger to get a 7% glucose brain response means that the researchers should have taken extra special care to control for at least the most obvious confounding variables (like heat). I think their results would have been much more compelling if they showed that the brain’s increased response switched from the right side to the left as the phone activation was switched, but they only tested the right side for some reason.

        But anyway, whatever difficulties some people (including myself) might have with this one study, several sources I looked at today that I tend to trust say there was good evidence that EMFs can increase glucose activation in the brain even before Volkow’s study. But scientific skepticism abounds about the importance of this finding. It seems pretty easy for most anything to evoke glucose metabolism in the brain (just thinking about something like food can evoke a much larger response than that found by Volkow), and meanwhile the preponderance of evidence from the most credible researchers and organizations hasn’t pointed to any ill effects (other than thermal dangers), so putting those together, I don’t feel worried about cell phone RF (other things, yes, like driving while talking on the phone, or worse: texting). I have a feeling we may never get to a final conclusion that RF waves are safe; it’s very hard to prove anything to be completely safe. The RF debate reminds me a lot of the vaccine debate.

      • SquintMom
        Jan 14, 2012 @ 22:02:30

        Just a quick response, and I’ll reply in greater depth later. There’s a big difference between suggesting that RF from, say, cell towers is potentially harmful, and suggesting the same of RF from a cell phone held to the ear. Because of the potential for risk (during long calls) and the absolutely easy solution of using a headset instead, I think it’s a pretty easy cost-to-benefit analysis.

      • Lorie
        Jan 15, 2012 @ 22:35:02

        No dispute there. It’s a choice people can make if they’re worried. Many people take precaution advice to mean there is good reason to fear, and I’m not seeing this (yet; still looking though). Millions and millions of studies are published and most turn out to be inaccurate – fodder for anyone to make a case for any agenda. I plan to watch those who watch the situation to see if these preliminary non-thermal effect studies hold up to close scrutiny over time. Glad science has a way of self-correcting over time, but unfortunately that could take decades. Technology is changing so fast, and cell companies are reducing RF emissions anyway (they like to save energy too; it’s costly for them).
        BTW, I really like what you’re doing with your blog. I see it as a service to the public, so THANKS! You’ve renewed my interest in this topic after I studied it deeply a while back when my community protested cell antennas on a movie theatre. Now I really want to know what the deal is on non-thermal effects. Eyes open.

      • SquintMom
        Jan 16, 2012 @ 03:15:51

        Thanks for your kind words. One thing I’d say, though, as a science “insider,” is that I disagree with the statement that most studies turn out to be inaccurate. This may be true of studies that aren’t published in peer-reviewed journals, and it is certainly true of the exaggerated claims that the popular media ascribes to the studies in question, but it’s not true of the studies themselves.

  2. Lorie
    Jan 13, 2012 @ 09:38:13

    One more thing…
    I looked up Hyland, since you mentioned his work as a source of concern, asking myself if he is a credible scientist in this field. I’m beginning to doubt that he is; looks fringe to me: http://www.quackometer.net/blog/2007/05/wi-fi-quackery-and-parliament.html

    • SquintMom
      Jan 13, 2012 @ 15:21:13

      Be careful, though; the fact that an individual is engaged in some dodgy science is not a sign that ALL their science is dodgy. To suggest so is to make a fallacious ad hominem argument. As an example, Linus Pauling was (appropriately) the winner of TWO Nobel prizes, and yet was also the initiator of an absolutely unproven (at the time, and disproven since) bit of quackery that survives to this day: the belief that large doses of vitamin C improve the immune system and general health.

      • Lorie
        Jan 14, 2012 @ 02:56:49

        True, but if a study looks dubious and you see that the scientist seems to have moved away from good scientific thinking, especially if they’re veering away from their own field of expertise at the same time (Hyland was a theoretical physicist), joining hands and mindset with those who promote seriously dubious claims (like homeopathy), I think this should factor into the credibility of their claims – especially as it’s not easy even for experts to directly evaluate all studies. I appreciate someone pointing out when a mainstream scientist is straying into pseudoscience. I believe that’s what Andy Lewis (quackometer blog owner) was pointing out in his assessment of Hyland, not attacking him personally in lieu of critiquing his studies.

        If someone had pointed this out about Pauling at the time, I might not have risked burning a hole in my stomach (that’s what it felt like, at least) from all the mega units of Vit. C my doctor, mother, and everyone else told me to take to prevent colds. My son’s daycare provider was slipping my toddler (at the time) mega units of Vit. C (without telling me, that’s how sure she was that it was safe and effective).

        Btw, do you know about this site: http://www.Sciencebasedmedicine.org ? They have helped me more than I can express to appreciate good science and realize its limitations as well.

  3. Lorie
    Jan 15, 2012 @ 22:43:23

    Here’s an opinion piece I wrote for my local paper, in case you want to read it. http://www.dailytidings.com/apps/pbcs.dll/article?AID=/20100927/OPINION01/9270305&emailAFriend=1

  4. Lorie
    Jan 16, 2012 @ 06:02:43

    (I didn’t see a reply button on your comment above, so I’ll reply here) You said “One thing I’d say, though, as a science “insider,” is that I disagree with the statement that most studies turn out to be inaccurate.”

    I should have put a reference with my statement about that. See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ John Ioannidis studies studies. It made more sense after I also discovered there are more than 1.3 million peer-reviewed studies published per year world-wide (if we can assume data from 2006 is representative of other years). He admits he can also be wrong in his own study’s conclusion. Can we say that the most reputable medical journals, at least, publish mostly accurate studies? I think these are the ones that Ioannidis looked at. It made my head hurt when I first heard about his contentions (a case of cognitive dissonance, I guess).

    • SquintMom
      Jan 16, 2012 @ 17:06:48

      I’d be happy to go with the conclusion that the better a publication’s reputation, the more likely it is to publish mostly accurate studies. Again, though, I want to make it clear that what a study ACTUALLY concluded and how the study’s conclusions were reported by others are often vastly different things.

      • Chris Maloney
        Jan 22, 2012 @ 03:54:39

        > Can we say that the most reputable medical journals, at least, publish mostly accurate studies?

        … and …

        > I’d be happy to go with the conclusion that the better a publication’s reputation, the more likely it is to publish mostly accurate studies.

        Actually, this turns out to be completely false. See, for example, this post: http://www.biotechniques.com/news/Higher-Impact-Factor-Higher-Retraction-Frequency/biotechniques-321112.html
        about this study: http://iai.asm.org/content/early/2011/08/08/IAI.05661-11.abstract that shows that the Impact Factor (IF) of a journal is positively correlated with its retraction rate. The simple explanation is that scientists are so motivated to publish in glamor mags that they tend to take shortcuts in their data gathering.

      • SquintMom
        Jan 22, 2012 @ 04:10:00

        This study only considered high-quality journals, however. I was referring to across-the-board reputation. That is to say, for instance, while Nature might retract more articles than the Journal of Organic Chemistry because of impact factor, there’s absolutely no doubt in my mind that more Nature articles contain ultimately good science than articles in the Journal of American Physicians and Surgeons. Also, the first point (that the most reputable journals publish MOSTLY accurate studies) is correct; the vast majority of studies published in high-impact, well-respected, peer-reviewed journals contain solid science, as demonstrated quite clearly by the link you provide. Even the highest retraction index reported (which was for the New England Journal of Medicine) was less than 4. Per the original paper, this means that fewer than 0.0004 of all published articles were retracted OVER THE COURSE OF A DECADE.

        That’s a pretty damn fine track record.

      • Chris Maloney
        Jan 22, 2012 @ 16:07:37

        > This study only considered high-quality journals

        I’m not sure where you’re getting that. From the article: “… we carried out a PubMed search for retracted articles among 17 journals ranging in impact factor between 2.00 to 53.484”. An IF of 2 is considered mediocre.

        > there’s absolutely no doubt in my mind that more Nature articles contain ultimately good science than articles in the Journal of American Physicians and Surgeons.

        The journal you pick as a counter-example is an outlier, I think — it’s not even indexed in MEDLINE or WoS, and has no IF.

        In my comment above, I was only trying to point out that the common misconception that journal reputation correlates with the quality of articles/science is not true. Impact factor is the de facto measure of journal reputation, and, from the article, “A plot of the journal retraction index versus the impact factor revealed a surprisingly robust correlation between the journal retraction index and its impact factor.”

        Now, it may be that higher-IF journals have different retraction policies, or something else unrelated to article quality, that causes articles to be retracted at a higher rate, but this statement is pretty clear: “Although correlation does not imply causality, this preliminary investigation suggests that the probability that an article published in a higher impact journal will be retracted is higher than that of an article published in a lower impact journal.”

        I wasn’t, in my comment above, trying to suggest that your first point (“that the most reputable journals publish MOSTLY accurate studies”) was incorrect. But, since you bring it up, I think it’s worthwhile to post another quote from the article:

        Most scientists feel that research misconduct is uncommon. However, a meta-analysis of survey data reported that 2% of scientists report having committed serious research misconduct at least once, and one-third admit to having engaged in questionable research practices (36). Given the stigma associated with retractions and the challenges in detecting misconduct, it is likely that retractions represent only the tip of the iceberg.

        Make no mistake — I think science works, and I agree that most journal articles are trustworthy. But the point here is that just because something is published in a high-reputation, peer-reviewed journal, doesn’t mean it is necessarily to be trusted. A case in point is the recent supplement published by Nature touting Traditional Chinese Medicine, which you can read about here, if you are interested: http://www.sciencebasedmedicine.org/index.php/shilling-for-traditional-chinese-medicine/

      • SquintMom
        Jan 23, 2012 @ 17:20:59

        Ok. You made a number of points, so I’ll try to get to them all. My responses are in bold. First, you said “I’m not sure where you’re getting that [in response to my claim that the study considered only high-quality journals]. From the article: “… we carried out a PubMed search for retracted articles among 17 journals ranging in impact factor between 2.00 to 53.484″. An IF of 2 is considered mediocre.”

        Please note that IF (impact factor) is not the same thing as reputation/quality. A journal can have a good reputation and a high impact factor (like Nature), a good reputation and a lower impact factor (like the Journal of Organic Chemistry), or a poor reputation (like the Journal of American Physicians and Surgeons). The study only considered high-quality (good reputation) journals, though they considered journals with all sorts of impact factors. Impact factor is a measure of how often a journal is CITED, not a measure of its scientific rigor.

        The next point you made was that “The journal you pick as a counter-example [Journal of American Physicians and Surgeons] is an outlier, I think — it’s not even indexed in MEDLINE or WoS, and has no IF.”

        Yes, that was an outlier. Just as Nature is arguably one of the two most influential and rigorous journals (and is therefore an outlier), I wanted to give an example of an outlier on the other end of the spectrum. Journals like the Journal of Organic Chemistry are more “average”.

        You say “Most scientists feel that research misconduct is uncommon. However, a meta-analysis of survey data reported that 2% of scientists report having committed serious research misconduct at least once, and one-third admit to having engaged in questionable research practices (36). Given the stigma associated with retractions and the challenges in detecting misconduct, it is likely that retractions represent only the tip of the iceberg.”

        2% of scientists represents a small minority, making the assertion that research misconduct is uncommon CORRECT. We can speculate that this 2% represents the tip of the iceberg, but it’s pure speculation. Since you go on to say that you think science works and that most articles are trustworthy, you and I are actually in agreement. If you recall, my disagreement with Lorie was in her suggestion that MOST articles are inaccurate. Sounds like you and I are on the same page after all.

      • Chris Maloney
        Jan 23, 2012 @ 17:55:00

        I never said that I disagreed with you, when you said that most articles are trustworthy. I don’t know why you’ve implied that, in this thread, twice now already. This was always about the contention that high-reputation journals → more trustworthy articles.

        You seem to have missed an important bit from my last comment: “Impact factor is the de facto measure of journal reputation”. I think this is true. If you want to do some statistical analysis involving journal reputation, you need some sort of measure that has a numeric value. So they used IF as a proxy for reputation. You say there are journals with a good reputation but low impact factor. That may be true, but how would you quantify it? Whether it’s a good thing or not, IF is probably the best way to measure reputation.

        Anyway, as an aside, the reason I’m belaboring this point is because, as an open-access advocate, I do think that, in general, scientists tend to give too much weight to journals’ reputations. This causes a lot of systemic problems downstream. Pressure to publish in high-impact (or high-reputation, if you prefer) journals means that open-access journals don’t get as much consideration as they should.

      • SquintMom
        Jan 23, 2012 @ 18:21:54

        You said:

        > Can we say that the most reputable medical journals, at least, publish mostly accurate studies?

        … and …

        > I’d be happy to go with the conclusion that the better a publication’s reputation, the more likely it is to publish mostly accurate studies.

        Actually, this turns out to be completely false.

        From that, I assumed you were suggesting that BOTH of my assertions were false; why else include both quotes? If that’s not what you meant to assert, fine.

        I didn’t miss your comment that impact factor is the de-facto measure of journal reputation, I ignored it. It may be true much of the time, but it isn’t CERTAINLY true ALL of the time. There is not, as far as I know, an ACTUAL measure of journal quality that incorporates impact, rigor, and accuracy, so we have to talk about impact and reputation as at least somewhat separate qualities.

        I’ll return us to the original discussion, shall I? I objected to Lorie’s comment that MOST science was incorrect. MOST science (that is published in well-conducted journals) is not inaccurate.

      • Chris Maloney
        Jan 23, 2012 @ 18:48:39

        Ah! I guess there’s been some miscommunication. You are right that the two quotes above say different things. Lorie’s original statement, I agree with. But when you paraphrased it, you seemed (to me at least) to be implying a correlation between a journal’s reputation and the accuracy of its studies. That’s the point I was trying to contend.

        Cheers!

  5. Lorie
    Jan 17, 2012 @ 02:59:34

    I hope so, but Ioannidis is talking about those journals. He’s concerned about the market, rather than quality, influencing decisions on what to publish, and the need to get published in a reputable journal influencing scientists’ submissions – leading to too much bias, sample sizes too small, negative findings hidden by scientists and rejected by journals while false positives naturally occurring by chance get published.
    I imagine too that the better journals have a greater impact on society when studies are misleading, as they’re trusted. A well-known example is the Lancet’s publication of Andrew Wakefield’s now retracted (took 10 years, I think) measles/autism study – just a small preliminary study but the public damage was huge; some people died avoiding the MMR vaccine.
    Along with the media overstating some studies’ findings, often the researchers themselves do it. Wakefield (along with the Lancet) is a good example of that too. He made his living from the MMR-autism scare.

    Anyway, this is a valuable topic for anyone interested in evidence-based medicine. I realized Ioannidis (he’s heading a dept. at Stanford now) is all about improving science, not condemning it.

    Thanks! This discussion has stimulated my brain glucose metabolism, big time. 😉

  6. Chris Maloney
    Jan 22, 2012 @ 04:54:33

    I’m new to this site, and I read this post first, because it’s something that I’ve read a bit about, and I wanted to see what you had to say about it, to get an initial impression of your site. I have to say that I found this post to be disappointingly weak.

    You start by explaining, accurately, that there is no reason to think that cellphones could cause harm — there is no plausible scientific hypothesis for the mechanism of any harm. But then you make a common error when you say, “because little is known about the potential effects …, scientists continue to investigate …”. No, it is not because so little is known that they continue to investigate. The only reasons that investigations continue are an irrational paranoia among the general public, based on superstitions, rumors, and memes that will not die; the political pressure that this paranoia engenders; and also (perhaps) some opportunistic pseudo-scientists who exploit this paranoia. The way you describe it, people can use the fact that there are ongoing investigations to bolster their argument that there might be cause to believe that there is a risk.

    As for the glucose study, the article Lorie points to succinctly debunks that, noting that the authors did not control for the temperature increase that cell phones cause. In light of this, I think you should amend your “science bottom line”, and retract “there is evidence that they affect brain activity”.

    I’d also suggest that it doesn’t make sense to advise people that wearing a headset is a rational choice based on any cost-benefit analysis. In any such analysis, any cost that’s measurably above zero (i.e. buying a headset, keeping track of it, putting it on) is too high, because the risk of any harm is so close to zero as to be astronomically infinitesimal. The problem with suggesting that wearing a headset is a prudent thing to do is that others will cite you as a credible source, and say “SquintMom says you should wear a headset”. This would be entirely true, but misleading; and unfortunately it just prolongs the agony of people who really make decisions based on evidence and reason.

    In my opinion, in the video that Lorie linked to above, Prof. Christopher Davis does a much better job of summing up the “science bottom line”:

    “The single most important fact … is there’s no increase of brain tumors on the planet, and if cell phones cause brain tumors … you should be seeing something…. The preponderance of the evidence, and the fact that these tumors are not increasing, and there’s no mechanism in my judgement as a scientist, tells me that it’s irrational to worry about something that’s just not there.”

    • SquintMom
      Jan 22, 2012 @ 14:13:53

      Well, let’s go through and look at what I said in the post. I started by explaining that there is no plausible scientific hypothesis for mechanism of harm. You claim I then made a common error, stating that scientists investigate because little is known about the potential effects. You claim scientists investigate because of paranoia. I don’t know what kind of experience you have with research, but the fact is that most scientists have to investigate those things for which they can get research dollars, and you can’t get funding to study something just because people are paranoid about it. To get funding, you have to have some evidence and/or hypothesis in place to suggest that you might get an interesting result (that hasn’t been obtained before), which means that my statement is more accurate in this case: scientists continue to investigate because there’s little known about potential long-term effects of exposure (and there’s continued curiosity in the scientific community as to whether there could be long-term effects).

      The fact that the glucose study failed to control for temperature doesn’t actually matter. Because of the lack of control, one can’t conclude from the study that the EMR from cell phones affects glucose metabolism, but one can conclude that the use of handset cellphones affects glucose metabolism. Is that a good thing? A bad thing? An unimportant thing? Who knows.

      Because a) the study provided good evidence that (regardless of why it happens) cell phones alter glucose metabolism, and because b) using a headset for long calls is essentially a risk-free action with the potential for benefit, there’s no reason not to use one.

      Here’s the thing; when you’re doing a risk-to-benefit analysis, if there’s a (significant) potential for risk in an activity, you need to be able to show a major potential for benefit in order to justify it. Here, there’s an unknown risk associated with using a cellphone handset, because while studies have demonstrated that phone use doesn’t cause cancer, studies have also shown that (because of EMR, because of temperature, or because of who-knows-what) cell phone handsets alter brain glucose metabolism. So…unknown risk. That makes it impossible to do a risk-to-benefit analysis. By adding in a headset, you remove the element of the unknown. You get to make your call (benefit), and you avoid the unknown risk of altering brain glucose metabolism.

      Please note that I wasn’t reactionary or extreme; I didn’t say always use a headset. In fact, I will re-post the entire text of the bottom line here, with emphasis (in bold) and comments (in italics):

      There’s no evidence that cell phones cause cancer, but there is evidence that they affect brain activity, note that I didn’t say it was the EMR responsible for the brain activity effects and there’s not much yet known about how they do so again, asserting that we don’t know HOW they affect glucose metabolism, so it doesn’t matter whether or not the study controlled for temperature, or what the long-term effects might be. A reasonable course of action in situations like this, in which the risks are poorly defined, is to proceed with caution this is a very measured response. I didn’t say “freak out,” I didn’t say “always use a headset.” Instead, I said: Using a cell phone for short periods during the day and/or infrequently for longer periods isn’t likely to be a problem, but you may wish to invest in a headset if you (or your child) use a cell phone frequently or for long periods of time on a regular basis. Why? Because using a cell phone handset for long periods of time is associated with an UNKNOWN risk of UNKNOWN origin with UNKNOWN significance.

      Hope that helps clear up my thinking on this topic! Thanks for your comments.

      Reply

  7. Lorie
    Jan 25, 2012 @ 11:19:47

    More belaboring…

    @ Squint mom, you said: “I objected to Lorie’s comment that MOST science was incorrect. MOST science (that is published in well-conducted journals) is not inaccurate.”

    I imagine truly “well-conducted” journals would have mostly accurate studies, but the issue Ioannidis raises is that prestigious journals aren’t well-conducted, or at least not in a way that would ensure accuracy and quality. I don’t know whether he says “most” prestigious/reputable journals publish inaccurate findings (he said “most published studies…”), but as I said, he finds serious fault with them. They are the ones most relied upon for accuracy, so even a 25% inaccuracy rate would be a sad state of affairs.

    Take this Kolkow study: we can see the large impact it has had on reinforcing public fear (as does urging the precautionary principle), yet it could very well turn out to be a completely inaccurate or misleading study. It’s just preliminary, unreplicated, not double-blinded, etc.

    Kolkow does insist the effects she saw were non-thermal, and I believe that matters, because thermal effects (basically, movement of molecules in cells that causes heat to build up as RF increases) have been well studied and found to be safe at such low power levels. Until we know where this study stands after critical evaluation, it’s not evidence of anything; it’s just interesting.

    Reply

    • SquintMom
      Jan 25, 2012 @ 14:49:52

      Where are you getting the idea that 25% of studies are inaccurate?? The highest retraction rate observed for high-quality journals (like Nature) was only hundredths of a percent.

      Reply

      • Lorie
        Jan 25, 2012 @ 19:20:11

        (Uh oh, using carets for emphasis made the content disappear. Here’s my complete post; I hope you can delete the previous one.)

        My 25% figure pertaining to prestigious journals was my own hypothetical, my point being that even if the “vast majority” are accurate, as you say, this could still leave a sizeable proportion of inaccurate studies.

        Retraction rate is the tiny tip of the iceberg, after much damage is done.

        Looking for a specific number reported by Ioannidis himself about prestigious journals alone, I see it’s much worse than my hypothetical 25%. Googling, I see Freedman says in his book Wrong: “The two-out-of-three wrongness rate Ioannidis found is worse than it sounds. He (Ioannidis) had been examining only the less than one-tenth of one percent of published medical research that makes it to the most prestigious medical journals.”

        Freedman says Ioannidis is offering an “extremely optimistic assessment” by looking at just the most prestigious journals and that the problem Ioannidis found wasn’t unique to the medical field.

        So, Ioannidis found a 66% wrongness rate among the most prestigious medical journals.

        Back to the subject at hand: if Ioannidis and others who study studies are accurate, the Volkow study is most likely wrong or misleading, or will end up being highly contested or found to be unimportant, or something along those lines, just because of the issues Ioannidis raises.
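        Ioannidis’s “most published findings are false” claim is, at bottom, a base-rate argument, and the arithmetic is easy to sketch. Here’s a minimal illustration of mine (not from the thread, and the priors below are purely hypothetical): given a study’s statistical power and significance threshold, the fraction of “positive” findings that are actually true depends heavily on how plausible the tested hypotheses were to begin with.

```python
# Sketch of the base-rate arithmetic behind "most published findings are
# false" (my illustration; the priors below are hypothetical, not data).

def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value: P(hypothesis true | significant result)."""
    true_positives = power * prior          # true hypotheses correctly detected
    false_positives = alpha * (1 - prior)   # false hypotheses passing p < alpha
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is actually true, most positives hold up:
print(f"{ppv(0.10):.2f}")  # → 0.64
# If only 1 in 50 is true (think exploratory fields), most positives are false:
print(f"{ppv(0.02):.2f}")  # → 0.25
```

        Nothing here says which prior describes real medical research; the point is just that a journal can apply p < 0.05 perfectly and still publish mostly false positives if the hypotheses it receives are mostly long shots.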

      • SquintMom
        Jan 26, 2012 @ 16:37:45

        Ioannidis’ paper is an intriguing journey through statistics, but it doesn’t particularly impress me, because he is not looking at actual retraction and/or error rates. Anyway, here are my thoughts in no particular order:

        1) I have faith in science
        2) The corollary to the above is that science is a journey, not a destination (yikes, that sounds like it should be on a greeting card or something). What I mean by this is:
        3) I put my faith in the scientific PROCESS, not in any one scientific article or finding.

        So, with regard to the cell phone study, the study itself could be wrong. Doesn’t bother me; if it is, the scientific process (that is to say, other researchers, more research) will sniff out the truth eventually.

        4) It’s a bad idea to put too many eggs in one basket (boy, I’m on a roll with the clichés today!). What I mean by this is that one shouldn’t ever put too much stock in a single study.

        With regard to the cell phone study, because this is a single study, it would not be wise to engage in a high-risk or high-cost action on the basis of the study findings. For instance, it would be really silly to say, “OMG! Cell phones make your brain heat up! Never use cell phones again!” However, a low-risk, low-cost action may be justified. For instance, I might say to myself, “Huh. Cell phones held near the head seem to affect glucose metabolism. Who knows why. That’s weird. What happens if I affect my brain glucose metabolism in odd ways on a regular basis? Who knows. Maybe it’s not a problem, but maybe it’s not a good thing. Oh well…using a headset when I talk for a long time on my cell doesn’t cost me anything, and isn’t a risk, so maybe I’ll do it just as a precaution.” That’s a very measured, sensible response. See what I mean?

        So in the end, I think you and I are in agreement. I suspect you’d agree with me that over time, the trajectory of science is a good one (science eventually gets us to the answer), but that the trajectory of any ONE study could be totally right, totally wrong, or somewhere in between…and therefore while we can put faith in the greater trajectory of science (and use it as an overarching framework for decision-making), we shouldn’t make life-and-death decisions or high-cost/high-risk decisions on the basis of a single study.

      • Lorie
        Jan 27, 2012 @ 01:50:07

        Darn, I meant to post my comment dated 1/27 as a reply to this one of yours dated 1/26. Hope your readers won’t get too confused.

  8. Lorie
    Jan 25, 2012 @ 11:21:15

    Sorry, I meant Volkow (not Kolkow).

    Reply

  9. Lorie
    Jan 27, 2012 @ 01:42:52

    Yeah, it looks like we’re on (mostly) the same page about science. I suppose I wouldn’t have bothered with all this discussion if we weren’t. I think Ioannidis would mostly agree too; I hope his efforts will help science bring real answers to the public sooner.

    Where you and I diverge is simply that I felt your article started out with strong critical analysis but then made Volkow’s conclusion sound more definitive and scary than I feel is warranted (it’s preliminary, not double-blinded, unreplicated, and found relatively tiny effects even if accurate, etc.). And while you think it’s worth urging relatively minimal precautions even if Volkow may well be wrong (although if she’s right, she pointed out that a little brain activation could prove to be a health benefit), I would argue that the PP comes with its own set of consequences when urged by credible sources (that was a compliment). Search Google: dangers of the precautionary principle.

    Anyway, your readers have all these comments here and can decide for themselves, and so we can just agree to disagree on those points.

    I’m afraid that if this discussion ends, I’ll have to use my cellphone to activate my brain instead 😉

    Reply

  10. Chris Maloney
    Jan 27, 2012 @ 03:29:32

    Here’s a good blog post about Ioannidis and this issue: http://gameswithwords.fieldofscience.com/2010/10/youre-wrong.html.

    Harking back to our earlier thread — I guess I may have been hasty in agreeing with you that most published articles in high-profile journals are accurate! I had read the Atlantic article about Ioannidis when it came out, but didn’t remember that he actually claims that *most* articles are wrong. Thanks, Lorie, for the link!

    Anyway, the blog post above delves into some of the reasons why we might expect many of these papers to be wrong. The reasons have to do with the incentive structure of the whole scientific enterprise. That’s why I attempted to refute the contention above (which perhaps I misinterpreted) that a journal’s reputation is correlated with the accuracy of its articles. This belief is unfounded, and the belief itself is part of the dysfunctional incentive structure that leads to this dilemma. Scientists should instead be encouraged to publish in journals like PLoS ONE, which accept negative results and judge articles on sound science rather than “significance” or surprising findings.

    So, ironically, if more than half of the papers published in high-reputation journals are wrong, does that mean that when we read an article with a particular result, we should actually be inclined to believe the opposite? I don’t think that’s justified. My understanding (which might be wrong) is that a study that’s wrong is one that can’t be replicated, so any findings it produced are just due to random chance. A random finding provides no evidence for or against any hypothesis at all.

    Anyway, I am with you, SquintMom, when you say that you have faith in science. One article is not cause enough to believe any hypothesis, especially if it’s a very surprising finding, has no underlying explanation, or there’s palpable political pressure to produce findings in line with it. That’s why experts such as yourself have a responsibility to survey the scientific landscape on a topic, discern the consensus, and then report it accurately. And that’s why I’d continue to say that it’s irresponsible to recommend *any* action to people, even a “low-risk, low-cost” one, on the basis of a single study that doesn’t even purport to demonstrate any potential for harm.

    We are all bombarded with reports of things that are harmful to us and our kids, every single day. And we have to make decisions about what to pay attention to. *Any* attention given to this issue is a waste of time and a drain on society. Say, for analogy, that somebody sees you feeding your kid asparagus, and says to you, you know, asparagus causes brain cancer. You’re skeptical, so you do some Googling, and find an article that says that asparagus “changes glucose metabolism” in brain cells. Well, a low-risk, low-cost response to this would be to stop feeding him asparagus, and feed him broccoli instead. But is it justified? Absolutely not.

    Reply

    • SquintMom
      Jan 27, 2012 @ 15:10:26

      I would like to respond in more detail to your thought-provoking post, but I don’t have time right now, so I’ll get back to it. I did want to say very quickly, though, that in your last example, stopping the asparagus is low-risk, but not actually low-cost, because asparagus has valuable nutrients that you’re then categorically avoiding. An action is only low-cost if what you’re avoiding conferred no unique benefit. Handset cell phones confer no unique benefit (that you can’t reproduce with a headset), so there’s actually a major difference there!

      Reply

      • Lorie
        Jan 27, 2012 @ 20:52:52

        Look up the dangers of headsets; they can cause environmental isolation and hearing loss. Some people get ear pain and headaches from headsets. Then there’s the inconvenience factor too. And you’re ignoring the hazards of urging precaution, all very likely for no good reason, based on a study taken out of context. What does the bulk of the research on low-level RF effects on the brain tell us, to date?

      • SquintMom
        Jan 27, 2012 @ 23:39:42

        Look, nothing is without risk. Headsets used at appropriate volume don’t cause loss of hearing. Ear pain and headaches have to do with the physical shape of the headset, so if you have a problem with a particular one, don’t use it. NOTHING IS WITHOUT RISK. The bulk of the research on low-level RF tells us very little about the long-term effects of holding an RF transmitter right up to your head for long periods of time. This is a physics issue, and has to do with intensity of the RF waves. I’m not scared of cell phones used for reasonable periods of time. If I were on one all day, I’d use a headset. Why? Because there’s enough question in my mind as to whether there could be long-term effects of right-up-against-my-head RF exposure for long periods that it would be worth it to me. If it’s not worth it to you, don’t do it. But I simply can’t and won’t acknowledge that there’s no risk associated with cell phone handsets; I don’t believe that information is in yet. To me, based upon CURRENTLY AVAILABLE INFORMATION, the known and tiny risks of headset use are more acceptable than the unknown risks of handset use FOR LONG PERIODS. But to be rigorous about scientific decision-making, one has to re-evaluate one’s decision every time new information comes to light.
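        The intensity point is, as I understand it, basically the inverse-square law: in the far field, power density falls off with the square of the distance from the antenna, so even the modest extra distance a headset buys reduces exposure at the head enormously. Here’s a back-of-the-envelope sketch of mine (the transmit power and distances are illustrative assumptions, and very close to the antenna the far-field formula is only approximate):

```python
import math

def power_density(p_watts, r_meters):
    """Far-field power density (W/m^2) at distance r from an isotropic source."""
    return p_watts / (4 * math.pi * r_meters ** 2)

p = 0.25  # assumed transmit power, ~250 mW (order of magnitude only)
at_ear = power_density(p, 0.02)        # handset held against the head (~2 cm)
with_headset = power_density(p, 0.30)  # phone held away from the head (~30 cm)
print(f"{at_ear / with_headset:.0f}x")  # → 225x, i.e. (0.30 / 0.02)**2
```

        In other words, moving the phone from roughly 2 cm to 30 cm cuts the power density at the head by a factor of a couple hundred, which is why “use a headset for long calls” is such a cheap precaution.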

        And in the end, if anyone’s curious, I don’t use a headset. Why? Because I’m very rarely on the phone for more than a few minutes. So there that is.

      • Lorie
        Feb 04, 2012 @ 23:01:14

        Well, I still use my cell phone but I’ve stopped eating asparagus: http://uncyclopedia.wikia.com/wiki/Asparagus_Syndrome

        (sorry, couldn’t resist posting this parody).
