Friday, December 19, 2014

Brain Training Games - Who's winning? or Another Open Letter on Brain Training Games

theconversation.com

In October, 70 scientists fired an opening salvo, in the form of an open letter (Stanford Center on Longevity), at a $1.3 billion business. In the letter they question claims that brain training games are a "magic bullet" in the fight against cognitive decline. This week, Michael Merzenich (the founder of BrainHQ) rounded up the troops and responded, along with 126 other scientists, in their own open letter (Cognitive Training Data) addressing the questions and criticisms of the original.

The original letter concluded with 5 recommendations for how we should move forward regarding brain health, aging and cognitive training:
  1. Much more research needs to be done before we understand whether and what types of challenges and engagements benefit cognitive functioning in everyday life. In the absence of clear evidence, the recommendation of the group, based largely on correlational findings, is that individuals lead physically active, intellectually challenging, and socially engaged lives, in ways that work for them. Before investing time and money on brain games, consider what economists call opportunity costs: If an hour spent doing solo software drills is an hour not spent hiking, learning Italian, making a new recipe, or playing with your grandchildren, it may not be worth it. But if it replaces time spent in a sedentary state, like watching television, the choice may make more sense for you.
  2. Physical exercise is a moderately effective way to improve general health, including brain fitness. Scientists have found that regular aerobic exercise increases blood flow to the brain, and helps to support formation of new neural and vascular connections. Physical exercise has been shown to improve attention, reasoning, and components of memory. All said, one can expect small but noticeable gains in cognitive performance, or attenuation of loss, from taking up aerobic exercise training.
  3. A single study, conducted by researchers with financial interests in the product, or one quote from a scientist advocating the product, is not enough to assume that a game has been rigorously examined. Findings need to be replicated at multiple sites, based on studies conducted by independent researchers who are funded by independent sources. Moreover, participants of training programs should show evidence of significant advantage over a comparison group that does not receive the treatment but is otherwise treated exactly the same as the trained group.
  4. No studies have demonstrated that playing brain games cures or prevents Alzheimer’s disease or other forms of dementia.
  5. Do not expect that cognitively challenging activities will work like one-shot treatments or vaccines; there is little evidence that you can do something once (or even for a concentrated period) and be inoculated against the effects of aging in an enduring way. In all likelihood, gains won’t last long after you stop the challenge.
Or as the Cognitive Training Data response letter summarizes:
  1. More research needs to be done.
  2. Physical exercise is good for physical health and brain health.
  3. A single study generally is not conclusive and needs to be integrated into a larger body of evidence.
  4. No study, to date, has demonstrated that brain training cures or prevents Alzheimer’s disease.
  5. Cognitively challenging activities have not been shown to work like one-shot treatments or vaccines.
After agreeing with many points from the original letter, the response letter finally gets down to brass tacks (emphasis my own):
"We cannot agree with the part of your statement that says 'there is no compelling scientific evidence' that brain exercises 'offer consumers a scientifically grounded avenue to reduce or reverse cognitive decline'.” 
They go on to offer their own talking points:

  1. The body of training literature showing that brain plasticity exists throughout the brain and throughout life
  2. The many demonstrations of the effectiveness of well-designed plasticity-based training regimens
  3. The specific findings of efficacy in the area of aging. In dismissing these, they argue, the SCL statement derogates the time, effort, and expertise of the thousands of scientists and clinicians engaged in designing, conducting, analyzing, publishing, and reviewing the research. It also diminishes the contribution of the thousands of volunteer research participants who gave their time and effort to these studies, and the time, effort, and expertise of the grant-makers who awarded the funding for most of these studies through the National Institutes of Health, other government agencies, and foundations. In addition, it short-changes the taxpayers who funded this well-conducted research.
Reading through these talking points, the one that stands out to me is number 3; wow, does it pull at your heartstrings! They call out the SCL and say that its claims are hurtful not just to the scientists claiming that brain games work for consumers, but also to the participants, the NIH, and the taxpayers funding all of that amazing research. But let's go back to the quote that the brain training scientists were most upset about, the one questioning whether consumers actually benefit from brain training. I don't think the SCL scientists in the original letter are arguing that our brains aren't plastic, or that dedicating minutes or hours a day to an engaged activity won't change (whether for the better or worse) our cognitive functioning. I think the crux of their argument is whether it is worth consumers' time and money to play these games in the hope of gaining skills that are both relevant to the real world and long lasting. The point that the SCL researchers (here is as good a place as any to mention that of the 197 scientists signing the two letters, a handful have conflicts of interest: 1 of the 70 in the first letter and 8 of the 127 in the response) are making is that consumers may not care about statistically significant improvements in their performance on the game itself or on related laboratory tests of neurocognitive functioning; they want to stop forgetting their keys and leaving the stove on.

I keep highlighting the word consumer because these games cost money and, as I said above, they are part of a growing $1.3 billion business. So, as others have suggested, it's not whether the games "work" or not; it's whether a consumer could just be more physically active, get more sleep, and try to be less stressed, and see the same or greater improvements in cognition without spending hundreds of dollars a year. When I attempted to look up the cost of a few games, the pricing information was almost impossible to find on their websites.
  
BrainHQ - $6/month billed annually (on sale from $8)
Cogmed - $500 (on sale from $1500) - this is a program sold to clinicians, who can then administer it to clients
Lumosity - $6.95/month billed annually
Memorado - ~$5.40/month billed annually
Fit Brains - $9.99/year
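
To put those subscription prices on a comparable footing, here is a quick back-of-the-envelope sketch of the annual cost of the consumer-facing subscriptions (assuming the listed per-month rates are billed for a full year; Cogmed's one-time clinician price and Fit Brains' yearly price don't need converting):

```python
# Rough annual-cost comparison for the consumer subscriptions listed above.
# These are the advertised per-month rates when billed annually; treat the
# exact figures as approximate and subject to change.
monthly_prices = {
    "BrainHQ": 6.00,
    "Lumosity": 6.95,
    "Memorado": 5.40,
}

for name, per_month in monthly_prices.items():
    print(f"{name}: ~${per_month * 12:.2f} per year")

# BrainHQ: ~$72.00 per year
# Lumosity: ~$83.40 per year
# Memorado: ~$64.80 per year
```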

So the point of these two letters isn't whether playing a brain game "helps" you or not; it's whether the investment of time and money in brain games is significantly better than, say, keeping up a physically and socially active lifestyle. Perhaps the best piece of advice comes from Art Kramer, who once answered my question about how to ward off cognitive decline in aging with, "join a book club that walks and drinks wine during the meeting."

Wednesday, December 17, 2014

Hyperopia in academics?

Last week, the cheery state of being a post-doc was made even cheerier with the release of a new publication on the state of post-doc-hood. Others have not met this news with as bright an outlook (here and here).
Here's a choice of two good songs to set the mood as we dive in -



Some of the most exciting tidbits from the report include:

  • Postdoctoral appointments for a given researcher should total no more than 5 years (barring extraordinary circumstances)
    • They suggest the five-year term should be cumulative across appointments, meaning one cannot string together multiple 2-3 year post-docs to extend training beyond it.
  • The title of ‘postdoctoral researcher’ should be applied only to those people who are receiving advanced training in research
    • After 5 years, post-docs should either start a permanent position externally or be promoted internally to a staff position with a different and appropriate designation and salary.
  • The postdoctoral position should not be viewed by graduate students or principal investigators as the default step after the completion of doctoral training (postdoctoral positions are intended only for those seeking advanced research training)
    • Beginning at the first year of graduate school, graduate students should be made aware of the wide variety of career paths available for Ph.D. recipients
  • The NIH should raise the NRSA postdoctoral starting salary to $50,000 (2014 dollars) - currently the salary starts at $39,264, increasing to $54,180 after 7 years of experience
    • Adjusted annually for inflation (currently 1.3% in the US); see the short sketch after this list for what that adjustment looks like in dollars
    • Appropriately higher where regional cost of living, disciplinary norms, and institutional or sector salary scales dictate higher salaries
    • Benefits should be appropriate for level of experience and commensurate with benefits given to equivalent full-time employees including health insurance, family and parental leave, and access to a retirement plan
  • Host institutions and funding agencies should take responsibility for ensuring the quality of mentoring through evaluation of, and training programs for, the mentors; postdoctoral researchers should also be encouraged to seek advice from multiple advisors
  • Just as graduate students are counted and tracked, postdoctoral fellows should be counted and tracked. Additionally, the NSF should serve as the primary curator for establishing and updating a database system that tracks postdoctoral researchers, including non-academic and foreign-trained postdoctoral researchers
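
As a purely illustrative aside (not from the report itself), here is what that annual inflation adjustment would look like in dollars, assuming the recommended $50,000 starting point and a constant 1.3% rate:

```python
# Illustration of an annual inflation adjustment applied to the report's
# recommended $50,000 starting salary (2014 dollars), using the 1.3% US
# inflation rate mentioned above. A constant rate is assumed for simplicity.
starting_salary = 50_000
inflation_rate = 0.013

for year in range(6):
    adjusted = starting_salary * (1 + inflation_rate) ** year
    print(f"{2014 + year}: ${adjusted:,.0f}")

# 2014: $50,000 ... 2019: $53,336 (still well above the current $39,264 starting point)
```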


In order to achieve these suggestions, the overall conclusion of the report appears to be that we need to reduce the number of post-docs. In particular, the authors appear to suggest overhauling graduate training so that PhD students begin preparing for non-research careers in graduate school, and only PhD students who want to run their own R1 lab pursue post-doctoral training. These changes would in turn change the post-doctoral experience so that it is no longer a (supposed) broad training period for a wide range of careers, and no longer a period of soul-crushing disillusionment with the research experience in which training scientists decide to (or are forced to) pursue other career options. By reducing the number of post-docs, the remaining future R1 PIs could receive higher salaries during their training. Another reason to reduce post-doc positions, besides paying them more, is that there are not enough jobs that they are training for:
"New faculty posts have not kept pace with the number of postdoctoral scientists in training, so there are, at least in some fields, far more postdocs than available research jobs"
That quote was quite interesting to me, so I decided to take a look at the current state of hiring in my own field. I found job postings for psychology faculty positions at one of the most comprehensive wikis (if not the most comprehensive) of aggregated tenure-track openings in psychology. Before diving in, I should note that it may not list every available job, since it focuses primarily on jobs in the US, secondarily on Canada, and then elsewhere.

A total of 657 positions were posted across the different areas of psychology.




657 positions sounds like a lot of jobs for just one year of hiring, but let's dig into those numbers a bit. These 657 jobs represent all areas of psychology, from across the US, Canada, and other international locations. A given scholar's background will likely limit them to fewer than half of these positions (unless they have a clinical degree and studied social neuroscience from a developmental perspective, in which case they may be able to apply for 70% of positions). After limiting yourself to the positions in your area of psychology, you must now limit yourself to the ones you are actually qualified for. The positions on the wiki include all types of colleges and universities, from primarily undergraduate, teaching-focused schools to R1, research-focused schools. Without teaching and mentoring experience (or a desire to teach), you're ruled out from the teaching schools, while without independent funding and multiple high-profile first-author papers, you're ruled out from the research schools. After you've found your area of psychology and your niche of school type, in order to apply to all of the remaining positions, you must be willing to move anywhere. If you'd like to have some choice in where you live, you may find yourself with few options.

Let's look at an example. If you are interested in teaching at a liberal arts college in Minnesota, Wisconsin, or Iowa and have a background in cognitive neuroscience, there are 7 jobs you can apply to, or approximately 1% of the available jobs. If you want to be more selective and only want to live in Minneapolis/St. Paul, there are 2 jobs available.

Now let's think about the competition for those jobs. According to a recent survey by the APA, 9,564 psychology PhDs were awarded in the US (in 2009/2010), which works out to about 14.5 new PhD graduates per open position this year. However, another report states that approximately 47% of psychology PhD graduates go on to complete post-doctoral training. If we take the 5-year maximum for post-docs suggested by the report above, that gives a post-doctorally trained pool of roughly 22,475 psychology PhDs which, together with this year's graduates who went straight to the job market, amounts to about 42 times (talk about "hyper-competition") the number of open positions. This is not to say that any given applicant has only a 2.4% chance of attaining a tenure-track job; the estimated chance is considerably higher (roughly 3-5 times higher).
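
For the curious, here is a rough sketch of that arithmetic (my own back-of-the-envelope calculation using the figures cited above; the 5-year pool assumes a steady number of graduates each year, which is a simplification):

```python
# Back-of-the-envelope sketch of the applicant-pool arithmetic above.
# The figures come from the sources cited in the text.
open_positions = 657          # tenure-track postings on the wiki this year
new_phds_per_year = 9564      # US psychology PhDs awarded (2009/2010, APA)
postdoc_rate = 0.47           # fraction of PhDs who go on to a postdoc

print(f"New PhDs per open position: {new_phds_per_year / open_positions:.1f}")  # ~14.6

# Postdocs accumulate over the suggested 5-year maximum term.
postdoc_pool = new_phds_per_year * postdoc_rate * 5                   # ~22,475
# This year's graduates who skipped the postdoc and went straight to the market.
non_postdoc_grads = new_phds_per_year * (1 - postdoc_rate)            # ~5,069

total_pool = postdoc_pool + non_postdoc_grads
print(f"Applicants per open position: {total_pool / open_positions:.0f}")   # ~42
print(f"Naive odds per applicant: {open_positions / total_pool:.1%}")       # ~2.4%
```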

Another interesting report on the issue comes from post-docs themselves, who organized a conference in Boston back in October called the Future of Research Symposium. The report is an amazing read, with a ton of fascinating links to other research studies and online discussions about the state of science, the academy, and research.

The broad conclusions of the report are similar to the previous report:

1. We recommend increased connectivity among junior scientists and other stakeholders to promote discussions on reforming the structure of the scientific enterprise.  
2. We advocate for increased transparency. This includes the number and career outcomes of trainees, as well as the expectations of the balance between employment and training in individual postdoctoral appointments.  
3. We call for an increased investment in junior scientists, with increased numbers of grants that provide financial independence from Principal Investigator (PI) research grants, and increased accountability for the quality of training as a requirement of funding approval.
But before we judge whether these conclusions are realistic, one of my favorite quotes on this topic comes from a Boston Globe article, where Professor Emeritus Henry Bourne says of the recommendations from these reports:
“I suspect that many scientific leaders and some institutions will applaud this document but promptly work to negate its rules. And they will succeed in doing so, with ridiculous ease. The reason is simple: It is clearly not in the interest of established investigators or their institutions to pay postdocs more and give them good jobs after their so-called ‘training,’ ” 
Or represented graphically: 
[Comic: Jorge Cham, PHD Comics]

After reading through the issues raised here and through the links, one may think academia is acting shortsightedly, or myopically. But in reading through everything, oddly enough, it's the ones who are in it for the long haul who make it to the position to be myopic. The people who made it are the ones who trundled and worked and ground through 4 years of undergrad, 2 years of a master's, 4 years of a PhD, 5-7 years of post-doctoral training, and 6 years of probationary untenured work to finally achieve job security at the young age of 39-41 years old (about half of the average American life expectancy) and reach the top of the pyramid.

Edit (1/16/2015): Brian Kurilla from GeekPsychologist recently reported slightly more optimistic chances for obtaining employment after graduation.

Tuesday, December 16, 2014

(Mis)communicating science

Of the three post-doctoral fellowship applications I recently submitted, one surprisingly had a large focus on knowledge translation: the plan for communicating the study, its purpose, and eventually its results to various communities, both scientific and general.

Within the last week the issue of science communication was raised in two different tweets:

The point when science becomes publicity from @mjsutterer
and
100 Most followed Psychologists on Twitter from @AkiNikolaidis

The first article discusses a recent retrospective study that might be best summed up by Public Enemy:


with the authors concluding that exaggerations in news coverage of scientific articles are more likely to occur when the university's press release itself includes exaggerations. Both the article and an accompanying editorial stress the importance of the press release in communicating science to a broader audience. But this article comes at an interesting time, when scientists have more avenues than ever to communicate directly with the public, including twitter, facebook, reddit, and personal blogs, and scientists are even encouraging other scientists to get out and tweet (here and here).

This push of scientists toward twitter made me really interested in the second article, published by the British Psychological Society, listing the 100 most followed psychologists and neuroscientists on twitter. I have previously posted about science and twitter, and I wanted to conduct a similar study with these 100 psychology twitter accounts. Before starting, I thought the process would be easier than it ultimately ended up being. First, finding an individual author's total citation count is almost impossible. I used the Scopus database and already know that I am likely underestimating and certainly missing citation counts (my own citation count on Scopus is underestimated by 33%). Other metrics are also very difficult to get hold of when searching in 100 different places. So before presenting the data, I acknowledge that much of it is somewhat off (Twitter counts are old, citation counts likely come only from Elsevier-based journals, and other information is only what I could easily find and discern from webpages).

Turning to the data, while many citation counts are missing, there appears to be almost no relationship between academic impact and twitter followers. As I collected the data, I started to wonder: what does account for increases in twitter followers? I've started to think of a few other influences (I'm ignoring twit-iquette, as that has been discussed a number of times before, here and here for example) that may increase follower count. On the google doc I've linked, I'm hoping others can help fill in the blanks and perhaps explain how and why psychologists can best communicate with the public. I'll be tweeting a link to the google doc and the image below to the 100 accounts to see if I can fill in the missing information and whether anyone can propose other possible influences.
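
For anyone who wants to check that "almost no relationship" claim themselves, here is a minimal sketch of the comparison, assuming the google doc has been exported to a CSV; the filename and the "citations" and "followers" column names are hypothetical stand-ins for however the sheet is actually laid out:

```python
# Minimal sketch of the "academic impact vs. twitter followers" check described
# above. The CSV filename and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("psych_twitter_100.csv")
# Drop accounts where the citation count could not be found.
df = df.dropna(subset=["citations", "followers"])

rho, p = spearmanr(df["citations"], df["followers"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f}, n = {len(df)})")
```

Spearman's rank correlation is used here rather than Pearson's because both follower counts and citation counts tend to be heavily skewed, so a rank-based measure is less sensitive to a few very large accounts.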