Sunday, February 11, 2018

By how much would we need to increase our sample sizes to have adequate power with an alpha level of 0.005?


At our department seminar last week, the recent paper by Benjamin et al. on redefining statistical significance was brought up. In this paper, a large group of researchers argue that findings with a p-value close to 0.05 reflect only weak evidence for an effect. Thus, to claim a new discovery, the authors propose a stricter threshold, α = 0.005.

After hearing of this proposal, the immediate reaction in the seminar room was horror at some rough estimates of the loss of power, or of the increase in required sample size, that this would involve. I imagine that this reaction is rather standard among researchers, but from a quick scan of the “Redefine Statistical Significance” paper and of the four responses to it that I have found (“Why redefining statistical significance will not improve reproducibility and could make the replication crisis worse” by Crane, “Justify your alpha” by Lakens et al., “Abandon statistical significance” by McShane et al., and “Retract p < 0.005 and propose using JASP instead” by Perezgonzalez & Frías-Navarro), none of them provides updated sample size estimates.

Required sample size estimates for α = 0.05 and α = 0.005 are very easy to calculate with G*Power. So, here are the sample size estimates for achieving 80% power, for two-tailed independent-sample t-tests and four different effect sizes:

Alpha    N for d = 0.2    N for d = 0.4    N for d = 0.6    N for d = 0.8
0.05     788              200              90               52
0.005    1336             338              152              88
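These numbers are easy to check without G*Power as well. Below is a minimal sketch in R, using the built-in power.t.test function (which, like G*Power, relies on the noncentral t distribution); with sd = 1, the raw mean difference equals Cohen’s d, and the totals should match the table above up to a participant or two of rounding.

# Reproduce the table: two-tailed independent-samples t-test, 80% power.
# power.t.test() returns n PER GROUP; the table reports the TOTAL sample size.
effect_sizes <- c(0.2, 0.4, 0.6, 0.8)
for (alpha in c(0.05, 0.005)) {
  n_per_group <- sapply(effect_sizes, function(d)
    power.t.test(delta = d, sd = 1, sig.level = alpha, power = 0.80,
                 type = "two.sample", alternative = "two.sided")$n)
  cat("alpha =", alpha, "| total N:", 2 * ceiling(n_per_group), "\n")
}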

It is worth noting that most effects in psychology tend to be closer to the d = 0.2 end of the scale, and that most designs are nowadays more complicated than simple main effects in a between-subject comparison. More complex designs (e.g., when one is looking at an interaction) usually require even more participants.

The argument of Benjamin et al., that p-values close to 0.05 provide very weak evidence, is convincing. But their solution raises practical issues which should be considered. For some research questions, collecting a sample of 1336 participants could be achievable, for example by using online questionnaires instead of testing participants in the lab. For other research questions, collecting samples of this size is unimaginable. It’s not impossible, of course, but doing so would require a collective change in mindset, in research structures (e.g., investing more resources into a single project, providing longer-term contracts for early career researchers), and in incentives (e.g., relaxing the requirement to have many first-author publications).

If we ignore people’s concerns about the practical issues associated with collecting this many participants, the Open Science movement may lose a great many supporters.

Can I end this blog post on a positive note? Well, there are some things we can do to make the numbers from the table above seem less scary. For example, we can use within-subject designs when possible. Things already start to look brighter: Using the same settings in G*Power as above, but calculating the required sample size for “Difference between two dependent means”, we get the following:

Alpha    N for d = 0.2    N for d = 0.4    N for d = 0.6    N for d = 0.8
0.05     199              52               24               15
0.005    337              88               41               25

We could also pre-register our study, including the expected direction of a test, which would allow us to use a one-sided t-test. If we do this, in addition to using a within-subject design, we have:

Alpha    N for d = 0.2    N for d = 0.4    N for d = 0.6    N for d = 0.8
0.05     156              41               19               12
0.005    296              77               36               22
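Again, these figures can be cross-checked in R: switching power.t.test to a paired design, and to a one-sided test for the last table, should reproduce the entries above up to rounding. For a paired design the function directly returns the total number of participants, since each person contributes both conditions.

# Paired (within-subject) design with a one-tailed test, 80% power.
for (alpha in c(0.05, 0.005)) {
  n_total <- sapply(c(0.2, 0.4, 0.6, 0.8), function(d)
    ceiling(power.t.test(delta = d, sd = 1, sig.level = alpha, power = 0.80,
                         type = "paired", alternative = "one.sided")$n))
  cat("alpha =", alpha, "| N:", n_total, "\n")
}
# Dropping alternative = "one.sided" reproduces the two-tailed within-subject table.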

The bottom line is: A comprehensive solution to the replication crisis should address the practical issues associated with getting larger sample sizes.

Thursday, February 8, 2018

Should early-career researchers make their own website?


TL;DR: Yes.

For a while, I have been thinking about whether or not to make my own website. I could see some advantages, but at the same time, I was wondering how it would be perceived. After all, I don’t think any of my superiors at work have their own website, so why should I?

To see what people actually think, I made a poll on Twitter. It received some attention and generated some interesting discussions and many supportive comments (you can read them directly by viewing the responses to the poll I linked above). In this blogpost, I would like to summarise the arguments that were brought up (they were mainly pro-website).

But first, and without any further ado, here are the results:

The results are pretty clear, so – here it is: https://www.xenia-schmalz.net/. It’s still a work-in-progress, so I would be happy to get any feedback!

It is noteworthy that there are some people who did think that it’s narcissistic for Early Career Researchers (ECRs) to create their own website. It would have been interesting to get some information about the demographics of these 5%, and their thoughts behind their vote. If you are an ECR who is weighing up the pros and cons of creating a website, then, as Leonid Schneider pointed out, you may want to think about whether you would want to positively impress someone who judges you for creating an online presence. Either way, I decided that the benefits outweigh any potential costs.

Several people pointed out in response to the Twitter poll that a website is only as narcissistic as you make it. This leads to the question: what comes off as narcissistic? I can imagine that there are many differences in opinion on this. Does one focus on one’s research only? Or include some fun facts about oneself? I decided to take the former approach, for the reason that people who google me are probably more interested in my research than in my political opinions or in finding out whether I’m a cat or a dog person.

In general, people who spend more time on self-promotion than on actually doing things that they brag they can do are not very popular. I would rather not self-promote at all than come off as someone with a head full of hot air. Ideally, I would want to let my work speak for itself and for colleagues to judge me based on the quality of my work. This, of course, requires that people can access my work – which is where the website comes in. Depending on how you design your website, this is literally what it is: A way for people to access your work, so they can make their own opinion about its quality. 

In principle, universities create websites for their employees. However, things can get complicated, especially for ECRs. ECRs often change affiliations, and sometimes go for months without having an official job. For example, I refer to myself as a “post-doc in transit”: My two-year post-doc contract at the University of Padova ran until March last year, and I’m currently on a part-time, short-term contract at the University of Munich until I (hopefully) get my own funding. In the meantime, I don’t have a website at the University of Munich, only an out-of-date and incomplete website at the University of Padova, and a still-functioning, rather detailed and up-to-date website at the Centre of Cognition and its Disorders (where I did my PhD in 2011-2014; I’m still affiliated with the CCD as an associate investigator until this year, so this site will probably disappear or stop being updated rather soon). Several people pointed out, in the responses to my Twitter poll, that they get a negative impression if they google a researcher and only find an incomplete university page: this may come across as laziness or not caring.

What kind of information should be available about an ECR? First, their current contact details. I had somehow assumed that my email address was findable by anyone who looked for it, but come to think of it, I have had people contact me through ResearchGate or Twitter saying that they couldn’t find my email address.

Let’s suppose that Professor Awesome is looking to hire a post-doc, and has heard that you’re looking for a job and have all the skills that she needs. She might google you, only to find an outdated university website with an email address that doesn’t work anymore. To contact you, she would need to find you on ResearchGate (where she would probably need an account to contact you), or search for your recent publications, find one where you are the corresponding author, and hope that the listed email address is still valid. At some stage, Professor Awesome might give up and look up the contact details of another ECR who fits the job description.

Admittedly, I have never heard of anyone receiving a job offer via email out of the blue. But one can think of other situations where people might want to contact you with good news: Invitations to review, to become a journal editor, to participate in a symposium, to give a talk at someone else’s department, to collaborate, to give an interview about your research, or simply to discuss some aspects of your work. These things are very likely to increase your chances of getting a position in Professor Awesome’s lab. For me, it remains an open question whether having a website will actually result in any of these things, but I will report back in one year with my anecdotal data on this.

Second, having your own website (rather than relying on your university to create one for you) gives you more control of what people find out about you. In my case, a dry list of publications would probably not bring across my dedication to Open Science, which I see as a big part of my identity as a scientist.

Third, a website can be a useful tool to link to your work: not just a list of publications, but also links to full texts, data, materials and analysis scripts. One can even link to unpublished work. In fact, this was one of my main goals while creating the website. In addition to a list of publications in the CV section, I included information about projects that I’m working on or that I have worked on in the past. This was a good reason to get myself organised. First, I sorted my studies by overarching research question (which has helped me to figure out: What am I actually doing?). Then, for each study, I added a short description (which has helped me to figure out what I have achieved so far), and links to the full text, data and materials (which helped me to verify that I had made this information publicly accessible, which is what I always tell everyone else to do).

Creating the website was therefore a useful way for me to keep track of what I'm doing. People on Twitter have pointed out in their comments that it can also be useful for others: not only for the fictional Professor Awesome who is just waiting to make you a job offer, but also, for example, for students who would like to apply for a PhD at your department and are interested in getting more information about what people in the department are doing.

I have included information about ongoing projects, published articles, and projects-on-hold. Including information about unpublished projects could be controversial: given that the preprints are presented alongside published papers, unsuspecting readers might get confused and mistake an unpublished study for a peer-reviewed paper. However, I think that the benefits of making the data and materials of unpublished studies accessible outweigh the costs. Some of these papers are unpublished for practical reasons (e.g., because I ran out of resources to conduct a follow-up experiment). Even if an experiment turned out to be unpublishable because I made some mistakes in the experimental design, other people might learn from my mistakes in conducting their own research. This is one of the main reasons why I created the website: To make all aspects of all of my projects fully accessible.

Conclusion
As with everything, there are pros and cons to creating a personal website. A con is that some people might perceive you as narcissistic. There are many pros, though: Especially as an ECR, you provide a platform with information about your work which remains available independently of your employment status. You increase your visibility, so that others can contact you more easily. You can control what others find out about you. And, finally, you can provide information about your work that, for whatever reason, does not come across in your publication list. So, in conclusion: I recommend that ECRs make their own website.

Thursday, February 1, 2018

Why I don’t wish that I had never learned about the replication crisis


Doing good science is really hard. It takes a lot of effort and time, and the latter is critical for early-career researchers who have a very limited amount of time to show their productivity to their next potential employer. Doing bad science is easier: It takes less time to plan an experiment, one needs less data, and if one doesn’t get good results immediately (one hardly ever does), the amount of time needed to make the results look good is still less than the amount of time needed to plan and run a well-designed and well-powered study.

Doing good science is frustrating at times. It can make you wonder if it’s even worth it. Wouldn’t life be easier if we were able to continue running our underpowered studies and publish them in masses, without having a bad conscience? I often wonder about this. But the grass always looks greener from the other side, so it’s worth taking a critical look at the BC (before-crisis) times before deciding whether the good old days really were that good.

I learned about the replication crisis gradually, starting in the second year of my PhD, and came to realise its relevance for my own work towards the end of my PhD. During my PhD, I conducted a number of psycholinguistic experiments. I knew about statistical power, in theory – it was that thingy that we were told we should calculate before starting an experiment, but that nobody really does, anyway. I collected as many participants as was the norm in my field. And then the fun started: I remember sleepless nights, followed by a 5-am trip to the lab because I’d thought of yet another analysis that I could try. Frustration when that analysis didn’t work either. Doubts about the data, the experiment, my own competence, and why was I even doing a PhD? Why was I unable to find a simple effect that others had found and published? Once, I was very close to calling the university which gave me my Bachelor of Science degree, and asking them to take it back: What kind of scientist can’t even replicate a stupid effect?

No, I don’t wish that I had never learned about the replication crisis. Doing good science is frustrating at times, but much more satisfying. I know where to start, even if it takes time to get to the stage when I have something to show. I can stand up for what I think is right, and sometimes, I even feel that I can make a difference in improving the system.

Tuesday, January 9, 2018

Why I love preprints


An increasing number of servers are becoming available for posting preprints. This allows authors to post versions of their papers before publication in a peer-reviewed journal. I think this is great. In fact, based on my experiences with preprints so far, if I didn’t need journal publications to get a job, I don’t think I would ever submit another paper to a journal again. Here, I describe the advantages of preprints, and address some concerns that I’ve heard from colleagues who are less enthusiastic about preprints.

The “How” of preprints
Preprints can be simply uploaded to a preprint server: for example, on psyarxiv.com, via osf.io, or even on researchgate. It’s easy. This covers the “how” part.

The “Why” of preprints
In an ideal world, a publication serves as a starting point for a conversation, or as a contribution to an already ongoing discussion. Preprints fulfil this purpose more effectively than journal publications. Posting a preprint takes only a couple of minutes, while publication in a journal can take anywhere between a couple of months and a couple of years. With modern technology, preprints are easy for researchers to find. Preprints are often posted on social media websites, such as Twitter, and are then circulated and critically discussed by others who are interested in the same topic. With many preprint servers, preprints become listed on Google Scholar, which sends alerts to researchers who follow the authors. The preprint can also be linked to supplementary material, such as the data and analysis code, thus facilitating open and reproducible science.

Preprints also help to demonstrate an author’s productivity: If someone (especially an early career researcher) is unlucky in obtaining journal publications, they can show on their CV that they are productive, and potential employers can check the preprint to verify its quality and the match of research interests.

The author has a lot of flexibility in deciding when to upload a preprint. The earlier a preprint is uploaded, the more opportunities the author has to receive feedback from colleagues and incorporate it into the text. The OSF website, which allows users to upload preprints, has a version control function. This means that an updated version of the file can be uploaded, while the older version is archived. Searches lead to the most recent version, thus avoiding version confusion. At the same time, it is possible to track the development of the paper.

The “When” of preprints
In terms of timing, one option is to upload a preprint shortly after it has been accepted for publication at a journal. In line with many journals’ policies, this is a way to make your article openly accessible to everyone: while uploading the final, journal-formatted version is a violation of copyright, uploading the author’s version is generally allowed1.

Another option is to post a preprint at the same time as submitting the paper to a journal. This has an additional advantage: It allows the authors to receive more feedback. Readers who are interested in the topic may contact the author with corrections or suggestions. If this happens, the author can still make changes before the paper reaches its final, journal-published version. If, conversely, a mistake is noticed only after journal publication, the author either has to live with it, or issue an often stigmatising correction.

A final possibility is to upload a preprint that one does not intend to publish in a journal. This could include preliminary work, or papers that have been rejected repeatedly by traditional journals. Preliminary work could be based on research directions which did not work out for whatever reason. Posting it would inform other researchers, who might be thinking of going in the same direction, about potential issues with a given approach: this, in turn, would stop them from wasting their resources on doing the same thing only to find out, too, that it doesn’t work.

Uploading papers that have been repeatedly rejected is a hairier issue. Here, it is important for the authors to consider why the paper has been rejected. Sometimes, papers really are fundamentally flawed. They could be p-hacked, contain fabricated data or errors in the analyses; the theory and interpretation could be based on non sequiturs or be presented in a biased way. Such papers have no place in the academic literature. But there are other issues that might make a paper unsuitable for publication in a traditional journal, but still useful for others to know about. For example, one might run an experiment on a theoretically or practically important association, and find that one’s measure is unreliable. In such a scenario, a null result is difficult to interpret, but it is important that colleagues know about it, so they can avoid using this measure in their own work. Or, one might have run into practical obstacles in participant recruitment, and failed to get a sufficiently large sample size. Again, it is difficult to draw conclusions from such studies, but if the details of the experiment are publicly available, the data can be included in meta-analyses. This can be critical for research questions which concern a special population that is difficult to recruit, and may in fact be the only way in which conducting such research is possible.

With traditional journals, one can also simply be unlucky with reviewers. The fact that luck is a huge component in journals’ decisions can be exemplified with a paper of mine that was rejected as “irritating” and “nonsense” by one journal, and accepted with minor revisions by another. Alternatively, one may find it difficult to find a perfectly matching journal for a paper. I have another anecdote as an example of this: After one paper of mine was rejected by three different journals, I uploaded a preprint. A week later, I had received two emails from colleagues with suggestions about journals that could be interested in this specific paper, and two months later the paper was accepted by the first of these journals with minor revisions.

The possibility of uploading unpublishable work is probably the most controversial point about preprints. Traditional journals are considered to give a paper a seal of approval: a guarantee of quality, as reflected by positive reports from expert reviewers. In contrast, anyone can publish anything as a preprint. If both preprints and journal articles are floating around on the web, it could be difficult, especially for people who are not experts in the field (including journalists, or people who are directly affected by the research, such as patients reading about a potential treatment), to determine which they can trust. This is indeed a concern – however, I maintain that it is an open empirical question whether or not the increase in preprints will exacerbate the spread of misinformation.

The fact is that traditional journals’ peer review is not perfect. Hardly anyone would contest this: fundamentally flawed papers sometimes get published, and good, sound papers sometimes get repeatedly rejected. Thus, even papers published in traditional journals are a mixture of good and bad. In addition, there are the notorious predatory journals, which accept any paper for a fee and publish it under the appearance of being peer reviewed. These may not fool people who are experienced with academia, but journalists and consumers may find this confusing.

The point stands that the increase in preprints may increase the ratio of bad to good papers. But perhaps this calls for increased caution in trusting what we read: the probability that a given paper is bad is definitely above zero, regardless of whether it has been published as a preprint or in a traditional journal. Maybe, just maybe, the increase in preprints will lead us to evaluate papers on their own merit, rather than on the journal they were published in. Researchers would become more critical of the papers that they read, and post-publication peer review may increase in importance. And maybe, just maybe, an additional bonus will lie in the realisation that we as researchers need to become better at sharing our research with the general public in a way that provides a clear explanation of our work and doesn’t overhype our results.

Conclusion
I love preprints. They are easy, allow for fast publication of our work, and encourage openness and a dynamic approach to science, where publications reflect ongoing discussions in the scientific community. This is not to say that I hate traditional peer review. I like peer review: I have often received very helpful comments from which I have learned about statistics and theory building, and which have given me a broader picture of the views held by colleagues outside of the lab. Such comments are fundamental for the development of high-quality science.

But: Let’s have such conversations in public, rather than in anonymous email threads moderated by the editor, so that everyone can benefit. Emphasising the nature of science as an open dialogue may be the biggest advantage of preprints.

 __________________________________________
1 This differs from journal to journal. For specific journals’ policies on this issue, see here.

Wednesday, December 20, 2017

Does action video gaming help against dyslexia?


TL;DR: Probably not.

Imagine there is a way to improve reading ability in children with dyslexia which is fun and efficient. For parents of children with dyslexia this would be great: No more dragging your child to therapists, spending endless hours in the evening trying to get the child to practice their letter-sound rules, or forcing them to sit down with a book. According to several recent papers, a fun and quick treatment to improve reading ability might be in sight, and every parent can apply this treatment in their own home: Action video gaming.

Action video games differ from other types of games, because they involve situations where the player has to quickly shift their attention from one visual stimulus to another. First-person shooter games are a good example: one might focus on one part of the screen, and then an “enemy” appears and one needs to direct the visual attention to him and shoot him1.

The idea that action video gaming could improve reading ability is not as random as it might seem at first sight. Indeed, there is a large body of work, albeit very controversial, which suggests that children or adults with dyslexia might have problems with shifting visual attention. The idea that a visual deficit might underlie dyslexia originates from the early 1980s (Badcock et al., Galaburda et al.; references are in the articles linked below), so it is not in any way novel or revolutionary. A summary of this work would warrant a separate blog post or academic publication, but for some (favourable) reviews, see Vidyasagar, T. R., & Pammer, K. (2010). Dyslexia: a deficit in visuo-spatial attention, not in phonological processing. Trends in Cognitive Sciences, 14(2), 57-63 (downloadable here) or Stein, J., & Walsh, V. (1997). To see but not to read; the magnocellular theory of dyslexia. Trends in Neurosciences, 20(4), 147-152 (downloadable here), or (for a more agnostic review) Boden, C., & Giaschi, D. (2007). M-stream deficits and reading-related visual processes in developmental dyslexia. Psychological Bulletin, 133(2), 346 (downloadable here). It is worth noting that there is little consensus, amongst the proponents of this broad class of visual-attentional deficit theories, about the exact cognitive processes that are impaired and how they would lead to problems with reading.

The way research should proceed is clear: If there is a theoretical groundwork, based on experimental studies, to suggest that a certain type of treatment might work, one does a randomised controlled trial (RCT): A group of patients are randomly divided into two groups, one is subjected to the treatment in question, and the other to a control treatment, and we compare the improvement between pre- and post-measurement in the two groups. To date, there are three such studies:

Franceschini, S., Gori, S., Ruffino, M., Viola, S., Molteni, M., & Facoetti, A. (2013). Action video games make dyslexic children read better. Current Biology, 23(6), 462-466 (here)

Franceschini, S., Trevisan, P., Ronconi, L., Bertoni, S., Colmar, S., Double, K., ... & Gori, S. (2017). Action video games improve reading abilities and visual-to-auditory attentional shifting in English-speaking children with dyslexia. Scientific Reports, 7(1), 5863 (here), and

Gori, S., Seitz, A. R., Ronconi, L., Franceschini, S., & Facoetti, A. (2016). Multiple causal links between magnocellular–dorsal pathway deficit and developmental dyslexia. Cerebral Cortex, 26(11), 4356-4369 (here).

In writing the current critique, I am assuming no issues with the papers at stake, or with the research skills or integrity of the researchers. Rather, I would like to show that, under the above assumptions, the three studies may provide a highly misleading picture of the effect of video gaming on reading ability. The implications are clear and very important: Parents of children with dyslexia have access to many different sources of information, some of which provide only snake-oil treatments. From a quick google search for “How to cure dyslexia”, the first five links suggest modelling letters out of clay, early assessment, multi-sensory instructions, more clay sculptures, and teaching phonemic awareness. As reading researchers, we should not add to the confusion or divert resources from treatments that have actually been shown to work, by adding yet another “cure” to the list.

So, what is my gripe with these three papers? First, that there are only three such papers. As I mentioned above, the idea that there is a deficit in visual-attentional processing amongst people with dyslexia, and that this might be a cause of their poor reading ability, has been floating around for over 30 years. We know that the best way to establish causality is through a treatment study (RCT): We have known this for well over thirty years2. So, why didn’t more people conduct and publish RCTs on this topic?

The Mystery of Missing Data
Here is a hypothesis which, admittedly, is difficult to test: RCTs have been conducted for 30 years, but only three of them ever got published. This is a well-known phenomenon in scientific publishing: in general, studies which report positive findings are easier to publish. Studies which do not find a significant result tend to get stored in file-drawer archives. This is called the File-Drawer Problem, and has been discussed as early as 1979 (Rosenthal, R. (1979). The "File Drawer Problem" and Tolerance for Null Results. Psychological Bulletin, 86(3), 638-641, here). 

The reason this is a problem goes back to the very definition of the quantity we generally use to establish significance: the p-value. p-values are considered “significant” if they are below 0.05, i.e., below 5%. The p-value is defined as the probability of obtaining the data, or more extreme observations, under the assumption that the null hypothesis is true. The key is the second part. By rephrasing the definition, we get the following: When the effect is not there, a significance test will still tell us that it is there 5% of the time. This is a feature, not a bug, as it does exactly what the p-value was designed to do: It gives us a long-run error rate and allows us to keep it constant at 5% across a set of studies. But this desired property becomes invalidated in a world where we only publish positive results. In a scenario where the effect is not there, 5 in 100 studies will give us a significant p-value, on average. If only those five significant studies are published, we have a 100% rate of false positives (significant p-values in the absence of a true effect) in the literature. If we assume that the action video gaming effect is not there, then we would expect, on average, three false positives out of 60 studies3. Is it possible that, over 30 years, there has been an accumulation of studies which trained dyslexic children’s visual-attentional skills and observed no improvement?
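To put the file-drawer logic into numbers, here is a minimal simulation sketch in R; the group size of 16 per group is my own assumption for illustration and is not taken from any of the three papers.

# A world in which the true training effect is exactly zero:
# simulate many two-group studies and see how many come out "significant".
set.seed(1)
p_values <- replicate(10000, {
  treatment <- rnorm(16, mean = 0, sd = 1)   # no true effect
  control   <- rnorm(16, mean = 0, sd = 1)
  t.test(treatment, control)$p.value
})
mean(p_values < 0.05)   # roughly 0.05: about 1 in 20 null studies is "significant"
# If only those studies are published, the published literature consists
# entirely of false positives.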

Magnitude Errors
The second issue in the currently published literature relates to the previous point, and extends to the possibility that there might be an effect of action video gaming on reading ability. So, for now, let’s assume the effect is there. Perhaps it is even a big effect: let’s say it has a standardised effect size (Cohen’s d) of 0.3, which is considered a small-to-medium-sized effect. Realistically, the effect of action video gaming on reading ability is very unlikely to be bigger, since the best-established treatment effects have shown effect sizes of around 0.3 (Galuschka et al., 2014; here).

We can simulate very easily (in R) what will happen in this scenario. We pick a sample of 16 participants (the number of dyslexic children assigned to the action video gaming group in Franceschini et al., 2017). Then, we calculate the average improvement across the 16 participants, in the standardised score:

x <- rnorm(16, mean = 0.3, sd = 1)  # simulate standardised improvement scores for 16 children, true effect = 0.3
mean(x)                             # the observed average improvement

The first time I run this, I get a mean improvement of 0.24. Not bad. Then I run the code again, and get a whopping 0.44! Next time, not so lucky: 0.09. And then, we even get a negative effect, of -0.30.

This is just a brief illustration of the fact that, when you sample from the population, your observed effect will jump around the true population effect size due to random variation. This might seem trivial to some, but, unfortunately, this fact is often forgotten even by well-established researchers, who may go on to treat an observed effect size as a precise estimate.

When we sample, repeatedly, from a population, and plot a histogram of all the observed means, we get a normal distribution: A fair few observed means will be close to the true population mean, but some will not be close at all.
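This is easy to see by repeating the little simulation above many times and plotting the observed means (still assuming a true effect of 0.3 and a sample of 16):

# The sampling distribution of the mean improvement for n = 16, true effect 0.3.
set.seed(2)
observed_means <- replicate(10000, mean(rnorm(16, mean = 0.3, sd = 1)))
hist(observed_means, breaks = 50, xlab = "Observed mean improvement",
     main = "10,000 simulated studies, true effect = 0.3, n = 16")
abline(v = 0.3, lwd = 2)    # the true population effect
sd(observed_means)          # about 0.25: single studies scatter widely around 0.3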

We’re closing in on the point I want to make here: Just by chance, someone will eventually run an experiment and obtain an effect size of 0.7, even if the true effect is 0.5, 0.2, or even 0. Bigger observed effects, all else being equal, will yield significant results, while smaller observed effects will be non-significant. This means: If you run a study, and by chance you observe an effect size that is bigger than the population effect size, there is a higher probability that it will be significant and get published. If your identical twin sibling runs an identical study but happens to obtain an effect size that is smaller than yours – even if it corresponds to the true effect size! – it may not be significant, and they will be forced to stow it in their file drawer.

Given that only the significant effects are published (or even if there is merely a disproportionate number of positive compared to negative outcomes), we end up with a skewed literature. In the first scenario, we considered the possibility that the effect might not be there at all. In the second scenario, we assume that the effect is there, but even so, the published studies, due to publication bias, may have captured effect sizes that are larger than the actual treatment effect. This has been called the “magnitude error” (or Type M error) by Gelman & Carlin (2014, here), and has been described, with an illustration that I like to use in talks, by Schmidt in 1992 (see Figure 2, here).
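The magnitude error itself can also be simulated: keep only the studies that happen to reach significance, and look at the average effect they report. The sketch below again assumes a true effect of d = 0.3 and 16 children per group; these numbers are mine, for illustration only.

# Magnitude (Type M) error under publication bias.
set.seed(3)
one_study <- function(n = 16, true_d = 0.3) {
  treatment <- rnorm(n, mean = true_d, sd = 1)
  control   <- rnorm(n, mean = 0, sd = 1)
  c(p = t.test(treatment, control)$p.value,
    d = mean(treatment) - mean(control))   # with sd = 1, the difference approximates d
}
studies   <- replicate(10000, one_study())
published <- studies[, studies["p", ] < 0.05]   # only significant studies get "published"
ncol(published) / ncol(studies)   # power is low: only a small minority are significant
mean(published["d", ])            # the average "published" effect is far larger than 0.3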

Getting back to action video gaming and dyslexia: Maybe action video gaming improves dyslexia. We don’t know: Given only three studies, it is difficult to adjudicate between two possible scenarios (no effect + publication bias or small effect + publication bias).

So, let’s have a look at the effects reported in the three published papers. I will ignore the 2013 paper4, because it only provides the necessary descriptives in figures rather than tables, and the journal format hides the methods section with vital information about the number of participants god-knows-where. In the 2017 paper, Table 1 provides the pre- and post-measurement values for the experimental and control group, for word reading speed, word reading accuracy, phonological decoding (pseudoword reading) speed, and phonological decoding accuracy. The paper even reports the effect sizes: The action video game training had no effect on reading accuracy. For speed, the effect sizes are d = 0.27 and d = 0.45 for word and pseudoword reading, respectively. In the 2016 paper, the effect size for the increase in speed for word reading (second row of the table) is 0.34, and for pseudoword reading it is 0.58.

The effect sizes are thus comparable across studies. Putting the effect sizes into context: The 2017 study found an increase in speed from 88 seconds to 76 seconds to read a list of words, and from 86 seconds to 69 seconds to read a list of pseudowords. For words, this translates to a reduction in reading time of roughly 14%: In practical terms, if it takes a child 100 hours to read a book before training, it would take the same child about 86 hours to read the same book after training.

In experimental terms, this is not a huge effect, but it competes with the effect sizes for well-established treatment methods such as phonics instruction (Hedges’ g = 0.32; Galuschka et al., 2014)5. Phonics instruction focuses on a proximal cause of poor reading: A deficit in mapping speech sounds onto print. We would expect a focus on proximal causes to have a stronger effect than a focus on distal causes, where there are many intermediate steps between a deficit and reading ability, as explained by McArthur and Castles (2017) here. In our case, the following things have to happen for a couple of weeks of action video gaming to improve reading ability:

- Playing first-person shooter games has to increase children’s ability to switch their attention rapidly,
- the attention switching involved in reading has to be the same as the attention switching towards a stimulus which suddenly appears on the screen, and
- improving visual attention has to lead to an increase in reading speed.

There are ifs and buts at each of these steps. The link between action video gaming and visual-attentional processing would be diluted by other things which train children’s visual-attentional skills, such as how often they read, played tennis, sight-read sheet music, or looked through “Where’s Wally” books during the training period.6 In between visual-attentional processing and reading ability are other variables which affect reading ability and dilute this link: the amount of time the children read at home, motivation and tiredness at the first versus the second testing time point, and many others. These other factors dilute the treatment effect by adding variability to the experiment that is not due to the treatment. This should lead to smaller effect sizes.

In short: There might be an effect of action video gaming on reading ability. But I’m willing to bet that it will be smaller than the effect reported in the published studies. I mean this literally: I will buy a good bottle of a drink of your choice for anyone who can convince me that the effect of 2 weeks of action video gaming on reading ability is in the vicinity of d = 0.3.

How to provide a convincing case for an effect of action video gaming on reading ability
The idea that something as simple as action video gaming can improve children’s ability to do one of the most complex tasks they learn at school is an incredible claim. Incredible claims require very strong evidence. Especially if the claim has practical implications.

To convince me, one would have to conduct a study which is (1) well-powered, and (2) pre-registered. Let’s assume that the effect is, indeed, d = 0.3. With G*Power, we can easily calculate how many participants we would need to recruit for 80% power. Setting “Means: Difference between two dependent means (matched pairs)” under “Statistical test”, a one-tailed test (note that both of these decisions increase power, i.e., decrease the number of required participants), an effect size of 0.3, an alpha of 0.05 and power of 0.8, it shows that we need 71 children in a within-subject design to have adequate power to detect such an effect.
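For those without G*Power at hand, the same number can be cross-checked in R; power.t.test should agree with the G*Power figure up to rounding.

# Paired design, one-tailed test, dz = 0.3, alpha = 0.05, power = 0.80.
power.t.test(delta = 0.3, sd = 1, sig.level = 0.05, power = 0.80,
             type = "paired", alternative = "one.sided")
# n comes out at roughly 70, i.e. about 71 children after rounding up.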

A study should also be pre-registered. This would remove the possibility of the authors tweaking the data, analysis and variables until they get significant results. This is important in reading research, because there are many different ways in which reading ability can be measured. For example, Gori and colleagues (Table 3) present 6 different dependent variables that can be used as the outcome measure. The greater the number of variables one can possibly analyse, the greater the flexibility for conducting analyses until at least some contrast becomes significant (Simmons et al., 2011, here). Furthermore, pre-registration would reduce the overall effect of publication bias, because there would be a record of someone having started a given study.
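As a rough illustration of how much room six outcome measures leave for this kind of flexibility, here is a small sketch; it unrealistically assumes six independent measures and no true effect, so the real inflation will be somewhat smaller when the measures are correlated.

# Chance of at least one "significant" outcome among six measures at alpha = 0.05.
1 - (1 - 0.05)^6   # about 0.26 analytically
set.seed(4)
mean(replicate(10000, {
  p <- replicate(6, t.test(rnorm(16), rnorm(16))$p.value)   # six null comparisons
  any(p < 0.05)
}))                # about 0.26 by simulation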

In short: To make a convincing case that there is an effect of the magnitude reported in the published literature, we would need a pre-registered study with at least 70 participants in a within-subject design.

Some final recommendations
For researchers: I hope that I managed to illustrate how publication bias can lead to magnitude errors: the illusion that an effect is much bigger than it actually is (regardless of whether or not it exists). Your perfect study which you pre-registered and published with a significant result and without p-hacking might be interpreted very differently if we knew about all the unpublished studies that are hidden away. This is a pretty terrifying thought: As long as publication bias exists, you can be entirely wrong with the interpretation of your study, even if you do all the right things. We are quickly running out of excuses: We need to move towards pre-registration, especially for research questions such as the one I discussed here, which has strong practical implications. So, PLEASE PLEASE PLEASE, no more underpowered and non-registered studies of action video gaming on reading ability.

For funders: Unless a study on the effect of action video gaming on reading ability is pre-registered and adequately powered, it will not give us meaningful results. So, please don’t spend any more of the taxpayers’ money on studies that cannot be used to address the question they set out to answer. In case you have too much money and don’t know what to do with it: I am looking for funding for a project on GPC learning and automatisation in reading development and dyslexia.

For parents and teachers who want to find out what’s best for their child or student: I don’t know what to tell you. I hope we’ll sort out the publication bias thing soon. In the meantime, it’s best to focus on proximal causes of reading problems, as proposed by McArthur and Castles (2017) here.

-------------------------------------------------------
1 I know absolutely nothing about shooter games, but from what I understand characters there tend to be males.
2 More like 300 years, Wikipedia informs me.
3 This assumes no questionable research practices: With questionable research practices, the false positive rate may inflate to 60%, meaning that we would need to assume the presence of only 2 unpublished studies which did not find a significant treatment effect (Simmons et al., 2011, here)
4 I can do this in a blog post, right?
5 And this is probably an over-estimation, given publication bias.
6 If playing action video games increases visual-attentional processing ability, then so should, surely, these other things?

Thursday, November 9, 2017

On the importance of studying things that don’t work


In our reading group, we discussed a landmark paper of Paul Meehl’s, “Why summaries of research on psychological theories are often uninterpretable” (1990). The paper ends with a very strong statement (p. 242), written by Meehl in italics for extra emphasis:

We should maturely and sophisticatedly accept the fact that some perfectly legitimate “empirical” scientific theories may not be strongly testable at a given time, and that it is neither good scientific strategy nor a legitimate use of the taxpayer’s dollar to pretend otherwise.

This statement should bring up all kinds of stages of grief in psychological researchers, including anger, denial, guilt, and depression. Are we really just wasting taxpayers’ money on studying things that are not studyable (yet)?

We sometimes have ideas, theories, or models, which cannot be tested given our current measurement devices. However, research is a process of incremental progress, and in order to make progress, we need to first understand if something works or not, and if not, why it doesn’t work. If we close our eyes towards all of the things that don’t work, we cannot progress. Even worse, if we find out that something doesn’t work, and don’t make any effort to publicise our results, other researchers are likely to get the same idea, at some point in time, and start using their resources in order to also find out that it doesn’t work.

To illustrate with a short example: For some reason or another, I decided to look at individual differences in the size of psycholinguistic marker effects. With the help of half a dozen colleagues, we have collected data from approximately 100 participants, tested individually in 1-hour sessions. The results so far suggest that this approach doesn’t work: there are no individual differences in psycholinguistic marker effects.

Was I the first one to find this out? Apparently not. When I shared my conclusion with some older colleagues, they said: “Well, I could have told you that. I have tried to use this approach for many years, with the same results.” Could I have known this? Did I waste the time of my colleagues and the participants in pursuing something that everyone already knows? I think not. At the very least, my colleagues and I were unaware of any potential problems with this approach. And finding out that it doesn’t work opens interesting new questions: Why doesn’t it work? Does it work in some other populations? Can we make it work?

All of these questions are important, even if the answer is that there is no hope of making this approach work. However, in the current academic reward system, studying things that may never work is not a good strategy. If one wants publications, a better strategy is to drop a study like a hot potato as soon as it becomes clear that it will not give a significant result: throw it into your file drawer and move on to something else, something that will be more likely to give you a significant p-value somewhere. This is a waste of taxpayers’ money.

Tuesday, August 8, 2017

Are predatory journals really that bad?


Tales of Algerian Princes, Exotic Beauties, Old Friends Stranded And In Need, and… Your Next Submission?

All academics know the pesky little emails that fill our spam folders. Occasionally, a real-looking one slips through the filter, and it takes us a few minutes to figure out that we are being invited to submit a paper to the journal Psychological Sciences, rather than the prestigious (or rather, high-impact) journal Psychological Science, without the ‘s’ at the end.

Predatory journals, which pose as real, often open-access journals, offer to publish your papers for a processing fee, normally several thousand US dollars. Numerous researchers have demonstrated that the peer review process, which supposedly guarantees the high quality of your paper, is completely absent or very lax in these journals. The result of these demonstrations is a set of published pseudo-academic papers with varying degrees of absurdity; see here for Zen Faulkes’ non-comprehensive compilation of the funniest publications.

I argue that such predatory journals are no worse than your average spammer – but, of course, they are no better, either. Charging money for a service one doesn’t provide is a crime, be it a shipment of gold, a mail-order bride, or a peer-review process. What I argue here is that, although predatory journals receive a lot of negative attention from the research community, I have not yet seen a convincing argument to suggest that they damage science.

Also, it is a separate question whether monopolising publicly funded research, putting it behind a paywall and charging gazillions for access, then suing the crap out of anyone who dares to disseminate the knowledge, is morally superior to what predatory journals do. But two wrongs don’t make a right, and this blog post is not about that.

Predatory journals: A victimless crime?
Sometimes, a paper we write is just “unlucky”: it gets rejected by journal after journal, and eventually we shrug and realise that the paper will probably never be accepted for publication. Maybe the paper really isn’t our best piece of work: it could be a failed experiment, which does not advance our understanding, but publishing it would prevent other researchers from wasting time trying the same thing. A worse scenario is a paper which contradicts previously published and “well-established” work: it could keep getting blocked by editors and reviewers who are friends with the original authors or have themselves published papers that hinge on the assumptions that we are arguing against.

In such cases, making the paper public while avoiding a stringent peer-review process is justifiable. And, in principle, if you have the money, and if you know that you will be publishing in a journal with very low prestige – or rather, very high anti-prestige – why not? The Frontiers journals, anecdotally speaking, are a popular outlet for such work, and until relatively recently, Frontiers was considered a respectable open-access publisher with high impact factors, which has published some good papers.

For the record, I don’t think it’s a good idea to publish “unlucky” papers in predatory journals, for the simple reason that preprint platforms give you the same service for free, and without the possibility of damaging your reputation. The format of a preprint also has other advantages: for example, the fact that your paper is not (yet) published may encourage your colleagues to provide useful feedback (which has happened to me both times I have uploaded a preprint so far). But, for those who really want to see their “unlucky” paper in the formatted journal version, the question is: is publishing in predatory journals a victimless crime?

Playing the game of boosting your CV
Some publications in predatory journals are probably by researchers who got scammed, and genuinely believed that they were paying money for a good peer-reviewed publication in a legit open-access journal. However, I would guess that the number of such fooled researchers is relatively small – at least, I have not heard of a single case. (To be fair, anyone who has realised that they have been tricked into paying money for a bogus publication would probably be embarrassed to admit it.)

The problem seems to be that some researchers take advantage of these predatory journals to boost their publication record. Anecdotally, this seems to be a problem in the non-Western world, where researchers are often pressured by their institutions to keep up with Western standards of publishing in international peer-reviewed journals, even though they often have fewer resources to produce the same amount of high-quality research and are sometimes limited by their English skills. Predatory journals allow them to publish a large quantity of low-quality papers, without imposing a strict English proficiency requirement. Here, the victims are honest researchers on the job market and applying for grants. Having to compete with someone who has an artificially inflated CV is unfair. On the other hand, I would argue, the problem here is not predatory journals, but rather an evaluation system that would prefer a researcher with a hundred random-text-generator papers over one with five good publications. Also, I would bet that, in practice, presenting a CV with hundreds of publications in predatory journals would not get a researcher very far on the international market (though I have heard of such researchers being unfairly advantaged by their home institutions).

In summary, while playing the publication game by publishing many low-quality articles in predatory journals is not a victimless crime, as it disadvantages honest researchers, I see it as a symptom of a broken evaluation system. If we evaluated researchers by quality rather than quantity, researchers just wanting to make their CV look bigger could publish all the gibberish they wanted, without causing any damage to their colleagues with less fragile egos.

Bad research posing as good research
The peer review process serves as a filter to ensure that the published literature is trustworthy. For researchers, science journalists, and the general public, this filtering process means that they can read papers with more confidence. It’s peer reviewed, therefore it’s true, one might be tempted to conclude. Having papers which appear to be peer reviewed but actually contain faulty methods, analyses or inferences would create and disseminate knowledge that is false. As the demonstrations which I linked above show, any text can be published under the apparent seal of peer-review. 

Except, we all know that peer review, even in "legit" journals, is not perfect. I would like to hear from anyone who has never seen a bad published paper in their field. Some papers are just sloppy, and draw conclusions that are not justified. Occasionally, a case of data fabrication or other types of fraud blows up, and papers published in very prestigious journals that have been peer-reviewed meticulously by genuine experts are retracted. Even a perfectly executed study may be reporting a false positive – after all, it’s possible that one runs an experiment and gets a p-value of 0.01, not knowing that fifty other labs have tried the same paradigm and not found a significant effect. Thus, we should not trust the results of a paper, just because it is peer reviewed. The trustworthiness of a paper should be determined by its quality, and by whether or not the results are replicable.

Perhaps predatory journals rarely or never publish good research. Theoretically, it is possible that some publications in predatory journals are “unlucky” papers of the type I described above, in which case they may well be worth reading. In fact, if we adopt a broad definition of predatory journals and include Frontiers, it is very likely that some of the papers are good. Be that as it may, it is undeniable that peer-reviewed journals at least sometimes publish rubbish. Thus, we should not rely on peer review as an ultimate seal of approval, anyway – regardless of the outlet where a paper was published, we should first skip to the methods and results section, and judge the paper on its own merit.

Damage to the Open Science movement
When I finally published one of my “unlucky” papers in Collabra, a friend (from a completely different area of research) told me: “I don’t want to disappoint you, but… I saw that the journal you published in is one of these open access journals.” As many of the predatory journals play the card of making your work freely accessible, there is some confusion about the distinction between “good” open-access journals and predatory journals. For example, Frontiers seems to be hovering in a grey area, with many respectable scientists on the editorial boards, but examples of very bad research getting published, and editors being pressured into accepting papers for the sake of increasing profit.

It is hard to argue against the benefit of making research freely accessible, both to fellow scientists and to the general public. Therefore, it is a pity that the Open Science movement loses some of the respect and support that it deserves, not due to convincing counter-arguments but due to confusion about whether or not it has a legit peer review process. Again, though, the problem here is not predatory publishing: rather, it is misconceptions about open access and its relation to the quality of peer review.

Conclusion
Predatory journals pose as academic, often open-access journals, and have been shown to publish, for a fee, any text with a very lax peer review process, or none at all. Predatory journals are annoying, because they spam researchers in an attempt to receive submissions, and they are immoral, because they may trick a researcher into paying money for the service of high-quality peer review which will not be provided.

There are other issues which may be argued to impede the progress of science. Allowing researchers to inflate their CVs by publishing a large quantity of low-quality work may disadvantage more honest researchers with fewer but better publications, who compete with them for jobs and funding. This would lead to the selection of bad scientists into high-level positions. Publishing low-quality papers as peer-reviewed studies may confuse other researchers, science journalists and the general public, and would thus serve to disseminate claims that are not true. Finally, as they pose as open-access journals, predatory journals damage the reputation of other open-access journals, by spreading the misconception that open-access journals necessarily have a lax peer review process and publish anything to increase their financial profit.

I argue that the issues discussed in the previous paragraph – though they are real and important problems – are symptoms of an imperfect evaluation system, rather than being caused by the presence of predatory journals. In an ideal world, researchers and papers would be evaluated on their own merit, rather than by a number representing the quantity of publications or impact factors. This is rather difficult to achieve, because it requires top-down changes from employers and funders. But, in this ideal world, publishing in a predatory journal would become nothing more than an auto-ego-stroking gesture. Also, myths about open access journals need to be dispelled, so that the negative publicity that predatory journals receive does not damage the open science movement. Many open access journals, such as Collabra and RIO, have the option of publishing the reviewers’ comments alongside the paper. This practice should dispel any doubts about the legitimacy of the peer review process. If the same were done for all journals, this could be used as an indicator of a journal’s quality, rather than the label of being open access, which is, in principle, orthogonal to the peer review process.

So, what should we do about the presence of predatory journals? Address the issues from the previous paragraph, somehow. And, in the meantime, treat emails from predatory journals the same way you treat any other spam: either delete them, or, for a slow day in the office, see here for some inspiration.