COGSCIdotNL
cognitive science and more
 
Notes from the NC3Rs workshop on publication bias

I'm writing this on my way back from London, where I attended a workshop on publication bias organized by the NC3Rs (the British National Centre for the Replacement, Refinement, and Reduction of Animals in Research). Publication bias arises when not all scientific studies are published, and the chance that a study is published depends on its outcome. More specifically, studies that show a 'positive' result (e.g. a treatment effect, or something that supports a researcher's hypothesis) are published more often than studies that show a 'negative' result (e.g. no treatment effect, or something that doesn't support a researcher's hypothesis). Publication bias distorts scientific evidence. In most cases, it makes treatments (drugs, therapies, etc.) seem more effective than they really are, simply because we only see the studies that show positive treatment effects.
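You can see this inflation in a toy simulation. The numbers below are made up for illustration (they have nothing to do with any real meta-analysis): many small studies of a treatment with a modest true effect are run, but only the 'positive' (significant) ones get published.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2   # true treatment effect in standard-deviation units
N = 20              # participants per group
N_STUDIES = 2000

def run_study():
    # One small two-group study: returns the effect estimate and its t value.
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / N
          + statistics.variance(treated) / N) ** 0.5
    return diff, diff / se

results = [run_study() for _ in range(N_STUDIES)]
all_effects = [d for d, t in results]
# 'Published' studies: positive effect, significant at roughly p < .05
# (two-sample t test with 38 df, critical t of about 2.02).
published = [d for d, t in results if t > 2.02]

print(f"true effect:         {TRUE_EFFECT:.2f}")
print(f"mean of all studies: {statistics.mean(all_effects):.2f}")
print(f"mean of published:   {statistics.mean(published):.2f}")
```

The full set of studies averages out close to the true effect, but the published subset overestimates it badly, because with small samples only the studies that (by chance) found a large effect cross the significance threshold.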

Publication bias is increasingly recognized as a severe problem that affects all areas of science. It's not new. It's just that until recently little was done about it. It was therefore great to see this workshop bring researchers, funders, publishers, and people from industry together with the aim of discussing concrete ways of reducing publication bias. In this post I would like to tell you about some of the things that were discussed.

There were many excellent speakers, but I will first highlight the opening talk by Emily Sena. Her talk was partly based on a meta-analysis in which she investigated publication bias in animal research on stroke treatment. Her work nicely shows how you can answer a seemingly unanswerable question: How many studies were never published, and what did these invisible studies find?

 
The play store bully

I recently had a little adventure on the Google Play store, where I publish a few apps. I wanted to share the story with you, because it illustrates the danger of having a single company (Google) that dictates an entire platform (Android) and its app store (Google Play Store).

It's about a game app called Infinite Maze. This is a cute little game that Theo Danes and I created as an entry for the 2014 Best Illusion of the Year Contest. A playable optical illusion! We didn't win, but I'm proud to say that we made the finals.

You can read more about the illusion here, but this post is about a trademark-infringement claim that Namco filed against a dozen or so apps, including Infinite Maze. Namco is the company behind Pac-man. Their exact allegation was:

this app infringes PAC-MAN in the first game screenshot; PAC-MAN is clearly seen as the game title

Besides punctuation, there is something very wrong with this allegation: It is not true. Sure, Infinite Maze is a labyrinth game, and it's clearly inspired by Pac-man. In the past, I have even referred to it as Infinite Maze of Pac-man. But before uploading it to the Google Play store, I removed all mention of the word 'pac-man' in the game so that I wouldn't violate any trademarks. The word 'pac-man' now only occurs in the app description in the context of 'a pac-man-inspired game'. A phrasing that, as far as I know, doesn't constitute trademark infringement. But even if it does, Namco's specific allegation is that the word 'pac-man' is seen as the game title in a screenshot, which is utter, full, and complete nonsense.

I was informed of this allegation only after the app had been pulled. No warning. No advance notice. No chance for rebuttal. Google deals swift justice.

 
Breeding the perfect visual-search display

Tens of thousands of psychology students have spent hundreds of thousands of hours in stuffy little lab cubicles doing visual-search experiments. They have searched for diamonds among squares, red lines among green crosses, smileys among frowneys, and so on. You would think that by now every conceivable visual-search experiment has been done. But no, there's still cool stuff left.

In a study that just appeared in Journal of Vision, Erik van der Burg and his colleagues used a genetic algorithm to breed the best visual-search display. That is, they used evolution through 'natural' selection to create a display in which a target object was super easy to find. The results are a little surprising, which makes this experiment extra cool.

Natural selection applied to visual-search displays. The fittest displays from generation 1 are crossbred to create the displays from generation 2. The target is the horizontal red line segment in the center.
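The basic evolutionary loop is easy to sketch. Below is a minimal, hypothetical Python version: the display encoding (a list of distractor orientations), the fitness function, and all parameters are made up for illustration. In the actual study, selection was driven by how quickly human observers found the target, not by a formula.

```python
import random

N_ITEMS = 20   # distractors per display (illustrative)
POP_SIZE = 10  # displays per generation (illustrative)

def random_display():
    # A display is encoded as a list of distractor orientations (degrees).
    return [random.uniform(0, 180) for _ in range(N_ITEMS)]

def fitness(display, target_ori=0):
    # Toy stand-in for search performance: displays whose distractors
    # differ most from the target orientation make the target easy to find.
    return sum(min(abs(o - target_ori), 180 - abs(o - target_ori))
               for o in display)

def crossover(parent1, parent2):
    # Each item is inherited from one of the two parent displays.
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def mutate(display, rate=0.05):
    # Occasionally replace an item with a fresh random orientation.
    return [random.uniform(0, 180) if random.random() < rate else o
            for o in display]

def next_generation(population):
    # Keep the fittest half and crossbreed them to refill the population.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]
    return [mutate(crossover(*random.sample(parents, 2)))
            for _ in range(POP_SIZE)]

population = [random_display() for _ in range(POP_SIZE)]
for generation in range(50):
    population = next_generation(population)
```

After a few dozen generations, the surviving displays are the ones in which the target pops out, without anyone having designed them by hand.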

 
A bit about pupil size, attention, and inhibition

Let's try a little trick.

  1. Take a piece of cardboard and punch a small hole in it, no bigger than, say, two millimeters. If you're in a bar—this is one of those bar tricks—a beer coaster will do just fine.
  2. Cover your right eye with the piece of cardboard so that you can see through the hole. Make sure that no light gets through, except through the hole.
  3. Cover your left eye with your hand. Make sure that no light gets through at all.
  4. Now uncover your left eye and watch what happens to the hole: It shrinks!
  5. Now cover your left eye with your hand again, and watch the hole grow bigger.
A beer coaster.

This trick allows you to see your own pupillary light response. When you remove your hand from your left eye, the light that suddenly enters the eye causes your pupils to constrict. (Your right pupil as well as your left, because the two pupils always act together.) And this, in turn, causes the little hole to appear smaller. In other words, the apparent size of the hole directly reflects the size of your pupil! (If you're brave you can try to figure out the optics behind this effect. It took me a while.)

All of this was just an elaborate introduction to tell you what you already know: Pupils respond to light. Brightness causes pupils to constrict, and darkness causes pupils to dilate. But what you may not know (unless you read my previous post) is that you don't need to look at something bright for your pupils to constrict. Just paying attention (without looking) is enough. In a way, when you pay attention to something, your pupils respond as if you were looking directly at it.

In a paper that just appeared in Journal of Vision, my coauthors and I studied this phenomenon in more detail. I'm very excited about this study, so I wanted to share the main result in a blog post.

Our experiment was very simple. Participants kept their eyes fixated on the center of a display and identified a target stimulus that appeared on the left or right side. Just before the target appeared, there was a brief movement on the left or right side of the display. This movement was not relevant to the task (i.e. it did not predict the location or identity of the target), but participants were nevertheless unable to ignore it: Movement automatically attracts attention, whether you want it to or not.

So far, so just another good-old-fashioned cuing paradigm. But we added a little twist: Half of the display was bright, the other half was dark. Therefore, attention was sometimes drawn toward brightness (when the movement occurred on the bright side), and sometimes toward darkness. And because we know from previous studies that attention affects the pupillary light response, this brightness/darkness manipulation allowed us to study what happens to attention in this type of experiment.

 
A bit about our open-science Marie Curie project

This is my first week as a Marie Skłodowska-Curie fellow. Exciting! Marie Curie fellowships are post-doctoral grants from the European Commission. They give young(ish) researchers like me the opportunity to focus full time on research for two years. Being a Marie Curie fellow is a good thing in every way, so I'm thrilled to finally start!

I will blog occasionally about the project. Most posts will be about the research itself, but in this first one I want to write a bit about how we are going to approach the project. (“We” also refers to Françoise Vitu, the senior researcher on the project, and to other collaborators.) To use a heavily overused buzzword: this is going to be an open-science project.

An actress portraying Marie Curie. The leftmost badges were designed by the Center for Open Science. The rightmost badge is the unofficial open-access logo, designed by PLoS.