cognitive science and more

Do you remember what it was like before the Internet? If not, just Google “what was it like before the internet?” and you will.

In a recent study in Science, Sparrow and colleagues investigated whether people indeed use the Internet as an external memory store. This idea is not entirely new (in fact, it may be kind of obvious that, in some sense at least, we do), but their experiments are quite nice.

The first experiment is most compelling, so I will give you a few details. Participants were asked questions that were either easy or difficult. After a series of questions, all of approximately the same difficulty, participants performed a Stroop-like task. In this task, participants had to name the colors of words. Like this:

Google Nike Yahoo Shell (printed in red, green, blue, and red, respectively)

So that would be “red, green, blue, red”. You get the point. The idea is that we are such overtrained readers that word meaning is processed before color. If a word is particularly interesting, it will capture attention and interfere strongly with the color-naming task. As a result, the response is delayed for interesting words.

The crucial finding was that people are slower to name the color of computer-related words (such as Google), compared to control words (such as Nike), particularly after a series of difficult questions. The authors concluded from this that when we hear a difficult question, we don't think about the question per se, but about how we are going to find out the answer. Which is typically …
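To make the logic of that comparison concrete, here is a small sketch of how such an interference effect could be computed. The reaction times below are entirely made up for illustration and are not taken from the actual study:

    # Hypothetical color-naming times (ms) for computer-related vs. control
    # words, split by whether the preceding questions were easy or difficult.
    # All numbers are invented for illustration only.
    import pandas as pd

    rt_data = pd.DataFrame({
        "question_block": ["easy", "easy", "difficult", "difficult"] * 3,
        "word_type":      ["computer", "control"] * 6,
        "rt_ms":          [620, 615, 700, 640,
                           630, 610, 690, 650,
                           625, 620, 710, 645],
    })

    # The interference effect is the mean RT difference between computer-related
    # and control words, computed separately for each block type.
    means = rt_data.groupby(["question_block", "word_type"])["rt_ms"].mean().unstack()
    means["interference"] = means["computer"] - means["control"]
    print(means)

The prediction is then simply that this interference score is larger after difficult questions than after easy ones.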

Read more »

An easy way to create graphs with within-subject error bars

Let's consider an experiment in which participants were shown happy pictures (warning: this is a silly experiment, without a proper control condition). Before and after they saw the pictures, they filled in a questionnaire to estimate their mood on a scale from 1 (sad) to 10 (happy). The results of the experiment are shown in the graph below. Each line represents a single participant.


Clearly, people became happier after seeing the happy pictures. This can also be verified easily using a paired samples t-test, which shows that the “before” scores are significantly lower than the “after” scores (p < .005).
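If you want to run the same kind of test yourself, a minimal sketch with SciPy looks like this. The mood ratings below are invented, since the post's raw scores are not listed in the text:

    # Paired-samples t-test on hypothetical before/after mood ratings
    # (not the actual data from the experiment described above).
    from scipy import stats

    before = [4, 5, 3, 6, 5, 4, 5, 6]
    after  = [6, 7, 5, 8, 6, 6, 7, 8]

    t, p = stats.ttest_rel(after, before)
    print(f"t = {t:.2f}, p = {p:.4f}")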

However, the graph isn't that nice. We don't want to see individual participants. We'd rather see two average scores (“before” and “after”) and a measure of the variability. So what we can do is create a graph with error bars that reflect the 95% confidence interval (i.e., the range that, across many replications of the experiment, would contain the population average 95% of the time):


As you can see, the error bars are very large and show a huge overlap! If there is that much variation, how can it be that the difference between “before” and “after” is so highly significant? The reason is that we are only interested in whether participants have become happier or not. We are not interested at all in how happy the participants were to begin with. All participants became happier and therefore our t-test showed a significant difference between “before” and “after”. But there is a lot …
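The excerpt breaks off here, but for readers who want to try this right away: one standard way to construct within-subject error bars (not necessarily the exact recipe the full post describes) is the normalization proposed by Cousineau (2005), in which each participant's scores are first centered on that participant's own mean. A sketch with made-up data:

    # Within-subject error bars via Cousineau-style normalization (made-up data).
    # Each participant's scores are centered on his or her own mean, so that
    # differences in overall mood between participants no longer inflate the bars.
    import numpy as np

    # rows = participants, columns = conditions ("before", "after")
    scores = np.array([
        [3, 5],
        [6, 9],
        [4, 6],
        [7, 8],
        [5, 7],
    ])

    subject_means = scores.mean(axis=1, keepdims=True)
    grand_mean = scores.mean()
    normalized = scores - subject_means + grand_mean

    # 95% confidence interval per condition, based on the normalized scores.
    # (Morey, 2008, additionally suggests a small correction factor of
    # sqrt(C / (C - 1)) for C conditions; omitted here for brevity.)
    n = scores.shape[0]
    sem = normalized.std(axis=0, ddof=1) / np.sqrt(n)
    ci95 = 1.96 * sem  # or use the t-distribution for small samples

    print("condition means:", scores.mean(axis=0))
    print("within-subject 95% CI half-widths:", ci95)

Because the between-participant differences in overall mood have been removed, these error bars reflect only the before/after variability, and they are accordingly much tighter.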

Read more »

Orthodox statistics beware: Bayesian radicals spotted

I recently read this alarming report in Perspectives on Psychological Science (Kievit, 2011):

A group of international Bayesians was arrested today in the Rotterdam harbor. According to Dutch customs, they were attempting to smuggle over 1.5 million priors into the country, hidden between electronic equipment. The arrest represents the largest capture of priors in history.

“This is our biggest catch yet. Uniform priors, Gaussian priors, Dirichlet priors, even informative priors, it’s all here,” says customs officer Benjamin Roosken, responsible for the arrest. (…)

Sources suggest that the shipment of priors was going to be introduced into the Dutch scientific community by “white-washing” them. “They are getting very good at it. They found ghost-journals with fake articles, refer to the papers where the priors are allegedly based on empirical data, and before you know it, they’re out in the open. Of course, when you look up the reference, everything is long gone,” says Roosken.

This fake report is quite possibly the geekiest joke in the history of man, so you're forgiven if you don't get it right away. It's about statistics, so a very brief introduction is in order.

Psychologists typically investigate whether two groups (or one group under two conditions) differ from each other in some respect. For example, they may investigate whether men and women differ in their cleaning habits, by comparing the number of times that men and women vacuum each week. Here's some fake data for 6 participants (3 men and 3 women):

Men: 2 …
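The excerpt cuts the data off, but to make the setup concrete, here is a minimal sketch of the classical way to compare two such groups (the counts below are invented). The point of the joke is that a Bayesian analysis of the same question would also need a prior on the size of the difference:

    # Classical (frequentist) comparison of two groups: do men and women differ
    # in how often they vacuum per week? The counts are invented for illustration.
    from scipy import stats

    men   = [2, 1, 3]
    women = [4, 3, 5]

    t, p = stats.ttest_ind(men, women)
    print(f"t = {t:.2f}, p = {p:.3f}")

    # A Bayesian would instead combine these data with a prior on the size of
    # the difference and report, say, a Bayes factor or a posterior distribution.
    # Hence the (fictional) demand for smuggled priors.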
Read more »

Skull measuring or How Stephen Jay Gould proves his point by being wrong

In the 19th century, the anthropologist Samuel George Morton set out on the, by today's standards, highly dubious quest to show that cranial capacity differs between racial groups. Essentially, he filled almost a thousand skulls with seed or lead shot, gave each of them a good shake to make sure that every nook and cranny was filled, and then measured the amount of filling that came out as he emptied the skulls. According to his findings, Caucasians had the largest skulls. As, of course, he had suspected all along.

A view of a skull, drawn by Leonardo da Vinci. Source: Wikimedia Commons (http://commons.wikimedia.org/wiki/File:View_of_a_Skull_III.jpg)

Morton's experiments were not that famous until, more than a century later, they were rediscovered by the eminent biologist Stephen Jay Gould. And he was having none of it. According to Gould, Morton's findings were driven by his racist expectations. Caucasians should have the largest cranial capacity, so, when measuring the skulls, Morton made sure that they did, perhaps merely by subtle “unconscious or dimly perceived finagling.” Gould used this example to prove his broader point that experimental results are inevitably biased, because researchers are only human and simply cannot help but massage the data just the tiniest bit. And, as they say, if you torture the data, it will confess to anything.

Now, I'm in general sympathetic to Gould's views, but in this case he was wrong. In a recent study in PLoS Biology, Lewis and colleagues remeasured almost half of the skulls that had been used by Morton (Gould did not have access to the actual skulls. He derived his …

Read more »

OSDOC: The OpenSesame documentation area

Lo and behold, the OpenSesame documentation area is online at osdoc.cogsci.nl! The documentation area serves as a central point for everything that is related to OpenSesame, the graphical experiment builder. From tutorials and plug-ins to example experiments, it's all there. This is obviously a big improvement over my previous "system" of having documentation in the form of a bunch of loosely linked blog posts.

The documentation area is part of the preparation for the next version of OpenSesame, 0.24 "Cody Crick", which I hope to release in a month or so. I have tested it quite a bit and things are already working quite nicely. If you are interested in getting a sneak preview, you can find out how to get your hands on the development version through, yes, the documentation area!

Read more »