10 March 2006
Anyone interested in Bayesian models of human cognition has to wrestle with the question of whether people actually use the same sort of reasoning (and, if so, to what extent, and how our brains implement it). I'll be doing a series of posts exploring what I think about this issue (my views aren't really set in stone yet -- so think of this as "musing out loud" rather than "this is the way it is").
First: what does it mean to say that people are (or are not) Bayesian?
In many ways the question of whether people do the "same thing" as the model is a red herring: I use Bayesian models of human cognition in order to provide computational-level explanations of behavior, not algorithmic-level explanations. What's the difference? A computational-level explanation seeks to explain a system in terms of the goals of the system, the constraints involved, and the way those factors play out together. An algorithmic-level explanation seeks to explain how the brain physically does this. So any single computational-level explanation might have many different possible algorithmic implementations. Ultimately, of course, we would like to understand both: but I think most phenomena in cognitive science are not well enough understood on the computational level to make understanding on the algorithmic level very realistic, at least not at this stage.
To illustrate the difference between computational and algorithmic, I'll give an example. People given a list of words to memorize show systematic types of mistakes. If the list contains many words with the same theme -- say, all having to do with sports, but never the specific word "sport" -- people will nevertheless often incorrectly "remember" seeing "sport". One possible computational-level explanation of what is going on might suggest, say, that the function of memory is to use the past to predict the future. It might further say that there are constraints on memory deriving from limited capacity and limited ability to encode everything in time, and that as a result the mind seeks to "compress" information by encoding the meaning of words rather than their exact form. A memory compressed this way is more likely to "false positive" on words with similar meanings but very different forms.
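To make the "compression" idea concrete, here is a minimal toy sketch (not any published model -- the words, themes, and scoring rule are all invented for illustration). Memory stores only the distribution over themes rather than the exact words; recognition is then judged by how well a probe word fits that stored gist, so a never-seen word like "sport" can score higher than a never-seen off-theme word:

```python
# Toy sketch of gist-based ("compressed") memory.
# Assumption: each word carries a single hypothetical semantic theme tag.
from collections import Counter

studied = ["goal", "referee", "stadium", "coach", "trophy", "banana"]

theme_of = {  # invented word -> theme assignments
    "goal": "sports", "referee": "sports", "stadium": "sports",
    "coach": "sports", "trophy": "sports", "banana": "food",
    "sport": "sports", "carrot": "food",
}

# "Compressed" memory: only the relative frequency of each theme survives.
gist = Counter(theme_of[w] for w in studied)
total = sum(gist.values())

def recognition_score(probe):
    """Fraction of studied items sharing the probe's theme --
    all that this compressed memory can offer at test time."""
    return gist[theme_of[probe]] / total

# "sport" was never studied, but it fits the gist of the list,
# so it scores far higher than the equally-unstudied "carrot".
print(recognition_score("sport"))   # 5/6
print(recognition_score("carrot"))  # 1/6
```

The point isn't that memory literally counts themes; it's that once the explanation is written down this way, the claim "compression produces semantic false positives" becomes something you can quantify and test rather than just assert.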
That's one of many possible computational-level explanations of this specific memory phenomenon. The huge value of Bayesian models (and computational models in general) is that they make this type of explanation rigorous and testable - we can quantify "limited capacity" and what is meant by "prediction" and explore how they interact with each other, so we're not just throwing words around. There is no claim in most computational cognitive science, implicit or explicit, that people actually implement the same computations our models do.
There is still the open question of what is going on algorithmically. Quite frankly, I don't know. That said, in my next post I'll talk about why I don't think we can reject out of hand the idea that our brains are implementing something (on the algorithmic level) that might be similar to the computations our computers are doing. And then in another post or two I'll wrap up with an exploration of the other possibility: that people are adopting heuristics that approximate our models, at least under some conditions. All this, of course, holds only to the extent that the models are good matches to human behavior -- which probably varies with the domain and the situation.