The Cracked Bassoon

Sam Mathias

I'm a psychoacoustician. I mainly post code for making interesting sounds.

31 January 2015 · 1275 words · ~7 minute read

Bayesian signal detection theory, part II: 2AFC

In my last post, I described how to use signal detection theory (SDT) to analyse data from a single observer in a yes-no experiment. In the real world, not all experiments conform to the yes-no paradigm, so the basic SDT model must be extended in various ways. Here, I'm going to apply SDT to data from a two-alternative forced choice (2AFC) experiment. As before, I'll start by formulating the model, and then I'll describe how to estimate its parameters using PyMC.

There are numerous ways to derive the 2AFC SDT model. The traditional approach uses what's known as the differencing rule (Macmillan et al., 1977). An alternative approach, described by DeCarlo (2012), does not involve differencing. Both approaches are valid, but DeCarlo's method has the advantage that it extends easily to experimental designs with more than two alternatives. My derivation below follows DeCarlo's non-differencing strategy.

The 2AFC model

In a 2AFC experiment, the observer is presented with two stimuli on each trial, a noise stimulus and a signal stimulus, one after the other. Let the variable X denote the presentation order of the stimuli. When X = 0, the signal stimulus is presented first; when X = 1, the signal stimulus is presented second. The observer has to decide which stimulus was the signal. Let the variable Y denote the observer's response. When Y = 0, the observer responds 'first'; when Y = 1, the observer responds 'second'. The observer is assumed to make two observations per trial (one corresponding to the first tone, \psi_1, and one corresponding to the second tone, \psi_2) and then choose whichever observation is larger, allowing for a response bias. This decision rule can be expressed formally as

\begin{eqnarray*}Y&=&\left\{\begin{array}{ll}1&\textrm{if }\psi_2-\psi_1>c\textrm{,}\\0&\textrm{otherwise,}\end{array}\right.\end{eqnarray*}
where c is a measure of bias. Just as under the yes-no model, noise observations are Gaussian random variables with zero mean and unit variance, whereas signal observations are Gaussian random variables with mean d^\prime and unit variance. Thus, d^\prime again represents the distance between the noise and signal distributions, and is the measure of sensitivity.

Just as under the yes-no model, we can derive closed-form equations that describe d^\prime and c in terms of hit and false-alarm probabilities. Here, I define a hit as when the observer responds 'second' (Y=1) given that the signal was presented second (X=1). Correspondingly, I define a false alarm as Y=1 given that X=0. [In his derivations, DeCarlo (2012) defined a hit as Y=0 given that X=0. Either way is valid; I chose this way just to make the figure below more consistent with my previous post.]

Consider the figure below:

[Figure: the noise and signal probability density functions, with an example observation of the first stimulus, drawn from the noise distribution, marked on the x-axis.]
The figure shows the probability distributions of noise and signal observations, which are exactly the same as under the yes-no model. It also shows an observation of the first stimulus on a trial where this stimulus was noise (i.e., X=1). The figure makes it clear that the probability of a hit, h, will vary from trial to trial depending on the value of the first observation. In fact, the hit rate is given by

\begin{eqnarray*}&P\left\{Y=1\mid{}X=1, \psi_1=x\right\}&=&\Phi\left(d^\prime-c-x\right)\textrm{,}\end{eqnarray*}

where x is a realisation of \psi_1 when it is drawn from the noise distribution, or x\sim\mathcal{N}\left(0,1\right). The corresponding probability, not conditional on x, can be found by integrating with respect to x:

\begin{eqnarray*}P\left\{Y=1\mid{}X=1\right\}&=&\int_{-\infty}^{\infty}\Phi\left(d^\prime-c-x\right)\phi\left(x\right)\mathrm{d}x\textrm{,}\end{eqnarray*}

where \phi denotes the standard normal probability density function.
Conveniently, integrals of this kind can be simplified via the standard result \int_{-\infty}^{\infty}\Phi\left(a+bx\right)\phi\left(x\right)\mathrm{d}x=\Phi\left(a/\sqrt{1+b^2}\right), leading to

\begin{eqnarray*}h=P\left\{Y=1\mid{}X=1\right\}&=&\Phi\left(\frac{d^\prime-c}{\sqrt{2}}\right)\textrm{.}\qquad\textrm{(1)}\end{eqnarray*}
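This simplification relies on the Gaussian integral identity quoted above, \int\Phi\left(a+bx\right)\phi\left(x\right)\mathrm{d}x=\Phi\left(a/\sqrt{1+b^2}\right); it is easy to verify numerically. A quick check (the values of a and b are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

a, b = 1.2, -1.0  # arbitrary example values; b = -1 matches the hit-rate case

# left-hand side: integrate Phi(a + b*x) against the standard normal density
lhs, _ = quad(lambda x: norm.cdf(a + b * x) * norm.pdf(x), -np.inf, np.inf)

# right-hand side: the closed form
rhs = norm.cdf(a / np.sqrt(1.0 + b ** 2))

print(lhs, rhs)  # the two agree to numerical precision
```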
Similarly, the probability of a false alarm is

\begin{eqnarray*}&P\left\{Y=1\mid{}X=0, \psi_1=y\right\}&=&\Phi\left(-c-y\right)\textrm{,}\end{eqnarray*}

where y is a realisation of \psi_1 when it is drawn from the signal distribution, or y\sim\mathcal{N}\left(d^\prime,1\right). If we let y=z+d^\prime, where z\sim\mathcal{N}\left(0,1\right), then

\begin{eqnarray*}&P\left\{Y=1\mid{}X=0, \psi_1=z+d^\prime\right\}&=&\Phi\left(-d^\prime-c-z\right)\textrm{,}\end{eqnarray*}
and integrating over z as before gives

\begin{eqnarray*}f=P\left\{Y=1\mid{}X=0\right\}&=&\Phi\left(-\frac{d^\prime+c}{\sqrt{2}}\right)\textrm{.}\qquad\textrm{(2)}\end{eqnarray*}

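The resulting hit and false-alarm probabilities, h=\Phi\left(\left(d^\prime-c\right)/\sqrt{2}\right) and f=\Phi\left(-\left(d^\prime+c\right)/\sqrt{2}\right), can be checked by simulating the decision rule directly. A short Monte Carlo sketch (the parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, c, n = 1.5, 0.3, 500_000  # arbitrary sensitivity, bias, and trial count

# X = 1 trials: signal second, so psi_1 ~ N(0,1) and psi_2 ~ N(d,1)
psi1 = rng.normal(0.0, 1.0, n)
psi2 = rng.normal(d, 1.0, n)
h_mc = np.mean(psi2 - psi1 > c)          # responding 'second' is a hit

# X = 0 trials: signal first, so psi_1 ~ N(d,1) and psi_2 ~ N(0,1)
psi1 = rng.normal(d, 1.0, n)
psi2 = rng.normal(0.0, 1.0, n)
f_mc = np.mean(psi2 - psi1 > c)          # responding 'second' is a false alarm

h_an = norm.cdf((d - c) / np.sqrt(2))    # analytic hit probability
f_an = norm.cdf(-(d + c) / np.sqrt(2))   # analytic false-alarm probability

print(h_mc, h_an)  # simulated and analytic values agree closely
print(f_mc, f_an)
```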
We now have everything we need to construct our Bayesian 2AFC model, but for completeness, maximum-likelihood estimators for d^\prime and c can be found by combining equations 1 and 2 and re-arranging:

\begin{eqnarray*}\hat{d^\prime}&=&\frac{\Phi^{-1}\left(h\right)-\Phi^{-1}\left(f\right)}{\sqrt{2}}\textrm{, and}\qquad\textrm{(3)}\\\hat{c}&=&-\frac{\Phi^{-1}\left(h\right)+\Phi^{-1}\left(f\right)}{\sqrt{2}}\textrm{.}\qquad\textrm{(4)}\end{eqnarray*}
Equation 3 should be familiar to anyone who has read Macmillan and Creelman (2005); 2AFC is a popular enough experimental design that most SDT textbooks spend a chapter on it and the origins of the so-called "\sqrt{2} correction". It is also frequently commented that 2AFC designs produce little bias, which is perhaps why I've never seen Equation 4 reported anywhere. I find this strange, because one could easily imagine a situation in which an observer would be more inclined to select one stimulus over the other. Imagine an experiment in which the observer had to select the louder of two sounds presented simultaneously, one to each ear; it would be completely valid to consider this a 2AFC decision, but it is quite plausible that the observer might exhibit a bias towards one ear.
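As a worked example, equations 3 and 4 can be applied to some made-up counts (the data below are hypothetical, purely for illustration):

```python
import numpy as np
from scipy.stats import norm

# hypothetical counts: 80 hits in 100 signal-second trials,
# 30 false alarms in 100 signal-first trials
h = 80 / 100
f = 30 / 100

d_hat = (norm.ppf(h) - norm.ppf(f)) / np.sqrt(2)   # equation 3
c_hat = -(norm.ppf(h) + norm.ppf(f)) / np.sqrt(2)  # equation 4

print(d_hat, c_hat)  # d_hat ~ 0.966, c_hat ~ -0.224
```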

Bayesian inference

Below is some PyMC code that implements both the yes-no and 2AFC models:

And here is the output:


Banner image is The Crying Girl by Roy Lichtenstein.
