After that somewhat intricate consideration of motion processing, let’s now take a diversion to see how signal detection has been used in applied vision research — outside the domain of motion.
We will consider an experiment reported in the article “Is facial emotion recognition impairment in schizophrenia identical for different emotions? A signal detection analysis” by Tsoi et al. (2008). UNSW students can access the article here.
The researchers were interested in investigating whether sensitivity to facial emotion differs in those with and without a diagnosis of schizophrenia. They presented their participants with visual images of faces depicting happiness, sadness, fear, disgust, surprise, and anger. These were similar, though not identical, to the faces from the FACES dataset shown in Fig. 15 below.
On a given block of trials, participants were instructed about a “target emotion” that was either happy, sad, or fearful. On each trial in the block, participants were asked to respond “yes” if they thought the image depicted the target emotion and “no” if not. There were 88 trials in a block, of which 28 depicted the target emotion (‘signal’ trials) and the remaining 60 trials depicted one of the other emotions (‘noise’ trials). Each face was presented only briefly (50 ms).
Notice the details of the ‘noise’ trials in this study. Remember that ‘noise’ trials are those in which the physical property of interest is not present in the stimulus. Hence, ‘noise’ does not mean ‘noisy’ in the conventional sense — this study is a good example: a ‘noise’ stimulus is simply a face that does not depict the target emotion.
The study involved 50 observers — 25 each with and without a diagnosis of schizophrenia. The researchers used the same signal detection theory framework as we did to summarise each participant’s performance for each target emotion as hit and false alarm rates, which are reported in their Table 2. As we have seen, the hit and false alarm rates can be converted to an estimate of sensitivity called d’.
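To make the conversion concrete, here is a minimal sketch of how hit and false alarm rates are turned into d’ via the inverse of the standard normal CDF (the z-transform). The counts used below are illustrative only — they are not values from the paper’s Table 2.

```python
# Illustrative d' calculation (counts are hypothetical, not Tsoi et al.'s data).
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 21 hits out of 28 signal trials, 9 false alarms out of 60 noise trials
print(d_prime(21 / 28, 9 / 60))  # about 1.71
```

A d’ of 0 means the observer cannot distinguish signal from noise trials at all; larger values indicate greater sensitivity, independent of how willing the observer is to say “yes”.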
The hit and false alarm rates reported in Table 2 in the paper do not correspond precisely to their reported d’ values. This is because the authors apply a small adjustment to the hit and false alarm rates prior to calculating d’. See the article by Stanislaw & Todorov (Additional resources) for further details if you’re interested.
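One common adjustment of this kind, described by Stanislaw & Todorov, is the “log-linear” rule: add 0.5 to the hit and false alarm counts and 1 to the number of signal and noise trials before computing the rates. Whether Tsoi et al. used exactly this rule is an assumption on my part — the sketch below simply shows why some adjustment is needed (a raw rate of 0 or 1 would make d’ infinite).

```python
# Sketch of the log-linear adjustment (an assumption about the specific
# correction used; the paper only says a small adjustment was applied).
from statistics import NormalDist

def adjusted_d_prime(hits, n_signal, false_alarms, n_noise):
    # Add 0.5 to each count and 1 to each trial total, so rates
    # can never be exactly 0 or 1 and d' stays finite.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A perfect score (28/28 hits, 0/60 false alarms) now yields a finite d'
print(adjusted_d_prime(28, 28, 0, 60))
```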
Unusually for a study of perception, the article has no figures — so I have plotted their sensitivity results below.
As shown above, Tsoi et al. (2008) were able to use signal detection theory to demonstrate that individuals with schizophrenia may have a lower capacity for detecting happy facial emotions relative to control participants. Crucially, signal detection theory allowed them to estimate sensitivity separately from any differences in the propensity to say “yes” or “no”. Indeed, the researchers did find between-group differences in the criterion values used to make the decisions, which underscores the importance of using an approach that can separate sensitivity from such response biases.
This experiment is an interesting case study in the application of signal detection theory to a yes/no task for the estimation of visual sensitivity. However, this type of application is reasonably rare in the literature. Why? We will consider this question in the next section.
Tsoi et al. (2008) used a yes/no task in which observers were asked to identify the depiction of a particular target emotion in faces. By applying signal detection theory, they were able to demonstrate a difference in sensitivity for happy faces between those with and without a diagnosis of schizophrenia. Importantly, the application of signal detection theory allowed this difference in sensitivity to be dissociated from any difference in a bias to respond “yes” or “no”.