πŸ’Ž When weighing up the merits of a product or the dangers of a technology, we often rely on how it makes us feel (rather than laboriously computing the facts)

In a second experiment, Slovic and Alhakami had students at the University of Oregon rate the risks and benefits of a technology (different trials used nuclear power, natural gas, and food preservatives). Then they were asked to read a few paragraphs describing some of the benefits of the technology. Finally, they were asked again to rate the risks and benefits of the technology. Not surprisingly, the positive information they read raised students’ ratings of the technology’s benefits in about one-half of the cases. But most of those who raised their estimate of the technology’s benefits also lowered their estimate of the risk – even though they had not read a word about the risk. Later trials in which only risks were discussed had the same effect but in reverse: people who raised their estimate of the technology’s risks in response to the information about risk also lowered their estimate of its benefits.

Excerpt from: Risk: The Science and Politics of Fear by Dan Gardner

πŸ’Ž On how too much data can make us overconfident in our predictions (rather than boost their accuracy)

The problem of more data was investigated by Paul Slovic, Professor of Psychology at the University of Oregon. He ran an experiment with professional horseracing handicappers in which they were given a list of 88 variables that were useful in predicting a horse’s performance. The participants then had to predict the outcome of each race and state their confidence in that prediction. They repeated these tasks with access to different levels of data: either 5, 10, 20, 30 or 40 of the variables.

The results were illuminating. Accuracy was the same regardless of the number of variables used. However, confidence grew as more data was harnessed: the experts overestimated the importance of factors that had limited predictive value. Only when five variables were used were accuracy and confidence well calibrated.

Marketers face a similar set of problems. They have access to more data than ever before, and many believe that because the information exists they should use it. The Slovic experiment suggests otherwise. We shouldn’t harness data just because we can. Instead, as much time should be spent choosing which data sets to ignore as choosing which to use.

Excerpt from: The Choice Factory: 25 behavioural biases that influence what we buy by Richard Shotton