There are four cards, a simple rule, and all you’ve got to do is to work out which cards you need to turn over to see if the rule has been broken. That’s got to be easy, right? Well maybe, but the Wason Selection Task, as it is called, is one of the most oft repeated tests of logical reasoning in the world of experimental psychology.
Three questions.
Only click on this and read after you've taken the test! I swear it's fast.
According to Leda Cosmides and John Tooby, the results of the Wason Selection Task demonstrate that the human mind has not evolved reasoning procedures that are specialised for detecting logical violations of conditional rules. Moreover, they claim that this is the case even when these rules deal with familiar content drawn from everyday life.
However, they argue that the human mind has evolved to detect violations of conditional rules when those violations involve cheating on a social exchange. This is a situation where a person is entitled to some kind of benefit only if they have fulfilled a particular requirement (for example, you can enter a particular nightclub only if you’re over the age of 21). Cheating means taking the benefit without fulfilling the requirement. Cosmides and Tooby have found that when the Wason Selection Task is framed as a cheating scenario, subjects perform considerably better than they do with the standard test.
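(If it helps to see the underlying logic spelled out, here’s a tiny Python sketch of my own – it has nothing to do with the linked test, and the card faces are purely illustrative. For a rule of the form “if P on one side, then Q on the other”, the only cards that can hide a violation are the ones showing P and the ones showing not-Q.)

```python
def must_flip(face, side, p, q):
    """Could this visible face be hiding a violation of "if P then Q"?

    side says which side of the card is showing: "P" for the antecedent
    side (letters / drinks), "Q" for the consequent side (numbers / ages).
    """
    if side == "P":
        return p(face)    # P is showing, so the hidden side might fail Q
    return not q(face)    # not-Q is showing, so the hidden side might be P

# Abstract rule: "if a card shows a vowel, its other side shows an even number".
cards = [("A", "P"), ("K", "P"), ("4", "Q"), ("7", "Q")]
print([f for f, s in cards
       if must_flip(f, s, p=lambda f: f in "AEIOU",
                    q=lambda f: int(f) % 2 == 0)])   # ['A', '7']

# Cheating rule: "if you're drinking beer, you must be over 21".
cards = [("beer", "P"), ("coke", "P"), ("25", "Q"), ("16", "Q")]
print([f for f, s in cards
       if must_flip(f, s, p=lambda f: f == "beer",
                    q=lambda f: int(f) >= 21)])      # ['beer', '16']
```

The check is identical in both versions; the social framing just makes it far easier for most people to see that “16” is the card to worry about.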
Okay, so, obviously invoking evolution is a little silly here. Who’s to say this is a matter of hardwiring rather than what people have practiced?
Still, there’s something very interesting here.
In biology, anthropomorphism bad.
In science, the use of anthropomorphic language that suggests animals have intentions and emotions has traditionally been deprecated as indicating a lack of objectivity. Biologists have been warned to avoid assumptions that animals share any of the same mental, social, and emotional capacities of humans, and to rely instead on strictly observable evidence.
You can see the reasoning behind this: we don’t want to project our own social dynamics onto situations where they’re not fundamentally present, because we’ll project too much and miss what’s actually going on. Seems fair enough.
But in this case, taking the social framing out of the very same reasoning task leaves many people unable to pick their way through it.
This suggests to me that various anthropomorphized lenses can help us with cold hard logical problems, especially if we can pick them up and put them down.
The version of this that I’ve found people able to use at work is “adversarial” thinking. Every computer science education handwaves a bit about how you ought to be able to do this kind of reasoning, sometimes for worst-case analysis but more commonly for security. You can walk people through it a bit and get to interesting insights.
(Fake example that may make this seem slightly more concrete to software engineers, even if it’s absolute nonsense to everyone else: Yes, I know that this service is only invoked by three trusted internal callers, and yes, they’re all supposed to be reasonable, but think with me. If you wanted to, could you create load with some attributes that would ruin the async workflow distribution’s performance characteristics? Some method is discussed. Ahh, and could we end up with some amount of that by accident if the upstream caller had a failure in such-and-such way and were resubmitting XYZ?)
But: I don’t think adversarial reasoning and thinking about cheating are the only ways that social reasoning can help when you’re manipulating endless cold logical abstractions.
My senior year in college, a friend and I were thoroughly and delightedly obnoxious about calling components / interfaces / entities “boi”. “The session state boi.” “The parser boi needs it.” “Ahh, but this update will need to touch the factory bois as well.” I can’t say that was fun for others, but… thinking about the different pieces of code like mammoths in a David Macaulay drawing, handing off various tasks to each other, makes it easier for me to remember what they all do and how they interact.
Is that embarrassing? Is it something to suppress like a field biologist’s anthropomorphization of a troop of meerkats?
I think it shouldn’t be. I’ve no idea today how to leverage it to be more effective, but there’s a powerful amount of brain that you get to work with for social reasoning, and finding ways of getting it to kick on seems valuable.