Hardbound

Designing better choices

University of Chicago professor Richard Thaler and Cass Sunstein of Harvard Law School show the path to better decision-making

Published on May 27, 2017 · 4 minute read

Early in Thaler’s career, he was teaching a class on managerial decision making to business school students. Students would sometimes leave class early to go for job interviews (or a golf game) and would try to sneak out of the room as surreptitiously as possible. Unfortunately for them, the only way out of the room was through a large double door in the front, in full view of the entire class (though not directly in Thaler’s line of sight). The doors were equipped with large, handsome wood handles, vertically mounted cylindrical pulls about two feet in length. When the students came to these doors, they were faced with two competing instincts. One instinct says that to leave a room you push the door. The other instinct says, when faced with large wooden handles that are obviously designed to be grabbed, you pull. It turns out that the latter instinct trumps the former, and every student leaving the room began by pulling on the handle. Alas, the door opened outward.

At one point in the semester, Thaler pointed this out to the class, as one embarrassed student was pulling on the door handle while trying to escape the classroom. Thereafter, as a student got up to leave, the rest of the class would eagerly wait to see whether the student would push or pull. Amazingly, most still pulled! Their Automatic Systems triumphed; the signal emitted by that big wooden handle simply could not be screened out. (And when Thaler would leave that room on other occasions, he sheepishly found himself pulling too.)

Those doors are bad architecture because they violate a simple psychological principle with a fancy name: stimulus-response compatibility. The idea is that you want the signal you receive (the stimulus) to be consistent with the desired action. When there are inconsistencies, performance suffers and people blunder.

Consider, for example, the effect of a large, red, octagonal sign that reads GO. The difficulties induced by such incompatibilities are easy to show experimentally. One of the most famous demonstrations is the Stroop (1935) test. In the modern version of this experiment, people see words flashed on a computer screen and have a very simple task: press the right button if the word is displayed in red, and press the left button if the word is displayed in green. People find the task easy and can learn to do it quickly with great accuracy. That is, until they are thrown a curveball, in the form of the word green displayed in red, or the word red displayed in green. For these incompatible signals, response time slows and error rates increase. A key reason is that the Automatic System reads the word faster than the color-naming system can identify the color of the text. See the word green in red text and the non-thinking Automatic System rushes to press the left button, which is, of course, the wrong one.

You can try this for yourself. Just get a bunch of colored crayons and write a list of color names, making sure that most of the names are not written in the color they name. (Better yet, get a nearby kid to do this for you.) Then read the color names as fast as you can, ignoring the ink: easy, isn't it? Now say the ink color each word is written in as fast as you can, ignoring the word itself: hard, isn't it? In tasks like this, Automatic Systems always win over Reflective ones.
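For the curious, the gist of the experiment is also easy to sketch in code. The snippet below is a minimal, hypothetical illustration in Python, not anything from the book or the original study: it swaps the two response buttons for typed answers, assumes a terminal that understands ANSI color codes, and the trial count is an arbitrary choice.

    # Minimal text-based Stroop sketch (illustrative only, not the original protocol).
    import random
    import time

    INK = {"red": "\033[31m", "green": "\033[32m"}  # ANSI color codes
    RESET = "\033[0m"

    def run_trials(n=10):
        correct, start = 0, time.time()
        for _ in range(n):
            word = random.choice(list(INK))  # the word that is shown
            ink = random.choice(list(INK))   # the color it is printed in
            reply = input(f"Ink color of {INK[ink]}{word}{RESET}? ")
            if reply.strip().lower() == ink:
                correct += 1
        print(f"{correct}/{n} correct in {time.time() - start:.1f} seconds")

    run_trials()

Comparing your time and accuracy on compatible trials (word and ink match) against incompatible ones should reproduce the slowdown Stroop reported.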

Although we have never seen a green stop sign, doors such as the ones described above are commonplace, and they violate the same principle. Flat plates say “push me” and big handles say “pull me,” so don’t expect people to push big handles! This is a failure of architecture to accommodate basic principles of human psychology. Life is full of products that suffer from such defects. Isn’t it obvious that the largest buttons on a television remote control should be the power, channel, and volume controls? Yet how many remotes have a volume control the same size as the “input” button (which, if pressed accidentally, can make the picture disappear)?

It is possible, however, to incorporate human factors into design, as Don Norman’s wonderful book The Design of Everyday Things (1990) illustrates. One of his best examples is the design of a basic four-burner stove. Most such stoves have the burners in a symmetric arrangement, as in the stove pictured at the top, with the controls arranged in a linear fashion below. In this setup, it is easy to get confused about which knob controls the front burner and which controls the back, and many pots and pans have been burned as a result. The two alternative arrangements we have illustrated are only two of many better possibilities.

Norman’s basic lesson is that designers need to keep in mind that the users of their objects are humans who are confronted every day with myriad choices and cues. The goal of this chapter is to develop the same idea for choice architects. If you indirectly influence the choices other people make, you are a choice architect. And since the choices you are influencing are going to be made by humans, you will want your architecture to reflect a good understanding of how humans behave. In particular, you will want to ensure that the Automatic System doesn’t get all confused.