A conversation with David Rand in Edge:
I'm most interested in understanding cooperation, that is to say, why people are willing to act for the greater good rather than their narrow self-interest. In thinking about that question, there's both the scientific question of how the selfish process of natural selection and strategic reasoning could give rise to cooperative behavior, and the practical question of what we can do to make people more cooperative in real-world settings.
The way that I think about this is at two different levels. One is an institutional level. How can you arrange interactions in a way that makes people inclined to cooperate? Most of that boils down to “how do you make it in people's long-run self-interest to be cooperative?” The other part is trying to understand, at a more mechanistic or psychological/cognitive level, what's going on inside people's heads while they're making cooperation decisions; in particular, in situations where there's no self-interested motive to cooperate, lots of people still cooperate. I want to understand how exactly that happens.
If you think about the puzzle of cooperation as “why should I incur a personal cost of time or money or effort in order to do something that's going to benefit other people and not me?”, the general answer is that future consequences for present behavior can create an incentive to cooperate. Cooperation requires me to incur some cost now, but if I'm cooperating with someone I'll interact with again, it's worth paying the cost of cooperating now in order to get the benefit of their cooperating with me in the future, as long as there's a large enough likelihood that we'll interact again.
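This repeated-interaction logic can be made concrete with a small sketch (my illustration, not something from the conversation itself): if cooperating costs c and delivers benefit b to the partner, and each interaction continues to another round with probability w, then a standard result for direct reciprocity is that mutual cooperation beats defection exactly when w·b > c. The payoff functions and parameter values below are purely illustrative.

```python
# Illustrative sketch of the direct-reciprocity argument (not from the
# interview). Assumes a reciprocating partner: they cooperate as long as
# I do, and stop cooperating forever once I defect.

def expected_payoff_cooperate(b, c, w):
    # Each round I pay cost c and receive benefit b; the interaction
    # continues with probability w, so the expected number of rounds
    # is 1 / (1 - w).
    return (b - c) / (1 - w)

def expected_payoff_defect(b, c, w):
    # Defect immediately: I gain b once from the partner's first-round
    # cooperation, and nothing thereafter.
    return b

def cooperation_pays(b, c, w):
    # Comparing the two payoffs reduces algebraically to w * b > c.
    return expected_payoff_cooperate(b, c, w) > expected_payoff_defect(b, c, w)

# Example with benefit 3, cost 1:
print(cooperation_pays(b=3, c=1, w=0.9))  # likely repeat interaction
print(cooperation_pays(b=3, c=1, w=0.1))  # nearly one-shot interaction
```

With these illustrative numbers, cooperating pays when w = 0.9 (since 0.9 × 3 > 1) but not when w = 0.1, which is the “large enough likelihood that we'll interact again” condition in the passage above.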
Even if I'm interacting with someone I'm not going to meet again, if other people are observing that interaction, then it affects my reputation. It can be worth paying the cost of cooperating in order to earn a good reputation and attract new interaction partners.