Sigal Samuel in The Atlantic:
Imagine you’re the president of a European country. You’re slated to take in 50,000 refugees from the Middle East this year. Most of them are very religious, while most of your population is very secular. You want to integrate the newcomers seamlessly, minimizing the risk of economic malaise or violence, but you have limited resources. One of your advisers tells you to invest in the refugees’ education; another says providing jobs is the key; yet another insists the most important thing is giving the youth opportunities to socialize with local kids. What do you do?

Well, you make your best guess and hope the policy you chose works out. But it might not. Even a policy that yielded great results in another place or time may fail miserably in your particular country under its present circumstances. If that happens, you might find yourself wishing you could hit a giant reset button and run the whole experiment over again, this time choosing a different policy. But of course, you can’t experiment like that, not with real people.
You can, however, experiment like that with virtual people. And that’s exactly what the Modeling Religion Project does. An international team of computer scientists, philosophers, religion scholars, and others is collaborating to build computer models that they populate with thousands of virtual people, or “agents.” As the agents interact with each other and with shifting conditions in their artificial environment, their attributes and beliefs—levels of economic security, of education, of religiosity, and so on—can change. At the outset, the researchers program the agents to mimic the attributes and beliefs of a real country’s population using survey data from that country. They also “train” the model on a set of empirically validated social-science rules about how humans tend to interact under various pressures. And then they experiment: Add in 50,000 newcomers, say, and invest heavily in education. How does the artificial society change? The model tells you. Don’t like it? Just hit that reset button and try a different policy.

The goal of the project is to give politicians an empirical tool that will help them assess competing policy options so they can choose the most effective one. It’s a noble idea: If leaders can use artificial intelligence to predict which policy will produce the best outcome, maybe we’ll end up with a healthier and happier world. But it’s also a dangerous idea: What’s “best” is in the eye of the beholder, after all.
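To make the workflow described above concrete, here is a minimal agent-based-model sketch in Python. Everything in it is invented for illustration: the attributes, the interaction rules, and the policy lever do not come from the Modeling Religion Project, whose actual models are far richer and grounded in survey data and validated social-science findings. The sketch only shows the general shape of such a simulation: initialize a population of agents, step them forward under simple rules, and re-run the same experiment (the "reset button") with a different policy by reusing the random seed.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    # Illustrative attributes only; a real model would be
    # initialized from survey data for a specific country.
    education: float          # 0.0 to 1.0
    religiosity: float        # 0.0 to 1.0
    economic_security: float  # 0.0 to 1.0

def make_population(n, seed):
    rng = random.Random(seed)
    return [Agent(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def step(population, education_investment, rng):
    """One tick of the simulation under toy, made-up rules:
    - policy investment slowly raises each agent's education;
    - agents drift toward the religiosity of a random peer;
    - higher education slightly lowers religiosity.
    None of these rules is empirically validated; they stand in for
    the social-science rules a real model would be trained on."""
    for a in population:
        a.education = min(1.0, a.education + 0.01 * education_investment * rng.random())
        peer = rng.choice(population)
        a.religiosity += 0.05 * (peer.religiosity - a.religiosity)
        a.religiosity = max(0.0, a.religiosity - 0.005 * a.education)

def run_experiment(n_agents, n_steps, education_investment, seed):
    """Run one policy scenario and return mean religiosity at the end.
    Fixing the seed lets us 'hit reset' and replay the identical
    population under a different policy."""
    rng = random.Random(seed)
    population = make_population(n_agents, seed)
    for _ in range(n_steps):
        step(population, education_investment, rng)
    return sum(a.religiosity for a in population) / len(population)

# Same virtual society, two competing policies:
no_investment = run_experiment(500, 50, education_investment=0.0, seed=42)
heavy_investment = run_experiment(500, 50, education_investment=1.0, seed=42)
```

Under these toy rules, the heavy-investment run ends with lower mean religiosity than the no-investment run, but that outcome is baked into the invented rules; the point is the experimental loop, not the result.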