Computer Simulations And The Universe

by Ashutosh Jogalekar

There is a sense in certain quarters that both experimental and theoretical fundamental physics are at an impasse. Other branches of physics, like condensed matter physics and fluid dynamics, are thriving, but because the fundamental composition of matter, the origins of the universe and the unification of quantum mechanics with general relativity have long been held to be foundational questions in physics, this lack of progress rightly bothers its practitioners.

Each of these two aspects of physics faces its own problems. Experimental physics is in trouble because it now relies on energies that cannot be reached even by the biggest particle accelerators around, and building new accelerators will require billions of dollars at a minimum. Even before, it was difficult to get this kind of money; in the 1990s the Superconducting Supercollider, an accelerator which would have cost about $2 billion and reached energies greater than those of the Large Hadron Collider, was shelved because of a lack of consensus among physicists, political foot-dragging and budget concerns. The next particle accelerator, which is projected to cost $10 billion, is seen as a bad investment by some, especially since previous expensive experiments in physics have confirmed existing theoretical foundations rather than discovering new phenomena or particles.

Fundamental theoretical physics is in trouble because it has become unfalsifiable, divorced from experiment and entangled in mathematical complexities. String theory, which was thought to be the most promising approach to unifying quantum mechanics and general relativity, has come under particular scrutiny, and its lack of falsifiable predictive power has become so visible that some philosophers have suggested that traditional criteria for a theory's success, like falsification, should no longer be applied to string theory. Not surprisingly, many scientists as well as philosophers have frowned on this proposed novel, postmodern model of scientific validation.

Modeling Artificial and Real Societies

by Muhammad Aurangzeb Ahmad

Science fiction literature is fraught with examples of what-ifs of history which speculate on how the world would have looked if certain events had happened a different way: if the Confederates had won the American Civil War, if the Western Roman Empire had not fallen, if Islam had made inroads into the imperial household in China, and so on. At best these are speculations we can entertain to shed light on our own world, but imagine if there were a way to gauge how societies react under certain environmental constraints, social structures and stresses. Simulation is often described as the Third Paradigm in science, and the field of social simulation seeks to model social phenomena that cannot otherwise be studied because of practical and ethical constraints. Isaac Asimov envisioned such a science of predicting the future with psychohistory in his Foundation series of science fiction novels.

The history of social simulation can be traced back to the idea of cellular automata developed by Stanislaw Ulam and John von Neumann: a cellular automaton is a system of cells that interact with their neighbors according to a set of rules. The most famous example is Conway's Game of Life, a very simple simulation that generates self-organizing patterns which one could not have predicted just by knowing the rules.

To illustrate the concept of social simulation, consider Schelling's model of how racial segregation happens. Consider a two-dimensional grid where each cell represents an individual. The cells are divided into two groups represented by different colors. Initially the cells are randomly seeded on the grid, representing an integrated neighborhood. Each cell, however, has a preference about what percentage of its neighbors should belong to the same group (color). The simulation is run for a large number of steps. At each step, a person (cell) checks whether the number of like neighbors is less than a pre-defined threshold; if it is, the person moves by a single cell. If the number of like neighbors meets the threshold, the person (cell) remains at its current position. Even with such a simple setup we observe that the integrated neighborhood slowly becomes segregated, so that after some iterations the neighborhood is completely segregated. The evolution of the simulation can be observed in Figure 1. The main lesson is that even without overt racism, merely having a preference about one's neighbors can lead to a segregated neighborhood.
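To make the mechanics concrete, here is a minimal Python sketch of a Schelling-style model. It follows the common variant in which an unsatisfied agent relocates to a randomly chosen empty cell (rather than shifting over by a single cell as described above), and the grid size, fraction of empty cells and 30 percent similarity threshold are illustrative assumptions rather than parameters taken from the article.

```python
import random

# Minimal sketch of a Schelling-style segregation model on a toroidal grid.
# SIZE, EMPTY_FRACTION, THRESHOLD and STEPS are illustrative choices.
SIZE = 20            # the grid is SIZE x SIZE
EMPTY_FRACTION = 0.1 # fraction of cells left empty so agents have room to move
THRESHOLD = 0.3      # an agent is satisfied if >= 30% of its neighbors match its group
STEPS = 50

def make_grid():
    """Randomly seed the grid with two groups ('A', 'B') and empty cells (None)."""
    cells = []
    for _ in range(SIZE * SIZE):
        if random.random() < EMPTY_FRACTION:
            cells.append(None)
        else:
            cells.append(random.choice(['A', 'B']))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, r, c):
    """Return the occupants of the 8 surrounding cells, wrapping at the edges."""
    result = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            result.append(grid[(r + dr) % SIZE][(c + dc) % SIZE])
    return [n for n in result if n is not None]

def satisfied(grid, r, c):
    """An agent is satisfied if the share of like-colored neighbors meets THRESHOLD."""
    agent = grid[r][c]
    nbrs = neighbors(grid, r, c)
    if not nbrs:
        return True
    return sum(1 for n in nbrs if n == agent) / len(nbrs) >= THRESHOLD

def step(grid):
    """Relocate every agent that is unsatisfied at the start of the step
    to a randomly chosen empty cell."""
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and not satisfied(grid, r, c)]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        dr, dc = empties.pop(random.randrange(len(empties)))
        grid[dr][dc] = grid[r][c]
        grid[r][c] = None
        empties.append((r, c))

def similarity(grid):
    """Average share of like-colored neighbors: a crude measure of segregation."""
    shares = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            nbrs = neighbors(grid, r, c)
            if nbrs:
                shares.append(sum(1 for n in nbrs if n == grid[r][c]) / len(nbrs))
    return sum(shares) / len(shares)

if __name__ == "__main__":
    random.seed(0)
    grid = make_grid()
    print(f"initial similarity: {similarity(grid):.2f}")
    for _ in range(STEPS):
        step(grid)
    print(f"final similarity:   {similarity(grid):.2f}")
```

Running a sketch like this typically shows the average share of like-colored neighbors climbing well above the modest individual threshold, which is precisely the counterintuitive segregation effect described above.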
