**by Rishidev Chaudhuri**

A dynamical system is a mathematical description of how some particular system evolves through time. These are often physical or biological systems, and dynamical systems are used to model everything from how the planets move around the sun and how earthquakes propagate to how neurons fire and how economies evolve over time. To build a dynamical system we need two things. We need some sort of abstract description of the properties of the system that will be changing; for example, if we want to understand the movements of the planets these would be their positions in space (relative to some coordinate system). We also need some rule for how these quantities evolve from moment to moment. Here, this would be a set of equations describing how the gravitational attraction between the planets causes them to accelerate in various directions.

As a simple example, imagine we are studying a group of immortal rabbits. Each year, 10% of them reproduce. If we started with 100 rabbits, in the second year we'd have 110, in the third year we'd have 121, and so on. We'd start having fractional amounts of rabbit very soon, but let's ignore that. Let's call x(n) the number of rabbits in year n. Then we can explicitly write out the rule as:

x(n+1)=1.1*x(n).

Again, this just says, “To go from year n to year n+1 take x(n) (the number of rabbits in year n) and multiply by 1.1, which is the same as adding 10%.” The rule tells us how to go from one state in time to the next one. If we started with 100 rabbits and kept applying this rule we'd get a sequence: {100, 110, 121, …}. This describes a trajectory of the system with time. “Trajectory” sounds like it should describe a path through space, and that's where the intuition comes from (see this article, for example). If we start with a different number of rabbits, we get a different trajectory (for example, starting with 90 rabbits gives us 99 rabbits at the end of the first year).
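The rule is easy to replicate in a few lines of code. Here's a minimal Python sketch (the function name `step` is just an illustrative choice) that reproduces the trajectory {100, 110, 121, …}:

```python
def step(x):
    # one application of the update rule: x(n+1) = 1.1 * x(n)
    return 1.1 * x

x = 100.0
trajectory = [x]
for _ in range(2):
    x = step(x)
    trajectory.append(x)

# rounding hides tiny floating-point error from multiplying by 1.1
print([round(v, 2) for v in trajectory])  # [100.0, 110.0, 121.0]
```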

Say we start a system off in one state (here, state means number of rabbits), and wait a certain amount of time. Where will it be? One approach is to just replicate what the system would do. If we want to know what it'll do after 10 steps of time, we write down the description of where it is, apply the rule to it to get the new description, and do this ten times. This can take a long time, especially if we have many time steps. More importantly, it doesn't seem to tell us anything about the deeper structure of the system. For example, how would things change if we started the system off in a different state? Do all states end up in a similar place or do they differ wildly? If we see that the state is headed in one direction, will it keep going in that direction? In some cases, we can solve the system to get a formula that just tells us how many rabbits we'd have given a starting number and a length of time. This is typically impossible; simple update rules can give rise to systems that have no such formula. But this system does have one, and the formula is

x(n) = 1.1^n * x(0).

Here x(0) is the starting value. Note that this formula doesn't need to be repeatedly applied; we can just put in the number of years and the starting value. Of course, given the simplicity of the update rule, applying the formula is not too much simpler than applying the update rule. The formula tells us that all of the trajectories of this system look qualitatively similar. The number of rabbits we have keeps growing and never stops, unless we started with no rabbits, in which case we'll always have no rabbits (within the assumptions of the model, of course, so we don't get to go out and buy some).
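We can check that the closed form x(n) = x(0) * 1.1^n agrees with repeatedly applying the update rule; a quick sketch:

```python
def by_iteration(x0, n):
    # apply x(n+1) = 1.1 * x(n) repeatedly, n times
    x = x0
    for _ in range(n):
        x = 1.1 * x
    return x

def by_formula(x0, n):
    # closed form: x(n) = x(0) * 1.1^n
    return x0 * 1.1 ** n

# the two agree (up to floating-point rounding) for any n
print(round(by_iteration(100.0, 10), 4))  # 259.3742
print(round(by_formula(100.0, 10), 4))    # 259.3742
```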

Or imagine that we're studying a group of sterile, mortal rabbits. Each year, 10% of them die. So if we start with 100 rabbits, after a year we'll have 90, then 81, and so on. This is quite similar to the other system, though the rule is different, and the number of rabbits dwindles towards zero. The rule can be used to generate a formula for the number of rabbits after a length of time. Again, this update rule and the associated formula give rise to relatively simple trajectories; the number of rabbits dwindles, getting closer and closer to zero no matter how many rabbits we start with.
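By the same reasoning as in the growing-population case, the formula here should be x(n) = x(0) * 0.9^n (the text doesn't write it out, so take this as an inferred form). A sketch:

```python
def dying_rabbits(x0, n):
    # inferred closed form for the decaying population: x(n) = x(0) * 0.9^n
    return x0 * 0.9 ** n

print(round(dying_rabbits(100.0, 1), 2))    # 90.0
print(round(dying_rabbits(100.0, 2), 2))    # 81.0
print(round(dying_rabbits(100.0, 100), 2))  # essentially zero
```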

The logistic map is a rule with more complicated behavior, attempting to model population growth slightly (but not much) more realistically. To simplify matters, it describes the population as a fraction of some maximum possible population, giving a number between 0 and 1. If the population in year n is x(n), the update rule is

x(n+1)=4*x(n)*(1-x(n))

To see what this equation does, let's start by ignoring the (1-x(n)). If we just had x(n+1)=4*x(n), we'd have a system much like the first example, except that the population would quadruple each year (a growth rate of 300%) instead of growing by 10%. The (1-x(n)) takes into account the effect of overpopulation and starvation, so that if there are too many individuals the population shrinks. As the population gets closer to 1, this term becomes more and more important and drives the next year's population lower and lower. This is a very simple model, only an abstraction, but it can already show quite complicated behavior. Some trajectories stay where they are, some oscillate through a finite set of values endlessly, and most wander wildly through the numbers between 0 and 1. This update rule has an associated formula, though not a very simple one. But changing the rule slightly (for example, changing the 4 to a nearby number) gives us a situation where we don't have one. And the formula is complicated enough that it's not apparent what various trajectories do or how they relate to each other.
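A minimal sketch of iterating the logistic map from an arbitrary starting point (0.2 here is just an illustrative choice):

```python
def logistic(x):
    # one step of the logistic map: x(n+1) = 4 * x(n) * (1 - x(n))
    return 4.0 * x * (1.0 - x)

x = 0.2
trajectory = [x]
for _ in range(9):
    x = logistic(x)
    trajectory.append(x)

# the population fraction always stays between 0 and 1,
# but the sequence itself wanders irregularly
print([round(v, 4) for v in trajectory])
```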

As a next step, we can start to explore the trajectories of the system more generally. We can try to classify the types of trajectories we see, or to look for particular trajectories that seem to determine the structure of the system or exemplify the possible options. The simplest trajectories a dynamical system shows are fixed points. A fixed point is a state of the system which stays put; the rule doesn't change anything. So, for example, take our collection of rabbits. If we start with zero rabbits, we'll always have zero rabbits. The trajectory is {0,0,0,…}. We can also ask how these fixed points relate to the nearby trajectories. Fixed points can be stable or unstable. A stable fixed point has nearby trajectories approaching it; it draws them in. On the other hand, unstable fixed points drive nearby trajectories away. So if we take the second example, and start with a small number of rabbits that are dying away, we'll get closer and closer to 0. Here, 0 is a stable fixed point. Stable fixed points are particularly useful because the nearby trajectories get closer and closer, and we learn something about the long-term behavior of the system, at least in that region. In the very first example, 0 is an unstable fixed point; if we start with no rabbits we'll always have no rabbits, but if we start with a small number they'll keep growing. The logistic map has two unstable fixed points: 0 and 0.75 (try putting them into the equation to see this), and no stable ones. In more complicated systems, a fixed point can be stable in some directions and unstable in others.
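We can verify these two fixed points directly. Checking instability uses a standard criterion that the text doesn't state: a fixed point of a one-dimensional map f is unstable when |f'(x)| > 1 there, since small deviations are then magnified by each application of the rule. A sketch:

```python
def f(x):
    # the logistic map
    return 4.0 * x * (1.0 - x)

def f_prime(x):
    # derivative of 4x(1 - x), i.e. 4 - 8x
    return 4.0 - 8.0 * x

for p in (0.0, 0.75):
    print(p, "fixed:", f(p) == p, "unstable:", abs(f_prime(p)) > 1)
# 0.0  fixed: True unstable: True   (|f'| = 4)
# 0.75 fixed: True unstable: True   (|f'| = 2)
```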

Oscillations are the next most complicated type of trajectory. Here the system bounces among a set of states and returns to where it started, and it repeats this pattern forever. The simplest oscillation has period 2, meaning that there are two distinct points. The system goes from the first to the second and then back to the first, and keeps going in this fashion. A period-3 oscillation would cycle between 3 numbers, and so on. The logistic map, for example, has a period-2 oscillation at the pair {0.3455, 0.9045}. Oscillations can have arbitrarily long periods, and in fact, the logistic map has oscillations of all periods.
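The pair {0.3455, 0.9045} is the rounded form of (5 ∓ √5)/8, which can be found by solving f(f(x)) = x and discarding the fixed points. We can check that the map swaps the two values:

```python
import math

def f(x):
    # the logistic map
    return 4.0 * x * (1.0 - x)

# the period-2 pair: solutions of f(f(x)) = x
# that are not fixed points of f itself
a = (5 - math.sqrt(5)) / 8  # ~ 0.3455
b = (5 + math.sqrt(5)) / 8  # ~ 0.9045

# applying the map sends a to b and b back to a
print(round(f(a), 4))  # 0.9045
print(round(f(b), 4))  # 0.3455
```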

Surprisingly, most trajectories in the logistic map show a much more complicated form and the dynamical system shows what is called chaotic behavior. Most of the trajectories do not settle down to a fixed point or repeat among a finite set of points. They wander the space, bouncing from point to point, and the patterns they form look quite random. Chaos shows two significant features. Trajectories starting from nearby points in the system separate very fast. This means that a trajectory starting at 0.3213 and one starting at 0.3214, for example, will rapidly end up in different places. This is surprising, but something very similar is true near the point 0 in the example of the growing rabbit populations. A population of 0 rabbits will never grow; a population of a few rabbits will grow rapidly. In a chaotic system this separation is true for most nearby pairs of points. And significantly, in chaos this is combined with a mixing of trajectories, so that trajectories wind about and between each other. For example, here are two trajectories in the logistic map:

0.3213, 0.8723, 0.4457, 0.9882, 0.0467, 0.1779, 0.5851, 0.9710, 0.1126, 0.3995, 0.9596, 0.1550, 0.5238, 0.9977, 0.0090, 0.0358, 0.1379, 0.4756, 0.9976, 0.0095

0.3214, 0.8724, 0.4452, 0.9880, 0.0474, 0.1806, 0.5919, 0.9662, 0.1305, 0.4537, 0.9914, 0.0340, 0.1312, 0.4559, 0.9922, 0.0308, 0.1196, 0.4211, 0.9751, 0.0971

Notice that they look quite different after a little while. Small differences in initial conditions get amplified, and crucially, the mixing means that these differences matter. A common analogy for the evolution of such systems is kneading dough by stretching it and folding it over. Bits of dough near each other are separated by the stretching, and the folding and squishing mean that bits of dough from distant parts get mixed together. Nearby trajectories are separated and distant trajectories are mixed together. The trajectories very rapidly start to look random, and the nearness of two trajectories says little about how far apart they were in the past.
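This divergence is easy to reproduce: start two trajectories at 0.3213 and 0.3214 and apply the map 19 times, matching the 20 values listed above. A sketch that tracks how far apart the trajectories get:

```python
def f(x):
    # the logistic map
    return 4.0 * x * (1.0 - x)

x, y = 0.3213, 0.3214
max_sep = abs(x - y)  # initial separation: 0.0001
for n in range(19):
    x, y = f(x), f(y)
    max_sep = max(max_sep, abs(x - y))

# the separation grows by orders of magnitude within a couple of dozen steps
print(round(x, 4), round(y, 4), round(max_sep, 4))
```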

This is fatal for trying to predict specific trajectories by starting somewhere and repeatedly applying the update rule. The sensitivity to where we start means that we would need to know the starting position to absurdly high precision for anything but the most short-term predictions. Even if we get the right starting position, small errors will rapidly grow, and small errors are inevitable when we simulate on a computer, which can store only finitely many digits, or on a finite piece of paper. What's most surprising about this phenomenon is how simple the system is that gives rise to it. It shouldn't be surprising that a system with millions of interacting parts can give rise to complex behavior. Here, however, we have a system that is described by one number, and whose update rule involves multiplying three numbers.

We're left with several families of interesting questions. The first asks why this happens. What properties of a system give rise to this sort of behavior? Is it ubiquitous, present in most systems, or is it a rare pathology that we've somehow stumbled across? The second family of questions asks what we should do. It seems that we're being pushed even further away from questions about individual trajectories, and more towards global and qualitative features of the system. What do these look like? Are individual trajectories even a useful construct, or do these other features take primacy? What does this phenomenon tell us about our descriptions of systems and how we should model the world? What are the sorts of questions we can reasonably expect our theories to answer? I'll return to these in a later post.