# Dynamical Systems Theory: What in the World is it?

### Mike Hochman

 This page is a non-technical overview of some subjects related to my field of research. It's meant for freshmen students, my friends, my mom (hi mom!) and anyone else who is interested; you don't have to be a mathematician to read it. I've tried not to lie too much, though sometimes I've bent the truth a little in the interest of clarity (I hope, anyway). If you would like more details take a look at the list of references at the end. If you have any questions or suggestions I'd be glad to hear from you.

Dynamical systems theory attempts to understand, or at least describe, the changes over time that occur in physical and artificial "systems". Examples of such systems include:

• The solar system (sun and planets),
• The weather,
• The motion of billiard balls on a billiard table,
• Sugar dissolving in a cup of coffee,
• The growth of crystals,
• The stock market,
• The formation of traffic jams,
• The behavior of the decimal digits of the square root of 2;
and so on.

Many areas of biology, physics, economics and applied mathematics involve a detailed analysis of systems like these, based on the particular laws governing their change (these laws, in turn, are derived from a suitable theory: Newtonian mechanics, fluid dynamics, mathematical economics, etc.). These are often case-by-case efforts in which each field and sub-field applies its own techniques and tricks (not to mention jargon).

## Describing dynamical systems mathematically

All these models can be unified conceptually in the mathematical notion of a dynamical system, which consists of two parts: the phase space and the dynamics.

The phase space of a dynamical system is the collection of all possible world-states of the system in question. Each world state represents a complete snapshot of the system at some moment in time.

The dynamics is a rule that transforms one point in the phase space (that is, a world state), representing the state of the system "now", into another point (= world state), representing the state of the system one time unit "later". In mathematical language, the dynamics is a function mapping world states into world states.

For example, if we are studying planetary motion then a world-state might consist of the locations and velocities of all planets and stars in some neighborhood of the solar system (or the galaxy, if we are ambitious); and the dynamics would be derived from the laws of gravity, which, given the positions and masses of the planets, determine the forces acting on them (since the masses don't change in time, these need not be part of the world state).

Once an initial world-state is chosen, the dynamics determines the world-state at all future times. If you want to know what the state will be two time steps from now just apply the rule twice: one application gives you the state one unit time from now, and the second application gives the state one unit time after that, which is two units time from now. Applying the rule again and again we obtain all future states (in some cases we can also reverse this and get past states).
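For readers who like to see things concretely, here is a minimal sketch of this iteration idea in Python. The phase space and the rule below are hypothetical toy choices (a "doubling" rule on numbers between 0 and 1), not taken from any of the examples above; the point is only that repeatedly applying one function generates all future states.

```python
# A toy dynamical system: the phase space is the set of numbers in
# [0, 1), and the dynamics is a rule mapping each state to the next.

def dynamics(x):
    # an arbitrary illustrative rule: double the number, keep the
    # fractional part
    return (2 * x) % 1.0

state = 0.1          # an initial world-state
orbit = [state]
for _ in range(5):   # apply the rule five times
    state = dynamics(state)
    orbit.append(state)

print(orbit)  # the states now, 1, 2, ..., 5 time units from now
```

Applying the rule twice gives the state two time units from now, three times gives three units, and so on, exactly as described above.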

Notice that, by describing the system in this way, we have made a few assumptions about the way the system evolves. First, we have assumed that the current state of the world uniquely determines the next state. Second, we have assumed that the transition from the current state to the next one is independent of the time at which the transition occurs. In other words, if we place the system in some predetermined state and let it evolve, the resulting evolution will be the same whether we do the experiment today or tomorrow or next week. There can be systems that don't satisfy one or both of these assumptions, but I will only talk about those that do.

## Abstraction

Abstract dynamics is the study of dynamical systems based on the description explained above and discarding most specialized information about the system or the origin of the dynamics. In particular we do not assume that the world states have any interpretation except as points in the phase space. Instead of describing planets or stock prices, the world-states are thought of as abstract "points" in a "space".

Similarly, we usually do not assume that the rule governing the dynamics has any particular form. For example, we do not require that it can be written using arithmetic operations (this would only make sense, in any case, if the world states were composed of numbers!), or that it can be computed efficiently, or even that it can be written down in any concrete and finite form. We only assume that the rule exists, and we would like to draw conclusions from that.

This point of view is actually too restrictive, and we usually retain a little more information, though still much less than in concrete models. For example, we often retain some information about which states are close to each other (this is called "topological dynamics"). In the example of planetary motion, two world states might be thought of as being close to each other if the corresponding astronomical objects in each state have similar positions and velocities. In the abstract version of this model we are left with world-states at varying degrees of closeness to each other, but we no longer know how to decompose a world-state into information about planets and stars.

There are other kinds of information about the phase space and dynamics which are sometimes retained, such as the relative probabilities of different world-states (this gives rise to "ergodic theory"), or certain geometric information (giving rise to "smooth dynamics").

## What is abstraction good for?

Passing from a concrete model to its abstract representation obviously entails some loss of information, and having given up so much detail about the system we can't expect to get results which are more precise than an analysis of the original model would have given, or to calculate things more efficiently (after all, in order to predict the weather you do need to know something about weather!). The abstract theory is not meant to replace detailed analysis.

Nonetheless, there are good reasons to study the abstract model. One is precisely its generality: any technique or concept we can deduce in the abstract model will immediately apply to any concrete model, giving tools that can be used on the entire spectrum of dynamical systems.

Stripping away the details of a system can also focus attention on its more essential properties. This is often a crucial step if we want to compare or draw analogies between systems of different kinds (apples and oranges, so to speak). Similarities are often apparent only when you step back and look at the big picture. For example, some models of the stock market and of the motion of billiard balls are very closely related, but you can't see this until you describe them abstractly and forget any interpretation of the variables as prices or velocities.

There is also another advantage to the abstract model. It is often the case that the details are actually irrelevant to the problem, and adopting the abstract point of view can lead to new insights that would otherwise be obscured. This is a psychological gain more than a mathematical one, but it can be quite important.

Here's a simple example from high-school math that may shed some light on this. Consider those "real world" problems that you often see, such as: train A leaves New York at 100mph, train B leaves Boston at 20mph, how far from Boston will they meet if the distance is 400 miles? In high school we learn to represent such a problem with equations. The point is that once you write down the equation, you can safely forget that "x" represents the distance traveled by train A; instead, "x" is an abstract variable and you can focus on solving the equation. Moreover, the methods for solving the equation are not special to train problems. They work equally well for car problems, horse problems, etc. What the equation represents is totally irrelevant to solving it. In the case of dynamics we are losing a lot more information than just the names of things, but it turns out enough is left that interesting conclusions can be drawn.
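For the curious, here is how that train problem boils down to one abstract equation (assuming, as such problems usually do, that the trains head toward each other along the same line):

```python
# The train problem reduced to an equation: the trains close a
# 400-mile gap at a combined 100 + 20 = 120 mph.
speed_a, speed_b, distance = 100, 20, 400  # mph, mph, miles

t = distance / (speed_a + speed_b)  # hours until the trains meet
from_boston = speed_b * t           # miles train B covers by then

print(t, from_boston)               # 10/3 hours, about 66.7 miles
```

Nothing in the two lines of algebra refers to trains; swap in cars or horses and the solution method is identical, which is exactly the point.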

## Classification of dynamical systems: philosophy and examples

A very general problem in abstract dynamics is to understand when two systems are "the same", either precisely or in some fuzzier sense. The following examples will give some idea of what this means.

• The dynamics of dissolving a drop of red ink in a cup of water is essentially "the same" as the process of dissolving a drop of black ink.
• In the ink example above, if we run time backwards we get a different dynamics than running it forward, because when time goes forward the ink becomes more and more spread out and dissolved, whereas if time goes backward it starts out dissolved and becomes more and more concentrated in a drop (this observation is the essence of the second law of thermodynamics).
• The dynamics of billiard balls moving without friction on a billiard table look the same if we run time forward or backward. In fact, if I showed you a movie of the balls rolling, you couldn't tell with any certainty which way time was going. Therefore the dynamics of billiard balls is essentially different from the dynamics of dissolving ink. (Actually, I am lying here. Ink and water are made up of atoms that bounce around a lot like billiard balls. But at the time scales we can observe they certainly behave differently).

One of the great successes of the abstract theory has been to show that many apparently different systems are in fact the same. However, it turns out that the problem of classifying all types of dynamical systems individually is too hard: there simply are too many of them. Instead, we can try to classify them according to coarser measures of similarity and dissimilarity of their dynamics, or some important features of the dynamics.

### Example: stationary vs. non-stationary dynamics

One distinction that can be made is between systems which are in a "stable state", called stationary systems, and those which are not. Being stationary does not mean that there is no longer any change in the system. Rather, it means that "the more things change the more they stay the same". A more precise definition of stationarity is that in a stationary system, if we observe the system at two times, we cannot, based on the observation, know with any certainty which of the two times came earlier.

For example, if I were to show you two short movies of billiard balls rolling around on a table without friction, you could not tell which was recorded first. Hence this system is stationary. On the other hand, if there is friction, then we are in the non-stationary situation, because the balls will slow down as time progresses, and their speed (or, rather, kinetic energy) gives us a way of deducing when the observation was made (but from the moment the balls stop completely, we are again in the stationary case, since no further change occurs. This is a common feature of physical systems -- even when they are not stationary, they often approach a stationary phase).

Often a system will begin in a non-stationary state and evolve towards a stationary one, like the billiards with friction. That this should occur is not at all obvious. It is an interesting result of the abstract theory that (under the weak assumption of compactness, which means the phase space is not "too big") there always exist stationary states.

For another example let's go back to ink dissolving in water. This process doesn't seem to be stationary but as time goes on it approaches those states where the ink is totally dissolved and evenly distributed throughout the water; these "totally dissolved" states form a stationary system, because once the ink is totally dissolved, it stays that way (I am lying a little here, but a full explanation would get messy. I'd like to mention, though, that the problem of understanding whether this system is stationary or not was a major conundrum in the late 19th century and confounded the greatest minds of that era. It was resolved eventually when it was understood that while the system is stationary, it is stationary over a time scale which is immensely longer than the age of the universe, so it will never appear stationary under human observation).

### Example: chaotic vs. non-chaotic (deterministic) dynamics

Another important distinction is between "chaotic" and "non-chaotic" dynamics, that is, between systems which exhibit sufficient "randomness" and "unpredictability" versus those which do not. Examples of chaotic systems include many physical systems (e.g. the weather) as well as social and economic systems (e.g. the stock market). In fact, most natural systems are chaotic.

A word of caution: The notion of chaos means different things to different people and is not a well-defined mathematical concept. Among its interpretations there are also a lot of populistic and misleading ones (some of which have little to do with science or math).

You may have noticed that I described dynamics as a rule governing the evolution of a system, but also spoke of "randomness". This might seem contradictory: if every state uniquely determines the next state, where does the randomness come from? This is resolved if we admit that in the real world, we usually do not know the state of the world precisely, but, rather, only approximately. For example, we can determine a lot about today's weather by measuring temperature and pressure at a number of locations on the globe. But this does not give us complete information: we are very, very far from knowing the location and velocity of every molecule in the atmosphere (this is undoubtedly impossible). If we had complete information we could predict tomorrow's weather according to physical theories of atomic motion and interaction. But the uncertainty that we have about the present state of the world means that our prediction of tomorrow's world-state will also be uncertain, and the uncertainty will often grow the further into the future we try to predict.
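This growth of uncertainty can be seen in a toy computation. The "doubling" rule below is a standard illustrative example (nothing remotely like a real weather model!): two initial states that differ by an unmeasurably small amount end up wildly different.

```python
# Two world-states too close to distinguish by any measurement,
# evolved under the same toy chaotic rule; the gap between them
# roughly doubles at each step, so prediction degrades quickly.

def dynamics(x):
    return (2 * x) % 1.0

x, y = 0.3, 0.3 + 1e-10   # indistinguishable initial states
for _ in range(40):
    x, y = dynamics(x), dynamics(y)

print(abs(x - y))  # no longer tiny: the states have drifted far apart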

On the other hand, in some systems the uncertainty does not increase. Roughly speaking, such systems are called deterministic. It is an interesting and non-trivial fact that if the uncertainty does not increase then, given infinitely many vague measurements of the past, one can predict the future with complete certainty! In other words, we have a dichotomy: either our ability to predict the future based on incomplete information decays very rapidly as we look farther and farther into the future, or else, given enough information, we can make a completely accurate prediction about the future, for all times! This dichotomy is only of partial practical use, since to make perfect prediction you need more information than is practically available, and the abstract theory doesn't tell you how to predict, only that it is possible to do so. But it has interesting theoretical (and philosophical) implications.

### Example: invariants

An invariant of a dynamical system is some quantity you can measure (either in practice or in theory) which comes out the same when you compute it for different systems that are "the same".

For example, the property of containing billiard balls is not an invariant, because, after some minor adjustments, a system with billiard balls is the same as a system with ping pong balls. In fact, in the abstract model there is no meaning to the question "does the system contain billiard balls?", because in the abstract model we have forgotten what the states mean.

Returning to the discussion of uncertainty, it turns out that it's possible to measure the rate at which the amount of uncertainty grows as you predict farther and farther into the future; and this quantity -- the rate of uncertainty growth -- is an invariant. This invariant is called entropy; it is related to, but not the same as, the notion of entropy in thermodynamics. One common interpretation of the word "chaos" is as "positive entropy", that is, a system is chaotic if the farther you look into the future, the poorer your eyesight becomes. Entropy gives more refined information since it gives a quantitative means to compare systems and determine which is more "chaotic" than the other.
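To make "rate of uncertainty growth" concrete, here is a toy computation, again using the illustrative doubling rule (a standard textbook example, not drawn from this article's systems). The gap between two nearby states grows by a factor of about 2 per step, so the measured average rate comes out near log 2, which is indeed the entropy of this particular toy system.

```python
# Estimating entropy as the average exponential growth rate of the
# gap between two nearby states (a toy version of a Lyapunov-exponent
# computation, which for the doubling rule agrees with the entropy).
from math import log

def dynamics(x):
    return (2 * x) % 1.0

x, y = 0.25, 0.25 + 1e-12   # two nearly identical states
gap = abs(y - x)
rates = []
for _ in range(20):
    x, y = dynamics(x), dynamics(y)
    new_gap = abs(x - y)
    rates.append(log(new_gap / gap))  # growth factor this step
    gap = new_gap

avg = sum(rates) / len(rates)
print(avg)  # close to log(2), about 0.693
```

A system with zero entropy would show no systematic growth at all, matching the "deterministic" side of the dichotomy discussed earlier.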

There are not so many other known invariants (at least, useful ones). One other example is the periodicity of the system, which measures the manner in which a system returns regularly to the same state (or similar state). The weather, for example, is approximately periodic with period 365 days (1 solar year). On the other hand billiard balls on a frictionless table are typically not periodic.

Another important notion is that of mixing. Roughly speaking, a system is mixing if, given any pair of target states and an initial state, one can perturb the initial state in two ways so that each perturbation leads to one of the targets after some common amount of time. This is another notion which is sometimes used as a definition of chaos, and, interestingly, it is not the same as having positive entropy; the interaction between the two is somewhat subtle.

## Some basic questions

Let us return to our goal of understanding the similarities and differences between different systems. Once we have identified some useful properties or invariants of dynamical systems, there are some natural questions one would like to answer. Here are a few of my favorites:

What are the relationships between the different attributes? For example, enough periodicity turns out to imply zero entropy (this shouldn't be surprising. If a system is periodic it means the past repeats regularly, so knowing the past we can predict the future perfectly; and we said this is the same as zero entropy).

When we are observing an unknown system, can we determine its attributes by observing a sequence of measurements? Notice that there is a difference between figuring them out from knowledge of the system and figuring them out by "participating" in the system. For example, if you know the law of gravity you can try to compute things about the solar system. But this is a different problem from that of an ancient astronomer who is ignorant of gravity but has at his disposal observations and measurements of the heavens. It turns out that even when we do not know the rules, we can sometimes estimate the entropy of a system, though not necessarily its periodicity!

Given a concrete rule governing the dynamics of a system, can we use the rule (i.e. its description) to decide if the system has a certain property (e.g. positive entropy)? Surprisingly, this is sometimes impossible. There are very simple rules that can generate dynamics so complicated that we can actually prove that we cannot understand them completely.

Do "most" systems have a certain attribute? This is a somewhat subtle question because it depends what you mean by "most". But it turns out that, rather generally, for a given dynamical property either most systems have it, or most don't (it can't be split down the middle, so to speak). For example, most systems have zero entropy, but most systems are also aperiodic.

Which real-world systems have which attributes? As we already mentioned, there are many real-world systems that display chaotic behavior like positive entropy and mixing, but there is a long list about which we are still not sure, and it is a challenge to develop methods for deciding this. This is a problem at the interface of abstract and concrete dynamics: testing specific systems for abstract properties.

## Other applications

There are also many interesting applications of dynamical systems theory to other areas of mathematics. In particular, dynamical methods have had great success in combinatorics and number theory, and are fundamental to parts of information theory. I won't go into these.

Here are a few online sources that I know of:

• Tomasz Downarowicz's webpage contains a friendly and fairly nontechnical explanation, assuming basic undergraduate level mathematics
• Karl Petersen's webpage contains some nice and fairly elementary lecture notes.
• Steve Kalikow's book on ergodic theory. The book is online and free. It is meant as a graduate course but is fairly non-technical.

There's lots of literature available on the subject, from popular accounts to technical monographs. I highly recommend these books:

• "Dynamical systems and ergodic theory", by M. Pollicott and M. Yuri.
• "Single orbit dynamics", by B. Weiss.
Both are a great place to start and give a broad overview of the field. Pollicott and Yuri's book is more elementary and more thorough, though it assumes some knowledge of university-level mathematics. Weiss' book isn't a textbook and assumes more "mathematical maturity", and it's more of a survey than a monograph, but it's short, well written and gives the big picture.

Also worth looking at are:

• "Introduction to ergodic theory", by P. Walters (includes also a discussion of topological dynamics. An excellent place to start, though slightly out of date).
• "Recurrence in ergodic theory and combinatorial number theory", by H. Furstenberg (focuses on applications to combinatorics, but also an excellent general introduction).
• "Ergodic theory of discrete sample paths", by P. Shields (focuses on entropy theory, good elementary introduction).
• "Introduction to the modern theory of dynamical systems", by A. Katok and B. Hasselblatt (for those interested in the theory of dynamics on manifolds).
• "Fundamentals of measurable dynamics", by D. Rudolph (a more modern intro to ergodic theory).