Intuitively, a system is unpredictable if a lot of information about its past state tends to yield only a little information about its future state. There are many different ways to make this precise. Here we shall consider four different definitions of unpredictability, three of them original.

Let us consider a discrete dynamical system (f,X), which consists of a "state space" X and a function f mapping X into X. For the details to follow, it should be assumed that X is a finite space, so that concepts of algorithmic complexity may be easily applied. But in fact, the ideas are much more general; they apply to any metric space X. A trajectory of the system (f,X) is a sequence (x, f(x), f^2(x), …), where f^n(x) = f(f(…f(x)…)) is the n'th iterate of f applied to x.

In this notation, we may define the Liapunov sensitivity, or L.-sensitivity, of a dynamical system as follows:

Definition 4.1: The L.-sensitivity K(a,n) of a dynamical system (f,X) at a point x in X is defined as the average, over all y such that d(x,y) < a, of d(f^n(x), f^n(y)).

The function K tells you, if you know x to within accuracy a, how well you can estimate f^n(x).
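As a concrete illustration (not part of the text), K(a,n) can be estimated numerically for a simple one-dimensional map. The logistic map used here, and the choice of the arithmetic mean as the averaging function, are my own illustrative assumptions:

```python
import random

def logistic(x):
    # A standard example of a chaotic map: f(x) = 4x(1-x) on [0,1].
    return 4.0 * x * (1.0 - x)

def iterate(f, x, n):
    # Compute f^n(x), the n'th iterate of f applied to x.
    for _ in range(n):
        x = f(x)
    return x

def L_sensitivity(f, x, a, n, samples=1000):
    # Estimate K(a,n): the average of d(f^n(x), f^n(y)) over
    # points y with d(x,y) < a, using the arithmetic mean over
    # randomly sampled y (one choice of "average" among many).
    fx = iterate(f, x, n)
    total = 0.0
    for _ in range(samples):
        y = x + random.uniform(-a, a)
        y = min(max(y, 0.0), 1.0)  # keep y inside the state space [0,1]
        total += abs(fx - iterate(f, y, n))
    return total / samples

# Even with accuracy a = 10^-6, after 30 iterations the images of
# nearby points have spread across the interval:
print(L_sensitivity(logistic, 0.3, 1e-6, 30))
```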

Different choices of "averaging" function yield different definitions. The most common way of averaging two entities A and B is the arithmetic mean (A+B)/2, but there are other common formulas. For positive numbers such as we have here, there is the geometric mean (AB)^(1/2) and the power mean ((A^p + B^p)/2)^(1/p). In general, a function A which takes in n real numbers and puts out another is said to be an average if min(x_1,…,x_n) ≤ A(x_1,…,x_n) ≤ max(x_1,…,x_n) for all n-tuples of numbers x_1,…,x_n.
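These averaging formulas can be written out and checked against the defining inequality; the function names below are my own, and this is only an illustrative sketch:

```python
def arithmetic_mean(xs):
    # (x1 + ... + xn) / n
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # (x1 * x2 * ... * xn)^(1/n), defined for positive numbers.
    prod = 1.0
    for x in xs:
        prod *= x
    return prod ** (1.0 / len(xs))

def power_mean(xs, p):
    # ((x1^p + ... + xn^p) / n)^(1/p); p = 1 recovers the arithmetic mean.
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

# Any such average lies between the min and the max of its arguments:
xs = [1.0, 4.0, 16.0]
for avg in (arithmetic_mean(xs), geometric_mean(xs), power_mean(xs, 2)):
    assert min(xs) <= avg <= max(xs)
```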

If the average of a set of n numbers is defined as the maximum element of the set, and X is not a discrete space but a space of real numbers or vectors, then in many cases it is known that K(a,n) is equal to a·exp(L(x)n), where L(x) is called the "Liapunov exponent" of the dynamical system (Collet and Eckmann, 1980). Often the Liapunov exponent is independent of x, i.e. L(x) = L. This exponent has the advantage of being easily computable. But the maximum function is not always a reasonable choice of average: if one is interested in making a guess as to what f^n(x) is probably going to be, then one wants an average which (like, say, the arithmetic mean) does not give undue emphasis to unlikely situations.
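To make the exponent's computability concrete, the Liapunov exponent of a one-dimensional map can be estimated by averaging log|f'(x)| along a trajectory. The logistic map below is my own illustrative choice; it is a standard fact that its exponent at parameter 4 is log 2 ≈ 0.693:

```python
import math

def logistic(x):
    # f(x) = 4x(1-x)
    return 4.0 * x * (1.0 - x)

def logistic_deriv(x):
    # f'(x) = 4 - 8x
    return 4.0 - 8.0 * x

def liapunov_exponent(f, df, x, n=100000):
    # Estimate L(x) as the trajectory average of log|f'(x_i)|,
    # so that nearby trajectories separate like a * exp(L * n).
    total = 0.0
    for _ in range(n):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / n

# Should print a value near log 2 ≈ 0.693:
print(liapunov_exponent(logistic, logistic_deriv, 0.3))
```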

To measure the sensitivity of a system, one merely averages the sensitivities at all points x in X. Here again, there is a choice to be made: what sort of average? But since we are speaking conceptually and not making explicit calculations, this need not bother us.

Next, let us consider a form of unpredictability which has not previously been identified: structural sensitivity, or S-sensitivity.

Definition 4.2: The S-sensitivity K(a,n) of a dynamical system (f,X) at a point x in X is defined as the average, over all y such that d(x,y) < a, of d#(xf(x)…f^n(x), yf(y)…f^n(y)).

This measures how sensitively the structure of a trajectory depends on its initial point.

Conversely, one may also consider reverse structural sensitivity, or R.S.-sensitivity — roughly, how sensitively the point a trajectory passes through at time n depends on the structure of the trajectory up to that point. To be precise:

Definition 4.3: The R.S.-sensitivity K(a,n) of a dynamical system (f,X) at a point x in X is defined as the average, over all y such that d#(xf(x)…f^n(x), yf(y)…f^n(y)) < a, of d(f^n(x), f^n(y)).

This is not so similar to L.-sensitivity, but it has a simple intuitive interpretation: it measures how well, from observing patterns in the behavior of a system, one can determine its immediate future state.

Finally, let us define what might be called structural-structural sensitivity, or S.S.-sensitivity.

Definition 4.4: The S.S.-sensitivity K(a,n,m) of a dynamical system (f,X) at a point x in X is defined as the average, over all y such that d#(xf(x)…f^n(x), yf(y)…f^n(y)) < a, of d#(f^n(x)f^(n+1)(x)…f^(n+m)(x), f^n(y)f^(n+1)(y)…f^(n+m)(y)).

This measures how difficult it is to ascertain the future structure of the system from its past structure.

What is essential here is that we are talking about the unpredictability of structure rather than the unpredictability of specific values. It doesn’t matter how different two states are if they lead to similar structures, since (or so I will hypothesize) what the mind perceives is structure.

Theoretically, to measure the L.-, S.-, R.S.- or S.S.-sensitivity of a system, one merely averages the respective sensitivities at all points x in X. But of course, the word "measure" must be taken with a grain of salt.

The metric d# is, in general, an uncomputable quantity. For practical purposes, we must work instead with dC, the distance which considers only patterns in the computable set C. For example, C could be the set of all n'th order Boolean patterns, as discussed at the end of Chapter 3. If one replaces d# with dC in the above definitions, one obtains L.-, S.-, R.S.- and S.S.-sensitivities relative to C.
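As a crude illustration of working with a computable structural distance, one can compare symbolic trajectories by how well they compress together. This compression-based proxy is my own assumption for the sketch, standing in for the dC of the text, not a definition it gives:

```python
import random
import zlib

def symbolize(traj, threshold=0.5):
    # Reduce a numeric trajectory to a coarse symbolic string,
    # a simple stand-in for "the structure of" a trajectory.
    return ''.join('1' if x >= threshold else '0' for x in traj)

def d_C(s, t):
    # Normalized compression distance: a computable proxy for the
    # pattern-based metric d#, relative to the patterns that a
    # general-purpose compressor (here zlib) can detect.
    cs = len(zlib.compress(s.encode()))
    ct = len(zlib.compress(t.encode()))
    cst = len(zlib.compress((s + t).encode()))
    return (cst - min(cs, ct)) / max(cs, ct)

# Two structurally similar trajectories are close in this distance
# even though their exact values differ; a structurally different
# (noisy) trajectory is farther away.
periodic_a = symbolize([0.1, 0.9] * 100)
periodic_b = symbolize([0.2, 0.8] * 100)
random.seed(0)
noisy = symbolize([random.random() for _ in range(200)])
print(d_C(periodic_a, periodic_b), d_C(periodic_a, noisy))
```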

Estimation of the sensitivity of a system in these various senses could potentially be quite valuable. For instance, if a system were highly sensitive to initial conditions, but not highly structurally sensitive, then although one could not reasonably predict the exact future condition of the system, one would be able to predict the general structure of the future of the system. If a system were highly structurally sensitive but not highly S.S.-sensitive, then, although knowledge of the present state would tell little about the future structure, knowledge of the past structure would tell a lot. If a system were highly R.S.-sensitive but not highly S.S.-sensitive, then by studying the structure of a system one could reasonably predict the future structure but not the exact future state. The precise relation between the various forms of unpredictability has yet to be explored, but it seems likely that all these combinations are possible.

It seems to me that the new sensitivity measures defined here possess a very direct relation to unpredictability as it occurs in real social, psychological and biological situations: they speak of what studying a system, recognizing patterns in it, can tell about its future. L.-sensitivity, on the other hand, has no such connection. L.-sensitivity, and in particular the Liapunov exponent, is profoundly incisive in the analysis of intricate feedback systems such as turbulent flow.

However, speaking philosophically, it seems that when studying a system containing feedback on the level of structure as well as the level of physical parameters, one should consider unpredictability on the level of structure as well as the level of numerical parameters.

In conclusion, I would like to make the following conjecture: that when the logic relating self-organization with unpredictability is untangled, it will turn out that real highly self-organizing systems (society, the brain, the ecosystem, etc.) are highly Liapunov sensitive, structurally sensitive and R.S.-sensitive, but are not nearly so highly S.S.-sensitive. That is: roughly speaking, it should turn out that by studying the structure of the past, one can tell something about the structure of the future, but by tracking or attempting to predict specific events one will get nowhere.
