A system is defined to be nonlinear if the laws governing the time evolution of its state variables depend on the values of these variables in a manner that deviates from proportionality. Nonlinearity is widespread in nature [Nicolis (1995)]. At the microscopic level, the equations of motion of a system of particles under the effect of their own collisions, or the equations describing the interaction of radiation with matter, are nonlinear; at the macroscopic level, the equations describing the evolution of the conserved variables {x} of a one-component fluid exhibit the universal "inertial" nonlinearity (v · ∇)x, where v is the fluid velocity — itself part of the set of variables {x} — and ∇ the gradient operator; likewise, the composition variables of a chemically reactive mixture typically obey a set of nonlinear equations in which the principal source of nonlinearity is the law of mass action, linking the reaction velocity to the products of the concentrations of the species involved.

In recent years it has been realized that quite ordinary systems obeying simple nonlinear laws can give rise spontaneously to behaviors of considerable complexity associated with abrupt transitions, a multiplicity of states, rhythmic activity, pattern formation or a random-looking evolution which is referred to as deterministic chaos. Thermal convection in a fluid layer heated from below, Taylor vortex flow, turbulence, the generation of coherent light by a laser, chemical oscillations or the formation of a flame front provide some well-established examples of this ubiquitous property of nonlinear systems, which is also referred to as *self-organization*. The self-organization paradigm has proven to be a powerful tool for analyzing complex systems outside the traditional realm of physical sciences, notably biological systems and systems encountered in environmental science. (See, for example, Turbulence.)

Nonlinearity may remain "inactive" or, on the contrary, lead to qualitative changes of behavior depending on the values of the *control parameters* describing the way a system has been initially prepared or is being permanently solicited by the external world. In a *conservative* system (e.g., a system whose relevant observables obey the laws of mechanics), these parameters may stand for the initial amount of energy communicated to the system or for the action of an external force deriving from a potential. In a *dissipative* system (a system whose relevant observables generate, during their evolution, a strictly positive entropy production), a typical control parameter is the distance from thermodynamic equilibrium or from a phase transition point.

The awakening of the system's nonlinearities does not happen gradually but involves a succession of explosive events in the form of *instabilities*. Specifically, when the constraints exerted by the environment reach certain thresholds, small perturbations of the environment or small, spontaneously arising fluctuations become amplified, leading the system out of its basic state and pushing it towards a new regime. This transition takes the form of a *bifurcation*: when the initial state becomes unstable, it is replaced not by a unique regime but, generally speaking, by a multitude of stable regimes that are accessible simultaneously (Figure 1). Nothing in the initial preparation of the system enables one to know which particular regime will actually be chosen. Only chance, in the form of the particular fluctuation that happens to prevail at the propitious moment, will decide which branch is followed. This makes the system *sensitive to the parameters* controlling the position of the bifurcation point, since two macroscopically indiscernible systems, submitted to the same constraints, may follow entirely different paths.

**Figure 1. Typical bifurcation diagram. As the control parameter λ is varied, a reference state loses its stability and two new branches of solutions emerge at λ = λ_{1}. These branches, in turn, lose their stability beyond a secondary bifurcation point λ = λ_{2}, etc.**

Bifurcation is far from being a unique event. As the constraints are varied, a system typically undergoes not just a single transition but a whole sequence of transition phenomena, the characteristics of which depend on the nature of the nonlinearities present. In many cases, these transitions culminate in a regime which, despite its deterministic origin, is characterized by an irregular evolution of the variables in space and time resembling in many respects a game of chance. One refers to this phenomenon as *deterministic chaos*.

In their vast majority, natural systems are composed of a large number of interacting elementary subunits. At the microscopic level, this interaction is manifested through the inter- or intramolecular forces. At the mesoscopic and macroscopic level, the interparticle forces do not come into play in an explicit manner but are manifested indirectly through, for instance, the transfer of matter, energy, mechanical work or information between a subsystem and its surroundings.

As a rule, because of the interactions, each subunit undergoes complex nonlinear dynamics. At the microscopic level, this resembles in many respects a noise process, of which Brownian Motion is a typical example, but this complexity actually finds its origin in an extreme form of deterministic chaos characteristic of systems involving a large number of degrees of freedom. At the macroscopic level, complexity arises through the mechanism of bifurcations and a milder form of deterministic chaos characteristic of systems involving a small number of macrovariables describing the thermodynamic state of the system. Finally, at the mesoscopic level, an intermediate view is adopted in which the central quantity is the probability distribution of the macrovariables. *Complexity* is manifested here through a qualitative change in the properties of this probability (multimodality, critical fluctuations, etc.) across a transition point leading to multiple states or to chaos. A considerable amount of effort has been devoted to bridging the gap between these various levels of description [Prigogine (1980); Nicolis and Prigogine (1989)], but our understanding of this fundamental problem is still far from complete.

Motions of particles and their interactions at the microscopic level are governed by the fundamental laws of physics, essentially those of mechanics, quantum mechanics and electrodynamics. Progress in *chaos theory* has allowed one to reformulate on a new basis the connection between the complex dynamics induced by these interactions and the foundations of statistical mechanics, including the origin of *irreversibility*, through the study of the spectrum of the Liouville-like operators governing the evolution of probability densities. The advent of supercomputers has made it possible to envisage a complementary approach, in which the corresponding equations are written out explicitly for all the particles involved and solved on the computer through the microscopic simulation techniques of molecular dynamics or *Monte Carlo procedures* [Mareschal & Holian (1992)]. The complex phenomena characteristic of nonlinearity then emerge from the collective behavior in the ensemble — for instance, by performing averages of a certain property over the particles or over successive runs differing slightly in their initial conditions.

The mesoscopic approach is based on phenomenological equations describing an intermediate level. An example of this class is Chemical Reaction kinetics where the elementary steps are restricted to reactive collisions without explicit consideration of vibrational and rotational states of the molecules. At this level, the evolution laws are formulated as Stochastic Processes governed by Master equations or equations of the Fokker-Planck type which are accessible to the tools of *probability theory* [Nicolis & Prigogine (1989); van Kampen (1981)].
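The mesoscopic description can be made concrete with a small simulation. The sketch below is not taken from the source: it applies Gillespie's stochastic simulation algorithm, a standard way to sample trajectories of a master equation, to the simplest birth-death process (∅ → A at rate k₁, A → ∅ at rate k₂n); the rate constants and the sampling scheme are illustrative choices.

```python
import random

def gillespie_birth_death(k1=50.0, k2=1.0, n0=0, t_max=200.0, seed=1):
    """Gillespie simulation of the birth-death process 0 -> A (propensity k1)
    and A -> 0 (propensity k2 * n), a minimal master-equation model."""
    random.seed(seed)
    t, n = 0.0, n0
    samples = []
    while t < t_max:
        a1, a2 = k1, k2 * n          # propensities of the two elementary steps
        a0 = a1 + a2
        t += random.expovariate(a0)  # exponentially distributed waiting time
        if random.random() * a0 < a1:
            n += 1                   # birth event
        else:
            n -= 1                   # death event
        if t > 10.0:                 # discard the initial transient
            samples.append(n)
    return samples

samples = gillespie_birth_death()
mean_n = sum(samples) / len(samples)   # fluctuates around k1/k2 = 50
```

Sampling at event times is a slight approximation to the true time average, but the sample mean still settles near k₁/k₂, the mean of the Poissonian stationary distribution of this process.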

The third, macroscopic level of description is the Thermodynamic level. Here the individuality of processes disappears in the mathematical formulation and one resorts to the *constitutive relations* of thermodynamics linking phenomenologically the thermodynamic potentials to the state variables and the fluxes of the various processes present to the thermodynamic forces such as temperature or chemical potential gradients [De Groot and Mazur (1962)]. Molecular dynamics reveals that linear phenomenological laws of this second kind, with state-dependent coefficients, remain surprisingly robust even under very strong constraints. Still, far away from equilibrium, a purely thermodynamic characterization of the states of a system is not sufficient. In this range the most relevant mode of approach becomes the analysis of the phenomenological rate equations (reaction-diffusion equations, Navier-Stokes equations, etc.) derived from the balance equations of mass, momentum and energy supplemented with the constitutive relations, using the methods of stability, bifurcation and chaos theories. These methods are briefly summarized in the next two subsections.

A useful tool in classifying and comparing different types of dynamic behavior that are likely to arise within a system when the conditions to which it is submitted are varied is afforded by *qualitative* analysis, based on the geometric view of nonlinear systems [Guckenheimer and Holmes (1983)].

Consider a system described by a finite set of observables such as temperature, chemical composition, flow velocity, pressure, etc. One embeds its evolution into the abstract space spanned by all these variables. In this *phase space*, an instantaneous state of the system is represented by a point, and as time goes by, the point in question follows a curve, called the *phase trajectory*. By following the trajectories emanating from different initial states, one obtains a *phase portrait* which provides a valuable qualitative idea of the system's potentialities. As a general rule, for every natural system obeying the second law of thermodynamics (dissipative system), the phase trajectory will converge, after a certain time, towards an object in phase space whose dimension is strictly smaller than that of the phase space itself, and which is referred to as the *attractor*.

In view of the foregoing, understanding the types of behavior generated by a system amounts to classifying all the attractors that can be realized in a phase space. The simplest element one can embed in a space is the point. Consequently, one can imagine point attractors (Figure 2a), whose existence means that after a sufficiently long period of time, any initial state will tend towards a regime that will no longer evolve once it is established: the system will be in a stationary state.
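As an illustration (a sketch not taken from the source; the damped oscillator and its parameter values are assumed for concreteness), one can integrate a dissipative system numerically and watch every trajectory converge to the same point attractor at the origin of the phase plane:

```python
def trajectory(x0, v0, gamma=0.5, omega=1.0, dt=0.01, steps=5000):
    """Explicit-Euler integration of the damped oscillator
    x'' + gamma * x' + omega**2 * x = 0 in the (x, v) phase plane."""
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-gamma * v - omega ** 2 * x)
    return x, v

# three very different initial preparations of the same system
endpoints = [trajectory(x0, v0) for x0, v0 in [(1.0, 0.0), (-2.0, 3.0), (0.5, -0.5)]]
# after 50 time units every trajectory has spiraled into the point (0, 0)
```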

One level higher than a point in the hierarchy of geometric forms is the line. We can thus imagine attractors in the form of a closed curve towards which different possible histories of the system will tend (Figure 2b); these attractors are referred to as *limit cycles*. In this type of behavior, events are repeated in a regular and reproducible way: one has here an archetype of periodic phenomena, of which biological rhythms are a particularly important example.
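A standard example of such behavior, not named in the text, is the van der Pol oscillator; in the sketch below (parameters illustrative) trajectories started well inside and well outside the cycle settle onto the same periodic orbit:

```python
def vdp_amplitude(x0, v0, mu=1.0, dt=0.001, steps=100_000):
    """Euler integration of the van der Pol equation x'' - mu(1 - x^2)x' + x = 0;
    returns the peak |x| over the second half of the run, i.e. on the attractor."""
    x, v, peak = x0, v0, 0.0
    for i in range(steps):
        x, v = x + dt * v, v + dt * (mu * (1.0 - x * x) * v - x)
        if i > steps // 2:           # measure only after the transient has died out
            peak = max(peak, abs(x))
    return peak

inner = vdp_amplitude(0.1, 0.0)   # starts near the unstable stationary state
outer = vdp_amplitude(4.0, 0.0)   # starts far outside the cycle
# both histories end up on the same limit cycle, of amplitude close to 2
```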

A further level upwards in the hierarchy of complexity is occupied by *strange attractors* (Figure 2c). In contrast to the cases illustrated in Figures 2a and 2b, where the behavior became stable and regular once the system settled on the attractor, one now finds that the system performs an aperiodic and apparently erratic movement on the attractor, which has already been referred to as deterministic chaos. This behavior results from two opposite tendencies: an instability arising along certain directions of the attractor (horizontal arrow in Figure 2c) coexisting permanently with a stabilizing trend which prevents the trajectories from escaping and re-injects them into the attractor (vertical arrow in Figure 2c). To reconcile these two antagonistic tendencies, the system must free itself from the constraints of Euclidean geometry, which dictated the form of the trajectories in the examples of Figures 2a and 2b. A *fractal attractor* — that is, a set of points constituting an object somewhere between a surface and a three-dimensional volume, whose dimensionality is generally not an integer and thus lies outside Euclidean geometry — appears as the result of this radical change [Ott (1993)].

The existence of an unstable element in the dynamics of deterministic chaos means that the system evolves in a radically different way if its initial state is slightly different, even if it obeys exactly the same laws. This property of *sensitivity to initial conditions* leads to an unexpected consequence. Consider a system operating in the regime of deterministic chaos, and suppose that its initial state is given by the value of a certain variable at the instant zero. Due to the fact that the precision of any physical measurement is limited, two slightly different states, represented by two points in the phase space separated by a distance less than the precision of the measurement, will be indiscernible to the observer. And yet, after a certain lapse of time, these two states will end up becoming distinct and projecting themselves on distant parts of the attractor. It follows that it is no longer meaningful to make long-term predictions concerning the future of the underlying system.
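This divergence of nearby states can be demonstrated on the Lorenz system, a classic strange attractor (the model and the 10⁻⁸ perturbation below are illustrative choices, not taken from the text):

```python
def lorenz_run(state, steps, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler integration of the Lorenz equations, a standard chaotic system."""
    x, y, z = state
    for _ in range(steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    return x, y, z

a = lorenz_run((1.0, 1.0, 1.0), 25_000)          # integrate to t = 25
b = lorenz_run((1.0, 1.0, 1.0 + 1e-8), 25_000)   # indistinguishable initial state
distance = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
# the two states are now macroscopically distinct, though both remain on the attractor
```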

A major limitation hindering the *quantitative* study of nonlinear systems is that, as a rule, exact non-trivial solutions of nonlinear evolution equations are not available. Nevertheless, judicious application of perturbation theory combined with geometric techniques leads to the unexpected conclusion that in the vicinity of transition phenomena the dynamics of a nonlinear system is dramatically simplified, in the sense that it is dominated by a limited number of key variables, in terms of which all others can be expressed, and which furthermore obey universal evolution equations [Guckenheimer and Holmes (1983)]. For instance, in a system involving a finite number of variables and operating in the vicinity of the first bifurcation depicted in Figure 1 — referred to as a pitchfork bifurcation — there exists a single combination z of the initial variables, obeying the equation

dz/dτ = (λ − λ_{c})z − uz^{3}     (1)

in which τ is a scaled time, λ the control parameter, λ_{c} its critical value and u a real parameter whose value depends on the detailed structure of the system. Equation (1) is known as *normal form*, and the one-dimensional subspace of the initial phase space on which z is defined is known as the *center manifold*. The variable z itself is referred to as the *order parameter*.
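A direct numerical integration of the pitchfork normal form dz/dτ = (λ − λ_{c})z − uz³ (with the illustrative choice λ_{c} = 0, u = 1, not fixed by the text) reproduces the behavior of Figure 1: below the bifurcation every perturbation decays back to the reference state, while above it the sign of an arbitrarily small perturbation selects one of the two new branches z = ±√((λ − λ_{c})/u):

```python
def settle(z0, lam, lam_c=0.0, u=1.0, dt=0.01, steps=20_000):
    """Euler integration of the pitchfork normal form dz/dt = (lam - lam_c)*z - u*z**3."""
    z = z0
    for _ in range(steps):
        z += dt * ((lam - lam_c) * z - u * z ** 3)
    return z

below = settle(0.1, lam=-0.5)    # decays back to the reference state z = 0
upper = settle(+1e-6, lam=1.0)   # amplified onto the upper branch z = +1
lower = settle(-1e-6, lam=1.0)   # amplified onto the lower branch z = -1
```

This makes concrete the role of chance described earlier: two preparations differing by an unmeasurably small perturbation end up on different branches.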

More intricate situations arise in the presence of interaction between instabilities generating higher order bifurcation phenomena and, possibly, chaotic dynamics. In such cases one can still guarantee that the part of the dynamics that gives information on the bifurcating branches takes place in a phase space of reduced dimensionality. The explicit construction of the normal forms becomes, however, much more involved and their universality can no longer be guaranteed. Still, in the regime of chaotic dynamics universal properties of a new type may emerge [Ott (1993)].

The above developments open the tantalizing possibility of modeling the great diversity of complex phenomena observed in real-world multivariable nonlinear systems in terms of a limited number of variables or, equivalently, in terms of low-dimensional attractors. What is more, these results can be extended to spatially distributed systems involving at the outset an infinite number of degrees of freedom. The type of normal form depends, in this case, not only on the nature of the instability (e.g., non-oscillatory or oscillatory) but also on whether or not a *symmetry-breaking* process generating a characteristic length previously absent in the system is taking place. As an example, in the vicinity of an oscillatory instability without spatial symmetry breaking, one arrives at an equation known as the complex Landau-Ginzburg equation [Coullet & Gil (1988)]

∂z/∂τ = z + (1 + iα)∇^{2}z − (1 + iβ)|z|^{2}z     (2)

where z is now a complex-valued order parameter and the particular values of α and β depend on the detailed structure of the system at hand. Similar equations can be derived for symmetry-breaking instabilities or for interfering instabilities involving the interaction between oscillatory and symmetry-breaking modes. These equations have been studied intensely in recent years and have provided the interpretation of a large body of experimental data in such different contexts as fluid mechanics, optics, chemistry and materials science.
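The simplest solution of the complex Landau-Ginzburg equation is the spatially uniform one, for which the Laplacian term drops out and the order parameter obeys dz/dτ = z − (1 + iβ)|z|²z. A short numerical sketch (the value of β and the step size are illustrative assumptions) shows z relaxing onto a limit cycle of unit amplitude, on which only the phase keeps evolving:

```python
def uniform_cgl(z0, beta=1.5, dt=0.001, steps=50_000):
    """Euler integration of the spatially uniform complex Landau-Ginzburg
    dynamics dz/dt = z - (1 + i*beta) * |z|**2 * z."""
    z = z0
    for _ in range(steps):
        z += dt * (z - (1.0 + 1j * beta) * abs(z) ** 2 * z)
    return z

z_final = uniform_cgl(0.01 + 0.0j)   # a small initial perturbation of z = 0
# |z| has relaxed to (essentially) 1; the phase rotates at frequency -beta
```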

#### REFERENCES

Coullet, P. and Gil, L. (1988) Normal form description of broken symmetries, *Solid State Phenomena* 3–4, 57–76.

De Groot, S. and Mazur, P. (1962) *Nonequilibrium Thermodynamics*, North-Holland, Amsterdam.

Guckenheimer, J. and Holmes, Ph. (1983) *Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields*, Springer, New York.

Mareschal, M. and Holian, B. (Eds.) (1992) *Microscopic Simulations of Complex Hydrodynamic Phenomena*, Plenum, New York.

Nicolis, G. and Prigogine, I. (1989) *Exploring Complexity*, Freeman, New York.

Nicolis, G. (1995) *Introduction to Nonlinear Science*, Cambridge University Press, Cambridge.

Ott, E. (1993) *Chaos in Dynamical Systems*, Cambridge University Press, Cambridge.

Prigogine, I. (1980) *From Being to Becoming*, Freeman, San Francisco.

van Kampen, N. (1981) *Stochastic Processes in Physics and Chemistry*, North-Holland, Amsterdam.