Showing posts with label Physics department. Show all posts

Monday, March 30, 2015

Formulating a Feature Extractor Feedback System as an Ordinary Differential Equation

The basic idea of a Feature Extractor Feedback System (FEFS) is to have an audio signal generator whose output is analysed with some feature extractor, and this time varying feature is mapped to control parameters of the signal generator in a feedback loop.



What would be the simplest possible FEFS that still is capable of a wide range of sounds? Any FEFS must have the three components: a generator, a feature extractor and a mapping from signal descriptors to synthesis parameters. As for the simplicity of a model, one way to assess it would be to formulate it as a dynamic system and count its dimension, i.e. the number of state variables.

Although FEFS were originally explored as discrete time systems, some variants can be designed using ordinary differential equations. The generators are simply some type of oscillator, but it may be less straightforward to implement the feature extractor in terms of ordinary differential equations. However, the feature extractor (also called signal descriptor) does not have to be very complicated.

One of the simplest possible signal descriptors is an envelope follower that measures the sound's amplitude as it changes over time. An envelope follower can be easily constructed using differential equations. The idea is simply to apply a lowpass filter (as described in a previous post) to the squared input signal.
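In discrete time, this amounts to a one-pole lowpass filter applied to the squared signal. A minimal sketch (the time constant, sample rate and test frequency below are illustrative choices, not taken from this post):

```python
import math

def envelope_follower(signal, sr, tau):
    """One-pole lowpass of the squared input: tau * dA/dt = x^2 - A."""
    c = 1.0 / (tau * sr)   # per-sample smoothing coefficient, dt/tau
    A = 0.0
    out = []
    for x in signal:
        A += c * (x * x - A)
        out.append(A)
    return out

sr = 44100
# one second of a unit-amplitude sine at 440 Hz
sig = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
env = envelope_follower(sig, sr, tau=0.05)
# the follower settles near the mean of x^2, i.e. 0.5 for a unit sine
amp = math.sqrt(2 * env[-1])   # recovers the sine's peak amplitude, ~1.0
```

Note that the follower tracks the mean square, so the square root of twice its output estimates the peak amplitude of a sinusoid.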

For the signal generator, let us consider a sinusoidal oscillator with variable amplitude and frequency. Although a single oscillator could be used for a FEFS, here we will consider a system of N circularly connected oscillators.

The amplitude follower introduces slow changes to the oscillators' control parameters; since it changes smoothly, the synthesis parameters would otherwise follow the same slow, smooth rhythm. In this system, however, we will use a discontinuous mapping from the measured amplitude of each oscillator to its amplitude and frequency. To this end, the mapping will be based on the relative measured amplitudes of pairs of adjacent oscillators (remember, the oscillators are positioned on a circle).

Let g(A) be the mapping function. The full system is

fefs-equation
with control parameters k1, k2, k3, K and τ. The variables θ are the oscillators' phases, a are the amplitude control parameters, A is the output of the envelope follower, and x(t) is the output signal. Since x(t) is an N-dimensional vector, any mixture of the N signals can be used as output.

Let the mapping function be defined as

mapping-function

where U is Heaviside's step function and the bj are a set of coefficients. Whenever the amplitude of an oscillator grows past the amplitude of its neighboring oscillators, the value of the function g changes, but as long as the relative amplitudes stay within the same order relation, g remains constant. Thus, with a sufficiently slow amplitude envelope follower, g should remain constant for relatively long periods before switching to a new state. In the first equation, which governs the oscillators' phases, the g functions determine the frequencies together with a coupling between oscillators. This coupling term is the same as is used in the Kuramoto model, but here it is usually restricted to two other oscillators. The amplitude a grows at a speed determined by g but is kept in check by the quadratic damping term.

Although this model has many parameters to tweak, some general observations can be made. The system is designed to facilitate a kind of instability, where the discontinuous function g may tip the system over into a new state even after it may appear to have settled on some steady state. Note that there is a finite number of possible values for the function g: since U(x) is either 0 or 1, the number of distinct states is at most 2^N for N oscillators. (The system's dimension is 3N; the x variable in the last equation is really just a notational convenience.)
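As a sketch of one possible concrete instantiation (the exact forms of the equations and all parameter values below are assumptions pieced together from the prose description, not the post's exact system): g is built from step functions of neighbouring envelope amplitudes, the phases get Kuramoto-style coupling, the amplitudes grow linearly with quadratic damping, and the envelope follower lowpasses the squared signal.

```python
import math

def heaviside(x):
    return 1.0 if x > 0 else 0.0

def simulate(N=4, steps=5000, dt=0.001, K=0.5, tau=0.5,
             k1=2.0, k2=1.0, k3=1.0, b=(0.5, 1.0, 0.5)):
    theta = [0.1 * i for i in range(N)]       # oscillator phases
    a = [0.1 + 0.05 * i for i in range(N)]    # amplitude control parameters
    A = [0.0] * N                             # envelope follower outputs
    for _ in range(steps):
        x = [a[i] * math.sin(theta[i]) for i in range(N)]
        # discontinuous mapping from relative amplitudes of adjacent pairs
        g = [b[0]
             + b[1] * heaviside(A[i] - A[(i - 1) % N])
             + b[2] * heaviside(A[i] - A[(i + 1) % N])
             for i in range(N)]
        # phases: frequency set by g, plus Kuramoto coupling to neighbours
        theta = [theta[i] + dt * (k1 * g[i]
                 + K * (math.sin(theta[(i + 1) % N] - theta[i])
                        + math.sin(theta[(i - 1) % N] - theta[i])))
                 for i in range(N)]
        # amplitudes: growth set by g, kept in check by quadratic damping
        a = [a[i] + dt * (k2 * g[i] - k3 * a[i] ** 2) for i in range(N)]
        # envelope follower: lowpass of the squared signal
        A = [A[i] + dt / tau * (x[i] ** 2 - A[i]) for i in range(N)]
    return theta, a, A

theta, a, A = simulate()
```

With these forms the quadratic damping bounds each amplitude near the square root of g, so the simulation stays finite no matter which states g visits.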

There may be periods of rapid alteration between two states of g. There may also be periodic patterns that cycle through more than two states. Over longer time spans the system is likely to go through a number of different patterns, including dwelling a long time in one state.

Let S be the total number of states visited by the system, given its parameter values and specific initial conditions. Then S/2^N is the relative number of visited states. It can be conjectured that the relative number of states visited should decrease as the system's dimension increases. Or does it just take a much longer time for the system to explore all of the available states as N grows?

The coupling term may induce synchronisation between the oscillators, but it may also make the system's behaviour more complex. Without coupling, each oscillator would only be able to run at a discrete set of frequencies as determined by the mapping function. But with a non-zero coupling, the instantaneous frequencies will be pushed up or down depending on the phases of the linked oscillators. The coupling term is an example of the seemingly trivial fact that adding structural complexity to the model increases its behavioural complexity.

There are many papers on coupled systems of oscillators such as the Kuramoto model, but typically the oscillators interact through their phase variables. In the above model, the interaction is mediated through a function of the waveform, as well as directly between the phases through the coupling term. Therefore the choice of waveform should influence the dynamics, which indeed has been found to be the case.

With all the free choices of parameters, of the b coefficients, the waveform and the coupling topology, this model allows for a large set of concrete instantiations. It is not the simplest conceivable example of a FEFS, but its full description still fits in a few equations and coefficients, while it is capable of seemingly unpredictable behaviour over very long time spans.

Wednesday, February 26, 2014

Manifesto for self-generating patches

Ideas for the implementation of autonomous instruments in analog modular synths (v. 0.2)

The following guidelines are not meant as aesthetic value judgements or prescriptions as to what people should do with their modulars (as always, do what you want!). The purpose is to propose some principles for the exploration of a limited class of patches and a particular mode of using the modular as an instrument.

Self-generating patches are those which, when left running without manual interference, produce complex and varied musical patterns. Usually, the results will be more or less unpredictable. In this class of patches, there are no limitations as to what modules to use and how to connect them, except that one should not change the patch or touch any knobs after the patch has been set up to run. An initial phase of testing and tweaking is of course allowed, but if preparing a recording as documentation of the self-generating patch, it should just run uninterrupted on its own.

A stricter version of the same concept is to try to make a deterministic autonomous system in which there is no source of modulation (such as LFOs or sequencers) that is not itself modulated by other sources. In consequence, the patch has to be a feedback system.

The patch may be regarded as a network with modules as the nodes and patch cords as the links. Specifically, it is a directed graph, because each patch cord runs from an output to an input. (The requirement that there be no source of modulation which is not itself modulated by other modules implies that, e.g., noise modules or LFOs without any input are not allowed.) Thus, in the graph corresponding to the patch, each node that belongs to the graph must have at least one incoming link and at least one outgoing link. The entire patch must be interconnected in the sense that one can follow the patch cords from any module through intervening modules to any other module that belongs to the patch.
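The two graph conditions, that every module is both modulated and modulating and that the whole patch hangs together, are easy to check mechanically. A sketch, using made-up module names purely for illustration:

```python
from collections import deque

def is_valid_patch(edges):
    """Check that every node has in- and out-degree >= 1, and that the
    patch is connected when patch cords are followed in either direction."""
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    outdeg = {n: 0 for n in nodes}
    undirected = {n: set() for n in nodes}
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
        undirected[src].add(dst)
        undirected[dst].add(src)
    # no unmodulated sources, no dead-end sinks
    if any(indeg[n] == 0 or outdeg[n] == 0 for n in nodes):
        return False
    # breadth-first search from an arbitrary module
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        for neighbour in undirected[queue.popleft()]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen == nodes

# a feedback loop of three modules satisfies both conditions
loop = [("VCO", "VCF"), ("VCF", "VCA"), ("VCA", "VCO")]
# adding an unmodulated LFO breaks the rules: it has an output but no input
with_lfo = loop + [("LFO", "VCO")]
```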


Criterion of elegance:
The smaller the number of modules and patch cords used, the more elegant the patch is. (Caveat: modules are not straightforwardly comparable. There are small and simple modules with restricted possibilities, and modules with lots of features that may correspond to using several simpler modules.)

Aesthetic judgement:
Why not organize competitions where the audience may vote for their favourite patches, or perhaps let a panel of experts decide?

Standards of documentation:
Make a high quality audio recording with no post processing other than possibly volume adjustment. Video recordings and/or photos of the patch are welcome, but a detailed diagram explaining the patch and settings of all knobs and switches involved should be submitted. The diagram should provide all the information necessary to reconstruct the patch.

Criterion of robustness:
Try to reconstruct the patch with some modules replaced by equivalent ones. Swap one oscillator for another one, use a different filter or VCA and try to get a similar sound. Also try small adjustments of knobs and see whether it affects the sound in a radical way. The more robust a patch is, the easier it should be for other modular enthusiasts to recreate a similar patch on their system.

Criteria of objective complexity:
The patch is supposed to generate complex, evolving sounds, not just a static drone or a steady noise. Define your own musical complexity signal descriptor and apply it to the signal. Or use one of the existing complexity measures.

Dissemination:
Spread your results and let us know about your amazing patch!


Tuesday, February 11, 2014

The geometry of drifting apart

Why do point particles drift apart when they are randomly shuffled around? Of course the particles may be restricted by walls that they keep bumping into, or there may be some attractive force that makes them stick together, but let us assume that there are no such restrictions. The points move freely in a plane, only subject to the unpredictable force of a push in a random direction.

Suppose the point xn (at discrete time n) is perturbed by some stochastic vector ξ, defined in polar coordinates (r, α) with uniform density functions f, such that 

fr(ξr) = 1/R,  0 ≤ ξr ≤ R
fα(ξα) = 1/2π,  0 ≤ ξα < 2π.

Thus, xn+1 = xn + ξ, and the point may move to any other point within a circle centered around it and with radius R.


Now, suppose there is a point p which can move to any point inside a circle P in one step of time, and a point q that can move to any point within a circle Q.
First, suppose the point p remains at its position and the point q moves according to the probability density function. For the distance ||p-q|| to remain unchanged, q has to move to some point on the blue arc that marks points equidistant from p. As can be easily seen, the blue arc divides the circle Q into two unequal parts, with the smaller part closest to p. Therefore, the probability of q moving away from p is greater than the probability of approaching p. As the distance ||p-q|| increases, the arc through q obviously becomes flatter, thereby dividing Q more equally. In consequence, when p and q are close, they will be likely to move away from each other at a faster average rate than when they are farther apart, but they will always tend to continue drifting apart.

After q has moved, the same reasoning can be applied to p. Furthermore, the same geometric argument works with several other probability density functions as well.
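A quick Monte Carlo check of the argument, using the step-size distribution defined above with R = 1 (the numbers of trials and steps are arbitrary choices):

```python
import math
import random

random.seed(1)
R = 1.0

def push(p):
    # radius uniform on [0, R], angle uniform on [0, 2*pi), as in the text
    r = R * random.random()
    alpha = 2 * math.pi * random.random()
    return (p[0] + r * math.cos(alpha), p[1] + r * math.sin(alpha))

trials, steps, d0 = 500, 200, 0.5
total = 0.0
for _ in range(trials):
    p, q = (0.0, 0.0), (d0, 0.0)   # two points starting close together
    for _ in range(steps):
        p, q = push(p), push(q)
    total += math.hypot(p[0] - q[0], p[1] - q[1])

# the mean final distance clearly exceeds the initial separation d0
mean_distance = total / trials
```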

When a single point is repeatedly pushed around, it traces out a path, a Brownian motion that results in skeins such as this one.


Different probability density functions may produce tangles with other visual characteristics. The stochastic displacement vector itself may be Brownian noise, in which case the path is more likely to travel in more or less the same direction for several steps of time. Then two nearby points will separate even faster.

Friday, November 1, 2013

The theoretical minimum of physics


The Theoretical Minimum.

What You Need to Know to Start Doing Physics

by Susskind and Hrabovsky, 2013

This crash course in classical mechanics is targeted at those who “regretted not taking physics at university”, or who perhaps did but have forgotten most of it, or anyone who is just curious and wants to learn how to think like a physicist. Since first-year university physics courses usually have rather high drop-out rates, there must be some genuine difficulties to overcome. Instead of dwelling on the mind-boggling paradoxes of quantum mechanics and relativity as most popular physics books do, wrapping it all up in fluffy metaphors and allusions to eastern philosophy, The Theoretical Minimum offers a glimpse of the actual calculations and their theoretical underpinnings in classical mechanics.

This two-hundred-page book grew out of a series of lectures given by Susskind, but adds a series of mathematical interludes that serve as refreshers on calculus. Although covering almost exactly the same material as the book, the lectures are a good complement. Some explanations may be clearer in the classroom, often prompted by questions from the audience. Although Susskind is accompanied by Hrabovsky as a second author, the text mysteriously addresses the reader in the first person singular.

A typical first-semester physics textbook may cover less theory than The Theoretical Minimum in a thousand-page volume, although it would probably cover relativity theory, which is not discussed in this book. There are a few well-chosen exercises in The Theoretical Minimum, some quite easy and a few that take some time to solve. “You can be dumb as hell and still solve the problem”, as Susskind puts it in one of the lectures while discussing the Lagrangian formulation of mechanics versus Newton's equations. That quote fits as a description of the exercises too, as many of them can be solved without really gaining a solid understanding of how it all works.

The book begins by introducing the concept of conservation of information and how it applies to deterministic, reversible systems (all systems considered in classical mechanics are deterministic and reversible). Halfway through the book the first more advanced ideas come into play: the Lagrangian and the principle of least action. In general, one gets an idea of what kinds of questions physicists care about, such as symmetries and conservation laws. Examples of symmetries that are discussed include spatial translation invariance and time shift invariance, and the conservation of energy is a recurrent theme. The trick is simple: take the time derivative of the Lagrangian or the Hamiltonian, and show it to be zero. The principle of least action requires more sophisticated mathematics (the calculus of variations), although the authors try to explain it in very simple terms. Nonetheless, that part is not very easy to follow.
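For example, in the Hamiltonian formulation the trick takes one line. Using Hamilton's equations dq/dt = ∂H/∂p and dp/dt = −∂H/∂q, and assuming H has no explicit time dependence,

      dH/dt = (∂H/∂q)(dq/dt) + (∂H/∂p)(dp/dt) = (∂H/∂q)(∂H/∂p) − (∂H/∂p)(∂H/∂q) = 0,

so the energy is conserved along any trajectory.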

The writing is concise, yet almost colloquial, with only a few typos. Mathematical rigour is thrown out whenever it would clutter the exposition. Susskind does not care for limits in the formulation of derivatives, but uses a delta or an epsilon that is supposedly infinitesimal in a loosely nonstandard-analysis kind of way. Most derivations are easy to follow, using elementary calculus and patiently laid out step by step. Some background in one-variable and vector calculus will be necessary to follow the text, although all math that is needed (which is not very much) is summarized in the mathematical interludes.

Why should we need to know about Lagrangians, Hamiltonians and Poisson brackets, a student may ask. Susskind's answer might be that Lagrangians make the solution of certain problems much easier than trying to apply Newton's equations, and that Hamiltonians play an important role in quantum mechanics.

The Theoretical Minimum is probably the most concise introduction to advanced physics out there, highly suitable for self-study. It provides much of the essential background needed for books reviewed here in previous posts, such as Steeb's Nonlinear Workbook or Haken's Synergetics.

Wednesday, September 25, 2013

How to patch your own oscillator

The charming world of analog modular synthesis offers many choices regarding how to construct one's instrument from components. There are lots of oscillators, filters, VCAs, LFOs, signal processors and utility modules to choose among. In that setting, it can be very interesting to build something as elementary as an oscillator out of even more basic components. Here is an example of how it can be done with two modules, neither of which functions as an oscillator on its own.

The modules needed are a utility module that mixes, offsets and inverts signals, and a dual slew limiter (or two separate slew limiters). In particular, this example will work with Doepfer's Slew Limiter A-170 SL and wmd's Invert Offset mk II. However, there is nothing magic about these modules, so other modules that offer equivalent functionality may replace them.


Five patch cords are needed to connect the modules as illustrated. Then, with some tweaking of the knobs, slow oscillations should occur. It is possible to influence the frequency by the settings of all the knobs. By adjusting the two lower knobs of A-170, controlling the rise and fall times, the wave shape can also be varied from rising ramp through triangle to falling ramp. The amplitude may be low, and the frequency usually sub-audio, although low bass frequencies in the audio range can be obtained. The effects are best observed if the CV out of the Invert Offset is routed to the frequency input of another oscillator.

What is actually going on in this patch? To a first approximation, the slew limiter can be regarded as an integrator. In fact, it is probably more accurate to think of it as a leaky integrator. The Invert Offset consists of two identical blocks with two signal inputs and two outputs each. Let us introduce the labels x+, x-, y+ and y- for the output signals, and ux, uy, vx and vy for the inputs, as shown in the sketch above. The knobs, labeled cx and cy, add a constant offset to the signal. Inferring from the user's manual, the following set of equations should describe what the module does.
Expressing the action of the slew limiter as an integral, and following the patch cords that go into the inputs of the Invert Offset module, the system is given by:
After a number of substitutions, and taking derivatives to get rid of the integrals, the system simplifies to:
If the constants are both zero, the eigenvalues of this system are 1±i, indicating that the system is unstable. Clearly something in the model is wrong, since the actual patch does not blow up in any way. As hinted at earlier, the slew limiters do not actually integrate the signal. If they did, there would be infinite gain at dc, so the output would keep increasing linearly for any constant input signal. What happens in reality is that, starting from a relaxed state and feeding a constant signal into a slew limiter, the output grows from zero until it reaches the level of the input. If one had two true integrators and an inverter, the equations for a harmonic oscillator
could be realized quite easily. 
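In discrete time, the two-integrators-plus-inverter loop takes only a few lines (a sketch; the step size and run length are arbitrary). Updating the second integrator with the already-updated output of the first keeps the numerical amplitude from growing:

```python
# harmonic oscillator from two integrators and an inverter:
# dx/dt = y, dy/dt = -x
dt = 0.001
x, y = 1.0, 0.0
xs = []
for _ in range(int(10 / dt)):
    x = x + dt * y        # first integrator
    y = y - dt * x        # second integrator, fed the inverted output
    xs.append(x)

# x should oscillate like cos(t) with nearly constant amplitude
sign_changes = sum(1 for u, v in zip(xs, xs[1:]) if u * v < 0)
```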

The moral of this failed attempt at modeling two quite simple modules is that even seemingly simple modules may hide more complex behaviour than one would naively suspect. In any case, it may be surprising to find that five patch cords connecting these modules in the right way are all it takes to turn them into a low frequency oscillator. Although there is more than one way to patch up an oscillator from these two modules, there are many more ways to patch up systems that do not oscillate. Bistable systems with hysteresis are the result in most cases.


Friday, August 9, 2013

Synergetics, the book


Hermann Haken: Synergetics. Introduction and Advanced Topics. 

[Disclaimer: There are many things in this book that I do not understand, although hopefully I have grasped the big picture.]

Under the term Synergetics, Haken collects a number of approaches that can be useful in a variety of scientific disciplines ranging from physics, chemistry and biology to economics and even sociology. Synergetics is presented as a discipline in its own right, with its characteristic concepts and methods. Yet this discipline draws on related fields such as thermodynamics, statistical mechanics, information theory, dynamic systems, control theory, bifurcations and catastrophe theory. Synergetics proposes to shed light on self-organized phenomena in various areas and to treat them within a unified apparatus. In particular, the slaving principle is the one trick that is used again and again. The slaving principle can be thought of in terms of a dynamic system where some variables change fast and others slowly, with a corresponding separation into stable and unstable modes. The stable modes can be eliminated, leaving the unstable modes as order parameters, which results in great simplifications.

This tome contains two classic volumes in one. Volume one (Introduction) begins gently with tutorial chapters on basic probability theory, ordinary differential equations, and their combination in stochastic differential equations. After the theoretical background has been presented, there is a chapter on self-organization followed by several chapters devoted to applications in various domains. First, the chapter on physics deals mainly with lasers. Then, as the chapters turn to chemistry, biology and economics in turn, the treatment becomes more and more accessible to the non-specialist. However, at the same time the models seem to become increasingly simplistic. The examples from biology and population dynamics are already sketchy, and the discussion of applications to economics and sociology does not introduce many useful ideas. Nonetheless, one should remember that Haken was among the pioneers who brought a physicist's tool kit to these fields. In particular,
[...] synergetics has established links between dynamic systems theory and statistical physics. Undoubtedly, the marriage between these two disciplines has started. (p. 364 of the double volume) 
Further, regarding the connections of physics, chemistry, biology and even softer sciences:
It thus appears that we are presently from two different sides digging a tunnel under a big mountain which has so far separated different disciplines, in particular the “soft” from the “hard” sciences. (p. 364-5) 
We see the results of this excavation in numerous papers today, where physicists have begun to address such problems as the motion of crowds at concerts or the opinion formation before elections. However, there are obvious dangers involved in attacking problems that lie far beyond one's sphere of specialization. In the words of Buckminster Fuller (who also wrote a two volume book called Synergetics, otherwise bearing little resemblance to Haken's):
The word generalization in literature usually means covering too much territory too thinly to be persuasive, let alone convincing. In science, however, a generalization means a principle that has been found to hold true in every special case.
Apparently both kinds of generalization are involved in Haken's work; the applicability seems to decrease the further away from physics one gets, till it begins to look suspicious when applied to the social sciences. Meanwhile, the single finding that unites all chapters, the slaving principle, exemplifies the kind of generalization that holds true in several special cases, if not in all conceivable scenarios. It is the method of finding solutions that survives generalization; the same cannot necessarily be said of the modelling of systems in different fields.

Volume two (Advanced Topics) starts over with a long expository chapter on the application domains followed by the introduction of the theory. There are short sections on deterministic chaos, but Haken is not the best source on this. Quasi-periodicity is treated extensively. Although the exposition is clear to begin with, soon enough matters get complicated. If you ever wondered what makes a system of differential equations with quasi-periodic coefficients stable or unstable, this is the text to read.

Matters of style

The first chapters of each volume are tutorial in character and cover material that most readers probably already know. The manner of exposition changes as Haken begins to introduce his own findings—one can sense a shifting of gears when his enthusiasm sets in. Unfortunately, these parts involve solutions that stretch over sections or entire chapters, sometimes using idiosyncratic notation. It is often hard to tell whether a variable is supposed to be real, complex, or a vector, even though one may be able to figure it out from the context.

The writing has the appearance of a stream of consciousness laid out at the blackboard rather than elaborated at the typewriter. Throughout the book, variable substitutions are profusely employed; so much, in fact, that one almost inevitably loses track of the variables' meaning. The derivations are decidedly informal, with almost no theorems and proofs. (There are a handful of theorems that rely on a long list of assumptions, with long, unwieldy proofs.) Instead there are long chains of “simplifications” or “abbreviations”, often resulting in expressions that are longer than the ones they replace, truncations of higher order terms in series expansions, and other sorts of approximations. All these tricks are of course what physicists are usually good at, but for readers without the proper background, they may appear as incomprehensible as pulling rabbits out of a hat.

If synergetics has to do with self-organization of complex systems, it must be said that Haken is quite terse on the topic of self-organization as such. This is where some conceptual analysis is lacking. On the other hand, the cyberneticians have already contributed much hand-waving philosophizing on self-organization, without necessarily having contributed much to its understanding. Here, at least, one has a class of problems and an approach to their solution, but there is more to self-organization than what is covered in this book.



Tuesday, May 28, 2013

Feedback FM with bells and whistles

Single oscillator feedback FM is a most economic technique for producing rich harmonic tones. However, the technique suffers from parasitic oscillations at the Nyquist frequency when the modulation index is turned up sufficiently high. The most obvious thing to try is to modify the original formula


x[n] = sin(ωn + Ix[n-1])

by lowpass filtering the feedback signal with some filter that has a zero at half the sample rate. A two-point average increases the range the index can take before the spurious oscillations set in, but it cannot stop them at sufficiently high modulation indices. A complementary trick is to put a filter outside the feedback loop. Again, it helps to a certain extent, but should not be expected to solve the problem in all cases. Finally, there is the Overkill Solution of oversampling the system. Or maybe it's not so overkill after all. In any case, a high sample rate is recommended.
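A sketch of the modified formula with the two-point average inside the loop (the frequency, index and sample-rate values are arbitrary choices for illustration):

```python
import math

def feedback_fm(freq, index, sr=44100, n=1000, filtered=True):
    """Single-oscillator feedback FM; optionally lowpass the feedback
    with a two-point average, which has a zero at half the sample rate."""
    w = 2 * math.pi * freq / sr
    x1 = x2 = 0.0          # previous two output samples
    out = []
    for i in range(n):
        fb = 0.5 * (x1 + x2) if filtered else x1
        x = math.sin(w * i + index * fb)
        out.append(x)
        x1, x2 = x, x1
    return out

raw = feedback_fm(220.0, 2.0, filtered=False)
smoothed = feedback_fm(220.0, 2.0, filtered=True)
```

Comparing `raw` and `smoothed` at increasing index values shows the averaged loop tolerating a higher modulation index before the Nyquist-rate wiggle appears.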

feedback FM
Feedback FM waveform with spurious oscillations and no attempt to squelch them.

Depending on the sample rate, the spurious oscillations will typically ring at such a high frequency that only domestic animals will notice them and possibly object to their presence. Nonetheless, the waveform will be contaminated by a conspicuous wiggle that may annoy any purist, whether or not they hear it. Of course, it is a spurious or parasitic oscillation since it follows the sampling rate rather than the synthesis parameters. Sometimes the parasitic oscillation happens at a third or fourth of the sample rate, or other subharmonics.

Single oscillator feedback FM is limited to harmonic spectra. Much flexibility is gained by introducing a second oscillator since there are several ways to connect the two oscillators. Rather than listing all cases separately, we introduce coupling parameters c (cross terms) and b (self-modulation) in the coupled system

      x[n+1] = sin(θ[n] + b1x[n] + c1y[n])
(*)   y[n+1] = sin(φ[n] + b2y[n] + c2x[n])

where the phases θ and φ are incremented by the modulating and carrier frequencies. Which one is which depends on what signal you send to the output. Of course both signals can be used for stereo, but then it makes less sense to call one of the oscillators the carrier and the other one the modulator.

In FM synthesis, the phase variables usually depend only on their respective frequencies. By introducing an interaction term, phase coupling can be used to synchronize the oscillators. Hardsync may have been used with FM before, but the gentler kind of sync used in the Kuramoto model is useful here, as it is also suitable for synchronizing more than two oscillators. Now, the phases are incremented by the oscillators' frequencies as usual, but to that we add a coupling term with strength K:

      θ[n+1] = θ[n] + ωc - K sin(θ[n] - φ[n])
(#)   φ[n+1] = φ[n] + ωm - K sin(φ[n] - θ[n])

Turning up K too much will collapse the two oscillators into a single strong team working in perfect sync. The system (*, #) is just a four-dimensional map with seven parameters and may thus be studied with the appropriate methods of dynamic systems.
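The full map (*, #) is straightforward to iterate. A sketch with arbitrary parameter values (note the nonzero seed for x and y):

```python
import math

def cross_fm(steps, wc, wm, b1, b2, c1, c2, K):
    x = y = 0.01               # nonzero seed, otherwise nothing happens
    th = ph = 0.0
    out = []
    for _ in range(steps):
        # waveform update (*): both right-hand sides use the old x and y
        x, y = (math.sin(th + b1 * x + c1 * y),
                math.sin(ph + b2 * y + c2 * x))
        # phase update (#): frequency increments plus Kuramoto-style coupling
        th, ph = (th + wc - K * math.sin(th - ph),
                  ph + wm - K * math.sin(ph - th))
        out.append((x, y))
    return out

samples = cross_fm(2000, wc=0.2, wm=0.31,
                   b1=1.5, b2=0.5, c1=1.0, c2=1.0, K=0.1)
```

Both output channels stay within [-1, 1] by construction, since each is the output of a sine; sweeping b1 or c1 while holding the rest fixed reproduces the kind of period-doubling behaviour discussed below.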

As often happens with iterated maps, the feedback FM system exhibits typical behaviour such as period two oscillations, and the period doubling route to chaos. The frequency terms may both be set to zero, which means the system (*) becomes autonomous. Then period doublings can be seen more easily, as shown below. The system has to be seeded with nonzero initial values for any oscillation to occur.

cross-FM
Colour legend: gray, period 1; orange, period 2; blue, period 4; bright yellow, period 3; red, chaos. On the horizontal axis, -1 < c1 = c2 < 4; on the vertical axis, -0.5 < b1 < 5 and b2 = 0.5 (constant). The modulator and carrier frequencies are both 0; hence the coupling term does not influence the dynamics.


We are not done with feedback FM!

Other things to try include modifying the feedback signals by waveshapers and filters. Even the phase coupled signal may be filtered. Each filter adds its state variables to the system and increases its dimension. This is complicated territory; suffice it to say that filters plus strong nonlinearities or high feedback gain equals noise!


Tuesday, February 26, 2013

Sunday, December 2, 2012

Sunday, November 25, 2012

Open access

The benefits of open-access publishing are widely acknowledged. The arXiv is a splendid site for keeping up-to-date on physics, mathematics, computer science and a few other fields. Just pick your specialization!

arXiv: adaptation and self-organization. This is where they compute with slime and synchronize oscillators. The Kuramoto model shows up every now and then. It seems to be tackled from ever more complicated angles each time.

arXiv: math, history and overview. Unlike more specialized fields, some papers in this section may be partially comprehensible even to lay people with an interest in mathematics.

While the arXiv offers its readers free access, its endorsement system does not allow everyone to publish there. That is presumably why there is a site like vixra, which has no peer review at all and where anyone can submit papers. And so they do! Crackpots solve all the mysteries of the universe, and no one believes them. (Well, perhaps the truth is that far too many believe them.) There is even a name for this sort of behaviour: the Dunning-Kruger effect, whereby incompetent individuals are not even able to realize that they are incompetent.

Then there are all these new open access journals with absurdly broad scope that often cannot be taken seriously.

UbuWeb is a different matter. Being included in their collection should be viewed as the equivalent of winning a prestigious prize, although some copyright holders may disagree. If they do, they should think twice. But UbuWeb may not be with us forever.

Saturday, November 10, 2012

Editorial neglect

The Nonlinear Workbook 

by Willi-Hans Steeb. 

World Scientific 2011, 5th edition.


Beginning with chaos and dynamic systems, from one-dimensional maps and fractals to ordinary differential equations, the usual topics are presented succinctly in the first half of the book. Short chapters on chaos control and synchronization are also included. Then, the second half deals with topics that should be more familiar to computer scientists, such as neural networks, genetic algorithms, optimization, wavelets and fuzzy logic. This is a huge span of topics that cannot be covered in depth in a single volume. According to Steeb, most of these disparate fields are interrelated. If so, there is a valid motivation for presenting them in the same book and highlighting their relations. However, many interesting and often difficult fields are only presented in glimpses. The chapter on wavelets is a case in point; that material is not used in the rest of the book so the chapter might as well have been left out.

To be clear, this is not a book for beginners. It serves better as a complement to other literature, and to some extent it offers a different point of view than many other sources.

The book claims to balance a theoretical exposition with practical computer code. Several short stand-alone programs written in C++, Java and SymbolicC++ (the latter being a library developed by the author) form the backbone of the text. There is even a short routine in assembler.
The treatment of chaotic systems differs from many comparable texts in that exact numerical algorithms are used to study the iterations of maps. The SymbolicC++ library is used for this, but unfortunately this code is neither included nor explained in the book, despite the many programs relying upon it.

The solution of ODEs is largely done with the Lie series technique, which is not very accessibly explained. Another frequently used technique is the Runge-Kutta-Fehlberg method, which relies on a set of coefficients that are copied and pasted into every code snippet where they are needed. Needless to say, this makes for a lot of redundancy, which could easily have been avoided by placing this part of the code in an include file. In Chapter 11, where the integration techniques for ODEs should presumably be explained, the same mysterious coefficients appear again without any hint as to how they are derived (there is just a reference to the literature). This avoidance of explanations stands in stark contrast to the style of another, excellent source on scientific computation: Numerical Recipes by Press et al. Although the Nonlinear Workbook is not at all on a par with Numerical Recipes regarding stylistic issues, clarity of presentation and general usefulness, it does contain much material that cannot be found there (at least not in the third printed edition).

Since the book is already in its fifth edition, one would expect all kinds of editorial flaws to have been sorted out by now. Unfortunately, this is not the case. The text suffers from a lack of effort to organize the material. Skimming through the text, one hardly finds a single illustration; in fact there are fewer than a dozen across its more than six hundred pages, and they are not very visually striking. Most books about chaos theory and cognate matters include a number of elegant pictures of attractors or fractals; this one does not, except for the one on its cover. Certainly most readers will have already seen an assortment of representative fractals and attractors and do not need them printed in yet another book; better yet, using the code in the book one should be able to generate and experiment with them oneself. However, a few explanatory diagrams would often make ideas in the text more accessible to the reader. Or how about trying to explain a Poincaré section in words and formulae, with no illustrations? That is surely a recipe for making simple things look hard.

On the whole, the book suffers from editorial neglect. Code examples are often given without any indentation, making nested routines hard to read. 
The copy-and-paste manner of coding does not make things any better. The prose is awkward in many places, such as the following:
If we want to compare two hidden Markov models then we need a measure for the distance between two hidden Markov models.
There is very little cross-referencing inside the book and the equations are almost never numbered. On the positive side, one never needs to turn the pages to find some equation that was introduced earlier. 

The level of exposition is also uneven. Some sections assume considerable mathematical background, whereas others are quite accessible. Presumably, someone who is already at ease with Lie series, exterior products and whatnot will have little need for the guidance provided by the code examples. Some programs are rather trivial and probably pose no difficulty to a first-semester student of computer science. A few programs are far from trivial, however, and the lack of explanatory comments, in the code as well as in the main text, makes them hard to understand. There are almost as few exercises as there are illustrations. Why should a textbook have exercises at all, though? The intelligent reader will find his or her own problems to solve, spurred by puzzling remarks in the text or unproven propositions.

To sum up, this workbook provides glimpses into many fascinating topics, albeit presented in a less than ideal way. The text is too terse to serve as an introduction to any of the many fields it covers, but it may be valuable to someone who has studied the theory before.

See for yourself: sample chapters are available for free.