
Wednesday, August 12, 2015

Golomb Rulers and Ugly Music

A Golomb ruler has marks on it for measuring distances, but unlike an ordinary ruler it has fewer, irregularly spaced marks, placed so that no two pairs of marks are the same distance apart. The marks are at integer multiples of some arbitrary unit. A regular ruler of six units length has marks at 0, 1, 2, 3, 4, 5 and 6, and each distance from 1 to 6 units can be measured, most of them in several ways. A Golomb ruler of length six could instead have marks at 0, 1, 4 and 6. Each distance from 1 to 6 can be found between exactly one pair of marks on this ruler. A Golomb ruler with this nice property, that each distance from 1 to the length of the ruler can be measured with it, is called a perfect Golomb ruler. Unfortunately, there is a theorem stating that no perfect Golomb ruler has more than four marks.

Sidon sets are subsets of the natural numbers {1, 2, ..., n} such that the sums of any pair of numbers in the set are all different. It turns out that Sidon sets are equivalent to Golomb rulers, and the proof must have been one of the lowest hanging fruits ever of mathematics: a + b = c + d can be rearranged as a − d = c − b, so all pairwise sums are distinct exactly when all pairwise differences are.

An interesting property of Golomb rulers is that, in a sense, they are maximally irregular. Toussaint used them to test a theory of rhythmic complexity precisely because of their irregularity, which is something that sets them apart from more commonly encountered musical rhythms.

There is a two-dimensional counterpart to Golomb rulers which was used to compose a piano piece that, allegedly, contains no repetition and is therefore the ugliest kind of music its creator could think of.

Contrary to what Scott Rickard says in this video, there are musical patterns in the piece. Evidently octave equivalence was not taken into account, so there is a striking passage of ascending octaves and hence pitch class repetition.

At first hearing, the "ugly" piece may sound like a typical 1950s serialist piece, but it has some characteristic features, such as its unbroken sequence of single notes and its sempre forte articulation. Successful serialist pieces would be much more varied in texture.

The (claimed) absence of patterns in the piece is more extreme than a random sequence of notes would be. If notes had been drawn randomly from a uniform distribution, there would be some probability of immediate repetition of notes, as well as of repeated sequences of intervals. When someone tries to improvise a random sequence of, say, just the numbers 0 and 1, they typically exaggerate the number of changes and generate too little repetition. True randomness contains more repetition, and in that sense looks more orderly, than our human conception of it. So the "ugly" piece agrees with our idea of randomness more than an actually random sequence of notes would.

When using Golomb rulers for rhythm generation, it may be practical to repeat the pattern instead of extending a Golomb ruler to the length of the entire piece. In the case of repetition the pattern occurs cyclically, so the definition of the ruler should change accordingly. Now we have a circular Golomb ruler (perhaps better known as a cyclic difference set) where the marks are put on a circle, and distances are measured along the circumference of the circle.

Although the concept of a Golomb ruler is easy for anyone to grasp, a little generalization and further digging leads to the frontiers of mathematical knowledge, with open questions still waiting to be solved.

And, of course, the Golomb rulers make excellent raw material for quirky music.

Wednesday, January 7, 2015

Decimals of π in 10-TET


The digits of π have been translated into music a number of times. Sometimes the digits are translated to the pitches of a diatonic scale, perhaps accompanied by chords. The random appearance of the sequence of digits is reflected in the aimless meandering of such melodies. But wouldn't it be more appropriate, in some sense, to represent π in base 12 and map the numbers to the chromatic scale? After all, there is nothing in π that indicates that it should be sung or played in a major or minor tonality. Of course, the mapping of integers to the twelve chromatic pitches is just about as arbitrary as any other mapping; it is a decision one has to take. An easier route, however, is to keep the usual base-10 representation and map it to a 10-TET tuning with ten chromatic pitch steps in one octave.

Here is an etude that does precisely that, with two voices in the tempo relation 1 : π. The sounds are synthesized with algorithms that also incorporate the number π. In the fast voice, the sounds are made with FM synthesis, where two modulators in ratio 1 : π modulate a carrier. The slow voice is a waveshaped mixture of three partials in ratio 1 : π : π².



Despite the random appearance of the digits of π, it is not even known whether π is a normal number. Let us recall the definition: a number is normal in base b if each of the digits 0, 1, ..., b−1 occurs in its expansion with asymptotic frequency 1/b, each possible sequence of two digits with frequency 1/b², and so on. ("Digit" is usually reserved for the base-10 number system, so you may prefer to call them "letters" or "symbols".) A number that is normal in every base is simply called normal.

Some specific normal numbers have been constructed, but even though it is known that almost all numbers are normal, proving that any given number is normal is often elusive. Rational numbers are not normal in any base, since their expansions end in a periodic sequence, as in 22/7 = 3.142857142857… There are, however, irrational non-normal numbers, some of which are quite exotic in the way they are constructed.



Sunday, July 28, 2013

On smoothness under parameter changes

Is your synthesizer a mathematical function?

At least it can be considered in such terms. Each setting of all its parameters represents a point in parameter space. The output signal depends on the parameter settings. Assuming the parameters remain fixed over time, the generated audio signal may also be considered as a point in another space. In order to relate these output sequences to perceptually more relevant terms, signal descriptors (e.g. the fundamental frequency, amplitude, spectral centroid, flux) are applied to the output signal.





Now, in order to assess how smoothly the sound changes as one turns any of the knobs that control some synthesis parameter, the first step is to relate the amount of change in the signal descriptors to the distance in parameter space. The distance in parameter space corresponds to the angle the knob is turned; let us call this distance Δc. It is trickier to define a suitable distance metric in the space of audio signals, but one option is to use a signal descriptor φ, which itself varies over time, and take its time average ⟨φ⟩. The difference Δφ between two such time averages, as the synthesizer is run at two different points in parameter space, may then be taken as the distance metric.

A smooth function has derivatives of all orders. Therefore the smoothness of a synthesis parameter may be described in terms of a derivative of the function that maps points in parameter space to points in the space of signal descriptors. This derivative may be defined as the limit of Δφ/Δc as Δc approaches 0. It makes a significant difference whether a pitch control of an oscillator has been designed with a linear or exponential response. But abrupt changes, corresponding to a discontinuous derivative, will be even more conspicuous when they occur.

Whereas the derivative describes smoothness locally at each point in parameter space, another way to look at parameter smoothness is to measure the total variation of a signal descriptor as the synthesis parameter goes from one setting to another. As a compromise, the interval over which the total variation is measured may be made very small, so that a local variation is measured over a short parameter interval instead.

Is this really useful for anything?

Short answer: Don't expect too much. But seriously, whether we like it or not, science progresses in part by taking vague concepts and making them crisper, by making them quantifiable. "Smoothness" under parameter changes is precisely such a vague concept that can be defined in ways that make it measurable. Such a smoothness diagnostic may be useful in the design of synthesis models and their parameter mappings, as well as perhaps for introducing and testing hypotheses about the perceptual discrimination of similar synthesized sounds.

The paper was presented as a poster at the joint SMAC/SMC conference.


Thursday, May 2, 2013

Filtering with differential equations


For those of us who are more familiar with digital filters than with their analog counterparts, a one-pole lowpass filter is easy:

yₙ = (1 − β)xₙ + βyₙ₋₁,   0 < β < 1.

But how do you filter a signal with an ordinary differential equation?



Working backwards, we should have an ODE that says

dy/dt + Ay(t) = Bx(t)   (*)

for some suitable constants A and B yet to be found. The derivative may be approximated by a forward difference over a short time interval T, so dy(nT)/dt ≈ (y(nT+T) − y(nT))/T. Setting T equal to one sampling period, the discrete-time version of (*) is

(yₙ₊₁ − yₙ)/T + Ayₙ = Bxₙ.

Some algebraic shuffling to and fro of the variables gives

yₙ₊₁ = BTxₙ + (1 − AT)yₙ,

and comparing with the filter coefficients, 1 − β = BT and β = 1 − AT, hence BT = AT and A = B. Now introduce a time constant τ > 0 and set A = B = 1/τ. The system then is

dy/dt = (x - y) / τ

where now τ plays the role of a relaxation time constant. The greater τ is, the slower the response of the filter. Also, when the input equals the output the derivative becomes zero, which is to say that the system has unit DC response as required.


Sunday, March 24, 2013

A new kind of square root


The postmodernism generator has been translated to mathematics. Now there is a program called Mathgen that outputs nonsensical papers on the advances of mathematics, complete with theorems and references. It has certain idiosyncrasies that make its papers easy to recognize. Authors are often drawn from among the most famous names of mathematics, usually with the first initial wrong. Theorems and conjectures are generously attributed to pairs of colleagues across history, often using centuries-old personalities as authors of brand new theories. Who has ever heard of the Conway-d'Alembert conjecture? Well, now we have.

The tone is exactly as condescending as one might fear: 'Clearly' such and such result follows; 'as every student knows …', and what follows is invariably clear as mud. Proofs are safely omitted because they are 'obvious'.

All this remarkable research, those 'little known results', are published safely beyond accessibility in Transactions of the Kenyan Mathematical Society, South Korean Journal of Integral Category Theory, Iranian Journal of Homological PDE, and the like. Surely most of these publications cannot be found at your local library anytime soon.

It is not hard to generate plain gibberish with TeX. Begin by listing a few elementary symbols and operators:

const char *alpha = "\\alpha";
const char *beta = "\\beta";
...
const char *r_arrow = "\\rightarrow";
const char *sqrt = "\\sqrt";
const char *sup = "^ ";
const char *sub = "_ ";

Put all the symbols in an array, so they can be easily accessed and picked at random. Concatenate several of the symbols into a string and print it. With some luck, the symbol sequence will not break the TeX syntax. This doesn't happen by itself, so next one might like to do something more structured. Elementary functions (program routines, that is) that generate small expressions like x ∈ ℂ² or f : ℝ → ℝ are not hard to write.

This is a sample of the babbling that results from the mere concatenation of a few symbols and numbers without regard for syntactical rules:



Difficulties arise when operators are used, because these expect arguments. An expression should not end with, say, an empty square root with no argument, as the above formula seems to do. In fact it ends with \sqrt{%0^{\sum}}, but this is apparently beyond the wits of TeX.


The notorious Mathgen paper Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE by M. Rathke contains a larger assortment of abstruse mathematical symbols in hilarious combinations. Or what about the frequent use of various powers of zero? Already the first formula contains expressions such as 0⁻⁴ and 0⁵, and other meaningless entities such as tan(∞⁻¹). A judicious use of elaborate idempotent expressions may even accidentally result in a true statement, despite the funny appearance.


Sunday, February 17, 2013

Total variation


The total variation of a real valued function f on an interval I = [p₀, p_N] is defined as

V(f) = sup Σₖ |f(pₖ) − f(pₖ₋₁)|,

taking the supremum over all possible partitions p₀ < p₁ < … < p_N of I, with the sum running over k = 1, …, N. Notably, if the function is (continuously) differentiable, the total variation becomes

V(f) = ∫ |f′(x)| dx, integrating over I,

but f does not have to be differentiable, and the total variation may be unbounded.

Sometimes the function itself may be evaluated at any point of the interval, although its derivative either does not exist or is far too complicated to deal with. Then the total variation may be estimated by sampling the function at several points and checking whether or not it converges to some limit as the mesh gets finer. If it doesn't, the curve may be a fractal, so its fractal dimension can be estimated from the procedure.

The length of a fractal curve is a function of the scale of measurement. As the scale of measurement ε varies, the measured length N varies according to N ~ ε^(−D), where D is the fractal dimension. The common procedure then is to fit a line to a double logarithmic plot of N against ε and find the slope. However, it would be a grave mistake to blindly accept any automatically calculated slope without checking the error of the fit.

Estimating the total variation at several arbitrary sampling resolutions can be inefficient, unless a clever trick is used. Suppose we begin with a fine resolution with uniform distance Δ = xᵢ − xᵢ₋₁ > 0 between the points. Then it is easy to obtain the total variation for subdivisions by nΔ, for n = 1, 2, …, just by skipping that many points. Even better, one can take averages over the n possible starting offsets,

V̄ₙ = (1/n) Σₖ Vₙ(k), k = 0, …, n − 1,

where Vₙ(k) is the estimate obtained from the points xₖ, xₖ₊ₙ, xₖ₊₂ₙ, …, so as to obtain estimates that do not depend (as much) on the particular chosen sample points.

A somewhat related concept is arc length, which is, conceptually, the length of a string superposed on the graph of the function (assuming the function is continuous). The total variation is smaller than the arc length. For the straight line y = kx, 0 < x < t, the squared total variation is V² = (kt)², compared to the squared arc length t² + (kt)². Now suppose the function is monotonic over the interval under consideration. Then, if the function is deformed so as to become more curved while keeping its endpoints, only the arc length will increase; the total variation remains the same. For example, if fₙ(x) = xⁿ, 0 ≤ x ≤ 1 and n = 1, 2, ..., then the arc length approaches 2 as n increases, whereas the total variation remains 1.

Saturday, January 26, 2013

The derivative of products

An elementary proof 

Knowing some important formulas by heart can be very useful, but if one knows how to derive them, it is no longer necessary to remember the formula. From reading math textbooks (many or most of them?), one can gain the false impression that the process of deriving a formula follows the same sequence of steps as its proof.

Here is an elementary proof that nevertheless involves some not so obvious steps. 

Suppose that

f(x) = u(x)v(x),

then the formula for the derivative is

f′(x) = u′(x)v(x) + u(x)v′(x),
but how do we prove this? Although the proof is straightforward, it is perhaps difficult to remember all the tricks that are required and when to apply them. Here is a standard proof. First, apply the definition of the derivative to the product of the two functions:

f′(x) = lim_{h→0} [u(x+h)v(x+h) − u(x)v(x)] / h.

The next step is the crucial operation, at once trivial and far from obvious. We are going to both subtract and add u(x)v(x+h) and rewrite the ratio as

[u(x+h)v(x+h) − u(x)v(x+h) + u(x)v(x+h) − u(x)v(x)] / h.
Now, who would think of adding two terms that sum to 0 into such an expression? This is an idea that doesn't make much sense at this point. Indeed, one needs to look a few steps ahead and see what it is going to be needed for. What follows are just some simple factorizations of terms.

Break out some terms to get

[(u(x+h) − u(x))/h] v(x+h) + u(x) [(v(x+h) − v(x))/h],

then take limits and replace the difference quotients by derivatives, noting that v(x+h) → v(x) since a differentiable function is continuous, and we are done:

f′(x) = u′(x)v(x) + u(x)v′(x).


Here, the simple formula seems much easier to memorize than all the steps of the proof. (In fact, you may impress your friends far more if you memorize Hugo Ball's poem Karawane than if you learn to recite the steps of this proof.)

It is highly misleading when formulas such as the above are just plainly stated and then concisely proven. This is most likely not how the formulas were originally discovered. Rather, one would observe a few instances of derivatives of multiplied functions and conjecture a formula. Then, starting from the formula as well as the definition of derivative, one would work backwards and find all the arithmetic manipulations that make the proof work.

Instead of learning a fixed set of steps that are used in particular proofs, one would probably learn a bag of tricks that can be applied in various situations. Then, out of this bag one can grab various operations that can be tried out, until something is found that leads the proof in a promising direction.