Thursday, 15 January 2015

Fast algorithms for polynomials and exponentials

Introduction

In this blog post we're going to derive fast algorithms for evaluating the successive values of polynomials and exponentials.

Polynomials

Let's start simple. How do you evaluate successive values of P(n) = an + b, a first order polynomial? You initialize an accumulator to b and keep adding a to it:

    y(0) = b
    y(n + 1) = y(n) + a
Notice how we don't have to multiply by a anywhere to get the values.
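In Python the whole thing is just an accumulator in a loop. A minimal sketch (the function name is mine):

    def linear_values(a, b, count):
        """Yield P(n) = a*n + b for n = 0, 1, ..., count - 1."""
        y = b
        for _ in range(count):
            yield y
            y += a  # one addition per value, no multiplications

    >>> list(linear_values(3, 1, 5))
    [1, 4, 7, 10, 13]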

Let's try P(n) = n², the simplest second order polynomial. We needed one accumulator the first time, so maybe we need two accumulators this time:

    y1(0) = 1, y0(0) = 0
    y1(n + 1) = y1(n) + 1
    y0(n + 1) = y0(n) + y1(n)
The bottom accumulator y0 will have the values 0, 1, 3, 6, 10, 15... That's definitely not linear, but it's not the sequence 0, 1, 4, 9, 16, 25... that we're after either. It turns out that the general term of the sequence is n(n + 1)/2, or, multiplying by two, n² + n. Multiplication is easy (just replace the 1 with a 2 above), but how do we get rid of the n term? The equations are linear and actually include the first order iteration: adding a value to the initial value of y0 shifts the sequence by a constant, and adding a value to the initial value of y1 adds a linear function of n to the final sequence. In this case we want to add P(n) = -n. The final iteration for n² becomes:

    y1(0) = 1, y0(0) = 0
    y1(n + 1) = y1(n) + 2
    y0(n + 1) = y0(n) + y1(n)
The iteration is linear, so we can now add in all the lower order terms to handle a general second order polynomial P(n) = an² + bn + c:

    y1(0) = a + b, y0(0) = c
    y1(n + 1) = y1(n) + 2a
    y0(n + 1) = y0(n) + y1(n)
So if you pre-calculate the 2a term, you only need two additions to get the next value of a polynomial that naïvely requires three multiplications to evaluate (two multiplications if you reduce it to Horner form). Pretty cool, huh?
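Here's the same thing as a Python sketch, with the initial values taken from the iteration above:

    def quadratic_values(a, b, c, count):
        """Yield P(n) = a*n**2 + b*n + c using two additions per value."""
        two_a = 2 * a  # pre-calculated once
        y1 = a + b     # first difference P(1) - P(0)
        y0 = c         # P(0)
        for _ in range(count):
            yield y0
            y0 += y1
            y1 += two_a

    >>> list(quadratic_values(1, 0, 0, 6))
    [0, 1, 4, 9, 16, 25]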

Moving on. Let's tackle P(n) = n³. Generalizing the naïve iteration to three accumulators:

    y2(0) = 2, y1(0) = 1, y0(0) = 0
    y2(n + 1) = y2(n) + 1
    y1(n + 1) = y1(n) + y2(n)
    y0(n + 1) = y0(n) + y1(n)
We get 0, 1, 4, 10, 20, 35... or n(n + 1)(n + 2)/6 in general. Multiplying by six we get an n³ term, but now we pick up both quadratic and linear nuisance terms. Luckily we already know how to generate both kinds of terms, so we just need to subtract them:

    y2(0) = 6, y1(0) = 1, y0(0) = 0
    y2(n + 1) = y2(n) + 6
    y1(n + 1) = y1(n) + y2(n)
    y0(n + 1) = y0(n) + y1(n)
Or for a general third order polynomial P(n) = an³ + bn² + cn + d:

    y2(0) = 6a + 2b, y1(0) = a + b + c, y0(0) = d
    y2(n + 1) = y2(n) + 6a
    y1(n + 1) = y1(n) + y2(n)
    y0(n + 1) = y0(n) + y1(n)
Starting to look a bit complex, but it works. Very useful if you wish to evaluate cubic Bézier curves or interpolate other functions using cubic Hermite splines.
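As a concrete sketch, here's the third order iteration in Python; to evaluate a cubic Bézier or Hermite segment this way you'd first convert it to power basis coefficients a, b, c, d:

    def cubic_values(a, b, c, d, count):
        """Yield P(n) = a*n**3 + b*n**2 + c*n + d with three additions per value."""
        y0 = d                # P(0)
        y1 = a + b + c        # first difference
        y2 = 6 * a + 2 * b    # second difference
        y3 = 6 * a            # constant third difference
        for _ in range(count):
            yield y0
            y0 += y1
            y1 += y2
            y2 += y3

    >>> list(cubic_values(1, 0, 0, 0, 5))
    [0, 1, 8, 27, 64]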

I'm going to leave the higher order polynomials as an exercise. You can use the strategy introduced here: Write out the naïve iteration, derive the general term, expand it and subtract the lower order nuisance terms that you already know how to compute.

Computing polynomials at non-integer values

Above we derived the general iteration for polynomials of the form P(n), where n is an integer. What if you want to evaluate P(t) at successive instants of time? It's very simple: just write t as t = nΔt, where Δt is the inverse of the sampling rate, and you get a new polynomial as a function of n. The same goes for starting the iteration at an arbitrary instant of time t0: write t = nΔt + t0, expand to get the coefficients of the powers of n and plug them into the formula for P(n).
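A small Python helper (the name is mine) makes the substitution concrete; it expands P(nΔt + t0) with the binomial theorem and returns the new coefficients, ordered from the constant term upwards:

    from math import comb

    def substitute(coeffs, dt, t0):
        """Coefficients of Q(n) = P(n*dt + t0) given those of P(t)."""
        out = [0.0] * len(coeffs)
        for i, a in enumerate(coeffs):
            for j in range(i + 1):
                # the n**j part of a * (n*dt + t0)**i
                out[j] += a * comb(i, j) * dt**j * t0**(i - j)
        return out

The resulting coefficients can be fed straight into the iterations above.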

Exponentials

Nothing in the above iterations says that you have to use addition to get the next values. The exponential map is a homomorphism from the additive group of the real numbers to the multiplicative group of the positive reals, e^(x + y) = e^x e^y, which means you can use 1 in place of 0, multiplication in place of addition and raising to a power in place of scalar multiplication.

Let's write out the iteration for the exponential function e^(an + b) to get a taste:

    y(0) = e^b
    y(n + 1) = y(n) · e^a
And the gaussian e^(-n²) for good measure:

    y1(0) = e^(-1), y0(0) = 1
    y1(n + 1) = y1(n) · e^(-2)
    y0(n + 1) = y0(n) · y1(n)
Or the general gaussian e^(-(t - μ)²/(2σ²)) in terms of t = nΔt. Expanding the exponent gives a second order polynomial an² + bn + c with a = -Δt²/(2σ²), b = μΔt/σ² and c = -μ²/(2σ²), and the iteration becomes:

    y1(0) = e^(a + b), y0(0) = e^c
    y1(n + 1) = y1(n) · e^(2a)
    y0(n + 1) = y0(n) · y1(n)
You get the idea. In general it's possible to evaluate e^P(n) with only m multiplications per value, where m is the order of the polynomial P(n).
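For instance, the gaussian e^(-n²) iteration translates to Python like this (a sketch, mirroring the additive version above):

    from math import exp

    def gaussian_values(count):
        """Yield exp(-n**2) for n = 0, 1, ... using two multiplications per value."""
        y0 = 1.0        # exp(-0**2)
        y1 = exp(-1.0)  # ratio exp(-(2n+1)) between consecutive values, at n = 0
        r = exp(-2.0)   # constant ratio of the ratios
        for _ in range(count):
            yield y0
            y0 *= y1
            y1 *= r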

Saturday, 3 January 2015

Algorithms for Additive Synthesis

Introduction

Additive synthesis is a sound synthesis technique that creates timbre by adding sine waves together.
In this blog post we're going to look at two special cases of additive synthesis and implement efficient algorithms for them.
First we're going to look into modal synthesis with simple exponentially decaying partials and then advance to harmonic synthesis with freely controllable frequency. Instead of having to evaluate trigonometric functions for each of the partials, both of these techniques require only a couple of multiplications and additions per partial for every output sample.

Modal Synthesis

With fixed frequencies and decay rates the modal synthesis model looks like this, with amplitudes a_k, decay rates d_k, frequencies f_k and phases ϕ_k:

    y(t) = Σ_(k=1..K) a_k e^(-d_k t) sin(2π f_k t + ϕ_k)
This approach can be used to create the sound of drums or idiophones such as the marimba or bells, where there are relatively few partials and their frequencies are inharmonically related.

Let's look at a single partial:

    y_k(t) = a_k e^(-d_k t) sin(2π f_k t + ϕ_k)
Using a few trigonometric identities we can manipulate the equation into the following recurrence:

    y_k[n] = α_1 y_k[n - 1] + α_2 y_k[n - 2]
    α_1 = 2 e^(-d_k/f_s) cos(2π f_k/f_s)
    α_2 = -e^(-2 d_k/f_s)
Where f_s is the sampling rate. The constants α_1 and α_2 can be calculated once, and the recurrence states that to get the next sample of the partial you simply add up the two previously generated samples multiplied by the constants, a huge saving over having to evaluate the sin and exp functions for every sample.
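A sketch of a single partial in Python (parameter names are mine); it seeds the recurrence with the two samples y[-1] and y[0] computed directly:

    from math import cos, exp, pi, sin

    def modal_partial(freq, decay, amp, phase, fs, count):
        """Yield amp * exp(-decay*n/fs) * sin(2*pi*freq*n/fs + phase)."""
        r = exp(-decay / fs)    # per-sample amplitude decay
        w = 2 * pi * freq / fs  # phase increment per sample
        a1 = 2 * r * cos(w)     # alpha_1
        a2 = -r * r             # alpha_2
        y2 = amp * sin(phase - w) / r  # y[-1]
        y1 = amp * sin(phase)          # y[0]
        for _ in range(count):
            yield y1
            y1, y2 = a1 * y1 + a2 * y2, y1

A full modal voice is then just the sum of a handful of these generators.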

Harmonic Synthesis

The modal synthesis above suffers from fixed frequencies and a general lack of control over the amplitude envelopes. If we restrict our attention to a harmonic series, where each of the partial frequencies is an integer multiple of the base frequency, we get the following equation:

    y(t) = Σ_(k=1..K) a_k(t) sin(k θ(t) + ϕ_k)
Where θ is the running phase of the fundamental, advancing at the rate 2πf, and the fundamental frequency f and each of the amplitude envelopes a_k depend on time t.
Let's look at a single instant of time:

    y = Im( Σ_(k=1..K) b_k z^k ),  z = e^(iθ)
Where we have absorbed the a_k and ϕ_k terms into a single complex term b_k = a_k e^(iϕ_k). The end result is a simple sum of powers of the complex exponential, which can be evaluated with Horner's method:

    y = Im( z (b_1 + s z (b_2 + s z ( ... (b_(K-1) + s z b_K) ... ))) )
where I have thrown in a timbral sharpness parameter s for free. The method requires only K-1 complex multiplications in total.
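In Python the Horner loop might look like this (a sketch; bs[k-1] holds the complex amplitude b_k, and s scales harmonic k by s^(k-1)):

    from cmath import exp

    def harmonic_sample(bs, theta, s=1.0):
        """One sample of Im(sum of b_k * s**(k-1) * z**k), z = exp(1j*theta)."""
        z = exp(1j * theta)
        w = s * z          # constant Horner factor, computed once
        h = bs[-1]
        for b in reversed(bs[:-1]):
            h = b + w * h  # one complex multiplication per harmonic
        return (z * h).imag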

While one complex multiplication per harmonic isn't bad, it still requires four real multiplications to evaluate. We can do better by restricting our attention further and dropping the ϕ_k terms:

    y = Σ_(k=1..K) a_k cos(kθ)
This series can be evaluated with the Chebyshev recurrence:

    cos(kθ) = 2v cos((k - 1)θ) - cos((k - 2)θ),  v = cos θ
And then applying the Clenshaw algorithm to this recurrence:

    b_(K+1) = b_(K+2) = 0
    b_k = a_k + 2v b_(k+1) - b_(k+2),  k = K, ..., 1
    y = v b_1 - b_2
From this recurrence we can see that we only need one real multiplication per harmonic (assuming the 2v term is computed only once).
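As a quick check of the bookkeeping, here's the cosine series in Python (amps[k-1] holds a_k):

    from math import cos

    def cosine_series(amps, theta):
        """Evaluate the sum of amps[k-1] * cos(k*theta) with Clenshaw's algorithm."""
        v = cos(theta)
        two_v = 2.0 * v  # computed only once
        b1 = b2 = 0.0
        for a in reversed(amps):
            b1, b2 = a + two_v * b1 - b2, b1  # one multiplication per harmonic
        return v * b1 - b2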

What about the case of sines in the series?

    y = Σ_(k=1..K) a_k sin(kθ)
This time we use the Chebyshev recurrence of the second kind:

    sin(kθ) = 2v sin((k - 1)θ) - sin((k - 2)θ),  v = cos θ
Giving us the final Clenshaw recurrence:

    b_(K+1) = b_(K+2) = 0
    b_k = a_k + 2v b_(k+1) - b_(k+2),  k = K, ..., 1
    y = b_1 sin θ
Again there is only one real multiplication per harmonic, but now we have to evaluate both the sin and cos functions once per output sample.
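The sine version in Python differs from the cosine one only in the final step (again a sketch):

    from math import cos, sin

    def sine_series(amps, theta):
        """Evaluate the sum of amps[k-1] * sin(k*theta) with Clenshaw's algorithm."""
        two_v = 2.0 * cos(theta)
        b1 = b2 = 0.0
        for a in reversed(amps):
            b1, b2 = a + two_v * b1 - b2, b1
        return b1 * sin(theta)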

Let's do one more, this time with only odd harmonics:

    y = Σ_(m=1..M) a_(2m-1) sin((2m - 1)θ)
Now the Clenshaw recurrence reads:

    b_(M+1) = b_(M+2) = 0
    b_m = a_(2m-1) + 2u b_(m+1) - b_(m+2),  u = cos 2θ,  m = M, ..., 1
    y = (b_1 + b_2) sin θ
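A sketch of the odd-harmonic version in Python (amps[m-1] holds a_(2m-1)), handy for clarinet-like spectra that contain only odd harmonics:

    from math import cos, sin

    def odd_sine_series(amps, theta):
        """Evaluate the sum of amps[m-1] * sin((2*m - 1)*theta) over m = 1..M."""
        two_u = 2.0 * cos(2.0 * theta)  # note the doubled angle
        b1 = b2 = 0.0
        for a in reversed(amps):
            b1, b2 = a + two_u * b1 - b2, b1
        return (b1 + b2) * sin(theta)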
Implementation

As usual, here's a Python gist with implementations of the algorithms introduced here: