Tuning and Timbre, Part II

In Part I, we replaced the usual formula for the harmonics of a tone (\(f_n = n \cdot f_0\)) with this one:

$$f_n = f_0 \cdot A^{\log_2 n}$$

where \(f_n\) is the frequency of the n-th partial, \(f_0\) is the fundamental frequency, and \(A\) is the inharmonicity parameter. (Note that when \(A=2\), we obtain a conventional harmonic spectrum; when \(A>2\), the spectrum is stretched, and when \(A<2\), the spectrum is compressed.) We then claimed that these new timbres require new scales. Specifically, rather than express the frequency of a note as the root times some fraction (i.e. \(f_0 \cdot \frac{m}{n}\)), we should express it as:

$$f_0 \cdot A^{\log_2 \frac{m}{n}}$$
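
To make these relations concrete, here’s a quick check at the language prompt, using an arbitrary stretch of \(A = 2.1\):

(
var a = 2.1, f0 = 220;
(f0 * (a ** 3.log2)).postln;      // ~713 Hz: the third partial (660 Hz when A = 2)
(f0 * (a ** (3/2).log2)).postln;  // ~339.6 Hz: the "fifth" above the root (330 Hz when A = 2)
)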

There is a certain mathematical consistency embodied in these relations, but do they sound any good? In this post, we’ll write the SuperCollider patch shown above and find out. As a bonus, we’ll also learn a slightly cleaner method of making visualizations than we used previously.

As usual, you can find the complete code for this patch as a gist on GitHub.

Overview

Our patch will have the following components:

  • A SynthDef that takes the inharmonicity parameter \(A\) as an argument and implements the formula for partials given above
  • A pattern that plays a major chord using this SynthDef, tuned according to the inharmonicity parameter
  • A pattern that plays a pseudo-random melody using this SynthDef, tuned according to the inharmonicity parameter
  • An LFO that slowly varies the inharmonicity parameter
  • A drum beat (just because)

SynthDef

Our SynthDef looks like this:

SynthDef(\timbreTest, {
    |freq=220, amp=0.1, attack=0.03, sustain=0.1, pan=0, tuningFactor=2, id=0, out=0|
    var partials = #[
        0,                     //log2 1
        1,                     //log2 2
        1.5849625007211563,    //log2 3
        2,                     //log2 4
        2.321928094887362,     //log2 5
    ];
    var sig = Array.fill(5, { |i|
        SinOsc.ar(tuningFactor ** partials[i] * freq, i*pi/4, 1/(i+1))
    }).sum;
    var env = EnvGen.kr(Env.perc(attack, sustain, amp), doneAction: 2);
    SendReply.kr(Impulse.kr(0), '/tr', [freq, amp, sustain, pan], id);
    Out.ar(out, LPF.ar(Pan2.ar(sig*env, pan), 2000));
}).add;

Note that the inharmonicity parameter \(A\) is represented by the argument tuningFactor (just a touch easier to type). The first thing we do is hardcode the base-2 logs of the first five integers, which we’ll need to calculate the partials. Since SuperCollider has been eschewing for loops since before it was cool, we sum up our partials using the fill and sum methods of Array. The amplitudes fall off as \(1/n\), and we also give each partial a different phase offset for no particular reason. Finally, we slap on a low-pass filter at 2000 Hz so that things don’t get too bright. The call to SendReply is used for the visualization, which we’ll discuss later.
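
If you just want to hear the SynthDef on its own before anything else is wired up, a one-off Synth will do (the argument values here are arbitrary):

// play a single note straight to the hardware output (out defaults to 0)
Synth(\timbreTest, [\freq, 220, \tuningFactor, 2.1, \sustain, 1]);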

Patterns

First, we define a few environment variables:

~tuningFactor = 2.0;
~root = 205;
~chord = [1, 1.5, 2.5].collect({|item| item.log2});
~scale = [1, 1.25, 1.5, 2, 2.25, 2.5, 3].collect({|item| item.log2 });

As in the SynthDef, we store the chord and scale notes as their logs to avoid computing them later. (The collect method creates a new array by applying a function to each element of the old array.)
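
For reference, the stored values come out roughly as follows (rounded here; the posted precision will be longer):

~chord.postln;  // roughly [ 0.0, 0.585, 1.322 ]
~scale.postln;  // roughly [ 0.0, 0.322, 0.585, 1.0, 1.17, 1.322, 1.585 ]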

Here is the basic skeleton of the chord pattern:

p=Pbind(
    \freq, Pfunc({~tuningFactor ** ~chord * ~root}),
    \dur, Pseq([2.25,3.75], inf),
    \out, 16,
    \id, 1,
    \instrument, \timbreTest
);

The frequency calculation needs to be wrapped in a Pfunc because we plan on modulating ~tuningFactor. SuperCollider will automatically expand the ~chord array for us and instantiate a synth for each element of the array. Note that the durations add up to a 6-second “bar.” The \id parameter will be used for the visualization.
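
As a quick sanity check, here is what that frequency expression yields at two settings of the tuning factor (2.1 is just an arbitrary point inside the LFO’s range; values rounded):

~tuningFactor = 2;   (~tuningFactor ** ~chord * ~root).postln;  // [ 205.0, 307.5, 512.5 ]
~tuningFactor = 2.1; (~tuningFactor ** ~chord * ~root).postln;  // approx [ 205.0, 316.4, 546.6 ]
~tuningFactor = 2;   // restore the default

Note that the root stays put; only the intervals above it move.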

The melody pattern is a bit more complicated, but this simplified version will get the point across:

q=Pbind(
    \freq, Pfunc({~tuningFactor ** ~scale.choose * ~root}),
    \dur, Pwrand([0.25,0.5,1,2,4],[5,4,3,2,1].normalizeSum,inf)*0.25,
    \out, 16,
    \id, 2,
    \instrument, \timbreTest,
);

Basically, we substitute ~scale.choose for ~chord. Our durations are skewed towards smaller values, which is generally a good strategy when doing this sort of thing, since it takes more short notes than long ones to fill the same amount of time. Also notice that this pattern has a different \id value than the previous one. To get a little of that late-nineties drum’n’bass feel, we run this pattern through Pstutter:

r = Pstutter(Pwrand([1,3,5],[0.9, 0.05, 0.05], inf), q);

We don’t stutter often, but when we do, it’s an odd number of times.

Modulation

We set up a little LFO to modulate tuningFactor:

Ndef(\lfo, {SinOsc.kr(1/75, 0, 0.2, 2)});

This has a period of 75 seconds, and oscillates between 1.8 and 2.2. What follows is a bit ugly, but it works. Every frame of the visualization runs this bit of code:

Ndef(\lfo).asBus.get({|value| tuningFactor=value});

This sets tuningFactor (note the absence of a tilde) to the current value of the LFO, so that we can “see” the LFO in action. For what we hear, however, we only want the tuning to update once per bar, so we have a routine that does the following:

inf.do({
    6.wait;
    ~tuningFactor=tuningFactor;
});

That is, every six seconds, we set ~tuningFactor to tuningFactor. Clear as mud, right?

Percussion

Not too much to say here, other than that the kick and snare SynthDefs were taken from James Harkins’ patterns tutorial, with the snare slightly modified. On the snare pattern, we randomly switch \out between a bus with delay and one without, which sounds kinda cool.

Visualization

The fundamental issue in writing SuperCollider visualizations is transmitting information about sonic events to the DrawFunc of the UserView. You can do this using environment variables or control busses (which are fundamentally the same thing), but it turns out that SuperCollider has a built-in functionality for this kind of thing, which is the SendReply class. While I am generally a big fan of the SuperCollider documentation, in this case I think it is a bit opaque. Here is the description of SendReply from the help file:

A message is sent to all notified clients. See Server.

Right.

Anyway, here’s how it works. In our SynthDef, we have the following line:

SendReply.kr(Impulse.kr(0), '/tr', [freq, amp, sustain, pan], id);

The first argument is a trigger. Impulse with a frequency of 0 triggers once when the synth is instantiated and never again. The name of our message is /tr (names are required to start with /). The message itself is the array [freq, amp, sustain, pan]. Finally, we can attach an id to our message; in this case, the id is passed in to the synth by the controlling pattern. This allows us to distinguish between messages sent by different patterns.

This message is sent every time the synth is instantiated and contains the basic parameters of the event (i.e. freq, amp, etc). So far, so good. Now we need to receive the message. In order to do this, we set up an OSCFunc:

o=OSCFunc({|m| switch( m[2],
    1, { chordEvent = chordEvent + m[4].explin(0.04, 0.16, 1, 16) },
    2, { noteEvents[0].add([m[3], m[4], m[5], m[6]])}
)}, '/tr');

The argument |m| is the message sent by SendReply. We switch on m[2], which is the id parameter: events from the chord pattern have an id of 1 and events from the melody pattern have an id of 2. For the chord, we just want to keep track of amplitude, so we increment chordEvent by a scaled value of m[4], which is the amplitude. For the melody, we need all of the data, and we add it to the first element of an array called noteEvents.
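
For reference, SendReply prepends the node ID and reply ID to the values, so the incoming array is laid out like this:

// m[0] = '/tr'   (the message name)
// m[1] = node ID of the synth that sent the reply
// m[2] = id      (the replyID passed in by the pattern: 1 for the chord, 2 for the melody)
// m[3] = freq, m[4] = amp, m[5] = sustain, m[6] = pan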

In the last step of this process, this data is consumed by the drawFunc. The behavior for chordEvent is simple. Each time the drawFunc runs (approximately 60 times a second), it multiplies chordEvent by 0.985. This gives a nice geometric decay.

For noteEvents, things are a bit more complicated. First of all, noteEvents is an array of 30 elements, representing points in time. The OSCFunc only writes to element zero, which contains the newest events. The drawFunc processes all of the elements, and successive elements are drawn with smaller opacity values. At the end of drawing, all of the elements are shifted back in time (i.e. up in index). This results in the appearance of events smoothly “fading out.”
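
A minimal sketch of that per-frame bookkeeping, leaving out the actual drawing (the gist has the full drawFunc), might look like this:

// inside the drawFunc, once per frame:
chordEvent = chordEvent * 0.985;                 // geometric decay of the chord flash
noteEvents = [List.new] ++ noteEvents.keep(29);  // fresh (assumed) List at index 0, oldest slot dropped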

Hopefully, you can see that this is a quite general procedure for transferring information from synths to the drawFunc.

Results

So, how does this sound? To my ears, it’s quite easy to spot \(A=2\). When the timbres are slightly stretched, they still sound good, just a bit brighter, but when we get to the top of the range, they’re definitely a bit strange. All of the compressed timbres sound unusual, but not necessarily bad. It’s interesting to think about combining different timbres in the same composition. This is a little difficult because of the lack of a shared octave, but there could be creative ways around that…