DJ Pi 5: Habemus Sonum!

November 4, 2017

After four posts we still haven’t produced any sound. Let’s change that! I’m going to start with the aural equivalent of “Hello, world!”, which is changing the volume of the output. I’ll also go through a refactor I did to allow each effect component to be linked to multiple parameters.

You can find the code for this post here.


So, I’m using the Audio Injector stereo hat to handle audio input and output on my Pi. There are plenty of Pi sound cards available but this one had good reviews at a reasonable price. Setup is pretty straightforward, requiring nothing more than connecting it and running a shell script. Straight away I could play tracks through my Pi using sox.

Sadly, getting input was not such a joy. My Juce app was connecting to the soundcard correctly (all handled automatically by the framework) but no audio was coming in despite me playing all kinds of nonsense into the input ports. The problem was that alsamixer was configured to take input from the (non-existent) microphone rather than the line-in. Each time I would correct the setting, exit alsamixer, try the Juce app again and then hear no audio. When I would go back into alsamixer I would see that it had reset my changes. Eventually, I realised that the problem stemmed from having forked-daapd running as a service. It appears that they didn't play well together, as alsamixer started remembering what I had done after I stopped forked-daapd. Let's start attenuating that output!

Processing audio basics

In Juce, as seems to be standard practice, audio is processed in blocks of samples (50, to be exact). The framework repeatedly provides a buffer containing a block of input samples for each channel. All of the audio processing magic happens by modifying the samples as they come through. Since this behaviour will be common to every effect component that we make, I created an AudioEffect base class that provides a virtual processBlock to be implemented by each effect component class. The benefit of this is that each component exposes a common interface, so components can be chained together simply by piping the buffer through each component's processBlock method. The base class also provides some functionality for adding parameters, which I'll discuss further below.
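As a rough sketch of that idea, assuming simplified names (the AudioBuffer alias and processChain helper here are stand-ins I've invented for illustration; the real project uses JUCE's own buffer type):

```cpp
#include <vector>

// Stand-in for JUCE's audio buffer: one vector of samples per channel.
using AudioBuffer = std::vector<std::vector<float>>;

// Hypothetical base class: every effect implements processBlock, so a
// chain of effects is just the buffer piped through each one in turn.
class AudioEffect {
public:
    virtual ~AudioEffect() = default;
    virtual void processBlock(AudioBuffer& buffer) = 0;
};

// Chaining: run the same buffer through each effect in order.
void processChain(AudioBuffer& buffer, std::vector<AudioEffect*>& effects) {
    for (auto* effect : effects)
        effect->processBlock(buffer);
}
```

Because every effect shares the processBlock interface, reordering or adding effects is just a matter of changing the list passed to the chain.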

Changing the volume

Here we have the Volume component’s implementation of processBlock. For now ignore the EffectParameter and content yourself with the knowledge that the level variable will contain a floating-point value between 0.0 and 1.0.

If the parameter isn’t turned on we make a quick exit, leaving the audio buffer unchanged. If it is on we just loop through each sample in each channel and scale it by the parameter’s value. Note that the buffer is passed in as a reference, so our modifications are to the original buffer. The alternative would be to create a copy, modify it and return the copy, leaving the original untouched. This would gain us lots of functional programming points but would be less efficient, and speed is important here.
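The logic just described can be sketched in self-contained C++; note that the buffer alias and the cut-down EffectParameter here are simplified stand-ins for the JUCE and project classes, not the real implementation:

```cpp
#include <vector>

// Stand-in for JUCE's audio buffer: one vector of samples per channel.
using AudioBuffer = std::vector<std::vector<float>>;

// Simplified stand-in for the post's EffectParameter: an on/off flag plus
// a level between 0.0 and 1.0 driven by the hardware control.
struct EffectParameter {
    bool  on    = false;
    float value = 1.0f;
};

class Volume {
public:
    EffectParameter level;

    void processBlock(AudioBuffer& buffer) {
        // Quick exit: leave the buffer untouched if the parameter is off.
        if (!level.on)
            return;

        // Scale every sample in every channel by the parameter's value.
        for (auto& channel : buffer)
            for (auto& sample : channel)
                sample *= level.value;
    }
};
```

Because the buffer is taken by reference, the scaling happens in place and nothing is copied per block.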

As our maximum input level is 1.0 our design limits us to only reducing the input volume. Turning the potentiometer fully clockwise means the output will be at the input’s level with corresponding decreases as the potentiometer moves anti-clockwise. I thought this was safer than allowing horrendous gain increases. And that’s literally it for the processing logic!

Changing the parameters

So, let’s look at the parameters. Previously we created a Control object for each physical control and listened for changes to them in the MainComponent class. Now, each effect can instantiate a number of EffectParameter objects, specifying a name for the parameter and the ID of the control it’s listening to. This means that multiple parameters can be tied to each control and each effect can have more than one parameter.

This allows for a much more configurable setup. Previously, with one control per effect, you could only control as many effects simultaneously as there were controls. But in the brave new world of multiple parameters we’ll need some way of specifying which effect we want to modify.

To give an example, let’s say the first control is the ‘level’ parameter for the volume effect and the delay length for the delay component. It would be pretty rubbish if changing the length of the delay also changed the volume of the output. Most likely I’ll add some kind of hardware switch, but we’re getting a bit ahead of ourselves. Do check out the EffectParameter class, but it’s really only a mirror of the Control class.

Once an effect has been instantiated it needs to be registered with the SerialConnection so that its parameters can be registered as listeners. The effect will store its parameters in a std::map&lt;string, EffectParameter&gt;, so we iterate through each entry in the map, retrieve the parameter and register it as a listener to the relevant control. I felt that this was the best approach as it means that the controls can be kept private to the SerialConnection.
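A sketch of that registration pass might look like the following; beyond the std::map of parameters, the names here (controlChanged, addListener, controlValueReceived, registerEffect) are my own assumptions, and the serial side is reduced to a listener registry per control ID so the example stands alone:

```cpp
#include <map>
#include <string>
#include <vector>

// Simplified stand-in for the post's EffectParameter: each parameter
// knows the ID of the hardware control it listens to.
struct EffectParameter {
    int   controlId = 0;
    float value     = 1.0f;
    void controlChanged(float newValue) { value = newValue; }
};

// Reduced SerialConnection: the controls stay private; callers can only
// register listeners against a control ID.
class SerialConnection {
public:
    void addListener(int controlId, EffectParameter* param) {
        listeners[controlId].push_back(param);
    }

    // Called when a new value arrives over the serial line for a control.
    void controlValueReceived(int controlId, float value) {
        for (auto* param : listeners[controlId])
            param->controlChanged(value);
    }

private:
    std::map<int, std::vector<EffectParameter*>> listeners;
};

// Registering an effect: walk its parameter map and hook each parameter
// up to the control whose ID it names.
void registerEffect(SerialConnection& serial,
                    std::map<std::string, EffectParameter>& params) {
    for (auto& entry : params)
        serial.addListener(entry.second.controlId, &entry.second);
}
```

Since the listener map is the only way in, nothing outside SerialConnection ever touches the controls directly, which matches the intent of keeping them private.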

Wrap up

This isn’t the most exciting post, but we’ve demonstrated that we can receive the audio input, process it and output a modified signal. My next step will be towards something actually useful in the form of a variable delay line. While a change of volume is pretty easy to verify, as the effects get more sophisticated I would also like to add some testing code.