
Understanding Intermodulation Distortion

The harmonic distortion that nonlinear processing adds to single tones is widely known and understood. But when two or more tones are involved, nonlinear processing adds another kind of artifact: intermodulation distortion.

Nonlinear distortion is an extremely fascinating topic. Not only does it create much of the beloved character of analog audio gear, it is also among the hardest things to understand, simulate, or bring under deliberate control. Why is that?

Sources of nonlinear distortion come in many flavors. First of all there are several variants of clipping, the thing we usually think of when talking about nonlinear distortion. But there are also effects like crossover distortion, which happens at low signal levels. The symmetry of the distortion curve also matters a lot. Things get even more complicated when the distortion behavior varies across frequency ranges. (As so often, my list of possible topics grows rather than shrinks whenever I write a new post.)

Unfortunately these effects are often hard to calculate and simulate. Worse still, digital emulation is usually very expensive: not only does it require oversampling to suppress aliasing, but properly modeling a complete nonlinear circuit always involves many more computations than any developer would like.

The audible effects of nonlinear distortion are as manifold as the topic is complex. As stated above, we often only think about harmonic distortion and which overtones are added. But that’s only the tip of the iceberg. For example, every clipping distortion is at the same time a compressor and a limiter. One could even go so far as to think of strong crossover distortion as a kind of expander. But for today, we’ll stick with another less prominent nonlinear effect called intermodulation distortion.

Harmonic vs. Intermodulation Distortion

First, there are some things to understand regarding the differences between harmonic and intermodulation distortion. Most importantly, they always come as a pair: there’s no way to get one without the other.

That said, it’s funny that most of the nerd talk around nonlinearities is only about the specific shape of a distortion curve and the kinds of overtones it creates. In fact, often not even these properties are of interest, but rather whether there’s a tube somewhere that’s glowing nicely (even if the glow comes from an LED behind the tube). As a result, the question of how distortion responds to tones more complex than a sine wave is rarely discussed. And that question brings us right into the territory of intermodulation distortion.

The thing is that human hearing is great at grouping harmonic overtones together, perceptually combining several frequency components into one sound. As we’ll see below, intermodulation distortion can create inharmonic tones that are not perceived as belonging to the original sound.

Consequently, perception thresholds for intermodulation artifacts can be much lower than those for harmonic distortion. And while harmonic distortion is often a pleasant and sought-after quality, that’s not necessarily the case for inharmonic intermodulation tones. Thus, intermodulation is something we usually don’t want to hear.

What is Intermodulation Distortion?

To recap, harmonic distortion describes what happens to a single sine wave when it passes through a nonlinear system. The output signal will contain additional tones at multiples of the sine’s frequency. Depending on the type of nonlinearity, the amount and distribution of these harmonic overtones will vary.
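As a quick illustration, here’s a minimal Python/NumPy sketch. The simple polynomial waveshaper is just an assumed stand-in for a real amplifier curve:

```python
import numpy as np

fs, f0 = 48000, 1000                        # sample rate and test tone in Hz
t = np.arange(fs) / fs                      # one second of audio, 1 Hz FFT bin spacing
x = 0.8 * np.sin(2 * np.pi * f0 * t)        # clean sine wave
y = x + 0.3 * x**2 + 0.2 * x**3             # mildly asymmetric nonlinearity (assumed shape)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
print(np.flatnonzero(spectrum > 1e-3))      # [0 1000 2000 3000]: a DC offset plus harmonics
```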

Things get more complicated when the signal consists of more than one tone. Not only does each tone generate its own series of harmonics; the tones also interact to produce additional, non-harmonic components.

The frequencies of these intermodulation distortion products land at the sums and differences of the original tones’ frequencies.

So imagine two sine tones with frequencies of 1000 Hz and 1100 Hz passing through a nonlinear amplifier stage. Each of them will create harmonic overtones at 2000 Hz, 2200 Hz, 3000 Hz, 3300 Hz and so on. But additionally, there will be distortion products at 100 Hz and 2100 Hz.
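You can check this with the same kind of sketch as above (again using an assumed polynomial curve; a real circuit will distribute the levels differently):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                              # one second, 1 Hz bin spacing
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 1100 * t)
y = x + 0.3 * x**2 + 0.2 * x**3                     # same assumed nonlinearity as before

spectrum = np.abs(np.fft.rfft(y)) / len(y)
print(np.flatnonzero(spectrum > 1e-3))
# [0 100 900 1000 1100 1200 2000 2100 2200 3000 3100 3200 3300]
# Besides the harmonics of each tone, the second-order products show up at
# 100 Hz and 2100 Hz; the cubic term also adds third-order products at
# 900, 1200, 3100 and 3200 Hz.
```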

Now these two pure sine waves are actually a very artificial case. With real musical instruments, the tones already have harmonic overtones in the first place. Some more, some less, but practically they’re always there. If we take the harmonics of the original tones into account, we get additional intermodulation products at 200 Hz, 4200 Hz, 300 Hz, 6300 Hz and so on. So as you can see, the distortion products have their own harmonic series.
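If you want to map out where such products can land, you can simply enumerate the sums and differences of the harmonics (a sketch; a real circuit only produces a subset of these, with levels dropping for higher orders):

```python
f1, f2 = 1000, 1100                          # the two original tones from above
products = set()
for m in range(1, 4):                        # up to the 3rd harmonic of each tone
    for n in range(1, 4):
        products.add(m * f1 + n * f2)        # sum products
        products.add(abs(m * f1 - n * f2))   # difference products
print(sorted(products))
# [100, 200, 300, 800, 900, 1200, 1300, 1900, 2100, 2300, 3100, 3200,
#  4100, 4200, 4300, 5200, 5300, 6300]
```

Among them you find the 200, 300, 4200 and 6300 Hz products mentioned above.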

What Does That Mean Musically?

Still, these are only two tones. In a complete piece of music, there are full chords, possibly with extensions, and much more. Obviously, there are a lot of intermodulation frequencies to calculate. Most audible in this situation are the difference tones, because they occur at lower frequencies than the original tones and are thus less likely to be masked by them.

For simpler chords, these lower-frequency difference products again have a harmonic relationship to the original tones. Especially in the case of major chords, the difference products are subharmonics of the original tones, which means that all the original tones and intermodulation tones together are related like the harmonic overtone series of a much lower tone.

This is also how power chords on a guitar work: played through a heavily distorting guitar amplifier, the chord generates an additional subharmonic one octave below its root.
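For a concrete example: A2 at 110 Hz plus its fifth at roughly 165 Hz (a 3:2 ratio) yields a difference tone at 55 Hz, which is exactly the octave below 110 Hz.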

Generally, the more complex the chord, the lower the subharmonics that are generated. In many cases, these subharmonics end up far below the pitches we can perceive. Instead, they are perceived as a rhythmic beating, similar to amplitude modulation. In this regard, minor chords lead to much lower subharmonics than major chords.
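To put rough numbers on that, here’s a small sketch comparing the second-order difference tones of a just-intonation major and minor triad (the exact frequencies are illustrative assumptions):

```python
# Second-order difference tones for a just-intonation major vs. minor triad.
major = [264.0, 330.0, 396.0]     # ratios 4:5:6   -> shared fundamental at 66 Hz
minor = [264.0, 316.8, 396.0]     # ratios 10:12:15 -> shared fundamental at 26.4 Hz

for name, chord in (("major", major), ("minor", minor)):
    diffs = sorted({round(a - b, 1) for a in chord for b in chord if a > b})
    print(name, diffs)
# major [66.0, 132.0]           -> harmonics of 66 Hz, two octaves below the root
# minor [52.8, 79.2, 132.0]     -> harmonics of 26.4 Hz, near the lower limit of pitch perception
```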

Strongly inharmonic sounds such as cymbals have a rather chaotic set of partials to begin with, so nonlinear distortion produces chaotic, noisy intermodulation artifacts, especially at much lower frequencies. As a result, cymbals quickly take on a gritty, grainy texture when distorted. That sounds great if you want it, but not so great if you’re after a clean, high-end acoustic recording. Keep in mind that with most analog circuits, nonlinear distortion increases at high frequencies, which is exactly where cymbals have most of their energy.

Conclusion

The thing to keep in mind is that for anything but single tones, a nonlinear audio processor will produce intermodulation artifacts. And due to their often inharmonic relationship and their lower-frequency components, they are much more easily audible than harmonic distortion.

When pushing your full mix hard through some vintage character box or saturation plugin, be careful and listen for such artifacts. The result can sound cool, but it doesn’t have to. A funny deduction from the above might be that songs in a minor key probably call for a cleaner treatment with lower distortion than songs in a major key. Think about that for a second.

Also, take special care with high-frequency content. Otherwise inaudible frequency ranges can produce audible low-frequency artifacts on playback systems less capable than yours, especially since such systems likely create the most distortion at high frequencies.

Anyway, don’t start to panic about intermodulation distortion. It’s not necessarily a bad thing. But be aware of it and listen for it, especially when treating mix busses with vintage gear or software emulations thereof. And as always, share your experience in the comments!

Jon Boley - September 7, 2016

It’s also interesting to keep in mind that the cochlea creates its own (audible) distortion. In fact, audiologists use intermodulation tones (distortion product otoacoustic emissions) as an indication of a healthy cochlea.
