The harmonic distortion that nonlinear processing adds to single tones is widely known and understood. But when two or more tones are involved, nonlinear processing creates a further kind of artifact: intermodulation distortion.
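As a minimal illustration (not taken from the article), the sketch below feeds two tones through a hypothetical quadratic nonlinearity. The spectrum of the output then contains energy at the sum and difference frequencies, neither of which is a harmonic of either input tone. All frequencies and the nonlinearity are arbitrary choices for the demo.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs          # exactly 1 second -> 1 Hz FFT bin spacing
f1, f2 = 1000.0, 1200.0         # two test tones (arbitrary)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A memoryless nonlinearity with an even (quadratic) term.
# The x**2 term multiplies the two tones with each other, producing
# components at f2 - f1 and f1 + f2 (plus harmonics at 2*f1 and 2*f2).
y = x + 0.5 * x**2

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

# With 1 Hz bin spacing, the bin index equals the frequency in Hz.
for f in (f2 - f1, f1 + f2):
    print(f"level at {f:.0f} Hz: {spectrum[int(f)]:.3f}")
```

Note that the difference tone at 200 Hz sits far below both inputs, which is part of what makes intermodulation distortion so much more objectionable than harmonic distortion: its products are not harmonically related to the input.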
Equalizers are probably among the simplest tools in audio and music production, at least on the surface. Their purpose is clear, and boosting or cutting certain frequency ranges is a comparatively clean and easily understandable process. Yet equalizer models differ vastly in both sound and usability. What exactly are these differences?
And on we go to the third and final part of the series on making phase audible. This time it’s about the most important ways in which a phase response can dramatically shape a sound. Along the way, I’m introducing a highly versatile concept that will become a recurring theme on The Science of Sound.
Continuing our journey through the world of phase, we are going to have even more fun with allpasses today. The magic starts to happen when a phase response changes over time.
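To make the idea concrete (this is my own sketch, not the article’s implementation): a first-order allpass passes all frequencies at equal magnitude but shifts their phase, and slowly modulating its coefficient with an LFO makes that phase response change over time. The function name and all parameter values below are hypothetical.

```python
import numpy as np

def modulated_allpass(x, fs, f_lfo=0.5, depth=0.4, center=0.5):
    """First-order allpass y[n] = -a*x[n] + x[n-1] + a*y[n-1],
    with the coefficient a slowly swept by a sine LFO.
    Stable as long as |a| < 1, i.e. center +/- depth stays inside (-1, 1)."""
    n = np.arange(len(x))
    a = center + depth * np.sin(2 * np.pi * f_lfo * n / fs)
    y = np.zeros_like(x)
    x1 = y1 = 0.0  # one-sample state
    for i in range(len(x)):
        y[i] = -a[i] * x[i] + x1 + a[i] * y1
        x1, y1 = x[i], y[i]
    return y

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
wet = modulated_allpass(x, fs)
# On its own, the allpass output sounds (nearly) identical to the input.
# Summing dry and wet turns the moving phase shift into moving notches:
out = 0.5 * (x + wet)
```

The dry/wet sum at the end is the key step: phase differences alone are hard to hear, but once the shifted signal is mixed with the original, frequencies where the phase shift approaches 180 degrees cancel, and those notches sweep as the coefficient moves.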
After last week’s introduction, we’re going to start actually listening to phase effects today. First, we’ll look at what it takes to make a static nonlinear phase response audible.
In the world of audio, few things trigger more reflexes of fear than “phase issues”. The reason is probably that the term describes a rather scientific concept that is sometimes hard to relate to the reality of music signals. On top of that, three very different phenomena are often confused because the same word is used for all of them.