Audio Engineering Theses


Abstract

In music production, many tools are available to an audio engineer during the mixing process to bring a song to completion; among the most common are level balancing, equalization (EQ), and compression. These tools, however, still require the subjective judgment of a human engineer to create the desired listening experience. While automated mixing algorithms have been increasingly studied, they have yet to match the skill of even an amateur mixing engineer, in large part because an engineer's subjective mixing decisions are difficult for automated systems to replicate. The following research was developed with the primary goal of objectively generalizing the mixing decisions of audio engineers. Any advancement in this area may be useful in the development of new tools for audio engineers, including automated mixing systems that can efficiently replicate the subjective decisions of mixing engineers. To this end, professionally mixed multi-track audio stems from 30 songs in the rock genre were processed using a cross-analytic adaptation of a well-known loudness model to determine how each stem within a song masked the others in five frequency bands. Guitars were found to be the least masked instrument in all frequency bands, followed by the vocals, bass, and drums. A close masking relationship was found between guitars and vocals, while the bass and drums were less closely related. The data were sufficient to support the hypothesis that, within a given genre, significant interactions can be found between stem and frequency band in masking levels.
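The thesis itself does not specify its band edges or loudness model here, but the cross-masking analysis it describes can be illustrated with a minimal sketch. The band edges below are hypothetical placeholders, and simple band-limited RMS energy stands in for the perceptual loudness model; `masking_matrix` and `band_energies` are illustrative names, not functions from the thesis.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Hypothetical five-band split (Hz); the thesis's actual bands are not given here.
BAND_EDGES = [(60, 250), (250, 500), (500, 2000), (2000, 6000), (6000, 16000)]

def band_energies(x, fs):
    """RMS energy of signal x in each band (a crude stand-in for a loudness model)."""
    energies = []
    for lo, hi in BAND_EDGES:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfilt(sos, x)
        energies.append(np.sqrt(np.mean(y ** 2)))
    return np.array(energies)

def masking_matrix(stems, fs):
    """For each (target stem, band): ratio in dB of the summed energy of all
    other stems (the maskers) to the target's energy. Higher values mean the
    target is more heavily masked in that band."""
    energy = {name: band_energies(sig, fs) for name, sig in stems.items()}
    result = {}
    for target in stems:
        masker = sum(e for name, e in energy.items() if name != target)
        result[target] = 20 * np.log10((masker + 1e-12) / (energy[target] + 1e-12))
    return result
```

As a sanity check, a 100 Hz "bass" stem analyzed against a 1 kHz "vocal" stem should come out heavily masked in the 500–2000 Hz band and barely masked in the lowest band, with the reverse holding for the vocal.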



First Advisor

Wesley A. Bulla

Second Advisor

Doyuen Ko

Third Advisor

Eric Tarr


Audio Engineering


Mike Curb College of Entertainment and Music Business

Document Type

Thesis

Degree Name

Master of Science in Audio Engineering (MSAE)

Degree Level

Masters

Degree Grantor

Belmont University


Keywords

audio engineering; loudness; masking threshold; mixing; multitrack; perception; psychoacoustics; stem; threshold