So, you have one mic sounding good. Now let’s start working with multiple microphones, multiple instruments. The root of a lot of bad audio, in a nutshell, is the notion that if you get each channel sounding good by itself and then turn everything up, it will all sound great together. Based on that premise, if you want somebody to sound more in front of someone else, you just turn them up, right? Not exactly. Why not? Because that is not how the human ear operates.
Your ear can only discriminate a certain number of things at any given time. Try this as an example: play a track of a bass player that sounds good, full and crisp. Now turn on a fan in the room. Bass definition drops off and, oddly enough, it sounds like there is no bottom end. If the interfering sound were cymbals instead of a fan, it would be even worse. Multiple “anythings” have similar issues. Sounds mask each other. As the engineer, you have to decide how to deal with that and make music out of it.
So, let’s just take a simple task: two vocalists as opposed to just one.
Put two people up there, say with fairly similar voices, and you have a hard time figuring out who’s who. When you listened to each voice individually, it sounded pretty good; put them together and the result is just kind of two-dimensional.
You can’t really tell who’s who when they are singing at the same time. Now, if they are in a duet or splitting off into separate parts, obviously that changes. But when they are singing together, you’re not really hearing the voices independently.
One of the things people will do, if they are mixing on a stereo sound system, is to pan one person to the left and the other person to the right. And then, if you are sitting in the middle of the room, that really separates them.
In live sound, however, if you are sitting on the right or the left side of the room you won’t hear the other person very well at all. So that approach is not a viable fix unless your sound system is a type that hardly any average school has, where the left and right systems completely overlap the entire room and provide true stereo in all seats. [We can talk about why that is complicated to do in a different segment at some point.]
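For the curious, the panning described above is typically done with a constant-power pan law, so a voice doesn’t get quieter as it moves toward the center. Here is a small Python sketch of the idea; the function name and the [-1, 1] pan convention are my own choices for illustration, not anything from a particular console:

```python
import math

def constant_power_pan(pan):
    """Constant-power pan law.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain). The squared gains always
    sum to 1, so total power stays constant as a voice moves
    across the stereo field.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# A centered voice sits at about 0.707 (-3 dB) in each channel,
# not 0.5, which is why it doesn't drop in level at center.
left, right = constant_power_pan(0.0)
```

Note that this only helps listeners seated between the speakers, which is exactly the limitation described above for live rooms.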
Most live sound is dual mono by default. Separation in the mix is done by other means.
So, let’s get back to the two microphones. We really need to mix them in mono because we need to be sure everybody in the room hears both singers, but we want them to sound distinct. So what should we do? We will take some frequencies out of one mic that we leave in the other mic to make them stand apart sonically.
Let’s say there is a male voice and a female voice. In this instance, the goal is for the male voice to stand out in the low frequency range, but the female voice also has some low frequency content. To make that work, we will pull some low frequencies out of the female voice, which separates the two.
Now, when she is singing just her part, we have to bring those low frequencies back in for a while so her voice sounds the way it should as a soloist. But when they go back to singing in unison, in the duet sections, we will have to pull those frequencies back out to get the whole thing to work.
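For readers who like to see the mechanics, “pulling some lows out” is essentially what a console’s low-cut (high-pass) filter does. Below is a minimal Python sketch of a gentle first-order high-pass filter; all names and the 200 Hz cutoff are my own illustrative choices, not values from the article:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate):
    """First-order high-pass filter (a simple 'low-cut').

    Attenuates content below cutoff_hz at roughly 6 dB per
    octave -- the gentle version of pulling lows out of a
    vocal channel so another voice can own that range.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[n] - samples[n - 1]))
    return out

def rms(samples):
    """Root-mean-square level, a rough stand-in for loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One second each of a 50 Hz "low" tone and a 2 kHz tone:
# a 200 Hz low-cut knocks the 50 Hz tone down hard while
# leaving the 2 kHz tone nearly untouched.
rate = 48000
low = [math.sin(2 * math.pi * 50 * n / rate) for n in range(rate)]
high = [math.sin(2 * math.pi * 2000 * n / rate) for n in range(rate)]
```

On a real console this is just the low-cut button or the low EQ band; the code only shows why cutting lows on one channel leaves that range free for the other voice.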
So, it’s a constant movement, like playing an instrument. You are not leaving things alone. The keyboard player doesn’t just play an F chord and that’s it. When they’ve got to play a B flat chord, they change to B flat. Channel equalization has to change across songs and parts of songs. In essence, you are part of the band.
End of part 2 of a multi-part series.
Copyright AVLDesignsInc 2020
Did you miss part one? Catch up here >>> Sound in Schools part 1