Sound waves have a property called phase that refers either to a specific point within the cycle of rising and falling pressure, or to the timing relationship between two sounds. Complex sounds contain more than one frequency, and their shape depends on the phase and amplitude of those frequencies. When mixing complex sounds, out-of-phase frequencies tend to cancel and in-phase frequencies tend to reinforce. "Reversed phase" should be called "reversed polarity," because phase refers to changing the timing, not inverting the wave.
- The higher the frequency of a wave, the more times it cycles per second, and therefore the shorter each cycle is. Wavelength is the measure of the length of one cycle. Let's look at one wavelength of a simple sine wave: from the initial rise to its peak, the fall into the trough, and back to the original starting point. Just like we saw on the oscilloscope, the line across the middle, the x-axis, shows time left to right, and the y-axis shows the air pressure.
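The frequency-to-cycle-length relationship can be sketched numerically. This is a minimal illustration, not part of the course; the 343 m/s speed of sound and the two example frequencies used later (250 Hz and 1600 Hz) are assumptions for the sketch:

```python
# Sketch: higher frequency means a shorter cycle. The period of one cycle
# is 1/frequency; the physical wavelength in air is speed of sound / frequency.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed value)

def period_ms(freq_hz):
    """Duration of one cycle, in milliseconds."""
    return 1000.0 / freq_hz

def wavelength_m(freq_hz):
    """Length of one cycle in air, in meters."""
    return SPEED_OF_SOUND / freq_hz

for f in (250, 1600):
    print(f"{f} Hz: period {period_ms(f):.3f} ms, "
          f"wavelength {wavelength_m(f):.3f} m")
```

Running this shows the 1600 Hz wave completing each cycle far faster, and over a far shorter distance, than the 250 Hz wave.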
A sine wave oscillates between positive and negative amplitude in a smooth curve as time passes. This oscillation is measured in degrees: from zero, to 90, to 180, to 270, up to 360, which we call zero again as the cycle repeats, just like the degrees of a circle. This shape also shows the phase of the wave over time. Here, phase refers to where within the cycle the wave currently is.
Zero degrees, 90 degrees, 180 degrees, and so on. Things become more interesting though when we talk about the relative phase of two or more sounds. Combining two sound waves creates one wave with a new shape. I'll demonstrate first by mixing together two copies of the exact same sound. The way the copies combine depends on their timing compared to each other, which is another way of saying their relative phase.
If the ups and downs are in sync, those waves are perfectly in phase, and we get a combined sound twice as loud. If the ups and downs are timed so they happen opposite each other, then they're out of phase. The result of mixing two identical sounds perfectly out of phase is silence. If the relative phase is somewhere in between, the waves will partly reinforce or cancel. Things get even more interesting when we combine waves of different frequencies.
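The doubling and the cancellation described above can be verified in a few lines. This is a sketch under assumed parameters (a 48 kHz sample rate and a 250 Hz test tone, neither specified in the transcript):

```python
import math

SAMPLE_RATE = 48000  # assumed sample rate for this sketch
FREQ = 250.0         # assumed test frequency

def sine(n, phase_deg=0.0):
    """Sample n of a unit-amplitude sine wave, with an optional phase offset."""
    t = n / SAMPLE_RATE
    return math.sin(2 * math.pi * FREQ * t + math.radians(phase_deg))

samples = range(SAMPLE_RATE // 10)  # 100 ms of audio

# Two identical copies, perfectly in phase: peaks line up and add.
in_phase = [sine(n) + sine(n) for n in samples]

# Same two copies, but one shifted 180 degrees: ups meet downs.
out_of_phase = [sine(n) + sine(n, 180) for n in samples]

print(max(abs(s) for s in in_phase))      # about 2: twice the amplitude
print(max(abs(s) for s in out_of_phase))  # about 0: silence
```

The in-phase mix peaks at roughly twice the original amplitude, while the 180-degree mix cancels to (numerically near) zero.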
This creates a more complex shape. For example, if we mix a sine wave at 250 hertz with another at 1600 hertz, we get this new shape with elements of both. Then, if we combine two copies of our new complex waveform, we see that changing the timing affects the phase of the two frequency components differently. This is called comb filtering, where mixing a complex sound with a delayed copy of itself causes some frequency components to cancel, and others to reinforce.
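Comb filtering can be demonstrated with the same two frequencies. In this sketch (sample rate, delay length, and the one-bin DFT helper are all my assumptions, not the course's), the delay is chosen as exactly half a cycle at 1600 Hz but only a small fraction of a cycle at 250 Hz, so one component cancels while the other reinforces:

```python
import math

SAMPLE_RATE = 48000  # assumed sample rate for this sketch
DELAY = 15           # 15 samples = 312.5 us: half a cycle at 1600 Hz

def complex_wave(n):
    """The example above: a 250 Hz sine mixed with a 1600 Hz sine."""
    t = n / SAMPLE_RATE
    return math.sin(2 * math.pi * 250 * t) + math.sin(2 * math.pi * 1600 * t)

# Mix the complex wave with a delayed copy of itself (comb filtering).
N = 9600  # a whole number of cycles of both frequencies
mixed = [complex_wave(n) + complex_wave(n - DELAY) for n in range(N)]

def level(signal, freq):
    """Amplitude of one frequency component (a one-bin DFT sketch)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / SAMPLE_RATE)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
             for n, s in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

print(level(mixed, 250))   # near 2: reinforced (small phase shift)
print(level(mixed, 1600))  # near 0: cancelled (180-degree shift)
```

A single delay time thus shifts each frequency by a different fraction of its cycle, which is why a delayed copy reshapes the tone rather than simply making it louder or quieter.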
For more information about comb filtering, check out Foundations of Audio: Delay and Modulation. Besides time delay causing comb filtering, something else that can happen when two copies of the same sound combine is that if one of them is flipped upside down, the result is also total cancellation and silence. A lot of people call this out of phase, but strictly speaking that's not correct. Phase is a timing difference, not a flip.
Flipping the sound so that positive becomes negative, and vice versa, is sometimes called inverting the phase, but the technically correct term is inverting the polarity. This is a very important distinction when it comes to complex waves, like this complete mix of a song. (upbeat rock music) If we take two copies, shift one a couple of milliseconds, and mix them together, we get a tone alteration because of comb filtering.
(upbeat rock music) But, that's not cancellation of the whole sound. If we put the timing back in sync like it was, then flip the polarity of one of the waves, (silence) we get total cancellation, and silence. That's the difference between phase and polarity.
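The difference is easy to show numerically. Unlike a delay, a polarity flip negates every sample, so mixing it with the original cancels any signal exactly, no matter how complex. In this sketch, random samples stand in for a full mix of a song (an assumption for illustration):

```python
import random

# Stand-in for a complex waveform, like a complete mix of a song.
random.seed(0)
signal = [random.uniform(-1.0, 1.0) for _ in range(48000)]

# Polarity inversion: positive becomes negative, and vice versa.
inverted = [-s for s in signal]

# Mixing the original with its polarity-inverted copy.
mix = [a + b for a, b in zip(signal, inverted)]

print(max(abs(s) for s in mix))  # 0.0: total cancellation, silence
```

Every sample pair sums to exactly zero, with no dependence on frequency content, which is why polarity inversion silences the whole mix while a time shift only comb-filters it.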
Of course, whether you should try to correct someone if they refer to inverting the polarity as flipping the phase, that's up to you.
The course starts with explanations of what sound really is and how we hear it, including discussions of frequency, amplitude, phase, and psychoacoustics. Matt explores the analog audio signal path, explaining connections, gain staging, and metering. Next, he brings the audio signal into the digital domain, discussing analog-to-digital conversion, digital gain staging, file formats and compression, and dither.
Then the course digs into digital audio workstations (DAWs), explaining the concepts and misconceptions involved in digital recording systems. Matt describes how memory, CPU speed, and storage affect your DAW's performance, as well as how to manage computer resources and understand the plethora of file formats associated with digital recording. He follows with an overview of MIDI: how to generate, store, process, and communicate MIDI data. He wraps up with the audio processors that are often used for mixing in a DAW—including EQ, compressors, reverb, delay, and many others.
- What is sound?
- The three domains of sound: acoustic, analog, and digital
- The analog vs. digital signal paths
- Converting analog audio to digital
- Digital formats and data compression
- Understanding the five types of DAWs
- Recording performances with MIDI
- Mixing and processing audio with EQ, compression, and other effects