This installment of Foundations of Audio explains one of the most essential ingredients in audio mixing: reverb, the time it takes for sound to bounce, echo, and decay during a live performance or recording. Reverb gives a natural richness to your recordings, a richness that can also be reproduced artificially. Producer and audio engineer Alex U. Case covers the acoustic, mechanical, and digital means of creating reverb, and charts the parameters (room size, density, etc.) you'll need to know to take advantage of the original recording space and enhance it in post. He then shows how to simulate reverb digitally with effects, adding timbre, texture, and contrast, and how to improve the sound of your mixes with a sense of space and depth.
These techniques can be practiced with the free Get in the Mix sessions, currently available for Pro Tools and Logic Pro.
There are many different methods for creating studio reverb. Room tracks and chambers are acoustic sources of reverb, and springs and plates give us reverb mechanically, but we're not done: there are digital ways too. Digital reverbs come in two forms: algorithmic reverb, which is the type of reverb plug-in in your DAW, and convolution, which takes advantage of the ever-growing power of CPUs to bring us another form of digital reverb. You'll hear examples of both types of reverb throughout this course.
We'll take them in order. In an earlier movie we saw how reverb comes from the countless room reflections that follow any sound made in a room. In fact, those reflections that make up reverberation could be created in your DAW using a bunch of delays. One of the first digital reverbs ever was created in 1962 by a clever chap named Manfred Schroeder, working at Bell Labs, and he used just six delays. Today's digital reverbs, of course, use many, many more.
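The building block of those delay-based reverbs is the recirculating delay line. Here's a minimal sketch in Python (numpy assumed; the delay length and feedback gain are illustrative values, not taken from the course):

```python
import numpy as np

def feedback_comb(x, delay_samples, feedback):
    """One recirculating delay line: each echo is fed back in, quieter each pass."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + feedback * delayed
    return y

# Feed in a single click and out comes a train of decaying echoes.
click = np.zeros(100)
click[0] = 1.0
echoes = feedback_comb(click, delay_samples=10, feedback=0.5)
# echoes[0] is 1.0, echoes[10] is 0.5, echoes[20] is 0.25, and so on.
```

Schroeder's insight was that a handful of these, tuned and combined, begin to sound like a room; today's algorithms simply use far more of them.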
Algorithmic digital reverbs are simply tricked-out digital delay lines. The high-end outboard digital reverbs from Bricasti, Lexicon, TC Electronic, and Yamaha use algorithms consisting of very elaborate networks of interconnected, recirculating, modulating, and filtered delays. Many, many different digital delays are combined, intertwined really, so that the sound that goes in is sustained and repeated in a richly complex pattern, very much inspired by room acoustics.
What a concert hall does with countless reflecting surfaces, an algorithmic reverb can do with a large but finite number of delay lines. The algorithm is defined by the number of delays, the delay time setting of each, and the way they're connected to each other: feeding back and feeding forward, with their delay times modulated, their phase shifted, and their spectral content manipulated. Built of multiple processed delays, an algorithmic reverb is a complex digital system that resonates; the signal goes in, lingers, and fades.
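As a rough illustration of such a network, here is a Schroeder-style sketch in Python: parallel recirculating (comb) delays summed to thicken the echo pattern, then series allpass delays to smear it into a dense wash. Assumes numpy; the delay lengths and gains are illustrative guesses, not the values of any commercial algorithm.

```python
import numpy as np

def comb(x, d, g):
    # Recirculating delay: an echo every d samples, each g times quieter.
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - d] if n >= d else 0.0)
    return y

def allpass(x, d, g):
    # Allpass delay: adds dense echoes without coloring the overall spectrum.
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - d] if n >= d else 0.0
        yd = y[n - d] if n >= d else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    # Parallel combs with mutually prime delay lengths thicken the pattern...
    wet = sum(comb(x, d, 0.75) for d in (1422, 1491, 1557, 1617))
    # ...and series allpasses scramble phase to increase echo density.
    for d, g in ((225, 0.7), (556, 0.7)):
        wet = allpass(wet, d, g)
    return wet

# A click goes in; a lingering, decaying tail comes out.
click = np.zeros(4000)
click[0] = 1.0
tail = schroeder_reverb(click)
```

Six delays, just as in Schroeder's original design; commercial units chain together many more, with modulation and filtering inside the loops.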
The sound quality of this kind of digital reverb depends very much on the skill of the engineers who design and write the code, and seems to be directly proportional to the algorithm's complexity. So these reverbs are greedy users of the hardware needed to do all the calculations. Outboard digital reverbs have hardware dedicated to the task. Plug-in reverbs, especially the best-sounding ones, will gobble up a significant share of your DAW's system resources. There are great-sounding plug-in reverbs for sure, but you'll do well to have a very fast multi-core CPU with a big chunk of RAM to handle them.
Memory- and calculation-intense, algorithmic reverb is one of the most important places to consider reaching for a dedicated outboard unit. Convolution offers an alternative digital approach to algorithmic reverb. The curious word convolution simply refers to a very specific mathematical operation. Leveraging that kind of math lets us take the impulse response of a room and apply it to any audio track we have. Again, since reverb is in essence that almost indescribable, complicated, organic pattern of decaying reflections from all the room surfaces, all we need is a way to capture the necessary pattern of delays and apply it to our audio.
This is done by sending an impulse into the room, a simple single instantaneous spike of energy, a perfect click, and recording the resulting pattern of spikes that follow. This recorded response of the reverberant room as it reacts to a simple spike gives us all the data we need to apply the room's response to any other signal: your vocal track, your snare, your ukulele. Convolution reverb is driven, then, by a library of impulse responses, those recordings of how spaces reacted to an impulse.
The impulse response might be a hall in Japan, while your track is a close-miked vocal from your bedroom. Convolve your vocal track with that impulse response and your listeners will hear your vocalist sonically transported to a space you may never have visited. Convolution is a relatively new tool in audio, not because the idea is new (it isn't), but because it's very computationally intense. Convolution didn't really become viable in the studio environment until CPUs became multi-core, clock speeds broke through the gigahertz range, and RAM started being sold by the gigabyte.
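That convolution step is easy to sketch in Python. Everything here is a hypothetical stand-in (numpy assumed): a synthetic decaying noise burst plays the role of a measured hall's impulse response, and a single click stands in for the dry vocal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a measured impulse response: noise with an exponential decay,
# shaped roughly like a real room's reverb tail.
ir = rng.standard_normal(8000) * np.exp(-np.linspace(0.0, 8.0, 8000))

# Stand-in for a dry track: a single perfect click.
dry = np.zeros(2000)
dry[0] = 1.0

# Convolution "plays" the dry signal through the room's echo pattern.
wet = np.convolve(dry, ir)

# The output is len(dry) + len(ir) - 1 samples long:
# the reverb tail rings on after the dry sound stops.
```

Because the dry signal here is itself an impulse, the output is just the impulse response again; with a real vocal, every sample of the track triggers its own copy of the room's decay pattern, all overlapping, which is why convolution is so calculation-heavy.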
We're lucky to be alive in audio today, because such capability is readily available. Convolution joins algorithmic reverb to give all of us two very powerful choices for creating studio reverb digitally.