Join John Nastos for an in-depth discussion in this video Introducing AVAudioSession, part of Creating Audio Apps for iOS.
- Let's take a look at AVAudioSession, which is a singleton object used to tell the system how the audio for an app should be handled. Singletons are a commonly used design pattern in Objective-C. They give you convenient access to a single object that may be used by many different pieces of code, without needing access to the object's owner. With AVAudioSession, think of it like this: the system has one audio engine. Multiple apps can hook into it and play their own audio, but it's always routed through the common system.
The singleton accessor sharedInstance gives us access, in code, to that shared system. With AVAudioSession, I can do a few things. I can activate or deactivate the app's audio session, which tells the system when my app is going to actively play or record audio. I can set the audio session category, which tells the system some details about how my app wants to handle playing or recording audio. I can also configure settings such as the sample rate and buffer duration.
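The steps just described can be sketched in Objective-C like this. This is a minimal sketch, assuming it runs early in the app's lifecycle (for example, in the app delegate); the specific sample rate and buffer values here are illustrative, not recommendations.

```objc
#import <AVFoundation/AVFoundation.h>

// Grab the shared audio session singleton.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// Tell the system how this app intends to use audio.
BOOL ok = [session setCategory:AVAudioSessionCategoryPlayback error:&error];

// These are preferences, not guarantees -- the system may choose
// different values depending on the hardware and other active audio.
if (ok) ok = [session setPreferredSampleRate:44100.0 error:&error];
if (ok) ok = [session setPreferredIOBufferDuration:0.005 error:&error];

// Activate the session right before playing or recording.
if (ok) ok = [session setActive:YES error:&error];

if (!ok) {
    NSLog(@"Audio session setup failed: %@", error);
}
```

Note that each call returns a BOOL and fills in the error only on failure, so checking the return value after each step keeps the error reporting accurate.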
Using the session, I can also handle audio route changes, such as plugging in or unplugging headphones or a USB audio device. And lastly, I can react to system audio events, like the availability of the media services daemon that controls audio system-wide. One of the most useful, common, and important things that AVAudioSession allows you to do is set the audio session category. Almost every time you want to play audio in your app, with the exception of using System Sound Services, you'll want to choose the appropriate category for your session.
AVAudioSession offers a variety of possible categories. Ambient is used for background sounds. By default, it mixes with other music and audio that other apps may be playing. SoloAmbient is used for background sounds but will silence audio from other apps. Playback should be used for music tracks and other foreground audio. This is the category I'll use in the examples when playing audio. Record is used when recording. PlayAndRecord should be used when playing and recording audio at the same time.
Think, for example, about a voice-over-IP (VoIP) app that records and transmits your voice as well as plays the voice from the other end of the call. AudioProcessing should be used when using a hardware codec or signal processing without actually playing or recording audio. And lastly, MultiRoute is for apps that use separate routes for distinct audio data, such as an app that plays different audio out of the headphones and a USB interface at the same time. If you want even more information about these categories, check out Apple's documentation on working with audio session categories.
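For the VoIP scenario above, a sketch of setting the matching category might look like this. The category choice is the point here; the error-handling pattern is just one reasonable way to structure it.

```objc
#import <AVFoundation/AVFoundation.h>

// A VoIP-style app records the microphone and plays the remote party's
// voice at the same time, so PlayAndRecord is the appropriate category.
NSError *error = nil;
BOOL ok = [[AVAudioSession sharedInstance]
              setCategory:AVAudioSessionCategoryPlayAndRecord
                    error:&error];
if (!ok) {
    NSLog(@"Could not set PlayAndRecord category: %@", error);
}
```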
It's important to choose the category that best fits the audio needs of your particular app, since the category tells the system some important information about how it should treat and route the audio. For example, one of the most common and frustrating mistakes people make when starting to write iOS audio code is not setting the category properly, and then their audio doesn't play when the device's silent switch is on. Of course, in some situations, like a game perhaps, you want to respect the user's silent switch setting.
In that case, you may want to use the default audio category, which is AVAudioSessionCategorySoloAmbient. But there are other situations where users expect the app to function either way. For example, if your app is a music player that plays a library of MP3s, like the built-in Music app, users would expect it to play audio regardless of the silent switch setting. So, in that instance, you'd most likely want to set the category to AVAudioSessionCategoryPlayback. You can read up on even more details about audio sessions in Apple's documentation.
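The route changes and system audio events mentioned earlier are delivered as notifications, so reacting to them can be sketched like this. The specific responses in the comments (pausing on unplug, rebuilding after a reset) are common conventions, not requirements.

```objc
#import <AVFoundation/AVFoundation.h>

// React to audio route changes: headphones or a USB device being
// plugged in or unplugged.
[[NSNotificationCenter defaultCenter]
    addObserverForName:AVAudioSessionRouteChangeNotification
                object:[AVAudioSession sharedInstance]
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
    NSUInteger reason =
        [note.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
        // e.g. headphones were unplugged -- often a good time to pause.
    }
}];

// React to the media services daemon being reset system-wide.
[[NSNotificationCenter defaultCenter]
    addObserverForName:AVAudioSessionMediaServicesWereResetNotification
                object:[AVAudioSession sharedInstance]
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
    // The audio daemon restarted: recreate your audio objects and
    // reconfigure and reactivate the session here.
}];
```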
- Playing sounds with System Sound Services
- Setting up audio sessions
- Playing sounds with AVFoundation and AVAudioPlayer/AVAudioRecorder
- Recording audio with an audio input queue
- Playing back audio with an audio output queue
- Setting up audio units
- Changing input and output levels
- Responding to events
- Working with third-party frameworks