Displaying a camera frame in your sample application is interesting, but it becomes far more useful once you can actually do something with the frames appearing on screen. Learn how to access the frame-processing delegate in AVFoundation, and how to convert its output into an iOS-friendly image that any iOS developer can work with in their application.
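The course's exact delegate code isn't shown in this excerpt, but a minimal sketch of the pattern it describes looks like this: conform to `AVCaptureVideoDataOutputSampleBufferDelegate` and convert each incoming `CMSampleBuffer` into a `UIImage`. The class name `FrameHandler` is an assumption for illustration.

```swift
import AVFoundation
import CoreImage
import UIKit

// A minimal sketch (not the course's exact code) of receiving camera
// frames through AVFoundation's sample-buffer delegate and converting
// each frame into a UIImage. The name `FrameHandler` is hypothetical.
final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private let context = CIContext()

    // Called by AVFoundation for every frame the video output produces.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Pull the raw pixel data out of the sample buffer.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Wrap it in a CIImage, then render a CGImage we can hand to UIKit.
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }

        // An iOS-friendly image any UIKit code can display or save.
        let image = UIImage(cgImage: cgImage)
        _ = image // hand `image` off to the rest of the app here
    }
}
```

To actually receive frames, an instance of this class would be registered on the session's video output with `setSampleBufferDelegate(_:queue:)`, using a background queue so frame handling stays off the main thread.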
- [Instructor] Now that we've added a button to our UI, let's make sure we handle what happens when we tap it to capture a still image. Go to the code where you declared your button, and underneath the line where you set its image, type button.addTarget and let autocomplete take care of the rest. For the target, you'll select self. For the action, you'll type #selector(shutterButtonTapped). Once again, this is not a method that we have created yet, but you'll type this in and add it in just a moment.
For the UI control event, you'll type in .touchUpInside. Now scroll down to the extension where you handle your UI button functions. Underneath cancelButtonTapped, add a new function, and we'll call this @objc func shutterButtonTapped. Now that we have a method to call whenever the shutter button is tapped, we need to make sure it does what comes next. For this, we know that our camera object is going to be what opens up the aperture on our iOS device and actually captures a still image.
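Put together, the wiring described above looks roughly like this. `CameraController` and its `captureStillImage()` method are hypothetical stand-ins for the camera object the course builds, since its API isn't shown in this excerpt.

```swift
import UIKit

// Hypothetical stand-in for the framework's camera object; the
// course's actual capture method may be named differently.
final class CameraController {
    func captureStillImage() { /* capture logic lives in the framework */ }
}

class CameraViewController: UIViewController {

    let camera = CameraController()
    let shutterButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        // Route taps on the shutter button to shutterButtonTapped().
        // .touchUpInside fires when the finger lifts while still inside
        // the button's bounds — the standard tap event for buttons.
        shutterButton.addTarget(self,
                                action: #selector(shutterButtonTapped),
                                for: .touchUpInside)
    }

    // Marked @objc so the Objective-C selector machinery behind
    // #selector(shutterButtonTapped) can find this method at runtime.
    @objc func shutterButtonTapped() {
        // Ask the camera object to open the aperture and grab a still.
        camera.captureStillImage()
    }
}
```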
But when the camera object captures the still image…
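The transcript cuts off here, but the AVFoundation machinery a camera object like this typically wraps is AVCapturePhotoOutput: you request a capture, and the resulting photo arrives asynchronously through a delegate callback. Here is a minimal sketch, assuming a hypothetical `StillCapture` wrapper; in a real app the output must first be attached to a running AVCaptureSession with a camera input.

```swift
import AVFoundation
import UIKit

// A sketch of the AVFoundation still-capture flow the camera object
// likely wraps. The type name `StillCapture` is hypothetical.
final class StillCapture: NSObject, AVCapturePhotoCaptureDelegate {

    let photoOutput = AVCapturePhotoOutput()

    // Trigger a still-image capture; the result arrives asynchronously
    // in the delegate callback below.
    func capture() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // Called by AVFoundation once the photo has been processed.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        _ = image // `image` is now a UIImage the app can display or save
    }
}
```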
Along the way, he explains the differences and nuances between writing code for an application and for a reusable framework, as well as some of the fundamentals of AVFoundation, one of the core camera frameworks in iOS. David also shows how to refactor your code, understand Swift access control, develop an interface, and handle memory leaks, so your framework is ready to share with other developers.
- Creating your first build
- Making the camera work
- Creating a framework delegate
- Adding media
- Capturing images
- Correcting orientation
- Versioning and tagging releases in Git