This course focuses on the Apple machine learning frameworks Core ML, Vision, and NLP. To save some time, you won't be held up by the AVFoundation code required to display live camera images on your iPhone screen. Still, here's a quick overview of the app's boilerplate code that you can use to follow along.
- [Instructor] Before we get started implementing all of the features that you've already seen, like the rectangle detection and the live object classification, I'd like to give you a quick overview of the Xcode project that I've already created for you. It is called Vision ML, and you will find it in your exercise files. If you open up the main storyboard, you will find three elements here. We have just a label stating Object Classification; we're not going to do anything with that. But we have the text view that is going to display the classification results, and we have a special UI view. I've subclassed UIView, actually, to create a special class.

It is called PreviewView, and it is going to let us draw the rectangles that we have detected directly on screen. If we open up the PreviewView, you can see that we have just a mask layer here, which is an array of CAShapeLayers, and we're using this PreviewView also to store our video preview layer. We also have a function here, which is called drawLayer, and this draws…
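To make the description above concrete, here is a minimal sketch of what such a PreviewView subclass could look like, assuming it backs itself with an AVCaptureVideoPreviewLayer and keeps an array of CAShapeLayers for the detected rectangles. The names and details are assumptions for illustration; the exercise files contain the course's actual version.

```swift
import UIKit
import AVFoundation

/// Hypothetical sketch of the PreviewView described in the transcript:
/// a UIView subclass that hosts the live camera feed and overlays
/// shape layers for detected rectangles.
class PreviewView: UIView {

    // Shape layers currently drawn over the video preview (the "mask layer"
    // array mentioned above).
    private var maskLayers = [CAShapeLayer]()

    // Back the view with the video preview layer itself, so the camera
    // feed automatically resizes with the view.
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    // Convenience accessor for wiring up an AVCaptureSession elsewhere.
    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        return layer as! AVCaptureVideoPreviewLayer
    }

    /// Draws one detected rectangle (already converted to view
    /// coordinates) as a bordered shape layer on top of the preview.
    func drawLayer(in rect: CGRect) {
        let maskLayer = CAShapeLayer()
        maskLayer.frame = rect
        maskLayer.borderColor = UIColor.red.cgColor
        maskLayer.borderWidth = 2
        maskLayers.append(maskLayer)
        layer.addSublayer(maskLayer)
    }

    /// Clears all previously drawn rectangles before the next frame.
    func removeMask() {
        for maskLayer in maskLayers {
            maskLayer.removeFromSuperlayer()
        }
        maskLayers.removeAll()
    }
}
```

Overriding `layerClass` is the standard UIKit way to make a view's backing layer an `AVCaptureVideoPreviewLayer`, which is why the view can both display the camera feed and act as the canvas for the detection rectangles.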
- What are machine learning, Core ML, Vision, and NLP?
- Adding a machine learning model to a project
- Getting predictions from machine learning models
- Converting existing machine learning models for Core ML
- Classifying images and detecting objects with Vision and Core ML
- Analyzing natural language text with NSLinguisticTagger