From the course: Machine Learning for iOS Developers
The Vision framework and Core ML - iOS Tutorial
- [Instructor] Before we continue in code, I'd like to talk with you a little bit more about the Vision framework, which we are going to use together with our Core ML model. This is cool because Vision works really great together with Core ML, but it also does a lot of other things. For example, Vision can do face detection, face landmark detection, rectangle detection, barcode detection, and object tracking. Apple says that Vision gives you a high-level, on-device solution for computer vision problems through one simple API. So you do not have to be a computer vision expert. You can just say, "I want to know where the faces are," because Vision handles the complexity for you. And as I said, Vision works really great together with Core ML models, so we can make image analysis requests that use a Core ML model to process images. And Vision does all the heavy lifting for us, like resizing the images so that they fit our model. Now, how does this work? Well, using…
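To make the idea concrete, here is a minimal sketch of the pattern the instructor is describing: wrapping a Core ML model in a `VNCoreMLModel`, building a `VNCoreMLRequest`, and running it through a `VNImageRequestHandler`. The model class name `DogCatClassifier` is a hypothetical stand-in for whatever model the course project uses; everything else is standard Vision API.

```swift
import CoreML
import Vision
import UIKit

func classify(image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    do {
        // DogCatClassifier is a placeholder for your generated Core ML model class.
        let mlModel = try DogCatClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: mlModel)

        // The completion handler receives classification observations,
        // already sorted by confidence.
        let request = VNCoreMLRequest(model: visionModel) { request, error in
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            print("\(top.identifier): \(top.confidence)")
        }
        // Vision handles resizing/cropping the image to the model's input size.
        request.imageCropAndScaleOption = .centerCrop

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])
    } catch {
        print("Vision/Core ML error: \(error)")
    }
}
```

Note how the app never touches pixel buffers or input dimensions directly; that is the "heavy lifting" Vision takes care of.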