The app you are going to create leverages both the Vision framework and Core ML. You'll use Vision to detect rectangles in a live camera view and highlight them in the video feed. You'll also combine Vision with Core ML by adding a machine learning model that classifies the dominant object in an image; Vision makes working with that model straightforward.
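Both features start from the same place: a capture session that delivers live camera frames as pixel buffers. Below is a minimal sketch of that plumbing, assuming a simple back-camera setup; the class name and queue label are illustrative, not the course's exact code:

```swift
import AVFoundation
import UIKit

// Hypothetical view controller that feeds live camera frames to Vision.
final class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let videoQueue = DispatchQueue(label: "camera.frames")

    override func viewDidLoad() {
        super.viewDidLoad()

        // Configure the back camera as the session input.
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // Deliver raw frames to this view controller for Vision processing.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: videoQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        // Show the live preview.
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        view.layer.addSublayer(preview)

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Each frame's pixel buffer would be handed to Vision here
        // (see the detection and classification sketches below).
        _ = pixelBuffer
    }
}
```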
- [Instructor] Now that you've learned how to work with Core ML, it is time to put the pieces together and build a really cool demo application. What you're seeing at the moment is a video, and I'm going to play it in a second. This is the application that we're going to build. We're doing two things. First, we're going to use the Vision framework to detect rectangles in a live camera feed from the iPhone camera. We're then highlighting these rectangles, like you can see here. We're highlighting the stone plate lying on a table, which is identified as a rectangle.
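The rectangle detection runs each camera frame through a VNDetectRectanglesRequest. A minimal sketch of that step, assuming the frame arrives as a CVPixelBuffer, might look like the following; the confidence threshold and observation limit are assumed values for illustration, not the course's exact settings:

```swift
import Vision

// Detect rectangles in a single camera frame (a CVPixelBuffer).
func detectRectangles(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectRectanglesRequest { request, error in
        guard let rectangles = request.results as? [VNRectangleObservation] else { return }
        for rectangle in rectangles {
            // boundingBox is in normalized coordinates (0...1, origin at
            // bottom-left), so convert it before drawing over the preview layer.
            print("Found rectangle at \(rectangle.boundingBox), confidence \(rectangle.confidence)")
        }
    }
    // Allow more than one rectangle per frame (the default is 1).
    request.maximumObservations = 4
    request.minimumConfidence = 0.6 // assumed threshold; tune for your scene

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}
```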
Now, if I'm playing this recording of our app, you can see that we are also doing live image recognition, or live image classification, identifying a coffee mug, a flower pot or vase, a remote control, and a computer keyboard. And this is all done live using Core ML and the Vision framework together. This is going to be really cool, so let's get started.
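The live classification wraps a Core ML model in a VNCoreMLRequest and runs it against the same camera frames. The sketch below assumes a bundled image classifier such as MobileNet; the model name and label handling are illustrative, and any classification model you add to the project would slot in the same way:

```swift
import Vision
import CoreML

// Classify the dominant object in a camera frame using a Core ML model.
// MobileNet here is an assumption; substitute whichever classifier you added.
func classifyScene(in pixelBuffer: CVPixelBuffer) {
    guard let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, error in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        // Vision returns observations sorted by confidence, highest first.
        print("\(best.identifier): \(Int(best.confidence * 100))%")
    }
    // Center-crop the frame to the square input the model expects.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}
```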
- What are machine learning, Core ML, Vision, and NLP?
- Adding a machine learning model to a project
- Getting predictions from machine learning models
- Converting existing machine learning models for Core ML
- Classifying images and detecting objects with Vision and Core ML
- Analyzing natural language text with NSLinguisticTagger
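As a taste of the last topic in the list, NSLinguisticTagger ships with Foundation and can tag each word of a sentence with its lexical class. This is a minimal sketch; the sample sentence and the choice of the .lexicalClass scheme are illustrative:

```swift
import Foundation

// Tag each word in a sentence with its part of speech using NSLinguisticTagger.
let text = "Vision makes using machine learning models really easy."
let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]

tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    if let tag = tag, let swiftRange = Range(tokenRange, in: text) {
        // Prints lines such as "Vision: Noun" and "makes: Verb".
        print("\(text[swiftRange]): \(tag.rawValue)")
    }
}
```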