The Vision framework and the natural language processing (NLP) APIs are both domain specific. With Vision, you can easily build computer vision machine learning features into your app. Supported features include face tracking, face detection, face landmark detection, text detection, and rectangle detection. The NLP APIs in the Foundation framework use machine learning to deeply understand text through features such as language identification, tokenization, lemmatization, part-of-speech tagging, and named entity recognition.
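The NLP features listed above are exposed through `NSLinguisticTagger` in Foundation. A minimal sketch of named entity recognition might look like this (the sample sentence is illustrative, not from the course):

```swift
import Foundation

// Sketch: configure a tagger for token types, lemmas, and name types.
let text = "Steve Jobs founded Apple in Cupertino."
let tagger = NSLinguisticTagger(tagSchemes: [.tokenType, .lemma, .nameType],
                                options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace,
                                           .omitPunctuation,
                                           .joinNames]

// Walk the words and report any personal, place, or organization names.
tagger.enumerateTags(in: range,
                     unit: .word,
                     scheme: .nameType,
                     options: options) { tag, tokenRange, _ in
    let entityTags: [NSLinguisticTag] = [.personalName, .placeName, .organizationName]
    if let tag = tag, entityTags.contains(tag) {
        let name = (text as NSString).substring(with: tokenRange)
        print("\(name): \(tag.rawValue)")
    }
}
```

Switching the scheme to `.lemma` or `.lexicalClass` in the same enumeration loop yields lemmatization or part-of-speech tags instead.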
- [Instructor] In addition to Core ML, Apple also released two other machine learning frameworks, Vision and NLP, that are more domain specific. You already know that the Vision framework has something to do with images, but now let's explore in more detail what the Vision framework can actually do. Apple says Vision gives you a high-level, on-device solution to computer vision problems through one simple API. So you do not have to be a computer vision expert. We can do things like face detection and face landmark detection, so that we can figure out where the mouth is, where the eyes are, and so on.
We can also do rectangle detection, barcode detection, and even object tracking. Again, you don't have to be a computer vision expert; you can just say, "I want to know where the faces are," because Vision handles the complexity for you. The same benefits that apply to Core ML are also great for Vision: it protects the user's privacy, it reduces data costs, it reduces server costs, and it's always available.
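As a concrete illustration of how Vision hides that complexity behind one simple API, here is a minimal face detection sketch. It assumes you have already loaded a `CGImage` (the function name is ours, not from the course):

```swift
import Vision

// Sketch: run Vision's built-in face detector on an already-loaded image.
func detectFaces(in cgImage: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox uses normalized coordinates (0...1, origin at bottom-left).
            print("Found a face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Swapping in `VNDetectFaceLandmarksRequest`, `VNDetectRectanglesRequest`, or `VNDetectBarcodesRequest` follows the same request/handler pattern, which is why no computer vision expertise is required.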
- What are machine learning, Core ML, Vision, and NLP?
- Adding a machine learning model to a project
- Getting predictions from machine learning models
- Converting existing machine learning models for Core ML
- Classifying images and detecting objects with Vision and Core ML
- Analyzing natural language text with NSLinguisticTagger