Machine learning models are key to using Core ML in your apps. Apple introduced the ML model (.mlmodel) format: a single document in a public format. Models can be added to your project via drag and drop, and each is task specific. For instance, a model might detect the dominant object present in an image from a set of categories, such as trees, animals, food, vehicles, and people.
- [Instructor] You have already learned that the key to using Core ML is the machine learning models that you can drag and drop into Xcode. If you're asking yourself, "Where do I get these ML model files?" then head over to developer.apple.com/machinelearning. There you will find great resources for machine learning, along with some models that have already been converted into the ML model format. For example, you'll find the MobileNet model, which lets you detect the dominant object present in an image, as well as other really useful machine learning models that you can use in your own projects.
Because these models are already converted into the ML model format, you can simply download them and drag and drop them into your application. We're going to do exactly that in a future project, but we're also going to take a model of our own and convert it into the ML model format.
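As a rough sketch of what using a downloaded model looks like in code: when you drag MobileNet.mlmodel into an Xcode project, Xcode generates a `MobileNet` Swift class for you. The snippet below assumes that generated class exists and that `pixelBuffer` holds an image already prepared as a `CVPixelBuffer`; it is a minimal illustration, not the project we'll build later.

```swift
import CoreML
import Vision

// A minimal sketch: classify an image with the MobileNet model.
// Assumes MobileNet.mlmodel has been added to the Xcode project,
// so Xcode has generated the `MobileNet` class automatically.
func classify(pixelBuffer: CVPixelBuffer) {
    // Wrap the Core ML model for use with the Vision framework.
    guard let model = try? VNCoreMLModel(for: MobileNet().model) else {
        print("Failed to load MobileNet")
        return
    }

    // The completion handler receives classifications ranked by confidence.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Detected: \(top.identifier) (confidence: \(top.confidence))")
    }

    // Run the request against the supplied image.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try? handler.perform([request])
}
```

Wrapping the model in `VNCoreMLModel` lets Vision handle image scaling and cropping for you, which is why this pairing is the common pattern for image classification with Core ML.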
- What are machine learning, Core ML, Vision, and NLP?
- Adding a machine learning model to a project
- Getting predictions from machine learning models
- Converting existing machine learning models for Core ML
- Classifying images and detecting objects with Vision and Core ML
- Analyzing natural language text with NSLinguisticTagger