Core ML lets you take advantage of a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType. Core ML delivers extremely fast performance and easy integration of machine learning models, enabling you to build apps with intelligent new features using just a few lines of code.
- [Instructor] You already know what machine learning is, but what is Core ML, and what does this framework do for us? Let's quickly explore that right now. Core ML is a machine learning framework created by Apple, and it opens up many new possibilities in your application, like real-time image recognition, face detection, emotion detection, sentiment analysis, and many more things that are not listed here. So you have huge new opportunities as a developer.
I'd like to give you two quick examples of where machine learning has already been implemented on your iPhone for some years now. For example, on the left we have the Photos app, which uses face recognition and image classification to group together the people found in your photo library. And on the right, we have the predictive keyboard that suggests what you could write next, depending on what you wrote before. These are just two examples of where Apple has already implemented machine learning in iOS, but now you are able to implement things like that, too, very easily.
We have three frameworks for that: Vision, NLP, and, at the base, Core ML. Your application can use all of these frameworks together. Vision is a framework for everything related to computer vision and images. NLP is about text processing, so you can do things like language identification, tokenization, and more. What is also interesting is that Core ML can deal with mixed input and output types, so you can input an image and get text output.
And the best part is that all of these frameworks can work together. So you could take text, pass it through NLP, and feed the output into Core ML to do something like sentiment analysis. And that is one example we are going to talk about right now: we take a sentence, process this natural language text using a machine learning model, and determine whether it is a positive or a negative sentence.
We can do things like handwriting recognition, or scene classification, where we input an image and get back a string describing the scene that the image depicts. This was possible before with third-party frameworks and third-party APIs, but mostly only on third-party servers. So Core ML is also extremely great because it gives you, for example, a great deal of user privacy: the data that your application processes belongs to the user, and it is processed only on the device.
It is not transferred to a third-party server, not even to an Apple server, because everything is processed on the device. That also means your users save data costs: they do not have to use their mobile data to transfer images to a third-party server. And you save server costs, because you do not have to pay a third-party provider a monthly fee or a per-request fee, which is also extremely cool. And your machine learning features are always available, because everything happens on the device.
So these are just four of the advantages that Core ML gives you. Now you might ask: how does that work? How can I use this powerful feature? At the center of Core ML are machine learning models, and they come in a new file type, the .mlmodel file type, which is a single document in a public format, so the machine learning community can use this format for free. There are no licensing costs or anything like that, and these models are ready to use.
So we drag and drop them into our project and can use them directly, and they are task-specific. We could have a model for sentiment analysis, a model for image recognition or image classification, and so on. These models are at the center of the Core ML framework. Now, how can we work with them? Imagine that we have the Xcode IDE open right now and we have dragged and dropped such an .mlmodel file into our project. What Xcode does is translate this model into a Swift class, so that we can access it using just three lines of code.
So we would create a scene model object using an initializer, SceneClassifier() in this case. And getting a prediction is just as simple: we call the prediction function on the sceneModel and pass in an image, and it returns the sceneType. It's really as simple as that, and we're going to see it in practice later.
- What are machine learning, Core ML, Vision, and NLP?
- Adding a machine learning model to a project
- Getting predictions from machine learning models
- Converting existing machine learning models for Core ML
- Classifying images and detecting objects with Vision and Core ML
- Analyzing natural language text with NSLinguisticTagger