This course provides an introduction to the Core ML framework and the advantages of using machine learning models, computer vision, and natural language processing in modern apps.
- [Brian] With Core ML you have a new framework that delivers blazingly fast performance to build apps with intelligent new features. Apple uses Core ML in many of its own products, like Siri, the Camera app, and QuickType, and now you can do things like image classification, music tagging, emotion detection, handwriting recognition, and a lot more using just a few lines of code. I'm Brian Advent, and I've been working as a developer, specializing in the Apple ecosystem, for almost a decade now. In this course I'm giving you a solid theoretical introduction to machine learning and the Core ML framework, and then we take a very practical approach.
We are going to build three apps together that leverage the power of machine learning. First, we will learn more about machine learning models and predict the gender of a given first name. Second, we're going to use computer vision to classify objects and detect rectangles in real time using the iPhone camera. Our last project is going to focus on natural language processing, making search operations much more powerful. I'm very excited to introduce you to this new world of machine learning and Core ML development, so let's get started!
- What are machine learning, Core ML, Vision, and NLP?
- Adding a machine learning model to a project
- Getting predictions from machine learning models
- Converting existing machine learning models for Core ML
- Classifying images and detecting objects with Vision and Core ML
- Analyzing natural language text with NSLinguisticTagger
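As a small taste of the last topic above, here is a minimal sketch of tagging named entities in text with `NSLinguisticTagger`. This is not code from the course itself, and the sample sentence is invented for illustration; it assumes iOS 11/macOS 10.13 or later, where the unit-based tagging API is available.

```swift
import Foundation

// Sample text to analyze (hypothetical example, not from the course).
let text = "Tim Cook introduced Core ML at WWDC in San Jose."

// Create a tagger that uses the name-type scheme to find people,
// places, and organizations.
let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
tagger.string = text

// Skip whitespace and punctuation, and join multi-word names
// (e.g. "Tim Cook") into a single token.
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]
let range = NSRange(location: 0, length: text.utf16.count)

// Enumerate word-level tags and print any named entities found.
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    guard let tag = tag,
          [.personalName, .placeName, .organizationName].contains(tag) else { return }
    let name = (text as NSString).substring(with: tokenRange)
    print("\(name): \(tag.rawValue)")
}
```

The same enumeration pattern works with other schemes, such as `.lexicalClass` for parts of speech or `.lemma` for word stems, which is what makes tagger-based search more powerful than plain string matching.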