After creating your first Vision requests, prepare a function that can process the observations and translate and transform the received coordinates so that you can draw a rectangle on your live camera image. Also prepare the camera input for Vision processing: since you are dealing with live image data from a device camera, you need to use a CVPixelBuffer and gather a little camera information for processing.
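A minimal sketch of how the camera feed could be handed to Vision, assuming a view controller that acts as the video data output delegate. The class name and the `rectangleDetectionRequest` property are assumptions for illustration; the delegate method signature and the `VNImageRequestHandler` call are standard AVFoundation and Vision API.

```swift
import AVFoundation
import UIKit
import Vision

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    // The rectangle request defined earlier, wired to the handleRectangles
    // completion handler discussed below (hypothetical property name).
    lazy var rectangleDetectionRequest = VNDetectRectanglesRequest { [weak self] request, error in
        self?.handleRectangles(request: request, error: error)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Vision works on CVPixelBuffers, so pull one out of the frame.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // The orientation should match the camera setup; .up is a placeholder.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        do {
            try handler.perform([rectangleDetectionRequest])
        } catch {
            print("Vision request failed: \(error)")
        }
    }
}
```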
- [Instructor] When we run our application on a real device right now, all we see is the live camera image. We haven't actually started the rectangle detection request yet; we've only defined it, and we also haven't drawn anything on screen. We're going to change that now. So far, we have created a request handler that holds a request parameter of the type VNRequest.
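Filling in that completion handler from the sketch above: the generic VNRequest carries the results, which are downcast to rectangle observations and forwarded to the drawing function introduced next. Hopping to the main queue is an assumption on my part, since the drawing code touches the view hierarchy.

```swift
import UIKit
import Vision

extension CameraViewController {
    func handleRectangles(request: VNRequest, error: Error?) {
        // Pull the rectangle observations out of the generic request.
        guard let observations = request.results as? [VNRectangleObservation] else {
            return // no rectangle results in this frame
        }
        // Drawing happens in UIKit, so dispatch to the main queue.
        DispatchQueue.main.async {
            self.drawVisionRequestResults(observations)
        }
    }
}
```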
Now we need to deal with the observations in order to draw rectangles on screen in our preview view. We're going to use another function for that, which we'll call later, so I'm going to make some more space below the handleRectangles function. I'll call it drawVisionRequestResults, and this function takes only one parameter, which is an array of VNRectangleObservation.
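A sketch of that drawing function. Vision reports the rectangle's corners in a normalized space with the origin at the lower left, so each point is scaled to the preview's size and flipped vertically. The `previewView` property and the `drawRectangle(corners:)` helper are assumptions (one possible version follows below), and the simple scale-and-flip assumes the preview layer fills the view exactly.

```swift
import UIKit
import Vision

extension CameraViewController {
    func drawVisionRequestResults(_ observations: [VNRectangleObservation]) {
        // Clear the rectangles drawn for the previous frame first.
        previewView.removeMask()

        let size = previewView.bounds.size
        for observation in observations {
            // Scale each normalized corner to view coordinates and flip
            // the y axis, since Vision's origin is at the bottom left.
            let corners = [observation.topLeft, observation.topRight,
                           observation.bottomRight, observation.bottomLeft]
                .map { CGPoint(x: $0.x * size.width,
                               y: (1 - $0.y) * size.height) }
            previewView.drawRectangle(corners: corners)
        }
    }
}
```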
Here we can finally deal with our observations. Before we draw anything, let's call one specific function of the preview view, removeMask, which just removes all of the previous masks.
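One way the preview view could implement the two helpers used above: every rectangle becomes a named CAShapeLayer, and removeMask strips all of them before the next frame is drawn. This is a sketch only; the course's actual preview view may be implemented differently.

```swift
import UIKit

class PreviewView: UIView {
    // Tag our rectangle layers so removeMask only touches what we added.
    private let maskLayerName = "rectangleMask"

    func removeMask() {
        layer.sublayers?
            .filter { $0.name == maskLayerName }
            .forEach { $0.removeFromSuperlayer() }
    }

    func drawRectangle(corners: [CGPoint]) {
        guard let first = corners.first else { return }

        // Connect the four corners into a closed path.
        let path = UIBezierPath()
        path.move(to: first)
        corners.dropFirst().forEach { path.addLine(to: $0) }
        path.close()

        // Draw the outline as a shape layer on top of the camera image.
        let shape = CAShapeLayer()
        shape.name = maskLayerName
        shape.path = path.cgPath
        shape.strokeColor = UIColor.red.cgColor
        shape.lineWidth = 2
        shape.fillColor = nil
        layer.addSublayer(shape)
    }
}
```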