Xcode: Resolving “Failed to find or create execution context for description”

Error(s)

  • An internal error occurred. Editing functionality may be limited.
  • Failed to find or create execution context for description

Solution

Restart your Mac.

Creating a Simple Machine Learning iOS App

Description of the app

This app will take a picture of an object and tell you what that object is. For example, if you take a picture of a hot dog, it will tell you it’s a hot dog. If you take a picture of a Golden Retriever, it will tell you it’s a Golden Retriever.

What is Core ML?

Core ML lets you integrate trained machine learning models into your iOS app. A trained model is the result of applying a machine learning algorithm to a set of training data. The Core ML models Apple provides are already trained, so you don't have to train them yourself, and they are static, meaning you can't use data from your app to train them any further.

We will be using the Inception v3 model for our app. It has been trained to recognize roughly 1,000 categories of images, including animals, foods, trees, vehicles, and more.

This tutorial assumes you have a basic understanding of iOS app development.

Creating the app

Step One: Create a new Xcode Project

Xcode is the software used to create iOS apps. If you haven’t already, download and set up Xcode; there are plenty of tutorials on how to do this.

  1. Afterwards, create a new Single View App, name it, and save it wherever you like.

Step Two: Incorporate the Inception v3 Model into your Project

  1. Go to https://developer.apple.com/machine-learning/build-run-models/
  2. Scroll down to the Core ML Models section and find “Inception v3”
  3. Press “Download Core ML Model.”
  4. Once the file downloads, drag the Inceptionv3.mlmodel file into your Xcode project file structure
  5. Make sure the “Copy items if needed” check box is checked and press Finish. (It might take a moment for Xcode to process the model, so be patient.) Once it’s imported, you can sanity-check it from code, as shown below.
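
Once the import finishes, Xcode automatically generates a Swift class named after the file (here, Inceptionv3). If you want to confirm everything is wired up, here is a quick sanity check of my own (not part of the tutorial’s final code) that you can drop into viewDidLoad once Step Three is done; it simply loads the model and prints its inputs and outputs.

     // Optional sanity check, assuming the file was imported as Inceptionv3.mlmodel
     // so that Xcode generated a class named Inceptionv3
     let model = Inceptionv3().model
     print(model.modelDescription)   // lists the model's input and output descriptions

If the console prints a description that mentions an image input and a class label output, the model is ready to use. Remember to delete these lines afterward.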

Step Three: Set up your ViewController.swift

  1. Open up the ViewController.swift file in your Xcode project file structure.
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

     override func viewDidLoad() {
          super.viewDidLoad()

     }

}

The import CoreML statement lets you use Core ML in your code, and import Vision brings in the Vision framework, which makes it much easier to run a Core ML model on images.

Conforming to UIImagePickerControllerDelegate lets this view controller receive the photo the user takes with the camera, and UINavigationControllerDelegate is also required because the image picker’s delegate must conform to both protocols.
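
The picker’s delegate can also implement other optional methods. For example, a minimal cancel handler (an illustrative sketch of my own, not something this tutorial needs) would look like this:

     func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
          // Called when the user taps Cancel in the camera view; just close it
          picker.dismiss(animated: true, completion: nil)
     }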

Step Four: Set up your Main.storyboard

  1. Click the yellow button at the top left of the View Controller box
  2. Click the Editor menu in your Mac’s menu bar
  3. Hover over “Embed In” and choose “Navigation Controller”
  4. A new gray box that says “Navigation Controller” should appear. Slide it over to the left to get it out of the way.
  5. In the top-right corner, press the Library button. Type “Bar Button Item” and drag a Bar Button Item into the top right of the View Controller
  6. Change the Bar Button Item’s icon to a camera icon
  7. Press the Library button again. Type “Image View” and drag an Image View into the View Controller. Resize it to fill the entire View Controller.
  8. Click the check box that says “Use Safe Area Layout Guides.”
  9. Connect the Bar Button Item to your ViewController.swift as an IBAction with a name like “cameraTapped”
  10. Connect the Image View to the file as an IBOutlet with a name like “imageView”

Step Five: Connect Storyboard items to variables in code

import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

     @IBOutlet weak var imageView: UIImageView!

     override func viewDidLoad() {
          super.viewDidLoad()
     }

     @IBAction func cameraTapped(_ sender: UIBarButtonItem) { 
     }

}

Step Six: Add an Image Picker to the View Controller

import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

     @IBOutlet weak var imageView: UIImageView!

     let imagePicker = UIImagePickerController()

     override func viewDidLoad() {
          super.viewDidLoad()
          
          imagePicker.delegate = self
          imagePicker.sourceType = .camera
          imagePicker.allowsEditing = false
     }

     func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
          if let userPickedImage = info[.originalImage] as? UIImage {
               imageView.image = userPickedImage
          }
          imagePicker.dismiss(animated: true, completion: nil)
     }

     @IBAction func cameraTapped(_ sender: UIBarButtonItem) {
           present(imagePicker, animated: true, completion: nil)
     }

}

All this code gives the app access to the phone’s camera. In viewDidLoad we configure the image picker, and when you press the Bar Button Item we set up earlier, cameraTapped presents the camera view so you can take a picture. When a photo is taken, imagePickerController(_:didFinishPickingMediaWithInfo:) receives it, shows it in the image view, and dismisses the camera.
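
One caveat: the iOS Simulator has no camera, so presenting the picker with sourceType set to .camera will crash there. If you want to test in the Simulator, a common workaround (a sketch of my own, not one of the tutorial’s steps) is to fall back to the photo library when no camera is available:

     // Possible replacement for the sourceType line in viewDidLoad
     if UIImagePickerController.isSourceTypeAvailable(.camera) {
          imagePicker.sourceType = .camera
     } else {
          // The Simulator has no camera, so use the photo library while testing
          imagePicker.sourceType = .photoLibrary
     }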

Step Seven: Get permission to use the camera

  1. Go to the Info.plist file
  2. Click the Add button
  3. For the Key, select “Privacy – Camera Usage Description”
  4. For the Value, write something like “We need to use your camera”

This entry is the message iOS shows when it asks the user for permission to use the camera (the raw key name is NSCameraUsageDescription). Without it, the app will crash the first time it tries to open the camera.
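
You don’t actually need any code for this step, because UIImagePickerController triggers the permission prompt automatically the first time the camera is used. If you ever want to check or request the permission yourself, AVFoundation exposes it; the sketch below is an optional addition of my own, not something this tutorial requires.

import AVFoundation

func checkCameraPermission() {
     switch AVCaptureDevice.authorizationStatus(for: .video) {
     case .authorized:
          print("Camera access already granted")
     case .notDetermined:
          // Shows the same system prompt, using the description from Info.plist
          AVCaptureDevice.requestAccess(for: .video) { granted in
               print(granted ? "Camera access granted" : "Camera access denied")
          }
     default:
          print("Camera access denied or restricted")
     }
}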

Now the camera functionality in your app should be fully functional.

When you run the app on your phone and tap the camera button for the first time, iOS will show a permission prompt with the description you entered.

Step Eight: Convert the Image to a CIImage

Add the guard statement to your imagePickerController function.

     func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
          if let userPickedImage = info[.originalImage] as? UIImage {
               imageView.image = userPickedImage
               guard let ciimage = CIImage(image: userPickedImage) else {
                    fatalError("Could not convert to CIImage")
               }
          }
          imagePicker.dismiss(animated: true, completion: nil)
     }

CIImage is the Core Image representation of an image; it’s the type we’ll hand to the Vision and Core ML frameworks to get an interpretation from the model.
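
For reference, Vision can also read a CGImage directly; a CIImage is simply the most convenient bridge from the UIImage the picker gives us. An equivalent handler built from a CGImage (an alternative sketch of my own, not what this app will use) would look like:

     // Alternative to the CIImage route shown above
     if let cgImage = userPickedImage.cgImage {
          let handler = VNImageRequestHandler(cgImage: cgImage)
          // handler.perform(...) works exactly the same way as in Step Nine
     }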

Step Nine: Create the Function that Interprets Images with Inceptionv3 Model

Now we’re going to add the function that will actually look at the image, analyze it, and classify it as something.

Add this detect function right above your cameraTapped IBAction

func detect(image: CIImage) {
        guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
            fatalError("Loading CoreML Model failed")
        }
        
        let request = VNCoreMLRequest(model: model) { (request, error) in
            guard let results = request.results as? [VNClassificationObservation] else {
                fatalError("Model failed to process image")
            }
            
            if let firstResult = results.first {
                self.navigationItem.title = "\(firstResult.identifier)"
            }
        }
        
        let handler = VNImageRequestHandler(ciImage: image)
        
        do {
            try handler.perform([request])
        }
        catch {
            print(error)
        }
        
    }

@IBAction func cameraTapped(_ sender: UIBarButtonItem) {
        present(imagePicker, animated: true, completion: nil)
    }

First, we created a constant called model, which wraps the Inception v3 model in a VNCoreMLModel so that Vision can work with it.

Then, we created a VNCoreMLRequest. This request runs the model on an image and hands the results to its completion closure.

The top result (results.first) is displayed on the user’s screen as the navigation bar title (self.navigationItem.title).

Next, we wrapped the CIImage passed into the detect function in a VNImageRequestHandler, the object responsible for running Vision requests on that image.

Last, we asked the handler to perform the request, which actually runs the model on the image.
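
If you want a bit more feedback than a single label, each VNClassificationObservation also carries a confidence value. A small optional extension of the request’s completion handler (my own sketch, beyond what the tutorial builds) could print the top three guesses:

        let request = VNCoreMLRequest(model: model) { (request, error) in
            guard let results = request.results as? [VNClassificationObservation] else {
                fatalError("Model failed to process image")
            }

            // Vision returns classifications sorted by confidence, highest first
            for observation in results.prefix(3) {
                print("\(observation.identifier): \(Int(observation.confidence * 100))%")
            }

            if let firstResult = results.first {
                self.navigationItem.title = firstResult.identifier
            }
        }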

Step Ten: Call the detect function

Last but not least, we have to call the detect function in our code so that it actually runs on the image.

Go back to your imagePickerController function.

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        
        if let userPickedImage = info[.originalImage] as? UIImage {
            imageView.image = userPickedImage
            
            guard let ciimage = CIImage(image: userPickedImage) else {
                fatalError("Could not convert to CIImage")
            }
            
            detect(image: ciimage)
            
        }
        
        imagePicker.dismiss(animated: true, completion: nil)
        
    }

The only new line is detect(image: ciimage). It calls the detect function, passing the ciimage we just created as the image parameter; whatever image you pass here is the one the model analyzes and classifies.

The Result

The app is now completely done and ready to run.

Remember: It is not even close to 100% accurate. It only knows about 1000 objects and it’s not a very sophisticated AI. But it’s a cool beginner project and it’s fun to see when it works.

Here’s an example. I took a photo of my dog Blue, and the app correctly identified him as a Silky Terrier.


Thank you so much for reading and I hope you enjoyed this project. Come back soon for more!