Error(s)
- An internal error occurred. Editing functionality may be limited.
- Failed to find or create execution context for description


Solution
Restart your Mac.
Hopefully my notes can help you with your software development.
This app will take a picture of an object and tell you what that object is. For example, if you take a picture of a hot dog, it will tell you it’s a hot dog. If you take a picture of a Golden Retriever, it will tell you it’s a Golden Retriever.
Core ML allows you to integrate trained machine learning models into your iOS app. A trained model is the result of applying a machine learning algorithm to a set of training data. Core ML models come pre-trained, so you don’t have to train them yourself. The models are also static, which means you can’t use data from your app to train them any further.
We will be using the Inception v3 model for our app. It has been trained to recognize about 1,000 categories of images, including animals, foods, trees, vehicles, and more.
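To give you a feel for how that works in code: once you drag the Inceptionv3.mlmodel file into your Xcode project, Xcode generates a Swift class with the same name. A minimal sketch, assuming the model file has already been added to the project:

import CoreML

// Xcode generates the Inceptionv3 class from Inceptionv3.mlmodel.
// Its model property is the underlying MLModel that Vision will use later on.
let inception = Inceptionv3()
print(inception.model.modelDescription)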
This tutorial assumes you have a basic understanding of iOS app development.
Xcode is the software used to create iOS apps. If you haven’t already, you must download and set up Xcode. There are plenty of tutorials on how to do this.
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
The import CoreML statement lets you use Core ML in your code, and the import Vision statement gives you the Vision framework, which makes it easier to run image-analysis requests against the model.
Conforming to UIImagePickerControllerDelegate lets your view controller respond when the user takes a picture with the phone’s camera, and UINavigationControllerDelegate is needed as well because UIImagePickerController requires its delegate to conform to both protocols.
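As an aside, UIImagePickerControllerDelegate also lets you react when the user cancels instead of taking a picture. The tutorial doesn’t use this, but a minimal sketch of that optional delegate method looks like this:

// Optional: dismiss the picker if the user taps Cancel instead of taking a photo.
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    picker.dismiss(animated: true, completion: nil)
}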
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func cameraTapped(_ sender: UIBarButtonItem) {
    }
}
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!

    let imagePicker = UIImagePickerController()

    override func viewDidLoad() {
        super.viewDidLoad()

        imagePicker.delegate = self
        imagePicker.sourceType = .camera
        imagePicker.allowsEditing = false
    }

    // Called when the user finishes taking a picture.
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        if let userPickedImage = info[.originalImage] as? UIImage {
            imageView.image = userPickedImage
        }
        imagePicker.dismiss(animated: true, completion: nil)
    }

    @IBAction func cameraTapped(_ sender: UIBarButtonItem) {
        present(imagePicker, animated: true, completion: nil)
    }
}
All this code gives the app access to the phone’s camera so it can take pictures. When you tap the Bar Button Item that we set up, the app presents a view that lets you take a picture.
This step asks the user for permission to access the camera by adding the Privacy – Camera Usage Description key (NSCameraUsageDescription) to your Info.plist.
Now the camera functionality in your app should be fully working.
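One optional safeguard, not part of the tutorial’s code: the iOS Simulator has no camera, so if you want to test without a real device you could fall back to the photo library when the camera isn’t available. A minimal sketch of viewDidLoad with that check:

override func viewDidLoad() {
    super.viewDidLoad()

    imagePicker.delegate = self
    // Use the camera when it exists (a real device); otherwise fall back
    // to the photo library so the app still works in the Simulator.
    imagePicker.sourceType = UIImagePickerController.isSourceTypeAvailable(.camera) ? .camera : .photoLibrary
    imagePicker.allowsEditing = false
}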
Add the guard statement to your imagePickerController function.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
    if let userPickedImage = info[.originalImage] as? UIImage {
        imageView.image = userPickedImage

        // Convert the UIImage to a CIImage so Vision and Core ML can work with it.
        guard let ciimage = CIImage(image: userPickedImage) else {
            fatalError("Could not convert to CIImage")
        }
    }
    imagePicker.dismiss(animated: true, completion: nil)
}
A CIImage is a Core Image representation of an image, and it’s what lets us use the Vision and Core ML frameworks to get an interpretation from the model.
Now we’re going to add the function that will actually look at the image, analyze it, and classify it as something.
Add this detect function right above your cameraTapped IBAction:
func detect(image: CIImage) {
    // Wrap the Inception v3 model so Vision can use it.
    guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
        fatalError("Loading CoreML Model failed")
    }

    // Ask the model to classify the image and put the top result in the navigation title.
    let request = VNCoreMLRequest(model: model) { (request, error) in
        guard let results = request.results as? [VNClassificationObservation] else {
            fatalError("Model failed to process image")
        }
        if let firstResult = results.first {
            self.navigationItem.title = "\(firstResult.identifier)"
        }
    }

    let handler = VNImageRequestHandler(ciImage: image)

    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

@IBAction func cameraTapped(_ sender: UIBarButtonItem) {
    present(imagePicker, animated: true, completion: nil)
}
First, we created a constant called model, which is a reference to the Inception v3 model that we can access in our code. Then, we created the request, which uses the model to process the input image and return relevant results. Whatever result the request comes up with (results.first) will be displayed on the user’s screen in the navigation title (self.navigationItem.title). Next, we put the CIImage that is passed into the detect function into a VNImageRequestHandler so the model can work with it. Last, we performed the request on the handler (in other words, on the image).
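As a side note (not something the tutorial requires), Vision requests can take a moment on older devices, so one option is to run handler.perform off the main thread. A minimal sketch, assuming the same handler and request as above:

// Optional: run the Vision request on a background queue instead of
// calling handler.perform([request]) directly.
DispatchQueue.global(qos: .userInitiated).async {
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

If you do this, wrap the navigationItem.title update inside the request’s completion handler in DispatchQueue.main.async, since UI work belongs on the main thread.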
Last but not least, we have to call the detect function in our code so that it actually runs on the image.
Go back to your imagePickerController function.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
    if let userPickedImage = info[.originalImage] as? UIImage {
        imageView.image = userPickedImage

        guard let ciimage = CIImage(image: userPickedImage) else {
            fatalError("Could not convert to CIImage")
        }

        detect(image: ciimage)
    }
    imagePicker.dismiss(animated: true, completion: nil)
}
The only line that we added is detect(image: ciimage). It calls the detect function, passing the ciimage constant defined right above it as the image parameter. Whatever image you pass in here is the image that will be analyzed and classified by the model.
The app is now completely done and ready to run.
Remember: It is not even close to 100% accurate. It only knows about 1000 objects and it’s not a very sophisticated AI. But it’s a cool beginner project and it’s fun to see when it works.
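If you’re curious how sure the model is, each VNClassificationObservation also carries a confidence score between 0 and 1. As an optional tweak (not part of the tutorial’s code), you could show it next to the label inside the request’s completion handler in detect:

if let firstResult = results.first {
    // confidence is a Float between 0 and 1.
    let percent = Int(firstResult.confidence * 100)
    self.navigationItem.title = "\(firstResult.identifier) (\(percent)%)"
}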
Here’s an example: I took a photo of my dog Blue, and it correctly identified him as a Silky Terrier.
Here are some more examples of objects that it recognized.
Thank you so much for reading and I hope you enjoyed this project. Come back soon for more!