What is TensorFlow?

TensorFlow is a software framework for building and deploying machine learning models.

It was originally developed by Google for internal use, but Google decided to make it public in late 2015. The first stable release, TensorFlow 1.0, came in 2017.

TensorFlow is open source, released under the Apache 2.0 license, which lets users freely use, modify, and redistribute the software.

TensorFlow is relatively low-level, so building models can take quite a few lines of code. Keras is a popular, high-level wrapper for TensorFlow that lets you build neural networks with far fewer lines of code.

Keras simplifies the process of coding common tasks in TensorFlow, making development much easier and quicker.

How to Install Python and PyCharm

Python is an extremely popular programming language that is commonly used in machine learning and data science. Its simple syntax and huge ecosystem of libraries make it one of the smartest languages you can learn.

PyCharm is a good IDE (Integrated Development Environment) for Python. An IDE is the software that allows you to write and test your code. There are different IDEs for different programming languages.

How to Install Python

  • Step 1: Go to https://www.python.org/downloads/
  • Step 2: Press the big yellow download button for the latest version for your OS (it should be Python 3 or higher)
  • Step 3: Click on the downloaded file to launch the installer
  • Step 4: Set it up however you want. I suggest choosing all of the default options by continuing to press “Continue,” but it depends on your individual situation
  • Step 5: If you’re on a Mac, check that the install was successful by opening up Terminal and typing this in:
python3 --version

How to Install PyCharm

  • Step 1: Go to https://www.jetbrains.com/pycharm/
  • Step 2: In the center of the screen, press the big black “Download Now” button. You should be taken to a new page that says “Download PyCharm” at the top
  • Step 3: Choose your OS (Windows, macOS, or Linux). Then choose the Professional (paid) or Community (free) version of PyCharm and press “Download”
  • Step 4: When the download completes, click on the file to open it
  • Step 5: On a Mac, simply drag the PyCharm application to the Applications folder, as you’re prompted to
  • Step 6: Go to your Applications folder and run PyCharm
  • Step 7: Set up PyCharm. Just press the “Skip Remaining and Set Defaults” button at the bottom-left to accept the default settings

You’re all set. Now you can press “Create New Project” to get started with a new Python project.

Creating a Simple Machine Learning iOS App

Description of the app

This app will take a picture of an object and tell you what that object is. For example, if you take a picture of a hot dog, it will tell you it’s a hot dog. If you take a picture of a Golden Retriever, it will tell you it’s a Golden Retriever.

What is Core ML?

Core ML allows you to integrate trained machine learning models into your iOS app. A trained model is the result of applying a machine learning algorithm to a set of training data. The Core ML models Apple provides are pre-trained, so you don’t have to train them yourself. They are also static, which means you can’t use data from your app to train them any further.

We will be using the Inception v3 model for our app. It has been trained to recognize roughly 1,000 categories of images, including animals, foods, trees, vehicles, and more.

This tutorial assumes you have a basic understanding of iOS app development.

Creating the app

Step One: Create a new Xcode Project

Xcode is the software used to create iOS apps. If you haven’t already, you’ll need to download and set up Xcode. There are plenty of tutorials on how to do this.

  1. Afterwards, create a new Single View App, name it, and save it wherever you like.

Step Two: Incorporate the Inception v3 Model into your Project

  1. Go to https://developer.apple.com/machine-learning/build-run-models/
  2. Scroll down to the Core ML models section and find “Inception v3”
  3. Press “Download Core ML Model.”
  4. Once the file downloads, drag the Inceptionv3.mlmodel file into your Xcode project file structure
  5. Make sure the check box “Copy items if needed” is checked and press Finish (it might take a while for Xcode to process the model, so be patient at first). Once it’s added, Xcode automatically generates a Swift class called Inceptionv3 from the model file; that’s the class we’ll use in our code later

Step Three: Set up your ViewController.swift

  1. Open up the ViewController.swift file in your Xcode project file structure.
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

     override func viewDidLoad() {
          super.viewDidLoad()

     }

}

The import CoreML statement lets you use Core ML in your code, and the import Vision statement makes it easier to process images.

Conforming to UIImagePickerControllerDelegate lets this view controller receive the photo the user takes with the camera, and UINavigationControllerDelegate is required by the image picker so it can manage moving back and forth between its screens.

Step Four: Set up your Main.storyboard

  1. Click the yellow button at the top-left of the View Controller box
  2. Click the Editor tab in the menu bar of your Mac
  3. Hover over “Embed In” and choose “Navigation Controller”
  4. A new gray box that says “Navigation Controller” should appear. Slide it over to the left to get it out of the way
  5. In the top-right corner, press the Library button. Type in “Bar Button Item” and drag a Bar Button Item into the top right of the View Controller
  6. Change the Bar Button Item’s icon to a camera icon
  7. Press the Library button again. Type “Image View” and drag an Image View into the View Controller. Resize it to fill the entire box
  8. Check the box that says “Use Safe Area Layout Guides”
  9. Connect the Bar Button Item to your ViewController.swift as an IBAction with a name like “cameraTapped”
  10. Connect the Image View to the file as an IBOutlet with a name like “imageView”

Step Five: Connect Storyboard items to variables in code

import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

     @IBOutlet weak var imageView: UIImageView!

     override func viewDidLoad() {
          super.viewDidLoad()
     }

     @IBAction func cameraTapped(_ sender: UIBarButtonItem) { 
     }

}

Step Six: Add an Image Picker to the View Controller

import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

     @IBOutlet weak var imageView: UIImageView!

     let imagePicker = UIImagePickerController()   // the controller that presents the camera UI

     override func viewDidLoad() {
          super.viewDidLoad()
          
           imagePicker.delegate = self          // send the chosen photo back to this view controller
           imagePicker.sourceType = .camera     // use the camera (requires a real device, not the simulator)
           imagePicker.allowsEditing = false    // hand back the photo exactly as it was taken
     }

     func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
          // grab the original (unedited) photo the user just took
          if let userPickedImage = info[.originalImage] as? UIImage {
               imageView.image = userPickedImage
          }
          imagePicker.dismiss(animated: true, completion: nil)
     }

     @IBAction func cameraTapped(_ sender: UIBarButtonItem) {
           present(imagePicker, animated: true, completion: nil)   // show the camera screen
     }

}

All this code gives your app access to the phone’s camera to take pictures. When you press the Bar Button Item we set up, the app presents a view that lets you take a picture.

Step Seven: Get permission to use the camera

  1. Go to the Info.plist file
  2. Click the Add button
  3. For the Key, select the “Privacy – Camera Usage Description”
  4. For the Value, write something like “We need to use your camera”

This Info.plist entry (the underlying key is NSCameraUsageDescription) is the message iOS shows the user when it asks for permission to use the camera; without it, the app will crash when it tries to access the camera.
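
You don’t have to request permission in code, because UIImagePickerController triggers the prompt automatically the first time. But if you ever want to check or request camera access yourself, here is a minimal sketch using AVFoundation (the handleCameraAccess function name is just for illustration):

import AVFoundation

// A minimal sketch of checking camera permission manually.
// (Not required for this tutorial -- UIImagePickerController asks for you.)
func handleCameraAccess(onGranted: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        onGranted()                             // the user already said yes
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { onGranted() }          // the user just said yes
        }
    case .denied, .restricted:
        print("Camera access is not available") // point the user to Settings
    @unknown default:
        break
    }
}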

Now the camera functionality in your app should be fully functional.

When you run the app on your phone and tap the camera button for the first time, iOS should show a permission prompt with the message you just wrote.

Step Eight: Convert the Image to a CIImage

Add the guard statement to your imagePickerController function.

     func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
          if let userPickedImage = info[.originalImage] as? UIImage {
               imageView.image = userPickedImage
               // convert the UIImage into a CIImage so Vision and Core ML can analyze it
               guard let ciimage = CIImage(image: userPickedImage) else {
                    fatalError("Could not convert to CIImage")
               }
          }
          imagePicker.dismiss(animated: true, completion: nil)
     }

CIImage is Core Image’s image type; converting to it lets the Vision and Core ML frameworks process the photo and get an interpretation from the model.

Step Nine: Create the Function that Interprets Images with the Inception v3 Model

Now we’re going to add the function that will actually look at the image, analyze it, and classify it as something.

Add this detect function right above your cameraTapped IBAction

func detect(image: CIImage) {
        // wrap the Inception v3 Core ML model so Vision can work with it
        guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
            fatalError("Loading CoreML Model failed")
        }
        
        // the request runs the model on an image and hands back its classifications
        let request = VNCoreMLRequest(model: model) { (request, error) in
            guard let results = request.results as? [VNClassificationObservation] else {
                fatalError("Model failed to process image")
            }
            
            // show the top classification in the navigation bar title
            if let firstResult = results.first {
                self.navigationItem.title = "\(firstResult.identifier)"
            }
        }
        
        // the handler performs Vision requests on our CIImage
        let handler = VNImageRequestHandler(ciImage: image)
        
        do {
            try handler.perform([request])
        }
        catch {
            print(error)
        }
        
    }

@IBAction func cameraTapped(_ sender: UIBarButtonItem) {
        present(imagePicker, animated: true, completion: nil)
    }

First, we created a constant called model, which wraps the Inception v3 model in a VNCoreMLModel so Vision can work with it in our code.

Then, we created the request, which runs the model on the input image and hands back its classifications.

The top result the request comes up with (results.first) is displayed on the user’s screen in the navigation title (self.navigationItem.title).

Then, we wrapped the CIImage that is passed into the detect function in a VNImageRequestHandler so the model can process it.

Last, we performed the request on the handler (i.e., on the image).
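
The model also reports how confident it is in each guess. If you want to show that too, here is a minimal sketch of what the inside of the completion handler could look like instead (identifier and confidence are real properties of VNClassificationObservation; the formatting is just one option):

// Inside the VNCoreMLRequest completion handler, instead of just the name,
// show the top result with its confidence (a Float between 0 and 1):
if let firstResult = results.first {
    let percent = Int(firstResult.confidence * 100)
    self.navigationItem.title = "\(firstResult.identifier) (\(percent)%)"
}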

Step Ten: Call the detect function

Last but not least, we have to call the detect function in our code so that it actually runs on the image.

Go back to your imagePickerController function.

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        
        if let userPickedImage = info[.originalImage] as? UIImage {
            imageView.image = userPickedImage
            
            guard let ciimage = CIImage(image: userPickedImage) else {
                fatalError("Could not convert to CIImage")
            }
            
            detect(image: ciimage)
            
        }
        
        imagePicker.dismiss(animated: true, completion: nil)
        
    }

The only line we added is detect(image: ciimage). It calls the detect function, passing the ciimage variable defined right above it as the image parameter. Whatever image you use for this parameter is the one the model will analyze and classify.

The Result

The app is now completely done and ready to run.

Remember: it’s not anywhere close to 100% accurate. It only knows about 1,000 objects, and it’s not a very sophisticated AI. But it’s a cool beginner project, and it’s fun to see it work.

Here’s an example: I took a photo of my dog Blue, and the app correctly identified him as a Silky Terrier.

Thank you so much for reading and I hope you enjoyed this project. Come back soon for more!

What is Deep Learning?

Deep learning (DL) is a method of Machine Learning that teaches machines with as little human effort as possible. Like humans, they learn from examples, by digesting large amounts of data.

Unlike other types of Machine Learning, which require you to outline the features to look for in the data, Deep Learning models work directly with the raw data, with no human intervention!

The downside is that Deep Learning models require more data and more computing power, and they take longer to train.

One of the benefits of DL models is that they can be more accurate. In addition, they are the best models when you don’t know which features to look for in your data.

Deep Learning is accomplished using neural networks. A neural network is a computing model loosely modeled on the structure of the brain: layers of simple units (“neurons”) that each combine their inputs and pass a result forward.
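
To make that concrete, here is a toy sketch (my own illustration, not part of the original post) of a single artificial “neuron” in Swift. A neural network is many of these wired together in layers:

import Foundation

// One artificial neuron: take a weighted sum of the inputs, add a bias,
// then squash the result into the range (0, 1) with the sigmoid function.
func sigmoid(_ z: Double) -> Double {
    return 1.0 / (1.0 + exp(-z))
}

func neuron(inputs: [Double], weights: [Double], bias: Double) -> Double {
    var weightedSum = bias
    for (input, weight) in zip(inputs, weights) {
        weightedSum += input * weight
    }
    return sigmoid(weightedSum)
}

// Example: two inputs flowing into one neuron
print(neuron(inputs: [0.5, 0.8], weights: [0.4, -0.6], bias: 0.1))

Training a network means adjusting all of those weights and biases until its outputs match the training examples.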

Deep Learning is used in driverless cars, detecting cancer cells, improving worker safety around heavy machinery, home assistants, and many other applications today.

What is Machine Learning?

Machine Learning (ML) provides Artificial Intelligence machines the ability to “learn.” This is achieved by using algorithms that discover patterns and generate insights from data.

Unlike traditional programming, you’re not giving the computer detailed instructions on what to do. Instead, you give the computer data and tools that let it study and solve the problem on its own. The computer remembers and adapts to new information each time it is used, which is what we call learning.

Use ML when you can’t tell the computer explicitly what to do in each situation.

Types of Machine Learning

  • Supervised – You show the machine labeled examples, so it learns the connection between inputs and correct outcomes. It’s like learning from a knowledgeable tutor (see the sketch after this list).
  • Unsupervised – The machine learns through lots of observation and trial and error. Unlike supervised learning, you’re not showing the machine the correct answer. The more data you give it, the more accurate the machine will become.
  • Semi-supervised – Mix between supervised and unsupervised.
  • Reinforcement – The machine gets a reward for performing a desired action.
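
To make “supervised” concrete, here is a toy Swift sketch (my own illustration): we give the machine inputs paired with correct answers, and it repeatedly adjusts its two parameters until its predictions line up with those answers.

import Foundation

// Supervised learning in miniature: learn y = w*x + b from labeled examples
// using gradient descent. The "supervision" is the list of correct answers.
let xs: [Double] = [1, 2, 3, 4, 5]
let ys: [Double] = [3, 5, 7, 9, 11]   // the correct answers (here, y = 2x + 1)

var w = 0.0   // weight, learned from the data
var b = 0.0   // bias, learned from the data
let learningRate = 0.01

for _ in 0..<10_000 {
    var gradW = 0.0
    var gradB = 0.0
    for (x, y) in zip(xs, ys) {
        let error = (w * x + b) - y   // how far off the current prediction is
        gradW += error * x
        gradB += error
    }
    // nudge the parameters in the direction that shrinks the error
    w -= learningRate * gradW / Double(xs.count)
    b -= learningRate * gradB / Double(xs.count)
}

print("learned w = \(w), b = \(b)")   // should be very close to w = 2, b = 1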

Why is Artificial Intelligence Important?

Artificial Intelligence (AI) lets us give machines intelligence, enabling them to perform tasks that used to be done by humans. With this comes an enormous number of benefits and drawbacks. I’ll explain some of the important ones here.

AI is important because, depending on the work, it helps us to:

  • make smarter decisions
  • make work more efficient
  • cut down the amount of human effort

Examples of Benefits

Technology

  • Allows for better user experiences on applications by making them smarter, more intuitive and more personalized
    • Online shopping websites
    • Netflix
    • Social Media
    • Siri, Alexa, Cortana
  • Helps programmers, because AI models teach themselves rather than having every behavior programmed by hand
  • Detect and deter security threats
  • Resolve user tech problems
  • Monitor social media comments to, for example, detect people’s thoughts on a brand/company

Healthcare

  • Robots that assist in surgeries
  • Provide personalized medicine
  • Help with diagnosis
  • Read X-rays and other medical scans

Business

  • Predict fraudulent transactions
  • Quick credit scoring
  • Financial trading
  • Automated call systems

What is Artificial Intelligence?

Artificial Intelligence (AI) is a branch of computer science that deals with creating “intelligent” machines that can perform human tasks.

AI machines “learn” through a process called Machine Learning (ML).

Machine Learning is a subfield of AI which provides machines the ability to learn by discovering patterns in huge volumes of data.

AI has become more popular in recent years because of the availability of much more data, improvements in computer storage/power, and more advanced algorithms.

Subfields of AI

  • Machine Learning – provides machines the ability to learn
  • Deep Learning – provides machines the ability to mimic the brain’s neural networks
  • Cognitive Computing
  • Computer Vision
  • Natural Language Processing (NLP)
  • Data Mining

Types of AI

  • Reactive Machines: AI machines designed to do a specific job. They do not form memories (Ex. automatic coffee maker)
  • Limited memory: AI that uses past experiences and current data to make decisions (Ex. self-driving cars)
  • Theory of mind: AI that can socialize and understand human emotions. They have not been built yet
  • Self-awareness: AI that is conscious. These systems are the future of AI and have yet to be built