A **neural network** is a deep learning system modeled after the human brain and the way biological organisms learn.

A **perceptron** is the simplest kind of artificial neuron, the building block of a neural network. It takes in a few binary inputs (yes-or-no inputs) and produces a single binary output. **Binary** values can be either 0 (meaning no) or 1 (meaning yes). This is a model of a perceptron:

It looks confusing, but you don’t have to understand all of the symbols. Just know that x1, x2, and x3 are the three binary inputs into the neural network, the big circle in the middle is the decider, and z is the yes-or-no output.

As an example, let’s use a perceptron to decide if I will or will not go out to get food. I will put in some considerations (x1, x2, x3) and make a yes or no decision. These will be my inputs, or factors that will affect my decision of whether to eat out or not:

- x1 = Food at home = 0
- x2 = Tired = 1
- x3 = Have cash = 1

If I have food at home, I will not go out to eat. If I have cash and/or I’m too tired to cook, I will go out to eat. But what if all three of these conditions are true? I have food at home, but I’m also tired and have cash. Then we have a conflicting yes and no. That’s why we need to add weights to our perceptron.

**Weights** are multipliers that show how important each input is to our final decision. I will add some weights to our inputs:

- x1 = Food at home = 1 x -3
- x2 = Tired = 1 x 3
- x3 = Have cash = 1 x 6

So if there were a day when I had food at home, I was tired, and I had cash: -3 + 3 + 6 = 6. I get 6, which is not a binary (0 or 1) output. Now that we’ve added weights, we have to figure out a way to change the output back into a yes or no.
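One standard way a perceptron turns its weighted sum into a yes-or-no answer is to compare it against a threshold. Here is a minimal sketch of that idea in Python; the threshold value of 0 is an assumption for this example, not something fixed by the article.

```python
def perceptron(inputs, weights, threshold=0):
    """Return 1 (yes) if the weighted sum of inputs exceeds the threshold, else 0 (no)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Food at home = 1, tired = 1, have cash = 1, with weights -3, 3, and 6:
inputs = [1, 1, 1]
weights = [-3, 3, 6]
print(perceptron(inputs, weights))  # weighted sum is -3 + 3 + 6 = 6, so the output is 1 (go out)
```

With only "food at home" true, the weighted sum is -3, which falls below the threshold, so the perceptron outputs 0 (stay home).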

(By the way the w1, w2, and w3 in the above diagram are the weights added to the inputs)

We can achieve this by using sigmoid neurons instead of perceptrons. **Sigmoid neurons** are basically the same as perceptrons, except their inputs can be any decimal between 0 and 1, and they squash the weighted sum so the output always comes out as a number between 0 and 1.

Let’s modify our inputs to reflect more ambiguous circumstances using sigmoid neurons. Let’s say I have food at home, but I don’t like the food. I’m only kind of tired. I barely have enough cash. I will give them values between 0 and 1 based on the particular situation.

- x1 = Food at home = 0.5
- x2 = Tired = 0.3
- x3 = Have cash = 0.9

Now we will add the same weights:

- x1 = Food at home = 0.5 x -3
- x2 = Tired = 0.3 x 3
- x3 = Have cash = 0.9 x 6

-1.5 + 0.9 + 5.4 = 4.8

Then, we would take the number 4.8 and turn it into a number between 0 and 1 using the sigmoid function.
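The sigmoid function squashes any real number into the range between 0 and 1; a small sketch of that calculation applied to our weighted sum:

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

# Weighted sum from the example above:
z = 0.5 * -3 + 0.3 * 3 + 0.9 * 6  # -1.5 + 0.9 + 5.4 = 4.8
print(round(sigmoid(z), 3))  # about 0.992, a strong "yes"
```

Large positive sums land close to 1, large negative sums land close to 0, and a sum of exactly 0 gives 0.5, a perfectly undecided answer.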

**An entire neural network is a multilayer perceptron: multiple layers of sigmoid neurons, where each layer’s outputs become the next layer’s inputs.**
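To make the multilayer idea concrete, here is a minimal sketch of a two-layer network of sigmoid neurons. The hidden-layer and output-layer weights are made-up numbers for illustration only; a real network would learn them from data.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_rows):
    """Each row of weights defines one sigmoid neuron in the layer."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row))) for row in weight_rows]

# Hidden layer: two sigmoid neurons, each reading all three inputs.
hidden_weights = [[-3, 3, 6], [2, -1, 1]]
# Output layer: one sigmoid neuron reading the two hidden outputs.
output_weights = [[4, -2]]

inputs = [0.5, 0.3, 0.9]          # food at home, tired, have cash
hidden = layer(inputs, hidden_weights)
output = layer(hidden, output_weights)
print(output)  # a single number between 0 and 1
```

Each layer feeds its outputs forward as the next layer’s inputs, which is why stacking layers of sigmoid neurons like this gives you an entire neural network rather than a single decision.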