What are neural networks actually?

https://upload.wikimedia.org/wikipedia/commons/e/e4/Artificial_neural_network.svg wikimedia.org

Think of how you analyze something. You read a book, connect its content to your own experience and knowledge, and then form your own perspective on that book.

A neural network does essentially that: it takes in an input, relates it to patterns picked up from its training data, and produces a result. But something as abstract as the ability to learn is not easily quantified. This is why machine learning is tricky - how do we ‘train’ a model and make it learn?

What is in a neural network?

Input layer

The input layer refers to the stimuli given to the neural network. It should output a matrix of values, which is then passed on to the next layer for processing.
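
For instance, here’s a tiny sketch (using NumPy, with values I’ve made up) of what the input layer might hold and pass on:

```python
import numpy as np

# A hypothetical input: four measurements fed into the network.
# The input layer simply holds these values as an array of numbers.
inputs = np.array([0.5, -1.2, 3.0, 0.7])

print(inputs.shape)  # (4,) - one value per input neuron
```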

Hidden layers

Hidden layers are the layers in between the input layer and the output layer; each one takes the previous layer’s outputs, processes them, and passes new values forward.

Output layer

The output layer is the final layer of the network. It works just like the other layers, receiving inputs and computing a result, but instead of passing that result to another layer of neurons, it returns it as the network’s final output.
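
Putting the three kinds of layers together, here is a minimal sketch (using NumPy, with placeholder weights and layer sizes I’ve made up) of values flowing from an input layer through one hidden layer to an output layer. The calculation each layer performs - a dot product plus a bias - is covered later in the article, and this sketch only shows those weighted sums, not everything a real layer would do.

```python
import numpy as np

# A made-up network: 4 input values -> 3 hidden neurons -> 2 output neurons.
# The weights here are random placeholders just to show the shapes involved.
rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 3.0, 0.7])    # input layer: the raw values fed in
W_hidden = rng.normal(size=(3, 4))     # weights between input and hidden layer
b_hidden = np.zeros(3)                 # one bias per hidden neuron
W_output = rng.normal(size=(2, 3))     # weights between hidden and output layer
b_output = np.zeros(2)                 # one bias per output neuron

hidden = W_hidden @ x + b_hidden       # values held by the hidden layer
output = W_output @ hidden + b_output  # the output layer returns these as-is

print(output)                          # the network's final result: 2 numbers
```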

Neurons

Each node (circle) in the network above is called a neuron. It’s called that because it behaves similarly to an actual neuron in your nervous system.

Each neuron has these traits:

  • inputs - these are the outputs of the neurons before it (the presynaptic neurons), which influence how likely the current neuron is to ‘fire’.
    • weights - each input going into the neuron has a weight. The weight is basically how strongly that input affects the current neuron, which in turn influences the next layer. The stronger the weight, the greater the impact it has on the next neuron’s firing.
  • outputs - based on its inputs, the neuron calculates an output value to send to the next layer of neurons (or, in the output layer, the final values themselves). I’ll go into the calculation used later - there’s also a small sketch of it right after this list.
  • bias - the bias lets you manually manipulate a neuron’s impact on the next neuron by adding an artificial number to the neuron’s output.
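
To tie these traits together, here’s a minimal sketch of one neuron (assuming NumPy, with input, weight and bias values I’ve made up). The weighted sum it computes is the dot product previewed at the end of this section.

```python
import numpy as np

# One neuron, with made-up numbers.
inputs = np.array([1.0, 0.5, -2.0])    # values from the previous layer's neurons
weights = np.array([0.8, -0.4, 0.3])   # how strongly each input pushes this neuron
bias = 2.0                             # an artificial number added on top

# Multiply each input by its weight, add them all up, then add the bias.
output = np.dot(inputs, weights) + bias

print(output)  # 1.0*0.8 + 0.5*(-0.4) + (-2.0)*0.3 + 2.0 = 2.0
```

Notice that changing the bias shifts the output up or down without touching any of the inputs - that’s exactly the ‘artificial number’ described above.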

So, what do we need to do?

That’s essentially it: the structure of a neural network. The toughest part of building one is finding a set of weights and biases so that the network does what you intend it to do.

First, let’s see how weights and biases affect an output, and how that output is calculated in general, using something called the dot product.