Hold on…
Before we move on to hidden layers, we should note that our code right now isn’t really fit for handling multiple neurons and layers.
import numpy as np
inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

weights = [[1.0, 0.5, 1.3],
           [2.0, 3.2, -0.8]]
bias = [1.0, 1.2]

output = np.dot(inputs, np.array(weights).T) + bias
print(output)

To organize things better, we should recreate these neurons and layers as a class, as that will make them much easier to handle. If you don’t understand how classes and objects work, take a look at the object-oriented programming paradigm and solving problems with objects.
import numpy as np
inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

class neural_layer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def new(self):
        return self
We keep our inputs since they are the values coming from our input layer (the initial matrix of data).
Although it’s nowhere near finished, this gives us the basic layout that we’ll build on.
How do we initialize a neural layer?
Now that we have our layer class, let’s expand on how we actually create a neural layer.
Firstly, we need to figure out the weights of the layer. Although these will be somewhat random for now, we still want to keep the values very close to 0 so that the outputs don’t change drastically and destabilize our model.
In case you’re following the code exactly, I’m using numpy’s seed method, which lets us create ‘predictable’ random number generation, like how Minecraft will generate the same world from the same seed. This means that whatever values I put in (provided you follow along) should produce the same output.
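As a quick standalone sketch of what seeding does (the seed value 0 below is only illustrative, not necessarily the one used for the rest of this section), seeding the generator makes the ‘random’ numbers identical on every run:
import numpy as np

np.random.seed(0)             # illustrative seed value; any fixed integer works
print(np.random.randn(2, 3))  # prints the exact same 2x3 matrix every time the script runs
With that in mind, here is where our class currently stands: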
import numpy as np

inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

class neural_layer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def new(self):
        return self
Now, we need to generate our random weights whenever we create a new neural_layer. We do this by turning new() into a static method.
import numpy as np

inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

class neural_layer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    @staticmethod
    def new(input_size, output_size) -> "neural_layer":
        # input_size = number of neurons in the previous layer that provide an input to the current layer
        # output_size = number of neurons outputted to the next layer
        weights = np.random.randn(input_size, output_size)
        biases = np.zeros((1, output_size))
        return neural_layer(weights, biases)

Note that we create our weights using (input_size, output_size), which becomes the shape of the weight matrix. The input_size is the size of the input given to the layer, and the output_size determines how big the resulting matrix is, which in turn becomes the input_size of the following layer. An example will be shown later.
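To make that shape rule concrete, here is a minimal sketch (the sizes 3 and 5 are just example values, not anything fixed by the class): a batch of 2 inputs with 3 values each, multiplied by a (3, 5) weight matrix, yields a (2, 5) output, so the next layer would need an input_size of 5.
import numpy as np

batch = np.random.randn(2, 3)      # 2 samples, 3 values each
weights = np.random.randn(3, 5)    # input_size = 3, output_size = 5
biases = np.zeros((1, 5))

output = np.dot(batch, weights) + biases
print(output.shape)                # (2, 5) -> the next layer's input_size must be 5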
If we print our weights, you might notice that we do get numbers above 1 (and below -1), which can change our outputs dramatically. To ensure that the network doesn’t freak out, we can multiply our weights by 0.1 (10%) to scale them down to smaller, more controlled values.
weights = 0.1 * np.random.randn(input_size, output_size)
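For a quick look at the effect (the exact numbers will differ on every unseeded run), the raw values from randn are often greater than 1 in magnitude, while the scaled copy is simply the same values divided by 10:
import numpy as np

raw = np.random.randn(3, 5)
scaled = 0.1 * raw

print(raw.min(), raw.max())        # often reaches beyond +/- 1
print(scaled.min(), scaled.max())  # the same values, shrunk by a factor of 10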
After that, we need a method that performs the dot product on the inputs and assigns the output. We call this method ``forward()``, as in **forward propagation**, where we feed input data through a network in a *forward direction*. Temporarily, we are storing the result in ``self.output`` rather than returning it.
import numpy as np

inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

class neural_layer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    @staticmethod
    def new(input_size, output_size) -> "neural_layer":
        # input_size = number of neurons in the previous layer that provide an input to the current layer
        # output_size = number of neurons outputted to the next layer
        weights = 0.1 * np.random.randn(input_size, output_size)
        biases = np.zeros((1, output_size))
        return neural_layer(weights, biases)

    def forward(self, inputs):
        self.output = np.dot(inputs, self.weights) + self.bias

Example - combining the layers
Now that we have our basic layer functionality setup, we can apply what we’ve learnt to a random network.
Firstly, we need to recognize that our original input contains 2 sets of data, each with 3 values (one per input neuron). Therefore, our first layer will have an input_size of 3. Our output_size can be any number we choose, so long as it makes sense; I chose 5.
Secondly, we can now run layer1.forward(inputs) with our initial input layer as the first set of inputs. This feeds the inputs into layer1’s dot product with its weights and then adds its biases.
import numpy as np

inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

class neural_layer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    @staticmethod
    def new(input_size, output_size) -> "neural_layer":
        # input_size = number of neurons in the previous layer that provide an input to the current layer
        # output_size = number of neurons outputted to the next layer
        weights = 0.1 * np.random.randn(input_size, output_size)
        biases = np.zeros((1, output_size))
        return neural_layer(weights, biases)

    def forward(self, inputs):
        self.output = np.dot(inputs, self.weights) + self.bias

layer1 = neural_layer.new(3, 5)
layer1.forward(inputs)

Now we can create another layer and run layer1’s outputs through layer2, ensuring that the input_size of layer 2 is the same as the output_size of layer 1.
import numpy as np

inputs = [[0.2, 0.5, 1.2],
          [1.2, 1.1, 0.6]]

class neural_layer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    @staticmethod
    def new(input_size, output_size) -> "neural_layer":
        # input_size = number of neurons in the previous layer that provide an input to the current layer
        # output_size = number of neurons outputted to the next layer
        weights = 0.1 * np.random.randn(input_size, output_size)
        biases = np.zeros((1, output_size))
        return neural_layer(weights, biases)

    def forward(self, inputs):
        self.output = np.dot(inputs, self.weights) + self.bias

layer1 = neural_layer.new(3, 5)
layer1.forward(inputs)

layer2 = neural_layer.new(5, 4)
layer2.forward(layer1.output)
print(layer2.output)
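As a sanity check (these shapes follow directly from the sizes chosen above), the final output has 2 rows, one per input sample, and 4 columns, one per neuron in layer 2:
print(layer1.output.shape)  # (2, 5): 2 samples through 5 neurons
print(layer2.output.shape)  # (2, 4): 2 samples through 4 neurons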