Creating an Artificial Neural Network from scratch using C#

Jagath Karra
4 min read · Mar 16, 2020

Let’s build an Artificial Neural Network in C#! How hard can it be?

Artificial Neural Network

Developing models using C# is easy and fun, but real understanding can be achieved only by reading and implementing the algorithms on your own. So let’s build a shallow Neural Network from scratch, using only pure C#. The real challenge is to implement the core algorithm that is used to train (Deep) Neural Networks: backpropagation. Shall we start?

Setup

Let’s begin by preparing our environment and properly seeding the random number generator, using the code below.

Random Weights Declaration
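The original code appears here as a screenshot, so here is a minimal sketch of what the setup might look like. The class name, the seed value 42, and the [-1, 1) weight range are illustrative assumptions, not values from the original post:

```csharp
using System;

public static class Setup
{
    // Fixed seed so training runs are reproducible (the seed value is an
    // arbitrary choice, not one from the original post).
    public static readonly Random Rng = new Random(42);

    // Returns a rows x cols matrix of random weights in [-1, 1).
    public static double[,] RandomWeights(int rows, int cols)
    {
        var weights = new double[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                weights[r, c] = Rng.NextDouble() * 2.0 - 1.0;
        return weights;
    }
}
```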

Background

Input with XOR Gates

We will try to create an Artificial Neural Network (ANN) that can correctly predict values from the XOR function. Here is the truth table for the XOR gate:

Input and Output table of XOR Gate:

Input A   Input B   Output
   0         0        0
   0         1        1
   1         0        1
   1         1        0
Random inputs with Gate
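In code, the four training examples might be declared like this (the array names inputs and targets are illustrative, not the author’s):

```csharp
// Each row is one training example: the two binary inputs to the gate.
double[,] inputs =
{
    { 0, 0 },
    { 0, 1 },
    { 1, 0 },
    { 1, 1 }
};

// The expected XOR output for each row above.
double[] targets = { 0, 1, 1, 0 };
```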

Feedforward

This is the simplest form of ANN (artificial neural network); data travels only in one direction, from input to output. It is the kind of network we will build for XOR. Once trained it is fast to use, but training it takes a while. Almost all vision and speech recognition applications use some form of this type of neural network.

Feedforward Neural Network

Feedforward neural networks are artificial neural networks where the connections between units do not form a cycle. Feedforward neural networks were the first type of artificial neural network invented and are simpler than their counterpart, recurrent neural networks. They are called feedforward because information only travels forward in the network (no loops), first through the input nodes, then through the hidden nodes (if present), and finally through the output nodes.

A feedforward neural network works by summing all of the inputs multiplied by their weights, applying an activation function such as the sigmoid to each sum, and finally producing the outputs.
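As a rough sketch, a single-layer forward pass in C# could look like the following. It assumes a Sigmoid helper like the one defined in the next section:

```csharp
// Forward pass through one layer: weighted sum of the inputs,
// then the sigmoid activation.
public static double[] FeedForward(double[] input, double[,] weights)
{
    int neurons = weights.GetLength(1);
    var output = new double[neurons];
    for (int n = 0; n < neurons; n++)
    {
        double sum = 0.0;
        for (int i = 0; i < input.Length; i++)
            sum += input[i] * weights[i, n]; // sum of inputs times weights
        output[n] = Sigmoid(sum);            // apply the activation function
    }
    return output;
}
```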

https://preline01.com/#/deeplearn/feedforward

Activation Function (Sigmoid)

An activation function decides whether a neuron is activated or not; in effect, it converts the neuron’s weighted input into an output such as a yes/no probability.

The sigmoid function is used quite commonly in the realm of deep learning; at least it was until recently. It has a distinct S shape, and it is a differentiable real function for any real input value. Additionally, it has a positive derivative at each point. Most importantly, we will use it as the activation function for the hidden layer of our model. Here’s how it is defined:

σ(x) = 1 / (1 + e^(-x))

Sigmoid Curve

Its first derivative (which we will use during the backpropagation step of our training algorithm) has the following formula:

σ′(x) = σ(x) · (1 − σ(x))

Sigmoid Function Formula
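In C#, both the function and its derivative are one-liners. Note the common trick of expressing the derivative in terms of an already-activated output, which is how it is typically used during backpropagation:

```csharp
// Sigmoid: σ(x) = 1 / (1 + e^(-x))
public static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

// Derivative, expressed in terms of an already-activated output y = σ(x):
// σ'(x) = y * (1 - y)
public static double SigmoidDerivative(double y) => y * (1.0 - y);
```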

https://preline01.com/#/deeplearn/activationfunction

Backpropagation

Backpropagation is the backbone of almost everything we do when using Artificial Neural Networks. The algorithm consists of 3 subtasks:

· Make a forward pass

· Calculate the error

· Make a backward pass (backpropagation)

In the forward pass, we compute the network’s output. If it differs from the expected output, we calculate the error with a loss function and minimize it using the gradient descent formula: the error is sent backward through the network, the weights are updated, and the feedforward pass is run again. We repeat this process until the network produces the expected output. A sketch of one such update step is shown below.

Backpropagation Neural Network
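Here is a sketch of one such update step for the output layer, assuming a squared-error loss and the SigmoidDerivative helper from the previous section. This is an illustration of the general technique, not the author’s exact code; the hidden-layer update follows the same pattern with the error propagated back through the output weights:

```csharp
// One gradient-descent update of the output-layer weights.
// 'hidden' holds the hidden-layer activations, 'output' the network's
// prediction, and 'target' the expected XOR value for this example.
public static void UpdateOutputWeights(
    double[] hidden, double output, double target,
    double[] outputWeights, double learningRate)
{
    // Chain rule: d(error)/d(weight_i) = (output - target) * σ'(output) * hidden[i]
    double delta = (output - target) * SigmoidDerivative(output);

    for (int i = 0; i < outputWeights.Length; i++)
        outputWeights[i] -= learningRate * delta * hidden[i];
}
```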

Let’s build more intuition about what the algorithm is actually doing:

That error seems to be decreasing! Yay! And the implementation is not that scary, is it? We just multiply the matrix containing our training data by the weight matrix of the hidden layer. Then we apply the activation function (sigmoid) to the result and multiply that by the weight matrix of the output layer.
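Putting the earlier sketches together, a training loop that prints the decreasing error might look like this. The epoch count and learning rate are arbitrary choices, and for readability only the output-layer update is shown; a full XOR solver would update hiddenWeights the same way:

```csharp
double[,] hiddenWeights = Setup.RandomWeights(2, 2);
double[] outputWeights = { Setup.Rng.NextDouble(), Setup.Rng.NextDouble() };
const double learningRate = 0.5;

for (int epoch = 0; epoch < 10000; epoch++)
{
    double totalError = 0.0;
    for (int row = 0; row < targets.Length; row++)
    {
        double[] input = { inputs[row, 0], inputs[row, 1] };

        // Forward pass: input -> hidden -> output.
        double[] hidden = FeedForward(input, hiddenWeights);
        double sum = 0.0;
        for (int i = 0; i < hidden.Length; i++)
            sum += hidden[i] * outputWeights[i];
        double output = Sigmoid(sum);

        totalError += 0.5 * Math.Pow(output - targets[row], 2);

        // Backward pass for the output layer (the hidden-layer update,
        // omitted here for brevity, follows the same pattern).
        UpdateOutputWeights(hidden, output, targets[row],
                            outputWeights, learningRate);
    }

    if (epoch % 1000 == 0)
        Console.WriteLine($"Epoch {epoch}: error = {totalError:F4}");
}
```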

https://preline01.com/#/deeplearn/backpropogate

Example of ANN

Sample Artificial Neural Network

Conclusion

What a journey, right? We’ve learned a lot about the inner workings of Neural Network models. More importantly, we’ve implemented the backpropagation algorithm ourselves! Hopefully, you now have some practical understanding of the processes involved in training an Artificial Neural Network. Can you adapt the code and build a Deep Neural Network?

You can find the source code in my GitHub account.
