Last summer my family and I visited Russia. Even though none of us could read Russian, we had no trouble finding our way around, all thanks to Google's real-time translation of Russian signboards into English. This is one of the many applications of neural networks. Neural networks form the basis of deep learning, a subfield of machine learning where the algorithms are inspired by the structure of the human brain. A neural network takes in data, trains itself to recognise the patterns in this data, and then predicts the output for a new set of similar data.
Let's understand how this is done. Let's create a neural network that differentiates between a square, a circle, and a triangle. A neural network is made up of layers of neurons; these neurons are the core processing units of the network. First, we have the input layer, which receives the input, and the output layer, which predicts our final output. In between exist the hidden layers, which perform most of the computations required by our network. Inputs are fed into the neurons of the first layer. Neurons of one layer are connected to neurons of the next layer through channels, and each of these channels is assigned a numerical value known as a weight. The inputs are multiplied by the corresponding weights, and their sum is sent as input to the neurons in the hidden layer. Each of these neurons is associated with a numerical value called the bias, which is added to the input sum. This value is then passed through a threshold function called the activation function. The result of the activation function determines whether the particular neuron gets activated. An activated neuron transmits data to the neurons of the next layer over the channels; in this manner, the data is propagated through the network. This is called forward propagation. In the output layer, the neuron with the highest value fires and determines the output; the values are essentially probabilities. For example, here the neuron associated with the square has the highest probability, hence that's the output predicted by the neural network.
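To make the forward propagation step concrete, here is a minimal NumPy sketch. The layer sizes, weights, and the sigmoid activation are all made-up illustrations, not the exact network from the example; it just shows the multiply-by-weights, add-bias, apply-activation pattern described above.

```python
import numpy as np

def sigmoid(x):
    # Activation function: squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """One forward pass: multiply by weights, add the bias, apply the activation."""
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation

# Toy network: 4 input features (e.g. flattened pixel values),
# one hidden layer of 5 neurons, 3 output neurons (square, circle, triangle).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(5, 4)), rng.normal(size=(3, 5))]
biases = [rng.normal(size=5), rng.normal(size=3)]

x = np.array([0.9, 0.1, 0.4, 0.7])  # made-up input
scores = forward(x, weights, biases)
print("predicted shape:", ["square", "circle", "triangle"][int(np.argmax(scores))])
```

The output neuron with the highest score is taken as the prediction, exactly as in the shape example.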
Of course, just by looking at it, we know our neural network has made a wrong prediction. But how does the network figure this out? Note that our network is yet to be trained. During this training process, along with the input, our network also has the actual output fed to it. The predicted output is compared against the actual output to realise the error in prediction. The magnitude of the error indicates how wrong we are, and the sign suggests whether our predicted values are higher or lower than expected. The arrows here give an indication of the direction and magnitude of change required to reduce the error. This information is then transferred backward through our network; this is known as backpropagation. Now, based on this information, the weights are adjusted. This cycle of forward propagation and backpropagation is performed iteratively with multiple inputs, and the process continues until our weights are assigned such that the network can predict the shapes correctly in most cases. This brings our training process to an end. You might wonder how long this training process takes. Honestly, neural networks may take hours or even months to train, but time is a reasonable trade-off when compared to their scope. Let us look at some of the prime applications of neural networks.
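Below is a minimal sketch of that training cycle, continuing the toy 4-5-3 network from the earlier snippet. It assumes sigmoid activations and a mean-squared error; the training data and layer sizes are invented purely for illustration, not taken from any real shape dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Made-up training data: 4 input features per example, 3 one-hot labels
# (square, circle, triangle).
X = rng.random((30, 4))
Y = np.eye(3)[rng.integers(0, 3, size=30)]

# Tiny network: 4 -> 5 -> 3.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)
lr = 0.5  # learning rate: how big a step to take when adjusting the weights

for epoch in range(1000):
    # Forward propagation
    h = sigmoid(X @ W1 + b1)      # hidden layer activations
    out = sigmoid(h @ W2 + b2)    # output layer activations

    # Error: predicted output compared against the actual output.
    # Its sign tells us whether the prediction was too high or too low.
    err = out - Y

    # Backpropagation: push the error backward to get a gradient for each weight.
    d_out = err * out * (1 - out)          # derivative through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)     # derivative through the hidden sigmoid

    # Adjust the weights a small step in the direction that reduces the error.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print("mean squared training error:", float(np.mean((out - Y) ** 2)))
```

Each pass through the loop is one cycle of forward propagation followed by backpropagation, repeated until the error stops shrinking.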
Facial recognition: cameras on smartphones these days can estimate a person's age based on their facial features. This is neural networks at play, first differentiating the face from the background and then correlating the lines and spots on your face to a possible age.
Forecasting: neural networks are trained to understand the patterns and detect the possibility of rainfall or a rise in stock prices with high accuracy.
Music composition: neural networks can learn patterns in music and train themselves enough to compose a fresh tune. With deep learning and neural networks, we are still taking baby steps. The growth in this field has been foreseen by the big names; companies such as Google, Amazon, and Nvidia have invested in developing products such as libraries, predictive models, and intuitive GPUs that support the implementation of neural networks. The question dividing the visionaries is the reach of neural networks: to what extent can we replicate the human brain? We'd have to wait a few more years to give a definite answer.