## Feedforward Neural Networks

*Edit: I came across this resource, and it does an excellent job of explaining machine learning: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/*

Machine learning is all the rage these days, so I decided to jump on the bandwagon.

I’ve started by learning about the most basic type of neural network – the feedforward network. There are probably some gaps in my knowledge, so feel free to chime in:

### What is a feedforward neural network?

A feedforward neural network was the first and simplest type of neural network. Information moves in only one direction – from the inputs, through any hidden layers, to the outputs – and never forms a cycle.

### How it works

An input is passed to a layer of “weights”: each neuron multiplies the input by its weights, sums the results, and applies an activation function. A common choice is a simple threshold – if the weighted sum is >0, the neuron passes its value along to the next layer, and so on until you reach a final output.

That’s a little confusing, so let’s use an example. Say you have one data point called X. That data point is passed to a layer of 3 neurons. Each neuron takes the input, computes its weighted sum and activation, and passes the result to the next layer until a final output is reached. That final output is the network’s prediction – whether the network is working well depends on how close the prediction is to the correct answer, not on the output’s sign.
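The forward pass described above can be sketched in a few lines of NumPy. This is a hypothetical 1-input → 3-neuron → 1-output network with made-up random weights and a simple “>0” threshold (ReLU) as the activation – just an illustration, not a trained model:

```python
import numpy as np

# Made-up weights for a tiny 1 -> 3 -> 1 network (purely illustrative)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 3))   # weights from the input to the 3 neurons
W2 = rng.normal(size=(3, 1))   # weights from the 3 neurons to the output

def activation(z):
    # Pass a value along only if it is > 0; otherwise emit 0
    return np.maximum(0, z)

def forward(x):
    hidden = activation(x @ W1)  # each neuron: weighted sum, then threshold
    return hidden @ W2           # final output

x = np.array([[0.5]])            # one data point, X
print(forward(x))                # a single number: the network's output
```

Each `@` is a matrix multiply, so every neuron in a layer is computed at once.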

### How does it get smarter?

You might be asking, how is this a neural network? It just seems like a bunch of equations.

These networks are trained using a learning algorithm, which adjusts the weights based on the final output. If the final output differs from the correct answer, the network nudges its weights and tries again, repeating until the error of its predictions is small.
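Here’s a minimal sketch of that adjust-and-repeat idea, assuming a single weight `w`, one made-up training example, and plain gradient descent on a squared error (all names and numbers here are illustrative):

```python
# One weight, one training example (x, target) - purely illustrative
w = 0.0
learning_rate = 0.1
x, target = 1.0, 2.0

for _ in range(100):
    prediction = w * x
    error = prediction - target
    w -= learning_rate * error * x   # step the weight opposite the error

print(round(w, 3))  # w converges toward 2.0
```

Each pass shrinks the error a little; after enough repetitions the weight settles at the value that makes the prediction match the target.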

### The most common learning technique

There are multiple learning techniques you can use, but the most common is back-propagation. Here’s how it works:

– The final output of your neural network is compared with a correct answer to calculate the value of an error-function

– The error is then fed back through the system and the algorithm adjusts the weights of each connection to reduce the value of the error function

– After repeating this process across a large number of training cycles, the network will typically converge to a state where the error of calculations is small
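The three steps above can be sketched as a training loop. This assumes a tiny 2 → 3 → 1 network with sigmoid activations, a squared-error function, and plain gradient descent, trained on the logical OR function as a toy dataset – a rough illustration, not a production implementation:

```python
import numpy as np

# Tiny 2 -> 3 -> 1 network with random starting weights (illustrative)
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: the logical OR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

lr = 1.0
for _ in range(2000):
    # 1. Forward pass: compute the network's output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # 2. Compare with the correct answers to get the error, then feed it back
    grad_out = (out - y) * out * (1 - out)        # error at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)      # error at the hidden layer
    # 3. Adjust the weights of each connection to reduce the error
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

# After many training cycles, the error of calculations is small
out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.mean((out - y) ** 2))
```

The loop body is exactly the three bullets: forward pass, error calculation, and weight adjustment, repeated over many training cycles.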

This type of learning has its shortcomings. If you only have a few samples to train your neural network against, your system might overfit – memorizing the limited data instead of learning patterns that generalize.