What defines a neural network as a Perceptron?

I think it's important to talk about artificial intelligence; I searched here on Stack Overflow and found nothing relevant about it. Could anyone with experience explain the definition of a Perceptron-type neural network? We know there are several types of neural networks, such as ART networks, Hopfield networks, and associative memories, among others. But the question here is: what differentiates a Perceptron network from the rest?

Author: Rogers Corrêa, 2017-07-17

2 answers

The perceptron is a single, solitary processing neuron with supervised learning. It receives impulses from various stimuli, applies the weights of its synapses, and then emits an output signal.

A perceptron network is a set of several perceptrons side by side, all receiving the same stimuli. Since one perceptron does not interfere with the result of another perceptron, each can be understood individually without loss to the whole.

Not to be confused with the multilayer perceptron (MLP, from the English acronym), in which the perceptrons are arranged in layers.

The perceptron neuron learns from its mistakes. Yes, literally. And it depends on the size of the error: the larger the error, the faster the perceptron tries to correct itself.

The output of a perceptron is a real function applied to a real number. The stimuli are transformed into a single real number through a dot product of the stimuli with the weights of the synapses. In summary, for X the stimuli, p the result of the perceptron's processing, S the weights of its synapses, and f the perceptron's real function:

p = f(y)
y = X . S
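Under the notation above, the computation can be sketched in Python; the step activation used for f here is one common choice, an assumption of this sketch rather than something fixed by the definition:

```python
import numpy as np

def perceptron_output(X, S, f=lambda y: 1 if y >= 0 else 0):
    """Compute p = f(y), where y = X . S is the dot product of the
    stimuli X with the synapse weights S."""
    y = np.dot(X, S)
    return f(y)

# Hypothetical stimuli and weights
X = np.array([1.0, 0.5, -1.0])
S = np.array([0.4, 0.6, 0.2])
print(perceptron_output(X, S))  # y = 0.4 + 0.3 - 0.2 = 0.5, f(0.5) prints 1
```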

I mentioned above that the perceptron has supervised learning; it is not self-sufficient like Kohonen networks. Supervised learning here means that for each training input T_i there is an expected result r_i. If p_i != r_i, there was a non-null error, called e_i.

Based on the error e_i obtained for the input T_i, the values of the synapse weights S are corrected in such a way that this error is eliminated or minimized in this learning step.
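A minimal sketch of that correction step, assuming the classic perceptron rule where the correction is proportional to the error; the learning rate value is a hypothetical choice of mine:

```python
import numpy as np

def update_weights(S, T_i, r_i, p_i, rate=0.1):
    """Correct the synapse weights S in proportion to the error e_i = r_i - p_i.
    The larger the error, the larger the correction. rate is a hypothetical choice."""
    e_i = r_i - p_i              # non-null when p_i != r_i
    return S + rate * e_i * T_i

# Example: expected r_i = 1, got p_i = 0 -> weights move toward the input
S = update_weights(np.zeros(3), np.array([1.0, 1.0, 0.0]), r_i=1, p_i=0)
print(S)  # prints [0.1 0.1 0. ]
```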

The creation of the training set and how its elements are presented can vary greatly depending on who implements it. Usually the training set is presented sequentially, for several successive passes, until a convergence criterion is reached. The convergence criterion can be the total cumulative error over the set. Another interesting point in training is that the learning rate is usually reduced between one pass and the next.
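The procedure described above (sequential passes, a cumulative-error convergence criterion, and a decaying learning rate) might look like the following sketch. The rate, decay factor, and bias-weight convention are assumptions of mine, not part of the answer:

```python
import numpy as np

def train(T, r, n_inputs, rate=0.5, decay=0.9, max_epochs=100):
    """Train a single perceptron: present the training set sequentially for
    successive passes until the total cumulative error reaches zero, and
    reduce the learning rate between one pass and the next."""
    S = np.zeros(n_inputs + 1)               # +1: bias weight, a common convention
    step = lambda y: 1 if y >= 0 else 0
    for epoch in range(max_epochs):
        total_error = 0
        for T_i, r_i in zip(T, r):
            x = np.append(T_i, 1.0)          # append constant bias input
            p_i = step(np.dot(x, S))
            e_i = r_i - p_i
            total_error += abs(e_i)
            S += rate * e_i * x              # correct weights on each error
        if total_error == 0:                 # convergence criterion
            break
        rate *= decay                        # reduce learning rate per pass
    return S

# Example: learn logical AND, which is linearly separable
T = [np.array([0., 0.]), np.array([0., 1.]), np.array([1., 0.]), np.array([1., 1.])]
r = [0, 0, 0, 1]
S = train(T, r, n_inputs=2)
```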

 4
Author: Jefferson Quesado, 2018-02-14 14:49:34

The Perceptron is the oldest of all neural networks. It is the simplest type of feedforward neural network, known as a linear classifier. This means that the types of problems solved by this neural network must be linearly separable.
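To illustrate linear separability: a single perceptron can learn AND, but it never classifies XOR perfectly, because no straight line separates XOR's two classes. The training code below is a hypothetical sketch with a fixed learning rate of my choosing:

```python
import numpy as np

def train_perceptron(samples, targets, epochs=50, rate=0.2):
    """Hypothetical single-perceptron trainer (step activation, bias weight)."""
    w = np.zeros(len(samples[0]) + 1)
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            xb = np.append(x, 1.0)                   # bias input
            p = 1 if np.dot(xb, w) >= 0 else 0
            w += rate * (t - p) * xb                 # error-driven correction
    return w

def accuracy(w, samples, targets):
    preds = [1 if np.dot(np.append(x, 1.0), w) >= 0 else 0 for x in samples]
    return sum(p == t for p, t in zip(preds, targets)) / len(targets)

X = [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
and_t = [0, 0, 0, 1]   # linearly separable
xor_t = [0, 1, 1, 0]   # not linearly separable

print(accuracy(train_perceptron(X, and_t), X, and_t))  # prints 1.0
print(accuracy(train_perceptron(X, xor_t), X, xor_t))  # always below 1.0
```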

 3
Author: Isabella Oliveira, 2017-07-17 18:00:59