Perceptron Algorithm - Python for Integrated Circuits - An Online Book
Python for Integrated Circuits http://www.globalsino.com/ICs/
=================================================================================

The Perceptron algorithm is a supervised learning algorithm used for binary classification tasks. It is one of the earliest forms of artificial neural networks, developed by Frank Rosenblatt in 1957. The Perceptron is a simplified model of a biological neuron and serves as a building block for more complex neural network architectures. The key steps of the Perceptron algorithm are:
Weighted_sum = w1 * x1 + w2 * x2 + ... + wn * xn -------------------------------- [3870a]

where wi are the weights and xi are the input features.

The update rule for each weight wi is given by,

Δwi = Learning_rate * (Target - Output) * xi ------------------------------------ [3870c]

where Target is the true label and Output is the perceptron's prediction. The output itself is produced by a threshold (step) activation:

Output = 1 (if Weighted_sum >= Threshold), otherwise Output = 0 ----------------- [3870d]
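The weighted sum, step activation, and update rule above can be sketched in plain Python. This is a minimal illustration, not code from the book; the function names, the default threshold of 0, and the default learning rate are assumptions:

```python
# Minimal sketch of equations [3870a], [3870c], and [3870d].
# Function names, the default threshold of 0, and the default
# learning rate are illustrative assumptions.

def predict(weights, x, threshold=0.0):
    # Equation [3870a]: Weighted_sum = w1*x1 + w2*x2 + ... + wn*xn
    weighted_sum = sum(w * xi for w, xi in zip(weights, x))
    # Equation [3870d]: step activation against the threshold
    return 1 if weighted_sum >= threshold else 0

def update(weights, x, target, learning_rate=0.5):
    # Equation [3870c]: delta_wi = Learning_rate * (Target - Output) * xi
    output = predict(weights, x)
    return [w + learning_rate * (target - output) * xi
            for w, xi in zip(weights, x)]
```

Note that a misclassified example (Target != Output) shifts each weight in proportion to its input, while a correctly classified example leaves the weights unchanged.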
Note that the Perceptron algorithm does not use the sigmoid function as its activation function; it uses a step function (also known as a threshold function). A perceptron, as a single-layer neural network, is only capable of learning linearly separable patterns in data. It cannot effectively handle datasets that require non-linear decision boundaries for accurate classification. However, this limitation can be overcome by using multi-layer perceptrons (MLPs) or more complex neural network architectures, which allow the learning of non-linear relationships in data.
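The linear-separability point can be seen concretely by training a single perceptron on the AND truth table, which is linearly separable, so the training loop converges; the same loop can never fit XOR. This is an illustrative sketch, with a learned bias term and hyperparameters chosen as assumptions:

```python
# Sketch of the linear-separability limitation: a single perceptron with
# a learned bias converges on the AND truth table (linearly separable).
# The same loop can never fit XOR, which needs a non-linear boundary
# (e.g. an MLP). Names and hyperparameters are assumptions.

def train_perceptron(samples, epochs=20, lr=1):
    w = [0, 0]  # one weight per input
    b = 0       # bias, learned like an extra weight with constant input 1
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            w[0] += lr * (target - output) * x[0]
            w[1] += lr * (target - output) * x[1]
            b += lr * (target - output)
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predictions = [1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0 for x, _ in AND]
```

After training, `predictions` reproduces the AND targets [0, 0, 0, 1]; replacing the targets with XOR's ([0, 1, 1, 0]) leaves the loop cycling without ever classifying all four points correctly.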
=================================================================================