Learning vector quantization

What is learning vector quantization?
Learning vector quantization (LVQ) is an artificial neural network algorithm based on neural computation; in a broader sense, it can be considered a form of computational intelligence. The algorithm uses a competitive, winner-takes-all learning approach and is related to other neural network algorithms such as the perceptron and backpropagation. LVQ allows the user to choose how many training instances (codebook vectors) the model should retain, and then learns exactly what those instances should look like. LVQ was invented by Teuvo Kohonen and is closely related to the k-nearest neighbor algorithm.

The basic information-processing goal of learning vector quantization is to prepare a set of codebook vectors in the domain of the observed data samples; these vectors are then used to classify unseen vectors. Initially, a pool of random vectors is assembled and exposed to the training samples. Using a winner-takes-all strategy, the codebook vector most similar to a given input pattern (or, in some variants, a set of the most similar vectors) is selected. The winner is then adjusted to move closer to the input vector, or sometimes further away from the runner-up. Repeating this process distributes the codebook vectors across the input space so that they approximate the distribution of the samples underlying the training data set. The algorithm is used for predictive modeling.
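The training loop described above can be sketched as follows. This is a minimal illustration of the basic LVQ1 variant, not a definitive implementation; the function names, the number of codebook vectors per class, and the linearly decaying learning rate are illustrative assumptions.

```python
import numpy as np

def train_lvq1(X, y, n_codebooks_per_class=2, lr=0.3, epochs=20, seed=0):
    """Minimal LVQ1 sketch: learn labeled codebook vectors from training data.

    Parameter names and defaults are illustrative, not canonical.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise codebook vectors from randomly chosen samples of each class.
    cb_vecs, cb_labels = [], []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c),
                         size=n_codebooks_per_class, replace=False)
        cb_vecs.append(X[idx].astype(float).copy())
        cb_labels += [c] * n_codebooks_per_class
    cb = np.vstack(cb_vecs)
    cb_y = np.array(cb_labels)

    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)  # decaying learning rate
        for xi, yi in zip(X, y):
            # Winner-takes-all: select the single closest codebook vector.
            d = np.linalg.norm(cb - xi, axis=1)
            w = np.argmin(d)
            if cb_y[w] == yi:
                # Matching label: pull the winner toward the input vector.
                cb[w] += alpha * (xi - cb[w])
            else:
                # Mismatched label: push the winner away from the input vector.
                cb[w] -= alpha * (xi - cb[w])
    return cb, cb_y

def lvq_predict(cb, cb_y, X):
    """Classify unseen vectors by the label of the nearest codebook vector."""
    d = np.linalg.norm(X[:, None, :] - cb[None, :, :], axis=2)
    return cb_y[np.argmin(d, axis=1)]
```

After training, prediction works exactly like 1-nearest-neighbor, except the neighbors are the small set of learned codebook vectors rather than the full training set, which is what connects LVQ to the k-nearest neighbor algorithm.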
