Learning vector quantization


Learning vector quantization (abbreviated LVQ) is a method from the field of artificial neural networks. It is used to classify nonlinear problems.

VQ = vector quantization: a method for unsupervised clustering

LVQ = learning VQ

Classification: mapping into K classes. Training set P

  P = {(x_i, y_i) in X × {1..K} | i = 1,…,m}

LVQ network: given by prototypes (w_1, Y_1), …, (w_k, Y_k) with Y_j in {1..K}; defines the mapping x → Y_i with |x − w_i| minimal, i.e. the winner according to WTA (Winner Take All).
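The winner-take-all mapping can be sketched as follows; the prototype positions and labels here are hypothetical 2-D examples, not values from the article:

```python
import numpy as np

# Hypothetical prototypes w_j with their class labels Y_j.
W = np.array([[0.0, 0.0],   # prototype representing class 1
              [5.0, 5.0]])  # prototype representing class 2
Y = np.array([1, 2])

def lvq_classify(x, W, Y):
    # Winner-take-all: return the label of the prototype
    # with the smallest Euclidean distance to x.
    return Y[np.argmin(np.linalg.norm(W - x, axis=1))]

print(lvq_classify(np.array([0.5, -0.2]), W, Y))  # → 1 (nearest prototype is w_1)
```

The norm is taken row-wise over `W - x`, so the winner is found in a single vectorized step rather than an explicit loop over prototypes.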

LVQ1: One or more neurons are generated for classes 1 to K, and their weight vectors are initialized randomly, with random patterns of the respective class, with the centroid of the respective class, or in some other meaningful way. Then, just as in vector quantization, the patterns are presented and a winner neuron with the smallest Euclidean distance to the input is determined. The learning rate η in [0,1] is either constant or decreasing over the course of the procedure in order to force convergence.

Algorithm:

  init w_j
  repeat
     choose (x_i, y_i)
     determine winner (w_k, Y_k)
     w_k = w_k + η(x_i − w_k)  if y_i = Y_k (i.e. y_i is the class represented by w_k)
     w_k = w_k − η(x_i − w_k)  otherwise
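A minimal sketch of this training loop in Python/NumPy; the toy data, the prototype initialization, and parameters such as `eta` and `epochs` are illustrative assumptions, not part of the original description:

```python
import numpy as np

def lvq1_train(X, y, W, Y, eta=0.1, epochs=30, seed=0):
    """LVQ1 sketch: the winning prototype is attracted to inputs of
    its own class and repelled from inputs of other classes."""
    W = W.astype(float).copy()
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            # Winner: prototype with the smallest Euclidean distance.
            k = np.argmin(np.linalg.norm(W - X[i], axis=1))
            if Y[k] == y[i]:
                W[k] += eta * (X[i] - W[k])   # same class: attract
            else:
                W[k] -= eta * (X[i] - W[k])   # different class: repel
    return W

# Toy data (assumed for illustration): two separated point clouds.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
y = np.array([1] * 20 + [2] * 20)
W0 = np.array([[1.0, 1.0], [4.0, 4.0]])   # rough initialization
Y = np.array([1, 2])
W = lvq1_train(X, y, W0, Y)
```

With a constant learning rate the prototypes keep oscillating around the class centroids; a decreasing η, as mentioned above, dampens this and forces convergence.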

The convergence of LVQ1 has not been proven; in practice, overlapping data causes problems.