This method (Martinetz and Schulten, 1991; Martinetz, 1993) is usually not used on its own but in conjunction with other methods (see sections 5.3 and 5.4). It is, however, instructive to study competitive Hebbian learning in isolation. The method does not change the reference vectors at all (which can be interpreted as using a zero learning rate). It only generates a number of neighborhood edges between the units of the network. Martinetz (1993) proved that the graph generated in this way is optimally topology-preserving in a very general sense. In particular, each edge of this graph belongs to the Delaunay triangulation corresponding to the given set of reference vectors. The complete competitive Hebbian learning algorithm is the following:
1. Initialize the set A to contain N units c_i,

   A = {c_1, c_2, ..., c_N},

   with reference vectors w_{c_i} chosen randomly according to p(xi). Initialize the connection set C, C ⊂ A × A, to the empty set:

   C = ∅.

2. Generate at random an input signal xi according to p(xi).

3. Determine the nearest unit s_1 and the second-nearest unit s_2 (s_1, s_2 ∈ A):

   s_1 = arg min_{c ∈ A} ||xi − w_c||,
   s_2 = arg min_{c ∈ A \ {s_1}} ||xi − w_c||.

4. If a connection between s_1 and s_2 does not exist already, create it:

   C = C ∪ {(s_1, s_2)}.

5. Continue with step 2 unless the maximum number of input signals is reached.
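The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name, the use of sample signals to stand in for drawing from p(xi), and all parameter names are choices made here for the sketch.

```python
import numpy as np

def competitive_hebbian_learning(signals, n_units=20, seed=0):
    """Sketch of competitive Hebbian learning: the reference vectors
    stay fixed; only edges between the two nearest units are inserted."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize reference vectors by drawing from the input
    # signals (a stand-in for sampling from p(xi)); C starts empty.
    w = signals[rng.choice(len(signals), size=n_units, replace=False)].copy()
    edges = set()  # the connection set C, stored as undirected pairs
    for xi in signals:  # step 2: present each input signal
        # Step 3: find the nearest unit s1 and second-nearest unit s2.
        d = np.linalg.norm(w - xi, axis=1)
        idx = np.argsort(d)[:2]
        s1, s2 = int(idx[0]), int(idx[1])
        # Step 4: insert the edge (s1, s2) if it does not exist yet.
        edges.add(tuple(sorted((s1, s2))))
    return w, edges
```

Because the reference vectors are never adapted, `w` is returned unchanged; only the edge set grows, and each edge connects the two units closest to some input signal.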
Figure 5.3: Competitive Hebbian learning simulation sequence for a ring-shaped uniform probability distribution. a) Initial state. b-f) Intermediate states. g) Final state. h) Voronoi tessellation corresponding to the final state. Since the method never moves the reference vectors, the final positions are identical to the initial positions; the result is therefore highly sensitive to initialization.
Figure: Competitive Hebbian learning simulation results after 40000 input signals for three different probability distributions (described in the caption of figure 4.4).