
Hard Competitive Learning

  Hard competitive learning (a.k.a. winner-take-all learning) comprises methods in which each input signal determines the adaptation of only one unit, the winner. Different specific methods can be obtained by performing either batch or on-line update. Batch methods (e.g. LBG) evaluate all possible input signals (which must come from a finite set in this case) before any adaptations are done; this is iterated a number of times. On-line methods (e.g. k-means), on the other hand, perform an update directly after each input signal. Among the on-line methods, variants with a constant adaptation rate can be distinguished from variants with decreasing adaptation rates of various kinds.
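
To make the on-line case concrete, here is a minimal sketch in Python/NumPy (the function name, the exponential decay schedule and all parameter values are illustrative assumptions, not taken from the text). It draws signals from a finite data set, determines the winner by Euclidean distance, and adapts only the winner, with either a constant or a decreasing adaptation rate:

```python
import numpy as np

def online_hard_cl(data, centers, n_steps, eps_const=None,
                   eps_start=0.5, eps_end=0.01, seed=0):
    """On-line winner-take-all learning: each input signal adapts only
    the winner, i.e. the nearest reference vector (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    centers = np.array(centers, dtype=float)        # initial reference vectors
    for t in range(n_steps):
        xi = data[rng.integers(len(data))]          # draw one input signal
        d2 = np.sum((centers - xi) ** 2, axis=1)    # squared distances
        w = int(np.argmin(d2))                      # index of the winner
        if eps_const is not None:
            eps = eps_const                         # constant adaptation rate
        else:                                       # exponentially decreasing rate
            eps = eps_start * (eps_end / eps_start) ** (t / max(n_steps - 1, 1))
        centers[w] += eps * (xi - centers[w])       # move only the winner
    return centers
```

With eps_const set, the reference vectors keep tracking the input distribution indefinitely; with the decreasing schedule they converge, which corresponds to the two classes of on-line variants mentioned above.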

A general problem occurring with hard competitive learning is the possible existence of ``dead units''. These are units which - perhaps due to inappropriate initialization - are never the winner for any input signal and, therefore, keep their position indefinitely. Such units do not contribute to whatever the network's purpose is (e.g. error minimization) and must be considered harmful, since they are unused network resources. A common way to avoid dead units is to initialize the reference vectors with distinct sample vectors drawn according to $p(\xi)$.
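
A minimal sketch of this initialization (same illustrative assumptions as in the sketch above): since each reference vector is a distinct sample vector, every unit starts in a region where $p(\xi)$ is non-zero and therefore has a chance of winning.

```python
import numpy as np

def init_from_samples(data, n_units, seed=0):
    """Initialize the reference vectors with n_units distinct sample
    vectors, i.e. drawn according to p(xi) (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=n_units, replace=False)  # distinct samples
    return data[idx].astype(float)
```

Combined with the update sketch above: online_hard_cl(data, init_from_samples(data, 20), n_steps=10000).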

The following problem, however, remains: if the reference vectors are initialized randomly according to $p(\xi)$, then their expected initial local density is proportional to $p(\xi)$. This may be rather suboptimal for certain goals. For example, if the goal is error minimization and $p(\xi)$ is highly non-uniform, then it is better to undersample the regions of high probability density (i.e., use fewer reference vectors there than dictated by $p(\xi)$) and to oversample the other regions. One possibility for adapting the distribution of the reference vectors to a specific goal is the use of local statistical measures to direct insertions and possibly also deletions of units (see sections 5.4, 6.2 and 6.3), as sketched below.
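
The following sketch illustrates the idea only; it is not one of the exact procedures of sections 5.4, 6.2 and 6.3, and the insertion interval, jitter and error-splitting heuristic are assumptions. The squared error is accumulated at each winner as a local statistical measure, and a new unit is periodically inserted next to the unit with the largest accumulated error:

```python
import numpy as np

def grow_by_local_error(data, n_start=2, n_max=20, steps_per_insert=500,
                        eps=0.05, seed=0):
    """Error-driven insertion (illustrative sketch, not the procedures
    of sections 5.4/6.2/6.3): units are added where the local error
    statistic is largest."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), n_start, replace=False)].astype(float)
    errors = np.zeros(n_start)                        # local error counters
    while len(centers) < n_max:
        for _ in range(steps_per_insert):
            xi = data[rng.integers(len(data))]
            d2 = np.sum((centers - xi) ** 2, axis=1)
            w = int(np.argmin(d2))
            errors[w] += d2[w]                        # accumulate local error
            centers[w] += eps * (xi - centers[w])     # winner-take-all update
        q = int(np.argmax(errors))                    # worst-served unit
        new = centers[q] + 0.01 * rng.standard_normal(centers.shape[1])
        centers = np.vstack([centers, new])           # insert a unit near q
        errors[q] /= 2.0                              # split q's error counter
        errors = np.append(errors, errors[q])         # with the new unit
    return centers
```

Units in high-error regions attract insertions, so the resulting density of reference vectors can deviate from $p(\xi)$ in the direction the chosen goal requires.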

Another problem of hard competitive learning is that different random initializations may lead to very different results. The purely local adaptations may not be able to move the system out of the poor local minimum in which it was started. One way to cope with this problem is to change the ``winner-take-all'' approach of hard competitive learning into the ``winner-take-most'' approach of soft competitive learning. In this case, not only the winner but also some other units are adapted (see chapters 5 and 6). In general, this decreases the dependence on initialization.




