In some situations the data set is so huge that batch methods
become impractical. In other cases the input data arrives as a
continuous stream of unlimited length, which makes it impossible
to apply batch methods at all. The remedy is *on-line update*, which
can be described as follows:

1. Initialize the set $A$ to contain $N$ units $c_i$,
   $A = \{c_1, c_2, \ldots, c_N\}$,
   with reference vectors $w_{c_i}$ chosen randomly according to $p(\xi)$.

2. Generate at random an input signal $\xi$ according to $p(\xi)$.

3. Determine the winner $s = s(\xi)$:
   $s(\xi) = \arg\min_{c \in A} \|\xi - w_c\|$.

4. Adapt the reference vector of the winner towards $\xi$:
   $\Delta w_s = \epsilon\,(\xi - w_s)$.

5. Unless the maximum number of steps is reached, continue with step 2.

Here, the *learning rate* $\epsilon$ determines the extent to which the
winner is adapted towards the input signal. Depending on whether $\epsilon$
stays constant or decays over time, several different methods are possible,
some of which are described in the following.
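The on-line update loop can be sketched as follows. This is a minimal illustration in Python with NumPy; the function name, the exponentially decaying learning-rate schedule, and the initialization by sampling the input signals themselves are assumptions of this sketch, not prescriptions from the text.

```python
import numpy as np

def online_vq(signals, n_units, eps_start=0.5, eps_end=0.01, seed=0):
    """Sketch of on-line update: for each input signal, only the winning
    reference vector is adapted towards the signal.  The decaying
    learning-rate schedule is one illustrative choice; a constant
    eps is equally possible, as the text notes."""
    rng = np.random.default_rng(seed)
    signals = np.asarray(signals, dtype=float)
    t_max = len(signals)
    # Step 1: initialize N reference vectors according to the input
    # distribution (approximated here by sampling the signals).
    w = signals[rng.choice(t_max, size=n_units, replace=False)].copy()
    for t, xi in enumerate(signals):               # step 2: next input signal
        # learning rate decaying from eps_start to eps_end over the run
        eps = eps_start * (eps_end / eps_start) ** (t / t_max)
        s = np.argmin(np.linalg.norm(w - xi, axis=1))  # step 3: winner
        w[s] += eps * (xi - w[s])                  # step 4: adapt winner only
    return w                                       # step 5: stop after t_max steps
```

Because only the winner moves at each step, the reference vectors gradually settle into regions of high input density; with a decaying $\epsilon$, the positions stabilize as the run progresses.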

Sat Apr 5 18:17:58 MET DST 1997