Self-Organising Maps



[Applet: a SOM exploring a 2D grid]

Overview
Self-organising maps (SOMs) are useful tools for data exploration: they learn unsupervised and attempt to fully cover the input domain. In the applet above, a SOM is exploring the 2D grid. SOMs are also widely used for easy visualisation of multi-dimensional data, projecting items onto a (generally) 2D plane with similar items ending up closer together. This article briefly describes the algorithm behind a SOM.

A SOM is composed of nodes (aka units, neurons, etc), each of which has a reference vector associated with it. For instance, if your input data were of dimension 5, then each node would have an associated reference vector of dimension 5. This reference vector is used to determine "where" the node is in the input space. Each node also has a set of neighbours to which it is connected; generally the nodes are connected in a rectangular or hexagonal grid. Onto the algorithm!
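As a concrete illustration, a node could be represented by something like the following rough Java sketch (the class and field names are mine, not taken from the applet's source):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // A minimal SOM node: a reference vector plus its grid neighbours.
    class Node {
        double[] ref;                              // reference vector, same dimension as the input data
        List<Node> neighbours = new ArrayList<>(); // grid neighbours (rectangular or hexagonal)

        Node(int dim, Random rnd) {
            ref = new double[dim];
            for (int i = 0; i < dim; i++)
                ref[i] = rnd.nextDouble();         // random initialisation (step 1 below)
        }
    }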

Initially, all the nodes' reference vectors are set to random values (if some prior information about the distribution of the data is known, non-random initial values can be used, which will decrease the time to convergence). Then the teaching phase begins: a random vector from the input space, say r, is chosen and presented to the map. Each node calculates the distance from its reference vector to this vector. The distance function can be any distance function: Euclidean ("normal" vector distance, d = sqrt( (x - r.x)^2 + (y - r.y)^2 + ... )), Manhattan ( d = | x - r.x | + | y - r.y | + ... ), etc. From these distances a "winning" node is chosen, namely the one with the smallest distance. This node's reference vector is then moved closer to the input vector; next, the reference vectors of the winning node's neighbours are moved closer to the input vector, but by a smaller amount. If desired, the reference vectors of the neighbours of those neighbours can also be moved closer to the input vector (by a still smaller amount), and so on for the neighbours' neighbours' neighbours, etc. The farther away (topologically speaking) a node is from the winning node, the less its reference vector should be moved towards the input vector.
If that doesn't make much sense, here's a step-by-step algorithm:
  1. Initialise the nodes' reference vectors to random values
  2. Choose an input vector
  3. Calculate the distance from this vector to each node's reference vector
  4. Pick the closest node and:
    1. Update the reference vector of the winning node
    2. Update the reference vectors of the neighbours of the winning node
    3. Update the reference vectors of the neighbours of the neighbours of the winning node
    4. etc...
  5. Goto step 2
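For example, steps 3 and 4 (finding the winning node) might look like this sketch, continuing the Node class above (sqDist and findWinner are my own names; it uses squared Euclidean distance, which picks the same winner as the true distance):

    import java.util.List;

    // Step 3: squared Euclidean distance between a reference vector and an
    // input vector (the square root is skipped; it doesn't change the winner).
    static double sqDist(double[] ref, double[] iv) {
        double sum = 0;
        for (int i = 0; i < ref.length; i++) {
            double diff = ref[i] - iv[i];
            sum += diff * diff;
        }
        return sum;
    }

    // Step 4: pick the node whose reference vector is closest to the input vector.
    static Node findWinner(List<Node> nodes, double[] iv) {
        Node winner = null;
        double best = Double.POSITIVE_INFINITY;
        for (Node n : nodes) {
            double dist = sqDist(n.ref, iv);
            if (dist < best) {
                best = dist;
                winner = n;
            }
        }
        return winner;
    }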
Of course, there are a few more details than that. When updating a node's reference vector to move it closer to the input vector, the update looks something like: n.ref = n.ref + alpha * ( iv - n.ref ), where n.ref is the node's reference vector, iv is the input vector, and alpha is a scalar such that 0 < alpha <= 1. Initially the value of alpha is close to 1, and as the number of training iterations increases, alpha gradually decreases. This allows the SOM to quickly learn the "rough layout" of the input space and then slow down and fine-tune itself.
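In code, that update is just a per-component move towards the input vector; here's a sketch (updateTowards is my own helper name):

    // Move a reference vector a fraction alpha of the way towards the input
    // vector: n.ref = n.ref + alpha * (iv - n.ref), applied component-wise.
    static void updateTowards(double[] ref, double[] iv, double alpha) {
        for (int i = 0; i < ref.length; i++)
            ref[i] += alpha * (iv[i] - ref[i]);
    }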

If the winning node is updated as n.ref = n.ref + alpha * ( iv - n.ref ), then a neighbour n1 of this node would be updated as n1.ref = n1.ref + beta * ( iv - n1.ref ), where beta < alpha. Almost any function that satisfies that inequality could be used; some possible ones are beta = d * alpha ^ h and beta = d * alpha / h, where h is the neighbour level (ie: the neighbours of the winning node have h = 2, the neighbours of the neighbours have h = 3, etc) and d is the neighbourhood scaling value. This scaling value should be in the range [0,1) and, like the alpha value, should gradually decrease as the number of training iterations increases. Again, this helps the SOM to quickly cover the input space and then settle down and fine-tune itself. The initial alpha and d values, as well as their rates of decay, are normally found experimentally.
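Putting the pieces together, one training step could look like the sketch below, which walks outwards from the winner level by level and uses beta = d * alpha / h for the neighbours (trainStep is my own name; it reuses Node and updateTowards from above):

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // One training step: move the winner by alpha, then each neighbour level
    // h = 2, 3, ... by beta = d * alpha / h (beta < alpha, since 0 <= d < 1).
    static void trainStep(Node winner, double[] iv, double alpha, double d, int maxLevel) {
        updateTowards(winner.ref, iv, alpha);
        Set<Node> visited = new HashSet<>();
        visited.add(winner);
        List<Node> level = new ArrayList<>(winner.neighbours);
        visited.addAll(level);
        for (int h = 2; h <= maxLevel && !level.isEmpty(); h++) {
            double beta = d * alpha / h;
            List<Node> next = new ArrayList<>();
            for (Node n : level) {
                updateTowards(n.ref, iv, beta);
                for (Node nb : n.neighbours)
                    if (visited.add(nb))   // add() returns true if nb was unseen
                        next.add(nb);
            }
            level = next;
        }
    }

After each input vector, alpha and d would then be decayed slightly (for example, multiplied by a factor just under 1) before the next training step.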

Applet-Specific Details
This applet uses a SOM to explore a 2-dimensional space (a grid). While running, it displays various information in the status bar: iteration, maxManDist, and alpha.
iteration - the current iteration (ie: the number of training vectors given to the SOM)
maxManDist - rather than updating all neighbours of the winning node, this applet only updates nodes within a certain Manhattan distance of the winner. Nodes with Manhattan distance < maxManDist are updated. The value of beta used for updating their reference vector is calculated as: beta = alpha / manDist (where manDist is the Manhattan distance of the node from the winning node).
alpha - the current alpha value, which is used to update the winning node's reference vector (and, indirectly, those of the neighbouring nodes)
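Under those rules, the applet's neighbourhood update could be sketched like this (grid, wx, and wy are my own illustrative names; see the applet's actual code for the real details):

    // Update every node within maxManDist (Manhattan grid distance) of the
    // winner at grid position (wx, wy): the winner itself moves by alpha,
    // the others by beta = alpha / manDist.
    static void updateNeighbourhood(Node[][] grid, int wx, int wy,
                                    double[] iv, double alpha, int maxManDist) {
        for (int x = 0; x < grid.length; x++) {
            for (int y = 0; y < grid[x].length; y++) {
                int manDist = Math.abs(x - wx) + Math.abs(y - wy);
                if (manDist == 0)
                    updateTowards(grid[x][y].ref, iv, alpha);
                else if (manDist < maxManDist)
                    updateTowards(grid[x][y].ref, iv, alpha / manDist);
            }
        }
    }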
One thing that can be misleading about the applet is that the nodes are displayed as moving. This is because the input domain is 2-dimensional, which allows each node's reference vector to be used directly as its drawing position. When visualising higher-dimensional data, it may be more intuitive to draw the nodes in fixed positions and just change the reference vectors. I'll put up another applet that demonstrates this soon.
This applet also comes up with poor coverings of the input space (such as this and this; a good covering looks like this). This is due to poorly chosen alpha and maxManDist settings (both the initial values and the decay rates). The settings were chosen so that the SOM would expand quickly, letting the user easily see its intent. As a by-product, the poor values also let the user see what poor mappings look like (it's not a bug, it's a feature!)
For more information look at the code.


Comments? Questions? Suggestions? Email me your thoughts!

Related Links
Here are some other SOM applets you can check out:
http://www.arch.usyd.edu.au/~rob/java/applets/neuro/SelfOrganisingMapDemo.html
http://www.aist.go.jp/NIBH/~b0616/Lab/BSOM1/
http://ai.bpa.arizona.edu/Visualization/demos3_intro.html
http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html

For more detailed information on SOMs see:
http://davis.wpi.edu/~matt/courses/soms/
http://www.cis.hut.fi/~sami/thesis/thesis_tohtml.html
http://www.ifs.tuwien.ac.at/~andi/somlib/som.html
http://www.mlab.uiah.fi/~timo/som/thesis-som.html
Copyright © 1999-2013 Morgan "Buddy" Betts