


Self-Organising Maps

A SOM exploring a 2D grid

Overview

Self-organising maps (SOMs) are useful tools in data exploration, as they work unsupervised and attempt to fully explore the input domain. In the applet above, a SOM is exploring the 2D grid. They are also widely used for easy visualization of multidimensional data, projecting items onto a (generally) 2D plane, with similar items ending up closer together. This article will briefly describe the algorithm behind a SOM.

A SOM is composed of nodes (aka units, neurons, etc.), each with an associated reference vector. For instance, if your input data were of dimension 5, then each node would have an associated reference vector of dimension 5. This reference vector is used to determine "where" the node is in the input space. Each node also has a set of neighbours to which it is connected; generally the nodes are connected in a rectangular or hexagonal grid.

Onto the algorithm! Initially all the reference vectors for the nodes are set to random values (if some prior information about the distribution of the data is known, non-random initial values can be used, which will decrease the time to convergence). Then the teaching phase begins: a random vector from the input space, say r, is chosen and presented to the map. Each node calculates its distance from this vector. The distance function can be any distance function: Euclidean ("normal" vector distance, d = sqrt( (x - r.x)^2 + (y - r.y)^2 + ... )), Manhattan ( d = |x - r.x| + |y - r.y| + ... ), etc. From these distances, a "winning" node is chosen, namely the one with the smallest distance. This node's reference vector is then modified to move closer to the input vector; next, all the reference vectors of the neighbours of this winning node are moved closer to the input vector (but by a smaller amount than the winning node's).
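The distance calculation and winner selection can be sketched in a few lines of Python; a minimal illustration (the function names are my own, not taken from the applet's code):

```python
import math

def euclidean(a, b):
    # "Normal" vector distance: sqrt((a1 - b1)^2 + (a2 - b2)^2 + ...)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences: |a1 - b1| + |a2 - b2| + ...
    return sum(abs(x - y) for x, y in zip(a, b))

def winning_node(refs, r, dist=euclidean):
    # refs: list of reference vectors; r: the input vector.
    # The winner is the node whose reference vector is closest to r.
    return min(range(len(refs)), key=lambda i: dist(refs[i], r))
```

Any distance function with the same shape would slot in the same way, which is why the choice of metric is largely independent of the rest of the algorithm.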
If desired, the reference vectors of the neighbours of the neighbours of the winning node can also be moved closer to the input vector (but by a smaller amount than the neighbours of the winning node), and so can the neighbours of the neighbours of the neighbours of the winning node, etc. The farther away (topologically speaking) a node is from the winning node, the less its reference vector should be moved towards the input vector. If that doesn't make much sense, here's a step-by-step algorithm:

1. Initialise the reference vector of every node to random values.
2. Choose a random vector r from the input space and present it to the map.
3. Find the winning node: the node whose reference vector is closest to r.
4. Move the winning node's reference vector closer to r.
5. Move the reference vectors of nearby nodes closer to r, by an amount that decreases with topological distance from the winning node.
6. Repeat from step 2.
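A single training step can be sketched in Python, assuming the nodes are connected in a rectangular grid and that neighbour influence shrinks with grid distance from the winner; the names and the exact weighting (beta = d * alpha ^ h, one of the schemes discussed in this article) are illustrative assumptions, not the applet's code:

```python
import math

def train_step(refs, r, alpha, d, max_level):
    # refs: dict mapping (col, row) grid position -> reference vector.
    # r: an input vector chosen at random from the input space.
    # alpha: learning rate for the winner; d: neighbourhood scaling in [0, 1).
    # max_level: how many rings of neighbours around the winner to update.

    def dist(a, b):
        # Euclidean distance between a reference vector and the input.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Find the winning node: the one closest to the input vector.
    winner = min(refs, key=lambda pos: dist(refs[pos], r))

    # Move the winner by alpha, and each neighbour by a smaller amount
    # beta = d * alpha ** h, where h grows with the topological (grid)
    # distance from the winner, so farther nodes move less.
    for pos, ref in refs.items():
        grid_dist = abs(pos[0] - winner[0]) + abs(pos[1] - winner[1])
        if grid_dist > max_level:
            continue  # too far away topologically; leave unchanged
        h = grid_dist + 1  # winner has h = 1, direct neighbours h = 2, ...
        step = alpha if pos == winner else d * alpha ** h
        refs[pos] = [x + step * (rx - x) for x, rx in zip(ref, r)]
    return winner
```

In a full run, this step would be repeated many times while alpha, d, and max_level are gradually decreased, letting the map first spread over the input space and then fine-tune itself.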
If the winning node is updated like n.ref = n.ref + alpha * ( iv - n.ref ), then the neighbours of this node would be updated like n1.ref = n1.ref + beta * ( iv - n1.ref ), where beta < alpha. Almost any function that satisfies that inequality could be used; some possible ones are beta = d * alpha ^ h and beta = d * alpha / h, where h is the neighbour level (ie: the neighbours of the winning node have h = 2, the neighbours of the neighbours have h = 3, etc.) and d is the neighbourhood scaling value. This scaling value should be in the range [0,1) and, like the alpha value, should gradually decrease as the number of training iterations increases. Again, this helps the SOM to quickly cover the input space and then to settle and fine-tune itself. The initial alpha and d values, as well as their rates of change, are normally found experimentally.

Applet Specific Details

This applet uses a SOM to explore a 2-dimensional space (a grid). While running, it displays various information in the status bar (iteration, maxManDist, and alpha); iteration is the current iteration (ie: the number of training vectors given to the SOM).

One thing that can be misleading about the applet is that the points are displayed as moving. This is because the input domain is 2-dimensional, which allows the reference vector of each node to be used to determine where to draw the node. When visualizing higher-dimensional data, it may be more intuitive to draw the nodes in fixed positions and just change the reference vectors. I'll put up another applet that demonstrates this soon.

This applet also comes up with poor coverings of the input space (such as this and this; a good covering looks like this). This is due to poorly chosen alpha and maxManDist settings (both the initial values and the decay rates). The settings were chosen so that the SOM would expand quickly so that the user could easily see its intent.
As a by-product, the poor values also allow the user to see what poor mappings look like (it's not a bug, it's a feature!). For more information, look at the code.

Comments? Questions? Suggestions? Email me your thoughts!

Related Links

Here are some other SOM applets you can check out:
http://www.arch.usyd.edu.au/~rob/java/applets/neuro/SelfOrganisingMapDemo.html
http://www.aist.go.jp/NIBH/~b0616/Lab/BSOM1/
http://ai.bpa.arizona.edu/Visualization/demos3_intro.html
http://www.neuroinformatik.ruhrunibochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html

For more detailed information on SOMs see:
http://davis.wpi.edu/~matt/courses/soms/
http://www.cis.hut.fi/~sami/thesis/thesis_tohtml.html
http://www.ifs.tuwien.ac.at/~andi/somlib/som.html
http://www.mlab.uiah.fi/~timo/som/thesissom.html