Survey of Neural Transfer Functions.

Wlodzislaw Duch, Norbert Jankowski
Department of Computer Methods, Nicholas Copernicus University,
Grudziadzka 5, 87-100 Torun, Poland.
E-mails: duch,norbert@phys.uni.torun.pl

Neural Computing Surveys (submitted, Oct. 1998)




The choice of transfer functions in neural networks is of crucial importance to their performance. Although sigmoidal transfer functions are the most common, there is no \emph{a priori} reason why they should be optimal in all cases. In this article the advantages of various neural transfer functions are discussed and several new types of functions are introduced. Universal transfer functions, parametrized to change from localized to delocalized type, are of greatest interest. Many other types of neural transfer functions are discussed, including functions based on non-Euclidean distance measures. Biradial functions, formed from products or linear combinations of pairs of sigmoids, are treated in some detail. Products of $N$ biradial functions in $N$-dimensional input space give densities of arbitrary shapes, offering great flexibility in modelling the probability density of the input vectors. Extensions of biradial functions, offering a good tradeoff between the complexity of transfer functions and the flexibility of the densities they are able to represent, are proposed. Biradial functions can be used as transfer functions in many types of neural networks, such as RBF, RAN, FSM and IncNet. Using such functions and going to the hard limit (steep slopes) facilitates logical interpretation of the network performance, i.e., extraction of logical rules from the training data. A few examples of the influence of the choice of transfer functions on network performance are given.
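To illustrate the idea of a biradial function built from a pair of sigmoids, the following is a minimal sketch in Python. The parametrization shown here (centre $t$, half-width $b$, slope $s$) is an illustrative assumption chosen for clarity, not necessarily the exact parametrization used in the paper; the multidimensional version simply multiplies one-dimensional factors, which is how products of $N$ biradial functions yield separable densities in $N$-dimensional input space.

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def biradial(x, t, b, s):
    """Biradial window: product of a rising and a falling sigmoid.

    t: centre of the window, b: half-width, s: slope.
    For large s the product approaches a rectangular (hard-limit)
    window on [t - b, t + b], which is what makes logical-rule
    extraction possible.
    Parameter names are illustrative assumptions.
    """
    return sigmoid(s * (x - t + b)) * (1.0 - sigmoid(s * (x - t - b)))

def biradial_nd(xs, ts, bs, ss):
    """Product of N one-dimensional biradial factors.

    Gives a localized, separable density in N-dimensional input space;
    combining several such products can model densities of arbitrary shape.
    """
    p = 1.0
    for x, t, b, s in zip(xs, ts, bs, ss):
        p *= biradial(x, t, b, s)
    return p
```

With a steep slope (e.g. `s = 10`), `biradial(x, 0, 1, 10)` is close to 1 inside the interval $[-1, 1]$, close to 0 well outside it, and near 0.5 at the edges, showing the transition toward a crisp logical condition as the slope grows.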
