Computer vision
Fisher's linear discriminant [1] is a quantity derived to solve binary supervised classification problems (see Classifiers_and_Discriminants).

It is assumed that every object or event belongs to one of two classes, $C_1$ or $C_2$, and may be represented by a vector of measurements $\mathbf{x}$ referred to as a feature vector. Since the measurements have been chosen to distinguish between the classes, the distribution of each class in this feature space should be different. For simplicity, the distribution of each class $C_i$ is summarised by its mean $\boldsymbol{\mu}_i$ and covariance $\Sigma_i$.

Fisher's linear discriminant is a linear combination of the components of the feature vector which maximises the separation of the two classes in a single quantity. In general, two classes will be further apart when their means are far apart and there is little variability within each class. Fisher summarised this with the measure of separation given by

$$\mathrm{Separation} = \frac{(m_1 - m_2)^2}{s_1^2 + s_2^2}$$

where $m_i$ and $s_i^2$ are the mean and variance of class $C_i$ in the quantity considered.
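As a quick numeric illustration (the values here are invented): if the two classes have means $m_1 = 0$ and $m_2 = 3$ and variances $s_1^2 = 1$ and $s_2^2 = 2$ in some quantity, then

$$\mathrm{Separation} = \frac{(0 - 3)^2}{1 + 2} = 3 ,$$

and doubling the distance between the means would quadruple the separation, while doubling both variances would halve it.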
For this example, consider the linear combination of measurements

$$y = \mathbf{w}^{\mathsf T} \mathbf{x} .$$
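Writing the projection as $y = \mathbf{w}^{\mathsf T}\mathbf{x}$, a minimal NumPy sketch (all class parameters and the direction below are invented for illustration) shows how a class's mean and covariance carry over to this new variable:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0])            # class mean (illustrative values)
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])       # class covariance (illustrative values)
w = np.array([0.5, -1.0])            # an arbitrary projection direction

# Draw many samples from the class and project them onto w
x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ w                            # projected samples y = w^T x

# The sample statistics of y agree with w^T mu and w^T Sigma w
print(y.mean(), w @ mu)              # both close to -1.5
print(y.var(), w @ Sigma @ w)        # both close to 1.2
```

This numerically illustrates the identities stated next: the projected class mean is $\mathbf{w}^{\mathsf T}\boldsymbol{\mu}$ and the projected variance is $\mathbf{w}^{\mathsf T}\Sigma\mathbf{w}$.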
It can be shown that the mean of each class in this new variable will be $m_1 = \mathbf{w}^{\mathsf T}\boldsymbol{\mu}_1$ and $m_2 = \mathbf{w}^{\mathsf T}\boldsymbol{\mu}_2$, and the variances will be $s_1^2 = \mathbf{w}^{\mathsf T}\Sigma_1\mathbf{w}$ and $s_2^2 = \mathbf{w}^{\mathsf T}\Sigma_2\mathbf{w}$. The Fisher separation will then be

$$\mathrm{Separation} = \frac{\left(\mathbf{w}^{\mathsf T}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)\right)^2}{\mathbf{w}^{\mathsf T}(\Sigma_1 + \Sigma_2)\mathbf{w}}$$
which should be maximised with respect to $\mathbf{w}$. It can be shown that the optimal $\mathbf{w}$ is given by

$$\mathbf{w} \propto (\Sigma_1 + \Sigma_2)^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)$$
and so Fisher's linear discriminant is given by $y = \left((\Sigma_1 + \Sigma_2)^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)\right)^{\mathsf T}\mathbf{x}$. A set of measurements can then be classified by a simple comparison of the Fisher discriminant with a threshold.
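Putting the pieces together, a NumPy sketch of the whole procedure (class parameters, sample sizes, and the midpoint threshold below are all invented for illustration; the midpoint of the projected class means is just one simple choice of threshold):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-class problem
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S1 = np.array([[1.0, 0.2], [0.2, 1.0]])
S2 = np.array([[1.5, -0.3], [-0.3, 0.8]])

# Optimal direction: w proportional to (S1 + S2)^{-1} (mu1 - mu2)
w = np.linalg.solve(S1 + S2, mu1 - mu2)

# Threshold the projection at the midpoint of the projected class means
threshold = 0.5 * (w @ mu1 + w @ mu2)

# Classify samples drawn from each class
x1 = rng.multivariate_normal(mu1, S1, size=5000)
x2 = rng.multivariate_normal(mu2, S2, size=5000)
correct1 = (x1 @ w) > threshold      # class 1 projects above the threshold
correct2 = (x2 @ w) <= threshold     # class 2 projects below it
accuracy = (correct1.sum() + correct2.sum()) / 10_000
print(f"accuracy = {accuracy:.2f}")  # well above chance for these parameters
```

Because the two class distributions overlap, the discriminant cannot separate them perfectly; the achievable accuracy depends entirely on the (invented) means and covariances above.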

It is frequently stated that Fisher's discriminant assumes each class is normally distributed. This is probably due to confusion with the related Linear Discriminant Analysis (LDA), which is in fact identical to Fisher's discriminant when the covariances of the two classes are equal.


[1] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Eugenics, vol. 7, pp. 179–188, 1936.