
Subspace Projection Approaches to Classification and Visualization of Neural Network-Level Encoding Patterns

Figure 2

Regularization can prevent over-fitting of the training data sets.

A) A two-dimensional example illustrates how two-class classification between two data sets (blue and green points drawn from two-dimensional Gaussian distributions) is affected by selecting a small number of samples (two in this example). The probability distributions that best fit the selected points are too tight and introduce classification errors (red stars, for the green class). B) The probability distributions corresponding to the regularized covariance matrices generalize better and allow all data points to be classified correctly. The new 2σ boundaries are plotted in black for each class. Minimizing the quantity F = log(E1) + log(E2) + log(E3), the sum of the logs of the error terms for the self-class, opposite-class, and between-class separations, permits selection of the regularization parameters at the minimum: C) λ = 0.76 for the blue class and D) λ = 0.52 for the green class.
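The over-fitting shown in panel A and its remedy in panel B can be sketched as covariance shrinkage. With only two samples, the sample covariance is rank-deficient, so the fitted Gaussian collapses onto a line; blending the estimate with a scaled identity restores a full-rank, invertible covariance. The shrinkage form and the function `regularize_cov` below are illustrative assumptions, not the paper's exact regularizer.

```python
import numpy as np

def regularize_cov(cov, lam):
    """Shrink a sample covariance toward a scaled identity.
    (1 - lam) * cov + lam * sigma2 * I is a common shrinkage choice
    (an assumption here); lam in [0, 1] plays the role of the
    regularization parameter selected in panels C and D."""
    d = cov.shape[0]
    sigma2 = np.trace(cov) / d  # average variance, preserves overall scale
    return (1.0 - lam) * cov + lam * sigma2 * np.eye(d)

# Two samples from a 2-D Gaussian: the sample covariance is rank-1,
# so the best-fit distribution is degenerately "tight" along one axis.
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
cov = np.cov(pts, rowvar=False)
print(np.linalg.matrix_rank(cov))  # rank 1: singular, over-fit

reg = regularize_cov(cov, lam=0.5)
print(np.linalg.matrix_rank(reg))  # rank 2: invertible, generalizes
```

In this sketch the regularized covariance admits an inverse, so a Mahalanobis-distance (2σ-boundary) classifier is well defined for every data point, which is why all points in panel B are classified correctly.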

doi: https://doi.org/10.1371/journal.pone.0000404.g002