machine learning - When to stop training - LOOCV MLP -


I'm running an MLP to classify a set of values into 10 different classes.

Simplified down: I have a sonar that gives me 400 "readings" of an object, and each reading is a list of 1000 float values.

I have scanned 100 objects in total, and I want to classify them and evaluate the model using leave-one-out cross-validation.

For each object, I split the data into a training set of 99 objects and a test set of the one remaining object. I feed the training set (99 objects, 99*400 "readings") to the MLP and use the test set (1 object, 1*400 "readings") to validate.
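The leave-one-out split described above could be sketched as follows. This is a minimal illustration, not the asker's actual code: toy array sizes stand in for the real 100 objects x 400 readings x 1000 floats, the labels are randomly generated, and the training call is left as a placeholder.

```python
import numpy as np

# Toy sizes standing in for the question's 100 objects x 400 readings x 1000 floats.
n_objects, n_readings, n_features = 5, 4, 6
rng = np.random.default_rng(0)
X = rng.normal(size=(n_objects, n_readings, n_features)).astype(np.float32)
y = rng.integers(0, 10, size=n_objects)  # one class label per object

for held_out in range(n_objects):
    train_idx = [i for i in range(n_objects) if i != held_out]
    # Flatten readings so each reading is one training sample,
    # labelled with its object's class.
    X_train = X[train_idx].reshape(-1, n_features)   # ((n_objects-1)*n_readings, n_features)
    y_train = np.repeat(y[train_idx], n_readings)
    X_test = X[held_out].reshape(-1, n_features)     # (n_readings, n_features)
    y_test = np.repeat(y[held_out], n_readings)
    # ... train the MLP on (X_train, y_train), evaluate on (X_test, y_test)
```

With the real data, each fold trains on 99*400 = 39,600 readings and tests on the 400 readings of the held-out object.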

My question is: how do I know which training epoch to use for the final "best" model? I googled around and people said to use the epoch that had the best validation accuracy, but that seems like cheating to me. Shouldn't I instead pick the model based on statistics of the training data? (My thought process: random weight reshuffling during training could create artificially high validation accuracy that doesn't yield a useful model for new objects scanned in the future.)

So the answer says to use the training epoch that gives the best validation accuracy; see also:

What is the difference between train, validation and test set in neural networks?

Best, deckwasher

This is called early stopping.

What you need is a validation set:

- After each epoch, compute your desired evaluation measure on the validation set.

- Always save the parameters of the best-performing model on the validation set in a variable.

- If the validation results have not improved for two (or n) iterations, stop training and reset the MLP to those best-performing parameters.

- Then compute the results on the test set with the best-performing model (on the validation set) that you saved before.
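The steps above can be sketched as a framework-agnostic early-stopping loop. This is a hedged illustration, not the answerer's code: the model's `train_epoch`, `evaluate`, `get_params`, and `set_params` methods are hypothetical, and `DummyModel` exists only so the sketch runs end to end.

```python
import copy

def train_with_early_stopping(model, train_data, val_data, max_epochs=200, patience=10):
    """Keep the parameter snapshot that scored best on the validation set;
    stop once `patience` epochs pass without improvement."""
    best_score = float("-inf")
    best_params = copy.deepcopy(model.get_params())
    epochs_since_improvement = 0
    for _ in range(max_epochs):
        model.train_epoch(train_data)
        score = model.evaluate(val_data)  # e.g. validation accuracy
        if score > best_score:
            best_score = score
            best_params = copy.deepcopy(model.get_params())
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break  # no improvement for `patience` epochs in a row
    model.set_params(best_params)  # reset to the best snapshot
    return best_score

class DummyModel:
    """Toy stand-in whose validation score follows a fixed curve."""
    def __init__(self, scores):
        self.scores, self.epoch, self.params = scores, -1, None
    def train_epoch(self, data):
        self.epoch += 1
        self.params = self.epoch  # "parameters" are just the epoch number
    def evaluate(self, data):
        return self.scores[self.epoch]
    def get_params(self):
        return self.params
    def set_params(self, params):
        self.params = params

# Validation accuracy peaks at epoch 3, then declines; with patience=3
# training stops at epoch 6 and the model is reset to the epoch-3 snapshot.
m = DummyModel([0.1, 0.4, 0.6, 0.9, 0.8, 0.7, 0.6, 0.5])
best = train_with_early_stopping(m, None, None, max_epochs=8, patience=3)
```

Only after this reset do you touch the test set (the held-out object), so the test score is never used for model selection.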

