machine learning - Why is test precision higher than training precision?


I am using TensorFlow to implement object recognition. I followed this tutorial, but used my own dataset: https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html#deep-mnist-for-experts

I used 212 positive samples and 120 negative samples for training. The test set contains 100 positive and 20 negative samples. The training precision is 32.15%, but the test precision is 83.19%.

I am wondering what makes the test precision higher than the training precision. Is the dataset not large enough? Is the data not statistically meaningful? Or is this a general thing, because I have seen people say that training precision doesn't make sense. Why is that?

There are two problems here.

First, precision is not a good measure of performance when the classes are unbalanced.

Second, and more importantly, you have a bad ratio of negatives to positives in the test set. The test set should come from the same process as the training set, but in your case negatives are ~36% of the training set and only ~17% of the test set. Not surprisingly, a classifier that answers "true" for every single input gets 83% precision on the test set, since positives make up 83% of that data.
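A quick check of that arithmetic, using the class counts from the question: if the classifier predicts "positive" for every input, every positive sample becomes a true positive and every negative sample a false positive.

```python
# Precision of a degenerate classifier that predicts "positive" for every input,
# using the test-set counts from the question: 100 positives, 20 negatives.
def precision(tp, fp):
    """Precision = true positives / all predicted positives."""
    return tp / (tp + fp)

# Always-positive classifier: 100 true positives, 20 false positives.
test_precision = precision(tp=100, fp=20)
print(f"{test_precision:.2%}")  # prints 83.33%
```

So the 83.19% test precision is almost exactly what a trivial constant classifier would achieve; it says nothing about whether the model learned anything.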

Thus it is not a matter of the number of test samples; it is a matter of the incorrect construction of the training/test datasets. As you can imagine, there may be more issues with the split, since the train and test sets have different structure.
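One common fix is a stratified split: shuffle each class separately and take the same fraction of each for the test set, so the class ratio is preserved on both sides. A minimal sketch in plain Python (the `stratified_split` helper is illustrative, not from the question; libraries such as scikit-learn provide this via `train_test_split(..., stratify=labels)`):

```python
import random

def stratified_split(labels, test_frac=0.25, seed=0):
    """Split indices so each class keeps the same proportion in train and test."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_test = round(len(idxs) * test_frac)
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return train, test

# All samples from the question combined: 212+100 positives, 120+20 negatives.
labels = [1] * 312 + [0] * 140
train_idx, test_idx = stratified_split(labels)
pos_ratio = lambda idxs: sum(labels[i] for i in idxs) / len(idxs)
print(round(pos_ratio(train_idx), 2), round(pos_ratio(test_idx), 2))  # prints 0.69 0.69
```

With matching class ratios, the train and test precision of a trivial always-positive classifier would agree, and any remaining gap actually reflects the model.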

