
dc.contributor.author: Venter, Arthur Edgar William
dc.contributor.author: Theunissen, Marthinus Wilhelm
dc.contributor.author: Davel, Marelie Hattingh
dc.date.accessioned: 2021-03-17T15:58:53Z
dc.date.available: 2021-03-17T15:58:53Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-030-66151-9
dc.identifier.issn: 1865-0929
dc.identifier.uri: http://hdl.handle.net/10394/36914
dc.description.abstract: When training neural networks as classifiers, it is common to observe an increase in average test loss while still maintaining or improving the overall classification accuracy on the same dataset. In spite of the ubiquity of this phenomenon, it has not been well studied and is often dismissively attributed to an increase in borderline correct classifications. We present an empirical investigation that shows how this phenomenon is actually a result of the differential manner by which test samples are processed. In essence: test loss does not increase overall, but only for a small minority of samples. Large representational capacities allow losses to decrease for the vast majority of test samples at the cost of extreme increases for others. This effect seems to be mainly caused by increased parameter values relating to the correctly processed sample features. Our findings contribute to the practical understanding of a common behaviour of deep neural networks. We also discuss the implications of this work for network optimisation and generalisation.
dc.language.iso: en
dc.publisher: Springer
dc.subject: Overfitting
dc.subject: Generalization
dc.subject: Deep Learning
dc.title: Pre-interpolation loss behavior in neural networks
dc.type: Article
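The decomposition described in the abstract — mean test loss rising while accuracy improves, because losses shrink for most samples but explode for a small minority — can be sketched numerically. The following is a hypothetical illustration with synthetic confidence values (not the paper's data or method); `p_early` and `p_late` stand for the probability a network assigns to each sample's true class at two training checkpoints.

```python
import numpy as np

# Synthetic illustration: per-sample cross-entropy loss, -log p(true class),
# at an "early" and a "late" training checkpoint. The numbers are invented
# to reproduce the qualitative effect, not taken from the paper.

n = 1000
p_early = np.full(n, 0.55)   # mediocre confidence in the true class
p_early[:100] = 0.30         # 100 samples misclassified early on

p_late = np.full(n, 0.99)    # vast majority: near-certain and correct
p_late[:10] = 1e-30          # tiny minority: extreme loss

loss_early, loss_late = -np.log(p_early), -np.log(p_late)
acc = lambda p: (p > 0.5).mean()   # crude stand-in for "correct class wins"

print(f"accuracy:    {acc(p_early):.2f} -> {acc(p_late):.2f}")
print(f"mean loss:   {loss_early.mean():.3f} -> {loss_late.mean():.3f}")
print(f"median loss: {np.median(loss_early):.3f} -> {np.median(loss_late):.3f}")
```

Accuracy improves (0.90 to 0.99) and the median per-sample loss falls sharply, yet the mean loss rises: ten extreme outliers dominate the average, matching the abstract's claim that the increase is driven by a small minority of samples rather than by borderline classifications.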

