    • Activation gap generators in neural networks 

      Davel, Marelie H. (In Proc. South African Forum for Artificial Intelligence Research (FAIR2019), 2019-12)
      No framework exists that can explain and predict the generalisation ability of DNNs in general circumstances. In fact, this question has not been addressed for some of the least complicated of neural network architectures: ...
    • Benign interpolation of noise in deep learning 

      Davel, Marelie H.; Barnard, Etienne; Theunissen, Marthinus W. (South African Institute of Computer Scientists and Information Technologists, 2020)
      The understanding of generalisation in machine learning is in a state of flux, in part due to the ability of deep learning models to interpolate noisy training data and still perform appropriately on out-of-sample data, ...
    • Exploring neural network training dynamics through binary node activations 

      Haasbroek, Daniël G.; Davel, Marelie H. (Southern African Conference for Artificial Intelligence Research, 2020)
      Each node in a neural network is trained to activate for a specific region in the input domain. Any training samples that fall within this domain are therefore implicitly clustered together. Recent work has highlighted ...
    • Insights regarding overfitting on noise in deep learning 

      Theunissen, Marthinus W.; Davel, Marelie H.; Barnard, Etienne (In Proc. South African Forum for Artificial Intelligence Research (FAIR2019), 2019-12)
      The understanding of generalization in machine learning is in a state of flux. This is partly due to the relatively recent revelation that deep learning models are able to completely memorize training data and still perform ...
    • Pre-interpolation loss behavior in neural networks 

      Venter, Arthur E. W.; Theunissen, Marthinus W.; Davel, Marelie H. (Springer, 2020)
      When training neural networks as classifiers, it is common to observe an increase in average test loss while still maintaining or improving the overall classification accuracy on the same dataset. In spite of the ubiquity ...
    • ReLU and sigmoidal activation functions 

      Pretorius, Arnold M.; Barnard, Etienne; Davel, Marelie H. (In Proc. South African Forum for Artificial Intelligence Research (FAIR2019), 2019-12)
      The generalization capabilities of deep neural networks are not well understood, and in particular, the influence of activation functions on generalization has received little theoretical attention. Phenomena such as ...
    • Using summary layers to probe neural network behaviour 

      Davel, Marelie H. (South African Institute of Computer Scientists and Information Technologists, 2020)
      No framework exists that can explain and predict the generalisation ability of deep neural networks in general circumstances. In fact, this question has not been answered for some of the least complicated of neural network ...