University of Notre Dame College of Engineering

Evaluation of Machine Learning Model Generalization



   Achieving a good measure of model generalization remains a challenge within machine learning. The goal of this project is to investigate alternative techniques for evaluating a machine learning model's expected generalization. The project is inspired by the observation that neural activation similarity corresponds to instance similarity. In theory, a machine learning model that mirrors this observation would exhibit a consistent understanding of stimuli, one that extrapolates to unseen stimuli or stimulus variations. The project utilizes representational dissimilarity matrices (RDMs), which characterize a model by the pairwise similarity of its activations in response to stimuli. The project experiments with a variety of techniques for constructing and utilizing RDMs within and across model training, validation, and testing.
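As a rough illustration only (not the project's exact construction), an RDM can be sketched in NumPy by taking one minus the pairwise Pearson correlation between a model's activation vectors, a common dissimilarity choice in the representational-similarity literature:

```python
import numpy as np

def compute_rdm(activations):
    """Build a representational dissimilarity matrix (RDM).

    activations: (n_stimuli, n_units) array, one activation vector per
    stimulus. Dissimilarity here is 1 - Pearson correlation; the project
    may use a different dissimilarity measure.
    """
    # np.corrcoef treats each row as a variable, yielding the pairwise
    # correlation between stimulus responses.
    return 1.0 - np.corrcoef(activations)

# Toy example: activations for four stimuli from a hypothetical layer.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 16))
rdm = compute_rdm(acts)
# The RDM is square and symmetric, with zeros on the diagonal
# (each stimulus is perfectly correlated with itself).
```

Comparing such matrices between a model and a reference system (or between training stages) is one way to quantify how consistently the model represents stimuli.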


Funded by: IARPA contract #D16PC00002 and NSF DGE #1313583

Nathaniel Blanchard, Jeffery Kinnison, Brandon RichardWebster, Walter Scheirer

Collaborators: Pouya Bashivan

A Neurobiological Cross-domain Evaluation Metric for Predictive Coding Networks, Nathaniel Blanchard, Jeffery Kinnison, Brandon RichardWebster, Pouya Bashivan, Walter J. Scheirer, May 2018: [arxiv] [pdf]


Website developed by Richard Stefanik
Copyright © 2018
University of Notre Dame
Computer Vision Research Lab
307 Stinson-Remick Hall, Notre Dame, Indiana 46556
Accessibility Information
College of Engineering