I am a scientist at Thomson Reuters Labs, working on machine learning models embedded as components of search systems, including re-ranking, dense retrieval (a.k.a. vector search) and retrieval-augmented generation with large language models (LLMs).
Previously I was a postdoctoral research associate at Pacific Northwest National Laboratory, working in Tegan Emerson’s Math of Data Science group, where my mentor was Henry Kvinge. Before that I completed a PhD in mathematics (algebraic geometry) at the University of Washington, advised by Sándor Kovács.
Research
I am interested in holistic evaluation of deep learning models, including bias, robustness, explainability and interpretability, and post-hoc analysis of learned features (e.g. representation (dis)similarity metrics).
My pure math research focused on birational geometry and singularities, mostly in positive and mixed characteristic.
Publications
Main Track
- Davis Brown, Charles Godfrey, Nicholas Konz, Jonathan Tu and Henry Kvinge. Understanding the Inner-workings of Language Models Through Representation Dissimilarity. In EMNLP 2023.
- Kelsey Lieberman, James Diffenderfer, Charles Godfrey and Bhavya Kailkhura. Neural Image Compression: Generalization, Robustness, and Spectral Biases. In NeurIPS 2023; also selected for an oral presentation at the ICML 2023 Workshop on Neural Compression: From Information Theory to Applications. Code available at github.com/klieberman/ood_nic.
- Charles Godfrey and Davis Brown (equal contribution), Tegan Emerson and Henry Kvinge. On the Symmetries of Deep Learning Models and their Internal Representations. In NeurIPS 2022. Code available at github.com/pnnl/modelsym.
Workshop
- Nicholas Konz, Charles Godfrey, Madelyn Shapiro, Jonathan Tu, Henry Kvinge and Davis Brown. Attributing Learned Concepts in Neural Networks to Training Data. In The 1st Workshop on Attributing Model Behavior at Scale at NeurIPS 2023, selected for oral presentation.
- Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles McKay, Davis Brown, Tim Doster and Eleanor Byler. How many dimensions are required to find an adversarial example? In The 3rd Workshop of Adversarial Machine Learning on Computer Vision at CVPR 2023, selected for oral presentation.
- Charles Godfrey, Michael Rawson, Henry Kvinge and Davis Brown. Fast computation of permutation equivariant layers with the partition algebra. In ICLR 2023 Workshop on Physics for Machine Learning.
- Davis Brown and Charles Godfrey (equal contribution), Cody Nizinski, Jonathan Tu and Henry Kvinge. Robustness of edited neural networks. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models.
- Henry Kvinge, Davis Brown and Charles Godfrey. Exploring the Representation Manifolds of Stable Diffusion Through the Lens of Intrinsic Dimension. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, featured in The Gradient.
- Charles Godfrey, Elise Bishoff, Myles McKay and Eleanor Byler. Impact of architecture on robustness and interpretability of multispectral deep neural networks. In SPIE Defense + Commercial Sensing 2023.
- Elizabeth Coda, Nico Courts, Colby Wight, Loc Truong, WoongJo Choi, Charles Godfrey, Tegan Emerson, Keerti Kappagantula and Henry Kvinge. Fiber bundle morphisms as a framework for modeling many-to-many maps. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning.
Preprints
- Charles Godfrey. Correspondences in log Hodge cohomology (2023).
- Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey and Tegan Emerson. Neural frames: A Tool for Studying the Tangent Bundles Underlying Image Datasets and How Deep Learning Models Process Them (2022).
- Charles Godfrey, Elise Bishoff, Myles McKay, Davis Brown, Grayson Jorgenson, Henry Kvinge and Eleanor Byler. Testing predictions of representation cost theory with CNNs (2022). Code available at github.com/pnnl/frequency_sensitivity.
- Takumi Murayama and Charles Godfrey. Pure subrings of Du Bois singularities are Du Bois singularities (2022).
- Charles Godfrey. Higher direct images of ideal sheaves (2022).
Axioms
I believe in Federico Ardila’s axioms:
- Axiom 1. Mathematical potential is distributed equally among different groups, irrespective of geographic, demographic, and economic boundaries.
- Axiom 2. Everyone can have joyful, meaningful, and empowering mathematical experiences.
- Axiom 3. Mathematics is a powerful, malleable tool that can be shaped and used differently by various communities to serve their needs.
- Axiom 4. Every student deserves to be treated with dignity and respect.
Acknowledgments
During the spring of 2019 I was in residence at the Mathematical Sciences Research Institute in Berkeley, California, supported by the National Science Foundation under Grant No. 1440140. During the 2018–2019 academic year I was supported by the University of Washington Department of Mathematics Graduate Research Fellowship.