I’m a machine learning researcher at AKASA, where I work on end-to-end generative AI solutions for healthcare revenue cycle management.

Previously, I was an applied machine learning scientist at Thomson Reuters Labs, tuning AI agents for knowledge workers, and a postdoctoral research associate at Pacific Northwest National Laboratory in Tegan Emerson’s Math of Data Science group, mentored by Henry Kvinge. Before that, I completed a PhD in mathematics (algebraic geometry) at the University of Washington, advised by Sándor Kovács.

Research

I work on applied research for LLM-driven machine learning systems. I am interested in holistic evaluation of deep learning models, spanning bias, robustness, explainability, and interpretability, as well as in post-hoc analysis of learned features (e.g., representation (dis)similarity metrics).
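
As a concrete illustration of the representation (dis)similarity analyses mentioned above, here is a minimal sketch of linear centered kernel alignment (CKA), one widely used metric for comparing learned features. It is not the implementation from any of the papers listed below, and the function name, array shapes, and example values are illustrative assumptions.

    import numpy as np

    def linear_cka(x, y):
        """Linear centered kernel alignment (CKA) between activation matrices
        x (n_samples x d1) and y (n_samples x d2) computed on the same inputs.
        Returns a similarity score in [0, 1]."""
        # Center each feature dimension (column) of both activation matrices.
        x = x - x.mean(axis=0, keepdims=True)
        y = y - y.mean(axis=0, keepdims=True)
        # Linear CKA: ||y^T x||_F^2 / (||x^T x||_F * ||y^T y||_F).
        numerator = np.linalg.norm(y.T @ x, ord="fro") ** 2
        denominator = np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
        return numerator / denominator

    # Hypothetical usage: compare activations of two layers on 100 inputs.
    rng = np.random.default_rng(0)
    layer_a = rng.normal(size=(100, 64))
    layer_b = rng.normal(size=(100, 128))
    print(f"linear CKA: {linear_cka(layer_a, layer_b):.3f}")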

My pure math research focused on birational geometry and singularities, mostly in positive and mixed characteristic.

Publications

Note: my Google Scholar profile may be more complete and up to date.

Main Track

  1. Davis Brown, Charles Godfrey, Nicholas Konz, Jonathan Tu and Henry Kvinge. Understanding the Inner-workings of Language Models Through Representation Dissimilarity. In EMNLP 2023.
  2. Kelsey Lieberman, James Diffenderfer, Charles Godfrey and Bhavya Kailkhura. Neural Image Compression: Generalization, Robustness, and Spectral Biases. In NeurIPS 2023 (also selected for an oral presentation at the ICML 2023 workshop Neural Compression: From Information Theory to Applications). Code available at github.com/klieberman/ood_nic.
  3. Charles Godfrey, Davis Brown (equal contribution), Tegan Emerson and Henry Kvinge. On the Symmetries of Deep Learning Models and their Internal Representations. In NeurIPS 2022. Code available at github.com/pnnl/modelsym.

Workshop

  1. Nicholas Konz, Charles Godfrey, Madelyn Shapiro, Jonathan Tu, Henry Kvinge and Davis Brown. Attributing Learned Concepts in Neural Networks to Training Data. In The 1st Workshop on Attributing Model Behavior at Scale at NeurIPS 2023, selected for oral presentation.
  2. Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles McKay, Davis Brown, Tim Doster and Eleanor Byler. How many dimensions are required to find an adversarial example? In The 3rd Workshop of Adversarial Machine Learning on Computer Vision at CVPR 2023, selected for oral presentation.
  3. Charles Godfrey, Michael Rawson, Henry Kvinge and Davis Brown. Fast computation of permutation equivariant layers with the partition algebra. In ICLR 2023 Workshop on Physics for Machine Learning.
  4. Davis Brown, Charles Godfrey (equal contribution), Cody Nizinski, Jonathan Tu and Henry Kvinge. Robustness of edited neural networks. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models.
  5. Henry Kvinge, Davis Brown and Charles Godfrey. Exploring the Representation Manifolds of Stable Diffusion Through the Lens of Intrinsic Dimension. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, featured in The Gradient.
  6. Charles Godfrey, Elise Bishoff, Myles McKay and Eleanor Byler. Impact of architecture on robustness and interpretability of multispectral deep neural networks. In SPIE Defense + Commercial Sensing 2023.
  7. Elizabeth Coda, Nico Courts, Colby Wight, Loc Truong, WoongJo Choi, Charles Godfrey, Tegan Emerson, Keerti Kappagantula and Henry Kvinge. Fiber bundle morphisms as a framework for modeling many-to-many maps. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning.

Preprints

  1. Charles Godfrey, Ping Nie, Natalia Ostapuk, David Ken, Shang Gao and Souheil Inati. Likert or Not: LLM Absolute Relevance Judgments on Fine-Grained Ordinal Scales (2025).
  2. Charles Godfrey. Correspondences in log Hodge cohomology (2023).
  3. Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey and Tegan Emerson. Neural frames: A Tool for Studying the Tangent Bundles Underlying Image Datasets and How Deep Learning Models Process Them (2022).
  4. Charles Godfrey, Elise Bishoff, Myles McKay, Davis Brown, Grayson Jorgenson, Henry Kvinge and Eleanor Byler. Testing predictions of representation cost theory with CNNs (2022). Code available at github.com/pnnl/frequency_sensitivity.
  5. Takumi Murayama and Charles Godfrey. Pure subrings of Du Bois singularities are Du Bois singularities (2022).
  6. Charles Godfrey. Higher direct images of ideal sheaves (2022).