Leadership

Team Lead + Lead Researcher

Math Modeling Student Research Group

Synopsis

Leading a 10-person mechanistic interpretability research team of undergraduates and PhD candidates. Architected a framework to quantify the polysemanticity of neurons in convolutional neural networks. Developed a pipeline to generate feature visualizations via gradient ascent and embed them; clustered 5k+ embeddings, using a modified Bayesian Information Criterion to select the number of clusters.