
Joe Austerweil

(Sorry, you have to do the work of replacing the underscore, lastname, at, and dot with "_", "austerweil", "@", and "." yourself; I really hate spam.)

I am Joe Austerweil, an Assistant Professor at Brown University in the Department of Cognitive, Linguistic, and Psychological Sciences.

Short Research Philosophy

As a computational cognitive psychologist, I explore questions at the intersection of perception and higher-level cognition. I use recent advances in statistics and computer science to formulate ideal learner models, examine how those models solve these problems, and then test their predictions using traditional behavioral experiments. Ideal learner models help us understand the knowledge people use to solve problems, because that knowledge must be made explicit for the model to reproduce human behavior. This approach yields novel machine learning methods and leads to the discovery of new psychological principles.
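To make that concrete, here is a minimal toy sketch of what an ideal learner model looks like: a Bayesian learner whose prior knowledge and sampling assumptions are written out explicitly, so its generalizations follow directly from them. The hypothesis space, priors, and "strong sampling" likelihood below are illustrative textbook choices, not any particular published model.

    # Toy ideal learner: Bayesian concept learning over a tiny hypothesis space.
    hypotheses = {
        "even numbers":  {2, 4, 6, 8, 10},
        "powers of two": {2, 4, 8},
        "numbers <= 4":  {1, 2, 3, 4},
    }
    prior = {h: 1 / len(hypotheses) for h in hypotheses}

    def posterior(data):
        """P(h | data) under strong sampling: each example is assumed to be
        drawn uniformly at random from the true hypothesis's extension."""
        scores = {}
        for h, extension in hypotheses.items():
            if all(x in extension for x in data):
                scores[h] = prior[h] * (1 / len(extension)) ** len(data)
            else:
                scores[h] = 0.0  # this hypothesis cannot generate the data
        z = sum(scores.values())
        return {h: s / z for h, s in scores.items()}

    print(posterior([2, 4]))  # "powers of two" wins: smaller hypotheses that
                              # still fit the data explain it better (the size
                              # principle)

Writing every assumption down like this is exactly what lets a model's failures point to missing psychological principles.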

Academic history

Brown University (2007), Sc.B. in Applied Mathematics-Computer Science (with honors)
University of California, Berkeley (2011), M.A. in Statistics
University of California, Berkeley (2012), Ph.D. in Psychology

Upcoming Presentations and Workshops

MIT in late April to early May 2015.

Cognitive Science Society 2015:

Mark Ho, Michael Littman, Fiery Cushman, and Joseph Austerweil. Teaching with Rewards and Punishments: Reinforcement or Communication? [PDF]

Teaching with evaluative feedback involves expectations about how a learner will interpret rewards and punishments. We formalize two hypotheses of how a teacher implicitly expects a learner to interpret feedback (a reward-maximizing model based on standard reinforcement learning, and an action-feedback model based on research on communicative intent) and describe a virtual animal-training task that distinguishes the two. The results of two experiments in which people gave learners feedback for isolated actions (Exp. 1) or while learning over time (Exp. 2) support the action-feedback model over the reward-maximizing model.
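A toy sketch of the contrast between the two models (illustrative Python only; the action set, update rules, and learning rates are simplifications chosen for exposition, not the models from the paper):

    ACTIONS = ["left", "right"]

    class RewardMaximizer:
        """Treats feedback as reward to be maximized (a Q-learning-style update)."""
        def __init__(self, lr=0.5):
            self.q = {a: 0.0 for a in ACTIONS}
            self.lr = lr
        def update(self, action, feedback):
            self.q[action] += self.lr * (feedback - self.q[action])
        def act(self):
            return max(ACTIONS, key=lambda a: self.q[a])

    class ActionFeedbackLearner:
        """Treats feedback as a communicative signal about whether the chosen
        action was correct, updating a belief over which action is right."""
        def __init__(self, step=0.25):
            self.belief = {a: 0.5 for a in ACTIONS}
            self.step = step
        def update(self, action, feedback):
            # Positive feedback raises belief that `action` is correct;
            # negative feedback shifts belief toward the alternatives.
            for a in ACTIONS:
                signal = feedback if a == action else -feedback
                self.belief[a] = min(1.0, max(0.0, self.belief[a] + self.step * signal))
        def act(self):
            return max(ACTIONS, key=lambda a: self.belief[a])

    rm, af = RewardMaximizer(), ActionFeedbackLearner()
    for learner in (rm, af):
        learner.update("left", -1.0)   # punish "left"
        learner.update("right", +1.0)  # reward "right"
    print(rm.act(), af.act())  # both choose "right" on this simple history;
                               # richer feedback histories can pull the two
                               # models apart, which is what the virtual
                               # animal-training task is designed to expose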

Ting Qian and Joseph Austerweil. Learning additive and substitutive features. [PDF]

To adapt in an ever-changing world, people infer what basic units should be used to form concepts and guide generalizations. While recent computational models of human representation learning have successfully predicted how people discover features from high-dimensional input in a number of domains (Austerweil & Griffiths, 2013), the learned features are assumed to be additive. However, this assumption is not always true in the real world. Sometimes a basic unit is substitutive (Garner, 1978), meaning it can take only one value out of a set of discrete values. For example, a cat is either furry or hairless, but not both. In this paper, we explore how people form representations for substitutive features, and what computational principles guide such behavior. In a behavioral experiment, we show that not only are people capable of forming substitutive feature representations, but they also infer whether a feature should be additive or substitutive depending on the observed input. This learning behavior is predicted by our novel extension to Austerweil and Griffiths's (2011, 2013) feature construction framework, but not by their original model. Our work contributes to the continuing effort to understand how people form representations of the world.
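A small illustration of the additive/substitutive distinction in code (the feature names are invented for the example):

    import itertools

    additive = ["has_collar", "has_stripes"]   # any subset may be present together
    substitutive = ["furry", "hairless"]       # exactly one value applies at a time

    def possible_objects():
        """Enumerate objects consistent with both kinds of features."""
        for subset in itertools.product([0, 1], repeat=len(additive)):
            for value in substitutive:         # a substitutive feature is one choice
                yield {**dict(zip(additive, subset)), "coat": value}

    for obj in possible_objects():
        print(obj)
    # The additive features contribute 2^k free combinations; the substitutive
    # feature contributes a single mutually exclusive choice, so no object is
    # ever both "furry" and "hairless".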


I will be teaching Human and Machine Learning (CLPS 1211) in Fall 2015.

Current highlighted papers:

Joshua Abbott, Joseph Austerweil, and Thomas Griffiths (in press). Random walks on semantic networks can resemble optimal foraging. Psychological Review. [Preprint PDF]

Joseph Austerweil and Thomas Griffiths. (2013). A nonparametric Bayesian framework for constructing flexible feature representations. Psychological Review, 120 (4), 817-851. [DOI]

Highlighted code tools:

My laboratory is developing open-source tools to perform fast (GPU-based, using OpenCL) and easy (written for use in Python) inference for Bayesian nonparametric models. The current methods are available on GitHub. This effort is spearheaded by Ting Qian, a postdoctoral scholar in my laboratory.
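For a flavor of the model class these tools target, here is a minimal CPU-only sketch of one standard building block, drawing cluster assignments from a Chinese restaurant process prior (illustrative Python; this is not the lab's GPU/OpenCL code or its API):

    import numpy as np

    def sample_crp(n_customers, alpha, seed=0):
        """Return a table (cluster) assignment for each customer under CRP(alpha)."""
        rng = np.random.default_rng(seed)
        assignments = [0]                    # the first customer opens table 0
        counts = [1]
        for _ in range(1, n_customers):
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()             # popular tables attract more customers
            table = rng.choice(len(probs), p=probs)
            if table == len(counts):         # the last slot means "open a new table"
                counts.append(1)
            else:
                counts[table] += 1
            assignments.append(table)
        return assignments

    print(sample_crp(10, alpha=1.0))  # e.g. [0, 0, 0, 1, 0, 2, ...]

The appeal of nonparametric priors like this is that the number of clusters (or features) grows with the data rather than being fixed in advance, which is also why inference is expensive enough to benefit from GPUs.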

Last Updated April 26, 2015