Probabilistic Thinking in Language and Code

Kevin Ellis
Cornell University

I will present work that aims to bridge Bayesian models of cognition with LLMs, treating both informal (natural) language and formal (programming) languages as candidate languages-of-thought for humanlike internal representations. First, I will define a class of Bayesian models that wrap around LLMs. Then, I will show ways in which the resulting models are more humanlike than either a raw LLM or a conventional Bayesian cognitive model. Finally, I will present engineering results suggesting how wake-sleep learning could fine-tune language models to be more effective at inductive reasoning by amortizing probabilistic inference.
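
As a concrete illustration of the first ingredient, here is a minimal sketch (in Python) of one way a Bayesian model can wrap around an LLM: the LLM proposes candidate hypotheses expressed as code, and Bayesian scoring turns those proposals into an approximate posterior. Everything below is assumed for illustration; in particular, propose_hypotheses is a hypothetical stand-in for a real LLM call, and the number-concept domain and size-principle likelihood are toy choices, not the talk's actual model.

# Minimal sketch: a Bayesian wrapper around an LLM for number-concept
# learning. Hypotheses are small Python predicates "proposed by an LLM";
# the posterior is approximated by weighting each proposal by its
# likelihood (flat prior over proposals).

import math

def propose_hypotheses(examples, n=4):
    """Hypothetical LLM proposal step: return candidate rules as Python
    source. A real implementation would prompt an LLM with the examples."""
    return [
        "lambda x: x % 2 == 0",  # even numbers
        "lambda x: x > 0",       # positive numbers
        "lambda x: x % 4 == 0",  # multiples of four
        "lambda x: True",        # any number
    ][:n]

def log_likelihood(rule, examples, domain=range(1, 101)):
    """Size-principle likelihood: each example is assumed drawn uniformly
    from the rule's extension over a finite domain."""
    f = eval(rule)  # hypotheses are trusted code in this toy sketch
    extension = [x for x in domain if f(x)]
    if not extension or not all(f(x) for x in examples):
        return -math.inf
    return -len(examples) * math.log(len(extension))

def posterior(examples):
    """Self-normalized weights over LLM-proposed hypotheses."""
    rules = propose_hypotheses(examples)
    logw = [log_likelihood(r, examples) for r in rules]
    m = max(logw)
    w = [math.exp(l - m) if l > -math.inf else 0.0 for l in logw]
    z = sum(w)
    return {r: wi / z for r, wi in zip(rules, w)}

if __name__ == "__main__":
    for rule, p in posterior([4, 8, 16]).items():
        print(f"{p:.3f}  {rule}")

On the examples 4, 8, 16, the posterior concentrates on the most specific consistent rule (multiples of four), the size-principle behavior characteristic of Bayesian concept-learning models.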
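
The wake-sleep direction can be sketched in the same spirit. During the sleep ("dreaming") phase, hypotheses are sampled from a prior and used to generate fantasy data, yielding (data, hypothesis) pairs on which an LLM can be fine-tuned so that it learns to propose good hypotheses directly from data, amortizing posterior inference. The toy prior, prompt format, and dream helper below are hypothetical assumptions, and the actual fine-tuning call is left as a comment.

# Sketch of the sleep phase of wake-sleep amortized inference: sample a
# rule from the prior, sample examples from its extension, and emit a
# (prompt, completion) pair for fine-tuning.

import random

PRIOR = [  # toy prior over number-concept rules: (rule source, weight)
    ("lambda x: x % 2 == 0", 0.4),
    ("lambda x: x % 4 == 0", 0.2),
    ("lambda x: x > 50",     0.2),
    ("lambda x: True",       0.2),
]

def dream(n_pairs=3, n_examples=4, domain=range(1, 101), seed=0):
    """Generate fantasy (examples -> rule) training pairs."""
    rng = random.Random(seed)
    rules, weights = zip(*PRIOR)
    pairs = []
    for _ in range(n_pairs):
        rule = rng.choices(rules, weights=weights)[0]
        f = eval(rule)
        extension = [x for x in domain if f(x)]
        examples = rng.sample(extension, k=min(n_examples, len(extension)))
        prompt = f"Examples: {sorted(examples)}\nRule:"
        pairs.append({"prompt": prompt, "completion": " " + rule})
    return pairs

if __name__ == "__main__":
    for pair in dream():
        print(pair)
    # A real pipeline would pass these pairs to an LLM fine-tuning job,
    # training the model to map observed data to hypotheses; the wake
    # phase would then use the fine-tuned model's proposals on real data.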

Presentation (PDF File)
View on YouTube
