|Speaker|Noah D. Goodman|
|Affiliation|Assistant Professor of Psychology, Stanford University|
|Date and Time|Sept. 5, 2013, 5:30 p.m. - 6:30 p.m.|
|Location|McGovern Reading Room, MIT 46-5165 (seating is limited)|
MIT Intelligence Initiative Seminar Series
presents Noah D. Goodman, Stanford University
Abstract: Probabilistic programming languages (PPLs) provide a powerful representation for uncertain knowledge, separating the task of modeling from the design of inference algorithms. The key challenge for practical PPL systems is finding efficient inference algorithms that cover a wide space of useful programs. Motivated by observations about human reasoning and language understanding, I will argue that three interrelated ideas can lead the way: compile away repeated computation, relax hard problems to easier ones, and learn to do better inference over time. I will illustrate these slogans by describing several recent projects aimed at producing more efficient Church implementations. The Shred implementation provides a speedup of several orders of magnitude by tracing and slicing the inner loop of MCMC. Locally-annealed reversible-jump (LARJ-)MCMC exploits relaxation to make efficient structure-changing proposals. Stochastic inverses are learned representations of local model structure that enable efficient block proposals, converging asymptotically on perfect bottom-up samplers. Finally, I will suggest that resource-rational analysis of the tradeoff between computational resources and accuracy can both help us optimize our inference strategies and make a formal connection to human inference processes.
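For readers unfamiliar with the "inner loop of MCMC" that systems like Shred accelerate, the following is a minimal sketch of trace-based Metropolis-Hastings inference of the kind Church-style PPL implementations run internally. All names and the toy model are illustrative assumptions, not the Shred or Church API.

```python
import math
import random

random.seed(0)  # for reproducibility of this sketch

def model(trace):
    """Toy model: latent coin weight p with a uniform prior,
    observed 7 heads out of 10 flips. Reads (or initializes) the
    latent choice in `trace` and returns the log-probability."""
    p = trace.setdefault("p", random.random())
    if not (0.0 < p < 1.0):
        return float("-inf")  # outside the prior's support
    heads, flips = 7, 10
    # Binomial log-likelihood (constant binomial coefficient omitted).
    return heads * math.log(p) + (flips - heads) * math.log(1.0 - p)

def mh(model, steps=5000):
    """Single-site Metropolis-Hastings over the trace of random choices.
    A PPL's inner loop re-runs the model on each proposal; tracing and
    slicing this loop is what avoids the repeated computation."""
    trace = {}
    logp = model(trace)
    samples = []
    for _ in range(steps):
        # Propose a local change to one random choice (here, only "p").
        proposal = dict(trace)
        proposal["p"] = trace["p"] + random.gauss(0.0, 0.1)
        logp_new = model(proposal)
        # Metropolis acceptance rule (symmetric Gaussian proposal).
        if logp_new > float("-inf"):
            accept_prob = math.exp(min(0.0, logp_new - logp))
        else:
            accept_prob = 0.0
        if random.random() < accept_prob:
            trace, logp = proposal, logp_new
        samples.append(trace["p"])
    return samples

samples = mh(model)
# Discard burn-in; the uniform prior gives a Beta(8, 4) posterior, mean 2/3.
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

Note that every iteration re-executes the whole model; in a real Church program the model may be large, which is why tracing and slicing the loop, as Shred does, can yield order-of-magnitude speedups.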