The MIT Intelligence Initiative (I2) Steering Committee is pleased to announce that Vikash Mansinghka, Martin Rohrmeier, and Timothy J. O'Donnell have been awarded the first one-year I2 fellowships.
The central goal of the MIT Intelligence Initiative (I2) has been to support integrative research focused on the topic of intelligence – intelligence in humans or animals, in machines or molecules, in cultural or collective settings – including collaborations across all departments of MIT. The initiative hopes to fully exploit MIT’s unique potential to address the problem of intelligence, and to encourage and enable more integrative approaches than conventional funding sources and institutional structures allow.
Previously, MIT I2 has funded research through seed grant awards. To sharpen the initiative’s focus and to expand into the training and education of future scientists, the MIT I2 Steering Committee decided to establish the MIT I2 Postdoctoral Fellowships. The fellowships are awarded to postdoctoral candidates conducting research directly relevant to the understanding of intelligence. Applicants are expected to show strong scientific accomplishments and to propose innovative research bridging at least two different MIT labs.
Vikash Mansinghka received his PhD from MIT in 2009; he also holds an S.B. and an M.Eng. from the EECS department, along with an S.B. in mathematics. Most of Vikash’s research addresses three questions: What kind of information-processing machine is the mind? How can we best emulate it in software? And how can we build hardware that does this as efficiently as the brain?
His specific scientific goal is to narrow the gap between natural and artificial intelligence from both sides: to contribute to the engineering of machine intelligence using new computational principles that may underpin the unreasonable effectiveness of biological computation, and to use these ideas to reverse-engineer the mind and brain, from epistemology to algorithms, architecture, and hardware. The fields of artificial intelligence, neuroscience, and psychology have all converged on probabilistic inference and statistical learning as keys to understanding this gap, but have struggled to overcome either the apparent intractability of inference or the limited flexibility of state-of-the-art probabilistic systems and probabilistic models of cognition. In response, he has developed a new model of computation, called probabilistic computing, which recasts computation in terms of stochastic inference rather than deterministic calculation.
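To give a flavor of what "computation as stochastic inference" can mean, here is a minimal sketch (not one of Mansinghka's actual systems, and the function name is our own): a question about an unobserved quantity – the bias of a coin – is answered by repeatedly simulating from a prior and keeping only the simulations that reproduce the observed data.

```python
import random

def posterior_mean_bias(heads, flips, samples=20000, rng=random.Random(42)):
    """Estimate E[bias | data] for a coin by rejection sampling:
    propose a bias from a uniform prior, simulate the flips, and keep
    the proposal only if the simulation reproduces the observation."""
    kept = []
    for _ in range(samples):
        bias = rng.random()                              # draw from the uniform prior
        sim = sum(rng.random() < bias for _ in range(flips))
        if sim == heads:                                 # accept iff simulation matches data
            kept.append(bias)
    return sum(kept) / len(kept)

# Observing 9 heads in 10 flips pulls the estimated bias well above 0.5,
# toward the exact posterior mean of 10/12 under a uniform prior.
print(round(posterior_mean_bias(9, 10), 2))
```

The answer emerges from random simulation rather than a deterministic formula, which is the basic move that probabilistic computing generalizes.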
Martin Rohrmeier received his PhD and an MPhil in Musicology from the University of Cambridge.
The aim of Martin’s proposed project is to advance the understanding of the human capacity for musical syntax by combining cutting-edge methodologies from theoretical linguistics with probabilistic computational models.
In its theoretical part, the project will take as a starting point the preliminary theory of musical syntax that Rohrmeier has recently developed (Rohrmeier, 2007, 2011). In collaboration with the music syntax research group within MIT Linguistics, he will combine expertise in linguistics and music theory to devise formal tests for musical recursion and constituent structure, investigate whether the full range of linguistic features (as specified by minimalist and other linguistic theories) is reflected in musical syntax, and explore how a syntactic framework can model complex musical phenomena such as fauxbourdon-style parallel triads, sequential patterns, and chromaticisms.
In its computational part, in collaboration with the MIT Computational Cognitive Science group, the project aims to model musical syntax using probabilistic context-free grammars (PCFGs) or related methods applied to real-world musical corpora.
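To illustrate the idea, here is a toy PCFG over harmonic functions – the rules and probabilities below are invented for illustration and are not Rohrmeier's actual grammar. Each nonterminal (tonic T, dominant D, subdominant S) expands probabilistically, and sampling top-down yields a chord sequence:

```python
import random

# Hypothetical PCFG over harmonic functions: each nonterminal maps to a
# list of (probability, expansion) pairs; chord symbols like "I" are terminals.
RULES = {
    "T": [(0.5, ["D", "T"]), (0.5, ["I"])],   # a tonic may be prepared by a dominant
    "D": [(0.4, ["S", "D"]), (0.6, ["V"])],   # a dominant may be prepared by a subdominant
    "S": [(1.0, ["IV"])],
}

def sample(symbol, rng=random.Random(0)):
    """Expand a symbol top-down, returning a sequence of chord symbols."""
    if symbol not in RULES:                   # terminal chord symbol
        return [symbol]
    r, acc = rng.random(), 0.0
    for p, expansion in RULES[symbol]:        # pick one expansion by its probability
        acc += p
        if r <= acc:
            return [c for s in expansion for c in sample(s, rng)]
    return []

print(sample("T", random.Random(1)))  # a sampled cadence ending on the tonic
```

In corpus work the direction is reversed: rather than sampling, the rule probabilities are estimated from annotated musical data, and the grammar then assigns probabilities and parse trees to new chord sequences.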
Timothy J. O'Donnell received a BA in Linguistics from Cornell University in 1999 and a PhD in Psychology from Harvard University in 2011. His research lies at the boundary of experimental psychology, theoretical linguistics, and applied computer science. He develops mathematical models of the way children learn language and the way adults generalize linguistic rules to create new words and sentences.
His project seeks to target key open questions in language structure, acquisition and processing by bringing together researchers in three areas of language research: statistical natural language processing, formal linguistics, and psycholinguistics. Recent developments make this an especially good time for interdisciplinary exchange and cross-pollination. The project identifies four ways in which advances in each of the fields can inform and enrich each other: (1) use of complex structured representations and domain-specific knowledge from linguistics in statistical NLP and psycholinguistics; (2) improved empirical methods from psycholinguistics for linguistics and NLP; (3) statistical models from natural language processing (NLP) which allow finer-grained prediction for linguistic and psycholinguistic theories; and (4) an expanded set of challenging scientific problems for all three areas.