|Speaker||Steve Smale, City U. Hong Kong and Berkeley; Paul Smolensky, Johns Hopkins U.; and Gerry Sussman, MIT|
|Date and Time||Nov. 29, 2012, 4:00 p.m. - 7:00 p.m.|
|Location||PILM Seminar Room 46-3310, MIT Building 46, off the 3rd-floor atrium|
We apologize: this workshop has been canceled. MIT I^2 is currently working to reschedule it for next year. Please check back next semester for updates.
The workshop will include seminars by Steve Smale, City U. Hong Kong and Berkeley; Paul Smolensky, Johns Hopkins U.; and Gerry Sussman, MIT. Presentations will be followed by a moderated discussion.
Free and open to the public
Abstract: Our brains are far slower than our best hardware, and, yet, our minds can exploit patterns in inherently ambiguous, noisy data far more efficiently than our best software. This discrepancy has led to numerous proposals for new computational architectures and new programming models to help us engineer machine intelligence and reverse-engineer natural intelligence - from asynchronous spiking neural networks to Lisp machines, and from perceptrons to logic programming.
Many of these architectures were introduced in response to perceived limitations of standard architectures for efficient calculation. Despite their engineering success, Turing machines, serial processors following the von Neumann model, and synchronous digital circuits all have properties that seem inappropriate for intelligent computation. Design criteria for alternatives have included biological plausibility, suitability for learning, natural parallelism, and fault tolerance. But what is essential about these architectures, and what is incidental, distracting, or simply convenient? In what ways is it inadequate or misleading to describe them all using Turing machines or C code, and in what ways is it unhelpful to view them as anything else?
In computer science and engineering, the concept of computation as calculation - the representation and evaluation of mathematical functions via algorithmic processes - has taken life at many different levels of abstraction. Each addresses a different piece of the puzzle:
In this workshop, we will explore candidate architectures for computational intelligence advanced by some of the world's best theorists in AI, neuroscience, and cognitive science. We will compare and contrast them to one another, as well as to their closest analogs in other branches of computer science and engineering, to see what the key motivations and contributions of each new architecture are. We will strive to identify the necessary features of a new model of intelligent computation, and the prospects for using it to understand the mind and brain.
4:40-4:50 Questions/Moderator Comments
5:20-5:30 Questions/Moderator Comments