Automatically imported from: http://commons.somewhere.com:80/rre/1994/neural.nets.html
neural nets
```
Date: Mon, 18 Apr 94 06:00:03 -0700
From: hutches@cs.ucsd.edu (David J. Hutches)
Subject: Reminder -- CSE A.I. Research Group Meeting Today
TIME  : 12:00 PM
PLACE : 5402 AP&M
TODAY'S TALK
---
Speaker : Filippo Menczer (UCSD CSE Department) [fil@cs.ucsd.edu]
Title : Evolving Sensory Systems in Latent Energy Environments
Abstract : Sensors represent a crucial link between the evolutionary forces shaping a species' relationship with its environment and the individual's cognitive abilities to behave and learn. We report on experiments using a new class of "latent energy environments" (LEE) models to define environments of carefully controlled complexity, which allow us to state bounds for random and optimal behaviors that are independent of the strategies used to achieve those behaviors. Using LEE's analytic basis for defining environments, we then use neural networks (NNets) to model individuals and a steady-state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors. Our experiments consider two types of sensors, "contact" and "ambient", and variants in which the NNets are not allowed to learn, learn via error correction from internal prediction, or learn via reinforcement learning. We find that predictive learning, even when using a larger repertoire of the more sophisticated ambient sensors, provides no advantage over NNets unable to learn. However, reinforcement learning using a small number of crude contact sensors does provide a significant advantage. Our analysis of these results points to a trade-off between the genetic "robustness" of sensors and their informativeness to a learning system.
NEXT WEEK'S TALK
---
Speaker : David Noelle (UCSD CSE Department) [dnoelle@cs.ucsd.edu]
Title : Do As I Say! Working Towards Instructable Connectionist Systems
Abstract : Humans improve their performance by means of a variety of learning strategies, including both gradual statistical induction from experience and rapid incorporation of advice. In many learning environments, these strategies may interact in complementary ways. The focus of this work is on cognitively plausible models of multistrategy learning involving the integration of inductive generalization and learning "by being told". Specifically, we present a general design strategy for connectionist networks which instantaneously modify their behavior in response to quasi-linguistic advice.
The approach examined here encodes the receipt of instruction as motion in a network's activation space. The correct interpretation and operationalization of input instruction sequences is learned inductively, but, once this initial learning is complete, instruction following proceeds at the speed of activation propagation. This focus on activation space dynamics allows instructional learning and standard connectionist inductive learning to function in tandem.
This strategy has been successfully applied to a simple discrete mapping task and to the learning of natural number arithmetic. In the latter domain, the connectionist adder of Cottrell and Tsung, which is capable of systematically operating on arbitrarily large natural numbers, was augmented to receive instruction in various methods of addition and subtraction. Future experiments will extend these multistrategy learners to include auto-associative memories containing articulated attractors in activation space, which will facilitate systematic generalization to novel advice sequences. These later experiments will abandon arithmetic and focus instead on simple planning tasks in a "blocks world" environment.
```
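
The evolutionary setup in the first abstract (NNets as individuals, a steady-state genetic algorithm shaping their sensors) can be illustrated with a small sketch. This is not the LEE system itself: the toy fitness function, the network layout, and every parameter below are assumptions made purely for illustration.

```python
import random

# Minimal, illustrative sketch: tiny feed-forward "NNets" whose sensor weights
# are evolved by a steady-state genetic algorithm. This is NOT the LEE system;
# the toy fitness function and all parameters are assumptions.

N_SENSORS = 4   # assumed number of crude, contact-style sensor inputs
N_HIDDEN = 3
POP_SIZE = 20
STEPS = 2000

def random_genome():
    # Genome = flattened sensor->hidden and hidden->output weights.
    n = N_SENSORS * N_HIDDEN + N_HIDDEN
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def act(genome, sensors):
    # One forward pass; the scalar output in [-1, 1] is read as a turn command.
    hidden = []
    for h in range(N_HIDDEN):
        s = sum(genome[h * N_SENSORS + i] * sensors[i] for i in range(N_SENSORS))
        hidden.append(max(0.0, s))  # simple rectified unit
    off = N_SENSORS * N_HIDDEN
    out = sum(genome[off + h] * hidden[h] for h in range(N_HIDDEN))
    return max(-1.0, min(1.0, out))

def fitness(genome):
    # Stand-in for "energy collected": reward turning toward the side whose
    # sensors report more stimulation, averaged over random sensory states.
    score = 0.0
    for _ in range(50):
        sensors = [random.random() for _ in range(N_SENSORS)]
        desired = 1.0 if sum(sensors[2:]) > sum(sensors[:2]) else -1.0
        score += 1.0 - abs(desired - act(genome, sensors)) / 2.0
    return score

def mutate(genome, rate=0.1, sigma=0.3):
    return [w + random.gauss(0.0, sigma) if random.random() < rate else w
            for w in genome]

population = [random_genome() for _ in range(POP_SIZE)]
scores = [fitness(g) for g in population]

for _ in range(STEPS):
    # Steady-state reproduction: one tournament-selected parent produces one
    # mutated offspring per step, which replaces the current worst individual.
    parent = max(random.sample(range(POP_SIZE), 3), key=lambda i: scores[i])
    child = mutate(population[parent])
    worst = min(range(POP_SIZE), key=lambda i: scores[i])
    population[worst], scores[worst] = child, fitness(child)

print("best fitness after evolution:", round(max(scores), 2))
```

The steady-state character is the point of the main loop: rather than replacing the whole population each generation, a single offspring is produced per step and swapped in for the current worst individual, so evaluation and evolution proceed incrementally.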
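
The second abstract's central idea, that advice changes behavior through activation dynamics rather than weight changes, can likewise be sketched. The Boolean-function task, the advice encoding, and the architecture below are illustrative assumptions, not Noelle's model; the only property demonstrated is that, after inductive training, presenting a different advice vector alters the network's input-output mapping immediately, with no further learning.

```python
import numpy as np

# Minimal, illustrative sketch of "advice as activation": one network maps
# (advice, stimulus) -> response. How to interpret advice is learned
# inductively; afterwards, a new advice vector changes behaviour immediately,
# with no weight update. Task and architecture are assumptions, not the
# model described in the talk.

rng = np.random.default_rng(0)

OPS = {"AND": lambda a, b: a & b,
       "OR":  lambda a, b: a | b,
       "XOR": lambda a, b: a ^ b}
OP_NAMES = list(OPS)

def encode(op, a, b):
    # Input = one-hot "advice" naming the operation, plus the two operand bits.
    advice = np.eye(len(OP_NAMES))[OP_NAMES.index(op)]
    return np.concatenate([advice, [a, b]])

X = np.array([encode(op, a, b) for op in OP_NAMES for a in (0, 1) for b in (0, 1)])
y = np.array([[OPS[op](a, b)] for op in OP_NAMES for a in (0, 1) for b in (0, 1)],
             dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by plain gradient descent (the inductive phase).
H, LR = 12, 1.0
W1 = rng.normal(0.0, 0.5, (X.shape[1], H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1));          b2 = np.zeros(1)

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) / len(X)            # cross-entropy gradient at the logit
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= LR * h.T @ d_out; b2 -= LR * d_out.sum(0)
    W1 -= LR * X.T @ d_h;   b1 -= LR * d_h.sum(0)

# Instruction-following phase: no learning, just one forward pass per query.
for op in OP_NAMES:
    preds = [int(sigmoid(sigmoid(encode(op, a, b) @ W1 + b1) @ W2 + b2)[0] > 0.5)
             for a in (0, 1) for b in (0, 1)]
    print(op, preds)
```

After training, swapping "AND" advice for "XOR" advice is a single forward pass; nothing resembling gradient descent happens at instruction-following time, which is the sense in which instruction following proceeds "at the speed of activation propagation".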