Neural nets was: Re[2]: Info on Smalltalk DSLs or Metaprogramming...

Rich Warren rwmlist at gmail.com
Sun Sep 10 20:07:03 UTC 2006


Hey,

I hate to say this, but I may be moving my project to Repast
(http://repast.sourceforge.net/). It's in Java (ARGGG!), but it looks
like a well-designed agent simulation framework. I don't know whether
this would be something you're interested in or not. It's supposed to
have various learning algorithms, neural nets, and the like already
built in. It also has built-in graphing and logging capabilities.

It looks like a good way to bootstrap my project (an evolutionary
AI/ALife project) with a minimum of effort. Also, it's been used for
a number of ALife research projects--which will help make my project
more acceptable in the eyes of the biologists I eventually need to
convince.

I'm still doing some of the early prototyping in Squeak, however. If
I get anything that looks nice, I'll try to post it in a public way
(I don't really know how to do that yet--but I'm sure someone on this
list can point me to the right HowTos).

-Rich-

On Sep 6, 2006, at 11:40 PM, Herbert König wrote:

> Hello Rich,
>
>
>>> This (reinforcement learning) is said to be slow. How many inputs
>>> and how many neurons would such a brain have? How many agents?
>
> RW> Regarding reinforcement learning, I've seen this complaint from
> RW> other EE people, and I never really understood it. Perhaps you
> RW> could give me an example. In my opinion, it really depends.
>
> Regarding slowness: I'm an absolute beginner on neural nets and have
> only read about how many rounds of training they need. Currently I'm
> at the stage of asking: "Is my data representation suitable for
> processing by a (which type of?) neural net?"
>
> Going through: train the net -> evaluate network performance ->
> change data representation or network topology -> ...
> feels slow in absolute terms.
>
> And a single-layer perceptron behind a self-organizing feature map
> trains much faster than a multilayer network (only a few of the
> nodes are trained), with comparable results (in my special case).
>
> Once I know that training will yield the "best possible" result,
> I'll be fine with the computer needing to run a whole weekend to
> train a network.
>
> On the other hand, soon after I reach that state, the subject will get
> boring :-))
>
> RW> Not offhand (though I always recommend searching on
> RW> http://citeseer.ist.psu.edu/). If you're just trying to train
> RW> the weights, you can just read the bits into an array and use
> RW> that as your genome. If you want to evolve both the weights and
> RW> the size/topology, you need more sophisticated methods. I know
> RW> some people have worked on different ways to encode neural nets
> RW> (basically compressing the layout/weight information, much the
> RW> way our DNA compresses all the information needed to build a
> RW> human body). I can't find the reference in my notes right now,
> RW> however.
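
(For the simple read-the-weights-into-an-array case, a Squeak sketch
might look like the following; the 2x2 weight matrix is invented, and
"mutation" is reduced to jittering a single gene:)

  | weights genome mutated |
  "a tiny 2x2 weight matrix standing in for a real network's weights"
  weights := Array
      with: (Array with: 0.1 with: -0.4)
      with: (Array with: 0.7 with: 0.2).
  "flatten row by row into the flat genome the GA operates on"
  genome := OrderedCollection new.
  weights do: [:row | genome addAll: row].
  "toy mutation, then write the genes back in the same fixed order"
  mutated := genome asArray.
  mutated at: 3 put: (mutated at: 3) + 0.01.
  1 to: mutated size do: [:i |
      (weights at: (i - 1) // 2 + 1)
          at: (i - 1) \\ 2 + 1
          put: (mutated at: i)].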
>
> Thanks!
>
>
> Herbert                            mailto:herbertkoenig at gmx.net
>
>



