Neural nets was: Re[2]: Info on Smalltalk DSLs or Metaprogramming...
Herbert König
herbertkoenig at gmx.net
Thu Sep 7 09:40:34 UTC 2006
Hello Rich,
>> This (reinforcement learning) is said to be slow. What number of
>> inputs and how many neurons would such a brain have? How many agents?
RW> Regarding reinforcement learning, I've seen this complaint from other
RW> EE people, and I never really understood it. Perhaps you could give
RW> me an example. In my opinion, it really depends.
regarding slowness, I'm an absolute beginner on neural nets and have
only read about how many rounds of training they need. Currently I'm at
the stage of asking: "Is my data representation suitable for processing
by a (which type of?) neural net?"
Going through the loop of: train the net -> evaluate network
performance -> change the data representation or network topology .....
feels slow in absolute terms.
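For illustration, here is a minimal sketch of that train -> evaluate
loop with a single-layer perceptron (Python; toy AND data, purely
illustrative, not my real representation problem):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single-output perceptron with the classic perceptron
    learning rule; returns (weights, bias)."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    """Evaluation step of the loop: fraction of correctly classified samples."""
    correct = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == t
        for x, t in samples)
    return correct / len(samples)

# AND is linearly separable, so the perceptron rule converges.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print(accuracy(data, w, b))  # 1.0 after a few epochs
```

If the evaluated accuracy is poor, one would go back and change the
data representation or topology and train again, which is exactly the
loop that feels slow.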
And a single-layer perceptron behind a self-organizing feature map
trains much faster than a multilayer network (only a few of the nodes
are trained), with comparable results (in my special case).
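For illustration, a tiny 1-D self-organizing map sketch (Python; toy
scalar data, all numbers made up) showing why SOM training touches only
a few nodes per sample:

```python
def train_som_1d(data, n_nodes=5, epochs=30, lr=0.5, radius=1):
    """Minimal 1-D self-organizing map over scalar inputs in [0, 1].
    For each sample only the best-matching node and its immediate
    neighbours are updated, which is why SOM training is cheap per
    sample compared to backprop through a full multilayer net."""
    # Initialise node weights spread evenly over [0, 1].
    nodes = [i / (n_nodes - 1) for i in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: node whose weight is closest to x.
            bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                if abs(i - bmu) <= radius:  # only a few nodes move
                    nodes[i] += lr * (x - nodes[i])
    return sorted(nodes)

# Two clusters of scalar data; the map should place nodes near both.
data = [0.1] * 10 + [0.9] * 10
nodes = train_som_1d(data)
print(nodes)
```

(A real SOM would also decay the learning rate and neighbourhood radius
over time; this sketch keeps both fixed for brevity.)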
As soon as I know that training will yield the "best possible"
result, I'll be fine with the computer needing to run over a weekend to
train a network.
On the other hand, soon after I reach that state, the subject will get
boring :-))
RW> Not off hand (though I always recommend searching on http://
RW> citeseer.ist.psu.edu/). If you're just trying to train the weights,
RW> you can just read the bits into an array and use that as your genome.
RW> If you want to evolve both the weights and the size/topology, you
RW> need more sophisticated methods. I know some people have worked on
RW> different ways to encode neural nets (basically compressing the
RW> layout/weight information, much the way our DNA compresses all the
RW> information needed to build a human body). I can't find the
RW> reference in my notes right now, however.
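For anyone following along, a minimal sketch (Python; hypothetical
layer shapes, not from any reference) of the first suggestion above:
reading the weights into a flat array to use as a GA genome, with the
topology held fixed:

```python
import random

def flatten_weights(layers):
    """Encode a feed-forward net's weight matrices as one flat genome
    list; only the weights evolve, the topology stays fixed."""
    return [w for matrix in layers for row in matrix for w in row]

def unflatten_weights(genome, shapes):
    """Rebuild weight matrices of the given (rows, cols) shapes from a
    flat genome, so the net can be evaluated for fitness."""
    layers, i = [], 0
    for rows, cols in shapes:
        layers.append([genome[i + r * cols:i + (r + 1) * cols]
                       for r in range(rows)])
        i += rows * cols
    return layers

def mutate(genome, rate=0.1, scale=0.5):
    """Typical GA mutation: jitter each gene with probability `rate`."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

# Example: a 3-input, 2-hidden, 1-output net as two weight matrices.
shapes = [(2, 3), (3, 1)]
layers = [[[1, 2, 3], [4, 5, 6]], [[7], [8], [9]]]
genome = flatten_weights(layers)
print(genome)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Evolving the topology as well needs the more sophisticated encodings
mentioned above, which this sketch does not attempt.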
Thanks!
Herbert mailto:herbertkoenig at gmx.net
More information about the Squeak-dev mailing list