Neural nets was: Re[2]: Info on Smalltalk DSLs or Metaprogramming...
Rich Warren
rwmlist at gmail.com
Thu Sep 7 06:50:47 UTC 2006
On Sep 4, 2006, at 10:44 AM, Herbert König wrote:
>
> This (reinforcement learning) is said to be slow. What number of
> inputs and how many neurons would such a brain have? How many agents?
Regarding reinforcement learning, I've seen this complaint from other
EE people, and I never really understood it. Perhaps you could give
me an example. In my opinion, it really depends. I believe the "no
free lunch" theorems show that all learning algorithms perform
equally well when averaged over all possible problems--so efficiency
really depends on the specific problem.
Reinforcement learning has a number of advantages: it can work very
well on systems where you receive intermittent rewards or punishments--
especially if the rewards only come after a series of actions (for
example, games). Reinforcement learning can also continue after the
agent/software has been deployed, allowing it to continue to update
its behavior in the real world.
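To make the delayed-reward point concrete, here is a minimal tabular Q-learning sketch (not from the original post) on a toy "chain" task where the only reward arrives at the far end. The environment, state count, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy chain: states 0..4, reward only on reaching state 4.
# Q-learning still learns to move right in every state, even though
# early actions receive no immediate reward -- the delayed-reward case.
N_STATES = 5
ACTIONS = [-1, +1]            # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # illustrative hyperparameters

def run_episode(q):
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPS:
            a = random.choice([0, 1])                     # explore
        else:
            a = max((0, 1), key=lambda i: q[state][i])    # exploit
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        q[state][a] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][a])
        state = nxt

random.seed(0)
q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(200):
    run_episode(q)

# Greedy policy after training: should move right (action 1) everywhere.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the reward signal has propagated backward through the value estimates, so the greedy policy heads for the rewarding state from anywhere in the chain.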
Also, for many domains, learning speed isn't much of an issue. You
train the system, save the trained state, then run it from the
trained state. The training should be a one-time cost.
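The train-once / run-from-saved-state workflow can be sketched like this; the dict of "trained" parameters is a stand-in (an assumption), but any picklable trained state works the same way.

```python
import os
import pickle
import tempfile

# Stand-in for the output of a slow, one-time training run.
trained_state = {"weights": [0.1, -0.4, 0.7]}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")

# One-time cost: persist the trained state after training finishes.
with open(path, "wb") as f:
    pickle.dump(trained_state, f)

# Deployment: load the saved state and run -- no retraining needed.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == trained_state)
```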
As for the details on my project, I don't know. I'm still in the
planning stages (and poring over a lot of biology/evolution
papers right now).
>
> With 500 epochs of 400 samples, training a single perceptron with 64
> hidden and 16 output neurons took over an hour on a 1.8GHz Pentium M.
> It had 140 inputs.
I've worked on projects where we have trained the system for several
days. Again, this is a one-time cost. If I put in a month of coding
time, a few days of training (which can run over the weekend) doesn't
seem like much. Many ML techniques can take a long time to train, but
the end results can be very fast in use.
>
> Do you have any pointers on how to use genetic algorithms on neural
> nets? More practical, I'm an EE not a CS person :-)
Not offhand (though I always recommend searching on http://
citeseer.ist.psu.edu/). If you're only trying to train the weights,
you can read them into a flat array and use that as your genome.
If you want to evolve both the weights and the size/topology, you
need more sophisticated methods. I know some people have worked on
different ways to encode neural nets (basically compressing the
layout/weight information, much the way our DNA compresses all the
information needed to build a human body). I can't find the
reference in my notes right now, however.
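The "weights as genome" idea can be sketched with a simple genetic algorithm. The tiny single-neuron network, the logical-AND task, and all GA parameters below are illustrative assumptions, not anything from the original thread.

```python
import math
import random

# Training set: logical AND, an easy linearly separable task.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(genome, x):
    # The genome IS the flat weight array: [w1, w2, bias] of one sigmoid unit.
    w1, w2, b = genome
    return 1.0 / (1.0 + math.exp(-(w1 * x[0] + w2 * x[1] + b)))

def fitness(genome):
    # Higher is better: negative squared error over the training set.
    return -sum((predict(genome, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=30, generations=60):
    pop = [[random.gauss(0, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child = [g + random.gauss(0, 0.3) for g in child]    # gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(1)
best = evolve()
print([round(predict(best, x)) for x, _ in DATA])
```

Evolving the topology as well would require a richer genome encoding (variable-length, with structural mutation operators), which is where the more sophisticated methods mentioned above come in.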
-Rich-
More information about the Squeak-dev
mailing list