Selection criteria and reproduction 

I have just downloaded Framsticks and I am impressed.

Most of my suggestions seem to require additional features in the engine
itself -- I don't know whether such suggestions are sought, but for what
it's worth here they are:

I would like to see less goal-driven selection criteria. Could a
reproductive fitness criterion be introduced? You would not need to actually
implement sexual reproduction, but could model it crudely by counting the
number of times an individual makes physical contact with a conspecific
individual.
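To illustrate, here is a rough Python sketch of the kind of thing I mean (all names are my own invention, nothing here is actual Framsticks code):

```python
import itertools

def contact_fitness(positions, radius=1.0):
    """Count pairwise 'contacts': pairs of individuals whose
    centres are within `radius` of each other this time step.
    Each individual's fitness is its number of contacts."""
    fitness = [0] * len(positions)
    for i, j in itertools.combinations(range(len(positions)), 2):
        dx = positions[i][0] - positions[j][0]
        dy = positions[i][1] - positions[j][1]
        if dx * dx + dy * dy <= radius * radius:
            fitness[i] += 1
            fitness[j] += 1
    return fitness
```

Summing this over a lifetime would give the crude "reproductive fitness" measure, with no actual mating mechanics needed.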

Some other ideas which might be interesting to explore:

Communication. Individuals can emit/detect a signal which identifies
their genotype (like a pheromone).
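As a toy sketch of the pheromone idea (again, pure invention on my part): each emitter broadcasts a signal tagged with its genotype, and a detector senses the distance-attenuated sum.

```python
def detect_signal(detector_pos, emitters, decay=1.0):
    """Sum of pheromone-like signals sensed at `detector_pos`.
    Each emitter is ((x, y), genotype_id, strength); the signal
    falls off with squared distance. Returns a dict mapping
    genotype_id -> total sensed intensity."""
    sensed = {}
    for (ex, ey), gid, strength in emitters:
        dx, dy = detector_pos[0] - ex, detector_pos[1] - ey
        intensity = strength / (1.0 + decay * (dx * dx + dy * dy))
        sensed[gid] = sensed.get(gid, 0.0) + intensity
    return sensed
```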

Learning. Neural weights could be modified during the individual's
lifetime. (Hebbian or competitive learning rules would be worth exploring.)
This might require a higher level genetic representation format.

I am a C/matlab programmer with knowledge of neural nets and GAs and would
like to contribute, but don't have much free time.

Tom


> I would like to see less goal-driven selection criteria. Could a
> reproductive fitness criterion be introduced? You would not need to actually
> implement sexual reproduction, but could model it crudely by counting the
> number of times an individual makes physical contact with a conspecific
> individual.

Sure. Framsticks v2 evolution will be script-controlled, so it
is possible to devise any scenario of evolutionary process.

> Some other ideas which might be interesting to explore:
> Communication. Individuals can emit/detect a signal which identifies
> their genotype (like a pheromone).

That will most probably be introduced in the future...

> Learning. Neural weights could be modified during the individual's
> lifetime. (Hebbian or competitive learning rules would be worth exploring.)
> This might require a higher level genetic representation format.

Higher level... why? What do you think would be the
advantages of Hebbian/competitive learning?

MacKo

> > Learning. Neural weights could be modified during the individual's
> > lifetime. (Hebbian or competitive learning rules would be worth exploring.)
> > This might require a higher level genetic representation format.
>
> Higher level... why? What do you think would be the
> advantages of Hebbian/competitive learning?
>
> MacKo

I'm not quite sure what you mean by "advantages" here.

If you mean the particular advantages of Hebbian and competitive learning,
they are that these algorithms are unsupervised (like evolution) and
relatively simple to implement while providing a significant computational
advantage to the organism. Also, Hebbian learning at least is biologically
plausible.
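For concreteness, here are the two rules as minimal Python sketches (my own toy versions, not tied to any Framsticks structures):

```python
def hebbian_step(weights, pre, post, lr=0.1, decay=0.01):
    """One Hebbian update: strengthen weights[i][j] in proportion
    to the co-activity of pre-synaptic unit i and post-synaptic
    unit j, with a small decay term to keep weights bounded."""
    for i, x in enumerate(pre):
        for j, y in enumerate(post):
            weights[i][j] += lr * x * y - decay * weights[i][j]
    return weights

def competitive_step(weights, x, lr=0.1):
    """One competitive ('winner-take-all') update: the output unit
    whose weight vector is closest to input x moves its weights
    towards x; all other units are left unchanged."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))
    weights[winner] = [wi + lr * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return winner
```

Both are unsupervised: neither update needs a teacher signal, only the activity the organism generates itself.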

If you mean the advantages of learning in general to the organism, there are
many and they are fairly obvious, but I would stress two: 1) Learning
permits an organism to acquire adaptive behaviours within its lifetime, and
thus to adapt to rapid changes in the environment. 2) As the weights of the
neural net do not have to be propagated genetically, a functional and
adaptive neural system can be specified with a less complicated genetic
code. It is very much akin to the difference between genotype formats f0 and
f4: learning is essentially analogous to a developmental process by which an
effective "brain" is built from a simple set of instructions. In this case
though the "building" consists of modifying connection strengths rather than
determining their physical presence or absence. This is why another (higher
level) genotypic code would be required (to express the simple set of
instructions for changing connection strengths, rather than the fixed
connection strengths).
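A toy sketch of what I mean by the genotype carrying learning instructions rather than weights (everything here is invented for illustration):

```python
# Instead of listing every fixed weight (as an f0-style genotype
# would), the genome carries only the parameters of a learning
# rule; the actual weights emerge during the individual's
# "lifetime" of experience.
def develop_brain(genome, experience):
    """genome: (n_inputs, learning_rate, decay) -- a handful of
    numbers, no matter how many weights the net ends up with.
    experience: list of (pre_activities, post_activity) pairs
    encountered during life; returns the learned weights."""
    n, lr, decay = genome
    weights = [0.0] * n            # all weights start neutral
    for pre, post in experience:
        for i in range(n):
            weights[i] += lr * pre[i] * post - decay * weights[i]
    return weights
```

The genome stays the same size however large the net grows, which is exactly the compression I had in mind.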

The big advantage of learning is that it permits the evolution of more
complex neural systems and behaviours, and so makes the evolutionary
simulation even more interesting. Note though that learning could help with
very basic behaviours. For instance, an organism could feasibly learn the
neural connections required to move efficiently. Although these connections
would not be passed directly to its offspring, the neural hardware (neurons
plus learning instructions) would, and so be subject to selection.

Allowing the weights to change would require positive or negative feedback while
the frams are learning, to strengthen or weaken the weights, possibly producing a
fitter individual that lasts longer (learns more and improves further)
rather than weeding out the weaklings after the fact. This would be beneficial,
of course, but the overhead would be immense, and evolution would slow to
a crawl.

Tom Hartley wrote:

> > > Learning. Neural weights could be modified during the individual's
> > > lifetime. (Hebbian or competitive learning rules would be worth exploring.)
> > > This might require a higher level genetic representation format.
> >
> > Higher level... why? What do you think would be the
> > advantages of Hebbian/competitive learning?
> >
> > MacKo
>
> I'm not quite sure what you mean by "advantages" here.
>
> If you mean the particular advantages of Hebbian and competitive learning,
> they are that these algorithms are unsupervised (like evolution) and
> relatively simple to implement while providing a significant computational
> advantage to the organism. Also, Hebbian learning at least is biologically
> plausible.
>
> If you mean the advantages of learning in general to the organism, there are
> many and they are fairly obvious, but I would stress two: 1) Learning
> permits an organism to acquire adaptive behaviours within its lifetime, and
> thus to adapt to rapid changes in the environment. 2) As the weights of the
> neural net do not have to be propagated genetically, a functional and
> adaptive neural system can be specified with a less complicated genetic
> code. It is very much akin to the difference between genotype formats f0 and
> f4: learning is essentially analogous to a developmental process by which an
> effective "brain" is built from a simple set of instructions. In this case
> though the "building" consists of modifying connection strengths rather than
> determining their physical presence or absence. This is why another (higher
> level) genotypic code would be required (to express the simple set of
> instructions for changing connection strengths, rather than the fixed
> connection strengths).
>
> The big advantage of learning is that it permits the evolution of more
> complex neural systems and behaviours, and so makes the evolutionary
> simulation even more interesting. Note though that learning could help with
> very basic behaviours. For instance, an organism could feasibly learn the
> neural connections required to move efficiently. Although these connections
> would not be passed directly to its offspring, the neural hardware (neurons
> plus learning instructions) would, and so be subject to selection.


> Allowing the weights to change would require positive or negative feedback while
> the frams are learning, to strengthen or weaken the weights, possibly producing a
> fitter individual that lasts longer (learns more and improves further)
> rather than weeding out the weaklings after the fact. This would be beneficial,
> of course, but the overhead would be immense, and evolution would slow to
> a crawl.

The idea of 'learning' is interesting, but I cannot
imagine it in detail, because of the lack of
information about which feedback is positive and which
is not.

MacKo

One way to learn is through training. There is generally a pattern-matching,
rule-based, or formula-based solution to any number of problems. A problem is presented to
the input neurons and the output neurons are forced to the correct answer. The weights
between all fired neurons are then increased slightly. As more and more problem-and-solution
examples are presented, the weights slowly adjust so as to find an acceptable
formula, rule, or pattern-conversion solution.
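A minimal sketch of that training scheme in Python (all names invented, just to show the mechanism):

```python
def present_example(weights, inputs, forced_output, lr=0.05):
    """Present one training example: the inputs fire, the output
    neuron is clamped to the correct answer, and the weight from
    every co-firing input to the output is increased slightly."""
    for i, x in enumerate(inputs):
        if x > 0 and forced_output > 0:   # both neurons fired
            weights[i] += lr
    return weights
```

Repeating this over many examples is what slowly shapes the weights toward an acceptable solution.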

Another way to learn is through feedback. Each iteration, the individual's fitness is
measured. An increase in fitness causes a slight increase in the weights between
all "ON" neurons, and a decrease in fitness causes a slight weakening of the weights
between them.
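And a sketch of the feedback scheme (again, my own illustration only):

```python
def feedback_step(weights, active_pairs, fitness, prev_fitness, lr=0.02):
    """Reward-style update: if fitness rose this iteration,
    strengthen the weights between all co-active ('ON') neurons;
    if it fell, weaken them. `active_pairs` lists (i, j) pairs of
    co-active neurons; `weights` maps such pairs to strengths."""
    if fitness > prev_fitness:
        delta = lr
    elif fitness < prev_fitness:
        delta = -lr
    else:
        delta = 0.0
    for pair in active_pairs:
        weights[pair] = weights.get(pair, 0.0) + delta
    return weights
```

Here the fitness change itself supplies the "which feedback is positive" signal, so no teacher is needed.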

Maciej Komosinski wrote:

> > Allowing the weights to change would require positive or negative feedback while
> > the frams are learning, to strengthen or weaken the weights, possibly producing a
> > fitter individual that lasts longer (learns more and improves further)
> > rather than weeding out the weaklings after the fact. This would be beneficial,
> > of course, but the overhead would be immense, and evolution would slow to
> > a crawl.
>
> The idea of 'learning' is interesting, but I cannot
> imagine it in detail, because of the lack of
> information about which feedback is positive and which
> is not.
>
> MacKo


> One way to learn is through training...
> Another way to learn is through feedback...

I know these, but as I said,

>>I cannot
>>imagine it in detail, because of the lack of
>>information about which feedback is positive and which
>>is not.

Even a useful implementation of unsupervised learning,
SOM-like, Hebbian-like, etc., is not clear to me.
Any specific suggestions: how, and why?

MacKo

How long until v2?

I am going mad with anticipation!

Cheers,

Ander