
Audivolv - ArrayAlign

How evolved code will iterate over combinations of arrays.

There are 5 main functions that array size can depend on:
* "*" is the multiply function: ArrayAlign_Multiply.
* "range" is a function that gives a minimum and maximum size, which are equal for a constant size: ArrayAlign_ConstantRange.
* "^" is the power function: ArrayAlign_Power.
Each calculates required array size based on the sizes of 2 other specific arrays,
usually in the same node.
There are also ArrayAlign_PermutationCycle and ArrayAlign_Equal.
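As a rough sketch, the 3 operator-like size functions could be written as below. The class name, method names, and signatures are my own illustrative assumptions, not Audivolv's actual API.

```java
/**
 * Hypothetical sketch of the 3 operator-like size functions.
 * Names and signatures are assumptions, not Audivolv's real API.
 */
public final class ArrayAlignSketch {

    /** ArrayAlign_Multiply: required size = xs * ys */
    static int multiplySize(int xs, int ys) {
        return xs * ys;
    }

    /** ArrayAlign_Power: required size = xs ^ ys (integer power) */
    static int powerSize(int xs, int ys) {
        int size = 1;
        for (int i = 0; i < ys; i++) {
            size *= xs;
        }
        return size;
    }

    /** ArrayAlign_ConstantRange: minimum == maximum means a constant size */
    static boolean inRange(int size, int min, int max) {
        return min <= size && size <= max;
    }

    public static void main(String[] args) {
        System.out.println(multiplySize(2, 5)); // prints 10
        System.out.println(powerSize(2, 5));    // prints 32
        System.out.println(inRange(7, 7, 7));   // prints true (constant size 7)
    }
}
```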

For example, a bayesian node's floating point array's size is 2^childArraySize,
and childArraySize may have a maximum of 7 to avoid a large floating point array.
It will not be possible to evolve nodes whose total array sizes can ever exceed
some user-definable limit, like 10000.
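A sketch of that limit check, assuming xs = 2: the cap of 7 children, the 2^childArraySize weight array, and the limit of 10000 come from the text above, while the method name and exact bookkeeping are my assumptions.

```java
/**
 * Sketch of the user-definable total-array-size limit. The cap of 7
 * children and the limit 10000 follow the text; the rest is assumed.
 */
public final class SizeLimitSketch {
    static final int MAX_TOTAL = 10000; // user-definable limit

    /** True if a bayesian node with this many children fits under the limit. */
    static boolean fits(int childArraySize) {
        if (childArraySize > 7) {
            return false;                    // cap to avoid a large floating point array
        }
        long weights = 1L << childArraySize; // 2^childArraySize floating points
        long sums = 2L * childArraySize;     // xs*ys floating points, xs = 2
        long total = 2 + childArraySize + weights + sums;
        return total <= MAX_TOTAL;
    }

    public static void main(String[] args) {
        System.out.println(fits(7));  // prints true: 2 + 7 + 128 + 14 = 151
        System.out.println(fits(20)); // prints false: rejected by the cap of 7
    }
}
```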

s means size of an array, so xs is the size of array x.
i means the current index while iterating, so xi is the current index into array x.
xi*ys means ys quantity of xi. It does not mean xi quantity of ys.
(xs*ys)i means the current index of an iteration from 0 to xs*ys-1.
x --> y means x contains all information in y, but the reverse, y --> x, may not hold.

xi --> xs
yi --> ys
xs^ys --> xs
xs^ys --> ys
xs*ys --> xs
xs*ys --> ys
xi*ys --> xs^ys
(xs^ys)i --> xi*ys
xi*ys --> (xs^ys)i
(xs*ys)i --> xi
(xs*ys)i --> yi
(xs*ys)i --> xs*ys
yi*(xi*ys) --> (xs^ys)i
yi*(xi*ys) --> (xs*ys)i
yi*(xs^ys)i --> ys*(xs^ys)i
yi*(xs^ys)i --> yi*(xi*ys)
This is a description of a branch of math. Following this path, there must be things much more effective than bayesian-networks that can be written in a few lines. The most complex part of a bayesian-network is yi*(xs^ys)i.
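Two of the rules above, (xs*ys)i --> xi and (xs*ys)i --> yi, can be sketched as a single flat index that recovers both component indexes. The modulo/divide layout convention here is my assumption; the list above only says the information is recoverable.

```java
/**
 * Sketch of "(xs*ys)i --> xi" and "(xs*ys)i --> yi": one flat index over
 * an array of size xs*ys recovers both component indexes. The
 * modulo/divide convention is an assumed layout.
 */
public final class IndexSketch {

    /** Splits flat index k, which is (xs*ys)i, into {xi, yi}. */
    static int[] split(int k, int xs) {
        return new int[] { k % xs, k / xs }; // xi, yi
    }

    public static void main(String[] args) {
        int xs = 2, ys = 3;
        for (int k = 0; k < xs * ys; k++) { // k is (xs*ys)i
            int[] xy = split(k, xs);
            System.out.println(k + " -> xi=" + xy[0] + " yi=" + xy[1]);
        }
    }
}
```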

A bayesian-node's arrays would include: an array containing BAYESFALSE and BAYESTRUE, size xs = 2;
an Object array of child-bayesian-nodes, size ys; a floating point array of bayesian-weights,
size xs^ys; and a floating point array of sums of weights for each child
(1 floating point for BAYESFALSE and 1 for BAYESTRUE, for each child), size xs*ys.
The function that processes a bayesian node would obey the iteration order defined by yi*(xs^ys)i,
or ys*(xs^ys)i would also work if you wanted to use ys quantity of floating points at a time.
yi*(xs^ys)i lets you use a constant quantity of floating points each iteration,
one that does not depend on the quantity of bayesian node children. Variations of these iteration orders,
array sizes, types of arrays, etc. will evolve, but for now I'm hard-coding a few of the
best artificial intelligence algorithms in a way that can evolve. I'm writing them
as data instead of algorithms. Javassist can optimize these long chains of logic
into a single Java class while Audivolv runs.
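A minimal sketch of those 4 arrays and the yi*(xs^ys)i iteration order, assuming xs = 2. Only the array sizes and the loop structure come from the description above; the sum update inside the loop is a placeholder, and the bit-indexing of weight combinations is my assumption.

```java
/**
 * Sketch of a bayesian node's 4 arrays and the yi*(xs^ys)i iteration
 * order, with xs = 2. The update rule is a placeholder; only array
 * sizes and loop structure come from the description in the text.
 */
public final class BayesNodeSketch {

    /** For each child yi and each of the xs^ys weight combinations,
     *  add the weight to that child's BAYESFALSE or BAYESTRUE sum. */
    static double[] computeWeightSums(double[] weights, int ys) {
        double[] weightSums = new double[2 * ys];         // size xs*ys, xs = 2
        for (int yi = 0; yi < ys; yi++) {                 // yi* ...
            for (int wi = 0; wi < weights.length; wi++) { // ... (xs^ys)i
                int truth = (wi >> yi) & 1; // child yi's truth bit in combination wi
                weightSums[truth * ys + yi] += weights[wi];
            }
        }
        return weightSums;
    }

    public static void main(String[] args) {
        int ys = 3;                             // quantity of child nodes
        double[] truthValues = new double[2];   // BAYESFALSE, BAYESTRUE: size xs = 2
        Object[] children = new Object[ys];     // size ys
        double[] weights = new double[1 << ys]; // size xs^ys = 2^3 = 8
        java.util.Arrays.fill(weights, 1.0);
        double[] sums = computeWeightSums(weights, ys); // size xs*ys = 6
        System.out.println(sums.length);        // prints 6
        System.out.println(sums[0]);            // prints 4.0 (half of the 8 weights)
    }
}
```

A constant quantity of floating points is touched per inner iteration here, matching the reason the text prefers yi*(xs^ys)i.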