ArrayAlign | ArrayAlignHierarchy | ArrayAlign_ConstantRange | ArrayAlign_Equal | ArrayAlign_Multiply | ArrayAlign_PermutationCycle | ArrayAlign_Power | AudivolvNetwork | AudivolvSummary | Brainwaves | CodeSimian | CodeTree | Func | FuncPurpose_MeasureOtherFunc | FuncPurpose_SoundColorMouse | FuncPurpose_Unknown | HardCoded_CodeTree_BreedFunction | HeapQueue | HighStandards | Hypercube | Javassist | LongTermGoalsOfAudivolvDesign | MindReadingHelmet | MovementScore | NeatVsScruffy | NoSpying | Node | NormedCodeTree | PermutationCompilerAndInterpreter | Plugin | ShortTermGoalsOfAudivolvDesign | SoundCardPart | Template | TuringTests_per_second | flo | int | ob

Audivolv - ShortTermGoalsOfAudivolvDesign

This is short-term. Also read LongTermGoalsOfAudivolvDesign.


TODO change all "double" and "Object" to "flo" and "ob" which are Audivolv's names for them.

The functions will be evolvable. The main types of arrays will be Object and floating point.
For example, a network with cycles will have
Object arrays within Object arrays ... leading back to the first Object array.
Those Object arrays will also have floating point arrays
which contain the number data for the nodes.
They will have integer arrays for more complex algorithms like priority-queues.

Much of this depends on designs in Audivolv that may be temporary, like having 1 global (per computer) SimpleFuncInfo object that tells how many speaker channels, microphone channels, mouse channels, color output channels, and general-purpose vars the Func can use.

A Func has 1 main function: run(double[],int,Object[],int). The double[], and/or a double[] stored at an index of the Object[], can be the flos specified in the SimpleFuncInfo. The Object[] will be a node in a connectionist network, similar to a neural or Bayesian network but designed specifically for each node to play audio and learn which other nodes sound good when it connects to them.
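The run(double[],int,Object[],int) signature above can be sketched as a Java interface. This is a minimal sketch, assuming the two int parameters are offsets into the arrays and assuming example flo indexes (the mouseInX and speakerOutLeft positions here are hypothetical, not from the real SimpleFuncInfo):

```java
// Hedged sketch of the Func interface described in the text.
// The meaning of the int parameters (array offsets) is an assumption.
interface Func {
    void run(double[] flo, int floStart, Object[] ob, int obStart);
}

// A trivial hand-written Func of the kind Javassist would generate.
// Index choices (10 = mouseInX, 0 = speakerOutLeft) are assumptions.
class SineFunc implements Func {
    public void run(double[] flo, int floStart, Object[] ob, int obStart) {
        double mouseInX = flo[floStart + 10];
        // Math.sin always returns a value in [-1,1], so the flo range rule holds
        flo[floStart] = Math.sin(mouseInX * 305);
    }
}
```

A Javassist-generated class would have the same shape, with the body assembled as a Java source string and compiled at runtime.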

Example indexes that a SimpleFuncInfo specifies: speakerOutLeft, speakerOutRight, colorOutRed, colorOutBlue, predictOutMouseX, predictOutMouseY, predictOutMouseXLater, predictOutMouseYLater, colorOutGreen, (not using microphone), mouseInX, mouseInY, mouseInXLater, mouseInYLater, var20, var21, var22... var30. Some small number of indexes must stay fixed across the whole Audivolv software for a few minutes while the Human user moves the mouse to play audio on this network.

The network should evolve Func objects and create new nodes using them. The network will evolve for the purpose of predicting the Human user's mouse movements, and will be scored higher when it predicts more seconds ahead of time.

The heapQueue (priority queue) algorithm I wrote about in the other documents should be used in a few places, including the child list of each node.

Each node is an Object array which contains only arrays of various types, and you can find cycles in those arrays leading back to most nodes. The data of Audivolv networks are stored as arrays and Javassist-generated Func objects, which can be stored simply as a text file of Java code and compiled on-the-fly again the next time Audivolv runs. All this will be easy to send across the internet.

Node has:

* A size-1 int array, standard for all nodes in Audivolv. The 1 int is the index in the global node array (type Object[][], because a node is Object[]).

* There may be other arrays to store other info in the node.
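The node layout described above can be sketched as a small factory method. Only the size-1 int[] index array is stated in the text; the other slots and their positions here are assumptions for illustration:

```java
// Hedged sketch of a node as an Object[] containing only arrays.
// Slot 0 (the size-1 int[] global index) is from the text;
// slots 1 and 2 are assumed examples of "other arrays".
class NodeSketch {
    static Object[] newNode(int globalIndex, int floCount) {
        Object[] node = new Object[3];
        node[0] = new int[]{ globalIndex }; // index in the global Object[][] node array
        node[1] = new double[floCount];     // per-node flos (assumption)
        node[2] = new Object[0];            // child nodes, can form cycles (assumption)
        return node;
    }
}
```

Because every slot is an array of a primitive type, an Object[], or a Func, a whole network can be walked, serialized as Java source text, and rebuilt later, as the previous paragraph describes.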

Each edge from a node to its child node will include at least:

* At least 1 slow-changing flo, similar to how a Human brain does "long term potentiation" vs faster "neuromodulation"; basically there are levels of long-term and short-term memory. I do not know how many long-term and short-term flos are needed in the Audivolv node.

* Flo connection strength. Flo means floating point number. All flos in the whole network are required to stay in the range -1 to 1 at all times. Evolved Java code can contain numbers outside that range as long as all variables read and written are in that range. Example: speakerOutLeft = Math.sin(mouseInX*305);

* A Func object that is probably shared between many nodes in the network. There will be thousands of Funcs in the same network, and there are no restrictions on which Funcs can be in which nodes. As described above, the Func uses the flos described by the global SimpleFuncInfo. Each node may have its own flos described by SimpleFuncInfo, and/or there may be a global array of flos where flos are copied to/from nodes before running. Deep recursion of Func in the network (finding more Funcs and nodes and recursing farther) should be avoided. Instead, recursion should be only 1 or a few levels deep and should set vars in some nodes which cause them to be sorted in the heapQueue (a priority queue needing only 2 int arrays, 1 flo array, and 1 Func array that the priority is about (edges to other nodes)) so they will run again soon. No deep recursion; use the heapQueue instead.

* A child node object. All nodes are designed with the same flexible constraints (array sizes are restricted to some function of multiplies and powers of other array sizes in the same node).
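The heapQueue mentioned in the bullets above is described only as 2 int arrays, 1 flo array, and 1 Func array. A minimal sketch of that layout as a binary max-heap follows; the Func array is omitted here for brevity, and all field names and the max-heap ordering are assumptions:

```java
// Hedged sketch of the heapQueue: a binary max-heap over node indexes,
// stored in parallel arrays as the text describes (Func array omitted).
class HeapQueue {
    final int[] heap;    // heap position -> node index
    final int[] pos;     // node index -> heap position (for fast updates)
    final double[] prio; // node index -> priority flo, kept in [-1,1]
    int size;

    HeapQueue(int capacity) {
        heap = new int[capacity];
        pos = new int[capacity];
        prio = new double[capacity];
    }

    void push(int node, double priority) {
        prio[node] = priority;
        heap[size] = node;
        pos[node] = size;
        siftUp(size++);
    }

    int popMax() { // the highest-priority node runs next
        int top = heap[0];
        heap[0] = heap[--size];
        pos[heap[0]] = 0;
        siftDown(0);
        return top;
    }

    private void siftUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (prio[heap[parent]] >= prio[heap[i]]) break;
            swap(i, parent);
            i = parent;
        }
    }

    private void siftDown(int i) {
        for (;;) {
            int l = 2 * i + 1, r = l + 1, big = i;
            if (l < size && prio[heap[l]] > prio[heap[big]]) big = l;
            if (r < size && prio[heap[r]] > prio[heap[big]]) big = r;
            if (big == i) break;
            swap(i, big);
            i = big;
        }
    }

    private void swap(int a, int b) {
        int t = heap[a]; heap[a] = heap[b]; heap[b] = t;
        pos[heap[a]] = a; pos[heap[b]] = b;
    }
}
```

Shallow Func recursion would push node indexes here with a priority flo, and the main loop would pop the max and run that node's Funcs, instead of recursing deeply.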

While the Human user plays sound with the mouse using the network, new Java classes (type Func) will be created as simple genetic mutations of good-scoring Funcs already in the network. New nodes will grow and start to use those Funcs, existing nodes will gain edges with those Funcs, and the whole network shape can change without stopping the sound.

EFFICIENCY: It is extremely important to do that efficiently, because it must play sound that, after a few minutes of evolution, the Human interprets as music. On a 2 GHz computer, I tested the Func interface with very simple code and got up to 30 million function calls per second. Javassist creates new Func classes as efficient as if they were compiled before Audivolv started, and it takes less than 1 second per class. The Math.sin(double) function and other simple math will be used a lot. I guess that the 2 GHz computer will run these bigger Funcs 300,000 times per second, if nothing else is being done simultaneously. For 2-channel 44.1 kHz audio (CD-quality sample rate), at least 44100 Funcs must run per second. It's not 88200 per second because a Func handles all audio channels, mouse movements, color, and other devices at the same time. That will allow 7-level recursion or FUNC STACKING, which is needed for this new form of Artificial-Intelligence I am defining here.

FUNC STACKING is when the flos described by the global SimpleFuncInfo, from 2 or more Funcs, are used back-to-back without erasing them. Example: If SimpleFuncInfo defines 22 flos... Set all 22 flos to 0. Run the first Func on them, changing some of them. All 22 flos always stay between -1 and 1. Do not reset the 22 flos. Immediately run the next Func on those 22 flos. Run the third Func on the 22 flos... You use a sequence of Funcs as 1 Func, and those Funcs can be wrapped in a single Func.
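The plain form of FUNC STACKING described above can be sketched as a wrapper Func that runs a sequence of Funcs on the same flo array without clearing it between calls. The Func interface is repeated here so the sketch is self-contained; everything beyond the run signature is an assumption:

```java
// Hedged sketch of plain FUNC STACKING: each Func sees the flos
// exactly as the previous Func left them (no reset between calls).
interface Func {
    void run(double[] flo, int floStart, Object[] ob, int obStart);
}

class StackedFunc implements Func {
    private final Func[] funcs;

    StackedFunc(Func... funcs) { this.funcs = funcs; }

    public void run(double[] flo, int floStart, Object[] ob, int obStart) {
        // "You use a sequence of Funcs as 1 Func"
        for (Func f : funcs) f.run(flo, floStart, ob, obStart);
    }
}
```

Because StackedFunc itself implements Func, stacks can be nested, which matches "those Funcs can be wrapped in a single Func".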

There are many ways to do FUNC STACKING, and some of them modify the flos between Func calls. Example: Remember what the 22 flos are, see what the next Func changes them to, and do a weighted average of the previous values and the changed values, so the 22 flos will change slower. You can do calculus on that, interpret the changes like a Bayesian node would, or do other advanced math.
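The weighted-average variant above can be sketched as another Func wrapper: snapshot the flos, run the inner Func, then blend old and new values. The blend-weight parameter, the hard-coded 22 flos, and the Func interface details are assumptions:

```java
// Hedged sketch of the weighted-average FUNC STACKING variant:
// flos move only part of the way toward what the inner Func computed.
interface Func {
    void run(double[] flo, int floStart, Object[] ob, int obStart);
}

class SmoothedFunc implements Func {
    static final int FLO_COUNT = 22; // from the 22-flo example in the text

    private final Func inner;
    private final double weight; // 0 = keep old values, 1 = plain stacking

    SmoothedFunc(Func inner, double weight) {
        this.inner = inner;
        this.weight = weight;
    }

    public void run(double[] flo, int floStart, Object[] ob, int obStart) {
        double[] before = new double[FLO_COUNT];
        System.arraycopy(flo, floStart, before, 0, FLO_COUNT); // remember the flos
        inner.run(flo, floStart, ob, obStart);                 // see what changes
        for (int i = 0; i < FLO_COUNT; i++)                    // weighted average
            flo[floStart + i] = (1 - weight) * before[i] + weight * flo[floStart + i];
    }
}
```

A weighted average of two values in [-1,1] stays in [-1,1], so this variant preserves the flo range rule automatically.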

By mixing FUNC STACKING with recursion (limited to depth 7, for example) through the network (of Object[] nodes which each have more nodes, Funcs, flos, etc.), audio will slowly echo through the network, while at most 7 nodes are used at a time for practical speed reasons. 7*44100 = 308700 Func calls would be done per second, in this example. It's important to do FUNC STACKING for the network to learn good values of the slow-changing flos, which are the long-term memory of the network.

Each neuron predicting your mouse movements, and being scored on its ability to do that, and the COMBINATIONS of Funcs that are FUNC-STACKED together, are what make this Artificial-Intelligence algorithm unique. It's all audio, but it's also the "brainwaves" of the AI, and theoretically, if you play this intelligent musical instrument with your mouse for long enough, your Human brainwaves will flow with the Audivolv brainwaves across the internet and into other Humans' brains. It's only music and mouse movements.

The most important part is the nodes in the network each predicting where you will move the mouse next, and having multiple nodes that could start playing sound next, depending on where you move the mouse relative to those predictions. This method of software evolution does not need the Human user to say "good" or "bad" at any time. It will interpret the Human's state of mind by how they move the mouse, in the context of what sound is playing.

Later, pictures and text could be displayed in the Audivolv window automatically, depending on the sound and mouse movements, which would affect how you move the mouse a little, and Audivolv could theoretically learn what those things are from your reactions. Other text-based Artificial-Intelligence software can know that a "dog" has 4 legs and barks, but only Audivolv will know what it "feels like" to have a dog bark at you, interpreted from the changes in mouse movements of Humans who suddenly see a picture of a dog barking. The "Audivolv Network" should form a limited copy of the Human brains who use it, so their intuition still flows through the internet after they turn their computers off.

REVERSE LINKS: LongTermGoalsOfAudivolvDesign