Do not accuse AI of simply parroting information back to you. The smartest AI is not yet smart enough to repeat like a parrot. Some parrots listen to themselves talk and adjust their voice to sound like what they heard; copying sound from microphone to speakers is not the same. Parrots often repeat at moments that demonstrate intelligence, and stay silent at others. They hear something similar to what they heard many times before and repeat the next few sounds, or they do that for multiple sounds at once, repeating what that group of sounds may lead to. And parrots do not get confused if a sound comes from behind a wall or if it echoes.

Google has a sound search that uses filenames and the text around the sounds, but http://findsounds.com beat Google to a real audio search: it finds things that sound similar to your search audio.

Read this NEWS about "Instruments on the verge of a musical revolution" (not about my software; that comes later):


Instruments on the verge of a musical revolution 13 May 2008

The world’s first intelligent digital musical instrument that can learn to make music in the style of any musician playing it has been developed by experts at the University of Plymouth.

The research – undertaken by PhD student Marcelo Gimenes and Professor Eduardo Miranda at the Interdisciplinary Centre for Computer Music Research (ICCMR) at Plymouth – means that musicians will soon be able to jam with machines that hold virtual musical clones of themselves, the profiles of other musicians, or both.

Unlike existing technology that needs to be pre-programmed to make music, the new software developed at the ICCMR ‘listens’ to the musician and learns their style so that it can improvise with them.

When it listens to the musician, the software creates a memory from interconnected streams of data. Incoming musical information is compared with the data already stored in the system and the memory is subsequently updated on the fly.

The ICCMR is developing new intelligent musical systems that will be able to evolve their own rules for musical composition and the ability to interact with musicians and listeners in much more sophisticated ways than at present.

Professor Eduardo Miranda, Head of the ICCMR, said: “We predict the emergence of new kinds of intelligent musical instruments that could revolutionise the music industry. Such instruments will be able to learn basic musical skills autonomously as they are played. New forms of music making will certainly emerge along with new business opportunities.”




ben@audivolv.com 408.334.7214
View my résumé (and the email I check more often, for jobs).
I'm looking for a software job near San Jose, California, USA. --Ben Rayfield.

Audivolv - where Instruments Play The Musicians
What it does now: A unique Artificial-Intelligence that evolves Java code to create mouse-music instruments as you prefer them to sound.

Creating "Natural Language Mouse Interface"... http://sourceforge.net/projects/natlangmouse (open-source GNU GPL 3+, or call to negotiate a written nonexclusive proprietary license)

I'm redesigning this website (not Audivolv) using Javascript (compatible with the 5 major browsers) and a very simple AI algorithm.

SUMMARY: A Javascript software for intelligent (neural-network) dynamic movement of text in a webpage based on mouse movements. Touch the mouse to a few texts on the screen that interest you, and text relevant to that combination will move toward them.

DETAILS: A Javascript software for intelligent dynamic movement of text in a webpage based on mouse movements. Divide the text into small sections, maybe 3 sentences each, with a much shorter title for each. Display each of those as a small rectangle in a web browser; each rectangle has a position and speed and tries not to overlap other visible rectangles too often. When there are too many rectangles on the screen, the least important or least used become invisible, and later become visible again when the artificial-intelligence code thinks they are relevant. Touch the mouse to a few of these rectangles in any combination. Their color becomes brighter and slowly decays, flowing color recursively through the network of rectangles. The network of rectangles is shaped like a neural-network, where edges are defined by substring matching: each rectangle's title links it to any other rectangle whose body contains that title. There will be no clicking or scrolling in this webpage. Instead of clicking a link, touch the mouse to the text that would have been a link (and to a few others), and the rectangles that the link would have gone to will move toward where your mouse was, pushing other rectangles out of the way. In an intuitive way, the user will see color and moving text on the screen, touch the mouse to whatever they are interested in, and the relevant information will come to them.
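Here is a minimal sketch of that rectangle network, written in Java to match the other code on this page (the real version will be Javascript). The class and method names are illustrations, not the actual natlangmouse code:

import java.util.*;

class TextRect {
    String title, body;     // short title, ~3 sentences of body text
    double brightness;      // color activation; decays each frame
    List<TextRect> edges = new ArrayList<>();
    TextRect(String title, String body) { this.title = title; this.body = body; }
}

class RectNetwork {
    List<TextRect> rects = new ArrayList<>();

    // Edges by substring matching: rectangle A links to B
    // if A's title appears in B's body.
    void buildEdges() {
        for (TextRect a : rects)
            for (TextRect b : rects)
                if (a != b && b.body.contains(a.title))
                    a.edges.add(b);
    }

    // Touching a rectangle brightens it; brightness flows to neighbors
    // and slowly decays, like activation in a neural network.
    void step(TextRect touched) {
        if (touched != null) touched.brightness = 1.0;
        for (TextRect r : rects)
            for (TextRect n : r.edges)
                n.brightness = Math.max(n.brightness, r.brightness * 0.5);
        for (TextRect r : rects)
            r.brightness *= 0.95; // slow decay
    }
}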

I am creating this "Natural Language Mouse Interface" software to upgrade this website.

View my résumé (and the email I check more often). I'm looking for a software job near San Jose, CA. --Ben Rayfield. Or: ben@audivolv.com 408.334.7214

Creating "Natural Language Mouse Interface"... http://sourceforge.net/projects/natlangmouse (open-source GNU GPL 3+, or call to negotiate a written nonexclusive proprietary license)
What does Audivolv do now?
* Starts when you click here then click OPEN. If it does not work, click here for how to fix it or my phone/email.
* Open-source GNU GPL 2+. Unzip that file instead of opening it to get the source-code.
* Sound reacts instantly to mouse movement.
* You move the mouse any way you like and think about how that should sound.
* You choose "Sounds Good" or "Sounds Bad".
* Audivolv tries to learn what you were thinking.
* Repeat until your musical thoughts become real. It took 5 minutes to create this example music: .OGG .MP3 .WAV
* Close Audivolv and start over if you confuse it too much.
* Audivolv is a Java software that creates Java software to change things about itself as it learns.
* Learns techno music faster than any other type, but what did you expect an AI to play first?
* There is not 1 line of music code in Audivolv until you teach it what music is. Then it writes the code.
* Find all the new musical instruments in the "Audivolving" folder it creates. Copy/paste what's in those files into the "Create Musical Instrument" tab in the options to play them again, or use them as part of a new music software (not GNU GPL 2+, because that license usually does not apply to program output). Audivolv writes reusable Java code.
What will Audivolv do years from now?
* The same thing it does now but it learns faster, and will learn what you like with no good/bad buttons.
* Privacy And Safety Policy is only relevant if you turn on the Internet option or after it has much more intelligence. It does not exist yet.
* Given examples of the best known AI algorithms (Bayesian net, neural net, evolution, etc.) rewritten in Audivolv's simpler code and data format, Audivolv will evolve an intelligent way to design and use new AI algorithms in that same format.
* Play all types of music and create new types. If you think mouse-music cannot rock, listen to my older and less advanced music program: CodeSimian. What I did manually in CodeSimian, Audivolv will do automatically and better.
* Audivolv evolves a way to intelligently design AI brainwaves and the data structures they will flow through and modify. Most AI is built for text because it's easier at first. Audivolv is built for waves because it's easier in the end. Brainwaves and music are waves. You could read about riding a bicycle for years and learn less than from trying to ride for 5 minutes. It's hard for waves to understand text, but in the end, waves will understand text exponentially better than text will understand waves.
* Audivolv designs a different music-and-mouse language between AI and each person who plays music with their mouse.
* A new kind of communication: Audivolv translates between those music-and-mouse languages (no text) between millions of people across the Internet.
* Theoretically: The file "ToDo Summary.txt" in the Audivolv Jar file summarizes my long-term theoretical plans for Audivolv. The most important parts are the "filter" of Java code (also called "firewall"), Friendly AI (for accuracy of the emergent effects), evolution of connectionist algorithms and node types, and the part after "MICROPHONE AND SPEAKERS PARROT" near the end. It will be a 5 megabyte Jar file.

** MORE TECHNICAL: Long-term goal: efficiently unify connectionist AI (the obs[] part of an audivolv.Func, in any recursive permutations of linear and exponential types) with evolution (also the obs[] part of an audivolv.Func) and hypercube vector fields (the flos[] part of an audivolv.Func, or any flo numbers recursively in permutations of linear and exponential connectionist array networks), represented in the same code and provably-predictable data format (arrays of arrays of arrays... allowing cycles and leaf nodes like audivolv.Func or numbers in hypercube range, with other constraints), code-string-firewalled for safety, so generated AI softwares can use other generated AIs as tools, starting from a few example AIs like Bayesian (a type of exponential connectionist AI) and evolution, to play better mouse-music (a hypercube vector field that includes 1 dimension for each speaker and for mouse x and y position).
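For reference, here is the audivolv.Func interface that paragraph keeps pointing at. The run signature is the one given later on this page; the declaration around it is my reconstruction, not copied from the source code:

public interface Func {
    // flos[f] and obs[o] are the current stack positions. Evolved code
    // reads and writes flo numbers (doubles in the -1..1 hypercube range)
    // and obs (Objects, including other Funcs and arrays).
    void run(double[] flos, int f, Object[] obs, int o);
}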

JOB: View my résumé (and the email I check more often). I want to creatively design platform-independent software near San Jose, California (Silicon Valley). --Ben Rayfield

Have you played Guitar Hero?
When you push buttons on the controller, it adds good or bad recorded sounds to the music (they have divided up the possible ways to play it), but it never creates new sounds. The game is to use the plastic guitar's controls in time with the music and the moving objects on screen.



That simple game is fun in a unique way. Most people want to interact with music instead of just listening, but it's hard to learn to play an instrument or professional music editing software, so many have convinced themselves that interaction is only a little more fun. Using a guitar-shaped game controller to copy the best musicians, and instantly knowing how well you are copying, satisfies some of the desire to interact with music.

Audivolv has more interaction but not the specific music people already like. Audivolv has to create music.

If Guitar Hero interacted with music half as well as the current version of Audivolv (0.1.6) does, those games would have sold 10 billion dollars instead of 1 billion.

There is something that can be done on today's computers but all the money on Earth has failed to create: Artificial General Intelligence (AGI). An AGI is an AI that can learn to learn to learn to learn, and learns about things it was not designed for.

Audivolv (version 0.1.6) is a 10%-done AGI, and every download includes text files describing how to create over half of the remaining parts (without escaping the GNU GPL 2+ license). Audivolv is not designed for music. It's designed to learn about learning, and communicating with people through mouse and music is a good way to do that.

To start Audivolv now, click here then click OPEN.
If it does not work, install Java and try again, or call/email me.

Audivolv is the only Artificial-Intelligence where you create music like a game, but it does not use recorded music. When it starts, it does not know what sound is, but it only takes a few minutes to teach it to play music like this .OGG .MP3 .WAV

Your mouse becomes a stupid musical instrument, but guitars have no brain at all. It starts as random sounds or no sound, but you teach it to sound how you like. Move your mouse to play music, click "Sounds Good" or "Sounds Bad", and in less than a second (I'll fix the delay) it writes new software that changes how your mouse sounds. It may do this a few times.


Free. Open-source GNU GPL 2+

Audivolv's Privacy And Safety Policy is only relevant in a later version after Internet code or more intelligence is added.
The Guitar Hero game series had over a billion dollars in sales since 2005.
(They buy music to create the game) ==> music/video ==> person ==> controller ==> update score on screen. Play short sound and graphic.

Audivolv's information loops (much more effective than looping sounds) so all parts improve the other parts:
(starts empty) ... ==> music/color ==> person ==> mouse/good/bad ==> Better instrument reactions: music/color ==> Learns to play new instrument: person ==> mouse/good/bad ==> ...

Audivolv is merely interesting now, but when I finish a few more of the general AI systems, playing intelligent interactive music with your mouse will be more fun than Guitar Hero. That will be after the Good/Bad buttons become obsolete and are removed. Instead, thousands of times per second, Audivolv will predict what you would have clicked.

It's not always that color. The AI learns to change the window color too, based on mouse and music.

How will Audivolv become the first global network organized by Artificial-Intelligence? Read about it at kurzweilai.net, and write a response there if you want.

"Audivolv will evolve and run 1 million lines of Java code per second continuously on a normal 3 ghz computer" Read about it or write a response.



Technical parts of Audivolv and other subjects: ArrayAlign | ArrayAlignHierarchy | ArrayAlign_ConstantRange | ArrayAlign_Equal | ArrayAlign_Multiply | ArrayAlign_PermutationCycle | ArrayAlign_Power | AudivolvNetwork | AudivolvSummary | Brainwaves | CodeSimian | CodeTree | Func | FuncPurpose_MeasureOtherFunc | FuncPurpose_SoundColorMouse | FuncPurpose_Unknown | HardCoded_CodeTree_BreedFunction | HeapQueue | HighStandards | Hypercube | Javassist | LongTermGoalsOfAudivolvDesign | MindReadingHelmet | MovementScore | NeatVsScruffy | NoSpying | Node | NormedCodeTree | Exponentially Fast Permutation Partial-Compiler And Interpreter (Nobody has ever built one of these optimized for AI) | Plugin | ShortTermGoalsOfAudivolvDesign | SoundCardPart | Template | TuringTests_per_second | flo | ob

I love science-fiction. It's why I first became interested in AI. But more than that, I love to beat it with real technology.

To those who create science-fiction movies and the robot sounds for them:




Please stop faking those sounds. Real Artificial-Intelligence is here, and it can learn to sound like the R2D2 robot if you move your mouse to teach it that. The sounds Audivolv creates are yours. The open-source license (GNU GPL 2+) grants you complete ownership of the sounds Audivolv creates (and of the softwares Audivolv creates that interact with your mouse to generate those sounds). For the same reason I have the right to play a guitar in the style of Metallica, everyone has the right to move their mouse as they please, and if they accidentally create sounds similar to any copyrighted music, the entire path of Java code is theirs, including the endpoint, which may be similar to any existing sounds. For free, use Audivolv to generate robot sounds for your next movie.

Then create a video-game about your movie and directly use the code Audivolv wrote, so the robots in your game sound the same but react to the game the way they reacted to your mouse when you recorded them for the movie. Audivolv writes reusable Java code, but only in the part of Java that can easily be rewritten in other programming-languages. For free, use Audivolv to generate interactive robot sounds for your next video-game.



Audivolv is more advanced than all of the softwares below because I'm not just working toward intelligent music. I'm working toward Artificial-General-Intelligence. The main benefit of that so far is that it uses the same AI algorithms for all devices.

Links to possibly similar softwares:

evolectronica.com
http://musicmouse.com
NEWS: "Instruments on the verge of a musical revolution"


Links about computer-music:

aaai.org AI music
Wikipedia Evolutionary music
list of music generation softwares
Many search-engines can find sounds by the text around them. Instead, "Query by humming"

The following is about Audivolv's future design, not what it does now:
Bayesian networks are the most advanced type of AI, except for time-related things. They are for statistics, and no other type comes close. Some AI softwares automatically grow Bayesian networks and choose which Bayesian nodes should connect to which other Bayesian nodes. But no AI software understands why Bayesian networks work, improves the algorithms for them, or creates variations of those algorithms. Instead, a person manually types those algorithms and they do not change. Only the data changes.
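For readers who have not met the term: a Bayesian network computes probabilities from conditional-probability tables attached to connected nodes. A tiny illustration (mine, not Audivolv code):

class TinyBayesNet {
    // A 2-node network, Clouds -> Rain, with made-up numbers.
    double pClouds = 0.3;          // P(clouds)
    double pRainGivenClouds = 0.4; // P(rain | clouds)
    double pRainGivenClear = 0.05; // P(rain | no clouds)

    // P(rain) by marginalizing over the hidden variable:
    // P(rain) = P(clouds)*P(rain|clouds) + P(no clouds)*P(rain|no clouds)
    double pRain() {
        return pClouds * pRainGivenClouds + (1 - pClouds) * pRainGivenClear;
    }
}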

A later version of Audivolv will include many of the best AI algorithms in a form that Audivolv knows how to modify. Audivolv will modify those algorithms as CodeTree objects, and will modify the designs of their data-structures as an ArrayAlign of multiple arrays. All Bayesian-network, neural-network, and other connectionist AI will use arrays as data. The arrays will contain other arrays recursively. If you look into the arrays recursively, you will often cycle back to the same array. That's how recurrent networks will be defined; most networks will have cycles. All nodes are ob arrays. For efficiency, the first array of each node is always an int array, and the first int in it is the unique name of that node in the whole Audivolv. The most common types of arrays will be int[], flo[] (Java's double primitive type), ob[] (java.lang.Object), and CodeTree[].
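A minimal sketch of that node format, with assumed helper names:

class NodeSketch {
    // Every node is an Object[] whose first element is an int[] whose
    // first int is the node's unique name in the whole Audivolv.
    static Object[] newNode(int uniqueName, int floCount, int childCount) {
        Object[] node = new Object[3];
        node[0] = new int[]{ uniqueName };
        node[1] = new double[floCount];   // flo data (Java doubles)
        node[2] = new Object[childCount]; // links to other nodes; may cycle
        return node;
    }

    // Cycles define recurrent networks: two nodes that each contain the other.
    static void exampleCycle() {
        Object[] a = newNode(1, 4, 1), b = newNode(2, 4, 1);
        ((Object[]) a[2])[0] = b;
        ((Object[]) b[2])[0] = a; // following children from a returns to a
    }
}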

Audivolv's existing design documents (get them by unzipping Audivolv*.jar) describe the most flexible AI design ever. There is no AI algorithm that cannot be written as data that Audivolv can modify while staying reasonably efficient to run that way. That does not mean Audivolv would be smart enough to modify them effectively, but it would at least modify them stupidly and without the code crashing. Audivolv uses its own versions of stack and heap: the stack is the combination of the 4 parameters of the Func.run function, and the heap is various types of arrays in arrays recursively, with cycles. Audivolv is not just a software. It's an interpreter and on-the-fly compiler of a specific new code and data format.

It's the most flexible design, but not the most efficient. It's not slow either. A very skilled software-engineer could easily design something slower by accident, because I took a long time to consider the balance between speed and flexibility. The major sacrifice in speed I made was that arrays must be full at all times. To make an array bigger, you must copy it; to make it smaller, some things in it can be marked with a tombstone. A node is an ob array, so if array resizing becomes a problem, some arrays in the node can be replaced by Java objects like dynamic-size lists, which need many function calls to use but resize quickly. Evolved code would assume arrays, but the rules for evolving code could be modified to use arrays and java.util.List interchangeably.
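A minimal sketch of the full-arrays rule, with assumed names:

class FullArrays {
    // "Removed" slots are marked instead of resizing the array.
    static final Object TOMBSTONE = new Object();

    // Growing requires copying to a bigger array.
    static Object[] grow(Object[] array, int extraSlots) {
        Object[] bigger = new Object[array.length + extraSlots];
        System.arraycopy(array, 0, bigger, 0, array.length);
        // New slots start as tombstones so the array is still "full".
        for (int i = array.length; i < bigger.length; i++) bigger[i] = TOMBSTONE;
        return bigger;
    }

    // Shrinking logically is just a mark; no copy needed.
    static void remove(Object[] array, int index) {
        array[index] = TOMBSTONE;
    }
}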

Speed will not be the bottleneck preventing smarter-than-Human intelligence on a global network of today's normal computers running Audivolv, while people play music with their mouse and the music becomes smarter.

Those paragraphs above are about Audivolv's future design, not what it does now.

Audivolv - Where Instruments Play The Musicians - is the only Artificial Intelligence that starts with no knowledge of mouse or sound but keeps learning what you like, to create better musical instruments you play with the mouse. All devices are interchangeable, so Audivolv has to learn that your mouse is not a slow-moving speaker and your speaker is not a fast-moving mouse. Throw away your keyboard and do not click the mouse. You will not need them. Evolving music is 100% mouse-motion controlled. It has a simple interface, but evolves deep interactions of mouse, sound, and color. It's fun, free, and starts immediately. Click here to try it (click OPEN to start immediately, or SAVE). You can also get Audivolv from download.com, sourceforge.net, and a few others.

If Audivolv does not work, click here to tell me about it or learn how to fix it. If it's confusing in any way, that's my fault. Unlike most softwares, I do not think of people as users and software as their tools. People and AI are 2 types of intelligence that will cooperate like the left and right halves of a global brain, and we will call it the Audivolv Network. Your brain is not mine to modify, but I can easily modify Audivolv before you download it (not after). If Audivolv confuses you, I want to understand why. You can call my phone.


Audivolv is 100% Java. It eVOLVes AUDIo, color, and musical instruments you play with the mouse. It's a Java software that creates reusable Java software without your help (and you own the Java code it writes and the music created with that code). Move the mouse to play music and touch "Sounds Good" or "Sounds Bad". Each time, it evolves a new interaction of sound, color, and mouse movements. It starts with no knowledge and no structure for representing sound (or color or mouse), like pitch, note events, or combinations of effects. Instead, it writes Java code that calculates the amount of electricity flowing to each speaker 44100 times per second (44.1 kHz), so it can theoretically evolve any simple sound. For example, teach Audivolv to make electric-guitar-like sounds when the mouse changes direction and to make the window more blue when the sound is more distorted on the left speaker or louder on the right, or anything else it predicts you will call "Sounds Good". If it makes no sound (or ignores the mouse), touch "Sounds Bad" and it learns to play sound (and watch the mouse) more often. License is open-source GNU GPL 2+.
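To make that 44.1 kHz loop concrete, here is a minimal sketch using Java's standard javax.sound.sampled API and the Func interface sketched earlier on this page. The buffer sizes, flo-slot assignments, and class name are my assumptions, not Audivolv's actual code:

import javax.sound.sampled.*;

class SpeakerLoop {
    // Calls evolved code once per sample frame, 44100 times per second.
    static void play(Func evolved) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();
        double[] flos = new double[64]; // evolved code's flo variables, -1..1
        Object[] obs = new Object[16];
        byte[] frame = new byte[4];     // 2 bytes per speaker, little-endian
        while (true) {
            // Hypothetically flos[0] and flos[1] are left/right amplitudes
            // (mouse x/y would sit in other flo slots).
            evolved.run(flos, 0, obs, 0);
            short left  = (short) (clamp(flos[0]) * Short.MAX_VALUE);
            short right = (short) (clamp(flos[1]) * Short.MAX_VALUE);
            frame[0] = (byte) left;  frame[1] = (byte) (left >> 8);
            frame[2] = (byte) right; frame[3] = (byte) (right >> 8);
            line.write(frame, 0, 4);
        }
    }

    static double clamp(double x) { return Math.max(-1, Math.min(1, x)); }
}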




PROBLEMS? Or contact info

If it does not work, install Java and try again.

You can immediately describe any problems on the Bug List.

ben@audivolv.com
I'm Ben Rayfield. Call me any time at 408.334.7214






Audivolv eVOLVes AUDIo and will later evolve AI brainwaves the same way.

Brainwaves are easy to understand. Close your eyes, center your mind, and wait until light and dark blurry things flow across what you "see" at a rate of 2-6 per second. Those are brainwaves echoing across the visual part of your brain, and you can shape them into what your eyes see, or into dreams. I can build those in software.


The way brainwaves understand brainwaves is one type of consciousness, but not all types of consciousness are yet understood by science, and it is easier to build those types together with AI into a larger mind than to define them. Maybe we will understand them after we see how they work in that combination.

Mouse and speakers are enough if you are awake, but theoretically, years from now, if you wear a mind-reading helmet like OpenEEG while you sleep to control music you can hear while sleeping, your dreams could become an internet video game (with other dreaming players).


Now working on:

If I swapped your computer's CPU for a faster one in the middle of a calculation, your computer would crash, but that's similar to what Audivolv 0.2.1 will do. It will swap code for a faster version while it's playing audio, without affecting the sound. It will take less CPU time.

Audivolv 0.2.1 will be much faster. Compiling new evolved Java code was the biggest bottleneck. Instead, new Java code will start running instantly and lose only 10% running speed: the Exponentially Fast Permutation Partial-Compiler And Interpreter. Most compilers check which millisecond (10^-3 seconds) a Java file was last compiled, and compile it again if it was modified after that. My new compiler works on only a subset of Java, does not need files, and will usually finish in the same millisecond it starts. Also, if all you are doing is renaming variables and concatenating existing code, it will theoretically finish in 1 microsecond (10^-6 seconds). It's most useful for compiling code an AI generates, because AI tries lots of permutations.

Lambda-Calculus Exponent Music

View the Technical powerpoint-like html presentation I gave at the Bay Area Artificial Intelligence Meetup Group


My plan for Audivolv --> CEV

Eliezer Yudkowsky's theory, Coherent Extrapolated Volition (CEV), only becomes possible in Audivolv after the Good/Bad buttons are replaced by more understanding of mouse movements. That can be done by learning to predict and influence mouse movements: the mouse controls evolved sound, and (symmetrically) the AI changes how sound reacts to the mouse to try to control the mouse. Mouse and music become a 2-way communication between the AI and the Human user's unconscious mind, and between the AI and the internet. From there, we design CEV algorithms.

Audivolv's icon means Lambda-Calculus Exponent Music. Audivolv does not directly use the Lambda operator, like Lisp does, but it does evolve stateless code, so it could be used to simulate lambdas. Stateless code allows exponential speed optimizations in some areas. For efficiency, all evolved code goes in a Java function like this: void run(double flos[], int f, Object obs[], int o) where flos[f] and obs[o] are the current stack position. A lambda could be a Func (0 flos and 1 ob) where (Func)o0 is the parameter and is replaced by (Func)returnedLambda.
Example evolved code: "f0 = .2*f17 + .8*Math.sin(f5); //f0 is amplitude of left speaker"
becomes this code: "flos[f] = .2*flos[f+17] + .8*Math.sin(flos[f+5]);"
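A minimal sketch of that variable-renaming step, using Java's standard regex classes. The class and method names are mine, and the real partial-compiler described above is designed to be much faster than regex rewriting; this only illustrates the transformation:

import java.util.regex.*;

class VarRenamer {
    // Turns "f0 = .2*f17 + .8*Math.sin(f5);" into
    // "flos[f] = .2*flos[f+17] + .8*Math.sin(flos[f+5]);"
    static String compile(String evolvedCode) {
        Matcher m = Pattern.compile("\\bf(\\d+)\\b").matcher(evolvedCode);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            int k = Integer.parseInt(m.group(1));
            m.appendReplacement(out, k == 0 ? "flos[f]" : "flos[f+" + k + "]");
        }
        m.appendTail(out);
        return out.toString();
    }
}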


Technical parts of Audivolv and other subjects: ArrayAlign | ArrayAlignHierarchy | ArrayAlign_ConstantRange | ArrayAlign_Equal | ArrayAlign_Multiply | ArrayAlign_PermutationCycle | ArrayAlign_Power | AudivolvNetwork | AudivolvSummary | Brainwaves | CodeSimian | CodeTree | Func | FuncPurpose_MeasureOtherFunc | FuncPurpose_SoundColorMouse | FuncPurpose_Unknown | HardCoded_CodeTree_BreedFunction | HeapQueue | HighStandards | Hypercube | Javassist | LongTermGoalsOfAudivolvDesign | MindReadingHelmet | MovementScore | NeatVsScruffy | NoSpying | Node | NormedCodeTree | Exponentially Fast Permutation Partial-Compiler And Interpreter | Plugin | ShortTermGoalsOfAudivolvDesign | SoundCardPart | Template | TuringTests_per_second | flo | ob


Open-source GNU GPL 2+

A baby has to learn the difference between seeing and hearing. Red is too loud. Blue is hot. The picture on television smells great. Babies learn that pictures only look great, but some time later they start to smell pictures again when the pictures look like food. This works great, because if you experienced no thoughts of taste when seeing food at a store, you might forget what you wanted. Any Artificial-Intelligence that uses different code to understand different senses is a communication between different AIs, not 1 smarter AI. Audivolv's interchangeable senses are mouse, speakers, and color, and it can do things like swap code between the left speaker and red brightness. There is only 1 type of AI code. New devices do not complicate the AI.
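A minimal sketch of that sense-swap idea, under the -1 to 1 dimension convention described later on this page; all names here are assumptions:

class SenseSwap {
    // Because every device is just a flo dimension in -1..1, "swapping
    // senses" can be as simple as remapping which index a device reads.
    static void swap(double[] flos, int leftSpeaker, int redBrightness) {
        double t = flos[leftSpeaker];
        flos[leftSpeaker] = flos[redBrightness];
        flos[redBrightness] = t; // the AI code doesn't know which index is which device
    }
}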

Audivolv eVOLVes AUDIo and will later evolve AI brainwaves the same way.

100% Java
Immediate evolution with no setup. Experience intelligent interactive music for the first time.

Download

Second download location (use this if the first one is slow)


Listen to this music created with Audivolv: .OGG .MP3 .WAV played with this Evolved Java Code



The core of Audivolv is the audivolv.Func Java-interface.

My strategy on NeatVsScruffy AI design.

CONTACT: Ben F Rayfield
ben@audivolv.com
Phone 408.334.7214 Call 11am-11pm weekends or 6pm-11pm mon-fri (West USA: Pacific Time)
Call me about anything if you think we'd have an interesting conversation. Also, if you live near Milpitas California, maybe we would meet.

Problems? Or is Audivolv just hard to use? Write your bad experience on the Bug List, but please do not duplicate what's already there.

Wiki

Source-Code (It's all Java) and
Technical documents are in every download of Audivolv???.jar. Rename it to Audivolv.zip, unzip it, and find them in the anyfiles/design documents folder.

What does Audivolv's new icon mean? 3 symbols that probably nobody has ever put together before. A technical description of half of what Audivolv does.

Privacy

Privacy and Safety Policy of the Audivolv software. Summary:
* If you have Epilepsy, read the details. Audivolv may learn how your brain works and try to use all parts of it.
* Internet (and most other permissions) start off, until a Human changes that.
* Like a firewall, it only uses evolved/downloaded Java code if it's proven safe.
* Code you add to Audivolv, like plugins, is your responsibility to use safely.
* It never tries to learn whether you break laws (legal in the USA because of the 4th Amendment).


For those thinking about donating code to Audivolv (GNU GPL 2+ license): I have insanely high standards for Audivolv code. Please do not be offended if I reject your AI code for being "less than genius" (no sarcasm intended). It may be the best code ever written for that purpose, but this is one of the rare times when "We're only Human" is not an ok excuse. You can always put your version of Audivolv on your own website, or build it as a plugin. Please mercilessly report new bugs, design-flaws, and any other problems in Audivolv. I plan for my last version of Audivolv to have exactly 0 bugs. See Neat Vs Scruffy for details.

Audivolv's main user-interface is sound and mouse. Flowing color, game controllers, and other devices are optional. That will never change. When the "Audivolv Network" becomes the first interactive global AI network, everyone connecting to that network will use the same interface. The only difference will be more intelligence in the music reacting to the mouse. If platform-independence is extended to include Humans and AI, then they can be used interchangeably. Each is simply an interaction of numbers ranging -1 to 1. Example: replace the mouse with the color output of a second Audivolv, and the first Audivolv will view it as a mouse. It's interchangeable.

A global network, made of only the interchangeable behaviors of Humans and AI, can hill-climb up to passing a Turing-Test by exactly copying Human data at first, then using less and less Human help each time it passes. It will do audio Turing-Tests continuously between all Human users and AI code at all times while it plays mouse and music.
It will do thousands or millions of Turing-Tests per second, on average. It's a source of intelligence, not just a measure. As a whole network, it will stream gigabytes per second of evolved Java code across the internet, generating interactions that are easily more than the bandwidth of a Human brain. While playing mouse music, you will get a little confused and a lot smarter. You will ask "Did I just think that or did the AI think it? This music is getting into my head, and why do I feel like multiplying in binary?" But you don't need to ask. Just move the mouse however feels natural at the time, and the answer will be encoded in a musical language evolved specifically for you (heard on your computer only). You listen and your intuition increases... at least in theory. That is a long-term goal. Audivolv version 0.1.6 evolves music/mouse interactions to your preferences, but it does no psychology. See the design documents in every download for details.




What is the purpose of Audivolv? Years from now, what will it do?
A small Jar file that you double-click; then your mouse becomes a musical instrument that learns how you like it to sound.

How does it know? The same way you can watch people dance and know how much they like the music. Nothing to click. No options to choose. Close your eyes and enjoy.
Technically, how will it do that? The Human moves the mouse, trying to control the music. Similarly, Audivolv changes how the music reacts to the mouse, trying to control your hand to move the mouse how Audivolv chooses. Your hand and the sound will try to control each other, which forms a new type of communication as they learn to predict and control each other better. Give Audivolv permission to talk to other Audivolvs on the internet, and we will call them the "Audivolv Network". Knowing how your mind works, your Audivolv will find others who think like you, who would sound better together, or who know things you want to learn. Intuition and consciousness flow through your brain. What's so special about carbon, hydrogen, and oxygen? Theoretically, if you cut your brain in half and connected it back together with a tiny wire between each part you cut, it would work mostly the same way (except for the chemicals). If you separated the 2 brain halves by making all those wires very long, and somehow made them still transfer electricity fast enough, then you would still think mostly the same way, but you would be in 2 places at once. The right music can make you feel any emotion and lots of other thoughts. It does almost the same thing as changing the electricity in those wires. In a much smaller way, the electricity in the wires can affect how your hand moves the mouse. Indirectly, your computer's mouse and speakers do the same thing as those theoretical wires.

By connecting Artificial-Intelligence, mouse, speakers, and internet, theoretically we can get a little of the same effect as direct brain-to-brain communication. In the same way, our thoughts and Artificial-Intelligence would amplify each other. Wikipedia has a page about Intelligence Amplification systems. Some people worry about AI becoming smarter-than-Human and taking over the world. One way to avoid that is to become smarter with the AI.

It will be an open-source global AI network where millions of people play intelligent music with their mouse. This new form of communication will work between people who share no spoken or written language, using only their speakers and mouse movements.

Some researchers teach monkeys sign-language. Teach them to hold a mouse and listen. It will work better.




Almost everything we do reads/writes information in our brains. We should not optimize that with brain implants or expensive machines until we can use the simpler devices better. Audio, video, and game-controllers are very much underused. I have a Wii game system, which was probably the first to put lots of motion sensors in its controllers, but most games do not use all the sensors, and when they do use them, they are used as buttons. For example, in the Zelda game, regardless of which direction you swing the controller, the sword swings the same few ways. The pixels on your television and the vibrations of speakers are a high-bandwidth input into a Human brain, and very few people are using that path to brains.

Of all the things a computer can control that are available to an average person, music has the most access to their unconscious mind. It can cause any emotion and lots of other ideas, and with practice we should be able to use it for more communication than that. For example, if you and your friends agree that for 1 hour you will each communicate only through the instrument you hold, and not talk, then you would certainly communicate better with them in the last half hour. For use with a computer, music can only be an input to the Human.

You would think that if music is a good input, singing and playing normal instruments would be a good output, but that takes a lot of skill that most people never learn.

Instead, the best outputs from Human to computer that most people can get are a mouse or a video camera. The camera could watch your movements (to interpret them like a game-controller would) and has more bandwidth. The mouse has less bandwidth and is easier to use than moving your body. People are usually lazy, so I chose the mouse as the primary input device. Maybe I'll add a camera option later.

My main user-interface is mouse and speakers. How will I use that to network brains together and to the internet?

First, I standardized the way all the devices, softwares, and Humans communicate. They all input and output numbers between -1 and 1. Each number in that range is called a "dimension".

A simple example that does nothing useful: the Java code "leftSpeaker = Math.sin(mouseXPosition*500);" always keeps leftSpeaker in range -1 to 1, because sine oscillates between -1 and 1. mouseXPosition would be -1 on the left, 1 on the right, and .9 near the right. By moving the mouse left and right, the amplitude of the left speaker would be a sine wave with higher frequency when the mouse moves faster. Audivolv usually evolves hundreds of lines of code at a time.
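That example as a runnable sketch (the helper names are mine, not Audivolv's):

class Dimensions {
    // Map a pixel coordinate into the -1..1 "dimension" range.
    static double toDimension(int pixelX, int screenWidth) {
        return 2.0 * pixelX / (screenWidth - 1) - 1.0; // left edge -1, right edge 1
    }

    // The example above: sine keeps the speaker amplitude in -1..1.
    static double leftSpeaker(double mouseXPosition) {
        return Math.sin(mouseXPosition * 500);
    }
}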

People move their mouse. The AI moves the speaker amplitudes and, more slowly, the amounts of red, green, and blue. They are all interchangeable. What you can do with the mouse, the AI can do with color. Swap them and see if anyone notices.

Using that interchangeable system, I can write the first software that runs the same on computers and Human brains. Platform-independent used to mean it runs on any operating-system. Audivolv may become the first example where platform-independent also includes software and Human brains equally. It's all -1 to 1. Move your mouse. Listen. You are a very advanced CPU. Learn to program yourself and give and receive interesting programming. Why? Because playing music with the mouse is more fun as it gets more intelligent.

It would take around a million 1 GHz computers to simulate a Human brain, assuming we knew how it worked. I do not need to know how it works. Audivolv will learn that, then tell us through music. Many people will use the "Audivolv Network" (does not exist yet) to connect to other Audivolvs and people on the internet. It's a new framework to build a new kind of software on: musical AI software, and later other types of software.

The "Audivolv Network" wilL have a very high processing power because it includes millions of brains, and each brain may be around 1 million 1 ghz computers, but unlike social-networks, the Audivolv Network's main user-interface is mouse and speakers, so connecting to the internet will make it smarter but it will not appear to be a social-network.