It is a method of programming that has shown itself to be very useful. I honestly think that neural networks could be the future of most, if not all, computation.
Right now, neural networks are used mostly for things such as image recognition and pattern mapping. Overall, they are used in a way that only requires you to train the network, because apparently that is the only way to use them. This statement, as you may have realized, foreshadows the main point I am trying to get across.
So, as I said before, neural networks resemble the brain, and this means that, just like the brain, we do not entirely understand how they work. I also mentioned that a better way of referring to neural networks is as computation, rather than programming.
The reason I say this is that they resemble the brain. It may even be appropriate to go so far as to say they are the brain, and that by playing with neural networks, we are playing with what makes you, you.
The brain is a computer, very much like the one I am using to record this information now. But this computer is made of transistors rather than neurons, and I can program it to do things such as play video, do mathematical calculations, or do what I am doing right now. Most importantly, I can make this computer do what I think is the greatest thing you can do with any computer: run simulations, the very process of recreating reality, such as the physics of a ball dropping.
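To give a concrete sense of what I mean by a simulation, here is a minimal sketch in Python (the numbers are just made up for illustration): a ball dropping under gravity, stepped forward one small slice of time after another.

```python
# A rough sketch of a simulation: a ball dropping under gravity,
# stepped forward in small time increments (simple Euler integration).
g = -9.81      # gravitational acceleration (m/s^2)
dt = 0.01      # time step (seconds)

height = 10.0  # the ball starts 10 meters up (an arbitrary choice)
velocity = 0.0

t = 0.0
while height > 0.0:
    velocity += g * dt       # gravity changes the velocity
    height += velocity * dt  # velocity changes the position
    t += dt

print(f"The ball hits the ground after roughly {t:.2f} seconds.")
```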
My idea is a fairly simple one, yet at the same time profound (at least to me). What if you could do on neural networks what we now do with transistor-based computers? What if you could run a simulation on a neural net? I don't entirely know why, but it seems as if the way neurons work is the best way to run a simulation. I mentioned this before in my idea to compute information in a new way, which turned out to be neural networks, just in hardware form.
Now, I know we can run simulations with neurons; I do it all the time, or at least I think I do. I can imagine a ball dropping, and when coming up with my ideas, I imagine them and all their parts. Nikola Tesla used to do the same thing, so well that he would imagine every single part that made up his machines before he even started to build them. He would then build them to the exact specifications in his head, and they would work just as he imagined. How he used his ability to imagine is just like how we use simulations on computers: to do things before we do them, allowing for error without the consequences.
Now, I have no idea how to make neural networks do what I have described. But I do have some ideas as to how to begin to understand neural networks, which would then allow us to do it. It would also bring us closer to understanding how the brain works.
My idea to further our understanding of neural networks is to play with them. What I mean by that is to turn individual neurons off and on at will, play with what they connect to, and watch the outcome.
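As a rough sketch of the kind of playing I mean, assuming nothing more than a tiny made-up network in NumPy, you can silence one hidden neuron at a time and watch how the output changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feed-forward net with made-up weights: 3 inputs -> 4 hidden -> 2 outputs.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))

def forward(x, off=None):
    """Run the net; 'off' is an optional index of a hidden neuron to turn off."""
    hidden = np.tanh(x @ W1)
    if off is not None:
        hidden[off] = 0.0          # silence that one neuron
    return hidden @ W2

x = np.array([1.0, -0.5, 0.25])
print("all neurons on:    ", forward(x))
for i in range(4):
    print(f"neuron {i} turned off:", forward(x, off=i))
```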
This leads me to another idea: a new way to program neural networks. Right now, programming a neural network consists of tediously, and in my opinion boringly, writing down lines of code.
My idea is to change that, to the point where you are visually connecting and turning on artificial neurons, looking something along the lines of what is shown in the GIF.
The first step is learning how to create them by hand, or better, finding out whether we even can.
I have devised a test in my head: train a neural net to recognize patterns, then make a blank neural net and copy all the connections of the first net onto the blank one by hand.
If the one made by hand does not work, then there is a problem, a big, mysterious one. It would mean that there is more to the brain and to computation than simply the connections and the neural activity, and that what made the connections and the activity matters too.
If the net does work, then we play with it: randomly disconnect one neuron from another and see the effects.
I think the latter is far more likely, and I also hope it is the case (though the first option does seem fun as well).
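Here is a minimal sketch of that test, again with a tiny made-up NumPy net standing in for a real trained one (in the real test the first net's connections would come from training, not from random numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained net: 4 inputs -> 5 hidden -> 3 outputs.
trained = {"W1": rng.normal(size=(4, 5)), "W2": rng.normal(size=(5, 3))}

# The "blank" net: copy every connection over, one by one.
blank = {"W1": trained["W1"].copy(), "W2": trained["W2"].copy()}

def forward(net, x):
    return np.tanh(x @ net["W1"]) @ net["W2"]

x = rng.normal(size=4)

# If connections are all that matter, the copy should behave identically.
print("outputs match:", np.allclose(forward(trained, x), forward(blank, x)))

# Now play with it: sever one randomly chosen connection and watch the effect.
i, j = rng.integers(4), rng.integers(5)
blank["W1"][i, j] = 0.0
print("change after disconnecting one link:", forward(blank, x) - forward(trained, x))
```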
In my experience, when people talk of neural networks, they always talk of the mapping, never of the neural activity.
I think it is possible for two neural networks to have the same connections, yet produce different outcomes, due to different neural activity.
I feel as if neural activity, especially in simulations, is the important factor, and that programming a neural network to do what we do on transistor-based computers lies directly in turning neurons on and off in specific patterns.
I almost feel as if, in a simulation, the connections the neurons have would describe the somewhat static parts of the world, such as the ground and the ball, while the neural activity would describe the movement of the ball.
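To illustrate what I mean, here is one last sketch, assuming a tiny made-up recurrent net: the connections never change, yet starting the activity in two different states lets two different patterns unfold over time.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed connections: a tiny recurrent net whose weights never change.
W = rng.normal(scale=0.9, size=(6, 6))

def unfold(state, steps=5):
    """Let the activity evolve over time while the connections stay the same."""
    trajectory = [state]
    for _ in range(steps):
        state = np.tanh(W @ state)   # the activity at the next moment
        trajectory.append(state)
    return np.array(trajectory)

# Same connections, two different starting activities -> two different outcomes.
a = unfold(np.array([1.0, 0, 0, 0, 0, 0]))
b = unfold(np.array([0, 0, 0, 0, 0, 1.0]))
print("final activity A:", np.round(a[-1], 2))
print("final activity B:", np.round(b[-1], 2))
```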