It is a method of programming that has proven to be very useful. I honestly think that neural networks could be the future of most, if not all, computation.
Right now, neural networks are used only for things such as image recognition and pattern mapping. Overall, they are used in a way that only requires you to train the network, because apparently that is the only way to use them. This statement, as you may have realized, foreshadows the main point I am trying to get across.
So, as I said before, neural networks resemble the brain, which means that, just like the brain, we do not entirely understand how they work. I also mentioned that a better way of referring to neural networks is as computation, rather than programming.
The reason I say this is that they resemble the brain. It may even be appropriate to go so far as to say they are the brain, and that by playing with neural networks, we are playing with what makes you, you.
The brain is a computer, very much like the one I am using to record this information now, except that this computer is comprised of transistors rather than neurons. I can program it to do things such as play video, do mathematical calculations, or do what I am doing right now. Most importantly, I can make this computer do what I think is the greatest thing you can do with any computer: run simulations, the very process of recreating reality, such as the physics of a ball dropping.
My idea is a fairly simple one, yet at the same time profound (at least to me). What if you could do on neural networks what we now do with transistor-based computers? What if you could run a simulation on a neural net? I don't entirely know why, but it seems as if the way neurons work is the best way to run a simulation. I mentioned this before in my idea to compute information in a new way, which turned out to be neural networks, just in hardware form.
Now, I know we can run simulations with neurons; I do it all the time, at least I think I do. I can imagine a ball dropping, and when coming up with my ideas, I imagine them and all their parts. Nikola Tesla used to do the same thing, so well that he would imagine every single part that made up his machines before he even started to build them. He would then build them to the exact specifications in his head, and they would work just as he imagined. How he used his ability to imagine is just like how we use simulations on computers: to do things before we do them, allowing for error without the consequences.
Now, I have no idea how to make neural networks do what I have described. But I do have some ideas about how to begin to understand neural networks, which would then allow us to do as I have mentioned. It would also bring us closer to understanding how the brain works.
My idea to further our understanding of neural networks is to play with them. What I mean by that is to turn individual neurons off and on at will, play with what they connect to, and watch the outcome.
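A minimal sketch of what that playing could look like, assuming a tiny two-layer network with made-up (untrained) weights: silence one hidden neuron at a time and watch how the output shifts.

```python
import numpy as np

# A tiny two-layer net whose hidden neurons we can switch off one at a
# time to observe the effect. All weights are illustrative, not trained.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input -> hidden connections
W2 = rng.normal(size=(4, 2))   # hidden -> output connections

def forward(x, off=None):
    """Run the net; `off` is the index of a hidden neuron to silence."""
    h = np.tanh(x @ W1)        # hidden activations
    if off is not None:
        h[off] = 0.0           # turn that neuron off
    return h @ W2

x = np.array([1.0, -0.5, 0.25])
baseline = forward(x)
for i in range(4):
    ablated = forward(x, off=i)
    print(i, np.abs(ablated - baseline).sum())  # how much neuron i mattered
```

Running this on a trained network instead of random weights would show which neurons carry which part of the computation.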
This leads me to another idea: a new way to program neural networks. Right now, programming a neural network consists of tediously, and in my opinion boringly, writing down lines of code.
My idea is to change that, to the point where you are visually connecting and turning on artificial neurons, looking something along the lines of what's shown in the GIF.
The first step is learning how to create them by hand, or rather, finding out whether we can.
I have devised a test in my head: train a neural net to recognize patterns, then make a blank neural net and copy all the connections of the first net onto the blank one by hand.
If the one made by hand does not work, then there is a problem, a big, mysterious one. It would mean that there is more to the brain and to computation than simply the connections and the neural activity, namely, whatever made the connections and the neural activity.
If the net does work, then we play with it: randomly disconnect one neuron from another, and see the effects.
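The copy-by-hand test can be sketched in a few lines, assuming the nets are ordinary artificial ones represented as weight matrices (the sizes here are arbitrary): copy one net's connections into a blank net, then check that both behave identically on the same input.

```python
import numpy as np

# Sketch of the copy-by-hand test: take one net's weights, copy them
# into a freshly made "blank" net, and verify both nets now give the
# same answer. Sizes and weights are illustrative assumptions.
def make_net(rng):
    return {"W1": rng.normal(size=(4, 6)), "W2": rng.normal(size=(6, 3))}

def forward(net, x):
    return np.tanh(x @ net["W1"]) @ net["W2"]

original = make_net(np.random.default_rng(1))
blank = make_net(np.random.default_rng(2))   # a different, "blank" net

# Copy the connections over, one matrix at a time ("by hand").
blank["W1"] = original["W1"].copy()
blank["W2"] = original["W2"].copy()

x = np.random.default_rng(3).normal(size=4)
assert np.allclose(forward(original, x), forward(blank, x))
```

For artificial nets defined purely by their weights, this check always passes, which corresponds to the "net does work" branch of the test.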
I think the latter is far more likely, and I also hope it is the case (though the first option does seem fun as well).
In my experience, when people talk of neural networks, they always talk of the mapping, never the neural activity.
I think it is possible for two neural networks to have the same connections, yet have different outcomes, due to different neural activity.
I feel that neural activity, especially in simulations, is the important factor, and that programming a neural network to do what we do on transistor-based computers lies directly in turning neurons on and off in specific patterns.
I almost feel as if, in a simulation, the connections the neurons have would describe the somewhat static parts of the world, such as the ground and the ball, while the neural activity would describe the movement of the ball.
The neural connections would describe the landscape, while the neural activity would describe your movement through the landscape.
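The landscape-versus-movement picture can be illustrated with a toy recurrent net, assuming fixed (made-up) weights: the same connections, started from two different activity states, trace two different trajectories.

```python
import numpy as np

# Same connections, different activity: a tiny recurrent net with
# fixed weights (the "landscape"). Two runs starting from different
# activity states move differently through that landscape. The weight
# values and sizes here are illustrative assumptions.
rng = np.random.default_rng(3)
W = rng.normal(scale=0.9, size=(5, 5))   # fixed recurrent connections

def run(state, steps=10):
    trajectory = [state]
    for _ in range(steps):
        state = np.tanh(W @ state)       # activity evolves; W never changes
        trajectory.append(state)
    return np.array(trajectory)

a = run(np.array([1.0, 0, 0, 0, 0]))
b = run(np.array([0, 1.0, 0, 0, 0]))
print(np.abs(a[-1] - b[-1]).sum())       # how far apart the runs end up
```

The weights never change during the run; only the activity does, which is exactly the distinction between the landscape and your movement through it.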
You do know that what's commonly called a "neural network" in computer science is not biologically realistic?
I know they are not exactly the same; for example, neural networks do not contain glial cells. But if you look at the function of glial cells, you see that they are not the important factor in the computation.
Do you have any suggestions to advance my idea?
Nice try, but before talking about classifiers you should study some theory, because there are quite a few mistakes here.
Neural nets are only a simple solution to a specific subset of problems, and quite often not the best one. Also, it is well known how a NN works; what is not known (sometimes, and only for complex ones) is the internal representation that you get once you have trained a NN.
Your experiment is useless as defined. NNs are deterministic functions (fuzzy implementations are quite exotic, and derive from external algorithms that train them); the stochastic behavior is due to the statistical nature of the samples that feed the NN.
Tools to design and train NNs (graphical or automatic) already exist; one example is the Matlab/Simulink Neural Network Toolbox.
What you call neural activity is, for classical NNs, the activation function. Layer connectivity depends upon weights, and the training of a net defines the weights; calling them the static part is a HUGE ERROR (like a division by zero). The problem is neither mapping nor activity. The key factors are abstraction capabilities and performance, and that is what NN users talk about, because they are fundamental for a classifier.
You are right, I do not know much about neural networks. But my idea still stands. You keep saying the word "train," when my main point is to pull away from those methods.
You also mentioned the word "weight," which I see mentioned a lot within discussions about neural networks. I definitely know that manipulating this "weight" determines some outcome within the NN. What I don't understand is how this "weight" even remotely accurately represents the functions of actual neurons.
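For what it's worth, the standard textbook picture of a weight is simple, and the rough analogy is to a synaptic strength: each input is multiplied by its weight, the products are summed, and an activation function decides how strongly the neuron "fires." A minimal sketch, with all numbers made up:

```python
import numpy as np

# One artificial neuron: each input is scaled by its weight (loosely,
# a synaptic strength), the products are summed with a bias, and an
# activation function squashes the result into a firing level.
inputs  = np.array([0.5, -1.0, 2.0])   # signals arriving at the neuron
weights = np.array([0.8,  0.2, -0.4])  # strength of each connection
bias    = 0.1                          # shifts the firing threshold

activation = np.tanh(inputs @ weights + bias)  # weighted sum, squashed
print(activation)  # tanh(0.4 - 0.2 - 0.8 + 0.1) = tanh(-0.5)
```

How faithfully a single number per connection captures a real synapse is exactly the kind of question being debated in this thread.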
I am starting to think that my idea and neural networks are entirely different things.
Another thing: you mentioned that you understand neural networks. No, you don't, because if you did, you would understand how the brain works, which no one does. And if you understood neural networks yet did not understand the brain, then that would mean that neural networks are not an accurate representation of the computational processes that go on within the brain.