Comments on "Ideas That Will Change The World: Idea to improve neural networks"
Blog author: Roshawn Terrell

Roshawn Terrell — 2014-11-17:
You are right, I do not know much about neural networks. But my idea still stands. You keep saying the word "train" when my main point is to pull away from those methods.

You also mentioned the word "weight," which I see a lot in discussions about neural networks. I know that manipulating this weight determines some outcome within the NN. What I don't understand is how this weight even remotely accurately represents the function of actual neurons.

I am starting to think that my idea and neural networks are entirely different things.

One more thing: you mentioned that you understand neural networks. No you don't, because if you did, you would understand how the brain works, which no one does. If you understood neural networks yet did not understand the brain, that would mean neural networks are not an accurate representation of the computational processes that go on within the brain.

Anonymous — 2014-09-29:
Nice try, but before talking about classifiers you should study some theory, because there are quite a few mistakes here.

Neural nets are only a simple solution to a specific subset of problems, and quite often not the better one.
Also, it is well known how a NN works; what is not known (sometimes, and only for complex ones) is the internal representation you get once you have trained it.

Your experiment is useless as it is defined. NNs are deterministic functions (fuzzy implementations are quite exotic, and derive from the external algorithm that trains them); the stochastic behavior is due to the statistical nature of the samples that feed the NN.

Tools to design and train NNs (graphical or automatic) exist; one example is the Matlab/Simulink Neural Network Toolbox.

What a classical NN calls neural activity is the activation function. Layer connectivity depends upon the weights. Training a net is what defines the weights, so calling them the static part is a HUGE ERROR (like a division by zero). The problem is neither mapping nor activity. The key factors are abstraction capability and performance, and that is what NN users talk about, because they are fundamental for a classifier.

Roshawn Terrell — 2014-09-28, 15:48:
Do you have any suggestions to advance my idea?

Roshawn Terrell — 2014-09-28, 15:47:
I know they are not exactly the same; for example, neural networks do not contain glial cells.
But if you look at the function of glial cells, you see that they are not the important factor in the computation.

Anonymous — 2014-09-28, 08:45:
You do know that what's commonly called a "neural network" in computer science is not biologically realistic?
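The "weights" and "activation function" debated in the thread above can be illustrated with a single artificial neuron. This is a minimal sketch added for illustration (it is not from the original discussion): the weights are the parameters that training adjusts, the activation function produces the "activity," and the forward pass is a plain deterministic function, as the Anonymous commenter points out.

```python
import math

def sigmoid(x):
    # Activation function: squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # A single artificial neuron: a weighted sum of the inputs plus a
    # bias, passed through the activation function. The weights/bias
    # are what training adjusts; the forward pass itself has no
    # randomness in it.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Hypothetical fixed weights: with the weights held fixed, the same
# input always yields the same output, i.e. the trained net is a
# deterministic function.
w = [0.5, -1.2, 0.8]
b = 0.1
out1 = neuron([1.0, 0.0, 1.0], w, b)
out2 = neuron([1.0, 0.0, 1.0], w, b)
assert out1 == out2
```

Any stochastic behavior in practice comes from how the training samples are drawn, not from the network itself, which is the distinction the commenter is making.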