Saturday, July 21, 2018

Mimic Neuronal Behaviour Using Generative Adversarial Networks

The core problem with building sophisticated artificial intelligence is understanding, and modelling, the intricate dynamics behind how neurons work. Neurons, and all their various activities, are extremely complex, and we are still learning new things about them today. Artificial neural networks have been instrumental in the recent advancements in AI. The problem is that they are still very crude representations of the actual dynamics that go on inside the brain. If we could effectively model how a neuron works, we could simulate it on a computer and let its natural ability to learn unfold in the simulation.

So, what if we trained a GAN to mimic the dynamics of a neuron?

What if you fed it as much data as you could about the neuron: its activity, how it interacts with and affects other neurons, and how other neurons interact with and affect it, and tried to get the GAN to model that behaviour?

And then, once it captures what a neuron is and all its dynamics, have it generate multiple neurons, and have those neurons interact with each other in a simulation.

If you can figure out the general rules behind how neurons interact with each other, you don't have to understand the brain. You can just simulate the neurons and let the intelligence naturally emerge from their interactions.

This is footage of a neuron taken using atomic force microscopy, which allows imaging of the nanoscale dynamics of neurons.

The key to making this work is the training data, and figuring out how best to capture and collate it.

Footage like this could serve as a source of training data for the GAN. This footage, combined with the electrical activity that correlates with it, could potentially serve as a great dataset from which a GAN could learn to model neuronal behaviour.
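
To make the idea concrete, here is a minimal sketch of what such a setup might look like. Everything in it is an assumption for illustration: `real_traces` stands in for whatever windows of recorded activity the dataset would actually contain, and the tiny networks are placeholders, not a working pipeline.

```python
# Minimal GAN sketch in PyTorch. Everything here is an illustrative assumption:
# `real_traces` stands in for windows of recorded neuronal activity (e.g. electrical
# traces aligned with the footage), and the tiny MLPs are placeholders.
import torch
import torch.nn as nn

window_len = 256   # assumed length of one activity window (samples)
noise_dim = 64     # latent dimension fed to the generator

generator = nn.Sequential(
    nn.Linear(noise_dim, 128), nn.ReLU(),
    nn.Linear(128, window_len), nn.Tanh(),       # outputs a synthetic activity trace
)
discriminator = nn.Sequential(
    nn.Linear(window_len, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),                           # real-vs-generated score (logit)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_traces):
    """One adversarial update on a batch of real activity windows."""
    batch = real_traces.size(0)
    fake = generator(torch.randn(batch, noise_dim))

    # Discriminator: score real windows as 1, generated windows as 0.
    d_loss = bce(discriminator(real_traces), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator score its windows as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```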

Tuesday, January 23, 2018

Music..


Music is likely much more special than we realize. Though deep down, maybe we always knew that there was something special about it.



Music may actually have some deep connection to the nature of the universe.


What initially led me down this train of thought was when I first heard a birdsong slowed down to human hearing range, as shown in the TED talk 'The unexpected beauty of everyday sounds' by Meklit Hadero. A beautiful talk, by a beautiful woman.

What I find so fascinating about their songs is that they actually sound like melodies humans would make.

It hints at the possibility that our aesthetic for music isn't unique to humans, and that something much deeper and more fundamental may be at play that causes us to like music. The fact that we even make music at all is an anomaly: we don't get food from it, nor does it seem to help the survival of our species, yet we dedicate so much time, energy, and resources to the endeavor. Music is such a fundamental part of our culture; we all know how powerful it can be. At times it can feel transcendent.

The great philosopher Friedrich Nietzsche once said, “Without music, life would be a mistake.” I'm inclined to agree. When I listen to Vivaldi, it truly feels like I'm connecting to something beyond me, something beyond this world. Birds sing, so do dolphins, and chimpanzees even show a sense of musical preference. Clearly music is something special. But why?

All of these species are known for their intelligence. So maybe music is somehow fundamentally connected to intelligence? Interestingly, one of the problems of building artificial general intelligence is how one would develop an AI that would actively go out of its way to make music. It's a difficult problem, and it may be that we will only have true AGI once we can make AI that can and will make music, not because it has to, but because it wants to.

The current popular understanding of the brain and of AI takes the perspective of looking at these systems as systems primarily built for learning, but that doesn't really explain how and why we make music. The ideas Nell and I are developing about the brain could potentially explain this, as they don't look at the brain primarily as a system that learns, but as a resonance engine that continuously molds itself and its neural activity to resonate with its environment.

Our theory proposes that the intelligence of the brain, and of life, is an emergent property of entropy maximisation, the drive expressed by the second law of thermodynamics. In order for a system to maximise entropy within its environment, it must resonate with its environment. And by resonating with its environment, it forms what is effectively a model of its environment.

It also turns out that Andrés Gómez Emilsson and the Qualia Research Institute are developing a theory to explain consciousness as something that emerges from brain harmonics. They call this theory the Symmetry Theory of Valence.

Andrés gives a talk that goes into detail about the theory, and even makes clear, empirically testable predictions that could falsify or support it. I highly recommend watching it.

In his talk he references research done by Selen Atasoy on brain harmonics, and how certain brain harmonics determine certain states of consciousness. This research is not only very interesting, but also backs up the idea of looking at the brain as a resonance engine, rather than simply as a system that learns.

With this in mind, I think it becomes a bit easier to see why music seems to have such a powerful effect on humans, birds, dolphins, or really anything with a neocortex. Perhaps it's just embedded in the way the brain works. If the brain is a resonance engine, then perhaps the brain likes music simply because music is very easy to resonate to. Consonant sounds have a clear, beautiful mathematical structure, while dissonant sounds, the sounds we tend not to like, don't really have such a structure; they literally cause dissonance in whatever medium they travel through.

All I know is that I love music so very much. I lose myself in it. And perhaps there is a deeply profound reason why it affects me, why it affects all of us, in such a way.

-

(Note: the bird lineage split from ours long before mammals existed and has evolved down a completely different path, and yet birds have still developed their own version of a neocortex, one that looks a bit different but essentially works the same way. That is amazing in and of itself. It suggests that the development of the neocortex, which allows mammals to learn new behaviors, isn't due to some chance occurrence, but rather that evolution, regardless of circumstance, will naturally lend itself to the development of neocortex-like structures.)








Monday, October 9, 2017

A New Theory Behind How And Why Neurons Work

(These ideas are presented in our paper 'Neuronal Entropy Maximisation - A New Model For Neural Networks'.) Recently, through our research, Nell Watson and I have made many novel insights, forming powerful connections between entropy maximisation and a variety of other fields such as neuroscience, cognitive science, machine learning, and more.
One result of these insights is that we believe we have come up with a new theory that advances beyond Hebbian theory, describing how the functions and processes of neurons actually emerge from microscopic ground truths, and how possibly all 'intelligent' behavior found in the universe is an emergent property of these microscopic ground truths.

We also believe we can apply knowledge from these insights to help advance the field of artificial intelligence, possibly replacing backpropagation, the method currently dominating the field, altogether.

One possibility we see is an unexploited opportunity to optimize NEAT further by applying thermodynamic (entropic) principles to aid in the mutation and evolution processes. Using principles of Entropic Computing for designing neural networks, we can apply thermodynamic optimisation to the process of generating machine intelligences that are hyper-optimised for specific use cases.

In essence, entropy maximisation processes enable a ‘unit’ to see multiple possible futures, select the most preferable, and take the necessary steps to bring it into being. This technique may also revolutionise the space of ethics, particularly machine ethics, creating new models by which to select for ethical preferences that are most likely to enable a Pareto-optimal distribution of human flourishing.

To begin, the core driver of all these ideas is entropy. Entropy is the degradation of the matter and energy in the universe to an ultimate state of inert uniformity.

Entropy is the force that drives all structures to less and less complex configurations; in the simplest of terms, it is the dissipation of energy.

This is a physical law that governs all interactions in the universe. By looking at complex adaptive systems such as life, or the brain, through the lens of entropy maximisation, it's possible to understand these systems at a much deeper level than ever before.

The interesting thing about life is that it not only manages to keep its internal entropy low at the cost of increasing its external entropy, but it also grows in complexity, forming more and more complex organizations over time, capable of carrying out increasingly complex functions.

Life may actually act as a catalyst for entropy production.

Based on recent research, it may be that entropy, in order to further increase the rate at which energy dissipates, will actively drive the creation of more and more complex structures that are even better at dissipating energy than inert matter.

Jeremy England, a biophysicist who runs a lab at MIT doing research on this, notes that structures that form through reliable entropy production in a time-varying environment seem adapted to eating energy from their environment.

Here are links if you wish to look more into this.




Dissipative adaptation in driven self-assembly:

Recent research has also hinted at a possible deep connection between entropy maximisation and intelligent behavior.

Alex Wissner-Gross, a physicist and computational scientist, decided to team up with a mathematician named Cameron Freer, to see if they could find more evidence of this potential connection.

They proposed a “causal path entropy”: an entropy based not on the internal arrangements accessible to a system at any moment, but on the number of arrangements it could pass through on the way to possible future states. They then calculated a “causal entropic force” that pushes the system to evolve so as to increase this modified entropy.

What they found was that sophisticated behavior would emerge from this simple physical process, such as balancing a pole on a cart, a classic control problem.

From this research, it looks like intelligent behavior may not just be connected to entropy maximisation, but may actually emerge directly from it, and that intelligence itself is really the drive to maximise future freedom of action.

It essentially says that intelligence doesn't just try to acquire control over its environment; it is the process of acquiring as much control over its environment as possible.
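
As a rough illustration of the idea (not the exact formulation in the ‘Causal Entropic Forces’ paper), here is a toy sketch: for each candidate action, sample many random futures and prefer the action whose reachable futures are most spread out. The `step` function, the state representation, and all the constants are assumptions made for the example.

```python
# Toy approximation of a "causal entropic force" (illustrative only, not the
# paper's exact formulation). `step(state, action)` is an assumed user-supplied
# simulator (e.g. cart-pole dynamics), and states are tuples of numbers.
import random

def future_spread(state, step, actions, horizon=20, samples=200):
    """Crude proxy for causal path entropy: how spread out the reachable futures are."""
    ends = []
    for _ in range(samples):
        s = state
        for _ in range(horizon):
            s = step(s, random.choice(actions))   # follow a random future path
        ends.append(s)
    means = [sum(col) / len(ends) for col in zip(*ends)]
    return sum(sum((x - m) ** 2 for x, m in zip(e, means)) for e in ends) / len(ends)

def causal_entropic_action(state, step, actions, horizon=20, samples=200):
    """Pick the action whose reachable futures are the most spread out."""
    scores = {a: future_spread(step(state, a), step, actions, horizon, samples)
              for a in actions}
    return max(scores, key=scores.get)
```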

Alex Wissner-Gross notes that our ideas of AI "becoming" megalomaniacal may actually be in reverse: at its very core, what intelligence truly is, is the drive towards megalomania.

Here are links if you wish to look more into this:




All this research seems to paint the picture that life itself is an entropy maximisation process, and that the intelligence and complex behavior we see in life emerge from this process as well.

As cognitive scientists, we decided to see if we could apply these concepts to understanding the brain.

One idea that came to our attention is the concept of homuncular functionalism, which proposes that cells have their own agency, and that our actions are actually the result of a collective agency.

This really helped in the development of our theory by shifting our perspective on the matter; we began looking at the problem through a different lens. Instead of viewing neurons simply as processing units, we looked at them as entities with their own drives.

This is footage of a neuron taken using atomic force microscopy, which allows imaging of the nanoscale dynamics of neurons.

As we looked at this footage, we began to think: what if neurons are engines of future freedom, constantly searching to maximise their future freedom of action?

If we take the perspective of each neuron having its own agency, then we could take the perspective that each neuron is, in fact, an entity that strives to maximise entropy, and as such, maximise its future freedom of action.

This intuitively seemed to make sense on a very deep level. But that still left the question open: how does firing maximise future freedom of action?

We knew that future freedom of action is not simply defined by movement in space, but is really defined by the number of options one has available, as shown with the inverted pendulum experiment in the paper ‘Causal Entropic Forces’. The pole took the inverted position because, in that position, it could maximise its options: in the inverted position the pendulum has more potential energy, and can be dropped in any direction to carry that energy to another position. Whereas if the pendulum remained hanging down, it would take more effort and time to move it to any other position.

With this we knew it was at least possible for firing to be represented as a form of maximising freedom. But this still did not fully answer how it maximised freedom.

We also knew that there was a clear correlation between neural activity and entropy maximisation, not only because it intuitively makes sense (i.e. neurons burn more energy by firing), but also because there has been research on the thermodynamics of learning in neurons, which found a clear correlation between the rate at which neurons learn and the amount of heat and entropy they produce. You can read about that here: https://m.phys.org/news/2017-02-thermodynamics.html

All of this was a clear sign that we were on the right track, but it still didn’t really connect the dots. There was something missing; we could feel it.
Then we took another look at Hebbian theory.

"Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.[…] When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.”

The theory is often summarized as “Cells that fire together, wire together." However, this summary should not be taken literally. Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can only occur if cell A fires just before, not at the same time as, cell B.

The thought then came to mind.

What if neurons are actually trying to control other neurons?

Neurons try to maximise entropy not only by making themselves fire and burn energy, but also by trying their best to make the other cells around them fire and, as such, burn more energy.
As stated in Hebbian theory, causality is very important in the process of learning. The cell will not strengthen its connection just because another cell happened to fire at the same time as it did. It will only strengthen its connection if it feels as if it had influence over the other cell.

It needs to "take part" in firing cell B. If it sees it has influence over cell B, it will strengthen its connection, to strengthen its influence. But cell A is constantly searching for more influence. So if it finds that it can have more influence over another cell, it will gravitate more towards that cell. Or if it suddenly finds that it can no longer influence cell B's firing as much as it once did, it will begin to move and search elsewhere.
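
To illustrate the intuition, here is a toy sketch of that 'influence-seeking' rule. The names, thresholds, and learning rates are all invented for the example; this is not a claim about real synaptic biophysics, just the shape of the rule described above.

```python
# Toy sketch of the "influence-seeking" rule (illustrative, not real biophysics).
# A cell strengthens connections to targets it appears to *cause* to fire
# (its spike preceded theirs), in proportion to how strongly they fired,
# and slowly loosens its grip on targets it no longer influences.
def update_connections(weights, pre_spiked, caused, post_spike_size,
                       lr=0.05, decay=0.01):
    """
    weights:         dict target_id -> connection strength
    pre_spiked:      True if this cell fired in the last time step
    caused:          dict target_id -> True if this cell's spike preceded the target's
    post_spike_size: dict target_id -> how strongly the target fired afterwards
    """
    for target, w in weights.items():
        if pre_spiked and caused.get(target, False):
            # Influence detected: grip tighter, in proportion to energy dissipated.
            weights[target] = w + lr * post_spike_size.get(target, 0.0)
        else:
            # No detectable influence: loosen the grip and search elsewhere.
            weights[target] = max(0.0, w - decay)
    return weights
```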

We felt, that this was it, right here.

We were looking for the missing connection between entropy maximisation and the cells working together, and we think we found it.

What this would mean is that firing isn't really a method of processing information. Rather, it's a method, a tool that cells use to influence other cells, and the processing of information is an emergent property of this activity.

..it’s a paradigm shift.

Ha, our crappy functions in artificial neural networks are just approximating the entropy maximisation process in a really crap way, but even so it kinda works (incredibly).

Hebbian theory emphasizes causality in the connection process, or 'seeking' process, as we think it’s better to call it.

This would mean that a neuron could sit next to another neuron that has a lot of activity, in how frequently and how strongly it fires, whether due to how many neurons are connected to it or due to sensory stimuli. But even though that neighbour has a lot of activity, the neuron may decide not to reach out to it, because it may see that it cannot make that neighbour fire as strongly as it can make another.

This makes sense from the perspective of entropy maximisation, because the neuron will seek and favor the targets it has the most influence over. Even if it does cause a neuron to fire and dissipate energy, the questions remain: how big a firing did it cause? How much energy did it cause it to dissipate? This is deeply important, because in tightly packed groups where every neuron is constantly firing, some spikes strong and some weak, a neuron can choose to loosen its grip on one neuron it is firing in order to strengthen its grip on another. The two neurons it is choosing between may fire just as frequently, but one may fire much more strongly than the other, because other neurons around it happen to fire in perfect sequence with its own firing. So the neuron will seek out and sort of grip onto that one more, because it sees that it is actually causing it to dissipate more energy than the other.

They opt in to participate, and selectively choose collaborators. This may be mediated by glial cells (some research tentatively suggests that glial cells mediate connections).

Imagine one passes energy along the chain. One isn't necessarily just influencing the next neuron in the chain; one might influence others further along.

If the pattern of firing travelling along the chain is replicated by the next neuron, then one can change the pattern of one's firing and affect multiple neurons further ahead in the chain, to different degrees.

You may also be noticing that our description of the processes of neurons is much different from how they are normally described, as shown in our use of words such as ‘seeking’ and ‘gripping’.

This is because we think much of the vocabulary we currently use to describe the activity of neurons may actually be working against us in our effort to understand their functions, causing us to look at the problem through the wrong lens.

With this theory, we predicted that it would likely be possible to replicate the functions of neurons with something else entirely: anything that can be used as a tool to influence other agents in a system.

Well, much to our excitement, we later found strong evidence that our prediction was indeed correct.

It turns out that bacteria can actually communicate electrically.

A very recent study discovered that bacteria communicate electrically by using biofilms to propagate their electrical signals. The head researcher of this project, Gürol Süel, explained that “bacteria within biofilms can exert long-range and dynamic control over the behaviour of distant cells that are not part of their communities.”


This is a perfect example of collective agency.

Small pores called ion channels allow electrically charged molecules to travel in and out of the cells. In this way potassium ions can ripple through the whole biofilm.

Instead of sending the signals out along a directed channel like neurons do, the bacteria send them out as a mass impulse.

We began to realize that the neurons themselves don't really matter, do they? It could just as easily be slime. It's just the entropic principles that matter.
Our world is an ocean of intelligence, and we never knew.
It was under our noses. Of course plants and germs have agency. But... we just couldn't grok that somehow, until we had a model to describe it.
We just love theories that are so beautifully simple, yet explain so much of the complex phenomena we see in the world.
We also find it fascinating to see such a deep connection between entropy and the processing of information, as shown when Shannon discovered the concept of information entropy, and to now see a deep connection between entropy and intelligence itself.
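
For reference, Shannon's entropy of a discrete distribution is H = -Σ pᵢ log₂ pᵢ; a small snippet makes the quantity concrete.

```python
# Shannon's information entropy of a discrete distribution: H = -sum(p * log2(p)).
from math import log2

def shannon_entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin is more predictable
```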

Seeing as we are true mavens for curious data points, we began connecting the dots to a variety of other fields as well, led by the intuition that it would be possible to create far better AI using these concepts.

One low-hanging fruit we saw was to apply the method of entropy maximisation to NEAT to aid in the mutation and evolution process. We are currently working with the mathematics department at Oxford on this specific opportunity.

But we also have a much bigger vision than this. By combining cellular automata, entropy maximisation, and possibly even quantum walks to speed up calculation times, we think it's possible to create true AGI.

We felt that there was also a link between entropy maximisation and cellular automata somehow.

At the end of the paper, its authors stated that they are still trying to figure out how to compute the entropies of cellular automata.

They also said this:
"Understanding and being able to compute entropies for cellular automata may lead to the development of CA that can model thermodynamic systems. It is thought that CA may be the key to understanding a large number of subjects on a deeper level."
Hm, maybe looking at the problem from the perspective of entropy maximisation would allow for that understanding?
As stated, being able to understand and compute the entropies of cellular automata may lead to the development of CA that can model thermodynamic systems. Biology, and the brain, are thermodynamic systems.

As such, it's possible that with CA we could make a complex adaptive system with emergent intelligent behavior. We could then give this newly formed complex adaptive system input from the real world, and it could begin to adapt and form a model of that input. It could learn.

Our theory suggests that it probably doesn't matter whether we replicate exactly how neurons work, or even send out signals the same way they do; it's the entropic principles that matter. It's possible that we could let a system such as CA evolve its own methods.

In essence, it may be possible to make a brain-like system out of cellular automata, and as such, achieve AGI.

Actually, if neurons don't matter, then there's no reason to assume that the brain is the epitome of an intelligent complex adaptive system. Evolution does not reach perfection; it simply goes as far as its environmental pressures push it.

As such, it may even be possible to evolve a better system design than the brain, using CA.

The key to doing this, we think, would be to make CA where each entity is a causal entropy maximisation agent. Calculating the entropy of a system is inherently a stochastic process, so we realized the states of the entities would have to be determined by a probability distribution. Luckily, and much to our excitement, we found that there is indeed such a thing as a stochastic cellular automaton: https://www.wikiwand.com/en/Stochastic_cellular_automaton

“From the spatial interaction between the entities, despite the simplicity of the updating rules, complex behaviour may emerge like self-organization.” We love Wikipedia.
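
A minimal sketch of a stochastic cellular automaton, where each cell's next state is drawn from a probability distribution over its neighbourhood, might look like the following. The noisy majority rule used here is just an illustrative choice, not one taken from any particular paper.

```python
# Minimal stochastic cellular automaton: each cell's next state is *sampled*
# from a probability distribution conditioned on its neighbourhood.
# The noisy majority rule below is an illustrative choice, not from any paper.
import math
import random

def sca_step(grid, beta=2.0):
    """One synchronous update of a 2D binary stochastic CA with wrap-around edges."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Local field: sum of the four nearest neighbours, mapped to {-1, +1}.
            s = sum(2 * grid[(i + di) % n][(j + dj) % n] - 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            p_on = 1.0 / (1.0 + math.exp(-beta * s))   # probability of turning on
            new[i][j] = 1 if random.random() < p_on else 0
    return new

grid = [[random.randint(0, 1) for _ in range(32)] for _ in range(32)]
for _ in range(100):
    grid = sca_step(grid)   # clusters self-organize despite the simple, noisy rule
```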

SCA also has a deep connection with the cellular Potts model, a form of CA that has been used to model biological processes such as the growth of blood vessels.

Another idea that came to mind was a potential way to calculate the entropy production of these systems.


Our idea is that one could use the computer itself to measure the entropy production of these systems. We figured that computer processes should have a real, physical entropic effect on the computer, and that you could measure it.


Computers produce heat and consume electrical energy while running. The processes that happen on a computer are not separate from the computer itself: they consume energy, and as such should produce a real, measurable amount of entropy in the universe. As shown in information theory, the processing of information is not separate from the physical system used to process it; processing information requires doing work on a system, and as shown in physics, any work requires the expenditure of energy and increases the total entropy of the universe as a result.
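
As a back-of-the-envelope sketch of what such a measurement could look like: if you can record the energy a computation draws (from a power meter, RAPL counters, or similar) and the temperature at which its heat is dumped into the room, thermodynamics gives a lower bound on the entropy handed to the environment. The numbers below are assumed example inputs, not real measurements.

```python
# Back-of-the-envelope sketch: entropy handed to the environment by a computation
# is at least Q / T, where Q is the heat dissipated and T the ambient temperature.
# The wattage and duration below are assumed example values, not real measurements.

def entropy_produced(energy_joules: float, ambient_kelvin: float = 300.0) -> float:
    """Lower bound (J/K) on entropy dissipated to the environment: dS >= Q / T."""
    return energy_joules / ambient_kelvin

q = 50.0 * 600.0                # e.g. a run that drew 50 W for 10 minutes
print(entropy_produced(q))      # ~100 J/K handed to the environment
```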

We believe this is a truly novel idea, owing to our novel viewpoint on the problem. We would also like to give some credit to a famous writer and speaker, Kevin Kelly, who has definitely influenced our way of thinking. Kevin Kelly actually looks at technology as the 7th kingdom of life.

We are also currently working with the mathematics department at Oxford to look into this as a potential way to calculate the entropies of, and apply entropy maximisation to, NNs and CA.

It should be possible to build a reservoir computer made out of CA, where the reservoir itself is literally the energy being fed into the computer.

Recent research into reservoir computers has shown that they are likely deeply related to entropic processes and even the brain.

As shown in this article: https://www.quantamagazine.org/a-brain-built-from-atomic-switches-can-learn-20170920/

Engineers at the California NanoSystems Institute recently built a mesh-like computer that organized itself out of random chemical and electrical processes.

Essentially, they found that a material will organize itself into a wire-like mesh when given an electrical charge. Its self-organization is not only reminiscent of the brain, but it can perform simple learning and logic operations, and it can clean unwanted noise from received signals.
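
The software analogue of these physical reservoirs is the echo state network: a fixed, random recurrent 'reservoir' is driven by the input signal, and only a simple linear readout is trained. A minimal sketch, with sizes and constants chosen purely for illustration:

```python
# Minimal echo state network: a fixed, random recurrent "reservoir" is driven by
# the input, and only a linear readout is trained. Sizes and constants are
# illustrative; this is the software analogue of the physical meshes above.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))              # keep spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Train only the readout (ridge regression) to predict the next input sample.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
prediction = X @ W_out                                  # one-step-ahead prediction
```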

Another very interesting thing is the self-assembling wire networks experiment the Center for Complex Systems Research conducted.

You can view a version of this experiment here: https://cosmosmagazine.com/technology/watch-ball-bearings-organise-themselves-into-complex-tree-like-structures

Given an electrical charge, ball bearings will self-organize into tree-like structures and can even perform simple computations.

Researchers Joy Chao and Alfred Hübler from CCSR note that this research will give us clues as to how the neural connections of the brain work and ways to control connections.

Researchers Marshall Kuypers and Alfred Hübler, also from CCSR, note in another experiment, which they call the thermodynamics of thought experiment, that these types of particle networks “remember” which electrodes were charged--in effect, a kind of learning. Thus, these systems are hardware implementations of a neural net.

The self-organization in these systems is not only reminiscent of the brain, but they're also able to learn and perform at least simple computations. It's only further evidence that the brain is likely a structure that emerged from entropic processes, and that its intelligence is an emergent property of these microscopic ground truths.

Lastly, we thought of a potential way to enhance causal path entropy discovery. Path sampling is done through Monte Carlo methods such as a random walk. Well, it turns out there exists such a thing as a quantum walk, which is much more efficient than a normal, ‘classical’ random walk. It also turns out that bacteria have likely evolved their own method of doing these quantum walks during photosynthesis.

Seth Lloyd, who does quantum life research at MIT, goes in depth about this process.

Quantum Life (very good video, highly recommend watching it): https://youtu.be/wcXSpXyZVuY

Recently, researchers also managed to build and perform quantum walks on a two-qubit photonic quantum processor. They also found a link between continuous-time quantum walks and computational complexity theory, indicating a family of tasks that could ultimately demonstrate quantum supremacy over classical computers.

Efficient Quantum Walk On A Quantum Processor: https://www.nature.com/articles/ncomms11511

So, with a quantum processor capable of doing quantum walks, when one needs to do path sampling, instead of using a classical processor for those computations, one could send the requests to an external quantum processor. It would do the necessary computations, the main processor would retrieve the results, and they would then be used in calculating the state changes of the entities within the SCA.
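
To give a feel for why quantum walks are attractive here, below is a purely classical simulation of a one-dimensional discrete-time quantum walk. It is not the continuous-time walk or the photonic hardware from the paper above, just an illustration of the ballistic spread that distinguishes quantum walks from classical random walks.

```python
# Classical simulation of a 1-D discrete-time quantum walk, to illustrate the
# ballistic spread that makes quantum walks attractive for path sampling.
# (Not the continuous-time walk or photonic hardware from the paper above.)
import numpy as np

steps = 100
positions = 2 * steps + 1
amp = np.zeros((positions, 2), dtype=complex)   # amplitude[position, coin]
amp[steps, 0] = 1 / np.sqrt(2)
amp[steps, 1] = 1j / np.sqrt(2)                 # symmetric initial coin state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard coin

for _ in range(steps):
    amp = amp @ H.T                             # flip the coin
    shifted = np.zeros_like(amp)
    shifted[:-1, 0] = amp[1:, 0]                # coin 0 moves one site left
    shifted[1:, 1] = amp[:-1, 1]                # coin 1 moves one site right
    amp = shifted

prob = (np.abs(amp) ** 2).sum(axis=1)           # position distribution
print(np.argmax(prob) - steps)                  # peaks sit far from the origin (~steps/sqrt(2))
```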

To accomplish something like this is going to take a massive effort. It will require experts from completely different domains all working, and more importantly thinking, together. This is truly a transdisciplinary project.

We naturally thought about what research group would likely be best suited to take on this challenge.

We think the Collective Computation Group (C4) at the Santa Fe Institute would be perfect for attacking a problem like this. Not only do they have the right minds, along with the transdisciplinary nature of the Santa Fe Institute, which pulls experts out of their fields and forces them to collaborate and think together, but probably best of all, C4 seems to have the right mindset when it comes to attacking big interdisciplinary problems.

Here's a quote that I love from the director of the C4 group at Santa Fe.

“We have this argument at the Santa Fe Institute a lot. Some people will say, 'Well, at the end of the day it’s all math.' And I just don’t believe that. I believe that science sits at the intersection of these three things — the data, the discussions and the math. It is that triangulation — that’s what science is. And true understanding, if there is such a thing, comes only when we can do the translation between these three ways of representing the world.” - Jessica C. Flack, C4 Director