A biological neural network is, by definition, any group of neurons that performs a specific physiological function. Included in that definition are all the associated parts that make up the network, such as the neurons themselves and the various connections involved. Neurons need not be physically connected to each other to form a network. Often, several groups of unconnected neurons will act toward the same end, making them part of the network through the association of their purpose. The biology of the brain is complex and only beginning to be understood. The study of biological neural networks is chiefly the realm of neuroscience, although the results of the research may be applied to various fields, including psychology, biology, and computer science. While biological neural networks are far more complex than computer-generated ones, both run on the same principles.
How BNNs Function
The exact science of brain function is complex enough to fill several volumes. For brevity's sake, only some basics are discussed here.
Physiology of BNNs
Neural physiology consists of a few elementary parts that work together to achieve incredibly complex results. Each neuron's cell body connects to other neurons through a set of dendrites. Every neuron has many dendrites, running to different places in the brain to facilitate communication between individual networks. The dendrites act as input devices, receiving information signals from other neurons. One axon is connected to each cell body for output. The axon branches out to reach the dendrites of other cells, allowing a neuron to pass its own information along into the network. The connection points between neurons are called synapses, and they work electrochemically. When a certain threshold is reached within a synapse, it fires, allowing the information to pass on to the rest of the network, where it is picked up and processed by other neurons which pass it along in turn. Once the whole network has been informed of the goal, the appropriate response is activated. If thresholds along the network are not reached, the signal is stopped and the action does not take place. This massively parallel processing allows the brain to work at speeds comparable to computers, even though the underlying process is inefficient and highly redundant.
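The threshold behavior described above maps directly onto the classic artificial neuron model. The following is a minimal sketch in Python of a McCulloch-Pitts-style threshold unit; the function name and the signal values are invented for illustration.

```python
# A toy threshold neuron: it "fires" only when the weighted sum of its
# incoming signals reaches the firing threshold.

def neuron_fires(inputs, weights, threshold):
    """Return 1 (fire) if the weighted input sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three incoming signals arriving over connections of different strengths:
signals = [1, 0, 1]
strengths = [0.4, 0.9, 0.3]

print(neuron_fires(signals, strengths, threshold=0.5))  # 1: 0.4 + 0.3 = 0.7 >= 0.5
print(neuron_fires(signals, strengths, threshold=0.8))  # 0: 0.7 < 0.8, signal stops
```

Note how raising the threshold alone is enough to stop the signal, just as described for the biological network.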
Functionality of BNNs
The key to brain power is a concept known as plasticity. Both synapses and neurons exhibit plasticity, and the two work together to facilitate learning from experience. As a person experiences things over a lifetime, the brain judges whether those experiences are helpful or harmful and takes appropriate action to wire itself to be more accepting of, or more averse to, particular stimuli. It does this by using plasticity in one of two forms – neural plasticity and synaptic plasticity.
Neural plasticity increases or decreases the threshold in the cell body itself. This makes the brain either more receptive or less receptive to input asking it to perform particular functions. Synaptic plasticity works by affecting the connection points between neurons. The synapse either trains existing receptors to change the level of signal that is passed on through the network (thus creating stronger potential to reach the threshold point), or it creates new receptors to heighten sensitivity (giving preference to signals coming from particular neurons). Both methods serve to reinforce or reduce the desirability of particular actions.
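In artificial terms, the two mechanisms can be sketched as adjustments to a unit's firing threshold (neural plasticity) and to its connection weights (synaptic plasticity). The class and all numbers below are invented for illustration; this is a toy model, not a simulation of real neurons.

```python
# Toy model: neural plasticity changes the cell's own threshold;
# synaptic plasticity changes the strength of individual connections.

class SimpleNeuron:
    def __init__(self, weights, threshold):
        self.weights = list(weights)
        self.threshold = threshold

    def fires(self, inputs):
        return sum(x * w for x, w in zip(inputs, self.weights)) >= self.threshold

    def neural_plasticity(self, delta):
        # Raising the threshold makes the cell less receptive; lowering it, more.
        self.threshold += delta

    def synaptic_plasticity(self, index, delta):
        # Strengthening one connection gives preference to signals from that neuron.
        self.weights[index] += delta

n = SimpleNeuron(weights=[0.4, 0.4], threshold=0.5)
print(n.fires([1, 1]))           # True: 0.8 >= 0.5
n.neural_plasticity(+0.5)        # threshold raised to 1.0
print(n.fires([1, 1]))           # False: 0.8 < 1.0
n.synaptic_plasticity(0, +0.3)   # first connection strengthened to 0.7
print(n.fires([1, 1]))           # True again: 1.1 >= 1.0
```

Both adjustments change whether the same input reaches the threshold – one by moving the bar, the other by amplifying the signal.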
Example of Plasticity at Work
Imagine going over to the home of a friend, someone you have known for years and really enjoy the company of. There is one problem, however – something in their house smells horrible. Your natural response is going to be to wrinkle your nose at the unpleasant smell. At first, it may be an automated response, learned from years of brain-training in dealing with bad odors. Now add to the recipe a desire to not offend your friend. This creates a counter-response which causes you to suppress the urge to wrinkle your nose. After visiting your friend several times and essentially “practicing,” the response to wrinkle the nose is no longer automatic. At first, it may still require some force of will to prevent it from happening, but eventually the response will become ingrained, and after a while, you may not even notice the smell unless you choose to.
In this example, the neurons of the network that controls your reaction to the unpleasant smell have been suppressed, so the signal passing through grows weaker even as the threshold of the "wrinkle your nose" neuron grows higher. The result is that a much more concentrated effort is needed to make the brain perform the undesirable action. If one day you visited your friend and the smell was much stronger, however, the increased input might break the threshold once again.
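The nose-wrinkling example can be sketched numerically: each suppressed reaction raises the firing threshold a little, until the usual smell no longer triggers the response – while a much stronger smell still can. All names and values below are invented for illustration.

```python
# Toy simulation of the visits: "practicing" suppression raises the
# threshold of the wrinkle-your-nose response a little each time it fires.

def wrinkles_nose(smell_strength, threshold):
    return smell_strength >= threshold

threshold = 1.0
usual_smell = 1.5

for visit in range(1, 6):
    reacted = wrinkles_nose(usual_smell, threshold)
    if reacted:
        threshold += 0.3   # each suppressed reaction raises the threshold slightly
    print(f"Visit {visit}: reacted={reacted}, threshold={threshold:.1f}")

# By the later visits the usual smell falls below the threshold...
print(wrinkles_nose(usual_smell, threshold))   # False
# ...but a much stronger smell can still break through:
print(wrinkles_nose(3.0, threshold))           # True
```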
Plasticity in Children
There is increasing evidence to suggest that plasticity in children functions differently than it does in adults. This is the classic argument that children learn more quickly than adults, especially in areas such as language acquisition. What appears to happen in children is a tendency of the brain to adjust itself in a primarily suppressive fashion. Thus, a child acts before thinking and only stops acting once the thresholds of undesirable actions have been raised enough to stop the impulsive behavior. With adults, the opposite appears to be true: the older one gets, the more learning is relegated to taking deliberate action and reinforcing the neural network in a positive manner. Thus, adults must use their will to initiate action instead of suppressing it.
While the research on this theory is still being done, the ramifications for architectural design of artificial neural networks and machine intelligence in general are significant. If a proper AI could be developed, the key to manifesting its human-level intelligence could be in letting it take many actions and discarding the ones that produce failures while reinforcing the ones that produce success. This is already being seen in the way ANNs are trained to adjust to the problems that are given to them.
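The trial-and-error scheme described here can be sketched as a simple weight-reinforcement loop: actions that succeed are reinforced, actions that fail are weakened, and the successful action comes to dominate. The toy "environment" and the specific numbers below are invented for illustration.

```python
# Toy reinforcement loop: pick actions in proportion to their weights,
# strengthen whatever succeeds, weaken whatever fails.
import random

random.seed(0)
actions = ["a", "b", "c"]
weights = {a: 1.0 for a in actions}

def succeeds(action):
    return action == "b"   # in this invented world, only "b" ever works

for _ in range(200):
    # Choose an action with probability proportional to its current weight.
    choice = random.choices(actions, weights=[weights[a] for a in actions])[0]
    if succeeds(choice):
        weights[choice] *= 1.1    # reinforce success
    else:
        weights[choice] *= 0.95   # discard (weaken) failure

best = max(weights, key=weights.get)
print(best)   # "b" should end up with by far the largest weight
```

No part of the loop is told in advance which action is correct; preference for "b" emerges purely from reinforcing outcomes, which is the essence of the training scheme the paragraph describes.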
What Does this Mean for AI?
By studying the way human brains work, programmers seek to replicate the brain's functions by building neural networks that solve problems on their own and become increasingly capable. The belief is that neural networks' ability to adapt and adjust to the situations surrounding the problems given to them may eventually manifest a creative ability. The results are limited by the size of the networks, which cannot currently be made large enough to mimic an entire human brain; thus, they can only be adapted to small problems.
Even these small problems, however, teach lessons in the way that neural networks evolve. They develop their own form of plasticity, though extremely case-specific, and become increasingly adept at solving the particular problem to which they have been assigned. By studying the way groups of biological neurons relate new information to old and apply similar information across multiple networks, scientists may come to understand the essence of creative and chaotic thought. This, in turn, can lead to an understanding of how emotion and socialization work through the reinforcement of specific neural networks that may be completely unrelated but somehow add up to these complex and intrinsically "human" behaviors.
Some Applications of Biological Neural Network Research
The combination of neural network research and other technologies leads to some interesting possibilities. The following videos show what scientists today are producing and what they have learned:
- Tan Le demonstrates the ability to map and measure biological neural activity and then use it as an interface to manipulate either an artificial or real-world environment with a portable headset that reads brainwaves.
- Charles Limb shows how fMRI can be used to measure the unique patterns of neural activity that take place in the brain during creative moments.
- Henry Markram shows how the continued mapping of the human brain can lead to amazingly complex potential for artificial neural networks and an understanding of the brain in general.
- Jeff Hawkins talks about his views on how a deeper understanding of brain science will change computing.