The McCulloch-Pitts Neuron
Written by Harry Fairhead   

 

Divide By Two

If you know some digital electronics you might well be tempted to try to expand this circuit from a latch into a divide-by-two circuit – something like a D-type flip-flop. Most people who try come up with a two-cell design like the one shown in the diagram:

A wrong divide-by-two network

 

You can see the basic idea.

The threshold-2 cell acts as half of a flip-flop because of its inhibitory self-feedback: if its output is a one it changes back to a zero no matter what the input is. The threshold-1 cell is inhibited in the same way, and every two pulses the whole circuit switches its output – but there is a problem.

If a pulse occurs while the cells are resetting then it is missed completely. In other words, this circuit divides by two but occasionally misses pulses.

The correct solution, which doesn’t drop any pulses, needs an extra cell.

 

A correct divide-by-two network

 

The fact that it takes three cells to divide by two is something that surprises many a designer used to standard digital components.
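If you want to experiment without drawing diagrams, cells like these are easy to simulate. The following is a minimal sketch in Python, assuming the usual McCulloch-Pitts rule: a cell fires if no inhibitory input is active and the number of active excitatory inputs reaches its threshold, with the output appearing one time step after the inputs. The function name and the connection pattern are illustrative choices, not copies of the diagrams above.

def mp_cell(excitatory, inhibitory, threshold):
    # Classic McCulloch-Pitts rule: any active inhibitory input keeps the
    # cell off; otherwise it fires if enough excitatory inputs are active.
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Illustrative example: a threshold-2 cell driven by a pulse on two
# excitatory lines, with its own output fed back on an inhibitory line.
state = 0
for pulse in [1, 1, 1, 1, 1, 1]:
    state = mp_cell([pulse, pulse], [state], threshold=2)
    print(state)
# Prints 1, 0, 1, 0, ... - the inhibitory self-feedback resets the cell
# on the step after it fires, which is the reset behaviour described above.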

What can the brain compute?

You can see that it would be possible to continue in this way, building more and more complicated neural circuits out of cells. Shift registers are easy, and so are half and full adders – give them a try!
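As a taste of what is involved, here is a sketch of a half adder built from the same sort of cells, again in Python. The carry is a single threshold-2 cell; the sum is exclusive-or, which a single cell cannot compute, so it uses a second cell inhibited by the carry. The one-step propagation delay between the two cells is ignored here to keep the example short, and the function names are just choices made for the sketch.

def mp_cell(excitatory, inhibitory, threshold):
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

def half_adder(a, b):
    carry = mp_cell([a, b], [], threshold=2)        # fires only when both inputs fire
    total = mp_cell([a, b], [carry], threshold=1)   # fires when exactly one input fires
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# 0 0 (0, 0)   0 1 (1, 0)   1 0 (1, 0)   1 1 (0, 1)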

But at this point you might well be wondering why we are bothering at all.

The answer is that back in the early days of AI the McCulloch-Pitts neuron, and its associated mathematics, gave us clear proof that you could do computations with elements that looked like biological neurons.

To be more precise, it is relatively easy to show how to construct a network that will recognise or “accept” a regular expression. A regular expression is something that can be built up using simple rules. In terms of production rules, any regular expression can be described by a grammar whose rules are of the type:

<non-terminal1> ->  symbol <non-terminal2>

or

<non-terminal1> -> symbol

That is, rules are only “triggered” by the non-terminal at the right-hand end of the sequence, and new symbols are only added to its left.

For example the rules:

1. <R1> -> A <R2>
2. <R2> -> B <R1>
3. <> -> <R1>

are regular. Production rules are used by picking a rule with a left-hand side that matches part of the sequence and then using the right-hand side to rewrite that part of the sequence. (The symbol <> means the null sequence, i.e. the sequence with no symbols.)

Once you see how this works things seem very simple.

We start off from the null sequence <>, which matches the left-hand side of Rule 3. Rule 3 therefore starts things off by allowing us to convert <> into <R1>, which Rule 1 then matches.

Rule 1 lets us change <R1> into A<R2>, and then Rule 2 lets us change this into AB<R1>, and so on.

You can see that all sequences of the type ABABAB and so on are regular according to this grammar.
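If you want to see the rules in action, here is a small sketch in Python that applies them mechanically, always rewriting the non-terminal at the right-hand end. The function name and the string form of the non-terminals are just choices made for the example.

def generate(steps):
    sequence = "<R1>"                            # Rule 3: <> -> <R1> starts things off
    for _ in range(steps):
        if sequence.endswith("<R1>"):
            sequence = sequence[:-4] + "A<R2>"   # Rule 1: <R1> -> A <R2>
        else:
            sequence = sequence[:-4] + "B<R1>"   # Rule 2: <R2> -> B <R1>
    return sequence

print(generate(4))    # prints "ABAB<R1>" - ABAB with the pending non-terminal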

Regular sequences are important because they are the most basic pattern that a simple computer – a finite state machine – can recognise.

What is important about McCulloch-Pitts networks is that it is easy to show that you can build a network to generate or recognise any regular sequence, and so such networks are equal in power to a finite state machine.
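To make the finite state machine half of this claim concrete, here is a sketch of a two-state machine that accepts the AB, ABAB, ... sequences produced by the grammar above. The state names simply mirror the non-terminals; this shows the finite state machine side of the equivalence, not the particular network construction McCulloch and Pitts gave.

def accepts(sequence):
    state = "R1"                          # corresponds to <R1>
    for symbol in sequence:
        if state == "R1" and symbol == "A":
            state = "R2"
        elif state == "R2" and symbol == "B":
            state = "R1"
        else:
            return False                  # no valid transition - reject
    return state == "R1"                  # accept only complete AB pairs

print(accepts("ABAB"))   # True
print(accepts("ABA"))    # False - stops part way through a pair
print(accepts("BAB"))    # False - starts with the wrong symbol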

Why is this important?

Well, if you agree that McCulloch-Pitts neurons capture the essence of the way biological neurons work, then you also have to conclude that biological networks are just finite state machines and, as such, can only recognise or generate regular sequences.

In their original work McCulloch and Pitts extended this observation into deducing a great deal about human brain function. Most of this seems a bit far-fetched from today’s standpoint but the basic conclusion that the brain is probably nothing more than a simple computer – i.e. a finite state machine – still seems reasonable.

If you know a little about the theory of computation you might well not be happy about this “bottom line”, because a finite state machine isn’t even as powerful as a Turing machine. That is, there are lots of things that a Turing machine can compute that, in theory, we, as finite state machines, can’t. In fact there are three or more increasingly powerful classes of grammar, and hence types of sequence, that finite state machines, and hence presumably we, cannot recognise.

This sort of argument is often used to demonstrate that there has to be more to a human brain than mere logic – that it has a non-physical “mind” component, or that some strange quantum phenomenon is required to explain how we think.

All nonsense of course!

You shouldn’t get too worried about these conclusions because when you look at them in more detail some interesting facts emerge. For example, all finite sequences are regular and so we are really only worrying about philosophical difficulties that arise because we are willing to allow infinite sequences of symbols.

While this seems reasonable when the infinite sequence is just ABAB… it is less reasonable when there is no finite repetitive sequence which generates the chain. If you want to be philosophical about such things perhaps it would be better to distinguish between sequences that have no fixed length limit – i.e. unbounded but finite sequences - and truly infinite sequences.

Surprisingly, even in this case things work out in more or less the same way with finite state machines, and hence human brains, lagging behind other types of computer. The reason for this is simply that as soon as you consider a sequence longer than the number of elements in the brain it might as well be infinite!

As long as we restrict our attention to finite sequences with some upper limit on length, and assume that the number of computing elements available is much greater than this, then all computers are equal and the human brain is as good as anything!

McCulloch-Pitts neural networks are not well known or widely studied these days because they grew into, or were superseded by, another sort of neural net – one that can be trained to generate any logic function, or indeed any function you care to name.

 


 
