Programmer's Guide To Theory - Gödel And All That
Written by Mike James   
Monday, 16 September 2024

Given infinite computing power, surely there cannot be any problem or puzzle that is incapable of solution? The famous, or infamous, incompleteness theorem of Kurt Gödel says otherwise, but what does it actually mean?

A Programmer's Guide To Theory

Now available as a paperback and ebook from Amazon.


Contents

  1. What Is Computer Science?
    Part I What Is Computable?
  2. What Is Computation?
  3. The Halting Problem
  4. Finite State Machines
    Extract 1: Finite State Machines
  5. Practical Grammar
  6. Numbers, Infinity and Computation
    Extract 1: Numbers 
    Extract 2: Aleph Zero The First Transfinite
    Extract 3: In Search Of Aleph-One
    Extract 4: Transcendental Numbers
  7. Kolmogorov Complexity and Randomness
    Extract 1: Kolmogorov Complexity
  8. The Algorithm of Choice
  9. Gödel’s Incompleteness Theorem ***NEW!
  10. Lambda Calculus
    Part II Bits, Codes and Logic
  11. Information Theory 
  12. Splitting the Bit 
  13. Error Correction 
  14. Boolean Logic
    Part III Computational Complexity
  15. How Hard Can It Be?
    Extract 1: Where Do The Big Os Come From
  16. Recursion
    Extract 1: What Is Recursion
    Extract 2: Why Recursion
  17. NP Versus P Algorithms
    Extract 1: NP & Co-NP
    Extract 2: NP Complete


We make the assumption that computers, logic and scientific thought can go anywhere we care to take them - there are no limits to what we can work out.

So, if I were to tell you that I had a mathematical theorem that so far didn't have a proof, you'd just assume that the mathematicians hadn't really been working hard enough on the problem.

For example, the proof of Fermat's last theorem dragged on for centuries and then suddenly it was all solved by some new maths, and some older maths cleverly applied.

Fermat's last theorem is a negative sort of theorem. It states that there are no positive integers a, b and c that satisfy:

                                              aⁿ + bⁿ = cⁿ

for n > 2 (there are lots of examples of a, b and c when n = 2 or 1, such as 3² + 4² = 5²). It almost cries out for you to disprove it by finding a counter example, i.e. an a, b and c that do satisfy the relationship for some n > 2.

But the proof is much harder than you could ever possibly imagine.

Suppose for a moment that the halting problem was computable. Then we could solve Fermat's last theorem easily. Write a program that searches through a, b, c and n in a "zig-zag" pattern, so that if we wait long enough it will reach any a, b, c and n we care to specify, and test each case to see if aⁿ + bⁿ = cⁿ. If it finds an a, b, c and n that satisfy this relation we have a counter example. Of course, in any practical case we would potentially search forever, but if the halting problem is decidable all we have to do is submit the equivalent Turing machine tape to the halting machine. If it pronounces that the machine halts, we have a counter example and Fermat's last theorem is disproved. If it says that the machine never halts, then Fermat's last theorem is proved.

Notice that it is the second case that is really important, because it relieves us of the need for a search that lasts forever. But wait a minute – the theorem has in fact been proved by conventional mathematics, and so there is no need for an infinite search. Logical proof really is this powerful!
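To make the zig-zag search concrete, here is a minimal sketch in Python (an illustration only, not something from the book or from any real proof attempt) of the kind of program you would hand to the halting machine. It enumerates every tuple (a, b, c, n) with n > 2 in order of increasing total size and halts only if it hits a counter example:

from itertools import count

def fermat_counterexample_search():
    """Zig-zag search over all (a, b, c, n) with a, b, c >= 1 and n > 2.
    Halts if and only if a counter example to Fermat's last theorem exists."""
    for total in count(6):                   # total = a + b + c + n, grows forever
        for n in range(3, total - 2):        # leave at least 1 each for a, b, c
            for a in range(1, total - n - 1):
                for b in range(1, total - n - a):
                    c = total - n - a - b    # c >= 1 by construction
                    if a**n + b**n == c**n:
                        return a, b, c, n    # halt: counter example found

# Handing this program to a halting oracle would settle the theorem:
# "halts" means a counter example exists, "never halts" means the theorem is true.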

Similarly, the four-color problem dragged on for years, but it was solved in a completely new way, with a computer generating a large part of the proof. The four-color theorem states that you need at most four colors to color in a map so that no two adjacent regions have the same color (where adjacent means sharing a boundary, not just a corner). Again, the four-color theorem sets your mind working to see if you can find a map where four colors aren't enough to ensure that no two adjacent regions share the same color. In the case of the four-color theorem, the computer's accurate logic was used to go through a long list of tests that would have been impossible for a human to complete in a reasonable time.
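As a small illustration (it has nothing to do with the actual computer-assisted proof), here is a sketch that treats a map as a graph, with regions as nodes and shared borders as edges, and brute-forces a coloring using at most four colors. The map is invented purely for the example:

from itertools import product

regions = ["A", "B", "C", "D", "E"]
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
COLORS = range(4)                        # the theorem says four always suffice

def four_coloring(regions, borders):
    """Brute-force search for a coloring in which no two bordering regions match."""
    for assignment in product(COLORS, repeat=len(regions)):
        color = dict(zip(regions, assignment))
        if all(color[u] != color[v] for u, v in borders):
            return color
    return None                          # never reached for a real (planar) map

print(four_coloring(regions, borders))   # {'A': 0, 'B': 1, 'C': 2, 'D': 0, 'E': 1}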

Humans plus computers seem to be capable of working things out in a way that is precise and efficient, so surely there cannot be anything that is beyond reach? The fact is that it can be proved that there are truths that are indeed beyond the reach of logic. This is essentially what Kurt Gödel proved in 1931. The idea isn't a difficult one, but it is often used to justify some very strange conclusions, most of which aren't justified at all! It is important, and fun, to find out a little more about what it really means.

The Mechanical Maths Machine

First we need to look a little more closely at how we really expect the world to work. Back at the turn of the 20th century mathematics had moved on from being a "creative" subject to trying to be precise and logical. The idea was that you stated your axioms – the things you regarded as true – and then used rules of inference to derive new truths. This was the big project – the axiomatization of the whole of mathematics. If it could be achieved then we would be in a position where we could say what was true with certainty – as long as the initial axioms were OK.

 

 


 

Notice that we are talking about some very fundamental things that you might think could be taken for granted. For example, Alfred North Whitehead and Bertrand Russell produced a huge book, Principia Mathematica, published in three volumes in 1910, 1912 and 1913, that took several hundred pages to get to a proof that 1+1=2! The point is that this was very detailed work building from very simple initial axioms which were so basic that no one could dispute them. The fact that 1+1 was 2 wasn't just an observation of physics, but a logical necessity given the initial axioms.
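To give a flavor of what "a logical necessity given the initial axioms" means, here is a small sketch, in Python rather than in Principia's notation and hugely simplified, that builds numbers from zero and a successor operation and defines addition by two rules. Given those definitions, 1+1=2 follows simply by unwinding them:

# Peano-style numerals: a number is either Zero or the successor S(n) of a number.
class Zero:
    def __repr__(self): return "0"
    def __eq__(self, other): return isinstance(other, Zero)

class S:
    def __init__(self, n): self.n = n
    def __repr__(self): return f"S({self.n})"
    def __eq__(self, other): return isinstance(other, S) and self.n == other.n

def add(a, b):
    """Addition defined by two axioms: a + 0 = a and a + S(b) = S(a + b)."""
    if isinstance(b, Zero):
        return a
    return S(add(a, b.n))

one = S(Zero())               # 1 is the successor of 0
two = S(S(Zero()))            # 2 is the successor of 1
print(add(one, one) == two)   # True: 1 + 1 = 2 by definition, not by observation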

The axiomatization of mathematics also potentially reduced math to a mechanistic shuffling of symbols that a computer could happily be left to do. You didn't even need to give the theorems and proofs any interpretation in the real world. You could regard it as a game with symbols that you move around according to the rules. The new arrangements are the theorems and the ways that you get to the new arrangements from the existing ones are the proofs.
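To see what mathematics as a symbol game looks like, here is a sketch of a tiny formal system, the well-known MIU puzzle from Douglas Hofstadter's Gödel, Escher, Bach, chosen only because it is small: a single axiom string, four rewrite rules, and a program that mechanically churns out theorems with no interpretation attached to any of the symbols:

from collections import deque

AXIOMS = {"MI"}                           # the strings taken as given

def apply_rules(s):
    """Yield every string obtainable from s by one application of a rule."""
    if s.endswith("I"):                   # Rule 1: xI -> xIU
        yield s + "U"
    if s.startswith("M"):                 # Rule 2: Mx -> Mxx
        yield "M" + s[1:] * 2
    for i in range(len(s) - 2):           # Rule 3: replace any III with U
        if s[i:i+3] == "III":
            yield s[:i] + "U" + s[i+3:]
    for i in range(len(s) - 1):           # Rule 4: delete any UU
        if s[i:i+2] == "UU":
            yield s[:i] + s[i+2:]

def theorems(limit=10):
    """Mechanically enumerate theorems: strings reachable from the axioms."""
    seen, queue, found = set(AXIOMS), deque(AXIOMS), []
    while queue and len(found) < limit:
        s = queue.popleft()
        found.append(s)
        for t in apply_rules(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return found

print(theorems())   # ['MI', 'MIU', 'MII', 'MIUIU', 'MIIU', 'MIIII', ...]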

Many mathematicians rejected this soulless brand of mathematics, but they still adopted the basic way of working and regarded a proof as a sequence of legal steps that takes us from a known truth to a new truth. The freedom and creativity came before the proof: you spend time thinking and experimenting to find a way to put together a logical argument leading from the axioms to a new theorem.



