Programmer's Guide To Theory - Random?
Written by Mike James   
Wednesday, 20 August 2025

Randomness is more subtle than you might think and it's not easy to define or detect. This is what this extract from Programmer's Guide to Theory is all about.

A Programmer's Guide To Theory

Now available as a paperback and ebook from Amazon.


Contents

    1. What Is Computer Science?
      Part I What Is Computable?
    2. What Is Computation?
    3. The Halting Problem
    4. Finite State Machines
      Extract 1: Finite State Machines
      Extract 2: Turing Thinking
    5. Practical Grammar
    6. Numbers, Infinity and Computation
      Extract 1: Numbers 
      Extract 2: Aleph Zero The First Transfinite
      Extract 3: In Search Of Aleph-One
      Extract 4: Transcendental Numbers
    7. Kolmogorov Complexity and Randomness
      Extract 1: Kolmogorov Complexity
      Extract 2: Random?  ***NEW!
    8. The Algorithm of Choice
    9. Gödel’s Incompleteness Theorem 
    10. Lambda Calculus
      Part II Bits, Codes and Logic
    11. Information Theory 
    12. Splitting the Bit 
    13. Error Correction 
    14. Boolean Logic 
      Part III Computational Complexity
    15. How Hard Can It Be?
      Extract 1: Where Do The Big Os Come From
    16. Recursion
      Extract 1: What Is Recursion
      Extract 2: Why Recursion
    17. NP Versus P Algorithms
      Extract 1: NP & Co-NP
      Extract 2: NP Complete


At the end of the previous chapter we met the idea that if you want to generate an infinite anything, you have to use a finite program, and this seems to imply that whatever is being computed has to have some regularity. This idea is difficult to make precise, but that doesn't mean it isn't important. It might be the most important idea in the whole of computer science, because it explains the relationship between the finite and the infinite and it makes clear what randomness is all about.
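To make this concrete, consider how a pseudo random number generator works: a finite program of a few lines that, left running, produces an endless stream of numbers. As a minimal sketch, here is a linear congruential generator in Python, using the widely quoted Numerical Recipes constants, chosen here purely for illustration:

import itertools

# A finite program that yields an apparently endless, apparently
# random stream of numbers.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:                      # an "infinite" sequence...
        state = (a * state + c) % m  # ...from a one-line rule
        yield state / m              # scaled to [0, 1)

print([round(x, 3) for x in itertools.islice(lcg(seed=42), 5)])

The stream looks random, but it is completely determined by the seed and, because the state is finite, it must eventually repeat. That repetition is exactly the sort of regularity a finite program imposes on the infinite sequence it generates.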

In this chapter, but not in this extract:

  • Algorithmic Complexity
  • Kolmogorov Complexity Is Not Computable
  • Compressibility

Random and Pseudo Random

The model of computation that we have been examining up to this point is that of the mechanistic universe. Our computations are just what the universe does. The planets orbit the sun according to the law of gravity and we can take this law and create a computer program that simulates this phenomenon. In principle, we can perform the simulation to any accuracy that we care to select. The planets orbiting the sun and the computational model will be arbitrarily close to one another, and closer as the accuracy increases. It all seems to be a matter of decimal places and how many we want our computations to work with. This gives the universe a mechanistic feel. The planets are where they are today because of where they were yesterday, and where they were 1000 years earlier, and so on back to the start of the universe. If you could give the position of every atom at time zero then you could run the model, or the real universe, and everything would be where it is today.
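As a sketch of what such a mechanistic simulation looks like, here is a toy two-body orbit in Python. The units, the step size and the simple Euler-style update are chosen for brevity, not accuracy:

# A planet orbiting a sun under Newtonian gravity, with units chosen
# so that G*M = 1. A simple Euler-style step keeps the code short.
def simulate(x, y, vx, vy, dt=0.001, steps=10_000):
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5          # |r| cubed
        ax, ay = -x / r3, -y / r3            # acceleration toward the sun
        vx, vy = vx + ax * dt, vy + ay * dt  # update velocity...
        x, y = x + vx * dt, y + vy * dt      # ...then position
    return x, y

# The same initial conditions always give the same final state -
# the model is completely deterministic.
print(simulate(1.0, 0.0, 0.0, 1.0))
print(simulate(1.0, 0.0, 0.0, 1.0))  # identical output

Shrinking dt moves the computed orbit arbitrarily close to the true one, which is the sense in which the simulation can be performed to any accuracy we care to select.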

This is the model of the universe that dominated science until quite recently, and it was accepted despite containing some disturbing ideas. If the universe really is a deterministic machine there can be no "free will". You cannot choose to do anything; it is all a matter of the computation determining what happens next.

The idea of absolute determinism reduces the universe to nothing but a playing out of a set of events that were set to happen by the initial conditions and the laws that govern their development in time. In this universe there is no room for randomness.

Randomness and Ignorance

The standard idea of probability and randomness is linked to the idea that there is no way we can predict what is going to happen, even in principle. We have already mentioned the archetypal example of supposed randomness – the toss of a coin. We summarize the situation by saying that there is a 50% chance of it landing on either of its sides, as if its dynamics were controlled by nothing but randomness. Of course, this is not the case – the coin obeys precise laws that in principle allow you to predict exactly which side it will land on. You can take the initial conditions of the coin and how it is about to be thrown into the air and use the equations of Newtonian mechanics to predict exactly how it will land on the table.

In principle, this can be done with certainty and there is no need to invoke randomness or probability to explain the behavior of the coin. Of course, in practice we cannot know all of the initial conditions and so a probability model is used to describe certain features of the exact dynamics of the system. That is, the coin tends to come down equally on both sides unless there is something in its makeup that forces it to come down more on one side than the other for a range of initial conditions.
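To see how little randomness is needed, here is a crude deterministic model of a tossed coin, assuming for simplicity that it leaves the hand heads-up, spins at a constant rate and is caught after a fixed flight time. The numbers are invented for illustration:

import math

# The outcome is a pure function of the initial conditions -
# there is no randomness anywhere in this model.
def toss(spin_rate, flight_time):
    turns = spin_rate * flight_time      # total revolutions in flight
    angle = (turns % 1.0) * 2 * math.pi  # final orientation of the coin
    return "heads" if math.cos(angle) > 0 else "tails"

print(toss(spin_rate=38.49, flight_time=0.5))  # heads - fully predictable
print(toss(spin_rate=38.51, flight_time=0.5))  # tails - a tiny change in
                                               # initial conditions flips it

The second call shows why the probability model earns its keep in practice: a change in spin rate far too small to measure is enough to change the outcome.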

You need to think about this for a little while, but it soon becomes clear that a probability model is an excuse for what we don't know – but a very useful one nonetheless. What we do is make up for our lack of deterministic certainty by summarizing the dynamics by the observed frequency of each particular result. For example, in the case of the coin we could throw it 100 times and count the number of heads and the number of tails. For a fair coin we would expect around 50% of each outcome. This is usually summarized by saying that the probability of getting heads is 0.5. Of course, after 100 throws it is unlikely that we would observe exactly 50 heads, and this is allowed for in the theory. What we expect, however, is that as the number of throws increases the proportion gets closer and closer to 50% and the deviations away from 50% become smaller and smaller. It takes quite a lot of math to make these intuitions exact (look up the Law of Large Numbers), but this is the general idea.
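This convergence is easy to watch happening. A minimal sketch, using Python's random module as a stand-in for a fair coin (note the irony that the "coin" here is itself only pseudo random):

import random

random.seed(1)  # make the run reproducible

# Proportion of heads in n tosses of a simulated fair coin.
def heads_proportion(n):
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, heads_proportion(n))
# The proportion settles toward 0.5 as n grows; the Law of Large
# Numbers makes this precise - the deviations shrink roughly
# like 1/sqrt(n).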

You can, if you want to, stop the theory at this point and just live with the idea that some systems are too complex to predict and the relative frequency of each outcome is all you can really work with. However, you can also move beyond observed frequencies by making arguments about symmetry. You can argue that, since one side of a coin is like the other and the throwing of the coin shows no preference for one side or the other, by symmetry you would expect 50% of each outcome. Another way of saying this is that the physics is indifferent to the coin's sides and the side of the coin cannot affect the dynamics. Compare this to a coin with a small something stuck to one side. Now there isn't the symmetry we used before and the dynamics does take account of the side of the coin. Hence, in this case, we cannot conclude that the relative frequency of each side is 50%.
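The simulation makes the same point: build a bias in and the observed frequency settles on the bias, not on 50%. The bias value below is entirely made up for illustration:

import random

random.seed(2)

# A coin with something stuck to one side: the symmetry argument no
# longer applies and nothing forces the frequency toward 50%.
BIAS = 0.6  # hypothetical probability of heads for the doctored coin

def biased_proportion(n, p=BIAS):
    return sum(random.random() < p for _ in range(n)) / n

print(biased_proportion(1_000_000))  # settles near 0.6, not 0.5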


 


