As you continue to program, you also start to see small chunks of algorithms that are fundamental in the sense that, before you understood them, there were things you couldn't do; afterwards it is all so easy.

I can give you an example, but if you already program you have almost certainly encountered these ideas before and won't be very impressed. What you have to keep in mind is that, for a complete beginner, none of this is obvious. If you don't believe me, try teaching a complete beginner and take note of what they have difficulty understanding.

For example, after you have learned about loops as a way of repeating code, it is generally only a short time before you add the "sum loop" to your pack of tools:

Do some condition
   count = count + 1
End Do

Beginners may well have encountered the idea of a loop that repeats instructions, and they may have mastered the idea of a variable and how it stores a value. But when you put the two ideas together in this particular way, what you get is something special - it's almost emergent behaviour.

This is a loop that counts how many times it repeats. Don't belittle it - this is a real breakthrough in thinking about algorithms.
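To make the pattern concrete, here is a minimal sketch in Python (the language and the data are my choice, purely for illustration):

```python
# A loop that counts how many times it repeats. Here we count how
# many numbers in a made-up list are even.
data = [3, 8, 5, 12, 7, 4]

count = 0                  # the variable that does the remembering
for value in data:         # the loop that does the repeating
    if value % 2 == 0:     # "some condition"
        count = count + 1  # put together, they count
print(count)  # -> 3
```

Neither the loop nor the variable counts anything on its own; it is the combination that does the work.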

The sum loop can be elaborated to:

Do some condition
   count = count + value
End Do

which is an accumulating sum or a reduction process.

Again, don't think that this is trivial; it is a loop that keeps a running sum and you only really understand the term "running sum" because you can program.
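The running-sum version, again as an illustrative Python sketch with made-up data:

```python
# An accumulating sum, or reduction: the accumulator gathers up a
# value on each trip round the loop.
data = [3, 8, 5, 12, 7, 4]

total = 0                    # the accumulator starts empty
for value in data:
    total = total + value    # keep a running sum
print(total)  # -> 39
```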

If you want to put yourself in a slightly alien place, consider the product equivalents of both loops. Here you find something a little more unusual that might just manage to remind you how clever you were to have absorbed these ideas.
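Sketched in Python with made-up data, the product versions look like this. The slightly alien part is that the accumulator must start at 1, the identity for multiplication, rather than 0:

```python
# The product analogue of the accumulating sum: a running product.
data = [2, 3, 5]

product = 1                      # 1, not 0 - the multiplicative identity
for value in data:
    product = product * value    # keep a running product
print(product)  # -> 30

# The product analogue of the count loop multiplies by a fixed value
# each time round - which computes a power rather than a count:
power = 1
for _ in range(4):
    power = power * 2
print(power)  # -> 16, i.e. 2 to the 4th
```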

If you are still not impressed at how clever you and all programmers are, consider the simple fact that the sum loop is equivalent to the mathematician's Sigma notation:

Σ_{condition}expression

and the product is the equivalent of the Pi notation:

Π_{condition}expression
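A small Python sketch of this equivalence: the built-in sum() and the standard library's math.prod() are exactly these notations packaged up, and the loops reproduce them:

```python
# The sum loop is Sigma; the product loop is Pi.
import math

data = [1, 2, 3, 4]

total = 0
for x in data:
    total = total + x        # the sum loop...
print(total == sum(data))    # -> True: ...is Sigma over data

prod = 1
for x in data:
    prod = prod * x              # the product loop...
print(prod == math.prod(data))   # -> True: ...is Pi over data
```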

It is worth pointing out that, even though there are similarities, math is not programming and vice versa. Programming has an element of process and time that most math does its best to ignore. Math deals in static truths; in programming nothing is static.

So how do you use all this to teach programming?

Math, not arithmetic

Well, we have to avoid the standard mistake made in the world of math education. Most math teachers and mathematicians confuse math with arithmetic, but they are not the same thing. You can do math even if you can't actually do arithmetic. You have to know how arithmetic works, but it really doesn't matter if you can't add up accurately. We probably turn lots of children off math simply because we insist on teaching it after arithmetic and by tying success in math to an ability to do arithmetic and memorise tables. This is a terrible shame.

When it comes to programming, it is vital that we don't make the same sort of error.

We do need to use a programming language to teach programming, but what we are teaching is not the language but the concepts that give rise to the language.

We need to teach loops and conditionals, variables, counting, flags, representations, states and eventually objects, methods, properties and so on. Above all we need to realize that teaching the language syntax and displaying example programs is a means to an end. It is not about being able to reproduce the syntax of a statement from memory.

Leave the tough stuff for later.

After the algorithmic way of thinking has started to settle in, you can go for the interesting stuff. You can tackle parallel programming and the wonderful ways it can go wrong - race hazards, synchronization, deadlock, starvation and so on. You can leave recursion for when young programmers are old enough to see an x-rated topic - or you can introduce it as just another form of flow of control; it is up to the taste of the teacher and pupil. We don't have to agree on the importance of every concept.
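For the "just another flow of control" view, here is a hypothetical Python sketch: the same running sum as before, with the repetition done by a function calling itself instead of by an explicit loop:

```python
# Recursion as a form of flow of control: sum a list with no loop.
def total(values):
    if not values:                        # base case: empty list sums to 0
        return 0
    return values[0] + total(values[1:])  # first item plus sum of the rest

print(total([3, 8, 5, 12, 7, 4]))  # -> 39
```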

The discipline of debugging - science in a can

Then there is the wonderful art of debugging. It is science in miniature. You state your hypothesis, make a prediction and check that it is so. When you find some part of your program that isn't doing what you predict, then you know your theory - your understanding of the way the program works - is wrong, and you have your bug. What could be deeper or more philosophical than putting the scientific method to use over and over again?
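A minimal sketch of the idea in Python, using a deliberately buggy, made-up function:

```python
# Debugging as the scientific method in miniature.
# Hypothetical buggy function: it is meant to average a list.
def average(values):
    total = 0
    for v in values:
        total = total + v
    return total // len(values)   # bug: integer division truncates

# Hypothesis: average() behaves like ordinary arithmetic.
# Prediction: average([1, 2]) is 1.5. The check fails, so the theory
# of how the code works is wrong - and that points at the bug.
prediction = 1.5
observed = average([1, 2])
print(observed == prediction)  # -> False: go and look at the division
```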

Finally, it worries me that many of the attempts at starting "learn programming" websites, courses and other resources seem to be based on the premise that you simply teach a language. It also worries me that many of the people behind these efforts really only seem to know a single language. They might know it well but, as you might begin to understand by this point, this isn't enough. Many of them seem to be committing the sin of teaching arithmetic while claiming to be teaching math.

Don't let this stop you from trying your hand at teaching, however. If you want to do the job well, always try to understand why your pupil made a mistake. Dig deep and don't assume that it is just because they misunderstood the syntax - this is usually a signal that their internal model is slightly wrong.

Always teach concepts and not just code.
