Functional languages try to reduce - or is it enhance? - programming to mathematical functions.
That is, when you use a mathematical function its result never changes - the result of Sin(0.3) is always the same, and a function is essentially just a lookup table. Sometimes the lookup table is too big to be stored, and in this case the result is calculated by a procedure - but notice that, in theory, every mathematical function could be implemented as a lookup table.
That is, mathematical functions provide a way to do computation without any dynamic change. A function returns its result, and it is the same result for the same input. It is static and stateless.
This isn't so with the procedural programming meaning of functions or procedures.
Note: a function is just a procedure that returns a single value as its result. Any procedure can be cast as a function by the simple trick of having it return a single, perhaps complicated, structure containing all of its results rolled up into one object.
A programming function can produce side effects and state changes. For example:
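A minimal sketch in Python of such a function - the names are invented for illustration:

```python
# A counter with hidden state: each call returns a different value,
# so the result depends on more than the inputs - a side effect.
count = 0

def next_count():
    global count       # modifies state outside the function
    count = count + 1
    return count
```

Calling next_count() twice gives 1 and then 2 - the same call, two different results.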
In this case each time the function is called it returns a new value - it is counting - it is procedural - and this can't be allowed in a functional language.
In a functional language the sort of counting described above isn't allowed, and variables aren't really variable: once defined, a variable keeps its value for the duration of the program - it is "bound" to that value.
At this point non-functional programmers generally give up because this is nothing like the way they think. How can I have loops and conditions if variables never change?
There are two answers to this question. The first is to ask why you are counting at all.
If the counting is part of an algorithm that gets you to a solution then don't do it. Simply get the solution.
That is, if you implement Sin(0.3) with an iterative calculation then simply hide the iteration behind the function and return the result.
The second answer is that there are functional ways of doing the same job - mostly recursion and functional composition.
For example, if f(x) is a function which returns x+1 then counting can be implemented as:
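A sketch of the idea in Python, assuming the count starts at zero:

```python
def f(x):
    # a pure function: always returns x + 1, no state anywhere
    return x + 1

z = f(f(f(0)))  # "counts" 0, 1, 2, 3 by nesting calls
```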
and so on. Notice that in the nested set of functions only one variable, z, is bound to the result; no variables change their values, and yet there is still an "increment" as you work your way into the brackets.
In principle the function does something with the count, much as the body of a for loop would. This is the functional language's equivalent of the fixed-repeat for loop. There are also equivalents of for loops that perform a variable number of repeats, using either functional composition or recursion.
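The recursive version can be sketched in Python - repeat is an invented helper name:

```python
def repeat(n, f, x):
    # apply f to x, n times over, with recursion standing in for a loop
    if n == 0:
        return x
    return repeat(n - 1, f, f(x))
```

For example, repeat(5, lambda v: v + 1, 0) counts to 5 without any variable ever changing its value.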
Functional programming has a big advantage: it doesn't specify how functions should be computed, only what result they should return.
This means it can be easier to see what is going on, and easier to perform implementation-based optimisations such as parallel execution. Of course, under the covers there is a procedural implementation whirring away - isn't there always?
Logic programming is another approach to the declarative style and in many ways it is more true to the ideals than functional programming.
Logic programming is typified by a single language and dialects of the same - Prolog. The language itself was developed in 1972 but it was based on earlier advances in logic. Logic programming is a bit like functional programming in that it generally doesn't allow functions or variables to change their value but it has one extra ingredient - proof.
In a logic programming language you can write a statement such as connected(A, B) and the language will return true if A and B are connected and false otherwise.
This may sound trivial but it even works if A and B aren't directly connected but only connected by a chain of intermediates - perhaps thousands of intermediates.
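The idea behind this can be sketched in Python - the facts and names below are invented for illustration, and real Prolog builds the search in for you:

```python
# A tiny database of direct connections
links = {("a", "b"), ("b", "c"), ("c", "d")}

def connected(a, b, seen=frozenset()):
    # true if there is a direct link, or a chain through intermediates;
    # "seen" stops the search going round in circles
    if (a, b) in links:
        return True
    return any(x == a and y not in seen and connected(y, b, seen | {a})
               for (x, y) in links)
```

Here connected("a", "d") succeeds even though a and d are only joined through b and c.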
The language has built into its core a recursive search facility that will "prove" statements according to what is stored in its database. For example, if you write connected(A, x), where x is an unbound variable, the recursive search will attempt to find a value for x that makes the statement true.
More generally you can write a goal involving several unbound variables - x, y and z say - and the search will find a combination of values for x, y and z that makes the predicate (a function that returns true or false) true.
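One way to picture that search, sketched in Python: simply enumerate candidate bindings until the predicate is satisfied. Prolog's real search is far smarter than this brute force, but the shape is the same:

```python
from itertools import product

def solve(predicate, domain):
    # try every combination of bindings for x, y and z in turn
    for x, y, z in product(domain, repeat=3):
        if predicate(x, y, z):
            return (x, y, z)   # first set of values that works
    return None                # no binding satisfies the predicate
```

For example, solve(lambda x, y, z: x + y == z and x < y, range(5)) finds one triple satisfying both conditions.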
Prolog is a general purpose language and it has facilities for input/output, graphics and so on - but most programmers will admit that the methodology feels a little strained when you are using predicates simply for their side effects.
Some problems are very easy to solve using logic programming. What is encouraging is that many very difficult problems suddenly become easy; the real question is how far you can go.
Today it isn't clear if logic programming will lead to anything more or if it is best treated as a specialist approach - a sort of functional programming with logic thrown in.
Now we come to a sideline in the language paradigm story.
There is a lot of talk about dynamic languages at the moment.
This is partly because, until quite recently, the dominant languages - Java, C++ and C# - were static languages, so much of the enthusiasm is a reaction to the old guard. The static/dynamic distinction is a very difficult one to pin down precisely.
In the old days it could be summed up as the split into compiled and interpreted languages but today it is more about a split in the approach to how object oriented programming should be done.
The current meaning of "dynamic" when applied to languages usually refers to typing.
When you create an object-oriented language you have to decide if it is going to be strongly or weakly typed. Every object defines a type, and in a strongly typed language you specify exactly what type can be used in any given place.
If a function needs parameters of type Apple you can't call it using parameters that are Oranges. The alternative approach is to allow objects of any type to be used anywhere and let the language do the best job it can with whatever it is presented - this is weak typing.
Now in a strongly typed language you can choose to enforce the typing when the language is compiled or at run time. This is the main distinction between static and dynamic typing. In a statically typed language you can see that a variable is being assigned an Apple just by reading the code. In a dynamically typed language you can't tell what is being assigned to a variable until the assignment actually happens at run time.
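Python is an example of a dynamically typed language - a small sketch with invented names:

```python
def describe(x):
    # the type of x is only known when the call happens at run time
    return type(x).__name__

thing = 42            # thing is bound to an int here...
thing = "an apple"    # ...and now the same name is bound to a str
```

Reading describe(thing) in the source tells you nothing about its result; only running the program does.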
Clearly there is an interaction between strong and weak typing and static and dynamic. A weakly typed language really doesn't have much choice but to use what looks like dynamic typing.
Weak typing or dynamic typing can make programs easier to write, but the loss of the discipline of controlling type makes runtime errors more likely and arguably makes bugs harder to find.
There is also another, slightly different, use of the term "typing". Algebraic typing is a system of typing that leads to the notion that a program is a proof in the type system. Some functional programming languages make heavy use of algebraic typing, and they invent ways around its restrictions so as to be able to do things that a procedural language can, without seeming to deviate from the straitjacket of type. At the moment this approach has a very strong appeal to the brightest of programmers, but it is difficult to see that it will ever become the best way to program - despite the fact that its proponents can prove that it's better.
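A rough flavour of an algebraic (sum) type, sketched in Python with dataclasses - the names Just, Nothing and safe_div are invented for illustration; languages such as Haskell make this style central:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Nothing:
    pass

@dataclass
class Just:
    value: int

# the type says: a result is either Just a number or Nothing
Maybe = Union[Just, Nothing]

def safe_div(a: int, b: int) -> Maybe:
    # division can fail, and the type records both possibilities
    return Nothing() if b == 0 else Just(a // b)
```

A caller is forced to deal with the Nothing case rather than stumbling into a runtime division error.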
Dynamic languages may have something to offer the future, but there is a sense in which we are simply returning to the wild, primitive expression of programming that existed before we learned better. Perhaps every so many generations programmers need to experience programming in the raw.